Aiman Mazhar Qureshi
email: [email protected]
Ahmed Rachid
email: [email protected]
Review and Comparative Study of Decision Support Tools for the Mitigation of Urban Heat Stress
Keywords: Urban heat stress, Sensitivity, Modeling, Mitigation, Vulnerability, Decision-making Interventions, urban heat stress, urban heat island, mitigation, cooling effect decision support tools, multi-criteria decision-making, heat stress, urban heat island Artificial intelligence, thermal comfort, modeling, system dynamic approach and urban heat stress
Urban areas are the prevalent places of residence and are vulnerable to extreme weather conditions such as heat stress. Heat waves are recurring with increasing frequency and are known to pose a serious and major threat to human health all over the world. Urban heat islands and heat waves increase thermal risks in urban areas and the vulnerability of the urban population.
The increasing number of heat episodes in urban areas has become a significant concern due to their adverse effects on human health and economic activities. The objectives of this work are to identify the sensitivity of thermal comfort indices to their action variables, to model thermal stress using the most influential meteorological variables, to identify risk factors and highlight correlations between meteorological trends and influencing parameters, to propose solutions for mitigating heat stress, and to provide mathematical support for decision-making.
Several machine and deep learning techniques were used for the system dynamics modeling of thermal comfort. The best results were obtained from the Gated Recurrent Unit (GRU) model, which was used to develop a web simulation tool allowing inhabitants to evaluate their level of comfort according to the weather conditions.
A heat vulnerability index map has been developed to indicate the vulnerability of occupants in a medium-sized city, considering different aspects such as planning, green space, density, energy, air quality, water bodies and extreme heat events. The results highlighted that poor air quality and heat events are interrelated, which draws the attention of decision-makers to take additional measures in high-risk places. Field monitoring was carried out using sensors and a thermal camera to measure relevant variables and take action to minimize the effects of heat stress. Finally, multi-criteria decision-making methods were applied for the initial development of a decision support tool for the selection of urban heat resilience interventions, allowing flexible, dynamic, and predictive use for designers and users.
This dissertation was carried out in the Laboratoire des Technologies Innovantes (LTI) at the Université de Picardie Jules Verne, Amiens, under the supervision of Professor Ahmed Rachid. The work was conducted as part of the COOL-TOWNS (Spatial Adaptation for Heat Resilience in Small and Medium Sized Cities in the 2 Seas Region) project, which receives funding from the Interreg 2 Seas programme 2014-2020, co-funded by the European Regional Development Fund under subsidy contract N° 2S05-040. I am delighted and thankful for their confidence and faith in funding my thesis.
First, I would like to thank Almighty God for always blessing me with new chances every day of my Ph.D. career and for giving me the strength and patience to complete this doctoral degree. I am extremely grateful to my thesis supervisor, Prof. Ahmed Rachid, a pillar of support and guidance, who was always there for me, provided me with technical knowledge whenever I was stuck, and made this work a success. His skills and expertise have opened several doors of knowledge for this original dissertation. Thanks to his insight, I was able to achieve all the goals and objectives of my Ph.D. degree. Without him, I would never have accomplished any of this. Moreover, I am indebted to my parents for believing in me and always motivating me to achieve the best in life. Without you both, I would not have achieved this title you always prayed for.
Furthermore, I would like to extend my thanks to the jury members for coming all this way, agreeing to evaluate my dissertation and, most importantly, being part of my Ph.D. thesis committee. I am grateful for your valuable time and comments on the defence day.
The encouragement, support, and help of my LTI colleagues should not be left unmentioned, as they were there to motivate me on my bad days. Thank you, everyone, for helping me in academic and extracurricular matters and for providing the best working environment for me; I will never forget our chats during lunch breaks.
I have made such good friends here. I am also thankful to all my friends located all over the world for their immense support throughout the journey.
General Introduction
Heat stress is an uncomfortable feeling that arises when the body is unable to sustain a healthy temperature in response to hot environmental conditions during daily activities such as sleeping, travelling, and working. The growing intensity of heat stress in urban areas has become a significant concern due to its direct and adverse effects on human health and economic activities [1], [2].
It has been observed over recent decades that the rapid increase in the intensity of heatwaves in urban settings has followed global warming. These extreme heatwaves have a hazardous impact on the urban environment and the population, ultimately increasing the rates of morbidity and mortality.
The reduction of natural landscapes in urban areas due to population density is one of the major factors that increases vulnerability to heat waves, giving rise to the phenomenon of the "Urban Heat Island" (UHI), where the city is hotter than its surroundings. The density of urban areas is rapidly increasing due to rising birth rates and the migration of people from rural settings seeking to improve their lives by means of better income, resources and a better society [3].
Everyday anthropogenic activities in urban communities emit an enormous number of pollutant particles into the urban air, which increases people's exposure to air toxins.
Moreover, the superposition of heat stress and air pollution makes people increasingly vulnerable to the impact of each individual risk [4].
Many cities have implemented heat emergency response plans to reduce mortality rates during heatwaves. The combination of intense heat waves and poor air quality increases health risks. Heat stress can lead to heat exhaustion, heat cramps, and rashes, and long-term exposure can cause heat stroke.
Children, elderly people, people living alone, pregnant women, and patients with asthma or cardiovascular disease are particularly vulnerable to both extreme events (heat and pollution), which requires additional interventions.
In the present situation, if nothing is done to alleviate the intensity of heat stress in urban areas, temperature extremes will spread widely across the globe by 2100 [5]. Unchecked increases in extreme heat will have a severe impact on communities and ecosystems, ultimately making it harder to cope [6].
Heat-stress mitigation strategies need to be applied in urban areas in order to protect the environment and human health [7]. Selecting heat resilience interventions and developing a decision support system is a challenge. This thesis deals with thermal comfort, combining heat stress prediction modeling, heat vulnerability mapping, field measurements and the application of multi-criteria decision methods. The thesis is composed of five chapters, each based on journal articles; the conclusions are provided at the end. The thesis outline is given below.
Chapter 1: Reviews a total of 71 journal papers, published in English and focused on interventions that improve thermal comfort. Cooling effect data are extracted from these papers to investigate the efficacy of heat stress mitigation strategies. Based on the analysis of the extracted data, past achievements on this research topic are documented, and the average cooling provided by installing green, blue and grey interventions is estimated. Moreover, a review of Decision Support Tools (DSTs) is conducted. Existing published studies on multi-criteria decision-making, toolkits and simulation tools in the field of heat stress and urban heat island are reviewed. The tools are compared against benchmarking criteria which a decision support system must cover to improve thermal comfort in urban areas. Past achievements on this research topic are presented, various knowledge gaps are identified, and recommendations for a better version of any DST are discussed.
Chapter 2: This chapter is a state of the art that covers the background of heat stress and discusses existing models. A survey of outdoor heat stress indices is conducted to present their existing mathematical models and to highlight the different heat indices officially used for measuring thermal comfort in different regions across the globe. From this survey, it is noted that most indices can be measured and calibrated directly using equations, while some secondary indices can only be estimated by more complex methods evaluated through their models. A sensitivity analysis of the most common heat stress indices that can be estimated by direct mathematical models, i.e., CET, HI, SSI, PMV, DI, WBGT and UTCI, is carried out, and the variation coefficient for each variable is calculated by partial differentiation. These operational indices are also simulated to analyse the sensitivity of discomfort zones to certain variations in summer under a min-max function.
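To make the sensitivity procedure concrete, the short sketch below approximates the partial derivatives of a heat index with respect to its meteorological inputs by central finite differences and converts them into dimensionless variation coefficients. It is only an illustration: Thom's Discomfort Index is used here as a stand-in for the indices listed above, and the function name, evaluation point and perturbation size are assumptions rather than the exact models used in Chapter 2.

```python
import numpy as np

def discomfort_index(t_air, rh):
    """Thom's Discomfort Index (illustrative): DI = Ta - 0.55*(1 - 0.01*RH)*(Ta - 14.5)."""
    return t_air - 0.55 * (1.0 - 0.01 * rh) * (t_air - 14.5)

def sensitivity_coefficients(index_fn, point, step=1e-3):
    """Approximate dIndex/dx_k at `point` by central finite differences and
    report normalized variation coefficients (x_k / Index) * dIndex/dx_k."""
    base = index_fn(**point)
    coeffs = {}
    for name, value in point.items():
        up = dict(point, **{name: value + step})
        down = dict(point, **{name: value - step})
        derivative = (index_fn(**up) - index_fn(**down)) / (2.0 * step)
        coeffs[name] = derivative * value / base  # dimensionless sensitivity
    return base, coeffs

if __name__ == "__main__":
    # Illustrative summer conditions: 32 degC air temperature, 60% relative humidity.
    value, coeffs = sensitivity_coefficients(discomfort_index, {"t_air": 32.0, "rh": 60.0})
    print(f"DI = {value:.2f} degC")
    for var, c in coeffs.items():
        print(f"  sensitivity to {var}: {c:+.3f}")
```

The same finite-difference wrapper can be reused for any index (CET, HI, SSI, PMV, WBGT, UTCI) once its direct formula is implemented as a Python function.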
Chapter 3: Physiological Equivalent Temperature (PET) and Predicted Mean Vote (PMV) are complex heat indices which can be calculated by the Munich Energy-balance Model for Individuals and Fanger's model, respectively. A system dynamics approach and artificial intelligence are used for predictive modeling of PET, Mean Radiant Temperature and PMV. The five most important meteorological parameters, namely air temperature, global radiation, relative humidity, surface temperature and wind speed, are considered for heat stress assessment. Three machine learning approaches, namely Support Vector Machine, Decision Tree and Random Forest, are used to predict heat stress. Afterwards, deep learning approaches such as Long Short-Term Memory, Gated Recurrent Unit (GRU) and simple recurrent neural networks are used to evaluate the performance of the developed approaches. Among these, the GRU is observed to be the most promising, producing the results with the highest accuracy. A web-based simulation tool is developed for heat stress assessment, which allows users to select the range of thermal comfort scales based on their perception, depending on age, local weather adaptability, and habit of tolerating heat events. It also warns the user by a colour code about the level of discomfort, which helps them to schedule and manage their outdoor activities.
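As an illustration of the recurrent modeling described above, the sketch below defines a small GRU network in Keras that maps short sequences of the five meteorological variables to a single comfort index such as PET. The window length, layer sizes and training data are placeholders for demonstration, not the configuration used in the thesis.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 5   # air temperature, global radiation, relative humidity, surface temperature, wind speed
WINDOW = 24      # assumed look-back window (e.g. 24 hourly records)

def build_gru_model(window=WINDOW, n_features=N_FEATURES):
    """Small GRU regressor predicting a single comfort index (e.g. PET) from a weather window."""
    model = keras.Sequential([
        keras.Input(shape=(window, n_features)),
        layers.GRU(64),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # predicted PET (or PMV / mean radiant temperature)
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

if __name__ == "__main__":
    # Synthetic stand-in data, used only to show the expected tensor shapes.
    x_train = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
    y_train = np.random.rand(256, 1).astype("float32")
    model = build_gru_model()
    model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
    print(model.predict(x_train[:3]))
```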
Chapter 4: After the heat stress assessment, it is also necessary to analyse the hotspots and vulnerable areas. We evaluated the Heat Vulnerability Index (HVI) in Amiens for extreme heat days recorded over three years (2018-2020). We used the principal component analysis (PCA) technique for fine-scale vulnerability mapping. The main types of data considered included (a) socioeconomic and demographic data, (b) air pollution, (c) land use and cover, (d) elderly heat illness, (e) social vulnerability, and (f) remote sensing data (land surface temperature (LST), mean elevation, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI)). The output maps identified the hot zones through comprehensive GIS analysis. The resultant maps showed that high HVI exists in three typical areas: (1) areas with dense population and low vegetation, (2) areas with artificial surfaces (built-up areas), and (3) industrial zones. Low-HVI areas are in natural landscapes such as rivers and grasslands. This approach can be helpful for decision-makers in targeting hot areas for planning heat resilience measures. In addition, the cooling effect of three plant species that are common in public spaces in the city centre was measured during the summer of 2021 in Amiens using a Kestrel 5400 sensor.
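A minimal sketch of a PCA-based vulnerability index is given below; it assumes the indicator layers have already been aggregated to one row per spatial unit (e.g., census block or grid cell). The column names and the weighting of components by explained variance are illustrative choices, not the exact construction used in Chapter 4.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def heat_vulnerability_index(indicators: pd.DataFrame, n_components=3) -> pd.Series:
    """Combine standardized vulnerability indicators into a single HVI score per spatial unit."""
    z = StandardScaler().fit_transform(indicators.values)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(z)
    # Weight each retained component by its share of explained variance, then rescale to 0-1.
    hvi = scores @ pca.explained_variance_ratio_[:n_components]
    hvi = (hvi - hvi.min()) / (hvi.max() - hvi.min())
    return pd.Series(hvi, index=indicators.index, name="HVI")

if __name__ == "__main__":
    # Illustrative indicator table: one row per census block / grid cell (hypothetical columns).
    df = pd.DataFrame(
        np.random.rand(10, 5),
        columns=["pop_density", "elderly_share", "pm10", "lst", "ndvi"],
    )
    df["ndvi"] = -df["ndvi"]  # invert so that higher values always mean higher vulnerability
    print(heat_vulnerability_index(df))
```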
Chapter 5: The selection of interventions for desirable locations is important for decision-makers considering certain criteria. In this chapter, Multi-Criteria Decision Methods (MCDM) are applied to help select and prioritize mitigation measures step by step. Firstly, an Analytic Hierarchy Process (AHP) based approach is applied to choose appropriate measures for hotspots. The evaluation of the measures is obtained from a questionnaire in which human judgment is used for comparison, based on perception and priorities. The limitations of the different MCDMs remain a problem that reduces the reliability of the decision. For this reason, eight MCDMs, namely Elimination and Choice Expressing Reality (ELECTRE) NI (Net Inferior) and NS (Net Superior), Technique for Order Preference by Similarity to Ideal Solutions (TOPSIS), Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE), VIekriterijumsko KOmpromisno Rangiranje (VIKOR), Multi-Objective Optimization Ratio Analysis (MOORA), Weighted Sum Method (WSM) and Weighted Product Method (WPM), are applied to select the heat mitigation measure under certain criteria. These models are also coupled to the AHP, where the AHP method is used to determine the weights of the selected criteria and the operational MCDMs are used to obtain the final ranking of the alternatives. This numerical research evaluates the effectiveness of the MCDMs using different normalization techniques and the impact of their integration with the AHP.
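To illustrate how criterion weights can be derived in the AHP step mentioned above, the sketch below computes the priority vector from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio. The comparison matrix and the criterion names are hypothetical examples.

```python
import numpy as np

# Random Index values for the consistency ratio (Saaty), indexed by matrix size n.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise: np.ndarray):
    """Return the AHP priority vector (principal eigenvector) and the consistency ratio."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()
    n = pairwise.shape[0]
    lambda_max = eigvals.real[k]
    ci = (lambda_max - n) / (n - 1)                      # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0  # consistency ratio
    return weights, cr

if __name__ == "__main__":
    # Hypothetical comparison of three criteria: cooling effect, cost, maintenance.
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])
    w, cr = ahp_weights(A)
    print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```

A consistency ratio below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.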
Chapter 1
Literature review
This chapter is a state of the art consisting of the following two parts: ❖ Part A is a review of the ambient air-cooling effect of green (vegetation), blue (water features) and grey (shading devices) interventions.
❖ Part B is a detailed review of published studies on the application of multi-criteria decision-making methods, toolkits and simulation tools in the field of urban heat mitigation.
Part A: Quantifying the Cooling Effect of Urban Heat Stress Interventions
Part A of this chapter was accepted last September as "Quantifying the cooling effect of urban heat stress interventions" and will be published soon in the "International Journal of Global Warming". Details of the forthcoming article are available at
Major Findings
This review evaluates the existing studies of blue, green, and grey interventions based on field measurements and modeling aiming to quantify the cooling impact that reduces outdoor heat stress.
Based on the findings from the literature, it is concluded that the cooling effect of interventions depends on the site characteristics and local climate. On average, water bodies can reduce the mean air temperature (T_a) by 3.4°C and the Universal Thermal Climate Index by 10.7°C, while natural vegetation can improve T_a by 2.3°C and the Physiological Equivalent Temperature (PET) by 10.3°C during summer. Vertical greenery systems provide a cooling effect on T_a of up to 4°C, whereas architectural shades reduce it by approximately 3.8°C and PET by up to 6.9°C under the shade structure.
Introduction
Large-scale urbanization and rapid population growth in big cities are contributing significantly to locally experienced impacts of climate change. Several heat-related issues have been reported globally, especially in Europe, and all countries have begun to pay attention to this problem and the adverse effects associated with it. One example is the Urban Heat Island (UHI) effect, a significant issue in hot summers, which affects the microclimate of the urbanized city, increasing the potential for warmer temperatures, whereby the air temperature (T_a) in big cities remains higher than in the rural surroundings [1].
Human health is adversely affected by the increase in heat driven by climate change [2]. These effects are especially serious in summer for vulnerable groups such as the elderly, people with cardiovascular disease, and young children [3]. There have been particular events where the intensity of extreme heat has proven disastrous to human health, causing an increase in the mortality rate. The most common effects on the human health of the UHI and Urban Heat Stress (UHS) are heatstroke, dehydration, fainting, asthma, heat cramps, rash, skin allergies, physical and mental stress, and respiratory issues [4].
The urban infrastructure has a high thermal capacity allowing absorption of solar energy, causing a low evaporation rate and adversely affecting air quality for inhabitants [5]. The rapidly growing urban population has increased energy consumption by 75%, resulting in energy dissipation as heat, which is further intensified by solar radiation. Surfaces such as roofs, pavements, and roads are composed of impervious, low-albedo materials which tend to absorb and re-radiate a high amount of solar radiation in the infrared part of the spectrum. Air pollution and climate change are interlinked. The rapid growth in vehicle use and fuel consumption is an additional contributor to the increase in temperature, with pollution from exhaust emissions increasing the adverse effects of UHI [6]. All these risk factors have focused the attention of researchers, urban planners, and society on developing appropriate strategies for mitigating UHS. Recent studies have evaluated the different techniques for mitigating the UHS effect, which are mostly focused on the implementation and effectiveness of green roofs and cool materials [7], urban vegetation, watered cool pavements, water bodies, and shading canopies [8]. These interventions are used as important resources to reduce heat stress through their cooling effect. There is an ongoing debate on the relative effectiveness of different interventions, and this paper reviews both natural and built approaches by surveying peer-reviewed papers and evaluating them to identify the best strategies to mitigate UHS, particularly in summer when the heat island effect is greatest.
The objectives of this review are (i) to provide an overview of UHS mitigation strategies, (ii) to quantify the cooling effect of natural and constructed features based on different indicators, mainly T_a, (iii) to analyze the results to determine the most efficient method to reduce UHS, and (iv) to identify the co-benefits associated with these interventions.
The methodology of this review paper is explained in Section 2. The scientific works on which this article is based are summarized in the tables in Section 3. The energy demand and costs/benefits of UHS and the UHI mitigation measures are briefly explained in Section 4, and the results are discussed in Section 5.
Finally, a conclusion is given in Section 6.
Methodology and Indicators of Cooling Effect
This paper is a review of peer-reviewed articles on the cooling effect of various strategies. Among these,
Interventions to Mitigate Heat Stress
Water features, vegetation, and constructed shade are also referred to as blue, green, and grey infrastructure respectively, and are among the most effective ways to provide cooling by evaporation and shading, thereby improving the urban microclimate. Blue and green features have multiple additional environmental benefits, for instance ameliorating air quality and increasing biodiversity, particularly by means of urban vegetation, and are potent ways to combat UHS [9] and UHI. They are also beneficial in increasing thermal comfort in open spaces as well as in compact and dense urban areas [10]. This paper presents a review of interventions from across the globe, with the three categories, water features, green spaces, and constructed shade, described separately in the following sections.
Blue Infrastructure
Water areas such as ponds, rivers, and lakes are known to significantly mitigate heat stress, although the cooling effect depends on the surrounding environment and atmospheric conditions [11]. This has led scientists to study interventions using water in different ways (shown in Figure 1.2) to reduce the environmental temperature [12].
Misting Systems
One of the most attractive cooling methods is misting [13]. The effect has been measured by checking skin temperature (T_sk) [14]. Most studies concerned with water features were carried out in an outdoor environment; the only exceptions were one installed at a station and another installed indoors. This review found two different types of misting systems: water misting [13] and dry misting [15]. The greatest cooling effect from a water mist cooling system was observed in a study from Atacama (Chile), which reduced T_a by 15°C [13].
Water Fountains, Water Pavements, and Water Sprays
Fountains not only minimize the effect of heat but also add aesthetic value to the surroundings, making them more pleasant and refreshing. Pavement watering has been studied for the past three decades and is considered one of the most effective techniques to improve thermal comfort. Watering surfaces can cool them to a certain extent; for instance, watering pervious concrete material can reduce the T_s by up to 2°C, while watering porous bricks can reduce the T_s by 20°C. If green areas in the urban landscape are combined with watering pavements, this is particularly helpful in reducing the temperature during both the day and night [17].
Most research is conducted via simulations using ENVI-met and Computational Fluid Dynamics (CFD) on models of water fountains along with water droplets, water jets, and water bodies. The addition of water jets showed a greater effect at night than during the daytime. Fountains installed along with water bodies have been found to decrease T_a and T_mrt by increasing the humidity and cooling the air [18].
Water bodies
According to research undertaken in Phoenix, Arizona (USA), the cooling effect of wetting streets, pond surfaces, and lakes was directly proportional to their surface areas: the larger the water body, the greater the cooling effect. The UHI mitigation depends upon the amount of water used for the purpose [19]. Similarly, another study shows that the T_mrt of the asphalt surface was higher compared to the temperature of the water body, with a significant cooling effect extending for around 0.5 metres. In contrast, other studies have found that open water surfaces can influence temperature, causing it to rise.
One author from the Netherlands concluded that water bodies can increase the daily maximum UHII by 95 percent at night, as, despite seasonal change, the water temperature remains high [20] due to the absorption of heat throughout the day. Other researchers also support this seasonal variation, which has a high impact on warmer days, with water remaining warm in lakes and rivers and influencing the surrounding temperature [21]. The papers reviewed regarding the outdoor cooling effects of blue infrastructure are summarised below in Table 1.1. A study in the centre of Athens (Greece) showed that the T_a of the green area is lower than that of the surroundings in the early morning [28]. Vertical green walls or green façades are approaches that can also ameliorate the thermal effect of urban areas.
This paper reviews research into the cooling effect of naturalistic greenery that is cheaper than
Types of Natural Green Interventions
(a) Grass
Researchers in Manchester (England) measured the difference in T_s and T_a and demonstrated the cooling effect of grass [29]. Urban parks often combine dense vegetation along with water facilities [30].
Increasing the proportion of trees increases the cooling effect and humidity [31]. Grass alone can increase the RH [32], but this effect is greater when combined with trees [33]. A similar effect is observed with green or vegetated parking areas, where grass is grown in holes in paving or in a reinforcing mesh to create a stable surface, but this provides less cooling compared to other vegetated paved surfaces because of the convection effect when cars are parked and thermal energy is transferred, limiting the T_a drop. Thus, vegetated pavement in parking areas lessens discomfort, but not as much as when installed in other situations.
(b) Trees
Trees are effective at absorbing and reflecting thermal radiation, with the cooling effect depending on the tree species and the planting pattern. The cooling effect of small-leaved lime (Tilia cordata) was measured and an improvement in T_a was recorded during both day and night [34]. This suggests that an appropriate configuration of trees could provide a good cooling effect. Strategic placement of trees and green infrastructure has been found not only to reduce the UHI and UHS but also to reduce premature human death during high temperature events [35].
Parks with a high density of trees experience reduced temperature and increased RH, particularly during summer, and can influence temperature and RH as far as 60 metres away [33]. Different numbers of trees have been compared, and the most effective daytime cooling results were found with 50% tree cover [36]. In a study in Kaohsiung (Taiwan), 5 strategies were tested, the results showing that increasing the green coverage ratio (GCR) in the street up to 60%, in the park up to 80%, and on building roofs up to 100% can reduce T_a [37]. The papers reviewed on the cooling effect of vegetation in outdoor spaces are given below in Table 1.2.
Green Interventions Involving Support
(a) Vertical Greening
Vegetation that is supported by constructed frameworks or built structures is referred to as a Vertical Greenery System (VGS). In this study we considered two types of VGS: green façades, comprising climbing plants growing in the ground but supported on the walls of buildings [Lepp], and green walls, which are vertical built structures consisting of containers of growth medium, such as soil or a substitute substrate, in which the plants are grown, together with an integrated hydration system. These types of greening offer numerous co-benefits, including aesthetics and biodiversity. An attractive solution is the application of vegetated façades, which help reduce heat through evapotranspiration as well as by mixing air vertically, lowering the temperature in the surroundings and reducing the UHI by providing fresh air [Johnston].
(b) Plant Species in vertical greening systems
Different plants showed different efficiencies; plants with woody branches and the smallest leaves appeared to be the most efficient in cooling effect during summer [Charoenkit].
The efficiency of species in reducing T_s and T_w ranged from 1°C to 5.6°C, with sword bean (Canavalia gladiata) the most efficient plant. In the UK, the cooling effect was considerable when the outdoor T_a was evaluated, with the extent to which temperature was affected differing according to species [Cameron]. Some reviewed studies on constructed greenery referring to cooling effects in outdoor spaces are given below in Table 1.3.
Grey Infrastructure (Constructed shading)
Thermal stress in hot weather can limit outdoor activities. Outdoor spaces can be shaded in different ways: via shading devices [Yıldırım], sun sails [Garcia-Nevado], architectural shading [Mcrae], shade pavilions or optimized awnings [Rossi], parasols, deep canyons [Johansson], textile canopies, and other overhead shade structures [Lee].
Some types of constructed shade can be seen in the papers reviewed below, which refer to the cooling effects of shade structures in outdoor spaces and are given in Table 1.4.
Energy-Saving Benefits of Interventions
Nowadays, energy consumption is an important issue and the focus of attention for many scientists and researchers. For both cooling and heating, different technologies and electronic appliances are used, and various methods are applied by different countries in order to balance demand and consumption. The natural and constructed options discussed in this paper to improve thermal comfort in urban areas can reduce energy consumption and cost, and ultimately lead to sustainable city planning. Natural greenery reduces PET, particularly when combined with shading in summer [Müller]. Trees can decrease outdoor T_a and the building cooling load by 29% [31], which ultimately reduces indoor air conditioning cost by around 25 Egyptian pounds, equivalent to 1.25 euro/day [36]. Another study showed that there was an annual saving of about 1.5 million US dollars because the urban forest, of about 100,000 trees, decreased the demand for energy and water [Moore]. Specifically in July, at the peak of summer, the installation of green façades can reduce building energy demand by up to 20% [Haggag].
There are other shading technology options that not only provide pedestrian thermal comfort but also reduce energy demand. For instance, the installation of sun sails in Mediterranean city streets can reduce cooling demand by up to 46% [Garcia-Nevado]. Other shading devices in street canyons can reduce the yearly heating load by up to 18% during winter [Evins]. Other interventions, including green walls, suburban parklands and ceiling sprays [Narumi], are effective not only outdoors but also for the indoor environment.
Results and Discussion
The overheating of urban areas has negative impacts on human health and contributes to increased
Conclusions
The reviewed interventions improve the thermal comfort and also have a positive psychological impact on citizens. When the environmental conditions are extreme with intense solar radiation, and heat levels rise, one must consider preventive actions and resources to implement cooling interventions in urban settings. When selecting the most appropriate heat resilience strategy, important criteria should be considered such as cooling effect, cost, maintenance, and public acceptance.
All the types of mitigation measures that are reviewed in this study provide cooling, but the effect depends on the local climate and geography. Future investigations should focus on developing a practical decision support tool that can help decision-makers to select an adaptation measure based on the characteristics of the proposed site, local social and economic circumstances, and constraints.
Part B: Review and Comparative Study of Decision Support Tools for the Mitigation of Urban Heat Stress
Part B of this chapter has been published as "Review and comparative study of decision support tools for the mitigation of urban heat stress" in "Climate". This paper is attached as Paper I with kind permission from the journal and can be cited as:
Paper -I
Review and Comparative Study of Decision Support Tools for the Mitigation of Urban Heat Stress
Introduction
Urbanization and an exponential increase in population have brought the concept of Urban Heat Island (UHI) and heat stress into the limelight. The world has seen adverse effects, particularly a rise in air temperature, a higher mortality rate, and changes in weather patterns [1]. Most studies have focused on the UHI in densely populated capital cities and there is insufficient literature available for smaller cities [2].
Different authors have explained that the UHI has severe effects on the most vulnerable populations, especially during the summer season. This phenomenon greatly raises the consumption of cooling energy as well as the corresponding peak electricity demand of cities. The UHI can therefore be linked to a significant increase in urban pollutant concentrations and affects the city's carbon footprint as well as ground-level ozone. Urban Heat Stress (UHS) severely affects health and comfort and increases mortality [3]. In current times, urban planners and policy-makers are keen to address issues such as increased urban heat due to climate change triggered by human activities.
Europe, Australia and North America are the major continents among those working to mitigate UHS in different ways, for example, by increasing urban forestry or by using green and blue interventions. On the other hand, Asia has also worked on thermal comfort but their focal point is grey and blue infrastructure. Accommodating heat stress measures in urban areas is not the easiest task as it encounters issues such as water scarcity, high cost and unsuitable environments for green infrastructure.
It is the responsibility of the decision-makers to evaluate multiple possible solutions to resolve the issues by considering specific criteria. Urban planners are still perplexed due to the severity of changes that have taken place in different zones.
Sometimes, alternative decisions are to be taken in order to combat the complex situation by considering some criteria [4]. It is observed in previous studies that every location has unique characteristics and parameters and the decision-makers have concerns about criteria such as cost, efficiency, and materials. For every location change, the mitigation measure should be modified. To solve these issues, a proper decision support system (DSS) is required to help decision-makers.
A DSS is an information system that requires judgment, determination, and a sequence of actions. It assists the mid-and high-level management of an organization by analyzing huge volumes of unstructured data and information. It is either human-powered, automated or a combination of both and it can be used in any domain due to its versatility.
There are many studies related to climate vulnerabilities: some have used economic or mechanistic modeling [5-7], and other researchers have used outranking approaches that have later been criticized due to axiomatic violations [8,9]. Similarly, the number of characteristics required for the evaluation of UHS management challenges is not constant. There is a need for a tool that allows one to work with all parameters simultaneously, helps to identify negative trends of urban heat, and eventually allows better adaptation measures.
This paper presents a comprehensive review of DSTs in the context of UHI, climate change adaptation, and heat stress. In Section 2, the methodology of the paper is discussed; Multi-Criteria Decision Analysis (MCDA) approaches are reviewed in Section 3; and DSTs (toolkits and spatial tools) are discussed in Section 4. All tools are critically analyzed against 15 important criteria in Section 5 and, finally, the conclusion is presented in Section 6.
Materials and Methods
Review Strategy
In this review article, we have used a qualitative and exploratory approach. Peer-reviewed research papers were gathered from Google Scholar. The research papers were selected using keywords such as multi-criteria decision, UHI mitigation, heat resilience and UHS DST. Tools that have been developed for urban heat resilience under the banner of different projects were searched for using the same keywords. The survey is presented in two tables. In the first table, we review 9 academic studies in which different MCDA approaches were applied to develop DSTs for UHS mitigation. In the second table, we review different DSTs which deal with UHI, climate change risks, extreme heat events, and heat resilience adaptation and mitigation measures.
Inclusion and Exclusion Criteria
In this paper, we analyze 12 DSTs with respect to the principal aspects of decision-making support, such as: (i) experts' assistance in the development of a support system; (ii) socio-cultural factors, for example, population size and age, activities, health data and the local environment; (iii) adaptive capacity of the tool, which allows the indication of suspect areas, informs where intervention is needed and when to schedule outdoor activities; (iv) good integration with other domains correlated with a rise in UHS, which can make the tool more advanced and gives the possibility to use the tool universally; (v) input requirements from the user, meaning that the decision results depend on the input data; (vi) indicators showing the vulnerability, heat events and effectiveness of the intervention; (vii) political and administrative support for developing the tool; (viii) vegetation, which is a basic and natural intervention that helps to reduce heat stress; (ix) graphical interface and heat stress visualization by mapping; (x) spatial coverage, which helps to indicate the suspect areas in a city on a GIS map; (xi) cost assessment of the measure; (xii) quick assessment of the intervention's effectiveness in real time; (xiii) user-friendliness, which shows how easy or difficult it is to use the tool; (xiv) uncertainty risk analysis, which gives trustworthy results; and (xv) plus points, which are when the tool provides a long-term effect of heat stress or considers other interventions apart from vegetation. These selected criteria were obtained after going through the literature and serve as a methodology, as shown in Figure 1.
Multi-Criteria Decision Analysis
Decision-making tools are valuable in tackling issues with numerous actors, criteria and objectives. Generally, MCDA is based on five components: goals, decision-makers' preferences, alternatives, criteria and results. Where many alternatives exist, a distinction can be made between Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM), although both offer comparable characteristics. MODM is suitable for the assessment of continuous options when constraints need to be predefined in the form of choice vectors; a set of objective functions is optimized subject to these limitations while decreasing the performance of at least one goal. In MADM, inherent characteristics are covered by prompting the consideration of fewer options, and evaluation becomes difficult as prioritizing turns out to be more difficult. The result is obtained by comparing different alternatives with respect to each criterion [10-12]. Different multi-criteria techniques are applied in the field of UHI mitigation, thermal comfort improvement, and the selection of the heat stress index. MCDA models are developed according to the researcher's point of view concerning the demand and goal. The methodology can be direct or indirect. In a direct approach, the assignment of priorities or weights is performed on the basis of contributions from a questionnaire. In an indirect approach, all the potential criteria are separated into components and assigned weights as per past comparable issues, and the judgment of decision-makers is based on experience. MCDA is consistently complex because of the involvement of stakeholders and of technical, institutional, legislative, social and financial factors. The overall strategy of the MCDA technique is presented in Figure 2. A survey has been conducted on the use of different MCDA techniques for UHI and UHS mitigation.
The following methods were applied for UHS [3,13-20] and are briefly discussed with their limitations in Table 1.
• Analytical Hierarchy Process (SWOT);
Stage 1: Concordance index for each criterion i:
$$c_i(a,b) = \begin{cases} 0 & \text{if } I_{ib} - I_{ia} \ge p_i \\ \dfrac{p_i - (I_{ib} - I_{ia})}{p_i - q_i} & \text{if } q_i < I_{ib} - I_{ia} < p_i \\ 1 & \text{if } I_{ib} - I_{ia} \le q_i \end{cases}$$
Stage 2: Outranking matrix:
$$C(a,b) = \frac{1}{\sum_{i=1}^{m} w_i} \sum_{i=1}^{m} w_i \, c_i(a,b)$$
$$S(a,b) = \begin{cases} C(a,b) & \text{if } d_i(a,b) \le C(a,b) \ \forall i = 1,\dots,m \\ C(a,b) \prod\limits_{i \in V(a,b)} \dfrac{1 - d_i(a,b)}{1 - C(a,b)} & \text{otherwise} \end{cases}$$
$$T(a,b) = \begin{cases} 1 & \text{if } S(a,b) \ge \lambda - g(\lambda) \\ 0 & \text{otherwise} \end{cases}, \qquad Q(a) = \sum_{k=1}^{m} T(a,k) - \sum_{k=1}^{m} T(k,a)$$
Nonlinearities might not be incorporated in the outranking aggregation process.
Reference: Abbas El-Zein [14]
Investigate the interdependencies between the benefits, opportunities, costs, and risks for proper adoption of green roof installation.
The enhanced fuzzy Delphi method (EFDM) and fuzzy decision-making trial and evaluation laboratory (FDEMATEL) approaches.
Step 1: Select the panel of experts.
Step 2: Design and distribute the questionnaire.
Membership function is:
$$\mu_A(x) = \begin{cases} (x - l)/(m - l), & l \le x \le m \\ (u - x)/(u - m), & m \le x \le u \\ 0, & \text{otherwise} \end{cases}$$
Step 3: Develop initial direct relation fuzzy matrix.
$$A^{(s)} = \begin{bmatrix} 0 & a^{(s)}_{12} & \cdots & a^{(s)}_{1n} \\ a^{(s)}_{21} & 0 & \cdots & a^{(s)}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a^{(s)}_{n1} & a^{(s)}_{n2} & \cdots & 0 \end{bmatrix}, \quad s = 1, 2, \dots, m$$
Step 4: Normalize the initial direct relation fuzzy matrix.
$$E^{(s)} = \left[ e^{(s)}_{ij} \right]_{n \times n}, \quad s = 1, 2, \dots, m$$
Step 5: Develop the total direct and indirect relation fuzzy matrix.
$$O = E \times (I - E)^{-1}$$
Step 6: Defuzzify the entries in the fuzzy total relation matrix.
$$o_{ij} = \left( l_{ij} + 4 m_{ij} + u_{ij} \right) / 6$$
Step 7: Produce causal diagrams, values for D + R and D -R were calculated by the following equations:
$$O = \left[ o_{ij} \right]_{n \times n}, \quad i, j = 1, 2, \dots, n$$
$$D = \left[ \sum_{j=1}^{n} o_{ij} \right]_{n \times 1} = \left( t_i \right)_{n \times 1}, \qquad R = \left[ \sum_{i=1}^{n} o_{ij} \right]_{1 \times n} = \left( t_j \right)_{1 \times n}$$
Absence of significant relationships among environmental and economic opportunities.
Reference: Sanaz Tabatabaee [15]
Identifying and assessing the critical criteria affecting decision-making for green roof type selection in Kuala Lumpur. An enhanced fuzzy Delphi method (EFDM) was developed for criteria identification. EFDM consists of two rounds: firstly, knowledge acquisition through a semi-structured interview, and secondly, criteria prioritization using a Likert scale questionnaire.
• First round: discuss the potential criteria;
• Second round:
  1. Design the questionnaire and send it to the experts;
  2. Organize the experts' opinions collected from the questionnaire into an estimate and create the Triangular Fuzzy Numbers (TFNs);
  3. Select the criteria affecting decision-making.
Fuzzy Delphi Method:
1. Sets of pairwise comparisons according to the direction of influence of the relationship between the criteria/sub-criteria were generated. The comparison scale is 0, 1, 2, 3, and 4, denoting no influence, low influence, medium influence, high influence, and very high influence, respectively.
2. The direct-relation matrix was generated as the average of the pairwise comparison matrices produced in step 1 by 28 experts: an n × n matrix A, in which A_ij is the degree to which criterion i affects criterion j.
If the expert decides to change an answer or to add any new information, the first round should be repeated, and the process will be time-consuming.
Reference: Amir Mahdiyar [16]
The study aims to map the UHI of a mid-size city (Rennes, France) and define the relevant land-use factors. The UHI was measured by 22 weather stations in different contexts: urban, suburban, and peri-urban.
A multi-criteria linear regression method was used to build a model of the UHI.
1. The first step of the process was to build a regression model by selecting explanatory variables;
2. The second step was to execute the regression selected during the first step;
3. The regression coefficients were applied to the associated raster.
Limitation: limited variables are considered, and the approach does not provide a reasoning or spatial method.
Reference: X. Foissard [17]
The Technique for Order of Preference by Similarity to the Ideal Solution (TOPSIS).
1. Construct the decision matrix $X = [x_{ij}]$ and assign weights to the criteria, $w = [w_1, w_2, \dots, w_n]$, considering $\sum_{j=1}^{n} w_j = 1$.
2. Calculate the normalized decision matrix $N$:
$$N_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}}, \quad i = 1, \dots, m;\ j = 1, \dots, n$$
3. Calculate the weighted normalized decision matrix $v$:
$$v_{ij} = w_j N_{ij}, \quad i = 1, \dots, m;\ j = 1, \dots, n$$
4. Determine the positive ideal ($A^+$) and negative ideal ($A^-$) solutions:
$$A^+ = \{v_1^+, v_2^+, \dots, v_n^+\} = \{(\max_i v_{ij} \mid j \in I), (\min_i v_{ij} \mid j \in J)\}$$
$$A^- = \{v_1^-, v_2^-, \dots, v_n^-\} = \{(\min_i v_{ij} \mid j \in I), (\max_i v_{ij} \mid j \in J)\}$$
5. Calculate the separation measures from the positive ideal solution ($d_i^+$) and the negative ideal solution ($d_i^-$):
$$d_i^+ = \left( \sum_{j=1}^{n} \left| v_{ij} - v_j^+ \right|^p \right)^{1/p}, \qquad d_i^- = \left( \sum_{j=1}^{n} \left| v_{ij} - v_j^- \right|^p \right)^{1/p}, \quad i = 1, 2, \dots, m$$
6. Calculate the relative closeness to the positive ideal solution (performance score) and rank the preference order or select the alternative closest to 1:
$$R_i = \frac{d_i^-}{d_i^- + d_i^+}, \quad 0 \le R_i \le 1,\ i = 1, 2, \dots, m$$
(A numerical sketch of this TOPSIS procedure is given after Table 1.)
2. The AHP is used to individually analyze each aspect of the defined UHII problem in order to weight the parameters involved.
3. The summary of priority is obtained by multiplying each criterion weight by the intensity range weight and adding the results.
Limitation: based on literature quantitative analysis.
Reference: Sangiorgio [3]
Weighting Criteria and Prioritizing of Heat Stress Indices in Surface Mining.
The viewpoints of occupational health experts and the qualitative Delphi methods were used to extract the most important criteria. Then, the weights of 11 selected criteria were determined by the Fuzzy Analytic Hierarchy Process.
Finally, the fuzzy TOPSIS technique was applied for choosing the most suitable heat stress index.
1. Formation of the implementing team and monitoring of the Delphi process;
2. Selecting the experts and participants;
3. Adjusting the questionnaire for the first round;
4. Editing the questionnaire grammatically (deductive and removing ambiguities);
5. Sending the questionnaire to experts;
6. Analyzing the obtained responses in the first round;
7. Preparing the second-round questionnaire considering the required revisions;
8. Sending the second questionnaire to the same experts;
9. Analyzing the results of the second questionnaire;
10. Determining the relative weights of each criterion using the fuzzy AHP;
11. Choosing a heat stress index among the existing ones in the study using the fuzzy TOPSIS method.
Limitation: WBGT overestimates the heat stress.
Reference: Asghari [20]
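As referenced above, the following sketch is a compact numerical illustration of the TOPSIS steps listed in Table 1 (vector normalization, weighting, ideal and anti-ideal solutions, Euclidean separation measures with p = 2, and relative closeness). The decision matrix, criteria and weights are hypothetical and serve only to demonstrate the procedure.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Rank alternatives with TOPSIS; `benefit[j]` is True if criterion j should be maximized."""
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))      # vector normalization
    v = norm * weights                                       # weighted normalized matrix
    ideal_pos = np.where(benefit, v.max(axis=0), v.min(axis=0))
    ideal_neg = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal_pos) ** 2).sum(axis=1))      # separation from positive ideal (p = 2)
    d_neg = np.sqrt(((v - ideal_neg) ** 2).sum(axis=1))      # separation from negative ideal
    return d_neg / (d_neg + d_pos)                           # relative closeness R_i

if __name__ == "__main__":
    # Hypothetical alternatives (rows): trees, water fountain, shading sail.
    # Hypothetical criteria (cols): cooling effect (max), cost (min), maintenance (min).
    X = np.array([[2.3, 40.0, 3.0],
                  [3.4, 55.0, 4.0],
                  [3.8, 25.0, 2.0]])
    w = np.array([0.5, 0.3, 0.2])
    scores = topsis(X, w, benefit=np.array([True, False, False]))
    print("closeness scores:", np.round(scores, 3), "best alternative index:", int(scores.argmax()))
```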
Decision Support Tools
From an environmental perspective, decision-making involves multiple complex steps for various stakeholders with different objectives and priorities. Most people concerned tend to attempt heuristic or intuitive approaches in order to simplify the problem and make it manageable. By following this approach, stakeholders lose important information and may discard contradictory facts and factors of uncertainty and risk. In other words, it is not suitable for making thoughtful choices that can focus on all the important points of the process [21]. Therefore, a proper strategic decision-making tool is helpful in assisting decision-makers to carry out the process strategically and manage the multitude of ideas properly [22,23]. Additionally, during the process of decision-making, practitioners are supposed to take the elements of biodiversity, social innovation, governance, and urban management into consideration within a socio-ecological framework [24,25].
These tools are defined as an approach involving any techniques, models, frameworks (one project's framework can be seen in Figure 3), or methodologies that strategically manage and support the decision-making [26]. Moreover, decision-making tools help to evaluate and monitor the co-benefits systematically [27] and processes for connecting, reflecting and investigating, exploring, and modeling while suggesting proper solutions [28].
One such example of these tools is the Adaptation Planning Support Tool (APST), which is specifically designed to focus on the impacts of climate change. This toolbox has proven to be useful for policy-makers and has been applied practically in many cities [30].
The Mitigation Impact Screening Tool (MIST) is another decision software-based tool developed by the US Environmental Protection Agency (EPA) for an assessment of the impacts of UHI mitigation strategies' (mainly albedo and vegetation) increase on the reduction in urban air temperatures, ozone, and energy consumption for over 200 US cities [31]. The tool is currently unavailable as it was disabled by the EPA due to the update of the methodology and data inputs. Nevertheless, some authors have analyzed how it functioned, as it attempted to provide a practical and customized assessment for UHI reduction.
Furthermore, there are various nature-based solutions and their implementation can offer multiple benefits, for example, Stadtklimalotse, Wiki, REGKLAM, SUPER (Sustainable Urban Planning for Ecosystem Services and Resilience), and many more [32,33].
Table 2 presents a review of tools designed for policy-makers and urban planners to use during the process of decision-making for urban heat, climate change, heat vulnerability, heat health events, etc.
Right place-right tree: For city officials as well as residents who are interested in expanding or maintaining Boston's urban forest. Informs decision-making for planting new trees for UHI mitigation.
• Provides full fact sheets that indicate the tree's potential for heat reduction.
• Provides resources that can be consulted for maintenance of a selected tree, including links for contacting Boston's tree maintenance teams, up-to-date information about pests, and tips for maintenance from the government website.
Language: English. Decision-making online tool for Boston only.
Results and Discussion
Multi-criteria mathematical models [3,[12][13][14][15][16][17][18][19] are a valuable, theoretical, qualitative, and quantitative way of decision making and also a first step towards developing a DST. These models are supported by expert assistance which considers the socio-cultural factors and local environment. They cover the criteria which can be assessed statistically, e.g., cost analysis and political and administrative support.
The AHP is a qualitative approach and depends on the judgments of the people who are involved in the task, but lengthy pairwise comparisons might lead to inconsistency. Multi-criteria outranking is also controversial, and questions were raised about outranking procedures, the incorporation of nonlinearities, and aggregation processes. Similarly, in FDEMATEL, no significant relationships could be found for some criteria. Another issue is that the questionnaire can have a low response rate and be time demanding. Most of the time, this is a trial and error process. Decision-making for urban heat mitigation involves multiple and complex steps that vary across different stakeholders with various adaptation measures and needs. In addition, during the process of decision-making, practitioners should take into consideration the criteria of biodiversity, social innovation, governance, and metropolitan management within a socio-ecological framework. Some North American, European, and Australian DSTs are critically analyzed with respect to all the criteria considered in this review paper. The results are summarized and classified in Table 3. For future development, recommendations of approaches learnt from the surveyed tools are highlighted by a color-coding scale shown in Table 4.
The DSS [START_REF] Cameron | What's 'cool' in the world of green façades? How plant choice influences the cooling properties of green walls[END_REF] was developed in the framework of the European project "Development and application of mitigation and adaptation strategies for counteracting the global phenomenon UHI". This tool is user-friendly and covers many aspects which are needed to support urban planners.
Conclusions
Decision-making is a difficult task that has to go through different phases such as identifying reliable and efficient measures, assessing the challenges to investigate the case studies, and building a systematic framework for decision support. The MCDA approach is a valuable and very important initial step to develop a DST to deal with UHS. Toolkits in the form of handbooks are neither spatial nor interactive. Web-based tools are mostly interactive and can provide an assessment of green, blue and grey interventions on heat impact in real-time and help decision-makers to take actions on the heat vulnerability of the suspected area. In these tools, economic and environmental assessment can be performed quite easily through a graphical interface; however, the results always depend on input data which are often difficult to obtain.
In this review and comparative study, we conclude that despite many existing publications and reported tools, there is still room for improvement, which can be achieved by a holistic approach dealing with subjective and objective aspects of heat stress, combining various inputs from sensors as well as from experts and residents' feedback, and using different techniques such as MCDA, GIS, urban planning and, in the end, artificial intelligence tools to correlate these aspects with each other to develop a reliable DSS for the mitigation of heat stress.
Chapter -2 Sensitivity Analysis of Heat Stress Indices
This chapter is submitted as: Sensitivity analysis of outdoor heat stress indices.
Qureshi, A. M and Rachid, A. (Submitted in journal) 2022
Major Findings
More than 40 heat indices are used across the world to quantify outdoor thermal comfort. Several parameters, including clothing, age, awareness, local environment, food consumption, human activities, and resources, are relevant to the selection of an Outdoor Heat Stress Index (OHSI). This study was conducted to investigate (i) the OHSI that are officially used to quantify heat stress around the world, (ii) the estimation methods of the indices, and (iii) the sensitivity analysis of indices, namely Corrected Effective Temperature (CET), Heat Index (HI), Wet Bulb Globe Temperature (WBGT), Universal Thermal Climate Index (UTCI), Discomfort Index (DI), Summer Simmer Index (SSI), and Predicted Mean Vote (PMV). It has been observed that HI is a sensitive index for estimating heat stress. In conclusion, WBGT, HSI, and CET are recommended indices, which can be directly measured using sensors, rather than indices which are calculated using an estimation technique.
Keywords: Heat stress indices; sensitivity; variation coefficient; thermal comfort; public health
Introduction
Heat stress refers to a discomfort condition that can be caused by long exposure to extreme heat [1]. It usually occurs when the body is unable to maintain a healthy temperature in response to a hot environment [2]. This effect may result in several negative impacts on public health and the quality of life in urban areas [3,4]. Major consequences of heat stress include heavy sweating, muscle cramps, unusual headaches, low productivity rate, etc. [5,6]. In addition, extreme heat itself can take a toll on the body and lungs. Breathing in hot and humid air can exacerbate respiratory conditions like asthma [7].
In different countries, heat stress is quantified with the help of specific indices. The thermal stress index is a quantitative measure that integrates one or more of the physical, thermal, and personal factors affecting heat transfer between the environment and a person [8]. Many heat stress indices have been developed and classified based on thermal comfort assessment, physiological strain, physical factors of the environment, and the "rational" heat balance equation [9]. The thermal comfort of an individual depends mainly on the individual's activities and other factors, including behavioural activities, age group, health condition, and the local climate and environment [10]. The collective factors are presented in Table 2.1.
Individuals in older age groups are generally more vulnerable to heat stress because of a weaker tolerance to extreme heat events, and they demand more comfort than young people [11]. Similarly, the rational heat measures are also dependent on heat stress [12]. The relation of the factors in Table 2.1 helps to evaluate the heat stress vulnerability of residents. To overcome heat stress issues, countries have adopted different techniques for the assessment of thermal comfort, such as the Wet Bulb Globe Temperature (WBGT), the Heat Stress Index (HSI), and the Thermal Work Limit (TWL), which are the most popular indices and have been officially used by different countries' climate, environmental, and national weather agencies [13]. Rising temperatures are more common in urban areas as compared to rural spaces. Most indices can be measured and calibrated directly by using equations, whereas some secondary indices can be estimated by different methods and their models.
Table 2.1: Important Factors Involved in Heat Stress
Environmental
The current study aims to critically analyse the heat stress indices that are most commonly used nationally in different regions. The following tasks were investigated:
❖ Survey for Outdoor Heat Stress Indices (OHSI), officially used around the world.
❖ Estimation methods of heat stress and currently available modified thermal comfort scales.
❖ Sensitivity analysis of CET, HI, SSI, PMV, DI, WBGT, and UTCI.
The remainder of this paper is organized as follows: research methodology is presented in section 2, direct formulas for estimating different heat stress indices and methods for empirical indices are given in section 3, the survey map of OHSI used across the world by official agencies is presented in section 4, and results of sensitivity and thermal comfort scale variations are discussed in section 5. Finally, the conclusion and perspective are given in section 6.
Materials and Methods
The adopted methodology of the current study is described in Figure 2.1. The data were collected through the OHSI survey. Following this, a sensitivity analysis of the OHSI was performed based on updated estimation models. The resultant models were computed for heat sensitivity indices versus different variables to obtain the thermal comfort zones from the generated data using min-max functions. The overall findings, along with the computations, are provided in the results section 5.
Figure 2.1: A Schematic diagram showing the workflow of adopted methodology for sensitivity analysis of heat stress indices
Estimation of Outdoor Thermal Indices
More than 40 indices have been used to quantify heat stress all over the world [14]. They are classified into three groups: "direct indices", "empirical indices", and "rational indices". Rational and empirical indices are very complex, involving meteorological as well as physiological and behavioural factors of an individual, which makes it difficult to estimate the thermal stress for an individual. In contrast, direct indices are simpler and can be monitored in outdoor and indoor environments by using different instruments or sensors such as Kestrel [15], KIMO [16], and a WBGT meter [17]. Calibration methods for the 16 most popular and widely used outdoor heat stress indices are provided with their formulas in [37]; most were calibrated for a healthy individual in good physical condition, e.g., HI was calibrated for an "average" American male [38]. We bring the attention of the reader to the only indices which can be measured using sensors, i.e., CET, HSI, HI, WBGT, and
Thermal Work Limit (TWL). All other indices presented in Table 2.2 are also estimated using their respective formulas or physical models, which depend on assumptions (e.g., age group, activities, clothing, area) but cannot be directly measured.

Outdoor Heat Indices in Different Regions

The survey map of the OHSI officially used across the world is presented in section 4. For the sensitivity analysis, consider a heat index y = f(x_1, x_2, ..., x_n) evaluated at a nominal operating point (denoted by the subscript 0). A small variation of y can then be expressed as

Δy = (∂f/∂x_1)|_0 Δx_1 + (∂f/∂x_2)|_0 Δx_2 + ... + (∂f/∂x_n)|_0 Δx_n    (eq. 2.2)
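To make eq. 2.2 concrete, the short sketch below estimates the first-order sensitivities of an index by central finite differences; the `toy_index` function and its nominal point are hypothetical placeholders, not one of the official index formulas discussed in this chapter.

```python
import numpy as np

def sensitivities(f, x0, rel_step=1e-4):
    """First-order sensitivities df/dx_i at the nominal point x0 (eq. 2.2),
    estimated by central finite differences."""
    x0 = np.asarray(x0, dtype=float)
    grads = np.zeros_like(x0)
    for i in range(x0.size):
        h = rel_step * max(abs(x0[i]), 1.0)
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grads[i] = (f(xp) - f(xm)) / (2.0 * h)
    return grads

# Hypothetical index: a simple placeholder combining air temperature (degC)
# and relative humidity (%) -- not an official index regression.
def toy_index(x):
    t, rh = x
    return t + 0.05 * rh + 0.01 * t * rh

grad = sensitivities(toy_index, x0=[27.0, 30.0])
dy = grad @ np.array([1.0, 5.0])   # effect of +1 degC and +5 % RH via eq. 2.2
print(grad, dy)
```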
PMV:
• Nominal operating point: PMV_0 = 0.379, met_0 = 1.68, mw_0 = 0.1, α_0 = -0.33
• ∂PMV/∂met|_0 = 3.6 × 10^-3
• ∂PMV/∂mw|_0 = -3.6 × 10^-2
• ∂PMV/∂α|_0 = -0.33
• ∂PMV/∂m|_0 = β, where β = -2.1 × 0.303 e^(-2.1 m_0) m_0 + 0.303 e^(-2.1 m_0) + 0.028 - 2.1 × 0.303 e^(-2.1 m_0) ω_0
• ∂PMV/∂ω|_0 = -0.028 - 0.303 e^(-2.1 m_0)
• ∂PMV/∂α|_0 = -1

HI:
• Nominal operating point: HI_0 = 78.74 °F, T_a0 = 81 °F, RH_0 = 30%
This study is relevant to know the trend of the variation and to exhibit the most sensitive parameters in the heat index:
❖ A positive sign means that the index increases with the parameter, whereas a negative sign means that the index increases (resp. decreases) when the parameter decreases (resp. increases).
❖ A high value of sensitivity means a high influence of the corresponding parameter on the given index.
❖ We can see that relative humidity has a negligible impact on DI and UTCI.
Variations of Thermal Comfort Zones versus Heat Indices
In this section, we analyse the influence of variables for widely used heat indices. More precisely, simulations have been performed for the parameter ranges for each thermal comfort zone defined according to the referenced Table 2.4.
WBGT:
• Nominal operating point: WBGT_0 = 23.11 °C, T_g0 = 45 °C, T_w0 = 16.27 °C, T_a0 = 27.22 °C

In conclusion, we recommend the use of sensor-measurable indices such as WBGT, HI, and CET, as they avoid errors that can occur with indices based on models that are only valid under certain assumptions that are difficult to realize in practice.
Annex
Major Findings
In this chapter, an effective and precise Heat Stress Assessment (HSA) is carried out using artificial intelligence (AI). The significant correlation with the five most important meteorological parameters, namely air temperature (T_a), global radiation (GR), relative humidity (RH), surface temperature (T_s), and wind speed (W_s), is evaluated by a system dynamic approach, and the Gated Recurrent Unit (GRU) method is used for the prediction of the mean radiant temperature, the predicted mean vote, and the physiological equivalent temperature. Several machine and deep learning approaches were compared, namely Support Vector Machine (SVM) [1], Decision Tree (DT) [2], Random Forest (RF) [3], Long Short-Term Memory (LSTM), and simple Recurrent Neural Network (RNN) [4]. The colour codes of the resulting interface warn the user about the level of discomfort, which could help users to schedule and manage their outdoor activities.
The GRU model was found suitable, with higher accuracy in predicting heat stress from meteorological data. A recurrent neural network (RNN) with a memory function is used for modeling, which is apparently more suited to this type of task. In this article, we make the following contributions.
• We used Gated Recurrent Unit (GRU) networks (Cho et al.), which largely contribute to mitigating the vanishing gradient problem of RNNs through the gating mechanism and simplify the structure while maintaining the effect of LSTM.
• High precision is achieved by the model and coupled with a friendly user interface that recognizes the individual thermal sensation level corresponding to the results.
• The database makes it possible to analyze the thermal comfort scales chosen by the users.
• The developed model is flexible, which will allow it in the future to be coupled with real data on the cooling effect of urban greenery and to estimate the absolute HS for individuals.

The remainder of this paper is organized as follows: the methodology is discussed in Section 2; Section 3 covers the model framework and experimental results; Section 4 presents the framework of the GUI; finally, the conclusion is given in Section 5.
2. METHODOLOGY
Systemic Dynamic Approach
As mentioned in the previous section, the mitigation of thermal discomfort caused by climate change is today's challenge; it arises in a complex environment where multiple parameters are involved, each with its own behavioral feedback. To tackle this complex issue, a holistic dynamic systems approach is used which connects and merges various influencing variables and addresses the nonlinear and linear interactions between them. This article proposes a system dynamic modeling approach to simulate HS in a complex environment using interdependent factors that are strongly influenced by UHS. The systematic approach of this study is presented in Fig. 1.
Most often, there is uncertainty about the responses and the strengths of interactions in such a model, but it is always instructive to see conditions resulting in behaviors that are plausible and internally consistent. The systems approach allows the user to compare a number of assumptions and alternative strategies, and it keeps the model simple to understand without trivializing the underlying assumptions and interrelated processes.
Data Generation
The RayMan model is beneficial for wave radiation flux densities in complex or simple environments (Matzarakis et al., 2007b). It is used throughout the simulation for data generation. Four variables, GR, RH, T_air, and W_s, were considered as the main input variables for the calculation of PMV, PET, and T_mrt. The calculation strategy consisted of varying one variable for each simulation, and it was repeated for every point to observe the effect of each input on the outputs.
Data Selection
The output data file is received with T_mrt, PMV, PET, thermal radiation (TR), and T_s corresponding to the main input variables. The data were then further analyzed using the correlation coefficient against each output variable to evaluate the significantly influential input variables affecting the behavior of the system. Fig. 2 shows the correlation of the most influential input variables affecting the output variables (i.e., PMV, PET, T_mrt). We included 5 important variables as inputs (X) with 3 outputs (Y), while the remaining variables, found to have low dependence, were neglected.
MODELING FRAMEWORK
The final and stabilized model was produced in several stages. The framework is shown in Fig. 3, and each procedure is described in the subsections below.
Data Ranging
Data ranging helps to determine the number of different classes present in the data and gives the basic idea of the certainty of output below the range of input limits. It also helps to understand the distribution and spreading range of the data by looking at the mean value, the standard deviation and percentile distribution for numeric values. Graphical presentation of the data ranging is shown in Fig. 4.
Data Preparation
This step is necessary to understand the dataset to avoid estimation problems. The correlated attributes of the data are discussed in the section above. This is the fastest way to see if the features correspond to the output.
Probability Density Function
To visualize the likelihood of an outcome in a given range, we estimated a Probability Density Function (PDF) from the available data. First, we observed the density of a random variable x with a simple histogram and identified the probability distribution p(x). The PDFs are shown in Fig. 5 for (a) PET, (b) PMV, and (c) T_mrt. For a discrete random variable, the probabilities P(X = x) are considered for all the possible values of x. The area below the curve indicates the interval in which the variable will fall, and the total area of the interval is equal to the probability of occurrence of a single random variable.
Resampling
Resampling the dataset is necessary to generate confidence intervals; it helps to quantify the uncertainty, to gather data, and to make the best use of it in the predictive problem. After visualizing the dataset using the PDF, the hourly data were resampled over 30 minutes, but it was observed that the resampled dataset had a similar structure.
Data scaling
Scaling transforms the data so that data points can have specific useful properties. The difference is that in scaling, the data range is changed, while in normalizing, the shape of the data distribution is changed. Min-max scaling and normalization is the simplest method to rescale the range of features to [0, 1]. It is found by using eq. (1):

x' = (x - min(x)) / (max(x) - min(x))    (1)
The scikit-learn library helps in making random partitions of the data into training and test subsets. The X and Y arrays are divided into four further arrays: 70% of the input and output datasets ((73616, 5), (73616, 3)) are used to train the deep learning model, and 30% ((31551, 5), (31551, 3)) are kept for the testing phase to fully guarantee the randomness of each use of the dataset.
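A minimal sketch of the scaling and partitioning steps described above is given below; the randomly generated arrays only reproduce the shapes reported in the text (5 inputs, 3 outputs) and are not the simulated dataset itself.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Illustrative data with the same shapes as in the text: 5 inputs, 3 outputs.
X = np.random.rand(105167, 5)   # GR, Ts, Tair, RH, Ws
Y = np.random.rand(105167, 3)   # Tmrt, PMV, PET

# Min-max scaling to [0, 1], eq. (1)
x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X_scaled = x_scaler.fit_transform(X)
Y_scaled = y_scaler.fit_transform(Y)

# 70 % training / 30 % test random partition
X_train, X_test, Y_train, Y_test = train_test_split(
    X_scaled, Y_scaled, test_size=0.30, random_state=42)
print(X_train.shape, X_test.shape)
```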
The model architecture
To solve the vanishing gradient problem of a standard RNN, the GRU model uses an update gate and a reset gate. These are two vectors which decide what information should be passed on and can be trained to retain information from the past without washing it out over time. They remove information which is irrelevant for the prediction. The model is based on the governing eqs. (2)-(5). The structure of the GRU is shown in Fig. 6.
The update gate z for the time step t is computed using eq. (2):

z_t = σ(w_z x_t + u_z h_{t-1} + b_z)    (2)
The vector x_t is multiplied by the matrix w_z and added to the product of u_z and h_{t-1} (note that h_{t-1} = 0 initially); the result is then added to the bias b_z, after which the sigmoid function is applied.
The reset gate helps to decide how much of the past information should be forgotten, using eq. (3):

r_t = σ(w_r x_t + u_r h_{t-1} + b_r)    (3)
This has the same form as the update gate; the difference comes from the weights and the usage of the gates, so r_t and z_t are two different vectors. We now see how exactly the gates affect the final output. Starting with the reset gate, a new memory content is introduced using tanh, which creates a new memory vector and stores the relevant information from the past using eq. (4):

h'_t = tanh(w_h x_t + u_h (r_t ⊙ h_{t-1}) + b_h)    (4)
In the last step, the network calculates the current hidden state output vector h_t, which holds information for the current unit and passes it down the network. For this, the update gate is required: it determines what to collect from the current memory content h'_t and what from the previous step h_{t-1}. The final output is calculated using eq. (5):

h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h'_t    (5)
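A minimal NumPy sketch of one GRU time step following eqs. (2)-(5) is shown below; the hidden size, weight shapes, and random initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, w_z, u_z, b_z, w_r, u_r, b_r, w_h, u_h, b_h):
    """One GRU time step following eqs. (2)-(5)."""
    z_t = sigmoid(w_z @ x_t + u_z @ h_prev + b_z)              # update gate, eq. (2)
    r_t = sigmoid(w_r @ x_t + u_r @ h_prev + b_r)              # reset gate, eq. (3)
    h_cand = np.tanh(w_h @ x_t + u_h @ (r_t * h_prev) + b_h)   # candidate memory, eq. (4)
    h_t = (1.0 - z_t) * h_prev + z_t * h_cand                  # new hidden state, eq. (5)
    return h_t

# Illustrative sizes: 5 meteorological inputs, hidden state of 8 units.
n_in, n_h = 5, 8
rng = np.random.default_rng(0)
params = [rng.standard_normal(s) * 0.1 for s in
          [(n_h, n_in), (n_h, n_h), (n_h,),
           (n_h, n_in), (n_h, n_h), (n_h,),
           (n_h, n_in), (n_h, n_h), (n_h,)]]

h = np.zeros(n_h)          # h_{t-1} = 0 at the first step
x = rng.random(n_in)       # one input vector [GR, Ts, Tair, RH, Ws]
h = gru_step(x, h, *params)
print(h)
```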
Model convergence
The Mean Square Error (MSE) loss function and the efficient Adam variant of stochastic gradient descent are used to measure the accuracy and to optimize the deep learning model. Adam is a stochastic gradient descent method based on adaptive estimation of first- and second-order moments; it is among the best adaptive optimizers, with a well-behaved adaptive learning rate. MSE is calculated by eq. (6):

MSE = (1/n) Σ_i (Y_i - Ŷ_i)²    (6)

where n is the total number of samples in the dataset, Y_i is the real observed value, and Ŷ_i is the estimated value. The GRU network is implemented using the TensorFlow deep learning package. Hereby, we provide a detailed description of our GRU-based model as follows:
• The first visible input layer consists of 128 GRU cells, and the dropout rate is 0.5 with the relu activation function.
• The second hidden dense layer consists of 256 neurons with 3 outputs, where the dropout rate is 0.25 and a linear activation function is set for the last layer.
• Finally, the Adam optimization algorithm is used for network training. We set the patience of the training process over 40 epochs with a batch size of 256, as can be seen in Fig. 7.
• 70% of the data was used for learning, and 30% of the dataset was employed as test data, which is later used for the validation. An accuracy score of 99.36% is obtained with MSE = 0.0002.
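The following Keras sketch assembles the architecture listed above (128 GRU cells with dropout 0.5 and ReLU activation, a 256-neuron dense layer with dropout 0.25, a 3-output linear layer, Adam optimizer, MSE loss, batch size 256, patience of 40); the reshaping of the inputs to a single time step and the early-stopping wiring are assumptions, not details taken from the original implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

# Inputs are assumed reshaped to (samples, timesteps=1, features=5) for the GRU layer.
model = models.Sequential([
    layers.GRU(128, activation="relu", input_shape=(1, 5)),
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.25),
    layers.Dense(3, activation="linear"),   # Tmrt, PMV, PET
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=40,
                                     restore_best_weights=True)
# With X_train / Y_train prepared as in the previous sketch:
# history = model.fit(X_train.reshape(-1, 1, 5), Y_train,
#                     validation_split=0.1, epochs=200,
#                     batch_size=256, callbacks=[early_stop])
```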
Model implementation
Based on the systems approach of this model, GR, RH, T_air, W_s, and T_s form the input vector x_t, which is represented as follows: [x_t1: GR; x_t2: T_s; x_t3: T_air; x_t4: RH; x_t5: W_s]. Each row of the input matrix x_t is taken into the GRU unit, and the obtained output vectors (h_t) are represented as [h_t1: T_mrt; h_t2: PMV; h_t3: PET], respectively. Eqs. (7)-(9), derived from the governing eqs. (2)-(5) of the GRU model, are used for the predictions.
(z_t; r_t) = σ( [w_z u_z; w_r u_r] × (x_t; h_{t-1}) + (b_z; b_r) )    (7)

h'_t = tanh( [w_h u_h] × (x_t; r_t ⊙ h_{t-1}) + b_h )    (8a)

h'_t = ((e^{2x} - 1)/(e^{2x} + 1)) · [w_h u_h] × (x_t; r_t ⊙ h_{t-1}) + b_h    (8b)

h_t = z_t ⊙ h'_t + (1 - z_t) ⊙ h_{t-1}    (9)
h_t is initialized from 0, so h_{t-1} = 0; at t = 1, h_t changes, and the weight matrices (w, u) of the input and output data of the previous cells are updated using eq. (10), where α is the learning rate.
Validation
The performance of the developed deep learning models was validated using the test data. This process was independent of the learning process of the algorithm. The validation process was repeated twice. First, the input variables of 100 test samples were taken from the test dataset; the resulting output (PET) of the model is plotted against the actual output of the dataset, as can be seen in Fig. 8. To check the reliability of the model, another dataset with realistic summer measurements was used, and 300 examples were plotted (see Fig. 9). We notice that the model responds efficiently, and reliable results are obtained when compared to the reference software.
WEB BASED SIMULATION TOOL
After validation of the model, the open-source Python web framework "Django" is used to develop the graphical user interface (GUI) for the evaluation of HS. This interface provides a platform for users to choose their thermal comfort scale based on their current feeling and, considering the obtained assessment results, to plan outdoor activities. The interface also provides a platform where users can compare their current comfort level with the given index results, which can help them compare their results and choose the index that matches the situation. For example, it is shown in Fig. 10d that the PET and PMV results indicate different comfort zones, which may need the user's assessment. The functions and use of the GUI are as follows:
• Home page (Fig. 10a): It gives general information on thermal stress.
• Parameters (Fig. 10b): Here users can either set the thermal comfort scale for PET according to their thermal perception or choose the standard default parameters and save them. Users can do the same for PMV and T_mrt.
• Predictions (Fig. 10c): The input variables are necessary for the prediction.
• History (Fig. 10d): It records entries and results calculated by random users.
Introduction
Climate change has greatly impacted the global mean temperatures and has resulted in strong heat waves during the last couple of decades. It has been responsible for heatrelated morbidities and mortalities globally, including heat waves in the Balkans (2007), the Midwestern United States (1980), France (2003), and Russia (2010) [1,2]. In August 2003, France was hit by a strong heat wave named Lucifer, with catastrophic health consequences. Heat events, as well as socioeconomic vulnerability, led to more than 14,800 mortalities in France due to dehydration, hyperthermia, and heat stroke [3]. Heat waves with urban heat islands can increase the death ratio, particularly for vulnerable people such as outdoor workers and elders who are socially isolated and/or with pre-existing disease [4,5]. Other influencing factors include urbanization, poverty, literacy rate, and possibly air pollution [6,7]. According to the World Health Organization (WHO), a 2 • C increase in the apparent temperature (AT) is a limiting warning that can prevent rising heat mortalities, but later studies proved that heat events are inevitable even if the global heat stress warning is restricted to 2 • C AT [8]. Mortality risk in France can be increased by 1-1.9 log for every 1 • C AT. Long heat waves (more than 5 days) have an impact of 1.5-5 times greater than shorter events [9]. Urbanization promotes anthropogenic activities which lead to heat events. Adaptive strategies are necessary to protect the residents from heat-related events and health risks in the coming years.
Amiens is a medium-sized city in northern France, crossed by the Somme River. The hot season lasts 3 months, from June to August, where maximum air temperatures can rise significantly [11]. This threat of extreme heat events is likely to increase due to the combined effects of global warming and rapid urbanization in the future. Although the data related to the strong 2003 heat wave and associated adverse health outcomes have been evaluated previously [12], the Heat Vulnerability Index (HVI) was recently investigated for big cities, e.g., Camden, Philadelphia [13], London [14], and Sydney [15], where the influence of air quality was not considered except for the study presented by Sabrin for Camden [16]. To our best knowledge, there are no studies addressing the impact of heat waves on a medium-sized city using the HVI approach, where the population is less than one million. Within this context, the current study aimed to identify the heat-vulnerable communities and areas in Amiens where heat stress mitigation strategies are required. The main data types which we used for this study to develop the HVI model were (a) socioeconomic and demographic data, (b) air pollution, (c) land use and land cover, (d) elderly heat illness, (e) social vulnerability, and (f) satellite data (land surface temperature and mean elevation). Heat maps of high spatial and temporal resolution are generated from satellite data, and HVI maps are derived using principal component analysis (PCA) to help urban planners and public health professionals to identify places at high risk of extreme heat and air pollution. This case study aims to bring attention to the fact that medium-size cities are also vulnerable to heat, requiring some proactive measures against future extreme heat events. Our suggested index can be a useful tool in decision making for dealing with extreme events and can guide city planners and municipalities.
The paper is organized as follows: the methodology is presented in Section 2 with data analysis and the developed working model. It also provides the information of the used technique and the influence of components. The obtained results and HVI map with valuable information are given in Sections 3 and 4, respectively. The paper ends with a conclusion in Section 5, with some perspectives and recommendations in Section 6.
Materials and Methods
Several parameters have been studied that have a possible correlation with extreme heat events and air pollution in urban settings, which were identified and discussed in the previous literature to develop our conceptual model [14,17]. The methodology for the case study was developed as a working model for HVI mapping, as shown in Figure 1. In this study, risk factors such as social vulnerability (factors taken from the literature) and the environment (identified after extreme event analysis of the studied area) are discussed in this section. Age, pre-existing medical conditions, and social deprivation are among the various key factors that make people likely to experience more adverse health outcomes related to extreme temperatures. References were used for the population density, poverty rate, illiteracy rate, vulnerable age group, illness rate (asthma, cardiovascular disease, and respiratory disease other than asthma), and isolated elderly (living alone in the summer), as presented in Table 1.
Mapping for the socially vulnerable population was performed using a dataset from world pop [18], which provides a population at a map scale of 100 m. Hexagon grids of 5 ha were generated in the area of Amiens city. Zonal statistics was used to extract the population at the grid level to maintain homogeneity of analysis throughout the study. In addition, a multi-frame population map of a specific group was also created, as presented in Figure 2.
Source: Article "The city of Amiens watches over our seniors" [25]. Note: The statistical data from the referenced sources give a rough estimation of social vulnerability.
Extreme Event Analysis
To identify the extreme events recorded, the hourly data for the summer (July and August) were collected from Météo France [10] and Atmo France [26]. Data were analyzed by dividing them into categories to estimate risk alerts. The Météo France weather station is located in Amiens Glisy, 14 km from the city center. Three air pollution stations are located in different areas (details for data recording are given in Table A1). The geographical locations of air pollution and weather stations can be seen in Figure 3. Weather data were analyzed to assess the levels and duration of heat episodes. The assumption scale was made by categorizing air temperature ranges into risk warnings: slightly warm (26-30 °C), warm (31-36 °C), and very hot (37-41 °C). This approach made it possible to analyze the huge hourly data during the summer seasons of 2018-2020. The number of hours of heat stress with their levels is presented in Figure 4. The air quality data were obtained over the past 10 years from local monitoring stations referred to as AM1, AM2, and AM3, where certain non-regularization of monitoring was noticed, particularly in the AM3 station (details are provided in Table A1). The temperature (ambient and surface) and air quality data were analyzed for a correlation study from 2018 to 2020, and it was observed that, due to the irregularity, the air quality data of 2018 were not sufficient. However, data from 2019 and 2020 were adequate for this study. The correlation coefficients are plotted in Figure 5, showing that anthropogenic activities also increased the frequency and intensity of extreme heat. Moreover, a significant relationship was observed between heat events and ground-level ozone, representing the motivation of this research.
Environmental Risk Factors

After analyzing the collected data, it was found that low air quality and an increase in temperature are risk factors that depend on urban geometry, the proportion of urban greenery, and materials. In the current study, three main environmental risk factors are considered and mapped for the identification of heat-vulnerable areas. Further details are provided in the subsections below.

a. LST mapping of extreme heat days

Data for environmental risk factors such as land surface temperature (LST), normalized difference vegetation index (NDVI), and normalized difference water index (NDWI) were first collected in the area of study to create vulnerability maps. The city suffers from a lack of canopy-scale temperature readings, air quality data, and consistent weather stations, which limited us to studying the spatial patterns of temperature and pollution in the city at high resolution. We only had data from one meteorological station, which was insufficient to achieve realistic and reliable data for spatial distribution. The city mainly relies on weather stations located outside the city for weather forecasts. Thus, temperature data were collected at fine spatial scales via the Landsat 8 earth observation satellite [27], which integrates the role of the built environment. Satellite data can be used to derive the surface temperature using high-spatial-resolution imagery and remote sensing techniques to study the effect of heat over a large area. Therefore, we used Landsat 8 multispectral satellite images to obtain high-resolution LST, NDVI, and NDWI data using Equations (1)-(4). The mean LST values at the pixel level show that Amiens experienced high LST on 27 July 2018, 25 July 2019, and 31 July 2020. Landsat 8 images were used to derive the LST raster layers. The maps appear to show regions with higher temperatures between 20 °C and 41.5 °C. The summer 2020 (July and August) mean LST map was also derived because the last year is considered highly relatable for expected coming heat events. The derived maps are presented in Figure 6.
LST = T_sensor / (1 + (λ × (T_sensor / β)) × ln(ε))    (1)
where LST is the land surface temperature, T_sensor is the band 10 brightness temperature in K, later converted into °C [28], λ is the wavelength of the emitted radiance in meters, β = 1.438 × 10⁻² m K, and ε is the surface emissivity [29].
NDVI = (NIR - RED) / (NIR + RED)    (2)

NDWI = (G - NIR) / (G + NIR)    (3)
where NIR is the near-infrared waveband (band 5 for Landsat 8), RED channels of remotely sensed images are the reflectance of the visible red waveband (band 4 for Landsat 8), and G represents the green channels.
For ε, it is necessary to correct the spectral emissivity using the NDVI value.
ε = 1 + 0.047 ln(NDVI), 0 ≤ NDVI < 0.15 (4)
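A minimal NumPy sketch of Equations (1)-(4) is given below, assuming the band 10 brightness temperature (in kelvin) and the reflectance bands are already available as arrays; the pixel values and the band 10 central wavelength used here are illustrative assumptions, and eq. (4) is applied to all pixels for simplicity even though the paper restricts it to 0 ≤ NDVI < 0.15.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)            # eq. (2)

def ndwi(green, nir):
    return (green - nir) / (green + nir)        # eq. (3)

def lst_celsius(t_sensor_k, ndvi_arr, wavelength=10.895e-6, beta=1.438e-2):
    """Land surface temperature from brightness temperature, eqs. (1) and (4)."""
    eps = 1.0 + 0.047 * np.log(np.clip(ndvi_arr, 1e-6, 0.15))                    # eq. (4)
    lst_k = t_sensor_k / (1.0 + (wavelength * t_sensor_k / beta) * np.log(eps))  # eq. (1)
    return lst_k - 273.15

# Illustrative 2 x 2 pixel arrays (reflectances and kelvin values are made up).
red   = np.array([[0.12, 0.10], [0.20, 0.18]])
nir   = np.array([[0.30, 0.28], [0.22, 0.25]])
green = np.array([[0.10, 0.09], [0.15, 0.14]])
t10_k = np.array([[300.2, 301.5], [305.0, 303.8]])

v = ndvi(nir, red)
print(ndwi(green, nir))
print(lst_celsius(t10_k, v))
```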
In previous studies [30], mean elevation was taken as an indicator of PCA. In this study, altitude was considered an important factor in temperature distribution. The elevation was taken from the Shuttle Radar Topography Mission (SRTM) 30 m pixel raster data. This elevation of the terrain was used to visualize and analyze the flat or mountainous distribution areas.
b. Land use and land cover (LULC)
It can be observed that a large scattered hot area existed in the center of the city. The high LST was mainly distributed in the built-up areas of Amiens. These areas were combined with a land use/land cover map (Figure 7), and it was recognized that the high LST was mostly distributed in the following areas: (i) densely populated areas, (ii) areas with low vegetation coverage, (iii) areas with artificial surfaces, and (iv) industrial zones.
However, low-LST areas were mainly located in natural landscapes, such as rivers and grasslands. The LULC ratio of Amiens is shown in Table 2.
c. Air quality
The Air Quality Index (AQI) for 2019 and 2020 for the summer season (July-August) was estimated using the AQI calculator [31]. The inverse distance weighted (IDW) interpolation method was used to create the AQI surface to develop the maps in Figure 8. Additional layers of each raw pollution variable were also created, which were later used in the PCA for HVI calculations.
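A minimal sketch of the IDW interpolation used to build the AQI surface is shown below; the station coordinates and AQI values are placeholders, not the measurements of AM1-AM3.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighted interpolation of station values onto query points."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-9):                 # query point on top of a station
            out[i] = values[np.argmin(d)]
        else:
            w = 1.0 / d**power
            out[i] = np.sum(w * values) / np.sum(w)
    return out

# Placeholder station coordinates (x, y in metres) and mean summer AQI values.
stations = [(0.0, 0.0), (1500.0, 800.0), (-900.0, 1200.0)]   # AM1, AM2, AM3
aqi      = [62.0, 71.0, 55.0]
grid     = [(250.0, 300.0), (-400.0, 900.0)]
print(idw(stations, aqi, grid))
```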
Principal Component Analysis (PCA)
PCA is typically used in heat vulnerability studies to reduce the number of indicators. We applied this method in Stata v.16, which is an integrated statistical software package used for data analysis, management, and graphing. Stata's PCA was used to estimate the parameters of principal component models, where increasing variables and higher component scores indicated higher HVI. The 32 vulnerability indicators were grouped into five independent components. The variables in the components were allocated via the PCA algorithm [32].
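The analysis itself was run in Stata; for readers without Stata, the sketch below reproduces the same steps with scikit-learn, assuming the 32 standardized indicators are available per grid cell (the random data frame is only a stand-in for the real indicator table).

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Illustrative stand-in for the 32 vulnerability indicators per hexagon grid cell.
rng = np.random.default_rng(1)
indicators = pd.DataFrame(rng.random((500, 32)),
                          columns=[f"ind_{i}" for i in range(32)])

z = StandardScaler().fit_transform(indicators)   # PCA on standardized indicators
pca = PCA(n_components=5)
scores = pca.fit_transform(z)                    # component scores per grid cell

loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(pca.explained_variance_ratio_.cumsum())    # cumulative contribution
print(pd.DataFrame(loadings, index=indicators.columns).round(2).head())
```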
Results
Data Analysis
A linearity between extreme heat events and ground-level ozone concentrations was observed according to the recorded data at AM1 = 157 µg/m 3 , AM2 = 145 µg/m 3 on 25 July 2019 and at AM1 = 154.9 µg/m 3 , AM2 = 164.5 µg/m 3 on 31 July 2020 at 3:00 p.m.
After a detailed analysis, we observed that air temperature and ozone data were correlated with significant coefficient (0.8) at the abovementioned stations during the extreme heat days in 2019 and 2020. Due to missing air quality data, heat days in 2018 could not be compared with poor-air-quality events.
Factor Scores
The factor scores were calculated, and it can be observed that the cumulative contribution of the components was 89.20%, which shows that the proportion of variance of the raw vulnerability indicators captured by PCs were explained by five independent components; for each variable, the sum of its squared loading across all PCs was equal to 1. Mathematically, the loadings were equal to the coordinates of the variables divided by the square root of the eigenvalues associated with the component. The first component explained 44.97% of the total variance, followed by 25.23%, 11.48%, 4.18%, and 3.34% for the second, third, fourth, and fifth components, respectively, as shown in Table 3. The first component included 22 variables (total population, no. of habitants aged ≥65 years, approximate no. of old habitants having asthma, cardiovascular diseases, and other respiratory diseases, no. of socially vulnerable people in the summertime, artificial surfaces in the city, area covered by vegetation, average AQI (N 2 , NO, O 3 , PM 10 ), illiteracy and poverty rates, and LST of extremely hot days recorded in 2019-2020). The second component was characterized by two variables (mean AQI calculated in the summers of 2019 and 2020). Component 3 was characterized by six variables (NDWI, NDVI, area covered by water bodies, LST of the hottest day in 2018, and summer mean LST of 2020). Components 4 and 5 were represented by the mean elevation and natural surfaces of Amiens, respectively. The merged vegetation was different from the NDVI, as well as the average AQI of each type of air pollutant. After aggregating components into the final HVI through different weight factors, the spatial distribution of HVI was obtained as shown in the map in Figure 9. The distribution of data for each point is provided in Table A2, where factor loadings of variables greater than ±0.6 played an important role in the allocation of variables into defined components via PCA.
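A minimal sketch of the aggregation step is given below, reusing the `pca` and `scores` objects from the previous sketch and weighting the five components by their explained-variance shares; this weighting is one plausible choice and not necessarily the exact weight factors used for the published map.

```python
import numpy as np

# 'scores' and 'pca' come from the PCA sketch above (one row per grid cell).
weights = pca.explained_variance_ratio_ / pca.explained_variance_ratio_.sum()
hvi = scores @ weights                       # weighted sum of component scores

# Rescale to a 0-10 index so high values flag the most vulnerable cells.
hvi_index = 10.0 * (hvi - hvi.min()) / (hvi.max() - hvi.min())
print(hvi_index[:5])
```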
Spatial Derivation Distribution of Heat Vulnerability Index (HVI)
HVI was derived from the sum of all components, and the resultant index map can be seen in Figure 10, where the accumulation of high scores shows that the city center is more vulnerable than rural areas. There may be several possible explanations for our result. The high number of elders and those living alone are concentrated in the city center. Meanwhile, there is a lack of awareness about extreme events and a high poverty rate in suburbs compared to the central area. Moreover, when asphalt is exposed to the sun, pavements start to soften, which can lead to delays and some roads being closed for traffic. This makes the city center more vulnerable to heat stress and poor-air-quality events. The agricultural land also has a high tendency to capture heat due to bare soil and harvesting grains, which causes an increase in HVI during extreme heat days of summer.
Discussion
It has been observed that urban vulnerability is linked to various key factors, such as temperature, population, age, gender, literacy and poverty rate, and health-associated problems. It is estimated that, from 1999-2018, the global heat mortality rate increased by 53.7%, resulting in 296,000 deaths in 2018 [33]. By considering the local characters, six important factors were selected to construct the HVI using PCA. These key performance indicators were applied to identify susceptible regions vulnerable to heat waves, as well as population sensitivity and adaptation. This tool can assist in the planning of infrastructure and resources to reduce residents' vulnerability to extreme heat events, especially for the elderly population, since they are more vulnerable and at higher risk of heat-related deaths. The study also highlighted the influence of air pollution on heat events. However, the following limitations and challenges were faced during the development of HVI:
• Irregular and limited monitoring stations of weather and air quality;
• Lack of data from heating and cooling facilities.
The current case study provides a detailed methodology related to the impact of heat stress in the Amiens region, and this approach can be applied to the other regions for understanding the impact of heat waves, serving as a valuable tool for the development of HVI. In this study, our key emphasis was on investigating the adverse effects of strong heat waves on medium-sized cities. In Amiens, a medium-sized French city on which the 2003 heat wave had drastic impacts, it was reported that the annual mean temperature of this city has increased by +1 • C since 2000. A PCA-based novel approach was applied to study the fine-scale vulnerability mapping using various data types, and hotspot zones in the Amiens regions were identified using a comprehensive GIS mapping approach. The analysis identified the elevated HVI in three typical zones, i.e., population-dense and low-vegetation areas, as well as built-up and industrial zones. This was further linked with low vegetation cover, which is greatly responsible for the increasing temperature [34]. Moreover, it is also an established fact that industrialization is a major contributor to global warming [35]. By evaluating multiple covariates influencing the HVI, we are convinced that our current approach may be applicable to other regions of the world, including larger cities, to evaluate the heat-related vulnerabilities and help the authorities to take mitigation measures. Urban greenery and water bodies can be taken as existing cooling strategies; however, for better precision, district cooling consumption data should be considered in future.
Conclusions
This work aimed to determine medium-sized city areas with higher heat vulnerability, which are more likely to experience high rates of morbidity and mortality on abnormally warm days. The parameters that influence current heat vulnerability were selected after data analysis and from the scientific literature. A strong relationship was noticed between heat and low air quality. This is a clear illustration of the system theory where anthropogenic activities appear in accordance with the extreme heat events in the city. The PCA technique was very helpful to derive the spatial HVI of the Amiens region. After analyzing the resulting maps, it was observed that the elevated HVI exists particularly in high-density built-up and industrial zones that release thermal energy and ozone at the ground level. A low HVI was located in natural landscapes such as rivers and grasslands. The developed methodology and maps can serve as a powerful tool for an assessment of the effect of extreme heat on vulnerable populations and for communication. It reveals the complex spatial and temporal patterns that would be difficult to interpret through text alone, allowing residents and local stakeholders to visualize known areas of high HVI. It can also be influential in decisions to target resources for vulnerable populations to develop adaptation responses that promote resilience.
Recommendations
Data fusion techniques are recommended to collect data from multiple sources for analysis and development of HVI, thus increasing reliability and decreasing redundancy to support the decision-making process. This research sheds light on the following solutions that can help citizens to combat heat episodes:
• Information provision to local people about heat warnings and precautions, with more attention to vulnerable people;
• Implementation of proactive adaptive practices such as shades, blue infrastructure, and greenery where the HVI score is above 6;
• Regular monitoring during the summer season in the city.
Part -B Field Monitoring for Measuring the Cooling Effect
Location
Measurements were taken on August 4 th and September 7 th -8 th , 2021. The weather station named
Kestrel 5400 [1], [2], shown in Figure 4.1, was used to monitor the cooling impact of existing measures at three public places. In addition, a measurement plan was prepared after selecting three pilot sites on foot in the city centre of Amiens, France. These sites are public open spaces where pedestrians and tourists often visit and use public benches. In the peak hours of a hot day, the shade of the trees and the surrounding measures, such as water fountains and vegetation, contribute to making the atmosphere pleasant by reducing the air temperature. Measurements were made to monitor the cooling impact of existing tree species at the chosen pilot sites. Furthermore, two 15-minute rounds of monitoring were carried out for all species on each day, and a sample was recorded every 10 seconds [3]. The measurement days were those when the sky was clear between peak hours (12:00 p.m. to 4:00 p.m.) and the position of the sun was southeast to southwest. Thermal images were captured prior to measurements with the FLIR E6 thermal camera.
Results and Discussion
The average cooling effect of these species was calculated by averaging the values from the two monitoring cycles and by using the equation below:
Q = A_R - A_I    eq. (4.1)
where Q is the cooling effect, A refers to a variable such as RH or T_a, the subscript R denotes measurements taken at the reference point, and the subscript I denotes the intervention point. The estimated average cooling effect for every 30 min of each tree species is presented in Table 4.1. The recorded data for each site are plotted in the figures for (A) Tilia Cordata Mill Malvacae, (B) Tilia Platyphyllous, and (C) Golden Rain Tree. The estimated results showed that Tilia Cordata Mill Malvacae was the most effective species, improving thermal comfort by shading during hot days. In addition, the effectiveness obtained from Tilia Platyphyllous cannot be neglected; it was the cooling impact of a single tree with a span of 2-3 m. It increased humidity (RH), which reduced the WBGT; however, the air temperature (T_a) decreased only on an insignificant scale. The Golden Rain Tree, combined with the effectiveness of the water fountain, was also effective due to its large cooling extent. Increasing humidity and decreasing T_a help to reduce thermal discomfort in a shopping area and increase the attraction for visitors.
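A minimal pandas sketch of eq. (4.1) is shown below, averaging over the monitoring rounds; the readings are placeholders, not the measured values of Table 4.1.

```python
import pandas as pd

# Placeholder readings (degC for Ta, % for RH) at the reference and intervention points.
data = pd.DataFrame({
    "round": [1, 1, 2, 2],
    "Ta_ref": [31.2, 31.5, 32.0, 31.8],  "Ta_int": [29.0, 29.4, 29.9, 29.6],
    "RH_ref": [38.0, 37.5, 36.8, 37.0],  "RH_int": [42.5, 42.0, 41.2, 41.6],
})

# Cooling effect Q = A_R - A_I (eq. 4.1), averaged over the two monitoring rounds.
q_ta = (data["Ta_ref"] - data["Ta_int"]).mean()
q_rh = (data["RH_ref"] - data["RH_int"]).mean()   # negative value = humidity gain
print(f"Mean Ta reduction: {q_ta:.2f} degC, mean RH change: {q_rh:.2f} %")
```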
I. INTRODUCTION
Due to the global warming phenomenon, it is notable that the atmospheric temperature has been increasing over the years, and several problems related to this increase must be addressed. A prolonged duration of extremely hot weather is called a heat wave. Heat waves are one of the most dangerous problems related to global warming [1]. The frequency of heat waves has been increasing over recent years, and they are spatially distributed around the world.
During heat waves, the human body cannot adapt itself to excessive exposure beyond its temperature boundaries and loses the capacity to cool down. This leads to dehydration, hyperthermia, heatstroke, and heat mortality, which might cause high numbers of heat-related sicknesses and deaths. In 1980, 10,000 individuals died in the United States because of a heat wave [2]. In July of 1995, a heat wave killed more than 700 individuals in Chicago [3], and in August of 2003, more than 70,000 deaths were recorded in Europe [4]. Extreme heat has been observed to be the deadliest climate-related risk in certain areas [5]. Its widespread impacts on human health fall especially on vulnerable people, for example citizens with respiratory, cardiovascular, diabetic, mental, or other pre-existing health problems. People who are illiterate, older, on a low income, or socially disconnected are likewise at more serious hazard.
By 2050, the share of the worldwide population living in urban areas is expected to exceed 68%, which implies that around 7 billion individuals will live in urban zones [6]. Urban communities are the prevalent home places for people and are more vulnerable to extreme weather conditions. The impact of heat stress could be prevented if proper measures are taken. A few urban areas that have implemented such measures have experienced decreases in the morbidity and mortality of heat-related illnesses [7]. A review of crisis response plans found that half of the examined urban communities had specific plans for extreme heat events [8]. However, it is hard for decision makers to choose interventions for thermal comfort. Therefore, it is important to decide which plans and preventive measures to implement in order to reduce the harmful risks caused by heat stress. Modified and advanced urban planning strategies, green roofs and green facade walls, for example, can reduce the heat stress level and increase outdoor thermal comfort in certain areas [5].
In previous studies, researchers have focused on several heat mitigation strategies. However, it is important to identify the measure that best improves thermal comfort, especially in the outdoor environment. This selection is quite difficult, especially for decision makers, due to the lack of sufficient data and the absence of an identified principle of judgment. The main objective of this study is to suggest a benchmarking hierarchy that allows decision makers to consider different aspects, helps to identify the required data gaps of interventions, and supports the development of a decision support system based on a principle of judgment. This paper is organized as follows: section 2 contains a general review of heat preventive measures. The selection process and proposed criteria are described in section 3, while section 4 presents the AHP tool and methodology. Results are discussed in section 5 and, finally, section 6 concludes this work and gives some perspectives.
II. MEASURE OF BEST PRACTICE
There are several preventive measures that help to decrease heat stress in urban areas. In this study, 5 best practice measures are considered for the application of the methodology; they are summarized below in Fig. 1.
III. SELECTION PROCESS
An intervention selection procedure is an important tool that helps in deciding on a suitable measure at a desired location. It empowers specialists to assess whether the ideas and findings are significant and reasonable, and it allows decision makers to develop their criteria framework for the alternatives to be applied on pilot sites. For this work, we defined 5 general criteria that evaluate the outcomes in terms of services and improvement in heat stress reduction. The criteria are defined below.
a. Cost effectiveness: capital and operational cost of the intervention, including future advancements.
b. Efficiency: thermal comfort improvement in the environment and intervention effectiveness within the time frame.
c. Durability: toughness of the intervention, i.e., its capability to resist heat and remain useful without demanding extra maintenance while coping with heat events throughout its service life.
d. Environment impact: this criterion is a thorough statistical method for analyzing the impacts of man-made or natural changes on the environment. By using this analysis, the real difference in the level of pollutants, or in land or water scarcity, can be statistically obtained. For example, it might be essential to decide whether a recently introduced intervention significantly reduces the previous mean level of a pollutant.
e. Legal: approval from the authorities is necessary before planning and implementing a mitigation measure in cities.

IV. AHP TOOL AND METHODOLOGY
The Analytic Hierarchy Process (AHP) was developed by Prof. Thomas L. Saaty [9]. It is a strategy for deriving ratio scales from pairwise comparisons. The process has been used in several fields of study, e.g., to evaluate transport policies to reduce climate change impacts [10], to analyze future water policies in Jordan [11], for the selection of electric power plants [12], and for contractor selection [13]. In this study, AHP is implemented as a decision-making approach for choosing an intervention to mitigate heat stress in urban areas under certain criteria. The hierarchy process is shown below in fig. 2 and the steps are explained in the subsections. The information was obtained from specific measures [14][15], for example, cost, efficiency, satisfaction feelings and preferences. Several questionnaires were filled in by the research team and the inputs obtained from these questionnaires are placed in the methodology. AHP permits some small irregularity in judgment, called inconsistency, since humans are not always reliable; still, this method helps decision-makers to identify priorities through a pairwise comparison of alternatives.
A. Goal
The goal of this study is to select a preventive measure to reduce heat stress in urban areas.
B. Criteria framework
This step helps to construct the framework which expresses the aspects of the decision-makers' reflection on the desirable measure. The criteria framework is illustrated in Fig. 3.
C. Pair-wise Comparison
This step allows each alternative to be compared with all the others. The formation of pairs depends on the number of elements to compare; e.g., five criteria are considered for choosing the best intervention to improve thermal comfort during hot summers. Hence, ten pairs of criteria are evaluated, which are: (C1-C2), (C1-C3), (C1-C4), (C1-C5), (C2-C3), (C2-C4), (C2-C5), (C3-C4), (C3-C5), (C4-C5), following the formula shown in Table 1. An example of a pairwise comparison between 'Green Roof' and 'Cool Pavements' is shown in fig. 4. The judgements are made with the help of the scales recommended by Thomas Saaty, shown in Table 2.
D. Matrices formulation
Outcomes of pair-wise judgment are formulated in matrices; diagonal elements of the matrix are always 1. The two following rules are used to create upper triangular matrix:
• If the judgment value is on the left side of 1, we put the actual judgment value. • If the judgment value is on the right side of 1, we put the reciprocal value.
In order to set up the priority vector for criteria, AHP suggests an n×n pairwise comparison of matrix A eq (1).
$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} = (a_{ij})_{ij}$   (1)
Where $a_{ij}$ is the element of row i and column j of the matrix. The reciprocal values of the upper diagonal are used for the lower triangular matrix; the lower diagonal is filled using eq (2):
$a_{ji} = \frac{1}{a_{ij}}$   (2)
After building the matrix A, the priority vector of criteria is calculated using the following steps:
• The normalized pairwise comparison matrix $A_{norm}$ is calculated using eq (3), where the sum of the entries of each column is equal to 1, i.e.,
$A_{norm} = \begin{pmatrix} \bar{a}_{11} & \cdots & \bar{a}_{1n} \\ \vdots & \ddots & \vdots \\ \bar{a}_{n1} & \cdots & \bar{a}_{nn} \end{pmatrix} = (\bar{a}_{ij})_{ij}$   (3)
• The entries of the matrix $A_{norm}$ are calculated from the entries of the matrix A using eq (4):
$\bar{a}_{ij} = \frac{a_{ij}}{\sum_{k=1}^{n} a_{kj}}$   (4)
• The priority vector of criteria is an n-dimensional column vector $\Pi$, calculated by eq (5):
$\Pi = \begin{pmatrix} p_1 \\ \vdots \\ p_n \end{pmatrix}$   (5)
where the priority vector is obtained by averaging the entries of each row of the matrix $A_{norm}$, as in eq (5). The consistency ratio is then computed with eq (7), which confirms whether the results are consistent with the provided judgments. If the consistency ratio is below 10%, the inconsistency is acceptable, but if the consistency ratio exceeds 10%, the subjective preference assessment needs to be revised. The reference consistency index is called the Random Consistency Index (CRI). Professor Saaty [9] provided the random indices shown in Table 3.
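As an illustration, a minimal NumPy sketch of these AHP steps (column normalization, row-averaged priority vector, and consistency ratio check) is given below; the pairwise judgment matrix is a hypothetical example, not the questionnaire data.

```python
# Minimal sketch of the AHP priority vector and consistency ratio described above.
# The example pairwise matrix is hypothetical.
import numpy as np

RANDOM_INDEX = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A: np.ndarray):
    n = A.shape[0]
    A_norm = A / A.sum(axis=0)      # eqs. (3)-(4): each column sums to 1
    p = A_norm.mean(axis=1)         # eq. (5): priority vector (row averages)
    lam_max = (A @ p / p).mean()    # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)    # consistency index
    cr = ci / RANDOM_INDEX[n]       # consistency ratio, acceptable if <= 0.10
    return p, cr

if __name__ == "__main__":
    # Hypothetical judgments for 5 criteria on the Saaty 1-9 scale (reciprocal matrix).
    A = np.array([
        [1,   3,   2,   4,   5],
        [1/3, 1,   1/2, 2,   3],
        [1/2, 2,   1,   3,   4],
        [1/4, 1/2, 1/3, 1,   2],
        [1/5, 1/3, 1/4, 1/2, 1],
    ])
    weights, cr = ahp_priorities(A)
    print("criteria weights:", weights.round(3), "CR:", round(cr, 3))
```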
V. RESULTS AND DISCUSSION
The participants chose among the given options for the mitigation of heat stress in summer; their judgements were compared pairwise to evaluate the importance of each criterion over the others. Afterwards, judgment matrices were formulated to obtain the priority vector. The results showed that environmental impact is the most important (36.1%) criterion for the selection of an intervention that improves thermal comfort in urban areas, followed by the durability of the measure at the desired location (30.8%). Efficiency (heat stress reduction level) is the third dominant criterion when deciding which intervention to implement near hotspots. Cost-effectiveness and legal aspects (permission from the authorities) are also essential criteria; however, they can be moulded by investors and favorable arguments. The results of the questionnaire are illustrated in the pie diagram shown in Fig. 5.
A. Interventions Priority results with respect to Cost-Effectiveness
Cool pavements are the most cost-effective option (59.2%), while the cool/white roof is 26.5% effective with respect to cost. The green roof and green parking have importances of 9.9% and 4.5%, respectively, as presented in Fig. 6.
B. Interventions Priority results with respect to Efficiency
The efficiency priorities of the three most effective interventions are shown in Fig. 7. A green roof is 64.9% efficient at reducing heat stress in the environment, followed by green parking with 27.9%, while the cool roof has an importance of 7.2% in terms of efficiency.
C. Interventions Priority results with respect to Durability
According to the survey, the green roof is the most durable intervention (54.9%) for outdoor thermal comfort among the alternatives, as shown in Fig. 8. Green parking is ranked second with 28.8%, while the cool roof scores 11.5% and temporary interventions (watering method, fountains, other materials, etc.) 4.8%.
D. Interventions Priority results with respect to environmental Impact
Environmental impact is the highest ranked criterion among all the standards, as shown in Fig. 5. The corresponding results are shown in Fig. 9: the green roof scores 56.2%, green parking 29.3%, cool/white roofs 10.1% and cool pavements 4.4% with respect to this criterion.
E. Interventions Priority results with respect to Legal aspects
Before implementing anything, it is necessary to have permission in order to respect the law. It is a valuable criterion for the selection of an intervention in the urban area, especially in the city center and at historical locations. However, it represents only 6% of the overall weight among the chosen criteria. Considering this criterion before taking any action, legal permission is mainly needed for green parking and the green roof, since these interventions are 38.7% and 37% important with respect to the legal criterion, as shown in fig. 10. On the contrary, cool roofs (13.9%), cool pavements (5.6%) and temporary interventions (4.9%) are less demanding in terms of getting permission to be adopted.
F. Final Priority results with respect to all criteria
Interventions are ranked using eq (8). The final rankings are shown in Table 4. According to the survey entries, the green roof is expensive and its installation must meet legal deadlines, but it gives a good cooling effect, has a significant impact on the environment, and has a good service life without requiring additional maintenance, only care. Interventions such as water fountains, misting, humidification of streets, and the use of air-conditioning water for planting are temporary solutions that can be adopted to improve comfort, but they cannot mitigate heat stress at the city level.
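For illustration, a minimal sketch of the final synthesis is given below, assuming eq. (8) is the standard AHP weighted sum of each alternative's local priorities multiplied by the criteria weights; all numbers are hypothetical placeholders, not the survey results.

```python
# Minimal sketch of the final AHP synthesis assumed for eq. (8):
# global score = sum over criteria of (criterion weight x local priority).
# All values below are hypothetical placeholders.
criteria_weights = {"environment": 0.35, "durability": 0.30, "efficiency": 0.20,
                    "cost": 0.10, "legal": 0.05}

local_priorities = {  # local_priorities[alternative][criterion]
    "green roof":    {"environment": 0.50, "durability": 0.50, "efficiency": 0.60, "cost": 0.10, "legal": 0.35},
    "green parking": {"environment": 0.30, "durability": 0.30, "efficiency": 0.30, "cost": 0.10, "legal": 0.40},
    "cool roof":     {"environment": 0.20, "durability": 0.20, "efficiency": 0.10, "cost": 0.80, "legal": 0.25},
}

scores = {alt: sum(criteria_weights[c] * p for c, p in locals_.items())
          for alt, locals_ in local_priorities.items()}
for alt, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{alt}: {s:.3f}")
```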
VI. CONCLUSION
In this study, the AHP is applied for the selection of an appropriate intervention to improve thermal comfort in small and medium cities. The pairwise comparison between the criteria and the judgments on the proposed interventions is assessed through a questionnaire based on the perception of the decision makers. The obtained results, based on the survey entries, showed that the green roof is expensive and its installation must follow legal formalities, but it gives a good cooling effect, has a significant impact on the environment, and has a good service life without requiring extra maintenance compared with the others, only care. However, water fountains and misting should not be considered for implementation since they only give a temporary cooling effect. The ranking results presented in this article are given to illustrate the use of the proposed technique and must not be taken as universal; nevertheless, the same procedure can be followed and used at any location. It is concluded that the application of AHP in urban planning provides a useful decision support tool for local users and stakeholders.
Introduction
Multi-criteria Decision Methods (MCDMs) are valuable tools to handle selection problems. They are based on five components, which are: goals, the views of the specialist, alternatives, criteria, and results. MCDMs require human perceptions as sources of information where uncertainty and subjective aspects exist. The decision maker's assessments can be expressed by using linguistic terms such as "low importance" or "brilliant performance". These assessments are often subjective because some criteria do not have an objective measure, which forces the decision makers to express their thoughts using numeric scales. There are many MCDM methods used in different fields of study; for example, the fuzzy logic method [1] has previously been applied in the soil sciences [2], to suppliers' performance [3], to imprecise information related to distribution problems [4], in the field of accounting and finance to develop guidelines for investment decisions [5,6], and in the selection of the appropriate process performance [7]. However, the fuzzy logic method has no potential to measure the level of consistency in the judgments provided by a decision maker.
AHP is one of the oldest and most trusted decision-making methods [8]. It is a comprehensive technique that has the ability to solve complex decision-making problems by assembling, quantifying, and evaluating the alternative solutions through hierarchies [9,10]. Furthermore, it is easy to implement by experts of other fields and overcomes the resulting risk of inconsistency. Consistency plays a vital role in AHP. When the consistency ratio of the pairwise comparison matrix is greater than 10%, it requires a review of the inputs to make the results consistent with the provided judgments [11]. The modified version of Fuzzy-AHP is aimed at removing the vagueness and uncertainty in decision-making, but due to heavy calculations and a high risk of errors, it is difficult to adopt. In contrast, the conventional AHP is quite easy to update, completely reliable, and cost effective, and its analysis can easily be performed by software [12,13]. Previously, the AHP was coupled with other MCDMs, such as MOORA for public transport service quality [14], ELECTRE for personnel selection [15], TOPSIS for the evaluation of knowledge sharing capabilities of supply chain partners [16] and suitable technology transfer strategy for wind turbines [17], PROMETHEE for the selection of policy scenarios for vehicle fleet [18], VIKOR for the assessment of school teachers [19], WSM to evaluate the knowledge in supply chain patterns [20], and WPM for the selection of open-source electronic medical records [21]. In one research work, the DEA model was used to assess the performance of small- to medium-sized manufacturing enterprises, and it showed that the MCDM model combined with the AHP was more consistent than stand-alone models where the decision was entirely based on quantitative inputs [22]. The main reason why many companies do not rely on MCDM methods can be the fact that decision makers intuitively notice ranking errors [23]. However, there is a need for a comparative study on the reliability of the MCDMs. So far, the application of several MCDMs and their comparative study in the field of Urban Heat Stress (UHS) has not been carried out, due to the long reasoning involved, which is difficult to quantify and scale. This research is based on the selection of interventions to mitigate outdoor heat stress by using different multi-criteria decision applications, such as PROMETHEE, VIKOR, MOORA, ELECTRE (NS, NI), TOPSIS, WPM, and WSM. The applied methods are also combined with AHP, which identifies the effectiveness of the stand-alone applications using direct criteria weightage and the impact of AHP on the decision process in the field of UHS. The impact of different normalization methods and of coherent frequency on the ranking results obtained from the application of the different MCDMs is also investigated. The article is organized as follows. Section 2 presents the research background and related work. The underlying concepts and mathematical formulas are given in Section 3. The research methodology is explained in Section 4. Sections 5 and 6 provide the simulation results and discussion. The article ends with a conclusion and some perspectives.
Research Background
UHS is currently a crucial concern for scientists and for the residents of medium- and large-sized cities, which are at greater risk of heat events. Prolonged exposure can cause heat exhaustion, cramps, and stroke, as well as exacerbate pre-existing chronic conditions, such as various respiratory, cerebral, and cardiovascular diseases, especially for vulnerable people. There are many heat mitigation strategies for improving thermal comfort in urban areas. As human beings, it is our duty to save the environment in which we live, but the decision makers' responsibility is much bigger because many remedial measures need to be implemented on a large scale. The application of MCDMs can be a first approach that assists decision makers to quantitatively assess the importance of criteria and the performance of alternatives in selection processes.
In previous studies, AHP-SWOT, multi-criteria outranking approach, EFDM, FDEMA-TEL, multi-criteria method by linear regression, TOPSIS, SMCE, Fuzzy-AHP TOPSIS have been used in the field of UHS [24]. In addition, the AHP method was specifically used to select the urban heat resilience intervention under certain criteria [25].
From the literature, it was investigated that most popular interventions considered to deal with thermal stress are white roofs, extensive green roofs, intensive green roofs [26], planting trees in cities [27], green parking lots, cool roofs, watering methods (sprays in public areas) [28], green walls, and shades. Figure 1 shows the interventions chosen for this study that can mitigate thermal discomfort in an urban setting.
For our studies, we considered four general criteria for the selection process of interventions (alternatives). These criteria are the following:
• Cost: capital and running cost of the intervention, which is often taken as a non-beneficial (NB) criterion.
• Environment: impact of the intervention on the level of air, land, and water. For example, it might be necessary to know if a recently introduced intervention significantly improves the previous mean level of air quality.
• Efficiency: cooling effect of the intervention in open spaces.
• Durability: capability of the intervention to withstand the level of heat and remain useful without requiring additional maintenance after extreme weather events throughout its service life.
Mathematical Models of MCDMs
Eight different MCDMs and seven normalization methods were computed for deciding the UHS mitigation intervention. Additionally, the AHP was also applied for calculating the weightage of criteria. The procedure and key equations of methods are given in Table 1. Mathematical formulas of applied normalized methods are presented in Table 2.
MCDM | Steps | Reference
TOPSIS: Technique for Order Preference by Similarity to Ideal Solutions
Step 1: form the decision matrix. [29]
Step 2: normalize the decision matrix ($r_{ij}$).
Step 3: weighted normalized decision matrix: $V_{ij} = w_j r_{ij}$, where $w_j$ is the weight of the j-th criterion (attribute), with the condition $\sum_{j=1}^{n} w_j = 1$.
Step 4: ideal best $V_j^{+}$ and ideal worst $V_j^{-}$ values: for beneficial criteria $V_j^{+} = \max_i V_{ij}$, and for cost criteria $V_j^{+} = \min_i V_{ij}$ (and conversely for $V_j^{-}$), $i = 1, \ldots, m$.
Step 5: calculate the distances of each alternative from the positive ideal and negative ideal solutions: $S_i^{\pm} = \sqrt{\sum_{j=1}^{n} \left( V_{ij} - V_j^{\pm} \right)^2}$.
Step 6: calculate the relative closeness to the ideal solution (performance score): $R_i = \frac{S_i^{-}}{S_i^{+} + S_i^{-}}$.
Step 7: rank the best alternative.
ELECTRE(NI-NS): Elimination and Choice Expressing Reality
Step 1: form the decision matrix. [30]
Step 2: normalize the decision matrix.
Step 3: weighted normalized decision matrix: $v_{ij} = r_{ij} \cdot W_j$.
Step 4: concordance and discordance interval sets: $C_{ab} = \{ j \mid v_{aj} \ge v_{bj} \}$ and $D_{ab} = \{ j \mid v_{aj} < v_{bj} \} = J - C_{ab}$.
Step 5: calculation of the concordance interval matrix: $c(a,b) = \sum_{j \in C_{ab}} W_j$, gathered in the matrix C (no entry on the diagonal).
Step 6: determine the concordance index matrix: $\bar{c} = \frac{\sum_{a=1}^{m} \sum_{b=1}^{m} c(a,b)}{m(m-1)}$; $e(a,b) = 1$ if $c(a,b) \ge \bar{c}$, and $e(a,b) = 0$ if $c(a,b) < \bar{c}$.
Step 7: calculation of the discordance interval matrix: $d(a,b) = \frac{\max_{j \in D_{ab}} |v_{aj} - v_{bj}|}{\max_{j \in J} |v_{aj} - v_{bj}|}$.
Step 8: determine the discordance index matrix: $\bar{d} = \frac{\sum_{a=1}^{m} \sum_{b=1}^{m} d(a,b)}{m(m-1)}$; $f(a,b) = 1$ if $d(a,b) \le \bar{d}$, and $f(a,b) = 0$ if $d(a,b) > \bar{d}$.
Step 9: calculate the net superior and net inferior values: $C_a = \sum_{b=1}^{n} c(a,b) - \sum_{b=1}^{n} c(b,a)$ (net superior) and $d_a = \sum_{b=1}^{n} d(a,b) - \sum_{b=1}^{n} d(b,a)$ (net inferior).
Step 10: select the best alternative: choose the highest value of the net superior ($C_a$) and the lowest value of the net inferior ($d_a$).
PROMETHEE: This method utilizes a preferential function to drive the preference difference between alternative pairs.
Step 1: form the decision matrix. [31]
Step 2: normalize the decision matrix.
Step 3: deviation by pairwise comparison: $d_j(a,b) = R_j(a) - R_j(b)$.
Step 4: preference function: $p_j(a,b) = 0$ if $d_j(a,b) \le 0$, and $p_j(a,b) = d_j(a,b)$ if $d_j(a,b) > 0$.
Step 5: multi-criteria preference index: $\pi(a,b) = \sum_{j=1}^{n} p_j(a,b) W_j$.
Step 6: positive and negative outranking flows ($a \ne b$): $\varphi^{+}(a) = \frac{1}{m-1} \sum_{b=1}^{m} \pi(a,b)$ and $\varphi^{-}(a) = \frac{1}{m-1} \sum_{b=1}^{m} \pi(b,a)$.
Step 7: net flow: $\varphi = \varphi^{+} - \varphi^{-}$.
Step 8: rank the alternatives using the highest value of the net flow.

VIKOR: multi-criteria optimization and compromise solution, which focuses on ranking and selecting from a set of alternatives in the presence of conflicting criteria.
Step 1: determine the objective and identify the pertinent evaluation attributes. [32]
Step 2: normalize the decision matrix ($f_{ij}$).
Step 3: find the best and worst values. Best: $f_j^{+} = \max_i f_{ij}$ for a beneficial attribute and $f_j^{+} = \min_i f_{ij}$ for a non-beneficial attribute. Worst: $f_j^{-} = \min_i f_{ij}$ for a beneficial attribute and $f_j^{-} = \max_i f_{ij}$ for a non-beneficial attribute.
Step 4: utility measure $S_i$ and regret measure $R_i$: $S_i = \sum_{j=1}^{n} w_j \frac{f_j^{+} - f_{ij}}{f_j^{+} - f_j^{-}}$ and $R_i = \max_j \left[ w_j \frac{f_j^{+} - f_{ij}}{f_j^{+} - f_j^{-}} \right]$.
Step 5: calculate the value of $Q_i = v \frac{S_i - (S_i)_{min}}{(S_i)_{max} - (S_i)_{min}} + (1-v) \frac{R_i - (R_i)_{min}}{(R_i)_{max} - (R_i)_{min}}$, where $v \in [0, 1]$ is the weight of the strategy of maximum group utility.
MOORA: Multi-Objective Optimization on the Basis of Ratio Analysis
Step 1: the alternatives and attribute values are arranged in the decision matrix. [33]
Step 2: normalize the decision matrix ($x^{*}_{ij}$).
Step 3: positive and negative effects (maximization for beneficial criteria, minimization for non-beneficial/cost criteria): $yb_i = \sum_{j=1}^{g} x^{*}_{ij} w_j$ and $ynb_i = \sum_{j=g+1}^{n} x^{*}_{ij} w_j$, where $g$ is the number of criteria to be maximized and $n-g$ the number of criteria to be minimized.
Step 4: determine the weighted assessment value: $y_i = yb_i - ynb_i = \sum_{j=1}^{g} x^{*}_{ij} w_j - \sum_{j=g+1}^{n} x^{*}_{ij} w_j$.
Step 5: rank the best alternative; the alternative with the highest value of $y_i$ is ranked first.

WSM: Weighted Sum Method
Step 1: form the decision matrix. [34]
Step 2: normalize the decision matrix ($r_{ij}$).
Step 3: weighted normalized decision matrix: $v(i,j) = r(i,j) \cdot w(j)$.
Step 4: weighted sum: $ws(i) = \sum_{j=1}^{n} v(i,j)$.
Step 5: rank the best alternative.

WPM: Weighted Product Method
Steps 1-3: same as WSM.
Step 4: weighted product: $wp(i) = \prod_{j=1}^{n} v(i,j)$.
Step 5: rank the best alternative.
AHP
Step 1: Pair-wise comparison matrix of criteria or alternatives
$P(i) = \prod_{j=1}^{n} A(i,j)$, $P_n(i) = (P(i))^{1/N} = \left( \prod_{j=1}^{n} A(i,j) \right)^{1/N}$, $sp = \sum_{i=1}^{m} P_n(i)$
Step 2: criteria weights or alternative scores: $w(i) = P_n(i)/sp$.
Step 3: calculate consistency (with $n = m$, the size of the matrix A): $v(i,j) = x(i,j) \cdot w(j)$.
Step 4: calculate the weighted sum value: $sw(i) = \sum_{j=1}^{n} v(i,j)$.
Step 5: calculate the consistency error: $R(i) = sw(i)/w(i)$, $\lambda_{max} = \frac{\sum_{i=1}^{m} R(i)}{m}$, $CI = \frac{\lambda_{max} - n}{n-1}$.
For n = 4, RI = 0.9.
Different normalization methods were used for the simple application of the MCDMs (stand-alone). The research methodology is shown in Figure 3. The inputs for criteria and alternatives were given on a 1-10 linguistic scale, where a high value represents high importance; for example, cost is a non-beneficial criterion, and in the case of shades the score was 8, which means it is very expensive, but the durability score was 5, which means that shades sometimes require maintenance after windstorms. All inputs were formulated in the decision matrix shown in Table 4, where criteria weights are given by the direct method for the stand-alone application, and criteria weights are calculated by AHP for the combined approach using the related formulas in Table 1.
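For illustration, a minimal Python sketch of one stand-alone run is given below: a hypothetical 5 × 4 decision matrix on the 1-10 scale, a commonly used form of logarithmic normalization (LnN), and a weighted-sum (WSM) aggregation; the ratings and weights are placeholders, not the values of Table 4.

```python
# Minimal sketch of a stand-alone MCDM run: decision matrix on the 1-10 scale,
# logarithmic normalization (LnN, a commonly used form), and a WSM ranking.
# The ratings and weights below are hypothetical placeholders.
import numpy as np

alternatives = ["water feature", "surfaces", "green wall", "trees", "shades"]
beneficial = [False, True, True, True]          # cost, environment, efficiency, durability
weights = np.array([0.25, 0.25, 0.25, 0.25])

X = np.array([[4, 5, 6, 5],                     # hypothetical 1-10 ratings
              [3, 6, 5, 6],
              [7, 8, 8, 7],
              [5, 9, 8, 9],
              [8, 4, 7, 5]], dtype=float)

# Logarithmic normalization: r_ij = ln(x_ij) / ln(prod_i x_ij); reversed for cost criteria.
R = np.log(X) / np.log(X.prod(axis=0))
for j, is_beneficial in enumerate(beneficial):
    if not is_beneficial:
        R[:, j] = (1 - R[:, j]) / (len(alternatives) - 1)

scores = (R * weights).sum(axis=1)              # WSM aggregation
ranking = sorted(zip(alternatives, scores), key=lambda kv: kv[1], reverse=True)
print(ranking)
```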
Results
Comparative Analysis of Normalization Methods for Applied MCDM
The results show that the logarithmic normalization method had no impact on the ranking results calculated by the applied MCDMs (except for ELECTRE and VIKOR) when compared to other normalization methods. In contrast, the same ranking was observed in PROMETHEE and WSM results using all normalization techniques. Table 5 shows the consistency of similar ranking results using MCDMs under different normalization techniques.
Priority Ranking
The priority rankings for the selection of heat resilience interventions were calculated by using direct and AHP criteria weights. The ranking results obtained by the stand-alone MCDMs using LnN and by AHP-MCDM are shown in Tables 6 and 7, where alternative A1 is the water feature, A2 the surfaces, A3 the green wall, A4 the trees, and A5 the shades, respectively.
Ranking Frequency Error of Stand-Alone MCDMs and AHP-MCDMs
The obtained ranking results showed frequency errors when comparing the applied methods, which could mislead the decision. This problem was addressed by evaluating the frequency of identical rankings. The pairwise frequency error was calculated by Equations (1) and (2). The sum of the standard deviations defines the frequency error and assists in observing the variation in the decision when applying different MCDMs to the same judgments. The method aims to check the consistency of the results and to evaluate the reliability of the outcomes. The end results are shown in the pairwise matrices in Table 8.
$E_{ij} = \sum_{k=1}^{5} \left( A_{kM_i} - A_{kM_j} \right)^2$   (1)
$E_j = \sum_{i=1}^{8} E_{ij}$   (2)
where: i = number of rows, j = number of columns, k = number of alternatives, M = number of methods, A = ranking result of an alternative, and $E_j$ = sum of the variations in the ranking results.
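A minimal sketch of this frequency-error computation is given below; the eight ranking vectors are hypothetical placeholders, not the rankings of Tables 6 and 7.

```python
# Minimal sketch of the pairwise frequency error of eqs. (1)-(2): for every pair
# of methods, sum the squared differences between the ranks they assign to the
# five alternatives, then sum over the pairs for each method.
# The rankings below are hypothetical placeholders.
import numpy as np

# rows = methods (8), columns = alternatives A1..A5, values = rank assigned
rankings = np.array([
    [5, 4, 2, 1, 3],
    [5, 4, 2, 1, 3],
    [4, 5, 2, 1, 3],
    [5, 4, 1, 2, 3],
    [5, 3, 2, 1, 4],
    [4, 5, 2, 1, 3],
    [5, 4, 2, 1, 3],
    [3, 5, 2, 1, 4],
])
m = rankings.shape[0]
E = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        E[i, j] = np.sum((rankings[i] - rankings[j]) ** 2)   # eq. (1)
E_j = E.sum(axis=0)                                          # eq. (2)
print("frequency error per method:", E_j)
```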
Discussion
The applied MCDMs were analyzed by considering three criteria in negative and positive attributes (presented in Table 9) which are:
• Normalization: a positive evaluation is given to MCDMs that yield the same results under different normalization techniques, whereas variations in the results are taken as negative.
• MCDM Frequency: similar ranking results obtained by stand-alone MCDMs are assessed as positive, and high variations are considered as negative.
• AHP-MCDM Frequency: this criterion is used to investigate the impact of coupling AHP with the applied MCDMs, where positive and negative signs show a decrease and an increase in the frequency variation of the final ranking results, respectively.

The frequency error in the ranking results is illustrated in Figure 4, which shows that AHP helps to reduce the irregularity of the final ranking thanks to the pairwise subjective judgment used for calculating the criteria weights, which makes the decision-making results more reliable. It is compatible with all the methods except ELECTRE-NS: an increased frequency error in the ranking was noticed when the ELECTRE-NS model was coupled with the AHP. It was observed that WSM and PROMETHEE were not affected by any normalization technique and gave the same ranking, which proves that they are the most reliable methods. Moreover, the TOPSIS, MOORA and VIKOR models provided improved results after coupling, but different normalization techniques could affect their final outcome. Based on the provided judgments (Table 4), the priority ranking obtained from the majority of the MCDMs showed that planting trees in the urban area is an effective cooling strategy that provides shade, improves air quality, and gives good cooling in certain areas, while green walls improve indoor and outdoor air temperature. Additionally, green walls enhance the aesthetics of the property. The watering method to wet the streets in the summer, planting grass on the surfaces, and water features such as fountains provide a limited cooling extent and require extra care, while artificial shadings are expensive to install in hotspots and there is no co-benefit associated with this intervention.
Conclusions
This study was performed for the selection of intervention to mitigate outdoor UHS by applying multiple MCDMs. Eight different well known and classical techniques were computed to evaluate the priority ranking of interventions. A major concern with decisionmaking is that different MCDM methods provide different results for the same problem. For reliability of the outcomes, a comparative study was conducted on the basis of three criteria, which were (i) influence of normalization techniques, frequency of similar ranking results by (ii) stand-alone MCDMs, and (iii) AHP-MCDM application. It was observed that PROMETHEE and WSM were reliable methods in this field among other applied MCDMs, namely, MOORA, WPM, ELECTRE-NS, ELECTRE-NI, TOPSIS, VIKOR, which are sensitive methods and, due to variations, these MCDM models provided different priority results. Additionally, the L nN was a more reasonable normalization technique, and it provided similar rankings in the majority of applied MCDMs. It was noticed that the coupling of AHP helped to minimize the frequency error through the pairwise method for criteria weights, which increased the reliability of the decision.
In this study, the priority of green walls and trees is an arbitrary output of decision makers. The ranking obtained on the parameters was not a general rule, and this procedure was carried out to check the reliability. The results were entirely based on the terrain, the perspectives, characteristics of the pilots, climatic conditions, and inputs of the decision-makers.
The improved frequency of consistent results by AHP-MCDM revealed that the ranking results mainly depended on the nature and the values of the criteria. The reasonable disagreement that was observed among the methods did not affect their reliability. As a result, MCDM models proved generally very effective for dealing with UHS problems before their implementation and selection of the best ones.
However, a possible limitation of this work is that this comparative conclusion is based only on the evaluation of ranking errors. Future work will extend the experiment with more MCDM models and perform a sensitivity analysis to confirm that the results would not change.

❖ To evaluate the effect of social and environmental risk factors and calculate the heat vulnerability index of a medium-sized city, Amiens (case study);
❖ Field monitoring by using thermal camera and Kestrel 5400 to evaluate the cooling effect of existing measures mainly in public spaces;
❖ Implementation and comparative study of multi-criteria decision methods for the selection of urban heat mitigation intervention;
In the following, we address some concluding remarks for each chapter.
In Chapter 1, previously published studies on the cooling effect of vegetation, water features and shading devices were investigated. Data on the ambient air-cooling effect was extracted from each study based on the considered urban settings/locations (for generic/real urban areas). In addition, another review study of theoretical research and published tools on decision support for urban heat resilience was carried out with their mathematical models and target parameters. Existing decision support tools are analysed with benchmarking criteria that should be covered by the decision support system for UHS mitigation.
It is concluded;
❖ Water features, green walls and artificial shading provide a comparable air-cooling effect during the daytime in summer. Green walls and shades are expensive to intervene.
❖ Cooling effect of reviewed interventions depends on the local climate and geography.
❖ It was noted that online assessment DSTs are practical but spatial coverage remains the limit.
❖ Many existing decision tools for UHS are published but there is still a gap that can be improved by considering all the considered criteria.
In Chapter 2, the survey map was produced for OHSIs used by environmental agencies around the world.
The aim of this study was to quantify the variation coefficient of evolving parameters of HI, CET, PMV, SSI, DI, UTCI, WBGT. Sensitivity analysis was performed by simulating the indices using the min-max function. The obtained results confirm;
❖ The selection of the index to estimate the UHS depends on the regional weather conditions or the age group and activities of the inhabitants.
❖ WBGT, HSI, and CET are recommended indices which can be measured using sensors to avoid errors.
❖ HI, UTCI and WBGT indicate a slightly warm comfort zone while PMV, SSI, CET, DI indicate a comfortable zone using the same input.
❖ HI is a very influential index with fewer variations can lead to increased thermal discomfort.
❖ CET underestimates the thermal comfort zone.
In line with the 4 th research objective, in Chapter 3, emphasis was placed on heat stress modeling for prediction. A system dynamic approach was used, which allows the identification of significant meteorological variables influencing heat stress. The obtained results confirm;
❖ SVM was a suitable machine learning technique that estimated heat stress with 98% accuracy, while GRU was the best deep learning model for optimized prediction of PET, PMV and 𝑇𝑚𝑟𝑡 with 99% precision.
❖ The developed model is useful only for the summer season (April-September).
❖ The developed simulation tool can help users to analyse and choose the appropriate heat stress index between PMV and PET by comparing their current feeling.
❖ The tool will allow the developer to analyse user inputs for further analysis.
Chapter 4 presented a case study of a medium-sized city that aimed to derive heat vulnerability index mapping using PCA. The index was derived for the hottest days in the last 3 years. The output maps identified hot spots through comprehensive GIS analysis. The following conclusions can be drawn:
❖ A high correlation coefficient was observed between poor air quality and 2019-2020 heat events, especially with 𝑂 3 .
❖ High HVI observed in three typical areas: (1) areas with dense population and low vegetation,
(2) areas with artificial surfaces (built-up areas), and (3) industrial areas. Low HVI areas are found in natural landscapes such as rivers and grasslands.
❖ Research has proven that poor air quality and heat events are interrelated and that the combined effect of two events could increase risk and vulnerability.
❖ The current approach may be applicable to other regions of the world, including large cities, to assess heat-related vulnerabilities and help authorities to take mitigation measures.
❖ The challenges of this study are; irregular and limited weather and air quality monitoring stations.
❖ The disadvantage of this study is that district heating and cooling facilities were not considered due to lack of data.
The field measurements were carried out last summer (2021) at 3 public spaces in the city centre. It was estimated that:
❖ Tilia cordata mill malvacae gives a significant cooling effect by lowering the average 𝑇𝑎 by 3.7°C, whereas the Golden rain tree lowers it by 0.6°C, with a cooling extent wider than the other sites due to the presence of a water fountain.
❖ Cooling effect depends on the characteristics and span of the site.
The last research objective is presented in chapter 5, which focused on decision-making for prioritizing heat stress mitigation interventions with the help of MCDMs. In the first step, AHP was implemented for the selection of an intervention, to develop the criteria framework and confirm the reliability of the method.
Secondly, ELECTRE (NI, NS), TOPSIS, PROMETHEE, VIKOR, MOORA, WSM and WPM were applied for the selection of a heat resilience measure under certain criteria. The models were applied in two ways, AHP-integrated and non-integrated, for better precision. It is concluded that:
❖ LnN is an efficient normalization technique that does not greatly influence the final ranking.
❖ WSM and PROMETHEE provide reliable and consistent output. These techniques do not vary between different normalization techniques.
❖ The integration of AHP has improved the quality of the results except with ELECTRE-NS which increases the inconsistency in the final classification.
❖ After a comparative analysis of the priority ranking calculated from all the methods, it is concluded that green walls and trees are the top priority.
❖ Incomplete and inconsistent judgments were the real challenge, as missing information could lead to a wrong choice.
Le stress thermique est une sensation inconfortable, lorsque le corps est incapable de maintenir une température saine en réponse aux conditions environnementales chaudes pendant les activités quotidiennes telles que dormir, voyager, travailler.
24 articles studied water features, 31 green technologies, 13 shadings and 25 green vegetation. These studies were analysed and frequency of different indicators such as 𝑇 𝑎 , Universal Thermal Climate Index (UTCI), Physiological Equivalent Temperature (PET), Predicted Mean Vote (PMV), Urban Heat Island Intensity (UHII), Mean Radiant Temperature (𝑇 𝑚𝑟𝑡 ), Universal Effective Temperature (ETU), Surface Temperature of land and soil (𝑇 𝑠 ), Pavement Heat Flux (𝑃𝐻 𝑓 ), Building Heat Flux (𝐻 𝑓 ), Mediterranean Outdoor Comfort Index (MOCI), Wet Bulb Globe Temperature (WBGT), Relative Humidity (RH), Skin Temperature (𝑇 𝑠𝑘 ), Façade Temperature (𝑇 𝑓 ), Park Cooling Intensity (PCI), Human Comfort Index (HCI), Globe temperature (𝑇 𝑔 ), Black Globe Temperature (𝑇 𝑔𝑏 ) and Wall Temperature (𝑇 𝑤 ) have been used to measure the cooling effect of blue, green and grey interventions that are graphically represented in the following sections. Among the numerous indicators used in past studies, this review is focused on the cooling difference in 𝑇 𝑎 because this was the most frequently used indicator for measuring the cooling effect. selected on a random basis from across the world and published between 2006 and 2021. These studies involved field experiments, simulations, and modeling and most experimentally validated their simulations and models. The methodology is illustrated graphically in Figure 1.1.
Figure 1 . 1 :
11 Figure 1.1: Methodological Framework of this Review Study
Figure 1 . 3
13 graphically represents the frequency of different indicators were used to evaluate the cooling effect of water features in selected papers.
Figure 1 . 2 :
12 Figure 1.2: Images Showing Outdoor Interventions with Water Features. (A) Fountain Tree, (B) WaterMisting System, (C) Little Water Fountains.
Figure 1 . 3 :
13 Figure 1.3: Measuring Parameters Used to Evaluate Cooling Effect of Water Features.
constructed alternatives and can be implemented with less effort. During the screening of research articles, it was observed that different indicators have been used to quantify the cooling effect. The frequency of these indicators is graphically represented in Figure 1.5 which shows that ambient air temperature is the most used indicator for quantifying the cooling effect.
Figure 1 . 5 :
15 Figure 1.5: Frequency of Measuring Parameters used to Evaluate Cooling Effect of (a) Vegetation (Natural Green Infrastructure), (b) Supported Green Infrastructure.
Figure 1 . 6 andA 22 Figure 1 . 6 :Figure 1 . 7 :
16221617 Figure 1.6: Images of Outdoor Grey Interventions (Constructed Shading). (a) Parasols, (b) Pavilion Shades (c) Green Pergolas
morbidity and mortality in cities. Different interventions have been the subject of experiments and found to contribute to improving thermal comfort in outdoor open spaces, with most research conducted during the daytime in summer, as shown in Figures 1.8(a) and (b).
Figure 1 . 8 :
18 Figure 1.8: No. of Studies were Monitored in (a) Season and (b) Measurements Time.
Figure 1 . 9 :❖ 1 ❖
191 Figure 1.9: Cooling Effect of (a) Blue Interventions and (b) Natural Green Interventions Achieved by Different Studies. ❖ Natural vegetation: This can create have significantand multiple -impacts on the environment. For example, measurements taken over grass alone were beneficial, but when combined with trees, showed a greater cooling effect. Grass contributed significantly to mitigating the UHS by reducing PET by at least 10°C while making a slight decrease in the 𝑇 𝑎 of approximately 2°C. Outdoor cooling effects of different type of natural vegetation are plotted in Figure 1.9 (b).
Figure 1 .
1 Figure 1.10: (a) Cooling Effect of Built-In Green Interventions Achieved by Different Studies, (b) Cooling Effect of Built-In Shades Achieved by Different Studies ❖ Grey infrastructure: Sun sails and other shading device are beneficial due to their maximum cooling effect. Most studies support the idea that people prefer to walk on the streets because of overhead shading as it reduces the heat intensity [71]. The shades enhance pedestrian comfort in summer but during winter it causes cold stress and increases the heating requirement. Overall artificial shading structures provide a cooling effect with a decrease of the 𝑇 𝑎 by approximately 4°C and PET by 7°C. Results obtained with different types of shading are presented in Figure 10 (b).
Figure 1 .
1 Figure 1. Methodology of the review paper.
Figure 1 .
1 Figure 1. Methodology of the review paper.
Figure 2 .
2 Figure 2. A general procedure of multi-criteria decision methods.
Analytical Hierarchy Process (SWOT); Multi-criteria outranking approach (MCDA and IBVA); Enhanced Fuzzy Delphi Method (EFDM); Fuzzy decision-making trial and evaluation laboratory (FDEMATEL); Multi-criteria method by linear regression; The technique for order of preference by similarity to ideal solution (TOPSIS); Spatial Multi-Criteria Evaluation (SMCE); Fuzzy Analytic Hierarchy Process; Fuzzy TOPSIS.
Figure 2 .
2 Figure 2. A general procedure of multi-criteria decision methods.
[ 1 -
1 c(a,b)] otherwise where I v (a, b) is the set of indicators for which d i (a, b) > c(a, b) Stage 3: Distillation and ranking procedures.
Figure 3 .
3 Figure 3. Framework of RP2023; Microclimate and Urban Heat Island Mitigation Decision-Support Tool [29].
Figure 3 .
3 Figure 3. Framework of RP2023; Microclimate and Urban Heat Island Mitigation Decision-Support Tool [29].
049 -0.22𝑅𝐻 0 -2(6.84 × 10 -3 )(𝑇 𝑎𝑜 ) + 2(1.22 × 10 -3 )(𝑇 𝑎𝑜 )(𝑅𝐻 0 ) + 8.5 × 10 -4 (𝑅𝐻 0 )² -1.99 × 10 -6 (𝑇 𝑎𝑜 )(𝑅𝐻 0 143 -0.22(𝑇 𝑎𝑜 ) -2(5.48 × 10 -2 )(𝑅𝐻 0 ) + 1.22 × 10 -3 (𝑇 𝑎𝑜 )² + 2(8.5 × 10 -4 )(𝑇 𝑎𝑜 )(𝑅𝐻 0 ) -2(1.99 × 10 -6 )(𝑇 𝑎𝑜 )²(𝑅𝐻 0 )
•
𝑇 𝑎𝑜 = 27.22°C • 𝑇 𝑚𝑟𝑡 = 27°C • 𝑣 𝑜 = 2 m/s • 𝑅𝐻 0 = 30%
to an increase in the level of discomfort while the results of other indices remain in the same comfort zone. Further, x1, … xn are simulated by using the min-max function. Where xmin is the lowest possible (S0) and xmax is maximum possible value under the realistic situation in the summer season (𝑇 𝑔 = 50, 𝑇 𝑎 = 50°𝐶, 𝑇 𝑚𝑟𝑡 = 50°𝐶, 𝑅𝐻 = 100%, 𝑣 = 10 𝑚 𝑠 , 𝑇 𝑤 = 25°𝐶, 𝑚𝑒𝑡 = 4, 𝑚𝑤 = 0.9 , 𝛼 = -3). At each time step only one variable was changing at the same time by 1. The obtained results are plotted shown in Figure 2.3 which shows the sensitivity of thermal discomfort.
Figure 2 . 3 : 6 Conclusions
236 Figure 2.3: Thermal Comfort Scale Sensitivity Chart of Simulated Heat Stress Indices
Figure 3 . 1 .
31 It is observed that GRU is a promising and efficient technology, the results with higher accuracy are obtained from this algorithm. The results obtained from the model are validated with the output of reference software named Rayman. After validation the GRU model was adapted for the development of the user's friendly tool for HSA which allows users to select the range of thermal comfort scales based on their perception and will allow developer to use the database for further research.
Figure 3 . 1 :
31 Figure 3.1: Comparative Graph of Output (a) PET (b) PMV (c) 𝑻 mrt of Testing Samples from AppliedModels.
2 .Figure A3. 1 :Figure A3. 2 :
212 Figure A3.1: Validation of Model Outputs Testing Phase 1 (a) PET (b) PMV (c) 𝑻 mrt
Fig. 1
1 Fig.1 System dynamic approach for the assessment of heat stress
Fig. 5 Fig. 3 Fig. 4
534 Fig.5 Probability distribution of (a) PET (b) PMV (c) 𝑇𝑇 mrt (c)
Fig. 6 GRUFig. 7
67 Fig.6 GRU Unit at time step t
Fig. 8
8 Fig.8 Validation of model outputs testing phase 1 for PET.
Fig. 9
9 Fig.9 Validation of model outputs testing phase 2 for PET.
Climate 2022 , 15 Figure 1 .Figure 1 . 14 2. 1 .
20221511141 Figure 1. Working model for Heat Vulnerability Index mapping of Amiens. 2.1. Identification of Risk Factors 2.1.1. Social Vulnerability Factors (SVF)Age, pre-existing medical conditions, and social deprivation are among the various key factors that make people likely to experience more adverse health outcomes related to extreme temperatures. References were used for the population density, poverty rate,
statistical data from referenced sources give the rough estimation of social vulnerability.
Figure 2 .
2 Figure 2. (a) Total population in the city; (b) elderly population ≥65 years; (c) those living alone; (d) asthma patients; (e) patients with other respiratory diseases; (f) cardiovascular patients.
Figure 2 .
2 Figure 2. (a) Total population in the city; (b) elderly population ≥65 years; (c) those living alone; (d) asthma patients; (e) patients with other respiratory diseases; (f) cardiovascular patients.
Climate 2022 ,
2022 10, x FOR PEER REVIEW 5 of 15was made by categorizing air temperature ranges into risk warnings: slightly warm (26-30 °C), warm (31-36 °C), and very hot(37-41 °C). This approach made it possible to analyze the huge hourly data during the summer seasons of 2018-2020. The number of hours of heat stress with their levels is presented in Figure4.
Figure 3 .
3 Figure 3. The geographical locations of air pollution and weather stations.
Figure 3 .
3 Figure 3. The geographical locations of air pollution and weather stations.
Figure 3 .
3 Figure 3. The geographical locations of air pollution and weather stations.
Figure 4 .
4 Figure 4. Heat stress hours in Amiens (data source: Meteo France).
Figure 4 .
4 Figure 4. Heat stress hours in Amiens (data source: Meteo France).
Climate 2022 ,
2022 10, x FOR PEER REVIEW 6 of 15
Figure 5 .
5 Figure 5. Correlation coefficient (α) of monitored air temperature (Ta) and surface temperature (Ts) with air pollutants during summer season (July and August) of 2019 and 2020 (data source ATMO France).
Figure 5 .
5 Figure 5. Correlation coefficient (α) of monitored air temperature (Ta) and surface temperature (Ts) with air pollutants during summer season (July and August) of 2019 and 2020 (data source ATMO France).
Figure 6 .
6 Figure 6. Land surface temperature maps: (a) summer mean 2020; (b) 27 July 2018; (c) 25 July 2019; (d) 31 July 2020. Note: The summer mean temperature is the mean LST of July and August 2020.
Figure 6 .
6 Figure 6. Land surface temperature maps: (a) summer mean 2020; (b) 27 July 2018; (c) 25 July 2019; (d) 31 July 2020. Note: The summer mean temperature is the mean LST of July and August 2020.
Figure 7 .
7 Figure 7. Land use/land cover map of Amiens.
Figure 7 .
7 Figure 7. Land use/land cover map of Amiens.
Figure 8 .
8 Figure 8. Air quality index maps created with the available data provided by a local agency (Atmo) (a) 2019 and (b) 2020.
Figure 8 .
8 Figure 8. Air quality index maps created with the available data provided by a local agency (Atmo) (a) 2019 and (b) 2020.
Figure 9 .
9 Figure 9. Map of components allocated by PCA algorithm.
Figure 9 .
9 Figure 9. Map of components allocated by PCA algorithm.
Figure 10 .
10 Figure 10. HVI map of Amiens indicating the greater vulnerability of the city center area to heat compared to the rural areas.
Figure 10 .
10 Figure 10. HVI map of Amiens indicating the greater vulnerability of the city center area to heat compared to the rural areas.
Figure 4 . 1 :Pilot site 1 :
411 Figure 4.1: Pictorial View of Monitoring Sensor and its Working
Figure 4 . 2 :Pilot site 2 :
422 Figure 4.2: Pilot site 1 at 34 rue des rinchevaux 80000 Amiens
Figure 4 . 3 :Pilot site 3 :
433 Figure 4.3: Pilot site 2 at 18 pl Notre-Dame 80000 Amiens Pilot site 3: It is a small triangular park located near a shopping centre in Amiens. The inhabitants of the city take advantage of this site during the summer season. Eleven trees of Koelreuteria paniculata (Golden Rain Tree) and lemongrass exist in this park. In addition, long water fountains give a pleasant environment by small water droplets with an extent of 8-10 m. Two sensors were placed, one under the
Figure 4 . 4 :
44 Figure 4.4: Pilot site 3 at 6 rue dusevel 80000 Amiens
Figure 4
4
Figure 4 . 5 :
45 Figure 4.5: Recorded Monitoring Data at Intervention (I) And Reference (R) Points for Species; (A) Tilia
Fig. 1: Best practice measures to improve thermal comfort.
Fig. 2
2 Fig.2 Analytic Hierarchy Process for Selection of an appropriate
Fig. 3
3 Fig.3 Criteria Framework along with important alternatives.
Fig. 4
4 Fig.4 Example of pair-wise comparison between two intervention Table1 Quantification of pairs No. of interventions No. of comparisons 1 0 2 1 3 3 4 6
6 E
6 . Consistency Ratio AHP suggests a consistency ratio (CR) technique based on testing. It is calculated by eq
criteria weight of first row) +SOC 2(criteria weight of second row+…
5
5
Fig. 5
5 Fig.5 Pie diagram of Criteria Importance
Fig. 6
6 Fig.6 Interventions Correspond to Cost-effectiveness Priority Graph
Fig. 7
7 Fig.7 Interventions Correspond to Efficiency Priority Graph
Fig. 8
8 Fig.8 Interventions Correspond to Durability (a) Priority results (b) Priority Graph
Fig. 9
9 Fig.9 Interventions Correspond to Environment Impact Priority Graph
Fig. 10 Priority
10 Fig.10 Priority Graph for Interventions Correspond to Legal criterion
Figure 1 .
1 Figure 1. Type of heat resilience interventions.
Figure 1 .
1 Figure 1. Type of heat resilience interventions.
Figure 2 .
2 Figure 2. Linguistic scale for rating the importance of criteria and performance of alternatives.
Figure 3 .
3 Figure 3. Research methodology.
Figure 3 .
3 Figure 3. Research methodology.
Appl. Sci. 2022, 12, x FOR PEER REVIEW 12 o
Figure 4 .
4 Figure 4. Graph of calculated error in pairs.
Figure 4 .
4 Figure 4. Graph of calculated error in pairs.
VIKOR❖❖❖❖
Viekriterijumsko Kompromisno Rangiranje MOORA Multi-Objective Optimization Ratio Analysis MCDA Multi Criteria Decision Analysis Conclusions This thesis aimed to analyse the effect of Urban Heat Stress (UHS) and the applications to improve the thermal comfort. The specific work of this thesis concerns the following aspects: characterization of the relevant quantities of thermal comfort as well as action variables. Review of ambient air-cooling of cooling technologies, system dynamic modeling of thermal comfort for prediction, inventory of specific techniques, technologies and tools adapted to the problem of UHS considering different aspects in urban areas such as planning, green spaces, density, energy, air quality and occupant vulnerability. Field monitoring with sensors to measure relevant variables and take action to minimize the effects of heat stress. Development of a decision support tool allowing flexible, dynamic, and predictive use for both designers and users. The main objectives of this research are formulated as follows: To investigate prior research and evaluate the ambient air cooling by green, blue and grey interventions; To investigate prior research on decision support tools for urban heat resilience; Survey on different outdoor heat stress indices; To develop an optimized predictive model for thermal comfort assessment using artificial intelligence;
265 'intensité croissante du stress thermique dans les zones urbaines est devenue une préoccupation importante en raison de son effet direct et néfaste sur la santé humaine et les activités économiques.Il a été observé au cours des dernières décennies que l'augmentation rapide de l'intensité des vagues de chaleur en milieu urbain a suivi le réchauffement climatique. Ces vagues de chaleur extrêmes ont un impact dangereux sur l'environnement urbain et la population, augmentant à terme le taux de morbidité et de mortalité.La densité de population est le facteur majeur car la vulnérabilité aux vagues de chaleur augmente et donne lieu au phénomène d'îlot de chaleur urbain (UHI)où la ville est plus chaude que ses environs. La densité des zones urbaines augmente rapidement en raison de l'augmentation des taux de natalité et la migration des personnes des milieux ruraux pour améliorer leur vie au moyen de meilleurs revenus, de ressources, d'une meilleure société et, finalement, d'une vie meilleure.Les activités anthropiques humaines quotidiennes dans les communautés urbaines émanent un nombre énorme de particules de contamination dans l'air urbain, ce qui augmente la vulnérabilité des personnes aux toxines de l'air. De plus, la superposition du stress thermique et de la contamination de l'air rend les gens progressivement impuissants face à l'impact de chaque risque approprié.RésuméRésumé90 De nombreuses villes ont appliqué des plans d'intervention d'urgence liés à la chaleur pour réduire les taux de mortalité pendant la canicule. La combinaison de fortes vagues de chaleur et d'une mauvaise qualité de l'air entraîne des risques pour la santé. Le stress thermique peut entraîner un épuisement dû à la chaleur, des crampes de chaleur et des éruptions cutanées et, en raison d'une exposition à long terme, il provoque un coup de chaleur. Les enfants, les personnes âgées, les personnes vivant seules, les femmes enceintes, les asthmatiques et les patients cardio sont particulièrement vulnérables aux deux événements extrêmes (chaleur et pollution) qui nécessitent des interventions supplémentaires. Dans la situation actuelle, si rien n'est adapté pour atténuer l'intensité du stress thermique dans les zones urbaines, l'extrémité de température se propagerait fortement sur la sphère terrestre d'ici 2100. L'élévation incontrôlée de la chaleur extrême aurait un impact considérable sur les communautés et les écosystèmes, ce qui rendrait finalement plus difficile d'y faire face. Des stratégies d'atténuation du stress thermique doivent être appliquées dans les zones urbaines, afin de préserver l'environnement et la santé humaine. Intervenir sur les mesures de résistance à la chaleur, développer un système d'aide à la décision est un défi. Cette thèse porte sur le confort thermique qui combine la modélisation de la prédiction du stress thermique, la cartographie de la vulnérabilité à la chaleur, les mesures sur le terrain et l'application de méthodes de décision multicritères. Cette thèse est composée de 5 chapitres. Les chapitres 1 à 5 sont basés sur des articles de revues ; les conclusions sont fournies à la fin. Le résumé de chaque chapitre est donné ci-dessous. Ce chapitre est un état de l'art qui comprend le contexte du stress thermique et les modèles existants sont discutés dans ces sections. 
Une enquête sur les indices de stress thermique extérieur est menée pour présenter leurs modèles mathématiques existants et mettre en évidence les différents indices de chaleur utilisés officiellement pour mesurer le confort thermique dans différentes régions du monde. Après l'étude des indices de chaleur, on remarque que la plupart des indices peuvent être mesurés et calibrés directement en utilisant des équations et certains indices secondaires peuvent être estimés par différentes méthodes complexes évaluées par leurs modèles. L'analyse de sensibilité des indices de stress thermique les plus courants qui peuvent être estimés par des modèles mathématiques directs, c'est-à-dire la température effective corrigée (CET), l'indice de chaleur (HI), l'indice de mijotage d'été (SSI), le vote moyen prévu (PMV), l'indice d'inconfort (DI), Wet Bulb Globe Temperature (WBGT), Universal Thermal Climate Index (UTCI) sont programmés et simulés pour analyser le coefficient de variation de leurs paramètres évolutifs. Ces indices opérationnels sont également simulés pour analyser la sensibilité des zones d'inconfort à certaines variations d'été sous la fonction minmax. Cette étude conclut après l'enquête que chaque région du globe choisit des indices de chaleur en fonction de certains paramètres spécifiques. Les facteurs qui ont influencé la sélection sont : la sensibilisation des personnes particulièrement vulnérables, les entretiens physiques (connaître l'âge, le sexe, les sensations, les styles vestimentaires, les activités), la corrélation entre les systèmes immunitaires et le nombre d'événements de santé cardiaque. De nombreux pays ne se soucient pas beaucoup des indices de stress thermique, sont moins axés sur la résolution des aléas climatiques et disposent de moins de ressources pour prêter attention aux problèmes environnementaux. Résumé Après cette étude, des techniques de fusion de données sont recommandées pour une meilleure précision de l'analyse et du développement du HVI, augmentant ainsi la fiabilité et diminuant la redondance pour soutenir le processus de prise de décision. Cette recherche met en lumière les solutions suivantes qui peuvent aider les citoyens à lutter contre les épisodes de chaleur : Fournir des informations aux populations locales sur les avertissements de chaleur et les précautions, en accordant une plus grande attention aux personnes vulnérables ; ❖ Mise en oeuvre de pratiques adaptatives proactives telles que les ombrages, les infrastructures bleues et la verdure lorsque le score HVI est supérieur à Surveillance régulière pendant la saison estivale dans la ville. De plus, un plan de mesure a été préparé après avoir sélectionné trois sites pilotes à pied dans le centre-ville d'Amiens, en France. Ces sites sont des espaces publics ouverts où les piétons et les touristes visitent et utilisent souvent les bancs publics. Aux heures de pointe d'une journée chaude, l'ombre des arbres et les mesures environnantes telles que les fontaines d'eau, la végétation contribuent à rendre l'atmosphère agréable en réduisant la température de l'air. Des mesures ont été effectuées au cours des derniers étés (2021) à Amiens à l'aide d'un capteur nommé Kestrel 5400 pour surveiller l'impact de refroidissement des espèces d'arbres existantes sur les sites pilotes choisis. De plus, deux cycles de surveillance de 15 minutes ont été effectués pour toutes les espèces chaque jour et chaque échantillon a été enregistré toutes les 10 secondes. 
Les jours de mesure étaient ceux où le ciel était clair entre les heures de pointe (12h00 à 16h00) et la position du soleil était du sud-est au sud-ouest. Les images thermiques sont capturées avant les mesures avec la caméra thermique FLIR E6. L'analyse de la surveillance enregistrée Résumé a montré que l'effet de refroidissement des interventions dépend des caractéristiques du pilote et de la taille de la zone. La sélection de l'intervention pour les emplacements souhaitables est importante pour les décideurs qui tiennent compte de certains critères. Les MCDM (Multi-Criteria Decision Methods) sont des aides pour sélectionner et hiérarchiser les alternatives étape par étape. Dans ce chapitre, les MCDM sont appliqués, ce qui aide à sélectionner et hiérarchiser les mesures d'atténuation étape par étape. Tout d'abord, une approche basée sur le processus de hiérarchie analytique (AHP) est appliquée pour choisir les mesures appropriées pour les points chauds. L'évaluation des mesures est obtenue à partir d'un questionnaire où le jugement humain est utilisé pour une comparaison, en fonction de leur perception et de leurs priorités. Ces techniques peuvent aider dans de nombreux domaines d'ingénierie où le problème est complexe et avancé. Cependant, il existe certaines limites des différents MCDM qui réduisent la fiabilité de la décision qui doit être améliorée et mise en évidence. Dans cette étude Elimination and Choice Expressing Reality (ELECTRE) NI (Net Inferior), NS (Net Superior), Technique for Order Preference by Similarity to Ideal Solutions (TOPSIS), Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE), VIekriterijumsko KOmpromisno Rangiranje (VIKOR), l'analyse du ratio d'optimisation multi-objectifs (MOORA), la méthode de la somme des poids (WSM) et la méthode du produit pondéré (WPM) sont appliquées pour la sélection des mesures d'atténuation de la chaleur urbaine selon certains critères Les modèles sont appliqués à l'aide de critères de pondération déterminés de deux manières ; (i) méthode de pondération directe et (ii) processus de hiérarchie analytique (AHP) pour un facteur de pondération précis grâce à une comparaison par paires. Cette recherche numérique a évalué la fiabilité des MCDM utilisant la même matrice de décision sous différentes techniques de normalisation et montre Résumé l'impact de l'AHP sur la décision. Les résultats montrent que WSM et PROMETHEE fournissent des résultats fiables et cohérents pour toutes les techniques de normalisation. De plus, le LnN est une technique de normalisation plus raisonnable et fournit un classement similaire dans la majorité des MCDM appliqués. On remarque que le couplage d'AHP aide à minimiser l'erreur de fréquence par la méthode par paires pour les poids des critères qui augmentent la fiabilité de la décision. Dans cette étude, la priorité des murs verts et des arbres est une sortie arbitraire des décideurs. Le classement obtenu sur les paramètres n'est pas une règle générale et cette procédure a été effectuée pour vérifier la fiabilité. Les résultats sont entièrement basés sur le terrain, les perspectives, les caractéristiques des pilotes, les conditions climatiques et les apports des décideurs. L'amélioration de la fréquence des résultats de classement similaires par AHP-MCDM a révélé que les résultats de classement dépendent principalement de la nature et des valeurs des critères. Le désaccord raisonnable qui a été observé entre les méthodes n'a pas affecté leur fiabilité. 
En conséquence, les modèles MCDM se sont avérés généralement très efficaces pour traiter le problème UHS avant leur mise en oeuvre et sélectionner les meilleurs.Résumé
Table 1.1: Literature reviewed regarding blue infrastructure.

Symbol | Type of water feature | Measurement method | Location | Monitoring time (summer) | Cooling effect | Indicator | Ref.
a | Water misting system (cloud droplet) | Field measurements | Rome and Ancona (Italy) | Day | 8.2 to 7.9 °C | UTCI °C | [22]
b | A fountain of water spray | Field measurements and CFD simulations | Rotterdam (Netherlands) | Day | 5 °C / 7 °C | Ta °C / UTCI °C | [23]

Note: In this review, the effectiveness of most interventions presented was recorded in outdoor spaces in the summer; those also taken in winter are indicated by ~ and those in spring by ^.
Table 1.2: Literature review of natural green infrastructure.

Symbol | Natural green infrastructure | Measurement method | Location | Monitoring time (summer) | Cooling effect | Indicator | Refs.
a | Grass | Experimental measurements | Manchester (UK) | Day | up to 24 °C / 0 °C to 3 °C | Ts °C / Ta °C | [29]
b | Green coverage (grasslands and broad-leaved trees) | Envi-met simulations | Freiburg (Germany) | Day | up to 43 °K / 22 °K / 3.4 °K | Tmrt °K / PET °K / Ta °K | [38]
Table 1.3: Literature review of constructed green infrastructure.

Symbol | Constructed green technology | Measurement method | Location | Monitoring time (summer) | Cooling effect | Indicator | Refs.
a | Green wall, especially Stachys and Hedera | Measurements | UK | Day | 3 °C, and up to 7 °C for special species | Ta °C | [49]
b | Green wall/facade | Measurements | La Rochelle (France) | Day and night | up to 4 °C | Ta °C | [50]
c | Green wall | Envi-met simulations | Colombo | Day | 1 °C to 2 °C | Ta °C | [51]
d | Green wall | Microscale modeling | Riyadh (KSA) | Day | up to 1.2 °C to 9.3 °C | Ta °C | [52]
e | Vertical greenery systems | Measurements | Singapore | Day | 3.3 °C / 4 to 12 °C | Ta °C / Tw °C | [53]
f | Vertical wall with green vegetation | Experimental measurements and C-M simulations | Thessaloniki (Greece) | Night | 1.5 °C / 0.58 °C to 3.5 °C | Ta °C / Tw °C | [54]
Table 1.4: Literature review of constructed grey (shades) infrastructure.

Symbol | Type of constructed shading | Measurement method | Location | Monitoring time (summer) | Cooling effect | Indicator | Refs.
a | Optimized awnings | Measurements | Central Italy | Day | 2.5 °C | PET °C | [62]
b | Sun sails | Measurements | Pecs (Hungary) | Day | 5.8 °C to 10 °C | PET °C | [65]
c | Black and white sun sails | FEM simulations | Cordoba (Spain) | Day | up to 3 °C / 12 °C | Ta / Ts | [60]
d | Overhead shaded structures | Measurements | Milan (Italy) | Day | 0.8 °C / 6.1 °C / 8.2 °C / 5.3 °C | Ta °C / Tgb °C / PET °C / UTCI °C | [64]
e | Architectural shading | Envi-met simulations | California (USA) | Day | 1.6 °C to 1.7 °C | Ta °C | [61]
f | Deep canyons | Measurements | Fez (Morocco) | Day | 6 °K to 10 °K | Ta °K | [63]
Table 1. Review of academic research on multi-criteria DST approaches for urban heat mitigation (aim of the study and method; full table content not reproduced here). One of the reviewed entries, by Rajashree Kotharkar [18], provides a quantitative analysis of urban geometric factors, street orientation, and thermal comfort; socio-economic condition assessments remain a limitation in that study.
Table 2 .
2 Review of DSTs.
Tool Name Users Climate Change Fields of Actions Database for UHS Language Tool Information Indicator Interventions for UHS Projects Refs
Practice guides of 78
Developed for Energy, health, adaptation measures are Online toolkit.
urban planners tourism, water, available for resisting heat • Search ability of the Bundesinstituts
and policymakers infrastructure, events, and among all only 3 entire database with a für Bau-, Stadt-
Stadtklimalotse from small and transportation, are about green spaces in simple search mask. und
(Urban climate medium-sized green spaces, air open public and private German • Does not attempt to - Green spaces Raumforschung [34]
pilot) towns and cities quality, spaces, 330 links to make direct (BBSR)-under
who need quick agriculture, legislative texts, and 61 recommendations for different projects.
and easy access to forestry, heat examples for planning and action. Developed and (Germany)
information. stress. implementation of heat published in 2013.
stress measures.
• Input data WBGT are
WBGT decision support tool High school athletes adjust practice schedules based on heat threat through the week Heat stress • • • temperature, dew point temperature, wind speed, relative humidity, pressure. Forecasting data from National Digital Forecast Database (NDFD). Past 24 h: Real-Time Meso-scale Analysis (RTMA). WBGT estimations are compared to from Kestrels at 2 sites measurements taken English Online tool • Publicly accessible tool North Carolina. assesses hourly WBGT which helps to avoid heat stress exposure by making informed decisions about when to schedule outdoor • • Spatial coverage is for WBGT risk categories. actions to take for Provides guidelines for activities. WBGT - Collaboration between the State Climate Office of North Carolina, Assessments Sciences and Integrated and the Carolinas the SE Regional Climate Center, [35]
and an ExtechHT30 at
1 site.
Table 2 .
2 Cont.
Tool Name Users Climate Change Fields of Actions Database for UHS Language Tool Information Indicator Interventions for UHS Projects Refs
California Heat Assessment Tool (CHAT) Target practitioners group includes local government such as urban planners, policy makers, public health associations and agencies. Long-term public health impacts of extreme heat. Meteorological dataset (minimum temperature (Tmin), maximum temperature (Tmax), minimum vapor pressure deficit (vpdmin), and maximum vapor pressure deficit (vpdmax)) for the years 1984-2013 were obtained from the PRISM Climate Group, and data were extracted at a daily time-step and at a resolution medical and meteorological data and set a threshold for prediction mapping of heat publications are available. vulnerability data, solutions, health events (HHEs). Heat English Decision support user-friendly web tool. • Generates projected heat health event maps changing between 2011 and 2099. Helps to identify existing areas of need over 63 unique, health-informed heat thresholds tailored to • limited to California. Spatial coverage and demographics. tapestry of climates California's diverse Projected heat tree canopy). UHI delta, % of events, heat vulnerability, social vulnerability (% of outdoor workers, poverty, no health ozone exceedance, concentration, (PM 2•5 environment diseases) safety diploma, no vehicle access), health events (rate cardiovascular of asthma and - Four twenty-seven tool project) conducted UNA (California heat [36,37]
Table 2 .
2 Cont.
Tool Name Users Climate Change Fields of Actions Database for UHS Language Tool Information Indicator Interventions for UHS Projects Refs
• Decision support Excel A toolkit
Nature-based solution selection tool Urban planners, municipalities Challenges city is facing:Heat waves, biodiversity, flooding, public health and wellbeing, water quality, urban renewal, air quality and green space provision. • • Provides challenges and nature-based solutions catalog and gives recommendations for solutions of challenges with respect to users' input. methods. decision-making multiple criteria evaluated through Priority factors are English • toolkit. Provides decision interventions considering political and executive support, innovation and risk disciplines, culture of departments and alignment of internal management skills, suitable internal regulation policy, staff time and motivation, advanced community - 18 green interventions and cool pavements developed under the project of URBAN Green Up funded by the European Union's and Izmir Liverpool (UK), Valladolid (Spain), 3 European cities: project, including Horizon 2020 involved in this program. Eight cities were [40]
tolerance. (Turkey).
DST in a detailed document.
• Explains how and
when local government
can adopt each method
considering several
criteria, including Published by
Adapting to the urban heat Local government Urban heat mitigation Potential energy savings maps and thermal images of locations with and without interventions are presented to indicate benefits. English effectiveness at reducing heat, improving public health, saving money, and providing Benefits and co-benefits analysis. Cool roofs, green roofs, cool urban forestry. pavements, and Georgetown climate center-A federal policy for state and leading resource [41]
environmental (America).
co-benefits, and
governance criteria
including
administrative and
legal considerations.
Table 2 .
2 Cont.
Tool Name Users Climate Change Fields of Actions Database for UHS Language Tool Information Indicator Interventions for UHS Projects Refs
Step by step provides: links
of Climate-ADAPT case
studies of concrete examples
from multiple European This tool is based on the
Urban adaptation support tool Decision-makers, urban municipalities practitioners and Climate change; heat waves, snow, drought. scarcity, ice and flooding, water cities, guidance and tools relevant to local adaptation database resources, relevant and other Climate-ADAPT action, publications, reports English adaptation policy cycle, assists cities with making guidelines and database valuable support in detailed climate strategy and offers - Green spaces Published and European project updated under [42]
EU-funded projects, through adaptation plans
Covenant of Mayors for
Climate and Energy
resources.
Tool developed
Microclimate and Urban Heat Island Mitigation Decision-Support Tool Government municipalities, urban planners, and urban policymakers Thermal comfort and vulnerability, UHI due to climate change Fact sheets and publications and case studies are available. English • This spatial web tool for Sydney aims to integrate scientific models, performs and mitigation strategies. evidence-based UHI assesses UTCI Vegetation, coatings shading, water bodies, building under the project named RP2023 was carried out by government and collaboration with University in UNSW Sydney and Swinburne [43]
industry partners.
The tool is
Climate Resilient city toolbox Urban planners, landscape architects Heat stress, pluvial water safety, pluvial floods, and drought. • Handbook for adaptation measures, description of adaptation key performance indicators, water balance model, and multi-criteria score tables of the selection tool in terms of suitability are available. Dutch • • Spatial web tool offers 18 adaptation measures for reducing heat stress and estimating the intervention's cost. Various plan alternatives (scenarios) this tool. adaptation goals by previously set each other, and with up, compared with can be quickly drawn PET • C 10 green and 7 and 1 albedo. blue interventions in different ways developed by the cooperation of the following Dutch (Netherlands) partners: Deltares enabling and Hogeschool Slabbers, Tauw TNO, Bosch Groen Blauw, Research, Atelier University and delta life, Wageningen [44,45]
van Amsterdam.
Table 2 .
2 Cont.
Tool Name Users Climate Change Fields of Actions Database for UHS Language Tool Information Indicator Interventions for UHS Projects Refs
Tool based on:
• Land Surface • Spatial web tool for
Temperature layer is Minneapolis indicates
derived from data from land surface
Extreme Heat Map tool Urban planners, local government, community Climate vulnerability assessment • Landsat 8 Thermal Infrared Sensor (TIRS) imagery taken during a heatwave in July of 2016. Other data include the 2016 Generalized Land English • temperature on GIS map, assesses the effectiveness of tree shades. Allows users to determine what land cover classes may Land surface temperature. Tree shades (Coniferous and Deciduous tree wetlands). canopy and shrub Developed under Metropolitan (Minneapolis) assistance Council local planning [46]
Use and the 2015 Twin contribute to mitigate
Cities Metropolitan the extreme heat.
Area 1m Urban Tree
Canopy Classification.
• Spatial Web tool for
Antwerp.
• Calculate the
adaptation measures'
Groen Tool City planners, planters, builders, designers, analysts, maintainers, etc. Heat stress, air quality, water management, biodiversity, sound, CO 2 absorption, and recreation and proximity. • Policy documents, practice booklet, literature and case studies, and plans are available on the website. Dutch • impact for the selected quality (PM 10 PM 2•5 , area. Effectiveness maps of green adaptive measures for all themes, i.e., heat stress, sub themes (e.g., air air quality, etc., with UH impact • C, average radiation temperature • C Different green combinations. measures and their The city (Belgium) commissioned develop this tool. VITO and Ghent University to [47]
NO 2 and elemental
carbon)) and indicates
the high, medium and
low risk areas.
Table 2 .
2 Cont.
Tool Name Users Climate Change Fields of Actions Database for UHS Language Tool Information Indicator Interventions for UHS Projects Refs
• A free and
user-friendly spatial
database management
• Pilot tool but not
(Bologna/Modena, Venice/Padua, Wien, • open-source. Allows users to choose
Stuttgart, mitigation actions at
Lodz/Warsaw, building and urban
Decision Support System (DSS) Urban planners, decision-makers and users who are interested in mitigating urban heat. UHI mitigation • Ljubljana, Budapest and Prague) simulations based on the data collected within the UHI project. Provides user-defined report in an Html page which comprises three primary components: a climate change assessment of the selected area, a set of normative data English • • scale and analyze the feasibility of the selected measures on an interactive map of central Europe. Provides economic assessment of chosen measures through online calculator. A set of maps shows change in the average temperature in every annual mean Change in annual mean temperature and surface temperature, heatwave frequency. • Cool roofs, green roofs, green facades, cool pavements, planting parks. canyon and the urban trees within Fund. Tool developed by UHI. The project was implemented through the Development Regional European co-financed by the Central Europe Programme [48,49]
applicable to the decade, changes in
selected area and skills, annual near-surface
a bunch of potential temperature during
mitigation strategies. 2021-2050 and
2071-2100 and heat
wave frequency during
1961-1990 and
2071-2100.
It is known that every testing (pilot) site is different, depending on several factors such as climate, population, group of persons, building infrastructure, availability of existing interventions and number of heat events. The development of a DST depends on the scale of the project. Objectives and limited spatial coverage are always a drawback, because all decision results are based on the data of different pilot sites and the tools are built on those characteristics, with the probability of filtering out specific opinions.

Table 3. DSTs with respect to each criterion.
Evaluation Criteria/ Tools Stadtklimalotse [34] WBGT Decision Support Tool [35] CHAT [37] Right Place-Right Tree [38] NBS Selection Tool [40] Adapting to the Urban Heat [41] Urban Adaptation Support Tool [42] Microclimate and Urban Heat Island Mitigation Decision-Support Tool [43] Climate Resilient City Toolbox [44] Extreme Heat Map Tool [46] Groen Tool [47] Decision Support System (DSS) [49]
Expert assistance + + + + + + + + + - + +
Social culture and other factors - - + + + - - - + - - +
Adaptive capacity - + + + + - - + + - + +
Good integration - - + + - - - + + - + +
Input requirements - - - - + - - + + - + +
Political and
administrative + + + + + + + - - + + more or less
support
Quick assessment of interventions - - - + + - - + + + + +
GUI and visualization - + + + - - - + + + + +
Vegetation + - - + + + + + + + + +
Other interventions - - - - + + + + + - - +
Cost analysis - - - - - - - - + - - +
Spatial - + + + - - - + + + + +
Heat stress indicator - + + + - - - + + + + +
User-friendly + + + + + - - + + + + +
Uncertainty assessment - - + + + - - - + - - +
Recommendation priority
Table 4 .
4 Color code scale.
Color Codes Explanation
Covers all criteria (Highly recommendable)
Covers 14/15 criteria (Highly recommendable)
Covers 12/15 criteria (Strongly recommendable)
Covers 11/15 criteria (Strongly recommendable)
Covers 10/15 criteria (Recommendable)
Covers 7/15 criteria (Slightly not recommendable)
Covers 4/15 criteria (Not recommendable)
Table 2.2: Outdoor heat stress indices and their method of estimation (heat stress indices and formulas).

Important remark: Each heat index relies on different assumptions/calibrations (such as body size, physical fitness, etc.) and treats temperature and humidity differently. A high-heat-stress event indicated by one index does not necessarily transfer onto another index. For example, the original equation of WBGT was derived and calibrated using US Marine Corps recruits during basic training.

Figure 2.2: Map showing outdoor heat stress indices officially used around the world.
Local climate characteristics in regions (e.g., sub-tropical, tropical, Mediterranean) influence human thermal conditions and physiology. Many countries monitor heat stress using different parameters and estimates or measures through different indicators to inform locals about heat event warnings.
2.5 Results and Discussion

Parameters like air temperature, relative humidity, and wind speed are helpful for the measurement of heat stress indices. WBGT is the most widely used index in countries such as Australia, the U.S.A., Europe, Japan, and Columbia, because it can be measured by sensors and estimated by a simple mathematical equation. The choice of heat index depends on several characteristics of an area, such as weather patterns (hot and dry, semi-humid, cold-winter regions, etc.), age groups of the population (elderly, children, youth) and activities. A limited survey study was conducted to investigate the indices being used all over the world. The information on the agencies officially involved in the assessment of heat is presented in Annex Table A2.1 and on the map shown in Figure 2.2. Countries highlighted in yellow on the map use indices that can be measured using sensors.
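As an illustration of the "simple mathematical equation" mentioned above, the sketch below computes the standard outdoor WBGT from natural wet-bulb, globe and air temperatures; the example input values are hypothetical.

```python
def wbgt_outdoor(t_wet: float, t_globe: float, t_air: float) -> float:
    """Standard outdoor WBGT (ISO 7243) from natural wet-bulb (Tw),
    globe (Tg) and dry-bulb air (Ta) temperatures, all in deg C."""
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_air

# Hypothetical summer afternoon readings
print(wbgt_outdoor(t_wet=22.0, t_globe=45.0, t_air=30.0))  # -> 27.4
```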
2.5.1 Sensitivity Analysis of Heat Indices versus Variables

In this section, we consider small variations (theoretically infinitesimal) around an operational point defined by a set S0, where the chosen values for the parameters involved in each index correspond to a comfortable situation. Multi-variable partial differentiation is used for the sensitivity analysis of the operational HS indices, namely HI, WBGT, PMV, SSI, CET, DI and UTCI.

Table 2.3 gives the values of the set S0 which correspond to the comfort zone. For example, with Tg0 = 45 °C and Tw0 = 16.27 °C, CET indicates comfortable conditions; however, small variations could lead to slight discomfort. Subsequently, the sensitivity is computed with respect to small variations around the chosen reference values.

Consider a heat index y as a function of explicative variables x1, ..., xn, that is

    y = f(x1, ..., xn)        (eq. 2.1)

where y (thermal index) is a function of the input variables x (e.g., Tg, Ta, Tmrt, RH, v, Tw, met, mw).
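A minimal numerical illustration of this sensitivity computation is sketched below: the partial derivatives ∂y/∂xi are approximated by finite differences around S0, here using the outdoor WBGT formula as the index f. The perturbation size and operating point are assumptions chosen for the example.

```python
import numpy as np

def finite_diff_sensitivity(f, x0, h=1e-4):
    """Approximate the partial derivatives of f at the operating point x0."""
    x0 = np.asarray(x0, dtype=float)
    grads = []
    for i in range(len(x0)):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        grads.append((f(xp) - f(xm)) / (2 * h))  # central difference
    return np.array(grads)

# Example index: outdoor WBGT as a function of (Tw, Tg, Ta)
wbgt = lambda x: 0.7 * x[0] + 0.2 * x[1] + 0.1 * x[2]
print(finite_diff_sensitivity(wbgt, [16.27, 45.0, 25.0]))  # -> [0.7, 0.2, 0.1]
```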
Table 2.3: Sensitivity analysis of heat stress indices.

Heat stress index: CET
S0: CET0 = 27.83 °C, Tg0 = 45 °C, Tw0 = 16.27 °C
Sensitivity around S0: ∂CET/∂Tg|0 = 0.19, ∂CET/∂Tw|0 = -0.08
Partial differential:
    ∂CET/∂Tg|0 = { 1.21[1 + 0.029(Tg0 - Tw0)] - 1.21 × 0.029 × Tg0 } / [1 + 0.029(Tg0 - Tw0)]^2
Table 2.5: Sensitivity of comfort level.

Index | y | Δy
HI | 78.74 | 80.78
SSI | 78.52 | 79.67
PMV | 0.37 | 0.29
WBGT | 23.11 | 23.11
UTCI | 28.52 | 28.52
CET | 27.83 | 27.94
DI | 20.24 | 20.76
Table A2 .1. List of Outdoor Heat Stress Indices are officially used in Countries.
A2 T 𝑎 , T w , T g .
Chapter -3 Chapter -2 Chapter -2 Chapter -2
Eastern and western Egypt HSI Required Sweating DI (hot dry climate) T a , RH, M, convective heat RH, Sweat rate, M https://www.ncdc.noaa.gov/societal-https://www.weather.gov/oun/safety-https://www.weather- National Centres for National Weather service (W.S) Weather Forecast
Region/country name TSI Columbia WBGT Dallas WBGT Tulsa California SSI (Pacific Ocean and Mediterranean regions Australia WBGT (for civilians) AT (for workers) South Africa PMV (overrated summertime sensation and underestimated in Sudan CET Nigeria CET TWL WBGT Sweden (Stockholm) HSI (elder population) T a , RH, M, convective heat Evolve Parameters T https://columbiaweather.com/products/we Sources ather-stations/wet-bulb-globe-temperature/ https://perryweather.com/weather-station/ Perry weather Agency name Columbia weather systems https://www.weather.gov/arx/wbgt National weather services https://www.weather.gov/ exchange, radiant heat exchange, 𝑣 impacts/heat-stress/climatology Environmental Information T w , T a , T g , RH, 𝑣 http://www.bom.gov.au/ Australian Government Bureau of meteorology summer-heathumidity 𝐼 𝑐𝑙 , M, RH, 𝑇 𝑚𝑟𝑡 . https://customweather.com/ Pietermaritzburg, South Africa forecast.com/maps/Egypt https://worldweather.wmo.int/en/country. html?countryCode=203 World Meteorological organization (M.D) https://www.nimet.gov.ng/ Chapter -3 NiMet (M.D) T a , RH, 𝑣 physiological data like height, age, T w , T a , T g , RH, WS https://mainichi.jp/english/articles/201807 19/p2a/00m/0na/004000c Ministry of environment exchange, radiant heat exchange, WS https://www.smhi.se/en/q/Stockholm/267 3730 SMHI UAE (Abu Dhabi) TWL (Occupational) for workers physiological data like height, age, sweat rate, T a , T w , T g . https://weather.com/weather/today/l/Abu +Dhabi+Abu+Dhabi+Emirate+United+A The weather channels Iran PET (hot dry climate) T html?countryCode=114 World Meteorological Organization Canada HU (humid weather patterns) T a , dew point temperature, RH factor, molecular weight of water, latent heat and gas constant. http://ec.gc.ca/meteo-weather/default.asp?lang=En Weather and Meteorology. Retrieved May 19, 2016, sweat rate, Japan (Tokyo) winter) China SET (subtropical regions) RH, 𝑣, M, radiation temperature, T w , T a http://en.weather.com.cn/ T g , T w , RH, 𝑣 Heat Stress Modeling Using Neural Networks Weather China Technique
Europe weather) UTCI (semi humid hot RH, T a in Fahrenheit T a , average radiation temperature, rab+Emirates?placeId=0755f9b1a0f8538 https://climate-8ca0d9510010eed3e6274c95ec9ecc1a835 European Environmental
Maryland Heat index (elder summer and cold WS, RH. https://www.weather.gov/safety/heat-adapt.eea.europa.eu/knowledge/european-3af4782d304238 NOAA National Weather agency
population) winter regions) index climate-data-explorer/health/thermal- Services (Weather prediction
Bangladesh DI T 𝑎 , RH comfort-indices-universal-thermal-http://live.bmd.gov.bd/ centre) Bangladesh Meteorological
climate-index#details department
Miami Oxford Index T w , T a https://www.miamioh.edu/cas/academics/ Ecology Research Centre
centers/erc/weather-station/index.html
Chapter 3: Heat Stress Modeling Using Neural Networks Technique

This chapter has been published as "Heat stress modeling using neural networks technique" in the journal IFAC-PapersOnLine. The paper is attached as Paper II with kind permission from the journal and can be cited as:

Qureshi, A. M., & Rachid, A. (2022). Heat stress modeling using neural networks technique. IFAC-PapersOnLine, 55(12), 13-18. https://doi.org/10.1016/j.ifacol.2022.07.281

Keywords: Artificial intelligence, thermal comfort, modeling, system dynamic approach and urban heat stress.
The machine learning and deep learning models listed in Table 3.1, including the Gated Recurrent Unit (GRU), are used to predict the outputs mean radiant temperature (Tmrt), Predicted Mean Vote (PMV) and Physiological Equivalent Temperature (PET) from the input variables (Ts, Ta, GR, RH and Ws) for the estimation of heat stress. Comparative analysis of the obtained results showed that SVM is an efficient approach, but GRU is the most reliable and accurate, with the highest testing efficiency among the algorithms applied for dealing with weather data and estimating the thermal comfort level. The applied algorithms' performances are presented in Table 3.1, and the outputs of the testing samples achieved from each model are also plotted.
Table 3.1: Algorithm performance with the respective data.

Model name | Training efficiency (%) | Testing efficiency (%) | MSE train | MSE test
Machine learning
Decision Tree | 86.21 | 28.80 | 0.02 | 0.43
Random Forest | 87.12 | 33.23 | 0.019 | 0.4
Support Vector Machine | 98.54 | 98.52 | 0.00043 | 0.00077
Deep learning
Long Short-Term Memory | 84.93 | 55.38 | 0.0047 | 0.012
Simple Recurrent Neural Network | 97.49 | 88.16 | 0.00081 | 0.0035
Gated Recurrent Unit | 99.34 | 99.04 | 0.00022 | 0.00028
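For illustration, a minimal GRU regression model of the kind summarized in Table 3.1 is sketched below with Keras; the layer sizes, window length, number of outputs and training settings are assumptions made for the example, not the exact configuration used in this work.

```python
import numpy as np
from tensorflow import keras

n_steps, n_features = 24, 5   # e.g. 24 time steps of (Ts, Ta, GR, RH, Ws)
n_outputs = 3                 # Tmrt, PMV, PET

model = keras.Sequential([
    keras.layers.Input(shape=(n_steps, n_features)),
    keras.layers.GRU(32, return_sequences=False),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(n_outputs),            # regression outputs
])
model.compile(optimizer="adam", loss="mse")

# Dummy data, only to show the expected input/output shapes
X = np.random.rand(100, n_steps, n_features).astype("float32")
y = np.random.rand(100, n_outputs).astype("float32")
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))        # MSE on the dummy set
```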
1. INTRODUCTION

In 2003, Europe experienced the driest and hottest summer since AD 1500, responsible for a death toll of 30,000 from the heat. Confronting this, a team of scientists (Poumadere et al., 2005) suggested that this European heat wave could have occurred due to climate change. The increase in temperature worsens health problems. In addition, from a physiological point of view, during peak summers, when the temperature exceeds the normal tolerance limit of the human body, it can lead to circulatory collapse or dehydration leading to death. This is particularly unfavorable for the vulnerable population, comprising children, old aged people, and handicapped people (Luber and McGeehin, 2008). Due to human activities, the properties of the local climate are altered (Kalnay and Cai, 2003). Urbanization around the world is happening at an accelerating rate, and with it global warming is increasing Heat Stress (HS). It is imperative to identify the factors that contribute to the intensification of HS (Fischer and Schär, 2010; McDonald et al., 2011). In general, the external factors that influence the formation of Urban Heat Stress (UHS) are seasons, synoptic conditions and climate. Many researchers have worked to extract the influence of meteorological parameters on HS (Arnds et al., 2017; Hoffmann, 2012; Hoffmann and Schlünzen, 2013b; Ivajnšič and Žiberna, 2019). Many parameters are considered while calculating HS in metropolitan cities (Akbari et al., 1990). The fundamentals of various numerical micro-scale models emphasize the synergy of the urban fabric (land surface and building materials) and meteorological parameters: solar radiation (diffuse, direct), airflow and heat transfer from open surfaces (Grimmond, 2007). Air temperature (Tair) is the most used input parameter, followed by mean radiant temperature (Tmrt), then surface temperature (Ts); wind speed (Ws) was rarely used as an input parameter (Mirzaei, 2015).

In 2019, a study on the assessment of thermal comfort in open urban areas used the input factors location, activity, gender, locality, age group, globe temperature, Tair, Solar Radiation (SR), Relative Humidity (RH), Ws and wind direction, and the outputs were Predicted Mean Vote (PMV), Physiological Equivalent Temperature (PET), Tmrt and Standard Effective Temperature (SET) (Tsoka et al., 2018). In another study, an advanced algorithm based on neuro-fuzzy logic was used to create predictive models (Kicovic et al., 2019), which concluded that SR has the highest impact, compared to Tair, on the thermal comfort of visitors in urban areas. In 2019, a team at the Kuala Lumpur University Campus worked on Heat Stress Assessment (HSA). The following meteorological data were used in their research as input variables: Ws, wind direction, initial atmospheric temperature, RH, cloud cover, location, soil data, building characteristics, walking speed, mechanical factor, heat transfer and clothing data, to calculate Tmrt, PMV and PET values. Tmrt is the key meteorological parameter that affects the human energy balance; PMV and PET are significant indices which are under the influence of Tmrt (Ghaffarianhoseini et al., 2019).

In previous studies, urban geometry with green adaptive measures was missing during the estimation of HS, which is extremely difficult, mainly due to the complexity of the urban system. Artificial intelligence (AI) is a widely used technique to deal with such problems. The support vector machine (SVM) machine learning approach was used for the estimation of PMV; the results reached 76.7% accuracy, which was twice as high as the widely adopted Fanger model with its accuracy of 35% (Farhan et al., 2015). Recently, deep learning (DL) approaches have been reported with high-precision results; in particular, Convolutional and Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) have been used to predict hourly Tair with much less error (Hewage et al., 2021). In this study, a system dynamic approach is used for choosing the influential weather variables for HSA. The Rayman model (Matzarakis et al., 2007b; Matzarakis et al., 2010) is initially used for the collection of simulated PET, PMV and Tmrt data and for comparative reference.
... reach 41 °C [10]. The hottest month of the year in Amiens is July. It has been reported that the mean annual air temperature between 2000 and 2018 increased by 1 °C above the 20th century average, with 2003, 2011, 2014, 2017, and 2018 being the warmest years.
Table 1. Estimation of social vulnerability factors.

Factor | Estimated statistics | Year | Source | Reference
Poverty rate | 15% (17,045 inhabitants) | 2020 | French newspaper "Courrier Picard", 2020 | -
Elderly population >65 | 19% (25,246) | 2014 | National Institute of Statistics in France (INSEE) | [19]
Illiteracy rate (no diploma, aged >15 years) | 22% | 2015 | Municipality of Amiens city population | [20]
Illness ratio of the elderly population | 28 out of every 200 patients | 2000 | Insurance company survey | [21]
Cardiovascular patients | 799 (elders = 112) | 2008 | Research paper | [22]
Asthma patients | 8%, total = 10,629 (elders = 1400) | 2014 | Eurostat | [23,24]
Other respiratory diseases excluding asthma | 6%, total = 7972 (elders = 1100) | 2014 | Eurostat | [23,24]
Socially vulnerable elders in the summer | 2000 | 2014 | - | -
Table 2. LULC of Amiens.

S.No. | Class | Area (%) | Category for PCA | Total area (%)
1 | Artificial surfaces | 31.65 | Built-up area | 31.65
2 | Coniferous tree cover | 1.17 | Vegetation | 56.88
3 | Cultivated areas | 26.69 | Vegetation |
4 | Deciduous tree cover | 9.29 | Vegetation |
5 | Herbaceous vegetation | 3.93 | Vegetation |
6 | Moors and heathland | 15.80 | Vegetation |
7 | Natural material surfaces | 1.99 | Open areas | 1.99
8 | Marshes | 4.68 | Wetlands | 2.22
9 | Peatbogs | 2.54 | Wetlands |
10 | Water bodies | 2.26 | Water | 2.26
 | Total | 100 | | 100
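The vegetation and water indicators used in the PCA below (NDVI, NDWI) can be derived from satellite band rasters. A minimal sketch is given here, assuming green, red and near-infrared reflectance bands already loaded as arrays; the band handling and sample values are illustrative only.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index (McFeeters) = (Green - NIR) / (Green + NIR)."""
    return (green - nir) / np.clip(green + nir, 1e-6, None)

# Illustrative reflectance arrays (e.g. resampled satellite tiles)
green = np.random.rand(100, 100)
red   = np.random.rand(100, 100)
nir   = np.random.rand(100, 100)
print(ndvi(nir, red).mean(), ndwi(green, nir).mean())
```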
Table 3. The cumulative contribution of variables (extraction sums of squared loadings).

Factor | Eigenvalue | Difference | Proportion | Cumulative
1 | 14.39 | 6.31 | 0.45 | 0.44
2 | 8.07 | 4.40 | 0.25 | 0.70
3 | 3.67 | 2.33 | 0.11 | 0.81
4 | 1.34 | 0.27 | 0.04 | 0.85
5 | 1.07 | 0.26 | 0.03 | 0.89
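A minimal sketch of how such eigenvalues and explained-variance proportions can be obtained from a standardized indicator matrix is shown below; the indicator matrix here is random and purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative matrix: rows = spatial units, columns = vulnerability indicators
X = np.random.rand(200, 32)

X_std = StandardScaler().fit_transform(X)      # z-score each indicator
pca = PCA(n_components=5).fit(X_std)

print("eigenvalues:", pca.explained_variance_.round(2))
print("proportion:", pca.explained_variance_ratio_.round(2))
print("cumulative:", pca.explained_variance_ratio_.cumsum().round(2))
scores = pca.transform(X_std)                  # component scores per spatial unit
```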
Table A2 .
A2 Variable correlation magnitude with each component.
Variable PC1 PC2 PC3 PC4 PC5 Uniqueness
NDWI -0.3153 -0.4014 0.6994 -0.0494 0.116 0.2344
NDVI -0.4412 -0.3843 0.5793 -0.2461 0.0486 0.2591
Total population 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
Age ≥65 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
Age ≥65 + asthma 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
Age ≥65 + respiratory 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
Age ≥65 + cardio 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
Age ≥65 + living alone 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
Artificial surfaces 0.7689 0.4482 0.0143 0.1858 0.0711 0.1682
Natural material 0.0125 0.1735 -0.1649 0.4099 0.6498 0.3523
Water bodies -0.0301 -0.2768 0.5838 0.2811 0.113 0.4899
Merged vegetation -0.6601 -0.343 -0.2613 -0.3114 -0.2607 0.2134
Wetlands 0.0924 -0.0937 0.6548 0.0701 0.2628 0.48
Mean_AQI_2019_avg_N2      0.7961   -0.5903   -0.0698   -0.0861    0.0679   0.0008
Mean_AQI_2019_avg_NO      0.7961   -0.5903   -0.0698   -0.0861    0.0679   0.0008
Mean_AQI_2019_avg_O3     -0.7961    0.5903    0.0698    0.0861   -0.0679   0.0008
Mean_AQI_2019_avg_PM10   -0.7961    0.5903    0.0698    0.0861   -0.0679   0.0008
Max_AQI_2019_Max_O3      -0.7877    0.6026    0.0795    0.0773   -0.0605   0.0006
Mean_AQI_2020_avg_N2      0.7961   -0.5903   -0.0698   -0.0861    0.0679   0.0008
Mean_AQI_2020_avg_NO      0.7961   -0.5903   -0.0698   -0.0861    0.0679   0.0008
Mean_AQI_2020_avg_O3      0.7961   -0.5903   -0.0698   -0.0861    0.0679   0.0008
Mean_AQI_2020_avg_PM10   -0.7961    0.5903    0.0698    0.0861   -0.0679   0.0008
Max_AQI_2020_Max_O3       0.8039   -0.5772   -0.0601   -0.0950    0.0762   0.0021
Mean elevation           -0.1990    0.2570   -0.5217   -0.5109    0.1721   0.3316
Mean_AQI_2019 0.2481 -0.6447 -0.1689 0.5041 -0.3919 0.0865
Mean_AQI_2020 0.3431 -0.6756 -0.1677 0.4493 -0.3438 0.0777
Illiteracy 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
Poverty 0.8145 0.4883 0.2666 -0.0608 -0.1032 0.0128
LST hottest day 2020 0.5975 0.5702 -0.3591 0.1494 0.1406 0.147
LST hottest day 2019 0.6638 0.5418 -0.2211 0.1996 0.1399 0.1575
LST hottest day 2018 0.3725 0.3529 -0.6882 -0.0702 -0.0506 0.2555
LST summer mean 2020 0.509 0.5079 -0.6212 0.0583 0.0474 0.0915
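The component loadings above come from a principal component analysis of the standardized vulnerability indicators. A minimal sketch of how such an index can be assembled in Python with scikit-learn is given below; the indicator names, the number of retained components and the weighting of component scores by explained variance are illustrative assumptions, not the exact procedure of the published study.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical indicator table: one row per spatial unit (e.g. a census zone).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ndvi": rng.random(100),
    "built_up_pct": rng.random(100),
    "pop_over_65": rng.integers(0, 500, 100),
    "mean_aqi_o3": rng.random(100) * 100,
    "lst_hottest_day": 25 + 15 * rng.random(100),
})

# Standardize indicators so that each contributes on a comparable scale.
X = StandardScaler().fit_transform(df)

# Keep the leading components that explain most of the variance.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)

# One common choice: weight component scores by their explained-variance ratio
# and rescale the sum to 0-1 to obtain a relative heat vulnerability index.
hvi = scores @ pca.explained_variance_ratio_
hvi = (hvi - hvi.min()) / (hvi.max() - hvi.min())
df["HVI"] = hvi
print(df["HVI"].describe())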
Table 4.1: Estimated cooling effect of tree species during the summer season.

Tree species                                  WBGT R-I (°C)   RH R-I (%)   T_a R-I (°C)
Tilia cordata Mill. (Malvaceae)               3.25            -6.58        3.7
Tilia platyphyllos                            0.86            -1.66        0.12
Koelreuteria paniculata (Golden Rain Tree)    0.27            -1.15        0.6
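The R-I values in Table 4.1 are differences between a reference point in the sun and a point under the tree canopy (intervention). A short sketch of how such differences can be derived from paired sensor logs is shown below; the column names and the numerical values are assumptions for illustration, not the actual field files.

import pandas as pd

# Hypothetical paired measurements: one row per timestamp, columns for the
# reference (sun) and intervention (under-canopy) sensors.
data = {
    "t_air_ref": [31.2, 32.0, 33.1], "t_air_int": [27.8, 28.4, 29.2],
    "rh_ref":    [38.0, 36.5, 35.0], "rh_int":    [44.0, 43.0, 41.5],
    "wbgt_ref":  [27.5, 28.1, 28.9], "wbgt_int":  [24.4, 24.9, 25.6],
}
df = pd.DataFrame(data)

# Cooling effect = reference minus intervention, averaged over the campaign.
effect = pd.Series({
    "WBGT R-I (degC)": (df.wbgt_ref - df.wbgt_int).mean(),
    "RH R-I (%)":      (df.rh_ref - df.rh_int).mean(),
    "T_a R-I (degC)":  (df.t_air_ref - df.t_air_int).mean(),
})
print(effect.round(2))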
Table 2. Linguistic scale by Saaty.

Values    Linguistic scale
1         Equal importance
3         Moderate importance of one element over another
5         Strong importance of one element over another
7         Very strong importance of one element over another
9         Extreme importance of one element over another
2,4,6,8   Intermediate values (compromise between two choices)
Table 3. Random index.

No. of alternatives   Random index (RI)
1                     0
2                     0
3                     0.58
4                     0.9
5                     1.12
6                     1.24
7                     1.32
8                     1.41
9                     1.45
10                    1.49
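The random index values above are used to check the consistency of a pairwise comparison matrix. The snippet below sketches the standard computation (principal eigenvalue, consistency index CI = (λ_max − n)/(n − 1), and consistency ratio CR = CI/RI); the example judgment matrix is hypothetical.

import numpy as np

# Saaty's random index for n = 1..10 (values from the table above).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.9, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """Return priority weights and the consistency ratio of a pairwise matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                              # normalized priority vector
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0        # consistency ratio
    return w, cr

# Hypothetical 3x3 judgment matrix on Saaty's 1-9 scale.
A = [[1, 3, 5],
     [1/3, 1, 3],
     [1/5, 1/3, 1]]
w, cr = consistency_ratio(A)
print("weights:", w.round(3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable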
Table 4. Final priority ranks.

C/I       C1     C2     C3     C4     C5     Rank
Weights   0.08   0.19   0.31   0.36   0.06
I1 0.099 0.649 0.549 0.562 0.37 1
I2 0.592 0 0 0.044 0.056 4
I3 0.265 0.072 0.115 0.1 0.139 3
I4 0.045 0.279 0.288 0.293 0.387 2
I5 0 0 0.048 0 0.049 5
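The final rank in Table 4 follows from aggregating the local priorities of each intervention with the criteria weights shown in the weights row. The short check below recomputes these global scores with NumPy, using only the numbers from the table.

import numpy as np

criteria_weights = np.array([0.08, 0.19, 0.31, 0.36, 0.06])   # C1..C5
local_priorities = np.array([
    [0.099, 0.649, 0.549, 0.562, 0.370],   # I1
    [0.592, 0.000, 0.000, 0.044, 0.056],   # I2
    [0.265, 0.072, 0.115, 0.100, 0.139],   # I3
    [0.045, 0.279, 0.288, 0.293, 0.387],   # I4
    [0.000, 0.000, 0.048, 0.000, 0.049],   # I5
])

# Global priority = weighted sum of local priorities over the criteria.
global_scores = local_priorities @ criteria_weights
ranks = global_scores.argsort()[::-1].argsort() + 1
for i, (s, r) in enumerate(zip(global_scores, ranks), start=1):
    print(f"I{i}: score={s:.3f} rank={r}")   # reproduces ranks 1, 4, 3, 2, 5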
Table 1. Key equations of MCDMs used in this study.
Table 4. Decision matrix.

Interventions/Criteria      Cost   Efficiency   Durability   Environmental Impacts
Direct weightage            0.45   0.15         0.20         0.20
Weightage by AHP            0.42   0.23         0.12         0.23
Water features              6      4            4            5
Surfaces                    5      4            5            3
Green walls                 7      6            6            7
Trees                       4      7            8            8
Shades (shelter canopies)   8      4            5            2
Criterion type              NB     B            B            B
Table 5. Comparative analysis of normalization methods.
Name Results Consistency
L N-i ELE-NS, ELE-NI, PROMETHEE, WSM
L N-ii ELE-NS, ELE-NI, PROMETHEE, WSM
L N-max-min ELE-NS, PROMETHEE, WSM
E AN ELE-NS, PROM, WSM, WPM
L nN WSM, WPM, TOPSIS, PROMETHEE, MOORA
V N WSM, PROMETHEE, ELE-NS
L N-Sum WSM, PROMETHEE, ELE-NS, ELE-NI
Table 6. Rank calculated using MCDMs with the LnN method for normalization.

           Alternatives/Interventions Priority Results
Methods    A1    A2    A3    A4    A5
1-ELE-NS 3 5 1 2 4
2-ELE-NI 4 5 1 3 2
3-MOORA 2 3 4 1 5
4-PROMETHEE 4 5 1 2 3
5-TOPSIS 3 2 4 1 5
6-VIKOR 2 3 5 1 4
7-WPM 4 5 1 2 3
8-WSM 4 5 1 2 3
Table 7. Rank calculated by AHP-MCDM.
Alternatives/Interventions Priority Results
Methods A1 A2 A3 A4 A5
1-ELE-NS 4 5 1 2 3
2-ELE-NI 3 4 1 2 5
3-MOORA 3 4 2 1 5
4-PROMETHEE 3 4 2 1 5
5-TOPSIS 3 4 2 1 5
6-VIKOR 3 2 4 1 5
7-WPM 3 4 2 1 5
8-WSM 3 4 2 1 5
Table 8. Pairwise comparison of frequency error matrix.
Ranking Frequency of Standalone MCDM
ELE-NS ELE-NI MOORA PROMETHEE TOPSIS VIKOR WPM WSM
ELE-NS 0 2.45 4 1.41 4.47 4.69 1.41 1.41
ELE-NI 2.45 0 5.48 1.41 5.66 5.66 1.41 1.41
MOORA 4 5.48 0 4.69 1.41 1.41 4.69 4.69
PROMETHEE 1.41 1.41 4.69 0 4.90 5.10 0 0
TOPSIS 4.47 5.66 1.41 4.90 0 2 4.90 4.90
VIKOR 4.69 5.66 1.41 5.10 2 0 5.10 5.10
WPM 1.41 1.41 4.69 0 4.90 5.10 0 0
WSM 1.41 1.41 4.69 0 4.90 5.10 0 0
Sum 19 23.48 26.37 17.5 28.24 29.06 17.5 17.5
Ranking Frequency of AHP-MCDM
ELE-NS ELE-NI MOORA PROMETHEE TOPSIS VIKOR WPM WSM
ELE-NS 0 2.45 2.83 2.83 2.83 4.90 2.83 2.83
ELE-NI 2.45 0 1.41 1.41 1.41 3.74 1.41 1.41
MOORA 2.83 1.41 0 0 0 2.83 0 0
PROMETHEE 2.83 1.41 0 0 0 2.83 0 0
TOPSIS 2.83 1.41 0 0 0 2.83 0 0
VIKOR 4.90 3.74 2.83 2.83 2.83 0 2.83 2.83
WPM 2.83 1.41 0 0 0 2.83 0 0
WSM 2.83 1.41 0 0 0 2.83 0 0
Sum 21.5 13.24 7.07 7.07 7.07 22.79 7.07 7.07
Table 9. Comparative analysis of MCDM.
Assessment
Methods Normalization MCDM Frequency Error AHP-MCDM
TOPSIS - - +
MOORA - - +
PROMETHEE + + +
WPM - + +
WSM + + +
VIKOR - - +
ELE-NS + + -
ELE-NI - - +
Funding:
The COOL-TOWNS (Spatial Adaptation for Heat Resilience in Small and Medium-Sized Cities in the 2 Seas Region) project receives funding from the Interreg 2 Seas program 2014-2020 co-funded by the European Regional Development Fund under subsidy contract N • 2S05-040.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest.
Author Contributions: The paper was a collaborative effort between the authors. A.M.Q. and A.R. contributed collectively to developing the methodology of this survey, tools comparison and the manuscript preparation. All authors have read and agreed to the published version of the manuscript.
Paper -II Heat Stress Modeling Using Neural Networks Technique
In this study, a system dynamic approach and a GRU network are used as a powerful method to predict HS. The variables which strongly affect the system are analyzed and extracted as input variables (T_air, T_s, GR, RH, W_s). The GRU network is modeled after using a grid search to find the optimal hyper-parameters, and it estimates the outputs (T_mrt, PET, PMV). For the given dataset, the proposed GRU training algorithm was 99.36% accurate. The model was coupled with a GUI for individual HSA. This study concludes that the system approach helps to assess HS even in a complex environment and addresses the nonlinear interactions of the variables. A user can also use this interface with their own comfort definition, which provides a platform to analyze the different thermal comfort scales chosen by users. There are still some limitations in this study that require further research. Optimization and extension of the model may be future work; it will be coupled with the cooling effect of urban greenery, which may influence the estimation of HS.
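The grid search mentioned above can be reproduced in spirit with a simple loop over candidate hyper-parameters and a validation split; the candidate values, data shapes and build function below are illustrative assumptions, not the exact grid used in the paper.

import itertools
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gru(units, lr, n_steps=24, n_features=5, n_outputs=3):
    """Compact GRU regressor; inputs/outputs follow the system-dynamics choice above."""
    m = models.Sequential([
        layers.Input(shape=(n_steps, n_features)),
        layers.GRU(units),
        layers.Dense(n_outputs),
    ])
    m.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return m

# Synthetic stand-in data with the right shapes.
X = np.random.rand(300, 24, 5).astype("float32")
y = np.random.rand(300, 3).astype("float32")

grid = {"units": [32, 64], "lr": [1e-2, 1e-3]}     # hypothetical grid
best = None
for units, lr in itertools.product(grid["units"], grid["lr"]):
    model = build_gru(units, lr)
    hist = model.fit(X, y, epochs=5, batch_size=32,
                     validation_split=0.2, verbose=0)
    val = hist.history["val_loss"][-1]
    if best is None or val < best[0]:
        best = (val, {"units": units, "lr": lr})
print("best hyper-parameters:", best[1], "val_loss:", round(best[0], 4))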
ACKNOWLEDGEMENT
This paper has been produced within the COOL-TOWNS project which receives funding from the Interreg 2 Seas programme 2014-2020 co-funded by the European Regional Development Fund under subsidy contract N° 2S05-040.
Chapter -4
Heat Vulnerability Analysis and Field Monitoring in Amiens
This chapter is a case study of a medium-sized French city named Amiens. It consists of the following two parts:
❖ Part-A is intended for estimating the current heat vulnerability index considering influential parameters and data from recent years.
❖ Part-B is based on field measurements to calculate the cooling effect of tree species.
Measurements have been taken in 3 public spaces in the city centre
Part -A
Heat Vulnerability Index Mapping: A Case Study of a Medium-Sized City (Amiens)
The part-A of this chapter has been published as "Heat Vulnerability Index Mapping: A Case Study of a Medium-Sized City (Amiens)" in the journal Climate.
Major Findings
In this study, real weather and air quality monitoring data from a medium-sized city in Hauts-de-France (Amiens) were analysed. It was noticed that extreme events (heat and poor air quality) are interdependent and were recorded at the same time during the summer season. A high correlation was found between heat stress hours and poor air quality events, especially with ground-level ozone (0.8).
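The correlation quoted above (about 0.8 with ground-level ozone) can be obtained from daily summer records with a few lines of pandas; the column names and synthetic values below are placeholders for the monitoring files, not the actual dataset schema.

import pandas as pd
import numpy as np

# Hypothetical daily summer records: heat-stress hours and air-quality indices.
rng = np.random.default_rng(1)
days = pd.date_range("2019-06-01", "2019-08-31", freq="D")
heat_hours = rng.integers(0, 10, len(days)).astype(float)
ozone = 40 + 6 * heat_hours + rng.normal(0, 5, len(days))   # correlated by construction
pm10 = 20 + rng.normal(0, 8, len(days))

df = pd.DataFrame({"heat_stress_hours": heat_hours,
                   "O3": ozone, "PM10": pm10}, index=days)

# Pearson correlation between heat-stress hours and each pollutant.
print(df.corr().loc["heat_stress_hours", ["O3", "PM10"]].round(2))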
The Heat Vulnerability Index (HVI) in Amiens was mapped using the extreme heat days recorded over three years.

Chapter -5
Paper -III
Application of Multi-criteria Decision-making methods for Urban Heat Mitigation
This chapter is based on multi-criteria decision analysis to deal with heat stress. It consists of the following two parts:
❖ Part-A deals with Analytic Hierarchy Process for the selection of heat resilience measures.
❖ Part-B is based on the application of multiple methods and comparative analysis in terms of decision reliability for district heat mitigation
Part -A
An Analytic Hierarchy Process for Urban Heat Stress Mitigation
The part-A of this chapter has been published as "An Analytic Hierarchy Process for urban heat stress mitigation" in (ICoDT2). This paper is attached at Paper -IV with kind permission from the conference publisher and can be cited as:
Major Findings
Analytic Hierarchy Process (AHP) approach is used for the selection of heat stress prevention measures.
The applied technique helps to develop a key criteria framework that supports the decision-making process in choosing appropriate measures for hotspots before implementing an intervention. The method was applied through pairwise comparisons between the interventions to show their relative importance. The evaluation of the measures was based on the participants' judgments, perceptions and priorities collected through a questionnaire. Based on the survey inputs, it was found that the green roof is expensive and its installation must follow legal formalities, but it gives a good cooling effect, has a significant impact on the environment, and has a good service life that requires only care rather than additional maintenance compared to the other interventions. However, water fountains and misting should not be prioritized for implementation, as they give only a short-term cooling effect.
Paper -IV
An Analytic Hierarchy Process for Urban Heat Stress Mitigation
Part -B Comparative Analysis of Multi-Criteria Decision-Making Techniques for Outdoor Heat Stress Mitigation
The part-B of this chapter has been published as "Comparative Analysis of Multi-Criteria Decision-Making Techniques for Outdoor Heat Stress Mitigation" in the journal Applied Sciences. This paper is attached at Paper -V with kind permission from the journal and can be cited as:

Major Findings

The LnN was found to be a more reasonable normalization technique, and it provided similar rankings in the majority of the applied MCDMs. It was noticed that coupling with AHP helped to minimize the frequency error through the pairwise method for criteria weights, which increased the reliability of the decision.
However, the priority of green walls and trees is an arbitrary output of decision makers. The ranking obtained on the parameters was not a general rule, and this procedure was carried out to check the reliability. The results were entirely based on the terrain, the perspectives, characteristics of the pilots, climatic conditions, and inputs of the decision-makers.
The improved frequency of consistent results by AHP-MCDM revealed that the ranking results mainly depended on the nature and the values of the criteria. The reasonable disagreement observed among the methods did not affect their reliability. As a result, MCDM models proved very effective for screening heat stress interventions and selecting the best ones before implementation.
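As an illustration of how the normalization step interacts with a simple MCDM, the sketch below applies a linear sum normalization (close in spirit to the LnN family discussed above) and the weighted sum model to the decision matrix of Table 4, with the cost criterion treated as non-beneficial. It is a schematic check, not the exact pipeline of the paper, and it is not intended to reproduce the published rankings.

import numpy as np

# Decision matrix from Table 4: rows = interventions A1..A5,
# columns = cost, efficiency, durability, environmental impact.
X = np.array([
    [6, 4, 4, 5],   # A1 water features
    [5, 4, 5, 3],   # A2 surfaces
    [7, 6, 6, 7],   # A3 green walls
    [4, 7, 8, 8],   # A4 trees
    [8, 4, 5, 2],   # A5 shades
], dtype=float)
weights = np.array([0.45, 0.15, 0.20, 0.20])       # direct weights from Table 4
beneficial = np.array([False, True, True, True])   # cost is non-beneficial (NB)

# Linear sum normalization; non-beneficial columns are inverted first.
Xb = X.copy()
Xb[:, ~beneficial] = 1.0 / Xb[:, ~beneficial]
N = Xb / Xb.sum(axis=0)

# Weighted sum model: higher score = better alternative.
scores = N @ weights
order = scores.argsort()[::-1]
for rank, i in enumerate(order, start=1):
    print(f"rank {rank}: A{i+1} (score {scores[i]:.3f})")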
Keywords: Heat stress; Multi criteria decision making; Analytic hierarchy process; Priority; Interventions
Research Methodology
The collection of human perspectives is the first step for implementing any MCDM. The survey was distributed with the explanation of the purpose of the study. Participants were asked to participate in quantitative judgment using the linguistic scale, shown in Figure 2, to assess the importance of criteria and the performance of alternatives (interventions) for UHS mitigation. The experts belonged to academics in the field of urban climate.
The collected questionnaires were checked, and assessments that contained inconsistencies were discarded and not used for further analysis. After this quality check, it was observed that 25 ratings could be useful. These judgments were aggregated using geometric means and then MCDMs, such as ELECTRE-NS, ELECTRE-NI, MOORA, PROMETHEE, TOPSIS, VIKOR, WPM, and WSM, were applied to prioritize the UHS mitigation alternatives (A1, A2, . . . ). These methods were implemented in two ways: stand-alone, where direct criteria weights were used, and coupled with AHP (criteria weights calculated by AHP using the judgment matrix shown in Table 3). Seven different normalization methods were used for the simple application of MCDMs (stand-alone). The research methodology is shown in Figure 3.
List of Publications
Journal papers
List of papers in preparation
04118650 | en | [ "info.info-mo" ] | 2024/03/04 16:41:26 | 2022 | https://theses.hal.science/tel-04118650/file/TheseQuindaoCheryl.pdf
Keywords:
My doctorate journey was not an easy one. The uneasy feeling of leaving my little ones behind, of being in a new place with new people with a foreign language and culture, plus a DELAYED allowance and perk with the COVID-19 pandemic, are just a few of the challenges I faced. The trip was indeed not easy, and there were a lot of ups and downs, crying and laughing. But I made it out alive, and I have these people there to support and guide me along the way, those who, of varying degrees, provided assistance, encouragement, and guidance, and without whom, I would not have succeeded. There are so many people to thank, and I am so
saved me during this trip, especially during the quarantine period. Thank you very much for your unrelenting support and long patience. I am glad I made the right choice of choosing you as my adviser. You are THE BEST ADVISER (I cannot think of another word much superior to that) anyone can have. I would also like to extend my appreciation to my thesis co-adviser and colleague, Professor Jayrold Arcede, for his knowledge, support, and guidance throughout my Ph.D. journey. Thank you so much, Professor Jayrold, for being so supportive when I applied for Ph.D. and for always encouraging me and believing in my work and in me. Thank you for introducing me to the world of applied mathematics and to SEAMS school. You were actually the reason why I ventured into applied mathematics and met Professor Youcef. Thank you very much. I want to extend my gratitude to the reviewer of this thesis, Dr. Yves Dumont and Prof. Yannick Privat. Thank you very much, Dr. Dumont and Prof. Privat, for sharing your expertise and giving your valuable time in reading and writing a detailed review and helpful remarks on my thesis. I am also grateful to Dr. Hervé Le Meur, Dr. Nicolas Parisey, Dr. Geneviève Prévost, and Dr. Nuning Nuraini for accepting the invitation to be part of my thesis jury. My most profound appreciation to Dr. Marion Darbas for agreeing to be a iii member of my comité de suivi de thèse. Special thanks to Prof. Nabil Bedjaoui for constantly checking up on my well-being and reminding me to unwind and travel in France more.
I am grateful beyond words to all the members of LAMFA, especially in Analyse Appliquée (A3) unit, for the weekly sharing of knowledge in mathematics. It opened my mind to the different problems applied mathematics addressed, making me more in love with the field. Many thanks to Isabelle, Christelle Calimez, and Mylène Gaudissart for being very helpful with me in my administrative papers. Special thanks to Christelle for being so considerate of me in speaking English. You have no idea how much you have helped me, especially when I have a conference or when I am in the Philippines.
Since childhood, I have had difficulty understanding other languages. Thus, living alone in a foreign country where I do not speak the language is extremely difficult. But my stay in France has been bearable and genuinely enjoyable because of my fellow doctoral students, who have become my friends and support system here.
My most profound appreciation to these fantastic people, Yohan, Jihade, Henry, Jérémy, Gauthier, Clément, Marouan, Valérie, Sebastian Cea, Ismail, and Owen. To Alice, thank you so much for all your help, especially during the time at La Grand Motte. I truly enjoyed our conversation and the time we spent together. To my Chilean friends, Christopher, Felipe, Bastian, and Josefa, thank you very much for your friendship and all the extra help I received. To my Lao friends, Boausy, Gnord, and Khankham, thank you so much for always cooking for me and allowing me to be the baby in the group. You had no idea how happy I was when you arrived; I have found a constant companion from Thil to UPJV. Thank you for the late-night talks on mathematics, python, and life. To Mariem, for being a supportive, loving, and thoughtful friend, thank you so much. To Afaf, for being a friend I know I can count on; you have a special place in my heart. I am genuinely comfortable talking and spending time with you. Thank you for assuring me that our friendship is still secure even though I know I do not talk that much and I rarely communicate. Thank you for being my "Ate." To my first Filipino friend in Amiens, Arrianne, you made my first year in Amiens so bearable and passable. Thank you for helping me settle in Amiens and for the constant tips on how to live in France. To my colleague and friend, Ma'am Sheila, and Ma'am Karen, thank you for always cooking Filipino food and being patient with me. You have no idea how happy I am to spend time with you and be able to talk in our language. I am forever grateful to GOD for sending you to France. I would also like to thank my Filipino friends here in France, whom I constantly annoy with many questions and favors, Sir Juls and Sir Lhords. Thank you so much for always helping me, even though I only communicate when I need something.
3.16 Behaviour of the solution of healthy humans in optimal vector control using constant growth function for human and mosquito population.
3.17 Behaviour of the solution of optimal control of mosquitoes in optimal vector control using constant growth function for human and mosquito population.
Response comparison of the infected human compartment in the 4 control strategies: vaccination only (green), vector control only (orange), vaccination and vector control (blue), and without control (red).
Response comparison of each variable in the model with 4 control strategies: vaccination (green), vector control (orange), the combination of vaccination and vector control (blue), and without control (red).
Parmi les quelques 3 600 espèces de moustiques qui peuplent la planète, celles qui appartiennent à l'ordre des Diptera sont connues pour jouer un rôle crucial en tant que vecteurs de transmission des arbovirus [START_REF] Narang | Small Bite, Big Threat: Deadly Infections Transmitted by Aedes Mosquitoes[END_REF]. Au sein de cette famille Cuclicidae, le genre Aedes est impliqué dans la transmission des maladies. Les deux espèces les plus importantes de ce genre sont Aedes aegypti et Aedes albopictus. Ils sont les principaux vecteurs de la dengue, de la fièvre jaune, de la fièvre du Nil occidental, du chikungunya, de l'encéphalite équine de l'Est, du virus Zika et de nombreuses autres maladies moins importantes.
Seuls les moustiques adultes femelles pondent des oeufs quelques jours après avoir pris un repas de sang. Les moustiques pondent généralement 100 oeufs à la fois. Ils pondent leurs oeufs isolément [START_REF]Consensus Document on the Biology of Mosquito Aedes , 13 July 2018. URL: https%3A%2F%2Fwww.oecd.org%2Fofficialdocuments% 2Fpublicdisplaydocumentpdf%2F%3Fcote%3DENV%2[END_REF] sur les parois intérieures des récipients juste au-dessus de la ligne de flottaison qui sont ou seront remplis d'eau. Ce site de ponte comprend une paroi de cavité telle qu'une souche creuse ou un récipient tel qu'un seau ou un pneu de véhicule mis au rebut. Seule une infime quantité d'eau est nécessaire pour pondre des oeufs. Cependant, les oeufs de moustiques peuvent survivre au dessèchement pendant 8 mois ou même en hiver. Dans ce cas, ils doivent supporter une dessiccation considérable avant d'éclore [START_REF] Schmidt | Effects of desiccation stress on adult female longevity in Aedes aegypti and Ae. albopictus (Diptera: Culicidae): results of a systematic review and pooled survival analysis[END_REF]. Une fois qu'ils ont atteint un niveau de dessiccation approprié, ils peuvent entrer en diapause pendant plusieurs mois. Les oeufs d'Aedes en diapause ont tendance à éclore de manière irrégulière sur une période prolongée.
Ils éclosent ensuite en larves lorsque de l'eau inonde les oeufs, par exemple à la suite de pluies ou du remplissage d'eau par des personnes. Après l'immersion dans l'eau, les oeufs éclosent par lots. Étant donné que certains oeufs doivent être immergés plusieurs fois dans l'eau avant d'éclore, ce processus peut durer des jours ou des semaines [START_REF]Aedes -Wikipedia, the free encyclopedia[END_REF].
Les larves vivent dans l'eau et se nourrissent de micro-organismes hétérotrophes tels que des bactéries, des champignons et des protozoaires. Elles se développent en quatre stades, ou instars. Du premier au quatrième stade, les larves muent et perdent leur peau pour poursuivre leur croissance. Au quatrième stade, lorsque la larve est complètement développée, elle se métamorphose en une nouvelle forme appelée pupe. La pupe vit toujours dans l'eau, mais elle ne se nourrit pas. Au bout de deux jours, elles se développent complètement en forme de moustique adulte et percent la peau de la nymphe. Le moustique adulte n'est plus aquatique ; il a un habitat terrestre et peut voler. L'ensemble du cycle de vie des moustiques dure de huit à dix jours à température ambiante, en fonction du niveau d'alimentation.
Habitudes alimentaires des moustiques adultes
Comme tous les autres animaux, les moustiques ont besoin d'énergie et de nutriments pour survivre et se reproduire. Les matières végétales et le sang en sont des sources précieuses.
Seules les femelles moustiques piquent. Elles sont attirées par la lumière infrarouge, la lumière, la transpiration, l'odeur corporelle, l'acide lactique et le dioxyde de carbone. La partie buccale de nombreux moustiques femelles est adaptée pour percer la peau des animaux hôtes et sucer leur sang en tant qu'ectoparasites. Les moustiques femelles se posent sur la peau de l'hôte pendant le repas sanguin et y plantent leur trompe. Leur salive contient des protéines anticoagulantes qui empêchent la coagulation du sang. Elles aspirent ensuite le sang de l'hôte dans leur abdomen. Les moustiques de l'espèce Ae. Aegypti ont besoin de 5µL par portion.
Chez de nombreuses espèces de moustiques femelles, les nutriments obtenus à partir des repas sanguins sont essentiels à la production d'oeufs, tandis que chez de nombreuses autres espèces, l'obtention de nutriments à partir d'un repas sanguin permet au moustique de pondre davantage d'oeufs. Parmi les humains, les moustiques préféraient se nourrir de ceux qui ont un sang de type O [START_REF] Shirai | Landing preference of aedes albopictus (diptera: Culicidae) on human skin among abo blood groups, secretors or nonsecretors, and abh antigens[END_REF], les gros respirateurs, une abondance de bactéries cutanées, une chaleur corporelle élevée et les femmes enceintes [START_REF] Chappell | 5 stars: A mosquito's idea of a delicious human[END_REF]. L'attrait des individus pour les moustiques a également une composante héréditaire, contrôlée par les gènes [START_REF] Fernández-Grandon | Heritability of attractiveness to mosquitoes[END_REF].
Les espèces de moustiques hématophages sont des mangeurs sélectifs qui préfèrent une espèce hôte particulière. Néanmoins, ils relâchent cette sélectivité lorsqu'ils sont confrontés à une concurrence sévère, à une pénurie de nourriture et à une activité défensive de la part des hôtes. Si les humains sont rares, les moustiques se nourrissent de singes, tandis que d'autres préfèrent les équidés, les rongeurs, les oiseaux, les chauves-souris et les porcs, d'où proviennent un grand nombre de nos craintes de maladies inter-espèces [START_REF] Lehane | The Biology of Blood-Sucking in Insects[END_REF]. Certains moustiques ignorent complètement les humains et se nourrissent exclusivement d'oiseaux, tandis que la plupart mangent tout ce qui est disponible. Les amphibiens, les serpents, les reptiles, les écureuils, les lapins et d'autres petits mammifères comptent parmi les autres repas les plus populaires des moustiques. Les moustiques s'attaquent également à des animaux plus grands, comme les chevaux, les vaches, les primates, les kangourous et les wallabies. Certaines espèces de moustiques peuvent même s'attaquer aux poissons s'ils s'exposent au-dessus du niveau de l'eau. De même, les moustiques peuvent parfois se nourrir d'insectes dans la nature. Ae. Aegypti et Culextarsalis sont attirés et se nourrissent de larves d'insectes, et ils vivent pour produire des oeufs viables [START_REF] Sharris | Survival and fecundity of mosquitoes fed on insect haemolymph[END_REF]. Alors que Anopheles Stephensi est attiré par les larves d'espèces de papillons de nuit comme Manduca sexta et Heliothis subflexa et peut s'en nourrir avec succès [START_REF] George | Malaria mosquitoes host-locate and feed upon caterpillars[END_REF].
Le nectar des plantes est une source d'énergie commune pour l'alimentation de toutes les espèces de moustiques, en particulier les moustiques mâles, exclusivement dépendants du nectar des plantes ou de sources alternatives de sucre. La conception de pièges efficaces appâtés au sucre pour les moustiques serait grandement bénéfique pour la prévention des maladies à transmission vectorielle. La préférence pour les plantes est probablement due à une attraction innée qui peut être renforcée par l'expérience, les moustiques reconnaissant les récompenses en sucre disponibles [START_REF] Wolff | Olfaction, experience and neural mechanisms underlying mosquito host preference[END_REF]. Elle varie selon les espèces de moustiques, les habitats géographiques et la disponibilité saisonnière. La recherche de nectar implique l'intégration d'au moins trois systèmes sensoriels : l'olfaction, la vision et le goût.
Néanmoins, tous les moustiques sont capables de faire la distinction entre les sources de sucre riches et pauvres pour choisir les plantes ayant une teneur plus élevée en glycogène, en lipides et en protéines [START_REF] Yu | Feeding on different attractive flowering plants affects the energy reserves of culex pipiens pallens adults[END_REF]. Voici les plantes préférées de différentes espèces de moustiques d'après l'article de Barredo et DeGennaro [START_REF] Barredo | Not just from blood: Mosquito nutrient acquisition from nectar sources[END_REF].
Dengue
La dengue est l'infection virale transmise par les moustiques la plus courante. On la trouve dans les régions tropicales et subtropicales du monde entier, avec un pic de transmission pendant la saison des pluies. En 2019, l'Organisation mondiale de la santé [START_REF] Who | Dengue and severe dengue[END_REF] a signalé 5,2 millions de cas de dengue dans le monde. Rien qu'aux Philippines, 271 480 cas avec 1 107 décès sont signalés du 1er janvier au 31 août 2019, en raison de la dengue [28].
La dengue est causée par quatre sérotypes de virus relevant de la famille des Flaviviridae. Il s'agit de sérotypes de virus distincts mais étroitement liés, appelés DENV-1, DENV-2, DENV-3 et DENV-4. Environ une personne sur quatre infectée par la dengue tombera malade [START_REF] Cdc | Dengue: Symptoms and treatments[END_REF]. La maladie commence généralement 5 à 7 jours après la piqûre infectante des moustiques femelles Ae. aegypti et Ae. albopictus [START_REF] Becker | Mosquitoes, identification[END_REF]. FIGURE 1.4: Coupe transversale d'un virus de la dengue montrant ses composants structurels similaires à ceux du virus Zika [START_REF] Khera | 10 solid links between the zika virus and neurological defects, 26[END_REF].
Dans la plupart des cas, la dengue est une maladie autolimitée mais peut nécessiter une hospitalisation, où des soins de soutien peuvent modifier l'évolution de la maladie. Les symptômes peuvent être légers ou graves et durent généralement de 2 à 7 jours. Le symptôme le plus courant de la dengue est la fièvre accompagnée de nausées, de vomissements, d'une éruption cutanée, de courbatures et de douleurs musculaires ou articulaires. L'infection par un type de virus confère une immunité à vie contre cette souche virale et confère temporairement une protection partielle contre les autres types. Une deuxième infection par un autre type de virus entraîne une maladie plus grave, appelée fièvre hémorragique de la dengue (DHF).
Aux Philippines, 1 107 décès sont signalés du 1er janvier au 31 août 2019, dus à la dengue [28].
Transmission
Les virus de la dengue se transmettent aux personnes par les piqûres de moustiques infectés de l'espèce Aedes [START_REF]Dengue transmission[END_REF]. Il peut être transmis par une transmission d'homme à moustique, de moustique à homme et par d'autres transmissions comme la transmission d'homme à homme et de moustique à moustique.
La transmission verticale se produit lorsque des moustiques parents infectés transmettent l'arbovirus à une partie de leur progéniture dans l'ovaire ou pendant la ponte [START_REF] Victor | Natural vertical transmission of dengue virus in aedes aegypti and aedes albopictus: A systematic review[END_REF]. Plusieurs articles confirment ces résultats. Une étude de Shroyer DA.
[24] est l'un des articles qui confirme la présence de la transmission verticale na- ans et chez les personnes déjà infectées par un type de virus [START_REF] Aguiar | The impact of the newly licensed dengue vaccine in endemic countries[END_REF]. En effet, les résultats peuvent être moins bons chez les personnes qui n'ont pas encore été infectées auparavant.
Contrôle des vecteurs
Un vecteur de maladie est tout agent vivant portant et transmettant un agent pathogène infectieux à un autre organisme vivant. Le contrôle de ces vecteurs est une méthode essentielle pour limiter ou éradiquer la transmission de ces maladies. Dans le cas de la dengue, il est essentiel de lutter contre les moustiques en s'appuyant sur de bonnes connaissances scientifiques de l'écologie des moustiques et des modes de transmission de la maladie.
L'Organisation mondiale de la santé a considéré les catégories suivantes pour contrôler ou prévenir la propagation du virus de la dengue [START_REF]Guidelines for dengue surveillance and mosquito control[END_REF].
• Gestion de l'environnement -Un facteur de risque important pour la transmission du virus de la dengue est la proximité des sites de reproduction des moustiques vecteurs avec les habitations humaines. Un système efficace de gestion de l'environnement constitue donc une stratégie efficace de lutte contre les vecteurs. Il comprend la gestion des conteneurs, l'élimination de l'altération des sites de reproduction et la prévention de la reproduction dans les conteneurs de stockage de l'eau. -Wolbachia est un type de bactérie que l'on trouve couramment chez les insectes mais qui est inoffensif pour les humains ou les animaux. On ne la trouve pas chez les moustiques Ae. aegypti. Lorsque des moustiques Ae. aegypti mâles porteurs de Wolbachia s'accouplent avec des moustiques femelles sauvages qui n'en sont pas porteurs, les oeufs n'éclosent pas, ce qui entraîne une diminution de la population de moustiques Ae. aegypti [START_REF] Cdc | Mosquitoes with wolbachia for reducing numbers of aedes aegypti mosquitoess[END_REF].
-Les copépodes sont un groupe de petits crustacés que l'on trouve dans presque tous les habitats d'eau douce et d'eau salée. L'utilisation de cyclopoïde copepoda dans la lutte contre les moustiques s'est avérée plus efficace que les prédateurs invertébrés [START_REF] Marten | Cyclopoid copepods[END_REF].
Modèles mathématiques de la dengue
La modélisation mathématique a été utilisée pour tester et déterminer l'efficacité de différentes stratégies d'intervention pour contrôler ou éliminer la dengue. Ces différents modèles mathématiques aident les mathématiciens à tester les différentes hypothèses de la dynamique de transmission de la dengue afin de mieux comprendre leur importance.
Les modèles compartimentaux SEIRS tenant compte de la susceptibilité, de l'exposition, de l'infection et de l'élimination pour la population humaine et de la
Plan de la thèse

Cette thèse est la première étude à prendre en compte la proposition de Sanofi Pasteur. Nous présentons ici un nouveau modèle mathématique de la Dengvaxia.
R_0 = ( a² b_m u_5* ( b̃_h u_3* + b_h u_1* ) ) / ( H_0² µ_m (γ_h + δ_h) )
J(w_1, w_3, w_m) = ∫_0^T [ u_2(t) + ½ A_1 w_1²(t) + ½ A_3 w_3²(t) + ½ A_m w_5²(t) + ½ A_m w_6²(t) ] dt

sous la contrainte

u_1'(t) = -a b_h u_6(t) u_1(t) / H_0 - w_1(t) u_1(t)
u_2'(t) = a u_6(t) ( b_h u_1(t) + b̃_h u_3(t) ) / H_0 - γ_h u_2(t) - δ_h u_2(t)
u_3'(t) = γ_h u_2(t) - a b̃_h u_3(t) u_6(t) / H_0 - w_3(t) u_3(t)
u_4'(t) = δ_h u_2(t)
u_5'(t) = -a b_m u_2(t) u_5(t) / H_0 - µ_m u_5(t) + g(M(t)) - w_5(t) u_5(t)
u_6'(t) = a b_m u_2(t) u_5(t) / H_0 - µ_m u_6(t) - w_6(t) u_6(t)          (1.1)

pour t ∈ [0, T], avec 0 ≤ w_1, w_3 ≤ w_H et 0 ≤ w_5, w_6 ≤ w_M.
-dλ_1/dt = λ_1 ( -a b_h u_6 / H_0 - w_1 ) + λ_2 a b_h u_6 / H_0
-dλ_2/dt = 1 + λ_2 ( -γ_h - δ_h ) + λ_3 γ_h + λ_4 δ_h - λ_5 a b_m u_5 / H_0 + λ_6 a b_m u_5 / H_0
-dλ_3/dt = λ_2 a b̃_h u_6 / H_0 + λ_3 ( -a b̃_h u_6 / H_0 - w_3 )
-dλ_4/dt = 0
-dλ_5/dt = λ_5 ( -a b_m u_2 / H_0 + ∂g/∂u_5 ) - λ_5 ( µ_m + w_5 ) + λ_6 a b_m u_2 / H_0
-dλ_6/dt = -λ_1 a b_h u_1 / H_0 + λ_2 a ( b_h u_1 + b̃_h u_3 ) / H_0 - λ_3 a b̃_h u_3 / H_0 + λ_5 ∂g/∂u_6 - λ_6 ( µ_m + w_6 )
avec la condition de transversalité λ(T) = 0. De plus, les variables de contrôle optimales, pour j =
R_0 := ( a² b_h b_m S_m* S_h* ) / ( µ_A σ_h ) = ( a² b_h b_m H (γ_{E,L} + µ_E)(γ_{L,P} + µ_L)(γ_{P,S_m} + µ_P) ln N_Y ) / ( µ_A σ_h α_m β_m γ_{E,L} γ_{L,P} ),

avec

N_Y = ( α_m γ_{E,L} γ_{L,P} γ_{P,S_m} ) / ( µ_A (γ_{E,L} + µ_E)(γ_{L,P} + µ_L)(γ_{P,S_m} + µ_P) ).
Les copépodes sont les ennemis naturels du premier et du deuxième stade larvaire.
J(w_Y, w_A, w_H) = ∫_0^T [ I_h(t) + ½ A_Y w_Y²(t) + ½ A_A w_A²(t) + ½ A_H w_H²(t) ] dt

sous la contrainte

E'(t) = α_m ( S_m(t) + I_m(t) ) - γ_{E,L} E(t) - µ_E E(t)
L'(t) = γ_{E,L} E(t) - γ_{L,P} L(t) - µ_L L(t) - w_Y L(t)
P'(t) = γ_{L,P} L(t) - γ_{P,S_m} P(t) - µ_P P(t)
S_m'(t) = γ_{P,S_m} P(t) e^{-β_m P} - µ_A S_m(t) - a b_m I_h(t) S_m(t) - w_A S_m(t)
I_m'(t) = a b_m I_h(t) S_m(t) - µ_A I_m(t) - w_A I_m(t)
S_h'(t) = γ_h R_h(t) - a b_h I_m(t) S_h(t) - w_H S_h(t)
I_h'(t) = a b_h I_m(t) S_h(t) - σ_h I_h(t)
R_h'(t) = σ_h I_h(t) - γ_h R_h(t)          (1.2)
pour obtenir la meilleure stratégie de contrôle. Le principe du maximum de Pontryagin est appliqué pour ce faire.

-∂λ_1(t)/∂t = -λ_1 µ_E + (λ_2 - λ_1) γ_{E,L}
-∂λ_2(t)/∂t = -λ_2 (µ_L + w_Y) + (λ_3 - λ_2) γ_{L,P}
-∂λ_3(t)/∂t = -λ_3 µ_P + ( λ_4 (1 - β_m P) e^{-β_m P} - λ_3 ) γ_{P,S_m}
-∂λ_4(t)/∂t = λ_1 α_m - λ_4 (µ_A + w_A) + (λ_5 - λ_4) a b_m I_h(t)
-∂λ_5(t)/∂t = λ_1 α_m - λ_5 (µ_A + w_A) + (λ_7 - λ_6) a b_h S_h(t)
-∂λ_6(t)/∂t = -λ_6 w_H + (λ_7 - λ_6) a b_h I_m(t)
-∂λ_7(t)/∂t = 1 + (λ_5 - λ_4) a b_m S_m(t) - λ_7 σ_h
-∂λ_8(t)/∂t = (λ_6 - λ_8) γ_h

avec la condition de transversalité λ(T) = 0.
∂I_m/∂t = a b_m I_h S_m - µ_A I_m + ∂/∂x ( D(x, y) ∂I_m/∂x ) + ∂/∂y ( D(x, y) ∂I_m/∂y ).          (1.5)
Le système est complété par des conditions aux bords de type Neumann.
Théorème 1.0.4. Soient 0 ≤ S_{h,0}, I_{h,0}, R_{h,0} ≤ H_0, 0 ≤ E_0, L_0, P_0 ≤ M_{Y,0}, et 0 ≤ S_{m,0}, I_{m,0} ≤ M_{A,0}, où H_0, M_{Y,0} et M_{A,0} sont les densités initiales de la population humaine, des jeunes moustiques et des moustiques adultes, respectivement. Il existe alors une unique solution faible globale en temps (E, L, P, S_m, I_m, S_h, I_h, R_h) ∈ [L^∞(R_+, L^∞(Ω))]^8. De plus, S_h + I_h + R_h ≤ H_0, E + L + P ≤ M_{Y,0} et S_m + I_m ≤ M_{A,0}.
Nous avons montré ce résultat en appliquant le théorème du point fixe de Picard dans la boule
B_T = { Y ∈ [L^∞(R_+, L^∞(Ω))]^8 : sup_{t ∈ [0,T]} || Y(t, ·) - Y_0 ||_{L^∞(Ω)} ≤ r }          (1.6)
à partir de la formulation intégrale
E = e^{-(γ_{E,L}+µ_E) t} E_0 + α_m ∫_0^t e^{-(γ_{E,L}+µ_E)(t-s)} ( S_m + I_m ) ds
L = e^{-(γ_{L,P}+µ_L) t} L_0 + γ_{E,L} ∫_0^t e^{-(γ_{L,P}+µ_L)(t-s)} E ds
P = e^{-(γ_{P,S_m}+µ_P) t} P_0 + γ_{L,P} ∫_0^t e^{-(γ_{P,S_m}+µ_P)(t-s)} L ds
S_m = K ⋆ S_{m,0} + ∫_0^t K ⋆ ( γ_{P,S_m} P e^{-β_m P} - a b_m I_h S_m ) ds
I_m = K ⋆ I_{m,0} + a b_m ∫_0^t K ⋆ ( I_h S_m ) ds
S_h = S_{h,0} + ∫_0^t ( γ_h R_h - a b_h I_m S_h ) ds
I_h = e^{-σ_h t} I_{h,0} + a b_h ∫_0^t e^{-σ_h (t-s)} I_m S_h ds
R_h = e^{-γ_h t} R_{h,0} + σ_h ∫_0^t e^{-γ_h (t-s)} I_h ds          (1.7)
où K est le noyau de la chaleur.
Dans la dernière section du chapitre quatre, nous déterminons la stratégie de contrôle optimale en appliquant trois contrôles : l'exposition aux copépodes w_Y pour les jeunes moustiques dans les zones de ponte, le pesticide w_A pour les moustiques adultes, et la vaccination w_H pour les humains. Ici, les contrôles dépendent du temps et de l'espace. Nous considérons le problème, pour X = (x, y),

J(w) = ∫_Ω ∫_0^T [ I_h(X, t) + ½ A_Y w_Y²(X, t) + ½ A_A w_A²(X, t) + ½ A_H w_H²(X, t) ] dt dX.
Nous utilisons la méthode de l'état adjoint pour déterminer les variables de contrôle optimales.
Théorème 1.0.5. Il existe des variables adjointes λ_i = λ_i(x, t), i = 1, 2, ..., 8, qui satisfont le système d'équations différentielles partielles à rebours dans le temps suivant :

-∂λ_1/∂t = λ_1 µ_E + (λ_1 - λ_2) γ_{E,L}
-∂λ_2/∂t = λ_2 (µ_L + w_Y) + (λ_2 - λ_3) γ_{L,P}
-∂λ_3/∂t = λ_3 µ_P + ( λ_3 - λ_4 (1 - β_m P) e^{-β_m P} ) γ_{P,S_m}
-∂λ_4/∂t - D ∆λ_4 = -λ_1 α_m + λ_4 (µ_A + w_A) + (λ_4 - λ_5) a b_m I_h
-∂λ_5/∂t - D ∆λ_5 = -λ_1 α_m + λ_5 (µ_A + w_A) + (λ_6 - λ_7) a b_h S_h
-∂λ_6/∂t = λ_6 w_H + (λ_6 - λ_7) a b_h I_m
-∂λ_7/∂t = 1 + (λ_7 - λ_8) σ_h + (λ_4 - λ_5) a b_m S_m
-∂λ_8/∂t = (λ_8 - λ_6) γ_h          (1.8)
avec la condition de transversalité λ(x, T) = 0 et les conditions aux limites

µ_T = λ^T(x, 0) h(U(x, 0)) / g(U(x, 0), w)   et   ∂λ(x, t)/∂x |_{∂Ω} = ∂U(x, t)/∂x |_{∂Ω} = 0.

En outre, les variables de contrôle optimales w* sont définies comme suit :

w_Y*(t) = max( 0, min( λ_2 L / A_Y, w_M ) )
w_A*(t) = max( 0, min( ( λ_4 S_m + λ_5 I_m ) / A_A, w_M ) )
w_H*(t) = max( 0, min( λ_6 S_h / A_H, w_H ) )
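Numerically, optimality systems of this type are commonly solved with a forward-backward sweep: the state system is integrated forward in time, the adjoint system backward from the transversality condition, and the controls are updated from their characterization until convergence. The Python sketch below illustrates the idea on a deliberately reduced caricature with a single state, a single adjoint and one bounded control; it is only an assumption about the kind of numerical scheme that could be used, not the solver of the thesis.

import numpy as np

# Reduced illustration of a forward-backward sweep (Pontryagin) for
# min  int_0^T [ x + 0.5*A*w^2 ] dt   subject to   x' = (r - w) x,   0 <= w <= w_max.
T, n = 30.0, 3000
t = np.linspace(0, T, n)
dt = t[1] - t[0]
r, A, w_max, x0 = 0.08, 2.0, 0.9, 100.0

w = np.zeros(n)                      # initial control guess
for _ in range(200):
    # Forward pass: integrate the state with the current control (explicit Euler).
    x = np.empty(n); x[0] = x0
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (r - w[k]) * x[k]

    # Backward pass: adjoint lambda' = -(1 + lambda*(r - w)), lambda(T) = 0.
    lam = np.empty(n); lam[-1] = 0.0
    for k in range(n - 1, 0, -1):
        lam[k - 1] = lam[k] + dt * (1.0 + lam[k] * (r - w[k]))

    # Control update from the characterization w* = min(w_max, max(0, lam*x/A)),
    # relaxed to damp oscillations between sweeps.
    w_new = np.clip(lam * x / A, 0.0, w_max)
    if np.max(np.abs(w_new - w)) < 1e-6:
        w = w_new
        break
    w = 0.5 * w + 0.5 * w_new

print("final cost:", float(np.sum(x + 0.5 * A * w**2) * dt))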
Perspectives de l'étude
Chapter 2

General Introduction
Mosquitoes
Mosquitoes are an important vector for disease transmission of many of the classified pathogens and parasites, including viruses, bacteria, fungi, protozoa, and nematodes. It is mainly due to their blood-feeding habits, for which they feed on vertebrate hosts. Infected mosquitoes carry these organisms from person to person without exhibiting symptoms themselves. According to [START_REF] Harvey | Mosquito-borne disease could threaten half the globe by 2050[END_REF], by 2050, half the world's population could be at risk of mosquito-borne diseases like dengue fever or the Zika virus, malaria, and many more. By transmitting these diseases, mosquitoes cause the deaths of more people than any other animal taxon. With more than 100 million years of an evolutionary process, mosquitoes developed adaptation mechanisms capable of thriving in various environments. Except for permanently frozen places, these mosquitoes are found in every land region globally. They occupy the tropics and sub-tropics where the climate seems favorable and efficient for their development-making them nearly the universal animal in the world. Of almost 3,600 species of mosquitoes inhabiting the planet, the ones belonging to the family Cuclicidae of order Diptera are known to play a crucial role as vectors of arbovirus transmission [START_REF] Narang | Small Bite, Big Threat: Deadly Infections Transmitted by Aedes Mosquitoes[END_REF]. Within this family, the genus Aedes is involved in the transmission of diseases. The two most prominent species within this genus are Aedes aegypti and Aedes albopictus. They are the primary vector for dengue, yellow fever, West Nile fever, chikungunya, eastern equine encephalitis, Zika virus [START_REF]PAHO statement on zika virus transmission and prevention[END_REF], and many other less notable diseases.
In this thesis, we aim to develop a mathematical model of dengue taking into account vaccination and vector control. Thus, we focus on the biology of Ae. aegypti and Ae. albopictus mosquitoes as the primary vector of dengue.
Life Cycle
Mosquitoes have a complex life cycle. Nevertheless, all of them require water to complete their life cycle. They change their shape and habitat as they develop. Like all other mosquitoes, Ae. aegypti and Ae. albopictus have four distinct stages: egg, larva, pupa, and adult (Fig. 2.2).
Mosquito Life Cycle
Aedes aegypti and Ae. albopictus
It takes about 7-10 days for an egg to develop into an adult mosquito. Female adult mosquitoes lay eggs a few days after acquiring a blood meal. Mosquitoes generally lay up to 100 eggs at a time. They lay their eggs singly [START_REF]Consensus Document on the Biology of Mosquito Aedes , 13 July 2018. URL: https%3A%2F%2Fwww.oecd.org%2Fofficialdocuments% 2Fpublicdisplaydocumentpdf%2F%3Fcote%3DENV%2[END_REF] on the inner walls of containers just above the water line that is or will be filled with water. It sticks like glue. This oviposition site includes a cavity wall such as a hollow stump or a container such as a bucket or a discarded vehicle tire. Only a tiny amount of water is needed to lay eggs. However, mosquito eggs can survive drying out for up to 8 months or even in winter in the southern United States [20]. When that happens, they have to resist a considerable desiccation before they hatch [START_REF] Schmidt | Effects of desiccation stress on adult female longevity in Aedes aegypti and Ae. albopictus (Diptera: Culicidae): results of a systematic review and pooled survival analysis[END_REF]. Once they achieve a suitable desiccation level, they can enter diapause for several months.
Aedes eggs in diapause tend to hatch irregularly over an extended period.
It then hatches to larvae when water inundates the eggs, such as rains or filling water by people. Following water immersion, eggs hatch in batches. Since some eggs require multiple soakings in water before hatching, this process may last days or weeks [START_REF]Aedes -Wikipedia, the free encyclopedia[END_REF].
Larvae live in water, and they feed on heterotrophic microorganisms such as bacteria, fungi, and protozoans. They develop through four stages, or instars. The larvae molt in the first to fourth-instar, shedding their skins to grow further. On the fourth instar, when the larva is fully grown, they metamorphose into a new form called pupae. Pupa still lives in water, but they do not feed. After two days, they fully develop into adult mosquito forms and break through the pupa's skin. The adult mosquito is no longer aquatic; it has a terrestrial habitat and can fly. This entire life cycle of mosquitoes lasts for eight to ten days at room temperature, depending on the feeding level.
Feeding Habits by Adult Mosquitoes
Like all other living animals, Mosquitoes need energy and nutrients for survival and reproduction. Plant materials and blood are valuable sources of this.
Only female mosquitoes bite. They are attracted by infrared light, light, perspiration, body odor, lactic acid, and carbon dioxide. The mouth part of many female mosquitoes is adapted for piercing animal hosts' skin and sucking their blood as ectoparasites. The female mosquitoes land on the host skin during the blood meal and stick their proboscis. Their saliva contains anticoagulant proteins that prevent blood clotting. They then suck the host blood into their abdomen. Ae. Aegypti mosquitoes need 5 µL per serving [START_REF] Ph | How mosquitoes work[END_REF]. In many female mosquito species, nutrients obtained from blood meals are essential for the production of eggs, whereas in many other species, obtaining nutrients from a blood meal enables the mosquito to lay more eggs. Among humans, mosquitoes preferred feeding those with type O blood [START_REF] Shirai | Landing preference of aedes albopictus (diptera: Culicidae) on human skin among abo blood groups, secretors or nonsecretors, and abh antigens[END_REF], heavy breathers, an abundance of skin bacteria, high body heat, and pregnant women [START_REF] Chappell | 5 stars: A mosquito's idea of a delicious human[END_REF]. Individuals' attractiveness to mosquitoes also has a heritable, genetically-controlled component [START_REF] Fernández-Grandon | Heritability of attractiveness to mosquitoes[END_REF]. Chapter 2. General Introduction Blood-sucking species of mosquitoes are selective feeders that prefer a particular host species. Nevertheless, they relax this selectivity when they experience severe competition and scarcity of food and defensive activity on the part of the hosts.
If humans are scarce, mosquitoes resort to feeding on monkeys, while others prefer on equines, rodents, birds, bats, and pigs, which is where so many of our cross-species disease fears originate from [START_REF] Lehane | The Biology of Blood-Sucking in Insects[END_REF]. Some mosquitoes ignore humans altogether and feed exclusively on birds, while most eat whatever is available. Some of the other most popular dining options for mosquitoes include amphibians, snakes, reptiles, squirrels, rabbits, and other small mammals. Mosquitoes also target larger animals, such as horses, cows, primates, kangaroos, and wallabies [START_REF] Staughton | Do animals get mosquito bites? ScienceABC.com[END_REF]. Some mosquito species may attack even fish if they expose themselves above water level, as mudskippers do [START_REF] Slooff | Mosquitoes (culicidae) biting a fish (periophthalmidae)[END_REF]. . . Comparably, mosquitoes may sometimes feed on insects in nature. Ae. Aegypti and Culextarsalis are attracted and feed on insect larvae, and they live to produce viable eggs [START_REF] Sharris | Survival and fecundity of mosquitoes fed on insect haemolymph[END_REF]. While Anopheles Stephensi is attracted to and can feed successfully on larvae of moth species known as Manduca sexta and Heliothis subflexa [START_REF] George | Malaria mosquitoes host-locate and feed upon caterpillars[END_REF].
Plant nectar is a common energy source for diet across mosquito species, particularly male mosquitoes, exclusively dependent on plant nectar or alternative sugar sources. The design of efficient sugar-baited traps for mosquitoes would greatly benefit the prevention of vector-borne illness. Plant preference is likely driven by an innate attraction that may be enhanced by experience, as mosquitoes recognize available sugar rewards [START_REF] Wolff | Olfaction, experience and neural mechanisms underlying mosquito host preference[END_REF]. It varies among mosquito species, geographical habitats, and seasonal availability. Nectar-seeking involves integrating at least three sensory systems: olfaction, vision, and taste.
Nevertheless, altogether mosquitoes can discriminate between rich and poor sugar sources to choose plants with higher glycogen, lipid, and protein content [START_REF] Yu | Feeding on different attractive flowering plants affects the energy reserves of culex pipiens pallens adults[END_REF].
Below are the preferred plant of different mosquito species from the paper of Barredo and DeGennaro [START_REF] Barredo | Not just from blood: Mosquito nutrient acquisition from nectar sources[END_REF].
Breeding Sites
The Aedes mosquitoes breed in all imaginable receptacles. It can be classified as artificial and natural wet containers, preferably with dark-colored surfaces and holding clear unpolluted water [START_REF]Potential breeding sites[END_REF]. Below is roughly the list of different kinds of breeding sites of Aedes mosquitoes [START_REF]PAHO statement on zika virus transmission and prevention[END_REF]:
• Natural Container
Transmission
Dengue viruses are spread to people through the bites of infected Aedes species mosquitoes [START_REF]Dengue transmission[END_REF]. It can be transmitted by human-to-mosquito, mosquito-to-human, and other transmissions such as human-to-human and mosquito-to-mosquito transmission.
Vertical transmission is when infected parent mosquitoes transmit the arbovirus to some part of their offspring within the ovary or during oviposition [START_REF] Victor | Natural vertical transmission of dengue virus in aedes aegypti and aedes albopictus: A systematic review[END_REF]. Several articles confirm these findings. A study of Shroyer DA. [START_REF] Shroyer | Vertical maintenance of dengue-1 virus in sequential generations of aedes albopictus[END_REF] is one of the articles that confirms the presence of the natural vertical transmission of DENV in Ae. aegypti and Ae. albopictus. It says that the DENV virus can be transferred from parent to offspring in seven consecutive generations of Ae. aegypti and Ae. albopictus, under laboratory conditions. This transmission can contribute to the continuation of infected mosquitoes; however, this is not enough to support the dengue spread.
A more common form of transmission is known as horizontal transmission. The virus is transmitted to humans through the bites of infected Ae. aegypti mosquitoes.
After feeding on a dengue-infected person, the virus replicates in the mosquito midgut before disseminating to secondary tissues, including the salivary glands.
The Extrinsic Incubation Period (EIP) is the time it takes from ingesting the virus to actual transmission to a new host. It takes about 8-12 days when the ambient temperature is between 25-28°C. Variations in the EIP are also influenced by factors such as the magnitude of daily temperature fluctuations, virus genotype, and initial viral concentration [START_REF] Who | Dengue and severe dengue[END_REF]. Once infectious, the mosquito can transmit the virus for the rest of its life.
Mosquitoes can become infected by someone who is viremic with DENV. Viremia is a condition in which there is a high level of the dengue virus in the person's blood.
It occurs four days after an infected Ae. aegypti mosquito bites an individual. Most people are viremic for about 4-5 days, but viremia can last as long as 12 days [START_REF] Who | Dengue and severe dengue[END_REF].
Though the possibility is low, there is evidence that dengue can also spread through maternal transmission or be transmitted through infected blood transfusion. A pregnant woman who has a DENV infection can pass the virus to her fetus.
Babies who carry DENV may suffer from pre-term birth, low birth weight, and fetal distress [START_REF] Who | Dengue and severe dengue[END_REF].
Vaccination
There is no specific treatment for dengue fever. However, efforts to develop a vaccine have been ongoing for decades. Dengvaxia is a live attenuated tetravalent chimeric vaccine. It is made using recombinant DNA technology by replacing the PrM (pre-membrane) and E (envelope) structural genes of yellow fever attenuated 17D strain vaccine with those from the four dengue serotypes. It should be administered in three doses of 0.5 mL subcutaneous (SC) six months apart. Sanofi Pasteur recommended that the vaccine only be used in people between the age of 9 to 45 and people already infected by one type of virus [START_REF] Aguiar | The impact of the newly licensed dengue vaccine in endemic countries[END_REF]. It is because outcomes may be worse in those who have not yet been previously infected.
Vector Control
A disease vector is any living agent carrying and transmitting an infectious pathogen to another living organism. Controlling such vectors is an essential method of limiting or eradicating the transmission of such diseases. For dengue fever, mosquito control with good scientific insights into mosquito ecology and disease transmission patterns is essential to combat dengue fever.
The World Health Organization considered the following categories to control or prevent the spread of the dengue virus [START_REF]Guidelines for dengue surveillance and mosquito control[END_REF].
• Environmental management -A significant risk factor for dengue virus transmission is the proximity of mosquito vector breeding sites to human habitation. Thus an efficient environmental management system is an effective strategy for vector control. It includes container management, eliminating breeding sites' alteration, and preventing breeding in the water storage container.
• Chemical and biological methods -Chemical larviciding is effective against container breeders of Aedes mosquitoes in clean water. It uses organic synthetic insecticides such as temephos (Abate) and insect growth regulators (IGRs) such as methoprene (Altosid, juvenile hormone mimic), where environmental impact is minimal if appropriately used on human premises.
-Wolbachia is a common type of bacteria found in insects but is harmless to people or animals. It is not found in Aedes aegypti mosquitoes. When male Ae. aegypti mosquitoes with Wolbachia mate with wild female mosquitoes that do not have Wolbachia, the eggs will not hatch, resulting in a decreased population of Ae. aegypti mosquitoes [START_REF] Cdc | Mosquitoes with wolbachia for reducing numbers of aedes aegypti mosquitoess[END_REF].
-Copepods are a group of small crustaceans found in nearly every freshwater and saltwater habitat. The use of cyclopoid copepods for mosquito control has proven to be more effective than other invertebrate predators [START_REF] Marten | Cyclopoid copepods[END_REF].
Only copepods with a body length greater than 1.4 mm are of practical use for mosquito control. They kill first-instar mosquito larvae at a rate of 40 Aedes larvae/copepod/day. They typically reduce Aedes production by 99-100%.
• Personal protection -Avoiding getting further mosquito bites when a person is in a viremic state is an excellent way to prevent the spread of the dengue virus. During this time, DENV is circulating in the person's blood and therefore may transmit the virus to new uninfected mosquitoes, who may, in turn, infect other people. Therefore, personal protection using mosquito coils and aerosols, insecticide-impregnated curtains and mosquito nets, and mosquito repellents are essential methods for vector control.
• Space spray application -Space spray is an effective strategy for rapidly killing adult Aedes mosquitoes in dengue epidemic areas. It uses thermal fogs and ultra-low volume aerosol sprays.
Another vector control whose effectiveness has been demonstrated in field trials shows that dengue incidence can be substantially reduced by introgressing strains of the endosymbiotic bacterium Wolbachia into Aedes aegypti mosquito populations [START_REF] Utarini | Efficacy of wolbachia-infected mosquito deployments for the control of dengue[END_REF]. Wolbachia is a ubiquitous bacterium that occurs naturally in insects and is safe for humans. It lives inside insect cells and is passed from one generation to the next through an insect's eggs. Independent risk analyses indicate that the release of Wolbachia-infected mosquitoes poses negligible risk to humans and the environment. Wolbachia-carrying mosquitoes have a reduced ability to transmit arboviruses [START_REF]How it works[END_REF]. The bacteria compete with the virus, making it harder for the virus to reproduce inside the mosquitoes. In effect, these mosquitoes are much less likely to spread viruses from person to person.
Mathematical Models of Dengue Fever
Mathematical modeling has been used to test and determine the effectiveness of different intervention strategies in controlling or eliminating dengue. These various mathematical models aid mathematicians in testing the different hypotheses in the dengue transmission dynamic to understand their importance better.
SEIRS compartmental models accounting for susceptible, exposed, infected, and removed for the human population and susceptible and infected for mosquito population were widely promoted. Syafruddin and Noorani [START_REF] Side | SEIR model for transmission of dengue fever[END_REF] studied a system of differential equations that models the population dynamics of an SEIR vector transmission of dengue fever. It is a mathematical model that analyses the spread of one serotype of dengue virus between host and vector. They have shown that it can model dengue fever using actual data.
On the other hand, Nuraini et al. [START_REF] Nuraini | Mathematical model of dengue disease transmission with severe dhf compartment[END_REF] derived and analyzed a model taking into account a severe Dengue Hemorrhagic Fever (DHF) compartment in the transmission dynamics. They consider a SIR model for dengue disease transmission in which the disease is caused by two viruses, strain 1 and strain 2, and long-lasting immunity acquired from infection by one virus may not protect against a secondary infection by the other virus. They find a control measure to reduce the number of DHF patients in the population or keep it at an acceptable level. They also discuss the ratios between the total number of severe DHF cases and the total numbers of first and secondary infections, respectively. Furthermore, they point out that these ratios are needed for practical control measures to predict the "real" intensity of the endemic phenomenon, since only data on the severe DHF compartment are available.
Furthermore, Derouich et al. [START_REF] Derouich | A model of dengue fever[END_REF] proposed a model with two different viruses acting at separated intervals of time. They study the dynamics of dengue fever while concentrating on its progression to the hemorrhagic form to understand the epidemic phenomenon and suggest strategies for controlling the disease. Their model showed that the strategy based on preventing the dengue epidemic using vector control through environmental management or chemical methods remains insufficient since it only permits delaying the outbreak of the epidemic. Moreover, the reduction of susceptibles via vaccination is unlikely to be applicable in the short term because it faces some hurdles since a vaccine must protect against the four serotypes simultaneously.
Also, Aguiar and Stollenwerk [START_REF] Aguiar | Mathematical models of dengue fever epidemiology: multi-strain dynamics, immunological aspects associated to disease severity and vaccines[END_REF] analyzed a modeling framework and assumptions used by Aguiar et al. [START_REF] Aguiar | The role of seasonality and import in a minimalistic multi-strain dengue model capturing differences between primary and secondary infections: Complex dynamics and its implications for data analysis[END_REF] (2 and 4 strain dengue model) and assessed the impact of the newly licensed dengue vaccine. They discuss the role of several subsequent infections versus an exact number of dengue serotypes included in the model framework and the human immunological aspects associated with disease severity, identifying the implications for model dynamics and their impact on vaccine implementation. Their results suggested that reserving vaccines for seropositive individuals should provide a high level of protection, whereas indiscriminate vaccination could increase the number of hospitalizations also on the population level.
Determining the optimal control in minimizing the spread of dengue fever has also been studied. Yang and Ferreira [START_REF] Yang | Assessing the effects of vector control on dengue transmission[END_REF] described the dynamics of dengue disease in the compartment model, taking into account chemical controls and mechanical control applied to the mosquitoes. Allowing some model parameters to depend on time, they were able to mimic seasonal variations and divide the calendar year into favorable and unfavorable periods regarding the vector population's development.
Their simulations showed 'unpredictable' epidemic outbreaks when abiotic variations are taken into account. If controlling mechanisms are introduced regularly every year, they observe the decline of the efficiency index with the elapsed time.
On the other hand, an essential strategy in controlling the dengue epidemic is controlling the vector population. Among the many kinds of research, Almeida et al. [START_REF] Almeida | Mosquito population control strategies for fighting against arboviruses[END_REF] considers two techniques; it consists in releasing mosquitoes to reduce the size of the population (Sterile Insect Technique) or in replacing the wild population with a population carrying a Wolbachia bacteria (a bacteria responsible for blocking the transmission of viruses from mosquitoes to human). Their paper presents an optimal strategy in the release protocol of these two strategies wherein they look for a control function that minimizes the distance to the desired equilibrium (replacement or extinction of the wild population) at the final treatment time.
Moreover, Puntani et al. [START_REF] Puntani Pongsumpun | Optimal control of the dengue dynamical transmission with vertical transmission[END_REF] presented a control mechanism based on a dengue model with vertical transmission considering the two policies, namely vaccination and insecticide administration. Carvalho et al. [START_REF] Sylvestre Carvalho | Mathematical modeling of dengue epidemic: Control methods and vaccination strategies[END_REF] evaluated a control strategy, which aims to eliminate the Aedes aegypti mosquito, as well as proposals for the vaccination campaign. Their results show that eradicating dengue fever is done using an immunizing vaccine since control measures against its vector are insufficient to stop the disease from spreading. Additionally, Iboi and Gumel [START_REF] Iboi | Mathematical assessment of the role of dengvaxia vaccine on the transmission dynamics of dengue serotypes[END_REF] designed a new mathematical model to assess the impact of the newly-released Dengvaxia vaccine on the transmission dynamics of two co-circulating dengue strains. The manuscript is organized as follows.
Outline of Study
The third chapter started with the presentation of the Ross-type model of dengue that considers vaccinating individuals who have previous dengue infections. Using the logistic and exponential functions for human and mosquito populations, respectively, we have shown the well-posedness and positivity of the solution of the model.
We obtained that the diseases free equilibrium is locally asymptotically stable while the endemic equilibrium is unstable. In this chapter, we compare the model using three growth functions:
• Pop 1 : constant human and mosquitoes population,
• Pop 2 : Gompertz growth function for the human population and an exponential growth function for mosquitoes population,
• Pop 3 : an entomological growth function for mosquito and a constant growth function for the human population.
In the Pop 1 model, we showed that the model has only the disease-free equilibrium, and we were able to prove that it is locally asymptotically stable. Similarly, the Pop 2 model has only the disease-free equilibrium, which is locally asymptotically stable as soon as the growth rate $\alpha_m$ is smaller than the mortality rate $\mu_m$. On the other hand, the Pop 3 model has both an endemic and a disease-free equilibrium. We were able to define the basic reproduction number
\[
R_0 = \frac{a^2 b_m u_5^* \left( \bar{b}_h u_3^* + b_h u_1^* \right)}{H_0^2\, \mu_m (\gamma_h + \delta_h)},
\]
and then show that the disease-free equilibrium of model Pop 3 is locally asymptotically stable if $\alpha_m < \mu_m$ and that the endemic equilibrium is stable only if $\alpha_m > \mu_m$ and $R_0 > 1$. More widely, we have proved the theorem below for the Pop 3 model.
Theorem 2.4.1.
1. If $\alpha_m < \mu_m$, then the trivial disease-free equilibrium is globally asymptotically stable.
2. If α m > µ m and R 0 > 1, then the non-trivial disease-free equilibrium is globally asymptotically stable.
We then determine the optimal control strategy for minimizing infected humans under each of these three control actions. We attribute three control inputs, $w_1$, $w_3$, and $w_m$, to the primary susceptible humans, the secondary susceptible humans, and the mosquito population, respectively. Here, $w_1(t)$ is the percentage of primary susceptible individuals and $w_3(t)$ the percentage of secondary susceptible individuals vaccinated per unit of time, while $w_5(t)$ and $w_6(t)$ are the percentages of susceptible and infected mosquitoes removed per unit of time by insecticide administration to the environment. Considering the objective function
\[
\mathcal{J}(w_1, w_3, w_m) = \int_0^T \left( u_2(t) + \frac{1}{2}A_1 w_1^2(t) + \frac{1}{2}A_3 w_3^2(t) + \frac{1}{2}A_m w_5^2(t) + \frac{1}{2}A_m w_6^2(t) \right) dt
\]
subject to
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0} - w_1(t) u_1(t)\\
u_2'(t) &= \frac{a u_6(t)\left( b_h u_1(t) + \bar{b}_h u_3(t) \right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar{b}_h u_3(t) u_6(t)}{H_0} - w_3(t) u_3(t)\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_5(t) + g(M(t)) - w_5(t) u_5(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t) - w_6(t) u_6(t)
\end{aligned}
\tag{2.1}
\]
for $t \in [0, T]$, with $0 \le w_1, w_3 \le w_H$ and $0 \le w_m \le w_M$, we use Pontryagin's maximum principle to determine the optimal control.
Theorem 2.4.2.
There exist adjoint variables $\lambda_i$, $i = 1, 2, \dots, 6$, of the system (4.13) that satisfy the following backward-in-time system of ordinary differential equations.
\[
\begin{aligned}
-\frac{d\lambda_1}{dt} &= \lambda_1 \left( -\frac{a b_h u_6}{H_0} - w_1 \right) + \lambda_2 \frac{a b_h u_6}{H_0}\\
-\frac{d\lambda_2}{dt} &= 1 + \lambda_2 (-\gamma_h - \delta_h) + \lambda_3 \gamma_h + \lambda_4 \delta_h - \lambda_5 \frac{a b_m u_5}{H_0} + \lambda_6 \frac{a b_m u_5}{H_0}\\
-\frac{d\lambda_3}{dt} &= \lambda_2 \frac{a \bar{b}_h u_6}{H_0} + \lambda_3 \left( -\frac{a \bar{b}_h u_6}{H_0} - w_3 \right)\\
-\frac{d\lambda_4}{dt} &= 0\\
-\frac{d\lambda_5}{dt} &= \lambda_5 \left( -\frac{a b_m u_2}{H_0} + \frac{\partial g}{\partial u_5} \right) - \lambda_5 (\mu_m + w_5) + \lambda_6 \frac{a b_m u_2}{H_0}\\
-\frac{d\lambda_6}{dt} &= -\lambda_1 \frac{a b_h u_1}{H_0} + \lambda_2 \frac{a b_h u_1 + a \bar{b}_h u_3}{H_0} - \lambda_3 \frac{a \bar{b}_h u_3}{H_0} + \lambda_5 \frac{\partial g}{\partial u_6} - \lambda_6 (\mu_m + w_6)
\end{aligned}
\]
with the transversality condition $\lambda(T) = 0$. Furthermore, the optimal control variables are given by
\[
w_j^*(t) = \max\left( 0, \min\left( \frac{\lambda_j u_j}{A_j},\, w_H \right) \right) \ \text{for } j = 1, 3, \qquad
w_j^*(t) = \max\left( 0, \min\left( \frac{\lambda_j u_j}{A_j},\, w_M \right) \right) \ \text{for } j = 5, 6.
\]
The optimality of the models is numerically solved using a gradient method written in Python. The figure obtained showed that vaccinating only the secondary susceptible humans is not ideal. It requires constant effort and takes a long time to vaccinate them. Instead, it is better to vaccinate the primary susceptible humans.
However, since safe vaccines for primary susceptible humans do not exist to date, the application of vector control to minimize infected humans is a better counterstrategy.
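As a rough illustration of the gradient method mentioned above, the sketch below shows how a forward-backward sweep for problem (2.1) could be organised in Python. It is a minimal sketch and not the thesis' actual implementation: the parameter values, the choice $g(M) = \alpha_m M$, the explicit Euler discretisation, the weights $A_1, A_3, A_m$ and the relaxation factor are all illustrative assumptions.

```python
import numpy as np

# Illustrative parameter values (assumptions, not the calibrated set of the thesis).
a, b_h, bb_h, b_m = 1.0, 0.4, 0.4, 0.375          # biting rate and transmission probabilities
gamma_h, delta_h, mu_m, alpha_m = 0.1, 0.07, 0.05, 0.04
H0, A1, A3, Am, wH, wM = 10_000.0, 1.0, 1.0, 1.0, 0.8, 0.8
T, N = 100.0, 2000
dt = T / N

def rhs(u, w1, w3, w5, w6):
    """Right-hand side of system (2.1), taking g(M) = alpha_m*M (exponential growth)."""
    u1, u2, u3, u4, u5, u6 = u
    g = alpha_m * (u5 + u6)
    return np.array([
        -a*b_h*u6*u1/H0 - w1*u1,
        a*u6*(b_h*u1 + bb_h*u3)/H0 - (gamma_h + delta_h)*u2,
        gamma_h*u2 - a*bb_h*u3*u6/H0 - w3*u3,
        delta_h*u2,
        -a*b_m*u2*u5/H0 - mu_m*u5 + g - w5*u5,
        a*b_m*u2*u5/H0 - mu_m*u6 - w6*u6,
    ])

def adjoint_rhs(lam, u, w1, w3, w5, w6):
    """Right-hand side of -dlambda/dt from Theorem 2.4.2, with dg/du5 = dg/du6 = alpha_m."""
    l1, l2, l3, l4, l5, l6 = lam
    u1, u2, u3, u4, u5, u6 = u
    return np.array([
        l1*(-a*b_h*u6/H0 - w1) + l2*a*b_h*u6/H0,
        1 + l2*(-gamma_h - delta_h) + l3*gamma_h + l4*delta_h
          - l5*a*b_m*u5/H0 + l6*a*b_m*u5/H0,
        l2*a*bb_h*u6/H0 + l3*(-a*bb_h*u6/H0 - w3),
        0.0,
        l5*(-a*b_m*u2/H0 + alpha_m) - l5*(mu_m + w5) + l6*a*b_m*u2/H0,
        -l1*a*b_h*u1/H0 + l2*a*(b_h*u1 + bb_h*u3)/H0 - l3*a*bb_h*u3/H0
          + l5*alpha_m - l6*(mu_m + w6),
    ])

def forward_backward_sweep(u0, iters=50):
    w = np.zeros((4, N + 1))                        # rows: w1, w3, w5, w6
    U, Lam = np.zeros((6, N + 1)), np.zeros((6, N + 1))
    for _ in range(iters):
        U[:, 0] = u0                                # forward sweep (explicit Euler)
        for k in range(N):
            U[:, k + 1] = U[:, k] + dt*rhs(U[:, k], *w[:, k])
        Lam[:, N] = 0.0                             # backward sweep, lambda(T) = 0
        for k in range(N, 0, -1):
            Lam[:, k - 1] = Lam[:, k] + dt*adjoint_rhs(Lam[:, k], U[:, k], *w[:, k])
        new_w = np.vstack([np.clip(Lam[0]*U[0]/A1, 0, wH),   # projected control update
                           np.clip(Lam[2]*U[2]/A3, 0, wH),
                           np.clip(Lam[4]*U[4]/Am, 0, wM),
                           np.clip(Lam[5]*U[5]/Am, 0, wM)])
        w = 0.9*w + 0.1*new_w                       # relaxation for numerical stability
    return U, Lam, w

u0 = np.array([10_000, 0, 0, 0, 100_000, 10], dtype=float)
U, Lam, w = forward_backward_sweep(u0)
```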
The fifth chapter introduced a new mathematical model of dengue that accounts for the mosquitoes' life cycle. Following the dynamics of the metamorphosis of the mosquito population, the aquatic stage: egg E, larvae L, and pupae P, are added to the model. We show that our new model is well-posed and has positive solutions.
The basic reproduction number is defined as
\[
R_0 := \frac{a^2 b_h b_m S_m^* S_h^*}{\mu_A \sigma_h}
= \frac{a^2 b_h b_m H (\gamma_{E,L} + \mu_E)(\gamma_{L,P} + \mu_L)(\gamma_{P,S_m} + \mu_P) \ln N_Y}{\mu_A \sigma_h \alpha_m \beta_m \gamma_{E,L} \gamma_{L,P}},
\quad \text{with} \quad
N_Y = \frac{\alpha_m \gamma_{E,L} \gamma_{L,P} \gamma_{P,S_m}}{\mu_A (\gamma_{E,L} + \mu_E)(\gamma_{L,P} + \mu_L)(\gamma_{P,S_m} + \mu_P)}.
\]
Copepods are natural enemies of the first and second instars of mosquito larvae. Large cyclopoid copepods, with a body size greater than 1 mm, act as predators of mosquito larvae and strongly reduce the mosquito larval population.

[Figure 2.6 (responses comparison for infected humans $u_2$): Behaviour of infected humans $I_h$ with respect to time without control (red), with the optimal control related to vaccination only (green), to vector control only (orange), and with both controls (blue). The cyan curve corresponds to the optimal control with vaccination of secondary susceptible humans only.]
With this, copepods are applied as a new control strategy in Section 5.4. By applying vaccination and vector control to the model, we determine the optimal control strategy for minimizing infected humans. We attribute three control inputs: $w_Y$ for the percentage of young mosquitoes exposed to copepods, $w_A$ for the percentage of adult mosquitoes exposed to pesticides, and $w_H$ for the percentage of susceptible humans being vaccinated. Thus we consider the objective function
\[
\mathcal{J}(w_Y, w_A, w_H) = \int_0^T \left( I_h(t) + \frac{1}{2}A_Y w_Y^2(t) + \frac{1}{2}A_A w_A^2(t) + \frac{1}{2}A_H w_H^2(t) \right) dt
\]
subject to
\[
\begin{aligned}
E'(t) &= \alpha_m (S_m(t) + I_m(t)) - \gamma_{E,L} E(t) - \mu_E E(t)\\
L'(t) &= \gamma_{E,L} E(t) - \gamma_{L,P} L(t) - \mu_L L(t) - w_Y L(t)\\
P'(t) &= \gamma_{L,P} L(t) - \gamma_{P,S_m} P(t) - \mu_P P(t)\\
S_m'(t) &= \gamma_{P,S_m} P(t) e^{-\beta_m P(t)} - \mu_A S_m(t) - a b_m I_h(t) S_m(t) - w_A S_m(t)\\
I_m'(t) &= a b_m I_h(t) S_m(t) - \mu_A I_m(t) - w_A I_m(t)\\
S_h'(t) &= \gamma_h R_h(t) - a b_h I_m(t) S_h(t) - w_H S_h(t)\\
I_h'(t) &= a b_h I_m(t) S_h(t) - \sigma_h I_h(t)\\
R_h'(t) &= \sigma_h I_h(t) - \gamma_h R_h(t)
\end{aligned}
\tag{2.2}
\]
in obtaining the best control strategy. Pontryagin's maximum principle is applied in doing so.
Theorem 2.4.3.
There exist adjoint variables $\lambda_i$, $i = 1, 2, \dots, 8$, of the system (5.37) that satisfy the following backward-in-time system of ordinary differential equations.
\[
\begin{aligned}
-\frac{\partial \lambda_1(t)}{\partial t} &= -\lambda_1 \mu_E + (\lambda_2 - \lambda_1)\gamma_{E,L}\\
-\frac{\partial \lambda_2(t)}{\partial t} &= -\lambda_2 (\mu_L + w_Y) + (\lambda_3 - \lambda_2)\gamma_{L,P}\\
-\frac{\partial \lambda_3(t)}{\partial t} &= -\lambda_3 \mu_P + \left( \lambda_4 (1 - \beta_m P(t)) e^{-\beta_m P(t)} - \lambda_3 \right)\gamma_{P,S_m}\\
-\frac{\partial \lambda_4(t)}{\partial t} &= \lambda_1 \alpha_m - \lambda_4 (\mu_A + w_A) + (\lambda_5 - \lambda_4) a b_m I_h(t)\\
-\frac{\partial \lambda_5(t)}{\partial t} &= \lambda_1 \alpha_m - \lambda_5 (\mu_A + w_A) + (\lambda_7 - \lambda_6) a b_h S_h(t)\\
-\frac{\partial \lambda_6(t)}{\partial t} &= -\lambda_6 w_H + (\lambda_7 - \lambda_6) a b_h I_m(t)\\
-\frac{\partial \lambda_7(t)}{\partial t} &= 1 + (\lambda_5 - \lambda_4) a b_m S_m(t) - \lambda_7 \sigma_h\\
-\frac{\partial \lambda_8(t)}{\partial t} &= (\lambda_6 - \lambda_8)\gamma_h
\end{aligned}
\]
with the transversality condition $\lambda(T) = 0$. Moreover, the optimal control variables are given by
\[
w_Y^* = \max\left( 0, \min\left( \frac{\lambda_2 L}{A_Y}, w_M \right) \right), \quad
w_A^* = \max\left( 0, \min\left( \frac{\lambda_4 S_m + \lambda_5 I_m}{A_A}, w_M \right) \right), \quad
w_H^* = \max\left( 0, \min\left( \frac{\lambda_6 S_h}{A_H}, w_H \right) \right).
\]
Our results show that the combination of copepods and pesticides is a good strategy for eliminating infected humans and the mosquito population. However, the elimination of infected humans is slow. The combination of pesticide and vaccination seems less efficient than the combination of copepods and pesticides. It takes a shorter time to reduce the number of mosquitoes with a reduced duration of the control application.
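To make the life-cycle model concrete, a small simulation sketch of system (2.2) with constant control levels is given below. All parameter values, initial conditions and control levels are illustrative assumptions (not the calibrated values used in the thesis); the solver call simply integrates the stated equations with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions; the thesis uses its own calibrated values).
p = dict(alpha_m=5.0, gamma_EL=0.33, mu_E=0.05, gamma_LP=0.14, mu_L=0.05,
         gamma_PSm=0.3, mu_P=0.05, mu_A=0.05, beta_m=1e-5,
         a=1.0, b_m=5e-6, b_h=5e-6, gamma_h=0.01, sigma_h=0.2)

def lifecycle(t, y, wY, wA, wH):
    """Right-hand side of system (2.2) with constant control levels wY, wA, wH."""
    E, L, P, Sm, Im, Sh, Ih, Rh = y
    return [
        p['alpha_m']*(Sm + Im) - (p['gamma_EL'] + p['mu_E'])*E,
        p['gamma_EL']*E - (p['gamma_LP'] + p['mu_L'])*L - wY*L,
        p['gamma_LP']*L - (p['gamma_PSm'] + p['mu_P'])*P,
        p['gamma_PSm']*P*np.exp(-p['beta_m']*P) - p['mu_A']*Sm
            - p['a']*p['b_m']*Ih*Sm - wA*Sm,
        p['a']*p['b_m']*Ih*Sm - p['mu_A']*Im - wA*Im,
        p['gamma_h']*Rh - p['a']*p['b_h']*Im*Sh - wH*Sh,
        p['a']*p['b_h']*Im*Sh - p['sigma_h']*Ih,
        p['sigma_h']*Ih - p['gamma_h']*Rh,
    ]

y0 = [1e4, 5e3, 2e3, 1e5, 10, 1e4, 0, 0]    # hypothetical initial densities
sol = solve_ivp(lifecycle, (0, 365), y0, args=(0.1, 0.05, 0.0), rtol=1e-6)
print("infected humans after one year:", sol.y[6, -1])
```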
The last chapter of this study accounts for the spatial distribution of mosquitoes.
In this study, we assume that only the adult mosquito is moving, and thus, only S m and I m have spatial dimension. The propensity of adult mosquitoes to leave the determined focal point (x, y) can be defined by the diffusion coefficient
\[
D(x, y) = D_{\min} + \alpha F_l(x, y) + \beta F_f(x, y) \tag{2.3}
\]
where $D_{\min}$ is the minimal diffusion value in the absence of resource perception, and $F_l(x, y)$ and $F_f(x, y)$ are the dispersion kernels covering the entire landscape of the laying and food resources, respectively. In this study, we consider that mosquitoes will always prefer the laying sites nearest to them. Thus, considering the population dynamics of the adult mosquitoes, the spatial model reads
\[
\frac{\partial S_m(t,x,y)}{\partial t} = \gamma_{P,S_m} P(t,x,y) e^{-\beta_m P(t,x,y)} - \mu_A S_m(t,x,y) - a b_m I_h(t,x,y) S_m(t,x,y) + \frac{\partial}{\partial x}\!\left( D(x,y)\frac{\partial S_m}{\partial x} \right) + \frac{\partial}{\partial y}\!\left( D(x,y)\frac{\partial S_m}{\partial y} \right) \tag{2.4}
\]
\[
\frac{\partial I_m(t,x,y)}{\partial t} = a b_m I_h(t,x,y) S_m(t,x,y) - \mu_A I_m(t,x,y) + \frac{\partial}{\partial x}\!\left( D(x,y)\frac{\partial I_m}{\partial x} \right) + \frac{\partial}{\partial y}\!\left( D(x,y)\frac{\partial I_m}{\partial y} \right). \tag{2.5}
\]
Neumann boundary conditions are considered.
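On a rectangular grid, the diffusion term $\nabla\!\cdot(D \nabla S_m)$ with zero-flux (Neumann) boundaries can be discretised with a standard conservative five-point scheme. The sketch below is only an illustration of such a step; the grid size, time step and the placeholder resource kernels $F_l$, $F_f$ are assumptions, not the discretisation actually used in the thesis.

```python
import numpy as np

# One explicit finite-difference step for div(D grad S_m) with Neumann (zero-flux) boundaries.
nx, ny, h, dt = 100, 100, 10.0, 0.1
D_min, alpha, beta = 1.0, 5.0, 5.0
F_l = np.random.rand(nx, ny)      # dispersion kernel of laying sites (placeholder)
F_f = np.random.rand(nx, ny)      # dispersion kernel of feeding sites (placeholder)
D = D_min + alpha*F_l + beta*F_f  # equation (2.3)

def diffusion_step(S, D, h, dt):
    # reflect boundary cells so that the normal derivative vanishes (Neumann condition)
    Sp = np.pad(S, 1, mode='edge')
    Dp = np.pad(D, 1, mode='edge')
    # arithmetic average of D on the four cell faces
    De = 0.5*(Dp[1:-1, 1:-1] + Dp[1:-1, 2:])
    Dw = 0.5*(Dp[1:-1, 1:-1] + Dp[1:-1, :-2])
    Dn = 0.5*(Dp[1:-1, 1:-1] + Dp[:-2, 1:-1])
    Ds = 0.5*(Dp[1:-1, 1:-1] + Dp[2:, 1:-1])
    flux = (De*(Sp[1:-1, 2:] - Sp[1:-1, 1:-1]) - Dw*(Sp[1:-1, 1:-1] - Sp[1:-1, :-2])
            + Ds*(Sp[2:, 1:-1] - Sp[1:-1, 1:-1]) - Dn*(Sp[1:-1, 1:-1] - Sp[:-2, 1:-1]))
    return S + dt/h**2 * flux

S_m = np.zeros((nx, ny)); S_m[50, 50] = 1e4     # point release of adult mosquitoes
for _ in range(100):
    S_m = diffusion_step(S_m, D, h, dt)
```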
Theorem 2.4.4. Let $0 \le S_{h,0}, I_{h,0}, R_{h,0} \le H_0$, $0 \le E_0, L_0, P_0 \le M_{Y,0}$, and $0 \le S_{m,0}, I_{m,0} \le M_{A,0}$, where $H_0$, $M_{Y,0}$ and $M_{A,0}$ are the initial population densities of the human, young mosquito and adult mosquito populations, respectively. Then there exists a unique global in time weak solution $(E, L, P, S_m, I_m, S_h, I_h, R_h) \in L^\infty\!\left( \mathbb{R}_+, L^\infty(\Omega) \right)^8$ of the initial boundary value problem. Moreover, the solution is nonnegative, with $S_h + I_h \le H_0$, $E + L + P \le M_{Y,0}$ and $S_m + I_m \le M_{A,0}$.
This result is proved by applying Picard's fixed point theorem in the closed ball
\[
B_T = \left\{ Y \in L^\infty\!\left( \mathbb{R}_+, L^\infty(\Omega) \right)^8 : \sup_{t \in [0,T]} \| Y(t, \cdot) - Y_0 \|_{L^\infty(\Omega)} \le r \right\}, \tag{2.6}
\]
of the integral formulation
\[
\begin{aligned}
E &= e^{-(\gamma_{E,L} + \mu_E)t} E_0 + \alpha_m \int_0^t e^{-(\gamma_{E,L} + \mu_E)(t-s)} (S_m + I_m)\, ds\\
L &= e^{-(\gamma_{L,P} + \mu_L)t} L_0 + \gamma_{E,L} \int_0^t e^{-(\gamma_{L,P} + \mu_L)(t-s)} E\, ds\\
P &= e^{-(\gamma_{P,S_m} + \mu_P)t} P_0 + \gamma_{L,P} \int_0^t e^{-(\gamma_{P,S_m} + \mu_P)(t-s)} L\, ds\\
S_m &= K \star S_{m,0} + \int_0^t K \star \left( \gamma_{P,S_m} P e^{-\beta_m P} - a b_m I_h S_m \right) ds\\
I_m &= K \star I_{m,0} + a b_m \int_0^t K \star \left( I_h S_m \right) ds\\
S_h &= S_{h,0} + \int_0^t \left( \gamma_h R_h - a b_h I_m S_h \right) ds\\
I_h &= e^{-\sigma_h t} I_{h,0} + a b_h \int_0^t e^{-\sigma_h (t-s)} I_m S_h\, ds\\
R_h &= e^{-\gamma_h t} R_{h,0} + \sigma_h \int_0^t e^{-\gamma_h (t-s)} I_h\, ds
\end{aligned}
\tag{2.7}
\]
where K is the heat kernel.
In the last section of that chapter, we determine the optimal control strategy by applying three controls: exposure to copepods $w_Y$ for the young mosquitoes in the laying areas, pesticide $w_A$ for the adult mosquitoes, and vaccination $w_H$ for the humans. Here the controls are time and space dependent. We consider the problem
\[
\mathcal{J}(w) = \int_\Omega \int_0^T \left( I_h(x,t) + \frac{1}{2} A_Y w_Y^2(x,t) + \frac{1}{2} A_A w_A^2(x,t) + \frac{1}{2} A_H w_H^2(x,t) \right) dt\, dX.
\]
We use the adjoint state method to determine the optimal control variables.
Theorem 2.4.5. There exist adjoint variables $\lambda_i$, $i = 1, 2, \dots, 8$, that satisfy the following backward-in-time system of partial differential equations
\[
\begin{aligned}
-\frac{\partial \lambda_1(x,t)}{\partial t} &= \lambda_1(x,t)\mu_E + (\lambda_1(x,t) - \lambda_2(x,t))\gamma_{E,L}\\
-\frac{\partial \lambda_2(x,t)}{\partial t} &= \lambda_2(x,t)(\mu_L + w_Y) + (\lambda_2(x,t) - \lambda_3(x,t))\gamma_{L,P}\\
-\frac{\partial \lambda_3(x,t)}{\partial t} &= \lambda_3(x,t)\mu_P + \left( \lambda_3(x,t) - \lambda_4(x,t)(1 - \beta_m P(x,t)) e^{-\beta_m P(x,t)} \right)\gamma_{P,S_m}\\
-\frac{\partial \lambda_4(x,t)}{\partial t} - D\Delta\lambda_4 &= -\lambda_1(x,t)\alpha_m + \lambda_4(x,t)(\mu_A + w_A) + (\lambda_4(x,t) - \lambda_5(x,t)) a b_m I_h(x,t)\\
-\frac{\partial \lambda_5(x,t)}{\partial t} - D\Delta\lambda_5 &= -\lambda_1(x,t)\alpha_m + \lambda_5(x,t)(\mu_A + w_A) + (\lambda_6(x,t) - \lambda_7(x,t)) a b_h S_h(x,t)\\
-\frac{\partial \lambda_6(x,t)}{\partial t} &= \lambda_6(x,t) w_H + (\lambda_6(x,t) - \lambda_7(x,t)) a b_h I_m(x,t)\\
-\frac{\partial \lambda_7(x,t)}{\partial t} &= 1 + (\lambda_7(x,t) - \lambda_8(x,t))\sigma_h + (\lambda_4(x,t) - \lambda_5(x,t)) a b_m S_m(x,t)\\
-\frac{\partial \lambda_8(x,t)}{\partial t} &= (\lambda_8(x,t) - \lambda_6(x,t))\gamma_h
\end{aligned}
\tag{2.8}
\]
with the transversality condition $\lambda^T(x,T) = 0$ and boundary conditions $\mu^T = \lambda^T(x,0)\,\dfrac{h(U(x,0))}{g(U(x,0),w)}$ and $\left.\dfrac{\partial \lambda(x,t)}{\partial x}\right|_{\partial\Omega} = \left.\dfrac{\partial U(x,t)}{\partial x}\right|_{\partial\Omega} = 0$.
Furthermore, the optimal control variable w * is defined as
\[
w_Y^*(t) = \max\left( 0, \min\left( \frac{\lambda_2 L}{-A_Y}, w_M \right) \right), \quad
w_A^*(t) = \max\left( 0, \min\left( \frac{\lambda_4 I_h + \lambda_5 S_h}{-A_A}, w_M \right) \right), \quad
w_H^*(t) = \max\left( 0, \min\left( \frac{\lambda_6 S_h}{-A_H}, w_H \right) \right).
\]
Using the gradient method written in Python, we numerically solved the optimality system of the model and obtained the corresponding figures.
Perspectives of the Study
Mathematical modelling of dengue fever is a wide topic that deals with various unknowns, and covering its full scope in three years is hardly possible.
Thus, here is a list of possible research perspectives we plan to study in the future.
One perspective of the study is to consider the age structure of the human population. Considering the recommendation of Sanofi Pasteur on the application of Dengvaxia, it is interesting to create a model with age structure in the human population to describe dengue transmission with different infection rates among different age groups.
Another is to develop a complete dengue-dengvaxia model incorporating the mosquitoes' life cycle, the human population's four dengue strain viruses, and the efficacy of Dengvaxia in different virus strains. Adding the age structure and climate effect on dengue in this model would make this a robust dengue model.
An additional perspective of the study is to consider the mosquitoes' reproduction and feeding habits. One can incorporate the mosquitoes' gender in the model and apply a control strategy to minimize infected mosquitoes. Since male mosquitoes feed on plant nectar and some plants eat mosquitoes, by devising a strategic position of plants in the environment, one can determine the optimal control strategy for minimizing infected mosquitoes in the population.
In connection, one can also consider the energy a mosquito needs. Feeding and laying sites directly affect the energy supply of mosquitoes: mosquito energy increases when they feed and decreases during the laying period. With this in mind, we define a fourth dimension $U$, called the energetic dimension, which accounts for the energy supply of the mosquito. We can assume that only the adult mosquitoes are moving, and thus only $S_m$ and $I_m$ have an energetic dimension. This energetic dimension uses a simplified dynamic energy budget through advection terms in the additional energy dimension $U$, and relies on an energetic landscape after space discretisation, in which land covers are grouped depending on their presumed effects on energy supply. Newly emerged adult mosquitoes have energy level $U$, where $U = 1$ is the upper energetic boundary and $U = 0$ the lower one; that is, $S_m, I_m(t, x, y, U = 0) = 0$ simulates death by starvation of adult susceptible and infected mosquitoes. Thus one can define the dynamics of adult mosquitoes as follows:
\[
\frac{\partial S_m(t,x,y)}{\partial t} = \gamma_{P,S_m} P(t,x,y) e^{-\beta_m P(t,x,y)} - \mu_A S_m(t,x,y) - a b_m I_h(t,x,y) S_m(t,x,y) + \frac{\partial}{\partial x}\!\left( D(x,y)\frac{\partial S_m}{\partial x} \right) + \frac{\partial}{\partial y}\!\left( D(x,y)\frac{\partial S_m}{\partial y} \right) - C(x,y)\frac{\partial S_m}{\partial U} \tag{2.9}
\]
\[
\frac{\partial I_m(t,x,y)}{\partial t} = a b_m I_h(t,x,y) S_m(t,x,y) - \mu_A I_m(t,x,y) + \frac{\partial}{\partial x}\!\left( D(x,y)\frac{\partial I_m}{\partial x} \right) + \frac{\partial}{\partial y}\!\left( D(x,y)\frac{\partial I_m}{\partial y} \right) - C(x,y)\frac{\partial I_m}{\partial U}. \tag{2.10}
\]
Another interesting perspective of the study is to consider the co-infection of dengue and Covid-19. Because of the overlapping clinical and laboratory features of these diseases, the Covid-19 pandemic in dengue-endemic areas poses a major challenge. Thus one can design a mathematical model describing the co-infection of these diseases and apply an optimal control strategy to minimize infected humans.
Chapter 3
A preliminary study of Dengue models accounting for the Vaccination
In this chapter, we introduce some mathematical models of dengue that consider a vaccine that should be given to people who have previous dengue infections. We compare various growth functions.
Description of the Model with Vaccination
Based on the Ross-type model, we assumed that dengue viruses are virulent with no other microorganism attacking the human body. Let M be the population of female mosquitoes split into two groups of susceptible S m and infectious I m mosquitoes. Figure 3.1 describes the flow of dengue disease. In this chapter, we introduce a mathematical model of dengue that considers the vaccine that should be given to people who are already infected by one type of virus.
Since humans have a meager mortality rate compared to mosquitoes, we neglect the natural death of humans but still consider their growth. The following system of ordinary equations governed the dynamics of humans.
\[
S_h'(t) = -\frac{a b_h I_m(t)}{H(t)} S_h(t) + f(H(t)) \tag{3.1}
\]
\[
I_h'(t) = \frac{a I_m(t)}{H(t)} \left( b_h S_h(t) + \bar{b}_h \bar{S}_h(t) \right) - \gamma_h I_h(t) - \delta_h I_h(t) \tag{3.2}
\]
\[
\bar{S}_h'(t) = \gamma_h I_h(t) - \frac{a \bar{b}_h I_m(t)}{H(t)} \bar{S}_h(t) \tag{3.3}
\]
\[
R_h'(t) = \delta_h I_h(t). \tag{3.4}
\]
While the dynamics of mosquitoes are as follows
\[
S_m'(t) = -\frac{a b_m I_h(t)}{H(t)} S_m(t) - \mu_m S_m(t) + g(M(t)) \tag{3.5}
\]
\[
I_m'(t) = \frac{a b_m I_h(t)}{H(t)} S_m(t) - \mu_m I_m(t). \tag{3.6}
\]
Note that the total human population is given by $H = S_h + I_h + \bar{S}_h + R_h$ and the total mosquito population by $M = S_m + I_m$. The function $f(H(t))$ is the change in the total human population, while $g(M(t))$ is the change in the total mosquito population. In this study, we will consider different growth models for the human and mosquito populations.
The term $\frac{a b_h I_m(t)}{H(t)}$ is the probability for a primary susceptible individual to be infected with the dengue virus, where $b_h$ is the probability of transmission of the virus from an infected mosquito to a primary susceptible human and $a$ represents the average number of bites of a mosquito. Whereas $\frac{a \bar{b}_h I_m(t)}{H(t)}$ is the probability for susceptible individuals who had previously been infected with dengue to become infectious again with a different serotype; that is, $\bar{b}_h$ is the probability of transmission of the virus from an infected mosquito to a secondary susceptible human. Furthermore, the rate of secondary susceptible people who recovered from infection by one, two, or three serotypes is represented by $\gamma_h I_h(t)$, and $\delta_h$ denotes the recovery rate from the four serotypes.
For the mosquito compartment, $\frac{a b_m I_h(t)}{H(t)}$ is the probability for a susceptible mosquito to become infectious once it bites a ratio of the infected human population. The parameter $b_m$ is the transmission probability from an infected human to a susceptible mosquito, and $\mu_m$ is the mosquitoes' death rate.
Study of the Model with Logistic Growth
In this section, let us consider the logistic growth function for the human population, namely $H'(t) = f(H(t)) = \alpha_h \left( 1 - \frac{H(t)}{K} \right) H(t)$, where $K$ is the carrying capacity of the human population, and an exponential growth function for the mosquito population, namely $M'(t) = g(M(t)) - \mu_m M(t) = \alpha_m M(t) - \mu_m M(t)$, where $\alpha_m$ and $\mu_m$ are the mosquitoes' growth and death rates, respectively. In this study, we assume that $\alpha_m \le \mu_m$.
Well-posedness and Positivity of the Solution
To simplify the reading, the system of ordinary differential equations above is rewritten as
\[
U'(t) = F(t, U(t)) \tag{3.7}
\]
with $U(t) = (u_1, u_2, u_3, u_4, u_5, u_6)^t = (S_h, I_h, \bar{S}_h, R_h, S_m, I_m)^t$ and
\[
F(U(t), t) = \left( -\frac{a b_h u_6 u_1}{H} + f(H),\ \frac{a u_6 (b_h u_1 + \bar{b}_h u_3)}{H} - \gamma_h u_2 - \delta_h u_2,\ \gamma_h u_2 - \frac{a \bar{b}_h u_6 u_3}{H},\ \delta_h u_2,\ -\frac{a b_m u_2 u_5}{H} - \mu_m u_5 + g(M),\ \frac{a b_m u_2 u_5}{H} - \mu_m u_6 \right)^t.
\]
Lemma 3.2.1. Let $(S_h(0), I_h(0), \bar{S}_h(0), R_h(0), S_m(0), I_m(0))$ be a nonnegative initial datum with $H(0) = S_h(0) + I_h(0) + \bar{S}_h(0) + R_h(0) > 0$ and $M(0) = S_m(0) + I_m(0) > 0$. Then there exist a time $T > 0$ and a unique solution $(S_h, I_h, \bar{S}_h, R_h, S_m, I_m)$ in $C([0, T], \mathbb{R})^6$.
Proof. Consider the initial value problem
U ′ (t) = F(t, U(t)) where U(0) = U 0 .
Note that $F$ is continuous and has continuous derivatives on $I \times U$. Thus, $F$ satisfies a local Lipschitz condition. Therefore, by the Cauchy-Lipschitz theorem, there exist $T > 0$ and a unique solution to equation (3.7) in $C([0, T], \mathbb{R})^6$.
Lemma 3.2.2. The region Ω defined by
\[
\Omega_{\log} = \left\{ (u_1, u_2, u_3, u_4, u_5, u_6) \in \mathbb{R}^6_+ : u_1 + u_2 + u_3 + u_4 \le K,\ u_5 + u_6 \le M_0 \right\}
\]
is invariant for the flow given by (3.7).
Proof. Let $(u_1, u_2, u_3, u_4, u_5, u_6) \in \Omega_{\log}$ be the solution of the system of equations (3.7). Since the total human population is given by the logistic growth model
\[
H'(t) = f(H(t)) = \alpha_h \left( 1 - \frac{H}{K} \right) H,
\]
we have
\[
\frac{1}{K} \frac{dH}{\alpha_h - \frac{\alpha_h H}{K}} + \frac{1}{\alpha_h} \frac{dH}{H} = dt.
\]
Integrating both sides of the equation gives us
\[
-\frac{1}{\alpha_h} \ln \left| \alpha_h - \frac{\alpha_h H}{K} \right| + \frac{1}{\alpha_h} \ln |H| = t + c.
\]
Combining the logarithmic functions, we get
\[
\ln \left| \frac{H}{\alpha_h - \frac{\alpha_h H}{K}} \right| = \alpha_h t + c_1, \quad \text{where } c_1 = c \alpha_h.
\]
Thus, exponentiating both sides, we have
\[
\frac{H}{\alpha_h - \frac{\alpha_h H}{K}} = e^{\alpha_h t + c_1} = C e^{\alpha_h t}, \quad \text{where } C = e^{c_1} = e^{c \alpha_h},
\]
so that $H = \left( \alpha_h - \frac{\alpha_h H}{K} \right) C e^{\alpha_h t}$. Solving for $H$, we get
\[
H = \frac{\alpha_h K C e^{\alpha_h t}}{K + \alpha_h C e^{\alpha_h t}}.
\]
Taking the initial condition, when $t = 0$, $H(0) = H_0$, so that
\[
H_0 = \frac{\alpha_h K C}{K + \alpha_h C}, \quad \text{and thus} \quad C = \frac{H_0 K}{\alpha_h (K - H_0)}.
\]
Therefore, the solution of the differential equation becomes
\[
H(t) = \frac{K H_0}{K e^{-\alpha_h t} - H_0 \left( e^{-\alpha_h t} - 1 \right)}.
\]
Since $\alpha_h \ge 0$ and $H_0 \le K$, we have $H(t) \le K$ for all $t \ge 0$. For the total mosquito population, we consider the exponential growth model
\[
M'(t) = \alpha_m M - \mu_m M = (\alpha_m - \mu_m) M.
\]
Let $\sigma_m = \alpha_m - \mu_m \le 0$. Then $M = C e^{\sigma_m t}$. Taking the initial condition, i.e. when $t = 0$, $M(0) = M_0$, we get $M_0 = C$. Hence, the solution of the differential equation becomes $M(t) = M_0 e^{\sigma_m t}$. Consequently, since $\sigma_m \le 0$, $M(t) \le M_0$ for all $t \ge 0$.
Therefore, all feasible solutions of the system (3.7) satisfy
\[
u_1 + u_2 + u_3 + u_4 \le K, \qquad u_5 + u_6 \le M_0.
\]
In proving the positivity, we assume that the parameters are positive.
• For $u_1$ in equation (3.1), we have, for all $u_2, u_3, u_4, u_5, u_6 \ge 0$,
\[
f_1(u_1 = 0, u_2, u_3, u_4, u_5, u_6) = -\frac{a b_h u_6 (0)}{H} + f(H) = f(H).
\]
Since we take a logistic growth function for the human population, we have $f(H) \ge 0$. Therefore, $f_1 \ge 0$.
• For $u_2$ in equation (3.2), we have, for all $u_1, u_3, u_4, u_5, u_6 \ge 0$,
\[
f_2(u_1, u_2 = 0, u_3, u_4, u_5, u_6) = \frac{a u_6}{H}\left( b_h u_1 + \bar{b}_h u_3 \right) - \gamma_h (0) - \delta_h (0) = \frac{a u_6}{H}\left( b_h u_1 + \bar{b}_h u_3 \right).
\]
Since the parameters $a$, $b_h$ and $\bar{b}_h$ are positive and $u_1, u_3, u_4, u_5, u_6 \ge 0$, we conclude that $f_2 \ge 0$.
• For $u_3$ in equation (3.3), we have, for all $u_1, u_2, u_4, u_5, u_6 \ge 0$,
\[
f_3(u_1, u_2, u_3 = 0, u_4, u_5, u_6) = \gamma_h u_2 - \frac{a \bar{b}_h u_6(t)(0)}{H(t)} = \gamma_h u_2.
\]
Since $u_2 \ge 0$, then $f_3 \ge 0$.
• For $u_4$ in equation (3.4), we have, for all $u_1, u_2, u_3, u_5, u_6 \ge 0$,
\[
f_4(u_1, u_2, u_3, u_4 = 0, u_5, u_6) = \delta_h u_2.
\]
Since $u_2 \ge 0$ and $\delta_h > 0$, therefore $f_4 \ge 0$.
• For $u_5$ in equation (3.5), we have, for all $u_1, u_2, u_3, u_4, u_6 \ge 0$,
\[
f_5(u_1, u_2, u_3, u_4, u_5 = 0, u_6) = -\frac{a b_m u_2 (0)}{H} - \mu_m (0) + g(M) = g(M).
\]
Since we consider an exponential growth function for the mosquito population, $g(M) \ge 0$. Therefore, $f_5 \ge 0$.
• For $u_6$ in equation (3.6), we have, for all $u_1, u_2, u_3, u_4, u_5 \ge 0$,
\[
f_6(u_1, u_2, u_3, u_4, u_5, u_6 = 0) = \frac{a b_m u_2 u_5}{u_1 + u_2 + u_3 + u_4} - \mu_m (0) = \frac{a b_m u_2 u_5}{u_1 + u_2 + u_3 + u_4}.
\]
Since $u_1, u_2, u_3, u_4, u_5 \ge 0$ and $a, b_m > 0$, this quantity is nonnegative. Therefore, $f_6 \ge 0$.
From the two lemmas above, we can deduce the following global well-posedness theorem.
Theorem 3.2.3. Let (u 1 (0), u 2 (0), u 3 (0), u 4 (0), u 5 (0), u 6 (0)) be in Ω log . Then there exists a unique global in time solution (u 1 , u 2 , u 3 , u 4 , u 5 , u 6 ) in C(R + , Ω log ).
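The explicit logistic solution $H(t)$ obtained in the proof of Lemma 3.2.2 can be checked directly against a numerical integration of $H' = \alpha_h (1 - H/K)H$; the short sketch below does so for illustrative values of $\alpha_h$, $K$ and $H_0$ (these values are assumptions chosen only for the check).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cross-check of the closed-form logistic solution against a direct numerical integration.
alpha_h, K, H0 = 0.02, 20_000.0, 10_000.0
t_eval = np.linspace(0.0, 500.0, 6)

sol = solve_ivp(lambda t, H: alpha_h*(1.0 - H/K)*H, (0.0, 500.0), [H0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
H_closed = K*H0 / (K*np.exp(-alpha_h*t_eval) - H0*(np.exp(-alpha_h*t_eval) - 1.0))
print(np.max(np.abs(sol.y[0] - H_closed)))   # should be close to zero
```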
Stability of the Equilibrium
In this section, we will try to determine the possible equilibrium point of the system of an ordinary differential equation (3.7) and assess their stability.
Equilibrium
Let $(u_1^*, u_2^*, u_3^*, u_4^*, u_5^*, u_6^*)$ be an equilibrium point of the system of equations (3.7). Then we have
\[
\begin{aligned}
-\frac{a b_h u_6 u_1}{u_1 + u_2 + u_3 + u_4} + \alpha_h (u_1 + u_2 + u_3 + u_4) - \frac{\alpha_h (u_1 + u_2 + u_3 + u_4)^2}{K} &= 0 &\text{(3.8)}\\
\frac{a u_6 \left( b_h u_1 + \bar{b}_h u_3 \right)}{u_1 + u_2 + u_3 + u_4} - \gamma_h u_2 - \delta_h u_2 &= 0 &\text{(3.9)}\\
\gamma_h u_2 - \frac{a \bar{b}_h u_3 u_6}{u_1 + u_2 + u_3 + u_4} &= 0 &\text{(3.10)}\\
\delta_h u_2 &= 0 &\text{(3.11)}\\
-\frac{a b_m u_2 u_5}{u_1 + u_2 + u_3 + u_4} - \mu_m u_5 + \alpha_m u_5 + \alpha_m u_6 &= 0 &\text{(3.12)}\\
\frac{a b_m u_2 u_5}{u_1 + u_2 + u_3 + u_4} - \mu_m u_6 &= 0 &\text{(3.13)}
\end{aligned}
\]
Solving the system of equations above, we get the candidate equilibria
$(u_1^*, 0, u_3^*, -u_1^* - u_3^*, u_5^*, 0)$,
$(u_1^*, 0, u_3^*, K - u_1^* - u_3^*, 0, 0)$,
$\left( u_1^*, 0, -\frac{b_h u_1^*}{\bar{b}_h}, -\frac{u_1^* (\bar{b}_h - b_h)}{\bar{b}_h}, u_5^*, 0 \right)$,
$(u_1^*, 0, 0, -u_1^*, u_5^*, 0)$,
$(0, 0, 0, 0, u_5^*, u_6^*)$, and
$\left( u_1^*, 0, -\frac{b_h u_1^*}{\bar{b}_h}, \frac{-u_1^* (\bar{b}_h - b_h) + K \bar{b}_h}{\bar{b}_h}, 0, 0 \right)$.
From Lemma 3.2.2, the solution of the system is positively invariant; thus, we disregard the candidates with negative components, namely $(u_1^*, 0, u_3^*, -u_1^* - u_3^*, u_5^*, 0)$, $\left( u_1^*, 0, -\frac{b_h u_1^*}{\bar{b}_h}, -\frac{u_1^* (\bar{b}_h - b_h)}{\bar{b}_h}, u_5^*, 0 \right)$, $(u_1^*, 0, 0, -u_1^*, u_5^*, 0)$ and $\left( u_1^*, 0, -\frac{b_h u_1^*}{\bar{b}_h}, \frac{-u_1^* (\bar{b}_h - b_h) + K \bar{b}_h}{\bar{b}_h}, 0, 0 \right)$, and consider only $(0, 0, 0, 0, u_5^*, u_6^*)$ and $(u_1^*, 0, u_3^*, K - u_1^* - u_3^*, 0, 0)$.
The lemma below shows that they are equilibrium points of the system (3.7).
Lemma 3.2.4. The system of equations (3.7) admits the equilibria $(0, 0, 0, 0, 0, 0)$ and $(u_1^*, 0, u_3^*, K - u_1^* - u_3^*, 0, 0)$.
Proof. Consider the system of equations above. From equation (3.11), since the parameters are all positive, we can conclude that $u_2 = 0$. Now, substituting $u_2$ by $0$ and multiplying each equation of the system by $u_1 + u_3 + u_4$, the system of equations (3.8)-(3.13) above becomes
\[
\begin{aligned}
-a b_h u_6 u_1 + \alpha_h (u_1 + u_3 + u_4)^2 - \frac{\alpha_h (u_1 + u_3 + u_4)^3}{K} &= 0 &\text{(3.14)}\\
a u_6 \left( b_h u_1 + \bar{b}_h u_3 \right) &= 0 &\text{(3.15)}\\
-a \bar{b}_h u_3 u_6 &= 0 &\text{(3.16)}\\
-(\mu_m u_5 - \alpha_m u_5 - \alpha_m u_6)(u_1 + u_3 + u_4) &= 0 &\text{(3.17)}\\
-\mu_m u_6 (u_1 + u_3 + u_4) &= 0 &\text{(3.18)}
\end{aligned}
\]
From equation (3.16), we get $u_3 = 0$ or $u_6 = 0$. Thus, we consider the following cases.
• Case 1. If $u_3 \neq 0$ and $u_6 = 0$, then equation (3.14) becomes $\alpha_h (u_1 + u_3 + u_4)^2 - \frac{\alpha_h (u_1 + u_3 + u_4)^3}{K} = 0$, implying that $u_1 + u_3 + u_4 = K$, i.e. $u_4 = K - u_1 - u_3$. Substituting this into equation (3.17) with $u_6 = 0$ gives $-(\mu_m u_5 - \alpha_m u_5)K = 0$, and we conclude that $u_5 = 0$, since $K \neq 0$ and $\mu_m \neq \alpha_m$. Thus, for any nonnegative $u_1^*$, $u_3^*$ and $u_4^*$ we get an equilibrium point $(u_1^*, 0, u_3^*, K - u_1^* - u_3^*, 0, 0)$.
• Case 2. If $u_3 = 0$ and $u_6 \neq 0$, then equation (3.15) becomes $a u_6 b_h u_1 = 0$. Consequently, $u_1 = 0$ since $u_6 \neq 0$. Now, substituting $u_1 = u_3 = 0$ into equation (3.18), we get $-\mu_m u_6 u_4 = 0$. Hence, $u_4 = 0$ since $u_6 \neq 0$. Therefore, for any nonnegative $u_5^*$, $u_6^*$, we get an equilibrium point $(0, 0, 0, 0, u_5^*, u_6^*)$. Moreover, $M^* = u_5^* + u_6^* = 0$, and hence $u_5^* = u_6^* = 0$.
• Case 3. If $u_3 = 0$ and $u_6 = 0$, then equation (3.14) becomes $\alpha_h (u_1 + u_4)^2 - \frac{\alpha_h (u_1 + u_4)^3}{K} = 0$. Simplifying the equation, we get $u_1 + u_4 = K$. Thus, substituting this into equation (3.17) with $u_6 = u_3 = 0$, we get $-(\mu_m u_5 - \alpha_m u_5)K = 0$. Since $\mu_m \neq \alpha_m$, $u_5 = 0$. Therefore, for any nonnegative $u_1^*$, $u_3^*$ and $u_4^*$ we get an equilibrium point $(u_1^*, 0, u_3^*, K - u_1^* - u_3^*, 0, 0)$.
Next Generation Matrix and Basic Reproduction Number
In this section, we will show the stability of the equilibrium using the next-generation matrix. Since the infected individuals are in u 2 and u 6 , then we can rewrite the system of the equation (3.7) as
\[
\mathcal{F} = \begin{pmatrix} \dfrac{a u_6 \left( b_h u_1 + \bar{b}_h u_3 \right)}{u_1 + u_2 + u_3 + u_4} \\[2mm] \dfrac{a b_m u_2 u_5}{u_1 + u_2 + u_3 + u_4} \end{pmatrix}, \qquad
\mathcal{V} = \begin{pmatrix} (\gamma_h + \delta_h) u_2 \\[2mm] \mu_m u_6 \end{pmatrix}
\]
where F is the rate of appearance of new infections in each compartment, and V is the rate of other transitions between all compartments.
If F is an entry wise nonnegative matrix and V is a non-singular M-matrix, then we have
\[
F = \begin{pmatrix} \dfrac{\partial \mathcal{F}_1}{\partial u_2} & \dfrac{\partial \mathcal{F}_1}{\partial u_6} \\[2mm] \dfrac{\partial \mathcal{F}_2}{\partial u_2} & \dfrac{\partial \mathcal{F}_2}{\partial u_6} \end{pmatrix}
\quad \text{and} \quad
V = \begin{pmatrix} \dfrac{\partial \mathcal{V}_1}{\partial u_2} & \dfrac{\partial \mathcal{V}_1}{\partial u_6} \\[2mm] \dfrac{\partial \mathcal{V}_2}{\partial u_2} & \dfrac{\partial \mathcal{V}_2}{\partial u_6} \end{pmatrix}.
\]
Thus,
\[
F = \begin{pmatrix} \dfrac{-a u_6 \left( b_h u_1 + \bar{b}_h u_3 \right)}{(u_1 + u_2 + u_3 + u_4)^2} & \dfrac{a \left( b_h u_1 + \bar{b}_h u_3 \right)}{u_1 + u_2 + u_3 + u_4} \\[2mm] \dfrac{a b_m u_5 (u_1 + u_3 + u_4)}{(u_1 + u_2 + u_3 + u_4)^2} & 0 \end{pmatrix}, \quad
V = \begin{pmatrix} \gamma_h + \delta_h & 0 \\ 0 & \mu_m \end{pmatrix}, \quad
V^{-1} = \begin{pmatrix} \dfrac{1}{\gamma_h + \delta_h} & 0 \\[2mm] 0 & \dfrac{1}{\mu_m} \end{pmatrix}.
\]
Therefore,
\[
F V^{-1} = \begin{pmatrix} \dfrac{-a u_6 \left( b_h u_1 + \bar{b}_h u_3 \right)}{(\gamma_h + \delta_h)(u_1 + u_2 + u_3 + u_4)^2} & \dfrac{a \left( b_h u_1 + \bar{b}_h u_3 \right)}{\mu_m (u_1 + u_2 + u_3 + u_4)} \\[2mm] \dfrac{a b_m u_5 (u_1 + u_3 + u_4)}{(\gamma_h + \delta_h)(u_1 + u_2 + u_3 + u_4)^2} & 0 \end{pmatrix}.
\]
Since the characteristic polynomial is $\det\left( F V^{-1} - \lambda I \right)$, writing $u_H = u_1 + u_2 + u_3 + u_4$ we have
\[
\det\left( F V^{-1} - \lambda I \right) = \left( \frac{-a u_6 \left( \bar{b}_h u_3 + b_h u_1 \right)}{(\gamma_h + \delta_h) u_H^2} - \lambda \right)(-\lambda) - \frac{a^2 b_m \left( \bar{b}_h u_3 + b_h u_1 \right) u_5 (u_1 + u_3 + u_4)}{(\gamma_h + \delta_h)\mu_m u_H^3}
= \lambda^2 + \lambda \frac{a u_6 \left( \bar{b}_h u_3 + b_h u_1 \right)}{(\gamma_h + \delta_h) u_H^2} - \frac{a^2 b_m \left( \bar{b}_h u_3 + b_h u_1 \right) u_5 (u_H - u_2)}{(\gamma_h + \delta_h)\mu_m u_H^3}.
\]
Solving this quadratic equation, we have
\[
\lambda = -\frac{a u_6 \left( \bar{b}_h u_3 + b_h u_1 \right)}{2(\gamma_h + \delta_h) u_H^2} \pm \frac{1}{2}\sqrt{ \left( \frac{a u_6 \left( \bar{b}_h u_3 + b_h u_1 \right)}{(\gamma_h + \delta_h) u_H^2} \right)^2 - \frac{4 a^2 b_m u_5 \left( \bar{b}_h u_3 + b_h u_1 \right)(u_H - u_2)}{\mu_m (\gamma_h + \delta_h) u_H^3} },
\]
which simplifies to the eigenvalue
\[
\lambda = \frac{a \left( \bar{b}_h u_3 + b_h u_1 \right)}{2(\gamma_h + \delta_h) u_H} \left( -\frac{u_6}{u_H} \pm \sqrt{ u_6^2 - \frac{4 b_m u_5 u_H (\gamma_h + \delta_h)(u_H - u_2)}{\mu_m \left( \bar{b}_h u_3 + b_h u_1 \right)} } \right).
\]
Now, using this eigenvalue, let us determine the stability of the equilibrium points.
Lemma 3.2.5. The equilibrium point $(u_1^*, 0, u_3^*, u_4^*, 0, 0)$, where $u_4^* = K - u_1^* - u_3^*$, called the Disease-Free Equilibrium, of the system of equations (3.7) is locally asymptotically stable.
Proof. If $u_1 = u_1^*$, $u_2 = 0$, $u_3 = u_3^*$, $u_4^* = K - u_1^* - u_3^*$, then $u_H = u_1 + u_2 + u_3 + u_4 = K$. Therefore, from the above eigenvalues, we have
\[
\lambda = \frac{a \left( \bar{b}_h u_3^* + b_h u_1^* \right)}{2(\gamma_h + \delta_h) K} \left( -\frac{0}{K} \pm \sqrt{ 0^2 - \frac{4 b_m (0) K (\gamma_h + \delta_h)(K - 0)}{\mu_m \left( \bar{b}_h u_3^* + b_h u_1^* \right)} } \right).
\]
Thus, $\lambda = 0 < 1$, and the system is locally asymptotically stable at the equilibrium point $DFE = (u_1^*, 0, u_3^*, u_4^*, 0, 0)$.
Lemma 3.2.6. The equilibrium point $(0, 0, 0, 0, u_5^*, u_6^*)$, called the Endemic Equilibrium, of the system of equations (3.7) is unstable.
Proof. From the above eigenvalues,
\[
\rho\left( F V^{-1} \right) = \lim_{u_1, u_2, u_3, u_4 \to 0} \lambda = +\infty > 1.
\]
Therefore, the system of equation is unstable at EE = (0, 0, 0, 0, u * 5 , u * 6 ).
Jacobian Matrix
Let $u'(t) = f(u(t), t)$ where $u = (u_1, u_2, u_3, u_4, u_5, u_6)$. In this section, let us confirm the stability result using the Jacobian matrix defined as
\[
J(u_1^*, u_2^*, u_3^*, u_4^*, u_5^*, u_6^*) = \begin{pmatrix}
\frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \frac{\partial f_1}{\partial u_3} & \frac{\partial f_1}{\partial u_4} & \frac{\partial f_1}{\partial u_5} & \frac{\partial f_1}{\partial u_6}\\
\frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} & \frac{\partial f_2}{\partial u_3} & \frac{\partial f_2}{\partial u_4} & \frac{\partial f_2}{\partial u_5} & \frac{\partial f_2}{\partial u_6}\\
\frac{\partial f_3}{\partial u_1} & \frac{\partial f_3}{\partial u_2} & \frac{\partial f_3}{\partial u_3} & \frac{\partial f_3}{\partial u_4} & \frac{\partial f_3}{\partial u_5} & \frac{\partial f_3}{\partial u_6}\\
\frac{\partial f_4}{\partial u_1} & \frac{\partial f_4}{\partial u_2} & \frac{\partial f_4}{\partial u_3} & \frac{\partial f_4}{\partial u_4} & \frac{\partial f_4}{\partial u_5} & \frac{\partial f_4}{\partial u_6}\\
\frac{\partial f_5}{\partial u_1} & \frac{\partial f_5}{\partial u_2} & \frac{\partial f_5}{\partial u_3} & \frac{\partial f_5}{\partial u_4} & \frac{\partial f_5}{\partial u_5} & \frac{\partial f_5}{\partial u_6}\\
\frac{\partial f_6}{\partial u_1} & \frac{\partial f_6}{\partial u_2} & \frac{\partial f_6}{\partial u_3} & \frac{\partial f_6}{\partial u_4} & \frac{\partial f_6}{\partial u_5} & \frac{\partial f_6}{\partial u_6}
\end{pmatrix}.
\]
Computing the partial derivatives $\frac{\partial f_i}{\partial u_j}$, for $i, j = 1, 2, \dots, 6$, and writing $u_H = u_1 + u_2 + u_3 + u_4$, we have
\[
\begin{aligned}
&\frac{\partial f_1}{\partial u_1} = -\frac{a b_h u_6 (u_2 + u_3 + u_4)}{u_H^2} + \alpha_h - \frac{2\alpha_h u_H}{K}, \quad
\frac{\partial f_1}{\partial u_2} = \frac{\partial f_1}{\partial u_3} = \frac{\partial f_1}{\partial u_4} = \frac{a b_h u_6 u_1}{u_H^2} + \alpha_h - \frac{2\alpha_h u_H}{K}, \quad
\frac{\partial f_1}{\partial u_5} = 0, \quad
\frac{\partial f_1}{\partial u_6} = -\frac{a b_h u_1}{u_H},\\
&\frac{\partial f_2}{\partial u_1} = \frac{a b_h u_6 (u_2 + u_3 + u_4) - a \bar{b}_h u_6 u_3}{u_H^2}, \quad
\frac{\partial f_2}{\partial u_2} = \frac{\partial f_2}{\partial u_4} = \frac{-a b_h u_6 u_1 - a \bar{b}_h u_6 u_3}{u_H^2} - \gamma_h \mathbb{1}_{\{j=2\}} - \delta_h \mathbb{1}_{\{j=2\}}, \quad
\frac{\partial f_2}{\partial u_3} = \frac{a \bar{b}_h u_6 (u_1 + u_2 + u_4) - a b_h u_6 u_1}{u_H^2}, \quad
\frac{\partial f_2}{\partial u_5} = 0, \quad
\frac{\partial f_2}{\partial u_6} = \frac{a b_h u_1 + a \bar{b}_h u_3}{u_H},\\
&\frac{\partial f_3}{\partial u_1} = \frac{\partial f_3}{\partial u_4} = \frac{a \bar{b}_h u_6 u_3}{u_H^2}, \quad
\frac{\partial f_3}{\partial u_2} = \gamma_h + \frac{a \bar{b}_h u_6 u_3}{u_H^2}, \quad
\frac{\partial f_3}{\partial u_3} = -\frac{a \bar{b}_h u_6 (u_1 + u_2 + u_4)}{u_H^2}, \quad
\frac{\partial f_3}{\partial u_5} = 0, \quad
\frac{\partial f_3}{\partial u_6} = -\frac{a \bar{b}_h u_3}{u_H},\\
&\frac{\partial f_4}{\partial u_1} = \frac{\partial f_4}{\partial u_3} = \frac{\partial f_4}{\partial u_4} = \frac{\partial f_4}{\partial u_5} = \frac{\partial f_4}{\partial u_6} = 0, \quad
\frac{\partial f_4}{\partial u_2} = \delta_h,\\
&\frac{\partial f_5}{\partial u_1} = \frac{\partial f_5}{\partial u_3} = \frac{\partial f_5}{\partial u_4} = \frac{a b_m u_2 u_5}{u_H^2}, \quad
\frac{\partial f_5}{\partial u_2} = -\frac{a b_m u_5 (u_1 + u_3 + u_4)}{u_H^2}, \quad
\frac{\partial f_5}{\partial u_5} = -\frac{a b_m u_2}{u_H} - \mu_m + \alpha_m, \quad
\frac{\partial f_5}{\partial u_6} = \alpha_m,\\
&\frac{\partial f_6}{\partial u_1} = \frac{\partial f_6}{\partial u_3} = \frac{\partial f_6}{\partial u_4} = -\frac{a b_m u_2 u_5}{u_H^2}, \quad
\frac{\partial f_6}{\partial u_2} = \frac{a b_m u_5 (u_1 + u_3 + u_4)}{u_H^2}, \quad
\frac{\partial f_6}{\partial u_5} = \frac{a b_m u_2}{u_H}, \quad
\frac{\partial f_6}{\partial u_6} = -\mu_m.
\end{aligned}
\]
Here the terms $-\gamma_h - \delta_h$ appear only in $\frac{\partial f_2}{\partial u_2}$, i.e. $\frac{\partial f_2}{\partial u_2} = \frac{-a b_h u_6 u_1 - a \bar{b}_h u_6 u_3}{u_H^2} - \gamma_h - \delta_h$ and $\frac{\partial f_2}{\partial u_4} = \frac{-a b_h u_6 u_1 - a \bar{b}_h u_6 u_3}{u_H^2}$.
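These partial derivatives can be cross-checked symbolically; the sketch below builds the right-hand side of the logistic model and lets SymPy compute the Jacobian. The symbol names mirror the notation of the text, and $g(M) = \alpha_m(u_5 + u_6)$ is taken as in this section.

```python
import sympy as sp

# Symbolic cross-check of the Jacobian of the logistic dengue model.
u1, u2, u3, u4, u5, u6 = sp.symbols('u1:7', nonnegative=True)
a, b_h, bb_h, b_m, g_h, d_h, mu_m, al_m, al_h, K = sp.symbols(
    'a b_h bbar_h b_m gamma_h delta_h mu_m alpha_m alpha_h K', positive=True)
H = u1 + u2 + u3 + u4
F = sp.Matrix([
    -a*b_h*u6*u1/H + al_h*(1 - H/K)*H,
    a*u6*(b_h*u1 + bb_h*u3)/H - g_h*u2 - d_h*u2,
    g_h*u2 - a*bb_h*u3*u6/H,
    d_h*u2,
    -a*b_m*u2*u5/H - mu_m*u5 + al_m*(u5 + u6),
    a*b_m*u2*u5/H - mu_m*u6,
])
J = F.jacobian([u1, u2, u3, u4, u5, u6])
# Should simplify to -a*b_h*u6*(u2+u3+u4)/H**2 + alpha_h - 2*alpha_h*H/K
print(sp.simplify(J[0, 0]))
```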
From the Jacobian matrix above, the lemma below verifies our result for the stability of the disease-free equilibrium point.
Lemma 3.2.7. The disease-free equilibrium point $DFE = (u_1^*, 0, u_3^*, u_4^*, 0, 0)$, where $u_4^* = K - u_1^* - u_3^*$, of the system of equations (3.7) is locally asymptotically stable, and the endemic equilibrium $(0, 0, 0, 0, u_5^*, u_6^*)$ is unstable.
Proof.
Let $DFE = (u_1^*, 0, u_3^*, u_4^*, 0, 0)$, where $u_4^* = K - u_1^* - u_3^*$, be an equilibrium point of the system of equations (3.7). Then the Jacobian matrix above evaluated at the DFE reduces to
\[
J(DFE) = \begin{pmatrix}
-\alpha_h & -\alpha_h & -\alpha_h & -\alpha_h & 0 & -\frac{a b_h u_1^*}{K}\\
0 & -\gamma_h - \delta_h & 0 & 0 & 0 & \frac{a b_h u_1^* + a \bar{b}_h u_3^*}{K}\\
0 & \gamma_h & 0 & 0 & 0 & -\frac{a \bar{b}_h u_3^*}{K}\\
0 & \delta_h & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -\mu_m + \alpha_m & \alpha_m\\
0 & 0 & 0 & 0 & 0 & -\mu_m
\end{pmatrix}.
\]
Setting $|J(DFE) - \lambda I_6| = 0$, i.e.
\[
\begin{vmatrix}
-\alpha_h - \lambda & -\alpha_h & -\alpha_h & -\alpha_h & 0 & -\frac{a b_h u_1^*}{K}\\
0 & -\gamma_h - \delta_h - \lambda & 0 & 0 & 0 & \frac{a b_h u_1^* + a \bar{b}_h u_3^*}{K}\\
0 & \gamma_h & -\lambda & 0 & 0 & -\frac{a \bar{b}_h u_3^*}{K}\\
0 & \delta_h & 0 & -\lambda & 0 & 0\\
0 & 0 & 0 & 0 & -\mu_m + \alpha_m - \lambda & \alpha_m\\
0 & 0 & 0 & 0 & 0 & -\mu_m - \lambda
\end{vmatrix} = 0,
\]
and solving this determinant gives the characteristic polynomial of the system,
\[
(-\lambda)(-\lambda)(-\lambda - \alpha_h)(-\lambda - \gamma_h - \delta_h)(-\lambda - \mu_m)(-\lambda - (\mu_m - \alpha_m)) = 0.
\]
Hence the eigenvalues are
\[
\lambda = 0 \ (\text{multiplicity } 2), \quad \lambda = -\alpha_h, \quad \lambda = -(\gamma_h + \delta_h), \quad \lambda = -\mu_m, \quad \lambda = -(\mu_m - \alpha_m).
\]
Since µ mα m > 0, all eigenvalues are negatives. Therefore, the equilibrium point, DFE = (u * 1 , 0, u * 3 , u * 4 , 0, 0) is locally asymptotically stable. Similar computations shows that the endemic equilibrium is locally asymptotically unstable.
Phase Portrait Analysis
A phase portrait of our dynamical system graphically represents the system behavior: it is a geometric representation of the trajectories of a dynamical system in the phase plane. In this section, we illustrate some simulations performed using Python. Initially, the human and mosquito populations were healthy, with 10,000 humans and 100,000 mosquitoes, respectively, and only ten infected mosquitoes.
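The simulations of this section can be reproduced qualitatively with a standard ODE solver; the sketch below integrates system (3.7) with logistic human growth and exponential mosquito growth from the stated initial condition. The parameter values are illustrative stand-ins for the parameter table used in the thesis, so the exact figures quoted below will differ.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions) for system (3.7) with logistic human growth.
a, b_h, bb_h, b_m = 1.0, 0.4, 0.4, 0.375
gamma_h, delta_h = 0.1, 0.07
alpha_h, alpha_m, mu_m, K = 0.00004, 0.02, 0.05, 25_000.0

def rhs(t, u):
    u1, u2, u3, u4, u5, u6 = u
    H = u1 + u2 + u3 + u4
    return [
        -a*b_h*u6*u1/H + alpha_h*(1 - H/K)*H,
        a*u6*(b_h*u1 + bb_h*u3)/H - (gamma_h + delta_h)*u2,
        gamma_h*u2 - a*bb_h*u3*u6/H,
        delta_h*u2,
        -a*b_m*u2*u5/H - mu_m*u5 + alpha_m*(u5 + u6),
        a*b_m*u2*u5/H - mu_m*u6,
    ]

u0 = [10_000, 0, 0, 0, 100_000, 10]     # healthy populations plus ten infected mosquitoes
sol = solve_ivp(rhs, (0, 400), u0, max_step=0.5)
print("peak of infected humans:", sol.y[1].max())
```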
Since a mosquito needs to feed two to three times a day to take a full blood meal, it takes 11 days for the primary susceptible human population $S_h$ to reach its lowest value of six humans, after which it increases exponentially to its equilibrium of 10,019 individuals. It takes only seven days for the infected human population $I_h$ to reach its peak of 6,084 humans; after that, a rapid decrease follows. The recovered human population $R_h$ increases continuously for all time $t$ towards its maximum of 22,864 humans.
Note that for a mosquito to be infected by the virus, it must take its blood meal during viremia, when the infected person has high levels of the dengue virus in the blood. Thus, it takes 16 days for the infected mosquitoes $I_m$ to reach their peak of 68,116 mosquitoes before dropping exponentially.
Since humans kill some of these mosquitoes and the virus needs eight to twelve days to spread through a mosquito's body, the susceptible mosquito population $S_m$ drops for 18 days to a local minimum of 24,814 mosquitoes, then increases to 49,857 mosquitoes at 82 days, and decreases again afterward.
Moreover, since viremia lasts for 4 to 5 days in a primary infection, most people recover after about a week. It takes only seven days for the secondary susceptible human population $\bar{S}_h$ to reach a local maximum of 1,730 individuals; it then decreases to 53 individuals at 44 days and increases exponentially to a maximum of 2,474 individuals. Figure 3.3a shows that at the beginning there are 10 infected mosquitoes and no infected humans. As the infected humans increase, the infected mosquitoes also increase.
After some time, the two variables become inversely proportional: as infected humans decrease, the infected mosquitoes continue to increase. Towards the end of the simulation, both variables decrease towards zero.
On the other hand, Figure 3.3b shows that at time $t = 0$ there are 100,000 susceptible mosquitoes and 10 infected mosquitoes. For some time, susceptible mosquitoes decrease while infected mosquitoes increase. When the maximum of 68,117.679 infected mosquitoes is reached, they then decrease while susceptible mosquitoes increase. After some time, both variables decrease towards zero.
For infected humans, Figure 3.3c shows that at time $t = 0$ there are no infected humans but 10,000 primary susceptible humans. For some time, as infected humans increase, primary susceptible humans decrease. Upon reaching 6,125.015 infected humans, both variables decrease. Then primary susceptible humans start to increase while infected humans continue to decrease.
Comparison against Growth Functions
In this section, we will consider different growth functions for the human population $f(H(t))$ and the mosquito population $g(M(t))$ of the system (3.1)-(3.6) of ordinary differential equations. We consider three growth functions:
• Pop 1 : constant human and mosquitoes population,
• Pop 2 : Gompertz growth function for the human population and an exponential growth function for the mosquitoes population,
• Pop 3 : an entomological growth function for the mosquito population and a constant growth function for the human population.
Constant Human and Mosquito Population
Consider constant human and mosquito populations. For the human population, we set $H(t) = H_0$ where $H_0$ is constant; then $H'(t) = f(H(t)) = 0$. Also, for the mosquito population we set $M(t) = M_0$ where $M_0$ is constant; then $M'(t) = g(M(t)) - \mu_m M(t) = 0$, and our model becomes
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0}\\
u_2'(t) &= \frac{a u_6(t)\left( b_h u_1(t) + \bar{b}_h u_3(t) \right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar{b}_h u_3(t) u_6(t)}{H_0}\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} + \mu_m u_6(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t)
\end{aligned}
\tag{3.19}
\]
Theorem 3.3.1. Let $(u_1(0), u_2(0), u_3(0), u_4(0), u_5(0), u_6(0))$ be in $\Omega_{cons}$ defined by
\[
\Omega_{cons} = \left\{ U \in \mathbb{R}^6_+ : u_1 + u_2 + u_3 + u_4 = H_0,\ u_5 + u_6 = M_0 \right\}.
\]
Then there exists a unique global in time solution $(u_1, u_2, u_3, u_4, u_5, u_6)$ in $C(\mathbb{R}_+, \Omega_{cons})$.
Proof. The proof follows from the fact that the constant population is bounded and well-defined.
Equilibrium
If we solve for all possible equilibrium values that lie in $\Omega_{cons}$, we only obtain $E_{cons} = (u_1^*, 0, u_3^*, u_4^*, u_5^*, 0)$, as the following lemma shows.
Lemma 3.3.2. The system of equations (3.19) admits the equilibrium $E_{cons} = (u_1^*, 0, u_3^*, u_4^*, u_5^*, 0)$.
Proof. Let $u_1' = u_2' = u_3' = u_4' = u_5' = u_6' = 0$.
Since all parameters are positive, $\delta_h u_2 = 0$ implies $u_2 = 0$. Thus,
\[
\frac{a b_m u_2 u_5}{H_0} - \mu_m u_6 = 0 \quad \text{becomes} \quad \frac{a b_m (0) u_5}{H_0} - \mu_m u_6 = -\mu_m u_6 = 0.
\]
Hence, $u_6 = 0$. Since every remaining equation of the system contains $u_2$ or $u_6$ (or both), which are zero, any nonnegative values of $u_1, u_3, u_4, u_5$ satisfy the system of equations. Therefore, $(u_1^*, 0, u_3^*, u_4^*, u_5^*, 0)$ is an equilibrium point.
Next Generation Matrix and Basic Reproduction Number
Now, let us determine the stability of this equilibrium point by computing the next-generation matrix of the system of equations (3.19). We have
\[
\mathcal{F} = \begin{pmatrix} \dfrac{a u_6 \left( b_h u_1 + \bar{b}_h u_3 \right)}{H_0} \\[2mm] \dfrac{a b_m u_2 u_5}{H_0} \end{pmatrix}, \qquad
\mathcal{V} = \begin{pmatrix} (\gamma_h + \delta_h) u_2 \\[2mm] \mu_m u_6 \end{pmatrix},
\]
where $\mathcal{F}$ is the rate of appearance of new infections in each compartment and $\mathcal{V}$ is the rate of other transitions between all other compartments. Thus,
\[
F = \begin{pmatrix} 0 & \dfrac{a \left( b_h u_1 + \bar{b}_h u_3 \right)}{H_0} \\[2mm] \dfrac{a b_m u_5}{H_0} & 0 \end{pmatrix}
\quad \text{and} \quad
V = \begin{pmatrix} \gamma_h + \delta_h & 0 \\ 0 & \mu_m \end{pmatrix}.
\]
Therefore,
\[
F V^{-1} = \begin{pmatrix} 0 & \dfrac{a \left( b_h u_1 + \bar{b}_h u_3 \right)}{\mu_m H_0} \\[2mm] \dfrac{a b_m u_5}{(\gamma_h + \delta_h) H_0} & 0 \end{pmatrix}.
\]
Since the characteristic polynomial is $\det\left( F V^{-1} - \lambda I \right)$, we have
\[
\det\left( F V^{-1} - \lambda I \right) = \begin{vmatrix} -\lambda & \dfrac{a \left( b_h u_1 + \bar{b}_h u_3 \right)}{\mu_m H_0} \\[2mm] \dfrac{a b_m u_5}{(\gamma_h + \delta_h) H_0} & -\lambda \end{vmatrix}
= \lambda^2 - \frac{a^2 b_m u_5 \left( \bar{b}_h u_3 + b_h u_1 \right)}{H_0^2 \mu_m (\gamma_h + \delta_h)}.
\]
Solving for $\lambda$, we get the eigenvalues
\[
\lambda = \pm \sqrt{ \frac{a^2 b_m u_5 \left( \bar{b}_h u_3 + b_h u_1 \right)}{H_0^2 \mu_m (\gamma_h + \delta_h)} }. \tag{3.20}
\]
Therefore,
\[
R_0^2 := \frac{a^2 b_m u_5 \left( \bar{b}_h u_3 + b_h u_1 \right)}{H_0^2 \mu_m (\gamma_h + \delta_h)},
\]
and we have the following theorem.
Theorem 3.3.3. If R 0 < 1, then the disease free equilibrium E cons is asymptotically stable. If R 0 > 1, then the disease free equilibrium E cons is unstable.
To explain R 0 biologically, let us consider the following cases.
• Suppose $R_0 > 1$, that is, $\frac{a b_m u_5}{H_0} \cdot \frac{a \bar{b}_h u_3 + a b_h u_1}{H_0} > \mu_m (\gamma_h + \delta_h)$. It means that the severity of the probability of infection spreading to susceptible mosquitoes, $\frac{a b_m u_5}{H_0}$, and to susceptible humans, $\frac{a \bar{b}_h u_3 + a b_h u_1}{H_0}$, is greater than the product of the mortality rate of mosquitoes $\mu_m$ and the recovery rate of infectious humans $\gamma_h + \delta_h$. In effect, the disease spreads in the population, resulting in a dengue outbreak. Therefore, if $R_0 > 1$, then the DFE is unstable.
• Suppose $R_0 < 1$, that is, $\frac{a b_m u_5}{H_0} \cdot \frac{a \bar{b}_h u_3 + a b_h u_1}{H_0} < \mu_m (\gamma_h + \delta_h)$. It means that the severity of the probability of infection spreading to susceptible mosquitoes, $\frac{a b_m u_5}{H_0}$, and to susceptible humans, $\frac{a \bar{b}_h u_3 + a b_h u_1}{H_0}$, is smaller than the product of the mortality rate of mosquitoes $\mu_m$ and the recovery rate of infectious humans $\gamma_h + \delta_h$. Thus, the disease is dampened, resulting in a controlled dengue disease. Therefore, if $R_0 < 1$, the DFE is asymptotically stable.
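The next-generation computation for the constant-population model is short enough to be verified symbolically; the following sketch mirrors it with SymPy and recovers $R_0^2$ as the product of the off-diagonal entries of $FV^{-1}$ (symbol names follow the text).

```python
import sympy as sp

# Symbolic derivation of R_0 for the constant-population model via the next-generation matrix.
u1, u2, u3, u4, u5, u6, H0 = sp.symbols('u1:7 H_0', positive=True)
a, b_h, bb_h, b_m, g_h, d_h, mu_m = sp.symbols('a b_h bbar_h b_m gamma_h delta_h mu_m',
                                               positive=True)

Fcal = sp.Matrix([a*u6*(b_h*u1 + bb_h*u3)/H0, a*b_m*u2*u5/H0])   # new-infection terms
Vcal = sp.Matrix([(g_h + d_h)*u2, mu_m*u6])                      # transfer terms
F = Fcal.jacobian([u2, u6])
V = Vcal.jacobian([u2, u6])
NGM = sp.simplify(F * V.inv())

# The spectral radius squared equals the product of the two off-diagonal entries of FV^{-1}.
R0_sq = sp.simplify(NGM[0, 1] * NGM[1, 0])
print(R0_sq)   # should simplify to a**2*b_m*u5*(b_h*u1 + bbar_h*u3)/(H_0**2*mu_m*(gamma_h + delta_h))
```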
Jacobian Matrix
To confirm the theorem above, we compute the Jacobian matrix of the system (3.19).
We compute the partial derivatives of $f_i$ with respect to $u_j$, for $i, j = 1, 2, \dots, 6$. We have
\[
\begin{aligned}
&\frac{\partial f_1}{\partial u_1} = -\frac{a b_h u_6}{H_0}, \quad \frac{\partial f_1}{\partial u_2} = \frac{\partial f_1}{\partial u_3} = \frac{\partial f_1}{\partial u_4} = \frac{\partial f_1}{\partial u_5} = 0, \quad \frac{\partial f_1}{\partial u_6} = -\frac{a b_h u_1}{H_0},\\
&\frac{\partial f_2}{\partial u_1} = \frac{a b_h u_6}{H_0}, \quad \frac{\partial f_2}{\partial u_2} = -\gamma_h - \delta_h, \quad \frac{\partial f_2}{\partial u_3} = \frac{a \bar{b}_h u_6}{H_0}, \quad \frac{\partial f_2}{\partial u_4} = \frac{\partial f_2}{\partial u_5} = 0, \quad \frac{\partial f_2}{\partial u_6} = \frac{a b_h u_1 + a \bar{b}_h u_3}{H_0},\\
&\frac{\partial f_3}{\partial u_1} = 0, \quad \frac{\partial f_3}{\partial u_2} = \gamma_h, \quad \frac{\partial f_3}{\partial u_3} = -\frac{a \bar{b}_h u_6}{H_0}, \quad \frac{\partial f_3}{\partial u_4} = \frac{\partial f_3}{\partial u_5} = 0, \quad \frac{\partial f_3}{\partial u_6} = -\frac{a \bar{b}_h u_3}{H_0},\\
&\frac{\partial f_4}{\partial u_1} = 0, \quad \frac{\partial f_4}{\partial u_2} = \delta_h, \quad \frac{\partial f_4}{\partial u_3} = \frac{\partial f_4}{\partial u_4} = \frac{\partial f_4}{\partial u_5} = \frac{\partial f_4}{\partial u_6} = 0,\\
&\frac{\partial f_5}{\partial u_1} = 0, \quad \frac{\partial f_5}{\partial u_2} = -\frac{a b_m u_5}{H_0}, \quad \frac{\partial f_5}{\partial u_3} = \frac{\partial f_5}{\partial u_4} = 0, \quad \frac{\partial f_5}{\partial u_5} = -\frac{a b_m u_2}{H_0}, \quad \frac{\partial f_5}{\partial u_6} = \mu_m,\\
&\frac{\partial f_6}{\partial u_1} = 0, \quad \frac{\partial f_6}{\partial u_2} = \frac{a b_m u_5}{H_0}, \quad \frac{\partial f_6}{\partial u_3} = \frac{\partial f_6}{\partial u_4} = 0, \quad \frac{\partial f_6}{\partial u_5} = \frac{a b_m u_2}{H_0}, \quad \frac{\partial f_6}{\partial u_6} = -\mu_m.
\end{aligned}
\]
Therefore, we have the following lemma.
Lemma 3.3.4. The equilibrium point $E_{cons} = (u_1^*, 0, u_3^*, u_4^*, u_5^*, 0)$ is locally asymptotically stable.
Proof. Let $E_{cons} = (u_1^*, 0, u_3^*, u_4^*, u_5^*, 0)$ be an equilibrium point of the system of equations. Substituting $u_2^* = 0$ and $u_6^* = 0$, the Jacobian matrix above reduces to
\[
J(E_{cons}) = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & -\frac{a b_h u_1^*}{H_0}\\
0 & -\gamma_h - \delta_h & 0 & 0 & 0 & \frac{a b_h u_1^* + a \bar{b}_h u_3^*}{H_0}\\
0 & \gamma_h & 0 & 0 & 0 & -\frac{a \bar{b}_h u_3^*}{H_0}\\
0 & \delta_h & 0 & 0 & 0 & 0\\
0 & -\frac{a b_m u_5^*}{H_0} & 0 & 0 & 0 & \mu_m\\
0 & \frac{a b_m u_5^*}{H_0} & 0 & 0 & 0 & -\mu_m
\end{pmatrix}.
\]
Let $|J(E_{cons}) - \lambda I_6| = 0$, i.e.
\[
\begin{vmatrix}
-\lambda & 0 & 0 & 0 & 0 & -\frac{a b_h u_1^*}{H_0}\\
0 & -\gamma_h - \delta_h - \lambda & 0 & 0 & 0 & \frac{a b_h u_1^* + a \bar{b}_h u_3^*}{H_0}\\
0 & \gamma_h & -\lambda & 0 & 0 & -\frac{a \bar{b}_h u_3^*}{H_0}\\
0 & \delta_h & 0 & -\lambda & 0 & 0\\
0 & -\frac{a b_m u_5^*}{H_0} & 0 & 0 & -\lambda & \mu_m\\
0 & \frac{a b_m u_5^*}{H_0} & 0 & 0 & 0 & -\mu_m - \lambda
\end{vmatrix} = 0.
\]
Therefore, solving this equation determines the characteristic polynomial. We get
\[
(-\lambda)^4 \left[ (-\gamma_h - \delta_h - \lambda)(-\mu_m - \lambda) - \frac{a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right)}{H_0^2} \right] = 0.
\]
Now, let us solve $(-\gamma_h - \delta_h - \lambda)(-\mu_m - \lambda) - \frac{a b_m u_5^* ( a b_h u_1^* + a \bar{b}_h u_3^* )}{H_0^2} = 0$ for $\lambda$. Expanding, we have
\[
\lambda^2 + (\gamma_h + \delta_h + \mu_m)\lambda + (\gamma_h + \delta_h)\mu_m - \frac{a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right)}{H_0^2} = 0.
\]
Solving for $\lambda$ by the quadratic formula, we obtain
\[
\lambda = -\frac{\gamma_h + \delta_h + \mu_m}{2} \pm \frac{ \sqrt{ (\gamma_h + \delta_h - \mu_m)^2 H_0^2 + 4 a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right) } }{2 H_0}.
\]
Hence, the eigenvalues are
\[
\lambda_1 = 0 \ (\text{multiplicity } 4), \qquad
\lambda_{2,3} = -\frac{\gamma_h + \delta_h + \mu_m}{2} \pm \frac{ \sqrt{ (\gamma_h + \delta_h - \mu_m)^2 H_0^2 + 4 a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right) } }{2 H_0}.
\]
For $E_{cons}$ to be asymptotically stable, we need $\operatorname{Re}(\lambda_2) < 0$, that is,
\[
\frac{ \sqrt{ (\gamma_h + \delta_h - \mu_m)^2 H_0^2 + 4 a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right) } }{2 H_0} < \frac{\gamma_h + \delta_h + \mu_m}{2}.
\]
Simplifying gives
\[
\sqrt{ (\gamma_h + \delta_h - \mu_m)^2 H_0^2 + 4 a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right) } < (\gamma_h + \delta_h + \mu_m) H_0,
\]
and squaring the inequality,
\[
(\gamma_h + \delta_h - \mu_m)^2 H_0^2 + 4 a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right) < (\gamma_h + \delta_h + \mu_m)^2 H_0^2.
\]
Combining like terms and simplifying, we have
\[
4 a b_m u_5^* \left( a b_h u_1^* + a \bar{b}_h u_3^* \right) < 4 \mu_m (\gamma_h + \delta_h) H_0^2,
\quad \text{i.e.} \quad
\frac{a^2 b_m u_5^* \left( b_h u_1^* + \bar{b}_h u_3^* \right)}{\mu_m (\gamma_h + \delta_h) H_0^2} < 1.
\]
Therefore, we recover the same $R_0$ obtained from the next-generation matrix: if
\[
R_0^2 = \frac{a^2 b_m u_5^* \left( b_h u_1^* + \bar{b}_h u_3^* \right)}{\mu_m (\gamma_h + \delta_h) H_0^2} < 1,
\]
then $E_{cons}$ is locally asymptotically stable.
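The sign condition can also be checked numerically: the sketch below assembles $J(E_{cons})$ for illustrative parameter values and an illustrative equilibrium, and compares the largest real part of the nonzero eigenvalues with the value of $R_0^2$ (all numbers are assumptions chosen for the check).

```python
import numpy as np

# Numerical check of the stability condition for an illustrative equilibrium of (3.19).
a, b_h, bb_h, b_m = 1.0, 4e-6, 4e-6, 4e-6
gamma_h, delta_h, mu_m, H0 = 0.1, 0.07, 0.05, 10_000.0
u1s, u3s, u5s = 6_000.0, 2_000.0, 50_000.0       # hypothetical equilibrium values

J = np.zeros((6, 6))
J[0, 5] = -a*b_h*u1s/H0
J[1, 1] = -(gamma_h + delta_h);  J[1, 5] = (a*b_h*u1s + a*bb_h*u3s)/H0
J[2, 1] = gamma_h;               J[2, 5] = -a*bb_h*u3s/H0
J[3, 1] = delta_h
J[4, 1] = -a*b_m*u5s/H0;         J[4, 5] = mu_m
J[5, 1] = a*b_m*u5s/H0;          J[5, 5] = -mu_m

eig = np.linalg.eigvals(J)
R0_sq = a**2*b_m*u5s*(b_h*u1s + bb_h*u3s)/(mu_m*(gamma_h + delta_h)*H0**2)
# When R0_sq < 1 the nonzero eigenvalues should have negative real parts.
print("R0^2 =", R0_sq,
      " max real part of nonzero eigenvalues:",
      max(e.real for e in eig if abs(e) > 1e-12))
```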
Numerical Illustrations
To illustrate the lemmas above, let us look at a numerical simulation. In this simulation, we let the final time $T$ be 2000 days with initial condition $(10000, 0, 0, 0, 100000, 10)$. Figure 3.5 shows the behaviour of the variables $S_h$, $I_h$, $\bar{S}_h$, $R_h$, $S_m$ and $I_m$ versus time using the parameters in Table 3.2.
Initially, the human $S_h$ and mosquito $S_m$ populations are healthy. However, they soon decrease exponentially as individuals become infected ($I_m$ and $I_h$). Since humans kill some of these mosquitoes, and once the virus enters the mosquito's system in the blood meal it spreads through the mosquito's body for eight to twelve days, $S_h$ decreases faster than $S_m$; hence $I_h$ increases exponentially faster than $I_m$. Since infected humans $I_h$ suffer for about 3-7 days following the infectious mosquito bite, there is an exponential increase of infected humans during that period. Nevertheless, it is followed by an overwhelming recovery of the infected that increases the recovered human population $R_h$ and the susceptibility $\bar{S}_h$ to other DENV strains. Over the 2000 days, it takes roughly 40 days for the recovered human population $R_h$ and the population of humans susceptible to other DENV strains $\bar{S}_h$ to reach their equilibria, while the susceptible human population $S_h$ reaches its equilibrium in about ten days. For infected humans, Figure 3.6a shows that at time $t = 0$ there are no infected humans but 10,000 primary susceptible humans. For some time, infected humans increase while primary susceptible humans decrease. Upon reaching 6,065.190 infected humans, both variables decrease through time.
Figure 3.6b shows that at time $t = 0$ there are no infected and no secondary susceptible humans. The figure shows that the two variables are directly proportional to each other for all time: both increase up to a maximum of 1,668.002 secondary susceptible humans and then decrease towards the equilibrium. Figure 3.6c shows that at the beginning there are 10 infected mosquitoes and no infected humans. As the infected humans increase, infected mosquitoes also increase.
After some time, the two variables become inversely proportional: as infected humans decrease, the infected mosquitoes continue to increase. Then both variables decrease towards zero.
On the other hand, Figure 3.6d shows that at time $t = 0$ there are 100,000 susceptible mosquitoes and 10 infected mosquitoes. The figure shows that the two variables are inversely proportional to each other. For some time, susceptible mosquitoes decrease while infected mosquitoes increase. When the maximum of 71,460.393 infected mosquitoes is reached, both variables then decrease towards zero.
Gompertz Human Population Growth and an Exponential Mosquito Population Growth
Let us consider a human population that follows the Gompertz growth equation and a mosquito population that follows the exponential growth equation. Then we have $H'(t) = f(H(t)) = r \ln\left( \frac{K}{H(t)} \right) H(t)$ and $M'(t) = (\alpha_m - \mu_m) M(t)$, such that $H(t) = K e^{\ln\left( \frac{H_0}{K} \right) e^{-rt}}$ and $g(M(t)) = \alpha_m M(t)$. Then our model becomes
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{K e^{\ln\left( \frac{H_0}{K} \right) e^{-rt}}} + r \ln\left( \frac{K}{H(t)} \right) H(t)\\
u_2'(t) &= \frac{a u_6(t)\left( b_h u_1(t) + \bar{b}_h u_3(t) \right)}{K e^{\ln\left( \frac{H_0}{K} \right) e^{-rt}}} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar{b}_h u_3(t) u_6(t)}{K e^{\ln\left( \frac{H_0}{K} \right) e^{-rt}}}\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{K e^{\ln\left( \frac{H_0}{K} \right) e^{-rt}}} - \mu_m u_5(t) + \alpha_m u_5(t) + \alpha_m u_6(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{K e^{\ln\left( \frac{H_0}{K} \right) e^{-rt}}} - \mu_m u_6(t)
\end{aligned}
\tag{3.21}
\]
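As a quick sanity check of the Gompertz growth used in system (3.21), the snippet below evaluates $H(t) = K e^{\ln(H_0/K) e^{-rt}}$ for illustrative values of $K$, $H_0$ and $r$ (assumptions only) and confirms that it increases monotonically from $H_0$ towards the carrying capacity $K$.

```python
import numpy as np

# Quick check of the Gompertz human-population curve used in system (3.21).
K, H0, r = 20_000.0, 10_000.0, 0.01          # illustrative values
t = np.linspace(0.0, 1000.0, 11)
H = K * np.exp(np.log(H0 / K) * np.exp(-r * t))
print(np.round(H, 1))   # increases monotonically from H0 towards K
```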
Assume that α m ≤ µ m . We have the following theorems.
Well-posedness and Positivity of the Solution
Theorem 3.3.5. Let Ω Gomp be the region defined by
\[
\Omega_{\text{Gomp}} = \left\{ (u_1, u_2, u_3, u_4, u_5, u_6) \in \mathbb{R}^6_+ \;;\; 0 \le u_1 + u_2 + u_3 + u_4 \le H_0,\; 0 \le u_5 + u_6 \le M_0 \right\}
\tag{3.22}
\]
such that $(u_1(0), u_2(0), u_3(0), u_4(0), u_5(0), u_6(0)) \in \Omega_{\text{Gomp}}$. Then there exists a unique global in time solution $(u_1, u_2, u_3, u_4, u_5, u_6)$ in $C(\mathbb{R}_+, \mathbb{R}^6_+)$.
Proof. Consider the initial value problem
U ′ (t) = F(t, U(t)) where U(0) = U 0 .
Then,
\[
\begin{aligned}
f_1(0, u_2, u_3, u_4, u_5, u_6) &= r \ln\!\left(\frac{K}{u_2 + u_3 + u_4}\right)(u_2 + u_3 + u_4), && \forall u_2, \dots, u_6 \in \Omega_{\text{Gomp}}\\
f_2(u_1, 0, u_3, u_4, u_5, u_6) &= \frac{a u_6 \left( b_h u_1 + \bar b_h u_3 \right)}{K e^{\ln\left(\frac{H_0}{K}\right) e^{-rt}}} \ge 0, && \forall u_1, u_3, \dots, u_6 \in \Omega_{\text{Gomp}}\\
f_3(u_1, u_2, 0, u_4, u_5, u_6) &= \gamma_h u_2, && \forall u_1, u_2, u_4, \dots, u_6 \in \Omega_{\text{Gomp}}\\
f_4(u_1, u_2, u_3, 0, u_5, u_6) &= \delta_h u_2, && \forall u_1, u_2, u_3, u_5, u_6 \in \Omega_{\text{Gomp}}\\
f_5(u_1, u_2, u_3, u_4, 0, u_6) &= \alpha_m u_6, && \forall u_1, \dots, u_4, u_6 \in \Omega_{\text{Gomp}}\\
f_6(u_1, u_2, u_3, u_4, u_5, 0) &= \frac{a b_m u_2 u_5}{K e^{\ln\left(\frac{H_0}{K}\right) e^{-rt}}}, && \forall u_1, \dots, u_5 \in \Omega_{\text{Gomp}}.
\end{aligned}
\]
On one hand, note that
\[
H' = r \ln\!\left(\frac{K}{H}\right) H \quad\text{where}\quad H = K e^{\ln\left(\frac{H_0}{K}\right) e^{-rt}}.
\]
Since $e^{-rt} \le 1$, we have $\ln\left(\frac{H_0}{K}\right) e^{-rt} \le \ln\left(\frac{H_0}{K}\right)$, and thus
\[
H \le K e^{\ln\left(\frac{H_0}{K}\right)} = K \cdot \frac{H_0}{K} = H_0.
\]
On the other hand, $M' = (\alpha_m - \mu_m) M$, so $M(t) = M_0 e^{(\alpha_m - \mu_m)t}$. Since $\alpha_m \le \mu_m$, we have $M(t) = e^{(\alpha_m - \mu_m)t} M_0 \le M_0$.
Thus F satisfies the local Lipschitz condition. Therefore, by Cauchy-Lipschitz Theorem, there exist T > 0 and a unique solution to equation (3.21) in C(R + , R 6 + ).
Equilibrium
Now, solving for all possible values of x * that lie on Ω Gomp we get (u * 1 , 0, u * 3 , u * 4 , 0, 0). To see this, consider the lemma below. Lemma 3.3.6. The system of equation (3.21)
admits an equilibrium at E Gomp = (u * 1 , 0, u * 3 , u * 4 , 0, 0) . Proof. Let u ′ 1 , u ′ 2 , u ′ 3 , u ′ 4 , u ′ 5 , u ′ 6 = 0.
Since all parameter are positive, then
δ h u 2 = 0 =⇒ u 2 = 0 Thus, ab m u 2 u 5 Ke ln H 0 K e -rt -µ m u 6 = 0 becomes ab m (0)u 5 Ke ln H 0 K e -rt -µ m u 6 = 0 -µ m u 6 = 0.
Hence, u 6 = 0. Now for -ab m u 2 u 5 Ke ln H 0 K e -rtµ m u 5 + α m u 5 + α m u 6 = 0, since u 2 = 0 and u 6 = 0, we have
- ab m (0)u 5 Ke ln H 0 K e -rt -µ m u 5 + α m u 5 + α m (0) = 0 u 5 (α m -µ m ) = 0.
Therefore, u 5 = 0, since α mµ m < 0. Since the system of equation contains either u 2 , u 5 or u 6 or both, which has a zero value, then any nonnegative values of u 1 , u 3 , u 4 satisfies the system of equation. Therefore, E Gomp = (u * 1 , 0, u * 3 , u * 4 , 0, 0) is an equilibrium point.
Next Generation Matrix and Basic Reproduction Number
Now if we show that the equilibrium point E Gomp = (u * 1 , 0, u * 3 , u * 4 , 0, 0) is asymptotically stable using the eigenvalues from the next generation matrix, then we get the same result as equation (3.20). Then solving for the next generation matrix we get the eigenvalues
\[
\lambda = \pm \sqrt{\frac{a^2 b_m u_5 \left(\bar b_h u_3 + b_h u_1\right)}{K^2 \mu_m (\gamma_h + \delta_h)}}.
\tag{3.23}
\]
Jacobian Matrix
Now, let us show that E Gomp is asymptotically stable using the eigenvalue from the Jacobian matrix. Now if we compute the Jacobian Matrix of the system in order to confirm our lambda, then only the
$\frac{\partial f_5}{\partial u_i}$, $i = 1, 2, \dots, 6$, would change. We can get
\[
\begin{aligned}
\frac{\partial f_5}{\partial u_1} &= 0, &
\frac{\partial f_5}{\partial u_2} &= -\frac{a b_m u_5}{K}, &
\frac{\partial f_5}{\partial u_3} &= 0,\\
\frac{\partial f_5}{\partial u_4} &= 0, &
\frac{\partial f_5}{\partial u_5} &= -\frac{a b_m u_2}{K} - \mu_m + \alpha_m, &
\frac{\partial f_5}{\partial u_6} &= \mu_m + \alpha_m.
\end{aligned}
\]
Therefore, we have the Jacobian matrix
\[
J = \begin{pmatrix}
-\frac{a b_h u_6}{K} & 0 & 0 & 0 & 0 & -\frac{a b_h u_1}{K}\\[2pt]
\frac{a b_h u_6}{K} & -\gamma_h - \delta_h & \frac{a \bar b_h u_6}{K} & 0 & 0 & \frac{a b_h u_1 + a \bar b_h u_3}{K}\\[2pt]
0 & \gamma_h & -\frac{a \bar b_h u_6}{K} & 0 & 0 & -\frac{a \bar b_h u_3}{K}\\[2pt]
0 & \delta_h & 0 & 0 & 0 & 0\\[2pt]
0 & -\frac{a b_m u_5}{K} & 0 & 0 & -\frac{a b_m u_2}{K} - \mu_m + \alpha_m & \mu_m + \alpha_m\\[2pt]
0 & \frac{a b_m u_5}{K} & 0 & 0 & \frac{a b_m u_2}{K} & -\mu_m
\end{pmatrix}.
\]
Thus, we have the following theorem.
Lemma 3.3.7. The equilibrium point E Gomp = (u * 1 , 0, u * 3 , u * 4 , 0, 0) is locally asymptotically stable.
Proof. Let E Gomp = (u * 1 , 0, u * 3 , u * 4 , 0, 0) be an equilibrium point of the system of equation. Then the above Jacobian matrix can be deduce to
\[
J(E_{\text{Gomp}}) = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & -\frac{a b_h u_1^*}{K}\\[2pt]
0 & -\gamma_h - \delta_h & 0 & 0 & 0 & \frac{a b_h u_1^* + a \bar b_h u_3^*}{K}\\[2pt]
0 & \gamma_h & 0 & 0 & 0 & -\frac{a \bar b_h u_3^*}{K}\\[2pt]
0 & \delta_h & 0 & 0 & 0 & 0\\[2pt]
0 & 0 & 0 & 0 & -\mu_m + \alpha_m & \mu_m + \alpha_m\\[2pt]
0 & 0 & 0 & 0 & 0 & -\mu_m
\end{pmatrix}.
\]
Therefore, determining its characteristic polynomial, we have
\[
\left| J(E_{\text{Gomp}}) - \lambda I_6 \right| = 0,
\]
implying further that
\[
-\lambda^3 (-\lambda - \gamma_h - \delta_h)(-\lambda - \mu_m)(-\lambda - \mu_m + \alpha_m) = 0.
\]
Hence, we get
\[
\lambda_1 = 0 \ \text{(multiplicity 3)}, \qquad
\lambda_2 = -\mu_m, \qquad
\lambda_3 = -\gamma_h - \delta_h, \qquad
\lambda_4 = \alpha_m - \mu_m.
\]
Since α mµ m < 0, all the eigenvalues are negative. Therefore, the system of equation is locally asymptotically stable at the equilibrium point (u * 1 , 0, u * 3 , u * 4 , 0, 0).
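As a sanity check on these eigenvalues, the Jacobian evaluated at $E_{\text{Gomp}}$ can be handed to a computer algebra system. The sketch below uses sympy with symbolic parameters; the matrix is copied from $J(E_{\text{Gomp}})$ above, with the entry $\mu_m + \alpha_m$ in position (5,6) kept exactly as written.

```python
import sympy as sp

a, b_h, bb_h, g_h, d_h, mu_m, al_m, K = sp.symbols(
    'a b_h bb_h gamma_h delta_h mu_m alpha_m K', positive=True)
u1, u3 = sp.symbols('u1 u3', nonnegative=True)

# Jacobian evaluated at E_Gomp = (u1*, 0, u3*, u4*, 0, 0)
J = sp.Matrix([
    [0, 0, 0, 0, 0, -a*b_h*u1/K],
    [0, -g_h - d_h, 0, 0, 0, (a*b_h*u1 + a*bb_h*u3)/K],
    [0, g_h, 0, 0, 0, -a*bb_h*u3/K],
    [0, d_h, 0, 0, 0, 0],
    [0, 0, 0, 0, -mu_m + al_m, mu_m + al_m],
    [0, 0, 0, 0, 0, -mu_m]])

# Expected: {0: 3, -gamma_h - delta_h: 1, -mu_m: 1, alpha_m - mu_m: 1}
print(J.eigenvals())
```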
Numerical Illustrations
In this section, we present the numerical illustration using the Gompertz growth function for the human population and the exponential growth function for the mosquito population.

We let the final time $T$ be 2000 days with initial condition $(10000, 0, 0, 0, 100000, 10)$. Using the same parameter values as in Table 3.2, $r = 0.00446$, $\mu_h = 0.0000391$ and $K = 100{,}000{,}000$, a phase portrait of system (3.21) was simulated using the Python program. For infected humans, figure 3.9a shows that at time $t = 0$ there are no infected humans but 10,000 primary susceptible humans. For a very short period of time, infected humans increase rapidly while primary susceptible humans increase slowly. Upon reaching the maximum of 6566.323 infected humans, the infected humans decrease quickly while the primary susceptible humans continue to increase slowly. Once the infected humans reach their equilibrium, the primary susceptible humans continue to increase following the Gompertzian curve. Figure 3.9b shows that at time $t = 0$ there are no infected and no secondary susceptible humans. The two variables are directly proportional to each other for all time: both increase up to the maximum of 53787.330 secondary susceptible humans, then decrease for a short time interval and increase again up to the equilibrium. Figure 3.9c shows that in the beginning there are 10 infected mosquitoes and no infected humans. As the infected humans increase, the infected mosquitoes also increase.

After some time, the two variables become inversely proportional: as infected humans decrease, the infected mosquitoes continue to increase. Then both variables decrease towards zero.

On the other hand, figure 3.9d shows that at time $t = 0$ there are 100,000 susceptible mosquitoes and 10 infected mosquitoes. The two variables are inversely proportional to each other: for some time, susceptible mosquitoes decrease while infected mosquitoes increase. When the maximum of 52199.482 infected mosquitoes is reached, the infected mosquitoes decrease while the susceptible mosquitoes increase for a short time period. After that, both variables decrease towards zero.
Choice of Control Strategies
Preventing or reducing dengue virus transmission depends entirely on controlling the mosquito vectors or on vaccination. This section applies three control strategies to reduce dengue transmission: vaccination, vector control, and the combination of vaccination and vector control.
Vaccination
Dengue fever is the most rapidly spreading mosquito-borne viral disease found in tropical and sub-tropical climates worldwide. It is caused by a single positive-stranded RNA virus of the family Flaviviridae that is transmitted to humans through a diurnal mosquito [START_REF] Rodenhuis-Zybert | Dengue virus life cycle: Viral and host factors modulating infectivity[END_REF]. So far, there is no specific treatment for dengue fever. According to the theory of facilitating antibodies, vaccine research is made more difficult by the need for a vaccine that immunizes sustainably and simultaneously against the four serotypes of the virus [START_REF] Normile | Surprising new dengue virus throws a spanner in disease control efforts[END_REF]. Half a dozen vaccine candidates are under study.
Constant Human and Mosquito Population of Dengvaxia Model
Let us consider the constant human and mosquito population, that is, f (H(t)) = 0 and g(M(t)) = µ m M(t). Then our model becomes
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0} - v_1 u_1(t)\\
u_2'(t) &= \frac{a u_6(t)\left(b_h u_1(t) + \bar b_h u_3(t)\right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar b_h u_3(t) u_6(t)}{H_0} - v_3 u_3(t)\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} + \mu_m u_6(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t)
\end{aligned}
\tag{3.24}
\]
where $v_1 u_1(t)$ is the vaccination given to the primary susceptible human population and $v_3 u_3(t)$ is the vaccination given to the secondary susceptible humans. The total immunity is given by $T_h'(t) = v_1 u_1(t) + v_3 u_3(t)$. Note that there exists a unique global in time solution $(u_1, u_2, u_3, u_4, u_5, u_6)$ in $C(\Omega_{\text{Cons}}, \mathbb{R}_+)^6$.

Lemma 3.4.1. The system of equation (3.24) admits an equilibrium at $E_{\text{VacCons},1} = (0, 0, 0, u_4^*, u_5^*, 0)$.

Proof. Let $u_1', u_2', u_3', u_4', u_5', u_6' = 0$. Since all parameters are positive,
\[
\delta_h u_2 = 0 \implies u_2 = 0.
\]
Thus, $\frac{a b_m u_2 u_5}{H_0} - \mu_m u_6 = 0$ becomes
\[
\frac{a b_m (0) u_5}{H_0} - \mu_m u_6 = 0 \implies -\mu_m u_6 = 0.
\]
Hence, u 6 = 0. Therefore, substituting u 6 = 0 and u 2 = 0 to both u ′ 1 = 0 and u ′ 3 = 0, we have
\[
\begin{aligned}
-\frac{a b_h u_6 u_1}{H_0} - v_1 u_1 &= 0, &\qquad \gamma_h u_2 - \frac{a \bar b_h u_3 u_6}{H_0} - v_3 u_3 &= 0,\\
-\frac{a b_h (0) u_1}{H_0} - v_1 u_1 &= 0, &\qquad \gamma_h (0) - \frac{a \bar b_h u_3 (0)}{H_0} - v_3 u_3 &= 0,\\
-v_1 u_1 &= 0, &\qquad -v_3 u_3 &= 0.
\end{aligned}
\]
Since $v_1, v_3 > 0$, we get $u_1 = 0$ and $u_3 = 0$. Consequently, since each equation of system (3.24) contains either $u_1$, $u_2$, $u_3$ or $u_6$ (or both), each of which is zero, any nonnegative values of $u_4^*, u_5^*$ satisfy the system of equations. Therefore, $E_{\text{VacCons},1} = (0, 0, 0, u_4^*, u_5^*, 0)$ is an equilibrium point. Now, in solving for the next generation matrix, there are no changes in $F$ and $V$ compared with Section 6.1. Therefore, the eigenvalues of system (3.24) are
\[
\lambda = \pm \sqrt{\frac{a^2 b_m u_5 \left(\bar b_h u_3 + b_h u_1\right)}{H_0^2\, \mu_m (\gamma_h + \delta_h)}}.
\]
Therefore, we have the following theorem.
Lemma 3.4.2. The equilibrium point E VacCons,1 = (0, 0, 0, u * 4 , u * 5 , 0) of the system of equation (3.24) is locally asymptotically stable.
Proof. From the above eigenvalues,
\[
\rho(FV^{-1}) = \pm\sqrt{\frac{a^2 b_m u_5^* \left(\bar b_h (0) + b_h (0)\right)}{H_0^2\, \mu_m (\gamma_h + \delta_h)}} = 0 < 1.
\]
Therefore, the system of equation is locally asymptotically stable at E VacCons,1 .
Vector Control
Vector control is a method to limit or eradicate the vectors which transmit disease pathogens. The most frequent type of vector control uses a variety of strategies such as habitat and environmental control, reducing vector contact, chemical control, and biological control.
Let us consider the following model in two different populations to include vectorial control.
Vector Control with Constant Human and Mosquito Population
Let us consider the constant human and mosquito population where f (H(t)) = 0 and g(M(t)) = µ m M(t). Then our model would become
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0}\\
u_2'(t) &= \frac{a u_6(t)\left(b_h u_1(t) + \bar b_h u_3(t)\right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar b_h u_3(t) u_6(t)}{H_0}\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} + \mu_m u_6(t) - v_5 u_5(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t) - v_6 u_6(t)
\end{aligned}
\tag{3.25}
\]
where $v_5 u_5(t)$ and $v_6 u_6(t)$ represent the vector control applied to the environment, resulting in the removal of susceptible and infectious mosquitoes, respectively. The total controlled mosquito population is given by $T_M'(t) = v_5 u_5(t) + v_6 u_6(t)$. Again, there exists a unique global in time solution $(u_1, u_2, u_3, u_4, u_5, u_6)$ in $C(\Omega_{\text{Cons}}, \mathbb{R}_+)^6$.

Lemma 3.4.3. The system of equation (3.25) admits an equilibrium at $E_{\text{VecCons},1} = (u_1^*, 0, u_3^*, u_4^*, 0, 0)$.

Proof. Let $u_1', u_2', u_3', u_4', u_5', u_6' = 0$.
Since all parameter are positive, then
δ h u 2 = 0 =⇒ u 2 = 0 Thus, ab m u 2 u 5 H 0 -µ m u 6 -v 6 u 6 = 0 becomes -µ m u 6 -v 6 u 6 = 0 (-µ m -v 6 )u 6 = 0
Since -µ mv 6 ̸ = 0, u 6 = 0. Therefore, substituting u 6 = 0 and u 2 = 0 to both u ′ 5 = 0, we have
- ab m u 2 u 5 H 0 + µ m u 6 -v 5 u 5 = 0 - ab m (0)u 5 H 0 + µ m (0) -v 5 u 5 = 0 -v 5 u 5 = 0 Since v 5 > 0, u 5 = 0.
\[
F = \begin{pmatrix} \dfrac{a u_6 (b_h u_1 + \bar b_h u_3)}{H_0}\\[6pt] \dfrac{a b_m u_2 u_5}{H_0} \end{pmatrix},
\qquad
V = \begin{pmatrix} (\gamma_h + \delta_h) u_2\\[4pt] (\mu_m + v_6) u_6 \end{pmatrix},
\]
where $F$ is the appearance of new infections in each compartment and $V$ the transitions between all other compartments. Thus, with
\[
F = \begin{pmatrix} \dfrac{\partial F_1}{\partial u_2} & \dfrac{\partial F_1}{\partial u_6}\\[6pt] \dfrac{\partial F_2}{\partial u_2} & \dfrac{\partial F_2}{\partial u_6} \end{pmatrix}
\quad\text{and}\quad
V = \begin{pmatrix} \dfrac{\partial V_1}{\partial u_2} & \dfrac{\partial V_1}{\partial u_6}\\[6pt] \dfrac{\partial V_2}{\partial u_2} & \dfrac{\partial V_2}{\partial u_6} \end{pmatrix},
\]
we have
\[
F = \begin{pmatrix} 0 & \dfrac{a (b_h u_1 + \bar b_h u_3)}{H_0}\\[6pt] \dfrac{a b_m u_5}{H_0} & 0 \end{pmatrix},
\qquad
V = \begin{pmatrix} \gamma_h + \delta_h & 0\\ 0 & \mu_m + v_6 \end{pmatrix},
\qquad
V^{-1} = \begin{pmatrix} \dfrac{1}{\gamma_h + \delta_h} & 0\\[6pt] 0 & \dfrac{1}{\mu_m + v_6} \end{pmatrix}.
\]
Consequently,
\[
FV^{-1} = \begin{pmatrix} 0 & \dfrac{a (b_h u_1 + \bar b_h u_3)}{H_0 (\mu_m + v_6)}\\[6pt] \dfrac{a b_m u_5}{H_0 (\gamma_h + \delta_h)} & 0 \end{pmatrix}.
\]
Since the characteristic polynomial is $\det\left(FV^{-1} - \lambda I\right)$, we have
\[
\det\left(FV^{-1} - \lambda I\right) = \lambda^2 - \frac{a^2 b_m u_5 (b_h u_1 + \bar b_h u_3)}{H_0^2 (\gamma_h + \delta_h)(\mu_m + v_6)}.
\]
Therefore, the eigenvalues of system (3.25) are
\[
\lambda = \pm \sqrt{\frac{a^2 b_m u_5 \left(\bar b_h u_3 + b_h u_1\right)}{H_0^2 (\mu_m + v_6)(\gamma_h + \delta_h)}}
\tag{3.26}
\]
and we have the following theorem.
Lemma 3.4.4. The equilibrium point E VecCons,1 = (u * 1 , 0, u * 3 , u * 4 , 0, 0) of the system of equation (3.25) is locally asymptotically stable.
Proof. From the above eigenvalues,
\[
\rho(FV^{-1}) = \pm\sqrt{\frac{a^2 b_m (0) \left(\bar b_h u_3^* + b_h u_1^*\right)}{H_0^2 (\mu_m + v_6)(\gamma_h + \delta_h)}} = 0 < 1.
\]
Therefore, the system of equation (3.25) is locally asymptotically stable at E VecCons,1 .
Theorem 3.4.5. The equilibrium point E VecCons,1 = (u * 1 , 0, u * 3 , u * 4 , 0, 0) of the system of equation (3.25) is globally asymptotically stable.
Combination of Vaccination and Vector Control
Let us combine the dengue vaccination and the vectorial control in our model using two growth function.
Constant Human and Mosquito Population with Vaccination and Vector Control
Consider a constant growth for human and mosquito population. We have f (H(t)) = 0 and g(M(t)) = µ m M(t). Then our model becomes
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0} - v_1 u_1(t)\\
u_2'(t) &= \frac{a u_6(t)\left(b_h u_1(t) + \bar b_h u_3(t)\right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar b_h u_3(t) u_6(t)}{H_0} - v_3 u_3(t)\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} + \mu_m u_6(t) - v_5 u_5(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t) - v_6 u_6(t)
\end{aligned}
\tag{3.27}
\]
where the total human immunity is given by $T_H'(t) = v_1 u_1(t) + v_3 u_3(t)$ and the total vector control is given by $T_M'(t) = v_5 u_5(t) + v_6 u_6(t)$. There exists a unique global in time solution $(u_1, u_2, u_3, u_4, u_5, u_6)$ in $C(\Omega_{\text{Cons}}, \mathbb{R}_+)^6$.

Lemma 3.4.6. The system of equation (3.27) admits an equilibrium at $E_{\text{CombCons},1} = (0, 0, 0, u_4^*, 0, 0)$.
Proof. Let u ′ 1 , u ′ 2 , u ′ 3 , u ′ 4 , u ′ 5 , u ′ 6 = 0.
Since all parameter are positive, then
δ h u 2 = 0 =⇒ u 2 = 0. Thus, ab m u 2 u 5 H -µ m u 6 -v 6 u 6 = 0 becomes ab m (0)u 5 H -µ m u 6 -v 6 u 6 = 0 (-µ m -v 6 )u 6 = 0
Hence, u 6 = 0. Therefore, substituting u 6 = 0 to both u ′ 1 = 0 and u ′ 3 = 0, we have
- ab h u 6 u 1 H 0 -v 1 u 1 = 0 γ h u 2 - a b h u 3 u 6 H 0 -v 3 u 3 = 0 - ab h (0)u 1 H 0 -v 1 u 1 = 0 γ h (0) - a b h u 3 (0) H 0 -v 3 u 3 = 0 -v 1 u 1 = 0 -v 3 u 3 = 0 Since v 1 , v 3 > 0, u 1 = 0 and u 3 = 0. Now, substituting u 6 = 0 to -ab m u 2 u 5 H 0 + µ m u 6 -v 5 u 5 = 0, we get - ab m (0)u 5 H 0 + µ m (0) -v 5 u 5 = 0 -v 5 u 5 = 0 u 5 = 0
Consequently, since the system (3.27) contains either u 1 , u 2 , u 3 , u 5 or u 6 or both, which has a zero value, then any nonnegative values of u * 4 satisfies the system of equation. Therefore, E CombCons,1 = (0, 0, 0, u * 4 , 0, 0) is an equilibrium point. Now, in solving for the next generation matrix, there is no changes in F and V as in Section 3.3.1. Therefore, the eigenvalues of system are the same as equation (3.26) which are
\[
\lambda = \pm \sqrt{\frac{a^2 b_m u_5 \left(\bar b_h u_3 + b_h u_1\right)}{H_0^2 (\mu_m + v_6)(\gamma_h + \delta_h)}}.
\]
Therefore, we have the following theorem.
Lemma 3.4.7. The equilibrium point E CombCons,1 = (0, 0, 0, u * 4 , 0, 0) of the system of equation (3.27) is locally asymptotically stable.
Proof. Since u * 1 , u * 3 , u * 5 = 0, the above eigenvalues would become
\[
\lambda = \pm\sqrt{\frac{a^2 b_m (0) \left(\bar b_h (0) + b_h (0)\right)}{H_0^2 (\mu_m + v_6)(\gamma_h + \delta_h)}} = 0 < 1.
\]
Therefore, the system of equation is locally asymptotically stable at E CombCons,1 .
Summary of the Effective Reproduction Number of Different Control Strategies
Instead of $R_0$, it is more interesting to consider the effective reproduction number $R_{\text{eff}}$. We have
\[
\begin{aligned}
R_{\text{eff}} &= \pm\sqrt{\frac{a^2 b_m u_5 \left(b_h u_1 + \bar b_h u_3\right)}{\mu_m H_0^2 (\gamma_h + \delta_h)}} && \text{without control,}\\[4pt]
R_{\text{eff}} &= \pm\sqrt{\frac{a^2 b_m u_5 \left(\bar b_h u_3 + b_h u_1\right)}{\mu_m H_0^2 (\gamma_h + \delta_h)}} && \text{for vaccination only,}\\[4pt]
R_{\text{eff}} &= \pm\sqrt{\frac{a^2 b_m u_5 \left(\bar b_h u_3 + b_h u_1\right)}{(\mu_m + v_6) H_0^2 (\gamma_h + \delta_h)}} && \text{for vector control only,}\\[4pt]
R_{\text{eff}} &= \pm\sqrt{\frac{a^2 b_m u_5 \left(\bar b_h u_3 + b_h u_1\right)}{(\mu_m + v_6) H_0^2 (\gamma_h + \delta_h)}} && \text{for both vaccination and vector control.}
\end{aligned}
\]
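Since the four expressions differ only in the effective mosquito removal rate appearing in the denominator, a single helper makes the comparison concrete. This is a minimal sketch; the numerical values in the example call are illustrative placeholders rather than the calibrated values of Table 3.2.

```python
import math

def r_eff(u1, u3, u5, H0, a, b_h, bb_h, b_m, gamma_h, delta_h, mu_m, v_mosq=0.0):
    """Effective reproduction number; v_mosq = 0 recovers the uncontrolled
    and vaccination-only cases, v_mosq > 0 the vector-control cases."""
    num = a**2 * b_m * u5 * (b_h * u1 + bb_h * u3)
    den = H0**2 * (mu_m + v_mosq) * (gamma_h + delta_h)
    return math.sqrt(num / den)

# Illustrative comparison: vector control lowers R_eff through (mu_m + v).
base = dict(u1=1e4, u3=0.0, u5=1e5, H0=1e4, a=0.5, b_h=0.4, bb_h=0.4,
            b_m=0.4, gamma_h=0.0714, delta_h=0.1428, mu_m=0.0714)
print(r_eff(**base), r_eff(**base, v_mosq=0.05))
```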
Optimal Control strategy
In the next section, we determine the optimal control in minimizing infected humans by applying vaccination and vectorial control. Using the constant growth function for human and mosquito population, a numerical simulation is presented using vaccination only and vector control only. For the combination of vaccination and vector control, we use the entomological growth function for mosquito population.
Minimizing Infected Humans by Optimal Vaccination
Let us consider the constant growth population model for the human population, $H'(t) = f(H(t)) = 0$, and a general mosquito growth population $M'(t) = -\mu_m M(t) + g(M(t))$. We now write a control problem that aims to minimize the number of infected humans by optimal vaccination. We attribute two control inputs, $w_1$ and $w_3$, to the human population. Here, the action of $w_1(t)$ is the percentage of primary susceptible and $w_3(t)$ the percentage of secondary susceptible individuals being vaccinated per unit of time. Furthermore, we assume that both control inputs are measurable functions that take their values in a positively bounded set $W = [0, w_H]^2$. Thus we consider the objective function
\[
\mathcal{J}(w_1, w_3) = \int_0^T \left( u_2(t) + \frac{1}{2} A_1 w_1^2(t) + \frac{1}{2} A_3 w_3^2(t) \right) dt
\]
subject to
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0} - w_1(t) u_1(t)\\
u_2'(t) &= \frac{a u_6(t)\left(b_h u_1(t) + \bar b_h u_3(t)\right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar b_h u_3(t) u_6(t)}{H_0} - w_3(t) u_3(t)\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_5(t) + g(M(t))\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t)
\end{aligned}
\tag{3.28}
\]
for $t \in [0, T]$, with $0 \le w_1, w_3 \le w_H$. The variables $A_j$ are the positive weights associated with the control variables $w_1$ and $w_3$, respectively. They correspond to the effort of vaccinating the primary susceptible human compartment $u_1$ and the secondary susceptible human compartment $u_3$.
Lemma 3.5.1.
There exists an optimal control $w^* = (w_1^*, w_3^*)$ such that
\[
\mathcal{J}(w_1^*, w_3^*) = \min_{(w_1, w_3) \in W} \mathcal{J}(w_1, w_3)
\]
under the constraint that $(u_1, u_2, u_3, u_4, u_5, u_6)$ is a solution to the system (3.28).

Proof. Let $(w_1^n, w_3^n)_n$ be a minimizing sequence and $(u_1^n, u_2^n, \dots, u_6^n)$ the associated states. Since the states are bounded, the right-hand side of (3.28) shows that $((u_1^n)', (u_2^n)', \dots, (u_6^n)')$ is also bounded, or in other words $(u_1^n, u_2^n, \dots, u_6^n)$ is in $W^{1,\infty}([0, T])$. Thus, from the Arzelà-Ascoli theorem, we can extract a subsequence of $((u_1^n, u_2^n, \dots, u_6^n))_n$ that converges strongly to $(u_1^*, u_2^*, \dots, u_6^*)$ in $C^0([0, T])$. Rewriting (3.28) in integral form, we get
\[
\int_0^T (u_1^n(t) - u_0^n)\,\varphi_1(t)\,dt - \int_0^T (u_1(t) - u_0)\,\varphi_1(t)\,dt
= -\int_0^T \left[ \frac{a b_h}{H_0}\big(u_6^n(s) u_1^n(s) - u_6(s) u_1(s)\big) + \big(w_1^n(s) u_1^n(s) - w_1(s) u_1(s)\big) \right] dt,
\]
or again
\[
\int_0^T (u_1^n(t) - u_1)\,dt - \int_0^T (u_0^n - u_0)\,dt
= -\int_0^T \left[ \frac{a b_h}{H_0}\Big( (u_1^n(t) - u_1(t)) u_6^n(t) + (u_6^n(t) - u_6(t)) u_1(t) \Big) + (w_1^n(t) - w_1(t)) u_1^n(t) + (u_1^n(t) - u_1(t)) w_1(t) \right] dt,
\]
which converges to $0$ as $n \to +\infty$. Dealing similarly with the remaining equations, the limit $(u_1^*, u_2^*, \dots, u_6^*)$ is then the solution of the system for the limit control $(w_1^*, w_3^*)$. Finally,
\[
\liminf_{n \to \infty} \mathcal{J}(w_1^n, w_3^n)
= \liminf_{n \to \infty} \int_0^T \left( u_2^n(t) + \frac{1}{2} A_1 (w_1^n)^2(t) + \frac{1}{2} A_3 (w_3^n)^2(t) \right) dt
\ge \int_0^T \left( u_2^*(t) + \frac{1}{2} A_1 (w_1^*)^2(t) + \frac{1}{2} A_3 (w_3^*)^2(t) \right) dt
= \mathcal{J}(w_1^*, w_3^*)
\]
by lower semi-continuity of J .
Pontryagin's maximum principle is used to find the best possible control for taking a dynamical system from one state to another. It states that it is necessary for any optimal control along with the optimal state trajectory to solve the so-called Hamiltonian system [START_REF] Lenhart | Optimal Control Applied to Biological Models[END_REF] plus the maximum condition of the Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions. Now let us apply the Pontryagin's Maximum Principle in our system. We state the lemma below.
Lemma 3.5.2.
There exists the adjoint variables λ i , i = 1, 2, • • • , 6 of the system (3.28) that satisfy the following backward in time system of ordinary differential equation.
-
dλ 1 dt = λ 1 -ab h u 6 H 0 -w 1 + λ 2 ab h u 6 H 0 - dλ 2 dt = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 - dλ 3 dt = λ 2 a b h u 6 H 0 + λ 3 -a b h u 6 H 0 -w 3 - dλ 4 dt = 0 - dλ 5 dt = λ 5 - ab m u 2 H 0 -µ m + ∂g ∂u 5 + λ 6 ab m u 2 H 0 - dλ 6 dt = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 µ m (3.29)
with the transversality condition λ(T) = 0.
Proof. Using the Hamiltonian for system (3.28), we have
H =L(w 1 , w 3 ) + λ 1 (t)u ′ 1 (t) + λ 2 (t)u ′ 2 (t) + λ 3 (t)u ′ 3 (t) + λ 4 (t)u ′ 4 (t) + λ 5 (t)u ′ 5 (t) + λ 6 (t)u ′ 6 (t) =u 2 + 1 2 A 1 w 2 1 + 1 2 A 3 w 2 3 + λ 1 - ab h u 6 u 1 H 0 -w 1 u 1 + λ 2 au 6 b h u 1 + b h u 3 H 0 -γ h u 2 -δ h u 2 + λ 3 γ h u 2 - a b h u 3 u 6 H 0 -w 3 u 3 + λ 4 (δ h u 2 ) + λ 5 - ab m u 2 u 5 H 0 -µ m u 5 + g(u 5 , u 6 ) + λ 6 ab m u 2 u 5 H 0 -µ m u 6 (3.30)
Therefore, finding the partial derivatives of H with respect to u i 's, i = 1, 2, • • • , 6, we have
∂H ∂u 1 = λ 1 -ab h u 6 H 0 -w 1 + λ 2 ab h u 6 H 0 ∂H ∂u 2 = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 ∂H ∂u 3 = λ 2 a b h u 6 H 0 + λ 3 -a b h u 6 H 0 -w 3 ∂H ∂u 4 = 0 ∂H ∂u 5 = λ 5 - ab m u 2 H 0 -µ m + ∂g ∂u 5 + λ 6 ab m u 2 H 0 ∂H ∂u 6 = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 µ m .
Then the adjoint system is defined by
dλ 1 dt = -∂H ∂u 1 , dλ 2 dt = -∂H ∂u 2 , dλ 3 dt = -∂H ∂u 3 , dλ 4 dt = -∂H
∂u 4 , dλ 5 dt = -∂H ∂u 5 and dλ 6 dt = -∂H ∂u 6 . We have the following
dλ 1 dt = λ 1 ab h u 6 H 0 + w 1 -λ 2 ab h u 6 H 0 dλ 2 dt = -1 + λ 2 (γ h + δ h ) -λ 3 γ h -λ 4 δ h + λ 5 ab m u 5 H 0 -λ 6 ab m u 5 H 0 dλ 3 dt = -λ 2 a b h u 6 H 0 + λ 3 a b h u 6 H 0 + w 3 dλ 4 dt = 0 dλ 5 dt = λ 5 ab m u 2 H 0 + µ m - ∂g ∂u 5 -λ 6 ab m u 2 H 0 dλ 6 dt = λ 1 ab h u 1 H 0 -λ 2 ab h u 1 + a b h u 3 H 0 + λ 3 a b h u 3 H 0 -λ 5 ∂g ∂u 6 + λ 6 µ m .
Theorem 3.5.3. The optimal control variables, for $j = 1, 3$, are given by
\[
w_j^* = \max\left\{ 0, \min\left\{ \frac{\lambda_j u_j}{A_j}, w_H \right\} \right\}.
\]
Proof. By the Pontryagin maximum principle, the optimal control w * minimizes the Hamiltonian given by (3.30). We have ∂H ∂w j = 0, for all j = 1, 3 at w j = w * j .
Thus, we get
∂H ∂w 1 = A 1 w 1 -λ 1 u 1 , ∂H ∂w 3 = A 3 w 3 -λ 3 u 3 .
Implying further that
w * 1 = λ 1 u 1 A 1 w * 3 = λ 3 u 3 A 3 .
Therefore, the optimal control derived from the stationarity condition is given by
\[
w_1^* = \begin{cases} 0 & \text{if } \frac{\lambda_1 u_1}{A_1} \le 0\\[4pt] \frac{\lambda_1 u_1}{A_1} & \text{if } \frac{\lambda_1 u_1}{A_1} < w_H\\[4pt] w_H & \text{if } \frac{\lambda_1 u_1}{A_1} \ge w_H \end{cases}
\qquad
w_3^* = \begin{cases} 0 & \text{if } \frac{\lambda_3 u_3}{A_3} \le 0\\[4pt] \frac{\lambda_3 u_3}{A_3} & \text{if } \frac{\lambda_3 u_3}{A_3} < w_H\\[4pt] w_H & \text{if } \frac{\lambda_3 u_3}{A_3} \ge w_H. \end{cases}
\]
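Numerically, this case distinction is simply a projection of $\lambda_j u_j / A_j$ onto the admissible interval $[0, w_H]$. A minimal sketch of the corresponding update step is given below; the array names are illustrative.

```python
import numpy as np

def update_controls(lam1, u1, lam3, u3, A1, A3, w_H):
    """Project lambda_j * u_j / A_j onto the admissible set [0, w_H]."""
    w1 = np.clip(lam1 * u1 / A1, 0.0, w_H)
    w3 = np.clip(lam3 * u3 / A3, 0.0, w_H)
    return w1, w3
```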
Numerical Simulation of Optimal Vaccination using Constant Mosquito Population
This section gives the numerical analysis of the vaccination method (through Dengvaxia) for minimizing the infected human population during a dengue outbreak. We consider a constant growth population for the mosquito population and fix the final time $T$ to 100 days, or roughly three months, which is around the average duration of an infection season. The parameter values used are the same as in Table 3.2, and the control weights are set initially at $A_1 = 0.1$ and $A_3 = 1$. Note that the effort of operating the vaccination control $w_3$ is set higher than the effort of operating the vaccination control $w_1$: since primary susceptible humans are readily available in the population compared to secondary susceptible individuals, vaccinating them requires less effort.
The optimality system is numerically solved using the following gradient algorithm written in Python.
Algorithm:
Initially we set u 0 = [1.e4, 0., 0., 0., 1.e5, 1e3] with
H 0 = u 0 [0] + u 0 [1] + u 0 [2] + u 0 [3] and M 0 = u 0 [4] + u 0 [5]
. We choose a random positive value for w 1 and w 3 between ]0, 1[ with W = (w 1 , w 3 ).
While |H(w j , u i , λ i )| > ϵ.
1. Solve the direct objective problem for t from 0 to T u ′ 1 (t) = -
ab h u 6 (t)u 1 (t) H 0 -w 1 (t)u 1 (t) u ′ 2 (t) = au 6 (t) b h u 1 (t) + b h u 3 (t) H 0 -γ h u 2 (t) -δ h u 2 (t) u ′ 3 (t) = γ h u 2 (t) - a b h u 3 (t)u 6 (t) H 0 -w 3 (t)u 3 (t) u ′ 4 (t) = δ h u 2 (t) u ′ 5 (t) = - ab m u 2 (t)u 5 (t) H 0 + µ m u 6 (t) u ′ 6 (t) = ab m u 2 (t)u 5 (t) H 0 -µ m u 6 (t)
2. Solve the adjoint system for t from T to 0
- dλ 1 dt = ab h u 6 H 0 (-λ 1 + λ 2 ) -λ 1 w 1 - dλ 2 dt = 1 -γ h (λ 2 -λ 3 ) -δ h (λ 2 -λ 4 ) - ab m u 5 H 0 (λ 5 -λ 6 ) - dλ 3 dt = a b h u 6 H 0 (λ 2 -λ 3 ) -λ 3 w 3 - dλ 4 dt = 0 - dλ 5 dt = ab m u 2 H 0 (-λ 5 + λ 6 ) - dλ 6 dt = - ab h u 1 H 0 (λ 1 -λ 2 ) + a b h u 3 H 0 (λ 2 -λ 3 ) + µ m (λ 5 -λ 6 )
3. Using the value of u i 's in step 1 and λ ′ i s in step 2, we solve the Wnew = (w 1 , w 3 ) such that, for j = 1, 3
w * j = 0 if λ j u j A j ≤ 0 λ j u j A j if λ j u j A j < w H . w H if λ j u j A j ≥ w H 4. Compute H(w j , u i , λ i ).
A finite difference scheme is used to numerically solve the direct and the adjoint systems of ordinary differential equations. More precisely, an explicit Adams-Bashforth predictor with an implicit Adams-Moulton corrector of order 2 is written in Python.
We set $w_H = 1$ with error $\mathrm{Err} = 1$ and tolerance $\mathrm{tol} = 0.01$. Note that in the human compartment, the total immunity to dengue obtained by implementing the Dengvaxia vaccine is denoted by $T_h$, given by $T_h(t) = w_1(t) u_1(t) + w_3(t) u_3(t)$. Using the algorithm above, we get the following figure. The healthy human population comprises the primary susceptible humans $u_1$, the secondary susceptible humans $u_3$, the recovered humans $u_4$ and the immune humans $T_h$. Figure 3.12 shows that for two and a half days, the healthy human population decreases exponentially to its lowest point at 8110 individuals, then increases exponentially to its equilibrium at day 83.
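For readers who want to reproduce the iteration, the sketch below shows the forward-backward sweep in its simplest form, using a plain explicit Euler step instead of the Adams-Bashforth/Adams-Moulton pair described above. The functions rhs_state and rhs_adjoint are assumed to implement the direct system and the adjoint system of the algorithm (they are not defined here), so this is a structural sketch rather than the exact solver used for the figures.

```python
import numpy as np

def forward_backward_sweep(rhs_state, rhs_adjoint, u0, A, w_H, T=100.0, n=10_000,
                           tol=1e-2, max_iter=200):
    """Gradient-type iteration: integrate the state forward, the adjoint backward,
    then update the controls by projection, until the update is small.
    rhs_adjoint(u, lam, w) is assumed to return -dlambda/dt, i.e. the right-hand
    sides of the adjoint equations as written in the text."""
    dt = T / n
    w = 0.5 * np.ones((2, n + 1))                      # initial guess for (w1, w3)
    for _ in range(max_iter):
        # forward sweep for the state u (explicit Euler)
        u = np.zeros((6, n + 1)); u[:, 0] = u0
        for k in range(n):
            u[:, k + 1] = u[:, k] + dt * np.asarray(rhs_state(u[:, k], w[:, k]))
        # backward sweep for the adjoint lambda, with lambda(T) = 0
        lam = np.zeros((6, n + 1))
        for k in range(n, 0, -1):
            lam[:, k - 1] = lam[:, k] + dt * np.asarray(rhs_adjoint(u[:, k], lam[:, k], w[:, k]))
        # projected control update: w1 from (lambda1, u1), w3 from (lambda3, u3)
        w_new = np.vstack([np.clip(lam[0] * u[0] / A[0], 0.0, w_H),
                           np.clip(lam[2] * u[2] / A[1], 0.0, w_H)])
        if np.max(np.abs(w_new - w)) < tol:
            return u, lam, w_new
        w = 0.5 * (w + w_new)                          # relaxation for stability
    return u, lam, w
```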
Minimizing Infected Humans by Optimal Vector Control
In this section we consider minimizing infected humans by applying vector control through pesticide administration. Let us consider the constant growth population model for the human population, $H'(t) = f(H(t)) = 0$, and a general mosquito growth population $M'(t) = -\mu_m M(t) + g(M(t))$. We now write a control problem that aims to minimize the number of infected humans. We attribute a control input $w_m$ to the mosquito population. Here, the action of $w_m(t)$ is the percentage of insecticide administered to the environment per unit of time, resulting in the removal of susceptible and infectious mosquitoes from the system. Furthermore, we assume that the control input is a measurable function that takes its values in a positively bounded set $W = [0, w_M]$. Thus we consider the objective function
\[
\mathcal{J}(w_m) = \int_0^T \left( u_2(t) + \frac{1}{2} A_m w_m^2(t) \right) dt
\]
subject to
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0}\\
u_2'(t) &= \frac{a u_6(t)\left(b_h u_1(t) + \bar b_h u_3(t)\right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar b_h u_3(t) u_6(t)}{H_0}\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} + g(M(t)) - \big(\mu_m + w_m(t)\big) u_5(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t) - w_m(t) u_6(t)
\end{aligned}
\tag{3.31}
\]
for $t \in [0, T]$, with $0 \le w_m \le w_M$. The variable $A_m$ is the positive weight associated with the control variable.
Lemma 3.5.4.
There exists an optimal control w * m such that
J (w * m ) = min w m ∈W J (w m )
under the constraint (u 1 , u 2 , u 3 , u 4 , u 5 , u 6 ) is a solution to the system (3.31). Now let us apply the Pontryagin's Maximum Principle in our system. We state the lemma below.
Lemma 3.5.5. There exists the adjoint variables λ i , i = 1, 2, • • • , 6 of the system (3.31) that satisfy the following backward in time system of ordinary differential equation.
-
dλ 1 dt = -λ 1 ab h u 6 H 0 + λ 2 ab h u 6 H 0 - dλ 2 dt = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 - dλ 3 dt = λ 2 a b h u 6 H 0 -λ 3 a b h u 6 H 0 - dλ 4 dt = 0 - dλ 5 dt = λ 5 -ab m u 2 H 0 + ∂g ∂u 5 -λ 5 (µ m + w m ) + λ 6 ab m u 2 H 0 - dλ 6 dt = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 (µ m + w m )
with the transversality condition λ(T) = 0.
Proof. Using the Hamiltonian for equation (3.31), we have
H =L(w m ) + λ 1 (t)u ′ 1 (t) + λ 2 (t)u ′ 2 (t) + λ 3 (t)u ′ 3 (t) + λ 4 (t)u ′ 4 (t) + λ 5 (t)u ′ 5 (t) + λ 6 (t)u ′ 6 (t) =u 2 + 1 2 A m w 2 m + λ 1 - ab h u 6 u 1 H 0 + λ 3 γ h u 2 - a b h u 3 u 6 H 0 + λ 2 au 6 b h u 1 + b h u 3 H 0 -γ h u 2 -δ h u 2 + λ 4 (δ h u 2 ) + λ 5 - ab m u 2 u 5 H 0 + g(M) -(µ m + w m )u 5 + λ 6 ab m u 2 u 5 H 0 -µ m u 6 -w m u 6 (3.32)
Therefore, finding the partial derivatives of H with respect to u i 's, i = 1, 2, • • • , 6, we have
∂H ∂u 1 = -λ 1 ab h u 6 H 0 + λ 2 ab h u 6 H 0 ∂H ∂u 2 = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 ∂H ∂u 3 = λ 2 a b h u 6 H 0 -λ 3 a b h u 6 H 0 ∂H ∂u 4 = 0 ∂H ∂u 5 = λ 5 -ab m u 2 H 0 + ∂g ∂u 5 -λ 5 (µ m + w m ) + λ 6 ab m u 2 H 0 ∂H ∂u 6 = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 (µ m + w m )
Then the adjoint system is defined by dλ i dt = -∂H ∂u i for i = 1, 2, • • • , 6. We have the following
dλ 1 dt = λ 1 ab h u 6 H 0 -λ 2 ab h u 6 H 0 dλ 2 dt = -1 + λ 2 (γ h + δ h ) -λ 3 γ h -λ 4 δ h + λ 5 ab m u 5 H 0 -λ 6 ab m u 5 H 0 dλ 3 dt = -λ 2 a b h u 6 H 0 + λ 3 a b h u 6 H 0 dλ 4 dt = 0 dλ 5 dt = λ 5 ab m u 2 H 0 - ∂g ∂u 5 + λ 5 (µ m + w m ) -λ 6 ab m u 2 H 0 dλ 6 dt = λ 1 ab h u 1 H 0 -λ 2 ab h u 1 + a b h u 3 H 0 + λ 3 a b h u 3 H 0 -λ 5 ∂g ∂u 6 + λ 6 (µ m + w m )
Theorem 3.5.6. The optimal control variable is given by
\[
w_m^*(t) = \max\left\{ 0, \min\left\{ \frac{\lambda_5 u_5 + \lambda_6 u_6}{A_m}, w_M \right\} \right\}.
\]
Proof. By the Pontryagin maximum principle, the optimal control w * m should be the one that minimizes, at each instant t, the Hamiltonian given by (3.32). Therefore, we get ∂H ∂w m = A m w mλ 5 u 5λ 6 u 6 .
In effect, we get
w * m = λ 5 u 5 + λ 6 u 6 A m
Therefore, the optimal control derived from the stationary condition dλ i dt is given by
\[
w_m^* = \begin{cases} 0 & \text{if } \dfrac{\lambda_5 u_5 + \lambda_6 u_6}{A_m} \le 0\\[6pt] \dfrac{\lambda_5 u_5 + \lambda_6 u_6}{A_m} & \text{if } \dfrac{\lambda_5 u_5 + \lambda_6 u_6}{A_m} < w_M\\[6pt] w_M & \text{if } \dfrac{\lambda_5 u_5 + \lambda_6 u_6}{A_m} \ge w_M \end{cases}
\]
Numerical Simulation of Optimal Vector using Constant Mosquito Population
This section gives the numerical analysis of the vector control method (through pesticide administration) for minimizing the infected human population during a dengue outbreak. We consider a constant growth population for the mosquito population and fix the final time $T$ to 100 days, or roughly three months, which is around the average duration of an infection season. The parameter values used are the same as in Table 3.2 and the control weight is set initially at $A_m = 1$. The optimality system is numerically solved using the same gradient algorithm as in Section 3.5.1. This further strengthens the conclusion that vector control is an effective method for minimizing the mosquito population. The healthy human population comprises the primary susceptible humans $u_1$, the secondary susceptible humans $u_3$, the recovered humans $u_4$ and the immune humans $T_h$. Figure 3.16 shows that for at most three days, the healthy human population decreases exponentially to its lowest point at 7955 individuals, then increases exponentially to its equilibrium at day 22. The figure shows that vector control is more effective in maintaining a healthy human population: with a difference of only 155 individuals, vector control has a slightly lower minimum population than vaccination, but it reaches the equilibrium faster than vaccination.
Figure 3.17 shows the behaviour of the optimal mosquito control in the optimal vector control problem using the constant growth function for the human and mosquito populations. In order to achieve optimal control in minimizing the infected humans, we need to apply insecticide constantly for 34.5 days; after that, the control exhibits a bang-bang behaviour. The figure shows that we can stop the application of insecticide on the 63rd day.
Minimizing Infected Humans by both Optimal Vaccination and Vector Control
In this section, we combine vaccination and vector control as a strategy for minimizing infected humans. We write a control problem that aims to minimize the number of infected humans. We attribute three control inputs: $w_1$ and $w_3$ for the human population and $w_m$ for the mosquito population. Here, the action of $w_1(t)$ is the percentage of primary susceptible and $w_3(t)$ the percentage of secondary susceptible individuals being vaccinated per unit of time, implying the removal of these individuals from the susceptible compartments. The action of $w_m(t)$ is the percentage of insecticide administered to the environment per unit of time, resulting in the removal of susceptible and infected mosquitoes from the system. Furthermore, we assume that all control inputs are measurable functions taking their values in the positively bounded sets $[0, w_H]$ and $[0, w_M]$. Thus we consider the objective function
\[
\mathcal{J}(w_1, w_3, w_m) = \int_0^T \left( u_2(t) + \frac{1}{2} A_1 w_1^2(t) + \frac{1}{2} A_3 w_3^2(t) + \frac{1}{2} A_m w_m^2(t) \right) dt
\]
subject to
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0} - w_1(t) u_1(t)\\
u_2'(t) &= \frac{a u_6(t)\left(b_h u_1(t) + \bar b_h u_3(t)\right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar b_h u_3(t) u_6(t)}{H_0} - w_3(t) u_3(t)\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} + g(M(t)) - \big(\mu_m + w_m(t)\big) u_5(t)\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t) - w_m(t) u_6(t)
\end{aligned}
\tag{3.33}
\]
for $t \in [0, T]$, with $0 \le w_1, w_3 \le w_H$ and $0 \le w_m \le w_M$.
The variables $A_j$ are the positive weights associated with the control variables $w_j$, $j = 1, 3, m$, respectively.

Lemma 3.5.7. There exists an optimal control $w^* = (w_1^*(t), w_3^*(t), w_m^*(t))$ such that
\[
\mathcal{J}(w_1^*, w_3^*, w_m^*) = \min_{w \in W} \mathcal{J}(w_1, w_3, w_m)
\]
under the constraint that $(u_1, u_2, u_3, u_4, u_5, u_6)$ is a solution to the system (4.13).
Now let us apply
Pontryagin's Maximum Principle in our system. We state the lemma below.
Lemma 3.5.8. There exist the adjoint variables λ i , i = 1, 2, • • • , 6 of the system (4.13) that satisfy the following backward in time system of ordinary differential equations:
-
dλ 1 dt = λ 1 -ab h u 6 H 0 -w 1 + λ 2 ab h u 6 H 0 - dλ 2 dt = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 - dλ 3 dt = λ 2 a b h u 6 H 0 + λ 3 -a b h u 6 H 0 -w 3 - dλ 4 dt = 0 - dλ 5 dt = λ 5 -ab m u 2 H 0 + ∂g ∂u 5 -λ 5 (µ m + w m ) + λ 6 ab m u 2 H 0 - dλ 6 dt = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 (µ m + w m )
with the transversality condition λ(T) = 0.
Proof. Using the Hamiltonian for (4.13), we have
H =L(w 1 , w 3 , w m ) + λ 1 (t)u ′ 1 (t) + λ 2 (t)u ′ 2 (t) + λ 3 (t)u ′ 3 (t) + λ 4 (t)u ′ 4 (t) + λ 5 (t)u ′ 5 (t) + λ 6 (t)u ′ 6 (t) = 1 2 u 2 2 + A 1 w 2 1 + A 3 w 2 3 + A m w 2 m + λ 1 - ab h u 6 u 1 H 0 -w 1 u 1 + λ 3 γ h u 2 - a b h u 3 u 6 H 0 -w 3 u 3 + λ 2 au 6 b h u 1 + b h u 3 H 0 -γ h u 2 -δ h u 2 + λ 4 (δ h u 2 ) + λ 5 - ab m u 2 u 5 H 0 + g(M) -µ m u 5 -w m u 5 + λ 6 ab m u 2 u 5 H 0 -µ m u 6 -w m u 6 .
(3.34)
Therefore, finding the partial derivatives of H with respect to u i 's, i = 1, 2, • • • , 6, we have
∂H ∂u 1 = λ 1 -ab h u 6 H 0 -w 1 + λ 2 ab h u 6 H 0 ∂H ∂u 2 = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 ∂H ∂u 3 = λ 2 a b h u 6 H 0 + λ 3 -a b h u 6 H 0 -w 3 ∂H ∂u 4 = 0 ∂H ∂u 5 = λ 5 -ab m u 2 H 0 + ∂g ∂u 5 -λ 5 (µ m + w m ) + λ 6 ab m u 2 H 0 ∂H ∂u 6 = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 (µ m + w m ) .
Then the adjoint system is defined by
dλ i dt = -∂H ∂u i for i = 1, 2, • • • , 6.
Theorem 3.5.9. The optimal control variables are given by
\[
w_1^*(t) = \max\left\{ 0, \min\left\{ \frac{\lambda_1 u_1}{A_1}, w_H \right\} \right\}, \qquad
w_3^*(t) = \max\left\{ 0, \min\left\{ \frac{\lambda_3 u_3}{A_3}, w_H \right\} \right\}, \qquad
w_m^*(t) = \max\left\{ 0, \min\left\{ \frac{\lambda_5 u_5 + \lambda_6 u_6}{A_m}, w_M \right\} \right\}.
\]
Proof. By the Pontryagin maximum principle, the optimal control w * minimizes, at each instant t, the Hamiltonian given by (4.14). We have ∂H ∂w j = 0, for all j = 1, 3, m at w j = w * j .
Therefore, we get
∂H ∂w 1 = A 1 w 1 -λ 1 u 1 , ∂H ∂w 3 = A 3 w 3 -λ 3 u 3 , ∂H ∂w m = A m w m -λ 5 u 5 -λ 6 u 6 ,
and
w 1 = λ 1 u 1 A 1 , w 3 = λ 3 u 3 A 3 , w m = λ 5 u 5 + λ 6 u 6 A m .
Numerical Simulation of the Optimal Control Problem
In this section, we present numerical simulations showing the difference between the three methods, vaccination, vector control, and the combination of vaccination and vector control, in minimizing the infected humans during the dengue outbreak. The optimal controls are $(w_1^*, w_3^*)$, $(w_5^*, w_6^*)$ and $(w_1^*, w_3^*, w_5^*, w_6^*)$, respectively. We consider a constant growth function for the human population, $f(H(t)) = 0$, and an entomological growth function for the mosquito population, $g(M(t)) = \alpha_m M e^{-\beta_m M}$. The parameter values used are presented in Table 3.2; they are taken from Bakach et al. [START_REF] Bakach | A survey of mathematical models of dengue fever[END_REF], and some of them are estimated from Indonesia, which has environmental conditions similar to the Philippines. Notice that we set $\alpha_m < \mu_m$; by Theorem 4.2.1 the global stability then corresponds to $E_1$. In this situation, $E_2$ is not biologically meaningful, thus we do not take it into consideration.

The control weights $A_1$ and $A_3$ are the efforts in vaccinating the human population, while $A_m$ is the effort to eliminate the mosquito population by administering insecticides. Since primary susceptible humans are readily available in the population compared to the secondary susceptible humans, the effort used in vaccinating them is less than the effort exerted in vaccinating the secondary susceptibles. Thus, $A_3$ is set higher than $A_1$, while insecticide administration on susceptible and infected mosquitoes uses the same effort and achieves a similar result. Hence, we initially set the control weights as $A_1 = 0.1$, $A_3 = 1$ and $A_m = 1$. Note that the values of $A_1$, $A_3$, $A_m$ do not change the convergence of the optimal control. The optimality system is numerically solved using the same gradient algorithm described in Section 3.5.1. Here, we set $u_0 = [1.e4, 0., 0., 0., 1.e5, 1e3]$ with $H_0 = u_0[0] + u_0[1] + u_0[2] + u_0[3]$ and $M_0 = u_0[4] + u_0[5]$. We choose a random positive value for $w_1$, $w_3$ and $w_m$ in $]0, 1[$ with $W = (w_1, w_3, w_m)$ such that $|H(w_j, u_i, \lambda_i)| > \epsilon$. Note that the initial choice of $w$ does not affect the convergence of the solution. Note that in the human compartment, the total immunity to dengue obtained by implementing the Dengvaxia vaccine is denoted by $T_h$, given by $T_h(t) = w_1(t) u_1(t) + w_3(t) u_3(t)$. Thus, a healthy human population combines the immune humans $T_h$, the primary susceptible $u_1$, the secondary susceptible $u_3$, and the recovered $u_4$ humans. The figure shows that the combination of vaccination and vector control is the best method for maximizing the healthy human population. It only takes 26 days for the combination of vaccination and vector control to reach the equilibrium of healthy humans; its minimum population is 8.73% (8,734) on day 1.8.
In contrast, there is no significant difference between the vector control and vaccination methods alone for the healthy human population. The vector control method takes 29 days to reach its equilibrium, with an 8.02% (8,021) minimum population at 2.8 days. The vaccination requires 39 days to reach an equilibrium, with an 8.16% (8,157) minimum population at 2.5 days. Without any control strategy applied, the healthy human population requires a much longer time to reach its equilibrium, with a 4% (4,000) minimum population.

For the recovered human compartment, the figure shows that the human population would eventually recover through time even without control strategies applied. This supports the fact that dengue infection lasts only three to seven days following the infectious mosquito bite, and that a spontaneous, full health recovery follows.

However, comparing the three control methods, the combination of vaccination and vector control stands out: it only requires 26 days to reach its equilibrium at 0.79% (787) recovered humans. At the same time, there is no significant difference between vaccination alone and vector control alone; both require 32 days to reach their equilibria at 1.59% (1,590) and 1.65% (1,646) recovered humans, respectively. Now, for minimizing the susceptible mosquito population, applying no control is better than vaccination: it decreases faster, with a 0.56% (556) minimum susceptible mosquito population, while under vaccination the population decreases more slowly, with a 3.69% (3,692) minimum population at the end of the time horizon. Nevertheless, the vector control method and the combination of vaccination and vector control are the better methods for controlling the mosquito population: they annihilate the susceptible mosquito population.

For minimizing the infected mosquitoes, either vector control alone or the combination of vaccination and vector control is the best method; there is no significant difference between the two. Both require minimal time for the infected mosquitoes to reach zero, with only 2.6% (2,596) and 2.52% (2,521) maximum populations for vector control only and for the combination, respectively. However, vaccination is still better than no control: the infected mosquitoes reach a 58.09% (58,092) maximum population without any control strategy, while under vaccination the maximum is 22% (22,006). Now, let us show the behaviour of the controlled variables by comparing vaccination only, vector control only, and the combination of vaccination and vector control.
Description of the Model with Vaccination
Based on the Ross-type model, we assumed that dengue viruses are virulent with no other microorganism attacking the human body. Let M be the population of female mosquitoes split into two groups of susceptible S m and infectious I m mosquitoes. Figure 4.1 describes the flow of dengue disease. In this chapter, we introduce a mathematical model of dengue that considers the vaccine that should be given to people who are already infected by one type of virus.
Since humans have a meager mortality rate compared to mosquitoes, we neglect the natural death of humans but still consider their growth. The following system of ordinary differential equations governs the dynamics of the human population.
\[
S_h'(t) = -\frac{a b_h I_m(t)}{H(t)} S_h(t) + f(H(t)) \tag{4.1}
\]
\[
I_h'(t) = \frac{a I_m(t)}{H(t)} \left( b_h S_h(t) + \bar b_h \bar S_h(t) \right) - \gamma_h I_h(t) - \delta_h I_h(t) \tag{4.2}
\]
\[
\bar S_h'(t) = \gamma_h I_h(t) - \frac{a \bar b_h I_m(t)}{H(t)} \bar S_h(t) \tag{4.3}
\]
\[
R_h'(t) = \delta_h I_h(t). \tag{4.4}
\]
The dynamics of the mosquitoes are as follows:
\[
S_m'(t) = -\frac{a b_m I_h(t)}{H(t)} S_m(t) - \mu_m S_m(t) + g(M(t)) \tag{4.5}
\]
\[
I_m'(t) = \frac{a b_m I_h(t)}{H(t)} S_m(t) - \mu_m I_m(t). \tag{4.6}
\]
Note that the total human population is given by $H = S_h + I_h + \bar S_h + R_h$ and the total mosquito population is given by $M = S_m + I_m$. The function $f(H(t))$ is the change in the total human population, while $g(M(t))$ is the change in the total mosquito population. In this study, we consider different growth models for the human and mosquito populations. Since humans have a meager mortality rate compared to mosquitoes, we neglect the natural death of humans.
Study of the Model with Entomological Growth
Let us consider an entomological growth population model for the mosquito population and a constant human population. We have $H'(t) = f(H(t)) = 0$ and $g(M(t)) = \alpha_m M e^{-\beta_m M}$, as in Bliman et al. [START_REF] Bliman | Implementation of control strategies for sterile insect techniques[END_REF], where $\beta_m$ is the characteristic of the competition effect per individual. Then our model becomes
\[
\begin{aligned}
u_1'(t) &= -\frac{a b_h u_6(t) u_1(t)}{H_0}\\
u_2'(t) &= \frac{a u_6(t)\left(b_h u_1(t) + \bar b_h u_3(t)\right)}{H_0} - \gamma_h u_2(t) - \delta_h u_2(t)\\
u_3'(t) &= \gamma_h u_2(t) - \frac{a \bar b_h u_3(t) u_6(t)}{H_0}\\
u_4'(t) &= \delta_h u_2(t)\\
u_5'(t) &= -\frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_5(t) + \alpha_m M e^{-\beta_m M}\\
u_6'(t) &= \frac{a b_m u_2(t) u_5(t)}{H_0} - \mu_m u_6(t)
\end{aligned}
\tag{4.7}
\]
Now, let us prove that our new model is bounded and well-defined.
Well-posedness and Positivity of the Solution
Theorem 4.2.1. The domain Ω defined by
\[
\Omega_{\text{Ento}} = \left\{ U \in \mathbb{R}^6_+ : 0 \le u_1 + u_2 + u_3 + u_4 = H_0,\; 0 \le u_5 + u_6 \le \max\left( \frac{\alpha_m}{\beta_m \mu_m}, M_0 \right) \right\}
\]
is positively invariant. In particular, for an initial datum U(0) in Ω Ento , there exists a unique global in time solution U in C(R + , Ω Ento ).
Proof. Consider the initial value problem
U ′ (t) = F(t, U(t)) where U(0) = U 0 .
The right-hand side $F$ satisfies the local Lipschitz condition. Therefore, the Cauchy-Lipschitz theorem ensures the local well-posedness. Then,
\[
\begin{aligned}
f_1(0, u_2, u_3, u_4, u_5, u_6) &= 0, && \forall u_2, \dots, u_6 \in \Omega_{\text{Ento}}\\
f_2(u_1, 0, u_3, u_4, u_5, u_6) &= \frac{a u_6 \left( b_h u_1 + \bar b_h u_3 \right)}{H} \ge 0, && \forall u_1, u_3, \dots, u_6 \in \Omega_{\text{Ento}}\\
f_3(u_1, u_2, 0, u_4, u_5, u_6) &= \gamma_h u_2, && \forall u_1, u_2, u_4, \dots, u_6 \in \Omega_{\text{Ento}}\\
f_4(u_1, u_2, u_3, 0, u_5, u_6) &= \delta_h u_2, && \forall u_1, u_2, u_3, u_5, u_6 \in \Omega_{\text{Ento}}\\
f_5(u_1, u_2, u_3, u_4, 0, u_6) &= \alpha_m u_6 e^{-\beta_m u_6}, && \forall u_1, \dots, u_4, u_6 \in \Omega_{\text{Ento}}\\
f_6(u_1, u_2, u_3, u_4, u_5, 0) &= \frac{a b_m u_2 u_5}{H}, && \forall u_1, \dots, u_5 \in \Omega_{\text{Ento}}.
\end{aligned}
\]
We write
\[
\alpha_m M e^{-\beta_m M} = \frac{\alpha_m M}{e^{\beta_m M}} = \frac{\alpha_m M}{\sum_{n \ge 0} \frac{(\beta_m M)^n}{n!}} = \frac{\alpha_m M}{1 + \beta_m M + \frac{\beta_m^2 M^2}{2} + \frac{\beta_m^3 M^3}{6} + \cdots} \le \frac{\alpha_m}{\beta_m}.
\]
Then
\[
M'(t) \le \frac{\alpha_m}{\beta_m} - \mu_m M(t)
\]
and by Gronwall's lemma
\[
M(t) \le e^{-\mu_m t}\left( M_0 - \frac{\alpha_m}{\beta_m \mu_m} \right) + \frac{\alpha_m}{\beta_m \mu_m} \le \max\left( \frac{\alpha_m}{\beta_m \mu_m}, M_0 \right).
\]

Equilibrium

Now, solving for the equilibria of system (4.7) on $\Omega_{\text{Ento}}$, we obtain the following lemma.

Lemma 4.2.2. The system of equation (4.7) admits two equilibria, $E_{\text{Ento},1} = (u_1^*, 0, u_3^*, u_4^*, 0, 0)$ and $E_{\text{Ento},2} = \left( u_1^*, 0, u_3^*, u_4^*, \frac{1}{\beta_m} \ln\frac{\alpha_m}{\mu_m}, 0 \right)$.

Proof. Let $u_1', u_2', u_3', u_4', u_5', u_6' = 0$.
Since all parameters are positive, $\delta_h u_2 = 0 \implies u_2 = 0$.
Now, for $\frac{a b_m u_2 u_5}{H_0} - \mu_m u_6 = 0$, since $u_2 = 0$, we have
\[
\frac{a b_m (0) u_5}{H_0} - \mu_m u_6 = 0 \implies -\mu_m u_6 = 0.
\]
Therefore, $u_6 = 0$. Thus, $-\frac{a b_m u_2 u_5}{H_0} - \mu_m u_5 + \alpha_m (u_5 + u_6) e^{-\beta_m (u_5 + u_6)} = 0$ becomes
\[
-\frac{a b_m (0)}{H_0} - \mu_m u_5 + \alpha_m (u_5 + 0) e^{-\beta_m (u_5 + 0)} = 0
\implies u_5 \left( -\mu_m + \alpha_m e^{-\beta_m u_5} \right) = 0.
\]
Hence, $u_5 = 0$ or $-\mu_m + \alpha_m e^{-\beta_m u_5} = 0$. If $-\mu_m + \alpha_m e^{-\beta_m u_5} = 0$, then $e^{-\beta_m u_5} = \frac{\mu_m}{\alpha_m}$ and thus $u_5 = \frac{1}{\beta_m} \ln\frac{\alpha_m}{\mu_m}$. Since each equation of the system contains either $u_2$ or $u_6$ (or both), each of which is zero, any nonnegative values of $u_1$, $u_3$, $u_4$ satisfy the system of equations.
Therefore, $E_{\text{Ento},1} = (u_1^*, 0, u_3^*, u_4^*, 0, 0)$ and $E_{\text{Ento},2} = \left( u_1^*, 0, u_3^*, u_4^*, \frac{1}{\beta_m}\ln\frac{\alpha_m}{\mu_m}, 0 \right)$ are equilibrium points of the system (4.7).
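The nontrivial component $u_5 = \frac{1}{\beta_m}\ln\frac{\alpha_m}{\mu_m}$ can be checked numerically against the entomological growth term. Below is a minimal sketch: $\beta_m = 0.375$ is the value used later in the text, while $\alpha_m$ and $\mu_m$ are illustrative placeholders.

```python
import numpy as np

alpha_m, mu_m, beta_m = 0.2, 0.0714, 0.375   # alpha_m, mu_m are placeholders

def net_growth(M):
    """Right-hand side of M' = alpha_m * M * exp(-beta_m * M) - mu_m * M."""
    return alpha_m * M * np.exp(-beta_m * M) - mu_m * M

M_star = np.log(alpha_m / mu_m) / beta_m
print(M_star, net_growth(M_star))   # net_growth(M_star) should be ~0
```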
Next Generation Matrix and Basic Reproduction Number
Since the infected individuals are in u 2 and u 6 , then we can rewrite the system of the equations as
\[
F = \begin{pmatrix} \dfrac{a u_6 (b_h u_1 + \bar b_h u_3)}{H_0}\\[6pt] \dfrac{a b_m u_2 u_5}{H_0} \end{pmatrix},
\qquad
V = \begin{pmatrix} (\gamma_h + \delta_h) u_2\\[4pt] \mu_m u_6 \end{pmatrix},
\]
where $F$ is the rate of appearance of new infections in each compartment and $V$ is the rate of other transitions between all compartments. Thus,
\[
F = \begin{pmatrix} 0 & \dfrac{a (b_h u_1 + \bar b_h u_3)}{H_0}\\[6pt] \dfrac{a b_m u_5}{H_0} & 0 \end{pmatrix},
\qquad
V = \begin{pmatrix} \gamma_h + \delta_h & 0\\ 0 & \mu_m \end{pmatrix},
\qquad
V^{-1} = \begin{pmatrix} \dfrac{1}{\gamma_h + \delta_h} & 0\\[6pt] 0 & \dfrac{1}{\mu_m} \end{pmatrix}.
\]
Therefore, the next generation matrix is
\[
FV^{-1} = \begin{pmatrix} 0 & \dfrac{a (b_h u_1 + \bar b_h u_3)}{\mu_m H_0}\\[6pt] \dfrac{a b_m u_5}{(\gamma_h + \delta_h) H_0} & 0 \end{pmatrix}.
\]
It follows from [START_REF] Van Den Driessche | Reproduction numbers and subthreshold endemic equilibria for compartmental models of disease transmission[END_REF] that the basic reproduction number, denoted by $\rho(FV^{-1})$, where $\rho$ is the spectral radius, is given by
\[
\rho(FV^{-1}) = \sqrt{\frac{a^2 b_m u_5 (b_h u_1 + \bar b_h u_3)}{\mu_m H_0^2 (\gamma_h + \delta_h)}}.
\tag{4.8}
\]
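The spectral radius in (4.8) can also be obtained by building $FV^{-1}$ numerically and comparing with the closed form; below is a minimal sketch where the state and parameter values are illustrative placeholders.

```python
import numpy as np

a, b_h, bb_h, b_m = 0.5, 0.4, 0.4, 0.4
gamma_h, delta_h, mu_m = 0.0714, 0.1428, 0.0714
H0, u1, u3, u5 = 1e4, 1e4, 0.0, 1e5          # state at which R0 is evaluated

F = np.array([[0.0, a * (b_h * u1 + bb_h * u3) / H0],
              [a * b_m * u5 / H0, 0.0]])
V = np.diag([gamma_h + delta_h, mu_m])
R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
# closed form (4.8) for comparison
R0_formula = np.sqrt(a**2 * b_m * u5 * (b_h * u1 + bb_h * u3)
                     / (mu_m * H0**2 * (gamma_h + delta_h)))
print(R0, R0_formula)
```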
Now using this eigenvalue, we determine the local stability of the equilibrium points E Ento,1 and E Ento,2 .
Proposition 4.2.3.
1. The disease free equilibrium $E_{\text{Ento},1} = (u_1^*, 0, u_3^*, u_4^*, 0, 0) = (S_h^*, 0, \bar S_h^*, R_h^*, 0, 0)$ is locally asymptotically stable.

2. If $\alpha_m > \mu_m$ and $R_0 < 1$, where
\[
R_0 = \sqrt{\frac{a^2 b_m \ln\frac{\alpha_m}{\mu_m} \left( b_h S_h^* + \bar b_h \bar S_h^* \right)}{H_0^2\, \mu_m \beta_m (\gamma_h + \delta_h)}},
\]
then the disease free equilibrium point $E_{\text{Ento},2} = \left( u_1^*, 0, u_3^*, u_4^*, \frac{1}{\beta_m}\ln\frac{\alpha_m}{\mu_m}, 0 \right) = \left( S_h^*, 0, \bar S_h^*, R_h^*, \frac{1}{\beta_m}\ln\frac{\alpha_m}{\mu_m}, 0 \right)$ is locally asymptotically stable.
Proof.
1. From the above eigenvalues,
\[
\rho(FV^{-1}) = \sqrt{\frac{a^2 b_m (0) \left( \bar b_h u_3^* + b_h u_1^* \right)}{H_0^2\, \mu_m (\gamma_h + \delta_h)}} = 0.
\]
Therefore, the system of equations is locally asymptotically stable at $E_{\text{Ento},1}$.

2. Similarly,
\[
R_0 = \sqrt{\frac{a^2 b_m u_5^* \left( b_h u_1^* + \bar b_h u_3^* \right)}{\mu_m H_0^2 (\gamma_h + \delta_h)}}
= \sqrt{\frac{a^2 b_m \ln\frac{\alpha_m}{\mu_m} \left( \bar b_h u_3^* + b_h u_1^* \right)}{\mu_m H_0^2 \beta_m (\gamma_h + \delta_h)}}.
\]
Therefore, if $\alpha_m > \mu_m$ and $R_0 < 1$, the system of equations is locally asymptotically stable at $E_{\text{Ento},2}$.
The basic reproduction number $R_0$ has a biological meaning when $\alpha_m > \mu_m$: it means that the average number of newly infected humans is proportional to the equilibrium mosquito population $\frac{1}{\beta_m}\ln\frac{\alpha_m}{\mu_m}$.
Jacobian Matrix
To confirm the stability, we compute the Jacobian matrix of the system; only the derivatives $\frac{\partial f_5}{\partial u_i}$, $i = 1, 2, \dots, 6$, change. We get
\[
\begin{aligned}
\frac{\partial f_5}{\partial u_1} &= 0, &
\frac{\partial f_5}{\partial u_2} &= -\frac{a b_m u_5}{H_0}, &
\frac{\partial f_5}{\partial u_3} &= 0,\\
\frac{\partial f_5}{\partial u_4} &= 0, &
\frac{\partial f_5}{\partial u_5} &= -\frac{a b_m u_2}{H_0} - \mu_m + \alpha_m e^{-\beta_m M}(1 - \beta_m M), &
\frac{\partial f_5}{\partial u_6} &= \alpha_m e^{-\beta_m M}(1 - \beta_m M).
\end{aligned}
\]
Therefore, we get the Jacobian matrix
\[
J = \begin{pmatrix}
-\frac{a b_h u_6}{H_0} & 0 & 0 & 0 & 0 & -\frac{a b_h u_1}{H_0}\\[2pt]
\frac{a b_h u_6}{H_0} & -\gamma_h - \delta_h & \frac{a \bar b_h u_6}{H_0} & 0 & 0 & \frac{a b_h u_1 + a \bar b_h u_3}{H_0}\\[2pt]
0 & \gamma_h & -\frac{a \bar b_h u_6}{H_0} & 0 & 0 & -\frac{a \bar b_h u_3}{H_0}\\[2pt]
0 & \delta_h & 0 & 0 & 0 & 0\\[2pt]
0 & -\frac{a b_m u_5}{H_0} & 0 & 0 & -\frac{a b_m u_2}{H_0} - \mu_m + \alpha_m e^{-\beta_m M}(1 - \beta_m M) & \alpha_m e^{-\beta_m M}(1 - \beta_m M)\\[2pt]
0 & \frac{a b_m u_5}{H_0} & 0 & 0 & \frac{a b_m u_2}{H_0} & -\mu_m
\end{pmatrix}.
\]
Thus, we have the following theorem.
Lemma 4.2.4. The disease-free equilibrium point E Ento,1 = (u * 1 , 0, u * 3 , u * 4 , 0, 0) is locally asymptotically stable.
Proof. Let $E_{\text{Ento},1} = (u_1^*, 0, u_3^*, u_4^*, 0, 0)$ be an equilibrium point of the system of equation (4.7). Then the above Jacobian matrix at $E_{\text{Ento},1}$ reduces to
\[
J(E_{\text{Ento},1}) = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & -\frac{a b_h u_1^*}{H_0}\\[2pt]
0 & -\gamma_h - \delta_h & 0 & 0 & 0 & \frac{a b_h u_1^* + a \bar b_h u_3^*}{H_0}\\[2pt]
0 & \gamma_h & 0 & 0 & 0 & -\frac{a \bar b_h u_3^*}{H_0}\\[2pt]
0 & \delta_h & 0 & 0 & 0 & 0\\[2pt]
0 & 0 & 0 & 0 & -\mu_m + \alpha_m & \alpha_m\\[2pt]
0 & 0 & 0 & 0 & 0 & -\mu_m
\end{pmatrix}.
\]
Setting $\left| J(E_{\text{Ento},1}) - \lambda I_6 \right| = 0$ and solving the determinant gives the characteristic polynomial
\[
-\lambda^3 (-\lambda - \gamma_h - \delta_h)(-\lambda - \mu_m)(-\lambda - \mu_m + \alpha_m) = 0.
\]
Hence, we get
\[
\lambda_1 = 0 \ \text{(multiplicity 3)}, \qquad
\lambda_2 = -\gamma_h - \delta_h, \qquad
\lambda_3 = -\mu_m, \qquad
\lambda_4 = -\mu_m + \alpha_m.
\]
Since α mµ m ≤ 0, all the eigenvalues are negative, implying further that the system of equation is locally asymptotically stable at the E Ento,1 .
Lemma 4.2.5. The endemic equilibrium point $\left( u_1^*, 0, u_3^*, u_4^*, \frac{1}{\beta_m}\ln\frac{\alpha_m}{\mu_m}, 0 \right)$ is locally asymptotically stable if $\alpha_m > \mu_m$ and $R_0 < 1$.

Proof. Let $E_{\text{Ento},2} = \left( u_1^*, 0, u_3^*, u_4^*, \frac{1}{\beta_m}\ln\frac{\alpha_m}{\mu_m}, 0 \right)$ be an equilibrium point of the system of equation (4.7). Then the above Jacobian matrix at $E_{\text{Ento},2}$ reduces to
\[
J(E_{\text{Ento},2}) = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & -\frac{a b_h u_1^*}{H_0}\\[2pt]
0 & -\gamma_h - \delta_h & 0 & 0 & 0 & \frac{a b_h u_1^* + a \bar b_h u_3^*}{H_0}\\[2pt]
0 & \gamma_h & 0 & 0 & 0 & -\frac{a \bar b_h u_3^*}{H_0}\\[2pt]
0 & \delta_h & 0 & 0 & 0 & 0\\[2pt]
0 & -\frac{a b_m \ln\frac{\alpha_m}{\mu_m}}{H_0 \beta_m} & 0 & 0 & -\mu_m \ln\frac{\alpha_m}{\mu_m} & \mu_m\left( 1 - \ln\frac{\alpha_m}{\mu_m} \right)\\[2pt]
0 & \frac{a b_m \ln\frac{\alpha_m}{\mu_m}}{H_0 \beta_m} & 0 & 0 & 0 & -\mu_m
\end{pmatrix}.
\]
Therefore, solving the determinant above would determine its characteristic polynomial. We have
(-λ) 3 -µ m ln α m µ m -λ -γ h -δ h -λ ab h u * 1 +a b h u * 3 H 0 ab m ln αm µm H 0 β m -µ m -λ = 0.
Solving the determinant of the remaining 2x2 matrix, we get
(-γ h -δ h -λ)(-µ m -λ) - ab m ln α m µ m H 0 β m • ab h u * 1 + a b h u * 3 H 0 = 0.
Expanding the equation above give us the quadratic equation
λ 2 + λ(γ h + δ h + µ m ) + µ m (γ h + δ h ) - ab m ln α m µ m (ab h u * 1 + a b h u * 3 ) H 0 2 β m = 0
Using quadratic formula to solve λ, we get
λ = - γ h + δ h + µ m 2 ± (γ h + δ h + µ m ) 2 -4 µ m (γ h + δ h ) - ab m ln αm µm (ab h u * 1 +a b h u * 3 ) H 0 2 β m Since ln α m µ m ab m H 0 a b h u 3 +ab h u 1 H 0 < µ m β m (γ h + δ h ), we have λ < - γ h + δ h + µ m 2 ± (γ h + δ h + µ m ) 2 -4 µ m (γ h + δ h ) -1 β m • µ m β m (γ h + δ h ) 2 Simplifying this we get λ < - γ h + δ h + µ m 2 ± (γ h + δ h + µ m ) 2 -4 (µ m (γ h + δ h ) -µ m (γ h + δ h )) 2
That is,
λ < - γ h + δ h + µ m 2 ± γ h + δ h + µ m 2 Therefore, λ < - γ h + δ h + µ m 2 + γ h + δ h + µ m 2 λ < - γ h + δ h + µ m 2 - γ h + δ h + µ m 2 λ = 0 λ < -(γ h + δ h + µ m )
Thus, the eigenvalues of the system satisfy
\[
\lambda = 0 \ \text{(multiplicity 4)}, \qquad
\lambda = -\mu_m \ln\frac{\alpha_m}{\mu_m}, \qquad
\lambda < -(\gamma_h + \delta_h + \mu_m),
\]
which are negative if $\alpha_m > \mu_m$ and $R_0 < 1$. Therefore, the system of equations is locally asymptotically stable at the equilibrium point $E_{\text{Ento},2}$.
Theorem 4.2.6.
1. If α m < µ m , then E Ento,1 is globally asymptotically stable.
2. If $\alpha_m > \mu_m$ and $R_0 < 1$, then $E_{\text{Ento},2}$ is globally asymptotically stable.
Proof. From the fourth equation of system (4.7), we can deduce that u 4 is increasing.
Since u 4 is bounded by H 0 , u 4 has a limit u * 4 as t → +∞. Thus, integrating the equation gives
\[
u_4(t) - u_4(0) = \delta_h \int_0^t u_2(s)\,ds.
\]
Thus $u_4^* - u_4(0) = \delta_h \int_0^\infty u_2(s)\,ds$, which is finite, implying further that $u_2(s) \to 0$ as $s \to +\infty$. Now, adding the fifth and sixth equations of system (4.7) gives us
u ′ 5 + u ′ 6 = M ′ = g(M) = α m Me -β m M -µ m M.
As in Theorem 3.2.3, if α m < µ m , then M(t) → 0 as t → +∞. Thus, by positivity of the solution u 5 and u 6 goes to 0 as t → +∞.
From $u_1' = -\frac{a b_h u_6 u_1}{H_0} \le 0$, the function $u_1$ is a decreasing non-negative function bounded by $H_0$. Thus, as $t \to +\infty$, $u_1 \to u_1^*$. The solution of the third equation, $u_3' + \frac{a \bar b_h u_6 u_3}{H_0} = \gamma_h u_2$, can be written as
\[
u_3(t) = E(t) u_3(0) + \int_0^t E(t - s)\, \gamma_h u_2(s)\, ds
\quad\text{with}\quad
E(t) = e^{-\frac{a \bar b_h}{H_0} \int_0^t u_6(s)\, ds}.
\]
Since, for all $t \ge 0$, $0 \le u_2(t), u_3(t) \le H_0$ and $0 \le u_6(t) \le M_0$, we get $u_2 \to 0$ and $u_3 \to u_3^*$ when $t \to +\infty$. When $\alpha_m \ge \mu_m$, let us denote $M^* = \frac{1}{\beta_m} \ln\frac{\alpha_m}{\mu_m}$. Since $M'(t) = \left( \alpha_m e^{-\beta_m M} - \mu_m \right) M$, if $M(t) < M^*$,
then M is increasing and bounded by above, while if M(t) > M * , then M is decreasing and bounded by below. In particular, M(t) has a limit when t → +∞. Using now the local asymptotic stability with R 0 < 1, this limit is equal to M * .
Numerical Illustrations
In this section, we present the numerical illustration using the constant human population and the entomological growth function for the mosquito population. We let the final time $T$ be 2000 days with initial condition $(10000, 0, 0, 0, 100000, 10)$. Using the same parameter values as in Table 3.2 and $\beta_m = 0.375$, a phase portrait of system (4.7) was simulated using the Python program. For the mosquito population, susceptible mosquitoes decrease rapidly for 120 days, followed by a smooth decrease towards the equilibrium, whereas infected mosquitoes increase exponentially for 14 days, with a maximum population of 51603.490, and then decrease exponentially towards the equilibrium. For infected humans, figure 4.3a shows that at time $t = 0$ there are no infected humans but 10,000 primary susceptible humans. For a period of time, infected humans increase while primary susceptible humans decrease. Upon reaching the maximum of 5790.384 infected humans, the infected humans decrease while the primary susceptible humans continue to decrease. After some time, the two variables become inversely proportional: as infected humans decrease, the infected mosquitoes continue to increase. Then both variables decrease towards zero.

On the other hand, figure 4.3d shows that at time $t = 0$ there are 100,000 susceptible mosquitoes and 10 infected mosquitoes. The two variables are inversely proportional to each other: for some time, susceptible mosquitoes decrease while infected mosquitoes increase slowly, followed by an exponential increase of infected mosquitoes. When the maximum of 51603.490 infected mosquitoes is reached, the infected mosquitoes decrease while the susceptible mosquitoes continue to decrease towards zero.
Choice of Control Strategies
Preventing or reducing dengue virus transmission depends entirely on controlling the mosquito vectors or on vaccination. This section applies three control strategies to reduce dengue transmission: vaccination, vector control and the combination of vaccination and vector control. We apply these control strategies to model (4.7).
Vaccination
Dengue fever is the most rapidly spreading mosquito-borne viral disease found in tropical and sub-tropical climates worldwide. It is caused by a single positive-stranded RNA virus of the family Flaviviridae that is transmitted to humans through a diurnal mosquito [START_REF] Rodenhuis-Zybert | Dengue virus life cycle: Viral and host factors modulating infectivity[END_REF]. So far, there is no specific treatment for dengue fever. According to the theory of facilitating antibodies, vaccine research is made more difficult by the need for a vaccine that immunizes sustainably and simultaneously against the four serotypes of the virus [START_REF] Normile | Surprising new dengue virus throws a spanner in disease control efforts[END_REF]. Half a dozen vaccine candidates are under study. Let us consider an entomological growth for the mosquito population and a constant human population. We have $f(H(t)) = 0$ and $g(M(t)) = \alpha_m M e^{-\beta_m M}$. Then our model becomes
u ′ 1 (t) = - ab h u 6 (t)u 1 (t) H 0 -v 1 u 1 (t) u ′ 2 (t) = au 6 (t) b h u 1 (t) + b h u 3 (t) H 0 -γ h u 2 (t) -δ h u 2 (t) u ′ 3 (t) = γ h u 2 (t) - a b h u 3 (t)u 6 (t) H 0 -v 3 u 3 (t) u ′ 4 (t) = δ h u 2 (t) u ′ 5 (t) = - ab m u 2 (t)u 5 (t) H 0 -µ m u 5 (t) + α m Me -β m M u ′ 6 (t) = ab m u 2 (t)u 5 (t) H 0 -µ m u 6 (t) (4.9)
Where v 1 u 1 (t) is the vaccination given to the primary susceptible human population, and w 3 u 3 (t) is the vaccination given to the secondary susceptible human. The total immunity is given by T
′ (t) = v 1 u 1 (t) + v 3 u 3 (t).
Note that there exists a unique global in time solution (u 1 , u 2 , u 3 , u 4 , u 5 , u 6 ) in C(Ω Ento , R + ) 6 . Lemma 4.3.1. The system of equation (4.9) admits two equilibria E VacEnto,1 = (0, 0, 0, u * 4 , 0, 0) and E VacEnto,2 = (0, 0, 0,
u * 4 , 1 β m ln α m µ m , 0). Proof. Let u ′ 1 , u ′ 2 , u ′ 3 , u ′ 4 , u ′ 5 , u ′ 6 = 0.
Since all parameter are positive, then
δ h u 2 = 0 =⇒ u 2 = 0 Now for ab m u 2 u 5 H 0 -µ m u 6 = 0, since u 2 = 0, we have ab m (0)u 5 H 0 -µ m u 6 = 0 -µ m u 6 = 0
Therefore, u 6 = 0. Consequently, substituting u 6 = 0 and u 2 = 0 to both u ′ 1 = 0 and u ′ 3 = 0, we have
- ab h u 6 u 1 H 0 -v 1 u 1 = 0 γ h u 2 - a b h u 3 u 6 H 0 -v 3 u 3 = 0 - ab h (0)u 1 H 0 -v 1 u 1 = 0 γ h (0) - a b h u 3 (0) H 0 -v 3 u 3 = 0 -v 1 u 1 = 0 -v 3 u 3 = 0 Since v 1 , v 3 > 0, u 1 = 0 and u 3 = 0. For -ab m u 2 u 5 H 0 -µ m u 5 + α m (u 5 + u 6 )e -β m (u 5 +u 6 ) = 0 becomes - ab m (0) H 0 -µ m u 5 + α m (u 5 + (0))e -β m (u 5 +(0)) = 0 -µ m u 5 + α m u 5 e -β m u 5 = 0 u 5 (-µ m + α m e -β m u 5 ) = 0 Hence, u 5 = 0 or -µ m + α m e -β m u 5 = 0. If -µ m + α m e -β m u 5 = 0, then e -β m u 5 = µ m α m . Thus, u 5 = 1 β m ln α m µ m .
Consequently, since the system (4.9) contains either u 1 , u 2 , u 3 or u 6 or both, which has a zero value, then any nonnegative values of u * 4 satisfies the system of equation.
Therefore, E VacEnto,1 = (0, 0, 0, u * 4 , 0, 0) and
E VacEnto,2 = (0, 0, 0, u * 4 , 1 β m ln α m µ m , 0
) is an equilibrium point of system (4.9).
As in Section 6.1, one gets
λ = ± a 2 b m u 5 b h u 3 + b h u 1 H 0 2 µ m (γ h + δ h ) ,
and, we have the following lemma.
Lemma 4.3.2. The equilibrium points E VacEnto,1 = (0, 0, 0, u * 4 , 0, 0) and E VacEnto,2 = (0, 0, 0, u * 4 , 1 β m ln α m µ m , 0) of the system of equation (4.9) are locally asymptotically stable.
Vector Control
Vector control is a method to limit or eradicate the vectors which transmit disease pathogens. The most frequent type of vector control uses a variety of strategies such as habitat and environmental control, reducing vector contact, chemical control, and biological control.
Let us consider the following model in two different populations to include vectorial control.
Our model would become
u ′ 1 (t) = - ab h u 6 (t)u 1 (t) H 0 u ′ 2 (t) = au 6 (t) b h u 1 (t) + b h u 3 (t) H 0 -γ h u 2 (t) -δ h u 2 (t) u ′ 3 (t) = γ h u 2 (t) - a b h u 3 (t)u 6 (t) H 0 u ′ 4 (t) = δ h u 2 (t) u ′ 5 (t) = - ab m u 2 (t)u 5 (t) H 0 -µ m u 5 (t) + α m Me -β m M -v 5 u 5 (t) u ′ 6 (t) = ab m u 2 (t)u 5 (t) H 0 -µ m u 6 (t) -v 5 u 6 (t) (4.10)
Where v 5 u 5 (t) is the vectorial control given to the susceptible mosquito population, and v 5 u 6 (t) is the vectorial control to infectious mosquito. The total mosquito control is given by T ′ M (t) = v 5 u 5 (t) + v 6 u 6 (t). Again there exists a unique global in time solution (u 1 , u 2 , u 3 , u 4 , u 5 , u 6 ) in C(Ω Ento , R + ) 6 . Lemma 4.3.3. The system of equation (4.10) admits two equilibria
E VecEnto,1 = (u * 1 , 0, u * 3 , u * 4 , 0, 0) and E VecEnto,2 = u * 1 , 0, u * 3 , u * 4 , 1 β m ln α m v 5 +µ m , 0 . Proof. Let u ′ 1 , u ′ 2 , u ′ 3 , u ′ 4 , u ′ 5 , u ′ 6 = 0.
Since all parameter are positive, then
δ h u 2 = 0 =⇒ u 2 = 0 Now for ab m u 2 u 5 H 0 -µ m u 6 -v 5 u 6 = 0, since u 2 = 0, we have ab m (0)u 5 H 0 -µ m u 6 -v 5 u 6 = 0 (-µ m -v 5 )u 6 = 0 Since -µ m -v 5 ̸ = 0, u 6 = 0. Consequently, substituting u 6 = 0 and u 2 = 0 to u ′ 5 = 0, we have - ab m u 2 u 5 H 0 -µ m u 5 + α m (u 5 + u 6 )e -β m (u 5 +u 6 ) -v 5 u 5 = 0 - ab m (0)u 5 H 0 -µ m u 5 + α m (u 5 + (0))e -β m (u 5 +(0)) -v 5 u 5 = 0 -µ m u 5 + α m u 5 e -β m u 5 -v 5 u 5 = 0 -µ m + α m e -β m u 5 -v 5 u 5 = 0 Implying that -µ m + α m e -β m u 5 -v 5 = 0 or u 5 = 0 α m e -β m u 5 = v 5 + µ m -β m u 5 = ln v 5 + µ m α m u 5 = 1 β m ln α m v 5 + µ m Consequently,
= ± a 2 b m u 5 b h u 3 + b h u 1 H 0 2 (µ m + v 5 )(γ h + δ h ) (4.11)
Therefore, we have the following theorem.
Lemma 4.3.4. The diseases free equilibrium point E VecEnto,1 = (u * 1 , 0, u * 3 , u * 4 , 0, 0) of the system of equation (4.10) is locally asymptotically stable. While the endemic equilibrium
point E VecEnto,2 = u * 1 , 0, u * 3 , u * 4 , 1 β m ln α m v 5 +µ m , 0 is locally asymptotically stable if R 0 < 1. Proof. For E VecEnto,1 , since u * 5 = 0, R 0 = 0 < 1.
Thus, E VecEnto,1 is locally asymptotically stable.
For E VecEnto,2 since u * 5 = 1 β m ln α m v 5 +µ m ,the above eigenvalues will become
R 0 = a 2 b m 1 β m ln α m v 5 +µ m b h u * 3 + b h u * 1 H 0 2 (µ m + v 5 )(γ h + δ h ) Consequently, R 2 0 = a 2 b m ln α m v 5 +µ m b h u * 3 + b h u * 1 H 0 2 β m (µ m + v 5 )(γ h + δ h ) = ln α m v 5 +µ m ab m H 0 a b h u * 3 +ab h u * 1 H 0 β m (µ m + v 5 )(γ h + δ h ) , if R 0 < 1,
Combination of Vaccination and Vector Control
Let us combine the dengue vaccination and the vectorial control in our model using two growth function.
Then our model becomes
u ′ 1 (t) = - ab h u 6 (t)u 1 (t) H 0 -v 1 u 1 (t) u ′ 2 (t) = au 6 (t) b h u 1 (t) + b h u 3 (t) H 0 -γ h u 2 (t) -δ h u 2 (t) u ′ 3 (t) = γ h u 2 (t) - a b h u 3 (t)u 6 (t) H 0 -v 3 u 3 (t) u ′ 4 (t) = δ h u 2 (t) u ′ 5 (t) = - ab m u 2 (t)u 5 (t) H 0 -µ m u 5 (t) + α m Me -β m M -v 5 u 5 (t) u ′ 6 (t) = ab m u 2 (t)u 5 (t) H 0 -µ m u 6 (t) -v 5 u 6 (t) (4.12)
where the total human immunity is given by
T ′ H (t) = v 1 u 1 (t) + v 3 u 3 (t)
and total vectorial control is given by T ′ M (t) = v 5 u 5 (t) + v 5 u 6 (t). There exists a unique global in time solution (u 1 , u 2 , u 3 , u 4 , u 5 , u 6 ) in C(Ω Ento , R + ) 6 . Lemma 4.3.6. The system of equation (4.12) admits two equilibria E CombEnto,1 = (0, 0, 0, u * 4 , 0, 0) and E CombEnto,2 = 0, 0, 0,
u * 4 , 1 β m ln α m µ m +v 5 , 0 . Proof. Let u ′ 1 , u ′ 2 , u ′ 3 , u ′ 4 , u ′ 5 , u ′ 6 = 0.
Since all parameter are positive, then
δ h u 2 = 0 =⇒ u 2 = 0. Thus, ab m u 2 u 5 H -µ m u 6 -v 5 u 6 = 0 becomes ab m (0)u 5 H -µ m u 6 -v 5 u 6 = 0 (-µ m -v 5 )u 6 = 0.
Hence, u 6 = 0. Therefore, substituting u 6 = 0 to both u ′ 1 = 0 and u ′ 3 = 0, we have
- ab h u 6 u 1 H 0 -v 1 u 1 = 0 γ h u 2 - a b h u 3 u 6 H 0 -v 3 u 3 = 0 - ab h (0)u 1 H 0 -v 1 u 1 = 0 γ h (0) - a b h u 3 (0) H 0 -v 3 u 3 = 0 -v 1 u 1 = 0 -v 3 u 3 = 0 Since v 1 , v 3 > 0, u 1 = 0 and u 3 = 0. Now, substituting u 6 = 0 to -ab m u 2 u 5 H 0 -µ m u 5 + α m (u 5 + u 6 )e -β m (u 5 +u 6 ) -v 5 u 5 = 0, we get - ab m (0)u 5 H 0 -µ m u 5 + α m (u 5 + 0)e -β m (u 5 +0) -v 5 u 5 = 0 -µ m u 5 + α m u 5 e -β m u 5 -v 5 u 5 = 0 u 5 (-µ m + α m e -β m u 5 -v 5 ) = 0 implying further that u 5 = 0 or -µ m + α m e -β m u 5 -v 5 = 0 e -β m u 5 = µ m + v 5 α m u 5 = 1 β m ln α m µ m + v 5
Consequently, since the system (4.12) contains either u 1 , u 2 , u 3 or u 6 or both, which has a zero value, then any nonnegative values of u * 4 satisfies the system of equation. Therefore, the equilibrium point of the system of equation (4.12) are E CombEnto,1 = (0, 0, 0, u * 4 , 0, 0) and
E CombEnto,2 = 0, 0, 0, u * 4 , 1 β m ln α m µ m +v 5 , 0 .
The next generation matrix remains the same as in Section 4.2. Thus the eigen-
values of system are λ = ± a 2 b m u 5 b h u 3 + b h u 1 H 0 2 (µ m + v 5 )(γ h + δ h ) .
Therefore, we have the following theorem.
Lemma 4.3.7. The disease free equilibrium point E CombEnto,1 = (0, 0, 0, u * 4 , 0, 0) and the endemic equilibrium point E CombEnto,2 = 0, 0, 0, u * 4 , 1 β m ln α m µ m +v 5 , 0 of the system of equation (4.12) are locally asymptotically stable.
Proof. Since u * 1 , u * 3 = 0 in either E CombEnto,1 and E CombEnto,2 , b h u 3 + b h u 1 = b h (0) + b h (0) = 0.
Thus the above eigenvalues will banished. Consequently, the system of equation is locally asymptotically stable at E CombEnto,1 and E CombEnto,2 .
Following Theorem 4.2.6, we can also prove the theorem below.
Optimal Control strategy
Assume that both control inputs are piecewise continuous functions that take its values in a positively bounded set W = [0, w H ] 2 × [0, w M ] 2 . Thus we consider the objective function
J (w 1 , w 3 , w m ) = T 0 u 2 (t) + 1 2 A 1 w 2 1 (t) + A 3 w 2 3 (t) + A m w 2 m (t) dt subject to u ′ 1 (t) = - ab h u 6 (t)u 1 (t) H 0 -w 1 (t)u 1 (t) u ′ 2 (t) = au 6 (t) b h u 1 (t) + b h u 3 (t) H 0 -γ h u 2 (t) -δ h u 2 (t) u ′ 3 (t) = γ h u 2 (t) - a b h u 3 (t)u 6 (t) H 0 -w 3 (t)u 3 (t) u ′ 4 (t) = δ h u 2 (t) u ′ 5 (t) = - ab m u 2 (t)u 5 (t) H 0 + g(M(t)) -µ m u 5 (t) -w m (t)u 5 (t) u ′ 6 (t) = ab m u 2 (t)u 5 (t) H 0 -µ m u 6 (t) -w m (t)u 6 (t) (4.13)
for t ∈ [0, T], with 0 ≤ w 1 , w 3 ≤ w H , 0 ≤ w m ≤ w M and w = (w 1 , w 3 , w m ). The variables A j are the positive weights associated with the control variables w j , j = 1, 3, m, respectively.
Lemma 4.4.1.
There exists an optimal control w * = (w *
1 (t), w * 3 (t), w * m (t)) such that J (w * 1 , w * 3 , w * m ) = min w∈W J (w 1 , w 3 , w m ) under the constraint (u 1 , u 2 , u 3 , u 4 , u 5 , u 6
) is a solution of (4.13).
Pontryagin's maximum principle is used to find the best possible control for taking a dynamical system from one state to another. It states that it is necessary for any optimal control along with the optimal state trajectory to solve the so-called
Hamiltonian system [START_REF] Lenhart | Optimal Control Applied to Biological Models[END_REF]. We state the lemma below.
Lemma 4.4.2.
There exist the adjoint variables λ i , i = 1, 2, • • • , 6 of the system (4.13) that satisfy the following backward in time system of ordinary differential equations:
-
dλ 1 dt = λ 1 -ab h u 6 H 0 -w 1 + λ 2 ab h u 6 H 0 - dλ 2 dt = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 - dλ 3 dt = λ 2 a b h u 6 H 0 + λ 3 -a b h u 6 H 0 -w 3 - dλ 4 dt = 0 - dλ 5 dt = λ 5 -ab m u 2 H 0 + ∂g ∂u 5 -λ 5 (µ m + w m ) + λ 6 ab m u 2 H 0 - dλ 6 dt = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 (µ m + w m )
with the transversality condition λ(T) = 0.
Proof. Using the Hamiltonian for (4.13), we have
H = L(w 1 , w 3 , w m ) + λ 1 (t)u ′ 1 (t) + λ 2 (t)u ′ 2 (t) + λ 3 (t)u ′ 3 (t) + λ 4 (t)u ′ 4 (t) + λ 5 (t)u ′ 5 (t) + λ 6 (t)u ′ 6 (t) = 1 2 u 2 2 + A 1 w 2 1 + A 3 w 2 3 + A m w 2 m + λ 1 - ab h u 6 u 1 H 0 -w 1 u 1 + λ 3 γ h u 2 - a b h u 3 u 6 H 0 -w 3 u 3 + λ 2 au 6 b h u 1 + b h u 3 H 0 -γ h u 2 -δ h u 2 + λ 4 (δ h u 2 ) + λ 5 - ab m u 2 u 5 H 0 + g(M) -µ m u 5 -w m u 5 + λ 6 ab m u 2 u 5 H 0 -µ m u 6 -w m u 6 . (4.14)
Therefore, finding the partial derivatives of H with respect to
u i 's, i = 1, 2, • • • , 6, we have ∂H ∂u 1 = λ 1 -ab h u 6 H 0 -w 1 + λ 2 ab h u 6 H 0 ∂H ∂u 2 = 1 + λ 2 (-γ h -δ h ) + λ 3 γ h + λ 4 δ h -λ 5 ab m u 5 H 0 + λ 6 ab m u 5 H 0 ∂H ∂u 3 = λ 2 a b h u 6 H 0 + λ 3 -a b h u 6 H 0 -w 3 ∂H ∂u 4 = 0 ∂H ∂u 5 = λ 5 -ab m u 2 H 0 + ∂g ∂u 5 -λ 5 (µ m + w m ) + λ 6 ab m u 2 H 0 ∂H ∂u 6 = -λ 1 ab h u 1 H 0 + λ 2 ab h u 1 + a b h u 3 H 0 -λ 3 a b h u 3 H 0 + λ 5 ∂g ∂u 6 -λ 6 (µ m + w m ) .
Then the adjoint system is defined by
dλ i dt = -∂H ∂u i for i = 1, 2, • • • , 6.
Theorem 4.4.3. The optimal control variables are given by w * 1 (t) = max 0, min
λ 1 u 1 A 1 , w H w * 3 (t) = max 0, min λ 3 u 3 A 3 , w H w * m (t) = max 0, min λ 5 u 5 + λ 6 u 6 A m , w M
Proof. By the Pontryagin maximum principle, the optimal control w * minimizes, at each instant t, the Hamiltonian given by (4.14). We have ∂H ∂w j = 0, for all j = 1, 3, m at w j = w * j .
Therefore, we get
∂H ∂w 1 = A 1 w 1 -λ 1 u 1 , ∂H ∂w 3 = A 3 w 3 -λ 3 u 3 , ∂H ∂w m = A m w m -λ 5 u 5 -λ 6 u 6 ,
and
w 1 = λ 1 u 1 A 1 , w 3 = λ 3 u 3 A 3 , w m = λ 5 u 5 + λ 6 u 6 A m .
Numerical Simulation of Optimal Control
Numerical simulations show the difference in minimizing the infected human during the dengue outbreak between the three methods: vaccination, vector control, and the combination of the vaccination and vector control. The parameters are presented in Table 4.1, which is taken from [START_REF] Bakach | A survey of mathematical models of dengue fever[END_REF]. Notice that α m < µ m , by Theorem 4.3.8 the global stability corresponds to E CombEnto,1 . Here E CombEnto,2 is biologically not meaningful in this situation. The author estimated some from Indonesia, where environmental conditions are similar to the Philippines.
Symbol
The control weights A 1 and A 3 are the efforts in vaccinating the human population.
In contrast, the control weights A m is the effort to eliminate the mosquito population by means of administering insecticides. Since primary susceptible humans are readily available in the population compared to the secondary susceptible humans, the efforts used in vaccinating them would be less than the effort exerted in vaccinating the secondary susceptible. Thus, A 3 is set higher than A 1 . While insecticide administration in susceptible mosquitoes and infected mosquitoes uses the same effort and achieves a similar result. Hence, we initially set the control weights as
A 1 = 0.1, A 3 = 1 and A m = 1.
Note that the values of A 1 , A 3 , A m do not change the convergence of optimal control A finite difference scheme is used to numerically solve direct and the adjoint system of ordinary differential equations. More precisely, an explicit correction Adams-Bashford and implicit correction Adams-Moulton of order 2 is written in python.
The optimality of the system is numerically solved using Algorithm 1 with ϵ = 0.01.
Algorithm 1 Computation of optimal control of dengue-dengvaxia
Given U 0 = (10 4 , 0, 0, 0, 10 5 , 10 3 ) as initial datum , a final time T > 0 and a tolerance ε > 0. Let w 0 1 , w 0 2 , w 0 m randomly chosen following N (0, 1). while ||∇H(w n , U n , λ n )|| > ε do, solve the forward system u n , solve the backward system λ n , update w n solve the gradient ∇H(w n , U n , λ n ) w * = w n . A.
Responses comparison for infected humans
B. Note that in the human compartment, the total immunity to dengue by means of implementing the Dengvaxia vaccine is denoted by T h which is given by T h (t) = w 1 (t)u 1 (t) + w 3 (t)u 3 (t). Thus, a healthy human combines immune human T h , primary susceptible u 1 , secondary susceptible u 3 , and the recovered u 4 humans. The figure shows that the combination of vaccination and vector control is the best method in maximizing the healthy human population. It only takes 26 days to combine vaccination and vector control methods to reach the equilibrium of healthy humans. Its minimum population is 8.73% (8,734) on day 1.8.
In contrast, there is no significant difference in the vector and vaccination method alone in the healthy human population. The vector method takes 29 days to reach its equilibrium with 8.02% (8,021) minimum population on 2.8 days. The vaccination required 39 days to reach an equilibrium with 8.16% (8,157) minimum population on 2.5 days. Without any control strategies applied to the healthy human population, it requires a much higher time to reach its equilibrium with 4% (4,000) minimum population.
For the recovered human compartment, the figure shows that the human population would eventually recover through time without control strategies applied to the variables. It supports that dengue infection lasts only three to seven days following the infectious mosquito bite, and a spontaneous, full health recovery follows.
However, comparing the three control methods, the combination of vaccination and vector control methods stands out. It only requires 26 days to reach its equilibrium at 0.79% (787) recovered human population. At the same time, there is no significant difference between vaccination alone and vector control alone. Both require 32 days to reach its equilibrium at 1.59% (1,590) and 1.65% (1,646) recovered human, respectively. Now, minimizing the susceptible mosquito population, no control applied to the variables is better than vaccination. It decreases faster with 0.56% (556) minimum susceptible mosquito population while vaccination decreases slower with 3.69% (3,692) minimum population at the end of time. Nevertheless, the vector control method and the combination of vaccination and vector control are the better methods for controlling the mosquito population. They annihilate the susceptible mosquito population.
Minimizing the infected mosquito, either vector control alone or combining vaccination and vector control is the best method. There is no significant difference between the two. They both require minimum time for the infected mosquito to reach zero population and with only 2.6% (2596) and 2.52% (2521) maximum population for the vector control only and the combination, respectively. However, vaccination is better compared to the one without control. The infected mosquito has a 58.09% Chapter 5
A Model of Dengue accounting for the Life Cycle
In this chapter, we introduced a ROSS-type model of dengue accounting for mosquitoes' life cycle. The qualitative study of this model was also discussed, as the identifiability of the parameters involved. An optimal control strategy using copepods and pesticides was added to the model. The numerical simulations of the optimal control use the effectivity of Mesocyclops aspericornis, a Philippines specie of copepod, to eliminate larvae and thermal fogging to eliminate adult mosquitoes.
Life Cycle of Mosquitoes
Mosquitoes have a complex life cycle. They change their shape and habitat as they develop. Only female adult mosquitoes lay eggs. They lay their eggs which stick like glue, above the water line on the inner walls of containers that are or will be filled with water. This oviposition site includes the wall of a cavity, such as a hollow stump, or a container, such as a bucket or a discarded vehicle tire. Only a tiny amount of water is needed to lay eggs. The egg hatch into larvae when water inundates the eggs by any means, such as rain or filling water by people. But mosquito eggs can survive drying out for up to 8 months or even in winter in the southern United States [20]. When that happens, they have to withstand considerable desiccation before that hatch [START_REF] Schmidt | Effects of desiccation stress on adult female longevity in Aedes aegypti and Ae. albopictus (Diptera: Culicidae): results of a systematic review and pooled survival analysis[END_REF].
Once they achieve a suitable desiccation level, they can enter diapause for several months. Aedes eggs in diapause tend to hatch irregularly over an extended period of time.
Larvae live in water, and they feed on microorganisms and particulate organic matter. They develop through four stages, or instars. In the first to fourth instar, the larvae molt, shedding their skins to allow for further growth. On the fourth instar, when the larva in fully grown, they metamorphose into a new form called pupae.
Pupa still lives in water but they do not feed. After two days, they fully developed into adult mosquito forms and breaks through the skin of the pupa. Adult mosquito is no longer aquatic, it has a terrestrial habitat and is able to fly. This entire life cycle of mosquito last for eight to ten days at room temperature, depending on the level of feeding.
Dengue viruses are spread to people through the bites of infected Aedes species mosquitoes (Ae. aegypti or Ae. albopictus). These are the same types of mosquitoes that spread Zika and chikungunya viruses [START_REF]Dengue transmission[END_REF].
Dengue can also spread from mother to child. A pregnant woman already infected with dengue can pass the virus to her fetus during pregnancy or around the time of birth.
Rarely, dengue can also be transmitted through infected blood, laboratory, or healthcare setting exposures, i.e., through blood transfusion, organ transplant, or through a needle stick injury.
Description of the model
In this section, we presented a new model that involves the mosquitoes aquatic stage. Based on the Ross-type model, we assumed that dengue viruses are virulent and no other microorganism attacking the body and it is not transmitted from mother to child. this study, we assumed that adult mosquito cannot pass the virus to its eggs. That is, we assume that eggs reproduce by either susceptible or infected mosquito is not a genetic carrier of dengue virus. α m represents mosquitoes birth rate through egg production. Thus, α m S m and α m I m represents the rate of egg laying of susceptible mosquitoes and infected mosquitoes, respectively. As mosquito evolved from one life stage to another, we use the parameter γ i,j as the conversation between state variables. µ i represents the death rate in each state variables. With total population of M Y = E + L + P, the dynamics of the metamorphosis of young mosquito is govern by the equation below.
𝑺 𝒎 𝑰 𝒎 𝑺 𝒉 𝑰 𝒉 𝜇 ! 𝜇 " 𝜇 # 𝜇 $ 𝛼 % 𝛼 % 𝜇 ! 𝛾 ",$ 𝛾 $,# 𝛾 !," 𝛾 #,'! larva puppa 𝑎𝑏 % 𝐼 % 𝐻 𝑎𝑏 ( 𝐼 % 𝐻 𝑹 𝒉 𝜎 ( 𝛼 ( 𝜇 ( 𝜇 ( 𝜇 ( 𝛾 (
E ′ (t) = α m (S m (t) + I m (t)) -γ E,L E(t) -µ E E(t) (5.1)
L ′ (t) = γ E,L E(t) -γ L,P L(t) -µ L L(t) (5.2)
P ′ (t) = γ L,P L(t)γ P,S m P(t)µ P P(t).
(
When pupa fully developed it breaks through its skin and become an adult mosquito. Adult mosquito is longer aquatic and is able to fly. We denote γ P,S m Pe -β m P as the transition rate of pupae to adult mosquito. The pupal competition is translated by e -β m P meaning that the death rate increases with the pupae density P [START_REF] Cailly | Climatedriven abundance model to assess mosquito control strategies[END_REF]. In this study, we assume that all emerging adults are susceptible. ab m I h represents the probability of susceptible mosquito to be infectious once it bites an infected humans.
The parameter a represents the average mosquito bites and b m is the transmission probability from infected humans to susceptible mosquito. With total population of M A = S m + I m , the dynamics of the interaction of adult mosquito is govern by the equation below.
S ′ m (t) = γ P,S m P(t)e -β m P(t) -µ A S m (t) -ab m I h (t)S m (t) (5.4)
I ′ m (t) = ab m I h (t)S m (t) -µ A I m (t) (5.5)
Since humans have a meager mortality rate compared to mosquitoes, we neglect the human death rate. Let H be the human population subdivided into susceptible S h , infected I h and recovered R h humans. The dynamics of human population is given by
S ′ h (t) = γ h R h (t) -ab h I m (t)S h (t) (5.6)
I ′ h (t) = ab h I m (t)S h (t) -σ h I h (t) (5.7)
R ′ h (t) = σ h I h (t) -γ h R h (t) (5.8)
In this chapter, susceptible humans represents both primary and secondary susceptible. Thus as humans recovered (γ h R h is the recovery rate of humans from dengue infection) from one, two or three types of dengue virus, it goes back to being susceptible to the other type of the virus. Since an individual being infected by all types of dengue virus is a rare case, we neglect the total immunity. The probability of susceptible humans to be infected with dengue is given by ab h I m , where b h represents the probability of transmission of virus from infected mosquito to susceptible humans.
Qualitative Study of the Model
Let U(t) = (E(t), L(t), P(t), S m (t), I m (t), S h (t), I h (t), R h (t)) T . Then the system above can be rewritten in compact form as
U ′ (t) = f (t, U(t)), (5.9)
where
f (t, U) = α m (S m + I m ) -γ E,L E -µ E E γ E,L E -γ L,P L -µ L L γ L,P L -γ P,S m P -µ P P γ P,S m Pe -β m P -µ A S m -ab m I h S m ab m I h S m -µ A I m γ h R h -ab h I m S h ab h I m S h -σ h I h σ h I h -γ h R h
(5.10)
Well-posedness and Positivity of the Solution
Lemma 5.3.1. Let (E(0), L(0), P(0), S m (0),
I m (0), S h (0), I h (0), R h (0)) be a nonnegative initial datum with H(0) = S h (0) + I h (0) + R h (0) > 0, M A = S m (0) + I m (0) > 0 and M Y (0) = E(0) + L(0) + P(0) > 0.
Then there exist a time T > 0 and a unique solution
(E, L, P, S m , I m , S h , I h , R h ) in C ([0, T], R) 8 .
Proof. Consider the initial value problem Proof. Let U = (E, L, P, S m , I m , S h , I h , R h ) ∈ R 8 + be the solution of the system of equation (5.10). In proving for positivity, we assume that the parameters are positive for all time t > 0. We have
U ′ (t) = f (t, U(t)) where U(0) = U 0 . The function f (t, U(t)) is C 1 on [0, T]
f 1 (E = 0, L, P, S m , I m , S h , I h , R h ) = α m (S m + I m ) -γ E,L E -µ E E = α m (S m + I m ) ≤ 0 ∀L, P, S m , I m , S h , I h , R h ≥ 0 f 2 (E, L = 0, P, S m , I m , S h , I h , R h ) = γ E,L E -γ L,P L -µ L L = γ E,L E > 0 ∀E, P, S m , I m , S h , I h , R h ≥ 0 f 3 (E, L, P = 0, S m , I m , S h , I h , R h ) = γ L,P L -γ P,S m P -µ P P = γ L,P L > 0 ∀E, L, S m , I m , S h , I h , R h ≥ 0 f 4 (E, L, P, S m = 0, I m , S h , I h , R h ) = γ P,S m Pe -β m P -µ A S m -ab m I h S m = γ P,S m Pe -β m P > 0 ∀E, L, P, I m , S h , I h , R h ≥ 0 f 5 (E, L, P, S m , I m = 0, S h , I h , R h ) = ab m I h S m -µ A I m = ab m I h S m > 0 ∀E, L, P, S m , S h , I h , R h ≥ 0 f 6 (E, L, P, S m , I m , S h = 0, I h , R h ) = γ h R h -ab h I m S h = γ h R h > 0 ∀E, L, P, S m , I m , I h , R h ≥ 0 f 7 (E, L, P, S m , I m , S h , I h = 0, R h ) = ab h I m S h -σ h I h = ab h I m S h > 0 ∀E, L, P, S m , I m , S h , R h ≥ 0 f 8 (E, L, P, S m , I m , S h , I h , R h = 0) = σ h I h -γ h R h = σ h I h > 0 ∀E, L, P, S m , I m , S h , I h ≥ 0
Hence, we have shown that for all time t > 0, the solution E, L, P, S m , I m , S h , I h and R h remains nonnegative.
Since we consider a constant human population H ′ (t) = 0, then
H(t) = H 0 = constant
and the human components, being nonnegative, are bounded by H 0 . Now, note that mosquito population is the combination of young and adult mosquito. Let µ m = min (µ E , µ L , µ P , µ A ), then
M ′ = (E + L + P + S m + I m ) ′ = α m (S m + I m ) -µ E E -µ L L -µ P P -µ A (S m + I m ) -γ P,S m P(1 -e -β m P ) ≤ (α m -µ m )M Therefore, from Grönwall's Lemma, M(t) ≤ e (α m -µ m )t M 0 . If α m ≤ µ m , then α m -µ m ≤ 0 and M(t) = e (α m -µ m )t M 0 ≤ M 0 .
On the other hand, if α mµ m > 0 and M(t) ≤ e (α m -µ m )t M 0 , which is finite for all finite time t and infinite only when t = +∞. Let (E(0), L(0), P(0), S m (0), I m (0), S h (0), I h (0), R h (0)) be in Ω. Then there exists a unique global in time solution (E, L, P, S m , I m , S h , I h , R h ) in C(R + , Ω) 8 .
Equilibrium of the Model
Let (E * , L * , P * , S * m , I * m , S * h , I * h , R * h )
be an equilibrium point of the system of equation (5.10). Then solving the system of equation below
α m (S m + I m ) -γ E,L E -µ E E = 0 (5.11) γ E,L E -γ L,P L -µ L L = 0 (5.12)
γ L,P Lγ P,S m Pµ P P = 0 (5.13)
γ P,S m Pe -β m P -µ A S m -ab m I h S m = 0 (5.14)
ab m I h S m -µ A I m = 0 (5.15) γ h R h -ab h I m S h = 0 (5.16
)
ab h I m S h -σ h I h = 0 (5.17
)
σ h I h -γ h R h = 0 (5.18)
would give us the equilibrium points E DFE = (E * , L * , P * , S * m , 0, S * h , 0, 0) and
E EE = (E * , L * , P * , S * m , I * m , S * h , I * h , R * h ).
To show this, consider the lemma below.
Lemma 5.3.4. The system of equation (5.10) admits a positive disease free equilibrium E DFE = (E * , L * , P * , S * m , 0, S * h , 0, 0) and an endemic equilibrium
E EE = (E * , L * , P * , S * m , I * m , S * h , I * h , R * h ).
Proof. Suppose I h ̸ = 0. Expressing each equation in the system in terms of P, I m and I h we can have from equation (5.13),
L = γ P,S m + µ P γ L,P P. (5.19)
From equation (5.12), solving for E we get
γ E,L E = (γ L,P + µ L )L E = γ L,P + µ L γ E,L L
Substituting equation (5.19) to the equation above would give us
E = (γ L,P + µ L )(γ P,S m + µ P ) γ E,L γ L,P P (5.20)
Now, adding equation (5.14) and (5.15) we get
γ P,S m Pe -β m P -µ A S m -µ A I m = 0 µ A (S m + I m ) = γ P,S m Pe -β m P S m + I m = γ P,S m Pe -β m P µ A
From equation (5.11), solving for S m + I m we have
α m (S m + I m ) = (γ E,L + µ E )E S m + I m = (γ E,L + µ E )E α m
Now, equating the two equations above and substituting equation (5.20) to E, we have
γ P,S m Pe -β m P µ A = (γ E,L + µ E ) • (γ L,P +µ L )(γ P,Sm +µ P ) γ E,L γ L,P P α m γ P,S m Pe -β m P µ A = (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P )P α m γ E,L γ L,P e -β m P = µ A (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) α m γ E,L γ L,P γ P,S m e β m P = α m γ E,L γ L,P γ P,S m µ A (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P )
Applying ln to both sides of the equation would give us
β m P = ln α m γ E,L γ L,P γ P,S m µ A (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) Therefore, P = 1 β m ln α m γ E,L γ L,P γ P,S m µ A (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) (5.21)
Solving S m from equation (5.14), we get
S m = γ P,S m Pe -β m P µ A + ab m I h .
From equation (5.15), solving for I m , we have
I m = ab m I h S m µ A = ab m I h • γ P,Sm Pe -βm P µ A +ab m I h µ A
Thus,
I m = ab m I h γ P,S m Pe -β m P µ A (µ A + ab m I h ) (5.22)
Solving R h from equation (5.18), we can get
R h = σ h I h γ h . ( 5.23)
From equation (5.17), solving for S h we have
S h = σ h I h ab h I m (5.24)
Therefore the endemic equilibrium of the system of equation (5.10) is
E EE = (E * , L * , P * , S * m , I * m , S * h , I * h , R * h )
where
E * = (γ L,P + µ L )(γ P,S m + µ P ) γ E,L γ L,P P * L * = γ P,S m + µ P γ L,P P * P * = 1 β m ln α m γ E,L γ L,P γ P,S m µ A (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) S * m = γ P,S m P * e -β m P * µ A + ab m I * h I * m = ab m I * h γ P,S m P * e -β m P * µ A (µ A + ab m I * h ) S * h = σ h I * h ab h I * m I * h = I * h R * h = σ h I * h γ h . Now if I * h = 0 then I * m = R * h = 0 and S * h = H. Hence the disease free equilibrium is E DFE = (E * , L * , P * , S * m , 0, H, 0, 0)
, where
E * = (γ L,P + µ L )(γ P,S m + µ P ) γ E,L γ L,P P * L * = γ P,S m + µ P γ L,P P * P * = 1 β m ln α m γ E,L γ L,P γ P,S m µ A (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) S * m = γ P,S m P * e -β m P * µ A
Next Generation Matrix and Basic Reproduction Number
In this section, will obtain the basic reproduction number using the next generation matrix.
Since the infected individuals are in I h and I m , if F is the rate of appearance of new infections in each compartment and V is the rate of other transitions between all compartments, then we can have
F = ab m I h S m ab h I m S h and V = µ A I m σ h I h
where
F = 0 ab m S m ab h S h 0 and V = µ A 0 0 σ h .
Thus, solving for FV -1 , we have
FV -1 = 0 ab m S m ab h S h 0 1 µ A 0 0 1 σ h = 0 ab m S m σ h ab h S h µ A 0 .
Hence, solving the determinant of the characteristic polynomial det FV -1 -λI , we have
det FV -1 -λI = -λ ab m S m σ h ab h S h µ A -λ = λ 2 - a 2 b h b m S m S h µ A σ h λ = ± a 2 b h b m S m S h µ A σ h .
Therefore, the basic reproduction is
R 0 := a 2 b h b m S * m S * h µ A σ h = a 2 b h b m H(γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) ln α m γ E,L γ L,P γ P,Sm µ A (γ E,L +µ E )(γ L,P +µ L )(γ P,Sm +µ P ) µ A σ h α m β m γ E,L γ L,P .
(5.25)
We will use now Jacobian matrix to get more details about the stability.
Jacobian Matrix
Computing for the partial derivative
∂ f i
∂U , for each U = E, L, P, S m , I m , S h , I h , R h , and i = 1, 2, . . . , 8, we have J(U) equal to
-γ E,L -µ E 0 0 α m α m 0 0 γ E,L -γ L,P -µ L 0 0 0 0 0 0 γ L,P -γ P,S m -µ P 0 0 0 0 0 0 γ P,S m e -β m P (1 -β m P) -µ A -ab m I h 0 0 -ab m S m 0 0 0 ab m I h -µ A 0 ab m S m 0 0 0 0 -ab h S h -ab h I m 0 γ h 0 0 0 0 ab h S h ab h I m -σ h 0 0 0 0 0 0 σ h -γ h
Now let us verify the stability of the equilibrium point using Jacobian matrix.
Note that
N Y := α m γ E,L γ L,P γ P,S m µ A (γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) R 0 := a 2 b h b m H(γ E,L + µ E )(γ L,P + µ L )(γ P,S m + µ P ) ln (N Y ) µ A σ h α m β m γ E,L γ L,P . Lemma 5.3.5. Assume N Y > 1 and R 0 < 1.
Then the disease free equilibrium E DFE = (E * , L * , P * , S * m , 0, H, 0, 0) of the system of equation (5.10) is locally asymptotically stable.
Proof. To simplify the writing, we denote
R Y = 1 + µ E γ E,L 1 + µ L γ L,P
1 + µ P γ P,Sm . The Jacobian matrix above for E DFE can be deduce to
J(E DFE ) = -γ E,L -µ E 0 0 αm αm 0 0 0 γ E,L -γ L,P -µ L 0 0 0 0 0 0 0 γ L,P -γ P,Sm -µ P 0 0 0 0 0 0 0 µ A γ P,Sm R Y αm 1 -ln αm µ A R Y -µ A 0 0 -abm γ P,Sm R Y αm βm ln αm µ A R Y 0 0 0 0 0 -µ A 0 -abm γ P,Sm R Y αm βm ln αm µ A R Y 0 0 0 0 0 -ab h H 0 0 γ h 0 0 0 0 ab h H 0 -σ h 0 0 0 0 0 0 0 σ h -γ h Let |J(E DFE ) -λI 8 | = 0. Then we can have -γ E,L -µ E -λ 0 0 αm αm 0 0 0 γ E,L -γ L,P -µ L -λ 0 0 0 0 0 0 0 γ L,P -γ P,Sm -µ P -λ 0 0 0 0 0 0 0 µ A γ P,Sm R Y αm 1 -ln αm µ A R Y -µ A -λ 0 0 -abm γ P,Sm R Y αm βm ln αm µ A R Y 0 0 0 0 0 -µ A -λ 0 abm γ P,Sm R Y αm βm ln αm µ A R Y 0 0 0 0 0 -ab h H -λ 0 γ h 0 0 0 0 ab h H 0 -σ h -λ 0 0 0 0 0 0 0 σ h -γ h -λ = 0
Observe that there is no sign change in S(λ) and T(λ) if R 0 < 1. By Descartes' rule of sign [START_REF] Anderson | Descartes' rule of signs revisited[END_REF], the polynomials S and T has 0 positive roots.
For S(-λ) we have
S(-λ) = (-λ) 2 + (σ h + µ A )(-λ) + σ h µ A (1 -R 2 0 ) = λ 2 -(σ h + µ A )λ + σ h µ A (1 -R 2 0 )
Thus, there are 2 sign change in S(-λ). Implying further that the polynomial S has 2 Doing the same process for T, we can conclude that all eigenvalues of the character-
istics polynomial 0 = -λ • (-λ -γ h ) • S • T are negative.
Consequently, the disease free equilibrium E DFE is locally asymptotically stable.
Parameter Identifiability
The dynamic system given by equation (5.9) is identifiable if θ can be uniquely determined from the measurable system output Y(t); otherwise, it is said to be unidentifiable.
Definition 5.3.6. [START_REF] Miao | On identifiability of nonlinear ode models and applications in viral dynamics[END_REF] A system structure is said to be globally identifiable if for any two parameter vectors θ 1 and θ 2 in the parameter space Θ, Y(U,
θ 1 ) = Y(U, θ 2 ) holds if and only if θ 1 = θ 2 .
Now let us determine the global identifiability of the parameters using the study proposed by Denis-Vidal and Joly-Blanchard [START_REF] Denis-Vidal | Some effective approaches to check the identifiability of uncontrolled nonlinear systems[END_REF].
We choose
Y = (E, L, P, S m , I m , S h , I h ). From f (U, θ 1 ) = f (U, θ 2 ), we have α m,1 (S m + I m ) -(γ {E,L},1 + µ E,1 )E = α m,2 (S m + I m ) -(γ {E,L},2 + µ E,2 )E (5.26) γ {E,L},1 E -γ {L,P},1 L -µ L,1 L = γ {E,L},2 E -γ {L,P},2 L -µ L,2 L (5.27) γ {L,P},1 L -γ {P,S m },1 P -µ P,1 P = γ {L,P},2 L -γ {P,S m },2 P -µ P,2 P (5.28) γ {P,S m },1 Pe -β m,1 P -µ A,1 S m -a 1 b m,1 I h S m = γ {P,S m },2 Pe -β m,2 P -µ A,2 S m -a 2 b m,2 I h S m (5.29) a 1 b m,1 I h S m -µ A,1 I m = a 2 b m,2 I h S m -µ A,2 I m (5.30) γ h,1 R h -a 1 b h,1 I m S h = γ h,2 R h -a 2 b h,2 I m S h (5.31) a 1 b h,1 I m S h -σ h,1 I h = a 2 b h,2 I m S h -σ h,2 I h (5.32) σ h,1 I h -γ h,1 R h = σ h,2 I h -γ h,2 R h (5.33)
Now solving each equation above, we can solve the identifiability of each parameters.
• For α m : Using equation (5.26), we can imply that α m,1 (S m + I m ) = α m,2 (S m + I m ). Thus α m,1 = α m,2 , so the parameters α m and β m are identifiable.
• For γ E,L : Using equation (5.27), we can imply that γ {E,L},1 E = γ {E,L},2 E, implying further that the parameter γ E,L is identifiable.
• For γ L,P : Using equation (5.28), we can imply that γ {L,P},1 L = γ {L,P},2 L, implying further that the parameter γ L,P is identifiable.
• For γ P,S m and β m : Using equation (5.29), we can imply that γ {P,S m },1 Pe -β m,1 P = γ {P,S m },2 Pe -β m,2 P , implying further that the parameter γ P,S m is identifiable.
• For µ E : Using equation (5.26), we have
γ {E,L},1 γ {E,L},2 = µ E,1 µ E,2 . Thus, µ E is unidentifiable but the sum (γ E,L + µ E ) is identifiable.
• For µ L : Using equation (5.27), we have
γ {L,P},1 γ {L,P},2 = µ L,1
µ L,2 . Thus, µ L is unidentifiable. However, since γ L,P is identifiable, the sum (γ L,P + µ L ) is identifiable.
• For µ P : Using equation (5.28), we have
γ {P,Sm },1 γ {P,Sm },2 = µ P,1
µ P,2 . Thus, µ P is unidentifiable. However, since γ P,S m is identifiable, the sum (γ P,S m + µ P ) is identifiable.
• For µ A : Using equation (5.30), we can imply that µ A,1 I m = µ A,2 I m . Thus the parameter µ A is identifiable.
• For γ h : Using equation (5.31), we can imply that γ h,1 R h = γ h,2 R h . Thus the parameter γ h is identifiable.
• For σ h : Using equation (5.32), we can imply that σ h,1 I h = σ h,2 I h . Thus the parameter σ h is identifiable.
• For ab m : Using again equation (5.30), we can imply that a 1 b m,1 I h S m = a 2 b m,2 I h S m . Thus we can imply further that
a 1 a 2 = b m,1 b m,2
Thus the parameters a and b m are unidentifiable. However, the product ab m is identifiable.
• For ab h : Using again equation (5.32), we can imply that a
1 b h,1 I m S h = a 2 b h,2 I m S h .
Thus we can imply further that
a 1 a 2 = b h,1 b h,2
Thus the parameters a and b h are unidentifiable. However, the product ab m is identifiable.
From this result we have the following theorem. Theorem 5.3.7. The parameters (α m , γ E,L , γ L,P , γ P,S m , β m , µ E , µ L , µ P , µ A , γ h , σ h , ab m , ab h ) are globally identifiable but the rest is not.
Optimal Control strategies : Copepods and Pesticides
Our aim in this section to minimize the number of infected humans by controlling the vector population. We attribute two control inputs, w Y for the percentage of young mosquitoes exposed to copepods and w A for the percentage of adult mosquitoes exposed to pesticides. According to mosquitoesreviews.com, Copepods are natural enemies of the first and second instar (the smallest sizes) of mosquito larvae. Also according to [START_REF] Dapinder | Mosquito larvae specific predation by native cyclopoid copepod species, mesocylops aspericornis[END_REF], large sized cyclopoid copepods (having body size greater than 1.0 mm) act as predators of mosquito larvae which strongly influence the mosquito larval population. Furthermore, we assume that both control inputs are mesureable continuous functions that takes its values in a positively bounded set W = [0, w Y,max ] × [0, w A,max ]. Thus we consider the objective function
J (w Y , w A ) = T 0 I h (t) + 1 2 A Y w 2 Y (t) + 1 2 A A w 2 A (t) dt subject to E ′ (t) = α m (S m (t) + I m (t)) -γ E,L E(t) -µ E E(t) L ′ (t) = γ E,L E(t) -γ L,P L(t) -µ L L(t) -w Y L(t) P ′ (t) = γ L,P L(t) -γ P,S m P(t) -µ P P(t) S ′ m (t) = γ P,S m P(t)e -β m P(t) -µ A S m (t) -ab m I h (t)S m (t) -w A S m (t) I ′ m (t) = ab m I h (t)S m (t) -µ A I m (t) -w A I m (t) S ′ h (t) = γ h R h (t) -ab h I m (t)S h (t) I ′ h (t) = ab h I m (t)S h (t) -σ h I h (t) R ′ h (t) = σ h I h (t) -γ h R h (t) (5.34)
for t ∈ [0, T], with 0 ≤ w Y , w A ≤ w M . The variables A Y , A A are the positive weights associated with the control variables w Y and w A , respectively. They corresponds to the efforts rendered in exposing the larvae L and the adult mosquitoes S m , I m compartments.
Lemma 5.4.1. There exists an optimal control w
* = (w * Y (t), w * A (t)) such that J (w * Y , w * A ) = min w∈W J (w Y , w A )
under the constraint (E, L, P, S m , I m , S h , I h , R h ) is a solution to the ordinary differential equation (5.34).
Proof. This lemma can be proven using the similar arguments as Lemma 3.5.1.
Lemma 5.4.2.
There exists the adjoint variables λ i , i = 1, 2, • • • , 6 of the system (5.34) that satisfy the following backward in time system of ordinary differential equation.
-
dλ 1 dt = -(λ 1 -λ 2 )γ E,L -λ 1 µ E - dλ 2 dt = -(λ 2 -λ 3 )γ L,P -λ 2 (µ L + w Y ) - dλ 3 dt = -(λ 3 -λ 4 (1 -β m P)e -β m P )γ P,S m -λ 3 µ P - dλ 4 dt = λ 1 α m -λ 4 (µ A + w A ) -(λ 4 -λ 5 )ab m I h - dλ 5 dt = λ 1 α m -λ 5 (µ A + w A ) -(λ 6 -λ 7 )ab m S h - dλ 6 dt = -(λ 6 -λ 7 )ab h I m - dλ 7 dt = 1 -(λ 4 -λ 5 )ab m S m -(λ 7 -λ 8 )σ h - dλ 8 dt = -(λ 8 -λ 6 )γ h (5.35)
with the transversality condition λ(T) = 0.
Proof. Using the Hamiltonian for system (5.34), we have
H =L(w Y , w A ) + λ 1 (t)E ′ (t) + λ 2 (t)L ′ (t) + λ 3 (t)P ′ (t) + λ 4 (t)S ′ m (t) + λ 5 (t)I ′ m (t) + λ 6 (t)S ′ h (t) + λ 7 (t)I ′ h (t) + λ 8 (t)R ′ h (t) =I h + 1 2 A Y w 2 Y + 1 2 A A w 2 A + λ 1 (α m (S m + I m ) -γ E,L E -µ E E) + λ 2 (γ E,L E -γ L,P L -µ L L -w Y L) + λ 3 (γ L,P L -γ P,S m P -µ P P) + λ 4 γ P,S m Pe -β m P -µ A S m -ab m I h S m -w A S m + λ 5 (ab m I h S m -µ A I m -w A I m ) + λ 6 (γ h R h -ab h I m S h ) + λ 7 (ab h I m S h -σ h I h ) + λ 8 (σ h I h -γ h R h ) (5.36)
To prove this, we determine the partial derivatives of H with respect to each variables then set the adjoint system as
dλ 1 dt = -∂H ∂E , dλ 2 dt = -∂H ∂L , dλ 3 dt = -∂H ∂P , dλ 4 dt = -∂H ∂S m , dλ 5 dt = -∂H ∂I m , dλ 6 dt = -∂H ∂S h , dλ 7 dt = -∂H ∂I h and dλ 8 dt = -∂H ∂R h
. We have the following
dλ 1 dt = (λ 1 -λ 2 )γ E,L + λ 1 µ E dλ 2 dt = (λ 2 -λ 3 )γ L,P + λ 2 (µ L + w Y ) dλ 3 dt = (λ 3 -λ 4 (1 -β m P)e -β m P )γ P,S m + λ 3 µ P dλ 4 dt = -λ 1 α m + λ 4 (µ A + w A ) + (λ 4 -λ 5 )ab m I h dλ 5 dt = -λ 1 α m + λ 5 (µ A + w A ) + (λ 6 -λ 7 )ab m S h dλ 6 dt = (λ 6 -λ 7 )ab h I m dλ 7 dt = -1 + (λ 4 -λ 5 )ab m S m + (λ 7 -λ 8 )σ h dλ 8 dt = (λ 8 -λ 6 )γ h .
Theorem 5.4.3. The optimal control variables are given by w * Y = max 0, min
λ 2 L A Y , w Y,max w * A = max 0, min λ 4 S m + λ 5 I m A A , w A,max .
Proof. By the Pontryagin maximum principle, the optimal control w * minimizes the Hamiltonian given by equation (5.36). Now, setting the partial derivative of H with respect to the control variables to zero, then solving w Y and w A , we get
w * Y = λ 2 L A Y w * A = λ 4 S m + λ 5 I m A A .
Therefore, the optimal control derived from the stationary condition dλ i ∂t is given by
w * Y = 0 if λ 2 L A Y ≤ 0 λ 2 L A Y if λ 2 L A Y < w M w M if λ 2 L A Y ≥ w M w * A = 0 if λ 4 S m +λ 5 I m A A ≤ 0 λ 4 S m +λ 5 I m A A if λ 4 S m +λ 5 I m A A < w M w H if λ 4 S m +λ 5 I m A A ≥ w M .
Numerical Simulation of Optimal Control Strategies: Copepodes vs Pesticides
In this section, we will show a numerical simulations of the optimal control strategy in minimizing infected humans. The optimal control is (w * Y , w * A ) where w Y is the percentage of young mosquitoes exposed to copepodes and w A is the percentage of adult mosquitoes exposed to pesticides.
The control weights A Y is the efforts rendered in exposing young mosquito population to copepodes while the control weights A A is the effort in eliminating adult mosquito population by means of administering insecticides. Since adults mosquitoes is more visible compare to the young mosquitoes, eliminating them would render an effortless job. Thus, A Y is set smaller than A A . Hence, we initially set the control weights as A A = 10 and A Y = 1. Note that the values of A Y and A A does not change the convergence of optimal control.
Parameters Description
Value Source
α m Oviposition 1 day -1 [13] γ E,L
Transformation from egg to larva 0.330000 day Considering a constant growth function for human and mosquito population, the optimality of the system is numerically solved using a gradient method programmed in Python. The algorithm are describe below and the parameters value used are presented in Table 6.2. The optimality of the system is numerically solved using Algorithm 2 with ϵ = 0.01.
Algorithm 2 Computation of optimal control of the model (5.34) Given U 0 = (9.4e7, 5.e4, 1.e4, 94.4e4, 5.6e4, 8768197, 1895, 1878) as initial datum , a final time T > 0 and a tolerance ε > 0. Let w 0 Y , w 0 A randomly chosen following N (0, 1). while ||∇H(w n , U n , λ n )|| > ε do, solve the forward system u n , solve the backward system λ n , update w n solve the gradient ∇H(w n , U n , λ n ) w * = w n . Using the algorithm above with a tolerance of 10 -2 , we get the following results.
with optimal control without control The simulation was done assuming that the Aedes Aegypti does not become resistant to the insecticide and that it is financially possible to apply insecticide at all times. Figure 5.5 shows that setting w Y,max = 23.96, w A,max = 1 takes 26 days and 42 days of continuous application of copepod and pesticide, respectively. It then slowly minimizes the application toward its equilibrium.
Influence of the copepods number
One Mesocyclops aspericornis, a Philippine species of copepod, is capable of eating an average of 23.96 among 50 Aedes aegypti larvae [START_REF] Mejica Panogadia-Reyes | Philippine species of mesocyclops (crustacea: Copepoda) as a biological control agent of aedes aegypti (linnaeus)[END_REF]. In this section, we compare the optimal control by varying the maximum number of copepod N exposed to larvae as w Y,max = (23.96/50)N = 0.4792N. From the figure above, we consider increasing the effort by setting N equal to 20 and 200. With this, we get the figure below.
The simulation was done assuming Mesocyclops aspericornis have no predators in the laying sites. Figure 5.6 shows the influence of increasing the number of copepods N exposed to larvae on the control variables. It shows that the application of the control strategies, both w * Y and w * A , decreases as N increases. The figures show that in N = 2, you need to increase the effort at day one by a hundred percent and then continuously apply copepod and pesticide for 43 days and 52 days, respectively. While with 20 Mesocyclops aspericornis, it decreases to 27 days and 43 days of continuous application of copepod and pesticide, respectively. Since we assume there is no copepod predator in the laying site, applying 200 copepods in the laying site requires only 26 and 41 days of copepod and pesticide to eliminate the mosquito population.
Considering all Control Strategies: Copepods, Pesticides and Vaccination
Now, let us include vaccination in our control strategy. We attribute three control inputs, w Y for the percentage of young mosquitoes exposed to copepods, w A for the percentage of adult mosquitoes exposed to pesticides and w H for the efforts in vaccinating susceptible humans. Furthermore, we assume that both control inputs are mesureable continuous functions that takes its values in a positively bounded set
W = [0, w Y,max ] × [0, w A,max ] × [0, w H,max
]. Thus we consider the objective function Proof. This lemma can be proven using the similar arguments as Lemma 3.5.1.
J (w Y , w A , w H ) = T 0 I h (t) + 1 2 A Y w 2 Y (t) + 1 2 A A w 2 A (t) + 1 2 A H w 2 H (t) dt subject to E ′ (t) = α m (S m (t) + I m (t)) -γ E,L E(t) -µ E E(t) L ′ (t) = γ E,L E(t) -γ L,P L(t) -µ L L(t) -w Y L(
I ′ m (t) = ab m I h (t)S m (t) -µ A I m (t) -w A I m (t) S ′ h (t) = γ h R h (t) -ab h I m (t)S h (t) -w H S h (t) I ′ h (t) = ab h I m (t)S h (t) -σ h I h (t) R ′ h (t) = σ h I h (t) -γ h R h (t)
Lemma 5.4.5.
There exists the adjoint variables λ i , i = 1, 2, • • • , 6 of the system (5.37) that satisfy the following backward in time system of ordinary differential equation.
-
∂λ 1 (t) ∂t = -λ 1 µ E + (λ 2 -λ 1 )γ E,L - ∂λ 2 (t) ∂t = -λ 2 (µ L + w Y ) + (λ 3 -λ 2 )γ
w * Y = λ 2 L A Y w * A = λ 4 S m + λ 5 I m A A w * H = λ 6 S h A H .
Therefore, the optimal control derived from the stationary condition dλ i ∂t is given by
w * Y = 0 if λ 2 L A Y ≤ 0 λ 2 L A Y if λ 2 L A Y < w M w M if λ 2 L A Y ≥ w M w * A = 0 if λ 4 S m +λ 5 I m A A ≤ 0 λ 4 S m +λ 5 I m A A if λ 4 S m +λ 5 I m A A < w M w H if λ 4 S m +λ 5 I m A A ≥ w M w * H = 0 if λ 6 S h A H ≤ 0 λ 6 S h A H if λ 6 S h A H < w H w H if λ 6 S h A H ≥ w H
Now let us incorporate vaccination into our control strategy and compare it with its different combinations. To perform our simulation, we choose the upper bound in our optimal control, reflecting most of the Philippines' conditions. Herein, w Y,max is set to 23.96, corresponding to the average number of larvae eaten by 50 Mesocyclops aspericornis copepods [START_REF] Mejica Panogadia-Reyes | Philippine species of mesocyclops (crustacea: Copepoda) as a biological control agent of aedes aegypti (linnaeus)[END_REF]. In the Philippines, thermal fogging is the main way to apply pesticides. It is conducted using a PULSFOG ™ machine loaded with a pyrethroid insecticide. Using the study by Mistica, M.S et al. [START_REF] Mistica | Dengue mosquito ovitrapping and preventive fogging trials in the philippines[END_REF], wherein they used the water-based pyrethroid called Aqua-Resigen ® , we use the efficacy they evaluated as w A,max = 0.65. Finally, the dengvaxia efficacy provides w H,max = 0.8 [START_REF] Hadinegoro | Efficacy and long-term safety of a dengue vaccine in regions of endemic disease[END_REF][START_REF] Sridhar | Effect of dengue serostatus on dengue vaccine safety and efficacy[END_REF]. Using Algorithm 2 with ϵ = 0.01, we numerically solved the optimality of the system. In doing so, we get the graph in Figure 5. Copepod and vaccination methods take only two days for a copepod applicant. In contrast, combining pesticide and vaccination methods is not a good strategy since they must be constantly applied until the end.
Influence of the starting date of control
In this section, we will determine the influence of the optimal control not starting at day zero. We consider three dates, day 40 or the day during the growth; day 64 at the peak; and day 150 at the endemic equilibrium.
starting day 40 starting day 64 starting day 150 Figure 5.12 shows the behavior of each optimal control variable influenced by the different starting dates of the control inputs. It shows that the later the starting date of the control inputs the longer the days of application.
Comparison with larval and pupal competition
In the above section, we considered the pupal competition only. Now the transition rate of pupae to adult mosquito is given by γ P,S m Pe -β m (P+L) , meaning that the death rate increases with the pupae density P and the larvae density L. Equation (5.4) becomes S ′ m (t) = γ P,S m P(t)e -β m (P(t)+L(t))µ A S m (t)ab m I h (t)S m (t).
(5.38)
The system remains globally well-posed and its positive disease free equilibrium E DFE = (E * , L * , P * , S * m , 0, S * h , 0, 0) is written as pupae and larvae pupae only Chapter 6
E * = (γ L,P + µ L )(γ P,S m + µ P ) γ E,
A Model of Dengue accounting for the Spatial Distribution
There is no gray host for the dengue virus. It is circulating between humans and mosquitoes. Thus, the mosquitoes' spatial distribution highly affects the disease's epidemiology. In this chapter, we introduce a dengue mathematical model that considers adult mosquitoes' spatial distribution.
Spatial analysis is a study that entails using topological, geometric, or geographic properties of a subject. The existence of information regarding the spatial spread of dengue is a crucial ingredient in controlling the spread of the disease. It is important to study because each area has different characteristics, such as land surface elevation, soil type, population density, and many more [START_REF] Nur Afrida Rosvita | Spatial-temporal distribution of dengue in banjarmasin, indonesia from 2016 to 2020[END_REF]. A geographic distribution map is handy for empirically studying the relationship between geography and disease and is helpful in the implementation of intervention plans. This chapter assumes that only adult mosquitoes move while humans are immobile. Mosquitoes' movements are governed by their habits. Thus, a summary of the random walk model was included for mosquito feeding and laying habits to understand better how the dynamics are constructed.
Adult Mosquitoes Habits
Feeding Habits
Like all other animals, mosquitoes need energy and nutrients for survival and reproduction. Plant materials and blood are useful sources of this.
Only female mosquitoes bite. They are attracted by several things like infrared light, light, perspiration, body odor, lactic acid and carbon dioxide. Mouth-part of many female mosquitoes are adapted for piercing the skin of animal hosts and sucking their blood as ectoparasites. During the blood meal, the female mosquitoes lands on the host skin and sticks their proboscis. Their saliva contains anticoagulants proteins that prevents blood clotting. They then sucks the host blood into their abdomen. A. Aegypti mosquitoes needs 5 µL per serving [START_REF] Ph | How mosquitoes work[END_REF]. In many female mosquito species, nutrients obtain from blood meal is essential for the production of eggs, whereas in many other species, obtaining nutrients from a blood meal enables the mosquito to lay more eggs. Among humans, mosquitoes preferred feeding those with type O blood [START_REF] Shirai | Landing preference of aedes albopictus (diptera: Culicidae) on human skin among abo blood groups, secretors or nonsecretors, and abh antigens[END_REF], heavy breathers, an abundance of skin bacteria, high body heat, and pregnant women [START_REF] Chappell | 5 stars: A mosquito's idea of a delicious human[END_REF]. Individuals' attractiveness to mosquitoes also has a heritable, genetically-controlled component. [START_REF] Fernández-Grandon | Heritability of attractiveness to mosquitoes[END_REF] Blood-sucking species of mosquitoes are selective feeders that prefer a particular host species. But they relax this selectivity when they experience severe competition and scarcity of food, and/or defensive activity on the part of the hosts. If humans are scarce, mosquitoes resort to feed on monkeys, while others prefer on equines, rodents, birds, bats and pigs, which is where so many of our cross-species disease fears originate from. [START_REF] Lehane | The Biology of Blood-Sucking in Insects[END_REF] Some mosquitoes ignores humans altogether and feed exclusively on birds, while most will eat whatever is available. Some of the other most popular dining options for mosquitoes include amphibians, snakes, reptiles, squirrels, rabbits and other small mammals. Mosquitoes also target larger animals, such as horses, cows and primates, as well as kangaroos and wallabies [START_REF] Staughton | Do animals get mosquito bites? ScienceABC.com[END_REF]. Even fish may be attacked by some mosquito species if they expose themselves above water level, as mudskippers do [START_REF] Slooff | Mosquitoes (culicidae) biting a fish (periophthalmidae)[END_REF]. Comparably, mosquitoes may sometimes feed on insects in nature. A. Aegypti and Culextarsalis are attracted and feed on insect larvae and they live to produce viable eggs [START_REF] Sharris | Survival and fecundity of mosquitoes fed on insect haemolymph[END_REF]. While Anopheles Stephensi is attracted to and can feed successfully on larvae of moth species known as Manduca sexta and
Heliothis subflexa [START_REF] George | Malaria mosquitoes host-locate and feed upon caterpillars[END_REF].
Plant nectar is a common energy source for diet across mosquito species, particularly male mosquitoes, which are exclusively dependent on plant nectar or alternative sugar sources for survival. The design of efficient sugar-baited traps for mosquitoes would greatly benefit the prevention of vector-borne illness. Plant preference is likely driven by an innate attraction that may be enhanced by experience, as mosquitoes learn to recognize available sugar rewards [START_REF] Wolff | Olfaction, experience and neural mechanisms underlying mosquito host preference[END_REF]. It varies among mosquito species, geographical habitats, and seasonal availability. Nectar-seeking involves the integration of at least three sensory systems: olfaction, vision and taste.
But altogether mosquitoes can discriminate between rich and poor sugar sources to choose plants that offer higher glycogen, lipid, and protein content [START_REF] Yu | Feeding on different attractive flowering plants affects the energy reserves of culex pipiens pallens adults[END_REF]. Below are the preferred plant of different mosquito species from Barredo and DeGennaro [START_REF] Barredo | Not just from blood: Mosquito nutrient acquisition from nectar sources[END_REF].
Breeding Sites
The dengue vectors, are container breeders; they breed in a wide variety of artificial and natural wet containers/receptacles, preferably with dark coloured surfaces and holding clear (unpolluted) water [START_REF]Potential breeding sites[END_REF]. Some mosquitoes like living near people, while others prefer forests, marshes, or tall grasses. All mosquitoes like water because mosquito larvae and pupae live in the water with little or no flow [START_REF]Where mosquitoes live[END_REF].
Different types of water attract different types of mosquitoes. • Permanent water mosquitoes: These mosquitoes tend to lay their eggs in permanentto-semi-permanent bodies of water.
Mosquito Species Nectar Source
• Floodwater mosquitoes: These mosquitoes lay their eggs in moist soil or in containers above the water line. The eggs dry out, then hatch when rain floods the soil or container.
Summary of Random Walk Modelling
Mathematicians have long studied qualitative models that describe the process of dispersal, the most common of which are random-walk-type models. The term 'random walk' is credited to Karl Pearson, who posed "The problem of the random walk" in a letter published in the journal Nature on 27 July 1905 [START_REF] Barry | Random Walks and Random Environments: Volume 1: Random Walks[END_REF].
A random walk is a random process that describes a walker's path as a sequence of discrete random steps of fixed length. It underlies an essential model in time series forecasting known as the random walk model. This model assumes that in each period the variable takes a random step away from its previous value, the steps being independently and identically distributed in size [START_REF] Nau | Notes on the random walk model[END_REF]; this means that the first difference of the variable is a series to which the mean model should be applied.
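As a quick illustration of this discrete process (the number of walkers and steps below are arbitrary choices, not values used elsewhere in this work), the following Python sketch simulates many independent walkers taking unit steps left or right with equal probability and compares the empirical probability of returning to the origin with the binomial value derived in the next subsection.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(seed=0)

n_steps = 100        # number of time steps per walker
n_walkers = 50_000   # number of independent walkers

# each step is +1 (right) or -1 (left) with probability 1/2
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
final_positions = steps.sum(axis=1)      # position m after n steps

# empirical probability of ending back at the origin (m = 0, n even)
p_emp = np.mean(final_positions == 0)

# binomial value: a = (n + m)/2 steps to the right, here m = 0
a = n_steps // 2
p_binom = comb(n_steps, a) / 2**n_steps

print(f"empirical P(m = 0) = {p_emp:.4f}")
print(f"binomial  P(m = 0) = {p_binom:.4f}")
```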
Below, we will introduce the fundamental theory and equations of random walks from the paper of Codling et al. [START_REF] Codling | Random walks in biology[END_REF].
Reaction-Diffusion Equation
Consider a mosquito that moves randomly in one dimension, taking a step of fixed length δ to the left or to the right at each time step τ, as sketched in Figure 6.1. Let p(m, n) be the probability that the mosquito reaches the point mδ after n time steps. To end up there, the mosquito must have taken a steps to the right and b = n - a steps to the left, with a - b = m, i.e. a = (n + m)/2. The number of possible paths is
\[
\frac{n!}{a!\,b!} = \frac{n!}{a!\,(n-a)!} = \binom{n}{a},
\]
the total number of paths with n steps is $2^n$, and the probability is
\[
p(m, n) = \binom{n}{a} \frac{1}{2^n}.
\]
Note that
\[
\sum_{m=-n}^{n} p(m, n) = \sum_{a=0}^{n} \binom{n}{a}\left(\frac{1}{2}\right)^{n-a}\left(\frac{1}{2}\right)^{a} = \left(\frac{1}{2} + \frac{1}{2}\right)^{n} = 1,
\]
and p(m, n) follows a binomial law. Thanks to the Stirling formula $n! \simeq_{n\to+\infty} (2\pi n)^{1/2} e^{\,n\ln n - n}$, we have
\[
p(m, n) \simeq_{n\to+\infty} \frac{1}{2^n}\,\frac{(2\pi n)^{1/2}}{(2\pi a)^{1/2}(2\pi (n-a))^{1/2}}\; e^{(n\ln n - n)-(a\ln a - a)-((n-a)\ln(n-a)-(n-a))} \simeq_{n\to+\infty} \left(\frac{2}{\pi n}\right)^{1/2} e^{-m^{2}/(2n)} .
\]
Passing to the continuum limit $x = m\delta$, $t = n\tau$, with $D = \lim_{\delta,\tau\to 0} \delta^{2}/(2\tau)$ kept positive and finite, the probability density of the position of the mosquito becomes the heat kernel
\[
K(x, t) = \left(\frac{1}{4\pi D t}\right)^{1/2} e^{-x^{2}/(4Dt)} . \tag{6.1}
\]

Theorem 6.2.1. The kernel K defined by (6.1) satisfies, for every $t > 0$,
\[
\frac{\partial K}{\partial t} - D\Delta K = 0, \qquad K(x, 0) = 0 \ \text{for } x \neq 0, \qquad \int_{\mathbb{R}^d} K(x, t)\,dx = 1 .
\]
Proof. Taking the partial derivatives of (6.1) with respect to t and x, respectively, gives
\[
\frac{\partial K}{\partial t}
= \left(\frac{1}{4\pi D t}\right)^{1/2} e^{-\frac{x^{2}}{4Dt}}\,\frac{x^{2}}{4Dt^{2}}
- \frac{1}{2}\,(4\pi D t)^{-3/2}(4\pi D)\, e^{-\frac{x^{2}}{4Dt}}
= \left(\frac{1}{4\pi D t}\right)^{3/2} e^{-\frac{x^{2}}{4Dt}}\,(2\pi D)\left(\frac{x^{2}}{2Dt} - 1\right)
\]
and
\[
\frac{\partial K}{\partial x}
= \left(\frac{1}{4\pi D t}\right)^{1/2} e^{-\frac{x^{2}}{4Dt}}\left(-\frac{2x}{4Dt}\right)
= -\left(\frac{1}{4\pi D t}\right)^{3/2} e^{-\frac{x^{2}}{4Dt}}\,(2\pi x) .
\]
Thus, the Laplacian of K is given by
\[
\Delta K
= -\left(\frac{1}{4\pi D t}\right)^{3/2}(2\pi)\, e^{-\frac{x^{2}}{4Dt}}\left(1 - \frac{2x^{2}}{4Dt}\right)
= \left(\frac{1}{4\pi D t}\right)^{3/2} e^{-\frac{x^{2}}{4Dt}}\,(2\pi)\left(\frac{x^{2}}{2Dt} - 1\right) .
\]
Therefore, $\frac{\partial K}{\partial t} - D\Delta K = 0$.
Note that for x ≠ 0, as t → 0, $e^{-x^{2}/(4Dt)} \to 0$ while $(4\pi D t)^{-1/2}$ approaches infinity, and the exponential approaches 0 faster than the latter blows up. Thus, for x ≠ 0,
\[
K(x, 0) = \lim_{t\to 0} \left(\frac{1}{4\pi D t}\right)^{1/2} e^{-\frac{x^{2}}{4Dt}} = 0 .
\]
Now, taking the integral of K(x, t) with respect to x gives
\[
\int_{\mathbb{R}^d} K(x, t)\,dx
= \int_{\mathbb{R}^d} \left(\frac{1}{4\pi D t}\right)^{1/2} e^{-\left(\frac{x}{(4Dt)^{1/2}}\right)^{2}} dx .
\]
Let $z = x/(4Dt)^{1/2}$, so that $(4Dt)^{1/2}\,dz = dx$. Thus, for each time $t > 0$,
\[
\int_{\mathbb{R}^d} K(x, t)\,dx
= \frac{1}{\pi^{1/2}} \int_{\mathbb{R}^d} e^{-z^{2}}\,dz
= \frac{1}{\pi^{1/2}} \prod_{i=1}^{n} \int_{-\infty}^{\infty} e^{-z_i^{2}}\,dz_i = 1 .
\]

Corollary 6.2.2. Let $u_0 \in L^{\infty}(\mathbb{R})$.
1. The function
\[
u(x, t) = \int_{\mathbb{R}^d} K(x - y, t)\,u_0(y)\,dy = (K \star u_0)(x)
\]
is a solution of the initial value problem
\[
\frac{\partial u}{\partial t} - D\Delta u = 0, \qquad u(x, 0) = u_0(x) .
\]
2. The function
\[
u(x, t) = \int_{\mathbb{R}^d} K(x - y, t)\,u_0(y)\,dy + \int_0^t \int_{\mathbb{R}^d} K(x - y, t - s)\, f(y, s)\,dy\,ds
\]
is a solution of the reaction-diffusion problem
\[
\frac{\partial u}{\partial t} - D\Delta u = f, \qquad u(x, 0) = u_0(x) .
\]
In particular, if f = f (t, u) is Lipschitz with respect to u, there exists a unique local in time strong solution.
Proof.
1. Let x 0 ∈ R d , ϵ > 0. Choose δ > 0 such that |u(y) -u(x 0 )| < ϵ if |y -x 0 | < δ, y ∈ R d (6.2)
Thus, u(x, 0) = u 0 (x).
Now note that yx 0 = yx + xx 0 . By triangle property,
|y -x 0 | ≤ |y -x| + |x -x 0 |. Since |y -x 0 | < δ, |x -x 0 | < δ 2 . By Theorem 6.2.1, |u(x, t) -u(x 0 )| = R d K(x -y, t)|u(y) -u(x 0 )|dy ≤ B(x 0 ,δ) K(x -y, t)|u(y) -u(x 0 )|dy + R d -B(x 0 ,δ) K(x -y, t)|u(y) -u(x 0 )
|dy
Let I be the first term and J the second term of the right-hand side of the equation above. Then by equation 6.2 and Theorem 6.2.1,
I ≤ ϵ B(x 0 ,δ) K(x -y, t)dy ≤ ϵ(1) ≤ ϵ. Furthermore, |u(y) -u(x 0 )| ≤ |u(y)| + |u(x 0 )| ≤ ||u 0 || + ||u 0 || ≤ 2||u 0 ||.
Implying further that
J ≤ 2||u 0 || L ∞ R d -B(x 0 ,δ) K(x -y, t)dy ≤ 2||u 0 || L ∞ R d -B(x 0 ,δ) 1 4πDt 1/2 e -(x-y) 2 4Dt dy ≤ 2||u 0 || (4πDt) 1/2 R d -B(x 0 ,δ) e -(x-y) 2 4Dt dy ≤ 2||u 0 || (4πDt) 1/2 R d -B(x 0 ,δ) e -(y-x 0 ) 2 4Dt dy Let z = y-x (4Dt) 1/2 . Then (4Dt) 1/2 dz = dy. Thus J ≤ 2||u 0 || π 1/2 R d -B(x 0 ,δ/ √ t)
e -z 2 16 dz
Note that as
t → 0 + , R d -B(x 0 ,δ/ √ t) e -z 2 16 dz → 0. Hence, J = 0. Thus if |x -x 0 | ≤ δ 2 and t > 0 is small enough, |u(x, t) -u(x 0 )| < 2ϵ. 2. Let u(x, t) = R d K(x -y, t)u 0 (y)dy + t 0 R d K(x -y, s) f (y, s)dyds. Then ∂u ∂t -D∆u = ∂ ∂t R d K(x -y, t)u 0 (y)dy + t 0 R d K(x -y, t -s) f (y, s)dyds -D ∂ 2 ∂x 2 R d K(x -y, t)u 0 (y)dy + t 0 R d K(x -y, t -s) f (y, s)dyds = ∂ ∂t R d K(x -y, t)u 0 (y)dy + ∂ ∂t t 0 R d K(x -y, t -s) f (y, s)dyds -D ∂ 2 ∂x 2 R d K(x -y, t)u 0 (y)dy -D ∂ 2 ∂x 2 t 0 R d K(x -y, t -s) f (y, s)dyds = ∂ ∂t -D ∂ 2 ∂x 2 R d K(x -y, t)u 0 (y)dy + ∂ ∂t -D ∂ 2 ∂x 2 t 0 R d K(x -y, t -s) f (y, s)dyds By (1), ∂ ∂t -D ∂ 2 ∂x 2 R d K(
xy, t)u 0 (y)dy = 0. Thus, we only have
u t -D∆u = ∂ ∂t -D ∂ 2 ∂x 2 t 0 R d K(x -y, t -s) f (y, s)dyds.
Now we change variables, to write
u t -D∆u = ∂ ∂t -D ∂ 2 ∂x 2 t 0 R d K(y, s) f (x -y, t -s)dyds.
For s = t > 0, we compute that
∂ ∂t t 0 R d K(y, s) f (x -y, t -s)dyds = t 0 R d K(y, s) f t (x -y, t -s)dyds + R d K(y, t) f (x -y, 0)dy and D ∂ 2 ∂x 2 t 0 R d K(y, s) f (x -y, t -s)dyds = t 0 R d K(y, s) f x 1 x 2 (x -y, t -s)dyds.
Hence, we have
u t -D∆u = t 0 R d K(y, s) ∂ ∂t -D ∂ 2 ∂x 2 f (x -y, t -s) dyds + R d K(y, t) f (x -y, 0)dy = t ϵ R d K(y, s) -∂ ∂s -D ∂ 2 ∂y 2 f (x -y, t -s) dyds + ϵ 0 R d K(y, s) -∂ ∂s -D ∂ 2 ∂y 2 f (x -y, t -s) dyds + R d K(y, t) f (x -y, 0)dy (6.3)
Let I be the first term, J be the second term and G be the third term of the right-hand side of the equation above. Then
|J| ≤ (||
f t || L ∞ + D||H 2 f || L ∞ ) ϵ 0 R d K(y, s)dyds ≤ (|| f t || L ∞ + D||H 2 f || L ∞ ) ϵ 0 R d 1 4πDs 1/2 e -y 2 4Ds dyds ≤ (|| f t || L ∞ + D||H 2 f || L ∞ ) ϵ 0 π -1/2 R d e -z 2 dz ds ≤ (|| f t || L ∞ + D||H 2 f || L ∞ ) ϵ 0 π -1/2 π 1/2 ds
≤ ϵC
We also have, by integration by parts
I = t ϵ R d -∂ ∂s -D ∂ 2 ∂y 2 K(y, s) f (x -y, t -s)dyds + R d K(y, ϵ) f (x -y, t -ϵ)dy -R d K(y, t) f (x -y, 0)dy = R d K(y, ϵ) f (x -y, t -ϵ)dy -R d K(y, t) f (x -y, 0)dy (6.4)
Since K solves the heat equation. Combining equations (6.3, 6.4) and J, we have
u t (x, t) -D∆(x, t) = lim ϵ→0 R d K(y, ϵ) f (x -y, t -ϵ)dy = f (x, t)
for x ∈ R^d and t > 0.

Remark 6.2.3. Similar results remain true in a bounded domain Ω.
Advection-Diffusion Equation
This section will discuss a random walk with a preferred direction or bias and a possible waiting time between movement steps.
Consider that at each time step τ, a mosquito moves a distance δ to the left or right with probabilities l and r, respectively, or stays at the same location with probability 1 - l - r. If the mosquito is at location x at time t + τ, there are three possibilities for its location at time t:
• it was at xδ and then moved to the right,
• it was at x + δ and then moved to the left, and
• it was at x and did not move at all.
Thus the probability that the mosquito is at position x at time t + τ is given by
p(x, t + τ) = p(x, t)(1 -l -r) + p(x -δ, t)r + p(x + δ, t)l. (6.5)
Taylor's expansions are written
p(x + δ, t) = p(x, t) + δ ∂p ∂x + δ 2 2 ∂ 2 p ∂x 2 + O(δ 3 ) p(x -δ, t) = p(x, t) -δ ∂p ∂x + δ 2 2 ∂ 2 p ∂x 2 + O(δ 3 ).
Then by subtracting, respectively adding these equations, gives us
∂p ∂x = p(x + δ, t) -p(x, t) δ + O(δ) = p(x, t) -p(x -δ, t) δ + O(δ) = lim δ→0 p(x, t) -p(x -δ, t) δ ∂ 2 p ∂x 2 = p(x + δ, t) -2p(x, t) + p(x -δ, t) δ + O(δ 2 ) ∂p ∂t = p(x, t + τ) -p(x, t) τ + O(τ)
Let τ and δ be small. Substituting these expansions into (6.5) and retaining the leading-order terms gives
\[
\frac{\partial p}{\partial t} = -\frac{\delta \epsilon}{\tau}\,\frac{\partial p}{\partial x} + \frac{k\delta^{2}}{2\tau}\,\frac{\partial^{2} p}{\partial x^{2}} + O(\tau^{2}) + O(\delta^{3}),
\]
with ϵ = r - l and k = l + r, where O(τ²) and O(δ³) represent higher-order terms. We require δ²/τ to remain positive and finite as δ, τ → 0, and the difference ϵ = r - l between the probabilities of moving right and left is taken proportional to δ, so that ϵ → 0 as δ, τ → 0. Thus the probabilities r and l are not fixed, but vary with the spatial and temporal step sizes in such a way that the limits
u = lim δ,τ→0 δϵ τ , D = k lim δ,τ→0 δ 2 2τ
exist and are positive and finite. This further implies that the higher-order terms O(τ²) and O(δ³) vanish in the limit. Hence we obtain the advection-diffusion equation
∂p ∂t = -u ∂p ∂x + D ∂ 2 p ∂x 2 (6.6)
where the first term on the right-hand side represents advection due to the bias in the probability of moving in the preferred direction and the second terms represents diffusion.
For an N-dimensional lattice, the standard drift-diffusion equation is given by
\[
\frac{\partial p}{\partial t} = -\mathbf{u}\cdot\nabla p + D\,\nabla^{2} p,
\]
where u is the average drift velocity, ∇ is the gradient operator and ∇² is the Laplacian. Assuming an initial Dirac delta distribution $p(x, 0) = \delta_d(x_1)\cdots\delta_d(x_N)$, the equation above has the solution
p(x, t) = 1 (4πDt) N/2 exp -|x -ut| 2 4Dt . (6.7)
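As a sanity check of this limiting solution, the following short Python sketch (all numerical values are arbitrary illustrations, not values used elsewhere in this work) simulates a one-dimensional biased random walk with the step probabilities l and r introduced above and compares the empirical mean and variance of the walker positions with the drift ut and the spread 2Dt predicted by the advection-diffusion limit.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative step sizes and bias (placeholder values)
tau, delta = 0.01, 0.1              # time step and space step
r, l = 0.55, 0.45                   # probabilities of moving right / left
u = (r - l) * delta / tau           # drift velocity of the continuum limit
D = (r + l) * delta**2 / (2 * tau)  # diffusion coefficient

n_walkers, n_steps = 10_000, 1_000
t = n_steps * tau

# each move is -1 (left), 0 (stay) or +1 (right)
moves = rng.choice([-1, 0, 1], p=[l, 1 - l - r, r], size=(n_walkers, n_steps))
x = delta * moves.sum(axis=1)       # positions at time t

print(f"empirical mean {x.mean():7.3f}   vs  u*t   {u * t:7.3f}")
print(f"empirical var  {x.var():7.3f}   vs  2*D*t {2 * D * t:7.3f}")
```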
Fokker-Planck Equation
This section will extend a simple random walk in two or more dimensions to include the probability of spatially dependent movements.
Consider a mosquito that moves on a two-dimensional lattice. Suppose that at each time step τ, the mosquito moves a distance δ either upward, downward, to the left or to the right, with location-dependent probabilities u(x, y), d(x, y), l(x, y) and r(x, y), respectively, where u + d + l + r ≤ 1, or remains at the same location with probability 1 - u(x, y) - l(x, y) - d(x, y) - r(x, y). Then the probability that the mosquito is at position (x, y) at time t + τ is given by

p((x, y), t + τ) = p((x, y), t)(1 - u(x, y) - l(x, y) - d(x, y) - r(x, y))
+ p((x - δ, y), t) l(x, y) + p((x + δ, y), t) r(x, y)
+ p((x, y - δ), t) d(x, y) + p((x, y + δ), t) u(x, y).
For i = 1, 2, the limit
b i = lim δ,τ,ϵ i →0 ϵ i δ τ , a ii = lim δ,τ→0 k i δ 2 2τ tends to constants with ϵ 1 = r -l, ϵ 2 = u -d, k 1 = r + l and k 2 = u + d.
Dengue Model with Spatial Distribution
In this section we present a new model for dengue that involves the spatial spread of adult mosquitoes. We follow the method proposed by Bourhis et al. [START_REF] Bourhis | Perception-based foraging for competing resources: Assessing pest population dynamics at the landscpae from heterogeneous resource distribution[END_REF] for the dispersal of flies.
Consider a domain Ω ⊂ R². The propensity of an adult mosquito to leave a given focal point (x, y) can be described by the diffusion coefficient
D(x, y) = D min + αF l (x, y) + βF f (x, y) (6.9)
where D_min is the minimal diffusion value in the absence of resource perception, and F_l(x, y) and F_f(x, y) are the dispersion kernels that cover the entire landscape of laying and food resources, respectively. That is, mosquitoes move with the baseline diffusion D_min if they perceive no resources, whereas perceived food sources modulate the diffusion through the term βF_f(x, y), and similarly laying sites through αF_l(x, y). The coefficients α and β are used to weight the differential impact of the resources on the diffusion intensity. The dispersion kernels F_l(x, y) and F_f(x, y) are defined as
F f (x, y) = ∑ Ω K f (d) × 1 f (x, y) ∑ Ω K f (d) F l (x, y) = ∑ Ω K l (d) × 1 l (x, y) ∑ Ω K l (d) with K f (d) = e -c f d K l (d) = e -c l d
as the kernels for feeding sites K f (d) and ovipositing sites K l (d) where d is the distance to the focal point and c f and c l tune the perception ranges of feeding and laying sites, respectively.
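The following Python sketch illustrates how the diffusion map D(x, y) of equation (6.9) can be assembled from indicator maps of the laying and feeding sites with the exponential kernels K_l and K_f; the grid size, site layout, decay rates c_l, c_f and weights α, β below are placeholder values chosen only for illustration.

```python
import numpy as np

# coarse illustrative landscape
nx, ny = 60, 60
xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")

# binary indicator maps of laying and feeding sites (placeholder layout)
lay = np.zeros((nx, ny));  lay[25:35, 25:35] = 1.0
food = np.zeros((nx, ny)); food[5:15, 40:50] = 1.0

# placeholder parameters
D_min, alpha, beta = 0.1, 0.01, 0.01
c_l, c_f = 0.05, 0.05

def dispersion_kernel(indicator, c):
    """F(x, y) = sum_Omega K(d) * 1(x', y') / sum_Omega K(d), with K(d) = exp(-c d)."""
    F = np.empty((nx, ny))
    for i in range(nx):
        for j in range(ny):
            d = np.hypot(xs - i, ys - j)   # distances to the focal point (i, j)
            K = np.exp(-c * d)
            F[i, j] = np.sum(K * indicator) / np.sum(K)
    return F

F_l = dispersion_kernel(lay, c_l)
F_f = dispersion_kernel(food, c_f)
D = D_min + alpha * F_l + beta * F_f       # equation (6.9)

print("diffusion coefficient ranges from", D.min().round(4), "to", D.max().round(4))
```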
Defining the population densities of adult mosquitoes for every (x, y) ∈ Ω, the susceptible and infectious adults follow the same dynamics as equations (6.15)-(6.16) below, with the constant diffusion coefficient replaced by the resource-dependent coefficient D(x, y) of (6.9); for instance, the susceptible adults satisfy
\[
\frac{\partial S_m(t, x, y)}{\partial t} = \nabla\cdot\big(D(x, y)\nabla S_m\big) + \gamma_{P,S_m} P(t, x, y)\, e^{-\beta_m P(t, x, y)} - \mu_A S_m(t, x, y) - a b_m I_h(t, x, y)\, S_m(t, x, y),
\]
while the aquatic stages and the human compartments keep the same dynamics as in the non-spatial model.
Well-posedness of the Model
To simplify, we assume in this section that D(x, y) = D constant. For x ∈ Ω, t > 0, with initial datum (E(0), L(0), P(0), S m (0), I m (0), S h (0), I h (0), R h (0)) and Neumann boundary condition
\[
\frac{\partial E}{\partial x} = \frac{\partial L}{\partial x} = \frac{\partial P}{\partial x} = \frac{\partial S_m}{\partial x} = \frac{\partial I_m}{\partial x} = \frac{\partial S_h}{\partial x} = \frac{\partial I_h}{\partial x} = \frac{\partial R_h}{\partial x} = 0 \quad \text{on } \partial\Omega .
\]
Consider the system of mixed ODE and PDE below
\[
\begin{aligned}
E' &= \alpha_m (S_m + I_m) - \gamma_{E,L} E - \mu_E E && (6.12)\\
L' &= \gamma_{E,L} E - \gamma_{L,P} L - \mu_L L && (6.13)\\
P' &= \gamma_{L,P} L - \gamma_{P,S_m} P - \mu_P P && (6.14)\\
S_m' - D\Delta S_m &= \gamma_{P,S_m} P\, e^{-\beta_m P} - (\mu_A + a b_m I_h)\, S_m && (6.15)\\
I_m' - D\Delta I_m &= a b_m I_h S_m - \mu_A I_m && (6.16)\\
S_h' &= \gamma_h R_h - a b_h I_m S_h && (6.17)\\
I_h' &= a b_h I_m S_h - \sigma_h I_h && (6.18)\\
R_h' &= \sigma_h I_h - \gamma_h R_h && (6.19)
\end{aligned}
\]

Lemma 6.4.1. By Duhamel's formula, the system (6.12)-(6.19) with initial datum $(E_0, L_0, P_0, S_{m,0}, I_{m,0}, S_{h,0}, I_{h,0}, R_{h,0})$ is equivalent to the integral formulation
E
= e -(γ E,L +µ E )t E 0 + α m t 0 e -(γ E,L +µ E )(t-s) (S m + I m )ds L = e -(γ L,P +µ L )t L 0 + γ E,L t 0
e -(γ L,P +µ L )(t-s) Eds P = e -(γ P,Sm +µ P )t P 0 + γ L,P t 0 e -(γ P,Sm +µ P )(t-s) Lds S m = K(., t)S m,0 + t 0 K(., t -∆)(γ P,S m Pe -β m Pab m I h S m )ds
I m = K(., t)I m,0 + ab m t 0 K(., t -∆)I h S m ds S h = S h,0 + t 0 (γ h R h -ab h I m S h )ds I h = e -σ h t I h,0 + ab h t 0 e -σ h (t-s) I m S h ds R h = e -γ h t R h,0 + σ h t 0 e -γ h (t-s) I h ds (6.20)
Proof. Rewriting equation (6.12) would give us
E ′ + (γ E,L + µ E )E = α m (S m + I m ).
Multiplying both sides of the equation by the integrating factor $e^{\int(\gamma_{E,L}+\mu_E)\,dt} = e^{(\gamma_{E,L}+\mu_E)t}$ and integrating, then using E(0) = E_0 to determine the constant of integration and solving for E, we get
E = e -(γ E,L +µ E )t E 0 + α m t 0 e -(γ E,L +µ E )(t-s) (S m + I m )ds.
Applying the same procedure for the ordinary differential equation in (6.13)-(6.19), we get
L = e -(γ L,P +µ L )t L 0 + γ E,L t 0
e -(γ L,P +µ L )(t-s) Eds P = e -(γ P,Sm +µ P )t P 0 + γ L,P t 0 e -(γ P,Sm +µ P )(t-s) Lds
S h = S h,0 + t 0 (γ h R h -ab h I m S h )ds I h = e -σ h t I h,0 + ab h t 0 e -σ h (t-s) I m S h ds R h = e -γ h t R h,0 + σ h t 0 e -γ h (t-s) I h ds
Now, for the partial differential equation (6.15) and (6.16), we have
S ′ m -D∆S m = γ P,S m Pe -β m P -(µ A + ab m I h )S m I ′ m -D∆I m = ab m I h S m -µ A I m
By Corollary 6.2.2, the solution of the reaction-diffusion problem above is
S m = K(., t)S m,0 + t 0 K(., t -s)(γ P,S m Pe -β m P -ab m I h S m )ds I m = K(., t)I m,0 + ab m t 0 K(., t -s)I h S m ds
where K is the kernels defined as the solution of the diffusion equation with boundary conditions
∂K ∂t -D∆K = f for x ∈ Ω, t > 0 K(x, 0) = K 0 where R d K(t, x)dx = 1.
We will denote by Φ the right-hand side of equation (6.20), that is,
$\Phi = (\Phi_E, \Phi_L, \Phi_P, \Phi_{S_m}, \Phi_{I_m}, \Phi_{S_h}, \Phi_{I_h}, \Phi_{R_h})$.

Lemma 6.4.2. Let $U = (E, L, P, S_m, I_m, S_h, I_h, R_h)$ in $B_T$, the ball defined by
\[
B_T := \left\{ U \in L^{\infty}(\mathbb{R}_+, L^{\infty}(\Omega))^{8} : \sup_{t\in[0,T]} \|U(t, .) - U_0\|_{L^{\infty}(\Omega)} \leq r \right\} . \qquad (6.21)
\]
There exists a time $T > 0$ such that $\Phi(B_T) \subseteq B_T$.
Proof.
• For egg:
|Φ E (U)(t, x) -E 0 (x)| ≤ α m t 0 e -(γ E,L +µ E )(t-s) (S m (t, x) + I m (t, x))ds + e -(γ E,L +µ E )t E 0 (x) -E 0 (x) ≤ α m t 0 e -(γ E,L +µ E )(t-s) (S m (t, x) + I m (t, x))ds + e -(γ E,L +µ E )t -1 E 0 (x) .
Applying triangle inequality we get
|Φ E (U)(t, x) -E 0 (x)| ≤ α m t 0 e -(γ E,L +µ E )(t-s) (S m (t, x) + I m (t, x))ds + e -(γ E,L +µ E )t -1 |E 0 (x)|. Then, t 0 e -(γ E,L +µ E )(t-s) (S m (t, x) + I m (t, x))ds ≤ t 0 sup t∈[0,T] e -(γ E,L +µ E )(t-s) (S m (t, x) + I m (t, x)) ds ≤ sup t∈[0,T] e -(γ E,L +µ E )(t-s) (S m (t, x) + I m (t, x)) T 0 ds ≤ T sup t∈[0,T] (|S m (t, x)| + |I m (t, x)|) ≤ T sup t∈[0,T] |S m (t, x)| + sup t∈[0,T] |I m (t, x)| ≤ T sup t∈[0,T] |U(t, x)| + sup t∈[0,T] |U(t, x)| ≤ 2T sup t∈[0,T] |U(t, x)|. Therefore, |Φ E (U)(t, x) -E 0 (x)| will become |Φ E (U)(t, x) -E 0 (x)| ≤ α m T sup t∈[0,T] (|S m (t, x)| + |I m (t, x)|) + |E 0 (x)| ≤ 2α m T sup t∈[0,T] |U(t, x)| + |U 0 (x)| Since U ∈ B T , sup t∈[0,T] |U(t, x)| = sup t∈[0,T] |U(t, x) -U 0 (x) + U 0 (x)| ≤ sup t∈[0,T] |U(t, x) -U 0 (x)| + |U 0 (x)| ≤ r + ||U 0 || (6.22)
Finally, we obtain
|Φ E (U)(t, x) -E 0 (x)| ≤ 2(r + ||U 0 ||)α m T + ||U 0 ||. Choose T ≤ r -||U 0 || 2(r + ||U 0 ||)α m . Thus, sup t∈[0,T] ||Φ E (U)(t, .) -E 0 || L ∞ ≤ r.
• For larvae:
|Φ L (U)(t, x) -L 0 (x)| ≤ γ E,L t 0 e -(γ L,P +µ L )(t-s) E(t, x)ds + e -(γ L,P +µ L )t L 0 (x) -L 0 (x) ≤ γ E,L t 0 e -(γ L,P +µ L )(t-s) E(t, x)ds + e -(γ L,P +µ L )t -1 L 0 (x) ≤ γ E,L t 0 e -(γ L,P +µ L )(t-s) E(t, x)ds + e -(γ L,P +µ L )t -1 |L 0 (x)| As before, t 0 e -(γ L,P +µ L )(t-s) E(t, x)ds ≤ t 0 sup t∈[0,T] e -(γ L,P +µ L )(t-s) E(t, x) ds ≤ T sup t∈[0,T] |E(t, x)| ≤ T sup t∈[0,T] |U(t, x)|
Therefore, by equation (6.22),
|Φ L (U)(t, x) -L 0 (x)| ≤ γ E,L T sup t∈[0,T] |U(t, x)| + |U 0 (x)| ≤ γ E,L T(r + ||U 0 ||) + ||U 0 (x)||. Choose T ≤ r -||U 0 || γ E,L (r + ||U 0 ||) . Then, sup t∈[0,T] ||Φ L (U)(t, .) -L 0 || L ∞ ≤ r.
• For pupae:
|Φ P (U)(t, x) -P 0 (x)| ≤ γ L,
||Φ P (U)(t, .) -P 0 || L ∞ ≤ r.
• For susceptible human:
|Φ S h (U)(t, x) -S h,0 (x)| ≤ t 0 (γ h R h (t, x) -ab h I m (t, x)S h (t, x))ds ≤ γ h t 0 R h (t, x)ds + ab h t 0 I m (t, x)S h (t, x)ds ≤ γ h t 0 sup t∈[0,T] R h (t, x)ds + ab h t 0 sup t∈[0,T] I m (t, x)S h (t, x)ds ≤ γ h T sup t∈[0,T] |R h (t, x)| + ab h T sup t∈[0,T] |I m (t, x)||S h (t, x)| ≤ γ h T sup t∈[0,T] |U(t, x)| + ab h T sup t∈[0,T] |U(t, x)| 2
Therefore, by equation (6.22), for all x ∈ Ω and t ∈ [0, T],
|Φ S h (U)(t, x) -S h,0 (x)| ≤ γ h T(r + ||U 0 ||) + ab h T(r + ||U 0 ||) 2 .
Choose T ≤ r
γ h + ab h (r + ||U 0 ||) (r + ||U 0 ||) . Thus, sup t∈[0,T] ||Φ S h (U)(t, .) -S h,0 || L ∞ ≤ r.
• For infected human:
|Φ I h (U)(t, x) -I h,0 (x)| ≤ ab h t 0 e -σ h (t-s) I m (t, x)S h (t, x)ds + e -σ h t I h,0 (x) -I h,0 (x)
|Φ I h (U)(t, x) -I h,0 (x)| ≤ ab h T sup t∈[0,T] |U(t, x)| 2 + |U 0 (x)| ≤ ab h T(r + ||U 0 ||) 2 + ||U 0 (x)||. Choose T ≤ r -||U 0 || ab h (r + ||U 0 ||) 2 . Then for all x, sup t∈[0,T] ||Φ I h (U)(t, .) -I h,0 || L ∞ ≤ r. • For recovered human: |Φ R h (U)(t, x) -R h,0 (x)| ≤ σ h t 0 e -γ h (t-s) I h (t, x)ds + e -γ h t R h,0 (x) -R h,0 (x) ≤ σ h t 0 e -γ h (t-s) I h (t, x)ds + e -γ h t -1 |R h,0 (x)| ≤ σ h T sup t∈[0,T] |U(t, x)| + |U 0 (x)| ≤ σ h T(r + ||U 0 ||) + ||U 0 (x)||. Choose T ≤ r -||U 0 || σ h (r + ||U 0 ||) . Then for all x, sup t∈[0,T] ||Φ R h (U)(t, .) -R h,0 || L ∞ ≤ r.
For the terms with diffusion, we have the following
|| f (U 1 ) -f (U 2 )|| ≤ 2γ P,S m ||P 1 -P 2 || + ab m (||I h,1 -I h,2 ||||S m,1 || + ||S m,1 -S m,2 ||||I h,2 ||) ≤ 2γ P,S m ||U 1 -U 2 || + ab m (||S m,1 || + ||I h,2 ||)||U 1 -U 2 || ≤ 2(γ P,S m + (r + ||U 0 ||)ab m )||U 1 -U 2 ||.
According to [START_REF] Maati | Analysis of Heat Equations on Domains[END_REF], there exists a constant C Ω > 0, depending only on Ω, such that the kernel satisfies
||K(., t)|| L ∞ (Ω) ≤ C Ω . Thus, for x ∈ Ω |Φ(S m )(t, x) -S m,0 (x)| ≤ Ω K(x -y, t)S m,0 (y)dy + T 0 R d K(x -y, t -s) f (U)(s, y)dy ds ≤ C Ω ||S m,0 || L ∞ (Ω) + C Ω T sup t∈[0,T] || f (U)(t, .)|| L ∞ (Ω) ≤ C Ω ||U 0 || + 2(γ P,S m + (r + ||U 0 ||)ab m )C Ω T. Then ||Φ(S m )(t, x) -S m,0 (x)|| L ∞ (Ω) ≤ C Ω ||U 0 || + 2(γ P,S m + (r + ||U 0 ||)ab m )C Ω T. Choose C Ω ||U 0 || + 2(γ P,S m + (r + ||U 0 ||)ab m )C Ω T ≤ r where T ≤ r -C Ω ||U 0 || 2(γ P,S m + (r + ||U 0 ||)ab m )C Ω . Then, sup t∈[0,T] ||Φ(S m )(t, .) -S m,0 || L ∞ ≤ r.
• For infected mosquito:
Let g(U) = I h S m . Then g(U) is a Lipschitz function in B T . If U 1 and U 2 in B T , then g(U 1 ) -g(U 2 ) = I h,1 S m,1 -I h,2 S m,2 = I h,1 S m,1 -I h,1 S m,2 + I h,1 S m,2 -I h,2 S m,2 = I h,1 (S m,1 -S m,2 ) + S m,2 (I h,1 -I h,2 ). Thus, ||g(U 1 ) -g(U 2 )|| ≤ sup t∈[0,T] |I h,1 ||S m,1 -S m,2 | + sup t∈[0,T] |S m,2 ||I h,1 -I h,2 | ≤ sup t∈[0,T] |U(t, x)||U 1 (t, x) -U 1 (t, x)| + sup t∈[0,T] |U(t, x)||U 1 (t, x) -U 1 (t, x)| ≤ 2 sup t∈[0,T] |U(t, x)||U 1 (t, x) -U 1 (t, x)| ≤ 2(r + ||U 0 ||)||U 1 -U 2 ||.
• Similar to the susceptible mosquito, we have
|Φ(I m )(t, .) -I m,0 | ≤ C Ω ||I m,0 (.)|| L ∞ (Ω) + ab m C Ω T sup t∈[0,T] ||g(U)(t, .)|| L ∞ (Ω) ≤ C Ω ||U 0 || + ab m C Ω T(2(r + ||U 0 ||)). Then ||Φ(I m )(t, .) -I m,0 || L ∞ (Ω) ≤ C Ω ||U 0 || + ab m C Ω T(2(r + ||U 0 ||)). Choose T ≤ r -C Ω ||U 0 || ab m C Ω (2(r + ||U 0 ||))
.
Then for all x,
sup t∈[0,T] ||Φ(I m )(t, x) -I m,0 (x)|| L ∞ ≤ r.
Finally choosing r = max(2||U 0 ||, 2C Ω ||U 0 ||), and T smaller than the minimum between
1 6α m ; 1 3γ E,L ; 1 3γ L,P ; 2 3γ h + 9ab h ||U 0 || ; 1 9ab h ||U 0 || ; 1 3σ h ; ||U 0 || (γ P,S m + 2(2C Ω + 1)||U 0 ||ab m ) ; 1 2ab m (2C Ω + 1) implies that Φ(B T ) ⊂ B T .
Lemma 6.4.3. There exists a time T > 0 such that the map Φ is a contraction map from B T onto itself.
Proof. Let U and U be in B T .
• For the equation for eggs:
|Φ E (U)(t) -Φ E ( U)(t)| = α m t 0 e -(γ E,L +µ E )(t-s) (S m + I m )ds + α m t 0 e -(γ E,L +µ E )(t-s) ( S m + I m )ds ≤ α m t 0 e -(γ E,L +µ E )(t-s) (S m -S m ) + (I m -I m ) ds
Since e -(γ E,L +µ E )t ≤ 1 for any time t from 0 to infinity, we have
|Φ E (U)(t) -Φ E ( U)(t)| ≤ α m T sup t∈[0,T] ||(S m -S m ) + (I m -I m )|| L ∞ ≤ 2α m T sup t∈[0,T] ||U -U|| L ∞ . Then, sup t |Φ E (U)(t) -Φ E ( U)(t)| ≤ 2α m T sup t∈[0,T] ||U -U|| L ∞ which is con- traction if 2α m T < 1, i.e T < 1 2α m .
• For the equation for larvae:
|Φ L (U)(t) -Φ L ( U)(t)| = γ E,L t 0 e -(γ L,P +µ L )(t-s) E ds -γ E,L t 0 e -(γ L,P +µ L )(t-s) E ds = γ E,L t 0 e -(γ L,P +µ L )(t-s) (E -E)ds ≤ γ E,L T sup t∈[0,T] ||E -E|| L ∞ ≤ γ E,L T sup t∈[0,T] ||U -U|| L ∞ . Then, sup t |Φ L (U)(t) -Φ L ( U)(t)| ≤ γ E,L T sup t∈[0,T] ||U -U|| L ∞ which is con- traction if T < 1 γ E,L .
• For the equation for pupae:
|Φ P (U)(t) -Φ P ( U)(t)| = γ L,
)(t-s) (L -L)ds ≤ γ L,P T sup t∈[0,T] ||L -L|| L ∞ ≤ γ L,P T sup t∈[0,T] ||U -U|| L ∞ . Then, sup t |Φ P (U)(t) -Φ P ( U)(t)| ≤ γ L,P T sup t∈[0,T] ||U -U|| L ∞ which is con- traction if T < 1 γ L,P .
• For the equation for susceptible humans:
|Φ S h (U)(t) -Φ S h ( U)(t)| = t 0 (γ h R h -ab h I m S h )ds - t 0 (γ h R h -ab h I m S h )ds = t 0 γ h (R h -R h )ds - t 0 ab h (I m S h -I m S h ) ≤ γ h T sup t∈[0,T] ||R h -R h || + ab h T sup t∈[0,T] ||I m (S h -S h ) + S h (I m -I m )|| ≤ γ h T sup t∈[0,T] ||U -U|| L ∞ + ab h T sup t∈[0,T] ||U|| L ∞ ||U -U|| L ∞ + sup t∈[0,T] ||U|| L ∞ ||U -U|| L ∞ ≤ (γ h T + 2ab h T(r + ||U 0 ||)) sup t∈[0,T] ||U -U|| L ∞ . Then, sup t |Φ S h (U)(t) -Φ S h ( U)(t)| ≤ (γ h T + 2ab h T(r + ||U 0 ||)) sup t∈[0,T] ||U - U|| L ∞ which is contraction if T < 1 γ h + 2ab h (r + ||U 0 ||) .
• For the equation for infected humans:
|Φ I h (U)(t) -Φ I h ( U)(t)| = ab h t 0 e -σ h (t-s) I m S h ds -ab h t 0 e -σ h (t-s) I m S h ds = ab h t 0 e -σ h (t-s) (I m S h -I m S h )ds ≤ ab h T sup t∈[0,T] ||I m (S h -S h ) + S h (I m -I m )|| ≤ ab h T sup t∈[0,T] ||I m || ||S h -S h || + sup t∈[0,T] || S h || ||I m -I m || ≤ ab h T sup t∈[0,T] ||U|| L ∞ ||U -U|| L ∞ + sup t∈[0,T] || U|| L ∞ ||U -U|| L ∞ ≤ 2ab h T sup t∈[0,T] ||U|| L ∞ ||U -U|| L ∞ ≤ 2ab h T(r + ||U 0 ||) sup t∈[0,T] ||U -U|| L ∞ . Then sup t |Φ I h (U)(t) -Φ I h ( U)(t)| ≤ 2ab h T(r + ||U 0 ||) sup t∈[0,T] ||U -U|| L ∞ which is contraction if T < 1 2ab h (r + ||U 0 ||) .
• For the equation for recovered humans:
|Φ R h (U)(t) -Φ R h ( U)(t)| = σ h t 0 e -γ h (t-s) I h ds -σ h t 0 e -γ h (t-s) I h ds = σ h t 0 e -γ h (t-s) (I h -I h )ds ≤ σ h T sup t∈[0,T] ||I h -I h || ≤ σ h T sup t∈[0,T] ||U -U|| L ∞ . Then sup t |Φ R h (U)(t) -Φ R h ( U)(t)| ≤ σ h T sup t∈[0,T] ||U -U|| L ∞ which is con- traction if T < 1 σ h .
• For the equation for susceptible mosquitoes:
|Φ S m (U)(t) -Φ S m ( U)(t)| = t 0 K f (U)ds - t 0 K ⋆ f ( U)ds = t 0 K ⋆ ( f (U) -f ( U))ds ≤ 2C Ω (γ P,S m + (r + ||U 0 ||)ab m ) sup t∈[0,T] ||U -U|| L ∞ ≤ T sup t∈[0,T] ||U -U|| L ∞ .
where T < 1 2C Ω (γ P,S m + (r + ||U 0 ||)ab m )
, which is a contraction mapping.
• For the equation for infected mosquitoes:
|Φ I m (U)(t) -Φ I m ( U)(t)| = t 0 K ⋆ g(U)ds - t 0 K ⋆ g( U)ds = t 0 K ⋆ (g(U) -g( U))ds ≤ C Ω (2(r + ||U 0 ||)ab m ) sup t∈[0,T] ||U -U|| L ∞ ≤ T sup t∈[0,T] ||U -U|| L ∞ .
where
T < 1 C Ω (2(r + ||U 0 ||)ab m )
, which is a contraction mapping.
Therefore, sup t∈
[0,T] ||Φ(U) -Φ( U)|| ≤ K||U -U|| with K < 1 if T is strictly smaller than the minimum of 1 2α m ; 1 γ E,L ; 1 γ L,P ; 1 γ h + 2ab h (r + ||U 0 ||) ; 1 2ab h (r + ||U 0 ||) ; 1 σ h ; 1 C Ω (γ P,S m + 2(r + ||U 0 ||)ab m ) ; 1 C Ω (2(r + ||U 0 ||)ab m ) .
Using the lemmas above, we can conclude that our system of equations is globally well-posed, as stated in the theorem below.

Theorem 6.4.4. Let $0 \leq S_{h,0}, I_{h,0}, R_{h,0} \leq H_0$ and $0 \leq E_0, L_0, P_0 \leq M_{Y,0}$, $0 \leq S_{m,0}, I_{m,0} \leq M_{A,0}$, where $H_0$, $M_{Y,0}$ and $M_{A,0}$ are the initial population densities of the human, young mosquito and adult mosquito populations, respectively. Then there exists a unique global-in-time weak solution $(E, L, P, S_m, I_m, S_h, I_h, R_h) \in L^{\infty}(\mathbb{R}_+, L^{\infty}(\Omega))^{8}$.
Control inputs $w_Y$, $w_A$ and $w_H$ (copepod application on the larvae, pesticide on the adult mosquitoes, and vaccination of the humans) are introduced in the state equations. The optimal control problem consists in minimizing the objective functional $J(U, w)$, where
\[
J(U, w) = \int_{\Omega} \int_0^T f(U, w, (x, t))\,dt\,dX, \qquad
f(U, w, (x, t)) = I_h(x, t) + \tfrac{1}{2}A_Y w_Y^2(x, t) + \tfrac{1}{2}A_A w_A^2(x, t) + \tfrac{1}{2}A_H w_H^2(x, t),
\]
subject to
\[
h(U, \dot U, w, (x, t)) = 0, \qquad g(U(0), w) = (E_0, L_0, P_0, S_{m,0}, I_{m,0}, S_{h,0}, I_{h,0}, R_{h,0}),
\]
where h collects the controlled state equations (6.23)-(6.30), the last of which reads
\[
\frac{dR_h(x, t)}{dt} - \sigma_h I_h(x, t) + \gamma_h R_h(x, t) = 0, \qquad (6.30)
\]
with the boundary conditions
\[
\frac{\partial E}{\partial x} = \frac{\partial L}{\partial x} = \frac{\partial P}{\partial x} = \frac{\partial S_m}{\partial x} = \frac{\partial I_m}{\partial x} = \frac{\partial S_h}{\partial x} = \frac{\partial I_h}{\partial x} = \frac{\partial R_h}{\partial x} = 0 .
\]
Derivation of the Optimal Control
Lemma 6.5.1. There exists the adjoint variables λ i , i = 1, 2, • • • , 6 that satisfy the following backward in time system of partial differential equations
- dλ 1 (x, t) dt = λ 1 (x, t)µ E + (λ 1 (x, t) -λ 2 (x, t))γ E,L - dλ 2 (x, t) dt = λ 2 (x, t)(µ L + w Y ) + (λ 2 (x, t) -λ 3 (x, t))γ L,P - dλ 3 (x, t) dt = λ 3 (x, t)µ P + (λ 3 (x, t) -λ 4 (x, t)(1 -β m P)e -β m P )γ P,S m - ∂λ 4 (x, t) ∂t -D∆λ 4 = -λ 1 (x, t)α m + λ 4 (x, t)(µ A + w A ) + (λ 4 (x, t) -λ 5 (x, t))ab m I h (x, t) - ∂λ 5 (x, t) ∂t -D∆λ 5 = -λ 1 (x, t)α m + λ 5 (x, t)(µ A + w A ) + (λ 6 (x, t) -λ 7 (x, t))ab h S h (x, t) - dλ 6 (x, t) dt = λ 6 (x, t)w H + (λ 6 (x, t) -λ 7 (x, t))ab h I m (x, t) - dλ 7 (x, t) dt = 1 + (λ 7 (x, t) -λ 8 (x, t))σ h + (λ 4 (x, t) -λ 5 (x, t))ab m S m (x, t) - dλ 8 (x, t) dt = (λ 8 (x, t) -λ 6 (x, t))γ h ( 6
.31) with the transversality condition λ T (x, T) = 0 and boundary conditions µ T = λ T (x,0)h(U(x,0) g(U(x,0),w) and ∂λ(x,t) ∂x ∂Ω = ∂U(x,t) ∂x ∂Ω = 0.
Proof. Consider the Lagrangian corresponding to the optimization problem:
L ≡ Ω T 0 [ f (U, w, (x, t) + λ T h(U, U, w, (x, t))]dtdX + Ω µ T g(U(x, 0), w)dX
The vector of Lagrangian multiplier λ is a function of space and time, and µ is another vector of multipliers that are associated with the initial conditions. Then, we have
L = J + Ω µ T g(U(x, 0), w)dX + Ω T 0 λ 1 (x,
Ω T 0 λ 1 (x, t)(6.23)dtdX = Ω T 0 λ 1 (x, t) ∂E(x, t) ∂t -α m (S m (x, t) + I m (x, t)) + γ E,L E(x, t) + µ E E(x, t) dtdX = Ω T 0 λ 1 (x, t) ∂E(x, t) ∂t dtdX + Ω T 0 λ 1 (x, t) µ E E(x, t) + γ E,L E(x, t) -α m (S m (x, t) + I m (x, t)) dtdX = Ω λ 1 (x, t)E(x, t) T 0 - T 0 E(x, t) ∂λ 1 (x, t) ∂t dt dX + Ω T 0 λ 1 (x, t) µ E E(x, t) + γ E,L E(x, t) -α m (S m (x, t) + I m (x, t)) dtdX = Ω λ 1 (x, T)E(x, T) -λ 1 (x, 0)E(x, 0) - T 0 E(x, t) ∂λ 1 (x, t) ∂t dt dX + Ω T 0 λ 1 (x, t) µ E E(x, t) + γ E,L E(x, t) -α m (S m (x, t) + I m (x, t)) dtdX
By tranversality condition λ 1 (x, T) = 0, then we can have
Ω T 0 λ 1 (x, t)(6.23)dtdX = Ω -λ 1 (x, 0)E(x, 0) - T 0 E(x, t) ∂λ 1 (x, t) ∂t dt dX + Ω T 0 λ 1 (x, t) µ E E(x, t) + γ E,L E(x, t) -α m (S m (x, t) + I m (x, t)) dtdX = - Ω λ 1 (x, 0)E(x, 0)dX - Ω T 0 E(x, t) ∂λ 1 (x, t) ∂t dtdX + Ω T 0 λ 1 (x, t)µ E E(x, t)dtdX + Ω T 0 λ 1 (x, t)γ E,L E(x, t)dtdX - Ω T 0 λ 1 (x, t)α m (S m (x, t) + I m (x, t))dtdX
For the expression with Ω T 0 λ 2 (x, t)(6.24)dtdX we have
Ω T 0 λ 2 (x, t)(6.24)dtdX = Ω T 0 λ 2 (x, t) ∂L(x, t) ∂t -γ E,L E(x, t) + γ L,P L(x, t) + µ L L(x, t) + w Y L(x, t) dtdX = Ω T 0 λ 2 (x, t) ∂L(x, t) ∂t dtdX - Ω T 0 λ 2 (x, t)γ E,L E(x, t)dtdX + Ω T 0 λ 2 (x, t)γ L,P L(x, t)dtdX + Ω T 0 λ 2 (x, t)µ L L(x, t)dtdX + Ω T 0 λ 2 (x, t)w Y L(x, t)dtdX = - Ω λ 2 (x, 0)L(x, 0)dX - Ω T 0 L(x, t) ∂λ 2 (x, t) ∂t dtdX - Ω T 0 λ 2 (x, t)γ E,L E(x, t)dtdX + Ω T 0 λ 2 (x, t)γ L,P L(x, t)dtdX + Ω T 0 λ 2 (x, t)µ L L(x, t)dtdX + Ω T 0 λ 2 (x, t)w Y L(x, t)dtdX
For the expression with Ω T 0 λ 3 (x, t)(6.25)dtdX we have
Ω T 0 λ 3 (x, t)(6.25)dtdX = Ω T 0 λ 3 (x, t) ∂P(x,
+ Ω T 0 λ 4 (x, t)ab m I h (x, t)S m (x, t)dtdX + Ω T 0 λ 4 (x, t)w A S m (x, t)dtdX = - Ω λ 4 (x, 0)S m (x, 0)dX - Ω T 0 S m (x, t) ∂λ 4 (x, t) ∂t dtdX - Ω T 0 λ 4 (x, t)D∆S m dtdX - Ω T 0 λ 4 (x, t)γ P,S m P(x, t)e -β m P(x,t) dtdX + Ω T 0 λ 4 (x, t)µ A S m (x, t)dtdX + Ω T 0 λ 4 (x, t)ab m I h (x, t)S m (x, t)dtdX + Ω T 0 λ 4 (x, t)w A S m (x, t)dtdX
For the term with
Ω T 0 λ 4 (x, t)D∆S m dtdX Ω T 0 λ 4 (x, t)D∆S m dtdX = Ω T 0 λ 4 (x, t)D ∂ 2 ∂x 2 S m dtdX = T 0 Ω λ 4 (x, t)D ∂ ∂x ∂S m ∂x dX dt
By integration by parts with respect to space, we have
Ω T 0 λ 4 (x, t)D∆S m dtdX = T 0 λ 4 (x, t)D ∂S m ∂x Ω - Ω ∂λ 4 (x, t)D ∂x ∂S m ∂x dX dt = T 0 λ 4 (x, t)D ∂S m ∂x Ω - ∂λ 4 (x, t)D ∂x S m Ω + Ω S m ∂ ∂x ∂λ 4 (x, t)D ∂x dX dt = T 0 λ 4 (x, t) ∂S m (x, t) ∂x Ω - ∂λ 4 (x, t) ∂x S m (x, t) Ω Ddt + T 0 Ω S m
Numerical Simulation of the Model with one laying site
This section presents numerical simulations of the model above, minimizing the number of infected humans by applying three control strategies: vector control of young mosquitoes by copepods, vector control of adults by pesticide, and vaccination of humans.
The control weights A_Y and A_A are the efforts in copepod and pesticide administration for the mosquito population, while A_H is the effort to vaccinate susceptible humans. Since adult mosquitoes are readily available in the population, controlling them is simpler than controlling young mosquitoes or vaccinating susceptible humans; accordingly, A_Y and A_H are set smaller than A_A.
Hence, we initially set the control weights to A_A = 10, A_Y = 1 and A_H = 1. Note that the values of A_A, A_Y and A_H do not change the convergence of the optimal control.
The optimality of the system is numerically solved using Algorithm 3 with ϵ = 0.01.
Algorithm 3 Computation of optimal control of dengue model with spatial distribution
Algorithm 3 Computation of the optimal control of the dengue model with spatial distribution
Given $U_0$ = (10000, 500, 100, 10000, 1000, 1000, 10, 0) as initial datum, a final time T = 50, a domain [-200, 200] and a tolerance ε > 0. Let $w_{Y,0}$, $w_{A,0}$, $w_{H,0}$ be randomly chosen following N(0, 1).
while ||∇L(w, U, λ)|| > ε do
    solve the forward system for U;
    solve the backward (adjoint) system for λ;
    compute the gradient ∇L(w, U, λ);
    update w.
end while
Set w* = w_n.
Explicit Euler finite differences are used to numerically solve the direct and the adjoint systems of ordinary and partial differential equations.
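To make the structure of Algorithm 3 concrete without reproducing the full dengue system, the sketch below applies the same forward system / backward adjoint system / projected gradient update loop to a deliberately simplified scalar problem (minimize the integral of x + A w²/2 subject to x' = (a - w)x with 0 ≤ w ≤ w_max). All numerical values are placeholders, the sketch starts from a zero control instead of the random initialization of Algorithm 3, and it only illustrates the forward-backward sweep, not the discretization of the dengue model itself.

```python
import numpy as np

# toy scalar analogue of the forward-backward sweep in Algorithm 3
a, A, w_max = 0.3, 1.0, 1.0        # growth rate, control weight, control bound
T, nt = 50.0, 500
dt = T / nt
x0, eps, step = 10.0, 1e-3, 0.05

def forward(w):
    """Explicit Euler for the state equation x' = (a - w) x."""
    x = np.empty(nt + 1); x[0] = x0
    for k in range(nt):
        x[k + 1] = x[k] + dt * (a - w[k]) * x[k]
    return x

def backward(x, w):
    """Backward-in-time Euler for the adjoint lambda' = -1 - lambda (a - w),
    starting from the transversality condition lambda(T) = 0."""
    lam = np.zeros(nt + 1)
    for k in range(nt - 1, -1, -1):
        lam[k] = lam[k + 1] + dt * (1.0 + lam[k + 1] * (a - w[k]))
    return lam

w = np.zeros(nt)                    # initial control guess
for it in range(2000):
    x = forward(w)
    lam = backward(x, w)
    g = A * w - lam[:-1] * x[:-1]   # gradient of the Hamiltonian in w
    w_new = np.clip(w - step * g, 0.0, w_max)   # projected gradient step
    if np.linalg.norm(w_new - w) * np.sqrt(dt) <= eps:
        w = w_new
        break
    w = w_new

cost = dt * np.sum(x[:-1] + 0.5 * A * w**2)
print(f"stopped after {it + 1} sweeps, cost ~ {cost:.3f}")
```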
The simulations were carried out using D_min = 0.1, α = 0.01 and β = 0. The initial configuration follows Figure 6.2. The spatial domain is [-200, 200]. The laying site is located at the center with a width of 20. Adult mosquitoes are initially located between -100 and 100. Humans are everywhere except on the laying site. Figure 6.3 shows the time evolution of the total number (integral over space) of each compartment, with and without optimal control, for this configuration with one laying site. For the young mosquito compartments (upper figures of Figure 6.3), we can see that with the control inputs, specifically the copepod application w*_Y, there is a rapid decrease in each population over a short time towards equilibrium, whereas without control each population increases faster and shows no sign of decrease. With no predator for larvae, the figure shows that larvae increase faster than eggs and pupae. Similarly, for the adult mosquito compartments (middle figures of Figure 6.3), since pesticide administration affects both susceptible and infected mosquitoes, the figure shows that with the control inputs both populations decrease exponentially over a short period.
The human compartment (bottom figure of Figure 6.3) shows that the application of control strategies effectively minimizes infected humans. It increases for at least ten days and then decreases exponentially towards zero. Figure 6.4 shows the spatiotemporal evolution of infectious humans and mosquitoes with and without control. The figure shows that without control inputs, we need to apply the control strategy for a long time and then decrease it. However, decreasing the control strategy's efforts does not mean stopping its application. The figure shows that we must continuously apply the control strategy near the laying sites.
Consequently, with the three control inputs, we only need to apply the control strategy for a short period and eventually stop it in more or less 20 days.
Numerical Simulation of the Model with Spatial Distribution having Two Laying Sites
In this section, we consider a numerical simulation with two laying sites. We assume that mosquitoes prefer the laying site nearest to their position. The initial configuration follows Figure 6.6. The spatial domain is [-200, 200]. The laying sites occupy the interval [-50, -30] and a second interval on the positive side of the domain. Adult mosquitoes are initially located between -100 and 100. Humans are everywhere except on the laying sites. Using Algorithm 3, we get the following behavior of each variable in the system.
Figure 6.9 shows the progression of the optimal controls w*_Y, w*_A and w*_H. A near-zero total optimal control does not mean that the control strategies should be terminated; instead, it indicates that the control strategy must still be applied near the laying sites at maximum capacity for an interval of time.
Behavior with respect to capacity and diffusion parameters
Influence of the Laying Sites Capacity
In this section, we consider changing the capacity of the laying sites. The equation for eggs E is modified to take into account the laying capacity as
\[
E' = \alpha_m (S_m + I_m)\left(1 - \frac{E}{k_{lay}}\right) - \gamma_{E,L} E - \mu_E E
\]
where k_lay represents the capacity of the laying sites. The simulations indicate that the controls of adult mosquitoes and of humans do not depend on the carrying capacity k_lay. On the contrary, the control of larvae changes with k_lay: as shown in Figure 6.13, the greater the carrying capacity k_lay, the longer the optimal control w_Y has to be applied. This is explained by the fact that more eggs can be accommodated, and thus more larvae are produced.
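A minimal numerical illustration of this modified egg equation is sketched below; the adult density S_m + I_m is frozen at a placeholder constant so that only the influence of k_lay is visible, and all rates are arbitrary rather than calibrated values.

```python
# illustrative, uncalibrated rates
alpha_m, gamma_EL, mu_E = 5.0, 0.3, 0.1
adults = 200.0                          # frozen value of S_m + I_m

def egg_at_time_T(k_lay, T=60.0, dt=0.01, E0=0.0):
    """Explicit Euler on E' = alpha_m*adults*(1 - E/k_lay) - (gamma_EL + mu_E)*E."""
    E = E0
    for _ in range(int(T / dt)):
        dE = alpha_m * adults * (1.0 - E / k_lay) - (gamma_EL + mu_E) * E
        E += dt * dE
    return E

# larger laying capacity accommodates more eggs (and hence more larvae)
for k_lay in (500.0, 2000.0, 8000.0):
    print(f"k_lay = {k_lay:6.0f}  ->  E(T) ~ {egg_at_time_T(k_lay):8.1f}")
```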
Figure 6.13: Duration of the upper bound in the optimal control w_Y with respect to the laying capacity k_lay.
Sensitivity Analysis with respect to the diffusion
In this section, we study the effect of different parameter values on the number of infected humans since measuring the spatial spread of mosquitoes is a difficult task.
So we compute the maximum number of infected humans by varying $D_{min}$ between 0.1 and 1, and $c_f$, $c_l$, α and β between $10^{-4}$ and $10^{-3}$. The numerical simulations presented in Section 6.5 use parameter values taken from this range, to which the results are insensitive.

We deduce as in [START_REF] Van Den Driessche | Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission[END_REF] the basic reproduction number $R_0$ for the Disease Free Equilibrium $(S^*, 0, 0, 0, R^*)$, with $N^* = S^* + R^*$. This number has an epidemiological meaning: the term $\omega\beta_e/(\delta_e + \nu_e)$ represents the contact rate with the exposed during the average latency period $1/(\delta_e + \nu_e)$, the term $\omega f \beta_s/(\gamma_s + \mu_s + \nu_s)$ is the contact rate with the symptomatic during the average infection period, and the last term is the contribution of the asymptomatic.
A.3 Results
A.3.1 Basic and effective reproduction numbers
In what follows, we write DFE for Disease Free Equilibrium.
Theorem A.3.1. The DFE (S * , 0, 0, 0, R * ) is the unique positive equilibrium. Moreover it is globally asymptotically stable.
Proof. By computing the eigenvalues of the Jacobian matrix, we deduce that if R 0 < 1, then DFE is locally asymptotically stable.
Here we will prove that global asymptotic stability holds independently of $R_0$. Indeed, from the last differential equation in our system of ODEs, we can deduce that R is an increasing function bounded by N(0). Thus R(t) converges to $R^*$ as t goes to +∞. This theorem means that the asymptotic behavior does not depend on $R_0$: for all initial data in Ω, the solution converges to the DFE when time goes to infinity.
Nevertheless, to observe initial exponential growth, $R_0 > 1$ is necessary. Indeed, S is initially close to N, so that the infected states are governed by a linear system whose characteristic polynomial is $P(x) = x^3 + a_2 x^2 + a_1 x + a_0$, with $a_0 = (\gamma_s + \mu_s + \nu_s)\,\gamma_a\,(\delta_e + \nu_s)\,(1 - R_0)$. If $R_0 > 1$, there is at least one positive eigenvalue, which coincides with an initial exponential growth rate of the solutions. To assess the benefit of the interventions, we assume that they start 53 days after the first confirmed infection. We recall that the first infection in France was confirmed on January 24th, 2020, and containment began on March 17th, 2020.
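To make the link between $R_0$ and the initial growth rate explicit, the short sketch below computes the largest real part among the roots of the cubic $P(x) = x^3 + a_2 x^2 + a_1 x + a_0$; the coefficient values used here are placeholders rather than the calibrated ones, chosen only to contrast the cases $a_0 < 0$ (corresponding to $R_0 > 1$) and $a_0 > 0$.

```python
import numpy as np

def dominant_rate(a2, a1, a0):
    """Largest real part among the roots of x^3 + a2 x^2 + a1 x + a0."""
    return max(np.roots([1.0, a2, a1, a0]).real)

# placeholder coefficients, chosen only for illustration
print(dominant_rate(a2=1.5, a1=0.4, a0=-0.2))   # a0 < 0 (R_0 > 1): positive growth rate
print(dominant_rate(a2=1.5, a1=0.4, a0=+0.2))   # a0 > 0: no initial exponential growth
```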
Comparison between three strategies can be found in Table A.1 and Figures A.4-A.5.
Without intervention to control the disease, the maximum number of symptomatic infected individuals varies from 3.49 × 10⁵ to 2.02 × 10⁷, and the maximum number of deaths ranges from 8.85 × 10³ to 7.92 × 10⁶. We observe that any intervention strongly reduces the number of deaths. Concerning France, the Philippines, Italy, Spain, and the United Kingdom, when containment is fully respected and the intervention lasts until the sum of infecteds is reduced to 1, the maximum numbers of symptomatic infected and of deaths are cut sharply, by a factor of order 10³: they now vary from 5.75 × 10² to 7.04 × 10⁴ and from 1.94 × 10² to 2.52 × 10⁴, respectively. Waiting from 104 to 407 days is the price to pay. On the contrary, for the states of Hubei and New York, intervening 53 days after the first infection seems to be already too late. We can also see in Table A.1 that treating only the symptomatic does not reduce the duration. Note that when the intervention ends at time t_c, the number of susceptibles S(t_c) is large, so that the effective reproduction number R_eff is larger than 1. Figure A.6 compares the maximum number of deaths, the maximum number of symptomatic infected individuals, and the intervention duration needed to reach T_c = 1000, as the intervention level varies from 0 to 100%. Containment is the most efficient strategy when it is respected at more than 76% in France and 63% in the Philippines; otherwise, treating the exposed is the best choice. We also observe that the intervention duration becomes long below 89% in France and 82% in the Philippines, which can be understood as the intervention reducing transmission too much for the susceptible pool to be depleted quickly, yet not enough to stop the disease from persisting.
FIGURE 1 . 1 :
11 FIGURE 1.1: Photo d'un moustique adulte femelle Aedes aegypti (gauche) et Aedes albopictus (droite) pendant un repas sanguin [17].
FIGURE 1 . 2 :
12 FIGURE 1.2: Stades de vie des moustiques femelles Ae. aegypti et Ae.albopictus[START_REF]Mosquitoes: Mosquito Life Cycles[END_REF].
FIGURE 1 . 3 :
13 FIGURE 1.3: Vue dorsale de la femelle adulte Ae. Aegypti adulte [58].
FIGURE 1 . 5 :
15 FIGURE 1.5: Vaccin tétravalent contre la dengue fabriqué par Sanofi Pasteur.
FIGURE 1 . 8 :FIGURE 1 . 9 :FIGURE 1 . 10 :
1819110 FIGURE 1.8: Comparaison entre le nombre d'humains infectés I h , le nombre de moustiques infectieux I m , et le nombre de larves L, avec trois entrées de contrôle optimal (bleu) et sans contrôle (orange). La figure montre une différence significative entre le graphique avec trois entrées de contrôle et sans stratégie de contrôle. Elle montre que l'application de la stratégie de contrôle minimise efficacement la population de larves, les humains infectés et la population de moustiques. Il faut peu de temps pour minimiser chaque population. La figure montre l'évolution spatio-temporelle des humains et des moustiques infectieux avec et sans contrôle. La figure montre qu'en l'absence de contrôle, nous
FIGURE 2 . 1 :
21 FIGURE 2.1: Photo of an adult female Aedes aegypti (left) and Aedes albopictus (right) mosquito during a blood meal [17].
FIGURE 2 . 3 :
23 FIGURE 2.3: Dorsal view of the adult female Ae. Aegypti mosquito.[58]
The first dengue vaccine used commercially is CYD-TDV, marketed as Dengvaxia by Sanofi Pasteur. It was licensed in December 2015 and approved by regulatory authorities in 20 countries. One such country is the Philippines. In December 2015, the Philippine Food and Drugs Administration (FDA) greenlighted the vaccine making the Philippines the first Asian country to commercialize it [65]. In April 2016, the Department of Health (DOH) launched the dengue vaccination campaign in the Philippine regions of Central Luzon, Calabarzon, and Metro Manila. More than 800,000 school children received at least one dose of the vaccine.
FIGURE 2 . 5 :
25 FIGURE 2.5: Dengue tetravalent vaccine manufactured by Sanofi Pasteur.
FIGURE 2 . 8 :
28 FIGURE 2.8: Comparison between the number of infected humans I h , number of infectious mosquitoes I m , and number of larvae L, with three control inputs (blue) and without control (orange).
FIGURE 2.9: Spatiotemporal evolution of infectious humans and mosquitoes with and without control.

The figure shows the spatiotemporal evolution of infectious humans and mosquitoes with and without control. Compared with the uncontrolled case, the control strategy needs to be applied for a long time before its effort can be decreased; decreasing the effort, however, does not mean stopping its application. The figure shows that the control strategy must be applied continuously near the laying sites.

The figure also shows the spatiotemporal evolution of each optimal control variable of the model with three control inputs. It shows that copepods need to be administered continuously for 100 days, while pesticide application and vaccination can be lowered over time. Nevertheless, the pesticide and the vaccination must be applied continuously near the laying sites. The appendix contains the published article on COVID-19 entitled "Accounting for Symptomatic and Asymptomatic in a SEIR-type model of COVID-19".
FIGURE 3.1: Compartmental representation of the model with vaccination considering individuals who have previous dengue infections.

FIGURE 3.2: Behaviour of the solution of each variable in the model with Dengvaxia versus time, using logarithmic and exponential growth functions for the human and mosquito populations, respectively.
FIGURE 3.3: Phase portrait of the model with Dengvaxia showing the primary susceptible S_h and secondary susceptible humans versus the infected humans I_h in blue, and the susceptible mosquitoes S_m and infected humans I_h versus the infected mosquitoes I_m in cyan. The square and the circle indicate the first and last solution of the variables.

Figure 3.3d shows that at time t = 0 there are no infected and no secondary susceptible humans. The figure shows that the two variables are directly proportional to each other.

Figure 3.4 graphically shows the system behaviour of the primary susceptible, secondary susceptible and infected human populations. The solution is positive and bounded, as shown in the previous theorem.
FIGURE 3.5: Behaviour of the solution of each variable in the model (3.19) with Dengvaxia versus time, using constant growth functions for the human and mosquito populations. S_h is shown in red, I_h in green, the secondary susceptible in blue, R_h in yellow, S_m in cyan and I_m in magenta.

FIGURE 3.7: Phase portrait of the model.

FIGURE 3.8: Behaviour of the solution of each variable in the model (3.19) with Dengvaxia versus time, using a Gompertz growth function for the human population and an exponential growth function for the mosquito population. S_h is shown in red, I_h in green, the secondary susceptible in blue, R_h in yellow, S_m in cyan and I_m in magenta.

Figure 3.8 clearly shows a Gompertzian growth in the primary susceptible humans: growth is slowest at the beginning and towards the end of the time period, with an early, almost exponential phase followed by a slower phase that reaches a plateau. The secondary susceptible humans follow an increasing, roughly linear growth for a short time until they reach their equilibrium. The infected humans increase exponentially for 10 days towards a maximum population of 6566.323 and then gradually decrease towards their equilibrium, whereas the recovered humans increase exponentially for 200 days and then keep increasing slowly towards the equilibrium. For the mosquito population, the susceptible mosquitoes decrease exponentially for 23 days until they reach 38,733 individuals, then gradually increase up to 55,600 on the 95th day before decreasing towards their equilibrium, whereas the infected mosquitoes increase exponentially for 20 days with a maximum population of 2199.482 and then decrease exponentially towards their equilibrium.
The most competitive candidate was Dengvaxia, by Sanofi Pasteur. Dengvaxia (CYD-TDV) was licensed in December 2015 and has now been approved by regulatory authorities in 20 countries. CYD-TDV vaccine is for the prevention of dengue disease caused by dengue virus serotypes 1, 2, 3, and 4. It should be administered three doses six months apart of 0.5 mL subcutaneous (SC) administration for individuals aged 9 -16 years old with laboratory-confirmed previous dengue infection and living in endemic areas. To account for the vaccine in the model, let us consider the following mathematical model in two different populations.
FIGURE 3.10: Behaviour of the solution of the infected and secondary susceptible humans (top) and of the primary susceptible, recovered and totally immune humans (bottom) under optimal vaccination, using constant growth functions for the human and mosquito populations.

Figure 3.10 shows that, over 100 days, it only takes two days for the infected human population u_2 to reach its highest point at 510 individuals, while it takes around two and a half days for the secondary susceptible human population u_3 to reach its highest point at 1889 individuals; they then decrease towards their equilibria at day 80 and day 73, respectively. Both the recovered humans and the totally immune humans T_h converge to the equilibrium on day 40 (the recovered humans at 1654 individuals).

Figure 3.11 shows that at nine and a half days the susceptible mosquitoes reach their lowest point at 75,557 individuals and the infected mosquitoes reach their highest point at 25,442 individuals. The figure clearly shows that the susceptible mosquitoes u_5 and the infected mosquitoes u_6 behave in opposite ways, because a constant mosquito population is considered.

FIGURE 3.12: Behaviour of the solution of the healthy humans under optimal vaccination, using constant growth functions for the human and mosquito populations.

FIGURE 3.13: Behaviour of the solution of the optimal control in the primary susceptible (top) and secondary susceptible (bottom) humans under optimal vaccination, using constant growth functions for the human and mosquito populations.

Figure 3.13 shows that, in order to achieve optimal control in minimising the infected humans, we need to constantly vaccinate 100% of the secondary susceptible humans u_3 for 80 days. We can then stop for two days and resume the vaccination at 82 and a half days with only 3% of the secondary susceptible humans; the vaccination of secondary susceptible humans can only be stopped on day 85. For the primary susceptible humans u_1, several breaks are possible within the 80 days: a short break starts on day 14, and three long breaks are noticeable, on days 47-50, 63-65 and 69-74.
FIGURE 3.14: Behaviour of the solution of the variables in the human compartments under optimal vector control, using constant growth functions for the human and mosquito populations.

Figure 3.14 shows that, over 100 days, it takes at most three days for the infected human population to reach its maximum of 2044 individuals; it then decreases exponentially towards its equilibrium. The secondary susceptible humans increase exponentially for approximately 20 days towards their equilibrium, with a maximum population of 3121. Furthermore, the primary susceptible humans decrease while the recovered humans increase over time, with a minimum population of 3121 and a maximum population of 1715 individuals, respectively.

Figure 3.15 shows that vector control is an effective method for minimising the mosquito population. For ten days, the susceptible mosquitoes decrease until they reach almost zero, while the infected mosquitoes increase only for at most two days, with a maximum population of 2688. Totally controlled mosquitoes are those that have been eliminated in the process of applying the insecticide. The figure shows that the totally controlled mosquitoes increase exponentially and reach their equilibrium within only a few days.

FIGURE 3.15: Behaviour of the solution of the susceptible, infected and totally controlled mosquitoes under optimal vector control, using constant growth functions for the human and mosquito populations.

FIGURE 3.16: Behaviour of the solution of the healthy humans under optimal vector control, using constant growth functions for the human and mosquito populations.
FIGURE 3.17: Behaviour of the solution of optimal control of mosquitoes in optimal vector control using constant growth function for human and mosquito population.
FIGURE 3.18: Response comparison of the infected human compartment in the four control strategies: vaccination only (green), vector control only (orange), vaccination and vector control (blue), and without control (red).

Figure 3.18 shows that, in minimising the infected humans, the combination of Dengvaxia and vector control is the most effective method. It only takes 30 days to reach equilibrium, resulting in the total elimination of infected humans with a maximum of 12.55% (1,255) infected humans over time. Nevertheless, vector control stands out if only the vaccination and vector control methods are compared.

FIGURE 3.19: Response comparison of each variable in the model with four control strategies: vaccination (green), vector control (orange), the combination of vaccination and vector control (blue), and without control (red).

Figure 3.19 shows no significant difference between the three methods concerning the time for the primary and secondary susceptibles to reach their equilibrium points. It takes seven and a half days for vaccination and 14 days for the combination of vaccination and vector control to reach zero primary susceptible individuals. Moreover, it takes 42 days for vaccination and 30 days for the combination to reach zero secondary susceptible individuals. For vector control, it takes 20 days to reach an equilibrium of 53.36% (5336) primary susceptible humans and 34 days to reach 30.15% (3015) secondary susceptible humans.

FIGURE 3.20: Optimal control of the (A) primary susceptible and (B) secondary susceptible human compartment using vaccination only versus the combination of both control strategies. Optimal control of the (C) mosquito compartment using vector control only versus the combination of both control strategies. The cyan curve represents the optimal control with vaccination of the secondary humans only.

FIGURE 3.21: Convergence of the error in each control strategy.
proportion u*_5/H of susceptible mosquitoes among the human population. The terms u*_1/H and u*_3/H represent the proportion of primary and secondary susceptible humans, respectively. The terms ab_h/(µ_m(γ_h + δ_h)) and a b̄_h/(µ_m(γ_h + δ_h)) represent the transmission rates due to biting during the infection period 1/(γ_h + δ_h) and the mosquito life expectancy 1/µ_m.
FIGURE 4.2: Behaviour of the solution of each variable in the model with Dengvaxia versus time, using a constant growth function for the human population and an entomological growth function for the mosquito population. S_h is shown in red, I_h in green, the secondary susceptible in blue, R_h in yellow, S_m in cyan and I_m in magenta.

Figure 4.2 shows that for 9 days the susceptible humans decrease exponentially towards their equilibrium. On the other hand, the infected and secondary susceptible humans increase exponentially; upon reaching the maxima of 5790.384 infected and 1798.828 secondary susceptible humans, they then decrease to their equilibria on the 60th day. As a consequence, the recovered humans increase exponentially for 60 days towards their equilibrium.

FIGURE 4.3: Phase portrait of the model with Dengvaxia using a constant growth function, showing the primary susceptible S_h and secondary susceptible humans versus the infected humans I_h in blue, and the susceptible mosquitoes S_m and infected humans I_h versus the infected mosquitoes I_m in cyan. The square and the circle indicate the first and last solution of the variables.

Figure 4.3b shows that the infected and secondary susceptible humans are directly proportional to each other for all time. For some time, both variables increase up to the maximum of 1798.828 secondary susceptible humans; they then decrease towards total elimination.

Figure 4.3c shows that in the beginning there are 10 infected mosquitoes and no infected humans. As the infected humans increase, the infected mosquitoes also increase.
The most competitive candidate was Dengvaxia, by Sanofi Pasteur. Dengvaxia (CYD-TDV) was licensed in December 2015 and has now been approved by regulatory authorities in 20 countries. CYD-TDV vaccine is for the prevention of dengue disease caused by dengue virus serotypes 1, 2, 3, and 4. It should be administered three doses six months apart of 0.5 mL subcutaneous (SC) administration for individuals aged 9 -16 years old with laboratory-confirmed previous dengue infection and living in endemic areas. To account for the vaccine in the model, let us consider the following mathematical model in two different populations.
FIGURE 4.4: Behaviour of the infected humans I_h with respect to time without control (red), for the optimal control related to the vaccination only (green), related to the vector control only (orange), and with both controls (blue). The cyan curve corresponds to the optimal control with vaccination of the secondary humans only.

Figure 4.4 shows that, in minimising the infected humans, the combination of Dengvaxia and vector control is the most effective method. It only takes 30 days to reach equilibrium, resulting in the total elimination of infected humans with a maximum of 12.55% (1,255) infected humans over time. Nevertheless, vector control stands out if only the vaccination and vector control methods are compared.

FIGURE 4.5: Behaviour of (A) the human compartments and (B) the mosquito compartments with respect to time without control (red), for the optimal control related to the vaccination only (green), related to the vector control only (orange), and with both controls (blue).

Figure 4.5 shows no significant difference between the three methods concerning the time for the primary and secondary susceptibles to reach their equilibrium points. It takes seven and a half days for vaccination and 14 days for the combination of vaccination and vector control to reach zero primary susceptible individuals. Moreover, it takes 42 days for vaccination and 30 days for the combination to reach zero secondary susceptible individuals. For vector control, it takes 20 days to reach an equilibrium of 53.36% (5336) primary susceptible humans and 34 days to reach 30.15% (3015) secondary susceptible humans. Without applying a control strategy, the infected humans reach a (58,092) maximum population, while the vaccination has a 22% (22,006) maximum population. Now, let us show the controlled variables' behaviour by comparing the vaccination only, the vector control only, and the combination of vaccination and vector control.

FIGURE 4.6: Optimal control of the (A) primary susceptible and (B) secondary susceptible human compartment using vaccination only versus the combination of both control strategies. Optimal control of the (C) mosquito compartment using vector control only versus the combination of both control strategies. The cyan curve represents the optimal control with vaccination of the secondary humans only.
FIGURE 5.1: Life cycle of Aedes mosquitoes.

FIGURE 5.2: Compartmental representation of the model.
and thus U satisfies the local Lipschitz condition. Therefore, by the Cauchy-Lipschitz theorem, there exist T > 0 and a unique solution to equation (5.10) in C([0, T], R)^8. We now show that if the solution exists then it is positive and bounded.

Lemma 5.3.2. The solution is nonnegative and bounded for all time.

From Lemma 5.3.1 and Lemma 5.3.2, we can deduce the global well-posedness theorem below.

Theorem 5.3.3.

negative roots. Whence, by Descartes' rule of signs, the possible combination of
FIGURE 5.3: Optimal solutions of the infected human compartment in the model (5.34) with w_{Y,max} = 23.96, w_{A,max} = 1.

Figure 5.4 shows the behaviour of each variable in the compartments. It shows that applying copepods and pesticides eliminates the mosquito population: with the control strategy, the young and infected adult mosquito populations are eliminated quickly, with only 1,105,864.772 and 23,443,653.571 maximum infected mosquito and larva populations. These populations increase to 8,752,880.738 and 18,763,929,783.442 infected mosquitoes and larvae without control strategies. The figure clearly shows that controlling the mosquito population is an effective strategy.

FIGURE 5.6: Optimal solutions of the control variables for different maximum numbers of copepods.
(5.37) for t ∈ [0, T], with 0 ≤ w_Y, w_A ≤ w_M and 0 ≤ w_H. The variables A_Y, A_A, A_H are the positive weights associated with the control variables w_Y, w_A and w_H, respectively. They correspond to the efforts rendered in exposing the larvae L, the adult mosquitoes S_m, I_m and the susceptible humans S_h to the control strategy.

Lemma 5.4.4. There exists an optimal control w* = (w*_Y(t), w*_A(t), w*_H(t)) such that J(w*_Y, w*_A, w*_H) = min_{w∈W} J(w_Y, w_A, w_H) under the constraint that (E, L, P, S_m, I_m, S_h, I_h, R_h) is a solution to the ordinary differential equation (5.37).
FIGURE 5.7: Optimal solutions of the infected humans in the model with different control strategies.

Figure 5.7 shows the behaviour of the infected human compartment for the different combinations of control strategies and with no control strategy applied.

FIGURE 5.8: Optimal solutions of the infected mosquitoes in the model with different control strategies.

Figure 5.9 shows the behaviour of the optimal control variables when applying the copepods, the pesticide and the vaccination. It shows that the copepods and the pesticide should be used continuously for 27 and 56 days, respectively, before a rapid decrease towards equilibrium, whereas the vaccination should be maintained until the end of the time horizon.
FIGURE 5.10: Optimal solution of the control variables in the model using the different combinations of the control strategies (copepod and vaccination control; pesticide and vaccination control).

Figure 5.10 shows the behaviour of the control variables when applying the different combinations of control strategies. It shows that combining copepods and pesticide is the best control strategy since it requires the shortest application time: 26 days of copepod application and 42 days of pesticide administration. Combining copepods and vaccination is, however, better than pesticide and vaccination.

FIGURE 5.11: Optimal solution of the control variables when starting the control at day 40, 64 and 150.

Figure 5.11 shows the behaviour of each variable influenced by the different starting dates of the control inputs. It clearly shows that starting the control inputs earlier in time is the best choice: it prevents the number of young mosquitoes from increasing and restrains the infected mosquitoes from spreading. In effect, the figure shows that starting the control at day 64 or day 150 will not eliminate the infected human compartment.
FIGURE 5.12: P control at day 40, 64 and 150.

FIGURE 5.13: Comparison of the solution considering competition induced by larvae and pupae (left), and by pupae only (right).

Figure 5.13 shows the behaviour of each variable under competition induced by larvae and pupae, and by pupae alone. Each variable behaves in the same way, but the mosquito population increases whereas the human population stays the same. For the young mosquitoes with competition induced by larvae and pupae, there are 10,375,262,030, 17,669,725,442 and 6,795,266,115 maximum populations for the eggs, larvae and pupae, respectively, while for competition induced by pupae only there are 11,146,838,868, 18,763,929,783 and 7,194,524,993 maximum egg, larva and pupa populations. For the adult mosquitoes with competition induced by larvae and pupae, there are 944,000 and 3,962,681,700 maximum populations for the susceptible and infected mosquitoes, respectively.

FIGURE 5.14: Comparison of the optimal solution considering competition induced by larvae and pupae (left), and by pupae only (right).
FIGURE 6.1: A mosquito can move left or right.

FIGURE 6.2: Initial configuration made of one laying site.

FIGURE 6.3: Behaviour of each variable in the model with spatial distribution, with three control inputs (left) and without control inputs (right).

Figure 6.3 compares the behaviour of each variable in the model with diffusion using the three control inputs and without any control input. It clearly shows that having a control strategy is better than no control at all.

FIGURE 6.4: Spatiotemporal evolution of the infected humans I_h and infectious mosquitoes I_m without control (top) and with optimal control (bottom).

Figure 6.5 shows the spatiotemporal evolution of each optimal control variable of the model with three control inputs. It shows that copepods need to be administered continuously for 45 days while pesticide application and vaccination are lowered over time. However, the figure shows that the pesticide and the vaccination need to be applied continuously near the laying sites.

FIGURE 6.5: Spatiotemporal evolution of the optimal control variables w_Y, w_A, w_H of the model (top) and their sum in space (bottom).

FIGURE 6.7: Behaviour of each variable in the model with spatial distribution, with three control inputs (left) and without control inputs (right).

Figure 6.7 shows that we get a graph relatively similar to the left panel of Figure 6.3. As observed, due to the increase of the carrying capacity of the mosquito laying sites, ...

FIGURE 6.8: Spatiotemporal evolution of the infected humans I_h and infectious mosquitoes I_m without control (top) and with optimal control (bottom).

FIGURE 6.9: Spatiotemporal evolution of the optimal control variables w_Y, w_A, w_H of the model (top) and their sum in space (bottom).
FIGURE 6.10: Behaviour of the eggs, larvae and pupae for different capacities of pupae.

Figure 6.12 shows that the controls of the adult mosquitoes and of the human population ...

FIGURE 6.14: Effect on the maximum of infected humans I_h of the variations of D_min.

Figures 6.14 and 6.15 show the maximum values of the infected humans for varying values of D_min, c_l, c_f, α and β. They show that the varying values give no significant difference in the maximum values of infected humans; there is only a 0.08 difference in the human population. This is due to the fact that the perception is exponentially decreasing.

FIGURE 6.15: Maximum value of the infected humans for varying parameter values involved in the diffusion coefficients.
It is standard to check that the domain Ω = {(S, E, I_s, I_a, U, R) ∈ R^6_+ ; 0 ≤ S + E + I_s + I_a + U + R ≤ N(0)} is positively invariant. In particular, there exists a unique global-in-time solution (S, E, I_s, I_a, U, R) in C(R_+; Ω) as soon as the initial condition lives in Ω. Since the infected individuals are in E, I_a and I_s, the rate of new infections in each compartment (F) and the rate of the other transitions between compartments (V) can be rewritten as

F = ( ω(β_e E + β_s I_s + β_a I_a), 0, 0 )^T,
V = ( (δ_e + ν_e)E, (γ_s + µ_s + ν_s)I_s − f δ_e E, γ_a I_a − (1 − f)δ_e E )^T.

The corresponding Jacobian matrices, evaluated at the disease-free equilibrium (S = N), are

F = [ ωβ_e S/N   ωβ_s S/N   ωβ_a S/N ; 0 0 0 ; 0 0 0 ],
V = [ δ_e + ν_e   0   0 ; −f δ_e   γ_s + µ_s + ν_s   0 ; −(1 − f)δ_e   0   γ_a ],

and the basic reproduction number is obtained as the spectral radius of FV^{-1}:

R_0 := ω ( β_e/(δ_e + ν_e) + f β_s/(γ_s + µ_s + ν_s) + (1 − f) β_a/γ_a ).
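To make the structure of this closed-form R_0 easy to reuse, the short helper below evaluates it as the sum of the contributions of the exposed, symptomatic and asymptomatic stages. This snippet is added for illustration only (it is not part of the original appendix), and the argument values a caller would pass are placeholders.

```python
# Illustrative helper: evaluate the closed-form R0 of the SEIR-type model as the sum
# of the per-stage contributions (exposed, symptomatic, asymptomatic).
def r0(omega, beta_e, beta_s, beta_a, delta_e, nu_e, gamma_s, mu_s, nu_s, gamma_a, f):
    exposed = beta_e / (delta_e + nu_e)                   # transmission while exposed
    symptomatic = f * beta_s / (gamma_s + mu_s + nu_s)    # fraction f becomes symptomatic
    asymptomatic = (1.0 - f) * beta_a / gamma_a           # fraction 1-f stays asymptomatic
    return omega * (exposed + symptomatic + asymptomatic)
```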
FIGURE A.3: A. Boxplot of the posterior distribution computed from France data. B. Effective reproduction number (grey) of the posterior distribution, median (= 3.096738) as a blue straight line, mean (= 3.474858) as a dotted line. C. Fitted symptomatic infected (grey), median as a red straight line, mean as a dotted line. D. Fitted deaths (grey), median as a black straight line, mean as a dotted line.
transmission of DENV in Ae. aegypti and Ae. albopictus. It indicates that the DENV virus can be transferred from parent to offspring over seven consecutive generations of Ae. aegypti and Ae. albopictus under laboratory conditions. This transmission can contribute to the persistence of infected mosquitoes, but it is not sufficient to sustain the spread of dengue. A more common form of transmission is known as horizontal transmission: the virus is transmitted to humans through the bites of infected Ae. aegypti mosquitoes. After the mosquito feeds on a person infected with dengue, the virus replicates in its midgut before disseminating to secondary tissues, notably the salivary glands. The extrinsic incubation period (EIP) is the time that elapses between ingestion of the virus and effective transmission to a new host. It lasts about 8 to 12 days when the ambient temperature is between 25 and 28°C. Variations in the EIP are also influenced by factors such as the magnitude of daily temperature fluctuations, the virus genotype and the initial viral concentration [68]. Although the possibility is low, there is evidence that dengue can also spread through maternal transmission or be transmitted by an infected blood transfusion. A pregnant woman with a DENV infection can pass the virus on to her fetus. Babies carrying DENV may suffer from premature birth, low birth weight and fetal distress [68].
As recommended by the World Health Organization [71], the vaccine should be administered to people who have already been infected by one strain of the virus. In this document, we therefore divide the susceptible human compartment into primary and secondary susceptible humans, that is, individuals who have never been infected and individuals who have been infected by one or more strains of the dengue virus.

This thesis aims to introduce a new mathematical model of dengue. It has the following objectives:
1. study Dengvaxia and show whether Sanofi's recommendation is sufficient,
2. determine an effective control of dengue, and
3. generate a mathematical model of dengue accounting for its life cycle and spatial distribution.

The manuscript is organised as follows. Chapter 3 begins with the presentation of a Ross-type dengue model that considers the vaccination of individuals who have already been infected with dengue. In this chapter, logistic, exponential and Gompertz functions are tested to describe the growth of the human and mosquito populations. We show the well-posedness and the positivity of the solution of the model. We obtained that the disease-free equilibrium is locally asymptotically stable whereas the endemic equilibrium is unstable. Chapter 4 focuses on a new mathematical model of Dengvaxia. Here the human population is constant and an entomological growth function is considered for the mosquito population. Three types of growth are compared:
• Pop 1: constant human and mosquito populations,
• Pop 2: a Gompertz growth function for the human population and an exponential growth function for the mosquito population,
• Pop 3: an entomological growth function for the mosquito population and a constant growth function for the human population.

In the Pop 1 model, we showed that the model only has the disease-free equilibrium and we were able to prove that it is locally asymptotically stable. Similarly, the Pop 2 model only has the disease-free equilibrium, which is locally stable as soon as the growth rate α_m is smaller than the mortality rate µ_m. On the other hand, the Pop 3 model has both an endemic and a disease-free equilibrium. We were able to define the basic reproduction number R_0 and then show that the disease-free equilibrium of the Pop 3 model is locally asymptotically stable if α_m < µ_m, and that the endemic equilibrium is stable only if α_m > µ_m and R_0 > 1. More broadly, we proved the theorem below for the Pop 3 model.

Theorem 1.0.1. 1. If α_m < µ_m, then the trivial disease-free equilibrium is globally asymptotically stable. 2. If α_m > µ_m and R_0 < 1, then the non-trivial disease-free equilibrium is globally asymptotically stable.

We then determine the optimal control strategy minimising the infected humans for each of the three control strategies. We attribute three control inputs, w_1, w_3 and w_m, for the primary susceptible humans, the secondary susceptible humans and the mosquitoes. Here, the action of w_1(t) is the percentage of primary susceptible people, and w_3(t) the percentage of secondary susceptible people, vaccinated per unit of time, while w_5(t), w_6(t) is the percentage of mosquitoes eliminated by administering insecticide in the environment per unit of time. Considering the cost function ..., with 0 ≤ w_1, w_3 ≤ w_H and 0 ≤ w_5, w_6 ≤ w_M, we use Pontryagin's maximum principle to determine the optimal control.

Theorem 1.0.2. There exist adjoint variables λ_i, i = 1, 2, ..., 6, of the system (4.13) that satisfy the following backward-in-time system of ordinary differential equations: ...

Theorem 1.0.3. There exist adjoint variables λ_i, i = 1, 2, ..., 6, of the system (5.37) that satisfy the following backward-in-time system of ordinary differential equations: ...
Mathematical modelling of dengue is a vast subject dealing with many unknowns, and it is somewhat impossible to cover a large part of it in three years. Below is a list of possible research perspectives that we plan to study.

One perspective of the study is to consider the age structure of the human population. Given Sanofi Pasteur's recommendation on the application of Dengvaxia, it is interesting to build a model with an age structure in the human population to describe dengue transmission with different infection rates among the different age groups. Another option is to develop a complete dengue-Dengvaxia model integrating the mosquito life cycle, the four dengue virus strains in the human population, and the efficacy of Dengvaxia on the different virus strains. Adding the age structure and the effect of climate on dengue to this model would make it a robust dengue model.

A further perspective of the study is to consider the breeding and feeding habits of mosquitoes. One can incorporate the sex of the mosquitoes into the model and apply a control strategy to minimise the infected mosquitoes. Since male mosquitoes feed on plant nectar and some plants eat mosquitoes, by designing a strategic positioning of plants in the environment one can determine the optimal control strategy minimising the infected mosquitoes in the population.

Relatedly, one can also consider the energy requirements of mosquitoes. Feeding and laying sites directly affect the energy supply of mosquitoes: their energy increases when they feed and decreases during the egg-laying period. With this in mind, we define a fourth dimension U that accounts for the mosquito's energy supply, called the energy dimension. We can assume that only adult mosquitoes move and therefore that only S_m and I_m have an energy dimension. This energy dimension uses a dynamic energy budget simplified by advection terms in the additional energy dimension U, and relies on an energy landscape after discretisation of space, where land covers are grouped according to their presumed effects on energy supply. Newly emerging adult mosquitoes have an energy level U, where U = 1 is the upper energy bound and U = 0 is the lower energy bound, i.e. S_m, I_m(t, x, y, U = 0) = 0 simulates death by starvation of the susceptible and infected adult mosquitoes. The adult mosquito dynamics can then be defined as follows:

∂S_m(t,x,y)/∂t = γ_{P,S_m} P(t,x,y) e^{−β_m P(t,x,y)} − µ_A S_m(t,x,y) − a b_m I_h(t,x,y) S_m(t,x,y)
        + ∂/∂x ( D(x,y) ∂S_m/∂x ) + ∂/∂y ( D(x,y) ∂S_m/∂y ) − C(x,y) ∂S_m/∂U        (1.9)

∂I_m(t,x,y)/∂t = a b_m I_h(t,x,y) S_m(t,x,y) − µ_A I_m(t,x,y)
        + ∂/∂x ( D(x,y) ∂I_m/∂x ) + ∂/∂y ( D(x,y) ∂I_m/∂y ) − C(x,y) ∂I_m/∂U.        (1.10)

Another interesting perspective of the study is to consider the co-infection of dengue and Covid-19. Because of the overlapping clinical and laboratory features of these diseases, the Covid-19 pandemic in areas where dengue is endemic represents a major challenge. One can therefore design a sound mathematical model describing the co-infection of these diseases and apply an optimal control strategy to minimise the infected humans.
2.2 Dengue

FIGURE 2.4: Cross section of a dengue virus showing its structural components [40], similar to the Zika virus.
Dengue is the most common mosquito-borne viral infection. It can be found in tropical and subtropical regions worldwide, with peak transmission during the rainy season. In 2019, the World Health Organization [68] reported 5.2 million dengue cases worldwide. In the Philippines alone, 271,480 cases with 1,107 deaths were reported from January 1 to August 31, 2019, due to dengue fever [28].

Dengue is caused by four serotypes of viruses under the Flaviviridae family. They are distinct but closely related serotypes called DENV-1, DENV-2, DENV-3 and DENV-4. About one in four people infected with dengue will get sick [15]. The illness usually begins 5-7 days after the infective bite of Ae. aegypti and Ae. albopictus female mosquitoes [8].

In most cases, dengue is a self-limiting illness but may require hospital admission, where supportive care can modify the course of the illness. Symptoms can be mild or severe and typically last 2-7 days. The most common symptom of dengue is fever accompanied by nausea, vomiting, rash, aches, and pains in the muscles or joints. Infection from one type grants life-long immunity to that virus strain and temporarily grants partial protection against the other types. When infected with a different type of virus for a second time, a more severe disease can occur, known as Dengue Hemorrhagic Fever (DHF). In the Philippines, 1,107 deaths were reported from January 1 to August 31, 2019, due to dengue fever [28].

Typical breeding sites include:
• Natural Container
- Tree Holes
- Leaf Axils
- Rock Holes.
• Artificial Container
The parameters are summed up in the table below.

Symbol : Description
a : number of humans bitten per mosquito
b_h : probability of becoming infected
b̄_h : probability of becoming infected again
γ_h : recovery rate of humans from one, two or three serotypes
δ_h : recovery rate of humans from four serotypes
b_m : probability of becoming infectious
µ_m : death rate of mosquitoes

TABLE 3.1: Description of the parameters used in the model.
Now from equation (3.17), substituting u 6 by 0 and u 1 + u 3 + u 4 by K, we get
TABLE 3.2: Values of the parameters used in the numerical simulation.

In this study, numerical simulations are carried out over 2000 days. Figure 3.2 shows the behaviour of the variables S_h, I_h, the secondary susceptible humans, R_h, S_m and I_m versus time using the parameters in Table 3.2, which are taken from Bakach (A survey of mathematical models of dengue fever). The variable S_h is shown in red, I_h in green, the secondary susceptible in blue, R_h in yellow, S_m in cyan and I_m in magenta.
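For readers who want to reproduce this kind of trajectory, the sketch below integrates a Ross-type structure with primary (Sh) and secondary (Sb) susceptible humans using SciPy. The incidence terms are the standard mass-action forms assumed here for illustration, the mosquito population is held constant, and all parameter values are placeholders rather than those of Table 3.2; the exact system is (3.19) in the text.

```python
# Minimal, assumption-laden sketch of a Ross-type dengue model with primary (Sh) and
# secondary (Sb) susceptible humans. Placeholder parameters; not the thesis system (3.19).
import numpy as np
from scipy.integrate import solve_ivp

p = dict(a=0.5, bh=0.4, bh_bar=0.4, bm=0.4, gamma_h=0.03, delta_h=0.01, mu_m=0.03, H=10_000)

def rhs(t, y):
    Sh, Ih, Sb, Rh, Sm, Im = y
    H = p["H"]
    new_primary = p["a"] * p["bh"] * Im * Sh / H        # bites infecting primary susceptibles
    new_secondary = p["a"] * p["bh_bar"] * Im * Sb / H  # bites infecting secondary susceptibles
    new_mosq = p["a"] * p["bm"] * Ih * Sm / H           # mosquitoes infected when biting Ih
    dSh = -new_primary
    dIh = new_primary + new_secondary - (p["gamma_h"] + p["delta_h"]) * Ih
    dSb = p["gamma_h"] * Ih - new_secondary             # recovered from 1-3 serotypes re-enter
    dRh = p["delta_h"] * Ih                             # recovered from all four serotypes
    dSm = -new_mosq + p["mu_m"] * Im                    # births = deaths: constant mosquitoes
    dIm = new_mosq - p["mu_m"] * Im
    return [dSh, dIh, dSb, dRh, dSm, dIm]

sol = solve_ivp(rhs, (0, 2000), [9_990, 10, 0, 0, 100_000, 0], max_step=1.0)
```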
Consequently, since the system (3.25) contains either u_2, u_5 or u_6, or both, which have a zero value, any nonnegative values of u*_1, u*_3 and u*_4 satisfy the system of equations. Therefore, E_VecCons,1 = (u*_1, 0, u*_3, u*_4, 0, 0) is an equilibrium point. Since the infected individuals are in u_2 and u_6, the next generation matrix is computed from these two compartments.

There exists an optimal control (w*_1(t), w*_3(t)) such that J(w*_1, w*_3) = min_{w∈W} J(w_1, w_3) under the constraint that (u_1, u_2, u_3, u_4, u_5, u_6) is a solution to the ordinary differential equation (3.28).

Proof. Let (w^n_1, w^n_3)_{n∈N} be a minimizing sequence of controls in W = [0, w_H]^2, i.e.

lim_{n→∞} J(w^n_1, w^n_3) = inf_{w∈W} J(w_1, w_3).

This sequence is bounded by w_H, and the sequential Banach-Alaoglu theorem is used to extract a subsequence weak* convergent to (w*_1, w*_3) in L^∞([0, T]; W). For n ∈ N, we denote by (u^n_1, u^n_2, ..., u^n_6) the solution corresponding to the control (w^n_1, w^n_3). Similarly to Lemma 3.2.2, one can prove that (u^n_1, u^n_2, ..., u^n_6) is nonnegative and uniformly bounded. Then, ((u^n_1, ...
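For readability, the cost functional J minimised above is quadratic in the controls. Written out in the indicative form below (an assumption on notation only: A_1 and A_3 denote the positive weights paired with w_1 and w_3, and the precise definition accompanies (3.28) in the thesis):

J(w_1, w_3) = ∫_0^T ( u_2(t) + (A_1/2) w_1²(t) + (A_3/2) w_3²(t) ) dt.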
FIGURE 4.1: Compartmental representation of the model with vaccination considering individuals who have previous dengue infections.

Let H be the human population subdivided into primary susceptible S_h, secondary susceptible, infected I_h and removed R_h humans. Primary susceptible humans are individuals who have not yet been infected by dengue, while secondary susceptible humans are individuals who previously had a dengue infection.
Solving for all possible values of x* that lie on Ω_Ento, we get E_Ento,1 = (u*_1, 0, u*_3, u*_4, 0, 0) and E_Ento,2 = (u*_1, 0, u*_3, u*_4, (1/β_m) ln(α_m/µ_m), 0). We state the following lemma.

Lemma 4.2.2. The system of equations (4.7) admits equilibria at E_Ento,1 and E_Ento,2.

Since system (4.10) contains either u_2, or u_6, or both, which have a zero value, any nonnegative values of u*_1, u*_3 and u*_4 satisfy the system of equations. Therefore, E_VecEnto,1 = (u*_1, 0, u*_3, u*_4, 0, 0) and E_VecEnto,2 = (u*_1, 0, u*_3, u*_4, (1/β_m) ln(α_m/(v_5 + µ_m)), 0) are equilibrium points of system (4.10).

Now, in solving for the next generation matrix, since equations (3.25) and (4.10) have the same u'_2 and u'_6, they have the same next generation matrix, whose eigenvalues are λ ... The system is locally asymptotically stable at E_VecEnto,2.

Following Theorem 4.2.6, we can also prove the theorems below.

Theorem 4.3.5. 1. If α_m < µ_m + v_5, then E_VecEnto,1 is globally asymptotically stable. 2. If α_m > µ_m + v_5 and R_0 < 1, then E_VecEnto,2 is globally asymptotically stable.

Theorem 4.3.8. 1. If α_m < µ_m + v_5, then E_CombEnto,1 is globally asymptotically stable. 2. If α_m > µ_m + v_5 and R_0 < 1, then E_CombEnto,2 is globally asymptotically stable.
TABLE 4.1: Parameter values used in the numerical simulations.
12.55% (1,255) infected humans over time. Nevertheless, vector control stands out if only the vaccination and vector control methods are compared. It would only take 34 days, with a maximum population of 19.68% (1,968) infected humans, for vector control to eliminate the infected humans. In comparison, vaccination takes 45 days, with 18.42% (1,842) infected humans. Without control, the infected humans would slowly decrease after reaching 60.07% (6,007) but would never be annihilated. Vaccinating secondary humans would only take 48 days to reach equilibrium, with a maximum of 54.94% (5,494) infected humans over time.
TABLE 5.1: Values of the parameters used for the simulations.

Figure 5.3 shows the behaviour of the infected humans with and without control strategies. It shows that using the control strategies would eliminate the infected humans over time, with an 8,715,766.164 maximum population. Having no control strategy increases this to 8,752,880.738 infected humans and an equilibrium of 877,196 infected humans over time.

FIGURE 5.4: Optimal solutions of each compartment in model (5.34) with w_{Y,max} = 23.96, w_{A,max} = 1.
P'(t) = γ_{L,P} L(t) − γ_{P,S_m} P(t) − µ_P P(t)
S'_m(t) = γ_{P,S_m} P(t) e^{−β_m P} − µ_A S_m(t) − a b_m I_h(t) S_m(t) − w_A S_m(t)
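These two equations sit inside the full stage-structured system (5.37). For convenience, a compact right-hand-side transcription is given below; its terms follow those used in the Hamiltonian of the next proof, so it is an illustrative sketch rather than the thesis code, and p is a dictionary of the Table 5.1 parameters.

```python
# Right-hand side of the stage-structured dengue system (5.37), transcribed for
# illustration. w = (wY, wA, wH) are the copepod, pesticide and vaccination controls.
import numpy as np

def rhs(t, y, w, p):
    E, L, P, Sm, Im, Sh, Ih, Rh = y
    wY, wA, wH = w
    dE  = p["alpha_m"] * (Sm + Im) - (p["gamma_EL"] + p["mu_E"]) * E
    dL  = p["gamma_EL"] * E - (p["gamma_LP"] + p["mu_L"] + wY) * L
    dP  = p["gamma_LP"] * L - (p["gamma_PSm"] + p["mu_P"]) * P
    dSm = (p["gamma_PSm"] * P * np.exp(-p["beta_m"] * P)
           - (p["mu_A"] + wA) * Sm - p["abm"] * Ih * Sm)
    dIm = p["abm"] * Ih * Sm - (p["mu_A"] + wA) * Im
    dSh = p["gamma_h"] * Rh - p["abh"] * Im * Sh - wH * Sh
    dIh = p["abh"] * Im * Sh - p["sigma_h"] * Ih
    dRh = p["sigma_h"] * Ih - p["gamma_h"] * Rh
    return np.array([dE, dL, dP, dSm, dIm, dSh, dIh, dRh])
```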
There exists an optimal control w* = (w*_Y(t), w*_A(t), w*_H(t)) such that J(w*_Y, w*_A, w*_H) = min_{w∈W} J(w_Y, w_A, w_H) under the constraint that (E, L, P, S_m, I_m, S_h, I_h, R_h) is a solution to the ordinary differential equation (5.37).
There exist adjoint variables λ_i, i = 1, ..., 8, of the system (5.37) that satisfy the following backward-in-time system of ordinary differential equations:

∂λ_1(t)/∂t = λ_1 µ_E + (λ_1 − λ_2)γ_{E,L}
∂λ_2(t)/∂t = λ_2(µ_L + w_Y) + (λ_2 − λ_3)γ_{L,P}
∂λ_3(t)/∂t = λ_3 µ_P + (λ_3 − λ_4(1 − β_m P)e^{−β_m P})γ_{P,S_m}
∂λ_4(t)/∂t = −λ_1 α_m + λ_4(µ_A + w_A) + (λ_4 − λ_5) a b_m I_h(t)
∂λ_5(t)/∂t = −λ_1 α_m + λ_5(µ_A + w_A) + (λ_6 − λ_7) a b_h S_h(t)
∂λ_6(t)/∂t = λ_6 w_H + (λ_6 − λ_7) a b_h I_m(t)
∂λ_7(t)/∂t = 1 + (λ_7 − λ_8)σ_h + (λ_4 − λ_5) a b_m S_m(t)
∂λ_8(t)/∂t = (λ_8 − λ_6)γ_h,

with the transversality condition λ(T) = 0. Moreover, the optimal control variables are given by

w*_Y = max( 0, min( λ_2 L / A_Y, w_M ) )
w*_A = max( 0, min( (λ_4 S_m + λ_5 I_m) / A_A, w_M ) )
w*_H = max( 0, min( λ_6 S_h / A_H, w_H ) ).

Proof. Using the Hamiltonian for the system (5.37), we have

H = L(w_Y, w_A, w_H) + λ_1(t)E'(t) + λ_2(t)L'(t) + λ_3(t)P'(t) + λ_4(t)S'_m(t) + λ_5(t)I'_m(t) + λ_6(t)S'_h(t) + λ_7(t)I'_h(t) + λ_8(t)R'_h(t),

where L(w_Y, w_A, w_H) = I_h(t) + (1/2)A_Y w_Y²(t) + (1/2)A_A w_A²(t) + (1/2)A_H w_H²(t). Taking the partial derivatives of H with respect to each state variable and setting ∂λ_i/∂t = −∂H/∂(state) yields the adjoint system above. Finally, setting the partial derivatives of H with respect to the control variables to zero and solving for w_Y, w_A and w_H gives the stated expressions for w*_Y, w*_A and w*_H.
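Numerically, an optimality system of this type (state equations forward, adjoint equations backward, controls given by projection formulas) is typically solved with a forward-backward sweep. The skeleton below is a generic sketch of that procedure, not the thesis implementation: state_rhs and adjoint_rhs are callables supplied by the user (for example the right-hand side sketched above), and only the iteration and the projection step taken from the formulas above are shown.

```python
# Generic forward-backward sweep sketch for the optimality system of (5.37).
# state_rhs(t, y, w) and adjoint_rhs(t, lam, y, w) are user-supplied callables;
# A = (A_Y, A_A, A_H) are the weights, w_max the control upper bounds.
import numpy as np

def forward_backward_sweep(state_rhs, adjoint_rhs, y0, T, N, A, w_max, tol=1e-4, iters=200):
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    w = np.zeros((N + 1, 3))                               # controls (wY, wA, wH)
    y = np.tile(np.asarray(y0, float), (N + 1, 1))
    lam = np.zeros((N + 1, 8))
    for _ in range(iters):
        for k in range(N):                                 # state: forward Euler
            y[k + 1] = y[k] + dt * state_rhs(t[k], y[k], w[k])
        lam[N] = 0.0                                       # transversality lam(T) = 0
        for k in range(N, 0, -1):                          # adjoint: backward in time
            lam[k - 1] = lam[k] - dt * adjoint_rhs(t[k], lam[k], y[k], w[k])
        # Projection formulas of the theorem above (chapter 5 sign convention).
        wY = np.clip(lam[:, 1] * y[:, 1] / A[0], 0.0, w_max[0])
        wA = np.clip((lam[:, 3] * y[:, 3] + lam[:, 4] * y[:, 4]) / A[1], 0.0, w_max[1])
        wH = np.clip(lam[:, 5] * y[:, 5] / A[2], 0.0, w_max[2])
        w_new = np.column_stack([wY, wA, wH])
        if np.max(np.abs(w_new - w)) < tol:                # stop when controls settle
            break
        w = 0.5 * (w + w_new)                              # relaxation for stability
    return w, y, lam
```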
The two methods have no significant difference in the number of maximum infected humans. They both have an 8,715,766.164 maximum
infected human population.
Concerning the individual application of the control strategies, applying the
copepod is the best strategy for minimizing infected humans. It had a minimum
population at the end of the time of 552,606.823 infected humans, whereas pesticide
and vaccination had 877,121.808 and 877,190.111 infected humans population at the
end of time. On the other hand, not applying a control strategy is not the best choice.
It increases the number of infected humans significantly, with an 8,752,880.738 pop-
ulation. However, it does not eliminate the infected humans and has 877,196.995
infected humans at the end of time.
Similarly, for the behavior of infected mosquitoes, figure 5.8 shows that in mini-
mizing the infected mosquito, there is no significant difference in the combination of
the three strategies (copepod, pesticide, and vaccination) and the two strategies (the
combination of copepod and pesticide). Both methods have a 1,105,864.772 max-
imum infected mosquito population on day one and equilibrium at 1,105,864.772
infected mosquito population. On the other hand, combining pesticides and vacci-
nation is not a good strategy since it is slow in minimizing the infected mosquito
population and has 6,886,581.680 maximum infected mosquitoes.
Meanwhile, for the individual application of the control strategy, copepod alone
is the best control in minimizing infected mosquitoes. It prevents the increase of
infected mosquitoes with a 2,305,570.072 maximum infected mosquito population,
exponentially decreasing over time towards equilibrium. Conversely, vaccination
alone is not a good strategy for minimizing the number of infected mosquitoes. It has
4,272,649,867.795 maximum infected mosquito and an equilibrium at 3,697,402,540
infected mosquito.
Equilibrium values for the competition induced by larvae and pupae versus by pupae only:

L* = ((γ_{P,S_m} + µ_P)/γ_{L,P}) P*,
S*_m = γ_{P,S_m} P* e^{−β_m P*} / µ_A,
P* = (1/β_m) ln( α_m γ_{E,L} γ_{L,P} γ_{P,S_m} / ( µ_A (γ_{E,L} + µ_E)(γ_{L,P} + µ_L)(γ_{P,S_m} + µ_P) ) ).
TABLE 6.1: Preferred plants of different mosquito species, from Barredo and DeGennaro (Not just from blood: Mosquito nutrient acquisition from nectar sources).
P' = γ_{L,P} L − γ_{P,S_m} P − µ_P P    (6.14)
∂S_m/∂t − D∆S_m = γ_{P,S_m} P e^{−β_m P} − µ_A S_m − a b_m I_h S_m    (6.15)
∂I_m/∂t − D∆I_m = a b_m I_h S_m − µ_A I_m    (6.16)

If the solution of the system above exists, then it is of the form (E, L, P, S_m, I_m, S_h, I_h, R_h) such that E ... (6.19).

Lemma 6.4.1.

|Φ_P(U)(t, x) − P_0(x)| ≤ γ_{L,P} T sup (|U(t, x)| + |U_0(x)|) ≤ γ_{L,P} T(r + ||U_0||) + ||U_0(x)||.
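For intuition on how the diffusion terms in (6.15)-(6.16) can be handled numerically, the sketch below advances the adult-mosquito densities by one explicit finite-difference step on a 1-D grid with zero-flux (Neumann) boundaries. It is an illustrative discretisation under stated assumptions, not the scheme used for the figures; an explicit step like this requires dt ≤ dx²/(2D) for stability.

```python
# One explicit time step of the adult-mosquito reaction-diffusion equations (6.15)-(6.16)
# on a 1-D grid with zero-flux (Neumann) boundaries. Illustrative discretisation only.
import numpy as np

def step_adults(Sm, Im, P, Ih, dt, dx, D, p):
    def laplacian(u):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2.0 * (u[1] - u[0]) / dx**2        # zero-flux at the left boundary
        lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2     # zero-flux at the right boundary
        return lap
    emergence = p["gamma_PSm"] * P * np.exp(-p["beta_m"] * P)   # pupae emerging as adults
    infection = p["abm"] * Ih * Sm                              # mosquitoes infected by Ih
    Sm_new = Sm + dt * (D * laplacian(Sm) + emergence - p["mu_A"] * Sm - infection)
    Im_new = Im + dt * (D * laplacian(Im) + infection - p["mu_A"] * Im)
    return Sm_new, Im_new
```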
Therefore, choose T ≤ (r − ||U_0||)/(γ_{L,P}(r + ||U_0||)). Then

sup_{t∈[0,T]} |Φ_P(U)(t, x) − P_0(x)| = sup_{t∈[0,T]} | γ_{L,P} ∫_0^t e^{−(γ_{P,S_m}+µ_P)(t−s)} L(s, x) ds + (e^{−(γ_{P,S_m}+µ_P)t} − 1) P_0(x) |
    ≤ γ_{L,P} ∫_0^t e^{−(γ_{P,S_m}+µ_P)(t−s)} |L(s, x)| ds + |e^{−(γ_{P,S_m}+µ_P)t} − 1| |P_0(x)|.

Similarly, |Φ_{I_h}(U)(t, x) − I_{h,0}(x)| becomes

|Φ_{I_h}(U)(t, x) − I_{h,0}(x)| ≤ a b_h ∫_0^t e^{−σ_h(t−s)} I_m(s, x) S_h(s, x) ds + |e^{−σ_h t} − 1| |I_{h,0}(x)|,

and, since ∫_0^t e^{−σ_h(t−s)} I_m(s, x) S_h(s, x) ds ≤ T sup_{t∈[0,T]} |I_m(t, x)||S_h(t, x)| ≤ T sup_{t∈[0,T]} |U(t, x)|², therefore, by equation (6.22), ...

• For the susceptible mosquito: let f(U) = γ_{P,S_m} P e^{−β_m P} − a b_m I_h S_m. Then f(U) is a Lipschitz function in B_T. If U_1 and U_2 are in B_T, then

f(U_1) − f(U_2) = γ_{P,S_m}(P_1 − P_2)e^{−β_m P_1} + γ_{P,S_m} P_2 (e^{−β_m P_1} − e^{−β_m P_2}) − a b_m (I_{h,1} S_{m,1} − I_{h,2} S_{m,2}).

Because I_{h,1} S_{m,1} − I_{h,2} S_{m,2} = (I_{h,1} − I_{h,2}) S_{m,1} + (S_{m,1} − S_{m,2}) I_{h,2}, and e^{−β_m P} is locally 1-Lipschitz, we have ...

... in C([0, T], R)^8, of the initial boundary value problem. Moreover, the solution is nonnegative, S_h + I_h ≤ H_0, E + L + P ≤ M_{Y,0} and S_m + I_m ≤ M_{A,0}.

Proof. Combining Lemmas 6.4.1, 6.4.2 and 6.4.3, one can apply Picard's fixed point theorem, which provides the local well-posedness. Positivity and boundedness of E, L, P, S_m, I_m, S_h, I_h and R_h follow from the model without space (see Lemma 5.3.2).
6.5 Optimal Control Strategies: Copepods, Pesticides & Vaccination

Our aim in this section is to minimise the number of infected humans while minimising the control inputs. We attribute three control inputs: w_Y for the percentage of young mosquitoes exposed to copepods, w_A for the percentage of adult mosquitoes exposed to pesticides, and w_H for the percentage of vaccinated susceptible humans. Furthermore, we assume that the control inputs are piecewise continuous functions that take their values in the positively bounded set W = [0, w_{Y,max}] × [0, w_{A,max}] × [0, w_{H,max}].

Let U = (E, L, P, S_m, I_m, S_h, I_h, R_h) and let w = (w_Y, w_A, w_H) be the control inputs. We consider the problem: minimise over w
(S m (x, t) + I m (x, t)) + γ E,L E(x, t) + µ E E(x, t) = 0 (6.23) D∆S mγ P,S m P(x, t)e -β m P(x,t) + µ A S m (x, t) + ab m I h (x, t)S m (x, t) + w A S m (x, t) = 0 (6.26)∂I m (x, t) ∂t -D∆I mab m I h (x, t)S m (x, t) + µ A I m (x, t) + w A I m (x, t) = 0 (6.27) dS h (x, t) dt γ h R h (x, t) + ab h I m (x,t)S h (x, t) + w H S h (x, t) = 0 (6.28) dI h (x, t) dt ab h I m (x, t)S h (x, t) + σ h I h (x, t) = 0 (6.29)
dE(x, t) dt -α m dL(x, t) dt -γ E,L E(x, t) + γ L,P L(x, t) + µ L L(x, t) + w Y L(x, t) = 0 (6.24)
dP(x, t) t . -γ L,P L(x, t) + γ P,S m P(x, t) + µ P P(x, t) = 0 (6.25)
∂S m (x, t) -
∂t
t)(6.23) + λ 2 (x, t)(6.24) +λ 3 (x, t)(6.25) + λ 4 (x, t)(6.26) + λ 5 (x, t)(6.27) +λ 6 (x, t)(6.28) + λ 7 (x, t)(6.29) + λ 8 (x, t)(6.30) dtdX.
Computing one by one by integration by parts,
For the expression with Ω T 0 λ 1 (x, t)(6.23)dtdX we have
D∆S mγ P,S m P(x, t)e -β m P(x,t) + µ A S m (x, t)+ ab m I h (x, t)S m (x, t) + w A S m (x, t) dtdX
For the expression with Ω T 0 λ 4 (x, t)(6.26)dtdX we have
Ω 0 T λ 4 (x, t)(6.26)dtdX = -= Ω T 0 λ 4 (x, t) ∂S m (x, t) ∂t Ω T 0 λ 4 (x, t) ∂S m (x, t) ∂t dtdX - Ω 0 T λ 4 (x, t)D∆S m dtdX
- Ω 0 T λ 4 (x, t)γ P,S m P(x, t)e -β m P(x,t) dtdX + Ω 0 T λ 4 (x, t)µ A S m (x, t)dtdX
∂t t) -γ L,P L(x, t)
+ γ P,S m P(x, t) + µ P P(x, t) dtdX
= Ω 0 T λ 3 (x, t) ∂P(x, t) ∂t dtdX - Ω 0 T λ 3 (x, t)γ L,P L(x, t)dtdX
T T
+ Ω 0 λ 3 (x, t)γ P,S m P(x, t)dtdX + Ω 0 λ 3 (x, t)µ P P(x, t)dtdX
= - Ω λ 3 (x, 0)P(x, 0)dX - Ω 0 T P(x, t) ∂λ 3 (x, t) ∂t dtdX
T
- Ω 0 λ 3 (x, t)γ L,P L(x, t)dtdX
T T
+ Ω 0 λ 3 (x, t)γ P,S m P(x, t)dtdX + Ω 0 λ 3 (x, t)µ P P(x, t)dtdX
Since λ(x, T) = 0 and our equations satisfy the Neumann boundary condition, the diffusion terms can be transferred onto the adjoint variables by two integrations by parts:

∫_Ω ∫_0^T λ_4(x,t) D∆S_m dt dX = ∫_0^T ∫_Ω S_m D∆λ_4 dX dt,
∫_Ω ∫_0^T λ_5(x,t) D∆I_m dt dX = ∫_0^T ∫_Ω I_m D∆λ_5 dX dt.

Multiplying each state equation (6.23)-(6.30) by the corresponding adjoint variable λ_1, ..., λ_8, integrating over Ω × [0, T], integrating the time derivatives by parts and combining the results, the Lagrangian

L = ∫_Ω ∫_0^T [ I_h(x,t) + (1/2) A_Y w_Y^2(x,t) + (1/2) A_A w_A^2(x,t) + (1/2) A_H w_H^2(x,t) ] dt dX
  + ∫_Ω µ^T g(U(x,0), w) dX - ∫_Ω λ(x,0) U(x,0) dX + Σ_{i=1}^{8} ∫_Ω ∫_0^T λ_i(x,t) (i-th state equation) dt dX

can be rewritten by factoring out, for each of the state variables E, L, P, S_m, I_m, S_h, I_h and R_h, the common terms that multiply it.

Since the total derivative of L is equal to zero at the minimum, i.e. (∂L/∂w)·∂w + (∂L/∂U)·∂U = 0, setting the partial derivatives of L with respect to the state variables to zero yields the adjoint system:

∂λ_1(x,t)/∂t = λ_1(x,t) µ_E + (λ_1(x,t) - λ_2(x,t)) γ_{E,L}
∂λ_2(x,t)/∂t = λ_2(x,t) (µ_L + w_Y) + (λ_2(x,t) - λ_3(x,t)) γ_{L,P}
∂λ_3(x,t)/∂t = λ_3(x,t) µ_P + (λ_3(x,t) - λ_4(x,t) (1 - β_m P(x,t)) e^{-β_m P(x,t)}) γ_{P,S_m}
∂λ_4(x,t)/∂t - D∆λ_4 = -λ_1(x,t) α_m + λ_4(x,t) (µ_A + w_A) + (λ_4(x,t) - λ_5(x,t)) a b_m I_h(x,t)
∂λ_5(x,t)/∂t - D∆λ_5 = -λ_1(x,t) α_m + λ_5(x,t) (µ_A + w_A) + (λ_6(x,t) - λ_7(x,t)) a b_h S_h(x,t)
∂λ_6(x,t)/∂t = λ_6(x,t) w_H + (λ_6(x,t) - λ_7(x,t)) a b_h I_m(x,t)
∂λ_7(x,t)/∂t = 1 + (λ_7(x,t) - λ_8(x,t)) σ_h + (λ_4(x,t) - λ_5(x,t)) a b_m S_m(x,t)
∂λ_8(x,t)/∂t = (λ_8(x,t) - λ_6(x,t)) γ_h.

Theorem 6.5.2. The optimal control variable w* is defined as

w*_Y(t) = max(0, min(λ_2 L / (-A_Y), w_{Y,max})),
w*_A(t) = max(0, min((λ_4 S_m + λ_5 I_m) / (-A_A), w_{A,max})),
w*_H(t) = max(0, min(λ_6 S_h / (-A_H), w_{H,max})).

Proof. The partial derivatives of L with respect to the controls are written

∂L/∂w_Y = A_Y w_Y(x,t) + λ_2(x,t) L(x,t),
∂L/∂w_A = A_A w_A(x,t) + λ_4(x,t) S_m(x,t) + λ_5(x,t) I_m(x,t),
∂L/∂w_H = A_H w_H(x,t) + λ_6(x,t) S_h(x,t).

Furthermore, almost everywhere in Ω, we have A_Y w_Y + λ_2 L = 0, A_A w_A + λ_4 S_m + λ_5 I_m = 0 and A_H w_H + λ_6 S_h = 0, that is,

w_Y = λ_2 L / (-A_Y),  w_A = (λ_4 S_m + λ_5 I_m) / (-A_A),  w_H = λ_6 S_h / (-A_H).

Projecting these expressions onto the admissible intervals [0, w_{Y,max}], [0, w_{A,max}] and [0, w_{H,max}] gives the stated characterization.
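In practice, this characterization is typically exploited inside a forward-backward sweep: the state system is integrated forward with the current controls, the adjoint system is integrated backward from λ(x, T) = 0, and the controls are updated by the projected expressions above. The fragment below is only a minimal numerical sketch of that control-update step; variable names are illustrative and the routine is not part of the thesis's code.

```python
import numpy as np

def project_controls(lam2, lam4, lam5, lam6, L, S_m, I_m, S_h,
                     A_Y, A_A, A_H, w_Y_max, w_A_max, w_H_max):
    """Pointwise projection of the unconstrained optimality conditions
    onto the admissible control boxes [0, w_max]."""
    w_Y = np.clip(-lam2 * L / A_Y, 0.0, w_Y_max)
    w_A = np.clip(-(lam4 * S_m + lam5 * I_m) / A_A, 0.0, w_A_max)
    w_H = np.clip(-lam6 * S_h / A_H, 0.0, w_H_max)
    return w_Y, w_A, w_H
```

In a full sweep, the returned controls are usually relaxed (e.g., averaged with the previous iterate) before the next forward integration to help convergence.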
Table 6.1 gives the values used for the diffusion coefficients. The remaining parameters are summarized in the table below.
TABLE 6.2: Value of the parameters used for the simulations.
Parameters | Description | Value | Source
α_m | Oviposition | 1 day^-1 | [13]
γ_{E,L} | Transformation from egg to larva | 0.330000 day^-1 | [13]
γ_{L,P} | Transformation from larva to pupa | 0.140000 day^-1 | [13]
γ_{P,S_m} | Transformation from pupa to adult mosquito | 0.346000 day^-1 | [13]
µ_E | Mortality rate of egg | 0.050000 day^-1 | [13]
µ_L | Mortality rate of larva | 0.050000 day^-1 | [13]
µ_P | Mortality rate of pupa | 0.016700 day^-1 | [13]
µ_A | Mortality rate of adult mosquito | 0.042000 day^-1 | [13]
γ_h | Rate of decline in human immunity to disease | 0.575000 day^-1 | [43]
σ_h | Rate of cure for disease | 0.328833 day^-1 | [43]
ab_m | Probability of susceptible mosquitoes to be infectious | 0.375000 day^-1 | [43]
ab_h | Probability of susceptible humans to be infected | 0.750000 day^-1 | [43]
w_{Y,max} | Upper bound of young mosquitoes exposed to copepods | 23.96 | [53]
w_{A,max} | Upper bound of adult mosquitoes exposed to fogging | 0.65 | [47]
w_{H,max} | Upper bound of vaccinated susceptible humans | 0.8 | [35, 64]
The adjoint problem is then modified as follows:

∂λ_1(x,t)/∂t = λ_1(x,t) (µ_E - (α_m/k_lay)(S_m + I_m)) + (λ_1(x,t) - λ_2(x,t)) γ_{E,L}
∂λ_4(x,t)/∂t - D∆λ_4 = -λ_1(x,t) α_m (1 - E/k_lay) + λ_4(x,t) (µ_A + w_A) + (λ_4(x,t) - λ_5(x,t)) a b_m I_h(x,t)
∂λ_5(x,t)/∂t - D∆λ_5 = -λ_1(x,t) α_m (1 - E/k_lay) + λ_5(x,t) (µ_A + w_A) + (λ_6(x,t) - λ_7(x,t)) a b_h S_h(x,t).
FIGURE 6.12: Behavior of eggs, larvae, and pupae in different capacities of the laying sites (one panel per carrying capacity, k_lay = 1000, 5000 and 10000, with the populations plotted against time).
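The qualitative behavior summarized in Figure 6.12 can be reproduced by integrating the aquatic stages numerically with the rates of Table 6.2. The sketch below is only an illustration: it assumes the logistic-type oviposition term α_m A (1 - E/k_lay) used above and keeps the adult population A constant, so it is not the thesis's full coupled model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rates from Table 6.2 (day^-1)
alpha_m, gamma_EL, gamma_LP, gamma_PSm = 1.0, 0.33, 0.14, 0.346
mu_E, mu_L, mu_P = 0.05, 0.05, 0.0167
A_adult = 500.0  # assumed constant adult population (illustrative only)

def aquatic_rhs(t, y, k_lay):
    E, L, P = y
    dE = alpha_m * A_adult * (1.0 - E / k_lay) - (mu_E + gamma_EL) * E
    dL = gamma_EL * E - (mu_L + gamma_LP) * L
    dP = gamma_LP * L - (mu_P + gamma_PSm) * P
    return [dE, dL, dP]

for k_lay in (1000, 5000, 10000):
    sol = solve_ivp(aquatic_rhs, (0, 60), [100.0, 0.0, 0.0], args=(k_lay,), max_step=0.1)
    print(f"k_lay={k_lay}: E={sol.y[0, -1]:.0f}, L={sol.y[1, -1]:.0f}, P={sol.y[2, -1]:.0f} at t = 60 d")
```

As in the figure, larger carrying capacities mainly raise the plateau reached by the egg compartment, and the larva and pupa compartments follow with a short delay.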
[Figure: effect on the maximum of infected humans I_h. (A) Effect of the variations of c_l and c_f on the maximum of I_h. (B) Effect of the variations of α and β on the maximum of I_h. In each panel the tested parameter values range from 1e-04 to 0.001 and the maximum of I_h remains between 648.00 and 648.08.]
Then integrating over time this equation provides R(t) - R(0) = ∫_0^t [γ_s I_s(s) + γ_a I_a(s) + γ_u U(s)] ds and R* - R(0) = ∫_0^{+∞} [γ_s I_s(s) + γ_a I_a(s) + γ_u U(s)] ds, which is finite. Furthermore, γ_s I_s + γ_a I_a + γ_u U goes to 0 as t → +∞, and each term of this sum does thanks to the positivity of the solution. Adding the two first equations implies that (S + E)' = -(δ_e + ν_e)E, and S + E is a nonnegative decreasing function whose derivative tends to zero. Then E(t) → 0 and S(t) → S* as t → +∞.
To better reflect the time dynamics of the disease, the effective reproduction number
R_eff(t) = ( ω β_e/(δ_e + ν_s) + f β_s/(γ_s + µ_s + ν_s) + (1 - f) β_a/γ_a ) S(t)/N(t)
is represented in Figure A.2D and values of R are computed in Table A of Figure A.2.
TABLE A.1: Comparison between the maximum number of symptomatic infected and deaths without control and the solutions reducing the contact rate to 0, with 100% of exposed under treatment, and with 100% of symptomatic infected under treatment, to reach T_c = 1 and T_c = 1000. Interventions are assumed to begin 53 days after the first confirmed infection.
Solving the determinant of this matrix and factoring out the common terms, let S and T be equal to
S = (-µ_A - λ)(-σ_h - λ) - σ_h µ_A R_0^2
and
T = (-γ_{E,L} - µ_E - λ)(-µ_A - λ)(-γ_{P,S_m} - µ_P - λ)(-γ_{L,P} - µ_L - λ) + µ_A γ_{E,L} γ_{L,P} γ_{P,S_m} R_Y (1 - ln(α_m/(µ_A R_Y))),
respectively. Then we have 0 = (-λ) · (-γ_h - λ) · S · T, and the remaining roots of the characteristic polynomial are obtained by expanding S and T.
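The expansion of S, which is not reproduced above, is straightforward; the short derivation below works it out (the remark on the sign of its roots is a standard observation, added here for clarity rather than quoted from the thesis).

```latex
\begin{aligned}
S &= (-\mu_A - \lambda)(-\sigma_h - \lambda) - \sigma_h \mu_A R_0^{2} \\
  &= \lambda^{2} + (\mu_A + \sigma_h)\,\lambda + \mu_A \sigma_h \left(1 - R_0^{2}\right).
\end{aligned}
```

Since the linear coefficient µ_A + σ_h is positive, both roots of S have negative real parts exactly when the constant term µ_A σ_h (1 - R_0^2) is positive, i.e. when R_0 < 1, which is the usual threshold behavior.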
A.1 Introduction
In late 2019, a disease outbreak emerged in a city of Wuhan, China. Several mathematical models have been proposed from various epidemiological groups. These models help governments as an early warning device about the size of the outbreak, how quickly it will spread, and how effective control measures may be. However, due to the limited emerging understanding of the new virus and its transmission mechanisms, the results are largely inconsistent across studies.
In this paper, we will mention a few models and, in the end, propose one of our own.
Gardner and his team [START_REF] Gardner | Modeling the spreading risk of 2019-ncov. 31 january[END_REF] proposed one such model. Here, we propose an extension of the classical SEIR model by adding a compartment of asymptomatic infected individuals. We address the challenge of predicting the spread of COVID-19 by giving our estimates for the basic reproduction number R_0 and the effective reproduction number R_eff. Afterward, we also assess risks and interventions via a containment strategy or treatments of exposed and symptomatic infected individuals.
The rest of the paper is organized as follows. Section 2, outlines our methodology. Here the model was explained, where the data was taken, and its parameter estimates. Section 3 contained the qualitative analysis for the model. Here, we give the closed-form equation of reproductive number R 0 , then tackling the best strategy to reduce transmission rates. Finally, section 4 outlines our brief discussion on some measures to limit the outbreak.
A.2 Materials and methods
A.2.1 Confirmed and death data
In this study, we used the publicly available dataset of COVID-19 provided by the Johns Hopkins University [START_REF] Dong | An interactive web-based dashboard to track covid-19 in real time[END_REF]. This dataset includes many countries' daily count of confirmed cases, recovered cases, and deaths. Data can be downloaded from https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data.
These data are collected through public health authorities' announcements and are directly reported public and unidentified patient data, so ethical approval is not required.
A.2.2 Mathematical model
The dynamics is governed by a system of six ordinary differential equations (ODE). Note that the total living population follows N'(t) = -µ_s I_s - µ_u U, while the death toll is computed by D'(t) = µ_s I_s + µ_u U. We assume that there is no new recruitment. The parameters are described in the corresponding parameter table.
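The six equations themselves are not reproduced above. The sketch below is therefore only a plausible reconstruction of such a compartmental right-hand side (susceptible S, exposed E, symptomatic I_s, asymptomatic I_a, treated U, recovered R), written to be consistent with the recovery flows, the identities for N'(t) and (S + E)', and the structure of R_eff quoted earlier; the coupling terms and the inflow to U are assumptions, not the authors' equations.

```python
def seiaur_rhs(t, y, p):
    """Plausible SEIAUR right-hand side (illustrative reconstruction only)."""
    S, E, Is, Ia, U, R = y
    N = S + E + Is + Ia + U + R  # living population
    # Force of infection from exposed (attenuated by omega), symptomatic and asymptomatic
    lam = (p["omega"] * p["beta_e"] * E + p["beta_s"] * Is + p["beta_a"] * Ia) / N
    dS = -lam * S
    dE = lam * S - (p["delta_e"] + p["nu_e"]) * E
    dIs = p["f"] * p["delta_e"] * E - (p["gamma_s"] + p["mu_s"] + p["nu_s"]) * Is
    dIa = (1 - p["f"]) * p["delta_e"] * E - p["gamma_a"] * Ia
    dU = p["nu_e"] * E + p["nu_s"] * Is - (p["gamma_u"] + p["mu_u"]) * U
    dR = p["gamma_s"] * Is + p["gamma_a"] * Ia + p["gamma_u"] * U
    return [dS, dE, dIs, dIa, dU, dR]
```

With this structure, R' = γ_s I_s + γ_a I_a + γ_u U, N' = -µ_s I_s - µ_u U and (S + E)' = -(δ_e + ν_e)E, as used in the analysis above.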
A.2.3 Parameters estimation
Calibration is made before intervention. Thus it is set ν e = ν s = 0. The model is made up of eleven parameters θ = (ω, β e , β s , β a , δ e , f , γ s , µ s , γ a , γ u , µ u ) that need to be determined. Given, for n days, the observations I s,obs (t i ) and D obs (t i ), the cost function consists of a nonlinear least square function
with constraints θ ≥ 0, and 0 ≤ f ≤ 1. Here I s (t i , θ) and D(t i , θ) denote output of the mathematical model at time t i computed with the parameters θ. The optimization problem is solved using Approximate Bayesian Computation combined with a quasi-Newton method [START_REF] Csilléry | Approximate bayesian computation (abc) in practice[END_REF].
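As a rough illustration of such a calibration (the ABC/quasi-Newton pipeline of the paper is not reproduced here), a plain constrained nonlinear least-squares fit could be sketched as follows; `simulate_Is_D` is a hypothetical helper returning the model outputs I_s(t_i, θ) and D(t_i, θ).

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, t_obs, Is_obs, D_obs, simulate_Is_D):
    """Stacked residuals between model outputs and observations."""
    Is_model, D_model = simulate_Is_D(theta, t_obs)  # hypothetical model wrapper
    return np.concatenate([Is_model - Is_obs, D_model - D_obs])

def calibrate(theta0, t_obs, Is_obs, D_obs, simulate_Is_D):
    # theta = (omega, beta_e, beta_s, beta_a, delta_e, f, gamma_s, mu_s, gamma_a, gamma_u, mu_u)
    lower = np.zeros_like(theta0)            # theta >= 0
    upper = np.full_like(theta0, np.inf)
    upper[5] = 1.0                           # 0 <= f <= 1
    fit = least_squares(residuals, theta0, bounds=(lower, upper),
                        args=(t_obs, Is_obs, D_obs, simulate_Is_D))
    return fit.x
```

This only illustrates the shape of the optimization problem; the paper's actual procedure combines Approximate Bayesian Computation with a quasi-Newton method, as stated above.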
A.3.2 Model resolution
To calibrate the model, our simulations start the day of the first confirmed infection and finish before interventions to reduce the disease. Therefore ν_e and ν_s are assumed to be equal to 0. We assume that the whole population of the country is susceptible to the infection. Seven states with comparable populations are chosen. The objective function J is computed to provide a relative error of order less than 10^-2.
A.4 Discussion
Without intervention, we observe in Figures A.4-A.5 that the number of susceptible S is decreasing; most of the individuals are recovering, which generates population immunity. This means that the disease-free equilibrium is globally asymptotically stable. Nevertheless, the price to pay is high, the number of deaths being excessive.
As presented in Figure A.2, the effective reproduction is decreasing and points out that control has to be done as fast as possible.
The other important information is that, as discovered by Danchin et al. [START_REF] Danchin | A new transmission route for the propagation of the sars-cov-2 coronavirus[END_REF],
an alternative transmission route may occur. Here, it is due to the proportion of asymptomatic infected individuals, which is not negligible, as shown in Table A
Finally, with the little knowledge about COVID-19 nowadays, decreasing transmission, i.e. β e , β s , β a , is the preferred option. The simplest choice consists in reducing contact between individuals. Table A.1 and Figures A.4-A.5-A.6 show that total and partial containment do indeed drastically reduce the disease. However, the duration of containment may be too long and then impracticable especially if we aim at totally eradicating the infection (T c = 1). Instead, to stop the containment as soon as the capacity of the hospitals has been reached could be privileged. When this criterion is set to 1000 patients (T c = 1000), the duration goes from 104 to 39 days for France. A similar reduction in duration is also obtained for other countries. Again, we see that the earlier the intervention, the more effective it is. Due to the high number of susceptible, it is worth noting that the effective reproduction number remains large after containment. Screening tests, especially to carry out exposed individuals, are then necessary to be carried out, and the positive individuals are quarantined.
Modélisation mathématique de l'invasion des ravageurs et application à la lutte contre les maladies transmises par les ravageurs aux Philippines
Résumé. La dengue est une infection virale transmise par les moustiques dans les régions tropicales et subtropicales du monde entier. Il s'agit d'une infection virale causée par quatre types de virus (DENV-1, DENV-2, DENV-3, DENV-4), qui se transmettent par la piqûre de moustiques femelles infectés (Aedes aegypti) et (Aedes albopictus) pendant la journée. Le premier vaccin à être utilisé commercialement est le CYD-TDV, commercialisé sous le nom de dengvaxia par Sanofi Pasteur. Dengvaxia est un vaccin vivant des sérotypes 1, 2, 3 et 4. Il doit être administré en trois doses de 0,5 ml par voie sous-cutanée (SC) à six mois d'intervalle. Sanofi Pasteur recommande que le vaccin ne soit utilisé que chez les personnes âgées de 9 à 45 ans et chez les personnes déjà infectées par un type de virus.
Cette thèse présente un modèle épidémique de type Ross pour décrire l'interaction entre les humains et les moustiques. Après avoir établi le nombre de reproduction de base R 0 et la stabilité des équilibres, nous présentons trois stratégies de contrôle : la vaccination, le contrôle des vecteurs par l'application de pesticides, et l'introduction de copépodes comme contrôle vectoriel pour les larves. Le principe du maximum de Pontryagin est utilisé pour caractériser le contrôle optimal, et des simulations numériques sont appliquées pour déterminer les stratégies les mieux adaptées à la population.
Dans le dernier chapitre, nous avons défini un nouveau modèle décrivant explicitement la distribution spatiale des moustiques adultes. Dans ce modèle d'équations aux dérivées partielles, nous avons montré qu'en appliquant le théorème du point fixe de Picard, l'existence et l'unicité d'une solution faible globale en temps. Nous déterminons la stratégie de contrôle optimale en appliquant trois contrôles : l'exposition au copépode w Y pour les jeunes moustiques dans les zones de pontes, le pesticide w A pour les moustiques adultes, et l'application de la vaccination w H pour les humains.
Nos résultats montrent que la vaccination des humains sensibles secondaires uniquement n'est pas idéale. Cela demande un effort constant et prend beaucoup de temps pour les vacciner. Par ailleurs, les copépodes et les pesticides constituent une stratégie efficace pour éliminer la maladie et les populations de moustiques. Cependant, le retour à l'équilibre est lent. La combinaison des pesticides et de la vaccination semble moins efficace que la combinaison des copépodes et des pesticides. Il faut moins de temps pour réduire le nombre de moustiques infectieux avec une durée d'application de la lutte réduite.
Mots-clés. Dengvaxia, Vaccination, R 0 , Contrôle optimal, Principe du maximum de Pontryagin
Mathematical modeling of pest invasion and application to pest-borne disease control in the Philippines
Abstract. Dengue is a mosquito-borne viral infection found in tropical and subtropical regions worldwide. It is a viral infection caused by four types of viruses (DENV-1, DENV-2, DENV-3, DENV-4), which transmit through the bite of infected Aedes aegypti and Aedes albopictus female mosquitoes during the daytime. The first vaccine to be used commercially is CYD-TDV, marketed as dengvaxia by Sanofi Pasteur. Dengvaxia is a live vaccine of serotypes 1, 2, 3, and 4. It should be administered in three doses of 0.5 mL subcutaneous (SC) six months apart. Sanofi Pasteur recommended that the vaccine only be used in people between the age of 9 to 45 and people already infected by one type of virus.
This thesis presents a Ross-type epidemic model to describe the vaccine interaction between humans and mosquitoes using different population growth models. After establishing the basic reproduction number R 0 and the stability of the equilibrium, we present three control strategies: vaccination, vector control through pesticide application, and the introduction of copepods as a vector control for larvae. Pontryagin's maximum principle is used to characterize optimal control, and numerical simulations are applied to determine which strategies best suit the population.
In the last chapter, we defined a new model with an explicit spatial distribution of adult mosquitoes. In this model made of partial differential equations, we have shown that by applying Picard's fixed point theorem, the existence and uniqueness of global in time weak solution. We determine the optimal control strategy by applying three control: exposure to copepod w Y for the young mosquitoes in the laying sites, pesticide w A for the adult mosquitoes, and application of vaccination w H for the humans.
Our results show that vaccinating secondary susceptible humans only is not ideal. It requires constant effort and takes a long time to vaccinate them. Also, copepods and pesticides are a good strategy for eliminating the disease and mosquito populations. However, the recovery of infected humans is slow. The combination of pesticide and vaccination seems less efficient than the combination of copepods and pesticides. It takes a shorter time to reduce the number of mosquitoes with a reduced duration of the control application.
Keywords. Dengvaxia, Vaccination, R_0, Optimal Control, Pontryagin maximum principle
This document is an extended version of the annotation guidelines of the FSOV SamBlé project. It includes the addition of the phenotype entity and the normalization of the phenotype and trait entities. The rest of the entity and relations of SamBlé guidelines are not considered here.
Introduction
This document specifies the guidelines for the annotation of the Wheat Trait and Phenotype D2KAB corpus. The task consists of the extraction of plant species, traits, and phenotypes of bread wheat varieties in a set of scientific texts (Pubmed abstracts). Species are annotated by the NCBI taxonomy entries, trait, and phenotype entities are normalized by classes from the Wheat Trait and Phenotype Ontology.
Copyright and License
Copyright 2022 by Institut National de la Recherche Agronomique.
The Guidelines for Annotation of Wheat Trait and Phenotypes are made available under a Creative Commons Attribution-ShareAlike 4.0 License (CC-BY-SA). To view a copy of the license, visit: http://creativecommons.org/licenses/by-sa/4.0/
Conventions
In the examples, Trait annotations are highlighted in light green, Phenotype annotations in dark green. Species annotations are highlighted in turquoise.
Mentions
Trait
Trait definition
The traits refer to the observable characters or properties but do not include the phenotype, which is the observable value of the trait.
Examples
In tests for resistance to P. triticina race 5, plants
wheat cultivars to provide protection from WSM
Boundaries
The trait mention may include the name of the species that expresses the trait: resistance to Wheat streak mosaic virus (WSMV); or of the pathovar:
In tests for resistance to P. triticina race 5, plants
The trait includes the properties of the plant that condition the trait: High-temperature adult-plant (HTAP) resistance to stripe rust of wheat; genes that confer seedling resistance in Chinese wheat cultivars
Discontinuous annotation
Distinct entities must be annotated separately.
resistance to both WSMV and Triticum mosaic virus
Apposition
A trait mention that includes appositions must be annotated in a single fragment (not discontinuous) if it is part of the mention. current knowledge about genes for resistance to Septoria tritici blotch (STB) of wheat
If the apposition is a distinct mention, it is annotated separately.
Plant height (PHT) is a crucial trait related to plant architecture
Trait domain
Mentions of traits are expressions and phrases that denote a characteristic of the plant. This includes:
-morphology of the plant or part of the plant (color, size, etc.);
-response to biotic and abiotic stress (tolerance, damage, etc.);
-development (flowering time, growth habit, etc.);
-quality (grain hardness, starch content, etc.)
This excludes:
-Diseases, symptoms, pests
-Diagnostic or observation methods
-Environmental conditions (temperature, humidity, etc.)
When the trait mentioned characterizes an organism that is not a plant but to which the trait is applicable, it is annotated.
improve the viability of E. coli under heat and cold stress increase the Saccharomyces cerevisiae tolerance under salt and osmotic stress.
Over general trait mention
When a trait mention is too generic or too imprecise, it must not be annotated. The following list is a vocabulary of terms that are too generic:
-"trait"
-"character"
-"resistance" without any further specification
Phenotype
Phenotype definition
The phenotype is the value of the trait.
Examples
cultivars that originally were resistant to leaf rust
Eight dwarf and semi-dwarf varieties, covering a range of genetic sources
Boundaries
The phenotype mention may include the name of pathogen species that expresses the trait if it is part of the phenotype name resistant to Wheat streak mosaic virus (WSMV).
or of the pathovar accessions that were resistant to the Ug99 race group
If the pathogen name is overgeneral or too vague to identify the species, then it is not annotated accessions that were resistant to the Type I and II
The phenotype mention includes the properties of the plant that condition the trait conditions for the trait wheat cultivar Kariega expresses complete adult plant resistance against stripe rust for field resistance to the Ug99 stem rust pathogen The Yr18 gene is known to confer slow rusting resistance in adult plants resistant at the adult plant stage
Counter-example
The parental accessions were susceptible to all the prevalent pathotypes at the seedling stage,
The word "phenotype" itself should not be included: in relation to the lodging-resistant phenotype in wheat
Discontinuous annotation
Distinct entities must be annotated separately.
resistant to both WSMV and Triticum mosaic virus
It happens that the phenotype term does not include the name of the trait. However, it is annotated.
and highly-susceptible cultivar Wheaton
Apposition
A phenotype mention that includes appositions must be annotated in a single fragment (not discontinuous). evaluation of fusarium head blight resistant (FHB) wheat germplasm
Phenotype domain
Mentions of phenotypes are expressions and phrases that denote a value for the characteristic of the plant. This includes:
-the morphology value of the plant or part of the plant (white as a color, small as size, etc.); -the response value to biotic and abiotic stress (resistant, highly susceptible, etc.); -development value (winter habit, etc.) -quality (shriveled grain, etc.) This excludes:
-Diseases, symptoms, pests -Results of diagnostic or observation methods -Values of environmental conditions (high temperature, low humidity, etc.)
The phenotype must not be confused with the trait, or the environmental factor.
Photoperiod has an important effect on plant growth Photoperiod is not a phenotype of the plant, but a factor of the environment.
Over general phenotype mention
A phenotype mention must not be annotated when it is too generic or imprecise. Single adjectives or adverbs denoting a value should not be annotated. High, low, and extremely are examples. The following list is a vocabulary of terms that are too generic:
-"phenotype"
-"value"
Taxon
Taxon definition
Mentions of a taxon are expressions and phrases that denote a name of a plant. This includes:
-scientific names;
-vernacular names;
-infra species when defined in the reference NCBI taxonomy (https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?).
This excludes Varieties or Cultivars that are created for experimental purposes. The bread wheat taxon (Triticum aestivum) annotates wheat and common wheat mentions by default.
improve the efficiency of designed breeding in wheat
The Durum mention is annotated by the durum wheat taxon Triticum turgidum subsp. by default.
Tunisian durum landraces
the short arm of chromosome 2D of common wheat (Triticum aestivum L.).
Over general taxon mention
A taxon mention must not be annotated when it is too generic or imprecise. The following list is a vocabulary of terms that are too generic:
-"plant"
Boundaries
The name of the taxon should include the binomial scientific name
Triticum aestivum
The name of the taxon should include the binomial scientific name, the authority, and the year when specified: Triticum aestivum L., 1753
The name of the taxon includes the variety, the subspecies, and the crossing when defined in the NCBI taxonomy Triticum aestivum var. lutescens Triticum aestivum subsp. hadropyrum
Agropyron x Elymus
The vernacular name of a taxon includes the adjective that is necessary to specify the taxon but excludes unnecessary modifiers. resistant bread wheat varieties synthetic hexaploid wheat
Normalization
Normalization of traits and phenotypes
Trait and phenotype mentions are associated with one relevant class of the Wheat Trait and Phenotype Ontology, version 2.2, through the attribute WTO.
In simple cases, the label or the synonym of the WTO class is close to the trait or phenotype mention. resistance to Wheat streak mosaic virus (WSMV) is associated to WTO:0000568 'resistance to wheat streak mosaic virus'
In more complex cases, the terms may strongly differ.
When no phenotype class can be associated to a phenotype mention, then the mention is associated to the trait that corresponds to the phenotype. Reduced starch content accounts for most of the reduction in grain dry matter at high temperature.
Reduced starch content -> WTO:0000131 "grain starch content"
reduction in grain dry matter -> WTO:0000131 "dry matter yield"
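The guidelines do not prescribe a file format, but, as an illustration, the normalized annotations of the example above could be encoded as simple records like the following (field names are hypothetical, not part of the guidelines):

```python
# Hypothetical machine-readable encoding of normalized annotations (illustrative only).
annotations = [
    {
        "mention": "Reduced starch content",
        "type": "Phenotype",
        "normalization": {"ontology": "WTO", "id": "WTO:0000131", "label": "grain starch content"},
    },
    {
        "mention": "reduction in grain dry matter",
        "type": "Phenotype",
        "normalization": {"ontology": "WTO", "id": "WTO:0000131", "label": "dry matter yield"},
    },
]
```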
Normalization of taxa
Taxon mentions are associated with one relevant class of the NCBI taxonomy (https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?).
In simple cases, the label or the synonym of the NCBI taxon is close to the taxon mention.
Triticum aestivum is associated with the Taxonomy ID 4565 'Triticum aestivum'. In other cases, the mention is a synonym (e.g., bread wheat), an abbreviation, or an acronym.
Tout d'abord, je tiens à remercier mon directeur de thèse, qui m'a accepté au sein de l'équipe de recherche Commande Véhicule (CoVE) pour mon doctorat, M. Ahmed El Hajjaji. Je lui dois ma reconnaissance pour son aide concernant les démarches administratives que j'ai eu à effectuer et pour son soutien lors des périodes difficiles durant ma thèse. Je tiens également à remercier M. Olivier Pages, mon co-encadrant, qui a toujours été très minutieux et m'a corrigé dans les moindres détails mes résultats et mes publications. J'espère pouvoir continuer à collaborer avec vous dans le futur.
Je remercie M. Thierry Guerra, et M. Michel Zasadzinski, les rapporteurs de ce mémoire qui ont accepté de consacrer une partie de leur précieux temps pour me faire part de leurs remarques et questions. Je remercie également M. Driss Mehdi, et
Introduction Générale et Résume
Avant-propos
La thèse a été menée au sein de l'équipe de recherche COVE (Commande et Vehicles) du laboratoire MIS (Modélisation, Information et Systèmes) réalisée dans le cadre d'un doctorant. Le sujet de la thèse porte sur la « Stabilité et stabilisation des systèmes de retard à paramètres linéaires et variables dans le temps avec saturation des actionneurs ». Nous nous concentrons sur la résolution des problèmes de stabilité et de stabilisation des systèmes LPV et quasi-LPV incluant certaines contraintes de performances (retard variable dans le temps, saturation des actionneurs, variations de paramètres, perturbations externes, etc.).
Contexte et motivations
Les systèmes physiques contiennent des non-linéarités et des dynamiques variant dans le temps. Il est possible d'approximer le comportement non linéaire d'un état du système à partir de la linéarisation. Les méthodes de linéarisation envisagées pour les systèmes non linéaires pourraient être divisées en trois catégories [START_REF] Vidyasagar | Nonlinear Systems Analysis[END_REF]): 1-Linéarisation autour d'un équilibre; 2-Linéarisation globale; et 3-Linéarisation autour d'une trajectoire d'état. Cette représentation des systèmes LPV sera étudiée ici.
L'intérêt des techniques LPV consiste en une approximation des systèmes non linéaires par des systèmes dépendant de paramètres avec l'hypothèse où l'ensemble des paramètres est supposé compact. Le comportement du système non linéaire est linéarisé localement autour de la trajectoire des paramètres variant dans le temps. Sur la base de ces hypothèses, l'analyse des critères de robustesse, de stabilité et de performance des systèmes LPV sont déduites. L'analyse de la robustesse des incertitudes dynamiques (non-linéarités, dynamique négligée, etc.) et des paramètres incertains (connaissance incertaine sur les grandeurs physiques simulées) a reçu une attention considérable. La théorie de la commande linéaire à paramètres variables (LPV) joue un rôle clé dans la gestion des incertitudes ou des inexactitudes. Les transformations fractionnaires linéaires (LFT) et les inégalités matricielles linéaires (LMI) ont été développées pour traiter l'analyse robuste de la stabilité et les performances des systèmes LPV et quasi-LPV. D'une manière générale, la fonction de Lyapunov quadratique (QLF) joue un rôle central dans l'analyse de la stabilité et la stabilisation des systèmes LPV via les LMI. Cependant, une fonction de Lyapunov quadratique peut être trop conservatrice pour une analyse de stabilité robuste car elle impose l'existence d'une seule matrice définie positive de Lyapunov vérifiant un ensemble de LMI. Il en résulte une dégradation des performances (conservatisme) pour des exigences multi-objectifs, par exemple, contraintes sur les entrées et les sorties, saturation des actionneurs, retard variant dans le temps, etc. Suite à cet argument, une condition de stabilité robuste dérivée du lemme réel borné à l'échelle est vi généralement moins conservatrice que le QLF. Une question intéressante ressort de cette discussion : comment pourrions-nous exploiter plus d'informations sur le système et améliorer la flexibilité des conditions de conception ? Ainsi, la première motivation consiste à assouplir les conditions de stabilité basées sur les QLF.
En raison de la nature des systèmes paramétriques, un problème dans la technique d'analyse LPV repose sur les formulations dérivées associées à l'ensemble compact de paramètres réellement représentés comme des conditions de dimension infinie. Ainsi, des méthodes de relaxation des LMI paramétrés ont été proposées pour formuler efficacement des problèmes d'analyse base sur l'optimisation convexe impliquant des contraintes LMI de finie dimension. En conséquence, une synthèse des méthodes de relaxation pour les LMI paramétrés a été proposée dans [START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF][START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF][START_REF] Tuan | Relaxations of parameterized LMIs with control applications[END_REF].
De manière générale, la stabilité robuste utilisant une fonction de Lyapunov dépendante des paramètres (PDLF, qui semble plus adaptée à la synthèse de contrôle LPV) a été bien étudiée. Cependant, le problème de conception de la commande n'est pas entièrement résolu en présence de contraintes de saturation des actionneurs. Ce problème crucial sera étudié en détail et constitue l'une des principales contributions.
Dans l'analyse de la conception des systèmes de commande, un phénomène observé dans de nombreux systèmes d'ingénierie est la saturation des actionneurs. À première vue, l'effet de cette non-linéarité peut paraître simple, mais une analyse inappropriée ou ignorant ses effets peuvent entraîner une dégradation des performances ou une instabilité du système. La saturation des actionneurs est inévitable dans l'ingénierie des systèmes dynamiques pratiques concernant les limites physiques (vitesse, tension, cycle, etc.) et les contraintes de sécurité (pression, température, puissance, consommation d'énergie, etc.). C'est pourquoi nous devons trouver une méthode de stabilisation consistant à replacer les points de fonctionnement du système d'asservissement dans la région sans élément sature. Au cours des dernières décennies, une attention considérable a été accordée aux systèmes LTI soumis à la saturation des actionneurs, voir par exemple [START_REF] Hu | Control Systems with Actuator Saturation[END_REF][START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF][START_REF] Zaccarian | Modern Anti-windup Synthesis: Control Augmentation for A tuator Saturation[END_REF] et les références qui s'y trouvent. D'une manière générale, il existe deux approches principales pour effectuer l'analyse de stabilisation dans la littérature. La première considère les bornes de saturation dans la stratégie de conception. Dans la seconde part, une synthèse stabilisante asymptotique proposée pour un système en boucle fermée ne tient pas compte des bornes de commande. Ensuite, une stratégie de conception appropriée analysera pour compenser la saturation, telle que Direct Linear Anti-windup (DLAW) ou Model Recovery Anti-windup (MRAW). Le domaine anti-windup a fait l'objet de discussions approfondies au cours des dernières décennies. Nous pouvons nous rendre compte que l'analyse de stabilisation en boucle fermée pour DLAW et MRAW est plus compliquée en tenant compte de l'effet du comportement non linéaire et de la dynamique incertaine. Néanmoins, la construction DLAW dépasse le cadre de la thèse, elle ne sera donc pas incluse.
Enfin, une autre contribution porte sur les retards, des phénomènes de retard temporel sont observés dans divers systèmes d'ingénierie tels que les procédés chimiques, les transmissions mécaniques, les transmissions hydrauliques, les processus métallurgiques et les vii systèmes de contrôle en réseau. La stabilité et la stabilisation des systèmes retards (TDS) ont reçu une attention considérable dans la pratique et la théorie du contrôle. Le retard peut être classé en différentes approches en fonction de la caractéristique ou du comportement de réponse du retard au système. La littérature sur la stabilité et la stabilisation des systèmes retardés porte sur les systèmes LTI, les systèmes LPV, le domaine temporel l'analyse de stabilité basée sur Lyapunov et l'approche basée sur les valeurs propres.
De plus, les conditions dépendantes du retard dérivées de l'analyse de stabilité et de stabilisation via la technique LMI basée sur la fonction de Lyapunov-Krasovskii sont rencontrées dans de nombreux articles de la littérature. La majorité des résultats de la littérature concerne les systèmes LPV/quasi-LPV avec retards sans prise en compte des effets de saturation des actionneurs. Peu de techniques de synthèse sont disponibles pour la stabilisation robuste des systèmes LPV retards avec des actionneurs contraints ce sera notre contribution. De plus, les conditions de stabilisation de ces classes de systèmes sont généralement des inégalités matricielles non linéaires (NMI ou NLMI), qui sont généralement des problèmes polynomiaux non déterministes (NP-hard).
Une structure appropriée de LKF peut faire référence aux termes intégraux supplémentaires, aux vecteurs d'état croissants et aux approches de partitionnement de retard/fragmenté qui se sont révélées extrêmement efficaces pour réduire la résolution des conditions de stabilité. Il convient de noter que plus les matrices de variables d'écart utilisées sont nombreuses, plus l'analyse des conditions de stabilité dépendantes au retard est compliquée. En conséquence, ces approches sont le compromis entre la relaxation de la condition de stabilité et celle du calcul.
Le troisième objectif de cette thèse est de fournir la stratégie de conception de contrôle la moins restrictive pour les systèmes LPV et quasi-LPV avec des contraintes de retard variant dans le temps et de saturation. Outre l'utilisation de LKF appropriés, de variables d'écart et de bornes de saturation, pour obtenir des conditions plus flexibles, la méthode proposée est un équilibre entre conservatisme et réduction de la complexité de calcul.
Plan de thèse
Cette thèse est organisée selon les chapitres suivants :
Le Chapitre 1 donne une introduction générale et un résumé de la thèse.
Le Chapitre 2 donne un aperçu des représentations de la famille des systèmes LPV. Ensuite, les propriétés dépendantes des paramètres implicites dans les LMIs dérivées sont linéarisées par les méthodes de relaxation. La convergence asymptotique peut être obtenue lors de la résolution d'un ensemble de conditions d'inégalité matricielle. Enfin, une synthèse détaillée de la stabilité des approches non quadratiques de Lyapunov donne une approche pour stabiliser les systèmes LPV.
Le Chapitre 3 est consacré à l'analyse de la stabilité et de la stabilisation robuste de systèmes dépendant de paramètres sans contraintes de saturation. La première contribution sur l'algorithme d'itération optimale concave utilisant des blocs de paramètres diagonaux est présentée et comparée à la littérature existante. viii Le Chapitre 4 traite l'analyse et la synthèse du contrôleur de programmation de gain saturé avec des inégalités de stabilisation plus strictes basées sur la fonction paramétrique de Lyapunov telle que PDLF et la fonction floue de Lyapunov (FLF). Les résultats ont été obtenus par la méthode de relaxation appliquée aux conditions PDLMI obtenues. Le chapitre se termine par la troisième contribution de l'analyse de stabilisation pour les systèmes LPV avec contraintes de saturation.
Le Chapitre 5 traite de l'analyse de stabilité et de la conception de contrôle pour les systèmes LPV avec retard utilisant une fonction convexe appropriée basée sur la fonction de Lyapunov-Krasovskii. Une nouvelle condition de stabilité dépendante du retard est donnée à l'aide de la fonctionnelle de Lyapunov-Krasovskii dépendante des paramètres (PDLKF) combinée à la bounding technique. Cette approche fournit une inégalité plus étroite pour délimiter l'intégrale quadratique d'un vecteur étendu. Plusieurs types de stabilité dans les cadres de valeur de retard de mémoire exacte et de retard approximatif sont étudiés. La condition de stabilité du retard de mémoire incertaine est considérée comme convenant à l'exigence de mise en oeuvre. Enfin, l'efficacité des conditions PDLMI proposées est démontrée par des résultats d'analyse de stabilité par rapport aux méthodes existantes pour les systèmes linéaires invariants dans le temps et LPV.
Le Chapitre 6 contribue à la stabilisation des systèmes à retard variable dans le temps LPV avec saturation de l'actionneur. En incluant le délai de mémoire approché, un contrôleur par retour d'état et par retour de sortie dynamique sont présenté. Ensuite, des conditions PLMI nécessaires et suffisantes ont été proposées pour garantir une stabilisation résiliente à mémoire respectant les contraintes de saturation. Par rapport aux résultats existants récents, cette méthode fournit une performance améliorée avec une borne supérieure du retard. La discussion finale démontre les caractéristiques efficaces de cette stabilisation.
Table of Content
Remerciements i Introduction Générale et Résume v
Table of Content ix
List of Figures xii
List of Tables xiv
Notation xv
Acronyms xvi
Chapter 1. General Introduction and Summary
List of Figures
General Introduction and Summary
Foreword
The thesis was conducted at the "COVE" research team of MIS laboratory throughout a Ph.D. research. The subject of the dissertation is "Stability and Stabilization of linear parameter-varying and time-varying delay Systems with Actuators Saturation." We focus on solving the stability and stabilization problems for the LPV and quasi-LPV systems including some performance constraints (time-varying delay, actuator saturation, parameter variations, external disturbances, etc.).
Context and motivations
Since the last decade of the 20th century, the guarantee of the stability and the robustness analysis of the system robustly against the influence of dynamical uncertainties (nonlinearities, neglected dynamics, etc.) and uncertain parameters (uncertain knowledge about simulated physical values, the time variations of these values during operation) have received considerable attention of engineering control community. The robustness analysis and linear parameter-varying (LPV) control theory plays a key role in handling uncertainties or inaccuracies. The Linear Fractional Transformations (LFTs) and Linear Matrix Inequalities (LMIs) have been developed to deal with the robust stability and performance analysis for both LPV and quasi-LPV systems.
Generally speaking, the quadratic Lyapunov function (QLF) plays a central role in analyzing the stability and stabilization of LPV systems via LMIs. However, a quadratic Lyapunov function may be too conservative for robust stability synthesis because it imposes the existence of a single Lyapunov positive definite matrix verifying a set of LMIs. But, due to some particular properties of the implementation, a considerable approach has been developed or analyzed the ℋ∞ gain-scheduling controller with this modest case. And, because it cannot characterize slow variation parameters, it results in a degradation performance (conservatism) for multi-objective requirements, for example, hard time constraints, actuators saturation, time-varying delay, etc. Followed this argument, a robust stability condition derived from the scaled-bounded real lemma is typically less conservative than the QLF. An interesting question arises from this discussion: how could we exploit more system information and improve the flexibility of design conditions? So, the first motivation engages in relaxing the QLF-based stability conditions by taking advantage of the parametric properties of the scaling structure.
Due to the nature of parametric systems, one critical issue in the LPV analysis technique is the derived formulations associated with the compact set of parameters actually represented as infinite dimension LMI conditions. Thereby, popular relaxation methods of Parameterized LMIs have been proposed to efficiently formulate analysis problems as convex optimization problems involving a finite LMI constraints. Generally speaking, the robust stability using a parameter-dependent Lyapunov function (PDLF, which seems more suitable for LPV control synthesis) is well-studied and well-advanced. However, the control design issue is not completely resolved in the presence of actuator saturation constraints. This crucial problem will be studied in detail and constitutes one of the main motivations of this thesis.
It is well known that actuator saturation can lead to performance degradation or even instability in several engineering systems. As a result, actuator saturation is considered as an exciting challenge in control system design. The problem of the saturated feedback controller design for these classes of dynamics systems constitutes an interesting problem for both theoretical and practical reasons. Stability analysis and control synthesis of saturated LPV systems are generally divided into two different strategies: (1) Anti-windup scheme, (2) Saturation nonlinearities.
It can be seen that the conventional input constraint imposed by a small-gain theorem is related to the input-output approach in a strict manner. So, the less conservative method, such as generalized sector condition (GSC), gives an extra degree of freedom in the stabilization synthesis used in this literature. Besides, the GSC is appropriate for the parameter-dependent LMI conditions and well suits extension for delay-dependent stabilization.
On the other side, the delay-dependent conditions derived from the stability and stabilization analysis via the Lyapunov-Krasovskii functional based LMI technique is encountered in many papers of literature. The majority of the results in the literature concerns LPV/quasiLPV systems with time delay without taking into account the actuator saturation effects. Few synthesis techniques are available for the robust stabilization of timed LPV systems with constrained actuators. Moreover, the stabilization conditions for these classes of systems are usually nonlinear matrix inequalities (NMI or NLMI), which are usually nondeterministic (NP-hard) polynomial problems. Therefore, the stability analysis and stabilization of saturated LPV/quasi-LPV systems with time delay become a more attractive challenge.
The third objective of this thesis is to propose less restrictive control design strategy for LPV and quasi-LPV systems with time-varying delay and saturation constraints. Besides, the use of appropriate LKFs, slack-variables, saturation bounds, to obtain more flexible design conditions, the proposed method is a balance between conservatism and computational complexity reduction.
Chapter 3 is devoted to the analysis of Robust Stability and Stabilization of parameter-dependent systems without saturation constraints. An improved solution of the Robust T-S Fuzzy controller stabilizing analysis with ℒ2 norm-bounded input constraints is presented. The first contribution about the concave optimal iteration algorithm using diagonal parameter blocks is presented and compared with the existing literature.
Chapter 4 discusses the analysis and synthesis of the saturated gain-scheduling controller with tighter stabilizing inequalities based on the parametric Lyapunov function such as PDLF and fuzzy Lyapunov function (FLF). The less conservative results were attained by the proposed relaxation method applied to the designed PDLMI conditions. The chapter is concluded with the third contribution of stabilization analysis for LPV systems with saturation constraints.
Chapter 5 deals with stability analysis and control design for LPV time-delay systems using an appropriated Lyapunov-Krasovskii functional based convex function. A new delay-dependent stability condition is addressed using parameterdependent Lyapunov-Krasovskii functional (PDLKF) combined with the advanced bounding technique. This approach provides a tighter inequality for bounding the quadratic integral of an extended vector. Several types of stability in both exact-memory delay value and approximate delay frameworks are studied. The uncertain-memory delay stability condition is considered suiting the implementation requirement. Finally, the effectiveness of the proposed PDLMI conditions are demonstrated through stability analysis results compared with the existing methods for both linear time-invariant (LTI) and LPV time-delay systems.
Chapter 6 contributes to the stabilization of LPV time-varying delay systems with actuator saturation. Including the approximated-memory delay, a more general controller is introduced for both state feedback and dynamic output feedback controllers. Then, necessary and sufficient PLMI conditions have been proposed 1.4. Publications to guarantee a memory-resilient stabilization respecting the saturation constraints (sector nonlinearities). Besides, the optimization of the estimation of the domain of attraction (DOA) is analyzed. The proposed method is validated by considering several numerical examples. Compared to the recent existing results, this method provides an enhanced performance conforming to a higher upper bound of the delay value. The final discussion demonstrates the efficient characteristics of saturation stabilization.
Publications
The thesis is based on the following publications and other studies in the process of being submitted.
Chapter 2. Overview Linear Parameter-Varying Systems
Overview Linear Parameter-Varying Systems
Physical systems are practically involved with nonlinearities and time-varying dynamics. It is possible to approximate the nonlinear behavior of a system state in the range of nominal operating conditions, usually referred to as linearization [START_REF] Isidori | Nonlinear Control Systems[END_REF][START_REF] Khalil | Nonlinear Systems[END_REF]. The linearization methods considered for nonlinear systems could be supposedly characterized into three categories [START_REF] Vidyasagar | Nonlinear Systems Analysis[END_REF]): 1-Linearization around an equilibrium; 2-Global linearization; and 3-Linearization around a state trajectory. § Linear Time-Invariant (LTI) system as a representative for first method related with the simplest analysis and synthesis techniques, which is expressed by linearizing the dynamic systems around the neighborhood of equilibrium points. During the operation, the presence of nonlinearities with a wide range of variation (include dynamical uncertainty, saturation, and inaccurate knowledge of dynamics....) causes the inaccurate linearization occurs over equilibrium conditions. § The multi-model representation of Linear Time Varying (LTV) systems also called Linear Differential Inclusions (LDI) is used to represent trajectories of a nonlinear system by a set of trajectories of LDI in the entire operating range. Nevertheless, this linearization method could be conservative because the approximated trajectories that are sometimes not actual trajectories of the given system. § Parameter Dependent systems as a typical for third method where the nonlinear system can be approximated by a family of linearization or the parameterized linearization. Since the proposed method is valid around a state trajectory rather than a single equilibrium point, then it can characterize a nonlinear system in a wider range of operating conditions than LTI. This representation of the LPV systems that will intensively study here.
This chapter provides a non-exhaustive overview of the linear parameter-varying (LPV) systems used to approximate nonlinear systems according to the trajectories of parameters. Depending on the characteristics of the parameter, it can classify as linear time-varying (LTV), LPV systems [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF]Mohammadpour & Scherer, 2012;[START_REF] Shamma | Analysis and design of gain scheduled control systems[END_REF][START_REF] Wu | Control of linear parameter varying systems[END_REF], or quasi-linear parameter varying (quasi-LPV or qLPV) systems, for example, the representation of T-S fuzzy systems [START_REF] Lam | Polynomial Fuzzy Control Systems Stability Analysis and Control Synthesis Using Membership Function-dependent[END_REF][START_REF] Takagi | Fuzzy identification of systems and its applications to modeling and control[END_REF][START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF]. In addition, LPV systems can also classify according to the parameter properties, e.g., the intrinsic (endogenous) or extrinsic (exogenous), the physical properties (e.g., parametric uncertainty, and dynamical parameters), and mathematical significance (continuous/discrete, smooth or non-smooth, continuous derivative, etc.).
The benefit of LPV techniques consists in an approximation of the characterizations of the nonlinear systems to the parameter-dependent systems. Where the compact sets of parameters and their derivatives are the prerequisite of the system design hypothesis. The behavior of the nonlinear system is linearized locally around the trajectory of the timevarying parameters. Based on these assumptions, the analysis of robustness, stability, and performance criteria of the LPV systems thus simplifies as on LTV or LTI systems (Apkarian, [START_REF] Apkarian | Parameter-Dependent Lyapunov Functions for Robust Control of Systems with Real Parametric Uncertainty[END_REF]Apkarian & Gahinet, 1995;[START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF].
In the first part, in section 2.1, we discuss commonly used framework to represent LPV systems along with some applications. Corresponding to each representation is a characteristic approach for the stability analysis provided in section 2.2. The stability synthesis for the LPV system via the Lyapunov function results in the parametric conditions. In which the convex optimization linear matrix inequality (LMI) tools cannot solve these conditions directly. As a result, a synthesis of relaxing methods for the parameterized LMIs has been methodically discusses in [START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF][START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF][START_REF] Tuan | Relaxations of parameterized LMIs with control applications[END_REF] including: § The gridding technique with uniform density [START_REF] Wu | Control of linear parameter varying systems[END_REF][START_REF] Wu | Induced L2-norm control for LPV systems with bounded parameter variation rates[END_REF], and the meshing parametric affine [START_REF] Apkarian | Parametrized LMIs in control theory[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF]. § The convex combination of multi-LTI systems is known as the polytopic paradigm (Apkarian, Gahinet, et al., 1995;[START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF][START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF], and the T-S fuzzy model [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF][START_REF] Tuan | New Fuzzy Control Model and Dynamic Output Feedback Parallel Distributed Compensation[END_REF]Tuan, Apkarian, et al., 2001). § The multiplier-based linear fraction transformation (LFT) use D-scaling to capture the behavior of the parameter with the additional inequality on the multiplier quadratic in the scheduling block (Apkarian & Gahinet, 1995;[START_REF] Packard | Gain scheduling via linear fractional transformations[END_REF]. The matrix algebraic transformations of LFT LPV based on S-procedure can be found at [START_REF] Boyd | Linear Matrix Inequalities in System and Control Theory[END_REF][START_REF] Pólik | A Survey of the S-Lemma[END_REF]. § The sum of squares (SoS) relaxation-based LPV stability synthesis on the SoS decomposition for multivariate polynomials can be efficiently computed using semidefinite programming [START_REF] Parrilo | Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization[END_REF][START_REF] Parrilo | Semidefinite programming relaxations for semialgebraic problems[END_REF].
The parameterized linear matrix inequality (PLMI) can be converted into finite-dimensional inequalities. Where the feasible solution is obtained by solving the LMI conditions. The methodologies and numerical examples of LPV stability synthesis are introduced at the end of section 2.2. It should be notice that the design requirements conformed to the LPV framework such as, robust stability and performance e.g., ℋ∞ criterion, or the fullblock S-procedure (FBSP) will be presented in Appendix B.
In section 2.3, we recall some fundamental definitions like the region of linearity, region of attraction and regions of asymptotic stability and the developments concerned within the dissertation, with respect to main problems of the stability analysis and the stabilization of linear parameter-varying and time-delay systems with saturating inputs. The characterization of sets of admissible initial states and admissible disturbances plays a central role in stability analysis as well as in the synthesis of stabilizing control laws when saturation occurs.
Introduction of LPV/quasi-LPV systems
Let us introduce a generalized expression of the LPV system that is studied throughout this dissertation under the forms of a non-autonomous non-stationary system of ordinary differential equations: where vectors ,,,, nprmd xtytztutwt RRRRR are respectively the state of the system, the measured output, the regulated output, the control input and the external disturbance. The behavior of LPV system (2.1) depends on the behavior of the parameters. From the point of view of the physical meaning of parameters (e.g., measurability, endogenous or exogenous parameters, etc.), or mathematical properties (i.e., continuous/discontinuous parameters, differentiable/non-differentiable parameter, etc.), that provides a classification for LPV modeling or the appropriate method for system stability analysis and control system design strategy.
Endogenous Parameters and Exogenous Parameters
Let consider a time-continuous state space system: (2.3) where the exogenous parameters 12 sin2, cos/31, 1 tttt are in- dependent of the system state. It can be noticed that these parameters have continuous and bounded derivatives.
On the other hand, the time-varying parameters can also characterize the states of the nonlinear system, for instance (2.5) In this case, when the parameters are functions of states, they are usually classified as endogenous parameters. The representation systems are referred to as a quasi-LPV system (Briat, 2015a;[START_REF] Hoffmann | A survey of linear parameter-varying control applications validated by experiments or high-fidelity simulations[END_REF][START_REF] Lovera | LPV Modelling and Identification: An Overview[END_REF][START_REF] Rotondo | Quasi-LPV modeling, identification and control of a twin rotor MIMO system[END_REF][START_REF] Shamma | An Overview of LPV Systems[END_REF]. It is interesting to note that systems (2.3) and (2.5) have a similar LPV representation.
Continuous (discontinuous) parameter values with continuous (discontinuous) derivative
In addition, parameter behaviors can also be classified by mathematical properties such as discrete or continuous value, smooth or non-smooth functions, and differentiability or non-differentiability. Let's consider an example:
$$\theta(\cdot):\ \mathbb{R}\ \to\ \mathcal{B}=\{0,1\}, \tag{2.6}$$
where $\mathcal{B}=\{0,1\}$ is the image set of the function $\theta(\cdot)$ mapping from $\mathbb{R}$ to $\mathcal{B}$. In this case, the parameter trajectory is a switching signal between piecewise-constant values. Systems whose parameters take discrete values can be considered as hybrid systems (deterministic and stochastic switched cases). Interested readers can refer to the literature [START_REF] Alwan | Theory of Hybrid Systems: Deterministic and Stochastic[END_REF][START_REF] Boukas | Stochastic Switching Systems[END_REF][START_REF] Briat | Stability analysis and state-feedback control of LPV systems with piecewise constant parameters subject to spontaneous poissonian jumps[END_REF][START_REF] Briat | Stability analysis and stabilization of LPV systems with jumps and (piecewise) differentiable parameters using continuous and sampled-data controllers[END_REF][START_REF] Chatterjee | Stability analysis of deterministic and stochastic switched systems via a comparison principle and multiple Lyapunov functions[END_REF][START_REF] Colaneri | Dwell time analysis of deterministic and stochastic switched systems[END_REF][START_REF] Teel | Stability analysis for stochastic hybrid systems: A survey[END_REF]. Generally, bounding the derivative of the parameters often interferes with the stability analysis: the state dynamics are theoretically unbounded, whereas by definition the parameters are bounded. Nonetheless, assuming that the functions mapping the state to the parameter domain are bounded for every state is too strong an assumption. Let us borrow a simple example from [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF] to make this clear: the synthesis of stabilization conditions for a single-input single-output (SISO) LPV system yields a condition of the form $\dot{x}(t)\in[a,b]$. Then, when the closed-loop system is implemented with the obtained controller, the trajectory exhibits a stable behavior, but the derivative of the state leaves the assumed bounded region, e.g., $[a,b]=[-1,1]$. It is therefore not reasonable to conclude that the closed-loop system is stable on the domain $[a,b]$, and the analysis has to start over with enlarged bounds on the derivative of the state, e.g., $[a,b]=[-2,2]$; but increasing the limits also leads to more conservative results. The compact parameter sets are denoted hereafter by $\theta(t)\in\mathcal{U}_{\theta}$ and $\dot{\theta}(t)\in\operatorname{conv}(\mathcal{U}_{\dot{\theta}})$. Hereafter, we inherit the mathematical definition of convex optimization from [START_REF] Boyd | Linear Matrix Inequalities in System and Control Theory[END_REF][START_REF] Boyd | Convex Optimization[END_REF]) and use computational toolboxes such as the MATLAB LMI toolbox [START_REF] Gahinet | LMI Control Toolbox For Use with MATLAB[END_REF], the Yalmip toolbox [START_REF] Lofberg | YALMIP : a toolbox for modeling and optimization in MATLAB[END_REF], the SeDuMi toolbox for semidefinite programming (SDP) [START_REF] Sturm | Using SeDuMi 1.02, A Matlab toolbox for optimization over symmetric cones[END_REF], and the Mosek cone-programming toolbox (E D [START_REF] Andersen | On implementing a primal-dual interiorpoint method for conic quadratic optimization[END_REF][START_REF] Andersen | The Mosek Interior Point Optimizer for Linear Programming: An Implementation of the Homogeneous Algorithm[END_REF] to solve the convex optimization problems.
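To make the toolbox workflow concrete, the following is a minimal sketch of how a basic Lyapunov LMI feasibility problem is posed and passed to an SDP solver. It is written in Python with CVXPY purely for illustration (an assumption made here; the toolboxes named above are the MATLAB-based ones actually used in this work), and the matrix `A` is an arbitrary example, not taken from the thesis.

```python
import numpy as np
import cvxpy as cp

# Stable LTI test matrix (illustrative values only).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6

constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov inequality

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("status:", prob.status)   # 'optimal' means a feasible P exists
print("P =\n", P.value)
```

The same pattern, with one inequality per vertex or per grid point, is reused for the polytopic and gridding tests discussed later in this chapter.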
In the following sections, the LPV/qLPV paradigms are presented along with a summary of the stability synthesis of LPV systems based on the Lyapunov technique, expressed through parameterized LMIs.
Affine LPV Systems
An affine LPV (ALPV) system is a linear parameter-varying system whose matrices are affine functions of the scheduling parameters. Consider a dynamic system whose state matrix depends affinely on the parameter vector $\theta(t)=(\theta_1(t),\dots,\theta_{N_p}(t))$:
$$A(\theta(t)) = A_0 + \sum_{i=1}^{N_p}\theta_i(t)\,A_i .$$
For the sake of simplicity, the parameter-dependent matrix expression $A(\theta(t))$ is abbreviated to $A_\theta$. This is one of the most common LPV system formulations encountered in control synthesis, where the affine dependence leads to a low degree of conservatism of the stability conditions.
Polynomial Systems
Polynomial parameter formulations are widely used for the modeling and control design of LPV systems. A polynomial parameter-dependent state-space representation can be expressed with matrices that are polynomial functions of the scheduling parameters, e.g., $A(\theta)=A_0+\sum_{i}\sum_{j}\theta_i^{\,j}A_{ij}$ (see (2.58) below). It should be noted that a polynomial system can be obtained directly from the Taylor expansion of the nonlinear expressions. A general formulation can be found in the literature [START_REF] Parrilo | Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization[END_REF][START_REF] Parrilo | Semidefinite programming relaxations for semialgebraic problems[END_REF][START_REF] Sato | Robust stability/performance analysis for linear timeinvariant polynomially parameter-dependent systems using polynomially parameter-dependent Lyapunov functions[END_REF].
Polytopic Systems
The polytopic LPV formulation is a convex combination of vertex systems and is widely used in the framework of control synthesis for LPV systems. It was introduced in the early 1990s in the literature [START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF][START_REF] Feron | Analysis and synthesis of robust control systems via parameter-dependent Lyapunov functions[END_REF](Gahinet et al., 1994)[START_REF] Gahinet | Explicit controller formulas for LMI-based H ∞ synthesis[END_REF], which addresses the robust stability and robust performance of uncertain systems. Generally, a polytopic system is defined by equations of the form
$$\dot{x}(t)=A(\theta(t))\,x(t),\qquad A(\theta(t))=\sum_{i=1}^{N_l}\alpha_i(\theta(t))\,A_i,\qquad \alpha_i(\theta(t))\ge 0,\qquad \sum_{i=1}^{N_l}\alpha_i(\theta(t))=1 .$$
Example 2.1.1. [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF] Consider an affine system with two parameters $\theta(t)$ and $\theta^{2}(t)$. In [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF], the author used this example as a simple way to demonstrate the drawback of the polytopic approach. As seen in Figure 2-1.b, the polytope loses its parametric dependence since the vertices of the system are not related to the parameter curve $\{(\theta(t),\theta^{2}(t))\}$, resulting in conservative stability conditions. Consequently, an uncertain-polytope method (Appendix A.1.4) that reshapes the quasi-convex vertices containing the parameter curve is presented in [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF] and [START_REF] Gonçalves | New Approach to Robust$ cal D$-Stability Analysis of Linear Time-Invariant Systems With Polytope-Bounded Uncertainty[END_REF] to reduce this conservatism.
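As a complement, the sketch below (Python, with placeholder matrices and bounds that are purely illustrative and not the data of Example 2.1.1) shows the standard box-polytope embedding of an affine two-parameter system: the four vertex matrices and the barycentric coordinates $\alpha_i(\theta)$ that reproduce $A(\theta)$ exactly inside the box.

```python
import itertools
import numpy as np

# Illustrative affine LPV data A(theta) = A0 + theta1*A1 + theta2*A2.
A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A1 = np.array([[0.0, 0.0], [-0.5, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, -0.3]])
bounds = [(-1.0, 1.0), (0.0, 2.0)]        # ranges of theta1 and theta2

# Vertex matrices of the polytope (one per corner of the parameter box).
vertices = [A0 + c1 * A1 + c2 * A2
            for c1, c2 in itertools.product(*bounds)]

def convex_coordinates(theta, bounds):
    """Barycentric coordinates alpha_i(theta) of a point in the parameter box."""
    lams = [(t - lo) / (hi - lo) for t, (lo, hi) in zip(theta, bounds)]
    alphas = [np.prod([l if bit else 1.0 - l for l, bit in zip(lams, corner)])
              for corner in itertools.product([0, 1], repeat=len(theta))]
    return np.array(alphas)

theta = (0.3, 1.2)
alpha = convex_coordinates(theta, bounds)
A_poly = sum(a * Av for a, Av in zip(alpha, vertices))
A_true = A0 + theta[0] * A1 + theta[1] * A2
print("sum(alpha) =", alpha.sum())                              # equals 1
print("reconstruction error:", np.abs(A_poly - A_true).max())   # 0 for affine dependence
```

When the two parameters are actually dependent (e.g., $\theta$ and $\theta^{2}$), the box covers points that the true parameter curve never visits, which is precisely the conservatism discussed above.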
Takagi-Sugeno Fuzzy Systems
Since its introduction by [START_REF] Takagi | Fuzzy identification of systems and its applications to modeling and control[END_REF], the T-S fuzzy model has proved effective for linearizing nonlinear systems by using logical rules (fuzzy sets). Following this research direction, [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF] presented the control and observation analysis for nonlinear systems that can be reformulated in the T-S fuzzy framework. Like LPV systems, T-S fuzzy systems have a growing influence in multidisciplinary applications of robust stability and control analysis. Now, let us consider a nonlinear system represented by fuzzy IF-THEN rules, with local subsystem matrices of appropriate dimensions (e.g., $C_i\in\mathbb{R}^{p\times n}$).
The aggregation of the subsystems is based on the membership functions $\eta_i(\theta(t))$ built from the fuzzy sets $M_{ij}$:
$$\eta_i(\theta(t))=\frac{\prod_{j}M_{ij}(\theta_j(t))}{\sum_{k=1}^{N_l}\prod_{j}M_{kj}(\theta_j(t))},\qquad \eta_i(\theta(t))\ge 0,\qquad \sum_{i=1}^{N_l}\eta_i(\theta(t))=1. \tag{2.20}$$
A similarity can be observed between this expression of the membership functions and the polytopic coordinates in (2.13). The defuzzification of this model can then be derived as follows (for more details refer to [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF]):
$$\begin{aligned}
\dot{x}(t) &= \sum_{i=1}^{N_l}\eta_i(\theta(t))\big(A_i\,x(t)+B_{wi}\,w(t)\big),\\
z(t) &= \sum_{i=1}^{N_l}\eta_i(\theta(t))\big(H_i\,x(t)+J_{wi}\,w(t)\big),
\end{aligned}\qquad i=1,2,\dots,N_l. \tag{2.21}$$
Fuzzification is convenient for modeling dynamic systems, complex non-smooth nonlinear systems, chaotic systems, etc. The choice of the distribution law and of the membership functions depends on the purpose of the system design; more specific details on fuzzification and defuzzification can be found in [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF]. In the next section, we briefly describe the mathematical formulation of the membership functions $\eta_i(t)$ for a bounded nonlinearity (local sector) and an unbounded nonlinearity (global sector).
Sector Nonlinearity
As discussed in [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF], T-S fuzzy modeling has two typical approaches.
On the one hand, fuzzy identification modeling based on input-output relations makes the control design difficult to analyze for physical models. On the other hand, nonlinear dynamic models obtained from the Lagrange equations or the Newton-Euler formalism can be fuzzified by "sector nonlinearity." The sector nonlinearity method was introduced by [START_REF] Kawamoto | An approach to stability analysis of second order fuzzy systems[END_REF] and generalized by [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF]; it encloses each nonlinearity between two bounding values whose extremes define the vertex matrices and the membership functions (a small numerical sketch is given below). Usually, these constraints are also involved in the stability synthesis with parameter-dependent Lyapunov functions (i.e., the fuzzy Lyapunov function, FLF).
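The following minimal sketch (Python; the nonlinearity $\sin(x_1)/x_1$ and its domain are an assumption chosen for illustration, not the exact term of the thesis examples) shows how a local sector produces the two membership weights that recombine the bounding values exactly.

```python
import numpy as np

# Local sector for the nonlinearity f(x1) = sin(x1)/x1 on x1 in [-pi/2, pi/2]
# (illustrative nonlinearity, not the exact term of the thesis examples).
f = lambda x1: np.sinc(x1 / np.pi)          # sin(x1)/x1, well defined at x1 = 0
f_min, f_max = 2.0 / np.pi, 1.0             # sector bounds on the chosen domain

def memberships(x1):
    """Sector-nonlinearity weights: f(x1) = h1*f_max + h2*f_min, with h1 + h2 = 1."""
    z = f(x1)
    h1 = (z - f_min) / (f_max - f_min)
    h2 = 1.0 - h1
    return h1, h2

for x1 in (-np.pi / 2, -0.7, 0.0, 1.2):
    h1, h2 = memberships(x1)
    print(f"x1 = {x1:+.2f}  h1 = {h1:.3f}  h2 = {h2:.3f}  "
          f"check = {abs(h1 * f_max + h2 * f_min - f(x1)):.1e}")
```

The vertex matrices $A_1$ and $A_2$ of the resulting T-S model are obtained by substituting $f_{\max}$ and $f_{\min}$ for the nonlinearity, and the state matrix is recovered as $h_1A_1+h_2A_2$.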
A computational requirement of the sector nonlinearity approach is to represent the original nonlinear system with as few model rules (vertices) as possible, so that reducing the effort for the analysis and design of the control system does not degrade the quality of the modeling process. In the work of [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF], a local approximation in fuzzy partition spaces was used to simplify the system with a significant decrease in the number of model rules. In the other direction, combining fuzzification with a monomial formulation balances the conservatism of the conditions against the computational complexity of the synthesis.
Polynomial Fuzzy Model
The polynomial fuzzy model proposed for the SoS-based state-space approach in [START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF] is more effective for representing nonlinear control systems and provides less conservative stability analysis and synthesis than the traditional T-S fuzzy model. The main difference with respect to the T-S fuzzy model (2.21) is that the vertex matrices are polynomial matrices in the state, acting on a monomial vector $Z(x)$, instead of constant matrices.
Example
In this section, a nonlinear system is used to illustrate the representation of all the discussed types of LPV/quasi-LPV systems.
Example 2.1.3. Let us consider a nonlinear system from [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF], referred to as system (2.25) in what follows.
T-S Fuzzy Model
Let us designate the local sector nonlinearities. This quasi-LPV representation is expressed as a convex combination of the four vertices of the nonlinear system (2.25) over the specified domain. The linear combination representation is convenient for stability and stabilization analysis: as stated in the discussion of polytopic systems, the convexity conditions are directly obtained. The conservatism of the convex envelope is illustrated in Figure 2-1.b, and the computational burden increases exponentially with the number of vertices of the system (i.e., the number of membership functions).
From another perspective, the polynomial formulation describes the nonlinear behavior more accurately, but it relies on the selection of the monomial vector $Z(x)$ to keep the polynomial order reasonable (a higher degree leads to a more complex stability analysis and an increased computational load). Accordingly, a fuzzy polynomial system that balances conservatism against numerical complexity is considered as follows.
Polynomial Fuzzy Model
Using the same monomial vector $Z(x)$, the polynomial fuzzy representation is
$$\dot{x}(t)=\sum_{i}\eta_i(x)\,A_i(x)\,Z(x). \tag{2.35}$$
The number of vertices of system (2.35) is halved compared with system (2.32), and its polynomial matrix structure is simpler than that of system (2.29).
An appropriate stability analysis method is presented for each of these models in section 2.2, including the advantages and disadvantages relevant for implementation. Then, in section 2.2.5, the corresponding stability conditions are developed for each LPV model together with an associated LMI relaxation method.
Applications
Linear Parameter-Varying (LPV) systems have been extensively studied over the last three decades to approximate nonlinear systems and to provide a systematic design framework for gain-scheduled controllers that are robust against uncertain information. Their applications are found in various fields such as automotive systems, aircraft systems, robotic manipulators and mechatronic systems, see, for example, (Briat, 2015a;[START_REF] Giri | Robust Control and Linear Parameter Varying Approaches[END_REF][START_REF] Hoffmann | A survey of linear parameter-varying control applications validated by experiments or high-fidelity simulations[END_REF], etc. From a theoretical perspective, in addition to choosing an appropriate LPV model, the online measurement feature also plays a decisive role in accurately reconstructing the system trajectories and linearizing the behavior of the nonlinear dynamics. However, in some LPV control applications, not all scheduling parameters are available for measurement. High-precision engineering systems are even more demanding in terms of requirements and system performance, e.g., aerospace applications and unmanned aircraft systems [START_REF] Marcos | Development of Linear-Parameter-Varying Models for Aircraft[END_REF][START_REF] Marcos | LPV modeling, analysis and design in space systems: Rationale, objectives and limitations[END_REF]. It can be found that increasing the number of scheduling parameters enhances the simulation accuracy and the system validation process. However, it also burdens the computational load, increases memory requirements, and increases the synthesis complexity, see, e.g., (Hoffmann et al., 2014;Hoffmann & Werner, 2014, 2015). So, depending on the design requirements, we have appropriate analysis and synthesis tools. Now, let us discuss some applications of LPV modeling.
Automotive Chassis Systems
In the last decades, LPV control synthesis has addressed the robust stability and performance of lateral dynamic stabilization systems integrated in road vehicles, see, for example, [START_REF] Dahmani | Observer-Based Robust Control of Vehicle Dynamics for Rollover Mitigation in Critical Situations[END_REF][START_REF] Doumiati | Integrated vehicle dynamics control via coordination of active front steering and rear braking[END_REF][START_REF] Ono | Theoretical approach for improving the vehicle robust stability and maneuverability by active front wheel steering control[END_REF][START_REF] Ono | Robust stabilization of the vehicle dynamics by gain-scheduled H/sub /spl infin// control[END_REF][START_REF] Zhang | Robust gain-scheduling energy-to-peak control of vehicle lateral dynamics stabilisation[END_REF]. The simplified lateral dynamic stabilization system (illustrated in Figure 2-3.b) can be described by the following equations:
$$\begin{aligned}
m\,\dot{v}_y(t) &= F_{yf}(t)+F_{yr}(t)-m\,v_x(t)\,\dot{\psi}(t),\\
I_z\,\ddot{\psi}(t) &= l_f\,F_{yf}(t)-l_r\,F_{yr}(t),
\end{aligned}\qquad (2.36)$$
where the full description of the physical parameters is detailed in Appendix E.2. This dynamic system depends on the lateral friction forces $F_{yf}, F_{yr}$ (also called cornering forces), described by nonlinear equations of the tire slip angles at the contact point between the front tires $\alpha_f(t)$ (rear tires $\alpha_r(t)$) and the road surface, see, e.g., [START_REF] Bakker | A New Tire Model with an Application in Vehicle Dynamics Studies[END_REF][START_REF] Dugoff | An analysis of tire traction properties and their influence on vehicle dynamic performance[END_REF][START_REF] Kiencke | Automotive Control Systems[END_REF]. If the longitudinal speed $v_x$ is constant, the vehicle system can be regarded as an LTI system [START_REF] Farrelly | Estimation of vehicle lateral velocity[END_REF][START_REF] Fukada | Slip-angle estimation for vehicle stability control[END_REF][START_REF] Guldner | Analysis of automatic steering control for highway vehicles with look-down lateral reference systems[END_REF]. If the cornering stiffnesses vary (the variation of the cornering stiffness is shown in Figure 2-3.a), the system is characterized as an LTV system, as discussed in [START_REF] Ono | Robust stabilization of the vehicle dynamics by gain-scheduled H/sub /spl infin// control[END_REF][START_REF] Zhang | Robust gain-scheduling energy-to-peak control of vehicle lateral dynamics stabilisation[END_REF], which results in a quasi-LPV formulation. The nonlinear tire characteristics are then intrinsic parameters of these qLPV systems, see, e.g., [START_REF] Bui Tuan | Robust Observer-Based Control for TS Fuzzy Models Application to Vehicle Lateral Dynamics[END_REF][START_REF] Dahmani | Observer-Based Robust Control of Vehicle Dynamics for Rollover Mitigation in Critical Situations[END_REF][START_REF] Dahmani | Detection of impending vehicle rollover with road bank angle consideration using a robust fuzzy observer[END_REF](Dahmani, Pages, & El Hajjaji, 2015);[START_REF] El Hajjaji | Observer-based robust fuzzy control for vehicle lateral dynamics[END_REF][START_REF] Latrech | Vehicle dynamics decentralized networked control[END_REF].
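To make the speed-dependent (quasi-LPV) character explicit, here is a minimal sketch of a single-track (bicycle) model in Python. The state is $(\beta, \dot\psi)$ (sideslip angle, yaw rate), the input is the front steering angle, and the scheduling enters through $1/v_x$ and $1/v_x^{2}$. This is the standard small-angle single-track formulation, and the numerical parameters are purely illustrative, not the values used in the cited studies or in Appendix E.2.

```python
import numpy as np

# Illustrative single-track (bicycle) model parameters.
m, Iz = 1500.0, 2500.0          # mass [kg], yaw inertia [kg m^2]
lf, lr = 1.2, 1.5               # CoG-to-axle distances [m]
Cf, Cr = 8.0e4, 9.0e4           # front/rear cornering stiffness [N/rad]

def lateral_dynamics(vx):
    """A(vx), B(vx) for states [beta, yaw rate], input = front steering angle."""
    A = np.array([
        [-(Cf + Cr) / (m * vx),      (Cr * lr - Cf * lf) / (m * vx**2) - 1.0],
        [(Cr * lr - Cf * lf) / Iz,   -(Cf * lf**2 + Cr * lr**2) / (Iz * vx)],
    ])
    B = np.array([[Cf / (m * vx)],
                  [Cf * lf / Iz]])
    return A, B

# The scheduling terms 1/vx and 1/vx^2 vary over the speed range,
# which is what a polytopic/qLPV embedding has to cover.
for vx in (10.0, 20.0, 40.0):
    A, _ = lateral_dynamics(vx)
    print(f"vx = {vx:4.1f} m/s   eig(A) = {np.round(np.linalg.eigvals(A), 3)}")
```

Freezing $v_x$ gives the LTI case mentioned above, while letting $v_x(t)$ vary over a range turns the two scheduling terms into the parameters that a polytopic or qLPV embedding must cover.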
LTI and LTV systems
The convex coordinate parameters $\alpha_i$ are usually accompanied by the assumption that the scheduling parameters can be measured. However, the stabilizing chassis control system (2.40) involves an unmeasurable dynamic quantity, the tire sideslip angle. The problem of an inexact parameter $\hat{\alpha}_i$ has been addressed for LPV systems, see, e.g., [START_REF] Daafouz | On inexact LPV control design of continuous-time polytopic systems[END_REF][START_REF] Heemels | Observer-based control of discrete-time LPV systems with uncertain parameters[END_REF][START_REF] Rotondo | Robust state-feedback control of uncertain LPV systems: An LMI-based approach[END_REF][START_REF] Sato | Gain-scheduled output-feedback controllers using inexact scheduling parameters for continuous-time LPV systems[END_REF] and for T-S fuzzy systems [START_REF] Li | Fault Detection for T-S Fuzzy Systems With Unknown Membership Functions[END_REF][START_REF] Yang | Finite-Time Convergence Adaptive Fuzzy Control for Dual-Arm Robot with Unknown Kinematics and Dynamics[END_REF](Zhang & Wang, 2017).
Using estimates of the membership functions improves the performance of the control system but increases the complexity of the stability conditions.
Now, let us consider a time-varying longitudinal velocity $v_x(t)$. A compact set of the scheduling parameters can then be transformed into the polytopic formulation discussed in [START_REF] Bosche | An output feedback controller design for lateral vehicle dynamic[END_REF][START_REF] Nguyen | Simultaneous Estimation of Vehicle Lateral Dynamics and Driver Torque using LPV Unknown Input Observer[END_REF][START_REF] Zhang | Robust gain-scheduling energy-to-peak control of vehicle lateral dynamics stabilisation[END_REF](Zhang et al., 2015), i.e., a convex combination over four vertices,
$$A(\theta(t))=\sum_{i=1}^{4}\alpha_i(\theta(t))\,A_i. \tag{2.41}$$
In this case, the representation is considered an extrinsic LPV system because $v_x(t)$ is a state-independent parameter. If a global chassis model is used [START_REF] Kiencke | Automotive Control Systems[END_REF][START_REF] Poussot-Vassal | Commande Robuste LPV Multivariable de Châssis Automobile[END_REF], this parameter becomes an implicit function of the states $x_1(t)$, $x_2(t)$. The definition is therefore somewhat abstract and depends on the specific design purpose. In addition, $v_x(t)$ is measurable, so it suits the gain-scheduling control technique.
Generally, depending on the system requirements (such as robust stability, robust performance, fast response, good tracking, etc.) and the accessibility of state variables and parameters, we have an appropriate approach for the analysis and design of control systems. Another application of LPV control relates to the improvement of performance and comfort of the automotive chassis system. The recent advances technique can find for instance in [START_REF] Do | Approche LPV pour la commande robuste de la dynamique des véhicules : amélioration conjointe du confort et de la sécurité[END_REF][START_REF] Do | An LPV control approach for semi-active suspension control with actuator constraints[END_REF][START_REF] Doumiati | Gain-scheduled LPV/H∞ controller based on direct yaw moment and active steering for vehicle handling improvements[END_REF][START_REF] Giri | Robust Control and Linear Parameter Varying Approaches[END_REF][START_REF] Nguyen | LPV approaches for modelling and control of vehicle dynamics : application toa small car pilot plant with ER dampers[END_REF][START_REF] Poussot-Vassal | Commande Robuste LPV Multivariable de Châssis Automobile[END_REF][START_REF] Savaresi | Semi-Active Suspension Control Design for Vehicles[END_REF][START_REF] Tuan | Nonlinear H/sub ∞/ control for an integrated suspension system via parameterized linear matrix inequality characterizations[END_REF][START_REF] Vu | Active anti-roll bar control using electronic servo valve hydraulic damper on single unit heavy vehicle[END_REF], and the reference therein.
Aircrafts Systems
The physical properties of the flight dynamics (high velocity, large number of degrees of freedom, aerodynamic influence, etc.) characterize the aviation control systems. The requirements, therefore, are more demanding in terms of robustness, system performance, and stability compared to the ground vehicles. The LPV theory is suitable for enhancing performance, robustness, and ensuring accuracy and safety during operation against the influence of aerodynamics. See, for example, analysis robustness margins [START_REF] Schug | Robustness Margins for Linear Parameter Varying Systems[END_REF], robust ℋ∞ control [START_REF] Papageorgiou | Taking robust LPV control into flight on the VAAC Harrier[END_REF], high performance on F-16 Aircraft System [START_REF] Shin | Blending approach of linear parameter varying control synthesis for F-16 aircraft[END_REF] on F-14 and F-18 Aircraft System (Balas et al., n.d., 1997), LPV modeling and controller design for Boeing 747-100/200 [START_REF] Ganguli | Reconfigurable LPV control design for Boeing 747-100/200 longitudinal axis[END_REF][START_REF] Marcos | Linear parameter varying modeling of the boeing 747-100/200 longitudinal motion[END_REF], 2004), and developments of LPV controllers for an unmanned air vehicle (UAV) [START_REF] Chen | Robust LPV Control of UAV with Parameter Dependent Performance[END_REF][START_REF] Natesan | Design of static H∞ linear parameter varying controllers for unmanned aircraft[END_REF][START_REF] Rotondo | LPV model reference control for fixed-wing UAVs[END_REF]. The application of gain-scheduling applies to missile autopilot LPV systems [START_REF] Pellanda | Missile autopilot design via a multichannel LFT/LPV control method[END_REF][START_REF] Shamma | Linear parameter varying approach to gain scheduled missile autopilot design[END_REF][START_REF] Wu | LPV control design for pitch-axis missile autopilots[END_REF].
Let us consider a qLPV model of the Boeing 747-100/200 longitudinal motion [START_REF] Marcos | A Linear Parameter Varying Model of the Boeing 747-100/200 Longitudinal Motion[END_REF][START_REF] Marcos | Development of Linear-Parameter-Varying Models for Aircraft[END_REF]; the explicit state-space matrices and the definition of the scheduling variables are given in [START_REF] Marcos | A Linear Parameter Varying Model of the Boeing 747-100/200 Longitudinal Motion[END_REF].
Mechatronics and Robotics Systems
Another application of LPV control concerns the stabilization of nonlinear robotic arm systems, whose dynamics depend on parameter variations and nonlinearities. Figure 2-4.a shows the force diagram of another inverted pendulum example analyzed in [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF][START_REF] Wang | A Course in Fuzzy Systems and Control[END_REF]. The nonlinear state-space equations governing the two-link dynamics are given by (2.44); one of the nonlinear terms can be considered as an uncertainty, and following the definition of [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF], we obtain the membership functions (2.45). As presented before, the local sector linearization method transforms the nonlinear system (2.44) into the following qLPV (T-S fuzzy) system:
$$\dot{x}(t)=\sum_{i=1}^{N_l}\eta_i(x)\big(A_i\,x(t)+B_i\,u(t)\big),\qquad i=1,2,\dots,N_l. \tag{2.46}$$
However, this T-S fuzzy representation requires $N_l=2^{4}$ linear combinations of the membership functions (2.45). This number increases exponentially in the control design analysis (fuzzy PDC controller), and the computational burden is a hindrance to the implementation. In addition, the LPV system (2.47) below is based on assumptions about the bounded region of the parameter $\theta(t)$, whereas the qLPV system (2.46) involves constraints bounding the state $x_1(t)$. This association causes trouble in the design prerequisites, such as the initial condition $x_0$ and the upper bounds on the derivatives of the membership functions. Illustrated in Figure 2-4.b are the physical parameters and the force diagram of the arm-driven inverted pendulum (ADIP) system [START_REF] Kajiwara | LPV techniques for control of an inverted pendulum[END_REF](Canudas-de-Wit et al., 1996). Here $u(t)$ is the control signal of the electric motor, and $m_1, m_2, l_1, l_2$ are respectively the masses and the half-lengths of the arm and the pendulum. The control objective is to maintain the pendulum in a reference vertical position (inverted pendulum motion) using the torque generated by the arm. This moment is regulated by the motor power amplifier voltage, with $K_a, T_a$ constant electro-mechanical parameters.
To facilitate the development of a gain-scheduling controller, $\theta(t)$ and $z(t)$ are assumed to be measurable in this two-link robot manipulator example, and the scheduling parameter is treated as state-independent. Further discussions about robot dynamic models can be found in (Canudas-de-Wit et al., 1996); for LPV applications to robot-arm control, see, e.g., [START_REF] Halalchi | Flexible-Link Robot Control Using a Linear Parameter Varying Systems Methodology[END_REF][START_REF] Robert | An H∞ LPV design for sampling varying controllers: Experimentation with a T-inverted pendulum[END_REF][START_REF] Sename | A LPV approach to control and realtime scheduling codesign: Application to a robot-arm control[END_REF], and for fuzzy modeling and control [START_REF] Roose | Fuzzy-logic control of an inverted pendulum on a cart[END_REF][START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF][START_REF] Wang | A Course in Fuzzy Systems and Control[END_REF][START_REF] Yi | Stabilization fuzzy control of inverted pendulum systems[END_REF].
Other applications
From the above discussion, LPV modeling and control design cover a wide range of applications, such as automobile engine systems, photovoltaic systems, electronic circuit systems, etc. Another application we would like to mention here is LPV modeling and control for the tokamak fusion reactor [START_REF] Ariola | Magnetic Control of Tokamak Plasmas[END_REF][START_REF] Wesson | Tokamaks[END_REF]. A control-oriented distributed model is discussed in [START_REF] Witrant | A control-oriented model of the current profile in tokamak plasma[END_REF], and an LPV approximation is derived in [START_REF] Bribiesca Argomedo | Polytopic control of the magnetic flux profile in a tokamak plasma[END_REF] from the heterogeneous transport partial differential equation (PDE) model of the poloidal flux dynamics. Thereby, a polytopic feedback control law for the non-inductive lower hybrid current drive (LHCD) is proposed in [START_REF] Bribiesca Argomedo | Polytopic control of the magnetic flux profile in a tokamak plasma[END_REF], which regulates the current and heat sources in the plasma. This method shows an efficient reduction of the computational cost, and it is easier to integrate the saturation constraint than to search for the weighting matrices of the linear-quadratic regulator (LQR) method obtained by solving the algebraic Riccati equation (ARE) [START_REF] Bribiesca Argomedo | Safety Factor Profile Control in a Tokamak[END_REF], 2010). Readers interested in this topic can refer to the monographs [START_REF] Ariola | Magnetic Control of Tokamak Plasmas[END_REF][START_REF] Bribiesca Argomedo | Safety Factor Profile Control in a Tokamak[END_REF]. More discussion and analysis of nonlinear dynamic motion, and of identification methods and linearized LPV models, can be found in [START_REF] Tóth | Modeling and Identification of Linear Parameter-Varying Systems[END_REF].
Stability of LPV/quasi-LPV Systems
Before going into the stability analysis of parameter-dependent systems, let us revise fundamental control theory by analyzing the stability of the equilibrium point at the origin of a pendulum system (2.48) with $a, b\in\mathbb{R}$. Stability in the sense of Lyapunov amounts to finding a continuously differentiable energy function $V(x(t))$ that decreases over time, such that the solutions of the system starting in the vicinity of the equilibrium point remain nearby or converge to the equilibrium point as time approaches infinity. For more details on the stability of dynamical systems, the reader may refer to [START_REF] Khalil | Nonlinear Systems[END_REF][START_REF] Scherer | Linear matrix inequalities in control[END_REF][START_REF] Vidyasagar | Nonlinear Systems Analysis[END_REF].
Theorem 2.2.1: Lyapunov Theorem [START_REF] Khalil | Nonlinear Systems[END_REF] (see chapter 4 of [START_REF] Khalil | Nonlinear Systems[END_REF]). Since the derivative of the Lyapunov function (2.50) is only negative semidefinite and vanishes along the $x_1$ axis, we can only confirm that the system is stable at the origin using this function. However, the phase portrait of the system exhibits asymptotic convergence to this equilibrium, so let us include a quadratic cross term in the previous Lyapunov function. As emphasized in [START_REF] Khalil | Nonlinear Systems[END_REF], the failure of a Lyapunov function candidate does not imply that the equilibrium of the system is unstable or not asymptotically stable; it only means that stability cannot be guaranteed with these inappropriate Lyapunov candidates. Let us move forward to discuss the necessary and sufficient Lyapunov conditions for the stability of a linear system. For the linearized matrix $A$ of the pendulum example, an eigenvalue lies in the open right-half plane for all $a, b>0$; thus, this equilibrium is unstable. It can be checked that there exists no matrix $P\succ 0$ satisfying $A^{\top}P+PA\prec 0$ for any $a, b>0$.
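As a quick numerical illustration of this linear Lyapunov test (a sketch with illustrative matrices, not the pendulum data of the example above), the Lyapunov equation can be solved directly and the resulting $P$ inspected: for a Hurwitz matrix the solution is positive definite, while for an unstable matrix it is not.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

Q = np.eye(2)

def lyapunov_certificate(A):
    """Solve A^T P + P A = -Q and report whether P is positive definite."""
    P = solve_continuous_lyapunov(A.T, -Q)
    P = 0.5 * (P + P.T)                      # symmetrize against round-off
    return P, bool(np.all(np.linalg.eigvalsh(P) > 0))

A_stable   = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [ 2.0,  1.0]])   # eigenvalues  2, -1

for name, A in (("stable", A_stable), ("unstable", A_unstable)):
    P, ok = lyapunov_certificate(A)
    print(f"{name}: eig(A) = {np.round(np.linalg.eigvals(A), 2)}, "
          f"P positive definite: {ok}")
```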
Quadratic Stability and Non-Quadratic Stability Analysis
The stability analysis of an LTI system using a Lyapunov function can be easily performed, but linearizing a nonlinear system into an LTI form does not accurately describe the behavior of the original system.
In this case, approximating the nonlinear system in a parametric affine form is much more convenient for the analysis and synthesis of the system design. In the early 90s, the terms parametric uncertainty and parameter dependence were related to two common analytical methods: robust stability for LTV systems [START_REF] Dullerud | A Course in Robust Control Theory[END_REF][START_REF] Khargonekar | Robust stabilization of uncertain linear systems: quadratic stabilizability and H/sup infinity / control theory[END_REF][START_REF] Zhou | Robust and Optimal Control[END_REF][START_REF] Zhou | [END_REF][START_REF] Zhou | Robust stabilization of linear systems with normbounded time-varying uncertainty[END_REF] and quadratic stability (Apkarian & Gahinet, 1995;[START_REF] Becker | Robust performance of linear parametrically varying systems using parametrically-dependent linear feedback[END_REF]. To ensure stability and performance against uncertain parameters, Apkarian presented two different approaches in the mid-90s. On one side, the linear fractional dependence and linear fractional transformation (LFT) technique (Apkarian & Gahinet, 1995) extracts the parametric dependency from the nominal plant using an uncertainty structure [START_REF] Doyle | Structured uncertainty in control system design[END_REF][START_REF] Doyle | Review of LFTs, LMIs, and μ[END_REF]. Together with the commuting scaling structure, the LFT approach is efficient in dealing with uncertainties and provides more relaxed conditions than the old-fashioned ones. Besides, the bounded real lemma [START_REF] Feron | Analysis and synthesis of robust control systems via parameter-dependent Lyapunov functions[END_REF](Gahinet & Apkarian, 1994);[START_REF] Scherer | The Riccati inequality and state-space H∞-optimal control[END_REF] also played a central role in robust ℋ∞ performance synthesis during this time.
However, these stability conditions allow arbitrarily fast parameter variations and therefore become very conservative for slowly varying parameters. So, the affine quadratic Lyapunov formulation [START_REF] Apkarian | Parameter-Dependent Lyapunov Functions for Robust Control of Systems with Real Parametric Uncertainty[END_REF](Gahinet et al., 1994) was proposed to address the stability of the LPV system, where the scaled bounded real lemma can enhance the robustness and performance requirements. Parameter-dependent Lyapunov functions (PDLF) are considered for the stability analysis and gain-scheduling controller synthesis of LPV systems because they better describe the behavior of the parameters, see for example the literature [START_REF] Apkarian | Advanced gain-scheduling techniques for uncertain systems[END_REF][START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF][START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF][START_REF] Gahinet | LMI Control Toolbox For Use with MATLAB[END_REF][START_REF] Lim | Analysis of LPV systems using a piecewise affine parameter-dependent Lyapunov function[END_REF][START_REF] Tuan | Relaxations of parameterized LMIs with control applications[END_REF][START_REF] Wu | Induced L2-norm control for LPV systems with bounded parameter variation rates[END_REF], for qLPV T-S fuzzy systems [START_REF] Tanaka | A multiple Lyapunov function approach to stabilization of fuzzy control systems[END_REF][START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF][START_REF] Tuan | New Fuzzy Control Model and Dynamic Output Feedback Parallel Distributed Compensation[END_REF](Tuan, Apkarian, et al., 2001), for LPV time-delay systems [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF][START_REF] Briat | Stability analysis and control of a class of LPV systems with piecewise constant parameters[END_REF] and the references therein.
Leveraging the development of convex optimization programming and of PLMI relaxation methods, the use of PDLFs has become widespread in the stability and stabilization analysis of parameter-dependent systems. The following sections are devoted to the stability analysis of LPV/quasi-LPV systems using Lyapunov functions via parameterized LMI conditions.
Stability of Polytopic Systems
This issue was investigated decades ago because of its effectiveness in analyzing robust stability and robust performance [START_REF] Feron | Analysis and synthesis of robust control systems via parameter-dependent Lyapunov functions[END_REF](Gahinet et al., 1994)[START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF]. Quadratic stability [START_REF] Boyd | Linear Matrix Inequalities in System and Control Theory[END_REF][START_REF] Scherer | Linear matrix inequalities in control[END_REF] and non-quadratic stability [START_REF] Apkarian | Advanced gain-scheduling techniques for uncertain systems[END_REF][START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF](Tuan, Apkarian, et al., 2001);[START_REF] Tuan | Relaxations of parameterized LMIs with control applications[END_REF] are discussed for polytopic systems. Generally, the open-loop polytopic LPV system is written under the form
$$\dot{x}(t)=A(\theta(t))\,x(t),\qquad A(\theta(t))=\sum_{i=1}^{N_l}\alpha_i(t)\,A_i,\qquad \alpha_i(t)\ge 0,\qquad \sum_{i=1}^{N_l}\alpha_i(t)=1. \tag{2.54}$$
However, considering only a common matrix to guarantee quadratic stability for the multi-convex system is conservative. In some cases, there does not exist a candidate matrix $P\in\mathbb{S}^{n}$ that satisfies the stability conditions (2.55). Hence, it makes more sense to consider the parameter-dependent Lyapunov function (PDLF) or the piecewise Lyapunov function approach introduced in [START_REF] Johansson | Piecewise Linear Control Systems[END_REF] to relax the conservativeness. Theorem 2.2.4: (Briat, 2015a) Polytopic system (2.54) is robustly stable if there exist symmetric positive definite matrices $P_1,\dots,P_{N_l}\in\mathbb{S}^{n}$ such that
$$A_i^{\top}P_j+P_jA_i+\sum_{k=1}^{N_l}\dot{\alpha}_k(t)\,P_k \prec 0 \tag{2.57}$$
holds for $i,j=1,2,\dots,N_l$, where $A(\theta(t))=\sum_{i=1}^{N_l}\alpha_i(t)A_i$ is a convex combination.
When the relationship between the parameters and their derivatives is undetermined, these conditions are more difficult to handle. One usually assumes that the derivatives of the convex coordinates, obtained through the chain rule $\dot{\alpha}_j(t)=\sum_{i}\frac{\partial\alpha_j}{\partial\theta_i}\dot{\theta}_i(t)$, are upper bounded by the admissible rate of variation of the parameters. An interesting transformation method introduced in [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF] uses a differential-algebraic equation to transfer the coordinates of the vertices enclosing the unknown parameter derivatives.
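Extending the basic LMI pattern shown earlier, the following sketch (Python/CVXPY, with two placeholder vertex matrices that are not taken from the thesis) checks the common-P quadratic stability test of the type of condition (2.55) for a polytopic system: one Lyapunov inequality per vertex, one shared matrix $P$.

```python
import numpy as np
import cvxpy as cp

# Two illustrative vertex matrices of a polytopic LPV system (placeholder values).
A_vertices = [np.array([[-2.0, 1.0], [0.0, -1.0]]),
              np.array([[-1.0, 2.0], [-1.0, -3.0]])]

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n)]
constraints += [Ai.T @ P + P @ Ai << -eps * np.eye(n) for Ai in A_vertices]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("quadratic stability certificate found:", prob.status == "optimal")
```

Replacing the single variable `P` by one matrix per vertex, plus the rate terms of (2.57), gives the parameter-dependent counterpart at the cost of more decision variables and more LMIs.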
Stability of Polynomial Parameter-Dependent Systems
One of the most effective ways to approximate nonlinear systems is to represent them as polynomials obtained from a Taylor expansion. Polynomial dependence on the state and on time-varying parameters is a common representation, together with trigonometric forms; examples include diode systems [START_REF] Khalil | Nonlinear Systems[END_REF], jet engines [START_REF] Azuma | A new LMI approach to analysis of linear systems depending on scheduling parameter in polynomial forms[END_REF][START_REF] Fakhri | Application of Polytopic Separation Techniques to Nonlinear Observer Design[END_REF][START_REF] Watanabe | Hinf Control of Gasturbine Engines for Helicopters Helicopters Control of Gasturbine Engines for Gasturbine Engines for Helicopters[END_REF], and academic applications [START_REF] Sala | The polytopic/fuzzy polynomial approach for non-linear control: advantages and drawbacks[END_REF][START_REF] Sala | Stability analysis of LPV systems: Scenario approach[END_REF][START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF](Sato & Peaucelle, 2007a);[START_REF] Scherer | LMI relaxations in robust control[END_REF][START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF](Wu & Prajna, 2004), etc. Let us introduce a parameter-dependent system expressed in a polynomial formulation:
$$\dot{x}(t)=A(\theta(t))\,x(t)=\Big(A_0+\sum_{i=1}^{N_p}\sum_{j=1}^{N}\theta_i^{\,j}(t)\,A_{ij}\Big)x(t),\qquad x(0)=x_0, \tag{2.58}$$
and a parameter-dependent Lyapunov function
$$V(x,\theta)=x^{\top}P(\theta)\,x,\qquad P(\theta)=P_0+\sum_{i=1}^{N_p}\sum_{j=1}^{N}\theta_i^{\,j}\,P_{ij}\succ 0, \tag{2.59}$$
whose decrease along the trajectories leads to the parameterized stability condition (2.60).
Proof.
The derivative of PDLF (2.59) along the trajectories of LPV system (2.58) is given by:
$$\frac{dV}{dt}=x^{\top}\Big(A(\theta)^{\top}P(\theta)+P(\theta)A(\theta)+\dot{P}(\theta)\Big)x,\qquad \dot{P}(\theta)=\sum_{i=1}^{N_p}\sum_{j=1}^{N} j\,\theta_i^{\,j-1}\dot{\theta}_i\,P_{ij},$$
which is negative for all admissible $(\theta,\dot{\theta})$ whenever condition (2.60) holds; hence the origin of system (2.58) is asymptotically stable. $\blacksquare$
Stability of T-S Fuzzy Systems
The stability analysis for this class of nonlinear systems is based on Lyapunov theory and expressed through LMI conditions. A systematic overview of the PDC controller strategy for this class of systems is introduced in [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF]. At this stage, the design analysis of the fuzzy control system is essentially a matter of finding a quadratic Lyapunov function satisfying all stability conditions, see, e.g., [START_REF] Tanaka | Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs[END_REF][START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF] and the references therein. Relaxed stabilization condition methods [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF][START_REF] Tuan | Nonlinear H/sub ∞/ control for an integrated suspension system via parameterized linear matrix inequality characterizations[END_REF] or the non-quadratic Lyapunov function (NQLF) approach [START_REF] Rheex | A new fuzzy Lyapunov function approach for a Takagi-Sugeno fuzzy control system design[END_REF][START_REF] Sala | The polytopic/fuzzy polynomial approach for non-linear control: advantages and drawbacks[END_REF][START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF][START_REF] Tanaka | A multiple Lyapunov function approach to stabilization of fuzzy control systems[END_REF] are considered to reduce the conservatism of the design conditions. However, the proposed conditions do not exactly describe the behavior of the membership function rates; instead, they are subdivided into local stability conditions through the linear combination method applied to non-quadratic Lyapunov functions.
Let's consider an open-loop T-S fuzzy system:
$$\dot{x}(t)=\sum_{i=1}^{N_l}\eta_i(\theta(t))\,A_i\,x(t),\qquad \dim(A_i)=n,\quad i=1,2,\dots,N_l,\qquad x(0)=x_0. \tag{2.62}$$
Both quadratic and non-quadratic stability are well developed and studied in the framework of T-S fuzzy systems. It can be stated that the quadratic stability condition of a T-S fuzzy system has essentially the same formulation as condition (2.55); it is frequently encountered when analyzing the stability of nonlinear systems approached via linear combinations. These conditions are solved at each vertex of a convex polyhedron that encapsulates the behavior of the parameters. As mentioned, there does not always exist a global solution $P\in\mathbb{S}^{n}$ satisfying all LMI conditions (2.55). So, relaxations of the LMI conditions have been considered to reduce the conservativeness, see, for example [START_REF] Tanaka | Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs[END_REF](Tuan, Apkarian, et al., 2001). In this period, piecewise Lyapunov functions, which effectively reduce the conservativeness of stability and stabilization problems, received attention, see e.g., [START_REF] Johansson | Piecewise Linear Control Systems[END_REF][START_REF] Xie | Piecewise Lyapunov functions for robust stability of linear time-varying systems[END_REF]. Based on this approach, the fuzzy Lyapunov function (FLF) was derived for non-quadratic stability analysis, providing less conservative conditions.
Most of the challenges are related to handling the derivatives of the membership functions (constraining these values also limits the admissible system dynamics). Perhaps for this reason, much of the early work on stabilization of T-S fuzzy systems focused on quadratic stability. The fuzzy Lyapunov function was implemented later in [START_REF] Mozelli | Reducing conservativeness in recent stability conditions of TS fuzzy systems[END_REF][START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF][START_REF] Tanaka | A multiple Lyapunov function approach to stabilization of fuzzy control systems[END_REF][START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF], etc.
Non-quadratic Stability (Bounded Parameters)
The PDLF is expanded as a fuzzy Lyapunov function of the form $V(x)=x^{\top}\big(\sum_{i=1}^{N_l}\eta_i(\theta)\,P_i\big)x$, see [START_REF] Guerra | A way to escape from the quadratic framework[END_REF][START_REF] Mozelli | Reducing conservativeness in recent stability conditions of TS fuzzy systems[END_REF][START_REF] Sala | The polytopic/fuzzy polynomial approach for non-linear control: advantages and drawbacks[END_REF][START_REF] Sala | Stability analysis of LPV systems: Scenario approach[END_REF][START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF].
Non-quadratic Stability (Sum of Squares)
A generalization of fuzzy modeling and control presented in [START_REF] Lam | Polynomial Fuzzy Control Systems Stability Analysis and Control Synthesis Using Membership Function-dependent[END_REF][START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF] refines the nonlinearities as a fuzzy polynomial model, in which the stability and stabilizability conditions can be converted into SoS problems based on a polynomial Lyapunov function; [START_REF] Guerra | A way to escape from the quadratic framework[END_REF][START_REF] Jaadari | Continuous quasi-LPV Systems: how to leave the quadratic Framework?[END_REF][START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF] develop the local sector approach for the polynomial fuzzy system. The polynomial conditions can be checked as SoS programs or considered as parametric polynomial matrix inequalities.
The defuzzification process discussed in section 2.1.4.2 can be applied in the same way to the polynomial system. The preliminary concepts and prerequisites will be covered in section 2.2.4.2. According to the SoS argument, the stability analysis for quasi-LPV system (2.65) is usually characterized by the following result. Theorem 2.2.7: [START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF] If there exists a polynomial matrix $P(x)\succ 0$ such that the associated SoS conditions hold for all $x\neq 0$, then the equilibrium is asymptotically stable; if $P(x)$ is a constant matrix, the stability condition holds globally. Hence, the polynomial SoS relaxation method provides a more general condition, but the coefficients associated with irreducible polynomials in the solutions result in polynomial dependencies with impractically high exponents.
Relaxation of the Parameterized Linear Matrix Inequality
The parameter-dependent characterization of the stability conditions is an infinite set of LMIs over the parameter space. To deliver a convexity argument, the commonly used relaxations of PLMI conditions include: § The gridding technique [START_REF] Wu | Control of linear parameter varying systems[END_REF], which fragments the parameter operating range into a finite number of points, and the affine meshing of the parameter space [START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF]. § Convex combinations or multi-convexity [START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF](Tuan & Apkarian, 2002), including the fuzzy Lyapunov function [START_REF] Guerra | A way to escape from the quadratic framework[END_REF][START_REF] Sala | The polytopic/fuzzy polynomial approach for non-linear control: advantages and drawbacks[END_REF][START_REF] Tanaka | A multiple Lyapunov function approach to stabilization of fuzzy control systems[END_REF]. § Generalizations of Finsler's lemma: (1) the sum of squares (SoS) decomposition for polynomial systems [START_REF] Papachristodoulou | On the construction of Lyapunov functions using the sum of squares decomposition[END_REF][START_REF] Parrilo | Semidefinite programming relaxations for semialgebraic problems[END_REF] and for fuzzy systems [START_REF] Lam | Polynomial Fuzzy Control Systems Stability Analysis and Control Synthesis Using Membership Function-dependent[END_REF][START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF][START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF]; (2) the slack-variable or S-variable approach [START_REF] Sato | Robust stability/performance analysis for linear timeinvariant polynomially parameter-dependent systems using polynomially parameter-dependent Lyapunov functions[END_REF][START_REF] Sato | Robust stability/performance analysis for uncertain linear systems via multiple slack variable approach: Polynomial LTIPD systems[END_REF].
The gridding technique is simple and can be applied directly to the parameter-dependent condition. Following the argument of (Apkarian & Tuan, 2000a;[START_REF] Lim | Parameter-Varying Systems[END_REF][START_REF] Wu | Control of linear parameter varying systems[END_REF]: with finite intervals, it is impossible to verify whether all the critical points are captured or whether the nonlinear behavior of the parameters $\theta(t)$ is described within their defined bounds. In the work of [START_REF] Apkarian | Parametrized LMIs in control theory[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF], the authors introduced an approach for solving the parameterized LMIs (PLMIs) which only needs to grid a surface of lower dimension whenever the function is quasi-convex or convex along some direction; but it comes at a high computational cost.
During this time, the relaxation based on the sum of squares decomposition also gained considerable attention; it separates the polynomial parameters and can be cast as a semidefinite programming problem, see, e.g., the convex optimization SoS toolbox [START_REF] Papachristodoulou | SOSTOOLS: Sum of squares optimization toolbox for MATLAB[END_REF][START_REF] Prajna | Nonlinear control synthesis by sum of squares optimization: A Lyapunov-based approach[END_REF]. This approach promises less conservative results for the relaxation of the PLMI conditions. It is also widely employed for the stabilization analysis of qLPV and T-S fuzzy systems in the literature, such as [START_REF] Gahlawat | Control and verification of the safetyfactor profile in Tokamaks using sum-of-squares polynomials[END_REF][START_REF] Sala | The polytopic/fuzzy polynomial approach for non-linear control: advantages and drawbacks[END_REF][START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF][START_REF] Tanaka | A sum-of-squares approach to modeling and control of nonlinear dynamical systems with polynomial fuzzy systems[END_REF]. But the fractional form involving the polynomial gains is an obstacle to the implementation: for example, a state-feedback gain of the form $K(\theta)=Y(\theta)X(\theta)^{-1}$ raises the question of how to perform the inverse of the polynomial matrix $X(\theta)$.
An alternative method is to convert the polynomial parameter-dependent condition into a slack-variable formulation [START_REF] Ebihara | S-Variable Approach to LMI-Based Robust Control[END_REF][START_REF] Hosoe | S-variable approach to robust stabilization state feedback synthesis for systems characterized by random polytopes[END_REF](Sato & Peaucelle, 2007a);[START_REF] Sato | Robust stability/performance analysis for uncertain linear systems via multiple slack variable approach: Polynomial LTIPD systems[END_REF]. This method, also based on Finsler's generalization, shows a computational advantage over the SoS decomposition. Furthermore, the S-variable approach allows the decision matrix to be manipulated as a parameter-independent variable matrix multiplied by a column of parameters. A numerical comparison of the two methods is given in (Sato & Peaucelle, 2007a);[START_REF] Sato | Robust stability/performance analysis for uncertain linear systems via multiple slack variable approach: Polynomial LTIPD systems[END_REF], and a numerical comparison of the optimization of polynomial methods is provided in Section 3.2.2. More details on the S-variable approach applied to robustness and stability analysis based on LMI developments can be found in [START_REF] Ebihara | Robust H2 Performance Analysis Of Uncertain Lti Systems Via Polynomially Parameter-Dependent Lyapunov Functions[END_REF][START_REF] Ebihara | S-Variable Approach to LMI-Based Robust Control[END_REF][START_REF] Hosoe | S-variable approach to robust stabilization state feedback synthesis for systems characterized by random polytopes[END_REF](Dimitri Peaucelle & Ebihara, 2014).
The proposed methods have distinctive advantages and disadvantages, which amount to a trade-off between conditional conservatism and computational complexity. For example, the gridding method simplifies the parameter-dependent conditions into a set of LMI conditions, but the weakness of this linearization is whether it covers all the critical points and how accurately it describes the characteristics of the parameters over the operating conditions. On the other hand, the SoS method is characterized by the polynomial Lyapunov functional formulation: the tighter relaxation obtained by varying the S-procedure constraints is exchanged for complexity in the parametric decomposition. Finally, the multi-convexity Lyapunov function is a linear combination between vertices, covering the parameter trajectories within the convex domain for the controller design analysis.
The synthesis of PLMI relaxation methods is discussed in the literature [START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF](El Ghaoui & Niculescu, 2000);[START_REF] Tuan | Relaxations of parameterized LMIs with control applications[END_REF]. The promising development of LPV control during this period benefited from the growth of linear programming and convex optimization tools (Erling D [START_REF] Andersen | The Mosek Interior Point Optimizer for Linear Programming: An Implementation of the Homogeneous Algorithm[END_REF][START_REF] Boyd | Linear Matrix Inequalities in System and Control Theory[END_REF][START_REF] Boyd | Convex Optimization[END_REF][START_REF] Gahinet | LMI Control Toolbox For Use with MATLAB[END_REF][START_REF] Lofberg | YALMIP : a toolbox for modeling and optimization in MATLAB[END_REF][START_REF] Papachristodoulou | SOSTOOLS: Sum of squares optimization toolbox for MATLAB[END_REF], which convert the linear matrix inequality (LMI) constraints derived from the stability analysis into barrier-function conditions. Readers interested in interior-point methods or other contemporary methods (e.g., conjugate gradient, golden section, or the wider scope of polynomial-time complexity) can refer to the monographs [START_REF] Bertsimas | Nonlinear programming[END_REF][START_REF] Bertsimas | Introduction to Linear Optimization[END_REF][START_REF] Konno | Optimization on Low Rank Nonconvex Structures[END_REF]; Bertsimas describes the algorithmic issues (gradient descent, step-size updates, etc.) in more detail. Optimization problems and global optimization are treated in [START_REF] Tuy | Convex Analysis and Global Optimization[END_REF]. The works (Hiriart-Urruty & Lemaréchal, 1993a);[START_REF] Hiriart-Urruty | Convex Analysis and Minimization Algorithms II[END_REF] provide several sections appropriately devoted to interested readers.
Relaxation of Parametrized LMIs by Discretization
It is possible to refer to finite discretization methods over time-varying intervals [START_REF] Apkarian | Parametrized LMIs in control theory[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF] and time-independent intervals [START_REF] Wu | Control of linear parameter varying systems[END_REF]. The LMI relaxation proposed by [START_REF] Wu | Control of linear parameter varying systems[END_REF], well known as the "gridding" method, is illustrated in the following example. The determination of the gridding density remains a difficult question: it is difficult to assert an ideal density over the parameter domain because the infeasible regions are ambiguous, and an infeasible set is only estimated once the infeasibility of the problem is found. One may visualize this as tuning an instrument that has to hit a chord to adjust to the correct range: we cannot fine-tune before playing the instrument, and we cannot copy the tuning of one musical accessory to another. As Briat laments: "This paradox shows that probably no method to find a perfect gridding would develop someday."
On the other hand, the piecewise affine parameter-dependent (PAPD) approaches, introduced as a multi-switch partitioned parameter space, see, e.g., [START_REF] Apkarian | Parametrized LMIs in control theory[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF], can provide less conservative stability conditions. The parametric switched subsystem is recalled in Appendix 1.4.2. But the number of LMI conditions that must be checked is overwhelming: for an LPV system depending on $p$ parameters, each parameter partitioned into $N_i$ subspaces, the number of conditions to check is about $2^{p}\prod_{i=1}^{p}N_i$.
Relaxation of Parametrized LMIs by Sum-of-Squares Decomposition
Generally, stability conditions based on a parameter-dependent Lyapunov function have difficulty expressing the derivative expansion of the parameter. Polynomials, however, allow an easier development and expansion of the partial derivatives of the state within the endogenous-parameter polynomial formulation. Let us recall some mathematical premises mainly introduced by [START_REF] Parrilo | Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization[END_REF][START_REF] Parrilo | Semidefinite programming relaxations for semialgebraic problems[END_REF][START_REF] Prajna | Nonlinear control synthesis by sum of squares optimization: A Lyapunov-based approach[END_REF]. Lemma 2.2.1. [START_REF] Parrilo | Semidefinite programming relaxations for semialgebraic problems[END_REF] The canonical decomposition of a polynomial as a sum of squares can be converted into a semidefinite programming problem. The polynomial analysis tools introduced by [START_REF] Papachristodoulou | SOSTOOLS: Sum of squares optimization toolbox for MATLAB[END_REF][START_REF] Papachristodoulou | On the construction of Lyapunov functions using the sum of squares decomposition[END_REF][START_REF] Prajna | Nonlinear control synthesis by sum of squares optimization: A Lyapunov-based approach[END_REF] can handle the stabilization PLMI conditions as SoS expressions.
Lemma 2.2.2. [START_REF] Prajna | Nonlinear control synthesis by sum of squares optimization: A Lyapunov-based approach[END_REF] Let $F(x)$ be an $N\times N$ symmetric polynomial matrix of degree $2d$ in $x\in\mathbb{R}^{n}$, and let $Z(x)$ be a column vector whose entries are all monomials in $x$ with degree no greater than $d$. Consider the statements:
(1) $F(x)\succeq 0$ for all $x\in\mathbb{R}^{n}$;
(2) $v^{T}F(x)v$ is a sum of squares in $(x,v)$, where $v\in\mathbb{R}^{N}$.
Then statement (2) implies statement (1).
The polynomial LMI stability conditions are infinite-dimensional parameter-dependent conditions. Following the SoS-based polynomial method [START_REF] Parrilo | Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization[END_REF][START_REF] Parrilo | Semidefinite programming relaxations for semialgebraic problems[END_REF], the monomials in $x$ (variables) are separated from their coefficients (decision variables) in the polynomial conditions. Then, some slack variables are injected through the S-procedure, which converts the infinite parametric conditions into finitely many LMIs (solvable by interior-point methods with solvers such as Mosek, SeDuMi, SDPT3, SDPA, etc.). The variation of the S-procedure constraints provides significantly more relaxed conditions than the existing approaches (discretization in section 2.2.4.1, or convex combination in sections 2.2.1 and 2.2.3.1). In addition, there are other methods to decompose polynomial matrix inequalities, see for example the S-variable approach [START_REF] Ebihara | Robust H2 Performance Analysis Of Uncertain Lti Systems Via Polynomially Parameter-Dependent Lyapunov Functions[END_REF][START_REF] Sato | Robust stability/performance analysis for linear timeinvariant polynomially parameter-dependent systems using polynomially parameter-dependent Lyapunov functions[END_REF], 2007a).
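As a minimal illustration of Lemma 2.2.2, the following YALMIP sketch certifies positive semidefiniteness of an assumed 2×2 polynomial matrix F(x) by testing whether v'F(x)v is a sum of squares; the matrix F is an arbitrary example, not taken from the systems studied here.

```matlab
% SoS relaxation sketch (YALMIP sos module): certify F(x) >= 0 for all x
% by checking that v'*F(x)*v is a sum of squares in (x, v).
sdpvar x1 x2 v1 v2
x = [x1; x2];  v = [v1; v2];

% Assumed 2x2 symmetric polynomial matrix (illustrative example).
F = [x1^2 + 1,   x1*x2;
     x1*x2,      x2^2 + 1];

p = v'*F*v;                       % scalar polynomial in (x, v)
sol = solvesos([sos(p)]);         % SoS feasibility problem
if sol.problem == 0
    disp('v''*F(x)*v is SOS: F(x) is positive semidefinite for all x.');
end
```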
Example
According to the representations of each model (i.e., LPV, T-S fuzzy, polynomial, polynomial fuzzy) discussed in Section 2.1.5, the characteristic stability analysis is carried out with a suitable approach (e.g., gridding, convex combination, SoS, etc.).
Relaxation of Parameter-Dependent Lyapunov Function
It should be noted that the stability analysis using the quadratic Lyapunov form, i.e., condition (2.55) of Theorem 2.2.2, fails to verify the stability of the T-S fuzzy system (2.32).
Example 2.2.2. Let us consider the representations of nonlinear system (2.25), including the affine system (2.28), the polynomial system (2.29), the T-S fuzzy system (2.32), and the fuzzy polynomial system (2.35).
LPV model & Gridding
The stability of LPV system (2.28) is analyzed using the compact set of parameters defined in (2.71). As discussed in section 2.2, these conditions can either be handled by the sum-of-squares decomposition method or treated directly by the SoS toolbox. Both methods are based on a generalization of Finsler's lemma. In this example, the polynomial matrix conditions are converted to SoS expressions and solved by the SoS toolbox. A higher monomial degree increases the computational complexity, so if the polynomials with the highest degree are fuzzified, then the fuzzy polynomial model is theoretically more advantageous than the original polynomial model.
Polynomial model & SoS
Following the line of Theorem 2.2.7, the SoS conditions for polynomial system (2.29) are feasible and return parameter-dependent Lyapunov matrices P(x) whose polynomial coefficients are computed numerically by the SoS toolbox.
The computational time taken to solve the conditions with the gridding and convex combination methods is 14.4237 and 0.2297 seconds, respectively, while the time spent on the sum-of-squares polynomial methods is 19.1659 and 24.6257 seconds. The characteristic convergence of the region of stability is shown in Figure 2-5. However, we cannot draw any further conclusions: each relaxation method has its own advantages and disadvantages, which are compatible with each type of LPV representation.
Theoretically, the SoS decomposition should give the least conservative results. But up to now, the limitations of the numerical computation tools (e.g., the SoS toolbox) have not allowed taking full advantage of this lossless transformation. Specifically, the time taken to sort and separate the variables and to solve the conditions increases exponentially.
Besides, this numerical computation tool is sensitive to complex conditions (e.g., a large-scale PLMI condition). On the other hand, the convex polyhedron methods (such as the stability conditions of T-S fuzzy or polytopic systems) provide solutions within a reasonable algorithmic time of 0.2297 s, compared with 24.6257 s for the polynomial-fuzzy SoS method. Finally, the gridding method handles the parameter-dependent conditions straightforwardly by discretizing the parameter domain. However, the computational time needed to treat the LMI conditions also increases exponentially with the number of parameters.
There's always a price to pay!
The Conservativeness of Fuzzy and Fuzzy Polynomial Lyapunov Functions
In the last section, the relaxation of PLMI stability conditions analyzed for LPV systems delivered satisfactory results (except for the quadratic Lyapunov function, which returns an infeasible result). Next, an example of [START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF] is used to show the conservatism of the parameter-dependent LMI conditions where traditional relaxation methods do not work (as illustrated in Table 2.1). The approximations of the nonlinear model by the LPV representations can be carried out similarly to section 2.1.5; details of the transformation can be found in the literature cited below. If the designed accuracy radius is achieved, then info.numerr is set to 0 [START_REF] Sturm | Using SeDuMi 1.02, A Matlab toolbox for optimization over symmetric cones[END_REF]. A way out of these numerical ambiguities is to "transform" all semi-definite inequalities into definite inequalities [START_REF] Labit | SEDUMI INTERFACE 1.02: a tool for solving LMI problems with SEDUMI[END_REF]Dimitri Peaucelle et al., 2002). In this work, we introduce a positive definite shift in all inequalities such that
$$P_0 \succ 0,\quad P_0\in\mathbb{R}^{n\times n} \;\;\longrightarrow\;\; P_0 - \epsilon I_n \succeq 0, \qquad (2.87)$$
with a small arbitrary positive constant $\epsilon$. This constraint transformation is expected not to add extra conservatism and to tune the solver convergence without strongly modifying the feasibility radius. It should be noted that a strict inequality $P_0 \succ 0$ is not recommended with current numerical computation tools such as Yalmip. So, the numerical adjustment (2.87) provides an appropriate modification of the linear matrix inequalities (e.g., the stability condition of the T-S fuzzy multi-convex system), but it does not fit the SoS constraints, for two reasons: first, the solver may assert that there is no solution; second, the polynomial matrix inequality can be positive but not a sum of squares. Accordingly, we applied a refinement of the sum-of-squares polynomials with Positivstellensatz multipliers proposed by [START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF], whose polynomial coefficients are computed numerically. Though the old-fashioned sum-of-squares decomposition method failed to certify the stability of the fuzzy polynomial system, the obtained results demonstrate the effectiveness of the Positivstellensatz multiplier polynomial approach. However, this method is also sensitive to numerical problems, which regularly entail unsatisfactory results related to, e.g., complex stabilization conditions [START_REF] Furqon | An SOS-Based Control Lyapunov Function Design for Polynomial Fuzzy Control of Nonlinear Systems[END_REF]Sala & Ariño, 2008). On the other hand, the arbitrary uniform discretization over the parameter domain (i.e., gridding) delivers promising results (as observed in Table 2.1), with the advantage of being straightforward to perform on the polynomial LMI conditions.
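Returning to the numerical shift (2.87): in practice it simply amounts to passing an ε-shifted constraint to the solver instead of a semidefinite one. A minimal YALMIP sketch, with an arbitrary dimension and tolerance, is given below.

```matlab
% Numerical conditioning sketch: replace P0 >= 0 (and the strict P0 > 0,
% which is not enforced numerically) by an epsilon-shifted constraint (2.87).
n    = 3;
epsi = 1e-6;                          % small arbitrary positive constant
P0   = sdpvar(n, n, 'symmetric');

F_semidef = [P0 >= 0];                % marginally feasible at P0 = 0
F_shifted = [P0 - epsi*eye(n) >= 0];  % epsilon-shifted definite constraint (2.87)

optimize(F_shifted);                  % feasibility check with the shifted constraint
```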
Through these two examples, it can be emphasized that all the stabilization conditions developed for saturated LPV systems in this thesis are formulated as parameter-dependent matrix inequalities, e.g., condition (2.73), which are solved by gridding over the defined domain of parameters. The decision matrices, such as the Lyapunov candidate, are chosen in polynomial form, for example:
$$P(\rho) = P_0 + P_1\rho_1(t) + P_2\rho_2(t) + P_{12}\rho_1(t)\rho_2(t) + P_{11}\rho_1^{2}(t) + P_{22}\rho_2^{2}(t).$$
This polynomial expression is simpler to unify than other nonlinear forms such as trigonometric forms.
The Saturation Nonlinearity -Stabilization Analysis
In control system design, a phenomenon observed in many engineering systems, chemical processes, biology, and even economics is actuator saturation. At first glance, the effect of this nonlinearity seems quite simple, but analyzing it inappropriately or ignoring its effects can lead to performance degradation or system instability. Actuator saturation is unavoidable in practical engineering dynamic systems because of physical limits (velocity, voltage, cycle, etc.) and safety constraints (pressure, temperature, power, energy consumption, etc.). Besides, the saturation effect, characterized as a nonlinearity, cannot be linearized; a stabilization method must therefore keep the operating points of the feedback control system in the region where no element saturates. In the last decades, considerable attention has been devoted to LTI systems subject to actuator saturation, see for instance [START_REF] Hu | Control Systems with Actuator Saturation[END_REF][START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF][START_REF] Zaccarian | Modern Anti-windup Synthesis: Control Augmentation for A tuator Saturation[END_REF] and the references therein.
There are two main approaches to the stabilization analysis in the literature. The first one considers the saturation bounds as a prerequisite in the design strategy (João Manoel Gomes da Silva & Tarbouriech, 1999;[START_REF] Henrion | Control of linear systems subject to time-domain constraints with polynomial pole placement and LMIs[END_REF][START_REF] Henrion | LMI relaxations for robust stability of linear systems with saturating controls[END_REF][START_REF] Hu | An analysis and design method for linear systems subject to actuator saturation and disturbance[END_REF][START_REF] Hu | Composite quadratic Lyapunov functions for constrained control systems[END_REF][START_REF] Tarbouriech | Stability analysis and stabilization of systems presenting nested saturations[END_REF]. In the second one, an asymptotically stabilizing synthesis is proposed for the closed-loop system disregarding the control bounds; then, a suitable design strategy is analyzed to compensate for the saturation, such as Direct Linear Anti-windup (DLAW) or Model Recovery Anti-windup (MRAW). The anti-windup domain has been discussed thoroughly in recent decades, see for example [START_REF] Galeani | Reduced order linear anti-windup augmentation for stable linear systems[END_REF][START_REF] Galeani | A Tutorial on Modern Anti-windup Design[END_REF][START_REF] Gomes Da Silva | Antiwindup design with guaranteed regions of stability: An LMI-based approach[END_REF][START_REF] Grimm | Antiwindup for Stable Linear Systems with Input Saturation: An LMI-Based Synthesis[END_REF][START_REF] Hu | Anti-windup synthesis for linear control systems with input saturation: Achieving regional, nonlinear performance[END_REF][START_REF] Wu | Anti-windup controller design using linear parameter-varying control methods[END_REF][START_REF] Zaccarian | A common framework for anti-windup, bumpless transfer and reliable designs[END_REF], 2005[START_REF] Do | Approche LPV pour la commande robuste de la dynamique des véhicules : amélioration conjointe du confort et de la sécurité[END_REF] and the references therein. It can be seen that the closed-loop stabilization analysis for DLAW and MRAW is more complicated when considering the effects of nonlinear behavior and uncertain dynamics. Nonetheless, the DLAW construction is beyond the scope of this dissertation, so it will not be included. Instead, analyses of the anti-windup compensator issue using differential-algebraic equations to constrain the DOF controller are presented in (Bui Tuan et al., 2021).
It can be emphasized that many of the cited research papers and books are devoted to LTI systems. Obviously, a saturated synthesis for LPV/quasi-LPV systems does not fit directly into the stabilization analysis developed for saturated LTI systems. However, only a few works in the literature address saturation problems for nonlinear or parameter-dependent systems, for example LPV systems [START_REF] Cao | Set invariance analysis and gain-scheduling control for LPV systems subject to actuator saturation[END_REF][START_REF] Forni | Model based, gain-scheduled anti-windup control for LPV systems[END_REF], 2010;Kapila & Grigoriadis, 2002;[START_REF] Nguyen | Gain-scheduled static output feedback control for saturated LPV systems with bounded parameter variations[END_REF][START_REF] Roos | On-Ground Aircraft Control Design Using an LPV Anti-windup Approach[END_REF][START_REF] Theis | Observer-based LPV control with anti-windup compensation: A flight control example[END_REF][START_REF] Wu | Anti-windup controller design using linear parameter-varying control methods[END_REF] or T-S fuzzy systems [START_REF] Benzaouia | Advanced Takagi-Sugeno Fuzzy Systems: Delay and Saturation[END_REF][START_REF] Dey | Stability and Stabilization of Linear and Fuzzy Time-Delay Systems[END_REF]. So, let us discuss one stability analysis tool used for saturated LPV systems and LPV time-delay systems.
Sector Nonlinearity Model
A representation of LPV systems with actuator saturation is given in the forms of [START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF], with the assumptions that the open-loop poles are located in the closed left-half plane and that the set of admissible initial conditions is explicitly defined. It can be noticed that the stabilizing condition analysis for system (2.1) is typically localized, corresponding to the assumptions on the compact set of parameters (2.8)-(2.9). Similarly to the LPV system, the T-S fuzzy system is characterized by the operating ranges of the fuzzified functions. A global stabilization condition may exist in the control synthesis for this class of systems, but when saturation limits are involved, a local stabilization condition is more reasonable.
Saturation nonlinearity.
Following this approach, the closed-loop system with saturated actuators is generally classified into three representations:
(1)-Polytopic models,
(2)-Sector nonlinearity models, and
(3)-Regions of saturation models.
Given a control input vector $u(t) = [u_1(t),\,u_2(t),\dots,u_m(t)]^{T}$, the saturation is defined componentwise by
$$\operatorname{sat}\big(u_i(t)\big) = \operatorname{sign}\big(u_i(t)\big)\,\min\{|u_i(t)|,\ \bar u_i\},\qquad i = 1,2,\dots,m. \qquad (2.90)$$
The notations $\bar u_i := \max u_i(t)$ and $\underline u_i := \min u_i(t)$ are used for the purpose of simplifying the presentation. As analyzed in the literature [START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF], the polytopic bounds technique and the regions-of-saturation technique might result in higher computational complexity, with respectively $2^{m}$ and $3^{m}$ conditions, compared with the sector nonlinearity approach, which has only $m$ conditions. Furthermore, it is worth noting that the regions of stability of the first two approaches seem to be equally scaled (under the same primary assumptions), if not to say that the local sector bounding provides better performance. Since the mentioned works commonly impose a symmetric saturation (work on asymmetric saturation is rare), the lower bound is typically set by $\underline u_i = -\bar u_i$, with $\bar u_i > 0$, which can be recognized as the prototype of the GSC condition in [START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF]. However, the existence of the vector $v(t)$ in the GSC condition and of the associated polyhedral set must be handled in the design. It is possible to find feedback control laws (e.g., $u(t) = Kx(t)$) so that the unsaturated closed-loop system is stable (i.e., $A + BK$ is Hurwitz).
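To fix the notation used in the sequel, a small MATLAB sketch of the componentwise symmetric saturation (2.90) and of the associated dead-zone ψ(u) = u − sat(u) is given below; the bounds ū are illustrative values.

```matlab
% Componentwise symmetric saturation and dead-zone (illustrative bounds).
u_bar = [1; 2];                                   % assumed saturation levels
sat_u = @(u) sign(u) .* min(abs(u), u_bar);       % sat(u_i) = sign(u_i)*min(|u_i|, u_bar_i)
psi_u = @(u) u - sat_u(u);                        % dead-zone nonlinearity psi(u) = u - sat(u)

u_test = [0.5; -3];
disp(sat_u(u_test));    % [0.5; -2]
disp(psi_u(u_test));    % [0;   -1]
```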
Due to actuator saturation, there exist initial conditions that could lead to divergence of the closed-loop system, an incorrect convergence of the equilibrium point away from the origin, or destabilization [START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF]. However, determining the initial conditions such that the trajectories of the closed-loop system are asymptotically stable is not as simple as it seems. Therefore, the estimation of the region of attraction $\mathcal{R}_A$, related to the admissible set of initial conditions, is generally encountered in saturation control synthesis, see for example [START_REF] Cao | An antiwindup approach to enlarging domain of attraction for linear systems subject to actuator saturation[END_REF][START_REF] Gomes Da Silva | Local stabilization of discretetime linear systems with saturating controls: an LMI-based approach[END_REF], 2005;[START_REF] Hu | An analysis and design method for linear systems subject to actuator saturation and disturbance[END_REF][START_REF] Hu | Anti-windup synthesis for linear control systems with input saturation: Achieving regional, nonlinear performance[END_REF]. In this thesis, the stability analysis tool for the dynamic system is primarily deployed by considering various forms of the Lyapunov function. The associated level sets are given by the characterized domains corresponding to:
Fuzzy Lyapunov function (FLF) candidate:
$$\mathcal{E}(P_h,1) = \Big\{x : V(x,t) = x^{T}P\big(h(t)\big)x \le 1\Big\},\qquad P(h) = \sum_{i=1}^{N_l} h_i(t)\,P_i, \qquad (2.103)$$
$$h_i(t)\ge 0,\qquad \sum_{i=1}^{N_l} h_i(t) = 1,\qquad i = 1,\dots,N_l. \qquad (2.104)$$
Polynomial fuzzy Lyapunov function (PFLF) candidate:
$$\mathcal{E}(P_{x,h},1) = \Big\{x : V(x,t) = x^{T}\Big(\sum_{i=1}^{N_l} h_i(t)\,P_i(x)\Big)x \le 1\Big\}. \qquad (2.105)$$
Then, the problem is formulated as finding the matrices $P$ defined in (2.100)-(2.105) so that the given level sets $\mathcal{E}(P,1)$ are regions of asymptotic stability for the closed-loop system. Since the parameter-dependent form (2.102) can represent both (2.103) and (2.105), and since it reduces to the quadratic form when the parameter is constant, this formulation is more general. Henceforth, this parameter-dependent elliptical domain is employed mostly; the remaining forms are only considered in specific cases.
Among the Lyapunov functions derived for the stability analysis of LPV system (2.1) (without external disturbance), the quadratic formulation generally leads to strict conditions (illustrated by the smallest ellipsoid, as seen in Figure 2-7.a). The next figure shows the piecewise Lyapunov function commonly used for switching LTI systems. Visually, we can see that the estimates of the region of asymptotic stability obtained with the parametric Lyapunov functions (FLF and PFLF) are significantly larger. Nevertheless, optimization problems such as size or performance criteria result in parameterized conditions, and the relaxation of these PLMIs depends on the structure of the transformation used for the stabilization condition.
Optimization problems.
The elliptic domain optimization is discussed in [START_REF] Boyd | Linear Matrix Inequalities in System and Control Theory[END_REF] [START_REF] Gomes Da Silva | Local stabilization of discretetime linear systems with saturating controls: an LMI-based approach[END_REF]Tarbouriech, , 1999;;[START_REF] Hu | An analysis and design method for linear systems subject to actuator saturation and disturbance[END_REF][START_REF] Hu | Anti-windup synthesis for linear control systems with input saturation: Achieving regional, nonlinear performance[END_REF], and with a convex combination of piecewise Lyapunov functions in [START_REF] Hu | Composite quadratic Lyapunov functions for constrained control systems[END_REF]. Without loss of generality, we may consider the unit level set $\eta = 1$ to simplify the optimization problems involving the bilinear coupling between $\eta$ and $P$. In Chapter 4, the maximization of the minor axis is carried out for the parametric ellipsoidal set (2.102). The optimization problem is even more interesting when implemented on an LPV time-delay system, with a stability analysis based on the Lyapunov-Krasovskii functional, discussed in Chapter 6.
On the other side, when considering the effect of external disturbances and of the initial condition, the optimization of the performance requirement consists in finding the L2-gain scheduling controller for saturated system (2.1) so that the following criterion is satisfied:
$$\|z(t)\|_{\mathcal{L}_2}^{2} \;\le\; \gamma^{2}\,\|w(t)\|_{\mathcal{L}_2}^{2} + \gamma\,\eta_0^{-1}, \qquad (2.106)$$
where $\eta_0$ is the maximum of the non-null admissible initial conditions $x(0)\in\mathcal{E}(P_\rho,\eta_0^{-1})$, and the exogenous signal is energy-bounded, such that
$$w\in\mathcal{W} := \Big\{w\in\mathcal{L}_2^{d}\;\big|\;\|w(t)\|_{\mathcal{L}_2}^{2}\le\delta^{-1}\Big\}.$$
Actually, the optimization problem is the trade-off between the estimation of the region of asymptotical stability (RAS), the level of attenuated disturbances, and the region of linear behavior (characterized by the unsaturated regulation region).
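As a simple illustration of this trade-off, the YALMIP sketch below enlarges a quadratic ellipsoid E(P,1) while keeping it inside the region of linear behavior |K(s,:)x| ≤ ū_s for a fixed gain; the data and the trace heuristic used as a size criterion are assumptions for illustration, not the exact minor-axis criterion of Chapter 4.

```matlab
% Enlarging the ellipsoid E(P,1) = {x : x'Px <= 1} inside the region of
% linear behavior {x : |K(s,:)x| <= u_bar(s)} for a fixed gain K (sketch).
n = 2;  m = 1;
K     = [1.2 0.8];        % assumed state-feedback gain
u_bar = 2;                % assumed saturation level

P = sdpvar(n, n, 'symmetric');
F = [P >= 1e-6*eye(n)];
for s = 1:m
    % E(P,1) inside {|K(s,:)x| <= u_bar(s)}  <=>  K(s,:)*inv(P)*K(s,:)' <= u_bar(s)^2
    F = [F, [P,       K(s,:)';
             K(s,:),  u_bar(s)^2] >= 0];        % Schur complement form
end

% Trace heuristic: a small trace(P) tends to give a large ellipsoid E(P,1).
% In a real design, the Lyapunov stability LMIs would also constrain P.
optimize(F, trace(P), sdpsettings('solver', 'sedumi', 'verbose', 0));
P_opt = value(P);
```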
Saturated Feedback Control Synthesis
The feedback controller structures illustrated in Figure 2-8 are proposed to stabilize the saturated LPV systems. The PLMI stabilization conditions are derived from a stability analysis using the parameter-dependent Lyapunov function (2.102). Concerning the constraints defined by the polyhedral set $\mathcal{S}(\bar u)$, the analysis of the saturation bounds is addressed as follows:
$$v_s(t)^{T}v_s(t) \;\le\; \bar u_s^{2}\;x(t)^{T}P(\rho)\,x(t) \;=\; \bar u_s^{2}\,V(x,t),\qquad s = 1,2,\dots,m, \qquad (2.107)$$
$$\text{if } w(t)\equiv 0:\qquad \dot V(x,t) < 0,\qquad \forall\,x(0)\ \text{with}\ V\big(x(0)\big)\le\eta_0^{-1}, \qquad (2.108)$$
$$\text{if } w(t)\not\equiv 0,\ w\in\mathcal{W}:\qquad \dot V(x,t) < 0,\qquad \forall\,x(0)\ \text{with}\ V\big(x(0)\big)\le\eta_0^{-1}. \qquad (2.109)$$
The necessary condition (2.107) is set directly for each component $v_s(t)$ of the auxiliary controller. The sufficient conditions (2.108)-(2.109), related to the stabilization condition with and without the influence of the disturbance, guarantee that the closed-loop trajectories are confined within the level set of the ellipsoidal domain $\mathcal{E}(P_\rho,\eta)$ for initial conditions belonging to this domain. The satisfaction of the necessary and sufficient conditions means that the ellipsoid is included in the polyhedral set $\mathcal{S}(\bar u)$. From Corollary 2.3.1, the following GSC condition holds:
$$\psi\big(u(t)\big)^{T}\,T\,\big[\psi\big(u(t)\big) - v(t)\big] \;\le\; 0. \qquad (2.110)$$
So, depending on the feedback controller structure $u(t)$, we choose an appropriate auxiliary controller structure $v(t)$, so that the combination of these two vectors in the latter GSC condition is convenient for the design purpose. In the following sections, the necessary conditions are specifically designed for each controller structure (state feedback, observer-based feedback, static output feedback, and dynamic output feedback) corresponding to each optimization method.
Parameterized State Feedback Controller
The simplest method for robust stabilization and performance analysis of the LPV system is evidently the development of a state feedback law, for which the stabilization problem can be expressed directly as a PLMI condition. Such studies have been coherently discussed for continuous and discrete LTI systems in the works of (J.M. Gomes da Silva [START_REF] Gomes Da Silva | Local stabilization of discrete-time linear systems with saturating controls: an LMI-based approach[END_REF][START_REF] Gomes Da Silva | Local stabilization of linear systems under amplitude and rate saturating actuators[END_REF][START_REF] Gomes Da Silva | Contractive polyhedra for linear continuous-time systems with saturating controls[END_REF][START_REF] Hu | An analysis and design method for linear systems subject to actuator saturation and disturbance[END_REF][START_REF] Hu | Stability and performance for saturated systems via quadratic and nonquadratic Lyapunov functions[END_REF][START_REF] Hu | Anti-windup synthesis for linear control systems with input saturation: Achieving regional, nonlinear performance[END_REF][START_REF] Tarbouriech | Stability analysis and stabilization of systems presenting nested saturations[END_REF][START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF].
Considering the compact sets of parameters (2.8)-(2.9) and a gain-scheduled state feedback controller $u(t) = K(\rho)x(t)$, the plant input is $\operatorname{sat}(u(t)) = \operatorname{sat}(K(\rho)x(t))$ with bound $\bar u$. The closed-loop system is then obtained by substituting this gain-scheduled state feedback controller into the saturated LPV system (2.89). In Section 4.1.2, the optimization problems involving the set of admissible initial conditions $\eta_0$, the estimation of the ellipsoidal domain, the upper bound of the disturbance $\delta^{-1}$, and the disturbance rejection level $\gamma$ are investigated. However, minimizing the disturbance attenuation entails a decline of the linear operating area (the region of unsaturated control signals) and predominantly affects the generalized sector condition. So, an enhancement of the control system's performance can be obtained by utilizing the D-stability LMI method to relocate the poles of the closed-loop system.
Parameterized Static Output Feedback Controller
Since, in practical control engineering, some states are unmeasurable, the full-state feedback controller is not appropriate for implementation. But solving the stabilization condition of static output feedback (SOF) is much more difficult, usually leading to a nonconvex, bilinear matrix inequality (BMI) problem [START_REF] Sadabadi | From static output feedback to structured robust static output feedback: A survey[END_REF][START_REF] Syrmos | Static output feedback -A survey[END_REF].
On the one side, the iterative LMI algorithm [START_REF] Cao | Static output feedback stabilization: An ILMI approach[END_REF][START_REF] He | An Improved ILMI Method for Static Output Feedback Control With Application to Multivariable PID Control[END_REF], the algebraic equation approach [START_REF] Gossmann | Parameter dependent static output feedback control-An LPV approach[END_REF][START_REF] Syrmos | Static output feedback -A survey[END_REF], the iterative global optimization CCL algorithm [START_REF] El Ghaoui | A cone complementarity linearization algorithm for static output-feedback and related problems[END_REF], the two-step algorithm with output structural constraints (D. [START_REF] Peaucelle | An efficient numerical solution for H2 static output feedback synthesis[END_REF], the congruence transformation [START_REF] Dong | Robust static output feedback control synthesis for linear continuous systems with polytopic uncertainties[END_REF][START_REF] Prempain | Static output feedback stabilisation with H∞ performance for a class of plants[END_REF] and the S-variable method [START_REF] Ebihara | S-Variable Approach to LMI-Based Robust Control[END_REF][START_REF] Pipeleers | Extended LMI characterizations for stability and performance of linear systems[END_REF] have been proposed to cope with the SOF control design problem. Besides, other unsaturated SOF controller syntheses can be found in [START_REF] Chang | New Results on Output Feedback <formula formulatype="inline"> <tex Notation="TeX">$H_{\infty} $</tex></formula> Control for Linear Discrete-Time Systems[END_REF][START_REF] Gossmann | Parameter dependent static output feedback control-An LPV approach[END_REF][START_REF] Kau | Robust H∞ fuzzy static output feedback control of T-S fuzzy systems with parametric uncertainties[END_REF][START_REF] Nguyen | Static output feedback design for a class of constrained Takagi-Sugeno fuzzy systems[END_REF][START_REF] Nguyen | Gain-scheduled static output feedback control for saturated LPV systems with bounded parameter variations[END_REF][START_REF] Qiu | Static-output-feedback H∞ control of continuoustime T-S fuzzy affine systems via piecewise lyapunov functions[END_REF], where a gain-scheduled static output feedback (SOF) control law is generally designed in the form $u(t) = K(\rho)y(t)$. However, there is still room for researching and developing saturated SOF structures implemented in LPV systems.
On the other side, a controller gain can be considered in the form $K(\rho) = Y(\rho)W^{-1}$, and a congruence transformation is deployed as in [START_REF] Dong | Robust static output feedback control synthesis for linear continuous systems with polytopic uncertainties[END_REF]Nguyen et al., 2018). Let us consider the SOF controller $u(t) = Y(\rho)W^{-1}y(t)$ for the saturated LPV system (2.89); the closed-loop system is then represented by the parameter-dependent formulation:
$$
\begin{aligned}
\dot x(t) &= \big(A(\rho)+B(\rho)K(\rho)C\big)x(t) - B(\rho)\,\psi\big(u(t)\big) + B_w(\rho)\,w(t),\\
z(t) &= \big(H(\rho)+J(\rho)K(\rho)C\big)x(t) - J(\rho)\,\psi\big(u(t)\big) + J_w(\rho)\,w(t),\\
y(t) &= Cx(t),\qquad u(t) = K(\rho)y(t) = Y(\rho)W^{-1}Cx(t).
\end{aligned}
\qquad (2.113)
$$
The stabilization PLMI condition is derived from a congruence transformation that treats the bilinear structure matrix $Y(\rho)W^{-1}C$ without using strong mathematical constraints. Due to this specific construction, a scalar variable is injected into the design condition. Generally, this scalar is searched by gridding on a logarithmic scale over an interval, e.g., $[10^{-n}, 10^{n}]$, with $n\in\mathbb{N}$ a positive integer. In Section 4.2, we use this controller structure to analyze the local stabilization involved in the expansion of the polyhedral set provided in conditions (2.107)-(2.109).
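A sketch of this logarithmic line search is given below; the function solve_sof_lmi is a hypothetical placeholder standing for the construction and solution of the SOF stabilization LMIs of Section 4.2 (here replaced by a dummy so the loop is runnable).

```matlab
% Logarithmic grid search over the injected scalar (sketch).
% solve_sof_lmi stands in for building and solving the SOF stabilization
% LMIs for a fixed scalar; here it is a dummy placeholder returning
% (feasible, gamma) so that the search loop is runnable.
solve_sof_lmi = @(epsilon) deal(true, 1 + abs(log10(epsilon)));   % placeholder

n_exp      = 3;
eps_grid   = logspace(-n_exp, n_exp, 2*n_exp + 1);   % 10^-3, ..., 10^3
best_gamma = inf;  best_eps = NaN;

for k = 1:numel(eps_grid)
    [feasible, gamma_k] = solve_sof_lmi(eps_grid(k));
    if feasible && gamma_k < best_gamma
        best_gamma = gamma_k;  best_eps = eps_grid(k);
    end
end
fprintf('Best scalar: %g, attenuation level: %g\n', best_eps, best_gamma);
```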
Parameterized Observer-Based Output Feedback Controller
In the branch using measured outputs to design control systems, the dynamic output feedback (DOF) and the observer-based feedback (OBF) controllers have gained considerable attention in recent decades. Each approach has its benefits and drawbacks. The observer-based controller structure is a particular form of DOF, introduced by [START_REF] Cristi | Dynamic Output Feedback by Robust Observer and Variable Structure Control[END_REF], providing a simple construction that is easier to implement. Alternatively, the DOF full-block design analysis presented by (Chilali et al., 1996;[START_REF] Gahinet | Affine parameter-dependent Lyapunov functions and real parametric uncertainty[END_REF]Gahinet & Apkarian, 1994;[START_REF] Scherer | Multiobjective output-feedback control via LMI optimization[END_REF] for LTI systems proposed a congruence transformation with a new substitution of variables. Both approaches have been analyzed and implemented in a wide range of engineering systems.
Besides, it can be mentioned that there are two common approaches for observer-based controller synthesis: the two-step separated strategy and the full-block observer-based output feedback control framework. The first method separates the design of the observer from that of the controller; it is commonly applied to accurately known systems (the states are not influenced by the uncertainties).
On the other hand, the simultaneous design method for the extended system, which includes the system dynamics and the estimation error, can handle the parameter uncertainty. In this case, the observation error is involved in the control input and the state of the plant through the feedback controller $u(t) = K(\rho)\hat x(t)$. This second method, on which this thesis focuses, presents many difficulties and more challenges for the control design strategy. Interested readers can find more details about observer-based controllers developed for LTI systems [START_REF] Lien | Robust observer-based control of systems with state perturbations via LMI approach[END_REF], for nonlinear Lipschitz systems [START_REF] Ahmad | Observer-based robust control of one-sided Lipschitz nonlinear systems[END_REF][START_REF] Ibrir | Observer-based control of discrete-time Lipschitzian non-linear systems: Application to one-link flexible joint robot[END_REF][START_REF] Zemouche | Robust observerbased stabilization of Lipschitz nonlinear uncertain systems via LMIs -discussions and new design procedure[END_REF][START_REF] Zemouche | A new LMI based H∞ observer design method for Lipschitz nonlinear systems[END_REF]Zemouche & Boutayeb, 2013), for nonlinear systems represented by the T-S fuzzy model [START_REF] Benzaouia | Advanced Takagi-Sugeno Fuzzy Systems: Delay and Saturation[END_REF][START_REF] Bui Tuan | Robust TS-Fuzzy observer-based control for Quadruple-Tank system[END_REF][START_REF] Bui Tuan | Robust Observer-Based Control for TS Fuzzy Models Application to Vehicle Lateral Dynamics[END_REF][START_REF] Dahmani | Observer-Based Robust Control of Vehicle Dynamics for Rollover Mitigation in Critical Situations[END_REF]Dahmani, Pages, El Hajjaji, et al., 2015;[START_REF] Gassara | Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach[END_REF], for LPV systems (Briat, 2015a;[START_REF] Heemels | Observer-based control of discrete-time LPV systems with uncertain parameters[END_REF], for saturated LTI systems [START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF], and in the references therein. This section aims to deliver the necessary conditions of feedback control based on the observer structure as follows:
$$
\begin{aligned}
\dot{\hat x}(t) &= A(\rho)\hat x(t) + B(\rho)\,\operatorname{sat}\big(u(t)\big) + L(\rho)\big(Cx(t)-C\hat x(t)\big),\\
u(t) &= K(\rho)\hat x(t).
\end{aligned}
\qquad (2.114)
$$
The stabilization of the closed-loop system (2.115) is analyzed like that of the unsaturated observer-based feedback control system, with an additional reformulation of the constrained-control conditions, deployed in the same way as in section 2.3.2.1. Now, considering an ellipsoid as the region of asymptotic stability for system (2.115), defined on the extended state $\xi(t) = [x(t)^{T}\; e(t)^{T}]^{T}$ with estimation error $e(t) = x(t)-\hat x(t)$,
$$\mathcal{E}\big(P_\rho,\eta^{-1}\big) = \Big\{\xi\in\mathbb{R}^{2n} : \xi^{T}P(\rho)\,\xi \le \eta^{-1}\Big\},$$
the saturation constraints are handled through conditions analogous to (2.107)-(2.109), leading to the design condition (2.119) on the gains $K(\rho)$ and $L(\rho)$.
However, the traditional approach to the analytical stabilization conditions for this controller has a major drawback (the conservatism will be exposed in Sections 4.2 and 4.5.1).
Parameterized Dynamic Output Feedback Controller
The full-block output feedback control law framework has also earned a lot of interest in a wide range of control syntheses. The early methodologies of dynamic output feedback controller synthesis include (Chilali et al., 1996;[START_REF] Gahinet | Explicit controller formulas for LMI-based H ∞ synthesis[END_REF]Gahinet & Apkarian, 1994;[START_REF] Scherer | Multiobjective output-feedback control via LMI optimization[END_REF], employed on LTI systems; [START_REF] Apkarian | Advanced gain-scheduling techniques for uncertain systems[END_REF]Apkarian & Gahinet, 1995;[START_REF] Apkarian | Parameterized LMIs in Control Theory[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF]Tuan & Apkarian, 2002), deployed for uncertain parameters and LPV systems; the LPV time-delay systems discussed in [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF](Briat, , 2015a)); and a dynamic parallel distributed compensation (DPDC) analyzed for T-S fuzzy systems in [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF] via LMI conditions for the cubic, quadratic and linear parameterizations. The synthesis methods were presented by Gahinet, but the ℋ∞ performance synthesis is widely known through [START_REF] Scherer | A full block S-procedure with applications[END_REF], especially for LTI systems. In essence, the stabilization problem for saturated DOF controllers is alternatively approached through the anti-windup strategy (DLAW or MRAW), as in the monograph [START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF]. From an initial condition in $\mathcal{E}_0$, the analysis of the saturation constraints on the auxiliary vector $v(t)$ is expanded similarly to the previous sections.
$$v_s(t)^{T}v_s(t) \;\le\; \bar u_s^{2}\;\xi(t)^{T}P(\rho)\,\xi(t),\qquad s = 1,2,\dots,m,$$
$$\text{if } w(t)\equiv 0:\qquad \dot V(\xi,t) < 0,\qquad \forall\,\xi(0)\ \text{with}\ V\big(\xi(0)\big)\le\eta_0^{-1},$$
$$\text{if } w(t)\not\equiv 0,\ w\in\mathcal{W}:\qquad \dot V(\xi,t) < 0,\qquad \forall\,\xi(0)\ \text{with}\ V\big(\xi(0)\big)\le\eta_0^{-1}. \qquad (2.127)$$
The nonlinear terms in the stabilization conditions related to the system construction (2.121), as well as the heterogeneous forms appearing in the latter conditions, will be thoroughly handled thanks to a congruence transformation presented in Section 4.4.
Conclusions
In this chapter, the fundamental concepts of LPV/quasi-LPV systems have been recapitulated. Three usual approximation forms have been presented, namely the polytopic system, the polynomial system, and the polynomial-fuzzy system, together with the corresponding stability analysis and synthesis for each representation.
The stability conditions derived from the analysis of the parameter-dependent Lyapunov function are presented as parametric linear matrix inequalities. Then, the relaxation methods for parameterized LMIs (gridding, sum of squares, and convex combination) convert the infinite-dimensional conditions into finite-dimensional constraints expressed as linear matrix inequalities, which can be solved by numerical tools (SDP, CP, etc.). Finally, the design specifications and requirements for the saturated control system are discussed and analyzed based on the Lyapunov technique, and the necessary and sufficient conditions are delivered for the stabilization of saturated LPV systems corresponding to constrained feedback control systems.
Chapter 3. Quadratic Stabilization Analysis for LPV/quasi-LPV Systems with Actuators Saturation
Quadratic Stabilization Analysis for LPV/quasi-LPV Systems
In this chapter, the stabilization of LPV systems is addressed by using an observer-based feedback control law associated with the ℋ∞ performance criterion. The first section is devoted to the stabilization analysis of LPV/quasi-LPV systems, including: § The observer-based feedback control stabilization is delivered for LPV systems considering the influence of disturbances and uncertain parameters. The non-convex problem related to the coupling of variable matrices is handled by a generalization of Young's inequality. This controller design improves the results of the quadratic conditions [START_REF] Dahmani | Observer-Based Robust Control of Vehicle Dynamics for Rollover Mitigation in Critical Situations[END_REF](Dahmani et al., , 2015) ) and presents an adjustment to optimize the concave condition using the scaling-commuting sets, which is addressed in sections 3.1.3 and 3.2.3.2. The results show a significant enhancement of the stabilization condition through the application of Young's inequality (relating to the scalars considered as a weighted geometric mean). § The stabilization conditions delivered in the parametric linear matrix inequality (PLMI) formulation are infinite-dimensional and cannot be solved directly by semidefinite programming (SDP) or cone programming (CP). So, the relaxation of PLMI methods is presented in section 3.2 to reformulate the design conditions into finite convex optimization problems.
A question raised about this content concerns the stabilization analysis of the observer-based controller for the LPV system. First, it can be noted that feedback control synthesis for this class of systems is well investigated and developed. However, the conventional approach to observer-based feedback design is conservative. In addition, the analysis and synthesis of controllers (i.e., state feedback, new observer-based feedback, static and dynamic output feedback) for the saturated LPV system will be delivered in the next chapter using parameter-dependent Lyapunov functions. Therefore, this chapter is devoted to addressing the concave problem relating to the quadratic stabilizing conditions and to presenting the relaxation methods for the parametrized LMI conditions. A global optimization method, the cone complementarity linearization (CCL), effectively reduces the gaps in Young's inequality and enhances the system's performance. A quadratic Lyapunov candidate is applied to demonstrate the conservatism relaxation obtained with the CCL method for the performance and robustness requirements. Accordingly, the scaling parameters method combined with the CCL algorithm provides smaller optimal disturbance rejection values, confirming the improvement of the system performance. In addition, the parameter-dependent condition relaxation methods, such as gridding, parametric matrix polynomials (S-variable, sum of squares), and convex combination (polytopic/T-S fuzzy), are presented along with further discussions. Finally, in the illustrative examples, a designed PDC controller is validated on a vehicle lateral stabilization system and on a quadruple-tank process system.

Stabilization Analysis

Based on the considered ellipsoidal set, inputs with bounded L2-norm are imposed by the small-gain theorem [START_REF] Boyd | Linear Matrix Inequalities in System and Control Theory[END_REF]. Setting the norm-bounded conditions directly on the control input, with $\bar X = \operatorname{diag}(X, I, I)$, condition (3.10) yields:
$$\begin{bmatrix} P^{-1} & P^{-1}K_s^{T}\\ K_sP^{-1} & \bar u_s^{2}\end{bmatrix}\succeq 0,\qquad s = 1,2,\dots,m. \qquad (3.12)$$
However, the remnant of $K_s$ in these conditions leads to a heterogeneous form. By deploying a matricial generalization of Young's inequality to eliminate these non-convex terms, we have, for any $\epsilon_1>0$ and matrices $\mathcal U$, $\mathcal V$ of compatible dimensions,
$$\mathcal U^{T}\mathcal V + \mathcal V^{T}\mathcal U \;\preceq\; \epsilon_1\,\mathcal U^{T}\mathcal U + \epsilon_1^{-1}\,\mathcal V^{T}\mathcal V, \qquad (3.13)$$
applied here with $\mathcal U$ and $\mathcal V$ built from the blocks containing $K_s$ and $P^{-1}$. Substituting (3.13) into (3.12) and successively applying Schur's lemma yields the LMI condition (3.8).
This concludes the proof.
W
The LMIs (3.8) are conservative conditions, and there is only a small probability of finding a feasible solution that satisfies the stabilization problem with a pre-selected scalar value. Better results could be obtained if this scalar were a decision variable. Besides, the satisfaction of (3.8) only guarantees a necessary condition; the sufficient condition (3.11) is satisfied if the derivative of the Lyapunov function (3.6) along the trajectories of the closed-loop system is negative. Further analysis will be delivered in section 3.1.2.
Remark 3.1.1. The norm-bounded conditions are given in Lemma 3.1.1, and the observer-based control stabilization had been addressed in (Bui [START_REF] Bui Tuan | Robust Observer-Based Control for TS Fuzzy Models Application to Vehicle Lateral Dynamics[END_REF], but the bounds on the control input had not been properly treated.
Observer-based control stabilization
In consideration of the effect of external disturbances, the robustness and performance requirements of system (3.1) consist in finding the L2-gain scheduled controller $K(\rho)$ and observer $L(\rho)$ given in (3.4) that guarantee a stabilization condition with a minimized disturbance rejection level $\gamma > 0$ across the frequency domain, as follows:
$$\|z(t)\|_{\mathcal{L}_2}^{2} \;\le\; \gamma^{2}\,\|w(t)\|_{\mathcal{L}_2}^{2}. \qquad (3.14)$$
Using the quadratic Lyapunov function (3.6) for the stabilization analysis associated with this ℋ∞ performance leads to the following result.
Theorem 3.1.1. For all $\rho(t)\in\mathcal{U}_\rho$, suppose there exist decision matrices and scalars $\epsilon_i\in\mathbb{R}$, $i\in\{1,2,4\}$, such that the PLMI
$$\Xi(\rho)\;\prec\;0,\qquad\forall\,\rho(t)\in\mathcal{U}_\rho, \qquad (3.15)$$
is satisfied. Then the closed-loop system is stabilized with the ℋ∞ level $\gamma$, and the scheduled controller and observer gains are recovered from the decision variables. For the sake of simplicity, the time-varying argument "$t$" and the parameter-dependent argument "$(\rho)$" are omitted in the next inequalities. Taking the derivative of the Lyapunov function (3.6) along the closed-loop trajectories leads to condition (3.15).
The cone complementarity linearization (CCL) method [START_REF] El Ghaoui | A cone complementarity linearization algorithm for static output-feedback and related problems[END_REF] has been employed to handle the concave problems involved in the SOF or observer-based controller designs. Nonetheless, this iterative optimization algorithm delivers positive results only if the constraints are not too complex; otherwise, there is no guarantee for larger-scale stability conditions.
Concave Nonlinearity -Cone Complementarity Linearization
Nonconvex or quasi-convex problems related to nonlinear matrix structures are frequently encountered in stabilization conditions derived via LMI synthesis. These problems are not easy to handle directly and cannot be solved successfully by standard techniques (convex programming). A relaxation method reformulated as a global optimization problem, the cone complementarity linearization algorithm [START_REF] El Ghaoui | A cone complementarity linearization algorithm for static output-feedback and related problems[END_REF], provides a useful tool that consists in linearizing a nonconvex problem into a canonical form: minimizing a linear function over a difference of two convex sets. This method is commonly found in robust control system designs involving SOF synthesis or parameter uncertainty, such as [START_REF] Cao | Static output feedback stabilization: An ILMI approach[END_REF][START_REF] Moon | Delay-dependent robust stabilization of uncertain state-delayed systems[END_REF][START_REF] Sun | Delay-dependent stability and stabilization of neutral time-delay systems[END_REF]. Most of them can practically solve only problem instances of very limited size, as would be expected from the NP-hardness of these problems. For a more exhaustive and comprehensive analysis of this issue and of other global optimization approaches, the reader should refer to the monographs [START_REF] Konno | Optimization on Low Rank Nonconvex Structures[END_REF][START_REF] Tuy | Convex Analysis and Global Optimization[END_REF].
On the other hand, Young's inequality plays an essential role in observer-based controller analysis methods for nonlinear systems (Zemouche et al., 2017;Zemouche & Boutayeb, 2013), in which the scalars are mostly pre-selected or gridded within reasonable intervals. But it is tricky to pick appropriate ranges for these scalars so as to deliver good results, which may lead to conservative design conditions.
Iterative algorithm
We now discuss a concave problem encountered in the robust stabilizing conditions of the observer-based control analysis (3.15), related to the bilinear forms $\epsilon X^{-1}$ and $\epsilon^{-1} X$. Introducing the new variables $U \approx X^{-1}$ and $S$ for the scalar couplings, the nonconvex couplings are relaxed into cone constraints of the form
$$\begin{bmatrix} U & I\\ I & X\end{bmatrix}\succeq 0,\qquad\dots \qquad (3.22)$$
As discussed in Appendix A.4, the role of the slack variables is to tighten the gaps of these inequalities, and they are essential to the relaxation of Young's inequality. Then, an iterative algorithm is provided to seek the globally optimal value of the stabilization condition for the substitutions shown in (Bui [START_REF] Bui Tuan | Robust TS-Fuzzy observer-based control for Quadruple-Tank system[END_REF][START_REF] Bui Tuan | Robust Observer-Based Control for TS Fuzzy Models Application to Vehicle Lateral Dynamics[END_REF].
Given a new scalar, we now analyze this problem by the approach of (Bui [START_REF] Bui Tuan | Robust TS-Fuzzy observer-based control for Quadruple-Tank system[END_REF]; the nonconvex problems (3.22) are then handled by the following iteration.
Step 1: Find an initial solution satisfying the constraints (3.22)-(3.25).
Step 2: Assign the above solution as the initial set $(X^{0},U^{0},S^{0},\bar U^{0},\bar S^{0},\dots)$, then set $k=0$.
Step 3: Find a new solution at the $k$-th iteration by solving the LMI problem:
Minimize $\operatorname{Trace}\big(J_{1,k} + J_{2,k} + 0.25\,J_{3,k}\big)$ (3.29) subject to (3.22)-(3.25),
where $J_{1,k}$, $J_{2,k}$ and $J_{3,k}$ are the linearized trace couplings built from the previous iterate, i.e., sums of products of the current decision variables with the values $X_k$, $U_k$, $S_k$ (and their counterparts) obtained at the previous step.
Set the optimal solution of (3.29) at the $k$-th iteration as the new iterate.
Step 4: Fix a positive scalar $\kappa$ and a sufficiently small tolerance; if the stopping conditions (3.30) are satisfied, stop, otherwise set $k \leftarrow k+1$ and return to Step 3.
By seeking the new variables $(X,U,\bar U,S,\bar S,\epsilon,\dots)$ at each loop such that the conditions (3.22)-(3.25) hold, a better value is achieved by increasing (or decreasing) their value each time the conditions (3.30) are satisfied. But this does not guarantee the accurate convergence of the solution. So, we use this method only to find the epsilon coefficients, and then deploy a conjoint algorithm to efficiently achieve a better local optimization.
Algorithm 3.2. Enhanced CCL & Local Optimization
Step 1: Purposely choose initial values $(\epsilon, K)$ such that the conditions (3.22)-(3.25) are feasible.
Step 2: Run steps 2 through 4 of Algorithm 3.1 to find a solution $\epsilon_k$ that satisfies conditions (3.30), then set $\epsilon = \epsilon_k/2$.
Step 3: Optimize the performance criterion under the design stabilization LMI conditions.
The proposed iterative algorithm is able to converge directly to a better local region for the design condition, e.g., a better performance attenuation. It should be recalled that the above CCL algorithms have been improved to better adapt to the concave structure and to converge faster to the optimal solution. As illustrated in the example section, the varying scalars effectively reduce the conservatism of the stabilization conditions, for which some existing approaches fail to find a feasible solution.
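For reference, a minimal sketch of the canonical CCL iteration of [START_REF] El Ghaoui | A cone complementarity linearization algorithm for static output-feedback and related problems[END_REF] is given below for the prototype coupling X ≈ Y⁻¹; the dimensions, tolerance and initialization are arbitrary, and the problem-specific LMIs, extra variables and scaling steps of Algorithms 3.1-3.2 are not included.

```matlab
% Canonical cone complementarity linearization (CCL) sketch:
% seek X, Y satisfying problem LMIs together with X*Y = I, relaxed as
% [X I; I Y] >= 0 and trace(X*Y) -> n, linearized at each iteration.
n   = 2;
tol = 1e-4;  max_iter = 50;

X = sdpvar(n, n, 'symmetric');
Y = sdpvar(n, n, 'symmetric');
Fcone = [ [X, eye(n); eye(n), Y] >= 0 ];      % cone constraint replacing X = inv(Y)
% ... problem-specific stabilization LMIs in X, Y would be appended here.

% Initialization: any feasible point of the convex relaxation.
optimize(Fcone, trace(X) + trace(Y), sdpsettings('verbose', 0));
Xk = value(X);  Yk = value(Y);

for it = 1:max_iter
    % Linearized objective trace(Xk*Y + Yk*X): non-increasing trace(X*Y).
    optimize(Fcone, trace(Xk*Y + Yk*X), sdpsettings('verbose', 0));
    Xk = value(X);  Yk = value(Y);
    if abs(trace(Xk*Yk) - n) < tol            % X ~ inv(Y) achieved
        break
    end
end
```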
Reduction to Finite-Dimensional Problems
The purpose of this section is to present the scaling-parameter method, which can be combined with the optimization algorithms of the previous section to increase the performance of the control system. As discussed in Chapter 2, the PLMI relaxation methods can be straightforwardly applied to the stabilization condition (3.15) of Theorem 3.1.1. Hereafter, the design conditions for the saturated LPV system in the next chapters will only be presented in the parametric formulation.
It should be recalled that the proposed stabilizing conditions are expressed as parameter-dependent matrix inequalities, which are infinite-dimensional problems characterized by the infinite number of points in the range of the parameters. The relaxation PLMI methods generally reformulate the design condition into finitely many LMIs. One of the widely known relaxation methods is the finite-dimensional meshing (gridding) technique, or the affine parameter dependence distributed over each parameter subspace. The limitation of these methods is how to approximate the behavior of the parameters over their operating range, including the critical points, with the smallest number of discrete points.
Besides, the parameterized LMI conditions can be represented as multi-convexities by polytopic or T-S fuzzy representations or by the tensor-product transformation, which shows efficiency in reducing the numerical computation, the number of iterations, etc. An alternative approach uses a polynomial expression converted into an SoS problem, which is treated effectively by the SoS toolbox. The semi-definite programming (SDP) problems derived from the polynomial matrix and SoS constraints can be solved by interior-point techniques. However, the computational time and numerical resources reserved for this method are enormous; this is a trade-off between conservatism and reasonable computational effort.
In the first part, the piecewise-affine parameter method, the sum of squares, and the convex combination (fuzzification) are manipulated to relax the parametric LMIs.
Finite Discretization of Parametrized LMIs
The first discussion concerns the discretization of parametrized LMIs into finitely many subspaces, over a defined parameter set, without knowledge of the density of the parameters. It should be mentioned that the parameter-dependent conditions are converted effortlessly into multiple LMIs, but there is a probability of missing essential information (critical points). Precisely, the number of gridding points must be enormous to cover exactly the behavior of the parameters, which entails an exponentially increasing number of conditions to solve. Alternatively, the piecewise affine parameter-dependent approach introduced in the works [START_REF] Apkarian | Robust control via concave minimization local and global algorithms[END_REF][START_REF] Lim | Parameter-Varying Systems[END_REF][START_REF] Tuan | Relaxations of parameterized LMIs with control applications[END_REF] assigns continuous subspace domains.
Let us now introduce a simplification of the piecewise switching-dependent functional method. Given the set of parameters $\rho(t)\in\mathcal{U}_\rho$, the parameter domain is distributed into $m$ subspace domains, $\mathcal{U}_\rho=\bigcup_{k=1}^{m}\mathcal{U}_k$, and the parametrized LMIs are imposed on each subdomain, as sketched below.
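A hedged sketch of this subdomain idea is given below for an assumed affine A(ρ): on each subinterval, a local quadratic Lyapunov matrix is certified at the two endpoints (which suffices because the resulting expression is affine in ρ); the boundary and continuity conditions required by the full piecewise method are omitted.

```matlab
% Piecewise partition sketch: split [rho_min, rho_max] into m subintervals
% and certify a local quadratic Lyapunov matrix on each one.
A0 = [0 1; -2 -1];  A1 = [0 0; -1 0];          % assumed affine data A(rho) = A0 + rho*A1
rho_edges = linspace(-1, 1, 4);                % m = 3 subintervals
n = 2;  epsi = 1e-6;

for k = 1:numel(rho_edges)-1
    Pk = sdpvar(n, n, 'symmetric');
    F  = [Pk >= epsi*eye(n)];
    for rho = [rho_edges(k), rho_edges(k+1)]   % endpoints suffice: affine in rho
        Ar = A0 + rho*A1;
        F  = [F, Ar'*Pk + Pk*Ar <= -epsi*eye(n)];
    end
    sol = optimize(F, [], sdpsettings('verbose', 0));
    fprintf('Subinterval %d feasible: %d\n', k, sol.problem == 0);
end
% Boundary/continuity conditions between neighboring P_k are omitted here.
```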
Polynomial Parameter-Dependent LMIs via Sum-of-Squares
As introduced in Chapter 2, a polynomial parameter-dependent matrix inequality can be addressed by the SoS approach or by the slack-variable (SV) approach. The former decomposes the sum-of-squares polynomials and then converts them through a convex coordinate transformation (related to the coefficients of the polynomials over the defined parameter domain). This parametrized relaxation converts the parameter-dependent matrix inequalities into SoS expressions, which means positive definiteness (guaranteed up to a minor deviation). As mentioned by [START_REF] Prajna | Nonlinear control synthesis by sum of squares optimization: A Lyapunov-based approach[END_REF], the scalar variables defined by a set of polynomial inequalities (vars and decvars) concern the variation of the S-procedure constraints, where scalars would otherwise be pre-selected. In such an approach, the parameter values are not constant, giving more relaxation in the stability conditions.
The sum-of-squares decomposition has limitations in experimental applications: even for a simple polynomial gain-scheduling structure, e.g., $X(\rho) = X_0 + X_1\rho + X_2\rho^{2}$, the feasible solution of the stabilization conditions results in higher orders of the polynomials. For example, a structured controller is classically decomposed as $K(\rho) = Y(\rho)X(\rho)^{-1}$, which results in a complicated fractional form in the monomials. This method is evidently an illustration of the trade-off between computational burden and conservativeness.
SV-LMI-based control design
The slack-variable method has shown its convenience for control synthesis and its applicability to the analysis of parametric polynomial-dependent matrix structures. This relaxation method is based on a generalization of Finsler's lemma characterized via quadratic relations, the generalized S-Lemma (so-called S-procedure). In this respect, a comparison between the SoS formulation and the S-variable approach has been delivered in (Sato & Peaucelle, 2007a[START_REF] Sato | Robust stability/performance analysis for uncertain linear systems via multiple slack variable approach: Polynomial LTIPD systems[END_REF]. The S-variable approach provides better results and is close to the theoretical technique (errors $e_1$, $e_2$). The interesting point is that the S-variable solved at the boundaries (two points) also gives a positive result, where the error $e_1$ approximates the error $e_2$ obtained by a 19-point gridding over the range [6, 12]. When solving simple parameter-dependency conditions, the S-variable gives better convergence results than the SoS decomposition. However, for large-scale conditions, the SoS toolbox handles the PLMI conditions more delicately, with fewer additional slack variables.
Besides, mathematical programming software such as MATLAB® effortlessly returns the inverted matrix as fractions of the parameters with large polynomial exponents, but the possibility of implementing it in practice is questionable. Another, more conservative approach delivers the LMI conditions directly through the linear combination of the vertices of the parametric convex domain. The two well-known methods are the T-S fuzzy and polytopic models. There is no difference in the representations of the two systems, but there is a distinction in the stabilizing control synthesis, namely the PDC scheme and polytopic gain-scheduling, respectively.
Parameter-Dependent LMIs via Convex Combination
Given the compact parameter set $\rho(t)\in\mathcal{U}_\rho$, there exists a basis-conversion (bilinear, linearity-preserving) mapping from the parameter-dependent functions
$$\mathcal{F}:\;\big\{P_i(\rho(t))\;\big|\;\rho(t)\in\mathcal{U}_\rho,\;i = 1,\dots,N_\rho\big\}$$
to the convex-coordinate functions
$$\mathcal{Q}:\;\big\{Q_i(h(t))\;\big|\;\rho(t)\in\mathcal{U}_\rho,\;i = 1,\dots,N_\rho\big\}. \qquad (3.51)$$
The conversion of the parametric coordinate system to the convex coordinate system
$$\rho_i(t) = \sum_{j=1}^{N_l} h_j(t)\,\rho_i^{(j)},\qquad h_j(t)\ge 0,\quad \sum_{j=1}^{N_l}h_j(t) = 1,\qquad j = 1,\dots,N_l, \qquad (3.52)$$
has been discussed in Chapter 2. However, the reverse transformation is rarely mentioned in the literature. Such an algebraic transformation method is presented in Appendix A.1.4.1, which allows for the generalization of the coordinate expression of the multi-convex system. In the next section, a quadratic Lyapunov function is considered for the stability analysis of a T-S fuzzy system.
T-S Fuzzy Controller Stabilization Construction
The fuzzification and defuzzification are applied to convert the parameter-dependent conditions into linear combination forms. Let us introduce a convex combination of the affine system (3.5) and a feedback parallel distributed compensation (PDC) controller using the rules (3.52), as follows:
The closed-loop augmented dynamics in the state and estimation-error coordinates take the double convex-sum form
$$
\begin{bmatrix}\dot x(t)\\ \dot e(t)\end{bmatrix}
= \sum_{i=1}^{N_l}\sum_{j=1}^{N_l} h_i(t)\,h_j(t)
\begin{bmatrix} A_i + B_iK_j & -B_iK_j\\ 0 & A_i - L_iC_j \end{bmatrix}
\begin{bmatrix} x(t)\\ e(t)\end{bmatrix}, \qquad (3.53)
$$
and the corresponding vertex LMI conditions (3.55) follow from the parameter-dependent condition of Theorem 3.1.1 with the structured variables $\bar X_i = \operatorname{diag}(X_i, I, I, I)$ and the blocks $N_{B,i}$, $Y_i$.
Proof.
The demonstration is directly inferred from the parameter-dependent condition in Theorem 3.1.1 with the use of the linear convex combinations (3.52). We now look for a stabilizing feedback control such that
$$\sum_{i=1}^{N_l}\sum_{j=1}^{N_l} h_i(t)\,h_j(t)\,\Phi_{ij}\;\prec\;0,\qquad i,j = 1,\dots,N_l. \qquad (3.56)$$
Then, following the property of convex combinations, if the condition at each vertex is satisfied, condition (3.56) holds. Next, applying the relaxation of the stabilization conditions of [START_REF] Tanaka | Fuzzy regulators and fuzzy observers: relaxed stability conditions and LMI-based designs[END_REF] yields (3.55).
W
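For concreteness, a simplified state-feedback PDC design with a common quadratic Lyapunov matrix is sketched below in YALMIP; the two vertex models are illustrative assumptions, and the conditions are the classical relaxed vertex LMIs rather than the observer-based conditions (3.55) of the theorem above.

```matlab
% Simplified state-feedback PDC design with a common quadratic Lyapunov
% matrix P = inv(X). Vertex data below are illustrative (Nl = 2 rules).
A{1} = [0 1; -1 -1];   B{1} = [0; 1];
A{2} = [0 1; -3 -1];   B{2} = [0; 2];
Nl = 2;  n = 2;  epsi = 1e-6;

X = sdpvar(n, n, 'symmetric');                   % X = inv(P) > 0
M = cell(1, Nl);
for j = 1:Nl, M{j} = sdpvar(1, n); end           % M_j = K_j * X

F = [X >= epsi*eye(n)];
for i = 1:Nl
    % Vertex condition for A_cl,ii = A_i + B_i*K_i
    Gii = A{i}*X + X*A{i}' + B{i}*M{i} + M{i}'*B{i}';
    F = [F, Gii <= -epsi*eye(n)];
    for j = i+1:Nl
        % Relaxed cross conditions for the pairs (i, j), i < j
        Gij = A{i}*X + X*A{i}' + A{j}*X + X*A{j}' ...
            + B{i}*M{j} + M{j}'*B{i}' + B{j}*M{i} + M{i}'*B{j}';
        F = [F, Gij <= 0];
    end
end

optimize(F, [], sdpsettings('solver', 'sedumi', 'verbose', 0));
K = cell(1, Nl);
for j = 1:Nl, K{j} = value(M{j})/value(X); end   % PDC gains: u = sum_j h_j K_j x
```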
The stabilizability of the PDC controller and relaxed LMI conditions can be found in [START_REF] Tanaka | Fuzzy Control Systems Design and Analysis: A Linear Matrix Inequality Approach[END_REF]Tuan et al., 2001) for the quadratic Lyapunov function, and in [START_REF] Tanaka | A multiple Lyapunov function approach to stabilization of fuzzy control systems[END_REF] for the fuzzy Lyapunov function. It should be noted that fuzzy Lyapunov functions (FLF) reduce the conservatism of the design conditions, and that the derivative of the membership functions, which relates to
$$\dot X(h) = \sum_{k=1}^{N_l}\dot h_k(t)\,X_k,$$
can be expanded with the method of [START_REF] Sala | Polynomial fuzzy models for nonlinear control: A taylor series approach[END_REF]. This issue is encountered in the next chapter. Now, let us impose the constraints for the PDC control law of system (3.53) under conditions (3.55) to demonstrate the effectiveness of the relaxation method (the CCL algorithm) on a T-S fuzzy system, and compare it to the generalized sector condition. In the result section, we show a comparison of the pre-selected epsilon method [START_REF] Benzaouia | Advanced Takagi-Sugeno Fuzzy Systems: Delay and Saturation[END_REF][START_REF] Dahmani | Observer-Based Robust Control of Vehicle Dynamics for Rollover Mitigation in Critical Situations[END_REF]Dahmani et al., , 2015;;[START_REF] El Hajjaji | Observer-based robust fuzzy control for vehicle lateral dynamics[END_REF][START_REF] Kheloufi | On LMI conditions to design observer-based controllers for linear systems with parameter uncertainties[END_REF]Zemouche et al., 2017) with the CCL iteration algorithms (combined with scaling parameters).
Problem 3.2.1. For the control synthesis based on the convex combination framework:
What is the essential distinction between the stabilization condition structures analyzed for a T-S fuzzy system and for a polytopic system? (In other words, what is the difference between a PDC controller structure and a polytopic gain-scheduling controller in control system design?)
Let's consider a state feedback controller construction
$$u(t)=\sum_{i=1}^{N_l}\zeta_i(t)\,K_i\,x(t),\qquad \sum_{i=1}^{N_l}\zeta_i(t)=1,\quad \zeta_i(t)\geq 0,\ i=1,\dots,N_l. \tag{3.59}$$
At first glance, there is no difference in the control analysis of the two approaches. Suppose the parameter θ(t) belongs to U_θ; then the rules applied to the local systems give:
T-S Fuzzy:
$$\dot x(t)=\sum_{i=1}^{N_l}\sum_{j=1}^{N_l}\zeta_i(t)\zeta_j(t)\big(A_i+B_iK_j\big)x(t),\qquad A_{cl,ij}=A_i+B_iK_j.$$
In the conventional approaches using CCL, the scalars $\epsilon_i\in\mathbb{R}$ are often pre-selected (simplified). In this section, an enhancement of the CCL condition associated with a set of uncertain parameters is developed via the linear combinations.
In robust control theory, the scaling set associated with the uncertainty structure is considered to be compatible with the LPV control synthesis. In this framework, the scaled small-gain theorem, or structured robust stability (Apkarian & Adams; Apkarian & Gahinet, 1995), is probably the best-known result; it delivers a less conservative stabilizing condition in which the scaling matrices depend on the uncertain parameters. We apply the scaled parameters to conditions (3.18).
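As a reminder (a standard bound, recalled here for context), the Young-type inequality underlying these conditions reads, for any scalar ε > 0 or any matrix scaling S ≻ 0,
$$X^{T}Y+Y^{T}X\;\preceq\;\varepsilon\,X^{T}X+\varepsilon^{-1}\,Y^{T}Y,\qquad
X^{T}Y+Y^{T}X\;\preceq\;X^{T}SX+Y^{T}S^{-1}Y,$$
so the tightness of the resulting LMI depends directly on the chosen ε (or S). This is precisely why treating these scalings as decision variables within the CCL iterations, rather than pre-selecting them, reduces conservatism.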
Example
In this section, numerical results are presented to demonstrate the performance and robustness improvements obtained with the CCL method; a quadratic Lyapunov candidate is applied to the most conservative condition, namely observer-based control stabilization.
We employ the observer-based controller design to stabilize the quadruple-tank process system and the vehicle lateral dynamics system under the influence of external disturbance and uncertain parameters. The PLMI stabilization conditions of Theorem 3.1.1 (without the saturation constraint) are relaxed to the stabilizing LMIs of Theorem 3.2.1. First, we discuss the results of (Bui Tuan et al.), which demonstrate the effectiveness of the proposed method when Young's inequality is combined with the global optimization CCL algorithm.
In the following analysis, the results of (Bui Tuan et al., 2021) are reproduced and compared with the other results. At the same time, we deploy the stabilization condition of Theorem 3.2.1 associated with the norm-bounded inputs of Lemma 3.2.2 for system (3.70), compared with the stabilization condition of the feedback control. By pre-selecting the initial conditions and using Algorithm 3.1 (CCL1) and Algorithm 3.2 (CCL2) to minimize the performance criterion, the comparison is given.
Quadruple-tank process system
Let us consider the quadruple-tank process system of (Johansson). The fuzzy controller gains, given in Appendix E.1, are obtained by using the toolbox YALMIP (Löfberg, 2004) with the solver SeDuMi (Sturm, 1999).
The system data correspond to the minimum-phase (MP) setting of (Johansson) given in (3.67). The minimization of the disturbance effects on the system is enhanced by using the CCL algorithm. As can be seen in Figure 3-3 and Figure 3-4 (left), the estimation error of the liquid level in tanks 1 and 2 is smaller than 0.6 mm, while the corresponding error for tank 3 is less than 3 mm and for tank 4 less than 5 mm. The stabilized control signal of the observer-based controller designed for the quadruple-tank process system is given in Figure 3-4 (right). With the voltage of each pump limited to 12 V, the maximum flow of each pump is 40 ml·s⁻¹, corresponding to 2.4 l·min⁻¹. The illustrative simulation results show the high performance of the proposed design technique.
The norm-bounded constraint (Boyd et al., 1994) increases the conservatism of the stabilization condition, rendering the results obtained from the pre-selected scalar methods unsatisfactory. The scaling-scalar method does indeed relax the proposed conditions. However, the above results do not yet expose the effectiveness of the scaling-parameter transformation combined with the CCL algorithm. The optimal disturbance rejection for each design case is therefore discussed below, where the L2 norm-bounded condition is compared with other works.
Vehicle chassis stabilization system
Let us recall the example discussed in Section 2.1.6 (Dahmani et al., 2014, 2015; El Hajjaji et al.; Kheloufi et al., 2013).
Cone Complementarity Linearization and Scaling-Parameters
As can be seen in Table 3.1, if the scalars ε_i are pre-selected, the stabilization inequalities become more conservative, causing a decline in system performance (a larger attenuation value, or even infeasible conditions). When the variables ε_i are treated as decision variables, they can be viewed as appropriately rotating the self-selected barriers so as to reduce the inequality gap. The following observations can be made:
§ On one hand, the performance degradation observed in Table 3.1 results from applying a quadratic Lyapunov function to enforce the stabilizing condition of the parameter-dependent system. On the other hand, a diagonal slack-variable matrix tightens the gap of Young's inequality and adapts to the structure parameterization, from the simple form ε_i to the parameter-dependent form ε_i(θ) and the affine scaling form. The slack variables reduce conservatism and reach a better optimal performance level.
§ Algorithm CCL2 searches for a better local minimum and is faster than the iterative global optimization of Algorithm CCL1. However, CCL2 returns positive results only when initialized with the parameter solutions inherited from CCL1.
§ The reduced (simplified) conditions, denoted Redn, show better results thanks to their more compact and simpler conditional structure. However, if the slack variables of Young's inequalities reach their optimal values, both approaches (the original and the reduced form) converge approximately to the same optimal solutions (for example, in the case of affine scaling).
§ Finally, less conservative results are observed in the 2nd catalog compared with the 3rd and 4th catalogs of Table 3.1.
In this example, the selection of T-S fuzzy model (3.71), with a distinction between the local linear systems, leads to a disparate outcome in the 2nd catalog versus the 3rd and 4th catalogs of Table 3.1. The quadratic Lyapunov function used for the stabilization analysis of the LPV system leads to conservatism: a randomly chosen set of scalars ε_i, as in (Dahmani et al., 2014, 2015; El Hajjaji et al.; Kheloufi et al.; Zemouche et al., 2017), could render the conditions unsolvable.
Consider, for example, the 3rd catalog of Table 3.1. It may not be an exaggeration to say that no manipulation (evenly spaced gridding, fine selection, etc.) can select this set of parameters so as to reach the optimal values. Most of the works involving parametric uncertainty pre-select these scalars randomly; when dealing with numerical example (3.71), it is extremely hard to pick a set of ε_ij yielding a feasible solution. This authenticates the effectiveness of the proposed CCL algorithm.
Conclusions
In this chapter, we analyzed the conservatism of the stabilization conditions delivered for the observer-based feedback system. The design of this controller structure is typically based on the application of Young's inequality combined with a quadratic Lyapunov matrix; both properties result in strict stabilization conditions. Based on the CCL algorithm, an improvement using scaling-dependent sets has enhanced the performance of the closed-loop systems. The numerical simulation results and the optimized disturbance-attenuation values have demonstrated the effectiveness of the proposed method. However, applying a quadratic Lyapunov function to analyze the stabilization of a parameter-dependent system is very conservative. Therefore, the next chapter presents a synthesis method for the parameter-dependent conditions derived from the stabilization analysis of LPV/quasi-LPV systems subject to actuator saturation.
Furthermore, the L2-norm-bounded input does not accurately guarantee the saturation limit of the actuator. Therefore, in the next chapter, the stabilization of the saturated LPV controller is ensured by the generalized sector condition.
Chapter 4. Stabilization Synthesis for LPV/quasi-LPV Systems with Actuators Saturation
In this chapter, the sector bounding condition is predominantly used to enforce the bounds on the saturated control, and the parameter-dependent Lyapunov function is considered in the stabilization analysis to enhance system performance. The following sections are devoted to the synthesis of controllers for saturated LPV systems, including: § The state feedback (Section 4.1), static output feedback (Section 4.2), observer-based feedback (Section 4.3), and dynamic output feedback (Section 4.4) controllers are considered for LPV systems under the influence of disturbance and uncertain parameters. The generalized sector condition (GSC) is employed to ensure the saturation limits, and the saturated gain-scheduling controllers are obtained from the feasible solutions of the parametric LMI stabilization conditions. § The developments of the observer-based control and the dynamic output controller in Sections 4.3 and 4.4 involve non-convex forms in the stabilization matrix inequality and in the saturation condition. The bilinear terms are handled by a less conservative congruence transformation.
It should be noted that, among the feedback controller design architectures, the observer-based controller usually results in the most conservative stabilization condition. For this reason, the scaling sets were considered for this control design strategy in Chapter 3. However, since the problem is still not fully covered, a new design strategy is presented. Finally, the numerical simulation results in Sections 4.1.2 and 4.5 demonstrate the effectiveness of the proposed methods.
State Feedback Stabilization
Sector Nonlinearity Models
As discussed in Section 2.3, suppose there exist decision matrices such that the following matrix inequalities hold; then, for w(t) ≡ 0, the ellipsoid E is a region of asymptotic stability (RAS) for the saturated LPV system (2.111).
$$\begin{bmatrix}\operatorname{sym}(AX+BY) & BT-Z^{T} & B_{w} & (HX+JY)^{T}\\ \star & -2T & 0 & -(JT)^{T}\\ \star & \star & -\gamma I & J_{w}^{T}\\ \star & \star & \star & -\gamma I\end{bmatrix}\prec 0, \tag{4.1}$$
$$\text{(a)}\ \begin{bmatrix}1 & x_{0}^{T}\\ x_{0} & X\end{bmatrix}\succeq 0,\qquad \text{(b)}\ \begin{bmatrix}X & Z_{(s)}^{T}\\ Z_{(s)} & u_{0(s)}^{2}\end{bmatrix}\succeq 0,\quad s=1,2,\dots,m. \tag{4.2}$$
Proof.
On one side, as analyzed in Section 2.3.2, the PLMIs (4.2) are equivalent to:
$$\text{(4.2).a}\iff x_{0}^{T}Px_{0}\leq 1\ \ (\text{i.e., } V(0)\leq 1),\tag{4.3}$$
$$\text{(4.2).b}\iff \mathcal{E}(P,1)\subset\big\{x:\ |G_{(s)}x|\leq u_{0(s)},\ s=1,\dots,m\big\}.$$
However, this sub-optimization is conservative. In particular, the latter condition is expressed by a bilinear form related to the parameter-dependent conditions (4.1)–(4.2). Furthermore, the minimization of this criterion results in a shrinking of the linear operating region (the region of unsaturated control signals), which predominantly affects constraints such as the GSC (discussed in Section 4.1.2.3).
Example
In the first part, the stabilization conditions designed for the saturated system, considering the effects of disturbance and time-varying parameters, reveal the relaxation of the GSC constraints (on single-input systems). Three relaxation methods of the PLMI conditions are applied to Theorem 4.1.1 (i.e., gridding, SoS, and polytope). Then, the multi-model (T-S fuzzy) relaxation is used for comparison with the works in the literature (A. T. Nguyen et al.; Vafamand et al.).
Concerning the stabilization of saturated MIMO systems, some discussion is devoted to the high-gain problem of the bounding sector constraints. A typical example, with a large difference between the vertices of the local linear systems and a disproportion of the actuator limits, aggravates this numerical problem. As a result, a modification is proposed via an LMI formulation to overcome the limitation.
Ellipsoidal regions of stability
The system dynamics are given in (4.11), with the corresponding saturation bound. The LMI conditions are solved with the convex optimizer Mosek (Andersen & Andersen) combined with the numerical toolbox YALMIP (Löfberg, 2004). The second column is obtained by using the SOSTOOLS toolbox (Papachristodoulou et al.) to solve the polynomial parameter matrices according to the SoS decomposition approach with the solver SeDuMi (Sturm, 1999), and the third catalog is provided by deploying a conditional relaxation of Theorem 4.1.1. The optimization results are obtained by solving Theorem 4.1.1 with Problem 4.1.1 through these relaxed PLMI methods, with unit weighting scalars.
It can be seen that the multi-convexity (T-S fuzzy) relaxation returns the best results with the best solve time. Meanwhile, the complexity of the conditional structure of the SoS decomposition method leads to a higher computation time. The finite discretization over the parameter domain provides a satisfactory result within a reasonable time (this approach is the simplest relaxation of the PLMI).
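As an illustration of the gridding relaxation, the sketch below assembles the LMI constraints of a parameter-dependent condition at a finite set of frozen parameter values using YALMIP; the system data A(θ), B and the grid density are placeholder assumptions, not the aircraft or tank examples of this thesis.

% Gridding relaxation sketch: impose the parameter-dependent LMI at frozen samples.
Afun = @(th) [0 1; -1-th -0.5];   % assumed parameter-dependent dynamics
B    = [0; 1];
n = 2;  Np = 21;                  % state dimension and number of grid points
X = sdpvar(n,n,'symmetric');      % common (quadratic) Lyapunov variable
Y = sdpvar(1,n,'full');           % Y = K*X
cons = [X >= 1e-6*eye(n)];
for th = linspace(-1, 1, Np)      % frozen values over the parameter range
    Acl  = Afun(th)*X + B*Y;
    cons = [cons, Acl + Acl' <= -1e-6*eye(n)];
end
optimize(cons, [], sdpsettings('solver','sedumi','verbose',0));
K = value(Y)/value(X);            % state-feedback gain recovered from Y and X

The gridding approach only certifies the condition at the sampled points, which is why it is regarded here as the simplest (and least rigorous) relaxation of the PLMI.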
A largest invariant ellipsoid contained in a polytope (Boyd et al., 1994) is then considered, comparing the estimates associated with X and with (1/4)X.
Nonetheless, condition (4.15) is a very conservative constraint, and it does not always guarantee that all trajectories of the saturated closed-loop system remain bounded in the specified domain. In this case, hard bounds imposed on the states are better fulfilled by barrier functions, see, e.g., (Ngo et al.; Nguyen & Sreenath; Tee et al.).
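For reference, the standard inclusion test from (Boyd et al., 1994) used for this kind of set comparison can be written as follows: the ellipsoid E(P,1) = {x : xᵀPx ≤ 1} is contained in the polyhedron {x : |a_kᵀx| ≤ 1, k = 1,…,q} if and only if
$$a_{k}^{T}P^{-1}a_{k}\leq 1
\quad\Longleftrightarrow\quad
\begin{bmatrix}1 & a_{k}^{T}X\\ X a_{k} & X\end{bmatrix}\succeq 0,\qquad X=P^{-1},\ k=1,\dots,q,$$
which is the convex (LMI) form exploited when the invariant ellipsoid is maximized inside a polytopic state constraint.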
The comparison of conservativeness of conditions
Example 4.1.2: Consider a T-S fuzzy system, modified from (A. T. Nguyen et al., 2018), expressed by the following local linear matrices:
The feasibility of the corresponding conditions guarantees that the ellipsoid E is a region of asymptotic stability for the saturated qLPV system (4.17).
The feasibility regions are compared as follows: the colored dash-dotted trapezoid (A. T. Nguyen et al.), the violet dotted trapezoid (Guerra et al.), the green solid trapezoid (Vafamand et al.), and the small blue circles (Corollary 4.1.1). The feasible region shows the proposed condition to be much less conservative than the other approaches. In the work of (Vafamand et al.), although the authors test on a smaller conditional region (a ∈ [2, 11], b ∈ [6, 16]), it can be objectively recognized that there is a big difference compared with the feasibility region of the saturation constraint of Corollary 4.1.1. Besides, the optimization results of the stabilization conditions of Theorem 4.1.1, solved for system (4.14), are also superior to the results of (A. T. Nguyen et al., 2018). However, the comparison is more meaningful when analyzing the same structure as the DOF control system provided in Section 4.4.
Generalized Sector Condition
As analyzed with respect to the feasibility of the saturated LPV systems, the GSC shows better performance. Let us now study the numerical problem related to the GSC for a two-rule T-S fuzzy system (Example 3.4.2). The peculiarity of this example is that the saturation limits on the control inputs differ greatly. For simplicity of analysis, the parameter-dependent stabilization conditions are considered. At first glance, we can recognize a numerical problem in using the quadratic Lyapunov form to stabilize the uncertain-parameter system under the norm-bounded and generalized sector constraints. However, why are high-gain problems observed in Table 4.2 only with the GSC conditions?
To understand this better, let us first recall some preliminary definitions of the generalized sector condition, stated for the auxiliary control vector (the dead-zone argument). It is worth noting that if conditions (4.21) hold, then condition (4.23) is preserved, but the converse is not true. The problem does not appear when similar limits are enforced on each control input, as considered in the monograph (Tarbouriech et al., 2011). Let us recall the stabilization problems discussed for LTI systems with saturated actuators. The closed-loop poles are pushed away from the imaginary axis, to the left, at the optimized attenuation values. Features such as fast convergence and good response become ineffective when the control signal reaches its limit: the dead-zone nonlinear behavior can cause the system to become unstable or to converge to equilibria other than the origin (Gomes da Silva, 1997; Gomes da Silva & Tarbouriech; Tarbouriech et al., 2011). As a result, a deteriorated control performance (the chattering phenomenon) is observed. As presented in Figure 4-2.b, moving the poles closer to the imaginary axis avoids chattering in the control input, but the response characteristics become slower. This problem is alleviated by placing the poles in a D-stable region (Chilali et al.; Chilali & Gahinet, 1996), a trade-off between the system performance and the region of linear behavior. This approach can be implemented directly in the design LMI conditions by relocating the eigenvalues of the closed-loop matrix into the region D, leading to better control performance and improved results.
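For completeness, the generalized sector condition can be stated in its standard form (Tarbouriech et al., 2011): with the dead-zone nonlinearity ψ(u) = u − sat(u) and an auxiliary matrix G, if x belongs to the polyhedral set S = {x : |(K − G)_{(s)}x| ≤ u_{0(s)}, s = 1,…,m}, then for any diagonal matrix T ≻ 0
$$\psi\big(Kx\big)^{T}T\Big[\psi\big(Kx\big)-Gx\Big]\leq 0 .$$
This inequality is what allows the saturation nonlinearity to be absorbed into the quadratic Lyapunov analysis leading to conditions of the type (4.1)–(4.2).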
Remark 4.1.1. This example was intentionally chosen to expose the limitation of the GSC condition when applied to MIMO systems. It should be recalled that the structure of the GSC-based stabilization conditions using a quadratic Lyapunov function is similar for the polytopic and PDC-fuzzy analyses.
Remark 4.1.2. The state-feedback controller yields a simplified control construction and better characteristics of the closed-loop response, but it may not be implementable in practice for metrological and economic reasons. In this case, the more suitable approach is output feedback.
Static Output Feedback Stabilization
Static output design strategies often suffer from a bilinear structure due to the form of the measurement matrix. In this section, we present a decomposition of the feedback controller gain that allows simple congruence transformations to be used. Besides, a more general problem can be included by considering a parameter-dependent formulation of the measurement matrix. By choosing the set of admissible initial conditions and the polyhedral set (2.92) related to the level sets of the ellipsoidal domains (2.102), the stability analysis of the LPV system (2.113) leads to the following theorem, stated in terms of decision matrices satisfying a set of matrix inequalities.
Proof.
In a repetitive manner, condition (4.36).a leads to the initial condition:
$$x_{0}^{T}P_{1}x_{0}+e_{0}^{T}P_{2}e_{0}\leq 1,\qquad\text{i.e., } V(0)\leq 1. \tag{4.37}$$
The saturation constraints on the additional vector are derived in a similar way. A relaxation method for the stabilization condition using Young's inequality was presented in Chapter 3, but the conservatism of this inequality is also related to the organization of its right-hand side. As observed in the condition, the right-hand side of this inequality is always positive, whereas the left side (the primary solution domain) can be positive or negative, which leads to conservatism in the design conditions.
Generalization of Finsler's Lemma
The conservatism of OBF controllers using Young's inequality will be revealed in the comparison section. To overcome the problem in Theorem 4.3.1, we introduce a new observer-based control structure using a generalization of Finsler's lemma; the stabilization condition is then stated as follows. In a repetitive manner, conditions (4.43) proceed from the initial condition and the saturation constraints on the additional controls, such that:
$$x_{0}^{T}P_{1}x_{0}+e_{0}^{T}P_{2}e_{0}\leq 1, \tag{4.44}$$
$$\begin{bmatrix}x(t)\\ e(t)\end{bmatrix}^{T}P\begin{bmatrix}x(t)\\ e(t)\end{bmatrix}\leq 1\ \Rightarrow\ \Big|\begin{bmatrix}G_{1} & G_{2}\end{bmatrix}_{(s)}\begin{bmatrix}x(t)\\ e(t)\end{bmatrix}\Big|\leq u_{0(s)},\quad s=1,\dots,m. \tag{4.45}$$
where the GSC condition can be recalled as follows:
$$\psi(u)^{T}T\Big[\psi(u)-G_{1}x(t)-G_{2}e(t)\Big]\leq 0,\qquad u=K\hat x(t).$$
Dynamic Output Feedback Stabilization
As described in Section 2.3.2.4, we now address the necessary and sufficient conditions for the stabilization of the closed-loop system (2.121) using the dynamic control system (2.120) combined with the auxiliary controller (2.126). The crux of the parameter-dependent stabilization synthesis lies in the derivatives of the parameters appearing in the design conditions; the approach of (Gahinet & Apkarian; Scherer et al.) is adopted.
Examples
In this section, the presented results relate to the following methodological arguments: § The conservatism and implementation of the designed saturation controllers: the optimal level of disturbance attenuation and the size criterion are addressed for each proposed method (the state feedback and the output feedback strategies). In addition, the responses of the closed-loop systems governed by the designed controllers expose the overall features and characteristics of the saturated control synthesis. § The performance degradation and instability of the nominal (unconstrained) control system when the actuator saturates, compared with the designed saturated systems. § The analysis of the stabilization conditions using the parameter-dependent Lyapunov function, which shows performance superiority over the quadratic Lyapunov condition.
In the first part, the stabilization analysis of the lateral-axis dynamics of the L-1011 aircraft is carried out with the state feedback -SF (Theorem 4.1.1), the static output feedback -SOF (Theorem 4.2.1), the observer-based feedback -OBF (Theorem 4.3.1 & Theorem 4.3.2), and the dynamic output feedback -DOF (Theorem 4.4.1). Based on these optimization results, we choose the appropriate feedback control structures for the subsequent comparison, in which the gain-scheduling controllers stabilizing the closed-loop LPV system demonstrate the effectiveness of the proposed method (they ensure stability for the corresponding designed saturation limits and improve the system performance when the actuator reaches the saturation threshold).
Finally, the parameter-dependent stabilization conditions are deployed for a quadratic Lyapunov stabilization condition and for a non-quadratic Lyapunov function (NQLF). The latter discussion shows that the parameter-dependent Lyapunov function provides less conservative stabilizing conditions than the quadratic formulation.
Saturated Feedback Controller Comparison
Example 4.5.1: Consider the lateral-axis dynamics of the L-1011 aircraft. The state-space representation, associated with the yaw rate, side-slip angle, bank angle, and roll-rate dynamics, is borrowed from (Andry et al.; Galimidi & Barmish) and modified in (A. T. Nguyen et al., 2018). Through the prompt solutions, the comparison of the design stabilization conditions for the feedback controllers is delivered. It is interesting to point out that the stabilization conditions of the saturated gain-scheduling SOF (Theorem 4.2.1) and DOF (Theorem 4.4.1) attain roughly the same disturbance-rejection optimization values as the SF (Theorem 4.1.1). However, the conservativeness of the old-fashioned observer-based controller stabilization conditions is as expected. Briefly, this approach has two critical drawbacks:
1.
Young's inequality is typically employed in the design of observer-based controllers. However, using this bounding technique loses the equivalence between the solution domains of the conditions before and after applying the inequality (explained in more detail in Appendix A.4).
2.
Compared with the other output feedback strategies, the use of a block-diagonal Lyapunov candidate associated with the observer-based controller design is conservative. Besides, a marked improvement in the results of the new OBF design method can be appreciated (compare the 3rd and 4th catalogs). Solving problem (4.13) does not directly yield the optimum of the disturbance-rejection level or of the size criterion (volume maximization, minor-axis maximization, trace minimization, etc.). However, based on the simultaneous minimization of the attenuation level and the size criterion, we obtain comparable optimal values for the state feedback, static output feedback, and dynamic output feedback approaches. On the contrary, the traditional approach for the observer-based control is not nearly as feasible; therefore, this method (Theorem 4.3.1) is excluded from the following comparisons.
The simulations start from the initial condition x₀ = [π/8, π/4, π/3, π/20]ᵀ, which belongs to the estimate of the RAS. The Yalmip LMI toolbox (Löfberg, 2004), integrated with the interior-point optimizer Mosek® (Andersen & Andersen), is used to solve optimization problem (4.13). Then, the parameter-dependent forms of the decision matrices are given corresponding to the design strategies:
Saturated Dynamic Output Feedback Controller
The optimal values in Table 4.3 are achieved by solving the stabilization conditions of Theorem 4.4.1 with problem (4.13). Nonetheless, the high gains cause a computational burden in the numerical simulation (it takes almost 10 minutes to complete a 10-second simulation), so the attenuation level is slightly increased from 0.3753 to 0.4528. The dynamic gains and the compensator gain are expressed in the corresponding equations. Applying the scheduled gains to the LPV system (4.74), we obtain the simulation results. The signals of the dynamic output and state feedback control systems exceed the saturation limits during the effect of the exogenous signal (given in Figure 4-3), while the SOF controller and the new OBF exhibit a decent response corresponding to the design bound. The projection lemma method demonstrates outstanding performance in designing static output controllers, since it directly delivers a convex condition without additional mathematical constraints. When observing the time evolution of the closed-loop states, the SOF system regulates a reasonable control signal conforming to the saturation limit. On the contrary, the convergence rate of the OBF control signal is too low (as can be observed in the third and fourth states). Furthermore, there exists a quasi-convexity in condition (4.47), so this controller is also not deployed in the next comparison.
The robust performance and the stabilizability of the designed controllers are evaluated under the influence of an external disturbance from a non-zero initial condition. The time evolution of the dynamical states is shown in the 1st to 4th frames, together with the time-varying parameter. There is no difference between the states x₁(t), x₂(t) governed by the state feedback, static output feedback, and dynamic output feedback controllers. However, it should be noted that the 3rd and 4th states are measurable, which leads to a distinction in the responses of the respective systems. As seen in Figure 4-4, the disturbance (a sine function with amplitude 1) affecting the system from the 4th to the 6th second is attenuated according to the L2-norm values given in Table 4.3. The control systems show their effectiveness in ensuring the saturation limits and reinforcing performance. Let us now discuss the size criteria associated with the ellipsoidal domain. Volume maximization delivers a linear and convex form in the decision variables, but it can result in a disproportionate scaling of the ellipsoidal region (which could entail an inaccurate estimate of the domain of attraction). Trace minimization and maximization along certain directions provide a multi-directional minimization of the ellipsoid characterizing the stability region. Nonetheless, minor-axis maximization exhibits simplicity and integrability with the stabilizing conditions. It should also be pointed out that the DOF stabilization conditions have a more complex structure. So, let us study the minor-axis maximization problem employed, respectively, for the saturated SF, SOF, and DOF control systems (a summary of these size measures is given below).
The resulting problem is quasi-convex, where N is supposed to be an implicit slack variable that reappears in the optimal conditions. One solution is to preset a full-rank matrix N and then solve the design LMI conditions. Note that in this simulation we consider the case where either M or N is a full-rank constant matrix, and the remaining matrix is parameter-dependent. Similar to the first case, we could optimize the two levels simultaneously; but this time one value is fixed at 10 and the other is varied from 0.01 to 0.1 to optimize the remaining criterion.
Note that the smaller this value, the larger the linear behavior region.
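To make the size measures mentioned above concrete, the usual convex formulations, written in terms of X = P⁻¹ for the ellipsoid E(P,1) = {x : xᵀPx ≤ 1}, are recalled (standard results, see, e.g., Boyd et al., 1994):
$$\text{volume maximization:}\ \max\ \log\det X;\qquad
\text{minor-axis maximization:}\ \max\ \sigma\ \ \text{s.t.}\ X\succeq\sigma I;\qquad
\text{trace criterion:}\ \min\ \operatorname{trace}(P),$$
the last one limiting the sum of the inverse squared semi-axes of the ellipsoid. The minor-axis criterion is the one retained here because it attaches to the stabilization LMIs as a single additional constraint.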
It can be observed in Table 4.3 and Table 4.4 that the static and dynamic output feedback controllers obtained from the conditions of Theorem 4.2.1 and Theorem 4.4.1 show good disturbance-attenuation levels and reasonable estimations of the RAS compared to the state feedback controller. The difference in the optimal values is insignificant, but it should be noted that the output feedback design method allows a practical implementation.
Evaluation of the performance and stability of the saturated system
In Example 4.5.1, only two of the four states of the system dynamics are measurable, but state spaces of dimension larger than two make it difficult to display the ellipsoidal domain.
Using two out of four states in combination with the corresponding basis matrix (A. T. Nguyen et al., 2018) is not a correct way to characterize the estimate of the convergence region. So, let us study the following example to further discuss the domain of attraction of the closed-loop systems governed by the proposed saturated feedback controllers, and the performance deterioration of the unsaturated (nominal) control systems.
Example 4.5.2: Consider the pitch-axis dynamics of an autopilot for a missile model, associated with the angle of attack and the pitch rate, discussed many times in the literature (Biannic et al.; Daafouz et al.; Pellanda et al.; Wu et al.). By gridding N_p points uniformly spaced over the parameter range, we obtain the optimal results given in Table 4.3. As noticed in this table, the SOF controller is exhausted in this example in both optimization categories; by varying its coefficient from 10⁻³ to 10³, the optimal values are obtained. In the first comparison of the state feedback systems, the performance degradation can be discerned on the nominal system (which does not include saturation constraints). An enlargement of the frame from the 4th to the 7th second (Figure 4-5.a&b) clarifies the instability of the states, corresponding to the chattering effect of the controller shown in Figure 4-5. In the opposite direction, the feedback controller designed by Theorem 4.1.1 exhibits good performance and enforces stability when the actuator is saturated. In the second comparison, an instability behavior with increasing oscillation amplitude is observed in the response of the nominal SOF control system, whose control signal is saturated during the whole simulation. Once again, the SOF controller designed by Theorem 4.2.1 demonstrates enhanced performance compared to the nominal system designed without saturation conditions. It should be noted that the phase-plane diagrams of the nominal SOF closed-loop system are ellipses extending to infinity (the instability trajectories are shown in the corresponding phase-plane figures).
These simulation results show that when the control signals are regulated by stabilized control systems designed without saturation conditions (e.g., without the GSC sector bound), performance degradation and system instability are observed once the actuator saturates. On the contrary, from similar initial conditions, the designed controllers ensure stability and enhance the system performance, in accordance with the optimal values reported above.
Quadratic and Non-Quadratic Stabilization
As mentioned in Chapter 3, the stabilization conditions using a quadratic Lyapunov function (QLF) suffer a performance deterioration compared with the parameter-dependent form. In this section, Example 4.5.1 is adopted to deliver the optimization results.
Conclusions
We have developed several feedback control laws to stabilize saturated LPV/qLPV systems. The control system design strategy related to the desired performance ensures that the operation agrees with the actuator capacity. Accordingly, the necessary and sufficient stabilization conditions, via the PLMI formulation, are addressed for the feedback controllers conforming to the design requirements (i.e., the admissible set of initial conditions, the estimated region of the asymptotic convergence domain, the robust stability against uncertain dynamics and time-varying parameters, and the system performance under input disturbance). Besides, the nonlinearities and concave problems involved in the generalized sector condition are converted into tractable conditional forms. The extension of the gain-scheduling technique has been addressed for the saturated LPV systems, and specific criteria are compared based on the optimization results.
Performance degradation and unstable trajectories occurred in the control systems designed without saturation handling when the actuator reached its saturation bounds. The presented results demonstrate that saturation must be taken into account in the control design strategy. In addition, the relaxation of the design conditions is confirmed by the comparison between the quadratic Lyapunov and PDLF stabilization conditions.
It is worth noting that the design stabilization is presented in a generalized form as parameterized linear matrix inequalities (PLMIs). Hence, it can be adapted and developed for each specific strategy or for a suitable relaxation method of the PLMI. Furthermore, it remains an open problem in the LPV control framework.
Chapter 5. Stability Analysis of the LPV/Quasi-LPV Time-Delay Systems
Time-delay phenomena are observed in various engineering systems such as chemical processes, mechanical transmissions, hydraulic transmissions, metallurgical processes, and networked control systems. They are often a source of instability and poor control performance, and the stability and stabilization of time-delay systems (TDS) have therefore received considerable attention in practice and in control theory. Time delays can be sorted into different classes depending on the characteristics or the response behavior of the delay in the system; the framework of time-delay systems is represented by functional differential equations classified into four types: discrete delay, distributed delay, neutral delay, and scale delay (Briat, 2015). The literature on the stability and stabilization of time-delay systems is exhaustive and can be found in the monographs for LTI systems (Dey et al.; Gu et al., 2003), for LPV systems (Briat, 2008, 2015), in time-domain Lyapunov-based stability analyses (Fridman, 2014; Sipahi et al.), and in the eigenvalue-based approach (Michiels & Niculescu).
Frequency-domain approaches dedicated to linear time-invariant (LTI) systems address only a few cases of model transformations or varying delays. The stability of a system is verified from the distribution of the roots of its characteristic equation or from the solutions of a complex Lyapunov matrix function equation; interested readers can consult this issue in more depth in the literature (Gu et al., 2003; Michiels et al., 2005; Michiels & Niculescu; Schoen). However, this approach meets difficulties in analyzing the robust performance of dynamical systems with uncertainty, disturbances, and nonlinearities. In such cases, the time-domain analysis technique is more suitable for dealing with the control challenges of this class of LPV time-delay systems.
During the last decade, significant effort has been devoted to the problem of stability analysis and controller design for time-delay systems. Based on the time-domain approach, Lyapunov stability is deployed primarily through two famous stability theorems, namely (1) the Lyapunov-Krasovskii (LK) theorem and (2) the Lyapunov-Razumikhin (LR) theorem. A generalized analysis of both approaches is outlined in the works of (Briat, 2015; Fridman, 2014; Gu et al., 2003) and the references therein. Generally, the major approaches to the stability analysis of TDS, depending upon the size and bound of the delay, are as follows: § Delay-independent stability conditions § Delay-dependent stability conditions § Time-varying-delay and delay-range stability conditions
Recently, the primary research trend in Lyapunov-Krasovskii functional analysis has focused on seeking less conservative (LC) stability conditions. There are two fundamental conservatism-reduction approaches: (1) the reformulation of the Lyapunov-Krasovskii functional, and (2) the bounding techniques for its derivative.
On the one hand, the tighter bounding integral inequalities include the Wirtinger-based inequality I (WBI) (Seuret & Gouaisbaut, 2012), the Wirtinger-based inequality II (WBII) (M. Park et al., 2015; Seuret & Gouaisbaut, 2013), and the free-matrix-based inequality (FMBII) (Zeng et al.). On the other hand, a suitable structure of the LKF, involving additional integral terms, augmented state vectors, and delay-partitioning/fragmentation approaches, has proved remarkably efficient in reducing the conservativeness of the stability conditions. It should be noted that the more slack-variable matrices are used, the more complicated the analysis of the delay-dependent stability conditions becomes. Accordingly, these approaches trade the relaxation of the stability condition against computational complexity.
In Section 5.1, we discuss the Lyapunov-Krasovskii stability analysis for the time-delay LPV/qLPV system. An essential key point for relaxing the parameter-dependent stability condition is found in appropriate LKFs combined with reasonable bounding inequalities. In Section 5.1.2, convex-function features, such as the auxiliary-function-based method (P. G. Park et al., 2015; Van Hien & Trinh; Zhao et al.) and the fragmented/discretized Lyapunov functional (Y. Chen et al.; Fridman; Gu et al., 2003; Han), are employed to tackle the conservatism of Jensen's inequality.
Besides, a well-known problem in control design is capturing or measuring the exact delay value. The input-output approach proposed in (Gu et al., 2003) provides a methodology in which the delay is treated as an uncertain dynamics of the LTI system; an improvement for the LPV time-delay system is presented in (Briat, 2008, 2015). This approach purposely converts the delay into an uncertain parameter, so it can also be deployed within different LMI-based stability designs (e.g., the LFT framework). Based on the delay approximation, also known as memory-resilient design, a stability condition is derived from a Lyapunov-Krasovskii functional in Section 5.2. Moreover, the auxiliary-function-based method provides a less conservative stability condition than the traditional Jensen-based inequality (Briat, 2015).
Introduction to LPV Time-Delay Systems
The first part of the chapter is reserved for the definitions used in the rest of the thesis, such as the delay space, convex functions, etc. As discussed in the previous chapters, we are interested in the class of parameter-dependent systems represented by LPV or quasi-LPV models (with properties such as time continuity and well-defined derivatives on the specified domain). Second, the time-varying delay is only considered in the case of small and slowly varying values.
Representation of LPV System with Single Delay
Let us introduce a generic LPV time-delay system of the form (5.1). A simple functional V(t) has been deployed in some delay-independent stability conditions; since it does not contain information on the implemented delay, this approach is excessively conservative, especially when the delay is small, see, e.g., (Fridman, 2014; Gu et al., 2003). Subsequently, the delay-dependent condition involved in the stability analysis contains an additional quadratic double-integral term V₃(t), which captures the upper bound of the delay. The derivative of V₃(t) is expressed by
$$\dot V_{3}(t)=h^{2}\,\dot x^{T}(t)R\,\dot x(t)-h\int_{t-h}^{t}\dot x^{T}(s)R\,\dot x(s)\,ds, \tag{5.4}$$
which entails an obstacle with the integral term on the right-hand side. At first glance, it looks like a complicated integral that one should find a way to cancel out rather than tackle directly.
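The classical way around this obstacle is Jensen's integral inequality, recalled here in the form commonly used for this term: for any R ≻ 0,
$$-h\int_{t-h}^{t}\dot x^{T}(s)R\,\dot x(s)\,ds\;\leq\;-\Big(\int_{t-h}^{t}\dot x(s)\,ds\Big)^{T}R\Big(\int_{t-h}^{t}\dot x(s)\,ds\Big)
=-\big(x(t)-x(t-h)\big)^{T}R\big(x(t)-x(t-h)\big),$$
which replaces the integral by a quadratic term in the available signals x(t) and x(t−h), at the price of the conservatism discussed in the following subsections.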
During the first decade of the 21st century, considerable efforts were devoted to the study of delay-dependent stability, such as the model transformation methods (descriptor form, parameterization, cross-term bounding, free-weighting matrices, etc.), the input-output methods (delay operators, small-gain theorem, etc.), and the discretized Lyapunov-Krasovskii functional method. The outline of these methods and their pros and cons are discussed in more detail in Appendix C. Besides, one of the limitations of the delay space H₀ and of LKF (5.3) is that it does not include the lower bound of the delay, which in practice sometimes does not exactly characterize systems with non-zero hysteresis. Delay-range stability has attracted much attention in recent decades, for LTI systems (He et al.; Liu et al.; Park et al.; Tian et al.; X. M. Zhang et al., 2017) and for T-S fuzzy systems (Datta et al.; Dey et al.; Li et al.; Lian et al.; Peng et al.; Tian et al.; Wang & Lam, 2018). However, in this dissertation we do not cover this issue but focus on relaxing the stability conditions and handling the stabilization of the saturated control system.
Recently, convex-function inequalities have significantly contributed to a more accurate estimation of the upper bound of the derivative of the Lyapunov functional. Accompanying these developments is a suitable modification of the LKF candidate, from the old-style form (5.3) to an extended vector with double and triple integrals. Generally, these studies are based on the application of analytic functions (see, e.g., Appendix C.1).
These approaches should meet two requirements: reduce the conservativeness and optimize the number of decision-variable matrices. The following section is devoted to the analysis of recent studies concerned with reducing the conservatism of Jensen's inequality.
Delay-Dependent LKF Stability -Convex function
We now deliver alternative improvements, namely tighter bounding inequalities involved in the LKF stability analysis via LMIs. All the developments and postulates are based on the characteristic analysis of convex functions such as the Jensen, Wirtinger, and Bessel-Legendre inequalities, presented in Appendix C.1.
Jensen's Inequality and Extended Approach
Several famous inequalities derived from the original Jensen inequality are characterized by convex functions or variations of convexity. The definition and some properties of convex functions of higher order, and the definitions of convex-domain properties, are referred to Appendix A.1 or (Boyd & Vandenberghe; Mitrinović et al., 1993). Among these, the integral version of Jensen's inequality has been frequently employed in time-delay control theory over the last decades, and improvements of Jensen's inequality can also be found in the mathematical literature (Fink; Mitrinović et al., 1993). Accordingly, the most effective of these bounding conditions is accompanied by a reasonable choice of expansion vectors with single, double, and triple integrals, respectively. Let us consider the LKF candidates associated with LPV time-delay system (5.1), starting with the single-integral case for the application of the Wirtinger-based inequality (WBII).
The generalizations to the double- and triple-integral cases follow by augmenting V(t) accordingly; inequalities (5.9) and (5.10) are then used to estimate the lower bound of the derivative V̇₄(t). The effectiveness of the convex-function approach is demonstrated in the works of (Datta et al.; P. G. Park et al., 2015; Tian et al.; Zhao et al.), with a significant improvement in the stability conditions. In Section 5.2, the maximum allowable delays are compared with recent works on the stability analysis of LTI and LPV time-varying delay systems.
Discretized Convex Function
Along with the auxiliary-function method, the n-convex discretization method also shows its effectiveness in relaxing the conservatism. Analyses of n-convexity can be found in the monographs (Boyd & Vandenberghe; Fink; Mitrinović et al., 1993). It is interesting to emphasize that the gap of Jensen's inequality decreases significantly with the number of segments; the reduction of the inequality gap is obtained by discretizing into 1, 2, and 3 segments, respectively (see Appendix C.1.2). This shows that the finer the fragmentation, the less conservative the Jensen-based inequality. Using the simple Lyapunov-Krasovskii functional (5.3), one cannot attain the analytical delay limit (even when using free matrices conjointly with the decision matrices). So, let us consider a discretized Lyapunov-Krasovskii functional candidate associated with the LPV time-delay system (5.1), to which the Jensen and Wirtinger bounds are applied. Minimizing the gap in the inequalities makes the condition the least conservative, and some results have been shown to come very close to the analytical prediction (Briat; Gu et al., 2003; Han). An interesting point about discretizing the delay interval is that it suits all the advanced bounding techniques (Section 5.1.2).
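A standard delay-partitioning construction of such a discretized functional, written here for N equal segments of length h/N (a generic form in the spirit of Gu et al., 2003 and Han, not necessarily identical to the one used in this thesis), is
$$V(t)=x^{T}(t)P(\theta)x(t)
+\sum_{i=1}^{N}\int_{t-\frac{ih}{N}}^{\,t-\frac{(i-1)h}{N}}x^{T}(s)Q_{i}x(s)\,ds
+\frac{h}{N}\sum_{i=1}^{N}\int_{-\frac{ih}{N}}^{-\frac{(i-1)h}{N}}\int_{t+\beta}^{t}\dot x^{T}(s)R_{i}\dot x(s)\,ds\,d\beta,$$
where each segment carries its own matrices Q_i, R_i, which is what produces the extra degrees of freedom exploited in the decomposition stability conditions of Section 5.2.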
Delay-Dependent Stability -Input-Output Approach
As discussed, the delay-decomposition approach provides less conservative results for stability analysis and controller design. However, the method is effective for systems with access to the exact knowledge of the delay, which is ideal for numerical computation in practical design. In reality, the identification or estimation of continuous-time delay phenomena is a tough challenge, see, e.g., (Anguelova & Wennberg; Belkoura; Chen et al.; Ren et al.; Zheng et al.). It should be mentioned that almost all the works in the literature analyze the stabilization problem with memoryless (conservative) or exact-memory (non-implementable) controllers. In this case, the uncertain (approximated) delay method discussed in Section 8.6 of (Gu et al., 2003) is more suitable for implementing the control system design strategy. Specifically, the time-varying delay that is not accurately known at the time of analysis and design is treated as a dynamical uncertainty of the nominal system. The input-output approach is very convenient for analyzing stability, based on the reformulation of the original system as a feedback interconnection with additional inputs and outputs of auxiliary systems. With this approach, the stability is formulated in the input-output framework, where the characterizing LMI conditions are obtained by the scaled small-gain theorem (Briat; Hmamed et al.) or by supply functions (Briat, 2015).
The objective of this section is to deliver a delay-dependent stability condition with an uncertain knowledge of the implemented delay.
Temporarily ignoring the effect of the control input, the TDS system (5.1) is transformed into a feedback interconnection with an input-output structure. The equivalence between the scaled small-gain and the Lyapunov-based techniques is discussed in (Boyd et al., 1994; Doyle et al.; Zhang et al.; Zhou et al.) for LTI/LTV systems. Both the constant and the time-varying approximate-delay approaches are detailed further in Appendix C.3. The quadratic supply rate (5.18) will be integrated in the delay-dependent stability analysis, the so-called memory-resilient approach (Section 5.2.4), and developed for the saturated control design of the LPV time-delay system (Chapter 6).
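As background for this reformulation, the two standard facts it relies on are recalled in generic form (they are classical results, not specific to the system treated here). The delay operator has unit L2-induced norm,
$$\big(\mathcal{D}_{h}w\big)(t)=\frac{1}{h}\int_{t-h}^{t}w(s)\,ds,\qquad \big\|\mathcal{D}_{h}\big\|_{\mathcal{L}_{2}\to\mathcal{L}_{2}}\leq 1,$$
and, by the scaled small-gain theorem, the interconnection of a stable operator $G$ with an uncertainty $\Delta$, $\|\Delta\|\leq 1$, remains stable if there exists a scaling $L=L^{T}\succ 0$ commuting with $\Delta$ such that
$$\big\|L^{1/2}\,G\,L^{-1/2}\big\|_{\mathcal{L}_{2}\to\mathcal{L}_{2}}<1 .$$
The supply-rate formulation (5.18) is the dissipativity counterpart of this small-gain condition.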
Stability Analysis of Lyapunov-Krasovskii functional
In this section, the stability of the time-varying-delay LPV/quasi-LPV system is verified using the parameter-dependent Lyapunov-Krasovskii functional (PDLKF) candidates given in (5.3), (5.11), and (5.13). The advanced bounding techniques provide a better relaxation of the stability condition, but at the price of increased conditional complexity; this requires a delicate manipulation to decouple the bilinear components encountered in the control design strategy.
Single Delay-Dependent LKF Stability and associated relaxation
Jensen's Inequality
Based on the above discussion, the PDLKF in Eq. (5.3) is used to deliver a stability condition for system (5.1) combined with the ℋ∞ performance, which leads to the following result.
Lemma 5.2.1. From this point, a linearizing transformation with slack variables, based on a generalization of Finsler's lemma, decouples the decision matrix variables while maintaining the parametric characterization structure (providing flexibility for the LMI condition without additional assumptions). The projection lemma then yields the associated relaxation of the PLMI condition and produces extra degrees of freedom for the designed condition (less conservative). However, the inclusion of condition (5.29) entails unnecessary constraints on the decision matrices, in view of the LMI-based relaxation methods, e.g., the slack-variable method [START_REF] Ebihara | LMI approach to linear positive system analysis and synthesis[END_REF][START_REF] Zope | Delay-Dependent Output Feedback Control of Time-Delay LPV Systems[END_REF].
Auxiliary Function-Based Integral Inequality
In this section, the stability condition is delivered for the LPV time-delay system using an AFBII. As shown in the results of Section 5.2.5, this approach yields a superior improvement of the system performance compared with the WBII and the Jensen inequality. Using the LKF (5.13) for the stability analysis of the dynamical system (5.1), we have the following result. The LPV time-delay system (5.1) is asymptotically stable if the derivative of the LKF (5.13) along the trajectories of the system satisfies:
Decomposition Lyapunov-Krasovskii Functional Stability
In the previous sections, the tighter bounding techniques delivered less conservative conditions with augmented LKFs. So, how could one improve the system performance by using a simple Lyapunov-Krasovskii functional? Necessary and sufficient conditions are derived from the discretized delay method in the works [START_REF] Gu | A further refinement of discretized Lyapunov functional method for the stability of time-delay systems[END_REF][START_REF] Gu | Stability of Time-Delay Systems[END_REF] for LTV systems, and then refined to the LKF decomposition [START_REF] Han | A Delay Decomposition Approach to Stability of Linear Neutral Systems[END_REF].
First, let us recall a discretized Lyapunov-Krasovskii functional associated with the LPV time-delay system (5.1). The sketch of the demonstration follows the lines of Lemma 5.2.4 combined with the use of the Wirtinger-based inequality (5.23) in Lemma 5.2.2. The delay-dependent stability is verified if the positivity conditions on the block matrices built from \(P(\rho)\), the \(Q_i\) and the \(R_i\) (the discretized-LKF positivity conditions) hold for all admissible parameters, and if the derivative condition

\[
\frac{d}{dt}\Big(x^{T}(t)P(\rho(t))x(t)+\sum_{i=1}^{N}\int_{t-ih_N}^{t-(i-1)h_N}x^{T}(s)Q_i\,x(s)\,ds\Big)
+\frac{d}{dt}\Big(\sum_{i=1}^{N}h_N\!\int_{-ih_N}^{-(i-1)h_N}\!\int_{t+\theta}^{t}\dot x^{T}(s)R_i\,\dot x(s)\,ds\,d\theta\Big)\le 0,
\qquad(5.45)
\]

with \(h_N=\bar h/N\),
holds along the trajectories of the LPV system (5.1). The key role of the development lies in the application of the WBII to the second term of the expansion

\[
\frac{d}{dt}\sum_{i=1}^{N}h_N\!\int_{-ih_N}^{-(i-1)h_N}\!\int_{t+\theta}^{t}\dot x^{T}(s)R_i\,\dot x(s)\,ds\,d\theta
=\sum_{i=1}^{N}\Big(h_N^{2}\,\dot x^{T}(t)R_i\,\dot x(t)-h_N\!\int_{t-ih_N}^{t-(i-1)h_N}\dot x^{T}(s)R_i\,\dot x(s)\,ds\Big).
\]

Bounding each integral term by the WBII and combining the result with the ℋ∞ supply rate \(z^{T}(t)z(t)-\gamma^{2}w^{T}(t)w(t)\) leads to a parameter-dependent LMI in the extended state vector, whose blocks involve \(P(\rho)\), the \(Q_i\), the \(R_1,\dots,R_N\) and the disturbance input matrix; its feasibility guarantees \(\dot V(t)+z^{T}(t)z(t)-\gamma^{2}w^{T}(t)w(t)<0\).
Uncertain Delay-Dependent Lyapunov-Krasovskii Functional Stability
As discussed, the obstacle of knowing the exact delay value when implementing the control system design motivates the consideration of an uncertain time-delay system of the form:
\[
\begin{aligned}
\dot x(t) &= A(\rho)x(t) + A_h(\rho)x(t-h(t)) + A_d(\rho)x(t-d(t)) + B_w(\rho)w(t),\\
z(t) &= H(\rho)x(t) + H_h(\rho)x(t-h(t)) + H_d(\rho)x(t-d(t)) + J(\rho)w(t),
\end{aligned}
\qquad (5.50)
\]
with delay features as specified in the previous section. In the absence of exact knowledge of the delay h(t), robust stability is addressed for a control and observation design strategy involving two delays [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF]. Design requirements such as the admissible maximum of the delay value, the permissible estimation margin of the uncertain delay (a bound on the mismatch between d(t) and h(t)), and the optimization of the ℋ∞ performance level may be considered as optimization problems.
A parameter-dependent Lyapunov-Krasovskii functional is associated with system (5.50). By reformulating the integral limits, it is possible to capture the uncertainty variation of the approximate delay instead of only tracking the maximal relation. More details can be found in Section 5.7 of [START_REF] Gu | Stability of Time-Delay Systems[END_REF] and Section 4.7 of [START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF]. Proof. The proof is presented in Appendix D.3.
Memory-Resilient Delay-Dependent Lyapunov-Krasovskii Functional Stability
Now, let us address the uncertain delay as an input disturbance by using the bounded delay operator relation (5.16). Then, the L2 scaled bounded real lemma is applied to ensure robust stability for the uncertain structure (which satisfies the well-connectedness property (5.18)). The interesting point is that the approximation of the exact delay value varies within an uncertain ball, defined by an algebraic inequality bounding the mismatch between d(t) and h(t).
Jensen's Inequality
This methodology is the so-called memory-resilient approach, in which the stability PDLMI conditions are derived from the development of the Lyapunov-Krasovskii functional (5.14).
Lemma 5.2.7. (Briat, 2015a)
Proof. A sketch of the proof is given in Appendix D.4; the full version can be found in the literature (Briat, 2015a).
Wirtinger-Based Inequality
Let us employ the WBII bounding technique (5.6) for the resilient delay-dependent stability analysis of the LPV system (5.50) involving the L2-norm-bounded delay operator (5.16), which results in the following theorem.
Proof.
Taking the derivative of the Lyapunov-Krasovskii functional (5.11) along the trajectories of the LPV time-delay system (5.50) and combining it with the L2-norm performance on the controlled output implies the delay-dependent stability condition

\[
\dot V(t) + z^{T}(t)z(t) - \gamma^{2}w^{T}(t)w(t) < 0. \qquad (5.58)
\]

Applying the SSG lemma and substituting (5.59) into condition (5.58) yields the parameter-dependent LMI condition

\[
\dot V(t) + z^{T}(t)z(t) - \gamma^{2}w^{T}(t)w(t) + \tilde z^{T}(t)L\,\tilde z(t) - \tilde w^{T}(t)L\,\tilde w(t) < 0.
\]

Similarly, rearranging condition (5.64), we obtain stability condition (5.57). ∎
Remark 5.2.5. It should be noted that the satisfaction of the parametric condition (5.61) implies the fulfillment of statements (5.64) and (5.65), but the converse does not hold. By choosing L sufficiently small when the delay-approximation margin tends to zero, the approximate-delay stability condition (5.61) reduces to the PDLMI (5.25). This development therefore provides a more general conditional form for the delay-dependent stability analysis. Besides, the augmented Lyapunov-Krasovskii functionals (discussed in Sections 5.2.1 and 5.2.2) could be considered to further reduce the inequality gap.
Remark 5.2.6. The feasible solutions of inequalities (5.57) depend on the parameter and its variation rate, whereas the slack variable X in condition (5.61) depends only on the parameter. That causes a degradation in the equivalent characterization of the congruence transformation. However, the obtained condition is convenient for the stabilization synthesis. It should be emphasized that the associated relaxations of the delay-dependent stability conditions (5.25) and (5.61) will be thoroughly utilized in the controller design strategy for the saturated LPV time-delay system.
Example
In the first part, two well-known examples from the literature on delay-dependent stability analysis of LTI time-delay systems are used to deliver a concise comparison of the proposed conditions with other works. Then, the proposed PLMI stability conditions are relaxed to multi-convexity forms (linear combination, T-S fuzzy, as discussed in previous chapters) for comparison with the literature on delay-dependent stability analysis of fuzzy systems. First, let us consider the following simplification of the time-delay system (5.1):
Example 5.2.1: [START_REF] Gu | Stability of Time-Delay Systems[END_REF]
Reported maximum allowable upper bounds (MAUB) of the delay and numbers of decision variables for this benchmark:
§ [START_REF] Dey | Improved delay-dependent stabilization of time-delay systems with actuator saturation[END_REF]: MAUB 2.2594, 1.8502, 2.3370; 5n²+2n variables
§ Theorem 1 [START_REF] Seuret | Delay-Dependent Reciprocally Convex Combination Lemma for the Stability Analysis of Systems with a Fast-Varying Delay[END_REF]: MAUB 2.2130; 18.5n²+5.5n variables
§ Theorem 1 [START_REF] Zhao | A new double integral inequality and application to stability test for time-delay systems[END_REF]: MAUB 3.1544, 7.0463; 9.5n²+3.5n variables
§ Theorem 1 [START_REF] Kwon | Improved results on BIBLIOGRAPHY 225 stability of linear systems with time-varying delays via Wirtinger-based integral inequality[END_REF]: MAUB 2.4203, 1.6962; 9n²+3n variables
§ Theorem 1 (T. H. Lee & Park, 2017): MAUB 3.1555, 2.4963; 114n²+18n variables
§ Proposition 1 (X. M. Zhang et al., 2017)
As can be observed from the above table, the delay-dependent reciprocally convex combination methods [START_REF] Seuret | Delay-Dependent Reciprocally Convex Combination Lemma for the Stability Analysis of Systems with a Fast-Varying Delay[END_REF][START_REF] Zeng | Hierarchical stability conditions of systems with time-varying delay[END_REF](C. K. Zhang et al., 2017)[START_REF] Zhang | An improved reciprocally convex inequality and an augmented Lyapunov-Krasovskii functional for stability of linear systems with time-varying delay[END_REF] require a large number of decision variables. However, their effectiveness is not as impressive as that of the auxiliary convex function (Lemma 5.2.3): most of the maximal upper-bound delay values obtained with this lemma show superiority in all categories. So, the extended application of the bounding technique of conditions (5.5)-(5.10) in Lemma 5.2.3 has markedly enhanced the MAUB. Now, the multi-convexity conditional relaxation forms of the PLMIs of Lemma 5.2.1-Lemma 5.2.3 are applied to the quasi-LPV (T-S fuzzy) time-delay system (5.68), where the time-varying delay satisfies h(t) ∈ [0, h̄], the scheduling parameter depends on the state x(t), and the membership functions are:
\[
\theta_1(x_1(t)) = \frac{1}{1+e^{-2x_1(t)}}, \qquad \theta_2(x_1(t)) = 1 - \theta_1(x_1(t)).
\]
Applying LMI relaxation methods similar to those of the previous chapters to verify the stability of system (5.68) yields the MAUB values reported in catalogs 1 and 2 of Table 5.2: 2.4293, 2.0616, 3.7638, 3.0913 with 10n²+4n variables; Theorem 1 [START_REF] Yang | New delay-dependent stability analysis and synthesis of T-S fuzzy systems with time-varying delay[END_REF]: 0.4995, 0.4988 (58n²+4n); Theorem 1 [START_REF] Zeng | Improved delay-dependent stability criteria for T-S fuzzy systems with time-varying delay[END_REF]: 0.7584, 0.7524 (16.5n²+6.5n); Theorem 1 [START_REF] Lian | Further robust stability analysis for uncertain Takagi-Sugeno fuzzy systems with time-varying delay via relaxed integral inequality[END_REF]: 1.3123, 1.2063 (51.5n²+9.5n); Theorem 1 [START_REF] Li | Stability and Stabilization with Additive Freedom for Delayed Takagi-Sugeno Fuzzy Systems by Intermediary-Polynomial-Based Functions[END_REF]: 1.4819. Recently, the stability and stabilization analysis for delayed LPV/quasi-LPV systems has adopted the novel LKF constructions developed for LTI time-delay systems together with the advanced bounding techniques, such as the Wirtinger-based II [START_REF] Zeng | Improved delay-dependent stability criteria for T-S fuzzy systems with time-varying delay[END_REF][START_REF] Zhang | New stability and stabilization conditions for T-S fuzzy systems with time delay[END_REF], the free-matrix-based II [START_REF] Lian | Stability analysis for T-S fuzzy systems with time-varying delay via free-matrix-based integral inequality[END_REF], the auxiliary-function-based II [START_REF] Datta | Improved stabilization criteria for Takagi-Sugeno fuzzy systems with variable delays[END_REF][START_REF] Li | Stability and Stabilization with Additive Freedom for Delayed Takagi-Sugeno Fuzzy Systems by Intermediary-Polynomial-Based Functions[END_REF][START_REF] Tian | Stability Analysis and Generalized Memory Controller Design for Delayed T-S Fuzzy Systems via Flexible Polynomial-Based Functions[END_REF], and the reciprocal convex combination [START_REF] Lian | Stability analysis for T-S fuzzy systems with time-varying delay via free-matrix-based integral inequality[END_REF][START_REF] Lian | Stability and Stabilization of T-S Fuzzy Systems with Time-Varying Delays via Delay-Product-Type Functional Method[END_REF], then combined them with the relaxation methods of the PLMI condition (Wang & Lam, 2018a)[START_REF] Wang | A new approach to stability and stabilization analysis for continuous-time takagi-sugeno fuzzy systems with time delay[END_REF][START_REF] Sala | Stability analysis of LPV systems: Scenario approach[END_REF] to deliver less conservative conditions. In recent years much effort has been devoted to delay-partitioning LKFs and augmented LKFs, and fruitful results have been achieved, see for example [START_REF] Li | Stability and Stabilization with Additive Freedom for Delayed Takagi-Sugeno Fuzzy Systems by Intermediary-Polynomial-Based Functions[END_REF][START_REF] Lian | Stability and Stabilization of T-S Fuzzy Systems with Time-Varying Delays via Delay-Product-Type Functional Method[END_REF][START_REF] Tian | Stability Analysis and Generalized Memory Controller Design for Delayed T-S Fuzzy Systems via Flexible Polynomial-Based Functions[END_REF] and the references therein. In order to further reduce the conservatism of the stability results, the new auxiliary polynomial-based functions (APFs) (S. Y.
Lee et al., 2017;[START_REF] Lee | A novel Lyapunov functional for stability of timevarying delay systems via matrix-refined-function[END_REF], the Intermediary-Polynomial-Based Functions (IPFs) [START_REF] Li | Stability and Stabilization with Additive Freedom for Delayed Takagi-Sugeno Fuzzy Systems by Intermediary-Polynomial-Based Functions[END_REF] and Flexible Polynomial-Based Functions (FPFs) (Y. [START_REF] Tian | Stability Analysis and Generalized Memory Controller Design for Delayed T-S Fuzzy Systems via Flexible Polynomial-Based Functions[END_REF] are studied by introducing a set of orthogonal polynomials. These advanced techniques (i.e., generalized parameter-dependent reciprocally convex inequality) are proposed to better estimate the triple integral inequalities. However, it is possible to realize a significant improvement in system performance when comparing Lemma 5.2.3 with the mentioned works (as seen in the catalogs 1 and 2 of Table 5.2).
It can be objectively acknowledged that the construction of polynomial-based functions aims to address delay-range stability conditions, while the designed stability conditions in Lemma 5.2.2 and Lemma 5.2.3 are quite simple and have lower computational complexity. With this advantage, these conditions can effortlessly adapt to the extension of the stabilized design strategy with saturation constraints, delay approximation, etc. It is worth noting that Lemma 5.2.1-Lemma 5.2.3 have all validated delay-dependent stability for system (5.68) with a MAUB greater than 100 s, corresponding to slowly varying delay cases with |ḣ(t)| ≤ 0.1. Shown in Figure 5-2 is the evolution of the system dynamics with a slowly varying delay function. Currently, the numerical simulation tool Simulink® does not allow integrals over a variable interval, so we reformulate the LKF in terms of the integrands f(x) = xᵀQx and g(ẋ) = ẋᵀRẋ; the approximation of the LKF is then given in the third frame of Figure 5-2. Besides, another popular example that has been used in the last decade to demonstrate the effectiveness of stability conditions for T-S fuzzy systems is given below. The MAUB delay values obtained with Lemma 5.2.1-Lemma 5.2.3 are provided in catalogs 3 and 4 of Table 5.2, respectively. However, the majority of studies using this example deal with the delay-range stability problem, so a further comparison cannot be delivered. It is worth noting that the Wirtinger-based inequality (Lemma 5.2.2) can be considered as a special case of the auxiliary-function-based method (Lemma 5.2.3). The vector expansion of this lemma employs only one single integral, showing an adequate trade-off between the number of variables to be solved (computational complexity) and the maximum value of the upper-bound delay (conservatism). It provides a less conservative stability condition than the traditional Jensen conditions, with an integrable conditional structure (without too much decoupling of the decision matrices from the dynamic system).
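To make the LKF reformulation used for simulation concrete, the following minimal sketch (assuming a uniformly sampled trajectory and placeholder matrices P, Q, R) approximates the terms xᵀPx + ∫ xᵀQx ds + h∫∫ ẋᵀRẋ ds dθ from stored samples with the trapezoidal rule. It illustrates the idea only and is not the exact Simulink implementation behind Figure 5-2.

```python
import numpy as np

def lkf_approx(t_idx, ts, xs, dxs, P, Q, R, h):
    """Numerically approximate V(t) = x'Px + int_{t-h}^{t} x'Qx ds
    + h*int_{-h}^{0} int_{t+th}^{t} dx'R dx ds dth from sampled data."""
    dt = ts[1] - ts[0]
    n_h = int(round(h / dt))            # samples spanning the delay window
    i0 = max(t_idx - n_h, 0)
    x = xs[t_idx]
    v = x @ P @ x
    # single-integral term
    v += np.trapz([xs[k] @ Q @ xs[k] for k in range(i0, t_idx + 1)], dx=dt)
    # double-integral term (outer integral over the starting instant)
    inner = [np.trapz([dxs[k] @ R @ dxs[k] for k in range(j, t_idx + 1)], dx=dt)
             for j in range(i0, t_idx + 1)]
    v += h * np.trapz(inner, dx=dt)
    return v
```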
Furthermore, given the multiple-product complexity of the stability conditions using the LKF formulation (5.13), and in order to be compatible with the control design strategies for LPV time-delay systems subject to actuator saturation, this appropriate LKF takes precedence over pure conservatism reduction. Specifically, the analysis of the approximate delay and of the memoryless gain-scheduled feedback controller with saturation constraints is based on condition (5.25) in Theorem 5.2.1, which exhibits a well-proportioned stabilization condition between conservatism and numerical burden. These issues will be discussed in detail in the next chapter.
Conclusions
In this chapter, the preliminary premises of LPV time-delay systems have been presented. Delay-dependent stability has been addressed in the time domain based on the Lyapunov-Krasovskii functional technique, where the convex properties of the analytical functions are exploited to improve the stability conditions. To each bounding technique (Jensen-based integral inequality, Wirtinger-based integral inequality, auxiliary-function-based integral inequality, etc.) corresponds an appropriate selection of the augmented LKF to achieve the highest efficiency.
These delay-dependent stability conditions have been analyzed for the parameter-dependent system, adopting new LKF structures with double and triple integrals. The parametric LMI stability conditions can be relaxed into LMI conditions and effectively solved by common convex optimization algorithms (barrier function, interior-point method, etc.). Along with the original stabilizing LKF conditions via PLMI, associated relaxations are presented to decouple the multiple products of the decision matrices and the dynamical matrices of the state system. Furthermore, the transformation can adapt to all generalizations of the proposed auxiliary-function condition structures analyzed for delay-dependent stability. A simple linearization method directly delivers a tractable condition that is suitable for the stabilization synthesis. The comparison results illustrate the effectiveness of the designed stability conditions.
Chapter 6. Stabilization Synthesis for the LPV/quasi-LPV
Time-delay Systems with Actuators Saturation
This chapter builds on the treatment of the saturated LPV system in the previous chapters to develop controllers for the LPV time-delay system with constrained actuators. The analysis of the LPV delay control system typically gets entangled in multiple products (for example, the stability conditions in Sections 5.2.1 and 5.2.4 involve the matrix products PA and RA, with P and R being decision variable matrices). Approaches such as the descriptor or free-weighting-matrix methods become even more problematic when saturation constraints are considered in the stabilization condition. Thus, the methodology settles these problems according to the following design strategy:
1 - Develop the stability conditions for the open-loop (unsaturated) system.
2 - Employ the saturation conditions imposed on the controller (generalized sector bounding); the multiple products are then decoupled using Finsler's lemma.
3 - Substitute the variables into the closed-loop expression (using congruence transformations and variable settings to obtain the parameter-dependent LMI condition).
4 - Relax the associated PLMI conditions according to the design requirements; the PLMI conditions are then converted to finite-dimensional LMIs by gridding, convex combination, S-variable methods, etc. (a minimal gridding sketch is given below).
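As a hedged illustration of step 4, the following sketch checks a parameter-dependent Lyapunov inequality on a grid of frozen parameter values with CVXPY. The affine model A(ρ) = A0 + ρA1 and the grid density are illustrative assumptions, not the systems treated in this chapter.

```python
import numpy as np
import cvxpy as cp

# Illustrative affine LPV model A(rho) = A0 + rho*A1, rho in [-1, 1]
A0 = np.array([[-2.0, 1.0], [0.0, -1.5]])
A1 = np.array([[0.3, 0.0], [0.2, 0.1]])
grid = np.linspace(-1.0, 1.0, 21)        # gridding of the parameter range

n = 2
P = cp.Variable((n, n), symmetric=True)  # common quadratic Lyapunov matrix
eps = 1e-3
cons = [P >> eps * np.eye(n)]
for rho in grid:
    A = A0 + rho * A1
    cons.append(A.T @ P + P @ A << -eps * np.eye(n))   # one LMI per grid point

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("grid feasibility:", prob.status)
```

A denser grid increases confidence but also the number of LMIs, which is exactly the trade-off mentioned in the text.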
The "keywords" of modern control technology related to the control theory of LPV systems, time-delay LPV systems, and saturated system analysis have been fully annotated in Chapters 2, 4 and 5 with the respective references. It should be noted that exact-memory controllers are non-implementable in practice due to the difficulty of estimating delays; therefore, the uncertain-delay gain-scheduled controller is more suitable for the delay-dependent stabilization condition. The features of the control system are:
§ The delay is considered to vary in a range or to be approximated, and is thereby more applicable in practice.
§ In the framework of the modified sector condition, a suitable auxiliary controller strategy not only gives a more accurate estimate of the lower bound of the LKF but also relaxes the saturation constraints.
§ The use of the Wirtinger inequality reduces the gap of Jensen's inequality and yields less conservative delay-dependent stability and stabilization conditions; the improvements over existing results are obtained with a smaller number of decision matrix variables.
In the first section, the rudimentary definitions such as the admissible set of initial conditions, the saturation constraints, and the structure of the delay-dependent controllers will be presented. The corresponding stabilization conditions are then delivered for each design strategy.
Problem Formulation and Preliminaries
Sector Nonlinearity Model Approach
A time-delay LPV system with actuator saturation is presented under the form:
\[
\dot x(t)=A(\rho)x(t)+A_h(\rho)x(t-h(t))+B(\rho)\,\mathrm{sat}(u(t))+B_w(\rho)w(t), \qquad (6.1)
\]
together with a controlled output z(t).
Saturation nonlinearity.
The control input vector \(u(t)=\begin{bmatrix}u_1(t)&u_2(t)&\cdots&u_m(t)\end{bmatrix}^{T}\in\mathbb{R}^{m}\) is constrained by the saturation limits \(|u_i(t)|\le\bar u_i\), \(i=1,2,\dots,m\). Let us recall the dead-zone nonlinearity associated with the symmetric saturation function, \(\psi(u)=u-\mathrm{sat}(u)\):
\[
\psi_i(u_i)=\begin{cases}u_i-\bar u_i\,\mathrm{sign}(u_i), & \text{if } |u_i|>\bar u_i,\\ 0, & \text{if } |u_i|\le\bar u_i,\end{cases}\qquad i=1,2,\dots,m. \qquad (6.2)
\]
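A minimal numerical counterpart of the saturation and dead-zone maps in (6.2), useful when simulating the bounded control loop; the limit vector used below is only an example.

```python
import numpy as np

def sat(u, u_bar):
    """Component-wise symmetric saturation: sign(u_i)*min(|u_i|, u_bar_i)."""
    return np.clip(u, -u_bar, u_bar)

def dead_zone(u, u_bar):
    """Dead-zone nonlinearity psi(u) = u - sat(u), as in (6.2)."""
    return u - sat(u, u_bar)

u_bar = np.array([1.0, 2.0])     # example saturation limits
u = np.array([0.4, -3.5])
print(sat(u, u_bar))             # [ 0.4 -2. ]
print(dead_zone(u, u_bar))       # [ 0.  -1.5]
```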
Define an auxiliary control law \(\vartheta(t)\) belonging to the polyhedral set
\[
\mathbb{S}=\big\{(u,\vartheta)\in\mathbb{R}^{m}\times\mathbb{R}^{m}\,:\,|u_i-\vartheta_i|\le\bar u_i,\ i=1,2,\dots,m\big\}. \qquad (6.3)
\]
Region of Attraction.
The saturation limits of the actuators make the control design of the time-delay LPV system more challenging. System (6.1) attains global stability if the trajectories converge asymptotically to the origin from all initial conditions defined on [−h̄, 0], in the absence of disturbance (w(t) = 0). Nonetheless, this condition is hard to satisfy in practice. Instead of having to guarantee convergence from all initial conditions, an estimate of the region of attraction determines the initial conditions from which the system will converge asymptotically. The key issue relating to the estimate of the Region of Attraction (or Domain of Attraction, DoA) concerns the Banach space of continuous vector functions of initial conditions:
\[
\mathcal{X}_0=\Big\{\phi\in\mathcal{C}^{1}_{[-\bar h,0]}(\mathbb{R}^{n})\,:\,\sup_{\theta\in[-\bar h,0]}\|\phi(\theta)\|\le\delta_1,\ \sup_{\theta\in[-\bar h,0]}\|\dot\phi(\theta)\|\le\delta_2\Big\}. \qquad (6.4)
\]
Ellipsoidal Set of Stability.
A Lyapunov-Krasovskii functional candidate is used as the primary stability analysis tool for the dynamical system. The estimates of the DoA are associated with the following LKF:
In the next sections, the delay-dependent stabilization conditions are derived for the state-feedback and the dynamic output-feedback controllers, according to the available knowledge of the delay in the system.
Parameterized State Feedback Controller
Consider a control law scheduled on the compact set of parameters:
\[
u(t)=K(\rho(t))\,x(t)+K_d(\rho(t))\,x(t-d(t)), \qquad (6.6)
\]
where the scheduling gains K(ρ) and K_d(ρ) are sought to stabilize the time-delay system (6.1), and d(t) is the approximation of the system delay h(t), as presented in Section 5.2.4.2.
Let us recall the permissible delay estimates: a memoryless controller (d(t) = 0), an exact-memory controller (d(t) = h(t)), and an approximate-memory controller (d(t) close to h(t) within a robust margin). Most delay-dependent control strategies are typically concerned with the first case, to simplify the design and be suitable for implementation (where delay values are unavailable for measurement); since the gain K_d is not included in the feedback controller structure, this approach is conservative. The second case allows relaxation of the stabilization conditions but is inapplicable in practice. In the last case, the delay approximation (within the robust margin) allows a practical implementation and is less conservative than a memoryless controller.
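The following sketch illustrates how an approximate-memory gain-scheduled law u(t) = K(ρ)x(t) + K_d(ρ)x(t − d(t)) could be evaluated in practice: gains pre-computed on a parameter grid are interpolated online and the delayed state is read from a sample buffer. The grid, the linear interpolation, and the buffer length are implementation assumptions, not part of the theorems.

```python
import numpy as np

class GainScheduledController:
    """Approximate-memory gain-scheduled state feedback."""
    def __init__(self, rho_grid, K_grid, Kd_grid, dt, d_max):
        self.rho_grid, self.K_grid, self.Kd_grid = rho_grid, K_grid, Kd_grid
        self.dt = dt
        self.buffer = []                              # past states, most recent last
        self.max_len = int(round(d_max / dt)) + 1

    def _interp(self, gains, rho):
        i = int(np.clip(np.searchsorted(self.rho_grid, rho), 1, len(self.rho_grid) - 1))
        a = (rho - self.rho_grid[i - 1]) / (self.rho_grid[i] - self.rho_grid[i - 1])
        return (1 - a) * gains[i - 1] + a * gains[i]

    def control(self, x, rho, d):
        self.buffer.append(np.asarray(x, dtype=float))
        self.buffer = self.buffer[-self.max_len:]
        k_back = min(int(round(d / self.dt)), len(self.buffer) - 1)
        x_delayed = self.buffer[-1 - k_back]          # approximate-memory term
        K = self._interp(self.K_grid, rho)
        Kd = self._interp(self.Kd_grid, rho)
        return K @ np.asarray(x) + Kd @ x_delayed
```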
In the framework of saturation control using the generalized sector condition, the restriction on the feedback control law can be removed with the auxiliary controller. Specifically, consider an auxiliary controller ϑ(t) ∈ 𝕊 associated with u(t), which leads to the closed-loop matrices A_cl(ρ) = A(ρ) + B(ρ)K(ρ) and H_cl(ρ) = H(ρ) + J(ρ)K(ρ). The design requirement related to the ℋ∞ performance criterion is to search for bounded feedback and auxiliary controllers guaranteeing the regional stability of system (6.11). From the initial condition set (6.4), condition (6.3) is stated briefly as follows: for the designed saturation limits ū_i, look for the auxiliary control law ϑ(t) satisfying |u_i(t) − ϑ_i(t)| ≤ ū_i, i = 1, 2, …, m. Generally, to ensure the stability of the closed-loop system under the influence of the disturbance, the following necessary and sufficient conditions must be fulfilled:
\[
\frac{\vartheta_i^{2}(t)}{\bar u_i^{\,2}}\le V(t,x_t,\dot x_t)\le 1,\qquad i=1,2,\dots,m, \qquad (6.12)
\]
\[
\text{if } w(t)=0 \text{ and } V(0)\le 1:\quad \dot V(t,x_t,\dot x_t)<0, \qquad (6.13)
\]
\[
\text{if } w(t)\neq 0 \text{ with } \|w\|_{L_2}^{2}\le\delta^{-1}:\quad V(t,x_t,\dot x_t)\le 1 \ \text{along the trajectories.} \qquad (6.14)
\]
In view of condition (6.12), an appropriate selection of the auxiliary controller yields a better estimate of the lower bound of the Lyapunov functional, which makes the stabilization conditions of the saturated control system less conservative. However, conditions (6.13)-(6.14) are complicated to enforce directly for the time-delay LPV system, unlike the method proposed in Chapter 4. Without loss of generality, we can assume that the energy bound of the disturbance is known, ‖w(t)‖²_{L2} ≤ δ⁻¹, and that the set of admissible initial conditions is defined by the upper bounds in (6.4).
Optimization problems.
Combining the estimation of the DoA (6.4), the performance criterion, the memory resilience, and the upper and lower bounds of the delay value, we formulate the following optimization problems:
§ Given the disturbance-attenuation level, the delay-approximation margin, and the delay bound h̄, maximize the size of the DoA.
§ Given the delay-approximation margin, the delay bound h̄, and a set of admissible initial conditions, minimize the disturbance-rejection level.
§ Given the attenuation level and a set of admissible initial conditions, maximize h̄ (the maximal upper bound on the delay value) or maximize the allowable delay approximation.
It should be noted that the above cases rely on an assumption of energy-bounded exogenous signals (an L2-bound on the admissible disturbances). In addition, the optimization problems such as minimizing the energy-to-energy index, maximizing the upper bound of the delay value h̄, and maximizing the delay-approximation value are all convex problems; these values can be derived from a sub-optimization method or an iterative algorithm. However, the admissible set of the initial conditions usually relates to a concave problem. Therefore, we focus more on seeking the largest estimate of the DoA that satisfies the designed delay-dependent stabilization condition.
Parameterized Dynamic Output Feedback Controller
Let us consider a dynamic output-feedback controller with approximate memory, whose parameter-dependent state-space realization (6.15) is scheduled by ρ(t), uses the approximate delay d(t), and starts from a zero initial controller state.
It is delicate to deploy the delay-dependent stability condition directly for the saturated LPV system with a dynamic output-feedback controller. Unlike the state-feedback controller analysis, the stabilization analysis for the closed-loop system (6.16) involves nonlinear structures. The use of a congruence transformation only exacerbates the problem because of the coupling with the LKF decision matrices. Moreover, the quasi-convexity related to the saturation conditions has not been completely resolved in previous work (even for the controller analysis of LTI systems). Inspired by the research of [START_REF] Apkarian | Advanced gain-scheduling techniques for uncertain systems[END_REF][START_REF] Briat | Commande et Observation Robuste des Systemes LPV Retardés[END_REF], we propose a new approach solving the problem sequentially in the following steps:
§ First, deliver a delay-dependent stability condition for the unsaturated system (6.16) with inputs u and w(t) (employ Theorem 5.2.2 to address the stability condition with an approximated delay value).
§ Then, include the saturation conditions and integrate the GSC condition (developed similarly to Section 4.4).
§ Lastly, use a congruence transformation to recover a tractable condition.
It can be seen that the stability analysis for resilient-memory DOF controllers is more challenging than for exact-memory DOF controllers. The variable substitution is more problematic when the matrices A_h and A_d do not match the structure of A (which shows the impossibility of setting the variables as in the method of Section 4.4). The synthesis problem will be discussed and presented in Section 6.3.
State Feedback Controllers
This section concerns the synthesis of saturated state-feedback control laws with memoryless and approximate delay values. The stabilization of the closed loop is addressed based on the delay-dependent stability conditions given in Sections 5.2.1.2 and 5.2.4.2, respectively, for the single-delay and approximate-delay cases. These conditions rely on the Wirtinger inequality, which is less conservative than the Jensen inequality (the comparison is shown in Section 5.2.5).
Memoryless Delay-Dependent Stabilization
In the first case, we do not include the exact-memory gain in the feedback controller structure, but the auxiliary controller ϑ(t) can be employed with formulation (6.9) to relax the saturation condition. The closed-loop system is then obtained from the general form (6.11) with K_d(ρ) = 0, and the stabilization is guaranteed by the parameter-dependent LMI condition (6.18), which must hold for all admissible parameters.
Proof.
A sketch of the proof is presented sequentially as follows. First, using the Lyapunov-Krasovskii functional candidate (6.5), the parametric LMI condition (6.19) ensures the lower bound of the saturation constraints on the auxiliary controller (6.9), developed similarly to Theorem 4.1.1.
Condition (6.19), combined with Lemma 5.1.1, yields the bound (6.20) on the components of the auxiliary control law with respect to the Lyapunov-Krasovskii functional, for i = 1, 2, …, m. Then, in view of the initial bounding-set conditions (6.4), the development of the delay-dependent stability follows Lemma 5.2.2, using the Lyapunov-Krasovskii functional candidate (6.5) combined with the ℋ∞ performance criterion and the GSC condition (6.22). Finally, if the stabilization condition (6.18) is fulfilled, then the derivative of the Lyapunov functional (6.5) is negative along the trajectories of the closed-loop system (6.17); in particular, when w(t) = 0 the closed loop is locally asymptotically stable. The DoA size optimization problem is then formulated to seek the minimum value of the upper bound in condition (6.21). In contrast, this upper limit cannot simply be set to 1, because it also relates to the energy-bounded disturbance and the attenuation level.
Since conditions (6.18)-(6.19) are linear matrix inequalities, seeking the upper bound of the time-varying delay is not much of a challenge. However, condition (6.21) yields non-convex formulations involving products of decision matrices (of the form XQX, XRX, …), which make it not always possible to attain a good estimate of the initial-condition domain. In Section 6.2.3, a linearization is proposed to handle this concave problem.
Approximated Delay-Dependent Stabilization
As discussed in Section 5.2.4.2, the uncertain delay d(t) provides a more general form of the stability condition, from which a delay-dependent stability condition with memoryless or exact memory can be derived. Substituting the closed-loop system (6.11) into the relaxed LMI condition (5.61) and repeating the same analysis as in Theorem 6.2.1 leads to the following result.
Proof.
This part is omitted because it is similar to the development of the memoryless control laws in the last theorem: substituting the closed-loop system (6.17) into the delay-dependent stability condition (2.194) gives the result in (6.9). ∎ Remark 6.2.1. The delay-approximation transformation is derived from the delay-dependent stability condition in Theorem 5.2.2. The difference between conditions (6.27) and (6.18) lies in the fourth and eleventh rows and columns. As a result, the approximate-memory state-feedback controller provides implementation feasibility compared with the exact-memory controller and gives less conservative stabilizing conditions than the memoryless controller.
Optimization problem -Maximization Domain of Attraction
The optimization problems usually involve the size criteria of the admissible initial conditions and the maximum allowable level of the disturbances. Specifically, consider the following criteria related to inequalities (6.13) and (6.14): preselect the attenuation level, then minimize the size bound of the admissible initial set. As discussed, the optimization problems such as minimizing the energy-to-energy index, maximizing the upper bound of the delay value h̄, and maximizing the delay-approximation value are all affine conditions; these values can be found by optimization methods or by iterative algorithms. However, the expressions of the admissible set of the initial conditions involve bilinear forms (e.g., condition (6.21)). In this section, we seek the largest estimate of the DoA that satisfies the designed delay-dependent stabilization conditions, through eigenvalue bounds of the form P ⪯ λ₁I (and similarly for Q and R). Optimization problem (6.30) thus amounts to minimizing the greatest eigenvalues of the variable matrices P, Q, R of the Lyapunov-Krasovskii functional (6.5). By such expansions, the concave problem is transformed into a linear optimization problem with the objective of finding variables λᵢ, i = 1, 2, 3, 4, such that the cost function (6.30) reaches its smallest value subject to the design stabilization conditions and the linearity conditions (6.31).
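A hedged CVXPY sketch of this linearization: the largest eigenvalues of P, Q, R are bounded by scalars λᵢ whose sum is minimized. The delay-dependent stabilization PLMIs of the chosen theorem would have to be appended as additional constraints (omitted here), so the snippet only shows the structure of the cost and of the linearity conditions.

```python
import numpy as np
import cvxpy as cp

n = 2
P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
R = cp.Variable((n, n), symmetric=True)
lam = cp.Variable(3, nonneg=True)              # eigenvalue bounds on P, Q, R

I = np.eye(n)
cons = [P >> 1e-4 * I, Q >> 0, R >> 0,
        P << lam[0] * I, Q << lam[1] * I, R << lam[2] * I]
# ... append here the delay-dependent stabilization LMIs of the chosen theorem ...
prob = cp.Problem(cp.Minimize(cp.sum(lam)), cons)
prob.solve(solver=cp.SCS)
print(prob.status, lam.value)
```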
Example
This section is devoted to the analysis of the results and the discussion of the proposed method. First, an example of an LPV delay system discussed in [START_REF] Briat | Memory-resilient gain-scheduled statefeedback control of uncertain LTI/LPV systems with time-varying delays[END_REF][START_REF] Zhang | Delay-dependent stability analysis and H∞ control for state-delayed LPV system[END_REF] is used to demonstrate the performance of the designed controller with an approximate delay value (with and without external disturbance). Second, the efficiency of the saturation constraints is validated on an open-loop-unstable LPV system with a bounded control signal (due to physical limitations, safety mechanisms, etc.). Performance degradation and instability are then observed in the systems without the saturated control design.
In the next part, Section 6.2.4.2, we borrow a well-known example (Fridman et al., 2003; [START_REF] Gomes Da Silva | Stabilisation of neutral systems with saturating control inputs[END_REF]) to show the adaptability of the proposed method, in which local stabilization is enforced for the saturated LTI time-delay system. After that, a comparison of the maximized estimates of the domain of attraction is provided. The optimal disturbance-rejection levels obtained from the stabilization conditions of Theorem 6.2.1 and Theorem 6.2.2 are first delivered for the unsaturated system (6.32) and compared with other unsaturated control systems; the saturation constraint on the controller is therefore temporarily ignored. Specifically, condition (6.19) and the fifth column and row of condition (6.18) in Theorem 6.2.1, as well as condition (6.28) and the sixth column and row of condition (6.27) in Theorem 6.2.2, are not included in this first controller comparison. The parameter ρ(t) is gridded over N_p = 41 points uniformly spaced in [−1, 1]. The optimal results are solved from the modified conditions of Theorem 6.2.1 and Theorem 6.2.2, consistent with the delay values given in Table 6.1 and the assumption of zero initial conditions. The proposed stabilizations show their flexibility, since they can comfortably be applied to the unsaturated control system analysis. Moreover, a performance improvement has been attained with the memoryless and memory-resilient controllers compared with [START_REF] Briat | Linear Parameter-Varying and Time-Delay Systems[END_REF][START_REF] Briat | Memory-resilient gain-scheduled statefeedback control of uncertain LTI/LPV systems with time-varying delays[END_REF], demonstrating the relaxation of the proposed method. It is interesting to point out that the stabilization conditions with an approximated delay value deliver the same disturbance-rejection optimization levels as the exact-memory and memoryless cases, respectively, when the approximation margin equals zero or the full delay bound. However, the effects of saturation are less interesting when analyzing a stabilizable system (the LPV time-delay system (6.32) is Hurwitz with the delayed term included). So, let us consider a modification of the previous example. Example 6.2.2: Let us introduce a quasi-LPV time-delay system. The responses of the bounded controllers correspond to the saturation limit ū = 5, with a delay bound of 1 and a delay variation rate bounded by 0.9.
In this example, the simulation starts from safe initial conditions that belong to the estimate of the DoA. Then, the disturbance amplitude gradually increases but does not exceed the maximum allowable bounded energy. The purpose of this process is to assess the evolution of the state-dependent parameter as well as the saturation response of the unbounded feedback controllers. It is clear that ρ̇(t) is bounded within [−5, 5], but most of the response stays within [−3, 3] (even when the system is influenced by a large-magnitude external disturbance). So we proceed to solve the proposed theorems with parameter values selected in the assumed range.
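For illustration, the sketch below simulates a saturated quasi-LPV time-delay loop with a forward-Euler scheme, a state-dependent parameter, a buffered delayed state and a sine disturbance acting on a finite window. All matrices, gains and bounds are placeholders and do not correspond to Example 6.2.2.

```python
import numpy as np

# x' = A(rho)x + Ah x(t-h) + B sat(u) + Bw w, rho = x1, u = K x  (placeholder data)
dt, T, h = 0.01, 15.0, 0.8
N, nh = int(T / dt), int(h / dt)
A0 = np.array([[0.0, 1.0], [-1.0, -0.5]]); A1 = np.array([[0.0, 0.0], [0.5, 0.0]])
Ah = np.array([[0.0, 0.0], [-0.3, -0.2]]); B = np.array([[0.0], [1.0]])
Bw = np.array([[0.0], [0.1]]); K = np.array([[-2.0, -1.5]]); u_bar = 5.0

x_hist = [np.array([1.0, -0.5])] * (nh + 1)       # constant initial function on [-h, 0]
for k in range(N):
    t = k * dt
    x, xd = x_hist[-1], x_hist[-1 - nh]
    rho = np.clip(x[0], -1.0, 1.0)                # state-dependent scheduling parameter
    u_sat = np.clip(float(K @ x), -u_bar, u_bar)  # saturated state feedback
    w = np.sin(t) if 6.0 <= t <= 10.0 else 0.0    # disturbance on a finite window
    dx = (A0 + rho * A1) @ x + Ah @ xd + (B * u_sat).ravel() + (Bw * w).ravel()
    x_hist.append(x + dt * dx)
print("final state:", x_hist[-1])
```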
Actuator saturation occurs in many practical applications, where its presence may lead to performance degradation or even cause instability of the closed-loop system. In the monographs [START_REF] Hu | Control Systems with Actuator Saturation[END_REF][START_REF] Tarbouriech | Stability and Stabilization of Linear Systems with Saturating Actuators[END_REF], the authors thoroughly explain the behavior of the closed-loop states when the actuator is saturated, in which the state dynamics can destabilize or converge to a parasitic equilibrium instead of toward the origin. The gain-scheduling controllers obtained by the proposed methods can be compared with the system designed without the saturation conditions: the constrained conditions (6.19) and (6.28) ensure local stabilization, in contrast to the system without a saturated design.
However, these results may be less objective than those in Table 6.1 when we reconstruct the stabilization conditions of Theorem 8.1.5 and Theorem 4.2 from the literature [START_REF] Briat | Linear Parameter-Varying and Time-Delay Systems[END_REF][START_REF] Briat | Memory-resilient gain-scheduled statefeedback control of uncertain LTI/LPV systems with time-varying delays[END_REF] to implement controls applicable to the saturated LPV time-delay system (6.33). Nevertheless, the scheduling gains of the proposed theorems have shown their stability-regulation efficiency for the LPV time-delay system conforming to the actuator limits. The parameter-dependent gains are given by:
Memoryless Saturated Controller
Solving the stabilization condition of Theorem 6.2.1 yields the memoryless saturated controller gains.
In the bounded-controller framework, the signal is sensitive to the saturation limit if the designed gain is too high. Explicitly, a higher disturbance-rejection level results in a smaller linear-behavior region. Once again, there is a trade-off between the system performance, the estimate of the domain of asymptotic stability, and the admissible set of initial conditions.
Maximization of the set of admissible initial conditions
Now, we present the results of the optimization method discussed in Section 6.2.3 for estimating the allowable initial conditions. There are few studies on this aspect for LPV time-varying-delay systems with saturated actuators, so we employ a well-known benchmark from the literature for comparison. A larger number of slack-matrix variables usually leads to a higher computational complexity. The proposed method provides good results with less conservative stabilization conditions for a reasonable number of variables to be determined. To solve the stabilization conditions for system (6.37), we use only 27 variables, compared with 36 and 37 variables for the reciprocally convex combination (RCC) and free-matrix-based (FMB) methods [START_REF] Dey | Improved delay-dependent stabilization of time-delay systems with actuator saturation[END_REF], and 82 and 35 variables for the free-weighting-matrix (FWM) methods (Chen et al., 2015; [START_REF] Chen | Robust Stabilization for Uncertain Saturated Time-Delay Systems: A Distributed-Delay-Dependent Polytopic Approach[END_REF]), respectively. The methods outlined in Table 6.2 focus on the LTI system. The trade-off between conditional relaxation and the computational complexity of the condition makes these approaches hard to extend to a parameter-dependent stabilization. The proposed condition balances the number of variable matrices, the conservatism, and the scalability of the control synthesis. To the best of our knowledge, the stabilization problem for LPV/quasi-LPV time-delay systems subject to actuator saturation has not been well addressed, although the problem has significant practical applications. In the next section, the proposed condition structures are used to deliver the stabilization of the dynamic output controller. Then, the optimization problems for LPV time-varying-delay systems related to the size criteria of the admissible initial conditions and the maximum disturbance attenuation are provided later in this chapter.
Proof.
A sketch of the proof is provided as follows. First, using the transformation (6.41), LMI (6.51).a is shown to be equivalent to the following condition. We consider the Lyapunov-Krasovskii functional (6.5) for the extended system (6.38), consistent with the dynamic state. It can be seen that (6.53) represents the saturation constraints involved in the feedback controller (6.39) and the auxiliary controller (6.40). By inverting the variable assignments (6.49), we get the dynamic output controller (6.52). The rest of the demonstration follows the lines of Theorem 6.2.1.
∎
To use the congruence transformation, the slack matrix W is chosen as a symmetric matrix. Besides, as discussed in the previous sections, W and the associated slack variables must be parameter-independent variable matrices to avoid nonlinear components appearing in the condition. The analysis of the stability conditions for system (6.38) takes full advantage of the commensurate structure between the system matrices, suppresses the bilinear terms, and delivers a convex condition. Nonetheless, it loses the generality and the interest of the approximated delay method. In the next section, Young's inequality is recalled to deal with these nonlinear problems.
Approximate Delay-Dependent Stabilization
Let us introduce the control input structure. Applying Young's inequality to the above PLMI condition and then using the Schur complement to rearrange the matrix inequality entails the stability condition (6.66). The rest follows the same lines as the proof of Theorem 6.3.1.
One can notice the difference between the analysis of a dynamic output controller with an approximated delay and that of a state-feedback controller: the exact delay value is still present in the controller structure (6.57). We could assume a simplification of the output measurement, but hysteresis also affects the measured signals in practice; thus, system (6.15) presents a more general form. However, to deal with the concave problem, we have to use Young's inequality for condition (6.68), which has limitations.
Optimization problem -Maximization Domain of Attraction
As shown in the results of Section 6.2.4, the minimum values of the attenuation criterion can be obtained by optimizing directly within the PLMI stabilization conditions. The maximum allowable value of the delay margin h̄ can be found by an iterative sweeping technique. It should be noted that both approaches are carried out with preselected values of the remaining design parameters (delay bound, approximation margin, attenuation level, etc.).
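The iterative sweeping technique can be implemented as a simple bisection on the delay bound around an LMI feasibility oracle; the oracle below is a toy stand-in for the solver call of the chosen theorem.

```python
import numpy as np

def max_allowable_delay(is_feasible, h_lo=0.0, h_hi=10.0, tol=1e-3):
    """Largest delay bound h such that the LMI conditions remain feasible.
    `is_feasible(h)` wraps the LMI solver call and returns True/False."""
    if not is_feasible(h_lo):
        return None
    while is_feasible(h_hi):            # expand the bracket if needed
        h_lo, h_hi = h_hi, 2.0 * h_hi
        if h_hi > 1e6:
            return np.inf
    while h_hi - h_lo > tol:
        h_mid = 0.5 * (h_lo + h_hi)
        if is_feasible(h_mid):
            h_lo = h_mid
        else:
            h_hi = h_mid
    return h_lo

# toy oracle standing in for the LMI feasibility test
print(max_allowable_delay(lambda h: h <= 2.2130))   # approximately 2.213
```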
As discussed in Section 6.2.3, the estimates of the domain of attraction are concerned with the coupling of the variable matrices in the conditions, in which minimizing P̃, Q̃, R̃ concomitantly with maximizing X̃ makes the global optimization sometimes fail to converge to the correct solution. Nonetheless, so far, the recent methods have solved the optimization problem of the stabilization condition according to this approach. By substituting the inverse variable in place of X and slightly modifying Problem 6.2.1, the methodology avoids the concave problem and singular matrix forms. However, it has a drawback in approaching the dynamic output controller stabilization, as will be revealed in Section 6.3.4.2.
Example
The stabilization implementations analyzed in this chapter use the generalized sector bounding condition to enforce the control saturation. Nevertheless, the saturation constraint set for the state-feedback controller in Theorem 6.2.2 is distinct from that of the dynamic output controller in Theorem 6.3.1. In addition, most of the work on developing stabilization conditions with delay and actuator saturation is generally applied to LTI systems and employs a quadratic Lyapunov function. In Section 6.3.4.1, the parameter-dependent stabilization conditions are addressed for an LPV time-delay system. Then, the comparison of the gain-scheduled controllers and the maximization of the DoA estimate related to the parameter-dependent Lyapunov function are provided in Section 6.3.4.2. The conditions are implemented with YALMIP [START_REF] Lofberg | YALMIP : a toolbox for modeling and optimization in MATLAB[END_REF] and solved with the solver Mosek [START_REF] Andersen | On implementing a primal-dual interiorpoint method for conic quadratic optimization[END_REF].
Following the definition of the domain of attraction, X₀ provides the maximal admissible values of the initial conditions. On the contrary, if the initial conditions do not belong to X₀, then the trajectories of the closed-loop system are not guaranteed to remain in the RAS corresponding to the optimal value of the size criterion. It is noteworthy that, compared with the results in Section 6.2.4.1 (optimizing the attenuation level), the scheduling gains (6.75) are smaller. The system trades between performance and stability in the size-criterion optimization (e.g., minimizing η). As explained in Chapter 4, since the linear response region expands, the feedback control signal rarely exceeds the saturation threshold. These properties are illustrated in the next figures.
Saturated State Feedback Controller with Approximate Memory
Solving the stabilization condition of Theorem 6.2.2 provides the saturated state-feedback controller with approximate memory, while the dynamic controller gains are scheduled online consistently with the expressions given in (6.49). Starting from the allowable set of initial conditions, we can observe in Figure 6-3 that the feedback control signals exceed the saturation threshold from the 0th to the 1st second, and that the state-feedback controller surpasses the control limit again from the 3.5th to the 5.5th second; this can be related to the states of the feedback control system in Figure 6-2. If there were no saturation bound, the system would converge asymptotically. The evolutions of the system states regulated by the state-feedback controller and by the dynamic output-feedback controller show that the behavior under state feedback is underdamped (in comparison with the dynamic output-feedback controller). In this example, an exchange between the system performance and the set of admissible initial conditions can be deduced; the size criterion is pre-set for both theorems, and a disturbance input w(t) = sin(t) acts from the 6th to the 10th second. Explicitly, both LPV time-delay systems ensure asymptotic stability consistent with the design values. The time-varying delay, expressed as in Eq. (6.36), and the approximate delay are presented in the bottom frame of Figure 6-2. The feedback controller gains (6.75) with an uncertain delay value are obtained from the solution of Theorem 6.2.2. Besides, the scheduling gains of the dynamic control system (6.52) are indirectly found from the variable matrices (6.77), obtained from the feasible solution of Theorem 6.3.1. The results demonstrate the effectiveness of the proposed method with the guarantee of local stability for the delayed LPV system with respect to the saturation constraints. It should be noted that, in practice, it is difficult to access all the states (because of measurement limitations, the effect of delay on the measured output, etc.); therefore, the approach of Theorem 6.3.1 has more practical significance.
Optimization Problems
We deliver a comparison of the maximization of the minor axis of the ellipsoidal set X₀, the minimization of the disturbance-attenuation index, and the maximization of the upper bounds of the delay and of its approximation. The comparison between the two theorems is detailed in Table 6.3.
As can easily be visualized, Theorem 6.2.2 dominates Theorem 6.3.1 in all optimization categories: the estimate of the RAS, the disturbance-rejection criterion, and the maximum allowable value of the delay are all better. This is reasonable, because the stabilizability synthesis of the state-feedback controller always attains a better relaxation than the output-feedback controller. It should be noted that the difficulty of the optimization problems increases from category 1 to category 4: the optimizations of the attenuation level and of the size criterion can be achieved directly from the stabilization conditions, and the maximum delay is solved by an incremental loop algorithm, whereas maximizing the set of admissible initial conditions must satisfy Problem 6.2.1 and Problem 6.3.1, respectively, combined with the stabilization conditions of Theorem 6.2.2 and Theorem 6.3.1. Recall the upper-bound expression of the initial condition (6.21) combined with the analysis of the stabilization with and without the influence of the external disturbance (6.13)-(6.14). In the simulation with the feedback controller (relating to Theorem 6.2.2, illustrated in Figure 6-4.a&b), we chose initial conditions x(0) within [−10, 10] × [−30, 30] and a zero initial function on the delay interval. In contrast to Chapter 4, the characterization of the asymptotically stable domain of an LPV time-delay system meets difficulties with the extended vector and the initial condition. It can be noticed that the level set of the ellipsoids (6.79) cannot completely capture the behavior of V(t) (because it does not characterize the integral terms), so it cannot accurately describe the domain of attraction. Specifically, some initial conditions outside the ellipsoidal set, such as x(0) = (6, 22), (7, 20), (8, 18), …, still converge to the origin. The estimates of the domain of attraction presented in Figure 6-4.c&d are obtained by solving Theorem 6.3.1 together with Problem 6.3.1. It is apparent that the region of attraction of the closed-loop system regulated by the state-feedback controller is larger than that of the dynamic output-feedback controller.
The optimal results of Theorem 6.3.1 given in Table 6.3 are obtained similarly to those of Theorem 6.2.2. However, inequality (6.70) contains a quasi-convex form when the second scaling variable is present. Using the second approach, we obtain a DoA disc with a diameter of 0.2985 and the ellipsoidal domain shown in Figure 6-4.c&d.
Conclusions
We have addressed feedback control laws to stabilize LPV time-delay systems with saturated actuators. The control analysis and synthesis with the state-feedback and the dynamic output-feedback structures have been provided, where the nonlinearities and concave problems in the stabilization conditions are handled effectively by a simple transformation. The multi-criteria optimizations are implemented for both memoryless and approximate-delay controllers subject to control saturation. The results show an enhancement of the system performance and robust stability against the effects of the external disturbance and the time-varying parameters.
Chapter 7.
Conclusions and Perspectives
Summary
Through the contents discussed in this thesis, the main contributions are briefly summarized by the following conclusions:
PLMI conditions have been considered to solve the analysis and design problems for LPV and quasi-LPV systems. The derived conditions have a general formulation which is convenient for various design purposes. Some numerical results in Chapter 2 have been given to illustrate the advantage of this methodology.
By considering the scaling structure, the non-convex optimization related to the robust controller design is linearized into a multi-convex optimization problem through the iterative CCL algorithm. The relaxation results given in Chapter 3 have shown the effectiveness of the proposed method.
In Chapter 4, the control synthesis conforming to saturation constraints is investigated to address the stabilization problem of the feedback controller design for saturated LPV systems. The derived PLMI formulations allow relaxation for an individual implementation. The simulation results emphasize the reduced conservativeness of the presented condition compared with existing works and provide an LPV analysis tool for gain-scheduled feedback controllers subject to saturation constraints.
The following chapter focuses on the stability of time-delay systems based on the Lyapunov-Krasovskii functional. This chapter is constructed slightly differently, in that an appropriate stability analysis method is addressed for the LPV delay systems before proceeding with the stabilization analysis. The comparisons presented in Chapter 5 demonstrate the less conservative results of the refined LKF conditional forms using expanded vectors. The best results are obtained by the investigated method compared with recent works in the literature.
Based on this design strategy, the saturation conditions combined with the delay-dependent stabilization scheme allow a balance between conservatism and computational effort. The estimation of the attraction domains shown in the last chapter has demonstrated this point of view. Furthermore, the simulation results also reveal the system's enhancement when actuators are saturated. In addition, the resilient-memory controller has shown good performance with respect to the saturation limits and robustness to uncertain knowledge of the time-varying delay values. Finally, a linearization method has fruitfully converted the nonlinear matrix inequality constraints involved in the optimization of the DOA of dynamic output feedback controllers into tractable LMI conditions. The estimation of the DOA also exposes some characterized ellipsoidal domains associated with the LKF.
Remaining problems and future work
The work of this dissertation is covered by five chapters related to stability analysis and stabilization of LPV systems with time-varying delay subject to actuator saturation. The main objectives of the thesis have been achieved through theoretical results, in which the proposed methods have treated the problems quite productively in their specific contexts. However, it is possible to point out some open problems in particular cases as follows:
The CCL algorithm in Chapter 3 is convenient for linearizing design problems using a QLF. However, if more general forms, i.e., PDLFs, are considered, deploying this algorithm becomes quite involved. On the other hand, the application of the CCL algorithm mainly consists of refining the slack scalars in Young's inequality. Unfortunately, using this inequality to decompose uncertain structures or bilinear matrices is a very conservative manner of proceeding. Similarly to the concave optimization problem discussed in Chapter 6, mathematical tools are needed to properly treat these nonlinear matrix structures.
Regarding the developed saturation conditions, the limits only consider the symmetric case. In fact, there are many systems with asymmetric saturation bounds for which the design constraints (GSC) need to be developed to suit the general case. Moreover, the guarantee of regional stability conditions corresponding to state constraints has not really been completely solved. One of the promising approaches that can be integrated with the saturation condition via the LMI technique is the shaping Lyapunov function (control barrier function) method.
Almost all the examples consider time-delay systems that are stabilizable/detectable for delays ranging over [0, h̄], which makes the designed conditions conservative in describing the stability characteristics of systems whose delay has a strictly positive lower bound. In these cases, to take the information of the lower bound of the delay into account, a delay-range-dependent condition makes more sense.
\[
x=\sum_{j=1}^{s}\lambda_j x_j,\qquad \lambda_j\ge 0,\ \sum_{j=1}^{s}\lambda_j=1. \qquad (A.3)
\]
A.1.2 Convex sets
A convex set contains all points such that the segment connecting any two points of the set stays inside the set, and the convex hull is the smallest convex set that contains it. Then, as defined in [START_REF] Boyd | Convex Optimization[END_REF], the convex hull of a set is the set of all convex combinations of points in that set:
\[
\operatorname{conv}\{x_{1},x_{2},x_{3}\}=\left\{\lambda_{1}x_{1}+\lambda_{2}x_{2}+\lambda_{3}x_{3}\ :\ \lambda_{i}\ge 0,\ \lambda_{1}+\lambda_{2}+\lambda_{3}=1\right\}\subset\mathbb{R}^{p}. \tag{A.4}
\]
A.1.3 Convex functions
Now, let us recall some definitions provided in the literature (Boyd & Vandenberghe; Mitrinović et al., 1993a) as follows:
In particular, for any vectors a, b of compatible dimension and any matrix R ≻ 0, the following Young-type bound holds:
\[
2a^{T}b \;\le\; a^{T}Ra + b^{T}R^{-1}b. \tag{A.9}
\]
Further discussion with the application of this inequality will be presented in the following sections.
A.1.4 Polytope Partition
"Polytopic" extensively used in robust analysis and control strategy for the LPV system in recent decades. The application of the quadratic Lyapunov function is straightforward for stabilization conditions but results in conservatism. Since then, many methods proposed to improve the performance of LPV systems. The representation of the system as a polytopic model using all the vertices of the convex hull covering the parameter domain directly yields a multi-LTIs formulation. However, in some case, it might cause a conservativeness and numerical burden. The partitioning illustrated in The representation of the LPV system as the uncertain polytope is proposed by [START_REF] Gonçalves | New Approach to Robust$ cal D$-Stability Analysis of Linear Time-Invariant Systems With Polytope-Bounded Uncertainty[END_REF] known as Polytope-Bounded Uncertainty method. It promises to reduce computational load but increase complexity with parametric uncertainties. An application can be mentioned, see for example, a relaxing result of ℋ2/ℋ∞ performance condition presented in the articles (H. [START_REF] Zhang | Robust gain-scheduling energy-to-peak control of vehicle lateral dynamics stabilisation[END_REF]Zhang et al., , 2015)).
A.1.4.1 Switching Multiple-Affine
Now, let us discuss a generalization that converts the coordinate parameter dependence to the fundamental coordinate system of the convex domain. Given the compact parameter set ρ(t) ∈ U_ρ, there exists a linear mapping that transforms the basic linear/bilinear parameter dependence (with matrices A_ij for i ≠ j and B_ij for i = j) into the combinatorial convex form over the vertices of the parameter hyper-rectangle, so that the representation (A.12) holds.
Equation (A.12) presents a generalization of the combinatorial convex form over the defined parameter hyper-rectangle set. Furthermore, the combinatorial formulation introduced in Eq. (A.10) is convenient for the expansion of derivatives, which allows the relaxation methods of (Guerra et al.; Sala et al.) to be deployed more efficiently. On the other hand, the piecewise affine parameter-dependent (PAPD) approaches, introduced as multi-switch partitioned parameter spaces, see, e.g., (Apkarian et al.; Lim et al.), can provide less conservative stability conditions. The parametric switch subsystem is illustrated in Figure A-4 (Lim et al.).
For simplicity, we assume the affine system depends on the parameter vector ρ(t) = [ρ1(t)  ρ2(t)]^T ∈ U_ρ. The piecewise discretization of the parameters then leads to a switched multi-affine representation. This approach leads to less conservative stability conditions, but the number of LMI conditions that must be checked is overwhelming: for an LPV system depending on p parameters, with each parameter partitioned into N_i subspaces, the number of conditions to be checked grows combinatorially with the products of the N_i.
A.2 Linear Matrix Inequality
In recent decades, a wide variety of problems in systems and control theory have been related to convex optimization problems expressed as linear matrix inequalities (LMIs). For a more detailed history of the linear matrix inequality, readers may refer to (Boyd et al.). The resulting optimization problems can be solved numerically very efficiently using interior-point methods (also referred to as barrier methods), which is very convenient compared to seeking an analytic or frequency-domain solution. Thus, many constraints, including convex quadratic inequalities, matrix norm inequalities, and Lyapunov inequalities, can be expressed as convex optimization problems. Readers who wish to study interior-point methods in depth may refer to the monograph (Nesterov & Nemirovskii).
A.2.1 Linear Programming
The linear programming problem, i.e., the minimization of a linear function subject to linear constraints, can be represented in the inequality form: minimize c^T x subject to Ax ≤ b. The logarithmic barrier reformulation introduces a positive scalar μ called the barrier parameter. For a sequence of monotonically decreasing and sufficiently small values of μ, there exists an associated sequence x(μ), called the barrier trajectory (or central path), that converges to the feasible optimal solution from the strict interior of the feasible region (Wright) (cf. (A.35)). For a specified value μ_k, the corresponding values x(μ_k) and z(μ_k) are then determined.
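A minimal numerical sketch of the barrier idea (illustrative only; the LP data and the use of scipy are assumptions, not part of the original text): the smooth subproblem min c^T x - μ Σ log(b_i - a_i^T x) is solved for a decreasing sequence of μ, and the iterates trace the central path.

```python
# Log-barrier sketch for the inequality-form LP: minimize c'x s.t. Ax <= b.
import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 2.0])                      # hypothetical data
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])                 # x >= 0, x1 + x2 <= 4

def barrier_obj(x, mu):
    slack = b - A @ x
    if np.any(slack <= 0):
        return np.inf                          # outside the strict interior
    return c @ x - mu * np.sum(np.log(slack))

x = np.array([1.0, 1.0])                       # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 0.001]:             # decreasing barrier parameter
    res = minimize(barrier_obj, x, args=(mu,), method="Nelder-Mead")
    x = res.x                                  # point on (near) the central path
    print(f"mu = {mu:7.3f}   x = {x}")
```

As μ decreases, the iterates approach the LP optimum at the boundary of the feasible set, which is the behavior described above for the central path.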
A.2.2 The Semidefinite Programming Problem
Semidefinite programming may be viewed as a generalization of linear programming, and such problems are not much harder to solve. Semidefinite programming unifies several standard problems and is found in many applications of control system theory and combinatorial optimization; for details, see the survey (Vandenberghe & Boyd). Presenting the SDP in this form is quite similar to the standard-form LP problem. In (Nesterov & Nemirovskii), the authors have shown that the function -log det X is a self-concordant barrier function for the semidefinite programming problem, which can therefore be solved in polynomial time via a sequence of decreasing barrier parameters.
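As another hedged sketch (arbitrary data, cvxpy assumed; not the thesis' own code), a standard-form SDP with a Lyapunov-type LMI constraint can be posed directly and handed to an SDP solver.

```python
# Semidefinite programming sketch: minimize trace(C X) subject to X >= I and a Lyapunov LMI.
import numpy as np
import cvxpy as cp

n = 3
A = np.array([[-1.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.0],
              [ 0.0,  0.0, -1.5]])            # hypothetical stable matrix
C = np.eye(n)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> np.eye(n),                 # X >= I (normalization)
               A.T @ X + X @ A << -np.eye(n)]  # Lyapunov-type LMI
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, prob.value)
```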
Moreover, bounding techniques of this type (cf. the Young-type inequality (A.9)) are also encountered in developments on the stability of time-delay systems. In this respect, the generalization known as the cross-term bounding technique (He et al.; Moon et al.; Park; M. Wu et al., 2004) does not include the scaling scalar but rather free weighting matrices (FWM). As analyzed by (Briat, 2015a; Han, 2005a), these methods do not seem to yield satisfactory results.
A.5 Finsler's Lemma
Yakubovich's S-lemma yields, as a consequence, a quadratic result known as the non-strict Finsler's lemma. In its original form it is widely used in optimization and control theory. A comprehensive state-of-the-art review of the S-lemma and its applications was given by Pólik and Terlaky in (Pólik & Terlaky). Following the well-known concepts of optimization, relaxation methods, and functional analysis in the literature (Briat; Cimprič; Pólik & Terlaky; Skelton et al.; Tuy et al.), the lemma is briefly stated in a general form as follows.
Lemma A.5.1 (Finsler's Lemma). Further discussions and demonstrations of this lemma can be found in the aforementioned documents.
Generalization of Finsler's Lemma
The following statement, known as the Projection Lemma (or also as the Elimination Lemma), is particularly relevant to robust control and linear matrix inequalities. Lemma A.5.2. Projection Lemma (Gahinet & Apkarian, 1994).
B.1 Bounded Real Lemma -ℋ∞ performance
In the presence of external disturbances, a so-called energy-to-energy performance criterion is involved in the stability analysis of the LPV time-delay system (B.2), which must meet the following requirements: § for w(t) ≡ 0, the LPV system (B.2) is asymptotically stable; § for w(t) ≠ 0, the L2-norm of the output of (B.2) is bounded with γ > 0:
\[
\|z(t)\|_{L_{2}} \;\le\; \gamma\,\|w(t)\|_{L_{2}}. \tag{B.3}
\]
This margin of robustness and performance is typically required in the design specification of systems affected by external disturbances and parametric uncertainty. The bounded real lemma (Scherer) is a well-known criterion allowing the computation of the ℋ∞-norm, which coincides with the L2-induced norm, that is, the highest input-output gain for finite-energy signals. Generally speaking, the sensitivity of the regulated output z(t) to the disturbance input w(t) is evaluated by this performance norm. The energy-to-energy index also refers to a level of disturbance rejection. This criterion is usually encountered in design objectives, e.g., robust stability and performance.
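As a hedged illustration of the bounded real lemma in practice (arbitrary LTI system matrices, cvxpy assumed; this is not the thesis' own code), γ can be minimized subject to the standard BRL LMI to obtain an upper bound on the ℋ∞-norm.

```python
# H-infinity (L2-gain) bound via the bounded real lemma, minimizing gamma.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -0.8]])      # hypothetical data
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

n, m, p = 2, 1, 1
P = cp.Variable((n, n), symmetric=True)
gam = cp.Variable(nonneg=True)

M = cp.bmat([[A.T @ P + P @ A, P @ B,            C.T],
             [B.T @ P,        -gam * np.eye(m),  D.T],
             [C,               D,               -gam * np.eye(p)]])
constraints = [P >> 1e-6 * np.eye(n), M << -1e-9 * np.eye(n + m + p)]
prob = cp.Problem(cp.Minimize(gam), constraints)
prob.solve(solver=cp.SCS)
print("gamma (upper bound on the H-infinity norm):", gam.value)
```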
B.2 Block-Structured Uncertainty -Scaled Bounded Real Lemma
In the representation of an LPV system in linear fractional transformation (LFT) form, the original system is separated into two interconnected subsystems (Figure B-1), where the stability analysis of the nominal system affected by dynamical uncertainties, and the parametric dynamics, is reformulated in the input-output framework. The characterized LMI stability conditions are derived from the Scaled Small-Gain theorem (Doyle; Dullerud; Feron; Packard).
Let us introduce an LPV plant with dynamical uncertainty governed by state-space equations of the generalized form, where the block-diagonal uncertainty set Δ_i is a subset of the unit ball in the L2 space. The commuting structure of the uncertainty is devoted to reducing the conservatism of the small-gain condition (Doyle; Dullerud; Zhou). Following the arguments of (Apkarian & Gahinet, 1995; Packard), this formulation offers an extra degree of freedom and provides more relaxation than the small-gain theorem. The necessary and sufficient conditions delivered for LTI systems can be found in (Packard) and, for LPV systems, in (Apkarian & Gahinet, 1995; Briat, 2015a). Following the linear fractional transformation, the uncertain structures appear detached from the plant. The stability problem of an LPV system based on this approach does not require much essential information about the uncertain parameters, except for their range of variation. It should be noted that the LMI (B.12) is a quadratic stability condition, in which P is a parameter-independent decision matrix, so this condition is in some circumstances a strict (conservative) condition.
B.3 Full-Block S-Procedure
Finsler's lemma has been widely recognized in control system theory through the well-known Projection Lemma introduced in the early 1990s (Gahinet; Gahinet & Apkarian, 1994; Iwasaki; Skelton). Another generalization can be mentioned, the S-procedure (or S-lemma), which provides an efficient, polynomial-time, powerful approach to system analysis and synthesis via convex optimization problems. The analytic solutions can be losslessly reformulated as the feasibility of SDPs and deliver an alternative approach to LMI-based stability and control synthesis. A comprehensive state-of-the-art review is given in the monographs (Boyd; Pólik & Terlaky). Now, we would like to discuss the S-Variable LMI-based method (Ebihara et al.), which uses a generalization of Finsler's lemma to carry over the unconventional conditions of the ℋ∞-performance. From the expression of the system, the condition reverts to the ℋ∞-performance condition (B.5), so it provides a more general stability formulation. The main advantage is the decoupling between the decision variable and the dynamic system (P and the matrices A, B_w, respectively). This transformation is the key to the conservatism reduction of the new robust stability analysis. The literature (Ebihara et al.) shows a relaxation of the robust stabilization condition compared with the traditional LMI-based Lyapunov method.
It is worth noting that this technique is also used for the stability analysis of time-delay systems, where it is known as the free-weighting matrix approach. In Section 5.2, this decoupling technique is applied essentially to handle the coupling between the Lyapunov-Krasovskii matrices and the system dynamics in the stability analysis of the LPV time-delay system.
The full-block S-procedure provides a general result and comprehends the (scaled) small-gain and ℋ∞-norm results of the previous sections. The term full-block comes from the fact that the scalings involved are general matrices, as opposed to the block-diagonal scalings in (B.9), (B.12). Besides, a variety of methods has been developed within the area of robust control (including all the previous results) which can be reformulated to fall within the framework of integral quadratic constraints (IQCs) (Megretski; Pfifer). This important mathematical object can implicitly characterize operators in an input/output framework, e.g., the small-gain theorem, the bounded real lemma, the full-block S-procedure, etc.
Definition B.3.1. (Scherer) The latter condition can be recognized as a more general form in which both conditions (B.12) and (B.16) are included. Therefore, it is expected to cover a wider class of systems with reduced conservatism.
B.4 Pole-Placement LMI regions
Following the performance constraints of ℋ∞ synthesis, (Chilali et al.; Chilali & Gahinet, 1996) proposed an effective pole placement method based on the LMI formulation of the Lyapunov stability condition. The objective is to look for pole clustering in suitable stability sub-regions consistent with good behavior, such as reasonable controller dynamics, good damping, fast decay, etc. This LMI-based representation of D-stability regions is characterized by relocating the poles in a sub-region D of the complex plane (half-planes, disks, sectors, vertical/horizontal strips, and any intersection of them, as illustrated in Figure B-2 (Chilali & Gahinet, 1996)). Since condition (B.24) is parameter-dependent, according to the analysis in (Chilali et al.) it is recommended to check the overall characteristic function for each region D_i in order to be more efficient and less conservative when testing robust D-stability. A small numerical sketch of a typical D-stability test is given below.
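A minimal sketch (arbitrary data, cvxpy assumed; not the thesis' own example) of one such test: clustering the poles in the shifted half-plane Re(s) < -α (a decay-rate constraint) corresponds to the LMI A^T P + P A + 2αP ≺ 0 with P ≻ 0.

```python
# D-stability check for the LMI region Re(s) < -alpha (decay-rate constraint).
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-3.0, -2.0]])     # hypothetical closed-loop matrix
alpha = 0.4                                  # required decay rate

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> 1e-6 * np.eye(2),
               A.T @ P + P @ A + 2 * alpha * P << -1e-9 * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("poles lie in Re(s) < -alpha:", prob.status == "optimal")
print("eigenvalues of A:", np.linalg.eigvals(A))
```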
B.5 ℋ2 Performance
The ℋ2-norm is more effective in dealing with stochastic characteristics such as measurement noise and random disturbances (tremors, wind loads, wind gusts, surface profiles, turbulence, etc.). Rather than bounding the output energy, it may be desirable to keep the ℋ2 cost bounded whenever the corresponding LMI conditions are satisfied. These conditions restrict the impulse response of the system to be smaller than some admissible values conforming to the design specification. Indeed, this implies a less conservative condition.
B.6 Generalized ℋ2 Performance
Multi-objective design typically relates to conflicting requirements, e.g., satisfying time-domain hard constraints, capturing the peak amplitude of the output signal over all unit-energy inputs, etc. The energy-to-peak strategy is therefore a reasonable choice to relax the conservatism of the stability conditions involved in robust control design. Similarly, supposing D = 0, the so-called generalized ℋ2-norm is defined for the LPV system (B.2) as the L2-L∞ induced norm. This modified condition keeps the peak amplitude of the output z(t) bounded by an allowable value corresponding to the design specifications, e.g., to guarantee safety constraints or to avoid actuator saturation (saving energy). The generalized ℋ2-norm is used to ensure robust stability and appears less conservative than the ℋ∞-norm (since it bounds the peak amplitude of the output over input disturbances, white noise or impulses). However, the specification of time-domain hard constraints is sometimes only in the sense of probability.
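For reference, one common LMI characterization of the energy-to-peak (L2-L∞) gain of an LTI system with D = 0 is the following (a standard sketch; the scaling of γ varies between references, and this is not necessarily the exact form used in the thesis):
\[
\exists\,P\succ 0:\quad A P + P A^{T} + B B^{T} \prec 0,\qquad C P C^{T} \prec \gamma^{2} I
\;\;\Longrightarrow\;\; \sup_{t\ge 0}\|z(t)\|_{2} \;<\; \gamma\,\|w\|_{L_{2}} .
\]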
In order to better estimate the lower bound on the quadratic integral term derived from the stability conditions, the application of the following inequalities plays a core role in the developments:
\[
\begin{bmatrix} X & Y \\ \star & Z \end{bmatrix} \succeq 0, \tag{C.2}
\]
then the inequality holds. By choosing the matrix M = 0, the above condition returns to its basic form.
C.1.1 Jensen's Inequality and Extensions Approach
Several well-known inequalities are derived from the original Jensen's inequality, applied to some convex function or to variations characterizing convexity. Among these, the integral version of Jensen's inequality has been frequently employed in time-delay control theory in recent decades.
Lemma C.1.3. (Mitrinović et al., 1993a) entails the original form of Jensen's inequality, which can be found in the time-delay literature (Briat; Fridman; Gu et al.) and the references therein. Improvements of Jensen's inequality can also be found in the mathematical literature (Fink; Mitrinović et al., 1993c). Thereby, a judicious generalization of this inequality is provided, including the finite n-segment partition of the specified interval [a, b]. The convexity argument is proposed to reduce the conservatism of the Jensen integral inequality (C.4).
In addition, higher monotonicity results derive from the use of Chebyshev's inequality. These concern essentially functions for which several derivatives are also convex. In (Fink), the authors have investigated the "best possible" form of Jensen's inequality for convex, end-positive functions. The article also covers the case of n-convex functions, which is widely applied in two development directions: discrete interval integrals (Gu et al.; Han) and high-order convex functions (Park et al., 2015; Tian et al.; Zhao et al.).
Lemma C.1.4. (Fink) For a, b ∈ I and an integrable function on [a, b], the generalized Jensen inequality stated therein holds. Proof. The demonstration is referred to (Fink; Tian et al.).
Lemma C.1.5. (Park et al., 2015) Given an integrable function on the considered interval, inequality (C.6) holds. Proof. The proof is given in Appendix 2.2 or referred to (Park et al., 2015). Inequality (C.6) is essentially a generalization of Jensen's inequality, from which the following consequences can be deduced.
Corollary C.1.1. Jensen's Inequality (Gu et al.) For any continuous integrable function, the basic integral inequality holds (the standard form is recalled below). Another proof of Corollary C.1.4 can be found in (Van Hien et al.). Recently, a general case of the double integral inequalities deployed for the 2nd-order formulation (similar to Corollary C.1.4) and the 3rd-order formulation provided in (Zhao et al.) have been attracting attention.
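For reference, the basic Jensen integral inequality commonly quoted in the time-delay literature reads as follows (standard statement, possibly differing from the thesis' exact notation): for R ≻ 0 and h > 0,
\[
-\int_{t-h}^{t}\dot x^{T}(s)\,R\,\dot x(s)\,ds
\;\le\;
-\frac{1}{h}\left(\int_{t-h}^{t}\dot x(s)\,ds\right)^{T} R \left(\int_{t-h}^{t}\dot x(s)\,ds\right),
\]
where the integral on the right-hand side equals x(t) - x(t-h).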
Lemma C.1.6. (Zhao et al.) This method significantly enhances the stability analysis of time-delay systems, yielding the least conservative results by minimizing the gap in the matrix inequality constraints.
C.2 Model Transformation
There is considerable research using model transformation methods for analyzing delay-dependent stability conditions with the Lyapunov-Krasovskii functional, as outlined in (Briat; Fridman; Fridman & Shaked, 2003). It can be recapitulated in two approaches: explicit model transformations and implicit model transformations (Gu et al.). The model transformation methods and the improved inequalities were proposed to reduce the conservatism of delay-dependent conditions for the LTI delay system. According to the analyses in (Briat; Fridman; Fridman & Shaked, 2003; Gu et al.), these methods have nonetheless not yet been thoroughly improved. Let us consider the features of the model transformations.
C.2.1 Model transformation I -Newton-Leibniz Model Transformation
The delayed terms x(t-h) in the LTI delay systems are substituted to yield the fixed model transformation, as sketched below:
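The standard fixed (Newton-Leibniz) transformation has the following form (a sketch of the usual statement, which may differ in notation from the original display):
\[
x(t-h) = x(t) - \int_{t-h}^{t}\dot x(s)\,ds
\quad\Longrightarrow\quad
\dot x(t) = (A + A_{h})\,x(t) - A_{h}\int_{t-h}^{t}\dot x(s)\,ds .
\]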
The following Lyapunov-Krasovskii functional is used to determine a delay-dependent stability condition for systems (C.12) and (C.13). The stability analysis of the original system is replaced by that of the transformed one, but the delayed terms x(t-h) persist. The inconsistent elimination of the integral terms and delay terms in this explicit model transformation leads to conservativeness. On the other hand, the use of the parametrized model transformation method produces a new matrix parameter in the cross-term, where the compensation of the additional dynamics is reformulated as a variable matrix optimization problem. However, the basic bounding inequality applied to the cross-terms results in conservatism, whereas the application of Park's inequality to the PMT stability condition, as analyzed in (Fridman & Shaked, 2003), shows no further improvement in conditional relaxation.
C.2.2 Model transformation II -Neutral type Transformation
As an alternative to system (C.12), we have the following neutral-type LTI delay system. The aim of the above model transformations is to convert the integral term into a functional differential equation so as to produce both cross terms and quadratic integral terms in the derivative of a Lyapunov-Krasovskii functional along the trajectories of the systems. However, the following disadvantages have led to their supersession by other methods.
1 - If the original system is recovered in (C.17), then the effect of the transformed model is lost.
2 - Both transformation methods introduce additional dynamics into the system, so the transformed system is not equivalent to the original one.
3 - Applying the bounding inequality to the cross-term (C.17) entails conservatism, since the right-hand sides of (C.18) and (C.19) are always positive.
4 - Eliminating the integral expression in the derivative of the LKF also loses important information in the negative-definite stability conditions.
As pointed out by (Briat; Fridman & Shaked, 2003; Gu et al.), the use of the Newton-Leibniz model transformation might lead to more conservative stability conditions and lose generality for the reasons outlined above. Improving these results relies either on using less restrictive model transformations (or even no model transformation at all) or on employing more accurate bounding techniques (as given in Lemma C.1.1 and Lemma C.1.2). The application of these inequalities reduces the conservatism of the LMI stability conditions and can be incorporated with other model transformations. However, when the computational complexity of the conditions increases (parameter-dependent Lyapunov functions PDLF, optimization of the attraction domain in saturation control, robust stability, etc.), the generality of the inequality is lost.
C.2.3 Model transformation III -Descriptor (Fridman) Transformation
The following descriptor model transformation was proposed by (Fridman). Even though the descriptor method relies on a non-conservative model transformation and produces interesting results, it still has some limitations (a sketch of the standard descriptor form is recalled after the list below).
1 - It is still based on the cross-term bounding inequality, which results in conservatism.
2 - The inconsistent substitution of the delay term in the system and in the stability condition, together with the added zero-equivalent term with weighting matrices, leads to the need to optimize the fixed weighting matrices.
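For completeness, the descriptor form usually associated with this transformation can be sketched as follows (standard statement, not necessarily the thesis' exact equations):
\[
\dot x(t) = y(t), \qquad
0 = -\,y(t) + (A + A_{h})\,x(t) - A_{h}\int_{t-h}^{t} y(s)\,ds ,
\]
with the augmented state \(\bar x(t) = \operatorname{col}(x(t), y(t))\) used in the Lyapunov-Krasovskii functional.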
The use of Moon's inequality (Lemma C.1.2) in combination with this transformation method yields less conservative delay-dependent conditions, which seems useful in stability, stabilization analysis, and control synthesis. Besides, the computational complexity of the above result can be improved by the relaxation method (Briat), which provides a better construction of the variable matrices.
C.2.4 Model transformation IV -Free Weighting Matrix Approach
In the above sections, we have analyzed the transformation methods and the inequalities that enhance the LK-based stability conditions. As mentioned, the distribution of heterogeneous weighting matrices corresponding to the variables x(t), x(t-h), and the integral of ẋ(s) over [t-h, t] (which are related to one another) can interfere with the solution of the stability conditions. Consequently, the Free Weighting Matrix (FWM) method, introduced in (He et al., 2004a; He et al.) and generally called the lifting-variables technique, gives more degrees of freedom in the design of the stability condition. In this method, the fixed weighting matrices are replaced by slack matrices of appropriate dimension through null algebraic matrix equations of the form sketched below. A methodological improvement is found in (Briat; Wu et al.); however, the matrix variables must be determined along with the decision matrices of the stability conditions, including P (system dynamics) and Q, R (delay-dependent). As pointed out in (Briat), after using the computational-complexity reduction method for the FWM inequality, one obtains results similar to the inequality introduced hereafter.
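The null equations referred to above are typically of the following form (a sketch of the standard free-weighting-matrix identity; the exact display in the thesis may differ):
\[
2\left[x^{T}(t)N_{1} + \dot x^{T}(t)N_{2}\right]\left[x(t) - x(t-h) - \int_{t-h}^{t}\dot x(s)\,ds\right] = 0 ,
\]
which holds identically by the Newton-Leibniz formula and is added to the derivative of the LKF with free matrices \(N_{1}, N_{2}\).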
C.3 Input-Output Approach
As discussed, the delay decomposition approach yields less conservative results for stability analysis and controller design. However, the method is effective for systems for which exact knowledge of the delay is available, which is ideal for numerical computation in practical design. The identification or estimation of continuous-time delay phenomena in practice is a tough challenge, see, e.g., (Anguelova; Belkoura; Chen et al., 2015; Ren; Zheng). In this case, the uncertain (approximate) delay method discussed in Section 8.6 of (Gu et al.) appears more suitable for implementing the control system design strategy. Specifically, the time-varying delay that is not accurately known at the time of analysis and design is considered as a dynamical uncertainty of a nominal system. Based on this approach, the stability is formulated in the input-output framework, where the characterized LMI conditions are obtained by the Scaled Small-Gain theorem (Briat; Hmamed) or by supply functions (Briat). The equivalence between the Scaled Small-Gain and Lyapunov-based techniques is discussed in (Boyd; Doyle; Zhang; Zhou) for LTI/LPV systems.
C.3.1 Approximate Delay-Range Approach
The input-output approach is very convenient for analyzing stability based on the representation of the original system as a feedback interconnection with additional inputs and outputs of auxiliary systems. The analysis of uncertain LPV systems via the small-gain theorem has attracted much interest in the literature. Besides, the robust stability and performance problems can be reformulated as LMIs and efficiently solved by convex programming.
The approximate delay value around the nominal value is derived from the uncertain delay-range-dependent approach (Gu et al.), or from the time-varying approximate delay (Briat). Temporarily ignoring the effect of disturbances, the TDS system is transformed into a comparison differential equation. System (C.25) is represented by a generalization of (Gu et al.), where the corresponding operators are defined accordingly; from condition (C.36) an alternative proof may be assigned.
C.3.2 Uncertain Delay-Dependent Approach
Previously, the stability of the comparison system was analyzed at a constant approximate delay within the varying range, an analysis purposely approached by the small-gain theorem. Following this approach, a time-varying approximate delay d(t) was discussed in (Briat), with integration limits that are generalized in the approximate delay-dependent stability and stabilization analysis. Note that x(t) has derivatives across the defined domain; the condition needs to be validated at the critical extreme time d(t) = h(t), which is explained in detail in Appendix D.1. Besides, the delay approximation analyzed in (Briat) is more general than that introduced in (Gu et al.) and enhances the stability condition. However, there is a restriction when d(t) is involved in a system with two dependent delays, which requires more design memory for the observer and controller structures.
In the framework of non-small delay, the upper bound on the gain from w(t) to z(t) given by (Shustin et al.), under the assumption of a zero initial condition, is as follows.
C.3.3 Delay-Scheduled LFT Approach
In the framework of input-output stabilization, the delay operators let the time-varying delay play the role of a parametric uncertainty, where the characterized stability synthesis of the approximate dependent delay can be addressed by the Lyapunov technique or by the small-gain theorem. In addition, similarly to the uncertain-structure framework, time-varying delay values can be cast as a gain-scheduled parameter. Gain scheduling in the analytical robust-stability control framework for LTI/LPV systems is well developed. In contrast, delay-scheduling analysis for delay-dependent LPV systems has not been adequately studied, see, for example (Briat). An LPV plant with linear fractional dependence on the parameter and the delay can be represented as an upper LFT interconnection structure with delay operators and parameter sets (illustrated in Figure B-1). Let us consider the corresponding delay operators (C.47) and (C.48), whose action on the signals is measured in the L2 sense.
The operator mentioned in Equation (C.47) has a singularity at zero, so the delay dependence is limited to an interval [h_min, h_max] that does not contain zero. Besides, operator (C.48) is quite similar to operator (C.42), with a defined interval conforming to the delay space H_0 and a transformed L2-norm encouraging tighter conditions. By using the structural LFT, the stabilizing condition is resolved via the small-gain theorem in (Briat). Henceforward, for the sake of simplicity, the time-dependent arguments "t" and the scheduling parameter "ρ" are omitted in the equations. The condition is then reformulated by using the Schur complement (Boyd et al.).
D.2 The proof of Lemma 5.2.4.
The delay-dependent stability of LPV system (5.1) is ensured if the condition holds along the trajectories of the system. The stability LMIs are derived from the derivative expansion of the Lyapunov-Krasovskii functional (5.51) along the trajectories of the LPV time-delay system (5.50). Firstly, expanding the derivative of V_0(t) and combining it with the L2-norm performance on the controlled output (5.50) entails a condition similar to Lemma 5.2.1. Then, differentiating the simple Lyapunov-Krasovskii functional along the trajectories of the LPV time-delay system (5.50) and integrating it with the L2-norm performance on the controlled output (5.50) implies the customary delay-dependent stability condition.

E.1 Chapter 3 - Quadruple-Tank Process

The nonlinearity of the liquid level in tank-k, k = 1, ..., 4, is described in two regions by a rule-based (fuzzy) system, whose membership functions are given by a generalized bell expression (a sketch is recalled after Table E.1 below). The generalized fitting method is based on the Levenberg-Marquardt algorithm combined with the least-squares method. The details of the fuzzy parameters are given in Table E.1. This assumption reduced the number of membership functions from 8 to 2.

Table E.1. Parameters of the quadruple-tank process:
  a_s1, a_s3   Areas of the outlet in tanks 1, 3 (m^2)           7.1 × 10^-6
  a_s2, a_s4   Areas of the outlet in tanks 2, 4 (m^2)           5.7 × 10^-6
  k_vi         Coefficient of pump-i, i = 1, 2 (ml V^-1 s^-1)    3.33, 3.35
  h_i          The liquid level in tank-i (m)
  γ_i          The value scaling of flow at valve-i
  v_i          The voltage control signal at pump-i (V)
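The membership functions are described as a generalization of the Cauchy distribution (the Bell MF), specified by three parameters {a_j, b_j, c_j}; the expression is presumably the standard generalized bell form (reproduced here as an assumption):
\[
\mu_{j}(x) \;=\; \frac{1}{1+\left|\dfrac{x-c_{j}}{a_{j}}\right|^{2b_{j}}} .
\]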
Remark 1. The time-varying parameters γ_i(t), i = 1, 2, are the flow-rate values at valve-i for the corresponding tanks; they are assumed to be measurable and are transformed into convex combination parameters ξ_k(t) satisfying ξ_k(t) ≥ 0 and Σ_k ξ_k(t) = 1.
The gains of the observer-based controller, derived from the design conditions in Theorem 3.2.1 and solved with the Matlab toolbox YALMIP (Löfberg) and the solver MOSEK (Andersen et al.), are given by:
The fuzzy controller gains:
E.2 Chapter 3 -Lateral Vehicle Dynamic
The accurate estimation of the tire-road lateral (side) force plays an important role in the design of vehicle stability systems. Among the static tire models one can mention the HSRI model (Dugoff), the semi-empirical Pacejka model, and the Kiencke model, among others. The Pacejka model describes the lateral forces, which can exhibit an effective/linear cornering stiffness as well as nonlinear characteristics. In this work, the nonlinear forces are modeled by the TS-fuzzy method as in (Bui Tuan et al.; Dahmani et al.; Dahmani, Pages, et al., 2015; El Hajjaji et al.). The membership functions satisfy the usual convex-sum properties. The vehicle parameters include, among others:
  Lower bound of the longitudinal velocity v_x (m s^-1): 10
  Upper bound of the longitudinal velocity v_x (m s^-1): …
Theorem 3 . 1 . 1 :
311 In presence of parametric uncertainties and disturbances, for the positive scalars 3 ,, then closed-loop system (3.5) is robustly asymptotical stable corre- sponding to energy-to-energy index(3.14), if there exist matrices
Figure 3 - 2 .
32 Figure 3-2. The diagram of quadruple-tank process model.
Figure 3 - 3 .
33 Figure 3-3. The evolution of dynamic states and estimations under disturbance and uncertain dynamics. The time-evolutions of the liquid level in tanks including the measurable states 12 ,, hh the unmeasurable states 34 ,, hh respectively, and its estimations shown in Figure 3-3 that have demonstrated the good performance of the designed observer-based control strategy.The minimization of disturbance effects on the system is enhanced by using the CCL algorithm. As we can see in Figure3-3 and Figure3-4 (left), the error estimate of liquid level in tanks 1 st , 2 nd is smaller than 0.6 mm, corresponding to level in tank 3 rd being less than 3 mm and level tank 4 is less than 5 mm. And the stabilized control signal of the observer-based controller design for the quadruple-tank process system gives in. With the limited voltage of each pump is 12 (V), the maximum flow of each pump is 40 (ml. s -1 ) corresponding to 2.4 (l. mn -1 ). The illustrative simulation results show the high performances of the proposed design technique.
Figure 3 - 4 .
34 Figure 3-4. The observer performance -error estimations (left); and the DC-Motor pumps signal control (right).
Figure A-5). The feasible solutions are obtained at the permissible errors of iterative Algorithm 3.1 CCL1. Then, taking advantage of the new sought scalars i to locally optimize (using iterative Algorithm 3.2 CCL2) that affords a better minimum value. For found values , i the best relaxations are attained for stabili- zation conditions corresponding to each design case. Besides, the conservation drops significantly by considering the slack variables corresponding from the 2 nd catalog to the 4 th catalog. It can realize that the stabilization conditions are infeasible with fixing constant values , i however the CCL algorithm provides the feasible solutions. Moreover, CCL1 (Algorithm 3.1 -global optimization) always returns to a higher rejection level than CCL2 (Algorithm 3.2 -local optimization). But algorithm CCL2 achieves good results inheriting from the set of optimal values , i de- rived from algorithm CCL1. Without these data, algorithm CCL2 cannot converge to the local optimal region.
im K and using a PDLF associated with ellipsoidal domains(2.102) to analyze the stability of closed-loop system(2.111). Then, the stabilization condition combined with the ℋ∞-norm condition and the GSC condition (Corollary 2.3.1) are expressed as follows. Theorem 4.1.1: In the presence of disturbance, for the positive scalars 0 ,,,, s u if ex- ists continuously differentiable matrices function :,
,
From above analysis, the trade-offs between the performance requirement , the esti- mation of RAS (ellipsoidal domain) -, and the set of the admissible initial conditions - 0 is a mandatory condition. The selection of appropriate constants combined with opti- mization of the remaining variables depends on the design purposes. For instance, preselecting 0 , then minimizing such that conditions (4.1), (4
Example 4. 1 . 1 :
11 Consider a nonlinear open-loop unstable system introduced by (A. T.[START_REF] Nguyen | Anti-windup based dynamic output feedback controller design with performance consideration for constrained Takagi-Sugeno systems[END_REF] is presented by the following equations:
Figure 4 - 1 .
41 Figure 4-1. The estimate of feasibility regions with 25, 20,0, 25. ab By setting 1, the feasibility of T-S fuzzy system (4.17) is checked using the proposed relaxation (polytope) Corollary 4.1.1 for all 1,196 points of a 4626 rectangle uniformly grid over the space 25, 20,0, 25 ab with assumption 12 0. XX ± The feasi-
Figure 4 - 2 .
42 Figure 4-2. The improvement of the controller performance with LMI D-Region, via Polytopic quadratic stability condition. It could realize that the magnitude of the first and second row of the gains solved by the
Theorem 4. 2 . 1 :
21 For the positive scalars 0 ,,,,, s u if there exist continuously matrices function :,:,:,:, nmpmnpp pppp XYZW UUUU SRRR and a diagonal matrix function :, m p T U
Figure 4 - 3 .
43 Figure 4-3. a. The simulated constrained responses of the state-feedback SF controller (colored solid line), the static-output feedback SOF controller (colored dash-dotted line), the observer-based feedback OBF controller (colored dashed line), and the dynamic output feedback DOF controller (colored dotted line). b. The time-evolution of external disturbance.
Figure 4 - 4 .
44 Figure 4-4. The stabilized states of the closed-loop systems are regulated, respectively, by the gain-scheduled SF controller (colored solid line), SOF controller (colored dash-dotted line), OBF controller (colored dashed line), DOF controller (colored dotted line) conforming to the simulated parameter (green solid line in the last frame).
STABILIZATION SYNTHESIS FOR LPV/QUASI-LPV SYSTEMS WITH ACTUATORS SATURATION
a-b. The time-evolution of state dynamics. c. The simulated constrained responses of the SF controller Theorem 4.1.1, and the nominal controller.
Figure 4 - 5 .
45 Figure 4-5. The comparison of the closed-loop systems regulated, respectively, by the saturated gain-scheduled SF controller (colored solid line), and nominal state feedback controller (colored dotted line). The time-varying parameters 1,1, 1, tt & and the saturation limit sets on actuator 5. utu Similar to the discussed optimization methods, by solving Theorem 4.1.1, Theorem 4.2.1, and Theorem 4.4.1 gridded over 121
compare the design controller with GSC condition with the corresponding nominal controller. The control systems are set up for simulation of LPV system (4.82) with bounded actuators. Given in Figure4-5 and Figure4-6 are the comparisons of state-feedback and static output feedback controllers. The simulations start at the same initial condition 0100 0 T x regardless of the effect of disturbances.
a-b. The time-evolution of state dynamics. c. The simulated constrained responses of the SOF controller Theorem 4.2.1, and the nominal controller.
Figure 4 - 6 .
46 Figure 4-6. The comparison of the closed-loop systems regulated, respectively, by the saturated gain-scheduled SOF controller (colored solid line), and nominal static output feedback controller (colored dash-dotted line).
STABILIZATION SYNTHESIS FOR LPV/QUASI-LPV SYSTEMS WITH ACTUATORS SATURATION
3. a. Destabilize trajectories governed by a nominal SOF controller. b. Stabilize trajectories regulated by design saturated SOF controller.
Figure 4 - 7 .
47 Figure 4-7. Region of stability. Trajectories of the closed-loop systems respond from the same initial conditions during 10 seconds of simulation.
conditions for LPV saturation systems analyzed with a quadratic Lyapunov function compared to a parameter-dependent Lyapunov functional.
Jensen-based inequality such as Wirtinger-based inequality (Appendix C.1.1 Corollary C1.2) and auxiliary functions show significantly the relaxation of the stability conditions. By substituting the function xs & forws in these corollaries, that yields the following results. conditions (5.6), (5.7), and (5.9) is inferred directly from Appendix C.1.1 Corollary C1.1-4, and extended conditions (5.8), (5.10) can consult at (Y.[START_REF] Tian | A new multiple integral inequality and its application to stability analysis of time-delay systems[END_REF][START_REF] Zhao | A new double integral inequality and application to stability test for time-delay systems[END_REF]. It should note that these methods provide better estimates of the lower bound of the expression
is applied to derive a better estimate of the lower bound of expression derivative of the function 3 .
a. The chord of the parabola expresses convex inequality. b. The integral region of the parabola is in the interval. c. The integral domain fragmentation.
Figure 5 - 1 .
51 Figure 5-1. Graph of a convex function.
and N is the number of divisions in the interval, i.e., ,0. h This method significantly enhances the stability analysis of time-delay systems with the continuous and piecewise Lyapunov matrices. Then, the integral term t T th xsRxsds && involving in the derivative of function 3 Vt bounded by the Jensen-based inequalities 5.1.3. Delay-Dependent Stability -Input-Output Approach 117
complement yields to (5.43).
W
the scaling sets and uncertain norm-bounded operator 0 .
Figure 5 - 2 .
52 Figure 5-2. The evolutions of quasi-LPV time-delay system (5.68) with the slowvarying delay 20.4917, 0.1. htht &
understanding of dt versus nominal delay value , ht we have the follow- ing controller design strategies: as a memoryless controller. § If , dtht the controller refers to as an exact-memory controller. § If ||, m dtht the controller labels to as a -memory-resilient controller.
1 :
1 -loop system into the relaxed LMI condition (5.25) in section 5.2.1.2, with the dead-zone nonlinearity related in the GSC condition treated as in Chapter 4. Then, the memoryless state feedback controller attains by solving the following delaydependent stability conditions. Theorem 6.2.For time-varying delay 0 , ht H parameter ,, p tt & UU posi- tive scalars ,,, ij u and presence of the L2-bound disturbance, if there exist continu- ously differentiable matrices function 1 :,,,:,
W
If the disturbance is not taken into account, we can choose 1.
the certain allowable level of the initial conditions (it should note that in the cases 11 , opt then we adjust the bounds on the external disturbances). § Preselect , then minimizing
6. 2
2 .4.1. Memory-resilient saturated controller synthesis 6.2.4. Example 153
Figure
Figure 6-1. The responses of the bounded controllers correspond to saturation limit 5, u with 1,0.9. h
hA
so the variable substitutions are not much of a problem. However, the uncertain delay value typically leads to incompatibility between the
3 . 1 .
31 What lies in the feedback signal of the deadzone function u considered the input of the controller system (6.15). And dynamic gain c E plays the role of the saturated compensation and enhances the performance of 6
perform in Figure 6-2.
Figure 6 - 3 .
63 Figure 6-3. The responses of the stabilizing gain-scheduling controllers solved by Theorem 6.2.2, and Theorem 6.3.1 corresponding to saturation limit 5, u with 1,0.5, h and 2.5.
a. DoA -Theorem 6.2.2. b. Ellipsoidal sets -Theorem 6.2.2. c. DoA -Theorem 6.3.1. d. Ellipsoidal sets -Theorem 6.3.1.
Figure 6 - 4 .
64 Figure 6-4. Example 6.3.1 -designed controllers. The estimates of region of stability using sector nonlinearities approaches with different criteria. Asymptotically stabilized (green solid-lines) and destabilized trajectories (brown dotted-lines) of the closed-loop systems from the initial conditions (o).
As a result, minimizing the estimation of the set of the admissible initial conditions leads N to be a singular matrix. To avoid this, based on the relationship YX ,T n NMI we propose two solutions: § Choose a small value of such that the stability conditions of Theorem 6.3.1 to have a solution, then use this matrix N to solve the optimal Problem 6.3.1. § Use the optimal value of opt to calculate the upper bound of 1 0 , then indirectly ob- tains the boundary of the DoA estimation domain
4
Figure A- 2 .Example A. 1 . 1 .
211 Figure A-2. Illustrates some simple convex and nonconvex sets 2 . R Example A.1.1. In the plane px (Figure A-2.a) the smallest convex hull containing three points 123 , ,, xxxpx is the triangle domain 123 , xxxpx then as defined in[START_REF] Boyd | Convex Optimization[END_REF]) the convex set is the set of all convex combinations of points in :
Figure A-3 significantly decreases the vertices of the polytope and results in the less conservative condition.
Figure A- 3 .
3 Figure A-3. Two steps of three possible subdivisions of a triangle (Gonçalves et al., 2006). (a) Division by two, or bisection. (b) Division by three. (c) Division by four, or edgewise subdivision.
the transient binary switching (A.12) to converts the LPV system to the following switched LTI systems:
Figure A-4. Visualization PALPV model discretization[START_REF] Lim | Parameter-Varying Systems[END_REF].
and real symmetric matrices .
system model. b. Block-Structured LPV.
Figure B- 1 .
1 Figure B-1. The interconnection structure.
Figure B- 2 .
2 Figure B-2.Pole-placement of LMI regions(Chilali & Gahinet, 1996).
and only if there exists an energy-to-peak performance index 0
Lemma C. 1 . 1 .
11 [START_REF] Park | A delay-dependent stability criterion for systems with uncertain time-invariant delays[END_REF] Given vector functions ,:. n ab a¡ Then above condition returns to its basic form.Lemma C.1.2.[START_REF] Moon | Delay-dependent robust stabilization of uncertain state-delayed systems[END_REF] Assume that : if the following condition is satisfied:
1.3. Auxiliary-Function-Based Inequality(Park et al., 2015) For a positive matrix n R R and any continuous integrable function :
Figure C- 1 .
1 Figure C-1. Graph of a convex function.
1 .
1 to be directly suppressed by model transformation. By imposed the bounding cross-term inequality we have: Model transformation given in (C.13) has the same purpose as the fixed model transformation (C.12). Where the term T hxRx && produced in derivative stabil-
constraint of Newton-Leibniz formula and associate with information on maximizing the upper bound on delay value.
The distributed delay is now considered as an input disturbance. The stability analysis for system (C.25) would then refer to the system ẋ(t) = A x(t) + A_h x(t − h(t)) (C.26), but without the initial condition there is no guarantee of equivalence. Using the internal topology with an input–output structure, these operators enjoy the following immediate property: the operator ∇ is L2 input–output stable and satisfies the scaled small-gain condition.
Proof. Consider a set of block-diagonal matrices of appropriate dimensions (C.29) applied to the left side of (C.32), assuming the zero initial condition and recalling the bound on the uncertain delay h_a(t) ∈ [h_a,min, h_a,max]. By changing the order of integration in the double integrals, as illustrated in Figure C-2.a, and with the assumption of the zero initial condition, we obtain the required bound; the limits of integration over the domain of the double integrals for the second operator are described in Figure C-2.b.
Figure C-2. Changing the order of integration of double integrals.
(Briat) delivers an operator representing the input–output interconnection and implements it in both LK and LFT stabilization approaches for delay-dependent LPV systems. The operator ∇₀ satisfying the scaled small-gain (SSG) condition is addressed in the same way as in the proposition: the interconnected system is well defined and input–output stable if the corresponding Integral Quadratic Constraints (IQC) are satisfied. These operators enjoy the property that the ∇ᵢ are L2 input–output stable and satisfy the bounded small-gain constraint.
…along the trajectories of the system. In the above condition, ẋ(t) is substituted by the original dynamic system, the derivative of the time-varying delay is bounded, and the derivative of the parameter-dependent matrix P(ρ(t)) is analyzed as in Chapter 2, while the integral of the positive real function is bounded by Jensen's inequality as follows. To verify the well-posedness of inequality (D.2) at the extreme point h(t) = 0, it is necessary to show that the domain of definition includes the point 0, i.e., that (D.2) is defined for all x(t) and h(t) satisfying the initial assumption.
In the presence of external perturbations, combining the performance constraint (B.3) with the PDLKF stability condition (D.1) results in the corresponding stability condition, which, after using the Schur complement and rearranging rows and columns, yields (5.41). Q.E.D.
Case 2 (Uncertain delay): the study involves the situation where the two delays d_j(t) and h_j(t) are different. Consequently, by expanding conditions (D.10) and (D.11), we obtain the following. We use operator (C.42) and interconnection (C.43) to express the relation between x_h and x_d, where the input perturbed delay w₀ is bounded by the IQC (C.44). Accordingly, a coordinate transformation is introduced, involving the scaling commuting set D and the uncertain bounded operator ∇₀. Applying the scaled bounded real lemma for the uncertainty and substituting (D.15) into condition (D.14) leads to the parameter-dependent LMI condition; a Schur complement then yields the PDLMI condition.
The membership functions f(h_k(t)) of the liquid level in tank k, k = 1,…,4, are described over two regions by the following rule-based system:
…a generalization of the Cauchy distribution (also known as the generalized bell MF), which is specified by three parameters {a_j, b_j, c_j}: μ_j(x) = 1 / (1 + |(x − c_j)/a_j|^(2 b_j)).
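As a minimal illustration of this membership function, the following Python sketch evaluates the generalized bell MF; the parameter values used are arbitrary examples and are not taken from the vehicle model of this appendix.

import numpy as np

def bell_mf(x, a, b, c):
    """Generalized bell membership function 1 / (1 + |(x - c)/a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Example: bell MF centred at c = 0 with width a = 2 and slope parameter b = 3
x = np.linspace(-5.0, 5.0, 11)
print(bell_mf(x, a=2.0, b=3.0, c=0.0))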
The front and rear lateral tire forces, represented by the front and rear cornering stiffnesses of the SUV E-Class car model, are fuzzified as the local linear gains between the tire sideslip angles and the tire forces. For simplicity, the expressions ĉ_f(α), ĉ_r(α) may be shortened to c_f, c_r.
Figure E-1. The forces acting on the two-track 3D vehicle model.
Figure 2-1. Polytopic parametrization.
Figure 2-2. Fuzzy model construction by sector nonlinearity.
Figure 2-3. Vehicle lateral dynamics.
Figure 2-4. LPV modelling and control of robotic systems.
Figure 2-5. Time transient of the Lyapunov function.
Figure 2-7. The level set of ellipsoidal domains.
Figure 2-8. The closed-loop diagram of saturated feedback controllers.
Figure 3-1. The extreme points of the polynomial in the interval [−6, 12].
Figure 5-x. Responses of the LPV time-delay system (5.68) with the slowly varying delay h(t) ≤ 0.4917, |ḣ(t)| ≤ 0.1.
Figure 6-1. The responses of the bounded controllers corresponding to the saturation limit ū = 5, with h = 1, μ = 0.9.
Figure 6-2. The evolutions of the LPV time-delay system regulated by saturated state-feedback and dynamic output-feedback controllers.
Figure 6-3. The responses of the stabilizing gain-scheduling controllers solved by Theorem 6.2.2 and Theorem 6.3.1, corresponding to saturation limits ū = 5 and ū = 2.5, with h = 1, μ = 0.5.
Figure 6-4. Example 6.3.1 – designed controllers. The estimates of the region of stability using sector-nonlinearity approaches with different criteria: asymptotically stabilized (green solid lines) and destabilized (brown dotted lines) trajectories of the closed-loop systems from the initial conditions (o).
List of Tables
Table 2.1. The Conservativeness of Parametrized LMI Conditions.
Table 3.1. Observer-Based Feedback Robust Stabilization and Performance with Saturated Actuators.
Table 4.1. Multi-objective optimization – Problem 4.1.1.
Table 4.2. Robust Pole Placement in LMI Regions.
Table 4.3. Multi-objective optimization γ_opt and β_opt.
Table 4.4. Minor Axis Maximization, δ ≤ 10.
Table 4.5. Actuator Saturation – Optimization Problems.
Table 4.6. Multi-objective optimization – Example 4.5.1.
Table 5.1. The maximum admissible upper bound (MAUB) for delay h(t) ∈ H₀.
Table 5.2. The maximum admissible upper bound for delay h(t) ∈ H₀.
Table 6.1. The optimization of the ℋ∞ performance criterion γ_opt.
Table 6.2. Domain of attraction, with h = 1.
Table 6.3. Multi-criteria optimization.
A⁻¹ — inverse of matrix A.
A⁻ᵀ — transpose of the inverse of A.
A⁺ — Moore–Penrose pseudoinverse of A.
‖z(t)‖_Lq — Lq-norm of the function z(t) : R → Rⁿ, defined as:
General Introduction and Summary
Let us consider a continuous parameter with a discontinuous derivative trajectory:
ρ(t) = sin(2πt) for t ∈ [2k, 2k+1), and ρ(t) = 1 − cos(2πt) for t ∈ [2k+1, 2(k+1)), k ∈ N. (2.7)
The derivative of this function does not exist as a continuous function: it is discontinuous at the times t = k. Instead, it takes left and right values of −1 and 1, respectively.
Nonetheless, to unify the notation for control systems in this dissertation, h(t) denotes the time-varying delay and z(t) denotes the regulated output, and the parameters θ_i(t) are obtained by a transformation from the premise variables. It should be noted that the commonly used notations of the fuzzy-logic control community (h_i(t), z_l(t), r, N, M_ij, etc.) are denoted accordingly.
2.1.5. Example. In this case, nonlinear systems are approximated around a local bound on each state. However, there are no prerequisite requirements on the dynamics of the system states, e.g., no upper bound on the derivative ẋ(t) is needed.
As shown in Figure 2-2, in the Cartesian coordinate system Oxy the nonlinear function f(x(t)) is locally or globally bounded by the two linear functions y₂(x(t)) = a₂x(t) and y₁(x(t)) = a₁x(t). In Figure 2-2.a, the construction of the T-S fuzzy model for the nonlinear term f(x(t)) is illustrated at the point (x₁, f(x₁)) as follows:
f(x₁) = μ₁ y₁(x₁) + μ₂ y₂(x₁) = (μ₁ a₁ + μ₂ a₂) x₁,  with  μ₁ = (a₂x₁ − f(x₁)) / ((a₂ − a₁)x₁),  μ₂ = (f(x₁) − a₁x₁) / ((a₂ − a₁)x₁).  (2.22)
This can be interpreted as follows: within the specified domain of the state x(t) ∈ D_x ⊆ Rⁿ, for a finite nonlinear function f(x(t)) ∈ R, the upper and lower bounding functions of f(x(t)) are finite over that domain. Similarly, Figure 2-2.b shows a fuzzification of the system in the local region x(t) ∈ [−d, d]; the fuzzification of the function f(x(t)) over such a region is the so-called local sector linearization.
A polynomial fuzzy model is indicated by the following formulation of a local linear model:
if θ₁(t) is M_{i1} and … and θ_p(t) is M_{ip}  (2.23)
then  ẋ(t) = A_i(x(t)) Z(x(t)) + B_{wi}(x(t)) w(t),  z(t) = H_i(x(t)) Z(x(t)) + J_{wi}(x(t)) w(t),  i = 1, 2, …, N.  (2.24)
Here A_i(x(t)), B_{wi}(x(t)), H_i(x(t)) and J_{wi}(x(t)) are polynomial matrices in x(t), and Z(x(t)) ∈ R^N signifies a column vector of monomials in x(t) ∈ Rⁿ, e.g., the second-order monomial vector Z(x(t)) = [x₁²(t), x₁(t)x₂(t), x₂²(t)]ᵀ.
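As a minimal illustration of the sector-nonlinearity construction (2.22), the following Python sketch computes the two membership weights that reproduce a scalar nonlinearity f(x) exactly from its sector bounds a₁x and a₂x. The function f and the bounds used here are illustrative choices, not data taken from a specific example of the text.

import numpy as np

def sector_weights(f, x, a1, a2):
    """Membership weights mu1, mu2 such that f(x) = (mu1*a1 + mu2*a2) * x,
    assuming a1*x <= f(x) <= a2*x on the considered domain (x != 0)."""
    y = f(x)
    mu1 = (a2 * x - y) / ((a2 - a1) * x)
    mu2 = 1.0 - mu1
    return mu1, mu2

# Illustration: f(x) = sin(x) on [-1, 1] \ {0}, bounded by the sector [sin(1), 1]*x
f = np.sin
a1, a2 = np.sin(1.0), 1.0
x = np.array([-0.8, -0.3, 0.3, 0.8])
mu1, mu2 = sector_weights(f, x, a1, a2)
# Exactness check: the convex combination of the two linear bounds recovers f(x)
print(np.allclose((mu1 * a1 + mu2 * a2) * x, f(x)))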
Example. Consider the nonlinear system
ẋ₁(t) = −x₁(t) + x₁(t)x₂²(t),  ẋ₂(t) = 3x₁(t) + x₁³(t) − x₂(t),  (2.25)
with state vector x(t) = [x₁(t), x₂(t)]ᵀ and x₁(t), x₂(t) belonging to the domain D_x = [−1, 1].
Affine system
Let us define a parameter vector
θ(t) = [θ₁(t), θ₂(t)]ᵀ = [x₁(t)x₂(t), x₁²(t) + 3]ᵀ ∈ U ⊂ R²,  (2.26)
whose components are constrained by
−1 ≤ θ₁(t) ≤ 1,  0 ≤ θ₂(t) ≤ 4.  (2.27)
Then, system (2.25) is represented by the LPV formulation
ẋ(t) = A(x(t))x(t) = (A₀ + A₁θ₁(t) + A₂θ₂(t)) x(t),  (2.28)
with A₀ = [−1 0; 0 −1], A₁ = [0 1; 0 0] and A₂ = [0 0; 1 0].
Polynomial formulation
Now, by defining a monomial vector in x, Z(x(t)) = [x₁(t), x₂(t)]ᵀ, system (2.25) is expressed by the following polynomial equation:
ẋ(t) = A(x) Z(x),  (2.29)
with A(x) = [−1  x₁(t)x₂(t); 3 + x₁²(t)  −1].
It should be noted that system (2.28) depends affinely on the time-varying parameters and is presented as a first-order polynomial matrix. The expression θ(t) can be implicitly considered as an uncertain parameter (either endogenous or exogenous), whereas system (2.29) is an explicit state-dependent polynomial.
Considering linear cornering forces, i.e., F_yf = C_f α_f(t) and F_yr = C_r α_r(t), the tire sideslip angles can be approximated by the following expressions:
α_f(t) ≈ δ(t) − (v_y(t) + l_f ψ̇(t)) / v_x(t),  α_r(t) ≈ −(v_y(t) − l_r ψ̇(t)) / v_x(t).  (2.37)
The lateral dynamics are then written as
v̇_y(t) = −((C_f + C_r)/(m v_x)) v_y(t) − (v_x + (C_f l_f − C_r l_r)/(m v_x)) ψ̇(t) + (C_f/m) δ(t),
ψ̈(t) = −((C_f l_f − C_r l_r)/(I_z v_x)) v_y(t) − ((C_f l_f² + C_r l_r²)/(I_z v_x)) ψ̇(t) + (C_f l_f/I_z) δ(t).  (2.39)
a. Tire sideslip angle – local sectors.  b. Single-track parameter description.
The approximation of this LPV model is presented in (2.47), with scheduling parameters ρ₁(t), …, ρ₄(t) built from the trigonometric terms of the kinematics and the corresponding affine state-space matrix A(ρ(t)).
By linearizing system (2.48) around the origin x_eq1 = [0, 0]ᵀ, the Jacobian matrix A₁ = ∂f(x)/∂x |_{x_eq1} = [0 1; −a −b] is a Hurwitz matrix with eigenvalues λ_{1,2} = (−b ± √(b² − 4a))/2 having negative real part. It can be concluded that this equilibrium is asymptotically stable for a, b > 0. However, the eigenvalues slide onto the imaginary axis for b = 0; in that case, it is not possible to determine the stability feature at the origin by this linearization. Similarly, linearizing system (2.48) at the second equilibrium x_eq2 = [π, 0]ᵀ, we obtain the matrix A₂ = ∂f(x)/∂x |_{x_eq2} = [0 1; a −b], which has one positive eigenvalue λ = (−b + √(b² + 4a))/2.
Theorem 2.2.2: Let us consider the time-continuous LTI system
ẋ(t) = A x(t),  x(0) = x₀,  (2.53)
obtained as the linearization around an equilibrium point. The following statements are equivalent:
1 – The system (2.53) is globally asymptotically stable.
2 – The system (2.53) is globally exponentially stable.
3 – The matrix A is Hurwitz (i.e., the eigenvalues of matrix A have negative real part).
4 – There exist matrices P, Q ∈ Sⁿ with P ≻ 0, Q ≻ 0 such that the Lyapunov equation AᵀP + PA + Q = 0 is satisfied.
5 – There exist matrices P, Q ∈ Sⁿ with P ≻ 0 such that the Lyapunov inequality AᵀP + PA ≺ 0 holds.
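The equivalence between statements 3 and 4 is easy to check numerically. The sketch below, using SciPy's continuous Lyapunov solver, verifies that a Hurwitz matrix yields a positive definite solution P of AᵀP + PA = −Q; the matrix A used here is an arbitrary example, not one of the system matrices of this chapter.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example Hurwitz matrix (eigenvalues -1, -2)
Q = np.eye(2)

# Solve A^T P + P A = -Q  (SciPy solves a X + X a^H = q, so pass a = A^T and q = -Q)
P = solve_continuous_lyapunov(A.T, -Q)

print("eigenvalues of A:", np.linalg.eigvals(A))
print("P is positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)))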
Example 2.2.1. Let us consider the affine system (2.28):
ẋ(t) = (A₀ + A₁θ₁(t) + A₂θ₂(t)) x(t),  with A₀ = [−1 0; 0 −1], A₁ = [0 1; 0 0], A₂ = [0 0; 1 0].
Since the LPV system is affine in the parameters, a polynomial Lyapunov matrix is chosen as follows:
P(x) = P₀ + P₁x₁ + P₂x₂ ≻ 0.  (2.69)
Following condition (2.61), the LPV system (2.68) is robustly stable if and only if the matrix inequality
sym{(P₀ + P₁x₁ + P₂x₂)(A₀ + A₁x₁ + A₂x₂)} + P₁ẋ₁ + P₂ẋ₂ ≺ 0  (2.70)
holds for P₀, P₁, P₂ ∈ S² and x₁, x₂ within the specified range
(x₁, x₂) ∈ U = [−1, 1] × [0, 4],  |ẋᵢ| ≤ ν, i = 1, 2.  (2.71)
Indeed, the LMI conditions are derived from (2.70) by discretizing the intervals of the parameters (2.27) at N equally spaced points. If the conditions hold throughout the parameter domain, then the stability of LPV system (2.68) is ensured robustly in the presence of the time-varying parameters. This approach is based on discretizing the parameter space, so what is a "good" density able to cover most of the critical points? The critical points are the set of points in U for which the LMI is unfeasible; for instance, system (2.68) is unstable in the region of the parameter space where θ₁θ₂ ≥ 1. In addition, what is the appropriate parameter density?
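A gridding-based feasibility check of this kind can be prototyped as below. For illustration the sketch simply tests, at each grid point of the parameter box [−1, 1] × [0, 4], whether the naive quadratic candidate P = I satisfies A(θ)ᵀP + PA(θ) ≺ 0 for the affine vertex matrices quoted above; as expected, the simple candidate fails on part of the box, and a genuine design would instead solve for a parameter-dependent P(θ) with an SDP solver.

import numpy as np
from itertools import product

# Affine LPV matrix of Example 2.2.1: A(theta) = A0 + theta1*A1 + theta2*A2
A0 = np.array([[-1.0, 0.0], [0.0, -1.0]])
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])

P = np.eye(2)                       # naive quadratic Lyapunov candidate
grid1 = np.linspace(-1.0, 1.0, 21)
grid2 = np.linspace(0.0, 4.0, 41)

def lmi_holds(t1, t2):
    A = A0 + t1 * A1 + t2 * A2
    # Stability LMI at this grid point: A(theta)^T P + P A(theta) must be negative definite
    return np.max(np.linalg.eigvalsh(A.T @ P + P @ A)) < 0.0

bad = [(t1, t2) for t1, t2 in product(grid1, grid2) if not lmi_holds(t1, t2)]
print("grid points where the LMI fails:", len(bad), "of", grid1.size * grid2.size)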
Let f(x) be a polynomial in x ∈ Rⁿ of degree 2d, and let Z(x) be a column vector whose entries are all monomials in x of degree no greater than d. Then f(x) is a sum of squares if and only if there exists a positive semidefinite matrix Q such that
f(x) = Z(x)ᵀ Q Z(x).  (2.72)
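This SoS characterisation can be checked numerically with a small semidefinite program. The sketch below, which assumes that cvxpy and an SDP-capable solver are installed, tests whether the univariate polynomial f(x) = x⁴ + 4x³ + 6x² + 4x + 1 = (x² + 2x + 1)² admits a Gram matrix Q ⪰ 0 with Z(x) = [1, x, x²]ᵀ; the polynomial is an arbitrary example, not one from the text.

import cvxpy as cp

# f(x) = x^4 + 4x^3 + 6x^2 + 4x + 1, monomial vector Z(x) = [1, x, x^2]^T
Q = cp.Variable((3, 3), symmetric=True)

constraints = [
    Q >> 0,                      # Gram matrix must be positive semidefinite
    Q[0, 0] == 1,                # constant term
    2 * Q[0, 1] == 4,            # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 6,  # coefficient of x^2
    2 * Q[1, 2] == 4,            # coefficient of x^3
    Q[2, 2] == 1,                # coefficient of x^4
]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("SOS decomposition found:", prob.status == cp.OPTIMAL)
print("Gram matrix Q =\n", Q.value)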
Applying this SoS decomposition to the polynomial stability condition yields a feasible polynomial Lyapunov matrix P(x), whose entries P_ij(x) are low-order polynomials in x₁ and x₂ with small numerical coefficients.
Table 2.1. The Conservativeness of Parametrized LMI Conditions.

              | Quadratic LF  | Parameter-Dependent LF
LPV system    | T-S fuzzy     | Affine        | T-S fuzzy     | Polynomial    | Poly-fuzzy
Stability     | Theorem 2.2.3 | Theorem 2.2.5 | Theorem 2.2.6 | Theorem 2.2.5 | Theorem 2.2.7
Relaxation    | —             | Gridding      | Multi-convex  | SoS           | SoS
Example 2.1.3 | (2.32)        | (2.28)        | (2.32)        | (2.29)        | (2.35)
              | infeasible    | feasible      | feasible      | feasible      | feasible
Example 2.2.3 | (2.83)        | (2.81)        | (2.83)        | (2.82)        | (2.84)
              | infeasible    | feasible      | infeasible    | infeasible    | infeasible
Imprv         | —             | —             | [ref1]        | [ref2]        | [ref3]
              | —             | —             | infeasible    | feasible      | feasible
Imprv denotes the improvement of the respective stability conditions when combined with the following methods:
[ref1] -Relaxed Stability LMI conditions (Tuan, Apkarian, et al., 2001),
[ref2] -Locality and Shape-Dependent Conditions (Sala, 2009),
[ref3] -Positivstellensatz Relaxation (Furqon et al., 2017; Sala & Ariño, 2009).
Example 2.2.3. Let us consider the nonlinear system (Sala & Ariño, 2009):
ẋ₁(t) = −3x₁(t) + 0.5x₂(t),
ẋ₂(t) = −2x₂(t) + 3 sin(x₁(t)) x₂(t),  (2.80)
with state vector x(t) = [x₁(t), x₂(t)]ᵀ, and x₁(t), x₂(t) belonging to the domain D_x = [−1, 1].
First, an affine system is given as follows:
ẋ(t) = [−3  0.5; 0  −2 + 3θ(t)] x(t),  with parameter θ(t) = sin(x₁(t)) ∈ [−0.8415, 0.8415], |θ̇(t)| ≤ 1.  (2.81)
Then, using a third-order Taylor expansion of the sinusoid around x₁ = 0, we get the polynomial system:
ẋ(t) = [−3  0.5; 0  −2 + 3x₁(t) − 0.5x₁³(t)] x(t).  (2.82)
By applying the fuzzy modeling of (Sala & Ariño, 2009), we obtain a representation as the T-S fuzzy system
ẋ(t) = μ₁(t) [−3  0.5; 0  −5] x(t) + μ₂(t) [−3  0.5; 0  1] x(t),  (2.83)
and the goal is to verify the stability of the polynomial systems (2.82), (2.83).
Corollary 2.2.1. Given a set F = {f₁(x), f₂(x), …, f_{n₁}(x)}, f_i : Rⁿ → R, define a region D_x = {x(t) : f_i(x) ≥ 0, i = 1, …, n₁}, and let arbitrary polynomials g_j(x), j = 1, …, n₂, be composed of products of elements of F. Then a sufficient condition for a polynomial matrix Φ(x) to be positive on D_x is that there exist SoS multiplier polynomials q_j(x) ≥ 0, j = 1, …, n₂, such that the expression
vᵀ ( Φ(x) − Σ_{j=1}^{n₂} q_j(x) g_j(x) ) v  (2.88)
is a sum of squares for an arbitrary vector v of appropriate dimension.
Using a third-order Taylor expansion method similar to (Sala & Ariño, 2009), and combining the relaxed SoS condition (2.88) with the PLMI stability condition analyzed for the fuzzy polynomial system (2.84), we obtain a second-degree decision matrix P(x) ≻ 0 for x ∈ D_x, whose entries P_ij(x) are second-order polynomials in x₁ and x₂, with local constraint multipliers of second degree g₁(x) = 1 − x₁², g₂(x) = 1 − x₂², and Positivstellensatz multipliers q₁(x) ⪰ 0, q₂(x) ⪰ 0.
Even the quadratic case returns a feasible solution P₂ ≻ εI, with ε of the order of 10⁻⁷, for arbitrary second-degree multipliers q₁(x) ⪰ 0, q₂(x) ⪰ 0.
Figure 2-6. Dead-zone nonlinearity.
Accordingly, let us introduce the dead-zone nonlinearity associated with a symmetric saturation function:
ψ(u) = u(t) − sat(u),  with  sat(uᵢ) = sign(uᵢ)·ūᵢ if |uᵢ| ≥ ūᵢ, and sat(uᵢ) = uᵢ otherwise,  i = 1, 2, …, m.  (2.91)
Given a vector v(t) = [v₁(t), v₂(t), …, v_m(t)]ᵀ ∈ R^m, define the polyhedral set
S(u, v, ū) = { u, v ∈ R^m : |uᵢ − vᵢ| ≤ ūᵢ, i = 1, 2, …, m }.  (2.92)
Then, a generalized sector condition (GSC) is introduced as follows.
Lemma 2.3.1. Given the nonlinearity ψ(u) = u(t) − sat(u), (u(t), v(t)) ∈ S, and a diagonal positive matrix function T : U_p → R^{m×m}, the following inequality is satisfied:
ψ(u)ᵀ T⁻¹ ( ψ(u(t)) − v(t) ) ≤ 0.  (2.93)
Proof. Consider the dead-zone nonlinear function (2.91), which satisfies the following properties:
ψᵢ(u) = 0 if |uᵢ| ≤ ūᵢ,  ψᵢ(u) ≠ 0 if |uᵢ| > ūᵢ,  i = 1, 2, …, m.  (2.94)
This is illustrated in Figure 2-6 (left), where the nonlinear function ψᵢ(u) is the green dashed line and the linear function uᵢ is the solid blue line. For all vᵢ with |uᵢ − vᵢ| ≤ ūᵢ, we look for a vector such that
ψᵢ(u)(ψᵢ(u) − vᵢ) ≤ 0,  i = 1, 2, …, m.  (2.95)
For uᵢ > ūᵢ: ψᵢ(u) = uᵢ − ūᵢ > 0 and the constraint gives vᵢ ≥ uᵢ − ūᵢ, so ψᵢ(u) − vᵢ ≤ 0; hence there exist functions Tᵢ : U_p → R₊ such that ψᵢ(u)Tᵢ⁻¹(ψᵢ(u) − vᵢ) ≤ 0.  (2.96)
Similarly, for uᵢ < −ūᵢ: ψᵢ(u) = uᵢ + ūᵢ < 0 and vᵢ ≤ uᵢ + ūᵢ, so ψᵢ(u) − vᵢ ≥ 0; hence ψᵢ(u)Tᵢ⁻¹(ψᵢ(u) − vᵢ) ≤ 0.  (2.97)
It follows immediately from conditions (2.96), (2.97) that there exists a diagonal matrix function T = diag{T₁, T₂, …, T_m} : U_p → R^{m×m} such that (2.93) holds. Substituting sat(u) = u(t) − ψ(u(t)) and the auxiliary vector v(t) into condition (2.93) gives the working form of the inequality used in the sequel.
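A minimal numerical illustration of the dead-zone nonlinearity (2.91) and of the sector-type inequality is sketched below; the signals, the saturation level and the scalar multiplier standing in for the diagonal matrix T are arbitrary choices made for the example.

import numpy as np

def sat(u, u_bar):
    """Componentwise symmetric saturation."""
    return np.clip(u, -u_bar, u_bar)

def dead_zone(u, u_bar):
    """Dead-zone nonlinearity psi(u) = u - sat(u)."""
    return u - sat(u, u_bar)

u_bar, T = 1.0, 2.0                        # saturation level and a positive multiplier
u = np.linspace(-3.0, 3.0, 301)
psi = dead_zone(u, u_bar)

# Any auxiliary signal v with |u - v| <= u_bar is admissible; one arbitrary choice:
v = u - 0.5 * sat(u, u_bar)

# Generalized sector condition: psi^T T^{-1} (psi - v) <= 0 on the admissible set
lhs = psi * (1.0 / T) * (psi - v)
print("GSC satisfied for all samples:", bool(np.all(lhs <= 1e-12)))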
Searching directly over the set S(u, v, ū) would make the analysis of the saturation constraints unnecessarily difficult. Instead, the GSC condition is reformulated by finding a vector θ(t), a feedback control law u(t) and a nonlinear function ψ(u).
Corollary 2.3.1. Given the nonlinearity ψ(u) = u(t) − sat(u), (u(t), θ(t)) ∈ S, and a diagonal positive matrix function T : U_p → R^{m×m}, the following inequality is satisfied:
ψ(u)ᵀ T⁻¹ ( ψ(u(t)) − θ(t) ) ≤ 0.  (2.98)
From now on, this formulation will be essentially implemented for the saturated LPV and LPV time-delay systems in the following chapters, where the definition of the polyhedron S(θ, ū) is recalled as an alternative to S(u, v, ū).
2.3.1.2. Region of Attraction.
The defined domain of the states of system (2.1) is denoted by D_x. Without the effect of the disturbance, w(t) = 0, the region of attraction of the LPV system is defined as the set of x(t) ∈ D_x such that, from the specified initial condition, the trajectory x(t; x₀) converges asymptotically to the origin:
R_A = { x(t) ∈ D_x ⊆ Rⁿ : x(t; x₀) → 0 as t → ∞ }.  (2.99)
2.3.1.3. Ellipsoidal Set of Stability.
…has presented and discussed the two-step separate design for LTI systems. Now, let us introduce a dynamic output-feedback control law for the LPV system (2.89) by the following equations:
ẋ_c(t) = A_c(ρ)x_c(t) + B_c(ρ)y(t),
u(t) = C_c(ρ)x_c(t) + D_c(ρ)y(t),  (2.120)
with output measurement y(t) = C(ρ)x(t) + D_w(ρ)w(t). The goal is to seek a stabilizing structure under saturation with the proposed DOF (2.120). By replacing the controller in system (2.89), the extended closed-loop system is obtained:
ζ̇(t) = A(ρ)ζ(t) + B₁(ρ)ψ(u) + B₂(ρ)w(t),
z(t) = C(ρ)ζ(t) + D₁(ρ)ψ(u) + D₂(ρ)w(t),  with ζ(t) = [x(t)ᵀ  x_c(t)ᵀ]ᵀ,  (2.121)
where the closed-loop matrices are built from A, B, B_w, C, D_w, H, J, J_w and the controller matrices A_c, B_c, C_c, D_c. The transformation matrix (2.122) partitions the parameter-dependent Lyapunov matrix P(ρ) into the blocks X, Y, M, N (the remaining blocks are not needed explicitly). The level set
E(P, η) = { ζ(t) : V(ρ, ζ) = ζ(t)ᵀ P(ρ) ζ(t) ≤ η⁻¹ }  (2.123)
defines a set of admissible initial conditions,
E₀ = { ζ(0) : ζ(0)ᵀ P(ρ(0)) ζ(0) ≤ η⁻¹ }.  (2.124)
Then, the structure of the DOF controller is expanded as follows:
u(t) = K(ρ)ζ(t) + D_c(ρ)D_w(ρ)w(t) − ψ(u),  with  K(ρ) = [D_c(ρ)C(ρ)  C_c(ρ)],  (2.125)
and an auxiliary dynamic controller is defined as
θ(t) = G(ρ)ζ(t),  with  G(ρ) = [G_c(ρ)C(ρ)  F_c(ρ)].  (2.126)
From this point, the stabilization analysis for the saturated DOF is quite challenging, since the closed-loop system (2.121) contains nonlinear couplings between the variable matrices. Now, let us assign the following parameter-dependent Lyapunov variables: P(ρ) ∈ S^{2n}, P(ρ) ≻ 0, is the Lyapunov matrix whose level set E is the region of asymptotic stability of system (2.121).
Imposing the constraints on the observer-based feedback controller law (3.4) leads to the following result.
3.1.1. Norm-Bounded Input
Lemma 3.1.1. Given a set of initial conditions, positive scalars η, ū_s with max_t |u_s(t)|² ≤ ū_s², s = 1, 2, …, m, and the feedback control law (3.4), the constraints on the input control signal are enforced for all t ≥ 0 if the following matrix inequalities hold:
[ η  x(0)ᵀ  e(0)ᵀ ; x(0)  P₁⁻¹  0 ; e(0)  0  P₂⁻¹ ] ⪰ 0,  (3.7)
[ P₁⁻¹  (Y₁)_sᵀ ; (Y₁)_s  ū_s²/η ] ⪰ 0,  s = 1, 2, …, m,  (3.8)
where (Y₁)_s symbolizes the s-th row of Y₁ = K P₁⁻¹, with controller gain K = Y₁P₁ and the ellipsoid E_D = { (x, e) : V(t) = x(t)ᵀP₁x(t) + e(t)ᵀP₂e(t) ≤ η⁻¹ }.
Proof. Condition (3.7) directly implies the initial condition
x(0)ᵀP₁x(0) + e(0)ᵀP₂e(0) ≤ η⁻¹.  (3.9)
The limit constraints |u_s(t)|² ≤ ū_s² are guaranteed if the following conditions hold:
u_s(t)ᵀu_s(t) ≤ η ū_s² [ x(t)ᵀP₁x(t) + e(t)ᵀP₂e(t) ],  with u(t) = K(x(t) − e(t)),  (3.10)
x(t)ᵀP₁x(t) + e(t)ᵀP₂e(t) ≤ x(0)ᵀP₁x(0) + e(0)ᵀP₂e(0) ≤ η⁻¹.  (3.11)
Setting the transformation Y = KX, then using Schur's complement and a congruence transformation, completes the proof.
The stability development for the feedback control system is guaranteed robustly against the influence of external disturbance and model uncertainty, conforming to the selected design values (e.g., ε_i, i = 1, …, 4). The transformation (3.19) provides a tractable condition without imposing strong constraints such as a rank constraint on the matrices. However, the use of Young's inequality (the bounding lemma) usually leads to conservatism, and two weaknesses of this inequality can be pointed out: first, the upper bound is always positive, a disadvantage inherent to the transformation method (which cannot be improved); second, the approach depends on a wise choice of the scalars ε_i.
The proof of the stabilization condition proceeds as follows. The derivative of the Lyapunov function along the trajectories of the LPV system (3.5) is combined with the ℋ∞ performance constraint (3.14), yielding the quadratic form (3.16) in the extended vector [xᵀ eᵀ wᵀ]ᵀ. Reorganizing the stabilization condition (3.16) yields (3.17), which contains bilinear terms, and a congruence transformation is performed. The cross terms are bounded with Young's inequality (Appendix A.4) in (3.18)–(3.19), the uncertainties are eliminated in the same way in (3.21), and combining (3.18), (3.19), (3.21) with the PLMI (3.17), successively deploying the congruence transformation by diag{X, I, I, I, I} and Schur's complement, results in condition (3.15), with X = P₁⁻¹ and Y₂ = P₂L. This completes the proof.
Remark 3.1.2. If the ε_i were set as decision variables, conditions (3.18), (3.19) and (3.21) would become nonconvex owing to the bilinearities in ε_i⁻¹.
3.1.3. Concave Nonlinearity – Cone Complementarity Linearization
…are transformed into the following optimization problem:
Minimize  Trace( 0.25 (X U + U X + S Λ + Λ S) ) − n  (3.24)
subject to (3.22) and the inequalities
[ X  I ; I  Λ ] ⪰ 0,  [ U  I ; I  S ] ⪰ 0,  X ⪰ 0, U ⪰ 0, S ⪰ 0, Λ ⪰ 0,  (3.25)
where the matrix Λ acts as a pseudo-inverse of X and S as a pseudo-inverse of U (3.26). Pursuant to this formulation, the Cone Complementarity Algorithm converts the concave problems (3.24), (3.25) into an iterative optimization. Furthermore, to reduce the complexity of problem (3.24) and the number of LMI constraints (3.25), conditions (3.22) can also be reformatted in an alternative, reduced form:
Minimize  Trace( 0.5 (X U + U X) ) − n  (3.27)
subject to (3.22) and the inequalities
[ X  I ; I  U ] ⪰ 0,  X ⪰ 0,  U ⪰ 0.  (3.28)
This reduced formulation decreases the computational effort of problems (3.24), (3.25) by almost half. We thus obtain a reduced optimization version of the original problem (3.24):
Algorithm 3.1. Adapted CCL & Multi-objective Optimization
Step 1: Choose any initial values X⁰, U⁰, ε, γ such that conditions (3.22)–(3.25) are feasible.
…belonging to a subset [−π/2, π/2] (or ρ(t) ∈ [−1, 1]). These conditions need to simultaneously verify feasibility at the N discretized points of the parameter range.
3.2.2. Polynomial Parameter-Dependent LMIs via Sum-of-Squares
Consider parameter-dependent conditions of the form F(ρ(t)) ⪰ 0 for all ρ(t) ∈ U_p, with F : U_p → S^N.  (3.31)
This approach is applicable to almost all types of parameterized LMI conditions.
Example 3.2.1. Let us consider PDLF candidates such as:
X(ρ) = X₀ + X₁ sin ρ(t) + X₂ cos ρ(t),  with ρ(t) ∈ [−π/2, π/2],  (3.32)
or in the form
X(ρ) = X₀ + X₁ρ(t) + X₂ρ²(t),  with ρ(t) ∈ [−1, 1].  (3.33)
We now seek symmetric matrices X₀, X₁, X₂ such that X(ρ) ≻ 0 for all ρ(t) ∈ U_p.
Lemma 3.2.1. (Apkarian & Tuan, 2000) Consider a parameterized linear matrix inequality M(a, θ) ⪰ 0 represented in spectral form as a polynomial matrix inequality. Based on the SoS decomposition, this methodology casts the polynomial expression as a semidefinite programming problem: let Φ(θ) denote a vector of monomials in θ ∈ U_p ⊆ R^{N_p} (vars) and let the coefficients a_j be decision variables (decvars); the parameter-dependent matrix is then decomposed as
M(a, θ) = Σ_j a_j Φ_j(θ),  a_j ∈ Sⁿ.  (3.37)
Condition (3.38), M(a, θ) ⪰ 0 for all θ ∈ U_p, is satisfied if there exists a symmetric slack-variable matrix P of appropriate dimensions such that the parameterized condition
M(a, θ) + Πᵀ(θ)P + PᵀΠ(θ) ≻ 0  (3.39)
holds, where Π(θ) gathers the monomials occurring in the PLMIs; an appropriate choice of Π(θ) can be derived from the structure of Φ(θ).
Example 3.2.2. Consider the following polynomial:
f(a, θ) = a₄θ⁴ + a₃θ³ + a₂θ² + a₁θ + a₀,  with a₄ = −1, a₃ = 10, a₂ = 18, a₁ = −86, a₀ = 102.  (3.40)
Since a₄ < 0, the coefficient of the highest-order monomial is negative, and the derivative ∂f(a, θ)/∂θ = 0 has three distinct real solutions, so polynomial (3.40) possesses a global extremum. We seek the maximum value of this polynomial using the S-variable method and compare it with the SoS method and the theoretical value. The problem is written as
minimize γ such that γ − f(a, θ) ≥ 0 for all θ ∈ Θ.  (3.41)
Although the polynomial has a global extremum, we cannot solve the above optimization problem on the whole domain of θ; for simplicity, we choose the local domain Θ = [−6, 12], which covers all three stationary points of ∂f(a, θ)/∂θ = 0 (Figure 3-1: the data tips at θ ≈ −2.385, 1.579 and 8.348 correspond to f ≈ 241.5, 44.23 and 1600, respectively).
Figure 3-1. The extreme points of the polynomial in the interval [−6, 12].
We now compare the results obtained in turn by the following methods:
§ The S-variable LMI polytope: γ_SVopt = min γ s.t. Q(γ, θ̲) ⪰ 0, Q(γ, θ̄) ⪰ 0.  (3.46)
§ The S-variable LMI with gridding: γ_SVopt = min γ s.t. Q(γ, θᵢ) ⪰ 0, i = 1, …, N + 1, with step Δ = (θ̄ − θ̲)/N.  (3.47)
§ The sum-of-squares decomposition: γ_SoSopt = min γ s.t. γ − M(a, θ) is SoS on Θ.  (3.48)
§ The theoretical approach: γ = sup_{θ∈Θ} f(a, θ).  (3.49)
Solving the SDPs on Θ = [−6, 12], we obtain the slack-variable matrix (3.50) and the optimal values, respectively,
γ₁ ≈ 1600.358242052, γ₂ ≈ 1600.358239280, γ₃ ≈ 1600.358242271, γ₄ ≈ 1600.358215351,
which all agree with the theoretical maximum up to solver tolerances (differences of the order of 10⁻⁶ to 10⁻⁷).
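The theoretical value (3.49) is easy to reproduce directly; the short sketch below evaluates the stationary points of f(θ) = −θ⁴ + 10θ³ + 18θ² − 86θ + 102 on the interval [−6, 12] and recovers the maximum of about 1600.358 reported above.

import numpy as np

# f(theta) = -theta^4 + 10 theta^3 + 18 theta^2 - 86 theta + 102 (coefficients of (3.40))
coeffs = [-1, 10, 18, -86, 102]          # highest degree first, as numpy expects
f = np.poly1d(coeffs)

# Stationary points: real roots of f' inside the interval [-6, 12]
crit = [r.real for r in np.roots(f.deriv()) if abs(r.imag) < 1e-9 and -6 <= r.real <= 12]
candidates = crit + [-6.0, 12.0]         # also consider the interval endpoints

theta_max = max(candidates, key=f)
print("arg max ~", round(theta_max, 3), "  max f ~", round(f(theta_max), 3))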
The control analysis of the polytopic system amounts to switching between the individual stabilized LTI systems corresponding to the operating regions of the parameter coordinates. The T-S fuzzy system is an alternative combination of LTI systems in which each operating area is covered by a stabilized PDC control rule (even when the gains K_j are not themselves stabilizing laws of the local linear systems (A_i, B_i) for i ≠ j).
Consider the closed-loop systems:
Polytopic:  ẋ(t) = (A_i + B_iK_i)x(t),  A_cl,i = A_i + B_iK_i.  (3.61)
Based on a quadratic Lyapunov stability analysis, we have the following statements. If there exists a common positive definite matrix P ≻ 0 such that
Polytopic:  A_cl,iᵀP + PA_cl,i ≺ 0,  i = 1, …, N_p,  (3.63)
T-S fuzzy:  ½(A_cl,ij + A_cl,ji)ᵀP + P·½(A_cl,ij + A_cl,ji) ≺ 0,  i ≤ j,  i, j = 1, …, N_p,  (3.62)
then the closed-loop systems, with the diagram shown in Figure 3-2, are stabilized. Specifically, in some cases A_cl,ijᵀP + PA_cl,ij ⊀ 0 for an individual pair, but the symmetrized combination ½(A_cl,ij + A_cl,ji)ᵀP + P·½(A_cl,ij + A_cl,ji) ≺ 0 still holds, which gives more relaxation of the stabilizing conditions.
3.2.3.2. Reducing Conservatism – Parameter-Dependent Scaling
It can be noticed that the scalars ε_i related to Young's inequality are parameterized in the design conditions of Theorem 3.1.1, e.g., inequalities (3.18), (3.19), and (3.21).
Example 3.3.1: The continuous-time nonlinear dynamical system is represented by the following equations:
ḣ₁(t) = −(a_{s,1}/A_{s,1})√(2g h₁(t)) + (a_{s,3}/A_{s,1})√(2g h₃(t)) + (γ₁(t)k_{v,1}/A_{s,1})V₁(t),
ḣ₂(t) = −(a_{s,2}/A_{s,2})√(2g h₂(t)) + (a_{s,4}/A_{s,2})√(2g h₄(t)) + (γ₂(t)k_{v,2}/A_{s,2})V₂(t),
ḣ₃(t) = −(a_{s,3}/A_{s,3})√(2g h₃(t)) + ((1 − γ₂(t))k_{v,2}/A_{s,3})V₂(t),
ḣ₄(t) = −(a_{s,4}/A_{s,4})√(2g h₄(t)) + ((1 − γ₁(t))k_{v,1}/A_{s,4})V₁(t),  (3.64)
where h_i(t) are the liquid levels in tank i, i = 1, 2, 3, 4; a_{s,i} are the outlet cross-sectional areas of tank i; A_{s,i} are the cross-sectional areas of tank i; γ_j(t) ∈ [0, 1], j = 1, 2, are the time-varying valve flow ratios; and V_j(t) are the voltage control signals of pump j with the corresponding coefficients k_{v,j}. The measurement voltages on the output signal y(t) ∈ R² are assumed proportional to the level measurements (in cm) of tank 1 and tank 2. For a further description of the parameters of the tank process model and of the fuzzification analysis, refer to Appendix E.1. Based on the T-S fuzzy rules and the set of membership functions, the quadruple-tank process can be represented as follows:
ẋ(t) = Σ_{i=1}^{8} μ_i(θ)[ (A_i + ΔA_i(t))x(t) + B_iu(t) + B_{wi}w(t) ],  y(t) = (C + ΔC(t))x(t).  (3.65)
The flow disturbance from the pumps to the tanks is represented by the following equation:
w(t) = [ sin(30t)  cos(20t)  sin(26t)  sin(31t) ]ᵀ.  (3.66)
The nominal constant matrices A_i, B_i ∈ R^{4×4}, R^{4×2} and C ∈ R^{2×4} are given in Appendix E.1. The uncertain parameters represent the unmodeled and neglected dynamics and the linearization error, modified by varying ε_i ∈ [1, 100], i = 1, …, 8. We consider the flow distribution rate in the tanks, set by the position of the regulating valves γ₁(t) and γ₂(t), adjusted according to the following rules:
if 0 ≤ t_sim < 20 then the valves are in the minimum-phase (MP) setting,
if 20 ≤ t_sim < 120 then the valves are in the non-minimum-phase (NMP) setting,
if 120 ≤ t_sim < 220 then MP,
if 220 ≤ t_sim < 300 then NMP,
in order to solve the optimization Algorithm 3.1 and the stabilization condition in Theorem 3.2.1 (with B_i, J_i, J_{wi}, B_{wi} and ε = 0).
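For reference, a minimal open-loop simulation of the nonlinear quadruple-tank equations (3.64) is sketched below with SciPy; the numerical parameter values are illustrative (close to the classical laboratory setup) and are not the values used in the thesis example.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (not those of the thesis example)
A = np.array([28.0, 32.0, 28.0, 32.0])        # tank cross sections [cm^2]
a = np.array([0.071, 0.057, 0.071, 0.057])    # outlet cross sections [cm^2]
k = np.array([3.33, 3.35])                    # pump gains [cm^3/(V s)]
g = 981.0                                     # gravity [cm/s^2]
gamma = np.array([0.7, 0.6])                  # valve flow ratios
v = np.array([3.0, 3.0])                      # constant pump voltages [V]

def tank_dynamics(t, h):
    h = np.maximum(h, 0.0)                    # levels cannot be negative
    q = a * np.sqrt(2.0 * g * h)              # outflow of each tank
    dh1 = (-q[0] + q[2] + gamma[0] * k[0] * v[0]) / A[0]
    dh2 = (-q[1] + q[3] + gamma[1] * k[1] * v[1]) / A[1]
    dh3 = (-q[2] + (1.0 - gamma[1]) * k[1] * v[1]) / A[2]
    dh4 = (-q[3] + (1.0 - gamma[0]) * k[0] * v[0]) / A[3]
    return [dh1, dh2, dh3, dh4]

sol = solve_ivp(tank_dynamics, (0.0, 300.0), [12.0, 12.0, 2.0, 2.0], max_step=1.0)
print("final levels [cm]:", sol.y[:, -1].round(2))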
Table 3.1. Observer-Based Feedback Robust Stabilization and Performance with Saturated Actuators.
…with the dynamic states governing the plant; the corresponding torque bound is 2842 N·m.
The epsilons ε_i, i = 1, 2, 3, enter the LMI conditions (3.55) reported in Table 3.1. It can be seen that each ε_i must satisfy all the LMIs corresponding to all vertices of condition (3.55), which makes the stabilization conditions claustrophobic. In this case, a parameterization is a more reasonable choice for the parameter-dependent conditions. As discussed in Section 3.2.3.2, the improvement is obtained by changing, e.g., the diagonal slack-variable matrix
Ξ = diag{ε₁I, ε₂I, X⁻¹, ε₃I, …}  (3.72)
into the convex-combination formulation
Ξ(θ) = Σ_j μ_j(θ) diag{ε_{1,j}I, ε_{2,j}I, X⁻¹, ε_{3,j}I, …},  j = 1, 2.  (3.73)
In the second case (Polytope), the diagonal slack-variable matrix (3.73) with vertices j = 1, 2 provides a significant relaxation (from the 2nd to the 3rd catalog). Precisely, the attenuation optimum of the stabilization condition of Theorem 3.2.1, associated with Lemma 3.2.2 and solved by algorithm CCL1, is considerably reduced from 53.7486 to only 5.6668 by using the proposed method.
On the other hand, consider the set of positive definite similarity scalings associated with the structured uncertainty (Apkarian & Adams, 1998; Apkarian & Gahinet, 1995):
L(θ) = { Λ(θ) = diag{λ_{ij,k}(θ)I} ≻ 0 },  (3.74)
which, for a symmetric matrix Y(θ) = Y(θ)ᵀ, enjoys the bounding property
XᵀY(θ)Z + ZᵀY(θ)ᵀX ⪯ XᵀΛ(θ)X + ZᵀY(θ)Λ(θ)⁻¹Y(θ)ᵀZ.  (3.75)
In the last case (Scaling), the variable matrix (3.72) is changed accordingly by applying the above development:
Ξ(θ) = Σ_j μ_j(θ) diag{Λ_{1,j}(θ), Λ_{2,j}(θ), X⁻¹, Λ_{3,j}(θ), …},  j = 1, 2.  (3.76)
Deploying this affine scaling formulation in the stabilization conditions (3.56)–(3.58) leads to a better performance: the attenuation is further reduced from 5.6668 to 5.0952 by replacing the block variable matrices (3.73) with the (3.76) ones. Moreover, the reduced formulation derived in (3.26)–(3.28), corresponding to Theorem 3.2.1 associated with Lemma 3.2.2 and solved by algorithm CCL1, shows a slight enhancement from 5.2818 to 5.1002. Finally, through the above analysis, some conclusions can be stated:
§ The CCL algorithm provides more flexible stabilizing conditions by letting the scalar variables be optimized iteratively.
Table 4.1. Multi-objective optimization – Problem 4.1.1, by convex combination of four vertices.
The process of fuzzification of the nonlinear system (4.11) and its membership functions can be found in more detail in (A. T. Nguyen et al.).
…is recalled to impose the constraints on the state. By defining a polyhedron
L_x = { x(t) ∈ Rⁿ : |h_xᵀx(t)| ≤ 1 }  (4.15)
that encloses the ellipsoidal set E₁ = { x(t) ∈ Rⁿ : x(t)ᵀX⁻¹x(t) ≤ 1 }, the inclusion yields the condition h_xᵀX h_x ≤ 1.
In this example, the vector h_x = [0  1/1.5]ᵀ is chosen to enforce the norm constraint |x₂(t)| ≤ 1.5. Combining conditions (4.13), (4.15) with the stabilization conditions solved with the relaxation methods at 0.25 and 2.75, we obtain the regions of asymptotic stability of the LPV systems.
For the gridding and SoS methods, the discretization of the ellipsoidal domain (at level set E₁) by uniformly gridding 20 values of θ(t) over the interval [−1.5, 1.5] corresponds to 20 ellipses in the bounded region |x₂(t)| ≤ 1.5. On the other hand, 20 ellipsoids of the FLF
E₁(θ) = { x(t) ∈ Rⁿ : x(t)ᵀ (Σ_{i=1}^{4} μ_i(θ)X_i)⁻¹ x(t) ≤ 1 }  (4.16)
are obtained by discretizing the membership function μ₁(θ(t)) uniformly over the interval [0, 1]. It can be noticed that these ellipsoids are bounded by E₁.
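The inclusion condition h_xᵀX h_x ≤ 1 used in (4.15) can be checked numerically; a minimal sketch is given below, where the shape matrix X is an arbitrary example and h_x = [0, 1/1.5]ᵀ enforces |x₂| ≤ 1.5 as in the example above.

import numpy as np

X = np.array([[1.2, 0.3], [0.3, 1.8]])   # example shape matrix of {x : x^T X^{-1} x <= 1}
h = np.array([0.0, 1.0 / 1.5])           # polyhedron {x : |h^T x| <= 1}, i.e. |x2| <= 1.5

# Inclusion test: the ellipsoid lies inside the polyhedron iff h^T X h <= 1
print("ellipsoid contained in |x2| <= 1.5 :", bool(h @ X @ h <= 1.0))

# Sanity check by sampling the ellipsoid boundary x = L u, |u| = 1, with L L^T = X
L = np.linalg.cholesky(X)
angles = np.linspace(0.0, 2.0 * np.pi, 361)
boundary = L @ np.vstack((np.cos(angles), np.sin(angles)))
print("max |x2| on the boundary:", np.abs(boundary[1]).max().round(3))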
In the second example, the objective is to compare the conservatism of the novel procedure, in which the saturation constraint is enforced by the GSC condition, with that of different approaches. The controller design conditions are applied to saturated T-S fuzzy systems without disturbance input, to facilitate comparison with works in the literature; a compact (no input disturbance) stabilization version of conditions (4.1)–(4.2) is presented as a relaxation of the parameterized LMIs as follows. The vertex matrices A₁, A₂, B₁, B₂ and B_{wi} of the T-S model, which depend on the two scalars a and b, together with C_i = [1 0], H_i, D_i, J_i, J_{wi} = 0, i = 1, 2, are given in (4.17).
Corollary 4.1.1: For the positive scalars ν, ū, if there exist matrices X_k ∈ Sⁿ, Y_j ∈ R^{m×n}, Z_j and a diagonal matrix T ∈ S^m such that the LMIs (4.18)–(4.19) are satisfied, where ν is the bound on the rate of the membership functions, |μ̇_j(t)| ≤ ν ≤ 1, then the state-feedback controller gains are given by K_j = Y_j (Σ_k μ_k X_k)⁻¹.
Conditions (4.1)–(4.2) are then employed with a quadratic Lyapunov function and compared to a state-feedback controller constrained by the norm-bounded approach (Lemma 3.1.1), using the plant data (4.20): the two vertex matrices A₁, A₂ ∈ R^{2×2}, the input matrices B₁, B₂, the disturbance matrices B_{w1}, B_{w2}, with H_i = √2·I, J_i = 0 and J_{wi} = 0.
…and the references therein usually consider only a single control input signal, or two control inputs with similar bounds (e.g., |u₁| ≤ 5, |u₂| ≤ 15).

Table 4.2. Robust Pole Placement in LMI Regions.

ℋ∞ quadratic stability      | Polytopic                 | PDC-Fuzzy
Saturation handling         | NB          | GSC         | GSC
γ_opt                       | 2.8887      | 6.104×10⁻³  | 6.444×10⁻³
                            |             | high gain   | high gain
γ_opt with D-stability      | D(20, 5): 1.3054 | D(30, 15): 0.0977 | D(30, 15): 0.0978

NB – norm-bounded; GSC – generalized sector condition.
In this case, however, the bounds ū₁ (in rad) and ū₂ = 2820 N·m (equivalently 0.0524 rad) differ greatly. Hence, the problem becomes more severe when optimizing the disturbance-attenuation level with the stabilization conditions approached by the convex combination. For example, the fuzzy feedback gains obtained for the 1st catalog of Table 4.2 are given, respectively, with the norm-bounded (NB) constraint,
K₁ = [−0.0021  −0.1015; −0.0166  −0.2075],  K₂ = [58.841  1549.89; 32.781  10960],
and with the GSC condition,
K₁ = [4.1942  3.240; 0.3195  0.2468]·10⁹,  K₂ = [15.072  16.881; 14.113  17.626]·10⁹.
4.3. Observer-based Feedback Stabilization
4.3.1. Generalization of Young's Inequality
As analyzed in Section 2.3.2.3, the stabilization of the closed-loop system (2.115), using the parameter-dependent ellipsoidal domain (2.116) corresponding to the feedback controller (2.114) and the auxiliary controller (2.118), is stated as follows. Given any matrices P and Q, Ker P and Ker Q denote bases of the null spaces of P and Q. By using the projection lemma (a generalization of Finsler's lemma), the nonlinearities in the stabilizing condition are converted into an affine parameter condition: the solvability of the quadratic inequality (4.29) with respect to the slack variable is equivalent to the feasibility of the two underlying LMIs Ker(P)ᵀΨKer(P) ≺ 0 (4.30) and Ker(Q)ᵀΨKer(Q) ≺ 0 (4.31). Condition (4.30) is then rearranged, and a congruence transformation by diag{P⁻¹, T, I, I} is applied; the algebraic manipulation effectively permutes the nonlinear terms K(θ)C(θ)X into the SOF controller analysis via the substitution Y(θ) = K(θ)C(θ)W and the slack variable W. In addition, if W does not appear in the controller structure, the stabilization conditions return to the general case of SOF controller synthesis for LPV systems, so this approach provides a more general condition; based on this development, a parameter-dependent SOF and a fuzzy PDC controller are addressed in (Bui Tuan et al., 2021a) (Remark 4.2.1).
Theorem 4.3.1: In the presence of disturbances, for the positive scalars η, ν, ū_s, ε, if there exist continuously differentiable matrix functions X, P₂ : U_p → Sⁿ (positive definite), Y₁ : U_p → R^{m×n}, Y₂, Z₁ : U_p → R^{n×p}, and a diagonal matrix function T : U_p → S^m, such that the PLMI conditions (4.35)–(4.36) hold for all (x, e) ∈ E and s = 1, …, m, then the feedback controller and observer with gains K(θ) = Y₁(θ)X(θ)⁻¹, L(θ) = P₂(θ)⁻¹Y₂(θ) ensure that:
(1) for w(t) = 0, the ellipsoid E is a region of asymptotic stability for the saturated LPV system (2.115), and the error estimate e(t) converges asymptotically in the domain E;
(2) for a bounded disturbance w(t), from (x(0), e(0)) ∈ E₀ ⊆ E, the trajectories of the saturated closed-loop system (2.115) stay in the enclosed domain E, corresponding to the performance index γ.
The proof combines the GSC condition (4.28), the derivative of the PDLF (2.102) along the trajectories of system (2.113), and the ℋ∞ performance inequality, so that V̇(t) + η(‖z(t)‖² − γ²‖w(t)‖²) ≤ 0; when w(t) = 0 this ensures that E is a region of asymptotic stability, and when w(t) ∈ W \ {0} the trajectories starting in E₀ do not leave the set E.
Theorem 4.3.2:
In presence of disturbances, for the positive scalars wt the ellipsoid E is a region of asymptotical stability for saturated LPV system(2.115), and error estimate et converges asymptotically in the domain E.
0 ,,,,. s u If there exist continuously differentiable matrices function 12112 ,:,,,: n pp PPYGG UU S 2 ,:, mnnp p Y U RR a matrix function 3 ,:, mm p XY U R and a diagonal matrix func-tion :, m p T U S such that the following PLMI conditions:
1122 133 1 312 2 12 0 00 00 00000 m w S T T T TT wd w en r X PBY Y IYGGT BPBPI JHJI HI (a). 1 00 00 1 0 1 22 0 0 0, 0 PxP PeP ± (b). 11 22 2 0, 0 s s T T s PG PG u ± 0, (4.42) p (4.43)
hold, gains ,, 12 xePP ,, E 11 1 ,..,, sm then feedback controller and observer with 122 , KXYLPY ensures that
(1) for 0,
(2) for bounded disturbance rated closed-loop system (2.115) are contractive in the domain ellipsoid E corre-, wt from 0 0,0 xe EE the trajectories of satu-sponding to performance index . where, 221133222 , . S S PAPPAYCP &&
Proof.
A decomposition is used for the gain-scheduled DOF design strategy with the transformation matrix variables (4.52)–(4.53), which partition the Lyapunov matrix P(θ) and its inverse into the blocks X(θ), Y(θ), M(θ), N(θ), with Y(θ)X(θ) + N(θ)M(θ)ᵀ = I. Based on this transformation and the classical change of controller variables Â(θ), B̂(θ), Ĉ(θ), D̂(θ) (and F̂(θ), Ĝ(θ) for the auxiliary controller) defined in (4.54)–(4.58), we obtain the following stabilization conditions for the saturated system (2.121) with the feedback control law (2.125).
Theorem 4.4.1: In the presence of disturbances, for the positive scalars η, ν, ū_s, ε, if there exist continuously differentiable matrix functions X, Y : U_p → Sⁿ (positive definite), Â : U_p → R^{n×n}, B̂ : U_p → R^{n×p}, Ĉ, F̂ : U_p → R^{m×n}, D̂, Ĝ : U_p → R^{m×p}, and a diagonal matrix T ∈ S^m, such that conditions (4.60)–(4.62) and the initial-condition and saturation constraints (4.64)–(4.65) are satisfied for all ρ ∈ U_p and s = 1, …, m, then the controller matrices A_c(θ), B_c(θ), C_c(θ), D_c(θ) recovered through (4.63) ensure that:
(1) for w(t) = 0, the ellipsoid E is a region of asymptotic stability for the saturated LPV system (2.121);
(2) for a bounded disturbance w(t), from (x(0), x_c(0)) ∈ E₀ ⊆ E, the trajectories of the saturated closed-loop system (2.121) converge within the enclosed domain E, corresponding to the performance index γ.
In the proof, the derivative Ṗ(θ) is handled through the relationship (4.53)–(4.54) between the blocks, the saturation constraints on the feedback controller (2.125) and the auxiliary controller (2.126) are expressed through the GSC inequality (4.66), and the ℋ∞ performance is combined with the derivative of the Lyapunov function (2.123) along the trajectories of (2.121); condition (4.60).b guarantees the positive definiteness of the matrix function P(θ), and a non-zero scalar ensures that the coupling matrix is nonsingular. Inherently, the above conditions are interpreted in the same way as in the previous theorems: the saturation constraints involving the feedback controller (2.125) and the auxiliary controller (2.126) are analyzed first, and the congruence transformations (4.52), (4.53), (4.58) make LMIs (4.61)–(4.62) equivalent to the required conditions.
Remark 4.4.1. A bilinear coupling between the transformed controller variables and the multiplier T(θ) prevents condition (4.60).a from being affine. In this case, the dead-zone nonlinearity is included in the dynamic controller (2.120) through an anti-windup term:
ẋ_c(t) = A_c(θ)x_c(t) + B_c(θ)y(t) + E_c(θ)ψ(u),  u(t) = C_c(θ)x_c(t) + D_c(θ)y(t).  (4.69)
This entails a slight modification of the extended system (2.121) in (4.70)–(4.71); denoting a new variable Ê(θ) for the bilinear product (4.71), the stabilization conditions of Theorem 4.4.1 return to a convex form, and the anti-windup gain E_c(θ) is obtained from Ê(θ) through (4.72).
Remark 4.4.2. It should be noticed that the observer-based output-feedback development of the previous section is a particular case of dynamic output feedback, obtained through the identifications (4.73) (x_c(t) = x̂(t), with the controller matrices built from A, L, C, B and K). Besides, we only present the ordinary case where the order of the dynamic output-feedback controller equals that of the plant; the square matrices N(θ), M(θ) then provide unique solutions for Eqs. (4.63) and (4.72).
Table 4.3. Multi-objective optimization γ_opt and β_opt.

Parameter-dependent Stabilization & Actuator Saturation
          | State Feedback              | Output Feedback
          | SF            | SOF         | OBF           | OBF #         | DOF
          | Theorem 4.1.1 | Theorem 4.2.1 | Theorem 4.3.1 | Theorem 4.3.2 | Theorem 4.4.1
γ_opt     | 0.3656        | 0.3661      | 0.7094        | 2.8827×10⁻³   | 0.3753
β_opt     | 1.0652×10⁻⁶   | 2.2682×10⁻³ | 4.4709×10⁻⁴   | 3.9896×10⁻⁷   | 2.4509×10⁻³

SF: state feedback; SOF: static output feedback; OBF: observer-based; DOF: dynamic output feedback.
Table 4.4. Minor Axis Maximization, δ ≤ 10.

Parameter-dependent Stabilization & Actuator Saturation
          | State Feedback              | Output Feedback
          | SF            | SOF           | DOF
          | Theorem 4.1.1 | Theorem 4.2.1 | Theorem 4.4.1
          | Problem 4.5.1 | Problem 4.5.1 | Problem 4.5.2
0.01      | 1.3106        | 1.3108        | 1.4373
0.1       | 0.7324        | 0.7344        | 0.8361

SF: state feedback; SOF: static output feedback; DOF: dynamic output feedback.
…is represented by the state-space model (4.82), with a parameter-dependent matrix A(ρ(t)), input matrices B and B_w, output matrix C, and D = 0, D_w = 0, H = 1, J = 1, J_w = 0.
Table 4.5. Actuator Saturation – Optimization Problems.

          | State Feedback              | Output Feedback
          | SF            | SOF           | DOF
          | Theorem 4.1.1 | Theorem 4.2.1 | Theorem 4.4.1
Multi-objective optimization (Problem 4.1.1):
γ_opt     | 1.0448×10⁻²   | 5.3821×10⁻²   | 1.0451×10⁻²
β_opt     | 2.6226×10⁻⁴   | 4.2281×10⁻⁴   | 2.6020×10⁻⁴
Minor Axis Maximization, δ ≤ 10 (Problems 4.5.1 / 4.5.1 / 4.5.2):
0.01      | 1.9632        | 4.3569 *      | 1.9664
0.1       | 0.8316        | 1.6935 *      | 0.8337

SF: state feedback; SOF: static output feedback (*: ε = 0.02); DOF: dynamic output feedback.
Implementing the minimization methods as in Table 4.3 and Table 4.4 for the quadratic stabilization conditions, we obtain the results in Table 4.6. The quadratic stabilization expansion based on Theorem 4.1.1, Theorem 4.2.1 and Theorem 4.4.1 using the Lyapunov function (2.120) can be interpreted briefly as follows: consider the parameter-dependent conditions (4.1)–(4.2) in Theorem 4.1.1 with the adjustments X(θ) → X, Ẋ(θ) → 0, T(θ) → T, and then analyze the other two theorems in a similar way. A performance degradation can be observed when using a quadratic Lyapunov functional for the stabilization conditions, as the optimization values γ_opt increase approximately five times and β_opt approximately ten times (comparison between Table 4.6 QLF and Table 4.3 PDLF). Similarly, we also have conservatism for the minor-axis optimization conditions (comparison between Table 4.6 QLF and Table 4.4 PDLF), where the value also grows approximately ten times.

Table 4.6. Multi-objective optimization – Example 4.5.1.

Quadratic Stabilization & Actuator Saturation
          | State Feedback              | Output Feedback
          | SF            | SOF           | DOF
          | Theorem 4.1.1 | Theorem 4.2.1 | Theorem 4.4.1
Multi-objective optimization (Problem 4.1.1), compare with Table 4.3:
γ_opt     | 1.7416        | 3.8731        | 1.7416
β_opt     | 1.8071×10⁻²   | 7.6225×10⁻²   | 1.8069×10⁻²
Minor Axis Maximization, δ ≤ 10, compare with Table 4.4:
0.01      | 10.4753       | 16.3353       | 15.4075
0.1       | 6.1052        | 14.9865       | 8.7692

SF: state feedback; SOF: static output feedback; DOF: dynamic output feedback.
…the Auxiliary-Function-Based Inequality II (AFBII) (P. G. Park et al., 2015), the reciprocally convex combination (Datta et al.; Park et al., 2011; C. K. Zhang et al., 2017; Zhang et al.), and the generalized vector-based multiple integral inequalities (Y. Tian et al.; Briat; Van Hien & Trinh; Zhao et al.) significantly improve the asymptotic stability analysis of TDS systems. The fundamental methodology of these bounding techniques is to better estimate the lower bounds of the quadratic integral terms appearing in the derivative of the Lyapunov–Krasovskii functional.
It should be noted that this stability analysis approach simplifies the stability PLMI condition, which involves only the three decision matrix variables P(ρ), Q and R from the Lyapunov–Krasovskii functional (no slack matrices are included). Then, to avoid the use of old-style techniques (e.g., the cross-term inequality, model transformation), the Jensen-based inequality (5.5) is employed to bound the integral term. The coupling between decision matrices and system matrices in inequality (5.20) is more complicated than in inequality (5.19); and if double and triple integrals were included in the LKF, the design of the stabilization conditions would become genuinely troublesome.
5.2.1. Single Delay-Dependent LKF Stability and Associated Relaxation
Lemma 5.2.1. Given positive scalars γ, h̄, with delay and parameter belonging to the sets
H₀ = { h ∈ C¹(R₊, [0, h̄]) : |ḣ(t)| ≤ μ < 1 },  U_p = { ρ : ρ_i(t) ∈ [ρ̲_i, ρ̄_i], |ρ̇_i(t)| ≤ ν_i, i = 1, …, N_p },
the LPV time-delay system (5.1) is asymptotically stable with the designed ℋ∞ performance attenuation γ if there exist a continuously differentiable matrix function P : U_p → Sⁿ (positive definite) and matrices Q, R ∈ Sⁿ (positive definite) such that the parameter-dependent matrix inequality (5.20) holds, where the leading block collects Ṗ(ρ) = Σ_i ν_i ∂P/∂ρ_i, sym{P(ρ)A(ρ)} + Q − R/h̄, and the remaining blocks involve A_h(ρ), B_w(ρ), H(ρ), J_w(ρ) together with the Jensen bound of the integral term.
Proof. Consider the LKF (5.11) for the LPV time-delay system (5.1). The system is delay-dependent stable if
V̇(ρ, t) = 2ζ(t)ᵀP(ρ)ζ̇(t) + x(t)ᵀQx(t) − (1 − ḣ(t))x(t − h(t))ᵀQx(t − h(t)) + h̄ẋ(t)ᵀRẋ(t) − ∫_{t−h̄}^{t} ẋ(s)ᵀRẋ(s) ds < 0  (5.22)
holds along the trajectories of the system. Jensen's inequality (5.5) is employed to bound the integral term, the ℋ∞ performance constraint is appended, and Schur's complement then yields (5.20). The proof is detailed in Appendix D.1.
Remark 5.2.1. The expansion of h̄ẋ(t)ᵀRẋ(t) produces multiple product terms R A(ρ), R A_h(ρ), etc., which could be avoided if the conditional vector were extended with ẋ(t); but then the existence of the product terms P₂ᵀA(ρ), … would prevent performing a congruence transformation with the inverses of P₁(ρ), P₂.
5.2.1.2. Wirtinger-Based Inequality
As is known, the Wirtinger-based inequality (5.6) has a better estimate of the lower bound of the term ∫_{t−h(t)}^{t} ẋ(s)ᵀRẋ(s) ds than the traditional Jensen inequality (5.5). The manipulation of this inequality is accompanied by a slight change in the LKF formulation, from (5.3) to (5.11), combined with the augmented vector χ(t) = [x(t)ᵀ  ∫_{t−h̄}^{t}x(s)ᵀds]ᵀ and the block matrix P = [P₁(ρ)  P₂; P₂ᵀ  P₃] ≻ 0 (5.21); the integral of the positive real function is then rearranged by the WBII (5.23). This yields the following lemma.
Lemma 5.2.2. For positive scalars γ, h̄, delay h(t) ∈ H₀ and parameter ρ(t) ∈ U_p, the LPV time-delay system (5.1) is asymptotically stable with the corresponding ℋ∞ performance index γ if there exist a continuously differentiable matrix function P₁ : U_p → Sⁿ (positive definite), positive symmetric matrices P₃, Q, R ∈ Sⁿ and a matrix P₂ ∈ R^{n×n} such that the PLMIs (5.24)–(5.25) are fulfilled, including the constraint P ≻ 0 of (5.21). The well-posedness of inequality (5.23) at the extreme point h(t) = 0 is validated as analyzed in Appendix D.1; considering the influence of the external disturbance, the performance constraint included in the stability condition (5.22), combined with (5.23), entails the PLMI condition (5.24).
Remark 5.2.2.
The slack-variable matrices diag{X₁, X₂, X₃} couple with the system matrices A(ρ), A_h(ρ) and B_w(ρ). Three slack variables yield a more relaxed condition, and the second LMI condition Ker(Q)ᵀΨKer(Q) ≺ 0 is always feasible. Nonetheless, too much coupling hinders the scalability of the controller design strategy. A linearization could be derived by choosing X₂ = λ₂X₁, X₃ = λ₃X₁, but this reduces the interest of the SV method; in any case, only one slack matrix is concerned in PDLMI (5.25).
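The Jensen-type bound used in Lemma 5.2.1, h ∫ ẋ(s)ᵀRẋ(s) ds ≥ (∫ ẋ(s) ds)ᵀ R (∫ ẋ(s) ds), is easy to verify numerically; the sketch below checks it on an arbitrary smooth trajectory with a positive definite weight, both of which are illustrative choices.

import numpy as np

# Verify Jensen's integral inequality: h * int x'(s)^T R x'(s) ds >= (int x'(s) ds)^T R (int x'(s) ds)
h = 0.8
R = np.array([[2.0, 0.3], [0.3, 1.0]])           # any positive definite weight
s = np.linspace(0.0, h, 2001)

# An arbitrary smooth "derivative" trajectory x'(s) in R^2
xdot = np.vstack((np.sin(3.0 * s) + 0.5, np.cos(2.0 * s) * s))

quad = np.einsum('it,ij,jt->t', xdot, R, xdot)   # x'(s)^T R x'(s) pointwise
lhs = h * np.trapz(quad, s)
v = np.trapz(xdot, s, axis=1)                    # int x'(s) ds
rhs = v @ R @ v
print("Jensen bound holds:", bool(lhs >= rhs), " lhs =", round(lhs, 4), " rhs =", round(rhs, 4))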
It is worth mentioning that when N = 1, the latter condition reverts to the stability condition (5.19) in Lemma 5.2.1; so, discretizing the auxiliary convex function in the conditions of Lemma 5.2.2 and Lemma 5.2.3 yields the least conservative results. Now, let us consider a discretization of the extended Lyapunov–Krasovskii functional (5.11):
V(ρ, t) = ζ(t)ᵀP(ρ)ζ(t) + V₂(t) + V₃(t),  with  ζ(t) = [x(t)ᵀ  ∫_{t−h̄}^{t}x(s)ᵀds]ᵀ,  P = [P₁  P₂; P₂ᵀ  P₃] ∈ S^{2n}.  (5.42)
Using this Lyapunov–Krasovskii functional, we now study the asymptotic stability of the LPV time-delay system (5.1), based on Lemma 5.2.2, as follows.
Lemma 5.2.5. For a delay-partition integer N, the LPV system (5.1) is asymptotically delay-dependent stable with h(t) ∈ H₀, parameter ρ(t) ∈ U_p, a positive scalar h̄, and the corresponding designed L₂-norm performance γ, if there exist matrices P₃, Q_i, R_i ∈ Sⁿ (positive definite), i = 1, 2, …, N, a continuously differentiable matrix function P₁ : U_p → Sⁿ (positive definite), and a matrix P₂ such that condition (5.41) holds, where the leading block is sym{P₁(ρ)A(ρ)} + Σ_j ν_j ∂P₁/∂ρ_j + Q₁ − (N/h̄)R₁, the band structure collects the terms Q_{i+1} − Q_i − (N/h̄)(R_i + R_{i+1}), i = 1, …, N − 1, and the delay space is decomposed into the N subsets
H_i = { h_i : h_i(t) = i h(t)/N, h(t) ∈ [0, h̄], |ḣ(t)| ≤ μ < 1 }.
Proof. The demonstration is delivered in Appendix D.2.
Similar to the development in Appendix D.1, combining the ℋ∞ performance criterion with the PDLKF stability condition (5.45) gives the condition (5.48), expressed in terms of the extended vector
ξ(t) = [ x_tᵀ  x₁ᵀ … x_Nᵀ  (∫ over the N delay sub-intervals of x(s)ᵀ ds)  w(t)ᵀ ]ᵀ,
where x_t := x(t), x_i := x(t − i h(t)/N) and x_N := x(t − h(t)). The integrals of the positive real functions over each sub-interval [t − ih(t)/N, t − (i−1)h(t)/N] are reorganized by the WBII (5.46), and it should be noted that, by the Leibniz rule,
d/dt ∫_{t−h_i(t)}^{t−h_{i−1}(t)} x(s) ds = (1 − ḣ_{i−1}(t)) x(t − h_{i−1}(t)) − (1 − ḣ_i(t)) x(t − h_i(t)).  (5.47)
In the view of a uniform distribution of the delay over the N segments, the resulting quadratic form leads to PLMI (5.41).
. It should realize that the uncertain delay involves in the two following cases. At the times of ,,
1 hdu 11 22 , 00 33 T T T wd hdwr 0 00 hdw mumuhmudmuwu APR h APR BPI HHHJI hRAhRAhRAhRBR RARARARBR & 11 , 1 0 0 0 0 T hdu T wd hwr hdw mumuhdmuwu 0, p AAPRQQR BPI HHJI hRAhRAAhRBR RARAARBR & where 11 1 ,, p N iiu i symPAPQQR & 2233 1,11, uduu QRRQR (5.54) 0, p (5.55)
and the delay space 1 ,0,: ||1.
ij tt belongs to the specified domain such that condition reforms to similar condition of a single delay dependent. And in the second ,0, iii dthtt then the stability case when , jj dtht that ensues on the following result.
Lemma 5.2.6. (Briat, 2008) For positive scalars γ, h̄, parameter ρ(t) ∈ U_ρ, and time-varying delays h(t) ∈ H₀ and d(t) ∈ H_d, the LPV time-delay system (5.50) is asymptotically stable with the corresponding ℋ∞ performance criterion if there exist matrices Q, R, Q_u, R_u ∈ S^n and a continuously differentiable matrix function P : U_ρ → S^n such that conditions (5.54)-(5.55) are satisfied.
with 11 77 31 22 21 411 72 22 77 33 44 7 53 2 ,, , , , , 1, , p i N P S ii T mhm T hd T w hd mm mh PAQR APR AAPR BP hRAA QRL H & 22 32 52 62 63 73 7 2 7 2 7 2 1, 1, , , , , m hd hd mh mh QR QR HH LAA LA hRA
and notation :.
Theorem 5.2.2. For positive scalars γ, δ_m, h̄, parameter ρ(t) ∈ U_ρ, and delay h(t) ∈ H₀, the LPV time-delay system (5.50) is asymptotically stable consistent with the ℋ∞ performance if there exist continuously differentiable matrix functions P, L : U_ρ → S^n and positive matrices Q, R ∈ S^n such that the following matrix inequality is satisfied:
11 2122 313233 41 5253 6263 7273 , 0 0 HJI 00 0 d wr w w I LALBL hRAhRBR & p 0, (5.56)
The satisfaction of small-gain stability for operator (5.16) is associated with finding a resilient-stable trajectory for an approximated delaydt constrained within a ball of diameter m centered along the trajectory of . ht It can be shown that if 0Let's introduce the matrices form base of the null spaces of P, and Q respectively,
with
Remark 5.2.4. m ( i dt is approached close to ) i ht and L sufficiently small, then inequality (5.57) brings about stability condition (5.20). This condition is therefore more general than the memory-delay dependent stability condition in Lemma 5.2.2. Now, similar to Theorem 5.2.1, an associated relaxation condition of Theorem 5.2.2 is provided in next result. Theorem 5.2.3: For positive scalar ,,,, m h delay 0 , ht H and parameter , p t U then the LPV time-delay (5.1) is asymptotically stable conforming to the energy-to-energy index, if there exist continuously matrices function :, :, nnn pp XL UU RS a con-tinuously differential matrices function 1 :, n p P U S positive matrices 3 ,,, n PQR S and a matrix 2 , nn P R such that the following PLMIs satisfy 2 122 3233 424344 12 2525354 7374 2 0 , 0 0 00000 00 000000 0000000 000000 T h d wr P PR I HJI LL hRR PL & 1 00000, 000000000. hw hRP IAABI I P Q ,
0 11 21 3132 6 234243 7 2 2 7 4 12 2 12 0, (5.60) , 4 4 , 00 hdmhw m T h h TT wwd LztLALAALALBt with T QR QR TT PAPRR BPBPI 7 2 7 2 0 . 0 hdmhw hdmhw HHHHJ T hRAhRAAhRAhRB First, stability condition (5.61) is rearranged as follows: Proof. & 2 2122 313233 41424344 12 2525354 61 7374 , 0000 00 000000 0000000 S T h d wr X PR I HJI LL hR & 21 0, 0 0000 R XPLhRP p 00000 000000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 , 00000000 00000000 00 00000000 00000000 00000000 00000000 hw AABI I I I I I I I I I I I I I I PQ 000000 00000000 00000000 00000000 I I I I (5.61) 12 23 0. T PP P PP f Followed the projection lemma, the feasibility of (5.61) entails the feasibility of the un-(5.62) derlying conditions: with: 1 33 211 31 7 41 2 61 2221 322 7 422 2 6 523 1 , , , , ,4, 12, 12, , i T T hd T mh T w P S i i T T m h Q PAX AAX AX BX PQRP PR PR PR & 7 43 2 6 533 73 2 7 44 4 7 6 543 2 7 74 2 4, 14, 1, , 14, 1, . m h hd m m h mh R QR PR HH QRL PR H kerker 0, T PP (5.64) p kerke r0, T QQ p (5.65) .
Finally, using Schur-complement to rearrange condition (5.60) that results PDLMI (5.57). W 0, TTT XX PQQP p (5.63)
Example 5.2.1:
A = [ -2  0 ;  0  -0.9 ],   A_h = [ -1  0 ;  -1  -1 ].   (5.66)
The analytical maximal delay value for which system (5.66) is asymptotically stable is h_analytic ≈ 6.17.

Example 5.2.2:
A = [ 0  1 ;  -1  -2 ],   A_h = [ 0  0 ;  -1  1 ],   where the time-varying delay h(t) ∈ H₀.   (5.67)

Table 5.1. The maximum admissible upper bound (MAUB) of the delay h(t) ∈ H₀ (delay-dependent stability, LTI systems).

Delay-dependent stability (LTI system) | Example 5.2.1 and Example 5.2.2 (μ = 0.5, 0.9, 0.3, 0.5) | NoV
Lemma 5.2.1 | 1.5874, 1.1798, 2.1756 | 1.5n² + 1.5n
Lemma 5.2.4 (N = 2) | 2.3200, 1.2012, 2.3025 | 2.5n² + 2.5n
Lemma 5.2.4 (N = 3) | 5.0553, 4.2626, 5.9301 | 3.5n² + 3.5n
Lemma 5.2.2 | 2.1111, 1.7576, 2.1798 | 3n² + 2n
Lemma 5.2.3 | 5.2312, 3.9416, 7.5882, 6.1862 | 9.5n² + 3.5n
Theorem 1 | - | -
h_analytics | 6.17 (Example 5.2.1) | -
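To compare the computational burden of the listed conditions, a short script (not from the thesis; the NoV formulas are taken verbatim from the table above) can evaluate the number of decision variables for the state dimension n = 2 used in these examples and for a few larger dimensions.

```python
# Number-of-variables (NoV) comparison for the stability conditions in Table 5.1,
# using the NoV formulas reported in the table.
nov_formulas = {
    "Lemma 5.2.1":         lambda n: 1.5 * n**2 + 1.5 * n,
    "Lemma 5.2.4 (N = 2)": lambda n: 2.5 * n**2 + 2.5 * n,
    "Lemma 5.2.4 (N = 3)": lambda n: 3.5 * n**2 + 3.5 * n,
    "Lemma 5.2.2":         lambda n: 3.0 * n**2 + 2.0 * n,
    "Lemma 5.2.3":         lambda n: 9.5 * n**2 + 3.5 * n,
}

for n in (2, 4, 8):
    print(f"n = {n}")
    for name, f in nov_formulas.items():
        print(f"  {name:<20s} NoV = {f(n):6.0f}")
```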
Example 5.2.2 is taken from (Kharitonov, On the stability of linear systems with uncertain delay); see system (5.67) above. We now apply Lemma 5.2.1-Lemma 5.2.3 to analyze the delay-dependent stability of the following delayed quasi-LPV systems.

Example 5.2.3: (Wu & Li, 2007) Let us consider a T-S fuzzy time-delay system whose local linear matrices are given by:
11 22 3.201.00.9 , 0.02.10.02.0 1.00.00.90.0 , 1.03.01.01.6 .6 h h AA AA , . (5.68)
Table 5.2. The maximum admissible upper bound for the delay h(t) ∈ H₀ (delay-dependent stability, quasi-LPV systems).

Delay-dependent stability (quasi-LPV system) | Example 5.2.3 (μ = 0.5, 0.9) | Example 5.2.4 (μ = 0.5, 0.9) | NoV
Lemma 5.2.1 (multi-convexities) | 0.4917, 0.4743 | 1.5782, 0.9116 | 2n² + 2n
Lemma 5.2.2 (multi-convexities) | 0.9603, 0.5113 | 1.6960, 1.2316 | 3.5n² + 2.5n
Lemma 5.2.3 (multi-convexities) | - | - | -
R is state vector with initial condition and unknown time varying delay ht assumes belong to spaces 0 .H
sat, sat, hw ,,0 hw hw xtAxtAxthtBuBwt ytCxtCxthtDuDwt ztHxtHxthtJuJwt xh & (6.1)
where xt n
p yt R and r zt R are outputs; d wt R is dis- turbance inputs; and nonlinear saturation function sat. u The parameters t and its un- known rate of variation belong to parameter spaces (2.8)-(2.9).
Similar to the analysis for the feedback controller with uncertain delay value cases, when dtht refers to an exact-memory DOF controller. If dc
cccdcccc . . ccdccc hw xtAxtAxtdtBytEu utCxtCxtdtDyt ytCxtCxthtDwt , & (6.15)
A dt H we have a resilient memory DOF 0, dc C that repre-d controller. Replacing the controller in the system (6.1), the extended closed-loop system sents a memoryless DOF controller, and if
is given by:
with 12 12 ttthtduwt hd hd zttthtduwt , . & AAABB CCCDD , TTT c txtxt and (6.16)
Theorem 6.2.2:
For time-varying delay 0 ,, d htdt HH positive scalars ,,,,
i u %%%% R continuously 23 ,,,,, nn XPPQR differentiable matrices function 1 :,,,,:, nmn and L2-bound disturbances. If there exist matrices pp dh PYYZZ % UU SR and a diagonal matrix function :, m p T U S s.t. the following PDLMIs satisfy:
2 12 02 41424344 2122 313233 2525354 6162 828384 00000 00 0000000 00000000 h dI T wd T r S w X PR YZT BI JTJI LL hRR %% %% %% 21 00 0000 T XPLhRP %%%% 1 1 23 2 2 22 0, 1,2, T h iIii PhR PhRPQRim ZZu %% %%%%% K f where 0, (6.27) p (6.28)
211 31 7 41 2 61 22211 1 322 7 422 2 6 523 62 , , , 4, 12, 12, , / p T T hd T mh T N S jj j T TT m h AXBYP AXBY AX BT PPQRP PR PR PR YZ % %% %%%%% %% %% 82 84 543 2 2 7 7 83 2 7 44 4 6 533 2 43 7 33 14, , 6
2.1 & Theorem 6.2.2 conforming to the design delay space 0 . By utilized the minor axis maximization, the non-convex problems are converted to the following linear criteria.
1 .
Problem 6.2.1. Given delay space the definition of the initial set 0 . XFind the variable decision matrices ,,, 0 , ht Hcompact sets of parameters t U and , p XPQR %%% such that the initial conditions meet condition (6.21), and the following statement fulfills:
Minimize 23 1234 10.5 fhhh (6.30)
subject to (6.18), (6.19), 4 II 123 , , PIQIRI %%% ppp 2 0. T XXI ± for pre-selected scalar , weighting scalar . , and (6.31)
It's worth noting that inequality matrix . nn X R So, the constraint (6.31) implies XIXI ± always hold for all scalar 0 T 121 TT XXXXII R , 4 , °°shows less conservatism than a directly imposed condition 1 4 . XI ° This method de-pends on the selection of , R and condition 2 TT XXXXI ± for all ,, X but the opposite holds only when 2 0. T XXI ± Thus, it is possible to miss the solution belong to the negative side: 2 0. T XXI p
Besides, we have 11 224 , TT QXQXXXI % °° and similarly for 34 , RI °14
H
Followed the definition of the domain of attraction 0 , X maximizing the size of DoA means minimizing of the greatest eigenvalue of nonlinear matrices 11 ,, TT XPXXQX %%%%
T XRX %
Table 6 . 1 .
61 The optimization of ℋ∞ performance criterion .
Example 6.2.1: Let's consider an LPV time-delay system:
01sinsin0.1 23sin0.2sin0.3 sin 0.2 , 0.1sin 0.2 0100 . 000.1 tt xtxtxtht tt t utwt t ztxtut & 0.2,0.1. By setting parameter sin tt that implies & (6.32) ,1,1 tt and assuming time-varying delay belong to where 0 . ht H
opt
Delay-Dependent Stabilization
Example 6.2.1 h 0.5, 0.5 h 10 0.9 h 3 0.99 h 3.3 0.99
(Briat, 2010) Theorem 4.1 Theorem 4.2 m m h 0 1.9089 12.8799 13.0604 04.1658 (Briat, 2015) Corollary 8.1.3 Theorem 8.1.5 10.2210 10.2210 03.4691 m h 0 m
Theorem 6.2.1 1.0821 03.1444 0 3.4924 4.5415
Theorem 6.2.2 m 1.0821 03.1444 h 0 m 1.0761 02.6648 m m 02.1131 2.2758 h 03.4924 4.5415 0
0 1 2 3 4 5
: does not include.
The following example compares the proposed implementation methods for an LTI system.
Example 6.2.3: Consider the following linear time-delay system (LTDS):
11.50110 , , 0.32001 h AABu , and 15. (6.37)
The stabilizations of LTDS (6.37) address by Theorem 6.2.1 and Theorem 6.2.2, respec-tively, for a delay bound of 1,0, h and 0.1. The estimates of DoA are carried out by the optimization Problem 6.2.1.
The estimate domain of attraction solving by Theorem 6.2.1 (a stabilization of memor-yless controller) bounds by 225 12 11.31252.002210 (for the case of 12 ) and 86.6633 (for the case of 12 ) . The maximum radius of the stability ball stabilizing state-feedback controller obtains for all delays that are less than or equal to 1 h corre-sponding to 0, and 0.1.
Table 6 .
6
2. Domain of attraction, with h 1.
Delay-Dependent Stabilization
Example 6.2.3 Theorem 3 (Dey, 2014) 0.1 12 0.0 12 106.2856 Number of Variables 2,1 nm 722 nnmm 37 Parameters , 11 10
Theorem 1 (Chen, 2017) 092.5966 2 Nnn 21 2 32 Nmm n 35 9 0.4,0, 1 N 3
Theorem 6.2.1 Theorem 1 (Chen, 2015) 84.8793 086.6634 2331 nnmn 27 084.6074 5274 nnmn 82 10 1,10,1.15 9 0.4,10
(Gomes da Silva, 2011) 83.55 2 22 3 11 nnmnm 2 22
Theorem 2 (Dey, 2014) 080.3239 722 nnm 36 10 9
Th1 (Fridman, 2003) 079.43 2 323 22 nnmn 2 33
0 1 2 3 4
: does not include; Th1: Theorem 1.
2.1, we have the following result.
Problem 6.3.1. Given delay space the initial condition definitions 0 . ht Hcompact sets of parameters 0 , XFind the variable decision matrices 1 ,,, PQR , p t U and %%% such that the following statement satisfies:
Minimize 23 1234 10.5 fhhh (6.69)
subject to (6.50), (6.51), 4222 123 , , PIQIRI %%% ppp 212 2 0 2 0. nnn TT n II I ± for pre-selected scalar , R weighting scalar R With 242 1 ,,, and . nnn QRP %%% SSR , . and (6.70)
Condition (6.70) is interpreted similar to condition (6.31), with expansions:
1212 T ± infers 0, (6.71)
121 1121224 2, TTT I °° (6.72)
Table 6 . 3 .
63 Multi-criteria optimization.
Delay-Dependent Stabilization
Example 6.3.1 Theorem 6.2.2 h 1.0 0.5 opt 1 332.6817 2.5 h 1 0.4852 1.0 0.5 opt 10 1 0.510 2.5 h 2.8908 h 1 12 1.0 0.5 0.7740 40 2.5
Theorem 6.3.1 41.0729 2.0741 1.0612 0.2985
0 1 2 3 4
Theorem 6.2.2: State Feedback Controller; Theorem 6.3.1: Dynamic Output Feedback Controller.
The generalization of proposed methods have been developed for: systems with un-combination of p x slide on the plane px (shows in Figure A-1.a) is given:
112233 xx : p x x 44 ,, p x x C xpx and the set of parameters fine combination of s With x in space 3 R (shows in Figure A-1.b) is obtained: 4 1 i ii 1, R (A.2) then the af-
certain parameters ˆ, t systems with asymmetrical limits on derivative , t & systems with delayed parameters . tht Or consider an extension of the parameter-dependent decision matrices such that , QtRt are other per-spectives that will be developed.
Application for dynamic systems such as vehicle body stability system (ESC, ESP, ADS, etc.), remote robot control for slave -master system (e.g., for medical service), or other engineering systems. affine
At the optimal solution, there is an estimate of the Lagrange multipliers for the equality constraints y such that: Fxhhh is the third differential of Fx taken at x along the collection of direction 123 ,,. hhh Using the concept of self-concordance, new barrier functions have been devised for certain convex programming problems, e.g., semidefinite programming.
of vector x. where 3 ,,
A.2.1.1 The Primal Barrier Method for Linear Programming
Consider a standard-form linear program, then a barrier sub-problem associated with lin-ear problem (A.22) is A.2.1.2 The Primal-Dual Barrier Method for Linear Programming
Minimize In the last decades of the last century, innumerable papers have been written about the 1 ln m T j j cxx subject to A , xb (A.25) interior revolution relates to the primal-dual family. Both methods are based on applying with the assumptions (1) the set of x satisfies A , xb and 0 x Newton's method, but the Primal barrier algorithm is formulated in terms of only seeking primal variables x. Let's consider the optimal solution of the sub-problem (A.25) that satisfies (A.27) for some vector y. By defining a vector z such that f is nonempty, (2) given the set , yz satisfies , T Ayzc and 0 z f is nonempty, and (3) rank . Am , . T cAyz Xz 1 (A.32) Make the use of Lagrange multiplier estimate to seek dual variables y and z satisfying the following barrier trajectory
11 TT gxcXAycXAy , B 11 The barrier trajectory for standard-from LP (A.22) is defined by vectors . , for 0, , for 0, . kk T kkk kk Axbx Ayzcz Xz 1 f f , kk xy (A.26) (A.33)
1 kk kk , for . T Axbx XAyc 1 Starting from 0, x f using gradient and barrier Hessian give in (A.24), the Newton equa-0, f (A.27) tion of sub-problem (A.25) corresponding to Newton step p k is 1 2 1 . 0 T k kk kk k k p Xc XA y Axb A 1 (A.28) , , , 00 0 . 0 xkk TT ykkk kkzkkkk ApbAx AIpcAyz ZXpXZ (A.34) 11 using the similar transformation as (A.29)-(A.30) to eliminate p x,k and p z,k , we have In which the Newton step p k satisfies 111 , . TT kkykkkkkkk AZXApAZXcAyXbAx
21 kkkkkk XpXAy T 1 1 , (A.29)
for some Lagrange multiplier vector y k+1 . Using the relation plying the latter equation with 2 , AX we have k 2 1 , T kkkk AXAyAXXc 1 ,0, kk AxbAp by multi-(A.30)
has a unique solution y k+1 since rank, Am and step is defined in term of y k+1 as follows 2 T k AXA is positive definite. So, the new
12 11 T kkkkk pxXAyc . (A.31)
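To make the barrier sub-problem (A.25) concrete, the following sketch (not from the thesis; the problem data A, b, c are made-up illustrative values) solves minimize c'x - mu * sum(ln x_j) subject to Ax = b for a decreasing sequence of mu with CVXPY. The successive minimizers trace the barrier trajectory (central path) and converge to a solution of the underlying LP as mu tends to 0.

```python
# Illustrative central-path computation for the barrier sub-problem (A.25):
#   minimize  c'x - mu * sum(log(x))   subject to  A x = b,  x > 0,
# solved for a decreasing sequence of barrier parameters mu.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 3, 6                          # made-up problem size
A = rng.standard_normal((m, n))
x_feas = rng.uniform(0.5, 1.5, n)    # guarantees a strictly feasible point exists
b = A @ x_feas
c = rng.standard_normal(n)

x = cp.Variable(n)
for mu in (1.0, 1e-1, 1e-2, 1e-4):
    objective = cp.Minimize(c @ x - mu * cp.sum(cp.log(x)))
    prob = cp.Problem(objective, [A @ x == b])
    prob.solve()
    print(f"mu = {mu:8.4f}   c'x = {c @ x.value:.6f}")
# As mu -> 0, c'x approaches the optimal value of the LP: min c'x s.t. Ax = b, x >= 0.
```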
In the late 1980s, the analysis of polynomial-time complexity for an interior method
should recognize the work of (Nesterov & Nemirovskii, 1994) with the contribution of
the defining of a self-concordant barrier function. A convex function :
12 , , BB , gxcXHxX 1 1,,1, and T X 1 K means the diagonal matrix whose diagonal elements are those . (A.24) (2) for all where 323/2 ,: ,,2. nT xhFxhhhhFxh R
The gradient of barrier function and barrier Hessian have simple forms:
nn Fx RR a is self-concordant in if (1) Fx is three time continuously differentiable in , and
With the presence of nonlinear problem in the third equation. Following Newton's method, we obtain a linear system for Newton step p k in x, y, and z: 1
and the references therein. The semidefinite programming problem can be expressed as follows:

Minimize   trace(C X)                                   (A.36)
subject to trace(A_i X) = b_i,  i = 1, ..., m,   X ⪰ 0,

with an appropriate real matrix C, symmetric matrices A_i ∈ S^n, and a vector b ∈ R^m.
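As an illustration of the standard-form SDP (A.36), the problem can be posed and solved directly with CVXPY; the numerical data below are invented for the example and chosen so that the problem is feasible and bounded.

```python
# Standard-form semidefinite program (A.36):
#   minimize    trace(C X)
#   subject to  trace(A_i X) = b_i,  i = 1..m,   X >= 0 (positive semidefinite)
import numpy as np
import cvxpy as cp

n, m = 3, 2
rng = np.random.default_rng(1)
C = np.eye(n)                                   # positive definite cost => bounded problem
A_list = []
for _ in range(m):
    Ai = rng.standard_normal((n, n))
    A_list.append((Ai + Ai.T) / 2)              # symmetric constraint matrices
B0 = rng.standard_normal((n, n))
X0 = B0 @ B0.T                                  # a feasible PSD point used to build b
b = np.array([np.trace(Ai @ X0) for Ai in A_list])

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [cp.trace(A_list[i] @ X) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print("optimal value:", prob.value)
```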
that leads to the following results.
Proposition B.2.1. Supposing M is bounded LTI operator mapping w z and Ais Hur-witz matrix. Then the uncertainty structure , u M is robustly well-connected, and the system (B.6) is asymptotically stable if one of the following conditions hold:
§ there exists L § there exists matrix D such that , n P S T T w T w H PAAPPB LML 2 1 L such that 0. w LHJ J L p 1. (B.9)
The formulation (B.15) is dependent on the decision variable P and slack variables. It should remark that the equivalence of the two conditions (B.14) and (B.15) is still held in the case of taking into account the PDLF
T ww T T PH IJHJ P Then, following the argument of S-variable LMI based on Finsler's lemma that yields:
0, UUU 123 TTTT p TT with an injected matrix dimension. . are slack variable (SV) matrices of adequate (B.15) P By applying Schur's com-plement for the latter inequality that entails in the LMI condition:
111213 2223 33 TTTT 0 w TTT wwww T UAAUUBAUUAUPH IUBBUUBUJ UU I 0. (B.16) p
The stability condition (B.14) is satisfied if condition (B.16) is fulfilled. Furthermore, the
matrices ,1,2,3
2), take full advantage of null can-
celation equation:
ABIxtwtxt ker TTTT w & 144424443 144424443 0. (B.13)
Consider a quadratic Lyapunov function candidate
kerker T Vxt & p , 0 (B.14)
T VxtxtPxt for LPV system (B.2). Then, the derivation of this function along the trajectories of system dynamics combining with ℋ∞-norm can be represented by: i Ui are slack variables in the above inequality, do not require definite positive symmetry like P. And, if 123 ,0,0, UPUU then condition (B.16
Let us now rewrite the full-block problem formulation with the S-variable approach by taking the derivative of the Lyapunov function candidate
J 0 T T QS wsws ds zszs SR 0. (B.17)
T VxtxtPxt along the trajectories of system dynamics combining with IQC (B.17) as follows
kerker kerker T TTT Vxt && J ker, T TTT xtxtwt & 0000 0, p 00000 00 00000 000 0000 000 0000 . 00 0000 00 00 T T T ww (B.18) T T ww QS II IPI HJHJ IPI SR P II P II QS II HJHJ SR Similar to the previous argument, there exists an SV matrix such that where
with 0, AIBUUU TT p 123 , TTTT w . (B.19)
Gives two signals mance specification from the channel , wz if the following condition holds 2 , wz L satisfies the IQC perfor-
).(LMI regions -Chilali & Gahinet, 1996) A subset D of complex plane is called an LMI region if there exist a real symmetric matrix L, real matrix M such that Ais said D-stable when its spectrum :
Definition B.4.1. : zfzzMzM D 0, DL C where fz D is characteristic function of D. Given below the examples of LMI regions (B.20) 1 -Half-plane Re:20. zfzzz D 2 -Disk centered at ,0 q with radius :0, rqz rfz qzr D p and some dynamic e.g., oscillations Re, ; zzr bandwidth 12 Re; z horizontal strip Im; z and damping cone RetanIm. zz
Example B.4.1. Giving a dynamic system:
. xtAxt & (B.21)
Then, matrix Ai A belongs to region D. Let's consider a Lyapunov characterization of LMI stability of system (B.21):
T PAAP p 0, (B.22)
Then, a disk of radiusr and center if there exists a symmetric matrix P such that: ,0 q is an LMI region with characteristic function f D 0, with 0. T rPqPA fPAP qAPr D pf (B.23) Pole clustering in LMI regions can formulate as a more general region, e.g., ,,. r S
Let's analysis the ℋ∞-performance LMI constraint (B.5) belong to this disk region. Spe-cifically, system (B.2) is quadratically D-stable in LMI region disk of center ,0 q and radius , r if there exists a symmetric matrix P such that condition (B.24) holds, . 0 0 0 w T T dw r rPqPPAPB rPH IJ I p (B.24)
where: L { 1 011 000 T M , and rq M qr { 2 01. M
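A small numerical check of the disk-region condition can be set up as follows. This sketch is illustrative only: the matrix A and the disk parameters are made up, and the disk is taken as centered at (-q, 0) with radius r, i.e. |z + q| < r, which is one common sign convention for the LMI region of Example B.4.1; the thesis' sign convention may differ.

```python
# Illustrative check that the eigenvalues of A lie in the disk |z + q| < r
# via an LMI-region condition: find P > 0 with
#   [ -r*P         q*P + A P ]
#   [ q*P + P A'   -r*P      ]  < 0.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 0.3],
              [ 0.0, -2.0]])        # made-up test matrix (eigenvalues -1 and -2)
q, r = 1.5, 1.0                     # disk centered at (-1.5, 0), radius 1

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
M = cp.bmat([[-r * P,           q * P + A @ P],
             [q * P + P @ A.T,  -r * P       ]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), M << -eps * np.eye(2 * n)])
prob.solve()
print("LMI feasible:", prob.status == "optimal")
print("eigenvalues of A:", np.linalg.eigvals(A))   # both lie inside the disk
```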
B.6 Generalized ℋ2 Performance

The generalized ℋ2 (energy-to-peak) performance keeps the peak amplitude of the output below a certain level. The ℋ2-norm of a system measures the output energy in the impulse responses of the system. Suppose LPV system (B.2) with
0 is asymptotically stable, if and only if exists a performance index D symmetric positive matrix , nr PQ SS such that the following LMIs: 0, and a
2 0,0,trace. TT APPAPBPC Q IQ f p (B.25)
Let I be an interval in R . For ,,
abI then an abif the following inequality IR is said to be J-convex on integrable function :, a , , bbbb aaaa fddfdd (C.4) is valid for all integrable function . fdom
By choosing ,, fxfxQfxfxx T & with , and ,
nn xQ IS
1.2 Discretized Convex Function We
For an auxiliary scalar function :,, use an example to show the effective reduction of the conservation of inequality by the discretizing method of the n-convex function. ¡ is J-convex functions satisfying the definitions given in Appendix A.. The gap in Jensen's inequality is characterized by a positive difference between integrals 0 ,1,,.
And,
2 22 2 3322 2 202 2 3 . 22 3 22 565. 33 2 . 3164812 ab ab b b a a II ccc xx x baba cccbacba babaaabb 2
Similarly, by dividing interval , ab into 3 equal segments, we have integral
12 111 33 222 12 33 22 333 ababb aabab Icxdxcxdxcxdx 3 bababa 2 ,
and 3 th -order integral inequality gap
12 33 12 33 22 2 222 33222422 303 3 3 . 333 . 3 222 741407423 343 ababb b a aabab II cccc xxx x bababa cc baabaabbab ba
n (C.11) and any continuous inte-xab . wwddspdds 0, aaij bb dds a positive matrix n R R grable function satisfies :, , n wxab 1 1 , bb TT ii as i i wRwddswRw p holds with 2 , iiii asas bbbb R the following equality R 33 2 . 10812 cbacba 3
a. The chord of the parabola b. The integral region of the c. The integral domain
expresses convex inequality. parabola is in the interval. fragmentation.
Example C.1.1: By considering the following integrals:
111 2 1 222 1 2 2 22 012 2 122 ,,and . bbabb aaaab IcxdxIcxdxIcxdxcxdx bababa where 2 , fxcxc
ii IIiN K 33 2 . 341212 analyze the first three orders 1,2,3, as follows: i 01 2 2 3 3333 3 2 b b a a II cc x x ba cccbacba babaabba 1 1 Let's
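The reduction of the Jensen gap under interval splitting, which Example C.1.1 computes analytically for f(x) = cx², can also be checked numerically. The script below (illustrative only, with an arbitrary test function) evaluates the gap between the integral of w² and its Jensen lower bound, together with the refined bound obtained by splitting [a, b] into N equal sub-intervals.

```python
# Numerical illustration of how splitting the integration interval tightens
# Jensen's inequality:  (1/(b-a)) * (integral of w)^2  <=  integral of w^2.
import numpy as np
from scipy.integrate import quad

w = lambda s: np.exp(-s) * np.sin(3 * s)      # arbitrary test function
a, b = 0.0, 2.0

def jensen_gap(N):
    """Gap between the integral of w^2 and the refined lower bound with N sub-intervals."""
    edges = np.linspace(a, b, N + 1)
    rhs = quad(lambda s: w(s) ** 2, a, b)[0]
    lower = sum(quad(w, edges[i], edges[i + 1])[0] ** 2 / (edges[i + 1] - edges[i])
                for i in range(N))
    return rhs - lower

for N in (1, 2, 3, 10):
    print(f"N = {N:2d}   gap = {jensen_gap(N):.6f}")
# The gap shrinks monotonically as N grows, mirroring the discretized
# (delay-partitioned) bounds used in the LKF analysis.
```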
C.
The bounding of this cross terms eliminates the quadratic integral terms in the derivative of the Lyapunov-Krasovskii functional (the one related to the derivative of the double integral 2Vt &
The Lyapunov-Krasovskii functional is employed for this class of system:
, VxQxdVt xt { 1 2 23 0 0 00 T t T tht E P xtxt I PP ytyt . 14243 P After derivative of LKF equation (C.22) similar to the previous methods, we obtained the (C.22)
following cross terms:
0 2. T t T th h xt Pyd A yt (C.23)
t hh th xtytytAAxtAyd & (C.21)
has attracted significant attentions in last decade:
,.
along with a specified LK function to be able to capture both the maximal nominal delay value max, hht and the maximal resilient delay max. htdt This delay-dependent approach is based on Lyapunov-Krasovskii
technique with a resilient uncertain By consider an additional Lyapunov-Krasovskii functions as follow: , t is introduced by relation . dthtt
unumu tt TT tdttht VtxQxdxRxdd && . (C.39)
with max,||.
m
The derivative of this LK function results to the following integral inequalities with approximately limited integral range:
&
As in the preceding discussion, the above inequality gives rise to two possibilities at time t_i.

D.4 The proof of Lemma 5.2.7.

Combining (D.10) with (D.11) gives the following form; using the Schur complement then leads to condition (5.54).
1 01111111 00 1 00 r TTTTT u I VVzzwwR R && where time expression t is dropped 1 , T TTT th xxw , hdw hdw mmhdmw HHHJ hAhAAhB AAAB 1 1 , ,1. (D.12) 0 p N T iu i i T dhu T d P PAAPQQR AAPRQQR EPI
1 0000 1 0 0 TTTTT r I Vtztztwtwttttt R , & where 0 , , hdw T TTTT thd hdw HHHJ t xxxwt hAhAhAhB and 0 || ,. 1 00 00 T i i i T h T d T wd P PAAPQR APRQR AP BPI & & (D.10)
On the other hand, applying the analysis in (C.40) we have
11 2 TTTT tht tutdudmttmu tht VtxQxdtxQxxRxhtxsRxsds &&& &&&& 2 11. TTTT tutdtudmtutdhudh xQxxQxxRxxxRxx && (D.11)
the delay approximatesdt is close to so T dhdh xxRxx approaching 0. And, the second case is . ht According to analysis in (D.3) when . t t i 0
jj dtht Case 1: (Time uniformity) , ii dtht then the derivative LKF conditions associate D
Table E.1. The parameters of the membership functions.

Parameters | Description | Values
a1, a2 | The width of the MFs (standard deviation) | 0.0021, 0.3078
b1, b2 | The center of the MFs (mean) | 0.7219, 5.3137
c1, c2 | - | -0.1656, 0.3155
C_fz1 | The coefficient of the fuzzy set in region M1 | 2.389 × 10³
C_fz2 | The coefficient of the fuzzy set in region M2 | 0.8149

Assumption 1: Sharing the same fuzzy structure and the same nonlinear function characteristics, the proposed rules can be determined by only h_{1,j}, j = 1, 2.
Table E.2. The nominal parameter values of the tank process.

Parameters | Description | Values
A_s1, A_s3 | Areas of tanks 1, 3 (m²) | 2.8 × 10⁻³
A_s2, A_s4 | Areas of tanks 2, 4 (m²) | 3.2 × 10⁻³
a_s1, a_s3 | - | -
Table E.3. Coefficients of the Pacejka tire model: 265/75 R16.

Tire characteristics (front | rear) | Parameters of the membership function (front | rear)
B1 = 9.2916 | B2 = 9.3449 | Cf1 = 1.0748e5 | Cr1 = 8.2913e4
C1 = 1.6307 | C2 = 1.6307 | Cf2 = 534.98 | Cr2 = 407.68
D1 = 4339.4 | D2 = 3393.2 | a1 = 0.0801 | a2 = 112.64
E1 = 0.2290 | E2 = 0.2292 | b1 = 0.7309 | b2 = 676.78
 - | - | c1 = 0.0202 | c2 = 112.61
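For reference, the coefficients above can be plugged into the standard Pacejka "magic formula" for lateral tire force. The formula itself is not reproduced in this excerpt, so its usual form is assumed here; the snippet below is an illustrative evaluation with the front-tire coefficients of Table E.3.

```python
# Lateral tire force from the standard Pacejka "magic formula"
#   Fy = D * sin(C * arctan(B*alpha - E*(B*alpha - arctan(B*alpha)))),
# evaluated with the front-tire coefficients of Table E.3 (assumed usage).
import numpy as np

B, C, D, E = 9.2916, 1.6307, 4339.4, 0.2290   # front tire (B1, C1, D1, E1)

def pacejka_lateral_force(alpha_rad):
    """Lateral force (N) as a function of tire slip angle (rad)."""
    Ba = B * alpha_rad
    return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

for alpha_deg in (1, 2, 5, 10):
    alpha = np.deg2rad(alpha_deg)
    print(f"slip angle {alpha_deg:2d} deg  ->  Fy = {pacejka_lateral_force(alpha):8.1f} N")
```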
Table E.4. Simulation vehicle parameters - SUV E-Class car model.

Parameter | Description | Value
Iz | Yaw moment of inertia at the center of gravity (CG) (kg·m²) | 2988
lr | Distance from CoG to rear axle (m) | 1.77
lf | Distance from CoG to front axle (m) | 1.18
Mz | Driving/braking force control (Nm) | -
δf | Front steering angle (rad) | -
δsw | Steering wheel angle (deg) | -
iS | Steering ratio | 20.0312

When simplifying the model of the system based on the two vertices of the front wheel slip angle, we consider only
The local time-varying matrices i A R 22 , B i R 22 , and C 12 R are given as:
i A 22 01 22 12 xx zxzx iix mvmv ii IvIv CCvt CC , i B , , 21 2 0 , zz fj fjf m II C Cl B , i , fj , fjf C Cl 2 2 z m I and d B 1 0 z 1 0 . m I
where 0 CCC ifjrk if , jk then 1,4 i and if jk then , 1 ifjfrkr CClCl i , and 2,3, CClCl 22 2 ifjfrkr with ,1,2, and , jki and i follows the rule: 1,2,3,4.
It should be noted that the combination of membership function (E.7) is:
ˆ, ijfkr with ,1,2, and jki 1,2,3,4. (E.9)
ˆ:, iifir then 1,2. i
&, then parametric matrix inequality (2.57) are multi-affine in 0,1,1,2,,. il tiN Actually, there is no method to adequately describe the
The stability condition is presented as a matrix polynomial inequality (2.61) that can be cast as a convex LMI problem by the gridding method (affine meshing) or the sum-of-squares (SOS) decomposition technique.
The polynomial conditions are solved in Matlab v2020b using the SOS toolbox v4.00 with the interior-point solvers SeDuMi v1.03 and MOSEK v9.3, respectively.
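The dissertation reports solving these conditions in Matlab with the SOS toolbox and the SeDuMi/MOSEK solvers. As a language-agnostic illustration (not the author's actual scripts), the same kind of LMI feasibility problem, here a plain quadratic Lyapunov condition for a toy LTI matrix, can be posed as follows.

```python
# Minimal LMI feasibility check (quadratic Lyapunov stability):
#   find P = P' > 0 such that  A'P + P A < 0.
# This mirrors the type of (P)LMI conditions solved throughout the chapter,
# but with a toy LTI matrix instead of the parameterized conditions.
import numpy as np
import cvxpy as cp

A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])          # made-up Hurwitz matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)         # 'optimal' => the LMI is feasible, A is stable
```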
The Degree of Freedom is denoted as DoF throughout this dissertation.
Acknowledgments
I would like to express my sincere gratitude to the anonymous reviewers for taking the time to review and provide comments. Your criticism and suggestions helped me significantly improve the results and the writing style of this PhD dissertation.
The work resulting in this thesis was funded by the Ministry of Education and Training of Vietnam for three years (2016-2019) under grant project 911; the tuition fees and insurance coverage supported by the Ministry of Foreign Affairs of the French Republic for five years are also gratefully acknowledged.
Viet Long.
Contributions
Appendix A Linear algebra
Linear algebra
This section is not intended to provide complete definitions of matrix algebra, rings and fields, matrix determinants, eigenvalues, etc. Instead, it is mainly concerned with the essential algebraic techniques relating to system and control theory, specifically the postulates and properties of convex functions, integrable functions, and convex optimization involved in the analysis of systems and control laws expressed by linear/bilinear matrix inequality conditions.
A.1 Affine space
The rudimentary concepts (convex sets, convex functions, linear combinations, etc.) involved in this work can be illustrated in an n-dimensional vector space (see Figure A-1): a point in n-dimensional coordinate space is expressed as a linearly independent combination of the basis set with the corresponding coordinate parameters. It is conceivable that the coordinates of a point belonging to a subspace of R^n can be expressed by linear (convex or quasi-convex) expressions. Let us now review some basics of these linear formulations; the similarity between the resulting equations and the linear programming problem (A.33) will then be apparent.
A.1.1 Affine sets
A.3 Schur Complement
The Schur complement is a fundamental mathematical tool of matrix analysis in theoretical control systems. In the context of LMI formulations, conditions for positive definiteness and semi-definiteness can be expressed as follows.

Lemma A.3.1. Let M = [ Q  S ; Sᵀ  R ] be a real symmetric matrix. Then the following statements are equivalent:
(1) M ≻ 0;
(2) R ≻ 0 and Q − S R⁻¹ Sᵀ ≻ 0;
(3) Q ≻ 0 and R − Sᵀ Q⁻¹ S ≻ 0.
From this view, it could be realized that the nonlinear matrix inequalities in statements 2 and 3 also deliver convex problems in the form of affine LMI (statement 1).
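A quick numerical sanity check of Lemma A.3.1 (illustrative only, with an arbitrary test matrix):

```python
# Numerical check of the Schur complement equivalence in Lemma A.3.1:
#   M = [[Q, S], [S', R]] > 0   iff   R > 0 and Q - S R^{-1} S' > 0.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B @ B.T + 0.5 * np.eye(4)          # a random symmetric positive definite matrix
Q, S, R = M[:2, :2], M[:2, 2:], M[2:, 2:]

def is_pd(X):
    return np.all(np.linalg.eigvalsh(X) > 0)

print("M > 0:", is_pd(M))
print("R > 0 and Q - S R^{-1} S' > 0:",
      is_pd(R) and is_pd(Q - S @ np.linalg.inv(R) @ S.T))
```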
A.4 Young's inequality
Let us recall Young's inequality (Mitrinović et al., 1993b) and its matricial generalization (Ando, Matrix Young Inequalities) for further discussion of its application to LMI analysis.

Lemma A.4.1. (Mitrinović et al., 1993b) Let f : [0, ∞) → ℝ be a continuous, strictly increasing function defined for nonnegative real numbers x with f(0) = 0. Let a, b be positive real numbers such that a is in the domain of f and b is in the image of f. Then

ab ≤ ∫₀ᵃ f(x) dx + ∫₀ᵇ f⁻¹(y) dy.
These conditions are the matricial generalization of Young's inequality (Ando, Matrix Young Inequalities). They are well known in control theory: condition (A.42) is regularly used to eliminate uncertainty matrices, while condition (A.43) is typically encountered in output feedback control design (SOF and OBF; see, for example, Benzaouia et al.; He et al.; Leibfritz; Peng et al.). This lemma is widely used in signal and system analysis (in both the time and frequency domains) based on the LMI technique. In this thesis, it is used only in a limited way, to deal with the nonlinear structures arising in controller synthesis for saturated LPV systems.
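A small numerical illustration of the kind of completion-of-squares bound referred to around (A.42): for any matrices X, Y of compatible size and any ε > 0, XᵀY + YᵀX ⪯ ε XᵀX + ε⁻¹ YᵀY. The exact statement numbered (A.42)/(A.43) in the thesis is not reproduced in this excerpt, so the following is only a representative instance of this family of bounds.

```python
# Check of the matricial Young-type bound:  X'Y + Y'X  <=  eps*X'X + (1/eps)*Y'Y
# (in the positive semidefinite sense), for random X, Y and several eps > 0.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 3))
Y = rng.standard_normal((5, 3))

for eps in (0.1, 1.0, 10.0):
    gap = eps * X.T @ X + (1 / eps) * Y.T @ Y - (X.T @ Y + Y.T @ X)
    print(f"eps = {eps:4.1f}   min eig of gap = {np.linalg.eigvalsh(gap).min():.6f}  (>= 0)")
```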
Appendix B Robust Stability and Performances Analysis via LMIs
Robust Stability and Performances Analysis via LMIs
In this section, the design specifications and requirements are analyzed in the state space using Lyapunov stability via the linear matrix inequality formulation. The performance and robustness analysis for LTI systems, systematically introduced by (Boyd et al.; Scherer et al.), builds on the concept of input-output properties. Terms such as the L2-norm, L2-gain, and ℋ∞-norm are widespread in stability and performance evaluation, consistent with the (scaled) small-gain theorem (Boyd et al.; Doyle; Zhou et al.), the (scaled) bounded real lemma (Apkarian & Gahinet, 1995; Gahinet & Apkarian, 1994; Scherer), the full-block S-procedure (Scherer), or the integral quadratic constraints (IQC) framework (Jönsson; Megretski). This issue is also considered for LPV delay systems in (Briat, 2015a) and for TS-fuzzy systems in (Tanaka). We briefly recall the general definitions of the control design criteria discussed in the work of (Scherer et al.) and the references therein.
The approximate deviations and the uncertain knowledge about the parameters in the modeling are often referred to as parametric uncertainty, whereas the nonlinear components are considered as dynamical uncertainty. This distinction demarcates robust control theory from the LPV gain-scheduling technique (typically related to measurable parameters). The scheduling controller designs introduced by (Shamma) showed an efficient way to analyze and synthesize this class of systems. In recent decades, applications of LPV gain-scheduling controllers have pervaded a wide range of engineering systems, from aeronautical engineering (aircraft, missiles, helicopters, AUVs, ...) to road traffic engineering (cars, trucks, trains), robotics, energy engineering (renewable energy systems), etc. It is also easy to find applications in the same areas addressed by robust stability theory. We discuss here alternative improvements for the tighter bounding inequalities related to the LKF stability conditions. Some definitions of convex domain properties are given in Appendix A.1.3.
Appendix D Demonstration
Demonstration
D.1 The proof of Lemma 5.2.1.
The LPV system (5.1) is delay-dependent stable if the conditions of Lemma 5.2.1 hold.
Abstract
The dissertation is devoted to developing a methodology of stability and stabilization for the linear parameter-dependent (PD) and time-delay systems (TDSs) subject to control saturation. In the industrial process, control signal magnitude is usually bounded by the safety constraints, the physical cycle limits, and so on. For this reason, a suitable synthesis and analysis tool is needed to accurately describe the characteristics of the saturated linear parameter-varying (LPV) systems.
In the part one, a parameter-dependent form of the generalized sector condition (GSC) is considered to solve the saturated stabilization problem. Several feedback control strategies are investigated to stabilize the saturated LPV/qLPV systems. Necessary and sufficient stabilization conditions via the parameterized linear matrix inequality (PLMI) formulation proposed for the feedback controllers conforming to the design requirements (i.e., the admissible set of the initial conditions, the estimated region of the asymptotic convergence domain, the robust stability and performance with the influence of perturbations, Etc.). The relaxation of the designed PLMIs is shown through the comparison results using a parameter-dependent Lyapunov function (PDLF).
In the second part, the delay-dependent stability developments based on the Lyapunov-Krasovskii functional (LKF) are presented. Modern advanced bounding techniques are utilized with a balance between conservatism and computational complexity. Then, saturated stabilization analyses for the gain-scheduling controllers are proposed.
Inspired by uncertain delay system methods, a novel stabilization condition is derived from the delay-dependent stabilizing analysis for the LPV time-delay system subject to saturation constraints. In this aspect, the stabilizing gain-scheduling feedback controllers improve the performance and stability of the saturated system and provide a large attraction domain. It can be emphasized that the derived formulation is general and can be used for the control design of many dynamic systems. Finally, to maximize the attraction region while guaranteeing the asymptotic stability of the closed-loop system, an optimization problem is included in the proposed control design strategy.
Key-Words: LPV/quasi-LPV Systems, Actuators Saturation, Time-Delay Systems, Robust Control, Parametrized LMIs.
Résumé

The thesis is devoted to developing a methodology of stability and stabilization for linear parameter-varying (LPV) and time-delay systems subject to control saturation. In industrial processes, the control signal magnitude is usually bounded by safety constraints, among others. A control synthesis is therefore needed for saturated linear parameter-varying systems.

In the first part, a new expression of the generalized sector condition (GSC) is considered to solve the saturated stabilization problem. Several control strategies are studied to stabilize saturated LPV/quasi-LPV systems. Necessary and sufficient stabilization conditions, obtained by solving parameterized linear matrix inequalities, are proposed for state-feedback controllers respecting the design requirements (i.e., the admissible set of initial conditions, the estimated region of the asymptotic convergence domain, robust stability and performance under the influence of disturbances, etc.). The relaxation of the parameterized LMI conditions is illustrated by results based on a parameter-dependent Lyapunov function.

In the second part, delay-dependent stability conditions based on the Lyapunov-Krasovskii functional (LKF) are presented. Modern, advanced bounding techniques are used with a trade-off between conservatism and computational complexity. Then, stabilization conditions with saturation are proposed for gain-scheduled controllers. Inspired by uncertain time-delay system methods, a new stabilization condition is derived for LPV time-delay systems subject to saturation constraints. The gain-scheduled feedback controllers improve the performance and stability of the saturated system and provide a large domain of attraction. We can emphasize that the derived formulation of the conditions is general and can be used for the control of many dynamic systems. Finally, to maximize the region of attraction while guaranteeing the asymptotic stability of the closed-loop system, an optimization problem is included in the proposed control design strategy.

Keywords: LPV/quasi-LPV systems, actuator saturation, time-delay systems, robust control, parameterized LMIs.
Maria J Montoya-Villalobos
Levels of uncertainty and charitable giving
Keywords: JEL classification: C91, D64, D81 Charitable giving, uncertainty, pro-social behavior, ambiguity attitudes
This experiment seeks to study the impact of uncertainty and attitudes towards uncertainty on charity donations. We use a modified dictator game, where the donations received by the beneficiaries (environmental NGOs) are exposed to different levels of uncertainty. We study the level of donations and elicit risk aversion, ambiguity aversion, likelihood insensitivity, and pessimism. We aim to test if different levels of uncertainty at the receiver level (risk and ambiguity) impact donations. We do not find any differences between levels of uncertainty compared to no uncertainty. We find that a "high" level of ambiguity has a significant and negative effect on altruistic behavior compared to a risk or a"low" ambiguity environment. We also find that the effect of pessimism depends on the level of ambiguity. We find no effect of ambiguity aversion, likelihood insensitivity, and pessimism under "low" ambiguity on altruistic behavior. Meanwhile, under "high" ambiguity, we find a negative effect of pessimism on charitable giving. These results suggest that there is a threshold for which ambiguity and ambiguity attitudes have a negative impact on donations.
Introduction
For the last few years, charitable giving has increased in most countries. For instance, in France, the amount donated increased by 27% between 2013 and 2021. The importance of charities lies in their provision of public goods by private entities. A large literature has focused on the determinants of charitable giving (see the review by Bekkers, Who gives? A literature review of predictors of charitable giving); however, not much attention has been given to the impact of risk and ambiguity, although in some cases individuals do not know exactly whether their donations will effectively have the intended impact, and to what extent. This is specifically true for environmental NGOs. More generally, individuals are not always certain of how their donations are being used. Some actions undertaken by NGOs and their impact are easily observed, such as the cleaning of a forest. However, for other actions, it is difficult to know the real impact of the work undertaken by charities. For instance, the impact of actions aiming to increase environmental quality or to decrease pollution is distant in time and difficult to quantify; it is also difficult to observe. Determining whether an improvement in environmental quality comes from the actions undertaken by a specific NGO is not straightforward. Furthermore, the presence of uncertainty leads to misperceptions about the impact of donations: individuals may not be able to estimate this impact correctly, over- or underestimating it, which could lead to inefficient levels of donations. In the case of NGOs, uncertainty can have different sources; NGOs possess different levels of cost-effectiveness, and individuals generally do not know the level of cost-effectiveness of NGOs because of a lack of transparency. Some NGOs have high operating costs or collection fees. Charities are not equally cost-effective, and individuals are not always able to know about the impact of their donations.
Another source of uncertainty can come from a risk of misappropriation of donations; for example, in 2019, the Red Cross revealed that 5 million euros that were supposed to fight against the Ebola virus in West Africa were embezzled between 2014 and 2016. Depending on the NGO, these risks are more or less important.
Hence, individuals may have biased beliefs about the consequences of their donations and that the benefits and the positive impact of their donations will effectively occur, affecting their perception of uncertainty. An individual might underestimate the impact of her donations, believing, for instance, that donations will not have the expected impact (for instance, thinking that all of her donations will be embezzled).
An individual might also underestimate the impact of her donations by not taking into account the multiplier effect of the pooling of contributions. That is the case when, for example, a certain amount of money is targeted to allow the realization of a project. Moreover, individuals do not equally behave when risk is on their side or on the beneficiaries' side.
The presence of uncertainty may or may not be probabilized. It is common not to know the exact amount of a donation that will be used for the intended cause. Therefore, studying different levels of risk and ambiguity is relevant. We can distinguish two levels of uncertainty: probabilized uncertainty (risk), individuals know about the probability distribution of the possible events. And non-probabilized uncertainty (ambiguity), where the available information is too imprecise to associate a probability to each event: the individuals do not know the probability distribution of the possible events. Studies that have focused on the impact of risk on donations [START_REF] Haisley | Self-serving interpretations of ambiguity in other-regarding behavior[END_REF][START_REF] Krawczyk | Give me a chance!' An experiment in social decision under risk[END_REF][START_REF] Brock | Dictating the Risk: Experimental Evidence on Giving in Risky Environments[END_REF][START_REF] Exley | Excusing Selfishness in Charitable Giving: The Role of Risk[END_REF][START_REF] Freundt | On the determinants of giving under risk[END_REF][START_REF] Cettolin | Giving in the face of risk[END_REF] found that altruistic behaviors are reduced under risk. Other studies focusing on studying ambiguity and donations [START_REF] Haisley | Self-serving interpretations of ambiguity in other-regarding behavior[END_REF][START_REF] Cettolin | Giving in the face of risk[END_REF][START_REF] Garcia | Ambiguity and excuse-driven behavior in charitable giving[END_REF]; however, found contradictory results, leading to a lack of consensus on the effect of ambiguity in donations.
In this perspective, in this paper, we study the impact of different levels of uncertainty (risk and ambiguity) and attitudes towards risk and ambiguity on donations to charity and make an attempt to clarify this lack of consensus. We also seek to study the impact of risk and ambiguity attitudes. Some papers have studied the role of risk aversion on altruistic behavior [START_REF] Freundt | On the determinants of giving under risk[END_REF][START_REF] Cettolin | Giving in the face of risk[END_REF][START_REF] Fahle | How do risk attitudes affect prosocial behavior? Theory and experiment[END_REF], finding mixed effects on altruistic behaviors. Ambiguity attitudes are characterized by two cognitive components. Ambiguity aversion, which is defined as how much a person dislikes ambiguity; (ambiguity generated) likelihood insensitivity which is defined as the insensitivity to changes in likelihood. Finally, besides ambiguity attitudes, we also study the impact of pessimistic beliefs, which is the over-weighting of the probability of realization of the worst possible event. To our knowledge, only [START_REF] Cettolin | Giving in the face of risk[END_REF] studied the impact of ambiguity aversion on donations, however, there are no other papers that have studied the impact of likelihood insensitivity or pessimism on donations.
In this paper, we aim at answering the following questions: How does the introduction of different levels of uncertainty have an impact on donations? What is the impact of ambiguity attitudes on charitable giving? To that aim, we conduct a laboratory experiment where participants have the opportunity of donating to a charity when the amount received by the charity is unknown. We introduce different levels of uncertainty to each treatment: risk, lower ambiguity, and higher ambiguity.
We find that introducing risk, a "low" and a "high" level of ambiguity does not impact mean donations. However, we find that a "high" level of ambiguity decreases mean donations compared to a lower level of ambiguity and risk. We also find that ambiguity aversion and pessimism only play a role in donations when in the presence of a "high" level of ambiguity. However, we do not find any effect of ambiguity attitudes in lower levels of ambiguity.
The paper is organized as follows: section 2 summarizes the related literature, section 3 presents the experimental design, section 4 details the predictions of the experiment, and section 5 presents the results. Finally, section 6 discusses the results and concludes.
Related literature
We split the literature according to two levels of uncertainty: risk and ambiguity.
Risk
Krawczyk and Le Lec (2010), Brock et al., and Freundt et al. elicit altruistic behavior with a dictator game and show that altruistic behavior is reduced under risk. Furthermore, the literature has focused on isolating one of the reasons for the decrease in charitable giving under risk: the presence of a moral wiggle room, where individuals use risk as a justification for unfair behavior. In the context of donations, Cettolin et al. and Exley (2016) find that the negative impact of riskiness comes from "the adoption of a favorable view of ambiguous risk" leading to a justification for unfair behavior (Haisley and Weber).
To the best of our knowledge, few studies have focused on risk attitudes regarding charitable giving. Furthermore, the literature is inconclusive about the effect of risk preferences on donations: [START_REF] Freundt | On the determinants of giving under risk[END_REF] find no effect of risk aversion on donations, [START_REF] Cettolin | Giving in the face of risk[END_REF] showed that risk aversion decreases donations when risk is on the giver's side. However, they find that risk aversion has a positive effect on donations when risk is on the recipient's side. [START_REF] Fahle | How do risk attitudes affect prosocial behavior? Theory and experiment[END_REF] found this positive effect exists for large-scale risk, and they also study the impact of loss aversion on donations. In this paper, we focus on the impact of risk aversion when risk is on the recipient's side, bringing more evidence to the literature. We are also going to study the impact of risk aversion under ambiguity.
Ambiguity
Few papers have been studying ambiguity and charitable giving. There is no consensus in the literature about the impact of ambiguity. [START_REF] Haisley | Self-serving interpretations of ambiguity in other-regarding behavior[END_REF] find a decrease in donations when introducing ambiguity, compared to risk. [START_REF] Cettolin | Giving in the face of risk[END_REF] reveal mixed results; for some conditions, they find that individuals adopt a similar behavior regarding donations under risk and ambiguity, and for other conditions, they find a negative effect of ambiguity compared to risk. [START_REF] Garcia | Ambiguity and excuse-driven behavior in charitable giving[END_REF] do not find any behavioral differences between donations under partial and full ambiguity, and they find that individuals do not use an increase of ambiguity to donate less. They also find that excuse-driven behavior is comparable under ambiguity and risk. However, they do find that introducing ambiguity decreases altruistic behaviors.
The lack of consensus might be due to the mixed effect of ambiguity in donations. The decrease in donations when introducing uncertainty is partly explained by excuse-driven behavior. However, as seen in the literature, the impact of excusedriven behavior is the same for any level of uncertainty. Hence, other factors may also explain this decrease. The objective of this paper is to study what these other variables are and from what level of uncertainty they start/stop to matter in charitable giving and why.
As for ambiguity attitudes, only [START_REF] Cettolin | Giving in the face of risk[END_REF] have studied the effect of ambiguity aversion on donations by using matching probabilities. We are unaware of any other papers that study other attitudes towards ambiguity, such as likelihood insensitivity, or beliefs, such as pessimism and charitable giving. Our experiment aims to enrich this literature by studying the effect of likelihood insensitivity and pessimism.
Experimental design
The experiment consists of different tasks: the donation task (main task), which is a dictator game, to measure altruistic behavior, and different elicitation tasks of risk and ambiguity attitudes, which differ according to the different treatments. We will hereafter describe the main task (subsection 3.1), the different treatments (subsection 3.2), and how we elicit risk and ambiguity attitudes (subsection 3.3).
The main task: a modified dictator game
The main task of the experiment consists of a modified dictator game, where the senders are the experimental subjects, and the receivers are represented by environmental NGOs. At the very beginning of the experiment, participants are told they will have to decide whether or not they would like to make donations to an NGO. They are then asked to choose between three environmental-related NGOs: i) WWF (World Wide Fund for Nature), ii) Greenpeace, and iii) Zero Waste France. Participants are also provided with a description of each NGO2 and are told that the NGOs will effectively receive their given amount a few days after the experimental session, for which they will receive a tax receipt.
After choosing the NGO, each participant is endowed with 100 ECUs (Experimental Currency Units). Participants are then asked whether they would like to donate to the chosen NGO. They can choose an amount x between 0 and 100 ECUs. The participants keep the amount they decided not to donate, which is 100-xECUs. Note that each participant can decide not to donate (x = 0);
The treatments
The experiment includes one control group and three treatment groups, which modulate the level of risk and ambiguity applied to the donations. We used a betweensubject design where the participants were randomly assigned to one of the four groups. In the following subsections, we present the Control group, the Risk Treatment (RT), the Low Ambiguity Treatment (LAmbT), and the High Ambiguity Treatment (HAmbT)3 .
Control group
In the Control group, there is no uncertainty. It is based on a dictator game where the amount donated to the NGO is multiplied by 1.2.
In the following subsections, we describe the three treatment groups. In those groups, participants always face an urn with three types of colored marbles (purple, blue, and orange) that would lead to three possible events, each determining the amount received by the NGOs. The three possible events are: i) A purple marble is drawn: the NGO does not receive anything from the participant (i.e., the donation is destroyed), ii) A blue marble is drawn: the NGO receives the exact amount given by the participant, iii) An orange marble is drawn: the NGO receives twice as much as the mount given by the participant.
In the Risk Treatment, participants know the exact number of each colored marble in the urn. In ambiguity treatments, participants do not know about the whole distribution of the marbles.
Treatment 1: Risk Treatment (RT)
The Risk Treatment introduces risk in the donations received by the NGO: participants are told that with a 10% probability, the NGO will not receive the donation (purple marble); with a 60% probability, the NGO will receive the exact amount given by the participant (blue marble); and with a probability of 30%, the NGO will receive twice the amount given by the participant (orange marble). In this treatment, participants face an urn composed of 18 blue marbles, nine orange marbles, and three purple marbles. Figure 1 shows what the participant actually saw on their screen as instructions for the task before the decision to donate or not was made. At the end of each RT session, a volunteer participant drew a marble from an opaque urn without looking. The drawn marble would determine the event for each participant: if the purple marble is drawn, the NGO will not receive anything at all; if the blue marble is drawn, the NGO receives exactly the amount donated by each participant of the session; and finally, if the orange marble is drawn, the NGO receives twice the amount donated by each participant.
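A quick consistency check (our own observation, not stated explicitly in the paper): under RT the expected multiplier applied to a donation equals the certain multiplier of 1.2 used in the Control group, so the expected value of a donation is held constant across the two conditions.

```python
# Expected multiplier applied to a donation in the Risk Treatment:
# 10% -> destroyed (x0), 60% -> kept (x1), 30% -> doubled (x2).
probs = [0.10, 0.60, 0.30]
multipliers = [0, 1, 2]
expected_multiplier = sum(p * m for p, m in zip(probs, multipliers))
print(expected_multiplier)   # 1.2, identical to the Control group's certain multiplier
```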
Treatment 2: Low Ambiguity Treatment (LAmbT)
The Low Ambiguity Treatment introduces ambiguity in the donations received by the NGO. As opposed to the RT, participants do not know the whole distribution of probabilities associated with each possible event. Participants are told that with a 70% probability, they know the distribution of probabilities of the realization of each possible event; this distribution is the same as in the RT. They are also told that with a 30% probability, they do not know the distribution of probabilities associated with the realization of each possible event. In this case, participants are under complete uncertainty. Figure 2 shows the lottery participants face under LAmbT. • If an orange ball is drawn, the experimenter composes an urn with known probabilities, as in the RT, and a second participant draws a ball from this second opaque urn, which determines the event for the NGO.
• If a white ball is drawn, an urn with unknown probabilities is composed.
First, the experimenter mixes an opaque urn with 30 blue marbles, 30 orange marbles, and 30 purple marbles. In this case, a second volunteer participant randomly draws, without looking, 30 marbles from the urn composed of 90 marbles to create the urn with unknown probabilities. Finally, a third volunteer participant randomly draws from the urn with unknown probabilities a marble for which the color would determine the payoff to the NGO.
Treatment 3: High ambiguity treatment (HAmbT)
The third treatment is the High Ambiguity Treatment. We call this treatment "high ambiguity" in opposition to the LAmbT; however, participants in this treatment do not face full ambiguity but partial ambiguity, as in the LAmbT. Subjects know the exact distribution of the probabilities of each event with a 30% probability, and with a 70% probability participants are under complete uncertainty. Figure 3 shows the lottery faced by the participants in the HAmbT.
As in the LAmbT, at the end of each HAmbT session, and to implement ambiguity, the same drawing procedure occurs, except that the first urn is composed of seven white balls and three orange balls. Hence, if an orange ball is randomly drawn, the urn with known probabilities is composed; if a white ball is randomly drawn, the urn with unknown probabilities is composed.

Elicitation of risk and ambiguity attitudes
We elicit levels of risk aversion and ambiguity aversion, likelihood insensitivity, and pessimism to study their effect on charitable giving. First, we elicit risk aversion and unframed ambiguity attitudes; these tasks appear in random order before the donation task. Then, we elicit framed ambiguity attitudes and excuse-driven behavior; these tasks appear in random order after the main task.
Elicitation of risk aversion
In all treatment and control groups, risk aversion is elicited using the [START_REF] Holt | Risk Aversion and Incentive Effects[END_REF] method. Participants face ten pairs of lotteries (see Figure 4). They are asked to choose the lottery they prefer between lottery A and lottery B. Lottery A represents the safer choice (winning 20 ECUs vs. winning 16 ECUs), while lottery B is riskier (winning 38.5 ECUs vs. 10 ECUs). The probability of winning the favorable payoff increases with each pair of lotteries (the probability of winning the unfavorable payoff decreases accordingly). The later the participant switches from lottery A to lottery B, the more risk averse the participant is and the higher their coefficient of relative risk aversion. Participants could only switch once from lottery A to lottery B.
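As a rough illustration (not part of the original protocol), the switching row in this price list can be mapped to a range of CRRA coefficients by solving the indifference condition between the two lotteries. The sketch below assumes the payoffs stated above (20/16 ECUs for lottery A, 38.5/10 ECUs for lottery B), winning probabilities of 10%, 20%, ..., 100% across the ten rows, and a CRRA utility u(x) = x^(1-r)/(1-r); the function names are ours.

```python
import numpy as np
from scipy.optimize import brentq

# Payoffs of the price list used in the experiment (in ECUs).
A_HIGH, A_LOW = 20.0, 16.0     # "safe" lottery A
B_HIGH, B_LOW = 38.5, 10.0     # "risky" lottery B

def crra(x, r):
    """CRRA utility; the r = 1 case is log utility."""
    return np.log(x) if np.isclose(r, 1.0) else x ** (1 - r) / (1 - r)

def eu_difference(r, p):
    """EU(lottery A) - EU(lottery B) when the favorable payoff occurs with probability p."""
    eu_a = p * crra(A_HIGH, r) + (1 - p) * crra(A_LOW, r)
    eu_b = p * crra(B_HIGH, r) + (1 - p) * crra(B_LOW, r)
    return eu_a - eu_b

def indifference_r(p):
    """CRRA coefficient making a subject indifferent between A and B at probability p."""
    return brentq(lambda r: eu_difference(r, p), -5.0, 5.0)

def crra_interval(switch_row):
    """Bounds on r implied by first choosing B at row `switch_row` (1..10):
    the subject prefers A at row switch_row - 1 and B at row switch_row."""
    lower = indifference_r((switch_row - 1) / 10) if switch_row > 1 else None
    upper = indifference_r(switch_row / 10) if switch_row < 10 else None
    return lower, upper

print(crra_interval(6))  # e.g. a subject switching to lottery B at row 6
```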
Elicitation of ambiguity attitudes
Ambiguity aversion and likelihood insensitivity are elicited following the [START_REF] Baillon | Measuring Ambiguity Attitudes for All (Natural) Events[END_REF] method. We elicit matching probabilities for each single and composite event, where the matching probability of an event is defined as the probability m at which the individual is indifferent between receiving an amount X if the event occurs and receiving X with probability m. In the control and RT groups, we elicit likelihood insensitivity and ambiguity aversion only once, within an unframed (non-contextualized) setup. In the LAmbT and HAmbT groups, they are elicited twice: once within an unframed setup, as in the two other groups, and once within a framed (contextualized) one. We elicit both framed and unframed ambiguity attitudes since [START_REF] Baillon | Measuring Ambiguity Attitudes for All (Natural) Events[END_REF][START_REF] Baillon | Belief hedges: Measuring ambiguity for all events and all models[END_REF] argue that ambiguity attitudes are context dependent.
Unframed elicitation of ambiguity attitudes
In addition to the urn used to determine the donation event (whose composition depends on the treatment), a second urn was used for this task. This urn was also composed of 30 colored marbles (blue, orange, and purple), but participants did not know the number of marbles of each color (unknown probabilities). To elicit ambiguity attitudes, we used the method of [START_REF] Baillon | Measuring Ambiguity Attitudes for All (Natural) Events[END_REF]. We elicited six matching probabilities: one for every single event and one for every composite event. There were three single events: drawing a blue, an orange, or a purple marble at random. There were three composite events: drawing a blue or an orange marble, drawing a blue or a purple marble, and drawing an orange or a purple marble. Subjects faced six tables that appeared in random order, each composed of 20 decisions. They had to choose the option they preferred between option A and option B. Option A was the same for every table: the participant could win 30 ECUs if a blue/orange/purple marble (depending on the table) was randomly drawn. If the subject chose option B, she could win 30 ECUs with probability p (or win 0 ECUs with probability 1 - p). Figure 5 is a screenshot of one of the six tables, for the composite event "to draw an orange or purple marble randomly".
To elicit the matching probability of a specific event, we calculated the sum of the probabilities associated with the decision before and after the switching point (between option A and option B) and divided it by two to obtain a more precise estimation of the matching probability.
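For instance, if a participant chooses option A up to some row and option B from then on, the matching probability is the midpoint of the option-B probabilities just before and just after the switch. The sketch below is a minimal illustration, assuming the 20 rows offer probabilities of 5%, 10%, ..., 100%; the variable and function names are ours.

```python
def matching_probability(choices, probabilities):
    """choices: list of 'A'/'B' over the 20 rows of one table (single switch from A to B).
    probabilities: winning probability offered by option B in each row.
    Returns the midpoint of the probabilities just before and just after the switch."""
    switch = next((i for i, c in enumerate(choices) if c == 'B'), None)
    if switch is None:          # option A chosen everywhere: no switch observed
        return None
    if switch == 0:             # option B chosen from the first row
        return probabilities[0]
    return (probabilities[switch - 1] + probabilities[switch]) / 2

# Example: switching to option B at the 8th row of a 20-row table (probabilities 5%, ..., 100%).
probs = [i / 20 for i in range(1, 21)]
choices = ['A'] * 7 + ['B'] * 13
print(matching_probability(choices, probs))   # 0.375
```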
Since in this task participants faced full ambiguity (i.e., they did not possess any information about the number of marbles of each color in the urn), unframed ambiguity attitudes were always elicited before the donation task (main task), i.e., before the participants knew the distribution of probabilities associated with the donations. Therefore, the individuals were not contaminated by any beliefs about the probability distribution.
Framed elicitation of ambiguity attitudes
To elicit framed ambiguity attitudes, we used once again the method of Baillon et al. (2018). We replicated the same task a second time for subjects in the LAmbT and HAmbT after the donation task. In this task, we elicit ambiguity attitudes for the exact level of ambiguity that participants faced in the main task. Since control and RT participants did not face ambiguity in the donation task, we could not elicit their framed ambiguity attitudes. In the framed setup, individuals no longer faced full ambiguity when choosing between options A and B; they faced partial ambiguity, with the same probability distribution as the one presented in the main task.
In order to elicit framed ambiguity attitudes, subjects faced six tables appearing in random order, one for each matching probability (three for single events and three for composite events). They had to choose the option they preferred between option A and option B. The three single events in this context are: the NGO receives 0 ECUs; the NGO receives the ECUs donated; the NGO receives double the amount donated. The three composite events are: the NGO receives either 0 ECUs or the amount donated; the NGO receives either 0 ECUs or double the amount donated; the NGO receives either the amount donated or double the amount. Each event had the same probability as in the main task, depending on the treatment.
These matching probabilities allow us to elicit an ambiguity aversion index, a likelihood insensitivity index, and, thanks to the first two indexes, a pessimism index. The matching probability of an event will depend on the subjective belief of the decision maker in the event and on her ambiguity attitude. Figure 6 is a screenshot of one of the six task tables for eliciting the matching probability of the composite event "the NGO will receive 0 ECUs or the NGO will receive the amount donated".
The ambiguity aversion index (b) is calculated from the following equation:
b = 1 - mc - ms
The insensitivity index (a) is:
a = 3 × (1/3 - (mc - ms))
where mc corresponds to the average matching probability of the composite events, and ms corresponds to the average matching probability of the single events.
Under ambiguity neutrality, ms = 1/3 and mc = 2/3, hence a = 0 and b = 0. The indexes are normalized so that the maximal value is 1. An ambiguity-averse individual will have a positive ambiguity aversion index; for an extremely ambiguity-averse individual, the index will be equal to 1. Ambiguity lovers will have a negative aversion index. The likelihood insensitivity index is defined as the lack of discriminatory power of the decision maker regarding different levels of likelihood [START_REF] Li | Trust as a decision under ambiguity[END_REF], or as the perception of the level of ambiguity. The more the subject discriminates between composite and single events, the smaller the insensitivity index. This index is usually positive. However, there are some subjects with a < 0 (sensitive individuals), which it is desirable to include in our analysis (as explained in [START_REF] Baillon | Measuring Ambiguity Attitudes for All (Natural) Events[END_REF]).
Thanks to the indexes above, we are able to obtain a pessimism index (α), following [START_REF] Baillon | Belief hedges: Measuring ambiguity for all events and all models[END_REF]:
α = b/(2a) + 1/2
This index represents the individual's belief about the probability of the event "the NGO will receive 0 ECUs". A pessimistic individual will assign a high probability to the realization of this event; in this case, her pessimism parameter will be close to 1. On the contrary, if the individual is optimistic, she will assign a low probability to the worst possible event, "the NGO will receive 0 ECUs"; her parameter will be close to 0.
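Putting the three indexes together, the sketch below computes ambiguity aversion, likelihood insensitivity, and pessimism from the six elicited matching probabilities (three single events, three composite events). It is a minimal illustration of the formulas above, assuming the matching probabilities have already been computed; the function name is ours.

```python
def ambiguity_indexes(single_probs, composite_probs):
    """single_probs: matching probabilities of the three single events,
    composite_probs: matching probabilities of the three composite events
    (all on a 0-1 scale). Returns (b, a, alpha) following Baillon et al. (2018, 2021)."""
    ms = sum(single_probs) / 3       # average single-event matching probability
    mc = sum(composite_probs) / 3    # average composite-event matching probability
    b = 1 - mc - ms                  # ambiguity aversion index
    a = 3 * (1 / 3 - (mc - ms))      # likelihood insensitivity index
    alpha = b / (2 * a) + 1 / 2 if a != 0 else None  # pessimism index (undefined when a = 0)
    return b, a, alpha

# Example: average single-event matching probability 0.25 and composite-event probability 0.45
# give an ambiguity-averse (b = 0.30), likelihood-insensitive (a = 0.40), pessimistic (alpha ~ 0.88) subject.
print(ambiguity_indexes([0.25, 0.25, 0.25], [0.45, 0.45, 0.45]))
```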
Eliciting excuse-driven behavior and additional questions
Introducing risk or ambiguity into donations creates situations that decrease the guilt of not being altruistic. This may leave a "moral wiggle room" for individuals to behave less altruistically. It has been shown that introducing risk or ambiguity reduces donations because of this moral wiggle room, as in [START_REF] Exley | Excusing Selfishness in Charitable Giving: The Role of Risk[END_REF] and in [START_REF] Garcia | Ambiguity and excuse-driven behavior in charitable giving[END_REF], indicating that individuals behave less altruistically when confronted with uncertainty. In this experiment, we controlled for this effect by using a modified version of the elicitation method of [START_REF] Garcia | Ambiguity and excuse-driven behavior in charitable giving[END_REF]. We included two price lists to take any excuse-driven behavior into account.
In one table (charity table), participants faced 20 decisions, and they had to choose the option they preferred between option A and option B. Option A was the same across the 20 decisions: a lottery for the NGO; this lottery gave the NGO an additional payoff of either 0, 30 ECUs, or 60 ECUs. Option B: a safe payoff for the NGO.
In the other table (self table), participants also faced 20 decisions and also had to choose the option they preferred between option A and option B. Option A stayed the same as in the charity table. However, option B was a safe payoff for the subject. In both tables, subjects could not switch back and forth and could not switch from option B to option A. The tables appeared randomly. In the two tables, the safe payoff goes from 0 ECUs to 60 ECUs.
Since this behavior only appears when there is uncertainty, we did not include this task in the control group. The task was different for every treatment (RT, LAmbT, and HAmbT): the lottery in this task had the same probabilities as in the main task of each treatment, in order to measure possible excuse-driven behavior in the same setup as the one in which subjects made their giving decisions. However, the events were not the same. Figure 7 is a screenshot of the self table under HAmbT.
Charity-valuation corresponds to the safe payoff at the switching point from option A to option B in the charity table. Self-valuation corresponds to the safe payoff at the switching point from option A to option B in the self table. For an individual with excuse-driven preferences, the charity-valuation is above the self-valuation [START_REF] Garcia | Ambiguity and excuse-driven behavior in charitable giving[END_REF]. Therefore, we simply compute the difference between charity-valuation and self-valuation as a simplified measure of excuse-driven behavior. Finally, participants had to answer a survey that included socio-demographic questions: age, sex, income, and level of education. They also had to answer a questionnaire measuring environmental attitudes, the NEP scale (New Environmental Paradigm scale) [START_REF] Dunlap | New Trends in Measuring Environmental Attitudes: Measuring Endorsement of the New Ecological Paradigm: A Revised NEP Scale[END_REF], to control for any pro-environmental behavior, since the charities were environmental NGOs. We also controlled for previous donation behaviors, such as whether the participant has previously donated, the amount they usually donate, and the frequency of their donations.
Experimental procedures
The experiment was conducted at the "Laboratoire d'Economie Expérimentale de la Défense" (Courbevoie, France). We obtained the approval of the ethics committee of the University of Paris Nanterre (CER-PN). 218 individuals took part in the experiment and were randomly assigned to one of the four treatment groups: 53 participants were assigned to the control group, 53 to RT, 54 to LAmbT, and 58 to HAmbT. Sessions took place in April, June, and November 2022. 81 subjects were male (39.90%), and three participants did not report their gender (1.49%). Their average age was 37.75 years old, and 35.32% were students. The experiment was developed using z-Tree [START_REF] Fischbacher | z-Tree: Zurich toolbox for ready-made economic experiments[END_REF].
The analysis is run on a total of 203 subjects since 15 of them (6.8%) made non-consistent decisions in the framed elicitation task of ambiguity attitudes: they did not exhibit a switching point between option A and option B for more than three tables in the framed elicitation task, i.e., they only chose option A or only chose option B in more than three of the tables associated with the matching probabilities of the single events.
The experiment was incentivized using the prior incentive system PRINCE [START_REF] Johnson | Prince: An improved method for measuring incentivized preferences[END_REF], to avoid any strategic behavior from individuals who would conceive the set of decisions as a meta-lottery instead of considering each decision independently, as can happen with matching tasks. The experiment contains a certain number y of decisions in total (adding up all decisions from all elicitation tables): 140 for the control group, 170 for the RT group, and 290 for the LAmbT and HAmbT groups. At the beginning of the experiment, each participant had to pick and enter a number between 1 and the corresponding y. This chosen number had previously been randomly paired with one of the decisions of the experiment. The decision randomly associated with the chosen number was implemented to determine the additional payoff. At the end of the experiment, we gave the participants an envelope with a table inside where they could verify that the decision implemented corresponded to the number they chose at the beginning of the experiment.
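A minimal sketch of how such a prior pairing can be implemented is given below. It assumes decisions are indexed from 1 to y and that the random pairing is fixed before the participant enters their number; the function and variable names are ours and do not reproduce the actual z-Tree code.

```python
import random

def prince_pairing(n_decisions, seed=None):
    """Randomly pair each selectable number 1..n_decisions with a decision index,
    before the participant chooses a number (prior incentive system)."""
    rng = random.Random(seed)
    decisions = list(range(1, n_decisions + 1))
    rng.shuffle(decisions)
    return {number: decision for number, decision in enumerate(decisions, start=1)}

# In the LAmbT and HAmbT groups the experiment contains 290 decisions.
pairing = prince_pairing(290, seed=42)
chosen_number = 174                       # number entered by the participant at the start
decision_to_pay = pairing[chosen_number]  # this decision determines the additional payoff
print(decision_to_pay)
```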
The sessions lasted, on average, one hour and fifteen minutes. The average payoff was €14.66 (including a show-up fee of €7), and the average donation was €2.83. All participants received their payoff privately in cash at the end of the experiment.
Individuals' payoffs depended on the main task and on one of the elicitation tasks. The participants received 100 ECUs at the beginning of the experiment (100 ECUs = €7.50), and they kept the money they decided not to donate. Donations to charities were effectively made, and each individual received proof of their donation and a tax receipt directly issued by the NGO. As explained above, besides the payoff of the main task, one decision from one of the elicitation tasks was randomly chosen to determine the additional payoff.
Predictions
Our experiment aims at testing five main hypotheses that will help fill gaps in the literature on the role of uncertainty in charitable giving. The first hypothesis is based on the level of donations of each treatment.
Hypothesis 1 Donations under risk will be lower than donations under no-uncertainty.
The first assumption is taken from the literature. Krawczyk and Le Lec (2010); Brock, Lange and Ozbay (2013); Freundt and Lange (2017); [START_REF] Cettolin | Giving in the face of risk[END_REF] and [START_REF] Exley | Excusing Selfishness in Charitable Giving: The Role of Risk[END_REF] find that donations decrease under risk compared to no uncertainty.
Hypothesis 2
The introduction of any level of ambiguity will decrease donations compared to risk.
As explained in section 2, the existing literature finds different results regarding donations under risk compared to ambiguity. This hypothesis seeks to add evidence to the literature.
Hypothesis 3 Donations under low ambiguity are higher than donations under high ambiguity.
This hypothesis follows hypotheses 1 and 2: an increase in uncertainty leads to a decrease in mean donations. In the literature, there is no evidence on the effect of different levels of ambiguity on donations. Only Garcia, Massoni and Villeval (2020) do not find any differences in excuse-driven behavior between partial and full ambiguity.
The following hypothesis focuses on risk, ambiguity attitudes, and pessimistic beliefs.
Hypothesis 4 Ambiguity (risk) aversion will decrease donations under ambiguity (risk).
A more risk-averse individual should decrease their donations, following results from [START_REF] Cettolin | Giving in the face of risk[END_REF]. Moreover, ambiguity aversion should also have a negative effect on donations under ambiguity: the more ambiguity averse the individual is, the more she dislikes ambiguity and the less she will donate.
Hypothesis 5 Pessimism will decrease donations.
The more pessimistic an individual is, the less she will donate under ambiguity. This implies that overweighting the probabilities of low payoffs and underweighting the probabilities of high payoffs has a negative impact on donations. This hypothesis has not yet been tested experimentally.
Results
We present the different results in this section.
Table 1 presents summary statistics of the main control variables used in our analysis.
15 participants were excluded from the analysis. We could not compute their matching probabilities under the framed setup in the ambiguity attitudes elicitation task. These individuals preferred option B more than three times in the first decision of each table (option A: winning 30 ECUs if the specified event in the table occurs vs. option B: winning 30 ECUs with a probability of 0%), or they preferred option A more than three times in the last decision of each table (option A: winning 30 ECUs if the specified event in the table occurs vs. option B: winning 30 ECUs with a probability of 100%). This behavior is more likely due to a misunderstanding of the instructions than to non-rationality. We decided not to exclude individuals who did not switch from option A to option B in only one table, which allows us to include "irrational" behavior to some extent.
We first study the effect of risk and ambiguity on the level of donations (subsection 5.1). We then show the results on the effect of ambiguity attitudes on donations (subsection 5.2).

Notes to Table 1: The NEP score is comprised between 0 and 60; the score measures pro-environmental attitudes. Previous donation to an NGO is a dummy variable = 1 if the participant has already donated to an NGO, and = 0 if not.
The effect of risk and ambiguity on the levels of donations
We represent in Figure 8 the average donation levels in the different treatment groups. The average donation in the control group is 26.25 ECUs (s.d. = 30.6); in the Risk treatment it is 24.7 ECUs (s.d. = 25); in the LAmbT, the average is 29.4 ECUs (s.d. = 28.2); and in the HAmbT it is 17.7 ECUs (s.d. = 24.2). We do not observe any significant differences in donation levels between the control and the different treatment groups. Hypothesis 1 is not verified: we do not find any significant difference between the control group and RT (a Wilcoxon test yields a p-value = 0.550).
Result 1: There is no difference in donations between no-uncertainty and risk.
In Figure 8, we observe an increase in the level of donations in LAmbT compared to risk; however, the difference is not significant (Wilcoxon test, p-value = 0.246). On the contrary, we observe a significantly lower level of donations in HAmbT compared to RT (a Wilcoxon test yields a p-value = 0.045). We can conclude that hypothesis 2 is partially fulfilled.
Result 2: The introduction of a low level of ambiguity neither decreases nor increases donations compared to risk. A high level of ambiguity decreases donations.
We can also observe in Figure 8 a decrease in donations in HAmbT compared to LAmbT (a Wilcoxon test yields p = 0.017). This figure hence shows that donations were lower in the context of higher levels of ambiguity, confirming hypothesis 3. This result is also confirmed by Figure 9, which shows the probability density function of donation levels in each treatment. What stands out in this figure is that the probability of donations being smaller than 20 ECUs is higher for the HAmbT, followed by the RT. This indicates that individuals tend to be less altruistic in a high-ambiguity environment than in a low-ambiguity or no-uncertainty environment.
Result 3: A high level of ambiguity decreases donations compared to a lower level of ambiguity.
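The pairwise comparisons reported above can be reproduced with a standard nonparametric test for independent samples (a Wilcoxon rank-sum / Mann-Whitney test, given the between-subject design), and the densities in Figure 9 with a kernel density estimate. The sketch below uses hypothetical donation vectors; the actual data are those collected in the experiment and are not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical donation vectors (in ECUs) for two treatment groups.
donations_rt = np.array([0, 10, 20, 20, 30, 50, 60, 100])
donations_hambt = np.array([0, 0, 5, 10, 10, 20, 30, 40])

# Two-sided rank-sum test comparing donation distributions between independent groups.
stat, p_value = stats.mannwhitneyu(donations_rt, donations_hambt, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")

# Kernel density estimate of donations in one group (as plotted in Figure 9).
kde = stats.gaussian_kde(donations_rt)
grid = np.linspace(0, 100, 101)
density = kde(grid)
```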
We run an OLS regression on the amount of donations to NGOs, as shown in Table 2. Column (1) confirms the results previously discussed: there is no impact of risk, low ambiguity, or high ambiguity on donations compared to the control group. When adding control variables in Column (2), we confirm that we do not find any treatment effects. Control variables include age, gender, income, whether the individual has previously donated to an NGO, and her pro-environmental preferences (NEP score).

Notes to Table 2: The dependent variable is the level of donations (continuous variable between 0 and 100). 4 participants (1.97%) did not want to share their gender. 28 participants (13.8%) did not want to share their income. LAmbT and HAmbT are the treatment dummy variables. The risk aversion coefficient corresponds to the CRRA coefficient estimated with the Holt & Laury measure (the higher, the more risk averse). The NEP score is a continuous variable and a measure of environmental attitudes. The variable previous donations to an NGO is a dummy variable = 1 if the participant has already donated to an NGO, and = 0 if not.
By looking at this regression, we can also confirm the gender effect on donations found in the literature [START_REF] Eckel | Are Women Less Selfish Than Men?: Evidence From Dictator Experiments[END_REF]: women give more than men. We also find that participants with an income between €1200 and €1800 give more than participants with an income below €800. We do not find any effect of age on donations. Finally, we do not find any effect of pro-environmental preferences on donations.
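As a hedged illustration of the specification behind Table 2, the sketch below regresses donations on treatment dummies and the control variables with statsmodels. The file name and column names are ours (hypothetical), and the robust standard errors are our choice rather than necessarily the one used in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to contain one row per participant; the column names are hypothetical.
df = pd.read_csv("donations.csv")

model = smf.ols(
    "donation ~ C(treatment, Treatment(reference='control'))"
    " + age + C(gender) + C(income_bracket) + previous_donation + nep_score",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())
```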
Ambiguity attitudes and donations
Table 3 shows the results using OLS regressions with unframed ambiguity attitudes, where the dependent variable is the level of donations while controlling for ambiguity attitudes (unframed ambiguity aversion, likelihood insensitivity, and pessimism).
To study ambiguity attitudes, in these regressions, we only include ambiguity treatments (LAmbT and HAmbT) since ambiguity attitudes only have an impact on behavior under an ambiguous environment. We excluded 15 participants in addition to those already excluded (hence N=82) to run these regressions, following the same method explained in 3.3.4. We excluded these participants since their decisions seem to indicate that they did not understand the (unframed) ambiguity attitudes task.
Column (1) shows that (unframed) likelihood insensitivity has a positive effect on donations (significant at the 1% level). However, when adding the control variables to the regression (Column (2)), it is no longer significant. Columns (3) and (4) show that there is no impact of unframed pessimism on donations. Finally, Columns (1) and (3) show that HAmbT yields a negative effect on donations (compared to LAmbT); however, the treatment effect disappears when adding controls to the regressions. These results seem to indicate that ambiguity attitudes do not have any effect on donations in ambiguous environments. However, in Table 3, we only focused on unframed ambiguity attitudes. Ambiguity attitudes depend on sources of uncertainty [START_REF] Baillon | Measuring Ambiguity Attitudes for All (Natural) Events[END_REF][START_REF] Baillon | Belief hedges: Measuring ambiguity for all events and all models[END_REF]: they are context dependent and do not stay constant across different environments and levels of ambiguity. Hence, it is interesting to analyze the impact of (framed) ambiguity attitudes on donations per treatment (i.e., for specific environments and levels of ambiguity). Moreover, focusing on risk and ambiguity attitudes per treatment allows studying the impact of these different variables in a specific environment.

Notes to Table 3: OLS regressions. The dependent variable is the level of donations and is a continuous variable between 0 and 100. HAmbT is a dummy variable = 1 for the high ambiguity treatment and = 0 for the low ambiguity treatment. Unframed ambiguity attitudes are continuous variables that measure attitudes under full uncertainty. Ambiguity aversion and likelihood insensitivity are continuous variables comprised between -1 and 1. The higher the ambiguity aversion coefficient, the more ambiguity averse is the participant. If the index is negative, the participant is an ambiguity lover. The higher the insensitivity index, the more insensitive to likelihood variations is the participant. The risk aversion coefficient corresponds to the CRRA coefficient estimated with the Holt & Laury measure (the higher, the more risk averse). Pessimism is a continuous variable between 0 and 1, where 0 indicates extreme optimism and 1 extreme pessimism. Controls include age, gender, previous donation to an NGO, NEP score, and income.
Risk treatment
Table 4 presents two regression analyses taking into account only the risk treatment (N=53). We seek to understand which factors affect donations in a risky environment. Column (1) shows no effect of risk aversion on donations (p-value = 0.12). However, when adding controls, the parameter becomes significant: risk aversion has a positive impact on donations. This is a counter-intuitive result, although some papers find the same effect [START_REF] Cettolin | Giving in the face of risk[END_REF][START_REF] Fahle | How do risk attitudes affect prosocial behavior? Theory and experiment[END_REF]. This result contradicts hypothesis 4, where we assumed that risk aversion decreases donations under risk; instead, we find that it increases donations.

Notes to Table 4: OLS regressions on the risk treatment subsample only. The dependent variable is the level of donations, a continuous variable between 0 and 100. The risk aversion coefficient corresponds to the CRRA coefficient estimated with the Holt & Laury measure (the higher, the more risk averse). Excuse behavior is a continuous variable such that the higher it is, the more the participant uses risk as an excuse not to give. Controls include age, gender, previous donation to an NGO, NEP score, and income.
Low ambiguity treatment
In this section, we analyze the effect of ambiguity attitudes. Table 5 presents different regressions under LAmbT, including risk and ambiguity attitudes as explanatory variables. We find that neither ambiguity aversion, likelihood insensitivity, pessimism, nor risk aversion has a significant impact on donations under a "low" ambiguity environment.
This result indicates that at a "low" level of ambiguity, ambiguity attitudes do not seem to matter, and other variables will explain the level of donations. However, we find a negative and significant effect of excuse-driven behavior on donations when there is ambiguity, which confirms Garcia, Massoni and Villeval (2020)'s finding.

Notes to Table 5: OLS regressions using only the low ambiguity treatment. The dependent variable is the level of donations and is a continuous variable between 0 and 100. Framed ambiguity attitudes are continuous variables measured under a low ambiguity environment. Ambiguity aversion and likelihood insensitivity are continuous variables comprised between -1 and 1. The higher the ambiguity aversion coefficient, the more ambiguity averse is the participant. If the index is negative, the participant is an ambiguity lover. The higher the insensitivity index, the more insensitive to likelihood variations is the participant. The risk aversion coefficient corresponds to the CRRA coefficient estimated with the Holt & Laury measure (the higher, the more risk averse). Excuse behavior is a continuous variable: the higher it is, the more the participant uses ambiguity as an excuse not to give. Pessimism is a continuous variable between 0 and 1, where 0 indicates extreme optimism and 1 extreme pessimism. Controls include age, gender, previous donation to an NGO, NEP score, and income.
High ambiguity treatment
Finally, in this section, we study the impact of ambiguity attitudes under a "high" ambiguity environment. We only include participants belonging to the HAmbT. Table 6 presents OLS regressions under HAmbT, where we analyze the impact of ambiguity attitudes under high ambiguity. Columns (1) and (2) (respectively, without and with controls) show no effect of either ambiguity aversion or likelihood insensitivity on donations. In Column (3), we find that the impact of pessimism on donations is negative and significant. Moreover, when adding controls (Column (4)), pessimism still has a significant effect on donations: beliefs seem to matter more than ambiguity aversion. Finally, all regressions suggest a negative effect of excuse-driven behavior. In these regressions, there is no significant effect of risk aversion.
We do not find any effect of ambiguity aversion under ambiguity. Therefore, we cannot confirm hypothesis 4.
Result 4: There is no evidence of a negative impact of ambiguity (risk) aversion under ambiguity (risk). However, risk aversion has a positive effect on donations under risk.
These results suggest that pessimism has a negative impact in a high ambiguity environment. This result partially supports hypothesis 5. Pessimism will play a role in donations only under a high ambiguity environment.
Result 5: There is some partial evidence that pessimism decreases donations.

Notes to Table 6: OLS regressions using only the high ambiguity treatment. The dependent variable is the level of donations and is a continuous variable between 0 and 100. Framed ambiguity attitudes are continuous variables measured under a high ambiguity environment. Ambiguity aversion and likelihood insensitivity are continuous variables comprised between -1 and 1. The higher the ambiguity aversion coefficient, the more ambiguity averse is the participant. If the index is negative, the participant is an ambiguity lover. The higher the insensitivity index, the more insensitive to likelihood variations is the participant. The risk aversion coefficient corresponds to the CRRA coefficient estimated with the Holt & Laury measure (the higher, the more risk averse). Excuse behavior is a continuous variable: the higher it is, the more the participant uses ambiguity as an excuse not to give. Pessimism is a continuous variable between 0 and 1, where 0 indicates extreme optimism and 1 extreme pessimism. Controls include age, gender, previous donation to an NGO, NEP score, and income.
Discussion and conclusion
This experiment seeks to study the impact of levels of uncertainty on donations. Our results show that introducing risk or a "low" level of ambiguity has no impact on mean donations or on their distribution compared to the no-uncertainty baseline. However, we do find a negative effect of a "high" level of ambiguity on mean donations compared to risk and to a "low" level of ambiguity. This result indicates that if we seek to increase altruistic behaviors, we should not excessively focus on decreasing ambiguity, except when ambiguity is high. We also study the impact of ambiguity attitudes on donations. Risk aversion only plays a role in the level of donations in a risky environment: we find a positive effect of risk aversion on donations only under risk [START_REF] Cettolin | Giving in the face of risk[END_REF][START_REF] Fahle | How do risk attitudes affect prosocial behavior? Theory and experiment[END_REF]. This effect arises because an increase in risk aversion increases the concavity of the utility function of giving; if the utility function is more concave, the expected marginal utility of donating is higher, leading the giver to donate more. Moreover, if the dictator projects her preferences onto the recipient's preferences, an increase in risk aversion will increase her donation. Finally, we find that risk aversion does not have an effect under ambiguity, implying that under ambiguity only ambiguity attitudes matter.
We find that pessimism is only correlated with donations under high ambiguity. More than disliking ambiguity, subjective beliefs have an effect on donations. This result indicates that the overweighting of the probabilities of low payoffs and underweighting of the probabilities of high payoffs have a negative impact on donations. This might be explained because a high level of ambiguity may increase the effect of subjective beliefs. On the contrary, when there is a "low" level of ambiguity, subjective beliefs about the probabilities of different events play a minor role, and a low ambiguity level will dampen any effect from subjective beliefs. Therefore, a high level of ambiguity may increase the effect of ambiguity attitudes, such as ambiguity aversion and pessimism leading to a decrease in mean donations. These results suggest that there is a threshold for which ambiguity and ambiguity attitudes have a negative impact on donations. An increase in ambiguity may have an amplifying effect on pessimistic beliefs. If an individual is pessimistic, an increase in ambiguity will reinforce the effect of pessimism, decreasing donations. On the contrary, if an individual is an optimist, an increase in ambiguity will reinforce the effect of optimism, increasing donations. Ambiguity and risk attitudes do not seem to matter under "low" levels of ambiguity, supporting the existence of a threshold or level of uncertainty for which ambiguity attitudes matter.
We do not find any effect of risk on donations, nor excuse-driven behavior under risk, as opposed to the literature [START_REF] Krawczyk | Give me a chance!' An experiment in social decision under risk[END_REF][START_REF] Brock | Dictating the Risk: Experimental Evidence on Giving in Risky Environments[END_REF][START_REF] Freundt | On the determinants of giving under risk[END_REF]. This might be explained by the fact that, in those studies, giving takes the form of lottery tickets for winning a prize, which is not the case in this paper. Individuals may focus more on the amount they keep than on the amount the NGO will receive. We do not find any significant correlation between the risk aversion coefficient and the unframed ambiguity aversion coefficient, suggesting independence between risk and ambiguity attitudes, as found in [START_REF] Attanasi | Risk Aversion, Overconfidence and Private Information as Determinants of Majority Thresholds[END_REF]. However, when we focus on framed ambiguity attitudes and risk aversion, we find a small negative correlation between the risk aversion coefficient and the framed ambiguity aversion coefficient (Spearman's ρ = -0.24, p-value ≤ 0.05).
We find a small and positive correlation (Pearson correlation = 0.21, p-value≤ 0.05) between the framed ambiguity aversion and unframed ambiguity aversion. Even if there is a significant correlation, it is small, supporting the idea that ambiguity aversion is context-dependent. This confirms that the source of information and the context changes the level of ambiguity aversion of an individual. Therefore, this result suggests that it is necessary to elicit framed ambiguity attitudes (as opposed to unframed) since these are more accurate for measuring attitudes toward ambiguity.
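These correlations can be computed with standard routines; the sketch below uses hypothetical vectors of individual coefficients, since the elicited data are not reproduced here, and the variable names are ours.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant coefficients (the real ones come from the elicitation tasks).
risk_aversion = np.array([0.1, 0.3, 0.5, 0.2, 0.7, 0.4])
framed_ambiguity_aversion = np.array([0.2, -0.1, 0.4, 0.0, 0.6, 0.1])
unframed_ambiguity_aversion = np.array([0.1, 0.0, 0.3, -0.2, 0.5, 0.2])

rho, p_rho = stats.spearmanr(risk_aversion, framed_ambiguity_aversion)
r, p_r = stats.pearsonr(framed_ambiguity_aversion, unframed_ambiguity_aversion)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f}); Pearson r = {r:.2f} (p = {p_r:.3f})")
```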
We showed that the coefficient on excuse-driven behavior is quite similar between the low and high ambiguity environments, suggesting that more ambiguity does not increase the effect of excuse-driven behavior. This result suggests that individuals do not use the increase in the ambiguity level as an excuse to give less, as in [START_REF] Garcia | Ambiguity and excuse-driven behavior in charitable giving[END_REF]. The findings indicate that the decrease in donations under high ambiguity is not due to an increase in excuse-driven behavior; rather, it is due to the effect of pessimism. This paper aims to add evidence to the inconclusive literature on the effect of uncertainty on charitable giving. To the best of our knowledge, studies examining the impact of ambiguity attitudes on charitable giving are scarce, with only one paper, by [START_REF] Cettolin | Giving in the face of risk[END_REF], looking at the impact of ambiguity aversion. Our paper seeks to address this gap by exploring the effect of ambiguity aversion, likelihood insensitivity, and pessimism on charitable giving. In the context of donations, we would argue that policymakers and/or NGOs should not necessarily focus on decreasing levels of uncertainty, except in extreme cases of ambiguity regarding where the donation will go, provided it is socially desirable to increase charitable giving. Further research should focus on studying experimentally the existence of an ambiguity threshold above which ambiguity attitudes have an impact on altruistic behaviors, and on understanding which levels of uncertainty and which ambiguity attitudes affect the level of donations.
The experiment has a total of 290 decisions. Each decision has been randomly assigned a number. We will ask you to choose a number between 1 and 290 that corresponds to the decision that determines your payment. At the end of the experiment, you will know what decision it is. At the beginning of each part, we will remind you that the decisions made in this part are part of the 290 decisions that can earn you additional money.
As a thank you for your participation, you will receive 7 euros in addition to the earnings accumulated in the experiment.
The total payment of your earnings in euros will be made in cash and privately at the end of the experiment.
The following two parts will appear randomly.
Stage "Risk"
In this part of the experiment, your choices will have no impact on the following parts of the experiment and will only impact your earnings.
You will have to make 10 decisions. On the next screen, you will find 10 lines, each corresponding to a decision.
The decision for each line is to indicate the option you prefer between option A and option B. You can change from one option to another only once.
Note that in this part of the experiment, the decisions you make will not impact your earnings determined in the "donation" part of the experiment. If this part of the experiment is chosen, you can earn an additional gain based on your decisions.
This part of the experiment may determine your additional earnings based on the number you chose at the beginning of the experiment.
Stage "Colors"
In this part of the experiment, you will successively see six tables.
In each of the six tables, you have 20 decisions to make. For each decision, you must choose the option that you prefer between option A and option B. Within a single table, option A remains the same throughout the 20 decisions you have to make. Regarding option B, the probability (chances) of winning ECUs increases with each row.
Once you have chosen option B, you can no longer choose option A for subsequent decisions.
If one of the tables in this section is chosen to determine your additional payment, then at the end of the experiment, we will randomly draw one marble from 30 marbles. Three colors of marbles are present in the urn: blue, violet, and orange.
You do not know the number of marbles for each color (unknown probabilities).
You will be paid based on your decisions and the marble's color drawn at random. Note, in this part of the experiment, the decisions you make will not affect your gain determined in the "give" part. If this part is chosen, you can receive an additional gain based on your decisions, and the marble's color will be randomly drawn at the end of the experiment.
This part of the experiment is likely to determine your additional gain based on the number you chose at the beginning of the experiment.
Stage "Donation"
In this section, we will propose that you donate to an environmental association of your choice.
We will present you with a list of environmental associations. You must choose one from the following list. Your donations will actually be transferred to the association you have chosen.
We have specified for each association its mission, as specified on its website. The associations are presented in alphabetical order.
Greenpeace: "Greenpeace is an international network of independent organizations that act based on non-violent principles to protect the environment, biodiversity, and promote peace. It relies on a movement of engaged citizens to build a sustainable and equitable world." WWF: "Since 1973, WWF France, a public utility foundation, has acted on a daily basis to offer future generations a living planet. We act to curb environmental degradation and build a future where humans live in harmony with nature."
Zero Waste France: "They defend an ambitious zero waste, zero waste approach, which prioritizes source reduction. Their vision is part of a global ecological transition, respect for human rights and a better consideration of the most disadvantaged populations and future generations."
Note: In this part of the experiment, the decisions you will make will not affect the amount (ECUs given) that the association will receive, nor your ECUs kept (you receive your ECUs kept from the donation part of the experiment). If this part of the experiment is chosen, you can obtain an additional gain based on your decisions and the color of the marble drawn at the end of the experiment.
This part of the experiment is likely to determine your additional gain, based on the number you chose at the beginning of the experiment.
The amount that the association will receive only depends on the choice you made in the previous part and the color of the marble drawn at the end of the experiment.
Figure 1: Donations under the risk treatment
Figure 2: Lottery under low ambiguity treatment
Figure 3: Lottery under high ambiguity treatment
Figure 4: Screenshot of the risk aversion elicitation task
Figure 5: Screenshot of the unframed elicitation task
Figure 6: Screenshot of the framed elicitation task under HAmbT
Figure 7: Screenshot of the excuse-driven behavior task under HAmbT: self table
Figure 8: Mean of donations per treatment
Figure 9: Kernel density estimation of donations per treatment
Table 1: Summary statistics
Table 2: OLS regression on the determinants of the level of donation
Table 3: Impact of ambiguity attitudes on donations
Table 4: Risk attitudes on the level of donations under risk
Table 5: Ambiguity attitudes on the level of donations under low ambiguity
Table 6: Ambiguity attitudes on the level of donations under high ambiguity
La générosité des Français, 27ème édition, Novembre 2022.
WWF focuses on wilderness preservation and the reduction of human impact on the environment; Greenpeace seeks to ensure the ability of the earth to nurture life in all its diversity; and Zero Waste France promotes the zero waste approach in Paris and Ile-de-France.
Instructions for the HAmbT can be found in Appendix B
Note that we do not include ambiguity aversion, likelihood insensitivity, and pessimism in the same regression since the parameter pessimism is constructed from ambiguity aversion and insensitivity.
This experiment uses compound lotteries and fails to control for attitudes to compound objective lotteries. However, we control for ambiguity attitudes, and[START_REF] Halevy | Ellsberg Revisited: An Experimental Study[END_REF] shows that attitudes to ambiguity and compound objective lotteries are tightly associated.
Acknowledgments
We are grateful to Meglena Jeleva for her valuable guidance and advice. We are also thankful to Giuseppe Attanasi, Tarek Jaber-Lopez, Santiago Sautua, and Claire Mollier for their useful help and suggestions. We thank the participants at the AS-FEE 2022 in Lyon, the ESA Job Market Seminars, and the SEET 2023 in Valencia.
Appendix A New-Environmental Paradigm scale and additional questions

New-environmental paradigm scale [START_REF] Dunlap | New Trends in Measuring Environmental Attitudes: Measuring Endorsement of the New Ecological Paradigm: A Revised NEP Scale[END_REF]

In this part of the experiment, you will find sentences about the relationship between humans and the environment. For each sentence, indicate if you don't know - if you strongly disagree - if you somewhat disagree - if you somewhat agree - if you strongly agree.
1. We are approaching the limit of the number of people the Earth can support.
2. Humans have the right to modify the natural environment to suit their needs.
3. When humans interfere with nature it often produces disastrous consequences.
4. Human ingenuity will insure that we do not make the Earth unlivable.
5. Humans are seriously abusing the environment.
6. The Earth has plenty of natural resources if we just learn how to develop them.
7. Plants and animals have as much right as humans to exist.
8. The balance of nature is strong enough to cope with the impacts of modern industrial nations.
9. Despite our special abilities, humans are still subject to the laws of nature.
10. The so-called "ecological crisis" facing humankind has been greatly exaggerated.
11. The Earth is like a spaceship with very limited room and resources.
12. Humans were meant to rule over the rest of nature.
13. The balance of nature is very delicate and easily upset.
14. Humans will eventually learn enough about how nature works to be able to control it.
15. If things continue on their present course, we will soon experience a major ecological catastrophe.

Behavior related to donations

The answers to these questions are important to us and will be completely anonymous and confidential.

This experiment consists of six completely independent parts. Throughout the experiment, and based on your decisions, you can earn ECUs. Your earnings are expressed in ECUs. Your total earnings for the experiment correspond to the total amount of ECUs accumulated.
At the end of the experiment, your ECU earnings will be converted to euros at the rate of 100 ECUs = €7.50 (1 ECU = €0.075).
The "donations" part of the experiment will guarantee you ECU earnings.
Of the six parts of this experiment, four parts can also allow you to earn additional money: the "risk" part, the "colors" part, the "association" part, and the "lottery" part. We will explain the procedure at the beginning of each part determining your earnings.

--
We will now explain how you can make your donation.
We give you an amount of 100 ECUs and you must choose an amount that you want to donate to the association you have chosen, this amount must be between 0 and 100 ECUs.
You will keep for certain the remaining ECUs that you have decided not to donate to the association.
The total number of ECUs is equal to 100, that is, the sum of ECUs Kept (EG) by you and the ECUs Given (ED) must be equal to 100 (EG + ED = 100).
The ECUs that the association will receive will depend on the ECUs that you have given and the color of the marble that will be drawn at the end of the experience. This amount will be actually paid to the association of your choice.
--
We will now explain the random draw that will take place at the end of the experiment.
The exact amount you will give will not always reach the NGO. Three outcomes are possible: either the association will receive 0 ECUs, or it will receive the exact amount you have given the NGO, or it will receive double the ECUs you have given the NGO.
We will draw a marble from an unknown or known composition urn.
In order to determine which urn we will draw the ball from, we will conduct a pre-draw to determine whether we will use the known or unknown composition urn. This first urn is composed of 10 balls: 3 orange balls and 7 white balls.
If the white ball is drawn: with 70% chances, we have no information about the probabilities of the three outcomes: we will draw a marble from an unknown composition urn.
If the orange ball is drawn: with 30% chances, we know the exact probabilities of each outcome: we will draw a marble from a known composition urn.
These probabilities are:
There is a 60% chance that the association will receive exactly the number of ECUs that you have decided to give it (ED). This will happen if a blue marble is drawn at the end of the experiment. There are 18 blue marbles among the 30 marbles in the known composition urn.
There is a 30% chance that the association will receive twice the number of ECUs that you have decided to give it (2 × ED). This will happen if an orange marble is drawn at the end of the experiment. There are 9 orange marbles among the 30 marbles in the known composition urn.
There is a 10% chance that the association will receive nothing (0 × ED), regardless of the ECUs you have decided to give it. This will happen if a purple marble is drawn at the end of the experiment. There are three purple marbles among the 30 in the known composition urn.
At the end of the experiment, one of the three outcomes will occur.
The realization of one of the outcomes will depend on the color of the marble, which will determine the amount of donation the NGO will receive.
This part of the experiment will determine your gain independently of the number chosen at the beginning of the experiment. This part will not determine your additional gain.
Stage "lottery"
This part of the experiment is completely independent of the choices made in the donation part.
In this part of the experiment, two tables will appear successively.
In each table, you will have to make 20 decisions (each table has 20 lines).
Each decision involves choosing your preferred option between options A and B.
Option A does not change depending on the decisions (lines), whereas choosing option B means the amount you (or the organization) can win changes.
If you choose option A, a lottery will be drawn, and based on the result, the organization will receive additional ECUs (either 0, 30, or 60 ECUs).
If you choose option B, the organization (you) will win additional ECUs for sure.
The decisions you make can impact your payment and the amount received by the NGO.
Note: In this part of the experiment, the decisions you make will not impact the amount (ECUs given) that the organization will receive, which was determined in the donation part of the experiment, nor your kept ECUs (you will receive your kept ECUs from the "donation" part of the experiment for sure).
If this part of the experiment is chosen, you or the organization can earn additional money based on your decisions and the color of the marble drawn at the end of the experiment.
This part of the experiment may determine your additional earnings based on the number you chose at the beginning of the experiment.
Reminder:
If the known composition urn is drawn (30% chance), then we will present you with an urn with 30 marbles and three colors: blue (18 marbles), violet (3 marbles), and orange (9 marbles).
If the unknown composition urn is drawn (70% chance), then we do not know the number of marbles for each color (unknown probabilities).
You and the organization will be paid based on your decisions and the color of the marble drawn at the end of the experiment.
Stage "NGO"
In this part of the experiment, you will see six tables in succession.
In each of the six tables, you have 20 decisions to make. You must choose the option that you prefer between option A and option B.
Be careful, once you have chosen option B, you cannot choose option A anymore.
Within the same table, option A remains the same throughout the 20 decisions you have to make. As for option B, the probability (the chances) of winning ECUs increases with each line.
If one of the tables in this part is chosen, then the marble's color drawn at the end of the experiment will determine your additional gain.
Reminder: If the known composition urn is drawn (30% chance), then we will present you with an urn with 30 marbles and three colors: Blue (18 marbles), violet (3 marbles), and orange (9 marbles). |
00323586 | en | [
"spi.meca.vibr",
"phys.meca.vibr"
] | 2024/03/04 16:41:26 | 2009 | https://hal.science/hal-00323586v2/file/jsv_v19.pdf | J M Génevaux
N Dauchez
O Doutres
Non linear damping of a plate using Faraday instability of a fluid film
Keywords: Non linear damping, Damping of panels, Faraday instability, Fluid film, Vibroacoustics PACS: 43.55.Wk, 46.40.Ff, 68.35.J, 43.25
Damping using an instability of a fluid film in contact with a vibrating structure is investigated. Waves induced in the fluid film are the source of the added damping. A model based on the theory of Faraday instability is applied to a clamped circular plate covered by a fluid film. It is shown that this original technique can provide a significant damping, as with viscoelastic or porous material treatments. It is related to the amplitude of the waves which is a non linear function of the plate acceleration. Theoretical and experimental results are compared. The model overestimates the added damping: it is four times greater than the measured one.
Introduction
This paper examines a method to reduce the vibration and therefore the emitted noise of a structure by means of a fluid film in the low frequency range. The usual techniques for noise and vibration reduction of a structure use a viscoelastic layer bonded onto the structure [START_REF] Nashif | Vibrations damping[END_REF]. In this case, the dissipation is proportional to the loss factor of the material and to the flexural strain energy of the viscoelastic layer. To be efficient, this technique requires the use of a thick layer. Its thickness, and therefore the added mass, can be reduced by using a light and stiff constraining sheet that increases the strain energy in the dissipating layer. Optimal partial covering may also be used to reduce the added mass [START_REF] Alvelid | Optimal position and shape of applied damping material Journal of Sound and Vibration[END_REF]. These techniques are limited by viscoelastic properties that depend on frequency and temperature [START_REF] Pritz | Loses factor peak of viscoelastic materials : magnitude to width relations[END_REF]. Designed primarily for sound absorption, porous materials such as polymer foam may also add significant damping when mounted onto a structure [4][5][START_REF] Dauchez | Investigation and modelling of damping in a plate with a bonded porous layer[END_REF]. To improve the efficiency of passive treatments, active control techniques have also been developed [START_REF] Baz | Robust control of active constrained layer damping[END_REF][START_REF] Guyomar | Nonlinear semi-passive multimodal vibration damping: An efficient probabilistic approach[END_REF] but require a more sophisticated set-up. Moreover, their robustness has to be carefully demonstrated.

Fig. 1. Oscillation of a fluid film on a vibrating plate, with w(x, y, t) the transverse displacement of a point P of the plate (coordinates in the plane (x, y)), t the time, h the water level of the fluid film at rest and ξ(x, y, t) the amplitude of the waves above the point P.
In this paper, damping added by a fluid film in contact with a structure is investigated (Fig. 1). When the normal acceleration of the structure is strong enough, stationary waves appear in the fluid film (Fig. 2). This phenomenon is called Faraday instability [START_REF] Faraday | On the forms and states of fluids on vibrating elastic surfaces[END_REF][START_REF] Kityk | Spatio-temporal Fourier analysis, of Faraday surface wave patterns on a two-liquid interface[END_REF]. In case of a finite area of the fluid-air interface, the boundary conditions select countable wave lengths and several stationary mode shapes are solution of the problem [START_REF] Douady | Experimental study of the Faraday instability[END_REF]. The mode shape which has the greater amplification coefficient appears [START_REF] Benjamin | The stability of the plane free surface of a liquid in vertical periodic motion[END_REF][START_REF] Merlen | Duality of the supercritical solutions in magnetoacoustic wave phase conjugation[END_REF]. In case of an infinite area, this amplification coefficient will select the shape of the free surface among elementary cell patterns (roll, hexagon or square) (Fig. 2). These elementary cells can be considered as oscillators distributed over the surface of the structure [START_REF] Thompson | A continuous damped vibration absorber to reduce broadband wave propagation in beams[END_REF], which damping depends on fluid flow in a cell and on the viscosity of the fluid. Moreover, the relation between the wave amplitude and the driving acceleration of the plate is nonlinear: it is necessary to determine the acceleration threshold for waves to appear and their amplitude at saturation [START_REF] Douady | Experimental study of the Faraday instability[END_REF][START_REF] Milner | Square patterns and secondary instabilities in driven capillary waves[END_REF]. Note that for high acceleration level, ejection of droplets can be observed [START_REF] Alzuaga | Motion of droplets on solid surface using acoustic radiation pressure[END_REF]. This paper focuses on the added dissipation to the structure by the Faraday instability, using the thinnest fluid layer without droplet ejections. To the author's knowledge, this technique aimed at reducing the vibration using the Faraday instability has not been previously presented.
In the first part (Sec. 2), the method of calculating the added damping is detailed. To this end, a particular geometrical configuration is chosen: a circular plate clamped at its edge.
In section 3, the corresponding experiment is designed in order to measure the added damping of the first mode of the plate. This highlights the nonlinear behaviour of the instability of the fluid film and the influence of the parameters governing the phenomenon. The results of the model are compared with those obtained by the experiment in section 4.
Model
From local to global dissipation
This section details how to calculate the global dissipation added on the structure as function of the driving acceleration and for a given mode shape of the plate. Indeed, the modal dissipation is function of the area where the instabil-ity appears and of the nonlinear relation between the amplitude of the waves and the local acceleration.
In the present paper, the first mode of a circular clamped plate of diameter d is considered. Its mode shape is given by [START_REF] Geradin | Théorie des vibrations : application à la dynamique des structures Masson[END_REF],
$$\phi(r, \theta) = \frac{I_0(\beta_{01} d/2)\, J_0(\beta_{01} r) - J_0(\beta_{01} d/2)\, I_0(\beta_{01} r)}{I_0(\beta_{01} d/2)\, J_0(0) - J_0(\beta_{01} d/2)\, I_0(0)} \qquad (1)$$
with r the distance of the point to the centre of the plate, I 0 the modified Bessel function, J 0 the Bessel function and β 01 = 1.015 × 2π/d. This mode shape is normalized so that φ(0, 0) = 1.
The modal damping ratio of the plate is given as function of modal parameters,
$$\zeta_{ap} = \frac{c_{ap}}{2\sqrt{k_p\, m_{pf}}} \qquad (2)$$
where c ap is the damping coefficient added on the plate, k p the modal stiffness of the plate and m pf the modal mass of the plate loaded by the fluid film. These three terms c ap , k p and m pf are calculated in the two following sections.
Modal stiffness and modal mass
The strain energy of the plate is calculated by integrating the local strain energy over the plate. This expression gives the modal stiffness k p [START_REF] Geradin | Théorie des vibrations : application à la dynamique des structures Masson[END_REF]:
$$\frac{1}{2}\, k_p\, \phi(0,0)^2 = \frac{1}{2}\, D \int_{\theta=0}^{2\pi}\!\!\int_{r=0}^{R} \left[ \left( \phi_{,rr} + \frac{1}{r}\,\phi_{,r} \right)^2 - 2\,(1 - \nu_p)\, \frac{\phi_{,rr}\,\phi_{,r}}{r} \right] r\, dr\, d\theta \qquad (3)$$
(3) with D the stiffness of the plate defined by
$$D = \frac{E\, e^3}{12\,(1 - \nu_p^2)} \qquad (4)$$
with E the Young modulus and ν p the Poisson's ratio of the material, e the thickness of the plate. This modal stiffness does not depend on the fluid film properties.
By the same approach, the modal mass of the plate m p is given by [START_REF] Geradin | Théorie des vibrations : application à la dynamique des structures Masson[END_REF]:
$$\frac{1}{2}\, m_p\, \phi(0,0)^2 = \frac{1}{2} \int_{\theta=0}^{2\pi}\!\!\int_{r=0}^{R} \rho_p\, e\, \phi(r, \theta)^2\, r\, dr\, d\theta \qquad (5)$$
with
$$\rho_p\, e = \rho_s\, e + \rho_f\, h = \rho_s\, e\,(1 + \rho) \qquad (6)$$
the equivalent mass per unit area of the system, based on ρ s the density of the aluminium, ρ f the density of the fluid and an added mass due to the fluid layer. In this equation, the kinetic energy of the fluid due to the flow relative to the plate is neglected. Assuming that ρ p is constant along the plate and that φ(0, 0) = 1, equation (5) can be rewritten,
$$m_p = \rho_p\, e \int_{\theta=0}^{2\pi}\!\!\int_{r=0}^{R} \phi(r, \theta)^2\, r\, dr\, d\theta \qquad (7)$$
The value of ρ s is given by fitting the first resonance frequency of the bare plate,
$$f_1 = \frac{1.015^2\, \pi^2}{2\pi\, (d/2)^2} \sqrt{\frac{D}{\rho_s\, e}} \qquad (8)$$
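As a numerical illustration of Eqs. (1)-(8), the following sketch (assuming NumPy/SciPy are available) evaluates the mode shape, the bending stiffness and the equivalent density fitted from the measured first resonance. The quadrature scheme and the standard 12(1 − ν²) plate-stiffness denominator are assumptions, and the fitted ρ s absorbs air loading and mounting effects, so it need not equal the nominal aluminium density.

```python
# Minimal numerical sketch of Eqs. (1)-(8): mode shape, bending stiffness D,
# equivalent density rho_s fitted from the measured first resonance f1, and the
# loaded-plate modal mass. Values follow Table 1; the integration is an assumption.
import numpy as np
from scipy.special import j0, i0
from scipy.integrate import quad

d, e, E, nu_p, f1 = 0.290, 0.001, 7e10, 0.3, 100.0   # plate data (Table 1)
R = d / 2
beta01 = 1.015 * 2 * np.pi / d

def phi(r):
    """Normalized first mode shape of the clamped circular plate, Eq. (1)."""
    num = i0(beta01 * R) * j0(beta01 * r) - j0(beta01 * R) * i0(beta01 * r)
    den = i0(beta01 * R) * j0(0.0) - j0(beta01 * R) * i0(0.0)
    return num / den

# Bending stiffness, Eq. (4) (standard Kirchhoff form assumed)
D = E * e**3 / (12 * (1 - nu_p**2))

# Equivalent density fitted from f1, inverting Eq. (8)
rho_s = D / e * (1.015**2 * np.pi**2 / (2 * np.pi * R**2 * f1))**2

# Loaded-plate modal mass, Eqs. (5)-(7), with the fluid layer of Table 1
rho_f, h = 1000.0, 0.0046
rho_p_e = rho_s * e + rho_f * h                      # Eq. (6), mass per unit area
m_p = rho_p_e * 2 * np.pi * quad(lambda r: phi(r)**2 * r, 0.0, R)[0]  # Eq. (7)

print(f"D = {D:.2f} N m, rho_s = {rho_s:.0f} kg/m3, m_p = {m_p:.4f} kg")
```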
Added damping coefficient
The added damping coefficient c ap is determined considering the dissipated energy during one period of the plate vibration [START_REF] Faraday | On the forms and states of fluids on vibrating elastic surfaces[END_REF][START_REF] Benjamin | The stability of the plane free surface of a liquid in vertical periodic motion[END_REF][START_REF] Matthiessen | Ueber die Transversalschwingungen tnender tropfbarer und elastischer Flssigkeiten[END_REF][START_REF] Rayleigh | On the crispation of fluid resting upon a vibrating support[END_REF][START_REF] Maksimov | Transient processes near the threshold of acoustically driven bubble shape oscillations[END_REF],
$$\int_{t=0}^{2\pi/\omega_e} \frac{1}{2}\, c_{ap}\, \dot{w}^2(0,0,t)\, dt = \int_{t=0}^{2\pi/\omega_e} \left[ \frac{1}{2} \int_{\theta=0}^{2\pi}\!\!\int_{r=0}^{R} \hat{c} \left( \frac{\partial\, h\,\alpha(r,\theta)\cos(\omega_f t)}{\partial t} \right)^{2} r\, dr\, d\theta \right] dt \qquad (9)$$
with w(0, 0, t) = W sin(ω e t) the instantaneous transverse displacement of the centre of the plate, W the amplitude of the displacement at this point, ĉ the local damping coefficient per unit area, α(r, θ) = A(r, θ)/h (Eq. 12) the dimensionless amplitude of the waves, A(r, θ) = max_t(ξ(r, θ, t)) the amplitude of the waves at saturation (Fig. 2), ξ(r, θ, t) the instantaneous position of the free surface, ω f the circular frequency of the free surface waves due to the Faraday instability, ω e the circular frequency of excitation and t the time.
Taking into account the axisymmetry of the first mode shape, that ω f = ω e /2 and ẇ(0, 0, t) = W ω e cos(ω e t), the integration in time and in angle θ gives,
$$c_{ap}\, W^2 = \hat{c}\, h^2\, \pi \int_{r=0}^{R} \alpha(r)^2\, r\, dr \qquad (10)$$
The integration is made on the area where the instability appears. It is a disk of radius r d , so that for r > r d , α(r) = 0. Assuming that the amplitude of the waves does not depend on the acceleration of the other points of the plate (local hypothesis), r d may be defined by
$$\ddot{w}(r_d, \theta) = \tilde{\varepsilon}_c\, g \qquad (11)$$
with ε̃_c the dimensionless acceleration threshold for the Faraday instability to appear and g the gravity.
For r < r_d, a linear relation between α² and ε̃ is accounted for as suggested in ref. [START_REF] Douady | Experimental study of the Faraday instability[END_REF]: the amplitude of the waves at saturation, when ∂A/∂t = 0, is an affine function of ε̃,
$$\alpha^2 = \tilde{\varepsilon}\, A_{\omega_f} - B_{\omega_f} \qquad (12)$$
where A_ωf and B_ωf are two parameters which will be determined experimentally in this paper (Sec. 3 and Fig. 7). Note that ε̃_c is the value of ε̃ for which α = 0 in equation (12):
$$\tilde{\varepsilon}_c = \frac{B_{\omega_f}}{A_{\omega_f}} \qquad (13)$$
Equation ( 10) then writes
$$c_{ap} = \frac{\pi\, \hat{c}\, h^2}{W^2} \left[ A_{\omega_f} \int_{r=0}^{r_d} \tilde{\varepsilon}(r)\, r\, dr - B_{\omega_f} \int_{r=0}^{r_d} r\, dr \right] \qquad (14)$$
where the dimensionless acceleration ε̃(r) can be related to φ(r, θ), the normalized mode shape of the plate (Eq. 1), by
$$\tilde{\varepsilon}(r) = \frac{W\, \omega_e^2}{g}\, \phi(r, \theta) \qquad (15)$$
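The added damping coefficient of Eq. (14) can be evaluated numerically once the mode shape and the saturation coefficients are known. A minimal sketch follows, assuming SciPy; the function names and the quadrature scheme are illustrative only.

```python
# Sketch of the added-damping evaluation, Eqs. (11)-(15): find the instability
# radius r_d from the threshold, then integrate the saturated wave amplitude over
# the unstable disk. phi(r) is the mode shape of Eq. (1); A_wf, B_wf and c_hat
# come from Sec. 2.3 / Table 1. The quadrature choices are assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

g = 9.81

def c_ap(W, omega_e, phi, A_wf, B_wf, c_hat, h, R):
    eps_c = B_wf / A_wf                               # threshold, Eq. (13)
    eps = lambda r: W * omega_e**2 / g * phi(r)       # local dimensionless acceleration, Eq. (15)
    if eps(0.0) <= eps_c:                             # no waves anywhere: no added damping
        return 0.0
    # radius where the local acceleration equals the threshold, Eq. (11)
    r_d = R if eps(R) > eps_c else brentq(lambda r: eps(r) - eps_c, 1e-9, R)
    integral = quad(lambda r: (A_wf * eps(r) - B_wf) * r, 0.0, r_d)[0]  # Eq. (12) in Eq. (10)
    return np.pi * c_hat * h**2 / W**2 * integral     # Eq. (14)
```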
Equivalent damping coefficient of a fluid cell
The aim of this section is to calculate the modal damping coefficient ĉ of the fluid per unit area, which is required to calculate c ap (Eq. 14). It is given by,
$$\hat{c}(r, \theta) = 2\, \zeta_0\, m_f\, \omega_f \qquad (16)$$
with m f the modal mass per unit area and ζ 0 the damping ratio of the fluid. ζ 0 can be related to the logarithmic decrement α s of free oscillations [START_REF] Geradin | Théorie des vibrations : application à la dynamique des structures Masson[END_REF] by
$$\zeta_0 = \frac{\alpha_s}{\sqrt{4\pi^2 + \alpha_s^2}} \qquad (17)$$
with α s = 2πδk [START_REF] Milner | Square patterns and secondary instabilities in driven capillary waves[END_REF]. Here δ = 2ν/ω f is the thickness of the viscous layer for a fluid with a kinematic viscosity ν solicited at the circular frequency ω f , and k the wave number. This wave number is solution of [START_REF] Génevaux | Gravity effects on coupled frequencies of a 2D fluid-structure problem with free surface[END_REF]]
$$\omega_f^2 = g\, k \tanh(kh) \left( 1 + \frac{\sigma k^2}{\rho_f\, g} \right) \qquad (18)$$
with σ the surface tension of the fluid-air interface.
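The wave number entering Eq. (18) can be obtained by a one-dimensional root search. A possible sketch, assuming SciPy and the fluid properties of Table 1 (the bracketing interval is an assumption):

```python
# Sketch: solve the dispersion relation of Eq. (18) for the wave number k,
# given the sub-harmonic frequency omega_f = omega_e / 2.
import numpy as np
from scipy.optimize import brentq

g, rho_f, sigma, h = 9.81, 1000.0, 0.072, 0.0046
f_e = 75.0                        # excitation frequency (Hz)
omega_f = 2 * np.pi * f_e / 2     # Faraday waves respond at half the driving frequency

def dispersion(k):
    return g * k * np.tanh(k * h) * (1 + sigma * k**2 / (rho_f * g)) - omega_f**2

k = brentq(dispersion, 1.0, 1e5)  # rad/m, bracket chosen wide enough for capillary waves
lam = 2 * np.pi / k
print(f"k = {k:.1f} rad/m, wavelength = {lam * 1000:.2f} mm")
```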
The modal mass per unit area m f is
$$m_f = \frac{m_{fcell}}{(\lambda/2)^2} \qquad (19)$$
with m f cell the modal mass of one cell whose area is (λ/2) 2 , with λ = 2π/k the wave length. Here, the first mode of a fluid cell for a given pattern of the free surface is considered. Several patterns appear successively on the free surface with the increase of the level of the acceleration [START_REF] Kityk | Spatio-temporal Fourier analysis, of Faraday surface wave patterns on a two-liquid interface[END_REF]: roll, hexagon, then square. In this paper, the square pattern which is present for high accelerations is chosen (Fig. 2). The modal shape can be defined according to a potential function for the flow: the Reynolds is large enough (Re = 1.7 10 6 ) for the thickness of viscous boundary layer (δ = 0.063 mm) to be smaller than the fluid depth. Let us consider a fluid cell defined by a volume of incompressible fluid whose dimensions are λ/2 in x and y directions, and h in z direction. Symmetry arguments allow the use of a flow v relative to the plate associated to Φ f (x, y, z, t) = φ f (x, y, z)g(t), with g(t) a harmonic function and
$$\varphi_f(x, y, z) = \beta(r, \theta)\, \cos\!\big(q\,(2x/\lambda - 1)\big)\, \cos\!\big(q\,(2y/\lambda - 1)\big)\, \cosh\!\big(\sqrt{2}\, q\, z/\lambda\big) \qquad (20)$$
with β the amplitude of the potential function which depends on the position of the fluid cell (cylindrical coordinates r,θ), and (x,y,z) the coordinates of a point into the cell. The velocity field v is given by,
$$h\,\frac{\partial \varphi_f}{\partial x}\, g(t) = \mathbf{v}\cdot\mathbf{x}, \qquad h\,\frac{\partial \varphi_f}{\partial y}\, g(t) = \mathbf{v}\cdot\mathbf{y}, \qquad h\,\frac{\partial \varphi_f}{\partial z}\, g(t) = \mathbf{v}\cdot\mathbf{z} \qquad (21)$$
The boundary conditions at x = λ/2 and y = λ/2 give for the first mode of the fluid cell, q = π.
The kinetic energy of the fluid in the cell is used to calculate m f cell :
$$\frac{1}{2}\, m_{fcell} \left( \frac{\partial\, \alpha(r,\theta)\cos(\omega_f t)}{\partial t} \right)^{2} = \frac{1}{2}\, \rho_f\, g(t)^2 \int_{x=0}^{\lambda}\!\!\int_{y=0}^{\lambda}\!\!\int_{z=0}^{h} \left( \varphi_{f,x}^2 + \varphi_{f,y}^2 + \varphi_{f,z}^2 \right) dx\, dy\, dz \qquad (22)$$
with φ f,x = ∂φ f /∂x. β(r, θ) is derived from α(r, θ), the amplitude of the waves, by equating the vertical velocity of a point of the free surface (z = h):
$$\alpha(r, \theta)\, \frac{\partial \cos(\omega_f t)}{\partial t} = g(t)\, \frac{\partial \varphi_f}{\partial z} \quad \forall t \qquad (23)$$
and using equation ( 20) it gives,
$$\alpha(r, \theta) = \beta(r, \theta)\, \frac{2\sqrt{2}\,\pi}{\lambda}\, \sinh\!\big(\sqrt{2}\,\pi h/\lambda\big) \qquad (24)$$
$$g(t) = -\omega_f \sin(\omega_f t) \qquad (25)$$
so that,
$$\beta(r, \theta) = \frac{\alpha(r, \theta)\, \lambda}{2\sqrt{2}\,\pi\, \sinh\!\big(\sqrt{2}\,\pi h/\lambda\big)} \qquad (26)$$
Using equation ( 26), and integrating the second member of equation ( 22), the modal mass of the 1 degree of freedom system associated to the fluid cell writes,
$$m_{fcell} = \frac{\sqrt{2}\,\rho_f\, \lambda^3}{256\,\pi}\; \frac{\left[ 1 - \exp\!\left( -\dfrac{8\sqrt{2}\,\pi h}{\lambda} \right) \right] \exp\!\left( \dfrac{4\sqrt{2}\,\pi h}{\lambda} \right)}{\cosh\!\left( \dfrac{\sqrt{2}\,\pi h}{\lambda} \right)^{2} - 1} \qquad (27)$$
The modal mass per unit area m f and the modal damping coefficient per unit area ĉ may be obtained with equations (16) and (19).
Experimental results
The aim of this section is to determine experimentally the plate damping induced by the Faraday instability of the fluid film. Because the system is nonlinear, all experimental quantities are referenced to the acceleration at the centre of the plate, called driving acceleration, and denoted W a (0, 0). Four water levels and four values of the driving acceleration are considered.
Experimental set-up
The geometrical configuration is presented in figure 3. A circular aluminum plate of diameter d = 0.290 m is clamped. Its characteristics are summarized in tables 1 and 2. It is excited by a shaker connected to its centre via a force transducer giving the excitation force F. An accelerometer of mass m a = 0.0042 kg is bonded at 1 cm from the centre of the plate to get the reference acceleration W a (0, 0) = ω 2 e W. A laser vibrometer is focused on the plate through the fluid film. The laser spot gives a measured area of the order of 1 mm 2. The frequency range is set between 40 Hz and 120 Hz to ensure Faraday instabilities for experimentally achievable accelerations. The thickness of the fluid layer is chosen to be of the order of the wavelength: this ensures a total covering of the plate and a small added mass.
The frequency response functions are obtained using a step by step harmonic excitation for a constant acceleration amplitude W a (0, 0).
Determination of the free surface velocity of the fluid
The signal given by the laser vibrometer is proportional to the apparent velocity of the plate d w dt . It is a combination of the plate and of the free surface velocities:
$$\frac{d \tilde{w}(r, \theta, t)}{dt} = \frac{d w(r, \theta, t)}{dt} - \left( \frac{c_0}{c_1} - 1 \right) \frac{d\big(h + \xi(r, \theta, t)\big)}{dt} \qquad (28)$$
with w(r, θ, t) the displacement of the plate at the point P (r, θ), c 0 the light velocity in the air, c 1 the light velocity in the fluid, h(r, θ, t) the water level crossed by the beam. The time signal of the apparent speed of the plate (Fig. 4) exhibits a sub-harmonic at ω e /2. Their contributions can be easily identified because they are at different frequencies (see Fig. 4). To do so, equation ( 28) is rewritten as:
$$\frac{d \tilde{w}(r, \theta, t)}{dt} = \tilde{a}_1 \cos(\omega_e t + \tilde{a}_2) + \tilde{a}_3 \cos\!\left( \frac{\omega_e}{2}\, t + \tilde{a}_4 \right) \qquad (29)$$
The parameters ã1 and ã2 are associated to the plate velocity, and the parameters ã3 and ã4 to the fluid interface movement. They are determined using a nonlinear optimization. The positions of the nodes of the free surface mode shape are not stationary: ã3 = 0 if the laser beam focuses through a node, and is maximum if the laser beam focuses through an antinode. Indeed, for acceleration levels far above the threshold, second order effects [START_REF] Martin | Drift instability of standing Faraday waves[END_REF] induce a slow drift of the positions of the antinodes of the fluid-air interface. Thus, the time of acquisition must be long enough to contain a measurement on an antinode. The amplitude of the waves is then deduced from the greatest value of ã3.
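The decomposition of Eq. (29) amounts to a four-parameter nonlinear least-squares fit. A minimal sketch, assuming SciPy's curve_fit and that the sampled time vector and the measured apparent velocity are available:

```python
# Sketch of the separation in Eq. (29): fit the plate component (at omega_e) and
# the free-surface component (at omega_e/2) to the vibrometer signal.
import numpy as np
from scipy.optimize import curve_fit

def fit_components(t, v_measured, omega_e):
    """Return (a1, a2, a3, a4); |a3| is largest when the beam crosses an antinode."""
    def model(t, a1, a2, a3, a4):
        return a1 * np.cos(omega_e * t + a2) + a3 * np.cos(omega_e / 2 * t + a4)
    p0 = [np.std(v_measured), 0.0, np.std(v_measured), 0.0]   # rough initial guess
    popt, _ = curve_fit(model, t, v_measured, p0=p0)
    return popt
```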
Influence of the driving acceleration and water levels
The experiments are performed with water to test the influence of the fluid level on the frequency response functions. The water levels remain below 0.0133 m and above 0.004 m for all the experiments. Below this lower limit, the wetting mechanism prevents the water from covering the whole plate. The ratio of added mass per unit area is:
$$\rho = \frac{\rho_f\, h}{\rho_s\, e} \qquad (30)$$
Near the resonance frequency (75 Hz), stationary waves are observed on the free surface (Fig. 2). Their amplitude is higher at the centre of the plate. This coincides with the maximum amplitude of the first mode shape. They appear as square patterns within a circle at the centre of the plate. Their wavelength, which is consistent with capillary waves on the surface, increases as the frequency decreases.
To determine the acceleration threshold for Faraday instability, the frequency response function is measured for several amplitudes of the driving acceleration for water levels from 0 to 0.0133 m. Nonlinear effects are observed (Fig. 5) when increasing the amplitude of acceleration:
• the existence of a threshold for waves to appear,
• an increase in the area of the circular surface on which waves are present,
• a slight increase in the frequency resonance,
• a reduction of the amplitude of the peak (up to 13 dB).
In figure 5, the threshold is between 6 m s -2 and 13 m s -2 . It corresponds to a sharp increase in damping. Figure 6 shows the damping ratio ζ pf, which is estimated using the half-power method. The measured damping for this non-optimized configuration can be half the value of the damping measured in the case of an unconstrained-layer treatment, and is greater than the damping added by a foam or fiber layer. This increase is not observed for the bare plate (continuous line). The thinner the fluid layer, the stronger the damping is.
Comparison between theoretical and experimental results
Comparison between theoretical and experimental values of the plate damping induced by the fluid film is now evaluated. Water with the thinner thickness is used to induce the stronger damping. The numerical values of the parameters of the model are given in tables 1 and 2 for this configuration.
The added damping ratio depends on the local dissipation of the fluid layer and on the area of the instability area.
This local dissipation is a function of the relation between the local acceleration of the plate and of the wave amplitude (Fig. 6). The relation between the wave amplitude and the motion of the plate is quantified by the coefficients A ωf and B ωf (Eq. (12)) which are identified on the experimental data by a least-squares regression method (Fig. 7). The uncertainties on the position of the regression line are calculated with a confidence of 95%. Note that three points are not taken into account: the noise on the signal and the low spatial stability of the waves do not allow a correct evaluation of the amplitude of the waves for these three acceleration levels. In the following, above the threshold ε̃_c, the dimensionless amplitudes of the waves lie in the following boundaries:
$$0.0135\,(\tilde{\varepsilon} - 0.82) < \alpha^2 < 0.0163\,(\tilde{\varepsilon} - 0.82) \qquad (31)$$
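The identification of A ωf and B ωf reduces to an affine least-squares fit of α² against ε̃ above the threshold. A possible sketch (the 95% confidence bounds used for Fig. 7 would require an additional covariance or bootstrap analysis not shown here):

```python
# Sketch of the regression behind Eq. (12) / Fig. 7: fit alpha^2 as an affine
# function of the dimensionless acceleration above threshold. The arrays of
# measured (eps, alpha) pairs must come from the experiment.
import numpy as np

def fit_saturation_law(eps, alpha):
    """Return (A_wf, B_wf, eps_c) from a least-squares fit of alpha^2 = A*eps - B."""
    A_wf, minus_B = np.polyfit(eps, np.asarray(alpha)**2, 1)   # slope, intercept
    B_wf = -minus_B
    return A_wf, B_wf, B_wf / A_wf                             # threshold from Eq. (13)
```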
The wave length λ = 0.004 m given by the model using σ = 0.072 N m -1 the surface tension of the fluid-air interface, is near the measured experimental wave length (0.00626 m< λ < 0.00634 m, Sec. 3.3). The experimental average value λ = 0.0063 m is used in the model to evaluate the damping.
Moreover, the equivalent damping coefficient c ap added to the structure depends on the area on which the damping occurs. The first mode shape of the plate induces a circular limit of instability of radius r d where, W a (r d , θ) = 8.04 m s -2 . The relationship between this radius and the acceleration in the centre of the plate is given by the mode shape of the plate φ(r, θ) (Eq. 1) and the instability threshold. The evolution of c ap , as function of W a at the centre of the plate, can then be calculated taking into account the uncertainty of the amplitude of the waves (Eq. 31). From the evolution of c ap , the damping ratio added by the fluid ζ ap can be plotted (Fig. 8). For the experimental point of view, the damping ratio added by the fluid can be deduced by the comparison of the damping ratio observed with the fluid ζ pf and the damping ratio of the bare plate ζ p (Fig. 6). The damping ratio of the bare plate is,
$$\zeta_p = \frac{c_p}{2\sqrt{k_p\, m_p}} \qquad (32)$$
with c p the modal damping coefficient of the plate, m p the modal mass of the plate. With the fluid layer the damping ratio is given by,
$$\zeta_{pf} = \frac{c_p + c_{ap}}{2\sqrt{k_p\, m_{pf}}} = \frac{c_p}{2\sqrt{k_p\, m_{pf}}} + \zeta_{ap} \qquad (33)$$
By neglecting the kinetic energy of the waves compared to the kinetic energy of the loaded plate, the relation m pf = m p (1 + ρ) gives,
$$\zeta_{pf} = \frac{c_p}{2\sqrt{k_p\, m_p\,(1 + \rho)}} + \zeta_{ap} \qquad (34)$$
Thus, the experimental value of the added damping ratio is
$$\zeta_{ap} = \zeta_{pf} - \zeta_p\, \frac{1}{\sqrt{1 + \rho}} \qquad (35)$$
The comparison (Fig. 8) between experimental and theoretical values of ζ ap shows that above the threshold the amplitude of the theoretical damping is four times greater than the experimental damping.
The source of this discrepancy can be sought in the high sensitivity of the model to the amplitudes of the waves. The relation between the wave amplitude and the acceleration has been determined assuming a locally reacting behaviour of each fluid cell. This hypothesis is valid where the acceleration is uniform: this is the case in the vicinity of an antinode (centre of the plate for the first mode). This assumption may not be valid where an acceleration gradient is present. Moreover, in this transition region, the cell shape is no longer square (Fig. 2).
Nevertheless, note that the damping added by the fluid film is comparable to that obtained with viscoelastic or porous material treatments [START_REF] Nashif | Vibrations damping[END_REF][START_REF] Dauchez | Investigation and modelling of damping in a plate with a bonded porous layer[END_REF].
Conclusion
Damping of a vibrating structure by means of a heavy fluid film subjected to Faraday instability has been studied. It is shown that the added damping may be significant, comparable to that obtained with viscoelastic or porous material treatments.
Fig. 2. Stationary waves at the fluid-air interface (top view of the plate). Driving acceleration W a (0, 0): 13.8 m s -2 .
Fig. 3. Experimental set-up.
Fig. 4. Signals (V) of the acceleration of the plate (dotted line, 31.6 m s -2 / V) and its apparent velocity measured by the laser vibrometer (bold line, 0.025 m s -1 / V); maximum velocity of the plate: 0.039 m s -1 ; water level 0.0046 m; ρ = 0.88; frequency of excitation 65 Hz.
Fig. 5. Influence of the driving acceleration W a (0, 0) on the frequency response function (accelerometer/force) for a water level of 0.0046 m, ρ = 0.88: continuous line, W a (0, 0) = 3.16 m s -2 ; dashed line, W a (0, 0) = 6.32 m s -2 ; dotted line, W a (0, 0) = 12.7 m s -2 ; dashed dotted line, W a (0, 0) = 25.3 m s -2 .
Fig. 6. Influence of the driving amplitude W a (0, 0) on the plate damping for several water levels: continuous line, 0.0 m; dashed line, 4.6 mm; dotted line, 8.0 mm; dashed dotted line, 13.3 mm.
Fig. 7. Experimental evolution of the dimensionless oscillation amplitude α = A/h with the acceleration above the threshold for a given frequency (75 Hz) with water. The two lines correspond to the uncertainty with a confidence of 95% of the mean square line which fits the data.
Fig. 8. Evolution of the damping ratio added by the fluid versus the driving acceleration W a (0, 0): theoretical values (between the two lines: the uncertainties on the measurement of the water waves induce two curves of dimensionless damping with 95% of confidence), experimental values (crosses).
Table 1. Numerical inputs of the model.
Plate
Diameter of the plate (m) d 0.290
Thickness of the plate (m) e 0.001
Material of the plate aluminum
Young's Modulus (Pa) E 7 10 10
Poisson's ratio ν p 0.3
Frequency of the first resonance
of the bare plate (Hz) f 1 100
Frequency of excitation (Hz) f e 75
Fluid
Gravity (ms -2 ) g 9.81
Fluid layer thickness (m) h 0.0046
Kinematic viscosity of water (m 2 s -1 ) ν 10 -6
Density of the fluid (kg m -3 ) ρ f 1000
Wave length of the free surface (m) λ 0.0063
First amplitude of the wave coefficient A ω f [0.0135, 0.0163]
Second amplitude of the wave coefficient B ω f [0.0111, 0.0134]
Table 2. Numerical outputs of the model.
Loaded plate
Stiffness of the plate (N m) Eq. (4) D 8.82
Dimensionless added mass Eq. (30) ρ 0.88
Density of the plate with air loading (kg m -3 ) Eq. (8) ρ s 5230
Modal stiffness (N m -1 ) Eq. (3) k p 1.08 10 5
Equivalent density of the plate with fluid (kg m -3 ) Eq. (6) ρ p 7215
Modal mass (kg) Eq. (7) m p 0.0875
Fluid
Threshold of instablility (ms -2 ) Eq. (13) ˜ c g 8.04
Local damping ratio of the fluid Eq. (17) ζ 0 14.5%
Modal mass of a fluid cell (kg) Eq. (27) m f cell 2.23 10 -6
Modal mass per unit area (kg m -2 ) Eq. (19) m f 2.25 10 -1
Equivalent damping coefficient per unit area (kg m -2 s -1 ) Eq. (16) ĉ 9.69
Damping ratio added by the fluid Eq. (2) ζ ap Fig. 6
ACKNOWLEDGMENTS
Thanks to Lazhar Benyahia from the Laboratoire Polymères, Colloides, Interfaces of Université du Maine (Le Mans, France) for our productive discussions. Thanks to Denis Ritter for the English corrections.
04104031 | en | [
"spi.tron",
"spi"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04104031/file/2023_GDR_SoC2_Pottier_IETR%28poster%29.pdf | Juliette Pottier
Maria Mendez Real
Sébastien Pillement
First analysis and protection of the micro-architecture of a RISC-V core
INSTITUT D'ÉLECTRONIQUE ET DE TÉLÉCOMMUNICATIONS DE RENNES
SEC-V Project: Methodology and organization
• WP 1 - Dynamic code transformation unit: on-the-fly decoding modification/alteration; dynamic instrumentation; instruction set tailoring/customization/adaptation
• WP 2 - Micro-architectural modifications: alternative approaches to traditional caches (scratchpads, TCM); dynamic cache management; inserting execution noise (access instructions, for example)
Micro-architecture analysis (pipeline partially out-of-order in the Execute stage; single issue)
• Vulnerabilities targeted by side-channel attacks and covert channels:
• Kocher, Paul, et al. "Spectre attacks: Exploiting speculative execution." Communications of the ACM (2020)
• Ge, Qian, et al. "A survey of microarchitectural timing attacks and countermeasures on contemporary hardware." Journal of Cryptographic Engineering (2018)
Prototype and evaluation
• Inclusion in the CVA6 core of the OpenHW Group
• Assessment (indicators and metrics) of security levels
Security strategies:
➢ Monitoring: detection of contexts favorable to side-channel attacks and/or covert channels
➢ Dynamic management: micro-architectural defenses and micro-decoder
➢ Deployment of a complete solution on target, while preserving performance
• Data cache: |
04119152 | en | [
"info.info-ai"
] | 2024/03/04 16:41:26 | 2023 | https://enac.hal.science/hal-04119152/file/ATM2023_paper_45.pdf | Chunyao Ma
Sameer Alam
Qing Cai
email: [email protected]
Daniel Delahaye
email: [email protected]
Dynamic Air Traffic Flow Coordination for Flow-centric Airspace Management
Keywords: traffic flow coordination, flow-centric operation, air traffic flow prediction, transformer neural networks
The air traffic control paradigm is shifting from sector-based operations to cross-border flow-centric approaches to overcome sectors' geographical limits. Under this paradigm, effective air traffic flow coordination at flow intersections is crucial for efficiently utilizing available airspace resources and avoiding inefficiencies caused by high demand. This paper proposes a dynamic air traffic flow coordination framework to identify, predict, assess, and coordinate the evolving air traffic flows to enable more efficient flow configuration. Firstly, nominal flow intersections (NFI) are identified through hierarchical clustering of flight trajectory intersections and graph analytics of daily traffic flow patterns. Secondly, spatial-temporal flow features are represented as sequences of flights transiting through the NFIs over time. These features are used to predict the traffic demand at the NFIs during a given future period through a transformer-based neural network. Thirdly, for each NFI, the acceptable flow limit is determined by identifying the phase transition of the normalized flight transition duration from its neighboring NFIs versus the traffic demand. Finally, when the predicted demand exceeds the flow limit, by evaluating the available capacity at different NFIs in the airspace, the flow excess is re-routed onto other NFIs to optimize and re-configure the air traffic demand to avoid traffic overload. An experimental study was carried out in French airspace using the proposed framework based on the ADS-B data in December 2019. Results showed that the proposed prediction model approximated the actual flow values with the coefficient of determination (R 2 ) above 0.9 and mean absolute percentage error (MAPE) below 20%. Acceptable flow limit determination showed that for more than 68% of the NFIs, the flight transition duration increases sharply when the demand exceeds a certain level. The flow excess at an NFI whose demand was predicted to exceed its limit was coordinated, and the potential increase in the flight transition duration caused by the flow excess was avoided.
I. INTRODUCTION
The scalability limit of traditional sector-based Air Traffic Control (ATC) services, i.e., difficulty in subdividing heavily loaded sectors, is becoming a barrier to the sustainable growth of air traffic [START_REF] Gianazza | Learning air traffic controller workload from past sector operations[END_REF]. Researchers have started examining and testing the concept of sectorless ATC, which views the airspace as a whole instead of the current practice of dividing the airspace into small sectors. One primary practice of sectorless airspace is flow-centric operation [START_REF] Zeki | Business models for flight-centric air traffic control[END_REF], which relies on controlling and monitoring flow-based formation and evolution of air traffic, i.e., the management of dynamic flow corridors [START_REF] Reitmann | Advanced quantification of weather impact on air traffic management[END_REF]. It opens the opportunity to distribute air traffic more efficiently in the airspace without being constrained by sector boundaries [START_REF] Undertaking | European atm master plan: the roadmap for delivering high performing aviation for europe: executive summary[END_REF].
Despite the benefits of the flow-centric concept, its implementation has been limited. One primary challenge is the efficient coordination of air traffic flow at the intersections to avoid inefficiencies that may jeopardize flight safety [START_REF] Gerdes | From free-route air traffic to an adapted dynamic mainflow system[END_REF]. Research focusing on the traditional sector-based air traffic coordination, such as sector traffic prediction and flow optimization for workload balancing between sectors [START_REF] Chen | A network based dynamic air traffic flow model for en route airspace system traffic flow optimization[END_REF], no longer adapts the flow-centric operations where coordination is primarily used to avoid potential inefficiencies or conflicts between the intersecting air traffic flows [START_REF] Undertaking | European atm master plan: the roadmap for delivering high performing aviation for europe: executive summary[END_REF]. Therefore, for safe and efficient airspace management under flow-centric operations, it is crucial to develop a flow-centric-based air traffic coordination framework that can dynamically coordinate air traffic flow in advance based on the collaborative identification, prediction, and inefficiency assessment of the evolving air traffic flows. For instance, traffic flow can be re-routed in advance when the predicted flow demand exceeds the acceptable flow limit at the intersections [START_REF] Undertaking | European atm master plan: the roadmap for delivering high performing aviation for europe: executive summary[END_REF].
Effective air traffic flow identification is the cornerstone for flow-centric-based practices regarding traffic flow analysis, prediction, and coordination [START_REF] Gerdes | From free-route air traffic to an adapted dynamic mainflow system[END_REF]. In the literature, air traffic flow has been identified and described in accordance with the airspace configuration, such as the behaviors of groups of flights transiting through area control centers [START_REF] Sridhar | Modeling and optimization in traffic flow management[END_REF], waypoints [START_REF] Murca | Identification, characterization, and prediction of traffic flow patterns in multi-airport systems[END_REF], sectors [START_REF] Ma | Sector entry flow prediction based on graph convolutional networks[END_REF], and airways [START_REF] Zhang | A hierarchical heuristic approach for solving air traffic scheduling and routing problem with a novel air traffic model[END_REF]. Such a characterization of air traffic flow serves the traditional air traffic operations where air traffic control units are geographical sectors and flights have to fly along airways consisting of a set of fixed waypoints. However, flow-centric air traffic management focuses on the flow of air traffic in the airspace from a holistic view disregarding the fixed airways and sectors. It requires air traffic flow identification methods to explore the underlying flow patterns, such as the spatial-temporal evolution of flow locations and structures [START_REF] Olive | Identifying anomalies in past en-route trajectories with clustering and anomaly detection methods[END_REF].
In addition to air traffic flow identification, effective air traffic flow coordination requires the constant viewing of air traffic demand according to the available capacity at the flow intersections. Air traffic flow prediction models in the literature primarily take the time series information of the number of flights at single or multiple geographical locations in the airspace as the input and predict the future number of flights through Linear Dynamic System Models (LDSM) or neural networks such as Long Short-Term Memory (LSTM) [START_REF] Gui | Machine learning aided air traffic flow analysis based on aviation big data[END_REF] and Convolutional Neural Networks (CNNs) [START_REF] Lin | Deep learning based short-term air traffic flow prediction considering temporal-spatial correlation[END_REF]. Such time-series-based methods mainly predict future traffic demand by analyzing the past time series [START_REF] Dalmau-Codina | A machine learning approach to predict the evolution of air traffic flow management delay[END_REF]. Important spatial-temporal flow dynamics were not incorporated, limiting the modeling and prediction accuracy, which is critical for flow-centric operations where real-time decision-making based on accurate data is necessary. Moreover, the traditional sector-based capacity estimation method lacks consideration of the air traffic flow features, such as the Monitor Alert Parameter(MAP) model regarding the hand-off service workload [START_REF] Marr | Controller workload-based calculation of monitor alert parameters for en route sectors[END_REF] and the sector merge/split-based model [START_REF] Gianazza | Learning air traffic controller workload from past sector operations[END_REF]. Therefore, developing flow prediction and acceptance limit identification methods at the flow intersections concerning the evolving flow features is the building block for flow-centric operations.
Information on future air traffic demand and the corresponding acceptable flow limit at the flow intersections enables reconfiguring the air traffic demand transiting through the flow intersections in advance so that air traffic flow can be restricted within a level that does not overload the system excessively [START_REF] Lin | From aircraft tracking data to network delay model: A datadriven approach considering en-route congestion[END_REF]. In traditional sectors, air traffic overload is managed by sector operations such as merging and splitting [START_REF] Liu | A framework for strategic online en-route operations: Integrating traffic flow and strategic conflict managements[END_REF]. Under the flowcentric paradigm, traffic flow density and complexity change over time, rendering static flow control operations underloaded or overloaded during the day. Analogically, dynamic flow coordination according to the evolving airspace conditions, such as traffic flow merge/split/re-routing, gives flow-centric airspace an option to address the anticipated flow excess without compromising the flow demand and overloading the acceptable limit at the flow intersections. Therefore, developing dynamic flow coordination algorithms depending on the time-varying traffic flows is an important enabler of efficient flow management decisions and optimal utilization of airspace resources.
In view of the above analysis, this paper proposes a dynamic air traffic flow coordination framework to identify, predict, assess, and coordinate the evolving air traffic flows to enable more efficient flow-centric airspace management. Firstly, nominal flow intersections (NFI) are identified through hierarchical clustering of flight trajectory intersections and graph analytics of daily traffic flow patterns. Secondly, based on the identified NFIs, spatial-temporal flow features are represented as sequences of flights transiting through the NFIs over time. These features are used to predict the future traffic demand at the NFIs through a transformer-based neural network. Thirdly, for each NFI, the acceptable flow limit is determined by identifying the phase transition of the normalized flight transition duration from its neighboring NFIs versus the traffic demand during different periods. Finally, when the predicted future flow exceeds the flow limit, by evaluating the available capacity at different NFIs, the flow excess is alternated onto other NFIs to re-configure the air traffic demand to avoid traffic overload at the NFIs.
II. PROBLEM DESCRIPTION
This paper focuses on the problem of dynamic air traffic flow coordination at the Nominal Flow Intersections (NFIs) for efficient Flow-centric Airspace Management. This problem can be further decomposed into four sub-problems as shown in Fig. 1: 1) identification of NFIs through graph analysis of air traffic flow patterns; 2) flow representation and prediction based on the spatial and temporal dynamics feature of air traffic flows at the NFIs; 3) flow acceptance limit identification at the NFIs through flow transition efficiency analysis; 4) Flow coordination, i.e., flow-excess re-routing, based on the capacity availability and predicted flow demand at the NFIs.
1) NFI Identification: Given the freedom of airspace users taking direct or user-preferred flight routes, one may argue that flow-centric implies insignificant spatial structure and temporal patterns of the air traffic flow. However, taking into account that all flights have to depart and land at airports, the positions of the airports and the scheduled flights between airports restrict the air traffic to an appropriate pattern of main flows. A traffic flow analysis of French airspace, where the free route airspace (a potential coupled working method to flow-centric operations) has been implemented in nearly 50% of the airspace above flight level 195, shows that the majority of flight trajectories are aggregated as major flows connecting major traffic hubs [START_REF] Ma | Air traffic flow representation and prediction using transformer in flow-centric airspace[END_REF]. Thus, the first sub-problem is identifying the locations and the flow inter-connections of the NFIs through constructing and analysing spatial and temporal patterns of air traffic flows.
2) Flow Representation and Prediction: Most existing methods in the literature represent the spatial-temporal air traffic flow features as a time series of the number of flights at different locations without considering the spatial and temporal dynamics feature within the air traffic flow, such as the spatial and temporal distributions of flights in the airspace. The spatial distribution of flights is vital for determining the air traffic complexity and density, while the temporal distribution is essential for describing the dynamic evolution of air traffic [START_REF] Delahaye | Air traffic complexity map based on linear dynamical systems[END_REF]. They are the primary influencing factors of the future air traffic at the NFIs, without which the prediction model may fail to learn the causal relations between the input air traffic feature and the prediction target, limiting the model's accuracy in predicting future air traffic flows. Therefore, the second sub-problem is the representation of the spatial-temporal flight distribution and accurate prediction of the future number of flights transiting through the NFIs.
3) NFI Flow Acceptance Limit Identification: Due to the increasing demand for air travel, an NFI may be overloaded by excessive air traffic flows to be coordinated. In a traditional sector-based ATC system, the sector capacity is usually quantified as the maximum number of flights that may enter a sector per hour averaged over a sustainable period and is used to manage a safe, orderly, and efficient traffic flow [START_REF] Welch | Sector workload model for benefits analysis and convective weather capacity prediction[END_REF]. Similarly, in flow-centric ATC systems, if the demand at an NFI exceeds an acceptable limit, the efficiencies of air traffic flow transiting through the NFI can be degraded. When an NFI is overloaded, air traffic congestion can be induced, and it will take a longer time for air traffic transiting to the overloaded NFI on the flight paths due to regulatory measures such as vectoring and speed control [START_REF] Baumgarten | The impact of hubbing concentration on flight delays within airline networks: An empirical analysis of the us domestic market[END_REF]. Therefore, the third sub-problem is to identify a flow acceptance limit at each NFI above which extra traffic demand will degrade the flow transition efficiency, i.e., the flight transition duration from neighboring NFIs, to the NFI significantly.
4) NFI Flow Coordination: Through solving the above three sub-problems, the locations, the inter-connectivity, the future traffic demand, and the flow acceptance limit of the NFIs can be determined. With this information in hand, the fourth subproblem is to come up with a flow-centric air traffic excess re-routing algorithm in advance to re-configure the air traffic demand transiting through the NFIs, i.e., re-routing the excessive air traffic flow onto NFIs with spare flow acceptance capability, to avoid exceeding the flow acceptance limit at the NFIs which can cause inefficiencies.
III. METHODOLOGY
A. NFI Identification
This paper identifies the NFIs based on the flight trajectories analysis using ADS-B data, including intersection points clustering, daily flow pattern representation, and graph analysis.
1) Intersection Points clustering & Flow Pattern Representation:
The NFIs in this paper are defined as the positions in the airspace where air traffic flows intersect. This paper proposes to determine the NFIs by clustering the intersection points of flight trajectories to extract the natural groupings of the intersection points. Hierarchical clustering relies on the hierarchical decomposition of the data based on group similarities to find a multilevel hierarchy of clusters, where clusters at one level are joined as clusters at the next level [START_REF] Cai | Hierarchical clustering of bipartite networks based on multiobjective optimization[END_REF]. Air transportation networks are commonly a nodal hierarchy that follows the spoke-hub structure in which air traffic flows range from regional feeders to international hubs [START_REF] Seabra | Determinants of brazilian international flights: The role of hub-and-spoke and infrastructure variables[END_REF]. Therefore, this paper adopts the hierarchical clustering algorithm to discover the NFIs for flow representation.
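A minimal sketch of this clustering step is given below, assuming scikit-learn; the Ward linkage and the use of cluster centroids as NFI positions are assumptions, not the authors' exact settings.

```python
# Sketch of the NFI extraction step: agglomerative (hierarchical) clustering of
# the daily trajectory-intersection points, with each NFI taken as the centroid
# of its cluster.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def extract_nfis(intersections_xy, n_clusters):
    """intersections_xy: (N, 2) array of intersection coordinates (e.g. projected lon/lat)."""
    labels = AgglomerativeClustering(n_clusters=n_clusters, linkage="ward").fit_predict(intersections_xy)
    centroids = np.array([intersections_xy[labels == c].mean(axis=0) for c in range(n_clusters)])
    return labels, centroids
```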
To determine the optimal number of clusters that should be identified by the clustering algorithm, this paper proposes to model the daily air traffic flow patterns as graphs based on the clustering outcome. The constructed graphs across different days are further investigated and compared to determine the best-fitted number of NFIs for flow pattern representation. The daily air traffic flow pattern is represented as a weighted graph G = (V, E), where V is the set of nodes denoting the air traffic flow components, i.e., the NFIs. The flow connectivity between the nodes can be described by the weighted edges E. The weight of each edge is quantified as the air traffic volume transiting through it, i.e., the number of flights whose trajectories consecutively cross the two nodes connected by the edge.
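A possible construction of such a daily graph, assuming networkx and that each flight trajectory has already been registered as a sequence of NFI indices (Sec. III-B.1):

```python
# Sketch of the daily flow-pattern graph: nodes are the NFIs, and the weight of
# an edge counts the flights whose registered trajectories cross the two NFIs
# consecutively.
import networkx as nx

def build_flow_graph(nfi_positions, registered_trajectories):
    """registered_trajectories: iterable of NFI-index sequences, one per flight."""
    G = nx.Graph()
    for idx, pos in enumerate(nfi_positions):
        G.add_node(idx, pos=tuple(pos))
    for seq in registered_trajectories:
        for u, v in zip(seq[:-1], seq[1:]):         # consecutive NFI pairs along the flight
            if G.has_edge(u, v):
                G[u][v]["weight"] += 1
            else:
                G.add_edge(u, v, weight=1)
    return G
```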
2) Graph Analysis: The consistent and dependable performance of airspace users is an essential requirement for improving ATM system predictability [START_REF] Murc ¸a | Flight trajectory data analytics for characterization of air traffic flows: A comparative analysis of terminal area operations between new york, hong kong and sao paulo[END_REF]. Therefore, the representation of traffic flow patterns should describe the behavioral consistency of air traffic flows in the airspace even though there are daily alternations in the geographical positions of the NFIs and the traffic flow connectivity between NFIs. Under this consideration, this section proposes to determine the optimal number of clusters by modeling and analyzing the consistency of daily flow patterns versus the changes in the number of clusters. The flow pattern consistency is evaluated from two perspectives: a) geographical consistency in node (NFI) locations; b) structural consistency in flow connectivity.
a) Geographical Consistency in NFI locations: To measure the geographical consistency in the daily node locations, a nearest neighbor-based analysis is conducted to match the nodes on the temporal horizon. Given a number of n nodes in the graph Based on the distances, the nearest neighbouring node v ai k+1 of node v i k for i = 1, ..., n can be determined. Similarly, the nearest neighbouring node v bj k of node v j k+1 for j = 1, ..., n can be determined with the same process. b j is the index of the identified nearest node of v j k+1 in V k . Therefore, two sets of matched node pairs between the two graphs can be obtained:
G k constructed for day D k , let V k = {v 1 k , v 2 k , ..., v i k , ..., v n k } represent the set of nodes. Similarly, let V k+1 = {v 1 k+1 , v
S 1 : (v 1 k , v a1 k+1 ), (v 2 k , v a2 k+1 ), ..., (v n k , v an k+1 ) and S 2 : (v b1 k , v 1 k+1 ), (v b2 k , v 2 k+1 ), ..., (v bn k , v n k+1 ).
Then, the geographical consistency of node locations is quantified as the number of mutually matched nodes divided by the total number of nodes, which is formulated as
$$gc_1 = |S_1 \cap S_2|\, /\, n,$$
$$gc_2 = \frac{\sum_{i=1}^{l-1} \sum_{j=i+1}^{l} \min\!\left( w^{ij}_{c_k},\, w^{ij}_{c_{k+1}} \right)}{\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \left( w^{ij}_{k} + w^{ij}_{k+1} \right)} \qquad (1)$$
Fig. 2 shows a diagram of the graph analysis to determine the node (NFI) location and flow structure consistency. The first column of the figure shows the graphs G 1 and G 2 constructed for the air traffic flow on day 1 and day 2, respectively. The second column of the figure shows the enlarged portions of G 1 and G 2 . The nodes, i.e., V 1 and V 2 , are marked by solid gray circles. The third column of the figure depicts the mutually paired nodes C 1 and C 2 in the node location consistency analysis of graphs G 1 and G 2 . C 1 and C 2 are marked by the red solids, and the paired nodes are labeled by the same number. The fourth column of Fig. 2 displays the two sub-graphs of G 1 and G 2 formed by C 1 and C 2 . The numbers on the edges are the edge weight W c 1 and W c 2 . The fifth column of Fig. 2 shows the determination of mutual connections in the two sub-graphs, i.e., min{W c 1 , W c 2 }. min{W c 1 , W c 2 } represents taking the smaller value of the weights on the corresponding edges in the two sub-graphs. For instance, on day 1, the weight on the edge connecting nodes 44 and 166 is 380, meaning there are 380 flights transiting through this edge. While on day 2, the edge weight is 416. Thus, the mutual flow connection on this edge during the two days is considered 380. Eventually, the flow structure consistency is calculated by summing up the mutual connections in the two sub-graphs and dividing this value by the total flow connections in G 1 and G 2 .
Calculating the geographical consistency gc 1 and the structural consistency gc 2 versus the varying number of clusters, the "saddle point" on the curves can be adopted as the optimal number of clusters for traffic flow representation. Consequently, the NFIs can be determined by hierarchical clustering based on the determined number of clusters.
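A compact sketch of the two consistency measures for a pair of daily graphs follows; the Euclidean distance between node positions and the array layout are assumptions.

```python
# Sketch of the day-to-day consistency measures: gc1 counts mutual nearest-neighbour
# matches between the node sets of two daily graphs, gc2 compares the flow on the
# edges of the matched sub-graphs.
import numpy as np

def gc1_gc2(pos_k, pos_k1, W_k, W_k1):
    """pos_*: (n, 2) node coordinates; W_*: (n, n) weighted adjacency matrices."""
    d = np.linalg.norm(pos_k[:, None, :] - pos_k1[None, :, :], axis=-1)
    a = d.argmin(axis=1)                 # nearest node in day k+1 for each node of day k
    b = d.argmin(axis=0)                 # nearest node in day k for each node of day k+1
    mutual = [i for i in range(len(pos_k)) if b[a[i]] == i]
    gc1 = len(mutual) / len(pos_k)
    # matched sub-graphs: rows/columns of the mutually paired nodes, in the same order
    Ck, Ck1 = np.array(mutual), np.array([a[i] for i in mutual])
    Wc_k, Wc_k1 = W_k[np.ix_(Ck, Ck)], W_k1[np.ix_(Ck1, Ck1)]
    gc2 = np.minimum(Wc_k, Wc_k1).sum() / (W_k.sum() + W_k1.sum())
    return gc1, gc2
```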
B. Flow Representation and Prediction
The above section illustrates the methodology for identifying the NFIs which characterize the nominal air traffic flow intersections across different days of traffic. This section will describe the representation of air traffic flow features and the air traffic flow prediction model. This paper proposes to describe air traffic flow dynamics using the spatial-temporal flights distribution in the airspace. More specifically, the traffic flow feature in the airspace is represented as a text paragraph describing the sequences of flights transiting through the NFIs over time. The proposed method for flow feature representation and prediction consists of three steps: trajectory registration, flow dynamics description using spatial-temporal (S-T) flow distribution, and flow prediction at NFIs.
1) Trajectory Registration: With the identified NFIs, a flight trajectory can be represented as a sequence of the NFIs. The objective of trajectory registration is to search for a sequence of NFIs that can optimally approximate the original trajectory. This objective is formulated as finding the minimum dissimilarity between the original trajectory and a representative trajectory constituted by a subset of the NFIs. More details regarding the trajectory registration can be found in [START_REF] Ma | Air traffic flow representation and prediction using transformer in flow-centric airspace[END_REF].
2) Flow Dynamics Description Using S-T Flight Distribution: After trajectory registration, we can understand at what time a flight has transited through which NFI. The traffic flow dynamics in the airspace is represented as the spatial-temporal flight distribution, described by "paragraph" whose "sentences" are the sequences of flights transiting through the NFIs. In this paper, a flight is referred to by its callsign. Even though the flight callsign may be changed when airlines introduce updated schedules, in most cases, a flight callsign remains fixed for a particular flight operating on the same route regularly during a relatively long period (such as a season or a few months).
Use t_0 and t_1 to denote the start and end times of a period. Let F_{P_k} denote the traffic flow at NFI v_k during t_0 and t_1. F_{P_k} can be described as a sequence of flights (callsigns) ordered by the time at which the flights pass v_k, i.e., F_{P_k}: f_k^1, f_k^2, ..., f_k^i, ..., f_k^{m_k}, with f_k^i denoting the i-th flight in the sequence and m_k denoting the total number of flights passing v_k. If no flights transited through an NFI, the callsign sequence for this NFI is replaced by the phrase "No flights". Analogizing F_{P_k} as a "sentence" depicting the flow context at v_k, the combination of "sentences" during t_0 and t_1 constitutes a "paragraph" description of the flow context in the entire airspace. The paragraphs for various periods will be used as inputs to the transformer-encoder-based model, introduced in the next section, to learn the contextual relations between air traffic flows and make predictions about the future air traffic flow.
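A minimal sketch of building one such "paragraph" from raw crossing records; the record layout and the sentence delimiter are assumptions.

```python
# Sketch of the textual flow representation: one "sentence" of call signs per NFI
# for the chosen period, joined into a "paragraph" for the whole airspace.
def flow_paragraph(crossings, n_nfis, t0, t1):
    """crossings: iterable of (callsign, nfi_index, time); returns one string per period."""
    sentences = []
    for k in range(n_nfis):
        passed = sorted((t, cs) for cs, v, t in crossings if v == k and t0 <= t < t1)
        sentences.append(" ".join(cs for _, cs in passed) if passed else "No flights")
    return " ; ".join(sentences)        # NFI "sentences" separated by delimiters
```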
3) Flow Prediction at NFIs: The prediction model is based on the neural network structure developed by the authors in [START_REF] Ma | Air traffic flow representation and prediction using transformer in flow-centric airspace[END_REF], whose components are tokenization, embedding, transformer encoder blocks, and a fully-connected layer to produce the prediction outcome.
Given an input sequence, i.e., including the time and the flight callsign sequences at the NFIs, word-tokenization [START_REF] Sun | Spectral-spatial feature tokenization transformer for hyperspectral image classification[END_REF], which splits the data based on natural breaks and meaning, such as time (number), callsigns (words), and sequence separations (delimiters), is applied to convert the input into a list of integers that can be embedded into a vector space. After tokenization, the token embedding layer converts the list of integers into a list of vectors. For the model to use the order of the elements in the sequence, positional embeddings, which contain information on the relative or absolute position of the elements, are added to the token embeddings as the input to the transformer encoders. The embeddings are trained jointly with the rest of neural network. Back-propagation is carried through all the network layers up to the embeddings that are updated as other parameters. After embedding the elements in the input sequence, each of them flows through the transformer encoder blocks to encode the feature into a meaningful context tensor representation. The transformer encoder is composed of a stack of N e encoder blocks. The output from the stack of N e encoders is then forwarded to a fully connected layer to obtain the flow prediction results for different NFIs.
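A minimal PyTorch sketch of such an architecture is given below; the mean-pooling of the encoder output, the hyper-parameters and the tokenizer are assumptions, not the exact configuration used by the authors.

```python
# Minimal PyTorch sketch of the prediction model: token + positional embeddings,
# a stack of transformer encoder blocks, and a fully connected head producing
# one demand value per NFI.
import torch
import torch.nn as nn

class FlowTransformer(nn.Module):
    def __init__(self, vocab_size, n_nfis, d_model=128, n_heads=8, n_encoders=4, max_len=2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_encoders)
        self.head = nn.Linear(d_model, n_nfis)        # one predicted flow value per NFI

    def forward(self, tokens):                         # tokens: (batch, seq_len) integer ids
        positions = torch.arange(tokens.size(1), device=tokens.device).unsqueeze(0)
        x = self.token_emb(tokens) + self.pos_emb(positions)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))                # pooled context -> (batch, n_nfis)
```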
C. NFI Flow Acceptance Limit Identification
To identify the flow acceptance limit at an NFI, this paper proposes to use the flight transition duration from neighboring NFIs as the overload indicator. When the demand at an NFI is above the acceptable limit during a period, air traffic congestion can happen, and it will take a significantly larger time cost for air traffic transiting to the overloaded NFI on their flight paths due to regulatory measures such as vectoring and speed control.
Assume there is a number of N v NFIs identified, denoted as: v 1 , v 2 , ..., v i , ..., v Nv . With the time information of flights reaching the NFIs during the s th period, the number of flights transiting to NFI v i from NFI v j can be determined, denoted as n s ij . The set of transition duration for the n s ij flights can be denoted as H s ij = {h s k ij |k = 1, 2, ..., n s ij }, where h s k ij represents the transition duration of the k th flight among the n s ij flights. Considering air traffic flows can evolve daily in terms of locations of the NFIs, the traffic flow structure, or unexpected events such as mechanical issues and air traffic control disruptions, the duration H s ij is normalized by the daily minimum duration to reduce the effects of daily fluctuations in air traffic flow:
$$NH^{s}_{ij} = \frac{H^{s}_{ij}}{H^{D_s}_{ij}} \qquad (2)$$
where H Ds ij represents the minimum flight transition duration from v j to v i on the day of the period s. By doing so for all NFIs that are connected to v i , the normalized transition duration to v i for the period s can be obtained:
N H s i = {N H s ij |j ∈ N i },
where N_i is the neighbor set of v_i. The total number of flights transiting to v_i can consequently be determined as $n^s_i = \sum_{j \in N_i} n^s_{ij}$. For all periods s ∈ {1, 2, ..., N_T} in the traffic data, the set of flight transition durations NH_i = {NH^s_i | s ∈ {1, 2, ..., N_T}} as well as the set of corresponding traffic demands n_i = {n^s_i | s ∈ {1, 2, ..., N_T}} can be determined through the above procedures. By fitting the demand values to the transition duration values, a set of demand values X = {1, 2, ..., N} and the corresponding transition duration values Y = {y_1, y_2, ..., y_N} can be obtained. The flow acceptance limit is determined by identifying the abrupt changes on the curve of demand versus transition duration, which is formulated as [START_REF] Killick | Optimal detection of changepoints with a linear computational cost[END_REF]:
$$obj = \arg\min_{k} \; k \cdot \mathrm{var}([y_1, \dots, y_k]) + (N - k) \cdot \mathrm{var}([y_{k+1}, \dots, y_N]) \qquad (3)$$
By minimizing Eq. 3, the flow acceptance limit l_i at v_i can be determined as k; flow demand above this limit leads to an abrupt increase in the transition duration of flights to the NFI v_i.
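The following short sketch illustrates, on synthetic data, the variance-based split of Eq. (3) used to detect the abrupt change in transition duration; the demand and duration values below are made up for illustration.

```python
import numpy as np

def acceptance_limit(y):
    """Return k minimizing k*var(y[:k]) + (N-k)*var(y[k:]) as in Eq. (3)."""
    N = len(y)
    best_k, best_cost = None, np.inf
    for k in range(2, N - 1):
        cost = k * np.var(y[:k]) + (N - k) * np.var(y[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic normalized transition durations: flat until demand 18, then a sharp rise.
np.random.seed(0)
demand = np.arange(1, 31)
duration = np.where(demand <= 18, 1.1, 1.1 + 0.4 * (demand - 18)) + 0.02 * np.random.randn(30)
print("estimated flow acceptance limit:", acceptance_limit(duration))
```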
D. NFI Flow Coordination
To re-route the flows at an NFI, a pre-requisite is the knowledge of the main flow paths crossing the NFI, i.e., the main flows transiting through the NFI and their corresponding paths. This paper identifies the main flow paths from the flight data based on the typical sequences of NFIs flown by the flights. Upon identifying the main flows transiting through an NFI, the next step is to predict the number of flights in each main flow during a future period. The prediction model in this step utilizes the same input features and neural network architecture as described in Section III-B, except that the model output is the main flow values at the target NFI instead of the flow demand values of different NFIs.
With the knowledge of the path and predicted demand of each main flow transiting through the NFI, the third step of flow coordination is to re-configure the flow excess by re-routing the main flows so that overload at the NFIs can be avoided. Let f_i represent the predicted flow demand at the NFI v_i. Assume that the predicted flow demand f* at the NFI v* exceeds the acceptance limit l*. Let A be the adjacency matrix of G, with entry a_ij = 1 if NFIs v_i and v_j are connected by traffic flows, and 0 otherwise. Given that v* is predicted to be overloaded and the excess flow needs to be re-routed to other NFIs, a_{i*} and a_{*j} are set to 0. Assume a number of T main flows are identified for v*. Let D = {d_1, ..., d_t, ..., d_T} be the vector of the numbers of re-routed flights in the T flows, i.e., d_t is the number of re-routed flights in the t-th flow. Assume that for the t-th flow there are at most R_t accessible paths, denoted P^1_t, P^2_t, ..., P^r_t, ..., P^{R_t}_t. The flow on path P^r_t is represented by f_{tr}. Define a function δ_(tr,ij), with δ_(tr,ij) = 1 if the link between v_i and v_j is on the path P^r_t, and δ_(tr,ij) = 0 otherwise. Let d_ij and x_ij represent the great circle distance and the amount of flow between v_i and v_j respectively. With all the above notations and definitions, the flow coordination model is formulated as:
$$\begin{aligned}
obj = \min_{X = \{x_{ij}\},\, i,j = 1,\dots,N_v} \quad & \sum_{i=1}^{N_v} \sum_{j=1}^{N_v} a_{ij}\, x_{ij}\, d_{ij} \\
\text{s.t.} \quad & f_{tr} \ge 0 \\
& \textstyle\sum_{r=1}^{R_t} f_{tr} = d_t \\
& \textstyle\sum_{t=1}^{T} d_t = f^* - l^* \\
& \textstyle\sum_{j=1}^{N_v} a_{ij}\, x_{ij} \le l_i - f_i \\
& \textstyle\sum_{t=1}^{T} \sum_{r=1}^{R_t} \delta_{(tr,ij)}\, f_{tr} = x_{ij}
\end{aligned} \qquad (4)$$
Optimizing the above function re-routes the traffic flow excess at v* so that the additional flight distance is minimized without exceeding the acceptance limit of any NFI.
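As a rough illustration, the toy sketch below solves a simplified, continuous-relaxation instance of this re-routing problem with scipy.optimize.linprog. It keeps only the path variables f_tr and replaces the per-link coupling x_ij by an aggregate spare-capacity bound per alternative path; the distances, capacities and the excess value are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: f[t][r], flights of main flow t re-routed on alternative path r,
# flattened as [f_11, f_12, f_21, f_22, f_31, f_32] (3 main flows, 2 paths each).
extra_dist = np.array([120., 180., 95., 150., 60., 110.])  # hypothetical extra km per re-routed flight
spare_cap = np.array([4., 6., 3., 5., 6., 4.])              # hypothetical spare capacity per path

excess = 5.0
# Equality: total re-routed flights over all paths equals the excess at the overloaded NFI.
A_eq = np.ones((1, 6))
b_eq = np.array([excess])
# Inequality: each alternative path cannot absorb more than its spare capacity (l_i - f_i).
A_ub = np.eye(6)
b_ub = spare_cap

res = linprog(c=extra_dist, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
print("re-routed flights per path:", res.x)   # the solver favours the cheapest feasible path(s)
```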
IV. EXPERIMENTAL STUDY
To verify the efficacy of the proposed framework, an experimental study has been carried out on the French airspace using one-month ADS-B data from December 1 to December 31, 2019, comprising 158,856 flights. This study focuses on the en-route air traffic above 10,000 ft. The target in this experimental study is set as follows: identify the flow acceptance limits at the NFIs during a 30-minute interval, predict the number of flights that will transit through the NFIs in the next 30 minutes, and re-route the flow excess to avoid overloading the NFIs.
A. NFI Identification
By finding intersections of flight trajectories in the French airspace and clustering the intersections on a daily basis, a graph representation of the daily air traffic flow pattern can be obtained.
The geographical consistency gc_1 and the structural consistency gc_2 are calculated against a set of different cluster numbers ranging from 100 to 1500. A "saddle point" is observed for gc_1 and gc_2 around a cluster number of 605. Therefore, this paper takes 605 as the number of clusters to be identified by the hierarchical clustering algorithm. The centers of the identified clusters are taken as the NFIs. Fig. 3 shows the graph representation for the one-month air traffic structure with 605 NFIs. We can see that this graph is able to depict the nodal hierarchy of air traffic flows ranging from regional feeders to international hubs, such as Paris and Geneva. The en-route air traffic flows are organized as a series of "spokes" connecting the traffic hubs or connecting outlying areas to a hub area.

Fig. 6 presents the traffic flow prediction result on eight example NFIs in the airspace from 00:00 to 23:59 on Dec 24, 2019. The solid blue lines show the true number of flights passing the NFIs, while the red lines show the predicted values using the proposed prediction method. We can observe from Fig. 6 that the proposed method consistently gives forecasts in close proximity to the actual flow values for different NFIs over the future 30-minute horizon. Furthermore, the prediction method can capture sharp changes in the air traffic demand: when the actual demand increases or decreases abruptly, the prediction model is still able to give a close prediction of the true value. Overall, the proposed prediction model approximates the actual flow values on the identified NFIs with a coefficient of determination (R^2) above 0.9 and a mean absolute percentage error (MAPE) below 20%.
D. NFI Flow Excess Re-routing
Based on the identified flow acceptance limit and the predicted flow during the 30-minute interval, we can observe when an NFI is overloaded and what the flow excess is. It can be observed from Fig. 6 that, for the NFI whose prediction and acceptable flow limit are shown on the bottom-left, the flow is predicted to exceed the acceptable limit from 12:10 to 12:40 on the day. The value of the flow excess is five, meaning five flights should be rerouted to avoid overloading the NFI. This NFI is shown in Fig. 7 and marked by the red node. Three main flows are transiting through the NFI. The green, black, and blue lines visualize flight trajectories in the three main flows. The predicted flights in the three main flows t = 1, t = 2, and t = 3, are 5, 4, and 8, respectively.
With the knowledge of the predicted demand, f* = 23, the acceptable flow limit, l* = 18, and the number of flights in each main flow, this step re-routes the traffic in the main flows onto alternative routes consisting of other NFIs with spare capacity. Although exceptions exist due to traffic congestion or weather conditions, flights in the en-route phase usually prefer the routes with shorter flight distance due to fuel consumption and en-route charges [START_REF] Soler | En-route optimal flight planning constrained to pass through waypoints using minlp[END_REF]. Therefore, the alternative paths of each main flow, i.e., P^r_t for t = 1, 2, 3 in this experiment, are set as the two shortest paths, i.e., R_t = 2 for t = 1, 2, 3, between its origin NFI and destination NFI without transiting through the overloaded NFI. The paths are identified based on the connectivity among the NFIs of the main flow. Therefore, in this case Eq. 4 can be specified as follows:
$$\begin{aligned}
obj = \min_{X = \{x_{ij}\},\, i,j = 1,\dots,605} \quad & \sum_{i=1}^{605} \sum_{j=1}^{605} a_{ij}\, x_{ij}\, d_{ij} \\
\text{s.t.} \quad & f_{tr} \ge 0 \\
& \textstyle\sum_{r=1}^{2} f_{tr} = d_t \\
& \textstyle\sum_{t=1}^{3} d_t = 5 \\
& \textstyle\sum_{j=1}^{605} a_{ij}\, x_{ij} \le l_i - f_i \\
& \textstyle\sum_{t=1}^{3} \sum_{r=1}^{2} \delta_{(tr,ij)}\, f_{tr} = x_{ij}
\end{aligned} \qquad (5)$$
Fig. 8 and Table I show the flow excess re-routing result. Five flights in the third main flow, marked by solid blue lines, are re-routed to its shortest alternative path, i.e., f^1_3 = 5, shown by the blue dashes. Note that the predicted demand is 23, and the flow limit is 18. The normalized flight transition duration, scaled between 0 and 1, to the NFI under a demand of 18 flights is 0.27, while under a demand of 23 it is 1. Thus, the five-flight excess in the traffic flow can potentially lead to a 270% increase in the flight transition duration. Through re-configuring the flow demand at the NFIs, the excess is reduced from 5 to 0, and the anticipated flow overload at the NFI is avoided in advance, without causing abrupt increases in the flight durations, by utilizing the spare capacity of other underloaded NFIs.

V. CONCLUSION

Firstly, NFIs were identified by clustering the intersections of flight trajectories, choosing the optimal number of clusters. A graph analysis was proposed to identify the clustering outcome, which optimally sustained the consistency in the flow patterns. Secondly, based on the identified NFIs, the air traffic flow dynamics features were represented by the spatial-temporal flight distribution, characterized by a textual paragraph recording the time and the sequences of flights passing each NFI. Then, a transformer-based model was adopted to learn the text-enriched flow features and predict future air traffic at the NFIs. Thirdly, for each NFI, the acceptable flow limit was determined by identifying the transition point of the normalized flight transition duration from its neighboring NFIs versus the traffic demand. Finally, based on the identified acceptable flow limit and the predicted demand, the flow excess at an NFI was re-routed onto other NFIs to optimize and reconfigure the air traffic demand and avoid sharp increases in flight durations. An experimental study was carried out in French airspace using the proposed framework based on one-month ADS-B data in December 2019. Results showed that the proposed prediction model approximated the actual flow values with a coefficient of determination (R^2) above 0.9 and a mean absolute percentage error (MAPE) below 20%. Moreover, the acceptable flow limit determination showed that for above 68% of the NFIs, the flight transition duration increased sharply when the demand exceeded a certain level. Flow overload was avoided by re-routing the detected flow excess to other NFIs with spare capacity.
Research in this paper provides a basic framework for flowcentric air traffic flow coordination. In the future, constraints from both perspectives of air traffic control services and airspace users, such as Special Use Airspace (SUA), weather conditions, airline schedules, and users' preferred routes, can be incorporated to refine this framework for more effective and efficient coordination of air traffic flow. For instance, SUA and weather conditions can be adapted to produce a more accurate and dynamic estimation of the flow acceptance limit. At the same time, airline schedules and users' preferred routes can assist the proposed flow coordination algorithm to re-configure the flow demand to the preferences of airspace users and airlines.
Fig. 1: Conceptual diagram of the proposed dynamic air traffic flow coordination framework, including: 1) NFI identification through graph analysis of air traffic flow patterns; 2) flow representation and prediction based on the spatial-temporal flow distribution; 3) NFI flow acceptance limit identification through flow transition efficiency analysis; 4) flow coordination, i.e., flow-excess re-routing, based on the capacity availability and predicted flow demand at the NFIs.
where |S_1 ∪ S_2| represents the number of node pairs in the union of S_1 and S_2. It will be denoted as l in the rest of the paper. b) Structural Consistency in Flow Connectivity: Upon determining the geographical consistency in NFI locations, the next step is to quantify the structural consistency in the daily air traffic flow connectivity between the NFIs. Let S = S_1 ∪ S_2 represent the set of paired nodes from graph G_k and graph G_{k+1} based on NFI location consistency, where C_k = {c^1_k, c^2_k, ..., c^l_k} denotes the nodes in S from graph G_k and C_{k+1} = {c^1_{k+1}, c^2_{k+1}, ..., c^l_{k+1}} denotes the corresponding paired nodes from G_{k+1}. Let e^{ij}_{c_k} represent the edge connecting nodes c^i_k and c^j_k, let w^{ij}_{c_k} represent the weight on it, and let W_{c_k} = {w^{ij}_{c_k} | 1 ≤ i < j ≤ l} represent the entire set of edge weights. The structural consistency in the air traffic flow patterns is measured by the mutual flow connectivity in the two graphs. More specifically, with the sets of paired nodes C_k and C_{k+1} from G_k and G_{k+1} respectively, the flow structure consistency is evaluated by the ratio of the mutual flow connections between the sub-graphs characterized by nodes C_k and C_{k+1} compared to the union of flow connections in G_k and G_{k+1}. It is formulated as:
Fig. 2: The diagram for graph analysis of node (NFI) location consistency and flow structure consistency. G_1 and G_2 are the graphs constructed for the air traffic flow on day 1 and day 2, respectively. The nodes are marked by solid gray circles. The red solids, i.e., C_1 and C_2, in the graphs denote the nodes showing location consistency across the two days. W_{c_1} and W_{c_2} are the edge weights in the two sub-graphs of G_1 and G_2 formed by C_1 and C_2. The structural consistency is characterized by the mutual connections in the two sub-graphs, i.e., min{W_{c_1}, W_{c_2}}, compared to the total flow connections in G_1 and G_2.
Fig. 3: Graph representation for the one-month air traffic flow using 605 NFIs.
Fig. 4: Comparison of the identified NFIs for two different days.
Fig. 3 presents the NFIs from a spatial structural perspective, and Fig. 4 depicts the temporal dynamics in the NFIs in describing air traffic flows during different days. Fig. 4 shows the identified NFIs for two days, yellow dots for day one and red dots for day two. The dots' sizes are proportional to the traffic volume transiting through the corresponding NFIs. It can be observed that the NFIs show consistent patterns in geographical distribution and traffic volume across different days, although
Fig. 5: Flight transition duration versus the flow demand on eight example NFIs. The blue circles show the observations from the traffic data, the solid red lines show the third-degree polynomial fitting of the observations, and the pink dashes bound the 95% confidence intervals of the fitting. The solid black line indicates the identified acceptance limit of the NFIs.
Fig. 6: Flow prediction result on Dec 24, 2019 for eight example NFIs. The solid blue lines show the true number of flights passing the NFIs, while the red lines show the predicted value.
Fig. 7: Three main flows, depicted by the green, blue, and black lines, transiting through the overloaded NFI (red node).
Fig. 8: Flow excess re-routing results. Five flights in the original flow marked by solid blue lines are re-routed to the path shown by the blue dashes.
Let V_{k+1} = {v^1_{k+1}, v^2_{k+1}, ..., v^j_{k+1}, ..., v^n_{k+1}} represent the set of nodes in the graph G_{k+1} constructed for day D_{k+1}. For each node v^i_k in V_k, the proposed algorithm searches for its nearest node v^{a_i}_{k+1} in V_{k+1}, where a_i is the index of the identified nearest node of v^i_k in V_{k+1}. By representing node v^i_k's latitude and longitude as (φ^i_k, λ^i_k) and node v^{a_i}_{k+1}'s as (φ^i_{k+1}, λ^i_{k+1}), the nearest neighbour of node v^i_k in V_{k+1} is identified according to the great circle distance.
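A minimal sketch of this nearest-neighbour pairing by great-circle (haversine) distance is given below; the NFI coordinates are hypothetical and only serve as an example.

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    """Haversine great-circle distance in km (angles in degrees)."""
    p1, p2 = np.radians([lat1, lat2])
    dphi, dlmb = np.radians(lat2 - lat1), np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r_earth_km * np.arcsin(np.sqrt(a))

# Hypothetical NFI locations (lat, lon) for day k and day k+1.
V_k  = [(48.7, 2.4), (45.7, 5.1), (43.6, 1.4)]
V_k1 = [(45.8, 5.0), (48.6, 2.5), (44.8, -0.6)]

pairs = []
for i, (la, lo) in enumerate(V_k):
    dists = [great_circle(la, lo, lb, lob) for (lb, lob) in V_k1]
    pairs.append((i, int(np.argmin(dists)), min(dists)))
print(pairs)  # (index in V_k, index a_i of the nearest node in V_{k+1}, distance in km)
```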
TABLE I. Flow excess re-routing result (the "excess" columns give the flow excess at the overloaded NFI, before and after re-routing).

flow   | f_t^0 before | excess | f_t1 | f_t2 | d_t | f_t^0 after | excess
t = 1  |      5       |        |  0   |  0   |  0  |      5      |
t = 2  |      4       |   5    |  0   |  0   |  0  |      4      |   0
t = 3  |      8       |        |  5   |  0   |  5  |      3      |
ACKNOWLEDGMENT
This research is supported by the National Research Foundation, Singapore, and the Civil Aviation Authority of Singapore, under the Aviation Transformation Programme. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and the Civil Aviation Authority of Singapore. The authors would like to thank professor Patric Senac, ENAC, for hosting the first author at ENAC, Toulouse, where part of this research was conducted. |
04119345 | en | [
"info"
] | 2024/03/04 16:41:26 | 2023 | https://inria.hal.science/hal-04119345/file/DelaunayAsLexicographicMinimum%20%281%29.pdf | David Cohen-Steiner
email: [email protected]
André Lieutier
email: [email protected]
Julien Vuillamy
email: [email protected]
Delaunay and regular triangulations as lexicographic optimal chains
Keywords: Delaunay, Regular triangulations, Minimal chains, Lexicographic, Simplicial
Mathematics Subject Classification (2010): MSC 52C07, MSC 52C99
We introduce a total order on n-simplices in the n-Euclidean space for which the support of the lexicographic-minimal chain with the convex hull boundary as boundary constraint is precisely the n-dimensional Delaunay triangulation, or in a more general setting, the regular triangulation of a set of weighted points. This new characterization of regular and Delaunay triangulations is motivated by its possible generalization to submanifold triangulations as well as the recent development of polynomial-time triangulation algorithms taking advantage of this order.
Introduction
Besides their elegant mathematical properties [11,[START_REF] Aurenhammer | Voronoi diagrams -a survey of a fundamental geometric data structure[END_REF][START_REF] Rajan | Optimality of the Delaunay triangulation in R d[END_REF], Delaunay triangulations, and more generally regular triangulations, are intensively used in industry, being the basis of many of the meshing algorithms for geometry processing and engineering applications [START_REF] Cheng | Delaunay mesh generation[END_REF].
Related work: regular triangulations and their optimality properties
The usual definition of Delaunay triangulation considers a finite set of points P ⊂ R n that is in general position, by which it is meant that no n + 1 points of P belong to a same hyperplane and no n + 2 points of P belong to a same sphere.
Given a finite set P of at least n + 1 points in R n in general position, the Delaunay triangulation of P is then defined as the set of n-simplices σ ⊂ P whose circumscribed sphere contains no points in P \ σ. Equivalently, it can be defined as the dual of the Voronoi diagram of P. Associating a weight µ i to each point P i in P, the Laguerre-Voronoi diagram of P, alternatively called power diagram, is a generalization of the Voronoi diagram. Its dual is called regular triangulation of the set of weighted points and coincides with the Delaunay triangulation when all weights µ i are equal.
Delaunay triangulations satisfy several optimality criteria. It has been shown [START_REF] Rajan | Optimality of the Delaunay triangulation in R d[END_REF] and [3, section 17.3.4] that "Among all triangulations of a set of points in R 2 , the Delaunay triangulation lexicographically maximizes the minimum angle, and also lexicographically minimizes the maximum circumradii". We call bounding radius of a simplex the radius of the smallest ball containing it. In [3, section 17.3.4], the grain of a triangulation T of points in the plane is defined as the maximum of the bounding radii of all triangles of T . Then [START_REF] Boissonnat | Algorithmic geometry[END_REF]Theorem 17.3.9] asserts that, given a point set P ⊂ R 2 , the Delaunay triangulation minimizes the grain among all triangulations of P. This last fact is also a consequence of our main theorem when the dimension is 2.
The grain minimization property is a necessary but not sufficient condition for a triangulation to be Delaunay. Our result is a strengthening of this property into a necessary and sufficient condition, as well as a generalization to any dimension and to regular triangulations. Even for the significantly simpler situation of 2-dimensional Delaunay, we are not aware of any work giving the strengthening of the grain optimization required to get our sufficient condition.
Our work relies strongly on a variational characterization of Delaunay triangulations obtained in [START_REF] Chen | Optimal delaunay triangulations[END_REF]. It is well known [START_REF] Edelsbrunner | Incremental topological flipping works for regular triangulations[END_REF][START_REF] Boissonnat | Algorithmic geometry[END_REF] that the Delaunay triangulation in R n is equivalent to the lower convex hull of points in R n × R where each point (x 1 , . . . , x n ) ∈ P ⊂ R n is lifted to:
$$\mathrm{lift}(x_1, \dots, x_n) \;=_{\mathrm{def.}}\; \left(x_1, \dots, x_n, \sum_{j=1}^{n} x_j^2\right) \in P$$
where P ⊂ R^{n+1} is the graph of the square norm function. Along this correspondence, an n-simplex {p_0, ..., p_n} ⊂ P belongs to the Delaunay triangulation in R^n if and only if its lifted counterpart {lift p_0, ..., lift p_n} belongs to the boundary of the lower convex hull of lift P, where "lower" refers to the negative direction on the last coordinate x_{n+1}. This equivalence extends to regular triangulations, where the weighted point ((x_1, ..., x_n), µ) ∈ R^n × R is now lifted to (x_1, ..., x_n, Σ_j x_j^2 − µ) ∈ R^{n+1} [START_REF] Edelsbrunner | Incremental topological flipping works for regular triangulations[END_REF][START_REF] Boissonnat | Algorithmic geometry[END_REF].
Starting from this equivalence, Chen and coauthors [START_REF] Chen | Optimal delaunay triangulations[END_REF] observed that Delaunay triangulations minimize, among all triangulations over a given vertex set P, the L p norm of the difference between the squared norm function and its piecewise-linear interpolation on the triangulation (see [START_REF] Cohen-Steiner | Lexicographic optimal homologous chains and applications to point cloud triangulations[END_REF] below for a formal definition). This variational formulation has been successfully exploited [START_REF] Alliez | Variational tetrahedral meshing[END_REF][START_REF] Chen | Efficient mesh optimization schemes based on optimal delaunay triangulations[END_REF][START_REF] Chen | Revisiting optimal delaunay triangulation for 3d graded mesh generation[END_REF] for the generation of Optimal Delaunay Triangulations obtained by optimizing the same functional with respect to the vertex positions in two and three dimensional Delaunay triangulations.
Result and motivation in a few words
We introduce a new variational characterization of Delaunay and regular triangulations. Recall that a k-simplex is a set of k + 1 points or vertices. Given a finite set of points P ⊂ R n , respectively a set of weighted points P ⊂ R n × R, in general position, we define a specific total order σ 0 < . . . < σ N -1 on the set of n-simplices with vertices in P. This order allows to compare sets of simplices by comparing the sum of simplex weights, where the weight of simplex σ i is 2 i . This total order on sets of n-simplices, induced by the total order on simplices, is called lexicographic order.
A simplicial k-chain, whose formal definition is recalled in Section 2, is a set of k-simplices behaving as a vector over Z 2 , the field of integers modulo 2, when the addition is defined as the set-theoretic symmetric difference:
Γ 1 + Γ 2 = (Γ 1 ∪ Γ 2 ) \ (Γ 1 ∩ Γ 2 ).
The lexicographic comparison of two chains, or, merely, of two sets of simplices, can equivalently be defined as follows. We say that Γ_1 is less than Γ_2, in lexicographic order, and we write Γ_1 ⪯_lex Γ_2, if Γ_1 = Γ_2 or if the maximal simplex in their symmetric difference is in Γ_2. In other words, for Γ_1 ≠ Γ_2:
$$\Gamma_1 \preceq_{\mathrm{lex}} \Gamma_2 \iff \max\big\{\sigma \in (\Gamma_1 \cup \Gamma_2) \setminus (\Gamma_1 \cap \Gamma_2)\big\} \in \Gamma_2$$
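For illustration, a minimal sketch of this comparison, with chains represented as sets of simplex ranks in the total order (larger rank = larger simplex), could read:

```python
def lex_leq(c1, c2):
    """Gamma_1 <=_lex Gamma_2: equal, or the max simplex of the symmetric difference lies in Gamma_2.
    Chains are sets of simplex ranks; ^ is the set symmetric difference, i.e. the chain sum over Z/2Z."""
    if c1 == c2:
        return True
    return max(c1 ^ c2) in c2

g1 = {0, 3, 5}
g2 = {0, 3, 7}
print(lex_leq(g1, g2), lex_leq(g2, g1))  # True False: the largest differing simplex, 7, is in g2
```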
The main result of this work (Theorem 1) asserts that the Delaunay, respectively regular, triangulation of P, is the minimal simplicial chain for the lexicographic order, among chains whose boundary coincides with the boundary of the convex hull of P.
The formal statement of Theorem 1 is given in Section 3 after the exposition of some prerequisites.
The definition of the total order we use for comparing simplices might not seem very intuitive. It is introduced progressively, in Section 3.2 for 3dimensional Delaunay, then in Section 3.5 for the general case of Delaunay or regular triangulation in any dimension n ≥ 1. In the case of Delaunay triangulation in the plane, the comparison ≤ between two triangles, T i , i = 1, 2 is defined as follows.
We denote by R B (T i ) the bounding radius of T i , which is the radius of the smallest circle containing T i and by R C (T i ) the radius of the circumcircle of
Fig. 1 The triangle bounding radius R_B is the radius of the smallest enclosing circle (left panel: acute triangle, where R_B = R_C; right panel: obtuse triangle). The triangle bounding radius R_B and circumradius R_C characterize the order among 2-simplices.
T i , or circumradius (see Figure 1). These two radii coincide when T i is acute.
In order to compare T 1 and T 2 we first compare R B (T 1 ) and R B (T 2 ):
R B (T 1 ) < R B (T 2 ) ⇒ T 1 ≤ T 2
In case of equality, which arises generically in case of two obtuse triangles sharing their longest edge, the tie is broken by comparing R C (T 1 ) and R C (T 2 ), in reverse order:
R B (T 1 ) = R B (T 2 ) and R C (T 1 ) > R C (T 2 ) ⇒ T 1 ≤ T 2
In the particular case of dimension 2 and zero weights, using this particular total order on triangles, Theorem 1 asserts that the 2D Delaunay triangulation is the lexicographic minimum under boundary condition.
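As a small illustration of this planar order (ignoring exact-arithmetic issues, which a real implementation would have to handle), the sketch below computes R_B^2 and R_C^2 for triangles given by hypothetical coordinates and compares them as above:

```python
import numpy as np

def radii_squared(A, B, C):
    """Squared bounding radius R_B^2 and squared circumradius R_C^2 of triangle ABC."""
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    a2 = np.sum((B - C) ** 2); b2 = np.sum((A - C) ** 2); c2 = np.sum((A - B) ** 2)
    area = 0.5 * abs((B - A)[0] * (C - A)[1] - (B - A)[1] * (C - A)[0])
    Rc2 = a2 * b2 * c2 / (16.0 * area ** 2)      # R_C = abc / (4 * area)
    m2 = max(a2, b2, c2)
    Rb2 = m2 / 4.0 if m2 > (a2 + b2 + c2 - m2) else Rc2  # obtuse: half the longest edge
    return Rb2, Rc2

def less_than(T1, T2):
    """Order of Section 1.2: compare R_B^2 first, break ties with R_C^2 in reverse order."""
    rb1, rc1 = radii_squared(*T1)
    rb2, rc2 = radii_squared(*T2)
    return rb1 < rb2 or (rb1 == rb2 and rc1 > rc2)

T1 = [(0, 0), (4, 0), (1, 1.0)]   # obtuse triangle
T2 = [(0, 0), (4, 0), (1, 1.5)]   # obtuse, same longest edge, smaller circumradius
print(less_than(T1, T2), less_than(T2, T1))  # True False
```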
Our result characterizes regular triangulations as solutions of a particular linear programming problem over Z 2 that can be computed in polynomial time, by simple matrix reduction algorithms [START_REF] Cohen-Steiner | Lexicographic optimal homologous chains and applications to point cloud triangulations[END_REF][START_REF] Vuillamy | Simplification planimétrique et chaînes lexicographiques pour la reconstruction 3D de scènes urbaines[END_REF]. However, we do not expect such algorithms to compete with existing algorithms for the computation of Delaunay or regular triangulation in Euclidean space.
Instead, the characterization given by Theorem 1, being concise and linear by nature, enable generalizations beyond the situation of Delaunay or regular triangulation [START_REF] Cohen-Steiner | Lexicographic optimal homologous chains and applications to point cloud triangulations[END_REF][START_REF] Vuillamy | Simplification planimétrique et chaînes lexicographiques pour la reconstruction 3D de scènes urbaines[END_REF]. This is illustrated by an example of application in section 1.3. Also, we believe that our algebraic flavored characterization, by offering a novel perpective on a classical notion, may stimulate further theoretical research as well as applications.
Regular triangulations on curved objects
Our initial motivation for this work was the design of meshing algorithms for manifolds or stratified sets, typically 2-submanifolds of the Euclidean 3-space [START_REF] Cohen-Steiner | Lexicographic optimal homologous chains and applications to point cloud triangulations[END_REF][START_REF] Vuillamy | Simplification planimétrique et chaînes lexicographiques pour la reconstruction 3D de scènes urbaines[END_REF]. When the submanifold is known through a point set sampling, possibly with outliers, the computation of such a triangulation is known as surface reconstruction, or, when the ambient dimension is high, manifold learning.

Fig. 2 Our variational formulation, defining Delaunay triangulations when applied to n-chains in ambient R^n, can be extended to k-chains in R^n. This is illustrated here for k = 2 and n = 3. The boundary constraint β of (1) is the set of edges connecting the red dots.
In this context, we would like our triangulations to benefit from the same quality and optimality properties enjoyed by the Delaunay triangulation in Euclidean space and to be able to ignore outliers. A natural idea, in order to extend Delaunay or regular triangulations to this general situation, consists in using the same variational formulation of Theorem 1, but considering k-chains in R n for k < n instead of n-chains in R n .
The case n = 3 and k = 2 is illustrated on Figure 2 which shows triangulated surfaces made of the triangles in Γ min , the lexicographic minimal 2-chains in R 3 under imposed boundary, where the total order ≤ on triangles is the one given in Section 1.2:
$$\Gamma_{\min} = \min_{\mathrm{lex}} \big\{\Gamma \in C_2(K),\; \partial\Gamma = \beta\big\}. \qquad (1)$$
For ease of computation we can take K to be the 3D Delaunay triangulation of the data points rather than the full complex. We first see that the support of the minimal chain is a manifold. It has in fact been proved [START_REF] Cohen-Steiner | Lexicographic optimal chains and manifold triangulations[END_REF] that given a connected compact smooth 2-submanifold of Euclidean space, under good sampling conditions relatively to the reach of the manifold and starting from a well chosen Čech or Rips simplicial complex over the points sample, the support of the lexicographic minimal cycle non-homologous to 0 has to be a triangulation of the manifold. We also see that the quality and arrangement of triangles in the obtained surface triangulation is, as expected, similar to that of planar Delaunay triangulations, making our approach a good candidate for extending the notion of Delaunay (or regular) triangulation to sampled curved objects.
Organization of the paper
The paper is devoted to the proof of Theorem 1.
Section 2 recalls standard definitions about simplicial complexes and chains, regular triangulations of weighted points, and states the formal definition of the lexicographic order. Section 3 states our main result.
Section 4 establishes geometric constructions of regular triangulations, with an emphasis on properties of their simplex links. A characterization of regular triangulations is given in terms of the support of a chain minimizing a weighted L^1 norm, denoted ‖·‖_(p), under boundary constraints (Proposition 2). The weights in the norm ‖·‖_(p) are parametrized by a real number p ≥ 1. Lemmas 4 and 5 imply that, informally, for p large enough, a chain minimizing ‖·‖_(p) under boundary constraints minimizes in particular a lexicographic preorder induced by the comparison of bounding radii µ_B of simplices.
Sections 4.2 and 4.3 describe a geometric construction that maps the link of a k-simplex τ in the regular triangulation to a polytope in R n-k . It establishes correspondences between properties of the link of τ with properties of the polytope, which are summarized in Lemma 7. In this section are defined several geometric objects used along the proof, such as the bisector bis τ of a simplex τ , the associated maps π bisτ , π τ , Φ τ , and Sh τ , and the notion of visible facet of a polytope.
Section 5 is self-contained and does not rely on previous constructions. It is devoted to the proof of Lemma 10, which establishes that convex hulls can be defined as minimal lexicographic chains under boundary constraints, for a particular lexicographic order denoted 0 . This lemma is a crucial tool for the proof of the main theorem.
Finally, in section 6, we are ready to give the proof of Theorem 1.
The proof of the theorem requires many constructions, notations and intermediate results. In order to help the reader, we provide in Appendix A a proof outline, with two flowcharts and associated explanations. This proof outline is not self contained: it is meant to serve as a roadmap that, by giving a global view of the proof, may support the reader along sections 4, 5 and 6. It is completed by a glossary, Section A.4, of the main notions used in the proof outline.
Acknowledgements: This work has benefited from many discussions and exchanges on related topics with Dominique Attali, Jean-Daniel Boissonnat and Mathijs Wintraecken.
Simplicial complexes, simplicial chains and lexicographic order
We recall here basic definitions of simplicial complexes and chains. A thorough exposition can be found in the first chapter of [START_REF] Munkres | Elements of algebraic topology[END_REF].
Simplicial complexes
Simplicial complexes For any k ≥ 0, a simplex of dimension k, or k-simplex, is a set of k + 1 vertices. If τ and σ are simplices, we say that τ is a face of σ, or, equivalently, that σ is a coface of τ , when τ ⊆ σ.
A set K of simplices is called a simplicial complex if any σ ∈ K has all its faces in K:
σ ∈ K and ∅ ≠ τ ⊆ σ ⇒ τ ∈ K
We consider in this paper only finite sets K, i.e. only finite simplicial complexes. The dimension of a simplicial complex is the maximal dimension of its simplices. If any simplex in a n-dimensional simplicial complex K has at least one coface of dimension n, K is said to be a pure simplicial complex. Given a simplicial complex K, we denote by K [k] the subset of K containing all k-simplices of K.
Stars and Link
For a simplex τ ∈ K, the star of τ in K, denoted St K (τ ) is defined by St K (τ ) = {σ ∈ K, σ ⊇ τ } and the link of τ in K, denoted Lk K (τ ) is defined by Lk K (τ ) = {σ ∈ K, σ ∪ τ ∈ K and σ ∩ τ = ∅}. When, for 0 ≤ k < n, τ is a k-simplex in a pure n-dimensional simplicial complex K, the link Lk K (τ ) of τ in K is a (n -k -1)-dimensional pure simplicial complex.
Embedded simplicial complexes and geometric realization Given a k-simplex σ = {v 0 , . . . v k }, where each vertex v i , 0 ≤ i ≤ k belongs to R N for some N ≥ k such that the points {v 0 , . . . v k } are affinely independent, the convex hull of σ, denoted CH (σ), is called a geometric k-simplex.
Given an n-dimensional simplicial complex K, an embedding of K consists in associating to each vertex in K a point in R^N such that:¹ 1. for any k-simplex τ ∈ K, the points associated to the vertices of τ are affinely independent, 2. for any pair of simplices σ_1, σ_2 ∈ K, one has:
CH (σ 1 ) ∩ CH (σ 2 ) = CH (σ 1 ∩ σ 2 ) ,
The geometric realization G(K) ⊂ R N of K is then defined as:
$$G(K) = \bigcup_{\sigma \in K} CH(\sigma)$$
In the sequel of the paper, we consider several n-dimensional simplicial complexes whose vertices belong to R n . For example, given four points a, b, c, d in general position in R 2 , the simplicial complex made of the triangles in the
¹ If the simplicial complex K contains N vertices v_1, ..., v_N, associating to vertex v_i the point ṽ_i ∈ R^N whose j-th coordinate is (ṽ_i)_j = 1 if j = i and 0 otherwise, gives an embedding of K in R^N, and the induced canonical geometric realization defines the topological space associated to the simplicial complex K. It is homeomorphic to any other geometric realization of K.
Delaunay triangulation of {a, b, c, d}, together with all their faces, is a simplicial complex embedded in R^2. By contrast, the full 2-dimensional simplicial complex over a, b, c, d, defined as the simplicial complex made of all possible triangles, {abc, abd, bcd, acd}, with all their faces, is not embedded in R^2, regardless of the configuration of the points a, b, c, d. We mention that if K is a full simplicial complex, so is the link Lk_K(τ) of any of its simplices τ ∈ K. If the geometric realization G(K) of K is an n-manifold, then the geometric realization of Lk_K(τ), G(Lk_K(τ)), is an (n−k−1)-sphere.
Simplicial chains
Let K be a simplicial complex of dimension at least k. The notion of chains can be defined with coefficients in any ring but we restrict here the definition to coefficients in the field Z 2 = Z/2Z. Denoting by K [k] the set of k-simplices in K, a k-chain Γ with coefficients in Z 2 is a formal sum of k-simplices:
$$\Gamma = \sum_i x_i \sigma_i, \quad \text{with } x_i \in \mathbb{Z}_2 \text{ and } \sigma_i \in K^{[k]} \qquad (2)$$
Interpreting the coefficient x_i ∈ Z_2 = {0, 1} in front of simplex σ_i as indicating the presence of σ_i in the chain Γ, we can view the k-chain Γ as a set of k-simplices: for a k-simplex σ and a k-chain Γ, we write σ ∈ Γ if the coefficient of σ in Γ is 1. With this convention, the sum of two chains corresponds, when chains are seen as sets of simplices, to their set-theoretic symmetric difference. For a k-chain Γ and a k-simplex σ, we denote by Γ(σ) ∈ Z_2 the coefficient of σ in Γ, in other words:
Γ (σ) = 1 Z2 ⇐⇒ σ ∈ Γ.
We denote C k (K) the vector space over the field Z 2 of k-chains in the complex K.
For a k-chain Γ in C k (K), we denote by |Γ | the support of Γ , which is the subcomplex of K made of all k-simplices in Γ together with all their faces.
Boundary operator.
In what follows, a k-simplex σ can also be interpreted as the k-chain containing only the k-simplex σ, which is consistent with the notation in (2). Since the set of k-simplices K^{[k]} is a basis for C_k(K), a linear operator on C_k(K) is entirely defined by its image on each simplex.
For a k-simplex σ = {v 0 , . . . , v k }, the boundary operator ∂ k is the linear operator defined as:
$$\partial_k : C_k(K) \to C_{k-1}(K), \qquad \partial_k \sigma =_{\mathrm{def.}} \sum_{i=0}^{k} [v_0, \dots, \hat{v}_i, \dots, v_k]$$
where the symbol $\hat{v}_i$ means that the vertex $v_i$ is deleted from the set.
A k-cycle is a k-chain with null boundary. We say two k-chains Γ and Γ' are homologous if their difference (or equivalently their sum) is a boundary, in other words, if Γ − Γ' = ∂_{k+1} B for some (k+1)-chain B.
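A minimal sketch of chains over Z_2 (simplices as frozensets, chain addition as set symmetric difference) and of the boundary operator could read:

```python
from itertools import combinations

def boundary(simplex):
    """Boundary of a k-simplex over Z/2Z: the set of its (k-1)-faces."""
    return {frozenset(f) for f in combinations(sorted(simplex), len(simplex) - 1)}

def chain_sum(c1, c2):
    """Chain addition over Z/2Z = set-theoretic symmetric difference."""
    return c1 ^ c2

def boundary_chain(chain):
    out = set()
    for s in chain:
        out = chain_sum(out, boundary(s))
    return out

# Two triangles glued along an edge: the shared edge cancels in the boundary.
chain = {frozenset({"a", "b", "c"}), frozenset({"b", "c", "d"})}
print(sorted(tuple(sorted(e)) for e in boundary_chain(chain)))
# [('a','b'), ('a','c'), ('b','d'), ('c','d')] -- a 1-cycle bounding the two triangles
```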
Regular triangulations of weighted points
Definition 1 generalizes the square of Euclidean distance to weighted points.
Definition 1 (Section 4.4 in [4]) Given two weighted points (P_1, µ_1), (P_2, µ_2) ∈ R^n × R, their weighted distance is defined as:
$$D\big((P_1, \mu_1), (P_2, \mu_2)\big) =_{\mathrm{def.}} (P_1 - P_2)^2 - \mu_1 - \mu_2 \qquad (3)$$
We recall now the definition of a regular triangulation over a set of weighted points. Regular triangulations can alternatively be defined as duals of the Laguerre (or power) diagrams of a set of weighted points. We use here a generalization of the empty sphere property of Delaunay triangulations.
Definition 2 (Lemma 4.5 in [4])
A regular triangulation T of the set of weighted points P = {(P 1 , µ 1 ), . . . , (P N , µ N )} ⊂ R n × R, N ≥ n + 1, is a triangulation of the convex hull of {P 1 , . . . , P N } taking its vertices in {P 1 , . . . , P N } such that for any simplex σ ∈ T , denoting (P C (σ), µ C (σ)) (Definition 7) the generalized circumsphere of σ, then:
$$(P_i, \mu_i) \in P \setminus \sigma \;\Rightarrow\; D\big((P_C(\sigma), \mu_C(\sigma)), (P_i, \mu_i)\big) > 0 \qquad (4)$$
Definition 2 is equivalent to the definition of a Delaunay triangulation when all weights are equal. Notice the strict inequality in (4), where the usual definition would use a greater-or-equal inequality instead. However, the equality case will not occur under the generic condition assumed in Section 3.
Lexicographic order
We assume now a total order on the k-simplices of K, σ_1 < ... < σ_N, where N = dim C_k(K). From this order, we define a lexicographic total order on k-chains. Recall that with coefficients in Z_2, for k-chains Γ_1, Γ_2 ∈ C_k(K), we can use interchangeably vector and set-theoretic operations:
Γ 1 + Γ 2 = Γ 1 -Γ 2 = (Γ 1 ∪ Γ 2 ) \ (Γ 1 ∩ Γ 2 ).
Definition 3 (Lexicographic Order on chains) Assume there is a total order ≤ on the set of k-simplices of K. For Γ 1 , Γ 2 ∈ C k (K):
Γ 1 lex Γ 2 ⇐⇒ def. Γ 1 = Γ 2 or max {σ ∈ Γ 1 + Γ 2 } ∈ Γ 2
The lexicographic order can equivalently be defined as a comparison of L^1 norms for a particular choice of simplex weights. For that, the d-simplices being totally ordered as σ_0 < σ_1 < ... < σ_{N−1}, we assign the weight 2^i to simplex σ_i. We can then define the norm of a d-chain Γ = Σ_j c_j σ_j, where c_j ∈ Z_2, as:
$$\|\Gamma\|_* = \Big\|\sum_j c_j \sigma_j\Big\|_* = \sum_j |c_j|\, 2^j$$
where |.| : Z 2 → {0, 1} ⊂ R assigns respective real numbers 0 and 1 to their corresponding elements in Z 2 = {0 Z2 , 1 Z2 }, i.e. |0 Z2 | = 0 and |1 Z2 | = 1. Another way of saying this is to associate to each k-chain Γ ∈ C k (K) the integer number whose j th bit in the binary expression, starting from the least significant bit, corresponds to the coefficients of σ j in Γ . Then Γ * is the value of the integer number. With this convention one has:
$$\Gamma_1 \preceq_{\mathrm{lex}} \Gamma_2 \iff \|\Gamma_1\|_* \le \|\Gamma_2\|_*$$
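A short illustration of this integer encoding, consistent with the set-based comparison above (simplices again represented by their ranks in the total order):

```python
def chain_value(chain):
    """||Gamma||_*: encode a chain (set of simplex ranks j) as the integer sum of 2**j."""
    return sum(1 << j for j in chain)

g1, g2 = {0, 3, 5}, {0, 3, 7}
# Comparing the integer encodings agrees with the lexicographic comparison of the chains.
assert (chain_value(g1) <= chain_value(g2)) == (max(g1 ^ g2) in g2)
print(chain_value(g1), chain_value(g2))  # 41 137
```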
Main result
This section gives the main result of the paper, Theorem 1, before entering into more technical matters. Both the generic condition and the total order on simplices inducing the lexicographic order are formally defined in Section 3.4.
These formal definitions do not provide an easy visual intuition, being inductive by nature and expressed for the general case of weighted points. For this reason, we first provide an intuitive introduction in the case of zero weights, which corresponds to Delaunay triangulations.
Generic configurations
In computational geometry, a property defined on a space of configuration parametrized by R N is usually said generic when the set G ⊆ R N for which it holds is open and dense in R N . Another legitimate definition for a property to be generic is to hold almost everywhere, in other words such that R N \ G has Lebesgue measure zero.
Definition 4 (Generic property)
We say that a property on a space of configurations parametrized by R^N, satisfied on G ⊆ R^N, is generic if G is an open dense subset whose complement has Lebesgue measure zero.
Note that the conjunction of finitely many generic conditions is generic. In the case of the Delaunay triangulation of m points in R n , the configuration space has dimension N = mn, and, for regular triangulations of m weighted points in R n × R, the dimension is N = m(n + 1).
In the context of applications, as a consequence of some symmetries in the input or from the finite representations by floating point numbers, the probability to encounter a non-generic configuration with real world data is not zero and practical implementations simulate generic configurations by breaking ties artificially, typically by symbolic perturbation (see Simulation Of Simplicity in [START_REF] Edelsbrunner | Simulation of simplicity: a technique to cope with degenerate cases in geometric algorithms[END_REF]). Section I provides a more detailed discussion of the generic conditions 1 (Section 3.4) and 2 (Section 5), including the proofs that they are indeed generic.
3.2 Order ≤ on simplices for Delaunay triangulations in R 2 and R 3 .
The following definitions of total orders on simplices of two and three-dimensional Delaunay triangulations should be seen as particular cases of Definition (9) in Section 3.5.
2-dimensional Delaunay triangulations
For 2-dimensional Delaunay triangulations, the total order defined by ( 9) has already been introduced in Section 1.2. Denoting respectively R B (T ) and R C (T ) the bounding radius, radius of the smallest circle containing triangle T , and the circumradius of T (see figure 1), the order on triangles is defined as:
$$T_1 \le T_2 \iff \begin{cases} T_1 = T_2 \\ \text{or } R_B(T_1)^2 < R_B(T_2)^2 \\ \text{or } R_B(T_1)^2 = R_B(T_2)^2 \text{ and } R_C(T_1)^2 > R_C(T_2)^2 \end{cases}$$
This preorder is a total order under our generic condition. Compared to Section 1.2, we have here used squared radii comparisons instead of radii comparisons. This obviously does not change the ordering, but makes the expression consistent with the general definition (9) since, for 2-dimensional Delaunay triangulations, one has µ 0 (T ) = R B (T ) 2 and µ 1 (T ) = R C (T ) 2 .
3-dimensional Delaunay triangulations
Given a non-degenerate tetrahedron σ = abcd in R 3 , it must belong to one of the three categories illustrated on Figure 3. This category depends on the dimension of Θ(σ), which is, under generic condition, the unique lowest dimensional face of σ with the same bounding radius as σ.
Equivalently, Θ(σ) is the unique face of σ such that R B (σ) = R C (Θ(σ)).
For all three categories, we define µ 0 (σ) as the squared bounding radius of the simplex σ:
µ 0 (σ) = R B (σ) 2 .
For the second category, in the middle of Figure 3, corresponding to Θ(σ) = abc, we define moreover µ 1 (σ) = R C (σ) 2 which is the squared radius of the circumscribing sphere. For the third category, on the right of Figure 3, corresponding to
Figure 3 (panel labels, from left to right): Θ(σ) = abcd = σ; Θ(σ) = abc; Θ(σ) = ab.
Θ(σ) = ab, we define µ 1 (σ) = min(R C (abc), R C (abd)) 2 and µ 2 (σ) = R C (σ) 2 .
The total order ≤ on tetrahedra is then defined as:
$$\sigma_1 \le \sigma_2 \iff \begin{cases} \sigma_1 = \sigma_2 \\ \text{or } \mu_0(\sigma_1) < \mu_0(\sigma_2) \\ \text{or } \mu_0(\sigma_1) = \mu_0(\sigma_2) \text{ and } \mu_1(\sigma_1) > \mu_1(\sigma_2) \\ \text{or } \mu_0(\sigma_1) = \mu_0(\sigma_2) \text{ and } \mu_1(\sigma_1) = \mu_1(\sigma_2) \text{ and } \mu_2(\sigma_1) > \mu_2(\sigma_2) \end{cases}$$
Note that µ_1 and µ_2 are not defined for all kinds of tetrahedra. However, the equality configurations requiring the definitions of the quantities µ_1 and µ_2 only arise when the category of the tetrahedra allows these quantities to be defined. As an example, if the tetrahedron σ_1 is from the first category (Θ(σ) = abcd), then µ_0(σ_1) = R_B(σ_1)^2 = R_C(σ_1)^2. Under our generic condition, there cannot be another simplex with the same circumradius as σ_1, and therefore no other tetrahedron with the same bounding radius. It follows that in this case σ_1 is the only tetrahedron with bounding radius R_B(σ_1):
$$\sigma_1 \ne \sigma_2 \Rightarrow R_B(\sigma_1) \ne R_B(\sigma_2).$$
The theorem
We consider a set P = {(P 1 , µ 1 ), . . . , (P N , µ N )} ⊂ R n × R of weighted points in n-dimensional Euclidean space. A weighted point (P, 0) is seen as a usual point P ∈ R n , while, when µ > 0, it is associated to the sphere centered at P with radius r = √ µ.
The aim of this paper is to prove Theorem 1 below, where the relation lex among n-chains is the lexicographic order defined according to Definition 3, the total order on n-simplices is given by ( 9) and the general position assumption is a generic condition formalized in Condition 1 of Section 3.4. Definition 5 (Full simplicial complex) Given a finite set P, the n-dimensional full simplicial complex over P, denoted K P , is the simplicial complex made of all possible simplices up to dimension n with vertices in P.
Recall that, as explained in Section 2.2, a chain with coefficients in Z 2 is identified to the set of simplices for which the coefficient is 1.
Theorem 1 Let P = {(P 1 , µ 1 ), . . . , (P N , µ N )} ⊂ R n × R, with N ≥ n + 1,
be weighted points in general position and K P the n-dimensional full simplicial complex over P. Denote by β P ∈ C n-1 (K P ) the (n-1)-chain, set of simplices belonging to the boundary of the convex hull CH (P).
Then the simplicial complex |Γ min | support of
Γ min = min lex Γ ∈ C n (K P ), ∂Γ = β P
is the regular triangulation of P.
When all the weights µ i are equal, the regular triangulation (Definition 2) is the Delaunay triangulation.
Obviously, replacing in Theorem 1 the full simplicial complex K P by any complex containing the regular triangulation would again give this triangulation as a minimum.
Generic condition and simplex order, the formal definition
Definition 6 (Degenerate simplex) A k-simplex σ = {(P_0, µ_0), ..., (P_k, µ_k)} is said to be degenerate if the set {P_0, ..., P_k} is included in some (k−1)-dimensional affine space.
Note that if a k-simplex is non-degenerate, neither are any of its faces of dimension k < k.
As Definition 1 generalizes the squared Euclidean distance to weighted points, we now define the respective generalizations of squares of both circumradius and smallest enclosing ball radius, to sets of weighted points. Definition 7 (P C , µ C , P B , µ B ) Given a non-degenerate k-simplex σ ⊂ P with 0 ≤ k ≤ n, the generalized circumsphere and smallest enclosing ball of σ are the weighted points (P C , µ C )(σ) and (P B , µ B )(σ) respectively defined as:
$$\mu_C(\sigma) =_{\mathrm{def.}} \min\big\{\mu \in \mathbb{R},\; \exists P \in \mathbb{R}^n,\; \forall (P_i, \mu_i) \in \sigma,\; D((P, \mu), (P_i, \mu_i)) = 0\big\} \qquad (5)$$
$$\mu_B(\sigma) =_{\mathrm{def.}} \min\big\{\mu \in \mathbb{R},\; \exists P \in \mathbb{R}^n,\; \forall (P_i, \mu_i) \in \sigma,\; D((P, \mu), (P_i, \mu_i)) \le 0\big\} \qquad (6)$$
P_C(σ) (respectively P_B(σ)) is the unique point P that realizes the minimum in Equation (5) (respectively (6)).
Note that, since σ is non-degenerate, {(P, µ) : ∀(P_i, µ_i) ∈ σ, D((P, µ), (P_i, µ_i)) = 0} ≠ ∅, so that the minimum in (5) is well defined and finite.
The weights µ C (σ) and µ B (σ) are called respectively circumweight and bounding weight of σ.
In the Delaunay case where weights are zero, µ C (σ) and µ B (σ) correspond respectively to the square of the circumradius and the square of the radius of the smallest ball enclosing σ, as depicted on Figure 1, while P C (σ) and P B (σ) are the respective circles centers.
We give now the condition required in Theorem 1, for the general case, which, as explained in Section 3.1, is generic and can be handled by standard symbolic perturbation.
Generic condition 1
We say that P = {(P_1, µ_1), ..., (P_N, µ_N)} ⊂ R^n × R is in general position if it does not contain any degenerate n-simplex, and if, for a k-simplex σ and a k'-simplex σ' in K_P with 2 ≤ k, k' ≤ n, one has:
$$\mu_C(\sigma) = \mu_C(\sigma') \Rightarrow \sigma = \sigma' \qquad (7)$$
We have the following observation deduced from Definitions 1 and 7:
Observation 2 If the weights of all points in P are assumed non-positive, then, under the generic condition 1, one has µ C (τ ) > 0 and µ B (τ ) > 0 for any simplex τ .
From now on, we assume P to be in general position i.e. it satisfies generic condition 1.
The next lemma assigns to any simplex σ a face Θ(σ) which is inclusion minimal among the faces of σ sharing the same generalized smallest enclosing ball as σ (see Figure 3).
Lemma 1 (Proof in Appendix C) Under generic condition 1 on P, for any simplex σ, there exists a unique inclusion minimal face
Θ(σ) of σ such that (P B , µ B )(σ) = (P C , µ C )(Θ(σ)). Moreover one has (P C , µ C )(Θ(σ)) = (P B , µ B )(Θ(σ)).
Figure 3 illustrates the possibilities for Θ(σ) in the case n = 3 and zero weights. In general, Θ(σ) can even be of dimension 0 (a single vertex) but this cannot happen with zero weights.
We will need the following lemma:
Lemma 2 For any k-simplex σ ∈ K P , one has P B (σ) ∈ CH (Θ(σ)).
Proof We claim first that, for any simplex σ, one has P_B(σ) ∈ CH(σ). If σ = {(P_0, µ_0), ..., (P_k, µ_k)}, CH(σ) is the convex hull of {P_0, ..., P_k}. If P ∉ CH(σ), the projection of P on CH(σ) decreases the weighted distance from P to all the vertices of σ, which shows that (P, µ) cannot realize the arg min in (6). This proves the claim. Since by definition of Θ(σ) (Lemma 1) one has P_B(σ) = P_B(Θ(σ)), the lemma follows by applying the claim to the simplex Θ(σ).
Total order on simplices: the general case
In this section, we provide the general definition of the total order on simplices that has been made explicit in Section 3.2 in the particular case of Delaunay triangulations in R 2 and R 3 .
For an n-simplex σ and k such that 0 ≤ k ≤ dim(σ) − dim(Θ(σ)), we define the (k + dim(Θ(σ)))-dimensional face Θ^k(σ) as follows.
For k = 0, Θ^0(σ) = Θ(σ), where Θ(σ) is defined in Lemma 1. For k > 0, Θ^k(σ) is the (dim(Θ^{k−1}(σ)) + 1)-dimensional coface of Θ^{k−1}(σ) with minimal circumradius:
$$\Theta^k(\sigma) = \underset{\substack{\Theta^{k-1}(\sigma) \subset \tau \subseteq \sigma \\ \dim(\tau) = \dim(\Theta^{k-1}(\sigma)) + 1}}{\arg\min} \; \mu_C(\tau) \qquad (8)$$
and
µ k (σ) is the circumweight of Θ k (σ): µ k (σ) = µ C (Θ k (σ)). In particular, µ 0 (σ) = µ C (Θ(σ)) = µ B (σ) (by Lemma 1) and if k = dim(σ) -dim(Θ(σ)) then µ k (σ) = µ C (σ).
Observe that, thanks to generic condition 1 and Lemma 1, one has, for two n-simplices σ 1 , σ 2 :
µ B (σ 1 ) = µ B (σ 2 ) ⇒ Θ(σ 1 ) = Θ(σ 2 ). Therefore, if µ B (σ 1 ) = µ B (σ 2 ), µ k (σ 1
) and µ k (σ 2 ) are defined for the same range of values of k.
We define the following order relation on n-simplices (recall that µ 0 (σ) = µ B (σ)):
$$\sigma_1 \le \sigma_2 \iff_{\mathrm{def.}} \begin{cases} \sigma_1 = \sigma_2 \\ \text{or } \mu_0(\sigma_1) < \mu_0(\sigma_2) \\ \text{or } \exists k \ge 1,\; \mu_k(\sigma_1) > \mu_k(\sigma_2) \text{ and } \forall j,\, 0 \le j < k,\; \mu_j(\sigma_1) = \mu_j(\sigma_2) \end{cases} \qquad (9)$$
One can check that when P is in general position, the relation ≤ is a total order.
For example, when n = 2 and the weights are zero, this order on triangles consists in first comparing the radii of the smallest circles enclosing the triangles T_i, i = 1, 2, whose squares are R_B(T_i)^2 = µ_B(T_i) = µ_0(T_i). This is enough to induce, generically, an order on acute triangles. But obtuse triangles could generically share their longest edge and therefore have the same bounding radius. In this case the tie is broken by comparing in reverse order the circumradii, whose squares are R_C(T_i)^2 = µ_C(T_i) = µ_1(T_i).
By Definition 3, and under generic Condition 1, the total order ≤ on nsimplices induces a lexicographic total order lex on the n-chains of K P .
Observation 3 From the definition of the order on simplices, the lexicographic minimum is invariant under a global translation of the weights by a common shift s: ∀i, µ i ← µ i + s. (as explicitly explained in Observation 8 in Appendix B). The same holds for regular triangulations. Therefore proving Theorem 1 for non-positive weights is enough to extend it to any weights.
Some geometric properties of regular triangulations
Lifts of weighted points and p-norms
Given a weighted point (P, µ) ∈ R^n × R, its lift with respect to the origin O ∈ R^n, denoted by lift(P, µ), is a point in R^n × R given by:
$$\mathrm{lift}(P, \mu) =_{\mathrm{def.}} \big((P - O), (P - O)^2 - \mu\big),$$
and, for a set of weighted points P = {(P_1, µ_1), ..., (P_N, µ_N)} ⊂ R^n × R, lift(P) = {lift(P_1, µ_1), ..., lift(P_N, µ_N)} ⊂ R^n × R.
Similarly to Delaunay triangulations, it is a well-known fact that the simplices of the regular triangulation of P are in one-to-one correspondence with the simplices of the lower convex hull of lift(P):
Proposition 1 A simplex σ is in the regular triangulation of P if and only if lift(σ) is a simplex on the lower convex hull of lift(P).
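For illustration, a minimal sketch of Proposition 1 using scipy: lift the (weighted) points and keep the facets of the lower convex hull, i.e. those whose outward normal has a negative last coordinate. The random points and weights below are arbitrary.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
P = rng.random((12, 2))          # points in R^2
mu = np.zeros(len(P))            # zero weights -> Delaunay; nonzero weights -> regular

# Lift (x, mu) -> (x, |x|^2 - mu) and keep the facets of the lower convex hull.
lifted = np.hstack([P, (np.sum(P ** 2, axis=1) - mu)[:, None]])
hull = ConvexHull(lifted)
lower = [simp for simp, eq in zip(hull.simplices, hull.equations) if eq[2] < 0]
# For zero weights, this should match scipy.spatial.Delaunay(P) up to simplex ordering.
print(len(lower), "triangles in the regular/Delaunay triangulation")
```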
Based on this lifted paraboloid formulation, the idea of variational formulation for Delaunay triangulations has emerged [START_REF] Chen | Optimal delaunay triangulations[END_REF]. This idea has been exploited further in order to optimize triangulations in [START_REF] Chen | Efficient mesh optimization schemes based on optimal delaunay triangulations[END_REF][START_REF] Alliez | Variational tetrahedral meshing[END_REF]. We follow here the same idea but the variational formulation, while using the same criterion, is applied on the linear space of chains, which can be seen as a superset of the space of triangulations.
We define a function f_σ : CH(σ) → R on the convex hull of a k-simplex σ = {(P_0, µ_0), ..., (P_k, µ_k)} as the difference between the linear interpolation of the heights of the lifted vertices and the function x → (x − O)^2. More precisely, for a point x ∈ CH(σ) with barycentric coordinates λ_i ≥ 0, Σ_i λ_i = 1, we have x = Σ_i λ_i P_i and:
$$f_\sigma : x \mapsto f_\sigma(x) =_{\mathrm{def.}} \sum_i \lambda_i \big((P_i - O)^2 - \mu_i\big) - (x - O)^2 \qquad (10)$$
A short computation shows that the function f σ , expressed in terms of barycentric coordinates, is invariant by isometry (translation, rotation or symmetry on σ). In particular f σ (x) does not depend on the origin O of the lift. It follows from Proposition 1 that, if σ reg is a simplex containing x in the regular triangulation of P, for any other simplex σ containing x with vertices in P:
f σreg (x) ≤ f σ (x) (11)
In the particular case where all weights µ_i are non-positive, the convexity of x → x^2 implies that the expression of f_σ(x) in (10) is never negative, and in this case (11) implies that defining the weight w_p of an n-simplex σ as:
$$w_p(\sigma) =_{\mathrm{def.}} \|f_\sigma\|_p = \left(\int_{CH(\sigma)} f_\sigma(x)^p \, dx\right)^{\frac{1}{p}} \qquad (12)$$
allows to characterize the regular triangulation as the one induced by the chain Γ reg that, among all chains with boundary β P , minimizes:
$$\Gamma \mapsto \|\Gamma\|_{(p)} =_{\mathrm{def.}} \sum_\sigma |\Gamma(\sigma)|\, w_p(\sigma)^p \qquad (13)$$
In this last equation, the notation |Γ (σ)| instead of Γ (σ) is there since Γ (σ) ∈ Z 2 and the sum is in R. For this reason, we used the map |.| : Z 2 → {0, 1} ⊂ R, that assigns respective real numbers 0 and 1 to their corresponding elements in
Z 2 = {0 Z2 , 1 Z2 }, i.e. |0 Z2 | = 0 and |1 Z2 | = 1.
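As an illustration, the sketch below estimates f_σ of (10) and the weight w_p(σ) of (12) by Monte Carlo sampling of barycentric coordinates; the triangle coordinates and the sample size are arbitrary, and the estimate is only approximate.

```python
import math
import numpy as np

def f_sigma(lmbda, pts, mu):
    """f_sigma at the point with barycentric coordinates lmbda (Eq. (10)), taking O = 0."""
    x = lmbda @ pts
    return lmbda @ (np.sum(pts ** 2, axis=1) - mu) - np.sum(x ** 2)

def w_p_estimate(pts, mu, p=2, n_samples=20000, seed=0):
    """Monte Carlo estimate of w_p(sigma) of Eq. (12) by uniform sampling of the simplex."""
    rng = np.random.default_rng(seed)
    lmbda = rng.dirichlet(np.ones(len(pts)), size=n_samples)  # uniform barycentric samples
    vals = np.array([f_sigma(l, pts, mu) for l in lmbda])
    vol = abs(np.linalg.det(pts[1:] - pts[0])) / math.factorial(len(pts) - 1)
    return (vol * np.mean(vals ** p)) ** (1.0 / p)

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(w_p_estimate(tri, mu=np.zeros(3), p=2))  # w_2 of a single triangle with zero weights
```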
Formally, we have the following Proposition 2 that characterizes regular triangulations as a linear programming problem over Z 2 .
Proposition 2 Let P = {(P 1 , µ 1 ), . . . , (P N , µ N )} ⊂ R n ×R, with N ≥ n+1 be in general position with non-positive weights, and let K P be the n-dimensional full simplicial complex over P.
Denote by β_P ∈ C_{n−1}(K_P) the (n−1)-chain made of the simplices belonging to the boundary of CH(P), and by Γ_reg the n-chain defined by the set of n-simplices in the regular triangulation of P. Then, for any p ∈ [1, ∞), one has:
$$\Gamma_{\mathrm{reg}} = \underset{\substack{\Gamma \in C_n(K_P) \\ \partial\Gamma = \beta_P}}{\arg\min} \; \|\Gamma\|_{(p)} \qquad (14)$$
The proof of this proposition requires the next lemma. We consider points in the convex hull of P that do not belong to the convex hull of any (n−1)-simplex in K_P, i.e. x ∈ CH(P) \ ∪_{σ∈K_P^{[n−1]}} CH(σ). Note that ∪_{σ∈K_P^{[n−1]}} CH(σ), a finite union of convex hulls of (n−1)-simplices in R^n, has Lebesgue measure 0.

Lemma 3 Let Γ ∈ C_n(K_P) be such that ∂Γ = β_P. If x ∈ CH(P) does not belong to the convex hull of any (n−1)-simplex, then there is an odd number of n-simplices σ ∈ Γ such that x ∈ CH(σ).
Proof (Proof of Proposition 2) Note that the minimum in the right hand side of ( 14) is unique under the assumed general position. Since, in the regular triangulation, all (n -1)-simplices that are not on the boundary of CH (P) are shared by exactly two n-simplices, while only those in β P have a single n-coface, we have:
∂Γ reg = β P
We claim now that:
$$\partial\Gamma = \beta_P \;\Rightarrow\; \|\Gamma_{\mathrm{reg}}\|_{(p)} \le \|\Gamma\|_{(p)} \qquad (15)$$
Indeed, (12) and (13) give:
$$\|\Gamma\|_{(p)} = \sum_{\sigma \in \Gamma} \int_{CH(\sigma)} f_\sigma(x)^p \, dx = \int_{CH(P)} \sum_{\substack{\sigma \in \Gamma \\ CH(\sigma) \ni x}} f_\sigma(x)^p \, dx$$
We get:
$$\|\Gamma\|_{(p)} = \int_{CH(P) \setminus \bigcup_{\sigma \in K_P^{[n-1]}} CH(\sigma)} \;\; \sum_{\substack{\sigma \in \Gamma \\ CH(\sigma) \ni x}} f_\sigma(x)^p \, dx \qquad (16)$$
From the equivalence between regular triangulations and convex hulls of lifted points, we know that if σ_reg ∈ Γ_reg, then for any n-simplex σ ∈ K_P:
x ∈ CH (σ) ∩ CH (σ reg ) ⇒ f σreg (x) ≤ f σ (x) (17)
According to Lemma 3, in [START_REF] Rajan | Optimality of the Delaunay triangulation in R d[END_REF], there is an odd number of simplex σ ∈ Γ satisfying CH (σ)
x in the condition. It follows that the condition has to hold for at least one simplex σ.
Therefore [START_REF] Rockafellar | Convex analysis[END_REF] gives:
x ∈ CH (σ reg ) ⇒ f σreg (x) p ≤ σ∈Γ CH(σ) x f σ (x) p ( 18
)
Since for x ∈ CH (P) \ σ∈K [n-1] P CH (σ) there is exactly one simplex σ reg such that CH (σ reg ) x , ( 18) can be rewritten as:
Σ_{σ∈Γ_reg, CH(σ)∋x} f_σ(x)^p ≤ Σ_{σ∈Γ, CH(σ)∋x} f_σ(x)^p   (19)
which, together with (16), gives the claim (15). Now, if some n-simplex σ ∈ Γ with CH(σ) ∋ x, for some x ∈ CH(P) \ ∪_{σ∈K_P^[n−1]} CH(σ), is not Delaunay, then
Σ_{σ∈Γ_reg, CH(σ)∋x} f_σ(x)^p = f_{σ_reg}(x)^p < Σ_{σ∈Γ, CH(σ)∋x} f_σ(x)^p
and since the function is continuous, this implies ‖Γ‖_(p) > ‖Γ_reg‖_(p).
Lemma 4 (Proof in Appendix E) One has:
sup_{x∈CH(σ)} f_σ(x) = µ_B(σ)   (20)
The following is an immediate consequence used in the proof of Lemma 5:
lim p→∞ w p (σ) = w ∞ (σ) = f σ ∞ = sup x∈CH(σ) f σ (x) = µ B (σ) (21)
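For zero weights, Lemma 4 says that sup_x f_σ(x) is the squared radius of the smallest ball enclosing σ. The hypothetical check below uses an obtuse triangle, for which this ball is the diametral ball of the longest edge rather than the circumscribed ball.

```python
# Hypothetical numerical check of Lemma 4 (zero weights, obtuse triangle).
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 1.0])
V = np.array([A, B, C])

def f_sigma(lam):
    x = lam @ V
    return lam @ (V ** 2).sum(axis=1) - (x ** 2).sum()

g = np.linspace(0.0, 1.0, 401)
sup_f = max(f_sigma(np.array([a, b, 1 - a - b])) for a in g for b in g if a + b <= 1)

mu_B = (np.linalg.norm(B - A) / 2) ** 2    # smallest enclosing ball: diametral ball of [A, B]
print(sup_f, mu_B)                         # both approximately 4.0
```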
Lemma 5 Let P = {(P_1, µ_1), ..., (P_N, µ_N)} ⊂ R^n × R, with N ≥ n + 1, be in general position with non-positive weights. Let K_P be the corresponding n-dimensional full simplicial complex. For p large enough, the weight w_p(σ)^p of any n-simplex σ ∈ K_P is larger than the sum of the weights of all n-simplices in K_P with smaller bounding weight µ_B. In other words, denoting by K_P^[n] the set of n-simplices in K_P:
∃p*, ∀p ≥ p*, ∀σ ∈ K_P^[n], w_p(σ)^p > Σ_{τ ∈ K_P^[n], µ_B(τ) < µ_B(σ)} w_p(τ)^p
Proof Consider the smallest ratio between two bounding weights (respectively squared radii in the Delaunay case) of n-simplices in K_P:
ι = min_{σ_1, σ_2 ∈ K_P^[n], µ_B(σ_1) < µ_B(σ_2)} µ_B(σ_2) / µ_B(σ_1)
One has ι > 1. Choose ι′ such that 1 < ι′ < ι. For any n-simplex σ ∈ K_P^[n] we have from (21):
lim_{p→∞} w_p(σ) = µ_B(σ),
and it follows, since there are only finitely many simplices in K_P^[n], that there exists some p_0 large enough such that for any p > p_0 one has:
∀σ_1, σ_2 ∈ K_P^[n], µ_B(σ_1) < µ_B(σ_2) ⇒ w_p(σ_2) / w_p(σ_1) > ι′
Let N be the total number of n-simplices in K_P. Taking:
p* = max(p_0, log N / log ι′)
realizes the statement of the lemma.
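To make the choice of exponent concrete: whenever µ_B(τ) < µ_B(σ) implies w_p(σ)/w_p(τ) > ι′, each term w_p(τ)^p with smaller bounding weight is below w_p(σ)^p/(ι′)^p, so the sum of at most N such terms stays below N · w_p(σ)^p/(ι′)^p, which is smaller than w_p(σ)^p as soon as (ι′)^p > N, i.e. p > log N / log ι′. For instance (an illustrative choice of numbers, not taken from the paper), with ι′ = 1.1 and N = 10^4 simplices, any p ≥ 97 works since 1.1^97 ≈ 1.04 · 10^4 > 10^4.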
As explained in the proof of the main theorem of Section 6, Lemma 5 allows us to focus on the link of a single simplex τ . However, before that, we need to introduce geometrical constructions that give an explicit representation of this link (Sections 4.2 and 4.3).
Projection on the bisector of a simplex
In this section, we establish a condition for a simplex τ to belong to the regular triangulation and, when it is the case, give a characterization of its link in the regular triangulation.
Given a k-simplex τ , we denote by bis τ the (n-k)-dimensional affine space bisector of τ , formally defined as: In the particular case where τ is a vertex, i.e. dim(τ ) = 0, one has bis τ = R n .
bis τ = def. x ∈ R n | ∀v 1 , v 2 ∈ τ, D ((x, 0), v 1 ) = D (x, 0), v 2 . ( 22
Let x → π bisτ (x) and x → d(x, bis τ ) = d(x, π bisτ (x)) denote respectively the orthogonal projection on and the minimal distance to bis τ . We define a projection π τ : P → bis τ × R as follows: Let o τ = P C (τ ) ∈ bis τ denote the circumcenter of τ (or, for weighted points, the generalized circumcenter as defined in Definition 7, page 13). If
π τ (P, µ) = def. π bisτ (P ), µ -d(P, bis τ ) 2 . ( 23
)
Φ τ (d) Φ τ (c) Φ τ (τ ) Φ τ (x, 0) = π bisτ (x) -o τ , (x -o τ ) 2 z = x 2 π τ (τ ) o τ
(P i , µ i ) ∈ τ , then D ((o τ , µ C (τ )), (P i , µ i )) = 0.
Since o τ = π bisτ (P i ) we have: It follows that if we denote µ(π τ (P i , µ i )) the weight of π τ (P i , µ i ), one has:
(P i -o τ ) 2 -µ i -µ C (τ ) = d(P, bis τ ) 2 -µ i -µ C (τ ) = 0 a b c d Φ τ (d) Φ τ (c) Φ τ (τ ) Sh τ (c, 0) z = x 2 o τ
µ(π τ (P i , µ i )) = µ i -d(P, bis τ ) 2 = -µ C (τ )
Therefore, π τ sends all vertices of τ to a single weighted point:
(P i , µ i ) ∈ τ ⇒ π τ (P i , µ i ) = (o τ , -µ C (τ )) (24)
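The behaviour of π_τ in (23)–(24) can be checked on a small example. The sketch below is hypothetical: τ is an edge with zero weights in the plane, bis_τ is its perpendicular bisector, and both endpoints are sent to the single weighted point (o_τ, −µ_C(τ)).

```python
# Hypothetical check of (23)-(24) for tau = ab with zero weights in R^2.
import numpy as np

a, b = np.array([0.0, 0.0]), np.array([2.0, 1.0])
o_tau = (a + b) / 2                          # circumcenter of the edge
mu_C = ((b - a) ** 2).sum() / 4              # its squared circumradius

def pi_tau(P, mu=0.0):
    u = (b - a) / np.linalg.norm(b - a)      # unit direction of the edge
    d = (P - o_tau) @ u                      # signed distance from P to bis_tau
    return P - d * u, mu - d ** 2            # (projection onto bis_tau, shifted weight)

print(pi_tau(a))                             # (array([1. , 0.5]), -1.25) = (o_tau, -mu_C)
print(pi_tau(b))                             # same weighted point
```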
Lemma 6 Let P be in general position, τ ∈ K P a k-simplex and σ ∈ K P a coface of τ . Then σ is in the regular triangulation of P if and only if π τ (σ) is a coface of the vertex {(o τ , -µ C (τ ))} = π τ (τ ) in the regular triangulation of π τ (P).
Proof Under our generic condition, σ is in the regular triangulation of P if and only if there is a weighted point (P C (σ), µ C (σ)) such that:
∀(P i , µ i ) ∈ σ, D ((P C (σ), µ C (σ)), (P i , µ i )) = 0 (25) ∀(P i , µ i ) ∈ P \ σ, D ((P C (σ), µ C (σ)), (P i , µ i )) > 0 (26)
Observe that, since τ ⊂ σ, (25) implies:
∀(P i , µ i ) ∈ τ, D ((P C (σ), 0), (P i , µ i )) = µ C (σ)
This and the definition (22) of bisector show that P_C(σ) must be on the bisector of τ:
P_C(σ) ∈ bis_τ   (27)
We get, for (P_i, µ_i) ∈ P:
D((P_C(σ), µ_C(σ)), (P_i, µ_i)) = (P_C(σ) − P_i)^2 − µ_C(σ) − µ_i
= (π_{bisτ}(P_i) − P_C(σ))^2 + d(P_i, bis_τ)^2 − µ_C(σ) − µ_i
= D_{bisτ}((P_C(σ), µ_C(σ)), (π_{bisτ}(P_i), µ_i − d(P_i, bis_τ)^2))
= D_{bisτ}((P_C(σ), µ_C(σ)), π_τ(P_i, µ_i))   (28)
In the last two lines, the weighted distance is denoted D bisτ instead of D in order to stress that, thanks to (27) , it occurs on weighted point of bis τ × R rather than R n × R. Therefore (25) and ( 26) are equivalent to:
∀(P i , µ i ) ∈ σ, D bisτ ((P C (σ), µ C (σ)), π τ (P i , µ i )) = 0 ∀(P i , µ i ) ∈ P \ σ, D bisτ ((P C (σ), µ C (σ)), π τ (P i , µ i )) > 0 which precisely means that π τ (σ) is a coface of the vertex π τ (τ ) = {(o τ , -µ C (τ ))}
in the regular triangulation of π τ (P).
An immediate consequence of Lemma 6 is:
Corollary 1 The projection π τ preserves the structure of the regular triangulation around τ , more precisely:
1. the simplex τ is in the regular triangulation of P if and only if the vertex
π τ (τ ) = {(o τ , -µ C (τ ))
} is a vertex of the regular triangulation of π τ (P), 2. if τ is in the regular triangulation of P, π τ induces a simplicial isomorphism between the link of τ in the regular triangulation of P and the link of vertex π τ (τ ) in the regular triangulation of π τ (P).
Polytope and shadow associated to a link in a regular triangulation
In this section, we study the link of a k-simplex τ in the regular triangulation of P that satisfies:
(P B , µ B )(τ ) = (P C , µ C )(τ ) (29)
Equivalently, we assume Θ(τ ) = τ and in this case both generalized smallest enclosing sphere and generalized circumsphere of τ coincide so that o τ is also the center of the generalized smallest ball containing τ .
We know, from Proposition 1 and Corollary 1(2 .), that the link of τ in the regular triangulation is isomorphic to the link of lift(π τ (τ )) on the boundary of the lower convex hull of lift(π τ (P)). We consider the lift with the origin at o τ , in other words, the image of a vertex (P, µ) ∈ P is:
Φ_τ(P, µ) = def. lift(π_τ(P, µ)) = (π_{bisτ}(P) − o_τ, (π_{bisτ}(P) − o_τ)^2 − µ + d(P, bis_τ)^2) = (π_{bisτ}(P) − o_τ, (P − o_τ)^2 − µ) ∈ R^{n−k} × R
The construction of Φ τ is illustrated on Figure 7 in dimension 2 and Figure 9, bottom, in dimension 3.
Observe that the image by Φ τ of all vertices of simplex τ coincide. Therefore the image of τ is a single vertex:
Φ τ (τ ) = 0, µ C (τ ) ⊂ R n-k × R
Call P τ the set of weighted points in (P i , µ i ) ∈ P \ τ such that τ ∪ (P i , µ i ) has the same bounding weight as τ . In the case of the Delaunay triangulation, P τ corresponds to the set of points in P (strictly) inside the smallest ball enclosing τ . Formally, using the assumption (29), we introduce the following subset of vertices in the link of τ :
P τ = def. (P i , µ i ) ∈ P \ τ, D ((P C (τ ), µ C (τ )), (P i , µ i )) < 0 (30)
Denote by K τ the (n -k -1)-dimensional simplicial complex made of all simplices over vertices in P τ with dimension up to (n -k -1). Observe that, for any weighted point (P, µ), one has:
D ((P C (τ ), µ C (τ )), (P, µ)) < 0 ⇐⇒ (π bisτ (P ) -o τ ) 2 -µ + d (P, bis τ ) 2 < µ C (τ ) (31)
Denote by Height(lift((P, µ)) the height of the lift of a point (P, µ), defined as the last coordinate of the lift, so that:
Height(Φ τ (P, µ)) = (P -o τ ) 2 -µ (32)
We have assumed the weights to be non-positive, µ ≤ 0, and since under our generic conditions we have π_{bisτ}(P) − o_τ ≠ 0, (32) implies that v ∈ P_τ ⇒ Height(Φ_τ(v, µ)) > 0. Using Observation 2, (31) can then be rephrased as:
Observation 4 A vertex belongs to P_τ if and only if the height of its image by Φ_τ is strictly less than µ_C(τ) > 0:
v ∈ P τ ⇐⇒ 0 < Height(Φ τ (v)) < µ C (τ )
This observation allows to define the following conical projection: Definition 8 (Shadow) Let v be a vertex in P τ . The shadow Sh τ (v) of v is a point in the (n -k)-dimensional Euclidean space bis τ defined as the intersection of the half-line starting at (0, µ C (τ )) and going through Φ τ (v) with the space bis τ .
Sh_τ(P, µ) = def. [ µ_C(τ) / (µ_C(τ) − Height(Φ_τ(P, µ))) ] (π_{bisτ}(P) − o_τ)
The shadow of a simplex σ ∈ K τ is a simplex in bis τ whose vertices are the shadows of vertices of σ.
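A hypothetical 2D instance of Definition 8 (zero weights): with τ = ab and a third point c strictly inside the diametral disk of ab, c belongs to P_τ and its shadow is a point of the line bis_τ.

```python
# Hypothetical computation of a shadow (Definition 8), zero weights, tau = ab in R^2.
import numpy as np

a, b, c = np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([1.2, 0.6])
o_tau = (a + b) / 2                                 # (1, 0)
mu_C = ((b - a) ** 2).sum() / 4                     # 1.0

height_c = ((c - o_tau) ** 2).sum()                 # Height(Phi_tau(c)) for zero weight
in_P_tau = 0 < height_c < mu_C                      # Observation 4
offset_c = (c - o_tau) @ np.array([0.0, 1.0])       # coordinate of pi_bis(c) - o_tau on bis_tau
shadow_c = mu_C / (mu_C - height_c) * offset_c      # Definition 8, as a coordinate on bis_tau
print(in_P_tau, height_c, shadow_c)                 # True 0.4 1.0
```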
The construction of Sh τ is illustrated on Figure 8 in dimension 2 and Figure 9, bottom, in dimension 3. Let Γ reg be the n-chain containing the n-simplices of the regular triangulation of P and |Γ reg | the corresponding simplicial complex. Denote by
X(τ ) ∈ C n-k-1 (K τ ) the (n -k -1)-chain in the link of τ made of simplices σ ∈ K τ such that τ ∪ σ ∈ |Γ reg |: X(τ ) = def. σ ∈ K τ , dim(σ) = n -k -1, τ ∪ σ ∈ Γ reg (33) = {σ ∈ Lk |Γreg| τ | µ B (τ ∪ σ) = µ B (τ )}
Sh τ (X(τ )) will play a key role for the proof of Theorem 1 in section 6.
Definition 11 (Shadow polytope) The intersection of the convex cone of the lower convex hull of Φ_τ(P) at Φ_τ(τ) = (0, µ_C(τ)) with bis_τ = R^{n−k} × {0} is called the shadow polytope of τ.
Figure 10 depicts the shadow polytope corresponding to the example of Figure 9 as the filled area and the bounded cells of its boundary by the two edges in blue.
For each upper half-space H_j contributing to the convex cone of the lower convex hull of Φ_τ(P) at (0, µ_C(τ)), the intersection H_j ∩ bis_τ is a (n − k)-dimensional half-space in bis_τ. The shadow polytope is precisely defined as the intersection of all such half-spaces H_j ∩ bis_τ. Since each H_j is an upper half-space and since by Observation 4 one has µ_C(τ) > 0, H_j does not contain (0, 0), which implies that H_j ∩ bis_τ does not contain the point o_τ in bis_τ. It follows that:
Observation 5 All facets of the shadow polytope are visible from 0.
Lemma 7 Let P be in general position with non-positive weights.
1. τ is in the regular triangulation of P if and only if Φ_τ(τ) = (0, µ_C(τ)) is an extremal point of the convex hull of Φ_τ(P).
2. When τ is in the regular triangulation of P, its link is isomorphic to the link of the vertex Φ_τ(τ) = (0, µ_C(τ)) in the simplicial complex corresponding to the boundary of the lower convex hull of Φ_τ(P).
3. When τ is in the regular triangulation of P, Sh_τ induces a bijection between the simplices in X(τ) and the set of bounded facets of the boundary of the shadow polytope.
Proof 1. follows from Corollary 1(1 .), together with Proposition 1. 2. follows from Corollary 1 item 2. together with Proposition 1.
For the claim 3., observe that 2. implies that the faces of the convex cone of the lower convex hull of Φ τ (P) at Φ τ (τ ) = (0, µ C (τ )) are in one-to-one correspondence with the link of τ in the regular triangulation. If σ is a simplex in the link of τ in the regular triangulation, denote by CC σ the convex cone in R n-k × R with apex Φ τ (τ ) = (0, µ C (τ )) and through CH (Φ τ (σ)):
CC σ = def. {(0, µ C (τ )) + λx | x ∈ CH (Φ τ (σ)) , λ ≥ 0}.
Since µ C (τ ) > 0, the intersection of CC σ with bis τ = R n-k × {0} is therefore (see Figure 9 for an illustration):
- empty if all vertices of Φ_τ(σ) have height larger than µ_C(τ),
- non-empty and non-bounded if some vertices of Φ_τ(σ), but not all, have height larger than µ_C(τ),
- non-empty and bounded, if and only if all vertices of Φ_τ(σ) have height smaller than µ_C(τ) > 0.
On Figure 9 bottom, the cones corresponding to the third category are depicted, in blue, together with their non-empty, bounded, intersection (edge) with bis τ . Therefore, since a simplex σ in the link of τ of a regular triangulation belongs to X(τ ) if and only all its vertices are in P τ , we get, with Observation 4, that σ ∈ X(τ ) if and only if the intersection of CC σ with bis τ = R n-k × {0} is non-empty and bounded, as claimed.
Visible convex hulls as lexicographic minimal chain
This section is self-contained and does not rely on previous constructions. Lemma 10 is a key ingredient for the proof of Theorem 1 but is also a result of independent interest: visible convex hulls can be characterized as lexicographic minimum chains under boundary constraints.
Remark 1
The result in Lemma 10 extends to lower convex hulls, since they are equivalent, up to a projective transformation. It extends also directly to regular triangulations, through the lift on paraboloid equivalence between regular triangulations and lower convex hulls. This is already a characterization of regular triangulations as lexicographic minimum chains under boundary constraints, but the total order on simplices given by this direct transposition is not invariant under isometry. Lemmas 8 and 9 will be instrumental in the proof of Lemma 10. Recall (Section 2.1) that the link of a k-simplex in a pure n-dimensional simplicial complex is a pure (n -k -1)-simplicial complex.
Definition 12 (Trace of a chain in a link) Given a k-simplex τ in a simplicial complex K and an n-chain Γ on K, for n > k, we call trace of the n-chain Γ on the link of τ the (n -k -1)-chain Tr τ (Γ ), defined in the link Lk K (τ ) of τ , as follows. For any σ ∈ Lk K (τ ):
Tr τ (Γ )(σ) = def. Γ (τ ∪ σ)
The next lemma is used twice in what follows.
Lemma 8 Given a k-simplex τ in a simplical complex K and a n-chain Γ on K for n > k, one has:
∂ Tr τ (Γ ) = Tr τ (∂Γ )
Proof We need to prove that for any simplex σ in the link of τ , one has:
∂ Tr τ (Γ )(σ) = Tr τ (∂Γ )(σ)
We have by definition that a (n − k − 2)-simplex σ is in Tr_τ(∂Γ) if and only if τ ∩ σ = ∅ and τ ∪ σ is in ∂Γ. In other words, τ ∪ σ has an odd number of n-cofaces in the chain Γ. This in turn means that σ has an odd number of (n − k − 1)-cofaces in the trace of Γ in the link of τ, i.e. σ ∈ ∂ Tr_τ(Γ).
For a polytope C, we denote by ∂C the boundary of the set C.
Lemma 9 (Proof in Appendix F) Let C ⊂ R^n be a polytope and O ∈ R^n \ C. Let X ⊂ ∂C be the union of a set of facets of C visible from O. If x ∈ X maximizes the distance to O, then x ∈ ∂X, where ∂X denotes the relative boundary of X in ∂C.
Note that, in the context of Lemma 9, there exists at least a point x ∈ X that maximizes the distance to O, since X, union of facets, is a compact set. We need to define another order on simplices together with the corresponding induced lexicographic order on chains, respectively denoted ≤ 0 and 0.
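Before turning to this new order, the commutation property of Lemma 8 can be verified on a toy complex. The snippet below is illustrative only; simplices are frozensets and chains are sets of simplices with Z_2 addition.

```python
# Toy verification of Lemma 8 on two triangles sharing an edge (hypothetical data).
from itertools import combinations

def boundary(chain):
    out = set()
    for s in chain:
        for f in combinations(sorted(s), len(s) - 1):
            out ^= {frozenset(f)}                  # addition over Z_2
    return out

def trace(tau, chain):
    return {s - tau for s in chain if tau < s}     # Definition 12

tau = frozenset({"a"})
gamma = {frozenset("abc"), frozenset("abd")}       # a 2-chain
print(boundary(trace(tau, gamma)) == trace(tau, boundary(gamma)))   # True
```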
For that, we define the distance from a flat (an affine space) F to 0 as
d 0 (F ) = inf p∈F d(p, 0). If ζ is a non-degenerate k-simplex for some k ≥ 0, define d 0 (ζ) = d 0 (F (ζ)) where F (ζ) is the k-dimensional flat support of ζ.
In order to compare two simplices for the order ≤_0, we first compare the distances to 0 of their vertices farthest from 0. If they differ, the simplex whose farthest vertex is farther is greater for ≤_0. If they share the same farthest vertex v, then the tie is broken by comparing, among the edges containing v in each simplex, the farthest one to 0, i.e. the one that maximizes d_0. Again, if the two simplices also share the same farthest edge, we consider the farthest 2-faces in each simplex containing this farthest edge, and so on. More formally:
We associate to a (n -1)-simplex σ in R n that does not contain 0 a dimension increasing sequence of faces
∅ = τ -1 (σ) ⊂ τ 0 (σ) ⊂ . . . ⊂ τ n-1 (σ) = σ with dim(τ i ) = i.
Under a simple generic condition, it is defined as follows.
τ -1 (σ) = ∅ and τ 0 (σ) is the vertex of σ farthest from 0.
For i ≥ 0, τ i (σ) is the coface of dimension i of τ i-1 (σ) whose supporting i-flat is farthest from 0:
τ i (σ) = def. arg max ζ⊃τ i-1 (σ) dim(ζ)=i d 0 (ζ) (34)
For i = 0, . . . , n -1 we set δ i (σ) = d 0 (τ i (σ)) and we define the comparison < 0 between two (n -1)-simplices σ 1 and σ 2 as a lexicographic order on the sequences (δ i (σ 1 )) i=0,...n-1 and (δ i (σ 2 )) i=0,...n-1 :
σ_1 <_0 σ_2 ⇐⇒ def. ∃k ≥ 0, δ_k(σ_1) < δ_k(σ_2) and ∀j, 0 ≤ j < k, δ_j(σ_1) = δ_j(σ_2)   (35)
which defines an order relation:
σ 1 ≤ 0 σ 2 ⇐⇒ def. σ 1 = σ 2 or σ 1 < 0 σ 2 (36)
Generic condition 2 Let K be a (n -1)-dimensional simplicial complex.
For any pair of simplices
σ 1 , σ 2 ∈ K: dim(σ 1 ) = dim(σ 2 ) = k and d 0 (σ 1 ) = d 0 (σ 2 ) ⇒ σ 1 = σ 2 .
Under Condition 2, ≤ 0 is a total order on simplices and, following Definition 3, the order ≤ 0 on simplices induces a lexicographic order 0 on k-chains of K.
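The order ≤_0 can be computed directly from (34)–(35). The sketch below is illustrative (arbitrary triangles in R^3, not taken from the paper): d_0 of a flat is obtained by least squares, and the δ_i sequences are then compared lexicographically.

```python
# Illustrative computation of the delta-sequence of (34) for (n-1)-simplices in R^3.
import numpy as np

def d0_flat(points):
    """Distance from the origin to the affine hull of the given points."""
    P = np.asarray(points, dtype=float)
    base, A = P[0], (P[1:] - P[0]).T
    if A.size == 0:
        return float(np.linalg.norm(base))
    coef, *_ = np.linalg.lstsq(A, -base, rcond=None)
    return float(np.linalg.norm(base + A @ coef))

def delta_sequence(simplex):
    V = [np.asarray(v, dtype=float) for v in simplex]
    order = [max(range(len(V)), key=lambda i: np.linalg.norm(V[i]))]   # farthest vertex
    deltas = [float(np.linalg.norm(V[order[0]]))]
    while len(order) < len(V):
        rest = [i for i in range(len(V)) if i not in order]
        nxt = max(rest, key=lambda i: d0_flat([V[j] for j in order] + [V[i]]))
        order.append(nxt)
        deltas.append(d0_flat([V[j] for j in order]))
    return deltas

s1 = [(3.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 1.0)]
s2 = [(3.0, 0.0, 0.0), (0.0, 2.5, 0.0), (1.0, 1.0, 1.0)]
print(delta_sequence(s1), delta_sequence(s2))
# s1 <_0 s2 exactly when, at the first index where the sequences differ, s1 has the smaller delta.
```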
Lemma 10 Let P be a set of points in R n such that 0 ∈ R n is not in the convex hull of P . Let K be the full (n -1)-dimensional simplicial complex over P , i.e. the simplicial complex made of all (n -1)-simplices whose vertices are points in P with all their faces. Assume that K satisfies the generic condition 2.
Let X be a (n -1)-chain in K whose (n -1)-simplices are on the boundary of the convex hull of P and are all visible from 0 ∈ R n . Then:
X = min_0 { Γ ∈ C_{n−1}(K), ∂Γ = ∂X }.   (37)
If n = 1, the boundary operator in (37) is meant as the boundary operator of reduced homology, i.e the linear operator ∂0 : C n-1 (K) → Z 2 that counts the parity.
For n = 1 (figure 11), we are given a set of real numbers for P and corresponding singletons for K, so that a 0-chain Γ can be seen as a subsets of P ⊂ R. Since 0 is not in the convex hull, numbers in P are either all positive, or either all negative. The minimal non-zero chain, for lexicographic order 0 , is the chain containing only the vertex x 1 ∈ P with minimal absolute value in P . Replacing the constraint from being non-zero by the stronger constraint for Γ to have odd cardinality, does not change this lexicographic minimum. Moreover, this parity constraint is a linear constraint (over the field Z 2 ). This is why, for n = 1, the boundary constraint ∂Γ = ∂X is the linear operator ∂0 : C n-1 (K) → Z 2 that counts the parity.
Fig. 11 Illustration of Lemma 10 for n = 1. In this case X is a 0-chain, and has therefore no boundary.
Fig. 12 Illustration of the recursion in the proof of Lemma 10 for n = 2. In this case X is a 1-chain, and its boundary ∂X is a zero chain made of two vertices. X is a 0-chain made of a single vertex, the red vertex nearest to O on the blue line representing Π.
Proof (Proof of Lemma 10) We first claim that the lemma holds for n = 1. In this case (figure 11) the fact that 0 is not in the convex hull of P means that the 1-dimensional points in P are either all positive or all negative. The single simplex in the convex hull boundary visible from 0 is the point in P closest to 0, i.e. the one with the smallest absolute value, which corresponds to the minimum chain with odd parity in the 0 order, which proves the claim.
We assume then the theorem to be true for the dimension n-1 and proceed by induction. This recursion is illustrated on figure 12 for n = 2 and figure 13 for n = 3.
Consider the minimum:
Γ min = min 0 Γ ∈ C n-1 (K), ∂Γ = ∂X (38)
We need to prove that Γ min = X. There exists a unique vertex v in the simplices of ∂X which is farthest from 0. Indeed, existence follows from compactness of ∂X, and unicity from generic condition 2, which in the particular case of vertices says that no pair of vertices lie at the same distance from 0, since in this case, d 0 is merely the distance from the vertex to 0.
Let v be this unique vertex in the simplices of ∂X which is farthest from 0. Since v is a vertex in at least one simplex in ∂X = ∂Γ min , it must be a vertex in some simplex in Γ min .
Thanks to Lemma 9, if a point x is a local maximum in X of the distance to 0, one has x ∈ ∂X. It follows that v is also the vertex in the simplices of X which is farthest from 0.
Fig. 13 Illustration of the recursion in the proof of Lemma 10 for n = 3. In this case X is a 2-chain, and its boundary ∂X is a 1-chain depicted as the red edges cycle. X is a 1-chain in Π while ∂X is a 0-chain made of the two vertices framed by little orange squares.
Since v is the vertex in X farthest from 0 and since by definition Γ min 0 X, we know that Γ min does not contain any vertex farther from the origin than v, therefore v is also the vertex in the simplices of Γ min farthest from 0.
Let us briefly give the principle of the induction. We show below that the trace of Γ min in the link of v is the solution of the same minimisation problem (37) with one dimension less, allowing to apply inductively Lemma 10 to show that X and Γ min coincide in the star of v. This allows to remove from X the star of v ((44) below) and iteratively apply the recursion on the remaining farthest vertex of X until X is empty. More formally:
Since ∂Γ min = ∂X, Lemma 8 implies that:
∂ Tr v (Γ min ) = Tr v (∂Γ min ) = Tr v (∂X) (39)
In order to define a lexicographic order on chains on the link of v in K, we consider the hyperplane Π containing 0 and orthogonal to the line 0v. We associate to any (n-2)-simplex η ∈ Lk K (v) the (n-2) simplex π vΠ (η), conical projection of η on Π with center v. In other words, if u is a vertex of η:
{π_vΠ(u)} = def. Π ∩ uv
where uv denotes the line going through u and v. The map π_vΠ is a conical projection on vertices, but it extends to a bijection on simplices and chains that do not contain v, and this bijection trivially commutes with the boundary operator. By definition of the lexicographic order 0, the comparison of two chains whose farthest vertex is v starts by comparing their restrictions to the star of v. Therefore, since v is the farthest vertex in Γ_min, the restriction of Γ_min to the star of v must be minimum under the constraint ∂Γ = ∂X. The constraint ∂Γ = ∂X for the restriction of Γ_min to the star of v is equivalent to the constraint given by equation (39) or equivalently by:
∂π vΠ (Tr v (Γ min )) = π vΠ (Tr v (∂X)) (40)
and the minimization on the restriction of the (n -1)-chain Γ min to the star of v can equivalently be expressed as the minimization of the (n-2)-chain γ min = π vΠ (Tr v (Γ min )) under the constraint ∂γ min = π vΠ (Tr v (∂X)), we have then:
γ min = π vΠ (Tr v (Γ min )) = π vΠ Tr v min 0 {Γ ∈ C n-1 (K), ∂Γ = ∂X} = min 0 γ ∈ C n-2 π vΠ (Lk K (v)) , ∂γ = π vΠ (Tr v (∂X)) (41) = min 0 γ ∈ C n-2 π vΠ (Lk K (v)) , ∂γ = ∂π vΠ (Tr v (X)) (42)
In the third equality (41) we have used the fact that the orders on (n -1)simplices in the star of v in K and the order on corresponding (n-2)-simplices in the image by π vΠ of the link of v are compatible. The fourth equality (42) uses (40). Indeed, if F is a k-flat in R n going through v, we have (see Figure 14):
d_0(π_vΠ(F)) = d_0(F ∩ Π) = d_0(F) ‖v − 0‖ / √((v − 0)^2 − d_0(F)^2)   (43)
with the convention d_0(π_vΠ(F)) = +∞ in the non-generic case where F ∩ Π = ∅ (the denominator vanishes in this case, while, since v ∈ F, one has F ∩ Π ≠ ∅ ⇒ d_0(F) < ‖v − 0‖).
As seen on (43) d 0 (F ) → d 0 (π vΠ (F )) is an increasing function and the orders are therefore consistent along the induction.
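A quick numerical check of (43), with hypothetical data: in R^2 take v = (0, 2), so Π is the x-axis, and let F be the line through v and (1, 1).

```python
# Check of (43) on a small 2D example (hypothetical data).
import numpy as np

v = np.array([0.0, 2.0])
d0_F = 2.0 / np.sqrt(2.0)        # distance from 0 to the line x + y = 2 (which contains v)
d0_intersection = 2.0            # the line meets the x-axis Pi at (2, 0)
rhs = d0_F * np.linalg.norm(v) / np.sqrt(np.linalg.norm(v) ** 2 - d0_F ** 2)
print(d0_intersection, rhs)      # both equal 2.0
```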
We claim that the minimization problem in (42) satisfies the condition of the theorem for n = n -1 which is assumed true by induction.
The recursion is as follows: Hyperplane Π corresponds to R n with n = n -1 and:
n ← n -1 P ← π vΠ P \ {v} K ← π vΠ Lk K (v) X ← π vΠ Tr v (X)
Observe first that, in R^k, a (k−1)-simplex with vertices in a point set P is in convex position (i.e. belongs to the convex hull boundary) and is visible from 0 if and only if its supporting hyperplane separates P from 0.
Since the (n−1)-simplices in X are in convex position and visible from 0, the hyperplanes supporting these simplices separate all points of P from 0. Consider in particular such a simplex σ in X which is moreover in the star of v, and its associated supporting hyperplane Π_σ ∋ v.
Since Π_σ separates P from 0, Π_σ ∩ Π = π_vΠ(Π_σ) separates P = π_vΠ(P \ {v}) from 0 in Π. It follows that the (n − 1)-simplices in X = π_vΠ(Tr_v(X)) are in convex position and are visible from 0.
Therefore one can apply our lemma recursively, which gives us, using (42):
γ min = π vΠ (Tr v (Γ min )) = π vΠ (Tr v (X))
It follows that the faces in the star of v corresponding to Tr v (Γ min ) belong to X. Call Y the (n -1)-chain made of these simplices in the star of v. We have both Y ⊂ X and Y ⊂ Γ min .
Since, seeing chains as set of simplices, one has Y ⊂ X, the chain X \Y can equivalently be noted as X -Y when seeing them as vectors over Z 2 . We favor this linear formulation in the equations below. Since v is the vertex farthest from 0 in both X and in Γ min one has by definition of the lexicographic order:
Γ min = min lex Γ ∈ C n-1 (K), ∂Γ = ∂X = Y + min lex Γ ∈ C n-1 (K), ∂Γ = ∂(X -Y ) (44)
So, by considering the new problem X ← (X -Y ) and iterating as long as X is not empty, we get our final result Γ min = X.
Proof of Theorem 1
Recall the definitions of P τ and K τ introduced in section 4.3, page 24, for a k-simplex τ ∈ K P such that Θ(τ ) = τ (in other words such that µ C (τ ) = µ B (τ )):
-P τ is the set of weighted points (P i , µ i ) ∈ P \ τ such that τ ∪ (P i , µ i ) has the same bounding weight µ B (τ ) as τ , -K τ , a subcomplex of the link of τ , is made of all simplices over vertices in P τ with dimension up to (n -k -1).
The next two lemmas establish the connection between the orders lex and 0 and introduce the notation ↓ ρ Γ for a n-chain Γ (46), used in the proof of Theorem 1.
Lemma 11 (Proof in Appendix
G) For σ 1 , σ 2 ∈ K τ , one has: µ C (τ ∪ σ 1 ) ≥ µ C (τ ∪ σ 2 ) ⇐⇒ d 0 (Sh τ (σ 1 )) ≤ d 0 (Sh τ (σ 2 )) (45)
For a n-chain Γ denote by ↓ ρ Γ the chain obtained by removing from Γ all simplices with bounding weight strictly greater than ρ.
↓_ρ Γ = def. {σ ∈ Γ, µ_B(σ) ≤ ρ}   (46)
Lemma 12 (Proof in Appendix H) For two n-chains Γ_1, Γ_2 ∈ C_n(K_P), if Γ_1 ≠ Γ_2 one has:
↓_ρ Γ_1 lex ↓_ρ Γ_2 ⇒ Sh_τρ(Tr_τρ(↓_ρ Γ_1)) 0 Sh_τρ(Tr_τρ(↓_ρ Γ_2))
where τ ρ is a simplex such that θ(τ ) = τ and µ B (τ ) = ρ.
Observation 6 (Total orders equivalence) Observes that, since both lex and 0 are total orders under our generic condition, the implication of Lemma 12 gives the equivalence:
↓ ρ Γ 1 lex ↓ ρ Γ 2 ⇐⇒ Sh τρ Tr τρ (↓ ρ Γ 1 ) 0 Sh τρ Tr τρ (↓ ρ Γ 2 )
where τ ρ is a simplex such that θ(τ ) = τ and µ B (τ ) = ρ.
We now have all the tools needed to prove our main theorem.
Proof (Proof of Theorem 1)
We prove Theorem 1 in the case of non-positive weights which then extends to any weights thanks to Observations 3 and 8. As in Proposition 2, denote by Γ reg the chain that defines the regular triangulation of CH (P). As in Theorem 1 denote by β P ∈ C n-1 (K P ) the (n -1)-chain made of simplices belonging to the boundary of CH (P).
Observation 7
The chain Sh τ (Tr τ (↓ Γ reg )) is the set of (n -k -1)-faces of the convex hull of Sh τ (P τ ) visible from the origin 0.
In the remainder of this proof we use the lexicographic order 0 on shadows of (n -k -1)-chains in K τ , defined at the beginning of Section 5. This order is equivalent to the order lex on corresponding n-chains restricted to the set of simplices with bounding weight µ = B (Lemma 12 and Observation 6). This correspondence allows to conclude the proof by applying Lemma 10 that says the chain defined by visible faces of a polytope minimizes the lexicographic order 0 among chains with same boundary.
More formally, thanks to Lemma 12 we have:
↓ Γ 1 lex ↓ Γ 2 ⇒ Sh τ (Tr τ (↓ Γ 1 )) 0 Sh τ (Tr τ (↓ Γ 2 ))
It follows that Sh_τ(Tr_τ(↓ Γ_min)) is the minimum for order 0, among all chains in the complex Sh_τ(K_τ) that satisfy the constraint (49), or, equivalently:
Sh τ (Tr τ (↓ Γ min )) = min 0 Γ ∈ C n-k-1 (Sh τ (K τ )) | ∂Γ = ∂Sh τ (Tr τ (↓ Γ reg ))
Observation 7 allows us to apply Lemma 10 with:
n ← n -k X ← Sh τ (Tr τ (↓ Γ reg )) = Sh τ (X(τ )) P ← Sh τ (P τ ) K ← Sh τ (K τ ) implying: Sh τ (Tr τ (↓ Γ min )) = Sh τ (Tr τ (↓ Γ reg ))
In other words, Γ min and Γ reg coincide on simplices with bounding weight µ = B , a contradiction with the definition (47) of µ = B .
Appendices
A Overview of the proof
This section, designed as a complementary roadmap, summarizes the arguments in the proof's main steps, and illustrates them on Figures 15 and 16. It is completed by a glossary, Section A.4, that recalls most of the notions used in this overview. The theorem is proven for points with non-positive weights which, thanks to Observation 3, is enough to be true for any weights.
The proof splits into two main steps, (A) and (B), summarized on the flowcharts of figures 15 and 16.
For p large enough, µ B (σ) is also the dominant quantity in • (p) , since µ B (σ) = lim p→∞ w p (σ) and, for p large enough, the weight w p (σ) p of any single simplex σ in the • (p) norm is larger than the sum of all weights of all simplices with smaller bounding weights (Lemma 5 in Section 4.1).
As detailed in the proof of Theorem 1 in Section 6, this fact allows us to focus on the link of a k-simplex τ µ = B , inclusion minimal among simplices whose bounding weight µ = B is the largest bounding weight for which some simplex in Γ reg and Γ min differ.
At this point, it remains to compare the traces of Γ reg and Γ min in the link of τ µ = B , and prove that they are identical, to get the contradiction. This comparison is rather trivial for n < 3, but requires the additional constructions of step (B) for n ≥ 3.
A.2 Case n < 3:
For n = 1, for p large enough, the order lex and the order induced by comparing • (p) coincide, so that the result follows immediately from Proposition 2.
For n = 2, the proof is detailed in [Vuillamy, Simplification planimétrique et chaînes lexicographiques pour la reconstruction 3D de scènes urbaines, Section 4.2.3]. In this case, the two orders differ even for large p. However it can be observed that chains minimum for either ‖·‖_(p), for p large enough, or for lex (under boundary constraint) contain generically only one simplex σ for each bounding weight µ_B(σ). It results that, in both triangulations, the set of simplices with a given bounding weight is made of a single simplex, an acute triangle or the longest edge of an obtuse triangle, and therefore (with the generic condition) they coincide. This with the definition of τ_µ=B in step (A) gives the contradiction.
A.3 Step (B):
The proof for n ≥ 3 needs to focus on sets of n-simplices with same bounding weight µ B . Formally, it includes the proof for n = 1 and n = 2 as limit trivial cases.
Under the generic condition, n-simplices sharing the same bounding weight µ_B are all cofaces of some simplex τ such that µ_B(τ) = µ_C(τ) = µ_B.
For example, in the situation of Figure 3, for a Delaunay triangulation in R 3 , the set of tetrahedra with the same bounding radius R B , under generic condition, is not of the same nature in examples left, middle and right. On the left, since R B (σ) = R C (σ), there is, under generic condition, a single such tetrahedron abcd.
In the middle, τ would be the triangle abc and, denoting by B B (τ ) the unique ball with radius R B enclosing τ , there is exactly one tetrahedron abcq with bounding radius R B , in the full simplicial complex K P , for each point q ∈ B B (τ ) ∩ (P \ {a, b, c}). These vertices are in the link of τ .
On the right, τ would be the edge ab and there is a one-to-one correspondance between tetrahedra abpq with bounding radius R B , in the full simplicial simplices in the link of τ in |Γ reg | whose union with τ does not increases the bounding weight of τ :
{σ ∈ Lk |Γreg| τ | µ B (τ ∪ σ) = µ B (τ )},
and the simplices in the visible convex hull, in R n-k , of the images of vertices by Sh τ . This visible convex hull is a simplicial ball, and therefore,
{σ ∈ Lk K |Γreg | τ | µ B (τ ∪ σ) = µ = B }
is a simplicial complex homeomorphic to a (n -k -1)-ball. This last fact may be of independent interest. For example, on Figure 9, there are 6 tetrahedra in the star of the edge τ in |Γ reg | but only 2 of them share the same bounding radius as τ , and are therefore in {σ ∈ |Γ reg |, µ B (σ) = µ B (τ )} as seen on Figure 9 top where only two tetrahedra, in blue, are included in the bounding sphere. In this case, the corresponding simplicial complex Sh τ Lk {σ∈|Γreg| |µ B (σ)=µ B (τ )} τ , topologically a 1-dimensional ball, is made of the two edges sharing a vertex, in blue, inside the plane bis τ seen on Figure 9 bottom. By induction on the dimension of convex cones and convex polytopes, one shows that this bounded subcomplex of the boundary of a convex polytope visible from the origin can be expressed as the minimum under boundary constraint (Lemma 10 in Section 5) for another lexicographic order denoted 0 .
The lexicographic order 0 on the images by Sh τ is shown (Lemmas 11 and 12 in Section 6) to be equivalent to the order induced by lex on the trace of chains in Lk {σ∈K P |µ B (σ)=µ B (τ )} τ .
This equivalence between orders allows then to conclude (proof of Theorem 1 in Section 6). Indeed, we have shown that the traces of Γ reg and Γ min in Lk {σ∈K P |µ B (σ)=µ B (τ )} τ minimizes the same order under the same boundary constraints. It follows that they coincide, establishing the contradiction with the assumption at the beginning of step (A).
A.4 Glossary
Bisector of a simplex: Given a k-simplex in R^n, its bisector bis_τ, defined by (22), is the (n − k)-dimensional affine space, set of points with same weighted distance to each vertex of τ. In the case of zero weights (that is, in the case of the Delaunay complex), it is the set of points equidistant from each vertex of τ. When τ is a Delaunay (respectively regular triangulation) simplex, it is also the affine space supporting the Voronoi (respectively power diagram) cell dual to τ. Figure 5 shows the line bisector bis_τ of an edge τ = ab in R^2.
Bounding weight µ B (σ) of a simplex σ: The bounding weight µ B (σ) of a simplex σ is a generalization, for weighted points, of the square of the radius of the smallest ball enclosing the simplex R B (σ) 2 . It is the dominant term in the definition of the order on simplices which induces the lexicographic order lex on chains. It is formally defined in Definition 7 and is related to the weights w p (σ) p of simplex σ in the norm • (p) by Lemma 4:
µ B (σ) = lim p→∞ w p (σ)
Full simplicial complex K P over a set of points: Given a set of points P ⊂ R n , the full simplicial complex K P is the simplicial complex made of all possible simplex of dimension less or equal to n with vertices in P. Unlike Delaunay or regular triangulation complexes, it is not embedded in general, as simplices convex hulls overlap as soon as P > n + 1.
Lexicographic order induced by a total order: It is defined formally in Definition 3. A total order ≤ on n-simplices induces a lexicographic order on n-chains. Seeing the two n-chains Γ 1 = Γ 2 with coefficients in Z 2 as sets of n-simplices, we write
Γ 1 Γ 2 if the largest simplex which is in the symmetric difference (Γ 1 ∪ Γ 2 ) \ (Γ 1 ∩ Γ 2 ), belongs to Γ 2 .
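As a toy illustration (hypothetical names and weights), the induced lexicographic comparison of two chains can be implemented directly from this definition.

```python
# Toy comparison of two chains for the induced lexicographic order.
def lex_leq(c1, c2, key):
    """True when c1 = c2 or the largest simplex in the symmetric difference lies in c2."""
    diff = c1 ^ c2
    return (not diff) or (max(diff, key=key) in c2)

weight = {"abc": 3.0, "abd": 2.5, "acd": 1.7, "bcd": 0.9}   # a total order on simplices
print(lex_leq({"abd", "bcd"}, {"abc", "acd"}, key=weight.get))   # True: 'abc' is in the second chain
```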
Lexicographic order lex : The lexicographic order lex of Theorem 1 is induced by the total order ≤ on simplices. This total order ≤ is given in Section 3.5 for the general case and Section 3.2 gives a more explicit expression for the particular cases of Delaunay triangulations in R 2 and R 3 .
Lexicographic order 0 : The lexicographic order 0 is induced by the total order ≤ 0 defined by ( 35) and (36) in Section 5. According to Lemma 10, visible convex hulls are minimum for order 0 under boundary constraint. Norm • (p) : It is defined in Section 4.1 by ( 13), as:
Γ (p) = def. σ |Γ (σ)|w p (σ) p .
It is a kind of L^1 norm on the vector Γ whose coefficient w_p(σ)^p, for simplex σ, is defined by (12) as the p-th power of the L^p norm of the function f_σ : CH(σ) → R defined in (10).
Restriction ↓ ρ Γ of a chain Γ : For a chain Γ , we denote by ↓ ρ Γ the chain obtained by removing from Γ all simplices with bounding weight strictly greater than ρ:
↓ ρ Γ = def. {σ ∈ Γ, µ B (σ) ≤ ρ}.
Shadow map Sh τ : (Definition 8). Given a k-simplex τ ∈ K P , the shadow map Sh τ is a map that sends the vertices of the link of τ that belong to simplices with same bounding radius as τ :
{{v} ∈ Lk K P τ | µ B (τ ∪ {v}) = µ B (τ )} toward R n-k .
It is a conical projection applied to the images by Φ τ , as illustrated on Figure 9, bottom. It follows a sequence of geometric constructions, notably the bisector bis τ , and the maps π τ and Φ τ , detailed in Sections 4.2 and 4.3.
The simplicial map induced by the shadow map Sh τ realizes a simplicial isomorphism between the full simplicial complex
{σ ∈ Lk K P τ | µ B (τ ∪ σ) = µ B (τ )}
and the corresponding full complex over the vertices images.
Moreover, its restriction to Lk |Γreg| τ , the support of the link of τ in the regular triangulation |Γ reg |, is a simplicial isomorphism between:
{σ ∈ Lk |Γreg| τ | µ B (τ ∪ σ) = µ B (τ )}
and the visible convex hull of the vertices images (Lemma 7 in Section 4.3).
As a consequence, {σ ∈
Lk |Γreg| τ | µ B (τ ∪ σ) = µ B (τ )} is a simplicial (n -k -1)-ball (i.e. its geometric realization is a topological (n -k -1)-ball).
Simplex: (Section 2.1 or [15, First chapter]). A k-simplex is a set of (k + 1) vertices. In particular, a vertex a is a 0-simplex, and edge {a, b}, sometimes denoted ab is a 1-simplex and a triangle {a, b, c}, sometimes denoted abc is a 2-simplex. For example, if τ 1 = ab and τ 2 = cd are edges, then τ 1 ∪ τ 2 is the 3-simplex, or tetrahedron, abcd. To each k-simplex is associated a realization as a topological space, namely the convex hull of k + 1 affinely independent points.
We denote by CH (σ) the convex hull of the vertices of σ.
Simplicial chain: (Section 2.2 or [15, First chapter]). A k-simplicial chain Γ over a simplicial complex K is a vector, over the field Z 2 , defined by associating a coefficient denoted
Γ (σ) ∈ {0, 1} = Z 2 to each k-simplex in K.
It is sometimes convenient to see a k-chain also as the set of k-simplices Γ = {σ | Γ (σ) = 1}. With this convention we allow ourselves to write for example
Γ 1 + Γ 2 = Γ 1 -Γ 2 = (Γ 1 ∪ Γ 2 ) \ (Γ 1 ∩ Γ 2 ).
Simplicial complexes: (Section 2.1 or [15, First chapter]). A set K of simplices is called a simplicial complex if any σ ∈ K has all its faces in K:
σ ∈ K and ∅ = τ ⊆ σ ⇒ τ ∈ K
Given a simplicial complex K, we denote by K [k] the subset of K containing all k-simplices in K.
Support |Γ| of a simplicial chain Γ: The support |Γ| of a simplicial n-chain Γ is the n-dimensional simplicial complex made of simplices in Γ with all their faces.
Trace of a chain in the link of a simplex: (Definition 12). Given a k-simplex τ ∈ K and a n-chain Γ on K, the trace Tr_τ(Γ) of Γ is a (n − k − 1)-chain defined in the link Lk_K(τ) of τ. For any (n − k − 1)-simplex σ ∈ Lk_K(τ), one has:
Tr_τ(Γ)(σ) = def. Γ(τ ∪ σ)
This linear operator commutes with the boundary operator as stated by Lemma 8:
∂ Tr_τ(Γ) = Tr_τ(∂Γ)
Visible convex hull: Definition 10 formally defines a polytope facet visible from the point 0. Given a finite set of points P ∈ R^m in general position (no m + 1 points lie on the same affine (m − 1)-space), whose convex hull CH(P) does not contain the point 0 ∈ R^m, the visible convex hull is the (m − 1)-simplicial complex embedded in R^m corresponding to the subset of the boundary ∂CH(P) of CH(P) "visible" from the point 0. In other words, its image by the embedding in R^m is the set of points x ∈ ∂CH(P) such that the segment [0, x) does not intersect CH(P). It is a simplicial ball, i.e. a simplicial complex homeomorphic to a closed (d − 1)-ball.
Weighted distance: The weighted distance between 2 weighted points, given in Definition 1, generalizes the squared Euclidean distance between ordinary points:
D((P_1, µ_1), (P_2, µ_2)) = def. (P_1 − P_2)^2 − µ_1 − µ_2
B Invariance by global weight translation
Let ψ_λ : R^n × R → R^n × R be the transformation that shifts the weight by λ:
ψ_λ(P, µ) = (P, µ + λ)
Let σ ∈ K_P be a simplex; from Definitions 7 and 1 we have:
P_C(ψ_λ(σ)) = P_C(σ) and µ_C(ψ_λ(σ)) = µ_C(σ) − λ
P_B(ψ_λ(σ)) = P_B(σ) and µ_B(ψ_λ(σ)) = µ_B(σ) − λ
It follows that a global shift of all weights by a constant value λ results in an opposite shift on the weights of generalized circumcenters. It therefore preserves the relative order between simplices weights µ_C and µ_B:
Observation 8
µ_C(σ_1) ≤ µ_C(σ_2) ⇐⇒ µ_C(ψ_λ(σ_1)) ≤ µ_C(ψ_λ(σ_2))
and the same relation holds for µ_B. Since the order ≤ between simplices (9) defined in section 3.5 relies entirely on comparisons on µ_C and µ_B, this total order is preserved by a global weight translation.
intersection point we can keep track of the change in the number of covering n-simplices of y(t). We know that this number is zero at x since x / ∈ CH (P). Since CH (P) ∩ [x x] is convex there is a single intersection point y(t b ) between [x x] and the boundary of CH (P).
This point y(t b ) ∈ (x , x) hits the convex hull CH (τ b ) of a (n -1)-simplex τ b ∈ β P , face of the convex hull boundary. Since ∂Γ = β P , we know that τ b is shared by an odd number n b of n-simplices in Γ . By definition of the convex hull, and since P is in general position, for each n-simplex σ coface of τ b , CH (σ) is on the inner side of the convex hull supporting halfspace. It follows that, starting from the outer point x , the number of covering simplices becomes the odd number n b just after the first crossing point y(t b ).
Then, when crossing the convex hull CH (τ i ) of any other (n -1)-simplex τ i / ∈ β P , at some point y(t i ), it follows from the condition ∂Γ = β P that the number n i of n-simplices in Γ coface of τ i has to be even. When crossing CH (τ i ), along [x x], point y(t i ) exits k -, and enters k + n-simplices in Γ , with k -+k + = n i . The current number of covering n-simplices value is incremented by k + -k -. Since n i is even and:
k + -k -= k + + k --2k -= n i -2k - k + -k -is
therefore even and the number of covering simplices remains odd along the path and, in particular, is odd at point x.
E Proof of Lemma 4 on page 18
Proof Since the expression (10) does not depend on the origin O, let us choose this origin to be O = P_B(σ). With barycentric coordinates based on the vertices P_i, i = 0, ..., k of σ, i.e. λ_i ≥ 0, Σ_i λ_i = 1 such that x = Σ_i λ_i P_i, the expression of f_σ is:
f_σ(x) = Σ_i λ_i ((P_i − P_B(σ))^2 − µ_i) − (x − P_B(σ))^2
One has (P_i − P_B(σ))^2 − µ_i − µ_B(σ) = D((P_B(σ), µ_B(σ)), (P_i, µ_i)) ≤ 0, so that (P_i − P_B(σ))^2 − µ_i ≤ µ_B(σ), and it follows that:
∀x, f_σ(x) ≤ µ_B(σ) − (x − P_B(σ))^2   (53)
We have from Lemma 2 that P_B(σ) ∈ |Θ(σ)| ⊂ CH(σ) so that, in the expression of P_B(σ) as a barycenter of vertices of σ, only coefficients λ_i corresponding to vertices of Θ(σ) ⊂ σ are non-zero:
P_B(σ) = Σ_{(P_i,µ_i)∈Θ(σ)} λ_i P_i
One has by definition of (P_C, µ_C):
(P_i, µ_i) ∈ Θ(σ) ⇒ (P_i − P_C(Θ(σ)))^2 − µ_i = µ_C(Θ(σ))
and since we know from Lemma 1 that:
(P_B, µ_B)(σ) = (P_B, µ_B)(Θ(σ)) = (P_C, µ_C)(Θ(σ))
one gets:
(P_i, µ_i) ∈ Θ(σ) ⇒ (P_i − P_B(σ))^2 − µ_i = µ_B(σ)
and:
f_σ(P_B(σ)) = Σ_{(P_i,µ_i)∈Θ(σ)} λ_i ((P_i − P_B(σ))^2 − µ_i) − 0^2 = µ_B(σ)
This with (53) ends the proof.
F Proof of Lemma 9 on page 28
Proof Assume for a contradiction that x is in the relative interior of X, that is there is some ρ > 0 such that B(x, ρ) ∩ X = B(x, ρ) ∩ ∂C. Then all facets containing x are visible from O. If x is not a vertex of C, it belongs then to the relative interior of a convex face f in ∂C with dim f ≥ 1. Then we have a contradiction since the function y → d(O, y) is convex on f and cannot have an interior local maximum at x. We assume now that x is a vertex of C.
Following for example [START_REF] Federer | Curvature measures[END_REF][START_REF] Rockafellar | Convex analysis[END_REF], denote by Tan x C and Nor x C respectively the tangent and normal cone to C at x. In case of a closed polytope they can be expressed as: u, ∀v ∈ Tan x C, u, v ≤ 0 (54)
Since C is a convex polytope, Tan x C is a convex closed cone and one has [START_REF] Rockafellar | Convex analysis[END_REF]:
Tan x C = (Nor x C) ⊥ = def.
v, ∀u ∈ Nor x C, u, v ≤ 0 (55)
Each facet F i of C containing x is supported by a half-space H i = {y, yx, n i ≤ 0} with outer normal n i , and one has:
Nor x C = i λ i n i ,
0 < x -O, x -O = x -O, i λ i t i = i λ i x -O, t i
Since all the λ i are not negative, there must be at least one i for which:
x -O, t i > 0
This precisely means that y → d(O, y) is increasing in the direction t_i in a neighborhood of x in ∂C, and, with the assumption that x is in the relative interior of X, we have that y → d(O, y) is increasing in the direction t_i in a neighborhood of x in X. This gives a contradiction since x is assumed to be a local maximum of y → d(O, y) in X.
G Proof of Lemma 11 on page 34
Proof Using Definition 7 page 13, one has: {σ ∈ Γ, µ B (σ) = ρ}
We claim that:
(↓ ρ Γ 1 lex ↓ ρ Γ 2 ) ⇒ → ρ Γ 1 lex → ρ Γ 2 (64)
Indeed, by definition of the lexicographic order, if this did not hold, it would imply → It remains to show that the order lex restricted to simplices τ ∪ σ with µ B (τ ∪ σ) = ρ corresponds to the order 0 on the shadow of σ.
By definition of lex , since in (9) one has always µ 0 (τ ∪σ 1 ) = µ 0 (τ ∪σ 2 ) = ρ, it goes like this:
σ 1 < σ 2 ⇐⇒ def. ∃k ≥ 1, µ k (τ ∪ σ 1 ) > µ k (τ ∪ σ 2 )
and ∀j, 0 ≤ j < k, µ j (τ ∪ σ 1 ) = µ j (τ ∪ σ 2 ) (65)
Observe that this expression is similar to (35). For a 0-simplex {v} ∈ K P , the circumweight µ C (τ ∪ {v}) is, according to Lemma 11, a decreasing function of the distance d 0 (Sh τ (η)) between its shadow and the origin. It follows that for a (n -k -1)-simplex σ ∈ K P , the vertex v for which the circumweight µ C (τ ∪ {v}) is minimal has its shadow Sh τ (v) maximizing the distance to the origin. This minimal circumweight is µ 1 (τ ∪ {v}) while this maximal distance is δ 0 (σ).
More generally, looking at (34) and ( 8), Lemma 11 allows to check that the simplex Θ k (σ) of (8) in the star of τ in K P corresponds to the simplex τ k-1 (σ) in (34) in the link of τ :
Θ k (σ) = τ ∪ τ k-1 (σ)
So that for σ 1 , σ 2 ∈ K P referring to (35) and (65):
µ k (τ ∪ σ 1 ) ≤ µ k (τ ∪ σ 2 ) ⇐⇒ δ k-1 (σ 1 ) ≥ δ k-1 (σ 2 )
It follows that, for Γ 1 , Γ 2 ∈ C n (K P ) and τ = τ ρ :
→ ρ Γ 1 lex → ρ Γ 2 ⇐⇒ Sh τ Tr τρ (↓ ρ Γ 1 ) 0 Sh τρ Tr τρ (↓ ρ Γ 2 )
which, with claim (64), ends the proof.
I Generic conditions
The generic condition required by the main theorem is Condition 1 of Section 3.4, page 14. Because it applies to weighted points, it relies on the notion of circumweight of a simplex. In case of zero weight, it boils down to requiring that simplices are not degenerate, which means that the vertices are affinely independent, and that two distinct simplices have distinct circumradii. Each forbidden event, either for a simplex to be degenerate, or for a given pair of distinct simplices to have the same circumradius, corresponds to the zero set of a non-zero polynomial on R^N, which has therefore measure 0, and whose complement is open and dense. It follows that Condition 1, conjunction of finitely many generic conditions, is generic. Condition 2, in section 5, corresponds to Condition 1 after some geometric transformation and is generic for the same reason.
Looking in detail to Condition 1, we see that it is stronger than the usual generic condition required for Delaunay or regular triangulations. For example, for a Delaunay triangulation in R 2 to be a triangulation of the convex hull of the points, it is usually required that triples are non-degenerate (no triple of points on the same line) and that no set of four points lie on a same circle. This last condition corresponds to our condition that two distinct triangles cannot have the same circumradius, in the particular case of two triangles sharing 2 vertices.
In particular, our stronger condition, for two triangles that may not share any vertex to have same circumradius, is not really necessary for Theorem 1 to hold, at the price of a kind of theoretical symbolic perturbation for the definition of the simplices total order. Since, in practice, non-generic inputs can be managed by symbolic perturbation, relaxing our generic condition would obscure our result without much advantages.
Fig. 3 Illustration of the definition of Θ(σ) for a tetrahedron σ = abcd in the case of zero weights.
Fig. 4 Illustration in dimension 2 and zero weights of the constructions of Sections 4.2 and 4.3. The two triangles form a Delaunay triangulation of the four points a, b, c, d in the plane because the two red circumcircles are empty. Since both simplices abc and ab share the same blue circle as smallest enclosing disk, one has Θ(abc) = ab and µ_B(abc) = µ_B(ab).
Figures 4, 5 and 6 illustrate π_τ for ambient dimension 2 and dim(τ) = 1.
Fig. 7 Illustration in dimension 2 and zero weights of the constructions of Sections 4.2 and 4.3. We represent the image by Φ_τ of τ = ab, c and d.
Figure 9 illustrates π_τ for ambient dimension 3 and dim(τ) = 1.
Fig. 8 Illustration in dimension 2 and zero weights of the constructions of Sections 4.2 and 4.3. The red line between Φ_τ(τ) and Φ_τ(d) (respectively Φ_τ(c)) corresponds to the red circumcircle of triangle abd (respectively abc) on Figure 4: a point is inside a circle if and only if its image by Φ is below the corresponding line. The horizontal blue line corresponds to the blue circle on Figure 4. The fact that τ belongs to the Delaunay triangulation of a, b, c, d is equivalent to the fact that Φ_τ(τ) belongs to the lower convex hull of Φ_τ({a, b, c, d}). The fact that µ_B(abc) = µ_B(ab) is equivalent to the fact that Φ_τ(c) is below Φ_τ(τ). The half-line starting at Φ_τ(τ) through Φ_τ(c) cuts the horizontal axis bis_τ at the shadow Sh_τ(c, 0) of c.
Fig. 9 Illustration in dimension 3 and zero weights for the definition of bis_τ, π_τ (top left and right) and Φ_τ (bottom).
Fig. 10 Illustration of the shadow polytope (grey area) of Definition 11 corresponding to the example of Figure 9.
Fig. 14 Illustration for Equation (43).
Tan x C = ρ>0 λ(c -x), λ ≥ 0, c ∈ B(x, ρ) ,where B(x, ρ) denotes the ball centered at x with radius ρ, andNor x C = (Tan x C) ⊥ = def.
(
P C , µ C ) π τ (τ ) ∪ π τ (σ) = arg min (P, µ) ∈ R n × R ∀(P i , µ i ) ∈ πτ (τ ) ∪ πτ (σ), D ((P, µ), (P i , µ i )) = 0 µ (59)Looking at Definition 7 in light of (28), page 23, in the proof of Lemma 6, we get that:σ ⊃ τ ⇒ µ C (σ) = µ C (π τ (σ)) and P C (σ) = P C (π τ (σ))(60)Since both terms of (45), page 34, are invariant by a global translation, we can assume without loss of generality and in order to make the computations simpler that o τ = 0.H Proof of Lemma 12 page 34Proof In this proof, we denote by → ρ Γ the set of simplices in Γ with bounding weight equal to ρ:→ ρ Γ = def.
ρ Γ 1
1 =→ ρ Γ 2 and the largest simplex for which → ρ Γ 1 and → ρ Γ 2 differ would be in Γ 1 contradicting ↓ ρ Γ 1 lex ↓ ρ Γ 2 which proves the claim (64). Note that, from Lemma 1 and generic condition 1, page 14, all the simplices in → ρ Γ 1 and → ρ Γ 2 are in the star of a single simplex τ = τ ρ such that µ C (τ ) = µ B (τ ) = ρ.
∀i, λ i ≥ 0 (56) and, since O belongs to none of the half-spaces H i : ∀i, O -x, n i > 0 This with (56) gives: ∀u ∈ Nor x C, O -x, u > 0 (57) using (55) we get that O -x is in the interior of -Tan x C i.e.: x -O ∈ (Tan x C) Since x is a vertex of C, Tan x C is the convex hull of Tan x ∂C, therefore (58) implies at there are t 1 , . . . t n+1 ∈ Tan x ∂C and λ 1 , . . . λ n+1 ≥ 0 such that:
• (58)
x -O =
i λ i t i which gives, since O / ∈ C:
According to Proposition 2, Γ reg minimizes Γ → Γ (p) among the chains with boundary β P for any p ≥ 1. In particular, Γ reg minimizes Γ → Γ p for the value p of Lemma 5.
Proposition 2 and Theorem 1 consider a minimum with respect to the same boundary condition while their objective differ. In order to prove Theorem 1, we have to show that both minimum still agree. By contradiction, we assume now that they differ, which means that Γ min = Γ reg where Γ min is the minimal chain of Theorem 1. Consider µ = B to be the largest bounding weight for which some simplex in Γ min and Γ reg differ:
There must be at least one simplex with bounding weight µ = B in Γ reg as otherwise, by definition of µ = B , there would be a simplex with radius µ = B in Γ min and this would give Γ reg lex Γ min with Γ reg = Γ min and since ∂Γ reg = ∂Γ min = β P this contradicts the definition of Γ min .
Similarly, it follows from Lemma 5 that if there was no simplex with bounding weight µ = B in Γ min , one would have Γ min p < Γ reg p and ∂Γ reg = ∂Γ min : A contradiction with the minimality of Γ reg for norm • p (Proposition 2). We have shown that if they differ, both Γ reg and Γ min must contain at least one simplex with bounding weight µ = B . We know from Lemma 1 and the generic condition 1 that the set of simplices with bounding weight µ = B are all cofaces of some unique dimension min-
where
is defined in (46). In order to simplify the notations, we allow ourselves to replace for the rest of the section ↓ µ = B by ↓ and τ µ = B by τ . It follows from (48) that:
↓ Γ reg and ↓ Γ min have therefore the same boundary and by Lemma 8, their trace also have the same boundary:
Observe that the set of (n -k -1)-simplices in Tr τ (↓ Γ reg ) coincides with the definition of X(τ ) in (33).
Also we have, from item 3 . in Lemma 7, that the set of shadows of simplices in X(τ ), is a chain in Sh τ (K τ ) made of the faces of the convex hull of Sh τ (P τ ) visible from the origin 0. We get the following characterization of Sh τ (Tr τ (↓ Γ reg )):
It is enough to compare the traces of Γ reg and Γ min in the link of τ
A.1 Step (A):
Step (A) corresponds to the beginning of the proof of Theorem 1 in Section 6 and relies on the definitions and lemmas given in Section 4.1. We denote by Γ reg the simplicial n-chain whose support coincides with the regular triangulation, and by Γ min the n-chain minimal for order lex under the boundary constraint ∂Γ = β P . Proving Theorem 1 consists then in proving that Γ min = Γ reg . We assume in step (A), for the sake of contradiction, that Γ min = Γ reg (see Figure 15). Proposition 2 is then established by reusing the observation first made in [START_REF] Chen | Optimal delaunay triangulations[END_REF]: the Delaunay (or regular) triangulation in dimension R n is equivalent to the convex hull of points in R n+1 where each point (x 1 , . . . , x n ) ∈ R n is lifted to a paraboloid as (x 1 , . . . , x n , j x 2 j ) ∈ R n+1 . This implies that the Delaunay triangulation minimizes, among all triangulations, the L p norm of the difference of two functions: the one with the lifted triangulation's simplices as graph and the one with the paraboloid as graph. This characterization as a minimum generalizes from a minimum among triangulations to a minimum among all chains with the convex hull of points as boundary (Proposition 2). It results that Γ reg can be characterized as the chain minimizing the norm • (p) under the boundary constraint ∂Γ = β P .
The statement of Proposition 2 is the same as Theorem 1 except for the order along which the minimum is taken: Γ reg minimizes • (p) while Γ min is minimum for lex (in particular, both statements share the same boundary constraint).
The bounding weight µ B (σ) is the dominant quantity in the definition of the order on simplices which induces the lexicographic order lex on chains. complex K P , and edges p, q ∈ B B (τ ) ∩ (P \ {a, b}). These edges are in the link of τ . Understanding the structure of the set of simplices with same bounding radius and that belong to the regular triangulation requires additional geometric constructions, see Figure 16. Extending the classical lifted paraboloid construction, we introduce, in Sections 4.2 and 4.3, several definitions associated to a k-simplex τ in a n-dimensional regular triangulation. The bisector bis τ together with the maps π τ , and Φ τ , are intermediary constructions allowing to define formally the shadow map Sh τ :
and extend it as a simplicial map that sends simplices in:

Denote by τ ⊂ σ the set of vertices (P_i, µ_i) ∈ σ for which:
This set cannot be empty: if all inequalities in [START_REF] Chen | Optimal delaunay triangulations[END_REF] were strict, a strictly smaller value of µ would still satisfy the inequality, which would contradict the arg min in [START_REF] Chen | Optimal delaunay triangulations[END_REF]. One then has τ ≠ ∅ and:
and therefore (P_B(σ), µ_B(σ)) also meets the condition required of (P_B(τ), µ_B(τ)) in (6) of Definition 7:
No point (P, µ) with µ < µ_B(σ) can satisfy (50) or (51), as this would again contradict the arg min in [START_REF] Chen | Optimal delaunay triangulations[END_REF]. It follows that:
Proof. The idea of the proof is to shoot a ray from a point x ∈ CH(P) that does not belong to the convex hull of any (n - 1)-simplex, toward the outside of the convex hull CH(P), and to keep track of the number of covering n-simplices along the ray as it crosses (n - 1)-simplices, where by covering simplices of a point y we mean the n-simplices σ ∈ Γ such that y ∈ CH(σ).
We claim that, since x ∈ CH(P), one can pick a point x' for which condition (52) holds,
where [x x'] denotes the line segment in R^n between x and x'. Indeed, we consider moving a point x_t from x_0 to some x_1, picking x_0, x_1 far enough away to have [x_0 x_1] ∩ CH(P) = ∅ and in such a way that [x_0 x_1] belongs to none of the affine hyperplanes spanned by x and (n - 2)-simplices in K_P, which occurs generically. Then the negation of condition (52), with x' = x_t, occurs only at isolated values of t. We can pick a value t for which it does not occur: set x' = x_t and the claim is proved.
Next, we move a point y(t) = (1 - t)x + t x' along the segment [x x']. This segment intersects transversally the (n - 1)-faces CH(τ), for τ ∈ K_P.

In this case, as seen in (24), page 22, the coordinates of π_τ(τ) are (0, -µ_C(π_τ(τ))) = (0, -µ_C(τ)) by (60), so that D((P, µ), π_τ(τ)) = 0 gives us:
It follows that, among the weighted points (P, µ) that satisfy D((P, µ), π_τ(τ)) = 0, minimizing µ is equivalent to minimizing ‖P‖². One can reformulate the characterization (59) of (P_C, µ_C):
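The equivalence just stated follows from a one-line expansion if D denotes the usual power distance between weighted points, D((P, µ), (Q, ν)) = ‖P - Q‖² - µ - ν (an assumption about the convention used here); the display below is only an illustrative check, using the coordinates π_τ(τ) = (0, -µ_C(τ)) recalled above.

\[
D\bigl((P,\mu),\,\pi_\tau(\tau)\bigr)
= \|P\|^2 - \mu - \bigl(-\mu_C(\tau)\bigr)
= \|P\|^2 - \mu + \mu_C(\tau) = 0
\;\Longleftrightarrow\;
\mu = \|P\|^2 + \mu_C(\tau),
\]

so, µ_C(τ) being fixed, minimizing µ over such weighted points amounts to minimizing ‖P‖².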
Observe that:
So the definition of (P_C, µ_C)(π_τ(τ) ∪ π_τ(σ)) given in (62) can be equivalently formulated as Π_{(P_C, µ_C)} being the hyperplane in R^{n-k} × R that minimizes ‖P_C‖² among all hyperplanes containing both lift(π_τ(τ)) and all the points in lift(π_τ(σ)).
But, as seen in (63), 2‖P_C‖ is the slope of the hyperplane Π_{(P_C, µ_C)}, so that Π_{(P, µ)} is the hyperplane with minimal slope going through Φ_τ(τ) = lift(π_τ(τ)) and all the points in Φ_τ(σ) = lift(π_τ(σ)). This slope 2‖P_C‖ is also the slope of the unique (dim(σ) + 1)-dimensional affine space F going through Φ_τ(τ) = lift(π_τ(τ)) = (0, µ_C(τ)) and all the points in Φ_τ(σ) = lift(π_τ(σ)). Since F ∩ (R^{n-k} × {0}) is the affine space supporting Sh_τ(σ), one has:
µ C (τ ) 2 P C (π τ (τ ) ∪ π τ (σ)) so that, using (60) for the second equality:
It follows that the map:
is decreasing.
04119569 | en | [
"info.info-tt"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04119569/file/Named%20Entity%20Recognition%20for%20Model%20Quality%20Estimation.pdf | Slimane Mesbah
Francois Yvon
Named Entity Recognition for Model Quality Estimation
Keywords:
Introduction
The lack of reference translations for Machine Translation (MT) systems has led to the development of alternative metrics, such as Quest++, deepQuest, and TransQuest, which do not rely on reference translations to evaluate the quality of MT systems. However, these metrics are still trained on scores derived from reference translations, which makes them not completely independent of them. It therefore remains important to improve reference-based metrics like BLEU, METEOR, or NIST, which can accurately evaluate translations, in order to train state-of-the-art metrics such as TransQuest on well-processed data. In this article, we propose a new metric that combines METEOR with a corrected version of NIST, addressing NIST's dependence on sentence length and context.
After preprocessing the dataset, the next step is to evaluate the performance of the TransQuest model. One approach that has been suggested is to edit named entities; we therefore replace named entities with predefined ones in order to see whether this improves the model's performance.
2 Methodology

2.1 Dataset
In this experiment, we will be working with the WMT 2020 dataset, which provides 7,000 pairs of sentences, each with a reference translation. The dataset covers a range of sentence lengths and translation scores, allowing us to test the performance of the Transquest model across a variety of scenarios.
The scores given by the evaluators are normalized using the z-score method, and the final score for each sentence is computed as norm.cdf(z_mean), i.e. the standard normal CDF applied to the sentence's mean z-score.
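A minimal sketch of this normalization step is given below. The column names (annotator, score, sentence_id) are placeholders, since the exact layout of the WMT 2020 annotation file is not specified in this report.

import pandas as pd
from scipy.stats import norm

def normalize_scores(df: pd.DataFrame) -> pd.Series:
    """Z-normalize raw DA scores per annotator, then map each sentence's
    mean z-score to [0, 1] with the standard normal CDF."""
    df = df.copy()
    # z-score within each annotator to remove individual scoring biases
    df["z"] = df.groupby("annotator")["score"].transform(
        lambda s: (s - s.mean()) / s.std(ddof=0)
    )
    # average the z-scores per sentence, then squash with the normal CDF
    z_mean = df.groupby("sentence_id")["z"].mean()
    return pd.Series(norm.cdf(z_mean), index=z_mean.index, name="da_score")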
2.2 Computing resources
To conduct the experiment, we used Google Colab, which provides cloud-based computing resources for machine learning tasks. We specifically used a machine instance with 16 GB of RAM and an NVIDIA GeForce GTX 1650 GPU to accelerate the processing of the TransQuest model. It is worth noting that these specifications are provided for reference only.

3 Referenced metrics
Meteor
The METEOR metric is a comprehensive evaluation measure that not only takes n-gram matches into account, but also considers the context of the sentence, including synonyms, sentence structure, and word order. Compared to BLEU, METEOR has shown significant improvements in evaluating the quality of machine-generated translations, as demonstrated in previous studies. With NLTK it can be computed as:

score = nltk.translate.meteor_score.meteor_score([ref.split()], mt.split())
Limitations
One limitation of the METEOR metric is that it heavily relies on the availability of synonym dictionaries, which may not be comprehensive or accurate. This could result in incorrect or misleading scores, especially for translations that use words or phrases that are not in the dictionary. Additionally, METEOR may not be sensitive enough to minor changes in the translation that could significantly alter its meaning.
In the example provided, this limitation of METEOR is demonstrated by the score changing very little despite a significant alteration of the translation's meaning. Even a small edit, such as changing the word sich to mich or adding nicht to the sentence, can result in a vastly different meaning, which makes it difficult for the METEOR metric to accurately evaluate the quality of the translation.
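To make the point concrete, the effect can be reproduced with NLTK on a pair of German sentences differing only by a negation; the sentences below are our own illustration, not the exact example from the dataset.

from nltk.translate import meteor_score
# requires the WordNet data: import nltk; nltk.download('wordnet')

ref = "Er hat sich auf das Treffen vorbereitet".split()
hyp_same = "Er hat sich auf das Treffen vorbereitet".split()
hyp_negated = "Er hat sich nicht auf das Treffen vorbereitet".split()  # meaning reversed

print(meteor_score.meteor_score([ref], hyp_same))     # perfect overlap, score close to 1
print(meteor_score.meteor_score([ref], hyp_negated))  # meaning flipped, score barely drops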
The figure in the next section illustrates how the scores are heavily concentrated between 0.6 and 0.8 on a well-balanced dataset, indicating that the METEOR metric can be overly optimistic in the scores it provides.
Nist
The NIST metric, a variant of BLEU, is another evaluation measure commonly used to evaluate the quality of machine-generated translations. Like BLEU, it is a precision-based metric that computes average n-gram precision scores between the machine-generated and reference translations. NIST uses a different weighting scheme from BLEU, in which n-grams are weighted by how informative they are, so that rarer n-grams contribute more to the score. With NLTK it can be computed as:

nist_score_res = nltk.translate.nist_score.sentence_nist([ref.split()], mt.split())
Limitations
To observe the limitations of the NIST metric, we conducted an experiment using a list of 7,000 reference sentences that were grammatically and syntactically correct and had good context. We then ran the NIST metric on this list, selecting the same sentence as both the machine-generated and the reference sentence. Ideally, this should result in a score of 5, i.e. 100% accuracy. However, we found that the NIST score showed a strong correlation with sentence length, indicating a limitation of the metric in accurately evaluating translations of different lengths:

# plot NIST score against sentence length (sentences longer than 4 tokens)
plot = tr_df[tr_df["len"] > 4].plot(x="len", y="nist")

The penalty function used in NIST is based on the ratio of the length of the machine-generated translation to the length of the reference translation, and applies a penalty to the NIST score according to the difference in length between the two sentences. As a result, the score depends on sentence length independently of the actual translation quality, which can lead to an inaccurate evaluation. The function is given as follows:

penalty = exp(β · log²(min(len(hyp)/len(ref), 1.0))), where β = log(0.5) / log²(1.5) [START_REF] Transquest ; Ranasinghe | Sentence-Level Direct Assessment[END_REF]

import math

def nist_length_penalty(ref_len, hyp_len):
    # length (brevity) penalty as used in NLTK's nist_score implementation
    ratio = hyp_len / ref_len
    if 0 < ratio < 1:
        ratio_x, score_x = 1.5, 0.5
        beta = math.log(score_x) / math.log(ratio_x) ** 2
        return math.exp(beta * math.log(ratio) ** 2)
    else:  # ratio <= 0 or ratio >= 1
        return max(min(ratio, 1.0), 0.0)
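For completeness, the self-scoring table tr_df used for the plot above can be built as follows; the variable name refs for the list of 7,000 reference sentences and the column names are assumptions.

import pandas as pd
from nltk.translate import nist_score

rows = []
for ref in refs:  # refs: the 7,000 reference sentences (assumed variable name)
    tokens = ref.split()
    if len(tokens) > 4:  # sentence_nist uses up to 5-grams, so very short sentences are skipped
        rows.append({"len": len(tokens),
                     "nist": nist_score.sentence_nist([tokens], tokens)})
tr_df = pd.DataFrame(rows)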
New length penalty function:
In this experiment, a new polynomial penalty function is suggested to overcome the limitation of the NIST penalty function. It was fitted on a dataset of sentences with varying lengths and scores, and a polynomial was derived from the data. The degree of the polynomial was determined through cross-validation; a polynomial of degree 15 provided the best results without overfitting the data. This new penalty function improves the accuracy of the NIST metric, particularly for longer sentences, and provides a more comprehensive and accurate evaluation of machine-generated translations.

# apply the length correction to each sentence's NIST score
tr_df["nist_balanced"] = tr_df.apply(
    lambda x: x["nist"] * nist_length_penalty(x["len"], x["len"]), axis=1
)
tr_df
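The fitting step itself is not reproduced in the extracted text; below is a minimal sketch of one way to obtain such a degree-15 correction with NumPy. The function name f and the choice of rescaling towards the ideal score of 5 are our assumptions, made so that the sketch is consistent with how f is used in the scoring functions later on.

import numpy as np

# tr_df["len"]  : sentence length in tokens
# tr_df["nist"] : NIST score of each reference sentence scored against itself
deg = 15  # degree selected by cross-validation in the report
coeffs = np.polyfit(tr_df["len"], tr_df["nist"], deg)
fitted = np.poly1d(coeffs)

def f(length):
    # multiplicative correction: how far the observed self-NIST score at this
    # length falls short of the ideal value of 5 (assumed rescaling scheme)
    return 5.0 / max(float(fitted(length)), 1e-6)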
Importance of combined metrics:
Each of these metrics has its own limitations and tends to either over-evaluate or under-evaluate translation quality. To address this issue, a combination of metrics can be used to obtain a more reasonable score, which can in turn be useful for training Quest++ or TransQuest models and improving their performance. Providing well-defined scores is crucial for the accurate evaluation of these QE models. In this experiment, we implemented a new metric using the following method.
It is important to note that the choice of weighting between METEOR and NIST may vary depending on the specific use case and dataset. In this experiment, the 80:20 ratio was chosen based on previous research and an analysis of the WMT 2020 dataset. However, for other datasets or languages, a different ratio may be more appropriate.
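Concretely, with the 80:20 weighting and with NIST rescaled to [0, 1] by dividing by its nominal maximum of 5, the combination reads as below; this mirrors the final_score function reproduced later together with the other scoring helpers.

def combined_score(meteor, nist_balanced):
    # METEOR is already in [0, 1]; the length-balanced NIST (max ~5) is divided by 5 first
    return round(0.8 * meteor + 0.2 * (nist_balanced / 5), 6)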
Output
According to the TransQuest paper, this model has shown some limitations, such as over-optimistic results, which can be attributed to named-entity confusion. To address this issue, we conducted an experiment to investigate whether replacing named entities with entries from a predefined list can improve the model's performance.
transquest results
Mean absolute error: 0.141
Root mean squared error: 0.176
Pearson correlation coefficient: 0.292
Improvements (NER)
Named Entity Recognition, also known as NER, is a field of natural language processing that involves identifying and categorizing groups of tokens, also known as spans, as specific named entities, such as people, places, organizations, or dates. Common entity types are often abbreviated, such as ORG for organization and LOC for location. In this section, we use spaCy, a state-of-the-art library for NER, though other libraries such as NLTK's named-entity chunker and Stanford NER are also available.
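As a quick illustration of the spaCy interface used here (the sentence is our own example, and the small English model must be downloaded beforehand):

import spacy  # python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
doc = nlp("Pierre moved to Paris in 2019 and joined IBM.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. Pierre PERSON, Paris GPE, 2019 DATE, IBM ORG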
Transquest with NER
We selected a predefined list of common named entities that do not carry significant meaning. This is because named entities with semantic content tend to confuse the model the most; for example, the name "Pierre", which is a popular first name in France, often gets confused with the word "stone".

list_ent = {
    "PRODUCT": "product", "LOC": "Himalayas", "DATE": "this year",
    "TIME": "night", "MONEY": "three dollars", "PERSON": "David",
    "ORG": "IBM", "GPE": "Paris", "PERCENT": "four percent",
    "CARDINAL": "three",
}
list_ent_german = {
    "PRODUCT": "Produkt", "LOC": "Himalaya", "DATE": "dieses Jahr",
    "TIME": "Nacht", "MONEY": "drei Dollar", "PERSON": "David",
    "ORG": "IBM", "GPE": "Paris", "PERCENT": "vier Prozent",
    "CARDINAL": "drei",
}
NER edit function
We propose a method to replace named entities in both source and target sentences with entries from the predefined lists, using the function sketched below. The model was then executed on the 7,000 modified sentence pairs, which took approximately 90 minutes to complete. The resources used for this task are described in the computing resources section.
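The replacement function itself does not appear in the extracted text; a minimal sketch of how it could be written with spaCy is given below. The pipeline names, the src/mt column names and the new_src/new_mt output columns are assumptions based on the surrounding code.

import spacy

nlp_en = spacy.load("en_core_web_sm")    # assumed source-side pipeline
nlp_de = spacy.load("de_core_news_sm")   # assumed target-side pipeline

def replace_entities(sentence, nlp, replacements):
    # substitute every recognized entity with its neutral placeholder,
    # iterating right-to-left so that character offsets remain valid
    doc = nlp(sentence)
    for ent in reversed(doc.ents):
        if ent.label_ in replacements:
            sentence = sentence[:ent.start_char] + replacements[ent.label_] + sentence[ent.end_char:]
    return sentence

df["new_src"] = df["src"].apply(lambda s: replace_entities(s, nlp_en, list_ent))
df["new_mt"] = df["mt"].apply(lambda s: replace_entities(s, nlp_de, list_ent_german))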
df["tquest_ner"] = df.apply( lambda x : transquest_model(x["new_src"], x["new_mt"]) , axis=1)
Results
It was observed that the NIST metric tends to show a dependence on sentence length, which affects the evaluation scores. To address this limitation, a new penalty function was suggested and tested, with promising results. Additionally, it was found that a combination of multiple metrics, including METEOR and NIST, can provide a more reasonable score and avoid over- or under-evaluation of translations. The second experiment showed that replacing named entities with a predefined list did not improve the model, as shown in the following figure. Based on these metrics, the TransQuest model performed better than the TransQuest-with-NER model in terms of both MAE and Pearson correlation coefficient; the second model also had a slightly higher RMSE.
Although the model's performance slightly decreased, the scatter plot below indicates that its scores are now more concentrated between 0.4 and 0.9, which is consistent with the score distribution obtained when using reference translations.
Conclusion
In summary, although language models have seen remarkable advances in recent times, they still face limitations when it comes to less widely spoken languages. Quality estimation (QE) models have emerged as a promising solution to this issue, using state-of-the-art multilingual models that are fine-tuned on QE data. This approach can help narrow the performance gap and enhance machine translation for less widely spoken languages, which opens new opportunities for future research combining these cutting-edge models with such languages.
Figure 1: Sentence-level Direct Assessment (DA) scores by professional translators
Figure 2: NIST score over sentence length
Figure 3: Polynomial approximation for the penalty function
Figure 4: Penalty output
Figure 6: TransQuest scores
Figure 7: NIST scores over sentence length
(a) TransQuest scores before applying NER editing; (b) after applying NER editing on source and target sentences
Figure 9: TransQuest scores with and without NER editing
Once the dataset is preprocessed, we proceed to train a new model that can predict the quality score of the remaining data without any reference translations. In this section, we compare two popular models, Quest++ and TransQuest. TransQuest is a Python library that fine-tunes transformer-based models for quality estimation; it has been shown to outperform other open-source quality estimation frameworks such as OpenKiwi and DeepQuest, and it is trained on top of the pre-trained XLM-R model from the Hugging Face Transformers library.
Figure 5: Quest++ scores
4.2 Transquest
from nltk.translate import meteor_score, nist_score

def mtr_score(ref, mt):
    return round(meteor_score.meteor_score([ref.split()], mt.split()), 6)

def nst_score(ref, mt):
    # fall back to METEOR for very short sentences, where NIST is unreliable
    if (len(ref.split()) < 6) or (len(mt.split()) < 6):
        return mtr_score(ref, mt)
    else:
        return round(nist_score.sentence_nist([ref.split()], mt.split()), 6)

def nst_blc_score(ref, mt):
    # NIST score corrected by the fitted length-penalty polynomial f
    return round(nst_score(ref, mt) * f(len(ref.split())), 6)

def final_score(ref, mt):
    # 80:20 combination of METEOR and the length-balanced NIST (rescaled to [0, 1])
    return round(0.8 * mtr_score(ref, mt) + 0.2 * (nst_blc_score(ref, mt) / 5), 6)
4 Referenceless QE
4.1 Quest++
quest++ results
Mean absolute error: 0.751
Root mean squared error: 0.898
Pearson correlation coefficient: 0.491
TransQuest is a highly acclaimed Quality Estimation (QE) model, well known for its multilingual support. It is built on top of the XLM-RoBERTa model and fine-tuned on the WMT dataset. There are two architectures available for training this model; in this paper, we focus on the MonoTransQuest architecture.

4.2.1 Implementation

import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

model = MonoTransQuestModel("xlmroberta", "TransQuest/monotransquest-da-multilingual",
                            num_labels=1, use_cuda=torch.cuda.is_available())
predictions, raw_outputs = model.predict([["src sentence.", "tgt sentence."]])
print(predictions)
Table 1: TransQuest vs TransQuest with NER

Model                 MAE    RMSE   Pearson correlation
TransQuest            0.141  0.176  0.292
TransQuest with NER   0.168  0.203  0.121
Two models were evaluated using three different metrics: Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and the Pearson correlation coefficient. The first model, TransQuest, had an MAE of 0.141, an RMSE of 0.176, and a Pearson correlation of 0.292 (see Table 1).
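The three figures reported above can be recomputed directly from the prediction columns; the gold and tquest column names are assumptions about how the normalized human scores and model predictions are stored.

import numpy as np
from scipy.stats import pearsonr

def report(gold, pred):
    gold, pred = np.asarray(gold, dtype=float), np.asarray(pred, dtype=float)
    mae = np.mean(np.abs(gold - pred))
    rmse = np.sqrt(np.mean((gold - pred) ** 2))
    r, _ = pearsonr(gold, pred)
    return mae, rmse, r

print(report(df["gold"], df["tquest"]))      # TransQuest
print(report(df["gold"], df["tquest_ner"]))  # TransQuest with NER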
Acknowledgments
I would like to express my sincere gratitude to my supervisor, Francois Yvon, for his invaluable guidance, encouragement, and support throughout the course of this project. I would also like to extend my thanks to Marc Evard and Francois Lande, my machine learning teachers, as well as all my professors and the members of Paris-Saclay University, for their insights and contributions. This work was completed during a school internship managed by Sylvain Conchon, to whom I am grateful. Without their combined efforts, this project would not have been possible.
https://github.com/slimane
03837032 | en | [
"shs.psy",
"scco.ling",
"sdv.neu.sc"
] | 2024/03/04 16:41:26 | 2021 | https://hal.science/tel-03837032v2/file/Xiao_2021_These.pdf | Claire Kabdebon
Camille Straboni
Hualin Xiao
Alexandre Cremers
Brent Strickland
Sharon Peperkamp
Does grammatical gender influence how we conceive of objects?
Four years ago, I arrived in Paris alone, carrying the largest suitcase I ever had. Before that, I knew very little about the feeling of carrying a Chinese suitcase in a foreign land, but I did know what I came here for. Now it's about time to pack again. I still have the suitcase I brought from my homeland, only that the items to fill in have changed.
I'm able to complete the four-year PhD thanks to my advisors Brent Strickland and Sharon Peperkamp. They not only initiated me into Psychology, but more importantly helped me develop another way of thinking that is rational and scientific. This may sound surprising, but it is a big challenge to someone with only a background in English language and literature to acquire this kind of reasoning as it is contrary to intuition or imagination-oriented thinking. Their knowledge, skills, attitudes towards science, and ways of doing things have broadened my horizon. As a supervisor, Sharon may give the impression that she goes too much into details (or just perfectionist), but after working four years with her I finally understand that it is a quality I should have to compensate for my impatient and clumsy personality. Now I feel jealous that she was born perfectionist. Brent, on the other hand, always has his eyes on the bigger picture.
He is the one who told me on our first meeting to think about my future career and research agenda. I'm impressed by how efficiently these two minds cooperate to foster a PhD. They have also helped
Chapter 1. Introduction
Gender has multiple facets. When used to refer to biological sex, it is a term for distinguishing a male from a female person with regard to their anatomical and hormonal differences. Put in the context of society, gender is a product of social norms and interactions embodied by the different roles, qualities, and behaviors associated with men and women [START_REF] Eagly | Social Role Theory of Sex Differences[END_REF]. Gender is also an important concept for human languages. Many of the world's languages have a grammatical gender system in which nouns are categorized into different gender classes based on features like sex, animacy, shape and size etc., and following this categorization, words appearing with the nouns, such as adjectives, articles, verbs and pronouns change their forms accordingly [START_REF] Aikhenvald | How gender shapes the world[END_REF][START_REF] Corbett | Gender[END_REF][START_REF] Gygax | A Language Index of Grammatical Gender Dimensions to Study the Impact of Grammatical Gender on the Way We Perceive Women and Men[END_REF]. The meanings and implications of gender vary depending on which facet of the notion one refers to. In this dissertation, I focus on the three aspects of gender and present three empirical studies on how each aspect as well as interactions between them shape various cultural landscapes.
The first study (Chapter 2) investigates the relationship between language and thought, in particular the influence of grammatical gender on the conceptualization of objects. The Neo-Whorfian hypothesis postulating that the structure of one's native language impacts the way one thinks about the world [START_REF] Whorf | Language, thought and reality[END_REF] has earned empirical support among cognitive scientists from various perspectives such as the influence of color terms on color perception (e.g. [START_REF] Davies | A cross-cultural study of colour grouping: Evidence for weak linguistic relativity[END_REF][START_REF] Gilbert | Whorf hypothesis is supported in the right visual field but not the left[END_REF][START_REF] Thierry | Unconscious effects of language-specific terminology on preattentive color perception[END_REF], and the impact of space and time framing in language on speakers' spatial reasoning and conceptualization of time (e.g. [START_REF] Boroditsky | Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time[END_REF][START_REF] Haun | Cognitive cladistics and cultural override in Hominid spatial cognition[END_REF][START_REF] Levinson | Space in language and cognition: Explorations in cognitive diversity[END_REF]. However, research remains controversial on whether the grammatical gender of nouns denoting genderless objects prompts language users to associate different gender properties with the objects. The study described in Chapter 2 is an attempt to answer this question. To ensure that any positive/null results obtained are not due to unreliable research methods, I adopt an innovative approach by crowdsourcing the materials for Experiment 1, and I followed the open science initiative by submitting the study as a registered report 1 for peer review before we actually run the experiments. As this report is currently under review, I present in Chapter 2 in detail the design of two psycholinguistic experiments (without results), in addition to two pilot experiments (with results).
The second study (Chapter 3) examines the relationship between grammatical gender and mental representations of human referents. It focuses on the controversial use of masculine generics in French and its consequences for the underrepresentation of women in the mind of language users. Through two experiments, I compare the masculine form (e.g. les musiciens 'the musiciansmasc') to two gender-fair forms (double-gender: les musiciens et musiciennes 'the musiciansmasc and musiciansfem'; and middot: musicien•ne•s) across professions that have balanced and biased gender distributions. I ask three specific questions: 1) whether the two gender-fair forms differ from each other with regard to their effects on mental representations of gender distributions; 2) whether the effect of language form is moderated by the gender stereotype of a professional group; 3) and finally whether or not the representations induced by these language forms are consistent perceptions of real-world gender ratios.
The last study (Chapter 4) is about people's moral attitudes on gender equality between men and women and how these relate to their trust in scientific evidence of gender discrimination in academia. Moral attitudes toward a controversial issue have been shown to affect individuals' processing of new information, such that they tend to selectively assimilate or reject evidence depending on whether it is consistent with their pre-existing attitudes or not [START_REF] Lord | Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence[END_REF]. (Footnote 1: Registered reports are defined by the Royal Society as follows: "Registered reports are a format of empirical article where a study proposal is reviewed before the research is undertaken. Pre-registered proposals that meet high scientific standards are then provisionally accepted before the outcomes are known, independently of the results." See https://royalsociety.org/blog/2016/11/registered-reports-whatare-they-and-why-are-they-important/.)
In Chapter 4, I present six experiments exploring the effects of individuals' moral commitment to gender equality on their evaluations of research summaries on sex-based hiring bias in STEM fields.
In this introductory chapter, I first provide an overview of research on language and thought with respect to object conceptualization, as the empirical evidence pertaining to this topic is less conclusive. Then I introduce the investigations on the role of language in the mental representations of persons, and finally I cover the study of the relationship between moral attitudes and trust in science.
Gender in Language and Its Influences on Mental Representations of Gender
More than a core characteristic of human beings, gender is also a productive feature of human languages. Depending on how gender is encoded in the language structure, the world's languages can be roughly divided into a few categories: grammatical gender languages (e.g. French, German), natural gender languages (e.g. English, Swedish), and genderless languages (e.g. Chinese, Finnish) [START_REF] Gygax | A Language Index of Grammatical Gender Dimensions to Study the Impact of Grammatical Gender on the Way We Perceive Women and Men[END_REF][START_REF] Stahlberg | Representation of the sexes in language[END_REF]. In grammatical gender languages, every noun, no matter if it refers to persons, objects or abstract concepts, is assigned to a gender class. The number of gender class varies across languages. For example, among the 256 languages documented by the World Atlas of Language Structures online (https://wals.info/) based on the work of [START_REF] Corbett | Gender[END_REF], 144 do not have grammatical gender, 50 have a two-gender system (e.g. French: masculine and feminine), 26 a three-gender system (e.g. German: masculine, feminine and neuter), 12 a four-gender system (e.g. Dyirbal: gender I , II, III, and IV) and 24 languages have five and more genders (e.g. Swahili: gender I -VII). The grammatical gender of nouns determines the form of other lexical categories appearing in the same sentence including verbs, adjectives, articles, and pronouns. For instance, in French, the word student has two forms étudiant and étudiante respectively for a male and a female student. In many cases, a French speaker cannot avoid indicating the sex of human referents in their phrases (except when they mean to do so) as the gender information of the referent is marked on the words they use. If the author writes Elle est une bonne étudiante 'She is a good student', the reader would not doubt that it is a female student the author is describing. Similarly, nouns denoting inanimate entities have gender markings and determine the form of other words appearing with them, as in une petite table 'afem smallfem table' and un petit bureau 'amasc smallmasc desk'. Here, a and small were shown in their feminine and masculine forms, respectively, modifying the feminine noun table and the masculine noun desk. The gender assignment of person nouns in grammatical gender languages mostly corresponds to the sex of the referent, while that of object nouns seems to be arbitrary, if not based on shape, size or other semantic features [START_REF] Corbett | Gender[END_REF].
Natural gender languages do not group nouns into gender classes, except that personal pronouns (e.g. he and she in English) and some person nouns (e.g. waiter/waitress, actor/actress) distinguish between the male and female forms. In genderless languages, finally, the sex information of referents is conveyed through lexical means, hence the absence of grammatical gender marking for nouns and pronouns [START_REF] Gygax | A Language Index of Grammatical Gender Dimensions to Study the Impact of Grammatical Gender on the Way We Perceive Women and Men[END_REF][START_REF] Stahlberg | Representation of the sexes in language[END_REF].
Following the Neo-Whorfian hypothesis that the structure of a language influences the way representations of the world are constructed in the minds of the speakers, one would argue that the grouping of nouns into masculine or feminine gender class should lead language users to associate male or female gender properties with the referents of the nouns. Research on this topic can be divided into two lines. One line investigates person nouns whose grammatical gender has a sex-based semantic underpinning as it overlaps the natural gender of the referents. The other line of research focuses on inanimate nouns (e.g. objects, concepts) whose grammatical gender does not overlap natural gender and has no semantic underpinnings related to sex. Overall, research on person nouns has provided converging evidence of grammatical gender influencing the mental representations of person referents, while results from the second line of research seem to be contradictory.
Does Grammatical Gender Affect Our Conceptualization of Objects?
Previous research on the effects of linguistic relativity has provided some evidence supporting the weak version of the Whorfian hypothesis postulating that language influences thought (the Neo-Whorfianism), but not the strong version that language determines thought (see [START_REF] Wolff | Linguistic relativity[END_REF][START_REF] Zlatev | Language may indeed influence thought[END_REF]. The hypothesized language effect has been found from many perspectives such as color terms and color perception [START_REF] Davies | A cross-cultural study of colour grouping: Evidence for weak linguistic relativity[END_REF][START_REF] Gilbert | Whorf hypothesis is supported in the right visual field but not the left[END_REF][START_REF] Thierry | Unconscious effects of language-specific terminology on preattentive color perception[END_REF][START_REF] Winawer | Russian blues reveal effects of language on color discrimination[END_REF], linguistic labels and conceptual category learning [START_REF] Boutonnet | Words Jump-Start Vision: A Label Advantage in Object Recognition[END_REF][START_REF] Lupyan | Language is not just for talking: Redundant labels facilitate learning of novel categories[END_REF], and the impact of space and time framing on speakers' spatial orientation and conceptualization of time [START_REF] Boroditsky | Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time[END_REF][START_REF] Haun | Cognitive cladistics and cultural override in Hominid spatial cognition[END_REF][START_REF] Levinson | Space in language and cognition: Explorations in cognitive diversity[END_REF][START_REF] Levinson | Returning the tables: Language affects spatial reasoning[END_REF][START_REF] Li | Spatial reasoning in tenejapan mayans[END_REF][START_REF] Loewenstein | Relational language and the development of relational mapping[END_REF][START_REF] Majid | Can language restructure cognition? The case for space[END_REF][START_REF] Munnich | Spatial language and spatial representation: A crosslinguistic comparison[END_REF]. However, regarding grammatical gender and its influences on the perceived properties of objects, the existing empirical evidence remains rather controversial as suggested by the inconsistent findings in the literature (see [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF][START_REF] Beller | Culture or language: What drives effects of grammatical gender?[END_REF][START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF]Bender et al., , 2016a;;[START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF][START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF][START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF][START_REF] Haertlé | Does grammatical gender influence perception? 
A study of Polish and French speakers[END_REF][START_REF] Imai | All giraffes have female-specific properties: Influence of grammatical gender on deductive reasoning about sex-specific properties in german speakers[END_REF][START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF][START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF][START_REF] Saalbach | Grammatical Gender and Inferences About Biological Properties in German-Speaking Children[END_REF][START_REF] Saalbach | Grammatical Gender and Inferences About Biological Properties in German-Speaking Children[END_REF][START_REF] Sato | Grammatical gender affects gender perception: Evidence for the structural-feedback hypothesis[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF].
As mentioned before, grammatical gender refers to the classification of nouns into different classes followed by a grammatical rule of agreement according to which the forms of other lexical categories change, including that of adjectives, articles and pronouns. Even though the grammatical gender of nouns denoting objects does not have a sex-based semantic underpinning, one can ask whether the classification of masculine and feminine nouns, as in French, would prompt speakers to make different gender associations with the objects. To my knowledge, this question was first experimentally investigated by the cognitive scientists [START_REF] Guiora | A Cross-Cultural Study of Symbolic Meaning-Developmental Aspects[END_REF]. Using a semantic differential test, [START_REF] Guiora | A Cross-Cultural Study of Symbolic Meaning-Developmental Aspects[END_REF] showed Hebrew speaking Israeli kindergarteners and adults a list of object nouns and asked them to judge to what extent the words could be related to masculine or feminine characteristics. In addition to grammatical gender, they varied the cultural gender connotations of the stimulus words: malerelated (e.g. aircraft, tank), female-related (e.g. doll, skirt), and gender-neutral (e.g. clock, book).
Contrary to the predictions of the hypothesis, they found that both child and adult participants categorized the words according to their gender connotations rather than their grammatical gender.
This study in fact replicated the results of the authors' earlier cross-cultural study comparing American and Israeli adults (cited in [START_REF] Guiora | A Cross-Cultural Study of Symbolic Meaning-Developmental Aspects[END_REF]), in which English and Hebrew speakers showed a similar pattern of responses: the associations were made on the basis of gender connotations. The authors thus concluded that grammatical gender in Hebrew did not influence native speakers' perceptions of objects. Being the first researchers to look at the relationship between grammatical gender and object perception, they provided the initial evidence against the hypothesized gender effect.
Later, [START_REF] Clarke | Gender perception in Arabic and English[END_REF] replicated the study with Arabic (a grammatical gender language) and English (for comparison). This time, contrary to the findings of [START_REF] Guiora | A Cross-Cultural Study of Symbolic Meaning-Developmental Aspects[END_REF], a gender effect was detected. The Arabic participants assigned gender qualities to words according to their grammatical gender, while the English participants relied on the words' gender connotations. After that, continuous attempts were made to replicate the studies with other languages like German, Spanish, French, Italian and Polish, on age groups from 5-year-olds to adults, employing various paradigms ranging from voice assignment, word and picture grouping, inference tasks, to word error induction tasks [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF][START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF][START_REF] Bender | Lady Liberty and Godfather Death as candidates for linguistic relativity? Scrutinizing the gender congruency effect on personified allegories with explicit and implicit measures[END_REF][START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF][START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF][START_REF] Flaherty | How a language gender system creeps into perception[END_REF][START_REF] Imai | All giraffes have female-specific properties: Influence of grammatical gender on deductive reasoning about sex-specific properties in german speakers[END_REF][START_REF] Konishi | The semantics of grammatical gender: A cross-cultural study[END_REF][START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF][START_REF] Kurinski | Does learning Spanish grammatical gender change English-speaking adults' categorization of inanimate objects?[END_REF][START_REF] Maciuszek | Grammatical Gender Influences Semantic Categorization and Implicit Cognition in Polish[END_REF][START_REF] Mills | The acquisition of gender: A study of English and German[END_REF][START_REF] Ramos | What constrains grammatical gender effects on semantic judgements? Evidence from Portuguese[END_REF][START_REF] Sato | Grammatical gender affects gender perception: Evidence for the structural-feedback hypothesis[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF][START_REF] Vigliocco | Grammatical gender effects on cognition: Implications for language learning and language use[END_REF].
Although most of the studies reported positive results, the picture is more of a complex one. For example, the gender effect was more robust in languages with a sex-based two-gender system (e.g. French, Italian) than in those having three grammatical gender classes (e.g. German) (Sera et al., 2002b;[START_REF] Vigliocco | Grammatical gender effects on cognition: Implications for language learning and language use[END_REF]; but see [START_REF] Bender | Gender congruency from a neutral point of view: The roles of gender classes and conceptual connotations[END_REF]; the influence of grammatical gender was not detected if participants' lexical access was blocked by articulatory suppression when performing the target task [START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF]; the effect was found with monolinguals but not with bilinguals who spoke two grammatical gender languages [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF], or that the language effect in bilinguals was dependent on the test language [START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF]; the language effect was limited among adult language learners [START_REF] Kurinski | Does learning Spanish grammatical gender change English-speaking adults' categorization of inanimate objects?[END_REF]; the effect was stronger with explicit measures (e.g. biological sex assignment) and linguistic stimuli than with implicit measures (e.g. Extrinsic Affective Simon Task) and visual stimuli [START_REF] Bender | Lady Liberty and Godfather Death as candidates for linguistic relativity? Scrutinizing the gender congruency effect on personified allegories with explicit and implicit measures[END_REF][START_REF] Ramos | What constrains grammatical gender effects on semantic judgements? Evidence from Portuguese[END_REF], and so on and so forth. Another complexity is that some positive effects could be attributed to task demands, or participants' employment of response strategies in completing the task that may have biased the results in favor of the hypothesis. For example, when asked to assign a male or female voice to objects as in [START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF], participants could simply rely on the cue of grammatical gender to complete the somewhat strange task.
That said, some research did adopt measures that were less subject to task demands (e.g. word or image priming), but again mixed results were reported. Using a semantic categorization task, [START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF] presented Spanish-English bilinguals with three pictures of objects one by one and asked them to decide whether the third picture in a series belonged to the same semantic category as the first two while measuring the event-related brain potentials (ERP). The researchers manipulated the semantic relatedness and grammatical gender of the objects. The participants' explicit judgments revealed no gender effect, but their ERPs did, suggesting that participants retrieved grammatical gender information even when it was irrelevant to the task [START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF]. Later, [START_REF] Sato | Grammatical gender affects gender perception: Evidence for the structural-feedback hypothesis[END_REF] tested native English and French-English bilinguals with a facial image categorization task and found that English speakers were influenced by the gender connotations of object primes when asked to categorize male and female facial images, while French-English bilinguals relied on the grammatical gender of objects in performing the task. Other studies that employed different implicit methods also reported positive effects of grammatical gender [START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF][START_REF] Bender | Gender congruency from a neutral point of view: The roles of gender classes and conceptual connotations[END_REF]. It is worth noting that [START_REF] Bender | Gender congruency from a neutral point of view: The roles of gender classes and conceptual connotations[END_REF] reported equally strong effects for neuter and gender-marked nouns in German. This result, however, should be interpreted with a caveat: it may have been confounded with gender connotations since neuter nouns should not exhibit such an effect except if they were stereotypically associated with males or females.
Among the previous studies, the one reported by [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] is particularly worth mentioning here. Employing a word association method, [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] asked German and Spanish participants to produce adjectives for a list of 24 object nouns that had opposite grammatical gender in the two languages (i.e. masculine in German and feminine in Spanish, feminine in German and masculine in Spanish). To minimize the influence of participants' native language, the task was administered in English. Then to test if the adjectives generated for masculine and feminine nouns differed in their gender associations, the researchers asked a group of English speakers to rate the adjectives on the extent to which they were associated with masculine or feminine properties. According to the summary of results provided by the authors [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF], German and Spanish participants generated adjectives that were rated as "masculine" or "feminine" for grammatically masculine or feminine nouns in their native language. For example, the word "bridge", which is grammatically feminine in German ("Brücke"), elicited female-typed adjectives from German speakers (e.g. beautiful, elegant, fragile, peaceful, pretty, and slender), while it is grammatically masculine in Spanish ("puente") and thus induced male-typed adjectives from Spanish speakers (e.g. big, dangerous, long, strong, sturdy, and towering). The study has garnered much attention from experts and laypeople alike as evinced by the more than 800 citations on Google scholar and the over seven million views of Boroditsky's YouTube video in which these findings were described (https://www.youtube.com/watch?v=RKK7wGAYP6k). Despite the wide attention it has drawn, the important aspects of the study remain unknown, including the experimental materials, procedure and results, except being briefly described in the book chapter [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF].
Recently, [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF] made an attempt to replicate [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF]'s study, but this time the German and Spanish speakers were tested in their native language instead of a third language. Contrary to the findings that were originally reported, results of the replication study showed no grammatical gender effect [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF]. The study of [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF], however, had its own limitations with regard to the selection of items and its sample size. The authors tested a small number of items (N = 10) with two of them denoting animals ("whale" and "mouse") and eight, objects (e.g. "pumpkin", "clock"). The mixture of animate and inanimate nouns could bias the results in opposite directions, as previous studies suggested differential effects of grammatical gender for the two types of items (see [START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF]Bender et al., , 2016a;;[START_REF] Maciuszek | Grammatical Gender Influences Semantic Categorization and Implicit Cognition in Polish[END_REF]. In addition, the nouns were not controlled for gender connotations and natural/artificial classification, two factors that could confound the grammatical gender effect (see Bender et al., 2016a[START_REF] Bender | Gender congruency from a neutral point of view: The roles of gender classes and conceptual connotations[END_REF][START_REF] Mullen | Children's classifications of nature and artifact pictures into female and male categories[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF]. With respect to the sample size, within each language group, there were only 15 participants for the adjective generation task and 10 for the adjective rating task. One would argue that results from such a small sample size are hardly generalizable to a greater population.
Overall, more reliable research methods are needed to better answer the question of whether grammatical gender affects people's conceptualization of objects. Thus, building on the work of [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] and [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF], Chapter 2 of the dissertation presents two psycholinguistic experiments, using a similar word association method, to investigate French and German speakers mental representations of objects. Here, we adopt a "stack the deck" approach in which we diverge from previous studies by stacking the deck in favor of the original hypothesis, in such a way that any null result(s) obtained (suggested by extensive piloting) would provide strong weight against it. Specifically, unlike [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] who tested participants in English as a way to reduce the influence of their native language, we will test participants directly in their native language by asking one group of participants to produce adjectives to gender marked masculine and feminine nouns, and another group of native participants to rate these adjectives as representing typically male or female qualities. All things being equal, we would expect that testing in a speaker's gendered native language (as opposed to testing them in English) while using gender marked test items would only serve to enhance any existing Neo-Whorfian effects. In addition, to address concerns over the experimenter bias (see [START_REF] Strickland | Experimenter Philosophy: The Problem of Experimenter Bias in Experimental Philosophy[END_REF] for a discussion), we crowdsourced the materials for Experiment 1 by asking participants to create semantically related noun pairs in French, and we standardized our item selection process to better control for the potential confounding factors such as the number of syllables and gender connotations.
The underlying hypothesis and results we are addressing have captured the imagination of millions of people, including both scientists and the public, yet there are very real doubts about the veracity and robustness of the findings; we therefore decided to adopt an open-science-based approach in assessing whether it is actually true that grammatical gender deeply influences how we think about inanimate objects. Specifically, considering that a potential publication bias, that is, a greater likelihood that positive results supporting a hypothesis get published in academic journals, may exist in scientific practice, any null effects that we observe may have a lesser chance of being accepted for publication. To combat such a publication bias and, at the same time, to ensure the reliability of our research design, we decided to follow the open science initiative by submitting the described work (Chapter 2) as a registered report. In doing so, we can have our experimental design peer reviewed before we commence data collection. The registered report is currently under review. In Chapter 2, I present the manuscript of the registered report as submitted, including detailed descriptions of two experiments (for which data collection will start after we receive an in-principle acceptance of the report by a journal), followed by the descriptions and results of two pilot studies.
Linguistic Gender Inequality and Language Reform
In languages with a sex-based grammatical gender system, i.e. nouns referring to male persons have masculine gender and those for female persons have feminine gender, the roles of masculine and feminine genders are often asymmetrical. The masculine gender is typically assigned the role of a generic and can be used to refer to a group of women and men, or to persons whose sex is unknown or irrelevant [START_REF] Corbett | Gender[END_REF][START_REF] Gygax | A Language Index of Grammatical Gender Dimensions to Study the Impact of Grammatical Gender on the Way We Perceive Women and Men[END_REF]. On the contrary, the feminine gender has a specific female meaning and can only be used to denote female persons. The two French sentences (1a) and (1b) illustrate such a difference.
(1) a. Les sportifs français ont gagné 10 médailles d'or.
'The French.MASC athletes.MASC have gained 10 gold medals.'
b. Les sportives françaises ont gagné 10 médailles d'or.
'The French.FEM athletes.FEM have gained 10 gold medals.'

Written in the masculine form, sentence (1a) can be interpreted in three ways: only the male French athletes have gained 10 gold medals, the male and female French athletes together gained 10 gold medals, or the French athletes, whose sex is unknown or irrelevant here, earned 10 medals. However, the feminine form in (1b) exclusively refers to female athletes.
Another comparable example is English: the masculine third-person pronoun he is designated as the generic pronoun, while the feminine counterpart she has a female-specific meaning. A similar use of masculine generics can be found in other grammatical gender languages such as German (e.g. Lehrer 'teachers.MASC'), and in natural gender languages like Swedish (i.e. han 'he').
Since the 1970s, the world has seen heated debates on gender equality regarding the unequal treatment of the masculine and feminine genders in languages, in particular the use of the masculine gender as a generic. Language is both a reflection and a source of stereotyped gender beliefs. The unequal roles of the masculine and feminine genders in a language have been argued to largely mirror the asymmetrical power and status relations between men and women in a speech community [START_REF] Bodine | Androcentrism in prescriptive grammar: Singular 'they', sex-indefinite 'he', and 'he or she'1[END_REF][START_REF] Menegatti | Gender bias and sexism in language[END_REF]. Consistent with the converging evidence of the linguistic mapping of gender stereotypes across languages (T. E. S. [START_REF] Charlesworth | Gender Stereotypes in Natural Language: Word Embeddings Show Robust Consistency Across Child and Adult Language Corpora of More Than 65 Million Words[END_REF][START_REF] Garg | Word embeddings quantify 100 years of gender and ethnic stereotypes[END_REF][START_REF] Lewis | Gender stereotypes are reflected in the distributional structure of 25 languages[END_REF][START_REF] Tavits | Language influences mass opinion toward gender and LGBT equality[END_REF]), previous cross-national research has shown that the more explicitly gender information is encoded linguistically, as in grammatical gender languages, the more salient gender stereotypes become and thus the higher the chances of reproducing sexist beliefs in speakers [START_REF] Prewitt-Freilino | The gendering of language: A comparison of gender equality in countries with gendered, natural gender, and genderless languages[END_REF].
The use of masculine generics was considered to be problematic as it may represent a linguistic means of legitimizing the dominant status of the masculine gender, and in consequence turning sexist beliefs into a routine practice with most language users being unaware of it [START_REF] Ng | Language-based discrimination: Blatant and subtle forms[END_REF].
Over the last five decades, people who were aware of linguistic inequalities in their languages have expressed concern over the role of masculine generics in contributing to the underrepresentation of women in many fields. As a remedy to the male-oriented language structure, alternative forms that are considered more gender-fair have been proposed to replace masculine generics. For instance, the French community has introduced several gender-fair forms: double-gender (e.g. étudiants et étudiantes 'studentsmasc and studentsfem'), and contracted forms using a slash (e.g. étudiant/es), a dash (e.g. étudiant-e-s), brackets (e.g. étudiant(e)s), and a middot (e.g. étudiant•e•s) [START_REF] Abbou | Double gender marking in French: A linguistic practice of antisexism[END_REF]. In a similar vein, German has seen the existence of contracted forms with an asterisk (e.g. Reporter*in 'reportermasc*fem') [START_REF] Kruppa | Does the Asterisk in Gender-fair Word Forms in German Impede Readability? Evidence from a Lexical Decision Task[END_REF], word pair forms (e.g. Politikerinnen und Politiker 'politiciansfem and politiciansmasc'), capital I (e.g. PolitikerInnen), and nominalized form (e.g. die Studierenden 'the students' derived from the verb studieren 'to study') [START_REF] Sato | Altering male-dominant representations: A study on nominalized adjectives and participles in first and second language German[END_REF]. Likewise, in English, alternatives such as pair pronouns he or she (also he/she), singular they, and contracted forms s/he and (s)he have been proposed as replacements of the male generic he [START_REF] Bodine | Androcentrism in prescriptive grammar: Singular 'they', sex-indefinite 'he', and 'he or she'1[END_REF][START_REF] Gastil | Generic pronouns and sexist language: The oxymoronic character of masculine generics[END_REF][START_REF] Hyde | Children's understanding of sexist language[END_REF] see [START_REF] Stahlberg | Representation of the sexes in language[END_REF], for a review), and in Swedish, a gender-neutral third person pronoun hen was invented as a substitute for han [START_REF] Gustafsson Sendén | Introducing a gender-neutral pronoun in a natural gender language: The influence of time on attitudes and behavior[END_REF].
The course of gender-fair language reform has never run smoothly. Initiatives for language reform were received with mixed responses by the public ([START_REF] Blaubergs | An analysis of classic arguments against changing sexist language[END_REF][START_REF] Parks | Contemporary arguments against nonsexist language: Blaubergs (1980) revisited[END_REF]; see [START_REF] Stahlberg | Representation of the sexes in language[END_REF] for a review). On the one hand, proponents of gender-fair language believe that the use of masculine generics in reference to a mixed-sex group leads to biased mental representations favoring males and, as a result, leaves women invisible or ignored; masculine formulations put women at a disadvantage, as they are less likely to be considered for roles presented in masculine forms, and women themselves also find it hard to identify with such roles and positions [START_REF] Stahlberg | Representation of the sexes in language[END_REF]. On the other hand, opponents of language reform responded with a list of reasons for not using gender-fair language forms: there is no causal relation between language structure and gender inequality, changing sexist language is a rather frivolous matter compared to other forms of injustice in society, people cannot be coerced to use a nonsexist language, language is not sexist by itself but instead it is the hearer/reader who interprets it in a sexist way, masculine generics are not male-biased, changing the language will estrange current and future generations from their historical heritage, and in extreme cases, people even admit to and support sexism and linguistic patriarchy [START_REF] Blaubergs | An analysis of classic arguments against changing sexist language[END_REF][START_REF] Parks | Contemporary arguments against nonsexist language: Blaubergs (1980) revisited[END_REF][START_REF] Stahlberg | Representation of the sexes in language[END_REF]. In French, additional skepticism was directed at the possible reading and learning difficulties created by the innovative, contracted gender-fair language forms (see [START_REF] Gygax | Féminisation et lourdeur de texte[END_REF]).
While proponents and critics of gender-fair language were trading arguments, a growing body of literature from the last five decades has documented converging evidence of male-biased interpretations of masculine generics compared to gender-inclusive alternatives (see Stahlberg et al., 2007 for a review). Beginning in the 1970s, researchers investigated the difference between generic he and gender-fair alternatives in English. Results
showed that relative to they and he or she, the male generic he was disproportionately associated with men [START_REF] Gastil | Generic pronouns and sexist language: The oxymoronic character of masculine generics[END_REF][START_REF] Hamilton | Using masculine generics: Does generic he increase male bias in the user's imagery?[END_REF][START_REF] Hyde | Children's understanding of sexist language[END_REF][START_REF] Mackay | Psychology, prescriptive grammar, and the pronoun problem[END_REF][START_REF] Martyna | What does 'he'mean? Use of the generic masculine[END_REF][START_REF] Martyna | Beyond the" he/man" approach: The case for nonsexist language[END_REF][START_REF] Moulton | Sex bias in language use:"Neutral" pronouns that aren't[END_REF]. In a similar vein, empirical studies revealed male-oriented representations when masculine generics were used in German and French, while gender-fair alternative forms improved women's visibility in mental representations [START_REF] Braun | Cognitive effects of masculine generics in German: An overview of empirical findings[END_REF][START_REF] Gabriel | Au pairs are rarely male: Norms on the gender perception of role names across English, French, and German[END_REF]Gygax et al., 2008[START_REF] Gygax | The masculine form and its competing interpretations in French: When linking grammatically masculine role names to female referents is difficult[END_REF]Gygax & Gabriel, 2008;[START_REF] Hansen | The Social Perception of Heroes and Murderers: Effects of Gender-Inclusive Language in Media Reports[END_REF][START_REF] Sato | Altering male-dominant representations: A study on nominalized adjectives and participles in first and second language German[END_REF]Stahlberg et al., 2001;Stahlberg & Sczesny, 2001. These studies thus justified the concerns over sexist language forms across cultures by showing that masculine generics indeed evoke representations favoring males, leaving the roles of women largely forgotten.
Although proposals for language change encountered strong resistance at the beginning, over time gender-inclusive language forms have come to be accepted and applied more commonly in public communication. For instance, in Germany, German-speaking Switzerland, and Austria, using gender-fair generic forms in official language has become a well-established norm [START_REF] Bußmann | Engendering female visibility in German[END_REF][START_REF] Formanowicz | Capturing socially motivated linguistic change: How the use of gender-fair language affects support for social initiatives in Austria and Poland[END_REF][START_REF] Mucchi-Faina | Visible or influential? Language reforms and gender (in) equality[END_REF][START_REF] Sarrasin | Sexism and attitudes toward gender-neutral language[END_REF]; the use of generic he in English declined over time and the appearance of gender-inclusive forms grew significantly in public discourse [START_REF] Rubin | Adopting gender-inclusive language reforms: Diachronic and synchronic variation[END_REF], and it is now common to encounter gender-inclusive formulations in both oral and written English [START_REF] Sarrasin | Sexism and attitudes toward gender-neutral language[END_REF]; in Sweden, people's attitudes towards the gender-neutral pronoun hen became more positive over time, and accordingly they were more willing to use it in everyday communication [START_REF] Gustafsson Sendén | Introducing a gender-neutral pronoun in a natural gender language: The influence of time on attitudes and behavior[END_REF]; additionally, even though gender-fair language, especially the middot form, remains a controversial topic in France, gender-fair language forms appear more and more often in official language (e.g. the middot form is adopted by the City Halls of Paris https://www.paris.fr/municipalite and Lyon https://www.lyon.fr/solidarite).
Controversial Impact of Gender-Fair Language
Previous studies have provided consistent evidence of gender-inclusive language reducing male bias in people's mental representations (thus increasing women's presence in the minds of language users). Now, one question to ask is what social impacts gender-fair language brings about. This may involve the perceived competence and social status of women, the inclusion of females in the workplace, and how a gender-fair language user is perceived, to name a few. Answers to these questions are so far mixed. Some research results revealed positive effects of gender-fair language, as demonstrated by the findings that both women and men were more willing to apply for an opposite-sex job when the job advertisement was written in gender-inclusive language (e.g. man or woman) than when the language was gender-specific [START_REF] Bem | Does Sex-biased Job Advertising "Aid and Abet" Sex Discrimination? 1[END_REF]. Similarly, the French double-gender form (e.g. infirmier/infirmière, 'nurse.MASC/nurse.FEM') and contracted form (e.g. infirmier(ère), 'nurse.MASC(FEM)') augmented young adolescents' perceptions of professional self-efficacy, compared to the masculine form [START_REF] Chatard | Impact de la féminisation lexicale des professions sur l'auto-efficacité des élèves: Une remise en cause de l'universalisme masculin?[END_REF]. Furthermore, speakers of gender-inclusive language were perceived as less sexist and were evaluated more positively than those who used gender-exclusive language [START_REF] Greene | Effects of gender inclusive/exclusive language in religious discourse[END_REF], and applicants for the job of a spokesperson for UNICEF were rated as less sexist, warmer, more competent, and were more likely to be hired when they used gender-fair language than when they chose masculine generics [START_REF] Vervecken | Ambassadors of gender equality? How use of pair forms versus masculines as generics impacts perception of the speaker[END_REF]. Nevertheless, Horvath et al. (2016) showed that gender-fair forms in German and Italian increased women's visibility across male- and female-stereotyped professions, but the perceived social status and competence of people working in those professions were not affected by linguistic forms. A similar null effect of language form on the evaluations of professions was reported for the French language [START_REF] Gygax | Féminisation et lourdeur de texte[END_REF].
Inclusive language can help create an inclusive work environment. Using a mock interview method, [START_REF] Stout | When he doesn't mean you: Gender-exclusive language as ostracism[END_REF] found that participants' perceived sexism in the workplace, their sense of belonging, their motivation to pursue a job, and their identification with the job were influenced by the language form (i.e. masculine generic he vs. gender-fair he or she vs. gender-neutral one) used in the job description as well as by the interviewer. Their results demonstrated that job seekers, in particular women, perceived a lower degree of sexism exhibited by the interviewer, felt less ostracized, more motivated to pursue the job, and more identified with the job when gender-fair and gender-neutral language forms were used than when the gender-exclusive he was adopted [START_REF] Stout | When he doesn't mean you: Gender-exclusive language as ostracism[END_REF]. Consistent with this finding, Horvath & Sczesny (2016) reported that word pair forms improved the perceived fit of women applicants for a high-status position compared to the masculine form.
On the negative side, however, some evidence showed that gender-fair language could fail to live up to its good intentions by activating gender stereotypes that are disadvantageous to women. A good example is that in Italian, a female professor was seen as less persuasive and reliable when presented with a feminine professional title (e.g. professoressa) than with a masculine title (e.g. professore) [START_REF] Mucchi-Faina | Visible or influential? Language reforms and gender (in) equality[END_REF]. The negative evaluations invoked by the feminine title may be attributed to the derogatory associations with the suffix -essa in Italian [START_REF] Merkel | Shielding women against status loss: The masculine form and its alternatives in the Italian language[END_REF] and to the controversial state of gender-fair language reform in Italy at the time the study was conducted. By comparing the traditional masculine form with two feminine forms (i.e. the suffix -essa, and the neologisms -a and -e) in Italian, [START_REF] Merkel | Shielding women against status loss: The masculine form and its alternatives in the Italian language[END_REF] did show that feminized occupational titles with -essa led to a status loss for women relative to the masculine form and the neologisms -a and -e. Similarly, in Polish, feminine forms (e.g. with the suffix -ka) are often derived from masculine terms and carry derogatory connotations (e.g. referring to the "wife of" or "possessions of") [START_REF] Koniuszaniec | Language and gender in Polish[END_REF]. Accordingly, in a CV evaluation study, women applicants for a position were rated less favorably when introduced with a feminized title (e.g. nanotechnolożka 'nanotechnologist.FEM') than with a masculine one (e.g. nanotechnolog 'nanotechnologist.MASC') (M. [START_REF] Formanowicz | Side effects of gender-fair language: How feminine job titles influence the evaluation of female applicants[END_REF]); in a similar vein, female applicants for a fictional job were perceived as less competent by both women and men, but as less warm only by men, when a feminine title (e.g. aborolożka) was used [START_REF] Budziszewska | Backlash over gender-fair language: The impact of feminine job titles on men's and women's perception of women[END_REF]. Even though the deleterious effects of gender-fair language documented so far could be attributed to negative associations specific to some forms rather than to the practice of gender-fair language in general, one would anticipate some backlash effects of language reform in the short run.
Gender-fair language does not influence everyone to the same degree, just as people in a society are not unanimously for or against language reform. The effects of gender-fair language could be moderated by factors such as the status of language reform in a community and language users' attitudes on the issue. One would expect more positive influences of gender-inclusive language in a community where the usage of gender-fair language is well established than in a society for which language reform is novel and controversial. For instance, [START_REF] Formanowicz | Capturing socially motivated linguistic change: How the use of gender-fair language affects support for social initiatives in Austria and Poland[END_REF] observed backlash effects of gender-fair language with Polish speakers, who were not used to the presence of such forms, while positive effects were found with Austrian participants, for whom using gender-fair language was already a well-established norm.
Specifically, the authors presented a fictitious initiative to participants and asked for their evaluations. With Polish participants, the authors found that when the initiative was related to gender equality, the gender-fair language framing evoked more negative evaluations and less support, especially among male participants; when the initiative was unrelated to gender, language form did not show any effect on the evaluations. Conversely, the Austrian results showed that the gender-equality initiative was evaluated more favorably when presented in gender-fair form than in masculine form. In Swedish, another language that adopted a gender-fair form, [START_REF] Tavits | Language influences mass opinion toward gender and LGBT equality[END_REF] observed similar positive effects: women politicians were more likely to be acknowledged when the gender-inclusive pronoun hen was used.
With regard to the moderating role of people's attitudes and ideology, [START_REF] Sarrasin | Sexism and attitudes toward gender-neutral language[END_REF] found that individuals holding sexist beliefs were more inclined to express negative attitudes toward gender-related language reform. Furthermore, [START_REF] Formanowicz | Side effects of gender-fair language: How feminine job titles influence the evaluation of female applicants[END_REF] revealed that conservatives were more likely than liberals to devalue a female applicant presented with a feminine title, since the former tend to hold traditional gender role beliefs and are more resistant to social change and feminist reforms. That said, the current adherence to gender-fair language in the U.K., Austria, Sweden and other countries seems to suggest that a converging attitude toward language reform and, as a result, positive impacts of gender-fair language at the societal level are not unattainable.
Gender-Fair Forms: Which One to Choose?
Difficulties in finding a proper gender-fair alternative to masculine generics vary from one language to another. The task may be relatively more complicated for grammatical gender languages than for natural gender and genderless languages, given that changes of nominal forms in the former have consequences for grammatical agreement in adjectives, articles and other lexical categories. Constrained by the language structure, the advancement of language reform in grammatical gender languages often gets pushed back because of the unconventional look of newly invented forms that speakers find hard to accept. For example, in France, debates on sexist language date back to the late 1990s [START_REF] Mucchi-Faina | Visible or influential? Language reforms and gender (in) equality[END_REF], while gender-inclusive language, especially the middot form, remains a highly controversial subject as of 2021. Although various gender-fair forms (e.g. étudiants et étudiantes, étudiant/es, and étudiant•e•s) are frequently seen in the public space, such as in subway stations, and online in official language, there is no normative criterion as to which gender-fair form should be adopted. Oftentimes one finds multiple forms appearing in a single piece of writing. The chaotic state of gender-fair language use can be attributed to the arguable drawbacks of each individual form that put speakers off the idea: the double-gender form (e.g. étudiants et étudiantes) is long and repetitive; the splitting form with a slash (e.g. étudiant/es) places the feminine gender in a secondary position; and the middot form (e.g. étudiant•e•s) is claimed to create reading and learning problems.
Much of the criticism concerns the unease that speakers might experience when using gender-fair language. However, despite being intuitive, these claims are not empirically validated. [START_REF] Gygax | Féminisation et lourdeur de texte[END_REF] provided suggestive evidence that gender-fair forms did not make reading more difficult than it normally is. By asking participants to read texts containing different language forms (i.e. masculine: avocats 'lawyers.MASC'; double-gender: avocats et avocates 'lawyers.MASC and lawyers.FEM'; and contracted form with a dash: avocat-e-s 'lawyers.MASC.FEM'), the authors showed that the contracted form slowed down reading when it was shown the first time, compared to the masculine and double-gender forms, but the difference in reading speed disappeared when the contracted form was encountered a second and third time in the texts.
Furthermore, readers of the contracted form did not report any experience of reading difficulties.
The message from this study is that French readers and hearers may need to pause and process the contracted forms a bit more when encountering them for the first time, but this small initial cost will not defeat the human brain.
To answer the question of which gender-fair form should be used in France, one needs to consider the costs and benefits of language reform at the societal level. Currently, empirical data that would allow for such an analysis are still much needed. For instance, we need to assess the influence of language forms on the mental representations of gender groups, the effects of gender-fair forms on language learning and processing, the social impacts of language reform, and other aspects important to decision making. To provide data bearing on this analysis, Chapter 3 presents two empirical studies on the consistency of mental representations induced by gender-fair forms, especially the most controversial middot form. The experiments compared two candidate alternative generic forms (i.e. double-gender and middot) with the masculine form across professions of differing gender stereotypicality. Participants were asked to read a short text about a professional gathering taking place and to provide their estimates of the proportions of women and men present at the gathering. Consistent with existing evidence in English and German (e.g. [START_REF] Braun | Cognitive effects of masculine generics in German: An overview of empirical findings[END_REF][START_REF] Gastil | Generic pronouns and sexist language: The oxymoronic character of masculine generics[END_REF][START_REF] Hyde | Children's understanding of sexist language[END_REF][START_REF] Moulton | Sex bias in language use:"Neutral" pronouns that aren't[END_REF]Stahlberg et al., 2001), results of the two experiments showed that both gender-fair forms increased the presence of women in the minds of language users. We also compared the participants' estimates with data from a previous norming study [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF] to examine the consistency of those estimates and to establish whether some language form led to biased representations of gender ratios. The results suggested that depending on whether a profession is gender-balanced, male-dominated, or female-dominated, the two forms seemed to work differently, leading to various effects regarding the reduction of male bias and the consistency of representations.
From Gender Stereotyping to Gender Inequality
Stereotyping is "the attribution of general psychological characteristics to large human groups" [START_REF] Tajfel | Cognitive aspects of prejudice[END_REF]. Holding stereotypes makes people neglect individual differences and believe that all members of a group are similar [START_REF] Hogg | Social identifications: A social psychology of intergroup relations and group processes[END_REF], and once stereotypes are formed, they are spontaneously applied to members of a group [START_REF] Devine | Stereotypes and prejudice: Their automatic and controlled components[END_REF].
Characteristics such as gender and ethnicity are common sources of stereotypes. Take gender for example: men are often associated with agentic qualities (e.g. independent, competitive, arrogant and boastful) while women are thought of as having communal properties (e.g. warm, emotional, gullible and whiny) [START_REF] Bem | The Measurement of Psychological Androgyny[END_REF][START_REF] Deaux | Putting gender into context: An interactive model of gender-related behavior[END_REF][START_REF] Eagly | Gender stereotypes and attitudes toward women and men[END_REF][START_REF] Garg | Word embeddings quantify 100 years of gender and ethnic stereotypes[END_REF]. In accordance with these stereotypes, men and women are assigned different roles in society. Generally speaking, women are significantly underrepresented in senior leadership positions [START_REF] Bertrand | The gender gap in top corporate jobs[END_REF][START_REF] Soarea | 2013 Catalyst census Fortune 500 women executive officers and top earners[END_REF].
For example, a recent word embedding study revealed a century-long (from 1910 to 1990) gender segregation in occupations, i.e., males dominated high-status professions like architect, soldier, engineer, and judge, while females took on caregiver or entertainer roles such as nurse, housekeeper, midwife, and dancer [START_REF] Garg | Word embeddings quantify 100 years of gender and ethnic stereotypes[END_REF]. Similar patterns of gender distributions have been shown in norming studies and census data [START_REF] Gabriel | Au pairs are rarely male: Norms on the gender perception of role names across English, French, and German[END_REF][START_REF] Garnham | True gender ratios and stereotype rating norms[END_REF][START_REF] Kennison | Comprehending Pronouns: A Role for Word-Specific Gender Stereotype Information[END_REF][START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF].
The interaction between social role assignment and stereotypical beliefs about the two sexes perpetuates gender disparities across professional domains. According to Social Role Theory [START_REF] Eagly | Social role theory of sex differences and similarities: A current appraisal[END_REF][START_REF] Eagly | Social Role Theory of Sex Differences[END_REF], we often associate people with the roles they play in the society where they live, and at the same time, we perceive the qualities people exhibit when they are in those roles as their personal properties. Since women and men typically occupy different roles in a community, they are thought of as possessing different gender-specific properties. In return, the stereotyped gender qualities feed normative role beliefs that the two sexes should behave differently and take on different roles [START_REF] Eagly | Social Role Theory of Sex Differences[END_REF]. For example, the observation that women occupy the role of caregiver leads us to think that they possess caregiving properties (being warm, emotionally expressive, and mindful of other people's feelings) and that they should play the role of caregiver. However, women are considered less suitable for a leadership role, and are less often seen as assertive, dominant and aggressive, qualities that characterize a leader and decision maker, because females are a rarity in positions of authority.
There are two mechanisms by which gender stereotypes may affect the distributions of men and women: externally, through societal expectations, and internally, through individuals' identification with gender roles. On the one hand, stereotypes may blind us to differences between members of a group and, in consequence, bias our perceptions and judgments about individual group members. External bias, manifested in discriminatory practices against females in hiring and promotion processes, has been argued to contribute to the underrepresentation of women in traditionally male-dominated professions such as the STEM (Science, Technology, Engineering and Mathematics) fields [START_REF] Charlesworth | Gender in science, technology, engineering, and mathematics: Issues, causes, solutions[END_REF][START_REF] Koch | A meta-analysis of gender stereotypes and bias in experimental simulations of employment decision making[END_REF]. For instance, when applying for a lab manager position at a research-intensive university, a female candidate was evaluated less positively than a male candidate even though they had exactly the same resume (Moss-Racusin et al., 2012a). Similarly, the same performance review of a veterinarian was evaluated less favorably, and, as a result, lower salaries were proposed by employers, when the purported gender on the document was female rather than male [START_REF] Begeny | In some professions, women have become well represented, yet gender bias persists-Perpetuated by those who think it is not happening[END_REF], and, compared to men, women were less likely to be employed to perform mathematical tasks [START_REF] Reuben | How stereotypes impair women's careers in science[END_REF]. Furthermore, even when women did succeed in male-typed fields, they were penalized for violating the stereotype that women are communal but not competitive, as shown in less favorable evaluations of female leaders than male ones [START_REF] Eagly | Role congruity theory of prejudice toward female leaders[END_REF][START_REF] Heilman | Description and prescription: How gender stereotypes prevent women's ascent up the organizational ladder[END_REF][START_REF] Heilman | Gender in the workplace. Sex stereotypes: Do they influence perceptions of managers[END_REF]. These studies suggest that gender-stereotyped beliefs can induce biased perceptions and evaluations of women's qualities and competence, which may contribute to the sustained underrepresentation of women in professions of high social status and economic reward.
On the other hand, internalized gender stereotypes and social norms may bind individuals' behaviors. Once identified with a social group, individuals think and behave in conformity with the social norms that define the group [START_REF] Hogg | Social identifications: A social psychology of intergroup relations and group processes[END_REF]. The normative belief that the two sexes possess different qualities and should align with different social roles can constrain the life and career choices individuals make (see [START_REF] Knight | One egalitarianism or several? Two decades of gender-role attitude change in Europe[END_REF]). For example, the dearth of women in STEM fields has also been attributed, arguably, to females' lower interest in these fields and higher personal preferences for less math-intensive occupations [START_REF] Breda | Girls' comparative advantage in reading can largely explain the gender gap in math-related fields[END_REF][START_REF] Ceci | Women's underrepresentation in science: Sociocultural and biological considerations[END_REF][START_REF] Ceci | Understanding current causes of women's underrepresentation in science[END_REF]. According to this view, women's higher interest in persons relative to objects leads them to opt out of math-intensive fields even though they are just as competent as men (see [START_REF] Ceci | Women's underrepresentation in science: Sociocultural and biological considerations[END_REF] for a review). However, women's career choices may not reflect their vocational aspirations but instead be constrained by traditional gender stereotypes such as 'math is not for girls' and 'women are family-centered' [START_REF] Breda | Gender stereotypes can explain the genderequality paradox[END_REF][START_REF] Charles | Indulging our gendered selves? Sex segregation by field of study in 44 countries[END_REF][START_REF] Knight | One egalitarianism or several? Two decades of gender-role attitude change in Europe[END_REF]. Thus, individuals themselves are not free from the influence of gender norms when they decide what to do in their lives.
Attitudes on Gender Equality Impact Evaluations of Research on Gender Bias
It is indisputable that the STEM fields are male-dominated [START_REF] Shen | Inequality Quantified: Mind the Gender Gap[END_REF]. As illustrated above, the persistent underrepresentation of women has been attributed to multiple interacting factors, ranging from stereotype-based external biases (explicit and implicit) and inherent individual differences in math abilities to free or constrained life and career choices (see [START_REF] Ceci | Women's underrepresentation in science: Sociocultural and biological considerations[END_REF][START_REF] Charlesworth | Gender in science, technology, engineering, and mathematics: Issues, causes, solutions[END_REF]). Among these factors, previous research has provided evidence that discriminatory hiring and promotion practices disfavoring women partially account for the current gender disparities [START_REF] Begeny | In some professions, women have become well represented, yet gender bias persists-Perpetuated by those who think it is not happening[END_REF][START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF]Régner et al., 2019a;[START_REF] Reuben | How stereotypes impair women's careers in science[END_REF]. For example, in the study of [START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF], the experimenters asked professors at research-intensive universities to evaluate the CV of an undergraduate student for a lab manager position. All professors received the exact same CV, except that in one experimental group the applicant was given a male name and in the other group a female name. Results revealed a significant bias in favor of the male applicant, who was rated as more competent and hirable and was offered more mentoring and a higher starting salary than the female applicant with the same qualifications.
Despite research consistently showing that women are discriminated against in STEM fields, people's reactions to this evidence are varied [START_REF] Danbold | Men's defense of their prototypicality undermines the success of women in STEM initiatives[END_REF][START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF][START_REF] Moss-Racusin | Can evidence impact attitudes? Public reactions to evidence of gender bias in STEM fields[END_REF]. For instance, by asking participants to rate the quality of the research by [START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF], [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF] showed that males rated the research less favorably than female participants, and that this sex difference in evaluations was more pronounced among STEM experts than among the general public. A similar sex difference was documented by [START_REF] Moss-Racusin | Can evidence impact attitudes? Public reactions to evidence of gender bias in STEM fields[END_REF], who conducted a content analysis of the comments posted by readers of three press articles reporting on the research of [START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF].
The authors found more positive reactions (e.g., calls for social change) among female readers, and more negative reactions (e.g. justifications of gender bias) among males. The reported sex difference in reactions was ascribed to ingroup bias, a mechanism by which men defend their dominant identity in STEM [START_REF] Danbold | Men's defense of their prototypicality undermines the success of women in STEM initiatives[END_REF][START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF]. However, previous research also revealed that both men and women can act as defenders of the male-dominated status quo [START_REF] Charles | Indulging our gendered selves? Sex segregation by field of study in 44 countries[END_REF][START_REF] Glick | Beyond Prejudice as Simple Antipathy: Hostile and Benevolent Sexism Across Cultures[END_REF][START_REF] Glick | An Ambivalent Alliance: Hostile and Benevolent Sexism as Complementary Justifications for Gender Inequality[END_REF][START_REF] Jost | Exposure to benevolent sexism and complementary gender stereotypes: Consequences for specific and diffuse forms of system justification[END_REF][START_REF] Napier | The joy of sexism? A multinational investigation of hostile and benevolent justifications for gender inequality and their relations to subjective wellbeing[END_REF].
Finding the explanation for these differing responses to research on gender bias is critical to the advancement of gender equality. Individuals skeptical about the existence of COVID-19 and its ability to kill an infected person are more likely to violate social distancing rules, just as people who do not believe in the existence and severity of gender inequality tend to make biased judgments and decisions against women. There is some evidence that individuals who think gender inequality is no longer a problem are more inclined toward discriminatory views and practices. For example, Begeny et al. (2020) asked a group of managers in the profession of veterinary medicine (a profession in which men and women are equally represented in the United States) to evaluate the performance review of a vet. The experimenters randomly assigned a male or female name to the vet while keeping every other part of the review identical. The authors found that managers endorsing the belief that gender inequality is a problem of the past (given the balanced representation of men and women) were more susceptible to under-evaluating women's competence and worth, and, as a result, assigned fewer career opportunities and lower salaries to female professionals relative to their male colleagues who possessed the exact same qualifications [START_REF] Begeny | In some professions, women have become well represented, yet gender bias persists-Perpetuated by those who think it is not happening[END_REF]. Consistent with this finding, scientific evaluation committees promoted fewer women to elite research positions, especially those committees that rejected the belief that external barriers (e.g. discrimination) rather than internal abilities constrain women's success in academia and cause their underrepresentation in STEM fields (Régner et al., 2019b). These findings suggest that people who are unaware of, or have doubts about, the extant gender disparities are more often than not the ones who act on stereotypes, hold biased perceptions of women and make discriminatory decisions that contribute to the persistence of gender inequality.
Here, I propose that, in addition to ingroup bias, a person's moral attitudes on gender equity may play a crucial role in their reception of new information related to gender bias. In particular, the moralization of gender equality, that is, the tendency to see gender equality as a moral imperative that is central to one's personal identity [START_REF] Skitka | The Psychology of Moral Conviction[END_REF][START_REF] Skitka | Moral Conviction: Another Contributor to Attitude Strength or Something More[END_REF], can have significant implications for their factual beliefs and judgments about research touching on gender discrimination.
People tend to confirm what they believe to be true by selectively assimilating or rejecting new evidence, and more often than not, this tendency is moderated by their attitudes on the relevant issue. When faced with pro-attitudinal information, people are prompted to accept the information at face value as a way to validate their initial views, while for counter-attitudinal information, they would be skeptical about its relevance and reliability. This type of attitude-based (mis)trust in new information is a good demonstration of "motivated confirmation bias" [START_REF] Nickerson | Confirmation bias: A ubiquitous phenomenon in many guises[END_REF] or "motivated thinking" [START_REF] Kunda | Motivated Inference: Self-Serving Generation and Evaluation of Causal Theories[END_REF][START_REF] Kunda | The case for motivated reasoning[END_REF].
Motivated confirmation bias has been documented in many areas, such as biased processing of health messages [START_REF] Ditto | Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions[END_REF][START_REF] Kunda | Motivated Inference: Self-Serving Generation and Evaluation of Causal Theories[END_REF][START_REF] Liberman | Defensive processing of personally relevant health messages[END_REF], and differential evaluations of arguments and evidence pertaining to controversial social issues like anthropogenic climate change, nuclear power and vaccines [START_REF] Campbell | Solution aversion: On the relation between ideology and motivated disbelief[END_REF][START_REF] Edwards | A disconfirmation bias in the evaluation of arguments[END_REF][START_REF] Lewandowsky | Worldview-motivated rejection of science and the norms of science[END_REF][START_REF] Lord | Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence[END_REF][START_REF] Nisbet | The partisan brain: How dissonant science messages lead conservatives and liberals to (dis) trust science[END_REF][START_REF] Pennycook | Beliefs about COVID-19 in Canada, the UK, and the USA: A novel test of political polarization and motivated reasoning[END_REF][START_REF] Rutjens | Not all skepticism is equal: Exploring the ideological antecedents of science acceptance and rejection[END_REF][START_REF] Taber | Motivated skepticism in the evaluation of political beliefs[END_REF][START_REF] Washburn | Science denial across the political divide: Liberals and conservatives are similarly motivated to deny attitude-inconsistent science[END_REF].
For example, when confronted with new evidence pertaining to the crime-deterrent effects of the death penalty, both supporters and opponents of this practice were found to evaluate the information compatible with their prior beliefs as more convincing and reliable [START_REF] Edwards | A disconfirmation bias in the evaluation of arguments[END_REF][START_REF] Lord | Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence[END_REF].
Following this line of research, it seems plausible that individuals respond to research showing discriminatory practices against women in academia on the basis of their pre-existing moral attitudes, such that those who hold moral convictions about gender equality find the evidence of gender bias against females more convincing than those who are less morally concerned. Having a strong moral commitment to gender equality, however, may also prompt individuals to make systematic errors in judgment, such as inferring unwarranted causal relations from correlations.
To investigate the origin of individuals' differing reactions to evidence of gender bias, and in particular the effects of people's moral attitudes towards gender equality on their trust in science, Chapter 4 presents six experiments examining whether individuals' self-reported moral commitment to gender equality predicts their evaluations of a research summary demonstrating gender bias against vs. in favor of women, which may be consistent or inconsistent with their prior beliefs. Additionally, we examined whether individuals are more inclined to make inadequate inferences when faced with a palatable conclusion.
Introduction
The relationship between language and thought, and in particular the question of whether the structure of one's language has a deep influence on how we think about and perceive the world around us, has long intrigued philosophers, linguists, anthropologists, and psychologists [START_REF] Boroditsky | Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time[END_REF][START_REF] Levinson | Returning the tables: Language affects spatial reasoning[END_REF]Pinker, 2007;[START_REF] Whorf | Language, thought and reality[END_REF]. Strong views of Whorfianism, i.e. the theory that our native language strictly determines the representational structures and processes underlying thought, have been largely discredited (see [START_REF] Wolff | Linguistic relativity[END_REF]Reines & Prinz, 2009, for a review). For example, if linguistic differences radically determined thought, then we would not be able to create new words, and accurate translation from one language to another should not be so commonplace (Pinker, 2007).
There is, however, a body of more recent empirical work claiming to show that weaker versions of the Whorfian thesis may hold true: while language does not strictly determine thought, it may influence it in subtle and non-obvious ways (see [START_REF] Wolff | Linguistic relativity[END_REF]Reines & Prinz, 2009;Casanto, 2008, for a review). This topic has been examined from a number of angles, including the relationship between color terms and color perception [START_REF] Davies | A cross-cultural study of colour grouping: Evidence for weak linguistic relativity[END_REF][START_REF] Gilbert | Whorf hypothesis is supported in the right visual field but not the left[END_REF][START_REF] Thierry | Unconscious effects of language-specific terminology on preattentive color perception[END_REF][START_REF] Winawer | Russian blues reveal effects of language on color discrimination[END_REF], the influence of linguistic labels on conceptual category learning [START_REF] Boutonnet | Words Jump-Start Vision: A Label Advantage in Object Recognition[END_REF][START_REF] Lupyan | Language is not just for talking: Redundant labels facilitate learning of novel categories[END_REF], and the influence of language on spatial and temporal reasoning [START_REF] Boroditsky | Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time[END_REF][START_REF] Haun | Cognitive cladistics and cultural override in Hominid spatial cognition[END_REF][START_REF] Levinson | Space in language and cognition: Explorations in cognitive diversity[END_REF][START_REF] Levinson | Returning the tables: Language affects spatial reasoning[END_REF][START_REF] Li | Spatial reasoning in tenejapan mayans[END_REF][START_REF] Loewenstein | Relational language and the development of relational mapping[END_REF][START_REF] Majid | Can language restructure cognition? The case for space[END_REF][START_REF] Munnich | Spatial language and spatial representation: A crosslinguistic comparison[END_REF]. Critics of this view have argued that any observed effects are too small to have much ecological importance (Bloom & Keil, 2001;Pinker, 2007), can be explained through non-linguistic differences, such as differences in culture (Björk, 2008), or can be explained through relatively uninteresting task demands [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF][START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF].
Here we focus on one of the most popular examples of linguistic structure (putatively) influencing thought: The idea that the grammatical gender for nouns referring to objects causes speakers to conceive of those objects as having more masculine or feminine characteristics [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF][START_REF] Beller | Culture or language: What drives effects of grammatical gender?[END_REF][START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF]Bender et al., , 2016a[START_REF] Bender | Lady Liberty and Godfather Death as candidates for linguistic relativity? Scrutinizing the gender congruency effect on personified allegories with explicit and implicit measures[END_REF][START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF][START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF][START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF][START_REF] Haertlé | Does grammatical gender influence perception? A study of Polish and French speakers[END_REF][START_REF] Imai | All giraffes have female-specific properties: Influence of grammatical gender on deductive reasoning about sex-specific properties in german speakers[END_REF][START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF][START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF][START_REF] Saalbach | Grammatical Gender and Inferences About Biological Properties in German-Speaking Children[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF]. In order to understand what is at stake, we suggest a few theoretical distinctions here.
One can distinguish natural (i.e. biological) sex, conceptual gender, and grammatical gender. Biological sex refers to the physical characteristics of an individual's reproductive system which determine whether a person or an animal is male, female, or intersex. By definition this is not applicable to objects. Conceptual gender relates to mental representations of and associations with biological sex, and can thus go beyond the limits of biological traits. It can pick out social roles typically related to members of each sex or can refer to a form of personal identification based on an internal awareness. Thus, certain objects can also evoke strong gender associations. For example, "lipstick" is often associated with females because women and not men typically use lipstick in modern western societies. Finally, grammatical gender refers to a linguistic way of categorizing nouns into classes which triggers agreement in other words, such as articles, adjectives, or verbs [START_REF] Aikhenvald | How gender shapes the world[END_REF][START_REF] Corbett | Gender[END_REF]. A large proportion of the world's languages have a grammatical gender system [START_REF] Aikhenvald | How gender shapes the world[END_REF][START_REF] Corbett | Gender[END_REF]. These systems can be roughly divided into those that are sex-based and those based on other semantic features such as animacy or size. In sex-based grammatical gender systems, terms which clearly refer to males vs. females predictably fall, respectively, into the grammatical masculine and feminine categories. As for objects, they are assigned grammatical gender in a mostly arbitrary fashion in such languages [START_REF] Corbett | Gender[END_REF]. For example, "lipstick" in French (rouge à lèvres) is grammatically masculine despite its female gender association. Similarly, French nouns such as table ("table") and bureau ("desk") differ in their grammatical gender despite obvious semantic similarity (with the former being feminine in French and the latter being masculine).
The central question that has sparked so much scientific and public interest is whether the grammatical gender of nouns that refer to objects fundamentally changes how speakers of the language conceive of these objects. When an object has feminine gender, do we think of it as possessing more feminine characteristics (and vice-versa for objects that have masculine gender)?
While this question has been investigated from a variety of angles [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF][START_REF] Beller | Culture or language: What drives effects of grammatical gender?[END_REF][START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF]Bender et al., , 2016a[START_REF] Bender | Lady Liberty and Godfather Death as candidates for linguistic relativity? Scrutinizing the gender congruency effect on personified allegories with explicit and implicit measures[END_REF][START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF][START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF][START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF][START_REF] Haertlé | Does grammatical gender influence perception? A study of Polish and French speakers[END_REF][START_REF] Imai | All giraffes have female-specific properties: Influence of grammatical gender on deductive reasoning about sex-specific properties in german speakers[END_REF][START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF][START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF][START_REF] Saalbach | Grammatical Gender and Inferences About Biological Properties in German-Speaking Children[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF], it was brought to wider attention by a study summarized by [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF].
This study was conducted entirely in English on bilinguals whose native language was either German or Spanish. Participants were asked to associate adjectives with nouns whose translations were either grammatically feminine in German and masculine in Spanish or vice versa. These adjectives were then rated as describing masculine or feminine properties by a group of English speakers. The key finding was that people produced qualitatively different adjectives depending on the grammatical gender of the relevant nouns in their native language.
For example, the word "bridge" when translated into German is grammatically feminine (brücke) and when translated into Spanish is masculine (puente). For "bridge", German speakers were reported to have produced adjectives that were rated as more feminine (e.g. beautiful, elegant, fragile, peaceful, pretty, and slender), while Spanish speakers produced adjectives that were rated as more masculine (e.g. big, dangerous, long, strong, sturdy, and towering).
These results have seemingly entered into the public's understanding of how language influences thought.
For example, Boroditsky's TED talk describing this work (https://www.youtube.com/watch?v=RKK7wGAYP6k) has been viewed more than 5 million times on YouTube alone. [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] is also highly cited and continues to be cited in scientific work as evidence for an effect of grammatical gender on how we conceive of objects, despite the crucial experiment summarized in it never having been published. Given the apparent widespread belief in the underlying theoretical claims, it is thus worth assessing just how robust the empirical evidence is.
The experimental paradigm described in [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] is particularly well-suited to address the issue of a conceptual effect of grammatical gender, as it does not explicitly ask participants to rate how masculine or feminine they perceive a given noun to be and therefore does not suffer from obvious task demands that could potentially serve as alternative explanations. This separates the paradigm from those which ask participants to explicitly rate how masculine or feminine they find specific nouns referring to objects to be (e.g. [START_REF] Clarke | Gender perception in Arabic and English[END_REF]), to classify pictured objects as either masculine or feminine (e.g. [START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF]), or to assign a male or female voice to objects or animals (e.g. [START_REF] Haertlé | Does grammatical gender influence perception? A study of Polish and French speakers[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF]). In all of these cases, one can reasonably worry that any seeming gender effect merely reflects how participants explicitly reason through or strategize about the task at hand. In contrast, empirically robust results from the paradigm used by [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] would clearly strengthen the theoretical claim that grammatical gender does in fact influence object cognition. So far, one failed attempt at a conceptual replication [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF] has cast doubt on the replicability of the results. To the best of our knowledge, the use of different paradigms has similarly failed to generalize the results (e.g. Degani, 2007;[START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF]), thus reinforcing the need for empirical clarity.
Our approach
Here we offer two innovative conceptual replications of [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF] using a methodology broadly based on their original approach. The novelty of the experiments here is that we "stack the deck" in favor of the original hypothesis. We choose to do this based on extensive piloting work (see supplementary materials) in addition to the published non-replication [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF], both suggesting that any influence of grammar on conceptual object representations is either weak or non-existent. Given the question marks we had about these effects, we decided on the "stack the deck" approach such that if there were a true underlying influence of grammatical gender on object cognition, our methodology would provide the best possible chance of detecting it.
In particular, in contrast to previous studies, participants will be tested in their native language as opposed to in a second language that has no grammatical gender, such as English.
The choice to originally test bilinguals in English was meant to counter any effects that might bias the results in favor of the hypothesis [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF]. All else being equal, one would thus expect that testing directly in a language that contains grammatical gender (such as French or German) would only serve to enhance gender effects on thought. Moreover, related experimental work using other paradigms has shown that experiments conducted in the participants' native language can produce significant effects of grammatical gender [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF][START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF] and that English-French and English-Spanish bilinguals show weaker sex-stereotype thinking when tested in English than when tested in gendered Romance languages (Wasserman & Weseley, 2009).
A second way in which we increase the likelihood of detecting an underlying effect (if there is one to be found) is by including gender marked determiners on the relevant noun items.
One would presume that, if anything, making grammatical gender information more salient in this way would again only serve to enhance any underlying effects if they are truly present.
The advantage of our approach is that if (as suggested by our piloting results) we find null effects of grammatical gender on the masculinity/femininity of adjectives people associate with masculine vs. feminine nouns, this will allow for strong inference against the original hypothesis.
This inferential strength from a potential null result comes at the cost of inferential strength in the case of a positive result. In other words, if positive effects of grammatical gender on adjective choice are found, this could be due to the underlying hypothesis being true, or it could be due to possible confounds which were intentionally introduced to increase the odds of finding any effect. For example, perhaps mere statistical associations between grammatically gendered determiners and certain adjectives, which are independent of conceptualization, could drive a positive effect. We therefore also plan for "conditional" experiments: if positive effects are found when gendered determiners are visible (nouns presented in singular forms), we will run followup experiments in which the nouns will be presented in plural forms, thus the relevant determiners are no longer used (German group) or do not visibly carry gender information (French group). To ensure that any null effect we find is not due to a lack of statistical power, we will use Bayesian analyses. These methods will allow us to quantify our confidence in null results.
Experiment 1
In this experiment we will test the gender effect hypothesis for pairs of French nouns that are related in meaning but have opposite grammatical gender. The choice of studying French instead of Spanish or Italian is theoretically neutral in that putative effects of grammatical gender on object conceptualization are not intended to be language specific. Thus, this choice should not in principle affect the underlying hypothesis derived from the broader theory.
Choosing semantically related nouns as stimuli will allow us to reduce a possible bias due to word meaning. We will test nouns referring to objects, and add a control condition with nouns referring to persons. That is, as male and female persons are typically associated with different gender traits (Costa et al., 2001), we expect to observe a gender effect for person nouns.
Methods
We will manipulate two factors, noun type (i.e. referring to objects vs. persons) and grammatical gender (masculine vs. feminine).
Materials
Person nouns. As control items we selected 12 pairs of person nouns, including kinship terms (e.g. father, mother) and role names (e.g. king, queen) (see Appendix B1). Within each pair, the two nouns are semantically related, as one refers to a male person and the other to the female counterpart (e.g. père 'father' -mère 'mother'). Importantly, the grammatical gender of person nouns is consistent with the biological sex of their referents. For instance, père 'father' has masculine and mère 'mother' has feminine gender.
Object nouns. To avoid experimenter bias in selecting stimuli (see [START_REF] Strickland | Experimenter Philosophy: The Problem of Experimenter Bias in Experimental Philosophy[END_REF] for a discussion) we designed two tests to standardize our choices of object nouns. The first test asked a group of participants to generate semantically related noun pairs, and the second one invited another group of participants to rate these pairs on a scale for semantic relatedness. Note that as the items are meant to be presented with a gendered definite article, all nouns should be consonant-initial; indeed, before vowel-initial nouns, the masculine and the feminine articles (i.e. le and la) lose their vowel and hence become indistinct.
Noun pair generation: We recruited native French speakers (N = 155, 67 men, 86 women, and 2 of another sex), aged between 18 and 61 years (M = 33, SD = 11.1), as participants on the platform Clickworker (https://www.clickworker.com/) for what was described as a "linguistic task". According to their self-reports, none of them had learned a foreign language that had a grammatical gender system. They were paid 0.60 € for their participation.
The test was run online on Qualtrics (https://www.qualtrics.com). Participants were first shown instructions asking them to provide 12 pairs of semantically related nouns (e.g. sable "sand" -plage "beach") that obey several constraints: Each pair should consist of one masculine and one feminine noun; all nouns should start with a consonant, and this consonant should not be 'h' (as before many h-initial nouns the masculine and feminine articles become indistinct, just like before vowel-initial words); all words should refer to objects or concrete places and no word should refer to animals or persons.
When they finished reading the instructions, participants were then shown seven pairs that violated a given constraint, along with explanations of why those pairs were not acceptable.
Next, they were shown seven different pairs with only one of them obeying all constraints, and they had to find the acceptable one. Those who had found the right pair were told so, while those who had chosen an unacceptable pair were told why their choice was wrong and were shown the right pair. After receiving the feedback, all participants were able to move to the main task, which was to type in 12 word pairs. The above constraints remained visible during the test.
The test ended with questions about participants' sex, age and native language.
After removal of pairs containing compound words (e.g. brosse à dents 'toothbrush'), non-French words (e.g. ring), person nouns (e.g. princesse 'princess'), adjectives (e.g. pauvre 'poor') and abbreviations (e.g. CD), the list of responses comprised 883 noun pairs. We selected the 249 pairs that had been proposed by the largest number of participants (range: 2 -58 times)
to be used in the semantic relatedness rating test for final stimuli selection.
Semantic relatedness rating test: In order to assess the semantic relatedness of the pairs generated in the previously described phase, a norming test was run online using Labvanced (https://www.labvanced.com/).
In addition to the 249 target pairs, we included 83 filler pairs (i.e., 1/3 of the number of target pairs), created by pairing one masculine and one feminine noun taken from different pairs.
We then randomly divided the 332 word pairs (249 target and 83 filler pairs) into five lists, with each list containing between 65 and 67 pairs (i.e. 49 or 50 targets and 16 or 17 fillers).
Participants were randomly assigned to one of the five lists. The order in which the pairs of a relevant list were displayed was fixed across participants, with filler pairs being evenly distributed among target pairs.
We recruited native French speakers (N = 94, 59 men and 35 women), aged between 21 and 76 years (M = 43, SD = 11.9) on the crowdsourcing platform Foulefactory (https://www.foulefactory.com). We restricted recruitment to participants who had not learned a foreign language that also had a grammatical gender system. They were paid 0.50€ for their participation.
Participants were presented with the word pairs one by one and asked to indicate on a 6-point scale to what extent they thought the words were semantically related (1 -Not at all related, 6 -Perfectly related). We specifically made clear that they should not reflect too much on the task, but instead respond based on intuitions. After finishing all trials, participants answered questions about their sex, age, native language, and second language learned before age 10.
As a manipulation check, we ran a t-test on the average ratings between target and filler pairs. Results showed that target noun pairs (M = 4.56, SE = .008) were rated as more semantically related than filler pairs (M = 1.27, SE = .004; t(330) = 39.62, p < .0001). Eighty test pairs had received a mean score between 5 and 5.9 (M = 5.41, SD = .24). From these, we removed certain pairs such that the final set contained no duplicated nouns, i.e. nouns appearing in more than one pair. For example, consider the pairs toile -tableau (5.76), peinture -tableau (5.62), and peinture -pinceau (5.47). Both tableau and peinture appeared twice. As toile -tableau had the highest rating among the three, we excluded peinture -tableau because of the duplicate noun tableau (as well as all other pairs containing either toile or tableau), after which peinture -pinceau no longer contained a duplicate noun and was hence kept. Then, from the unique pairs, we removed those containing words with ambiguous meanings (e.g., the word mine in French can refer to ore mining, a pencil core, a facial expression, or an explosive device). The final set consisted of 52 pairs that were matched in number of syllables (Mmasc = 1.81, Mfem = 1.60, t(50) = 1.75, p = 0.09). See Appendix B2 for the complete list.
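For concreteness, the duplicate-removal step amounts to a greedy filter over the rated pairs, keeping pairs in descending order of mean rating and skipping any pair that shares a noun with an already kept pair. A minimal Python sketch of this logic (the pairs and scores below are placeholders, not the actual norming data):

# Greedy selection of non-overlapping noun pairs, ordered by mean semantic-relatedness rating.
# Illustrative sketch only: the pairs and scores are placeholders, not the real norming data.
rated_pairs = [
    (("toile", "tableau"), 5.76),
    (("peinture", "tableau"), 5.62),
    (("peinture", "pinceau"), 5.47),
    # ... remaining pairs with a mean rating between 5 and 5.9
]

selected = []
used_nouns = set()
for (noun_a, noun_b), score in sorted(rated_pairs, key=lambda item: item[1], reverse=True):
    # Skip any pair that shares a noun with a pair that has already been kept.
    if noun_a in used_nouns or noun_b in used_nouns:
        continue
    selected.append((noun_a, noun_b))
    used_nouns.update((noun_a, noun_b))

# selected -> [('toile', 'tableau'), ('peinture', 'pinceau')]; peinture -tableau is dropped.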
Procedure
The experiment will be conducted online using Labvanced software. It will consist of two parts, adjective generation and adjective rating, performed by different groups of participants.
Adjective generation. The first part of the experiment assesses which characteristics participants tend to associate with inanimate entities. Participants will be told that the experiment is about the associations between nouns (pre-tested and selected above) and adjectives, with no mention of our interest in comparing masculine vs. feminine nouns. They will be shown four short sentences and asked to complete each sentence with three different adjectives. Nouns will be presented as subjects of sentences in singular forms, preceded by gender-marking articles le and la respectively for masculine and feminine words. The adverb très 'very' is used in the sentences to elicit gradable adjectives, which are more likely to represent people's subjective evaluations of non-arbitrary objects' characteristics (Wheeler, 1972). The presence of the adverb will also eliminate set phrases such as courrier recommandé ("registered mail").
Examples of an object and a person noun embedded in the carrier sentence are shown in
(2).
(2) a. Object noun: main 'hand'
La main est très _____
'The hand is very _____'
b. Person noun: mère 'mother'
La mère est très _____
'The mother is very _____'
Participants will first be randomly assigned to either the object or person noun condition, and within each condition, randomly to a subset of four pairs. The goal is to assign random pairs to each participant while ensuring that each pair is seen by the same number of participants. To avoid making the comparison of grammatical gender too salient, each participant will see only either the masculine or the feminine word of a semantic pair. From the four pairs assigned to each participant, we will randomly select two masculine and two feminine nouns. We will also ensure that the two nouns within each pair (masculine and feminine) are shown to the same number of participants. The test will end with demographic questions about participants' sex, age, native language, foreign language, and country of residence.
We will exclude all non-adjectival and non-French responses, and select adjectives that are used more than once for the following rating task. There is no minimum number of adjectives to be rated, but to limit the test to a manageable scale, we will test a maximum of 200 adjectives (in the adjective rating task described just below). If more than 200 adjectives are generated, we will order them by the number of times they were used and remove individual adjectives, starting from the least frequent one, until there are 200 left.
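The adjective-selection rule just described (keep adjectives produced more than once, then cap the list at 200 by dropping the least frequent ones) amounts to a simple frequency filter. A sketch, assuming the cleaned adjectival responses are available as a flat list (placeholder data only):

from collections import Counter

# Placeholder data standing in for the cleaned, adjectival responses from the generation task.
responses = ["grande", "petite", "pratique", "grande", "lourde", "pratique", "grande"]

counts = Counter(responses)
# Keep only adjectives produced more than once.
candidates = [(adj, n) for adj, n in counts.items() if n > 1]
# If more than 200 remain, keep the 200 most frequently produced ones.
MAX_ADJECTIVES = 200
candidates.sort(key=lambda item: item[1], reverse=True)
selected = [adj for adj, n in candidates[:MAX_ADJECTIVES]]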
Adjective rating. The second part of the experiment tests if adjectives associated with feminine nouns are more likely to be related to female gender traits, and vice versa for masculine nouns. Participants will be randomly presented with adjectives and asked to indicate on a 7-point
Likert scale to what extent they think the words represent a masculine or a feminine quality (1 'Very masculine' -7 'Very feminine'). Adjectives that have gender inflection, i.e. varying forms for masculine and feminine gender agreement, will be presented in both forms (e.g. petit/petite 'smallmasc/smallfem'), while those having a single form for masculine and feminine gender will be shown in their unique form (e.g. pratique 'practical/convenient'). Participants will be told not to deliberate over the task but instead to respond based on intuitions. Each participant will be assigned to between 50 and 70 adjectives (the exact number will depend on how many items will be produced in the adjective generation test), presented one at a time in random order. As an attention check, they will be asked from time to time to recall the previous word they saw. There will be one such attention check every 10 trials over the test.
As before, the test will end with demographic questions about participants' sex, age, native language, foreign language, and country of residence.
Sample sizes
For the adjective generation test, we will randomly generate 10 partitions of the object and person nouns into thirteen and three subsets of four pairs, respectively. We aim for a sample size of 160, i.e. 130 for the object noun condition and 30 for the person noun condition, thus with each noun being shown to 10 participants. As data will be collected in batches with the exclusion criteria applied after a batch is completed, we may end up with a slightly bigger sample size than 160.
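One way to implement the partitioning and counterbalancing constraints above is sketched below (an illustration only; the variable names and the complementary-list scheme are our assumptions about how to satisfy the constraints, not the exact implementation used):

import random

def make_partition(pairs, subset_size=4, rng=random):
    # Shuffle the pairs and cut them into consecutive subsets of `subset_size`.
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    return [shuffled[i:i + subset_size] for i in range(0, len(shuffled), subset_size)]

def complementary_lists(subset, rng=random):
    # For a subset of four (masculine, feminine) pairs, build two complementary stimulus
    # lists: each list contains two masculine and two feminine nouns, and across the two
    # lists both members of every pair are shown equally often.
    masc_positions = set(rng.sample(range(len(subset)), k=len(subset) // 2))
    list_a = [m if i in masc_positions else f for i, (m, f) in enumerate(subset)]
    list_b = [f if i in masc_positions else m for i, (m, f) in enumerate(subset)]
    return list_a, list_b

rng = random.Random(0)
object_pairs = [(f"masc{i}", f"fem{i}") for i in range(52)]   # placeholders for the 52 pairs
subsets = make_partition(object_pairs, rng=rng)               # 13 subsets of 4 pairs
stimulus_lists = [complementary_lists(s, rng=rng) for s in subsets]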
For the adjective rating test, the exact sample size will depend on the number of adjectives to test. Each participant will rate between 50 and 70 adjectives. For each adjective, we aim for at least 25 ratings. Again, as data will be collected in batches with the exclusion criteria applied after a batch is completed, we may end up with a slightly bigger sample size than 25 per adjective.
Participants
Participants will be native French speakers, aged at least 18 years, who have not started learning a second language with a grammatical gender system before age 10, whose self-rated proficiency in any such language does not exceed 5 on a scale from 1 to 7, and who live in France. Participants in the adjective generation task will be paid 0.50€ and those in the adjective rating task 0.60€ for their time.
We will exclude participants who do not satisfy all our recruitment criteria. For both tests, participants who fail to complete the assigned task will be excluded from the analyses. If a participant took the study more than once, only the first set of responses will be kept. Finally, we will exclude data from participants who fail to answer all attention check questions correctly at the rating task.
Statistical analysis
We will first extract the mean rating on the masculine-feminine scale for each adjective from the rating task to define its female quality association index (FQA). The data from the generating task will be analyzed with a Bayesian linear mixed-effects model fitted with Stan (Carpenter et al., 2017), using uninformative priors. The dependent variable will be the FQA of the generated adjectives and the predictors will be the gender of the noun for which the adjective was generated (sum-coded), the type of entity denoted by the noun (object or person, treatmentcoded with object as the baseline), and their interaction. The model will include the maximal random effects structure for both participants and noun pairs (Barr et al., 2013). This includes random intercepts and random slopes for gender together with their correlation (gender is both within-participant and within-item), but no random slopes for object type, which is both between-participants and between-items. The Stan code for the model is provided in Appendix A1.
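In conventional mixed-model notation, the planned model corresponds schematically to the following (the exact parameterization and priors are those of the Stan code in Appendix A1; i indexes participants and j indexes noun pairs, and the dependent variable is the FQA of each adjective generated by participant i for a noun from pair j):

\mathrm{FQA}_{ij} = \beta_0 + \beta_1\,\mathrm{gender}_{ij} + \beta_2\,\mathrm{type}_{ij} + \beta_3\,(\mathrm{gender}_{ij} \times \mathrm{type}_{ij}) + u_{0i} + u_{1i}\,\mathrm{gender}_{ij} + v_{0j} + v_{1j}\,\mathrm{gender}_{ij} + \varepsilon_{ij},

with correlated random effects (u_{0i}, u_{1i}) \sim \mathcal{N}(0, \Sigma_{\mathrm{participant}}), (v_{0j}, v_{1j}) \sim \mathcal{N}(0, \Sigma_{\mathrm{pair}}), and residuals \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2).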
If grammatical gender does affect conceptualization, we expect the adjectives generated for an object noun with feminine grammatical gender to have a more feminine rating on average.
This would translate as a positive main effect of gender in our model. For person nouns, we strongly expect a clear effect of gender, so the interaction between gender and object type is expected to be positive. This interaction is therefore of little theoretical interest, but can be used to estimate the sensitivity of our design.
We will report the 95% HDI Credible Interval for the main effect of noun gender and its interaction with object type, as well as the posterior probability P(β>0) for each parameter. We will then compare the full model to a model without the main effect of gender, and report the Bayes factor. If the Bayes factor is superior to 3 and/or P(β>0) > .95, we will run the follow-up experiment with plurals to control for the role of gendered articles.
Experiment 2
One limitation of Experiment 1 is that we are unable to compare exactly the same word in the masculine vs. feminine form. While focusing on semantically related pairs within a language is a reasonable proxy, looking across languages at words with matched meanings but which differ only in their grammatical gender is arguably a "purer" test of the original theory (despite the fact that translation equivalents are rarely 100% equivalent). Thus, the aim of Experiment 2 is to extend our test of the gender effect hypothesis from within a single language to across languages.
This also brings us closer to the experiment summarized in [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF].
Here we will compare across French and German to investigate if grammatical gender influences native speakers' conceptualization of objects. In particular, we ask whether translation equivalents in French and German that have opposite grammatical gender (e.g., pont -Brücke 'bridge') will be associated differentially with more male vs. female qualities by native speakers of these two languages, such that speakers of both languages associate grammatically masculine and feminine nouns with typically male and female qualities, respectively. In contrast to [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF], instead of testing bilinguals in a non-gendered language like English, we instead test French and German speakers directly in their native language. We reason that if anything, doing this should increase the odds of finding an effect of grammatical gender on the femininity/masculinity of the adjectives produced, due to broad statistical associations that will likely have been built up between gender marked determiners and gender associated conceptual properties.
To address potential concerns about the influences of words' specific gender connotations, i.e. some words might be conceptually associated with males or females, we will run a control analysis with an added predictor for the gender association of English translations of the French/German pairs, measured independently.
Additionally, Experiment 2 will answer a secondary question of whether natural entities are more associated with female qualities and artifacts, with male qualities as suggested by previous studies (e.g. [START_REF] Mullen | Children's classifications of nature and artifact pictures into female and male categories[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF] by comparing words for natural items vs.
artifacts.
Methods
We will manipulate two factors: language (French vs. German) and noun gender (masculine vs. feminine). Employing a similar method as for Experiment 1, we will test translation equivalent nouns that are assigned opposite grammatical gender in French and German (i.e. masculine in French and feminine in German, and vice versa). Unlike in Experiment 1, we only test nouns for inanimate entities.
Materials
We selected 59 pairs of translation equivalent nouns with opposite gender in French and German; 30 are masculine in French and feminine in German, and 29 are feminine in French and masculine in German (see Appendix B3). Among them, 23 nouns refer to natural entities (e.g., 'mountain'), 27 to artifacts (e.g., 'spoon'), and 9 cannot be easily categorized (e.g., 'milk').
Gender is balanced within each of these categories.
All nouns denote concrete objects, and none of them denotes an animal, body part, or clothing item. In addition, none has more than one, frequent, meaning in one of the languages (e.g. we excluded French bureau, which means both 'desk' and 'office'), and none has a language-specific cultural connotation (e.g., we excluded the pair masque -Maske 'mask', as the COVID-19 crisis might have induced different attitudes towards face masks in France and Germany). All French nouns are consonant-initial (such that a preceding definite article is gender-marked).5

5 In order to select the pairs, we first retrieved a list of French nouns from the electronic dictionary Lexique (http://www.lexique.org/), which we ordered from highest to lowest lemma frequency according to Lexique's database of movie subtitles. Starting from the most frequent one, and taking into account the constraints mentioned above, we used an online French-German dictionary (https://fr.langenscheidt.com/francais-allemand) to look up the German translation equivalents. Whenever a French word had more than one translation, we only considered the first, most frequent, one. All pairs in the final selection were checked by a native speaker of German who has learned French and has lived in France for nearly two decades.
The French words contain between 1 and 3 syllables, the German ones between 1 and 4 (MFrench = 1.66; MGerman = 1.85; t(58) = -1.90; p = .06). Among the French words, there is no difference in number of syllables between masculine and feminine words (Mmasc = 1.80; Mfem = 1.52; t(57) = 1.75, p = .09), while among the German words, the feminine ones are longer than the masculine ones (Mmasc = 1.59; Mfem = 2.10; t(57) = -2.83, p < .006).
For the subset analysis that addresses the potential confound of gender association, we ran a pretest in a neutral language, English, that does not have a grammatical gender system. The English translations of the test words were rated on a 7-point scale according to whether they are more strongly associated with men or with women (1 "Men" -7 "Women"). Words were presented in random order across participants.
Procedure
The procedure will remain identical to that of Experiment 1. Participants will be tested in their native language.
The nouns will be randomly divided into 15 subsets, with 14 of them containing 4 words (2 masculine and 2 feminine in French) and one subset containing 3 words (2 masculine and 1 feminine in French). Participants will be randomly assigned to one of the 15 subsets and within each subset, words will be presented in random order.
The relevant sentences for the word bridge in French and German are shown in (3).
Sample sizes
For the adjective generation test, we aim for a sample size of 300 (150 for each language group), with each noun being presented to 10 participants. As data will be collected in batches with the exclusion criteria applied after a batch is completed, we may end up with a slightly bigger sample size than 300.
For the adjective rating test, the exact sample size will depend on the number of adjectives to test. Like in Experiment 1, each participant will rate between 50 and 70 adjectives.
For each adjective, we aim for at least 25 ratings. Again, as data will be collected in batches with the exclusion criteria applied after a batch is completed, we may end up with a slightly bigger sample size than 25 per adjective.
Participants
Participants will be native French and German speakers who live in France and Germany, respectively, aged at least 18 years, who have not started learning a second language with a grammatical gender system before age 10, whose self-rated proficiency in any such language does not exceed 5 on a scale from 1 to 7. Participants of the adjective generation task will be paid 0.50 € and those of the adjective rating task 0.60 € for their time. The exclusion criteria are identical to those for Experiment 1.
Statistical analysis
The data will be analyzed following the same procedure as Experiment 1. The predictors for this model will be noun gender (again, sum-coded), language (sum-coded), and their interaction. We will include the maximal random-effects structure, which this time includes random intercepts and gender slopes for participants, and random intercepts, gender and language slopes for noun pairs (but not their interaction), as well as all correlations. The Stan code for the model is provided in Appendix A2.
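Schematically, and with the same notational conventions as for Experiment 1 (the full parameterization is given in the Stan code in Appendix A2), the model is:

\mathrm{FQA}_{ij} = \beta_0 + \beta_1\,\mathrm{gender}_{ij} + \beta_2\,\mathrm{language}_{i} + \beta_3\,(\mathrm{gender}_{ij} \times \mathrm{language}_{i}) + u_{0i} + u_{1i}\,\mathrm{gender}_{ij} + v_{0j} + v_{1j}\,\mathrm{gender}_{ij} + v_{2j}\,\mathrm{language}_{i} + \varepsilon_{ij},

with correlated random effects (u_{0i}, u_{1i}) \sim \mathcal{N}(0, \Sigma_{\mathrm{participant}}) and (v_{0j}, v_{1j}, v_{2j}) \sim \mathcal{N}(0, \Sigma_{\mathrm{pair}}).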
The effect of interest is the main effect of noun gender, for which we will report 95% HDI credible interval, posterior probability P(β>0), and the Bayes factor in favor of a model without this main effect.
In addition to this main analysis, we will run the following three control analyses: First, as word length might influence perceived conceptual gender, we will control for the number of syllables by removing all pairs in which the German word has three or four syllables (N = 8).
The remaining 51 word pairs (i.e. 23 masculine in French and feminine in German, 28 the reverse) show no difference in number of syllables between the French and German equivalents (t(49) = 0.60, p = 0.55), nor between masculine and feminine in either language (French: t(49) = 0.89, p = 0.38; German: t(49) = 1.50, p = 0.14).
Second, a strong association of certain items with either men or women (e.g., cigar, apron) could obfuscate a potential influence of grammatical gender. To control for this, we will run a model including gender association of the English translation of the word as a predictor, as well as its interaction with grammatical gender.
Finally, we will compare words for natural items vs. artifacts to test the hypothesis that natural entities are more associated with women and artifacts, with men [START_REF] Mullen | Children's classifications of nature and artifact pictures into female and male categories[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF]. Leaving aside the nine items which are neither typically natural nor artificial, we would run a model similar to the main analysis, but with an extra predictor for object type (artifact vs. natural, sum-coded).
If the Bayes factor is superior to 3 and/or P(β>0) >.95 for the main analysis, and control analyses show that this effect is neither an artifact of word length nor gender association of the noun pairs, we will run the follow-up experiment with plurals to control for the role of gendered articles.
Pilot Studies
Pilot 1
This pilot is similar to Experiment 1 in design. The object and person noun conditions in this pilot were run separately at different times, with different participant groups, and slightly different experimental designs. To be consistent with descriptions of Experiment 1, and to help readers make sense of the results, we present the two conditions together as a single pilot.
Person noun condition
We used the same stimuli as described in Experiment 1, and similarly, it included two tasks: adjective generation and adjective rating.
Materials
The items were the same as those in Experiment 1, consisting of 12 pairs of person nouns in which the two nouns are semantically related, with one referring to a male person and the other to the female counterpart (e.g. père 'father', mère 'mother') (see Appendix B1).
Procedure
Adjective generation. The procedure of this test was similar to that of Experiment 1, with two exceptions. First, we used a single-trial design: participants were randomly presented with only one of the 24 person nouns. Thus, noun gender was a between-subject factor. Second, the words were presented in plural forms preceded by the gender-neutral determiner les, as illustrated in (4).
(4) Person noun: père 'father'
Les pères sont très___
'The fathers are very___'
We selected 23 adjectives that had been generated by at least two participants (range: 2 -13) (a total of 168 adjectives had been generated). These 23 adjectives represented 20% of all responses.
Adjective rating. The procedure was similar to that described in Experiment 1.
Participants were presented with the 23 adjectives one by one in random order, but unlike in Experiment 1 where a Likert scale will be used, here they were asked to indicate on a response slider to what extent they thought the adjectives described a masculine or feminine characteristic.
There were two versions of the slider, depending on whether the left and right endpoints were labeled Masculine ('Masculine') and Féminine ('Feminine'), respectively, or the reverse. The initial position of the indicator was placed at the midpoint of the slider, labeled as Neutre ('Neutral'). The slider version was counterbalanced across participants. Like in Experiment 1, adjectives with gender inflection, i.e. adjectives with different forms for masculine and feminine gender agreement, were presented in both forms, while those having a single form for masculine and feminine gender were shown in their unique form. Instructions on how to use the slider were shown to participants at the beginning of the test. The test ended with demographic questions about participants' sex, age, native language, and foreign language experience.
Participants
Using Foulefactory, we recruited 144 native French speakers (73 women and 71 men), aged between 19 and 68 y (M = 41, SD = 12.3), for the adjective generation task, and 80 different native French speakers (44 women and 36 men), aged between 19 and 69 y (M = 41, SD = 12), for the adjective rating task.
Object noun condition
The stimuli used here were different from those used for Experiment 1. However, like in Experiment 1, this condition consisted of two tasks: adjective generation and adjective rating.
Materials
Twelve pairs of semantically related French nouns referring to objects were used as items (see Appendix B4). Within each pair, the nouns are semantically related, but have opposite grammatical gender (i.e. masculine vs. feminine). For example, table 'table' and bureau 'desk' are close in denotation, with the former being grammatically feminine and the latter masculine.
Items were divided into two lists of 12, each list comprising 6 masculine and 6 feminine nouns, and the two nouns of each pair were assigned to separate lists.
Procedure
Adjective generation. The procedure was different from that of Experiment 1, as there was a pretest before the adjective generation task, and for this task, the noun items were not presented in a sentence. However, like in Experiment 1, the nouns were shown with a preceding gender-marking determiner.
Participants first went through a pretest on their knowledge about 'adjectives'. They read a sentence and answered how many adjectives were present in the sentence. They then received either positive or negative feedback depending on their response. As part of the negative feedback, the correct answer was given and the sentence was shown again to them with all adjectives in the sentence being highlighted. Regardless of their response, they were able to move to the main test.
After the pretest, participants were shown 12 nouns, one at a time, and asked to generate three adjectives to describe the object denoted by it. They were randomly assigned to either list of 12 nouns; hence they saw only one noun of each pair. Thus, in this condition, noun gender was a within-subject factor. Within each list, the nouns were shown in random order one by one.
All nouns were presented with a preceding gender-marking article le (for masculine nouns) or la (for feminine nouns). At the beginning of each trial, participants were told that answers other than adjectives would not be accepted. The test ended with two demographic questions about participants' age and sex.
We selected 19 adjectives as stimuli for the next rating test. They were the most common adjectives (range: 5 -13 times) provided for each noun (280 adjectives had been generated in total). Again, these 19 adjectives represented 38% of all responses.
Adjective rating. The procedure for this task was similar to the description of Experiment 1, except that a response slider, instead of a Likert scale, was used.
Another group of participants were shown the 19 adjectives one at a time in random order, and asked to rate on a response slider to what extent the adjectives describe characteristics related to men or to women. As in the person noun condition, there were two versions of the response slider and the slider version was counterbalanced across participants. Again, adjectives that have gender inflection were presented in both forms while those having a single form for masculine and feminine gender were shown in their unique form. The test ended with two demographic questions about participants' sex and age.
Participants
Using the crowdsourcing platform Foulefactory, we recruited 37 native French speakers (10 men and 27 women), aged between 28 and 67 y (M = 47, SD = 12), for the adjective generation task, and 35 different native French speakers (12 men and 23 women), aged between 23 and 64 y (M = 44, SD = 11.3) for the adjective rating task.
Results
A plot of FQA as a function of grammatical gender is shown in Figure 1. We performed the same analyses as planned for Experiment 1, except that the model did not contain by-participant random slopes for gender in the person condition (as participants only saw one noun). The mean posterior value for the effect of gender on object nouns was 0.07 (CI: [-0.54, 0.68], P(β>0) = 0.60). As a comparison, the mean posterior value for the interaction between noun gender and noun type was 1.11 (CI: [0.20, 2.0], P(β>0) = 0.990). Comparing models with and without an effect of gender on object nouns, the Bayes factor in favor of the null hypothesis was 10.0. This value is clearly in favor of the null hypothesis, yet the discrepancies between the object and person noun conditions and the small proportion of adjectives for which a rating was actually elicited cast doubt on the reliability of this result and call for a proper experiment with a cleaner design and higher sample size. Our Experiment 1 will address both problems: by testing more participants and introducing the intensifier très, which limits the class of possible adjectives, we should see fewer unique adjectives.
Pilot 2
This pilot is similar to Experiment 2 in terms of its cross-linguistic approach (French-German) and experimental procedure.
Methods
The stimuli used in this pilot were different from those used for Experiment 2. However, similar to Experiment 2, it consisted of two tasks: adjective generation and adjective rating.
Materials
Twenty-four nouns spanning various categories, including settings (e.g. bridge, mountain), objects (e.g. key, arrow), animals whose biological sex is not obvious (e.g. mosquito, snake), and abstract concepts (e.g. thought, need) were selected as stimuli (see Appendix B5).
Half of these nouns were grammatically masculine in French and feminine in German, and the other half were grammatically feminine in French and masculine in German. One word, "rain", was included in the French test but not in the German one since it does not have a plural form in German.
Procedure
Adjective generation. The procedure was similar to that for Experiment 2, except in the following two aspects. First, here each participant only saw one noun, and the nouns were presented in plural forms. Thus, French nouns were preceded by the gender-neutral determiner les, and German nouns were not preceded by any determiner. Next, the adverb "very" was not present in the carrier sentences for the two language groups. See the example for bridge below in French (5a) and German (5b). For the following rating test, we selected 100 French adjectives that had been generated by at least two participants (range: 2 -88) (except the word diluvien "diluvian" which had been generated only once), and 19 German adjectives that had been used by at least two participants (range: 2 -7). (Three hundred French adjectives and 95 German adjectives had been generated in total). The 100 French and 19 German adjectives represented 73% and 51% of all responses of the two groups, respectively.
Adjective rating. The procedure was similar to that of Experiment 2, except that here a response slider was used instead of a Likert scale. For the French group, the set of 100 French adjectives was randomly divided into four lists of 25. French participants were randomly assigned to one list, the items of which were presented in random order, one at a time. For the German group, participants were presented with all 19 adjectives, one at a time, in random order.
As described in Experiment 2, French adjectives with gender inflection, i.e. having different forms for masculine and feminine gender agreement, were presented in both forms, while those having a single form for masculine and feminine gender were shown in their unique form. Unlike in French, German adjectives do not have gender inflected suffixes, and thus all items in German were displayed in their unique form. Participants were asked to indicate on a response slider to what extent they thought the adjectives described a masculine or feminine characteristic. There were two versions of the slider, depending on whether the left and right endpoints were labeled 'Masculine' (French: Masculine; German: maskuline) and 'Feminine' (French: Féminine; German: feminine), respectively, or the reverse. The initial position of the indicator was placed at the midpoint of the slider, labeled as 'Neutral' (French: Neutre; German: neutrale). The slider version was counterbalanced across participants for each language group. The test ended with demographic questions about participants' sex, age, native language, and foreign language experience.
Participants
Using Foulefactory, we recruited 347 native French speakers (147 men and 200 women), aged between 21 and 69 y (M = 40, SD = 11), for the French adjective generation task, and 128 native French speakers (58 men and 70 women), aged between 19 and 68 y (M= 42, SD = 11.5), for the corresponding adjective rating task. None of them had participated in Pilot 1.
Using Prolific, we recruited 63 native German speakers (32 men and 31 women), aged between 18 and 60 y (M = 30, SD = 10.3), for the German adjective generation task, and 49 native German speakers (25 men and 24 women), aged between 20 and 58 y (M = 31, SD = 10.1), for the corresponding adjective rating task.
Results
A plot of FQA as a function of grammatical gender is shown in Figure 2. We ran the same analysis as planned for Experiment 2, except that there were no by-participant random effects (since each participant only saw one noun). We found a mean main effect of gender of 0.30 (CI: [-0.32, 0.93], P(β>0) = 0.84). The Bayes factor in favor of the null hypothesis was 5.6. Here again, the Bayes factor is in favor of the null hypothesis, albeit less strongly so compared to Pilot 1. Nevertheless, the imperfect design of this pilot and the imbalance between the number of French and German participants prevent us from accepting the null hypothesis on the basis of this value alone. Only if the proposed Experiment 2 also returns a high Bayes factor (despite being more biased towards a positive result) would we feel confident about this null result.
Introduction
Languages belonging to various language families have sex-based grammatical gender systems [START_REF] Corbett | Gender[END_REF][START_REF] Gygax | A Language Index of Grammatical Gender Dimensions to Study the Impact of Grammatical Gender on the Way We Perceive Women and Men[END_REF]. In these languages, each noun has either masculine or feminine (or, in some languages, neutral) gender, triggering agreement in words such as articles, adjectives, and pronouns. When referring to groups of humans, the masculine and feminine plural genders are used asymmetrically. That is, masculine plural forms are typically used to refer to: 1/ groups of men only, 2/ groups of men and women, and 3/ groups for whom the gender of the referents is unknown. In the latter two cases the masculine gender has a generic meaning. Feminine plural forms, by contrast, can only be used to refer to groups unambiguously and exclusively composed of women.
An example from French is shown in (6). As the subject is presented in a masculine plural form (6a), the sentence can be interpreted in three ways: 1/ male cashiers are on strike; 2/ male and female cashiers are on strike; 3/ cashiers whose gender is unknown are on strike.
However, if presented in the feminine form (6b), it unambiguously indicates that only female cashiers are on strike.
(6) a. Les caissiers sont en grève.
'The cashiersmasc are on strike.'
b. Les caissières sont en grève.
'The cashiersfem are on strike.'
The asymmetric roles of the masculine and feminine forms have intersected with heated social debates about gender equality in France and other countries [START_REF] Bodine | Androcentrism in prescriptive grammar: Singular 'they', sex-indefinite 'he', and 'he or she'1[END_REF][START_REF] Elmiger | La féminisation de la langue en français et en allemand: Querelle entre spécialistes et réception par le grand public[END_REF], as this asymmetry in language might reproduce and perpetuate an unequal social status between men and women [START_REF] Menegatti | Gender bias and sexism in language[END_REF]; see also [START_REF] Ng | Language-based discrimination: Blatant and subtle forms[END_REF]. According to one view, the default for masculine forms to represent a mixture of both genders leaves women underrepresented in language and hence underrepresented in the mind. The idea is that in seeing or hearing the masculine form, people are less likely to think of women, which in turn affects the way that women's roles in society are mentally represented. Proponents of this view thus argue for the replacement of the generic use of the masculine by alternative, gender-fair, linguistic forms in order to increase the visibility of females in language, and consequently, by hypothesis, in people's mental representations (for a review, see [START_REF] Sczesny | Can gender-fair language reduce gender stereotyping and discrimination[END_REF].
There are several gender-fair alternatives to the generic masculine form [START_REF] Abbou | Double gender marking in French: A linguistic practice of antisexism[END_REF].
One common one is a "double-gender" form, illustrated in ( 7).
(7) Les caissiers et caissières sont en grève.
'The cashiersmasc and cashiersfem are on strike'.
Another type of alternative is limited to written language only. It consists of the use of a contracted form, for instance by means of parentheses (les caissiers(ères)) or a slash (les caissiers/ères). In French, a more radical innovation in this area was instigated in 2015 by the Haut Conseil à l'Egalité entre les femmes et les hommes ('High Council for Equality between women and men'), namely a contracted form featuring a middot, as shown in (8).
(8) Les caissier·ères sont en grève.
'The cashiersmasc.fem are on strike'.
This form is largely known as écriture inclusive ('inclusive writing'), but here we will use the term "middot form", as écriture inclusive is really an umbrella term for multiple strategies against gender-stereotyped communication (see, for instance, the Manuel d'écriture inclusive by the French communication agency Mots-Clés6 ).
Replacing the generic masculine gender with an innovative and inclusive language form is not restricted to French. For instance, German has seen the introduction of gender-neutral nominalized adjectives and participles, e.g. die Studierenden 'the students' (cf. studieren 'to study') [START_REF] Sato | Altering male-dominant representations: A study on nominalized adjectives and participles in first and second language German[END_REF], as well as that of a word-internal capital 'I', as in LeserInnen 'readersmasc-fem' (cf. Leser 'readersmasc' and Leserinnen 'readersfem'; note that all German nouns are spelled with an initial capital). The latter is a spelling innovation that to some extent resembles the French middot.
A possible skeptical position concerning the use of gender-fair alternatives to the masculine form argues that linguistic forms have little influence on how people think about gender roles. According to this view, the generic use of the masculine gender does not bias people's mental representation against women and hence it is unnecessary to use a longer double-gender or an unconventional, deliberately invented form. In France, stronger forms of skepticism are targeted specifically towards the middot form, which is particularly controversial, including among linguists. 7 This form is argued to damage orthography, creating confusion and obstacles in learning to read and write. In its solemn declaration, L'Académie Française admonished French society for the idea of "inclusive writing" (meaning the middot form), warning people about its potential to estrange future generations from France's written heritage, undermine the status of French as a world language, and even put French in mortal danger.8 Accordingly, in 2017 the prime minister recommended not to use it in official texts.9 Going one step further, a number of parliamentarians proposed a bill in 2021, aiming to prohibit and penalize the use of the middot form in public administrations and organizations in charge of public services,10 and shortly after that, the minister of National Education ordered that it not be taught in schools.11 Despite these governmental restrictions, the middot form has become more and more widespread over the years. For instance, it appears in most Parisian universities'
brochures for some of their 2019-2020 undergraduate programs [START_REF] Burnett | Political dimensions of écriture inclusive in Parisian universities[END_REF].
Societal debates notoriously take place without reference to empirical evidence. Yet, there is a growing body of experimental work assessing the interpretation of both the masculine generic form and gender-fair alternatives in languages such as English, French and German (for a review, see [START_REF] Menegatti | Gender bias and sexism in language[END_REF]. Below, we will first review this research and then introduce our approach to addressing some outstanding questions in the current study.
Previous research
Previous studies on the interpretation of various linguistic forms have employed a variety of paradigms, with dependent measures such as sentence reading time, sentence plausibility judgment, proportion of women estimated to be present in a group of people described in a short text, and proportion of favorite women mentioned for a given profession. With regards to English, a so-called natural gender language (i.e. a language that marks gender on personal pronouns only), the generic singular pronoun he was found to favor the presence of men in people's mental representations compared to singular they and the alternative he/she (Gastil, 1990;[START_REF] Hamilton | Using masculine generics: Does generic he increase male bias in the user's imagery?[END_REF][START_REF] Martyna | What does 'he'mean? Use of the generic masculine[END_REF]. As for the masculine plural form of nouns, several studies have provided evidence that it likewise disfavors the presence of women in mental representations [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF][START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF][START_REF] Gabriel | Exchanging the generic masculine for gender-balanced forms-The impact of context valence[END_REF]Gygax et al., 2008[START_REF] Gygax | The masculine form and its competing interpretations in French: When linking grammatically masculine role names to female referents is difficult[END_REF]Gygax & Gabriel, 2008;Horvath et al., 2016;[START_REF] Irmen | What's in a (role) name? Formal and conceptual aspects of comprehending personal nouns[END_REF][START_REF] Irmen | Gender markedness of language: The impact of grammatical and nonlinguistic information on the mental representation of person information[END_REF][START_REF] Kollmayer | Breaking away from the male stereotype of a specialist: Gendered language affects performance in a thinking task[END_REF]Stahlberg et al., 2001). Most of them manipulated the gender stereotype of the noun, typically a role name (e.g. golfer, cashier, or spectator) [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF][START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF]Gygax et al., 2008[START_REF] Gygax | The masculine form and its competing interpretations in French: When linking grammatically masculine role names to female referents is difficult[END_REF]Gygax & Gabriel, 2008;Horvath et al., 2016;[START_REF] Irmen | What's in a (role) name? Formal and conceptual aspects of comprehending personal nouns[END_REF][START_REF] Irmen | Gender markedness of language: The impact of grammatical and nonlinguistic information on the mental representation of person information[END_REF].
As such stereotypes are activated during reading [START_REF] Banaji | Automatic stereotyping[END_REF][START_REF] Cacciari | Further evidence of gender stereotype priming in language: Semantic facilitation and inhibition in Italian role nouns[END_REF][START_REF] Carreiras | The use of stereotypical gender information in constructing a mental model: Evidence from English and Spanish[END_REF][START_REF] Garnham | Are inferences from stereotyped role names to characters' gender made elaboratively?[END_REF]Gygax & Gabriel, 2008;[START_REF] Kennison | Comprehending Pronouns: A Role for Word-Specific Gender Stereotype Information[END_REF][START_REF] Reynolds | Evidence of immediate activation of gender information from a social role name[END_REF], the generic (as opposed to the specific) interpretation of the masculine gender should be especially plausible for female-stereotyped groups. Yet, even for those groups the results revealed male biases. For instance, Gygax et al. (2008) presented French and German participants with two sentences. The first one named a group of professionals in the masculine plural form (e.g. les espions 'the spiesmasc'); the second sentence provided explicit gender information about one or more of the people in the group, and participants had to decide whether it was a sensible continuation of the first one. The authors compared three types of gender-stereotyped professions, chosen from a norming study they had run previously [START_REF] Gabriel | Au pairs are rarely male: Norms on the gender perception of role names across English, French, and German[END_REF]: masculine (e.g. spies), feminine (e.g. beauticians) or neutral (e.g. singers). Results showed that more positive responses were given when the second sentence explicitly referred to men rather than to women, regardless of the profession's stereotype. Thus, French and German speakers were more likely to match professionals presented in a masculine plural form with men than with women, even for female-stereotyped professions. (The experiment was also run in English, which does not have grammatically gendered nouns. English participants did show an effect of gender stereotype, such that continuation sentences referring to women or to men were deemed more likely for female- or male-stereotyped professions, respectively). [START_REF] Gygax | The masculine form and its competing interpretations in French: When linking grammatically masculine role names to female referents is difficult[END_REF] additionally found that this biased interpretation of masculine generics could be reduced but not suppressed if participants were explicitly reminded of the generic meaning of masculine forms.
Other studies have compared masculine plurals with gender-fair alternatives [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF][START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF][START_REF] Gabriel | Exchanging the generic masculine for gender-balanced forms-The impact of context valence[END_REF]Gygax & Gabriel, 2008;Horvath et al., 2016;[START_REF] Kollmayer | Breaking away from the male stereotype of a specialist: Gendered language affects performance in a thinking task[END_REF][START_REF] Sato | Altering male-dominant representations: A study on nominalized adjectives and participles in first and second language German[END_REF]Stahlberg et al., 2001;Stahlberg & Sczesny, 2001). Crucially, double-gender forms and, in German, the innovative capital-I and nominalized forms typically yield a stronger representation of women than masculine forms.
Only a few studies have examined whether this effect of grammatical gender is modulated by stereotype, with mixed results: [START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF] presented German participants with a short text about an annual meeting of a professional group, and asked them a few hours later to estimate the percentage of women present in the group. They found that compared to the masculine form, the double-gender form yielded higher estimated percentages of women for male- but not for female-stereotyped professional groups. Applying the same paradigm to French, however, [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF] observed a global increase in estimated percentage of women when a double-gender form was presented, relative to when a masculine form was displayed, but this effect was not modulated by the gender stereotype.
Finally, there have been some empirical studies on the potential difference between double-gender and innovative gender-fair forms. For German, Stahlberg et al. (2001) asked participants to name famous people in a given category (e.g. singers or politicians). They found that the use of the capital-I form in the question yielded higher proportions of women than that of a double-gender form (the latter yielding no higher proportion of women than the masculine form, a rare result). In Stahlberg & Sczesny (2001), participants were presented with the written name of a professional category, followed by a picture of a famous person; their task was to indicate whether the person belonged to the category. For female pictures, reaction times were faster when the name of the category was written in the capital-I form compared to the double-gender form. (Reaction times were slowest for the masculine gender condition.)
Concerning the effect of innovative forms, two more studies are worth mentioning. Both deal with Swedish, a largely non-gendered language but which, like English, has gender marking on personal pronouns. Interestingly, Sweden has introduced a gender-neutral pronoun, hen, complementing han 'he' and hon 'she'. Following a few years of debate, hen was officially adopted in 2015, quickly reaching widespread adherence and use [START_REF] Gustafsson Sendén | Introducing a gender-neutral pronoun in a natural gender language: The influence of time on attitudes and behavior[END_REF]. As shown by [START_REF] Tavits | Language influences mass opinion toward gender and LGBT equality[END_REF] and [START_REF] Lindqvist | Reducing a male bias in language? Establishing the efficiency of three different gender-fair language strategies[END_REF], hen boosts the presence of women in mental representations. For instance, participants in Lindqvist et al. (2019) read a job advertisement for a profession that according to official Swedish statistics is gender-balanced, followed by the description of an applicant. They then had to choose a picture of either a man or a woman, indicating who they thought the applicant was. When the applicant was referred to by means of a non-gendered noun (den sökande 'the applicant') the results showed a male bias; that is, a picture of a man was chosen more than 50% of the time. By contrast, there was no such bias when the applicant was referred to by means of the paired pronouns han/hon 'he/she' or the gender-neutral pronoun hen, both yielding about 50% choices of a picture of a man.
This last study is important for another reason: To the best of our knowledge it is the only one with a clear quantitative inference concerning the presence or absence of a bias induced by a given linguistic form in mental representations. Indeed, as participants made a forced choice between a male and a female applicant to a gender-neutral job, the data from each condition could be compared not only to that of the other conditions but also to the objective real-world baseline chance level of 50%. Sweden fares particularly well on closing the gender-equality gap (according to the 2020 Global Gender Gap Report of the World Economic Forum, it is ranked 4th), which makes the presence of the male bias when a neutral, non-gendered, noun is used particularly striking. (Note, though, that their experiment tested only a single profession.)
In summary, previous research with gendered languages such as French and German has shown that compared to the masculine plural form, the double-gender and innovative forms boost the presence of women in mental representations. Whether the effect of linguistic form is modulated by stereotype, though, is largely an open question. Furthermore, while in German there is a difference between double-gender forms and the innovative capital-I, with the latter yielding the largest difference compared to the masculine form, no research has yet examined the effect of the middot writing form in French, and its potential difference in comparison with a double-gender form. Finally, the question of whether in these languages a given linguistic form induces mental representations that consistently reflect the proportions of men and women, real or perceived, in specific societal groups or in the society as a whole is yet unanswered. That is, establishing that the presence of women in mental representations differs depending on linguistic form is one thing; a quantified comparison with a benchmark (i.e. census data or normed estimates of real-world gender ratios) would additionally allow one to establish which linguistic form, if any, induces consistent mental representations, and to estimate the size of the bias (male- or female-oriented) induced by the other forms.
Current study
In order to shed light on the three open questions mentioned above, we conduct two on-line experiments on the mental representation of gender in text-based inferences in French. We compare the masculine, double-gender, and middot plural forms. Focusing on gender ratios in professional groups, we consider both gender-neutral professions (Experiment 1) and gender-stereotyped ones (Experiment 2). Importantly, we use a ratio estimation task, which allows us to compare our participants' responses to independently normed estimates of the proportion of men and women in the relevant professions [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF]. Comparison to such a benchmark is of critical importance to the question whether a given linguistic form induces a bias in mental representations.
Our experimental paradigm is an adapted version of the one used by [START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF] and [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF], in which participants read a short text on a professional gathering and are asked to estimate the percentage of women present at the gathering. Our most important modifications are that we test a variety of professions, such as to ensure that any observed effect generalizes across professions, and that, for practical reasons, we ask participants to provide their estimate immediately after having read the text rather than a few hours later. Additionally, we control for a possible effect of question framing by asking participants to estimate the percentages of men and women on a response slider, counterbalancing the order of appearance of the words 'men' and 'women' and the corresponding slider layout. We also use a shorter text, with two instead of four occurrences of the crucial piece of information. Like in these previous studies, though, we test participants on a single trial, since exposure to multiple trials might make them become aware of the experimental manipulations and develop a response strategy. Lastly, the professions are chosen from the French part of a norming study in which native speakers estimated the proportions of men and women in a great many professions [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF].
As in this norming study the influence of linguistic form was controlled at best (the endpoints of the rating scale showed the masculine and feminine forms, respectively, and the direction of the rating scale was counterbalanced across participants), these same norms also serve as the benchmark against which we compare our participants' estimates. (We acknowledge the fact that Misersky et al. collected their French data in Switzerland, while we test participants in France. Yet, this difference is unlikely to impact the benchmark's validity, given that high correlations were obtained across all seven languages investigated in the norming study; the six others were English (UK), German, Norwegian, Italian, Czech, and Slovak.)
Experiment 1
In this experiment, we compare three plural forms in French that can be used to refer to a group of mixed genders: the masculine, double-gender and middot forms. Importantly, we focus on professions with a perceived neutral stereotype, which should encourage participants to interpret the masculine plural form as generic, referring to both men and women, and hence potentially treat it on a par with the double-gender and middot forms. By comparing the masculine form to these alternatives, we test whether differences in linguistic form alter language users' inferences about gender ratios in the described scenarios. Specifically, we examine whether due to the ambiguity of the masculine form and its lack of explicit inclusion of female referents, this form disfavors women in mental representations relative to the other two forms. If this were the case, we should observe lower estimates of %-women in response to the masculine compared to the double-gender and middot forms. The latter two are both gender-fair but differ in two respects:
First, the middot is a more recent and more militant form than the double-gender, and second, the double-gender can be read aloud straightforwardly while the middot is essentially a spelling convention. We might therefore observe differential effects of these gender-fair forms, although it is unclear which form would be expected to boost the mental representation of women more.
Finally, we examine for each form the extent to which the mean estimate of %-women deviates from people's perception of real-world gender ratios in the professions at hand. We expect that in this respect, the double-gender and middot forms fare better (i.e., closer to a consistent representation) than the masculine form.
Unless otherwise specified, all aspects of the stimuli, procedure and analyses were preregistered (10.17605/OSF.IO/K649W).
Method
Stimuli
We selected six neutral-stereotyped professions whose French names have grammatical gender marking from the French part of [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF]'s norming study. With one exception (employé de banque 'bank employee'), the masculine and feminine forms differ not only orthographically but also phonologically (The pattern of results was the same when we removed the trials with this noun from the analyses). The estimated proportions of women in these professions are between .47 and .51 (M = .49, SD = .01) (see Appendix A).
We constructed the short passage shown in (9), describing a fictitious scenario where an annual gathering of some professionals took place. The profession name appeared twice in the text and no referential pronoun was used.
(9)
Le rassemblement régional des PROFESSION NAME a eu lieu cette semaine à Amiens. La localisation centrale de cette ville a été particulièrement appréciée.
Les PROFESSION NAME ont aussi adoré l'apéro offert à l'hôtel de ville le premier jour.
'The regional gathering of PROFESSION NAME took place this week in Amiens.
The central location of this city was particularly appreciated. The PROFESSION NAME also loved the aperitif offered at City Hall on the first day.'
The words régional and Amiens were replaced with européen and Francfort respectively for three professions, i.e. mathématicien 'mathematician', douanier 'customs officer' and astronaute 'astronaut', for it would be more plausible for these professionals to have a Europe-wide gathering in a more internationally-oriented city than a regional one in a provincial city.
We constructed three versions of this passage, varying the linguistic form of the profession name (masculine vs. double-gender vs. middot plural), as exemplified for one of the professions in ( 10). We also constructed two multiple-choice questions about the text, to serve as attention checks:
(11) Attention check questions
a. Qu'est-ce qui a été apprécié à propos d'Amiens ?
'What was appreciated about Amiens?'

The estimation question existed in two versions, depending on the order of the words for men and women in the sentence. At the end of the survey, participants were asked to fill in information about their native language, country of residence, gender, and age.
Participants
We recruited 195 participants. The data from 42 of them were removed from the analysis for the following reasons: one participated in a related experiment not reported on here, one did not complete the survey, 17 responded incorrectly at one or both attention check questions, and 23 did not satisfy all of our recruitment criteria (three were non-native speakers, two did not live in France, and among the ones recruited on Foulefactory, 18 reported an age outside of the requested range). For the 41 participants who took the survey more than once, we only kept their first response.
The data analysis thus included 153 participants (67 women and 86 men). They were native French speakers living in France, aged between 22 and 39 years (M = 30, SD = 2.7).
Three of them were recruited on the crowd-sourcing platform Clickworker (https://www.clickworker.com/), and all others on Foulefactory (https://www.foulefactory.com). (Foulefactory, the preregistered recruitment platform, has a large number of French workers, but presents two disadvantages compared to Clickworker. First, it only offers pre-specified age ranges, although a customized age range can be obtained for an extra 500€. Here, we chose the pre-specified range 25-34. Second, it lacks a good screening function: workers can do a task more than once, leading to considerable data loss. We therefore completed our sample by collecting the last three datapoints on Clickworker, with a customized age range of 20-40 years.)
Their random assignment to one of six conditions (three linguistic forms x two slider layouts) yielded a mean number of 25 participants (min = 24, max = 26) per condition.
Results and Discussion
Boxplots of the estimated percentages of women as a function of linguistic form are shown in Figure 3. These data were fit with a linear mixed-effects model by using the lme4 package [START_REF] Bates | Fitting Linear Mixed-Effects Models Using lme4[END_REF] in the programming software R (R Core Team, 2020) and RStudio (RStudio Team, 2020). Statistical significance was assessed by means of the Anova function in the car package [START_REF] Fox | An {R} Companion to Applied Regression (Third)[END_REF], and effect sizes were computed using the eta_squared function in the effectsize package [START_REF] Ben-Shachar | Compute and interpret indices of effect size[END_REF]. Linguistic form was contrast-coded and set as fixed effect; a random intercept was added for Profession. As we collected only one datapoint per person, we did not include a random factor for Participant.

Note that this model differs from the one we preregistered in that it does not contain fixed factors for Slider version and its interaction with Linguistic form. As neither in this nor in the next experiment this counterbalancing factor or any of its interactions affected the estimated %-women, and as omitting these terms did not change any of the results, we report the simpler models in both experiments for the reader's convenience. (The only other studies we know of that looked at order effects are the norming studies by [START_REF] Gabriel | Au pairs are rarely male: Norms on the gender perception of role names across English, French, and German[END_REF] and [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF]. For some languages, these studies reported higher estimates for women when the question framing and response slider showed a women-men order, but there was no such effect for French.)
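For concreteness, the model specification just described, together with the pairwise follow-up comparisons reported below, corresponds to an R sketch along the following lines. This is a minimal illustration under assumed names, not our actual analysis script: the data frame exp1 and its columns estimate, form, and profession are placeholders.

library(lme4)
library(car)         # Anova()
library(effectsize)  # eta_squared()
library(emmeans)

# assumed data frame: one row per participant, with the estimated %-women,
# the linguistic form shown, and the profession tested
exp1$form <- factor(exp1$form, levels = c("masculine", "double", "middot"))
contrasts(exp1$form) <- contr.sum(3)             # contrast coding of Linguistic form

m1 <- lmer(estimate ~ form + (1 | profession), data = exp1)

Anova(m1)                                        # test of the effect of Linguistic form
eta_squared(m1)                                  # effect size
pairs(emmeans(m1, ~ form), adjust = "mvt")       # pairwise comparisons of the three forms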
The results of the mixed-effects model, shown in Table 1, revealed an effect of Linguistic form. Restricted analyses with corrections for multiple comparison (mvt method), carried out with the emmeans package [START_REF] Lenth | emmeans: Estimated Marginal Means, aka Least-Squares Means[END_REF], showed that compared to the masculine plural form, higher estimates of %-women were obtained for the double-gender (β = 9.92, SE = 2.64, t(146) = 3.76, p < .001) and the middot form (β = 9.63, SE = 2.63, t(146) = 3.67, p < .001). By contrast, there was no difference between the double-gender and middot forms (t < 1). To compare the results to the norming data, we subtracted the normed %-women from the participant's estimate for each profession and each participant, and constructed intercept-only models with this difference score as dependent measure and a random intercept for Profession. In these models, a positive estimate for the intercept would thus indicate an overestimation of the presence of women compared to the benchmark, and a negative estimate an underestimation.

Experiment 2

Furthermore, given the lack of a difference between the double-gender and middot forms in Experiment 1, we expect these two forms likewise to yield similar results. Finally, as in Experiment 1, we also examine to what extent participants' estimates in the experimental context reflect people's perceived gender ratios in the real world, by comparing the results to the norming data of [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF]. If the double-gender and middot forms yield more consistent mental representations, they should fare better than the masculine form.
Unless otherwise specified, all aspects of the stimuli, procedure and analyses were preregistered on OSF (10.17605/OSF.IO/FCEWA).
Method
Stimuli
We selected six male-stereotyped professions (e.g. électricien 'electrician (masc)' / électricienne 'electrician (fem)') and six female-stereotyped ones (e.g., caissier 'cashier (masc)' / caissière 'cashier (fem)') from the same norming study as used in Experiment 1 [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF]. For all professions, the feminine plural form of their French name differs from the masculine one not only orthographically but also phonologically. The mean estimated proportions of men or of women, respectively, were above .70 (male-stereotyped: Mmen = .81, SD = .03; female-stereotyped: Mwomen = .78, SD = .05; t(10) = 1.12, p = 0.3). The 12 professions, together with the estimates of %-women in those professions, according to Misersky et al.'s norming study, are shown in Appendix A.
We used the same testing passage in the same three versions as in Experiment 1, shown in (9) and (10) above.

Restricted analyses paralleling those of Experiment 1 showed that compared to the masculine form, higher estimates of %-women were obtained for the double-gender form (β = 7.16, SE = 2.55, t(290) = 2.81, p < .02) and the middot form (β = 9.44, SE = 2.57, t(290) = 3.67, p < .001), while there was no difference between the latter two (t < 1).
In order to compare the results of this experiment to the ones by [START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF] and [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF], who tested only masculine and double gender forms across both stereotypes, we also performed the same regression analysis without the data for the middot form.
The results of this analysis, which was not preregistered, did reveal an interaction of small effect size (β = 2.45, SE = 1.27, t = 1.98, χ² = 3.92, p < .05, partial η² = 0.02), such that for male-stereotyped professions, the double form yielded higher estimates of %-women than the masculine form (β = 12.0, SE = 3.52, t(194) = 3.42, p < .001), while for the female-stereotyped professions no difference was found between the linguistic forms (t < 1).
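More generally, the Experiment 2 model with the Stereotype by Linguistic form interaction, and the restricted comparisons of the forms within each stereotype, can be sketched in R along the same lines as above (exp2 and its column names are again placeholders rather than the actual analysis script):

# packages lme4, car, and emmeans loaded as in the sketch above;
# exp2 has contrast-coded columns stereotype and form, as for Experiment 1
m2 <- lmer(estimate ~ stereotype * form + (1 | profession), data = exp2)
Anova(m2)                                              # Stereotype, Linguistic form, interaction

# follow-up comparisons of the linguistic forms within each stereotype,
# with multivariate-t correction for multiple comparison
emmeans(m2, pairwise ~ form | stereotype, adjust = "mvt")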
Finally, following the same procedure as in Experiment 1, we carried out non-preregistered post-hoc analyses to compare the results to [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF] norming data shown in Appendix A. (In two of the models, i.e. male stereotype with double-gender form, and female stereotype with middot form, the random factor was not taken into account by lmerTest because its estimated variance was zero or close to zero.) For the male-stereotyped professions, we found that the double-gender and middot forms yielded an overrepresentation of women (double-gender: β = 19.26, SE = 2.02, t = 9.52, p < .0001; middot: β = 19.18, SE = 3.00, t = 6.40, p < .003), with the masculine form trending in the same direction (β = 7.47, SE = 3.19, t = 2.34, p < .07). For the female-stereotyped professions, conversely, we found that all three forms induced an underrepresentation of women, which, as indicated by the values of the beta coefficients, is numerically largest for the masculine and smallest for the middot form (masculine: β = -20.24, SE = 3.63, t = -5.57, p < .003; double-gender: β = -17.89, SE = 2.97, t = -6.02, p < .002; middot: β = -12.39, SE = 2.83, t = -4.37, p < .0001).
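The benchmark comparison just described amounts to fitting, for each stereotype by linguistic form cell, an intercept-only mixed model on the difference between each estimate and the corresponding normed %-women. A minimal R sketch, with placeholder names (norm_women holding the Misersky et al. norms per profession), might look as follows:

library(lmerTest)                                # p-values for the intercept

exp2$diff <- exp2$estimate - exp2$norm_women     # participant estimate minus normed %-women

cell <- subset(exp2, stereotype == "male" & form == "double")
m_bench <- lmer(diff ~ 1 + (1 | profession), data = cell)
summary(m_bench)   # positive intercept = overrepresentation of women relative to the benchmark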
These results show that both stereotype and linguistic form affect inferences about gender ratios in male-and female-stereotyped professions. Note that the overall effect of linguistic form was similar to the one observed for neutral-stereotyped professions in the previous experiment:
the double-gender and middot forms yield higher estimates of %-women than the masculine form, and they do so to the same extent. The lack of a global interaction between linguistic form and stereotype is unexpected, since we did observe such an interaction in an almost identical pilot experiment with fewer participants (N = 142); that is, the increase in the estimations with the gender-fair forms was restricted to male-stereotyped professions. Here, we observed the interaction only in a restricted analysis without the data for the middot form. The same interaction pattern was present in the German data of [START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF], but not in the French data of [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF]. As the effect sizes of the interactions in the present experiment and in our pilot experiment are both small, limited statistical power might explain the overall now-you-see-it-now-you-don't pattern. We return to this point in the General Discussion.
As to the comparison with the norming data, the results suggest that none of the linguistic forms induces a consistent representation of the proportion of women for either male- or female-stereotyped professions. This contrasts with results for neutral-stereotyped professions in Experiment 1, where the gender-fair language forms indeed matched the normed ratios. Table 3 provides an overview of the comparisons with the norming data both for this experiment and the previous one, by showing the models' estimated differences, measured in percentage points, between our participants' estimates and the norms. A positive value indicates a female bias, a negative one a male bias. These data suggest that gender-fair language forms can rectify the male-biased representation induced by the masculine form for neutral-stereotyped professions, while they create a female bias for male-stereotyped professions and fail to correct the male bias for female-stereotyped professions.
General Discussion
In two on-line experiments, we investigated the influence of linguistic form and gender stereotype on the presence of women in mental representations of groups of people. French participants read a short text on a professional gathering and estimated the percentage of women present in the gathering. We deliberately opted for a complete between-participants design in which each participant was tested in a single trial, so as to avoid the emergence of response strategies. In each experiment we compared the masculine form, which is ambiguous since its interpretation can be both specific (i.e., referring to men only) and generic (i.e., referring to men and women), to two unambiguous alternatives for mixed-sex groups, i.e. the double-gender and middot forms. In Experiment 1 we tested neutral-stereotyped professions, and in Experiment 2 male- and female-stereotyped ones. In addition, we compared all experimental results to norming data from [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF]. These comparisons allowed us to establish for each stereotype which linguistic forms yield a mental representation in accordance with people's perception of gender ratios in the real world and which ones generate a biased representation. Our results can be summarized as follows.
In both experiments we observed lower estimates of %-women for the masculine form than for the double-gender and middot forms. Experiment 2 additionally revealed an effect of stereotype, with lower estimates for male-compared to female-stereotyped professions. This effect interacted with that of linguistic form only in a post-hoc analysis that left out the data for the middot form, showing higher estimates of %-women for male-but not for female-stereotyped professions when the double-gender form was shown. Compared to the norming data, for neutral-stereotyped professions we found a male bias with the masculine form but consistent estimates with the gender-fair forms. However, for male-stereotyped professions, this comparison showed consistent estimates with the masculine form but a female bias with the gender-fair forms, and for female-stereotyped professions it suggested a male bias with all linguistic forms.
As to the relatively increased representation of women induced by the gender-fair plural forms compared to the masculine plural, our results are in accordance with previous studies on both French [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF]Gygax & Gabriel, 2008) and German [START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF]Stahlberg & Sczesny;2001;Stahlberg et al., 2001;[START_REF] Gabriel | Exchanging the generic masculine for gender-balanced forms-The impact of context valence[END_REF]Horvath et al., 2016;[START_REF] Kollmayer | Breaking away from the male stereotype of a specialist: Gendered language affects performance in a thinking task[END_REF]. Both these languages have an innovative neutralizing form alongside the more conventional double-gender, i.e. middot in French and capital-I in German.
For German, there is evidence that capital-I yields an even higher representation of women than the double-gender form (Stahlberg & Sczesny, 2001, Stahlberg et al. 2001), while for French, no previous research has examined the effect of the middot form. Our study, however, suggests that there is no gradient effect of linguistic form, as we found no difference between the estimates for the double-gender and middot conditions in either experiment. Possibly, the diverging results are due to the fact that in German the innovative form is highly similar to the feminine form in writing and indistinct from it in pronunciation (German, e.g.: LeserInnen versus Leserinnen;
French, e.g.: électricien·nes versus électriciennes, but caissier·ères versus caissières and éboueur·euses versus éboueuses). In this respect, we can also recall the case of Swedish, a language that has introduced an innovative, neutral, personal pronoun hen, which complements male han 'he' and female hon 'she'. Hen reached widespread adherence in a few years, before its official adoption in 2015 [START_REF] Gustafsson Sendén | Introducing a gender-neutral pronoun in a natural gender language: The influence of time on attitudes and behavior[END_REF], which contrasts sharply with the situation in France, where the use of middot has remained an ideologically divisive issue. Yet, similar to what we found for French, Lindqvist et al. (2019) observed no difference in responses to the neutral hen compared to the double-gender form han/hon regarding the presence of women in mental representations. More research, though, is necessary to examine possible differing effects of double-gender and middot forms in French.
There is one caveat to be mentioned concerning the results for the masculine form. Gygax & Gabriel (2008) showed that the interpretation of the masculine plural as specifically referring to men is enhanced when participants have just read short, unrelated, texts containing double-gender forms. (They did not test if using a middot has the same effect, but there is no reason to think it would not.) As participants come to an experiment with all their previous language experience, the co-existence of gender-fair forms and generically intended masculine forms might have made our participants on average less inclined to embrace the generic interpretation than would have been the case before the rise of gender-fair language. Hence, the estimates of %-women in response to the masculine form might have been lower than what we would have seen one or more decades ago. As long as generically intended masculine forms coexist with gender-fair forms, this pragmatic backlash effect is expected to similarly be present in people's text interpretations in real-life situations. For future research, it would be interesting to examine whether participants' use of, familiarity with, or even adherence to gender-fair language affects their interpretation of the masculine plural.
The effect of stereotype has been shown many times [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF][START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF]Gygax et al., 2008[START_REF] Gygax | The masculine form and its competing interpretations in French: When linking grammatically masculine role names to female referents is difficult[END_REF]Gygax & Gabriel, 2008;Horvath et al., 2016;[START_REF] Irmen | What's in a (role) name? Formal and conceptual aspects of comprehending personal nouns[END_REF][START_REF] Irmen | Gender markedness of language: The impact of grammatical and nonlinguistic information on the mental representation of person information[END_REF]. Given that none of the three linguistic forms imply anything about the gender ratio in groups of mixed gender, it is easy to see why stereotype information influences judgments on this ratio. One might expect, though, that male and female stereotypes have differential effects when one of the gender-fair forms is presented. Specifically, the use of a double-gender or middot form might boost the presence of women in mental representations to a larger extent for male-stereotyped than for female-stereotyped professions. We found only limited evidence for this, despite the clear presence of such an interaction, albeit with a small effect size, in our Pilot 3. Might this latter result have been a stroke of luck? In the absence of a tool for computing power in linear mixed-effects models with interactions, this is hard to know.
We present here the results of a pooled analysis, for which we combined the data from Experiment 2 with that of the pilot. (None of the participants in Experiment 2 had participated in the pilot.) The pooled dataset contains a total of 447 participants aged between 20 and 40 (mean = 29.3, SD = 5.6), with on average 71 participants per condition (min = 32, max = 42). The same regression model as the one for the analysis of Experiment 2 revealed not only effects of Stereotype and Linguistic form, but also an interaction, as shown in Table 4. The pattern of results of this pooled analysis is identical to that of the pilot analyzed separately: First, the effect of Linguistic form reveals that compared to the masculine form, higher estimates of %-women are obtained for the double-gender form (β = 11.2, SE = 2.19, t(435) = 5.11, p < .0001) and the middot form (β = 11.4, SE = 2.16, t(435) = 5.29, p < .0001), while there is no difference between the latter two (t < 1). Second, restricted analyses of the interaction show that this pattern is most prominent for male-stereotyped professions (masculine vs. double: β = 17.8, SE = 3.05, t(438. The effect size of the interaction is smaller (0.02 vs. 0.05) but the χ² statistic is higher (10.6 vs. 6.53). This pooled data analysis, then, adds evidence supporting the hypothesis that gender-fair language forms increase the presence of women in mental representations especially for male-stereotyped professions. Yet, given its small effect size, we conjecture that a large sample size is needed in order to reliably observe the relevant interaction.

Recall that the two previous studies that examined the effect of linguistic form across two stereotypes compared the masculine to a double-gender form only [START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF][START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF]. Both adopted in essence the same paradigm as we did, but with just a single profession per stereotype, such that it is unknown whether their findings are generalizable across different professions. For French, [START_REF] Brauer | Un ministre peut-il tomber enceinte? L'impact du générique masculin sur les représentations mentales[END_REF] observed no interaction; with 73 participants, though, their sample size was probably too small. For German, by contrast, [START_REF] Braun | Können Geophysiker Frauen sein? Generische Personenbezeichnungen im Deutschen[END_REF] did report the expected interaction: the double-gender form increased the percentage of estimated women for male- but not for female-stereotyped professional groups.

Further research would be welcome to shed more light on the issue of differential effects of various gender-fair language forms depending on the associated gender stereotype. One specific question in this respect concerns a possible difference between the double-gender and middot forms, as suggested by the fact that it is the middot form that prevents a global interaction in Experiment 2. That is, as the middot is still relatively infrequent, its processing might take up resources that would otherwise be allocated to proportions of women below .30 or above .70).
For female-stereotyped professions, one may also wonder if putting the feminine form before the masculine, e.g., les caissières et caissiers 'the cashiers (fem-masc)', would render the presence of women in mental representations more salient, thus yielding more consistent representations. In everyday communication this 'feminine before masculine' presentation is standard in a few cases where a mixed-sex group of people is addressed (e.g., Mesdames et messieurs 'Ladies and gentlemen', and Bonjour à toutes et à tous 'Hello to all (fem-masc) of you'), but otherwise quite unusual. Previous research suggests that the order of words in a binomial phrase is associated with differential cognitive accessibility and relevance to a context [START_REF] Kesebir | Word Order Denotes Relevance Differences: The Case of Conjoined Phrases With Lexical Gender[END_REF][START_REF] Tachihara | Cognitive accessibility predicts word order of couples' names in English and Japanese[END_REF]. For instance, when asked to name familiar couples, people tended to mention first the person to whom they felt closer (and who was thus easier to bring to mind) [START_REF] Tachihara | Cognitive accessibility predicts word order of couples' names in English and Japanese[END_REF]. [START_REF] Kesebir | Word Order Denotes Relevance Differences: The Case of Conjoined Phrases With Lexical Gender[END_REF] showed that when "women" was mentioned before "men" in a binomial phrase, people were more likely to think of women as members of a group of activists, relative to when "women" was mentioned after "men".
Thus, reversing the order might indeed render the presence of women in mental representations more salient. (It is worth noting that the 'masculine before feminine' word order, more than a linguistic convention, reflects stereotypical beliefs about the two sexes [START_REF] Hegarty | When gentlemen are first and ladies are last: Effects of gender stereotypes on the order of romantic partners' names[END_REF].
Mentioning women before men may therefore also help fight traditional gender stereotype.)
Finally, we briefly turn to the social impact of gender-fair language. Depending on one's view on gender roles, it might be argued that the mismatches we observed between our experimental results and people's perception of gender ratios in the real world, with gender-fair language forms resulting in more balanced gender ratios for male- and female-stereotyped professions than those found in the norming data of Misersky et al. (2013), are desirable.
Specifically, the practice of using gender-fair language could weaken stereotypes [START_REF] Chatard | Impact de la féminisation lexicale des professions sur l'auto-efficacité des élèves: Une remise en cause de l'universalisme masculin?[END_REF]Horvath & Sczesny, 2016;Vervecken et al., 2015;Vervecken & Hannover, 2015; but see [START_REF] Merkel | Shielding women against status loss: The masculine form and its alternatives in the Italian language[END_REF], and hence over time, traditionally male-dominated professions might attract more women and female-dominated ones, more men. In other words, the use of gender-fair language could play a normative role in promoting more balanced real-world gender ratios in the long term. In the short run, there may be some side effects from the backlash against the use of gender-fair language. For instance, female job applicants were evaluated less favorably in a hiring process when introduced with feminine job titles than with masculine ones [START_REF] Budziszewska | Backlash over gender-fair language: The impact of feminine job titles on men's and women's perception of women[END_REF][START_REF] Formanowicz | Side effects of gender-fair language: How feminine job titles influence the evaluation of female applicants[END_REF] and professions presented with gender-fair language were estimated to earn lower salaries than with masculine forms (Horvath et al., 2016). (The above-mentioned studies were run with Polish, German and Italian participants. A fourth, smaller-scale, study with French participants did not find any influence of gender-fair language on the evaluation of professions [START_REF] Gygax | Féminisation et lourdeur de texte[END_REF].) As observed by Gustafsson Sendén et al. (2015), people's attitudes toward gender-fair language become more positive over time, thus any backlash effects might diminish with more exposure to the presence of gender-fair language forms.
To conclude, we showed that the generic use of the masculine plural and gender-fair alternatives differentially impact how people mentally represent and estimate gender ratios. In addition to adding important data to fuel public debate around gender-fair language, our results also potentially lead to new questions. For example, how do people prioritize consistency of mental representations vs. gender fairness when these come apart in their views about which linguistic forms are most desirable? And how do proponents of gender-fair language weigh the obvious orthographic drawbacks of the middot form against its potential advantage in terms of representational consistency compared to double-gender forms? Whatever the answers to these questions end up being, it is clear that existing and future empirical data should be a driving force in informing the trade-offs that must be considered in arriving at coherent policy decisions.
1994; [START_REF] Saint-Aubin | The influence of word function in the missing-letter effect: Further evidence from French[END_REF][START_REF] Staub | Failure to detect function word repetitions and omissions in reading: Are eye movements to blame[END_REF]. Thus, one might expect grammatical manipulations on pronouns to have less of an impact than similar manipulations on profession names. In order to test this, we exploit the fact that not all French profession names have different forms for male and female gender. Specifically, we contrast two minimally different scenarios: In one, the profession name has gender marking (e.g., caissier 'cashier (masc)' / caissière 'cashier (fem)', used in the examples above) and there is no referential pronoun. In the other one, the profession name has no gender marking (e.g. artiste 'artist (masc/fem)') but the grammatical gender information is present on a referential pronoun, i.e. elles 'she (pl)'. Moreover, we examine more directly to what extent the grammatical gender information is processed, by adding a multiple-choice question to test participants' recall of the crucial word after they have provided their estimate. We expect that if there is a difference, recall will be better when the crucial word is the profession name than when it is the pronoun.
Method
Stimuli
We selected 24 professions, twelve female-stereotyped and twelve male-stereotyped, from the French part of the same norming study [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF] as mentioned before. The 12 professions for the noun condition that have varying forms for the masculine and feminine gender (i.e. gender-marked) were the same items as used for Experiment 2. The other 12 professions have a unique form (i.e. not gender-marked) (e.g. male-stereotyped: bagagiste 'porter (masc/fem)'; female-stereotyped: fleuriste 'florist (masc/fem)') and are used in the pronoun condition. All professions were selected from among those with mean estimated proportions of men or of women, respectively, above .70 (male-stereotyped, gender-marked: Mmen = .81, SD

For those in the pronoun condition, the three choices were: masculine plural pronoun (ils), feminine plural pronoun (elles) and Je ne sais pas 'I don't know'.
Participants
The analysis included 201 (134 women and 67 men) native French speakers living in France, aged between 20 and 40 years (M = 28.2, SD = 6.2), who participated on the crowdsourcing platform Clickworker. Their random assignment to one of eight conditions (two gender stereotypes, two word types, and two slider layouts) yielded a mean number of participants per condition of 25 (min = 24, max = 26).
The data from an additional 41 participants were removed from analysis for the following reasons: one did not complete the survey, five took the survey for the second time, 17 responded incorrectly at one or both attention check questions, and the remaining ones did not satisfy all of our recruitment criteria (eight were non-native speakers, one did not live in France, and four were younger than 20, and five older than 40).
Results and discussion
Boxplots of the estimated percentages of women as a function of word type and gender stereotype are shown in Figure 5. These data were fit with a linear mixed-effects model by using the lme4 package [START_REF] Bates | Fitting Linear Mixed-Effects Models Using lme4[END_REF] in the programming software R (R Core Team, 2020) and RStudio (RStudio Team, 2020). Statistical significance was assessed by means of the Anova function in the car package [START_REF] Fox | An {R} Companion to Applied Regression (Third)[END_REF]. Stereotype (male vs. female) and Word type (nouns vs. pronoun) were contrast-coded and set as fixed effects together with their interaction term. A random intercept was added for Profession. As we collected only one datapoint per person, we did not include a random factor for Participant. Note that this model differs from the one we preregistered in that it does not contain fixed factors for Slider version and its interaction terms. As neither in this nor in the following experiments this counterbalancing factor or any of its interactions affected the estimated %-women, and as omitting these terms did not change any of the results, we report the simpler models throughout the article for the reader's convenience.
The results, shown in Table 5, revealed an effect of Word type, with higher estimates of %-women in the noun compared to the pronoun condition. By contrast, there was no effect of Stereotype nor an interaction.

Next, we consider the responses to the memory question. Five participants, all in the pronoun condition, could not choose between the male- or female-inflected forms of the crucial word: they replied that they were unsure. Their responses were coded as incorrect. The mean percentages of gender recall accuracy as a function of word type and stereotype are shown in Figure 6. We fit the data with a logistic mixed-effects model with contrast-coded fixed factors Stereotype, Word type, and their interaction, and with a random intercept for Profession. The results, shown in Table 6, revealed an effect of Word type, as nouns were more likely to be correctly remembered than pronouns. By contrast, there was no effect of Stereotype nor was there an interaction.

These results show that when grammatical information is unambiguous, i.e. indicating that a group of professionals consists entirely of women, participants' estimates of the %-women in the group are not influenced by the profession's stereotype. Thus, in this case gender stereotype does not override grammatical gender information, regardless of whether the gender marker appears on the noun or on a referential pronoun. The estimates, however, were not at ceiling (as in theory they should be), and, more importantly, they were lower when the grammatical information was present on a referential pronoun (mean: 74%) than when it was present on the noun (mean: 85%). Furthermore, gender recall of the crucial word was worse for pronouns (mean accuracy: 65%) than for nouns (mean accuracy: 93%), suggesting better processing of nouns than of pronouns, in accordance with previous research on differences between content and function words in reading [START_REF] Healy | Letter detection: A window to unitization and other cognitive processes in reading text[END_REF][START_REF] Saint-Aubin | The influence of word function in the missing-letter effect: Further evidence from French[END_REF][START_REF] Schindler | Error in proofreading: Evidence of syntactic control of letter processing?[END_REF][START_REF] Staub | Failure to detect function word repetitions and omissions in reading: Are eye movements to blame[END_REF].
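The logistic mixed-effects model for recall accuracy described above can be sketched in R as follows; the data frame pilot1 and its columns (recall_correct coded 0/1, stereotype, word_type, profession) are placeholders, not the actual analysis script.

# packages lme4 and car loaded as in the earlier sketches
# contrast coding of the two two-level factors
pilot1$stereotype <- factor(pilot1$stereotype)
pilot1$word_type  <- factor(pilot1$word_type)
contrasts(pilot1$stereotype) <- contr.sum(2)
contrasts(pilot1$word_type)  <- contr.sum(2)

# logistic mixed-effects model on gender recall accuracy
m_recall <- glmer(recall_correct ~ stereotype * word_type + (1 | profession),
                  data = pilot1, family = binomial)
Anova(m_recall)   # Wald chi-square tests of Stereotype, Word type, and their interaction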
Although with a single datapoint per participant we did not have sufficient statistical power to directly assess the effect of accuracy in the memory question on the estimate of %-women, the data do suggest that incorrect responses to the memory question are associated with lower estimates of %-women, especially for male-stereotyped professions. This can be seen in Table 7, which shows the number of participants and mean estimates of %-women as a function of Word type, Stereotype, and Recall response. Note that on average, estimates of %-women by participants who were correct on the memory question still did not exceed 90%. Various factors might be involved in this lack of ceiling performance: Some of these participants may have taken into consideration persons who were not denoted by the profession names but were present at the gathering (e.g. organizers, hotel staff, or accompanying family members, all of whom might be male). For others, their answer to the memory question might have been a guess which happened to be correct. Or, some might have misinterpreted the response slider when providing their estimate (six of them indeed estimated the % of women to be lower than 50%, ranging from 11% (for a female-stereotyped profession!) to 43%, with a mean of 27.9%). Last but not least, some might not have paid enough attention to the task.
Results
Boxplots of estimated percentages of women as a function of linguistic form and stereotype are shown in Figure 7. As in Pilot 1, the data were fit with a linear mixed-effects model. The model contained fixed factors for contrast-coded Stereotype, Linguistic form, and their interaction, as well as a random intercept for Profession. The results, shown in Table 8, revealed effects of Stereotype and an interaction. However, only a marginal effect of linguistic form was found.
As found in previous experiments, lower percentages of women were estimated for male-stereotyped professions (M = 26.5, SD = 17.3) than for female-stereotyped ones (M = 60.8, SD = 17.5). Conversely, the double-gender pronoun (M = 47.3, SD = 21.3) only slightly increased the perceived proportion of women compared to the masculine form (M = 46.1, SD = 27.0). The influence of linguistic form, however, was dependent on stereotype. Restricted analyses with corrections for multiple comparison (mvt method) showed that for male-stereotyped professions, the double-gender form increased the representation of women [β = 11.0, SE = 4.13, t(136) = 2.67, p < .01], while for female-stereotyped ones, no difference was found between the two linguistic forms (t < 1, p = 0.82).
Pilot 3
This pilot was included in the submitted manuscript. It is identical to Experiment 2 in the aspects of design, materials, and procedure.
Method
Stimuli
The stimuli were identical to those for Experiment 2.
Procedure
The procedure remained identical to that of Experiment 2 with one exception: the number of datapoints per profession within each group defined by stereotype, linguistic form, and slider direction was more variable.
Participants
Participants were native French speakers living in France (N = 142), 73 women and 69 men, aged between 21 and 40 years (M = 32, SD = 5.2). The mean number of participants per condition was 11 (min = 6, max = 17).
Results
Boxplots of estimated percentages of women as a function of linguistic form and stereotype are shown in Figure 8. Compared to female-stereotyped professions, a lower percentage of women was estimated for male-stereotyped ones.
As to the main effect of linguistic form, the pattern of results is consistent with that of Experiment 2 and the pooled analysis described above. Compared to the masculine form, higher estimates of %-women were obtained for the double-gender form (β = 21.1, SE = 4.23, t(136) = 4.98, p < .0001) and the middot form (β = 15.1, SE = 4.0, t(131) = 3.78, p < .001). However, there was no significant difference between the two gender-fair forms (β = -5.94, SE = 4.10, t(135) = -1.45, p > .1).
Concerning the interaction, restricted analyses showed that this pattern is most prominent for male-stereotyped professions (masculine vs. double: β = 29.6, SE = 5.57, t(122) = 5.32, p < .0001; masculine vs. middot: β = 14.0, SE = 5.78, t(127) = 2.43, p < .05; double vs. middot: β = -15.6, SE = 5.69, t(132) = -2.74, p < .02); while for female-stereotyped professions, the
Appendices
Appendix A. French profession names used in Experiments 1, 2, and Pilots 1 (noun condition) and 3, with mean percentages of women as rated by participants in [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF].
Introduction
The fields of Science, Technology, Engineering, and Mathematics (STEM) are indisputably male-dominated [START_REF] Shen | Inequality Quantified: Mind the Gender Gap[END_REF]. Despite the implementation of recent compensatory measures in STEM, women are still outnumbered by their male colleagues, especially in senior positions (James et al., 2019). The persistent underrepresentation of women has been attributed to multiple interacting factors, ranging from external barriers such as discriminatory hiring practices against women, to internal factors including gender differences in math abilities, lifestyle, and career choices (see [START_REF] Ceci | Women's underrepresentation in science: Sociocultural and biological considerations[END_REF][START_REF] Charlesworth | Gender in science, technology, engineering, and mathematics: Issues, causes, solutions[END_REF]. In particular, previous research has provided evidence of sex-based discrimination in academia, but it seems that the directionality of the bias varies depending on the context. For example, Moss-Racusin et al. ( 2012) asked professors at research-intensive universities to evaluate the CV of an undergraduate student for a lab manager position. All professors received the exact same CV while the experimenters manipulated the sex of the applicant by assigning them either a male or female name. Results revealed a significant bias in favor of the male student, who was rated as more competent and hirable and was offered more mentoring and a higher starting salary than the female applicant.
Similarly, a wealth of other studies found biased hiring processes in favor of males over female candidates, despite their comparable backgrounds [START_REF] Begeny | In some professions, women have become well represented, yet gender bias persists-Perpetuated by those who think it is not happening[END_REF][START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF]Régner et al., 2019;[START_REF] Reuben | How stereotypes impair women's careers in science[END_REF]. Conversely, some other studies observed a gender bias favoring females [START_REF] Breda | Teaching accreditation exams reveal grading biases favor women in male-dominated disciplines in France[END_REF][START_REF] Breda | Professors in core science fields are not always biased against women: Evidence from France[END_REF][START_REF] Williams | National hiring experiments reveal 2: 1 faculty preference for women on STEM tenure track[END_REF]. For instance, employing a similar CV assessment method, [START_REF] Williams | National hiring experiments reveal 2: 1 faculty preference for women on STEM tenure track[END_REF] found that women were actually preferred over men for a tenure-track position across both math-intensive and non-math-intensive fields.
discrimination against females can be explained by their prior expectations about the degree to which hiring practices in academia are biased against women.
Finally, in Experiment 4 we examined one particular way in which moral commitment to gender equality can be expected to influence perceptions of scientific research on the issue: by making participants more likely to make an inadequate inference when the conclusion is pro-attitudinal. When "people believe a conclusion is true, they are also very likely to believe the arguments that appear to support it, even when these arguments are unsound" (Kahneman, 2011), a fallacious form of reasoning called the "belief bias" (Pennycook, 2020). Specifically, we tested whether increased moral commitment to gender equality would be associated with a higher likelihood of viewing merely observational data showing that women are outnumbered by men in academia as proof that women are discriminated against because of their sex.
Experiment 1a
Using a research evaluation method adapted from [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF], Experiment 1a
was designed to answer the question of whether individuals' moral commitment to gender equality affects their trust in evidence related to gender bias. Specifically, we tested whether individuals higher on moral commitment to gender equity would perceive research findings showing hiring discrimination against women as more accurate and the methods employed as more reliable, regardless of whether they were male or female. Moreover, we examined whether any hypothetical sex difference in the reception of evidence as reported in [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF], would disappear after the variance explained by moral commitment was factored out.
All aspects of the experiment including the materials, procedure, and statistical analyses were pre-registered on OSF (10.17605/OSF.IO/UGPE4).
continuous variables (i.e. accuracy and reliability ratings, moral commitment and gender identity)
were centered by applying the scale function of R.
We first ran two simple linear regressions with sex as the sole predictor in the models.
Results showed no sex difference in the ratings of accuracy of the findings, despite numerically lower ratings of research accuracy provided by male participants (M = 6.40, SD = 2.14) than by female participants (M = 6.87, SD = 2.16; t = -1.70, p = 0.09). No sex difference was observed for the perceived reliability of the reported methods (men: M = 6.45, SD = 2.03; women: M = 6.42, SD = 1.96; t <1).
Then, we ran two linear regressions with moral commitment, sex and gender identity as predictors. Results showed that moral commitment positively predicted the perceived accuracy of the findings (β = 0.32, SE = 0.06, t = 5.17, p < .0001, η 2 = 0.10) and the perceived reliability of the methods (β = 0.18, SE = 0.06, t = 3.0, p < .003, η 2 = 0.10) of the summary reporting hiring bias against women in STEM. Again no sex difference nor any effect of gender identity was observed (all |t| <1).
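As an illustration, the centering step and the regressions just described correspond to an R sketch of the following kind; the data frame d1a and its column names are placeholders, not our actual analysis script.

# centering/standardizing the continuous variables with scale()
d1a$accuracy_z    <- scale(d1a$accuracy)          # perceived accuracy of the findings
d1a$reliability_z <- scale(d1a$reliability)       # perceived reliability of the methods
d1a$commitment_z  <- scale(d1a$moral_commitment)  # moral commitment to gender equality
d1a$identity_z    <- scale(d1a$gender_identity)

# regression with moral commitment, sex, and gender identity as predictors
fit_acc <- lm(accuracy_z ~ commitment_z + sex + identity_z, data = d1a)
summary(fit_acc)
# the same specification is then fit with reliability_z as the dependent variable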
We did not run the pre-registered mediation analysis as we did not observe any significant effects of sex and gender identity.
Plots of the research evaluations as a function of moral commitment and participant sex are shown in Figure 9.
Experiment 1b
To establish if the effects observed in Experiment 1a with U.K. participants can be generalized to another cultural context, Experiment 1b replicated Experiment 1a with U.S. participants.
Adopting the same design, we tested whether individuals higher on moral commitment to gender equality were more likely to trust research showing hiring discrimination against women, regardless of their sex. All aspects of the current study were pre-registered on OSF (10.17605/OSF.IO/2UG95).
Method
Participants
The data analysis included 419 U.S. participants (166 women and 253 men) aged between 18 and 88 years (M = 38, SD = 12.9) recruited on Amazon Mechanical Turk (https://www.mturk.com/). They were paid $0.50 for their participation. We determined the sample size by running a power analysis in G*Power 3.1 (Faul et al., 2009) which suggested a sample of 415 for a statistical power of 0.90. As data were collected in batches and we applied the exclusion criteria after each batch was complete, we ended up with a sample size slightly larger than the suggested one.
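For reference, a rough equivalent of such a sample-size calculation can be run in R with the pwr package; this is only a sketch (the calculation reported above was done in G*Power, not pwr), and the effect size f2 below is an assumed placeholder rather than the value actually entered in G*Power.

library(pwr)
# multiple regression with u = 3 predictors (moral commitment, sex, gender identity);
# f2 is an assumed small effect size, not the value used in the original calculation
pwr.f2.test(u = 3, f2 = 0.03, sig.level = 0.05, power = 0.90)
# the required sample size is approximately v + u + 1, where v is the returned denominator df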
In total, we recruited 768 participants. Data from 349 participants were excluded because they failed one of the two attention checks (note that we performed the same analyses with these participants included, and a similar pattern of results was observed; removing or including them did not qualitatively change the results), and 62 took the survey twice (only their first response was kept).
Materials and procedure
Materials and procedure of Experiment 1b were identical to those of Experiment 1a, except in the following aspects. First, in order to mitigate chances that participants would be distracted by information of secondary importance, we removed the name of the scientific journal (PNAS) where Moss-Racusin et al.'s (2012) article was originally published, as well as the names of the authors except the lead author (see Appendix A).
Second, a short introduction was added before showing the stimulus summary in order to better prepare participants for the task: "In this survey, we are interested in how people think about scientific findings. You will be asked to read very carefully a brief summary of a research article that was published in a scientific journal and then give your opinions as non-expert about the research".
Finally, we replaced our 1-item measure of moral commitment to gender equality with a 3-item scale in order to increase the reliability of the measure. The three items, inspired by [START_REF] Skitka | Moral Conviction: Another Contributor to Attitude Strength or Something More[END_REF], were now: 1) "Achieving gender equality in society is an absolute moral imperative"; 2) "The conviction that we need to fight for gender equality is central to my identity"; and 3) "Furthering gender equality should be the government's utmost priority".
Response scales ranged from [0] "Strongly disagree" to [10] "Strongly agree", with [5] "I don't mind" as the default slider position. The three items (α = 0.89) were averaged as our measure of moral commitment to gender equality.
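For illustration, a minimal R sketch of how such a multi-item scale can be scored is given below; the data file and item names (commit_1 to commit_3) are hypothetical.

```r
# Sketch of scoring the three moral-commitment items (hypothetical names)
library(psych)

d <- read.csv("exp1b_data.csv")
items <- d[, c("commit_1", "commit_2", "commit_3")]

psych::alpha(items)                            # internal consistency (alpha reported above as .89)
d$commitment <- rowMeans(items, na.rm = TRUE)  # composite moral-commitment score
```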
Results
We performed the same analyses here as we did for Experiment 1a. Similar to what was observed in Experiment 1a, men (M = 7.33, SD = 1.96) and women (M = 7.54, SD = 1.61) did not differ in the perceived accuracy of findings (t = 1.14, p = 0.26), nor in how reliable they perceived the methods to be (men: M = 7.44, SD = 1.83; women: M = 7.47, SD = 1.65; |t| < 1).
Replicating results of Experiment 1a, moral commitment to gender equality positively predicted perceived accuracy of research findings (β = 0.38, SE = 0.04, t = 10.6, p < .0001, η² = 0.22) and perceived reliability of the methods (β = 0.30, SE = 0.04, t = 8.30, p < .0001, η² = 0.22). Again, compared to moral commitment, sex and gender identity were not significant predictors of the research evaluations.
Experiment 1c
Method
Participants
We determined the sample size by running a power analysis in G*Power 3.1 (Faul et al., 2009), which suggested a sample of 466 for a statistical power of 0.90. As data were collected in batches and we applied the exclusion criteria after each batch was complete, we ended up with a sample size slightly larger than the suggested one.
In total, we recruited 483 participants. Data from 16 participants were excluded for the following reasons: four did not meet our recruitment criteria (reported sex being "other"), 10 failed one of the two attention checks and two took the survey twice (only their first response was kept).
Materials and procedure
The materials and procedures were identical to those of Experiment 1b, except in the following respects. First, participants were presented with the exact same introduction to the study as in [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF] before they saw the stimulus summary (see the ESM).
On the top of the following page, participants saw another short instruction on the evaluation task.
Discussion
Across all three samples, the more participants deemed pursuing gender equality in society a moral imperative, the more they trusted research findings suggesting hiring bias against women in STEM. This result was found both when the research quality measures focused on the perceived accuracy of the findings and the perceived reliability of the methods employed (Experiments 1a and 1b), and when the composite measure of research quality of [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF] -participants' agreement with the interpretation of the results, the perceived importance of the findings, the quality of the writing and their overall evaluation -was used (Experiment 1c).
By contrast, participants' sex had little effect on their trust in the research reporting hiring bias against women. While men judged the findings and the methods slightly less positively than women, the difference was not significant in Experiments 1a and 1b. It was only when adopting
Handley et al.'s composite index of research quality, in Experiment 1c, that the sex difference reached statistical significance -men rated the research less favorably than women. This sex difference, however, was smaller (d = 0.26) than in Handley et al.'s study (d = 0.45), and the sex effect appears to be entirely explained by participants' moral commitment, as the effect of participant sex disappeared when the variance accounted for by moral commitment was factored out. This overall pattern may be due to greater awareness of gender discrimination against women among men in 2020 than in 2015. After all, feminism has been on the rise during this period, especially among young people (e.g. the rise of the #MeToo movement; https://metoomvmt.org/).
Experiment 2
Experiments 1a-1c showed that the more participants were morally committed to gender equality, the more they trusted evidence of gender bias against women in hiring in STEM sciences. The first goal of Experiment 2 was to gauge the generality of this finding by testing an additional research summary reporting bias against women. Second, Experiment 2 explored what would happen if the evidence reported gender bias favoring women in hiring, as has been found in some academic contexts (e.g. [START_REF] Williams | National hiring experiments reveal 2: 1 faculty preference for women on STEM tenure track[END_REF][START_REF] Breda | Teaching accreditation exams reveal grading biases favor women in male-dominated disciplines in France[END_REF]). We expected the pattern observed in Experiments 1a-1c to generalize to the new "bias against women" summary, while the "bias favoring women" condition was treated as exploratory. To control for a potential influence of wording, we also created a counterfactual version of each of the two (real) summaries (one reporting bias against women, the other reporting bias favoring women). The counterfactual summaries were identical to the originals except that the results were reversed so that they indicated evidence of gender bias in the opposite direction than the originals (i.e. original: against women -> counterfactual: favoring women; original:
favoring women -> counterfactual: against women). All summaries had the same structure as in Experiment 1b (See Appendix A).
The procedure was identical to that of Experiment 1b except in the following aspects. In a 2 x 2 between-subjects design, participants were randomly presented with one of the four summaries (against women original; against women counterfactual; favoring women original;
favoring women counterfactual). Participants who saw the counterfactual summaries were informed at the end of the survey that they were in fact fictitious, and presented with the actual version. As in Experiment 1c, the order in which the moral commitment items and the research summary evaluation questions were presented was counterbalanced.
Results
We performed two stepwise linear regressions (direction being "both") with accuracy and reliability ratings as the dependent variables, respectively, and reported bias, moral commitment, sex, gender identity, presentation order, summary version, and all interactions as predictors.
Below we report results of the converged models.
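A minimal R sketch of this analysis pipeline is given below; the file, variable names and final model structure are hypothetical illustrations of the approach (stepwise selection followed by a mixed model with a by-summary random intercept, described in the next paragraph), not the exact converged models.

```r
# Sketch of the Experiment 2 analyses (hypothetical file and column names)
library(lme4)
library(lmerTest)   # adds p-values to lmer summaries

d2 <- read.csv("exp2_data.csv")

# Stepwise selection (direction "both"); the original models included all
# interactions, here restricted to two-way terms to keep the sketch estimable
full_acc <- lm(accuracy ~ (bias_direction + commitment + sex + gender_id +
                             order + version)^2, data = d2)
step_acc <- step(full_acc, direction = "both", trace = FALSE)

# Retained predictors refitted with a random intercept for each of the four summaries
m_acc <- lmer(accuracy ~ bias_direction * commitment + order + (1 | summary_id),
              data = d2)
summary(m_acc)
```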
Here, we added a random intercept for each individual summary in the linear model, as there were four summaries tested. Results of the linear mixed-models are shown in Table 10. The results revealed a main effect of bias direction such that the findings of the "bias favoring women" summaries were rated as less accurate (M = 5.53, SD = 2.05) than those of the "bias against women" summaries (M = 7.07, SD = 1.79; β = 0.77, SE = 0.07, t = -10.4, p < .0001).
Likewise, the methods of the "bias favoring women" summaries were perceived as less reliable (M = 5.61, SD = 1.94) than those of the "bias against women" summaries (M = 6.69, SD = 1.86; β = 0.54, SE = 0.07, t = -7.32, p <.0001).
Overall, a higher degree of moral commitment predicted higher perceived accuracy of findings (β = 0.14, SE = 0.04, t = 3.74, p < .001) and higher perceived reliability of the methods (β = 0.10, SE = 0.04, t = 2.70, p < .01). However, these effects of moral commitment were qualified by interactions with the reported direction of bias, as the post-hoc analyses showed a significant effect of moral commitment on the perceived accuracy of the findings (β = 0.26, SE = 0.05, t = 5.61, p < .0001, η² = 0.09) and the perceived reliability of the methods (β = 0.20, SE = 0.05, t = 4.02, p < .0001, η² = 0.05) when the summaries reported bias "against" women. By contrast, when the reported bias was "favoring" women, the effect of moral commitment disappeared (both p > .5).
Research methods were judged to be less reliable when the commitment items were presented before (M = 5.96, SD = 2.01) than after the evaluation task (M = 6.34, SD = 1.92; β = 0.17, SE = 0.07, t = 2.31, p = .02). Nevertheless, this order effect was characterized by its interaction with the direction of bias, as it was found marginally significant only for the "bias favoring women" summaries (β = 0.23, SE = 0.11, t = 2.09, p = .04), but not for the "bias against women" summaries (β = 0.12, SE = 0.10, t = 1.14, p = .26). No order effect was observed on ratings of accuracy of the findings (p > .2) for both types of summaries.
The converged model showed no sex difference nor effect of gender identity (all p > .5).
To test if there was any sex effect when the variance explained by moral commitment was factored out, we performed the regression analyses with bias direction, participant sex and their interaction term as predictors. Consistent with results shown above, the findings of the "bias favoring women" summaries were rated as less accurate than those of the "bias against women" summaries (β = 0.76, SE = 0.08, t = -10.2, p <.0001). Likewise, the methods of the "bias favoring women" summaries were perceived as less reliable than those of the "bias against women" summaries (β = 0.54, SE = 0.08, t = -7.17, p <.0001). In addition, we found an interaction between sex and bias direction (accuracy rating: β = 0.26, SE = 0.08, t = 3.48, p < .001, reliability rating: β = 0.17, SE = 0.08, t = 2.26, p = .02). For "bias against women" summaries, men (M = 6.75, SD = 1.86) rated the findings as less accurate than women (M = 7.39, SD = 1.65; t = -3.21, p <.001); similarly, men (M = 6.49, SD = 2.04) rated the methods as marginally less reliable than the women counterparts (M = 6.89, SD = 1.64; t = -1.96, p < .05).
Conversely, for the "bias favoring women" summaries, men rated the findings as equally accurate as women (t = 1.72, p >.05), and the same for the ratings of methods (t = 1.23, p >.1).
Plots of research evaluations (i.e. the perceived accuracy of findings and the perceived reliability of methods) as a function of moral commitment and participant sex are shown in Figure 10.
Discussion
Replicating results from Experiments 1a-1c, Experiment 2 found that individuals' moral commitment to gender equality affected their evaluations of the research demonstrating a hiring bias against women in STEM. People with positive moral attitudes on gender equity were more likely to accept evidence of women being discriminated against, while those with negative attitudes were more resistant to this evidence.
The influence of moral commitment seems to be restricted to the scenario where women were described as the victims of hiring bias but not when they were the beneficiaries, as participants scoring higher on moral commitment did not exhibit stronger resistance to the evidence of hiring preference for women (bias against men). One might argue that the effects of moral commitment on the evaluations of "bias against women" evidence could be attributed to the possibility that people who are morally concerned for gender equity also have overall higher trust in science. If this was true, we should have observed a similar pattern of effects for the "bias favoring women" summaries. However, this was not what we observed and this possibility should thus be ruled out.
Experiment 3
Experiments 1-2 found that moral commitment to gender equality positively predicted participants' level of trust in scientific evidence of hiring discrimination against females in academia. While this type of attitude-evaluation association is typically observed on polarizing issues, the psychological mechanism that underpins it is unclear. According to a narrow view, the positive correlation between participants' moral commitment to gender equality and their evaluations of specific evidence of discriminatory hiring practices against women may be reducible to their prior expectations about how likely women are to be discriminated against in hiring. Experiment 3 therefore measured participants' prior beliefs about such discrimination in addition to their moral commitment.
Method
Participants
In total, we recruited 520 participants. Data from three participants were excluded for the following reasons: two failed one of the two attention checks and one took the survey more than once (only their first response was kept).
Materials and procedure
As in Experiment 2, this experiment used a between-subjects design in which participants were randomly exposed to a research summary reporting evidence either of gender bias against or favoring women, with the former based on the article of [START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF], and the latter on that of [START_REF] Williams | National hiring experiments reveal 2: 1 faculty preference for women on STEM tenure track[END_REF]. Contrary to Experiment 2 however, only the two original summaries were used (and thus no "counterfactual" scenarios were employed).
Moreover, each condition was composed of two tasks. First, participants read the summary of a scientific study about to be conducted on the issue of gender discrimination in hiring in academic sciences and were asked to predict its likely results. The research summaries were the same as in previous experiments, except that the study's methods were described in the future tense and no results were shown. Participants' predictions about the study's likely results were collected in the form of average scores they expected the male and female candidates to receive in the study, respectively (on a 0-100 slider scale with the slider initially positioned at 0) (see Appendix A).
Immediately after having made their predictions, participants saw the following message:
"The study that you just read about has been recently run by researchers. The research findings were published as an article in a scientific journal. On the next page, you are going to see a summary of the published article. Please read carefully and answer two questions". This time, participants saw the full research summary including its results (both in text and graphical format), presented in the past tense. Following previous experiments, they were asked to rate how accurate they found the research summary's results, and how reliable they found its methods to be.
As in Experiment 2, the order in which the moral commitment items and the two tasks (prediction and evaluation) were presented was counterbalanced across participants. The design was thus a 2 (bias against vs. favoring women) x 2 (commitment items before vs. after the prediction and evaluation tasks) between-subjects design.
Results
Linear regression analysis
We performed two stepwise linear regressions (direction being "both") with accuracy and reliability ratings as the dependent variables, and reported bias, moral commitment, prior beliefs, sex, gender identity, presentation order, and all interactions as predictors. Below we report results of the converged models.
Results of linear regressions are shown in Appendix B1. As observed in Experiment 2, the "bias favoring women" summary (M = 5.86, SD = 2.05) received lower accuracy ratings than the "bias against women" summary (M = 7.36, SD = 1.81; β = 0.77, SE = 0.08, t = -9.49, p <.0001); and the method reported in the "bias favoring women" summary (M = 5.99, SD = 1.95)
were also rated as less reliable than that of the "bias against women" summary (M = 6.98, SD = 1.81; β = 0.51, SE = 0.08, t = -6.27, p <.0001).
Again, overall moral commitment positively correlated with the perceived accuracy of research findings (β = 0.10, SE = 0.04, t = 2.53, p = .01) and the reliability of methods (β = 0.11, SE = 0.04, t = 2.74, p < .01). However, these effects of moral commitment were again qualified by their interactions with the reported direction of bias (accuracy: β = 0.13, SE = 0.04, t = 3.31, p = .001; reliability: β = 0.10, SE = 0.04, t = 2.33, p = .02), such that moral commitment predicted the evaluations when the summary reported bias against women but not when it reported bias favoring women.
Mediation analysis
Participants' prior beliefs partially mediated the positive correlations between moral commitment and the ratings on the accuracy of the research findings (Average Causal Mediating Effect of priors: β = 0.05, p < .0001; Proportion of the total effect that is mediated: β = 0.16, p < .0001), and between moral commitment and the ratings on the reliability of research methods (Average Causal Mediating Effect of priors: β = 0.04, p < .01; Proportion of the total effect that is mediated: β = 0.16, p < .01).
Similarly, participants' moral commitment partially mediated the positive correlations between prior beliefs and the ratings on the accuracy of the research findings (Average Causal Mediating Effect of moral commitment: β = 0.007, p < .01; Proportion of the total effect that is mediated: β = 0.17, p < .01), and between prior beliefs and the ratings on the reliability of research methods (Average Causal Mediating Effect of moral commitment: β = 0.03, p < .01; Proportion of the total effect that is mediated: β = 0.17, p < .01).
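For illustration, the sketch below shows how such a causal mediation analysis can be run in R with the mediation package; the file, variable names and number of simulations are hypothetical, and the subset to the "bias against women" condition reflects where the effects were observed.

```r
# Sketch of the mediation analysis (hypothetical file and column names)
library(mediation)

d3 <- read.csv("exp3_data.csv")
d3_against <- subset(d3, bias_direction == "against")

# Prior beliefs as mediator of the commitment -> accuracy-rating relationship
med_model <- lm(priors   ~ commitment, data = d3_against)            # mediator model
out_model <- lm(accuracy ~ commitment + priors, data = d3_against)   # outcome model

fit <- mediate(med_model, out_model, treat = "commitment", mediator = "priors",
               boot = TRUE, sims = 1000)
summary(fit)   # reports ACME (average causal mediation effect) and proportion mediated
```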
Taken together, these findings suggest the distinct effects of prior beliefs and moral commitment on people's evaluations of "bias against women" evidence.
Discussion
In line with results from previous experiments, Experiment 3 found that higher moral commitment to gender equality predicted greater trust in evidence of hiring bias against women in STEM in men and women alike. This suggests that (in 2021) individuals' moral concern for gender equality more than their sex explains the level of credence they give to evidence of discrimination against women. Moreover, the results suggested that the effect of moral commitment on evaluations is not reducible to people's prior expectations about the extent to which women are discriminated against in hiring and that people's moral attitudes and prior beliefs have concurrent but differing impacts on their appraisal of evidence.
Experiment 4
Experiments 1-3 showed that participants' moral commitment to gender equality predicts how much they trust evidence of hiring discrimination against women. Experiment 4 focused on another likely consequence of moralization: the propensity to make an imprecise inference when one agrees with the conclusion. More specifically, we hypothesized that increased moral commitment to gender equality would make people more prone to infer sex-based hiring discrimination against women from the mere observation that the profession comprises more males than females.
The literature remains controversial as to the role of external bias, relative to innate or socialized individual differences in math abilities, lifestyle, and career choices, in the underrepresentation of women in STEM fields [START_REF] Breda | Girls' comparative advantage in reading can largely explain the gender gap in math-related fields[END_REF][START_REF] Ceci | Women's underrepresentation in science: Sociocultural and biological considerations[END_REF]Ceci & Williams, 2010[START_REF] Abbou | Double gender marking in French: A linguistic practice of antisexism[END_REF][START_REF] Charlesworth | Gender in science, technology, engineering, and mathematics: Issues, causes, solutions[END_REF]. These factors are often intertwined and interact with each other. To detect the effect of external bias, one needs to hold the other factors maximally constant, as any relationships observed in correlational studies are likely to be confounded and thus not necessarily causal. For example, Moss-Racusin et al. (2012) provided clear, unequivocal evidence of sex-based discrimination, as factors other than the sex of the applicant were well controlled for in the experiment. However, the mere observation of larger proportions of male research trainees in science labs is not statistical evidence of external bias, as factors other than hiring bias (e.g. the base rate of male and female applicants for the positions, and their credentials) can also be in play [START_REF] Sheltzer | Elite male faculty in the life sciences employ fewer women[END_REF].
In the research summary used as the stimulus in Experiment 4, we deliberately wrote the conclusion in such a way that it did commit this fallacy. The key passage of the summary was formulated as follows (see Appendix A):
Results: [Graph showing a greater proportion of male than female postdocs] "On average, laboratories comprised significantly fewer female postdocs than male postdocs."
Conclusion: "The study provides clear evidence of women being discriminated against in academia because of their sex, a characteristic that should not matter for research work."
The experimental procedure of Experiment 4 was identical to that of Experiment 1b except that an additional item was introduced to measure participants' ability to spot the fallacy:
"To what degree do you think the researchers' conclusion is justified by the research results?".
Participants responded on an 11-point scale (0 "Not at all justified -10 "Absolutely justified").
Results
As shown in Table 11, people with higher commitment to gender equality were more likely to take sex imbalance as evidence of gender discrimination (β = 0.44, SE = 0.08, t = 5.62, p < .001, η 2 = 0.10). Replicating the results of previous experiments, the more people regarded gender equality as a moral imperative, the more they believed the reported finding that the research labs comprised a larger proportion of male than female post-docs to be accurate (β = 0.34, SE = 0.06, t = 5.68, p < .001, η 2 = 0.10), and the research methods to be reliable (β = 0.42, SE = 0.07, t = 6.11, p < .001, η 2 = 0.12). Contrary to the order effect observed before in Experiments 2 and 3, the perceived accuracy of findings was higher this time when participants reported their moral commitment before seeing the summary (β = 0.27, SE = 0.12, t = 2.30, p = .02, η 2 = 0.02), and the same trend was observed for the perceived reliability of methods (β = 0.33, SE = 0.13, t = 2.47, p = .01, η 2 = 0.02). However, participants' propensity for imprecise inference was not affected by the presentation order of the commitment items and stimulus summary (β = 0.26, SE = 0.15, t = 1.71, p = .09, η 2 = 0.01).
No effect of participant sex was found either when moral commitment was factored in or out (all p > .05).
As an exploratory analysis, we performed a one-way ANOVA comparing the three types of research ratings: accuracy, reliability, and justification. Results showed that ratings on the three items differed significantly (F(2, 789) = 16.5, p < .0001), such that the conclusion was judged to be justified (M = 5.61, SD = 2.57) to a lesser degree than the findings were rated as accurate (M = 6.75, SD = 1.99; p < .0001) and the methods as reliable (M = 6.09, SD = 2.28; p < .04); also, the methods were judged as reliable to a lesser degree than the findings were considered accurate (p < .003).
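A minimal R sketch of this exploratory comparison is given below, assuming hypothetical column names and the three ratings reshaped into long format; the Tukey post-hoc test merely stands in for whichever pairwise procedure was actually used.

```r
# Sketch of the exploratory comparison of the three rating types (hypothetical names)
library(tidyr)

d4 <- read.csv("exp4_data.csv")
d4_long <- pivot_longer(d4,
                        cols = c("accuracy", "reliability", "justification"),
                        names_to = "rating_type", values_to = "rating")

fit <- aov(rating ~ rating_type, data = d4_long)
summary(fit)      # omnibus F test across the three rating types
TukeyHSD(fit)     # pairwise comparisons between rating types
```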
Plots of research evaluations as a function of moral commitment to gender equality and participant sex are shown in Figure 13.
General Discussion
Concerning the role of prior beliefs in the appraisal of scientific research, results of Experiment 3 suggest that people's prior expectations about the likelihood of women being discriminated against in hiring processes because of their sex and their moral concern for gender equity exert distinct influences on their appraisals of evidence that females suffer sex-based hiring bias in academia. Participants' research evaluations reflected both their pre-existing beliefs and their initial moral attitudes. Furthermore, Experiment 4 demonstrated that people's reasoning is subject to the influence of their moral stand on an issue, such that participants with greater moral commitment to gender equality conflated correlation with causation.
Uncovering the reason why people have diverging reactions to scientific evidence of gender bias disfavoring women is critical to the establishment of gender equality in society.
Relative to previous research showing that males were more resistant than females to evidence of hiring bias against women in STEM fields [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF][START_REF] Moss-Racusin | Can evidence impact attitudes? Public reactions to evidence of gender bias in STEM fields[END_REF], here we found that people's moral convictions about gender equality, regardless of whether they are biologically male or female, affect their acceptance of such evidence. An individual can have multiple social identities defined by categories including their sex, profession, ideology and moral views, and these identities may not be mutually exclusive. For example, being a male does not preclude one from sympathizing with females in their suffering of unequal treatment and, in consequence, seeing gender equality as a moral imperative. Given the objective and universal nature of moral convictions [START_REF] Skitka | The Psychology of Moral Conviction[END_REF], it is not surprising that individuals' moral identity has more profound impacts on their judgments than other, non-moral identities (e.g. biological sex, profession).
By comparing people's evaluations of research reporting a hiring bias against vs.
favoring women, Experiment 2 and 3 suggested that the influence of moral commitment to gender equity is dependent on the depicted direction of bias. The extent to which individuals feel morally concerned about gender equality affected their reception of "bias against women" evidence, but moral concern did not influence individuals' judgment about "bias favoring women" research results. The differing impacts of moral commitment may be explained by the ambivalent feelings triggered in participants after seeing the unexpected or unfamiliar information that women were actually favored in academia. That is, for individuals who were morally engaged in improving women's status, the message that women have an advantage in academic hiring could be "good" and "bad" news at the same time. It is good news because, for people who value gender equality, any practice that helps to increase women's representation in male-dominated fields seems desirable, consistent with their goals for equality. Nevertheless, it can also be bad news as counter-attitudinal information could undermine collective mobilization for the cause (Petersen, 2020). Participants who interpreted the "bias favoring women" evidence as "good news" might rate the research more favorably than those regarding it as "bad news". In this way, the positive and negative effects of moral commitment simply evened out.
Considering the role of moral commitment and prior beliefs in people's treatment of new information, results of Experiment 3 suggest the concurrent but distinct functioning of the two factors in influencing people's reactions to evidence of gender bias against females in the STEM fields. Individuals' responses to the "bias against women" research are likely to result from the combinatory works of "motivated thinking" [START_REF] Kunda | Motivated Inference: Self-Serving Generation and Evaluation of Causal Theories[END_REF][START_REF] Kunda | The case for motivated reasoning[END_REF] and Bayesian reasoning.
People who hold positive moral attitudes on gender equality are more receptive of "bias against women" research findings not just because they find the information palatable, but also that it fits with their prior knowledge and understanding of the state of affairs. When asked to make judgments pertaining to a moralized issue, individuals rely on both how they feel about the information and how likely according to their prior knowledge the information appears to be true.
Put in another way, they calibrate their trust in evidence by coordinating the works of 'cold' cognition and 'hot' emotion on the mind.
Furthermore, the influences of moral commitment and prior expectations are distinct from each other as reflected by the partial mediation effects shown in Experiment 3. Indeed, individuals' prior factual beliefs about a situation do not always correspond to how much they care about a cause and their moral involvement in it. A person can be well aware of the dire consequences of gender discrimination for women but remain indifferent to it for reasons such as they are not identified with the female gender group or they see the existence of inequality as justified [START_REF] Napier | The joy of sexism? A multinational investigation of hostile and benevolent justifications for gender inequality and their relations to subjective wellbeing[END_REF].
By varying the strength of evidence, we ruled out the mundane explanation for the effects of moral attitudes on people's trust in science that participants scoring higher on moral commitment happen to be better at recognizing reliable scientific evidence. Results of Experiment 4 indicated that individuals with higher moralization of gender equality are inclined to make imprecise inferences when they find the conclusion congenial to their moral stand.
However, the results of Experiment 4 do not imply that the participants who did not confuse the correlational data of gender imbalance with sex-based discrimination succeeded at detecting the inherent logical flaw, as they too may have simply distrusted any evidence that appeared to justify a counter-attitudinal conclusion without deliberation. It is likely that once people agree/disagree with a claim, they tend to accept/reject any evidence seemingly in support of it.
The current study has its own limitations. For example, it revealed a correlation between moralization and people's research evaluations, but it remains unclear whether or not moralization actually caused the differing perceptions of evidence. To answer this question, future studies could manipulate participants' level of moral concern through moral priming before exposing them to research summaries. In addition, our study showed that people with high moral commitment tend to make an invalid inference to reach a congenial conclusion, but we are unsure whether those who resisted such an inference actually identified its potentially fallacious nature or whether they simply responded based on their attitudes. Future research can address this question by eliciting reasons and arguments from participants after they complete the evaluation task, to see whether they have justifications for their judgments or make attitude-motivated decisions. Finally, results of the study should be interpreted with one caveat: responses were collected from non-representative samples. Future studies should examine whether the effects hold with nationally representative samples.
To conclude, the study showed that people's moral stand on gender equality affects their trust in scientific evidence of gender bias and this effect is dependent on the described direction of bias. The effects of moral commitment are not entirely reducible to people's prior factual beliefs about the likelihood of a gender bias disfavoring women happening in a hiring process.
Furthermore, the moralization of an issue leads to inadequate inferences: people come to see invalid evidence as valid when it seems to support a conclusion congenial to their attitudes.
A3. Introductory text preceding the stimulus summary in Experiment 1c (identical to [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF])
In the scientific world, peer experts judge the quality of research and decide whether or not to publish it, fund it, or discard it. But what do everyday people think about these articles that get published? We are conducting an academic survey about people's opinions about different types of research that was published back in the last few years. You will be asked to read a very brief research summary and then answer a few questions about your judgments as non-experts about this research. There is no right or wrong answer and we realize you don't have all the information or background. But just like in the scientific world, many judgments are made on whether something is quality science or not after just reading a short abstract summary. So to create that experience for you, we ask that you just provide your overall reaction as best you can even with the limited information. You will also be asked to provide demographic information about yourself.
A7. The stimulus study and instructions for the prediction task in the "bias against women" condition in Experiment 3
Is there a gender bias in hiring?
Background: There are ongoing debates about gender bias in various professions. This study attempts to find out whether there exists a gender bias in hiring in academic sciences or not.
Design:
The present study will ask science faculty members (N = 127) from research-intensive universities to rate the application materials of a student for a laboratory manager position (which, in scientific labs, is often occupied by a student). The student's application materials will be shown to two different groups of professors. Half of the professors will be randomly assigned to "Group A," where they will see the candidate presented with a male name, while the other half will be randomly assigned to "Group B," where they will see the candidate presented with a female name. Crucially however, all of the other application materials will be perfectly identical. This will allow the researchers to assess whether there is any bias in the evaluation process that is solely due to the gender of the candidate. In both groups, faculty members will be asked to rate the student's perceived competence and hireability on a scale from 0 to 100. A comparison between average ratings of student's perceived competence and hireability will be conducted between faculty members assigned to Groups A and B (i.e. professors who saw the candidate presented as a male and those who saw the candidate presented as a female).
Now please make predictions about the likely results of the study.
What do you think will be the average ratings of the two groups?
A9. The "bias against women" research summary based on the article of [START_REF] Sheltzer | Elite male faculty in the life sciences employ fewer women[END_REF] Overview Despite many good intentions and initiatives to recruit and retain more women, a wealth of evidence shows that gender inequality is still rife in the fields of science, technology, engineering, and mathematics (STEM). This study measured the proportion of male to female postdoctoral researchers (postdocs, who are research trainees under the direction of senior professors of a laboratory) in labs run by senior professors. The results showed that there was a much larger percentage of men than women postdocs in the labs investigated. These results provide clear evidence of women being discriminated against in academia because of their sex, a property that should be irrelevant to their fitness for scientific research.
Methods
In order to examine the gender distribution of biomedical scientists in academia, we collected information on professors, and postdocs, employed in 39 departments at 24 of the highest-ranked research institutions in the United States. We focused on departments that study molecular biology, cell biology, biochemistry, and/or genetics. In total, we obtained information on 2,062 professors and 4,904 postdocs in the life sciences. We examined the proportions of male vs. female postdocs across those labs.
Results
On average, laboratories comprised significantly fewer female postdocs than male postdocs.
Conclusion
The study provides clear evidence of women being discriminated against in academia because of their sex, a characteristic that should not matter for research work.
The first study (Chapter 2) asked whether the grammatical gender of object nouns influences how speakers conceive of the objects they denote. Put another way, I question whether the assignment of object nouns to the masculine gender class would make speakers associate these objects with male qualities, and vice versa for nouns assigned to the feminine gender class. The two psycholinguistic experiments test the hypothesized language effect within French (Experiment 1) and cross-linguistically between
French and German (Experiment 2). As the article in which this study is described has been submitted as a registered report for peer review, the experiments have not yet been conducted. Here, I mainly present a summary of the experimental design and the results of two pilot experiments.
Conflicting results have been reported in previous studies using various paradigms (see [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF][START_REF] Beller | Culture or language: What drives effects of grammatical gender?[END_REF][START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF]Bender et al., , 2016a;;[START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF][START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF][START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF][START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF][START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF][START_REF] Sato | Grammatical gender affects gender perception: Evidence for the structural-feedback hypothesis[END_REF]. Some of them employed explicit methods that might have led to participants' use of response strategies, a result of task demands (e.g. [START_REF] Clarke | Gender perception in Arabic and English[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF]. To avoid this happening, we employ a word association paradigm built on the work of [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF]. In addition, to minimize an experimenter bias in item selection, we ask participants to create the materials for Experiment 1. Based on extensive piloting work (see Pilot Studies of Chapter 2) as well as the findings reported in [START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF], we assume that any influence of grammatical gender on object representations is either weak or non-existent, and thus we adopt an approach of "stack the deck" in favor of the original hypothesis. That is, the experiments are designed in such a way that the odds of detecting any underlying language effects were maximized. To do this, firstly, the prospective participants will be tested in their native language instead of a language with no grammatical gender as did in [START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF]. Secondly, we increase the likelihood of detecting a language effect (if there is one to be found) by showing gender marked determiners with the relevant noun items, as making grammatical gender information more salient would again serve to enhance the chance of observing any underlying effects. And lastly, to ensure that any null effect we find is not due to a lack of statistical power, we will run Bayesian analyses that will allow us to quantify our confidence in null or positive results.
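One possible way to implement such Bayesian analyses, sketched here with the BayesFactor package and hypothetical variable names (the registered report may specify a different package or prior settings):

```r
# Sketch of a Bayesian test of the grammatical-gender effect (hypothetical names)
library(BayesFactor)

pilot <- read.csv("pilot_data.csv")
pilot$gram_gender <- factor(pilot$gram_gender)   # masculine vs. feminine noun gender

# Compare a model with grammatical gender predicting female-quality association (FQA)
# against the intercept-only null model
bf10 <- lmBF(fqa ~ gram_gender, data = pilot)
bf10          # BF10 > 1: evidence for an effect; BF10 < 1: evidence for the null
1 / bf10      # BF01: evidence in favor of the null
```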
Specifically, we will ask native speakers of French and German to produce adjectives for grammatically masculine and feminine nouns in their native language, and then we will ask another group of native participants to assess these adjectives in terms of how likely they can be associated with male and female qualities. In Experiment 1, we will test semantically related
French nouns that are assigned opposite grammatical gender in French. Experiment 2 will extend the investigation across languages by testing French and German translation-equivalents that have opposite grammatical gender in the two languages. We will commence data collection once the article gets in-principle acceptance from an academic journal. Our previous pilot results suggested null effects of grammatical gender on how objects are conceived of by native French and German speakers.
The second study (Chapter 3) examined the influence of language forms on the prominence of women in people's mental representations. Focusing on French, we compared the three generic forms: masculine, double-gender and middot. Participants were shown a short text to read, which described the taking place of a professional gathering, and then asked to estimate the proportions of men and women present at the gathering by responding on a slider. Aside from language forms, we manipulated the gender stereotypicality of the professions: gender-neutral (Experiment 1), and male- and female-stereotyped (Experiment 2). The results showed that the masculine form yielded lower estimates of the proportion of women than the two gender-fair forms.
The third study (Chapter 4) examined how people evaluate scientific evidence of gender bias. In the first set of experiments, we presented participants with a summary of the [START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF] research article that reported discriminatory hiring practice in STEM favoring male applicants, and we asked them to evaluate the quality of the research in terms of how accurate they found the research findings and how reliable they deemed the methods to be. In addition, we asked participants to report their degree of moral commitment to the cause of gender equality, as a way to measure their moral attitudes on the issue. Results revealed a positive effect of moral attitudes such that individuals with higher commitment to gender equity tended to see the research findings as more accurate and the methods as more reliable. Contradicting the results of [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF] that people's sex affected their research evaluations -men rated less favorably than women scientific evidence suggesting discriminatory hiring practices against women in academia -our results did not show any effect of sex.
In Experiments 2 and 3, we extended the investigation to varied scenarios regarding the directionality of the reported hiring bias and the diversity of evidence by using as stimuli multiple research summaries that documented hiring bias against vs. favoring women.
Experiment 3 additionally tested if the effect of moral attitudes could be explained by participants' pre-existing expectations about the likelihood of hiring discrimination against women happening in STEM by asking them to provide predictions of the research results (in terms of how much the hiring committee's evaluations of women are biased). The results of Experiments 2 and 3 were that, overall, the "bias favoring women" summaries received lower ratings of accuracy and reliability compared to the "bias against women" summaries. Prior beliefs and moral commitment had varying effects on people's trust in evidence of a bias against and favoring women: both prior beliefs and moral attitudes predicted the evaluations of "bias against women" summaries, while neither of the two factors showed any effect on the ratings of the "bias favoring women" summaries. Results of the mediation analyses suggested that moral attitudes and prior beliefs have distinct influences on people's research evaluations.
Furthermore, Experiment 4 of the study addressed the question of whether moral convictions make people more prone to imprecise inferences when faced with a conclusion congenial to their attitudes. Here, we presented participants with a research summary demonstrating the status quo that female post-doctoral research trainees were outnumbered by their male counterparts in life-science research labs, based on results of the study of [START_REF] Sheltzer | Elite male faculty in the life sciences employ fewer women[END_REF]. (Despite being true, these findings cannot serve as statistical evidence of gender discrimination, as other factors such as women's career preferences could contribute to the observed gender imbalance.)
However, at the end of the summary, we deliberately drew an invalid conclusion that women were being discriminated against in academia because of their sex (based on the smaller proportions of female post-docs in these labs). Participants were asked to judge how much they considered the conclusion to be justified by the research results shown in the summary. We found that moral commitment affected people's judgments such that individuals with higher moralization of gender equality were more inclined to take gender imbalance as solid evidence of sex-based discrimination.
Limited Influences of Language on Mental Representations
In spite of the debates, the Neo-Whorfian hypothesis has gained empirical support from various domains [START_REF] Boroditsky | Does Language Shape Thought?: Mandarin and English Speakers' Conceptions of Time[END_REF][START_REF] Davies | A cross-cultural study of colour grouping: Evidence for weak linguistic relativity[END_REF][START_REF] Gilbert | Whorf hypothesis is supported in the right visual field but not the left[END_REF][START_REF] Haun | Cognitive cladistics and cultural override in Hominid spatial cognition[END_REF][START_REF] Loewenstein | Relational language and the development of relational mapping[END_REF][START_REF] Lupyan | Language is not just for talking: Redundant labels facilitate learning of novel categories[END_REF][START_REF] Majid | Can language restructure cognition? The case for space[END_REF][START_REF] Thierry | Unconscious effects of language-specific terminology on preattentive color perception[END_REF]. Now the field has seen a shift of focus from questioning the existence of linguistic influences on thought to when it happens, how it operates and what factors account for the strength and durability of such influences [START_REF] Bender | Gender congruency from a neutral point of view: The roles of gender classes and conceptual connotations[END_REF]. Earlier research on this topic seems to suggest that thinking can be affected by language before, during and after language use [START_REF] Wolff | Linguistic relativity[END_REF]. For example, the thinking for speaking [START_REF] Slobin | From" thought and language" to" thinking for speaking[END_REF][START_REF] Slobin | Language and thought online: Cognitive consequences of linguistic relativity[END_REF] mechanism was responsible for the before effect, as demonstrated in the eye movement differences between speakers of English, German and Greek when they were asked to watch motion events and then to describe them verbally [START_REF] Gennari | Motion events in language and cognition[END_REF][START_REF] Papafragou | Does language guide event perception? Evidence from eye movements[END_REF]. They may sound trivial as it is not surprising that people attend to the aspects of an event for which their language provides ready-to-use expressions when they asked to complete a task of verbal reproduction.
The during effect is explained by a mechanism of thinking with language [START_REF] Wolff | Linguistic relativity[END_REF], or actively employing language when performing a task. Here, language serves as a conceptual tool that can facilitate certain mental activities that would be difficult or impossible without language, such as numerical cognition [START_REF] Dehaene | Sources of mathematical thinking: Behavioral and brain-imaging evidence[END_REF][START_REF] Gordon | Numerical cognition without words: Evidence from Amazonia[END_REF], the understanding of false beliefs [START_REF] Milligan | Language and theory of mind: Meta-analysis of the relation between language ability and false-belief understanding[END_REF][START_REF] Pyers | Language promotes false-belief understanding: Evidence from learners of a new sign language[END_REF], and category learning [START_REF] Lupyan | Language is not just for talking: Redundant labels facilitate learning of novel categories[END_REF]. And finally, language can highlight certain aspects of the world by having them regularly encoded in the lexicon and syntax. After a long process of language learning and language use in which people repeatedly practice a certain way of categorizing the world entities, their access to these highlighted aspects are reinforced and even become unavoidable when asked to perform tasks for which language is not required. Such kind of after effect is shown in the differing preferences and proficiency between language groups regarding the utilization of absolute, intrinsic and relative spatial frames of reference [START_REF] Haun | Cognitive cladistics and cultural override in Hominid spatial cognition[END_REF][START_REF] Levinson | Returning the tables: Language affects spatial reasoning[END_REF], and similarly in the differing sensitivity to the distinction between a loose and tight fit between objects [START_REF] Mcdonough | Understanding spatial relations: Flexible infants, lexical adults[END_REF].
As the literature suggests, the structure of the language one speaks contributes to the diversity in people's conceptualizations and construals of reality. However, we should be aware that the workings of language on the human mind are not irreversible, for language does not change the fundamental cognitive machinery that human beings evolved to share in common.
For example, Korean speakers may be relatively more attentive to the distinction between a loose and tight fit of objects in a containment than speakers of English. But, this is not to say that English speakers are unable to make such a distinction at all (otherwise, one would doubt the stability of all three-dimensional structures constructed by English speakers). Instead, one can view these effects of language as the impacts of habituation. After years and years of experience in a specific way of thinking/problem solving, people become more adept and efficient in analyzing the world from that perspective, being reinforced by the language structure. Thus, when faced with a new similar problem, they are more inclined to approach it from what they regard as the vantage point and find solutions by utilizing their cognitive toolkit that has been proved useful in the past.
Although previous research remains inconclusive with respect to the hypothesized influence of grammatical gender on object cognition [START_REF] Bassetti | Bilingualism and thought: Grammatical gender and concepts of objects in Italian-German bilingual children[END_REF][START_REF] Beller | Culture or language: What drives effects of grammatical gender?[END_REF][START_REF] Bender | Grammatical gender in German: A case for linguistic relativity[END_REF]Bender et al., , 2016a;;[START_REF] Boroditsky | Language in Mind: Advances in the Study of Language and Thought[END_REF][START_REF] Boutonnet | Unconscious effects of grammatical gender during object categorisation[END_REF][START_REF] Cubelli | The Effect of Grammatical Gender on Object Categorization[END_REF][START_REF] Haertlé | Does grammatical gender influence perception? A study of Polish and French speakers[END_REF][START_REF] Imai | All giraffes have female-specific properties: Influence of grammatical gender on deductive reasoning about sex-specific properties in german speakers[END_REF][START_REF] Kousta | Investigating linguistic relativity through bilingualism: The case of grammatical gender[END_REF][START_REF] Mickan | Key is a llave is a Schlüssel: A failure to replicate an experiment from Boroditsky et al[END_REF][START_REF] Saalbach | Grammatical Gender and Inferences About Biological Properties in German-Speaking Children[END_REF][START_REF] Saalbach | Grammatical Gender and Inferences About Biological Properties in German-Speaking Children[END_REF][START_REF] Sato | Grammatical gender affects gender perception: Evidence for the structural-feedback hypothesis[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF], taken together, the body of literature suggests that any effects of language on the gendered conceptualizations of objects are likely to be limited.
Language may not change human cognitive machinery, but it may tweak our attention and memory in a subtle way. For example, speakers of grammatical gender languages are required to mind the gender class of a noun and the behavior of the words that appear with it. By so doing, they may conditionally assign more similar qualities to nouns of the same gender class, especially when the task prompts them to make associations between these words and when grammatical gender can be employed to solve the problem at hand [START_REF] Beller | Culture or language: What drives effects of grammatical gender?[END_REF][START_REF] Clarke | Gender perception in Arabic and English[END_REF][START_REF] Guiora | A Cross-Cultural Study of Symbolic Meaning-Developmental Aspects[END_REF][START_REF] Sera | Grammatical and conceptual forces in the attribution of gender by English and Spanish speakers[END_REF][START_REF] Sera | When language affects cognition and when it does not: An analysis of grammatical gender and classification[END_REF]. But if the condition is taken out (e.g. task demands, cultural connotations), the effects of grammatical gender should be largely reduced or removed, as demonstrated in [START_REF] Beller | Culture or language: What drives effects of grammatical gender?[END_REF].
As to the relationship between language and mental representations of persons, however, our study in Chapter 2 added consistent evidence of grammatical gender influencing the perceptions of men and women in a group. People perceived fewer women group members when the masculine form was used, where the female gender was made implicit/hidden and thus less accessible in the memory; conversely, gender-fair forms rendered the female gender salient, hence prompting more female associations in the minds of the perceivers. This speaks to the old saying "Out of sight, out of mind". When we think with language, we are likely to have our attention tweaked by grammatical gender that also affects how a certain category in our memory is accessed. Language plays an important role in shaping people's mental representations of gender groups and this influence is of great societal importance, in terms of how women are evaluated and considered for certain professions [START_REF] Braun | Cognitive effects of masculine generics in German: An overview of empirical findings[END_REF][START_REF] Gabriel | Au pairs are rarely male: Norms on the gender perception of role names across English, French, and German[END_REF]Gygax et al., 2008[START_REF] Gygax | The masculine form and its competing interpretations in French: When linking grammatically masculine role names to female referents is difficult[END_REF]Gygax & Gabriel, 2008;[START_REF] Hansen | The Social Perception of Heroes and Murderers: Effects of Gender-Inclusive Language in Media Reports[END_REF][START_REF] Sato | Altering male-dominant representations: A study on nominalized adjectives and participles in first and second language German[END_REF]Stahlberg et al., 2001;Stahlberg & Sczesny, 2001;[START_REF] Stout | When he doesn't mean you: Gender-exclusive language as ostracism[END_REF][START_REF] Vervecken | Ambassadors of gender equality? How use of pair forms versus masculines as generics impacts perception of the speaker[END_REF], how women's roles are viewed in general [START_REF] Koniuszaniec | Language and gender in Polish[END_REF][START_REF] Merkel | Shielding women against status loss: The masculine form and its alternatives in the Italian language[END_REF][START_REF] Mucchi-Faina | Visible or influential? Language reforms and gender (in) equality[END_REF], and how policies related to gender equality is treated [START_REF] Tavits | Language influences mass opinion toward gender and LGBT equality[END_REF]. In addition to the social impact, gender-fair language also helps to improve the overall consistency of representations people have of gender distributions across professions that are otherwise male-biased when the masculine form is used. As demonstrated in Chapter 3, genderfair forms removed a male bias in the representations of gender-balanced professional group, and they reduced some bias favoring males for occupations where female is the majority gender.
Although gender-inclusive forms induced a female bias for the male-dominated professions, overall, the perceivers formed more consistent representations after seeing gender-fair forms compared to the masculine form.
Stereotype, Ideology and Truth
Gender stereotypes contribute to the formation of people's gender role beliefs and moral attitudes toward gender equality. Stereotypes such as "women are less competent than men", "math is not for girls", "women are emotional and not assertive", and "women are family-centered" constantly characterize women as intellectual inferiors with lesser career aspirations. Depending on how these gender stereotypes are treated by individuals, different ideologies such as gender essentialism and egalitarianism have been developed [START_REF] Knight | One egalitarianism or several? Two decades of gender-role attitude change in Europe[END_REF]. According to [START_REF] Knight | One egalitarianism or several? Two decades of gender-role attitude change in Europe[END_REF], people who hold what is known as the gender-essentialist ideology believe in the existence of innate sex differences between men and women and that these inherent differences account for the differing accomplishments of the two sexes; those embracing a flexible egalitarianism dismiss the idea that women were born less intelligent than men but partially accept the gender stereotypes, thus attributing gender disparities in society to free or constrained personal preferences; beyond these two, there are also advocates of liberal egalitarianism who reject all the traditional gender stereotypes and aspire to a society of gender equality [START_REF] Knight | One egalitarianism or several? Two decades of gender-role attitude change in Europe[END_REF].
Next, to allow for a more holistic assessment of the costs and benefits of language reform for society, future studies can provide empirical evidence that is currently absent. For instance, more empirical data are needed regarding the effects of different gender-fair language forms on the mental representations of gender groups, such as contracted forms with a slash (e.g. étudiant/es) or a dash (e.g. étudiant-e-s), and the feminine-before-masculine double-gender form (e.g. étudiantes et étudiants), which were not covered in our study. Each of these forms may have its own advantages and drawbacks that should be validated with scientific methods.
Additionally, by investigating professions of more varied gender stereotypicality (since our study only tested occupations that are gender-balanced, or extremely male- and female-stereotyped), future studies can shed more light on the varying influence of language form and its relations with gender stereotype. Another issue that has been recurrently brought up as a counterargument against gender-fair language is that it makes language learning more difficult (than it already is with masculine forms) for people with special conditions (e.g. dyslexia). Future studies can help answer this question by investigating language learners with such conditions. In addition to the cognitive effects of language forms, future efforts can also consider the social impact of gender-fair language in France, such as how women's roles are viewed when gender-fair language is used, how language reform affects the evaluation of women in a hiring process, and whether people will be more aware of gender inequality problems.
Finally, regarding the impact of morality on trust calibration, prospective studies can examine the underlying factors that account for the differing gender attitudes, such as one's personality, cultural and socioeconomic background, and political ideology. Our study only revealed that individuals' moral attitudes affected their evaluation of evidence, but we still need to address the question of what leads some individuals but not others to be morally concerned about gender inequality and to have strong attitudes on it. Furthermore, as the framing of information may influence how it is processed and interpreted by the recipients, which in turn leads to differing reactions, future studies are welcome to investigate what effects the framing of evidence can have on the level of credibility it induces in individuals.
asked a group of native English speakers (N = 107, 27 men and 80 women), aged between 18 and 65 years (M = 33, SD = 11.2), living in the UK, to indicate on a 7-point Likert scale to what extent they associated the English translation equivalents of our items with men or with women
a. French: Le pont est très _____
b. German: Die Brücke ist sehr _____
'The bridge is very _____'
Figure 1. Female quality association (FQA) of the adjectives as a function of …
Figure 2. Female quality association (FQA) of the adjectives as a function of …
(10) Three forms of sample profession musicien 'musician'
a. masculine: musiciens
b. double-gender: musiciens et musiciennes
c. middot: musicien.ne.s
version was framed as Selon vous, quels étaient les pourcentages d'hommes et de femmes dans ce rassemblement?('In your opinion, what were the percentages of men and women in the gathering?') (men-women version) and the other, Selon vous, quels étaient les pourcentages de femmes et d'hommes dans ce rassemblement? ('In your opinion, what were the percentages of women and men in the gathering?') (women-men version). There were likewise two versions of the response slider, such that the labels for the left and right endpoints, i.e. pictograms of a man and a woman, reflected the framing of the test question. Instructions on how to use the slider were shown with the question and remained visible to participants during the test. (See Appendix D for the slider and instructions.) Slider version (and hence, question framing), was counterbalanced within each of the three groups.
Figure 3. Boxplots of estimated percentages of women as a function of linguistic form. Medians …
) = 5.84, p < .0001; masculine vs. middot: β = 12.4, SE = 3.07, t(435) = 4.05, p < .0003; double vs. middot: β = -5.40, SE = 3.05, t(438) = -1.74, p > .1); for female-stereotyped professions, the differences between the masculine form on the one hand and the gender-fair forms on the other hand are indeed smaller and significant only for the middot form (masculine vs. double: β = 4.59, SE = 3.15, t(432) = 1.46, p > .1; masculine vs. middot: β = 10.4, SE = 3.04, t(435) = 3.47, p < .002; double vs. middot: β = 5.84, SE = 3.06, t(435) = 1.91, p > .1). Compared to the results of the pilot experiment, the effect size (partial η²)
Figure 5. Boxplots of estimated percentages of women as a function of word type and gender …
Figure 6. Mean percentages of gender recall accuracy as a function of word type and stereotype.
Figure 7. Boxplots of estimated percentages of women as a function of linguistic form and …
Figure 8. Boxplots of estimated percentages of women as a function of linguistic form and …
Figure 9. Research evaluations as a function of participants' degree of moral commitment …
Figure 10. Evaluations of research showing evidence of "bias against women" and "bias …
Figure 11, and plots of research evaluations as a function of predicted gender bias (i.e. the
Figure 11. Evaluations of research showing evidence of "bias against women" and "bias …
Figure 12. Evaluations of research showing evidence of "bias against women" and "bias …
Figure 13. Research evaluations as a function of moral commitment to gender equality …
used in Experiment 4
Elite faculty in the life sciences employ fewer women
Sheltzer et al. Massachusetts Institute of Technology, Cambridge, MA 02139
Gender attitudes and ideology influence individuals' perception of inequality and their trust in scientific evidence of discriminatory practices that undermine the advance of gender equity. Individuals who are morally concerned about the well-being of women are more likely to detect any practices or external barriers that discourage women from pursuing academic

Additionally, future studies can enrich the data on the relationship between grammatical gender and the perceived properties of objects by examining language groups outside the Indo-European family. There are more than a hundred grammatical gender languages around the world that have diversified gender structures, and many of them are understudied. Most of the existing literature documents the Indo-European languages that possess two to three gender classes. Future research can extend the investigations to languages with a larger number of gender classes, like the Niger-Congo languages (e.g. Swahili), that may show different patterns of results.
Table 1. Results of linear mixed-effects model for Experiment 1

                  β      SE     t      χ²      Df    p         partial η²
Form                                   18.54   2     < .0001   0.11
  double          3.41   1.53   2.23
  middot          3.11   1.52   2.04
Next, we carried out non-preregistered post-hoc analyses to compare the results to
Misersky et al.'s (2013) norming data shown in Appendix A.
Table 2. Results of linear mixed-effects model for Experiment 2

                     β       SE     t       χ²     Df    p         partial η²
Stereotype (male)    -13.2   1.73   -7.64   58.4   1     < .0001   0.85
Form                                        14.8   2     < .001    0.05
  double             1.62    1.48   1.10
  inclusive          3.91    1.49   2.62
Stereotype × Form                           3.82   2     0.15      0.01
  male:double        2.66    1.48   1.80
  male:middot        -0.38   1.49   -0.26
Relative to female-stereotyped professions (M = 60.8, SD = 19.6), lower percentages of
women were obtained for male-stereotyped ones (M = 34.6, SD = 18.7; β = -13.2, SE = 1.73, t =
-7.64, p < .001). Restricted analyses with corrections for multiple comparison (mvt method)
Table 3. Estimated percentage point differences between participants' estimates in Experiments 1 and 2 and Misersky et al.'s (2013) French norms, by linguistic form. Note: Significant values are shown in boldface type, with positive ones indicating a female bias, and negative ones a male bias.
Table 4. Results of linear mixed-effects model for pooled data analysis (Experiment 2 and Pilot 3)

                     β       SE     t       χ²     Df    p         partial η²
Stereotype (male)    -14.3   1.69   -8.48   71.9   1     < .0001   0.87
Form                                        36.2   2     < .0001   0.08
  double             3.66    1.27   2.90
  middot             3.88    1.25   3.12
Stereotype × Form                           10.6   2     .005      0.02
  male:double        4.08    1.27   3.23
  male:middot        -1.54   1.25   -1.24
Table 5. Results of linear mixed-effects regression

                     β       SE     t       χ²     Df    p         partial η²
Table 6. Results of logistic mixed-effects regression

                     β       SE     Z       χ²     Df    p
Table 7. Number of participants and mean estimates of %-women as a function of Word type, Stereotype, and Recall response

Word type   Stereotype   Recall response   Nb   Mean %-women   SE
Noun        Male         correct           46   89.5           2.63
                         incorrect          5   20.4           8.56
            Female       correct           50   88.9           1.75
                         incorrect          2   50.5           0.50
Pronoun     Male         correct           31   89.7           4.08
                         incorrect         19   40.9           3.68
            Female       correct           33   86.9           3.62
                         incorrect         15   56.6           5.63
Table 8. Results of linear regression for Pilot 3

                                     β       SE     t       χ²     Df   p         partial η²
Stereotype (male)                    -17.0   2.56   -6.64   44.1   1    < .0001   0.81
Form (double)                        2.56    1.35   1.89    3.58   1    0.058     0.03
Stereotype (male) × Form (double)    2.96    1.35   2.18    4.77   1    0.03      0.03
The results, shown in Table 9, revealed effects of Stereotype and Linguistic form, as well as their interaction.
Table 9. Results of linear mixed-effects model for the Pilot study

                     β       SE     t       χ²     Df    p         partial η²
Stereotype (male)    -17.0   1.92   -8.88   78.9   1     < .0001   0.91
Form                                        27.9   2     < .0001   0.17
  double             9.00    2.40   3.75
  middot             3.07    2.25   1.36
Stereotype × Form                           6.53   2     0.04      0.05
  male:double        6.06    2.40   2.53
  male:middot        -3.58   2.25   -1.59
Table 10. Results of linear regression

DV            IV                                   β       SE     t       p
Accuracy      Bias (favoring)                      -0.77   0.07   -10.4   < .0001
              Moral commitment                     0.14    0.04   3.74    < .001
              Bias (favoring) × Moral commitment   -0.12   0.04   -3.37   < .001
Reliability   Bias (favoring)                      -0.54   0.07   -7.32   < .0001
              Moral commitment                     0.10    0.04   2.70    0.007
              Order (com -> summary)               -0.17   0.07   -2.31   0.02
              Bias (favoring) × Moral commitment   -0.10   0.04   -2.68   0.007
Table 11. Results of linear regression

DV              IV                   β      SE     t      p
Accuracy        Moral commitment     0.34   0.06   5.68   < .001
                Order (com -> task)  0.27   0.12   2.30   0.02
Reliability     Moral commitment     0.42   0.07   6.11   < .001
                Order (com -> task)  0.33   0.13   2.47   0.01
Justification   Moral commitment     0.44   0.08   5.62   < .001
                Order (com -> task)  0.26   0.15   1.71   0.09
B2. Results of mediation analysis with "prior beliefs" as the mediator for Experiment 3

DV            Statistic        Estimate   95% CI Lower   95% CI Upper   p
Accuracy      ACME             0.045      0.017          0.08           <.0001
              ADE              0.236      0.132          0.34           0.002
              Total Effect     0.281      0.170          0.39           <.0001
              Prop. Mediated   0.160      0.068          0.29           <.0001
Reliability   ACME             0.036      0.013          0.06           0.002
              ADE              0.183      0.055          0.31           <.0001
              Total Effect     0.219      0.083          0.34           <.0001
              Prop. Mediated   0.164      0.062          0.41           0.002

B3. Results of mediation analysis with "moral commitment" as the mediator for Experiment 3

DV            Statistic        Estimate   95% CI Lower   95% CI Upper   p
Accuracy      ACME             0.007      0.002          0.01           0.002
              ADE              0.035      0.020          0.05           <.0001
              Total Effect     0.043      0.028          0.06           <.0001
              Prop. Mediated   0.170      0.050          0.37           0.002
Reliability   ACME             0.006      0.0009         0.01           0.004
              ADE              0.029      0.015          0.04           <.0001
              Total Effect     0.034      0.021          0.05           <.0001
              Prop. Mediated   0.165      0.031          0.39           0.004
For a discussion, see https://madame.lefigaro.fr/societe/lecriture-inclusive-peut-elle-vraiment-changerla-place-des-femmes-dans-la-societe-040621-196755
See https://www.cidj.com/metiers/infirmiere-infirmier
See https://madame.lefigaro.fr/societe/lecriture-inclusive-peut-elle-vraiment-changer-la-place-desfemmes-dans-la-societe-040621-196755 for a discussion
https://www.motscles.net/ecriture-inclusive
See for instance https://www.marianne.net/agora/tribunes-libres/une-ecriture-excluante-qui-s-imposepar-la-propagande-32-linguistes-listent-les
https://www.academie-francaise.fr/actualites/declaration-de-lacademie-francaise-sur-lecriture-diteinclusive
9 https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000036068906
10 https://www.assemblee-nationale.fr/dyn/15/textes/l15b4003_proposition-loi
11 https://www.education.gouv.fr/bo/21/Hebdo18/MENB2114203C.htm
The middot "‧" was replaced with the normal dot "." in the two experiments reported here, as is often the case in everyday language use.
Despite the parameter name, the analyses were correlational as the data were observational and we did not have a treatment condition.
Acknowledgements
A2. Actual "bias against women" research summary based on the article of Moss-Racusin (2012) used in Experiments 1b, 1c, 2, and 3
A6. Counterfactual "bias favoring women" research summary based on the article of Moss-Racusin (2012) used in Experiment 2
// gender predictor (sum-coded: masc=-0.5, fem=+0.5)
vector[N] language; // language (sum-coded: DE=-0.5 or FR=+0.5)
int<lower=1,upper=N_subj> subj[N]; // subject identifier
int<lower=1,upper=N_pair> pair[N];

a. 'What was being appreciated about Amiens?'
b. Qu'est-ce qui a été offert à l'hôtel de ville ? 'What was offered at the City Hall?'
Procedure
The experiment was run via Qualtrics online software (https://www.qualtrics.com). Each participant was tested on a single trial and was paid 0.50 € for their time.
Participants were randomly assigned to one of the three linguistic forms (masculine, double gender, or middot). Within each group, they were randomly presented with one profession from the relevant list.
Once they agreed on the informed consent, participants were told that in the survey, they would be shown two very short texts to read and answer a few questions about. After reading the first text, which unbeknownst to the participants was a warm-up text, they had to answer two multiple-choice questions related to its contents (see Appendix C for the text and the questions), with the text still being visible. Depending on their responses, they received either positive or negative written feedback. Before they moved on to the next page of the survey and were presented with the target text on the professional gathering, they were told that questions about the following text would be more difficult. This was done to prompt them to read the following text with full attention. Then, they first read the text at their own pace; after a button press, the text disappeared and three questions were shown. The first two were the attention check questions shown in (11) above; if participants made at least one error, they were still able to finish the experiment but their data were excluded from the analyses. The third question asked them to provide their estimate of the gender ratio in the fictional gathering. There were two an underestimation. We used the lmer function of the lmerTest package [START_REF] Kuznetsova | lmerTest package: Tests in linear mixed effects models[END_REF] such as to obtain p-values. We found that the masculine form yielded an underrepresentation of women compared to the benchmark ( = -10.66, SE = 2.73, t = -3.91, p < .02), while estimates for the double-gender and middot forms did not differ from the benchmark values (double: = -0.80, |t| < 1; middot: = -1.01, |t| < 1). In other words, these results suggest that masculine plural induces an 11% point of male bias, while both the alternative forms induce a consistent representation of the proportion of women.
No previous study has focused specifically on a neutral stereotype. Yet, our finding that participants inferred a higher percentage of women when the double-gender or middot form was presented relative to the masculine form meshes well with the results of Gygax et al. (2008[START_REF] Gygax | The masculine form and its competing interpretations in French: When linking grammatically masculine role names to female referents is difficult[END_REF], who examined neutral-stereotyped role names alongside male-and female-stereotyped ones. Indeed, using a different paradigm they observed a male bias regardless of stereotype in both French and German. Similarly, the finding that the masculine form induces a male bias is in accordance with the Swedish study of [START_REF] Lindqvist | Reducing a male bias in language? Establishing the efficiency of three different gender-fair language strategies[END_REF].
Experiment 2
Experiment 1 showed an influence of linguistic form on estimations of gender ratios for professions without a gender stereotype. In this experiment, we focus on male- and female-stereotyped professions, and compare the same linguistic forms as before, i.e. masculine, double-gender and middot. This design allows us to examine whether the effect of linguistic form is modulated by stereotype. As double-gender and middot forms might promote women's visibility especially in cases where they are typically a minority gender, we expect, if anything, a larger effect of linguistic form for male-stereotyped professions than for female-stereotyped ones.
Procedure
Participants were randomly assigned to one of 12 groups obtained by crossing the two stereotypes, three linguistic forms, and two slider versions. Within each group, participants were randomly shown one of the six professions from the relevant list (male-or female-stereotyped).
The procedure was otherwise identical to the one for Experiment 1.
Participants
We recruited 438 participants. The data from 133 of them were removed from the analysis for the following reasons: 33 did not complete the survey, 36 participated in Experiment 1 or a related experiment not reported in this article, 28 responded incorrectly at one or both attention check questions, and the remaining ones did not satisfy all of our recruitment criteria: 15 were non-native speakers, two did not live in France, 12 recruited on Foulefactory were older than 34, and seven recruited on Clickworker were younger than 20 (For the same reason as in Experiment 1, the selected age range for participants on Foulefactory was 25-34, while the range for those on Clickworker was 20-40. The data from the 12 on Foulefactory who indicated being older than 34 were again removed because of the conflict with the registered age in Foulefactory's database, even though they indicated being younger than 40). We also excluded the second response of 38 participants who took the survey twice.
The data analysis thus included 305 participants (158 women, 145 men, and 2 other gender). They were native French speakers living in France, aged between 20 and 40 years (M = 28, SD = 5.3). They participated on the crowd-sourcing platforms Foulefactory (N = 79) and Clickworker (N = 226) and none of them had participated in the previous experiment.
The random assignment of participants to one of 12 groups (two stereotypes x three linguistic forms x two slider layouts) resulted in a mean number of 25 participants per condition (min = 24, max = 28).
Results and Discussion
Boxplots of estimated percentages of women as a function of linguistic form and stereotype are shown in Figure 4. As in Experiment 1, the data were fit with a linear mixed-effects model. The model contained fixed factors for contrast-coded Stereotype, Linguistic form, and their interaction, as well as a random intercept for Profession. The results, shown in Table 2, revealed effects of Stereotype and Linguistic form, but no interaction.
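As a rough illustration of the model structure described above (contrast-coded fixed factors and a random intercept for Profession), the following R sketch shows one way such a model can be specified; the data frame and variable names are hypothetical, and the omnibus χ² tests reported in the tables may have been obtained with a different function than the one shown here.

library(lmerTest)
library(car)  # Anova() provides Wald chi-square tests for merMod objects

# Hypothetical data frame 'dat2', one row per participant:
#   percent_women = estimated percentage of women in the gathering
#   stereotype    = male- vs. female-stereotyped profession
#   form          = masculine, double-gender, or middot
#   profession    = profession name shown (random intercept)
dat2$stereotype <- factor(dat2$stereotype)
dat2$form       <- factor(dat2$form)
contrasts(dat2$stereotype) <- contr.sum(2) / 2  # contrast-coded: -0.5 / +0.5
contrasts(dat2$form)       <- contr.sum(3)      # sum-coded three-level factor

m2 <- lmer(percent_women ~ stereotype * form + (1 | profession), data = dat2)
car::Anova(m2, type = 3)  # omnibus tests for Stereotype, Form, and their interaction
summary(m2)               # individual contrast estimates (beta, SE, t)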
processing information on stereotype.
Next, we turn to the question as to whether gender-fair language forms induce consistent representations. We have assumed throughout that representations are consistent if they reflect people's perceived gender ratios, as indicated by Misersky et al.'s (2013) norming data, rather than real-world gender ratios (which are currently not available for France). There is some evidence that people can reliably estimate real-world gender distributions. That is, in a study on the distribution of men and women in professions in the UK, [START_REF] Garnham | True gender ratios and stereotype rating norms[END_REF] found a good correlation between the norming data provided by the English sample in [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF] and real-world data from governmental sources. Yet, as English marks gender only on personal pronouns, it would be useful to run a similar study in French (or German). Indeed, one might wonder to what extent norming data are themselves influenced by linguistic representations, either as encountered in daily life or as used in the norming questionnaire. As to the latter, the endpoints of the rating scale for gender-marked profession names in [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF] showed the masculine and feminine forms, respectively, and the direction of the rating scale was counterbalanced across participants. The influence of language forms on perceived gender ratios was thus experimentally controlled at best. We tentatively conclude that the choice of benchmark data should not make much of a difference.
Recall that our results showed that the gender-fair forms yield consistent representations only for professions with a neutral stereotype. For male-stereotyped professions they overshoot their objective, while for female-stereotyped ones they fail to provide enough of a boost. It would be interesting to consider professions in a larger range of stereotypicality, as we only tested professions for which the norming data either showed an almost perfect gender balance (estimated proportions of women between .47 and .51) or a largely unbalanced one (estimated
Pilot Studies
Pilot 1
This pilot (not included in the submitted manuscript) is similar to the two experiments described in the chapter with respect to its design and procedure. It has three aims. First, we seek to validate our paradigm by focusing on the French feminine grammatical form exclusively, which unambiguously refers to women-only groups. Thus, if the grammatical information is processed correctly, we should observe overall high estimates of %-women. In order to minimize the chances that the profession name is erroneously processed as if it were shown in the masculine form, we use names for which the feminine plural is not only written but also pronounced differently than the masculine one.
Second, we investigate whether the influence of gender stereotype can override that of grammatical gender. To do this, we compare male-vs. female-stereotyped professions. For female-stereotyped professions the stereotype is in accordance with the grammatical gender, but for male-stereotyped professions the two types of information conflict. Thus, if stereotype information overrides linguistic information, we should observe lower estimates of %-women for male-stereotyped than for female-stereotyped professions, where the latter should be at or near ceiling, i.e. 100%.
Third, we examine a potential difference according to whether the linguistic information is present on a noun or on a pronoun. Compared to content words, function words, including pronouns, are shorter, more frequent, more predictable and more often redundant, and there is evidence that they are processed in less depth during reading [START_REF] Carpenter | What your eyes do while your mind is reading[END_REF][START_REF] Healy | Letter detection: A window to unitization and other cognitive processes in reading text[END_REF].

male-stereotyped, not gender-marked: Mmen = .80, SD = .02; female-stereotyped, gender-marked: Mwomen = .78, SD = .05; female-stereotyped, not gender-marked: Mwomen = .78, SD = .08). An ANOVA with the factors Stereotype and Gender-marking showed no main effect and no interaction (Stereotype: F(1,20) = 1.07, p > .1; Gender-marking and Stereotype × Gender-marking: F < 1). The 24 professions, together with the estimates of %-women in those

The short text about the professional gathering was similar to the one used in the two experiments. As in the previous experiments, in the passage for the noun condition, exemplified in (12a), the feminine plural form of the profession name appeared twice in the text and no referential pronoun was used. In the passage for the pronoun condition, exemplified in (12b), the profession name had no gender marking and its plural form was shown only once, but a referential feminine plural pronoun appeared twice; this pronoun thus revealed the gender information. Hence, in both conditions the relevant grammatical information appeared twice.
(12) a. Passage for noun condition with sample profession:
Le rassemblement régional des caissières a eu lieu cette semaine à Amiens. La localisation centrale de cette ville a été particulièrement appréciée. Les caissières ont aussi adoré l'apéro offert à l'hôtel de ville le premier jour. 'The regional gathering of cashiersfem took place this week in Amiens. The central location of this city was particularly appreciated. The cashiersfem also loved the aperitif offered at the City Hall on the first day.'
b. Passage for pronoun condition with sample profession:
Le rassemblement régional des fleuristes a eu lieu cette semaine à Amiens. Elles ont particulièrement appréciée la localisation centrale de cette ville. De plus, elles ont adoré l'apéro offert à l'hôtel de ville le premier jour. 'The regional gathering of floristsmasc/fem took place this week in Amiens. Theyfeminine particularly appreciated the central location of this city. Theyfeminine also loved the aperitif offered at the City Hall on the first day.'
The words régional and Amiens were replaced with européen and Francfort respectively for three professions, mathématicien 'mathematician', douanier 'customs officer' and astronaute 'astronaut', for it would be more plausible for these professionals to have a Europe-wide assembly in a more internationally-oriented city than a regional one in a provincial city.
Procedure
The pilot followed the same procedure as for Experiment 1 and 2 except in the aspects described below.
Participants were randomly assigned to one of four groups defined by crossing word type (noun or pronoun) and stereotype (male or female). Within each group, participants were randomly assigned to one profession from the relevant list.
After the rating task, participants were asked a multiple-choice question to test how well they recalled the gender-marked profession name or the pronoun from the text they had seen.
The question was framed as Quel mot était présent dans le texte ? ('Which word was shown in the text?') and three choices were shown below the question. For participants in the noun condition, the three choices were: profession name in masculine plural form (e.g., caissiers), profession name in feminine plural form (e.g., caissières), and Je ne sais pas ('I don't know');
These factors could of course also be at play for the participants who were incorrect on the memory question. Despite the presence of such noise in the data, our paradigm appears to be adequate for testing the influence of grammatical gender on the perception of biological gender, especially when the gender information is present on the noun denoting the profession rather than on a referential pronoun. In the remaining experiments, we therefore focus on grammatical gender-marked profession names, and hence use the kind of noun passage exemplified in (5a) above.
Pilot 2
Pilot 2 (not included in the submitted manuscript) was similar to Pilot 1 in terms of the materials and procedure. In this pilot experiment, we compared the masculine plural pronoun ils 'theymasc' and its double-gender alternative ils et elles 'theymasc and theyfem'. We did not include the inclusive writing form iels as it seems extremely unfamiliar to French speakers.
Method
Stimuli
The stimuli used in this pilot were the same as for the pronoun condition of Pilot 1. They were 12 profession names with a unique form for the masculine and feminine gender (see
Appendix B).
Procedure
The procedure for this pilot was identical to that of Pilot 1 except in the following aspects.
First, we only had a pronoun condition as we removed the noun condition described in Pilot 1.
Second, no memory question was being asked after the estimation task. Third, participants were randomly assigned to one of four groups (2 gender stereotype x 2 linguistic form), and within each group, they were randomly shown one profession name from the relevant list.
Participants
Participants were native French speakers living in France (N = 146), 80 women and 66 men, aged between 20 and 68 years (M = 43, SD = 12). They participated on the crowd-sourcing platform Foulefactory.
The mean number of participants per condition was 18 (min = 8, max = 25).

differences between masculine and the gender-fair forms are smaller and significant only for the middot form (masculine vs. double: β = 12.5, SE = 6.38, t(133) = 1.97, p > .1; masculine vs. middot: β = 16.2, SE = 5.54, t(134) = 2.93, p < .02; double vs. middot: t < 1).

Appendix B. French profession names used in Pilots 1 (pronoun condition) and 2, with mean percentages of women as rated by participants in [START_REF] Misersky | Norms on the gender perception of role nouns in Czech, English, French, German, Italian, Norwegian, and Slovak[END_REF].
Stereotype   Profession name   English translation   %-women
Male

'Move the indicator to respond:
If the indicator is at the utmost left end, you estimate that there are 100% women and 0% men.
If the indicator is at the utmost right end, you estimate that there are 0% women and 100% men.
If the indicator is at the midpoint, you estimate that there are as many women as men.
Of course, your choice is not restricted to the three
Abstract
In a context of increasing gender equality in Western societies, systematic differences and possible biases in how people weight evidence of gender discrimination against women have garnered much public attention. However, the ultimate sources of these differences are unclear. Some previous work [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF] suggests that participants' gender is a key factor, such that men rate strong (experimental) scientific evidence suggesting discriminatory hiring practices against women less favorably than women do. Here, we explore a potentially more powerful source of variation in how people evaluate evidence of gender discrimination: their level of moral commitment to gender equality.
Across a series of six experiments, we focus on perceptions of discriminatory hiring practices based on gender in academic contexts. We find that people's degree of moral commitment to gender equality is a robust predictor of their trust in statistical evidence of gender discrimination against women, and that the correlation between moral commitment and evaluations cannot be explained by factual prior beliefs. Moreover, this holds whether the evidence of hiring bias is strong (experimental) or weak (only correlational, thus leading to confusion between gender imbalance and discriminatory hiring). Our results additionally show, in contrast to previous work in this area, that participant sex does not predict evaluations of evidence of discrimination. Taken together, our findings suggest a new picture of the origins of systematic differences in people's appreciation of evidence of gender bias in academia.
Keywords: Gender bias, Moral commitment, STEM, Prior beliefs, Trust, Research evaluations

While a wealth of evidence suggests that women do suffer sex-based discrimination in academia, there is significant variability in how people react to this fact [START_REF] Danbold | Men's defense of their prototypicality undermines the success of women in STEM initiatives[END_REF][START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF][START_REF] Moss-Racusin | Can evidence impact attitudes? Public reactions to evidence of gender bias in STEM fields[END_REF]. For instance, [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF] found that men evaluated such evidence less favorably than women did, and explained this difference in reactions as in-group favoritism, a mechanism by which men protect their identity as the dominant gender group in STEM by resisting information that threatens this identity [START_REF] Danbold | Men's defense of their prototypicality undermines the success of women in STEM initiatives[END_REF][START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF].
In this work, we approach people's differing reactions to evidence of gender bias from a novel angle: by examining the possible influence of moral commitment to gender equality. Moral commitment (or conviction) to a cause refers to the degree to which individuals deem the issue a non-negotiable moral imperative. People highly committed to an issue typically see it as objectively and universally important, and as central to their sense of moral identity. They display strong emotional reactions-such as anger and disgust-at people, practices, representations, and institutions that they think impede the advancement of their cherished cause [START_REF] Skitka | The Psychology of Moral Conviction[END_REF][START_REF] Skitka | Moral Conviction: Another Contributor to Attitude Strength or Something More[END_REF]. Drawing on these considerations, we hypothesized that individuals' moral commitment to gender equality may be a key moderator of their trust in research on the issue of gender bias in academia. A long tradition of research has shown that people's moral attitudes modulate their processing of new information-whether this springs from prior beliefs only, or involves emotional processes as in "motivated thinking" [START_REF] Kunda | Motivated Inference: Self-Serving Generation and Evaluation of Causal Theories[END_REF][START_REF] Kunda | The case for motivated reasoning[END_REF][START_REF] Lord | Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence[END_REF]. For example, scientific articles highlighting the role of human activities in global warming are more likely to have the credibility of their methods or the probity of their authors questioned by conservatives than liberals (Kahan et al., 2011a). Proponents of the death penalty put studies suggesting its inefficiency in deterring murder under closer scrutiny than people who forcefully oppose it [START_REF] Edwards | A disconfirmation bias in the evaluation of arguments[END_REF][START_REF] Lord | Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence[END_REF]. Scientists tend to discount manuscripts under review that run against their favored theory more than those supporting their theoretical convictions (Koehler 1997).
Current study
Following these lines of research, we expected that individual differences in moral commitment to gender equality may be associated with differences in how persuasive evidence of gender discrimination in academia is perceived as being. Specifically, we hypothesized that greater commitment to gender equality should predict increased trust in research reporting evidence of hiring discrimination against females, regardless of participants' sex. Experiments 1a, 1b, 1c, and 2 look specifically at this question.
Influences of moral attitudes on judgments raise the important question of whether those influences can be reduced to people's prior factual beliefs/expectations on the issue at hand [START_REF] Lord | Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence[END_REF]Pennycook, 2020;Tappin et al., 2020aTappin et al., , 2020b)). In order to shed light on this question, we also explored in Experiment 3 if the (theorized) relationship between individuals' moral commitment to gender equality and their trust in research reporting evidence of hiring
Method
Participants
The data analysis included 268 UK participants (171 women and 97 men) aged between 18 and 74 years (M = 35, SD = 13.1). They were recruited on the crowd-sourcing platform Prolific (https://prolific.co/) and were paid £0.50 for their participation. We determined the sample size by running power analysis using the applications "mc_power_med" and "schoam4" (Schoemann et al., 2017) in R (R Core Team, 2020) via Rstudio (RStudio Team, 2020). The applications suggested a sample size of 250 for a statistical power of 0.87. As data were collected in batches and we applied the exclusion criteria after each batch was complete, we ended up with a sample size slightly larger than the suggested one.
In total, we recruited 288 participants. Data from 20 participants were removed for the following reasons: one did not meet our recruitment criterion (reported sex being "other"), 17
failed one of the two attention checks, and two took the survey more than once (only their first response was kept).
Materials and procedure
On the next page, participants passed a second attention check by answering the question:
"The summary you've just read is about". They could choose from "Poverty economics", "Modern literature", "Gender bias", and "Quantum physics". Participants who chose answers other than "Gender bias" were removed from the analyses.
Then, participants were asked to report their degree of moral commitment to gender equality by indicating the extent to which they agreed with the statement: "Achieving gender equality in society is a moral imperative" on a slider scale ranging from [0] "Strongly disagree" to [10] "Strongly agree" with [5] "Neither agree nor disagree" as default slider position.
Next, they reported their sex and gender identity in random order. The question for biological sex was "Assuming sex is a biological notion that refers to your physical anatomy, what is your sex?". Participants chose from "Male", "Female" and "Other" as responses. There were two versions of the question on gender identity in order to counterbalance the order in which the words "masculine" and "feminine" appeared in the text. One version of the question was framed as "Assuming gender refers to how masculine or feminine you feel with respect to your identity, where would you position your gender?" (Masculine-Feminine version), and the other version, as "Assuming gender refers to how feminine or masculine you feel with respect to your identity, where would you position your gender?" (Feminine-Masculine version).
Consistent with the question framing, there were two versions of response scales depending on whether the left [0] and right [10] endpoints were labeled as "Strongly masculine" and "Strongly feminine", respectively (Masculine-Feminine version), or the reverse (Feminine-Masculine version). The slider was initially positioned at [5] "Gender neutral" on both scales.
Participants were randomly assigned either version of the gender identity question / scale.
Responses on the Feminine-Masculine scale were reverse coded.
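Purely as an illustration of this recoding step (the column names scale_version and gi_raw are hypothetical), the reverse coding can be done in one line in R:

# Put both slider versions on a common 0-10 scale where higher = more feminine.
dat$gender_identity <- ifelse(dat$scale_version == "FM",
                              10 - dat$gi_raw,   # Feminine-Masculine scale: reverse code
                              dat$gi_raw)        # Masculine-Feminine scale: keep as is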
The survey ended with a question about participants' age, and a 1-item measure of political orientation.
Results
All data cleaning and analyses reported in this paper were conducted in R (R Core Team, 2020) via RStudio (RStudio Team, 2020). Linear regression analyses were run using the lme4 package [START_REF] Bates | Fitting Linear Mixed-Effects Models Using lme4[END_REF]. The categorical variable (i.e. sex) was contrast-coded and the research evaluations (all |t| < 1). For the same reason as described in Experiment 1a, the preregistered mediation analysis was again not run.
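The following minimal R sketch illustrates the kind of regression described here; the data frame 'd' and its column names are hypothetical stand-ins rather than the variables of the actual analysis scripts, and a plain lm() call is shown for simplicity even though the text mentions the lme4 package.

# Hypothetical data frame 'd', one row per participant:
#   accuracy   = rating of the accuracy of the research findings
#   commitment = moral commitment to gender equality (0-10)
#   sex        = participant sex ("Female" or "Male")
d$sex <- factor(d$sex)
contrasts(d$sex) <- contr.sum(2) / 2   # contrast-coded: -0.5 / +0.5

fit <- lm(accuracy ~ commitment * sex, data = d)
summary(fit)  # effects of moral commitment, sex, and their interaction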
Plots of the research evaluations as a function of moral commitment and participant sex are shown in Figure 9.
Experiment 1c
Unlike [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF] who reported that males were less receptive to evidence demonstrating a hiring bias against women than female participants, we found an effect of moral commitment on individuals' research evaluations across cultures in Experiments 1a and 1b, while no sex difference was observed. There were a few possible explanations for the absence of sex difference in the perception of evidence in our experiments. One is that our dependent measures, which were different from those of [START_REF] Handley | Quality of evidence revealing subtle gender biases in science is in the eye of the beholder[END_REF], failed to capture the sex difference (if it exists). Another possibility is that men and women reached more consensus on the issue of gender bias over the last few years and thus any sex difference observed before has diminished. To answer this question, here we ran an exact replication of Handley et al's design again with a U.S. sample, by using their original dependent measures in order to allow for a more accurate comparison. All aspects of the current study were pre-registered on OSF (10.17605/OSF.IO/7UW85).
Method
Participants
The data analysis included 467 U.S. participants (233 women and 234 men) aged between 18 and 68 years (M = 35, SD = 11.8) recruited on Prolific. They were paid £0.50 for their participation. We determined the sample size by running a power analysis in G*Power 3.1
A second change from our previous experiments was that we counterbalanced the order in which the moral commitment items and the research evaluation questions were presented.
Finally, in this study and the other studies presented below, we kept only the Masculine-Feminine version of the gender identity question and response scale.
Results
We ran the same analysis as before. This time, the sex effect reported in Handley et al. (2015) was observed. That said, the sex difference vanished (β = 0.05, SE = 0.04, t = 1.10, p = 0.27) when moral commitment was factored in. Consistent with results of previous experiments, moral commitment positively predicted the perceived overall quality of the research summary (β = 0.19, SE = 0.02, t = 12.06, p < .0001, η² = 0.26). Additionally, unlike Experiments 1a and 1b, here we observed an interaction between moral commitment and participant sex: the effect of moral commitment was more pronounced among males than females (β = 0.04, SE = 0.02, t = 2.67, p < .01, η² = 0.02). Results also suggested a three-way interaction between moral commitment, presentation order and sex (β = 0.04, SE = 0.02, t = 2.81, p < .01, η² = 0.02). Restricted post-hoc analysis revealed that the effect of commitment on evaluations was greater in male than in female participants when the moral commitment scale was presented before the research summary (t = 4.34, p < .001); however, the effect was the same on both sexes when the research summary preceded the moral commitment items (t < 1, p > .9).
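To unpack a three-way interaction of this kind, one convenient (illustrative) approach in R is the emmeans package; the model object name fit3 and its predictor names are hypothetical placeholders for the actual fitted model.

library(emmeans)

# 'fit3' is a hypothetical model of the form  quality ~ commitment * sex * order.
# emtrends() extracts the slope of moral commitment for each sex,
# separately by presentation order, and tests the pairwise differences.
emtrends(fit3, pairwise ~ sex | order, var = "commitment")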
We expected evidence of hiring bias favoring women to face more skepticism than that of women being discriminated against, and that the skepticism would be amplified by participants' moral commitment to gender equality. All aspects of the current experiment were pre-registered on OSF (10.17605/OSF.IO/W9KUS).
Method
Participants
The data analysis included 636 U.K. participants (319 women and 317 men) aged between 18 and 76 years (M = 36, SD = 13.2), recruited on Prolific. They were paid £0.50 for their time. Again, we determined the sample size by applying the same criterion as described in Experiments 1b and 1c. We ran a power analysis in G*Power 3.1 (Faul et al., 2009) which suggested a sample of 618 for a statistical power of 0.90. For the same reason as in previous experiments, we ended up with a sample size slightly larger than the suggested one.
In total, we recruited 660 participants. Data from 24 participants were excluded for the following reasons: two did not meet our recruitment criteria (reported sex being "other"), 20 failed one of the two attention checks and two took the survey twice (only their first response was kept).
Materials and procedure
The materials used in Experiment 2 were four summaries reporting evidence of gender discrimination that in two cases described a hiring bias "against women" and in the other two cases a hiring bias "favoring women". One of the "against women" summaries was based on the article by [START_REF] Moss-Racusin | Science faculty's subtle gender biases favor male students[END_REF], as used in previous experiments, and one of the "favor women" summaries was based on the article by [START_REF] Williams | National hiring experiments reveal 2: 1 faculty preference for women on STEM tenure track[END_REF] which reported evidence of hiring bias favoring women in STEM. In order to vary our stimuli while controlling explained entirely by their prior expectations about the degree of discrimination that women would face in the relevant scenario. Alternatively, it may be that the effect of moral commitment on evaluations of scientific evidence goes over and above these prior expectations in such a way that we can consider any such effects to be conceptually separate from those of specific prior expectations.
In order to examine the extent to which an association between moral commitment and research evaluations, when it is found, can be reduced to participants' expectations about the likely size of the hiring bias, Experiment 3 adopted a design that allowed participants to both predict how much discrimination against women they thought would be observed in the study before it was conducted, and to evaluate how credible they perceived the study once its results
were available. Similar to Experiment 2, it also manipulated the direction of the bias described in the summaries: hiring bias against vs. favoring women.
All aspects of the current study concerning the materials, procedure, and analyses were pre-registered on OSF (10.17605/OSF.IO/BVKJC)
Method
Participants
The data analysis included 517 UK participants (265 women and 252 men) aged between 18 and 80 years (M = 36, SD = 13), recruited on Prolific. They were paid £0.50 for their time.
Again, we determined the sample size by running a power analysis in G*Power 3.1 (Faul et al., 2009) which suggested a sample of 502 for a statistical power of 0.90. For the same reason as in previous experiments, we ended up with a sample size slightly larger than the suggested one.
When the summary reported hiring bias against women, participants with a higher degree of moral commitment rated the findings as more accurate (β = 0.24, SE = 0.05, t = 4.73, p < .0001, η² = 0.11), and the methods as more reliable (β = 0.22, SE = 0.05, t = 4.15, p < .0001, η² = 0.06). Conversely, when the summary reported hiring bias favoring women, the effect of moral commitment disappeared (both p > .5).
Next, higher prior beliefs of gender bias against women predicted higher ratings on the accuracy of research findings (β = 0.01, SE = 0.006, t = 2.03, p = .04), and this effect was moderated by the reported direction of bias (β = 0.02, SE = 0.006, t = 3.72, p <.001). For the "bias against women" summary, expectations about the likelihood of gender discrimination happening positively predicted ratings on the accuracy of research findings (β = 0.04, SE = 0.008 t = 4.55, p <.0001, η 2 = 0.07), but again the effect of priors disappeared for the "bias favoring women" summary (β = 0.01, SE = 0.008, t = 1.30, p = .20).
As before, participants who reported their moral commitment before the prediction and evaluation tasks tended to rate the findings as marginally less accurate (β = 0.15, SE = 0.08, t = 1.82, p = .07) and the methods as less reliable (β = 0.19, SE = 0.08, t = 2.40, p = .02). This order effect was conditioned on the reported direction of bias (accuracy: β = 0.19, SE = 0.08, t = 2.38, p = .02; reliability: β = 0.22, SE = 0.08, t = 2.73, p <.01). The results of the "bias favoring women" summary were rated as less accurate (β = 0.36, SE = 0.13 t = -2.87, p < .01, η 2 = 0.03), and its methods as less reliable (β = 0.42, SE = 0.12 t = -3.50, p <.001, η 2 = 0.05) when participants saw the commitment items before the prediction and evaluation tasks than in the reversed order. Conversely, for the "bias against women" summary, no order effect was observed (both p >.5).
Similar to results of Experiment 2, when moral commitment was factored in, no effect of sex or gender identity, nor any interaction with other factors was found (all p > .05). However, when the variance explained by moral commitment was factored out, an interaction between sex and the reported direction of bias was found (accuracy: β = 0.30, SE = 0.08, t = 3.64, p <.001; reliability: β = 0.23, SE = 0.08, t = 2.81, p < .01). For the "bias against women" summary, men rated the findings as significantly less accurate (Men: M = 6.96, SD = 1.74; Women: M = 7.73, SD = 1.73; t = 3.28, p <.01) and the methods as slightly less reliable (Men: M = 6.86, SD = 1.81;
Women: M = 7.25, SD = 1.79; t = 1.69, p = 0.09) than did women. In comparison, for the "bias favoring women" summary, men rated the findings as slightly more accurate (Men: M = 5.99, SD = 2.05; Women: M = 5.55, SD = 2.04; t = 1.87, p = 0.06) and the methods as significantly more reliable (Men: M = 6.28, SD = 2.01; Women: M = 5.75, SD = 1.89; t = 2.29, p = 0.02) as did women.
Correlational mediation analyses
To further assess the triangular relationships between prior beliefs, moral commitment and research evaluations, we further performed correlational mediation analyses by following the steps set forth by Baron and Kenny (1986), using the R package mediation (Tingley et al., 2014).
These analyses were run only for the "bias against women" condition, aiming to test two (theorized) mediation effects: 1) the relationship between moral commitment and research evaluations is mediated by priors, and 2) the relationship between priors and research evaluations is mediated by moral commitment. The results, obtained with 1,000 bootstrap simulations, showed two significant partial mediation effects (see Appendices B2 and B3).
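As an illustration of how such an analysis can be set up with the mediation package (the data frame d_against and the variable names commitment, priors, and accuracy are hypothetical placeholders), a minimal sketch is:

library(mediation)

# Mediator and outcome models for the "bias against women" condition only.
med_model <- lm(priors   ~ commitment, data = d_against)           # mediator model
out_model <- lm(accuracy ~ commitment + priors, data = d_against)  # outcome model

med <- mediate(med_model, out_model,
               treat = "commitment", mediator = "priors",
               boot = TRUE, sims = 1000)   # nonparametric bootstrap, 1,000 simulations
summary(med)  # reports ACME, ADE, total effect, and proportion mediated

Swapping the roles of commitment and priors in the two models gives the second (reverse) mediation model described above.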
First, participants' prior beliefs partially mediated the positive correlations between moral commitment and the ratings on the accuracy of the research findings (Average Causal Mediating
Method
Participants
The data analysis included 264 UK participants (198 women and 66 men) aged between 18 and 62 years (M = 29, SD = 10.9) recruited on Prolific. They were paid £0.50 for their time.
As in previous experiments, we determined the sample size by running a power analysis in G*Power 3.1 (Faul et al., 2009) which suggested a sample of 255 for a statistical power of 0.90.
For the same reason as described above, we ended up with a sample size slightly larger than the suggested one.
In total, we recruited 280 participants. Data from 16 participants were excluded for the following reasons: one did not meet our recruitment criteria (reported sex being "other"), 13 did not complete the survey and two failed one of the two attention checks.
Materials and procedure
The stimulus summary was based on the research article of [START_REF] Sheltzer | Elite male faculty in the life sciences employ fewer women[END_REF], presented in the same format as the summaries tested before (see Appendix A). In the original study, the authors examined the gender distribution of biomedical scientists in academia by collecting information on post-doctoral researchers and professors employed in 39 departments at 24 of the highest-ranked research institutions in the United States. They focused on departments studying molecular biology, cell biology, biochemistry, and/or genetics. The original results showed that labs run by senior professors employed fewer female than male post-docs. Importantly, the report did not take this observed gender imbalance as evidence of gender discrimination specifically targeted at women. This would imply committing a fallacy, as women may be underrepresented in a profession because they simply are less interested in the job, for instance, and consequently apply less. Nonetheless, we intentionally
Discussion
Consistent with results of previous experiments, individuals having higher moral commitment to gender equity were more inclined to accept the finding that women are outnumbered by men in academia and to consider the research methods employed to obtain this finding as reliable. When faced with weak evidence, participants overall showed more skepticism about the reliability of the research methods and the extent to which the conclusion was supported by the reported results. However, as predicted, individuals who saw gender equality as a moral imperative were more likely to make an imprecise inference (i.e. the existence of gender bias against women) from the observation of lower proportions of female than male post-docs in research labs.
General discussion
Across six survey-based experiments, we investigated the influences of moral commitment to gender equality on people's trust in scientific research on gender bias. Overall, the experiments consistently revealed a positive relationship between the moralization of gender equality and reactions to evidence of women being discriminated against in academia. Specifically, Experiments 1 -3 found that participants, regardless of their sex, who scored higher on moral commitment to gender equity tended to rate research findings demonstrating a hiring bias against women as more accurate and the research methods adopted as more reliable. This effect of moral commitment was limited to research summaries reporting a gender bias to the disadvantage of women. By comparing two opposing directions of hiring bias (i.e. against vs. favoring women),
Experiment 2 additionally showed that moral conviction moderated people's evaluations of "bias against women" evidence, but not when the evidence pointed to a "bias favoring women".
Regarding the role of prior beliefs in the relationship between moral commitment and trust in

Appendix A. Stimuli

A1. Actual "bias against women" research summary based on the article of Moss-Racusin (2012) used in Experiment 1a
Proceedings of the National Academy of Sciences USA 109(41):16474-16479
Science faculty's subtle gender biases favor male students
C. A. Moss-Racusin, J. F. Dovidio, V. L. Brescoll, M. J. Graham, and J. Handelsman
Background:
Despite efforts to recruit and retain more women, a stark gender disparity persists within academic science. Abundant research has demonstrated gender bias in many demographic groups, but has yet to experimentally investigate whether science faculty specifically exhibit a bias against female students that could contribute to the gender disparity in academic science.
Design:
The present study asked science faculty members (N = 127) from research-intensive universities to rate the application materials of a student for a laboratory manager position (which, in scientific labs, is often occupied by a student). The student applicant was randomly given a male or female name, but was described as having exactly the same skills in both groups. In both groups, faculty members were asked to: rate the student's perceived competence and hireability, propose a starting salary, and offer a certain amount of career mentoring to the student. A comparison between average ratings of student's perceived competence, hireability, proposed salary and career mentoring was conducted between the male student and the female student groups.
Results:
• Science faculty members rated the male student as significantly more competent and hireable than the (identical) female student.
• Faculty members also offered more career mentoring to the male student and selected a higher starting salary.
• The gender of the faculty did not affect their ratings, such that female and male faculty were equally likely to exhibit bias against the female student.
Conclusions:
These results suggest that well-documented bias against females also exists in academia.

Science faculty's subtle gender biases favor male students
Moss-Racusin et al. Yale University, New Haven, CT 06520
Background:
Despite efforts to recruit and retain more women, a stark gender disparity persists within academic science. Abundant research has demonstrated gender bias in many demographic groups, but has yet to experimentally investigate whether science faculty specifically exhibit a bias against female students that could contribute to the gender disparity in academic science.
Design:
The present study asked science faculty members (N = 127) from research-intensive universities to rate the application materials of a student for a laboratory manager position (which, in scientific labs, is often occupied by a student). The student applicant was randomly given a male or female name, but was described as having exactly the same skills in both groups. In both groups, faculty members were asked to: rate the student's perceived competence and hireability, propose a starting salary, and offer a certain amount of career mentoring to the student. A comparison between average ratings of student's perceived competence, hireability, proposed salary and career mentoring was conducted between the male student and the female student groups.
Results:
Science faculty members rated the male student as significantly more competent and hireable than the (identical) female student.
Faculty members also offered more career mentoring to the male student and selected a higher starting salary. The gender of the faculty did not affect their ratings, such that female and male faculty were equally likely to exhibit bias against the female student.
Conclusions:
These results suggest that well-documented bias against females also exists in academia.
A4. Actual "bias favoring women" research summary based on the article of [START_REF] Williams | National hiring experiments reveal 2: 1 faculty preference for women on STEM tenure track[END_REF] used in Experiment 2
National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track W. M. Williams and S. J. Ceci Cornell University, Ithaca, NY 14853
Background:
In life and social sciences, women now earn the majority of doctorates, but they make up a minority of assistant professors. The underrepresentation of women in academic science is typically attributed, both in scientific literature and in the media, to sexist hiring.
Design:
In the present study, 363 faculty members (182 women, 181 men) were asked to evaluate hypothetical narrative summaries describing identically qualified female and male applicants for tenure-track assistant professorships in biology, engineering, economics, and psychology. The profiles were systematically varied to disguise identical academic credentials; applicants shared the same lifestyle (e.g., single without children, married with children); and the profiles were counterbalanced by gender across faculty.
Results:
Contrary to prevailing assumptions, our data revealed that men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference.
Conclusions:
Our findings, supported by real-world academic hiring data, suggest advantages for women to launch careers in academic science.
A5. Counterfactual "bias against women" research summary based on the article of [START_REF] Williams | National hiring experiments reveal 2: 1 faculty preference for women on STEM tenure track[END_REF]
Background:
In life and social sciences, women now earn the majority of doctorates, but they make up a minority of assistant professors. The underrepresentation of women in academic science is typically attributed, both in scientific literature and in the media, to sexist hiring.
Design:
In the present study, 363 faculty members (182 women, 181 men) were asked to evaluate hypothetical narrative summaries describing identically qualified female and male applicants for tenure-track assistant professorships in biology, engineering, economics, and psychology. The profiles were systematically varied to disguise identical academic credentials; applicants shared the same lifestyle (e.g., single without children, married with children); and the profiles were counterbalanced by gender across faculty.
Results:
In line with prevailing assumptions, our data revealed that men and women faculty members from all four fields preferred male applicants 2:1 over identically qualified females with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference.
Conclusions:
Our findings, supported by real-world academic hiring data, suggest that women encounter gender discrimination when launching careers in academic sciences.
Science faculty's subtle gender biases favor female students
Moss-Racusin et al. Yale University, New Haven, CT 06520
Background:
Despite efforts to recruit and retain more women, a stark gender disparity persists within academic science. Abundant research has demonstrated gender bias in many demographic groups, but has yet to experimentally investigate whether science faculty specifically exhibit a bias against female students that could contribute to the gender disparity in academic science.
Design:
The present study asked science faculty members (N = 127) from research-intensive universities to rate the application materials of a student for a laboratory manager position (which, in scientific labs, is often occupied by a student). The student applicant was randomly given a male or female name, but was described as having exactly the same skills in both groups. In both groups, faculty members were asked to: rate the student's perceived competence and hireability, propose a starting salary, and offer a certain amount of career mentoring to the student. A comparison between average ratings of student's perceived competence, hireability, proposed salary and career mentoring was conducted between the male student and the female student groups.
Results:
Science faculty members rated the female student as significantly more competent and hireable than the (identical) male student. Faculty members also offered more career mentoring to the female student and selected a higher starting salary.
The gender of the faculty did not affect their ratings, such that female and male faculty were equally likely to exhibit bias in favor of the female student.
Conclusions:
These results suggest that well-documented bias against females does not always exist in academia.
A8. The study for the prediction task in the "bias favoring women" condition in Experiment 3
Is there a gender bias in hiring?
Background:
There are ongoing debates about gender bias in various professions. This study attempts to find out whether there exists a gender bias in hiring in the field of academic sciences or not.
Design:
In the present study, 363 science faculty members (182 women, 181 men) will be asked to evaluate hypothetical narrative summaries of applicants for tenure-track assistant professorships in biology, engineering, economics, and psychology. The applicants will be depicted as having the exact same academic credentials and lifestyles (e.g., single without children, married with children). Importantly, however, applicants will be given a female name in group A, and a male name in group B. This will allow the researchers to assess whether there is any bias in the evaluation process that is solely due to the gender of the candidate.
Each faculty member will be randomly assigned to either group A (female applicant) or group B (male applicant). In both groups, they will be asked to rate the applicant's profile on a scale from 0 to 100. A comparison between average ratings will be conducted between faculty members assigned to groups A and B (i.e. professors who saw the candidate presented as a male and those who saw the candidate presented as a female).
Now please make predictions about the likely results of the study.
What do you think will be the average ratings of the two groups?

The natural, linguistic and social facets of gender are intertwined, and together they play an important role in shaping the structure of languages, the social norms and role beliefs about men and women, as well as people's moral attitudes and ideologies. As a linguistic feature, grammatical gender has been argued to have consequences for the mental representations of objects and persons. A wealth of research effort has been devoted to the question of whether the classification of nouns denoting objects into masculine and feminine gender classes influences language users' conceptualizations of objects. At the same time, the unequal treatment of the masculine and feminine grammatical gender across languages has been questioned as a source and reflection of gender stereotypes and gender role beliefs that contribute to the status quo of gender inequality. Furthermore, people's ideologies and attitudes built on gender stereotypes and social role beliefs shape their perception of gender inequality and their trust in research on gender bias.
In this concluding chapter, I first summarize findings of the three empirical studies presented in previous chapters. Then I discuss the theoretical and empirical implications of the three studies. I will end the chapter by pointing out the limitations of the research and directions for future investigations.
Summary
The first study (Chapter 2) investigates the relationship between language and thought from the angle of grammatical gender, as an attempt to test the Neo-Whorfian hypothesis that language structure influences people's conceptualizations of the world. In this study, I ask if the grammatical gender system of a language has consequences for the speakers' mental representations of objects. The second study (Chapter 3) found that, compared to the masculine form, both the double-gender and middot forms increased the perceived proportions of women across the three types of professions. The two gender-fair forms did not differ from each other in terms of increasing women's presence in mental representations.
The observed language effects were not moderated by the gender stereotype of professions.
Consistent with previous research on English and German (Braun et al.; Gastil; Hamilton; Irmen; Martyna), these findings provided further evidence of the impact of gender-fair language in promoting the salience of women in the minds of perceivers.
To establish the consistency of representations, we compared the %-women provided by our participants with real-world estimates obtained in Misersky et al.'s (2013) norming study for each profession. Results of the comparison showed that gender-fair language had varying effects on the consistency of the perceived proportions of women. Both double-gender and middot forms induced consistent representations for gender-balanced occupations; however, they introduced a female bias for male-dominated occupations, demonstrated by the overestimates of %-women compared to the normed data; and finally, for female-dominated professions, the gender-fair forms failed to provide enough of a boost, such that a bias favoring males was still present in participants' estimates. In line with a previous study that investigated only gender-neutral professions in Swedish (Lindqvist et al.), our results regarding the gender-balanced professions added proof of the existence of a male bias induced by the masculine generics in people's mental representations, as well as the efficacy of using gender-fair language in removing such a bias.
In the last study (Chapter 4), we looked at the role of morality in people's differing trust in scientific evidence related to gender discrimination. In Experiment 1, we showed participants research summaries reporting evidence of gender bias in academia. People who feel morally committed to gender equity are attentive to the external biases that stand in the way of women's attainment. Consistent with the awareness of gender discrimination in society, they are more receptive to research findings confirming the existence of sex-based bias against women. On the contrary, people who do not feel morally engaged in the cause of gender equity are less sensitive to any external biases that may create obstacles in the way of women's success, and in consequence, they tend to discard any information showing the opposite. As demonstrated in Chapter 4, people holding positive attitudes on gender equality find research evidence of gender discrimination against women in STEM fields more credible than those with negative attitudes on this issue. Accordingly, the individuals who reported moral commitment to gender equity may be advocates of a liberal egalitarian ideology, while the others who were less morally concerned may be proponents of gender essentialism or flexible egalitarianism.
The influence of ideology on individuals' trust calibration can be two-fold. First, if a person's preconceptions are built on the accumulation of accurate information, then assigning more trust to new, attitude-consistent information is not only unproblematic but also efficient, since it would cost us a large amount of time and energy to deliberate over every single piece of new information we encounter. However, if a person's prior beliefs are themselves biased, based on misinformation, as is the case for COVID-19 vaccine opponents (Roozenbeek et al.) and global warming deniers (Zhou), attitude-based evaluations of new information will lead them further away from the truth.
Limitations and Directions for Future Research
To establish the cognitive effect of grammatical gender on object conceptualization, future studies can focus on developing more varied, reliable research methods, as the existing literature suggests a lack of valid, reproducible paradigms. For instance, the study in Chapter 2 adopted a word association approach, but more innovative research methods would be a welcome addition.
ABSTRACT
The various facets of gender play an important role in shaping our cultures. People are categorized into males or females based on their biological sex; human languages differ in how gender is encoded in the language structure; and in society, different gender ideologies exist concerning what roles and positions men and women should occupy. The relationships between these facets are often intertwined. In this dissertation, I first investigate the relationship between language and people's mental representations of gender (Chapters 2 and 3). In particular, I ask if assigning grammatical masculine or feminine gender to nouns denoting inanimate objects would make native speakers think of these objects as having "male" or "female" qualities, a language effect as postulated by the Neo-Whorfian hypothesis that linguistic categories affect people's construal of the world entities.
Extensive piloting work on this topic suggests null effects of grammatical gender on speakers' conceptualization of objects. Unlike object nouns, the grammatical gender of person nouns is meaningful in that it has a semantic underpinning (i.e., male-masculine; female-feminine). I then examine the influence of grammatical gender on people's perceptions of male-female distributions across various professions in two experiments, and find that different language forms induce differential male and female associations, some of which are consistent, others biased.
"As a final-year engineering student, your internship offer caught my attention…" It was with these few words, however ordinary, that my adventure at DPHY began, leaving my gentle Marseille countryside for Toulouse. And it is with these few paragraphs, more inspired I hope, that the time has come to close it, by thanking those who contributed to its success.

Let me start with the big one, my supervisor and thesis director, Christophe. At the beginning of the internship, I was afraid I would not do enough physics… Well, I certainly got my fill! Three years later, at my defense, you even said you appreciated that we had truly discussed physics throughout the thesis, a feeling I fully share. But beyond the many moments spent racking our brains together over the physics of insulators, it is your exceptional human qualities that were decisive for the success of my thesis. It is thanks to your guidance, your great availability and your unfailing pedagogy, steering me while leaving me the freedom to explore, that there is now a Dr. Gibaru. And even when the world stopped turning, you still provided impeccable supervision, remotely as well as face to face. I sincerely thank you for trusting me to carry out this project, and for your unfailing attention and support throughout the thesis. I do not know whether my path will take me to the office next door or to another country, but in any case I hope we can keep working together, and that we will have the chance to meet again at the other end of the world while squandering the conference budgets of our respective institutes.

I sincerely thank Mohamed for opening the doors of his experimental room to me, from which I came out each time with more questions than answers… and that is why this part of my thesis, unplanned at the start, was extremely enriching. I greatly appreciated your intervention during my defense, even though it was risky to leave me alone with an experiment knowing that my experimental skills amounted to breaking glassware in chemistry lab classes! If these experiments went well, it is also thanks to Sarah, who helped us out when DEESSE behaved like a goddess and the chemistry did not work with ALCHIMIE.

Being at the center of a collaboration between ONERA, CNES and CEA, to quote the opening of my cover letter, I extend my thanks to these three institutions for funding my thesis work, as well as to my respective contacts. Thanks to Mélanie and Damien, my supervisors at CEA, for our exchanges and the precious help you gave me around the developments of MicroElec. Although we had few opportunities to meet, I hope we can continue our collaboration in GEANT4. On the CNES side, although things were more complicated, I thank Jérôme for launching the subject, Christophe for handling the transition period, and Denis and Nicolas for following up at the end of the thesis. If I am writing these words today as a doctor, it is of course thanks to Mauro Taborelli and Isabel Montero, who agreed to review my thesis, as well as Laurent Garrigues, Giovanni Santin and Julien Hillairet, who served as examiners. I am grateful to you for kindly providing very relevant remarks and discussions regarding my thesis work. I also wish to thank Mr. Taborelli for the rigor of his review given the length of this manuscript, and Mr. Hillairet for helping us out a few days before the defense. I was warmly welcomed for this thesis into the CSE unit, made up among others of a band of merry fellows who head off en masse for a coffee break whenever the in-house software with the S-name inexorably starts to crash. This ritual is often enlivened by the stories of Pierre, a very friendly unit head brimming with unusual anecdotes set in a small Béarn valley. At the CSE restaurants and picnics, one also finds, among others, Sébastien, Ludivine, Gaël, Julien, David, Thierry, and the duo of Marc and Rémi (your presentation in Seattle will stay between us).

In fact, an exceptional atmosphere that fosters encounters reigns throughout DPHY. I was frightened the first time I heard about a departmental triathlon, but it turns out that the DPHY version of this sport is much more pleasant, just like the barbecues and Christmas dinners organized by Virginie and Angélica. I must also mention Antoine, owner of the most beautiful diesel Yaris in France, for your free interpretations and blue notes in guitar lessons, and Pablo, for your help with dielectric functions at the start of the thesis and the bloodthirsty table-football matches at the end. For the latter, the next generation is assured with Antoine F, Nour, who introduced me to seated table football, and Vincent, the eternal guitar beginner. Thanks also to the Pujol sisters and to Nathalie for helping me overcome the administrative quagmires, and to the famous duo Claude and Stéphane, for the launches of the Stéphanois space program and the liquid-nitrogen showers that were most welcome during heat waves. It is impossible for me to name you all one by one, but I want to thank all the permanent staff for keeping up this spirit of sharing, and because my interactions with each of you have always been very pleasant.

This pleasant atmosphere is carried in part by the department's other PhD students. I must of course start with Le Bureau des Légendes, alias Office F215, where coffee is always taken at 3:15 pm. Across from my desk was Guillerme and his faithful cap, a man wanted for the sum of 3 megadollars, and to whom I bequeath my advertising contract for Senseo. Since Midi Libre predicts geomagnetic storms better than Sérénade, you may have to think about a career change…! Then there are my other classmates. Gaëtan, I am sure the skills acquired after the boomer training will serve you on your bicycle tour of Europe. Lucas, I hope our role-playing adventures will inspire you in guiding your journeys by bike or boat with Juliet. I spent a lot of time during coffee breaks with Julie, moping together in the Third-Year PhD Students' Armchairs™ and wondering why we came back from Namibia or Madagascar, when my PC was at 100% and I was at 0%. Thank you for saving (almost) all my plants from the heat wave, and thus sparing me from turning my car into a Truffaut truck again. Finally, there are the PhD students and apprentices of the other cohorts. I am thinking in particular of Maxime the mad cyclist, Carla, the fake Quentin, Lucas, Agnès, Lauriane, Adrian, Paul, Nathan, the tea-addicted trio Gwen-Maria-Rabia with the heavy task of honoring office F215, and all the others who arrived recently. Good luck for what comes next, because you are in for a rough ride! To finish with DPHY, there are those who have left it. With, of course, the San Fermin group! Juan (How are you, Father…), Raphael (I hope you enjoy the Aix cabarets!), Neil (my first office mate and tamer of bulls), I remember our legendary road trips to Pamplona, in the Pyrenees and in Aveyron (we still have to do that trip to Marseille!). Nor do I forget Mathias for the aperitif on his golf course and his collector Peugeots, Manon and her perfect interpretation of the polite orc during the D&D sessions, Martin, the third member of office F215 in our time and elite tennis player, as well as the other former PhD students and post-docs: Eudes, Guillaume, Abdess, Pauline, Hector, Adrien… Last but not least, Loanne, I thank you for the (too) many coffees, my baseball training before going to Seattle, and organizing the Games+Sushi evenings, but I do not thank you for killing all the villains I summoned as the tyrannical game master of our tabletop RPG campaign.

To succeed in a thesis, one must also know how to lift one's head from the handlebars. That is why I am very grateful to my friends Chloé and Gabriela, for all the good times spent together and the energy you gave me to maintain my mental health. I am very happy that our friendships have lasted so long, and I hope they will remain just as strong for many years to come. The artistic activities I pursued in parallel also brought me a great deal. I thank Bhairavi for giving me the opportunity to go on stage and play the roles of Sir Wilfrid Robarts and Semyon Semyonovich, alongside my fellow members of Stagefright. There is also Michel, whose lessons kept me from giving up the guitar after two months. That is where Antoine, Guillerme and I created the famous Blues Stompers, the band that can only play one song (and not well)… The final word goes to my family, whom I thank warmly: my father Daniel, my mother Solange, my stepfather Jean-Louis and my stepmother Odile. Thank you so much for accompanying me throughout this thesis, and particularly during the successive lockdowns. Having to do part of my thesis remotely in a climate of general panic was a heavy ordeal that I could never have overcome without your support, and I thank you for offering me a splendid defense reception to celebrate this doctorate in the best possible way. Thank you for everything, Papa and Mama, for your love, for always believing in me, and for the sacrifices you made throughout my life to allow me to pursue my studies and become a doctor. Words cannot convey all the gratitude and love I have, and will always have, for you.
Introduction
Dielectric materials are widely used in spacecraft and satellites, and are part of numerous components essential to the operation of the spacecraft. Indeed, dielectrics can be found in CMOS components, radiofrequency devices such as waveguides, solar panels for the power supply of the spacecraft, plasma thrusters for electrical propulsion, thermal coatings on the surfaces of the spacecraft, wiring, and more. Several insulators, such as metal oxides or polymers, have therefore been selected for space applications for their mechanical, thermal and electrical properties. However, the systems and devices on board spacecraft are exposed to the space environment, a very harsh medium that comprises a variety of incoming radiation. Dielectric materials on board the spacecraft are not spared, especially those used on the surfaces of the spacecraft, which are directly exposed to space radiation. The behavior of insulators under irradiation is responsible for critical issues, due to their tendency to accumulate electrostatic charge when irradiated.
Therefore, one of the main concerns in spacecraft design and operation is to prevent the risk of electrostatic discharges and perturbations created by the radiative space environment. As a result, the study of the transport of radiation through dielectric materials is essential in order to understand and prevent these risks. Most of the charging effects involved in dielectrics are directly linked to their electron emission properties. Indeed, electron emission happens when a material under electron irradiation emits additional electrons into vacuum. In such a case, dielectric materials can charge positively. On the other hand, the implantation of electrons from the space environment in the dielectric layer charges it negatively. As a result, the potentials reached by a dielectric material are directly dependent on its electron emission properties. The multipactor effect, which is the formation of an electron cloud in RF devices, is also governed by the electron emission of the materials of the device.
Consequently, evaluating the total electron emission yield (TEEY) of insulating materials is critical for these applications. However, it is especially difficult to perform experimental TEEY measurements on dielectrics, due to the modification of the TEEY caused by charging effects. Indeed, during experimental measurements, the TEEY has been observed to increase or decrease with time depending on the global evolution of the charge.
Several simulation studies have therefore attempted to simulate the evolution of the secondary electron emission of insulators depending on the global charge.
Finally, several experimental studies have been conducted in conditions where the external charge effects have been removed or reduced, which are more representative of the conditions of use of some dielectrics in spacecraft. These studies have shown that the TEEY can still be reduced with time, due to internal charge effects.
However, the dielectric materials on board satellites are subjected to strong temperature gradients, typically from -200°C to +200°C, which can significantly modify the transport of electrons. Hence, most studies, which were made in the controlled environment of an electron microscope, are not representative of the space environment. Most importantly, while experimental studies did demonstrate that internal charging effects could strongly modify the TEEY of insulators, they were unable to provide a definite explanation for the origin of these effects. Indeed, it is very difficult to investigate the transport of electrons through experimental TEEY measurements, and a simulation of the transport of electrons in dielectrics would be required instead. Nevertheless, the simulation studies on the TEEY of insulators have only focused on the global charge effects, and no simulations were made in conditions where only internal charge effects can modify the TEEY. Consequently, there is a lack of physical explanations for the dependence of the electron emission yield of insulators on the internal charge, and it remains unclear which physical mechanisms are involved.
Consequently, the aim of this study is to develop a low energy electron transport model in dielectric materials for space applications. The objective of this model is to simulate the electron emission properties of dielectric samples, and to model the effect of the internal charge on the TEEY. Hence, our goal is to provide physical explanations for the misunderstood experimental observations made on dielectrics. Improving the comprehension of charge transport and electron emission in dielectric materials is of interest in many applications. Indeed, this misunderstanding prevents the precise evaluation of the electrostatic discharges and multipactor risks associated with dielectrics in space applications. The modification of the TEEY of insulators under charging is very disruptive in electron microscopy, where the image contrast can be severely modified if the electron emission is not steady. The mitigation of electron cloud effects is also a major concern in particle accelerators and fusion reactors, since it can be the source of large power losses.
In Chapter 1, we will describe the radiative space environment and its effects on satellite systems. We will highlight the importance of secondary electron emission, and present the experimental procedure used to measure the electron emission yield. This will allow us to mention the issues met when characterizing the electron emission of dielectric materials, due to the charge buildup created in the insulator by the electrons. We will also highlight that some effects of charging on the electron emission are still not thoroughly understood, and why a low energy electron transport model for dielectrics is needed to correctly evaluate their electron emission properties in space applications.
In order to compute the electron emission of insulators, we need to identify which electron-matter interactions shall be modelled, and this will be the focus of Chapter 2. Some of these interactions, such as the inelastic scattering, are common to all material types, whereas other interactions, such as trapping and phonon collisions, are specific to insulators.
We know that the kinetic energy of most secondary electrons is only a few eV. Therefore, the interaction models should extend down to this range, since we have to simulate the transport of any electron that is likely to escape the material. However, most publicly available Monte-Carlo models have a low energy limit of a few keV or a few hundred eV, which is why we need to develop our own interaction and transport models. In this regard, we can start from the work done previously in MicroElec for the TEEY of metals and semiconductors. In Chapter 3, we will present the improvements brought to MicroElec to extend it to the simulation of the TEEY of 16 materials. The model will be able to simulate the elastic, inelastic, and surface interactions of low energy electrons. The simulation results will also be validated with experimental data, to verify that our simulation of the transport of low energy electrons is accurate and that the Monte-Carlo model can correctly simulate the TEEY.
In Chapter 4, we will derive an analytical secondary electron emission yield model that is based on the physics of low energy electron transport. Indeed, the Monte-Carlo model developed in Chapter 3 can be used to extract data on the penetration depth, the ionizing dose-depth profile, and the transmission rate of electrons. Gathering data on these parameters is especially critical, due to the low availability of experimental data for electrons below 10 keV. We will then combine these parameters into a single analytical expression for the secondary electron emission yield. The aim will be to provide an expression which can be used as an input parameter in system simulation packages, and to demonstrate that the TEEY can be obtained analytically by following a more physical approach, unlike most TEEY models, which rely on arbitrary parameters.
In Chapter 5, we will present a new Monte-Carlo code based on the model developed in Chapter 3, which will be dedicated to the simulation of charge transport in dielectric materials. In this regard, we will have to model the transport of the thermalized electrons, and of the holes left in the material after the generation of secondary electrons. The electric field generated by the charge density influences the drift of these very low energy particles, so it needs to be computed and taken into account as well. Finally, for all particles, the trapping, detrapping and recombination processes need to be implemented. Since silicon dioxide is by far the most studied material, several of the parameters that will be needed by our model should be more available for this material. Consequently, the charging simulation model will be developed for SiO2 only. However, the model we will create will be as general as possible, so that it can be extended to other insulators if the simulation parameters can be found.
The Monte-Carlo model developed in Chapter 5 should allow us to understand the physical processes behind insulator charging and its effect on the TEEY. Nevertheless, experimental measurements on SiO2 samples are needed in order to quantitatively validate the simulations. In Chapter 6, we will present the experimental measurements made during this PhD thesis on SiO2 thin film samples. We will conduct measurements of the TEEY depending on the incident energy, and time-resolved measurements of the TEEY at a single incident energy. Afterwards, we will explain the experimental observations, using the wide range of data we can extract from the simulations. We will clarify why the TEEY decreases with the positive charge buildup, and highlight several experimental artifacts and biases that can appear when measuring the TEEY of dielectric materials. We will then show how the charge buildup can falsify the TEEY obtained on an insulator, and what steps can be taken during the experiment to avoid this falsification.
We will also study the evolution of the TEEY until its stabilization, and the differences between the two TEEY measurement facilities used in this work. Finally, we will move away from TEEY studies in a controlled environment, and explore how the electron emission properties of a dielectric can vary in the conditions of the space environment.
Chapter 1: Context and aim of the study
Dielectric materials are widely used in spacecraft and satellites, and are part of numerous components essential to the operation of the spacecraft. Indeed, dielectrics can be found in CMOS components, radiofrequency devices such as waveguides, solar panels for the power supply of the spacecraft, plasma thrusters for electrical propulsion, thermal coatings on the surfaces of the spacecraft, wiring, and more. Several insulators, such as metal oxides or polymers, have therefore been selected for space applications for their mechanical, thermal and electrical properties. However, the systems and devices on board spacecraft are exposed to the space environment, a very harsh medium that comprises a variety of incoming radiation. Electrons, protons, heavy ions and photons continuously hit the spacecraft and perturb its operation. Dielectric materials on board the spacecraft are not spared, especially those used on the surfaces of the spacecraft, which are directly exposed to space radiation. The behavior of insulators under irradiation is responsible for critical issues, due to their tendency to accumulate electrostatic charge when irradiated. The resulting potential difference between the dielectric and the metallic surfaces can create sudden electrical discharges, which cause serious damage to the devices. Moreover, even under the protection of radiation shielding, insulators inside the internal components of the spacecraft can still accumulate charge and cause several issues.
The question of dielectric materials charging under irradiation is also not specific to the space technology community. For instance, in scanning electron microscopy, samples are hit by an incident electron beam to generate an image of the target. However, the charging of insulating targets under electron irradiation can severely affect the image contrast [1][2][3]. The electronic components of particle accelerators, such as the LHC at CERN, are also exposed to incident radiation, which can hinder the operation of the devices. Finally, the generation of parasitic electron clouds, which is amplified by insulator materials, is a major source of power loss in particle accelerator beam lines, fusion reactors, or space telecommunication RF devices.
Therefore, one of the main concerns in spacecraft design and operation is to prevent the risk of electrostatic discharges and perturbations created by the radiative space environment. As a result, the study of the transport of radiation through dielectric materials is essential in order to understand and prevent these risks.
In this chapter, we will first describe the unwanted effects generated by radiation in the electronics and systems of a spacecraft, and demonstrate the necessity of studying the transport of low energy electrons through matter. Indeed, low energy electrons are responsible for the secondary electron emission process, which is the production of electrons by a material under electron irradiation. This phenomenon is responsible for the charging of dielectric materials and the generation of electron clouds, hence a precise knowledge of the electron emission of a material is required to limit these risks. In this regard, we will present the experimental procedure used to measure the electron emission. This will allow us to mention the issues met when characterizing the electron emission of dielectric materials, due to the charge buildup created in the insulator by the electrons. We will also highlight that some effects of charging on the electron emission are still not thoroughly understood, and explain why a low energy electron transport model for dielectrics is needed to correctly evaluate their electron emission properties in space applications.
1.1 Effects of the space environment and radiation on electronics and spacecraft systems
A spacecraft in a Geostationary Earth Orbit (GEO) or Low Earth Orbit (LEO) is exposed to several sources of incoming radiation. First, the sun continuously ejects a flux of photons from gamma rays to radio and a flux of low energy protons and electrons as part of the solar wind.
The sun also generates highly energetic protons (keV to 500 MeV) and ions (1-10 MeV/n) during solar flares. The spacecraft can also receive galactic cosmic rays made of protons and ions up to 300 MeV/n. Finally, the electrons and protons trapped in the Van Allen belts by Earth's magnetic field constitute a strongly hostile region and a source of continuous radiation for the satellites orbiting Earth. According to the incident radiation flux model GREEN [4] from ONERA, the flux of electrons received by a spacecraft in either a LEO or GEO orbit is greater than the flux of protons. As shown in Figure 1-1, the majority of incident radiation received by spacecraft consists of electrons from 100 eV to 100 keV. Consequently, electrons are mostly responsible for the charging of the dielectric surfaces of the satellite that are directly exposed to the space environment, such as solar panels or insulating coatings. Spacecraft are equipped with shielding materials, which are designed to slow down the incident radiation and reduce the flux received by the internal components. This shielding is mostly made of aluminum, due to weight constraints. However, the most energetic radiation can still go through the outer layers of the satellite, hit the internal electronics and generate parasitic effects. For example, an electron of 1 MeV is able to go through a 1 mm thick layer of aluminum [5,6]. Moreover, the high energy particles going through the shielding can also create a cascade of lower energy electrons, which can then reach the inside of the spacecraft and cause additional disturbances. We shall now list some of the issues and unwanted effects occurring in electronics exposed to incident radiation.
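To put the shielding statement above in perspective, the practical range of electrons in matter can be estimated with a simple empirical fit. The sketch below is illustrative only: it uses the well-known Katz-Penfold range formula and a nominal aluminum density, not values or methods taken from this thesis, which relies on detailed transport codes instead.

```python
import math

def electron_range_al(energy_mev, density_g_cm3=2.70):
    """Estimate the practical range (cm) of an electron in aluminum.

    Empirical Katz-Penfold fit, valid roughly from 0.01 to 2.5 MeV:
        R [g/cm^2] = 0.412 * E^(1.265 - 0.0954 * ln(E)),  E in MeV.
    """
    r_areal = 0.412 * energy_mev ** (1.265 - 0.0954 * math.log(energy_mev))
    return r_areal / density_g_cm3  # convert areal range to a depth in cm

if __name__ == "__main__":
    for e in (0.1, 0.5, 1.0, 2.0):  # incident energies in MeV
        print(f"{e:4.1f} MeV electron: range ~ {electron_range_al(e) * 10:.2f} mm of Al")
    # A 1 MeV electron comes out at roughly 1.5 mm, consistent with the
    # statement that it can cross a 1 mm aluminum shield.
```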
Total ionizing dose
The effect of the total ionizing dose received by an electronic component exposed to high-energy electrons and protons is a cumulative degradation of its properties. When going through the insulator layer of CMOS or MOSFET components, the incident radiation can transfer its energy to the electrons of the material. Electron-hole pairs are then created, which can either recombine immediately or separate. Given that the mobility of electrons is much greater than the mobility of holes in the insulator, the electrons can get quickly swept away by an electric field. Holes on the other hand can get deeply trapped in the insulator and migrate by hopping between traps.
Given that the density of traps is greater at the insulator/semiconductor interface, a large number of holes can get trapped in this region, which creates a charge density in the dielectric layer. This progressively generates a parasitic electrostatic potential, which can disrupt the current/voltage characteristics of the device. The change of these characteristics can have many unwanted effects, such as the appearance of a leakage current or a modification of the threshold voltage, which can hinder the operation of the component. In most recent electronics, the insulating layer only has a thickness of a few nanometers. Thus, it is necessary to study the transport of electrons and holes at the nanometric scale to quantify the charge left in the insulator, and the cumulative degradation of the device.
Displacement damage
High energy particles going through a material can transfer their energy to the atomic nuclei as well, and eject them from their position in the crystal lattice. The ejected atoms can also transfer their energy to other atoms of the lattice and create a displacement cascade. Some of these displacements can heal and the atoms can recover a regular position in the lattice. On the other hand, other defects created by the incident radiation may remain in the material and stabilize themselves as defect clusters, in an irregular position in the lattice. These clusters modify the electronic structure of the material by creating additional energy levels in the band gap of semiconductors and insulators. Such levels are also likely to act as traps that can capture the charge carriers, and as recombination centers. In semiconductors, these levels can also serve as an intermediate step for the thermal generation of electron-hole pairs, as they can facilitate the emission of an electron into the conduction band. This is especially problematic in materials with a small bandgap or in regions of the material with a high electric field. Indeed, spontaneous thermal emission of electrons into the conduction band is possible in these conditions, which results in a parasitic current generated in the device. Such conditions are found in the pixel arrays of sensors on board satellites, where the dark current can be amplified by the displacement damage caused by space radiation. The quantification of this degradation requires an estimation of the Non Ionizing Energy Loss (NIEL) transferred by the particle to the atoms of the lattice [7], the simulation of the transport of the defects [8] and the modification of the electronic structure [9].
Single event effects
Single event effects in electronic components are caused by the passage of an energetic particle through a sensitive volume. These can create soft errors, such as Single Event Upsets (SEU), which is the change of state of bits in a memory [10], or Single Event Transients, a sudden voltage spike in the device. Destructive hard errors can also happen, such as Single Event Latchups, an excess of current in the device which can potentially cause overheating of the component.
Single event effects are mostly caused by high energy protons and ions. However, recent electronics have also been shown to be sensitive to SEUs triggered by electrons [11]. Indeed, these components have nanometric sensitive volumes, where a SEU may be triggered by an electron of a few keV [12]. To evaluate the risk of SEU in highly integrated electronics, it is necessary to accurately model the dose deposited in a nanometric sensitive volume, in order to see if it exceeds the threshold level for the triggering of a SEU.
In the case of heavy ions, these particles create electron-hole pairs in a column that spreads around the trajectory of the particle, called trace. To accurately model the width of the trace and the dose deposited by the incident ion, it is necessary to explicitly simulate the transport of low energy secondary electrons created by the heavy ion [13,14]. Therefore, low energy electron transport models are needed to correctly evaluate the triggering thresholds of SEU in electronics.
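As a rough illustration of the threshold picture described above, the energy deposited in a silicon sensitive volume can be converted into a collected charge and compared with a critical charge. The sketch below is a back-of-the-envelope example: the 3.6 eV per electron-hole pair in silicon is a standard value, but the critical charge chosen here is purely hypothetical and does not correspond to any device studied in this work.

```python
E_PAIR_SI_EV = 3.6          # mean energy per electron-hole pair in silicon (eV)
Q_ELECTRON_FC = 1.602e-4    # elementary charge expressed in femtocoulombs

def deposited_charge_fc(energy_kev):
    """Convert an energy deposit in a silicon sensitive volume to a charge in fC."""
    n_pairs = energy_kev * 1e3 / E_PAIR_SI_EV
    return n_pairs * Q_ELECTRON_FC

if __name__ == "__main__":
    q_crit_fc = 0.1  # hypothetical critical charge of a highly integrated cell
    for e_kev in (1, 5, 10, 50):
        q = deposited_charge_fc(e_kev)
        flag = "upset possible" if q > q_crit_fc else "below threshold"
        print(f"{e_kev:3d} keV deposited -> {q:.3f} fC ({flag})")
```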
Since all electronics exposed to radiation may suffer from single event effects, displacement damage or total ionizing dose effects, these are also a concern outside of the space environment, for instance in particle accelerator or nuclear electronics.
The multipactor/e-cloud effect
Low energy electrons (eV to keV) hitting a material can create a cascade of other low energy electrons inside the material. These secondary electrons are then able to escape the material through the secondary electron emission process. Macroscopically, the material is generating more electrons under electron irradiation. However, in RF cavities, the secondary electrons generated by one material of the device may then impact another material and generate additional electrons, up to the point that an electron avalanche is created in the device. This phenomenon is known as the multipactor effect, which hinders the functioning of RF devices and can create an electrostatic discharge if the quantity of electrons becomes too large. The multipactor effect can be easily understood in the example of a parallel plate waveguide, illustrated in Figure 1-2. The energy of most secondary electrons exiting a material is below 50 eV, which is not enough to create secondary electrons in another material. However, the electromagnetic waves circulating in the cavity carry an electric field that can accelerate the secondary electrons emitted by a plate towards the opposite plate. If the electrons have acquired enough energy, the process of secondary emission can happen, and the quantity of electrons in the device increases. The new electrons may then be accelerated towards the opposite plate and generate even more electrons, resulting in an avalanche of electrons that grows exponentially. Hence, the multipactor effect can only happen under certain conditions. Indeed, the quantity of secondary electrons emitted by a material depends on the incident electron energy and on the properties of the material itself. For multipacting to occur, the material needs to generate more secondary electrons than the quantity of incident electrons it receives.
In most materials, this condition is met for incident electrons having an energy of a few hundred eV. If the electron energy is too low or too high, the material absorbs more electrons than it emits, and the multipactor is prevented. The amplitude of the electric field directly modifies the energy gained by the electrons between the parallel plates, and can thus create or cancel the multipactor. The generation of electrons also needs to be synchronous with the electric field, which oscillates with time since it is generated by a wave. If the sign of the electric field changes when electrons are emitted from a plate, the field may instead accelerate these electrons back to where they were generated, therefore preventing multipacting. Consequently, the triggering conditions of the multipactor effect depend on the power and frequency used in the device. The quantification of the electron emission of the materials used in the component is also mandatory, in order to compute the multipactor thresholds and triggering conditions.
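The resonance idea can be illustrated with a one-dimensional toy model: an electron emitted from one plate is accelerated by the RF field, and growth requires that it reaches the opposite plate, in phase with the field, with an impact energy in the range where the TEEY exceeds one. The sketch below is a deliberately simplified numerical illustration with arbitrary gap, frequency and field values; it is not a multipactor model used in this thesis.

```python
import math

Q_M = 1.759e11  # electron charge-to-mass ratio (C/kg)

def transit(gap_m, freq_hz, e0_v_per_m, phase, dt=1e-12, t_max=1e-7):
    """Integrate a 1D electron trajectory between two plates in an RF field.

    Returns (impact_energy_eV, transit_time_s), or None if the electron
    is pushed back to the emitting plate. Purely illustrative toy model.
    """
    omega = 2.0 * math.pi * freq_hz
    x, v, t = 0.0, 0.0, 0.0
    while t < t_max:
        a = Q_M * e0_v_per_m * math.sin(omega * t + phase)
        v += a * dt
        x += v * dt
        t += dt
        if x >= gap_m:                   # reached the opposite plate
            return 0.5 * v * v / Q_M, t  # kinetic energy in eV
        if x < 0.0:                      # pushed back to the starting plate
            return None
    return None

if __name__ == "__main__":
    hit = transit(gap_m=1e-3, freq_hz=1e9, e0_v_per_m=1e5, phase=0.2)
    if hit:
        energy_ev, tau = hit
        # Growth is possible only if this impact energy lies between the two
        # crossover points of the TEEY curve (TEEY > 1), and if the transit
        # time is close to an odd number of RF half-periods.
        print(f"impact energy ~ {energy_ev:.0f} eV, transit time ~ {tau * 1e9:.2f} ns")
```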
However, the multipactor effect can also be worsened by dielectrics [15,16]. Indeed, dielectrics are strongly emissive over a large range of incident energies, which amplifies the production of electrons and the risk of a multipactor discharge. Moreover, as they charge under electron irradiation, the emission properties of dielectrics get modified, which complicates the estimation of the multipactor risk if the charge state of the material is unknown.
The multipactor and the generation of an electron cloud are also an issue in several other applications. In particle accelerators, electron clouds can be generated in vacuum cavities, where they can be accelerated by the pulsating electric field generated by the particle beam line. Notably, in the LHC, electron clouds are a major source of power loss and heat loads [17]. The multipactor effect is also a concern for fusion applications [18].
Surface and internal charging effects
Low energy particles from the space environment can charge the surfaces of the spacecraft, such as solar panels and thermal coatings. The flux of incident electrons generates a negative charge on the satellite's surface, which opposes the positive charge generated by photoemission and secondary electron emission, until the material reaches an equilibrium potential. However, the potential reached by dielectric materials may differ from the equilibrium potential of the metallic surfaces [19]. If this potential difference becomes too large, electrostatic discharges [20] or flash-overs [21] can occur, which can damage the power supply system and thus affect the satellite functions and jeopardize the mission. The potential reached by a dielectric material is a direct function of the transport of electrons and holes inside the material, which influences the Radiation Induced Conductivity. This potential also depends on the electron emission and photoemission properties of the material. Consequently, charge transport models in space-used dielectrics are required to evaluate the charging of satellite surfaces. As for the multipactor effect, it is necessary to quantify the electron emission of the materials of the spacecraft, to correctly model the potential reached by a given surface.
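The equilibrium potential follows from a current balance: the surface settles at the potential where the net collected current vanishes. The sketch below illustrates this idea with a crude model (Maxwellian ambient electrons repelled by a negative surface, a constant ion current, and a constant effective emission yield lumping secondary and photoemission); every numerical value is arbitrary and only meant to show how such a balance equation would be solved, not to represent the charging models discussed in this work.

```python
import math

def net_current(v_surface, j_e0=1e-5, j_i0=5e-7, t_e_ev=2000.0, yield_eff=0.6):
    """Net current density (A/m^2) collected by a negatively charged surface.

    Crude current balance: repelled Maxwellian electrons (Boltzmann factor),
    constant ambient ion current, constant effective emission yield.
    """
    j_electrons = j_e0 * math.exp(v_surface / t_e_ev)  # v_surface <= 0, in volts
    return j_electrons * (1.0 - yield_eff) - j_i0

def floating_potential(v_lo=-50000.0, v_hi=0.0, tol=1e-3):
    """Bisection on the current balance to find the equilibrium potential."""
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if net_current(v_mid) > 0.0:
            v_hi = v_mid  # still collecting net electrons: charge more negative
        else:
            v_lo = v_mid
    return 0.5 * (v_lo + v_hi)

if __name__ == "__main__":
    print(f"equilibrium potential ~ {floating_potential():.0f} V")
```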
Description and measurement of the secondary electron emission
In secondary electron emission, the emitted electrons have been stripped from the atoms of the material following the interactions of incident electrons with these atoms. Having materials that are emitting more electrons than they are receiving is a necessary condition for the multipactor or e-cloud effects. Therefore, it is especially critical to have an accurate knowledge of the electron emission properties of the materials used in the devices.
General description of the electron emission yield
The emission of secondary electrons is quantified by the Total Electron Emission Yield (TEEY).
It is defined as the ratio of the total number of electrons exiting the material compared to the number of incident electrons:
$$\mathrm{TEEY} = \frac{N_{e^-,\ \mathrm{exiting}}}{N_{e^-,\ \mathrm{incident}}} \qquad \text{(Equation 1-1)}$$
For most typical materials, the TEEY follows a standard behavior as a function of the energy of the incident electrons, which is illustrated in Figure 1-3. First, the TEEY increases linearly at low energy. Indeed, the more energetic the electrons are, the more secondary electrons they can set into motion from the atoms of the material. The TEEY can rise above one, which means that, from a macroscopic point of view, the material is generating additional electrons. This is the necessary condition for the appearance of the multipactor effect. The point where the TEEY curve goes over 1 is called the first crossover point, with the corresponding energy of incident electrons noted E_C1. Depending on the material, E_C1 generally ranges from a few tens to a few hundreds of eV. The TEEY then increases to its maximum value TEEY_Max at the energy E_Max. For most common metals and semiconductors, TEEY_Max can range from 1 to 2.5 [22], while it can go above 5 for some insulators. E_Max is generally around a few hundreds of eV, up to 1 keV for heavier metals such as gold or for some insulators such as polycrystalline diamond [23]. Then, the TEEY decreases with increasing electron energy. Indeed, while more energetic electrons still produce more and more electrons, the penetration depth of the incident electrons also increases with the energy [5,6]. As a result, the secondary electrons are also produced deeper in the material [24,25], so that their probability of reaching the surface is reduced. In this sense, the energy of the maximum TEEY is the optimum of a compromise between the quantity of secondary electrons produced and the depth at which the secondary electrons are generated in the material. Consequently, the TEEY diminishes for energies beyond E_Max, and becomes lower than 1 beyond the energy E_C2, the second crossover point. When characterizing the TEEY of different materials, TEEY_Max, E_Max, E_C1 and E_C2 are often used as points of comparison.
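Multipactor and charging simulation tools typically rely on a parameterized version of this universal curve. The sketch below implements one commonly used empirical parameterization of Vaughan's type, in which the yield is written as a function of a reduced energy; the maximum yield, the energy of the maximum and the exponents used here are illustrative placeholders rather than values fitted in this work.

```python
import math

def teey_vaughan(energy_ev, teey_max=2.0, e_max_ev=300.0, e0_ev=12.5):
    """Empirical TEEY curve of Vaughan's type (illustrative parameters).

    The yield is expressed through the reduced energy
        w = (E - E0) / (E_max - E0)
    as TEEY = TEEY_max * (w * exp(1 - w))**k, with different exponents below
    and above the maximum to reproduce the asymmetric bell shape.
    """
    if energy_ev <= e0_ev:
        return 0.0
    w = (energy_ev - e0_ev) / (e_max_ev - e0_ev)
    k = 0.56 if w < 1.0 else 0.25
    return teey_max * (w * math.exp(1.0 - w)) ** k

if __name__ == "__main__":
    for e in (50, 100, 300, 1000, 5000):
        print(f"E = {e:5d} eV  ->  TEEY ~ {teey_vaughan(e):.2f}")
    # The two crossover points E_C1 and E_C2 are the energies where this curve
    # crosses 1, bounding the region where multipactor growth is possible.
```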
The energy distribution of the electrons exiting the material also follows a standard behavior, shown in Figure 1-4. This distribution can be separated into three regions depending on the energy of the electrons, which define three populations of electrons. The first region (I) is the distribution of true secondary electrons, which were put into motion in the electronic cascades created by the incident electrons. Their energy is centered around a few eV, and it is generally accepted that the distribution of true secondaries is capped at 50 eV. This criterion is often used in experimental measurements to obtain the Secondary Electron emission Yield (SEY), which only quantifies the emission of true secondary electrons. However, this criterion may not be valid for low energy incident electrons. Indeed, the inelastically backscattered electrons constitute the second region of the histogram (II). This continuum is mostly made of incident electrons that have lost a part of their energy before exiting the material, or secondary electrons that were excited with a higher energy. As a result, some low energy incident electrons may lose a significant amount of energy and exit the material with an energy lower than 50 eV, and these are counted as secondary electrons when using the experimental criterion. However, due to the uncertainty principle in quantum mechanics, we cannot know whether a given electron is an incident electron that has lost some energy, or a secondary electron that was generated with an energy larger than 50 eV. Finally, a peak is observed around the energy of the incident electrons E_0 (III). These are the elastically backscattered electrons, which have entered and exited the material without losing energy, or were reflected by the material's surface. In contrast to the SEY, the Backscattered Electron emission Yield (BEY) is defined as the ratio of elastically and inelastically backscattered electrons, including both regions (II) and (III). While the term "SEY" is sometimes used with the meaning of "total electron emission yield", in this study we will consider that the SEY only includes the contribution of secondary electrons. The sum of the SEY and BEY thus gives the TEEY.
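In practice, the SEY and BEY are obtained from such a spectrum by integrating below and above the 50 eV cut-off. The sketch below builds a synthetic emission spectrum (a Chung-Everhart-like low-energy peak, a flat backscattered continuum and a narrow elastic peak, all with made-up amplitudes) and applies the conventional 50 eV criterion; it is only meant to illustrate the bookkeeping, not to reproduce a measured spectrum.

```python
import numpy as np

E0 = 500.0                       # incident energy (eV), arbitrary
energies = np.linspace(0.1, E0, 5000)
d_e = energies[1] - energies[0]

# Synthetic spectrum: true secondaries (Chung-Everhart-like shape with a work
# function W), a flat inelastic continuum, and a narrow elastic peak at E0.
W = 4.5
secondaries = 50.0 * energies / (energies + W) ** 4
continuum = np.full_like(energies, 1e-4)
elastic = 0.02 * np.exp(-0.5 * ((energies - E0) / 2.0) ** 2)
spectrum = secondaries + continuum + elastic

total = np.trapz(spectrum, dx=d_e)
below_50 = np.trapz(spectrum[energies <= 50.0], dx=d_e)

# Conventional 50 eV criterion: everything below 50 eV is counted as SEY,
# everything above (elastic peak included) as BEY.
print(f"SEY fraction of emitted electrons: {below_50 / total:.2f}")
print(f"BEY fraction of emitted electrons: {1 - below_50 / total:.2f}")
```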
The description of Figure 1-4 is a general overview of the energy distribution of secondary and backscattered electrons. Most materials exhibit a spectrum that follows this standard curve. However, multiple small peaks may appear on the histogram at well-defined energies. These are attributed to specific transition energies of electrons between the valence and conduction bands, or to the energy lost by the incident electrons after interacting with a given number of plasmon oscillations.
Factors modifying the electron emission yield
Several parameters can modify the value and shape of the TEEY, such as surface roughness, surface contamination, or the angle of incidence. In computer simulations, the target is often assumed to be a perfectly pure and flat sample. While some simulation codes attempt to model the effect of roughness [26,27] or surface contamination [28], these factors can be a source of discrepancy between the ideal materials from the simulations, and the real materials used in the experiment.
Effect of the angle of incidence
The incidence angle of electrons is defined relative to the surface normal. At 0°, the electrons have a normal incidence perpendicular to the material surface, and at 90°, the electrons have a trajectory that is parallel to the surface. In Figure 1-5, the experimental TEEY of an etched sample of copper under various angles of incidence is presented. The TEEY was measured from 0° to 60° by R. Pacaud [29] in the DEESSE facility at ONERA. We can immediately observe that the TEEY increases with the incident angle, from 1.3 at 0° up to 1.8 at 60°. The first crossover point is also lowered from 170 eV at normal incidence to 70 eV at 60° incidence. This increase of the TEEY is explained by the fact that incident electrons with a high incidence angle have a reduced penetration depth in the material. Consequently, they can generate more electrons close to the surface, which have an increased escape probability. Since more secondary electrons can easily escape the material, the TEEY increases for high incidence angles.
We can also see that the effect of the incidence angle is less marked for lower energy electrons. This is because low energy electrons under normal incidence are already captured in the first ten nanometers of the material's surface [5]. For very low energy electrons (below 50 eV), their probability of being elastically reflected by the surface increases with the incident angle, which prevents them from generating secondary electrons. For this reason, we can also observe an increase of the BEY of low energy electrons with the incidence angle.
Higher energy electrons (> 1 keV) have a larger penetration depth. They produce more electrons than lower energy electrons, but also at a greater depth. As a result, when the incidence angle is increased, the greater number of secondary electrons generated by the higher energy electrons can more easily escape the material, resulting in a large increase of the TEEY. This is also why we observe a shift of the energy of the maximum TEEY, from 700 eV at 0° to 950 eV at 60°.
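To give an order of magnitude of this angular effect, one empirical parameterization sometimes used in the literature assumes an exponential dependence of the TEEY on the incidence angle, TEEY(θ) ≈ TEEY(0°)·exp[α(1 − cos θ)], where α is a material- and surface-dependent fitting parameter. This is not the model used in this work; the short sketch below only fits α to the two values quoted above for the etched copper sample (1.3 at 0° and 1.8 at 60°) to illustrate the trend, and should be read as an assumption for illustration only.

```python
import math

def teey_vs_angle(teey_normal, alpha, theta_deg):
    """Empirical exponential model of the TEEY increase with incidence angle (illustrative only)."""
    theta = math.radians(theta_deg)
    return teey_normal * math.exp(alpha * (1.0 - math.cos(theta)))

# Fit alpha from the two measured points quoted in the text (illustrative fit, not the thesis model).
teey_0, teey_60 = 1.3, 1.8
alpha = math.log(teey_60 / teey_0) / (1.0 - math.cos(math.radians(60)))

for angle in (0, 20, 40, 60):
    print(f"{angle:2d} deg -> TEEY ~ {teey_vs_angle(teey_0, alpha, angle):.2f}")
```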
Effects of the surface chemistry and roughness
Technical materials used in spacecraft devices or particle accelerators have been exposed to air beforehand, and have a surface contamination layer made of several molecules including hydrocarbons, oxides, water or other species adsorbed on the surface. For some metals, such as aluminum or copper, a native oxide layer also appears immediately when the material is exposed to air. The surface contamination layer is only a few nanometers thick but it can still strongly modify the TEEY. Consequently, unless we want to get the TEEY of a technical material, experimental samples must be decontaminated before the measurement to obtain the TEEY of the pure material. However, to decontaminate the samples properly, they should be cleaned under vacuum conditions and not exposed to air between the decontamination and the TEEY measurement. For these reasons, TEEY measurements are also made in vacuum chambers to limit the deposit of contamination on the sample.
In Figure 1-6, the TEEY of a copper sample measured as received and after decontamination are compared. The data was obtained by Plaçais et al. [30]. We can see a strong increase of the TEEY on the contaminated sample compared to the sample decontaminated by baking and erosion, with a maximum TEEY of 2.23 before decontamination and 1.1 after decontamination. The first crossover point is also shifted from 30 eV to 190 eV after decontamination. Therefore, contaminated surfaces increase the electron cloud production and the risk of multipacting.
Similar observations were made on other materials, such as silver [31]. The TEEY of contaminated materials can evolve during time due to the conditioning of the surface. This phenomenon is caused by the transformation of the contamination layer into a graphite layer by the incident electrons. Contrary to surface contamination, the graphite layer created by conditioning reduces the TEEY of the surfaces. Hence, this process is exploited in particle accelerators to reduce the formation of electron clouds [32]. However, this modification of the surface chemistry can only happen with a large electron fluence, much larger than what is received during standard TEEY measurements. For instance, the maximal dose received by the samples in standard TEEY measurements made in the DEESSE facility at ONERA is 0.5 nC/mm². On the other hand, the TEEY of a technical copper surface only starts to decrease due to conditioning for a dose of 10 µC/mm² [33] or 0.1 mC/mm² [34]. Consequently, if the samples are properly decontaminated, the TEEY data should not be modified by conditioning, and we should not observe a variation of the TEEY with time in the case of metallic or semiconducting targets.
The experimental and technical materials also have a surface roughness, which can either increase or decrease the TEEY depending on the roughness structures. Indeed, the electrons emitted from the bottom of the roughness asperities may hit the side walls and get recollected, thus reducing the SEY. However, the presence of roughness may induce an opposing effect, as the incidence angle of the primary beam can be significantly increased if they hit the sidewalls of the roughness patterns for instance. Thus, in many fields, materials with a specific surface state or roughness are developed to mitigate the TEEY. A widely used option is to engineer a specific surface roughness pattern which can trap the secondary electrons emitted by the surface [26,27,35-39]. Several roughness patterns can be used to reduce the TEEY, such as rectangular and triangular grooves [35], rectangular [27] or trapezoidal [37,40] checkerboard patterns, or sawtooth grooves [36]. The efficiency of these roughness structures is shown in Figure 1-7, in the example of a silver surface. The TEEY results were obtained through Monte-Carlo simulations made during this thesis and published in [40]. We can see that the presence of these patterns on a surface will hinder the propagation of the secondary electrons emitted from the bottom of the structures.
The TEEY can thus be reduced by purely geometrical effects independent on the material. The efficiency of the different roughness patterns depends on various parameters, such as the aspect ratio of the structures, the proportion of tilted surfaces or the openness of the recollection valleys. The feasibility and choice of a given structure depends on the fabrication processes available. Some authors have proposed different means of creating specific roughness patterns, for instance chemically or by laser engraving. Efforts are also made to create such roughness patterns through electrodeposition [41].
Effect of magnetic fields
In a magnetic field B, the trajectory of a secondary electron emitted from the surface with an emission angle α and a kinetic energy E = ½mv² is a helix around the field lines, with a radius r = R_L sin α = (v sin α)/ω_0, where ω_0 = eB/m is the cyclotron frequency and R_L is the Larmor radius for an electron traveling perpendicularly to the field orientation. The presence of a magnetic field can significantly modify the TEEY of a rough surface through multiple effects. For a secondary electron emitted from the bottom of the structures, the presence of a magnetic field can modify its escape probability, depending on the intensity of the field and the position the secondary electron has been emitted from. Wang et al. [36] suggest that the trajectory followed by the secondary electrons in the magnetic field will increase the number of collisions with the side walls, lowering the escape probability and the TEEY. This can especially happen on a sawtooth surface, where electrons with a non-zero incident angle can collide with the vertical walls. In the case of a magnetic field with an orientation parallel to the flat surface normal, the secondary electrons emitted from a side wall can be recollected by the same wall depending on the radius of the helix.
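As an illustration of the orders of magnitude involved, the sketch below computes the Larmor radius of a low-energy secondary electron for a few values of the magnetic field and compares it with a hypothetical roughness structure width; the 10 µm width is an arbitrary assumption used only for the comparison.

```python
import math

M_E = 9.109e-31   # electron mass (kg)
Q_E = 1.602e-19   # elementary charge (C)

def larmor_radius(energy_eV, b_tesla):
    """Larmor radius R_L = m*v/(e*B) for an electron moving perpendicularly to the field."""
    v = math.sqrt(2.0 * energy_eV * Q_E / M_E)   # non-relativistic speed
    return M_E * v / (Q_E * b_tesla)

structure_width = 10e-6   # hypothetical roughness structure width (10 µm), for comparison only
for b in (0.01, 0.1, 1.0):
    r = larmor_radius(5.0, b)   # 5 eV secondary electron
    status = "smaller" if r < structure_width else "larger"
    print(f"B = {b:5.2f} T : R_L = {r * 1e6:8.2f} µm ({status} than the structure width)")
```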
There is also the possibility that this trajectory will reduce the probability of a collision with the side walls. This can especially happen with higher values of the magnetic field, when the helix radius becomes smaller than the width of the structures. In such a case, the secondary electrons can easily escape from the bottom of the structures or the tilted side walls. For extremely high values of the magnetic field on the order of the Tesla, the secondary electrons should travel almost vertically and the effect of surface roughness could be overridden for electrons with a normal incidence.
The increase or decrease of the TEEY caused by the magnetic field is largely dependent on the roughness of the surface. Fil et al. [18] have shown that the TEEY of a technical surface could be modified by 45% in the presence of a magnetic field, but the TEEY of a polished surface was only reduced by 5%. In particle accelerators and fusion reactors, strong magnetic fields are used to contain the plasma or particle beam. Hence, the modification of the TEEY of a rough surface under a magnetic field must be considered in these applications.
Experimental measurement procedure of the TEEY
Over the years, a lot of data has been collected on the electron emission yields of several materials, including metals, semiconductors, insulators and compound materials. Most of this experimental data has been gathered in Joy's database [42,43]. However, the data for electrons below 100 eV is scarce, and there are strong dispersions in the TEEY data for a given material. These can be explained by the differences in experimental setup (quality of the vacuum or work function of the electron collector), or the surface state of the material (chemistry, roughness). For these reasons, two experimental TEEY measurement installations were designed at ONERA: DEESSE (Dispositif d'étude de l'Emission Electronique Secondaire Sous Electrons, Facility for emission of secondary electron under electron bombardment) and ALCHIMIE (AnaLyse CHImique et Mesure de l'émIssion Electronique, Chemical analysis and measurement of electronic emission). The equipment available in DEESSE and ALCHIMIE will be described in Chapter 6, where we will also detail the experimental TEEY measurements obtained with these facilities on insulating samples. An extensive description of the facilities can be found in the thesis of T. Gineste [31,44].
The general TEEY measurement procedure is based on two measurements of the current flowing through the sample. According to the current conservation law, the incident current 𝐼 0 , the emitted current 𝐼 𝐸 and the current flowing through the sample 𝐼 𝑆 follow the expression
𝐼 0 = 𝐼 𝐸 + 𝐼 𝑆 Equation 1-3
First, the sample holder is biased positively, for instance to a potential of +27V in DEESSE and ALCHIMIE. Due to the external electric field induced by the bias, practically all low energy electrons are recollected by the surface. The measured current 𝐼 𝑆 + is very close to the incident current (𝐼 0 ≅ 𝐼 𝑆 + ). In a second step, the sample holder is biased negatively (for example -9 V in DEESSE and ALCHIMIE), to force all secondary electrons to exit the sample. Indeed, as we will show in the next section, the charge buildup in insulators may prevent the escape of secondary electrons by electrostatic effects. The current 𝐼 𝑆 -measured in this case can be used to determine the emitted current using Equation 1-3 and the incident current 𝐼 0 obtained from the previous step, as 𝐼 𝐸 = 𝐼 0 -𝐼 𝑆 -. Finally, the TEEY is given by the ratio of the emitted current over the incident current, which is determined using both measurements 𝐼 𝑆 + and 𝐼 𝑆 -:
TEEY = I_E / I_0 = (I_0 - I_S^-) / I_0 = (I_S^+ - I_S^-) / I_S^+     Equation 1-4
For metals and semiconductors, it is possible to compute the TEEY by sending a continuous current on the sample. For insulators however, a pulsed measurement method must be used to limit the effects of charging. In this case, the sample is irradiated during a given time (the width of the pulse), followed by a relaxation period before sending another pulse of electrons. To compute the TEEY, a first series of pulses is sent with the sample biased positively to measure the incident current I_0, then a second series of pulses is sent with the sample biased negatively to measure the sample current I_S. In both cases, the current is averaged over the series of pulses. The TEEY is then computed using Equation 1-4.
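The computation of Equation 1-4 from a series of pulsed current measurements is straightforward; the following sketch illustrates it with arbitrary example currents (the numerical values are hypothetical and only stand for the currents averaged over the pulses).

```python
import numpy as np

def teey_from_currents(i_s_plus_pulses, i_s_minus_pulses):
    """TEEY from the sample currents measured with a positive and a negative sample holder bias.

    With a positive bias, the sample current approximates the incident current (I_0 ~ I_S+).
    With a negative bias, all secondary electrons escape, so I_E = I_0 - I_S-.
    The currents are averaged over the series of pulses before applying Equation 1-4.
    """
    i_s_plus = np.mean(i_s_plus_pulses)
    i_s_minus = np.mean(i_s_minus_pulses)
    return (i_s_plus - i_s_minus) / i_s_plus

# Hypothetical pulse currents, in nA.
i_plus = [100.2, 99.8, 100.1, 100.0]    # sample biased positively: I_S+ ~ I_0
i_minus = [-20.3, -19.8, -20.1, -20.0]  # sample biased negatively: net emission of electrons
print(f"TEEY = {teey_from_currents(i_plus, i_minus):.2f}")
```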
Issues related to the electron emission yield of insulators
Even in the absence of surface contamination, a significant variation of the TEEY with time can be observed on dielectric materials, on the scale of the microsecond or nanosecond depending on the measurement parameters. This is due to the creation of charges in the material, following the emission or absorption of electrons. Consequently, it can be very challenging to perform TEEY measurements on insulators due to the strong modifications of TEEY created by the charge buildup, and special care must be taken to limit the influence of charging. Despite this, some effects of charging on the TEEY are still not thoroughly understood, as we will show in this section.
According to the conventional theory [45][46][47], the charge buildup in the dielectric material is guided by the electron emission yield. Indeed, when a secondary electron is set into motion by a primary particle, a deficit of electron is now present where the secondary electron was initially localized. Due to the conservation of charges, this deficit is modeled as a positively charged hole, which can also move in the material. If the TEEY is greater than one, the material is emitting electrons, more holes than electrons are left inside of the sample, and the insulator is charging positively. On the opposite, when the TEEY is lower than one, more electrons than holes are implanted in the material, which is absorbing electrons and charging negatively.
External charge effects
Due to the charge buildup, the surface potential 𝑉 𝑠 of the sample is modified, depending on the total charge inside of the material. The difference of potential between the surface of the insulator and the other surfaces, including the electron gun generally set to the ground, creates an electric field outside of the material. This electric field can significantly modify the trajectories and energies of low energy electrons in vacuum. In this section, we will only focus on these external charging effects, which are due to the global charge in the material and affect the electrons outside of the material. The effects of the internal electric field will be detailed in the next subsection.
From the conventional TEEY curve, three situations can arise depending on the TEEY and the energy of electrons, as illustrated in Figure 1-8. When the energy of incident electrons E_0 is below the first crossover point (I), the TEEY is lower than one and the material is charging negatively. For most dielectrics at room temperature, the first crossover point is about a few tens of eV. The incident electrons are not energetic enough to generate a significant number of secondary electrons, and are implanted close to the surface. Because the surface potential V_s is negative, the incident electrons are slowed down by the electric field in vacuum and arrive at the surface with an effective landing energy E_L = E_init - eV_s, where e = 1.6 × 10⁻¹⁹ C. Since this energy is lower than their initial energy, this reduces the TEEY according to the conventional curve. The insulator continues to charge negatively, which amplifies the slowing down of electrons and the reduction of the TEEY. The charge buildup continues until the electrostatic potential energy eV_s becomes greater than or equal to the energy of the incident electrons. When this limit condition is reached, the electrons cannot overcome the electric field, which is strong enough to reflect them from the surface and make them directly hit the electron collector instead. Since the electrons cannot hit the surface, the global charge is not evolving anymore and a steady state is reached. In TEEY measurements, we observe a reduction of the TEEY until the electrons get reflected by the electric field. At this point, the TEEY suddenly increases to 1 and stabilizes, though we obviously do not measure the TEEY of the material anymore since the electrons are not penetrating in the material.

For electron energies E_init between the two crossover points (II), the TEEY is greater than one and the material is charging positively. Since more electrons are exiting the material than entering it, a net positive charge is created inside of the material due to the remaining holes. In this case, the surface potential becomes positive, and the electrons in vacuum are accelerated in the direction of the surface by the electric field. The incident electrons arrive at the surface with a landing energy E_L = E_init + eV_s that is increased compared to their initial energy. As the material continues to charge positively, the effective energy increases more and more towards the second crossover point. We could thus suppose, from an empirical point of view, that if E_init is greater than the energy of the maximum TEEY (E_Max), the TEEY should decrease as the material is charging. On the other hand, if E_init < E_Max, the TEEY would first go through a phase of increase until E_init = E_Max. However, in reality, we only observe a decrease of the TEEY. Indeed, the secondary electrons that have escaped the material into vacuum are also accelerated back to the surface by the electric field. Thus, if their energy is lower than eV_s, the secondary electrons cannot overcome the electric field and are forced back onto the surface, where they get recollected by the material. Given that most true secondary electrons have energies of 10 eV and below, a surface potential of only a few volts is enough to force the recollection of a significant part of the secondary electrons. Moreover, given the energies of electrons between the two crossover points (a few hundreds of eV), an energy differential of a few eV should not cause a significant shift of the TEEY curve.
Hence, the recollection of secondary electrons is the main factor that leads to a strong decrease of the TEEY. When the TEEY reaches 1, no net charge is created in the material, so that the external electric field is not increased anymore. At this point, the TEEY and the total charge have reached an equilibrium. If the incident electron energy is beyond the second crossover point (III), the TEEY is lower than one and the material is charging negatively. For most dielectrics, the second crossover point is above 2 keV. As in the case (I), the surface potential is negative, the electrons are slowed by the electric field and arrive at the surface with an energy E_L = E_init - eV_s. However, contrary to situation (I), the energy of incident electrons is much higher, so that they are not electrostatically reflected if the field becomes too strong. Instead, the landing energy of the electrons progressively decreases to E_C2 and the TEEY converges to one. From this macroscopic point of view, a surface potential of a few hundred volts or more may be necessary to slow the electrons down to the energy E_C2. The TEEY and charge buildup reach a steady state, as in situation (II).

Since the macroscopic effects of insulator charging on the TEEY are well known, it is possible to modify the experimental parameters used during TEEY measurements to limit the influence of charging. First, the electron collector can be biased to a strong potential (a few hundred volts). This will generate an extracting electric field that is opposed to the attracting field created by the positive global charge in situation (II), and prevent the recollection of secondary electrons.
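To give an idea of how severe the recollection effect of situation (II) is, one can assume an analytical form for the true secondary electron spectrum, for instance the Chung-Everhart distribution dN/dE ∝ E/(E + W)⁴, and compute the fraction of electrons emitted with an energy below eV_s. The choice of this distribution and of the parameter W in the sketch below are assumptions made purely for illustration; angular effects are ignored, so the result is only a lower bound of the recollected fraction.

```python
import numpy as np

def recollected_fraction(v_s_volts, w_eV=4.5, e_max_eV=50.0, n=20000):
    """Fraction of true secondary electrons emitted with a kinetic energy below e*V_s.

    The secondary electron spectrum is modeled by the Chung-Everhart form
    dN/dE ~ E / (E + W)^4 (an assumption used here for illustration), truncated at 50 eV.
    Electrons emitted below e*V_s cannot overcome the attracting field and are recollected.
    """
    e, de = np.linspace(1e-3, e_max_eV, n, retstep=True)
    spectrum = e / (e + w_eV) ** 4
    spectrum /= spectrum.sum() * de              # normalize the distribution
    return spectrum[e <= v_s_volts].sum() * de

for v_s in (1, 2, 5, 10):
    frac = recollected_fraction(v_s)
    print(f"V_s = {v_s:2d} V : at least {100 * frac:.0f}% of the secondaries are recollected")
```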
Another method, which is the one used in the DEESSE and ALCHIMIE facilities at ONERA DPHY, is to bias the sample holder to a given potential, which will force the surface potential of the dielectric sample at a certain value. With this method, the sample's surface potential can be set to a negative value of a few volts, which will prevent the recollection of secondary electrons when the material is charging positively. Even if the negative surface potential will slow down the incident electrons by a few eV, this should not create a significant shift of the TEEY, especially for electrons above a few tens of eV. Moreover, the shift of incident energy is controlled, and we can take it into account when plotting the TEEY curves. Therefore, using this method will allow us to measure the TEEY of dielectric materials while removing the perturbations of external charge effects.
However, the sample holder biasing method has a few limitations. First, the negative bias should be kept to small potentials of a few volts or tens of volts, in order to avoid a significant difference in the landing energy, which will create a shift of the TEEY. Nevertheless, this shift can be compensated by sending more energetic electrons in the first place. The main limitation of the method is its ability to compensate the positive surface potential created by the charge buildup in thick or bulk dielectric samples. Indeed, the capacitance C of a material is given by the formula
C = ε₀ ε_r S / D     Equation 1-5
Where ε₀ is the vacuum permittivity, ε_r the relative permittivity of the material, S is the area of the sample's surface and D is the sample's thickness. Consequently, thicker materials have a smaller capacitance. In addition, the change in the surface potential ΔV_s can be linked to the total charge buildup in the material ΔQ by the relation [48]:
ΔV_s = ΔQ / C = ΔQ D / (ε₀ ε_r S)     Equation 1-6
As a result, one can see that for the same charge, a thicker material will have a stronger variation in surface potential. For instance, thin films of silicon dioxide of 20nm thickness will only charge up to a potential of a few volts during the TEEY measurements. However, samples of mica, Al2O3 or Teflon with a thickness on the order of the micrometer or millimeter can charge up to hundreds or even thousands of volts when irradiated by electrons of several keV [49]. Consequently, it can be very difficult to limit the effects of charging for thicker samples. For this reason, most dielectric samples used in TEEY studies conducted in DEESSE are thin films of nanometric thickness, so that the external charging effects can be removed efficiently. This depends obviously on the transport properties of electrons in the target material and its capacity to retain charges. For instance, Belhaj et al. [48] were still able to use the sample holder biasing method on MgO bulk samples having a thickness of 2 mm, when studying the TEEY of 200 eV electrons during short pulses.
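The practical consequence of Equations 1-5 and 1-6 can be illustrated with a small numerical application. In the sketch below, the same deposited charge is applied to a 20 nm SiO2 thin film and to a 1 mm bulk dielectric; the irradiated area and the deposited charge (taken here as the 0.5 nC/mm² maximal dose mentioned earlier for standard measurements in DEESSE, over 1 mm²) are assumptions used only to show the orders of magnitude.

```python
EPS_0 = 8.854e-12   # vacuum permittivity (F/m)

def surface_potential_change(delta_q_C, thickness_m, area_m2, eps_r):
    """Surface potential change dV = dQ * D / (eps0 * eps_r * S), from Equations 1-5 and 1-6."""
    capacitance = EPS_0 * eps_r * area_m2 / thickness_m
    return delta_q_C / capacitance

area = 1e-6          # 1 mm^2 irradiated area (assumption)
delta_q = 0.5e-9     # 0.5 nC deposited charge (assumption)

for label, thickness, eps_r in [("SiO2 thin film (20 nm)", 20e-9, 3.9),
                                ("bulk dielectric (1 mm)", 1e-3, 3.9)]:
    dv = surface_potential_change(delta_q, thickness, area, eps_r)
    print(f"{label:25s}: dV_s ~ {dv:.2e} V")
```

With these assumed values, the thin film only charges to a fraction of a volt, whereas the bulk sample reaches several kilovolts, which is consistent with the orders of magnitude quoted above.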
Internal charge effects
Several TEEY experimental measurements have been made using the two external charge removal methods detailed previously [23,48,[50][51][52]. Nonetheless, a decrease of the TEEY during time has still been observed for various materials for energies between the two crossover points. These materials include Teflon [53], MgO [48], Al2O3 [50] and diamond [23], among others. Due to the absence of external charging and electric field effects, this decrease cannot be attributed to the recollection mechanisms highlighted in the previous section, in the case of situation (II). It has also been observed that the final value of the TEEY could be more or less lowered depending on the current density, the irradiation time or the temperature. Indeed, the charge buildup inside of the material can significantly disrupt the path of electrons in the material. First, an electric field also appears inside of the material, because the charge density in the material is not constant. Since the secondary electrons generated in the first few nanometers close to the surface have the highest escape probability, the remaining holes create a positive charge. The incident electrons are then implanted deeper in the material, creating a negative charge. Depending on its sign and intensity, the internal field can accelerate the electrons towards the surface, which increases their escape probability and the TEEY, or accelerate them deeper in the material, which will have the opposite effect. Simulation studies [54][55][56] have shown that the transport of electrons in SiO2 starts to be significantly affected for internal fields of 0.5-1MV/cm, but it is unknown whether such fields were reached during the TEEY measurements mentioned above. Some experimental works [48] have proposed an explanation based on the recombination of secondary electrons with the holes created by the previous cascade. Nevertheless, this hypothesis has not been confirmed. Indeed, the TEEY obtained experimentally is the global result of the several physical processes involved in the transport of electrons. Hence, it is not possible to determine experimentally the contribution of each internal physical process to the decrease of the TEEY. Instead, simulation models for the transport of low energy electrons could be more suited to study the impact of charging on the electron emission yield. In this regard, many computational models for the charge buildup and its effect on the TEEY have been conceived [57][58][59][60][61][62][63][64][65][66]. However, to our knowledge, no simulation of the effect of positive charge buildup on the TEEY were made in conditions where the sample holder was negatively biased to prevent the recollection of secondary electrons. Hence, the simulations results in the case of positive charging mainly show the effects of the external electric field on the TEEY.
In conclusion, the internal charge effects are still difficult to investigate experimentally and these results are not thoroughly understood. This is why, in this thesis work, we will particularly focus on developing a simulation model and conducting experimental measurements, to study and explain the effect of internal charge on the transport of electrons and on the TEEY.
Double-hump electron emission yield curves
Multiple studies have reported experimental TEEY measurements on dielectric samples that do not follow the standard behavior of the TEEY from Figure 1-3 [67][68][69]. The measured TEEY exhibits a double-hump shape illustrated in Figure 1-12, with the apparition of a local TEEY minimum between the two humps. This behavior has mainly been reported for SiO2 thin film samples, although it has also been observed on other space-used dielectric materials [70]. A few explanations have been proposed for these observations. Ye et al. [71] have shown that the variation of the surface irradiated by the electron beam could create double hump TEEY curves as an experimental artefact. However, these observations were only made in the case of metallic samples with surface roughness structures of millimetric dimensions. For dielectric materials, Hoffman and Dennison [70] proposed an explanation linked to external charging effects. However, other works [67][68][69] have also observed multiple hump TEEY curves on SiO2 thin films of various thicknesses grown on a Si substrate. In these studies, the sample holder was negatively biased to avoid the recollection of secondary electrons, so that this behavior cannot be attributed to external field effects only.
Yi et al. [69] and Yu et al. [68] proposed that the local TEEY minimum can be removed by compensating the holes created in the SiO2 layer by electrons tunneling from the Si layer. This is in agreement with other works that showed a link between the hole density and the TEEY [48,72]. They also proposed that the second TEEY maximum is due to enhanced compensation of the holes by electrons tunneling from the Si layer when the penetration depth of electrons is equal to the thickness of the SiO2 thin film layer. On the other hand, Rigoudy et al. [67] suggested that a TEEY minimum instead appears under such conditions, where a conductive channel evacuates the secondary electrons from the SiO2 layer under the effect of radiation induced conductivity. However, the local TEEY minimum appears in their measurements at the same incident energy for various SiO2 thicknesses. Notably, in these three studies, the TEEY local minimum was removed by a change of experimental parameters, such as increasing the positive bias of the collector [67] or the negative bias of the sample holder [69], or decreasing the incident current [68]. As a result, the apparition of the double hump TEEY curves could be linked to the conditions of measurement and could also be an experimental artefact.
Computing the TEEY with Monte Carlo simulations
In order to understand the physical processes behind the emission of secondary electrons and the internal charging effects, which are not easily accessible by experimental means, we can use Monte-Carlo simulation codes to model the transport of electrons in matter and the generation of secondary electrons. These codes combine physical models of the interactions of particles with matter with random numbers, to model the non-deterministic behavior of these particles. Many Monte-Carlo codes are publicly available to the scientific community, such as PENELOPE, Casino and FLUKA, among others. We can also cite the GEANT4 toolkit, which includes several physics packages for the transport of particles through matter. However, most interaction models have been developed for high energy particles, and transport models for low energy electrons are lacking. For instance, PENELOPE and Casino have a low energy limit of 50 eV for electrons. The MicroElec physics module of GEANT4, developed by CEA and ONERA, extends down to 16 eV, but is only valid for electrons in silicon. As a result, these simulation packages are unable to compute the secondary electron emission yield.
On the other hand, several Monte-Carlo codes have been developed specifically for the transport of low energy electrons down to a few eVs [26,27,39,[73][74][75][76][77][78][79]. These codes are able to simulate the secondary electron emission of various materials under electron irradiation, including metals and semi-conductors [77,79] or insulators (without charging) [80][81][82], or the emission of electrons under ion irradiation [83]. Most codes are focused on metals and semi-conductors, since the TEEY of dielectrics is heavily dependent on the charge buildup, which needs to be dynamically simulated. Despite this, some Monte-Carlo codes [57][58][59][60][61][62][63] are able to model the transport of charges in insulators and its effect on the TEEY, notably external effects due to the global charge. Most studies focus on Silicon Dioxide, which is the material with the most reference data. Nevertheless, none of these codes is publicly available to the community, and the observations made on the TEEY in the case of positive charging are mainly due to external effects.
In recent years, the OSMOSEE code [76] has been developed at ONERA, which is able to simulate the transport of electrons in aluminum down to a few eVs, and can model the electron emission yield of aluminum. Further developments at ONERA during the thesis of J. Pierron [25,27,77,84], in collaboration with CEA and CNES, have extended OSMOSEE and MicroElec's physics to simulate the electron emission of Al, Si and Ag, but these updates were not released in GEANT4. Most importantly to our study, both of these codes were designed for metals and semi-conductors, hence they are not able to simulate the charge transport in insulators. However, we should be able to use these simulation codes as a base and extend it to the simulation of the TEEY of insulators.
Conclusion of Chapter 1
In this first chapter, we have highlighted how the secondary electron emission process can disrupt the operation of several devices, such as spacecraft electronics or particle accelerators. In this regard, dielectric materials have proved to be especially problematic, for several reasons. First, these materials have a larger total electron emission yield compared to metals, which increases the risk of multipactor discharges in RF components. The electron cloud formation is also enhanced, which leads to a significant loss of power in particle accelerators. Second, insulator materials can charge depending on the electron emission yield, generating a potential gradient inside of the material. The charge can build up to the point that electrostatic discharges suddenly occur. These can severely damage the electronic components, for instance by creating destructive electrical arcs on the surface of solar panels. Therefore, evaluating the TEEY of insulating materials is critical for these applications. However, it is especially difficult to perform experimental TEEY measurements on dielectrics, due to the modification of the TEEY caused by charging effects. This is also problematic in scanning electron microscopy. Indeed, the image contrast depends on the TEEY of the target, which can be modified by the charge buildup if it is an insulator. On the one hand, the external charging effects are well known. It is understood how the external electric field generated by the global charge modifies the energy of the incident and secondary electrons, why this changes the TEEY, and how the effect of this electric field can be removed. On the other hand, the comprehension of internal charging effects is still lacking. It is not thoroughly understood why the TEEY decreases with time even when the external electric field has no influence on the TEEY, or why multiple-hump TEEY curves have been observed on dielectrics but not on metals.
For these reasons, the main objective of this thesis work will be to develop our own Monte-Carlo model for the simulation of the TEEY of insulators, in order to understand the effects of internal charging on the transport of electrons in dielectric materials and their TEEY. We will also perform experimental measurements on insulating samples, to study the TEEY and obtain reference data for our simulation. Before constructing our model however, we need to identify which electron-matter interactions need to be taken into account. We have also established that the charge buildup can also modify the TEEY in insulators through internal charge effects. Hence, a main concern of this work will be to study the transport of charge carriers and their physical interactions. The description of all these interactions will be the focus of Chapter 2, along with the Monte-Carlo procedure used to simulate the transport of electrons and the secondary emission process.
Chapter 2: Presentation of the low energy electron-matter interactions and the Monte-Carlo simulation procedure
We have established that the generation of secondary electrons in all materials and the charge buildup in dielectrics are due to the interactions of incident low energy electrons with matter. However, to build a simulation of the effects of internal charging on the TEEY of dielectric materials, we need to introduce the electron-matter interactions we will model in the next chapters. The simulation procedure used in this work, namely the Monte-Carlo simulation, will first be presented. This will allow us to highlight the key quantities we will need to know, in order to describe the electron-matter interactions. Some of these interactions, as we will show, are common to all material types, and are sufficient to model the TEEY of materials without charging effects. However, we know that the internal charge buildup affects the TEEY of insulators and we want to know which physical mechanisms are involved. For this reason, we will also present the transport of the charge carriers in insulators, and the specific interactions involved. Finally, we will mention some other quantities than the TEEY, which are relevant to the study of the transport of low energy electrons and which can be accessed with Monte-Carlo codes.
Fundamental quantities of interactions and their use in the Monte-Carlo simulation procedure
This section aims to describe the operation of Monte-Carlo codes from a general point of view. The fundamental quantities used to describe the interactions of particles are the mean free paths and the cross sections. These two quantities, which are necessary inputs to Monte-Carlo codes, will also be defined in this section.
Interaction cross sections
The cross section (also abbreviated as XS) is an indication of the probability of occurrence for a given interaction. It has the dimension of an area, and can be understood using the analogy of a flux of particles φ_i directed towards a single scattering center, which is then measured by a detector located beyond the scattering center [1]. During the interaction, some of the particles can be scattered by the diffusion center into a solid angle dΩ measured by the particle detector. This analogy is shown in Figure 2-1. The number of particles dn scattered by the center and hitting the detector is thus dependent on the incident flux φ_i and the solid angle measured by the detector dΩ. When extrapolating this image to a solid, a multiple number of scattering centers N_c is now involved. If we suppose that each particle is only interacting with one center, for instance in the case of a flux of particles crossing a very thin film, the number of particles hitting the detector is thus given by [1]:
dn = N_c φ_i (dσ/dΩ) dΩ     Equation 2-1
In this equation, the differential cross section dσ/dΩ appears, which expresses the probability of a particle being scattered into the solid angle dΩ. The total cross section σ is obtained by integration of the differential cross section of Equation 2-1:
σ = ∫ (dσ/dΩ) dΩ     Equation 2-2
By combining both equations, we can understand that the number of particles hitting the detector is directly proportional to the total cross section 𝜎. It is equal to the number of particles hitting an area the size of 𝜎, and not simply the number of particles hitting the area occupied by the scattering center. In result, the cross section associated with an interaction can be quite different from the area of the scattering center.
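For a differential cross section with azimuthal symmetry, the integration of Equation 2-2 reduces to σ = 2π ∫₀^π (dσ/dΩ) sin θ dθ. The sketch below performs this integration numerically on a screened-Rutherford-like angular shape, which is used here purely as an illustrative placeholder and not as one of the cross section models of this work.

```python
import math
from scipy.integrate import quad

def d_sigma_d_omega(theta, sigma0=1e-20, eta=0.05):
    """Illustrative screened-Rutherford-like differential cross section (m^2/sr), placeholder only."""
    return sigma0 / (1.0 - math.cos(theta) + 2.0 * eta) ** 2

def total_cross_section(dcs, **kwargs):
    """Total cross section: sigma = 2*pi * integral of dcs(theta)*sin(theta) dtheta over [0, pi]."""
    integrand = lambda theta: dcs(theta, **kwargs) * math.sin(theta)
    value, _ = quad(integrand, 0.0, math.pi)
    return 2.0 * math.pi * value

print(f"sigma = {total_cross_section(d_sigma_d_omega):.3e} m^2")
```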
Interaction mean free paths and the computation of the interaction lengths
The mean free path λ associated with an interaction is the average distance that can be traveled by a particle between two successive interactions. If we consider a flux of particles passing through a sample of thickness dx and area A, with a volume density N of scattering centers, the probability of a particle being stopped by one of the diffusion centers is expressed by the ratio between the area occupied by the scattering centers multiplied by the cross section, divided by the total area of the sample:
P = (N σ A dx) / A = N σ dx     Equation 2-3
The variation of the intensity of the flux after the crossing of the sample is thus equal to the initial intensity multiplied by the interaction probability as
𝑑𝐼(𝑥) = -𝐼(𝑥) 𝜎𝑁 𝑑𝑥 Equation 2-4
This equation is a linear differential equation that can be integrated. The solution of this equation is given in the form of 𝐼(𝑥) = 𝐼 0 𝑒 -𝑁𝜎𝑥 . We can then use this solution to evaluate the probability 𝑑𝑃(𝑥) of a particle to be stopped between 𝑥 and 𝑥 + 𝑑𝑥 as:
dP(x) = [I(x) - I(x + dx)] / I_0 = N σ e^(-Nσx) dx     Equation 2-5
We can then deduce the average distance 〈𝑥〉 between two interactions, by applying the definition of the expected value of a random variable to the Equation 2-5:
⟨x⟩ = ∫₀^(+∞) x dP(x) = ∫₀^(+∞) x N σ e^(-Nσx) dx = 1 / (σN)     Equation 2-6
This gives the value of the mean free path λ = 1/(σN).
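As a quick order-of-magnitude application of this relation, the sketch below computes the mean free path for a generic atomic density and cross section; both numerical values are assumptions chosen only for illustration.

```python
def mean_free_path_cm(number_density_cm3, cross_section_cm2):
    """Mean free path lambda = 1 / (N * sigma)."""
    return 1.0 / (number_density_cm3 * cross_section_cm2)

# Generic order-of-magnitude values (assumptions for illustration).
n_atoms = 5e22        # scattering centers per cm^3
sigma = 1e-16         # cm^2
lam = mean_free_path_cm(n_atoms, sigma)
print(f"lambda ~ {lam:.1e} cm = {lam * 1e8:.0f} Angstrom")
```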
Monte-Carlo programs use a random selection process to choose the interactions made by the particles, taking into account the probabilities of each interaction. This selection is made at the beginning of each simulation step, where the program surveys the mean free paths 𝜆 𝑖 for each interaction depending on the properties of the tracked particle. Using a random number 𝑅 𝑖 from 0 to 1, an effective interaction length 𝑙 𝑖 is computed, using
𝑙 𝑖 = -𝜆 𝑖 ln(𝑅 𝑖 ) Equation 2-7
The program then selects the physical process with the shortest interaction length l_i, which will be the distance travelled by the particle during the simulation step. At the end of the step, the effective interaction lengths of the physical processes that were not selected are modified, to take into account the distance that was travelled by the particle. The procedure is then repeated with a new random sampling for each simulation step. On average, over a great number of simulated events, the most probable interactions will be the ones that have a higher frequency of occurrence. However, the random sampling introduces a statistical dispersion that accounts for the non-deterministic nature of the transport.
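A minimal sketch of this selection procedure is given below, assuming a particle for which three hypothetical processes with known mean free paths compete at the current step; the process names and values are placeholders for illustration.

```python
import math
import random

def sample_interaction_lengths(mean_free_paths):
    """Draw an effective interaction length l_i = -lambda_i * ln(R_i) for each process."""
    return {name: -lam * math.log(1.0 - random.random())
            for name, lam in mean_free_paths.items()}

def simulation_step(mean_free_paths):
    """Select the process with the shortest sampled length; the step length is that distance."""
    lengths = sample_interaction_lengths(mean_free_paths)
    process = min(lengths, key=lengths.get)
    step_length = lengths[process]
    # In a full code, the lengths of the processes that were not selected would then be
    # decremented by the travelled distance before the next step (not shown here).
    return process, step_length

# Hypothetical mean free paths in nm for three competing processes.
mfp = {"elastic": 1.5, "inelastic": 4.0, "phonon": 2.5}
random.seed(0)
for _ in range(5):
    print(simulation_step(mfp))
```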
Common electron-matter interactions for all material types
An electron penetrating in a solid can undergo multiple interactions and collisions, some of which are common to all material types and can occur for electrons traveling in metals, semiconductors and insulators. These are given in Figure 2-2. The primary electron entering the solid can first interact with the nuclei of the atoms of the material in an elastic interaction, which results in a change of direction of the electron. The incident electron can also transfer a part of its energy to an electron of a material in an inelastic interaction, which can result in the generation of a secondary electron. The new electrons set into motion may in turn excite other electrons of the solid, creating an electronic cascade in the material. The phenomenon of secondary electron emission happens when the secondary electrons cross the surface and escape from the material. Given the large number of electrons set into motion in the secondary electron cascades, one can easily understand how a material can be emitting more electrons than it had received. However, the emission of secondary electrons into vacuum is not systematic. Depending on its energy and angle of incidence, an electron reaching the surface can be transmitted with a change of direction, or reflected by the surface. If a primary electron remains close enough to the surface, it may be able to escape the solid, which results in a backscattered electron. The peak of elastically backscattered electrons we have observed in the energy distribution of electrons exiting a material are the primary electrons which were either reflected by the surface potential barrier before entering the material, or have entered the material, made one or a few elastic interactions with a negligible energy loss, and then escaped the material. If the primary particle has lost some energy through inelastic interactions before escaping, it will be measured as an inelastically backscattered electron.
The description of these interactions is mandatory in order to compute the electron emission of any material, so we will have to include them in our low energy electron transport code for dielectrics. On the other hand, with only a modeling of the interactions we present in this subsection, we should already be able to compute the TEEY of metals and semi-conductors, which we will do in Chapter 3.
Elastic interaction
The elastic interaction is the interaction of an incoming electron with the coulombian potential created by an atom of the material, more precisely by the nucleus and the strongly bound core shell electrons. There is a very large difference in mass between the electron and the nucleus plus core electrons ensemble, hence the energy lost by the incident electron is very small (below a few meV). However, the electron can be significantly deflected from its initial trajectory, depending on its energy. Indeed, when the electron energy falls below a few hundred eV, the probability of the electron being strongly scattered increases. The electron may even be scattered in an opposite direction, with a scattering angle greater than 90°. Moreover, as the electron energy decreases towards the eV, the elastic interaction mean free path also decreases down to the order of the interatomic distances (a few Angstroms). As a result, low energy electrons, especially below 100 eV, are strongly scattered in random directions by the elastic interactions after only a few Angstroms. Hence, their motion is equivalent to a Brownian motion. This particularity of the transport of low energy electrons means that they cannot be treated the same way as higher energy electrons (> keV), or heavier particles such as protons.
To describe the elastic interaction of an electron with an atom, one not only needs to take into account the electric field generated by the nucleus but also the field generated by the neighboring atoms of the lattice. This description and the computation of the interaction cross sections can be made using the Partial Wave Analysis (PWA) method proposed by Mott [2]. This method is based on quantum mechanics, and the computation of the PWA cross sections is strongly dependent on the approximations used for the potential created by the atoms. Nevertheless, several works have used the PWA method to compute the total and differential elastic cross sections. Notably, in this work, we have chosen to use the ELSEPA [3] (ELastic Scattering of Electrons and Positrons by Atoms) code for the computation of elastic cross sections. This code is a database of elastic cross sections computations for electrons from 10 eV up to 1 GeV. The code covers elements from Z = 1 to Z = 103, which allows the user to simulate the elastic interaction in the corresponding monoatomic materials. ELSEPA can also be used to compute the elastic cross sections of molecules, if the user can provide additional parameters such as the molecule geometry. In this case, the cross sections can be used for the transport of electrons in compound materials.
Nevertheless, the validity of the partial wave method analysis method below 50 eV is questioned by several works [4][5][6]. Akkerman et al. [4] and Valentin et al. [5] have substituted the ELSEPA cross sections for electrons in Si below 50 eV by ab-initio cross sections. In SiO2, Schreiber & Fitting [6] have shown that the values of the cross sections given by the PWA below 100 eV become unphysical, with a mean free path that falls below the interatomic distance. This is because, in insulators, the collective oscillations of the lattice (phonons) must be taken into account for electrons below 100 eV.
Inelastic interaction
In addition to the nuclei of the atoms, an electron may interact directly with the other electrons of the material. An inelastic interaction happens when the incident electron transfers a part of its energy to one of these electrons, which can then be set into motion. Hence, the production of secondary electrons and the generation of the electron cascade occurs during the inelastic interactions. However, to correctly model the electron cascade, we need to know the energy lost by the primary particle, where the secondary electron is coming from, and what part of this energy is actually transferred to the electron put into motion. In this regard, we must study the electronic structure of the material, which contains information on the two distinct populations of electrons from the solid: the strongly bound core shell electrons, and the weakly bound valence or conduction electrons.
Description of the electronic structure of a solid
The electronic structure of an isolated atom is made of several discrete energy levels occupied by the electrons. According to Bohr and Rutherford [7], these levels are defined by four quantum numbers. The principal quantum number n describes the electron shell the level belongs to, with the deepest shell having a number of 1, and the furthest shell the biggest number. The atomic shells are often referred to by a letter, starting from K for n = 1, L for n = 2, M for n = 3, and so on. The azimuthal quantum number l describes the subshell of the electron within the n shell, and ranges from 0 to n - 1. The magnetic quantum number m corresponds to the shape of the orbital within the subshell, and is contained between -l and +l. Finally, the spin quantum number s gives the orientation of the spin (+1/2 or -1/2). According to Pauli's exclusion principle, a single orbital (n, l, m) can only contain two electrons, having a spin of -1/2 and +1/2.
It is important to note that the deepest shells are closer to the atomic nucleus, which means that their binding energy is greater than the furthest shells. For instance, the binding energy of the K shell of germanium (Z=32) is 11 keV according to the EADL atomic data library [8], whereas the furthest shell (N3) only has a binding energy of 6 eV. Heavier elements have more energy levels, thus the core shells get even closer to the nucleus, which increases their binding energy.
However, the outermost shells still have a binding energy on the order of the eV. For example, the binding energy of the K shell of carbon (Z=6) is 300 eV, which is much lower than for germanium. On the other hand, the energy of the furthest shell of carbon (L3) is 9 eV and similar to the one of germanium, even if germanium has more shells (12) than carbon (4).
The electronic structure of a solid is quite different from the structure of isolated atoms, as shown in Figure 2-3. Indeed, when two atoms are brought together and form a chemical bond, the outermost energy levels are shared and split between the two atoms. In the case of a solid, a very large number of atoms are gathered in a volume (on the order of 10²³ at/cm³), and only separated by a few Angstroms. Due to this proximity, the external shells fuse into several continuums of energy levels, which are known as electronic bands. Some of these levels can be accessible (valence and conduction bands) or inaccessible (energy gaps between the bands). The core shells however are still strongly bound to the individual atoms, and the overlap of the energy levels of the deeper shells is limited. Hence, they can still be treated as discrete energy levels. At a temperature of 0 K, electrons fill the electronic structure starting from the core shells up until the Fermi level E_F, which is the highest energy level occupied at absolute zero. The properties of a material are heavily dependent on the location of the Fermi level and the band structure. Materials can be separated into three categories, shown in Figure 2-4. First, in metals, the Fermi level is located within the conduction band. As a result, at room temperature, the conduction band is partially filled with weakly bound electrons. These electrons can easily be excited to an unoccupied level of the conduction band by thermal activation, and therefore put into motion in the solid.
In semi-conductors and insulators however, the Fermi level is placed between the valence band and the conduction band, in the middle of an energy gap. Hence, for these materials, the conduction band is completely empty at 0 K and the valence band is completely filled. The width of this gap is what differentiates a semi-conductor from an insulator. Indeed, semiconductors generally have an energy gap of a few eV at most (around 1 eV for Si and Ge), whereas the energy gap of insulators is greater than 4 eV (8 eV for SiO2 and Al2O3). The energy gap in a semiconductor is small enough that the electrons of the valence band can jump this gap, by gaining energy through thermal excitation. It is also possible to introduce additional charge carriers in a semiconductor by doping, which is done by implanting other atoms that have a different valence from the intrinsic atoms of the material. The effect of doping is also to create additional energy levels in the band gap, which can facilitate the jump of electrons into the conduction band. In insulators however, the band gap is much too large to be jumped by the electrons of the valence band through thermal excitation. This transition is only possible if the insulator is subjected to a very large electric field, which can enable the valence electrons to tunnel through the energy gap. During an inelastic interaction, the primary electron transfers a part of its energy to a bound electron of the material and promotes it into an unoccupied level of the conduction band, above the Fermi level. The promoted electron becomes a secondary electron that can now move into the material and even exit it. Therefore, we can separate the sources of secondary electrons into two populations: strongly bound electrons from the core shells, and weakly bound electrons, located in the partially filled conduction band of metals, or at the top of the valence band of semiconductors and insulators.
Inelastic interactions with core shells electrons
As mentioned earlier, core electrons are situated within the deepest shells of the atoms of the material, and are therefore strongly bound to the nucleus. To promote a core shell electron into the conduction band, the binding energy of the energy level of the electron must be overcome. Given that the binding energy of core shells ranges from a few hundred to a few thousand of eV, the excitation of a core electron results in a large energy loss for the primary. However, all of this energy is not directly transferred to the secondary particle, since most of it is spent to promote the electron into the conduction band. Indeed, for an energy transfer 𝑇 lost by the primary electron, a secondary electron from a core shell with a binding energy of 𝐸 𝑏 will be generated with an energy
𝑄 = 𝑇 -𝐸 𝑏 -𝐸 𝑔𝑎𝑝 Equation 2-8
In semiconductors and insulators, the energy of the gap E_gap must also be overcome by the secondary electron, whereas for metals E_gap = 0.
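The energy balance of Equation 2-8 is simple enough to be written as a small helper, shown below with hypothetical binding energies; a core-shell ionization is only possible when the energy transfer exceeds E_b + E_gap.

```python
def secondary_energy(energy_transfer_eV, binding_energy_eV, gap_eV=0.0):
    """Kinetic energy of a secondary electron excited from a core shell: Q = T - E_b - E_gap.

    Returns None when the transfer is too small to promote the electron into the conduction band.
    """
    q = energy_transfer_eV - binding_energy_eV - gap_eV
    return q if q > 0 else None

# Hypothetical example: 150 eV transferred to an electron of a 100 eV core shell in an insulator (gap 8 eV).
print(secondary_energy(150.0, 100.0, gap_eV=8.0))   # -> 42.0 eV
print(secondary_energy(90.0, 100.0, gap_eV=8.0))    # -> None (transfer too small)
```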
Contrary to the weakly bound electrons in the energy bands, the core electrons still belong to a given atom of the lattice. When an electron is promoted from a core shell into the conduction band through ionization, a vacancy is left at the place of this electron. However, this leaves the atom in an unstable condition. Indeed, the most stable electronic configuration for an atom is when electrons occupy the deeper levels and the shallowest levels are vacant. Therefore, some de-excitation and reorganization processes can occur in the ionized atom. An electron positioned in a higher energy level E_n can dissipate energy by the emission of an X-ray photon, in order to compensate a vacancy in a lower energy level E_v. The emitted photon has an energy hν = E_n - E_v. The core vacancy can also be compensated through an Auger process, where the dissipation energy hν is instead transferred from an electron A at the level E_n to another electron B at a higher level E_m > E_n. The electron A then compensates the vacancy, whereas the electron B is ejected from the atom, which corresponds to the production of an Auger electron.
The ionization of core shells can be modeled with the dielectric function theory [4,5,9], which is the approach we have followed in this work. This method, which can also model the inelastic interactions with weakly bound electrons, will be presented in Chapter 3.
Inelastic interactions with weakly bound electrons and collective interactions
Due to the large energy transfers required to excite a core electron, most of the energy losses of low energy electrons below a few keV will be through inelastic interactions with weakly bound electrons and collective oscillations. Indeed, the primary electron can also transfer a part of its energy to an electron below the Fermi level, in the valence band or in the partially filled conduction band of a metal, and promote it into an unoccupied state in the conduction band. As a first approximation, the secondary electron can be assumed to be generated from the Fermi level, which means that it is generated with an energy 𝑄 = 𝑇 -𝐸 𝑔𝑎𝑝 , where 𝑇 is the energy lost by the primary electron. However, almost all weakly bound electrons are located at an energy level 𝐸 𝑖𝑛𝑖𝑡 that is below the Fermi level 𝐸 𝐹 in a metal, or below the top of the valence band in a semiconductor or an insulator. Therefore, depending on the initial energy of the secondary electron in the band, its energy after the inelastic interaction is given by
$Q = T - E_{init} - E_{gap}$    Equation 2-9

In the materials studied in this work, the energy of weakly bound electrons $E_{init}$ is on the order of a few eV, up to about ten eV. In a metal, if the energy of the incident electron is greater than the difference between $E_F$ and the bottom of the conduction band, all weakly bound electrons of the conduction band may be excited into a vacant state. This is also the case in semiconductors and insulators if the primary electron energy is greater than the sum of the energy of the gap and the width of the valence band $E_{VB}$. However, if the energy of the primary is not high enough, only a part of the weakly bound electrons may be excited.
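As a simple illustration of Equations 2-8 and 2-9, the following sketch computes the energy left to the secondary electron for the two populations discussed above; the binding energies and the band gap used are Si-like illustrative values, not parameters of the models developed later in this work.

```cpp
#include <iostream>

// Energy bookkeeping of a secondary electron after an inelastic interaction
// (Equations 2-8 and 2-9): the energy T lost by the primary is reduced by the binding
// energy of the excited electron and, in semiconductors and insulators, by the band gap.
double secondaryEnergy(double T, double bindingEnergy, double gap)
{
    return T - bindingEnergy - gap;   // <= 0 means no secondary electron is set in motion
}

int main()
{
    const double gap = 1.1;   // band gap [eV], Si-like illustrative value
    // 150 eV transferred to a core electron bound at ~100 eV (Equation 2-8)
    std::cout << "Core shell   : Q = " << secondaryEnergy(150.0, 100.0, gap) << " eV\n";
    // 20 eV transferred to a weakly bound electron ~3 eV below the Fermi level (Equation 2-9)
    std::cout << "Weakly bound : Q = " << secondaryEnergy(20.0, 3.0, gap) << " eV\n";
    return 0;
}
```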
Instead of transferring energy to individual electrons, the primary electron may instead create a perturbation that displaces multiple weakly bound electrons from the valence or conduction band. Through coulombian attraction, these electrons are moved back towards their position at rest, which generates a collective oscillation of electrons in the volume of the material [10]. This oscillation is called a plasmon, and is characterized by its frequency $\omega_p$. When a primary electron is interacting with a volume plasmon, an energy $\hbar\omega_p$ is first transferred to the plasmon. The plasmon then decays after a given lifetime by transferring its energy to a weakly bound electron, which is excited into a secondary electron. Since the energy of the plasmon is quantized, the interaction of electrons with volume plasmons corresponds to a very marked peak on the energy loss spectra, which is often the most probable energy transfer for low energy electrons. Hence, weakly bound electrons, whether by direct impact or plasmon relaxation, are the principal source of secondary electrons. These inelastic interactions will be modeled with the dielectric function theory in Chapter 3, as for core shell electrons.

However, the dielectric theory cannot model surface plasmons, which are another type of collective oscillation. Indeed, a volume plasmon oscillates longitudinally along the incident electron's propagation axis, whereas a surface plasmon is a transverse oscillation that can only happen in the first few nanometers of the solid's surface. Therefore, the probability of exciting a surface plasmon instead of a volume plasmon depends on the distance between the electron and the material's surface [1]. Although it is easy to evaluate this distance in the case of a perfectly flat surface, this becomes more complicated in the case of rough structured surfaces. The interaction models we will develop for the electron emission of metals and semiconductors in Chapter 3 need to be compatible with the simulation of 3D complex geometries. Indeed, we have shown in Chapter 1 that surface roughness could be a way of reducing the TEEY, which can be studied with Monte-Carlo transport through 3D geometries [11,12]. Hence, to ensure the portability of our models, we have decided not to model surface plasmons, and assumed that all plasmons are volume plasmons.
Interactions with the material's surface
In vacuum, electrons are not subjected to a coulombian force, and their energy can be expressed relatively to the energy level of vacuum. Therefore, the potential energy 𝐸 𝑝 of the electron is null and its total energy 𝐸 is only made of the kinetic energy 𝐸 𝑐 . At the vacuum/material boundary however, the periodic potential of the crystal is disturbed by the discontinuity induced by the interface. This results in the generation of a potential barrier that the electron must pass.
When the electron enters the material, it needs to overcome the surface potential barrier, which is defined as the difference between the bottom of the conduction band and the energy level of vacuum. The electron's energy is now expressed relatively to the bottom of the conduction band.
Due to this change of reference, an electron penetrating in the material will lose potential energy; this loss is transformed into a gain of kinetic energy. An electron exiting the material will, on the contrary, gain potential energy and lose kinetic energy. This change of energy is illustrated in Figure 2-5.
Figure 2-5: Energy changes for an electron penetrating in a material
A secondary electron created in the material needs to have a greater energy than the surface barrier to cross it. This threshold is defined as the work function for a metal, or the electron affinity for an insulator. These energies correspond to the minimal energy required to eject an electron from the solid. Hence, the surface potential barrier greatly limits the number of low energy secondaries emitted.
Indeed, the work function of most metals and semiconductors is about 4 to 5 eV, which prevents electrons below this energy from escaping. In some insulators however, the electron affinity can be as low as 0.9 eV in SiO2 and Al2O3. Therefore, even very low energy secondaries of only a few eV can still escape the surface of such insulators.
The surface processes can be simulated with different models of potential barrier crossing, such as a square barrier [13] or an exponential barrier [14]. Other models, such as image force barrier models [15], take into account the perturbation generated by the incident electron as it approaches the surface. However, these models are more complicated and computationally expensive. Hence, we have chosen to retain the exponential barrier model in the case of our study.
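To illustrate how the surface barrier filters low energy secondaries, the following sketch computes the quantum transmission probability for the simplest model mentioned above, the square (step) barrier; the barrier height is an illustrative metal-like value, and this is not the exponential barrier model retained in this work.

```cpp
#include <cmath>
#include <iostream>

// Transmission probability of an electron through a square (step) surface barrier.
// E is the energy of the electron normal to the surface, measured from the bottom of
// the conduction band, and U is the total barrier height seen from that reference
// (work function plus conduction band depth for a metal, electron affinity for an insulator).
double transmissionSquareBarrier(double E, double U)
{
    if (E <= U) return 0.0;                       // below the barrier: no emission
    double k1 = std::sqrt(E);                     // wave vector inside the material (arbitrary units)
    double k2 = std::sqrt(E - U);                 // wave vector in vacuum
    return 4.0 * k1 * k2 / ((k1 + k2) * (k1 + k2));
}

int main()
{
    const double U = 4.5;   // barrier height [eV], illustrative metal-like value
    for (double E : {4.0, 5.0, 10.0, 50.0})
        std::cout << "E = " << E << " eV : T = " << transmissionSquareBarrier(E, U) << "\n";
    return 0;
}
```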
Physical interactions of electrons and charge carriers in insulating materials
In addition to the elastic, inelastic and surface interactions, several processes need to be taken into account to correctly model the charge buildup in the insulator and its effect on the transport of electrons. Indeed, due to the large band gap, the electron-phonon processes become predominant for the transport of electrons in insulators, whereas they can be neglected in metals and semiconductors for the computation of the TEEY.
Due to charge conservation, an electron-hole pair is generated during an inelastic interaction, with the hole being left at the place of the secondary electron generated. Since we know that the internal charge buildup is modifying the TEEY, the transport of these holes must be simulated. We need to take into account the transport of thermalized electrons as well, which can move in the conduction band but are unable to cross the material surface due to their low energy. We can then differentiate the transport of ballistic electrons, which are primary and secondary electrons with an energy above 1 eV, from the drift charge carriers, which are holes and thermalized electrons. The transport of charge carriers, called drift, is very different from the transport of primary and secondary electrons we have treated so far, since it is enabled by thermal excitation and electric fields. Both drift charge carriers and ballistic electrons of only a few eV can get immobilized on traps of various natures. However, this immobilization is not definitive, and a trapped particle can escape from the trap through detrapping. Finally, electrons and holes may recombine with each other, either by the intermediate of a trap, or by geminate recombination. All the processes that will be modeled in our charge transport model for insulators are shown in Figure 2-6.
Electron-phonon interactions
From the different band structures shown in Figure 2-4, we can understand how there is no threshold for the inelastic interaction in metals, since for any given energy transfer, we can always find an electron to excite above the Fermi level. In insulators and semiconductors however, an incident electron falling below the energy of the band gap is unable to lose energy through inelastic interactions. In semiconductors, this happens when electrons fall below 1 or 2 eV, but we do not need to simulate the transport of these electrons for the computation of the TEEY since their energy is lower than the electron affinity (4-5 eV). In insulators however, the energy gap is much larger (8-9 eV in SiO2 and Al2O3), but we still need to track the electrons down to the energy of the surface potential barrier, which can be below 1 eV (0.9 eV for SiO2 and Al2O3).
In the absence of energy loss models below the band gap, any electron falling below 9 eV will be able to escape the material, which leads to unrealistic TEEYs. Therefore, the electron phonon interactions, which are negligible in metals and semiconductors, must be considered in insulators as they are the main source of energy losses for electrons below the band gap.
Phonons are quantized vibrations of the lattice, which are generated when atoms collectively oscillate around their equilibrium position. These oscillations occur natively under the effect of temperature, which creates a thermal agitation in the solid. These collective oscillations of atoms can be propagated in the lattice under the form of waves. Instead of treating this phenomenon as a vibrational wave, we can use the particle-wave duality and treat them as a quasi-particle that is propagating in the lattice, which is known as a phonon. The quantization of these vibrations means that phonons correspond to distinct vibration modes, which are characterized by their vibrational frequency 𝜔. The population of phonons of vibrational mode 𝜔 is given by the Bose-Einstein statistic, which follows the relation
$N_{ph} = \frac{1}{e^{\hbar\omega / k_B T} - 1}$    Equation 2-10
Where $k_B$ is Boltzmann's constant. We can see from this relation that the population of phonons tends to 0 when the temperature converges towards 0 K. Indeed, when the temperature increases, the oscillations of the atoms around their equilibrium position are amplified by the thermal agitation, which increases the number of phonons generated in the material. Vibration modes can first be separated according to whether they generate an oscillating dipole moment. If this dipole moment is non-zero, the vibration mode can be excited by a particle having a null wave vector, that is to say a photon. Hence, phonons that verify this condition are known as optical phonons. Indeed, optical phonons correspond to the vibration modes created when the positive and negative ions of the lattice vibrate and generate an oscillating dipole moment. This is why they can be created by the passage of a photon, and why optical phonon branches are only found in ionic or semi-ionic crystals. Acoustic phonons, on the other hand, correspond to the oscillations of neighboring atoms, and cannot be excited by a photon. However, the displacement of atoms also modifies the local electron density, which means that electrons may interact with these oscillations. Vibration modes can secondly be separated according to their orientation relatively to the propagation direction of the phonon. If the atoms oscillate along the propagation axis of the wave, it is known as a longitudinal phonon. Transverse phonons, on the other hand, create oscillations that are perpendicular to their propagation axis.
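As an illustration of Equation 2-10, the following sketch computes the thermal population of a phonon mode at several temperatures, for an optical phonon energy of 0.15 eV (the SiO2 longitudinal optical mode mentioned below) and for an acoustic mode of a few tens of meV; the acoustic value is purely illustrative.

```cpp
#include <cmath>
#include <iostream>

// Bose-Einstein occupation of a phonon mode (Equation 2-10):
// N_ph = 1 / (exp(hbar*omega / (k_B * T)) - 1).
double phononPopulation(double phononEnergy_eV, double T_K)
{
    const double kB = 8.617333e-5;   // Boltzmann constant [eV/K]
    return 1.0 / (std::exp(phononEnergy_eV / (kB * T_K)) - 1.0);
}

int main()
{
    for (double T : {100.0, 300.0, 600.0})
    {
        std::cout << "T = " << T << " K : N(0.15 eV LO) = " << phononPopulation(0.15, T)
                  << ", N(0.02 eV acoustic) = " << phononPopulation(0.02, T) << "\n";
    }
    return 0;
}
```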
Electrons can in fact interact with both acoustic and optical modes, and either absorb and gain the energy of a phonon, or lose energy and create a phonon when traveling through the lattice.
In the case of SiO2, the energies of the vibrational modes of phonons range from a few meV for acoustic phonons, up to 0.15 eV for longitudinal optical phonons. Under the band gap energy, the emission of optical phonons will thus be the main energy loss for electrons, and must be modeled with electron-phonon interaction models to get accurate TEEYs. However, given that electrons only lose a small amount of energy when emitting a phonon, the escape probability of low energy secondaries is greater than in metals and semiconductors, which leads to insulators having a greater TEEY. The interactions of electrons with acoustic phonons cause a negligible loss of energy, but the scattering angle associated with the emission or absorption of acoustic phonons can be very large, and is quasi isotropic. Hence, for low energy electrons in insulators, acoustic phonons play a similar role to the elastic interaction, and are responsible of the random walk motion of electrons. As mentioned previously, the modeling of elastic scattering with the PWA method becomes invalid below 100 eV in insulators, which is why an acoustic phonon scattering model is often used in place in Monte-Carlo simulations of electron transport in dielectrics, for instance in SiO2 [6] or alkali halides [16].
Drift transport of electrons and holes
Ballistic electrons traveling in an insulator will then lose energy, first through inelastic interactions and second by optical phonon emission until they are thermalized in the conduction band. The electrons then reach a steady state energy on the order of 3/2 kT, which is about 40 meV at 300 K, and enter the drift transport regime. This regime is also followed by the holes created in the valence band during the inelastic interactions. Drift particles are continuously scattered by phonon collisions in random directions, and periodically gain or lose energy by absorbing and emitting phonons. Hence their transport is thermally enabled, and their energy depends on the temperature and the phonon population.
The drift transport of a charge distribution is often characterized by its drift velocity $\vec{v}_D$, which must be differentiated from the thermal velocity $\vec{v}_{th}$ of the charge carriers. Indeed, the thermal velocity is the actual velocity of the charge carrier between two phonon scattering events and in the absence of an electric field, which is linked to its thermal energy. The drift velocity, on the other hand, is a macroscopic quantity, which is computed according to the definition of Figure 2-8. It is defined as the average displacement of the distribution of charge carriers after a given time. When there is no electric field (F) in the material, the distribution spreads in every direction, since the charge carriers are deviated in a random direction after each scattering event with phonons. Hence, the drift velocity is equal to zero in this situation, since the average displacement of the distribution is null, but the individual charge carriers are absolutely not immobile. When an electric field is applied in the material, the charge carriers can be accelerated in between two scattering events, in the direction of the field for holes, or in the opposite direction for electrons. Even if the drift particles are still randomly scattered by phonons, their average displacement now follows the axis of the electric field. Hence, the distribution continues to spread due to thermal agitation, but globally moves in the direction of the electric field. The evolution of the drift velocity depends on the intensity of the electric field, and the transport properties of the charge carriers. The transport we have just described in Figure 2-8 is known as a Gaussian transport, where the charge distribution moves according to the electric field, and the drift velocity is a linear function of the field (i.e. $\vec{v}_D = \mu \vec{F}$). The term µ connecting the velocity to the field is called the mobility, and depends on the effective mass of the carrier and the mean time of flight between two phonon collisions.
In the example of electrons drifting in SiO2, their mobility is 20 cm²/V/s at room temperature, and the drift velocity scales linearly with the electric field up to 0.5 MV/cm [17,18]. Indeed, even if the field increases the energy of electrons between two collisions, the emission of optical phonons prevents the electrons from retaining this energy. However, if the field increases past 0.5 MV/cm, the energy losses by optical phonon emission are no longer sufficient, and the average energy of electrons increases. At this point, known as optical runaway, the electrons are also mostly traveling along the axis of the electric field, and the spatial dispersion of the trajectories is severely reduced [17]. The acoustic phonon collisions are now the mechanism that prevents the average energy of electrons from increasing, until acoustic phonon runaway happens at 3-4 MV/cm [19]. For even higher electric fields of 8 MV/cm, the electrons have now acquired enough energy to be able to generate secondary electrons through impact ionization [20], until dielectric breakdown of the material occurs past 10 MV/cm.
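In the linear (Gaussian) regime described above, the drift velocity simply follows $v_D = \mu F$. A minimal sketch is given below with the room-temperature electron mobility of SiO2 quoted in the text; it is only meaningful below the 0.5 MV/cm optical runaway threshold and does not model the saturation regimes.

```cpp
#include <iostream>

// Gaussian drift regime of electrons in SiO2: v_D = mu * F, valid below ~0.5 MV/cm.
// The mobility (20 cm^2/V/s) is the room-temperature value given in the text.
int main()
{
    const double mu = 20.0;                 // electron mobility in SiO2 [cm^2/V/s]
    for (double F_MVcm : {0.01, 0.1, 0.5})  // electric field [MV/cm]
    {
        double F  = F_MVcm * 1.0e6;         // field in [V/cm]
        double vD = mu * F;                 // drift velocity [cm/s]
        std::cout << "F = " << F_MVcm << " MV/cm -> v_D = " << vD << " cm/s\n";
    }
    return 0;
}
```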
The transport of holes in SiO2, on the other hand, is very different from that of electrons and cannot be modeled by Gaussian transport. Indeed, the transport of holes is instead dispersive [21,22]. This results in a transport that is strongly decorrelated from the electric field, with a mobility of only 10⁻⁵ cm²/V/s, and can be modeled with the Continuous Time Random Walk Theory [23,24] instead. In other materials it is possible that the transport of electrons is dispersive instead, or that both charge carrier transports are Gaussian or dispersive. The transport properties of the charge carriers are largely dependent on the material. This is largely due to the trapping mechanisms we will detail in the following subsection. Indeed, in some insulators, the drift particles are very mobile and can be evacuated easily. In other materials with traps that have a high density and/or a large depth, the transport of charges can be severely limited instead.
Trapping of charge carriers
Several types of traps exist in insulators, which introduce energy levels in the band gap. These traps, which are potential wells, can be due to the imperfections and impurities of the material, or the disorder in amorphous materials. Traps are able to immobilize electrons, holes, or both, for a given period. The traps can be separated into shallow and deep traps, which differ in the mechanisms involved to immobilize a particle. Traps are generally modeled in a macroscopic manner by a capture cross section $\sigma$, which gives information on the attractiveness of the trap, and a trap density $N$. These can be combined into a capture mean free path $\lambda = 1/(\sigma N)$.
Capture in shallow traps and polaronic transport
We can identify two different sources of shallow traps, which are both intrinsic to the material and not due to the presence of impurities. First, in polycrystalline and amorphous materials, the disorder and ruptures in the atomic bonds generate local modifications of the coulombian potential, which can immobilize electrons and holes due to the perturbation of the wave function of the charge carrier. This results in the generation of a continuum of localized states below the conduction band and above the valence band, which are known as Anderson states [25].
Second, depending on the electron-phonon or hole-phonon coupling strength in the material, a charge carrier may become self-trapped by forming a small polaron [26]. A polaron is a quasiparticle, made of an electron or a hole, and the strain field that surrounds the drift charge carrier, which can be interpreted as phonons surrounding the moving electron/hole. The effective mass of the carrier increases because the electron/hole is effectively dragging the lattice atoms along its path. This phenomenon is stronger in ionic crystals, due to the strong coulombian interaction between the charge carriers and the ions of the lattice, and much weaker in covalent crystals made of neutral atoms. The intensity of the electron-phonon coupling can be evaluated by the following coupling constant [10]:
$\frac{1}{2}\alpha = \frac{C}{\hbar\omega_{LO}}$    Equation 2-11

Where $C$ is a deformation energy, and $\hbar\omega_{LO}$ is the longitudinal optical phonon energy. In essence, $\frac{1}{2}\alpha$ can be interpreted as the number of phonons surrounding the electron [10].
Depending on the strength of the coupling, electrons and holes may create either a small or a large polaron. A large polaron is simply a mobile charge carrier that has an increased effective mass, whereas a small polaron can generate a deformation of the ions of the lattice and get self-localized in between the ions, in the potential well generated by this deformation. Due to the degenerate valence band edge, the holes are more likely than electrons to become self-immobilized by forming a small polaron [10]. This is why holes in SiO2 are much less mobile than electrons, because contrary to electrons they immediately become a small polaron after creation [22]. The hole polarons can get released from their trapping site by the thermal agitation, which means that their immobilization time in polaron traps is very short (10⁻¹² s). However, the density of such traps is very large, on the order of the atomic density (10²² cm⁻³), since the polaron may self-localize in any interatomic site. If the temperature is not high enough to activate this release, the polaron can instead tunnel between trapping sites, depending on the distance and difference in energy between the traps [27,28]. The density of localized states is also very large in amorphous materials (10²⁰-10²¹ cm⁻³) due to the large amount of disorder. Both localized states and polaron sites are very close to the conduction band or valence band edges, hence the depth of these traps ranges from a few hundredths to a few tenths of eV. Hence, the trapped charges only need a very small amount of energy to escape the trap. This energy can be provided by thermal excitation or by an electric field. However, the mean free path between two trapping events is low, due to the high trap density. Therefore, the charge carriers may move between traps through two transport mechanisms. They can either hop between traps by detrapping into the valence or conduction band and drifting until they are captured by another trap, or directly tunnel from one trap into another [28].
Capture in deep traps
Many types of defects and impurities are introduced in the fabrication process. These create energy levels in the middle of the band gap of the material, in which electrons or holes can fall, as shown in Figure 2-10. These traps are located several eV below the conduction band or valence band edge, and are an extrinsic property of the material. The nature and concentration of the deep traps are highly variable. For instance, in silicon dioxide [29], electron and hole traps can be generated by oxygen vacancies, sodium growth in SiO2, H2O molecules incorporated in the dielectric film, or other implanted ions. The SiO2/Si interface is also a highly disordered region with many broken bonds and implanted ions, such as W or Na⁺ ions. This results in a strong density of traps in this area. The capture cross sections of all these traps are highly variable [29], depending on whether the trap is Coulomb-attractive (10⁻¹³-10⁻¹⁵ cm²), neutral (10⁻¹⁵-10⁻¹⁸ cm²) or Coulomb-repulsive (< 10⁻¹⁸ cm²). This attractiveness of the trap depends on its natural charge state. For instance, some hole traps in SiO2 are initially neutral [29], and become positively charged when a hole is captured.
The potential wells created in the band gap by the deep traps have a depth of about 1 to 4 eV. However, their density is much lower than for shallow traps, ranging from 10¹⁴ to 10¹⁸ cm⁻³. Finally, the concentration of the traps is also dependent on the fabrication process. For instance, the water-related trap concentration in SiO2 can vary from 10¹⁵ cm⁻³ for a dry oxide to 10¹⁹ cm⁻³ for a wet oxide [30]. Deep traps are thus able to immobilize particles for a very long time, and the trapped charge carriers may only escape with the help of an increased temperature, or locally high electric fields. For some traps, the capture cross section has been observed to be field dependent. For instance, a reduction of the capture cross section of Coulomb-attractive electron traps was observed in SiO2 in fields greater than 0.5 MV/cm by Ning [31]. This was attributed to the increased energy of the electrons in the field, which makes them less attracted by the traps.
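To give an idea of the orders of magnitude involved, the following sketch combines the capture cross sections and trap densities quoted above into capture mean free paths, using the relation $\lambda = 1/(\sigma N)$ introduced earlier; the specific values are only representative of the ranges given in the text, not of a particular material.

```cpp
#include <iostream>

// Capture mean free path of a drift charge carrier, lambda = 1/(sigma*N).
double captureMeanFreePath_cm(double sigma_cm2, double density_cm3)
{
    return 1.0 / (sigma_cm2 * density_cm3);
}

int main()
{
    // Shallow (polaron-like) traps: huge density (~1e22 cm^-3) -> very short mean free path.
    std::cout << "Shallow traps:        lambda = "
              << captureMeanFreePath_cm(1.0e-15, 1.0e22) * 1.0e7 << " nm\n";
    // Deep Coulomb-attractive traps: large cross section but much lower density.
    std::cout << "Deep attractive traps: lambda = "
              << captureMeanFreePath_cm(1.0e-13, 1.0e16) * 1.0e4 << " um\n";
    return 0;
}
```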
Detrapping of charge carriers
The charge carriers immobilized in a trap can be detrapped under the effect of thermal agitation.
The escape frequency 𝑊(𝐸 𝑖 ) for a trap level of depth 𝐸 𝑖 is commonly modeled by an Arrhenius law in the form of:
$W(E_i) = W_0 \exp\left(-\frac{E_i}{k_B T}\right)$    Equation 2-12
where the frequency factor $W_0$ describes the intrinsic mobility of the charge carrier in the trap. For shallow traps, thermal excitation is enough to allow the charge carriers to escape and hop between traps, but this is not the case for deep traps. Nevertheless, other detrapping enhancements, given in Figure 2-11, may increase the probability of the charge carrier escaping a deeper trap. The potential barrier of the trap can first be lowered by an electric field $F$ due to the Poole-Frenkel (PF) effect [32,33], which is also commonly found in charge transport models for semiconductors. The lowering of the barrier reduces the effective depth of the trap, which increases the probability of escape by thermal excitation. This lowering can be quantified by the PF lowering factor, given by
$\Delta E_i = \sqrt{\frac{e^3 F}{\pi \epsilon_0 \epsilon_r}}$    Equation 2-13
From this expression, we can see that the potential barrier of the trap progressively lowers when the electric field increases, following a $\sqrt{F}$ law. It is also possible to take into account the effect of Poole-Frenkel lowering on the effective mobility, by introducing a $\sqrt{F}$ dependency and a lowering factor $\beta$ that can be determined experimentally [34]. It is possible, however, that the lowering factor deviates slightly from the Poole-Frenkel theory.
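The combined effect of Equations 2-12 and 2-13 can be illustrated with the following sketch, which computes the escape frequency of a trapped carrier with and without Poole-Frenkel lowering; the attempt frequency, trap depth and relative permittivity are illustrative values, not the fitted parameters used later in this work.

```cpp
#include <cmath>
#include <iostream>

// Thermal detrapping frequency with Poole-Frenkel barrier lowering
// (Equations 2-12 and 2-13): W = W0 * exp(-(E_i - dE_PF)/(k_B*T)),
// dE_PF = sqrt(e^3 * F / (pi * eps0 * eps_r)).
int main()
{
    const double pi   = 3.141592653589793;
    const double kB   = 8.617333e-5;   // Boltzmann constant [eV/K]
    const double e    = 1.602177e-19;  // elementary charge [C]
    const double eps0 = 8.854188e-12;  // vacuum permittivity [F/m]
    const double epsR = 3.9;           // relative permittivity (SiO2-like, illustrative)
    const double W0   = 1.0e13;        // attempt frequency [1/s] (illustrative)
    const double Ei   = 1.0;           // trap depth [eV] (illustrative deep trap)
    const double T    = 300.0;         // temperature [K]

    for (double F_MVcm : {0.0, 0.5, 2.0})
    {
        double F  = F_MVcm * 1.0e8;                                      // field [V/m]
        double dE = std::sqrt(e * e * e * F / (pi * eps0 * epsR)) / e;   // PF lowering [eV]
        double W  = W0 * std::exp(-(Ei - dE) / (kB * T));                // escape frequency [1/s]
        std::cout << "F = " << F_MVcm << " MV/cm : dE_PF = " << dE
                  << " eV, W = " << W << " 1/s\n";
    }
    return 0;
}
```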
Detrapping can also be enhanced by the Phonon-Assisted Tunnelling (PAT) effect. This phenomenon is made of two steps. First, the trapped charge carrier can absorb a phonon and get excited to a higher level in the trap, but still without escaping the potential well. However, the charge is now faced with a shallower trap barrier, so the probability of tunneling through this barrier is higher. Straight tunneling of the charge carrier through the potential well can only happen in shallow traps, unless the electric field is very strong (several MV/cm). This is why the charge carrier first needs to absorb a phonon, in order to increase its escape probability. The models used in this work to describe the Poole-Frenkel and Phonon-Assisted Tunnelling enhancements are based on the work of Lemière et al. [35], and will be presented in more detail in Chapter 5.
Recombination mechanisms of electron-hole pairs
When a trap captures a charge carrier, the global charge of the trap is modified, and it can thus become more attractive to a particle of the opposite sign. For instance, the neutral hole traps become positively charged when they have captured a hole, which makes them strongly attractive for electrons. If a hole or an electron falls into a trap which is already occupied by the opposite particle, the two particles recombine and disappear, and the trap is freed. This is known as trap assisted recombination. As the material is irradiated, holes or electrons fill more and more traps, and the drift carriers have a higher probability of recombining in a trapping site. Hence, the recombination of electron-hole pairs reduces the quantity of free charges in dielectrics, and the trap assisted recombination we have just described must be modeled in our simulation. Moreover, since the density of trapped charges is reduced, the electric field generated by this density is also decreased, which lowers the detrapping enhancements and the energy gained by the charges in the field.
Figure 2-12: Recombination mechanisms in insulators -trap assisted recombination and geminate recombination
There is actually another type of recombination that can happen between a free electron and a free hole, which is known as geminate recombination. In this case, the free electron generated in an inelastic interaction has spontaneously been recaptured by the hole left in place. Onsager [36] has shown that the generation of an electron-hole pair was conditioned by the electric field and the temperature. This was also noted by Hughes, who observed an increase of the electron-hole pair yield with the electric field, starting from 0.1 MV/cm [18]. Indeed, the probability for the electron to escape geminate recombination can be expressed as [34,37]:
$P(T, F) = P_0 \exp\left(\frac{r_c}{r_0}\,\frac{e^{-\xi} - 1}{\xi}\right), \qquad r_c = \frac{e^2}{4\pi \epsilon_r \epsilon_0 k_B T}, \qquad \xi = \frac{e F r_0}{k_B T}$    Equation 2-14
In this equation, 𝑟 𝑐 is equivalent to the capture radius that the electron must overcome to avoid recombination, and 𝑟 0 is the thermalization radius, which is the separation distance reached by the electron at the start of the diffusion. If this distance is too low compared to the capture radius, the electron cannot escape the attraction of the hole, and the pair recombine immediately. For instance, an electron in SiO2 at room temperature must be separated by at least 10 nm from its hole, in order for the two particles to diffuse apart instead of recombining immediately [18].
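For illustration only, since this mechanism is ultimately not retained in our simulation (see below), the following sketch evaluates the escape probability of Equation 2-14 as a function of the electric field; the native yield $P_0$ and the thermalization radius $r_0$ are the free parameters discussed below, and the values used here are arbitrary.

```cpp
#include <cmath>
#include <iostream>

// Escape probability of geminate recombination (Equation 2-14, Onsager-type model):
// P(T,F) = P0 * exp( (r_c/r_0) * (exp(-xi) - 1) / xi ),
// r_c = e^2/(4*pi*eps_r*eps0*k_B*T), xi = e*F*r_0/(k_B*T).
int main()
{
    const double pi   = 3.141592653589793;
    const double e    = 1.602177e-19;   // elementary charge [C]
    const double eps0 = 8.854188e-12;   // vacuum permittivity [F/m]
    const double epsR = 3.9;            // SiO2-like relative permittivity
    const double kBT  = 0.02585 * e;    // thermal energy at 300 K [J]
    const double P0   = 1.0;            // native yield (illustrative)
    const double r0   = 10.0e-9;        // thermalization radius [m] (illustrative)

    double rc = e * e / (4.0 * pi * epsR * eps0 * kBT);   // capture radius [m], ~14 nm here
    for (double F_MVcm : {0.01, 0.1, 1.0})
    {
        double F  = F_MVcm * 1.0e8;                       // field [V/m]
        double xi = e * F * r0 / kBT;
        double P  = P0 * std::exp((rc / r0) * (std::exp(-xi) - 1.0) / xi);
        std::cout << "F = " << F_MVcm << " MV/cm : P_escape = " << P << "\n";
    }
    return 0;
}
```

This toy calculation reproduces the trend noted above: the escape probability (and thus the electron-hole pair yield) increases with the applied field.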
A major issue for this model however, is that there is no expression for the thermalization radius 𝑟 0 or the native yield 𝑃 0 , which must be determined arbitrarily or by fitting with experimental data. Given the importance of these parameters in the production yield, this can be a strong source of uncertainties in the simulation. Hence, we have chosen not to include geminate recombination in our simulation, and we will assume that the electron can always escape its parent atom. As we will show in Chapter 6, this approximation still allows us to correctly model the effects of internal charging on the TEEY, so it is possible that the effect of geminate recombination is weaker than other physical processes, and does not significantly influence the TEEY. It is also possible that the electric field reached in the sample during TEEY measurements is not strong enough to create a significant rise of the production yield. Indeed, geminate recombination must be taken into account in other applications, such as the modeling of radiation induced conductivity in polymers [34,38]. These materials can charge up to several kV, whereas the thin film dielectric samples used in this study do not charge past a couple of volts.
Other data on the transport of electrons accessible through Monte-Carlo codes
Since Monte-Carlo codes model the whole transportation of electrons through the material, several interesting quantities can be extracted from the code to quantify this transport. For instance, we can get information on the number and type of interactions, the energy and location of the secondary electrons generated, the charge distribution in the material… In this section, we focus on the characteristic parameters of the transport of electrons inside of the material, which will influence the TEEY.
Penetration depth and transmission rate
When electrons penetrate in a material, they lose energy by inelastic interactions with electrons of the target atoms, and are slowed down by doing successive interactions. Electrons can also be highly scattered by the target atoms' nuclei. As a result, each particle has an individual trajectory, as seen in Figure 2-14 (i) and Figure 2-13 [39].

Figure 2-14 (iii): the transmission rate and range are deduced from the distribution of depths.
The true range $S$ of a given trajectory is the total distance traveled until the electron comes to rest, i.e. the sum of all step lengths $S_i$ traveled by the particle between each interaction, as illustrated in Figure 2-13. This parameter can be sampled for a large number of electrons to get the average true range $\bar{S}$, which is an indication of the total distance traveled by the electrons on average. Within the Continuous-Slowing-Down Approximation (CSDA), $\bar{S}$ can be evaluated thanks to the following integral:
$\bar{S} = \int_0^E \left(\frac{dQ}{dx}\right)^{-1} dQ$    Equation 2-15
This is the integral of the reciprocal of the stopping power (dQ/dx) over energy, from the final value (electron at rest) up to the initial energy: it corresponds to the parameter $\bar{S}$ from the definition of Figure 2-13. The CSDA range is the total distance that is effectively travelled by electrons. However, $\bar{S}$ does not give information on the final position of the particles or in which direction they have traveled. Indeed, the depth reached by the electrons depends, in addition to the slowing down induced by inelastic scattering, on the deviations generated by the inelastic and elastic interactions. In this regard, $\bar{S}$ can be interpreted as a penetration depth that can only be reached by a theoretical electron with a strictly linear trajectory. As this is never the case for electrons, which are deflected by elastic interactions, this parameter is an unreachable limit for the actual range R. Moreover, $\bar{S}$ is not accessible experimentally as the electrons do not behave in a deterministic way but follow statistical laws for each step.
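A minimal numerical evaluation of Equation 2-15 is sketched below, using the trapezoidal rule on a purely hypothetical stopping power table; in practice the stopping power comes from the inelastic models of the Monte-Carlo code.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Numerical evaluation of the CSDA range (Equation 2-15): integral over energy of the
// reciprocal of the stopping power. The table below is a smooth, hypothetical example.
int main()
{
    // Hypothetical table: energy [eV] and stopping power dQ/dx [eV/nm].
    std::vector<double> E  = { 10.0,  50.0, 100.0, 500.0, 1000.0, 2000.0 };
    std::vector<double> SP = {  2.0,   6.0,   8.0,   6.0,    4.5,    3.0 };

    double range = 0.0;   // accumulated CSDA range [nm], counted from the lowest tabulated energy
    for (std::size_t i = 1; i < E.size(); ++i)
    {
        // Trapezoidal rule on 1/(dQ/dx)
        double dE   = E[i] - E[i - 1];
        double mean = 0.5 * (1.0 / SP[i] + 1.0 / SP[i - 1]);
        range += mean * dE;
        std::cout << "CSDA range up to " << E[i] << " eV : " << range << " nm\n";
    }
    return 0;
}
```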
In consequence, for many applications, the paths of the particles are expressed as a projection on the incident direction of the impinging particle, generally in the depth of the material. Thus, a more convenient method is to sample the final positions of electrons along this direction. This can be done following the method on Figure 2-13 to get the projected range 𝑅 for a given trajectory. For instance, sampling the distribution of the final depths 𝑅 reached by a large number of electron trajectories, as done on Figure 2-14 (ii), allows us to compute its average 𝑅 ̅ , which corresponds to the average of the depths reached by the electrons in a semi-infinite material.
In Chapter 4, we will study the penetration depth of an electron, which will be defined as the final depth reached by the electron when it comes at rest. The transmission rate through a thickness 𝑑 is thus defined as the proportion of electrons with a final position deeper than 𝑑. For electrons, that are highly scattered, two other parameters can be found in the literature to describe their trajectories. The extrapolated range and the practical range are commonly evaluated.
The extrapolated range (R0) is commonly defined following the method shown on Figure 2-14 (iii), taken as the point of intersection between the tangent at the steepest section of the transmission probability curve (P=0.5) and the depth axis (X-axis) [40,41]. In the following, the range value given by this point will be called 𝑅 0 . Similarly, the practical range can be obtained from the depth-dose profile in place of the transmission curve. Both parameters are evaluated in slightly different ways, but have often been used interchangeably [4] as they remain similar. They differ in their definition from a simple average over the penetration depths from each individual trajectory, but are more representative of the penetration distance of individual electrons. Hence they are commonly used to define the shielding thickness necessary to protect an equipment from radiation.
In this manuscript, we will refer to the extrapolated range using its definition from the transmission rate. At high energy (E > 1 keV), the extrapolated ranges of electrons $R_0(E)$ in cm are demonstrated to be inversely proportional to the density $\rho$ (g/cm³). For this reason, the range in cm is commonly multiplied by the density to get a value in g/cm² as $r(E) = R_0(E) \cdot \rho$, which is then independent of the nature of the target material above a few keV. In Chapter 4, where we will study the extrapolated range of electrons, we will only use the range $r(E)$ in g/cm².
Ionizing dose
We have seen that when electrons penetrate in a solid, they transfer a part of their energy through inelastic interactions and the creation of electron-hole pairs. The ionizing dose is defined as the sum of energies deposited during ionization by an incident particle, per unit of mass, which is thus the average energy transferred to the electrons of the medium as a result of inelastic electron/electron interactions. The number of secondary electrons set into motion in the medium is thus proportional to the ionizing dose. However, a part of this energy deposited in the material is dissipated in the form of backscattered electrons and secondary electrons escaping into vacuum. The remaining energy corresponds to the incident and secondary electrons that are captured by the material.

Some examples of ionizing dose-depth profiles are given for low energy electrons in Al in Figure 2-15. When the electron energy increases from 25 eV to 100 eV, the dose deposited close to the surface increases, and so does the range of the electron, which is given by the end of the dose-depth curve. The dose deposited near the surface is directly linked to the TEEY, since the secondary electrons that are able to escape are the ones that were created near the surface. Hence, the TEEY at 100 eV is greater than at 25 eV. When the electron energy increases to 500 eV, more dose is deposited in total in the material, but the dose starts to be deposited much deeper. As we can see in the case of 2 keV electrons, the total dose is greater than for all other electron energies, due to the large number of inelastic interactions. However, the range of the electron is also much larger, and most of this dose is deposited in the depth of the material. The dose deposited near the surface is lower at 2 keV than at 500 eV and 100 eV, and so is the TEEY.
Conclusion of Chapter 2
In this chapter, we have detailed the numerous electron-matter interactions we need to model, in order to simulate the transport of low energy electrons in dielectrics. First, we have seen that the transport of electrons and the electron emission properties of all material types are the result of a competition between several interactions: the elastic interactions, which can strongly scatter the electrons, and the inelastic interactions, where primary electrons lose energy and secondary electrons are generated. The sources of secondary electrons are quite varied: core shell electrons, weakly bound electrons in the conduction or valence band, Auger relaxation processes, or plasmon excitation and decay. The surface of the material has also been shown to be a strongly limiting factor for the emission of very low energy electrons into vacuum.
However, to model the TEEY and the charge buildup of insulators, additional interactions must be considered. Electron interactions with optical phonons are the main source of energy loss below the band gap, and acoustic phonon interactions must be taken into account to correctly describe the random walk motion of electrons below 100 eV in insulators. The charge buildup cannot be correctly modeled without simulating the transport of thermalized electrons, implanted in the material, and holes created during inelastic interactions. This drift transport, enabled by thermal agitation and the electric field in the material, can be interrupted by several trapping processes. Indeed, polarons, localized states and deep level traps may capture a charge carrier and immobilize it for a given period. However, under the effect of thermal excitation, a charge carrier can exit a shallow trap, and move in the material by hopping between traps. For deep traps however, the escape of the particle must be aided by the electric field, with the Poole-Frenkel and Phonon-Assisted Tunnelling enhancements. Finally, as the density of trapped charges increases, mobile holes and electrons can be captured by a trap filled by an oppositely charged particle, which becomes more attractive. Charge carriers may thus recombine, through trap-assisted recombination of electron-hole pairs.
Since the elastic, inelastic and surface processes are common to all material types, we can start designing our Monte-Carlo simulation by integrating these models, and simulating the transport of low energy electrons in metals and semiconductors. Indeed, given the lack of publicly available Monte-Carlo transport models for low energy electrons, developing our own tool is mandatory and will allow us to study the secondary electron emission of metals and semiconductors. As we have seen in section 2.4, such a Monte-Carlo code can also yield additional information on the transport of electrons, like extrapolated ranges and ionizing doses, for which experimental data in the literature is lacking below 1 keV. The development of this transport model will be detailed in Chapter 3. While this model cannot simulate the effects of charging on the TEEY of insulators since it does not consider the charge buildup, such a simulation can still be used as a base and extended to the simulation of the TEEY of insulators. The transport model for insulators including the drift transport, trapping, detrapping and recombination processes will be developed in Chapter 5.
Chapter 3: Development of a Monte-Carlo low energy electron transport model to simulate the secondary electron emission of metals and semiconductors
Presentation of MicroElec and Geant4
In this section, we shall present the models used to simulate the transport of electrons in metals and semi-conductors. In this work, we have chosen to develop our Monte-Carlo model using Geant4 [1][2][3], a free open-source toolkit coded in C++. Geant4 is made of many different packages that allow the simulation of the transport of various types of radiation (electrons, protons, photons, neutron…) through matter. The main advantage of Geant4 is its flexibility: the user can freely design their geometry, and choose which physical models should be used for each particle type. Multiple models can be chosen for a given particle, which can be specific to an energy domain, a material or a region of the geometry.
To enable such a modular structure, the architecture of Geant4 relies on an object oriented approach. Several well-defined classes need to be constructed by the user, which will indicate to the Geant4 kernel the key data of the simulation, such as the geometry or particles. In the scope of this flexibility, 'Messenger' objects can be attributed to almost any class of the Geant4 application. Through text files containing macro commands, the user can modify the parameters of the simulation outside of the code. At the start of the simulation, the macro files will then be read, and the data will be imported in the classes by their messengers. Some of the key classes of Geant4 we have used in this work are listed below.
The geometry is created by the user in the class DetectorConstruction. Here, several geometrical shapes can be used to construct complex 3D structures. The user also creates the sensitive detectors that will count the particles going through them and retrieve their data from the simulation. For instance, we can create a spherical detector around a cubic material sample that can count the electrons emitted by the sample and record their energy. This allows the simulation to reproduce the data obtained from the spherical electron collector found in experimental TEEY measurement facilities. The user can also separate the geometry in different regions, where different physical models can be activated depending on the region.
The particle source is defined in the class PrimaryGeneratorAction. The user can pick different source types, such as a G4ParticleGun or a GeneralParticleSource, and set the properties of the incident particles (single energy or energy spectrum, type…) and of the source (angle of emission, position…).
A Sensitive Detector class must be created separately to properly record the information when particles go through the detector created in the geometry. In practice, we attribute to a given shape the sensitive detector object, so that Geant4 knows when to call the detector's functions and record information. In the sensitive detector class, we can define the data to be recorded and save it in a hit collection.
The data can be extracted from the simulation with Geant4 AnalysisManager. It is first created at the beginning of the simulation, where the user can set the type of file to export the data into (ROOT, CSV…). The analysis manager can then be called at various steps of the simulation to save data such as strings, doubles, or vectors.
The run is the fundamental unit of the Geant4 simulation. The user can choose to simulate as many runs as they want. During a run, several events are simulated depending on the number of incident particles. We can choose to have only one event per incident particle, or group multiple incident particles in a single event. An individual particle is followed in a track until it is killed. The secondary particles generated along the track are put in a stack by order of creation. When the primary particle is killed, the secondaries of the stack are then transported one by one until they are killed. If the secondary particle creates tertiary particles, they are added on top of the stack and are transported after the secondary particle is killed, and before the other secondaries. Finally, the smallest simulation unit is a step. A step begins right after an interaction, and ends when another interaction happens. However, Geant4 can also force a step to end when the particle is crossing a geometry boundary. Finally, in Geant4, the runs, events, and tracks of a simulation are completely independent.
User classes are provided to allow the user to perform various action on the simulation units we have just presented. These are the RunAction, EventAction, TrackingAction, StackingAction and SteppingAction classes. For example, we can use the SteppingAction while debugging to get information on the particles after each interaction, or we can retrieve the values of the electron counters at the end of a run in RunAction and save the TEEY in a file with the AnalysisManager.
The Physics List is where the user will choose which models to use in the simulation. For each simulated particle, they can define a model to be used in a specific region of the geometry, over a given energy domain. For instance, we can choose to use a very precise model for low energy electrons and switch to a less precise but faster model above a few tens of keV. We can also choose to only model the full electron cascade generated by an incident proton in a section of the geometry made of silicon, and just compute the energy lost by the proton with a condensed history model in another section made of tungsten.
In practice, the physics list is filled by Processes, for instance the inelastic interaction of electrons is a process. Depending on the process, different interaction Models can be set to a single process, to recreate the examples we have just mentioned. The models handle the computation of the interaction cross sections, which are fed to the Geant4 kernel for the Monte-Carlo random draw of the physical interaction length. If the process is selected, the model class will then compute what happens during the interaction, such as the amount of energy lost or the angular deviation. For some processes, no models can be set and the process class itself handles the computation of cross sections and pre/post step actions. The processes we will use for the simulation of secondary electron emission are only Post-Step processes, which perform an action at the end of a step. Some other Geant4 processes can modify the particle in between two interactions, and are called Along-Step processes.
The electric, magnetic, or electromagnetic field is defined in DetectorConstruction. Two classes need to be created separately to handle the field. The FieldSetup object contains the parameters for the resolution of the field equation and the interpolation of the trajectory, which is mostly done with Runge-Kutta methods. The value of the field at a certain position and time is given by the object Field. It can be tweaked to have a uniform field or a field that can vary in space and time. This object is first called by the transportation manager of Geant4 to get the value of the field at several points. Then, the transportation manager integrates the trajectory during the step and computes the modification in energy and direction.
The models found in Geant4 can cover a very wide range of energies, up to several GeV, for various applications. For instance, ion track structures simulated in Geant4 have been used for the study of Single-Event Effects in electronic components [4,5]. We can also cite GRAS, a Geant4 based simulation tool for the study of space environment effects [6].
However, some studies have underlined the limits of the Geant4 ionization models for very integrated technologies applications [4,7,8], due to the recommended production threshold of 250 eV for secondary electrons. The MicroElec project is one of the physics model packages found in Geant4. It is developed by CEA DAM in collaboration with ONERA. As a Geant4 extension for incident electrons, protons and heavy ions in silicon, MicroElec aims to implement lower energy ionization models in Geant4 to improve track structures simulation. MicroElec has been used for microdosimetry and Single-Event Effects applications [9,10] and is based on the already existing framework of the Geant4-DNA extension [11,12].
Geant4 offers a wide selection of physical models. For electromagnetic physics (EM), there are standard EM processes, above 1 keV, and low energy EM processes, valid down to ~250 eV for electrons and gamma rays. In Geant4, we can distinguish for electrons, four different electromagnetic processes applicable in different energy ranges:
- G4eIonization (STANDARD)
- G4LivermoreIonisationModel (LIVERMORE)
- G4PenelopeIonisationModel (PENELOPE)

G4eIonization (STANDARD) physics is a continuous process valid above 1 keV. Similarly, according to the Geant4 documentation, the Livermore continuous ionization model can be used down to a few tens of eV. The PENELOPE (PENetration and Energy LOss of Positrons and Electrons) model is also a continuous ionization process which is valid down to ~50 eV [13]. These continuous inelastic processes must be combined with some elastic interaction models to account for the scattering of particles on nuclei. These latter processes can be a discrete (Single Scattering: SS) or a Multiple SCattering (Msc) process.
Multiple scattering is mainly used for high energy particles. In this approach, the deviations and energy losses of the incident particle's trajectory caused by many individual interactions are condensed into one single mean trajectory after one single interaction (hence the name "Condensed history"). On the other hand, discrete processes simulate each single interaction and compute the energy losses and direction changes step by step. This approach is obviously much slower than multiple scattering, but is also much more precise. This is especially true when low energy particles need to be simulated, as the distances between two interactions can be very small and the deviation can be very significant. Hence, the multiple scattering approach can be too approximate in this case. According to the nature and the energy of the incident particles, Geant4 provides some scenarios combining inelastic and elastic processes. For instance, the G4EmStandardPhysics_option4 uses the PENELOPE continuous ionization process with the multiple scattering. The accuracy is increased by replacing the multiple scattering by the single scattering process for large deviation angles. Similar options are proposed based on the G4LivermoreIonisationModel. The best accuracy can thus be reached by using discrete processes for both ionization and elastic scattering, as provided by MicroElec.
The first version of the module (Geant4.9.6) was presented in [14,15] and validated for electrons of 16.7 eV up to 50 keV and incident protons and ions of 50 keV/nucleon to 23 MeV/nucleon in silicon only. Later improvements (Geant4.10.0) [16] have been added to extend the high-energy range of the models up to 100 MeV for electrons and 10 GeV/nucleon for protons and ions, with the inclusion of relativistic corrections. This version of MicroElec, which has been designed for microdosimetry simulation in silicon, is the version that was publicly available in Geant4 at the start of this PhD thesis. On the other hand, other improvements of MicroElec were made during two previous PhD theses at ONERA, but were not released in Geant4. First, during the thesis of J. Pierron [17][18][19][20][21], MicroElec and the OSMOSEE code were extended and validated for the simulation of the secondary emission of aluminum, silver and silicon. In this regard, improvements were brought to the modelling of the elastic and inelastic interactions, and a model for the crossing of the surface was also added for material/vacuum interfaces. In later work during the thesis of P. Caron [22], the treatment of the dielectric function for the computation of the inelastic cross sections was modified, to improve the accuracy of the stopping powers of electrons below 1 keV, and protons/ions below 100 keV/nucleon. He also developed computer programs for the fitting procedure of the energy loss function and the computation of the inelastic cross sections, which we have used in this work.
First, these improvements to the interaction models of MicroElec for very low energy particles will be presented. An extensive description of the elastic and inelastic interaction models can be found in the PhD works of J. Pierron [17][18][19][20][21] and P. Caron [9,22], hence we shall only give here a brief description of these models. This section will rather focus on the additional improvements that were brought during this PhD thesis, that is to say the development of acoustic phonon and optical phonon interaction models for insulators, and the extension of the surface crossing model to material/material interfaces, allowing MicroElec to be used with multilayered materials.
We have also computed the inelastic cross sections for 16 materials (Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag, W, Au, SiO2, Al2O3, Kapton, BN). The new interaction models allow the Geant4 extension to be used for the tracking of electrons, protons and heavy ions in several new materials compared to its last publicly available version, for microdosimetry and secondary electron emission applications. We will then present some simulation results of the TEEY obtained from MicroElec.
3.2 Implementation of interaction models for low energy electrons
Modeling of the elastic interaction
Elastic processes for electrons are handled in MicroElec by the classes G4MicroElecElastic and G4MicroElecElasticModel. In the previous version of MicroElec (Geant4 10.0), the total and differential elastic cross sections for Si in the 50 eV -100 MeV range were extracted from the ICRU database [23]. These cross sections were completed by ab-initio cross sections from Bettega et al. [24] from 16.7 eV to 50 eV. However these calculations were originally made for CS2, and had to be adapted for Si following the approach of Akkerman et al. [25].
In the new version, the elastic cross sections are calculated with the Partial Wave Analysis (PWA) method and the code ELSEPA [26], an approach also followed by the Geant4 DNA team [27].
Since ELSEPA can be used down to 10 eV for compounds and monoatomic materials, this new approach allows us to use a single model on the whole energy range to compute the elastic cross sections for all materials. This improves the consistency of the simulations and avoids the individual adjustments that would otherwise be required for a given material. The elastic cross sections were computed in this work for all elements up to uranium and down to 0.1 eV, covering all corresponding monoatomic materials. Although the model validity is limited to 10 eV, computing the cross sections to lower energies gives us some extrapolation points that can be used to track electrons down to the energy of the surface barrier, which is about a few eV.
The elastic cross sections for compounds can be computed with the ELSEPA module for molecular cross sections, which are then converted to cross sections per atom, or by computing the average of the monoatomic cross sections, weighted by the stoichiometry. These two approaches are compared for SiO2 in Figure 3-1. For energies higher than 1 keV, the difference between the Total Cross Sections (TXS) given by the two approaches is negligible. Below 1 keV however, the silicon atoms seem to have a stronger interaction probability than the oxygen atoms, as the molecular TXS follows the Si TXS. This would indicate that the free-atom approximation used with the weighted average becomes less valid, as low energy electrons may be subject to a stronger coupling with the lattice and/or aggregation effects from the molecule, as suggested in [26]. Due to the wide availability of its material parameters, the molecular cross section approach has been chosen for SiO2. Indeed, the computation of the molecular cross sections requires additional parameters that may not be easily available for some materials, such as the molecular polarizability. The user also needs to enter the exact coordinates of the atoms in the molecule, which can quickly become overwhelming for polymers such as Kapton. For this reason, the mean atom approach was followed for the computation of the elastic cross sections of aluminium oxide, Kapton, and boron nitride. Even though we know that the accuracy of this approach is reduced at low energies, this should be offset by the fact that we will be modelling the elastic interactions in insulators with an acoustic phonon-electron interaction model at 100 eV and below.

Modeling of the inelastic interaction

In MicroElec, the inelastic interactions for incident electrons, protons and heavy ions are handled by the classes G4MicroElecInelastic and G4MicroElecInelasticModel. The cross section (XS) calculations are based on the complex dielectric function theory of Lindhard and Ritchie [28] and the modeling of the energy loss function (ELF), following the work of Akkerman et al. [25] on silicon and the Geant4-DNA package. This approach is widely used to model the radiation transport at low energy, and has also been used to compute inelastic mean free paths by Flores-Mancera et al. [29], de Vera and Garcia-Molina [30], and Montanari et al [31].
The ELF defines the response of a material to an electronic perturbation. It is expressed as
\mathrm{ELF}(\hbar\omega, \vec{q}) = \mathrm{Im}\!\left[-\frac{1}{\varepsilon(\omega, \vec{q})}\right]    (Equation 3-1)

with \varepsilon(\omega, \vec{q}) the complex dielectric function, and \hbar\omega, \hbar\vec{q} the energy and momentum transferred by the primary particle to an electron of the material. In the dielectric function framework, the differential cross section d^2\sigma / d(\hbar\omega)\,d(\hbar q) for an electron of incident energy T can be expressed from the ELF following the equation

\frac{d^2\sigma}{d(\hbar\omega)\,d(\hbar q)} = \frac{1}{N \pi a_0 T}\, \mathrm{Im}\!\left[-\frac{1}{\varepsilon(\hbar\omega, \hbar\vec{q})}\right] \frac{1}{\hbar q}    (Equation 3-2)

Where N is the atomic density, and a_0 = \hbar^2 / (m e^2) is Bohr's radius. The differential cross section in energy can then be obtained by integrating Equation 3-2 over the transferred momentums:

\frac{d\sigma}{d(\hbar\omega)} = \frac{1}{N \pi a_0 T} \int_{q_{min}}^{q_{max}} \mathrm{Im}\!\left[-\frac{1}{\varepsilon(\hbar\omega, \hbar\vec{q})}\right] \frac{d|\vec{q}|}{|\vec{q}|}    (Equation 3-3)

Finally, the total cross section is obtained by integrating the differential cross section over the transferable energies:

\sigma = \frac{1}{N \pi a_0 T} \int_{\hbar\omega_{min}}^{\hbar\omega_{max}} \int_{q_{min}}^{q_{max}} \mathrm{Im}\!\left[-\frac{1}{\varepsilon(\hbar\omega, \hbar\vec{q})}\right] \frac{d|\vec{q}|}{|\vec{q}|}\, d(\hbar\omega)    (Equation 3-4)
Two steps are involved in the Monte-Carlo simulation of the inelastic interaction for a particle of energy E. First, the inelastic mean free path \lambda is computed from the total cross section, using the relation

\lambda(E) = \frac{1}{N\sigma(E)}

with \sigma from Equation 3-4. If an inelastic interaction happens, a random sampling of the differential cross section is done in a second step, to determine the amount of energy transferred. Indeed, as we will now see, some values of energy transfers are more probable than others, depending on the value of the energy loss function.
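To make these two Monte-Carlo steps concrete, the short Python sketch below computes the inelastic mean free path from a total cross section and draws a free-flight distance with the usual exponential sampling s = -λ ln(R). The atomic density and cross section values are placeholders, not MicroElec tables.

import numpy as np

def mean_free_path_cm(sigma_cm2, atomic_density_cm3):
    # lambda(E) = 1 / (N * sigma(E)), as in the relation above
    return 1.0 / (atomic_density_cm3 * sigma_cm2)

def sample_step_length(mfp_cm, rng):
    # Standard exponential free-flight sampling between discrete interactions
    return -mfp_cm * np.log(rng.random())

rng = np.random.default_rng(0)
N_si = 5.0e22                 # atoms/cm^3, order of magnitude for silicon
sigma = 1.0e-16               # cm^2, placeholder total inelastic cross section
lam = mean_free_path_cm(sigma, N_si)
print(lam, sample_step_length(lam, rng))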
At the limit \vec{q} = \vec{0}, the ELF is called the Optical Energy Loss Function, \mathrm{OELF}(\hbar\omega) = \mathrm{Im}[-1/\varepsilon(\omega, \vec{0})]. This quantity is accessible through experimental measurements of either the OELF itself, or the optical indices n and k. Indeed, the OELF can be obtained from these indices by the relation

\mathrm{Im}\!\left[-\frac{1}{\varepsilon(\omega, \vec{0})}\right] = \frac{2nk}{(n^2 + k^2)^2}    (Equation 3-5)
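As an illustration of Equation 3-5, the following sketch converts optical indices n and k into the OELF; the numerical values are arbitrary examples, not measured data from the cited databases.

import numpy as np

def oelf_from_optical_indices(n, k):
    # Im[-1/eps] = 2nk / (n^2 + k^2)^2 (Equation 3-5)
    n = np.asarray(n, dtype=float)
    k = np.asarray(k, dtype=float)
    return 2.0 * n * k / (n**2 + k**2) ** 2

# Hypothetical check: a material with n = 1.5, k = 0.8 at some photon energy
print(oelf_from_optical_indices(1.5, 0.8))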
Several databases of experimental OELF or n, k measurements are available in the literature, such as Palik's handbook [32] or the OELF database of Sun et al. [33]. A few examples of OELFs from the latter are shown on Figure 3-2. We can see several peaks and regions on the OELFs, which correspond to the various sources of energy loss we have identified in Chapter 2. The value of the OELF at a given energy is an indication of the probability of an energy transfer having this value. This means that energy transfers with a higher value of OELF will be more probable. For instance, the most probable energy loss for particles in Al is around 15 eV.

First, at low energies around 10-20 eV, the plasmon peak can be found, which is a marked peak of highest intensity for many materials. This means that the main source of secondary electrons for these materials (Al and Al2O3 in Figure 3-2) will be the excitation of a plasmon, followed by its relaxation by the emission of a secondary electron.

While the plasmon peak is very narrow for lighter elements, for heavier elements (Ni, Au) the peak becomes much more widened. This is due to the fact that interband transitions become more probable, that is to say the excitation of an electron from a lower level of the valence/conduction band to the conduction band above the Fermi level. As we go up towards heavier elements, the region of interband transitions significantly widens, and transition metals such as Ag, Cu, W or Au have a plateau region on the OELF that can span up to several hundreds of eVs. In the case of Au here, the region of interband transitions ranges up to 2 keV. The effect of this plateau region is that the energy transfers are susceptible to take a wider range of values compared to the very marked plasmon peak found in lighter elements.

Finally, for the highest energy transfers, a series of peaks can be observed, which correspond to the excitation of an electron from a core shell of an atom (K, L) to the conduction band. The binding energy of a given shell is a hard threshold for the excitation of an electron from this shell. As the core electrons have a stable bond with the nucleus, the energy required to set one of these electrons into motion can be quite high. This is especially true for heavier Z atoms, which are larger and have core electrons that are very strongly bound. For Al, the first accessible core shell is the L shell at around 100 eV, and the deepest shell is the K shell at 1.5 keV. For Ni, the core shells cover a wider range, spanning from the M shell at 100 eV to the K shell at 8 keV. Comparatively, for Au, the accessible core shells range from the shell at 2 keV up to the shell at 14 keV. Finally, for a compound material such as Al2O3, the particles can excite the core electrons of either atom of the compound, in this case the K shell of either the Al atom (1.5 keV) or the O atom (500 eV).

The OELF of Al2O3 is quite interesting, as it is an insulating material with a gap of around 9 eV. Therefore, no ionization is possible for energy transfers below 9 eV, which is observable on the OELF with the sharp decrease of the plasmon peak at 9 eV. On the other hand, some energy transfers around a few eV are still possible, since the OELF is not null. These are attributed to the energy losses by creation of optical phonons.
Since the OELF only gives us information on the energy transfers at 𝑞 = 0, it needs to be fitted with dielectric function models and extended to 𝑞 ≠ 0 using dispersion relations. In the previous version of MicroElec, the OELF was fitted with a sum of extended Drude functions [28]:
\mathrm{Im}\!\left[-\frac{1}{\varepsilon(\omega, \vec{0})}\right] = \sum_j D_j(\hbar\omega)    (Equation 3-6)

With D_j the expression of the Drude function for the j-th peak:

D_j(\hbar\omega) = \frac{E_p^2 A_j w_j \hbar\omega}{(\hbar^2\omega^2 - E_j^2)^2 + (w_j \hbar\omega)^2}    (Equation 3-7)

The parameters A_j, w_j and E_j are respectively the amplitude, width, and energy of the peak, and E_p is the plasmon energy. Finally, the \mathrm{ELF}(\hbar\omega, \vec{q}) was obtained with the dispersion relation

E_j(q) = E_j + \frac{\hbar^2 q^2}{2 m_0}
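For illustration, the sketch below evaluates the extended-Drude OELF of Equations 3-6 and 3-7 for a hypothetical two-peak fit; the peak parameters and plasmon energy are placeholders, not the fit parameters used in MicroElec.

import numpy as np

def drude_oelf(hw, peaks, e_plasmon):
    # Sum of extended Drude functions (Equations 3-6 and 3-7)
    # hw        : energy grid [eV]
    # peaks     : list of (A_j, w_j, E_j) fit parameters (placeholders here)
    # e_plasmon : plasmon energy E_p [eV]
    hw = np.asarray(hw, dtype=float)
    oelf = np.zeros_like(hw)
    for A_j, w_j, E_j in peaks:
        oelf += e_plasmon**2 * A_j * w_j * hw / ((hw**2 - E_j**2) ** 2 + (w_j * hw) ** 2)
    return oelf

# Hypothetical two-peak fit (not the parameters of this work)
hw = np.linspace(1.0, 100.0, 500)
print(drude_oelf(hw, peaks=[(0.8, 3.0, 17.0), (0.2, 10.0, 40.0)], e_plasmon=16.6).max())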
In the new version of MicroElec, the OELF is fitted with a sum of Mermin dielectric functions following the approach given by Abril et al. [34]:
\mathrm{Im}\!\left[-\frac{1}{\varepsilon(\omega, \vec{0})}\right] = \sum_j F(\omega)\, A_j\, \mathrm{Im}\!\left[-\frac{1}{\varepsilon_M(\omega, \vec{0}, E_j, \gamma_j)}\right]    (Equation 3-8)

with the fitting parameters A_j, \gamma_j = w_j and E_j. A simple step function F(\omega) cuts the peak below a threshold energy E_{th}. The Mermin function [35]

\varepsilon_M(\omega, \vec{q}) = 1 + \frac{(1 + i\gamma/\omega)\,[\varepsilon_L(\vec{q}, \omega + i\gamma) - 1]}{1 + (i\gamma/\omega)\,[\varepsilon_L(\vec{q}, \omega + i\gamma) - 1] / [\varepsilon_L(\vec{q}, 0) - 1]}    (Equation 3-9)

is expressed in terms of the Lindhard dielectric function [34,36]:

\varepsilon_L(\vec{q}, \omega) = 1 + \frac{\chi^2}{z^2}\,[f_1(\vec{u}, \vec{z}) + i f_2(\vec{u}, \vec{z})]

f_1(\vec{u}, \vec{z}) = \frac{1}{2} + \frac{1}{8z}\,[g(\vec{z} - \vec{u}) + g(\vec{z} + \vec{u})]

f_2(\vec{u}, \vec{z}) = \begin{cases} \frac{\pi}{2}\, u, & z + u < 1 \\ \frac{\pi}{8z}\,[1 - (z - u)^2], & |z - u| < 1 < |z + u| \\ 0, & |z - u| > 1 \end{cases}

g(x) = (1 - x^2) \ln\left|\frac{1 + x}{1 - x}\right|
An interesting feature is that the dependence in both energy and wave vector is included in the Mermin dielectric function. As a result, no separate dispersion relation needs to be applied as a post treatment, contrary to the extended Drude model. The improvement of the Mermin approach over the Drude approach, especially for low energy particles, can be seen on the stopping powers computed in section 3.3.1.
The peaks can be fitted to the different transitions observed on the OELF by tweaking the three fitting parameters, as illustrated in Figure 3-3, where the peaks for the different shells and transitions are shown. This method has been used by Denton et al. [37] and Da et al. [38] to fit the OELF for silicon and some metals. In the case of silicon, our OELF fit is similar to the fit from the previous version of MicroElec [15], while our Kapton fit is based on the work of de Vera et al. [39]. Our fit parameters for all materials shown in this work are given in Appendix I. We have fitted the OELF of 16 materials using Mermin's model: Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag, W, Au, SiO2, Al2O3, Kapton, BN. The accuracy of our fit is verified using the average ionization potential I, and the P- and Z-sum rules as used by Tanuma et al. [40]. The P-sum rule is given by the condition
P_{eff} = \frac{2}{\pi} \int_0^{+\infty} \mathrm{Im}\!\left[-\frac{1}{\varepsilon(\omega, \vec{0})}\right] \frac{d\omega}{\omega} + \mathrm{Re}\!\left[\frac{1}{\varepsilon(0)}\right] = 1

Where \mathrm{Re}[1/\varepsilon(0)] = n^{-2}(0) for insulators. The Z-sum rule follows

Z_{eff} = \frac{2}{\pi \Omega_p^2} \int_0^{+\infty} \omega\, \mathrm{Im}\!\left[-\frac{1}{\varepsilon(\omega, \vec{0})}\right] d\omega = Z

Where \Omega_p = (4\pi n_a e^2 / m)^{1/2} is the plasma frequency and n_a is the density of atoms or molecules.
The values obtained for the sum rules are shown in Table 1, and compared with data from Mukherji et al. [41], SRIM [42], ICRU [43] and de Vera et al. [39]. The target value (Z) for Zeff is the atomic number for a monoatomic material, or the total number of electrons per molecule for a compound. There are some non-negligible errors in the sum rules for some materials, as we have defined our fits to be the closest to the experimental OELF. We have also chosen to validate their correctness by favoring the agreement with stopping power and SEY data over the sum-rule values. For some materials, we have chosen to degrade the OELF fit to improve the stopping powers at low energies. In the case of Ni, Cu, Ge, W, and SiO2, the stopping powers and SEY are still satisfying despite the errors in the sum rules, as shown in section 3.3.1.
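The sum-rule checks can be carried out numerically on a tabulated OELF, as sketched below. The toy single-Drude-peak OELF and the plasma energy are placeholders used only to show that the integrals are straightforward to evaluate (both come out close to 1 for this toy case); they are not the fits of this work.

import numpy as np

def sum_rules(E_eV, oelf, hbar_omega_p_eV, re_inv_eps0=0.0):
    # P-sum rule: (2/pi) * integral of OELF/E + Re[1/eps(0)]
    # Z-sum rule: 2/(pi * Omega_p^2) * integral of E * OELF
    # re_inv_eps0 is ~ n^-2(0) for insulators and ~0 for conductors.
    p_eff = 2.0 / np.pi * np.trapz(oelf / E_eV, E_eV) + re_inv_eps0
    z_eff = 2.0 / (np.pi * hbar_omega_p_eV**2) * np.trapz(oelf * E_eV, E_eV)
    return p_eff, z_eff

# Toy single-Drude-peak OELF (placeholder, not a real material fit)
E = np.linspace(0.1, 2000.0, 200000)
Ep, w = 16.6, 1.0
oelf = Ep**2 * w * E / ((E**2 - Ep**2) ** 2 + (w * E) ** 2)
print(sum_rules(E, oelf, Ep))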
The differential cross section (DXS) for an incident particle of energy E [MeV] and mass 𝑀 [MeV/c²] is then calculated from the fitted ELF and using the following relationship, by summing the partial DXS for each j th shell:
\frac{d\sigma}{d(\hbar\omega)}(E, \hbar\omega) = \frac{Z_{eff}^2}{\pi N a_0 E'} \int_{q_-}^{q_+} \sum_j F(\omega)\, A_j\, \mathrm{Im}\!\left[-\frac{1}{\varepsilon_M(\omega, \vec{q}, E_j, \gamma_j)}\right] \frac{d|\vec{q}|}{|\vec{q}|}    (Equation 3-11)

with the atomic density N [#/cm3], the Bohr radius a_0, the electron mass m_e, E' = (m_e/M)\,E, and Z_{eff} the effective charge, equal to 1 for electrons and protons. For other particles, the effective charge was previously calculated using the Barkas formula [44]:

Z_{eff} = Z\,[1 - \exp(-125\,\beta\, Z^{-2/3})]    (Equation 3-12)

where \beta = \sqrt{1 - (1 + E/Mc^2)^{-2}}.
. Nevertheless, at low incident ion velocity, the target electron velocities cannot be neglected. Brandt and Kitagawa (BK) theory provides an expression for describing the effective charge, connecting the charge state of the projectile and the Fermi velocity of the target. Some suggestions can be found in [42] to improve the BK formulation compared to data analysis. The original expression of the effective charge given by BK is (in atomic units):
Z_{eff,BK}(Z, q, v_F, \zeta) = Z\left[q + \zeta(1-q)\ln\!\left(1 + (2\Lambda_0 v_F)^2\right)\right]    (Equation 3-13)
Where \Lambda_0 is the screening radius, treated as a variational parameter in the BK theory and expressed as:
\Lambda_0 = \frac{0.48\,(1-q)^{2/3}}{Z^{1/3}\left(1 - \frac{1-q}{7}\right)}    (Equation 3-14)
𝜁 is generally assumed to be ∼ 0.5 [45,46] and 𝑞 = (𝑍 -𝑁)/𝑍 is the fraction of the electrons that have been stripped from the moving ion (𝑁 being the number of electrons still bound to the projectile). An improved formulation of the screening radius is given by Kaneko [47]:
\Lambda = \frac{\Lambda_0}{1 + f \Lambda_0}    (Equation 3-15)
Where the relation giving 𝑓 can be found in [47]. To obtain the best fit of stopping powers, Ziegler et al. [42] have proposed the following relation for describing 𝑞:
q_{Ziegler}(Z, v_r) = 1 - \exp\!\left(-c\left(\frac{v_r}{Z^{2/3}} - 0.07\right)\right)    (Equation 3-16)
with 𝑐 a constant close to 1 (𝑐 = 0.95 in our calculations) and 𝑣 𝑟 the relative velocity of the incident particle and a target electron. A complete expression of 𝑣 𝑟 is given by Kreussler et al. [48]. In addition, Ziegler et al. have suggested a modified value of ζ to adjust the effective charge results to extensive data analysis [42]:
Z_{eff,this\ work}(Z, q, v_F) = Z_{eff,BK}\!\left(Z, q, v_F, \zeta_{Ziegler} \sim \frac{1}{2 v_F^2}\right)    (Equation 3-17)
Finally, our expression of the effective charge depends on the atomic number, the Fermi velocity of the target and the incident velocity, through the relative velocity. This description is used in the new version of MicroElec. It does not however account for the charge modifications of the incident projectile, unlike the charge fraction approach used in CasP [49] and by Moreno-Marin et al. [50], Behar et al. [51] or Heredia-Avalos et al. [52]. Although the charge fraction description is more accurate, it is much less convenient to use in the code as the modification of the inelastic MFP needs to be handled for each charge state, which could significantly slow down the simulations. We have chosen to not use charge fractions in the new version of MicroElec as a compromise between accuracy and ease of use. Moreover, the effective charge description is much more complete than the previously used Barkas formula.
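A minimal numerical sketch of this effective charge description is given below, combining Equations 3-13, 3-14, 3-16 and 3-17 in atomic units. The Kaneko correction of Equation 3-15 is omitted and the example projectile and Fermi velocity are hypothetical, so this is only an illustration of the formulas, not the MicroElec implementation.

import numpy as np

def effective_charge(Z, v_rel, v_fermi, c=0.95):
    # Velocities in atomic units (units of the Bohr velocity).
    # Ziegler's fraction of stripped electrons (Equation 3-16)
    q = 1.0 - np.exp(-c * (v_rel / Z ** (2.0 / 3.0) - 0.07))
    q = np.clip(q, 0.0, 1.0)
    # BK screening radius in atomic units (Equation 3-14)
    lam0 = 0.48 * (1.0 - q) ** (2.0 / 3.0) / (Z ** (1.0 / 3.0) * (1.0 - (1.0 - q) / 7.0))
    # Ziegler's choice of zeta ~ 1/(2 v_F^2) (Equation 3-17)
    zeta = 1.0 / (2.0 * v_fermi ** 2)
    # BK effective charge (Equation 3-13)
    return Z * (q + zeta * (1.0 - q) * np.log(1.0 + (2.0 * lam0 * v_fermi) ** 2))

# Hypothetical example: a slow carbon ion (v_rel = 2 a.u.) in a target with v_F = 1 a.u.
print(effective_charge(Z=6, v_rel=2.0, v_fermi=1.0))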
The relativistic corrections detailed in [16] by Raine et al. are also applied to the expression of the DXS and its integration limits. The effect of these corrections is directly visible on the stopping powers in section 3.3.1 computed with the Mermin approach. The comparison of these stopping powers with literature data is used as a validation metric for the inelastic cross sections. This has been done for seven metals (C, Al, Ag, Ti, Ni, Cu and W), two semi-conductors (Si and Ge) and two insulators (SiO2, Kapton), considerably extending the simulation capabilities of MicroElec. Low energy electron transportation in insulators does however require additional energy loss models, among which the interactions of electrons with phonons.
There are other corrections to the dielectric function theory at low energy as detailed by Salvat & Fernandez-Varea in [53], such as exchange effects or plasmon decay, that have not been included in this work. We have assumed that the dampening of the plasmon always leads to the production of a secondary electron, which may decrease the accuracy of our calculations at very low energies. Although the comparison with low energy experimental data is still satisfying, exchange effects are considered for future versions of MicroElec, following the method of Ochkur [54] and its implementation in other works, such as Fernandez-Varea et al. [55] or de Vera et al. [39].
Procedure of computation of the energy transfers
In the last version of MicroElec (Geant4 10.0) [17], the energy \hbar\omega and momentum \vec{q} transferred by a primary particle of energy E were calculated via rejection sampling from the differential cross sections per shell d\sigma_j / d(\hbar\omega), which are given by

\frac{d\sigma_j}{d(\hbar\omega)}(E, \hbar\omega) = \frac{Z_{eff}^2}{\pi N a_0 E'} \int_{q_-}^{q_+} F(\omega)\, A_j\, \mathrm{Im}\!\left[-\frac{1}{\varepsilon_M(\omega, \vec{q}, E_j, \gamma_j)}\right] \frac{d|\vec{q}|}{|\vec{q}|}    (Equation 3-18)
In this subsection, the energy ℏ𝜔 lost by the incident particle will be noted as 𝑄, and the differential cross section for a given energy transfer 𝑄 as 𝑑𝜎 𝑑ℏ𝜔 ⁄ (𝑄). In rejection sampling, we use a normalized distribution function of the energy transfers obtained from the differential cross section:
\left(\frac{d\sigma}{d\hbar\omega}\right)_{norm} = \frac{d\sigma/d\hbar\omega}{\max\left(d\sigma/d\hbar\omega\right)}    (Equation 3-19)

The rejection sampling procedure is shown in Figure 3-4. A trial energy transfer Q_trial is first drawn uniformly over the allowed range of energy transfers with a random number R_1 ∈ [0;1]. If the normalized differential cross section for Q_trial verifies the condition

\left(\frac{d\sigma}{d\hbar\omega}\right)_{norm}(Q_{trial}) \geq R_2    (Equation 3-21)

the value of the energy transfer Q is then taken equal to Q_trial. If this condition is not verified, the procedure is repeated with new random numbers R_1 and R_2 until we find a Q_trial that verifies Equation 3-21. The issue with rejection sampling is that we need to draw random numbers and interpolate the distribution until the condition is verified. This step can thus be repeated many times for each occurrence of an inelastic interaction, which considerably slows down the simulation. For instance, in the example given in Figure 3-4, the condition of Equation 3-21 is not verified, so we need to start the procedure again.
To address this issue, MicroElec has been extended to handle direct sampling from cumulated differential cross sections. While the functions for direct sampling were already coded in MicroElec at the beginning of this thesis, the cumulated DXS were not computed and this option was not enabled. The use of cumulated DXS per shell and direct sampling has been validated for the new version and set as the default sampling method, considerably improving computation time.
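The difference between the two sampling strategies can be illustrated with the following Python sketch, which draws energy transfers from a toy differential cross section either by inverting a cumulated (tabulated) DXS or by the trial-and-reject loop described above. The DXS shape is a placeholder, not a MicroElec table.

import numpy as np

def build_cdf(hw_grid, dxs):
    # Cumulate the tabulated DXS so that transfers can be drawn by inverse-CDF sampling
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (dxs[1:] + dxs[:-1]) * np.diff(hw_grid))))
    return cdf / cdf[-1]

def sample_direct(hw_grid, cdf, rng, n):
    # One random number per interaction: invert the cumulated DXS
    return np.interp(rng.random(n), cdf, hw_grid)

def sample_rejection(hw_grid, dxs, rng, n):
    # Trial-and-reject loop, as in the previous version of MicroElec
    dxs_norm, out = dxs / dxs.max(), []
    while len(out) < n:
        q_trial = hw_grid[0] + rng.random() * (hw_grid[-1] - hw_grid[0])
        if np.interp(q_trial, hw_grid, dxs_norm) >= rng.random():
            out.append(q_trial)
    return np.array(out)

# Toy DXS with a "plasmon-like" peak at 16 eV (placeholder, not a real fit)
rng = np.random.default_rng(1)
hw = np.linspace(1.0, 100.0, 1000)
dxs = 1.0 / ((hw - 16.0) ** 2 + 4.0)
cdf = build_cdf(hw, dxs)
print(sample_direct(hw, cdf, rng, 5), sample_rejection(hw, dxs, rng, 5))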
In any case, the value of 𝑄 we have obtained is removed from the initial energy of the incident particle, which has a final energy 𝐸 𝑓 = 𝐸 -𝑄 after the inelastic interaction. Still, 𝑄 is not entirely transferred to the secondary electron generated. Indeed, for electrons generated from core shells with a binding energy 𝐸 𝑏 , a part of 𝑄 equal to 𝐸 𝑏 is spent to excite the electron from the core level above the Fermi level. In this case, the secondary electron is generated with an energy
E_{sec} = Q - E_b    (Equation 3-23)
For semiconductors and insulators, the secondary electron also needs to be brought above the energy gap, so that the energy of the secondary electron in these materials is
E_{sec} = Q - E_b - E_{gap}    (Equation 3-24)
The case of plasmon excitation and interband transitions is more complicated. In this case, the secondary electron is excited from the valence band into the conduction band, or within the conduction band from below to above of the Fermi level in a metal. We will call these electrons "weakly bound", in opposition to the strongly bound core electrons. The main difference between these two populations is that the core electrons are localized on a discrete level with a well defined binding energy, whereas the weakly bound electrons are located within a continuum of energies below the Fermi level 𝐸 𝐹 . In a first approximation, we can consider that all weakly bound electrons come from the Fermi level, which was the approximation used in the version of MicroElec in Geant4 at the beginning of the thesis. In such a case, the binding energy of the weakly bound electron in Equation 3-23 or Equation 3-24 is 0 eV. However, if we look at the densities of states (DOS) for a few materials, we can see that the Fermi level is not necessarily the most populated level, even in the simplest approximation of a free electron gas DOS. Since the Fermi-Dirac statistic at room temperature is a staircase function with a hard cut at 𝐸 𝐹 , we can consider that the density of states is a direct indication of the population of potentially excitable electrons at a given depth below 𝐸 𝐹 .
Consequently, for interactions with weakly bound electrons, the initial energy of the weakly bound electrons has to be taken into account in the energy transfer. In fact, during an interaction with a weakly bound electron, we can consider that it is excited from the conduction band below the Fermi level for metals, or the top of the valence band for semi-conductors and insulators.
This electron has a potential energy E_init compared to the Fermi level, which we will treat in the same way as the binding energy of core shells. As a result, the secondary electron is generated with an energy

E_{sec} = Q - E_{gap} - E_{init}    (Equation 3-25)
where E_gap = 0 for metals. Following the example of the Monte-Carlo code OSMOSEE for low energy electrons [17,56], we have reintroduced the initial energy of weakly bound electrons. This energy applies to all plasmon and interband transitions. The reference of energy in MicroElec corresponds to the lowest unoccupied state at 0 K: for a metal, the reference is the Fermi level, while for a semi-conductor or insulator it is the bottom of the conduction band.
In OSMOSEE, 𝐸 𝑖𝑛𝑖𝑡 was selected from the density of states. This selection was done for each individual interaction with weakly bound electrons by rejection sampling. The DOS 𝑔(𝐸) used for sampling was an approximation of the real DOS with the free electron gas theory, in the form of 𝑔(𝐸) ∝ √𝐸. However, as we have seen for the sampling of the differential cross sections, rejection sampling is quite expensive in computation time.
To address this issue, we have used instead a unique value for E_init, chosen for each material. This value has been selected as either the median value of the DOS, or the energy of the most populated state in the DOS. The objective is to have a value that is representative of an average value obtained over a great number of random samplings. In this sense, the median or the most populated state are the values that have the highest probability of being drawn, and are more statistically consistent.

Nevertheless, the choice of a unique value has a few limits that are due to the simplicity of the approach. Indeed, when computing the median or choosing the most populated state, we assume that all states of the DOS are accessible. However, this will not be the case if the incident electron has a very low energy, below the Fermi energy (e.g. 11.6 eV in Al). In this situation, the deepest states of the DOS are not accessible anymore. In consequence, the sampling interval is also reduced, and a random sampling over the accessible states may not yield the same average value. This case was handled in OSMOSEE by reducing the interval [u_a; u_b] for rejection sampling and excluding the inaccessible states. In the worst case scenario where the incident electron has an energy of only a couple of eVs, its energy E_i may even be below E_gap + E_init, which should prevent any kind of energy transfer. This exception is handled in our code, by verifying that the energy lost by the electron, taking into account E_init, does not exceed its energy E_i. If it does, we assume that the secondary electron comes from the Fermi level and has an initial energy E_init = 0.
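The energy bookkeeping of Equations 3-23 to 3-25, together with a simplified version of the low-energy guard described above, can be summarized as follows; the numerical example is hypothetical.

def secondary_electron_energy(Q, E_incident, shell_binding=None, E_gap=0.0, E_init=0.0):
    # Q: energy lost by the incident particle; all energies in eV.
    if shell_binding is not None:           # core-shell ionization (Eq. 3-23 / 3-24)
        return Q - shell_binding - E_gap
    if Q + E_init > E_incident:             # simplified low-energy guard: fall back to E_init = 0
        return Q - E_gap
    return Q - E_gap - E_init               # weakly bound electron (Eq. 3-25)

# Hypothetical numbers: 20 eV transferred in a metal (no gap), E_init = 4 eV
print(secondary_electron_energy(20.0, 150.0, E_init=4.0))   # -> 16.0 eV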
As we will see in the computation of the TEEY in section 3.3.3, the addition of the initial energy is necessary to avoid an overestimation of the TEEY values. Despite the simple approximation made here, its impact on the TEEY is clearly visible and allows the simulation to give values that are in good agreement with the reference data. The values chosen for the initial energy of weakly bound electrons are given in Table 3-2.

Finally, Auger transitions are included for core shells using the class G4AtomDeexitation. This class is called when a core electron has been ejected during an inelastic interaction, and Auger reorganization processes are susceptible to happen. For this, we need to know which shell the secondary electron is coming from. Hence, there is first a random selection based on the differential cross sections per shell, to determine if the secondary electron will be produced from a given shell or a plasmon/interband transition. Then, the corresponding table of cumulated cross sections per shell is retrieved and used in the direct sampling procedure. If a core shell has been selected, the info is passed to G4AtomDeexitation along with the atomic number of the atom where the deexcitation process takes place. This class will then compute the Auger processes and add the secondary electrons generated by the Auger cascade to the tracking stack if necessary.
3.2.3 Improvements for the simulation of multilayer materials
Interaction model for the crossing of electrons through a surface or material interface
At the vacuum/material boundary, the periodic potential of the crystal is disturbed by the discontinuity induced by the interface, which can be expressed as a potential barrier [57]. The energy reference for an electron in vacuum is set as the vacuum level. In MicroElec, the energy reference for an electron inside of the material was chosen as the lowest unoccupied state, namely the Fermi level for a metal, or the bottom of the conduction band for an insulator. In other works, such as the OSMOSEE code, the bottom of the conduction band is generally taken as the energy reference for metals instead of the Fermi level, and the surface potential barrier is the sum of the Fermi energy and the workfunction. For consistency with the previous versions of MicroElec, we have chosen to retain the Fermi level as the energy reference in metals. Due to this change of reference when crossing the surface potential barrier, illustrated in Figure 3-6, an electron penetrating in the material will lose potential energy; this loss is transformed into a gain of kinetic energy. An electron exiting the material will, on the contrary, gain potential energy and lose kinetic energy. In both cases, the gain or loss is equal to the value of the potential barrier, taken as the work function W for metals, or the electron affinity χ for insulators and semi-conductors. The energy modifications for an electron going through the surface are shown in Figure 3-6. In both cases, a free electron has a positive energy. Captured or bound electrons of the material have a negative energy, such as the weakly bound electron energy used for plasmon and interband transitions in Equation 3-25. The electron needs to have a greater energy than the surface barrier to be emitted outside of the material.
In MicroElec, the surface has been added as an exponential potential barrier, based on the model of the Monte-Carlo code OSMOSEE [56]. This gives the following expression of the transmission probability for an electron of energy 𝐸 and incident angle 𝜃:
T(\theta, E) = 1 - \frac{\sinh^2\!\left(\pi a (k_i - k_f)\right)}{\sinh^2\!\left(\pi a (k_i + k_f)\right)}    (Equation 3-26)

with the pre- and post-transmission wave vectors [nm-1]

k_i = \frac{\sqrt{2 m_0 e}}{\hbar} \sqrt{E \cos^2\theta}, \qquad k_f = \frac{\sqrt{2 m_0 e}}{\hbar} \sqrt{E + E_{th}}\, \cos\theta

where E_th is the height of the surface potential barrier. For an interface between two materials, E_th is taken as the difference between the thresholds of the two materials as a first approximation.
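As an illustration, the sketch below evaluates the transmission probability of Equation 3-26 for a given electron energy, incidence angle and barrier height; the barrier width parameter a is a placeholder value, not the one used in MicroElec.

import numpy as np

HBAR = 1.054571817e-34   # J.s
M0   = 9.1093837015e-31  # kg
QE   = 1.602176634e-19   # C (and J per eV)

def transmission_probability(E_eV, theta_rad, E_th_eV, a_nm=0.05):
    # Exponential-barrier transmission probability (Equation 3-26).
    # E_th is the barrier height (work function or electron affinity for a surface).
    k_pref = np.sqrt(2.0 * M0 * QE) / HBAR * 1e-9          # nm^-1 per sqrt(eV)
    k_i = k_pref * np.sqrt(E_eV * np.cos(theta_rad) ** 2)
    k_f = k_pref * np.sqrt(E_eV + E_th_eV) * np.cos(theta_rad)
    return 1.0 - np.sinh(np.pi * a_nm * (k_i - k_f)) ** 2 / np.sinh(np.pi * a_nm * (k_i + k_f)) ** 2

# Example: 10 eV electron at normal incidence crossing a 4.2 eV barrier
print(transmission_probability(10.0, 0.0, 4.2))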
If not transmitted, the electron is reflected by the surface. The post-reflection angle, with respect to the normal to the surface, is then taken equal to the pre-reflection angle, as is the case with optical reflection, and the energy is unchanged. Finally, an electron traveling near the surface may interact with a surface plasmon, as opposed to the volume plasmons treated by the inelastic process, the interaction probability depending on the depth of the electron. Although this interaction has not been implemented, the secondary emission yields given by the code in section 3.3.3 are still in good agreement with experimental data.
The surface processes for electrons are included in Geant4 in the new class G4MicroElecSurface, a discrete process called at each interface and based on the other Geant4 class G4OpBoundaryProcess that handles the surface processes for optical photons. G4MicroElecSurface handles Vacuum/Material and Material/Material interfaces. This allows the simulation of simple multi-layer structures using the supported materials. In such a case, the difference in work function/electron affinity between the materials needs to be taken into account. Hence, the threshold energy 𝐸 𝑡ℎ takes the value of this difference. The modelling of the interfaces of a multilayer structure is shown on Figure 3-7, in the example of a SiO2 layer on a Si layer. We can notice that electrons coming from the Si layer need to overcome a potential barrier ∆𝜒 = 𝜒 𝑆𝑖 -𝜒 𝑆𝑖𝑂 2 . The surface processes allow the electron tracking limit to be extended down to the work function or electron affinity of the material, which corresponds to energies of a few eVs. In the simulation, all electrons are followed until their energy falls below the height of the surface potential barrier.
In this case, they are unable to escape and are killed from the simulation.
Modifications of MicroElec to allow the simulation of multiple materials
In the original version of MicroElec, the transport of electrons, protons and heavy ions could only be simulated in silicon. The module could also model the transport of particles in the 16 new materials with a few fixes in the code, but with only a single material per simulation. As a result, some more extensive modifications had to be made to handle the simulation of multilayered materials, using the interaction models we have presented so far.
The first addition brought during this thesis work is the creation of a material structure data file. Indeed, there are several parameters that we need to know for each individual material, such as the binding energies for each shell or the work function. For each material, a structure file Data_ [Material_Name].dat is created to store these relevant quantities. Two examples of the files are given for a monoatomic material (silver) and a compound (SiO2) in Figure 3-8. The first line of the file is a header, giving information on the material's name and its atomic number. For a compound, the keyword "Compound" is used instead of the atomic number. All the other lines (except the comments marked by a #) contain data on a given variable or table of values, and follow the structure:
[Number of values] [Name of variable] [Variable unit, or "noUnit"] [Values of the variable]
For some variables, a single value is needed. However, for other variables such as the low energy limits of the Mermin functions used for fitting (treated here as the binding energy for a core shell), we need to store a vector since we have one value per shell. The values stored in the structure file and their significance are listed in Table 3-3 (for instance the work function, the binding energies of the shells, and variables such as ElasticModelLowEnergyLimit and ElasticModelHighEnergyLimit, i.e. the low and high energy limits of the elastic and inelastic interaction models for electrons (_e) or protons and ions (_p)).

The new class G4MicroElecMaterialStructure reads the data files and stores the data in an object that can be created by the different processes. They can then retrieve the data for a given material by calling the object G4MicroElecMaterialStructure for a material. In this thesis work, we have also implemented map structures in MicroElecElasticModel and MicroElecInelasticModel to store the cross section tables for each material of the geometry. In these maps, each cross section table is identified by a key (the name of the material). At each simulation step, the current material can then be used to search the storage maps and retrieve the corresponding cross section tables or material structure data. Using both storage maps and material structure objects, the new MicroElec is then able to handle the transport of particles in multiple materials in a single simulation.
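To make the file format explicit, a minimal Python reader following the line structure given above is sketched below; it is only an illustration of the format, not the G4MicroElecMaterialStructure C++ implementation, and the file name in the usage comment is hypothetical.

def read_material_structure(path):
    # Returns (header tokens, {variable name: (unit, list of values)}).
    variables = {}
    with open(path, "r") as f:
        header = f.readline().split()                 # e.g. material name and atomic number or "Compound"
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):      # skip blanks and comments
                continue
            tokens = line.split()
            n_values = int(tokens[0])                 # [Number of values]
            name, unit = tokens[1], tokens[2]         # [Name of variable] [Unit or "noUnit"]
            values = [float(v) for v in tokens[3 : 3 + n_values]]
            variables[name] = (unit, values)
    return header, variables

# Hypothetical usage:
# header, data = read_material_structure("Data_Silver.dat")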
Implementation of electron-phonon interaction models for insulators
Due to the wide band gap in insulators (8-9 eV in SiO2 or Al2O3), the inelastic mean free paths (MFP) shown in Figure 3-9 become divergent as the electron energy approaches the band gap, and ionization becomes impossible below the band gap. The transport of electrons at this energy level (<~ 10 eV) is thus dominated by other processes. This is especially true for insulators that have large band gaps. New physical processes for the interaction of low-energy electrons with optical and acoustic phonons have been added to MicroElec for the insulator SiO2, following the work of Schreiber and Fitting [58]. The models have also been extended to other insulators, namely Al2O3 and BN.
Inelastic interactions with optical phonons
An electron can interact with a Longitudinal-Optical (LO) Phonon, of vibration mode 𝜔 𝐿𝑂 , and either absorb or create a phonon. Phonon absorptions or emissions are respectively a gain or a loss of energy for the primary electron, the value of which corresponds to the phonon vibration mode ∆𝐸 = ±ℏ𝜔 𝐿𝑂 . This interaction has been described by Fröhlich [59] [60], and used by Fitting et al. [61] in SiO2, Akkerman et al. [62] in alkali halides, and Ganachaud et al. [63] in Al2O3. Optical phonons have also been implemented in PENELOPE, though with a different formalism based on an integration of the ELF in the infrared range [64]. In the Fröhlich formalism, the scattering rates 𝑓 [s -1 ] for the absorption (-) or emission (+) of a LO phonon by an electron of energy 𝐸 are given by [65]
f_\mp(E) = \frac{e^2}{4\pi\epsilon_0\hbar^2} \cdot \left(N_{LO} + \frac{1}{2} \mp \frac{1}{2}\right) \cdot \left(\frac{1}{\epsilon(\infty)} - \frac{1}{\epsilon(0)}\right) \cdot \sqrt{\frac{m^*}{2E}} \cdot \hbar\omega_{LO} \cdot \ln\!\left[\frac{1 + \delta}{\mp 1 \pm \delta}\right]    (Equation 3-27)

With \delta = \sqrt{1 \pm \hbar\omega_{LO}/E}, and N_{LO} = \frac{1}{e^{\hbar\omega_{LO}/k_b T} - 1}
the Bose-Einstein distribution of the population of phonons for the mode 𝜔 𝐿𝑂 and the temperature 𝑇 [K], 𝜖(∞) = 2.25 and 𝜖(0) = 3.84 the optical and static dielectric constants, with the values here given for SiO2. In this material, two phonon modes ℏ𝜔 𝐿𝑂 = 63 and 153 meV are considered. The emission and absorption rates for the two phonon energies of SiO2 are given in Figure 3-10 for electrons below 10 eV. We can see that the emission of a phonon of a given mode ℏ𝜔 𝐿𝑂 becomes impossible when the electron energy falls below ℏ𝜔 𝐿𝑂 . Moreover, phonon emission is much more probable than phonon absorption in SiO2, so the latter has been neglected. However, the case of SiO2 is particular compared to Al2O3 and BN, since for these materials only one LO phonon mode is considered.
Hence, for SiO2, we would need to create two LO phonon interaction processes in Geant4, whereas for other insulators we would have to only create one. To ensure the flexibility of our Geant4 module, we have combined the 0.063 eV and 0.153 eV emission processes into a single emission process for SiO2, with the emission frequency plotted in black in Figure 3-10. In fact, the 0.153 eV emission process is more probable than the 0.063 eV emission for ballistic electrons, so they have been weighted by a factor of respectively 75% and 25% by Schreiber [58].
We have used these factors in our LO phonon emission process for SiO2, where we use a single weighted energy ℏ𝜔 𝐿𝑂 = (0.75 * 0.153 eV + 0.25 * 0.063 eV) = 0.131 eV in Equation 3-27. Now that the scattering rate corresponds to the emission of two phonon modes, it also has to be multiplied by 2.
Figure 3-10: Emission and absorption rates of LO phonons in SiO2
The effective mass 𝑚 * is assumed to be equal to the free electron mass 𝑚 0 . The angular deflection 𝜃 of the primary electron is calculated with a random number 𝑅 ∈ [0; 1] following the angular distribution from [61]:
\cos\theta = \frac{E + E'}{2\sqrt{E E'}}\,(1 - B^R) + B^R    (Equation 3-28)

Where B = \frac{E + E' + 2\sqrt{E E'}}{E + E' - 2\sqrt{E E'}} and E' = E - \hbar\omega_{LO}.
The mean free path can be obtained as \mathrm{MFP} = \sqrt{2E/m_0}/f_\pm. As can be seen in Figure 3-9, the interactions with LO phonons will be the main energy loss process for electrons under 15 eV in SiO2. Indeed, the LO phonon MFP becomes lower than the inelastic MFP below 15 eV, as the latter becomes divergent. Below the energy gap, the energy loss by LO phonon emission is also the only possible energy loss interaction for electrons. The parameters used for the LO phonon model are given in Table 3-4.
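A small numerical sketch of the emission branch of Equation 3-27 for SiO2 is given below, using the weighted mode ℏω_LO = 0.131 eV and the factor of 2 discussed above; it is an illustration of the formula, not the Geant4 process itself.

import numpy as np

QE, EPS0, HBAR, M0, KB = 1.602176634e-19, 8.8541878128e-12, 1.054571817e-34, 9.1093837015e-31, 1.380649e-23

def lo_phonon_emission_rate(E_eV, hw_lo_eV, eps_inf, eps_0, T=300.0, n_modes=1.0):
    # Froehlich emission rate (Equation 3-27, emission branch), valid for E > hw_lo.
    E, hw = E_eV * QE, hw_lo_eV * QE
    n_lo = 1.0 / (np.exp(hw / (KB * T)) - 1.0)                 # Bose-Einstein population
    delta = np.sqrt(1.0 - hw / E)                              # emission: the electron loses hw
    rate = (QE**2 / (4.0 * np.pi * EPS0 * HBAR**2)
            * (n_lo + 1.0)
            * (1.0 / eps_inf - 1.0 / eps_0)
            * np.sqrt(M0 / (2.0 * E))
            * hw
            * np.log((1.0 + delta) / (1.0 - delta)))
    return n_modes * rate                                      # [1/s]

# 5 eV electron in SiO2, weighted LO mode, two modes combined
f = lo_phonon_emission_rate(5.0, 0.131, eps_inf=2.25, eps_0=3.84, n_modes=2.0)
mfp_nm = np.sqrt(2.0 * 5.0 * QE / M0) / f * 1e9                # MFP = v / f
print(f, mfp_nm)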
Elastic interactions with acoustic phonons
Electrons are also able to interact with acoustic phonons; in this case the interaction is analogous to the elastic interaction with nuclei. In fact, the energy lost by an electron interacting with an acoustic phonon is about a few meV, which will be neglected here. However, as pointed out by Akkerman [25] or Valentin [14], the validity of the Partial Wave Analysis (PWA) cross sections for the elastic interaction of electrons becomes questionable below 50 eV. In SiO2, this phenomenon is especially noticeable as the elastic MFP, given by the molecular cross sections, becomes smaller than the inter-atomic distance below 50 eV (Figure 3-9). It is thus recommended to switch to an acoustic phonon-electron interaction model at lower energies. In such a model, the interaction between the electrons and the lattice and their behaviour as Bloch electrons should be more realistic [58].
In order to simplify the expressions, we will use the approximation of a parabolic band, which gives the density of states of electrons [58] :
D(E) = \frac{\sqrt{2}}{\pi^2\hbar^3}\, m_0^{3/2} \sqrt{E}    (Equation 3-29)

The dispersion relationship is then given by:

E(k) = \frac{\hbar^2 k^2}{2 m_0}    (Equation 3-30)
and the effective mass is assumed equal to the free electron mass as in the LO phonon model.
Due to the lattice's periodicity, the unique values of the acoustic phonons' wave vectors \vec{k} are contained in the first Brillouin zone, that is to say in [-k_{BZ}; k_{BZ}]. Hence, a wave vector outside of this domain can be expressed as a multiple of a wave vector belonging to it. In the reciprocal lattice, the limit of the Brillouin zone is given by k_{BZ} = \pi/a, with a the lattice parameter.
In our simulations, we have implemented the expression of the collision frequency given by Schreiber & Fitting [58]. It is based on the integral relationship given by Bradford and Woolf [66]. The collision frequency is then given by:
f_{ac} = \frac{\pi k_b T\, \mathcal{E}_{ac}^2}{\hbar c_s^2 \rho} \cdot \frac{D(E)}{1 + E/A} \quad \text{if } E < \frac{E_{BZ}}{4}

f_{ac} = \frac{2\pi m^* (2N_{BZ} + 1)\, \mathcal{E}_{ac}^2}{\rho\hbar\, \hbar\omega_{BZ}} \cdot D(E)\, E^2 \left(\frac{A}{E}\right)^2 \cdot \left[-\frac{E/A}{1 + E/A} + \ln\!\left(1 + \frac{E}{A}\right)\right] \quad \text{if } E > E_{BZ}    (Equation 3-31)

The parameters \mathcal{E}_{ac} (also noted C) and A are specific to the material.

This model was first conceived by Sparks et al. [67] for alkali halides (NaCl, HCl…), and used by Fischetti [68][69] for SiO2 according to the equation:
f_\pm(k) = \sum_\alpha \frac{m^*}{4\pi M N_c \hbar^2 k} \int_0^{q_{max}} \frac{dq\, q^3}{\omega_\alpha(q)}\, |S_\alpha(q)|^2 \left[n_\alpha(q) + \frac{1}{2} \pm \frac{1}{2}\right]    (Equation 3-32)
We have here a sum that extends over the acoustic phonon branches α, since the probability given by Sparks is only given for the scattering of an electron of wavevector k with a phonon of a given wavevector q. Hence, we need to sum over all possible phonon wavevectors q to get the scattering frequency. N_c is the density of primitive cells, M is taken as the mass of a primitive cell for small q, or the mass of the heaviest constituent of the cell when q approaches k_BZ. n_α(q) are the Bose factors of the phonon branches, and q_max = 2k ± 2m*c_s/ℏ. |S_α(q)|² is the electron-phonon matrix element.
Bradford and Woolf [66] note that the expressions proposed by Sparks et al. [67] and Fischetti [68] [69] use approximations that become invalid at higher energies. This is notably true for electrons with wave vectors beyond the Brillouin zone edge (𝐸 > 𝐸 𝐵𝑍 ), around a few eV. The interaction frequency extrapolated by Sparks from Equation 3-32 for electrons with an energy greater than 𝐸 𝐵𝑍 /2 gives a dependence in energy in 𝐸 3/2 , which makes the inverse mean free paths diverge. To fix this problem, Bradford and Woolf have proposed to take into account the screening of the electrons belonging to the atoms of the lattice. This is made by adding a correction factor 𝛼 in the integral relationship of Equation 3-32. They obtain the formula for the matrix element |𝑆 𝛼 (𝑞)| 2 = 𝐶 2 𝑞 2 /(1 + 𝑞 2 /𝛼²)², and for the scattering rate
f_\pm = \frac{3 C^2}{4\pi M_p N_c \hbar v} \int_0^{q_{max}^\pm} \frac{dq\, q^3}{\omega(q)} \cdot \left[n(q) + \frac{1}{2} \pm \frac{1}{2}\right] \cdot \frac{f(q)}{(1 + q^2/\alpha^2)^2}    (Equation 3-33)
Where 𝑀 𝑝 and 𝑁 𝑐 are the mass and density of the unit cell, 𝑛(𝑞) is the Bose function at room temperature,
q_{max}^\pm = \begin{cases} 2k \mp 2m^* c_s/\hbar & \text{for } k < k_{BZ} \\ k\left[1 + (1 \mp c_s k_{BZ}\hbar/E)^{0.5}\right] & \text{for } k \geq k_{BZ} \end{cases}

and f(q) is a function that alters the mass of the unit cell to the mass of the heaviest constituent depending on q, while ω(q) is the phonon frequency. The screening parameter has the dimension of a wavevector in this expression, whereas it is expressed as an energy in the expression of Schreiber & Fitting [58] (the parameter A). This correction allows the convergence of the acoustic MFP towards the elastic MFP at higher energies.
Nevertheless, a major drawback of this model is that it is very difficult to determine its parameters. Arnold et al. [70] underline the difficulty in finding parameters that work at both lower and higher energies. They attribute this difficulty to the lack of dependence of the scattering rates on the density of states of electrons. Akkerman et al. [62] also highlight this omission, as an explanation to the fact that their simulated energy spectrum of the secondary electrons emitted by alkali halides derive towards higher energies compared to the experimental measurements. However, the density of states is taken into account in the expression of Schreiber & Fitting [58], which we have used here. In fact, the authors have proposed three models for the band structure of SiO2, including the 1-band free electron approximation we have retained here. Two other models were proposed, namely a 3-band and a 5-band structure. They both include the free electron band, along with one or two hole and electron bands, and yield a different density of states from the free electron approximation. As they mentioned that the best agreement with the experimental data was obtained with a single free-electron band structure, we have chosen this structure for the density of states in Equation 3-29.
The value of the deformation potential 𝐶, or ℰ 𝑎𝑐 , is unknown. It is the limit of the matrix element |𝑆 𝛼 (𝑞)| from Equation 3-32 when 𝑞 converges to 0 (|𝑆 𝛼 (𝑞)| ≅ 𝐶). For higher 𝑞, the matrix element can be approximated from the total elastic scattering cross section 𝜎 evaluated at the energy of the exciton as [69]:
|S_\alpha(q)|^2 \cong \left(\frac{\pi\hbar^4 N^2}{m^{*2}}\right) \sigma    (Equation 3-34)
Where 𝑁 is the atomic density. In the absence of values for 𝐶, Fischetti [69] uses Equation 3-34
for the whole energy range, that is to say C² = (πℏ⁴N²/m*²)σ. For SiO2, Fischetti and Schreiber & Fitting use C = 3.5 eV. The cross section can be exactly computed for an electron at the energy of the exciton, as done by Wang [71]. An alternate method is proposed by Sparks et al. [67]. They have used for alkali halides the integrated cross section Q = 4πσ for elastic scattering with the negative ion at the exciton energy, as it is supposed to be much larger than for the positive ion. The cross section is estimated from molecular scattering data. For HCl, which has an exciton energy of 8 eV, they obtain a cross section Q_HCl = 0.35 nm². The cross section for other materials is computed from Q_HCl using a scaling relationship depending on the radius of the halide ion r_h: Q = (r_h/r_NaCl)² Q_HCl. Fischetti has followed the same approach in SiO2, assuming that the cross section is dominated by the larger oxygen ions. They have obtained an integrated cross section Q = 3.5 x 10⁻¹⁵ cm², which they have rescaled by the effective charge of oxygen (≅ 1.1 e) in SiO2. For Q in cm² and the atomic density N in at/cm³, we can then obtain the deformation potential via the relation C = \sqrt{\pi\hbar^4 (N \cdot 10^6)^2 (Q \cdot 10^{-4})/m_0^2}, which yields the value of 3.5 eV. In contrast, we have seen on Figure 3-1 that the molecular elastic cross sections in SiO2 from ELSEPA converge towards the atomic elastic cross section of silicon at lower energies. This could indicate that silicon is the dominating atom in SiO2 for the elastic scattering at low energies rather than oxygen, in contradiction with the assumption made by Fischetti.
The screening parameter can be chosen to make the acoustic MFP converge to the elastic MFP at a certain energy [72]. Schreiber and Fitting [58] have chosen to completely replace the elastic MFP from the PWA method by the acoustic MFP for all elastic interactions. Using their parameters, the convergence to our elastic MFPs happens at 3 keV. For the alkali halide CsI, Boutboul et al. [72] have fixed the convergence between the two models at 20 eV. This parameter can also be determined from the deformation potential following the equation [66]:
\lim_{q\to 0}\left(\frac{4\pi}{V_c} \cdot \frac{1}{4\pi\varepsilon_0} \cdot \frac{Z_1 Z_2 e^2}{\alpha^2 + q^2}\right) = C \quad\Rightarrow\quad \alpha = \sqrt{\frac{Z_1 Z_2 e^2}{V_c\, C\, \varepsilon_0}}
For SiO2, we have Z_1 = 14 and Z_2 = 8 the atomic numbers of Si and O, and V_c = 113 Å³ is the volume of the unit cell.
The speed of sound in the material 𝐶 𝑠 can be computed using the elastic theory from the longitudinal and transverse velocities 𝐶 𝑙 and 𝐶 𝑡 [71]:
\frac{3}{C_s} = \frac{2}{C_t} + \frac{1}{C_l}    (Equation 3-35)
𝐶 𝑙 and 𝐶 𝑡 can be obtained from the constants 𝐶 11 , 𝐶 12 and 𝐶 44 from the elasticity tensor by:
C_l = \sqrt{\frac{\frac{1}{3}(C_{11} + 2C_{12} + 4C_{44})}{\rho}}, \qquad C_t = \sqrt{\frac{\frac{1}{3}(C_{11} - C_{12} + C_{44})}{\rho}}    (Equation 3-36)
Finally, the value of k_BZ can also be defined as the radius of a sphere with the volume of the first Brillouin zone [69], which gives k_{BZ} = (6\pi^2/V_p)^{1/3} [71], with V_p the volume of the cell.
The angular distribution is assumed to be isotropic in some works, such as Akkerman et al. [62]. This approximation can be justified by the fact that the transport of very low energy electrons becomes a random walk motion. However, this approximation may not be realistic for energies above 100 eV. In this work, we have also assumed an isotropic angular distribution, and this model replaces the PWA model in SiO2 for energies below 100 eV. Parameters are given in Table 3-5. The value of A has been modified, in comparison with the parameter given by Schreiber [58] (A = 5 E_BZ), to improve the convergence of the acoustic MFP to our PWA elastic MFP at 100 eV.

For Al2O3, we have also computed the parameters following the different laws given in this section. The PWA elastic cross section obtained from the average atom approach was used to evaluate the deformation potential, the speed of sound was computed from the elastic constants, and k_BZ from the volume of the unit cell. For this material, the switch between the PWA elastic model and the acoustic phonon model also happens at 100 eV.

Due to the difficulty in finding the parameters of this model, another approach has been proposed by Ganachaud et al. [63,73,74]. They have proposed the following empirical cut-off function, depending on the electron energy E, which multiplies the elastic cross sections at low energies:
R_C(E) = \tanh\!\left(\alpha_c \left(\frac{E}{E_g}\right)^2\right)    (Equation 3-37)
With 𝐸 𝑔 the gap of the material, and 𝛼 𝑐 an adjustment parameter. The effect of this function is to increase the elastic MFP in a similar way to the acoustic phonon model. This is mandatory to avoid unphysically low values of the MFP, which will also result in particles getting stuck in place. We have used this function in place of the acoustic phonon model for boron nitride, as in the work of Chang et al [75], since we could not find some of the necessary parameters for the acoustic phonon model. For BN with a gap of 𝐸 𝑔 = 5.2 eV, the adjustment parameter has been taken as 𝛼 𝑐 = 0.5.
Empirical modeling of trapping using a polaronic interaction model
A basic model of trapping has also been proposed by Ganachaud et al. [63,73,74,76] for Al2O3.
They have assumed that the trapping of electrons is due to the polaronic effect [77][78][79], which is itself tied to the coupling of electrons with phonons. As we have seen, electrons can self-localize by forming small polarons in the materials where they have a strong coupling with the lattice.
In this formalism, an electron of energy E has a probability (inverse mean free path) of becoming trapped given by:
P_{trap}(E) = S \exp(-\gamma E) = \frac{1}{\lambda_{trap}(E)}    (Equation 3-38)
S defines the amplitude of the trapping, and γ defines the energy domain where the process applies. If S increases, the mean free path is reduced, and if γ increases, the mean free path starts to diverge at lower energies, reducing the trapping of electrons of around ten eV. This model has been implemented in our simulations for Al2O3 and extended to SiO2. The parameters for these materials are given in Table 3-6. This gives a capture mean free path of 3.5 nm for electrons at 5 eV in Al2O3. When an electron is captured by the polaronic process, we assume that it has been lost permanently and it is removed from the simulation.
Table 3-6: Parameters of the polaronic trapping model.

Material   S (nm⁻¹)   γ (eV⁻¹)
Al2O3      0.5        0.35
SiO2       0.1        0.2
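Equation 3-38 and the parameters of Table 3-6 translate directly into a capture mean free path, as in the short sketch below (illustrative only).

import numpy as np

def polaron_trapping_mfp_nm(E_eV, S_per_nm, gamma_per_eV):
    # P_trap(E) = S exp(-gamma E) = 1 / lambda_trap(E)  (Equation 3-38)
    return 1.0 / (S_per_nm * np.exp(-gamma_per_eV * E_eV))

# Capture MFP for a 5 eV electron with the Al2O3 parameters of Table 3-6
print(polaron_trapping_mfp_nm(5.0, S_per_nm=0.5, gamma_per_eV=0.35))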
3.3 Validation of the low energy transport model with reference data

3.3.1 Computation of stopping powers
The stopping power 𝑆(𝐸) [MeV/mm or MeV. cm 2 /g] for a particle of energy 𝐸 is related to the differential cross section according to the following relation:
S(E) = N \int_0^{E_{max}} \hbar\omega\, \frac{d\sigma}{d(\hbar\omega)}(E, \hbar\omega)\, d(\hbar\omega)    (Equation 3-39)
It can be used to check the validity of our modelling and fit of the ELF. The stopping powers given by the new version of MicroElec for silicon are plotted in Figure 3-11. The improvement of the Kaneko (B-K) [47] approach over the Barkas [83] formula for Z_eff is clearly visible below 30 keV/nucleon.
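For reference, Equation 3-39 can be evaluated numerically from a tabulated differential cross section as sketched below; the DXS shape and density are placeholders, not MicroElec data.

import numpy as np

def stopping_power(hw_eV, dxs_cm2_per_eV, atomic_density_cm3):
    # S(E) = N * integral of hw * dsigma/d(hw)  (Equation 3-39), in eV/cm here
    return atomic_density_cm3 * np.trapz(hw_eV * dxs_cm2_per_eV, hw_eV)

# Toy DXS peaked near a 16 eV plasmon loss (not a real MicroElec table)
hw = np.linspace(1.0, 200.0, 2000)
dxs = 1e-18 / ((hw - 16.0) ** 2 + 4.0)          # cm^2/eV, placeholder
print(stopping_power(hw, dxs, 5.0e22))          # eV/cm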
Ionizing dose calculations
The calculation of the ionizing dose-depth profile for an incident particle gives information on the energy deposited in the material and its uniformity. It can thus be used as another verification for the inelastic process. The ionizing dose given by the new version of MicroElec is compared in Figure 3-13 with data from SRIM and the standard physics processes of Geant4, which include continuous and multiple scattering processes. Both rejection sampling and direct sampling methods give the same results for MicroElec, but with significantly improved computation time for direct sampling. The good agreement between MicroElec, SRIM and the standard physics models in the case of 10 keV protons is another validation of the inelastic processes. However, due to the exclusive use of discrete processes, MicroElec is about 4 times slower than the standard physics in this case, with a time difference that may vary depending on the configuration used. In the case of low energy electrons (the example of 200 eV electrons is shown here), the standard physics underestimates the range of electrons. Indeed, the range given by the stopping powers of Luo et al. [81], using the continuous slowing-down approximation for 200 eV electrons (8.9 x 10⁻⁶ mm), is greater than the range given by the standard physics (5.5 x 10⁻⁶ mm) and closer to MicroElec (1.1 x 10⁻⁵ mm). There is in fact a plateauing behaviour of the extrapolated range under 1 keV. It has been illustrated with the code OSMOSEE in [21], and we will study the extrapolated range of electrons in more detail in Chapter 4. Finally, the introduction of the initial energy for weakly bound electrons does not modify the dose profile. A more detailed comparison between the new version of MicroElec and the different Geant4 ionization models regarding dosimetry applications for microelectronics has been published in [Inguimbert et al.].
Computation of the TEEY of metals and semi-conductors and validation with experimental data
The emission of secondary electrons from materials under electron irradiation is driven by several parameters. First, the TEY of silicon obtained with MicroElec is compared with simulated data from OSMOSEE [18] and experimental data from Bronstein [Bronshtein] in Figure 3-14. Option (b) shows how the initial energy of weakly bound electrons acts as a slight reduction of the TEEY, while it did not have a significant effect on the dose profile. As a result, this energy can be used as a tweakable parameter to fit the TEEY to experimental data. It also shows that we need to take this energy into account in order not to over-estimate the energy of the secondary electrons generated. The importance of surface processes is clearly visible in (c), with the total yield being much higher than the experimental data (by about a factor of 2). Finally, version (a) with all processes shows a good agreement with our experimental reference data [Bronshtein].
The yields are compared for all other metals and semi-conductors with experimental data from the SEY and BEY database of Joy [Joy], data from Bronstein [Bronshtein], and EEY measurements from ONERA [Gineste et al.; Balcon et al.] in Figure 3-15. Simulated data from other M-C codes, such as OSMOSEE [18], are also added as references. The TEEYs for metals and semi-conductors show good agreement with literature and experimental data. The TEEY of Kapton is also correctly modelled, even if we have not taken into account the phonon processes as in the other insulators. This could be because the work function (4.7 eV) is greater than the energy gap (2.05 eV), so we do not have electrons that can go below the gap and still be able to escape. Hence, we do not have to model the energy losses below the band gap to compute the TEEY for Kapton.
Computation of the TEEY of insulating materials without charging effects
First, the SEY of Al2O3 obtained from MicroElec is shown in Figure 3-16 with the phonon and polaron processes deactivated. We can see that the yield is completely overestimated and increases linearly with the incident energy. Indeed, more energetic incident electrons create more secondary electrons. However, due to the energy gap and the absence of models for the energy losses by phonons, any electron falling below the energy gap (8.8 eV) cannot lose energy anymore. Given the value of the electron affinity (0.5 eV), practically all low energy electrons can move in the material without losing any energy, and are able to freely escape from the surface. This gives the unrealistic yields seen in Figure 3-16.

Then, we show in Figure 3-17 the SEY and BEY of SiO2 computed with the acoustic and LO phonon interaction models enabled, but without the polaronic capture model. The BEY for SiO2 is consistent with the data from Schreiber and Fitting [65]. However, a significant discrepancy can be seen for the SEY of SiO2 between MicroElec, the Monte-Carlo codes of Ohya et al. [Ohya et al.] and Schreiber and Fitting [58], and the experimental data of Bronstein [Bronshtein].

An important point is that all the simulated SEY and BEY shown in this section do not include charge or recombination effects. The trapping of particles is only modelled empirically in our code and the code of Ganachaud & Mokrani [63], and has no impact on the successive electron cascades. This is equivalent to a fresh unirradiated and uncharged sample, which is not the case for experimental data related to insulators. The electron generation cascade can be disrupted by the trapping of low energy electrons, reducing the escape probabilities and the SEY. The creation and motion of positive and negative charges in the insulator can also generate internal and external electric fields which can dynamically evolve during the irradiation. These fields modify the trajectories of low energy electrons inside and outside of the material, and their penetration depths. As these effects significantly alter the electron cascade in insulators, this explains the discrepancy between simulated and experimental data.
Conclusion of Chapter 3
Several additions and improvements have been brought to MicroElec regarding the transportation of low energy electrons, protons, and ions in 16 materials (Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag, W, Au, SiO2, Al2O3, Kapton, BN). The Energy Loss Function fitting model has been switched from the extended Drude model to the more accurate Mermin approach. The inelastic model for ions and protons is now valid down to 1 keV/nucleon, and the inelastic and elastic models for electrons are now provided down to a few eVs. Computation time is improved using cumulated DXS.
A model for the vacuum/material and material/material interfaces has been added, handling the interaction of electrons with the interfaces and the transition between the different materials of the simulation. In MicroElec, the Fermi level was initially chosen as a reference for the electron energies in the previous version of the code, which is why this reference was also retained for the computation of the crossing of the potential barrier. This will be corrected in a future version of MicroElec, since the formulas used in this work for the crossing of the surface followed the implementation in OSMOSEE, where the reference for metals was at the bottom of the conduction band. All the TEEY computations shown in this work for metals were made using an energy reference at the Fermi level. If the reference of energy in metals is changed to the bottom of the conduction band, a small modification of the energy of weakly bound electrons (for instance from 4.2 to 3.6 eV in Al) allows us to practically obtain the same TEEY curves as shown in this work. Therefore, taking a reference at the Fermi level does not invalidate the TEEY results for metals shown in this chapter, but this will be modified in a future version of MicroElec for physical consistency.
Although the module is slower than the standard physics of Geant4, it has an increased accuracy for low energy electrons under 1 keV. As the secondary electron production threshold and the low energy tracking limit for electrons are set to the value of the surface barrier, MicroElec can be used for secondary electron emission applications. The processes have been modified to allow the use of nine new supported materials. Although a simple model is used for material/material interfaces, it already allows the simulation of basic multi-layers. The refinement of the model and the addition of more oxides to the supported materials would open many possibilities for MicroElec within the study of surface or layer effects.
The new processes have been validated against multiple data sets. The inelastic process has been validated with the electron and proton stopping powers, which have shown good agreement with literature data. The low secondary electron production threshold in MicroElec also gives improved dose profiles. The secondary electron yield has been used to verify the complete transportation model for electrons, underlining the importance of the surface potential barrier for electron emission modeling. Satisfying agreement with experimental TEY data is observed, despite surface plasmons and exchange effects not being supported, which could decrease the accuracy of our calculations at low energies.
Finally, electron-phonon interactions have been added for insulators and used to simulate electron transportation in SiO2. The inelastic stopping powers and the BEY have been validated, but the SEY calculations do not match experimental data since charge and recombination effects are not modelled. Consequently, further developments are needed to extend MicroElec to the simulation of insulators by including the effects of charging. These developments will be presented in Chapter 5, where we have developed a new Monte-Carlo code for the simulation of charging effects in SiO2, based on MicroElec.
On the other hand, the Monte-Carlo code we have shown here is capable of computing the TEEY of several metals and semi-conductors. However, for some applications, an analytical expression of the TEEY can be more convenient to use than a full Monte-Carlo code. In Chapter 4, we will present an analytical SEY model based on the data acquired with the Monte-Carlo model developed in this chapter.
Chapter 4: Development of an analytical electron emission model for metals and semi-conductors
As mentioned in the introduction, the TEEY is the driving parameter of many effects of the space environment on spacecraft systems. To quantify these risks, systems simulation packages need the SEY as an input parameter, for instance to compute the power thresholds to prevent multipacting in a certain geometry, or the evolution of surface potentials in a certain radiation environment. In the example of SPIS, some trials were made to combine the higher-level simulation packages with a Monte-Carlo module for the computation of the SEY. Nevertheless, this approach was very inefficient in computation time and resources. For this reason, most simulation packages use analytical expressions for the SEY, such as Dionne's model [1], Vaughan's model [2], or Furman & Pivi's model [3]. Even if these models can be freely fitted to an experimental SEY data set, their flexibility relies on the use of several arbitrary parameters and approximations that become unphysical for low energy electrons below a few keV.
The objective of this section is to derive an analytical secondary electron emission yield model that is based on the physics of low energy electron transport. In this regard, our Monte-Carlo code has been used to extract the electron transport parameters that will be studied in sections 4.1 and 4.2, that is to say the penetration depth, the ionizing dose-depth profile, and the transmission rate of electrons. The TEEY, which was our validation metric for MicroElec in the previous section, is dependent on all these parameters. Hence, the results obtained from the Monte-Carlo code should form a solid reference for the development of our analytical models.
We have then combined these parameters into a single analytical expression for the secondary electron emission yield, which will be presented in section 4.3. The analytical models have been validated with Monte-Carlo data (and experimental data for the SEY model) for 11 monoatomic metals and semiconductors: Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag and W. These particular materials were selected because we have already modeled the transport of electrons in them with MicroElec, therefore we will be able to easily get reference data on these materials for our models. As we will see in this chapter, some essential quantities of our models are based on the atomic number, which is why we have not focused on compound materials such as Al2O3.
Gathering data on the range, transmission rate and dose of low energy electrons is especially critical, due to the low availability of experimental data for electrons below 10 keV. There is also practically no experimental data for electrons below 1 keV. This is because the penetration depths of these particles are about a few tens to a few hundreds of nanometers. Measuring the range of particles within a nanometric accuracy is particularly difficult, hence the interest of developing an analytical model for the range of electrons down to a few tens of eV.
The analytical models for the extrapolated range and transmission rate of low energy electrons were published during the thesis in ref. [4], and the ionizing dose model in ref. [5] in Applied Surface Science. A third paper on the SEY model has been submitted. The content of these papers has been reused in this section with the agreement of the publisher.

Many empirical range-energy expressions have been proposed by several authors [6][7][8][9][10][11][12][13][14][15], describing the electron extrapolated ranges in various materials, aluminum being the most extensively studied material. Most of these relationships are in the form of a power law with the energy of the electron E in MeV:
$r(E) = k\,E^{n}$    Equation 4-1
Katz and Penfold [6] have made a very thorough compilation of experimental results for aluminum, and have proposed an empirical formula for the extrapolated range of electrons between 10 keV and 3 MeV. In this formula, widely used in the past, the n factor in Equation 4-1 is a function of the energy of the electrons 𝐸 in MeV:
$n = 1.265 - 0.0954\,\ln(E)$    Equation 4-2
With r(E) in mg/cm², k = 412. Sometimes n also depends on the material [12]. Weber [7] proposed a different expression, valid for aluminum in the energy region [3 keV, 3 MeV]. Kobetich and Katz [8] extended this model to the 0.3 keV-20 MeV range by adjusting the constants:

$r(E) = A E \left[1 - \dfrac{B}{1 + C E}\right]$    Equation 4-3

The same authors [9] proposed further improvements for this formula, by introducing a dependence on the atomic number Z of the material for the three parameters A, B and C; the expressions used in this work are recalled below with the parameters of our model.

Most of the other models found in the literature suppose that the extrapolated range tends to zero when the electron energy decreases towards zero, by following a power law as in Equation 4-1 [16][17][18][19][20][21]. However, contrary to the Weber expression extended by Kobetich and Katz (Equation 4-3), they are not relevant on the whole energy range. The expression of Equation 4-3 also has the advantage of being able to either express the extrapolated range r as a function of the energy E (r(E)) or, inversely, E as a function of the range (E(r)). For these reasons, we have chosen to use this expression as a basis for our extrapolated range model, with the objective of improving this formula for very low energy electrons down to a few tens of eV. For this, we need to take into account the strong diffusion that low energy electrons undergo when traveling in the material, which also creates a significant dispersion in the penetration depths. Given that the penetration depths of sub-keV electrons are about a few tens of nanometers, we cannot neglect this dispersion and use the same assumptions as for higher energies. As we will show in the next section, the high-energy formulas fail to reproduce the behaviour of the extrapolated range below 1 keV.
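As a quick numerical illustration, the sketch below evaluates the empirical Katz & Penfold expression (Equations 4-1 and 4-2) for aluminum. The chosen energies and the conversion to a geometric depth are only indicative, and the formula is not meant to be trusted below about 10 keV.

```python
import numpy as np

def katz_penfold_range_mg_cm2(E_MeV):
    """Empirical extrapolated range in aluminum (Equations 4-1 and 4-2), ~10 keV to 3 MeV."""
    n = 1.265 - 0.0954 * np.log(E_MeV)
    return 412.0 * E_MeV ** n

for E_keV in (10, 100, 1000):
    r = katz_penfold_range_mg_cm2(E_keV / 1000.0)
    # Convert the mass range to a depth using the density of aluminum (2.7 g/cm3)
    print(f"{E_keV:5d} keV : {r:10.4f} mg/cm2  (~{r * 1e-3 / 2.7 * 1e4:8.2f} um)")
```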
Study of the extrapolated range and transmission rate of low energy electrons with Monte-Carlo simulations from MicroElec
To understand why the expressions valid for high energy electrons fail to model the behaviour of low energy electrons especially below 1 keV, we propose in this section to study the transmission rate and penetration depth using our Monte-Carlo model.
First, the transmission rates should be computed as they will be needed to deduce the extrapolated range, following the method described in section 1.5.4 of Chapter 1. We remind here that the implantation depths of electrons are first extracted from the Monte-Carlo simulations and sampled as a depth distribution. The distribution is then integrated and normalized to get the repartition function, which is the transmission rate. Finally, the tangent at 0.5 is taken and followed until its intersection with the X axis, the value obtained at the intersection being the extrapolated range.
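As an illustration of this extraction procedure, the following minimal Python sketch builds the transmission rate from a sample of implantation depths and applies the tangent-at-0.5 construction. The binning and the depth distribution are purely illustrative stand-ins for actual MicroElec output, not the thesis' implementation.

```python
import numpy as np

def extrapolated_range(depths_nm, n_bins=200):
    """Extrapolated range from a sample of implantation depths (tangent at 50% transmission)."""
    depths_nm = np.asarray(depths_nm, dtype=float)
    edges = np.linspace(0.0, depths_nm.max(), n_bins + 1)
    counts, _ = np.histogram(depths_nm, bins=edges)
    # Transmission rate through a thickness h = fraction of electrons implanted deeper than h
    transmission = 1.0 - np.cumsum(counts) / counts.sum()
    h = 0.5 * (edges[:-1] + edges[1:])
    # Tangent at the 50% transmission point, followed down to the depth axis
    i = int(np.argmin(np.abs(transmission - 0.5)))
    slope = np.gradient(transmission, h)[i]
    return h[i] - transmission[i] / slope

# Hypothetical depth sample (in nm), standing in for a MicroElec depth distribution
rng = np.random.default_rng(0)
depths = rng.gamma(shape=4.0, scale=2.0, size=50_000)
print(f"extrapolated range ~ {extrapolated_range(depths):.1f} nm")
```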
In the following, the implantation depths for incident electrons between 25 eV and 5 keV have been sampled from MicroElec simulations and used to calculate the transmission probability as a function of the incident electron energy. They are shown in Figure 4-1 for beryllium, aluminum, iron and silver, which have respective densities of 1.85, 2.7, 7.87 and 10.5 g/cm³. At low energies, the penetration depths are comparable for all four materials, around 1-2 nanometers. However, as the energy increases, the penetration depths, expressed in nm in Figure 4-1, increase more quickly for low density materials. This effect is visible in Figure 4-1 as the transmission curves become less widespread when the material density increases. Indeed, for 2 keV electrons the penetration depth can reach hundreds of nanometers in aluminum and beryllium, but only 40 nm in iron and silver. Moreover, the intervals between the transmission rate curves of all four materials become very small for electron energies below 100 eV. This implies that the extrapolated ranges extracted from these transmission curves should decrease more slowly below 100 eV, as will be studied next.

In Figure 4-2, the extrapolated ranges r computed with MicroElec, which uses discrete processes, are compared with extrapolated range computations made with the continuous processes of Geant4's standard physics list (Opt4) for aluminum and silicon. This physics list is designed for precise transportation of electrons on a wide range of energies. For energies under 10 keV, the PENELOPE continuous energy loss ionization model is used with a multiple scattering model for elastic interactions. Monte-Carlo simulated data from Colladant et al. [16], Akkerman et al. [17] and experimental data from Kanter & Sternglass [18] are also displayed, but the available experimental data for the extrapolated range of low energy electrons below 1 keV is very scarce. Finally, the range given by the Weber formula extended by Kobetich and Katz (Equation 4-3), and the CSDA range (S̄) obtained from MicroElec's stopping powers (in green, labeled CSDA), are also shown for both materials.

The extrapolated range curves show a typical behavior below a few hundreds of eV. They are no longer proportional to the incident energy, and the range/energy function stabilizes as the range reaches a plateau region whose height depends on the material. This effect is more visible on the MicroElec curve and can also be seen on the CSDA curves computed from the MicroElec stopping powers, which become parallel to each other at lower energies. The level of this plateau region changes from one material to another as a function of the relative values of the inelastic and elastic mean free paths.
One can notice that, for both materials, the CSDA ranges (𝑆 ̅ ) obtained from Equation 4-4 with MicroElec's stopping powers converge to about 2 times the extrapolated range 𝑟 𝐸 obtained from MicroElec simulations. However, the CSDA gives values which are much higher than the simulated data sets below a few keV. This can be explained by the fact that the CSDA is a maximum range which neglects the deflection induced by the elastic and inelastic interactions, and would only be attained by a hypothetical electron with a strictly linear trajectory. As this is never the case for electrons, this parameter is an unreachable limit for the actual range 𝑟. Indeed, below 100 eV, the elastic scattering becomes prevalent over the inelastic scattering for electrons. This can be shown qualitatively by calculating the probability 𝑃 𝐸𝑙 that the interaction made by the electron is an elastic interaction. It is obtained from the total cross sections (𝜎) as:
$P_{El} = \dfrac{\sigma_{Elastic}}{\sigma_{Elastic} + \sigma_{Inelastic}}$
As can be seen in Figure 4-3, 𝑃 𝐸𝑙 increases strongly below 50 eV, where energy losses become rarer, and approaches 1 below 20 eV. This behavior can also be seen for other Monte-Carlo codes, as mentioned by Pierron et al. [19]. But, as mentioned previously, this range is a maximum depth that will not be reached by most electrons. Indeed, at these energies the angular distribution of the elastic scattering becomes quasi-isotropic (Figure 1). Consequently, the transportation of low energy electrons below 50 eV becomes a case of random diffusion: they are highly scattered without any energy loss until an inelastic interaction occurs, then leading the particle to come at rest. Hence, the electron can travel a longer total path before being stopped, while remaining close to the surface. However, we can see that the CSDA range, which does not take into account the elastic diffusion, becomes parallel with the extrapolated range at lower energies. This shows that the elastic interaction does not change the dynamic of the range. It only reduces the values of the projected range by diffusing the trajectories of the particles, and the intensity of this reduction becomes larger as the energy and elastic MFP decrease. Hence, the flattening dynamic could rather be caused by the divergence of the inelastic MFP.
The plateauing effect at very low energies does not seem to be reproduced by the Geant4 continuous processes (Std Phys Opt4). Indeed, this physics package uses multiple scattering instead of discrete elastic models, which can also generate more differences. Above 1 keV however, the Geant4 continuous processes give similar values to MicroElec. This behaviour can be linked with the transmission probabilities in Figure 4-5, in the example of silicon. The intervals between the transmission curves computed with MicroElec become narrower as the energy decreases, as seen for the other materials in Figure 3. However, the transmission curves obtained with the standard physics keep a spread of about the same order at lower and higher energies on the lin-log plot. This implies a linear (on a log-log plot) decrease of the extrapolated range for the whole energy range of 25 eV-10 keV seen in Figure 4-2. The comparison is given here as an indication of the differences which can occur at very low energies, when using different interaction models with different approaches (continuous vs discrete) and different mean free paths. We have already conducted a more extensive comparison between the different physics models of Geant4 for low energy dosimetry applications, which can be found in ref. [20].
Analytical expressions for the extrapolated range and transmission rate
Above a few keV, Equation 4-1 reproduces faithfully the Monte-Carlo simulations and experimental data. However, the commonly used power law expressions are no longer applicable below 1 keV. As shown in Figure 4-2, Equation 4-1 does not reproduce the dynamic of the extrapolated range below 1 keV, giving a linear (log/log scale) evolution in place of the flattening phenomenon observed in the simulations. Thanks to the Monte Carlo simulations shown in the previous section, this expression has been modified in order to remain relevant down to 10 eV. In our new expression, the model of ref. [9] in Equation 4-3 is maintained for electron energies over 14.5 keV, as it is able to correctly model the dynamic of the range above this energy. This energy corresponds to the point where our new expression best fits the formula of ref. [9]. Below this threshold, a power law function replaces the expression of the extrapolated range r(E):
$r(E) = \begin{cases} D\,(E + E_r)^{F} & 10\ \mathrm{eV} < E \le 14.5\ \mathrm{keV} \\[4pt] A E \left[1 - \dfrac{B}{1 + C E}\right] & E \ge 14.5\ \mathrm{keV} \end{cases}$    Equation 4-5
With the following parameters:
$r_{Al} = 3 \times 10^{-7}\ \mathrm{g/cm^2}$ is the extrapolated range of 50 eV electrons in aluminum obtained from the MicroElec simulations, and $E_0 = 14.5\ \mathrm{keV}$;

$A = (1.06\, Z^{-0.38} + 0.18) \times 10^{-3}\ \mathrm{g\,cm^{-2}\,keV^{-1}}$, $B = 0.22\, Z^{-0.055} + 0.78$, $C = (1.1\, Z^{0.29} + 0.21) \times 10^{-3}\ \mathrm{keV^{-1}}$;

$G(Z) = r_Z(50\ \mathrm{eV}) / r_{Al}$, with $r_Z(50\ \mathrm{eV})$ the extrapolated range of 50 eV electrons in the material Z;

$E_r = E_0 \Big/ \left(\dfrac{A E_0\left[1 - \frac{B}{1 + C E_0}\right]}{G\, r_{Al}} - 1\right)^{1/F}$ and $D = \dfrac{A E_0\left[1 - \frac{B}{1 + C E_0}\right]}{(E_0 + E_r)^{F}}$, the latter ensuring the continuity of the two expressions of Equation 4-5 at $E_0$.

Although the range below 50 eV is overestimated and the agreement with the simulations is decreased for low Z (Be, C) and high Z (W) materials, overall a satisfying agreement with the simulations is observed. Indeed, below 100 eV, the average difference between the model and the simulation is between 3% and 12% for all materials, except for Be with 18%.

The values for r_Z can be extracted from the Monte Carlo simulations. They are more representative of the actual differences between the ranges of the materials and give a better estimation of G. The ranges shown in Figure 4-7 use the G values from the simulations. Alternatively, G can also be extracted from the ranges available in the literature. However, for energies this low, most available data are CSDA ranges, which are not representative of the random walk of electrons. In spite of this, such data are available for many materials, as in the case of the stopping powers of Shinotsuka [21], which can be used to get the CSDA range and the G parameter for 41 materials. Moreover, the dynamic of the CSDA range is similar to the one of the extrapolated range, as shown in Figure 4-2, due to the divergence of the inelastic mean free paths which strongly increase below a few tens of eVs.

The F(Z) factor defines the slope of the extrapolated range dr_E/dE between 1 and 10 keV, so that the curve reproduces the dynamic of the reference ranges. The values obtained for F and G in both cases are given for each material. Figure 4-8 shows the correlation between the values of G(Z) and F(Z) for each material and the atomic number of the material Z. The strong correlation in Figure 4-8a shows that the height of the plateau region for a material is strongly dependent on the atomic properties of the material. Indeed, the range of low energy electrons below 100 eV tends to be higher in high Z materials, as shown in Figure 4-7. However, it is not trivial to provide a definite explanation for this correlation. At low energies, many properties depending on Z or on the material may influence the height of the plateau region, such as the number of conduction electrons per atom. This number is lower in Cu and Ag than for neighboring metals, which could explain why their G values are slightly lower than expected. We can also observe that both Si and Ge have lower plateau heights than other materials with a close Z. We could then suppose that this discrepancy is due to the fact that these materials have an energy gap, which plays an important role in the energy loss process. Thus it is difficult to conclude on the origins of this dependence on Z and of the observed discrepancies for some materials, and a more extensive study would be required.
Figure 4-8b shows a more limited correlation between the slope of the range curve (F) and the atomic number, as F varies in a narrower range than G. This correlation is rather given as a starting point to extrapolate the parameter F and extend the model to other materials than the 11 shown here. This can also be done for G using the correlation relationship between Z and G. However, some fitting work may still be needed to improve the agreement with the reference data.

The transmission rate is described by a Weber-type exponential expression, $\eta_E(h) = \exp\left[-\left(\dfrac{q h}{r_E}\right)^{p}\right]$ (Equation 4-8), where p and q are material-dependent parameters. The comparison between this analytical model (dotted lines) and MicroElec (solid lines) for all 11 materials can be found below. As in the case of the extrapolated range, a satisfying agreement is seen between the model and the simulations, although the agreement is degraded for very low and high Z materials (in the case of Be and W), and at very low energies below 20 eV.
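To make the use of these expressions concrete, the sketch below evaluates the low energy branch of Equation 4-5 together with a Weber-type transmission rate of the Equation 4-8 form. Only the Z-dependent A, B, C expressions and r_Al follow the text; the values chosen for G, F, p and q are illustrative placeholders, not the fitted values of this work.

```python
import numpy as np

R_AL_50EV = 3e-7   # extrapolated range of 50 eV electrons in Al (g/cm2)
E0_EV = 14.5e3     # matching energy between the low and high energy expressions (eV)

def weber_params(Z):
    """Z-dependent parameters of the high energy expression (ref. [9]); E in keV, r in g/cm2."""
    A = (1.06 * Z ** -0.38 + 0.18) * 1e-3
    B = 0.22 * Z ** -0.055 + 0.78
    C = (1.1 * Z ** 0.29 + 0.21) * 1e-3
    return A, B, C

def low_energy_range(E_eV, Z, G, F):
    """Extrapolated range (g/cm2) from the low energy branch of Equation 4-5."""
    A, B, C = weber_params(Z)
    r_E0 = A * (E0_EV / 1e3) * (1.0 - B / (1.0 + C * E0_EV / 1e3))   # range at 14.5 keV
    E_r = E0_EV / (r_E0 / (G * R_AL_50EV) - 1.0) ** (1.0 / F)
    D = r_E0 / (E0_EV + E_r) ** F
    return D * (np.asarray(E_eV, dtype=float) + E_r) ** F

def transmission(h, r_E, p=2.0, q=1.0):
    """Weber-type transmission rate through a thickness h (Equation 4-8 form)."""
    return np.exp(-((q * h / r_E) ** p))

# Example: aluminum (Z = 13, G = 1 by definition of G); F is an illustrative slope value
r = low_energy_range([50.0, 300.0, 1000.0], Z=13, G=1.0, F=1.6)
print("extrapolated ranges (g/cm2):", r)
print("transmission at half range :", transmission(0.5 * r, r))
```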
Development of an analytical model for the ionizing dose deposited by low energy electrons
It is also possible to derive a model for the ionizing dose deposed by primary electrons of a certain incident energy arriving at the surface of the material, following an analytical approach.
The ionizing dose is defined as the energy transferred by these electrons to the material, which can then generate secondary electrons. This is in opposition to the non-ionizing dose, where the deposited energy does not generate additional particles, but can instead result in the displacement of an atom from its position in the lattice. Therefore, in insulators, electrons with an incident energy below the bandgap are not able to deposit ionizing dose in the material. In the following, we will first derive a general formula for the dose deposited in a layer of thickness dh, located at the depth h of a material irradiated with a flux φ of incident electrons having an energy E. The amount of energy dE deposited in this layer of thickness dh is obtained by subtracting the amount of energy leaving the volume at the depth h + dh from the amount of energy arriving at the depth h. The amount of energy transported at the depth h is proportional to the transmitted fluence φ · η(h, E), η(h, E) being the transmission probability of the electrons of energy E through a material of thickness h. It is also proportional to the energy of the particle at the depth h, which is E(r − h), where the extrapolated range r is the maximal distance traveled by the particle. Hence, r − h is the remaining distance that the particle has to travel after having previously travelled the distance h in the medium. This gives the expression for the energy deposited at h:

$\varphi \cdot \eta(h, E) \cdot E(r - h)$
Equation 4-9
Similarly, the amount of remaining energy at the depth ℎ + 𝑑ℎ is given by:
$\varphi \cdot \eta(h + dh, E) \cdot E\big(r - (h + dh)\big)$    Equation 4-10
The subtraction of these two terms leads to the general expression of the dose deposited at depth h, which is proportional to the flux of electrons:
$dose(h) = \varphi\, \dfrac{d\left[\eta(h, E) \cdot E(r - h)\right]}{dh}$    Equation 4-11
What is noticeable here is that the dose is, for electrons of energy E, a simple function of both the transmission probability η(h, E) and the extrapolated range vs. energy expression r(E) (or its inverse function E(r − h) in Equation 4-11) of these electrons. We must also note that this expression only describes the energy transferred to the material by a primary electron traveling deeper into the material. It does not take into account the retrodiffusion of primaries, or the diffusion of this energy through the subsequently created secondaries. The objective of the model is to give the energy deposited by an incident electron before the generation of secondary electrons, hence we will not be taking into account the diffusion of this energy induced by secondaries. However, the retrodiffusion of the primary electron needs to be taken into account to have a good estimation of the dose, which is why we will introduce some corrections later in this section.
We have already shown in section 4.1 that most range and transmission rate models become invalid below 1 keV, due to the omission of elastic scattering. For these reasons, we will use the analytical models we have developed in section 4.1, to propose an analytical formula for the dose profile of low energy electrons.
4.2.1 Analytical expressions of the ionizing dose from the low energy range and transmission rate models

4.2.1.1 Derivation of an initial expression for the ionizing dose

For higher energies, above 14.5 keV, the extrapolated range expression of Kobetich & Katz (Equation 4-3) can be used to calculate the ionizing dose-depth profile, leading to a different formula from the one we will develop here based on our models. This high energy relationship will be presented and compared to the low energy model later in this chapter. In the following, the ionizing dose will be expressed as a function of the depth, and given for a single incident energy E. Therefore, this expression can be used over the thickness of the material to give the dose-depth profile. Some of the quantities of the model are also dependent on the incident energy, which will be indicated by a subscript E on the quantity. For instance, the extrapolated range r(E), which depends on the incident electron energy, will be noted r_E.
Below a few keV, which is the energy domain of interest for secondary electron emission and surface analysis, the dose-depth profile for a given fluence can be directly derived from Equation 4-11, combined with our models for the range-energy relationship in Equation 4-5 and the transmission rate in Equation 4-8. This gives the expression for the dose per fluence as:
$Dose_E(h) = \dfrac{d}{dh}\left(\eta_E \cdot E(r_E - h)\right) = I'_E(h)\,\eta_E(h) - I_E(h)\,\eta'_E(h)$    Equation 4-12
Where the transmission rate 𝜂 𝐸 (ℎ) is given by Equation 4-8, and 𝐼 𝐸 (ℎ) = 𝐸(𝑟 𝐸 -ℎ) is the inverse function of the range expression of Equation 4-5. Knowing that the incident electron has an extrapolated range of 𝑟 𝐸 , it gives the remaining energy of this electron after having travelled the distance ℎ in the material. 𝐼 𝐸 (ℎ) can be deduced from Equation 4-5 as:
$I_E(h) = E(r_E - h) = \left(\dfrac{r_E - h}{D}\right)^{1/F} - E_r$    Equation 4-13
The expressions of the derivatives 𝐼′ 𝐸 (ℎ) and 𝜂 𝐸 ′ (ℎ) can be written as follows:
$I'_E(h) = \dfrac{dE(r_E - h)}{dh} = \dfrac{(r_E - h)^{\frac{1}{F} - 1}}{F\, D^{1/F}}$    Equation 4-14

$\eta'_E(h) = \dfrac{d\eta_E(h)}{dh} = -\dfrac{p q}{r_E}\left(\dfrac{q h}{r_E}\right)^{p-1} e^{-\left(\frac{q h}{r_E}\right)^{p}}$    Equation 4-15
Combining all these expressions leads to a pure analytical expression for the dose depth profile.
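A direct numerical transcription of Equations 4-12 to 4-15 can look as follows. All parameter values in the example (r_E, D, F, E_r, p, q) are placeholders loosely inspired by aluminum around 1 keV, not the fitted parameters of the model.

```python
import numpy as np

def dose_profile(h, r_E, D, F, E_r, p, q):
    """Dose per incident electron at depth h (Equation 4-12), for a single incident energy.

    Depths and ranges share the same unit (e.g. g/cm2); energies are in eV.
    """
    # Remaining energy of the primary after a path h (Equation 4-13) and its derivative (4-14)
    I = ((r_E - h) / D) ** (1.0 / F) - E_r
    I_prime = (r_E - h) ** (1.0 / F - 1.0) / (F * D ** (1.0 / F))
    # Transmission rate through h (Equation 4-8 form) and its derivative (Equation 4-15)
    eta = np.exp(-((q * h / r_E) ** p))
    eta_prime = -(p * q / r_E) * (q * h / r_E) ** (p - 1.0) * eta
    return I_prime * eta - I * eta_prime

# Illustrative placeholder parameters (not fitted values)
h = np.linspace(1e-8, 7.5e-6, 300)   # depth grid in g/cm2, kept below the extrapolated range
dose = dose_profile(h, r_E=9e-6, D=1.2e-10, F=1.6, E_r=135.0, p=2.0, q=1.0)
print("depth of the dose maximum (g/cm2):", h[np.argmax(dose)])
```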
Corrections to the initial dose model expression: modeling of the retrodiffusion of primaries and improvements for electrons below a few hundred eV
The initial expression we have derived (Equation 4-12 combined with Equations 4-13 to 4-15) is found to slightly underestimate the dose near the surface. Indeed, we have assumed that all primary electrons can only travel forward into the material. However, a part of the primary electrons can be inelastically backscattered and leave the material. These electrons will only transfer a part of their energy to the material before exiting it, this energy being deposited near the surface. Another part of the incident fluence can also be elastically backscattered, in which case the electrons will not lose energy in the material. As a result, some corrections have been brought to Equation 4-12 to take these effects into account and better fit the dose-depth profiles.
For a given depth ℎ, a retrodiffusion factor 𝜁 𝑅𝑒𝑡𝑟𝑜 has been added to reduce the energy deposited by the electrons moving forward into the material. The removed part is compensated by the energy deposited at the depth ℎ by the electrons reflected from deeper into the material, which are traveling back to the surface. The effect of this factor is to redistribute the dose towards the surface and simulate the electrons which have escaped the material but deposited a significant part of their initial energy. The Equation 4-12 becomes:
$Dose(h) = I'_E(h)\,\eta_E(h) - \left[(1 - \zeta_{Retro})\, I_E(h)\,\eta'_E(h) + \zeta_{Retro} \int_{h}^{r_E} I_E(z)\,\eta'_E(z)\, dz\right]$    Equation 4-16
The retrodiffusion factor has been chosen as 𝜁 𝑅𝑒𝑡𝑟𝑜 = 0.1. We can interpret the two parts of this equation as follows: 𝐼 ′ 𝐸 (ℎ)𝜂 𝐸 (ℎ) is the remaining energy of the electrons that are transmitted through h, and 𝐼 𝐸 (ℎ)𝜂 𝐸 ′ (ℎ) is the energy deposited by the electrons that have stopped at h. The dose deposited at h is therefore the difference between the energy deposited by the electrons stopped at h, and the energy kept by the electrons transmitted through h. The aim of the retrodiffusion factor is to reduce the energy deposited by electrons deeper in the material, so in this case we want to reduce 𝐼 𝐸 (ℎ). Then, the energy deposited by the backscattered electrons when they stop is redistributed through the integral.
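The correction of Equation 4-16 can be evaluated numerically, for instance with a simple trapezoidal quadrature for the redistribution integral. The closures below are placeholder stand-ins for Equations 4-13 to 4-15 with the same arbitrary parameters as in the previous sketch.

```python
import numpy as np

def corrected_dose(h, I, I_prime, eta, eta_prime, r_E, zeta_retro=0.1, n_quad=400):
    """Dose at depth h with the retrodiffusion correction of Equation 4-16."""
    # Energy redistributed towards the surface by electrons scattered back from depths z > h
    z = np.linspace(h, r_E, n_quad)
    f = I(z) * eta_prime(z)
    redistributed = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))
    return I_prime(h) * eta(h) - ((1.0 - zeta_retro) * I(h) * eta_prime(h)
                                  + zeta_retro * redistributed)

# Placeholder ingredients (illustrative parameters, not fitted values)
r_E, D, F, E_r, p, q = 9e-6, 1.2e-10, 1.6, 135.0, 2.0, 1.0
I         = lambda x: ((r_E - x) / D) ** (1 / F) - E_r
I_prime   = lambda x: (r_E - x) ** (1 / F - 1) / (F * D ** (1 / F))
eta       = lambda x: np.exp(-((q * x / r_E) ** p))
eta_prime = lambda x: -(p * q / r_E) * (q * x / r_E) ** (p - 1) * eta(x)

print("corrected dose near the surface:", corrected_dose(2.0e-7, I, I_prime, eta, eta_prime, r_E))
```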
At very low energy, around some tens of eV, the approach of Equation 4-11 does not apply anymore because the transport regime changes to a kind of random walk motion governed by the elastic process, which becomes dominant. Indeed, the inelastic mean free path starts to increase very significantly at energies close to the plasmon energy, as we have already shown in Figure 4-3 of the previous section. Equation 4-11 then overestimates the deposited energy, which can become higher than the incident energy. To correct that, below a cutoff energy chosen equal to the plasmon energy of the target material, the ionizing dose has been simplified, following

$dE = \varphi \cdot E \cdot \dfrac{d\eta}{dh}\, dh$

The dose-depth profile is then simplified to:
$Dose_E(h) = -E \cdot \eta'_E(h) \quad \text{if } E \le \hbar\omega_p$    Equation 4-17
This formula avoids any overestimation of the deposited dose at very low energy (E < ħω_p). In the intermediate energy range [ħω_p, ~300 eV], in order to connect smoothly Equation 4-12 and Equation 4-17, a linear combination of these two formulas is proposed:
$Dose_E(h) = \left[\dfrac{E - \hbar\omega_p}{E_{Low} - \hbar\omega_p}\right] I'_E(h)\,\eta_E(h) - \left[\dfrac{E\,(E_{Low} - E)}{E_{Low} - \hbar\omega_p} + I_E(h)\,\dfrac{E - \hbar\omega_p}{E_{Low} - \hbar\omega_p}\right] \eta'_E(h)$    Equation 4-18
Where E_Low = E_r/2, with E_r taken from the extrapolated range model of Equation 4-5. This expression can be used in the energy range ħω_p < E < E_r/2. Depending on the target material, E_r/2 corresponds to energies of a few hundreds of eV.
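One way the three expressions could be articulated as a function of the incident energy is sketched below. The switching logic mirrors the text (Equation 4-17 below the plasmon energy, the blend of Equation 4-18 up to E_r/2, Equation 4-16 above), and the toy closures are only there to exercise the selector; the exact implementation used in this work may differ.

```python
def dose_dispatch(h, E, hw_p, E_low, dose_corrected, dose_base, dose_simple):
    """Select the dose expression according to the incident energy E (all energies in eV).

    dose_corrected(h, E): Equation 4-16 (with retrodiffusion), used for E >= E_low = E_r / 2
    dose_base(h, E)     : Equation 4-12, entering the blend of Equation 4-18
    dose_simple(h, E)   : Equation 4-17, very low energy limit (E <= plasmon energy hw_p)
    """
    if E <= hw_p:
        return dose_simple(h, E)
    if E >= E_low:
        return dose_corrected(h, E)
    w = (E - hw_p) / (E_low - hw_p)            # weight of the full expression in Equation 4-18
    return w * dose_base(h, E) + (1.0 - w) * dose_simple(h, E)

# Toy stand-ins, not physical models
corrected = lambda h, E: 2.2 * E * (1.0 - h)
base      = lambda h, E: 2.0 * E * (1.0 - h)
simple    = lambda h, E: 1.0 * E * (1.0 - h)
for energy in (10.0, 100.0, 1000.0):
    print(energy, dose_dispatch(0.1, energy, hw_p=15.0, E_low=300.0,
                                dose_corrected=corrected, dose_base=base, dose_simple=simple))
```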
Modeling the contribution of the backscattered electron yield to the ionizing dose
The deposited energy expressions obtained so far are defined for any electron entering a target solid. But as we have shown when studying the energy distribution of electrons exiting the material, a part of the incident fluence is elastically backscattered and does not deposit any energy in the material. The deposited dose, which is equal to the product of the incident fluence by the energy loss, must therefore be reduced by the amount of reflected electrons. The proportion of incident electrons entering the irradiated material is equal to 1 − BEY_E, BEY_E being the Backscattered Electron Yield for an incident energy E. To summarize, the contribution of the elastically backscattered electrons for a given energy must be removed from the three relationships for Dose_E(h) (Equations 4-16 to 4-18) by removing the BEY. The dose-depth profile is thus obtained by the final expression:
$Dose = Dose_E(h) \times (1 - BEY_E)$    Equation 4-19
In this work, we propose an expression for the BEY depending on the material and the incident electrons' energy. It is based on data from MicroElec Monte-Carlo simulations, where we have computed the proportion of backscattered electrons. The BEY is modeled by two single values for low and high energies, which are linearly connected in the intermediate region.
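A possible implementation of this two-plateau description and of the correction of Equation 4-19 is sketched below. The plateau values and energy bounds are hypothetical; the actual ones are those listed in Table 4-2.

```python
import numpy as np

def bey(E, E_low, bey_low, E_high, bey_high):
    """BEY described by two plateau values linearly connected in the intermediate region."""
    return np.interp(E, [E_low, E_high], [bey_low, bey_high])

def dose_with_bey(dose_per_entering_electron, E, **bey_params):
    """Equation 4-19: remove the backscattered fraction of the incident fluence."""
    return dose_per_entering_electron * (1.0 - bey(E, **bey_params))

# Hypothetical plateau values, for illustration only
params = dict(E_low=50.0, bey_low=0.25, E_high=1000.0, bey_high=0.05)
for energy in (30.0, 200.0, 2000.0):
    print(energy, dose_with_bey(1.0, energy, **params))
```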
For materials with Z > 22, we have to reduce the BEY value used in the model at higher energies to get a better agreement with the Monte-Carlo simulations. In this case, we have used the elastic yield, which is the fraction of the BEY that only contains the incident electrons that are elastically backscattered. This choice was made to improve the agreement of the dose profiles with the reference data. We can suppose that for higher Z materials, the backscattered electrons are able to travel deeper into the material and lose more energy. This effect is only partially included in the model, by the use of the factor ζ_Retro in Equation 4-16. Consequently, this modification of the BEY is required to reproduce the Monte-Carlo simulations, which compute the full backscattering process for the incident electrons.
The BEY values with their domain of validity are given in Table 4-2. Between the low energy and high energy values, a linear fit is applied. For W, a unique value has been used. The dose-depth profiles, calculated with this model, are first compared with Monte Carlo simulation results from MicroElec, in an aluminum target for energies ranging from 10 eV up to 2000 eV. To get these dose profiles, only the primary electrons have been simulated. The secondary electrons, which would have been created from the energy deposited by the incident electrons, have not been generated in the simulation. As the resulting electronic cascade is not simulated, the dose is independent on the SEY. This allows us to get the dose-depth profile per incident electron fluence. Indeed, the elastically and inelastically backscattered electrons are also simulated, and their contribution is thus included in the dose. 50 000 incident electrons have been simulated for each energy, with a computation time of about 1 min per energy.
As can be seen in Figure 4-10 the analytical model is also in quite good agreement with the Monte Carlo simulation of Walker [22] (9%) at 2 keV. The dispersion between the data of Walker, OSMOSEE [23], MicroElec and the analytical model is higher at 500 eV (25%) than 2 keV (11%), which can be attributed to a difference in the mean free paths used by the different Monte Carlo codes. Indeed, the authors of ref. [22] indicate that the dose is given per primary electron without modeling the secondary electron cascade, in the same way as MicroElec. The dose profiles given by this work's model for the rest of the materials are provided below in Figure 4-11. The analytical formulation of the deposited energy is in relatively good agreement with the Monte Carlo simulations for most materials. The agreement is less satisfactory for lower energies, which can be linked to the limitations of the models used. Indeed, the error increases on average up to 50% at 50 eV and below, up to a factor 2 in worst cases. This is due to the fact that the transport of electrons at very low energies becomes a random-motion walk due to the predominance of the elastic interactions. When an inelastic interaction happens, the electron deposits all its energy on a single point and comes to rest. As a result, the amount of energy deposited between h and h+dh cannot be approximated by a continuous energy loss anymore, which can explain the discrepancies between the model and the simulation at 50 eV and below. For the same reason, the dose-depth profiles have been validated for depths above a few angströms only. Indeed, a depth of a few angströms becomes very close to the atomic distances and the notion of a deposited dose in a volume becomes questionable in both the analytical model and the Monte-Carlo simulations. The quantity of backscattered electrons and the energy they can deposit near the surface may also vary with the incident energy.
Nevertheless, the average error decreases down to 30% at 100 eV. At 250 eV and above, the error for all materials is about 15% on average, and always less than 30%. Consequently, despite the approximations of the model, we can still consider that the agreement with the simulations is satisfactory above 50 eV, and the shape of the dose-depth profiles are generally well reproduced. At 50 eV and below, the model reaches its limits, but still gives a correct reproduction of the depth reached by the very low energy electrons and the dose is estimated within the right order of magnitude, as can be seen in the figures above.
Comparison between Geant4 Monte-Carlo models (MicroElec and GRAS (em_lowenergy))
Our Monte-Carlo reference code (MicroElec) can be compared with the other electromagnetic physics modules of GEANT4. In Figure 4-12, the dose profiles of electrons in Si given by MicroElec and GRAS [38] are plotted. In this study, GRAS [24] is used with the Geant4 em_lowenergy electromagnetic physics, which are continuous processes with a condensed history and multiple scattering approach. MicroElec however uses discrete processes, which are slower but more precise at low energies where the step lengths become nanometric. At 2 keV and above, both models give similar dose profiles. Below 1 keV, the electron energy gets closer to the low energy limit of GRAS, and the dose below a depth of 2E-6 g/cm² given by GRAS is much higher than MicroElec. MicroElec is able to give the dose profiles of electrons down to a few eVs, which is an improvement over the low energy limit of GRAS (250 eV). The comparison between MicroElec (dots) and em_lowenergy GRAS (lines) for the other materials studied in this work can be found in Figure 4-13, and a more detailed comparison between MicroElec and other Geant4 models can be found in ref. [20]. MicroElec has been chosen as the reference for the analytical dose model due to its ability to transport electrons below 250 eV down to a few eVs.
Comparison between the low and high energy models
The high energy model proposed in ref. [11], which includes the range expression given by Equation 4-3, can be used to calculate the ionizing dose, leading to a formula different from our model of Equation 4-16 based on the relationships of section 4.1. In the same way as in the low energy model (section 4.2.1), the ionizing dose in the high energy model can be expressed as:
$Dose_E(h) = \dfrac{d}{dh}\left(\eta_E \cdot E(r_E - h)\right) = I'_E(h)\,\eta_E(h) - I_E(h)\,\eta'_E(h)$    Equation 4-20
Where $r_E = A E \left[1 - \dfrac{B}{1 + C E}\right]$ is the high energy range expression from Equation 4-3. This gives the inverse range function I_E(h):

$I_E(h) = E(r_E - h) = \dfrac{1}{2AC}\left[(r_E - h)\,C - A(1 - B) + \sqrt{\Delta(r_E - h)}\right]$

with $\Delta(x) = \left[A(1 - B) - C x\right]^2 + 4 A C x$, and its derivative:

$I'_E(h) = \dfrac{dE(r_E - h)}{dh} = \dfrac{1}{2AC}\left[-C + \dfrac{2C\left(A(1 - B) - (r_E - h)\,C\right) - 4AC}{2\sqrt{\Delta(r_E - h)}}\right]$

The expressions of the transmission probability and its derivative remain the same as Equation 4-8 and Equation 4-15 respectively. However, the values of p and q are changed, following the expressions given by Kobetich & Katz [9].

At 10 keV, the maximum difference between the high energy analytical model and MicroElec is 40%, and the shapes of the dose profiles are similar. However, the maximum error for the low energy dose model is 20%, thus it can still be considered acceptable. At 14.5 keV, which corresponds to the limit of validity of the low energy model, it underestimates the peak of the dose with an error of 40%. However, the dose deposited near the surface is still more accurate with the low energy model. Indeed, the high energy model is based on the first assumption of Equation 4-12, which does not consider the backscattering process. For the energies below 10 keV, the dose given by MicroElec is significantly underestimated by the high energy model, and the improvements brought by the low energy model are clearly visible.
Development of an analytical model for the secondary electron emission yield
As the ionizing energy released by incident electrons in irradiated materials is dissipated in the form of secondary electrons, the analytical model presented in the previous section can be used to develop simple expressions of secondary emission yield models. The proportion of electrons produced close to the surface of the solid and escaping from the material can also be estimated by combining the analytical dose expression with the extrapolated range and transmission rate models described previously. This approach will be followed in this section. Here, we propose a semi-empirical model based on a similar approach as Dionne's model [1,25]. The simplifying assumptions used by Dionne have been left out to obtain more accurate calculations. This leads to a more complex formulation which cannot be reduced to a simple analytical formula. In return, the proposed model has an angular dependency and is a function of a limited number of parameters having a physical meaning: the work function (Wf), the average energy of the inelastic recoil electrons <Es> and the mean energy I lost by the primary particle to create a secondary electron.
Principle of calculation of the secondary electron emission yield
The secondary electron emission yield can be separated in two different processes [26][27][28]. In a first step, during the slowing down of the incident electrons, secondary electrons are produced as the result of the interaction between the primary beam and the lattice electrons. The second step addresses the transport of the secondary electrons to the surface as well as its crossing.
If one calls G(h, E) the number of secondary electrons produced within a thickness dh at a distance h from the surface, and p(h) the probability that these secondary electrons reach the surface and are emitted into the vacuum, the number of secondary electrons produced at a given depth that escape from the solid is given by:
$G(h, E) \cdot p(h) \cdot dh$    Equation 4-21
The secondary emission yield is then simply the sum of the produced electrons from the surface down to the penetration depth of the incident electrons, ℎ 𝑒 (𝐸). The SEY can be estimated by the following integral expression:
$Y_{SE}(E) = \int_{0}^{h_e(E)} G(h, E) \cdot p(h) \cdot dh$    Equation 4-22
h_e(E) can be evaluated with the extrapolated range (practical range) of the incident electrons [4,29]. The generation term G(h, E) is a function of the number of inelastic interactions induced by the incident electron with the electrons of the medium. It is proportional to the ionizing dose and thus to the average energy loss per unit path length of the incident electrons (dE/dh). The escape probability of secondary electrons p(h) can be approximated with quite good accuracy by an exponential function [1,25,29]: $e^{-C_2 h}$ (C_2 being a fitting parameter). In order to integrate Equation 4-22 and get an analytical expression of the SEY, Dionne [1,25] assumed a constant energy loss along the path of the incident electrons. According to that approximation, the generation term dE/dh is simply given by the following expression:

$\dfrac{dE}{dh}(h) = \dfrac{E}{h_e(E)}$    Equation 4-23

If we assume that the range follows Equation 4-1 [30], Equation 4-22 can be integrated and leads to the following expression for the SEY:
$Y_{se}(E) = \dfrac{c_1\, E}{c_2\, h_e(E)}\left(1 - e^{-c_2 h_e(E)}\right)$    Equation 4-24
C_1 is a constant used to express the deposited dose, which is a deposited energy per unit of mass of target material. But C_1 and C_2 are arbitrary parameters used to fit the model to experimental SEY data. The maximum of SEY (E_max, Y_se,max) is commonly used to define C_1 and C_2. These models are valid only for the calculation of the "true" secondary electron emission yield. They do not take into account the contribution of backscattered electrons (elastic & inelastic), even if they are used to fit the experimental total emission yield. However, the amount of backscattered electrons can be implicitly taken into account by adjusting the C_1 parameter. The main assumption at the basis of this formula, i.e. a constant energy loss function, prevents this model from reproducing faithfully the shape of the experimental SEY.
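For reference, Dionne's expression is straightforward to evaluate once c1, c2 and the range law are chosen. In the sketch below, all numerical values (c1, c2, k, n) are arbitrary placeholders, only meant to reproduce the typical bell shape of an SEY curve, not to represent any specific material.

```python
import numpy as np

def dionne_sey(E_keV, c1, c2, k=0.412, n=1.35):
    """Dionne-type SEY (Equation 4-24) with a power-law range h_e(E) = k * E**n."""
    E = np.asarray(E_keV, dtype=float)
    h_e = k * E ** n                        # range, in a unit consistent with 1/c2
    return (c1 * E) / (c2 * h_e) * (1.0 - np.exp(-c2 * h_e))

# c1 and c2 are usually adjusted so that the curve passes through the measured (E_max, SEY_max)
E = np.array([0.1, 0.2, 0.5, 1.0, 2.0])     # incident energies in keV
print(dionne_sey(E, c1=12.0, c2=40.0))
```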
In this work, we propose to improve the approach used by Dionne [1,25], in order to get a more accurate model that depends on physical parameters which can be measured experimentally or deduced from Monte Carlo modelling. It aims at providing not simply a fitting formula, but an expression based on physical parameters. In addition, the approach proposed here has a relevant angular dependency, which is not the case of the former expressions. For that, we can rewrite Equation 4-22 as the following general expression:
$Y_{se}(E) = \int_{0}^{h_e(E)} \dfrac{Dose(h, E)}{I}\; \eta_{\langle E_s\rangle}(h)\; p_w(w_f, \langle E_s\rangle)\; dh$    Equation 4-25
In that expression, the generation term is given by the integral of the ionizing dose over the path of the primary electron, divided by the mean ionizing potential I, to get the amount of secondary electrons produced between h and h+dh. The escape probability of the secondary electrons is separated in two terms: η_⟨Es⟩(h) is the probability for the secondaries produced at depth h to reach the surface. As we saw in section 4.1, this transmission probability depends on h and on the energy of the secondaries <Es>. The electrons that reach the surface then have a probability to cross the potential barrier given by pw(wf, <Es>). It depends also on the energy of the secondaries, and on the work function wf. We will assume in the following that the average energy of the secondaries that reach the surface of the material is identical to the average energy of all secondaries. This hypothesis is supported by the fact that the secondary emission is driven by the secondary electrons produced near the surface. A large part of these electrons will be able to leave the material without producing any additional interaction and therefore without energy loss.
The goal is then to elaborate analytical expressions combining the dose-depth profile and the two transmission probabilities η_⟨Es⟩(h) and pw(wf, <Es>). It is possible to formulate such analytical functions for Dose(h, E), η_⟨Es⟩(h) and pw(wf, <Es>), but it is very difficult to obtain an expression for the product of these three functions that could be integrated analytically. The final Yse function will therefore be expressed as an integral.

Analytical expression of the SEY based on our low energy range, transmission rate and ionizing dose models

Using the models developed in sections 4.1 and 4.2, Equation 4-25 becomes:

$Y_{se}(E) = \int_{0}^{h_e(E)} \dfrac{Dose(x)}{I}\; \dfrac{1}{2}\,\eta(\langle E_s\rangle, x)\; p_w(w_f, \langle E_s\rangle)\; dx$    Equation 4-26

In Equation 4-26, we assume that the generation of secondary electrons is isotropic, hence the factor ½ applied to η(⟨E_s⟩, x), since only half of the electrons are moving towards the surface. η(⟨E_s⟩, x) is the transmission rate of the secondary electrons created at the depth x with a mean energy <Es>, from our model in Equation 4-8: the probability for these electrons to reach the surface is simply given by the transmission probability through a thickness x, divided by 2. Dose(x) is obtained from our expressions in section 4.2.1, that is to say Equation 4-16, Equation 4-17 or Equation 4-18 depending on the incident energy. No factor is applied to the dose, as it is already given per incident electron, while the elastically backscattered electrons have already been removed from the expressions of the dose profile.
pw(wf, <Es>) is the crossing probability of electrons through the surface potential barrier wf, for the mean secondary electron energy <Es> inside of the material. The crossing probability is limited by two phenomena: the quantum reflection by the potential barrier, and the limit angle θlim above which the electron is reflected by the surface. As only electrons with an energy greater than the work function are considered here, the reflection remains negligible and the crossing probability is mostly affected by the limit angle:
$\theta_{lim}(E) = \arcsin\left(\sqrt{\dfrac{w_f}{E}}\right)$    Equation 4-28
Assuming that the secondary electrons crossing the surface are isotropically distributed, by integrating the flux of secondary electrons from 0 up to θ_lim, the crossing probability becomes:
$p_w(w_f, \langle E_s\rangle) = \dfrac{1}{2}\left[1 + \cos\left(2 \arcsin\sqrt{\dfrac{w_f}{\langle E_s\rangle}}\right)\right]$
Equation 4-29
Using all our models, Equation 4-26 is only valid for energies below 14.5 keV. It is also defined for normally incident electrons, but the dependence on the incident angle is implicit. It can be taken into account by simply replacing the depth x by x/cos(θ) in the integral. That leads to the angle-dependent expression of the SEY, which should be used in place of Equation 4-26.

This model depends on a number of different physical parameters that can be defined with more or less ease. The work function wf can be found in the literature for many materials. The average energy of recoil electrons <Es> and the mean ionizing energy I required to produce a secondary electron are more difficult to estimate. MicroElec Monte Carlo simulations show that <Es> can vary, for the tested materials, from around 10 eV up to 35 eV. Moreover, this energy varies as a function of the incident energy, and depending on the generation of secondary electrons that are considered. For the first generation of SEs, the median energy can be linked to the energy loss function of the material. The optical energy loss functions (OELF) from the database of Sun et al. [32] for Al (small Z metal), Si (small Z semi-conductor), Ge (higher Z semiconductor) and Ag (higher Z transition metal) are shown in Figure 4-15. The median energy of 1st generation SEs is generally close to the plasmon energy when a unique plasmon peak is present in the OELF, as in the case of Si or Al. For other materials however, generally transition metals, the OELF is characterized by a plateau region where energy losses to plasmon and valence band transitions are superposed, as in the example of Ag. In this case, the median energy is close to the center of the plateau, though its value increases with the energy of the primary electrons. Indeed, a wider range of the OELF becomes accessible to the primary electrons when their energy increases, the limit being the energy of the primary electron.
In our case, in order to limit the number of parameters and to get an amount of ionized electrons representative of the population of emitted secondary electrons, the energy 𝐼 lost by the primary electrons when creating a secondary electron has been assumed equal to the mean energy of secondary electrons 〈𝐸 𝑠 〉. Both parameters have been set to the median energy of the 1 st generation SEs, which corresponds to the most probable transferable energy for the incident electrons, so that the model remains consistent with the OELF of the material. Nevertheless, our approach has some limitations and approximations which need to be compensated by the introduction of the parameter 𝜅 in the SEY. First of all, the angular distribution of the secondary electrons generated is not considered in the expression of the transmission probability. The model assumes that all electrons put into motion towards the surface follow a normal direction to this surface, which can lead to an overestimation of the SEY.
Moreover, we have used single values for 〈𝐸 𝑠 〉 and 𝐼, whereas the energy of the 1 st generation SEs generated by the primary particle should vary according to the depth. Indeed the primary electron is slowed down by the previous interactions, so the secondary electrons produced deeper into the material may not have the same energy as the ones produced near the surface. What's more, these 1 st generation SE can create other subsequent generations of secondary electrons. As a result, the median energy of all escapable SEs when all generations are considered becomes lower. This is shown in Figure 4-16 where the median energies of the 1 st generation SEs are compared with the median energy of all subsequent generations (2 nd to n th gen). What is noticeable here is that the median energies of the 1 st gen SE for Si and Al reach a limit value after 200 eV, while the energies for Ag and Ge continue to increase. This is linked to the shape of the OELF, where a plateau is present in the OELF of Ag, while the OELF of Al and Si have a single plasmon peak, and the OELF of Ge has a wider peak. Finally, the simulations show that the proportion of 1 st generation SEs that have escaped from the surface varies between 40 and 60% of all escaped SE, depending on the material and the energy of the primary electrons. Consequently, the energy of the secondary electrons in the escape probabilities and the energy used to divide the dose are very complex to model as their dynamic can be different for each material. Moreover, the energy distribution of the SEs that reach the surface may also be different from the mean SE production energy we have just focused on. A single value for 〈𝐸 𝑠 〉
and 𝐼 is limiting but much simpler to use, hence the introduction of a factor 𝜅 to compensate for all the approximations. The values for this factor have been chosen so that the model is in better agreement with the SEY data from Monte-Carlo calculations with MicroElec. The values for all parameters used in the model can be found in Table 4-3. The values of the work functions 𝑤 𝑓 have been chosen from the values commonly found in the literature, such as Lin & Joy [33].
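Because the integrand cannot be integrated in closed form, the SEY of Equation 4-26 has to be evaluated numerically. The sketch below shows one possible quadrature at normal incidence, with toy dose and transmission profiles and a simplified reading of the barrier-crossing probability; all numbers are placeholders and do not correspond to the fitted parameters of Table 4-3.

```python
import numpy as np

def sey_normal_incidence(dose, eta_s, h_e, E_s, I_ion, w_f, kappa=1.0, n_quad=600):
    """Numerical evaluation of the SEY integral of Equation 4-26.

    dose(x)  : ionizing dose profile per incident electron at depth x
    eta_s(x) : transmission rate of secondaries of mean energy E_s through a thickness x
    h_e      : extrapolated range of the incident electrons
    I_ion    : mean energy lost by the primary per created secondary electron
    """
    x = np.linspace(0.0, h_e, n_quad)
    p_w = max(0.0, 1.0 - w_f / E_s)      # crude barrier-crossing probability for <Es> > w_f
    integrand = dose(x) / I_ion * 0.5 * eta_s(x) * p_w
    return kappa * float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x)))

# Toy ingredients (shapes only): exponential dose profile, short escape depth of secondaries
dose  = lambda x: 25.0 * np.exp(-x / 10.0)      # eV per unit depth
eta_s = lambda x: np.exp(-((x / 2.5) ** 2))
print("SEY ~", round(sey_normal_incidence(dose, eta_s, h_e=30.0, E_s=15.0, I_ion=15.0, w_f=4.2), 2))
```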
Validation of the SEY model with Monte-Carlo and experimental data
In this section, the SEYs given by the analytical model will be compared with MicroElec simulations [4,34], and experimental data from Bronstein & Fraiman [35] and from Joy's database [33,36]. The SEY increases with the angle of incidence because the ionization is produced closer to the surface, increasing the probability for the secondary electrons to escape from the material. We can also notice that the maximum of yield also shifts to higher energies as the incident angle increases. This behavior can be linked to the location of the maximum of deposited dose, i.e. the depth at which the production of secondary electrons is maximum. This can be explained by the fact that, at normal incidence, the depth at which the maximum of dose is deposited is large enough, to prevent most of the electrons coming from this depth to escape the material. By tilting the incident beam, the peak of dose is brought closer to the surface, increasing the amount of secondary electrons able to escape the surface. The consequence is a shift of the energy of the electrons at which the maximum of SEY is reached.
In Figure 4-17, the comparison between Monte Carlo simulations and our analytical approach is made for Al, for incidence angles between 0° and 75°. The agreement is quite good from normal to 60° incidence, and the shape of the secondary emission yield is reproduced faithfully. The SEY is overestimated at 75° however, which shows the limitations of the model. The comparison of the analytical SEY with Monte Carlo simulations and experimental data of Bronstein and Joy is shown in Figure 4-18 for all materials in the case of a normal incidence. The secondary emission yield of "true secondary" electrons given by the analytical model and MicroElec [5,34] is shown in this set of figures for Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag, W. In the experimental references [35,36] however, the SEY is evaluated by counting electrons having an energy lower than 50 eV. Within this definition, inelastically backscattered and secondary electrons are mixed. This can be a source of small deviations from the model. Moreover, due to the surface state of the sample and the measurement conditions, which can both be variable, there is an important spread between the experimental SEY data for some materials. In this case, multiple data sets have been used as a reference, to verify the overall agreement of the analytical model. Some small discontinuities can be seen on the model curves, which correspond to the transitions between the different dose (0.5Er) and transmission probability (2 keV) expressions.
The maximum of the curve is predicted at the correct energy for most materials. A shift can be observed on the SEYs of Be and C, while the SEY of W is overestimated beyond the maximum of the SEY. This can be linked to the limitations of the other analytical expressions used in the SEY model, as they reach their limits when used for low-Z and high-Z materials. The same observation can be made for the values of κ. The κ factor is close to 1 for transition metals but approaches 2 for the lower and higher Z materials.
Overall, with regard to the assumptions of the model, a fairly good agreement is found with the Monte-Carlo data, which was expected as it was used as the reference for the calibration of the other models used in the SEY analytical model and the SEY model itself. The agreement is also satisfying with the experimental SEY from Bronstein and Fraiman and Joy's data. Even if the data of Bronstein and Fraiman have been measured on samples evaporated under vacuum, and thus are considered free from oxidation and contamination, the surface state of the samples may be relatively different from the "ideal" flat surface simulated numerically. The surface state is known to impact significantly the SEY. Both Monte Carlo and analytical models ignore the roughness of the surface which could be very important to consider. In addition, it should be recalled that the model depends also on the backscattered emission yields from MicroElec. The BEYs have been chosen as a couple of values, each value covering a given energy domain. The dependence of the BEY on the incidence angle [5] is also not simulated, which is obviously not realistic. Finally, while the inelastically backscattered electrons are simulated in the expression of the dose, it is done by a very simple model which can be refined. The development of a backscattering emission yield model is thus necessary to get a more relevant model for the total emission yield. Nevertheless, the simulations and the experimental data of Bronstein and Joy can be considered in relatively good agreement for these materials, according to these considerations.
Comparison of key SEY parameters with Monte-Carlo simulations
The maximum SEY (SEYmax) and its energy value (Emax) have been extracted from both the Monte Carlo and model SEY vs. energy curves. The factor κ, whose values are shown in Figure 4-19, was applied to improve the amplitude of the SEY curves and compensate for the approximations of the model. Nevertheless, one can see from the values of κ that the error in amplitude between the analytical model and the Monte-Carlo simulations without this correction is between 5% and 30% for most materials, which is satisfying given the approximations and simplifications of the analytical model. The reason for such a good correlation can be related to the range of low energy electrons, and in particular to the plateau region. This region and the knee of the range/energy curve (Figure 4-6 and Figure 4-7) are particularly important to accurately model the maximum of SEY, which is located around some hundreds of eV. The accuracy of the range/energy curve is closely connected to both the accuracy of the deposited ionizing dose and the electron transmission probability. The height of the plateau of the range/energy curve, which is given by the G(Z) factor of our extrapolated range model (Equation 4-5), closely defines the location of the knee of this curve and the shape of its derivative (dr/dE). Indeed, G appears in the parameters D and Er of the range expression of Equation 4-5, and thus in the derivative in Equation 4-14. The quantity F is also tied to the height of the plateau. In fact, when modifying F the height of the plateau is changed, which then changes the slope of the range between the plateau region below 100 eV and the linear region above 1 keV (Figure 4-6 and Figure 4-7). Since the slope is changed, the derivative is also changed, and so is the dE/dr. The higher the plateau, the greater the dE/dr, the latter being proportional to the ionizing dose. The amount of secondary electrons is itself proportional to the ionizing dose. Analogously, a high range at low energy pushes the knee of the range/energy curve toward higher energies. This knee is clearly correlated to the location of the maximum of SEY.
To show this correlation, the G(Z) factor has been plotted as a function of the SEY data extracted from the Monte-Carlo simulations: the maximum SEY and the energy of the maximum of SEY. Both plots depict a linear behavior. In the case of the maximum SEY value, carbon is an outlying point, since its Z is low but its maximum SEY is about 1.3 (Figure 4-18). This is an exception in the evolution of the SEY with Z, as the next material to have a SEY > 1 is Fe with Z = 26. The correlation improves from 0.47 to 0.85 when C is not considered. Likewise, W is an outlying point for the position of the maximum SEY, which is due to the fact that the uncertainties and discrepancies of the range and dose models are more important for high-Z materials. In this case however, the correlation remains good without removing W, at 0.77. Although these two points are exceptions and show the limits of the model, we can still consider for the remaining materials that there is a strong correlation between the SEY and the range of low energy electrons (< 1 keV). This parameter depends closely on the range/energy relationship, which must be known with good accuracy, and which can itself only be defined if the electron transmission probability is known with good accuracy. Knowing these parameters seems to be sufficient to make a rough estimation of the secondary electron emission of different materials. The approach of the dose model used in this work (based on the transport of low energy electrons) can be compared with the hypothesis used by Dionne [1,25] of a constant energy loss along the penetration depth of incident electrons, where the dose is uniformly deposited over the whole range of the primary particle. In this hypothesis, the energy loss function is given as:
$$\frac{dE}{dh}(h) = \frac{E}{r(E)} \qquad \text{Equation 4-31}$$
Where the range follows a power law based on the Continuous Slowing Down Approximation:
$$r(E) = \frac{E^n}{\beta n} \qquad \text{Equation 4-32}$$
and 𝛽 and n are individual fitting constants which need to be determined for each material. Hence, the dose-depth profile in this approximation is given by:

$$\frac{dE}{dh}(h) = \frac{E}{r(E)} = \beta n E^{1-n} \quad \text{for } 0 \le h \le r(E) \qquad \text{Equation 4-33}$$

In Figure 4-23, the dose given by the constant loss model is compared with the analytical dose model used in this work, in the example of Cu. The parameters 𝛽 and n for the constant energy loss have been chosen respectively as 5.43 and 1, following the work of Plaçais et al. [30], who fitted Dionne's model to the SEY of experimental Cu samples. As can be seen in Figure 4-23, the assumption of a constant energy loss is not able to precisely evaluate the dose-depth profiles, even though it can give accurate SEYs. As a result, the surface dose is significantly underestimated. The constant energy loss dose profile is flat and has the same value for all electron energies, as a result of the choice of parameters by the authors of ref. [30]: they have chosen a value of n = 1, which removes the energy dependence of Equation 4-33. While the maximum ranges given by the constant energy loss model are in good agreement with the analytical model and the Monte-Carlo data at 500 eV and 1000 eV, they are underestimated at 100 and 50 eV. Indeed, below 500 eV the range no longer follows a power-law behavior (Figure 4-6), so the power law in Equation 4-33 becomes invalid. Consequently, the constant energy loss approximation is less realistic than the approach proposed in the analytical dose model used here.
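To make the comparison concrete, the short Python sketch below evaluates the constant energy loss dose profile of Equation 4-33 with the parameters of ref. [30] (β = 5.43, n = 1) and shows that, with this choice, the profile is flat and identical for all incident energies. It is only an illustration of the approximation discussed here; the units follow the power-law fit and are arbitrary for the purpose of the example.

```python
import numpy as np

beta, n = 5.43, 1.0          # fitting constants chosen in ref. [30] for Cu

def practical_range(E):
    """Power-law range of Equation 4-32: r(E) = E^n / (beta * n)."""
    return E**n / (beta * n)

def constant_loss_dose(E):
    """Dose per unit depth of Equation 4-33: dE/dh = beta * n * E^(1-n), flat over the range."""
    return beta * n * E**(1.0 - n)

for E in (50.0, 100.0, 500.0, 1000.0):   # incident energies in eV
    print(f"E = {E:6.0f} eV: range = {practical_range(E):8.1f}, "
          f"dose/depth = {constant_loss_dose(E):.2f} (same value for all E when n = 1)")
```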
We can also study how the SEY can be correlated to the total amount of dose deposited within the escape depth of electrons, which will be referred to as "surface energy deposit". Indeed, the quantity of energy deposited in the first few nanometers of the material is the result of a compromise between the incident energy available to generate the secondaries and the capability of the medium to stop incident particles near the surface (density of the material). At very low energy (<~100 eV) the penetration distance of electrons of increasing energy remains constant.
In the plateau region of the range/energy function (Figure 4-7), electrons of increasing incident energy deposit more and more energy in a constant thickness. Consequently, the energy deposit increases rapidly as a function of the incident energy. The maximum of the surface energy deposit is reached when the practical range function starts to become steeper (at around a few hundred eV). Past this stage, the deposition is spread deeper into the target material and consequently starts to decrease near the surface.
Considering this, the energy of the maximum of SEY should be close to the energy where the dose deposited within the escape depth of electrons is also maximal. From the transmission rates, the escape depth of secondary electrons can be estimated at about 2 nm, and the energies of the maximum SEY and of the maximum surface energy deposit are indeed found to be correlated. However, there is a much weaker correlation (0.56) between the amplitudes, as shown in Figure 4-24(b): a significant dispersion is visible between the maximum SEY and the maximum surface energy deposit in this figure. Plotting the ratio of the surface energy deposit to 𝐼, which is the generation term for the first 2 nanometers, also does not give a correlation with the maximum SEY.
If we plot the analytical SEY and compare it to the surface energy deposit, as done in Figure 4-25 for a few materials, we can see that the amount of deposited energy closely follows the evolution of the SEY. A similar observation has been made by Pierron et al. [23] for Si and Al, who showed that the percentage of deposited energy remaining at the end of the electron cascade is inversely proportional to the SEY, since the other part has been dissipated in the form of secondary electrons. This also highlights the importance of having a precise evaluation of the dose in the first few nanometers for SEY calculations, which, as seen in Figure 4-23, is not possible with the constant energy loss approximation. The same observation regarding the lack of correlation between the amplitudes can be made in Figure 4-25, where the difference in amplitude between the dose and the SEY varies for each material (the SEYs and doses have been plotted using the same axis scales).
In conclusion, this shows that the other models for the transport of electrons, namely the transmission and surface crossing probabilities, are of crucial importance in order to get an accurate amplitude of the SEY. Indeed, the secondary electrons generated from the surface dose will have different energies and different probabilities of reaching and crossing the surface, which can vary significantly between materials. This also shows that the generation of electrons needs to be correctly estimated in the integral through its upper limit he(E); hence, knowledge of the surface energy deposit alone is not sufficient to obtain the amplitude of the SEY.
Discussion of the approach: Limits at very low energies
At very low energies and for low projected path lengths, the notions of an average range, transmission probability or average dose become debatable. Indeed, at higher energies, many inelastic interactions can be made by the electrons in a unit path length dx. The ranges of electrons follow a Gaussian distribution with a well-defined average range and a limited spread.
A single value of the range can thus be extrapolated and used as a representative parameter for an individual electron. When the electron energies become very low, their projected path lengths reach the interatomic distances; they are strongly scattered by elastic interactions and can only make a couple of inelastic interactions before coming to rest. Consequently, the number of interactions made per unit path length dx is very small and the projected path length distribution has a significant spread. Crucially, the depths reached by very low energy electrons become close to the interatomic distances (a few angströms), which is another limitation on the number of interactions. The notion of a continuous function for the transmission probability through a few atomic layers may also become debatable. Finally, the Monte-Carlo code itself reaches its limits at very low depths and energies. Indeed, the material cannot be treated as a bulk material anymore, and the use of the dielectric function theory as in the case of a bulk material is questionable.
To sum up, while the ranges and transmission rates evaluated at high energy are representative of the average path for an individual electron through a unit path length dx, the same extrapolation cannot be made at low energies where these quantities become statistical. Instead of an average electron, these parameters are applicable to a flux of a large number of electrons, where the global range and transmission rate of the flux should follow the models detailed in this work. An analogy can be made with the case of photons going through a certain thickness.
A photon is fully absorbed by the material at the first interaction it makes, as in the case of very low energy electrons. Consequently, the transmission probability derived for photons is not representative of the path of each individual photon but of the flux of photons as a whole.
Conclusion of Chapter 4
In this chapter, we have proposed an analytical model for the computation of the secondary electron emission yield. The model is based on a physical approach, which required us to also develop analytical models for the different processes of electron transport that are involved in the secondary electron emission.
First, Monte-Carlo simulations of the penetration depths of electrons have been performed for 11 materials and used to compute the transmission rates and extrapolated ranges of electrons.
An analytic formula for each of these two quantities is proposed, depending on the atomic number and two parameters that are specific for each material. The simulation results have been used to calibrate these expressions. A correlation for the material-dependent parameters F and G can be established with the atomic number Z. In the case of the plateau height G, a fairly strong correlation has been found, indicating that the behavior of very low energy electrons (below 100 eV) is strongly dependent on the atomic properties of the material. These correlation laws can be used to extend the model to new materials.
We have then proposed a dose model based on the energy loss process of electrons in matter.
Since the model is given per incident electron, corrections have been introduced to simulate the elastically and inelastically backscattered electrons. These are respectively the backscattered electron yield of the material, and the computation of the energy deposited by the electrons traveling back to the surface corrected by a retrodiffusion factor. This also allows the model to better reproduce the dose-depth profiles and give a better estimation of the surface dose induced by the inelastically backscattered electrons. The accuracy of the model has been checked with MicroElec and other Monte-Carlo simulations for 11 materials. The model is less accurate at 50 eV and below, and some discrepancies can be observed depending on the energy or the material, but on the whole the dose-depth profiles are faithfully reproduced by the model. The agreement with the reference data from MicroElec is acceptable (error less than 30% above 100 eV), as well as the one with the other Monte-Carlo codes (Walker, OSMOSEE, GRAS) above 1 keV. This low energy analytical model is also a definite improvement over the high energy model below 10 keV.
Finally, these analytical models were combined in an expression for the SEY. The comparison of the predicted SEY with Monte-Carlo and experimental data shows a satisfactory agreement for the 11 studied materials (Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag, W). This work has also demonstrated the very strong correlation of the SEY parameters (SEYmax and Emax) with the range of very low energy electrons (plateau region of the range/energy curve). It highlights the fact that the SEY requires a more accurate description of the transport of low energy electrons than the hypothesis of a constant energy loss used in some analytical SEY models.
However, this analytical approach presents some strong limitations. In principle, the models can be applied to any monatomic target material, but the parameterized functions they depend on have to be validated for each new material. The SEY model also depends on the knowledge of the average energy of secondary electrons, which is a difficult parameter to determine. This parameter has been defined in this work using Monte-Carlo simulations, which allowed us to show that the average energy of secondaries <Es> can vary significantly with the primary electron energy. However, we have also shown that <Es> is strongly dependent on the shape of the Energy Loss Function (ELF), which dictates the energy loss distribution. As a result, <Es> can be chosen close to the statistical median of the ELF.
To get a better prediction of the total emission yield, a more accurate expression for the backscattering of electrons must also be added to the model presented here, in order to get all the different contributions and the variation in energy. Finally, the surface state of the material is known to strongly affect the SEY, which is neglected by this model that is only valid for ideal materials having no contamination, no oxidation and no roughness. In principle, the model can be directly extended to stacks of different materials. It could be used to account for oxidized or contaminated surfaces. These are various possible improvements of the model, which are considered for future developments. Nevertheless, the analytical models presented here can be used as input data in other simulation codes, and provide an improved computation time compared to the Monte-Carlo models.
Finally, this model is a good basis for an extension to compound or layered materials, as its current version has been validated in this work for monoatomic and monolayer materials only.
The effect of surface roughness can also be included in the form of geometrical models [37,38]. These improvements will allow the model to be adapted to the SEY of technical materials.
Chapter 5: Development of a low energy charge transport model to simulate the secondary electron emission and charge buildup in silicon dioxide
Introduction
With the analytical model presented in Chapter 4, we now have another tool that can be used for the study of secondary electron emission. The parameters of this model can be modified to model the emission yield of insulators. However, it cannot simulate the evolution of the TEEY caused by the charge buildup in the material. The Monte-Carlo model we have developed in Chapter 3 can also simulate the TEEY of a fresh insulator sample in the static case, but it is also unable to simulate the evolution of the TEEY according to charging. As the goal of this PhD thesis is to model the experimental TEEY measurements on insulators that are affected by the charge buildup, we will have to extend either of these models with new charge transport models.
Various works have followed either of these approaches, with silicon dioxide being by far the most studied material. This means that several of the parameters that may be needed by our model (trap density, capture cross section…) should be more readily available for this material. Moreover, we already have a model of the transport of low energy electrons in SiO2 down to the electron affinity of 0.9 eV, and the simulated TEEY in the static case is coherent with other simulations. For these reasons, we have chosen to develop our charging simulation model for SiO2 only, and we will focus on this material in this chapter and Chapter 6. However, we shall create a model that is as general as possible, so that it can be extended to other insulators if the simulation parameters can be found. A first family of approaches found in the literature describes the charge buildup macroscopically, with transport equations of the form:

$$\frac{\partial \rho(x,t)}{\partial t} + \frac{\partial J(x,t)}{\partial x} = S(x,t), \qquad J(x,t) = \sigma(x,t)\,F(x,t) \qquad \text{Equation 5-1}$$

where 𝐹(𝑥, 𝑡) is the electric field, 𝜌(𝑥, 𝑡) is the charge density, 𝐽(𝑥, 𝑡) is the current density of charges, 𝜎(𝑥, 𝑡) is the conductivity, and 𝑆(𝑥, 𝑡) is a source term.
These models are faster than Monte-Carlo codes; however, their description of the charge transport is more macroscopic, since they simulate the charge densities globally. The interactions of the charges are not modeled explicitly either: they are contained in macroscopic terms of the equations that describe the variation of the charge density with time, since this variation depends on individual interactions such as the trapping of the secondary electrons or the generation of holes by inelastic interactions. On the other hand, our objective is to understand the effect of the charge buildup on the electronic cascades, the transport of electrons and the secondary electron emission. Consequently, we have chosen to use a Monte-Carlo code, which allows us to follow the particles individually and precisely model every single interaction made by the charges in the material. As for the analytical approach, many Monte-Carlo codes can be found in the literature for the simulation of secondary electron emission and charge transport. Nevertheless, as mentioned before, they focus on the external and global charging effects, whereas we are concerned in this study with the internal charging effects and their influence on the transport of electrons.
In this regard, we will have to model the transport of the thermalized electrons, contrary to our current Monte-Carlo model, which stops the transport of electrons when they cannot overcome the surface potential barrier. We will also have to model the transport of the holes created during inelastic interactions, which were completely neglected before. The electric field generated by the charge density influences the drift of these very low energy particles, so it needs to be computed and taken into account in the simulation. For this, we can take advantage of the built-in field classes of Geant4 to apply an electric field to particles, but we still have to compute the field itself. Finally, for all particles, the trapping, detrapping and recombination processes need to be implemented. In essence, by simulating the transport of the charge carriers, we want to model the radiation induced conductivity created by an incident electron cascade and its influence on the subsequent cascades.
In this chapter, we will present a new Monte-Carlo code based on the Geant4 Monte-Carlo model developed in Chapter 3 for SiO2. This code follows an iterative approach in time, since the computation of the field and the charge densities is done dynamically. Indeed, we need to simulate the evolution of the charge buildup if we want to see the evolution of the TEEY with time. Hence, instead of a single simulation of the TEEY over several energies, we have to model the transport of incident electrons of a single energy over a given time, by simulating successive electron cascades. Most importantly, the results of the previous electron cascade N-1 are reused and have an impact on the next electron cascade N, so the different iterations of the simulation are not independent. This is in strong contrast with the approach of Geant4, where simulation runs are completely independent. Nevertheless, this iterative approach is mandatory in order to correctly model the effect of internal charging on the TEEY and its evolution in time. During this PhD thesis, we have also made experimental measurements of the TEEY on SiO2 thin film samples, including time-resolved measurements for the validation of our model. We will compare the results of the model developed in this chapter with the experiments in Chapter 6, and use our model to explain the experimental observations.
In this chapter and Chapter 6, we make the distinction between ballistic electrons and drift electrons. The incident electrons and the secondary electrons generated in the material will be referred to as ballistic electrons. They are tracked until they escape the material and are collected by the electron detector surrounding the sample. The transport of ballistic electrons is also stopped if their energy falls below the surface potential barrier (0.9 eV); hence, ballistic electrons always have an energy above 0.9 eV. They follow all the interaction models shown in Chapter 3 and are subjected to the elastic, inelastic, surface interaction, optical phonon and acoustic phonon processes. If a ballistic electron becomes unable to escape, we consider that it is thermalized and becomes a drift electron. These electrons follow a different regime of transport known as drift, which is dominated by phonon collisions and trapping effects. Their energy comes from the thermal agitation, and they can be accelerated by an electric field. The energy of the drift electrons is related to the Boltzmann distribution of speeds, but we can consider that they have an energy of 3/2 kT on average, which gives 40 meV at room temperature (300 K) in the absence of an electric field. Due to electrostatic conservation, we also simulate the transport of the positively charged holes in the material, which are assumed to follow the same drift regime as the drift electrons.

In the following, we will also be mentioning several charge densities, defined in Table 5-1: 𝑛 is the number of charges stored in the counters of the simulation; 𝑁 is the number of trapped charges per cm³, which can be multiplied by a cross section 𝜎 (cm²) to get a mean free path for the trapping or recombination processes; lastly, the charge densities 𝜌 used in Poisson's equation are expressed in C/cm³.

The simulation configuration is presented in Figure 5-1. A flat rectangular sample of SiO2 on a Si substrate is placed in a spherical electron collector set to the ground. The thickness 𝐿 of the SiO2 layer can be freely adjusted. For practically all simulations, we have used a thickness of 20 nm, which is the thickness of the experimental samples; however, bulk materials of a few µm can also be simulated. An electron gun set to the ground sends electrons at normal incidence on the sample, with a distance ℎ between the gun and the surface. The reference for the 𝑧 axis is the surface of the sample: positive 𝑧 values are in the material, while negative values are in vacuum above the surface.
The dielectric sample is formed by a 1D mesh in depth, which is used for the computation of the electric field and the sampling of the charge densities. The 1D approximation can be considered valid, as the surface irradiated by the gun (cm²) is much larger than the thickness of the samples (20 nm) studied here, so that the radial field will be negligible compared to the field in depth. The mesh is made of a series of nodes 𝑧 𝑖 spaced by an interval ∆𝑧 𝑖 = 𝑧 𝑖+1 -𝑧 𝑖 , from the surface of the material (𝑧 0 ) down to the contact between the SiO2 and the Si layer (𝑧 𝑛 ). The nodes define the cells of the mesh as 𝐶 𝑖 [𝑧 𝑖 ; 𝑧 𝑖+1 [. We store the densities of charge carriers 𝜌 𝑖 trapped between 𝑧 𝑖 up to 𝑧 𝑖+1 in the i th cell of the mesh 𝐶 𝑖 .
For the computation of the electric field, we assume that all charges in the cell 𝐶 𝑖 are located on the i th node 𝑧 𝑖 of the mesh. The density of charges 𝜌 𝑖 is then plugged into Poisson's equation to get the potential at a given node 𝑉 𝑖 :
$$\left(\frac{\partial F(z)}{\partial z}\right)_i = \Delta V_i(z_i) = -\frac{\rho_i(z_i)}{\epsilon_0 \epsilon_r} \qquad \text{Equation 5-2}$$
The mesh also has a surface 𝑆 𝑚𝑒𝑠ℎ that needs to be defined accordingly, since it will be used for normalization of the charges in the computation of volumetric densities. We want to model an electron gun that has current densities from 10 -8 to 10 -5 A/cm², which can irradiate a surface from 0.1 cm² up to a few cm², and our samples have a diameter of a few cm. Consequently, it seems pertinent to use an elementary surface 𝑆 𝑚𝑒𝑠ℎ = 1 cm² for the normalization.
The sample holder can be biased to a set potential in the experiment, generally +27 V or -9 V in DEESSE. This potential, along with the potential of the electron gun (0 V) and the behavior of the electric field at the surface (Gauss' law), need to be taken into consideration as initial conditions for the resolution of Poisson's equation. The discretization of this equation will be presented in section 5.2.2.
The mesh used in the simulation is made of about 200 nodes for a 20 nm sample, which gives an average step of 1 angström per node, and the first node of the mesh is set at 0.5 angström from the surface. The rest of the nodes are logarithmically spaced. This allows us to have a very fine mesh close to the surface, which is where most secondary electrons are produced. Indeed, we need cells of less than a nm of thickness in the first few nm below the surface, since most holes are created there, but we do not need this amount of refinement past the escape zone of the secondary electrons, which is around 10-15 nm in our simulations of SiO2. Hence, we can use a much broader mesh for the rest of the material, particularly beyond the region of implantation of the primary electrons. With this logarithmic distribution, the mesh broadens as we move deeper into the material. This is especially useful in the case of bulk materials, where several hundreds of nm may contain no deposited charge at all, or for electrons of a few keV, whose implantation region is much deeper (several tens or hundreds of nm) than the production region of the secondary electrons (10 nm). The general procedure of the iterative Monte-Carlo simulation follows these 4 phases:
1. The incident electrons are emitted by the electron gun at normal incidence to the surface. The transport of the ballistic electrons is simulated, including all interaction models from Chapter 3. A ballistic electron is stopped when it escapes from the material and hits the detector, or if its energy falls below the surface potential barrier, which means that it is unable to escape. In the latter case, we save the position of the ballistic electron; it is assumed to be thermalized and will become a drift electron in the following steps. Ballistic electrons can be captured by free traps, or recombine with trapped holes. The trajectories and energies of the electrons are modified by the electric field following the classical equation of the electrostatic force ($\vec{a}(z) = \frac{q}{m^*}\vec{F}(z)$), using the built-in tools of Geant4 for field handling. Holes are also created whenever an inelastic interaction happens and a secondary electron is generated. The thermalized electrons and new holes are added to two stacks of "new drift holes" and "new drift electrons". They will be used for the computation of the charge densities, and as a list of charges to be transported in the next steps.
2. When the tracking of all of the ballistic electrons is finished, the positions of the created holes and thermalized electrons resulting from the electron cascade are sampled in depth along the 1D mesh. The densities of trapped holes and electrons from the previous cascades are also sampled. These densities are updated for the computation of the capture probabilities, which depend on the density of free and occupied traps. The total charge density obtained is used to update the external and internal electric field, using Poisson's equation. This equation is discretized using a resolution scheme along the 1D mesh.
3. The detrapping of the trapped electrons and holes is computed according to the detrapping probability. By multiplying the probability of detrapping for a given trap in a given cell of the mesh by the number of trapped charges in this cell, we can obtain the number of charges detrapped from the cell.
4. The transport of the drift holes and electrons is simulated. The positions of creation of the holes and of thermalization of the electrons during the electron cascade of phase 1 were saved to compute the charge distribution, so these charges are generated here at their exact position of thermalization (for electrons) or creation (for holes). In the first version of the simulation, the exact position of the trapped charges was also saved, and they were generated from their exact position of capture. However, this created an excessive consumption of memory, which made the program crash if too many trapped particle positions were stored (more than 20 million). This would systematically happen when irradiation times of several tens of ms were simulated. To address this issue, the current version of the program only stores a counter of the number of charges trapped in a given cell of the mesh. As we know which cell a given particle was detrapped from, its position of generation is randomly drawn in the cell using a uniform law: for a cell $C_i = [z_i; z_{i+1}[$ and a random number $R \in [0;1]$, a detrapped particle from this cell will be generated at a depth $z = z_i + R(z_{i+1} - z_i)$. The distribution of depths of trapped charges in a given cell is probably not uniform, so we are making an approximation here, but the error should be reduced if the mesh is fine enough. The detrapped charges are also added to a stack of either detrapped electrons or detrapped holes. The drift particles from the electron cascade and from the detrapping are both generated with a thermal energy $E = \frac{3}{2}kT$ in a random direction, to take into account the thermal agitation. Their trajectories can also be distorted by the electric field. The charges are followed until they are captured by a trap. Depending on the filling status of the trap (empty, or filled by a trapped particle of the opposite sign), the drift particle is either saved as a trapped particle, or recombines with the particle that is already in the trap. In the case of capture by a free trap, the trapped charge density of the corresponding cell is increased.
In the case of recombination, both charges are deleted from the simulation, and the trap is freed. The drift particles are generated in the following arbitrary order in 4 distinct runs: 1) Drift holes from the new electron cascade 2) Drift electrons from the new electron cascade 3) Drift holes from the detrapping 4) Drift electrons from the detrapping.
Obviously, if the detrapping computation returns that no charges were detrapped during the simulation step, only the transport of the drift particles created during the electron cascade will be simulated.
When all drift particles have been transported, a new simulation step begins: ballistic electrons are sent again by the gun and we go through the 4 phases again. The distribution of the implanted charges and the electric field generated by this distribution are also computed after each step, summing all the distributions of charges. Hence, the volume charge densities used in Poisson's equation for the electric field follow the sum $\rho = \rho_{new} + \rho_{shallow\ traps} + \rho_{deep\ traps}$, where we add the density of charges generated in the electron cascade ($\rho_{new}$) to the densities of trapped charges. Given the large time intervals to be simulated (a few ms), a time step of 𝜏 = 1 µs has been attributed to each iteration of the simulation. The number N of incident electrons to be sent during each iteration of the simulation is then obtained from the incident current 𝐼 using the relation:
$$N_{inc}(\tau) = \frac{I\tau}{e} \qquad \text{Equation 5-3}$$
However, for an incident current of 1 µA as in ONERA's experimental setup DEESSE, this means that 6 250 000 electrons would have to be sent for each simulation step, which would lead to an unreasonable computation time. Consequently, we have to make another approximation to keep the simulation time acceptable. Instead of sending N_inc incident electrons per simulation step, we choose to send a more reasonable number of incident electrons N_eff per step, related to the true number of electrons by a charge bias factor:

$$\beta = \frac{N_{inc}(\tau)}{N_{eff}} \qquad \text{Equation 5-4}$$
In this approximation, we suppose that a single particle of the simulation is actually representative of the transport of 𝛽 real particles. For instance, when a hole is detrapped or an electron recombines in the simulation, this means that 𝛽 holes have detrapped or 𝛽 electrons have recombined during the time step 𝜏. There is a compromise to be made on the number of incident electrons per step N_eff, between computation time and statistical consistency. While a low number of incident electrons greatly improves the computation time, the statistical dispersion becomes very important and the consistency of the simulation is severely degraded. For these reasons, an N_eff of 500 to 1000 electrons per step has been chosen, to speed up the simulations while keeping a good statistic. This gives a charge bias factor of 𝛽 = 12500 to 6250 for an incident current of 1 µA. The TEEY results are degraded for an N_eff lower than a few hundred, but increasing N_eff past 1000 did not seem to change the TEEY results.
The trapped charge densities stored in the simulation also need to be multiplied by the charge bias factor when computing the capture mean free path, in order to get the real number of particles trapped in a given cell and not the effective number of trapped particles actually stored in the simulation. So, for a given number of trapped particles $n_i$ (unitless) stored in a simulation, the density of trapped charges $N_i$ (cm⁻³) is given by:

$$N_i\ (\text{cm}^{-3}) = \frac{n_i \beta}{\Delta z_i S_{mesh}} \qquad \text{Equation 5-5}$$
where $\Delta z_i$ is the thickness of the cell, and $S_{mesh}$ = 1 cm² is the surface of the mesh.
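To make these normalizations concrete, the short Python sketch below builds a logarithmically refined 1D mesh, computes the charge bias factor of Equation 5-4 and converts a counter of trapped particles into the volumetric density of Equation 5-5. It is only an illustrative sketch with assumed values (20 nm thickness, 200 nodes, 1 µA, N_eff = 1000, an arbitrary counter value); it is not the actual code of the simulation.

```python
import numpy as np

E_CHARGE = 1.602e-19  # C

# Logarithmically refined 1D mesh: fine near the surface, broader at depth
L = 20e-7            # sample thickness in cm (20 nm)
n_nodes = 200
z_first = 0.5e-8     # first node at 0.5 angstrom (cm)
z_nodes = np.logspace(np.log10(z_first), np.log10(L), n_nodes)  # node depths
dz = np.diff(z_nodes)                                           # cell thicknesses

# Charge bias factor (Equation 5-4): one simulated particle stands for beta real ones
I_inc = 1e-6         # incident current (A)
tau = 1e-6           # duration of one simulation step (s)
N_eff = 1000         # electrons actually simulated per step
N_inc = I_inc * tau / E_CHARGE        # Equation 5-3, about 6.25e6 electrons
beta = N_inc / N_eff                  # about 6250

# Volumetric density of trapped charges (Equation 5-5) for one cell
S_mesh = 1.0         # normalization surface (cm^2)
n_trapped = 42       # counter stored in the simulation for cell i (example value)
i = 10
N_i = n_trapped * beta / (dz[i] * S_mesh)   # trapped charges per cm^3
print(f"beta = {beta:.0f}, N_i = {N_i:.3e} cm^-3")
```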
During the experiment, the TEEY is sampled with two current measurements over the duration of a pulse, with a relaxation period between two pulses. So, a single TEEY point is obtained per pulse, as the average of the TEEY over the duration of the pulse. To reproduce this measurement procedure, the TEEY returned by the simulation at a given time is the average of the TEEY over 50 simulation steps, which amounts to a duration of 50 µs. This also limits the number of points in the output data and reduces the statistical noise. The simulation phases 1 to 4 are followed during the duration of the pulse. At the end of the pulse, the relaxation period is also simulated. This is done by cutting the generation of incident electrons and removing phase 1 from the simulation step: at each step, no incident electrons are sent on the target, but the program still computes the evolution of the charge densities and the electric field. It then simulates the transport of the drift particles that are detrapped during the relaxation process, before refreshing the electric field after the charges have drifted.
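The overall structure of one pulse followed by a relaxation period can be summarized by the schematic loop below. The function bodies are trivial stubs standing for the four phases described above (they are placeholders, not routines of the actual code); the sketch only illustrates how the phases are chained and how the TEEY is averaged over 50 steps of 1 µs.

```python
import random

# Placeholder stubs standing for the four simulation phases (no real physics here)
def simulate_cascade():      return 1.0 + 0.1 * random.random()   # phase 1: returns a TEEY sample
def sample_densities():      pass                                  # phase 2: update charge densities
def solve_poisson():         pass                                  #          and refresh the field
def compute_detrapping():    pass                                  # phase 3: thermal/field release
def transport_drift():       pass                                  # phase 4: drift of holes/electrons

def run_pulse(n_pulse_steps, n_relax_steps, average_over=50):
    """Schematic driver for one pulse followed by a relaxation period (1 step = 1 µs)."""
    teey_points, teey_buffer = [], []
    for step in range(n_pulse_steps + n_relax_steps):
        in_pulse = step < n_pulse_steps
        if in_pulse:
            teey_buffer.append(simulate_cascade())
        sample_densities()
        solve_poisson()
        compute_detrapping()
        transport_drift()
        if in_pulse and len(teey_buffer) == average_over:
            teey_points.append(sum(teey_buffer) / average_over)   # one TEEY point per 50 steps
            teey_buffer.clear()
    return teey_points

print(run_pulse(n_pulse_steps=200, n_relax_steps=100)[:2])
```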
Computation of the electric field with a 1D Poisson solver
The mesh presented in section 5.2.1 is used for the discretization of Poisson's equation. This discretization has been done using the explicit method, following the PhD thesis work of R. Pacaud [1-3], who proposed a 1D resolution scheme for Poisson's equation in the THEMIS code. This gives the Poisson equation at a given node i:
$$\left(\frac{\partial^2 V}{\partial z^2}\right)_i = \frac{\dfrac{V_{i+1}-V_i}{\Delta z_i} - \dfrac{V_i - V_{i-1}}{\Delta z_{i-1}}}{\dfrac{\Delta z_i + \Delta z_{i-1}}{2}} = -\frac{\rho_i}{\varepsilon} \qquad \text{Equation 5-6}$$
With 𝜀 = 𝜀 0 𝜀 𝑟 . From there we can isolate the coefficients of the potential levels 𝑉 𝑖 as
$$\frac{2\varepsilon}{\Delta z_{i-1}(\Delta z_i + \Delta z_{i-1})} V_{i-1} - \frac{2\varepsilon}{\Delta z_i \Delta z_{i-1}} V_i + \frac{2\varepsilon}{\Delta z_i(\Delta z_i + \Delta z_{i-1})} V_{i+1} = -\rho_i \qquad \text{Equation 5-7}$$
This equation can be rewritten in a more general form:
$$a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i \qquad \text{Equation 5-8}$$
For a given node 𝑖 of the mesh, we can identify from Equation 5-7 the coefficients $a_i$, $b_i$, $c_i$, which depend on the mesh size and the permittivity; the unknown variables $x_i$, which are the potential levels; and the coefficients $d_i = \rho_i$, which are the charge densities on the nodes of the mesh. However, the charge densities in Poisson's equation have the dimension of a volumetric density, while the charge densities sampled in 1D along the mesh are lineic densities. The number of charges in a given cell needs to be normalized by the volume of the cell to get a volumetric density, which gives:
$$\rho_i\ (\text{C/cm}^3) = \frac{n_i e \beta}{\Delta z_i S_{mesh}} = N_i e \qquad \text{Equation 5-9}$$
Equation 5-8 should be valid for all nodes of the mesh; hence the full system can be rewritten in matrix form:
$$\begin{bmatrix} b_1 & c_1 & & & 0\\ a_2 & b_2 & c_2 & & \\ & a_3 & b_3 & \ddots & \\ & & \ddots & \ddots & c_{n-1}\\ 0 & & & a_n & b_n \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{bmatrix} \qquad \text{Equation 5-10}$$
This matrix equation can be solved with Thomas' algorithm [1-3], an iterative algorithm able to solve systems involving a tridiagonal matrix. It allows us to determine the unknown variables $x_i$ from the coefficients $a_i$, $b_i$ and $c_i$. From Equation 5-7, we can see that $a_i$, $b_i$ and $c_i$ only depend on the discretization steps of the mesh, which are known for any node i. $d_i$ is also known, since the charge distribution in depth has been sampled before solving Poisson's equation. However, some boundary conditions need to be verified. From Figure 5-1, we can see that the potential has to follow three conditions: at the position of the electron gun, at the surface, and at the SiO2/Si interface. These will be our boundary conditions.
The electron gun, which is placed at a height -ℎ away from the material's surface, is set to the ground. This gives the first boundary condition
$$V_{gun} = 0\ \text{V} \qquad \text{Equation 5-11}$$
However, we need to rewrite our boundary conditions in the form of Equation 5-8 in order to identify the coefficients a, b, c and d. Hence, we get the coefficients:
$$\begin{cases} a_0 = 0\\ b_0 = 1\\ c_0 = 0\\ d_0 = V_{gun} = 0 \end{cases} \qquad \text{Equation 5-12}$$
At 𝑧 = 𝐿, we have the interface between the SiO2 layer and the Si substrate. In the experiment, the sample holder is biased to a potential V_pol = -9 V when measuring the emitted current, so this is the potential we will use here. We also assume that the silicon layer is conductive enough that the potential at the interface with the SiO2 layer is equal to the bias set on the metallic sample holder, which also means that we assume there is no electric field in the Si layer. This gives the boundary condition:

$$V_n = V_{pol} = -9\ \text{V} \qquad \text{Equation 5-13}$$

and, by identification with Equation 5-8, the corresponding coefficients:

$$\begin{cases} a_n = 0\\ b_n = 1\\ c_n = 0\\ d_n = V_{pol} = -9\ \text{V} \end{cases} \qquad \text{Equation 5-14}$$

Finally, at the vacuum/SiO2 interface, the potential is not forced to a given value, but the electric field needs to follow Gauss' law. This law tells us that the scalar product between the surface normal $\vec{n}$ and the difference of the displacement vectors in the material $\vec{D}_M$ and in vacuum $\vec{D}_V$ is equal to the surface charge density $\sigma_S$, which is equal to 0 here. So we have the following relationship:

$$\vec{n} \cdot (\vec{D}_M - \vec{D}_V) = \sigma_S = 0 \;\Rightarrow\; D_V = D_M \qquad \text{Equation 5-15}$$
From there we can convert the displacement vectors into the electric field as
$$F_V = \varepsilon_r F_M \qquad \text{Equation 5-16}$$
To find an equation in the form of Equation 5-8, the potential levels need to appear. We can discretize Equation 5-16 as a potential gradient along the mesh and make the potentials at the nodes of the mesh appear as:
$$\frac{V_s - V_{gun}}{h} - \varepsilon_r \frac{V_1 - V_s}{\Delta z_0} = 0 \qquad \text{Equation 5-17}$$
From Figure 5-1, the electron gun is not included in the mesh. The mesh begins at the surface of the material, with the surface potential $V_0 = V_s$ being the first node. The next node of the mesh is thus the potential $V_1$, located at a distance $\Delta z_0 = z_1 - z_0$ from the surface, and we have $V_{-1} = V_{gun}$. We then transform Equation 5-17 to get an equation in the form of Equation 5-8:
$$(V_s - V_{gun})\left(-\frac{\Delta z_0 \varepsilon_0}{h\varepsilon}\right) + (V_1 - V_s) = 0 \;\Rightarrow\; V_{gun}\frac{\Delta z_0 \varepsilon_0}{h\varepsilon} - V_s\left(1 + \frac{\Delta z_0 \varepsilon_0}{h\varepsilon}\right) + V_1 = 0 \qquad \text{Equation 5-18}$$
By identification, we can get the coefficients:
$$\begin{cases} a_1 = \dfrac{\Delta z_0 \varepsilon_0}{h\varepsilon}\\[4pt] b_1 = -\left(1 + \dfrac{\Delta z_0 \varepsilon_0}{h\varepsilon}\right)\\[4pt] c_1 = 1\\ d_1 = 0 \end{cases} \qquad \text{Equation 5-19}$$
Now that we know the values of the coefficients 𝑎 𝑖 , 𝑏 𝑖 , 𝑐 𝑖 and 𝑑 𝑖 at any point of the mesh, we can determine the potential levels with Thomas' algorithm. The principle of the algorithm is to first compute new values of c and d, namely 𝑐 𝑖 ′ and 𝑑 𝑖 ′ , from the previous values. For this, we have
$$c_i' = \begin{cases} \dfrac{c_i}{b_i} & \text{if } i = 0\\[6pt] \dfrac{c_i}{b_i - a_i c_{i-1}'} & \text{if } i \in [1; n-1] \end{cases} \qquad d_i' = \begin{cases} \dfrac{d_i}{b_i} & \text{if } i = 0\\[6pt] \dfrac{d_i - a_i d_{i-1}'}{b_i - a_i c_{i-1}'} & \text{if } i \in [1; n-1] \end{cases} \qquad \text{Equation 5-20}$$
In a second phase of back substitution, we obtain the potential for 𝑖 ∈ [1; 𝑛 -1] as
$$V_i = d_i' - c_i' V_{i+1} \qquad \text{Equation 5-21}$$
Finally, the electric field is obtained from the potential gradient as
$$F_{i+1/2} = -\frac{V_{i+1} - V_i}{\Delta z_i} \qquad \text{Equation 5-22}$$
Since this is a gradient, we actually obtain the field on the inter-node points $z_{i+1/2}$. To get the electric field on the nodes of the mesh, we simply take the mean of the fields on the two closest inter-node points:
$$F_i = \frac{F_{i+1/2} + F_{i-1/2}}{2} \qquad \text{Equation 5-23}$$
This operation is then repeated at each simulation step. The sampling of the charge densities is done using a dichotomy method. Thanks to the efficiency of this method and of Thomas' algorithm, the computation of the electric field during the simulation is practically instantaneous.
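As an illustration of the resolution scheme described above, the following Python sketch implements the forward sweep and back substitution of Thomas' algorithm (Equations 5-20 and 5-21) and the field reconstruction of Equations 5-22 and 5-23 for an arbitrary tridiagonal system. It is a generic sketch, not the code of the simulation, and assumes the coefficient arrays a, b, c, d have already been filled with the boundary conditions of Equations 5-12, 5-14 and 5-19.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i (Equation 5-8)."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]          # forward sweep (Equation 5-20)
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):                   # back substitution (Equation 5-21)
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def field_from_potential(V, dz):
    """Electric field on the interior nodes from the potentials (Equations 5-22 and 5-23)."""
    F_half = -(V[1:] - V[:-1]) / dz                  # field at the inter-node points
    F_nodes = 0.5 * (F_half[1:] + F_half[:-1])       # mean of neighboring half-points
    return F_nodes

# Tiny usage example with an arbitrary 5-node system
a = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
b = np.array([1.0, -2.0, -2.0, -2.0, 1.0])
c = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
d = np.array([0.0, -0.5, -0.5, -0.5, -9.0])          # example charge/boundary terms
V = thomas_solve(a, b, c, d)
print(V, field_from_potential(V, np.full(4, 1e-9)))
```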
Transport of the drift particles
At each time step, holes and drift electrons generated in the material move according to a drift diffusion process. Due to their low energy and strong coupling with the lattice, they are strongly scattered by the collisions with the phonons and undergo a random walk motion. Between these collisions, the particles can be accelerated by the electric field. From a macroscopic point of view, the density of holes (electrons) can drift in the (opposite) direction of the field. This motion can be evaluated with the drift velocity of the density, which is generally expressed as
$$\vec{v}_D = \frac{d}{dt}\langle \vec{r}\, \rangle = \pm \mu \vec{F} \qquad \text{Equation 5-24}$$
where $\langle \vec{r}\,\rangle$ is the average position of the distribution, $\vec{F}$ is the electric field and µ is the electron/hole mobility. In essence, the drift velocity is the displacement of the centroid of the distribution over a given time 𝑑𝑡. It is not representative of the movement of the individual particles, however: each particle has a random trajectory due to the scattering with phonons. In the absence of an electric field and in pure Gaussian transport, the distribution of particles spreads with time but its centroid does not move; the drift velocity is then null, whereas the particles are definitely not immobile. Hence, we cannot apply the drift velocity directly to the individual particles; we have to generate their individual trajectories.
The drift particles are characterized by their thermal velocity. In steady state, the simplest approximation is to consider that the particles have a thermal energy $E = \frac{1}{2}m^* v^2 = \frac{3}{2}kT$, from which we can get an average velocity $v_{th} = \sqrt{3kT/m^*}$, which is about 10⁷ cm/s at room temperature. However, in all rigor, the particles have individual velocities defined by the Maxwell-Boltzmann distribution. In this distribution, the probability of a particle having a velocity 𝑣 is given by:
$$P(v) = \left(\frac{m}{2\pi kT}\right)^{3/2} 4\pi v^2\, e^{-\frac{m v^2}{2kT}} \qquad \text{Equation 5-25}$$
For consistency, the velocity of each particle should be randomly drawn from the Maxwell-Boltzmann distribution. To avoid the computational cost induced by a random sampling before generating each drift particle, the holes and drift electrons are simply generated with a unique energy $E = \frac{3}{2}kT$. The drift particles are also generated in a random direction, to take into account the thermal agitation. The trajectory and energy of the particle are modified by the electric field following the classical equation of dynamics:
$$\vec{a} = \frac{q}{m^*}\vec{F}$$
For this step, we take advantage of Geant4's built-in tools for the transport of particles through an electric field, which solve Newton's equation of motion. The differential equation of the trajectory is integrated using a Runge-Kutta method. In Geant4, the user can define the interpolation step and the precision of the integration. A high precision is necessary to obtain realistic curved trajectories and not simply a sum of large segments, but an excessive precision hinders the computation time. In this work, the integration step has been set to 1 nm. The electric field is also applied to the ballistic particles, which can be accelerated and deviated between two interactions.
To have a physically accurate simulation, all collisions with phonons should be simulated. However, as we have seen in section 3.2.4.1 of Chapter 3, the mean time between two collisions of very low energy electrons (< 0.1 eV) with LO phonons is on the order of 10⁻¹³ to 10⁻¹⁴ s. This is very short compared to the time of flight before trapping in SiO2 (10⁻⁹ to 10⁻⁷ s) [4]. This is especially true for holes, which have a strong coupling with the Si and O atoms and are able to form small polarons [5]. When a hole becomes a polaron, it is self-trapped and immobilized at a given interatomic trapping site for very short times, comparable to the vibration period of the lattice (10⁻¹² s) [6]. As a result, it is not possible to simulate all interactions with phonons in a reasonable computation time, since the time step attributed to each iteration of the simulation is much larger (10⁻⁶ s).
We have adopted here an approach comparable to the condensed history approach used for high energy particles. The drift particle is assumed to travel along a single trajectory driven by the electric field, which corresponds to the sum of all trajectories between the collisions with the phonons. This drift motion is modeled through the mobility of the particle. In practice, we attribute an effective mass to the drift particle, given by:

$$m^* = \frac{q\tau_{ph}}{\mu} \qquad \text{Equation 5-26}$$
where τ_ph = 10⁻¹³ s is the time between two collisions with LO phonons [7]. The mobilities for electrons and holes have been set to common values found in the literature for SiO2, respectively µe = 20 cm² V⁻¹ s⁻¹ [8] and µh = 10⁻⁵ cm² V⁻¹ s⁻¹ [4]. While these values are only valid at a given temperature and electron density, this approximation does not seem to hinder the TEEY results. Using this method, we are able to take into account the effect of the electric field on the drift of the particles, which can push the populations of electrons and holes in different directions. We are also able to model the thermal spread of the distribution in the absence of an electric field.
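To illustrate the orders of magnitude involved, the short sketch below evaluates the effective masses of Equation 5-26 from the mobilities quoted above and performs one field-driven drift step for a thermalized electron. The mobilities and τ_ph are the values assumed in this section; the field value, the initial direction and the time step are arbitrary examples.

```python
import numpy as np

Q = 1.602e-19        # C
K_B = 1.381e-23      # J/K
T = 300.0            # K
TAU_PH = 1e-13       # s, time between two LO phonon collisions

# Mobilities converted from cm^2/(V s) to m^2/(V s)
mu_e = 20e-4
mu_h = 1e-5 * 1e-4

# Effective masses from Equation 5-26 (m* = q tau_ph / mu)
m_eff_e = Q * TAU_PH / mu_e
m_eff_h = Q * TAU_PH / mu_h
print(f"m*_e = {m_eff_e:.2e} kg, m*_h = {m_eff_h:.2e} kg")

# One drift step for an electron: thermal start (E = 3/2 kT) plus field acceleration
v_th = np.sqrt(3 * K_B * T / m_eff_e)        # thermal speed
direction = np.array([0.0, 0.0, 1.0])        # example initial direction (toward the bulk)
F = np.array([0.0, 0.0, 1e7])                # example field of 1e7 V/m along z
dt = 1e-12                                   # example integration step (s)
a = -Q / m_eff_e * F                         # acceleration of the (negative) electron
v = v_th * direction + a * dt
print(f"displacement along z over dt: {v[2] * dt:.3e} m")
```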
Modeling of the trapping of holes and electrons
In the simulation, the drift holes and electrons are followed until they are trapped. Ballistic electrons of a few eVs are also susceptible to being trapped, although with a much lower probability given their energy. In this work, we have chosen to model the effect of both deep traps and shallow traps. The trap distributions used in our simulations are shown in Figure 5-2.
We have considered two populations of traps for each particle type, which are split into deep traps due to impurities and shallow traps due to localized states.
In this section, we will also detail the trap energy depths chosen in our model, which will be noted 𝐸 𝑖 . For electron traps, the energy depth is defined as the difference between the bottom of the conduction band and the trap energy level, following the convention shown in Figure 5-2.
For hole traps, their energy depth is defined as the difference between the top of the valence band and the energy level of the trap. Hence, when the energy depth 𝐸 𝑖 of an electron (hole) trap increases, its energy level is located further away from the conduction (valence) band edge, and closer to the middle of the band gap.
Deep traps
Many types of defects and impurities are able to capture electrons or holes in SiO2. The nature and concentration of these impurities are highly variable, depending on the material and the fabrication process. Indeed, in silicon dioxide, several types of electron and hole traps have been identified [9]. These traps are induced by the presence of impurities and defects in the material, or by the electron irradiation itself. They create energy levels in the band gap, below the conduction band or above the valence band, in which electrons or holes can fall. The cross sections of these traps are highly variable [9], depending on whether the trap is coulombic attractive (10⁻¹³-10⁻¹⁵ cm²), neutral (10⁻¹⁵-10⁻¹⁸ cm²) or coulombic repulsive (< 10⁻¹⁸ cm²), and their activation energy is about 1 to 3 eV. Finally, the concentration of the traps also depends on the fabrication process: for instance, the water-related trap concentration can vary from 10¹⁵ cm⁻³ for a dry oxide to 10¹⁹ cm⁻³ for a wet oxide [10]. Here, we aim at modeling the charging of plasma-grown oxides; however, these oxides also have specific traps that do not appear in thermally grown oxides, with cross sections of 10⁻¹⁵ cm² [11] (value used for our simulations). Finally, the interface between the SiO2 and Si layers comprises a zone of a few nanometers of thickness where the concentration of defects is very high (10²⁰ cm⁻³). As a result, there are several possible values in the literature for both trapping parameters (density and cross section), and the choice of representative parameters is not trivial. The situation is further complicated by the fact that we have to model traps for electrons and for holes, but the nature of the most attractive electron traps may not be the same as that of the hole traps.
In this work, the deep trapping of electrons and holes has been modeled using a unique cross section σ_D = 10⁻¹⁵ cm² for drift electrons and holes. We therefore assume that the main capture mechanism for deep traps is due to coulombic attractive traps. The density of deep level traps is taken as N_D = 10¹⁸ cm⁻³ [12]. This density of traps is able to capture either electrons or holes. The mean free path is obtained by:
$$\lambda_D = \frac{1}{\sigma_D N_D} \qquad \text{Equation 5-27}$$
The energy depth of the deep traps is taken as $E_i$ = 2 eV, from the value proposed by Cornet et al. [13]. This value is close to the energy depth of the most commonly encountered electron traps in SiO2, namely the oxygen vacancy or the Na impurities.
Shallow traps
The deep level traps shown above can be considered as "extrinsic" traps, since they are not a property of the material itself but are the result of defects created during the fabrication process.
Other types of traps exist in amorphous materials, which are known as intrinsic traps since they are linked to the nature of the material itself. Here, the disorder and ruptures in the atomic bonds create a band of localized states near the conduction and valence band edges, where electrons and holes may be trapped [14]. While the energy depth of these traps is very shallow (0.1 eV or less), their density is very high (10²¹ cm⁻³) due to the significant disorder in amorphous materials.
In other simulations of the TEEY of SiO2, the energy depth of the shallow traps has been modeled by either a single energy level [12] or a Gaussian distribution centered on the deeper level traps [15]. However, an accurate model of this distribution of traps is especially important for the transport of holes in amorphous SiO2. Indeed, electrons follow a Gaussian transport, which means that their distribution of positions moves in a global direction with a well-defined drift velocity [16]. On the contrary, we have seen in Chapter 2 that the transport of holes is dispersive [4] and follows the Continuous Time Random Walk theory. The cause of this transport is that the time of immobilization of the holes in the traps is not constant. Silver et al. [17,18] have shown that the dispersive transport of holes can be modeled in a Monte-Carlo simulation by an exponential distribution of trap energy depths, which induces a distribution of trap residence times.
Consequently, the density of localized states for holes is modeled in this work by an exponential law [10,17,19], so that the density of localized states with an energy 𝐸 𝑇 above the valence band edge between 𝐸 𝑖 and 𝐸 𝑖 + 𝑑𝐸 is given by:
$$N(E_i \le E_T < E_i + dE) = \frac{N_S}{E_c}\exp\left(-\frac{E_i}{E_c}\right)dE \qquad \text{Equation 5-28}$$
The distribution of trap energy depths has an expected value 𝐸 𝑐 , taken as 𝐸 𝑐 = 0.07 𝑒𝑉 [10].
Since electrons follow a Gaussian transport and are much more mobile, the electron shallow traps are modeled in our work by a single level of energy depth E c = 0.02 eV, following the value proposed by Wager [10]. Here, the single energy level leads to constant release times. In fact, Mady et al. [20] have shown that the hopping transport of electrons through traps with identical energy depths follows the characteristic of a Gaussian transport. For both hole and electron traps, the total density of shallow traps is 𝑁 𝑆 = 10 21 cm -3 [10].
The capture cross sections used are 𝜎 𝑆 = 1 × 10 -14 𝑐𝑚 2 for drift electrons, 2.5 × 10 -14 𝑐𝑚² for holes. The capture mean free path is obtained in the same way as the deep level traps, using the total density of traps:
$$\lambda_S = \frac{1}{\sigma_S N_S} \qquad \text{Equation 5-29}$$
The exponential distribution of energy depths is simulated using a discrete distribution of 20 trap levels $E_i$ regularly spaced between 0 and 0.4 eV. The density of each level $E_i$ is obtained from Equation 5-28, where 𝑑𝐸 is the width of the level, given by the discretization step as 𝑑𝐸 = 0.02 eV. Hence, shallower levels have a higher density than deeper levels. This discretization is illustrated in Figure 5-3. The capture mean free path is computed globally for the exponential distribution, using the total density of traps $N_S$ in Equation 5-29.
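The discretization of the exponential distribution into 20 levels can be illustrated by the following sketch, which computes the density assigned to each level from Equation 5-28 with the values assumed above (N_S = 10²¹ cm⁻³, E_c = 0.07 eV, dE = 0.02 eV). It is only a numerical illustration of the binning, not the simulation code itself.

```python
import numpy as np

N_S = 1e21          # total density of shallow traps (cm^-3)
E_c = 0.07          # expected value of the trap depth distribution (eV)
dE = 0.02           # width of each discrete level (eV)
E_levels = np.arange(0.0, 0.4, dE)          # 20 levels between 0 and 0.4 eV

# Density of each discrete level (Equation 5-28): N_S/E_c * exp(-E_i/E_c) * dE
N_levels = N_S / E_c * np.exp(-E_levels / E_c) * dE

for E_i, N_i in zip(E_levels[:3], N_levels[:3]):
    print(f"E_i = {E_i:.2f} eV -> N = {N_i:.2e} cm^-3")
print(f"sum over the 20 levels: {N_levels.sum():.2e} cm^-3 (of the order of N_S)")
```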
Once trapped, a charge carrier can be released by thermal activation. The escape frequency from a trap of energy depth $E_i$ is given by:

$$W(E_i) = W_0 \exp\left(-\frac{E_i}{kT}\right) \qquad \text{Equation 5-31}$$
The frequency factor $W_0$ (s⁻¹) is a fundamental parameter of the detrapping phenomenon, since it describes the intrinsic mobility of the charge carrier in the trap. It can vary significantly according to the energy depth of the trap $E_i$ (eV), as shown by Cornet et al. [13], who proposed a law linking the energy level of the trap in eV and the frequency factor in s⁻¹:
$$\log_{10}(W_0) = 4 + 5E_i\ (\text{eV}) \qquad \text{Equation 5-32}$$
This expression is assumed to be valid for both electron and hole traps. Notably, from this law, one can see that shallow traps with $E_i$ around 0.1 eV will have a low frequency factor of around 10⁴ s⁻¹. On the other hand, deeper traps with $E_i$ of a few eV will have a very high frequency factor of around 10¹⁴ s⁻¹, even though it is very difficult for the charge carriers to escape from these traps. We have chosen a value of $W_0$ = 10³ s⁻¹ for the shallow traps, close to the law proposed in Equation 5-32. Since the activation energies of the shallow hole traps follow a distribution, each trap level has a distinct escape frequency, which leads to a dispersive transport for holes. For deep traps, the frequency factor is chosen as $W_0$ = 10¹⁴ s⁻¹, from ref. [13].
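As a numerical illustration of Equations 5-31 and 5-32, the sketch below computes the thermal escape frequency and the corresponding mean residence time for a few trap depths, using the frequency factors assumed in this section (W₀ = 10³ s⁻¹ for shallow traps, 10¹⁴ s⁻¹ for the 2 eV deep traps). It is a back-of-the-envelope check, not part of the simulation code.

```python
import numpy as np

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K
T = 300.0           # K

def escape_frequency(E_i, W0):
    """Thermal detrapping rate W(E_i) = W0 * exp(-E_i / kT) (Equation 5-31)."""
    return W0 * np.exp(-E_i / (K_B_EV * T))

# Shallow traps: W0 = 1e3 s^-1, a few depths of the discretized distribution
for E_i in (0.02, 0.1, 0.2):
    W = escape_frequency(E_i, 1e3)
    print(f"shallow trap E_i = {E_i:.2f} eV: W = {W:.2e} s^-1, <t> = {1/W:.2e} s")

# Deep traps: E_i = 2 eV, W0 = 1e14 s^-1 -> purely thermal escape is negligible
W_deep = escape_frequency(2.0, 1e14)
print(f"deep trap E_i = 2 eV: W = {W_deep:.2e} s^-1, <t> = {1/W_deep:.2e} s")
```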
Detrapping enhancements for deep traps
The detrapping of deep traps is not only enabled by thermal activation. Indeed, as we have seen in Chapter 2, some effects related to the electric field or the tunneling of charge carriers can increase the detrapping probability. Since we have assumed that our deep traps are coulombic attractive, these enhancements apply to them and should be taken into account in the modelling of the detrapping.
The potential barrier of the trap can first be lowered by an electric field 𝐹 due to the Poole-Frenkel effect [23,24]. Equation 5-31 then has to be modified by introducing the Poole-Frenkel (PF) enhancement factor $e_{PF}$:
$$e_{PF} = \exp\left(\frac{\Delta E_i}{kT}\right), \qquad \Delta E_i = \sqrt{\frac{e^3 F}{\pi \epsilon_0 \epsilon_r}} \qquad \text{Equation 5-33}$$
We can see here that the Poole-Frenkel lowering only depends on the value of the electric field and the permittivity of the material. It is completely independent of the energy depth of the trap or the nature of the trapped particle.
Detrapping can also be enhanced by the Phonon-Assisted Tunneling (PAT) effect. In this case, the trapped charge can absorb a phonon and get excited to a higher level in the trap, where the probability of tunneling through the trap barrier is more favorable. The probability depends on the transparency of the potential barrier, given by:

$$T = \exp\left(-\frac{4}{3}\frac{(2m^*)^{1/2} E_i^{3/2}}{q\hbar F}\right) \qquad \text{Equation 5-34}$$
The PAT enhancement factor is obtained by an integral over the energy depth of the trap, of the probability of excitation at a given energy level 𝑧 times the transparency of the barrier at 𝑧. The potential barrier is assumed to be triangular, which gives the PAT enhancement factor as [24,25]:
$$e_{PAT} = \int_0^{E_i/kT} \exp\left(z - z^{3/2}\left(\frac{4}{3}\frac{(2m^*)^{1/2}(kT)^{3/2}}{q\hbar F}\right)\right)dz \qquad \text{Equation 5-35}$$
In this expression, 𝑚* is the effective mass of the particle, 𝑞 is its charge, and we integrate the tunneling probability from a level 𝑧 over all the levels the charge carrier can be excited to. The barrier is assumed to be triangular in this case. The final emission rate for the deep coulombic traps used in the Monte-Carlo model is written by including both the PAT and PF enhancements [26]:

$$W(E_i) = W_0 \exp\left(-\frac{E_i}{kT}\right)\left[\exp\left(\frac{\Delta E_i}{kT}\right) + \int_{\Delta E_i/kT}^{E_i/kT} \exp\left(z - z^{3/2}\frac{4}{3}\frac{(2m^*)^{1/2}(kT)^{3/2}}{q\hbar F}\left(1 - \left(\frac{\Delta E_i}{zkT}\right)^{5/3}\right)\right)dz\right] \qquad \text{Equation 5-36}$$

In the integration of the transparency factor for the tunneling probability, the term $\left(1 - \left(\frac{\Delta E_i}{zkT}\right)^{5/3}\right)$ appears. This is because the potential barrier is deformed by the Poole-Frenkel lowering effect. In this situation, Hill [23] and Vincent et al. [24] mention that the triangular barrier model of Equation 5-35 is no longer valid. Consequently, the potential barrier of the trap should rather be modeled as a hyperbolic potential barrier, which modifies the tunneling probability by the factor $\left(1 - \left(\frac{\Delta E_i}{zkT}\right)^{5/3}\right)$.
This equation has no analytical solution and has to be solved numerically. To do so, we integrate the tunneling probability over 10 energy levels z in the trap, each time the detrapping probability of a deep trap has to be computed.
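A minimal numerical sketch of this integration is given below; it discretises the PAT integral of Equation 5-35, with the hyperbolic-barrier correction discussed above, over 10 levels. The lower integration bound at ΔE_i/kT (so that the correction term stays positive) and all names are assumptions of this sketch, not the thesis implementation.

```cpp
#include <cmath>

// Illustrative numerical evaluation of the PAT enhancement factor (Equation 5-35)
// including the hyperbolic-barrier correction (1 - (dE/(z kT))^(5/3)).
// The trap depth is discretised into 10 levels, as in the text. SI units.
double ePAT(double Ei_eV, double F_Vpm, double T_K, double mEff_kg, double dE_eV)
{
    const double e    = 1.602e-19;     // elementary charge (C), also converts eV -> J
    const double kB   = 1.381e-23;     // Boltzmann constant (J/K)
    const double hbar = 1.055e-34;     // reduced Planck constant (J.s)
    const double kT   = kB * T_K;

    // Field- and mass-dependent coefficient of z^(3/2) in the exponent
    const double coeff = (4.0 / 3.0) * std::sqrt(2.0 * mEff_kg)
                         * std::pow(kT, 1.5) / (e * hbar * F_Vpm);

    const double zMax = Ei_eV * e / kT;    // upper bound Ei/kT
    const double zMin = dE_eV * e / kT;    // assumption: start at dE/kT so that the
                                           // Poole-Frenkel correction stays positive
    const int    n    = 10;                // 10 energy levels, as in the text
    const double dz   = (zMax - zMin) / n;

    double integral = 0.0;
    for (int i = 0; i < n; ++i) {
        double z    = zMin + (i + 0.5) * dz;   // midpoint rule
        double corr = 1.0 - std::pow(dE_eV * e / (z * kT), 5.0 / 3.0);
        integral   += std::exp(z - coeff * std::pow(z, 1.5) * corr) * dz;
    }
    return integral;
}
```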
Modeling of the recombination of electron-hole pairs
From a general point of view, a trap in a material is associated with a defect in the lattice on which carriers can scatter, regardless of the state of this trap (empty, filled, charged, neutral). The scattering process occurs with a probability that depends on the state of the trap, and depending on this state, it can lead to the recombination of the carrier. As the material is irradiated, holes or electrons fill more and more traps, and the drift carriers have a higher probability of being captured by a trapping site which is already filled. If a hole or an electron falls into a trap which is already occupied by the opposite particle, the two particles recombine and disappear, and the trap is freed. The cross section for the recombination of a drift particle by a trap occupied by the opposite particle is set as σ_e-h = 2 × 10⁻¹² cm².
This value is high but coherent with the values found in the literature, which range from 10⁻¹³ cm² [15] to 10⁻¹¹ cm² [27,28]. It also takes into account the fact that the coulombic interaction caused by the charge of the trapped particle adds itself to the native attraction of the trap and makes it even more attractive. On the contrary, a trap filled by a particle of the same sign as a drifting particle will be much less attractive, due to the coulombic repulsion caused by the trapped charge. While some kinds of traps may be able to capture multiple charges of the same sign, the capture cross section for neutral or coulombic repulsive traps is much lower, so this phenomenon has not been modeled here.
The recombination mean free path for electrons and holes is given respectively as:
$\lambda_e = \frac{1}{\sigma_{e\text{-}h} N_h}, \quad \lambda_h = \frac{1}{\sigma_{e\text{-}h} N_e}$   Equation 5-37
Where 𝑁 ℎ and 𝑁 𝑒 are the densities of trapped holes and electrons. However, as the new drift particles fill the traps, the densities of hole-occupied and electron-occupied traps increase after each simulation step. Since the total density of traps 𝑁 𝑇 is fixed, the density of free traps is also reduced as the traps get filled. Hence, the trapped charge and free trap densities for either shallow or deep traps follow the relation
$N_{Free} = N_T - (N_h + N_e)$   Equation 5-38
As the charge densities are sampled in depth, 𝑁 ℎ and 𝑁 𝑒 will also vary according to the position of the particle. This evolution needs to be taken into account in the capture mean free path of charge carriers by the empty traps. As a result, Equation 5-27 and Equation 5-29 have been modified and combined with Equation 5-37 to derive a unique mean free path for the capture by the shallow (𝑆) or deep (𝐷) traps including their occupation status, following the approach proposed by Li et al [15]. This combined MFP for both capture by empty traps and trap-assisted recombination is used in the simulation, in place of the separate recombination MFP of Equation 5-37 and capture by empty traps MFP of Equation 5-27 and Equation 5-29.
This gives the combined mean free paths for holes (h) at a given depth z (Equations 5-39 to 5-41). For the exponential distribution, the density of free traps used in the mean free paths is computed globally over the 20 levels of the distribution, as the sum of the available traps of each level.
When the particle falls into a trap, a random number r_1 is sampled between zero and one to determine if the particle is captured by an empty trap, or if it falls into a trap that is occupied by an opposite charge. Capture of an electron or a hole by a free trap happens if r_1 < P_Free, where

$P_{Free}(e/h) = \frac{\sigma_{Free} N_{Free}}{\sigma_{Free} N_{Free} + \sigma_{e\text{-}h} N_{h/e}}$   Equation 5-42

P_Free is the percentage of free traps modified by the capture cross section, to include the fact that free traps are less attractive than traps filled by a particle of the opposite charge. In the opposite case, the particle is captured by a trap filled with a particle of the opposite charge, and the two particles recombine.
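The combined mean free path and the free-capture probability of Equation 5-42 can be sketched as follows; the densities, cross sections and names are purely illustrative and do not reproduce the thesis code.

```cpp
#include <random>

// Sketch of the combined capture/recombination step described above.
// Densities in cm^-3 and cross sections in cm^2, so the mean free path is in cm.
struct TrapState {
    double sigmaFree;   // capture cross section of an empty trap
    double sigmaRec;    // sigma_e-h, recombination cross section
    double nFree;       // density of empty traps at the current depth
    double nOpposite;   // density of traps filled by the opposite charge
};

// Combined mean free path for capture by empty traps and trap-assisted recombination.
double combinedMeanFreePath(const TrapState& t) {
    return 1.0 / (t.sigmaFree * t.nFree + t.sigmaRec * t.nOpposite);
}

// Returns true if the particle is captured by an empty trap,
// false if it recombines with a trapped charge of the opposite sign (Equation 5-42).
bool isCapturedByFreeTrap(const TrapState& t, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double pFree = t.sigmaFree * t.nFree /
                   (t.sigmaFree * t.nFree + t.sigmaRec * t.nOpposite);
    return u(rng) < pFree;
}
```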
In the case of trapping of holes in the shallow trap distribution, the occupied and free trap densities are saved individually for each trap level. Consequently, a random number is first sampled in the exponential law, to select a trap level and retrieve its charge densities. For this, we compute a trial value for the energy level E_t of the trap the particle has fallen into, from a random number r_2 ∈ [0; 1] and the mean energy of the distribution E_c, as [19]:

$E_t = -E_c \log(1 - r_2)$   Equation 5-43

We then find the energy level E_i such that E_i < E_t < E_{i+1} using a dichotomy (bisection) method, and retrieve the density of trapped charges (N_h)_i and of available traps (N_free)_i for the level E_i, to be used in Equation 5-42.
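A possible sketch of this trap-level selection, combining the sampling of Equation 5-43 with a dichotomy (binary) search over the stored levels, is shown below; the container layout is an assumption of the sketch, not the thesis code.

```cpp
#include <cmath>
#include <random>
#include <vector>
#include <algorithm>

// Select a shallow trap level: sample a trial depth Et in the exponential
// distribution (Equation 5-43), then binary-search the sorted level energies
// for the index i such that E_i < Et < E_i+1.
int sampleTrapLevel(const std::vector<double>& levels_eV,  // sorted energies E_i
                    double Ec_eV,                          // mean energy of the distribution
                    std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double Et = -Ec_eV * std::log(1.0 - u(rng));           // trial depth, Equation 5-43

    // First level strictly greater than Et, then step back one index.
    auto it = std::upper_bound(levels_eV.begin(), levels_eV.end(), Et);
    if (it == levels_eV.begin()) return 0;                          // Et below the first level
    if (it == levels_eV.end())   return int(levels_eV.size()) - 1;  // Et beyond the last level
    return int(it - levels_eV.begin()) - 1;                         // E_i < Et < E_i+1
}
```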
Modeling the effect of the incident beam surface and the current density
So far, the trapped charge and free trap densities have been defined globally in a 1D manner. That is to say, we consider in the simulation that the charge densities N_h, N_e and N_free are uniform over the whole surface, and the capture and recombination mean free paths will always depend on the same charge density, regardless of the point of impact of the electron. However, this is only true if the current density is high enough, so that the electron cascades are uniformly distributed and an incident electron is guaranteed to arrive in an area filled with charges. Indeed, in TEEY experiments, the electron beam is not a single point but covers a surface that ranges from a few mm² up to several cm². Depending on the current density and the transport properties of the trapped charges, the charge densities may not be uniform over the surface at all. If the current density is low, the electron impacts on the surface will be widely spread out and the surface will not be irradiated uniformly. The charge densities will then be very non-uniform: there is a high probability that some areas will still be free of charges after a given time, while other areas that have received an incident electron will contain many trapped charges. This can also happen if the charges are able to quickly escape the traps and diffuse between two electron impacts at a given zone. In these cases, if a particle is created in a region where all traps are free, the charge densities used in the computation of its mean free paths should be zero. Nevertheless, in its present state, our Monte-Carlo model cannot directly simulate such a lack of uniformity of the charge densities caused by a low incident current density.
In the following, we will present the method used to model the effect of the current density on the uniformity of the charge densities. As we sample the charges in 1D in depth and the objective is to understand a 2D surface effect, an empirical approach was used. This empirical approach can be justified by comparing the radius of an electron cascade (roughly 10 nm, Figure 5-4) with the beam area (up to several cm²). Indeed, if we wanted an accurate 3D mesh for the charge densities, its cells would need to have an area no larger than the size of a cascade, i.e. roughly 10 nm × 10 nm. Since 1 cm² = 10¹⁴ nm², modeling a surface on the order of the cm² would then lead to a mesh with about 10¹² cells, which would require excessive computation time and resources.
From the incident current I_0, we can get the number of electron impacts N(τ) after a given time τ from Equation 5-3. The total area of the material where electron cascades have been created after τ should then be the product of the number of electrons N(τ) with the area of an electron cascade S_C. Hence, we need to determine the area of a single electron cascade. Assuming that the electron cascades are spherical, which gives a circle in a 2D projection on the surface, this area is given by:

$S_C = \pi r_C^2$   Equation 5-44
where r_C is the radius of an electron cascade taken from the incident electron's point of impact. However, given the Brownian motion of very low energy electrons, the distance between the point of capture of a secondary electron and the incident electron's point of impact can vary quite significantly, and should follow a statistical distribution. Nevertheless, we can still extract this radius from the Monte-Carlo simulations, by first sampling the final positions of the secondary electrons created in the cascade. In essence, we want to measure here the same quantity as the extrapolated range, but laterally rather than in depth. We can then follow the same approach as for the computation of the extrapolated range, by computing the normalized integral of the distribution of radii and obtaining an extrapolated radius from the tangent at P = 0.5. The extrapolated radius obtained following this method from the simulated data is given in Figure 5-4 for SiO2.

In the simulations, the cascade radius r_C ranges from a few nm at 100 eV to a few tens of nm above 1 keV. Interestingly, the extrapolated radius follows a behavior that is identical to the extrapolated range of low energy electrons [29], with a plateau region below 100 eV. However, the radius was measured without simulating the drift of the charge carriers generated in the electron cascade, which will increase its spread. To take the drift into account, the value of r_C used in the computation of the surface of the cascade was increased by 5 nm, which is an estimation of the length that can be traveled by a hole through a few hopping events. In the simulation, we therefore use a unique value of the cascade radius r_C = 15 nm regardless of the incident electron energy, considering that a cascade radius of 10 nm before drift is representative of most incident energies of interest (100 eV to 2 keV) for the TEEY.

The fraction of the beam area covered by the N(τ) electron cascades after a time τ defines the overlap factor %overlap(τ) (Equation 5-45). This factor, illustrated in Figure 5-5, corresponds to the macroscopic probability that an incident electron arrives in an area where another electron cascade was previously generated. In this case, the two cascades overlap, and the secondary electrons created in the new electron cascade can interact with the trapped particles created by the previous cascade. However, if the electron arrives in an area that is still free of charges after τ, the electron cascade will not be affected by recombination with the charges created by the previous electrons.

As mentioned earlier, the charge densities must be modified in order to take into account the uniformity (or lack thereof) of the electron impacts on the surface. To do so, the overlap factor is used to compute an effective charge density N_eff = %overlap(τ) × N, which is then used in the computation of the mean free paths and in Equation 5-42 for the probability of recombining. The relation between the densities given by Equation 5-38 thus becomes
$N_{free} = N_T - \%_{overlap}\,(N_e + N_h)$   Equation 5-46
Indeed, in the simulation, we store the total number of trapped charges created at a given depth by the incident electrons. Nevertheless, we need to model the fact that the electron has a probability of hitting a region of the material that is still free of charges, and the average density of available traps also needs to be modified in consequence. The overlap factor is computed after each simulation time step 𝜏, knowing the number of electrons sent during this step. This factor is capped at 100%, which is the limit condition where the whole surface is covered by electron cascades.
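The sketch below summarizes this beam-coverage correction. The explicit form used here for the overlap factor (total cascade-covered area N(τ)·S_C divided by the beam area, capped at 100 %) is our reading of the description above and is an assumption of the sketch, as is every name used.

```cpp
#include <algorithm>

// Overlap factor: ratio of the total cascade-covered area N(tau) * S_C
// to the beam area, capped at 100 % (assumed form, based on the text).
double overlapFactor(long long nElectrons, double cascadeRadius_cm, double beamArea_cm2)
{
    const double pi = 3.14159265358979;
    double cascadeArea = pi * cascadeRadius_cm * cascadeRadius_cm;   // Equation 5-44
    return std::min(1.0, nElectrons * cascadeArea / beamArea_cm2);
}

// Effective charge density used in the mean free paths.
double effectiveDensity(double density, double overlap) { return overlap * density; }

// Density of free traps corrected by the overlap factor (Equation 5-46).
double freeTrapDensity(double nTotal, double nE, double nH, double overlap)
{
    return nTotal - overlap * (nE + nH);
}
```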
Description of the architecture of the Geant4 simulation
To integrate all the physical models for the transport of drift electrons and holes, and to develop an iterative simulation, many new classes were conceived in the Geant4 application we have developed. Here, we will only list these various classes and provide illustrations showing how they articulate with the classes of Geant4. The full description of the architecture can be found in Appendix I.
The most important class of the simulation is the DriftManager. It is a singleton with functions related to the charging effects, and the center of the charging simulation. It stores the charge densities, computes and interpolates the electric field, launches the different phases of the simulation, computes the capture mean free paths and detrapping probabilities, and computes what happens when a particle is captured by a trap (free capture or recombination). As a singleton, DriftManager is called by the physical interaction processes, field handling classes and particle sources throughout the simulation.
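To fix ideas, a skeleton of such a singleton manager is sketched below; every member shown is hypothetical and only mirrors the responsibilities listed above, not the actual thesis code.

```cpp
// Illustrative skeleton only: declarations mirroring the responsibilities of the
// DriftManager described in the text (charge bookkeeping, field computation,
// phase control, mean free paths and detrapping probabilities).
class DriftManager {
public:
    static DriftManager* Instance() {
        static DriftManager instance;   // Meyers singleton
        return &instance;
    }

    // Charge bookkeeping and electric field handling (1D mesh in depth)
    void   AddTrappedCharge(double depth, int sign, bool deepTrap);
    void   ComputeElectricField();                 // 1D Poisson solver
    double InterpolateField(double depth) const;

    // Simulation control
    void   RunBallisticPhase();
    void   RunDriftPhase();

    // Quantities used by the physical interaction processes
    double CaptureMeanFreePath(double depth, bool isHole) const;
    double DetrappingProbability(double trapDepth_eV, double dt) const;

private:
    DriftManager() = default;                      // singleton: private constructor
};
```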
The DriftManager is accompanied by a DriftMessenger. It is tasked with loading the different parameters of the model through macro commands stored in the file ChargingParameters.mac. Other macro commands are passed to the DriftManager by the DriftMessenger to signal it to begin the different phases of the simulation, and tell it whether the electron cascade, the drift phase or a relaxation phase should be simulated.
New particle types DriftHole and DriftElectron were also added. These particles inherit from the class G4ParticleDefinition and use the same particle definition parameters as G4Electron, except for the mass which is set to the effective mass from Equation 5-26, and the charge which is +e for a hole. DriftHole and DriftElectron particles are generated by the DriftManager during the drift phases.
A new physical process Trapping has been created, following the structure of the other physical processes of Geant4. It returns a mean free path from DriftManager for the capture of drift electrons and holes by shallow or deep traps, and calls DriftManager to handle the trapping or recombination of the particle. By creating a new interaction process following the nomenclature of Geant4, we can let the toolkit handle the transport of the drift particles as any other particle.
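As an illustration, a custom discrete process can be declared in Geant4 along the following lines, using only the standard G4VDiscreteProcess hooks; the actual Trapping class of this work may differ, and the charging-manager call shown is hypothetical.

```cpp
#include "G4VDiscreteProcess.hh"
#include "G4ParticleChange.hh"
#include <limits>

// Hedged sketch (not the thesis code) of a discrete trapping process.
class Trapping : public G4VDiscreteProcess {
public:
    Trapping() : G4VDiscreteProcess("Trapping", fUserDefined) {}

    // Asked by the Geant4 kernel before each step: the real implementation would
    // return the combined capture/recombination mean free path for this particle.
    G4double GetMeanFreePath(const G4Track&, G4double, G4ForceCondition*) override
    {
        return std::numeric_limits<G4double>::max();   // placeholder only
    }

    // Called when the interaction occurs: the particle is handed over to the
    // charging manager (free capture or recombination) and removed from tracking.
    G4VParticleChange* PostStepDoIt(const G4Track& track, const G4Step&) override
    {
        aParticleChange.Initialize(track);
        // DriftManager::Instance()->HandleCapture(track);   // hypothetical call
        aParticleChange.ProposeTrackStatus(fStopAndKill);
        return &aParticleChange;
    }
};
```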
The process G4ElectronCapture tasked with killing electrons below a preset energy threshold has been overhauled. It is now handling the capture and recombination of ballistic electrons, in a similar way to the G4Trapping process.
The PrimaryGeneratorAction (PGA) class is driven by the DriftManager, which tells it whether the current run is a ballistic or drift run. The PGA also retrieves the stack of particles to be tracked in a drift run from the DriftManager. It then communicates to the Geant4 transportation manager the positions and directions of the drift particles generated.
The process DriftTimeStepMax stops the drift of the particles if their drifting time exceeds the simulation step. Here, a definite interaction value is returned instead of a mean free path. The particles that have not finished drifting at the end of the simulation time step are added to a postponed stack, and the simulation of their transport is resumed after the next ballistic run.
A set of enum variables is stored in the file driftEnums.h. They are used to set and check conditions regarding the type of the current run, the current particle type, or the type of trap.
By simulating the charge buildup in the insulator and its effect on the successive electron cascades, our new Monte-Carlo model should be able to simulate the evolution of the TEEY of SiO2 with time, depending on the internal and external charge effects. We have seen that some experimental studies were made in conditions of positive charging with the external charging effects removed, either by sample holder biasing or electron collector biasing to prevent the recollection of secondary electrons. These studies had shown a decrease of the TEEY that could only be attributed to internal charging effects. In this section, we want to study whether our Monte-Carlo code is able to simulate this decrease of the TEEY in the case of SiO2 samples.
The simulation results will be validated quantitatively in Chapter 6, where we will present the experimental TEEY results obtained during this thesis and compare them with the simulation results. In the present section, we shall rather focus on a qualitative analysis. The objective is thus to determine whether the code can reproduce a decrease of the TEEY on SiO2 that would be similar to what was observed experimentally on other insulators. All simulated TEEY were obtained with an incident current of 1 µA and a beam area of 0.1 cm². This gives a current density of 10 µA/cm², to ensure that the surface is irradiated uniformly and that we can observe the interactions with internal charges. 500 incident electrons were sent at each simulation step, which gives a charge bias factor β = 12500.
Time-resolved simulation of the TEEY of a bulk SiO 2 sample: study of the external charging effects
Before studying the influence of internal charging effects, we must ensure that our program is correctly modeling the evolution of the TEEY due to the macroscopic external charging effects, in a qualitative way. To do so, we must place ourselves in conditions where the external charging effects are not removed. Consequently, in this section, the sample holder voltage will be set at 0 V instead of the -9 V bias. Simulations were made on a bulk sample of SiO2 with a thickness of 100µm and a surface of 1 cm², instead of the 20 nm thin films we have mentioned so far. Indeed, the surface potential 𝑉 𝑠 is linked with the capacitance of the material 𝐶 and its thickness 𝐷 by the relation
$V_s = \frac{Q}{C} = \frac{Q\,D}{\epsilon_0 \epsilon_r S}$   Equation 5-47
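As an order-of-magnitude illustration of this relation (assuming ε_r ≈ 3.9 for amorphous SiO2 and an area S = 1 cm², values chosen here only for the estimate): the 100 µm bulk sample has C = ε_0 ε_r S/D ≈ 35 pF, whereas a 20 nm thin film of the same area has C ≈ 0.17 µF, about 5000 times larger. For the same implanted charge, the surface potential of the bulk sample therefore rises roughly 5000 times faster than that of the thin film.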
Therefore, using a thick sample with a small capacitance will make the surface potential evolve much faster than with a thin film sample, and we can easily observe the effect of the external electric field. For the same reasons, the TEEY simulated in this section was also obtained under continuous irradiation instead of pulsed measurements. We have previously identified three situations regarding the global charge buildup and the evolution of the TEEY, depending on the incident electron energy E_0 with regard to the crossover points E_C1 and E_C2. In the following, we will place ourselves in these conditions and verify whether the time-resolved TEEY returned by the Monte-Carlo code follows the expected behavior.

a) E_0 < E_C1, TEEY < 1
In this case, the sample is charging negatively and the TEEY is decreasing to 0 until E = -eV_s. When this limit condition is reached, the incident electrons are unable to overcome the external electric field and hit the surface. They get electrostatically reflected and the TEEY increases sharply back to 1. In Figure 5-8, electrons with an incident energy of 20 eV are sent on the sample. Since the TEEY is lower than 1, the surface potential is decreasing, which slows down the incident electrons. At -eV_s = 20 eV, the incident electrons cannot hit the surface anymore, the surface potential does not evolve any further, and the TEEY increases to one, as expected. In conclusion, the global charge buildup is correctly modeled for very low energy electrons below E_C1. However, in the experimental TEEY measurements made in the DEESSE facility, the TEEY curve is sampled from 50 eV up to 2 keV. For many materials, 50 eV will be below the first crossover point, but this is not the case for the SiO2 samples we have used in this work. Hence, we should not be in this situation when comparing experiments and simulations of the TEEY in Chapter 6.
b) E_C1 < E_0 < E_C2, TEEY > 1
In this situation, the sample is charging positively, which increases the landing energy of the incident electrons, and the TEEY converges to 1 where it stabilizes. In Figure 5-9, electrons with an energy of 500 eV are sent on the SiO2 sample. We can see that the TEEY is decreasing rapidly due to the recollection of secondary electrons, caused by the increase of the surface potential. The equilibrium of charges is reached when the TEEY stabilizes at 1, where the surface potential also stops evolving, as expected.

What is interesting here, though, is that the surface potential has stabilized at 14 V. Indeed, if we follow the conventional theory for the evolution of the TEEY due to external charging, the TEEY should only stabilize at one when the landing energy of the incident electrons has reached the second crossover point, which is about a few keV. This means that the surface potential should stabilize at around a few thousand volts, instead of 14 V only. Here, the incident electrons have a final landing energy of 514 eV but the TEEY has still stabilized at 1. What the conventional theory does not take into account, however, is the recollection of the secondary electrons. Indeed, the electrons escaping the surface must be energetic enough to overcome the external electric field generated by the positive charge. If the energy of a secondary electron is lower than eV_s, it will not be able to overcome the electric field and will be recollected by the surface. Macroscopically, we can consider that a potential barrier of height eV_s must be overcome by the secondary electrons.

However, if we look in Figure 5-10 at the energy spectrum of electrons produced by a sample of SiO2 irradiated by 500 eV electrons, we can see that the energy of the secondary electrons is centered around 5-6 eV. Interestingly, there is a discontinuity of the energy distribution around 7 eV. This is roughly equal to the value of the energy gap used in the simulation (8 eV) minus the 0.9 eV lost by an electron when crossing the surface potential barrier into vacuum. The energy spectrum in the simulation is quite dependent on the fit of the energy loss function used, but it is unknown if this could be the reason for this discontinuity. Practically all secondary electrons have an energy lower than 20 eV. Therefore, as the surface potential increases, the recollection of low energy electrons compensates the positive charges that were left in the material when the secondary electrons escaped the surface in the first place. At the final value of eV_s = 14 eV, the majority of secondary electrons are unable to escape and are captured by the surface. Since the TEEY has reached 1, the global charge is not evolving anymore. Hence, the value of 14 eV is an equilibrium, where a sufficient quantity of secondary electrons is recollected and compensates all the new holes created in the material, without introducing additional negative charges. This is why the surface potential also stabilizes, but at a value that is much lower than what is expected from the conventional theory.
From Figure 5-9, we can also note that the surface potential has already increased past 10 V after only 1 ms. If we were to use such a thick sample in TEEY experiments, a stronger negative bias than the -9 V used in DEESSE would need to be applied to the sample holder, for instance -20 V. There is also the possibility that the simulation underestimates the final value of the surface potential, which may be even higher with real samples. Therefore, if we were to study the internal charging effects on the bulk samples, there is a significant probability that the samples would charge too quickly. The TEEY we would measure could then be falsified by the appearance of an uncompensated external electric field. When doing experimental measurements on the 20 nm thin films, however, we have observed that the material does not charge past a couple of volts under continuous irradiation for a few tens of seconds. For that reason, we can safely use a negative bias of -9 V, which is always stronger than the increase of the surface potential caused by the positive charge. This is why we have chosen thin film samples to study the internal charging effects, instead of the bulk samples simulated in this section.
In summary, the Monte-Carlo model is able to simulate the external charging effects and the recollection of secondary electrons in conditions of positive charging. This is especially important since all experimental measurements will be made between the two points of crossover.
c) E_0 > E_C2, TEEY < 1
In this last case, the sample is charging negatively and the TEEY increases to 1, where it also stabilizes. However, the maximal incident energy used in the experiment is 2 keV, which is below the second crossover point of our SiO2 samples. Consequently, we will never be in this situation when comparing the simulation with the experiment. Moreover, the internal charging effects shown in Chapter 1, which we need to explain, were observed for incident energies between the two crossover points; therefore, the situation of negative charging is out of the scope of this study. In any case, it is still necessary for consistency to study the behavior of the Monte-Carlo code for energies beyond the second crossover point.
Here, we will be using 2.5 keV incident electrons. This energy is close to the second crossover point in the simulation, which is roughly 2.1 keV. According to the conventional theory, the sample needs to charge negatively up to -400 V, so that the energy of the incident electrons reaches the second crossover point. Using our current simulation parameters on a 1 mm thick sample to improve computation time, the stabilization of the TEEY is not reached after 50 ms in Figure 5-11a. The TEEY actually starts to diminish before increasing and converging to 1 as would be expected, and the negative surface potential is overestimated. When the recombination cross section is reduced to 10⁻¹³ cm² (Figure 5-11b), the TEEY does stabilize at 1. This modification of the recombination cross section is a purely unphysical tweak, since we should expect the recombination cross section to remain the same for a given material, regardless of the incident electron energy. Despite this, the surface potential has stabilized at -780 V, which is a factor of two above the expected value. This could be due to an energy dependence that becomes relevant for electrons of 2 keV and above and/or for bulk samples, and that is missing in the model, or to an issue that prevents the incident electrons from being slowed down by the negative surface potential.

Another major drawback of the Monte-Carlo model in this situation is the computation time. Indeed, more energetic electrons create a lot of secondary electrons and drift particles that need to be transported individually. For instance, the 500 incident electrons of 2.5 keV used here generate 120 000 particles for each simulation step, if no particles are lost by recombination. In comparison, 500 incident electrons of 500 eV (as in the positive charging case) generate 20 000 drift particles per step. Hence, the computation time strongly increases for electrons of a few keV and above. Moreover, the sample needs to charge up to several hundreds of volts, but the initial TEEY is relatively close to 1. This means that the net negative charge implanted per electron is quite small compared to the case of positive charging with a TEEY of 2.5 or more. Contrary to the latter, we also do not have the recollection of secondary electrons accelerating the decrease of the TEEY. As a result, the TEEY and the surface potential can take a much longer time to reach an equilibrium, as we can see in Figure 5-11.

In conclusion, the Monte-Carlo model can simulate the negative charge buildup created by electrons beyond the second crossover point, but with significant limitations. The initial parameters of the model had to be modified, since the effect of recombination was overestimated in this case, but this is an unphysical tweak as the recombination cross section should be the same for a given material. There is also probably an energy dependence that could be missing in some of the parameters of the model, which would create a significant error for electrons of a few keV. As a result, the simulation of the negative charge buildup caused by energetic electrons can only be validated in a strictly qualitative manner. The computation time also becomes excessive for incident electrons of a few keV. Admittedly, we have highlighted here a shortcoming of the Monte-Carlo code. In this work, we have chosen not to focus on the accuracy of the model beyond the second crossover point, given that the code already works in the 50 eV - 2 keV range, which is the most relevant to our experimental measurements.
Nevertheless, this is still a physical problem that needs to be fixed, in order to extend the simulation to electrons of a few keV and thicker materials.
Time-resolved simulation of the TEEY of SiO 2 thin films in the case of positive charging with external charging effects removed
In this section, we shall go back to simulating our experimental samples of 20 nm thin films, in positive charging conditions under a negative bias of -9 V. This bias should prevent the global positive charge from generating a recollecting electric field in vacuum. As in the experimental studies, we will use a pulsed measurement procedure instead of the continuous measurement simulated in the previous section. To get an accurate time resolution, pulses of 100 µs with a spacing of 50 ms were used.

The simulation results for the TEEY of 300 eV (a), 500 eV (b) and 1 keV (c) electrons are shown in Figure 5-12, obtained during 80 pulses of 100 µs. We can observe a decrease of the TEEY for the three energies. The TEEY decreases faster if its initial value is higher, as we can see by comparing the difference between the initial and final TEEY for the three energies. At 1 keV, the TEEY has been reduced by 13% after 6 ms, while at 300 eV the reduction is about 20% after 6 ms. At the same time, positive charges are created in the material and the surface potential is increasing, since the TEEY is greater than 1. However, due to the large capacitance of the thin film samples, the surface potential evolves very slowly compared to the bulk samples of 5.8.1, and remains negative. Consequently, the decrease we observe here for the three energies cannot be caused by the recollection of secondary electrons, as in 5.8.1b. The phenomenon we observe here is strictly due to internal interactions of the electrons with charges.

A similar decrease was also observed on MgO [27], polycrystalline diamond [30], and Al2O3 [31], in conditions where positive external charging effects were compensated. For these materials, a decrease of the TEEY of electrons of a few hundred eV was also observed as more and more incident electron pulses were sent on the target. Therefore, we can qualitatively validate the Monte-Carlo model for the simulation of the decrease of the TEEY initiated by internal charging effects, in the case of SiO2 samples.
Study of the effect of the parameters of the model on the TEEY
In this section, we compare the influence of the simulation parameters on the charge-less TEEY curve and on the decrease of the TEEY observed in the previous section. The key quantities for our simulation are the capture mean free path for empty traps, the recombination mean free path, and the detrapping frequency. These parameters will be modified compared to the reference values used so far.
First, we shall look at the TEEY spectrum of a charge-less sample, which is the static case at the beginning of the simulation. The material is free of charges so that no recombination can happen. We will not simulate the drift of charge carriers, hence the only parameter that can have an influence here is the capture mean free path of secondary electrons by empty traps. The combined capture+recombination mean free path of Equation 5-41 is used in all simulations. However, when the density of charges is null, it is equal to the capture mean free path by empty traps of Equation 5-29. This mean free path can be modified by changing either the capture cross section 𝜎 𝑆 or the trap density 𝑁 𝑆 .
In the examples shown in Figure 5-13, the capture cross section of the free shallow electron traps has been modified, whereas the electron shallow trap density remains constant at its default value of 10²¹ cm⁻³. We can see that the TEEY curve increases significantly when the capture cross section is increased, which creates a reduction of the capture mean free path. A capture cross section of 10⁻¹⁶ cm² for the shallow traps makes the yield increase back to the value from Chapter 3, which was obtained without any trapping model. By modulating the capture mean free path, we are effectively doing the same thing as in Chapter 3, where we changed the parameter S of the empirical polaronic capture model to decrease the TEEY from 8 to 4.

The choice of a capture mean free path for secondary electrons is quite difficult, due to the significant dispersion in the TEEY measurements for a single material. It is also not trivial to choose between the various values of trap cross sections and densities, and to determine which kind of traps will be prevalent in a given sample. Indeed, in Figure 5-14, the TEEY obtained on our experimental samples used as a reference is plotted with the simulated charge-less TEEY and other experimental TEEY obtained on SiO2 samples. The TEEY from this work is higher than the data of Bronstein [32], but lower than the TEEY of Yong et al. [33] obtained on wet SiO2 samples. However, we have mentioned in section 5.3.1 that plasma grown samples have a different structure than wet oxides, which leads to additional traps with a cross section of 10⁻¹⁵ cm² in the plasma grown samples compared to the wet oxides. Wet oxides are also reported to have a trap density of 10¹⁹ cm⁻³ [10], which is lower than the density used in the simulations for shallow traps (10²¹ cm⁻³). Hence, the different nature of the samples can be a source of discrepancy between the various TEEY measurements. To further emphasize this point, data from Rigoudy et al. [34] obtained on thermally grown SiO2 samples has been included. Notably, this data was obtained with the same TEEY measurement facility used in this work, so the difference in TEEY between their samples and ours should only be due to the nature of the samples, if we assume that the effect of charging was limited. Indeed, there is also the possibility that some experimental measurements found in the literature could have been affected by charging effects that were not entirely compensated. It is also difficult to know whether the various points of the TEEY curves are representative of the start of the decrease of the TEEY due to internal charging, or if the TEEY has already been reduced before the final value was obtained. In fact, in the case of pulsed measurements, the value obtained for a given energy is the average TEEY over the pulse. Yet, if the TEEY has decreased during the pulse as in Figure 5-12, it is unclear what the TEEY value obtained over the pulse actually represents.

Finally, we can study the effect of the charging parameters on the decrease of the TEEY observed in 5.8.2, for the example of 500 eV electrons, as in Figure 5-15. In this case, we add back all the physical models we have detailed in this chapter: trapping, detrapping, drift transport of charges, dynamic computation of the field… First, the capture mean free path by free traps for secondary electrons is modified in Figure 5-15a, using the same values as in Figure 5-13. This modification increases the starting point of the TEEY, since it increases the charge-less TEEY.
From the observations of Figure 5-12, we can also see that the decrease of the TEEY is much faster from a higher initial value. However, after an initial sharp decrease phase, the TEEY obtained with σ = 10⁻¹⁵ cm² and σ = 10⁻¹⁶ cm² appear to converge to the same final value at 8 ms. It is also possible that, if we increased the simulated time period, the TEEY for the three cross sections would converge to the same point. Therefore, there seems to be a second phase of decrease of the TEEY appearing after a few ms, which would not depend on the initial value of the TEEY.
In Figure 5-15b, the recombination cross section has been lowered from its initial value to 10 -13 cm². The starting point of the TEEY is not modified, since no charges are in the material at the start of the simulation. However the TEEY converges to a higher value at 8 ms with a lower recombination cross section. The decrease of the TEEY between these two points is also much sharper with a higher recombination cross section. This could indicate that the recombination of the drift charge carriers or even the secondary electrons would have a major influence on the decrease of the TEEY, which was notably suggested empirically in the experimental study of Belhaj et al [27].
Lastly, in Figure 5-15c the detrapping frequency constant W_0 for shallow traps has been reduced, which increases the residence time of the charge carriers in the traps and effectively disables detrapping. By doing so, the evacuation of charges through the Si layer between two electron impacts is prevented, and the TEEY in this case decreases more than with the default parameters. Since the charge density should be higher than with detrapping activated, the interactions of the electron cascade with the trapped charges should also be more probable, which supports the hypothesis made in the previous paragraph. Furthermore, the electric field should also be higher in the sample if more charges are trapped, and the field has also been shown to cause a reduction of the TEEY if it becomes large enough [7].
Conclusion of Chapter 5
In this chapter, we have conceived a Monte-Carlo model for the effect of positive internal charging on the TEEY of insulating materials, with a focus on SiO2 thin films due to the wide availability of reference data on this material. In this model, the experimental configuration of the DEESSE facility has been reproduced, so that our simulation results can later be compared to experimental measurements made with this facility. The code is able to simulate the drift transport of electrons and holes in 3D, until they are trapped and stored in a 1D mesh, so the effects of charging are effectively modeled in 1D. We have also modeled the trapping, detrapping, and recombination of holes, drift electrons and low energy ballistic electrons. To build our simulation, numerous new classes had to be created in a Geant4 application, and designed to interact with each other and with the Geant4 kernel to ensure proper transportation of the particles. We also had to add a 1D Poisson solver to Geant4, to compute the electric field after each simulation step.
In this chapter, the Monte-Carlo code has first been validated qualitatively, by verifying whether it could reproduce the dynamics of the TEEY induced by external charging effects. The negative charging for electrons below the first crossover point and the positive charging for electrons between the crossover points are correctly modeled, with coherent values for the steady-state TEEY and surface potential. However, the simulation of the negative charging and TEEY for electrons above the second crossover point needs to be improved. In this case, the simulation parameters need to be modified, and the surface potential is overestimated. Given that the computation time also becomes disproportionate for electrons above a few keV (a few tens of hours up to a few days), we reach here the limits of the Monte-Carlo model. The perspectives of this PhD work regarding this limit could be to improve the modeling of the charging induced by electrons above a few keV, and to improve the computation time by converting the code from single-thread to multithreading.
The architecture of the simulation and its processes are designed to be as general as possible. Therefore, the present Monte-Carlo code can be extended to any insulator. Simulating the TEEY of another insulator with charging effects can be done if the transport of ballistic electrons can be simulated in this material with the interaction models of Chapter 3, and if the various parameters involved in the simulation of drift transport can be found. The choice of these parameters is especially critical, since each parameter can have a significant effect on the simulation of the TEEY. However, many different values for the charging parameters can be found in the literature in the case of SiO2. It can be difficult to choose a single value for the interaction cross sections or the trap density, for instance, since these values are highly variable. For other, less studied insulators, there may be no reference values in the literature at all, and the charging parameters may have to be chosen arbitrarily.
Nevertheless, this also highlights the complexity of modeling the charge dependent TEEY of insulators, since many physical phenomena are involved, and these phenomena may have a stronger or weaker influence on the TEEY depending on the material and on the structure and purity of the reference sample. Despite all these uncertainties, the Monte-Carlo model can simulate a decrease of the TEEY in conditions of positive charging with the external field compensated, as the charge buildup increases in the material. This reduction of the TEEY strictly caused by internal charging effects has been properly simulated for SiO2. It is also coherent with the TEEY decrease observed experimentally on other insulators. However, we do not know yet what exactly creates this decrease, or how the positive charge buildup can influence the production or escape of secondary electrons. The parametric study made in 5.8.3 hints towards interactions between the electron cascade and the trapped charges, or between the drift particles themselves. It could also be an effect of the internal electric field on the transport of particles, but further evidence is required to validate any of these hypotheses. We also know that the simulation results are qualitatively similar to the decrease of the TEEY observed on other insulators, but we still need to validate our results quantitatively.
This will be the focus of Chapter 6, where we will present the experimental measurements made during this PhD thesis on SiO2 thin film samples in the DEESSE facility. We will then use the experimental results to validate our Monte-Carlo model. Finally, we will attempt to explain the experimental observations using the wide range of data we can extract from the simulations.
Chapter 6: Study of the effect of internal charging on the electron emission yield of silicon dioxide samples
The Monte-Carlo model developed in Chapter 5 should allow us to understand the physical processes behind insulator charging and its effect on the TEEY. Nevertheless, we have only compared the observations made with the simulation with experimental studies made on other insulators, in a strictly qualitative assessment. To ensure the coherence of our simulation, experimental measurements on SiO2 samples are needed for a quantitative validation.
In this chapter, we will use both simulation and experimental TEEY results that were obtained during this thesis work. The DEESSE and ALCHIMIE experimental measurement facilities of ONERA have been used to measure the TEEY of insulator samples. The validity of the ballistic electron transport models has already been shown quantitatively in Chapter 3, where we have demonstrated that the MicroElec Monte-Carlo model can accurately simulate the TEEY curve of non-insulator materials without charging effects. Therefore, the validation will focus on the charging models introduced in Chapter 5. In this scope, we will present measurements of the TEEY depending on the incident energy, like the TEEY curves we have seen so far. Indeed, the incident energy/TEEY curve is the standard data used to evaluate the electron emission of a material, which is why the majority of TEEY data for insulators is only given in the form of an energy/TEEY curve. However, the effect of positive charging and the evolution of the TEEY can be assessed much more accurately with time-resolved measurements showing the temporal evolution of the TEEY at a single incident energy, like the time-resolved simulated results shown in section 5.8. Consequently, most of the results presented in this chapter will be time-resolved TEEY measurements. Several time-resolved simulations of the TEEY of dielectric materials can be found in the literature, but experimental data of the sort is much scarcer, and the simulated evolution of the TEEY was mainly driven by global charge effects. Hence, getting our own time-resolved experimental data on the decrease of the TEEY depending on internal charging is a focal point of this study, and will allow us to validate the Monte-Carlo code.
First, we will detail the measurement protocol used for the experimental measurements presented throughout this chapter. Then, the time-resolved experimental TEEY results obtained at room temperature on SiO2 thin film samples will be presented, and compared with the simulated results in order to validate the Monte-Carlo model. Once the model is validated, we can use it to retrieve data on the physics of charge transport, and provide explanations for the experimental observations. The code will first allow us to explain why the TEEY is decreasing with the positive charge buildup. From the conclusions gathered in this phase, we will highlight several experimental artifacts and bias that can appear when measuring the TEEY of dielectric materials. We will then explain how the charge buildup can falsify the TEEY obtained on an insulator, and what steps can be made during the experiment to avoid this falsification. We will also study the evolution of the TEEY until its stabilization, and the differences in the two TEEY measurement facilities used in this work. Finally, we will move away from TEEY studies in a controlled environment, and explore how the electron emission properties of a dielectric can vary in the conditions of the space environment. In this scope, we will study the variation of the TEEY with temperature from -180°C to 200°C, which is the typical temperature interval that space used materials are subjected to. We will also briefly probe what happens when the samples are not bombarded by a single energy electron gun, but by an energy spectrum and a current density that are representative of the incident radiation received from the space environment.
The values of the parameters used throughout this chapter for the simulation of charge transport in SiO2 are summarized in Table 6-1. Among them, the charge carrier mobility is 20 cm² V⁻¹ s⁻¹ for drift electrons and 10⁻⁵ cm² V⁻¹ s⁻¹ for holes [1], and the electron cascade radius is the 15 nm value chosen in Chapter 5.
Experimental measurement protocol
In this work, we have made experimental TEEY measurements on samples of amorphous SiO2 thin films, which we have already presented when developing the Monte-Carlo model. The samples were obtained from NEYCO, with the SiO2 layer grown on a Si substrate using plasma growth. The SiO2 layer has a thickness of 20 nm, and the samples are 5 cm wide.
All experimental measurements were conducted using the two TEEY measurement facilities at ONERA: DEESSE (Dispositif d'étude de l'Emission Electronique Secondaire Sous Electrons, facility for the study of secondary electron emission under electron bombardment) and ALCHIMIE (AnaLyse CHImique et Mesure de l'émIssion Electronique, chemical analysis and measurement of electron emission). The equipment available in DEESSE (Figure 6-1) and ALCHIMIE (Figure 6-2) is listed below. An extensive description of the facilities can be found in the thesis of T. Gineste [2,3].
Most of the measurements presented in this chapter were made with the DEESSE facility [4]. It is made of an analysis chamber under ultrahigh vacuum (10⁻⁹ mbar). In this chamber, a 1 eV-2 keV and a 1 keV-22 keV electron gun (Kimball Physics) can be used to measure the TEEY. These can be used in either a continuous or pulsed configuration. The sample can be tilted by 80° to study the TEEY at a given incidence angle. The TEEY can be measured with the hemispherical electron collector, or through the sample holder, following the method shown in Chapter 1. The latter technique is the one used in this work.
The Omicron hemispherical analyzer can also be used to study the energy spectrum of emitted electrons from 1 eV to 2 keV, and to perform in-situ Auger Electron Spectroscopy for surface analysis or Electron Energy Loss Spectroscopy. In this work, Auger spectrum analysis has been used to verify the chemical composition of the surface and ensure that the samples have been properly decontaminated. When sending electrons on the sample, we can measure the energetic spectrum of electrons emitted by the material with a hemispherical analyzer placed above the sample. Auger electrons are especially interesting, since each element has its own characteristic Auger transition energies. The Auger electrons can appear in the energy spectrum as very distinctive peaks around the transition energy. The spectrum can then be compared with a database of Auger transition energies per element, which gives us information about the percentages of each element in the surface. This analysis method has been used on the SiO2 samples after the decontamination phase. Only a small amount of hydrocarbon contamination remains (CKLL Auger peak, about 7% of contamination).
The samples can be decontaminated before the measurement in a separate transfer chamber, also under vacuum (10⁻⁸ mbar). This chamber is equipped with a 10 eV-5 keV ion gun, which can be used with Ar or Xe ions for sample sputtering and erosion. The temperature of the sample holders can be regulated from room temperature (23°C) up to 200°C. This is the method we have chosen to decontaminate the samples before each campaign of measurements, by heating the sample for 48 h at 200°C. Finally, the facility is equipped with a Faraday cup for electron beam characterization.

The ALCHIMIE facility is also made of an ultrahigh vacuum analysis chamber (10⁻¹⁰ mbar)
and a transfer chamber (10⁻⁸ mbar). The analysis chamber is equipped with three electron guns (1 eV-2 keV, 5 eV-1 keV, and 1 keV-30 keV) usable in pulsed or continuous configuration. The sample can be tilted up to 80°, and a 10 eV-5 keV ion gun can be used for erosion with Ar or Xe ions. The facility is equipped with a Sigma hemispherical electron analyzer which can record the energy spectrum of electrons from 1 eV to 3.5 keV. This analyzer can be used to perform in-situ surface analysis with X-ray Photoelectron Spectroscopy (XPS) using the built-in X-ray source. In this method, X-rays are sent on the target to generate photoelectrons that can escape the material and be detected by the analyzer. Due to the low escape depth of the electrons (less than 10 nm), the energy spectrum (and therefore the TEEY obtained in the subsequent measurements) is strongly dependent on the chemical composition of the surface. Since photoelectrons have a distinctive energy characteristic of the chemical element they were generated from, the spectrum can then be used to determine the proportion of each chemical element at the surface. The facility is also equipped with a Faraday cup and a Pfeiffer mass spectroscopy gas analyzer. Finally, the sample holders can be cooled with liquid nitrogen down to -180 ± 10°C, and heated up to 200 ± 10°C. In this work, TEEY measurements were made with ALCHIMIE from -180 ± 10°C to 100 ± 10°C, following the same protocol as with the DEESSE installation.

During the secondary electron emission measurements in both installations, the sample holder is biased to a -9 V potential, so that the surface potential remains negative and the secondary electrons that escape the material are not recollected by the surface. The sample is irradiated by a 2 keV Kimball Physics electron gun with pulsing capabilities in a defocused beam configuration (> 25 cm² down to 0.1 cm²), with an incident current of 0.1 to 1 µA, giving an incident current density ranging from less than 25 nA/cm² up to 20 µA/cm².
All measurements and simulation results were obtained under normal incidence. Indeed, sending electrons with a given angle of incidence may introduce 3D effects that we may not be able to reproduce with our 1D sampling of the charge distributions. Moreover, we are focusing on interactions of the electrons inside of the material, which should modify the TEEY regardless of the incidence angle.
The TEEY measurement procedure used in this work is the one presented in Chapter 1, which is based on two measurements of the current flowing through the sample. First the sample holder is biased to a potential of +27V, to force the recollection of all low energy secondaries. The current 𝐼 𝑆 + measured during this step is very close to the incident current (𝐼 0 ≅ 𝐼 𝑆 + ). Then, the sample holder is biased to a potential of -9V, to prevent the recollection of secondary electrons that can be induced by the positive charging of the sample. The current 𝐼 𝑆 -measured in this case can be used to deduce the emitted current, using the value of 𝐼 0 from the previous step. Finally, the TEEY is obtained from the ratio of emitted current over incident current:
$TEEY = \frac{I_E}{I_0} = \frac{I_0 - I_S^-}{I_0} = \frac{I_S^+ - I_S^-}{I_S^+}$   Equation 6-1
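As a trivial helper, Equation 6-1 can be written directly from the two measured sample currents (illustrative code, not the acquisition software):

```cpp
// TEEY deduced from the sample currents measured with the +27 V and -9 V
// sample holder biases (Equation 6-1): I_S+ approximates the incident current.
double computeTEEY(double iSamplePositiveBias, double iSampleNegativeBias)
{
    return (iSamplePositiveBias - iSampleNegativeBias) / iSamplePositiveBias;
}
```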
Since we are working with dielectric samples, all measurements were made in a pulsed configuration instead of continuous incident current. This means that the sample is only irradiated with incident electrons for a certain duration (a few ms typically), then the electron gun is cut for a relaxation period (a few tens to hundreds of ms). This procedure greatly limits the influence of charging on the TEEY. In pulsed measurements, a first series of pulses is sent to measure the incident current, then a second series is sent to get the emitted current, from the procedure shown above. In TEEY experiments made on dielectric materials, we do not directly measure the current flowing through the sample, since they are of course insulators. Instead, we measure the opposite current generated by the creation of an image charge in the metallic sample holder.
In the simulation, the number of incident electrons is already known, and stays constant for each simulation step. Therefore, only a measurement of the number of emitted electrons is needed, and the sample holder is permanently biased to -9 V in the simulation. We also count the number of electrons exiting the material with an electron collector surrounding the sample, instead of a measurement through the sample holder.
The energy/TEEY curve is obtained through an automated program. For each incident energy, two series of 10 pulses are sent, with a duration of 6 ms per pulse and a relaxation period of 200 ms between each pulse. The first series gives the value of I_0, and the second series gives I_E. The current measured in both cases is the sum of the current acquired during each pulse. Hence, the TEEY for each incident energy is the averaged TEEY over 10 pulses.

During time-resolved measurements of the TEEY at a single energy, 80 to 100 pulses of incident current of 1 µA and 100 µs duration are sent, with a 50 ms relaxation period between each pulse. The pulses were made shorter in order to improve the resolution of the measurement. Indeed, a single point is acquired at the end of each pulse, which is the current measured during the pulse. By using 100 µs pulses instead of the standard 6 ms pulses, the resolution of the measurement is greatly improved, and we can more accurately observe the decrease of the TEEY. The relaxation period is necessary for the instruments to record the current through an oscilloscope. If the relaxation period is too short (below 10 ms), parasitic capacitances can perturb the recording of the sample current. The standard pulse parameters used in both the experimental measurements and the numerical simulations of this chapter are summarized in Table 6-2.

In this section, all measurements and simulations were made with an incident current of roughly 1 µA, and a beam spot size of 0.1 cm² was used to ensure that we could observe a modification of the TEEY due to the internal charge interactions. Therefore, the current density J0 of all simulations and measurements of this section is 10 µA/cm², unless specified otherwise.
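For convenience, the pulse timings quoted above can be gathered in a small illustrative structure (the complete parameter set remains that of Table 6-2; names and layout are only a sketch):

```cpp
// Compact recap of the pulse timings quoted in the text.
struct PulseSettings {
    int    pulsesPerSeries;    // number of pulses in a series
    double pulseDuration_s;    // duration of one pulse
    double relaxation_s;       // relaxation period between pulses
};

// Energy/TEEY curve: two series of 10 pulses per incident energy.
const PulseSettings kEnergySweep  {10, 6e-3, 200e-3};
// Time-resolved TEEY at a single energy: 80 to 100 pulses of 100 us.
const PulseSettings kTimeResolved {100, 100e-6, 50e-3};
```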
Using the DEESSE facility, the TEEY has been measured at room temperature for incident energies of 300 eV and 1 keV. In these conditions, the material is charging positively: more holes than electrons are created in the material. For most of this study, we will focus on measuring and studying the evolution of the TEEY during 80 or 100 pulses of incident electrons, for a total irradiation time of 8 to 10 ms. This duration should allow us to observe a marked decrease during the experiments, while keeping the computation time reasonable. Indeed, in Chapter 5, we observed in such conditions a decrease of the TEEY with time. We noted a reduction of the TEEY by 13% to 20% after only 6 ms, which is the length of a single pulse in conventional TEEY measurements. Therefore, a total irradiation time of 8 ms should already be enough for us to validate the code, and try to provide physical explanations for this decrease.
From Figure 6-3, we can see that a very similar decrease to the simulation is measured experimentally for these two incident energies. The TEEY starts at a value greater than one, and is reduced at variable speeds depending on the initial value of the TEEY and the electron energy. This decrease is observed even though the positive external charging effects are removed. For 1 keV electrons the decrease of the TEEY is very sharp during the first ms, and a steady state is reached quite rapidly after 2 ms. The simulation results can then be compared with the experimental measurements we have obtained, with the comparison shown in Figure 6-4. In this figure, the experimental results are adjusted by 20 % so that the starting point of the decrease of the experimental TEEY matches with the simulations. That is to say, the experimental TEEY curve at 300 eV is multiplied by 1.2, and the experimental curve at 1 keV by 0.8. This allows us to directly compare the temporal decrease of the TEEY. As in the experiment, a series of 100 µs incident electron pulses are sent on the material, with a relaxation time of 50 ms between each pulse. For both 300 eV and 1 keV incident electrons, the simulation is able to reproduce the evolution of the TEEY due to the positive internal charging. The difference between the initial TEEY of the experiment and the simulation is due to the fact that the simulations start from a perfectly flat and decontaminated target, with no trapped charges inside. This is not the case for the experimental samples, which may have some surface roughness, a small amount of residual hydrocarbon and/or hydroxide contamination, or residual charges that are already deeply trapped at the start of the experiment. The starting point of the TEEY in the simulations is also strongly dependent on the density of traps, which limits the mean free paths of the low energy secondary electrons, and the final point of the decrease strongly depends on the recombination cross section. However, these parameters are difficult to choose accurately, as already stated.
From Figure 6-5, we can see that there is a difference of about 10 to 15% in amplitude between the charge-less TEEY from the simulations, and the experimental TEEY obtained with a broad beam configuration ( > 25 cm²). We will show in 6.3 why the TEEY measured with a broader beam is the closest we can get to the charge-less TEEY. This difference in the charge-less TEEY can also be a source of discrepancy for the start of the decrease in the time resolved results.
Finally, there is also the possibility of a space charge close to the surface, which can force the recollection of secondary electrons during the measurement. However, previous studies on this experimental setup have shown that these effects are negligible [5]. Another possibility is that the parameters chosen for the simulation of charge transport are only representative of low electric fields and short times. Indeed, at 8 ms, the error between the simulation and the experiment increases up to 15% and the curves start to diverge. As the time of irradiation increases, the electric field inside the material is also increasing. As we will investigate in 6.2.2.1, if the electric field becomes higher than a certain threshold, the transport of charge carriers enters a different regime, which is governed by the electric field instead of the scattering mechanisms. To model the transport of charges in this high-field regime, we may have to change our current charging parameters.
In section 6.5.1, we will present additional measurements made on ALCHIMIE to sample the TEEY over several seconds, until it reaches its final steady state. However, for the remainder of the present study, we shall focus on the first 8 ms of the decrease. We consider an error of 20% at the start of the decrease and a divergence of 15% after 80 pulses to be very satisfactory, given the approximations of our model, the numerous processes involved in insulator charging which can be a source of errors, the wide spectrum of possible values found in the literature for the parameters of our model, and the strong dispersion of the experimental TEEY data obtained on insulating samples. Moreover, the model is able to reproduce the decrease of the TEEY over multiple energies which have quite distinct behaviors. As a result, we consider that the simulations are accurate enough to explain how the internal charging leads to a decrease of the TEEY.
6.2.2 Explanation of the experimental observations through the simulation: Study of the internal charge transport

6.2.2.1 Study of the internal electric field
Even if the recollection of the low energy secondary electrons is impossible due to the negative applied surface potential, other internal mechanisms could be the cause of the reduction of the TEEY. First, the electric field generated in the material by the trapped charges can modify the trajectories of the secondary electrons by accelerating them between interactions. According to Fitting et al. [6], electric fields above 0.5 MV/cm are strong enough to increase or reduce the escape depths by a few nanometers. Such high electric fields can also strongly accelerate the drift electrons and force them to move in the direction of the field [7]. The drift electrons can then be accelerated up to a few eVs [8], which can modify the phonon collision and trapping mean free paths. In such a case, the mobilities of the charge carriers may also be significantly affected. There are in fact three regimes of transport that the drift electrons can follow in silicon dioxide, depending on the value of the electric field and the energy they gain between two collisions [9,10]. When the field is lower than 0.5 MV/cm, the electrons cannot gain enough energy between two collisions with LO phonons and their mobility is steady. Above 0.5 MV/cm, the LO phonon collisions cannot prevent the electrons from heating up and gaining energy, which is the optical runaway phase. Here, the acoustic phonon collisions contain the electron energies up until fields of 3-4 MV/cm [9], when acoustic runaway occurs. If the field increases past this value, the electrons may gain enough energy to create secondary electrons through impact ionization, for fields close to the breakdown value in SiO2 (10 MV/cm). The trapping cross sections for coulombic attractive traps should also be lowered if the field and the electron energies increase [11].
Due to the approximations used for the drift particles, the transport of these particles may not be accurately modeled at very strong electric fields. Indeed, we approximate the collision with phonons as a single trajectory which would be the sum of all collisions, and we use a single value of mobility for the particles. With this approach, we should be able to model the fact that the drift particles are all forced to travel in the direction of the field, if it is high enough. But as we saw, the mobility should evolve for strong electric fields, along with the phonon collision frequency and mean free path. We also do not consider the evolution of the capture cross section depending on the drift electron energy. Nevertheless, such fields are not reached in our case, as shown in Figure 6-6 where the internal electric fields in a SiO2 sample after 80 pulses of 500 eV and 1 keV electrons are plotted. The maximum field value at 500 eV is about 0.04 MV/cm, and 0.34 MV/cm at 1 keV. These values are much below the 0.5 MV/cm threshold above which the transport of electrons will be impacted. Since 500 eV is around the maximal value of TEEY, this is the field obtained when the net quantity of charges created in the material is maximal, and should be the highest value of electric field attainable. Indeed, if we sample the electric field in the material after 100 pulses of 1 keV electrons, the value of the field is lower, as is the TEEY. Since the electric fields remain low enough, the approximations used for the transport of drift particles are still valid. Indeed, while the value of the electric field could be strong enough to force the charge distribution to move in its direction, the electric field is not high enough to create a regime of optical runaway.
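To give an idea of the orders of magnitude involved, the sketch below estimates a 1-D internal field from a hypothetical trapped-charge profile using Gauss's law, assuming a field-free vacuum above the surface. The charge densities are purely illustrative, and this is not the field solver used in the Monte-Carlo code.

```python
# 1-D estimate of the internal field created by a trapped-charge profile:
# dE/dz = rho/eps, integrated from the surface with E(0) = 0 (field-free
# vacuum above the surface). Charge densities below are illustrative only.
import numpy as np

EPS0 = 8.854e-12      # vacuum permittivity (F/m)
EPS_R = 3.9           # relative permittivity of SiO2
Q = 1.602e-19         # elementary charge (C)

z = np.linspace(0.0, 20e-9, 401)          # depth below the surface (m)
# holes trapped near the surface, electrons implanted deeper (densities in m^-3)
rho = Q * (4e23 * np.exp(-((z - 3e-9) / 2e-9) ** 2)
           - 2e23 * np.exp(-((z - 12e-9) / 4e-9) ** 2))

dz = z[1] - z[0]
cumulative = np.concatenate(([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * dz)))
E_field = cumulative / (EPS0 * EPS_R)     # V/m

peak = np.abs(E_field).max()
print(f"peak |E| ~ {peak:.2e} V/m = {peak * 1e-8:.3f} MV/cm")
```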
On the other hand, the dependencies of the phonon collision MFPs with energy are all included in the transport of ballistic electrons, and we should be able to simulate the charge-less TEEY/energy curve in the presence of a strong internal field. Therefore, probing the transport of very low energy electrons at high fields could be another validation of the Monte-Carlo code.
In Figure 6-7, the internal electric field has been set to given values from 0 MV/cm up to 4 MV/cm. The field is uniform over the whole thickness of the SiO2 sample. The sign of the field corresponds to its orientation along the z axis. For positive values (a), the field is oriented down towards the Si layer, and the electrons are accelerated towards the surface. For negative field values (b), the electrons are accelerated deeper into the material. We can see that there is no modification of the TEEY at 0.04 MV/cm, which is the maximal value reached in the simulations. From ±0.5 MV/cm however, the escape depth of the secondary electrons starts to be modified, and the TEEY is increased or reduced depending on the orientation of the field. This modification of the TEEY becomes more visible as the field strength increases. This is coherent with other Monte-Carlo simulation works, which have observed a modification of the TEEY in SiO2 from 0.5 MV/cm and above [6,7,12]. In conclusion, our simulation code can qualitatively model the effect of a high electric field on the transport of ballistic electrons. However, we do not have data on the actual field reached in the thin film samples and its evolution during the experiments, and we cannot affirm whether the electric field reached in the time-resolved simulations is correctly estimated. We know that the final potential difference between the sample holder and the surface is less than 9 V over 20 nm, which gives a global electric field of at most 4.5 MV/cm. In such a case, we could have strong field effects modifying the TEEY. However, in both simulations and experiments, we do not reach the final stabilization of the TEEY and potential. Therefore, we do not know the actual value of the field at the end of our measurements, and we do not have access to the value of the field as a function of depth, such as in Figure 6-6. Moreover, the final surface potential is observed in continuous irradiation, whereas we have used pulsed measurements throughout this chapter, so that the charges can be evacuated between two pulses. Hence, the 4.5 MV/cm is a worst-case value, which is never reached during pulsed measurements. Given that the decrease of the TEEY is consistent with the experiment, we will assume that the field computed in the simulation is within the right order of magnitude, despite the lack of experimental data on the internal field during the measurement.
Taking this into account, we did not simulate the drift transport in high fields since we know that the approximations used in this work become invalid. However, this regime is never reached in the simulations, as the field always remains lower than 0.1 MV/cm. One of our hypotheses to explain the decrease of the TEEY was that the charge buildup was creating an electric field, which could interfere with the transport of secondary electrons and reduce their escape probability. However, the actual field is not strong enough to have a significant impact on both the ballistic and drift transport. Therefore, we can rule out this hypothesis.
6.2.2.2 Study of the internal charge density
We have found that the decrease of the TEEY is not created by the internal electric field. Consequently, it is very probable that some interactions between the drift charge carriers and the secondary electrons could perturb the electron cascade and prevent the secondary electrons from escaping, or even from being produced. While the transport of low energy electrons is already disturbed by the capture by free traps, the presence of charges in the material could amplify this phenomenon. To verify this, it is necessary to study the internal charge density and its evolution with depth and time. First, we will focus on the shape of the charge/depth profile at the end of the decrease. In Figure 6-8, the total simulated charge densities at the end of 100 pulses (10 ms) of 300 eV and 500 eV electrons are plotted (a). They are compared with the charge density sampled by sending 10000 electrons on the target without simulating the drift of charge carriers or using a charge bias factor (b). This plot gives us information on what the charge density would be if there were no charge carrier transport or recombination, which would be the result of a single electron cascade before the drift transport is simulated. In this specific case, we deviate from the default 500 electrons sent for each simulation time step (1 µs). Indeed, increasing the number of electrons allows us to reduce the simulation noise. We do not send 10000 electrons in the charging simulation, since this would need to be repeated at every step and the computation time would be excessive. Here, the simulation of Figure 6-8 (b) only needs electrons to be sent once, which is why we can afford to send 10000 electrons at once. The 300 eV and 500 eV curves have a similar profile. Browsing the charge density profile from the surface, a positive region appears in the first few nanometers below the surface. In this region, more holes than electrons are captured, with the peak density of holes at 2.5 nm below the surface for 300 eV and at 5 nm for 500 eV electrons. The material has a strong positive charge down to 7.5 nm at 300 eV, and down to 14 nm at 500 eV. Indeed, electron-hole pairs are created as the incident electrons go through the sample and deposit their ionizing dose. However, only the electrons that are close to the surface are able to successfully escape the material. When a secondary electron crosses the surface, the hole, which is much less mobile, is left in the material as a net positive charge. The total charge density becomes negative at a greater depth for 500 eV electrons, since these electrons are more energetic. Indeed, their ionizing dose is deposited
through a greater thickness than 300 eV electrons, which leads to electron-hole pairs being created deeper. From the drift-less charge profiles, we can extrapolate the escape depths of the secondary electrons produced by electrons of a given incident energy. Going by the position of the peak hole density, we can also estimate that most secondary electrons are created around 3 nm below the surface. The escape depth should then be 4 nm for 300 eV and 6 nm for 500 eV incident electrons. Below the escape depth, the secondary electrons are not able to escape and will become thermalized. Since no negative charges are lost, the sum of charges in this region should be equal to 0. However, there is still a net negative charge. Indeed, we are now reaching the implantation region of the incident electrons. The extrapolated range of electrons in SiO2 has been extracted from simulations on a charge-less sample, and is plotted in Figure 6-9. We can see that the peak of the negative charge density is spread around the value of the extrapolated range when the transport of the particles is not simulated. This is also why the negative region is located deeper for 500 eV electrons, which have a greater range than 300 eV electrons. The charge density profile of 1 keV electrons is quite interesting, as shown in Figure 6-10, since the whole thickness of the material is positively charged. There is also a larger net positive charge than for the other electron energies, especially close to the interface with the silicon substrate. It is known that more energetic electrons will create more electron-hole pairs, but the TEEY of 1 keV electrons is lower than that of 300 eV or 500 eV electrons, which contradicts the fact that the positive net charge is larger. The extrapolated range obtained for 1 keV electrons in SiO2 (30 nm) is greater than the thickness of the sample (20 nm). This means that the implantation region of the primary electrons is mainly in the Si substrate, which should explain the lack of a distinct negative charge region in the SiO2 layer. The electrons may also be implanted deep enough that a significant part of them can escape through the substrate, resulting in the loss of negative charges and a net positive charge appearing close to the SiO2/Si interface. This leads to a large net positive charge close to the SiO2/Si interface when we sum the quantities of positive and negative charges to get the total charge density plotted here, hence the peculiar shape of the charge distribution.
In Figure 6-11, we can see the charge profiles obtained by sending 100 000 electrons and disabling the drift transport, which gives us the result of a single electron cascade as in Figure 6-8b. We find for the 20 nm sample a conventional shape for the charge density instead of the strictly positive curve after 100 pulses. We can also see that most electrons in the 20 nm sample are created within the 5 closest nanometers to the SiO2/Si interface, which is why they are able to escape the dielectric layer very easily. Finally, the peak of negative charges when the drift transport is disabled is located at 25 nm in Figure 6-11. This gives the mean implantation depth of 1 keV electrons, which is also greater than the 20 nm thickness of our samples. As we want to understand a time-dependent decrease, we should now look at the evolution of the charge density through the irradiation time. In Figure 6-13, the charge density profiles measured through the 100 pulses of the simulation are shown, with one curve sampled each 50 µs. The elapsed time increases as we move from blue to red. In Figure 6-12, the position of the positive and negative charge peaks is also plotted as a function of time, which gives a clearer estimation of the migration of charges. From these figures, we can see that the motion of holes is very limited, with at most 1 nm of difference between the positive charge peak of 500 eV electrons at 0 pulses and 100 pulses. For electrons, which are much more mobile, the peak of negative charges is moving towards the SiO2/Si interface at 300 eV and 500 eV. At 300 eV, it moves from 7 to 12nm, and at 500 eV from 10 to 17 nm. However, given the orientation of the field, we would be expecting the electrons to travel towards the surface instead.
It is also notable that the positive charge peak is increasing, but the negative charge peak starts decreasing after a given time. It is possible that the electrons are actually moving towards the surface and meeting the hole population. When these electrons reach the hole population, they get lost by recombination, which decreases the negative charge. This phenomenon can be seen for 500 eV electrons, where the charge density becomes negative at 8 nm at the start of the simulation, but is only negative above 15 nm after 100 pulses. However, another phenomenon is occurring in the opposite direction: the drift electrons are escaping through the silicon substrate, resulting in the loss of these negative charges and a net positive charge. The leak of electrons in the silicon layer should then be more pronounced for 500 eV electrons than 300 eV. This is evidenced by the migration and reduction of the negative charge peak at 500 eV, hinting towards a leak of electrons in the later stages of the simulation.
In Figure 6-6, we saw that the peak value of the field was on the order of 10⁴ V/cm. According to Fitting & Friemann [7], such a field does not have any effect on the trajectories of electrons of 0.1 eV. Our drift electrons are generated with an energy of 3/2 kT = 0.04 eV. It is therefore possible that the electric field in the sample is not strong enough to force most of the electrons to follow its opposite direction, and that electrons are traveling in all directions due to thermal agitation. Hence, the electrons implanted close to the Si layer still have a significant probability of leaking into the substrate, even though the direction of the field tends to accelerate them away from the interface. Another part of the electrons can be traveling towards the surface and recombining with the holes, which is why we have two opposite phenomena occurring at the same time. This would also explain why the holes are practically not moving, as they have an even lower mobility.
Finally, we can compare the charge density profiles of 1 keV electrons during the first pulse and the second pulse, following the 50 ms relaxation period. This comparison is shown in Figure 6-14. We can see that during the first pulse, we still have a conventional charge profile with a negative region after 12.5 nm. Due to the neutralization of positive and negative charges, the total charge profile is very close to 0. However, after the relaxation period, the charges have separated. Most electrons have escaped in the silicon layer, but the holes have remained in the SiO2 layer. This results in a net positive charge near the interface, which is due to the escape of electrons in the silicon layer and confirms our previous explanation. The evolution of the charges in the first 5 nanometers of the surface should be critical, as it is the region where most secondary electrons are created and escape from. In the following, we will study the evolution of the total density of charges in the first 5 nanometers below the surface, which will be referred to as the surface charge density. From the charge profiles we have seen so far, we can expect this charge density to be strictly positive. It should be composed almost only of trapped holes, although a small part of the electrons may get trapped very close to the surface. This density will be expressed in charges/cm², and is given by an integral of the number of charges per cm³, N(z), analogous to the formula for the surface energy deposit from Chapter 4:

\[ N_{\mathrm{Surf}}\ (\mathrm{cm^{-2}}) = \int_{0}^{5\,\mathrm{nm}} N(z)\, dz \]  (Equation 6-2)
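As an illustration of Equation 6-2, the short sketch below integrates a hypothetical hole density profile over the first 5 nm to obtain a surface charge density; the profile is a placeholder, not data from the simulation.

```python
# Equation 6-2: surface charge density as the integral of the volume density
# N(z) (charges/cm^3) over the first 5 nm below the surface.
import numpy as np

z_nm = np.linspace(0.0, 20.0, 401)                     # depth (nm)
N_z = 1.0e18 * np.exp(-((z_nm - 2.7) / 1.8) ** 2)      # illustrative hole density (cm^-3)

mask = z_nm <= 5.0
N_surf = np.trapz(N_z[mask], z_nm[mask] * 1e-7)        # dz converted from nm to cm

print(f"N_surf = {N_surf:.2e} charges/cm^2")
```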
In Figure 6-15, the evolution of the surface charge density is plotted with the evolution of the TEEY for our three energies of interest. We can see a definite correlation between the time-resolved increase of the surface hole density and the decrease of the TEEY. Indeed, when the surface hole density increases at a stronger rate, the TEEY also decreases at a stronger rate. For instance, at 1 keV, the decrease of the TEEY is weaker than for the other energies, and the increase in the surface hole density is also weaker. From the interactions of the charge carriers shown in Chapter 5, we can recall that the probability of recombination of the drift electrons and ballistic electrons increases as the density of trapped holes increases. From the very similar evolution of the TEEY and the charge density, it is very probable that the recombination of electrons with holes is the driving mechanism of the decrease of the TEEY. Indeed, a higher hole density increases the recombination probability, which should lower the overall capture mean free path of secondary electrons escaping the material. If an increasing number of secondary electrons is lost to recombination before they can reach the surface, this should cause a reduction of the TEEY. The incident electrons are also susceptible to recombine, which could prevent them from creating secondary electrons in the first place. However, this recombination is only possible at the end of their path, when their energy has fallen to a few eV and they have already created an electron cascade above their implantation depth.
Since the value of the surface charge density is known through the whole simulation, we can try to prove this correlation by quantifying the decrease of the TEEY and confronting these two quantities. This can be done by computing the relative variation of the TEEY, ΔTEEY/TEEY, compared to its value at t = 0. It is expressed as:

\[ \frac{\Delta TEEY}{TEEY}(t) = \frac{TEEY(0) - TEEY(t)}{TEEY(t)} \]  (Equation 6-3)
Using this formula, we aim to follow the method given in the experimental study of Belhaj et al. [13]. They have calculated the surface density of positive charges from the surface potential ΔV_s, the capacitance C, and the surface area S of the sample, with the formula

\[ N_{\mathrm{Surf}}\ (\mathrm{cm^{-2}}) = \frac{C}{qS}\,\Delta V_s \]  (Equation 6-4)

They have then plotted the relative variation of the TEEY of MgO at 200 eV with the surface hole density, and observed a linear relation between these two quantities. They have deduced an empirical correlation relationship that links the relative decrease of the TEEY to the surface charge density N_surf as:

\[ -\frac{\Delta TEEY}{TEEY} \approx S_{e\text{-}h}\, N_{\mathrm{surf}} \]  (Equation 6-5)
Where S_e-h is defined as an effective recombination cross section. In Figure 6-16, we have plotted the same correlation, using the ΔTEEY/TEEY and the N_Surf extracted from the Monte-Carlo simulation for energies ranging from 100 eV up to 2 keV. There is in fact, for all energies, a strong linear correlation between the hole density and the relative variation of the TEEY, with an R² of 0.99 except for the higher energies. For all energies except 100 eV, the curves are superimposed, and the values of the effective recombination cross section are almost identical, with an average of S_e-h = 5.7 × 10⁻¹² cm². This cross section is of the same order of magnitude as the recombination cross section used in the Monte Carlo simulations, which is σ_e-h = 2 × 10⁻¹² cm². This result clearly shows that the recombination of electrons with holes is the internal mechanism at the origin of the decrease of the TEEY at all energies. When the hole density increases, more secondary electrons recombine and are lost before they can escape the material. In consequence, the TEEY is reduced. With our simulation results, we have also confirmed the hypothesis formulated experimentally by Belhaj et al. Notably, the linear correlation does not fit the very start of the decrease of the TEEY, as shown in Figure 6-17 for the example of 300 eV and 1 keV electrons. In this phase, the material is not yet uniformly irradiated, and the overlap factor is lower than one. When the material is uniformly filled by holes, the recombination takes over as the driving mechanism of the evolution of the TEEY. Hence, the evolution of the TEEY we measure after a given time depends only on the properties of the material regarding charge creation, transport, and trapping. This is coherent with the fact that this evolution of the TEEY was not only observed by ourselves on SiO2, but also by several experimental works on other insulators.
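The correlation analysis can be summarized by a simple linear fit, as sketched below. The arrays are placeholders standing in for the ΔTEEY/TEEY and N_Surf values extracted from the simulation; the fitted slope plays the role of the effective recombination cross section S_e-h of Equation 6-5.

```python
# Fit the relative TEEY decrease (Equation 6-3) against the surface hole
# density; the slope corresponds to the effective recombination cross section
# of Equation 6-5. The data points below are illustrative placeholders.
import numpy as np

N_surf = np.array([0.0, 2e10, 5e10, 1e11, 2e11, 3e11])   # holes/cm^2
teey   = np.array([2.50, 2.24, 1.95, 1.59, 1.17, 0.92])  # TEEY at the same times

rel_var = (teey[0] - teey) / teey            # (TEEY(0) - TEEY(t)) / TEEY(t)

S_eh, intercept = np.polyfit(N_surf, rel_var, 1)
print(f"effective recombination cross section S_eh ~ {S_eh:.1e} cm^2")
```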
We can now also explain why the various parameters of the model had a strong impact on the evolution of the TEEY shown in the parametric study of section 5.7.3. Lowering the recombination cross section meant the TEEY had a very small reduction compared to the default case. By lowering this cross section, the trapped holes are less attractive to the secondary electrons, which have a longer global capture mean free path. Their probability of recombining before escaping is lowered, and the TEEY is increased. When the detrapping frequency factor was lessened, the decrease of the TEEY was sharper. In this case, the migration of trapped charges by hopping was prevented. From Figure 6-8, we have shown that the holes are created in the first 5 nanometers of the surface, but they can migrate a couple of nm deeper in the material, which will reduce the surface hole density. By deactivating the detrapping, the density of holes close to the surface is higher, and so is the recombination probability. The recombination with holes had already been proposed by other experimental works as an explanation of the decrease of the TEEY, in the case of negatively biased samples under a defocused beam [13][14][15][16][17]. This hypothesis is confirmed here both qualitatively and quantitatively by our numerical simulations, and the experimental measurements we have made on SiO2 samples. In conclusion, we have successfully provided an explanation for the misunderstood decrease of the TEEY, with our Monte-Carlo simulation code and the study of the charge transport.
6.2.3 Highlighting the effect of the presence of residual holes in the sample at the start of a measurement
So far, we have assumed in our simulation that the samples are perfectly charge-less at the start of the measurement. However, this may not be the case for the real samples used in the experimental study. Indeed, the charges created in the material during irradiation may not be completely evacuated between two TEEY measurements. If the residual surface hole density is high enough, the secondary electrons produced will have a significant probability of recombining with these holes, as we have shown in the previous subsection. In simulation codes, we have complete control over and knowledge of the charge density inside the material. In TEEY experiments, however, it is not possible to follow the evolution of the charge at the same time as the evolution of the TEEY. We can only use a Kelvin probe to measure the surface potential after the sample has been bombarded by electrons, which only gives us information on the total charge remaining in the sample after a relaxation period. Consequently, at the start of the TEEY measurement, we do not know the charge state of the material. This can be problematic if we perform an experimental measurement on a sample that is still filled with residual holes. In fact, the presence of charges can lead to an error in the measurement of the TEEY, which can be lowered due to the recombination.
This effect is shown in Figure 6-18, where we have made two experimental measurements of the TEEY of 1 keV electrons in SiO2, in the DEESSE facility. The first TEEY measurement was made on a sample that was left at rest for two days since the last measurement, so we can consider that practically all charges should have been evacuated, except the charges captured by very deep traps (> 1 eV). The second measurement was made in the middle of a measurement campaign on the same sample. One can see that the second measurement is shifted compared to the first data set. It is possible that the electron cascades created during the previous TEEY measurements may have left residual charges that have perturbed this measurement. While we do not precisely know the quantity of charges injected between the two measurements, this first plot already shows in a qualitative way that two measurements following an identical protocol may not yield the same value of TEEY, depending on the charge state of the sample. A more quantitative study on this phenomenon has been made with the ALCHIMIE facility. This time, we have made two TEEY measurements using 300 pulses of 500 eV electrons. The second measurement was made right after the first one, so we have a better estimation of the quantity of charges left in the material at the start of the second measurement. From the experimental results in Figure 6-19, we also observe a downwards shift of the TEEY obtained in the second measurement. To prove that it is indeed the presence of residual holes in the sample that is falsifying the measurements, this phenomenon has been reproduced in the Monte-Carlo code. We will attempt to simulate the same situation as in the experiments made with ALCHIMIE, where we have made a second measurement of the TEEY immediately after the first measurement. However, we cannot know precisely the quantity of charges inside of the material at the end of an experimental measurement. Consequently, this study can only be qualitative. Instead of having the simulation start from a perfectly virgin sample, we can introduce a density of holes that are already deeply trapped at the beginning of the simulation. We can show that only the deeper traps can retain holes between two measurements, by calculating the time of residence of holes 𝜏(𝐸 𝑖 ) in a shallow trap of depth 𝐸 𝑖 . It is given by the standard activation law as:
\[ \tau(E_i) = \frac{1}{W(E_i)} = \frac{1}{W_0 \exp\left(-\dfrac{E_i}{kT}\right)} \]  (Equation 6-6)
Where W₀ = 10³ s⁻¹ for shallow traps, and T = 300 K. If we take a shallow trap with an energy depth of 0.1 eV, this gives a time of residence of 10⁻³ s. Let us compare this to the most frequent deep trap in SiO2, which is the oxygen vacancy with an energy depth of 2.4 eV. According to Equation 6-6, the immobilization time of a charge carrier captured by such a trap is 10²⁰ years. While the actual immobilization time of the charge carrier in this trap will be reduced thanks to the Poole-Frenkel and Phonon Assisted Tunneling enhancements, it should still remain much larger than the time scale of a TEEY measurement. Accordingly, in the tens of seconds that separate two measurements, the particles in shallow traps should have had ample time to either exit the sample or get fixed in deep traps.
The distribution of residual holes we introduced is assumed to follow the distribution of positive charges shown in Figure 6-8 at the end of 100 pulses. To take into account the fact that not all charges will end up in deep traps, we will assume that 50% of the charges at the end of the 100 pulses have been captured by deep traps, and the rest have escaped or recombined. Therefore, the actual charge density used in the simulation will be half of what is given below. The charge densities of Figure 6-8 for 300 and 500 eV electrons at a depth z (in nm) are approximated by a Gaussian law as:
\[ n_h(z) = n_0 \exp\left(-\frac{(z-\mu)^2}{2\sigma^2}\right) \]  (Equation 6-7)
With n₀ = 1.38 × 10⁹, µ = 2.7 nm, σ = 1.8 nm at 300 eV, and n₀ = 1.5 × 10⁹, µ = 4.7 nm, σ = 3.5 nm at 500 eV. Due to the singular shape of the charge distribution after 100 pulses of 1 keV electrons, a better fit was achieved for this energy with an exponential law in the form of:
\[ n_h(z) = n_0 \exp\big(\alpha\,(z - 19\,\mathrm{nm})\big) \]  (Equation 6-8)
Where n₀ = 5.5 × 10⁹ and α = 0.1. For this energy, there was an excessive lowering of the TEEY (below 1) when using 50% of the density of Equation 6-8, so we have used 25% of the density instead. The approximated densities given by these relationships are plotted in Figure 6-20 in dotted lines, and compared with the reference charge density after 100 pulses in solid lines. In the simulation, the n_h(z) used is 50% or 25% of the values given by Equation 6-7 or Equation 6-8, which is then divided by the charge bias factor to get the actual number of trapped particles to add in each cell of the mesh. The simulated TEEY of 300 eV, 500 eV and 1 keV electrons for a sample including residual holes is compared in Figure 6-21 with the simulation results obtained on a sample that is initially charge-less. The shift of the TEEY curve in the charged sample is clearly visible, and the phenomenon we have observed experimentally is accurately modeled by the simulation code. This confirms that surface trapped holes are at the origin of the phenomenon shown in Figure 6-18, by increasing the recombination probability of the secondary electrons. For all curves, the residual-holes TEEY starts roughly at the value reached at the end of the charge-less TEEY. This shows that our estimation of the proportion of holes remaining in the sample immediately after the first simulation is within the right order of magnitude. When doing successive experiments, the difference in the TEEY is not as pronounced as in the simulation, since there is always a rest time of several seconds when changing the measurement parameters. A part of the holes may also be removed during the measurement of the incident current, which is made before measuring the sample current. In section 6.2.1, a scaling factor of 20% had to be applied to the experimental data so that it could be compared to the simulation. While this is due to the differences in the charge-less TEEY and the presence of surface contamination on the experimental samples, this difference could also be due to these residual holes. It is also possible that the starting point of the TEEY in our time-resolved measurements does not correspond to a fresh sample, and that the actual charge-less TEEY is higher than our reference data.
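The residual hole distributions injected at the start of these simulations can be reproduced directly from the fit parameters quoted above, as in the sketch below. The scaling by 50% or 25% reflects the assumed fraction of deeply trapped holes, and the charge bias factor is not applied here.

```python
# Residual hole density profiles of Equations 6-7 and 6-8, scaled by the
# fraction of holes assumed to remain in deep traps (50% at 300/500 eV, 25%
# at 1 keV). Units follow the fit parameters quoted in the text.
import numpy as np

z = np.linspace(0.0, 20.0, 201)   # depth in the SiO2 layer (nm)

def n_holes_gaussian(z, n0, mu, sigma):
    """Equation 6-7: Gaussian fit of the hole density after 100 pulses."""
    return n0 * np.exp(-((z - mu) ** 2) / (2.0 * sigma ** 2))

def n_holes_exponential(z, n0=5.5e9, alpha=0.1):
    """Equation 6-8: exponential fit used for 1 keV electrons."""
    return n0 * np.exp(alpha * (z - 19.0))

residual_300eV = 0.50 * n_holes_gaussian(z, n0=1.38e9, mu=2.7, sigma=1.8)
residual_500eV = 0.50 * n_holes_gaussian(z, n0=1.5e9, mu=4.7, sigma=3.5)
residual_1keV = 0.25 * n_holes_exponential(z)

for label, prof in [("300 eV", residual_300eV),
                    ("500 eV", residual_500eV),
                    ("1 keV", residual_1keV)]:
    print(f"peak residual hole density, {label}: {prof.max():.2e}")
```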
In conclusion, the presence of residual holes from a previous irradiation can influence the TEEYs measured afterwards if the holes have not been evacuated or compensated. Since the transport of holes is heavily dependent on the material, the situation shown in this section may only occur in the less conductive insulators. Indeed, this phenomenon can only occur if the hole density remains high enough after a given relaxation period. This can be due to a high density of deep traps, or to a very long time of residence in traps, which can prevent the holes from being evacuated.
6.2.4 Implementation of a charge compensation procedure to avoid falsification of TEEY measurements
We have established that the charge state of the sample is a source of errors for the TEEY. In order to avoid falsification of the experimental data, we should then find a way to remove these remaining charges before starting a new measurement. It has actually been shown experimentally by Belhaj et al. [13,18] that the residual holes can be suppressed by sending very low energy electrons into the material. They have studied this phenomenon over several pulses, using a charge compensation procedure proposed by Hoffman et al. [19]. This procedure consists in sending very low energy electrons to discharge the sample. When no charge compensation method was used, they measured a lowering of the TEEY after each incident pulse, similarly to what we have simulated in Figure 6-21. When the sample was discharged between two pulses, however, the TEEY measured after each pulse was identical and the lowering was eliminated. From section 6.2.2.2, we have understood that most holes are created within 5 to 10 nm of the surface, and that the density of holes in the first 5 nanometers is directly tied to the relative variation of the TEEY. The principle of this charge compensation method would then be to send very low energy electrons, which have a penetration depth of a few nanometers. Since these electrons will be implanted close to the capture depth of the residual holes, this allows us to specifically target the surface holes and remove them from the material by recombination.
We have followed the experimental protocol from ref. [13] based on this phenomenon, to check whether the suppression of the holes close to the surface can increase the TEEY in our simulations. In the first phase, the sample is bombarded by 10 to 20 pulses of incident electrons, using the same beam parameters as previously. The sample is then bombarded by several pulses of 3 eV electrons and biased to +27 V. In this phase, we can also use a continuous low energy beam instead of incident pulses. Since the energy of most secondary electrons is lower than 27 eV, this bias allows the recollection of the secondary electrons that may escape. The incident electrons arrive at the surface with an effective energy of 30 eV. From Figure 6-9, the extrapolated range of these electrons is 3 nm, which is right at the peak of the hole density of Figure 6-20. Therefore, the injected electrons can progressively discharge the sample by recombining with the holes. This phase is run until the positive charge density has been eliminated. Finally, the sample holder is biased back to -9 V and the energy of the incident electrons is set back to its initial value. The sample is bombarded again by 10 pulses to see if the TEEY has evolved compared to its value before the charge compensation phase.
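The three phases of this protocol can be summarized schematically as below; the dictionary fields and the helper loop are purely illustrative and do not correspond to the actual control software of the facility.

```python
# Schematic outline of the charge compensation procedure (illustrative only).
compensation_procedure = [
    # phase 1: reference TEEY measurement, sample holder biased to -9 V
    {"phase": "reference TEEY", "energy_eV": 300, "bias_V": -9, "pulses": 10},
    # phase 2: discharge with 3 eV electrons, sample biased to +27 V
    # (landing energy ~30 eV, range ~3 nm, i.e. at the trapped hole peak)
    {"phase": "discharge", "energy_eV": 3, "bias_V": 27, "pulses": 80},
    # phase 3: bias back to -9 V and re-measure the TEEY at the initial energy
    {"phase": "control TEEY", "energy_eV": 300, "bias_V": -9, "pulses": 10},
]

for step in compensation_procedure:
    # the bias accelerates (positive) or decelerates (negative) the incident electrons
    landing_energy = step["energy_eV"] + step["bias_V"]
    print(f'{step["phase"]:>14}: {step["pulses"]} pulses, '
          f'landing energy ~ {landing_energy} eV')
```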
Several examples of simulation results using this procedure are shown in the figures below. First, in Figure 6-22, the sample is irradiated by 10 pulses of 300 eV electrons. At the end of this phase, 80 pulses of 3 eV electrons are sent, which progressively discharge the sample. We can see this discharge in Figure 6-22b, where the surface charge density is shown to decrease during the whole compensation phase until it becomes negative. From Figure 6-22c, we can indeed see from the charge density profiles that the whole thickness of the sample is negatively charged at the end of the compensation. After these 80 pulses, 10 pulses of 300 eV electrons are sent again. We can see in Figure 6-22a that the TEEY has increased from 2.2 before the charge compensation up to 2.4 after this phase.
Another example is given for 500 eV electrons in Figure 6-23 following the same procedure.
Here, the TEEY has increased from 2.2 up to 2.34 after the discharge. Despite the fact that the holes are created deeper than at 300 eV, the low energy electrons are still able to completely compensate these holes. In both Figure 6-22c and Figure 6-23c, the 30 eV electrons are able to eliminate the peaks of holes, as we can see from the plots of the charge density before and after the discharge. While the TEEY of both energies has significantly increased after the discharge, it still has not reached the charge-less value at the beginning of the simulation, despite the fact that the whole thickness of the material is negatively charged at the end of the compensation phase. It may thus be necessary to change our simulation parameters, in order to enhance the recombination and ensure that practically every trapped hole has been removed. Our goal is then to bring the TEEY back to its initial value, which would guarantee reproducible measurements. In Figure 6-24, the discharge phase was run with a continuous beam of 3 eV electrons bombarding the sample for 6 ms, followed by a relaxation period of 1 second. During this period, the implanted electrons and holes are able to detrap and recombine with each other. Since we have an excess of electrons at the end of the low energy electron beam phase, the trapped holes should all be removed by recombination while these electrons move by hopping. In fact, when we resend 300 eV electrons on the target, the TEEY has increased back to its charge-less value. This simulation shows that the removal of the surface holes is done in two steps. First, the secondary electrons can recombine with the trapped holes and decrease the surface hole density, as shown in Figure 6-22 and Figure 6-23. But the relaxation period of 50 ms between each pulse was not enough to let the new electrons remove all holes, as the TEEY did not increase to its initial value. By using a much longer relaxation period of 1 second after the injection of electrons, these new drift electrons are able to get trapped and detrapped multiple times, which increases their probability of recombining with a hole. In Figure 6-25, no low energy electrons were sent after the 10 pulses of 300 eV electrons. The sample was left at rest for 10 seconds instead, before sending 300 eV electrons again. The objective of this simulation is to verify if the sample can discharge itself in a short time between two TEEY measurements, without the need to inject electrons. We can see in (a) that the TEEY has only had an increase of 0.03 at the rest period, which could be due to simulation noise. The TEEY effectively stays at the same value as before the relaxation. The positive surface charge density in (c) and the surface potential in (b) have also not evolved. From (d), the peak of positive charges in the first 5 nanometers only had a very small decrease during the relaxation. However, the negative charges trapped after 5 nm have spread in the material and were able to escape in the silicon substrate. Indeed, the density of negative charges is significantly lowered between 8 nm and 20 nm after the relaxation period. This is due to the fact that the electrons are more mobile than holes. Therefore, during the rest period, most of the charges lost by the sample are electrons that were implanted close enough to the SiO2/Si interface. 
Since we have not removed the surface holes, we find ourselves in the situation of section 6.2.3, and we can see that the second TEEY measurement after the relaxation period is lowered by the presence of the remaining positive charge. These results show that leaving the sample at rest for a few seconds is not enough to avoid a remanence in the TEEY, and we still need to inject electrons into the sample to effectively remove the charges. On the other hand, when the sample was left at rest for a few hours (or two days in the case of Figure 6-18) between two measurements in our experiments, the TEEY obtained after this rest period was much higher than the measurements made before. As a result, the sample can also be discharged by leaving it at rest, provided the rest period is long enough. In conclusion, it is possible to remove the residual positive charge that can falsify the TEEY between two measurements, by sending very low energy electrons into the first nanometers of the surface and letting them recombine with the holes. This procedure will allow us to make more reproducible TEEY measurements without remanence. The lowering of the TEEY by the positive charging and its removal both depend on the density of holes close to the surface, which further demonstrates the importance of recombination in the secondary electron emission process.
6.3 Study of multiple-hump TEEY curves

6.3.1 Experimental measurements of multiple-hump TEEY curves
A TEEY curve with a standard shape is not always what we obtain when measuring the TEEY of an insulator sample. Indeed, several studies have reported experimental TEEY measurements on SiO2 thin films that exhibit a double-hump shape, with the appearance of a local TEEY minimum between the two humps [20][21][22]. This behavior was also observed on other space-used dielectric materials. In this section, we will use the DEESSE installation to study the conditions of appearance of the double-hump TEEY curves on SiO2 thin films. The most recent study on this subject was made by Rigoudy et al. [20]. The authors have used the DEESSE facility to measure the TEEY of thermally grown SiO2 thin films. They have observed double-hump TEEY curves similar to what was reported elsewhere, with a minimum of TEEY appearing around 1 keV. They have proposed an explanation based on an analytical model taking into account the radiation induced conductivity. This is a phenomenon we should be able to model with our simulation code, in order to verify their theory. Notably, they were able to remove the local minimum of TEEY and obtain a conventional shape by changing the measurement parameters, such as the collector voltage. The beam spot area can also be adjusted by changing the focus voltage F of the electron gun, from 0 V to 1000 V. A focus of 0 V (F0) is expected to produce a very broad beam spot (a few cm²), while a focus of 300 V (F300) should give a narrower beam spot (a few mm²). In all experimental results shown in this section, we have measured the TEEY over several pulses of 6 ms spaced by 200 ms. The energy/TEEY curves were obtained with the automated TEEY measurement program. A first series of 10 pulses is sent with the sample biased to +27 V to measure the incident current I₀, then a second series of 10 pulses is sent with the sample biased to -9 V to measure the sample current I_S. In both cases, the current is averaged over the 10 pulses, and the TEEY is obtained with Equation 6-1.
In Figure 6-26, we have measured the TEEY of our plasma-grown SiO2 samples using a focus voltage of 300 V. This is the value of the focus voltage that was used by Rigoudy et al. [20] when they observed a local minimum of TEEY. It is indeed the value of focus used during the standard TEEY measurement procedure in DEESSE and ALCHIMIE. One can see the appearance of a local minimum of the TEEY around 1 keV in our experimental data, as in the study of ref. [20]. The maximal TEEY (2.35) is also significantly lowered compared to the value of Figure 6-5 (2.7), where the experimental data was obtained with a focus voltage of 0 V. It is important to note that changing the focus voltage from 300 V to 0 V has eliminated the local minimum at 1 keV, which allowed us to obtain the TEEY curve of Figure 6-5 that follows the standard behavior. Since the focus voltage seems to enable or disable the appearance of the local minimum, we have also made other measurements of the TEEY curve using different values of focus voltage. The results are shown in Figure 6-27. As seen before, the local minimum of the TEEY disappears if a broad beam is used (F = 0 V). Moreover, changing the focus voltage from 300 V to 100 V or 500 V changes the position of the local minimum of the TEEY and its relative amplitude compared to the maximum TEEY. When a 100 V focus voltage is used, the local minimum is moved to 500 eV. In this case, the dip in the TEEY curve is very marked, since the local minimum has a higher relative variation compared to the maximum TEEY (the minimum is equal to 54% of the maximum TEEY). In comparison, the local minimum at 300 V is equal to 64% of the maximum. In the case of a 500 V focus voltage, there is even a second local minimum that appears at 250 eV, in addition to the local minimum around 1 keV that has been previously observed. Finally, two TEEY measurements were made in DEESSE with a thin film sample of MgO, using focus voltages of 250 V and 0 V. The results are given in Figure 6-28. Here again, we find at F 250V a double-hump TEEY curve with a local minimum at 900 eV, which is only slightly shifted compared to the minimum observed at 1 keV with a focus of 300 V. This local minimum is also eliminated if the focus is changed to 0 V, as was the case for SiO2. By changing the focus voltage, we were able to make a local minimum of TEEY appear or disappear on a different insulator, which demonstrates that this effect is not a property of SiO2 alone. Given that we observe TEEY curves that are very similar to Figure 6-27, this points to physical processes which would be common to insulator thin films. On the other hand, this could also be linked to the protocol or parameters we use to measure the TEEY, because we obtain the same results on two different insulators but with the same measurement parameters. Another set of interesting data we have gathered on this subject is the following TEEY measurement made on the SiO2 sample with a focus of 250 V, before the sample was decontaminated. It is compared in Figure 6-29 with the data of Figure 6-26 after decontamination. First, we can see that the TEEY of the contaminated sample is higher by about 40% compared to the decontaminated sample. As a result, it is crucial to properly decontaminate the sample before performing any TEEY experiment, so that the data obtained is representative of the material and not of the surface contamination layer. Second, there is also a local minimum of TEEY that appears at 900 eV with a focus voltage of 250 V, as for MgO.
The surface contamination layer is composed of many chemical species, such as organic compounds or water. As a result, we cannot say if it is conductive or insulating. On the other hand, the SiO2 underneath the contamination layer retains its insulating properties, hence there could be some charging effects or recombination inside the SiO2 that would affect the secondary electrons before they reach the surface contamination layer. In summary, by changing the focus voltage, we can make local TEEY minimums appear at 300 eV, 500 eV and 1 keV. Nevertheless, it is not possible at this stage to accurately investigate the appearance of the local minimums with an explanation based on physics only, since the minimums can appear at various energies. The explanation proposed by Rigoudy et al. was that the local minimum appears at 1 keV because the penetration depth of the electrons is comparable to the thickness of the SiO2 sample. This would create a conductive channel which would force the secondary electrons into the Si layer. We have indeed seen in section 6.2.2.2, for 1 keV incident electrons, that the drift electrons could easily be evacuated into the Si layer, which is coherent with their hypothesis. However, when we use different values of the focus voltage, the TEEY local minimum still appears at lower electron energies. These electrons have an extrapolated range that is lower than the sample thickness, so they are not able to create a conductive channel. Since the shape of the TEEY drastically changes according to the focus voltage, it is very possible that the double-hump curves are also a result of the measurement parameters.
6.3.2 Measurement of the variation of the beam area with the incident electron energy
To understand the effect of these different beam parameters, we have first studied the evolution of the incident current, to see whether there was a dependence on the focus voltage and/or the energy. Indeed, if the incident current increases, more electron-hole pairs will be created in the material in a given time period. Therefore, it could be possible that the TEEY minimums are generated when the incident current is the strongest. In this regard, the incident current I₀ for the focus voltages F = 100 V, 300 V and 500 V has been measured, and is shown in Figure 6-30. One can see an increase of the incident current with energy, from a few tens of nA at 50 eV to 0.3 µA at 2 keV. However, this variation of the incident current is independent of the focus voltage, so that the three plots are superimposed. Given the significant variation of the position and amplitude of the TEEY local minimum, the variation of the incident current alone cannot explain the dependence of the TEEY on the focus voltage. The incident beam surface S_B has then been measured for these 4 different values of the focus voltage in the ALCHIMIE facility. We have observed the same kind of multiple-hump TEEY curves in DEESSE and ALCHIMIE, and the two facilities use the same model of electron gun, a Kimball Physics ELG2. Consequently, we will assume that the beam area of DEESSE follows the same variation as measured in ALCHIMIE. The measurements were made by using the electron gun on a 5 cm wide square aluminum plate covered with sodium salicylate powder. This powder is phosphorescent when hit by electrons above 200 eV. By using a camera filming the plate, we can save an image of the surface irradiated by the beam, which can then be measured to obtain its area. The measurements are given in Figure 6-31. For the three focus voltages F = 100 V, 300 V and 500 V, there is a significant variation of the beam surface with the electron energy. For instance, the area goes from 1.8 cm² at 300 eV to 0.08 cm² at 1 keV for F = 300 V.
In the case of F = 100 V, the beam area goes from 0.1 cm² at 500 eV up to 13 cm² at 2 keV, as shown in the secondary plot. No data is provided for a focus voltage of 0 V, because the beam surface was wider than the area covered by the phosphorescent powder at all energies. Therefore, we can only estimate the beam area at F = 0 V to be greater than 25 cm². The local minimums of the TEEY and the minimums of beam surface appear at the same energies, where the beam area is about 0.1 cm² for F = 100 V at 550 eV and for F = 300 V and F = 500 V at 1 keV, and 1 mm² for F = 500 V at 200 eV. One can also see that the presence of two minimal area points at F = 500 V (250 eV and 1 keV) creates a triple-hump TEEY curve with two local minimums at the same energies. The beam area is also equal to or lower than 0.2 cm² at F = 500 V over a large energy range (200 eV to 1.2 keV). This leads to a TEEY that is constantly lower than with the other focus voltages, except at the local minimum of beam area at 500 eV for F = 100 V. Such variations of beam surface with electron energy were also reported in other works [23], but they did not make any measurements on insulators.
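As an illustration of how such a beam footprint can be extracted from a camera frame of the phosphorescent plate, the sketch below thresholds a synthetic image and converts the lit-pixel count into an area, using the known 5 cm plate width as the scale; it is not the actual image-processing routine used for Figure 6-31.

```python
# Estimate a beam footprint area from an image of the phosphorescent plate:
# threshold the frame at half maximum and convert the pixel count to cm^2.
# The frame below is synthetic and only stands in for a real camera image.
import numpy as np

PLATE_WIDTH_CM = 5.0
npix = 500                                    # image covers the full plate width
pixel_area_cm2 = (PLATE_WIDTH_CM / npix) ** 2

y, x = np.mgrid[0:npix, 0:npix]
spot = np.exp(-(((x - 250) / 40.0) ** 2 + ((y - 250) / 25.0) ** 2))
frame = spot + 0.02 * np.random.rand(npix, npix)   # elliptical spot + noise

threshold = 0.5 * frame.max()
beam_area_cm2 = np.count_nonzero(frame > threshold) * pixel_area_cm2
print(f"estimated beam area ~ {beam_area_cm2:.2f} cm^2")
```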
We have demonstrated that the incident current I₀ remains constant for various focus voltages, but the surface irradiated by the beam S_B varies with the electron energy. Therefore, the key parameter tied to the TEEY variations should be the incident current density J₀, as:

\[ J_0 = \frac{I_0}{S_B} \]  (Equation 6-9)
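A few representative values illustrate why the current density, rather than the current alone, tracks the TEEY minima. The numbers below are read off the plots discussed above and are therefore only indicative.

```python
# Equation 6-9 applied to a few representative (focus, energy) points; the
# incident currents and beam areas are indicative values read off the plots.
def current_density_uA_per_cm2(I0_A, S_B_cm2):
    """J0 = I0 / S_B, returned in uA/cm^2."""
    return (I0_A / S_B_cm2) * 1e6

examples = [
    # (label, incident current I0 (A), beam area S_B (cm^2))
    ("F = 300 V, 300 eV", 0.10e-6, 1.8),    # broad spot -> low current density
    ("F = 300 V, 1 keV",  0.25e-6, 0.08),   # minimal spot -> local TEEY minimum
    ("F = 100 V, 500 eV", 0.15e-6, 0.1),    # minimal spot -> local TEEY minimum
    ("F = 0 V, 1 keV",    0.25e-6, 25.0),   # defocused beam (> 25 cm^2)
]

for label, I0, S_B in examples:
    print(f"{label}: J0 ~ {current_density_uA_per_cm2(I0, S_B):.2f} uA/cm^2")
```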
Rather than sampling the whole TEEY curve averaged over several pulses, it is also possible to sample the evolution of the TEEY at a given energy after each pulse, and see if there is also a correlation with the current density. In the following, we have used the same method for time-resolved TEEY measurements as in section 6.2, where the currents were not averaged over the pulses but sampled for each individual pulse. Instead of using 100 µs pulses with a spacing of 50 ms, we have chosen here to keep 6 ms pulses with a spacing of 200 ms, to use the same parameters as when measuring the whole TEEY curve. The resolution will be much lower, but this will allow us to observe how the TEEY evolves during a standard measurement.
In Figure 6-32, we compare the time evolution of the TEEY for the energies and focus voltages where a TEEY minimum appears, with the evolution of the TEEY at the same energy but using a broader beam. For the energy and focus voltage combinations where the beam area is equal to or below 0.2 cm² (300 eV at F500, 500 eV at F100 and F500, and 1 keV at F 500V and F 300V), the current density is on the order of 1 to 10 µA/cm². In this case, the TEEY is immediately lowered after one pulse compared to the broader beams, which have an area of a few cm² and a current density lower than 1 µA/cm². For 300 eV and 500 eV electrons, this lowering stabilizes after 10 to 15 pulses, while for 1 keV electrons the TEEY does not seem to evolve after the first pulse. This is why our study of the decrease of the TEEY has focused on these three energies. Indeed, at these energies and focus voltages, we could easily get time-resolved experimental data on the decrease of the TEEY, and this decrease was more pronounced than for other energies which were not located at a TEEY minimum. We know that the decrease of the TEEY we observe at a local minimum is due to the recombination of secondary electrons with holes, but this could indicate that the recombination mechanisms are enhanced when a current density of 1-10 µA/cm² is used. Indeed, when the current density becomes lower than 1 µA/cm², the TEEY is much higher and has either a much smaller decrease (300 eV) or no decrease at all (1 keV). This significant variation of the TEEY depending on the current density is coherent with the apparition of the local TEEY minimums in Figure 6-27. If we were to compute the average TEEY over 10 pulses for a given focus voltage from the data of Figure 6-32, we can clearly see how a local minimum can appear at the energies where the current density is increased.
There is a systematic shift between the TEEY curves that were obtained at the same energy and beam area, for instance at 500 eV between F 500V and F 100V, which have a beam area of 0.2 cm² and 0.1 cm². An identical type of shift is found for the wide beam curves, for instance at 1 keV between F 100V (4 cm²) and F 0V (> 25 cm²). In all cases, the lower TEEY curve corresponds to the measurement that was made later. In the previous section, it has been shown that the presence of residual deeply trapped holes in the sample may cause a shift of the measurements. Since we did not use the charge removal procedure between each measurement, this could also be the source of the error observed here.
Explanation of the experimental observations by the simulation: Study of the effect of the current density on the TEEY
To understand why the TEEY is decreasing much faster with a focused beam than with a broad beam, we can do the same exercise as in 6.2.2.2 and study the evolution of the internal charge buildup. Using an incident current of 1 µA as previously, the TEEY was simulated with a narrow and a wider beam surface in an attempt to qualitatively reproduce the experimental results of Figure 6-32. The simulations of section 6.2, which were made with a beam area of 0.1 cm² and a current density of 10 µA/cm², have been plotted along with simulations made with a beam area of 1 cm² and a current density of 1 µA/cm², in Figure 6-33 (300 eV), Figure 6-34 (500 eV) and Figure 6-35 (800 eV). At 800 eV, the charge density has a very similar profile to the one obtained with 1 keV electrons, with a strong net positive charge near the interface. The extrapolated range of 800 eV electrons is 15 nm, so this positive charge should also be due to the leak of the trapped holes in the silicon substrate, as for 1 keV electrons. When the beam area is small and the current density is large (0.1cm², 10 µA/cm²), we observe a decrease of the TEEY due to recombination, which is coherent with the experimental measurements made with a narrow beam. However, when the current density is lowered (1 µA/cm²), the TEEY has a higher value and a slower decrease. Notably, we can observe for all energies a contradictory behavior between the evolution of the charge buildup and the TEEY at 1 µA/cm², compared to the observations we made at 10 µA/cm². In Figure 6-15, we had shown that a higher surface density of positive charges led to a stronger decrease of the TEEY. At 1 µA/cm², the positive surface charge density is much higher than at 10 µA/cm² for all three energies. As we can see from the comparison between the charge density profiles, the peak of positive charges is increased at 1 µA/cm². However, the TEEY is also higher at 1 µA/cm² than at 10 µA/cm², and the decrease is much weaker.
It is also possible to highlight this change of behavior of the TEEY by plotting the correlation between the relative variation of TEEY and the surface charge density, like we did in Figure 6-16 and Figure 6-17 at 10 µA/cm². This correlation is given in Figure 6-36. For all energies, the correlation between the two quantities is weaker at 1 µA/cm² than at 10 µA/cm², where the R² was 0.99. The slope of the linear fit is also decreased on average from S_e-h = 5.7 × 10⁻¹² cm² in Figure 6-17 down to S_e-h = 5.5 × 10⁻¹³ cm². There is finally a stronger dispersion in the values of the effective cross section, and the curves are not superimposed like in Figure 6-16 and Figure 6-17. This is due to the apparition of a plateau region for the lower surface charge densities, where the relative variation of TEEY is null.
Since the effective recombination cross section is decreased, this indicates that the interactions between the electrons and the trapped holes are less probable. There is indeed a reduction of the recombination probability, but this is not due to a change of the actual recombination cross section, which remains at 2 × 10⁻¹² cm². The total charge density in the material is higher than at 10 µA/cm², but due to the lack of uniformity the average recombination mean free path is also higher. Indeed, with a 1 µA/cm² current density, the incident electrons do not hit the sample as uniformly as with a 10 µA/cm² density. Therefore, the electrons have a much higher chance of arriving in a region of the material that has not been irradiated before. Thanks to the overlap factor, the simulation can consider this phenomenon. We are able to simulate the fact that most electrons arrive in a region where the trapped charge density is empty, and that the density of charges seen by an individual electron is thus reduced. Indeed, after 10 pulses of 6 ms, the overlap factor is 26% at 1 µA/cm², so the electrons will only see 26% of the total charge densities of Figure 6-33 to Figure 6-35.

In Figure 6-33, the simulation results for 300 eV electrons at 1 µA/cm² have also been plotted on a time scale reduced by a factor of 0.1 (in green) to get an equivalent time scale compared to the TEEY at 10 µA/cm². The aim of this plot is to show that the two curves are not superimposed. Dividing the current density by 10 does not simply make the TEEY decrease 10 times slower. We can see that the slope of the decrease of the TEEY is still sharper at 10 µA/cm², but on the other hand the surface potential at 1 µA/cm² still has a much sharper increase.
Therefore, the effect of the current density is not simply to change the time scale at which the TEEY is decreasing, as we have seen that the charge buildup is also modified by the current density. In fact, there is a competition between the time between two electron impacts at the same place, and the time needed for the holes to migrate by hopping. If the hole density at a given place is emptied faster than the interval between two electron impacts, the decrease of the TEEY due to recombination should be less pronounced, which could be what happens at 1 µA/cm². At 10 µA/cm², the time between two electron impacts is short enough that a significant part of the holes still remains and can capture the secondary electrons, which leads to the decrease of the TEEY. Depending on the material, this reduction of the TEEY may be more pronounced if the hole traps are deeper or more concentrated, or less pronounced if the holes are very mobile and can be dissipated in the sample.
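The role of this competition can be illustrated with a toy Poisson coverage estimate of the overlap between electron cascades. This is only a rough sketch: the effective cascade footprint used below is an assumed value, chosen so that the 1 µA/cm² case reproduces the 26% overlap quoted above, and it is not the exact overlap factor computed by the simulation.

```python
import math

# Toy estimate of the fraction of incident electrons landing on an already irradiated spot.
# Assumption: each cascade leaves an effective footprint of area 'a' and impact positions
# are uniform over the beam spot, so the covered fraction after a fluence n is 1 - exp(-n*a).
e = 1.602e-19   # elementary charge (C)
a = 8.0e-13     # assumed effective cascade footprint (cm²), tuned to give ~26% at 1 µA/cm²

for J_uA_cm2 in (1.0, 10.0):
    J = J_uA_cm2 * 1e-6          # current density (A/cm²)
    t = 10 * 6e-3                # 10 pulses of 6 ms of beam-on time (s)
    n = J * t / e                # delivered fluence (electrons/cm²)
    overlap = 1.0 - math.exp(-n * a)
    print(f"J0 = {J_uA_cm2:4.1f} µA/cm²: fluence = {n:.2e} e-/cm², overlap ≈ {overlap:.0%}")
```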
Using the data from Figure 6-31, the simulation can then be used to reproduce the experimental TEEY curves for the three focus voltages. To do so, the TEEY has been simulated for 10 pulses of 6 ms with a 200 ms relaxation period. Each point of the TEEY curve is averaged over the 10 pulses, as in the experiment. The beam surfaces from the measurements in Figure 6-31 were used to compute the overlap factor, in order to simulate the effect of the focus voltage. In Figure 6-37, the simulation of the TEEY for focus voltages of 100, 300 and 500 V is displayed. By modifying the recombination probability according to the variation of current density, we can successfully simulate the apparition of the one or two local TEEY minimums that we had observed experimentally. Consequently, we are able to prove that the multiple hump TEEY curves are indeed linked to physical interactions of the electrons with the trapped charges. We have shown however that the physical interaction involved is the loss of secondary electrons by recombination, instead of RIC and the creation of a conductive channel. We have also demonstrated that these TEEY minimums are created by the variations of the current density with energy, which are themselves created by the variations of the beam surface with energy. These variations can be quite significant over the energy range. Indeed, for a focus of 300V and an incident current of 1µA, the current density ranges from 0.5 µA/cm² at 2 keV to 20 µA/cm² at 1 keV. In Figure 6-38, simulations of the TEEY curve were made with a constant current density of 1 µA/cm² and 10 µA/cm². The TEEY of each energy was averaged over 100 pulses of 100 µs. In this case, the TEEY obtained at 10 µA/ cm² is lower than 1 µA/cm², as expected due to the decrease induced by the recombination. On the other hand, the TEEY at 1 µA/cm² is very close to the charge-less TEEY, with a maximum TEEY of 2.4 instead of 2.5 for the chargeless case.
For both TEEY curves, there is no local minimum of TEEY appearing, since the current density is constant. With a narrower beam, the TEEY is lowered globally but not at select energies like in Figure 6-37. This demonstrates that the local TEEY minimums can be eliminated by working with a constant current density during TEEY experimental measurements. However, this current density should be low enough to avoid a global lowering of the TEEY and a falsification of the data by the recombination effects. For SiO2, this threshold appears to be below 1 µA/cm², but it may be different for other dielectrics with other charge transport properties. In conclusion, the multiple-hump TEEY curves of thin dielectric layers are due to internal charging effects. Nevertheless, these can also be created depending on the measurement parameters, and can in fact be a measurement artefact. A careful choice of experimental parameters can eliminate this artefact, by using a constant current density that is also low enough to limit recombination effects. Therefore, the experimental data obtained with a focus of 0V and a beam surface larger than 25 cm² should be the closest to the charge-less TEEY of the sample. With a current density on the order of the nA/cm², the great majority of incident electrons should hit regions of the sample that are free of charges. We were not able to measure the variation of the beam surface at 0 V, so it is possible that the current density has similar variations to what we measured for the other values of focus. Yet, given the absence of a local TEEY minimum in all measurements made with a focus of 0V, we can assume that the current density remains low enough at all energies to prevent the lowering of the TEEY, even if it is not uniform. An experimental study by Belhaj et al. [16] had also shown that a higher current density could globally lower the TEEY curve. In the present study, we were able to confirm this phenomenon, and explain it by the proportion of overlapping electron cascades on the surface.
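As a practical illustration of this recommendation, the sketch below computes the incident current that would have to be requested from the gun at each energy to keep the current density constant, given a beam-area calibration such as Figure 6-31. The area values used here are placeholders, not the measured data.

```python
# Sketch: keeping the current density constant across the energy sweep.
# S_B(E) would come from a beam-area calibration such as Figure 6-31 (placeholder values below).
J_target = 0.5e-6  # target current density (A/cm²), chosen below the ~1 µA/cm² recombination threshold

beam_area_cm2 = {200: 0.9, 500: 0.2, 1000: 0.08, 2000: 1.8}  # energy (eV) -> beam area (cm²)

for energy_eV, S_B in sorted(beam_area_cm2.items()):
    I_target = J_target * S_B  # incident current to request at this energy (A)
    print(f"E = {energy_eV:5d} eV: S_B = {S_B:4.2f} cm² -> I0 = {I_target * 1e6:5.3f} µA "
          f"for J0 = {J_target * 1e6:.1f} µA/cm²")
```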
Study of the effect of temperature on the electron emission yield
So far, we have focused on the electron emission of insulating samples in standard measurement conditions, that is to say in a controlled environment. Indeed, the temperature (25°C) and incident current (0.1 to 1 µA) remain constant throughout the experimental TEEY measurements. On the other hand, spacecraft materials are subjected to a large gradient of temperatures, typically ranging from -200°C to +200°C. It is known that the surface chemistry of a contaminated sample can strongly evolve depending on the ambient temperature, which should modify the TEEY. At higher temperatures, the chemical compounds can evaporate from the surface, which is the phenomenon we have used to decontaminate the samples by heating them at 200°C. At lower temperatures below 0°C, water molecules may condense and freeze on the surface, forming a dielectric layer of ice that can charge during the experiment.
In this section, we will only focus on the variation of the dielectric properties of a SiO2 sample and its influence on the TEEY, depending on the temperature of the sample. The effect of the surface chemistry should be limited, since the samples are decontaminated and the measurements are made in an ultra-high vacuum chamber, preventing the condensation of contamination or water when the samples are cooled. Measurements at 27°C and 200°C were first made in DEESSE. We have then used the ALCHIMIE facility for TEEY measurements from -180°C to 100°C, since it is equipped with a supply of liquid nitrogen. The temperature reached when cooling or heating the sample holder is fluctuating around the target value in an interval of ±10°C. Consequently, all the temperatures given in this section (except room temperature, 27°C) are given with a precision of ±10°C.
Experimental measurements of the TEEY at -180°C and 200°C
First, we will focus on the temporal evolution of the TEEY, using time-resolved measurements with the same protocol as before (pulses of 100 µs separated by 50 ms). In Figure 6-39, we have measured the TEEY of 300 eV electrons in DEESSE, at room temperature (27°C) and at 200°C, to study the effect of sample heating. The focus voltage used was F 500V, which gives a current density on the order of 10 µA/cm². The data at 200°C is starting higher than at 27°C, hence it has been reduced by 10% to match the starting point of the data at 27°C. This allows us to directly compare the evolution of the TEEY after its initial point. We find the same kind of recombination-induced decrease at 200°C. However, the slope of this decrease is lessened compared to the TEEY at 27°C, and the TEEY is already diverging at 1 ms.

In Figure 6-40, the TEEY has been measured in ALCHIMIE for 500 eV electrons at F 100V, which also gives a current density on the order of 10 µA/cm². The measurements at room temperature and -180°C are compared. This time, the starting point of the TEEY is immediately lowered by 20% when the sample is cooled compared to 27°C. On the other hand, if we scale the TEEY to match the starting point of the TEEY at 27°C, we do not observe a stronger decrease of the TEEY. Instead, the TEEY first seems to have a weaker decrease at -180°C than at 27°C during 10 ms. The authors of ref. [16] have also observed an increase of the TEEY curve of polycrystalline diamond when the temperature was raised from 25°C to 90°C. On the other hand, when the current density was increased, the TEEY was lowered at all energies by recombination and the effect of temperature was much less visible.

To verify if a similar effect happens in amorphous SiO2, we can compare the previous results obtained at 10 µA/cm² with the TEEY at 100°C, 27°C and -180°C measured with a current density of a few nA/cm² (Focus 0V). This comparison is shown in Figure 6-41. At -180°C, we do observe a lowering of the TEEY compared to 27°C, albeit only by 0.10. At 100°C however, the TEEY has the same value as at -180°C. Therefore, in our case, the effects of temperature on the decrease of the TEEY are removed when the current density is lowered.

Finally, we can also compare the TEEY/energy curve from -180°C to 100°C measured in ALCHIMIE for a given focus voltage. The variation in current density will allow us to see if the behavior of the TEEY is coherent with what we have highlighted with the time-resolved measurements. In Figure 6-42, the TEEY has been measured with a focus voltage of 100V, which creates a local minimum of TEEY at 500 eV. We can see that this local minimum still appears regardless of the temperature. The reduction of the TEEY at 500 eV is the strongest when the sample is cooled to -180°C, but we do not observe any meaningful difference between the TEEY at -100°C and at 23°C. On the other hand, the TEEY is increased at 100°C, and the dip of the TEEY at 500 eV is also less important than at the other temperatures. Indeed, at -180°C, the ratio between the maximum TEEY at 200 eV and the minimum TEEY at 500 eV is 1.79. At 100°C, this ratio is reduced to 1.56. In Figure 6-43, a focus of 0V is used. We obtain identical TEEY curves regardless of the temperature, with only very slight variations akin to what we have observed in the time-resolved measurements of Figure 6-41.
We know that with a focus of 0V, the current density of less than 25 nA/cm² is low enough to limit the interactions of the electrons with the trapped charges created by the previous cascades. Since we do not observe any variation of the TEEY from -180°C to 100°C in such a case, it is very probable that the transport of ballistic electrons is not affected by temperature.
In conclusion, we have observed a dependence of the TEEY on temperature, when a current density of 10 µA/cm² is used. However, when the current density decreases down to a few tens of nA/cm², the effect of temperature on the TEEY is removed. To understand why the temperature is modifying the TEEY, we can make the same study as in 6.2 and compare the evolution of the TEEY with the charge density. Indeed, this change of slope could indicate that the temperature is also acting on the transport of the drift charge carriers. It could be that the charges are migrating deeper in the material, reducing the surface charge density, or spreading laterally, modifying the overlap between electron cascades. To verify these hypotheses however, we have to make simulations of the TEEY from -180°C to 200°C and see whether we find the same behavior as in the experiment. In this subsection, we will present Monte-Carlo simulations which were all made with an incident current density of 10 µA/cm² as in section 6.2.
First, in Figure 6-44, the TEEY of electrons at 300 eV has been simulated at 27°C, -180°C and 200°C. The energy depths of the shallow hole and electron traps were increased respectively to 0.1 and 0.05 eV, instead of 0.07 and 0.02 eV previously. This change was made to get a better reproduction of the modification of the TEEY with temperature. Indeed, the measurements at 27°C, -180°C and 200°C were all made during different measurement campaigns, and it is not certain whether the exact same sample was used each time. Hence, the trapping parameters of each sample may vary, but it is impossible to obtain the actual trap distribution using experimental means. These new values remain within the right order of magnitude expected for shallow trap energy depths, but we must note that this modification of the energy depths is a purely arbitrary fit to improve the agreement with the experimental data.
We can immediately observe that the starting point of the TEEY is increased when the sample is cooled, and decreased when the sample is heated. This is in contradiction with the experimental results, where an increase of the temperature also increased the starting point of the TEEY, except with a broad beam at focus 0V. On the other hand, when the sample is charged at the end of 100 pulses, the TEEYs have reached the same value. Given the different slopes of the decreases, the TEEY at 200°C should become higher than at 27°C and the TEEY at -180°C should go below the TEEY at 27°C over a longer period of time. Thus, it is possible that we would have a better match with the experimental data if we were to start the simulation on a precharged sample, as when studying the effect of the residual holes. We must also remember that the starting point of the TEEY in the simulation is the perfectly charge-less TEEY, which is not the case of the experimental sample. The LO phonon scattering frequency is recalled below. We can see that it varies linearly with the population of optical phonons. Consequently, the interaction mean free path of ballistic electrons with phonons is reduced at higher temperature. The energy of the LO phonons does not change but the frequency of the collisions is greater at higher temperatures, which accelerates the thermalization of electrons below the band gap.
$$f^{\mp}(E) = \frac{e^2}{4\pi\epsilon_0\hbar^2}\left(N_{LO}+\frac{1}{2}\mp\frac{1}{2}\right)\left(\frac{1}{\epsilon(\infty)}-\frac{1}{\epsilon(0)}\right)\sqrt{\frac{m^*}{2E}}\;\hbar\omega_{LO}\,\ln\!\left[\frac{1+\delta}{\mp 1 \pm \delta}\right]$$

Equation 3-27 of Chapter 3
The acoustic phonon scattering frequency, also given in Chapter 3, directly depends on the temperature through the collision probability of low energy electrons. As a result, when the temperature increases, the ballistic electrons are even more scattered in random directions by the acoustic phonons, and they also lose more energy through the increased collisions with optical phonons. These two phenomena reduce the escape probability of the secondary electrons, which is why the charge-less TEEY increases when the temperature decreases in the simulation, since we compute the phonon population in the simulation depending on the temperature. In the experiment however, there was no variation of the TEEY at a focus of 0V. Hence, it is possible that the electron-phonon interaction models we have used can overestimate the effect of the phonon population on the scattering rates. On the other hand, this effect was only seen in our simulations for a perfectly charge-less sample, and it is very improbable that our experimental samples were perfectly charge-less at the start of the measurements.
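The temperature enters these scattering rates mainly through the phonon occupation number. A minimal sketch of the Bose-Einstein factor is given below, assuming an LO phonon energy of about 0.15 eV for SiO2; this value is only an illustrative assumption.

```python
import math

# Bose-Einstein occupation of an LO phonon mode, which enters Equation 3-27 through the
# factor (N_LO + 1/2 -/+ 1/2): absorption scales with N_LO, emission with N_LO + 1.
k_B = 8.617e-5    # Boltzmann constant (eV/K)
hw_LO = 0.15      # assumed LO phonon energy for SiO2 (eV), illustrative value

for label, T_C in (("-180°C", -180.0), ("27°C", 27.0), ("200°C", 200.0)):
    T = T_C + 273.15
    N_LO = 1.0 / (math.exp(hw_LO / (k_B * T)) - 1.0)
    print(f"T = {label:>7}: N_LO = {N_LO:.2e} (absorption weight), N_LO + 1 = {N_LO + 1:.3f} (emission weight)")
```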
In Figure 6-45, the data at -180°C and 200°C have been scaled by 5% to match the starting point of the TEEY at 27°C. This allows us to compare more accurately the relative variation of the TEEY. We can notice the same change of slope between 27°C and 200°C as in the DEESSE measurements of Figure 6-39, with the TEEY at 200°C decreasing more slowly. On the other hand, the TEEY at -180°C is decreasing faster than at 27°C. From this comparison, we can deduce that the ballistic electron-phonon interactions are not the only physical processes modified by the temperature. Indeed, we have demonstrated at room temperature that the decrease of the TEEY was directly linked to the increase of the quantity of surface positive charges. Hence, it is very probable that the change of slope of the TEEY we observe here is linked to the transport of holes in the material. In fact, several mechanisms of the drift regime are dependent on the temperature. In Equation 5-31 of Chapter 5, recalled below, we have defined the detrapping probability for a shallow trap of given depth 𝐸 𝑖 as a classical thermally activated law. In consequence, the increase in temperature also increases the detrapping probability. Therefore, when the sample is heated, the holes and electrons can easily migrate through the material, by hopping between traps.
$$W(E_i) = W_0 \exp\left(-\frac{E_i}{kT}\right)$$

Equation 5-31 of Chapter 5
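To illustrate how strongly this thermally activated law reacts over the temperature range explored here, the short sketch below evaluates it for a shallow trap. The attempt-to-escape frequency and the trap depth are illustrative values of the same order as those discussed in this chapter.

```python
import math

# Thermally activated detrapping rate (Equation 5-31): W(E_i) = W0 * exp(-E_i / kT)
k_B = 8.617e-5   # Boltzmann constant (eV/K)
W0 = 1.0e12      # attempt-to-escape frequency (s^-1), illustrative value
E_trap = 0.1     # shallow hole trap depth (eV), same order as the value used above

for label, T_C in (("-180°C", -180.0), ("27°C", 27.0), ("200°C", 200.0)):
    T = T_C + 273.15
    W = W0 * math.exp(-E_trap / (k_B * T))
    print(f"T = {label:>7}: W = {W:.3e} s^-1, mean residence time ~ {1.0 / W:.2e} s")
```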
The detrapping enhancement of deep traps by Phonon-Assisted Tunneling (PAT), given in Equation 5-35 of Chapter 5, is obviously dependent on the temperature. Indeed, the energy gained by the trapped charge carrier through the collisions with phonons is what allows it to jump to a higher energy level and tunnel through the trap. When the phonon population is increased, the charge carrier is subjected to more collisions from the thermal agitation, and can more easily be excited to a higher level of energy within the trap. Thus, the enhancement factor by PAT, of the form

$$e_{PAT} = \int \exp\left(z - z^{3/2}\,\frac{4}{3}\,\ldots\right)dz \qquad \text{(Equation 5-35 of Chapter 5)}$$

is also dependent on the temperature. Finally, the energy of the thermalized charge carriers is directly dependent on the temperature. Whether we take the Boltzmann distribution of velocities,

$$P(v) = \left(\frac{m}{2\pi kT}\right)^{3/2} 4\pi v^2\, e^{-\frac{mv^2}{2kT}} \qquad \text{(Equation 5-25 of Chapter 5)}$$

or an average energy of $\frac{3}{2}kT$, the drift charge carriers have an increased energy when the temperature is raised. The mobility of the charge carriers is also modified by the temperature, as shown by Hughes [24] who has determined that the hole mobility increases exponentially with the temperature. Mady et al. [25] have also shown that the polaron mobility increases with temperature for internal electric fields below 1 MV/cm. In consequence, two phenomena can lead to a reduction of the surface hole density at high temperature, which would increase the TEEY. First, the migration of holes by hopping deeper into the material is favored by the reduction of the trap immobilization time. It is also possible that the capture cross section of the traps is lowered, when the particles gain more energy due to the thermal agitation. Second, the electrons can also be detrapped easily and either recombine with the surface holes, or escape in the Si layer.
We can see this charge migration in Figure 6-46, where the charge density profiles at the end of 100 pulses of 300 eV electrons are plotted for our three temperatures of interest. First, at -180°C, there is a very strong negative charge after 5 nm, which is more important than at the other temperatures. The charge density profile also resembles the profile of Figure 6-8b, where the drift transport was disabled. Indeed, the maximum of negative charges at -180°C is at 7.5 nm, compared to 6 nm in the drift-less simulation. The positive charge at -180°C is also more important than at the other temperatures, and there are no negative charges at the SiO2/Si interface. This shows that when the sample is cooled, the transport of the drift particles is severely limited. As a result, more holes can remain at the surface of the material for a longer period, which can strongly reduce the TEEY. This explains why the experimental TEEY is strongly lowered at -180°C. With a current density of 10 µA/cm², the interval between two electron impacts at a given zone is shortened. However, the holes were not able to evacuate during this interval because their transport is limited by the cold temperature. Consequently, more electrons are lost by recombination and the TEEY is decreased. In this situation, it is even more important to discharge the sample as much as possible with the procedure of 6.2.4 before starting a TEEY measurement. Indeed, the charges are not mobile enough to evacuate themselves when the sample is left at rest, and it is very probable that these charges will cause an error in the next TEEY measurement.

When studying the charge profile at 200°C however, we can notice a migration of the positive and negative charges deeper into the material. The surface hole density has moved 1.5 nanometers deeper, and the negative charge density is also reduced. Here, we have evidence of the thermally activated transport that enables the migration of charges, and can increase the experimental TEEY. Indeed, despite using a current density of 10 µA/cm², the holes are still able to be evacuated between two electron impacts. This limits the loss of electrons by recombination and we have a higher TEEY when the temperature is increased. Notably, the hopping transport of electrons is also activated by the temperature, and they are able to reach the SiO2/Si interface more easily. Hence, we could also be measuring a bigger leakage current at 200°C, which could artificially increase the experimental TEEY.
Finally, in Figure 6-47, the evolution of the surface charge density is plotted. The positive surface charge increases very quickly at -180°C, which is coherent with our analysis of the charge density profile. The lack of evacuation of holes creates a strong reduction of the TEEY, as we have observed experimentally. On the other hand, the positive surface charge evolves more slowly at 200°C due to the enhanced evacuation of holes. We can thus confirm the link between the surface charge density and the decrease of the TEEY we have found at room temperature. The dependence of the TEEY on this density is also prevalent at low and high temperature, and the modification of the drift transport with temperature is what causes a modification of the TEEY. The activation of the transport of holes by raising the temperature is actually a phenomenon that can be used to discharge the sample after a measurement. However, this discharge is not instantaneous, since we have to wait until the charge carriers are detrapped and evacuated. In the case of deep traps, the time of residence of the charge carrier in the trap can be much longer than for shallow traps even when the temperature is raised. Thus, several hours of discharge may be needed to completely remove the deeply trapped particles.
In conclusion, the Monte-Carlo model is able to simulate the effect of temperature on the drift transport and the ballistic transport. It is also able to model the effect of temperature on the TEEY, albeit with a change of parameters to improve the fit to experimental data. After studying the trapped charge densities, we can explain the increase of the TEEY with temperature by the activation of the hopping transport. This phenomenon predominates over the increase of energy losses of the secondary electrons by phonon collisions, which tends to reduce the TEEY. In fact, the phonon scattering models we have implemented in this work were only used by Schreiber & Fitting [26] in simulations of a sample at room temperature. It is therefore possible that the effect of the temperature on the electron-phonon scattering rates is overestimated. Finally, we have shown that the TEEY can decrease or increase by 20% at -180°C or 200°C compared to the measurements made at room temperature with a current density of 1 µA/cm². Therefore, it is possible that space insulators subjected to temperatures ranging from -200°C to 200°C exhibit such variations of TEEY, which we cannot observe through standard qualification at room temperature. However, such variations of the TEEY with temperature are not observed when a current density of a few nA/cm² is used.
Further discussions on internal charging effects and the TEEY of dielectrics
In this section, we give additional results gathered during this thesis, some of which are preliminary. They are given as a basis for discussion and tracks to explore, in the scope of further perspectives on the effects of charging on the TEEY of insulators.
6.5.1 Probing the decrease of the TEEY towards the equilibrium state
The Monte-Carlo code is able to simulate the first 8 ms of the decrease of the TEEY, and we have been able to explain this decrease by the recombination of secondary electrons with holes. However, in standard TEEY measurements, the sample is receiving 10 pulses of 6 ms, for a total irradiation duration of 60ms, and it is possible that space-used dielectrics may be irradiated continuously for even longer time periods.
When studying the multiple-hump TEEY curves, we have made TEEY simulations that followed exactly the same protocol as the experiment, with 10 pulses of 6 ms. Nevertheless, the comparison was only qualitative, and we were only able to compare the TEEY averaged over these 10 pulses. Therefore, we can go further and examine the simulation of the TEEY past the 80 or 100 pulses of 100 µs. Since we know that the code can simulate the start of the decrease, additional experimental data will be helpful to probe the capabilities of the code in simulating the TEEY over a greater period. In this regard, measurements were made in the ALCHIMIE facility using the same experimental protocol as in DEESSE, with pulses of 100 µs spaced by 50 ms. This time however, special care was taken by using the charge removal procedure of 6.2.4 before each measurement, to limit remanence in the TEEY.
First, the decrease of the TEEY was measured over 300 pulses with the standard procedure (measurement of 𝐼 0 at + 27 V and 𝐼 𝐸 at -9 V), at the energies of the TEEY minimums of Figure 6-27. In this case, the beam area is about 0.05 to 0.1cm². The decrease of the TEEY measured in DEESSE was much stronger than the decrease measured in ALCHIMIE, due to the difference in calibration of the electron guns. In Figure 6-48, the incident current used during the standard measurement procedure for the energy/TEEY curve is plotted for the DEESSE and ALCHIMIE installations. The parameters of the electron guns are identical in both installations. However, we can see that for the same gun parameters, the incident current in DEESSE is higher (1.5 µA) than ALCHIMIE (0.7 µA). Assuming that the beam area variations are identical, this results in a stronger current density in DEESSE (15 µA/cm²) than ALCHIMIE (7 µA/cm²), which, as we have seen before, creates a stronger reduction of the TEEY. The TEEY measurements of ALCHIMIE are compared with the simulations from section 6.2 in Figure 6-49 for 300 eV, 500 and 1 keV. As for the comparison of Figure 6-4, the experimental results have been scaled to match the starting point of the simulated data. In Figure 6-49, the experimental data from DEESSE is also plotted for 300 eV electrons. The simulations have a good agreement with the data from DEESSE over 80 pulses. In the case of the ALCHIMIE measurements however, we have measured a weaker decrease of the TEEY due to the reduced current density. In consequence, the simulations severely overestimate the reduction of TEEY compared to the ALCHIMIE experimental data at the three energies. It was possible to improve the agreement of the simulations with the data from ALCHIMIE, by using an incident current of 0.6µA in the simulation. However, the agreement was only improved for the start of the measurement.
It was possible to improve the agreement at the end of the 300 pulses with the experimental data by reducing the recombination cross section to σ_Recomb = 6 × 10⁻¹³ cm², instead of its default value σ_Recomb = 2 × 10⁻¹² cm² used in all other simulations. This modification is a purely arbitrary fit to improve the agreement to the ALCHIMIE data. It is possible that the sample used in DEESSE has different physical characteristics than in ALCHIMIE, but this is not something we can confirm or use as a justification for this change. We have seen in the previous section that the overlap factor approach could qualitatively simulate the fact that the decrease of the TEEY is slower when the current density is lower. However, the need for such a tweak seems to highlight that more improvements are needed to this approach, in order to have a quantitative modeling of the time-resolved decrease of the TEEY at various current densities.
Although there is an inflexion around 7 ms that is not as visible in the simulations, the overall dynamic of the decrease is well simulated with this change of parameter. Therefore, the Monte Carlo model can also simulate the decrease of the TEEY past 10 ms, albeit with a change of parameters, and we have validated it for the simulation of the TEEY during 300 pulses or 30 ms in total.

The decrease of the TEEY was then measured in ALCHIMIE over a much larger number of pulses at 300 eV and 500 eV, for a wide beam (focus 0V) and a narrow beam (F100 or F500); the results are shown in Figure 6-50. We can observe for both energies a behavior that is identical to the measurements of Figure 6-32 made in DEESSE. For a wide beam surface (F0), the TEEY is practically constant, with only a very small variation at 300 eV from 2.36 to 2.33. This confirms that the effect of charging and recombination on the TEEY is very limited, if a low current density is used (nA/cm² in this case). On the other hand, with a narrow beam surface (F500 or F100) the TEEY immediately decreases and stabilizes after 100 ms. What is especially interesting here is that the final value reached by the TEEY is not 1. At 300 eV, both data from ALCHIMIE (Figure 6-50) and DEESSE (Figure 6-32) reach a final value of 1.6. At 500 eV, the TEEY measured in ALCHIMIE stabilizes at 1.9, while the TEEY from DEESSE reaches 1.6 after 30 pulses of 6 ms. This would mean that the sample would be charging indefinitely since it is always emitting more electrons than received. This is impossible however, because we have measured that the surface potential is stabilizing around a few volts under a continuous incident current. This measurement is made by scanning the spectrum of secondary electrons, using the hemispherical analyzer above the sample. It is known that the energy spectrum of electrons should start at 9 eV, since the sample bias at -9V accelerates the secondary electrons through a potential of 9V towards the analyzer.
However, a positive change of the surface potential of 𝑛 volts will reduce the energy gained by the electrons of 𝑛 eV. In consequence, we can measure the energy value at which the distribution of secondary electron starts, and evaluate the positive change of surface potential from the shift of this distribution.
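A minimal sketch of this estimate is given below, assuming the onset energy of the secondary electron distribution has been read off the analyzer spectrum; the onset values used here are invented for illustration.

```python
# Estimating the positive surface potential from the shift of the secondary electron spectrum.
# With the sample biased at -9 V, an uncharged surface gives a spectrum starting at 9 eV;
# a surface charged to +n V shifts the onset down to (9 - n) eV.
V_bias = 9.0  # magnitude of the negative sample bias (V)

for onset_eV in (9.0, 7.5, 6.0):  # illustrative onset energies read from the analyzer
    n = V_bias - onset_eV         # positive change of the surface potential (V)
    print(f"spectrum onset = {onset_eV:3.1f} eV -> surface potential ≈ +{n:.1f} V")
```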
Consequently, it is possible that the actual emitted current 𝐼 𝐸 stabilizes at the value of the incident current 𝐼 0 , which makes the TEEY stabilize at 1 and prevents the surface potential from increasing infinitely. This would mean that we are somehow measuring a current 𝐼 𝑆 that is greater than 𝐼 0 . If we go back to the very first equations of this chapter, the fundamental current conservation law used for the TEEY measurement procedure is given by 𝐼 0 = 𝐼 𝐸 + 𝐼 𝑆 , and the TEEY is computed with TEEY = 𝐼 𝐸 / 𝐼 0 = (𝐼 0 - 𝐼 𝑆 ) / 𝐼 0 . 𝐼 𝑆 is the current flowing through the sample holder, measured when the sample is negatively biased. From a particle based point of view, these equations can only describe the TEEY accurately if 𝐼 0 are the incident electrons, 𝐼 𝐸 are all electrons exiting the sample, and 𝐼 𝑆 is only made of the electrons remaining in the sample. In metals and semi-conductors, the implanted electrons can easily reach the sample holder and be recorded as a current, therefore this condition is verified. For insulators, we have assumed that the current measured at the metallic sample holder is only made of the current created by the image charge, which will be noted 𝐼 𝐶 . This is true for bulk samples, where the implanted electrons are unable to reach the sample holder, so the leakage current 𝐼 𝐿 created by these electrons is negligible. In section 6.2.2.2 however, we have shown that the drift electrons can escape from the SiO2 thin film through the silicon layer, which should create a leakage current. This leakage current should also be higher for incident electrons with a higher energy. Indeed, we have also shown that, as the incident energy increases, the drift electrons are implanted deeper in the SiO2 layer and can leak more easily into the silicon substrate. As a result, the expression of the TEEY based on the sample current becomes
$$TEEY = \frac{I_0 - (I_C + I_L)}{I_0}$$

Equation 6-11
Indeed, if there exists a leakage current that is able to flow through the insulator, the total current 𝐼 𝑆 measured at the sample holder will be equal to the sum of the leakage current 𝐼 𝐿 and the image charge current 𝐼 𝐶 . In the case of our thin film samples, the leakage current is not negligible. It will add itself to 𝐼 𝐶 and increase 𝐼 𝑆 . In consequence, the measured TEEY is falsely increased, even if the actual quantity of emitted electrons does not increase. The TEEY becomes erroneous, because our measurement procedure interprets the leakage of electrons through the sample as secondary electrons emitted into vacuum. This could be why the experimental TEEY does not stabilize at 1 but at 1.6 at 300 eV. At 500 eV, the leakage of electrons is higher, and so is the final value of TEEY (1.9).
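The size of this bias can be pictured with a short numerical sketch comparing the true yield to the value returned by the sample-current procedure when a leakage path exists. The current values are invented, chosen only to reproduce the order of magnitude of the 1.6 plateau discussed above, under the assumption that the true TEEY has stabilized at 1.

```python
# Schematic illustration of the measurement bias introduced by a leakage current.
# The sample-current procedure cannot distinguish electrons emitted into vacuum from
# electrons leaking into the Si substrate, so both end up counted as emitted electrons.
# All currents in µA, purely illustrative.
I0 = 1.0        # incident current
I_E_true = 1.0  # electrons actually emitted into vacuum (true TEEY = 1 at equilibrium)
I_L = 0.6       # drift electrons leaking through the substrate to the sample holder

true_teey = I_E_true / I0
apparent_teey = (I_E_true + I_L) / I0  # leaked electrons are interpreted as emitted ones
print(f"true TEEY = {true_teey:.2f}, measured TEEY = {apparent_teey:.2f}")
```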
With this knowledge in mind, we can attempt to explain the TEEY of 1 keV electrons measured over 5000 pulses, which is very peculiar. In Figure 6-51a, the TEEY is practically not evolving at F0, which was expected. At F300, the TEEY starts decreasing rapidly, but then immediately increases after a couple of ms and reaches its initial value. From section 6.2.2.2, we have seen that a part of the 1 keV incident electrons can actually travel through the whole SiO2 layer, and be implanted in the Si layer. The drift electrons are also created very close to the SiO2/Si interface, which enables them to leak very easily. This resulted in a strong positive charge in the SiO2 layer after 100 pulses, as shown in Figure 6-10. As a result, the thermalized secondary electrons can easily flow to the sample holder and be measured as leakage current, and we can also record a part of the incident current that goes directly through the insulating layer. This should create a strong leakage current that can significantly increase the measured TEEY. In Figure 6-13, the positive charge near the SiO2/Si interface increases with time. This could indicate that the leakage current also increases with time, which would be why the measured TEEY is also increasing. Indeed, if we zoom in on the first 300 pulses of the measurement (Figure 6-51b), we only notice a quick stabilization of the TEEY after 5 ms. In every experimental measurement made at 1 keV (Figures 6-3, 6-18, 6-32 and 6-49), the TEEY also stabilizes very quickly, after a couple of ms only. In the simulation however, we were not able to reproduce this quick stabilization, and the simulated TEEY at 1 keV continues to decrease. Indeed, in the simulation, we count the secondary and backscattered electrons with a spherical collector. With this method, the leakage current is not included in the computation of the TEEY, and we always measure the true emitted current. This explains why the simulation struggles to fit the TEEY of 1 keV electrons after a few ms, where the leakage current starts to modify the experimental TEEY. The effect of the leakage current on the TEEY is not immediately visible over 300 pulses, and it was necessary to measure the TEEY over a larger period to highlight a possible measurement bias.
Another possible explanation for this phenomenon, is that we could be seeing the effect of the electric field, due to the amount of charges created in the material on longer times of irradiation (a few tens of seconds). It could be possible that the electric field becomes greater than the 0.1 MV/cm threshold necessary to force the electrons to drift in the opposite direction of the field. In such a case, the positive charge should force the electrons towards the surface and oppose itself to the leakage current, so when the field increases over time, the leakage should be lowered. The electrons could then be neutralizing the holes and creating an equilibrium of charges. As we have seen before, a reduction of the surface hole density creates an increase of the TEEY. It is finally possible that the leakage of drift electrons does occur, but only at shorter times when the internal field is below the 0.1 MV/cm threshold shown before. However, further investigations on the electric field effects are needed, especially to explain why the TEEY is increasing at 1 keV. Hence, another possibility is that both electric field and leakage effects could be occurring at the same time.
To validate any of these hypotheses, it would be interesting to perform TEEY measurements using the electron collector instead of the sample current method, and compare the two measurements over a few hundred of ms. Indeed, when measuring the TEEY with the electron collector, the leakage current is not taken into account. If there is still a stabilization of the TEEY over 1, then this could be due to the electric field and not the leakage current. On the other hand, if we don't have a stabilization of the TEEY over 1, then the leakage current would be the culprit.
Extending the simulation to measure this leakage current should be more difficult. Indeed, simulating the effect of the leakage current is not as simple as adding the number of drift electrons reaching the Si layer to the TEEY. The current is measured at the sample holder, and the transport of electrons through a bulk silicon layer is not instantaneous. As a result, we would also have to simulate the drift transport of electrons in silicon until they reach the sample holder, which would necessitate the use of a drift-diffusion code instead of a Monte-Carlo code.
6.5.2 Discussion on the charge buildup of a SiO2 sample under a low energy GEO incident electron spectrum (200 eV - 10 keV)
To place ourselves in the context of space-used dielectrics, we can also try to simulate the evolution of the TEEY under an energy spectrum, instead of the TEEY of a monoenergetic beam. Indeed, spacecraft materials are hit by a spectrum of electron energies ranging from a few hundreds of eV up to several MeV. In this study, the Monte Carlo code has been used to compute the SEY of 20 nm SiO2 samples under an energy spectrum. We will reuse the -9V negative bias, so the external charging effects should be limited. This is to simulate the negative bias that can be applied to the solar panels on board satellites, which can be covered by a layer of SiO2.
The spectrum we have used in this study is obtained from the GREEN model and based on the GEO space environment. The GREEN spectrum ranges from 200 eV to several MeV. In this work, we focus on the contribution of low energy electrons of 10 keV and below only, for several reasons. From a physical point of view, electrons above 10 keV only deposit a very small part of their dose near the surface, so their TEEY is very small. From a technical point of view, the cross sections for the inelastic interactions of electrons were only computed up to 10 keV. This is because the computation times above 10 keV become excessive compared to condensed history models such as the Geant4 standard physics, by a factor of 4 to 5. We have also shown in Chapter 4 that the dose depth profiles of 10 keV and 14.5 keV given by MicroElec and GRAS had a good agreement, so it is more pertinent to switch from MicroElec to a condensed history model above 10 keV.
In Chapter 5 however, we have shown that the Monte-Carlo model reaches its limits when modelling the external charging effects on the TEEY of electrons past the second crossover point, and that a change of parameters was needed to accurately model these effects. The TEEY results shown in this section should only be taken as preliminary results, which need to be improved. On the other hand, we should still be able to use the charge density profiles to study the charge buildup induced by an energy spectrum.
The differential incident spectrum from GREEN in the GEO orbit is given in Figure 6-52. One can see that the majority of electrons have an energy below 1 keV, which have a TEEY greater than 1 and will induce positive charging. However, a significant part of the spectrum is made of electrons that have an energy beyond the 2nd crossover point of the material used in the simulations, which is around 2 keV. As a result, these higher energy electrons should create negative charging. The simulated TEEY obtained from this spectrum is shown in Figure 6-53, for a total simulated time of 30 ms in continuous irradiation and a current density of 10 µA/cm². We can see the apparition of a temporal evolution of the electron emission yield, which decreases with time. However, given that the surface potential remains negative and around -9V, there is no recollection of the secondary electrons by the surface. The computation was made with all the default parameters from Table 6-1, but we had shown in Chapter 5 that these parameters overestimated the recombination effect for electrons of 2.5 keV and above. Therefore, the decrease of the TEEY shown in Figure 6-53 is overestimated. Due to the computation time needed to produce these results (1 week), it was not possible to redo this computation. However, we can compute the average TEEY of the electron spectrum without charging effects, by weighting the TEEY at each energy by the differential flux of the spectrum.

Still, we can use the charging simulation results of Figure 6-53 to study the charge density in the 20 nm SiO2 layer after 30 ms of irradiation, which is given in Figure 6-54. Three distinct regions can be seen. In the first 5 nanometers, the material is positively charged due to the creation of holes and the escape of secondary electrons. According to the range of electrons of 1 keV and below in SiO2, which can be deduced from the dose depth profiles shown in Figure 6-55, the negative region after 5 nm corresponds to the implantation region of these electrons. Finally, we have shown in 6.2.2.2 that the positive region close to the interface is due to the escape of drift electrons in the silicon substrate, an effect that appeared for electrons above 300 eV. Overall, we can deduce from the surface potential in Figure 6-53 that the global charge of the material is positive.

The ionizing dose-depth profiles for the full 10 keV GEO spectrum and for the GEO spectrum cut at 1 keV are shown in Figure 6-55. In the case of the 1 keV spectrum, the whole dose is deposited in the SiO2 layer. The average TEEY of the spectrum cut at 1 keV is 2.25, so these electrons will generate a positive charge. However, when adding the contribution of higher energy electrons to get the 10 keV spectrum, we can see that the dose deposited close to the surface is reduced, which reduces the average TEEY of the spectrum down to 1.19. More importantly, a significant part of the dose is deposited in the Si layer. Indeed, the extrapolated range of 10 keV electrons in SiO2 is 700 nm, and we have already shown that 1 keV electrons are able to go through the 20 nm dielectric layer. Consequently, the electrons above 1 keV are implanted in the Si layer, and they do not contribute to the charging of the SiO2 layer. Due to the lack of their negative charge, the sample can thus get charged positively. From the dose depth profile of the 10 keV spectrum, we can deduce that the maximal range of 10 keV electrons is 1 µm. In Figure 6-56, simulations were made by sending the 10 keV GEO incident spectrum on a 2 µm thick layer of SiO2.
The same TEEY as for the 20 nm sample is obtained, however the 2 µm sample is charging negatively. This is because the thickness of the insulating layer is greater than the penetration depth of all electrons. As a result, the higher energy electrons beyond the 2nd crossover point get implanted in the SiO2 layer and can generate a negative charge, which was lost in the Si layer in the case of the 20 nm SiO2 sample. We can highlight this effect by plotting the charge density profile in the 2 µm sample after 30 ms, which is given in Figure 6-57. In this case, there is a positive region in the first ten nanometers of the surface, as for the 20 nm sample. On the other hand, a large negative charge appears from 10 nm down to 1 µm. Indeed, the drift electrons contained in this negative region are unable to travel 1 µm and escape in the Si substrate. The dielectric sample is also thick enough that no electrons can go through it. Therefore, the drift electrons remain in the SiO2 layer, and generate a global negative charge.

These preliminary results were obtained with an incident current density of 10 µA/cm², which is the current density in DEESSE in the cases where the beam area is minimal (0.1 cm²). In the standard measurement protocol for the TEEY qualification of space materials in DEESSE, a focus of 300 V is used, which gives a current density varying between 0.7 and 28 µA/cm² for an incident current of 1.4 µA on average. However, the incident current density in the space environment is much lower. We can easily compute the incident current densities received by a spacecraft in the LEO and GEO orbits, from the integrated flux of electrons per cm² in the GREEN model. These current densities are given in Figure 6-58. In the worst case scenario of the GEO orbit, the spacecraft will receive an incident electron current density of 0.1 nA/cm², which is far below the standard TEEY test conditions. Notably, the incident current density used in the measurements at focus 0 V in ALCHIMIE is at most 24 nA/cm². With such a current density, we did not observe any reduction of the TEEY after 0.5 seconds in section 6.5.1. Since the current densities in the GEO and LEO space environments are even lower, we also should not observe a variation of the TEEY in these situations, if we were to perform a similar experiment.

Indeed, in Figure 6-59, a simulation of the TEEY of the 1 keV spectrum was made under a continuous current, with a current density of 1 nA/cm². Even after 130 ms, the TEEY has not evolved and remains at the average value of the TEEY of the spectrum, 2.25. Consequently, in the case of dielectric materials directly exposed to the space environment, it is very possible that their TEEY remains stationary at a high value for an extended period. Thus, we may never observe a lowering of the TEEY due to recombination in this situation. Due to the very low current density of 1 nA/cm², there is a large time interval between two electron impacts in the same area, which can allow the trapped charges to be evacuated and prevent the loss of electrons by recombination. This obviously depends on the transport properties of the drift charge carriers in the insulator. For very poorly conductive insulators, the charges may remain trapped long enough to recombine with the incoming electrons, or create a high electric field in the material.
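For reference, converting an integrated electron flux into an incident current density is a one-line operation; the flux value below is an illustrative number of the order implied by the 0.1 nA/cm² worst case quoted above, not a value extracted from the GREEN model.

```python
# Converting an integrated electron flux (electrons/cm²/s) into an incident current density.
e = 1.602e-19   # elementary charge (C)
flux = 6.0e8    # illustrative integrated flux (electrons/cm²/s)

J = flux * e    # current density (A/cm²)
print(f"flux = {flux:.1e} e-/cm²/s -> J = {J * 1e9:.2f} nA/cm²")
```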
However, to verify if the TEEY is indeed stationary in the conditions of the space environment, we would need to make TEEY measurements at a continuous current density of 0.1 nA/cm² over a much longer time (a few tens of seconds, or even a few minutes). We also know that very low energy electrons of a few tens of eVs have a penetration depth that corresponds to the positive charge region. As a result, we have shown in 6.2.4 that these electrons can immediately recombine and remove the positive surface charge. Therefore, it would be very interesting to have access to an energy spectrum that goes below 200 eV, and study the TEEY of SiO2 samples including the contribution of these very low energy electrons. We would be able to determine whether these low energy electrons are able to remove the surface charge, or if the electrons of a few hundreds of eVs can still create an overall positive charge. Finally, the energy spectrum we have used may only be applicable to the study of the TEEY of dielectric materials that are directly exposed to the space environment, such as solar panels. In the case of RF devices placed inside of the spacecraft, the shielding could modify the energy spectrum and flux intensity of the electrons arriving in the device. We also have to take into account the electronic cascades generated by other types of incident radiation, or more energetic particles.
Conclusion of Chapter 6
In this chapter, we have gathered a significant amount of knowledge on the effects of charging on the TEEY of SiO2 thin films, by combining our simulation code with experimental TEEY measurements we have made with the ALCHIMIE and DEESSE measurement facilities of ONERA. These experiments were performed on SiO2 thin films of 20 nm of thickness, plasma grown on a Si substrate. We have notably performed time-resolved TEEY measurements, which are not commonly found in the literature but are especially pertinent for the study of the effects of charging on the TEEY. Indeed, these measurements have allowed us to observe how the TEEY is diminishing with time due to the internal charge buildup. With this data, we have first validated our simulation of the reduction of the TEEY induced by the internal charge buildup.
The Monte-Carlo code we have developed is able to accurately model the decrease of the TEEY observed in our experimental measurements. The slope of this decrease and the starting point of the TEEY agree with the simulation within an error of 20%. Given the numerous processes involved in insulator charging, the accuracy of this simulation model is very satisfying.
By studying the evolution of the charge density with time in the material, we have highlighted a direct correlation between the increase of the surface density and the decrease of the TEEY. This allowed us to demonstrate that the reduction of the TEEY, which was observed experimentally and in the simulations, is due to the recombination of the secondary electrons with the holes. As the surface hole density increases due to the bombardment of incident electrons, the probability of losing the secondary electrons by recombination also increases, which makes the TEEY diminish with time. We have thus confirmed the hypotheses that were proposed in other works [13,15,27], who had suggested that the recombination of electrons with holes was lowering the TEEY.
With this knowledge, we have also demonstrated that the presence of residual holes in the sample at the start of a measurement could cause some significant errors in the TEEY. Due to enhanced recombination compared to a virgin sample, the sample must be properly discharged to avoid a remanence effect in TEEY measurements. From this, we have tried in our simulation a discharging method proposed by Belhaj et al. [13] and Hoffman et al. [19], by sending several pulses of very low energy electrons on the target followed by a relaxation period of several seconds. By studying the internal charge density and the TEEY before and after the discharge, we have confirmed that this procedure can effectively remove the residual surface holes, and can be used during experimental TEEY measurements to improve their reproducibility.
We have made other measurements of the TEEY of SiO2 thin films, this time focusing on the appearance of multiple humps in the energy/TEEY curves. We have shown a direct correlation between the variation of the beam spot size with energy, which depends on the electron gun parameters, and the appearance of a local minimum of TEEY at select energies. The effect of the current density on the TEEY has been investigated with the Monte-Carlo simulation. By modeling in our simulations the effect of the proportion of overlapping electron cascades after a given time, we have proved that the reduction of the TEEY by recombination was enhanced by a higher current density. Experimental works on other insulators than SiO2 had also highlighted a stronger reduction of the TEEY with a stronger current density [16,28,29]. However, to our knowledge, the effect of the proportion of overlapping electron cascades and their uniformity on the surface was not taken into account in Monte-Carlo models. The novel approach of our work has allowed us to prove that the multiple-hump TEEY curves are due to a variation of the current density. These are most probably a measurement artefact, which can be removed when working with a low and constant current density.
Finally, we have moved away from the controlled lab environment and the standard measurement parameters, towards the conditions of the space environment that space-used dielectrics will be subjected to. First, we have measured and simulated the effect of temperature on the TEEY, in the standard range of temperature experienced by spacecraft materials (-180°C to 200°C). We have demonstrated that the thermally activated transport of the electrons and holes could significantly modify the TEEY. If the temperature is increased, the drift charge carriers are evacuated more easily, which increases the TEEY, and conversely if the temperature is lowered. This is due to the dissipation of the trapped holes in the material, which increases the escape probability of electrons. Lastly, we have also made a preliminary experimental study on the decrease of the TEEY until its stabilization. These measurements hint towards an error in the TEEY that could be created by the leakage of electrons through the substrate. This could be due to our measurement procedure through the sample holder current. Electric field effects may also become dominant over longer irradiation times, and be the cause behind the increase of the TEEY following the decrease we have observed for 1 keV electrons.
We have demonstrated in this chapter a significant influence of the current density and the temperature on the TEEY of insulators. However, the current densities used in the literature and in standard TEEY qualification for space-used dielectrics (1 µA/cm²) are much higher than the maximal current densities received by the materials on board spacecraft (0.1 nA/cm² in a GEO orbit). Moreover, if the insulator is subjected to a temperature gradient, it is also possible that the hotter part of the material becomes more emissive than the colder part, which can create a charge gradient inside the material. Nevertheless, these phenomena are not taken into account in the standard TEEY measurement protocols, which were first designed for metals. Hence, the TEEY gathered during standard qualification in a controlled laboratory environment can be quite different from the TEEY of the dielectric in its real conditions of usage. In consequence, it should be necessary to develop new experimental protocols, to obtain TEEY data that are more representative of the effective TEEY of dielectrics subjected to the space radiative environment.
Conclusion and perspectives
The aims of this study were to develop a low energy electron transport model in dielectric materials for space applications, to simulate the electron emission properties of dielectric samples, and to study the effect of the internal charge on the TEEY. By conducting experimental measurements and using our model to explain them, we have clarified the misunderstood experimental observations made on dielectrics, and highlighted the physical processes involved. Several steps were involved in our approach, which have yielded significant results.
First, we have developed low energy electron transport models in metals and semiconductors. This was done by bringing additions and improvements to the MicroElec physics module of the Monte-Carlo toolkit GEANT4. The toolkit has been extended to the transportation of low energy electrons, protons, and ions in 16 materials (Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag, W, Au, SiO2, Al2O3, Kapton, BN), for the simulation of the secondary electron emission. The transport can also be made in multi-layered materials and complex geometries. The secondary electron yield has been used to verify the complete transportation model for electrons, underlining the importance of the surface potential barrier for electron emission modeling. Satisfying agreement with experimental TEEY data has been observed, which has allowed us to publish these new interaction models in Geant4. Most Monte-Carlo models publicly available for the use of the scientific community have a low energy limit of a few hundred eV at most for the transport of electrons, which includes the physical interaction models of GEANT4 except GEANT4-DNA. Hence, by releasing these models in Geant4, we have provided an accessible tool for low energy electron transport, secondary electron emission and microdosimetry studies in 16 materials. The updated MicroElec models are now available in Geant4 since the 10.6 release.
Using the results from this Monte-Carlo model for metals and semiconductors, we have investigated several key quantities of the low energy electron transport, namely the extrapolated range, transmission rate, ionizing dose and secondary electron emission yield. In this study, we have developed analytical models for all these quantities, which are valid down to a few tens of eV. The comparison of the predicted SEY with Monte-Carlo and experimental data shows satisfactory agreement for the 11 studied materials (Be, C, Al, Si, Ti, Fe, Ni, Cu, Ge, Ag, W). This work highlights the fact that the SEY needs a more accurate description of the transportation of low energy electrons than the hypothesis of a constant energy loss used in some analytical SEY models.
However, this analytical approach presents some limitations which could be improved in future work. Notably, to get a better prediction of the total emission yield, a more accurate expression for the backscattering of electrons could be added to the model, in order to get all the different contributions and the variation in energy. Nevertheless, this model is a good basis for an extension to compound or layered materials. The effect of surface roughness can also be included in the form of geometrical models. These improvements will allow the model to be adapted to the SEY of technical materials.
One of the main results of this thesis is the development of a Monte-Carlo model able to simulate the effect of positive internal charging on the TEEY of insulating materials, with a focus on SiO2 thin films due to the wide availability of reference data on this material. The code is able to simulate the drift transport of electrons and holes in 3D, until they are trapped and stored in a 1D mesh. We have modeled the trapping, detrapping, and recombination of holes, drift electrons and low energy ballistic electrons. To build our simulation, numerous new classes were created in a Geant4 application, and designed to interact with themselves and the Geant4 kernel to ensure proper transportation of the particles. We also added a 1D Poisson solver to Geant4, to compute the electric field after each simulation step. This simulation code can properly model the reduction of the TEEY of SiO2 in conditions of positive charging with the external field compensated, which is strictly caused by internal charging effects. This modelling is coherent with the TEEY decrease observed experimentally on other insulators in such conditions.
Conversely, the simulation of the negative charging and of the TEEY for electrons above the second crossover point needs to be improved. Given that the computation time also becomes disproportionate for electrons above a few keV, we reach the limits of the Monte-Carlo model. Perspectives of this PhD work regarding the Monte-Carlo model could be to improve the modeling of the negative charging induced by electrons above a few keV, and improve the computation time by converting the code from single-thread to multithreading. Still, the architecture of the simulation and its processes have been designed to be as general as possible. Therefore, this Monte-Carlo code can be extended in future works to other insulators used in space applications.
Most importantly, we have gathered a significant amount of knowledge on the effects of charging on the TEEY of SiO2 thin films, by combining our simulation code with experimental TEEY measurements we have made during this thesis with the ALCHIMIE and DEESSE measurement facilities of ONERA. These experiments were performed on SiO2 thin films of 20 nm of thickness, plasma grown on a Si substrate. We have notably performed several time-resolved TEEY measurements, which are not commonly found in the literature, but are especially pertinent for the study of the effects of charging on the TEEY. With this data, we have first validated our simulation of the reduction of the TEEY induced by the internal charge buildup, with an error of 20%. Given the numerous processes involved in insulator charging, the accuracy of this simulation model is very satisfying.
By studying the evolution of the charge density with time in the material, we have highlighted a direct correlation between the increase of the surface density and the decrease of the TEEY. We have therefore demonstrated that the reduction of the TEEY, observed experimentally and in the simulations, is due to the recombination of the secondary electrons with the holes. We have confirmed the hypotheses proposed in other experimental works, which suggested that the recombination of electrons with holes was lowering the TEEY.
From this knowledge, we were able to explain why the TEEY was modified by internal charging in several observations. Firstly, we have shown that the presence of residual holes in the sample at the start of a measurement could cause some significant errors in the TEEY, and that the sample must be properly discharged to avoid a remanence effect in TEEY measurements. We have then proposed a discharging method usable during TEEY characterization campaigns. The error created by the trapped holes can be removed by sending several pulses of very low energy electrons on the target, followed by a relaxation period of several seconds, which allows the electrons to eliminate the holes, and bring the target closer to its charge-less state at the start of the measurements. With this method, the reproducibility of TEEY data is improved.
Secondly, we have explained the origin of the multiple humps in the energy/TEEY curves. We have demonstrated that the variation of the beam spot size with energy, which depends on the electron gun parameters, was correlated with the appearance of a local minimum of TEEY at select energies. This correlation was explained with the simulation, by proving that the reduction of the TEEY by recombination was enhanced with a higher current density, and linked to the stronger overlap of the electron cascades. To our knowledge, the effect of the proportion of overlapping electron cascades and their uniformity on the surface was not taken into account in any other Monte-Carlo models. The novel approach of our work has allowed us to demonstrate that the current density had a strong effect on the TEEY. The multiple-hump TEEY curves are therefore most probably a measurement artefact, and the effects of internal charging can be mitigated when working with a low and constant current density. In an attempt to go further, we have also studied the decrease of the TEEY until its stabilization. These measurements hint towards an error in the TEEY, which could be created by the leakage of electrons through the substrate. This could be due to our measurement procedure through the sample holder current, but this hypothesis must be confirmed with additional measurements.
Lastly, we have moved away from the controlled lab environment used in standard TEEY studies, towards the conditions of the space environment which the dielectrics will be subjected to. We have measured and simulated the effect of temperature on the TEEY, in the standard range of temperature experienced by spacecraft materials (-180°C to 200°C). It has been demonstrated that the thermally activated transport of the electrons and holes could significantly modify the TEEY. This is due to the dissipation of the trapped holes in the material, which increases the escape probability of electrons, and depends on the transport properties of the insulator. We have finally simulated the TEEY of a material under a GEO electron energy spectrum. These results show that the modelling of the negative charging induced by high energy electrons should be improved. On the other hand, we have demonstrated a lack of variation of the TEEY when an incident current density of 1 nA/cm² was used, which is of the order of the incident current density received by spacecraft in a GEO orbit.
Multiple articles have been written and published in the literature, based on the results of this PhD thesis. First, the improvements brought to MicroElec have been published in [1] and released in GEANT4 v10.7. The new version of MicroElec developed in this thesis has been used in two other published studies, which have focused on surface ionizing dose calculations [2] and the simulation of the TEEY of multilayered materials [3]. The analytical models developed for the extrapolated range and transmission rate have been published in [4], and the ionizing dose analytical model in [5]. A third paper on the secondary electron emission yield is currently under review. Lastly, regarding the results gathered on insulator charging, the Monte-Carlo charging model for SiO2 and the study of the effect of recombination have been published in [6]. An article on the double-hump TEEY curves and the effect of current density, entitled "Experimental and Monte-Carlo study of double hump electron emission yield curves of SiO2 thin films", has been submitted, and another article on the effect of the temperature on the TEEY, entitled "Monte-Carlo simulation and experimental study of the effect of temperature on the electron emission yield of SiO2 thin films", is in progress.
As final points, we can draw several perspectives, which stem from the large range of results gathered during this PhD thesis on the low energy electron transport in insulators. When studying the effect of recombination, we have shown a direct correlation between the hole density and the variation of the TEEY. Therefore, it could be possible to simplify our Monte-Carlo model, by treating the drift charge carriers implicitly in global charge densities, instead of explicitly simulating the transport of each particle. An explicit simulation was required at the time of this study, since a lot of physical phenomena are involved in the transport of charges, and it was unknown which of these processes was modifying the TEEY. Now that we have identified recombination as the main mechanism, we could simply simulate the drift, trapping and recombination of charges in a more macroscopic approach.
For this, drift-diffusion codes are especially suited, since they directly compute the spatial and temporal variations of the global charge densities, instead of computing the microscopic transport. Due to this lack of microscopic transport, drift-diffusion codes trade the increased accuracy of Monte-Carlo codes for much faster computation times. This would be especially interesting for the simulation of higher energy electrons or extended time periods, where we have shown that the computation time of our Monte-Carlo model is much too long to be applicable. Moreover, we already have an analytical model of secondary electron emission for the charge-less transport of electrons, in metals and semiconductors. Hence, if this model was extended to insulators, the effect of charging on the TEEY could be treated with a purely analytical approach. We could then study whether a drift-diffusion resolution scheme is needed, or if the effect of recombination can simply be modelled with a purely analytical formalism.
We have used a 1D mesh for the computation of the electric field and the storage of charge densities, given that we had to simulate a nanoscale transport but study the electron emission of samples with an area of several cm². Hence, using a full nanoscale 3D mesh for centimetric dimensions would have been extremely expensive in computational resources. The effects we have studied in this work were mostly 1D effects, so our 1D approach was suited. Nevertheless, it could be useful to extend our simulation model with a 3D mesh for other situations. For instance, we could simulate the electron emission of a target hit with a very high incident current density, and attempt to study the variation of image contrast caused by target charging in scanning electron microscopes. Computing the electric field in 3D would also allow to study the effects of charging on rough samples, where the electric field may be locally intense due to peak effects.
Last of all, from an experimental point of view, we have shown that the standard TEEY measurement protocols, which were designed for metals, are not necessarily adapted for space-used dielectrics. Indeed, the current densities used (1 µA/cm²) are not representative of the maximal current densities received by the materials on board spacecraft (0.1 nA/cm² in a GEO orbit). The controlled tests at room temperature also do not take into account the thermal cycling and temperature gradients that spacecraft materials can undergo (from -180°C to +200°C). However, we have shown that both current density and temperature can significantly change the TEEY measured on an insulator. Hence, such modifications could happen with the dielectrics on board satellites, but are not evaluated during standard qualification. As a result, it is possible that the data obtained in this way introduces an error when evaluating the discharge or multipactor risks associated with dielectrics in the space environment. In consequence, it is necessary to conceive new TEEY measurement standards that are specific to dielectrics, with a current density that is constant and low enough to avoid charging artefacts, and that take into account the variation of the TEEY with temperature. This will yield data that is more realistic and representative of the TEEY of dielectrics in space applications.
Appendix I -Description of the architecture of the GEANT4 simulation
To integrate all the physical models for the transport of drift electrons and holes, and to develop an iterative simulation, many new classes were conceived in the Geant4 application we have developed. Here, we shall list these various classes and then explain how they fit into the standard architecture of Geant4.
The most important class of the simulation is the DriftManager. It is a singleton with functions related to the charging effects, and the center of the charging simulation. It stores the charge densities, computes and interpolates the electric field, launches the different phases of the simulation, computes the capture mean free paths and detrapping probabilities, and computes what happens when a particle is captured by a trap (free capture or recombination). As a singleton, DriftManager is called by the physical interaction processes, field handling classes and particle sources throughout the simulation.
The DriftManager is accompanied by a DriftMessenger. It is tasked with loading the different parameters of the model through macro commands stored in the file ChargingParameters.mac. Other macro commands are passed to the DriftManager by the DriftMessenger to signal it to begin the different phases of the simulation, and tell it whether the electron cascade, the drift phase or a relaxation phase should be simulated.
New particle types DriftHole and DriftElectron were also added. These particles inherit from the class G4ParticleDefinition and use the same particle definition parameters as G4Electron, except for the mass which is set to the effective mass from Equation 5-26, and the charge which is +e for a hole. DriftHole and DriftElectron particles are generated by the DriftManager during the drift phases.
A new physical process Trapping has been created, following the structure of the other physical processes of Geant4. It returns a mean free path from DriftManager for the capture of drift electrons and holes by shallow or deep traps, and calls DriftManager to handle the trapping or recombination of the particle. By creating a new interaction process following the nomenclature of Geant4, we can let the toolkit handle the transport of the drift particles as any other particle.
The process G4ElectronCapture tasked with killing electrons below a preset energy threshold has been overhauled. It is now handling the capture and recombination of ballistic electrons, in a similar way to the G4Trapping process.
The PrimaryGeneratorAction (PGA) class is driven by the DriftManager, which tells it whether the current run is a ballistic or drift run. The PGA also retrieves the stack of particles to be tracked in a drift run from the DriftManager. It then communicates to the Geant4 transportation manager the positions and directions of the drift particles generated.
The process DriftTimeStepMax stops the drift of the particles if their drifting time exceeds the simulation step. Here, a definite interaction value is returned instead of a mean free path. The particles that have not finished drifting at the end of the simulation time step are added to a postponed stack, and the simulation of their transport is resumed after the next ballistic run.
A set of enum variables is stored in the file driftEnums.h. They are used to set and check conditions regarding the type of the current run, the current particle type, or the type of trap.
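To fix ideas, driftEnums.h can be reduced to a few plain enumerations. The sketch below is illustrative: only the run type values quoted in this appendix (electronCascade, regularDriftRun, postponedDriftRun) are taken from the simulation, the particle and trap identifiers being assumed names.

    // driftEnums.h -- sketch of the shared enumerations (identifiers partly assumed)
    enum runType      { electronCascade, regularDriftRun, postponedDriftRun };
    enum particleType { ballisticElectronType, driftElectronType, driftHoleType };
    enum trapType     { shallowTrap, deepTrap };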
Creation and initialization of the DriftManager at the start of a simulation
The DriftManager is made of many commands that will be called by most of the other classes throughout the simulation. But it first needs to be created and initialized in the main at the very start of the simulation, using the command DriftManager* DriftManager::GetDriftManager(). This central command is used to call the pointer to the DriftManager from any class of the simulation. To ensure that the same object is retrieved every time and a new manager isn't created instead, we use the concept of the singleton in object-oriented programming. The principle of the singleton is to restrict the instantiation of a class to one single instance, and to provide a global access to this instance. Since the DriftManager stores a lot of data that needs to be common to the whole simulation (field, traps…), it needs to be defined as a singleton. We also follow the example of the Geant4 class RunManager, which is defined as singleton. Likewise, there needs to be a unique instance of the RunManager that is called by many other classes of Geant4. This concept works by defining the GetDriftManager() function as a static public method and defining a pointer DriftManager* fDriftManager as a static private object of the class. This means there is a unique copy of the pointer fDriftManager shared by every instance of the DriftManager class. It belongs to the class DriftManager itself, but not to a specific object of the class unlike standard variables. This pointer is created as a null pointer. The function is then defined as follows. At every call of GetDriftManager(), the function verifies if fDriftManager is a null pointer, which it is necessarily at the first call of the function. If it is a null pointer, a new DriftManager object is created and the pointer to this manager is attributed to fDriftManager. If fDriftManager is not null, which is true for every call of the function except the first one, the function simply returns fDriftManager. Since fDriftManager is a static pointer, the address stored in this pointer is also common to every instance of the class. Moreover, a static public method such as GetDriftManager() is independent of any object of the class. This means that the function can be called from anywhere using the class name, even if there are no objects of the DriftManager class. This is how we can ensure that only one object of the DriftManager class will be created once, and the same object will be returned every time we call GetDriftManager(). What's more, the constructor of the class DriftManager is only called once through GetDriftManager() and it is never called directly.
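The accessor described above can be summarized by the following minimal sketch; apart from GetDriftManager() and fDriftManager, which are named in the text, the rest is a schematic reconstruction.

    // DriftManager singleton -- minimal sketch of the global access point
    class DriftManager {
    public:
        static DriftManager* GetDriftManager();
    private:
        DriftManager();                      // never called directly
        static DriftManager* fDriftManager;  // unique instance shared by all callers
    };

    DriftManager* DriftManager::fDriftManager = nullptr;

    DriftManager* DriftManager::GetDriftManager()
    {
        // The manager is built at the very first call only; every later call
        // returns the same pointer, so the charge densities, traps and field
        // remain common to the whole simulation.
        if (fDriftManager == nullptr) { fDriftManager = new DriftManager(); }
        return fDriftManager;
    }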
The DriftManager is created at the very start of the simulation along with the DriftMessenger, before the physics are initialized and the first run starts. Right after they are created, the DriftMessenger loads the charging parameters of the model from the macro file ChargingParameters.mac. The following commands are then called once these parameters are loaded, and are used to initialize the mesh, field, and traps.
void DriftManager::CreateMesh()
Here, we create the 1D mesh in depth for the charge densities, field computation through Thomas's algorithm and field interpolation for the whole geometry. There are actually two meshes to be defined in this function. The global mesh std::vector <G4double> z_mesh is defined from the electron gun to the SiO2/Si interface. However, we only need to sample the charge densities inside the material for the computation of the mean free paths and recombination. For these functions, we use an auxiliary mesh std::vector <G4double> z_histogram that is defined with a first point at 0.5 Angstrom from the surface down to the SiO2/Si interface. This condition has been chosen because 0.5 Angstrom is roughly the thickness of an atomic layer. Therefore, it would not make physical sense to have a particle trapped between the surface and 0.5 Angstrom. This is also necessary to circumvent a Geant4 bug where some particles may be captured in infinitesimal thicknesses of 10^-19 m from the surface. Hence, when sampling the charge distributions, any particle that has a capture position between 0 and 0.5 Angstrom from the surface is attributed to the first cell of this inner mesh, starting from 0.5 Angstrom. In essence, the main mesh z_mesh is simply the inner mesh z_histogram with two points added: the surface of the SiO2 layer and the position of the electron gun. The points of z_histogram are logarithmically spaced between 0.5 Angstrom and the thickness of the sample. We also compute the interval std::vector <G4double> delta_z between the points of the main mesh, and the inter-node points std::vector <G4double> z_field on which the electric field is defined.
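As an illustration, the construction of the logarithmic inner mesh may look like the sketch below; the number of cells and the members sampleThickness and gunPosition are assumptions, only the vector names are those of the text.

    // Sketch of DriftManager::CreateMesh() (cell count and geometry members assumed)
    void DriftManager::CreateMesh()
    {
        const G4int    nCells = 100;              // assumed number of cells
        const G4double zMin   = 0.5*angstrom;     // roughly one atomic layer
        const G4double zMax   = sampleThickness;  // depth of the SiO2/Si interface

        z_histogram.clear();
        for (G4int i = 0; i <= nCells; ++i)       // logarithmic spacing in depth
            z_histogram.push_back(zMin*std::pow(zMax/zMin, G4double(i)/nCells));

        // Global mesh: the surface (depth 0) and the electron gun position are added
        // (the gun sits above the surface, i.e. at a negative depth in this convention)
        z_mesh = z_histogram;
        z_mesh.insert(z_mesh.begin(), 0.);
        z_mesh.insert(z_mesh.begin(), gunPosition);

        delta_z.clear(); z_field.clear();
        for (std::size_t i = 0; i + 1 < z_mesh.size(); ++i) {
            delta_z.push_back(z_mesh[i+1] - z_mesh[i]);          // cell widths
            z_field.push_back(0.5*(z_mesh[i] + z_mesh[i+1]));    // inter-node points
        }
    }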
void DriftManager::InitializeThomasCoefficients()
In this function, we initialize the coefficients for Thomas's algorithm. This is done by following the initial conditions and the expressions for the coefficients a, b and c detailed in 5.2.2, with the latter depending only on the permittivity and the spacing of the mesh.
void DriftManager::PreFillDeepTraps() At the start of the simulation, the sample is normally free of charges. In Chapter 6 however, we will study what happens when we start a TEEY measurement on a sample that was already filled by deeply trapped charges that have not been able to escape. This function can be called to prefill the deep trap densities used in the simulation following a given charge profile. Hence, if this function is called, the sample will be already filled with deeply trapped holes at the start of the simulation.
void DriftManager::CreateTrapLevels()
In this method, the exponential distribution of shallow hole traps is created, following Figure 5-3. Four quantities are generated in this method and stored in vectors of doubles. The depths of the trap levels are stored in std::vector <G4double> trapEnergyLevels; here we use 20 levels between 0 and 0.4 eV that are uniformly spaced. The probability density for a given trap level E_i is obtained from the exponential law as
P(E_i) = P(E_i ≤ E < E_i + dE) = (dE / E_c) exp(-E_i / E_c)     Equation A-1
Where dE = E_(i+1) - E_i, and it is stored in std::vector <G4double> probabilityDensity. The density of states for a given trap level from Equation 5-28 is then obtained by multiplying Equation A-1 by the total density of traps, and is stored in std::vector <G4double> trapDensityOfStates. Finally, we also compute the detrapping probability for all levels as the product between the detrapping frequency of a level (Equation 5-31) and the simulation time step. The probabilities are stored in std::vector <G4double> detrappingProbability.
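Put into code, CreateTrapLevels() essentially reduces to the loop below. Ec, the total trap density, the attempt-to-escape frequency nu0 and the time step are parameters of the model; the thermally activated form written for the detrapping frequency is only an assumption standing in for Equation 5-31.

    // Sketch of DriftManager::CreateTrapLevels() (parameter names partly assumed)
    void DriftManager::CreateTrapLevels()
    {
        const G4int    nLevels = 20;
        const G4double Emax    = 0.4*eV;
        const G4double dE      = Emax/nLevels;

        for (G4int i = 0; i < nLevels; ++i) {
            const G4double Ei = i*dE;
            trapEnergyLevels.push_back(Ei);

            // Equation A-1: probability density of the exponential distribution
            const G4double Pi = (dE/Ec)*std::exp(-Ei/Ec);
            probabilityDensity.push_back(Pi);

            // Density of states of the level (Equation 5-28)
            trapDensityOfStates.push_back(totalTrapDensity*Pi);

            // Detrapping probability = frequency (assumed Arrhenius form) x time step
            const G4double Wi = nu0*std::exp(-Ei/(k_Boltzmann*temperature));
            detrappingProbability.push_back(Wi*timeStep);
        }
    }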
Simulation procedure of the electron cascade
Now that the charging parameters are initialized, we can start the first phase of the simulation defined in section 5.2.1, by sending incident electrons using the macro commands. In an external macro file, the command /Charging/StartElectronCascadeRun is called, which enables the command void DriftManager::StartElectronCascadeRun(). The current type of run currentRunType is defined in the DriftManager through the enum runType. It can take the values electronCascade (sending incident electrons on the target from the electron gun), regularDriftRun (simulating the drift of new and detrapped holes and electrons), or postponedDriftRun (simulating the drift of the holes and electrons from the previous simulation step, whose drifting time exceeded the simulation time step). When calling DriftManager::StartElectronCascadeRun(), the method first sets the run type as an electronCascade run. It then calls the G4RunManager of Geant4, and sends the number of incident electrons that was defined in the ChargingParameters file through the G4RunManager.
The generation of incident particles in Geant4 actually goes through the PrimaryGeneratorAction (PGA) class, a fundamental class of Geant4 that can be edited by the user. As a result, we have to introduce the interactions between the DriftManager and the PGA. This class allows the user to define the type of particles to be generated, along with their position, direction, and energy, through the command void PrimaryGeneratorAction::GeneratePrimaries(G4Event* anEvent), which is called at the start of each event. Consequently, this method of the PGA will retrieve data from the DriftManager, which will tell it to generate ballistic or drift particles depending on the phase of the simulation. At the start of each event, the method runType DriftManager::GetCurrentRunType() from the drift manager is called by the PGA to get the current run type. There is a switch in GeneratePrimaries depending on the run type returned that will define the particles to be generated. Since we are in an electronCascade run here, the PGA will then send incident electrons from the electron gun using the GeneralParticleSource (GPS) source type of Geant4. Through macro commands set in an external file, the GPS has been placed at the height of the electron gun and configured to send incident electrons of a given energy on the sample at normal incidence. Consequently, no further settings need to be set internally through the PGA, and we can simply generate an event with the standard command associated to the GPS.
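The dispatch performed in GeneratePrimaries can be condensed as follows; fGPS and the helper GenerateDriftPrimary are illustrative names, the drift branch being detailed in the last section of this appendix.

    // Sketch of the run-type switch in PrimaryGeneratorAction::GeneratePrimaries()
    void PrimaryGeneratorAction::GeneratePrimaries(G4Event* anEvent)
    {
        DriftManager* dm = DriftManager::GetDriftManager();

        switch (dm->GetCurrentRunType()) {
        case electronCascade:
            // Incident electrons: the GPS is already configured by macro commands
            // (position of the gun, energy, normal incidence).
            fGPS->GeneratePrimaryVertex(anEvent);
            break;
        case regularDriftRun:
        case postponedDriftRun:
            // Drift electrons and holes are generated from the stacks of the manager.
            GenerateDriftPrimary(anEvent, dm);   // illustrative helper
            break;
        }
    }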
The particle transportation tools of Geant4 (G4Transportation) are tasked with interpolating the trajectory of the ballistic electrons as they are subjected to the electric field generated by the density of charges. As a result, they require the value of the electric field at several points of the particle's trajectory to correctly interpolate it. In this case, the transportation tools of Geant4 will call the method void EMField::GetFieldValue(const G4double point[4], G4double *field) const of the EMField object. In this function, the Geant4 kernel passes a given point (x,y,z,t) to the EmField object, which is supposed to return the value of the electric field at this point.
The value of the field in the x and y directions 𝐹(𝑥) and 𝐹(𝑦) are set to 0, and the field is also static thus 𝐹(𝑡) = 0. The value of the field in depth 𝐹(𝑧) is already saved in the DriftManager in electricFieldOnNodes, which stores the values of the electric field on the nodes of the mesh.
To return a value to the Geant4 kernel, EmField calls the method G4double DriftManager::GetFieldValue(G4double depthInMeters, G4double x, G4double y) in the drift manager. Three cases are set depending on the depth of the particle. If the input depth z is located within the dielectric layer, the function first finds the nodes z_i and z_(i+1) that verify z_i ≤ z < z_(i+1) using a dichotomy method. The values of the field at z_i and z_(i+1) are then used in a linear interpolation to return the field at z. If z is located in vacuum between the electron gun and the surface of the dielectric layer, the electric field returned is simply the surface potential divided by the distance between the gun and the surface, as the gun is set to the ground. For the capture of ballistic electrons, the inverse mean free path obtained from the trapped charge densities is then used in Equation 5-41 to get the final capture mean free path depending on the electron energy.
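A minimal sketch of the field interpolation performed in GetFieldValue is given below for the case of a depth inside the dielectric; the vacuum and out-of-range cases, handled as described above, are reduced to a single simplified return.

    // Sketch of the linear interpolation of F(z) inside the SiO2 layer
    G4double DriftManager::GetFieldValue(G4double z, G4double /*x*/, G4double /*y*/)
    {
        // Dichotomy: find the nodes z_i <= z < z_i+1 (std::upper_bound from <algorithm>)
        auto it = std::upper_bound(z_field.begin(), z_field.end(), z);
        if (it == z_field.begin() || it == z_field.end())
            return 0.;   // simplified: vacuum and substrate cases handled as in the text

        const std::size_t i = std::size_t(it - z_field.begin()) - 1;
        const G4double t = (z - z_field[i])/(z_field[i+1] - z_field[i]);
        return electricFieldOnNodes[i]
             + t*(electricFieldOnNodes[i+1] - electricFieldOnNodes[i]);
    }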
In G4ElectronCapture::PostStepDoIt, another function of the drift manager is called to handle the trapping of the ballistic electron: void DriftManager::ComputeRecombinationForDriftParticle(const G4Track& aTrack, particleType part, trapType t). Similar to the computation of the mean free path, the type of particle and the type of trap need to be specified. This function retrieves the trapped charge densities in the cell of the mesh, and determines if the ballistic electron recombines with a trapped hole. The charge densities are computed in the same way as in the computation of the mean free path detailed just above. For shallow traps, a level in the energy distribution is first selected using Equation 5-43 to draw a random level from the exponential law, which is then attributed to one of the 20 discrete levels of the distribution following the procedure of 5.5. The density of holes of this level is then used in Equation 5-42, which gives the probability of recombination with a random sampling. If there is no recombination, the ballistic electron is thermalized and will be able to do an additional step as a drift electron (it joins the population of new electrons from the cascade). However, if the electron recombines, its position will not be saved and we also remove a hole from the trapped hole density. The electron will be automatically thermalized if the density of trapped holes is empty, since recombination is impossible in this case. In any event, the transport of the electron is stopped.
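The decision taken in this method can be reduced to the sampling below; GetMeshCell, the hole density lookup and the probability of Equation 5-42 are hidden behind illustrative helpers, so this is only a schematic view.

    // Sketch of the trapping/recombination decision for a captured ballistic electron.
    // For shallow traps, a level of the exponential distribution is drawn first (Eq. 5-43).
    const G4int    cell  = GetMeshCell(depth);               // cell containing the electron
    const G4double nHole = TrappedHoleDensity(cell);         // illustrative lookup

    if (nHole > 0. && G4UniformRand() < RecombinationProbability(nHole)) {
        RemoveTrappedHole(cell);        // electron and hole annihilate, nothing is stored
    } else {
        ThermalizeElectron(aTrack);     // joins the new drift electrons of the cascade
    }
    // In both cases, the ballistic transport of the electron is stopped here.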
At the end of the run, the RunAction user class of Geant4 retrieves the TEEY measured by the sensitive detectors, and feeds it to the DriftManager with the command void DriftManager::AddSEYtoAverage(G4double s). Indeed, the TEEY returned by the simulation is averaged over several runs, to reproduce the experimental configuration where we get a single integrated point per pulse. The averaging procedure is done in the DriftManager after a given number of runs, using the values fed after each ballistic run by the RunAction class.
Computation of the electric field
Once the transport of the ballistic electrons is finished, the second step of the simulation is the computation of the field, handled by the function void DriftManager::ComputeField(). The first step of this computation is the sampling of the charge densities in depth, which will give us the d_i coefficients of Thomas's algorithm. This step is handled by calling the method void DriftManager::SampleChargeDistribution(). To do so, we must retrieve the total number of charges in each cell of the mesh. The trapped holes and electrons are already saved as a number of trapped charges per cell in the vectors std::vector< G4double > electronTrapped, std::vector< G4double > electronDeepTrapped and std::vector< G4double > holeDeepTrapped, which are shown in Figure A-2 (a). Consequently, we can simply take the number of particles saved in a given cell, sum the number of particles in deep and shallow traps, and save the total number of trapped particles per cell in the temporary vectors std::vector< G4double > rhoElectrons and std::vector< G4double > rhoHoles. For holes trapped in shallow traps, the numbers of trapped charges are stored per energy level and per depth, so we have a 2D table in the form of std::vector< std::vector< G4double >> holeTrapped, which is shown in Figure A-2 (b). There is simply an additional step which consists in summing the number of holes trapped in every energy level for a given cell of the mesh, to get the total number of trapped holes in this cell, which is then added to rhoHoles. We now know the total number of trapped charges for every cell, but the new holes and thermalized electrons from the electron cascade we have just simulated also need to be taken into account for the computation of the field. If some charges were postponed from the previous drift step, they also need to be considered when sampling the charge densities. Hence, the total number N of electrons or holes in a cell should be given by

N = N_new + N_shallow trapped + N_deep trapped + N_postponed     Equation A-2
However, for both new and postponed particles, the vectors std::vector<G4ThreeVector> holeNew, std::vector<G4ThreeVector> electronNew, std::vector<G4ThreeVector> holePostponed and std::vector<G4ThreeVector> electronPostponed only contain the current coordinates (x,y,z) of the particles, which were recorded when they were thermalized (for new electrons), created (for new holes), or postponed. For these particles, we need to sample the vectors and attribute each stored position to its corresponding cell of the mesh. Here, the method std::vector<G4double> Histogram(std::vector<G4double> data, std::vector<G4double> bins) is called for each stack of positions. In this function, with the depth z of a given new or postponed particle, we find the corresponding cell of the mesh z_i using a dichotomy method, as for every other case where we need to find the correct cell with the function GetMeshCell. When the cell of a particle is found, the number of electrons or holes in the cell saved in std::vector< G4double > rhoElectrons or std::vector< G4double > rhoHoles is increased by 1. When the new and postponed charge densities have been sampled, the total number of charges in a cell n_i is finally obtained. We can then, for each cell of the mesh, obtain the volumetric charge density ρ_i of Equation 5-9 from the numbers of electrons (n_electrons)_i and holes (n_holes)_i in the cell, where the charge bias factor β appears to get the real number of trapped particles.
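The binning of the new and postponed positions is a plain histogramming step; a sketch of the Histogram method is shown below, with the dichotomy search delegated to GetMeshCell as in the rest of the code.

    // Sketch of DriftManager::Histogram(): counts the stored depths per cell of 'bins'
    std::vector<G4double> DriftManager::Histogram(std::vector<G4double> data,
                                                  std::vector<G4double> bins)
    {
        std::vector<G4double> counts(bins.size() - 1, 0.);
        for (const G4double z : data) {
            const G4int i = GetMeshCell(z);                  // cell such that z_i <= z < z_i+1
            if (i >= 0 && i < G4int(counts.size())) counts[i] += 1.;
        }
        return counts;
    }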
After the charge densities have been computed, the rest of the method void DriftManager::ComputeField() is used to obtain the electric field on the nodes of the mesh. Now that all the coefficients 𝑑 𝑖 are known, the computation procedure of the electric field is rather straightforward and follows the steps of Thomas's algorithm detailed in 5.2.2.
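For reference, the tridiagonal solve used at this step has the generic form below; the coefficients a, b, c and the right-hand side d are those of the discretization of section 5.2.2, written here in a generic way only.

    // Generic Thomas algorithm: solves a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    std::vector<G4double> SolveTridiagonal(const std::vector<G4double>& a,
                                           const std::vector<G4double>& b,
                                           const std::vector<G4double>& c,
                                           std::vector<G4double> d)
    {
        const std::size_t n = d.size();
        std::vector<G4double> cp(n, 0.), x(n, 0.);

        // Forward elimination
        cp[0] = c[0]/b[0];
        d[0]  = d[0]/b[0];
        for (std::size_t i = 1; i < n; ++i) {
            const G4double m = b[i] - a[i]*cp[i-1];
            cp[i] = c[i]/m;
            d[i]  = (d[i] - a[i]*d[i-1])/m;
        }
        // Backward substitution
        x[n-1] = d[n-1];
        for (std::size_t i = n-1; i-- > 0; )
            x[i] = d[i] - cp[i]*x[i+1];
        return x;
    }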
Computation of the detrapping
The third phase of the simulation is the computation of the detrapping of trapped holes and electrons, which will be transported along with the new drift particles in the fourth and last phase. This computation is handled by the function void DriftManager::ComputeDetrapping(). Since the detrapping frequency W can vary depending on the cell of the spatial mesh (especially for deep traps where W depends on the field F(z)), the number of detrapped particles is computed separately for each cell of the mesh. The principle used for the detrapping of any population of trapped particles is as follows. First, the detrapping frequency is computed from the formulae detailed in section 5.4 for shallow and deep traps. The frequency W_i obtained for the cell is then multiplied by the simulation time step τ, which gives the detrapping probability P_i(τ) = W_i τ. This probability is applied on average to every charge trapped in the cell, which gives the number of detrapped charges in a given cell as n_detrapped(τ) = P_i(τ) n_i. If P(τ) ≥ 1 for a given cell, all particles are detrapped. Once the number of detrapped particles is known, the particles are added to the stacks std::vector<G4ThreeVector> electronDetrapped or std::vector<G4ThreeVector> holeDetrapped, and removed from the trapped charge densities. These stacks contain the coordinates of the detrapped particles, from which they will be generated in the drift transport phase. However, as mentioned in 5.2.1, we only store the number of trapped particles in a given cell and not their exact position. Consequently, the position of each detrapped particle has to be randomly generated according to the coordinates of the cell. For this, the function G4double DriftManager::RandomizeDepthforDetrapping(G4double cellID) is called. Since we know that the particle has been released from the cell C_cellID = [z_cellID ; z_cellID+1[, it is thus randomly generated at a depth z = z_cellID + R (z_cellID+1 - z_cellID), where R is a random number between 0 and 1.
The detrapping probability is calculated separately for electrons in shallow traps, holes in shallow traps, electrons in deep traps and holes in deep traps. For shallow traps, the detrapping frequency from Equation 5-31 only depends on the depth of the trap and the temperature, so it is invariant throughout the simulation and does not need to be computed after each electron cascade. For shallow hole traps, this probability is stored in the vector detrappingProbability for each of the 20 energy levels of the distribution. Hence, for each level, the number of detrapped particles is calculated separately. For deep traps however, the computation needs to be made at each simulation step, since the PF barrier lowering depends on the electric field, which evolves with time and space. Therefore, after each simulation step, the method G4double DriftManager::GetDetrappingProbabilityForDeepTrap (G4double cellID) is called in ComputeDetrapping to compute the detrapping probability at each node of the mesh, depending on the PF and PAT effects.
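Per cell, the detrapping therefore reduces to the following pattern, written here for a single trapped population; DetrappingFrequency, trappedDensity and detrappedStack are illustrative names, and the lateral coordinates of the released charges are omitted for brevity.

    // Sketch of the detrapping of one trapped population in ComputeDetrapping()
    for (std::size_t i = 0; i < trappedDensity.size(); ++i) {
        const G4double W = DetrappingFrequency(i);      // Eq. 5-31, or PF/PAT enhanced for deep traps
        const G4double P = std::min(W*timeStep, 1.);    // detrapping probability P(tau)
        const G4int nOut = G4int(P*trappedDensity[i]);  // average number of released charges

        for (G4int k = 0; k < nOut; ++k) {
            const G4double z = RandomizeDepthforDetrapping(i);   // random depth inside the cell
            detrappedStack.push_back(G4ThreeVector(0., 0., z));
        }
        trappedDensity[i] -= nOut;
    }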
Simulation of the transport of drift electrons and holes
After the detrapping computation is completed, we have four stacks of drift particles that need to be transported until they get trapped: the holes and electrons from the cascade in holeNew and electronNew, and the detrapped electrons and holes in electronDetrapped and holeDetrapped. If some particles from the previous drift phase were postponed, we also have two additional stacks as holePostponed and electronPostponed. The master command void DriftManager::GenerateDriftParticles() is called through the external macro command file with the command /Charging/StartDriftRun, which immediately follows the command /Charging/StartElectronCascadeRun we have used to send the ballistic electrons. The commands for the computation of the field and the detrapping are actually called successively in GenerateDriftParticles. This function does the following actions:
1. The electric field is computed by calling ComputeField(), which itself calls SampleChargeDistribution() as we have seen in 3.
2. If 50 µs have elapsed, the analysis manager is called to fill the .csv output file with the TEEY averaged over 50 µs and the current value of the surface potential. Another sampling class created during this thesis work is called at this step to save the charge density profiles in depth after every 50 µs.
3. The overlap factor is refreshed, knowing the number of electrons that were sent from the start of the simulation.
4. The detrapping of the charges is computed by calling ComputeDetrapping(), following the procedure in 4.
5. If there are postponed particles, the current run type is set as a postponedDriftRun. The transport of postponed electrons is simulated, then the transport of postponed holes is simulated.
6. The current run type is set as a regularDriftRun.
7. The transport of the new drift electrons from the latest electron cascade is simulated.
8. The transport of the new holes from the latest electron cascade is simulated.
9. The transport of the detrapped drift electrons is simulated.
10. The transport of the detrapped holes is simulated.
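Put together, the body of GenerateDriftParticles() follows the order of this list; in the condensed sketch below, ElapsedTimeReached, WriteOutputs, RefreshOverlapFactor and TransportStack are illustrative helpers standing for the operations described above.

    // Condensed sketch of DriftManager::GenerateDriftParticles()
    void DriftManager::GenerateDriftParticles()
    {
        ComputeField();                              // 1. field from the sampled charge densities
        if (ElapsedTimeReached(50.*microsecond))
            WriteOutputs();                          // 2. TEEY, surface potential, charge profiles
        RefreshOverlapFactor();                      // 3. depends on the electrons sent so far
        ComputeDetrapping();                         // 4. fills the detrapped stacks

        TransportStack(electronPostponed, postponedDriftRun);   // 5. postponed particles
        TransportStack(holePostponed,     postponedDriftRun);

        TransportStack(electronNew,       regularDriftRun);     // 6-8. new drift particles
        TransportStack(holeNew,           regularDriftRun);
        TransportStack(electronDetrapped, regularDriftRun);     // 9-10. detrapped particles
        TransportStack(holeDetrapped,     regularDriftRun);
    }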
Figure A-3 shows the interactions between the DriftManager and the other classes of Geant4 during a drift particle cascade run. At the start of each event, the PGA gets the current type of run. If it is a regular drift run (for new and detrapped particles), the PGA calls G4ThreeVector DriftManager::GetEventFromDriftParticleStack() to get the coordinates of origin of the particle. The direction of the particle's momentum is then randomly generated in an isotropic distribution, and the particle's energy is set to 3/2 kT. The definition of the particle is also retrieved from the drift manager with the command GetDriftParticleDefinitionForRun(), so that the PGA knows whether to generate a drift electron or a hole. If it is a postponed run, the PGA retrieves the particle's position and momentum direction, which were saved at the end of the run in the drift manager. As for the transport of ballistic electrons, the field handling classes of Geant4 interact with the drift manager to get the value of the electric field at a given position. Through the function GetFieldValue of the drift manager, the field is interpolated and returned to the EmField object, which then transmits the value to the transportation tools.
Three processes are involved during the transport of the drift particles. In the physics list, the processes need to be defined separately for each type of particle. Consequently, the three physical processes we will present are each created in a copy for drift holes and another copy for drift electrons. The main source of interactions is the trapping process. It behaves in the same way as the G4ElectronCapture process for ballistic electrons, in that two objects are created and tasked with handling the trapping in deep traps or shallow traps. Using DriftManager::GetTrapMFPForDriftParticle(), the trapping process retrieves the capture MFP from the drift manager, depending on the trapping type and the particle that is tracked. Indeed, this process follows the procedure from 5.5, therefore we need to use the density of trapped electrons when transporting a drift hole and the density of trapped holes when transporting a drift electron. Contrary to ballistic electrons, the capture mean free path of drift particles is independent of the energy, so the value returned by the drift manager is directly used in Geant4 for the computation of the interaction length. If the trapping process is selected for the interaction, the PostStepDoIt method follows the exact same procedure as in the G4ElectronCapture class. The method DriftManager::ComputeRecombinationForDriftParticle() is called, which follows the same behavior as before. It finds which cell of the mesh the current particle is located in, computes the trapped charge and free trap densities, and does a random sampling to determine if the particle is trapped in a free trap or if it recombines. For shallow traps, a first random sampling is made to find a trap energy level in the exponential distribution of shallow hole traps with the function G4double DriftManager::GetEnergyLevel(). If a free trap captures the drift particle, the number of trapped charges in the cell (or the level for shallow hole traps) is increased. If the particle recombines, a particle of the opposite sign is removed from the number of trapped particles.
The process DriftTimeStepMax stops the particles if their drifting time exceeds the simulation step time 𝜏. However, the Geant4 kernel does not work in elapsed time between two interactions but in length traveled between two interactions. Hence, we need to convert our time limit constraint into a maximal distance constraint. Since we can retrieve the velocity 𝑣 of the particle when the process is called, we can easily get the maximal distance travelled by the particle during the simulation time step using 𝑣 = 𝑙/𝜏 . This distance will be the constraint used to stop the particles. This physical process is different from all the other processes we have seen so far, because it does not return an interaction mean free path but the physical interaction length itself. As a reminder, the physical interaction length 𝑙 is the true distance traveled by the particle between two interactions. It is determined from the MFP 𝜆 using the standard Monte-Carlo random sampling procedure with the equation 𝑙(𝐸) = -𝜆(𝐸) ln(𝑅) where R is a random number drawn from 0 to 1. Since we need to stop the particles if they exceed the maximum distance, the process DriftTimeStepMax bypasses the Monte-Carlo random sampling phase and always returns the same interaction length as 𝑙 = 𝑣𝜏. If the capture physical interaction length is greater than the limit length 𝑙, the DriftTimeStepMax will be selected for the interaction and its method PostStepDoIt() is activated. In this case, the particle is stopped, and its position and momentum are saved in the electronPostponed or holePostponed stacks of the drift manager, using the commands DriftManager::AddPostponedElectron() or DriftManager::AddPostponedHole(). The postponed particles will be generated in the next drift transport phase, after a new electron cascade has been simulated.
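In code, this amounts to bypassing the random sampling and always proposing the fixed length vτ; the simplified sketch below ignores part of the Geant4 process boilerplate, and fSimulationTimeStep as well as the AddPostponedParticle signature are illustrative.

    // Simplified sketch of the DriftTimeStepMax process
    G4double DriftTimeStepMax::PostStepGetPhysicalInteractionLength(
        const G4Track& track, G4double, G4ForceCondition* condition)
    {
        *condition = NotForced;
        return track.GetVelocity()*fSimulationTimeStep;   // maximal drift distance l = v*tau
    }

    G4VParticleChange* DriftTimeStepMax::PostStepDoIt(const G4Track& track, const G4Step&)
    {
        aParticleChange.Initialize(track);
        DriftManager* dm = DriftManager::GetDriftManager();
        // The particle is postponed: its position and momentum are saved, and its
        // transport will resume after the next electron cascade.
        dm->AddPostponedParticle(track);                  // illustrative signature
        aParticleChange.ProposeTrackStatus(fStopAndKill);
        return &aParticleChange;
    }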
The final process that can act on the drift particles is the G4MicroElecSurface class, which handles the passage of the particles through any interface of the simulation. For ballistic electrons, it first computes the transmission probability. If the electron is transmitted, its transmission angle is computed, along with the energy gained or lost by the electron when going through the interface. Drift particles, however, are not able to go through every interface. At the surface of the SiO2 layer, the surface potential barrier prevents the electrons from escaping the material, unless they get accelerated from 3/2 𝑘𝑇 to at least 1 eV without getting trapped. This would require very high electric fields on the order of a few MV/cm, which are not reached in the simulation. Subsequently, any drift particle that reaches the surface of the insulating layer is always reflected back into the material. At the SiO2/Si interface however, the electrons and holes can flow though the Schottky barrier and into the Si layer due to the narrower band gap in silicon. If a drift particle reaches this interface, it is assumed to have escaped in the silicon layer and is removed from the simulation.
When the transport of all stacks of drift particles is finished, we can go to the first phase of the simulation in 2 and simulate a new electron cascade. Nevertheless, since we want to simulate a pulsed measurement procedure and not a continuous flow of incident electrons, there is a relaxation phase between two pulses that also needs to be simulated. Indeed, even in the absence of incident electrons, the drift particles can move in the material and this transport must be simulated. In this phase, no incident electrons are sent, and we only use the method DriftManager::GenerateDriftParticles() through the macro command /Charging/StartDriftRun. This way, the field is still refreshed after each simulation step, and we can simulate the detrapping, transport and trapping of the drift particles even when no incident electrons are sent.
Table of figures
Introduction .............................................................................................................................................. Chapter 1: Context and aim of the study ...................................................................................... 1.1
Figure 1 - 1 : 1 -
111 Figure 1-1: Integral fluxes of protons and electrons received by a spacecraft in LEO and GEO orbits, from the radiation belt model GREEN
Figure 1 - 2 :
12 Figure 1-2: Illustration of an electron avalanche between two parallel plates
Figure 1 - 3 :
13 Figure 1-3: Illustration of a typical TEEY curve
Figure 1 - 4 :
14 Figure 1-4: Illustration of a typical energy distribution of emitted electrons
Figure 1 - 5 :
15 Figure 1-5: TEEY measurements for an etched sample of Cu under various angles of incidence. data from [29]
Figure 1 - 6 :
16 Figure 1-6: TEEY measurements of a copper sample, as received and after decontamination by baking and erosion. Data from [30]
Figure 1 - 7 :
17 Figure 1-7: Comparison of the TEEY of silver on a flat surface and with roughness patterns with height = 100µm and base length L=80µm. Simulated results from [40]
1 2𝑚𝑣 2
12 is a helix with a gyration radius of
Figure 1 - 8 :
18 Figure 1-8: Evolution of the global charge of the insulator according to the TEEY curve
Figure 1 - 9 :
19 Figure 1-9: Charge buildup and evolution of the TEEY for incident electrons below the first crossover point
Figure 1 - 10 :
110 Figure 1-10: Charge buildup and evolution of the TEEY for incident electrons between the two crossover points
Figure 1 - 11 :
111 Figure 1-11: Charge buildup and evolution of the TEEY for incident electrons after the second crossover point
Figure 1 - 12 :
112 Figure 1-12: Illustration of atypical TEEY curves observed on SiO2 thin films
Figure 2 - 1 :
21 Figure 2-1: Illustration of the diffusion of particles by a scattering center
Figure 2 - 2 :
22 Figure 2-2: Illustration of the electron-matter interactions that are common to metals, semiconductors and insulators
Figure 2 - 3 :
23 Figure 2-3: Illustration of the electronic structure of the isolated atom and of a solid
Figure 2 - 4 :
24 Figure 2-4: Illustration of the band structure depending on material type
Figure 2 - 6 :
26 Figure 2-6: Illustration of the electron and charge carrier interactions in insulators
Figure 2 - 7 :
27 Figure 2-7: Dispersion relations of optical and acoustic phonons Phonons are split into two first categories, acoustic and optical phonons. This separation is made according to the shape of the dispersion relation branches 𝜔(𝑘 ⃗ ) of their associated vibration mode, which are shown in Figure 2-7. If the dispersion relation is not equal to zero for a null wave vector (𝜔(𝑘 ⃗ = 0 ⃗ ) ≠ 0), the vibration mode can be excited by a particle having a null wave
Figure 2 - 8 :
28 Figure 2-8: Definition of the drift and thermal velocities from the distributions of particle positions, in the case of gaussian transport
Figure 2 - 9 :
29 Figure 2-9: Shallow traps and associated transport mechanisms
Figure 2 - 10 :
210 Figure 2-10: Deep traps in the band gap of an insulator
Figure 2 - 11 :
211 Figure 2-11: Detrapping enhancements factors
Figure 2 - 13 :Figure 2 - 14 :
213214 Figure 2-13: Definition of the true range and projected range from an electron's trajectory
Figure 2 - 15 :
215 Figure 2-15: Monte-Carlo and analytical computations of the dose deposed in Al by low energy electrons
Figure 3 - 1 :
31 Figure 3-1: Comparison of molecular and averaged elastic Total Cross Sections for SiO2 3.2.2 Modeling of the inelastic interaction 3.2.2.1 Modeling of the energy losses using the dielectric function theory
Figure 3 - 2 :
32 Figure 3-2: Examples of OELFs from Sun et al.'s database
Equation 3- 10 With
10 𝑢 ⃗ = 𝜔 𝑞 𝜐 𝐹 ⁄ , 𝑧 = 𝑞 2𝑞 𝐹 , ⁄ the density parameter 𝜒 2 = 𝑒² (𝜋ℏ𝜐 𝐹 ), ⁄ the Fermi velocity of the target valence electrons 𝜐 𝐹 , 𝑞 𝐹 = 𝑚 𝑒 𝜐 𝐹 ℏ ⁄ , and the functions
Figure 3-3: Fitted OELFs for a metal (Ag), a semi-conductor (Si), and an insulator (SiO2) supported by MicroElec. Each colored smooth curve is attributed to a core shell/plasmon/interband transition. The reference OELFs are taken from the database of Sun et al.
The possible energy transfers Q are contained in an interval u = [u_a; u_b].
Figure 3-4: Illustration of the rejection sampling procedure
First, two random numbers R1 and R2 are drawn between 0 and 1. A trial value for the energy transfer Q_trial in the interval u is then computed from the random number R1 via the relation Q_trial = u_a + (u_b − u_a)·R1 (Equation 3-20).
The cumulated probability of an energy transfer Q made by an incident electron of energy E is given by the following normalized integral. With this direct sampling procedure, we can draw a random number R between 0 and 1. The reverse function of (dσ/dℏω)_int then gives us a value for the energy transfer Q in a straightforward manner, as shown in Figure 3-5. While we still need to interpolate the tables of cumulated cross sections to get a single value of Q, this procedure is much faster than rejection sampling.
Figure 3-5: Illustration of direct sampling through cumulated DXS
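To make the two sampling strategies concrete, the sketch below contrasts rejection sampling with the direct (inverse-transform) sampling of an energy transfer from a tabulated differential cross section. It is a minimal, self-contained illustration and not the MicroElec implementation: the TabulatedDXS structure, the uniform grid and the helper names are assumptions made for the example.

    #include <algorithm>
    #include <random>
    #include <vector>

    // Differential cross section dS/dQ tabulated on a uniform grid of energy
    // transfers Q in [ua, ub]. Grid and values are illustrative, not the MicroElec tables.
    struct TabulatedDXS {
        double ua, ub;               // bounds of the allowed energy transfers
        std::vector<double> dxs;     // dS/dQ sampled on a uniform grid
        double value(double Q) const {                       // linear interpolation
            double t = (Q - ua) / (ub - ua) * (dxs.size() - 1);
            std::size_t i = static_cast<std::size_t>(t);
            if (i >= dxs.size() - 1) return dxs.back();
            return dxs[i] + (t - i) * (dxs[i + 1] - dxs[i]);
        }
    };

    // Rejection sampling: Q_trial = ua + (ub - ua) R1, accepted if R2 * max(dS/dQ)
    // falls below dS/dQ(Q_trial); otherwise a new pair (R1, R2) is drawn.
    double sampleByRejection(const TabulatedDXS& t, std::mt19937& rng) {
        std::uniform_real_distribution<double> U(0.0, 1.0);
        const double dxsMax = *std::max_element(t.dxs.begin(), t.dxs.end());
        while (true) {
            double Qtrial = t.ua + (t.ub - t.ua) * U(rng);
            if (U(rng) * dxsMax <= t.value(Qtrial)) return Qtrial;
        }
    }

    // Direct (inverse-transform) sampling: the cumulated cross section is built,
    // normalized to 1, and inverted with a single random number R.
    double sampleByInversion(const TabulatedDXS& t, std::mt19937& rng) {
        std::vector<double> cdf(t.dxs.size(), 0.0);
        const double dQ = (t.ub - t.ua) / (t.dxs.size() - 1);
        for (std::size_t i = 1; i < t.dxs.size(); ++i)   // trapezoidal integration
            cdf[i] = cdf[i - 1] + 0.5 * (t.dxs[i - 1] + t.dxs[i]) * dQ;
        for (double& c : cdf) c /= cdf.back();           // normalize to [0, 1]
        std::uniform_real_distribution<double> U(0.0, 1.0);
        const double R = U(rng);
        std::size_t i = std::lower_bound(cdf.begin(), cdf.end(), R) - cdf.begin();
        if (i == 0) i = 1;
        const double den = cdf[i] - cdf[i - 1];
        const double frac = (den > 0.0) ? (R - cdf[i - 1]) / den : 0.0;
        return t.ua + (i - 1 + frac) * dQ;               // Q interpolated inside the bin
    }

In practice the cumulated table would be built once per incident energy and reused, which is where the speed gain over rejection sampling comes from.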
Figure 3-6: Energy changes for an electron going through the surface barrier in the case of a metal or an insulator. The energy reference in the material used in MicroElec is highlighted in red.
the post-transmission refraction angle θ_E = asin(√(E/(E + E_th)) · sin θ), with a = 0.5 × 10⁻¹⁰ m, and E_th [eV] = W or χ.
Figure 3-7: Example of the modeling of a multilayer structure (SiO2 on Si) in MicroElec
Figure 3-8: Material structure files of silver (a) and SiO2 (b)
Figure 3-9: Calculated Mean Free Paths for the simulated processes in SiO2, including interactions with acoustic phonons. Interactions with Longitudinal-Optical (LO) phonons are simulated for the two dominant vibration modes (63 and 153 meV).
Figure 3-10: Emission and absorption rates of LO phonons in SiO2
… ρ and the speed of sound in the material C_s. N_BZ = 1/(exp(ℏω_BZ/k_B·T) − 1) and ω_BZ = C_s·k_BZ are respectively the population and energy of phonons at the Brillouin zone edge [62]. A linear fit is applied to connect both expressions between E_BZ/4 and E_BZ, where E_BZ = ℏ²k_BZ²/(2m₀) from the dispersion relationship of Equation 3-30.
… Figure 3-11. They are compared with previous versions of MicroElec, namely Geant4 9.6 for electrons and Geant4 10.0 for protons. The effect of the relativistic corrections introduced in [16] is visible on the stopping powers, with a better agreement observed at high electron energy with the ESTAR data from NIST [80]. In the low energy range, the Mermin model (this work) leads to a better agreement with the experimental data of Luo et al. [81] for the stopping powers of electrons (Figure 3-11(a)) than the extended Drude model (MuElec), thanks to its better estimation of the plasmon lifetime. This phenomenon is even more noticeable on the stopping powers of protons (Figure 3-11(b)), where the Drude model fails to reproduce the SRIM [42] data below 30 keV. The stopping powers computed for 18Ar ions (Figure 3-11(c)) with the Mermin approach are also compared with data from SRIM and MSTAR [82] in Figure 3-11. The improvement of the Kaneko (B-K) [47] approach over the Barkas [83] formula for Z_eff is clearly visible below 30 keV/nucleon.
Figure 3-11: Stopping powers of electrons (a), protons (b) and 18Ar ions (c) in Si
The validation of the new version of MicroElec for all other materials is shown for the stopping powers of electrons in Figure 3-12. They are compared with data from ESTAR [80], Shinotsuka et al. [84], Ashley et al. [85], de Vera et al. [30], and experimental data from Joy's database.
Figure 3-12: Electron stopping powers for the materials modeled in MicroElec. The stopping powers of BN are provided for information purposes only, as no reference data could be found for this material.
Figure 3-13: Dose-depth profile for 10 keV protons (a) and 200 eV electrons (b) with normal incidence in Si
- The inelastic MFP controls the number of secondary electrons set in motion in the irradiated material and their energy distribution.
- The elastic MFP and angular deviation have an influence on the quantity of backscattered electrons and on the random walk at very low energies.
- The surface potential barrier is a limitation of the quantity of low energy electrons escaping from the material.
Thus, the Backscattered (BEY), Secondary (SEY), or Total (TEEY) electron Emission Yields are quantities of interest that can be used to validate the transport model of low energy electrons. In this section, TEEYs, BEYs and SEYs computed with MicroElec are compared with data from other Monte-Carlo simulation codes ([MC]) and experimental measurements ([EXP]). All data correspond to the emission yields of a flat target irradiated with an electron beam under normal incidence. The processes used in MicroElec include elastic and inelastic interactions, surface processes and phonon interactions in SiO2, Al2O3 and BN.
Figure 3-14: Effects of the corrections implemented in MicroElec for low energy electrons on the TEEY of silicon
Three options of MicroElec are plotted in Figure 3-14 to show the effect of the different corrections: a) includes all processes and corrections previously mentioned, b) is without an initial energy for weakly bound electrons, c) is without the initial energy of secondaries and without the surface potential barrier.
Figure 3-15: Comparison of the TEYs calculated for all metals and semi-conductors in MicroElec
Figure 3-16: SEY of Al2O3 computed without the phonon and polaron processes
Figure 3-17: Comparison of the SEY and BEY of SiO2 simulated in MicroElec with phonon interactions, without polaronic capture
All MC simulations overestimate the SEY compared to the experimental data. One can notice in Figure 3-17 that our simulations are in relatively good agreement with those of Ohya et al. [91]. There is a factor of two between our calculation and that of Schreiber and Fitting [58]. The Monte-Carlo codes of Ohya et al. and Schreiber & Fitting also do not include the polaronic capture model. Consequently, the difference between the results obtained with the M-C codes could be attributed to the different approaches used for the other processes. Indeed, the inelastic cross sections have been computed by Schreiber and Fitting with an impact ionization model instead of the dielectric function theory, and the M-C code from Ohya et al. does not use the acoustic phonon model at low energies. However, as we have shown, the parameters of the acoustic phonon model are quite difficult to successfully evaluate. As a result, many adjustments can be made to the model and its energy application domain to improve the SEYs. This was done by Schreiber et al. [58] by introducing a scaling parameter into the acoustic scattering rate to modify its influence on the SEY. Our modification of the screening parameter to improve the transition between the elastic models also lowered our SEY curves. Schreiber & Fitting also include the contribution of transverse optical phonons, contrary to our M-C code and the code of Ohya et al.
Figure 3-18: Comparison of the BEY and SEY from MicroElec for SiO2 including all models
Finally, the SEY and BEY of SiO2 and Al2O3 obtained from MicroElec are given in Figure 3-18 for SiO2 and Figure 3-19 for Al2O3, with the polaronic capture enabled. The data obtained for Al2O3 is compared with experimental data from Dawson measured on two samples of sapphire, including one highly polished. We also compare the data to the simulated data from the M-C code of Ganachaud & Mokrani [63]. The effect of the polaron model is clearly visible in SiO2 compared to Figure 3-17, where we have a SEY that is now consistent with the data of Schreiber et al. If we do not use the polaronic capture model in our simulations for Al2O3, we obtain a SEY that follows the standard curve but goes up to 20. The simulated SEY for Al2O3 including the capture model is consistent with the measurements made on a highly polished surface of sapphire.
Figure 3-19: Comparison of the BEY and SEY from MicroElec for Al2O3 including all models
4.1 Development of an analytical model for the extrapolated range and transmission rate of low energy electrons (10 eV - 10 keV)
The definition of the extrapolated range r(E) can be found in section 2.4.1 of Chapter 2, where Figure 2-13 and Figure 2-14 illustrate how the extrapolated range is computed and how it differs from the true range or the average range of a particle in a material.
Equation 4-3, where r_E is the extrapolated range, E the kinetic energy of the electrons in keV, A = 5.37 × 10⁻⁴ g/cm²/keV, B = 0.9815, and C = 3.123 × 10⁻³ keV⁻¹.
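For illustration, the snippet below evaluates this range expression with the constants quoted above. The functional form r_E = A·E·(1 − B/(1 + C·E)) is the classical Weber/Kobetich and Katz expression, assumed here to be the one behind Equation 4-3; the function name and the example energies are choices made for the sketch only.

    #include <cstdio>

    // Assumed form of Equation 4-3: r_E = A * E * (1 - B / (1 + C * E)),
    // with E in keV and r_E in g/cm².
    double extrapolatedRangeHighEnergy(double E_keV) {
        const double A = 5.37e-4;   // g/cm²/keV
        const double B = 0.9815;    // dimensionless
        const double C = 3.123e-3;  // keV^-1
        return A * E_keV * (1.0 - B / (1.0 + C * E_keV));
    }

    int main() {
        // Dividing by the density (e.g. 2.70 g/cm³ for aluminum) converts the
        // mass range in g/cm² into a geometrical depth in cm.
        const double rhoAl = 2.70; // g/cm³
        for (double E : {10.0, 50.0, 100.0, 1000.0}) { // keV
            const double r = extrapolatedRangeHighEnergy(E);
            std::printf("E = %7.1f keV : r = %.3e g/cm2 = %.3e cm in Al\n", E, r, r / rhoAl);
        }
        return 0;
    }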
Figure 4-1: Electron transmission rate for Be, Al, Fe and Ag materials. The energy of electrons ranges from 25 eV up to 5 keV
… materials are displayed on both figures. As a reminder, the CSDA range for an electron of energy E is obtained from the stopping power dQ/dx with the relation r_CSDA(E) = ∫₀^E (dQ/dx)⁻¹ dQ.
Figure 4-2: Extrapolated ranges of low energy electrons in Al and Si
Figure 4-3: Elastic scattering ratio for 11 materials
Quantitatively, the elastic MFP becomes an order of magnitude lower than the inelastic MFP, with values of a few nanometers, as shown in Figure 4-4. Due to the divergence of the inelastic mean free path, the CSDA range also displays a flattening effect below 100 eV.
Figure 4-4: Mean free paths of electrons in MicroElec for Si and Al
Figure 4-5: Comparison of the transmission rates in Si from MicroElec and the standard continuous processes
Figure 4-6: Comparison between the range models of Equation 4-3 (ref. [7]) and Equation 4-5 (this work)
As shown in Figure 4-6, above ~10 keV, the proposed new expression converges to the classical formula of Equation 4-3. Below this limit, the different extrapolated range expressions diverge. The new formulation reproduces the plateau region appearing in MC simulations while the former formula cannot mimic this behavior. This formula is compared with the Monte-Carlo simulations for all simulated materials in Figure 4-7(a) and (b). Although the range below 50 eV is overestimated and the agreement with the simulations is decreased for low Z (Be, C) and high Z (W) materials, overall a satisfying agreement with the simulations is observed. Indeed, below 100 eV, the average difference between the model and the simulation is between 3% and 12% for all materials, except for Be with 18%.
Figure 4-7: Comparison of the extrapolated ranges given by eq. 13 and MicroElec for all 11 materials (Be, Al, Ti, Ni, Ge and W in fig. (a) and C, Si, Fe, Cu and Ag in fig. (b))
The quantities F and G are specific for each material and are determined with the following calibration process. The values of F and G are provided in Table 4-1. Aluminum, which is by far the most documented material, will serve as reference in the rest of the work. G(Z) sets the height of the plateau region of the range curve r_Z for the material Z, relatively to the extrapolated range of 50 eV electrons in aluminum. It is defined as …
Figure 4-8: Correlation of the atomic number (y-axis) with the height of the range plateau (a) and the slope of the range curve (b)
Analytical expressions have also been proposed for the transmission rate of electrons through a given thickness, but they are generally valid only for energies down to a few keV. As in the case of the extrapolated range, the probabilities calculated with MicroElec can be used to calibrate a new expression that is valid down to a few eV and suitable for SEY modelling. In this work, the model from Kobetich & Katz [9] has been extended to lower energies (~10 eV). Their formula for the transmission probability η(E, h) of electrons of energy E [keV] through a thickness h [g/cm²] is initially given as a function of the extrapolated range r(E) [g/cm²]. In this case, the extrapolated range is obtained with the analytical expression from Kobetich & Katz shown in Equation 4-3:
Figure 4-9: Transmission rate model (grey) compared with MicroElec (colors)
Figure 4-10: Dose-depth curve given by the analytical model (Model) proposed in this work for incident electrons in aluminum. The model is compared with Monte Carlo simulations of MicroElec, Walker [22] and OSMOSEE [23]. Small circles of different colors are for MicroElec simulations, big circles of different colors are for Walker [36] reference data and different squares are our former simulations performed with the OSMOSEE code [37].
Figure 4-11: Validation of the analytical dose model with MicroElec data for 11 materials
Figure 4-12: Comparison of the dose profiles in Si from MicroElec and GRAS
Figure 4-13: Comparison between MicroElec and GRAS
where Δ(r_E − h) = [A(1 − B) − (r_E − h)·C]² + 4AC·(r_E − h) is the discriminant of the high energy range expression.
1/p = 1.8·(0.0059·Z^0.98 + 1·log₁₀ Z)⁻¹ + 0.31
The dose profiles resulting from the high energy model of Equation 4-20 are shown in Figure 4-14. They are compared with MicroElec and our low energy model based on the extrapolated range expression of Equation 4-5 in the example of Cu. Equation 4-20 should only be considered valid above 10 keV, as it is the domain of validity of the range-energy relationship of Equation 4-3, which is used in this case. Nevertheless, both models have also been plotted for electron energies below 10 keV, to demonstrate that the high energy model of Equation 4-20 is indeed invalid below 10 keV and to show the improvement of our low energy model compared to the high energy model.
Figure 4-14: Comparison of the low energy and high energy analytical models
For electrons having energies greater than some keV, one can find in the literature analytical functions capable of adequately representing ⟨Es⟩(h) [4,29,31]. The model presented in this section is based on both the practical range vs. energy and the transmission probability functions developed in the previous sections. Based on this, a simple Y_SE model valid in the [~eV, ~keV] range can be derived, where r(E) is our model for the practical range of the incident electrons of energy E defined by Equation 4-5, and η(⟨E_s⟩, x) the transmission probability of the secondary electrons of mean energy ⟨E_s⟩ created at depth x.
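To illustrate the structure of such a model, the sketch below evaluates a yield of the generic dose-times-escape-probability form Y_SE(E) ≈ (1/ε) ∫₀^r(E) dD/dx(x, E) · η(⟨E_s⟩, x) dx by simple numerical integration. It is only a schematic of the construction described here, not the equation of this section: the toy range, dose profile, escape probability and ionization energy ε used below are placeholders that would be replaced by the models of the previous sections.

    #include <cmath>
    #include <cstdio>
    #include <functional>

    // Schematic SEY: Y_SE(E) ~ (1/eps) * integral over depth of dDdx(x,E) * eta(Es,x)
    // dDdx : energy deposited per unit depth at depth x for incident energy E
    // eta  : escape probability of a secondary of mean energy Es created at depth x
    // r    : practical (extrapolated) range of the incident electrons
    double secondaryYield(double E,
                          const std::function<double(double, double)>& dDdx,
                          const std::function<double(double, double)>& eta,
                          const std::function<double(double)>& r,
                          double Es, double eps, int nSteps = 400) {
        const double range = r(E);
        const double dx = range / nSteps;
        double yield = 0.0;
        for (int i = 0; i < nSteps; ++i) {
            const double x = (i + 0.5) * dx;          // midpoint rule
            yield += dDdx(x, E) * eta(Es, x) * dx;
        }
        return yield / eps;
    }

    int main() {
        // Placeholder ingredients, NOT the models of this work:
        auto r    = [](double E) { return 2.0e-9 * std::pow(E, 1.35); };       // toy range [cm], E in eV
        auto dDdx = [&](double x, double E) { (void)x; return E / r(E); };     // flat dose profile
        auto eta  = [](double, double x) { return std::exp(-x / 2.0e-7); };    // ~2 nm escape depth
        const double eps = 20.0;                                               // eV per secondary

        for (double E : {100.0, 300.0, 1000.0}) // incident energies in eV
            std::printf("E = %6.0f eV  ->  Y_SE ~ %.2f\n", E, secondaryYield(E, dDdx, eta, r, 17.0, eps));
        return 0;
    }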
Figure 4-15: OELFs of Ge, Al, Ag, Si
Figure 4-16: Median SE energy in Al, Ag, Ge and Si from MicroElec simulations
Figure 4-17: Secondary emission yield calculated with the model and MicroElec for an aluminum target and for different angles of incidence
Figure 4-17 shows the SEY calculated with the analytical model for an aluminum target and various incidence angles going from 0° (normal incidence) up to 75°. The SEY increases with the angle of incidence because the ionization is produced closer to the surface, increasing the probability for the secondary electrons to escape from the material. We can also notice that the maximum of yield also shifts to higher energies as the incident angle increases. This behavior can be linked to the location of the maximum of deposited dose, i.e. the depth at which the production of secondary electrons is maximum. This can be explained by the fact that, at normal incidence, the depth at which the maximum of dose is deposited is large enough to prevent most of the electrons coming from this depth from escaping the material. By tilting the incident beam, the peak of dose is brought closer to the surface, increasing the amount of secondary electrons able to escape the surface. The consequence is a shift of the energy of the electrons at which the maximum of SEY is reached.
Figure 4-18: Comparison of the analytical SEY model with Monte-Carlo and experimental data
Figure 4-19: κ as a function of Z
Here, the correlation gives a law in Z² for the values of κ which can be used as a starting point to get an estimation of this parameter for any new materials. The position of the max SEY Emax has been plotted in Figure 4-20 as a function of the Emax extracted from MicroElec. In this case, no compensation factor has been applied. The correlation factor is also satisfying (0.94).
Figure 4-20: Comparison of the energy of the maximal SEY from the analytical model and Monte Carlo data
Figure 4-21: Correlation of G with the max value of the SEY
Figure 4-22: Correlation of G with the position of the max value of the SEY
Figure 4-23: Comparisons of the dose depth curve given by the analytical model of this work for incident electrons in copper with MicroElec and a constant energy loss model.
From Figure 4-9 for 25 eV electrons, we can suppose that the escape depth of secondaries is around 2 nm. Consequently, we can show this correlation by calculating the total energy deposited in the first 2 nanometers of the surface, i.e. Equation 4-34, where …
Figure 4-24: Correlation between the surface energy deposit and the SEY from the analytical models: position (a) and value (b) of the max
Figure 4-25: Comparison of the surface energy deposit and SEY curves for C, Ti, Ni, W.
Several works have developed analytical models for charge transport in insulators. In most cases, the model solves the drift-diffusion equations to compute the evolution of the charge densities with space (x) and time (t). A standard 1D equation scheme found in these codes includes Poisson's equation, Ohm's law and the conservation law, respectively in the form of:
ΔV(x, t) = ρ(x, t)/ε
J(x, t) = σ(x, t)·F(x, t)
∂ρ(x, t)/∂t + ∇J(x, t) = S(x, t)
(Equation 5-1)
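As an illustration of how such a 1D scheme can be advanced in time, the sketch below performs one explicit update of the charge density on a uniform mesh: the field is obtained by integrating Gauss's law, the current from Ohm's law, and the density from the conservation law. It is a minimal finite-difference sketch written for this example (grid, boundary treatment and source term are assumptions), not the solver used in this work.

    #include <vector>

    // One explicit time step of the 1D scheme of Equation 5-1 on a uniform mesh of
    // spacing dz. rho: net charge density [C/cm³], sigma: local conductivity [S/cm],
    // S: source term [C/cm³/s], eps: permittivity of the material.
    void driftDiffusionStep(std::vector<double>& rho,
                            const std::vector<double>& sigma,
                            const std::vector<double>& S,
                            double dz, double dt, double eps) {
        const std::size_t n = rho.size();

        // Field F on the cell faces from Gauss's law dF/dz = rho/eps,
        // with F = 0 imposed at the rear face (simple boundary choice).
        std::vector<double> F(n + 1, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            F[i + 1] = F[i] + rho[i] * dz / eps;

        // Ohm's law on the inner faces: J = sigma * F (sigma interpolated on faces).
        // The outer faces are left at J = 0, i.e. no charge exchange with the electrodes.
        std::vector<double> J(n + 1, 0.0);
        for (std::size_t i = 1; i < n; ++i)
            J[i] = 0.5 * (sigma[i - 1] + sigma[i]) * F[i];

        // Conservation law: d(rho)/dt = S - dJ/dz, advanced with an explicit Euler step.
        for (std::size_t i = 0; i < n; ++i)
            rho[i] += dt * (S[i] - (J[i + 1] - J[i]) / dz);
    }

A real solver would also need a stability condition on the time step and the boundary conditions of the actual sample and electrode geometry.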
Figure 5-1: Simulation configuration
V_pol = −9 V (Equation 5-14)
5.2 Developing an iterative Monte-Carlo simulation of charging and secondary electron emission
(3/2)·kT, where k is Boltzmann's constant and T the temperature. Using E = ½mv², …
Figure 5-2: Trap levels modeled in the simulations
For holes (h):
1/λ_S,h(z) = σ_S·N_S,Free(z) + σ_e-h·N_e(z)
1/λ_D,h(z) = σ_D·N_D,Free(z) + σ_e-h·N_e(z)
(Equation 5-39)
For drift electrons (e):
1/λ_S,e(z) = σ_S·N_S,Free(z) + σ_e-h·N_h(z)
1/λ_D,e(z) = σ_D·N_D,Free(z) + σ_e-h·N_h(z)
1/λ_S,e(z, E) = [σ_S·N_S,Free(z) + σ_e-h·N_h(z)]·exp(−γE)
1/λ_D,e(z, E) = [σ_D·N_D,Free(z) + σ_e-h·N_h(z)]·exp(−γE)
(Equation 5-41)
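As an order-of-magnitude check of these expressions, the snippet below evaluates the capture mean free path λ = 1/(σN) for the shallow trap density and capture cross sections listed in the simulation parameters further below; it is a side calculation, not part of the simulation code.

    #include <cstdio>

    int main() {
        // Capture mean free path lambda = 1/(sigma * N) for a single capture channel,
        // using the shallow trap parameters of the simulation (see parameter list below).
        const double NS = 1.0e21;              // shallow trap density [cm^-3]
        const double sigmaDrift     = 1.0e-14; // capture cross section, holes and drift electrons [cm²]
        const double sigmaBallistic = 6.0e-15; // capture cross section, ballistic electrons [cm²]

        std::printf("drift carriers      : lambda = %.2e cm (%.1f nm)\n",
                    1.0 / (sigmaDrift * NS), 1.0e7 / (sigmaDrift * NS));
        std::printf("ballistic electrons : lambda = %.2e cm (%.1f nm)\n",
                    1.0 / (sigmaBallistic * NS), 1.0e7 / (sigmaBallistic * NS));
        return 0;
    }

With traps free at this density, a slow carrier travels only about a nanometre before being captured.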
Figure 5-3: Modeling of the shallow hole traps
Figure 5-4: Simulation results of the electron cascade radius in SiO2 for different incident energies
Now that we know the size of a single electronic cascade, we can compute the proportion of the material surface filled with electrons and holes resulting from the electron cascades after a given time. It is simply expressed by the number of incident electrons N_inc(τ) sent from the start of the measurement, multiplied by the surface of a cascade to get the total surface covered by electron cascades, and divided by the area irradiated by the electron beam. This gives the following overlap factor, where we can make the incident current density J_0 appear using Equation 5-3:
%Overlap(τ) = S_C·N_inc(τ)/S_B = (S_C/e)·J_0·τ (Equation 5-45)
Figure 5-5: Illustration of the evolution of the overlap factor with time
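As a quick numerical illustration of Equation 5-45, the snippet below evaluates the overlap factor as a function of time and the time needed to reach 100%, for an assumed cascade radius and the 10 µA/cm² current density used in the simulations; the cascade radius is an example value chosen for the illustration.

    #include <cstdio>

    int main() {
        const double e   = 1.602e-19;          // elementary charge [C]
        const double pi  = 3.14159265358979;
        const double rC  = 3.5e-6;             // assumed cascade radius [cm] (example value)
        const double SC  = pi * rC * rC;       // surface of one electron cascade [cm²]
        const double J0  = 10.0e-6;            // incident current density [A/cm²] (10 µA/cm²)

        // %Overlap(tau) = (S_C / e) * J0 * tau  (Equation 5-45)
        for (double tau : {1e-5, 1e-4, 1e-3})  // seconds
            std::printf("tau = %.0e s : overlap = %.1f %%\n", tau, 100.0 * SC * J0 * tau / e);

        // Time needed for the cascades to cover the whole irradiated area once.
        std::printf("full overlap after %.2e s\n", e / (SC * J0));
        return 0;
    }

With these example numbers the full-overlap time comes out at a few pulses of 100 µs, which is the order of magnitude quoted below for a current density of 10 µA/cm².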
Figure 5-6 and Figure 5-7 illustrate how these classes interact with the Geant4 classes to perform the transport of ballistic electrons and drift charge carriers.
Figure 5-6: Architecture of the simulation for the transport of ballistic electrons
Figure 5-8: Simulation of the TEEY of a 100 µm SiO2 sample hit by 20 eV electrons. Current density of 10 µA/cm²
Figure 5-9: Simulation of the TEEY of a 100 µm SiO2 sample hit by 500 eV electrons. Current density of 10 µA/cm²
Figure 5-10: Energy spectrum of electrons emitted by SiO2 under 500 eV incident electrons
Figure 5-11: Simulation of the TEEY of a 1 mm SiO2 sample hit by 2500 eV electrons. Current density of 10 µA/cm². (a): Default parameters (b): Recombination cross section decreased to 10⁻¹³ cm²
The sample used here is 10 times thicker (1 mm) than in Figure 5-9 (0.1 mm), so according to Equation 5-47 the surface potential should be evolving ten times faster. Yet, the equilibrium is only reached from roughly 30 ms, instead of 2 ms in Figure 5-9, so we can suppose that for a 0.1 mm sample we would have to compute a hundred times more simulation steps compared to Figure 5-9, and the simulation would be ten times longer than the present case. All in all, the individual simulation steps are longer, and we need to compute a lot more of them. As a result, the simulation results from Figure 5-11 were obtained after 40 hours of computation, while the simulation of Figure 5-8 took less than an hour.
Figure 5-12: TEEY of a 20 nm SiO2 sample for 300 eV (a), 500 eV (b) and 1 keV (c) incident electrons. Current density of 10 µA/cm²
Figure 5-13: Effect of the capture cross section of secondary electrons by free shallow electron traps on the TEEY
Figure 5-14: Comparison of the TEEY with experimental data
Figure 5-15: Effect of the model parameters on the TEEY of 500 eV electrons, with a current density of 10 µA/cm². (a): Effect of the capture cross sections of secondary electrons by free traps. (b): Effect of the recombination cross section. (c): Effect of the detrapping frequency factor
Beam area and current density: 0.1 cm² to a few cm², for a current density of 0.1 µA/cm² to 10 µA/cm²
Number of incident electrons per time step and bias factor: 500 electrons, bias factor of 12500
Density of shallow traps per cm³ (N_S): 10²¹ cm⁻³
Density of deep traps per cm³ (N_D): 10¹⁸ cm⁻³
Capture cross section by a shallow trap (σ_S): 10⁻¹⁴ cm² (holes and drift electrons), 6 × 10⁻¹⁵ cm² (ballistic electrons)
Capture cross section by a deep trap (σ_D): 10²¹ cm⁻³
Mean value for the exponential distribution of hole shallow trap depths (20 levels ranging from 0 to 0.4 eV): 0.07 eV (modified to 0.1 eV in section 6.5 to improve the fit to experimental data)
Depth of electron shallow traps: 0.02 eV (modified to 0.05 eV in section 6.5 to improve the fit to experimental data)
Depth of electron and hole deep traps: 2 eV
Electron-hole recombination cross section (σ_e-h): 2 × 10⁻¹² cm²
Detrapping frequency factor (W_0): 10³ s⁻¹ for shallow traps, 10¹⁴ s⁻¹ for deep traps
Figure 6-1: Illustration of the DEESSE facility. Image by S. Dadouch
Figure 6-2: Illustration of the ALCHIMIE facility. Image by S. Dadouch
Figure 6-3: Time-resolved experimental measurement of the decrease of the TEEY of SiO2 thin film for 300 eV and 1 keV electrons.
Figure 6-4: Comparison of the simulated and experimental data of the decrease of the TEEY.
Figure 6-5: Comparison of the simulated charge-less TEEY and experimental TEEY (J0 < 25 nA/cm²)
Figure 6-6: Electric field in SiO2 after 80 pulses of 500 eV and 1 keV electrons
Figure 6-7: Energy/TEEY curve for positive (a) and negative (b) uniform values of the internal electric field
Figure 6-8: Charge distributions after 100 pulses of 300 eV and 500 eV electrons (a) and after electron irradiation without drift transport (b)
Figure 6-9: Extrapolated range of electrons in SiO2 from the Monte-Carlo simulation
Figure 6-10: Charge profiles after 100 pulses of 1 keV electrons in 20 nm and 50 nm thick samples
This effect can be shown if we compare the charge distribution after 100 pulses of 1 keV electrons in a 50 nm sample with our 20 nm sample. The comparison is shown in Figure 6-10. The 50 nm sample is thicker than the range of 1 keV electrons. For this sample, we observe a charge profile with very distinctive positive and negative regions, instead of the strictly positive charge profile of 1 keV electrons in 20 nm. The 50 nm sample is positively charged until 24 nm, which can explain why the 20 nm sample has no negative charge region. As a result, what we see on the 20 nm sample is only a part of the positive charge peak, which extends into the silicon layer. Most of the electrons can easily escape in the silicon layer due to their implantation depth. This leads to a large net positive charge close to the SiO2/Si interface when we sum the quantity of positive and negative charges to get the total charge density plotted here, hence the peculiar shape of the charge distribution.
Figure 6-11: Charge profiles of 100 000 1 keV electrons in 20 nm and 50 nm thick samples, with drift transport disabled
Figure 6-12: Evolution of the peaks of positive and negative charges during 100 pulses of 300 eV, 500 eV and 1 keV electrons
Figure 6-14: Evolution of the charge density of 1 keV electrons after 1 and 2 pulses
Figure 6-15: Comparison of the evolutions of the TEEY (a) and the surface hole density (b)
Figure 6-16: Linear fit between the TEEY and the surface charge density from 100 eV to 2 keV
Figure 6-17: Correlation between the TEEY and the surface charge density at 300 eV and 1 keV
Figure 6-18: Comparison of the experimental TEEY of 1 keV electrons for a virgin and a charged sample
Figure 6-19: Comparison of the experimental TEEY of 500 eV electrons for a discharged and a charged sample
Figure 6-20: Approximated residual hole densities used in the simulations
Figure 6-21: Simulation of the effect of residual holes on the TEEY of 300 eV, 500 eV and 1 keV electrons.
Figure 6-22: Simulation of the compensation of the surface holes by very low energy electrons. Case of 300 eV electrons (a) Evolution of the TEEY (b) Evolution of the total surface charge density (c) Charge density profiles after 10 pulses of 300 eV and 80 pulses of 3 eV electrons.
Figure 6-23: Simulation of the compensation of the surface holes by very low energy electrons. Case of 500 eV electrons (a) Evolution of the TEEY (b) Evolution of the total surface charge density (c) Charge density profiles after 10 pulses of 500 eV and 80 pulses of 3 eV electrons.
Figure 6-24: Simulation of the removal of the surface positive charge by a continuous 3 eV beam for 6 ms and a rest period of 1 second (a) Evolution of the TEEY (b) Evolution of the total surface charge density (c) Charge density profiles before and after the discharge
Figure 6-25: Simulation of the TEEY evolution through two periods of 10 pulses of 300 eV electrons, spaced by a 1 second rest period. (a) Evolution of the TEEY (b) Evolution of the surface potential (c) Evolution of the total surface charge density (d) Charge density profiles before and after the 1 second relaxation period
Figure 6-26: Double hump TEEY of SiO2 thin film sample measured at F = 300 V
Figure 6-27: Comparison of the TEEY curves obtained with different focus voltages
Figure 6-28: Double hump TEEY of a MgO thin film sample measured at F = 250 V and F = 0 V
Figure 6-29: Comparison of the TEEY of a contaminated sample at F 250V and a clean sample at F 300V
Figure 6-30: Variation of the incident current with the focus voltage
Figure 6-31: Variation of the beam area with the focus voltage and electron energy
From Figure 6-31, we can see a direct correlation between the area irradiated by the beam and the shape of the TEEY curves of Figure 6-27. The local minimum of TEEY and the minimum …
Figure 6-32: Evolution of the TEEY in time with the number of pulses, depending on the incident energy and focus voltage
Figure 6-33: Simulation of the TEEY of 300 eV electrons for a current density of 10 µA/cm² and 1 µA/cm² (a) Evolution of the TEEY (b) Evolution of the surface potential (c) Evolution of the total surface charge density (d) Charge density profiles at the end of 100 pulses
… Figure 6-33 to Figure 6-35. In comparison, the overlap factor reaches 100% after 4 pulses for a current density of 10 µA/cm². In consequence, for a wider beam or lower current density, the proportion of secondary electrons lost by recombination is lessened, which leads to a higher TEEY.
Figure 6-36: Correlation between the TEEY and the surface charge density for a current density of 1 µA/cm²
Figure 6-37: Simulation of the TEEY curves obtained with different focus voltages. I0 = 1 µA
Figure 6-38: Simulation of the TEEY averaged over 100 pulses of 100 µs with a constant beam surface. I0 = 1 µA
Figure 6-39: Measurement of the TEEY of 300 eV electrons at F 500V in SiO2 at 27°C and 200°C in DEESSE. J0 = 10 µA/cm²
Figure 6-40: Measurement of the TEEY of 500 eV electrons at F 100V in SiO2 at 27°C and -180°C in ALCHIMIE. J0 = 10 µA/cm²
Figure 6-41: Measurement of the TEEY of 500 eV electrons at F 0V in SiO2 at 100°C, 27°C and -180°C in ALCHIMIE. J0 ~ nA/cm²
Figure 6-42: Experimental measurement of the TEEY at F 100V in ALCHIMIE from -180°C to 100°C
Figure 6-43: Experimental measurement of the TEEY at F 0V in ALCHIMIE from -180°C to 100°C
6.4.2 Explanation of the experimental observations by the simulation: Study of the thermally activated hopping transport
Figure 6-44: Simulation of the TEEY of 300 eV electrons in SiO2 at -180°C, 27°C and 200°C. J0 = 10 µA/cm²
The change of charge-less TEEY in the simulation depending on the temperature can be explained by the interactions of electrons with phonons. Indeed, we have established in Chapter 3 that the transport of the ballistic electrons can be modified by the phonon interactions even in the absence of charges. Notably, the population of phonons N(T) of a given mode ℏω is dependent on the temperature T, as it follows the Bose-Einstein distribution:
N(T) = 1/(exp(ℏω/k_b·T) − 1) (Equation 6-10)
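As a quick check of what Equation 6-10 implies for SiO2, the snippet below evaluates the Bose-Einstein population of the two dominant LO phonon modes quoted earlier (63 meV and 153 meV) at roughly the three temperatures used in the simulations; it is a simple numerical illustration, not part of the simulation code.

    #include <cmath>
    #include <cstdio>

    // Bose-Einstein phonon population of a mode of energy hw (Equation 6-10).
    double phononPopulation(double hw_eV, double T_K) {
        const double kB = 8.617e-5;                  // Boltzmann constant [eV/K]
        return 1.0 / (std::exp(hw_eV / (kB * T_K)) - 1.0);
    }

    int main() {
        const double modes[] = {0.063, 0.153};       // dominant LO phonon modes of SiO2 [eV]
        const double temps[] = {93.0, 300.0, 473.0}; // about -180°C, 27°C and 200°C [K]
        for (double hw : modes)
            for (double T : temps)
                std::printf("hw = %5.3f eV, T = %5.1f K : N = %.4f\n",
                            hw, T, phononPopulation(hw, T));
        return 0;
    }

More phonons at higher temperature means more frequent scattering of the ballistic electrons, which is consistent with the change of the charge-less TEEY with temperature described here.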
Figure 6-45: Simulation of the TEEY of 300 eV electrons in SiO2 at -180°C, 27°C and 200°C, normalized to match the starting point of the TEEY at 27°C. J0 = 10 µA/cm²
Figure 6-46: Charge density profiles at the end of 100 pulses of 300 eV electrons at 27°C, 200°C and -180°C. J0 = 10 µA/cm²
Figure 6-47: Evolution of the surface charge density of 300 eV electrons at -180°C, 27°C and 200°C. J0 = 10 µA/cm²
Figure 6-48: Comparison of the incident current used in standard TEEY curve measurements in DEESSE and ALCHIMIE
Figure 6-49: Measurement and simulation of the TEEY over 300 pulses, for 300 eV, 500 eV and 1 keV electrons
Lastly, we have made time-resolved measurements of the TEEY in ALCHIMIE over several seconds, by sending 5000 pulses of 100 µs with an interval of 50 ms, for a total irradiation time of 0.5 s. The objective here is to sample the decrease of the TEEY until it reaches its equilibrium, and define the value of this equilibrium. It is technically possible to simulate such a duration with the Monte-Carlo code, however the computation time would be enormous. Indeed, it took 17 hours on our desktop computer to simulate the TEEY given in Figure 6-49 for 500 eV electrons over 300 pulses. In comparison, the time required for the acquisition of the TEEY over 5000 pulses in ALCHIMIE or DEESSE is roughly 4 minutes, regardless of the incident energy. In consequence, we will only use the experimental data acquired with ALCHIMIE for this part of the study, without comparing it to simulated results. Since the incident current remains practically invariable through the time-resolved measurements, only the emitted current was measured. The TEEY was then computed with the average value of the incident current from the TEEY measurements made over 300 pulses. By proceeding this way, we avoid creating a large number of residual holes when measuring I0 before sampling the emitted current, which could happen if we sent 5000 pulses first to measure I0. The TEEY measured during 5000 pulses are plotted for 300 eV and 500 eV in Figure 6-50, in a minimal beam area configuration (Focus voltage F500 or F100, 0.1 cm²) and a broad beam configuration (F0, > 25 cm²).
Figure 6-50: Measurement of the TEEY of 300 eV and 500 eV electrons in SiO2 over 5000 pulses
Figure 6-51: Measurement of the TEEY of 1 keV electrons in SiO2 over 5000 pulses (a). In (b), the data of (a) is zoomed in the first 30 ms (300 pulses) of the measurement.
Figure 6-52: GREEN differential GEO electron spectrum
Figure 6-53: Simulated TEEY and surface potential of a 20 nm SiO2 target. J0 = 10 µA/cm².
Figure 6-54: Charge density profile after 30 ms of the 20 nm SiO2 target
Figure 6-55: Dose-depth profiles for 1 keV and 10 keV incident spectrums
Figure 6-56: TEEY and surface potential of a 2 µm SiO2 target
Figure 6-57: Charge density profile after 30 ms in the 2 µm SiO2 target
Figure 6-58: Integrated current densities from GREEN for low energy electrons in the LEO and GEO space environment
Figure 6-59: Simulation of the TEEY of the 1 keV GEO spectrum, J0 = 1 nA/cm²
1 - Creation and initialization of the DriftManager at the start of a simulation
DriftMessenger will load the simulation parameters from the ChargingParameters.mac file, which is shown below. The macro commands setting the parameters for the mesh and the physics are set in the categories "Mesh" and "Phys". The commands in this file allow to respectively set:
- The sample holder voltage and electron gun voltage, as boundary conditions for the computation of the electric field
- The number of incident electrons (saved in nbOfElecPerRun in the drift manager) and the associated charge bias factor. The number of incident electrons is set as a balance between computation time and statistical consistency. Using an external table file, the charge bias factor is computed from the incident current we want to model with Equation 5-4.
- The electron and hole mobilities, the relative permittivity
- The radius of an electron cascade for the overlap factor
- The cross sections for empty traps, respectively for ballistic electrons, drift electrons and holes
- The cross sections for recombination, in the same order
- The detrapping rate W_0 and the temperature
Figure A-1 shows the interactions between the DriftManager and the other classes of Geant4 during an electron cascade run. The physical processes G4MicroElecInelasticModel and G4MicroElecElasticModel can respectively register new holes and drift electrons to the DriftManager, using void AddHoleFromCascade(G4ThreeVector pos) and void AddElectronFromCascade(G4ThreeVector pos). When one of these functions is called, the position of creation of the particle is saved in either of the vectors std::vector<G4ThreeVector> holeNew or std::vector<G4ThreeVector> electronNew of the drift manager.
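The sketch below shows what such a registration could look like from the point of view of a physics process. Only the AddHoleFromCascade/AddElectronFromCascade calls and the holeNew/electronNew storage vectors are taken from the description above; the reduced DriftManager interface, the singleton access and the way the positions are obtained are simplifications made for the example.

    #include <vector>
    #include "G4ThreeVector.hh"

    // Reduced sketch of the DriftManager registration interface described above.
    class DriftManager {
    public:
        static DriftManager* Instance() { static DriftManager m; return &m; }

        // Positions of carriers created during the current electron cascade run.
        void AddHoleFromCascade(G4ThreeVector pos)     { holeNew.push_back(pos); }
        void AddElectronFromCascade(G4ThreeVector pos) { electronNew.push_back(pos); }

    private:
        std::vector<G4ThreeVector> holeNew;      // new holes, transported in the drift stage
        std::vector<G4ThreeVector> electronNew;  // new drift electrons
    };

    // Inside an inelastic interaction: an ionization leaves a hole behind, and the
    // secondary electron that thermalizes becomes a drift electron.
    void RegisterIonization(const G4ThreeVector& interactionPoint,
                            const G4ThreeVector& thermalizationPoint) {
        DriftManager::Instance()->AddHoleFromCascade(interactionPoint);
        DriftManager::Instance()->AddElectronFromCascade(thermalizationPoint);
    }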
Figure A-1: Architecture of the simulation for the transport of ballistic electrons
Figure A-2: Illustration of the storage vectors for deep electron traps, deep hole traps and shallow electron traps (a), and for shallow hole traps (b)
ρ_i = [(n_holes)_i − (n_electrons)_i] · e·β / (Δz_i · S_mesh) (Equation A-3)
Figure A-3: Architecture of the simulation for the transport of drift particles
Table 3-1: Sum rules values deduced from the OELF fits, for 11 materials, in the new version of MicroElec

Table 3-2: Initial energy E_init of weakly bound electrons
Material:      SiO2   C    Al    Si     Ti   Ni   Cu    Ge    Ag     W    Kapton
E_init (eV):   8      0    4.2   1.73   4    3    2.6   2.5   2.03   4    0.5
Table 3-3: Parameters stored in the Data_Material file
Table 3-4: Parameters used for the LO phonon model (Al2O3, BN and SiO2)
Material   ℏω_LO (eV)                                   ε(∞)   ε(0)
SiO2       0.131 (0.75 x 0.153 + 0.25 x 0.063) [58]     2.25   3.84
Al2O3      0.1 [63]                                     3      9
BN         0.17                                         4.5    7.1
Table 3-5: Table of parameters for the acoustic phonon model in SiO2
Table 3-6: Parameters of the polaronic capture model
Table 3-7: Energy parameters for low energy electrons
In the case of the CSDA values, one must use r_Al = 9 x 10⁻⁷ g/cm².

Table 4-1: G F parameters from the M-C simulations (bold red) and from CSDA ranges [21]
Z     G        F
3     0.06     1.776
4     0.51     1.8
6     0.74     1.8
11    0.09     1.719
12    0.4      1.714
13    1        1.73
14    0.92     1.83
19    0.2156   1.656
21    1.386    1.69
22    1.91     1.8
23    2.6026   1.657
24    2.9722   1.64
26    3.46     1.75
27    4.774    1.654
28    3.58     1.8
29    3.72     1.8
32    2.71     1.6
39    1.8018   1.624
41    5.1128   1.599
42    5.9444   1.594
44    6.1138   1.593
45    5.5748   1.599
46    5.0974   1.6

Table 4-2: BEY values
Material:   C    Be   Al   Si   Ti   Fe   Ni   Cu   Ge   Ag (elastic yield)   W
Z:          6    4    13   14   22   26   28   29   32   47                   74
Low-energy BEY:   values 0.1, 0.3, 0.35 over the energy domains E < 21 eV, E < 200 eV, E < 400 eV
High-energy BEY:  values 0.02, 0.225, 0.14, 0.35 over the energy domains E > 700 eV, E > 2 keV, E > 3 keV
Table 4-3: Values for the parameters of the SEY model
Material   w_f (eV)   ⟨E_s⟩ (eV)   I (eV)   κ
Be         4.98       18.4         18.4     1.65
C          4.81       20           20       1
Al         4.28       15           15       1.05
Si         4.05       16.7         16.7     1.3
Ti         4.33       19.4         19.4     0.85
Fe         4.5        23.6         23.6     0.85
Ni         5.15       22           22       0.75
Cu         4.65       19           19       0.65
Ge         5          13           13       0.65
Ag         4.26       22           22       0.75
W          4.55       35           35       1.15
Symbol Definition
𝑛 𝑒 Number of electrons in the simulation
𝑛 ℎ Number of holes in the simulation
𝑁 𝑒 Density of electrons per cm 3
𝑁 ℎ Density of holes per cm 3
𝜌 𝑒 Volume charge density of electrons in C/cm 3
𝜌 ℎ Volume charge density of holes in C/cm 3
𝑁 𝑇 Density of traps (shallow or deep) per cm 3
𝑁 𝑆 Density of shallow traps per cm 3
𝑁 𝐷 Density of deep traps per cm 3
𝜎 𝑆 Capture cross section by a shallow trap
𝜎 𝐷 Capture cross section by a deep trap
𝜎 𝐹𝑟𝑒𝑒 Capture cross section by a free trap
𝑁 𝐹𝑟𝑒𝑒 Density of free traps
𝜎 𝑒-ℎ Electron/hole recombination cross section
Table 5 -1: Definition of the notations for the charge densities, trap densities and cross sections
In this work, we aim to reproduce the experimental setup of the TEEY measurement facility DEESSE at ONERA, which we used to measure the TEEY on amorphous SiO2 samples. It is thus necessary to simulate the experimental setup configuration and the experimental samples, to improve the comparison of our model with the data from this facility.
5.2 Developing an iterative Monte-Carlo simulation of charging and secondary electron emission
5.2.1 Presentation of the simulation configuration and general procedure
Table 6-1: Simulation parameters used in the charge transport model
Parameter               Value
Simulation time step    1 µs
Incident current        1 µA
Table 6-2: Experimental measurement parameters used in this study
Measurement type:                            Time-resolved measurements at a single energy  |  Energy/TEEY curve from 50 eV to 2 keV
Pulse length:                                100 µs  |  6 ms
Relaxation time between two pulses:          50 ms  |  200 ms
Number of pulses sent per incident energy:   80 to 100 (up to a few thousand in section 6.4)  |  TEEY averaged over 10 pulses
Incident current:                            0.1 µA to 1 µA
Area irradiated by the electron beam:        minimum 0.05 cm², maximum > 25 cm²
Incident current density:                    minimum < 25 nA/cm², maximum 20 µA/cm²
of the edge of the Brillouin zone (a few eVs), the dependence in temperature is contained in the phonon population N_BZ at the edge of the Brillouin zone, which follows Equation 6-10. The acoustic phonon interaction rate of Equation 3-31 of Chapter 3 is written as:

f_ac = [π k_B T / (ℏ c_s² ρ)] ℰ_ac D(E) (1 + E/A)    if E < E_BZ/4

f_ac = [2π m** (2N_BZ + 1) / (ρ ℏ ℏω_BZ)] ℰ_ac² D(E) E² (E/A)² [ -(E/A)/(1 + E/A) + ln(1 + E/A) ]    if E > E_BZ

(Equation 3-31 of Chapter 3)
Trapping of ballistic electrons
The deep level traps are also capable of capturing ballistic electrons with a reduced cross section 𝜎 𝐷 = 10 -16 𝑐𝑚² to take into account their higher velocity. For shallow traps, the cross section is also lowered to 6 × 10 -15 𝑐𝑚² for ballistic electrons. However, this trapping is only possible if the electron energy is very low (a few eVs). To take into account this dependency on the energy, the mean free path obtained from Equation 5-27 or Equation 5-29 is modified using Ganachaud and Mokrani's empirical law [21]:
1/λ = σN exp(-γE)
Equation 5-30
Where 𝜎 and 𝑁 are the capture cross section and density of the shallow or deep traps, and γ = 0.2 eV -1 , which is the valued used in Chapter 3 and is similar to the value used by Ohya et al. [22] for SiO2 (0.25). Compared to our use of this function for the polaronic empirical capture in Chapter 3, we have substituted the fitting parameter 𝑆 in the interaction probability by the inverse mean free path 𝜎𝑁.
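As an illustration of how Equation 5-30 can be evaluated in practice, the minimal helper below computes the capture inverse mean free path of a ballistic electron; the function name and the default γ = 0.2 eV⁻¹ are illustrative assumptions consistent with the values quoted above, not the actual source code.

#include <cmath>

// Illustrative helper for Equation 5-30: capture inverse mean free path of a
// ballistic electron by shallow or deep traps, damped by Ganachaud and
// Mokrani's empirical factor exp(-gamma*E).
//   sigma_cm2   : capture cross section of the trap type (cm^2), e.g. 1e-16 for deep traps
//   N_traps_cm3 : density of available traps of that type (cm^-3)
//   E_eV        : kinetic energy of the ballistic electron (eV)
//   gamma_eV    : damping parameter (0.2 eV^-1 in this work)
double CaptureInverseMFP(double sigma_cm2, double N_traps_cm3,
                         double E_eV, double gamma_eV = 0.2)
{
    // 1/lambda = sigma * N * exp(-gamma * E), returned in cm^-1
    return sigma_cm2 * N_traps_cm3 * std::exp(-gamma_eV * E_eV);
}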
Modeling of the detrapping
Detrapping by thermal activation
The charge carriers immobilized in any kind of trap are able to escape under the effect of thermal agitation. The escape frequency 𝑊(𝐸 𝑖 ) for a trap level of energy depth 𝐸 𝑖 follows a thermally activated law:
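The thermally activated law itself is not reproduced in this excerpt; for reference, escape frequencies of this kind are conventionally written in the Arrhenius form below, where the attempt-to-escape frequency ν₀ is a notation assumed here rather than taken from the text:

$$ W(E_i) = \nu_0 \exp\left(-\frac{E_i}{k_B T}\right) $$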
If the particle is located in vacuum but not above the surface of the material (x and/or y are greater than the width of the surface), we assume that the electric field is the same as in the previous case. Lastly, if the particle is in the Si layer, the electric field is equal to zero.
The trapping of low energy electrons is managed in the overhauled process G4ElectronCapture. As a Geant4 process, two phases are involved during the transport of the particle, which each require communication with the drift manager: the computation of the interaction mean free path, and the action to be done at the end of the step if the process is selected. These two phases are respectively handled by G4double G4ElectronCapture::GetMeanFreePath, which returns a mean free path to the Geant4 kernel, and
G4VParticleChange* G4ElectronCapture::PostStepDoIt, which tells Geant4 what to do when the interaction happens at the end of a step.
In G4ElectronCapture::GetMeanFreePath, the function G4double DriftManager::GetTrapMFPForDriftParticle(G4double depthInMeters, particleType part, trapType t) of the drift manager is called. As an input, it receives the depth of the particle, its type, and the type of trap involved. The type of particle is defined by the enum particleType, which can be a driftElectron, balisticElectron or driftHole. Here, we are in the case of a balisticElectron. The type of trap is defined by another enum trapType, it can be a shallowTrap or a deepTrap. The value of the type of trap is stored in the G4ElectronCapture object. In practice, two copies of G4ElectronCapture are created in the physics list. One handles the trapping of low energy electrons by shallow traps, and the other by deep traps. When the two G4ElectronCapture objects are created, we attribute to them the value of either shallowTrap or deepTrap for the type of trap. In the function of the drift manager GetTrapMFPForDriftParticle, the cell of the mesh in which the current particle is located is first determined. This is done by calling the function G4double DriftManager::GetMeshCell(G4double depth), which finds for a given depth 𝑧 the corresponding cell of the mesh 𝑧 𝑖 with a dichotomy method as always. Once the cell is found, we can retrieve the number of holes and electrons trapped in the cell, and compute the density of trapped electrons (e), holes (h) and free traps using Equation 5-9 for the charge densities as:
With 𝑛 𝑒/ℎ the number of trapped electrons/holes saved in the cell. We also use Equation 5-46 for the number of remaining free traps as:
Finally, from Equation 5-39, the function returns the capture inverse mean free path to the class G4ElectronCapture. The density of trapped particles and the cross sections vary depending on the type of trap (deep/shallow), which is why we need to have one capture process for each type of trap. The type of trap also needs to be given as input to the method GetTrapMFPForDriftParticle so that it can retrieve the correct densities and cross sections. The
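The cell-level bookkeeping described above can be summarized by the following framework-free sketch; all parameter names are illustrative assumptions, and the exact forms of Equations 5-9, 5-39 and 5-46 should be taken from the thesis rather than from this example.

#include <algorithm>

// Illustrative sketch of the quantity returned by GetTrapMFPForDriftParticle
// for one mesh cell: the capture inverse mean free path of a low energy
// electron by the traps of a given type (shallow or deep).
//   nTrappedElectrons, nTrappedHoles : carriers already trapped in the cell
//   cellVolume_cm3                   : irradiated area x cell thickness (cm^3)
//   totalTrapDensity_cm3             : N_S or N_D for the considered trap type (cm^-3)
//   sigma_cm2                        : capture cross section of that trap type (cm^2)
double TrapCaptureInverseMFP(double nTrappedElectrons, double nTrappedHoles,
                             double cellVolume_cm3, double totalTrapDensity_cm3,
                             double sigma_cm2)
{
    // Densities of trapped carriers in the cell (count / cell volume, as in Equation 5-9).
    const double Ne = nTrappedElectrons / cellVolume_cm3;
    const double Nh = nTrappedHoles / cellVolume_cm3;

    // Remaining free traps of this type (in the spirit of Equation 5-46).
    const double Nfree = std::max(0.0, totalTrapDensity_cm3 - Ne - Nh);

    // Inverse mean free path returned to G4ElectronCapture (Equation 5-39);
    // the energy damping of Equation 5-30 can then be applied for ballistic electrons.
    return sigma_cm2 * Nfree;
}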
List of publications and conferences |
04120108 | en | ["sdv.mhep"] | 2024/03/04 16:41:26 | 2022 | https://hal.univ-lorraine.fr/tel-04120108/file/DDOC_T_2022_0286_CONSOLI.pdf | Arturo Consoli
M.Sc Marco Pileggi
A T M Hasibul Hasan
M.D Alice Venier
M.D Alessandro Sgreccia
M.D Silvia Pizzuto
M.D Oguzhan Coskun
M.D Federico Di Maria
M.D Bertrand Lapergue
PhD Georges Rodesch
PhD Serge Bracard
M.D Bailiang Chen
LIST OF TABLES AND FIGURES
RESUME DE LA THESE EN FRANÇAIS
DEFINITION AND PRESENTATION OF THE THESIS
Keywords:
Background: Patients with acute ischemic stroke secondary to large vessel occlusions and good collaterals are frequently associated with favorable outcomes after mechanical thrombectomy (MT), although poor outcomes are observed also in this subgroup. We aimed to investigate the factors associated with unfavorable outcomes (mRS3-6) in this specific subgroup of patients.
Methods: 219 patients (117 females, mean age: 70.6±16 y.o.) with anterior circulation stroke and good collaterals (ASITN/SIR grades 3-4), treated by MT between 2016 and 2021 at our institution, were included in this study. Clinical files and neuroimaging were retrospectively reviewed. Univariate and multivariate analyses were performed to identify the predictors of unfavorable outcomes in the overall population (primary endpoint). Secondary endpoints focused on the analysis of the predictors of unfavorable outcomes in the subgroup of successfully recanalized patients, and on the predictors of mortality and symptomatic intracerebral hemorrhages (sICH) in the overall population.
Results: Poor outcome was observed in 47% of the patients despite the presence of good collaterals. Older age (p<0.001), baseline mRS2 (p<0.001), higher baseline NIHSS (p<0.001), longer onset-to-recanalization (p=0.03) and "groin-to-recanalization" times (p=0.004), no intravenous thrombolysis administration (no-IVT, p=0.004), >3 passes (p=0.01), partial recanalizations (mTICI 0-2a; p=0.002), higher 24h-NIHSS (p<0.001) and sICH (p<0.001) were associated with the primary endpoint. The multivariate analysis showed an independent correlation between unfavorable outcomes and older age, higher 24h-NIHSS, 24h-ASPECTS, no-IVT and secondary transfers.
Conclusions: Despite good collaterals, poor outcomes occurred in 47% of the patients. The main factors associated with unfavorable outcomes are comparable to those observed in patients with poor collaterals. Patients with good collaterals not receiving IVT were significantly associated with unfavorable outcomes, whereas FPE was not significantly correlated with clinical outcome in this specific cohort of patients.
Introduction
Les Accidents Vasculaires Cérébraux ischémiques (AVCi) représentent une pathologie grave, potentiellement mortelle causée par l'occlusion d'une artère cérébrale. La lésion ischémique déterminée par l'occlusion artérielle peut évoluer vers la nécrose du tissu cérébral si le flux sanguin n'est pas rapidement rétabli. Les AVCi se caractérisent par des taux élevés de mortalité et de morbidité et l'objectif de la prise en charge de cette maladie dans un contexte aigu est basé sur un diagnostic rapide et un choix approprié de traitement.
Les nouvelles preuves apportées par les récents essais contrôlés randomisés et publiés en 2015 ont permis le développement et la diffusion généralisée de la thrombectomie mécanique (TM), une procédure Malgré les excellents résultats obtenus depuis l'introduction de la MT en termes d'augmentation constante du taux de recanalisation des artères obstruées, le taux de bons résultats cliniques n'a pas suivi la même évolution.
Tous les patients traités par MT ne tirent pas bénéfice de ce type de procédure, et on a évoqué pour eux le concept de « reperfusions futiles ». L'hypothèse physiopathologique sous-jacente à ce phénomène semble s'appuyer sur la circulation collatérale (CC), un système d'anastomoses entre les artères léptoméningées corticales du cerveau qui pourrait alimenter le territoire d'une artère occluse par un flux rétrograde au travers de ces anastomoses.
Une attention croissante est accordée à la CC dans la littérature, bien que les informations sur son hémodynamique et la tolérance à l'évolution de l'ischémie restent encore très peu connues.
Objectifs de la thèse
L'objectif de cette thèse de sciences est d'analyser le rôle de la CC selon trois paramètres:
1. L'analyse de l'implication clinique de la CC en tant que facteur pronostique pour les patients présentant un AVCi; Le chapitre 4 est axé sur l'analyse critique de la CC selon les caractéristiques artériographiques décrites dans l'article: Angiographic collateral venous phase: a novel landmark for leptomeningeal collaterals evaluation in acute ischemic stroke. Consoli A, Pizzuto S, Sgreccia A, Di Maria F, Coskun O, Rodesch G, Lapergue B, Felblinger J, Chen B, Bracard S. L'article a été publié dans le
Journal of NeuroInterventional Surgery (JNIS).
Le dernier chapitre est consacré au développement d'un algorithme de post-traitement spécifique pour l'évaluation de la CC directement sur des artériographies 2D. Le flux de travail technique et l'ébauche de la demande de brevet seront présentés dans ce chapitre.
viii Chapitre 1 : INTRODUCTION Les AVCi représentent environ 80 % des AVC à la phase aiguë, tandis que les AVC hémorragiques représentent environ 20 % des cas. En France, les AVCi sont considérés comme la première cause de handicap dans la population adulte, la deuxième cause de démence et troisième cause de mortalité (Haute Autorité de Santé, 2009). Chaque année, environ 150 000 nouveaux AVCi sont signalés avec un taux de mortalité d'environ 15 à 20% au cours du premier mois après l'événement ischémique et environ 50% au cours de la première année [START_REF] Fery-Lemonnier | La prévention et la prise en charge des accidents vasculaires cérébraux en France[END_REF][START_REF] Chevreul | Cost of stroke in France[END_REF]. Le coût annuel de la gestion des AVCi avait été estimé à 8,9 milliards € en 2007, ce qui correspondait à aux 3 % du budget annuel du système de santé [START_REF] Chevreul | Cost of stroke in France[END_REF].
Actuellement, deux stratégies thérapeutiques principales sont disponibles pour le traitement de l'AIS: la thrombolyse intraveineuse (IVT) et la thrombectomie mécanique (MT). L'objectif de la MT est d'obtenir la recanalisation de l'artère obstruée de manière rapide, sûre et complète. Le nombre de procédures de MT suit une croissance constante dans le monde entier et en France environ 7500 procédures/année sont effectuées.
En revanche, certains sous-groupes de patients ne semblent pas bénéficier de la MT, ce qui détermine la condition d'un succès technique (la recanalisation de l'artère occluse) sans atteindre un résultat clinique favorable. Cette condition a été décrite dans la littérature comme une « recanalisation futile ». À l'heure actuelle, le taux de recanalisation futile se situerait entre 45 et 55 % (Goyal M, 2016;van Horn N et coll., 2021). Une possible explication physiopathologique de ce phénomène pourrait être recherchée dans le rôle de la CC.
La CC pourrait être définie comme un apport vasculaire rétrograde de la circulation cérébrale dans un territoire qui ne peut pas être vascularisé en raison de l'occlusion de l'artère qui perfuse normalement des zones cérébrales définies. La CC se développe grâce à des anastomoses préexistantes entre les artères corticales des territoires voisins.
Il est possible de soutenir que la CC intervient entre une macro-circulation, c'est-à-dire la circulation corticale cérébrale (les artères corticales et piales et les artérioles pénétrantes) et une microcirculation, conçue comme le réseau cortical des capillaires qui est responsable de l'autorégulation.
En effet, certains auteurs ont interprété la CC comme un « modulateur » pour la réponse vasculaire d'autorégulation et pour certains phénomènes hémodynamiques spécifiques tels que le couplage neurovasculaire [START_REF] Vasquez | Intracranial collateral circulation and its role in neurovascular pathology[END_REF].
La compréhension de l'hémodynamique des artères corticales du cerveau est encore limitée. Les valeurs de pression et la mesure du flux sanguin dans cette région découlent toujours des mesures ix effectuées au cours des interventions microchirurgicales après craniotomie et qui ont été réalisées à l'aide de microsondes dans patients sans occlusion vasculaires (Carter PL et coll., 1978;Bederson et coll., 1995). Ces travaux préliminaires ont permis de créer le substrat pour le développement technique de l'algorithme de caractérisation.
Chapitre 3: ANALYSE DE L'IMPACT CLINIQUE DE LA CIRCULATION COLLATE-
RALE
Le rôle de la CC dans la détermination des résultats cliniques favorables a été largement décrit dans la littérature. Plusieurs articles ont montré comment la présence des vaisseaux collatéraux est assox ciée à des résultats cliniques favorables après traitement endovasculaire [START_REF] Bang | Impact of collateral flow on tissue fate in acute ischaemic stroke[END_REF][START_REF] Bang | Collateral flow predicts response to endovascular therapy for acute ischemic stroke[END_REF][START_REF] Ribo | Extending the time window for endovascular procedures according to collateral pial circulation[END_REF], Liebeskind et al., 2014, Consoli et al., 2016[START_REF] Mangiafico | Effect of the Interaction between Recanalization and Collateral Circulation on Functional Outcome in Acute Ischaemic Stroke[END_REF][START_REF] Liggins | Interhospital variation in reperfusion rates following endovascular treatment for acute ischemic stroke[END_REF], Seners P et al., 2019, Tong et al., 2018[START_REF] Leng | Impact of Collateral Status on Successful Revascularization in Endovascular Treatment: A Systematic Review and Meta-Analysis[END_REF].
Compte tenu de l'ensemble de la population présentant un AVCi, il a été démontré qu'environ 45 à 55% des patients traité avec succès par TM n'atteindra pas une indépendance fonctionnelle (Goyal et al., 2016, van Horn et al., 2021).
Les patients ayant un bon profil collatéral sont généralement considérés comme les candidats les plus favorables pour la TM, car ceux-ci sont associés à des conditions cliniques moins graves et avec lésions ischémiques plus limitées (Consoli et al., 2016). Cependant, aussi les patients avec de bons collatéraux peuvent ne pas bénéficier de la TM, malgré un tableau de perfusion cérébrale et clinique favorable, bien qu'une analyse des facteurs qui déterminent un mauvais résultat clinique dans ce sous-groupe spécifique de patients n'ait pas déjà été effectuée.
Par conséquent, ce chapitre était axé sur l'analyse de l'impact clinique de la CC en tant que facteur Les résultats de l'étude UNCLOSE ont montré que même les patients ayant de bons collatéraux peuvent ne pas atteindre un résultat clinique favorable en termes, indépendamment de la technique endovasculaire ou du type de anesthésie utilisée. Les résultats globaux étaient partiellement similaires à ceux rapportés dans la littérature et pour les populations de patients avec une CC insuffisante. En effet, il s'agissait de l'un des premiers articles portant sur cette cohorte spécifique de patients avec une bonne CC.
Néanmoins, nous avons pu observer de mauvais résultats cliniques (mRS3-6) chez 47% des patients présentant une bonne CC évaluée à l'aide du score ASITN/SIR. Le taux de mortalité était de 13,6 %. Le rôle de la thrombolyse IV en tant que facteur « protecteur » est conforme aux résultats actuels rapportés par l'étude SWIFT DIRECT, récemment publiée dans le Lancet, qui a montré que l'association IVT + MT était associée à un meilleur résultat clinique que la TM seule (Fischer U et al., 2022).
Ces données sont particulièrement intéressantes si l'on considère que cette cohorte est considérée Hassen W et al., 2019).
Afin d'évaluer l'efficacité de la CC, nous avons proposé l'analyse de la CC sur la base du concept de la perméabilité du côté veineux des vaisseaux collatéraux. Selon cette théorie, l'efficacité de la CC serait également liée à l'efficacité des veinules de la garantie circulation effectuant plusieurs tâches, telles que le dégagement des emboles en aval, le maintien du flux sanguin et effet préventif sur l'adhésion plaquettaire (Tong et al., 2018).
Les résultats de l'étude UNCLOSE avaient montré comme la modalité d'évaluation actuelle de la CC par artériographie, en utilisant les outils actuels, pourrait ne pas être adaptée compte tenu de ce nouveau paradigme qui met en évidence le rôle des veines et des venules de la CC dans la physiopathologie des AVCi.
L'analyse du sous-groupe de patients avec de bonnes collatérales et des résultats défavorables et la subjectivité de la l'évaluation de la CC soulèvent certains points de débat. Cette étude rétrospective réalisée sur une population de 200 patients a permis d'analyser un paramètre pas encore évalué dans la littérature : la phase veineuse de la CC évaluée sur les images artériographiques. L'hypothèse de l'étude était centrée sur l'impact clinique de la visualisation des structures veineuses sous-corticales dans la région de la circulation collatérale. La présence de cette phase veineuse représenterait le signe indirect de la fonctionnalité de la CC, ce qui pourrait permettre de différencier la CC non seulement par rapport à son extension, mais surtout par rapport à la capacité de maintenir un flux fonctionnel dans les vaisseaux collatéraux.
L'étude a montré que l'analyse de la phase veineuse des collatéraux semble affiner l'évaluation de la CC. Bien que l'analyse basée sur l'échelle ASITN/SIR et l'analyse basée sur CVP aient montré des résultats globalement similaires et en ligne avec la littérature médicale (Anadani et al. 2022[START_REF] Mangiafico | Effect of the Interaction between Recanalization and Collateral Circulation on Functional Outcome in Acute Ischaemic Stroke[END_REF], Liebeskind et coll., 2022), il a été possible d'observer une association bien plus significative entre la présence de la phase veineuse collatérale (CVP+) et le résultat clinique favorable (mRS à 3 mois) ainsi qu'en termes de taux de transformation hémorragique et de mortalité en comparaison avec l'évaluation effectuée par l'échelle ASITN/SIR. De plus, l'analyse composite qui correspondait à l'évaluation combinée de la CC par l'échelle ASITN/SIR et la présence/absence de la phase veineuse collatérale ont fourni des résultats solides en termes d'association avec des résultats cliniques favorables (OR: 6.56, CI95% [START_REF] Qian | A meta! analysis of collateral status and outcomes of mechanical thrombectomy[END_REF]39]) et un risque plus faible de transformation hémorragique.
Une validation externe de ces résultats sera obligatoire afin de confirmer ces résultats, qui restent pour l'instant un outil générateur d'hypothèse.
Chapitre 5 : UN APPROCHE TECHNIQUE BASE SUR LA PHYSIOPATHOLOGIE : DÉ-VELOPPEMENT D'UN ALGORITHME POUR L'ÉVALUATION ANGIOGRAPHIQUE DE
LA CIRCULATION COLLATÉRALE
Les résultats de l'analyse clinique et critique de la circulation collatérale, qui ont été discutés dans les chapitres précédents ont soulevé plusieurs questions sur l'évaluation de la circulation de la CC. Afin de caractériser les angiographies, en particulier l'opacification liée au produit de contraste iodé, nous avons analysé de manière approfondie le comportement de la série artériographique, non seulement par régions (cérébrale, région collatérale, région du bassin versant, etc.) mais aussi par pixel afin de comprendre le comportement du contraste du tissu cérébral.
Analyse spatiale des régions observées
Les caractéristiques des images artériographiques ont été analysées sur la base d'une répartition dans 4 types de régions: la région cérébrale, région collatérale étendue, la région « critique » de territoire de dernier pré et la région collatérale pure, afin d'analyser les changements de trajectoire temporelle de l'opacification liée au produit de contraste des vaisseaux collatéraux dans le territoire cortical de l'artère cérébrale moyenne. Un premier processus de segmentation a été effectué manuellement à l'aide du logiciel ITK-SNAP.
Analyse semi-quantitative de la courbe de densité temporelle

L'évaluation de l'analyse de la courbe temps-densité basée sur la ROI a été effectuée en calculant les éléments des mappes paramétriques : time-to-peak (TTP), mean transit time (MTT), maximum enhancement projection (MEP). Alors que les interprétations étaient difficiles à réaliser sur les données TTP et MTT, les mappes MEP ont abouti à des résultats plus indicatifs pour donner la première différentiation de la CC en « bonne » et « mauvaise » (nommées respectivement Group-gc « Good collaterals » et Group-pc « Poor collaterals »).
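For reference, these ROI-based parametric quantities are commonly computed from the baseline-subtracted time-density curve C(t) as sketched below; the exact definitions used in the prototype are not detailed in this summary, so these standard forms are given as assumptions:

$$ \mathrm{TTP} = \arg\max_{t} C(t), \qquad \mathrm{MEP} = \max_{t} C(t), \qquad \mathrm{MTT} \approx \frac{\int t\, C(t)\, \mathrm{d}t}{\int C(t)\, \mathrm{d}t} $$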
Complexité vasculaire dans la région collatérale
Afin de décrire quantitativement l'observation précédente et aussi prouver l'hypothèse qu'une bonne/mauvaise CC peut être différencié par le niveau de complexité vasculaire de la CC, nous avons utilisé la métrique appelée « dimension fractale », Ds, une méthode mathématique récente qui a été largement utilisée pour quantifier les vaisseaux rétiniens et les profils osseux trabéculaires en radiographie ou tomodensitométrie.
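The fractal dimension Ds mentioned above is typically estimated by box counting on the segmented vascular mask; the standard estimator below is quoted as background, since the summary does not specify the implementation:

$$ D_s = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)} $$

where N(ε) is the number of boxes of side ε needed to cover the segmented collateral vessels.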
Concepts physiopathologiques intégrés
Pendant les travaux de la thèse, deux concepts physiopathologiques ont été développés et intégrés dans le développement de l'algorithme : la désynchronisation et les « turning points ».

... performed through an endovascular approach to remove the clot occluding a cerebral artery.
Désynchronisation
Despite the excellent results obtained with the introduction of MT in terms of a constant increase in the recanalization rate of the occluded arteries, the rate of favorable clinical outcomes did not follow the same trend. Therefore, it was clear that not all the patients treated by MT could benefit from this type of procedure, configuring the so-called "futile reperfusions".
The physiopathological hypothesis underlying this phenomenon seems to rely on the collateral circulation (CC), a system of anastomoses between the cortical leptomeningeal arteries of the brain that could supply the territory of an occluded artery through a retrograde flow sustained by these anastomoses. Furthermore, an increasingly growing attention is being paid to CC in the literature, although very little is known about its hemodynamics.
The interest in CC arose during my Residency program in Radiology at the Careggi University Hospital in Florence (2010-2015), where I started to build my scientific activity under the supervision of my first mentor, Dr Salvatore Mangiafico, the first interventional neuroradiologist in the world to perform a procedure of MT. During those years the knowledge about CC was very limited, as was the therapeutic management of AIS.
I started to study this subject in depth, since I could clearly observe that this type of circulation played a major role in the prognosis and evolution of AIS. I participated in the publication of the first two papers about collaterals in 2013 [START_REF] Mangiafico | Semi-quantitative and qualitative evaluation of pial leptomeningeal collateral circulation in acute ischemic stroke of the anterior circulation: the Careggi Collateral Score[END_REF] and 2014 [START_REF] Mangiafico | Effect of the Interaction between Recanalization and Collateral Circulation on Functional Outcome in Acute Ischaemic Stroke[END_REF].
The final declared objective of this algorithm will be the assessment of the flow velocity within the CC.
Aim of the PhD thesis
The aim of my PhD Thesis was to investigate the role of the CC through:
1. The analysis of the clinical implication of the CC as a prognostic factor for patients with AIS;
2. A critical analysis of the angiographic images in order to evaluate a possible correlation with the physiopathologic events of AIS;
3. The development of a dedicated post-processing algorithm for 2D angiograms and the comparison with a pre-existing algorithm.
The PhD thesis will be constituted of three different sections:
-Section 1: INTRODUCTION
An introductory chapter (Chapter 1) that will provide an overview of AIS, the diagnosis, the possible treatments and the clinical challenges. This introductory part will also serve to acquaint the reader with the concept and the definition of the CC and the role of CC in AIS. All these notions will be further developed in the following sections;
-Section 2: PREFATORY RESULTS
A second chapter (Chapter 2) that summarizes the prefatory results obtained during my previous scientific experiences in the field of CC during my Residency Program and during the Master 2.
The following paper will be presented:
Consoli A, Andersson T, Holmberg A, Verganti L, Saletti A, Vallone S, Zini A, Cerase A, Romano D, Bracco S, Lorenzano S, Fainardi E, Mangiafico S; CAPRI Collaborative Group. CT perfusion and angiographic assessment of pial collateral reperfusion in acute ischemic stroke: the CAPRI study.
The paper was published in the Journal of Neurointerventional Surgery (JNIS) in 2016.
These preliminary results provided the basic concepts for the present work.
-Section 3: PhD ENDPOINTS
Three separate chapters (Chapters 3-5) will focus on the three aforementioned endpoints of the PhD thesis.
Chapter 3 will be centered on the clinical relevance of the CC as a prognostic factor in AIS.
This aspect was analyzed in the paper:
UNfavorable CLinical Outcome in patients with good collateral Scores: the UNCLOSE study.

- cardioembolic (due to valvular or non-valvular cardiac diseases, atrial fibrillation, intra-cardiac tumors such as myxomas, valvular prosthetic implantation),
- atheromatous (secondary to the rupture of an unstable atheromatous plaque located in the arterial wall of the carotid bifurcation/in the aortic arch, or to the presence of an atheromatous plaque in a cerebral artery with in-situ thrombosis),
- infectious (in case of bacterial/mycotic/viral endocarditis),
- tumoral (secondary to the hyper-coagulation induced by the biochemical reactions caused by the cancerous lesion).

Hemorrhagic strokes are mainly linked to the rupture of cerebral vascular structures (arteries, capillaries, veins) secondary to aging or to the development of malformative diseases (aneurysms, ArterioVenous Malformations - AVMs, ArterioVenous cortical or dural fistulae - DAVF). According to the type of bleeding we can distinguish:

- typical intra-parenchymal hematomas, which are frequently observed in the elderly population, with associated hypertension and ongoing vascular aging phenomena (amyloid deposits, arteriosclerosis), in the so-called "typical" localizations such as the putamen, the lenticular nucleus and deep cerebral territories;
- atypical intra-parenchymal hematomas, which are often observed in younger patients without known vascular risk factors and which are located in atypical sites, such as the cerebral lobes, superficial territories and the posterior cranial fossa (except for the pons). This subtype of hemorrhagic stroke is frequently linked to vascular malformations (AVMs/DAVFs, dural or cortical fistulae);
- sub-arachnoid hemorrhages (SAH), which are mainly caused by the rupture of a cerebral aneurysm, whose fissuration leads to a blood collection in the sub-arachnoid spaces surrounding the brain. Other diseases, such as DAVFs and cortical AVMs, can also cause SAH, although this presentation is quite rare.
Epidemiology of Acute Ischemic stroke (AIS)
AIS represent about 80% of acute strokes, while hemorrhagic strokes account for about 20% of the cases. In France, AIS is considered the first cause of handicap in the adult population, the second cause of dementia and the third cause of mortality (Haute Autorité de Santé, 2009).
Every year about 150,000 new AIS are reported, with a mortality rate of about 15-20% during the first month after the ischemic event and about 50% during the first year. This rate varies according to the type, localization and extension of the ischemic lesion. Among the survivors, the morbidity rate is about 50-75% because of the persistence of a neurological deficit (Fery-Lemonnier E., 2009; [START_REF] Chevreul | Cost of stroke in France[END_REF]).
Socio-financial burden of AIS
The annual cost of AIS management had been estimated at 8.9 billion € in 2007, which corresponded to 3% of the annual budget of the Healthcare system [START_REF] Chevreul | Cost of stroke in France[END_REF]. The median cost of direct expenses for each ischemic event is estimated at 16,686 €/patient during the first year [START_REF] Berkhemer | A Randomized Trial of Intraarterial Treatment for Acute Ischemic Stroke[END_REF].
Furthermore, one should consider the indirect costs, such as those linked to rehabilitation programs and the loss of productivity, which had been estimated at 2.4 billion € and 255.9 million €, respectively [START_REF] Berkhemer | A Randomized Trial of Intraarterial Treatment for Acute Ischemic Stroke[END_REF].
Treatment of AIS patients
As aforementioned, AIS are secondary to the occlusion of a cerebral artery. However, the localization of the occlusion site allows a distinction between lacunar AIS, which are caused by the occlusion of small arterial branches (perforating or isolated cortical branches) due to small clots, and major strokes, which are due to the occlusion of large arteries of the circle of Willis (the so-called "large vessel occlusions" - LVO). In general, lacunar strokes are associated with a more favorable prognosis and fewer neurological sequelae, while LVOs are mostly correlated with worse clinical outcomes and higher mortality rates (Goyal et al., 2016).
Most of the AIS is located in the carotid territory (anterior circulation, about 80-85% of the cases)
and less than 20% are located in the vertebro-basilar territory (posterior circulation).
Clinical severity of AIS depends on the localization of the occluded artery and on the extension of the ischemic lesion in the involved territory. Indeed, the occlusions of an artery of the vertebro-basilar system are associated with higher mortality rates, although these are less frequent. As far as the anterior circulation is concerned, the occlusions of the carotid siphon of the internal carotid artery (ICA) are associated with more severe clinical conditions as compared with the isolated occlusions of the middle cerebral artery (MCA).
Currently, two main therapeutic strategies are available for the treatment of AIS: the intravenous thrombolysis (IVT) and the mechanical thrombectomy (MT)
-IVT relies on a recombinant tissue plasminogen activator; therefore, this drug allows clot lysis. The ECASS-III trial [START_REF] Hacke | Thrombolysis with alteplase 3 to 4.5 hours after acute ischemic stroke[END_REF] had shown that patients could benefit from IVT administration (Alteplase) during the first 4.5 hours from the onset of the neurological symptoms, concluding that there was a significant improvement of clinical outcome without an increased risk of hemorrhagic transformation of the ischemic lesion. More recent trials, such as DEFUSE-3 (Albers et al., 2017), showed the same clinical benefit without a temporal window limitation, selecting patients instead according to the perfusional status of the brain at the moment of IVT administration.
The effectiveness of IVT seems to be related to the clot length, since it has been shown that clots longer than 8 mm are significantly less responsive to IVT with Alteplase [START_REF] Campbell | Clot Length Assessment in Stroke Therapy Decisions[END_REF]. However, newer lytic molecules, such as Tenecteplase, showed even better results in terms of effectiveness and safety profile, slightly increasing the recanalization rates (Seners P et al., 2019). Current guidelines of the AHA and the ESO [START_REF] Powers | Guidelines for the Early Management of Patients With Acute Ischemic Stroke: 2019 Update to the 2018 Guidelines for the Early Management of Acute Ischemic Stroke: A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association[END_REF] (Berge E et al., 2019) consider IVT as a first-line treatment for AIS in patients without contra-indications to IVT, in association with MT in case of LVOs.
-Mechanical Thrombectomy (MT) is a minimally invasive, endovascular procedure performed by Interventional neuroradiologists that allows the removal of the clot which occludes the cerebral artery. Different endovascular techniques are used to perform MT and in particular: the stent-like retriever (SR), the Contact Aspiration (CA) and the Combined Technique (CoT). All these techniques are performed through a femoral/radial/carotid arterial access (through the navigation of the aorta, the aortic arch and of vertebral or carotid arteries, always under angiographic control)
and these allow the operator to reach the occlusion site and to capture and/or aspirate the clot that obstructs the cerebral artery.
The SR technique is based on capturing the clot with dedicated devices (stent-like retrievers) which can be deployed and re-sheathed after having trapped the clot within their struts. CA is based on the navigation of large-bore aspiration catheters up to the proximal surface of the clot and on continuous aspiration, performed either manually or through a mechanical pump, which allows the clot to be engaged inside the aspiration catheter and removed by simple aspiration. Recently, their combined use (SR+CA), referred to as the CoT, showed similar results in terms of effectiveness, without however showing a superiority compared to SR alone [START_REF] Seners | Recanalization before Thrombectomy in Tenecteplase vs. Alteplase-Treated Drip-and-Ship Patients[END_REF].
Recent Randomized Controlled Trials (RCTs) showed the significant benefit of MT for patients with LVOs of the anterior circulation [START_REF] Jovin | Thrombectomy within 8 hours after symptom onset in ischemic stroke[END_REF][START_REF] Saver | Stent-retriever thrombectomy after intravenous t-PA vs. t-PA alone in stroke[END_REF][START_REF] Goyal | Randomized assessment of rapid endovascular treatment of ischemic stroke[END_REF][START_REF] Campbell | Endovascular Therapy for Ischemic Stroke with Perfusion-Imaging Selection[END_REF]. Furthermore, also patients with posterior circulation AIS seem to benefit from MT [START_REF] Tao | Endovascular treatment for acute basilar artery occlusion: A multicenter randomized controlled trial (ATTENTION)[END_REF][START_REF] Li | Basilar Artery Occlusion Chinese Endovascular Trial: Protocol for a prospective randomized controlled study[END_REF], although previous trials weren't able to show its superiority compared to the best medical management.
The aim of MT is to achieve the recanalization of the occluded artery rapidly, safely and completely. The number of MT procedures is constantly growing throughout the world and in France about 7500 procedures/year are performed.
Patients selection
As aforementioned, RCTs showed the evidence of the benefit of MT in AIS treatment as well as the need to properly select patients to treat.
Indeed, some subgroups of patients seem not to benefit from MT, which results in a technical success (the recanalization of the occluded artery) without a favorable clinical result. This condition has been described in the literature as "futile recanalization". Currently, the rate of futile recanalization is reported to be between 45 and 55% (Goyal M, 2016; van Horn N et al., 2021), meaning that almost half of the patients treated by MT will not have a favorable clinical outcome. A possible explanation of this phenomenon could be an inappropriate patient selection. During the last years we have observed a progressive widening of the indications for MT in terms of temporal window, age limit and clot location. On the other hand, a larger number of patients have had access to MT compared to the last decade.
Patient selection is currently performed through MRI (in particular in France and Switzerland) and more widely through CT scan (in most European and US countries). These imaging methods allow the assessment of the extent of the ischemic lesion through the ASPECT score (Fig. 1), the localization and the length of the occlusion, as well as the presence of a collateral circulation at the moment of the treatment. A new paradigm of AIS management is currently being proposed by some recent ongoing studies.
Moreover, it has been clearly shown that patients with LVOs and limited ischemic lesions represent the best target for MT and IVT in order to achieve a favorable clinical outcome [START_REF] Nogueira | Thrombectomy 6 to 24 Hours after Stroke with a Mismatch between Deficit and Infarct[END_REF] (Consoli A et al., 2016), while patients with larger ischemic lesions treated in very late windows represent the subgroup for which a clear answer has not been provided yet; an ongoing clinical trial, IN EXTREMIS (https://www.inextremis-study.com/), will hopefully provide solid data about this issue.
The assessment of the collateral circulation
As previously mentioned, collateral circulation assumes a central role in patients' selection.
Collateral circulation (CC) has been widely described in the literature [START_REF] Mangiafico | Effect of the Interaction between Recanalization and Collateral Circulation on Functional Outcome in Acute Ischaemic Stroke[END_REF] (Consoli et al., 2016; Rocha M et al., 2017; Qian J et al., 2020), although its hemodynamic characteristics remain largely unknown.
CC could be defined as a retrograde vascular supply of the cerebral circulation in a territory which cannot be fed because of the occlusion of the artery that normally vascularizes defined cerebral areas. CC develops thanks to pre-existing anastomoses between the cortical arteries of neighboring territories (e.g., in case of occlusion of the MCA, the CC develops through anastomoses between the cortical branches of the MCA and the arteries of the ipsilateral Anterior Cerebral Artery - ACA). From a physiopathologic point of view, CC reflects the expression of what we could name the "hemodynamic reserve" of the brain during ischemia. Previously reported studies showed how the development of CC is definitely individual and subjective [START_REF] Khatri | Leptomeningeal collateral response and computed tomographic perfusion mismatch in acute middle cerebral artery occlusion[END_REF][START_REF] Seyman | The collateral circulation determines cortical infarct volume in anterior circulation ischemic stroke[END_REF].
Currently, both CT-Angiography with a multiphase acquisition protocol [START_REF] Menon | Multiphase CT Angiography: A New Tool for the Imaging Triage of Patients with Acute Ischemic Stroke[END_REF] and MR-Angiography [START_REF] Hartkamp | Circle of Willis collateral flow investigated by magnetic resonance angiography[END_REF][START_REF] Kim | Multiphase MR Angiography Collateral Map: Functional Outcome after Acute Anterior Circulation Ischemic Stroke[END_REF] could provide useful information about the extension of CC, however without providing further information about the qualitative measurement of the collateral vessels.
On the other hand, DSA, which is performed through the injection of iodinated contrast medium directly into the artery, provides a more sensitive assessment of CC in terms of extent and effectiveness, based on its dynamic profile (Consoli A et al., 2016; [START_REF] Mangiafico | Semi-quantitative and qualitative evaluation of pial leptomeningeal collateral circulation in acute ischemic stroke of the anterior circulation: the Careggi Collateral Score[END_REF]).
The role of the collateral circulation in AIS
The endovascular treatment of AIS is continuously evolving. The new devices (stent-like retrievers, new aspiration catheters, new angiographic machines) are considered among the most important technological novelties in the neuroradiological field, both diagnostic and interventional. However, such technical developments need to be accompanied by an evolution in the comprehension and understanding of the disease.
When we consider the anatomy of the cortical and subcortical arteries, we can observe how the cortical arteries which run on the pial surface of the brain give rise to the penetrating arterioles. These arterioles feed the microcirculation which is mainly constituted by the capillaries located in the cerebral cortex and in the sub-cortex (Fig. 3).
(Figure source: Nat Neurosci 10, 1369-1376 (2007), https://doi.org/10.1038/nn2003)
The microcirculation is responsible for the cerebral autoregulation and, therefore, for the maintenance of the perfusion of the cortex and of the subcortical area. In case of AIS with a LVO, the perfusion of the microcirculation can be maintained through some anastomoses that occur between the cortical arteries of two neighbor vascular territories [START_REF] Iadecola | Glial regulation of the cerebral microvasculature[END_REF][START_REF] Kunz | Cerebral vascular dysregulation in the ischemic brain[END_REF].
These anastomoses, which constitute the CC, provide a retrograde filling of the penetrating arterioles and of the microcirculation and, consequently, the maintenance of the cerebral perfusion during the occlusion of an artery [START_REF] Iadecola | Glial regulation of the cerebral microvasculature[END_REF].
It is possible to argue that the CC intervenes between a Macro-circulation, meant as the cortical cerebral circulation (the cortical and pial arteries and the penetrating arterioles) and a microcirculation, intended as the cortical network of capillaries which is responsible for autoregulation. Indeed, some Authors pointed out the CC as a "modulator" for the vascular autoregulation response and for some specific hemodynamic phenomena such as the neurovascular coupling [START_REF] Vasquez | Intracranial collateral circulation and its role in neurovascular pathology[END_REF].
When an infarcted territory is retrogradely perfused through the CC it can be maintained in a "stunned" condition and since the autoregulation can be assured by the CC this part of the brain can be considered as salvageable in case of recanalization of the occluded artery.
Oppositely, if the CC is not present, the territory which is not fed by the occluded artery will rapidly evolve to necrosis with a definitive loss of function of that area even in case of a complete recanalization of the occluded artery.
Indeed, if we consider that the recanalization rates and the capacity of clot retrieval have increased up to 90% of the cases, favorable clinical outcomes are still observed in only 45-55% of patients. The aforementioned "futile recanalizations" reflect the mismatch between the technological development and the stagnation of the understanding of the physiopathology of AIS.
Therefore, CC has been proposed as a plausible explanation to this phenomenon, since the presence of this retrograde circulation allows the brain to be fed during the period of the occlusion of the cerebral artery and a certain level of cerebral perfusion can be maintained during the ischemia.
On the other hand, the lack of this CC is associated with a poor and critical perfusional status of the brain without a clinical benefit for patients treated by MT (Consoli A et al., 2016;[START_REF] Mangiafico | Semi-quantitative and qualitative evaluation of pial leptomeningeal collateral circulation in acute ischemic stroke of the anterior circulation: the Careggi Collateral Score[END_REF].
These concepts can be resumed in the definition of the "slow" and "fast progressors". Indeed, patients with good collaterals have been reported to achieve good clinical outcomes even in late temporal windows [START_REF] Liebeskind | Collateral Circulation in Thrombectomy for Stroke After 6 to 24 Hours in the DAWN Trial[END_REF] mainly because they harbored a good CC. The physiopathological explanation would lay in the effect of collaterals to maintain a level of cerebral perfusion during the ischemia, sustaining the ischemic penumbra and, therefore, the possibility to rescue the brain tissue from necrosis, even in patients treated in late windows, determining a slow progression of the ischemia (Rocha et al., 2017, Mohammaden MH et al., 2022).
Hemodynamics of cortical arteries and evaluation of cerebral blood flow velocity
The understanding of the hemodynamics of the cortical arteries of the brain is still limited. Pressure values and measurements of blood flow in this region still derive from measurements performed during microsurgical procedures after craniotomy, using microprobes in patients without LVOs [START_REF] Carter | Regional Cortical Blood Flow at Craniotomy[END_REF][START_REF] Bederson | Cortical Blood Flow and Cerebral Perfusion Pressure in a New Noncraniotomy Model of Subarachnoid Hemorrhage in the Rat[END_REF].
However, no data are currently available in the setting of AIS and during an arterial occlusion, when we could observe the activation of the cortical anastomoses which provide a retrograde cortical circulation.
Several papers focused on the issue of the measurement of blood flow velocity in cerebral arteries with non-invasive methods such as CT-Angiography or MR-Angiography [START_REF] Thierfelder | Color-coded cerebral computed tomographic angiography-Implementation of a convolution-based algorithm and first clinical evaluation in patients with acute ischemic stroke[END_REF][START_REF] Menon | Assessment of leptomeningeal collaterals using dynamic CT angiography in patients with acute ischemic stroke[END_REF][START_REF] Smit | Timing-invariant imaging of collateral vessels in acute ischemic stroke[END_REF], whereas the assessment directly performed on the DSA remains still limited [START_REF] Muehlen | Noninvasive Collateral Flow Velocity Imaging in Acute Ischemic Stroke: Intraindividual Comparison of 4D-CT Angiography with Digital Subtraction Angiography[END_REF].
Several groups focused on the correlation between the CC and perfusional imaging [START_REF] Galinovic | The ratio between cerebral blood flow and Tmax predicts the quality of collaterals in acute ischemic stroke[END_REF][START_REF] Cortijo | Relative cerebral blood volume as a marker of durable tissue-at-risk viability in hyperacute ischemic stroke[END_REF]) and more recently on the dynamic evaluation of collateral vessels, in particular using dedicated scores such as the Cortical Veins Opacification score (COVES), which were applied either on CT-Angiography or on MR/CT-Perfusion (Faizy et al., 2021;Faizy et al, 2021;[START_REF] Singh | Time-resolved assessment of cortical venous drainage on multiphase CT angiography in patients with acute ischemic stroke[END_REF].
Section II -PREFATORY RESULTS
PREFATORY RESULTS
The CAPRI Study
The impact of the CC for AIS had already been extensively investigated in the literature. Several
Authors had highlighted the unpredictable aspect of the development of these cortical anastomoses, since no correlations were found with age, sex or related vascular risk factors [START_REF] Lazzaro | The impact of diabetes on the extent of pial collaterals in acute ischemic stroke patients[END_REF]Arsava EM et al., 2014[START_REF] Christoforidis | Predictors for the extent of pial collateral recruitment in acute ischemic stroke[END_REF].
Although the concept of CC was anatomically clear, no explanation had been provided concerning the role of CC in the physiopathology and in the early 2010s the evolution of the perfusional assessment in the setting of AIS was starting to be settled.
It had frequently been recalled how CC would represent the expression of the perfusional status of the brain at the moment of the ischemia, suggesting the role of a vascular "reservoir" supplying a vascular territory which was not fed by the related occluded artery (Consoli A et al., 2016).
However, no clear proof of a correlation between CC and the perfusional status had been provided at that time.
After having published a novel grading scale for CC (the Careggi Collateral Score, CCS) directly based on the 2D-angiograms [START_REF] Mangiafico | Semi-quantitative and qualitative evaluation of pial leptomeningeal collateral circulation in acute ischemic stroke of the anterior circulation: the Careggi Collateral Score[END_REF], I had the possibility to coordinate a multi centric collaborative group that included five high-volume Italian centers focused on Interventional
Neuroradiology and a renowned European institution, the Karolinska Institutet in Stockholm.
This collaborative group aimed to investigate, in patients with AIS of the anterior circulation, the correlation between CT-Perfusion, which provided perfusional color-maps showing the extent of the ischemic core (the brain tissue that would not have been rescued even in case of a complete recanalization) and of the ischemic penumbra (the salvageable cerebral tissue), and DSA performed at the moment of the MT procedure.
The results of this analysis were published in the Journal of NeuroInterventional Surgery.
The main hypothesis was that high grades of CCS, meaning a good collateral status, were associated with favorable perfusional patterns, such as small ischemic cores and large ischemic penumbras.
This hypothesis was analyzed in the paper: CT perfusion and angiographic assessment of pial collateral reperfusion in acute ischemic stroke: the CAPRI study, published in the Journal of Neurointerventional Surgery in 2016.
The results of the CAPRI Study showed that a clear correlation between the collateral grade and the perfusional status of the brain could be established.
The perfusional status had been evaluated mainly on the CBV (Cerebral Blood Volume) color-maps which were a sensitive marker of the ischemic core, according to the literature [START_REF] Cortijo | Relative cerebral blood volume as a marker of durable tissue-at-risk viability in hyperacute ischemic stroke[END_REF] and the extension of the ischemic core was assessed using the ASPECT score.
Finally, we could observe that patients with a good collateral status were more likely to be associated with significantly higher rates of favorable clinical outcomes (Fig. 4).
The preliminary results at CIC-IADI Laboratory
After the publication of the CAPRI study, several papers focused on the correlation between collaterals and cerebral perfusion [START_REF] Galinovic | The ratio between cerebral blood flow and Tmax predicts the quality of collaterals in acute ischemic stroke[END_REF]Ginsberg MD, 2018[START_REF] Menon | Neuroimaging in Acute Stroke[END_REF].
Furthermore, in the last 5 years several technical developments were introduced in the study of cerebral perfusion through CT or MR Perfusion-Weighted Imaging (PWI), and a large number of automated software packages have been released in order to rapidly calculate the cerebral perfusional status.
These packages, such as the RAPID® or the OLEA® software, allow extrapolation of a quantitative assessment of the ischemic core and the ischemic penumbra and provide color maps almost in real time. On the other hand, no further advancements have been made concerning DSA, except for some sporadic attempts [START_REF] Muehlen | Noninvasive Collateral Flow Velocity Imaging in Acute Ischemic Stroke: Intraindividual Comparison of 4D-CT Angiography with Digital Subtraction Angiography[END_REF].
The main advantage of DSA is to provide a dynamic visualization of the intracranial circulation and of the CC thanks to the transit of the iodinated contrast medium within the vessels. Contrast passes through the arteries, then to the capillaries and finally runs through the veins to be drained towards the heart and secondarily to the pulmonary circulation to be re-oxygenated. The algorithms used by CT-Perfusion or MR-PWI are based on the same principle: after contrast injection, the transit time of the contrast itself is calculated and, according to the different transit times in the cerebral areas, a quantitative assessment is provided. Color-maps are generated according to the delay of transit of the contrast medium.
Intuitively, the transit time in a cerebral territory vascularized by an artery that is occluded will be much longer than the one measured in a territory where the feeding artery is patent. However, the presence of collateral vessels determines the maintenance of a certain amount of blood flow in the infarcted cerebral territory. This phenomenon is thought to support the concept of the ischemic penumbra: the cerebral territory receiving the blood flow through CC is maintained "stunned" and it remains salvageable if the occluded artery is properly recanalized.
However, it is known that several biochemical events occur at the level of the CC, such as in-situ thrombosis, uncontrolled platelet adhesion and distal migration of proximal emboli (downstream emboli). Therefore, one could argue that although a CC is present and extensively visible either on the non-invasive imaging methods (CT- or MR-Angiography) or directly on the DSA, the collateral vessels might not be effective because of the aforementioned biological phenomena. Indeed, the thrombosis of these collateral vessels would determine a slowdown in the CC, which could not provide the retrograde filling of the microcirculation (Fig. 3).
The concept of the effectiveness of CC represented the main target of my Master 2 project.
The aim of the Master 2 project was to focus on the analysis of the 2D angiograms acquired during the DSA, which represent the direct imaging method used to perform MT (Fig. 5). Indeed, based on the assumption that the visualization of the CC could provide useful information about the extension of the collateral vessels but not necessarily about their effectiveness, the goal was to build up a segmentation algorithm applicable directly to the 2D-DSA images in order to analyze the flow velocity within the CC, which could be considered as a surrogate of the effectiveness of these collateral vessels.
Since the setting of a complex algorithm requires a considerable time, we had decided to focus the Master 2 objective on the first step of the process: the acquisition of the angiographic data. 2D-DSA images are acquired by the angiographic machine while a bolus of iodinated contrast medium is injected.
In order to analyze a dynamic process, such as the visualization of the CC, we argued that the standard acquisition protocols currently used in the clinical practice could not be sufficiently informative.
Methods
In order to properly analyze DSA images we had primarily defined an "experimental" acquisition protocol for the DSA images to be compared to a "standard" acquisition protocol which is already set on the angiographic machines. The standard clinical protocols are based on a 2 frames/second frequency with a variable number of frames during the time intervals of the acquisition. The "experimental" protocol was made up of a 6 frames/second frequency with a variable but higher number of frames acquired during the time intervals of the acquisition. The experimental protocol is already available in the basic setup of angiographic machines, but it is not systematically used in standard care. The acquisition is limited to a single angiogram and therefore there is no modification of the procedural time and no significant difference in terms of radiation exposure. The objective was to find an acquisition method with a sufficient frame rate to provide information on the contrast behavior reflecting the pre-treatment CC functionality. The CC functionality is therefore evaluated by the enhancement speed of the CC area by the contrast agent.
Therefore, a first basic version of an algorithm for vessel segmentation using Fuzzy C-means was developed, capable of providing perfusion-like curves according to the pixel opacification in the CC area, after normalization of the results by the area of the involved brain hemisphere. A clinical analysis was performed in order to observe the impact of the visualization of CC on clinical outcomes, and in particular on the 3-month modified Rankin Scale (mRS, Table 1), a widely used scale to assess functional independence at 3 months after the ischemic event.
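The following is a minimal sketch, not the laboratory implementation, of the segmentation and curve-extraction step described above: a two-cluster Fuzzy C-means on pixel intensities separates opacified vessels from background in each frame, and the vessel signal inside the collateral ROI is normalized by the hemisphere ROI area to obtain a perfusion-like curve. The frame stack and the ROI masks are assumed to be preloaded NumPy arrays.

```python
import numpy as np

def fuzzy_cmeans_1d(values, n_clusters=2, m=2.0, n_iter=100, tol=1e-5):
    """Plain 1-D Fuzzy C-means on pixel intensities; returns cluster centers and memberships."""
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, values.size))
    u /= u.sum(axis=0)                              # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ values) / um.sum(axis=1)    # weighted cluster centers
        dist = np.abs(values[None, :] - centers[:, None]) + 1e-12
        new_u = 1.0 / (dist ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=0)                  # standard FCM membership update
        if np.max(np.abs(new_u - u)) < tol:
            u = new_u
            break
        u = new_u
    return centers, u

def perfusion_like_curve(frames, collateral_roi, hemisphere_roi):
    """frames: (T, H, W) DSA stack; ROIs: boolean masks.
    Contrast lowers pixel values, so intensities are inverted before clustering."""
    curve = []
    for frame in frames:
        inverted = frame.max() - frame.astype(float)
        vals = inverted[collateral_roi]
        centers, u = fuzzy_cmeans_1d(vals)
        vessel_cluster = np.argmax(centers)         # most opacified cluster = vessels
        vessel_signal = (u[vessel_cluster] * vals).sum()
        curve.append(vessel_signal / hemisphere_roi.sum())  # normalization by hemisphere area
    return np.array(curve)
```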
Table 1. The modified Rankin Scale.
Imaging analysis aspects
The workflow of the technical process is summarized in Fig. 6. All the 2D DSA data were firstly manually segmented through the ITK-SNAP software (www.itksnap.org) [START_REF] Yushkevich | USse-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability[END_REF]. I have performed the manual segmentation of the collateral region and of the hemisphere area for each patient, carefully identifying the collateral vessels on the first frame where these were visible until the last frame.
Indeed, since DSA is a dynamic imaging method the progression of the contrast agent is observed in different circulatory phases, passing through the artery (arterial phase), capillaries (parenchymal/ capillary phases) and the veins (venous phase). The contrast enhancement changes were computed based on the segmentation of CC arteries which led to a time-enhanced artery perfusion curve.
Results and perspectives
We observed that the perfusional curves obtained through the post-processing of the experimental protocol at 6 frames/second were more precise and informative compared to those obtained with a standard 2 frames/second protocol.
Intuitively, the experimental protocol provided a higher amount of data in a definite time window, which represents a major advantage to analyze CC, since this type of collateral circulation is visualized during the whole angiographic run and in some cases all along the three different phases (arterial, parenchymal and venous). Since the basic version of the algorithm is sensitive to the opacification of the vessels and to the pixel density modifications over time, a 6 frames/second acquisition protocol is associated with a higher number of vessels and pixel modification to analyze.
This effect determines the achievement of more precise perfusional curves, which is particularly suited to a dynamic circulation visible over different time points (Fig. 7). Furthermore, we analyzed the clinical outcomes of the 10 patients included in the analysis. All the patients were completely or adequately recanalized according to the modified Thrombolysis In Cerebral Infarction score [START_REF] Dargazanli | Modified Thrombolysis in Cerebral Infarction 2C/Thrombolysis in Cerebral Infarction 3 Reperfusion Should Be the Aim of Mechanical Thrombectomy: Insights From the ASTER Trial (Contact Aspiration Versus Stent Retriever for Successful Revascularization)[END_REF] (mTICI score, Table 2). Although it is known that the recanalization grade is a strong predictor of a favorable clinical outcome, a certain number of patients do not reach functional independence after MT (about 45-55%) (Goyal M, 2016; van Horn N et al., 2021).
Table 2. The modified Thrombolysis In Cerebral Infarction (mTICI) scale.
Grade 0: No perfusion or anterograde flow beyond the site of occlusion.
Grade 1: Penetration but not perfusion. Contrast penetration exists past the initial obstruction but with minimal filling of the normal territory.
Grade 2: Incomplete perfusion, wherein the contrast passes the occlusion and opacifies the distal arterial bed, but the rate of entry into or clearance from the bed is slower or incomplete when compared to non-involved territories.
Grade 2a: Some perfusion, with distal branch filling of <50% of the territory visualized.
Grade 2b: Substantial perfusion, with distal branch filling of ≥50% of the territory visualized.
Grade 2c: Near-complete perfusion, except for slow flow in a few distal cortical vessels or the presence of small distal cortical emboli.
Grade 3: Complete perfusion with normal filling of all distal branches.
In this context, it was very interesting to notice that, after having analyzed the perfusional curves, two definite clusters of patients were recognizable (Fig. 8): an upper cluster, which included the patients with good clinical outcomes, and a lower cluster, regrouping the patients with high mortality rates and poor clinical outcomes. The main difference between these two small cohorts of patients was the perfusional curve obtained after the analysis of the CC. Hence, it may be plausible to use the shape of the perfusional curve as a predictor of patient outcome. These preliminary results determined the first step of a rigorous methodological approach to the setting of a segmentation algorithm dedicated to the analysis of CC in patients with AIS.

Section III - PhD ENDPOINTS
CLINICAL RELEVANCE OF THE COLLATERAL CIRCULATION AS A PROGNOSTIC FACTOR IN ACUTE ISCHEMIC STROKE
The role of CC in determining favorable clinical outcomes has been widely described in the literature. Several papers have shown how collaterals are associated with favorable clinical outcomes after endovascular treatment [START_REF] Bang | Impact of collateral flow on tissue fate in acute ischaemic stroke[END_REF][START_REF] Bang | Collateral flow predicts response to endovascular therapy for acute ischemic stroke[END_REF][START_REF] Ribo | Extending the time window for endovascular procedures according to collateral pial circulation[END_REF], Liebeskind et al., 2014, Consoli et al., 2016), by enhancing the results of the recanalization after MT [START_REF] Mangiafico | Effect of the Interaction between Recanalization and Collateral Circulation on Functional Outcome in Acute Ischaemic Stroke[END_REF][START_REF] Liggins | Interhospital variation in reperfusion rates following endovascular treatment for acute ischemic stroke[END_REF], increasing the exposure of the clot to the IVT (Seners P et al., 2019), limiting the ischemic vascular damage (Tong et al., 2018), encouraging the dislodgment of the clot [START_REF] Leng | Impact of Collateral Status on Successful Revascularization in Endovascular Treatment: A Systematic Review and Meta-Analysis[END_REF].
Furthermore, good collaterals have also been associated with a lower risk of hemorrhagic transformation after recanalization (Cao R et al., 2020;Hao Y et al., 2017), reducing the risk of growth and extension of the ischemic lesion as well as the risk of cerebral swelling and symptomatic intracerebral hemorrhages.
Considering the overall stroke population, it has been shown that about 45-55% of patients successfully treated by MT will not reach functional independence (Goyal et al., 2016; van Horn et al., 2021). Among these, older patients, those with a low baseline ASPECTS, those who are incompletely or lately recanalized and those in whom intraprocedural complications occurred are the most likely not to benefit from MT. Patients with a good collateral profile are usually considered as the most suitable candidates for MT, since they are associated with less severe clinical conditions and with more limited ischemic lesions (Consoli et al., 2016). However, even patients with good collaterals may not benefit from MT, despite a favorable perfusional and clinical pattern, and a detailed analysis of the factors which determine a poor outcome in this specific subgroup of patients had not been performed yet.
Therefore, this chapter was focused on the clinical relevance of the CC as a prognostic factor in AIS patients. In particular, the aim was to investigate which factors can be associated with unfavorable outcomes in patients with a good collateral profile. This research question was approached in the UNCLOSE (UNfavorable CLinical Outcomes in patients with good collateral Scores) Study, a retrospective analysis of a cohort of patients with a good CC, submitted to the Journal of Neuroradiology.
Highlights
• A good collateral circulation is associated with favorable outcomes in patients with large vessel occlusions of the anterior circulation treated by mechanical thrombectomy. These patients are generally considered as the best candidates for the endovascular treatment.
• Unfavorable outcomes may occur in almost half of the patients with good collateral circulation (47% in our series).
• Predictors of unfavorable clinical outcome do not differ in patients with good collaterals as compared to those observed in the literature concerning patients with poor collaterals.
• The presence of good collaterals may not be sufficient to achieve favorable outcomes in patients with large vessel occlusions of the anterior circulation.
• Good collaterals should be considered as a supplemental reason to obtain fast and complete recanalizations.
Introduction
Good collaterals represent a hemodynamic reservoir for ischemic areas due to large vessel occlusions (LVOs). Their role has been widely described in the literature in terms of association with favorable clinical outcome, [START_REF] Gersing | Clinical Outcome Predicted by Collaterals Depends on Technical Success of Mechanical Thrombectomy in Middle Cerebral Artery Occlusion[END_REF][START_REF] Qian | A meta! analysis of collateral status and outcomes of mechanical thrombectomy[END_REF] of the modulation between "slow" and "fast progressors" [START_REF] Rocha | Fast Versus Slow Progressors of Infarct Growth in Large Vessel Occlusion Stroke: Clinical and Research Implications[END_REF][START_REF] Mohammaden | Characterizing Fast and Slow Progressors in Anterior Circulation Large Vessel Occlusion Strokes[END_REF] and of the reduction of the hemorrhagic transformation after mechanical thrombectomy (MT). [START_REF] Cao | Collateral Vessels on 4D CTA as a Predictor of Hemorrhage Transformation After Endovascular Treatments in Patients With Acute Ischemic Stroke: A Single-Center Study[END_REF] Therefore, patients with good collaterals are considered as the most suitable candidates for MT.
Materials and Methods
Study cohort
All consecutive patients with acute ischemic stroke of the anterior circulation who underwent mechanical thrombectomy from January 1st 2016 to December 31st 2021 at our institution were retrospectively analyzed from a prospectively collected, web-based registry (ETIS Registry, NCT03776877).
Variables and definitions
In the present study, baseline, procedural, and follow-up clinical and imaging data were collected. Ethical approval was obtained from the institutional review board, which waived the requirement for patient informed consent.
Inclusion and Exclusion Criteria
Inclusion criteria were: age ≥ 18, acute LVO in the anterior circulation (intracranial internal carotid and M1 or proximal M2 segments of the middle cerebral artery), and good collateral circulation which was defined as grades 3 and 4 of the American Society for Interventional and Therapeutic Neuroradiology/Society of Interventional Radiology (ASITN/SIR) scale. [START_REF] Higashida | Trial Design and Reporting Standards for Intra-Arterial Cerebral Thrombolysis for Acute Ischemic Stroke[END_REF] Patients with tandem or multiple occlusions, with significant pre-stroke disability defined as modified Rankin Scale (mRS) >2, and patients with incomplete and unavailable follow-up data were excluded.
Collateral circulation assessment
Interventional images were retrospectively reviewed. Supplementary specific analyses, such as the subgroup of patients recanalized with mTICI 2c-3, were performed in order to help with the interpretation of the results.
Statistical analysis
Statistical analysis was performed using dedicated statistical software.
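As an illustration only, and not the registry analysis itself, the sketch below shows how a univariate screen followed by an adjusted logistic regression model for unfavorable outcome (mRS 3-6) could be set up; the file name and variable names (age, nihss_24h, aspects_24h, ivt, transfer) are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical export of the study variables, one row per patient
df = pd.read_csv("unclose_cohort.csv")
df["unfavorable"] = (df["mrs_90d"] >= 3).astype(int)

# Univariate screen of candidate predictors
for var in ["age", "nihss_24h", "aspects_24h", "ivt", "transfer"]:
    model = smf.logit(f"unfavorable ~ {var}", data=df).fit(disp=0)
    print(var, model.pvalues[var])

# Adjusted multivariable model
adjusted = smf.logit("unfavorable ~ age + nihss_24h + aspects_24h + ivt + transfer",
                     data=df).fit(disp=0)
print(adjusted.summary())
print(np.exp(adjusted.params))   # odds ratios for each term
```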
Results
We reviewed 1419 patients with acute ischemic stroke due to LVO in the anterior circulation treated by mechanical thrombectomy, and 219 patients (females: 53.4%, mean age: 70.6 ±16 y.o.) were included in the final analysis after application of the inclusion and exclusion criteria. A flowchart of the study is provided in Fig. 1.
Kappa intra- and inter-observer agreement were respectively 0.61 and 0.69 per ASITN/SIR grade and 0.72 and 0.82 per dichotomization (grades 1-2 vs 3-4). An unfavorable outcome was observed in 103 patients (47%). The results of the univariate analysis related to the primary and to the secondary endpoints are summarized respectively in Table 1 and Table 2, whereas Table 3 reports the multivariate analyses. Supplementary analyses are reported in Supplemental Tables 1-3.
Primary endpoint
In our population, a statistically significant association with an unfavorable clinical outcome was observed for older age (p<0.001), higher baseline mRS (p<0.001), higher baseline NIHSS (p<0.001), longer onset-to-recanalization (OTR, p=0.03) and "groin-to-recanalization" times (p=0.004), IVT administration (p=0.004), number of passes <3 (p=0.01), a partial mTICI score (TICI 0-2a; p=0.002), a higher 24h-NIHSS, a lower 24h-ASPECTS (p<0.001) and sICH (p<0.001). The multivariate analysis showed an independent correlation between unfavorable outcomes and older age, higher 24h-NIHSS, lower 24h-ASPECTS, no IVT administration and secondary transfers ("drip and ship" model).
Secondary endpoints
Factors associated with unfavorable outcome in patients with adequate recanalization are shown in Table 2 and did not differ from those identified for the primary endpoint at the univariate analysis, except for the OTR and the procedural time, which were not significantly associated.
In the sub-analysis performed on all-cause mortality at 90 days, we observed that although older age (p<0.001), baseline mRS, baseline and 24h-NIHSS and 24h-ASPECTS (p<0.001), IVT administration (p=0.02) and adequate recanalization (p=0.01) were significantly correlated, only older age, 24h-NIHSS and 24h-ASPECTS resulted independent predictors at the adjusted multivariate analysis (Table 3).
Furthermore, sICH were significantly correlated to baseline ASPECTS, longer procedures, FPE, a higher number of maneuvers, recanalization failures and 24h-NIHSS, whereas only 24h-NIHSS, 24h-ASPECTS and PH1-PH2 grades proved a statistical correlation at the multivariate analysis.
Discussion
Patients with good collaterals could be described as the best candidates for mechanical thrombectomy. However, a non-negligible proportion of patients treated by MT does not reach functional independence after the endovascular treatment, [START_REF] Van Horn | Predictors of poor clinical outcome despite complete reperfusion in acute ischemic stroke patients[END_REF] although patients with unfavorable outcomes more frequently show a poor collateral profile. [START_REF] Hassen | Inter-and intraobserver reliability for angiographic leptomeningeal collateral flow assessment by the American Society of Interventional and Therapeutic Neuroradiology/ Society of Interventional Radiology (ASITN/ SIR) scale[END_REF] In our cohort, age (OR: 1.05, 95% CI [1.02-1.09], p=0.001) and higher 24-hour NIHSS (OR: 1.23, 95% CI [1.14-1.32],
p<0.001) were some of the main predictors of poor clinical outcome (Table 1), which is in line with previously published papers. [START_REF] Arsava | The Detrimental Effect of Aging on Leptomeningeal Collaterals in Ischemic Stroke[END_REF] However, some Authors argued that age could have a different behavior which would be in favor of a preserved collateral status. [START_REF] Agarwal | Interaction of age with the ischaemic penumbra, leptomeningeal collateral circulation and haemodynamic variables in acute stroke: a pilot study[END_REF] The mean age of this cohort was 70.6 ±16 y.o., which could be considered slightly higher than in other acute stroke cohorts. We aimed to analyze the impact of other factors that could explain the unfavorable clinical results in this specific subgroup of patients with good collaterals.
Collaterals and recanalization
The adequate recanalization of the occluded artery is widely known as an independent predictor of favorable clinical outcome, although about half of the patients with adequate-to-complete recanalization do not achieve a functional independence. [START_REF] Goyal | Endovascular thrombectomy after largevessel ischaemic stroke: a meta-analysis of individual patient data from five randomised trials[END_REF] Our results showed that an adequate recanalization grade remains a strong predictor of favorable clinical outcome in patients with good collaterals. The interaction between the recanalization grade and collaterals had already been investigated previously, showing that collaterals enhance the effect of the recanalization, improving the clinical outcomes of recanalized patients. [START_REF] Leng | Impact of Collateral Status on Successful Revascularization in Endovascular Treatment: A Systematic Review and Meta-Analysis[END_REF] When we compared patients with favorable clinical outcomes and those with unfavorable ones, the rates of mTICI 2b-3 were high in both subgroups but significantly lower in the subgroup of patients with an unfavorable outcome (82% vs 95%, p=0.002; Table 1) independently from the endovascular technique used. Interestingly, neither the FPE nor the mTICI grade resulted as independent predictor of favorable outcome or a protective factor for mortality at the multivariate analysis (Table 3), however that could be explained by the selected population and the high recanalization rates in both subgroups. Thus, we observed similar results when we considered patients with mTICI 2c-3 grades (Supplemental Table 1).
These results are in line with the physiopathological concept that collaterals supply the microcirculation in the ischemic territory during the arterial occlusion, [START_REF] Kunz | Chapter 14 Cerebral vascular dysregulation in the ischemic brain[END_REF] maintaining a sort of "hemodynamic reserve". [START_REF] Consoli | CT perfusion and angiographic assessment of pial collateral reperfusion in acute ischemic stroke: the CAPRI study[END_REF] Other possible explanations of the effectiveness of the association between collaterals and recanalization could be related to the mitigation of the ischemic vascular injury, the better exposure to thrombolytic agents [START_REF] Seners | Better Collaterals Are Independently Associated With Post-Thrombolysis Recanalization Before Thrombectomy[END_REF] or the higher chance of dislodgment of the clot. [START_REF] Leng | Impact of Collateral Status on Successful Revascularization in Endovascular Treatment: A Systematic Review and Meta-Analysis[END_REF][START_REF] Liggins | Interhospital variation in reperfusion rates following endovascular treatment for acute ischemic stroke[END_REF] Nevertheless, complete recanalizations may have a favorable impact on the collateral circulation itself, allowing the removal of the thrombotic material in the pial arterioles and in the capillary bed, providing a proper supply of the microcirculation. [START_REF] Tong | Cerebral venous collaterals: A new fort for fighting ischemic stroke?[END_REF] Furthermore, we did not observe any differences concerning the type of anesthesia or the use of a balloon guide catheter (BGC), whereas the potential detrimental effect of general anesthesia was described in patients with poor or intermediate collaterals [START_REF] Liu | Adverse Outcomes Associated With Higher Mean Blood Pressure and Greater Blood Pressure Variability Immediately After Successful Embolectomy in Those With Acute Ischemic Stroke, and the Influence of Pretreatment Collateral Circulation Status[END_REF] and the BGC had previously been correlated with improved outcomes [START_REF] Blasco | Balloon guide catheter improvements in thrombectomy outcomes persist despite advances in intracranial aspiration technology[END_REF] unless a combined technique (stent retriever+aspiration) was used as a first-line strategy. [START_REF] Bourcier | Balloon Guide Catheter is Not Superior to Conventional Guide Catheter when Stent Retriever and Contact Aspiration are Combined for Stroke Treatment[END_REF] Moreover, intraprocedural complications were not associated with worse clinical outcomes since these were neither frequent nor clinically relevant in most of the cases. Indeed, mechanical vasospasm represented more than half of the reported complications, without clinical consequences.
Collaterals and time
In our cohort, patients with unfavorable clinical outcomes were recanalized later, with an almost 30-minute difference in terms of onset-to-reperfusion (299.4 vs 270.4 minutes, p=0.03; Table 1), which is in line with previously reported series [START_REF] Bourcier | More than three passes of stent retriever is an independent predictor of parenchymal hematoma in acute ischemic stroke[END_REF].
These results could suggest that despite good collaterals, a fast and complete recanalization must remain the goal of mechanical thrombectomy. Therefore, good collaterals should not be considered as a "time machine" that would provide a certain tolerance for long procedures.
The effect of the collateral circulation to maintain the cerebral perfusion during ischemia also in late temporal windows has already been described [START_REF] Liebeskind | Collateral Circulation in Thrombectomy for Stroke After 6 to 24 Hours in the DAWN Trial[END_REF] as well as its role of modulator of the progression of the ischemia. [START_REF] Anadani | Collateral status reperfusion and outcomes after endovascular therapy: insight from the Endovascular Treatment in Ischemic Stroke (ETIS) Registry[END_REF] Moreover, in our population we did not observe any significant difference in terms of clinical outcome between patients treated before and beyond 6 hours (mRS 0-2: 56% vs 44%, p=0.19, Supplemental Table 2) although higher recanalization grades were obtained before 6 hours, which supports the hypothesis of a neuroprotective effect of good collaterals on the brain tissue in both early and late windows.
Collaterals and thrombolysis
The role of intravenous thrombolysis on collateral circulation remains still debated. Some
Authors did not find any correlation between IVT and collaterals [START_REF] Anadani | Effect of intravenous thrombolysis before endovascular therapy on outcome according to collateral status: insight from the ETIS Registry[END_REF] while other papers had underlined the role of both fibrin [START_REF] Jeanneret | The Plasminogen Activation System Promotes Dendritic Spine Recovery and Improvement in Neurological Function After an Ischemic Stroke[END_REF] and platelets [START_REF] Xu | Antiplatelet Strategies and Outcomes in Patients with Noncardioembolic Ischemic Stroke from a Real-World Study with a Five-Year Follow-Up[END_REF][START_REF] Yao | Enhanced Procoagulant Activity on Blood Cells after Acute Ischemic Stroke[END_REF] in the development of the in-situ thrombotic phenomena that can occur within the collateral vessels, particularly in the venous side. [START_REF] Tong | Cerebral venous collaterals: A new fort for fighting ischemic stroke?[END_REF] Intravenous thrombolysis (mainly Alteplase according to the inclusion period) was significantly associated with favorable clinical outcomes, with a protective effect observed at the multivariate analysis (OR: 0.28, 95% CI [0.11-0.73], p=0.009). There is currently no evidence that IV thrombolysis may have an effect on collateral circulation, and these results could be explained by the significantly higher rate of patients with mTICI 2b-3 among those who received IV thrombolysis prior to mechanical thrombectomy (Supplemental Table 3). The SWIFT DIRECT Trial [START_REF] Fischer | Thrombectomy alone versus intravenous alteplase plus thrombectomy in patients with stroke: an open-label, blinded-outcome, randomised non-inferiority trial[END_REF] has recently shown that the association IVT+MT provided better clinical outcomes compared to MT alone. However, in our cohort secondary transfers were significantly associated with unfavorable outcomes and it could be hypothesized that the transfer delays may be due to the IVT administration. Nevertheless, only 60% of the patients secondarily transferred to our institution had received IV thrombolysis at the Primary Stroke Centers (unpublished data).
Collaterals and hemorrhagic transformation
The rate of sICH was low (5.9%, 13/219), as one could expect in a selected population with good collateral scores. Indeed, poor collaterals are frequently associated with higher hemorrhagic rates after endovascular treatment. [START_REF] Cao | Collateral Vessels on 4D CTA as a Predictor of Hemorrhage Transformation After Endovascular Treatments in Patients With Acute Ischemic Stroke: A Single-Center Study[END_REF][START_REF] Hao | Predictors for Symptomatic Intracranial Hemorrhage After Endovascular Treatment of Acute Ischemic Stroke[END_REF] In this cohort, lower baseline ASPECTS, partial or unsuccessful recanalizations (mTICI 0-2a) and longer procedures were associated with a higher rate of sICH. Indeed, patients with adequate and almost complete/complete recanalization had significantly higher 24h-ASPECTS (p<0.001, Tables 2-3, Supplemental Tables 1-2). This result is in line with previously published papers [30] which highlighted a sort of protective effect of collaterals in slowing down the extension of the ischemia, although in this specific cohort of patients with good collaterals and adequate recanalization we observed an unfavorable outcome in 47% of the cases.
Limitations
The retrospective nature of the analysis represents the main limitation of this study. However, we also acknowledge that the collateral circulation was assessed using the ASITN/SIR [START_REF] Higashida | Trial Design and Reporting Standards for Intra-Arterial Cerebral Thrombolysis for Acute Ischemic Stroke[END_REF] scale, for which the inter- and intra-observer agreement can be quite poor, [START_REF] Hassen | Inter-and intraobserver reliability for angiographic leptomeningeal collateral flow assessment by the American Society of Interventional and Therapeutic Neuroradiology/ Society of Interventional Radiology (ASITN/ SIR) scale[END_REF] although in this cohort we considered a dichotomous parameter, since we included only patients with ASITN/SIR grades 3 and 4, and an excellent inter-observer agreement was recorded.
Furthermore, although the predictors of unfavorable outcome did not differ from those observed in overall stroke populations and already described in the literature, this study represents, to the best of our knowledge, the first analysis specifically focused on this subgroup of patients, providing findings that could be potentially hypothesis-generating. The local Institutional Review Board approved the data collection and analysis for this study.
Conclusions
Table 1 (excerpt): anesthesia protocol (general anesthesia, conscious sedation, local anesthesia), balloon guide catheter use, and baseline NIHSS across the outcome subgroups.
Discussion
The results of the UNCLOSE Study showed that even patients with good collaterals may not reach a favorable clinical outcome, independently of the endovascular technique or the type of anesthesia used.
The overall results were partially similar to those reported in literature, while certain factors did not show the same results. Indeed, this was one of the first papers investigating this specific cohort of patients with good collaterals.
Nevertheless, we could observe poor clinical outcomes (mRS3-6) in 47% of the patients with good collaterals assessed through the ASITN/SIR score. Mortality rate was 13.6%.
The role of IV thrombolysis as a "protective" factor is in line with the current results reported by the recent trial SWIFT DIRECT, which showed that the association IVT+MT provided better clinical results than the MT alone (Fischer U et al., 2022). The interaction between IVT and collaterals will be discussed in the following chapter.
Furthermore, we could observe that patients successfully recanalized in late temporal windows (>6h) had the same rate of favorable clinical outcomes as those recanalized before 6h, and also had a better clinical evolution with lower 24h-NIHSS (11 vs 16, p<0.001).
These results are in line with the so-called « late window paradox » [START_REF] Albers | Late Window Paradox[END_REF], which explains how those patients treated in late windows are more selected through perfusional imaging in order to justify late treatments. For this reason, patients treated in late windows are associated with favorable outcomes, as it appeared in our cohort.
These data are particularly interesting if we consider that:
- this cohort is considered to be the most likely to achieve functional independence after MT;
- these clinical results were independent of the type of endovascular technique and of the type of anesthesia used.
Nevertheless, while we had already observed that the endovascular technique seems not to have a direct impact on clinical outcome [START_REF] Lapergue | Effect of Endovascular Contact Aspiration vs Stent Retriever on Revascularization in Patients With Acute Ischemic Stroke and Large Vessel Occlusion: The ASTER Randomized Clinical Trial[END_REF][START_REF] Turk As 3rd | Aspiration thrombectomy versus stent retriever thrombectomy as first-line approach for large vessel occlusion 97 (COMPASS): a multicentre, randomised, open label, blinded outcome, non-inferiority trial[END_REF], controversial results were published concerning the type of anesthesia and, in particular, a potential detrimental effect of general anesthesia on the collateral status (Fandler-Höfler S et al., 2020; [START_REF] Raychev | Physiologic predictors of collateral circulation and infarct growth during anesthesia -Detailed analyses of the GOLIATH trial[END_REF]).
Although some Authors underlined the risk of a pressure drop in the CC at the moment of the induction of general anesthesia, it seems that the effect of general anesthesia could be more pronounced on poor collaterals, which would be more vulnerable than good ones (Liu D et al., 2021).
Interestingly, the First Pass Effect was not significantly associated with a good clinical outcome, except for sICH. However, the hypothesis that CC could, on the one hand, provide a higher resistance to ischemia even after the first maneuver does not seem to be supported. Indeed, we have observed that longer procedures with a higher number of passes were associated with unfavorable outcomes.
These results could raise some questions concerning the assessment and the prognostic value of the CC in patients with AIS.
Indeed, although the ASITN/SIR scale is probably the most used classification to assess the CC in AIS, the inter- and intra-observer reliability between different operators is quite poor (Ben Hassen W et al., 2019). Therefore, there is a growing need for a tool that could help the assessment of CC in a reliable and easy way.
On the other hand, one could argue that collaterals judged as good or excellent based on the ASITN/SIR scale could be ineffective.
Future perspectives
The CC represents a potential tool for patients' selection and prediction of clinical outcome and hemorrhagic risk for patients treated by MT for AIS.
Although CC has already widely proven to play a central role, together with the recanalization grade, in determining favorable clinical outcomes and providing useful prognostic information about hemorrhagic transformation, it is not rare to observe a sort of "mismatch" between the CC status and the final clinical result.
Nevertheless, as observed in the literature, the current grading scale to assess CC seems not to provide a reliable evaluation, which remains very subjective and observer-dependent. It is also possible that a different type of analysis, based on solid physiopathological assumptions, could improve the reliability of the currently most used angiographic scale: the ASITN/SIR. This issue will be addressed in the next chapter, focusing on the critical analysis of the angiographic assessment of the CC. However, this classification presents some limitations. In particular, the assessment "per definition" of CC using ASITN/SIR is time-consuming and not easy to use. Moreover, the reported inter-observer agreement between different operators in the assessment of the ASITN/SIR score was far from being considered excellent (Ben Hassen W et al., 2019).
The subgroup of patients with good collaterals and unfavorable outcomes and the subjectivity of the assessment of CC could raise some points of debate. The progression of the contrast agent through the cerebral circulation can be clearly divided into three different phases: arterial, parenchymal/capillary blush and venous. The transition between any two consecutive phases is therefore defined as a turning point. In 2D pre-treatment DSA, such turning points represent the frames of the angiograms where the transition from one angiographic phase to the following can be observed according to the opacification of the vascular structures (arteries, capillary blush, veins).
The turning points would help to differentiate the transition from the arterial to the parenchymal phase and from the parenchymal to the venous phase. Thus, the application of the turning points would help to separate and to differentiate the angiographic phases and it could provide the basis to "quantify" the desynchronization between the CC and the cerebral circulation.
The introduction of these two concepts seems to be a suitable approach to assess a dynamic process, such as the CC function.
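As a minimal sketch of how this desynchronization could be quantified, assuming the turning points have already been identified (manually or by an algorithm) as frame indices, the lag between the cerebral and collateral transitions can simply be expressed in frames and seconds given the acquisition frame rate; the numerical values in the example are purely illustrative.

```python
def desynchronization(cerebral_tp, collateral_tp, frame_rate=6.0):
    """cerebral_tp / collateral_tp: (arterial->parenchymal, parenchymal->venous) frame indices."""
    lags_frames = [c - h for h, c in zip(cerebral_tp, collateral_tp)]
    lags_seconds = [lag / frame_rate for lag in lags_frames]
    return lags_frames, lags_seconds

# Example: collateral phases turning 4 and 9 frames later than the cerebral ones
print(desynchronization((12, 30), (16, 39)))   # -> ([4, 9], [0.666..., 1.5])
```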
Future perspectives
The detection of the CVP seems to increase the diagnostic performances of the currently most used classification for CC.
The introduction of the desynchronization and the turning point could represent an advancement in the understanding of the hemodynamics of the CC. Indeed, these concepts could help to "decompose" the assessment of the CC and could provide some measurable parameters in order to quantify the effectiveness of the collateral vessels and, hopefully, to calculate the flow velocity within the CC.
Furthermore, the main advantage of the angiographic analysis lays in the real-time imaging provided by the DSA. The assessment of the CC performed through the DSA would not be influenced by other delays, since it would be performed at the beginning of the procedure of Mechanical Thrombectomy.
Indeed, data that are currently reported in terms of correlation between the baseline extension of the ischemic area (i.e. DWI-ASPECTS or plain CT-ASPECTS or CT-Perfusion ASPECTS) and the angiographic evaluation of collaterals in the Angiosuite are definitely biased by the latency between the baseline neuroimaging and the beginning of the angiographic assessment (transfer time, patient setup, possible general anesthesia, etc.). As we have observed in the paper, it seems that the patency of the CVP decreases over time, which could be interpreted as a dynamic modification of the CC evolving towards a mechanism of in-situ thrombosis (Consoli A et al., The angiographic collateral venous phase: are good collaterals always effective? Supplemental Table 1, submitted to JNIS).
Moreover, the new trend that is currently under investigation in some randomized trials [START_REF] Pfaff | Direct Transfer to Angio-Suite Versus Computed Tomography-Transit in Patients Receiving Mechanical Thrombectomy: A Randomized Trial[END_REF][START_REF] Requena | Direct to Angiography Suite Without Stopping for Computed Tomography Imaging for Patients With Acute Stroke: A Randomized Clinical Trial[END_REF]) is to consider a direct access of stroke patients directly to the Angiosuite, waiving the preliminary selection process through the CT/MRI. According to this schema, DSA would assume a more central role in the identification of markers of effectiveness and prognosis rather than the current role.
The detection of the turning points, in particular, could be a substrate for automated algorithms that would be capable to reproduce the assessment of the transition of the angiographic phases and then to detect the presence of the CVP in the area of the CC.
Finally, the evaluation of a correlation with CT-Perfusion and MR-PWI is mandatory in order to strengthen the results of the critical analysis of the angiographic assessment of the CC, as well as a comparative assessment with new imaging techniques, such as the evaluation of the profile of the cortical venous outflow assessed using the cortical vein opacification score (COVES).
All these findings could support the theory that the venous phase of the collateral circulation could represent the real modulator of the evolution of the ischemic process and that the representation of a good or of a poor collateral circulation could depend mainly on the effectiveness and permeability of the venous outlet of the CC rather than the extension of the arterial leptomeningeal anastomoses.
A TECHNICAL APPROACH TO COLLATERALS BASED ON PHYSIOPATHOLOGY: DEVELOPMENT OF AN ALGORITHM FOR THE ANGIOGRAPHIC EVALUATION OF THE COLLATERAL CIRCULATION
The results of the clinical and critical analysis of collateral circulation, which were discussed in the previous chapters, raised several questions about the assessment of collateral circulation.
Indeed, the current methods to assess the CC may not provide a definite answer to the main physiopathological issues, such as the effectiveness of collaterals, the role of the venous phase of the CC, and the capacity of collaterals to properly sustain the cerebral perfusion regardless of their extension.
Furthermore, the growing interest in a more rapid workflow for the management of acute ischemic stroke will put in a central position the role of the DSA in patients' assessment in the acute setting of AIS. This effort considers the direct access to the Angiosuite avoiding the intermediate diagnostic step based on the CT or MRI. However, this type of pathway needs to be further investigated and its applicability worldwide will be a matter of discussion considering the different geographical setting and the available human resources.
It would therefore be really advantageous if the pre-treatment DSA acquired in situ could be used to provide indications for treatment strategies, such as selecting patients for MT, prognosis, strategies after the treatment, etc. This will in the end involve the development of a dedicated DSA-based algorithm to characterize the CC. This algorithm will be constructed according to the physiopathological assumptions that have already been discussed. To achieve this, the pre-treatment dynamic DSA series need to be quantitatively analyzed. In particular, this analysis will look for the potential link between image features and the concepts of desynchronization between the cerebral and collateral circulation and of the turning points.
Inspired by and as an extension of the M2 work, in this chapter we will first observe the time-density curve of the collateral region of the DSA data. Beyond a purely ROI-based study, pixel-wise analysis will also be applied. Parametric maps and image-based features will be computed, analysed, and correlated with the radiological readings of these data. A tissue-specific analysis (i.e. parenchyma, arteries, or veins) will then be made following the segmentation of vessels within the collateral region, as previously proposed in Fig. 6 (Section II - Chapter 2, p.33). Patients treated at the FOCH centre and included in the ETIS Registry (NCT03776877) were included in this analysis. Patient exclusion criteria were kept to a minimum in order to analyse a mixed population of patients irrespective of collateral status (good or poor), age, or timing, in particular the delay between the onset of symptoms and the beginning of the MT procedure (onset-to-groin).
All patients were imaged through a 6 frames/second acquisition (as defined in Chapter 2) as the first angiographic run before starting the MT procedure.
Afterwards, a dedicated Quality Core Lab, which included two neuroradiologists not involved in the procedures, evaluated the quality of the images and stated whether to include the angiogram in the analysis. Motion artefacts represented the main reason for exclusion and 19 patients were excluded from the analysis.
Baseline and follow-up clinical data were prospectively collected from the ETIS Registry (NCT03776877). The local Institutional Review Board approved the study and waived written informed consent. All the data were stored in a dedicated database and the images were anonymized and centralized inside the ArchiMed system hosted by the IADI laboratory, Nancy, France.
The DSA acquisition protocol
The adopted imaging protocol was the same as already used in the pilot study described in Chapter II (page 31). The Antero-Posterior view of the pre-treatment angiograms acquired with 6 frames/s were collected from the recruited population. As previously described, only the AP view was used for the analysis in order to avoid the overlap with the ACA territory in the lateral projections.
Indeed, it could be intuitive to observe that the flow density curves obtained by the segmentation of the vessels opacified using an "experimental" DSA acquisition protocol at 6 frames/second were more detailed than those obtained using a "standard" acquisition protocol at 2 frames/second. Both acquisition protocols are available on most of the angiographic machines, although the standard acquisition protocol is set at 2 frames/second. This observation seems to be particularly relevant if we consider the dynamic behavior of the CC, since a higher number of frames provides more data in the same time interval, allowing the construction of a more precise curve.
A panel of variables have been studied to perform the critical analysis of the angiograms for the setting of the algorithm. These variables were summarized in the analytic plan that was set and named as "Image Feature Analysis (IFA)", which will be described later.
The analysis of a subgroup of patients imaged with both conventional clinical acquisition protocols
(2 vs 6 frames/second) showed the difference in the construction of the perfusional curves (Fig. 10), highlighting how the perfusional curves obtained through the experimental protocol were more informative and provided more data. In particular, the 6 frames/second protocol provided a higher number of images to be analysed in the late venous phases, which would determine a more reliable result overall in terms of detection of the collateral venous phase.
For these reasons, a 6 frames/second acquisition protocol was used.
Angiogram data analysis
In order to characterise the angiograms, especially the contrast enhancing time-course, we have thoroughly analysed the behaviour of the DSA series, not only by regions (cerebral, collateral region, watershed region, etc.) but also by pixel in order to understand the contrast behaviour by tissue.
Thus for each patient, their data have been divided into different regions and the contrast progression has been classified as described in the three-phase scheme already described in the previous chapters.
Spatial analysis: the observed regions
We analyzed the imaging features in 4 types of regions: cerebral region, extended collateral region, the watershed regions and pure collateral regions (see details definition as below, Fig. 11), in order to observe the time course changes of contrast agent filling of collateral vessels in the cortical MCA region:
-Cerebral region: a ROI was drawn including the whole hemisphere perimeter and the temporo-basal pole, with the inferior border passing through the ICA siphon.
-Extended-collateral ROI (large, red one) including the vascular territories of the ACA and the MCA, drawn from the midline and following the outer profile of the cortical convexity and the inner profile of the white matter territory, excluding the basal ganglia territory medially and the fronto-basal area inferiorly. A careful selection was performed excluding the superior sagittal sinus superiorly and the transverse sinus inferiorly in order to avoid a selection bias of these venous structures that could have influenced the modification of the pixel density. This ROI would provide data about the density curves related to both the cerebral and the collateral circulation.
-Pure collateral ROI (blue one, intermediate ROI), which included only the territory of the MCA. This type of ROI could limit the influence of the cerebral circulation (which was identified with the ACA territory, Fig. 9, Chapter 4, p.85) and provide data related to the density curve of the collateral area alone. The pure collateral ROI is overlapped with the extended collateral ROI.
-Watershed ROI (yellow one, more restricted), which was drawn at the level of the transition from the ACA to the MCA territory (watershed territory). This ROI could provide information about the density curve of the watershed territory, which can be considered as the area of activation of the collateral circulation in case of M1-MCA occlusions.
The entire segmentation process in the initial phase was performed manually using the freeware ITK-SNAP. The three ROIs overlap.
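A hedged sketch of this ROI-based reading is given below, assuming the DSA frames are available as a (T, H, W) array and the three overlapping ROIs are exported from ITK-SNAP as a single label mask; the file names and label values are hypothetical. The mean inverted intensity per frame yields a time-density curve for each ROI.

```python
import numpy as np
import nibabel as nib

frames = np.load("dsa_ap_6fps.npy")                            # (T, H, W), hypothetical export
labels = np.squeeze(nib.load("itk_snap_rois.nii.gz").get_fdata())  # 0=bkg, 1=extended, 2=pure, 3=watershed

curves = {}
for name, label in [("extended_collateral", 1), ("pure_collateral", 2), ("watershed", 3)]:
    mask = labels == label
    # the contrast medium lowers pixel values, so intensities are inverted before averaging
    curves[name] = np.array([(f.max() - f)[mask].mean() for f in frames])
```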
Temporal analysis: the observed time window and key frames
According to the concept of the desynchronization, the collateral and the cerebral circulation have a different time window of observation. In the study both time windows were labelled and were identified through the first and the last frame of appearance of the opacified vessels. The first turning point corresponds to the transition between the arterial and the parenchymal phase and the second one to the transition between the parenchymal and venous phase. Some landmarks were used in order to define the turning points and are summarized as follows (Fig. 12):
- the ROIs (collateral and hemispheric), according to the Master 2 protocol;
- the first and the last frame of observation of the collateral circulation, in order to define the temporal window of observation;
- the turning points, in order to compare the manual detection of the transition of the circulatory phases on the angiograms with the assessment performed by the algorithm. Two turning points were defined for both the cerebral circulation and the CC.
Semi-quantitative time-density curve analysis
Assessment of the ROI-based time-density curves was done by computing the following parametric maps: time-to-peak enhancement map (TTP), mean transit time map (MTT), and maximum enhancement projection map (MEP), as shown in Fig. 13. While interpretations were difficult to make on the TTP and MTT data, the MEP images proved more indicative and gave a first intuitive explanation of "good" and "poor" collaterals (named Group-gc and Group-pc).
Fig. 13. ROI-based analysis of time density curves in patients with good collaterals (superior row) and poor collaterals (inferior row) according to MEP maps (first image of both rows) and TTP maps (second images of both rows).
Examples from the two groups (Patient TEST-002 and Patient TEST-041, representing the good and poor collateral groups respectively) are illustrated in Fig. 13. It is most interesting to notice that in the MEP map the collateral region of Group-gc contains denser vessels, whereas the opposite situation is observed in Group-pc. In the current calculation, the MEP is extracted based on all the frames, and the current CC status (good or poor) is assessed by the arterial collateral status.
However, including or excluding the venous phase does not lead to obvious differences in the MEP map.
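The parametric maps named above can be illustrated with the following minimal sketch, assuming a (T, H, W) DSA stack that is inverted so that higher values correspond to more contrast; the MTT shown here is a simple first-moment approximation of the time-density curve, not a deconvolution-based perfusion MTT.

```python
import numpy as np

def parametric_maps(frames, frame_rate=6.0):
    stack = frames.max() - frames.astype(float)          # invert: contrast -> high values
    t = np.arange(stack.shape[0]) / frame_rate           # time axis in seconds

    mep = stack.max(axis=0)                               # maximum enhancement projection
    ttp = t[np.argmax(stack, axis=0)]                     # time-to-peak enhancement per pixel

    area = stack.sum(axis=0) + 1e-9
    mtt = (stack * t[:, None, None]).sum(axis=0) / area   # intensity-weighted mean transit time
    return mep, ttp, mtt
```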
Vessel complexity in the collateral region
In order to quantitatively depict the previous observation and also to verify our hypothesis that good and poor CC can be discriminated by the complexity level of the CC vessels, we used a metric called the "fractal dimension" $D_s$, a mathematical method which has been widely used to quantify retinal vessels and trabecular bone patterns in X-ray or CT [START_REF] Mainster | The fractal properties of retinal vessels: embryological and clinical implications[END_REF][START_REF] Jurczyszyn | The use of fractal dimension analysis in estimation of blood vessels shape in transplantable mammary adenocarcinoma in Wistar rats after photodynamic therapy combined with cysteine protease inhibitors[END_REF][START_REF] Majumdar | Fractal analysis of radiographs: assessment of trabecular bone structure and prediction of elastic modulus and strength[END_REF]:

$$D_s = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log (1/\varepsilon)}$$

where $D_s$ is the fractal dimension, $\varepsilon$ is the length of the box which creates the mesh covering the surface with the examined pattern, and $N(\varepsilon)$ is the minimal number of boxes required to cover the examined pattern.
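A compact box-counting sketch of this fractal dimension, applied to a binary vessel mask of a collateral ROI (assumed to be precomputed), is given below; the slope of log N(ε) versus log(1/ε) estimates D_s, and the normalization by the ratio of the CC and cerebral ROI areas follows the definition used further in the text.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate D_s of a 2-D binary vessel mask by box counting."""
    counts = []
    for s in box_sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())      # N(eps): boxes containing vessels
    log_inv_eps = np.log(1.0 / np.array(box_sizes, float))
    log_counts = np.log(np.array(counts, float))
    slope, _ = np.polyfit(log_inv_eps, log_counts, 1)     # D_s = d logN / d log(1/eps)
    return slope

def normalized_fractal_dimension(d_s, collateral_area, cerebral_area):
    # normalization by the ratio of the CC and cerebral ROI areas
    return d_s / (collateral_area / cerebral_area)
```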
As observed in the previous section, the distal part of the pure collateral ROI provided the most relevant results in terms of differentiation between good and poor CC, and a sub-analysis of this ROI was then performed. Therefore, we calculated the fractal dimension of three sub-regions of the pure collateral ROI (Fig. 14): 1. the pure collateral ROI (in red), 2. the watershed ROI (in yellow), 3. ROI 1 without ROI 2 (in green). This was also because some of the patients, mainly those with poor collaterals, showed very dense watershed regions although only few vessels were observed in the adjacent distal CC region; a thorough observation was therefore necessary. The $D_s$ of these three ROIs in the two groups (Group-gc and Group-pc) are plotted in Fig. 15A. Since the relative size of the collateral region within the cerebral region may vary from patient to patient, which could introduce a bias, we also calculated the normalized fractal dimension by dividing $D_s$ by the ratio of the CC and cerebral areas (Fig. 15B).
Fig. 15. Boxplot of the fractal dimension in the three ROIs between the two groups (A) and the normalized fractal dimension between the two groups (B).
According to the boxplot above, the normalized fractal dimension of the pure CC ROI ($nD_s(pCC)$) seems to be a good candidate to separate Group-gc and Group-pc. An ANOVA test was then applied to the $nD_s(pCC)$ values of the two groups, leading to p = 0.013 (p = 0.05 being considered as significant), showing that the two groups are significantly different (Fig. 16).
Fig. 16. Anova plot showing the normalized fractal dimensions in both subgroups.
The Image Feature Analysis (IFA)
As aforementioned, a list of imaging features has been carefully chosen in order to characterise the regional intensity behaviour of the CC during the contrast progression through the different circulatory phases of the DSA. Another objective of the IFA was to provide an interpretation of the physiopathological concepts that were introduced: the desynchronization and the turning points.
In either the CC ROI or the cerebral ROI, the evolution of the contrast trajectory is in general described as follows: the image intensity starts to decrease when the contrast starts to fill the arteries, and the anatomical information including the vessel networks starts to be enhanced.
The amount of information increases as the number of enhanced vessels grows, and their sharpness increases until the maximum value is achieved; this represents the end of the arterial phase and the beginning of the parenchymal phase, which corresponds to the first turning point. From that point, the contrast agent starts to leave the arteries and to fill the capillary bed, where the perfusion of the parenchyma happens. This parenchymal phase causes the whole DSA image to become progressively darker, and the contrast between vessels and other tissues becomes less obvious.
However, due to the overlapping nature of DSA, which is a 2D image, and to the delay between the different regions (the desynchronization between the cerebral and the CC regions, as previously described), the parenchyma may be perfused progressively, especially when the CC flow is slow.
After the DSA frame becomes the darkest and the least contrasted, the contrast agent starts to leave the parenchyma and to enter the venous circulation. This represents the turning point from the parenchymal phase to the venous phase. At this time, the desynchronization between the cerebral and CC regions is the most obvious, the worst case being that the venous phase of the CC is still not shown before the end of the DSA acquisition, which is defined as the full visualization of the dural venous sinuses (superior sagittal sinus and transverse sinuses).
The following image features are analysed: the change of the maximum histogram peak over time, the ROI intensity uniformity, the mean and the histogram skewness, and the entropy. As shown in Fig. 10, the IFA was based on these parameters:
The maximum histogram peak at each frame in the CC and cerebral ROIs changes, as does the opacification, as the contrast trajectory evolves. This curve starts to decrease when the contrast fills the arteries in the corresponding ROI and rises again when the contrast leaves the ROI, identifying a bi-phasic peak corresponding to the phase separation from the arterial to the parenchymal/capillary phase and from the parenchymal/capillary phase to the venous phase.
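A minimal sketch of how such per-frame histogram features can be extracted from a ROI is given below; the feature set, the bin count and the synthetic data are illustrative assumptions rather than the exact IFA implementation.

```python
import numpy as np
from scipy.stats import skew

def frame_features(roi_frames: np.ndarray, bins: int = 64):
    """Per-frame histogram features inside a ROI of a DSA series (T, H, W)."""
    features = []
    for frame in roi_frames:
        hist, _ = np.histogram(frame, bins=bins)
        features.append({
            "max_peak": int(hist.max()),           # height of the dominant bin
            "mean": float(frame.mean()),           # mean ROI intensity
            "skewness": float(skew(frame.ravel())),
            "uniformity": float(np.sum((hist / hist.sum()) ** 2)),
        })
    return features

rng = np.random.default_rng(2)
demo = rng.normal(180.0, 10.0, size=(10, 64, 64))
print(frame_features(demo)[0])
```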
A particular mention should be made concerning entropy. It is a scientific concept, as well as a measurable physical property, most commonly associated with a state of disorder, randomness, or uncertainty. The general definition used here follows the entropy focus criterion (Atkinson et al.), which favors a high contrast between the structures. However, entropy is also commonly described as a criterion of maximization in order to provide noise reduction, according to a different formula; in that case, the definition is a smoothness measurement. The entropy focus criterion is defined as:
$$E = -\sum_{j=1}^{s} \frac{B_j}{B_{\max}} \ln\left[\frac{B_j}{B_{\max}}\right]$$
In the construction of the algorithm we defined the entropy focus criterion as Entropy-HC (as "high contrast") and the criterion of maximization as Entropy-SN (as "smoothness").
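A sketch of how these two measures can be computed per frame is given below. Since the exact definition of B_max and the precise maximization formula for Entropy-SN are not spelled out here, the code assumes B_max = sqrt(sum of B_j squared), as in the entropy focus criterion of Atkinson et al., and uses the Shannon entropy of the grey-level histogram as an illustrative stand-in for the smoothness criterion.

```python
import numpy as np

def entropy_hc(frame: np.ndarray) -> float:
    """High-contrast entropy (entropy focus criterion) of one frame.

    Assumption: B_j are pixel intensities and B_max = sqrt(sum(B_j^2)),
    following the entropy focus criterion of Atkinson et al.
    """
    b = frame.astype(float).ravel()
    b_max = np.sqrt(np.sum(b ** 2)) + 1e-12
    p = b / b_max
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def entropy_sn(frame: np.ndarray, bins: int = 64) -> float:
    """Illustrative stand-in for the smoothness entropy (Entropy-SN):
    Shannon entropy of the normalized grey-level histogram."""
    hist, _ = np.histogram(frame, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(3)
series = rng.normal(150.0, 20.0, size=(5, 64, 64))
hc_curve = [entropy_hc(f) for f in series]
sn_curve = [entropy_sn(f) for f in series]
print(hc_curve[0], sn_curve[0])
```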
The assumption used here is that when the contrast agent enters the collateral circulation region, the information content in the collateral area will be perturbed due to the modification of the opacification, and this will thus lead to a change of entropy.
When there is a transition between the different circulatory phases, such as from the arterial to the parenchymal phase, or from the parenchymal to the venous phase, a dramatic change in entropy can be found.
The analysis of these ROIs showed that the results were not significantly different (Fig. 17). Indeed, according to the IFA panel, the shape, the slope and the valleys of the perfusional curves obtained through the three different ROIs did not differ.
Fig. 17. The comparative analysis of the three different ROIs using the IFA panel. A fourth ROI is referred to the hemisphere (named as "brain" ROI), which is used for normalization of the curves.
No difference was observed in the behaviour of the perfusional curves.
Although no difference was observed in terms of behaviour of the perfusional curves, the "Extended collateral" ROI was chosen for the setting of the algorithm for three main reasons:
- this ROI provides data from both types of circulation (cerebral and collateral), following the principle of the desynchronization;
- this ROI provides data for the detection of the venous phase, while the other two ROIs, being more limited, could miss this type of information;
- the extension of this ROI seemed to be the most reliable for performing the normalization with the hemispheric ROI, in order to assess the real extension of the collateral area.
In this context, we used entropy as a measurement of the information content in a corresponding area.
The hypothesis here is that the perfusion of the contrast agent changes the information content of the collateral circulation area. The dramatic change of the entropy value in this area should correspond to a local extreme (either a peak or a valley) in the function of entropy against time, or in the differential of this function. In the case of DSA, after the contrast agent enters the ROI of the collateral circulation, the information content in the ROI changes; therefore, the starting frame of the collateral circulation window corresponds to the inflection point of the entropy curve in Fig. 17 (Entropy-HC graph). The peaks of the entropy curve may therefore indicate the turning points.
Therefore, by drawing the entropy change according to each frame, it may be possible to detect the following features:
1. the beginning of the collateral circulation window, by observing the change of the shape of the entropy curve over time;
2. the turning points from the arterial phase to the parenchymal/capillary phase, or from the parenchymal/capillary phase to the venous phase, according to the following considerations:
a. The changing trend of the entropy curve will be altered, i.e. a local extreme (turning point) can be detected. More precisely, after entering the collateral area, the contrast agent starts to render the arteries. Thus, the appearance of this region becomes progressively more opacified, from the initial flat grey to a clear depiction of the arteries (activation phase). Therefore, following this first perturbation of the information content of the region, the entropy change within the collateral circulation area, as well as its tendency, should be almost monotonic between the beginning of the collateral circulation perfusion window and the turning point from the arterial to the parenchymal phase. Indeed, the contrast bolus passing through the arteries highlights the contrast between the structures. Thus, when all (or the largest part of) the arteries have been filled, the entropy within this region should be the smallest.
b. When the bolus starts to spread into the parenchyma, the image starts to be blurred. Indeed, the contrast agent starts to leak into the parenchyma and smooths out the corresponding region in the angiogram. Therefore, the entropy curve will show an opposite trend compared to the one observed in the arterial phase. This trend will stop when the parenchyma is fully perfused and the contrast starts to flow into the veins. In that case, the turning point should correspond to the time when the entropy curve shows a local extreme, with a maximum value of the Entropy-HC.
For the Entropy-SN, the trend of the curve should be similar; however, it will be more evident for the transition from the parenchymal/capillary phase to the venous phase, as this corresponds to the smoothest moment of the whole perfusion sequence.
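As an illustration of how these considerations can be turned into code, the sketch below searches the local extrema of a smoothed entropy curve as candidate turning points (valleys for the arterial-to-parenchymal transition, peaks for the parenchymal-to-venous one). The smoothing window and prominence threshold are arbitrary illustrative values, not the parameters of the actual algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

def candidate_turning_points(entropy_curve, smooth: int = 5, prominence: float = 0.01):
    """Return frame indices of local extrema of an entropy curve.

    Valleys (minima) are candidates for the arterial -> parenchymal
    turning point, peaks (maxima) for the parenchymal -> venous one.
    """
    e = np.asarray(entropy_curve, dtype=float)
    # Simple moving-average smoothing to limit spurious extrema.
    kernel = np.ones(smooth) / smooth
    e_s = np.convolve(e, kernel, mode="same")
    peaks, _ = find_peaks(e_s, prominence=prominence)
    valleys, _ = find_peaks(-e_s, prominence=prominence)
    return sorted(valleys), sorted(peaks)

# Illustrative curve: falling (arterial), rising (parenchymal), falling again (venous).
t = np.linspace(0, 1, 120)
curve = 0.5 - 0.4 * np.sin(2 * np.pi * t)
valleys, peaks = candidate_turning_points(curve)
print("valley frames:", valleys, "peak frames:", peaks)
```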
As far as Point 1 is concerned, we found an excellent correlation between the beginning of the collateral phase identified by the algorithm and the one detected manually, in almost all of the current patients.
In this patient (TEST-002, Fig. 18), the two blue vertical lines correspond to the manually defined collateral circulation window, whereas the point on the red curve indicates the turning point of this curve (plot of entropy), suggesting the transition from the arterial to the parenchymal phase and corresponding to the valley of this curve. This can probably be detected by looking at the peaks of the derivative of this entropy curve. Although we could achieve an excellent agreement in the detection of the frame corresponding to the beginning of the temporal window of observation (activation phase), the end of the temporal window of observation was not easy to identify.
After reviewing the patients for whom it was not possible to identify the last frame of observation of the collateral circulation, we could conclude that this occurred in almost all patients with a poor collateral circulation, with a limited arterial phase and an almost absent parenchymal and venous phase.
These results represented the basis for the development of the different versions of the algorithm and also the first step of the integration of the novel concepts that were introduced.
The goal of the algorithm will be to provide a real-time analysis of the CC through the identification of the markers of effectiveness, such as the presence of a collateral venous phase, and to assess the flow velocity in order to describe the hemodynamics of the collateral vessels.
The development of the algorithm was the object of a patent submission procedure, which is ongoing.
CONCLUSIONS AND FUTURE PERSPECTIVES
The role of the collateral circulation in AIS has been widely investigated in the past and recent literature.
Although several articles and studies found strong correlations between the collateral status and the clinical outcome of patients treated by mechanical thrombectomy, the physiopathological mechanism of collaterals remains poorly understood.
Recently, the hemodynamic features of these leptomeningeal anastomoses made the subject of further investigations, leading to a more dynamic vision of the collateral vessels in the physiopathology of acute ischemic stroke.
However, the current tools to assess collaterals seem to be still limited and focused on the extension of the collateral vessels and the introduction of novel concepts and parameters to analyze could provide more reliable and solid results.
Indeed, as we have observed in the previous chapters, patients harboring a good collateral circulation may not benefit from the endovascular treatment through mechanical thrombectomy, although these patients would represent the best candidates to achieve a good clinical outcome.
A partial explanation of this issue could be found in the limited assessment of collaterals provided by the currently used grading systems, such as the ASITN/SIR classification.
Furthermore, these results strengthen the concept that extended collaterals may not be sufficiently effective and that a physiopathology-centred assessment is crucial to better understand the role of collaterals in acute ischemic stroke.
Thus, the integration of specific data concerning the effectiveness of the collateral circulation could improve the overall assessment of these anastomoses.
The introduction of the concepts of the desynchronization between the collateral circulation and cerebral circulation, the turning points of the contrast trajectory phases, as well as the evaluation of the collateral venous phase could provide a more complete analysis of collaterals and, potentially, a substrate for a real-time assessment of collaterals in the angiosuite.
The development of an automated algorithm to characterize the hemodynamic behaviour and the effectiveness of collaterals represented the final goal of this Thesis.
2. A critical analysis of the angiographic images in order to evaluate a possible correlation with the physiopathological events of AIS; 3. The development of a post-processing algorithm dedicated to 2D angiographic images.
The thesis is composed of three different sections:
- Section 1: INTRODUCTION. An introductory chapter (Chapter 1) giving a summary of the diagnosis, the possible treatments and the clinical challenges of AIS. This introductory part also familiarizes the reader with the concept and definition of the CC and its role in AIS. All these notions are developed in more detail in the following sections;
- Section 2: PRELIMINARY RESULTS. This second chapter (Chapter 2) gathers the preliminary results obtained during my previous scientific experiences in the field of the CC, during my residency program and during the Master 2;
- Section 3: ANALYSIS OF THE EVALUATION PARAMETERS OF THE OBJECTIVES. Three distinct chapters (Chapters 3 to 5) are devoted to the development of the three above-mentioned evaluation parameters of this thesis. Chapter 3 focuses on the clinical impact of the CC as a prognostic factor in AIS. This aspect was analyzed in the article: UNfavorable CLinical Outcome in patients with good collateral Scores: the UNCLOSE study. Consoli A, Pileggi M, Hasan A T M, Venier A, Sgreccia A, Pizzuto S, Coskun O, Di Maria F, Scarcia L, Lapergue B, Rodesch G, Bracard S, Chen B. The paper was submitted to the Journal of Neuroradiology.
Chapter 2: PRELIMINARY RESULTS
Chapter 2 summarizes the preliminary results obtained in a previous study coordinated by the PhD candidate and centered on the correlation between the angiographic classification of the CC and the images obtained by perfusion CT, showing a significant correlation between the degree of CC and the extension of the ischemic lesion, in particular the ischemic core, observed on perfusion CT. The conclusions of this study, the CAPRI study, published in the Journal of NeuroInterventional Surgery, were in favor of a correlation between the two imaging methods. This work advanced the physiopathological understanding of AIS by showing the importance of the evaluation of the CC. Another part of this chapter describes the results of a preliminary study carried out during the Master 2 within the same laboratory. The objective was to analyze which type of angiographic acquisition should be used for the construction of an algorithm characterizing the CC directly from the angiographic images, in order to allow a real-time evaluation. A "standard" protocol (at 2 frames/second) and an "experimental" protocol (at 6 frames/second) were compared. Both acquisition protocols are available and installed on all angiography machines, but only the standard protocol is used during MT procedures. The results of this study led to the conclusion that an acquisition protocol at 6 frames/second is the most suitable for analyzing the CC, because of the amount of data obtained, which allows a more specific analysis of the CC.
prognostic factor for clinical outcome in AIS treated by MT. In particular, the objective was to evaluate which factors may be associated with unfavorable clinical outcomes in patients presenting a good collateral profile. This research question was addressed in the UNCLOSE study (UNfavorable CLinical Outcomes in patients with good collateral Scores study), a retrospective study of a cohort of patients with good CC, submitted to the Journal of Neuroradiology.
: the CC and the cerebral circulation are visualized in different phases of the angiogram and could therefore be considered as "desynchronized". It could be intuitive that the shorter the delay between the opacification of the normal anterograde cerebral circulation and the opacification of the CC, the higher the flow velocity in the CC and therefore in the ischemic territory. In fact, the patency of the venous side of the CC makes it possible to avoid blood stasis in the collateral vessels and to maintain a stable blood flow in the ischemic territory and the perfusion of the microcirculation. Turning points: the progression of the contrast agent in the cerebral circulation can be clearly identified in three different phases: arterial, parenchymal/capillary and venous. The transition between two consecutive phases can therefore be defined as a turning point between the two phases. The turning points would help to differentiate the transition from the arterial phase to the parenchymal phase and from the parenchymal phase to the venous phase. Thus, the application of the turning points would help to separate and differentiate the angiographic phases and could provide the basis for "quantifying" the desynchronization between the CC and the cerebral circulation. The development of the algorithm will ultimately allow a real-time analysis of the CC, in particular with respect to its effectiveness (presence of the venous phase of the CC) and to the flow velocity. This type of evaluation will add hemodynamic elements useful for the physiopathological understanding of AIS and for patient management, reducing the risk of hemorrhagic transformation by identifying patients with an ineffective CC.
DEFINITION AND PRESENTATION OF THE THESIS
Acute ischemic stroke (AIS) represents a life-threatening condition caused by the occlusion of a cerebral artery and determining an ischemic lesion which can evolve toward necrosis of the brain if the blood flow is not rapidly restored. AIS is characterized by high rates of mortality and morbidity, and the goal of the management of this disease in an acute setting is based on a rapid diagnosis and a proper choice of treatment. The new evidence brought by recent Randomized Controlled Trials (RCTs) in 2015 allowed the development and the widespread diffusion of Mechanical Thrombectomy (MT), a procedure per-
be focused on the critical analysis of the CC according to the angiographic characteristics that were described in the paper: Angiographic collateral venous phase: a novel landmark for leptomeningeal collaterals evaluation in acute ischemic stroke. Consoli A, Pizzuto S, Sgreccia A, Di Maria F, Coskun O, Rodesch G, Lapergue B, Felblinger J, Chen B, Bracard S. Published in the Journal of NeuroInterventional Surgery (JNIS). The last Chapter will be dedicated to the development of a specific post-processing algorithm for the assessment of CC directly on 2D angiograms and to its use with a pre-existing algorithm. The technical workflow and the draft of the patent submission will be presented and discussed in this Chapter: Algorithm of characterization of collateral circulation "REACH" (REal-time Assessment of Collateral circulation Hemodynamics in acute ischemic stroke) / Algorithme de caractérisation de la circulation collatérale REACH (REal-time Assessment of Collateral circula-
Fig. 1. The Alberta Stroke Program Early CT (ASPECT) score: a 10-point scale used to assess the extension of the ischemic lesion. The ASPECT score is widely used and can be applied to both MRI and CT scans.
Fig. 2.
Fig. 3. The anatomy of cortical pial arteries and of the microcirculation (adapted from Iadecola C, Nedergaard M. Glial regulation of the cerebral microvasculature. Nat Neurosci 10, 1369-1376).
Fig. 4. The Careggi Collateral Score, introduced in 2013 to assess the collateral circulation directly on Digital Subtraction Angiography (image from the Supplemental material of Consoli A, Andersson T, Holmberg A, Verganti L, Saletti A, Vallone S, Zini A, Cerase A, Romano D, Bracco S, Lorenzano S, Fainardi E, Mangiafico S; CAPRI Collaborative Group. CT perfusion and angiographic assessment of pial collateral reperfusion in acute ischemic stroke: the CAPRI study. J Neurointerv Surg. 2016 Dec;8(12):1211-1216). The score provided a semiquantitative evaluation of collateral vessels (in the anteroposterior projection) and a qualitative evaluation (in the lateral projection). In particular, the latter was assessed through the "suspended artery sign", first described in 2013 by Mangiafico et al.
Fig. 5. Screenshot from the title page of the Master 2 final thesis, focused on the assessment of flow velocity in cerebral cortical vessels directly on Digital Subtraction Angiography.
Clinical data and the angiographic runs of 10 patients with AIS secondary to the occlusion of the M1 segment of the MCA, treated by MT at Foch Hospital, were selected for the purpose of the study. All the patients were imaged with the experimental acquisition protocol. In order to compare the two protocols, we simulated the standard protocol by subtracting a number of frames proportional to the cadence of the angiographic protocol. Finally, the algorithm evaluated the DSA series acquired with the two different protocols and provided the corresponding perfusional curves. The local Ethical Committee at Foch Hospital had approved the use of the pseudonymized data of the patients and of the DSA data. The data corresponding to the two protocols lead to two different perfusional curves, where the standard one shows an aliasing-like behavior and might indicate different predictions of clinical outcomes.
Fig. 6. Post-treatment workflow. Phase 1: definition of the temporal window (time of interest, TOI) and of the collateral area (region of interest, ROI); Phase 2: segmentation of the opacified vessels in every frame of the TOI; Phase 3: production of the curve that plotted the number of enhanced vessels over time (perfusional curve).
Fig. 7. Results of the comparison of the perfusional curves obtained through the analysis of the angiograms provided by the experimental acquisition protocol (6 frames/second, blue dotted line) and the simulated standard one (after reconstruction at 2 frames/second, red dotted line).
Fig. 8. Graph showing the different behavior of the two identified clusters of patients according to the distribution of the perfusional curves after normalization (adapted and modified from the candidate's Master 2 project). The upper cluster (highlighted by the red rectangle) shows better clinical results (80% of favorable clinical outcomes and 0% of mortality) than the lower cluster (blue dotted rectangle: 25% of favorable clinical results and 75% of mortality). The main difference between the two clusters was based on the profile of the perfusional curve, which was more enhanced in the curves of the upper cluster, showing a higher number of vessels opacified in the collateral area and therefore effective collaterals.
reviewed and analyzed: age, sex, cardiovascular risk factors, baseline modified Rankin Scale (mRS), baseline National Institute of Health stroke scale (NIHSS) score, presence of hemorrhage or acute FLAIR signal abnormalities at baseline CT or MRI, CT or DWI-Alberta Stroke Program Early CT Score (ASPECTS), arterial occlusion site, pa-tient's direct admission to our comprehensive stroke center or to a primary stroke center with sec-ondary transfer ("drip and ship"), time metrics (onset-to-groin -OTG, onset-to-recanalization -OTR, grointo-recanalization -GTR), intravenous thrombolysis administration, type of anesthesia, MT technique, number of passes, first pass effect (FPE) procedural complications, recanalization grade according to the modified Treatment in Cerebral Infarction (mTICI) scale, 24 hours-NIHSS and ASPECTS, hemorrhagic transfor-mation and symptomatic intracerebral hemorrhage (sICH) accord-ing to the ECASS definition and 90-days mRS score. Standard recanalization grades definitions were used: partial (mTICI 0-2a), adequate (mTICI 2b-3) and almost complete/complete (mTICI 2c-3). Patients at admission were mainly images by MRI (>90%).
viewed by an interventional neuroradiologist and an interventional neurologist with ≥5 years of experience in the neurointerventional field.Collateral circulation was assessed on the prethrombectomy angiographic study, with injection from ipsilateral ICA (for M1 and proximal M2 occlusions) or contralateral ICA and posterior circulation (for intra-cranial ICA occlusions). The ASITN/SIR scale was used for the evaluation. A third neurointerven-tionalist with >10 years of experience reviewed all the angiograms and resolved the disagreements.Kappa inter-and intra-observer agreement was calculated per grade and per dichotomization (grades 1-2 vs grades 3-4). Patients without an adequate pre-thrombectomy angiographic study to assess collateral circulations were excluded.EndpointsThe primary endpoint was to investigate the factors associated with unfavorable outcomes defined as the lack of the achievement of a functional independence (mRS 3-6) at 90-days outpatient visit or telephone interview. Clinical evaluations were performed by certified stroke neurologists.Secondary outcomes included the analysis of those factors associated with: unfavorable outcomes in patients adequately recanalized (defined as mTICI 2b-3), all-causes mortality at 90days and post-procedural symptomatic intracerebral hemorrhage (sICH) in the overall population.
scriptive evaluation with the median [interquartile] or mean ± standard deviation for continuous variables and frequency and percentage for categorical variables. Student's T-tests or Wilcoxon tests were performed for continuous variables depending to the number of patients in groups. For categorical variables, chi square test or Fisher's test were used when the expected value was less than 5. Those variables associated with the outcome in univariate analysis with a significance level below 0.2 were included in a multivariate logistic regression mod-el, which was adjusted for confounding factors. The odds ratios between factors and outcomes of their respective 95% confidence intervals were calculated. Stepwise backward multiple regression was performed to identify statistically significant predictors. All tests were bilateral with a 5% de-gree of significance. Statistical analysis was performed with the SAS 9.4 software.
Unfavorable clinical outcomes were observed in 47% of patients following mechanical thrombecto-my despite good collaterals according to the ASITN/SIR classification. Although a good collateral circulation could allow to achieve functional independency also in patients treated in late temporal windows, partial recanalizations and longer procedures were associated with poor outcomes without resulting independent predictors, independently from the endovascular technique. FPE was not sig-nificantly correlated with clinical outcome. Intravenous thrombolysis increases the chance to achieve a good clinical outcome in patients with good collaterals without increasing the risk of hemorrhagic transformation.
In particular: are good collaterals always effective? This research question represented the aim of the following paper published in the Journal of NeuroInterventional Surgery. the periphery of ischemic site, with persistence of some of the defect, and to only a portion of the ischemic territory 2 Rapid collaterals to the periphery of ischemic site, with persistence of some of the defect, and to only a portion of the ischemic territory 3 Collaterals with slow but complete angiographic blood flow of the ischemic bed by the late venous phase 4 Complete and rapid collateral blood flow to the vascular bed in the entire ischemic territory by retrograde reprfusion 54 Patients with anterior circulation stroke and LVO (M1/ICA terminus/tandem occlusions) in the study periodof collateral venous phase; mRS: modified Rankin Scale; PH1/PH2: parenchymal hematoma according to the ECASS-III scale
Fig. 9. The desynchronization of the collateral circulation (from "Lo studio dei circoli collaterali può sostituire le tecniche di imaging per identificare la penombra nella selezione dei pazienti da sottoporre a TM nelle finestre terapeutiche più lunghe?" - Arturo Consoli - Invited Conference at the 8th National Congress of the Italian Stroke Association, Verona - December 2021). Digital Subtraction Angiography, antero-posterior projection, acquired in a patient with a left M1-Middle Cerebral Artery occlusion. Panel A shows the very early arterial phase of the angiogram: at this moment of the acquisition we can observe how the "normal" anterograde cerebral circulation (highlighted by the red triangle and corresponding to the territory of the Anterior Cerebral Artery) and the collateral circulation (blue triangle highlighting the collateral area) are desynchronized. Indeed, in this early phase the collateral circulation is in the activation phase while the normal cerebral circulation is in the arterial phase. Panel B shows the intermediate (capillary) phase in the normal cerebral circulation; in this phase the collateral circulation is in the arterial phase.
5.1. Materials: Study population and image acquisition protocol
5.1.1. The study population and image analysis
One hundred and eight patients with an occlusion of the M1 segment of the middle cerebral artery (M1-MCA), eligible for the MT procedure and treated between 2018 and 2021 at Foch Hospital, were analyzed.
Fig. 10. Comparative analysis between the standard acquisition protocol (2 frames/second) and the experimental protocol (6 frames/second). The analysis of the curves showed that the experimental acquisition protocol provided more informative curves, since a larger number of data points is available for the analysis of pixel density modification in the same time interval.
Fig. 11. Digital Subtraction Angiography, anteroposterior view showing the occlusion of the left middle cerebral artery. Depiction of the three ROIs that were compared for the analysis of the collateral circulation: the large ROI (red), the intermediate ROI (blue) and the critical ROI (yellow).
- Cerebral circulation: First turning point: first frame of observation of the parenchymal blush in the ACA territory. Second turning point: first frame of visualisation of the superior longitudinal sinus on the midline.
- Collateral circulation: First turning point: first frame of observation of the parenchymal blush in the CC ROI. Second turning point: first frame of visualisation of the cortical veins, which have a straight course parallel to the cerebral sulci.
Fig. 12. Definition of the turning points in the cerebral and collateral regions according to the concept of the desynchronization.
Fig. 14. Illustration of the three ROIs for fractal dimension calculation. The left image depicts the ROIs in a patient belonging to the Gc (good collateral group) and the right one refers to a patient belonging to the Pc (poor collateral group).
Fig. 18. The entropy plot in the collateral area. The blue lines identify the manually defined temporal window of observation. The blue triangle shows the turning point from the arterial to the parenchymal/capillary phase.
angiographic evaluation of the presence of a collateral venous phase in relation to the effectiveness of the collateral circulation (qualitative evaluation). This type of parameter represents the physiopathological interpretation of the role of the venous and venular system of the collateral circulation in maintaining cerebral perfusion during arterial occlusion and the evolution of cerebral ischemia. The CVP was compared to the evaluation made with the ASITN/SIR scale, and the CVP alone as well as the composite evaluation of these two parameters showed better results in terms of correlation with clinical outcomes. The last part of the thesis focuses on the development of an algorithm for the characterization of the collateral circulation (ASCOT algorithm). The algorithm will make it possible to analyze cerebral angiography images in real time during mechanical thrombectomy and to provide data on the effectiveness of the collateral circulation, which could help to broaden the physiopathological knowledge of AIS and to guide intra-procedural therapeutic choices and post-procedural management.
English
The focus of this thesis is on the evaluation of collateral circulation in patients with acute ischemic stroke (AIS). The role of this collateral circulation has been investigated in the literature while the hemodynamic characteristics are still little known. Since 2018 there has been a growing interest around collateral circulation and several teams around the world have focused their efforts on understanding the pathophysiology of AIS as well as on new imaging methods to further describe the hemodynamic functioning of the collateral circulation. The objectives of the thesis were developed on three distinct axes: (i) the analysis of the clinical impact of collateral circulation in ischemic stroke, (ii) the critical analysis of collateral circulation assessment by cerebral angiography and (iii) the development of a real-time collateral circulation characterization algorithm for angiographic images. The clinical impact of collateral circulation is an essential step in the pathophysiological understanding. Several studies have shown significant correlations between good collateral circulation and favorable clinical outcomes, and patients with good collateral circulation are considered the best candidates for mechanical thrombectomy procedures. In this thesis I present the results of a retrospective clinical analysis study (UNCLOSE study), targeted on this specific subgroup of patients with AIS and good collateral circulation evaluated by cerebral arteriography (ASITN/SIR scale) and treated by mechanical thrombectomy. The findings of this study showed an adverse clinical outcome in almost half of this subgroup (47%) despite the presence of good collateral circulation. The factors that may explain this type of results are not different from those already described in patients with insufficient or absent collaterals, with the exception of the lack of significance of the First Pass Effect and a protective role of intravenous thrombolysis. In addition, these results draw attention to the modality of evaluation of collateral circulation by cerebral arteriography and the need to propose other evaluation parameters. A new evaluation parameter was described while proposing a critical analysis of the angiographic evaluation of collateral circulation: the angiographic collateral venous phase (CVP).
I present the results of a retrospective study of angiographic evaluation of the presence of a collateral venous phase in relation to the collateral circulation efficiency (qualitative evaluation). This type of parameter represents the physiopathological interpretation of the role of the venous and venular system of collateral circulation in maintaining cerebral perfusion during arterial occlusion and progression of cerebral ischemia. The CVP was compared to the ASITN/SIR assessment and the CVP alone and the composite assessment of these two parameters showed better results in terms of correlation with clinical results.The last part of the thesis focuses on the development of a collateral circulation characterization algorithm (ASCOT). The algorithm will enable real-time analysis of brain angiography images acquired during mechanical thrombectomy procedures and provide data about collateral circulation effectiveness that could help expand pathophysiology knowledge on AIS, and drive intraprocedural and post-procedural treatment choices.
Table 1. The modified Rankin Scale
Table 2. The modified Treatment In Cerebral Ischemia score (mTICI score)
Table 3. The ASITN/SIR collateral grade
It would therefore be very advantageous if the pre-treatment angiographic images acquired in situ could be used to provide indications concerning treatment strategies, such as patient selection for MT, prognosis, post-treatment strategies, etc. This concept was addressed through the development of a dedicated algorithm to characterize the CC based on the angiographic images.
Inspired by, and considered as an extension of, the Master 2 work, this chapter describes the analysis of the time-density curves of the CC region extracted from the angiographic data. Unlike a study based exclusively on the analysis of a Region of Interest (ROI), a pixel-wise analysis was also applied. A population of 108 patients presenting an occlusion of the M1 segment of the middle cerebral artery (M1-MCA), eligible for the MT procedure and treated between 2018 and 2021 at Foch Hospital, was analyzed. All patients were imaged with the "experimental" acquisition protocol described previously (at 6 frames/second).
Subsequently, a Quality Core Lab dedicated to the analysis of the quality of the angiographic images was established, which included two neuroradiologists not involved in the procedures. The Quality Core Lab evaluated the quality of the images and indicated eligibility for the analysis. Motion artifacts represented the main reason for exclusion and 19 patients were therefore excluded.
A group of variables was evaluated in order to perform the critical analysis of the angiograms and the parametrization of the algorithm. These variables were summarized in the analytical plan that was established and named "Image Feature Analysis (IFA)".
. In the former paper we introduced a new grading system based on the Digital Subtraction
rough the Careggi Collateral Score (CCS), while in the latter we had observed that CC had a real
impact on the clinical outcome of patients treated by MT.
From 2011 to 2015 I have coordinated a collaborative group which included the Careggi University
Hospital (Florence), the Lescotte Hospital (Siena), the Sant'Agostino Estense Hospital (Modena),
the University Hospital of Ferrara and the Karolinska Institute (Stockholm). This collaborative
group focused on the comparison between the angiographic assessment of the CC through the Ca-
reggi Collateral Score and the perfusional aspect of the brain evaluated through the CT-Perfusion:
the CAPRI Study (CT perfusion and angiographic assessment of pial collateral reperfusion in acute
ischemic stroke: the CAPRI study). The retrospective analysis of our multicentric cohort led to the
publication of the results of our paper on the Journal of NeuroInterventional Surgery, which is con-
sidered one of the most valuable journals in the domain of Interventional Neuroradiology (Consoli
A, et al. 2016). The CAPRI Study represented also the final thesis of my Residency Program.
During the Master 2, in 2018, I had the chance to work on the CIC-IT/IADI Laboratory in Nancy,
directed by Prof. Jacques Felblinger, under the close supervision of Dr Bailiang Chen and I have
exposed my project to work on the imaging post-treatment in order to assess the qualitative aspect
of CC directly on DSA. The preliminary work that I have conducted at the laboratory led to a basic
version of an algorithm that segmented and evaluated the perfusional curves of the brain directly on
the 2D angiograms that I had acquired using an experimental protocol at 6 frames/seconds.
This experience represented a milestone in the construction of the project of the PhD thesis. Indeed,
considering the initial results that we had observed I felt very confident to continue this project fol-
lowing a more sophisticated roadmap. The team at CIC-IT/IADI Laboratory was very motivated as
well. A strategy plan was built up in accordance with Prof. Serge Bracard, my PhD thesis Director,
and Dr Mitchelle Bailiang Chen, my PhD thesis Co-director. Prof. Bracard and Dr Chen approved
the subject of the PhD and the PhD thesis plan.
COVID-19 pandemic has overturned the entire world during her last two years and that has affec-
ted, at the very beginning, the workflow of my PhD project. However, we have rapidly thought
about an alternative program and started weekly meetings with a remote work-based schedule. I had
the possibility to continue the experimental and the clinical workflow at my institution, the Foch
Hospital in Suresnes. I had the opportunity to work remotely with Dr Chen and we have mutually
advanced in the development of a dedicated algorithm for the segmentation of the collateral vessels.
Angiography (DSA) in order to provide a semi-quantitative and a qualitative assessment of CC th-
submission Consoli A -CIC-IT IADI Laboratory. Algorithm of characterization of collateral circulation "ASCOT" (ASsessment of COllateral circulation hemodynamics in acute ischemic sTroke)/ Algorithme de caractérisation de la circulation collatérale "ASCOT" (ASsessment of COllateral circulation hemodynamics in acute ischemic sTroke)
Invited Conference Speaker -National Congress Groupe de Réflexion sur la Cardiologie Interven-1. INTRODUCTION
tionnelle (GRCI, Paris) -Prise en charge d'un AVC par un neuroradiologie interventionnel" -De-
1. Published papers -Peer-reviewed Journals 1.1. Acute stroke -Overall background cember 2017
Consoli A, Andersson T, Holmberg A, Verganti L, Saletti A, Vallone S, Zini A, Cerase A, Romano D, Acute stroke is considered as a life-threatening, emergency condition and it can be classified as:
Bracco S, Lorenzano S, Fainardi E, Mangiafico S; CAPRI Collaborative Group. CT perfusion and Invited Conference Speaker -Annual National Congress of the Egyptian Stroke Conference (Cairo) acute ischemic stroke (AIS) and hemorrhagic stroke.
angiographic assessment of pial collateral reperfusion in acute ischemic stroke: the CAPRI study. -"Mechanical thrombectomy or thrombus aspiration? Insights from the ASTER and ASTER2
The paper was published in the Journal of Neurointerventional Surgery (JNIS) in 2016. J Neuroin-Trials" -December 2018 The former is secondary to the occlusion of a cerebral artery, caused by a clot with different sour-
terv Surg. 2016 Dec;8(12):1211-1216. doi: 10.1136/neurintsurg-2015-012155. Epub 2016 Jan 22. ces:
PMID: 26801947. Invited Conference Speaker -SLICE Course (Nice) -"Clot imaging with DSA" -October 2019
Consoli A, Pizzuto S, Sgreccia A, Di Maria F, Coskun O, Rodesch G, Lapergue B, Felblinger J, Invited Conference Speaker -Italian Stroke Association (ISA) -"Lo studio dei circoli collaterali
Chen MB, Bracard S. Angiographic collateral venous phase: a novel landmark for leptomeningeal puo' sostituire le techniche di imaging per identificare la penombra nella selezione dei pazienti da
collaterals evaluation in acute ischemic stroke -The paper has been published in the Journal of sottoporre a trombectomia meccanica nelle finestre terapeutiche più lunghe?" -"Could the study of
NeuroInterventional Surgery (JNIS) -2023 collateral circulation replace the imaging techniques to identify the penumbra to select patients for
mechanical thrombectomy in late windows?" -Verona, December 2021
2. Submitted papers -Peer-reviewed Journals Invited Conference Speaker -Annual Camp Base Neurovascular course (CBNV, Rome) -"How to Section I -INTRODUCTION
Consoli A, Pileggi M, Hasan A T M, Venier A, Sgreccia A, Pizzuto S, Coskun O, Di Maria F, Scar-select patients for mechanical thrombectomy in acute ischemic stroke?" -September 2022
cia L, Lapergue B, Rodesch G, Bracard S, Chen B. UNfavorable CLinical Outcomes in patients
with good collateral Scores: the UNCLOSE study. -Submitted to the Journal of Neuroradiology
3. Patent 4. Communications
Invited Conference Speaker -Annual Camp Base Neurovascular course (CBNV, Rome) -"Angio-
graphic and CT-Angiography assessment of collateral circulation" -September 2015
Table 2 .
2 The modified Treatment In Cerebral Ischemia score (mTICI score)
Table 1
1
). Sec-ondary transfers resulted signifi-
cantly associated with unfavorable outcomes
(OR: 4.41, IC95%[1.70-11.43], p=0.002) at the
multivariate analysis whereas no differences
were shown in terms of onset-to-groin. Howe-
ver, if we consider only patients with almost
complete/complete re-canalization we obser-
ved that the impact of the secondary transfers was not significant (Supple-mental Table
1
).
in terms of procedure time, which was longer
in patients with unfavorable outcomes (63.1 vs
46.3 minutes, p=0.004), although no dif-feren-
ces were shown in terms of first-line endova-
scular technique or occlusion site. This datum
could be explained by the higher number of
passes performed in patients with poor clinical
out-comes (2.7 vs 2, p=0.01, Table 1) or sICH
(3.5 vs 2.3 minutes, p=0.01, Table
The most significant difference was observed
Flowchart of the studyTable 1 . Overall study population data and univariate analysis (primary endpoint).
1
Proximal M2 40 (18) 27 (23) 13 (13)
Side, N(%)
Left 115 (53) 62 (53) 53 (51)
0.76
Right 104 (47) 54 (47) 50 (50)
Baseline ASPECTS, median 8 (7-9) 8 (7-9) 8 (7-9) 0.1
(IQR)
Secondary transfer, N(%) 132 (60) 63 (54) 69 (67) 0.06
Onset-to-groin, mean±SD 230.12±113.09 225.5 ±82.8 239.7 ±87.1 0.22
Onset-to-recanalization, 288.3±117.3 270.4 ±93 299.4 ±96 0.03
mean±SD
Groin-to-recanalization, 53.8±41.1 46.3 ±32.7 63.1 ±47.6 0.004
mean±SD
IVT, N(%) 116 (
Overall Favorable outcome Unfavorable p
(N=219) (N=116) outcome (N=103)
Females, N(%) 117 (53.4) 56 (48) 46 (45) 0.59
Age, mean±SD 70.6 ±16 65.3 ±16.9 76.6 ±12.9 <0.001
Baseline mRS, N(%)
0 181 (82) 105 (91) 76 (74)
1 23 (10) 11 (9) 12 (12) <0.001
2 15 (7) 0 15 (14)
Baseline NIHSS, median 15 (9-19.25) 12 (7-17) 18 (14-21) <0.001
(IQR)
Occlusion site, N(%)
ICA terminus 14 (6) 6 (5) 8 (8)
M1-MCA 165 (74) 83 (72) 82 (79) 0.21
40
T, et al. Effect of thrombectomy on oedema progression and clinical outcome in patients with a poor collateral profile. Stroke and Vascular Neurology 2021;6:doi: 10.1136/ svn-2020-000570.
Fig.1.
Table 2 -
2 Secondary endpoints.
PH1/PH2 30 (13.6) 10 (8.6) 20 (20) 0.02
sICH, N(%) 13 (6) 0 13 (13) <0.001
Recurrent stroke, N(%) 11 (5) 3 (3) 8 (8) 0.09
Table 3 . Multivariate analysis for primary and secondary endpoints P value OR [IC 95%] Primary endpoint -Unfavorable outcome in overall population
3
Age 0.001 1,05 [1.02 -1.09]
24h-NIHSS <0.001 1.23 [1.14 -1.32]
Secondary transfers 0.002 4.41 [1.70 -11.43]
IVT 0.009 0.28 [0.11 -0.73]
24h-ASPECTS <0.001 2.42 [1.84 -7.21]
Secondary endpoints
Unfavorable outcome in adequately recanalised patients (mTICI
2b-3)
Age 0.0002 1.05 [1.03 -1.08]
Baseline NIHSS <.0001 1.15 [1.07 -1.22]
IVT 0.022 0.41 [0.19 -0.88]
Secondary transfers 0.005 2.95 [1.38 -6.33]
Mortality
Baseline mRS 0.003
1 vs 0 0.534 1.57 [0.38 -6.47]
2 vs 0 0.0006 10.67 [2.78 -41.02]
Baseline NIHSS 0.003 1.17 [1.05 -1.27]
mTICI (2B-3) 0.012 0.18 [0.05 -0.69]
sICH
Age 0.039 1.08 [1.07 -1.29]
Baseline ASPECTS 0.005 0.53 [3.87 -106.34]
First pass effect 0.033 0.10 [0.01 -0.83]
Table 2 . Subgroup analysis of good outcome according to the time elapsed be- tween onset of symptoms and recanalization.
2
OTR <6h OTR >6h
(N=176)* (N=32)**
Table 3 . Subgroup analysis of recanalization rate according to the administra- tion of intravenous thrombolysis.
3
Use of BGC, N(%) 46 (26) 13 (40) 0.09
First-line strategy
Aspiration 116 (66) 14 (44)
0.02
Stent retriever/ Combined 60 (34) 18 (56)
First pass effect, N(%) 80 (45) 5 (15) 0.001
N° of passes, mean ± SD 2.14±1.2 3 ±1.8 <0.001
Final mTICI 2b-3, N(%) 165 (94) 24 (75) <0.001
Final mTICI 2c-3, N(%) 135 (76) 15 (47) <0.001
Complications, N(%) 22 (13) 4 (12) 1
24h-NIHSS, median (IQR)** 16 (10-20) 11 (6-18.5) <0.001
24h-ASPECTS, median (IQR)** 6 (5-8) 7 (6-8) 0.07
sICH*, N(%) 11 (6) 2 (6) 0.9
Stroke recurrence, N(%) 9 (5) 1 (3) 0.89
mRS0-2, N(%) 99 (56) 14 (44) 0.19
mTICI 0-2a mTICI 2b-3 Tot. P
IVT- 17 (16%) 87 (84%) 104
0.02
IVT+ 8 (7%) 110 (93%) 118
50
IVT -= no administration of intravenous thrombolysis.
Table 3 .
3 The American Society of Interventional and Therapeutic Neuroradiology/Society of Interventional Radiology (ASITN/SIR) Collateral grade
Table 1 . Subgroup analysis and multivariate analysis
1
Supplemental <60 60-90 >90
N patients 41 39 120
Age, mean ± SD 65.5 ± 17.2 69.8 ± 16.0 64.7 ± 15.4
Baseline ASPECTS, median
(IQR) 7 (5-8) 7 (5-8) 7 (5-8)
CVP+, N(%) 23 (56.1%) 22 (56.4%) 45 (37.5%)
ASITN/SIR >2, N(%) 22 (53.7%) 23 (59.0%) 68 (56.7%)
mRS 0-2, N(%) 26 (63.4%) 19 (48.7%) 51 (42.5%)
N passes, mean ± SD 2.29 ± 1.54 2.41 ± 1.67 2.78 ± 2.26
Multivariate analysis: defined variable CVP+
OR IC95% p
ASITN/SIR >2 17.0 [6.3 -46.3] <0.0001
IV Thrombolysis 3.6 [1.5-9.0] 0.0053
Onset to imaging 0.02
180< vs <90 9.1 [2.0-41.7]
90-180 vs <90 2.9 [0.98-8.53]
Onset to groin 0.0006
180-360 vs <180 0.1 [0.04-0.47]
360< vs <180 0.03 [0.005-0.195]
OR IC95% p
Age 0.97 [0.941-0.996] 0.02
CVP+ 12.18 [5.25-28.28] 0.001
Composite ASITN/SIR + CVP 6.56 [2.99-14.39] 0.001
IV Thrombolysis 2.68 [1.17-6.14] 0.02
mTICI 2b-3 5.21 [2.86-9.47] 0.006
(3) 8 (10) 0.06 5 (3) 6 (21) 0.001 11 (6) 0 0.001
Acknowledgments
To Prof. Serge Bracard, for his continuous support and his clear supervision. He is an example of pea-
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
Data are available upon reasonable request to the corresponding author
CRITICAL ANALYSIS OF COLLATERALS ACCORDING TO DIGITAL SUBTRACTION ANGIOGRAPHY
In Chapter 3 the role of CC as a predictor of good clinical outcome for patients treated by MT for AIS secondary to LVOs was analyzed. Although this result was not significant, the CC is currently considered as a relevant factor in determining functional independence in AIS patients, and it is regarded as a co-factor together with the recanalization grade (mTICI score).
Indeed, as discussed in the previous chapters, the main role of CC is to modulate the evolution of the ischemic process, by providing a sort of "hemodynamic reserve" in case of arterial occlusion.
However, patients with good collaterals can also be associated with unfavorable outcome or, at least, not reach a functional independence.
Therefore, even those patients who are considered the best candidates for MT according to the current criteria may not benefit from the endovascular treatment, in particular those with older age, without administration of IVT and treated with long procedure times (Consoli A et al., 2022, UNCLOSE).
The CAPRI Study (Consoli A et al., 2016), reported in Chapter 2, had shown how the CC assessed through the CCS was significantly associated with the CBV assessed on CT Perfusion imaging.
However, CCS is just one of the different classifications that have been proposed in order to assess the CC [START_REF] Mcverry | Systematic review of methods for assessing leptomeningeal collateral flow[END_REF]. More than 40 classifications have been proposed and about 20 are used in AIS patients.
The most used grading scale is the ASITN/SIR classification which describes a 5-grades scale for the assessment of collateral circulation (Table 3), based on an integrated semi-quantitative and qualitative evaluation of CC.
The different grades are calculated based on the peripheral or complete filling of the infarcted territory associated with the mention "rapid" or "slow" according to the difference of filling of the CC compared to the washout of the ACA (Higashida RT et al., 2003).
Supplemental Material
Supplemental Fig. 1. Flow chart of the study with eligibility criteria.
Discussion
The findings of this study showed that the CC should be assessed not only according to its extension but also according to its effectiveness.
As shown in the previous chapter, even patients with good CC (according to the ASITN/SIR classification) may not reach functional independence, although they may represent the optimal candidates to achieve favorable clinical outcomes.
In order to assess the effectiveness of the CC, we have proposed an analysis of the CC based on the concept of the patency of the venous side of the collateral vessels. According to this theory, the effectiveness of the CC would also relate to the effectiveness of the venules of the collateral circulation, which perform several tasks, such as the clearance of downstream emboli, the maintenance of the blood flow and a preventive effect on platelet adhesion (Tong et al., 2018).
Furthermore, the patency of the venous side of the collateral circulation is related with the possibility for the CC to continue to "accept" the blood flow retrogradely from the pial circulation, avoiding a condition of stasis in the microcirculation with a subsequent risk of thrombosis.
The current study showed that the analysis of the venous phase of collaterals seems to refine the evaluation of CC. Although the ASITN/SIR-based analysis and the CVP-based analysis showed globally similar results, in line with the literature (Anadani et al. 2022[START_REF] Mangiafico | Effect of the Interaction between Recanalization and Collateral Circulation on Functional Outcome in Acute Ischaemic Stroke[END_REF], Liebeskind et al., 2022), it was possible to observe a stronger association between the presence of CVP (CVP+) and the measures of clinical outcomes (3 months mRS) or hemorrhagic transformation rather than the evaluation performed through the ASITN/SIR score.
Furthermore, the composite analysis that matched the combined evaluation of the CC through both the ASITN/SIR and the CVP provided solid results in terms of association with favorable clinical outcomes (OR: 6.56, IC95% [2.99-14.39]) and a lower risk of hemorrhagic transformation.
Interestingly, in the inclusion period of the study (2016-2021), we had to exclude about 400 patients from the analysis because of the lack of angiograms of sufficient duration to assess the venous phase of the collateral circulation. Although this could represent a potential limitation for the present study, this datum depicts the poor interest that has always been paid in the past decades to the venous side of the CC.
An external validation of these results will be mandatory in order to confirm these findings, which remain at the moment a hypothesis-generating tool.
Technical aspects
The CC should be considered as a dynamic circulation, running retrogradely from the neighboring vascular territories towards the territory of the occluded artery.
Digital Subtraction Angiography provides the visualization of the cerebral circulation through the opacification of the vessels thanks to the contrast agent and it could be considered particularly suitable to assess the CC.
Indeed, DSA provides a real-time imaging of the transit of the contrast agent, highlighting the different phases of the blood flow: the arterial phase, the parenchymal phase and the venous phase. Therefore, DSA is capable to provide a dynamic assessment of the CC as well.
During the PhD program, after having reviewed several angiograms and building on my clinical experience as an interventional neuroradiologist and on the academic work on collaterals, I have been working on the introduction of two novel concepts in the assessment of the CC: the desynchronization and the turning points.
The desynchronization of CC depicts the different behavior of the CC as compared to the "normal" cerebral circulation, namely the anterograde circulation. Indeed, the CC is a retrograde circulation by definition and, therefore, its time of opacification is delayed compared to the one needed for the normal cerebral circulation (Fig. 9).
Consequently, the CC and the cerebral circulation are visualized in different phases of the angiograms and these could be considered as "desynchronized".
Intuitively, the shorter the delay between the opacification of the normal anterograde circulation and the appearance of the CC, the higher the flow velocity will be in the CC and hence in the ischemic territory.
Theoretically, this feature could be interpreted as a marker of effectiveness of the CC. Indeed, according to the aforementioned theory by Tong et al. (Tong et al., 2018), the patency of the venous side of the CC makes it possible to avoid blood stasis in the CC and to maintain an effective blood flow in the ischemic territory and the perfusion of the microcirculation.
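One simple way to quantify this delay, sketched below, is to compare the frames at which the mean opacification of the cerebral ROI and of the collateral ROI first drops below a fraction of the baseline intensity. The 5% threshold, the function names and the synthetic series are illustrative assumptions and do not correspond to the algorithm developed in this thesis; the 6 frames/second rate matches the experimental acquisition protocol.

```python
import numpy as np

def opacification_onset(roi_series: np.ndarray, drop: float = 0.05) -> int:
    """First frame where the mean ROI intensity drops by `drop` (5%) below
    the baseline of the first frame (vessels darken on subtracted DSA)."""
    means = roi_series.reshape(roi_series.shape[0], -1).mean(axis=1)
    baseline = means[0]
    below = np.where(means < baseline * (1.0 - drop))[0]
    return int(below[0]) if below.size else -1

def desynchronization_delay(cerebral_roi, collateral_roi, frame_rate: float = 6.0) -> float:
    """Delay (seconds) between cerebral and collateral opacification onsets."""
    d_frames = opacification_onset(collateral_roi) - opacification_onset(cerebral_roi)
    return d_frames / frame_rate

rng = np.random.default_rng(4)
cer = rng.normal(200, 2, size=(60, 32, 32)); cer[10:] -= 30   # opacifies at frame 10
col = rng.normal(200, 2, size=(60, 32, 32)); col[25:] -= 30   # opacifies at frame 25
print(f"delay ~ {desynchronization_delay(cer, col):.2f} s")   # ~2.5 s at 6 frames/s
```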
The efforts that have been done in the setting of this algorithm have been driven by the physiopathological assumptions concerning the collateral vessel in order to provide useful information about their hemodynamic features in order to facilitate their understanding.
The patent application, whether accepted, will enable the setting of multicentric evaluations based on the use of the algorithm as well as clinical trials to identify potential correlations with other imaging modalities (such as CT-perfusion or MRI-PWI).
Further studies centered on the clinical validation of the algorithm will be prepared and proposed including larger study populations.
The assessment of the venous phase of the collateral circulation (Tong et al., 2018) will remain a core subject in the coming years, and the implementation of the algorithm and its automation will represent the main mid-term goals to achieve.
The project of the Thesis has also considered the recent innovations and the growing interest in the assessment of the dynamic profile of the collateral circulation and, in particular, of its venous phase.
The preliminary results of the current work generated enthusiasm in the team, and future PhD students as well as Master 2 candidates have shown an interest in continuing the development of the algorithm.
Furthermore, colleagues from other specialties, such as cardiologists and neurologists, have expressed their interest in the further evolution of the algorithm.
SUMMARY OF THE PHD THESIS |
00412025 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2009 | https://inria.hal.science/inria-00412025/file/eumw.pdf | Cédric Lévy-Bencheton #
email: [email protected]
Guillaume Villemaud
email: [email protected]
Power consumption optimization in multi-mode mobile relay
Multi-mode is a common feature in current-generation terminals, enabling the user to stay connected at any time. By selecting an appropriate standard, multi-mode can reduce terminal power consumption. Software Defined Radio is an enabler towards multi-mode for the next generation of terminals. In such a radio, communication modes are implemented by a general-purpose processor through digital functions, instead of dedicated chips. Providing access to users in bad conditions through relays is another solution to reduce power consumption. We look at multi-mode relaying, where a mobile terminal, connected to a UMTS base station, acts as an 802.11g relay for those users. In this paper, we evaluate the algorithmic complexity of 802.11g and UMTS to estimate the power consumption of a Software Defined Radio. We propose a multi-mode relay scheme using such terminals, with the purpose of minimizing the global power consumption. Finally, we state different rules to maximize the local and global power gain by implementing multi-mode relays.
I. INTRODUCTION
Multi-mode is a key feature in all current mobile terminals. By enabling communication on different standards, such as UMTS or 802.11g, this property is a big step towards the unlimited connectivity requested by more users every day. In Software Defined Radio (SDR), the increasing number of chips in current terminals is replaced by a single general-purpose processor running algorithms. Thus, a multi-mode SDR implements different standards as different algorithms. It brings the flexibility needed to guarantee an always available connection, while providing an adaptive and reconfigurable terminal. This reconfiguration is at the center of the IEEE SCC41 Working Group (Dynamic Spectrum Access Networks), via P1900.4 (Holland et al.).
Still, another problem remains: how can operators ensure connectivity at all times without generating new costs? Relay usage is one possible answer. A relay is a device transmitting data from users in bad conditions to a base station. Such relays can be deployed by operators or be mobile, where users' terminals act as potential relays. Relaying improves network coverage (Cavalcanti et al.), increases capacity (Nourizadeh et al.) and reduces the transmission power (Wang et al.). Their implementation increases the network lifetime.
The decision for a terminal whether to act as a relay or to be relayed depends on a metric: for example, transmission power reduction based on the channel conditions (Madan et al.), or network capacity improvement (Seddik et al.). Metrics are computed either locally by the terminal (Ganesan et al.), or globally by the operator (Jiang et al.).
In our work, we consider mobile multi-mode relays: a terminal communicating with other users on one standard, and with the base station on another standard. Since mobile terminals are power-limited, we reduce the power consumption by taking advantage of multi-mode. Contrary to classical works in the relaying field, we focus on the physical layer power consumption, including not only the transmission power but also the numerical and analog power consumptions. The scope of this paper is the physical layer power consumption, and thus we do not consider upper layers.
A SDR is a convenient way to implement multi-mode. Even though its power consumption is beyond classical radios, this drawback is largely compensated through reconfiguration, which allows the terminal to change mode following different criteria. Current works refer to channel conditions [START_REF] Debaillie | Energy-scalable ofdm transmitter design and control[END_REF]. We propose a new reconfiguration scheme based on power reduction. In order to evaluate a terminal power consumption for every mode, we separate the numerical power consumption (linked to the algorithmic complexity), and the radio power consumption (depending on the radio front-end and transmission power). Then, we compare all modes and reconfigure the terminal to the most power efficient one. Hence, the terminal power consumption is minimized at all time.
We detail the previous stages in Section II. Then, we compare a multi-mode relay with direct connections in order to reduce the network global power consumption in Section III. Finally, we express rules to minimize the global power consumption using a mobile multi-mode relay in Section IV.
II. TOWARDS A LOWER POWER CONSUMPTION
In Software Defined Radios (SDR), a physical layer is implemented through algorithms. Being multi-mode, the radio runs the different algorithms corresponding to the selected modes at the same time. We propose to communicate on the mode minimizing the SDR power consumption, which is composed of two parts: the numerical power consumption, and the radio power consumption.
A. Algorithmic complexity evaluation
In order to evaluate the numerical power consumption, we compute the algorithmic complexity per bit for every mode, referring to Neel, Robert and Reed's work on estimating the feasible processor solution space for a software radio. This complexity per bit allows us to compare different standards.
We present the number of operations per bit for each mode (rounded up to the nearest integer) in IEEE 802.11g and UMTS (Table I). For a more detailed version, please refer to Lévy-Bencheton and Villemaud, "Optimisation de la consommation dans les relais mobiles multi-modes".
B. Numerical power consumption
Once we know the algorithmic complexities, we evaluate the numerical power consumption, P_p (in Watt), following Wang et al. on energy-efficient DSPs for wireless sensor networks:
P_p = N * C * V_dd^2 (1)
with N being the number of cycles, C the processor's switching capacitance (in Farad) and V_dd the supply voltage (in Volt). For a given processor at fixed frequency, the number of cycles increases with the algorithmic complexity, which leads to a higher power consumption. Considering an ARM968E-S, we have V_dd = 1.2 V and C = 97.3 pF.
In order to express P_p in Watt per bit, we assume one cycle per operation and set N to the number of operations per bit evaluated above. This result gives us the power required to transmit or receive one single data bit in the chosen mode. We call it the power cost per bit.
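A minimal Python sketch (not the authors' code) of how Eq. (1) yields the numerical power cost per bit is given below; the operation count per bit comes from Table I, which is not reproduced here, so the value used in the example is a placeholder.

```python
# Numerical power cost per bit from Eq. (1), assuming one processor cycle per
# operation, with the ARM968E-S figures quoted in the text.

def numerical_cost_per_bit(ops_per_bit, c_switch=97.3e-12, v_dd=1.2):
    """Energy (J) spent by the processor to handle one data bit."""
    return ops_per_bit * c_switch * v_dd ** 2

# Example with a hypothetical complexity of 500 operations per bit:
print(numerical_cost_per_bit(500))  # ~7.0e-08 J per bit
```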
C. Radio power consumption
We separate the radio power consumption into two parts: the radio-frequency front-end power consumption and the transmission power. We consider a multi-mode radio-frequency front-end, capable of receiving 802.11g and UMTS signals simultaneously, as presented in (Burciu et al.). The front-end power consumption depends on the architecture and on the activity (transmission or reception). We evaluate the radio power consumption, P_c (in Watt), following Shih et al.:
P_c = N_T * T_on * [P_te + P_O] + N_R * R_on * P_re (2)
with P_te and P_re (in Watt) being the power consumption of the front-end components when emitting and receiving respectively, P_O the output signal power (in Watt), T_on and R_on defining transmission or reception, and N_T and N_R the amount of time the transmitter/receiver is switched on per period. Since a radio can either transmit or receive a signal at a given moment, for T_on = 1, N_T = 1 and N_R = 0; reciprocally for R_on = 1. We evaluate the power consumption during a single data bit and express P_c in Watt per bit.
Yet, P_O must be taken into account in transmission, since it depends on the channel conditions and on the distance to the receiver. We explain how to evaluate P_O in Section III-B.
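Below is an illustrative Python sketch (not the authors' code) of Eq. (2) expressed per bit; the front-end consumption figures P_te and P_re are placeholders, since their actual values are not given in this section.

```python
# Radio power cost per bit from Eq. (2): front-end consumption plus radiated
# output power P_O during the duration of one bit.

def radio_cost_per_bit(bit_rate_bps, transmitting, p_te=50e-3, p_re=40e-3, p_out=10e-3):
    """Energy (J) used by the radio front-end for one data bit.

    transmitting -- True for T_on = 1 (N_T = 1), False for R_on = 1 (N_R = 1).
    p_out        -- output signal power P_O (W), set by the link budget.
    """
    bit_duration = 1.0 / bit_rate_bps
    if transmitting:
        return (p_te + p_out) * bit_duration
    return p_re * bit_duration

# Example: transmitting one bit at 54 Mbit/s (802.11g) vs 384 kbit/s (UMTS).
print(radio_cost_per_bit(54e6, True), radio_cost_per_bit(384e3, True))
```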
These hypotheses allow us to express the power cost per bit for all modes of our SDR in Fig. 1. In 802.11g, the numerical and radio powers needed to transmit and to receive one data bit are almost identical (Fig. 1a). The radio power mostly depends on the transmission power, adjusted according to the receiver's channel conditions.
In UMTS, the numerical and radio power decrease at high data rate, due to reduced complexity and sampling. Moreover, the numerical power represents approximately a quarter of the radio power at 384 kbps (Fig. 1b).
Taking these results into account, the fastest rate is not always the most power-consuming. Thus, we use the fastest mode to reduce the power cost per bit. We consider a UMTS Base Station (BS) and two SDR terminals: one is a Primary User (PU) capable of becoming a relay, the other is a Secondary User (SU). The terminals communicate with each other at 54 Mbps in 802.11g and with BS at 384 kbps in UMTS.
III. REDUCTION OF THE GLOBAL POWER CONSUMPTION
A. Case Comparison
We define the global power consumption as the sum of all terminals' power cost per bit. We compare the global power consumption, in Watt per bit, for the following cases:
1) PU direct: PU and SU communicate directly in UMTS with BS (Fig. 2a).
2) PU active relay: PU communicates in UMTS with BS and acts as a relay. SU's signal is relayed in 802.11g onto another UMTS connection established by PU (Fig. 2b).
3) PU inactive relay : PU is not in communication and acts as a relay. SU's signal is relayed in 802.11g on the UMTS connection established by PU. This case can also represent PU sharing its own connection, via multiplexing or aggregation techniques for example.
B. Channel conditions
The terminals control their transmission power by reducing P O to the minimum value allowing the receiver to decode data properly. The required radio power to transmit a single bit is obtained by integrating P O in (2).
Since P_O depends on the channel conditions, we model the 802.11g and UMTS channels independently. For 802.11g we use the ITU-R P.1238-1 office indoor channel model:
L = 20 log10(f) + 30 log10(d) - 28 + L_f(n) (3)
with L being the pathloss (in dB), f the carrier frequency (in MHz), d the distance between the two terminals (in m), 28 the free-space loss coefficient and L_f(n) the floor penetration loss factor, with n the number of floors penetrated. Here, L_f(n) = 15 + 4(n - 1) for n = 2.
For UMTS we use the following outdoor-to-indoor empirical channel model (Kürner et al.):
L_in,LOS,K = 32.4 + 20 log10(f) + 20 log10(S + d_in) + L_perp + L_par (1 - D/S)^2 (4)
with L_in,LOS,K the pathloss with line of sight (in dB), f the carrier frequency (in MHz), d_in the distance between the terminal and the outdoor (in m), S and D the distances between the base station and the building (in m), respectively in line of sight and parallel to the ground, L_perp and L_par the wave penetration factors into the building (in dB), respectively for a perpendicular incidence and for the line-of-sight angle. We take L_perp = 10 dB, L_par = 40 dB, D/S = 0.4, and the mobile terminal inside the building, d_in = 10 m from the walls.
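The following Python sketch (an assumed helper, not taken from the paper) shows how these two path-loss models can be evaluated and how a minimum output power P_O could be deduced from a receiver sensitivity; the sensitivity and margin values are placeholders.

```python
import math

def pathloss_80211g_db(freq_mhz, dist_m, n_floors=2):
    """Eq. (3): ITU-R indoor office model."""
    l_f = 15 + 4 * (n_floors - 1)                     # floor penetration loss
    return 20 * math.log10(freq_mhz) + 30 * math.log10(dist_m) - 28 + l_f

def pathloss_umts_db(freq_mhz, s_m, d_in_m=10.0, d_over_s=0.4,
                     l_perp=10.0, l_par=40.0):
    """Eq. (4): outdoor-to-indoor model with line of sight."""
    return (32.4 + 20 * math.log10(freq_mhz) + 20 * math.log10(s_m + d_in_m)
            + l_perp + l_par * (1 - d_over_s) ** 2)

def required_p_out_dbm(pathloss_db, rx_sensitivity_dbm=-76.0, margin_db=10.0):
    """Minimum P_O so the receiver can still decode (placeholder link budget)."""
    return rx_sensitivity_dbm + pathloss_db + margin_db

loss = pathloss_80211g_db(2400.0, 100.0)
print(round(loss, 1), round(required_p_out_dbm(loss), 1))
```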
We apply Rician fading to both signals, since the terminals are in line of sight. PU moves in a straight line from d_BS-PU = 800 m towards SU, which is fixed at a distance d_BS-SU = 1,000 m. We compare the global power consumption for the three previous cases in Fig. 3.
C. Mobile relay
When PU and SU are far from each other, PU active relay is more efficient than direct connections. However, when PU gets closer to SU, relaying becomes more expensive: the power cost per bit of two UMTS connections plus an 802.11g relay is approximately the same as that of two direct UMTS connections. At that point (30 m < d_PU-SU < 80 m), PU enters a "No-Relay Zone": a zone where relaying brings no major gain compared to direct connections. When PU and SU are too close (d_PU-SU < 30 m), direct connections should be preferred.
We also notice that PU inactive relay always gives the lowest global power consumption. This behaviour comes from the higher power cost per bit in UMTS: when PU and SU get closer, PU only maintains one UMTS connection and an 802.11g link for the relay. Compared to two UMTS connections at long range, and because the power cost per bit is much lower in 802.11g than in UMTS, the global power consumption is minimized when PU shares its UMTS connection. PU and SU move together in a straight line from BS to d_BS-PU = 1,000 m. We fix d_PU-SU = 50 m to study the persistence of the "No-Relay Zone" at any distance from BS. The global power consumption is depicted in Fig. 4.
D. Fixed relay
Near BS, PU active relay is not interesting. For 200 m < d_BS-PU < 300 m, the gain is negligible. Far from BS, direct connections are preferred. Meanwhile, PU inactive relay is always interesting, for the same reason as above.
E. Multi-users mobile relay
We now evaluate the gain of relaying N SUs in the PU active relay case, with PU moving in a straight line from d_BS-PU = 800 m to d_BS-SU = 1,000 m, as shown in Fig. 5. With PU far from SU, the global power consumption of N SUs directly connected in UMTS is approximately the same as that of PU relaying N + 2 SUs. When PU gets closer to SU, direct connections become more efficient. Moreover, PU inactive relay is always interesting. Based on the previous results, the following rules allow the global power consumption to be minimized (Fig. 6); a brief sketch of these rules is given at the end of this section.
• Terminals far from BS are relayed by a PU closer to BS (Fig. 6).
• In the "No-Relay Zone", a terminal relaying has no impact on the global power consumption. Terminals connect directly to BS in UMTS (Fig. 6).
• When PU and SU are too close to each other, they contact BS directly (Fig. 6).
• For multi-users, PU shares its connections when approaching SUs (Fig. 6).
All other approaches aiming at power reduction only consider the transmission power and forget the numerical power consumption. We have shown how important the numerical power consumption is in multi-mode, and have minimized the global power consumption using a multi-mode relay.
By adding mobility, the terminal acting as a PU will relay for a certain period, before entering the "No-Relay Zone". At that moment, PU stops relaying. Later, that terminal can become a new SU and be relayed by a new PU. This way, by reducing a terminal's power consumption, we minimize the global power consumption.
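A minimal Python sketch (not the authors' algorithm) of these rules is shown below, using the distance thresholds read from the simulations above (about 30 m and 80 m for the "No-Relay Zone" boundaries); these numbers are indicative only.

```python
def choose_scheme(d_pu_su_m, pu_connection_shareable):
    """Select the scheme minimizing global power for one PU/SU pair."""
    if pu_connection_shareable:
        # PU inactive relay: SU's traffic rides on a single UMTS connection,
        # which was always the cheapest option in the simulations above.
        return "relay on PU's existing UMTS connection"
    if d_pu_su_m < 30:
        return "direct UMTS connections"            # terminals too close
    if d_pu_su_m < 80:
        return "direct UMTS connections (No-Relay Zone)"
    return "PU active relay: 802.11g link + second UMTS connection"

print(choose_scheme(150, False))
```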
V. CONCLUSION AND FUTURE WORK
In this paper, we have shown how to minimize a network global power consumption by using a Software Defined Radio as a multi-mode mobile relay. We have expressed the need to evaluate a terminal power consumption, and have calculated the complexity of two standards and their associated power cost per bit. We have determined the gain provided by such terminal, acting as a multi-mode mobile relay, on the global power consumption. Finally, we have presented different rules to establish a relay in order to reduce the global power consumption.
We will continue to explore this reconfiguration scheme in a multicast streaming network and study the benefits for operators and users at the same time. We will also study the minimization of power consumption with mobile SUs and multiple relays, and evaluate the impact of a realistic MAC layer using network simulation.
Fig. 1. Power cost per bit (W/bit) in (a) 802.11g (b) UMTS. For the radio part, the distance between the emitter and the receiver is 100 m.
Fig. 2. (a) Two terminals are directly connected to a UMTS Base Station (BS). (b) A Secondary User (SU) connects to a Primary User (PU) in 802.11g. The latter establishes a secondary UMTS connection to transmit SU data.
Fig. 3. Global Power Consumption in Watt per bit (Normalized by direct connections) with a Primary User (PU) acting as a mobile relay for a fixed Secondary User (SU).
[Axes and legend of Fig. 4: d_BS-PU (m) vs. normalized power consumption (W/bit) for d_PU-SU = 50 m; curves: 802.11g relay + 2 PU UMTS connections, 802.11g relay + 1 PU UMTS connection, 2 direct UMTS connections.]
Fig. 4. Global Power Consumption in Watt per bit (Normalized by direct connections) with a Primary User (PU) acting as a fixed relay for a Secondary User (SU) at 50 m. PU and SU move away together from the base station. The results for direct connections and PU active relay overlap each other.
Fig. 5. Global Power Consumption in Watt per bit (Normalized by direct connections) with a Primary User (PU) acting as a mobile relay for N fixed Secondary Users (SU). PU maintains N + 1 UMTS connections with the base station (N SUs and its own).
Fig. 6. Rules for establishing a relay in order to reduce the global power consumption.
00412028 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2009 | https://inria.hal.science/inria-00412028/file/23.pdf | Ioan Burciu
email: [email protected]
Jacques Verdier
email: [email protected]
Guillaume Villemaud
email: [email protected]
Low Power Multistandard Simultaneous Reception Architecture
Keywords: double orthogonal frequency translation, multistandard simultaneous reception, power consumption, complexity
In this paper, we address the architecture of multistandard simultaneous reception receivers and we aim at improving both the complexity and the power consumption of the analog front-end. To this end we propose an architecture using the double orthogonal translation technique in order to multiplex two received signals. A study case concerning the simultaneous reception of 802.11g and UMTS signals is developed in this article.
I. INTRODUCTION
In the embedded wireless telecommunications domain, devices are expected to provide an increasing number of functionalities, which also impacts the classical constraints of power consumption and complexity. Several new services have appeared, such as video streaming and high-speed data transfer. They either use already existing wireless standards or need new dedicated ones. Because different services, and therefore different standards, need to be used simultaneously, transceivers able to process several standards simultaneously have to be developed.
In this paper we focus on the reception part of a multistandard simultaneous processing transceiver. The present state of the art uses stacked-up dedicated front-ends in order to simultaneously receive several standards. One of its major drawbacks is the poor performance-power-complexity trade-off due to the parallelization of the processing stages.
In order to obtain a better trade-off, we propose a new architecture for multistandard simultaneous reception inspired by the image-rejection double IQ architecture (Rudell et al.). It uses a single front-end capable of multiplexing the two input signals, once separately filtered and amplified, of translating the resulting signal into the baseband domain and then of demultiplexing the two signals in the digital domain.
This paper consists of three parts. Following this introduction, Section II describes this novel architecture and shows simulated results of the simultaneous 802.11g/UMTS reception; further details have already been published (Burciu et al.). In Section III a comparative power consumption study between the proposed architecture and the state of the art is presented. It consists in a theoretical study using power models for each block (Li et al.) and a state of the art of the analog circuits used by the two architectures (e.g., Siddiqi et al.; Arias et al.). Finally, conclusions of this study are drawn.
II. DOUBLE IQ MULTISTANDARD SIMULTANEOUS RECEPTION ARCHITECTURE
In wireless telecommunications, the integration of IQ baseband translation structures in the receiver chain has become a common procedure. The simple IQ architecture is usually used in the receiver front-end design in order to reduce the bandwidth of the baseband signals treated by the ADC.
Meanwhile, the orthogonal frequency translation technique is also used to eliminate the image frequency during the translation steps of heterodyne front-end architectures (Rudell et al.). The image frequency rejection technique consists in using two orthogonal frequency translations. After the double orthogonal translation, a signal processing block uses the four baseband signals to eliminate the image frequency signal.
This monostandard image rejection architecture relies on the advantage of orthogonalizing the useful signal and the signal occupying its image frequency band. Even though the spectrums of the two signals are completely overlapped after the first frequency translation, this orthogonalization allows the baseband processing to theoretically eliminate the image frequency component while reconstructing the useful one.
This paper assesses the use of the double orthogonal translation technique to develop a multistandard simultaneous reception front-end (Burciu et al.). The main idea is to consider that the signal from the image band becomes another useful signal. The architecture and the spectrum evolution of such a receiver, able to treat two standards simultaneously, are presented in Fig. 1. The parallelization of the input stages of the front-end imposes the use of two dedicated antennas, two dedicated RF band filters and two dedicated LNAs. The gain control stage is realized by the input stages, each LNA being dedicated to the gain control of one of the signals. Once the two signals are properly filtered and amplified, the two outputs are added. The resulting signal is then processed by a double orthogonal translation structure. The frequency of the oscillator used by the first stage is chosen in such a manner that each of the two useful signals occupies a spectrum in the image band of the other. This implies a complete overlapping of the spectra of the two signals in the intermediate frequency domain. After the second orthogonal frequency translation and after the digitization of the four resulting signals, two parallel processing chains are implemented, each composed of a series of basic operations. Each of them reconstructs one of the two useful signals while rejecting the other. As a result, the output signals of this final block are the two useful signals translated to baseband.
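As an illustration, a minimal Python sketch of this final digital recombination step is given below; the sign convention used is one common choice for double-IQ image-reject receivers and is an assumption, since the exact combination of basic operations is not detailed in the text.

```python
# The four digitized baseband branches produced by the two orthogonal
# translations are noted ii, iq, qi and qq. Each useful standard is rebuilt
# as a complex baseband signal; the other standard, lying in its image band,
# is rejected by the opposite sign combination.

def demultiplex(ii, iq, qi, qq):
    signal_a = (ii - qq) + 1j * (iq + qi)   # standard received in the wanted band
    signal_b = (ii + qq) + 1j * (qi - iq)   # standard received in the image band
    return signal_a, signal_b
```

In practice, the LMS algorithm discussed in the next paragraph would be applied to these branches beforehand to compensate for IQ mismatches.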
The standards chosen for our study case are WLAN (802.11g) and WCDMA-FDD because of their growing importance. Several simulations of the structure presented in Fig. 1 were performed using the ADS software provided by Agilent Technologies. One of these series of simulations concerns the BER (Bit Error Rate) evolution of the two study-case standards when simultaneously received by a structure using the state-of-the-art front-end stack-up architecture and by the proposed double IQ architecture.
In order to achieve a fair performance comparison between the multistandard single front-end receiver and the front-end stack-up, the blocks used during the simulation have the same typical metrics (gain, noise figure, 1 dB compression point, third-order intercept point) in both cases. As can be seen from Fig. 2, the performance of the two architectures during the simultaneous reception of the two standards is almost identical. Meanwhile, these simulations do not take into account the orthogonal mismatches of the IQ translation blocks. An additional study concerning this issue has been made and will be presented in an extended version of this paper. Its conclusions impose a basic digital algorithm (Least Mean Squares) in order to mitigate the impact of these mismatches on the signal quality. For our study case the results obtained by using this adaptive algorithm show a complete mitigation of the IQ mismatches. At the same time, the same study concludes that the power consumption of this algorithm is not significant compared to that of the whole receiver composed of the analog front-end described here and of the digital signal processing part.
III. POWER AND COMPLEXITY ISSUES
When designing an embedded front-end, the main issues to be considered are the power consumption and the complexity of the structure. Generally these two issues are related: growing complexity involves the use of additional elements, which increases the power consumption. In this paper we propose an innovative architecture which allows the reduction of the analog front-end power consumption and complexity during a multistandard simultaneous reception. In order to reveal these reductions, this section presents a study comparing the proposed structure to the state of the art of the multiband simultaneous reception architecture - the front-end stack-up.
When evaluating the performance of the proposed structure, it can be seen that it has the same advantages and the same drawbacks as the stack-up structure using two heterodyne front-ends. Therefore the comparison will be made between the double IQ structure presented in Fig. 1 and the stacked-up heterodyne dedicated front-end architecture presented in Fig. 3.
The theoretical part of the power comparison study relies on energy models of each type of block used in the two architectures. These energy models are presented in (Li et al.), along with a system-level energy evaluation. In order to realize a global evaluation of the power consumption of the two structures, the theoretical study takes into account the state of the art of each block used by the two structures, in terms of performance-power trade-off.
A. Filters
There are several analog filters in the analog part of a receiver. These include the RF band select filter, used to suppress the wideband interference signal, the IF filter, used to suppress the interference signal from the image frequency band, and the baseband low-pass filter, used to suppress in-band interference while also helping with anti-aliasing before the analog-to-digital conversion.
B. LNA and Mixers
The power consumption model of the mixers used in the two structures is a function of the noise figure NF and of the gain K:
P_mixer = k_mixer * K / NF (1)
In what follows we consider that all the mixers used in the two architectures have the same performance constraints in terms of gain and noise figure and therefore have the same power consumption. One of the best suited mixers, offering an excellent performance-power trade-off, is a 2.4 GHz RF down-conversion mixer in standard CMOS technology (Siddiqi et al.). It has a power consumption of 5.6 mW.
The power model of the LNA is similar to that of the mixers as it also depends on the noise figure NF and on the gain A:
P_LNA = k_LNA * A / NF (2)
For our study case the two structures use the same pair of dedicated LNAs. The state of the art shows a power consumption of 8.04 mW for the WLAN dedicated LNA (Cheng et al.) and of 7.2 mW for the UMTS dedicated LNA (Alam et al.). In this study we assume that the power control is performed by the LNAs. This assumption does not influence the power study, as the LNAs' highest consumption level appears when they operate in the high-gain mode.
C. Baseband Amplifier
The baseband amplifier (BA) is used to amplify the signal before conversion. It improves the SNR (Signal to Noise Ratio) of the signal, allowing a better BER. Its power consumption depends on its gain and on its bandwidth:
P = k * BW * a_BA (3)
where the k coefficient depends on the device dimensions and other process parameters, and a_BA is the baseband amplifier gain, assumed to be a_BA = 5. Here we assume that the UMTS dedicated BA consumes 5 mW (Li et al.) and that the WLAN dedicated BA consumes 10 mW, as it has a two times larger bandwidth. For the proposed structure, the BAs are assumed to consume 10 mW.
D. Frequency Synthesizer
Concerning the frequency synthesizer's power consumption, its model is composed of two separate components: the power consumed by the VCO (Voltage Controlled Oscillator) and that consumed by the PLL (Phase Lock Loop). The consumption of the phase lock loop has a model depending on the reference frequency F_ref, on the RF frequency F_LO, on the total capacitances C_1 and C_2 loading the RF circuits and on the supply voltage V_dd:
P_PLL = b_1 * C_1 * V_dd^2 * F_LO + b_2 * C_2 * V_dd^2 * F_ref (4)
An LC tank-based VCO has a power model depending on the values of the elements of the LC tank R, L, C, on the noise excess factor NEF along with the phase shift ∆ω at which it is measured, on the phase noise power spectral density S_Φ, on the temperature T and on the Boltzmann constant k:
P_VCO = (C * R / L) * NEF * k * T * 1 / (S_Φ * (∆ω)^2) (5)
The central frequencies of the state of the art synthesizers used by the two architectures are practically the same, as well as the other metrics. A well suited element is presented in [START_REF] Fu | Fully integrated frequency synthesizer design for wireless network application with digital programmability[END_REF]. It consumes 42 mW for an output frequency between 2.6 GHz and 2.9 GHz and a phase noise of -115dBc/Hz @1MHz.
E. Analog to Digital Converters
The analog-to-digital converters, along with the number of frequency synthesizers, are the key elements of this comparative power consumption study. In fact, except for the ADCs and the baseband amplifiers, all the elements used by the two architectures need to fulfill the same performance constraints.
The power model of the ADCs can be defined by:
P_ADC = V_dd^2 * L_min * (f_sample + f_signal) / 10^(-0.1525 * N_1 + 4.838) (6)
where N_1 is the resolution of the A/D converter and L_min is the minimum channel length of the CMOS technology used. For our study case, f_sample for the UMTS is the same as that of the WLAN, even though the signal has a bandwidth two times smaller, because of the over-sampling that has to be done for this standard. Concerning the ADCs used by the proposed structure, the sampling frequency is equal to that of the WLAN dedicated ADC, as the bandwidths of the signals to be digitized in the two cases are equal.
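The Python sketch below (not from the paper) simply evaluates this ADC power model; the supply voltage, technology node, frequencies and resolution used in the example are placeholders, not the design values of the study case.

```python
def adc_power_w(v_dd, l_min_m, f_sample_hz, f_signal_hz, n_bits):
    """Eq. (6): ADC power consumption model."""
    return (v_dd ** 2 * l_min_m * (f_sample_hz + f_signal_hz)
            / 10 ** (-0.1525 * n_bits + 4.838))

# Example: 1.2 V supply, 0.18 um CMOS, 40 MHz sampling, 10 MHz signal, 8 bits.
print(adc_power_w(1.2, 0.18e-6, 40e6, 10e6, 8))   # a few milliwatts
```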
In a receiving front-end architecture, the ADCs' resolution requirements depend on several metrics, such as the performance offered by the power control stage, but also on the PAPR (Peak to Average Power Ratio) of the signal to be digitized. While the power control stage is the same for the two architectures, we have to evaluate the PAPR evolution when adding a UMTS and a WLAN signal. A separate theoretical and simulation study was carried out and it reveals a worst-case scenario where the final signal's PAPR increases by only 2 dB compared to that of the WLAN input signal. Therefore, when comparing the two structures and for the same power control performance (dedicated LNAs), we considered that the resolution of the ADC used by the proposed architecture is the same as that used in the WLAN dedicated front-end of the stacked-up architecture.
The state of the art of the performance-power trade-off of ADCs dedicated to UMTS and WLAN is presented in (Farahani et al.) and in (Arias et al.), respectively. Their power consumptions are 11 mW and 12 mW.
F. Overall power and complexity evaluation
In order to make a comparative overall power consumption evaluation between the two architectures, a complexity study has to be made to evaluate the number of elements that have to be used for each structure. Table 1 summarizes the elements used by each of the two architectures, as well as their individual power consumption along with their supply voltage.
As shown here, the proposed architecture needs less components than the state of the art front-end stack-up as it doesn't need image rejection filters and it uses less frequency synthesizers. Therefore, the complexity comparison is favorable to the proposed structure, especially because the image rejection filters are not on-chip integrated elements.
For our study case and for the power consumption levels in Table 1, the overall power consumption comparison shows that the proposed structure consumes 216 mW while the state-of-the-art architecture uses 284 mW. This represents a 20% gain in favor of the single front-end structure assessed in this paper. In order to better understand this power gain, Fig. 4 shows the power consumed by every type of block used by the two architectures. This power gain comes essentially from the use of two times fewer frequency synthesizers, while using the same number of other components.
IV. CONCLUSIONS
In this article, a novel multistandard simultaneous reception architecture was presented. The expected performance of its implementation has been presented for a particular study case: the simultaneous reception of two signals using the 802.11g and UMTS standards. The signal processed by the analog part of the receiver presents an excellent spectral efficiency, as the spectra of the two standards are overlapped after the first IQ stage. Compared to the state of the art represented by the stack-up of dedicated front-ends, the proposed architecture offers a much better performance-complexity-power trade-off. In fact it is less complex, as it uses fewer electronic blocks (external image rejection filters and frequency synthesizers) for the same performance. In addition to the reduced complexity, the overall power study shows a 20% power gain.
Fig. 1. High complementary standard rejection multistandard simultaneous reception architecture.
Fig. 2. Simulated BER of the two standards when received simultaneously by the two architectures.
Fig. 3. State of the art of simultaneous reception - stacked-up heterodyne dedicated front-end architecture.
Fig. 4. Consumption of the different block types used by the two architectures.
ACKNOWLEDGMENT
The authors wish to acknowledge the assistance and support of the Orange Labs Grenoble. |
04120296 | en | [
"shs.eco"
] | 2024/03/04 16:41:26 | 2023 | https://univ-rennes2.hal.science/hal-04120296/file/WP_Pellegris_profit_rate.pdf | EROI
Energy as a limiting factor of economic growth: the profit rate channel
Alban Pellegris -Working Paper
Introduction
Neoclassical economists do not consider energy an important driver of economic growth. In their perspective, economic growth comes from the accumulation of different capitals: technical capital, human capital, technological capital or public capital. Energy resources are also a capital: they belong to the so-called natural capital. However, this capital is not specific, so that in the long run it appears to be substitutable by other capitals (Solow; Stiglitz).
Ecological economists have vigorously opposed these views in many ways. Two major contributions are the neo-thermodynamic theory of growth developed by Robert Ayres and Benjamin Warr (Ayres and Warr) and the biophysical approach (Cleveland et al.; Husson).
For Ayres and Warr and their followers (Santos et al.), the measurement of energy from useful exergy allows the Solow residual to disappear. Thus energy would explain most of the 20th-century economic growth. However, the fragility of this approach lies precisely in the analytical framework in which it is situated. It falls under the same post-Keynesian criticism formulated during the Cambridge controversy towards aggregate production functions (Felipe; Robinson; Sraffa). In other words, the physical and monetary dimensions of production are not distinguished but rather confused (Hornborg; Pellegris).
Faced with these limitations, the biophysical approach may appear more robust. According to these authors, economic development depends on the energy surplus, which directly relies on the EROI. To prove this, they establish a relationship between the EROI, the price of energy, the energy bill and thus the GDP. The mechanism can be summarized in the following figure: according to these economists, the EROI determines the price of energy (Heun et al.) while the price of energy would, for a given energy intensity of GDP, affect the weight of the energy bill. Finally, an increasing energy bill would subtract sums used for consumption or investment and reduce the rate of economic growth (Bashmakov; Fizaine and Court; Murphy). For example, in the US case, Fizaine and Court estimate that the maximum bill compatible with positive economic growth is 11% of GDP, which would correspond to an EROI of 11:1. However, this literature has two limitations. Firstly, this work is (implicitly) based on the energy theory of value (Costanza; Farber et al.). Although not always explicitly claimed (Pirgmaier, 2021), the EROI does correspond to the energy embodied in energy production. Yet this theory of value has been criticised for its substantialism and reductionism (Hornborg).
Secondly, this work is questionable in terms of the method used. While the channel identified is a demand channel (the weight of the bill affects consumption and investment), the model used by Court and Fizaine (2016) takes capital and labour, two supply variables, as the control variables. The model used therefore does not allow for a proper test of the hypothesis, which weakens the significance of the results.
In this paper, we argue that these difficulties can be overcome by turning to another paradigm: Marxist political economy. Thus, this paper can be understood as part of a collective movement to restore the Marxist perspective within ecological economics (Burkett; Foster; Pirgmaier, 2021). To show this we proceed in three steps. Section 2 shows that the Marxist framework allows us to theoretically identify a profit rate channel through which energy exerts a limiting character on the growth process. Sections 3 and 4 propose an empirical evaluation of this channel: section 3 presents our method while section 4 presents and discusses the results. Section 5 concludes.
The theoretical existence of a profit rate channel
Three elements structure the Marxist framework: (1) the adherence to the labour theory of value (2) the importance of the rate of profit in the economic dynamic, especially (3) the equalization of profit rate among industries. Let us see how these concepts are articulated to think about the limiting character of energy on the growth process.
The labor theory of value
Summarized briefly, the labour theory of value considers that the relative prices of goods are given by the quantities of labour (direct and indirect) that the goods incorporate. This theory therefore makes room for natural constraints: the more difficult goods are to produce, the higher the relative price will tend to be. From this point of view, the labour embodied in the production of energy is therefore a possible alternative to the EROI (embodied energy) to account for the price of energy.
Contrary to the energy theory of value, the labor theory of value is still discussed, and some recent works have shown its relevance for analyzing relative price dynamics (Cockshott et al.; Tsoulfidis et al.; Wright; Zachariah).
The rate of profit
The rate of profit is a fundamental variable that determines the rate of accumulation, i.e. the rate of growth. Marx analyses the rate of profit as the ratio between two variables: the rate of exploitation, which relates the amount of profits (P) to the wage bill (W), and the organic composition of capital (OCC), which relates the value of constant capital (that is, machines and raw materials, C) to the value of variable capital (that is, the value of the workforce, W).
Profit rate = (P/W) / (C/W + 1)
The organic composition of capital is not a purely technical indicator, as prices will influence this ratio. Nevertheless, there is a technical component that Marx calls the technical composition of capital (TCC), which relates the quantity of machines (c) to the quantity of workers (L):
TCC = c / L
If we add to the analysis pc, the unit price of capital, and w, the individual wage, the OCC can be expressed by the following equation:
OCC = (c / L) * (pc / w) = TCC * (pc / w)
Thus, because it is a component of constant capital, when the price of energy increases, all else being equal, the price of constant capital (pc) increases. The organic composition increases, especially in the sectors where the technical composition (TCC) is the highest.
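A small numerical illustration (not from the paper, all figures made up) of this mechanism is sketched below: a rise in the price of constant capital pc raises the OCC and, for a given exploitation rate, mechanically lowers the profit rate.

```python
def profit_rate(exploitation_rate, tcc, pc, w):
    occ = tcc * pc / w                    # organic composition of capital
    return exploitation_rate / (occ + 1)  # profit rate = (P/W) / (C/W + 1)

before = profit_rate(exploitation_rate=0.5, tcc=2.0, pc=1.00, w=1.0)
after = profit_rate(exploitation_rate=0.5, tcc=2.0, pc=1.10, w=1.0)  # pc +10%
print(round(before, 3), round(after, 3))  # 0.167 -> 0.156
```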
The equalization of profit rates among industries
This has two consequences: the most capital-intensive branches experience a fall in their profit rate, which drives the average profit rate of the economy down. However, this situation is not sustainable: in order for the capitalists in these industries to continue to produce, a change in relative prices must take place, otherwise they would have an interest in investing in industries with a less capital-intensive technical composition, as the rate of profit is currently higher there. Through this mechanism of capital reallocation, producing relative price movements, profit rates tend to equalize among industries. At the end of this process, all industries, whether capital-intensive or not, gradually tend to align themselves with the new and lower (average) macroeconomic profit rate. The fall in the macroeconomic rate of profit is thus spread over and borne by all branches of production.
Methods
In order to test the existence of the profit rate channel, we develop an econometric analysis of the relationship between the relative price of energy and the profit rate based on a panel of 16 countries over the period 1995-2019. Our starting hypothesis is that there is a significant and negative relationship between the relative price of energy and the profit rate of an economy.
Variables
We retain four variables to carry out this econometric work: a dependent variable that we seek to explain, the profit rate, and three explanatory variables. Among these three variables, we find, first of all, the relative price of energy, which is the variable of interest to us and whose influence on the profit rate we are trying to understand. To this variable, we have added two control variables, which are highly likely to influence the rate of profit according to the Marxist perspective mentioned above: the exploitation rate and the technical composition of capital. First, the exploitation rate, since this is the numerator of the rate of profit formula (see above). A fall in the rate of profit can be compensated by an increase in the exploitation of workers, that is, in the share of value added accruing to capitalists. This is one of the counter-tendencies identified by Marx to the tendency of the rate of profit to fall (Marx, Cohen-Solal and Badia, 1959). Second, the technical composition is also important, as suggested in section 2: the effect of the energy price on the profit rate depends on how capital-intensive the industry or the country under study is.
Data
All the data come from the OECD database. Following the example of other Marxist works (Basu), the rate of profit series were obtained by relating the net operating surplus to the net value of the capital stock. For the relative price of energy, we relate the energy price index to the consumer price index: when the energy price increases faster than the consumer price index, the relative price of energy increases. For the exploitation rate, we relate the net operating surplus to the payroll.
The main innovation in terms of indicators relates to the technical composition of capital. Following Giampietro et al., we consider that the capital intensity of an industry can be observed from the energetic metabolic rate (EMR), which relates the quantity of (primary) energy to the hours of work mobilized. Such a ratio offers the advantage of being truly technical, in the sense that it does not rely on monetary aggregates but only on physical aggregates.
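As an illustration, the following pandas sketch shows how these four series could be assembled; the column names are hypothetical, since the exact extraction from the OECD tables is not detailed here.

```python
import pandas as pd

def build_variables(df: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=df.index)
    out["profit_rate"] = df["net_operating_surplus"] / df["net_capital_stock"]
    out["rel_price_energy"] = df["energy_price_index"] / df["cpi"]
    out["exploitation_rate"] = df["net_operating_surplus"] / df["wage_bill"]
    # Energetic metabolic rate used as the technical composition of capital:
    out["emr"] = df["primary_energy_supply"] / df["hours_worked"]
    return out
```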
Linear regressions and estimators
Once the data are computed, we test the stationarity of our variables (see appendix 6.1) before running multivariate linear regressions. We first run different models in order to confirm the number of explanatory variables to use (Table 1). The results of these four pooled OLS regressions confirm the relevance of model D with all three explanatory variables (Table 2). In a second step, we consider four econometric regressions that complement the last one (pooled OLS) that we have just presented. These new regressions allow us to confirm the robustness of our approach, as do Carpenter and Guariglia for an estimation of fixed capital accumulation on panel data. In the end, we report in Table 3 the following five regressions for our tri-variate equation:
1. A pooled model with an OLS estimator (Pooled Ordinary Least Squares), where all our data are treated as a single geographic set. This model has already been estimated since it is the result of equation D; it will be called the "Pool" model in the following.
2. A country fixed effect model with a within estimator (the "Country" model), where the regression is conducted for each of the 16 countries. A fixed effect model means that "the difference between individuals can be captured by the constant" (Greene et al., 2011, p. 345). Here, the estimated model will therefore have a different constant for each of the 16 countries.
3. A time fixed effect model with an OLS estimator (the "Time" model), where the regression is conducted for each year. Example: we compile all the data from all the countries for the year 1995 and we look at the relationship that emerges. Unlike the previous model, there is not a constant for each country but for each year selected.
4. A model with country and time fixed effects with an OLS estimator (the "Two effects" model), which combines the two previous models since we control here for both the country and the year.
5. A random model with a random estimator: like a fixed-effect model, a random model will consider that being in country A, rather than in country B, will affect the relationship between profit and energy price. However, unlike the fixed-effect model, the random-effect model considers that it is possible for countries to respond differently to a change in energy price. In statistical terms, the relationship is not just affected by a different constant.
Results
Results of multivariate regressions
We can first note that, with the exception of the "two-effects" model, whatever the model chosen, our variables are all significant. For all these other models, our independent variables are not only significant, but above all the coefficients are compatible with our hypotheses:
- The increase in the price of energy tends to negatively affect the profit rate ("country", "time", "random" and "pool" models).
- The exploitation rate tends to affect the profit rate positively (all models).
- The increase in the technical composition of capital tends to affect the rate of profit negatively ("pool" and "time" models) or positively ("random", "country", and "two effects" models).
To choose between these different models, we run several tests (see appendix 6.2). First, we distinguish a pooled OLS model from a country fixed effect model. In our case, it seems reasonable to consider a country fixed effect. Indeed, we can expect that there are unobservable variables specific to each of the countries studied. For example, some countries have a particular position in the value chain that allows for a higher profit rate than elsewhere, as the analysis of profit rates has revealed. The use of a Fisher test confirms this intuition and highlights significant effects linked to heterogeneity between countries.
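A sketch of how these specifications can be estimated with the Python package `linearmodels` is given below; the variable names are hypothetical and `panel` is assumed to be a DataFrame indexed by (country, year).

```python
from linearmodels.panel import PooledOLS, PanelOLS, RandomEffects

formula = "profit_rate ~ 1 + rel_price_energy + exploitation_rate + emr"

pooled = PooledOLS.from_formula(formula, panel).fit()
country_fe = PanelOLS.from_formula(formula + " + EntityEffects", panel).fit()
time_fe = PanelOLS.from_formula(formula + " + TimeEffects", panel).fit()
two_way = PanelOLS.from_formula(formula + " + EntityEffects + TimeEffects", panel).fit()
random = RandomEffects.from_formula(formula, panel).fit()

# The reported statistics of the fixed-effect results can then be used for the
# specification tests (significance of country effects, comparison with the
# random-effect model) discussed in the text.
print(country_fe.summary)
```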
We then determine whether a country fixed effect model should be preferred to a random effect model. To do this, we perform a Hausman test. With a p-value < 0.05, this test leads us to prefer the fixed effect model. Finally, we need to decide whether the country fixed effects should be combined with a year fixed effect (the "two-way fixed" model). Here again, there is a test to guide us: the Lagrange multiplier test for time effects (Breusch-Pagan) for balanced panels. With a p-value > 0.05, this test leads us to prefer the model with only a country fixed effect. A last comparison must be made: that between a model with time fixed effects and a model with country fixed effects. The Fisher test allows us to decide in favor of the country fixed effects model. Finally, we test different characteristics that can affect the robustness of the selected model. Our data exhibit heteroskedasticity, serial correlation and cross-sectional dependence (see appendix 6.3). According to Baltagi, autocorrelation and cross-sectional dependence can be problematic for macroeconomic panels with long time series (beyond 20 to 30 years). Since our study covers a 24-year period and our data are macroeconomic, we consider a procedure to take care of their presence. Heteroskedasticity also makes the estimates provided by the least squares method questionable (our various estimators are all derived from this method). The econometric literature suggests two different ways to take into account the behavior of the model errors.
The first solution consists in keeping a model estimated by ordinary least squares but with robust standard errors (Bai et al.). In the case where heteroskedasticity, autocorrelation and cross-sectional dependence are present, Hoechle recommends using the Driscoll-Kraay estimator.
The second solution, proposed by Bai et al., is to use an FGLS (Feasible Generalized Least Squares) model when the number of years T is greater than the number of individuals N, which corresponds to our situation (T = 25 and N = 16).
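For illustration, the sketch below shows how Driscoll-Kraay-type standard errors can be obtained with the `linearmodels` package (its kernel covariance estimator); a full FGLS estimation is not shown, as it requires a dedicated routine. Variable and object names are the same hypothetical ones as above.

```python
from linearmodels.panel import PanelOLS

dk = PanelOLS.from_formula(
    "profit_rate ~ 1 + rel_price_energy + exploitation_rate + emr + EntityEffects",
    panel,
).fit(cov_type="kernel")   # HAC covariance robust to heteroskedasticity,
                           # serial correlation and cross-sectional dependence

print(dk.std_errors)
```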
Interpretation
The estimation of the two models is consistent with our hypotheses. The relative price of energy appears to be negatively correlated with the profit rate. The coefficient is -3.4. This means that when the relative energy price index increases by one unit, the profit rate decreases by 3.4 percentage points. The interpretation of one unit of relative price is not immediately obvious. To understand it, we need to recall the values of relative prices: the relative price of energy ranges from about 0.75 to 1.10, between 1995 and 2019, for all countries combined. The change in the relative price of energy over the period is therefore less than one unit. An increase of one unit represents, in the case of a relative price of 0.75, an increase of about 133%. Over the period studied, the relative energy price index increased by only 0.35 units, or about 46%. If we accept a rough calculation, it turns out that the increase in the relative price of energy would have cost 0.35 * 3.4 ≈ 1.19 points of profit rate between 1995 and 2019. The effect therefore appears to be substantial over the period.
We can say that this empirical work largely validates the hypotheses formulated by the Marxist analytical framework. The latter thus appears heuristic in order to understand the way in which energy, an indispensable use value in the economy, exerts its effects on the dynamics of growth. In particular, we highlight a profit rate channel that, until now, seems to have been neglected by the literature in ecological economics which prefers the weight of the energy bill. Controlling for the exploitation rate and technical composition of capital, the effect of the relative price of energy is significant on the rate of profit, confirming the hypothesis of an action through the organic composition of capital.
Energy bill as part of the profit rate channel
It is possible to put this opposition between the rate of profit and the weight of the energy bill into perspective. Indeed, if we examine the formula of the weight of the energy bill in GDP, it is possible by rearranging the equation to include the variables we have used in our regression. The profit rate channel exhibits two control variables that are part of the definition of the energy bill. This could explain why, in the case of France, Marxist authors like [START_REF] Husson | Relating Financial and Energy Return on Investment[END_REF] observe a high correlation between the energy bill and the profit rate (fig. 3). It also confirms that the energy bill can be used in a supply channel (the profit rate channel) rather than a demand channel (as suggested by the biophysical approach).
Concluding remarks
In this article, we have shown the heuristic value of the Marxist framework for understanding how energy acts as a limiting factor in the growth process. This framework makes it possible to identify a supply channel, the rate of profit channel, which appears more robust than a demand channel, the energy bill channel, highlighted by the authors of the biophysical approach. Indeed, the statistical work conducted on data from a panel of 16 OECD countries confirms the existence of this channel over the period 1995-2019. In contrast to the work of the biophysical approach, the control variables used are consistent with the hypothesis tested. Moreover, the reformulation of the energy bill (as a product of three factors: the relative price of energy, the technical composition and real labor productivity) suggests that the energy bill can also be interpreted in the context of a supply channel.
An interesting extension of this work would be to compare the results in terms of labor embodied in energy production with those in terms of energy embodied in energy production (EROI).
Heteroskedasticity, autocorrelation and cross-sectional dependence
Fig. 1. Energy surplus as a limiting factor to economic growth. Source: author.
Figure 2 summarizes what we call the profit rate channel.
Fig. 2. The profit rate channel. Notation: energy bill/GDP; nGDP: nominal GDP; rGDP: real GDP; Pe: energy price; P: price level; Pe/P: relative price of energy; Qe: energy quantity; h: hours worked; Qe/h: technical composition of capital; rGDP/h: real labor productivity.
Fig. 3. Energy bill (as % of GDP) and the profit rate, France (1949-2006). Taux de profit = profit rate; consommation d'énergie en valeur (% du PIB) = energy bill (% of GDP). Source: Husson 2009.
Table 1. The different equations estimated, with α a constant, β1, β2 and β3 the coefficients of the explanatory variables and ε the error term. PR_NRJ = relative price of energy; Expl = exploitation rate; TCC = technical composition of capital. Source: author.

Model   Equation
A       ProfitRate = α + β1·PR_NRJ + ε
B       ProfitRate = α + β1·PR_NRJ + β2·Expl + ε
C       ProfitRate = α + β1·PR_NRJ + β2·TCC + ε
D       ProfitRate = α + β1·PR_NRJ + β2·Expl + β3·TCC + ε
Table 2. Pooled OLS with the different explanatory variables. Source: author.
Table 3. Results of the multivariate regression of the rate of profit on the relative price of energy, the exploitation rate and the technical composition of capital. Source: author.
Table 4. Results of the three models: country fixed effects, Driscoll & Kraay, and FGLS. Taking autocorrelation, heteroskedasticity and cross-sectional dependence into account does not change our results: both methods (Driscoll & Kraay and FGLS) give significant coefficients.

                                    Country FE   Driscoll & Kraay   FGLS
Relative price of energy            -3.4***      -3.4*              -3.4***
Exploitation rate                    0.09***      0.09*              0.09***
Technical composition of capital     0.21***      0.21***            0.21***
R²                                   0.50         0.50               0.39
Adj. R²                              0.48         0.48
Number of observations               400          400                400
*** p < 0.001; ** p < 0.01; * p < 0.05. Source: author.
Table 7. Testing for heteroskedasticity, autocorrelation and cross-sectional dependence.

Issue                        Test                         H0                   Result             Interpretation
Heteroskedasticity           Breusch-Pagan (BP)           Homoskedasticity     88.522***          Heteroskedasticity
Autocorrelation              Breusch-Godfrey-Wooldridge   No autocorrelation   264.89*** (chisq)  Autocorrelation
Cross-sectional dependence   Breusch-Pagan LM             No dependence        586.4*** (chisq)   Cross-sectional dependence
Cross-sectional dependence   Pesaran CD                   No dependence        5.544*** (z)       Cross-sectional dependence
*** p < 0.01; ** p < 0.05; * p < 0.1. Source: author.
Indeed, the energy bill in relation to GDP can be expressed as the product of the energy price (p) and the energy intensity (E/GDP). Energy bill/GDP = p * E/GDP
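Written out with the notation of Fig. 2, this gives the three-factor decomposition invoked in the text. This is a sketch of the accounting identity only, leaving aside the exact deflators used by the authors:

$$\frac{\text{Energy bill}}{\text{GDP}}=\frac{P_e\,Q_e}{P\cdot rGDP}=\underbrace{\frac{P_e}{P}}_{\text{relative price of energy}}\times\underbrace{\frac{Q_e}{h}}_{\text{technical composition}}\times\underbrace{\frac{h}{rGDP}}_{\text{inverse of real labor productivity}}$$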
Appendix
Unit root tests |
00412054 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2008 | https://inria.hal.science/inria-00412054/file/rws08fin.pdf | P F Morlat
A Luna
X Gallon
G Villemaud
J M Gorce
Structure and Implementation of a SIMO Multi-Standard Multichannel SDR Receiver
Keywords: SIMO processing, WLAN standard, SDR receiver, multi-channel multi-standard receiver
This article presents the structure of a multiantenna multi-channel and multi-standard receiver based on Software Defined Radio (SDR) concept. At this time our work is focused on 802.11g and 802.11b signals cohabiting in a 40 MHz frequency bandwidth around the 2.4 GHz RF carrier frequency. Using classical Single Input Multiple Output (SIMO) processing, these incoming signals are combined to increase transmission performances and to mitigate fading effect. Simulated and measured results of our structure are given and a description of the real proposed architecture is detailed.
I. INTRODUCTION
The future of wireless communications can be viewed as numerous standards operating together. A reconfigurable receiver is an important feature for providing interoperability of transceivers across a number of different wireless systems. The ideal universal receiver, sampling the incoming signals just after an RF stage, is very appealing but not realistic at this time due to limitations in sampling capabilities for high-frequency systems. In this context, Software Defined Radio (SDR) [START_REF] Mitola | The software radio architecture[END_REF][START_REF] Rhiemeir | Modular Software-Defined Radio[END_REF] principles have attracted more and more attention. We can then imagine reconfigurable receivers able to deal with predefined transmission standards. A large number of contributions on SDR can be found in the literature, and several key aspects of the signal processing chain are discussed there, for example sample rate adaptation [START_REF] Hentschel | Sample Rate conversion for software radio[END_REF] and RF front-end design [START_REF] Brandolini | Toward Multistandard Mobile Terminals-Fully integrated Receivers requirements and architectures[END_REF]; several SDR test-bed platforms are also described [START_REF] Miranda | A self reconfigurable receiver architecture for software radio systems[END_REF][START_REF] Xu | Analysis and implementation of software defined Radio receiver platform[END_REF].
Our work combines several attractive and relevant principles in order to increase data rate, quality of service and flexibility within the same architecture. First of all, based on the SDR concept, we propose a multi-standard approach: the receiver we describe is able to deal with the WLAN 802.11g and 802.11b standards. In this way, each technology no longer relies on a particular chip but on a software package, allowing dynamic changes and also parallel decoding. Particular attention should then be paid to computation resources, assuming a maximum reuse of basic functions for all standards (FFT, filtering) and a precise dimensioning to achieve real-time operation.
Moreover, it is well known that multiplying antennas, even from the receiver side only, offers a high potential to compensate for radio channel fading and interference. That is why our receiver uses 4 receiving antennas. Several copies of the same emitted signal are then available and, using SIMO processing, a large benefit can be extracted from spatial diversity, especially in large angular-spread configurations, which correspond to the typical working environment of WLAN transmissions. At this time, the SIMO algorithms we use are based on the classical and well-known Minimum Mean Square Error (MMSE) criterion, because of its low complexity and its good match with WLAN transmissions, which include training sequences at the beginning of each frame. Depending on the processed signal and the propagation channel, different variants of the same algorithm are used in order to obtain the best complexity/performance trade-off for each WLAN standard. [START_REF] Morlat | Performance Validation of a multi-standard and multiantenna receiver[END_REF] describes the classical SIMO processing we use, applied to 802.11g and 802.11b transmissions.
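As an illustration only, a minimal numpy sketch of training-based MMSE combining (SMI-style) is given below; the array shapes, names and diagonal-loading value are assumptions for the sketch, not the platform's actual implementation.

```python
import numpy as np

def mmse_weights(x_train, d_train, loading=1e-3):
    """SIMO combining weights from a known training sequence (MMSE criterion).

    x_train: (n_antennas, n_samples) complex received training samples
    d_train: (n_samples,) known transmitted training symbols
    """
    n = x_train.shape[1]
    R = x_train @ x_train.conj().T / n      # sample covariance matrix
    p = x_train @ d_train.conj() / n        # cross-correlation vector
    # small diagonal loading to keep the inversion well conditioned
    R += loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    return np.linalg.solve(R, p)            # w = R^-1 p

def combine(w, x):
    """Combine the (n_antennas, n_samples) payload into a single stream."""
    return w.conj().T @ x
```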
Another particularity of the proposed structure is the multi-channel approach. Classical receivers, after selection of their allocated communication channel, operate a frequency tuning in the RF part to filter only this chosen band, naturally rejecting adjacent channels. The 802.11 transmission band, 84 MHz wide and shared between up to 14 overlapping 20 MHz channels, is a very relevant example of this situation. That is why designing a receiver that samples a 40 MHz bandwidth, instead of the bandwidth of a single communication channel, is of interest: it gives access to a wider digital observation of the desired signal together with the interferers. Different techniques of interference mitigation, channel rejection [START_REF] Mary | Performance Analysis of Mitigated Asynchronous Spectrally-Overlapping WLAN Interfernce[END_REF], or parallel decoding can then be applied.
All of these multi-* approaches are combined in the same test platform, composed of antennas and RF front-ends, a digital processing stage with DSPs and an FPGA, and a PC that combines the received signals through SIMO processing and computes the transmission BER. This article is structured as follows: the first part describes results obtained by simulation and measurement, using a software demonstrator developed with ADS from Agilent Technologies [9], under different working conditions; the second part then presents the structure of the developed platform.
II. SIMO MULTI-STANDARD MULTI-CHANNEL RECEIVER PERFORMANCES
As presented before, combining multi-antenna, multi-channel and multi-standard principles in the same receiver seems to be a very promising way to increase wireless transmission performances. However, several parameters such as antenna coupling, channel correlation and propagation channel properties have an important influence on SIMO performances [START_REF] Dietze | Analysis of a two-ranches maximal ratio and selection diversity system with unequal SNRs and correlated inputs for a Rayleigh fading Channel[END_REF]. That is why a software demonstrator simulating a 4-antenna receiver able to deal with 802.11g/b signals in a 40 MHz bandwidth was developed with the ADS software. Different SIMO algorithms, running configurations (spacing between the two communication channels of interest) and propagation channel characteristics can be chosen in order to better estimate the performances of our system under realistic working conditions. Moreover, thanks to the compatibility with Agilent equipment, a complete multi-antenna (2x2) radio test bed was installed in order to rapidly test and evaluate the performances of any stage of a wireless transmission [START_REF] Morlat | Global System evaluation scheme for multiple antennas adaptive receivers[END_REF]. Due to the current capacity of our platform, only 2 incoming signals can be recorded simultaneously; this is however sufficient to introduce channel correlation and antenna coupling in our measurements. That is why, even if the developed demonstrator is able to run with 4 receiving antennas, only 1x2 SIMO performances are presented. Fig. 1 gives measured and simulated performances of an 11 Mbps 802.11b transmission in different working conditions. First, a very good match between the theoretical, simulated and measured AWGN Bit Error Rate (BER) curves is observed. Furthermore, about 5 dB of gain is also observed, under a fast-fading multi-path propagation channel, between the single-antenna receiver and the 2-antenna structure using SIMO processing to combine the received signals.
Finally, Table 1 presents BER values in the case where the terminal deals with two recorded signals, one of each standard, after propagation through a measured AWGN channel. Both signals are emitted at a level sufficient to guarantee error-free transmission in a single-standard communication. As expected, the wider the channel spacing, the better the transmission performances. These results show a large performance increase with the 1x2 SIMO architecture, even though only spatial diversity processing is used. Moreover, when the two channels are 25% overlapped (channel spacing of 15 MHz), no transmission error is obtained, even with a single-antenna receiver.
III. THE SDR RECEIVER STRUCTURE
This part describes the different processing stages we developed to take the incoming RF signals all the way to the computation of the transmission BER.
A. RF stage
An antenna made of 2 patches spaced 6 cm apart is used, each patch providing 2 channels with orthogonal polarizations, so that 4 incoming signals have to be processed. This antenna is designed for 2.4 GHz transmissions and is connected to the RF stage, which performs the down-conversion as well as an 80 MHz sampling and a 10-bit coding of the complex signal.
B. Numerical Processing
C. PC connection
Thanks to the 528 Mo/s PCI connection, the digital data are sent to a dual-processor 3 GHz PC with 4 Go of RAM running Linux. The PC is used to combine the 4 digital channels using the different SIMO processing schemes, and also to demodulate the signals in order to compute the transmission BER. At this stage of the processing, real-time operation is not a critical parameter. Let us now list the most important functions used in the digital processing stage.
D. Numerical functions
Each TS-C43 board has to deal with 2 reception channels. Remembering that our receiver must be able to deal with 2 802.11 communications at the same time, one DSP has to permit the global signal processing for 1 incoming signal.
The main role of the FPGA is to achieve a large data-rate reduction in order to make the transfer to the DSP possible. The FPGA therefore has to detect signal presence in the 40 MHz bandwidth of interest, identify the WLAN channel (carrier frequency) used, and apply an 11 MHz filter around this carrier frequency on both complex channels (I and Q), which corresponds to a 22 MHz channel bandwidth. The resulting data rate is 44 Mo/s, which poses no problem for the transfer to the DSP.
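The channel-selection step can be sketched in a few lines of Python; the sample rate and cut-off below are the ones quoted above, while the filter order, decimation factor and function names are illustrative assumptions rather than the FPGA implementation.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def select_channel(x, fs, f_carrier, bw=22e6, decim=2, ntaps=129):
    """Digitally select one WLAN channel from a wideband complex capture.

    x: complex baseband samples covering the 40 MHz band, sampled at fs (e.g. 80 MHz).
    f_carrier: offset of the detected channel centre within the captured band (Hz).
    """
    n = np.arange(len(x))
    shifted = x * np.exp(-2j * np.pi * f_carrier / fs * n)   # bring the channel to DC
    h = firwin(ntaps, (bw / 2) / (fs / 2))                   # low-pass, ~11 MHz cut-off
    filtered = lfilter(h, 1.0, shifted)
    return filtered[::decim]                                  # reduce the data rate
```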
After standard identification and a resampling adapted to the detected standard, digital processing is applied to the signal.
Fig. 3. Description of the numerical functions.
Fig. 3 shows the different functions implemented on each of the eight DSPs. The Channel Impulse Response (CIR) for 802.11g transmissions is computed in the frequency domain by the PC, before SIMO processing. For 802.11b transmissions, Barker despreading is applied after the computation of the optimal SIMO coefficients. Thanks to the several algorithms implemented, the data rate of the PCI transfer to the PC is sufficiently low.
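For the 802.11b path, Barker despreading amounts to a correlation with the 11-chip code. The sketch below assumes one sample per chip and one common sign/ordering convention for the Barker sequence; it is an illustration, not the DSP code.

```python
import numpy as np

# One common convention for the 11-chip Barker sequence (sign/order choices vary).
BARKER11 = np.array([1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1], dtype=float)

def barker_despread(chips):
    """Despread a DSSS chip stream sampled at one sample per chip.

    chips: complex chip-rate samples; the length is truncated to a multiple of 11.
    Returns one complex value per symbol, to be fed to the differential detector.
    """
    n_sym = len(chips) // 11
    blocks = np.reshape(chips[:n_sym * 11], (n_sym, 11))
    return blocks @ BARKER11 / 11.0
```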
At this time, the 802.11b and 802.11g processing chains have been tested with Matlab, then implemented and tested in VC++. The 802.11b receiver architecture is implemented on the DSP and validated, but a lot of work remains to meet the real-time constraint. The SDR concept combined with SIMO processing and a multi-channel approach is a very promising way to develop future wireless terminals ensuring robust and flexible transmissions. The aim of this work is to propose an SDR receiver architecture including these different approaches. RF properties, real-time digital processing and global system considerations have to be taken into account in order to propose a suitable architecture. Even if the structure of the SDR terminal and the roles of the different pieces of equipment of our platform, according to their computing capacities, are well defined at this time, a lot of work on DSP implementation optimization remains to be done to allow real-time processing.
Fig. 1. BER vs. Eb/No for 802.11b transmission
The digital processing is carried out by two TS-C43 PMC boards from VSystems [12]. Each board (Fig. 2) comprises 4 clustered TigerSHARC DSPs running at 300 MHz, a Xilinx Virtex-II FPGA and a 528 Mo/s PCI connection.
Fig. 2. The TS-C43 PMC board.
The FPGA and the RF stage are connected together by an external I/O connection of 100 pins, each allowing a 100 MHz link. Between the FPGA and the DSPs, a 150 Mo/s connection is possible. However, the rate of the data coming from the RF stage to the FPGA, given the sampling rate and the bit coding, is 800 Mo/s, so a large data-rate reduction must be applied in the FPGA. Part D describes the functions of the different digital processing components. Finally, each DSP is connected to a 600 Mo/s internal bus to access the SDRAM, the Flash memory and the PCI gateway; the coding used on this TS-C43 board is 8 bits.
ACKNOWLEDGEMENT
The authors wish to acknowledge France-Telecom R&D who supports this work. |
00412055 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2007 | https://inria.hal.science/inria-00412055/file/eucap07.pdf | P F Morlat
X Gallon
G Villemaud
J M Gorce
MEASURED PERFORMANCES OF A SIMO MULTI-STANDARD RECEIVER
Keywords: SIMO processing, measure validation, SDR receiver, WLAN
SIMO algorithms ensure important increasing of radio communications robustness, but their performances strongly depend on channel propagations conditions and antenna characteristics. This article presents performances of different SIMO treatments applied on WLAN (802.11b and 802.11g) transmissions in a multi-standard and multi-channel context. These performances are obtained by measure, under different propagation channels and for realistic working conditions.
Introduction
Combining different communication standards in the same receiver structure, based on the Software Defined Radio (SDR) concept, is a very promising approach for future wireless systems [START_REF] Xu | Analysis and implementation of software defined radio receiver platform[END_REF][START_REF] Harada | Software Defined Radio Prototype toward Cognitive Radio Communication Systems[END_REF]. Moreover, SIMO (Single Input Multiple Output) processing improves wireless system performances. Furthermore, a multi-channel receiver is an interesting evolution with the arrival of communication standards defined on overlapping channels, such as 802.11 systems. A software demonstrator simulating the operation of a multi-channel, multi-standard and multi-antenna receiver was therefore developed using Advanced Design System (ADS) from Agilent Technologies [START_REF] Morlat | Performance Validation of a multi-standard and multi-antenna receiver[END_REF]. At this time, we focus our study on a 4-arm receiver capable of dealing with 802.11g and 802.11b signals cohabiting in a 36 MHz bandwidth. However, in order to better estimate SIMO processing performances, an evaluation of those algorithms in real working conditions is necessary: antenna coupling, channel correlation and propagation channel properties all have an important influence on these performances. Based on the capabilities of our 2x2 radio platform using the Agilent connected-solution equipment [START_REF] Morlat | Global system evaluation scheme for multiple antennas adaptative receivers[END_REF], this work presents measured performances of a SIMO multi-standard multi-channel receiver.
The Measurement procedure 2.1 Description of the platform
A complete test bed platform was installed, using Agilent Technologies equipment and ADS software (Fig. 1).
Description of the measure
Our aim is to study SIMO performances under realistic working conditions. In this context, antenna coupling, channel correlation and different propagation conditions must be introduced in the measurement system. Given the current capabilities of our platform, only the 1x2 SIMO configuration is possible. Correlation of the received signals and antenna coupling introduce a loss of diversity, and therefore a reduction of SIMO performances. The envelope correlation ρ between two signals x and y is computed according to (1) [START_REF] Dietze | Analysis of a two-ranches maximal ratio an selection diversity system with inequal SNRs and correlated Inputs for a Rayleigh Fading Channel[END_REF]:
$$\rho = \frac{E\big[(x-\bar{x})(y-\bar{y})\big]}{\sqrt{E\big[(x-\bar{x})^{2}\big]\cdot E\big[(y-\bar{y})^{2}\big]}} \qquad (1)$$
where E(.) denotes the expected value and
dv v dy y BER N X X v X v - ⋅ - - = - ∞ - + - - ∫ ∫ 2 1 2 2 2 exp 2 exp 2 1 1 π (2) Where No Eb X ⋅ = 2
and N=8 in the case of an 11 Mbps transmission. Figure 3 presents BER variation for a 36 Mbps 802.11g transmission under different channel propagation conditions. A very good match between simulated and measured AWGN results can be observed (only 1 dB of deviation, but measures were no realized in anechoic chamber). Simulated performances of the mono-antenna 802.11g receiver for propagation under the ETSI-A channel are also presented. An important deviation (about 4 dB) between these simulated results and BER variation under the measured multi-path channel is observed. That is due to the fact that only two echoes are detected during the characterization of the propagation channel used for measure, while 18 taps model for the ETSI-A channel. Of this fact, it is normal to observe better performances of the channel equalizer for propagation under the measured channel than under the simulated ETSI-A channel. It is important to note that at this time, BER variations are obtained in the case of a static channel and that no fast fading is introduced to validate our platform and measurement system. Introduction of fast fading will be described in the next section.
1x2 SIMO measure
SIMO performances strongly depend on channel correlation, and also antenna coupling [START_REF] Shiu | Fading correlation and its effect on the capacity of MEA systems[END_REF]. That is why studying SIMO processing introducing channel correlation and also antenna coupling is relevant. All of SIMO algorithms used to increase 802.11b and 802.11g transmissions are described in [START_REF] Morlat | Performance Validation of a multi-standard and multi-antenna receiver[END_REF]. Only 1 dB of gain thanks SMI processing is obtained compared to a single antenna receiver dealing with the best signal recorded (switch antenna selection). It is due to the fact that difference of received signals power on both antennas was very important during these measures. However, using SF-MMSE algorithm to combine different incident signals ensure an increasing of system performances of 3 dB. But propagation under static multi-path channel do not really corresponds to realistic working conditions, that is why to have a more precise estimation of our system's performances, it is necessary to introduce fast fading in the measurement procedure. In this context, speed was imposed to the antenna during measurement. In order to choose the maximum imposed speed, we must keep in mind that the propagation channel has to stay constant during the time of one frame. In the demonstrator developed with ADS and running with Ptolemy tool , speed of the terminal is fixed to 10 km/hr, i.e. 2.78 m/s, which results in a maximum Doppler shift of 22 Hz. The Doppler spread and coherence time are inversely proportional to each other, yielding a coherence time of 45 ms. Duration of a 802.11g or 802.11b frame depends on the data rate and also on data size to transmit. In our conditions of work, the longest frames are obtained for 11 Mbps 802.11b transmissions. Packet size of these frames is 100 octets, so the duration of one frame is 290 usec. So, channel could be considered constant. Figure 6 presents BER performances of the single antenna receiver and the 1x2 SIMO receiver running with signals recorded at the same time after propagation under the multipath channel by the moving terminal in the case of a 36 Mbps 802.11g transmission. The SIMO processing used is the SMI, and BER curves of the single antenna receiver were obtained with one of the two recorded signals. Indeed, in this case of measure, signal power level on each arm of the SIMO receiver are quite equivalent. In these conditions of work, estimate Eb/No at the input of the receiver is quite difficult due to the fast variation of the signal power level. That is why BER curves are not so smooth as usual. About 5 dB of gain is observed thanks SMI processing for a BER value of 10 -2 . We also observe an important variation of the curve's slope. The well know result that with SIMO processing under fast fading propagation, BER decreases more quickly than in the case of a single antenna reception is verify.
Multi-standard configuration
Nowadays, with the growth of number of communication standard, users would like to have only one terminal to achieve multiple wireless standards. The most promising technology to achieve such receiver is SDR technology, which allows users to switch communication systems by changing software alone. At this time we focus our study on a system able to run with 802.11g and 802.11b transmission.
Cohabitation between these two standards seems to be relevant to study because they share the same RF carrier and are defined on overlapping channels. In this context, designing a receiver based on SDR concept and sampling a frequency band wider than those of a single communication channel seems to be a relevant study. This allows taking advantage of the knowledge of adjacent channel interferes and to increase transmission performances. [START_REF] Morlat | Global system evaluation scheme for multiple antennas adaptative receivers[END_REF] describes the architecture of the multi standard (802.11b/g) multi channel (40 MHz of bandwidth reception) SDR terminal we propose. This part of the article presents performances of a multistandard multi-channel terminal using 1x2 SIMO processing able to run with two 802.11 signals cohabiting at the same time. SIMO algorithms ensure to combat fading and multipath effect, but we will show that these algorithms also ensure mitigation of interferences. Others adjacent channel cancellation processing exist [START_REF] Suthaharan | Joint interference cancellation and decoding scheme for next generation wireless LAN systems[END_REF]. Performances of these algorithms will be probably studied later.
An important parameter to study the comportment of the multi-channel terminal is the number of communications channels between the two signals of interest.
The first tested configuration is the cohabitation of two 36 Mbps 802.11g signals emitted through an AWGN propagation channel on different carrier frequencies. Future work will deal with further analysis and estimation of WLAN performances in a multi-channel configuration. Studies on the implementation of more efficient multi-antenna algorithms, allowing better interferer mitigation, could also be of interest.
The platform is made of one arbitrary waveform generator (ESG 4438C) and a vector spectrum analyzer (VSA 89641) with two RF inputs. These instruments are connected to a PC running the ADS software. With this platform, any signal with an RF carrier frequency of up to 6 GHz can be generated by ADS and emitted over a real propagation channel. Our platform provides a reception bandwidth of 40 MHz, which makes it possible to have two 20 MHz WLAN signals cohabit, emitted on two different carrier frequencies spaced by 20 MHz. The recorded signals are transferred to the software, and all the baseband processing is applied in order to combine the different received signals (SIMO processing) and to demodulate the data.
Figure 1: 2x2 MIMO transmission with the radio platform
Figure 3: 36 Mbps 802.11g SISO performances
Figure 5: 802.11g BER performances
Figure 6: 802.11g BER performances
3 Measured performances 3.1 Validation for a SISO configuration
In (1), x̄ = E(x) denotes the expected value.
Table 1 presents correlation value in function of the antenna
spacing d (a fraction of wave length λ) in the case of a NLOS
(Non Line of Sight) transmission configuration. For each
distance value, we also give the BER (Bit Error Rate) value
obtained thanks to SIMO processing applied to the two recorded
signals used to compute the correlation.
d 0.3 0.5 0.75 1 1.25 1.5
ρ 0.45 0.43 0.02 0.07 0.1 0.12
BER 0.02 0.03 0.03 0.01 0.02 0.01
Table 1: Correlation value in function of antenna spacing
A very low envelope correlation (ρ<0.7) is observed. A quite
constant BER is also obtained, proving that for all distances
between the two antennas, correlation has no influence on
system's performances.
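For reference, the correlation in (1) can be computed directly from two recorded captures; the sketch below assumes the complex baseband signals from the two antennas are available as numpy arrays, which is an assumption about the data layout rather than the platform's actual code.

```python
import numpy as np

def envelope_correlation(x, y):
    """Envelope correlation of eq. (1) between two received signals.

    x, y: complex baseband captures from the two antennas (same length).
    """
    ex, ey = np.abs(x), np.abs(y)            # signal envelopes
    dx, dy = ex - ex.mean(), ey - ey.mean()  # remove the mean values
    return np.mean(dx * dy) / np.sqrt(np.mean(dx**2) * np.mean(dy**2))
```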
For different antenna spacing, coupling between the two arms
is computed and the maximum value, obtained for a distance of
0.3 λ is equal to -20 dB. That is why, coupling effect between
receiving antennas could be considered as negligible.
Finally, characterisation of the propagation channel was
realized. The measured NLOS channel used has a delay
spread much less important (about 76 ns, τ rms = 35 ns) than
the often used channel model ETSI-A for office environment
(delay spread = 390 ns, τ rms = 50ns) [6].
At first, in order to validate the structure of our radio
platform, first tests were realized in a SISO configuration
under AWGN and multi path propagation. Figure 2 presents
results we obtained for an 11 Mbps AWGN 802.11b
transmission.
(Figure 2 plot: BER on a logarithmic scale from 1 to 1e-5 versus Eb/No in dB from 0 to 12; curves for theoretical, measured and simulated BER.)
Figure 2: 11 Mbps 802.11b SISO performances
For each Eb/No, 10 000 frames of 100 octets are emitted and demodulated to accurately estimate the corresponding BER. The theoretical 11 Mbps 802.11b BER variation was computed according to (2)
[START_REF] Borgo | Analysis of the hidden terminal effect in multi-rate IEEE 802.11b networks[END_REF]
Table 2
Table 3: multi-standards multi-channel performances
With table 3, we can also observe the contribution of SMI processing to mitigate interference effects and to decrease transmission errors. In the case where the two channels are 25% overlapped, even if the terminal uses a single antenna, no transmission error is obtained.
4 Conclusions
The SDR concept combined with multi-channel and SIMO processing in the same terminal is a very promising way to develop future wireless receivers ensuring high data rates and robust transmissions. The aim of this work was to present the performance results of several SIMO processing schemes applied to real communication standards under realistic working conditions. These results were obtained by measurement, thus taking into account a propagation channel as realistic as possible, antenna coupling, channel correlation and fast fading. We first presented results obtained in a single-user transmission to detail the effect of the different propagation parameters; in the second part of this article, several measured results simulating the operation of a 40 MHz receiver dealing with different 802.11 signals were given. With these results we can study the effect of adjacent-channel interference in WLAN communications for different values of the channel spacing between the two signals of interest. Results are also given to prove the utility of spatial diversity to mitigate not only fading channels but also interference channels. |
04120551 | en | [
"spi",
"phys"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04120551/file/DMPE23032.1680099399.pdf | Thomas Geoffrey Decker
email: [email protected]
Robin William Devillers
Stany Gallier
Experimental visualization of aluminum ignition and burning close to a solid propellant burning surface
Visible images of burning solid propellants loaded with aluminum particles are presented. The images are recorded at high acquisition frame rate (up to 33kHz) with excellent sharpness, enabling the study of aluminum ignition and combustion close to the propellant burning surface. The article focuses on two aspects: the aluminum aggregate ignition and the alumina breaking and migration on the aggregate surface. The article presents new ways of studying aluminum agglomeration and combustion by applying image processing algorithms on state-of-the-art image series.
Introduction
Solid Rocket Motors (SRM) are widely used in military and civilian applications. They are highly reliable, require little maintenance, and are readily available even when stored for a long time. A solid propellant is a solid material used to generate thrust by its combustion. It is composed of a fuel binder and oxidizing particles. Metal particles, usually aluminum, are added to increase the propulsion performances with additional combustion heat release. While the fuel and the oxidizer burn at the propellant surface, the aluminum particles burn in the gas flow produced by the fuel/oxidizer combustion. The aluminum particles are subject to complex phenomena prior to their combustion in the gas flow [START_REF] Price | Combustion of aluminized solid propellants[END_REF][START_REF] Babuk | Formation of condensed combustion products at the burning surface of solid rocket propellant[END_REF]. They follow an accumulation-aggregationagglomeration mechanism [START_REF] Maggi | Combustion of metal agglomerates in a solid rocket core flow[END_REF][START_REF] Ao | Aluminum agglomeration involving the second mergence of agglomerates on the solid propellants burning surface: experiments and modeling[END_REF]. The particles accumulate in the solid propellant pockets [START_REF] Cohen | A pocket model for aluminum agglomeration in composite propellants[END_REF], they aggregate while being attached to the burning surface and finally agglomerate when they ignite on the burning surface or in the gas flow. The aluminum particles are referred as aggregates (on the burning surface, with a coral-like shape, eventually morphing into a spherical one), and agglomerates (in the gas flow, with a spherical shape). The resulting droplets can reach hundreds of micrometers in diameter [START_REF] Babuk | Nanoaluminum as a solid propellant fuel[END_REF], while the virgin particle size is usually between 5 and 50µm. An oxide (alumina) cap is attached to the burning aluminum droplet. The droplet combustion produces heat as well as alumina, with some parts being evacuated in the gas flow as smoke while another portion increases the size of the oxide cap.
The inert alumina droplets produced during the aluminum combustion influences the stability and the performances of the SRM, depending on their size [START_REF] Salita | Deficiencies and requirements in modeling of slag generation in solid rocket motors[END_REF][START_REF] Li | Effects of particle size on two-phase flow loss in aluminized solid rocket motors[END_REF]. The particles often follow a trimodal distribution [START_REF] Zarko | Formation of al oxide particles in combustion of aluminized condensed systems[END_REF] when captured with collection techniques ([2, 10, 11]). The lower and higher modes of the size distribution are respectively attributed first to alumina smokes and second to the aluminum agglomerates and coarse oxide, but the intermediate mode (4 -8µm) is not fully understood [START_REF] Zarko | Formation of al oxide particles in combustion of aluminized condensed systems[END_REF], and doubts remain about the production of those alumina droplets. The alumina production mechanism during the aluminum droplet combustion is considered in many studies [12,[START_REF] Karasev | Formation of charged aggregates of al2o3 nanoparticles by combustion of aluminum droplets in air[END_REF]. But its production close to the surface when the aluminum is still attached to the propellant has never been studied to the authors' knowledge. No model for oxide-cap formation during the aggregation-agglomeration mechanism has been published. This formation is crucial because the alumina migration to form an oxide cap influences the aluminum ignition on the burning surface while the size of the resulting oxide cap influences the aluminum combustion along the droplet lifetime [12,[START_REF] Beckstead | Correlating aluminum burning times[END_REF][START_REF] King | Aluminum combustion in a solid rocket motor environment[END_REF].
Two phenomena associated with the alumina production and oxide cap formation on the burning surface are studied with image processing: the aluminum aggregate ignition and the alumina layer breaking and migration. This study shows new ways of studying physical phenomena associated with aluminum agglomeration and combustion in solid propellants.
Experimental setup and data
Shadowgraphy was used to visualize accurate details above the solid propellant surface during combustion [START_REF] Devillers | 7th European Conference for Aeronautics and Space Sciences[END_REF]. Here a similar optical set-up was used without back illumination to improve the contrast between aluminum and alumina. A high-speed camera acquires frames to target physical phenomena on the solid propellant burning surface. The acquisition rate was adjusted up to 33kHz, and the recorded images size depends on the frame rate. The spatial resolution is about 3.2µm/px.
Two solid propellants are studied (P1 and P2). Their compositions are similar: both include a trimodal distribution of Ammonium Perchlorate (AP) particles, a hydroxyl-terminated polybutadiene (HTPB) binder and approximately 20% of aluminum particles. They only differ in the initial size of the aluminum particles: approximately 30 µm for P1 and approximately 15 µm for P2. In order to capture the aluminum particles or agglomerates as clearly as possible, the propellants are beveled. The camera views the propellant slightly from the top, at an angle of about 45°. Heated aluminum and alumina are clearly visible because of their incandescence.
The P1 propellant is studied at two initial pressures, 5 and 10 bar. It is recorded at 14kHz in order to capture the aluminum ignition of the aggregates. The P2 propellant is studied at one initial pressure, 5 bar. It is recorded at 33kHz in order to capture the aluminum breaking and migration. Image examples are presented in figures 1, 2, and 3. The surface on the image of the P1 propellant at 10 bar is brighter than at 5 bar (the experimental setup is kept identical). The solid propellant flames are closer to the surface with increasing pressure. The aluminum close to the surface is therefore hotter, i.e. brighter.
Image processing 3.1 Aggregates detection
The first studied phenomenon is the aluminum aggregates ignition on the burning surface of the propellant. The objective is to automatically detect and track the aluminum aggregates and agglomerates on the propellant surface and in the gas flow. The number of captured images for the P1 propellant at the initial pressures of 5 and 10 bar being approximately 2000 and 2500, automatic algorithms are needed. The following processing steps were used:
Step 1 -Top-hat filtering The Top-hat transform [START_REF] Dougherty | SPIE[END_REF] extracts elements from given images. The transform depends on a structuring element, usually a disk. The bright objects with much smaller sizes are considered as part of the background. Therefore, only the aggregates with sufficient sizes can be easily detected.
Step 2 -Thresholding A simple thresholding of the resulting image enables the detection of the aggregates and agglomerates on the burning surface. The threshold choice is a compromise in order to avoid merging separate aggregates into a single detected object.
Figure 5 shows steps 1 and 2 for the P1 propellant at 5 bar, compared to the color scale limited original image shown in figure 4.
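Steps 1 and 2 can be reproduced with standard image-processing libraries; in the sketch below the structuring-element size and threshold are placeholders, not the values used for the figures.

```python
import numpy as np
from scipy import ndimage

def detect_bright_objects(image, selem_size=25, threshold=30):
    """Step 1: white top-hat removes the slowly varying background;
    Step 2: a fixed threshold yields a binary mask of the bright aggregates."""
    tophat = ndimage.white_tophat(image.astype(float), size=selem_size)
    return tophat, tophat > threshold
```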
Steps 3 and 4 -Detection and tracking
The resulting thresholded image is binary. The aggregates and agglomerates are easily detected. Metrics are automatically calculated, such as the area, the circularity, the mean and the maximum intensity.
The resulting detections are tracked on successive images to analyze the behavior of aggregates on the solid propellant surface. Thanks to the significant frame rate, a simple Intersect-Over-Union (IOU) threshold is employed. Two tracked aggregates at 5 and 10 bar can be visualized in figures 6a and 6c).
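A minimal version of the IOU association used for steps 3 and 4 is sketched below, with detections represented as binary masks; the matching threshold and greedy strategy are assumptions for illustration.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-Union of two boolean detection masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def link_detections(tracks, detections, iou_min=0.3):
    """Greedy frame-to-frame association: each detection extends the track whose
    last mask gives the best IOU above iou_min; otherwise it starts a new track."""
    for det in detections:
        scores = [iou(tr[-1], det) for tr in tracks]
        best = int(np.argmax(scores)) if scores else -1
        if scores and scores[best] >= iou_min:
            tracks[best].append(det)
        else:
            tracks.append([det])
    return tracks
```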
Alumina detection
The second studied phenomenon is part of the aggregate ignition, it is the alumina breaking and migration. The objective is to detect and track alumina on the surface of aluminum aggregates. In order to study the phenomenon, the frame rate is much higher, 33kHz. We focus here on one sequence of approximately 300 frames where the alumina encapsulating an aggregate breaks up and migrates.
A Top-hat transform is applied, followed by a thresholding. An example is shown in figure 7 for propellant P2 at 5 bar, with (a) the Top-hat transform and (b) the thresholded image. Figure 7 zooms on an igniting aggregate on the propellant surface.
The detection algorithm can segment the main alumina surface surrounding the aluminum collapsing into a lobe (on the left-hand side) but also isolated alumina portions moving on the aluminum surface (on the righthand side) that will eventually reach the forming oxide cap.
Several metrics are automatically calculated, such as the area, the circularity, the mean and the maximum intensity. The main alumina surface is easily tracked over time, as the largest detection. We follow other alumina detections with the IOU calculation. Here, the minimum IOU is fixed very low because of the rapid motion of the alumina zones compared to their size. Figure 8a shows the evolution of the main alumina layer that initially surrounds the whole liquid aluminum, and figure 9a shows successive positions of a single alumina portion.
Results and discussion 4.1 Aggregates ignition
Here are comments on the aggregate ignition on the surface, based on two aggregates tracked over time (figures 6a and 6c). The corresponding metrics are plotted in figure 6b.
The growth of the detected areas of the two aggregates on the burning surface (see images 1 to 4 from figure 6a and images 1 to 3 from figure 6c) can be followed through the area metric. Only fragments of the aggregates are detected at first, because the overall aggregate is not bright enough at this early heating stage. The detections grow with time as more and more portions of the aggregates become sufficiently bright.
The increase in temperature of a tracked aggregate is not only synonymous with an increase of the detected area but also a steady increase in the mean intensity (see images 1 to 4 from 6a and 1 to 3 from figure 6c). The mean intensity slowly increases from 2.5 to 4.0ms for the aggregate at 5 bar (blue plot) while the intensity increase happens from 1.1 to 2.0ms at 10 bar (red plot). The intensity spike happening around 0.3ms in the red plot corresponds to a burning droplet passing nearby in its upwards movement.
When the aggregates are ignited and transformed into agglomerates, they establish a diffusion flame with the surrounding gas (see image 6 from 6a and image 5 from figure 6c). The diffusion flame establishment corresponds to an increase in area, an increase in mean intensity and a decrease in circularity. Simultaneously, the newly formed agglomerates leave the surface and move in the gas flow. The movement in the gas flow is visible with the vertical velocity increase starting at 5ms at 5 bar and 4.2ms at 10 bar.
The metrics are calculated for each tracked aggregate on the burning surface detected for at least five frames. 786 aggregates were tracked at 5 bar and 1259 at 10 bar. Two estimates are calculated for each track:
• The track duration gives an estimate of the ignition time τ.
• A mean intensity Imean is calculated over the complete track.
Distributions of the two estimates are presented in figures 10a and 10b. To avoid visualizing the threshold effect as a spike of the distribution at the lowest values, the distribution of the ignition time is presented as a cumulative distribution. The ignition time τ is globally lower at 10 bar than at 5 bar: the ignition of the aggregates is faster with increasing pressure. A shorter ignition delay leads to less agglomeration, which is consistent with the physics of aluminum agglomeration in solid propellants, where agglomerates are smaller with increasing pressure [START_REF] Pokhil | Combustion of metal powders in active media[END_REF][START_REF] Sambamurthi | Aluminum agglomeration in solid-propellant combustion[END_REF].
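The two per-track estimates can be read directly off the stored detections, as in the short sketch below; the frame rate is the 14 kHz value used for these tests, while the field names are illustrative assumptions.

```python
FRAME_RATE = 14e3  # Hz, acquisition rate used for the P1 ignition study

def track_metrics(track):
    """track: list of detections, each holding an 'intensities' numpy array of pixel values."""
    ignition_time = len(track) / FRAME_RATE  # track duration = ignition time estimate (s)
    mean_intensity = sum(d["intensities"].mean() for d in track) / len(track)
    return ignition_time, mean_intensity
```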
The mean intensity I mean is globally increasing with increasing pressure. Propellant flames get closer to the burning surface with increasing pressure, leading to a more efficient aggregate heating (i.e. brighter detections).
Overall, tracking aluminum aggregates on the burning surface enables the study of physical phenomena associated to aluminum agglomeration, ignition, and combustion on the solid propellant burning surface. It helps understand the importance of the aggregates' temperature on its ignition. The study of the detected area shows that the aggregates are not homogeneous in temperature. Data also show the importance of temperature in the aggregates ignition. With increasing pressure, the solid propellant flames are closer to the burning surface, the aggregates are hotter, resulting in a decreasing ignition time.
A more quantitative study is required to compare the ignition time of the aggregates to their residence time on the propellant surfaces. Models would have to include ignition delay if its time range is comparable to the residence time on the burning surface. A more detailed study could put into equation the ignition time depending on physical parameters such as pressure, the propellant constitution or the aggregate size.
Alumina breaking and migration
The second focus is on the alumina breaking and migration of an aluminum aggregate attached to the solid propellant burning surface. The alumina breaking and migration exposes the liquid aluminum to the surrounding oxidizing gas. Therefore, the liquid aluminum starts to burn and establishes a diffusion flame (see images 4 to 6 from figure 8a). We focus on two tracks, the main alumina surface that initially surrounds the whole aluminum aggregate and an isolated alumina portion moving rapidly on the liquid alumina surface. Metrics associated with the two tracks are presented in figures 8b and 9b.
The main alumina surface follows a first period of ∼ 7ms where it keeps a circular shape but seems to have a variable temperature (i.e. a variable mean intensity in figure 8b). The exothermic production of alumina is a possible explanation of the increased temperature (i.e. an increased mean intensity in figure 8b between 2.5 and 5ms) prior to the alumina breaking. Price et al. [START_REF] Price | Combustion of aluminized solid propellants[END_REF] demonstrated that the encapsulating alumina cracks due to the expanding liquid aluminum. The liquid aluminum finds itself partially exposed, resulting in the production of additional alumina.
Then, the alumina surface breaks up on the top of the aggregate (where it is supposed to be hotter), a local temperature increase occurs due to the released liquid aluminum combustion, and the circular alumina surface retracts. The alumina break up happens at ∼ 7ms. The alumina may move on the aluminum surface as small portions. The phenomenon is very rapid, as shown in figure 9b (the total duration of the alumina movement being only a few ms). The alumina may also be ejected as small drops (see images 3 and 4 from figure 9a).
The small tracked alumina portion also has a variable temperature (i. e. a variable mean intensity) as shown in figure 9b. It seems that its temperature decreases when it is moving from the top of the liquid aluminum to the main alumina surface, meaning that the aluminum aggregate has a vertical temperature gradient. It confirms the analysis in the previous section that the aggregate temperature is not homogeneous.
The movement of alumina on the liquid aluminum stops at ∼ 14ms. The reason attributed here is that the aluminum of this aggregate does not produce enough heat. The alumina produced on the liquid aluminum surface tempers the combustion and slowly passivates the aggregate. It is visually observed that some igniting aggregates can be passivated. Another ignition needs to occur to finalize the transition from aggregate to agglomerate.
Overall, significant alumina production occurs on the liquid aluminum surface, during the residence of the aluminum aggregate on the solid propellant burning surface. The produced alumina then migrates into the oxide cap. Therefore, a portion of the final residue is produced when the aluminum aggregate is attached to the propellant burning surface. This study shows the importance of physical phenomena associated to aluminum on a solid propellant surface, influencing the size of the resulting alumina cap, and affecting the stability of a SRM.
Conclusion
Visible images of burning solid propellants loaded with aluminum particles were presented. Physical phenomena associated with alumina production were studied thanks to high-speed recording, excellent spatial resolution, good image clearness, and new algorithms based on a Top-hat transform.
Different stages of the ignition of aggregated aluminum were shown at two initial pressures, 5 and 10 bar. The aggregate temperature increases before it ignites and transforms into a spherical agglomerate. It was found that the ignition time decreases with increasing pressure. The mean intensity of the aggregates increases with increasing pressure, meaning that the aggregates are hotter on the burning surface: increased pressure brings the solid propellant flames closer to the burning surface and to the aggregates.
The alumina breaking and migration were observed thanks to a test at a very high frame rate of 33 kHz. An alumina portion was even tracked on a liquid aluminum droplet. Small aluminum/alumina drops were observed being ejected in the gas flow during the migration of the surrounding alumina surface. A significant production of alumina occurs on the liquid aluminum surface. To the authors' knowledge, these phenomena had never been studied in the literature.
The study showed the importance of the aggregates' temperature during their ignition. It also showed that aggregates ignite with a delay that depends on pressure. With refined data analysis, quantitative data about the temperature gradient as a function of pressure or aggregate size could be obtained, and its influence on the aluminum aggregate ignition delay could be investigated. Quantitative data on the aggregate temperature gradient and ignition delay will be useful for future aluminum agglomeration models.
Figure 1 :
1 Figure 1: Image of the P1 propellant at the initial pressure of 5 bar. Agglomerates are visible in the gas flow, and aggregates (igniting or not) on the burning surface.
Figure 2 :
2 Figure 2: Image of the P1 propellant at the initial pressure of 10 bar.
Figure 3 :
3 Figure 3: Image of the P2 propellant at the initial pressure of 5 bar. Aggregates transforming into agglomerates are visible on the left and in the middle.
Figure 4 :
4 Figure 4: Image of the P1 propellant at the initial pressure of 5 bar with a limited color-scale.
Figure 5 :
5 Figure 5: Thresholding following the Top-hat transform application on the image of the P1 propellant at the initial pressure of 5 bar.
Figure 6 :
6 Figure 6: Tracking and physical metrics of two aggregates/agglomerates of the P1 propellant at the initial pressures of 5 and 10 bar.
Figure 7 :
7 Figure 7: (a) Top-hat transform application on the zoomed image on an aggregate of the P2 propellant at the initial pressure of 5 bar, (b) Thresholding of the resulting image
Figure 8 :
8 Figure 8: Tracking and physical metrics of the main alumina surface of an aggregate of the P2 propellant at the initial pressure of 5 bar (thumbnails zoomed).
Figure 9 :
9 Figure 9: Tracking and physical metrics of a moving alumina portion of an aggregate of the P2 propellant at the initial pressure of 5 bar (thumbnails zoomed).
Figure 10 :
10 Figure 10: Distributions for the tracked aggregates at 5 and 10 bar. (a) Mean Intensity distribution, (b) Ignition time cumulative distribution
Acknowledgment
Thomas Decker's PhD is financed by ONERA and ArianeGroup. S. G. thanks DGA (French Procurement Agency) for funding. |
00412056 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2007 | https://inria.hal.science/inria-00412056/file/eucap07_alaus.pdf | L Alaus
G Villemaud
P F Morlat
J M Gorce
WLAN PREAMBLE DETECTION METHODS IN A MULTI-ANTENNA, MULTI-STANDARD SOFTWARE DEFINED RADIO ARCHITECTURE
Keywords: SDR receiver, SIMO, wideband sensing, WLAN
This paper presents a comparison of different detection methods which could be used to sense a frequency band in order to select a particular channel and a particular communication standard in a wideband melted signal. Here we focus on a generic WLAN receiver based on SDR principles, capable of sampling several overlapping channels to demodulate simultaneously concurrent users possibly using different waveforms.
Introduction
Software Defined Radio (SDR) is a well-known concept, particularly promising for combining numerous communication standards in a single receiver [START_REF] Mitola | The software radio architecture[END_REF]. This powerful approach can be associated with multi-antenna principles to enhance terminal capabilities by means of SIMO or MIMO algorithms, naturally offering interference rejection possibilities. In order to move toward a truly cognitive system, a first step is to allow a wideband detection of all received signals (supposing wideband front-ends), enabling an efficient characterization of all possible standards. The first part of this paper presents the general structure of an SDR testbed dedicated to combining the multi-antenna, multi-standard and multi-channel approaches. This supposes several antennas, with several RF front-ends and wideband ADCs, enabling more than one channel to be sampled. It then becomes possible to numerically select the desired channel and also to characterize the standard used. Moreover, this system allows the test of interference cancellation techniques on adjacent channels, combined with multi-antenna algorithms [START_REF] Mary | Reduced Complexity MUD-MLSE Receiver for Partially-Overlapping WLAN-Like Interference[END_REF]. The second part then focuses on the first and fundamental step of the digital receiver: detection and characterization of the present signals. Different methods to detect and analyze the quality of particular frames within melted signals are discussed. The aim is to provide a good trade-off between precision and computation cost, to offer a fast decision on the presence of usable information and, in case of several emitters, on the quality of each link. Of course, the idea is to characterize the signal before any demodulation step, in order to automatically choose which numerical blocks have to be processed to recover the data. For a practical case study, two cohabiting standards (without any cooperation) of WLAN systems are used, 802.11b and 802.11g. Three families of detection methods have been explored: correlators with period detection, cyclostationarity, and a second-order moment estimator detector. To compare these methods, an indoor environment was simulated with 802.11b and 802.11g frames emitted, and one or two antennas at the receiver. AWGN or multipath channels were considered, with an interferer placed on any of seven different overlapping channels. Results are discussed in terms of performance versus complexity. Finally, the impact of using a multi-antenna algorithm associated with these methods was investigated by way of an SMI approach [START_REF] Morlat | Performance Validation of a multi-standard and multi-antenna receiver[END_REF].
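As a point of comparison for the correlator family mentioned above, a basic sliding-correlation detector against a known preamble can be written in a few lines; the preamble waveform and the detection threshold below are placeholders, not the values used in the study.

```python
import numpy as np

def preamble_detect(x, preamble, threshold=0.6):
    """Normalized sliding correlation of the wideband capture x with a known
    preamble waveform; returns candidate start indices where the metric exceeds
    the threshold. Both inputs are complex baseband numpy arrays."""
    L = len(preamble)
    metric = np.empty(len(x) - L + 1)
    p_energy = np.sum(np.abs(preamble) ** 2)
    for n in range(len(metric)):
        seg = x[n:n + L]
        c = np.abs(np.vdot(preamble, seg))           # coherent correlation
        metric[n] = c / np.sqrt(p_energy * np.sum(np.abs(seg) ** 2) + 1e-12)
        # metric ~ 1 when the segment matches the preamble up to a complex gain
    return np.where(metric > threshold)[0]
```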
Structure of the SDR receiver
Global multi-* approach
A common view of the future of wireless communications is the trend toward systems merging numerous standards while always offering the highest quality. Some devices already propose several integrated functionalities through hardware duplication, even in a single chip package. Of course the idealistic vision of a universal software radio is very appealing, but current limitations in sampling capabilities make it unrealistic for high-frequency systems. Nevertheless, SDR principles are readily applicable to the design of flexible multi-standard transceivers, requiring a large-scale RF part and powerful common digital processing resources. Based on well-known considerations, this part describes the use and merging of some popular concepts in the field of converged wireless systems, seen from the receiver point of view, and targeting a flexible test platform for such structures. Let us now list the most meaningful principles used in our general approach.
Multi-standard approach -To achieve the wireless convergence, as said above, an SDR architecture with common baseband processing seems very appealing. This implies that all parts of the receiving process are done numerically, except the downconversion. In this way, each technology no longer relies on a particular chip, but on a software package, allowing dynamical changes, parallel decoding, and also updates or new bundle downloads. Particular attention should then be paid to computation resources, assuming a maximum reuse of basic functions for all standards (FFT, filtering...) and a precise dimensioning to achieve real-time operation. This relies both on computation capabilities and on RF front-end design. SDR proposals often suggest direct conversion architectures, arguing that heterodyne systems cannot fit every standard's needs. Thus, zero-IF or low-IF designs are generally preferred, all possible RF bandwidths being transposed to the same low frequency band before sampling. These simple and cost-effective architectures greatly impact baseband signal properties, requiring numerical compensation. Multi-antenna approach -This is a very hot topic in the area of wireless systems. Even from the receiver point of view only, multiplying antennas offers a high potential to compensate radio channel fading and interference. Indeed, receiving several copies of the same signal allows spatial diversity, with possible spatial rejection, or combined space-time processing. In large angular spread configurations (e.g. urban or indoor), a great benefit could also be taken from time or polarization diversity. Most ongoing work on wireless applications deals with at least space and time combiners, from the simplest forms (antenna switch, FIR...) to advanced techniques like space-time coding (Alamouti code, trellis or block coding for MIMO systems). These popular techniques then have to be evaluated in realistic environments, enabling an efficient trade-off between expected performance and the increase of RF needs (multiple front-ends, power consumption...) as well as computation needs (DSP, FPGA, processor, with also an impact on energy management).
Multi-channel approach -
Here we consider a multiple frequency channel approach. Classical receivers, after selecting their allowed communication channel, tune their RF part to filter only this chosen band, naturally performing an adjacent channel rejection. A global supervision of the network is needed to ensure that all co- and adjacent-channel interference is efficiently reduced. A very relevant example is the 2.4 GHz band for 802.11 systems: this 84 MHz wide band is shared between up to 14 overlapping 22 MHz channels. Therefore frequency reuse is a hard point for dense networks, usually leading to high interference levels. Numerous interference rejection or multiuser detection techniques are well known, generally dedicated to single-channel filtering. We discuss herein the opportunity of a wider sampling, offering richer numerical information on the desired signal together with interferers. Different techniques of interference mitigation, rejection (co- or adjacent channel), or parallel decoding can then be involved. Of course, this principle comes with consequences on RF specifications (wideband operation) and digital processing (several potential additional noise sources).
Global combined approach -
Of course all those multi-* approaches already offer meaningful performance separately. Nevertheless, combining these appealing principles in a global test platform allows evaluating all possible trade-offs between joint and dissociated processing. Indeed this platform must be able to support additional technologies like CDMA or OFDM. A general scheme of such a platform is described in the following part (Fig. 1), with particular attention paid to the RF architecture choice and to numerical compensation techniques [START_REF] Morlat | Global system evaluation scheme for multiple antennas adaptative receivers[END_REF].
General architecture discussion
The key point of such a multi-* architecture is obviously the analog-to-digital converter. Because a sampling frequency of a few GHz is unachievable, the conversion implies a limited bandwidth. The agility of a receiver is then driven by two features: the carrier frequency tunability and the maximal bandwidth. The first allows the receiver to process many standards as a function of the frequency band. Conversely, the second allows the receiver to manage simultaneously several signals of the same technology over adjacent channels. Note that both may converge in a near future because the radio bands are saturated and several standards will have to co-exist on the same band. In co-existing systems, interference plays a major role in limiting the overall capacity. There is no doubt that all techniques of interference mitigation are going to be studied in depth. The SDR framework offers a first way to increase this capability. Indeed, the use of a high sampling frequency permits interference cancelling of adjacent channels by the use of multi-user detection. Multi-antenna processing is also a very complementary technology to avoid or mitigate interference. Finally, a full and efficient agility depends on both RF front-end features and numerical processing. RF front-end features -To properly balance subsystem performance we must be aware of the limitations of the analog RF front-end, considering that compensation can be made numerically. In most systems, the receiver tends to be more complex than the transmitter. The first challenge of receiver design is the dynamic range. The RF front-end of a receiver must separate a desired signal, typically from -130 dBm to -70 dBm, from a background RF environment that may be in the relative range from -20 dB to 0 dB. In many systems, the RF front-end also sets the system signal-to-noise ratio and should be designed to add minimal noise. Thus the overall system must have a considerable dynamic range to accommodate both the high-power background signals and the lowest-power desired signal. Dynamic range is limited at the bottom of the range by noise that enters the system through thermal effects of the components or through non-idealities of the ADC, such as quantization noise or sampling aperture jitter. Low-level signals can be masked by this noise. Dynamic range is limited at the high end by interference. The source of this interference could be co-channel, adjacent channel, or self-induced by the transceiver. High interference levels may cause the receiver to become more non-linear and introduce cross products (spurious components), which may inhibit the detection of low-level signals or reduce the desired signal bit error rate (BER). Simply attenuating the high-level signals before they drive the receiver into a non-linear operating region is insufficient, since low-level desired signals also present will be attenuated until masked. The tunable radio frequency receiver consists of an antenna connected to an RF bandpass filter (BPF). The BPF selects the signal, and the low noise amplifier (LNA) with the automatic gain control (AGC) raises the signal level for compatibility with the ADC. The BPF must be highly selective relative to the carrier frequency, and such a filter is today impossible to design at RF or microwave frequencies. Whatever the technology, it is clearly impractical. A first difficulty in designing a tuned radio frequency receiver is therefore the limitation of the ADC, which must handle high-frequency signals.
In addition, given the bandwidth and roll-off limitations of the RF filter, the sampling rate of the ADC must be high enough to avoid significant aliasing. The consequence of high sampling rate conversion is high power consumption. The ADC must accommodate multiple signals over the wide bandwidth of the RF filter (tens of megahertz) with a dynamic range of 100 dB, and its non-idealities, such as jitter, lead to distortion of the signal. Numerical processing -In such a framework, the signal of interest reaches the processor over-sampled and mixed with other interfering signals. Thus, several functions have to be implemented in a multi-* SDR numerical receiver: signal detection, synchronisation, downsampling, RF impairment mitigation, beamforming, etc. The complexity of the digital receiver is strongly related to the efficiency of the RF front-end. With multiple antennas, the balance between the RF front-end and numerical processing becomes even more critical. For each data channel, a digital demodulation is needed, providing an equivalent narrow-band signal. The over-sampling eases time synchronisation and mitigation, but in turn is subject to an increase of the sampling noise due to possible high-power adjacent channels. This loss of efficiency may be compensated by the use of multiuser detection techniques taking into account adjacent channels and several standards. In this context, the multiple-antenna architecture appears promising, because more signals can be separated. The counterpart is the need for a multi-channel ADC.
Channel detection and characterization
Different methods
The first and fundamental step of signal processing is to detect that an interesting signal is present. Designers must take care to avoid running expensive algorithms continuously when there is often nothing to receive. Numerous methods exist to detect a desired signal, and the overall complexity essentially depends on the amount of known information and the amount of desired information. The use of singularities of a particular standard eases the detection, but implies duplicating the detector for each targeted standard. On the other hand, blind techniques are hard to implement and offer poor information on the signal quality. Let us now list the most useful techniques. Energy-based signal detection -Usually, standard detectors use power detection methods to distinguish the offset of energy characteristic of the presence of a signal. The detection is based on some function of the received samples which is compared to a threshold. Energy detectors are often used due to their simplicity and good performance, but they show limitations in the case of spread signals and produce an important level of false detections.
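As a minimal illustration of such an energy detector (a sketch, not taken from this paper; the window length, noise power and threshold below are arbitrary assumptions), a per-window power test can be written as:

    import numpy as np

    def energy_detect(x, window=256, threshold=2.0, noise_power=1.0):
        # Flag every window whose average power exceeds threshold * noise_power.
        # x: complex baseband samples; returns one boolean flag per window.
        n_win = len(x) // window
        flags = np.empty(n_win, dtype=bool)
        for k in range(n_win):
            seg = x[k * window:(k + 1) * window]
            avg_power = np.mean(np.abs(seg) ** 2)   # test statistic
            flags[k] = avg_power > threshold * noise_power
        return flags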
Matched filter and correlation detection method -
The cross-correlation is a measure of similarity of two signals, commonly used to find features in an unknown signal by comparing it to a known one. If a signal is correlated with itself (autocorrelation), then the maximum value of the correlation is found at a time shift of 0. If this maximum is above a known threshold, the detection is proved. The most widely used technique is a matched filter and correlation detector that correlates the received signal with known periodic codes, such as those inserted in the 802.11b preamble. Cyclostationarity detection -A cyclostationary signal is a signal which contains a hidden periodicity [START_REF] Gardner | The Cumulant Theory of Cyclostationary Time-Series, Part I : Foundation[END_REF][START_REF] Gardner | The Cumulant Theory of Cyclostationary Time-Series, Part II : Development and Applications[END_REF]. A time-series x(t) contains a second-order periodicity with frequency "a" if and only if there exists some stable Quadratic Time Invariant (QTI) transformation of x(t) into y(t) such that y(t) contains first-order periodicity with frequency a. The most obvious QTI transformation is a simple square modulus. Cyclostationarity detection then becomes simple periodicity detection [START_REF] Burel | Detection of Spread Spectrum Transmissions using Fluctuations of Correlation Estimators[END_REF]. Moreover, calculating the different cyclic autocorrelation functions and identifying the non-zero values is also used to reveal a cyclostationary signal and detect the cyclic period, but this process is much more complex.
Of course various combinations of these techniques could be used depending on the desired complexity. In a multi-antenna context, some benefit could also be taken from the different copies of the signal, while considering that the synchronization is not yet performed. Only once detection is proved and synchronization achieved can the full benefit of spatial diversity be obtained, as described in the following part.
Case of study
Our case study is based on the 802.11b and 802.11g standards. As said previously, when considered in a non-cooperative behaviour, these two protocols present high levels of interference in the 2.4 GHz ISM band, with strongly overlapping channels. We are then developing a multi-* SDR receiver with 4 antennas at 2.4 GHz and an 80 MHz sampling rate offering an actual 40 MHz baseband. Thus we may find several overlapping channels with both possible standards in the sampled signal. Figure 2 presents the main numerical blocks required for one arm of this receiver. The first method is based on a classical correlation of the received signal with a reference one, enhanced by a periodicity detection (CPD), allowing a good separation of both kinds of signals. This avoids the problem of defining a detection threshold. The second method is based on cyclostationarity detection [START_REF] Gardner | The Cumulant Theory of Cyclostationary Time-Series, Part II : Development and Applications[END_REF][START_REF] Gardner | The Cumulant Theory of Cyclostationary Time-Series, Part I : Foundation[END_REF]. Here the detection is based on the periodicity of higher-order moments, and can be performed in the time domain or in the frequency domain based on the nullity of these moments at cyclic frequencies. Finally, the third method considers fluctuations of second-order moment estimators [START_REF] Burel | Detection of Spread Spectrum Transmissions using Fluctuations of Correlation Estimators[END_REF], detecting the variation of those estimators corresponding to the symbol period.
For each method, an important consideration is to find a usable period of the signal depending on the waveform. As shown in figure 3, the DSSS used for 802.11b presents an 11 µs periodicity, while the OFDM preamble of 802.11g can be characterized by a 16 µs format.
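A minimal sketch of this periodicity test on complex baseband samples x sampled at rate fs, using only the two nominal periods quoted above (the prior correlation with a reference preamble and any threshold logic are left out):

    import numpy as np

    def lag_correlation(x, lag):
        # Normalized correlation between the signal and a copy of itself delayed by 'lag' samples.
        a, b = x[lag:], x[:-lag]
        return np.abs(np.vdot(a, b)) / np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)

    def guess_waveform(x, fs):
        # Compare the 11 us (802.11b DSSS) and 16 us (802.11g preamble) periodicities
        # and return the more likely waveform together with its correlation score.
        lag_b = int(round(11e-6 * fs))
        lag_g = int(round(16e-6 * fs))
        c_b = lag_correlation(x, lag_b)
        c_g = lag_correlation(x, lag_g)
        return ("802.11b", c_b) if c_b > c_g else ("802.11g", c_g)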
Results
To compare those methods, an indoor environment was simulated with 802.11b and 802.11g frames emitted, and one or two antennas at the receiver. AWGN or multipath channels were considered, with an interferer along seven different overlapping channels. The results presented in figure 4 are quite equivalent in terms of performance, but an important difference is the complexity (Table 1). The CPD is the most robust against noise or interference with a low complexity, but it cannot indicate which mode is present if both 802.11b and 802.11g standards are received. Frequency cyclostationarity and estimator detection characterize the quality of each signal with relatively good thresholds (ratio versus interferer for a given SNR in figure 4). The estimator method is less complex but also less accurate. Finally, the use of multi-antenna capabilities through the SMI algorithm notably enhances the performance of such methods with only two antennas.
Conclusions
The SDR concept combined with multi-channel and SIMO processing in the same terminal is a very promising way to develop future wireless receivers ensuring high data rates and robust transmissions. The aim of this work was to present a global SDR system designed to combine several multi-* capabilities in a single receiver. We then presented the performance of different detection techniques used to choose the channels of interest and characterize the standard to deal with. The CPD is the most robust against noise or interference with a low complexity, while estimator detection characterizes the quality of each signal with relatively good thresholds. Obviously, combining multiple antennas enhances the performance of such methods at a modest additional computation cost. Ongoing work deals with further analysis and estimation of WLAN performance in a multi-channel configuration. Studies on the implementation of more efficient multi-antenna algorithms allowing better interference mitigation would also be of interest.
Figure 1: General architecture of a multi-* SDR receiver
Figure 2: General structure of the software WLAN receiver
As shown in this figure, the first block consists of the characterization of potentially interesting signals. We then estimate the cost-performance trade-off of different detection methods.
Figure 3: Frame structure for 802.11b (upper) and 802.11g (lower)
|
00412059 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2009 | https://inria.hal.science/inria-00412059/file/22.pdf | I Burciu
email: [email protected]
G Villemaud
J Verdier
Multiband Simultaneous Reception Front-End with Adaptive Mismatches Correction Algorithm
Keywords: multistandard simultaneous reception, adaptive systems, least mean square methods I
This paper addresses the architecture of multistandard simultaneous reception receivers and aims at improving the performance-power-complexity trade-off of the front-end. To this end we propose a single front-end architecture offering lower complexity and therefore lower power consumption. In order to obtain the same performance as state-of-the-art receivers, a lightweight adaptive method is designed and implemented. It uses a mix of two digitally implemented algorithms dedicated to the correction of the front-end IQ mismatches. A case study concerning the simultaneous reception of 802.11g and UMTS signals is developed in this article.
INTRODUCTION
The multiple functionalities required of devices in the embedded wireless telecommunications domain have led to the development of several dedicated standards. Depending on the type of service implemented, a decision is made concerning the wireless telecommunication standard to use. For example, the 2G standards are used for voice communication, WiFi is used for data transfer, or a more general standard, like the 3G UMTS, is used for simultaneous data and voice transfer. Meanwhile, when implementing a multiservice simultaneous treatment device, the stack-up of dedicated standards is generally chosen in order to obtain a good power-performance trade-off.
When designing an embedded wireless telecommunications device, the main goal is a good performance-power-complexity trade-off [START_REF] Li | A System Level Energy Model and Energy-Quality Evaluation for Integrated Transceiver Front[END_REF]. The state of the art of multiband simultaneous receivers uses the technique of the dedicated front-end stack-up. If we take into account the parallelization level of this type of architecture, the interest of designing a single front-end capable of simultaneously receiving two signals becomes obvious [START_REF] Evans | Development and Simulation of a Multi-standard MIMO Transceiver (Report style)[END_REF]. Such a multiband simultaneous reception single front-end architecture was proposed in [START_REF] Burciu | A 802.11g and UMTS Simultaneous Reception Front-End Architecture using a double IQ structure[END_REF]. A comparative study between this architecture and the front-end stack-up [START_REF] Burciu | Low Power Multistandard Simultaneous Reception Architecture[END_REF] shows a power reduction of 20% in favor of the proposed architecture, as well as a complexity gain due to the use of fewer components (image rejection filters and frequency synthesizers). Meanwhile, while evaluating the performance of the proposed architecture, we observe an important sensitivity to the IQ mismatches of the orthogonal translations [START_REF] Traverso | Decision Directed Channel Estimation and High I/Q Imbalance Compensation in OFDM Receivers[END_REF] [START_REF] Rudell | Frequency Translation Techniques for High-Integration High-Selectivity Multi-Standard Wireless Communication Systems[END_REF]. In fact, as shown in [START_REF] Burciu | A 802.11g and UMTS Simultaneous Reception Front-End Architecture using a double IQ structure[END_REF], for a level of IQ mismatches going from zero to a realistic level [7] (1° of phase imbalance and 0.3 dB of gain imbalance), the signal quality can be degraded up to six times.
In order to mitigate the influence of the IQ mismatches on the reception quality of the useful signal, an adaptive digital algorithm was developed. This method is context-aware and is based on a mix between a lightweight iterative algorithm and a more complex SMI (Sample Matrix Inversion) algorithm [START_REF] Gantz | Convergence of the SMI and the diagonally loaded SMI algorithms with weak interference[END_REF]. By mixing these two methods, we take advantage of the low consumption of the LMS (Least Mean Squares) iterative algorithm [START_REF] Widrow | The complex LMS algorithm[END_REF] and of the fast convergence of the more power-hungry SMI. The results show a perfect mitigation of the effects of the IQ mismatches by the adaptive algorithm. In other words, when integrating this algorithm, the proposed architecture has the same performance as the front-end stack-up. This paper is composed of three parts. Following this introduction, section II describes the double IQ multistandard front-end architecture. Section III details the IQ mismatches and their impact on the quality of reception when using a double IQ front-end architecture. It is also dedicated to the implementation of the adaptive mismatch correction algorithm and to the results of the entire multiband simultaneous reception structure. Finally, conclusions of this study are drawn.
II. UNIQUE FRONT-END DEDICATED TO THE MULTIBAND SIMULTANEOUS RECEPTION
A. Double orthogonal translation front-end architecture
In wireless telecommunications, the integration of IQ baseband translation structures in the receiver chain has become a common procedure [START_REF] Traverso | Decision Directed Channel Estimation and High I/Q Imbalance Compensation in OFDM Receivers[END_REF]. It is generally used in order to reduce the bandwidth of the baseband signals treated by the ADC. The orthogonal frequency translation is also used to eliminate the image frequency signal during the translation steps of heterodyne front-end architectures [START_REF] Rudell | Frequency Translation Techniques for High-Integration High-Selectivity Multi-Standard Wireless Communication Systems[END_REF]. This image frequency rejection technique consists in using two orthogonal frequency translation stages followed by a signal processing step. It relies on the orthogonalization of the useful signal and the image frequency band signal during the translation from the RF to the baseband domain. Even though the spectrums of the two signals are completely overlapped after the first frequency translation, the orthogonalization allows the baseband processing to theoretically eliminate the image frequency component while reconstructing the useful one. Starting from this monostandard image rejection architecture, the double orthogonal translation technique is implemented in a novel multistandard simultaneous reception architecture [START_REF] Burciu | A 802.11g and UMTS Simultaneous Reception Front-End Architecture using a double IQ structure[END_REF]. The standards chosen for the case study are WLAN (802.11g) and WCDMA-FDD, representative of the OFDM and CDMA techniques and of the severe constraints they impose. Several simulations of the structure presented in Fig. 1 were performed using the ADS software provided by Agilent Technologies [START_REF] Burciu | A 802.11g and UMTS Simultaneous Reception Front-End Architecture using a double IQ structure[END_REF]. One of these series of simulations concerns the BER (Bit Error Rate) evolution of the two standards when simultaneously received either by a structure using the heterodyne front-end stack-up architecture or by the proposed double IQ architecture. The blocks used during the simulations have the same typical metrics in both cases. The performance of the two architectures during the simultaneous reception of the two standards is almost identical. However, these simulations do not take into account the orthogonal mismatches of the IQ translation blocks. This sensitive issue concerning the mitigation of the IQ mismatch impact is treated in depth in section III.
B. Comparative power-complexity study
One of the most important issues when designing a radiofrequency front-end is the performance-power-complexity trade-off [START_REF] Li | A System Level Energy Model and Energy-Quality Evaluation for Integrated Transceiver Front[END_REF]. This section is therefore dedicated to a comparative overall power-complexity evaluation between the heterodyne front-end stack-up and the proposed architecture. Table 1 presents the results of a bibliographic study concerning the state of the art of the blocks used by the two structures [START_REF] Burciu | Low Power Multistandard Simultaneous Reception Architecture[END_REF]. It summarizes the number of elements used by each structure, as well as their individual power consumption along with the supply voltage. We can then conclude that the proposed architecture needs fewer components than the front-end stack-up, as it does not need image rejection filters and it uses half as many frequency synthesizers. Therefore, the complexity comparison is favorable to the proposed structure, especially because the image rejection filters are not on-chip integrated elements. For our case study and for the power consumption levels presented in Table 1, the overall consumption comparison shows that our structure consumes 216 mW while the state-of-the-art architecture uses 284 mW. This means a 20% gain in favor of the single front-end structure assessed in this paper. It can be observed that the power reduction comes essentially from the use of half as many frequency synthesizers, while using the same number of other components.
III. ADAPTIVE ALGORITHM DEDICATED TO THE CORRECTION OF ORTHOGONAL MISMATCHES
This section is addressing the impact of the orthogonal mismatches on the quality of the useful signal in a multiband double orthogonal translation structure. It also presents a dedicated adaptive MMSE (Minimum Mean Square Error) algorithm used to mitigate this impact.
A. IQ mismatches
The double orthogonal translation technique allows a theoretically perfect rejection of the image band signal in the ideal case where the quadrature-mounted mixers are perfectly matched -no phase or gain mismatch. Nevertheless, design and layout defects, such as different line lengths between the two branches and non-identical gains of the mixers, generate phase and gain mismatches respectively [START_REF] Traverso | Decision Directed Channel Estimation and High I/Q Imbalance Compensation in OFDM Receivers[END_REF]. Therefore, in a realistic scenario where the gain and the phase mismatch can reach 0.3 dB and 1 degree respectively [7], the image band signal is not completely rejected. In fact the metric used to quantify this rejection, the image rejection ratio (IRR), depends on the gain and phase mismatches between the two branches of the IQ translation structures [START_REF] Rudell | Frequency Translation Techniques for High-Integration High-Selectivity Multi-Standard Wireless Communication Systems[END_REF]. While evaluating the IRR of the double orthogonal translation, we choose to ignore the IQ mismatches of the block used to translate the signal into the baseband domain because of the low frequency of its input signal. Supposing that the first IQ stage has a gain mismatch ΔA and a phase mismatch Δθ, the final IRR can be modeled by the equation:
\mathrm{IRR}(\mathrm{dB}) = 10 \log_{10}\!\left[ \frac{1 + (1+\Delta A)^{2} + 2(1+\Delta A)\cos(\Delta\theta)}{1 + (1+\Delta A)^{2} - 2(1+\Delta A)\cos(\Delta\theta)} \right] \quad (1)
For phase and gain mismatch levels of 1° and 0.3 dB respectively, the theoretical IRR is 28.97 dB. In the following we assume that the mismatches of the orthogonal translation blocks integrated in the multiband simultaneous reception architecture can reach this level. Therefore the 28.97 dB of IRR represents the minimum rejection that the complementary signal undergoes when one of the signals is received. For our case study -simultaneous reception of UMTS and WLAN -this level of rejection is clearly not sufficient. In fact, the worst-case scenario implies a WLAN signal with a -80 dBm power level and a UMTS signal with a -30 dBm power level at the antenna. For this case we consider an automatic gain control (AGC) stage that amplifies the signal on the WLAN-dedicated branch by 30 dB and the signal on the UMTS-dedicated branch by 0 dB. Under these conditions, the WLAN quality of service imposes that the front-end reject the complementary UMTS signal below the power level of the thermal noise on the WLAN-dedicated branch. This means an IRR of 41 dB that has to be realized by the double orthogonal translation structure in order to have the same performance as the front-end stack-up architecture. For the scenario presented above, several simulations show a normalized WLAN BER that can go from 1 to 6 when the phase and gain mismatches go from 0 to 1° and 0.3 dB respectively [START_REF] Burciu | A 802.11g and UMTS Simultaneous Reception Front-End Architecture using a double IQ structure[END_REF].
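As an illustration, equation (1) can be transcribed directly as follows, with ΔA taken as a linear (dimensionless) gain imbalance and Δθ in radians; converting a mismatch specified in dB into ΔA is left to the caller:

    import numpy as np

    def irr_db(delta_a, delta_theta):
        # Image rejection ratio of one IQ stage, direct transcription of equation (1).
        g = 1.0 + delta_a                       # linear gain imbalance
        num = 1.0 + g**2 + 2.0 * g * np.cos(delta_theta)
        den = 1.0 + g**2 - 2.0 * g * np.cos(delta_theta)
        return 10.0 * np.log10(num / den)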
B. Adaptive digital Algorithm
In order to mitigate the influence of the IQ mismatches on the quality of the signals processed by the proposed receiver structure, a digital adaptive method has been implemented. It is composed of a mix between a low-power iterative LMS algorithm and a more power-hungry SMI algorithm. The scenario considered here involves continuous reception of a WLAN signal while the UMTS signal at the antenna has a random power level. It is also assumed that the IQ mismatches vary slowly during the reception.
Based on (1) and on the system model presented in [START_REF] Burciu | A 802.11g and UMTS Simultaneous Reception Front-End Architecture using a double IQ structure[END_REF] The α/β ratio is directly proportional with the IRR, as it represents the attenuation of the image band signal compared to the useful baseband signal. In the followings, we choose to focus on the reception of the WLAN signal and to consider the UMTS signal as interference, the UMTS dedicated method being analog to that used for the WLAN. For this study case, the adaptive correction method is estimating β by a weight w.
Figure 1. Double orthogonal frequency translation front-end dedicated to the multistandard simultaneous reception.
This new architecture considers the image band signal as a useful signal. In order to fulfill this condition, the local oscillator used during the first of the two orthogonal translations is appropriately dimensioned. Its frequency is set in such a manner that each of the two useful signals has a spectrum that occupies the image frequency band of the other. The architecture and the spectrum evolution of such a receiver, able to treat two standards simultaneously, are presented in Fig. 1. As can be observed, the parallelization of the input stages imposes the use of two dedicated antennas, two dedicated RF band filters and two dedicated LNAs. The gain control stage is realized by the input stages, each LNA being dedicated to the gain control of one of the signals. Once separately filtered and amplified, the two signals are added. The output is then processed by a double orthogonal translation structure. As the frequency of the first oscillator is chosen in such a manner that each of the two signals occupies a spectrum in the image band of the other, a complete overlapping of the spectrums can be observed in the intermediate frequency domain. After the second orthogonal frequency translation and after the digitalization of the four resulting signals, two basic processing chains are implemented. Each of them reconstructs one of the two useful signals, while rejecting the other.
Figure 2. Digital context-aware method used to mitigate the influence of IQ mismatches in a double orthogonal translation receiver.
, the two signals s BB WLAN and s BB UMTS obtained after the digital demodulation presented in Fig. 1 can each be modeled as a linear combination, weighted by two complex coefficients α and β, of the useful signal and of the complementary (image band) signal, where s RF WLAN and s RF UMTS are the baseband translations of the RF signals at the output of the automatic gain control stages. The α and β coefficients depend directly on the gain mismatch ΔA and on the phase mismatch Δθ.
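One common parameterization of these coefficients, given here as an assumption consistent with equation (1) rather than as the exact expressions of the original model, is

\alpha = \tfrac{1}{2}\left[1 + (1+\Delta A)\, e^{-j\Delta\theta}\right], \qquad \beta = \tfrac{1}{2}\left[1 - (1+\Delta A)\, e^{+j\Delta\theta}\right],

so that the rejection of the complementary signal is |\alpha/\beta|^{2}, which reproduces the IRR of equation (1).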
Figure 3. WLAN BER evolution during a multiband simultaneous reception for different values of the gain control and for two configurations depending on the implementation of the adaptive IQ mismatch correction method.
TABLE I. BASIC ELEMENTS USED BY THE TWO ARCHITECTURES

Element                       Stack-up (qty)   Double IQ (qty)   Power (mW) // Supply (V)
LNA UMTS                      1                1                 7.2 // 1.8
LNA WLAN                      1                1                 8 // 1
RF Filter                     2                2                 -
IF Filter                     2                0                 -
Mixer                         6                6                 5.6 // -
RF frequency synthesizer      2                1                 42 // 3
IF frequency synthesizer      2                1                 20 // -
Baseband amplifier WLAN       2                4                 10 // -
Baseband amplifier UMTS       2                0                 5 // -
ADC WLAN                      1                4                 12 // 2.5
ADC UMTS                      2                0                 11 // 1.8
ACKNOWLEDGMENT
The authors wish to acknowledge the assistance and support of the Orange Labs Grenoble.
In order to do this estimation it uses the two signals from the output of the digital demultiplexing stage and a known training sequence -two long preamble symbols of the WLAN signal. Once the estimation is finished, the weight is multiplied with the s BB UMTS signal and the result is subtracted from s BB WLAN , as shown in Fig. 2. In this manner, the interfering s RF UMTS component of the s BB WLAN signal becomes insignificant.
The estimation is realized by a context-aware method using either an LMS or an SMI algorithm [START_REF] Gantz | Convergence of the SMI and the diagonally loaded SMI algorithms with weak interference[END_REF] [START_REF] Widrow | The complex LMS algorithm[END_REF]. The LMS algorithm is an iterative method that uses the MMSE technique in order to minimize the difference between the received signal and a training sequence. For our case study, each iteration involves the operations described below.
In these operations, s REF WLAN is the training sequence of the WLAN signal and μ is the algorithm step size. A trade-off has to be made when choosing μ, because a large step size leads to a poor estimation precision while a small one leads to a slow convergence of the algorithm. Simulations show that the algorithm manages to mitigate the influence of IQ impairments. In addition, for a continuous WLAN and UMTS simultaneous reception, the LMS manages to adapt to the slow variation of the IQ impairments.
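A minimal sketch of one such complex LMS iteration, assuming the standard stochastic-gradient update (the exact sign and normalization conventions used in the actual implementation are not restated here):

    import numpy as np

    def lms_step(w, s_wlan_k, s_umts_k, s_ref_k, mu):
        # One complex LMS iteration for the correction weight w.
        y = s_wlan_k - w * s_umts_k              # corrected WLAN sample with the current weight
        e = y - s_ref_k                          # residual error against the known training sample
        w_next = w + mu * e * np.conj(s_umts_k)  # steepest-descent correction of the weight
        return w_next, e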
The major drawback of this algorithm is its poor estimation precision when the complementary UMTS signal is absent or has a weak power level. This is not a major problem for that input power level in itself. The real inconvenience is that, if the power level on the UMTS-dedicated branch changes from -107 dBm to a significant level, the LMS algorithm has to converge once again in order to offer a good precision. Simulations show that, in order to converge to an estimation of β allowing a supplementary 20 dB of IRR, the algorithm needs up to 10000 samples. Knowing that the two WLAN preamble symbols provide 128 samples of training sequence per frame, it takes up to 80 frames for the algorithm to reach a sufficient precision. We conclude that this algorithm can provide an adaptive mitigation of the influence of the IQ impairments, but it cannot manage an arbitrary power variation of the complementary signal.
In order to overcome this sensitivity of the LMS algorithm to the absence of signal at the output of the UMTS-dedicated branch, a solution is to use this adaptive algorithm only when the complementary signal s bb UMTS has a certain power level. But this means that during the absence of the UMTS signal, the algorithm cannot adapt the w estimation to the variation of the IQ mismatches. Therefore, each time the UMTS signal power changes from an insignificant to a consistent level, the LMS has to converge in order to evaluate once again the IQ mismatches that could have changed during the absence of the UMTS signal. The solution that has been chosen in order to adapt to this fluctuating UMTS signal power level is to use an SMI algorithm [START_REF] Gantz | Convergence of the SMI and the diagonally loaded SMI algorithms with weak interference[END_REF]. The advantage of this algorithm is its estimation performance when using a relatively small training sequence -128 samples of the two WLAN preamble symbols for our case study. Compared to the continuous estimation approach of the LMS algorithm, the SMI has a block-adaptive approach. Instead of using an iterative approach in order to estimate w, it uses the entire training sequence for a matrix inversion operation. Simulations show that a training sequence of 128 samples is sufficient for the SMI algorithm in order to realize a supplementary 20 dB rejection of the UMTS complementary signal. However, the main drawback of this type of algorithm is its complexity and power consumption compared to the LMS. Consequently, the optimum solution for an adaptive IQ mismatch correction algorithm is a context-aware method depending on the power level P UMTS of s BB UMTS, as summarized below and sketched after the list:
• If the current P UMTS is higher than a chosen detection level, the decision on which algorithm to activate depends on the P UMTS level of the previous frame:
o If it was smaller than the detection level, the SMI is activated in order to find the optimum w weight using a single 128-sample training sequence.
o If it was bigger than the detection level, the LMS algorithm is activated in order to be able to adapt to the IQ mismatches slow variation.
• If P UMTS is smaller than the trigger level, neither of the two algorithms is activated to estimate the w weight.
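A sketch of this decision logic, with a hypothetical detection threshold and a scalar least-squares weight estimate standing in for the SMI step (assumed forms, not the exact Matlab implementation used here):

    import numpy as np

    def smi_weight(s_wlan_train, s_umts_train, s_ref_train):
        # Block (least-squares) estimate of w over one 128-sample training sequence.
        d = s_wlan_train - s_ref_train             # interference part to be cancelled
        u = s_umts_train
        return np.vdot(u, d) / np.vdot(u, u).real  # scalar closed-form solution

    def update_weight(w, p_umts, p_umts_prev, detection_level,
                      s_wlan_train, s_umts_train, s_ref_train, mu=1e-3):
        # Context-aware choice between SMI (re-acquisition) and LMS (tracking).
        if p_umts < detection_level:
            return w                               # no significant interferer: keep the previous weight
        if p_umts_prev < detection_level:
            return smi_weight(s_wlan_train, s_umts_train, s_ref_train)  # fast re-acquisition
        for s_w, s_u, s_r in zip(s_wlan_train, s_umts_train, s_ref_train):
            y = s_w - w * s_u
            w = w + mu * (y - s_r) * np.conj(s_u)  # LMS tracking over the training block
        return w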
This context-aware adaptive method was implemented using Matlab software. In order to obtain more meaningful results, it was incorporated in a co-simulation platform which also includes the UMTS and WLAN transmission sources, as well as the model of the analog double orthogonal front-end. This platform is composed of two E4438C Signal Generators and a 89600 Vector Signal Analyzer, and it uses the Advanced Design System software, all provided by Agilent Technologies [10]. The measurements were made using a line-of-sight channel close to AWGN conditions. During this measurement campaign we focused on the WLAN reception while the UMTS signal is considered as interfering. In order to validate the context awareness of the adaptive method, the source generating the interfering signal is arbitrarily turned on. The power level of this source is chosen in such a manner that the UMTS interfering signal has a power level of -30 dBm at the input of the receiver. Fig. 3 presents several BER evolutions as a function of the WLAN signal's SNR (Signal to Noise Ratio). These results reveal that, for different levels of the gain control and therefore of the interfering signal, the influence of the IQ mismatches is mitigated by the digital adaptive method. In fact the performance of a single WLAN-dedicated front-end and that of the proposed multiband simultaneous reception structure are practically identical when the adaptive method is used. At the same time, we can observe a shift between the curves characterising the BER evolution when the adaptive method is implemented and when the multiband receiving structure does not use any digital correction method. This shift can go up to 20 dB, depending on the attenuation of the UMTS interfering signal by the gain control stage.
Finally the proposed adaptive method manages to offer up to a supplementary 20 dB of the complementary signal rejection when implemented in a multiband double orthogonal translation structure. In other words, our structure has the same performance as the front-end stack-up structure [START_REF] Evans | Development and Simulation of a Multi-standard MIMO Transceiver (Report style)[END_REF], independently of the IQ mismatches.
IV. CONCLUSIONS
This article is focused on a novel context-aware adaptive method dedicated to the mitigation of the IQ mismatch impact in a multiband double orthogonal translation front-end. The theoretical results related to this digital processing technique were validated by measurement. During this measurement campaign, a comparative performance study was carried out between the front-end stack-up receiver and the single front-end multiband receiver integrating this digital adaptive method. For this purpose, the radiofrequency platform integrates a real communication channel and realistic models of the two multiband simultaneous reception front-ends compared here. The measured reception performances are practically identical for the two structures. This implies that, when integrating the adaptive method, the proposed single front-end architecture has the same performance as the dedicated front-end stack-up. If we also take into account the complexity gain as well as the 20% power reduction, the proposed receiver structure offers a significantly improved performance-consumption-complexity trade-off compared to the state of the art of multistandard simultaneous reception architectures.
The follow-up of this work will consist of a power-performance study concerning the optimization of the length of the training sequence used by the SMI algorithm. |
04120540 | en | [
"spi.fluid",
"chim",
"spi.gproc"
] | 2024/03/04 16:41:26 | 2023 | https://univ-pau.hal.science/hal-04120540/file/Manuscript.pdf | Jawad El Ghazouani
Mohamed Saidoun
Frédéric Tort
Jean-Luc Daridon
email: [email protected]
Experimental evaluation method of asphaltene deposition inhibitor's efficacity at atmospheric pressure using a fully immersed Quartz Crystal Resonator, centrifugation and optical microscopy techniques
Keywords: Asphaltenes, Quartz Cristal Resonator, Centrifugation, Inhibitors, Oil, Destabilization, Deposition
The objective of this paper is to propose a multi-scale method to evaluate the effect of additives on the destabilization and deposition of asphaltenes under conditions of non-diluted crude oil samples. This study shows results of probing asphaltene deposition in presence of a chemical additive using a quartz crystal resonator fully immersed in a dead oil while n-heptane is continuously added. The results obtained show that destabilization is slightly delayed and deposited amount is significantly decreased at concentrations above 3000 ppm of asphaltene deposition inhibitors. The amount of deposited asphaltenes decreases while the inhibitor concentration is increased. In addition, the effect of the chemical is further studied by centrifugation technique to evaluate the concentration of unstable asphaltenes aggregates and the kinetic of aggregation is investigated under optical microscopy observation. Results support the results obtained by quartz crystal resonator and to rule on the evolution of the size/shape of the aggregates.
INTRODUCTION
Asphaltenes are compounds that have attracted a lot of attention in the oil industry, particularly because of the problems they cause during oil extraction, production and transportation [START_REF] Adams | Asphaltene Adsorption, a Literature Review[END_REF] .
Naturally occurring in crude oil in the form of suspended nanoaggregates [2], their destabilization is initiated by variations of oil composition, pressure or temperature, and is followed by an aggregation process. Aggregates containing unstable asphaltene molecules, identified as the source of deposits [START_REF] Katz | Nature of Asphaltic Substances[END_REF], have a complex chemical structure and follow complex destabilization mechanisms that involve multi-scale aggregation phenomena, from a few nanometers to hundreds of micrometers [START_REF] Hoepfner | The Fractal Aggregation of Asphaltenes[END_REF].
Asphaltenes are the heaviest and most polar fraction of crude oils. They contain heteroatoms (N, O, S) and heavy metals such as nickel and vanadium [START_REF] Ancheyta | Changes in Asphaltene Properties during Hydrotreating of Heavy Crudes[END_REF], which undeniably have an effect on the polarity of asphaltenes and can influence their solubility [START_REF] Kaminski | Classification of Asphaltenes via Fractionation and the Effect of Heteroatom Content on Dissolution Kinetics[END_REF]. Asphaltene molecules contain both polyaromatic cores, typically with 4-10 fused aromatic rings, and peripheral alkyl chains whose lengths range from 3 to 7 carbons [START_REF] Groenzin | Asphaltene Molecular Size and Weight by Time-Resolved Fluorescence Depolarization[END_REF].
Numerous techniques have been used to characterize the size and shape of nanoaggregates such as centrifugation [START_REF] Mostowfi | Asphaltene Nanoaggregates Studied by Centrifugation[END_REF][START_REF] Goual | On the Formation and Properties of Asphaltene Nanoaggregates and Clusters by DC-Conductivity and Centrifugation[END_REF] , static light scattering [START_REF] Eyssautier | Structure and Dynamic Properties of Colloidal Asphaltene Aggregates[END_REF] , direct-current conductivity [START_REF] Goual | On the Formation and Properties of Asphaltene Nanoaggregates and Clusters by DC-Conductivity and Centrifugation[END_REF][START_REF] Goual | Impedance Spectroscopy of Petroleum Fluids at Low Frequency[END_REF][START_REF] Zeng | Critical Nanoaggregate Concentration of Asphaltenes by Direct-Current (DC) Electrical Conductivity[END_REF] , nuclear magnetic resonance (NMR) [START_REF] Sheremata | Quantitative Molecular Representation and Sequential Optimization of Athabasca Asphaltenes[END_REF][START_REF] Lisitza | Study of Asphaltene Nanoaggregation by Nuclear Magnetic Resonance (NMR)[END_REF] , small angle X-ray scattering and neutron scattering (SAXS/SANS) [START_REF] Sheu | Self-Association of Asphaltenes[END_REF][START_REF] Eyssautier | Structure and Dynamic Properties of Colloidal Asphaltene Aggregates[END_REF][START_REF] Hoepfner | Multiscale Scattering Investigations of Asphaltene Cluster Breakup, Nanoaggregate Dissociation, and Molecular Ordering[END_REF] . These studies have converged towards an organization of polydisperse asphaltene nanoaggregates described as a stack, with an average diameter of 1-2 nm [START_REF] Mullins | Advances in Asphaltene Science and the Yen-Mullins Model[END_REF] , composed of 5 to 9 asphaltenes molecules [START_REF] Schneider | Asphaltene Molecular Size by Fluorescence Correlation Spectroscopy[END_REF][START_REF] Tanaka | Characterization of Asphaltene Aggregates Using X-Ray Diffraction and Small-Angle X-Ray Scattering[END_REF][START_REF] Wu | Laser-Based Mass Spectrometric Determination of Aggregation Numbers for Petroleum-and Coal-Derived Asphaltenes[END_REF] created by the interaction of the aromatic cores and limited by the steric repulsion generated by the peripheral alkyl chains.
A second level of aggregation was observed at higher asphaltene concentrations in toluene described as clusters that consist of less than 12 nanoaggregates and have an average diameter of 5-10 nm [START_REF] Simon | Relation between Solution and Interfacial Properties of Asphaltene Aggregates[END_REF] . The growth of self-associated asphaltene is limited to the nanoscale in good solvents, leaving them in suspension subject to Brownian motion and to hydrodynamic forces. Further flocculation of clusters is synonym to a commencing destabilization of asphaltenes.
One of the most used solutions to prevent asphaltene deposits on pipeline is to perform continuous or batch injection of chemicals. Their selection is usually performed through laboratory tests to measure their capacity to disperse unstable asphaltenes from crude oils under diluted conditions. There are many methods to measure the dispersing capacity of additives such as Asphaltene Dispersion Test (ADT) [START_REF] Juyal | Reversibility of Asphaltene Flocculation with Chemicals[END_REF][START_REF] Hashmi | Polymeric Dispersants Delay Sedimentation in Colloidal Asphaltene Suspensions[END_REF] , turbidity test [START_REF] Shadman | Effect of Dispersants on the Kinetics of Asphaltene Settling Using Turbidity Measurement Method[END_REF][START_REF] Kraiwattanawong | Effect of Asphaltene Dispersants on Aggregate Size Distribution and Growth[END_REF] , ambient pressure near-infrared (NIR) spectroscopy test [START_REF] Melendez-Alvarez | On the Evaluation of the Performance of Asphaltene Dispersants[END_REF] or centrifugation test [START_REF] Jennings | Asphaltene Inhibitor Testing: Comparison between a High Pressure Live-Fluid Deposition and Ambient Pressure Dead-Oil Asphaltene Stability Method[END_REF][START_REF] Balestrin | Direct Assessment of Inhibitor and Solvent Effects on the Deposition Mechanism of Asphaltenes in a Brazilian Crude Oil[END_REF] . The ADT and turbidity tests are performed with diluted crude oils (1-5% volume of crude oil to n-heptane) under static conditions, as opposed to undiluted and flowing oil in industrial conditions. Some other tests are focused on the deposition of unstable asphaltenes like the coupon deposition test [START_REF] Bae | Advantages of Applying a Multifaceted Approach to Asphaltene Inhibitor Selection[END_REF] , electrodeposition test [START_REF] Mena-Cervantes | Tin and Silicon Phthalocyanines Molecularly Engineered as Traceable Stabilizers of Asphaltenes[END_REF] , packed bed apparatus [START_REF] Vilas Bôas Fávero | Mechanistic Investigation of Asphaltene Deposition[END_REF] , capillary test [START_REF] Vilas Bôas Fávero | Revisiting the Flocculation Kinetics of Destabilized Asphaltenes[END_REF] or Quartz Crystal Resonator (QCR) [START_REF] Enayat | Review of the Current Laboratory Methods To Select Asphaltene Inhibitors[END_REF][START_REF] Ekholm | A Quartz Crystal Microbalance Study of the Adsorption of Asphaltenes and Resins onto a Hydrophilic Surface[END_REF] . Using piezoelectric properties of quartz, QCR disk sensors allow to evaluate mass deposition on its surfaces [START_REF]Use of the Vibrating Quartz for Thin Film Weighing and Microweighing[END_REF] in the nanometer range. For this reason, it is also commonly referred to as a QCM (quartz crystal microbalance). However, its applications are not restricted to micro weighting as quartz sensors are sensitive to any change in its surrounding fluid when it is in contact with a fluid [START_REF] Keiji Kanazawa | The Oscillation Frequency of a Quartz Resonator in Contact with Liquid[END_REF] or fully immersed in a liquid [START_REF] Cassiède | Evaluation of the Influence of a Chemical Inhibitor on Asphaltene Destabilization and Deposition Mechanisms under Atmospheric and Oil Production Conditions Using QCM and AFM Techniques[END_REF] . Therefore, it appeared to be particularly well suited to probe asphaltene destabilization by sensing both asphaltene mass deposition and bulk property changes [START_REF] Daridon | Probing Asphaltene Flocculation by a Quartz Crystal Resonator[END_REF] .
The main objective of this work is to propose a multi-scale set of laboratory tests for an efficient selection of chemical additives for asphaltene deposition reduction. For this purpose, a dead oil supplied by TotalEnergies was studied in the presence or absence of a commercial asphaltene deposition inhibitor provided by TotalEnergies Additives & Fuels Solutions. A quartz crystal disk fully immersed in the oil was used to probe asphaltene deposition during n-C7 titration, and the inhibition of deposition in the presence of the additive was evaluated at the nanoscale. In addition, a centrifugation technique was used to evaluate the effect of the inhibitor on aggregation. Finally, the detection time for the appearance of aggregates larger than 500 nm, observed with an optical microscope equipped with a SWIR (Short Wave InfraRed) camera, complements the results obtained by the quartz crystal resonator.
MATERIALS AND METHODS
Chemicals
The study was carried out with a dead oil, called oil X, provided by TotalEnergies from a field located in West Africa and already put in production. The oil was free from contamination of water, solid particles, production additives or drilling fluids. Standard properties as well as SARA analysis of the dead oil are summarized in Table 1. To conduct this analysis, a sample of dead oil was obtained directly from the bottom tank and flashed in a cold trap. The dead oil was then topped at 323 K and 20 mbar to remove light ends (C15-). The residue was diluted in dichloromethane at a concentration of 15 mg/ml, and an Iatroscan instrument was used to measure saturates, aromatics, and polar fractions (resin + asphaltenes). Saturates were eluted using n-heptane, while aromatics were eluted using a solvent composed of 25 vol% dichloromethane and 75 vol% n-heptane. The content of asphaltenes was determined by precipitation, using n-pentane as an antisolvent with a weight/volume ratio of 1:40. Before each experiment, the oil was heated to 60 °C, which is above the wax appearance temperature (WAT) of the crude, before taking each oil sample.
QCR Experiment
The technique considered here to study asphaltene destabilization and deposition using a thickness shear mode acoustic sensor is based on a titration method. The titration system, which was presented previously [START_REF] Daridon | Probing Asphaltene Flocculation by a Quartz Crystal Resonator[END_REF][START_REF] Saidoun | Revisiting Asphaltenes Instability Predictions by Probing Destabiliztion Using a Fully Immersed Quartz Crystal Resonator[END_REF][START_REF] Acevedo | Understanding Asphaltene Fraction Behavior through Combined Quartz Crystal Resonator Sensor[END_REF] is mainly composed of a stirred batch reactor working at varying volume with an inserted injection system. A schematic diagram of the full experimental setup is presented in Figure 1. The temperature-controlled titration cell is a conical glass vessel to achieve an antisolvent injection reaching concentrations between 0 and 90 % in a single run. The antisolvent used in this study is >99% pure grade n-heptane supplied by Sigma Aldrich. Antisolvent is added into the vessel at constant flow rate discharged by a peristaltic pump at constant rate chosen to avoid any local over concentration. The titrant is withdrawn from a container that has been placed on a balance. The mass change in the container is continuously recorded, allowing a direct calculation of the composition of studied system as a function of time. To avoid radial or axial gradients in composition and temperature, the mixture is continuously stirred at a fixed speed of 470 rpm.
The stirring speed is maximized while ensuring that the quartz remains fully immersed during the titration. The magnetic stirring bar is 25 mm long and has a diameter of 0.5 mm. Two electrical feedthroughs are used to plug and hold the acoustic sensor immersed in oil during the entire experiment at the bottom of the cell. The acoustic sensor is a highly polished AT-cut quartz having a fundamental frequency of 3 MHz and a diameter of 13.6 mm with gold electrodes of 6.7 mm in diameter deposited on both sides. QCR sensors are highly sensitive to their thickness, as well as to the thickness of the electrode, and particularly to their surface quality. It is completely impossible to manufacture two sensors with identical properties and therefore with the same resonance properties in air. Thus, it is preferable to use the same QCR for comparative studies. If, unfortunately, the sensor breaks during a lengthy study, it would be necessary to characterize the untreated dead oil using the new sensor before continuing the study. In case of full immersion within a Newtonian fluid, a simplified model can be used to correlate changes in resonant frequency (eq 1) and half-band half-width (eq 2) of the sensor with the properties of the surrounding system [START_REF] Johannsmann | Viscoelastic Analysis of Organic Thin Films on Quartz Resonators[END_REF][START_REF] Lucklum | Role of Mass Accumulation and Viscoelastic Film Properties for the Response of Acoustic-Wave-Based Chemical Sensors[END_REF][START_REF] Daridon | Probing Asphaltene Flocculation by a Quartz Crystal Resonator[END_REF] :
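For reference, a typical form of these relations for a resonator fully immersed in a Newtonian liquid, written here with n the overtone number, f_1 the fundamental frequency, m_a the deposited mass per unit area, ρη the density-viscosity product of the surrounding fluid, ρ_q and μ_q the density and shear modulus of quartz, and C_f an empirical roughness correction (symbols chosen here only for readability), is

\Delta f_n \approx -\frac{2\, n\, f_1^{2}}{\sqrt{\rho_q \mu_q}}\, m_a \;-\; \frac{\sqrt{n}\, f_1^{3/2}}{\sqrt{\pi\, \rho_q \mu_q}}\, \sqrt{\rho \eta} \qquad (1)

\Delta \Gamma_n \approx \frac{\sqrt{n}\, f_1^{3/2}}{\sqrt{\pi\, \rho_q \mu_q}}\, \sqrt{\rho \eta} \;+\; C_f \qquad (2)

These expressions are given as an indicative reconstruction; the exact formulations are those of the cited references.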
In this work, each of the studied solutions was prepared following a rigorously identical protocol to compare results with or without asphaltene deposition inhibitor (AI): i) 25 cm³ of the studied mixture is poured into the cell.
ii) The crude + additive mixture is left under stirring at the working temperature for 1 hour to achieve equilibrium between quartz sensor surface and the surrounding oil.
iii) The heptane is injected at constant mass flow rate of 0.25 g/min.
For the interpretation of recorded data, the shift in resonant frequency and half-band half-width are calculated and plotted as a function of either time or volume fraction of n-heptane in the mixture expressed as a function of the volume of each component:
\phi_{C_7} = \frac{V_{C_7}}{V_{C_7} + V_{oil}} \qquad (3)

The onset of asphaltene destabilization, referred to as the "onset", was estimated as the intersect of the opposed slopes before and after the observed inflexion point of the resonance frequency (Figure 2). At this point, the resonant behavior deviates from the simple dilution Kanazawa law. In practice, the frequency signal of the quartz crystal resonator appeared predominantly affected by the mass deposition of asphaltenes, if any, which explains the major change of slope of the frequency [START_REF] Daridon | Measurement of Phase Changes in Live Crude Oil Using an Acoustic Wave Sensor: Asphaltene Instability Envelope[END_REF] . Consequently, the shift in frequency plot was preferred to estimate the onset. Finally, the change of deposited mass was estimated by plotting the frequency shift as a function of the corresponding term of equation 1 and calculating its slope.
Figure 2. Determination of "onset" from resonance frequency measurements.
Centrifugation Experiment
The content of unstable asphaltenes per unit volume of crude oil at specific aging time C A (t)
was measured here by the time-resolved centrifugation method reported by Maqbool et al [START_REF] Maqbool | Revisiting Asphaltene Precipitation from Crude Oils: A Case of Neglected Kinetic Effects[END_REF] . The samples were prepared following the same protocol as presented for QCR experiments. Mixtures of non-treated oil and oil + additive A at 3000 ppm (in volume) were prepared with a content in heptane of 51.1% and 51.2% in volume respectively. After their preparation, the samples with the desired heptane composition were sealed and maintained under stirring at 470 rpm and 60°C during the aging process. Aliquots of 1.5 ml were taken over time and centrifuged with the Rotina 380R from Hettich at 15,000 rpm. The centrifugation time was fixed at 14 minutes to reach a separation efficiency cut-off size of 200 nm. The calculation method used is the one proposed by Saidoun [START_REF] Saidoun | Investigations into Asphaltenes Destabilization[END_REF] .
In this method, the supernatant is drained and recovered centrifuged pellets are collected as "wet" cakes containing unstable asphaltenes and trapped liquid solution. Pellets are washed with heptane several times, re-dispersed and re-centrifuged until the supernatant remains transparent.
Finally, the mass of dried cake is used to calculate the concentration of unstable asphaltenes aggregates larger than the cut-off size per unit volume of initial crude oil. The concentration of separated unstable asphaltenes aggregates C A (t) is plotted as a function of the aging time for different oil-heptane blend systems for the interpretation. C A (t) is expected to increase with the aging time for a fixed mixture and eventually tends to an equilibrium (plateau region).
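With notation introduced here purely for illustration, this calculation amounts to

C_A(t) = \frac{m_{dry\,cake}(t)}{V_{oil}}

where m_{dry cake}(t) is the mass of the washed and dried asphaltene cake recovered from the aliquot centrifuged after an aging time t, and V_{oil} is the volume of crude oil initially contained in that aliquot.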
SWIR microscopy experiment
Because of the darkness of the oil, it is difficult to observe the shape and size of aggregates using a standard microscope with a high magnification. Instead, a short-wave infrared (SWIR) camera coupled to a microscope was used at wavelengths where the absorption of light by heavy hydrocarbons is reduced. A custom optical assembly was used; it was developed around a video microscope unit (VMU, Mitutoyo) [START_REF] Daridon | Combined Investigations of Fluid Phase Equilibria and Fluid-Solid Phase Equilibria in Complex CO2-Crude Oil Systems under High Pressure[END_REF] paired with x50 to x100 objectives with a long working distance and correction for infrared wavelengths. A camera using an InGaAs sensor was mounted on the VMU to record images in the spectral range of 0.9 to 1.7 μm.
A sample of the oil X + additive A mixture at different additive concentrations was taken at 83 vol% of titrant and placed in a cell with two polished windows and a path length of 1 mm for immediate observation under a microscope with x50 magnification.
Detection Time Experiment
Following the procedure introduced by Maqbool et al. [START_REF] Maqbool | Revisiting Asphaltene Precipitation from Crude Oils: A Case of Neglected Kinetic Effects[END_REF] , the detection time of unstable asphaltenes in oil-heptane mixtures was measured with and without addition of additive.
Solutions were prepared with the same protocol described in earlier sections. When the desired volume fraction of heptane in the oil-heptane mixture was reached, the flask was sealed, and the mixture was kept in this condition at 60°C under constant stirring (at 470 rpm) for the time necessary to achieve the full study. Aliquots of each sample were taken at controlled aging times using a capillary and observed under a microscope with a magnification of x20.
Turbidity experiment
In addition, absorbance measurements were carried out to compare the results obtained with those of the other techniques. Asphaltenes were precipitated by adding 100 µL of crude oil sample with (or without) additive to 3 mL of n-heptane in a vial. The mixture was manually homogenized, and the absorbance evolution of the blend was measured as a function of time.
The turbidity measurements were obtained using a Cary 60 UV-Vis Spectrophotometer from Agilent using a near-infrared light source of 800 nm.
RESULTS AND DISCUSSION
The role of an asphaltene inhibitor is to slow down the deposit thickness build-up. However, the addition of an additive does not systematically lead to a positive effect for preventing asphaltene deposition. Moreover, Barcenas et al. demonstrated that increasing the amount of additive injected does not always result in a higher interaction with asphaltenes 50 . To evaluate the efficacy of an additive, laboratory tests were conducted to identify the working concentration that shows a positive effect on asphaltene destabilization and on the diffusion-driven deposition process, trying to replicate the driving forces of asphaltene destabilization in industrial operating conditions. The results of these tests are summarized below.
Turbidity experiment
The turbidity test is one of the most common approaches used in the laboratory for evaluating the effectiveness of additives, based on the dispersion efficiency of asphaltenes in the presence of chemicals. This method evaluates the ability of an additive to keep destabilized asphaltenes in suspension: the higher the absorbance, the more dispersed the asphaltenes are. The results of absorbance measurements with concentrations ranging from 10 ppm to 3000 ppm (in volume) of additives in oil/n-heptane mixtures are shown in Figure 3. From 100 ppm the absorbance is maintained at a higher value than for the untreated reference oil. The unstable asphaltene aggregates are therefore maintained in suspension in the presence of the additive over this period of one hour. The effect of the concentration above 300 ppm appears negligible. With this type of technique, where the destabilizing precipitant/oil ratios have to be higher than 90% in volume and the medium remains static during the measurement, the question of representativeness with respect to field conditions can be asked. Hence the need to resort to experiments with more realistic destabilizing precipitant/oil ratio conditions, focusing on measurement scales that more clearly discriminate the effectiveness of additives. In the QCR titration experiments, from 3000 ppm the destabilization is slightly shifted towards higher compositions, and this effect becomes more marked as the additive concentration increases. For 3000 ppm we observe a shift in destabilization of +2.2 % in volume fraction compared to the reference measurement (pure crude oil), and +8.5% and +11.4% for 10 000 ppm and 20 000 ppm respectively. These results show that more than 1000 ppm of additive A is required to shift the destabilization of asphaltenes to a higher antisolvent composition. In addition, the onset shift is greater as the concentration of additive is increased but appears to reach a plateau value near 20 000 ppm. For all samples, titration curves first decrease because of the reduction in viscosity caused by the addition of heptane (dominating dilution effect). Then the increase in dissipation is related to the destabilization of the asphaltenes and the growth of asphaltene aggregates suspended around the sensor. The quartz disk therefore detects larger and/or more objects close to its surface. This increase in dissipation is shifted towards higher heptane compositions as the AI concentration is increased. Furthermore, the amplitude of the dissipation peak increases as the additive concentration in solution is increased.
From these measurements and from equations 1 and 2 it is possible to extract the mass deposited on the quartz surface as well as the change in the density-viscosity product of the fluid in contact with the sensor. The apparent viscosity is calculated according to the following equation:
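Inverting the Kanazawa-type contribution to the dissipation (eq 2), the apparent viscosity takes a form such as

\eta_{app} = \frac{\pi\, \rho_q \mu_q\, (\Delta \Gamma_n - C_f)^2}{n\, f_1^{3}\, \rho}

where ρ is the density of the oil-heptane mixture; this expression is given as an indicative form rather than the exact equation of the original text.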
Here, the half-band half-width shift entering this expression scales with the square root of the overtone number (eq 2), and only the 3rd harmonic was used, for several reasons. Firstly, it appears that the first harmonic is particularly affected by intrinsic properties of quartz crystals such as piezoelectric stiffening and edge effects [START_REF] Cassiède | Electrical Behaviour of AT-Cut Quartz Crystal Resonators as a Function of Overtone Number[END_REF] , which means that it behaves differently from other harmonics and cannot be used. For the largest harmonics, it has been shown that inharmonic resonances interfere with the main resonance peaks [START_REF] Cassiède | Characterization of the Behaviour of a Quartz Crystal Resonator Fully Immersed in a Newtonian Liquid by Impedance Analysis[END_REF] . Finally, the 3rd harmonic has the greatest penetration length of the acoustic wave in the contacting media (eq. 5), which makes it possible to estimate the bulk viscosity-density product from the 3rd harmonic peak with less impact from the properties of the deposited layer than with higher harmonics.
\delta_n = \sqrt{\frac{\eta}{\pi\, n\, f_1\, \rho}} \qquad (5)
The density is calculated using the molar mass/molar volume ratio of the mixture during continuous addition of heptane. Theoretical viscosity change with addition of heptane was calculated using the Grunberg-Nissan type of equation [START_REF] Grunberg | Mixture Law for Viscosity[END_REF] (eq 6) assuming a binary mixture composed of crude oil + heptane.
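In its standard binary form, the Grunberg-Nissan rule reads

\ln \eta_{mix} = x_{oil} \ln \eta_{oil} + x_{C_7} \ln \eta_{C_7} + x_{oil}\, x_{C_7}\, G \qquad (6)

where the x_i denote the compositions of the two pseudo-components (mole fractions in the original formulation); whether equation 6 is applied here with mole, mass or volume fractions is not specified in this version of the text.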
In this expression, the Grunberg-Nissan parameter G was adjusted to match the apparent viscosity measured by QCR before asphaltenes destabilization.
Results in Figure 6 indicate that the aggregation of asphaltenes after heptane addition promotes a slight increase of the viscosity. A shift in the increase in dissipation is observed in presence of additive, and it is proportional to the additive concentration. The deposited mass can be evaluated by rearranging equation 1 so that the frequency shift appears as a linear function of the overtone-dependent term and by calculating the slope of the obtained straight line [START_REF] Daridon | Probing Asphaltene Flocculation by a Quartz Crystal Resonator[END_REF] . The values obtained by this method during the titration are displayed in Figure 7. Starting from a concentration of 1000 ppm of additive A, a clear decrease of the deposit compared to non-treated oil is shown.
For the purpose to evaluate and compare the positive additive effect on deposition reduction, the deposition inhibition efficiency in the presence of additive A at a volume concentration of 80 vol% was calculated according to the following equation:
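Consistent with the values reported in Table 3, including the slightly negative value at 300 ppm, this efficiency can be written as the relative reduction of the mass deposited on the sensor with respect to the untreated oil at the same heptane content:

E_{AI} = \frac{\Delta m_{deposit}^{untreated} - \Delta m_{deposit}^{treated}}{\Delta m_{deposit}^{untreated}} \times 100\,\%

This form is given as a plausible definition for the reader rather than as the authors' exact expression.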
The results are reported in Table 3. Regarding the centrifugation experiments (Figure 8), equilibrium plateau values are reached at aging times higher than 500 h: about 1.9 kg per m³ of oil in the presence of additive A compared to 7.2 kg per m³ of oil for the untreated oil.
SWIR microscopy
The titration tests carried out with the quartz crystal sensor indicated a delayed destabilization of asphaltenes, reduced mass deposition on quartz surfaces, and an increased dissipation of the signal in presence of additive. This increase in dissipation (proportional to an apparent viscosity) can reflect a change in the morphology of the flocs formed far from the onset of destabilization.
To observe this change in floc structures in the micrometer range SWIR microscopy observations were performed at the end of the titration. To make these observations comparable, a sample of titrated oil was taken instantaneously after the titration experiment and directly transferred to the glass cell of the microscope. All the samples investigated corresponded to a heptane volume fraction of 0.83. The results are shown in Figure 9. Without additive, the aggregates have a size between 25 and 45 micrometers in diameter and have a globular-like shape in this condition of sampling (Figure 9. A). A clear decrease in aggregate size is observed above 3000 ppm (Figure 9. C, D, E and F) in the same conditions of sampling. From 10 000ppm the size of aggregates does not seem to decrease anymore. Moreover, as the concentration of additive A is increased, the aggregates are less and less spherical and some aggregates with needle shapes are observed.
The reduction of the size of the aggregates observed in presence of chemical additive at this micrometer scale is very likely related to the stabilization of aggregates by amphiphilic properties of additive. Indeed, it was reported in the literature that amphiphiles with their polar head will react with the active sites of asphaltenes [START_REF] González | Peptization of Asphaltene by Various Oil Soluble Amphiphiles[END_REF][START_REF] Chang | Stabilization of Asphaltenes in Aliphatic Solvents Using Alkylbenzene-Derived Amphiphiles. 1. Effect of the Chemical Structure of Amphiphiles on Asphaltene Stabilization[END_REF][START_REF] Chang | Stabilization of Asphaltenes in Aliphatic Solvents Using Alkylbenzene-Derived Amphiphiles. 2. Study of the Asphaltene-Amphiphile Interactions and Structures Using Fourier Transform Infrared Spectroscopy and Small-Angle X-Ray Scattering Techniques[END_REF] . Once attached to the asphaltenes, the hydrophobic tail of the amphiphiles will generate a steric repulsion and thus limit the size of the asphaltene aggregates [START_REF] Chang | Stabilization of Asphaltenes in Aliphatic Solvents Using Alkylbenzene-Derived Amphiphiles. 1. Effect of the Chemical Structure of Amphiphiles on Asphaltene Stabilization[END_REF][START_REF] León | Study of the Adsorption of Alkyl Benzene-Derived Amphiphiles on Asphaltene Particles[END_REF] . The detection time curves are shown in Figure 10 for Oil X as pure reference crude oil and the same oil treated by additive A at 3000 ppm in volume. The instantaneous microscopy detection of asphaltene aggregates in Oil X + heptane was observed at 58 vol% of heptane and 61 vol% for oil treated with additive A. This result is consistent with observed minimum shift in frequency (-f) of the resonating immersed sensor; i.e. at 55.2 vol% of heptane in Oil X and 57.4 vol% of heptane treated oil X with 3000 ppm of additive A (refer to Figure 4). This way, the larger sensitivity of the quartz resonator compared to microscopy is also verified, as expected.
Indeed, the microscope observations describe the appearance of aggregates larger than 500 nm, while the quartz allows detection at the nanometer level. Nevertheless, both techniques show a delay of the same order of magnitude in the observation of the destabilization of asphaltenes in the presence of additive, at the nanoscale and at the microscale.
The presence of additive A in crude oil delays the observation of asphaltene aggregates by 1 to 2 orders of magnitudes at heptane fractions inferior to 60 vol% in oil X. Results suggest that Additive A stabilizes asphaltene aggregates by keeping them in suspension.
CONCLUSION
Raw and processed data obtained from the QCR sensor were used to compare asphaltenes destabilization and deposition during a titration of pure crude oil and crude oil with additive. It allowed to evaluate the efficacy of an AI through measurements in the nanometer scale. Results
show that additive A shifted the destabilization towards higher antisolvent compositions with an increased effect as the concentration of additive in solution is increased. From 1000 ppm onwards a decrease of the deposit with an increase of the AI concentration was observed. This effect seems to be limited between 10,000 ppm and 20,000 ppm, and it can be assumed that there is a threshold above which the increase in AI concentration no longer has a significant impact on deposition. Excessive AI increase did not negatively impact the amount of deposition and no AI self-assembly effect was observed. Moreover, the results obtained in QCR were supported by the results obtained in centrifugation where a clear decrease in concentration of destabilized particles larger than 200 nm was observed in the presence of additive. The effect of the additive on the aggregation kinetics was also highlighted by the detection time measurements with a delay in appearance of unstable asphaltenes aggregates larger than 500 nm.
These observations are consistent with the results obtained by the QCR and the effect of additives on the response of quartz to dissipation. Indeed, it has been described previously that the increase in dissipation is linked to an increase in the viscosity of the crude-heptane mixture, which is itself linked to the appearance of flocs in the environment close to the quartz disk.
Absorbance measurements showed the capacity of the additive to maintain these particles in suspension and thus to limit their size.
Increasing the concentration of additive A showed a shift in the increase of viscosity compared to the pure crude oil which is consistent with the shift in the appearance of aggregates under the microscope and shows the effect of the additive in delaying the destabilization towards higher antisolvent compositions.
This research aims at providing solutions to the industry, which is confronted with problems related to the accumulation of asphaltene deposits on the walls of pipelines but also to the accumulation of sedimented particles under static conditions. Hence, the results of this manuscript show that the laboratory evaluation of the effectiveness of an additive must address the deposition as well as the aggregation kinetics. In addition, the study of viscosity is of interest, especially for the evaluation of the effect of "flow improver" additives.
Immersed resonating sensors and centrifugation (or microscopy) methods are shown to be complementary techniques for the screening of asphaltene inhibitors.
Figure 1. Atmospheric pressure experimental titration QCR set-up.
The same quartz disk was reused in this work to allow a reliable comparison of all experiments carried out. It was cleaned between each experiment by consecutive immersions in toluene vials located in an ultrasonic cleaner. Any direct contact of the quartz surface and a solid material is avoided during the washing and drying as well as the storage of the sensor between successive experiments to limit the risk of scratching and consequently changing the resonant properties of the resonator. Cleaning quality is verified by measuring the resonant properties of the unloaded quartz disk at the same temperature before each new experiment. The cleanliness of quartz surfaces is validated only if the resonance frequency change does not exceed 10 Hz at 9 MHz. The fundamental resonance frequency as well as the geometry of the quartz was selected to allow the sensor to be operational in an extended overtone range (from 3 to 7) in full immersion in viscous oils. A network analyser was used to apply a radio frequency voltage to the quartz sensor and to measure the input port voltage reflection coefficient S 11, which is used to calculate the conductance of the quartz over the frequency range from fundamental to 7 th overtone. The resonance frequency of each overtone corresponds to the maximum of a conductance peak in the conductance spectrum. The resonance frequencies are estimated with an uncertainty of 4 Hz whereas the resonance half-band half-width (related to resonator dissipation) is determined with a maximum error of 5 Hz. The unloaded reference values are obtained from measurements in air prior to each experiment. These reference frequencies and dissipations are then used to calculate the shifts in resonant properties caused by fluid immersion.
In these expressions, the quantities involved are the overtone number, the fundamental resonance frequency of the crystal, the theoretical adsorbed or deposited mass on the quartz surface, the density-viscosity product of the surrounding fluid and an empirical correction term for the viscous friction of the fluid on the rough surfaces of the sensor. Extension of this simple model to asphaltene systems, by considering the measured mass term as the sum of a surface effect and of asphaltene deposition, makes it possible to estimate changes in both asphaltene deposited mass (per unit area) and viscosity-density product in the fluid surrounding the sensor.
The detection time corresponds to the aging time after which asphaltene flocs are first observed and confirmed to further grow by microscopy. The experiment was repeated for multiple volume fractions of antisolvent with and without additive. A plot of the detection time as a function of the volume fraction of heptane was obtained as a result the experiments. This plot is referred to as the detection time curve. It ranges from few minutes for large antisolvent fraction up to a hundred of hours for low antisolvent fraction. The lowest concentrations of antisolvent for which the detection-time is less than 5 minutes is considered as the instantaneous detection condition by microscopy.
Figure 3. Relative absorbance of Oil X in presence of additive A at different concentrations.
Figure 4. Shift of resonance frequency (3rd harmonic) recorded during experimental titration of crude oil X.
Figure 5. Shift of dissipation (3rd harmonic) recorded during experimental titration of crude oil X.
Figure 6. Apparent viscosity of the surrounding fluid calculated during experimental titration of crude oil X.
As can be inferred from Table 3, few effects were observed at low concentration of additive (300 ppm). At excessive concentrations of AI (>10 000 ppm) the effects seem to tend towards an inhibition efficiency of 90% at a volume composition of 80% in antisolvent. The subsequent working concentration was set at 3000 ppm for further evaluation of the additive because, at this concentration, the deposition inhibition efficiency is sufficiently clear and the concentration is not excessive in relation to the real use conditions in the industry.
Figure 7. Deposited mass calculated during experimental titration of crude oil X + additive A at different concentrations.
Figure 8. Concentration of unstable asphaltenes larger than 200 nm separated by centrifugation.
Figure 9. SWIR camera capture at x50 magnification of crude Oil X + additive A samples at 83 vol% of heptane.
Figure 10. Necessary aging time to detect particles by visual microscopy observations as a function of the volume fraction of heptane in heptane-oil blends: (red) non-treated Oil X, (orange) Oil X + additive A at 3000 ppmv.
Table 2. Properties of Additive A
Property | Value | Technique/Apparatus
Density (kg/m³) @ 15 °C | 930 | Viscometer Stabinger SVM 3000 - ASTM D7042
Refractive Index @ 60 °C | 1.504 | Mettler Toledo RM40
Table 3. Deposition inhibition efficiency of oil treated by additive A at 80 vol% heptane composition.
Additive A concentration (ppm) | Deposition inhibition efficiency
300 | -5.91 %
1000 | 32.84 %
3000 | 60.96 %
ACKNOWLEDGMENTS
We would like to acknowledge the following people, who have contributed to this project, for the fruitful discussions: Nelson Acevedo, Nicolas Passade-Boupat and Moussa Kane.
The authors would also like to thank TotalEnergies for providing the oil sample and TotalEnergies Additives & Fuels Solutions for supplying the chemical additive.
AUTHOR INFORMATION
Corresponding Author
*Jean-Luc Daridon -
TotalEnergies,
, 64000 Pau, France;
orcid.org/0000-0002-0522-0075.
E-mail : [email protected].
Notes
The authors declare no competing financial interest. |
04120663 | en | [
"info"
] | 2024/03/04 16:41:26 | 2022 | https://inria.hal.science/hal-04120663/file/Making%20with%20Data%20-%20Digital%20Production%20-%20section%20introduction.pdf | Yvonne Jansen
Her Ph.D. thesis focused on physical data representations, and her
Digital production changed the practice of physical data representations tremendously. The first section of this book includes works which relied on manual crafting which requires mastery of "traditional" tools, such as chisels to do woodcarving (like Adrien Segal's Snow Water Equivalent), or machines such as pipe benders or various saws, to bend and cut materials into the needed shape (like Loren Madsen's Endings). This section shows how the emergence of digital production shifted mastery from tools to the software controlling the machines. Included is a selection of five works which relied to different degrees on digital production mechanisms to produce the desired physical artifacts.
While one may be tempted to think that not much mastery is required when machines do all the work, creating physical data representations using digital production is not as straightforward as it may seem. Yes, machines do the actual carving, bending, or cutting, and in many cases probably more accurately than a human would be able to, but telling a machine what to do requires a different kind of ingenuity: one has to generate the unique code that can tell a machine how to make the specific data physicalization one has in mind. And in most cases there is not an app for that. Sometimes, even the machine is not something one can just buy in a store. For example, Figure 1 shows a printed 3D model of a mathematical object (a D2 dihedral symmetry group) created with a home-built custom 3D printer which uses sugar as printing material and was dubbed CandyFab4000 by its inventors. Finally, there is more than one way to go about physicalizing particular data and often many paths need to be explored and many decisions made until finally an object emerges. Due to the intrinsic physics of a desired geometry and a chosen fabrication process, the resulting artifact does not necessarily come out as one expected (as Stephen Barrass discusses in his chapter on the Chemo Singing Bowl). In most cases, some additional manual intervention is needed, most commonly to assemble or paint what comes out of a machine. The five chapters in this section all rely on digital production, and each one tells a different story combining different levels of manual and digital craft. A high level of manual craft is involved in pieces where machines cut from dozens to hundreds or sometimes even thousands of components into a particular shape which then need to be (often manually) assembled. Machines can produce very accurately an abundant number of pieces which then need to be put together to make sense and show the actual data, which can be a very painstaking process. Ekene Ijeona's chapter introducing his Wage Islands sculpture is an example of such an approach, and the artist details the minutious effort that went into it. Volker Schweisfurth also describes how he cut, assembled, and glued 3D printed elements to create Data That Feels Gravity. Another example is the Emoto data sculpture created by Moritz Stefaner (whose Perpetual Plastic project appears later in the book), Drew Hemment and Studio NAND which shows timelines of Twitter sentiments captured during the Olympic summer games in 2012. Figure 2 shows a closeup of the sculpture and the detailed peaks which represent the volume of tweets about the 2014 Olympics grouped into columns for each day and separated into rows by the level of positivity of the sentiment expressed in a Tweet (positive to negative). A CNC machine made sure that the sculpture accurately showed the data, but much preparatory work went into the collection and processing of the data, the editing of the 3D design files, and the final assembly of the different pieces. A differently laborious process involving many different steps and combinations of support structures and data-encoding parts is described in detail in the MINN_LAB Design Collective's chapter detailing their Orbacles project.
The above are very specific examples, where a particular type of physicalization was thought up explicitly for a specific dataset. On the other extreme, one can find examples where the digital production process is almost entirely automated such that anyone, without prior experience in digital design or fabrication, could insert their data, for example, from their personal activity tracker, and get a 3D printed artifact at the end, whose shape is entirely defined by their own data. Figure 3 shows multiple examples of pre-defined designs, which are parameterized such that someone's data determines specific features, like the length or width of spikes. The motivation behind these is to make physicalizations accessible to anyone who would like to see their data in physical form, for instance, as a memento.
Physicalizations can also simply serve personal curiosity, such as the ones shown in Figure 4, made by Nathan Yau, who generated physical charts from shot position data of key players of the NBA in 2018. He wrote code to transform NBA data from a player into a physical shape and-similar to the physicalizations of personal data above-ran the same code for different players to generate different physicalizations that can be physically organized and compared. While many physicalizations primarily encode data solely in their geometrical shape, as shown in the above figures, some take on the additional aim to encode data in non-visual properties of the final physical object. For instance, Stephen Barrass' chapter describing his Chemo Singing Bowl sculpture examines how objects can sonify data. The result is meant to be played like an instrument so that it makes the encoded data perceivable. Many different kinds of physical properties can be adjusted based on the data used to generate the object, including properties such as mass, floatability and even taste. For instance, in their chapter Data Seeds, Nick Dulake and Ian Gwilt detail how adjusting the shape and mass of artificial seeds results in objects falling down at different speeds. The relationship between shape and speed to fall is complex and being able to encode data in such ways depends very much on the availability of mathematical models to transform the data. Realizing such data physicalizations would not be possible without the support of digital production.
◀
Figure 1 A 3D model called Sugar Soliton created by Windell Oskay after the original model by Batsheba Grossman printed using sugar which turned partially into caramel giving it an appearance of a coral | Credit: CC-BY 2.0 Windell H. Oskay, ww.evilmadscientist.com. ▼ Figure 2 Emoto data sculpture showing timelines of Twitter sentiments during the Olympic games in 2012. It consisted of 17 plates milled with a CNC machine. | Credit: Moritz Stefaner, Drew Hemment, and Studio NAND. www.emoto2012.org
▲
Figure 5 Nathan Yau's physical charts showing per-player NBA shot selection data. | Credit: Nathan Yau. |
04120747 | en | [
"info"
] | 2024/03/04 16:41:26 | 2021 | https://hal.science/hal-04120747/file/H_STREAM_PAPER.pdf | Genoveva Vargas-Solar
email: [email protected]
Javier A Espinosa-Oviedo
email: [email protected]
H-STREAM: Composing Microservices for Enacting Stream and Histories Analytics Pipelines
Keywords: Stream processing, Cloud, microservices
This paper introduces H-STREAM, a stream/data histories processing pipelines enactment engine. H-STREAM is a framework that proposes microservices to support the analytics of streams produced by systems collecting data stemming from IoT (Internet of Things) environments. Microservices implement operators that can be composed for implementing specific analytics pipelines as queries using a declarative language. Queries (i.e., microservices compositions) can synchronise online streams and histories to provide a continuous and evolving understanding of the environments they come from. H-STREAM microservices can be deployed on top of stream processing systems and data storage backends, tuned according to the number of things producing streams, the pace at which they produce them, and the physical computing resources available for continuously processing and delivering them to consumers. The paper summarises results of an experimental setting for studying H-STREAM scale-up possibilities according to the number of thins and production rate. Then it shows a proof of concept of H-STREAM in a smart cities scenario.
Introduction
The Internet of Things (IoT) is the network of physical devices enabling objects to connect and exchange data. IoT enables the construction of smart environments (grids, homes, and cities) where streams are produced at different paces. The status of these environments can be observed, archived and analysed online, managing and processing streams. To have a thorough understanding, modelling and predicting smart environments behaviour, processing, and analytics tasks must combine streams and persistent historical data. For example, at "9:00, start computing the average number of people entering a shopping mall every morning and identify points of interest in the mall according to peoples flow in the last month". Answering this query is challenging because it is necessary to determine: (i) the streams that must be discarded or persist into histories (do we store the average/hour or every event representing a person entering the mall? or the person visiting an area in the mall?); (ii) how to properly combine histories with streams within analytics tasks (do we combine the whole history with the average observation/hour? do we compute POIs of the last month and correlate them with new computed POIs observed online?). Existing stream platforms provide efficient solutions for collecting and processing streams with parallel execution backends for example Apache Flink3 , Kafka 4 and message-based infrastructures like Rabbit MQ 5 . Programmers rely on these platforms to define stream processing operations that consume "mini"-batches of streams observed through temporal windows thanks to query engines like Elasticsearch, Amazon Athena, Amazon Redshift and Cassandra. These engines use a list of passive queries to analyze and sequence data for storage or use by other processors. The focus has been oriented to ensure performance since streams can be massive and analytics computationally costly. The processing operations can be rather complex and ad-hoc to target application requirements. Analytics-based applications must build ad-hoc programs that process postmortem data and streams to perform online analytics tasks. Since programs are ad-hoc and queries are passive, the use of specific processing operations (e.g., clustering, windowing, aggregation) are (hard)coded, and they should be modified and calibrated if new requirements come up.
Current advances in data processing and data analytics have shown that it is possible to propose general operations as functions or operators that can be called, similar to queries within databases applications (e.g. Spark programs). Still, the stream/data processing operations remain embedded within programs, and this approach hinders data-program independence that can imply high maintenance costs. Addressing this requirement often implies integrating data processing systems and programming this kind of hybrid solution. We believe that platforms with processing and analytics operators that can be agnostic to processing streams and stored data histories are still to come. The challenge is defining a platform that can wrap operators as self-contained services and compose them into pipelines of analytics tasks as queries that can continuously deliver aggregated streams/historical data to target applications. This paper introduces H-STREAM 6 an analytics pipelines' enactment engine. It provides stream processing microservices for supporting the analysis and exploration of streams in IoT environments. A microservice is a software development technique that structures an application as a collection of loosely coupled services. H-STREAM combines stream processing and data storage techniques tuned depending on the number of things producing streams, the pace at which they produce them, and the physical computing resources available for processing them online and delivering them to consumers. H-STREAM deploys stream operators pipelines, called stream queries, on message queues (Rabbit MQ) and data processing platforms (Spark) to provide a performant execution environment.
The paper summarises the results of an experimental setting for studying H-STREAM scale-up possibilities according to the number of things and the production rate. Then it shows a proof of concept of H-STREAM in a smart cities scenario.
Accordingly, the remainder of the paper is organised as follows. Section 2 introduces related work regarding stream processing. The section discusses some limitations and underlines how our work intends to overcome certain limitations concerning those solutions. Section 3 describes the general architecture of H-STREAM with operators as microservices that are deployed on high-performance underlying infrastructures. It also introduces the microservices composition language for defining stream processing pipelines as queries. Section 4 introduces the core of our contribution, a stream processing microservice. It also describes the experimental scenario that applies series of microservices to evaluate scale up in terms of the number of things and volume of streams. Section 5 introduces a proof of concept use case for analysing connection logs in cities and show how to define queries that have to synchronise histories with streams to analyse them online using H-STREAM query language. Section 6 concludes the paper.
Related work
Streams can result from a fine-grained continuous reading of phenomena within different environments. Observations are done in different conditions and with different devices. Therefore, streams must be processed for extracting useful information. Stream processing refers to data processing in motion or computing on data directly as it is produced or received. In the early 2000s, academic and commercial approaches proposed stream operators for defining continuous queries (windows, joins, aggregation) that dealt with streams [START_REF] Fragkoulis | A survey on the evolution of stream processing systems[END_REF][START_REF] Rao | The big data system, components, tools, and technologies: a survey[END_REF]. These operators were integrated as extensions of database management systems. Streams were often stored in a database, a file system, or other forms of mass storage. Applications would query the data or compute over the data as needed. These solutions evolved towards stream processors that receive and send the data streams and execute the application or analytics logic. A stream processor ensures that data flows efficiently and the computation scales and is fault-tolerant. Many stream processors adopt stateful stream processing [START_REF] Cardellini | Elastic stateful stream processing in storm[END_REF][START_REF] Carbone | State management in apache flink®: consistent stateful distributed stream processing[END_REF][START_REF] Alaasam | Stateful stream processing for digital twins: Microservice-based kafka stream dsl[END_REF][START_REF] To | A survey of state management in big data processing systems[END_REF] that maintains contextual state used to store information derived from the previously-seen events.
We analyse stream processing systems that emerged to process (i.e., query) streams from continuous data providers (e.g. sensors, things). These systems are designed to address scalability including (i) streams produced at a high pace and from millions of providers; (ii) computationally costly processing tasks (analytics operations); (iii) online consumption requirements. Apache Storm 7 is a distributed, fault-tolerant stream processing framework that guarantees data processing. A Storm application is designed as a "topology" in the shape of a directed acyclic graph (DAG) with spouts and bolts acting as the graph vertices. Edges on the graph represent named stream flows and direct data from one node to another. Together, the topology acts as a data transformation pipeline. Apache Flink is an open-source stateful stream processing framework. Stateful stream processing integrates the database and the event-driven/reactive application or analytics logic into one tightly integrated entity. With Flink, streams from many sources can be ingested, processed, and distributed across various nodes. Flink can handle graph processing, machine learning, and other complex event processing. Apache Kafka is an open-source publish and subscribe messaging solution. Services publishing (writing) events to Kafka topics are asynchronously connected to other services consuming (reading) events from Kafka, all in real-time. Kafka Streams lacks point-to-point queues and falls short in terms of analytics. Spring Cloud Data Flow 8 is a microservice-based streaming and batch processing platform. It provides tools to create data pipelines for target use cases and an intuitive graphic editor that makes building data pipelines interactive for developers. Amazon Kinesis Streams 9 is a service to collect, process, and analyse streaming data in real time, designed to obtain the important information needed to make decisions on time. Cloud Dataflow 10 is a serverless processing platform designed to execute data processing pipelines. It uses the Apache Beam SDK for MapReduce operations and accuracy control for batch and streaming data. Apache Pulsar 11 is a high-performance, cloud-native, distributed messaging and streaming platform that provides server-to-server messaging and geo-replication of messages across clusters. IBM Streams 12 proposes a Streams Processing Language (SPL). It powers a Stream Analytics service that allows ingesting and analysing millions of events per second. Queries can be expressed to retrieve specific data, and filters can be created to refine the data shown on a dashboard 13 . Event stream query engines like Elasticsearch, Amazon Athena, Amazon Redshift and Cassandra define queries to analyze and sequence data for storage or use by other processors. They rely on "classic" ETL (extraction, transformation and loading) processes and use query engines to execute online search and aggregation, for example, in social media contexts (e.g., Elasticsearch) and SQL-like queries on streams (e.g. Amazon Athena, Redshift and Cassandra). Discussion. The real-time stream processing engines rely on distributed processing models, where unbounded data streams are processed.
Much of the data is of no interest and can be filtered and compressed by orders of magnitude [START_REF] Lyman | How much information 2003?[END_REF][START_REF] Woo | What's the big deal about big data[END_REF][START_REF] Zikopoulos | Understanding big data: Analytics for enterprise class hadoop and streaming data[END_REF]. Stream querying and analytics are often performed after the complete scanning of representative data sets. This strategy is inconvenient for real-time processing. Windowing mechanisms have emerged for processing data streams in a predefined topology with a fixed number of operations such as join, aggregate, filter, etc. The challenge is to define these filters in such a way that they do not discard information and that they can process streams produced at a high pace to fulfil consumers' requirements. Furthermore, online analysis techniques must process streams on the fly and combine them with historical data to provide past and current analytics of the observed environments. Despite solid stream processing platforms and query engines, existing solutions do not let programmers design their analytics pipelines without considering the conditions in which streams are collected and eventually stored. H-STREAM has been designed as a stream processing and analytics cartridge for defining stream analytics pipelines and enacting them by composing microservices that hide the underlying platforms dealing with low-level tasks for collecting and storing streams.
H-STREAM for building querying pipelines for analysing streams
We propose H-STREAM, an analytics' pipelines enactment engine with microservices that can be composed for processing streams (see figure 1). H-STREAM operators implement aggregation, descriptive statistics, filtering, clustering, and visualisation wrapped as microservices. Microservices can be composed to define pipelines as queries that apply a series of analytics operations to streams collected by stream processing systems and stream histories. H-STREAM relies on (i) message queues for collecting streams online from IoT farms; and (ii) a backend execution environment that provides a high-performance computing infrastructure (e.g., a virtual data centre [START_REF] Akoglu | Putting data science pipelines on the edge[END_REF], a cloud) with resources allocation strategies necessary for executing costly processes.
Composing microservices. Microservices can work alone or be composed to implement simple or complex analytics pipelines (e.g., fetch, sliding window, average, etc.). A query is implemented by composing microservices. For example, consider observing download and upload speed variations within users' connections when working on different networks. Assume that observations are monitored online but that previous observations are also stored before the query is issued.
A network analyst willing to determine if she obtains the expected bandwidth according to her subscription to a provider can ask every two minutes give me the fastest download speed of the last 8 minutes (see a) in Figure 2). Figure 2 b) shows the composition implementing this query example that starts calling a Fetch, and a Filter operators that retrieve respectively the streams produced online with a history filtering the download speed collected the last 8 minutes.
Results produced by these services are integrated by the operator MAX that synchronises the streams with the history to look for the maximum speed. The result is stored by a service Sink that contacts Grafana. This query is executed every two minutes by an operator window. The operator Fetch interacts with a RabbitMQ service that collects streams from devices and with a service that con- tacts InfluxDB to store the streams for building a history. Finally, an operator window triggers the execution of the query every two minutes. The approach for composing microservices is based on a composition operation that connects them by expressing a data flow (IN/OUT data). We currently compose aggregation services (min, max, mean) with temporal windowing services (landmark, sliding) that receive input data from storage support or a continuous data producer. We propose connectors, namely Fetch and Sink mi-croservices that determine the way microservices exchange data from/to things, storage systems, or other microservices. Stream Processing Pipelines Query Language. We proposed a simple query language with the syntax presented in figure 3, used to express:
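To give a more concrete picture, the following Scala-style sketch models the composition just described for Figure 2 b); the types and constructor names (Fetch, Filter, Max, Sink) only mirror the operators named above and are hypothetical, not the actual H-STREAM API.

// Hypothetical sketch (not the actual H-STREAM interfaces): a minimal model of the
// composition enacted for the example query.
sealed trait Operator
case class Fetch(queue: String) extends Operator                                        // online streams (RabbitMQ)
case class Filter(store: String, attribute: String, lastMinutes: Int) extends Operator  // stream history (InfluxDB)
case class Max(attribute: String, inputs: Seq[Operator]) extends Operator               // synchronises history and streams
case class Sink(target: String, input: Operator) extends Operator                       // delivers results (e.g. Grafana)

val live     = Fetch("rabbitmq://speed-observations")
val history  = Filter("influxdb://speed-observations", "downloadSpeed", lastMinutes = 8)
val fastest  = Max("downloadSpeed", Seq(live, history))
val pipeline = Sink("grafana://dashboard", fastest)   // enacted every 2 minutes by the window operator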
-The frequency in which data will be consumed (EVERY(number:Integer, time-Unit:{minutes, seconds, hours})).
-The aggregation function applies to an attribute of the input tuples (min, max, mean). -The observation window on top of which aggregation functions will perform.
The window can involve only streams produced online (the last 5 seconds) or include historical data (the last 120 days).
Fig. 3. Taxonomy of queries that can be processed by composing microservices.
The observation can be done starting from a given instance in the past (starting(number:Integer, timeUnit:{minutes, seconds, hours})) until something happens. For example, from the instant in which the execution starts until the consumer is disconnected. This corresponds to a landmark window [START_REF] Golab | Data stream management[END_REF]. It can also be done continuously starting from a moving "current instant" to several {minutes, seconds, hours} before. This case corresponds to a sliding window. The expression includes the logic names of the data producers that can be a store (Influx, Cassandra) and/or a streaming queue provided by a message-oriented middleware (e.g., RabbitMQ). A query expression is processed to generate a query-workflow that implements it (see figure 2). Activities represent calls to microservices; they are connected according to a control flow that defines the order they should be executed (i.e., in sequence or parallel). The control flow respects a data flow that defines data Input/Output dependencies. H-STREAM enacts the queryworkflow coordinating the execution of microservices, retrieving partial output that serves as input or a result (see figure 2).
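As an illustration only, the example query could be written with the syntax elements of Figure 3 roughly as follows; the exact keywords of the language are not reproduced in this text, so the expression below is an assumption.

// Hypothetical textual form of the running example, parsed by H-STREAM into the
// query-workflow of Figure 2 b) (keywords are illustrative).
val queryText: String =
  """EVERY(2, minutes)
    |MAX(downloadSpeed)
    |SLIDING WINDOW starting(8, minutes)
    |FROM rabbitmq://speed-observations, influxdb://speed-observations""".stripMargin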
Stream processing microservice
Figure 4 shows the general architecture of a stream microservice. A microservice consists of three main components, Buffer Manager, Fetch and Sink, and Operator-Logic. The microservice logic is based on a scheduler that ensures the recurrence rate in which the analytics operation implemented by the microservice is executed. Stream processing is based on "unlimited" consumption of data ensured by the component Fetch that works if a producer notifies streams. This specification is contained in the logic of the components OperatorLogic and Fetch. As shown in the figure, a microservice communicates asynchronously with other microservices using a message-oriented middleware. As data is produced, the microservice fetches and copies the data to an internal buffer. Then, depending on its logic, it applies a processing algorithm and sends it to the microservices connected to it. The microservices adopt the tuple oriented data model as a stream exchange model among the IoT environment producing streams and the microservices. A stream is a series of attribute-value couples where values are atomic (integer, string, char, float) from a microservice point of view. The general architecture of a microservice is specialised in concrete microservices processing streams using well-known window-based stream processing strategies: tumbling, sliding and landmark [START_REF] Krämer | Semantics and implementation of continuous sliding window queries over data streams[END_REF][START_REF] Golab | Data stream management[END_REF]. Microservices can also combine stream histories with continuous flows of streams of the same type (the average number of connections to the Internet by Bob of the last month until the next hour).
Since the RAM assigned to a microservice, and in consequence its buffer, might be limited, every microservice implements a data management strategy by collaborating with the communication middleware to exploit the buffer space, avoiding losing data and generating results on time. Two processing possibilities are supported: (i) online processing using the three window-based strategies (tumbling, sliding and landmark) well known in the stream processing systems domain; (ii) combining stream histories with continuous flows of streams of the same type (e.g., the average number of connections to the Internet by Bob of the last month until the next hour).
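A minimal Scala skeleton of this generic microservice is sketched below; the component names follow the text (Buffer, Fetch, OperatorLogic, scheduler tick), while the method signatures and the eviction policy are assumptions made for illustration.

// Hypothetical skeleton of the generic stream processing microservice.
import scala.collection.mutable

case class StreamTuple(timestamp: Long, values: Map[String, Double])   // attribute-value stream model

abstract class StreamMicroservice(windowMillis: Long) {
  private val buffer = mutable.Queue.empty[StreamTuple]                // Buffer Manager

  // Fetch: invoked by the message-oriented middleware for each incoming tuple.
  def fetch(t: StreamTuple): Unit = synchronized {
    buffer.enqueue(t)
    // Bounded RAM: evict tuples that fall outside the observation window.
    while (buffer.nonEmpty && buffer.head.timestamp < t.timestamp - windowMillis) buffer.dequeue()
  }

  // OperatorLogic: the analytics operation (e.g. MAX, MEAN) applied to the window content.
  def operatorLogic(window: Seq[StreamTuple]): Option[StreamTuple]

  // Called by the scheduler at the recurrence rate; the result is handed to the Sink.
  def tick(now: Long): Option[StreamTuple] = synchronized {
    operatorLogic(buffer.toSeq.filter(_.timestamp >= now - windowMillis))
  }
}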
Interval oriented storage support for consuming streams
A microservice that aggregates historical data and streams includes a component named HistoricFetch. This component is responsible for performing a oneshot query for retrieving stored data according to an input query (for example, by a user or application). As described above, we have implemented a general/abstract microservice that contains a Fetch and Sink microservices. The historical fetch component has been specialized to interact with two stores: In-fluxDB14 and Cassandra15 . The microservice HistoricFetch exports the following interface:
def queryToHistoric(function: String, value: String, startTimeInMillis: Long, endTimeInMillis: Long, groupByTimeNumber: Int, groupByTimeTimeUnit: String) :
List[List[Object]]
The method queryToHistoric(), shown in the code above, implements the connection to a data store or DBMS, sends queries and retrieves data. It returns the results packaged in the Scala structure List[ List [Row] ] objects, where Row is a tuple of three elements:
-Timestamp: Long, a timestamp in the format epoch; -Count: Double, the number of tuples (rows) that were grouped; -Result: Double, the result of the aggregation function.
As shown in the following code, a component HistoricFetch is created by the microservice specifying the name of the store (historicProvider), the name of the database managing the stream history (dbName.series) and the execution context. The current version of our microservice runs on Spark, so the execution context represents a Spark Context (sc).
val hf : HistoricFetch = new HistoricFetch(historicProvider,dbName,series,sc)
The component HistoricFetch of the microservice creates an object HistoricProvider (step 1) that is used as a proxy for interacting with a specific store (i.e., InfluxDB or Cassandra). The store synchronously creates an object Connection (step 2). The Connection object will remain open once the query has been executed and results received by HistoricFetch. Then, the component HistoricalFetch will use it for sending a temporal query using its method queryToHistoric() (step 3). The result is then received in the variable Result that is then processed (i.e., transformed to the internal structure of the operator see below) and shared with the other components through the Buffer of the microservice (step 4).
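For illustration, and assuming that the adapter accepts "max" as the aggregation function and "downloadSpeed" as the attribute to aggregate (these string values are assumptions, not documented constants), the running example would retrieve its 8-minute history through hf as follows.

// Illustrative call to HistoricFetch for the running example.
val now   = System.currentTimeMillis()
val since = now - 8 * 60 * 1000L                                        // last 8 minutes of stored observations

val rows: List[List[Object]] =
  hf.queryToHistoric("max", "downloadSpeed", since, now, 2, "minutes")  // one row per 2-minute bucket

// Each row packs (timestamp, count, result); keep the per-bucket maxima for later merging.
val historicalMaxima: List[(Long, Double)] =
  rows.map(r => (r(0).asInstanceOf[Long], r(2).asInstanceOf[Double]))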
Consider the query introduced previously: every two minutes give me the fastest download speed of the last 8 minutes. It combines the history of observations of the last 8 minutes with those produced continuously, and this every two minutes. A particular situation to consider is how to synchronise the observations stored in the history with those fetched online. Figure 5 shows the general principle of the functional logic of a microservice (MAX) dealing with the streams harvested before the execution of the current query. The challenge is double: first, retrieve batches of historical data according to different temporal filters. For example, the temporal filter for data produced the last 8 minutes observed from time t1 is [t1 - 8 min, t1], whereas for time t1 + i it is [t1 + i - 8 min, t1 + i].
Second, successively combine these batches with incoming flows arriving at times t1, ..., t1 + i every 2 minutes (as stated in the query). In technical terms, the query implies looking for the maximum download speed by defining windows of 8 minutes for observing the download speed of the connections. To get the fastest speed every 2 minutes (as stated in the query), we divide the 8 minutes into buckets of 2 minutes (see Figure 5 (1)) and look within each bucket for the max value, that is, the fastest download speed within the 2-minute bucket, and keep it as the "local" maximum speed. We combine every bucket with the historical data filtered according to the corresponding time interval (see Figure 5 (2)). This strategy is valid only if the production timelines of the stream producers and of the operator microservice are synchronised. Finally, the global max, i.e., the fastest download speed in the last 8 minutes, is the maximum of this set of local maximum speeds.
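A minimal sketch of this bucketing logic for the example query (maximum download speed over the last 8 minutes, re-evaluated every 2 minutes) is given below; the names are illustrative, the history function abstracts the temporal filter applied to the store, and the synchronisation of the producer and operator timelines is assumed.

case class Speed(timestamp: Long, downloadSpeed: Double)

object MaxOfLast8Minutes {
  def fastest(now: Long,
              online: Seq[Speed],                    // observations received on-line
              history: (Long, Long) => Seq[Speed]    // temporal filter applied to the store
             ): Option[Double] = {
    val windowMs = 8 * 60 * 1000L
    val bucketMs = 2 * 60 * 1000L

    val localMaxima = ((now - windowMs) until now by bucketMs).flatMap { start =>
      val end = start + bucketMs
      val streamed = online.filter(o => o.timestamp >= start && o.timestamp < end)
      val stored   = history(start, end)             // batch of historical data for this bucket
      (streamed ++ stored).map(_.downloadSpeed).reduceOption(_ max _) // "local" maximum
    }
    localMaxima.reduceOption(_ max _)                // global maximum over all buckets
  }
}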
Microservices execution
Microservices are executed on top of a Spark infrastructure deployed on a virtual machine provided by the cloud provider Microsoft Azure (see Figure 6). As shown in the Figure, a microservice exports two interfaces: the operator interface, as a SpepsIoT Component, with methods to manage it (e.g., start/stop, bind/unbind) and to produce results in a push/pull mode; the DB interface (queryToHistoric in the upper part of the Figure) to connect and send temporal queries to a temporal database management system (e.g., Cassandra, InfluxDB). The microservice wraps the logic of a data processing operator that consumes time-stamped stream collections represented as series of tuples. We assume that it is possible to navigate through the tuple structure for accessing attribute values, where one of the tuple attributes corresponds to its time-stamp. The time-stamp represents the arrival time of the stream to the communication infrastructure (i.e., RabbitMQ queue). The operator logic is implemented as a Spark program. Spark performs its parallel execution. Produced results can be collected by interacting with the operator through its interface; it can be connected to another microservice (e.g., the operator sink) as shown in the left part of the figure. A microservice running on top of the Spark platform processes streams considering the following hypotheses:
- There is a global time model for synchronizing different timelines (batch and stream).
- We use the Spark timeline as a global reference in the implementation, and we execute aggregations recurrently according to "time buckets". The size of the time bucket is determined by a query that defines an interval of observation (e.g., the average number of connections of the last 5 hours) and a moving temporal reference (e.g., every 5 minutes, until "now" if "now" is a moving temporal reference).
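For illustration only, the following minimal Spark job shows how the aggregation of one time bucket could be executed on the Spark backend; it is a toy sketch (names and values are assumptions) and not the H-STREAM operator wrapper itself.

import org.apache.spark.{SparkConf, SparkContext}

object MaxBucketJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("max-bucket").setMaster("local[*]"))
    // Download speeds observed within one "time bucket" of the global timeline.
    val speeds = sc.parallelize(Seq(4.1, 7.9, 6.3, 5.0))
    println(s"local max of the bucket = ${speeds.max()}")
    sc.stop()
  }
}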
Experimental validation
We conducted experiments for validating the use of our microservices. For deploying our experiment, we built an IoT farm using our Azure Grant 16 and implemented a distributed version of the IoT environment to test a clustered version of RabbitMQ. We therefore address the scaling-up problem regarding the number of data producers (things) for our microservices. Using Azure Virtual Machines (VM), we implemented a realistic scenario for testing scalability in terms of: (i) the initial MOM (RabbitMQ), installed in VM 2 in Figure 7; (ii) the producers (things), installed in VM 1 in Figure 7; (iii) the microservices, installed in VM 3 in Figure 7. As shown in Figure 7, in this experiment, microservices and testbeds were running on separate VMs. This experiment leads to several cases scaling up to several machines hosting up to 800 things, with a clustered version of RabbitMQ using several nodes and queues that could consume millions of messages produced at rates in the order of milliseconds (see Figure 8).
The observations in Figure 9 show the behaviour of the IoT environment regarding the message-based communication middleware when the number of things increases, when the production rate varies, and when one or several queues are used for each consuming microservice. We also observed the behaviour of the IoT environment when several microservices were consuming and processing the data. Of course, the most agile behaviour is obtained when nodes and virtual machines increase independently of the number of things. Indeed, note that the performance with 800 things compared to 3 things does not change much when increasing nodes, machines and queues. Note also that devoting one queue per thing does not lead to essential changes in performance. For our experiments, we varied the settings of the IoT environment according to the properties characterising two different scenarios. In the first scenario, we assumed few connected things with a high production rate, and used fewer things and queues and more nodes to achieve data processing in an agile way; it corresponds to an experiment conducted in the Neuroscience Laboratory at CINVESTAV Mexico (details can be found in [START_REF] Arriaga-Varela | Supporting real-time visual analytics in neuroscience[END_REF]). For the second scenario, regarding connectivity in cities (see Section 5), with many people willing to connect devices to the different networks available in different urban spaces, we configured more things, queues and nodes.
The result of an H-STREAM query as a microservices composition can be seen in the right lower part of Figure 2. It corresponds to a visualisation of the data sunk on Grafana. One of the functions not directly integrated into existing stream processing systems is the possibility of synchronising long-term historical data (stored streams) with streams. The use case provides queries that combine long histories with online streams for the complete validation of H-STREAM. Through queries implemented by H-STREAM, we prove that it is possible to provide hybrid post-mortem and online analytics. The microservices query data histories stored as post-mortem collections, and they also connect to online producers observing their connections according to the observation window size. Figure 11 compares the execution time of these queries according to two settings: (1 in the figure) 800 things producing streams through 3 queues; (2 in the figure) 800 things and 1 queue deployed on one node. The good news is that H-STREAM microservices can deal with farms with many stream producers connected to few queues. The combination of streams produced at recurrent paces with long histories is done in a "reasonable" execution time in the order of seconds. The query execution cost depends on its recurrence and the history size. The overhead implied by the streams' production pace is delegated to the message passing middleware.
Conclusions and Future Work
This paper proposes H-STREAM, which composes microservices deployed on high-performance computing backends (e.g., cloud, HPC) to process data produced by farms of things producing streams at different paces. Microservices compositions can correlate online produced streams with post-mortem time series. This synchronisation of past and present enables the discovery and modelling of phenomena or behaviour patterns within intelligent environments. The advantage of this approach is that there is no need for full-fledged data management services; microservices compositions can tailor data processing functions personalised to the requirements of the applications and IoT environments.
Future work consists of developing a microservices composition language that can be used for expressing the data processing workflows that can be weaved within target application logics [START_REF] Akoglu | Putting data science pipelines on the edge[END_REF]. We are working on two urban computing projects regarding the modelling and management of crowds and smart energy management in urban clusters.
Fig. 1. H-STREAM General Architecture.
Fig. 2. Microservices Composition Example.
Fig. 4. Architecture of a stream processing microservice for processing data streams.
Fig. 5. Synchronizing stream windows with historic data for computing aggregations.
Fig. 6. Microservice execution.
Fig. 7. General setting deployed on Windows Azure.
Fig. 8. Scale up scenarios.
EVERY 60 seconds compute the max value of download_speed of the last 3 minutes FROM cassandra database neubot series speedtests and streaming rabbitmq queue neubotspeed
EVERY 30 seconds compute the mean value of upload_speed starting 10 days ago FROM cassandra database neubot series speedtests and streaming rabbitmq queue neubotspeed
EVERY 5 minutes compute the mean of the download_speed of the last 120 days FROM cassandra database neubot series speedtests and streaming rabbitmq queue neubotspeed
Fig. 11. Queries execution time vs. deployment setting and history length.
https://flink.apache.org
https://kafka.apache.org/intro
https://www.rabbitmq.com
Here the Github address and a Youtube address of a demonstration of the system.
https://storm.apache.org
https://spring.io/projects/spring-cloud-dataflow
http://aws.amazon.com/kinesis/data-streams/
https://cloud.google.com/dataflow
https://pulsar.apache.org/
https://www.ibm.com/cloud/streaming-analytics
https://deepsource.io
InfluxDB is a time series system accepting temporal queries, useful for computing time-tagged tuples (https://www.influxdata.com)
Cassandra is a key-value store that provides non-temporal read/write operations that might be interesting for storing huge quantities of data (http://cassandra.apache.org)
The MS Azure Grant was associated with a project to perform data analytics on crowds flows in cities. It consisted of credits for using cloud resources for performing high-performance data processing.
Use Case: Analysing the behaviour of network services
The use case scenario gives insight into the way microservices can be composed to answer continuous queries. For deploying our experiment, we used the IoT farm on Azure (see Section 4) that we implemented to address the scaling-up problem in terms of several data producers (things) to be consumed by our microservices.
The experimental scenario aims at analysing the connectivity of the connected society. The data set used for the use case has been produced in the context of the Neubot project 17. It consists of network tests (e.g., download/upload speed over HTTP) realised by different users in different locations using an application that measures the network service quality delivered by different Internet connection types 18. The idea is that people install the Neubot application on their computers and devices. Every time they connect to the Internet using different networks (4G, Ethernet, etc.), the application computes network quality metrics. The data is then used to answer queries such as: -Am I receiving the network speed I am paying for all the time?; -Which are the periods of the day in which I can upload/download files at the highest speed using different network providers? A first simple example of the queries we tested is the following.
EVERY 2 minutes compute the max value of download_speed of the last 8 minutes FROM influxdb database neubot series speedtest and streaming rabbitmq queue neubotspeed
17 Neubot is a project devoted to measuring Internet from the edges by the Nexa Center for Internet and Society at Politecnico di Torino (https://www.neubot.org/).
18 Here is the reference to the project that produced the dataset. We omit this information to respect double-blinded evaluation requirements.
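For illustration, the example query above can be seen as a set of operator parameters (recurrence, aggregation function, observation window, store and queue). The sketch below renders this mapping in Scala; the names and structure are assumptions for illustration and do not reproduce the actual H-STREAM query language or parser.

import scala.concurrent.duration._

case class QuerySpec(recurrence: FiniteDuration, function: String, attribute: String,
                     window: FiniteDuration, store: String, db: String,
                     series: String, queue: String)

object NeubotExample {
  // "EVERY 2 minutes compute the max value of download_speed of the last 8 minutes
  //  FROM influxdb database neubot series speedtest and streaming rabbitmq queue neubotspeed"
  val spec = QuerySpec(2.minutes, "max", "download_speed", 8.minutes,
                       "influxdb", "neubot", "speedtest", "neubotspeed")
}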
00412076 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2006 | https://inria.hal.science/inria-00412076/file/ECWT2006.pdf | P F Morlat
H Parvery
G Villemaud
X Xin
J Verdier
J M Gorce
Global System Evaluation Scheme for Multiple Antennas Adaptive Receivers
Keywords: Adaptive arrays, System analysis and design, Phase noise, MIMO systems
This paper describes a global system approach to easily develop, simulate and validate a multi-antennas receiver structure. Our aim is to offer a rapid evaluation for future wireless systems, including promising techniques such as SIMO or MIMO, Software Defined Radio (SDR), OFDM and Interference Cancellation. A particular focus is on the impact of RF front-ends characteristics on the effective performances of a receiver in a realistic environment. A complete connected solution based on Agilent Technologies tools is presented, combining simulations and measurements with realistic conditions. This testbed allows a direct evaluation of all parts of a wireless link with multiple antennas. For instance IQ imbalance and phase noise influence on a four antennas 802.11g receiver is herein exposed.
I. INTRODUCTION
In the field of high data rate wireless systems, a very promising approach is to combine the most recent techniques in the same architecture dealing with numerous communication standards. For instance, this combination could be an OFDM signal associated with multiple-antenna reconfigurable transceivers using a Zero-IF Software Defined Radio (SDR) structure [START_REF] Rykaczewski | Signal Path Optimization in Software-Defined Radio Systems[END_REF].
In order to properly design such a transmission scheme, a complete and precise study based on both simulations and measurements is essential. This work aims to propose such a system-level evaluation by the use of a 2x2 MIMO radio platform connected to the Advanced Design System (ADS) software from Agilent Technologies [2]. This connected solution allows us to simulate and measure all parts of a wireless link, from the global system level down to the particular model of RF components or fading channels.
In Part II, we discuss the key points of RF front-end limitations which strongly decrease receiver efficiency. Part III presents the system-level evaluation tools used in Part IV, which describes a particular example of a four-antenna 802.11g receiver. In this example, we show some interesting results for the joint compensation of fading effects and IQ imbalance. Finally, the influence of the power amplifier nonlinearity and oscillator phase noise is reported in this paper.
II. ON MULTIPLE ANTENNA RECONFIGURABLE RECEIVER PROBLEMS
To properly balance subsystem performance, the digital signal processing engineer must be aware of the limitations of the analog RF front-end so that some compensation can be made via digital signal processing [START_REF] Tubbax | Joint Compensation of IQ Imbalance, Frequency Offset and Phase Noise in OFDM Receivers[END_REF]. In most systems, the receiver tends to be more complex than the transmitter. The main challenge for designing receivers is the dynamic range. The RF front-end of a receiver must separate a desired signal, typically from -130 dBm to -70 dBm, from a background RF environment that may be in the relative range of -20 dB up to 0 dB. In many systems, the RF front-end also sets the system signal-to-noise ratio and should be designed to add minimal noise. Thus the overall system must have a considerable dynamic range to accommodate both the high-power background signals and the lowest-power desired signal. Dynamic range is limited at the bottom of the range by noise that enters the system through thermal effects of the components or through non-idealities of the ADC, such as quantization noise or sampling aperture jitter. Low-level signals can be masked by this noise. Dynamic range is limited at the high end by interference. The source of this interference could be co-channel, adjacent channel, or self-induced by the transceiver. High levels of interference may cause the receiver to become more non-linear and introduce cross products (spurious components), which may inhibit the detection of low-level signals or reduce the desired signal bit error rate (BER). Simply attenuating the high-level signals before they drive the receiver into a non-linear operating region is insufficient, since low-level desired signals also present will be attenuated until masked by system-induced noise and thus will fall below the sensitivity of the receiver.
For the transmitter part, power-efficiency is a particularly challenging problem: power is consumed at a much higher level in the talk mode than in the stand-by mode. For a mobile unit, power control is commonly used at the mobile transmitter to reduce power consumption and to help improve capacity of the cellular system. The accuracy and linearity of the RF transmit chain is important to ensure that the desired signal is transmitted with no spurious signals or noise. Finally, noise and distortion are the limiting factor in the RF circuit performance, thus it is necessary to quantify them. Generally, several items are discussed in detail such as the noise figure of the receiver, the transmitter's noise floor, IQ gain imbalance, IQ orthogonality phase imbalance, phase noise, voltage noise of power supply, linear phase and amplitude distortion. Therefore, in a global system approach, both transmitter and receiver RF performances will impact the overall link budget. For instance, in the field of emerging high rate wireless systems, OFDM is a very popular modulation technique because of its performances in hard multi-path environments. Moreover Wireless LAN generally deal with such Rice or Rayleigh fading channels and so multiple antennas techniques appear an essential way of increasing throughput. At the same time, considerable effort is being made to develop reconfigurable receivers using more or less Software Defined Radio (SDR) principles. Thus multiple standards could elegantly cohabit in the same simple architecture with good processing capabilities.
Then a global approach combining multiple antennas, OFDM technique and SDR principles appears really promising particularly for future broadband wireless systems. Unfortunately, this combination is not so evident: multiple antenna algorithms and SDR are resource-expensive and OFDM technique is sensitive to RF front-end performances. Moreover, the impact of RF impairments on a receiver depends on RF front end topologies [START_REF] Brandolini | Toward Multistandard Mobile Terminals -Fully Integrated Receivers Requirements and Architectures[END_REF]. The direct conversion receivers often referred to as zero intermediate-frequency (Zero-IF) receivers have become very popular in wireless mobiles. Nevertheless, the RF impairments are much more critical in this case, as compared to super-heterodyne receivers for instance. In addition, a potential method is to sample a much larger bandwidth to deal with efficient interference cancellation or multi-channel capabilities. By the way, trying to enlarge the acquired bandwidth naturally degrades the RF performances.
III. GLOBAL SYSTEM EVALUATION APPROACH
The radiocom platform (Fig. 1) is an analysis tool which allows combining the most advanced techniques in design, modelling and test for radiocommunication systems. This platform is made of high-technology equipment developed by Agilent Technologies: the ADS software, and measurement hardware consisting of two arbitrary waveform generators (ESG 4438C) and a vector spectrum analyzer (VSA89641) with two RF inputs [2]. With this platform it is possible to extend measurements up to 6 GHz, with a received bandwidth analysis of 36 MHz. The arbitrary waveform generator by itself is able to generate any complex signal, which it is then possible to analyze after propagation with the vector spectrum analyzer. The vectorial analysis software can demodulate this signal and offers visualisation options to show spectrum, constellations, BER, EVM... in order to accurately estimate the transmission system quality. With this connected solution, a software/hardware interaction allows us to test and design very complex and realistic systems. Indeed, the signal modelled with ADS can be loaded in the internal memory of the ESG, then the signal measured by the VSA can be recorded and transferred to the software. This interaction can take place at any part of the chain (baseband, RF, ...). ADS allows us to model the complete system, from the component to the whole architecture. With the Ptolemy tool, analog and digital signals coexist, which allows us to study all the fields related to telecommunications and to make them interact.
We can therefore estimate the impact of the different noise sources of the RF front end (phase noise, distortion, IQ imbalance...), and also of the propagation environment (fading...). All component toolboxes (receiver, transmitter, channel) are customizable and more or less elaborate models can be added to them. A very realistic modelling of the system can be simulated taking into account the characteristics of the channel and the physical properties of the front end. Fig. 2 presents an ADS schematic which evaluates the simulated and measured BER performance of an 802.11b system versus the SNR.
For example, a part of the standard receiver and transmitter can be replaced by a specific component (VCO, LNA...) with its appropriate characteristics to evaluate its performance (Fig. 3). Finally, this platform permits the validation of channel models with direct comparison with signals recorded by the VSA in any typical environment. With the two RF inputs of the VSA it is also possible to analyze and evaluate the performance of SIMO or MIMO systems or to characterize multi-channel models. In this part, we will show via quantitative analysis that the SIMO configuration also helps to reduce the impact of non-ideal characteristics of the RF front-end on OFDM receivers alongside mitigating the effects of fading. Our objective is to simulate a very realistic transmission system and to have a better estimation of the advantage of the Sample Matrix Inversion (SMI) smart antennas algorithm [START_REF] Gupta | SMI adaptive antenna arrays for weak interfering signals[END_REF].
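For readers unfamiliar with SMI, the sketch below shows the core weight computation (a sample covariance estimate followed by a matrix inversion, w = R^-1 s). It is a real-valued simplification written with the Breeze linear-algebra library; an actual receiver operates on complex baseband samples and usually adds diagonal loading for numerical robustness. Names and structure are illustrative only and do not reproduce the algorithm as implemented on the platform described here.

import breeze.linalg.{DenseMatrix, DenseVector, inv}

object SmiSketch {
  // snapshots: one row per antenna (4 rows for a four-antenna receiver),
  // one column per time sample; reference: training/steering vector.
  def smiWeights(snapshots: DenseMatrix[Double],
                 reference: DenseVector[Double]): DenseVector[Double] = {
    val k = snapshots.cols.toDouble
    val r = (snapshots * snapshots.t) / k   // sample covariance matrix estimate
    inv(r) * reference                      // un-normalised SMI weight vector w = R^-1 s
  }
}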
A complete 802.11g transmission system was modelled with the ADS software, using different types of channels (additive white Gaussian noise or various multipath models). Fig. 4 presents the BER obtained with the four-arm SMI receiver compared to a single-antenna receiver with a 36 Mbps transmission rate (16QAM). Fig. 5 compares simulated and measured BER performance.
A. IQ Imbalance in AWGN and Multipath Channels
Multiple antenna techniques are known to be efficient for signal-to-noise ratio improvement or interference rejection.
Incidentally, RF impairments such as IQ imbalance can be considered as additional interference [START_REF] Tubbax | Joint Compensation of IQ Imbalance, Frequency Offset and Phase Noise in OFDM Receivers[END_REF]. Hence, minimisation techniques used for smart antennas, applied to the digital baseband signal, allow a global compensation of channel effects and RF non-idealities. A first evaluation of the SMI gain is thus presented in Figures 6 and 7 (Fig. 7: relative BER vs. IQ gain imbalance in a multipath channel). These results present the relative BER (the zero-gain-imbalance BER value is taken as reference) compared for a single-antenna receiver and a four-antenna receiver in different channels and for gain or phase IQ imbalance, showing the potential of such a technique for joint fading/RF impairment mitigation.
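As an illustration of how such an impairment can be injected at baseband, one common model writes the imbalanced sample as y = K1*x + K2*conj(x), with K1 and K2 set by the gain mismatch g and phase mismatch phi. The sketch below follows that convention (one among several found in the literature); it is illustrative only and is not the model used in the ADS simulations of this paper.

object IqImbalanceSketch {
  // Minimal complex arithmetic, just for the sketch.
  case class C(re: Double, im: Double) {
    def +(o: C) = C(re + o.re, im + o.im)
    def *(o: C) = C(re * o.re - im * o.im, re * o.im + im * o.re)
    def conj    = C(re, -im)
  }

  // y = K1*x + K2*conj(x), with K1 = (1 + g*e^{-j*phi})/2 and K2 = (1 - g*e^{j*phi})/2.
  // g = 1 and phi = 0 give K1 = 1 and K2 = 0, i.e. a perfectly balanced receiver.
  def applyImbalance(x: C, g: Double, phiRad: Double): C = {
    val k1 = C((1 + g * math.cos(phiRad)) / 2, -g * math.sin(phiRad) / 2)
    val k2 = C((1 - g * math.cos(phiRad)) / 2, -g * math.sin(phiRad) / 2)
    k1 * x + k2 * x.conj
  }
}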
B. Influence of Power Amplifier with Nonlinear Gain Compression
It is very interesting to study the effects of harmonic distortion [START_REF] Gregorio | The performance of OFDM-SDMA systems with power amplifier non-linearities[END_REF] and intermodulation distortion. Indeed, a linear PA is needed in the case of non-constant envelope signals. The linearity of a PA is a key parameter as it is closely related to power consumption and to distortion, hence to BER. All gain compression characteristics are modeled (with the ADS software) using a polynomial expression up to saturation. Fig. 8 shows the characterization parameters for nonlinear elements. In Fig. 9, the effects of the 1-dB compression point and of the intermodulation intercept point (IP) associated with the third-order products on BER performances (for Eb/No = 10 dB) are reported for a multipath channel. Compared with the case of an AWGN channel (not reported), the SMI gain seems to be very interesting. Of course, measurements have to be performed to confirm these encouraging simulation results.
C. Influence ofOscillators Phase Noise
The same approach was used to study the influence of phase noise spectrum of local oscillators [START_REF] Wu | Performance analysis on the effect of phase noise in OFDM systems[END_REF] and to evaluate the improvement of the traditional Sample Matrix Inversion.
Any phase noise model is easy to design and to implement in the ADS software. For instance, it is possible to analyse the impact of flicker frequency noise (a classical model for the phase noise in free running oscillators) and PLL spurious on BER curves vs. signal to noise ratio (spurious response or spur is a generic problem associated with fractional-N synthesizers [START_REF] Razavi | Phase locking in high performance systems From Devices to Architectures[END_REF]), see Fig. 10.
V. CONCLUSION
Today's wireless communications systems are built by multidisciplinary teams that include experts in signal processing, RF engineering and other disciplines. Our global system approach makes it possible to easily develop, simulate and validate a wireless system, even with a multi-antenna receiver structure (SIMO OFDM system). The efficiency of a complete connected solution based on Agilent Technologies tools, combining simulations and measurements under true operating conditions (correlated channels and antenna coupling), is clearly demonstrated. The influence of IQ imbalance on a four-antenna 802.11g receiver has been presented, and the efficiency of the SMI algorithm for BER reduction has been clearly shown. We have also presented the compensation of phase noise thanks to a SIMO receiver, and the impact of the non-linear distortion of amplifiers on OFDM receiver performances.
Fig. 2. Example of a schematic design for 802.11b BER evaluation.
OF A FOUR ANTENNAS 802.11G RECEIVER WITH RF IMPAIRMENTS
Fig. 1. The platform structure for a 1x1 transmission.
Fig. 4. BER vs. Eb/No in multipath channel.
Fig. 8. Nonlinear element characterization for nonlinear models.
00412077 | en | [
"phys.phys.phys-ins-det",
"phys.hexp"
] | 2024/03/04 16:41:26 | 2009 | https://hal.in2p3.fr/in2p3-00412077/file/Stephane_Poss_article-1.pdf | Stéphane Poss
Prospects for CP violation in LHCb
Introduction
This document describes the current expectations for two important CP violation measurements of the LHCb experiment. For a theoretical overview of the CP violation phenomenology, one can refer to [START_REF] Branco | CP violation, International series of monographs on physics[END_REF] and [START_REF] Beyer | CP violation in particle, nuclear and astrophysics[END_REF]. The description of the LHCb experiment is given in detail in the reference [START_REF] Alves | The LHCb Detector at the LHC[END_REF]. The first part of this document presents the status of the sensitivity studies of the B 0 s mixing phase φ J/ψφ s in the decays B 0 s → J/ψφ, together with the phase φ φφ s from the decays B 0 s → φφ. We also give the expected results on the calibration measurement of sin(2β) with B 0 d → J/ψK 0 S decays. The second part gives the sensitivity to the CKM angle γ. This part is subdivided in two sections, the first one presents the extraction of γ in the "tree" level dominated decays B → DK and the second one presents the sensitivity in the "penguin" level decays B → hh.
2 Measurement of the B 0 s mixing phase
2.1 Analysis of B 0 s → J/ψφ decays
The interference between B 0 s → J/ψφ decays with and without B 0 s -B 0 s oscillation gives rise to the CP violating parameter φ J/ψφ s . This is represented in Figure 1. In the Standard Model, when penguin contributions to the decay amplitude are neglected, this phase is predicted to be:
$\phi_s^{J/\psi\phi} = -2\beta_s = 0.0360^{+0.0020}_{-0.0016}$ rad    (1)
where $\beta_s = \arg(V_{ts}V^*_{tb}/V_{cs}V^*_{cb})$. This phase could be modified by a New Physics contribution to the B 0 s - B 0 s mixing. The TeVatron experiments [4,5] yield the following constraint:

$\phi_s^{J/\psi\phi} \in [-2.6, -1.94] \cup [-1.18, -0.54]$ rad at 68% CL    (2)

The measurement itself is performed by studying the time-dependent decay rates of the B 0 s → J/ψφ decays. This measurement is complicated by the following:
• the decay is P → V V, so it requires an angular analysis of the final-state particles to statistically disentangle the different CP-odd and CP-even components,
• the decay width difference between the two mass eigenstates of the B 0 s , ∆Γ s , is expected to be very large compared to the one of the B 0 d mesons,
• the oscillation frequency ∆m s of the B 0 s mesons is very large compared to the oscillation frequency of the B 0 d mesons [START_REF]Observation of B 0 s -B 0 s oscillation[END_REF][START_REF]Lifetime Difference and CP-Violating Phase in the B 0 s System[END_REF], so it requires a good proper time resolution to see the oscillations.
The LHCb experiment expects to gather 117k events for one nominal year of running, i.e. 2 fb -1 of integrated luminosity, with a B/S ≈ 2.1. The proper time resolution is expected to be ≈ 40 fs, and the B 0 s invariant mass resolution is expected to be 16 MeV/c 2 . The measured angles for the angular analysis show distortions of less than 10%, and the effective tagging power (εD 2 ) is ≈ 6.2% [START_REF] Calvi | Calibration of flavour tagging with B + → J/ψ(µµ)K + and B 0 → J/ψ(µµ)K * control channels at LHCb[END_REF].
The φ J/ψφ s sensitivity has been estimated to be:
L = 0.5 fb -1 : σ(φ J/ψφ s ) ∼ 0.060, L = 2 fb -1 : σ(φ J/ψφ s ) ∼ 0.030
in the nominal running conditions, with large errors coming from the bb production cross section and the visible branching ratio.
Systematic effects due to proper time and angular resolution, angular acceptance and flavor tagging have been studied and found to be smaller than the statistical uncertainty expected for 2 fb -1 . The Figure 2 shows the evolution of the sensitivity according to two LHC running scenarii. The blue lines show the uncertainties related to the bb cross section and the visible branching ratio of B 0 s → J/ψ(µµ)φ(KK). The black horizontal line is the combined CDF/D0 uncertainty in 2008 scaled to an
Analysis of B 0 s → φφ decays
Contrary to the B 0 s → J/ψφ decays, the B 0 s → φφ decays occur through a penguin process, as illustrated in Figure 3. Due to the presence of the CKM matrix element V ts in the decay, the total weak phase cancels. Therefore, neglecting the contribution from the penguin process involving u and c quarks, the mixing phase is predicted to be $\phi_s^{\phi\phi} = 0$ in the Standard Model. Only a contribution from New Physics can make this phase deviate from 0. It presents the same difficulties for the analysis as the B 0 s → J/ψφ study, with a lower expected yield due to the decay process occurring only through a loop diagram.
In the spectator quark model, this decay is very similar to the decays B 0 d → φK 0 S and B 0 d → η ′ K 0 S , so the penguin contributions in the decay amplitudes can be factorized.
The LHCb expected selection yield is 6.2 k events for 2 fb -1 with a B/S < 0.8. The sensitivity on φ φφ s with these statistics is σ(φ φφ s ) ≈ 0.08. Used in association with the measurement of φ J/ψφ s , this measurement can be used to constrain New Physics in the mixing and in the penguin decay.
Calibration with the measurement of sin(2β) in B 0 d → J/ψK 0 S decays
The measurement of sin(2β) is used in LHCb as a calibration channel [START_REF] Amato | LHCb sensitivity to sin(2β) from the CP-asymmetry in B 0 → J/ψ(µµ)K 0 S (π + π) decays[END_REF]. It is used because it shares the same trigger line as B 0 s → J/ψφ decays, and the selection procedure uses the same criteria for the J/ψ selection. The opposite-side tagging algorithm is expected to behave identically, so that the mistag fractions are compatible. Moreover, both decays use the same tools, apart from the angular measurements, as B 0 d → J/ψK 0 S is not a P → V V decay. Finally, both measurements require identical control channels.
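For reference, sin(2β) enters through the standard time-dependent CP asymmetry of this mode; neglecting ΔΓ_d and assuming no direct CP violation (sign conventions vary between references), it reads:

A_{CP}(t) \;=\; \frac{\Gamma\big(\bar{B}^0_d(t)\to J/\psi K^0_S\big)-\Gamma\big(B^0_d(t)\to J/\psi K^0_S\big)}{\Gamma\big(\bar{B}^0_d(t)\to J/\psi K^0_S\big)+\Gamma\big(B^0_d(t)\to J/\psi K^0_S\big)} \;=\; \sin(2\beta)\,\sin(\Delta m_d\, t)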
With 2 fb -1 of data, the expected yield is 76 k events after trigger and selection, and the sensitivity to sin(2β) is expected to be σ(sin(2β)) ≈ 0.020. This sensitivity is comparable to the world average [START_REF] Barberio | Heavy Flavor Averaging Group[END_REF].
Measurement of the CKM angle γ
The measurement of this angle is performed in LHCb studying two classes of decay processes, tree and penguin. The tree level decays are expected to be less sensitive to New Physics than the penguin level decays. The analysis of the tree level decays is performed using the methods known as GLW [START_REF] Gronau | [END_REF], ADS [12], and GGSZ [13], and the time dependent analysis of B 0 s → D s K decays. The penguin process is studied using the analysis of B → hh decays.
γ with tree level decays
The GLW and ADS methods are counting experiments. The first one is interested in B + → D 0 K + decays with D 0 → π + π - or D 0 → K + K -, while the second one relies on specific D decays D 0 → K + π -. The advantage of the ADS and GLW methods used together is to mix Cabibbo-allowed and Cabibbo-suppressed decays, as illustrated in Figure 4. In parallel to the charged modes, the neutral decays B 0 d → D 0 K * 0 are also studied in LHCb, together with the multi-body decays B + → D(K ± π ∓ π - π + )K +.

Table 1: Expected yield and B/S for 2 fb -1 for the Cabibbo-allowed and suppressed modes used in the extraction of the CKM angle γ.
The expected yields for 2 fb -1 of data and the B/S for the charged and neutral modes are given in Table 1. The sensitivity determined on those data samples is given in Table 2.
The GGSZ method consists in the Dalitz plot analysis of B ± → D(K 0 S π + π -)K ± decays. The LHCb experiment expects to collect 6.1 k events for 2 fb -1, with a B/S < 1.1. The extraction of γ uses two techniques giving two different sensitivities:
• Amplitude fit: σ(γ) = 9.8°,

Table 2: Sensitivity to the measurement of the CKM angle γ with 2 fb -1 using the ADS and GLW methods. In the neutral decay case, the sensitivity depends on the value of the strong phase δ B 0, also extracted from the data.
Channel            σ(γ) (°)
B ± → DK ±         13.8
B 0 → DK * 0       5.2 - 12.7

The time-dependent analysis of B 0 s → D s K decays relies on LHCb's particle identification system, with a K-π separation in the RICH. The yield is expected to be 6.2 k with a B/S = 0.7 for 2 fb -1. Figure 5 shows the reconstructed and fitted decay rate for B 0 s → D - s K + decays. The expected sensitivity on the CKM angle γ using only the time-dependent analysis is σ(γ) = 10.3°, relying on the knowledge of the B 0 s mixing angle β s extracted from B 0 s → J/ψφ decays.
γ with penguin level decays
The interest of the B → hh decays is the fact that the penguin contribution to the decay diagram is not negligible compared to the tree amplitude. Therefore this decay mode is sensitive to New Physics. The corresponding Feynman diagrams are given in Figure 6.
Like the analysis of B 0 s → D - s K + decays, the measurement of the CKM angle γ
Conclusion
The LHCb experiment is dedicated to the measurement of CP violation and the search for rare decays in the b quark sector. In this document, we presented the current results of the prospective studies towards CP violation measurements. The expected sensitivity on the B 0 s mixing phase φ J/ψφ s with 2 fb -1 of data is 0.03 rad. This sensitivity enables the LHCb experiment to probe New Physics at the level of 5σ with 0.2 fb -1 in case the current central value given by the TeVatron is confirmed. The measurement of the CKM angle γ in tree-dominated decays, B → DK, is extracted using three analyses, giving a combined sensitivity of σ(γ) = 5°. The measurement of γ in penguin decays, B → hh, gives a sensitivity of σ(γ) = 8° assuming U-spin symmetry. Those sensitivities should be compared to the current world average of σ(γ) ≈ 20° [START_REF] Barberio | Heavy Flavor Averaging Group[END_REF].
Figure 1: B 0 s → J/ψφ decays with and without B 0 s - B 0 s oscillation. The interference between both processes gives rise to the φ J/ψφ s phase.
Figure 2: Expected LHCb sensitivity on φ J/ψφ s = -2β s as a function of the integrated luminosity. The plot on the left indicates the evolution for nominal LHC running conditions, E CM = 14 TeV, while the plot on the right shows the same evolution, but with an LHC energy of E CM = 10 TeV. We also indicate the expected sensitivity for the TeVatron and the Standard Model prediction.
Figure 3: Illustration of the B 0 s → φφ decays, with and without B 0 s - B 0 s oscillation.
Figure 4: Feynman diagrams at leading order of the B → DK decays. Left: Cabibbo-allowed decay. Right: Cabibbo-suppressed decay. The phase difference between the two modes is γ - 2β.

Table 1:
                      B +         B 0 d
Cabibbo allowed       84 k        4 k
Cabibbo suppressed    1.6 k       360
B/S                   0.6 - 3.2   0.2 - 13.5
Figure 5: Reconstructed and fitted B 0 s → D - s K + decay rate.
Figure 7: Reconstructed ππ invariant mass for different decay types. Each mode is identified using a PID hypothesis for the reconstructed pions. Without PID, there would be no separation.
04120908 | en | [
"sdv"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04120908/file/Durantel%20Antiviral%20Res%202023%20version%20accept%C3%A9e.pdf | David Durantel
email: [email protected].
D Durantel Antiviral
Therapies against chronic hepatitis B infections: The times they are a-changin', but the changing is slow!
Keywords:
Preambular nota bene: As a tribute to Dr Mike Bray, the following review of the literature will be mainly based on published data and concepts, but will also contain my personal views, and in this respect could rather be considered as a bioassay.
Even though a cost-effective and excellent prophylactic vaccine has existed for many years to protect against hepatitis B virus (HBV) infection, academic researchers, drug developers and stakeholders are still busy with the R&D of novel therapies that could eventually have an impact on its worldwide incidence. The Taiwanese experience has unequivocally demonstrated the effectiveness of constrained national HBV prophylactic vaccination programs to prevent the most dramatic HBV-induced end-stage liver disease, which is hepatocellular carcinoma; yet the number of individuals chronically infected with the virus, for whom the existing prophylactic vaccine is no longer useful, remains high, with around 300 million individuals around the globe.
In this review/bioassay, recent findings and novel concepts on prospective therapies against HBV infections will be discussed; yet it does not pretend to be exhaustive, as "pure immunotherapeutic concepts" will be mainly left aside (or referred to other reviews) due to a lack of expertise of this writer, but also due to the lack of, or incremental, positive results in clinical trials as of today with these approaches.
List of abbreviations (in order of appearance)
Fig. 1. HBV life cycle [START_REF] Seeger | Hepadnaviruses[END_REF][START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF]. HBV enters hepatocytes via the NTCP receptor; the EGF receptor modulates this entry process. The incoming nucleocapsid is released into the cytoplasm and reaches the nuclear pore, where the HBV genome (rcDNA) is uncoated and delivered to the nucleus. Within the nucleus, rcDNA is converted into cccDNA and the latter is "chromatinized" (host and viral factors bind/associate) to become transcriptionally active. From cccDNA, all HBV RNAs can be transcribed, including mRNAs and pgRNA. These RNAs are used to encode all viral proteins. The pgRNA, covalently associated with one copy of the HBV polymerase, is incorporated into the capsid to form immature nucleocapsids. The HBV polymerase performs a reverse transcription of the pgRNA into novel rcDNA within this capsid; a mature nucleocapsid is formed. This mature nucleocapsid has two fates: it can be redirected to the nucleus ("recycling" process) to maintain or amplify the cccDNA pool, or it can be enveloped, likely within multi-vesicular bodies (MVB), to generate infectious particles (called Dane particles). Besides these virions, two other viral proteins are secreted in large excess: HBeAg and HBsAg, the latter forming subviral particles (SVPs). Most classes of direct and indirect antivirals are indicated on this life cycle in order to pinpoint where they are acting. Abbreviations are listed elsewhere in the manuscript. The picture of the life cycle was edited from reference [START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF], a review already published in Antiviral Research.
PBMC peripheral blood mononuclear cells
FXR farnesoid X receptor
Current context
Between 250 and 350 million individuals are estimated to be chronically infected by the hepatitis B virus (HBV), and at risk of developing end-stage liver diseases (e.g. decompensated cirrhosis, hepatocellular carcinoma (HCC)) (WHO fact sheets). Worldwide, only around 5-10% of chronically infected (CHB) patients are diagnosed, and 1-2% are or have been treated with either nucleos(t)ide analogues (NAs) and/or pegylated-interferon-alpha (peg-IFN-α) (EASL 2017 Clinical Practice Guidelines, 2017;[START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF]. Treatments are mainly prescribed by hepatologists and only considered when an "active hepatitis" is evidenced, notably by elevated serum liver enzyme levels (e.g. ALT) and a risk of progression of fibrosis (EASL 2017 Clinical Practice Guidelines, 2017); this means that many identified infected patients are not treated, including those with a "highly-viremic" HBe-positive infection with normal ALT who can transmit the infection by blood contact or sexual intercourse. This rather "atypical" recommendation, compared to others in the infectious disease field (e.g. HIV, for which treatment is nowadays proposed as soon as the diagnosis is made, whereas 10 years ago only the CD4 count prompted the decision to treat), has some implications that will be discussed in the last section of this review.
As of today, there are two types of drugs/biologics clinically approved to treat chronic hepatitis B (CHB) patients (EASL 2017 Clinical Practice Guidelines, 2017;[START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF]. Peg-IFN-α, a biologic, which is weekly and subcutaneously administered for generally 48 weeks, leads to good virological (and biochemical) responses during treatment, but only a weak sustained response (SR) after cessation of treatment, with less than 10% of HBsAg loss/seroclearance; HBsAg seroclearance (6 months after cessation of treatment), associated with virosuppression and associated or not with seroconversion to anti-HBs antibody, defines what is commonly called a functional cure. Moreover, peg-IFN-α is often associated with side effects and is poorly tolerated, at least partially due to a systemic exposure to this cytokine (EASL 2017 Clinical Practice Guidelines, 2017;[START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF]. NAs, which are orally administered, lead to a strong viral suppression (i.e. reduction of serum viremia) in the majority of patients; the problem of resistance to this class of drugs has been mitigated by the discovery of 2nd generation NAs with high antiviral potency and a strong genetic barrier to resistance (i.e. tenofovir or entecavir) (EASL 2017 Clinical Practice Guidelines, 2017;[START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF]. While the safety profile of NAs is very good to excellent, enabling life-long therapy to prevent relapse of viremia and reduce the risk of developing HCC, the effect of NAs on HBsAg and cccDNA levels is only marginal (EASL 2017 Clinical Practice Guidelines, 2017;Wong et al., 2022;[START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF]). Yet NAs, when compliance is good, have demonstrated long-term benefits in treated patients by lowering their risk of developing end-stage liver diseases (cirrhosis and HCC) to 2-10 fold that of the healthy population [START_REF] Papatheodoridis | Similar risk of hepatocellular carcinoma during long-term entecavir or tenofovir therapy in Caucasian patients with chronic hepatitis B[END_REF].
In order to improve current therapies and move towards more universal and finite duration regimens, faster and stronger, direct or indirect, effects on cccDNA and/or HBsAg levels have to be achieved with novel drugs, together with older ones, and combination of them. The main objective is to increase the rate of functional cure (i.e. virosuppression + HBsAg seroclearance/loss off-treatment) from 10% to as much as possible (25-30% would be a good step forward [START_REF] Cornberg | Guidance for design and endpoints of clinical trials in chronic hepatitis B -Report from the 2019 EASL-AASLD HBV Treatment Endpoints Conference( ‡)[END_REF]) with novel combination therapies of 2-4 drugs, given either concomitantly or in a sequential manner. HBsAg seroclearance in CHB patients, which can be either obtained naturally or following a treatment, is associated to a greater reduction of the risk of HCC incidence as compared to that obtained with long-term NA-treatment and of course as compared to general population [START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF]Diao et al., 2022a).
As nicely exemplified in the case of HIV, the implementation of more efficient, cost-effective and safe combination therapies is not an easy task [START_REF] Gibas | Twodrug regimens for HIV treatment[END_REF]. Moreover, the "starting" point in the case of CHB is high, with NA-treated CHB patients who are rather satisfied with their well-tolerated "one pill-a-day" treatment, and industrial partners who find it more and more difficult to balance the "effort to profit ratio" in this very difficult field of research, despite a great potential market for treating the vast majority of undiagnosed/untreated infected individuals. The last registration of a novel class of drug in the HBV field took place around 25 years ago; this well illustrates the difficulty of improving HBV therapeutic strategies and the risk of potential major failures in clinical trial evaluations.
Strategies to improve the inhibition of mature HBV nucleocapsid biogenesis, leading to virosuppression
Nucleos(t)ide analogue drugs are potent, safe, and rather inexpensive drugs that are backbones of HIV and HCV therapies, and are also currently used for the treatment of HBV infections [START_REF] Analogues | LiverTox: Clinical and Research Information on Drug-Induced Liver Injury[END_REF][START_REF] Li | Drug Discovery of Nucleos(t)ide Antiviral Agents: Dedicated to Prof[END_REF]. Entecavir (ETV), tenofovir disoproxil fumarate (TDF) and its novel prodrug tenofovir alafenamide (TAF) are the three most used 2nd/3rd generation NAs in the HBV field [START_REF] Pierra Rouviere | HBV replication inhibitors[END_REF]. In the hepatocyte HBV life cycle, NAs inhibit the reverse transcription of pregenomic RNA (pgRNA) into relaxed-circular DNA (rcDNA); the latter enters into the composition of "mature nucleocapsids" that are subsequently incorporated (by budding into HBsAg-containing host lipid membranes) into neo-produced/secreted HBV virions (Fig. 1). This is the reason why NAs lead to a decline/elimination of viremia in serum, called virosuppression. Moreover, one specificity of the HBV life cycle is that mature nucleocapsids, which contain neo-reverse-transcribed rcDNA, can also be "recycled" to the nucleus of infected cells in order to maintain/replenish the cccDNA pool (Fig. 1) [START_REF] Seeger | Hepadnaviruses[END_REF][START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF]. NAs could then theoretically induce, if active at 100% on their enzymatic target, a decline and even a loss of cccDNA in the long term, and this loss of cccDNA could in turn contribute to HBsAg seroclearance. However, one should nevertheless keep in mind that HBsAg can also be produced from integrated forms of HBV, and not from cccDNA anymore (in particular in HBe-negative patients with low HBsAg levels) [START_REF] Salpini | Hepatitis B virus DNA integration as a novel biomarker of hepatitis B virus-mediated pathogenetic properties and a barrier to the current strategies for hepatitis B virus cure[END_REF]; in these patients a decline of cccDNA levels could therefore have no effect on HBsAg levels. But this is only mathematical reasoning assuming 100% target engagement. The reality is that, with the second generation of NAs, such an HBsAg clearance is obtained after 10 years of treatment in only a few % of cases (0-5% according to studies) in clinic (EASL 2017 Clinical Practice Guidelines, 2017;[START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF], thus suggesting that a strict 100% inhibition of the HBV polymerase is currently unlikely to be achieved and that residual levels of mature nucleocapsids are still produced under treatment, sufficient to maintain the cccDNA pool. Then, should more potent 3rd/4th generation NAs be developed? The relative lack of potency of anti-HBV NAs has been recently evidenced in clinical trials, including for instance the Vebicorvir (VBR) phase-2 clinical trial (NCT03577171), in which VBR, a capsid assembly modulator (CpAM; see below for this class of drug), was combined with ETV.
Indeed, it was shown that the combination of ETV + VBR induced a greater virosuppression compared to ETV alone, indirectly showing that ETV does not reach full potency at inhibiting mature nucleocapsid formation [START_REF] Sulkowski | Safety and efficacy of vebicorvir administered with entecavir in treatment-naïve patients with chronic hepatitis B virus infection[END_REF]. Therefore, YES, in my view, novel NAs should be developed. The best anti-HBV NA used in clinic is in the double-digit nanomolar range in terms of activity. In the case of HCV, direct-acting antivirals (DAAs) active in the picomolar range have been developed and are currently used in clinic (EASL recommendations on treatment of, 2020); some room for improvement does exist for anti-HBV NAs. In this respect, a novel derivative of entecavir (called E-CFCP), which is more potent (0.7-1.8 nM depending on assays), more effective on NA-resistant strains, and has a long-lasting effect in vitro and in preclinical mouse models, has been described [START_REF] Higashi-Kuwata | Identification of a novel long-acting 4'-modified nucleoside reverse transcriptase inhibitor against HBV[END_REF] and should be further evaluated in clinical trials. In addition to potency, a better delivery or activation of the NAs within the liver/hepatocytes should also be considered to improve anti-HBV NAs; this concept was recently discussed in another review published in Antiviral Research [START_REF] Pierra Rouviere | HBV replication inhibitors[END_REF]. Relative to this concept, it is worth noting that ATI-2173, a pro-drug of clevudine and the first "Active Site Polymerase Inhibitor Nucleotide (ASPIN)" drug candidate (developed by Antios Therapeutics), has shown improved delivery/activation to the liver and interesting phase-1b/2a results [START_REF] Squires | A randomized phase 1b trial of the active site polymerase inhibitor nucleotide ATI-2173 in patients with chronic hepatitis B virus infection[END_REF][START_REF] Tomas | Sustained 12 week off treatment antiviral efficacy of ATI-2173, a novel active site polymerase inhibitor nucleotide, combined with tenofovir disoproxil fumarate in chronic hepatitis B patients, a phase 2a clinical trial[END_REF]. In particular, a longer antiviral activity was observed off-treatment when ATI-2173 was combined with TDF [START_REF] Squires | A randomized phase 1b trial of the active site polymerase inhibitor nucleotide ATI-2173 in patients with chronic hepatitis B virus infection[END_REF][START_REF] Tomas | Sustained 12 week off treatment antiviral efficacy of ATI-2173, a novel active site polymerase inhibitor nucleotide, combined with tenofovir disoproxil fumarate in chronic hepatitis B patients, a phase 2a clinical trial[END_REF]; this could argue for a possibly stronger effect on cccDNA, as observed in the woodchuck model with clevudine derivatives [START_REF] Hui | Clevudine for the treatment of chronic hepatitis B virus infection[END_REF][START_REF] Peek | Antiviral activity of clevudine [L-FMAU, (1-(2-fluoro-5-methyl-beta, Larabinofuranosyl) uracil)] against woodchuck hepatitis virus replication and gene expression in chronically infected woodchucks (Marmota monax)[END_REF].
Another way to improve the inhibition of mature nucleocapsid formation and the subsequent virosuppression could come from the R&D of drugs targeting HBV nucleocapsid assembly; these drugs are mainly called "core assembly modulators" (CpAMs) [START_REF] Viswanathan | Targeting the multifunctional HBV core protein as a potential cure for chronic hepatitis B[END_REF]. CpAMs act "upstream" of NAs (in the pathway of mature nucleocapsid formation) by preventing the incorporation of pgRNA into the capsid and therefore its subsequent conversion into rcDNA by the HBV polymerase, the latter being exclusively active within an already formed capsid (Fig. 2) [START_REF] Seeger | Hepadnaviruses[END_REF][START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF]. Many CpAMs, belonging to two main classes in terms of mode of action (Class-Em or Class-Ab), have been identified and some of them evaluated in clinical trials [START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF][START_REF] Viswanathan | Targeting the multifunctional HBV core protein as a potential cure for chronic hepatitis B[END_REF]. The first and second generations of CpAMs (e.g. BAY41-4109, AT130, NVR 3-778, JNJ-6379/Bersacapavir, Vebicorvir/ABI-H0731 …) were not potent enough (and/or not safe enough) in monotherapy or combination therapies in clinical trials and have been either discontinued or slowed down/put on hold in their clinical development. What was expected with CpAMs alone, or in combination with other molecules, was a faster decline of HBsAg levels, which could theoretically be explained by an improved inhibition of mature nucleocapsid formation leading to a faster cccDNA decline following a "full stop" of the "recycling" phenomenon. Maybe expectations were initially too high regarding these CpAMs, whose main mode of action is not a direct inhibition of HBsAg expression/secretion, and the conditions of their evaluation too harsh, with results expected in the short term. First and 2nd generation CpAMs were however rather good at reinforcing virosuppression in combination with NAs, as demonstrated in the Vebicorvir trials [START_REF] Sulkowski | Safety and efficacy of vebicorvir administered with entecavir in treatment-naïve patients with chronic hepatitis B virus infection[END_REF] (Yuen et al., 2022a), and this could already be interesting enough if treatments could be extended over several years; yet the current evaluation "standard", for novel therapy design and demonstration of efficacy in clinical trials, is a 48-week treatment period for a short finite-duration regimen aiming at an increase of the HBsAg seroclearance rate to around 30%. Maybe this standard is too ambitious and should be revisited
C O R R E C T E D P R O O F D. Durantel Antiviral Research xxx (xxxx) 105515
ited to give another chance, in particular to CpAMs that are overall great DAAs. Nevertheless, the lack of potency of previous generations of CpAMs has been now tackled and more potent (ultra-potent) CpAMs (with EC 50 going from hundreds of nM down to single digit nM or even pM range [START_REF] Unchwaniwala | ABI-4334, a Novel Inhibitor of Hepatitis B Virus Core Protein, Promotes Formation of Empty Capsids and Prevents cccDNA Formation by Disruption of Incoming Capsids[END_REF]Zhang et al., 2020); ABI-4334, ATI-1428, AB-836, ALG-000184 …) have now entered phase-1 clinical trials (Gane et al., 2022a). Hopefully these novel CpAMs will be evaluated with better clinical endpoints and in longer clinical trials to obtain a full power of this class of drugs.
In contrast to NAs, CpAMs are also capable of reducing seric HBV RNAs and, to some extent, secreted HBcrAg (core-related HBV antigen; i.e. a composite marker made of secreted forms of the HBc and HBe proteins) [START_REF] Sulkowski | Safety and efficacy of vebicorvir administered with entecavir in treatment-naïve patients with chronic hepatitis B virus infection[END_REF][START_REF] Yuen | Antiviral activity, safety, and pharmacokinetics of capsid assembly modulator NVR 3-778 in patients with chronic HBV infection[END_REF](Yuen et al., 2020a, 2021a, 2022a);[START_REF] Zoulim | JNJ-56136379, an HBV capsid assembly modulator, is well-tolerated and has antiviral activity in a phase 1 study of patients with chronic infection[END_REF]. Moreover, in vitro, CpAMs were shown to have several modes of action besides their primary target, which is of course the assembly of nucleocapsids. Indeed, at a higher concentration, they have been shown to block cccDNA establishment (secondary MoA) [START_REF] Berke | Capsid assembly modulators have a dual mechanism of action in primary human hepatocytes infected with hepatitis B virus[END_REF][START_REF] Lahlali | Novel potent capsid assembly modulators regulate multiple steps of the hepatitis B virus life cycle[END_REF], HBeAg biogenesis (tertiary MoA) [START_REF] Lahlali | Novel potent capsid assembly modulators regulate multiple steps of the hepatitis B virus life cycle[END_REF], as well as HBV RNA biogenesis for the most potent of them (quaternary MoA) (Fig. 1); the latter was evidenced in vitro and in animal models under very long-term treatment conditions (Lahlali et al., 2018) (and our unpublished data). This inhibitory effect on HBV biogenesis is likely due to the regulatory functions of the HBc/core protein, which can be found associated with cccDNA and HBV RNAs in the nucleus of infected hepatocytes [START_REF] Lucifora | Evidence for long-term association of virion-delivered HBV core protein with cccDNA independently of viral protein production[END_REF] (and our unpublished data) and likely plays roles in cccDNA transcription and post-transcriptional events [START_REF] Chabrolles | Hepatitis B Virus Core Protein Nuclear Interactome Identifies SRSF10 as a Host RNA-Binding Protein Restricting HBV RNA Production[END_REF][START_REF] Taverniti | Capsid assembly modulators as antiviral agents against HBV: molecular mechanisms and clinical perspectives[END_REF]. All these additional MoAs are dependent on the concentration of CpAMs and therefore on their potency. It is therefore expected that 4th-generation CpAMs, which are ultra-potent, could lead to additional inhibitory phenotypes in vivo, including a faster and stronger reduction of cccDNA and consequently of HBsAg levels. Repetitive failures of CpAMs (of the 1st/2nd generations) to demonstrate a real "added value" in combination with NAs and/or RNAi in various clinical trials have almost disqualified this class of drugs; but great hopes reside in the 3rd/4th generations of CpAMs.
Is there a place for HBV ribonuclease H (RNaseH) inhibitors to improve virosuppression? NAs target the reverse transcription activity of the HBV polymerase. But HBV-pol also contains an RNaseH domain, which is important to degrade RNA templates during the RT process [START_REF] Edwards | Shedding light on RNaseH: a promising target for hepatitis B virus (HBV)[END_REF]; this enzymatic activity is considered a possible drug target, with the identification of three different chemical families that inhibit its activity [START_REF] Edwards | Shedding light on RNaseH: a promising target for hepatitis B virus (HBV)[END_REF]Li et al., 2021a). Structure-activity-relationship studies are on-going to improve the potency and toxicological profile of these drugs, which are rather low as of today. Mechanistically speaking, these inhibitors should reinforce the inhibition of mature nucleocapsid formation, which can be obtained with NAs and/or CpAMs (Fig. 1). In addition, blocking the RT process in-between pgRNA and rcDNA could generate non-physiologic replication intermediates (RNA/DNA hybrids), which in turn may serve as PAMPs (pathogen-associated molecular patterns) to activate the innate immunity/IFN response. Knowing that HBV-infected hepatocytes are immunologically "cold" (because HBV is a rather stealth virus [START_REF] Wieland | Genomic analysis of the host response to hepatitis B virus infection[END_REF]), a strategy that could turn these "cold" cells into "hot" ones (i.e. cells producing alarming IFNs or cytokines) is worth investigating. A combination of CpAMs and RNAseH inhibitors, which could reinforce this phenotype, could be interesting to test in that respect.
Strategies to, directly or indirectly, improve HBsAg loss/seroclearance
All strategies that will have an impact on the level or transcriptional activity of cccDNA will end up having an impact on HBsAg level, as cccDNA is the main template for all HBV RNA biogenesis [START_REF] Seeger | Hepadnaviruses[END_REF][START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF]; the exception to this is found in HBeAg-negative patients well advanced in their natural history, in whom HBsAg can also be significantly expressed from HBV integrated into the host genome [START_REF] Salpini | Hepatitis B virus DNA integration as a novel biomarker of hepatitis B virus-mediated pathogenetic properties and a barrier to the current strategies for hepatitis B virus cure[END_REF]. A section on the direct targeting of cccDNA level and activity can be found just below. Building on the previous section and concepts, in theory, a complete level of virosuppression and 100% inhibition of the formation of mature nucleocapsids should be good enough to allow cccDNA decline by both blockade of the reinfection of novel hepatocytes and blockade of the recycling of mature nucleocapsids, subsequently leading to HBsAg decline. So far, we have discussed NAs, CpAMs and RNAseH inhibitors. In this section we will focus on other antiviral molecules that can have a more direct impact on HBsAg biogenesis and/or secretion and/or seric level. Some of these molecules could also reinforce the inhibition of mature nucleocapsid biogenesis and therefore virosuppression, thus showing that feed-forward amplification loops are at play.
Small interfering RNAs (siRNA or RNAi) targeting HBV sequences can induce a rather specific degradation of all HBV RNAs, provided that they are designed to target a relevant 3′ sequence common to all RNAs. By doing so, HBV-targeting RNAi are theoretically capable of suppressing the expression of all HBV proteins (HBx, HBsAg, Cp, etc …) and the synthesis of pgRNA, which are all instrumental for the generation of viral progeny (van den [START_REF] Van Den Berg | Advances with RNAi-based therapy for Hepatitis B virus infection[END_REF] (Fig. 1). As HBV proteins are endowed with crucial proviral functions (HBx allows cccDNA transcription by inducing degradation of the restriction factor Smc5/6 [START_REF] Decorsiere | Hepatitis B virus X protein identifies the Smc5/6 complex as a host restriction factor[END_REF]; Cp allows capsid formation [START_REF] Seeger | Hepadnaviruses[END_REF]; HBV-pol allows reverse transcription [START_REF] Seeger | Hepadnaviruses[END_REF][START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF]; envelope proteins allow virion and subviral particle (SVP) formation [START_REF] Seeger | Hepadnaviruses[END_REF][START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF], …), a feed-forward amplification loop of inhibitory phenotypes is expected, and HBV-RNAi could be seen as a "holy grail antiviral" (van den [START_REF] Van Den Berg | Advances with RNAi-based therapy for Hepatitis B virus infection[END_REF]. But the main problem of this class of biologic/antiviral resides in its delivery to the liver and, within the liver, to all infected cells. This galenic problem is far from trivial and is likely the reason why clinical trials have so far led to good, but not outstanding, results, in particular in monotherapies. So far, clinical-stage RNAi have been either embedded in proprietary lipid formulations or conjugated with moieties known to allow hepatocyte delivery (i.e. essentially N-acetylgalactosamine; GalNac) (van den [START_REF] Van Den Berg | Advances with RNAi-based therapy for Hepatitis B virus infection[END_REF][START_REF] Dowdy | RNA therapeutics (almost) comes of age: targeting, delivery and endosomal escape[END_REF][START_REF] Hui | RNA interference as a novel treatment strategy for chronic hepatitis B infection[END_REF][START_REF] Springer | GalNAc-siRNA conjugates: leading the way for delivery of RNAi therapeutics[END_REF]. There are currently 5 RNAi (JNJ-3989, VIR-2218, AB-729, ALG-125755, RG6346) evaluated in monotherapies and/or combinations in clinical trials [START_REF] Fitzgerald | The HBV siRNA, ALG-125755, demonstrates a favourable nonclinical profile and significant and durable hepatitis B surface antigen reductions in the AAV-HBV mouse efficacy model[END_REF]J. Hepatol. 73, S20, 2021a, 2021b;[START_REF] Lim | Longer treatment duration of monthly VIR-2218 results in deeper and more sustained reductions in hepatitis B surface antigen in participants with chronic hepatitis B infection[END_REF]Yuen et al., 2020b, 2021b, 2022b, 2022c). They mostly target the HBx region within the genome, leading to the virtual degradation of all HBV RNAs (due to the 3′ overlapping of all HBV RNAs), including RNAs coming from integrated HBV genomes; but recently, RNAi targeting the envelope protein genes have also been developed.
These RNAi gene silencers are generally rather safe in monotherapies and associated with rather potent and durable HBsAg declines (1-3 log10; sometimes higher, leading to HBsAg seroclearance in a few patients). Interestingly, they are subcutaneously injected in a monthly (or even less frequent) manner, which is very convenient for patients. The absence of ALT flares when used in monotherapies, which is a good thing where safety is concerned, could nevertheless be seen as a problem, as moderate and clinically controlled ALT flares (so-called "good flares") might be needed to foster HBV cure. This also suggests that HBsAg decline may not be sufficient to restore immune responses against infected cells, as initially thought. Combination of RNAi with immune stimulators might be needed to improve the rate of functional cure; this will be discussed in the combination section.
Antisense oligonucleotides (ASO) can also be used to block the translation of HBV mRNAs, including HBsAg ones. One ASO has reached phase-2 clinical development: GSK3228836/Bepirovirsen (Yuen et al., 2021c), which is a "naked" antisense oligonucleotide (a version with a GalNac moiety, i.e. GSK3389404, was less efficient (Yuen et al., 2022d)). This asset led to a profound decline of HBsAg levels, 3-4 log10, after only 28 days of administration (once a week subcutaneously). As opposed to RNAi, moderate elevations of ALT could explain the potency and durability of the HBsAg decline/seroclearance. In the very recently published B-Clear phase-2b study (Yuen et al., 2022e), it was shown that a dose of 300 mg per week for 24 weeks led to a sustained response in terms of HBsAg decline below the lower limit of detection (i.e. functional cure) in 9-10% of patients with CHB, who were either co-administered or not with a NA. Even if not reaching undetectability, 65-68% of patients treated with Bepirovirsen (alone or with a NA) had their HBsAg level go below 100 IU/mL, which is very encouraging for this drug. Longer treatment periods are currently being investigated to determine whether the rate of functional cure can be improved upon; even if such an improvement is not observed, the durability of the decrease of HBsAg under 100 IU/mL could allow stopping Bepirovirsen and the associated NA, as well as investigating functional cure post-treatment (see the concept of the stopping-NA strategy hereafter). The safety of this drug is being closely investigated (B-Sure trial; NCT04954859), as a few patients had to stop the treatment during trials because of drug-induced adverse effects. The possibility that Bepirovirsen is more active than RNAi because of a possible activation of TLR sensors (likely TLR8; but TLR9 was also evoked) is being investigated [START_REF] You | Reply to: bepirovirsen/GSK3389404: antisense or TLR9 agonists?[END_REF]; such a possible dual mode of action (i.e. inhibition of mRNA translation + innate immune stimulation) is for sure in the spirit of what needs to be done to functionally cure HBV (see later). More results from trials with Bepirovirsen alone or in combination with other assets are expected soon.
Acting downstream of RNAi and ASO in the HBV life cycle, nucleic acid polymers (NAPs; e.g. REP 2139) have also been shown to rapidly reduce circulating HBsAg or subviral particles (SVPs) (Vaillant, 2022a, 2022b). Indeed, REP 2139 neither induces HBV RNA degradation, nor inhibits HBsAg mRNA translation in infected hepatocytes, but it prevents HBsAg secretion, likely by targeting the host factor DNAJB12 involved in subviral particle (SVP) biogenesis (Vaillant, 2022a). The anti-HBV activity of all NAPs is sequence-independent and driven by a length-dependent (40-mers being optimal) and phosphorothioation (hydrophobic)-dependent interaction with amphipathic alpha helices present in target proteins (Vaillant, 2022a, 2022b). In a recently published phase-2 trial, it was shown that the addition of REP 2139 to TDF + peg-IFN-α significantly increased the rate of HBsAg loss (associated with anti-HBs antibody seroconversion) after 48 weeks of therapy. A functional cure was observed in 14 of 40 participants (i.e. 35% of cases), which is an unprecedented rate of cure in the CHB field [START_REF] Bazinet | Safety and efficacy of 48 Weeks REP 2139 or REP 2165, tenofovir disoproxil, and pegylated interferon alfa-2a in patients with chronic HBV infection naïve to nucleos(t) ide therapy[END_REF]. However, there are several drawbacks to the use of REP 2139: it has to be combined with peg-IFN-α for full efficacy, it has to be administered every day by intravenous infusion, and its long-term safety profile is yet to be investigated in larger cohorts of patients [START_REF] Durantel | Nucleic acid polymers are effective in targeting hepatitis B surface antigen, but more trials are needed[END_REF].
Two final types of assets able to reduce HBsAg levels are also worth mentioning.
First, small molecules of the dihydroquinolizinone chemical series, including the orally available RG7834 (initially developed by Roche), were identified as able to accelerate the degradation of HBsAg mRNAs and therefore to reduce intracellular HBsAg synthesis/expression in infected cells and, in turn, HBsAg/SVP secretion [START_REF] Mueller | A novel orally available small molecule that inhibits hepatitis B virus expression[END_REF]. RG7834 was shown to target the PAPD5/7 host factors, which are involved in the polyadenylation of HBV RNAs and their stabilization [START_REF] Mueller | PAPD5/7 are host factors that are required for hepatitis B virus RNA stabilization[END_REF]. The first generation of these drugs was shown to be very toxic (neurotoxic in particular) in vivo; this could have been anticipated given that PAPD5/7 also stabilize host RNAs. Efforts to improve the delivery of this type of drug to the liver, or to increase its specificity of action on HBV RNAs (and no longer on host RNAs), are being made by several academic and corporate researchers to give rise to second-generation compounds with better safety profiles (e.g. GSK3965193, GST-HG131 …) [START_REF] Li | Significant in Vitro and in Vivo Inhibition of HBsAg and HBV pgRNA with ASC42, a Novel Non-steroidal FXR Agonist[END_REF].
Finally, another class of molecules is anti-HBs antibodies. Immunoglobulins (HBIG; mainly targeting HBsAg) isolated from vaccinated patients have been used for many years for the passive prevention of reinfection of liver grafts and of mother-to-child transmission (Chen et al., 2020;[START_REF] Orfanidou | Antiviral prophylaxis against hepatitis B recurrence after liver transplantation[END_REF]). Injection of these immunoglobulins in CHB patients has, in contrast, had little value. But recent research in the fields of HIV and, more recently, Sars-CoV2 has revived interest in monoclonal antibodies endowed with neutralizing capacity, as well as ADCC (Antibody-Dependent Cell Cytotoxicity), ADCP (Antibody-Dependent Cell Phagocytosis) (and other) properties. Quite recently, broadly neutralizing monoclonal antibodies have been identified in natural controllers of HBV infection or in vaccinees and characterized [START_REF] Beretta | Advances in human monoclonal antibody therapy for HBV infection[END_REF][START_REF] Hehle | Potent human broadly neutralizing antibodies to hepatitis B virus from natural controllers[END_REF]. Concomitantly, several monoclonal antibodies directed against the main antigenic loop (the "a" determinant) of the HBsAg protein (i.e. the small HBs protein) have been reported [START_REF] Burm | A human monoclonal antibody against HBsAg for the prevention and treatment of chronic HBV and HDV infection[END_REF][START_REF] Golsaz-Shirazi | Construction of a hepatitis B virus neutralizing chimeric monoclonal antibody recognizing escape mutants of the viral surface antigen (HBsAg)[END_REF][START_REF] Kucinskaite-Kodze | New broadly reactive neutralizing antibodies against hepatitis B virus surface antigen[END_REF][START_REF] Wang | A human monoclonal antibody against small envelope protein of hepatitis B virus with potent neutralization effect[END_REF][START_REF] Wu | Mapping the Conformational Epitope of a Therapeutic Monoclonal Antibody against HBsAg by in Vivo Selection of HBV Escape Variants[END_REF][START_REF] Zhang | Establishment of monoclonal antibodies broadly neutralize infection of hepatitis B virus[END_REF]. The best MAbs should recognize conformational epitopes, have strong and broad neutralizing activity (EC50 at single-digit ng/mL, or below) and ideally should have been engineered in their Fc domain to enhance ADCC and/or ADCP functionalities; this is the case of VIR-3434 (VIR biotech) and Lenvervimab (GC Pharma) [START_REF] Wu | Mapping the Conformational Epitope of a Therapeutic Monoclonal Antibody against HBsAg by in Vivo Selection of HBV Escape Variants[END_REF][START_REF] Gupta | Preliminary pharmacokinetics and safety in healthy volunteers of VIR-3434, a monoclonal antibody for the treatment of chronic hepatitis B infection[END_REF], which have successfully entered phase-2 clinical studies, following the evaluation of their safety and efficacy on HBsAg levels in phase-1/2a studies. Their combination with other assets should be interesting.
Strategies to target cccDNA
CccDNA represents the ultimate target in the HBV life cycle in order to achieve a functional cure and maybe, in a far future, a more complete cure [START_REF] Martinez | Covalently closed circular DNA: the ultimate therapeutic target for curing HBV infections[END_REF][START_REF] Seeger | Control of viral transcripts as a concept for future HBV therapies[END_REF]. As already mentioned, all strategies that will have an impact on the level (i.e. by degradation or loss through cell division) or transcriptional activity (i.e. transcriptional repression) of cccDNA should end up having an impact on HBsAg levels, as cccDNA is the main template for all HBV RNA biogenesis. And, as some viral proteins (e.g. HBx, and maybe Cp) are important to maintain cccDNA transcriptional activity, a feed-forward inhibitory amplification loop could be expected.
A degradation and/or an inhibition of the transcriptional activity of cccDNA has mainly been observed in vitro with innate immune cytokines, including interferons, lymphotoxin beta receptor agonists, TLR2 and TLR3 agonists, and several pro-inflammatory cytokines (e.g. IL-6, IL-1b …) (Delphin et al., 2021;[START_REF] Durantel | New antiviral targets for innovative treatment concepts for hepatitis B virus and hepatitis delta virus[END_REF][START_REF] Isorce | Immune-modulators to combat hepatitis B virus infection: from IFN-α to novel investigational immunotherapeutic strategies[END_REF][START_REF] Isorce | Antiviral activity of various interferons and pro-inflammatory cytokines in nontransformed cultured hepatocytes infected with hepatitis B virus[END_REF][START_REF] Lucifora | Specific and nonhepatotoxic degradation of nuclear hepatitis B virus cccDNA[END_REF][START_REF] Lucifora | Direct antiviral properties of TLR ligands against HBV replication in immune-competent hepatocytes[END_REF][START_REF] Xia | Interferongamma and tumor necrosis factor-alpha produced by T cells reduce the HBV persistence form, cccDNA, without cytolysis[END_REF]). As of today, only IFN-α (mainly in its pegylated form) is used in humans; although its MoAs in humans have not been completely elucidated, it is likely that part of the antiviral activity of this drug, which can lead to around 5-10% of functional cure, is due to a decline in cccDNA transcriptional activity, as well as a direct (non-cytopathic) or indirect (via immune-driven cell death ± hepatocyte division/turn-over) degradation/loss of cccDNA. As IFN-α is less potent than pro-inflammatory cytokines at inducing cccDNA degradation and/or transcriptional arrest in vitro, it would be interesting to develop an asset that could induce the local production of these cytokines (i.e. IL-1β, IL-6, LTβ …). So far, two clinically tested types of molecules are capable of inducing the production of such cytokines ex vivo and in vivo: TLR7 and TLR8 agonists. Both types of agonists were shown to induce the production of cytokines in isolated PBMC and/or in vivo, and were also shown to induce a strong decline in cccDNA in the woodchuck model [START_REF] Amin | Therapeutic Potential of TLR8 Agonist GS-9688 (Selgantolimod) in Chronic Hepatitis B: Remodeling of Antiviral and Regulatory Mediators[END_REF][START_REF] Li | Anti-HBV response to toll-like receptor 7 agonist GS-9620 is associated with intrahepatic aggregates of T cells and B cells[END_REF][START_REF] Menne | Sustained efficacy and seroconversion with the Toll-like receptor 7 agonist GS-9620 in the Woodchuck model of chronic hepatitis B[END_REF][START_REF] Niu | Toll-like receptor 7 agonist GS-9620 induces prolonged inhibition of HBV via a type I interferon-dependent mechanism[END_REF]. Yet the TLR7 agonist GS9620/Vesatolimod was shown to be not potent enough in monotherapy in CHB patients at the doses tested [START_REF] Gane | The oral toll-like receptor-7 agonist GS-9620 in patients with chronic hepatitis B virus infection[END_REF] and the TLR8 agonist GS9688/Selgantolimod is also facing efficacy problems in monotherapy in on-going clinical trials (Gane et al., 2022b); the latter is also being investigated in combination with other assets or in other strategies.
To summarize, IFN-α has not yet been replaced by another immune-stimulator in clinical practice; but such a goal is still on the table. Besides TLR7 and TLR8 agonists, there are some investigations on TLR2 and TLR3 agonists, which could also be of interest in that respect [START_REF] Lucifora | Direct antiviral properties of TLR ligands against HBV replication in immune-competent hepatocytes[END_REF][START_REF] Desmares | Insights on the antiviral mechanisms of action of the TLR1/2 agonist Pam3CSK4 in hepatitis B virus (HBV)-infected hepatocytes[END_REF]. It is however worth noting that RIGI and NOD2, despite initial interest (Yuen et al., 2022f), are no longer considered as targets due to a fatality in a phase-2b clinical trial with Inarigivir, a RIGI/NOD2 agonist.
In the category of pure epigenetic modifiers, none of the epidrugs that have shown anti-HBV activity in vitro [START_REF] Dandri | Epigenetic modulation in chronic hepatitis B virus infection[END_REF]Zeisel et al., 2021) has reached the clinical trial stage, maybe with the exception of farnesoid X receptor (FXR) agonists. Two FXR agonists, EYP001 (Enyo Pharma) [START_REF] Darteil | In vitro characterization of EYP001 a novel, potent and selective FXR agonist entering phase 2 clinical trials in chronic hepatitis B[END_REF][START_REF] Erken | First clinical evaluation in chronic hepatitis B patients of the synthetic farnesoid X receptor agonist EYP001[END_REF] and ASC42 (Ascletis), are in phase-2 trials. Of note, these two drugs are also being clinically tested against NASH, for which FXR has been identified as a relevant target (Li et al., 2021b;Radreau et al., 2019). In vitro, these drugs lead to an HBsAg decline following a partial cccDNA silencing [START_REF] Barnault | Combined effect of Vonafexor and Interferon-alpha on HBV replication in primary human hepatocytes[END_REF]Li et al., 2021c). The combination of EYP001 and IFN-α in vitro and in an open-label phase-2 trial (EYP001-203) has shown interesting results on HBsAg levels, thus suggesting that the combination of two assets each able to target cccDNA transcriptional activity could be a good strategy [START_REF] Barnault | Combined effect of Vonafexor and Interferon-alpha on HBV replication in primary human hepatocytes[END_REF].
Finally, as of today, strategies based on CRISPR-Cas9 or gene-editing technologies to either degrade or de-functionalize cccDNA are still at the preclinical stage [START_REF] Martinez | Gene editing technologies to target HBV cccDNA[END_REF]; further development will require galenic efforts (not to say a miracle) to efficiently deliver "active assets" (whatever they are; proteins, expression vectors, etc …) to the site of replication of the virus. Moreover, small chemical molecules that would be able to induce a specific cccDNA degradation do not exist and are unlikely to be discovered! One efficient way to lose cccDNA is to induce the division of infected hepatocytes; indeed, it is now rather well proven, in vitro and in mouse animal models, that cccDNA does not resist cell division [START_REF] Lutgehetmann | In vivo proliferation of hepadnavirus-infected hepatocytes induces loss of covalently closed circular DNA in mice[END_REF][START_REF] Tu | Mitosis of hepatitis B virus-infected cells in vitro results in uninfected daughter cells[END_REF]. Quantifying the number of hepatocyte divisions in an individual behaving "normally" (i.e. taking acetaminophen or other hepatotoxic medications from time to time, consuming alcohol even at low dose, having different food habits, etc …), which we could define as micro-regenerative events, is a difficult task; yet such hepatocyte turnover could play a role, and explain differences at the individual level, in the natural history or treatment history of CHB patients. Another way to lose cccDNA is to kill infected cells, for instance via a targeted immune response; this is the goal of some immune therapies (therapeutic vaccination, T-cell therapies, checkpoint inhibitors …), which will not be discussed in detail in this review, but can be viewed in the following recent reviews [START_REF] Fanning | Therapeutic strategies for hepatitis B virus infection: towards a cure[END_REF][START_REF] Gehring | Targeting innate and adaptive immune responses to cure chronic HBV infection[END_REF]Maini and Burton, 2019). Induced death of infected cells, as a broad therapeutic concept, has to be handled with care, as it is not clear how many hepatocytes are infected at a given stage of the natural history of HBV infections, and one wants to avoid "bad ALT flares" that could lead to liver failure. Considerations on "bad versus good flares" (the latter being looked for to improve the functional cure rate) can be found in other reviews [START_REF] Ghany | Serum alanine aminotransferase flares in chronic hepatitis B infection: the good and the bad[END_REF][START_REF] Liaw | Hepatitis B flare: the good, the bad and the ugly[END_REF]; with our current understanding, it seems impossible to think that an increased rate of functional cure can be obtained in a "short duration regimen" without somehow "playing" with good flare events. Looking for finite-duration regimens to replace the safe but almost life-long administration of NAs has a cost!
Combination strategies
Combination of assets is unanimously seen as the way forward to design a short (or one should say "not too long") finite-duration regimen able to significantly increase the rate of functional cure [START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF][START_REF] Durantel | New antiviral targets for innovative treatment concepts for hepatitis B virus and hepatitis delta virus[END_REF][START_REF] Fanning | Therapeutic strategies for hepatitis B virus infection: towards a cure[END_REF]. Many combinations involving investigational drugs (CpAMs, RNAi/ASO, NAPs, MAbs, etc …), between them and/or with approved ones (i.e. NAs and peg-IFN-α), are on-going in many registered clinical trials (c.f. the lists in [START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF]). The aim of combination therapies is to get stronger (hopefully synergistic) and/or faster inhibitory phenotypes on given parameters in order to increase as much as possible the % of functional cure, defined as sustained virosuppression + HBsAg loss (associated or not with anti-HBs seroconversion; 6-12 months off-treatment), while achieving at a minimum superior and durable virosuppression and/or HBsAg declines (several logs, even if no proper functional cure is achieved) as compared to each asset evaluated separately. And, of course, the safety profile should be as good as that of each single element of the combination, with no negative drug-drug interactions; one has to reckon that most of the clinical evaluations to date failed because of a lack of superiority in terms of efficacy and, more importantly, because of inferiority in terms of safety/tolerability.
So far, of all the combinations tested in phase-2 to date, only very few have succeeded in increasing the % of functional cure, which was, before all these recent efforts, best achieved by the combination of peg-IFN-α plus TDF in clinical trial conditions (around 8-9%) [START_REF] Marcellin | Combination of tenofovir disoproxil fumarate and peginterferon α-2a increases loss of hepatitis B surface antigen in patients with chronic hepatitis B[END_REF]. Among successful combinations is the complex combination of NA + peg-IFN-α + REP 2139, which has led to a 30-35% functional cure after 48 weeks of treatment [START_REF] Bazinet | Safety and efficacy of 48 Weeks REP 2139 or REP 2165, tenofovir disoproxil, and pegylated interferon alfa-2a in patients with chronic HBV infection naïve to nucleos(t) ide therapy[END_REF]. In this trial, ALT elevations were very high, necessitating particular clinical care, but did not lead to liver failure. As discussed previously, the use of REP 2139 is tricky because of the route and number of injections needed, and further optimization (e.g. the galenics of the drug) and clinical evaluations are required [START_REF] Durantel | Nucleic acid polymers are effective in targeting hepatitis B surface antigen, but more trials are needed[END_REF]. Quite disappointingly, the final results of a triple combination of 3 DAAs, i.e. NA + CpAM + RNAi (REEF2 study, NCT03365947; 48 weeks of NA alone versus NA + RNAi/JNJ-3989 ± CpAM/JNJ-6379, followed by 48 weeks off-treatment) (c.f. Agarwal et al., AASLD, 2022;Abs. LB 5012) did not show any functional cure, while allowing an interesting long-lasting drop of HBsAg level under 100 IU/mL in 45% of patients versus 15% (in the NA-alone arm). Very disappointingly, the added value of the CpAM was only incremental; but 4th-generation CpAMs could be a game changer in that respect. And of course, one should admit that longer durations of treatment are needed to further evaluate this concept of combination of direct-acting agents (DAAs). To shorten combination therapies of DAAs, this reviewer does think that a "host targeting" immune component will be needed. This could be an innate-immune stimulator (a TLR7 or 8 agonist, or peg-IFN-α itself; see the next section for this latter reshuffled concept), a checkpoint inhibitor (anti-PD1/PDL1) [START_REF] Wang | A human monoclonal antibody against small envelope protein of hepatitis B virus with potent neutralization effect[END_REF][START_REF] Gane | Anti-PD-1 blockade with nivolumab with and without therapeutic vaccination for virally suppressed chronic hepatitis B: a pilot study[END_REF], one (or two; targeting both the main "a" determinant of HBs and the preS1 domain?) monoclonal antibody with enhanced Fc functions, and/or a more sophisticated immune therapeutic component (therapeutic vaccination, CAR/TCR T cell adoption, etc …) [START_REF] Fanning | Therapeutic strategies for hepatitis B virus infection: towards a cure[END_REF][START_REF] Gehring | Targeting innate and adaptive immune responses to cure chronic HBV infection[END_REF]Maini and Burton, 2019). Great hope resides in Bepirovirsen, which is a possible "double bullet" asset (inhibition of HBV RNA translation/HBV protein expression + possible TLR8/9 agonism) and could be combined with a NA (TDF or ETV) or added on (for CHB patients already under NA treatment) to set up a sort of triple therapy featuring 2 DAA activities plus one innate immune one.
It was announced that Bepirovirsen will be evaluated in 2023 in a phase-3 trial (the first one to be launched since NAs were implemented in the HBV field) with several arms; the times they could be a-changin'!
6. Playing with old drugs: "back to the past" pragmatic approaches
Interestingly, but also sadly (where the economics of R&D of novel drugs is concerned), the best successes in terms of functional cure have recently been obtained with old medicines and concepts.
In particular, the "stopping NA" strategy has gained great interest and has been extensively investigated recently in clinical studies and practice [START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF][START_REF] Sonneveld | Probability of HBsAg loss after nucleo(s)tide analogue withdrawal depends on HBV genotype and viral antigen levels[END_REF]. It consists of stopping the administration of the NA in patients who have been treated for a (very) long period of time (>> 48 weeks), and monitoring whether the viremia will either i) rebound and a productive infection restart (necessitating re-treatment), ii) remain low off-treatment (with no necessary change in HBsAg level; in this case no re-treatment is needed, only observation), or iii) decline further with an HBsAg seroclearance. This strategy is applied almost exclusively to HBe-negative patients with already, naturally or treatment-induced, rather low HBsAg levels (<500-3000 IU/mL; depending on local practice and studies). Indeed, it was shown that a very low HBsAg level (e.g. <100 IU/mL), as well as HBV genotype (genotype C would be more favorable), was associated with greater chances of achieving a functional cure [START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Yardeni | Current best practice in hepatitis B management and understanding long-term prospects for cure[END_REF][START_REF] Sonneveld | Probability of HBsAg loss after nucleo(s)tide analogue withdrawal depends on HBV genotype and viral antigen levels[END_REF]. Stopping NA can be associated with ALT elevations that can be a sign of either "good or bad flares", which could either shift the balance toward a possible cure (flares will be retrospectively called good flares) or toward serious liver events (decompensation, liver failure …) (flares will be retrospectively called bad flares) [START_REF] Tseng | Serious adverse events after cessation of nucleos(t)ide analogues in individuals with chronic hepatitis B: a systematic review and meta-analysis[END_REF]. Predictors (beyond HBsAg level and HBe-negative status) of HBsAg seroclearance are actively being sought, including immunological ones, in order to better identify CHB patients who would benefit the most from this strategy. More information on the "stopping NA" strategy can be found in these reviews [START_REF] Wong | How to achieve functional cure of HBV: stopping NUCs, adding interferon or new drug development?[END_REF][START_REF] Berg | The times they are a-changing -a refined proposal for finite HBV nucleos(t)ide analogue therapy[END_REF][START_REF] Jeng | Disputing issues in the paradigm change to finite antiviral therapy in HBeAg-negative patients[END_REF][START_REF] Papatheodoridi | New concepts regarding finite oral antiviral therapy for HBeAg-negative chronic hepatitis B[END_REF], whereas the concept of good versus bad flares is discussed in these ones [START_REF] Ghany | Serum alanine aminotransferase flares in chronic hepatitis B infection: the good and the bad[END_REF][START_REF] Liaw | Hepatitis B flare: the good, the bad and the ugly[END_REF]. Should combinations of DAAs (e.g.
NA + CpAM + RNAi) replace current NA-based treatments in the near future, the stopping concept would also apply to these novel therapies; this is why the on-going intensive search for biomarkers/immune-markers that could predict HBsAg seroclearance after stopping treatment is crucial.
Peg-IFN-α, which was first used (in a non-pegylated form) to treat HBV infections in the eighties, has recently made a strong comeback in therapeutic strategies, despite its poor toxicological and tolerability profiles. Historically, interferon of lymphoblastoid origin remains the first medicine that allowed reported HBsAg seroclearances in CHB patients [START_REF] Alexander | Loss of HBsAg with interferon therapy in chronic hepatitis B virus infection[END_REF]. Thirty-five years later, peg-IFN-α is included in treatment arms of many on-going and planned phase-2 trials in double (or triple) combination with CpAMs, RNAi (e.g. VIR-2218), ASO (bepirovirsen), etc… In combination with the NAP REP 2139, and on top of a NA, peg-IFN-α led to the greatest rate of functional cure ever reported and published to date [START_REF] Bazinet | Safety and efficacy of 48 Weeks REP 2139 or REP 2165, tenofovir disoproxil, and pegylated interferon alfa-2a in patients with chronic HBV infection naïve to nucleos(t) ide therapy[END_REF]. Moreover, it has recently been successfully used, off-label, in combination with bulevirtide (an HBV/HDV entry inhibitor) to improve the treatment of patients (except those with decompensated cirrhosis) chronically infected with the hepatitis delta virus [START_REF] Lampertico | Bulevirtide with or without pegIFNα for patients with compensated chronic hepatitis delta: from clinical trials to real-world studies[END_REF].
As other innate immune-modulators, such as TLR agonists, struggle to make it to the clinic, peg-IFN-α remains a therapeutic option. In the last 10-15 years, while efforts to develop novel DAAs and HTAs were intensifying, many clinical studies and reports on clinical practice have kept reminding us that peg-IFN-α continues to be a therapeutic option, in particular in combination. The first convincing 48-week de novo combination of peg-IFN-α plus TDF, showing an 8-9% HBsAg seroclearance in a clinical trial setting, was reported in 2016 by Marcellin and colleagues [START_REF] Marcellin | Combination of tenofovir disoproxil fumarate and peginterferon α-2a increases loss of hepatitis B surface antigen in patients with chronic hepatitis B[END_REF]. While many other de novo combination studies with NAs have been performed and reported (with more or fewer patients, in different geographical areas with genetically different populations, with different HBV genotypes, etc …), other schedules were also tested: add-on of peg-IFN-α in NA-suppressed patients, switching to peg-IFN-α after NA-suppressing treatment, or a lead-in period of peg-IFN-α before switching to or adding another drug. A recent meta-analysis of this exhaustive literature has been published and concluded that, in NA-virosuppressed patients, switching to peg-IFN-α was superior to the add-on strategy, and that both switching and add-on strategies were superior to NA continuation alone regarding the rate of HBsAg seroclearance [START_REF] Liu | Effect of combination treatment based on interferon and nucleos(t)ide analogues on functional cure of chronic hepatitis B: a systematic review and meta-analysis[END_REF]. The latest large multicenter real-world study in NA-suppressed HBeAg-negative CHB patients (Everest project in China; NCT04035837) reported interim results with as high as 33% of functional cure after 48 weeks in per-protocol analysis (Wu et al., 2021); and the authors claim that (dixit) "There are still good chances of functional cure for more than half of the patients who had not achieved HBsAg loss by prolonged treatment or retreatment". Are the best soups made in old pots? One should be cautious about this, as the use of peg-IFN-α, whatever the schedule, is frequently associated with ALT flares, which have to be clinically managed in expert hepatology centers. We are still far away from a possible, easy, twice-a-year administration as awaited with lenacapavir in the HIV field [START_REF] Marrazzo | Lenacapavir for HIV-1 -potential promise of a long-acting antiretroviral drug[END_REF]!
Conclusions and final considerations
The objective of this review/bioassay was to discuss, non-exhaustively, the main novel (or old, but reshuffled) therapeutic strategies that are currently being clinically evaluated in the field of chronic HBV infections.
But before concluding and further debating on the necessity to carry on efforts to design and implement purely novel strategies to cure HBV infections with a finite regimen, let us play the devil's advocate. The best recent successes in terms of improved functional cure for HBV were obtained by using old drugs: namely NAs (c.f. the stopping strategy) and peg-IFN-α (mainly used in combination). Why therefore not scale down our efforts to develop new types of drugs and scale up our efforts to identify undiagnosed patients (i.e. 90-95% of infected patients worldwide would not be aware of their status) and put them on treatment (<1-2% of the 300 million individuals potentially chronically infected are treated!), as well as enforcing prophylactic vaccination programs? In line with this reasoning, one should keep in mind that there are plenty of patients, whose HBV sero-positivity is known, who are not treated because their liver is doing well! This "atypical" recommendation of not treating patients who could have elevated HBV viremia and be in a capacity to transmit the infection to others could evolve. In particular, it is now well established that HBV integration into the host genome occurs very rapidly after the onset of infection and keeps accumulating over time if the replication of the virus is not stopped, by NAs for instance, thus increasing the risk that these integrative events may become, 5-30 years later, a component of HCC initiation and progression [START_REF] Salpini | Hepatitis B virus DNA integration as a novel biomarker of hepatitis B virus-mediated pathogenetic properties and a barrier to the current strategies for hepatitis B virus cure[END_REF][START_REF] Kim | High risk of hepatocellular carcinoma and death in patients with immune-tolerant-phase chronic hepatitis B[END_REF][START_REF] Mason | HBV DNA integration and clonal hepatocyte expansion in chronic hepatitis B patients considered immune tolerant[END_REF][START_REF] Tu | Hepatitis B virus DNA integration occurs early in the viral life cycle in an in vitro infection model via sodium taurocholate cotransporting polypeptide-dependent uptake of enveloped virus particles[END_REF][START_REF] Tu | Hepatitis B virus DNA integration: in vitro models for investigating viral pathogenesis and persistence[END_REF]. Why not treat (with the safe NAs we dispose of) the patients who were previously called "immune tolerant" (now referred to as HBe-positive patients with a chronic infection (EASL 2017 Clinical Practice Guidelines, 2017)) as soon as they are diagnosed [START_REF] Bertoletti | HBV infection and HCC: the 'dangerous liaisons[END_REF]? This would make sense, not only with respect to early integration events, but also because it is now proposed that the immunological T (and likely B) cell exhaustion is weaker in recently infected patients (with shorter exposure to high amounts of HBsAg), and in the youngest ones [START_REF] Bert | Effects of hepatitis B surface antigen on virus-specific and global T cells in patients with chronic hepatitis B virus infection[END_REF]. In the case of HIV, even the youngest individuals who are diagnosed positive are immediately treated with DAAs.
Regarding now the design and implementation of novel combination therapies including novel types of drugs (e.g. RNAi, CpAMs, ASOs, etc.), what could be the final words? The aim of curing HBV infections while avoiding life-long treatment is meaningful; finite-duration regimens are awaited/expected by patients. But to avoid frustrations and failures in future clinical evaluations, as brilliantly discussed in a very recent opinion paper [START_REF] Pawlotsky | New hepatitis B drug development disillusions: time to reset?[END_REF], expectations have to be redefined, as well as therapeutic end-points. Regarding the latter, it is clear that we have to define virological end-points, but in the meantime we should not forget about clinical end-points (fibrosis, cirrhosis, HCC occurrences); an improved combination therapy (featuring a NA + a CpAM for instance), in terms of virosuppression (even in the absence of HBsAg seroclearance), may prove useful to further reduce the incidence of HCC in patients, even if it is not a finite-duration regimen; in this respect one could hope that a CpAM or an RNAi will make it to the clinic to increase the number of types of molecules in our arsenal. An increased rate of durable HBsAg seroclearance (ideally associated with an anti-HBs seroconversion and recovery of T/B cell functions; this latter recovery of immune functions may require an immunological "kick") remains a major objective, as HBsAg loss is clearly associated with a decrease (yet not an annihilation) of HCC incidence (Diao et al., 2022b;[START_REF] Jin | HBsAg seroclearance reduces the risk of late recurrence in HBV-related HCC[END_REF][START_REF] Yang | A risk prediction model for hepatocellular carcinoma after hepatitis B surface antigen seroclearance[END_REF][START_REF] Yip | Risk of hepatic decompensation but not hepatocellular carcinoma decreases over time in patients with hepatitis B surface antigen loss[END_REF]; but do we need to decide that such an HBsAg seroclearance has to be obtained in no more than 48 weeks? Surely not! Longer-term, yet finite-duration, regimens might be necessary, and this has to be taken into consideration by R&Ders, stakeholders, and funding bodies. A schematic "roadmap" for this R&D is given in Fig. 3. The regulatory path for drug development will also need to evolve (as done for HIV and HCV) to allow a faster evaluation of new compounds and their combinations and an efficient translation to clinical applications towards a cure of HBV infection.
Declaration of competing interest
DD received, in the last 10 years, research grants from Arbutus Biopharma, Gilead Sciences, Janssen, and Enyo Pharma (on-going). He has also "interacted" with pharmaceutical companies (i.e. Gilead Sciences, Janssen, Evotec, ImmuneMed, RDP Pharma AG) by running "Fees for Services" contracts.
Fig. 1. HBV life cycle and main targets for antivirals, based on information reviewed in [START_REF] Seeger | Hepadnaviruses[END_REF][START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF]. HBV enters hepatocytes via the NTCP receptor; the EGF receptor modulates this entry process. The incoming nucleocapsid is released into the cytoplasm and reaches the nuclear pore, where the HBV genome (rcDNA) is uncoated and delivered to the nucleus. Within the nucleus, rcDNA is converted into cccDNA and the latter is "chromatinized" (host and viral factors bind/associate) to become transcriptionally active. From cccDNA, all HBV RNAs can be transcribed, including mRNAs and pgRNA. These RNAs are used to encode all viral proteins. PgRNA, covalently associated with one copy of the HBV polymerase, is incorporated into the capsid to form immature nucleocapsids. The HBV polymerase performs a reverse transcription of the pgRNA into novel rcDNA within this capsid; a mature nucleocapsid is formed. This mature nucleocapsid has two fates: it can be redirected to the nucleus ("recycling" process) to maintain or amplify the cccDNA pool, or it can be enveloped, likely within multi-vesicular bodies (MVB), to generate infectious particles (called Dane particles). Besides these virions, two other viral proteins are secreted in large excess: HBeAg and HBsAg, the latter forming subviral particles (SVPs). Most classes of direct and indirect antivirals are indicated on this life cycle in order to pinpoint where they are acting. Abbreviations are listed elsewhere in the manuscript. The picture of the life cycle was edited from reference [START_REF] Tsukuda | Hepatitis B virus biology and life cycle[END_REF], a review already published in Antiviral Research.
Fig. 2. Molecular mode of action of the two types of CpAMs. There are two classes of CpAMs, which induce the formation of either aberrant or empty capsids. In either case, CpAMs prevent the formation of rcDNA-containing nucleocapsids. A similar inhibitory phenotype can be obtained with NAs and/or RNAseH inhibitors; thus CpAMs, NAs, and RNAseH inhibitors all lead to a decrease in the production of neo-virions, i.e. to virosuppression. The two chemotypes of CpAMs indicated, as examples, are the ones extracted from the reference (Lahlali et al., 2018) (i.e. JNJ-827 and JNJ-890).
Fig. 3. An illustrated "road map" for the R&D of novel antiviral strategies.
Data availability
No data was used for the research described in the article. |
01887992 | en | [
"spi.meca.stru"
] | 2024/03/04 16:41:26 | 2013 | https://hal.science/hal-01887992/file/mhenni2013.pdf | Faïda Mhenni
email: [email protected]
Nga Nguyen
Hubert Kadima
Jean-Yves Choley
email: [email protected]
Safety Analysis Integration in a SysML-Based Complex System Design Process
Model-based system engineering is an efficient approach to specifying, designing, simulating and validating complex systems. This approach allows errors to be detected as soon as possible in the design process, and thus reduces the overall cost of the product. Uniformity in a system engineering project, which is by definition multidisciplinary, is achieved by expressing the models in a common modeling language such as SysML. This paper presents an approach to integrate safety analysis in SysML at early stages in the design process of safety-critical systems. Qualitative analysis is performed through functional as well as behavioral safety analysis and strengthened by a formal verification method. This approach is applied to a real-life avionic system and contributes to the integration of formal models in the overall safety and systems engineering design process of complex systems.
I. INTRODUCTION
Over the last decade, the complexity of industrial systems has grown considerably since they integrate an increasing number of components and a variety of technologies. Meanwhile, system engineers always have to reach the following main objectives: building the right systems, building them correctly and on time, while reducing costs. Thus, a model-based systems engineering approach is a necessity in system design to better manage these constraints. SysML [1], [START_REF]OMG Systems Modeling Language (OMG SysML)[END_REF] is a semi-formal unified language dedicated to modeling systems. It allows engineers to document the properties from different disciplines and to describe the whole solution [START_REF] Hause | Building bridges between systems and software with SysML and UML[END_REF]. This OMG standard is increasingly supported by industry because it provides a consistent, well-defined, and well-understood language to communicate the requirements and corresponding designs among engineers.
In addition to being complex, industrial systems are also safety-critical. Hazard and risk analyses are critical to guarantee the reliability, robustness, and quality of products. In general, safety analysis techniques can be split into two categories: qualitative and quantitative approaches. Qualitative methods try to find the causal dependencies between a hazard at the system level and failures of individual components, while quantitative methods aim at providing estimations about probabilities, rates and severity of consequences. These safety analyses are usually performed separately with independent tools. Consequently, they occur late in the design process when the design is already finalized and thus miss the opportunity to influence design choices and decisions [START_REF] Sharvia | Integrated application of compositional and behavioural safety analysis[END_REF]. The purpose of this paper is to provide a methodology based on pertinent semi-formal and formal models to automate parts of the safety analysis process and, consequently, both reduce the cost and improve the quality of the system safety studies. The methodology allows system engineers to perform early validation of system safety requirements in the design process by using model checking [START_REF] Clarke | Model Checking[END_REF], a formal verification method. Safety requirements are integrated in SysML requirement diagrams that are then verified by formal test cases whose scenarios can be described by SysML state-machine diagrams, extended with failure modeling. Given the formal model of the system, the safety analysis process consists of defining a set of formal properties via temporal logic formulas to represent the informal safety requirements and then using a model checker to determine whether the proposed system architecture satisfies the safety requirements. The main contribution of our work is the tight coupling between the two environments, the semi-formal and the formal one, to provide a unified framework for model-based safety analysis and to ensure traceability between system design and safety models of complex systems.
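To illustrate this last step, an informal requirement such as "whenever braking is commanded, braking force shall eventually be applied" could, for instance, be captured by a CTL formula of the form AG (brake_cmd -> AF brake_applied), where brake_cmd and brake_applied are illustrative Boolean state variables of the formal model (these names are chosen here purely for explanation and are not taken from the case study). The model checker then either proves the formula over every reachable state of the extended model or returns a counterexample trace showing how the requirement can be violated.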
The rest of the paper is organized as follows. Section II discusses related work in the domain of safety analysis and system engineering. Section III presents our methodology with different steps to carry out a unified model-based safety analysis in the whole system engineering process. Section IV describes in detail our case study, which is the aircraft wheel brake system, with different SysML diagrams as well as the application of the NuSMV model checker to verify some important properties extracted from safety requirements. The conclusion is given in Section V.
II. RELATED WORK
During the safety assessment process of a system, a number of interrelated analysis methodologies are used. Most commonly used hazard analysis methodologies in the discipline of system safety (e.g. Preliminary Hazard Analysis, System Hazard Analysis, Fault Tree Analysis, Event Tree Analysis, Failure Mode and Effects Analysis, Functional Hazard Analysis, Petri Net Analysis or Markov Analysis) are well described in a book written by Ericson [START_REF] Ericson | Hazard Analysis Techniques for System Safety[END_REF]. Each hazard analysis technique is a unique analysis methodology using specific guidelines and rules with an overall objective of identifying hazards, mitigating them, and assessing system residual risk. To perform safety analyses, the two most traditionally used fault modeling techniques are Failure Mode and Effects Analysis (FMEA) and Fault Tree Analysis (FTA). FMEA aims to evaluate the effects of potential failure modes of components or functions, and eliminate these potential risks in the system design. Meanwhile, FTA is a top-down, deductive analytical method in which initiating primary events such as component failures, human errors, and external events are traced through Boolean logic gates to an undesired global system-level event.
Both of these techniques are compositional analyses based on the topology of the system and do not consider its dynamic behavior. For a comprehensive safety analysis however, the behavioral aspect should also be considered.
Model checking is a formal verification method used to verify a set of desired behavioral properties (usually related to safety requirements) of a system through exhaustive enumeration of all the states reachable by the system and the behavior that goes through them. This automated process receives a model of the system and one or more temporal logic formulas [START_REF] Huth | Logic in Computer Science : Modelling and Reasonning about Systems[END_REF] representing the properties to be verified, and determines whether the system satisfies these properties or not, in which case a counter example (a path leading to undesired state) is generated to help the designer perform corrective design changes. The application of model checking in safety analysis has been studied in many researches [START_REF] Bieber | Combination of fault tree analysis and model checking for safety assessment of complex system[END_REF], [START_REF] Bozzano | Improving system reliability via model checking: The FSAP/NuSMV-SA safety analysis platform[END_REF], [START_REF] Joshi | Modelbased safety analysis[END_REF], [START_REF] Seidner | Vérification des EFFBDs : Model checking en ingnierie système[END_REF], [START_REF] Sharvia | Integrated application of compositional and behavioural safety analysis[END_REF] for safety requirements verification and/or automatic fault trees generation. Two different well known tools can be used for model checking: FSAP/NuSMV-SA and AltaRica.
FSAP/NuSMV-SA [START_REF] Bozzano | Improving system reliability via model checking: The FSAP/NuSMV-SA safety analysis platform[END_REF] is an automated analysis tool that consists of two components: FSAP (Formal Safety Analysis Platform), which provides a graphical user interface, and NuSMV-SA, which is based on the NuSMV model checker as safety analysis engine. The tool provides some predefined failure modes that can be injected by safety engineers in the initial model of the system to create a so-called extended system model. The safety requirements are expressed in temporal logic and can be subsequently verified using the NuSMV model checking verification engine. The model checker is used as a validation tool, or a powerful fault tree analysis tool. However, one limitation of this tool is that the fault trees automatically generated with minimal cut sets have a flat structure with only two levels deep. This representation does not reflect the structure of the system and consequently, the fault tree comprehension is not very intuitive for system engineers.
AltaRica [START_REF] Point | AltaRica : Contribution à l unification des méthodes formelles et de la sûreté de fonctionnement[END_REF], [START_REF] Arnold | The AltaRica language and its semantics[END_REF], [START_REF] Bieber | Combination of fault tree analysis and model checking for safety assessment of complex system[END_REF] is a formal specification language that was designed to specify the behavior of complex systems. An AltaRica model is composed of nodes that are characterized by their reachable states, in and out flows, events, transitions and assertions. Once a system model is specified in the AltaRica language, it can be compiled into a lower level formalism such as finite-state machines, fault trees and stochastic Petri Nets. The safety requirements are formalized with the use of linear temporal logic operators and the formal verification technique can be performed by the AltaRica's MEC model checker. Nevertheless, this model checker is limited by the size of systems it can handle.
The recent work of [START_REF] Sharvia | Integrated application of compositional and behavioural safety analysis[END_REF] dealt with the integration of the Compositional Safety Analysis (CSA) and the Behavioral Safety Analysis (BSA). First, system failure models such as Fault Tree Analysis and Failure Mode and Effects Analysis are constructed by establishing how the local effects of component failures combine as they propagate through the hierarchical structure of the system. The technique used for CSA is called HiP-HOPS (Hierarchically Performed Hazards Origin and Propagation Studies), proposed in [START_REF] Papadopoulos | Hierarchically performed hazard origin and propagation studies[END_REF]. CSA gave preliminary information about state-automata that represent the transition between normal and failure states of the system. Next, in the BSA, model checking can be carried out on these behavioral models in order to verify automatically the satisfaction of safety properties. So the CSA and BSA could be effectively combined to benefit from the advantages of both approaches. Even so, behavioral information captured from CSA is rather limited because its main purpose is the failure propagation and hierarchy, not the dynamic behavior.
Joshi et al. described in their report [START_REF] Joshi | Modelbased safety analysis[END_REF] the so-called Model-Based Safety Analysis, in which the nominal (non-failure) system behavior captured in model-based development is augmented with the fault behavior of the system. As in [START_REF] Sharvia | Integrated application of compositional and behavioural safety analysis[END_REF], temporal logic is used to formalize informal safety requirements, and the model checker NuSMV is used to validate these requirements. To illustrate the process, a case study about the wheel brake system is detailed in the report, with a fault model consisting of different component failures, i.e. digital and mechanical failure modes. Fault tolerance verification is carried out by using additional variables and real-time temporal logic operators to investigate whether the system can handle some fixed number of faults. Nonetheless, the model-based development studied in that report principally addresses Simulink. In our paper, however, we are interested in integrating safety analysis into a more general framework, namely system engineering via SysML.
Laleau et al. [START_REF] Laleau | A first attempt to combine SysML requirements diagrams and B[END_REF] tried to combine SysML requirement diagrams and the B formal specification language. Since requirements in SysML are textual, the SysML requirement models are firstly extended to represent some concepts in the goal-oriented requirement engineering approach, such as expectation, elementary or abstract goal for requirement classes and milestone, and/or refinement for relationship between requirements. Then, derivation rules are proposed to translate the SysML goal models into B specifications. By doing this, a more precise semantics of SysML goal models is given, narrowing the gap between the requirement phase and the formal specification.
Also regarding requirements, Dubois [START_REF] Dubois | Gestion des exigences de sûreté de fonctionnement dans une approache IDM[END_REF] proposed to include system requirements directly in the design process, while the separation from the proposed solutions required by safety standards is achieved by isolating the following triplet: requirement models, solution models, and validation and verification (V&V) models. A SysML profile respecting safety standards, called RPM (Requirement Profile for MeMVaTEX), has been developed. The SysML requirement stereotype is replaced by the MeMVaTEX requirement, which adds various properties such as verifiable, verification type, derived from, satisfied by, refined by, traced to, etc. Traceability is thus ensured between requirement models, between requirement and solution models, and between requirement and V&V models by using these properties.
Another approach to integrate SysML and safety analysis is the use of the common modeling framework Eclipse [START_REF] Thomas | Performing safety analyses and sysml designs conjointly : a viewpoint matter[END_REF]. In this work, an independent tool called Obeo Designer Safety viewpoint that implements classical risk analyses is developed. Then the interoperability of this modeling tool and the SysML model is achieved through the Eclipse Modeling Framework. To make the integration possible, the authors used the open source SysML Topcased editor. Safety elements can reference SysML model elements since they are both expressed in the same framework. Furthermore, the Topcased GenDoc plugin can also be used to generate safety documentation from the two models. So in this approach, SysML is not extended with a safety profile in order to tune the SysML models. In a more recent work [START_REF] Belmonte | A model based approach for safety analysis[END_REF], a translation from Obeo Designer's Domain Specific Language for FMEA (Failure Mode and Effects Analysis) and PHA (Preliminary Hazard Analysis) into AltaRica is added to enable formal verification. However, no real system has been studied yet in order to prove the scalability of the method.
David et al. [START_REF] Cressent | Prise en compte des analyses de la sûreté de fonctionnement dans l ingénierie de système dirigée par les modèles SysML[END_REF], [START_REF] David | Contribution à l'analyse de sûreté de fonctionnement des systèmes complexes en phase de conception : application à l'évaluation des missions d'un réseau de capteurs de présence humaine[END_REF] worked on the generation of an FMEA report from system functional behaviors written in SysML models, and on the construction of dysfunctional models by using the AltaRica language in order to compute reliability indicators. In their methodology called MéDISIS, they start with the automatic computation of a preliminary FMEA. The structural diagrams, namely Block Definition Diagram (BDD) and Internal Block Diagram (IBD), and the behavioral diagrams such as Sequence Diagram (SD) and Activity Diagram (AD) are analyzed in detail to give an exhaustive list of failure modes for each component and each function, with their possible causes and effects. Then the final FMEA report is created with help from experts in the safety domain. To facilitate a deductive and iterative method like MéDISIS, a database of dysfunctional behaviors is kept updated in order to rapidly identify failure modes in different analysis phases. The next step of their work is the mapping between SysML models and AltaRica data flow language, so that existing tools to quantify reliability indicators such as the global failure rate, the mean time to failure, etc can be used directly on the failure modes identified in the previous step.
Garro et al. [START_REF] Garro | Enhancing the RAMSAS method for system reliability analysis -an exploitation in the automotive domain[END_REF] developed RAMSAS, a model-based method for system reliability analysis that combines SysML and the Simulink tool, allowing the verification of the reliability performance of the system through simulation. A formal verification method was not used in this work for safety assessment.
There is a large body of work on model-based safety analysis, but few studies have addressed the direct connection between SysML and model checking. This paper focuses on a methodology that integrates formal safety analysis directly in SysML in order to provide a unified framework with better traceability between safety and system models. Our methodology is described in the next section.
III. INTEGRATING SAFETY ANALYSIS IN THE DESIGN PROCESS
Traditionally, safety analysis is only performed at a late stage of the design process and thus does not directly influence system design. The aim of our work is to "design safe" by integrating safety measures as early as possible when designing a new system.
The design process usually starts with capturing initial requirements and then modeling system functions. At this early stage, when a functional model is established, functional safety analysis (using Functional Hazard Analysis (FHA) or Functional Failure Mode and Effects Analysis (FMEA), for example) can already be performed to assess the failure effects of each function. According to the severity of the failure effects, system functions are classified as critical or not. This classification allows appropriate resource and priority allocation to each function. Focus is placed on critical functions whose failure may lead to catastrophic effects. Necessary design modifications shall be performed at this stage to suppress or mitigate the identified potential risks. New functions such as fault detection may be introduced, and the process shall be iterated until a satisfactory functional model is established. This analysis is performed on the SysML functional model of the system and, consequently, there is no gap between the system model and the safety analysis.
Then, components are allocated to functions to define the system structure. A Block Definition Diagram is used to model the system structure in SysML. Alternative solutions are identified and are compared before a final solution is chosen.
Integrating safety analysis at this level allows system designers to consider safety among the trade-off criteria. A component FMEA is then performed for each component of the system (components are taken from the corresponding Block Definition Diagram). Failure propagation can be described in a fault tree established from the Internal Block Diagram, which illustrates the interactions between the different components in the SysML model. Based on these studies, the design may be modified to take safety aspects into account from the early design stages, minimizing late and costly design changes. A safer design can be obtained by adding backup and redundant components. System behavior is then refined by the choice of redundancy approaches (static, dynamic or hybrid) and of how the system shall behave in case of failure. New safety requirements are derived from the analyses performed above, and the SysML requirement model is updated to integrate these new requirements. As for the functional safety analysis, the component-based safety analysis is performed on the system's SysML model and thus is directly linked with the latest version of the model. Design changes on components are also directly integrated in the SysML model, improving the overall consistency. More details, with a case study example of preliminary safety analysis based on the functional and structural models of the system, can be found in our previous work [START_REF] Mhenni | SysML and safety analysis for mechatronic systems[END_REF].
At this stage, a structural and behavioral description of the system is already established and safety requirements have been derived and/or refined from other requirements (stakeholder or higher level). Measures have been taken to mitigate the potential risks identified in the compositional safety analyses above, by adding new functions/components and/or modifying existing ones. The next step consists in verifying that, during its operational phase, the system respects the safety requirements. In other words, the dynamic behavior of the system shall also satisfy a set of safety requirements, mainly requirements about the number of tolerated failures and about how the system shifts to degraded states before failure. Test cases can be defined in SysML and linked (using the verify relationship) to the corresponding requirements. In our study, formal test cases carried out by a model checking tool have been chosen to assess requirements satisfaction. The advantage of formal verification tests is that they explore all possible computation paths corresponding to the different combinations of states the system can have. This task cannot be done manually, especially for a large number of states. We shall make sure that all requirements are linked to test cases and thus will be verified. SysML provides automatically generated traceability diagrams and tables to list all the requirements and their links to other model elements.
Formal verification is based on the following elements that have already been established:
• System requirements (including safety requirements) that are modeled in SysML language via Requirements and Requirement Diagrams to illustrate relationships between requirements;
• System functions and the flow exchanges among them given by system Use Cases and Activity Diagrams in SysML;
• A structural description of the system given by Block Definition Diagram and Internal Block Diagram in SysML. The first diagram describes the composition of the system while the latter illustrates interactions and flow exchanges between components inside the system. Traceability links are established between the model elements (requirements, functions, blocks and test cases) in SysML to facilitate verification tasks.
Given these elements, an extended formal model taking into account failures and their impact on the system is built. A SysML State Machine Diagram can be used for this purpose since it can model the dynamic behavior of the system in the presence of faults. The different system states (or modes) and the transitions between them (including failure occurrences) are thus modeled. This model is then mapped to a formal model abstracting the considered system, written in an appropriate language accepted by a model checker. NuSMV has been selected in our study because this symbolic model checker is more scalable and therefore recommended for large real-life systems. Moreover, its availability as an open-source program makes it easier to develop an integrated framework tool in the future. The NuSMV model consists of a main module that describes the whole system states, and a specific module for each component (or part in the SysML model). The informal safety requirements are also translated into formal properties using temporal logic formulas. Running the corresponding program assesses whether the system satisfies these properties or not. If one or more properties are not validated, the model checker gives a counterexample which helps to find the causes of the problem. In this case, system engineers have to revise the system design until a new solution fulfilling all safety requirements is found.
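As an illustration of this model structure, the following minimal NuSMV sketch shows one module per component instantiated in a main module, a failure flag injected in each component, and an informal requirement expressed as a CTL property. The component names and the property are purely illustrative and do not correspond to the detailed model used later in the case study.

-- Minimal sketch (illustrative names, not the case-study model):
-- one module per component, fault injection via a free boolean variable,
-- and a safety requirement expressed as a CTL property.
MODULE HydraulicLine(supply)
VAR
  failed : boolean;              -- injected failure mode (unconstrained)
DEFINE
  output := supply & !failed;    -- the line delivers pressure if supplied and not failed

MODULE main
VAR
  normal    : HydraulicLine(TRUE);
  alternate : HydraulicLine(normal.failed);  -- activated when the normal line fails
DEFINE
  braking_available := normal.output | alternate.output;
-- "A single line failure shall not lead to loss of braking":
SPEC AG (!(normal.failed & alternate.failed) -> braking_available)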
IV. CASE STUDY
A. Introduction
The case study addressed in our paper is the wheel brake system of an aircraft described in the ARP 4761 standard Appendix L [START_REF]Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment[END_REF], and also studied in [START_REF] Guillerm | Intégration de la sûreté de fonctionnement dans les processus de l'ingénierie système[END_REF], [START_REF] Joshi | Modelbased safety analysis[END_REF] and [START_REF] Sharvia | Integrated application of compositional and behavioural safety analysis[END_REF]. This wheel brake system together with other aircraft sub-systems contributes to achieving the system function: "Decelerate Aircraft on Ground". This function is proved to be safety critical. Elements of safety analysis will be given in section IV-B.
B. Safety Analysis
A top-down design process usually begins with requirements definition and analysis. Then a breakdown of system functions is established. A functional decomposition of the aircraft functions is given in the ARP 4761 document [START_REF]Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment[END_REF]. In this work, we focus on the sub-function "Decelerate Aircraft on Ground" of the high-level function "Control Aircraft on Ground". At this early stage, a functional safety analysis can be performed by analyzing the failure effects of each function. Safety experts shall consider all possible failure scenarios and the effects of each potential failure on the system and its user(s). To help capitalization and reuse, function libraries at different levels can be integrated in SysML, with their respective failure modes and effects. These libraries shall be updated when new elements are available. Based on this FMEA, a classification of the functions is proposed according to the severity of their failure modes.
A preliminary safety analysis of "Decelerate Aircraft on Ground" shows that the considered function is critical to the functioning of the system, since its failure could lead to catastrophic consequences such as the aircraft leaving the runway or crashing into buildings or equipment at the airport. A safety requirement is derived from this analysis: "Loss of wheel braking during landing or rejected take-off shall have a frequency less than 5E-7 per flight" [START_REF] Joshi | Modelbased safety analysis[END_REF] (SR).
A further breakdown of "Decelerate Aircraft on Ground" shows that it has four sub-functions: "Decelerate Wheels on Ground", "Prevent Aircraft from Moving when Parked", "Control Thrust Reverser" and "Control Ground Spoiler". Since wheel deceleration has a stronger effect than the other braking systems, our safety analysis focuses on the function "Decelerate Wheels on Ground". The wheel braking system, to which this function is allocated, must be fault-tolerant. In our example, this property is obtained by using redundant components. The braking system is thus made of a physical part that actually brakes the wheels and a control part that monitors its functioning. Redundancy is applied to both parts. The physical part contains two redundant hydraulic lines: a Normal line that is activated first and an Alternate one that is activated when the Normal line is inoperative. Each of the two lines has an independent power source. A supplementary power source, called the emergency power source, is also mandatory for the wheel brake system in aircraft [START_REF] Moir | Aircraft Systems, Mechanical Electrical and Avionics Subsystems Integration[END_REF]. In our case, an accumulator is added and provides the braking system with hydraulic power when all the other power sources are inoperative. The SysML Internal Block Diagram in Fig. 1 describes the wheel brake system and the interactions between components. These interactions help analyze the fault propagation through the system. To model this behavior, we consider four modes or states of the system:
• Normal: when the Normal line is operated;
• Alternate: activated when the Normal line fails and this failure is noted as failNormal;
• Emergency: activated when Alternate mode also fails (failAlternate). The accumulator provides power source in this mode;
• Fail: when all systems have failed. The failure of the accumulator is noted failEmergency. These modes and the corresponding transitions are modeled in an automaton describing the dynamic behavior of the system, including fault effects, given by a SysML State Machine Diagram (see Fig. 2).
To satisfy the safety requirement SR, the wheel brake system shall be fault-tolerant, i.e. it shall be able to brake the wheels even in the presence of a certain number of failures. This can be translated, for instance, into the two following safety requirements, which are linked to corresponding test cases in a SysML Requirements Diagram (Fig. 3).
• SR1-WBS: When output is not supplied by Normal Line and there is no failure accounted in Alternate line, pressure shall be supplied from Alternate line;
• SR2-WBS: When both Normal line and Alternate line are not producing output, as long as there is no failure accounted along the emergency line, the system shall not fail.
Fig. 3. Safety Requirements in SysML
Formal verification is chosen to verify these requirements. For this purpose, the automaton describing the dynamic behavior of the system in Fig. 2, together with the requirements to be verified, is translated into a NuSMV program. The safety requirements SR1-WBS and SR2-WBS are respectively specified by the temporal logic formulas F1-WBS and F2-WBS:
• F1-WBS: SPEC AG ((Normal.output = False AND Alternate.failAlternate = False) → Alternate.output = True)
• F2-WBS: SPEC AG ((Normal.output = False) AND (Alternate.output = False) AND (Accumulator.failEmergency = False) → !(systemMode = Fail))
Running the NuSMV program, these two properties are shown to hold.
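For readers unfamiliar with the NuSMV input language, the following fragment is a deliberately simplified sketch of how such a model and its properties can be written. The variable names follow the modes and failure events introduced above, but the actual model derived from the SysML State Machine Diagram is more detailed, so this listing should be read as an illustration rather than as the exact program used in our study.

MODULE main
VAR
  failNormal    : boolean;  -- loss of the Normal hydraulic line
  failAlternate : boolean;  -- loss of the Alternate hydraulic line
  failEmergency : boolean;  -- loss of the accumulator (emergency power)
  systemMode    : {Normal, Alternate, Emergency, Fail};
ASSIGN
  -- failures are permanent: once raised, a flag stays TRUE forever
  init(failNormal)    := FALSE;
  next(failNormal)    := case failNormal    : TRUE; TRUE : {FALSE, TRUE}; esac;
  init(failAlternate) := FALSE;
  next(failAlternate) := case failAlternate : TRUE; TRUE : {FALSE, TRUE}; esac;
  init(failEmergency) := FALSE;
  next(failEmergency) := case failEmergency : TRUE; TRUE : {FALSE, TRUE}; esac;
  -- the active braking line is determined by the current failure state
  systemMode :=
    case
      !failNormal    : Normal;
      !failAlternate : Alternate;
      !failEmergency : Emergency;
      TRUE           : Fail;
    esac;
DEFINE
  normalOutput    := systemMode = Normal;
  alternateOutput := systemMode = Alternate;
-- F1-WBS: no output from the Normal line and no Alternate failure
--         implies that pressure is supplied by the Alternate line
SPEC AG ((!normalOutput & !failAlternate) -> alternateOutput)
-- F2-WBS: loss of both Normal and Alternate outputs with an intact
--         accumulator does not lead to system failure
SPEC AG ((!normalOutput & !alternateOutput & !failEmergency) -> systemMode != Fail)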
V. CONCLUSION
In this paper, we introduced an integrated methodology that combines SysML safety analysis with a formal verification method, namely model checking. Model-based system engineering analysis via SysML aims to facilitate the communication among interdisciplinary engineers by employing a consistent and well-defined language. Joining the safety analysis together with the design phase through SysML models will enhance consistency in the whole system, enable earlier error detection, avoid expensive re-design and reduce time to market delays. Another advantage of SysML is that it is not a specific tool or methodology. The combination of safety analysis with a standard modeling language instead of some specific tools or languages allows us to achieve wider analysis coverage and a more robust assessment. Furthermore, the traceability between system and safety models, i.e. the safety requirements and the formal test cases, assures a coherent safety analysis, without semantic rupture in the whole process of system engineering.
In our previous work [START_REF] Mhenni | SysML and safety analysis for mechatronic systems[END_REF], we have shown how some safety analysis results such as FMEA can be generated from SysML structural and behavioral diagrams. In this paper, we extend that work by integrating, in this unique framework, the automated verification of formal models with regard to safety properties. However, the translation of informal safety requirements into temporal logic formulas is not always straightforward.
Our future work is to investigate the structure of the system in more detail in order to propose a more precise behavioral formal model with the propagation of different failure modes. Input and output flows as well as internal failure modes will be taken into account. Complex system properties for verifying system fault tolerance by using special temporal logic operators must be supported. At the same time, the scalability of the model checker must be tested with respect to the size and the complexity of the models.
Fig. 1. Wheel Brake System Structure
Fig. 2. Dynamic Behavior Including Faults Automata
04120956 | en | [ "spi.other" ] | 2024/03/04 16:41:26 | 2016 | https://theses.hal.science/tel-04120956/file/TH_T2851_mkarzova.pdf
Nonlinear focusing and reflection of weak acoustic shocks are important problems both in medical ultrasound and in aeroacoustics. In aeroacoustics, studies of propagation and reflection from surfaces of high-amplitude N-wave are relevant to the sonic boom problem associated with jet noise and with the development of civilian supersonic aircraft. In medical applications, high-intensity focused ultrasonic acoustic fields are widely used for therapy and noninvasive surgery. In this context, investigation of nonlinear acoustic fields and their characterization are of great importance.
In this thesis, the problem of characterization of high-amplitude N-waves generated in air by an electric spark was studied using two optical methods: a schlieren method and a Mach-Zehnder interferometry method. Pressure waveforms were reconstructed either from the light intensity patterns in the recorded images or from the photodiode signal using an Abel-type transform. The temporal resolution of the reconstructed N-waves is six times better than that of current state-of-the-art microphones. Thus, the proposed optical methods are promising tools for the calibration of broadband acoustic microphones.
The developed optical techniques were applied to study the irregular reflection of a spark-generated N-wave from a plane rigid surface in air. The schlieren optical system was used for visualization of the reflection pattern, while the Mach-Zehnder interferometry method was applied to reconstruct waveforms. It was shown that irregular reflection occurs in a dynamical way and that the length of the Mach stem increases with the propagation distance. Moreover, Mach stem formation was observed above the surface where the reflected front shock of the N-wave intersects the incident rear shock.
Characterization of the nonlinear focused acoustic fields of medical devices is important to predict and to control the induced biological effects in tissue. In this thesis, nonlinear propagation effects were analyzed for two modern medical devices: the Duolith SD1 used in extracorporeal shock wave therapy and the Philips C5-2 array probe used in preliminary experiments to move kidney stones by acoustic radiation force. A combined measurement and modeling approach was used for field characterization of the devices: the boundary condition for the modeling was set to match low-power measurements of the acoustic pressure field. In addition to the characterization of real medical transducers, a theoretical investigation of nonlinear focusing of pulsed and periodic ultrasonic beams was performed for Gaussian and piston sources using the KZK equation. The saturation mechanisms were found to be different for periodic and pulsed fields.
Reflection from a rigid boundary is considered in the thesis as a process similar to the focusing of an axially symmetric beam, since the normal derivative of the pressure is equal to zero both on the axis of a focused beam and at the rigid surface in the reflection pattern. In this light, the formation of spatial structures similar to the Mach stem was observed in the focal area of medical transducers and was described in numerical simulations within the framework of the KZK equation.
This PhD thesis was performed in co-tutelle program between Physics Faculty of Moscow State University (Russia) and Département de Mécanique des Fluides et d'Acoustique in École Centrale de Lyon (France). I address all my sincerest thanks to the Embassy of France in Russia which allowed me to get a scholarship from the French Government and to fulfill this co-supervised scientific work.
I express my deepest gratitude to Daniel Juvé, Director of Centre Acoustique in ECL, who was the chair of the French jury. My gratitude also goes to Professor Vladimir Preobrazhensky (Ecole Centrale de Lille) and Professor Robin Cleveland (Oxford University) who agreed to be rapporteurs of this work and members of the jury.
I extend my warmest acknowledgments to my supervisors in co-tutelle Professor Vera Khokhlova (Moscow State University) and Professor Philippe Blanc-Benon (Ecole Centrale de Lyon). They have greatly contributed to this thesis by their advices, by their comments and their support. They were always available for discussions and they always helped me to find a solution of both scientific and everyday problems.
Special thanks to Petr Yuldashev, Edouard Salze, and Sébastien Ollivier for working together. They always helped me to resolve all my scientific questions and contributed a lot to performing aeroacoustic experiments. I'm grateful to Jean-Michel Perrin for his help with the fabrication of the experimental setup, to Thomas Castelain for his help with the adjustment of the optical system, and to Emmanuel Jondeau for his help with the experiment automatization. I also would like to thank all other members of acoustic team in Centre Acoustique for welcoming me to their team.
I would like to acknowledge Professor David Blackstock for helpful comments on results of this study.
I wish to express my most sincere gratitude and appreciation to all members of LIMU team in Moscow State University. They have always been a constant source of encouragement during my studies. My warmest thanks go to Professor Oleg Sapozhnikov for fruitful discussions both in science and life, to Professor Valeriy Andreev for his helpful comments, to Mikhail Averyanov and Olga Bessonova for their great help on the beginning stage of my work.
I also owe my sincere thanks to people from the Center for Industrial and Medical Ultrasound in Seattle (University of Washington, USA). It was a great pleasure and honor to work with CIMU team. Special thanks to Camilo Perez and Bryan Cunitz who measured acoustic fields of medical devices investigated in the fourth chapter of this thesis.
Finally, I would like to thank my family and my friends who always supported me and helped a lot in many aspects.
Introduction
Nonlinear focusing of weak acoustic shocks and their reflection from different types of surfaces are important problems in atmospheric and medical acoustics [START_REF] Rudenko | Theoretical foundations of nonlinear acoustics[END_REF][START_REF] Vinogradova | Teorija voln[END_REF][START_REF] Hill | Physical principles of medical ultrasonics[END_REF][START_REF] Bailey | Physical mechanisms of the therapeutic effect of ultrasound (A review)[END_REF][START_REF] Rudenko | Nonlinear sawtooth-shaped waves[END_REF][START_REF] Rudenko | Self-action effects for wave beams containing shock fronts[END_REF][START_REF] Rudenko | Nonlinear waves: some biomedical applications[END_REF]. In medicine, high-energy focused shock pulses have been widely used for about 30 years for the destruction of kidney stones in lithotripsy procedure [START_REF] Hill | Physical principles of medical ultrasonics[END_REF][START_REF] Bailey | Physical mechanisms of the therapeutic effect of ultrasound (A review)[END_REF]. Currently, there are new medical applications using focused waves: extracorporeal shock wave therapy, stopping internal bleeding (hemostasis), treatment of tumors by high intensity focused ultrasound [START_REF] Hill | Physical principles of medical ultrasonics[END_REF][START_REF] Bailey | Physical mechanisms of the therapeutic effect of ultrasound (A review)[END_REF][START_REF] Rudenko | Nonlinear waves: some biomedical applications[END_REF]. All these medical applications use intensive (up to 30 kW/cm 2 in the focal area of the beam) acoustic waves propagating in nonlinear media. Induced biological effects are strongly dependent on the amplitude of the shock front.
In aeroacoustics, nonlinear propagation and reflection of shock pulses receive special attention due to the development of new civil supersonic aircraft [START_REF] Plotkin | State of the art of sonic boom modeling[END_REF]. Sonic boom shock pulses, or N-waves, generated by the supersonic motion of an airplane propagate through the atmosphere to the ground, reflect from it, and form an acoustic field with a non-uniform pressure distribution close to the ground. High peak positive and negative levels of acoustic pressure may be harmful to people and buildings. Weak acoustic shocks are also generated by explosions, thunder, earthquakes, the collapse of cavitation bubbles, high-power electrical discharges, and even by loud playing of some wind instruments [START_REF] Rudenko | Nonlinear sawtooth-shaped waves[END_REF][START_REF] Hirschberg | Shock waves in trombones[END_REF]. Despite the differences in practical applications, these problems of medical acoustics and aeroacoustics have much in common in terms of theoretical models, since they are all related to the propagation and nonlinear focusing of shock acoustic waves and their reflection from boundaries.
Theoretical models describing these problems are quite complex, and analytical solutions can be obtained only within the framework of simplified approximations. Numerical simulations combined with laboratory experiments are used to describe in detail the spatial and temporal structures of the acoustic fields of shock waves. It should be noted that the presence of a shock in the waveform significantly complicates both numerical simulations and measurements. In simulations, the difficulties are mostly of a technical nature and are caused by the need to use small temporal and spatial steps of the numerical grid, which requires powerful supercomputers with a large amount of RAM. Numerical modeling of practical applications related to nonlinear focusing and propagation of shock waves became possible only recently due to the rapid development of supercomputers and parallel computing methods.
Measurements of shock waves by acoustic methods are difficult from a fundamental point of view. First, the bandwidth of even modern broadband devices (condenser microphones and fiber-optic hydrophones) is limited at high frequencies, which in many cases does not allow measuring the shock rise time [START_REF] Loubeau | High-frequency measurements of blast wave propagation[END_REF]. Second, waveforms measured by microphones are distorted by the diffraction of the wave on the surface of the microphone (Yuldashev et al., 2010b). Third, high-precision measurements of the reflection pattern are impossible using a microphone, since it distorts the field structure by additional waves reflected from it. Therefore, alternative methods, such as optical ones, are of great interest for measuring shock acoustic waves without distortion of their profiles and with high time resolution. In the thesis, optical measurements of an N-wave during its propagation and reflection from a surface were performed by means of two optical methods: the schlieren method (Karzova et al., 2015d, Karzova et al., 2015e) and the Mach-Zehnder interferometry method ([START_REF] Yuldashev | Mach-Zehnder interferometry method for acoustic shock wave measurements in air and broadband calibration of microphones[END_REF], Karzova et al., 2015c, Ollivier et al., 2015, Karzova et al., 2015b).
The presence of a shock in the acoustic waveform leads to several features in the manifestation of nonlinear effects during focusing and reflection. One of the classic effects caused by the presence of the shock is the formation of a three-wave structure near the reflecting surface at small angles of incidence [START_REF] Ben-Dor | Shock Wave Reflection Phenomena[END_REF]. This phenomenon was first observed experimentally by Ernst Mach in 1868 [START_REF] Mach | Über den Verlauf von Funkenwellen in der Ebene und im Raume[END_REF] and has been well studied for strong step shocks when the acoustic Mach number is close to one. While step shocks are typical for aerodynamics, acoustic shock waves usually have more complicated waveforms: N-waves (sonic boom waves), blast waves, sawtooth waves, and others. In addition, in nonlinear acoustics the values of the acoustic Mach number are on the order of 10⁻³, which is at least one order of magnitude smaller than in aerodynamics.
The reflection of such very weak, but nonetheless strongly nonlinear acoustic waves has not been studied to the same extent. In the thesis, nonlinear reflection of an N-wave generated by a spark source in air is studied experimentally (Karzova et al., 2015e,Karzova et al., 2015c,Karzova et al., 2015b). Another classic phenomenon caused by the presence of the shock is the saturation of acoustic field parameters in nonlinear focused fields [START_REF] Rudenko | Nonlinear sawtooth-shaped waves[END_REF]. The limitation of the acoustic pressure at the focus need to be taken into account in medical applications using high-intensity focused beams. Existing analytical solutions for estimation the saturation level of the pressure amplitude at the focus were obtained in [START_REF] Naugolnykh | The dependence of the gain of the acoustic focusing system of ultrasonic intensity[END_REF][START_REF] Ostrovskii | Focusing of acoustic waves of a finite amplitude[END_REF][START_REF] Bacon | Finite amplitude distortion of the pulsed fields used in diagnostic ultrasound[END_REF][START_REF] Shooter | Acoustic saturation of spherical waves in water[END_REF][START_REF] Musatov | Nonlinear refraction and absorption phenomena due to powerful pulses focusing[END_REF] and could be used to predict peak positive pressure at the focus of medical transducers. However, these estimates have been obtained under different approximations and therefore are inaccurate.
Numerical experiment provides an accurate and detailed study of the acoustic field structure. In the thesis, the nonlinear focused acoustic field in a saturation regime is investigated numerically [START_REF] Karzova | Mechanisms for saturation of nonlinear pulsed and periodic signals in focused acoustic beams[END_REF]. In addition, the formation of spatial structures similar to the Mach stem on the beam axis at the focus is considered (Karzova et al., 2015e).
As mentioned above, the study of nonlinear effects in focused fields of modern medical devices is an important problem of medical acoustics. Understanding the spatial and temporal structure of acoustic fields of medical transducers is necessary for planning induced therapeutic effect and development protocols ensuring the most effective treatment. Extracorporeal shock wave therapy (ESWT) has been actively used recently to treat several musculoskeletal disorders [START_REF] Kudo | Randomized, placebo-controlled, double-blind clinical trial evaluating the treatment of plantar fasciitis with an extracoporeal shock wave therapy (ESWT) device: a North American confirmatory study[END_REF], Rompe et al., 2003[START_REF] Gerdesmeyer | Extracorporeal shock wave therapy for the treatment of chronic calcifying tendonitis of the rotator cuff: a randomized controlled trial[END_REF][START_REF] Furia | Safety and efficacy of extracorporeal shock wave therapy for chronic lateral epicondylitis[END_REF][START_REF] Rompe | Repetitive low-energy shock wave treatment for chronic lateral epicondylitis in tennis players[END_REF]. Therapeutic bioeffects induced by ESWT include angiogenesis (blood vessel formation), osteogenesis (bone formation), and antinociceptive effects. Although ESWT has been already used in clinics, the actual physical mechanisms of ultrasound action on bones and surrounding tissues in ESWT remain unknown as well as a structure of the acoustic field of ESWT devices. Another new promising medical application of shock waves is the recently developed use of focused ultrasonic radiation force to move kidney stones and residual fragments out of the urinary collecting system [START_REF] Shah | Novel ultrasound method to reposition kidney stones[END_REF]. A commercial diagnostic 2.3 MHz C5-2 array probe is used to deliver the acoustic pushing pulses. The probe works in regime of generating millisecond pulses at the very power operational output [START_REF] Shah | Focused ultrasound to expel calculi from the kidney[END_REF]. The optimization of array probe parameters and choosing the optimal treatment protocols require the study of nonlinear acoustic fields generated by the currently used probe at different regimes. In the thesis, nonlinear propagation effects were analyzed using a combined measurement and modeling approach [START_REF] Kreider | Characterization of a multi-element clinical HIFU system using acoustic holography and nonlinear modeling[END_REF][START_REF] Canney | Acoustic characterization of high intensity focused ultrasound fields: A combined measurement and modeling approach[END_REF][START_REF] Bessonova | Membrane hydrophone measurement and numerical simulation of HIFU fields up to developed shock regimes[END_REF]: the boundary condition for the modeling was set to match low power measurements of the acoustic pressure field [START_REF] Perez | Acoustic field characterization of the Duolith: Measurements and modeling of a clinical shockwave therapy device[END_REF],Karzova et al., 2013,Karzova et al., 2015a).
Aims of the dissertation
The general aim of the dissertation is an experimental and theoretical study of nonlinear focusing and reflection of weak acoustic shocks in the context of aeroacoustic problems and problems of diagnostic and therapeutic medical ultrasound. According to this aim, the following challenges can be outlined:
1. Development of optical methods to measure N-wave profiles in a laboratory experiment in air. Investigation of the applicability of the developed methods and of their temporal resolution.
2. Experimental study of the nonlinear reflection of the N-wave from a flat rigid surface in air. Determination of the criteria for observing irregular reflection.
3. Numerical simulation of nonlinear pulsed and periodic focused acoustic beams generated by Gaussian and piston transducers. Study of how the temporal structure of the wave and the source apodization affect the limiting values of the acoustic pressure at the focus. Observation of Mach stem formation at the focus of medical transducers and its description within the framework of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation.
Structure and volume of the dissertation
The dissertation consists of the introduction, four chapters, conclusions, appendix, and the list of references. Each chapter, in addition to the original material, contains an introduction with literature review and conclusions. The references list contains 128 articles on 8 pages; the total volume of the dissertation is 120 pages, including 74 figures and 2 tables. Figures and formulas in the dissertation are referred as (1.3) where the first number is the chapter number and the second number is the number of the formula or the figure in this chapter.
Dissertation content
The first chapter is devoted to optical methods for measuring pressure profiles of the N-wave generated by a spark source in air. In §1.1 a review of existing methods to measure acoustic shock waves is presented, and the limitations of measuring N-waves with condenser microphones are discussed. Optical methods are proposed as an alternative way to measure acoustic shock pulses.
In §1.2 the experimental setup designed for optical schlieren measurements of spark-generated acoustic waves in homogeneous air is presented. A procedure for reconstructing the acoustic pressure waveforms from schlieren images is described in §1.3. Pressure waveforms were reconstructed from the light intensity patterns in the recorded images using an Abel-type inversion method. Absolute pressure levels were determined by analyzing, at different propagation distances, the duration of the compression phase of the pulses, which changed due to nonlinear propagation effects. Examples of the reconstructed pressure signatures at different distances from the source are presented in §1.4. The time resolution of the method (3 μs) was restricted by the exposure time of the high-speed camera. Another optical method proposed in the thesis for measurements of spherically diverging N-waves is based on the Mach-Zehnder interferometry technique. The experimental setup is described in §1.5. In §1.6 the reconstruction method used to restore pressure waveforms from optical phase signals is described. The reconstruction is based on an Abel-type inversion. In contrast to the schlieren optical method, the Mach-Zehnder interferometry method provides quantitative reconstruction of N-wave pressure waveforms and therefore acts as a broadband laser microphone. The results of optical measurements obtained using the Mach-Zehnder interferometer are given in §1.7. The time resolution of the interferometric method (0.4 μs) is mainly determined by the finite beam width (about 0.1 mm). In §1.8 the advantages and limitations of both optical methods (the schlieren method and the Mach-Zehnder interferometry method) for measurements of acoustic shock waves in air are discussed. In §1.9 conclusions of the first chapter are given.
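To fix ideas about the type of relations involved in such a reconstruction, the following expressions are given as a generic sketch rather than as the exact formulas of the thesis; the notation (projection F, radial perturbation f, half-duration T) is introduced here for illustration only. For an axially or spherically symmetric perturbation, a line-of-sight projection and its Abel-type inversion read

$$
F(y)=2\int_{y}^{\infty}\frac{f(r)\,r\,dr}{\sqrt{r^{2}-y^{2}}},\qquad
f(r)=-\frac{1}{\pi}\int_{r}^{\infty}\frac{dF/dy}{\sqrt{y^{2}-r^{2}}}\,dy ,
$$

where f(r) stands for the refractive-index (and hence density and pressure) perturbation and F(y) for the measured projection (light intensity pattern or optical phase). For the absolute calibration, a classical weak-shock result for a spherically diverging N-wave can be used: its half-duration grows with distance approximately as

$$
T(r)=T_{0}\sqrt{1+\frac{\beta\,p_{0}\,r_{0}}{\rho_{0}c_{0}^{3}T_{0}}\,\ln\frac{r}{r_{0}}},
$$

so that fitting the measured lengthening of the compression phase yields the pressure amplitude p0 at the reference distance r0 (here β is the nonlinearity coefficient, and ρ0 and c0 are the ambient density and sound speed).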
The second chapter of the thesis is devoted to an experimental study of the irregular reflection of an N-wave from a rigid surface in air. In §2.1 a review of existing theoretical and experimental studies of shock wave reflection is presented, and the reflection of weak shocks under the von Neumann paradox is considered. The classification of the different reflection regimes of weak acoustic shocks from a rigid surface is given in §2.2. Special attention is paid to the differences between the reflection of step shocks and of the more complicated waveforms typical for acoustics. In §2.3 the experimental setup designed for schlieren optical visualization of shock wave reflection from a rigid surface is presented. Schlieren images obtained in the experiment are shown and demonstrate the dynamic irregular reflection of the N-wave, with the length of the Mach stem increasing as the pulse propagates along the surface. The schlieren optical system provides visualization of the reflection pattern for the front shock of the N-wave. The Mach-Zehnder interferometry method was used to measure pressure waveforms of the N-wave close to the reflecting surface. In §2.4 the experimentally measured pressure waveforms are presented. The nonlinear interaction between the reflected front shock and the incident rear shock of the N-wave is discussed in §2.5. This interaction leads to Mach stem formation above the surface where these shocks intersect, and an overpressure area is formed above the surface. In §2.6 conclusions of the second chapter are given.
In the third chapter, mechanisms of nonlinear saturation in focused acoustic fields of periodic waves and single pulses are considered. In §3.1 a review of analytical approaches providing estimates of the limiting values of the peak positive pressure in periodic and pulsed focused fields is presented. The possibility of observing Mach stem formation in the axial focal area is discussed. In §3.2 a numerical model based on the KZK equation is described. The model was used to characterize the nonlinear focused fields of pulsed and periodic acoustic beams generated by a piston source and a Gaussian source. In §3.3 the effect of the temporal structure of the signal on the limiting values of the peak pressures is discussed. It is shown that higher peak positive pressures can be achieved in periodic beams than in pulsed beams. §3.4 is devoted to studying the effect of the source pressure distribution on the spatial structure and on the limiting values of the peak pressures in focused fields. Gaussian sources were found to be more appropriate than piston sources for achieving high peak pressures in a small focal area. In §3.5 the interaction between shock fronts of axially symmetric focused periodic and pulsed fields is considered as a process similar to reflection from a rigid surface. It is shown that the KZK equation allows describing Mach stem formation in the focal area of the piston source. The structure of the front patterns in the focal region of the beam resembles the von Neumann reflection and results from the interaction between the edge wave and the central wave coming from the source. In §3.6 conclusions of the third chapter are given.
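For reference, the KZK-type parabolic evolution equation mentioned here is commonly written in the retarded time τ = t − z/c0 in a form such as

$$
\frac{\partial}{\partial\tau}\!\left(\frac{\partial p}{\partial z}
-\frac{\beta}{\rho_{0}c_{0}^{3}}\,p\,\frac{\partial p}{\partial\tau}
-\frac{\delta}{2c_{0}^{3}}\,\frac{\partial^{2}p}{\partial\tau^{2}}\right)
=\frac{c_{0}}{2}\,\Delta_{\perp}p ,
$$

which combines diffraction (right-hand side), quadratic nonlinearity (coefficient β), and thermoviscous absorption (diffusivity δ); the exact dimensionless form and the coefficients used in the thesis may differ from this generic statement.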
The fourth chapter is devoted to the characterization of nonlinear focused acoustic fields of new medical devices used in extracorporeal shock wave therapy (ESWT) and in diagnostic ultrasound.
In §4.1 a review of the prospects of using ESWT for several musculoskeletal disorders is presented, as well as the parameters of ESWT devices. The use of diagnostic probes to create a focused ultrasonic radiation force for moving kidney stones out of the urinary collecting system is discussed. Numerical modeling is an important tool for the characterization of the acoustic fields of these medical devices. In §4.2 the nonlinear effects in the focused acoustic field of the electromagnetic ESWT device Duolith SD1 are studied using a combined measurement and modeling approach. The boundary condition for the nonlinear modeling with the KZK equation was obtained from the experiment by applying the method of the equivalent source. The method uses measurements to obtain the parameters of an equivalent source, i.e., a source producing the same acoustic field on the axis of the beam as the real one. It was shown that in ESWT fields shock front formation does not occur for the current machine settings. True shock formation could be reached if the maximum initial pressure output of the device were doubled. In §4.3 the combined measurement and modeling approach was used to characterize the nonlinear ultrasonic field of the standard diagnostic probe Philips C5-2 used in clinical experiments to push kidney stones. The measurements were done in two steps. The first was the measurement of low-amplitude pressure waveforms along the axis of the probe and in its focal plane. These measurements were performed at low power output and were used to set the boundary condition for the numerical model. The second series of measurements was performed at different output levels and was used for comparison with the results of nonlinear simulations. A 3D numerical model based on the Westervelt equation was used to simulate the nonlinear acoustic field generated in water by the diagnostic probe at different output levels and for different numbers of operating elements. It was shown that the pushing of kidney stones occurs in a saturation regime. In §4.4 conclusions of the fourth chapter are given.
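The full-diffraction counterpart of the KZK model mentioned in this chapter, the Westervelt equation, is usually written in a form such as

$$
\nabla^{2}p-\frac{1}{c_{0}^{2}}\frac{\partial^{2}p}{\partial t^{2}}
+\frac{\delta}{c_{0}^{4}}\frac{\partial^{3}p}{\partial t^{3}}
+\frac{\beta}{\rho_{0}c_{0}^{4}}\frac{\partial^{2}p^{2}}{\partial t^{2}}=0 ,
$$

again given here only as a generic reference for the reader; the specific numerical formulation (operator splitting, absorption law, grid parameters) used for the C5-2 probe simulations is described in the corresponding chapter of the thesis.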
In the general conclusions, the main results are briefly summarized.
Chapter 1
Measurements of N-waves in air using optical methods: a schlieren method and a Mach-Zehnder interferometry method

§1.1 Introduction

High-amplitude (>1 kPa) and short-duration (tens of microseconds) acoustic pulses are widely used in downscaled laboratory experiments to simulate sonic boom propagation through atmospheric inhomogeneities (Lipkens & Blackstock, 1998a, Lipkens & Blackstock, 1998b[START_REF] Lipkens | Model experiment to study sonic boom propagation through turbulence. Part III: Validation of sonic boom propagation models[END_REF][START_REF] Davy | Measurements of the refraction and diffraction of a short N-wave by a gas-filled soap bubble[END_REF][START_REF] Blanc-Benon | Laboratory experiments to study N-waves propagation: effects of turbulence and/or ground roughness[END_REF], Averiyanov et al., 2011b[START_REF] Salze | Laboratory-scale experiment to study nonlinear N-wave distortion by thermal turbulence[END_REF], as well as in problems of architectural acoustics [START_REF] Grillon | What can auralisation in small scale models achieve[END_REF], urban acoustics [START_REF] Picaut | A scale model experiment for the study of sound propagation in urban areas[END_REF][START_REF] Picaut | Experimental study of sound propagation in a street[END_REF] and outdoor sound propagation [START_REF] Almgren | Acoustic boundary layer influence on scale model simulation of sound propagation: Experimental verification[END_REF]. The most common ways to generate such pulses in air are to use various sources: electrical sparks [START_REF] Wright | Propagation in air of N-waves produced by sparks[END_REF], Yuldashev et al., 2010b[START_REF] Orenstein | The rise time of N-waves produced by sparks[END_REF], focused laser beams [START_REF] Qin | Characteristics and application of laser-generated acoustic shock waves in air[END_REF], or explosive-type materials [START_REF] Loubeau | High-frequency measurements of blast wave propagation[END_REF]. The waveform of pulses produced in these ways is not always known, but it is expected that, due to the prevalence of nonlinear effects, the initial pulse becomes an N-wave quite soon. Following the current terminology, let us call spark-generated pulses "N-waves" [START_REF] Dumond | A determination of the waveforms and laws of propagation and dissipation of ballistic shock waves[END_REF] because of their shape.
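As a point of reference for the discussion below, one common idealization of such a pulse, used here only for illustration, is the symmetric N-wave of peak pressure p0 and half-duration T,

$$
p(\tau)=\begin{cases}-p_{0}\,\tau/T, & |\tau|\le T,\\ 0, & |\tau|>T,\end{cases}
$$

i.e. a compression shock of amplitude p0 at τ = −T, followed by a linear decay and a rarefaction shock at τ = +T. Real spark-generated pulses are asymmetric, which is precisely one of the motivations for the optical measurements described in this chapter.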
The study of N-wave propagation in the atmosphere is important due to the high interest in the development of civil supersonic aircraft and the inherent sonic boom problem. Outdoor sonic boom experiments are not numerous because they are complex and expensive projects [START_REF] Lee | Sonic Boom produced by United states Navy aircraft: measured data, AL-TR-1991-0099[END_REF][START_REF] Maglieri | A summary of XB-70 sonic boom signature data for flights during March 1965 through[END_REF]. In addition, it is not possible to control all parameters of the atmosphere along the propagation path of the N-wave [START_REF] Elmer | Varability of measured sonic boom signatures: volume 1technical report[END_REF][START_REF] Willshire | Preliminary results from the White Sands Missle Range sonic boom[END_REF]. Alternatively, laboratory-scale model experiments can be performed instead of outdoor measurements. In model experiments, the parameters of the acoustic source and of the propagation medium are well controlled. Although model experiments do not reproduce the tapered geometry of the wavefront, they are of great importance for understanding the fundamental properties of nonlinear propagation of N-waves.
Before studying the propagation of N-waves in complex cases of turbulent atmosphere, high-precision measurements of N-waves should first be performed in a homogeneous medium. It turns out that the actual waveform of spark-generated pulses, particularly their rarefaction phase, can be very different from the symmetric shape of an N-wave. Nevertheless, the N-wave model is widespread to describe pressure signatures of shock pulses during their propagation in air [START_REF] Wright | Propagation in air of N-waves produced by sparks[END_REF], Yuldashev et al., 2010b, Averiyanov et al., 2011a). Even if the waveform is not restricted to have the N-wave shape in simulations of pulse propagation through homogeneous (Yuldashev et al., 2010b) and turbulent media (Averiyanov et al., 2011a), the N-wave assumption is still often used to set a boundary condition to the model. This simplified assumption may introduce errors, for example, in the simulation of pulse propagation through a caustic, in which the resulting waveform resembles the derivative of an initial wave (Lipkens & Blackstock, 1998a, Averiyanov et al., 2011a). Accurate measurement of high-amplitude and short-duration acoustic waveforms at distances close to the source is therefore critical to accurately determine the boundary condition for the modeling. It is also important for studying the environmental impact of sonic boom, as our perception of the noise is largely determined by the rise time of the shock fronts, their amplitude and the duration of the N-wave [START_REF] Fidell | Relative rates of growth of annoyance of impulsive and non-impulsive noises[END_REF][START_REF] Leatherwood | Subjective loudness response to simulated sonic booms[END_REF]. Propagation of spark-generated acoustic pulses in homogeneous air has been studied experimentally by several teams, particularly by Wright with co-workers [START_REF] Wright | Propagation in air of N-waves produced by sparks[END_REF][START_REF] Wright | Diffraction of spark-produced acoustic impulses[END_REF][START_REF] Wright | Acoustic radiation from a finite line source with N-wave excitation[END_REF] and Yuldashev with co-workers (Yuldashev et al., 2010b, Yuldashev et al., 2008b). Although several methods have been proposed to characterize acoustic fields produced by sparks, certain measurement limitations still exist. The most common approach is to measure pressure signatures of N-waves using acoustic microphones. However, the bandwidth of commercially available high-frequency condenser microphones does not typically exceed 150 kHz at the -3 dB level, while the spectrum of shock pulses extends up to 1 MHz; in addition, calibration of microphones at high frequencies is often not accurate. The microphone response and the resulting waveform distortions also depend on the microphone mounting. This results in significant distortions of the measured waveforms and steep shock fronts (Yuldashev et al., 2010b, Yuldashev et al., 2008b). In most cases there is no possibility to theoretically estimate these distortions. Moreover, waveform measurements are impossible close to a spark source because the maximum pressure is out of the linear range of a condenser microphone. In addition, the pressure level is so high that it can damage microphones.
Note also that acoustic measurements could be performed using piezoelectric dynamic pressure sensors, which are appropriate in the case of very high amplitude pressure waves (>100 kPa), but their main disadvantage is low sensitivity (14.5 mV/kPa) and resolution (for example, 3.4 Pa for the model 113B28 PCB Piezotronics).
An alternative method to measure shock pulses produced by sparks is to use optical methods instead of microphones. The basic principle of these methods is that the acoustic wave introduces variations of air density and corresponding variations of optical refractive index; as a result, the light beam deflects from its initial direction when passing through an acoustic signal.
Measurements of shock waves using optical methods have been widely treated in the literature [START_REF] Mach | Photographische fixierung der dunch projectile in der luft eingeleiteten Vorgänge[END_REF][START_REF] Settles | Schlieren and shadowgraph techniques: visualizing phenomena in transparent media[END_REF][START_REF] Merzkirch | Flow visualization[END_REF], Yuldashev et al., 2008a, Yuldashev et al., 2010a[START_REF] Cowan | The experimental determination of the thickness of a shock front in a gas[END_REF][START_REF] Greene | The thickness of shock fronts in argon and nitrogen and rotational heat capacity lags[END_REF][START_REF] Panda | Laser light scattering by shock waves[END_REF][START_REF] Panda | Wide angle light scattering in shock-laser interaction[END_REF]. They can be divided into three types: shadowgraphy, schlieren, and interferometry methods. However, although weak shocks have been addressed [START_REF] Settles | Schlieren and shadowgraph techniques: visualizing phenomena in transparent media[END_REF], most of the effort was generally focused on measuring strong shocks created by supersonic flows; the thickness of strong shocks was usually estimated using indirect methods based on shock speed measurements. For weak shocks, optical methods are usually used only for visualization of the field structure but not for quantitative measurements of pressure waveforms.
In a recent work (Yuldashev et al., 2010b), an optical focused shadowgraphy technique was used to visualize the front shock of spark-generated N-waves. An estimation of the front shock width and rise time was then obtained thanks to numerical simulation of optical beam propagation through the shock. A good agreement with the measurements was shown (Yuldashev et al., 2010b). Although the shadowgraphy technique provided a good temporal resolution of the high-amplitude front shock of the pulse, it was not sufficiently sensitive to restore the whole waveform or even the rear shock of the pulse. The reason is that the shadowgraphy method is sensitive to the second derivative of pressure, i.e., it captures sharp changes of pressure at the front shock, while smooth variations of pressure in the pulse are missed.
Holographic interferometry [START_REF] Mizukaki | Application of digital phase-shift holographic interferometry to weak shock waves propagating at Mach 1.007[END_REF] has been used to visualize explosion-type waves, but the resolution and the accuracy of the restored waveforms were significantly lower than in the microphone measurements. Laser interferometry can also be used to measure high-amplitude and short-duration acoustic pulses in air; however, to our knowledge, no quantitative analysis has been performed to this day for shock waves [START_REF] Smeets | Laser interference microphone for ultrasonics and nonlinear acoustics[END_REF]. The goal of this chapter is to demonstrate that optical methods (the schlieren method and the Mach-Zehnder interferometry method) are capable of reconstructing absolute pressure signatures of spark-generated acoustic pulses in homogeneous air (Karzova et al., 2015d, Yuldashev et al., 2015). Both optical methods are based on the fact that the distribution of light intensity in the measured schlieren images or in the interference pattern is related to the acoustic wave by an Abel-type transform. In the case of the schlieren method, the Abel-type transform contains an unknown normalization constant which does not permit determination of absolute pressure values; only the shape of the acoustic signal can be reconstructed. Absolute pressure levels were obtained by analyzing the lengthening of the compression phase of the pulse with distance caused by amplitude-dependent nonlinear propagation effects. The Mach-Zehnder interferometry method provides quantitative, accurate measurements of pressure signatures of N-waves. The time resolution of waveforms measured by the Mach-Zehnder interferometer is six times better than that of 1/8-inch condenser microphones (Brüel&Kjaer, B&K and G.R.A.S., Denmark); thus the Mach-Zehnder interferometer is a reliable tool to calibrate broadband acoustical microphones.
§1.2 Experimental setup for optical measurements using a schlieren system
Visualization of shock fronts using a schlieren optical method
The optical schlieren method is widely used for visualization of optical inhomogeneities in transparent refracting media [START_REF] Settles | Schlieren and shadowgraph techniques: visualizing phenomena in transparent media[END_REF][START_REF] Vasil'ev | Shadow Methods[END_REF]. The conventional schlieren system was realized by the German physicist August Toepler in 1867. The basic idea of the method is illustrated in Fig. 1.1. A light beam from a point light source or a slit (1) is directed by a lens or by a system of lenses and mirrors (2-2') through the test object (3). After propagating through optical inhomogeneities (4) the light is focused on the sharp edge of an opaque screen (5) called a Foucault knife. If there are no optical inhomogeneities, the light is blocked by the screen. In the presence of the optical inhomogeneity (4), a part of the rays is deflected and passes above the screen edge. A lens (6) is placed behind the screen to project the deflected light rays onto a projection screen (7) and to obtain an image (8) of the optical inhomogeneities which scatter light. The optical knife (5) provides a dark background on the screen (7) since it blocks undeflected rays, and the schlieren image is bright. If the optical knife (5) is removed, the image loses contrast.
The propagation of acoustic waves in a medium introduces variations of air density and corresponding variations of the optical refractive index. If the acoustic wave contains a shock front, then a large gradient of the refractive index is created at the location of the shock. This makes it possible to use the schlieren method for optical visualization of shocks. The brightest parts of the schlieren image correspond to the maximum values of the pressure derivatives [START_REF] Settles | Schlieren and shadowgraph techniques: visualizing phenomena in transparent media[END_REF], i.e., they indicate the location of the shock front.
Experimental setup
A top view of the experimental setup designed for optical schlieren measurements of spark-generated acoustic waves in homogeneous air is shown in Fig. 1.2. A spark source (Fig. 1.3 (a)) with a 21 mm gap between tungsten electrodes and with an applied voltage of 15 kV produced high amplitude pressure pulses that readily turned to a shock waveform when propagating from the spark. The repetition rate of the pulses was 1 Hz; the wavefront was assumed to have a spherical geometry.
Acoustic pulses introduced variations of air density and, as a result, variations of the optical refractive index, which are schematically shown in Fig. 1.2 by gradients of the gray color. These variations were visualized using the schlieren method. The schlieren system was composed of a quartz tungsten halogen (QTH) continuous white light source mounted in the geometrical focus of a spherical mirror with 1 m radius of curvature, a beam splitter, an optical knife (a razor edge), and a high-speed Phantom V12 CMOS camera. A metal plate with a circular hole of 2 mm in diameter was glued to the light source in order to have a point light source (Fig. 1.3 (b)). The light beam was transmitted through the beam splitter and through the test zone of the acoustic pulse propagation. Then, the light reflected from the mirror, intersected the test zone once again, and propagated back to the beam splitter (solid lines with arrows in Fig. 1.2). Spatial variations of the light refractive index n caused by the acoustic wave led to deviation of a part of the light rays from the initial propagation direction. Light rays that were not deflected by acoustic pressure inhomogeneities were blocked by the optical knife located in the focal point of the beam. Deflected rays bent around the razor edge were captured by a high-speed camera to form a schlieren image. Double passing of the light beam through the test zone provided better contrast of the image. The brightness of these images corresponds to modulation of the light intensity and is proportional to the gradient of the acoustic pressure [START_REF] Settles | Schlieren and shadowgraph techniques: visualizing phenomena in transparent media[END_REF].
§1.3 Theoretical background: reconstruction of an acoustic waveform from a schlieren image

In this paragraph, the algorithm for reconstructing pressure signatures from schlieren images and the corresponding assumptions for its correct interpretation are presented. The proposed method includes two steps. First, the waveforms of acoustic pulses were obtained from schlieren images. Then, the absolute pressure values were determined by analyzing the change in duration of the compression phase of the pulses at different distances from the source.
Algorithm for reconstructing dimensionless pressure signatures of the N-wave using an Abel-type inversion method
Acoustic pressure p can be related to the perturbation of the optical refractive index n. The refractive index n is related to the air density ρ via the Gladstone-Dale constant K [START_REF] Merzkirch | Flow visualization[END_REF]:
n + n_0 = 1 + K(\rho_0 + \rho),

where ρ_0 is the ambient density, ρ is the density perturbation caused by the acoustic wave, and n_0 is the ambient refractive index. Under the experimental conditions, the density perturbation can be regarded as a linear function of the acoustic pressure p: ρ = p/c_0², where c_0 is the ambient sound speed; higher order terms can be neglected as the acoustic pressure is small compared to the ambient atmospheric pressure p_atm: p/p_atm ∼ 0.01. The refractive index perturbation therefore can be expressed as

n = \frac{K p}{c_0^2}.    (1.1)
Variation of the refractive index n produces a phase shift ϕ opt of the light beam. In the xy plane, shown in Fig. 1.2, the phase shift accumulates while the light propagates along the y axis. Since the phenomenon is symmetrical with respect to the plane z = 0, the light rays are assumed not to deviate from the xy plane. Neglecting light reflection by acoustic inhomogeneities and taking into account double crossing through the test zone, one can write the phase as
\varphi_{opt}(x) = 2\,\frac{2\pi}{\lambda}\int_{-\infty}^{+\infty} n(x,y)\,dy,

where λ is the optical wavelength. Radial symmetry of the wavefront allows us to rewrite the expression for the phase as

\varphi_{opt}(x) = 2\,\frac{2\pi}{\lambda}\int_{0}^{+\infty} 2n\!\left(r=\sqrt{x^2+y^2}\right)dy = 2\,\frac{2\pi}{\lambda}\int_{x}^{+\infty}\frac{2n(r)\,r\,dr}{\sqrt{r^2-x^2}}.    (1.2)

Equation (1.2) is the direct Abel transform of the function n(r) [START_REF] Bracewell | The Fourier Transform and Its Applications[END_REF]. Inversion of the Abel transform (1.2) and the relationship s = λφ_opt/2π between the optical path length s and the phase φ_opt give

n(r) = -\frac{1}{2\pi}\int_{r}^{+\infty}\frac{ds}{dx}\,\frac{dx}{\sqrt{x^2-r^2}}.    (1.3)
In the experiments, the light intensity distribution I is the quantity measured in the perpendicular image xz plane of the schlieren arrangement. For a schlieren system, the light intensity of the image formed behind the optical knife is proportional to the angle of deviation of the rays [START_REF] Settles | Schlieren and shadowgraph techniques: visualizing phenomena in transparent media[END_REF]. Taking into account the spherical symmetry of the wavefront, the angle of light deviation in the test zone can be written as ε = ∂s/∂r_1, where r_1 = \sqrt{x^2 + z^2} is the radial coordinate in the image plane xz. Thus, the light intensity I(r_1) in the schlieren image is
I(r_1) = -C\,\frac{\partial s}{\partial r_1},    (1.4)
where C is an unknown constant and the minus sign is introduced to account for the knife orientation. For example, if the knife blocks the light from the opposite side of the beam, the same schlieren image is formed but the bright areas of the image are replaced by the dark ones and vice versa. Integrating the intensity in Eq. (1.4), one can obtain the optical path length s as
s(r_1) = \frac{1}{C}\int_{r_1}^{+\infty} I(r')\,dr',    (1.5)
where r' is a dummy integration variable. Due to the radial symmetry of the optical path length s in the plane xz, one can write Eq. (1.5) in the one-dimensional (1D) case of z = 0 as
s(x) = \frac{1}{C}\int_{x}^{+\infty} I(r')\,dr'.    (1.6)
Combining Eqs. (1.1), (1.3), and (1.6), one obtains the following relation between the pressure signature p and the schlieren image intensity I:
p(r) = -\frac{c_0^2}{2\pi K C}\int_{r}^{+\infty}\frac{d}{dx}\left(\int_{x}^{+\infty} I(r')\,dr'\right)\frac{dx}{\sqrt{x^2-r^2}}.    (1.7)
Equation (1.7) contains the unknown constant C, which makes it impossible to reconstruct absolute pressure levels directly from the images. Nonetheless, dimensionless pressure waveforms can be reconstructed by calculating the integral
p(r) \sim \int_{r}^{+\infty}\frac{d}{dx}\left(\int_{x}^{+\infty} I(r')\,dr'\right)\frac{dx}{\sqrt{x^2-r^2}}.    (1.8)
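To make the data-processing step concrete, the sketch below shows one possible numerical implementation of the Abel-type inversion of Eq. (1.8) in Python. It is a minimal illustration, not the processing code used for the measurements: the intensity profile is a synthetic placeholder, the unknown constant C is ignored (the result is dimensionless), and the inverse-square-root singularity is removed with the substitution x = √(r² + t²).

```python
import numpy as np

def abel_invert_schlieren(x, I):
    """Dimensionless pressure profile from a radial cut of schlieren intensity, Eq. (1.8).

    The derivative d/dx of the cumulative intensity integral equals -I(x), so
    p(r) ~ -int_r^{x_max} I(x) dx / sqrt(x^2 - r^2).  The substitution
    x = sqrt(r^2 + t^2) removes the integrable singularity at x = r.
    """
    def trap(y, s):                      # simple trapezoidal rule, version-independent
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s))

    p = np.zeros_like(x)
    for i, r in enumerate(x):
        t = np.linspace(0.0, np.sqrt(max(x[-1] ** 2 - r ** 2, 0.0)), 400)
        xs = np.sqrt(r ** 2 + t ** 2)
        p[i] = -trap(np.interp(xs, x, I) / xs, t)
    return p                             # arbitrary units: the constant C of Eq. (1.7) is unknown

# smoke test on a synthetic intensity cut (a bright stripe mimicking the front shock)
x = np.linspace(0.05, 0.12, 300)         # radial coordinate, m (illustrative)
I = np.exp(-((x - 0.08) / 0.002) ** 2)
p_dimensionless = abel_invert_schlieren(x, I)
print(p_dimensionless[100:105])
```

Absolute calibration of such dimensionless profiles then follows from the pulse-elongation analysis described in the next subsection.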
Estimation of the peak positive pressures from the pulse elongation
In order to determine the absolute pressure values in the reconstructed waveforms, the lengthening of the N-wave with distance caused by nonlinear propagation effects was analyzed. The analytic solution of the 1D simple wave equation generalized for spherically divergent waves was used [START_REF] Pierce | Acoustics: an introduction to its physical principles and applications[END_REF]. The duration of the compression phase T at a distance r of a shock wave having an amplitude p_0 and a compression phase duration T_0 (Fig. 1.4a) at the distance r_0 is given by

T(r)/T_0 = \sqrt{1 + \sigma_0\ln(r/r_0)},    (1.9)

where σ_0 = (γ + 1) r_0 p_0 / (2γ p_atm c_0 T_0).
Here γ is the heat capacity ratio, equal to 1.4 for air. In acoustics, Eq. (1.9) is associated with nonlinear propagation of an ideal spherically divergent N-wave, but it also remains valid for nonsymmetrical shock waves if only the compression phase is considered. Equation (1.9), therefore, can be applied to determine the pressure amplitude p_0 from the duration of the compression phase in the waveforms measured at different distances from the spark source. In the schlieren experiment, the spatial extent d of the compression phase of the wave was measured instead of the duration. However, since the acoustic wave does not change greatly over a propagation distance equal to its wavelength, the duration of the compression phase can be related to its spatial extent via the sound speed: d = T c_0.

Conditions for the applicability of the algorithm

The reconstruction algorithm described above is valid under several assumptions. First, it is assumed that the method is valid despite the optical beam not being collimated as in classical schlieren systems [START_REF] Settles | Schlieren and shadowgraph techniques: visualizing phenomena in transparent media[END_REF]. However, this assumption is valid, since the width of the test zone, i.e., the zone where the light beam actually interacts with the refractive index inhomogeneities, is much smaller than the total beam length, which is equal to twice the radius of curvature of the mirror. Quantitatively, the width of the test zone is estimated as 2\sqrt{2\lambda_{ac} r - \lambda_{ac}^2}, where λ_ac is the wavelength of the acoustic wave (Fig. 1.4b). For a maximum propagation distance of 50 cm and a wavelength of 2 cm, the width of the test zone is 28 cm, which is small in comparison with the 2 m beam length.
The second assumption is that the wave has a spherical wavefront in the xy plane, so that the refractive index n(r) is a function of only the radial distance r. Under the experimental conditions this is generally true; however, for large electrode gaps or small distances this assumption may be slightly violated.
The third assumption is that optical beam propagation is considered in the framework of geometrical optics [Eqs. (1.2) and (1.4)]. Moreover, it is assumed that optical rays passing through the test zone remain straight lines [Eq. (1.2)]. These assumptions may be violated near strong shocks where diffraction effects are important [START_REF] Panda | Laser light scattering by shock waves[END_REF] (Yuldashev et al., 2010b). In the experiment, variations of the refractive index n were less than 5% of its value in the undisturbed medium and the rise time of the shock was about two orders of magnitude smaller than the pulse duration, thus the framework of geometrical optics is applicable.

Figure 1.5: Effect of a 3 μs exposure time of the camera on the reconstructed waveform. Solid curve is the initial N-wave that was numerically propagated during 3 μs; dotted curve is the wave after propagation; dashed curve is the averaged wave, which imitates the measured waveform. The half duration of the initial wave can be calculated as the half duration of the averaged wave plus half of the exposure time (1.5 μs).
Accurate estimation of the compression phase duration is a critical point of the method, since this parameter is used to determine the peak positive pressure. However, the duration of the compression phase in the reconstructed dimensionless waveforms was distorted because of the finite exposure time (3 μs) of the high-speed camera, i.e., the shock front was smeared. To simulate the averaging effect induced by the camera, numerical simulations based on the Burgers equation generalized for a relaxing homogeneous atmosphere were performed. The numerical model is described in detail in (Yuldashev et al., 2010b). The high-speed camera was assumed to perform a uniform temporal averaging of the acoustic pressures arriving at a given distance during the exposure time. An ideal spherically diverging N-wave was numerically propagated from the source. Then, for each distance where the measurements were taken, the pressure was averaged over all waveforms (100 waveforms in total) which passed through this point during the exposure time. The parameters of the initial N-wave in the numerical model were: the peak pressure p_0 = 2500 Pa at a distance from the spark source r_0 = 105.6 mm; the shock rise time, defined as the time during which the acoustic pressure increased from 10% to 90% of the peak positive pressure (Lipkens & Blackstock, 1998a), was chosen according to the quasi-stationary solution of the Burgers equation as 0.07 μs; the duration of the compression phase T_0 (or the half duration) of the initial N-wave, defined as the time between the points of the positive half peak at the front shock and zero pressure values, was chosen as T_0 = 17 μs. Finally, for each distance, the averaged waveform was compared with the original ideal N-wave at the same distance.
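The averaging effect of the camera can also be illustrated with a much simpler model than the generalized Burgers simulation used here: in the sketch below an ideal N-wave is merely translated at the ambient sound speed during the exposure and the snapshots are averaged, which is enough to reproduce the half-duration correction discussed below (all numerical values are illustrative).

```python
import numpy as np

c0 = 343.0            # ambient sound speed, m/s
T_half = 17e-6        # compression-phase (half) duration of the ideal N-wave, s
t_exp = 3e-6          # camera exposure time, s
x = np.linspace(-0.02, 0.02, 4001)   # spatial coordinate, m

def n_wave(x):
    """Ideal N-wave in space: +1 at the front shock (larger x), -1 at the rear shock."""
    L = c0 * T_half                   # spatial extent of the compression phase
    return np.where(np.abs(x) <= L, x / L, 0.0)

# uniform temporal averaging: 100 snapshots of the wave translated during the exposure
shifts = np.linspace(0.0, c0 * t_exp, 100)
avg = np.mean([n_wave(x - s) for s in shifts], axis=0)

def half_duration(x, p):
    """Time between the peak positive pressure and the zero crossing behind it (d = T*c0)."""
    i_peak = np.argmax(p)
    behind = np.where((x < x[i_peak]) & (p <= 0.0))[0]
    return (x[i_peak] - x[behind[-1]]) / c0

print("true half duration          : %.2f us" % (half_duration(x, n_wave(x)) * 1e6))
print("averaged + t_exp/2 corrected: %.2f us" % ((half_duration(x, avg) + 0.5 * t_exp) * 1e6))
```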
A summary of the results of N-wave propagation modeling is presented in Fig. 1.5. The initial N-wave (solid curve) is supposed to imitate a "real" wave, while the averaged wave (dashed curve) is a "measured" wave. Note that in the space representation, the N-wave is no longer symmetric: there is a small difference between values of the peak positive and negative pressures. This is caused by the fact that the front shock is located farther from the source than the rear shock and thus has smaller amplitude because of the spherical divergence of the field. The distance between the propagated (dotted curve) and initial pulses corresponds to 3 μs and is about 1 mm.
The finite exposure time leads to the following effects. First, the coordinate of the peak positive pressure and the angles of the smooth slopes (more than 3 μs in time or 1 mm in space) of the real and measured waveforms are unchanged. Second, the zero pressure position is shifted by a distance that corresponds to half of the exposure time (see markers at zero pressure level in Fig. 1.5). Finally, the whole duration of the measured wave becomes longer than the real one by a time interval equal to the exposure time. Note also that the shock width (spatial equivalent of the rise time) obtained from the averaged waveform (1 mm) is determined by the exposure time (3 μs). To evaluate the duration of the compression phase correctly using the averaged wave, one should calculate the duration between the peak positive and zero pressure levels and add half of the exposure time, i.e., 1.5 μs in our case (lower right corner of Fig. 1.5). This method to properly evaluate the duration of the compression phase of the pulse is found to be applicable for all distances where the measurements were taken. Note also that nonlinear distortions of the propagated wave (dotted curve) are not significant and the correction to the half duration of the measured wave can be obtained based on the assumption of linear plane wave propagation.

§1.4 Results of optical measurements performed by the schlieren system

In a typical schlieren image, the first bright stripe corresponds to the front shock of the pulse; the dark area behind it corresponds to a negative derivative, i.e., the pressure decreases. Finally, there is a second bright stripe which is less contrasted and wider than the first stripe. This area corresponds to the rear front of the pulse. These features of the image demonstrate that the front shock of the spark-generated pulse is sharper and shorter than the rear shock.
For every pulse, the distance r_0 is defined as the coordinate of the peak positive pressure.
Examples of waveforms reconstructed at different distances from the spark source are shown in Fig. 1.8. The analysis of the optical data gives waveforms as functions of the distance from the source. Conversion of the waveforms to the time domain was done using the ambient sound speed c_0, which was equal to 343 m/s for the experimental conditions (relative humidity 49%, temperature 292 K). The coordinate of the peak positive pressure is considered to be the propagation distance r_0 of the wave. Analyzing the dimensionless waveforms plotted in Fig. 1.8, one can conclude that close to the source the acoustic wave is very asymmetric: the negative peak is significantly lower than the positive peak (waveform number 1 in Fig. 1.8) and the rear shock is very smooth and has a long rise time (about 15 μs in time, which corresponds to 5 mm in space) in comparison to the front shock. These features are typical for the near field of blast waves [START_REF] Brode | Blast wave from a spherical charge[END_REF]. The front shock is smeared to 3 μs due to the finite exposure time of the camera.
Reconstructed pressure signatures of the N-wave
The duration of the compression phase was calculated as a function of the propagation distance for the reconstructed dimensionless waveforms. The smearing of the schlieren image during the exposure time of the high-speed camera was taken into account (see paragraph 1.3.4). To estimate the coefficient σ_0 in Eq. (1.9), experimental data for (T/T_0)² - 1 were linearly fitted as a function of ln(r/r_0) using the least squares method (Fig. 1.9). The origin of the graph in Fig. 1.9 corresponds to T_0 = 13.5 μs and r_0 = 70.5 mm. Fifteen sparks were used to obtain the data presented in Fig. 1.9. The value of 0.486 was obtained for the coefficient σ_0 with a standard deviation of 0.013. The corresponding peak positive pressure was p_0 = 2γ p_atm c_0 T_0 σ_0 / ((γ + 1) r_0) = 3.72 kPa. A zoom view of the data at small propagation distances is given in the inset. Finally, pressure amplitudes were found for all distances and thus pressure signatures were fully reconstructed. Note that the temporal correction of 1.5 μs to the duration of the compression phase was quite substantial: without taking it into account, the reconstructed pressure amplitudes would be up to 10% higher.
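A compact sketch of this fitting procedure is given below. The (r, T) pairs are synthetic placeholders generated from Eq. (1.9) rather than the measured data of Fig. 1.9, and p_atm = 10^5 Pa is assumed; the purpose is only to show how σ_0 and the peak pressure follow from the half-duration data.

```python
import numpy as np

gamma, p_atm, c0 = 1.4, 1.0e5, 343.0
r0, T0 = 70.5e-3, 13.5e-6            # reference distance (m) and half duration (s), as in the text

# synthetic "measurements" generated from Eq. (1.9) with sigma_0 = 0.486 plus 1% noise
rng = np.random.default_rng(0)
r = np.linspace(r0, 0.6, 15)
T = T0 * np.sqrt(1.0 + 0.486 * np.log(r / r0)) * (1.0 + 0.01 * rng.standard_normal(r.size))

# linear least-squares fit of (T/T0)^2 - 1 versus ln(r/r0); the slope is sigma_0
X = np.log(r / r0)
Y = (T / T0) ** 2 - 1.0
sigma0 = np.sum(X * Y) / np.sum(X * X)      # fit constrained to pass through the origin

p0 = 2.0 * gamma * p_atm * c0 * T0 * sigma0 / ((gamma + 1.0) * r0)
print("sigma_0 = %.3f,  p0 = %.2f kPa" % (sigma0, p0 / 1e3))
```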
Reconstructed peak positive pressures at different distances from the spark source are shown in Fig. 1.10 (markers). The power law p = p_0 (r/r_0)^{-1.2} provides a good approximation of the peak pressure as a function of distance. Reed proposed this relation for blast waves [START_REF] Reed | Atmospheric attenuation of explosion waves[END_REF], and it is in good agreement with the experimental values starting from about 100 mm from the source. The discrepancy between the Reed relation and the experimental values closer to the spark source could be explained by the reduced applicability of either the data processing method or the Reed relation. Nevertheless, both dependencies predict an extremely high peak positive pressure close to the spark (about 12 kPa at a distance of 30 mm).
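The sketch below illustrates how such a power-law exponent can be checked by a straight-line fit in log-log coordinates; the peak-pressure values are synthetic placeholders, not the points of Fig. 1.10.

```python
import numpy as np

r0, p0 = 0.105, 2500.0                          # reference distance (m) and peak pressure (Pa)
r = np.array([0.10, 0.15, 0.20, 0.30, 0.40, 0.53])
rng = np.random.default_rng(1)
p_peak = p0 * (r / r0) ** -1.2 * (1.0 + 0.03 * rng.standard_normal(r.size))

# log-log linear regression: ln(p) = ln(p(r0)) + n*ln(r/r0)
n_exp, ln_p0 = np.polyfit(np.log(r / r0), np.log(p_peak), 1)
print("fitted exponent n = %.2f (Reed: -1.2), p(r0) = %.0f Pa" % (n_exp, np.exp(ln_p0)))
```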
Examples of the reconstructed pressure signatures at different distances from the source are shown in Fig. 1.11. One can observe that close to the source the duration of the compression phase of the wave is about two times smaller than the duration of the rarefaction phase (waveform at r_0 = 36 mm). As the acoustic wave propagates further from the source, it becomes more symmetric and the rear shock becomes steeper; its rise time reaches 3 μs, which is equal to the resolution time. The durations of the compression and rarefaction phases of the wave equalize. Waveforms start to resemble an N-wave only from distances of about r_0 = 500-600 mm, but even at the distance of r_0 = 532 mm (last subfigure) the wave is still not fully symmetric, the peak positive pressure being 1.2 times higher than the peak negative pressure. The measured front shock rise time is limited by the exposure time of the camera and equals 3 μs, which corresponds to a 1 mm shock thickness for all measured waveforms. Note that modern high-speed cameras can provide images at lower exposure times (0.1-0.5 μs), so the time resolution of reconstructed waveforms can be improved.
§1.5 Experimental setup for optical measurements using a Mach-Zehnder interferometer

Mach-Zehnder interferometer
The experimental setup designed for measurements of weak acoustic shock waves using the Mach-Zehnder interferometer is presented in Fig. 1.12 and includes optical and acoustical parts. In the acoustic part the electrical spark source (Fig. 1.3 (a)) described above was used to generate Nwaves. The gap between two tungsten electrodes was set to 20 mm and supplied voltage was about 16-20 kV. The Mach-Zehnder interferometer was mounted on a 60 × 60 cm optical breadboard (PBH51505, ThorLabs, Inc.) and was composed of a laser source 1 , two beam splitters ( 4 and 5 , 50/50 reflection/transmission), two flat mirrors 6 and 7 , three lenses 3 and a photodiode sensor 8 (see Fig. 1.12 (a)). A He-Ne laser (wavelength λ = 632.8 nm) with a nominal power of 10 mW was used as a coherent light source. Neutral filters were used to attenuate the light beam power down to 1.3 mW to fit requirements of the photodiode sensor 2 . All optical elements (beamsplitters, mirrors, filters and lenses) were 25 mm in diameter. A first beamsplitter 4 divides the incident laser beam into a reference beam and a probing beam. A second beamsplitter sums these two beams to produce an interference intensity pattern at the photodiode surface. The beamsplitters only approximately fitted the declared 50/50 reflection and transmission coefficients.
However, in the chosen propagation scheme the probing beam is first transmitted and then reflected, while the reference beam is reflected and then transmitted. Thus, the deviation of the reflection and transmission coefficients from the 50/50 ratio was compensated and the beams had almost equal intensities at the exit. The propagation paths of the reference and the probing beams formed a square with a 35 cm side.
A focusing lens with a 20 cm focal length was mounted between the laser and the first beam splitter in order to reduce the probing beam thickness in the zone where the interaction with the acoustic wave occurs 9 . A thinner probing beam provides a better time resolution of the measurement method. Two other focusing lenses (15 cm focal length) were placed a few centimeters after each of the two mirrors. These lenses compensate the divergence of the laser beam and reduce the beam cross-section in order to collect its total optical power on the surface of the photodiode. The beams were aligned in such a way that the output optical field contained only one interferometric fringe. Thus, functioning of the interferometer in the infinite-fringe mode was realized [START_REF] Merzkirch | Flow visualization[END_REF]. Light intensity at the exit of the interferometer was captured by a photodiode (NT53-372, Edmund Optics) which has a responsivity r_p = 0.35 A/W at 632.8 nm optical wavelength, a surface of 3.2 mm², and 45 pF of electric capacitance at zero bias voltage. The photodiode was connected to a transimpedance amplifier to provide a linear relation between the light intensity and the output voltage. The transimpedance amplifier was designed according to the guidelines given in Ref. [START_REF] Graeme | Photodiode Amplifiers: OP AMP Solutions[END_REF], figure 3.14. The transmission impedance of the amplifier was R = 2.2 kΩ. Thus, the output voltage u_ph of the photodiode amplifier is related to the beam power P as u_ph = r_p R P. A low noise constant reverse bias (2.5 V) was applied to the photodiode to reduce its capacitance and to increase the bandwidth of the amplifier up to 16 MHz (at -3 dB).
The output voltage of the photodiode amplifier u_ph was fed to the first input of a fully differential amplifier with unit gain and 26 MHz bandwidth. An adjustable low noise reference voltage source was connected to the second input of the differential amplifier to provide the necessary bias to the resulting output signal. The optical signal was measured at the first output of the differential amplifier. The inverted signal from the second output of the differential amplifier (u_fb) was applied to the input of the feedback loop of a stabilization system.
In the stabilization system, the input voltage u_fb was filtered by a first-order low-pass filter with a time constant τ_f = 20 ms. The output of the filter was connected to a low frequency amplifier (25 kHz bandwidth, gain 10) which was loaded by a piezoactuator. One mirror was glued to the piezoactuator; thus its small displacement provided control of the optical phase difference between the reference and the probing beams (Fig. 1.12 (a), 6 ). The piezoactuator (AE0505D08F, ThorLabs) lengthening coefficient was equal to κ = 9.1 × 10^-8 m/V. The piezoactuator produces an optical phase shift which is proportional to the applied voltage u_pz:
\varphi_{pz}(t) = 2\sqrt{2}\,k_0\kappa\,u_{pz}(t) = \alpha u_{pz}(t),    (1.10)
where k_0 = 2π/λ is the optical wavenumber. The numerical coefficient 2√2 in Eq. (1.10) appears due to the fact that the piezoactuator moves the mirror along a diagonal between the incident and the reflected light beams, which form a right angle (Fig. 1.12). The parameter α = 2√2 k_0 κ under the given experimental conditions was equal to 2.56 V^-1.
§1.6 Measurement of acoustical waveforms using a Mach-Zehnder interferometer
Optical signal formation
Light intensity I formed by the interference of the reference and the probing beams at the surface of the photodiode is described by the following equation [START_REF] Born | Principles of Optics[END_REF]:

I = I_A + I_B + 2\sqrt{I_A I_B}\cos\varphi,    (1.11)

where I_A and I_B are the intensities of the probing and the reference beams after the second beamsplitter, respectively, and ϕ is the optical phase difference between them. The measurement protocol was organized as follows. At the first stage, the laser source was disabled and the input bias to the differential amplifier was adjusted so as to produce zero output signal. Thus, the light intensity I is proportional to the output voltage signal and the same equation applies:

u_D = u_A + u_B + 2\sqrt{u_A u_B}\cos\varphi.    (1.12)
Here u_A is the voltage measured when the reference beam is shaded, and u_B when the probing beam is shaded. Excitation of low frequency mechanical oscillations of the experimental setup produced corresponding variations of the optical phase difference. These low frequency phase variations were used to check the quality of the interference. It was verified that the minimal value of the measured voltage is equal to u_Dmin = u_A + u_B - 2\sqrt{u_A u_B}, and the maximal value is equal to u_Dmax = u_A + u_B + 2\sqrt{u_A u_B}.
At the second stage, the bias voltage was moved to the position where the output voltage is equal to u_C = -(u_A + u_B) in the absence of the optical signal from the photodiode. In this case, when the optical signal is turned on, the output voltage of the differential amplifier is proportional to the cosine of the phase argument ϕ:
u = u_D + u_C = 2\sqrt{u_A u_B}\cos\varphi = u_0\cos\varphi,    (1.13)
where u_0 = 2\sqrt{u_A u_B} is the amplitude of the voltage variations.
The total phase difference ϕ is the sum of the following terms:
\varphi(t) = \varphi_0 + \varphi_{pz}(t) + \varphi_{ac}(t) + \varphi_n(t).    (1.14)
Here ϕ 0 is a constant phase difference related to initial adjustment of the interferometer, ϕ ac (t) is a phase difference produced by the measured acoustic wave, and ϕ n (t) is a phase related to mechanical perturbations: ground vibrations, acoustic noise, air flows. For example, the interferometer was sensitive even to voice and clapping hands.
The stabilization system was designed to keep the output voltage at zero level in the absence of an acoustic wave by compensating low frequency noise and forcing the phase ϕ to remain close to the π/2 value. The functioning of the system is described in detail in [START_REF] Yuldashev | Mach-Zehnder interferometry method for acoustic shock wave measurements in air and broadband calibration of microphones[END_REF]. As a result, the output voltage is related to the phase difference associated with the measured acoustic wave as:
u = u_0\sin(\varphi_{ac}(t) + \varphi_r(t)),    (1.15)
where ϕ_r(t) is the fraction of the noise that was not completely compensated by the stabilization system. Some uncompensated constant offsets could also be present in the function ϕ_r(t). However, as the spectrum of the acoustic phase ϕ_ac(t) is concentrated at high frequencies above several kHz and the noise phase ϕ_r(t) is generally a low frequency function (from zero to a hundred hertz), it was always possible to subtract this component, which appeared as an almost constant bias during the acquisition time window.

Optical phase induced by the acoustic wave

A radially symmetric acoustic wave traveling through the probing beam is schematically drawn in Fig. 1.13. The probing beam is located at the distance x = r_1 from the spark source. At any time t, the refractive index distribution n(x, y, t) induced by the acoustic wave leads to a phase difference:

\varphi_{ac}(t) = k_0\int_{-\infty}^{+\infty} n(x = r_1, y, t)\,dy.    (1.16)
Since the distribution n(x, y, t) = n(r, t) is a radially symmetric function, equation (1.16) can be written as:
\varphi_{ac}(t) = 2k_0\int_{r_1}^{+\infty}\frac{n(r,t)\,r\,dr}{\sqrt{r^2 - r_1^2}}.    (1.17)
The analytical inversion of equation (1.17) to obtain n(r, t) is not known. However, as the functions n(r, t) at different times t are not independent and belong to the same traveling acoustic wave, an approximate method to reconstruct the function n(r = r_1, t) from the phase signal ϕ_ac(t) can be used. Since the acoustic wave does not change too much over a propagation distance equal to its wavelength (for the N-wave it is the distance between the front and rear shocks), the moving object n(r, t) can be treated as a stationary function at some fixed time t, while the laser beam is supposed to move along the x axis with the sound speed c_0. Thus, using the Abel transform (1.17),
Chapter 1. Measurements of N-waves in air using optical methods: a schlieren method and a Mach-Zehnder interferometry method and Eqs. (1.1), (1.15), one obtain an expression for pressure waveforms: .18) Note that Eq. ( 1.18) was obtained under following assumptions: (1) the wavefront was supposed to be spherical, (2) diffraction effects were neglected, (3) refraction of the laser beam on the optical heterogeneity was not taken into account, and (4) function n(r = r 1 , t) was supposed to be stationary while the laser beam passes optical inhomogeneities. As it was shown by numerical methods in [START_REF] Yuldashev | Mach-Zehnder interferometry method for acoustic shock wave measurements in air and broadband calibration of microphones[END_REF], these approximations do not introduce significant errors in the reconstructed signal and the error of the of Mach-Zehnder interferometry method is only 2 %. §1.7 Results of optical experiments performed by the Mach-
p(t) = - c 2 0 λ 2π 2 K +∞ r d dr 1 arcsin u 2 √ u A u B dr 1 r 2 1 -r 2 . ( 1
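The sketch below shows one possible numerical implementation of Eq. (1.18), together with a round-trip check against the forward Abel transform of Eq. (1.17) for a synthetic N-wave. It is an illustration only: the Gladstone-Dale constant K ≈ 2.3 × 10^-4 m³/kg is an assumed order-of-magnitude value, the noise filtering and windowing of the real post-processing are omitted, and the frozen-field mapping between time and the effective beam radius is the simplest possible one.

```python
import numpy as np

def _trap(y, s):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s))

def reconstruct_pressure(t, u, u0, r_beam, c0=343.0, lam=632.8e-9, K=2.3e-4):
    """Pressure waveform from the interferometer voltage trace, Eq. (1.18).

    Frozen-field assumption: the time series is read as a radial scan of a
    stationary pattern, early samples corresponding to large radii.
    """
    phi = np.arcsin(np.clip(u / u0, -1.0, 1.0))      # acoustic optical phase, rad
    r1 = r_beam + c0 * (t.max() - t)                 # effective radius of each sample
    order = np.argsort(r1)
    r1s, phis = r1[order], phi[order]
    dphi = np.gradient(phis, r1s)                    # d(phi_ac)/d(r1)
    pref = c0 ** 2 * lam / (2.0 * np.pi ** 2 * K)
    p = np.zeros_like(u)
    for i, r in enumerate(r1):
        s = np.linspace(0.0, np.sqrt(max(r1s[-1] ** 2 - r ** 2, 0.0)), 400)
        rr = np.sqrt(r ** 2 + s ** 2)                # substitution r1 = sqrt(r^2 + s^2)
        p[i] = -pref * _trap(np.interp(rr, r1s, dphi) / rr, s)
    return p

if __name__ == "__main__":
    # round-trip check: synthetic spherical N-wave -> phase via Eq. (1.17) -> Eq. (1.18)
    c0, lam, K = 343.0, 632.8e-9, 2.3e-4
    r_beam, L, amp = 0.20, 0.007, 500.0              # beam distance (m), pulse half-length (m), Pa
    k0 = 2.0 * np.pi / lam
    t = np.linspace(0.0, 120e-6, 600)
    r1 = r_beam + c0 * (t.max() - t)
    center = r_beam + 0.5 * c0 * t.max()
    p_true = np.where(np.abs(r1 - center) <= L, amp * (r1 - center) / L, 0.0)

    order = np.argsort(r1)
    rs, ns = r1[order], K * p_true[order] / c0 ** 2  # n = K p / c0^2, Eq. (1.1)
    phi = np.empty_like(r1)
    for i, rb in enumerate(r1):                      # forward Abel transform, Eq. (1.17)
        s = np.linspace(0.0, np.sqrt(max(rs[-1] ** 2 - rb ** 2, 0.0)), 400)
        phi[i] = 2.0 * k0 * _trap(np.interp(np.sqrt(rb ** 2 + s ** 2), rs, ns), s)

    u0 = 1.0                                         # arbitrary interferometer amplitude, V
    u = u0 * np.sin(phi)
    p_rec = reconstruct_pressure(t, u, u0, r_beam, c0=c0, lam=lam, K=K)
    print("peak pressure: true %.0f Pa, reconstructed %.0f Pa" % (p_true.max(), p_rec.max()))
```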
§1.7 Results of optical experiments performed by the Mach-Zehnder interferometer
Measurements were performed at distances from 10 cm up to 100 cm between the spark source and the probing laser beam. At each distance, 140 waveforms were recorded in order to allow statistical analysis of the data. An example of the reconstructed waveform with the corresponding measured optical phase signal at the distance r = 20 cm is presented in Fig. 1.14. Here, the phase signal is a result of post-processing: low-frequency and high-frequency noise filtering, background phase correction (subtraction of a constant phase level which is present in the signal before arrival of the N-wave), and application of a time window to remove reflected waveforms arriving after the direct wave. The order of magnitude of the optical phase signal is about 1 radian for an N-wave with 1250 Pa positive peak pressure. Waveforms at several different distances r are shown in Fig. 1.15. At each distance r, an experimental waveform (solid line) was obtained by averaging 140 individual waveforms, appropriately shifted in time to fit an average arrival time. The main features of nonlinear propagation of the N-wave have already been discussed in the results of the schlieren measurements. However, note one more time that close to the source the acoustic wave resembles a blast wave rather than a symmetric N-wave.
An experimental waveform at r = 10 cm was set as the initial waveform for the Burgers equation to perform numerical simulations, whose results were used to validate the measurements. The same numerical model was used earlier for evaluation of the effect of the finite exposure time on the duration of the compression phase (subsection 1.3.4). The simulated waveforms are shown in Fig. 1.15 by black dashed lines. An excellent agreement between the experimental and theoretical waveforms is observed, which validates the Mach-Zehnder interferometry method.
Measured and modeled propagation curves of (a) the peak positive (p+) and peak negative (p-) pressure, (b) the duration of the compression phase of the waveform (also called half duration [START_REF] Wright | Propagation in air of N-waves produced by sparks[END_REF], Yuldashev et al., 2010b)), and (c) the shock rise time τ_sh are compared in Figs. 1.16(a)-1.16(c). The error bars in Fig. 1.16 are obtained from statistical processing of the 140 measured waveforms. The experimental and theoretical values of the positive peak pressure match to within 8% (maximal relative difference); for the negative peak pressure, the agreement is within 10%.
Half duration data are also in good agreement within an interval of 2% [Fig. 1.16(b)]. The increase of the half duration for larger propagation distances is a classical nonlinear effect [START_REF] Rudenko | Theoretical foundations of nonlinear acoustics[END_REF]. Since this effect is amplitude-dependent, it was used to deduce the amplitude of the N-wave [START_REF] Wright | Propagation in air of N-waves produced by sparks[END_REF],Lipkens & Blackstock, 1998a,Yuldashev et al., 2008b,Yuldashev et al., 2010b). However, with this method it is difficult to achieve an accuracy better than 10% (Yuldashev et al., 2010b).
Experimental results for the shock rise time are less consistent with theory [Fig. 1.16(c)]. The rise time is defined as the time for the pressure at the front shock to increase from 0.1p+ to 0.9p+. Experimental values of the rise time are always higher than theoretical values. With the experimental conditions in this work, the time resolution is mainly determined by the laser beam width and the focal distance of the focusing lens (Yuldashev et al., 2010a). It follows from Fig. 1.16(c) that the experimental value of the time resolution is about 0.4 μs. This value is more than 6 times better than that of standard condenser microphones (2.5-2.9 μs in the case of the Brüel & Kjaer, type 4138) and corresponds to a 2.5 MHz bandwidth. Using a better quality laser beam, a finer time resolution can be achieved. However, in the case of strong shocks (tens of kPa or more) or very thin laser beams, diffraction of the optical field on the shock front can lead to degradation of the performance of the measurement method [START_REF] Panda | Laser light scattering by shock waves[END_REF].
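For completeness, the rise-time definition used here is easy to express numerically; the short sketch below evaluates it on a synthetic tanh-shaped front (the values are placeholders and the routine assumes a monotonic rise in the analysed window).

```python
import numpy as np

def rise_time(t, p):
    """Front-shock rise time: time for the pressure to grow from 0.1*p+ to 0.9*p+."""
    p_plus = p.max()
    front = slice(0, np.argmax(p) + 1)          # samples up to the positive peak
    t10 = np.interp(0.1 * p_plus, p[front], t[front])
    t90 = np.interp(0.9 * p_plus, p[front], t[front])
    return t90 - t10

# smoke test: a 1250 Pa front shock with a tanh profile of roughly 0.4 us width
t = np.linspace(-5e-6, 5e-6, 2001)
p = 1250.0 * 0.5 * (1.0 + np.tanh(t / 0.2e-6))
print("rise time = %.2f us" % (rise_time(t, p) * 1e6))
```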
§1.8 Comparison of optical methods: benefits and limitations for acoustic field reconstruction
The optical methods presented in this chapter provide an attractive possibility to obtain quantitative information about the characteristics of the high-amplitude and short-duration acoustic pulses generated by a spark in a homogeneous atmosphere. Let us discuss the benefits and limitations of both the schlieren optical method and the Mach-Zehnder interferometry method for acoustic field reconstruction.
The optical schlieren method described in §1.2 and §1.3 has a time resolution limited by the exposure time of the high-speed camera, which was 3 μs in our case. Modern high-frequency condenser microphones (for example Brüel & Kjaer, type 4138) have the same time resolution, but microphone measurements usually start from 150 to 200 mm away from the source, where the pressure levels are not very high and the response of the microphone is linear. In contrast, there is no restriction on the minimal distance from the spark in the schlieren method: one can obtain a schlieren image even at 30 mm from the spark and the corresponding waveform can be reconstructed. Possibly, the data processing methodology is not highly accurate at distances very close to the spark, but nonetheless one can obtain an approximate waveform that could not be measured using microphones. Note also that in microphone measurements it is impossible to estimate distortions induced by the wave diffraction on the microphone, the microphone mounting and its frequency response. Smearing of the schlieren image during the exposure time of the high-speed camera is the main cause of distortion. This distortion is quite predictable quantitatively (as discussed in §1.3.4).
The schlieren method also has some advantages with respect to the focused shadowgraphy technique described in (Yuldashev et al., 2010b). The contrast of shadowgrams (images obtained using the focused shadowgraphy technique) is proportional to the second spatial derivative of pressure, while the contrast of schlieren images is proportional to the gradient of pressure. The focused shadowgraphy technique allowed visualizing the front shock of the pulse with a time resolution better than 0.5 μs, which permitted description of its fine structure. However, while this method is well suited to measure shocks, it is not sufficiently sensitive to measure the pressure decrease following the peak pressure nor the rear shock. The schlieren method is more sensitive to low amplitude pressure variations, and therefore makes it possible to estimate the whole waveform except the fine structure of the front shock (a limitation due to the resolution of the camera). However, the exposure time of modern high-speed cameras can reach 0.5 μs, so the time resolution of the method could be improved.
The accuracy of the schlieren method was found to be about 10-15%. Four main sources of error are identified. First, the distortion due to the exposure time of the camera; second, the assumptions of geometrical optics and spherical symmetry in the data processing; third, the low frequency noise associated with slow variations of background intensity between snapshots, which is substantial at large distances from the spark. Finally, although the spark source produces pulses with a good repeatability, their initial amplitude and duration change from pulse to pulse. This leads to dispersion in the experimental data (Fig. 1.9).

Figure 1.17: Comparison of pressure waveforms obtained using the optical schlieren method (black line) and the Mach-Zehnder interferometry method (red line) at distance r = 10 cm from the spark source.
The Mach-Zehnder interferometry method is the most promising among all optical methods since it allows restoring pressure waveforms with the best time resolution (0.4 μs in the experiment) and with the best accuracy (2%). The time resolution of the method is determined by the width of the laser beam and could be improved by using a focusing lens. A common disadvantage of both optical methods (schlieren and interferometry) is their applicability only to waves which have a spherical or cylindrical geometry of the wavefront. Thus there is a limitation on the geometry of the measured acoustic fields. The high accuracy of the Mach-Zehnder interferometry method and its high time resolution allow it to be used for calibration of high-frequency condenser microphones.
The measurements of N-waves using the schlieren method and the Mach-Zehnder interferometry method were performed by the author about two years apart, so a direct comparison between the obtained results was difficult. The atmospheric experimental conditions had changed and the electrodes of the spark source had been replaced. Nevertheless, a comparison between pressure waveforms measured by the schlieren system and the Mach-Zehnder interferometer at the same distance r = 10 cm shows that the results are in very good agreement (Fig. 1.17). To obtain these waveforms, the dimensionless waveform measured in the schlieren experiment was multiplied by the peak positive pressure level of the waveform measured using the interferometer. The main difference between the waveforms is in the structure of the front shock, which is resolved better in the case of the Mach-Zehnder interferometry method.

§1.9 Conclusions
The propagation of nonlinear spark-generated acoustic pulses in homogeneous air was studied experimentally using two optical methods: the schlieren method and the Mach-Zehnder interferometry method.
The schlieren method allowed reconstructing dimensionless waveforms at distances from 30 mm to 600 mm from the source. Analysis of the schlieren images was based on the assumption of spherical geometry of the acoustic field and the geometrical optics approximations. The reconstruction of dimensionless acoustic waveforms was performed using the Abel-inversion transform. To evaluate the smearing of the waveform during the exposure time of the camera, the propagation of a spherically diverging N-wave was simulated using the generalized Burgers equation. A method to evaluate the duration of the compression phase taking into account the exposure time of the camera was proposed.
The analysis of the elongation of the compression phase duration as a function of the propagation distance allowed the absolute pressure values to be reconstructed. The time resolution of the method (3 μs) was restricted by the exposure time of the camera, and thus the fine structure of the front shock could not be resolved using this method. The schlieren method has two main advantages: first, it allows reconstruction of the pressure signatures at distances close to the spark source (about 30 mm), where measurements using condenser microphones are impossible; second, it provides the reconstruction of the whole waveform with a good accuracy that had not been achieved using the focused shadowgraphy method.
The optical measurement method based on a Mach-Zehnder interferometer is the most suitable method to measure spherically diverging N-waves in homogeneous air. Pressure waveforms are reconstructed from the optical phase signal of the interferometer using an Abel-type inversion. The interferometric method allows one to reach a time resolution of 0.4 μs, which is 6 times better than the time resolution of a 1/8-inch condenser microphone (2.5 μs). The Mach-Zehnder interferometry method is a promising tool for calibration of broadband condenser microphones.
Chapter 2. Irregular reflection of an N-wave from a rigid surface in air

§2.1 Introduction

Physical causes of the Mach stem formation
In aerodynamics, there is a well-known relation between the shock velocity u in the undisturbed medium and the pressure jump p at the shock front [START_REF] Uizem | Linear and nonlinear waves[END_REF]: u(p) = c_0 + εp/(ρ_0 c_0). It means that a shock of greater amplitude propagates faster than a shock of smaller amplitude. In reflection, the pressure perturbation caused by the reflected wave is superimposed on the one already caused by the incident wave. This leads to different velocities of the incident and reflected shocks. The reflected shock, with its greater speed, starts to overtake the incident shock. If the nonlinear effects are strong or the incidence angle is small, then the reflected shock can merge with the incident one and form a united single shock front, the Mach stem. Thus, the Mach stem formation is caused by the different speeds of the two shock fronts due to nonlinear effects.
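A back-of-the-envelope illustration of this argument is sketched below, using the weak-shock speed relation quoted above with ε = (γ + 1)/2 for air and illustrative amplitudes: the reflected front, which travels through the overpressure left by the incident front, is a few metres per second faster and therefore catches up with it.

```python
gamma, rho0, c0 = 1.4, 1.2, 343.0     # air: heat capacity ratio, density (kg/m^3), sound speed (m/s)
eps = (gamma + 1.0) / 2.0             # coefficient of nonlinearity of air

def shock_speed(p_jump, p_background=0.0):
    """Weak-shock speed u = c0 + eps*p/(rho0*c0), with p measured from the ambient state."""
    return c0 + eps * (p_background + p_jump) / (rho0 * c0)

p_inc = 1.0e3                          # incident front amplitude, Pa (illustrative)
print("incident front : %.2f m/s" % shock_speed(p_inc))
print("reflected front: %.2f m/s" % shock_speed(p_inc, p_background=p_inc))
```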
A three-shock theory of von Neumann
A theoretical description of the irregular reflection was proposed by John von Neumann [START_REF] Neumann | Oblique reflection of shocks[END_REF]. He described the irregular reflection by a three-shock theory based on the assumption that all the waves in the flow are shaped as shocks of negligible curvature and thickness, and obey the Rankine-Hugoniot jump conditions [START_REF] Pierce | Acoustics: an introduction to its physical principles and applications[END_REF][START_REF] Ben-Dor | Shock Wave Reflection Phenomena[END_REF], i.e., the laws of mass, energy, and momentum conservation:
\rho_1 u_1 \sin\varphi_1 = \rho_2 u_2 \sin(\varphi_1 - \theta_1),    (2.1)

p_1 + \rho_1 u_1^2 \sin^2\varphi_1 = p_2 + \rho_2 u_2^2 \sin^2(\varphi_1 - \theta_1),    (2.2)

\frac{\gamma}{\gamma-1}\frac{p_1}{\rho_1} + \frac{1}{2}u_1^2\sin^2\varphi_1 = \frac{\gamma}{\gamma-1}\frac{p_2}{\rho_2} + \frac{1}{2}u_2^2\sin^2(\varphi_1 - \theta_1),    (2.3)

\rho_2 u_2 \sin\varphi_2 = \rho_3 u_3 \sin(\varphi_2 - \theta_2),    (2.4)

p_2 + \rho_2 u_2^2 \sin^2\varphi_2 = p_3 + \rho_3 u_3^2 \sin^2(\varphi_2 - \theta_2),    (2.5)

\frac{\gamma}{\gamma-1}\frac{p_2}{\rho_2} + \frac{1}{2}u_2^2\sin^2\varphi_2 = \frac{\gamma}{\gamma-1}\frac{p_3}{\rho_3} + \frac{1}{2}u_3^2\sin^2(\varphi_2 - \theta_2),    (2.6)

\rho_1 u_1 \sin\varphi_3 = \rho_4 u_4 \sin(\varphi_3 - \theta_3),    (2.7)

p_1 + \rho_1 u_1^2 \sin^2\varphi_3 = p_4 + \rho_4 u_4^2 \sin^2(\varphi_3 - \theta_3),    (2.8)

\frac{\gamma}{\gamma-1}\frac{p_1}{\rho_1} + \frac{1}{2}u_1^2\sin^2\varphi_3 = \frac{\gamma}{\gamma-1}\frac{p_4}{\rho_4} + \frac{1}{2}u_4^2\sin^2(\varphi_3 - \theta_3).    (2.9)

Equations (2.1)-(2.9) are written in the coordinate system in which the fronts are fixed, ϕ_i are the incidence angles, θ_i are the angles of flow deflection from its initial propagation direction, and u_i are the flow speeds (Fig. 2.2). The solution of the Rankine-Hugoniot jump conditions, Eqs. (2.1)-(2.9), is:
\tan\theta_3 = \cot\varphi_3\,\frac{M_1^2\sin^2\varphi_3 - 1}{2 + M_1^2(\gamma + \cos 2\varphi_3)},    (2.10)

\tan\theta_2 = \cot\varphi_2\,\frac{M_2^2\sin^2\varphi_2 - 1}{2 + M_2^2(\gamma + \cos 2\varphi_2)},    (2.11)

\tan\theta_1 = \cot\varphi_1\,\frac{M_1^2\sin^2\varphi_1 - 1}{2 + M_1^2(\gamma + \cos 2\varphi_1)},    (2.12)
and pressure ratios near the shock front are:
\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma+1}\left(M_1^2\sin^2\varphi_1 - 1\right),    (2.13)

\frac{p_3}{p_2} = 1 + \frac{2\gamma}{\gamma+1}\left(M_2^2\sin^2\varphi_2 - 1\right),    (2.14)

\frac{p_4}{p_1} = 1 + \frac{2\gamma}{\gamma+1}\left(M_1^2\sin^2\varphi_3 - 1\right).    (2.15)
Here M_i are the Mach numbers, which exceed the acoustic Mach number M_a by one. Perpendicularity of the Mach stem to the surface is also assumed: θ_3 = θ_1 - θ_2 = 0 and p_3 = p_4.
According to Eqs. (2.13)-(2.15), the transition from irregular to regular reflection occurs at ϕ_3 = 0°, i.e., when, mathematically, the Mach stem becomes parallel to the surface. The corresponding pressure ratio is known as the von Neumann criterion and is written as:
\frac{p_4}{p_1} = 1 + \frac{2\gamma}{\gamma+1}\left(M_1^2 - 1\right).    (2.16)
The three-shock theory was found to be in good agreement with experiments only for strong shocks, when the acoustic Mach number M_a was greater than 0.47. For weaker shocks (0.1 < M_a < 0.47) the three-shock theory strongly disagreed with experimental observations supported by numerical simulations. For M_a < 0.1 the theory has no physically acceptable solutions and predicts the fundamental impossibility of irregular reflection, while the experimental data clearly show that the irregular type of reflection for such weak shocks does, in fact, exist. The conflict between the three-shock theory and experimental results is known as the von Neumann paradox, which was first formulated by Birkhoff in 1950 [START_REF] Birkhoff | A Study in Logic. Fact and Similitude[END_REF], and the irregular reflection pattern in this case is called the von Neumann reflection.
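As a small numerical companion to Eqs. (2.13)-(2.16), the sketch below evaluates the normal-incidence pressure ratio and the von Neumann transition criterion for the acoustic Mach numbers quoted in this paragraph (with M_1 = 1 + M_a, as stated above).

```python
import numpy as np

gamma = 1.4

def pressure_ratio(M, phi_deg):
    """p_behind / p_ahead across an oblique shock, the form of Eqs. (2.13)-(2.15)."""
    phi = np.radians(phi_deg)
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * ((M * np.sin(phi)) ** 2 - 1.0)

def von_neumann_criterion(M1):
    """Pressure ratio p4/p1 at the regular/irregular transition, Eq. (2.16)."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 ** 2 - 1.0)

for Ma in (0.01, 0.10, 0.47):
    M1 = 1.0 + Ma
    print("M_a = %.2f:  p2/p1 (normal incidence) = %.3f,  von Neumann p4/p1 = %.3f"
          % (Ma, pressure_ratio(M1, 90.0), von_neumann_criterion(M1)))
```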
A review of research devoted to the irregular reflection of shocks
Attempts to resolve the von Neumann paradox were undertaken by several authors in [START_REF] Skews | The physical nature of weak shock wave reflection[END_REF][START_REF] Colella | The von Neumann paradox for the diffraction of weak shock waves[END_REF][START_REF] Brio | Mach reflection for the two-dimensional Burgers equation[END_REF][START_REF] Guderley | The Theory of Transonic Flow[END_REF][START_REF] Zakharian | The von Neumann paradox in weak shock reflection[END_REF][START_REF] Vasil'ev | Numerical simulation of weak shock diffraction over a wedge under the von Neumann paradox conditions[END_REF]. Although the von Neumann paradox is still considered unresolved, the experimental studies performed have revealed many new and interesting features of shock front interactions.
Most of the experiments studying the Mach reflection effect were performed in shock tubes using methods of optical visualization [START_REF] Semenov | Classification of pseudo-steady shock wave reflection types[END_REF][START_REF] Skews | The physical nature of weak shock wave reflection[END_REF][START_REF] Colella | The von Neumann paradox for the diffraction of weak shock waves[END_REF]. In a shock tube, step shocks with a plane front are produced. A wedge is placed in the tube to observe the reflection of the flow by optical methods. Usually shadowgraphy or schlieren methods are used for visualization of the reflection pattern. In [START_REF] Semenov | Classification of pseudo-steady shock wave reflection types[END_REF] complicated spatial structures close to the triple point were observed for Mach reflection (see Fig. 2.3) and studied in detail [START_REF] Semenov | Classification of pseudo-steady shock wave reflection types[END_REF][START_REF] Ben-Dor | Shock Wave Reflection Phenomena[END_REF]. In 1990 Colella and Henderson were the first to observe, experimentally and numerically, a new type of reflection of weak shocks (M_a = 0.05) from a rigid boundary -- the von Neumann reflection [START_REF] Colella | The von Neumann paradox for the diffraction of weak shock waves[END_REF]. In this new reflection type they saw no discontinuity of the slope angle in the transition from the incident front to the Mach stem (see Fig. 2.4a).
In addition to experimental studies of weak shock reflection from surfaces, theoretical and numerical efforts were performed in [START_REF] Tesdall | Self-similar solutions for weak shock reflection[END_REF][START_REF] Brio | Mach reflection for the two-dimensional Burgers equation[END_REF], Zakharian et al., 2000, to describe the flow behind the Mach stem. The numerical model also demonstrated a low-pressure area just behind the triple point, which had previously been predicted only theoretically in [START_REF] Guderley | The Theory of Transonic Flow[END_REF]. These features were also observed in numerical experiments in [START_REF] Vasil'ev | Numerical simulation of weak shock diffraction over a wedge under the von Neumann paradox conditions[END_REF][START_REF] Zakharian | The von Neumann paradox in weak shock reflection[END_REF] based on Euler's equation.
In 2002, [START_REF] Tesdall | Self-similar solutions for weak shock reflection[END_REF] used methods of numerical simulation to find a more complex configuration of the reflection pattern, with a sequence of triple points in the Mach stem. They showed that this spatial structure was located in a very small region below the main triple point; its size was only 2% of the length of the Mach stem. Later, Skews and Ashworth [START_REF] Skews | The physical nature of weak shock wave reflection[END_REF] constructed a shock tube with a diameter of 1.1 m and were able to observe a Mach stem with an extremely large length of about 80 cm. The sequence of triple points indeed occurred on the Mach stem. Fig. 2.4b demonstrates images obtained in [START_REF] Skews | The physical nature of weak shock wave reflection[END_REF] with one more triple point located below the main one (experiments were performed in air, M_a = 0.04).
Note that all the works mentioned above, both theoretical and experimental, are mainly in the framework of aerodynamics and consider only plane step shocks with acoustic Mach numbers M_a greater than 0.035. While step shocks are typical for aerodynamics, acoustic shock waves usually have more complicated waveforms: N-waves, blast waves, sawtooth waves, and others.
In addition, in nonlinear acoustics the values of the acoustic Mach number M_a are on the order of 10^-2 - 10^-3, which is at least one order of magnitude smaller than in aerodynamics. The reflection of such very weak, but nonetheless strongly nonlinear, acoustic waves has not been studied to the same extent. Moreover, the reflection of acoustic shock waves occurs within the conditions of the von Neumann paradox and thus is not described by the three-shock theory. It is interesting to refer to the historical background and recall that von Neumann himself considered acoustics and aerodynamics as two fundamentally differing fields. In his opinion, the small values of the Mach number in acoustics make it physically impossible to observe such nonlinear phenomena as irregular reflection [START_REF] Neumann | Oblique reflection of shocks[END_REF].

§2.2 Types of reflection of weak acoustic shocks from a rigid surface

Nonlinear reflection of acoustic shock waves (M_a about 10^-2 - 10^-3) from a rigid surface was investigated in recent works of Marchiano and co-authors ([START_REF] Marchiano | Experimental evidence of deviation from mirror reflection for acoustical shock waves[END_REF], Baskar et al., 2007). In the work [START_REF] Marchiano | Experimental evidence of deviation from mirror reflection for acoustical shock waves[END_REF] the reflection of periodic sawtooth waves (M_a = 2.3 × 10^-4) from a rigid boundary was studied experimentally in water. Different reflection regimes were observed and then studied in detail using methods of numerical simulation [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF]. It was shown that the type of reflection depends on the critical parameter a, defined by the shock amplitude p, the grazing angle ϕ of the incident wave, and the coefficient of nonlinearity β of the propagation medium:
a = \frac{\sin\varphi}{\sqrt{2\beta M_a}}. \quad (2.17)
Numerical simulations based on the KZ equation were used in [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] to define the values of the critical parameter a corresponding to each reflection regime. For plane step shocks four reflection regimes exist. Initially, for 0 ≤ a ≤ 0.4, weak von Neumann reflection was observed.
The term "weak von Neumann reflection" was introduced by the authors of [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] to describe the reflection pattern at almost grazing incidence, when the reflected shock does not exist but the initially plane incident shock has a curvature close to the surface. The range of values 0.4 ≤ a ≤ √2 corresponds to the von Neumann reflection, i.e., an irregular reflection regime characterized mainly by the continuous slope of the shock front along the incident shock and the Mach stem. For √2 ≤ a ≤ 5 the reflection occurs in a regular regime, but the angles of the incident and reflected waves are not equal. Finally, at a > 5 the classical law of reflection is valid.
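The classification of step shock reflection regimes by the critical parameter can be illustrated by the following sketch; it simply evaluates Eq. (2.17) and compares the result with the threshold values 0.4, √2, and 5 quoted above (the input angle and Mach number are assumed example values, not data from this chapter).

```python
import math

def critical_parameter(phi_deg, Ma, beta=1.2):
    """Critical parameter a = sin(phi) / sqrt(2*beta*Ma), Eq. (2.17)."""
    return math.sin(math.radians(phi_deg)) / math.sqrt(2.0 * beta * Ma)

def step_shock_regime(a):
    """Reflection regime of a plane step shock according to the thresholds quoted above."""
    if a <= 0.4:
        return "weak von Neumann reflection"
    if a <= math.sqrt(2.0):
        return "von Neumann (irregular) reflection"
    if a <= 5.0:
        return "regular reflection with unequal angles"
    return "regular (Snell-Descartes) reflection"

# Example with an assumed grazing angle of 10 degrees and acoustic Mach number 0.006
a = critical_parameter(10.0, 0.006)
print(a, step_shock_regime(a))
```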
In the work [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] the cases of an ideal plane N-wave and a periodic sawtooth wave were also studied in comparison with step shock reflection. In contrast to step shocks, N-waves and sawtooth waves were found to reflect in a dynamical way: the length of the Mach stem changed while the wave propagated along the surface. This is due to the fact that both the amplitude of the nonlinear wave and the incident angle change. Thus, the current value of the critical parameter a depends on the location of the reflection point on the surface. In addition, the evolution of the reflection pattern is different for the front and the rear shocks of the N-wave. Figure 2.5 presents the results of numerical simulation performed in [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] for an initial value of the critical parameter a = 0.5. The reflection patterns correspond to different locations of the reflection point on the surface; from left to right the distance from the first point of the shock wave reflection to the observed region increases. For the front shock of the N-wave the authors of [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] obtained the following values of the critical parameter a corresponding to the different types of reflection: for 0 ≤ a ≤ 0.4 weak von Neumann reflection occurred. This range of values coincides with the corresponding one for a plane step shock. The values 0.4 ≤ a ≤ 0.8 correspond to dynamical irregular reflection, when the length of the Mach stem first increases and then decreases; for a > 0.8 the reflection is regular. Thus, while the N-wave propagates along the surface the value of the critical parameter a gradually increases due to the decreasing amplitude of the incident wave; as a result, the type of reflection varies from weak von Neumann reflection to von Neumann reflection and finally to regular reflection. The transition from one type of reflection to another occurs differently for the rear shock of the N-wave. It is clearly seen (Fig. 2.5) that there is a secondary reflected shock behind the rear shock, which is formed when the irregular reflection becomes regular and then grows with increasing current value of a. The formation of the secondary reflected shock was observed only in the reflection of the rear shock of the N-wave; there was no such effect in the case of the periodic wave.
In this thesis, reflection of N-waves from rigid boundaries was investigated experimentally using optical methods: the schlieren method and the Mach-Zehnder interferometry method. The goals are to demonstrate experimentally how irregular reflection occurs in air for very weak spherically diverging spark-generated pulses and to evaluate the values of the critical parameter a for different types of reflection. This complements earlier experimental observations on irregular reflections of plane periodic waves in water [START_REF] Marchiano | Experimental evidence of deviation from mirror reflection for acoustical shock waves[END_REF] since spherical divergent waves, single pulses, propagation in air, and variation in acoustic Mach number were considered.
The experiments were performed by the author of the dissertation in the LMFA at Ecole Centrale de Lyon (France).

§2.3 Visualization of dynamic irregular reflection of a spherical N-wave using an optical schlieren method
The experimental setup designed for optical visualization of shock wave reflection from a rigid surface is shown in Fig. 2.6. Acoustic shock waves [START_REF] Lee | Sonic Boom produced by United states Navy aircraft: measured data, AL-TR-1991-0099[END_REF] were produced by a 15kV electric spark source (2) with 21 mm gap between tungsten electrodes (its calibration is described in detail in a previous chapter). Spherically divergent N-waves reflect from the rigid surface (3), located at a distance h under the spark. The emerging reflection pattern (4) was visualized using a schlieren method. The schlieren system was composed of a quartz tungsten halogen (QTH) continuous white light source (5) mounted in the geometrical focus of a spherical mirror (6) with 1m radius of curvature, a beam splitter (7), an optical knife (a razor edge, 8), and a high-speed Phantom V12 CMOS camera (9) with exposure time set to 1 μs. Light beam was transmitted through the beam splitter (7) and through the test zone of the acoustic pulse reflection. Then, the light reflected from the mirror (6), intersected the test zone once again, and propagated back to the beam splitter [orange solid lines with arrows in Fig. 2.6a]. Double passing of the light beam through the test zone provided better contrast of the image. Since the brightness of these images is proportional to the gradient of the acoustic pressure, they depict qualitatively the reflection pattern of the front shock of the pulse.
For visualizing the reflection pattern, an initial series of schlieren images was recorded without acoustic wave to obtain the averaged background image. A second series of images was recorded with the presence of acoustic wave. Raw images in the second series were dark and the front structure was not clearly seen without additional data processing. The averaged background image was subtracted from every recorded image of the reflection pattern; this subtraction resulted in reduction of noise and enhancement of the image contrast. An additional processing was the averaging of twenty images obtained from different sparks for a fixed source and reflection point configuration at the same h sp , s, and ϕ (see Fig. 2.6a). In these images, the position of the reflection point varied by less than 5 mm and the reflection patterns were juxtaposed before averaging. Schlieren images [(19.5 ± 0.2) mm width × (12.2 ± 0.2) mm height] of the reflection patterns obtained in this way are presented in Fig. 2.7.
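The background subtraction and averaging procedure described above can be summarized by a short sketch of the image processing (NumPy is assumed; file names and array shapes are hypothetical).

```python
import numpy as np

def enhance_reflection_pattern(background_stack, signal_stack):
    """Subtract the averaged background from each schlieren frame and average the result.

    background_stack, signal_stack: arrays of shape (n_frames, height, width)
    recorded without and with the acoustic wave, respectively.
    """
    background = background_stack.mean(axis=0)    # averaged background image
    corrected = signal_stack - background         # remove the static illumination pattern
    return corrected.mean(axis=0)                 # average over sparks to reduce noise

# Hypothetical usage with twenty frames recorded for one source configuration:
# pattern = enhance_reflection_pattern(np.load("background.npy"), np.load("with_wave.npy"))
```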
In order to investigate independently the effects of the pressure level and the incident angle ϕ on the reflection pattern, the position of the spark source was chosen so that the distance s between the source and the reflection point was the same for each angle ϕ considered in the study [Fig. 2.6a].

[Table residue: for M_a × 10^-3 = 6.0 ± 0.3 the reflection regimes correspond to a ≤ (0.58 ± 0.2) (probably occurs), (0.58 ± 0.2) < a < (1.1 ± 0.3), and a ≥ (1.1 ± 0.3).]

The results are given in Fig. 2.7 for two values of the acoustic Mach number M_a (i.e., two distances s
between the source and the reflection point) and different values of the incident angle ϕ. The series of frames in Fig. 2.7a was obtained at a distance s = (46 ± 4) mm away from the spark source, which corresponded to an acoustic pressure amplitude for the incident wave p 0 = (6.2 ± 0.5) kPa and a value of the acoustic Mach number M a = (0.044 ± 0.004). To estimate the value of the acoustic Mach number its definition for the plane wave was used: M a = p 0 /(γp atm ), where γ = 1.4 is the adiabatic index for air and p atm = 100 kPa is the atmospheric pressure. For the series of frames in Fig. 2.7b, experimental parameters were s = (207 ± 7) mm, p 0 = (0.84 ± 0.04) kPa, and M a = (0.0060 ± 0.0003). The coefficient of nonlinearity β was equal to 1.2 for the experimental conditions of the relative humidity of 49% and temperature of 292 K.
For the grazing angle (ϕ = 0°) the spark source was located right at the reflecting surface (h = 0). In this case no reflected shock was observed for either value of the acoustic Mach number M_a (cases ϕ = 0° in Fig. 2.7). The same pattern was also achieved for the angles ϕ ≥ 7° when a was decreasing, and the variation of the grazing angle ϕ had a stronger effect on a than the variation of the Mach number M_a. Thus, the optical schlieren method provides visualization of the reflection of the N-wave front shock from the rigid boundary. Regular and irregular types of reflection were observed, and the corresponding values of the critical parameter a for each reflection regime were determined. The rear shock of the spark-generated wave is less steep than the front shock; therefore it is more difficult to visualize it using the schlieren system.
The value a = 1.1 ± 0.3 of the critical parameter obtained in the experiment, corresponding to the transition from regular to irregular reflection, is in agreement with the theoretical value a = 0.8 obtained in [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] by numerical modeling. Note that in [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] the reflection of a plane N-wave was studied, while in this experiment the wavefront was spherical and the waveform was asymmetrical in contrast to an ideal N-wave; thus some discrepancy between theory and experiment was expected.

§2.4 Measurement of irregular reflection patterns using the Mach-Zehnder interferometry method
A Mach-Zehnder interferometer was used for quantitative measurements of the reflection patterns formed in the reflection of the N-wave from a rigid surface in air. The description of the experimental setup is given earlier in §1.5, where the Mach-Zehnder interferometry method is applied to measure N-waves in homogeneous air. The rigid surface, made of plastic, was located at a distance h_sp = 21 mm below the spark source (Fig. 2.9). The reflection from the rigid boundary does not change the radial symmetry of the wavefront in the plane parallel to the surface. Thus, the Abel inversion transform (1.18), which requires spherical or cylindrical symmetry of the wavefront, remains applicable.
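For reference, a minimal numerical sketch of the inverse Abel transform used to pass from a line-integrated quantity (such as the measured phase) to a radial profile is given below; it is a direct discretization of the standard inversion formula, and the grid handling, noise filtering, and calibration to pressure are omitted (the function and variable names are assumptions).

```python
import numpy as np

def inverse_abel(F, y):
    """Inverse Abel transform f(r) = -(1/pi) * int_r^inf (dF/dy) / sqrt(y^2 - r^2) dy.

    F : projected (line-integrated) quantity sampled on the radii y (uniform grid, y[0] = 0).
    Returns f sampled on the same grid; the outermost point is left at zero.
    """
    dFdy = np.gradient(F, y)
    f = np.zeros_like(F)
    for i, r in enumerate(y[:-1]):
        yy = y[i + 1:]
        integrand = dFdy[i + 1:] / np.sqrt(yy**2 - r**2)
        f[i] = -np.trapz(integrand, yy) / np.pi
    return f
```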
A reflection pattern measured using the Mach-Zehnder interferometer is shown in Fig. 2.10a for the case l = 25 cm, where l is the distance along the surface between the projection of the spark source on the surface and the reflection point (see Fig. 2.9b). This pattern represents the results of waveforms measured at distances h from the rigid surface in the range from 2 mm up to 30 mm with increments of 2 mm. Pressure levels are indicated by colors. The abscissa is time; thus the front shock of the N-wave, arriving earlier in time, is on the left and the rear front is on the right. At each height h above the surface, 140 waveforms were recorded in order to allow statistical analysis. The pressure level close to the surface exceeds the amplitude of the incident shock front by more than a factor of two. In the case of linear reflection, the pressure is exactly doubled. Here, the pressure amplitude of the incident front is 1 kPa, which is clearly seen on the waveforms at h = 16 and 30 mm, when the incident front is already separated from the reflected one. The peak positive pressure close to the surface is 2.3 kPa (waveform at h = 2 mm), i.e., the pressure increases by a factor of 2.3. Consider how the reflection pattern changes while the N-wave reflects from the surface at its different points (Fig. 2.11). In the schlieren experiments (Fig. 2.8), the structure of the front shock of the N-wave was visualized, while measurements using the Mach-Zehnder interferometry method provide quantitative information about the entire structure of the field. In Fig. 2.11 one can see that the Mach stem is perpendicular to the surface and grows as the wave propagates along the surface. The reflection of the rear shock is regular. The Mach-Zehnder interferometry method also allows measurement of the trajectory of the triple point (Fig. 2.12). One of the main features distinguishing step shock reflection from the reflection of acoustic waves is the dynamical character of the irregular reflection in the latter case [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF]. In the experiment, irregular reflection of the pulse did occur in a dynamic way, and the length of the Mach stem increased linearly while the pulse propagated along the surface (Fig. 2.12). Linear interpolation of the triple point trajectory to the surface predicts that for the current geometry configuration the transition between regular and irregular reflection occurs at l = 8 cm. Note that the theoretical investigation of plane N-wave reflection from a rigid boundary [START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF] predicts a complicated nonlinear trajectory of the triple point. Here one could suppose that this linear dependence is a result of the wavefront sphericity and thus of a faster decrease of the energy at the front shock. Nevertheless, this linear dependence could be the initial region of a more complicated nonlinear dependence.

§2.5 Nonlinear interaction between the reflected front shock of an N-wave and its incident rear shock
In the reflection of the N-wave from the rigid surface, nonlinear interaction between the incident and reflected shocks of the wave leads to the Mach stem formation near the surface. A spatial structure like a Mach stem can also be formed above the surface, in the region where the reflected front shock of the N-wave interacts with its incident rear front. Consider the reflection pattern obtained in the experiment in the case of l = 13 cm (Fig. 2.11). The overpressure area above the surface is clearly observed at height h ≈ 4 cm and τ ≈ 400 μs (yellow area in the figure). The structure of the fronts here is similar to the three-shock structure in irregular reflection; the difference is only in the Mach stem orientation. In order to understand the reason for the formation of the overpressure area here, let us consider the evolution of waveforms with increasing height h above the surface (Fig. 2.13).
Near the surface, at the height h = 2 mm, the waveform is a single pulse; the shock front corresponds to the Mach stem, and the increase of pressure in the rarefaction phase corresponds to the incident and reflected rear fronts of the N-wave. With increasing height h the Mach stem divides into two fronts, with the reflected front shock moving to the right on the waveform. At the same time, the parts of the waveform corresponding to the incident and reflected rear fronts of the N-wave move farther from each other. The incident rear front moves, on the contrary, to the left. It is clearly seen how the smooth rear front reaches the reflected front shock on the waveform at h = 36 mm. Then follows their nonlinear interaction and merging into a single shock front (waveform at h = 44 mm), which is in fact a Mach stem. Note that the peak positive pressure of this front (2.1 kPa) is about twice the amplitude of the incident front shock, and the waveform becomes similar to two periods of a sawtooth wave.
Thus, the formation of a Mach stem in the reflection of the N-wave from rigid surfaces can occur both near the surface and above the surface, in the area where the incident rear shock intersects the reflected front shock of the N-wave. In the case of an ideal N-wave there are two shock fronts, in contrast to the smooth rear front of the spark-generated wave; therefore such an interaction will be more pronounced.
§2.6 Conclusions
In this chapter, the reflection of spherically divergent spark-generated pulses from a rigid surface in air was studied both by the optical schlieren method and by the Mach-Zehnder interferometry method.
The optical schlieren method provides only visualization of the reflection pattern structure, while the Mach-Zehnder interferometry method allows reconstruction of the pressure waveforms in the pattern by applying the inverse Abel transform to the phase of the measured signal. In the experiments, dynamical irregular reflection was observed, and the values of the critical parameter corresponding to different reflection regimes were found. It was shown that irregular reflection of the pulses occurred in a dynamic way and that the trajectory of the triple point could be linearly interpolated within the distances available in the experiment.
Chapter 3
Saturation mechanisms of shock wave parameters in pulsed and periodic high-intensity focused ultrasound beams

§3.1 Introduction
Focusing of high-intensity pulses and periodic waves is an important problem of nonlinear acoustics [START_REF] Rudenko | Theoretical foundations of nonlinear acoustics[END_REF][START_REF] Bailey | Physical mechanisms of the therapeutic effect of ultrasound (A review)[END_REF]. The interest in this subject is, in particular, associated with a variety of medical applications of high-intensity ultrasound. Focused shock pulses are used, for example, in lithotripsy [START_REF] Averkiou | Modeling of an electrohydraulic lithotripter with the KZK equation[END_REF] for the destruction of kidney stones, while periodic sawtooth waves are used in noninvasive surgery to cause necrosis of soft tissue tumors [START_REF] Hill | Physical principles of medical ultrasonics[END_REF][START_REF] Bailey | Physical mechanisms of the therapeutic effect of ultrasound (A review)[END_REF]. The efficiency of these procedures strongly depends on the operational mode of the transducer, i.e., the number of generated pulses, their waveforms, amplitude, and duration. To select the optimal operational mode it is necessary to be able to predict the parameters of the generated fields and the biological effects caused by them. The formation of shock fronts due to nonlinear effects should also be taken into account. In the case of strongly manifested nonlinear effects, the saturation effect is observed: the acoustic field parameters at the focus of the transducer become independent of the initial pressure amplitude [START_REF] Rudenko | Nonlinear sawtooth-shaped waves[END_REF][START_REF] Rudenko | Self-action effects for wave beams containing shock fronts[END_REF].
The mechanisms causing the saturation effect are different for periodic and pulsed fields. This leads to the fact that the limiting values of the acoustic field parameters are also different for periodic and pulsed modes of focusing. In medical applications often it is necessary to obtain a high value of the peak positive or negative pressure at the focus of the transducer. In a weak nonlinear case, it is enough to increase the pressure amplitude at the transducer. However, if the nonlinear effects are significant the increase of pressure amplitude at the source does not provide pressure increase at its focus due to the saturation effect. In this case, higher pressure amplitude can be obtained by using a signal with another temporal waveform.
In this chapter, a comparison of the focusing efficiency for pulsed and periodic waves is performed, as well as a comparison with existing analytical solutions. Nonlinear propagation of focused acoustic waves was studied using numerical simulations based on the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation. In the simulations, the cases of a piston source and a Gaussian source were considered. The physical mechanisms causing saturation effects in focused acoustic fields are determined. Also in this chapter the qualitative analogy between the physical processes occurring in the focusing of an axially symmetric beam and in reflection from a plane rigid boundary is discussed. The spatial structure of the wavefront in the focal area is considered similarly to the Mach stem formation in reflection of weak shocks considered in the previous chapter.
In this section, existing analytical approaches to evaluate the limiting pressure level at the focus are considered in cases of periodic and pulsed ultrasonic beams. Then, experimental results obtained by [START_REF] Kulkarny | An experimental investigation on focussing of weak shock waves in air[END_REF] showing different spatial structures of waveforms at the focus are presented.
The saturation effect in the fields of periodic waves: a literature review
The saturation effect of the peak positive pressure in periodic fields exists already in the case of nonlinear propagation of a plane wave [START_REF] Vinogradova | Teorija voln[END_REF]. Due to nonlinear effects, an initially harmonic wave becomes a sawtooth wave. The dependence of the peak positive pressure p_+ of a periodic wave on the distance x from the source is given by a solution of the simple wave equation
\frac{\partial p}{\partial x} = \frac{\varepsilon}{\rho_0 c_0^3}\, p\, \frac{\partial p}{\partial \tau} \quad (3.1)
and can be written analytically as
\frac{p_+}{p_0} = \left(1 + \frac{\varepsilon}{\rho_0 c_0^3}\,\omega x p_0\right)^{-1}. \quad (3.2)
Here ε is the coefficient of nonlinearity of the medium, ρ_0 is the medium density, c_0 is the sound speed in the undisturbed medium, τ = t - x/c_0 is the retarded time, and ω is the circular frequency of the periodic wave.
An infinite increase of the initial pressure amplitude, p_0 → ∞, leads to the saturation effect, when the pressure amplitude p_sat at a fixed distance x does not depend on p_0:
p_{sat} = \frac{\rho_0 c_0^3}{\varepsilon \omega x}. \quad (3.3)
Here, the saturation effect of the peak positive pressure occurs due to the energy attenuation at the formed shock front [START_REF] Vinogradova | Teorija voln[END_REF].
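The approach to saturation expressed by Eqs. (3.2) and (3.3) can be checked with the short sketch below; the medium and source parameters are illustrative, water-like assumptions rather than values used later in this chapter.

```python
# Sketch: saturation of the peak positive pressure of a plane periodic wave, Eqs. (3.2)-(3.3).
# Medium and source parameters are illustrative assumptions (water-like, 1 MHz).
import math

eps, rho0, c0 = 3.5, 1000.0, 1500.0   # nonlinearity, density (kg/m^3), sound speed (m/s)
omega = 2 * math.pi * 1.0e6           # angular frequency, rad/s
x = 0.1                               # propagation distance, m

def p_plus(p0):
    """Peak positive pressure of an initially harmonic plane wave, Eq. (3.2)."""
    return p0 / (1.0 + eps * omega * x * p0 / (rho0 * c0**3))

p_sat = rho0 * c0**3 / (eps * omega * x)   # limiting value, Eq. (3.3)

for p0 in (1e5, 1e6, 1e7, 1e8):
    print(f"p0 = {p0:.0e} Pa -> p+ = {p_plus(p0):.3e} Pa (p_sat = {p_sat:.3e} Pa)")
```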
In focusing, the amplitude of the wave increases and, thus, nonlinear effects become more significant. Nonlinear effects can be pronounced differently depending on the frequency of the wave, its shape and amplitude, and the geometry of the transducer (Bessonova et al., 2009). Combined effects of nonlinearity and diffraction can lead to different features of the acoustic field structure depending on the manifestation of nonlinear effects. When the nonlinear effects are moderate and shock front formation does not occur or occurs close to the focal area, a nonlinear enhancement of the peak positive pressure and intensity is observed at the focus ([START_REF] Naugolnykh | The dependence of the gain of the acoustic focusing system of ultrasonic intensity[END_REF], Ostrovskii & Sutin, 1975). This occurs due to the better focusing of higher harmonics and their relative phase shift caused by diffraction. In this case the peak negative pressure is less than predicted by the linear theory (Bessonova et al., 2009). When nonlinear effects are strongly pronounced and shock formation occurs close to the transducer, the nonlinear attenuation at the shock front becomes significant and leads to the saturation effect [START_REF] Ostrovskii | Focusing of acoustic waves of a finite amplitude[END_REF][START_REF] Bacon | Finite amplitude distortion of the pulsed fields used in diagnostic ultrasound[END_REF][START_REF] Shooter | Acoustic saturation of spherical waves in water[END_REF]. An approach to evaluate the limiting pressure at the focus of a periodic beam was proposed in (Naugolnykh K.A. & Romanenko E.V., 1959). It is supposed that the spherical converging wave propagates from the piston source to the sphere of radius r_F (Fig. 3.1). This wave is described by the generalized simple wave equation:
\frac{\partial p}{\partial r} + \frac{p}{r} - \frac{\varepsilon}{\rho_0 c_0^3}\, p\, \frac{\partial p}{\partial \tau} = 0. \quad (3.4)
Here r is the coordinate and τ = t - r/c_0 is the retarded time. The distance r_F is chosen in such a way that the pressure amplitude of the one-dimensional linear spherical converging wave at the point r_F equals the pressure amplitude of the linear focused beam at the focus F (Fig. 3.2), described by the parabolic approximation of diffraction theory:
2ik\,\frac{\partial A}{\partial r} + \Delta_\perp A = 0, \quad (3.5)
where p(r) = A(r)\exp(-i(\omega t - kr)), \Delta_\perp = \partial^2/\partial r^2 + (1/r)\,\partial/\partial r is the Laplace operator in the cylindrical coordinate system, and k = ω/c_0 is the wavenumber [START_REF] Vinogradova | Teorija voln[END_REF]. The exact solution of Eq. (3.5) on the beam axis of the piston source is
A(r) = \frac{2A_0}{1 - r/F}\,\sin\!\left(G\,\frac{1 - r/F}{2r/F}\right), \quad (3.6)
while for Gaussian source it is
A(r) = \frac{A_0}{\sqrt{(1 - r/F)^2 + (r/F)^2/G^2}}, \quad (3.7)
where G = ka_0^2/2F is the linear gain coefficient and a_0 is the transducer radius. According to the solutions (3.6) and (3.7), the amplitude at the geometrical focus is A(r = F) = A_0 k a_0^2/2F = A_0 G (Fig. 3.2). Substituting this into the equation describing the linear propagation of a spherical converging wave, ∂A/∂r + A/r = 0, whose solution decays as A ∝ 1/r, one obtains r_F = F/G. At this distance from the focus, the acoustic field is calculated for the one-dimensional nonlinear case (Eq. 3.4). For this, the variable substitutions \tilde{P} = pr/p_0F, Θ = ωτ, and σ = (εωp_0F/\rho_0 c_0^3)\ln(F/r) are used to obtain Eq. (3.4) in dimensionless form:
\frac{\partial \tilde{P}}{\partial \sigma} - \tilde{P}\,\frac{\partial \tilde{P}}{\partial \Theta} = 0. \quad (3.8)
This is a simple wave equation. The amplitude of the sawtooth wave in the regime of a developed shock follows from this equation as \tilde{P} \approx \pi/(1 + \sigma) \approx \pi/\sigma [START_REF] Vinogradova | Teorija voln[END_REF]. Returning to dimensional variables, one obtains the dependence of the sawtooth wave amplitude on the distance r:

p = \frac{\pi \rho_0 c_0^3}{\omega \varepsilon r}\,\frac{1}{\ln(F/r)}. \quad (3.9)
Then for r F = F/G the pressure amplitude, i.e., the pressure in a saturation regime is
p_{sat} = \frac{\pi \rho_0 c_0^3}{\varepsilon \omega F}\,\frac{G}{\ln G} = \frac{\pi \rho_0 c_0^2}{2\varepsilon}\left(\frac{a_0}{F}\right)^2 \frac{1}{\ln\left(\omega a_0^2/2c_0F\right)}. \quad (3.10)
One can see that the limiting pressure value p_sat depends on the geometry of the transducer (the focusing angle a_0/F), the parameters ε, ρ_0, c_0 of the propagation medium, and the main frequency ω of the wave. With a greater focusing angle and a lower frequency, a greater limiting pressure value can be obtained.
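As a numerical illustration of Eq. (3.10), the sketch below evaluates p_sat for an assumed water-like medium and a 1 MHz focused transducer; the parameters are examples only and are not taken from the simulations of this chapter.

```python
import math

# Illustrative, assumed parameters (water-like medium, 1 MHz focused transducer)
eps, rho0, c0 = 3.5, 1000.0, 1500.0
omega = 2 * math.pi * 1.0e6
a0, F = 0.05, 0.10                      # transducer radius and focal length, m

G = omega * a0**2 / (2 * c0 * F)        # linear gain coefficient G = k*a0^2/(2F)
p_sat = math.pi * rho0 * c0**3 * G / (eps * omega * F * math.log(G))   # Eq. (3.10)

print(f"G = {G:.1f}, p_sat = {p_sat/1e6:.0f} MPa")   # roughly 64 MPa for these assumptions
```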
Figure 3.2: The amplitude of the harmonic wave as a function of the distance from the transducer along its axis in the case of linear propagation. Black color corresponds to the piston source, red color to the Gaussian source, and blue color to the spherical converging wave.
In the work of Ostrovskii and Sutin [START_REF] Ostrovskii | Focusing of acoustic waves of a finite amplitude[END_REF] an approximate approach was applied in which nonlinear and diffraction effects during the focusing of a periodic wave generated by a piston source are considered alternately.
Initially, the approach considers only nonlinear focusing process neglecting diffraction effects. Then, starting at some distance close to focus, nonlinear effects are supposed negligible and only diffraction effects are considered. Next, in the focal area nonlinear effects once again prevail over diffraction effects and a nonlinear propagation of a plane wave is considered.
According to this approach, the pressure maximum is achieved at a distance before the geometrical focus and equals
p_{sat} = \frac{2\rho_0 c_0^3}{\varepsilon \omega F}\,\frac{G}{\ln \eta}. \quad (3.11)
Here η is defined from the equation N p_{sh}\,\eta \ln \eta = 2\pi, where N = 2\pi\varepsilon p_0 F \omega/(\rho_0 c_0^3) is the nonlinear parameter and p_{sh} is the shock amplitude. The limiting values of the peak positive pressure obtained by using Eqs. (3.10) and (3.11) are of the same order. In the work [START_REF] Ostrovskii | Focusing of acoustic waves of a finite amplitude[END_REF] the enhancement coefficient of the peak positive pressure was found to be more than four times greater than it would be in linear focusing.
The approaches proposed in the works [START_REF] Naugolnykh | The dependence of the gain of the acoustic focusing system of ultrasonic intensity[END_REF] and [START_REF] Ostrovskii | Focusing of acoustic waves of a finite amplitude[END_REF] allow the evaluation of the saturation pressure level for a piston source. However, the model of a Gaussian source is also widespread. In 1984, Bacon proposed an approach which allows one to evaluate the limiting value of the peak positive pressure in the case of a Gaussian source [START_REF] Bacon | Finite amplitude distortion of the pulsed fields used in diagnostic ultrasound[END_REF]. In this approach, the one-dimensional propagation of a nonlinear acoustic wave is considered in a tube whose walls are determined by the localization of the focused Gaussian beam. The propagation of such a beam is described by the nonlinear evolution equation
\frac{\partial p}{\partial r} - \frac{\varepsilon}{\rho_0 c_0^3}\, p\,\frac{\partial p}{\partial \tau} + \frac{1}{2S}\frac{\partial S}{\partial r}\, p = 0, \qquad p(r=0) = p_0 \sin\omega_0\tau, \quad (3.12)
where S = S(r) = \pi a_0^2 (1 + r^2/a_0^2) is the cross-sectional area of the tube given by the Gaussian beam. Introducing the new variables \tilde{y}_1 = p\sqrt{S} and \tilde{y}_2 such that d\tilde{y}_2 = dr/\sqrt{S(r)}, \tilde{y}_2(r=0) = \tilde{y}_0, and proceeding to the dimensionless form, the first equation in (3.12) can be written in the form of a simple wave equation and, as in the previous case, one can evaluate the saturation pressure

p_{sat} = \frac{\pi \rho_0 c_0^3}{\varepsilon \omega F}\,\frac{G}{\ln 2G}. \quad (3.13)

Moreover, for large values of the linear gain coefficient (G ≫ 2) the limiting pressure levels obtained from all three equations (3.10), (3.11), and (3.13) will be the same.
As the nonlinear effects become more pronounced, the saturation of the peak positive pressure occurs not only at the focus but also at some distances close to the transducer. This phenomenon was studied theoretically and experimentally in the work [START_REF] Shooter | Acoustic saturation of spherical waves in water[END_REF] for converging and diverging spherical waves. The theoretical analysis in [START_REF] Shooter | Acoustic saturation of spherical waves in water[END_REF] was performed using Eq. (3.8) for spherical waves in regimes when the sawtooth wave is already formed. For each distance, the peak positive pressure of the sawtooth wave was found as it was done in the work (Naugolnykh K.A. & Romanenko E.V., 1959). The limiting value of the peak positive pressure at the distance r_F coincided with the corresponding value obtained in (Naugolnykh K.A. & Romanenko E.V., 1959).
All the approaches considered above for evaluating the parameters of nonlinear focused acoustic fields neglect diffraction or account for it separately from nonlinear effects, as was done in the work [START_REF] Ostrovskii | Focusing of acoustic waves of a finite amplitude[END_REF]. However, in the focusing of high-intensity ultrasound waves used in medical applications, the combined effects of nonlinearity and diffraction should be taken into account simultaneously, especially in the focal area. A method taking into account the combined nonlinear and diffraction effects and allowing one to find the paraxial area of focused beams was developed by [START_REF] Hamilton | A new method for calculating the paraxial region of intense acoustic beams[END_REF]. A system of nonlinear eikonal and energy transfer equations describing the distortions of the wavefront due to nonlinear and diffraction effects was obtained. An analysis of the focusing gain G was performed using the KZ equation [START_REF] Bakhvalov | Nonlinear Theory of Sound Beams[END_REF]. It was shown that, as long as the shock front has not yet formed, nonlinear effects increase the focusing gain of the peak positive pressure due to the more precise focusing of higher harmonics and the phase shift between them caused by diffraction effects. At the geometrical focus the following relation of the peak positive pressure p_+ to the amplitude of the initial wave p_0 was found:
\frac{p_+}{p_0} = G\left[1 + \frac{N}{2}\,\frac{(\pi/2)\,G - \ln(1/G)}{1 + (1/G)^2}\right] \sim G + NG^2. \quad (3.14)
As one can see, when the diffraction effects (coefficient G) are fixed, this relation increases linearly with increasing amplitude of the initial wave (coefficient N). Note that Eq. (3.14) is applicable only for weakly nonlinear waves, before shock formation occurs.
Focusing of periodic waves generated by a piston source and by a Gaussian source is studied in detail in the works (Bessonova et al., 2009[START_REF] Bessonova | Focusing of high power ultrasound beams and limiting values of shock wave parameters[END_REF], Bessonova O.V. et al., 2010) using methods of numerical simulation based on the KZK equation [START_REF] Bakhvalov | Nonlinear Theory of Sound Beams[END_REF]. In the works (Bessonova et al., 2009) and [START_REF] Bessonova | Focusing of high power ultrasound beams and limiting values of shock wave parameters[END_REF] it was shown that the existing analytical estimates (3.10), (3.11), and (3.13) provide underestimated values of the peak positive pressure. An increase of the pressure amplitude at the source leads to a nonmonotonic change of the focusing gain: initially it increases (up to 3.5 times) and then decreases. The maximum of the focusing gain is reached at the initial amplitude for which shock formation occurs in the focal area. The saturation of the peak positive pressure in the case of the piston source is reached at lower values of the nonlinear parameter N, i.e., at a lower initial pressure amplitude, than in the case of the Gaussian source.
In the thesis, focusing of periodic fields was also studied using numerical simulations of the KZK equation as it was done in (Bessonova et al., 2009[START_REF] Bessonova | Focusing of high power ultrasound beams and limiting values of shock wave parameters[END_REF]. This study was performed for further comparison with focused fields of shock pulses which were not studied using numerical methods before.
The saturation effect in pulsed fields: a literature review
Now consider the approaches that provide the evaluation of the limiting pressure values and field structures in pulsed fields. In contrast to periodic fields, in pulsed fields the saturation effect is not observed in the one-dimensional case of nonlinear propagation of a plane wave [START_REF] Vinogradova | Teorija voln[END_REF]. The solution of Eq. (3.1) for a single pulse with the shape of an N-wave is the following:
\frac{p_+}{p_0} = \left(1 + \frac{\varepsilon}{\rho_0 c_0^3 T_0}\, x p_0\right)^{-1/2}, \quad (3.15)
where T_0 is the pulse duration. The peak positive pressure p_+ remains dependent on the initial pressure amplitude p_0 even if the latter is increased infinitely, p_0 → ∞: p_+ = \sqrt{\rho_0 c_0^3 T_0 p_0/\varepsilon x}. Thus, nonlinear absorption of energy at the developed shock is not enough to saturate the parameters of the pulsed field in the case of plane wave propagation. However, saturation effects do exist if the pulsed field is focused.
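For contrast with the periodic case, the following sketch evaluates Eq. (3.15) with the same kind of assumed parameters and shows that the peak pressure of a plane N-wave keeps growing roughly as the square root of p_0 instead of saturating.

```python
import math

eps, rho0, c0 = 3.5, 1000.0, 1500.0   # assumed water-like medium
T0 = 1.0e-6                            # pulse duration, s (assumed)
x = 0.1                                # propagation distance, m

def p_plus_Nwave(p0):
    """Peak positive pressure of a plane N-wave, Eq. (3.15)."""
    return p0 / math.sqrt(1.0 + eps * x * p0 / (rho0 * c0**3 * T0))

for p0 in (1e5, 1e6, 1e7, 1e8):
    print(f"p0 = {p0:.0e} Pa -> p+ = {p_plus_Nwave(p0):.3e} Pa")  # grows ~ sqrt(p0), no saturation
```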
In the work of Sapozhnikov et al., the focusing of acoustic pulses generated by Gaussian sources is studied theoretically in a wide range of initial pressure amplitudes [START_REF] Sapozhnikov | Focusing of powerful acoustic pulses[END_REF]. For weakly pronounced nonlinear effects, the analysis of nonlinear focusing was performed in an alternating way, as was done in the work [START_REF] Ostrovskii | Focusing of acoustic waves of a finite amplitude[END_REF] for periodic fields. According to this approach the path of the wave was divided into two segments. On each segment either nonlinear effects or diffraction were negligible. Nonlinear effects were taken into account in the region between the transducer and the focal area; here the wave was supposed to be a spherically convergent wave.
In the focal area, the wave was supposed to diffract as a linear wave and nonlinear effects were not taken into account. The size of the focal region was chosen so that the limiting transition to the linear case was satisfied. In the work [START_REF] Sapozhnikov | Focusing of powerful acoustic pulses[END_REF] it was shown that the coordinate r = F - r_F, where r_F = F/G, satisfies this requirement (the geometry is similar to that shown in Fig. 3.1). Then, at the initial stage of propagation of the spherical converging wave, its evolution can be expressed as an implicit function:
\frac{p}{p_0} = \frac{F}{F - r}\,\varphi\!\left(t - \frac{r}{c_0} + \frac{\varepsilon p}{\rho_0 c_0^3}\,(F - r)\ln\frac{F}{F - r}\right), \quad (3.16)
where the function \varphi is the temporal waveform. After substituting r = F - r_F, the pressure p = p_F at the border of the focal region is
\frac{p_F}{p_0} = G\,\varphi\!\left(\tau + T_0\,\frac{p_F}{p_0}\,\frac{N\ln G}{G}\right), \quad (3.17)
where T_0 is the initial pulse duration. Then, solution (3.17) is used as a boundary condition to solve the linear diffraction equation for propagation in the focal area. According to [START_REF] Vinogradova | Teorija voln[END_REF], the solution of the linear diffraction equation on the beam axis in the parabolic approximation is
\frac{p}{p_0} = \int_{-\infty}^{+\infty} \varphi(\tau')\, g(r, \tau - \tau')\, d\tau', \qquad
g(r, \tau) = \frac{1}{|1 - r/F|}\,\frac{\partial}{\partial \tau}\left[H\!\left(\frac{\tau}{1 - r/F}\right)\exp\!\left(-\frac{2rc_0\tau}{a_0^2(1 - r/F)}\right)\right]. \quad (3.18)
Here H(τ ) is a Heaviside step function. The temporal waveform of a pulse at the focal plane is found by expression (3.18), where instead of a 0 and F one should use a F = a 0 r F /F and r F , correspondingly, and function ϕ(τ ) should be replaced by p F (τ )/p 0 . Then the coefficient of the pressure enhancement will be given by
\frac{p_+}{p_0} = \frac{G}{1 - N\ln G}. \quad (3.19)
At N = 1/\ln G the shock front is formed in the waveform at the border of the focal area. At the same time the coefficient of pressure enhancement becomes infinite. This means that nonlinearity can no longer be neglected, i.e., the approach becomes inapplicable. Thus, estimate (3.19) is applicable only for weakly nonlinear cases, when N < 1/\ln G.
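The applicability range of estimate (3.19) can be illustrated as follows (the value of G and the set of N values are assumptions).

```python
import math

G = 10.0                       # assumed linear gain coefficient
N_max = 1.0 / math.log(G)      # estimate (3.19) is valid only for N < 1/ln(G)

def focal_gain(N):
    """Peak-pressure gain at the focus for a weakly nonlinear pulse, Eq. (3.19)."""
    if N >= N_max:
        raise ValueError("shock forms before the focal region; Eq. (3.19) is inapplicable")
    return G / (1.0 - N * math.log(G))

for N in (0.0, 0.1, 0.2, 0.5):
    try:
        print(f"N = {N:.1f} -> p+/p0 = {focal_gain(N):.1f}")
    except ValueError as err:
        print(f"N = {N:.1f} -> {err}")
```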
It was shown that linear diffraction does not limit the amplitude of shock waves. This limitation is due to the effect of nonlinear refraction [START_REF] Musatov | Nonlinear refraction and absorption phenomena due to powerful pulses focusing[END_REF]. The nonlinear refraction effect is caused by the amplitude-dependent velocity of the shock. The speed of the shock front is determined by the average of the pressures just before and just behind the shock front. Shock waves of greater amplitude propagate faster than shocks of lower amplitude.
Figure 3.3: The effect of nonlinear refraction on the beam focusing -- the beam becomes defocused; the focal area becomes wider and shifts away from the transducer. The figure is taken from the review [START_REF] Rudenko | Nonlinear sawtooth-shaped waves[END_REF].
For a Gaussian source, the front velocity on the axis of the beam is greater than on its periphery. This leads to local defocusing of the beam and a shift of the focal area (Fig. 3.3). The focal area becomes wider than it would be in the linear case. Thus, for shock waves, diffraction effects are less significant than nonlinear effects, and the focusing of pulsed fields can be described using a ray approach. This approach was proposed by Sapozhnikov [START_REF] Sapozhnikov | Focusing of powerful acoustic pulses[END_REF]. Two methods of finding nonlinear rays for pulsed beams were applied to a pulse with the shape of an isosceles triangle. In the first method, the rays are lines perpendicular to the shock front. A method of successive approximations was used to find the ray pattern. Linear rays converging to the focus are used as the zeroth approximation. The initial pulse is propagated along these rays, and the parameters of the obtained pulses are used for the first approximation. Then normals to the shock front are constructed and the initial pulse is propagated along the new ray pattern, etc. For the first approximation, when the wave is supposed to be spherical, the evaluation of the pressure amplitude p_nonl at the new nonlinear focus F_nonl, where the wavefront becomes plane (see Fig. 3.3), is given by
\frac{p_{nonl}}{p_0} = \frac{F}{F_{nonl}}\,\sqrt{2}\left[1 + N\ln\frac{F}{F_{nonl}}\right]^{-1/2}, \quad (3.20)
where the coordinate of the front straightening F nonl is found from
N G\,\frac{F}{F_{nonl}}\,\ln\frac{F}{F_{nonl}} = \sqrt{2}\left[1 + N\ln\frac{F}{F_{nonl}}\right]^{1/2}. \quad (3.21)
In this approach diffraction effects are supposed to be inessential. Also the effect of transverse amplitude distribution on waveform structure is evaluated. However, this approach does not take into account the inverse effect of waveform distortions on the pressure amplitude.
The second method, which takes this inverse effect into account, is based on the assumption that before shock formation there is no nonlinear absorption and the triangular pulse propagates along linear rays [START_REF] Sapozhnikov | Focusing of powerful acoustic pulses[END_REF]. After the formation of the shock front at the distance x_sh, the pulse evolution is described by the simple wave equation with an additional term accounting for the ray-tube cross section. Here τ = t - (s - s_1)/c_0; s and s_1 are the ray coordinate and its initial value, respectively. This approach takes into account the effect of ray bending on the amplitude of the wave and provides equations to find the peak positive pressure p_+ at the focus:
\frac{p_+}{p_0} = \frac{1}{p_0}\,\Phi\!\left(\frac{a}{a_0 f}\right)\left[1 + \frac{N}{2F}\,\Phi\!\left(\frac{a}{a_0 f}\right)\int_{x_{sh}}^{x}\frac{dx'}{f(x')}\right]^{-1/2},

f^2\,\frac{d^2 f}{dx^2} = \frac{N}{2F^2 G}\left(1 + \frac{N}{4F}\int_{x_{sh}}^{x}\frac{dx'}{f(x')}\right)\left(1 + \frac{N}{2F}\int_{x_{sh}}^{x}\frac{dx'}{f(x')}\right)^{-3/2},

f\big|_{x=x_{sh}} = 1 - \frac{x_{sh}}{F}, \qquad \left.\frac{df}{dx}\right|_{x=x_{sh}} = -\frac{1}{F}. \quad (3.23)
The function Φ describes the amplitude distribution on the surface of the transducer as a function of the transverse coordinate a. Eqs. (3.23) were obtained under the assumption of the aberration-free approximation of nonlinear geometrical acoustics and a triangular waveform of the pulse. Eqs. (3.23) contain third-order differential equations that cannot be solved analytically. Nevertheless, a solution can be found using methods of numerical integration. This was done in the work [START_REF] Musatov | Nonlinear refraction and absorption phenomena due to powerful pulses focusing[END_REF]. The following expression to estimate the limiting value of the pressure in a focused pulsed field was obtained:
p sat = 1.5p in α 2 , (3.24)
where p_{in} = \rho_0 c_0^2/2\varepsilon is the inertial pressure of the medium and α = a_0/F is the focusing angle. In dimensionless form it reads

NP/G = 1.5, \quad (3.25)
where P = p/p_0 is the dimensionless pressure amplitude normalized to its initial value p_0. Thus, the limiting value of the pressure at the focus of a pulsed field generated by a Gaussian source depends only on the focusing angle α. This fact was confirmed in experiments (Musatov & Sapozhnikov, 1993a; Musatov & Sapozhnikov, 1993b).
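A short numerical illustration of Eqs. (3.24)-(3.25) is given below; the medium parameters and the focusing angle are assumed, water-like example values.

```python
# Sketch: limiting pressure of a focused pulsed field, Eqs. (3.24)-(3.25).
# Medium parameters and focusing angle are illustrative assumptions.
eps, rho0, c0 = 3.5, 1000.0, 1500.0
alpha = 0.5                              # focusing angle a0/F (assumed)

p_in = rho0 * c0**2 / (2.0 * eps)        # inertial pressure of the medium
p_sat = 1.5 * p_in * alpha**2            # Eq. (3.24); depends only on the focusing angle

print(f"p_in = {p_in/1e6:.0f} MPa, p_sat = {p_sat/1e6:.0f} MPa")
```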
In the thesis, the study of the focusing of pulsed fields is performed for a piston source and a Gaussian source using methods of numerical simulation. Note that the numerical simulation of focusing for shock pulses is a more complex problem than the simulation of periodic waves, since it needs more computational resources. In particular, for the same temporal and spatial grid steps, the simulation of pulsed signals requires the use of a larger time window (more than ten times larger even in the case of weak nonlinearity) than is needed for the simulation of periodic fields. This leads to an increase of the random access memory necessary for the calculation and a corresponding increase of the calculation time even in the case of linear propagation. Numerical simulations of nonlinear focusing of pulsed fields became possible only recently due to the rapid progress of computing hardware and methods of parallel computing.
The structure of wavefronts at the focal area
Consider the focusing of an axisymmetric beam as a process in which the upper half of the beam reflects from the axis of symmetry. Such a consideration is correct from a mathematical point of view, since the boundary conditions on the beam axis and on a reflecting rigid surface are the same: the normal derivative of the pressure with respect to the transverse coordinate is equal to zero. Considering focusing as a process similar to reflection is also in agreement with its physical nature. Such a consideration is used, for example, in optics and electrostatics in the method of mirror images. It is then logical that the Mach stem formation is observed not only in reflection from rigid surfaces but also in the focusing of axisymmetric beams, at least in the case of small focusing angles and strongly nonlinear beams.
One of the few efforts to consider the Mach stem formation in the focusing of shocks was performed by Kulkarny [START_REF] Kulkarny | An experimental investigation on focussing of weak shock waves in air[END_REF]. To generate focused step shocks he used a shock tube with a parabolic reflector on its edge. The shadowgraphy method was used for optical visualization of the wavefront structures. The experiment was conducted in air in the range of acoustic Mach numbers 5 · 10^-3 ≤ M_a ≤ 5 · 10^-1. For M_a ∼ 10^-3 a linear focusing regime was observed, as shown in Fig. 3. Note that Kulkarny performed his experiment in the 1970s, when the reflection of weak shocks in the framework of the von Neumann paradox was not yet a topical problem. Perhaps this is the reason why the experiment failed to observe the case when the Mach stem formed at M_a ∼ 10^-3. An optical system more sensitive to pressure gradients would have been necessary for that. The shadowgraphy method implemented by Kulkarny did not have sufficient sensitivity for such visualization.
Later, the focusing of weak shock waves was studied numerically in [START_REF] Tabak | Focusing of weak shocks waves and the von Neumann paradox of oblique shock reflection[END_REF], but only step shock cases were considered. As was mentioned above, acoustic waves have more complicated waveforms than step shocks; thus, the focusing of weak acoustic periodic and pulsed shocks still remains a relevant problem.
For Mach stem formation, two shocks should interact. In the case of reflection they are the incident shock and the reflected one; in focusing they are the shocks of the central wave and of the edge wave. The interaction between the shock fronts of a periodic wave was studied numerically and compared with experimental results in the work [START_REF] Khokhlova | Numerical modeling of finite amplitude sound beams: Shock formation in the near field of a cw plane piston source[END_REF]. An unfocused harmonic wave radiated by an intense continuous-wave source with an oscillating near field was considered. It was shown that if shock formation occurs in the penultimate maximum of the peak positive pressure distribution on the axis, then the central wave will interfere with the edge wave. Since the edge wave has a reversed waveform, the shock formation in its waveform occurs at another position. Thus, the formation of two shocks in each cycle of an initially harmonic wave, followed by their motion towards each other and further collision, is observed. In [START_REF] Khokhlova | Numerical modeling of finite amplitude sound beams: Shock formation in the near field of a cw plane piston source[END_REF] the interaction of shocks was considered in the time domain on the beam axis. The structure of the wavefronts in the paraxial area, which potentially contains the Mach stem, was not considered. In the thesis of Bessonova an example of such a structure is demonstrated, but it was not considered as the Mach stem formation [START_REF] Bessonova | Nonlinear effects in focused high-intensity ultrasound beams: numerical modeling and application in noninvasive surgery[END_REF].
In this thesis, numerical simulations revisiting the classical problem of shock wave focusing in the light of Mach stem formation are performed. The model is based on the KZK equation for axially symmetric nonlinear beams. In subsection §3.5 results are presented for a piston source and the possibility of the Mach stem formation in the focal area of medical transducers is discussed.
§3.2 Numerical model based on the KZK equation
The nonlinear propagation of high-intensity acoustic signals generated by focused sources is described here using the KZK equation. The equation takes into account the combined effects of nonlinearity, diffraction and absorption. For the axisymmetric beams the equation can be written in dimensionless form as
\frac{\partial}{\partial\Theta}\left[\frac{\partial P}{\partial\sigma} - NP\,\frac{\partial P}{\partial\Theta} - B\,\frac{\partial^2 P}{\partial\Theta^2}\right] = \frac{1}{4G}\left[\frac{\partial^2 P}{\partial\rho^2} + \frac{1}{\rho}\frac{\partial P}{\partial\rho}\right], \quad (3.26)
where P = p/p_0 is the acoustic pressure normalized by the initial amplitude p_0 at the transducer; σ = x/F is the propagation distance normalized by the transducer focal length F; ρ = r/a_0 is the lateral distance normalized by the transducer radius a_0; Θ = 2πτ/T_0 is the dimensionless time; τ = t - x/c_0 is the retarded time; T_0 is the signal duration (for the harmonic wave it equals one period). Equation (3.26) contains three dimensionless parameters: N = 2\pi F\varepsilon p_0/\rho_0 c_0^3 T_0 is the nonlinear parameter, where ε is the coefficient of medium nonlinearity; G = \pi a_0^2/c_0 F T_0 is the diffraction parameter; and B is the absorption parameter. The initial pressure amplitudes of the harmonic wave and the pulse are chosen so that in the case of linear focusing the shape and the peak positive pressure P_+ of both signals at the transducer focus are the same (Fig. 3.5). A harmonic wave was selected as the initial periodic signal:

P_0(\Theta) = \sin\Theta. \quad (3.27)
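For orientation, the dimensionless parameters N and G of Eq. (3.26) can be computed from physical source parameters as in the following sketch; the numerical values are assumptions chosen only to indicate typical orders of magnitude.

```python
import math

# Assumed physical parameters (water-like medium, 1 MHz focused source)
eps, rho0, c0 = 3.5, 1000.0, 1500.0
p0 = 1.0e6            # initial pressure amplitude at the transducer, Pa
a0, F = 0.05, 0.10    # transducer radius and focal length, m
T0 = 1.0e-6           # signal duration (one period of the harmonic wave), s

N = 2 * math.pi * F * eps * p0 / (rho0 * c0**3 * T0)   # nonlinear parameter
G = math.pi * a0**2 / (c0 * F * T0)                    # diffraction parameter
print(f"N = {N:.2f}, G = {G:.1f}")
```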
The pulsed regime is represented by a sequence of pulses with a low pulse-repetition frequency. The shape of each pulse is a single period of a harmonic wave, and the pressure between the pulses was taken to be constant. In this case the signal value averaged over the time window was zero:
P_0(\Theta) =
\begin{cases}
1 - 1/n_0 - \sin\Theta, & \pi/2 \le \Theta \le 5\pi/2,\\
-1/n_0, & \Theta \le \pi/2 \ \text{and}\ \Theta \ge 5\pi/2,
\end{cases} \quad (3.28)

where 2πn_0 is the length of the time window and n_0 is an integer. In the case of G = 10 the value n_0 = 13 was used for the Gaussian source and n_0 = 30 for the piston source. These initial conditions are convenient for comparing the focusing of the periodic wave with the focusing of single pulses in the nonlinear case.
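A minimal sketch constructing the pulsed initial waveform (3.28) and verifying that its average over the time window vanishes is given below (NumPy is assumed; n_0 = 13 corresponds to the Gaussian-source case quoted above).

```python
import numpy as np

def pulsed_waveform(n0=13, samples_per_period=256):
    """Initial pulsed signal of Eq. (3.28): one sine period on a constant pedestal -1/n0."""
    theta = np.linspace(0.0, 2 * np.pi * n0, n0 * samples_per_period, endpoint=False)
    P = np.full_like(theta, -1.0 / n0)                      # constant level between pulses
    mask = (theta >= np.pi / 2) & (theta <= 5 * np.pi / 2)  # one period of the harmonic wave
    P[mask] = 1.0 - 1.0 / n0 - np.sin(theta[mask])
    return theta, P

theta, P = pulsed_waveform()
print(P.mean())   # close to zero: the average of the signal over the time window vanishes
```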
The boundary condition was set in the plane σ = 0 and corresponded to a circular focused source with either Gaussian spatial apodization (3.29) or uniform pressure amplitude distribution (3.30). Focusing of the beam is provided by a phase shift which increases quadratically with the transverse coordinate ρ:

P(σ = 0, ρ, Θ) = P_0(Θ + Gρ²) exp(−ρ²),   (3.29)
P(σ = 0, ρ, Θ) = P_0(Θ + Gρ²)   for ρ ≤ 1,
P(σ = 0, ρ, Θ) = 0   for ρ > 1.   (3.30)
Equation (3.26) with boundary conditions (3.29), (3.30) and initial waveforms (3.27), (3.28) was solved numerically using a combined time-domain and frequency-domain approach based on the method of splitting into physical factors. Diffraction effects are calculated in the frequency domain using the Crank-Nicolson algorithm of second-order accuracy over both spatial coordinates [START_REF] Bessonova | Focusing of high power ultrasound beams and limiting values of shock wave parameters[END_REF]. Absorption effects are taken into account using the exact solution for harmonics in the frequency domain [START_REF] Filonenko | Effect of Acoustic Nonlinearity on Heating of Biological Tissue by High-Intensity Focused Ultrasound[END_REF]. Nonlinear effects are calculated in the time domain using a Godunov-type conservative numerical algorithm, which is capable of modeling the propagation of nonlinear waves even if only 3-4 time grid points per shock are present [START_REF] Kurganov | New high-resolution central schemes for nonlinear conservation laws and convention-diffusion equations[END_REF]. The transition between the spectral and time domains is carried out using the fast Fourier transform. The algorithm was adapted for parallel computation with the help of the OpenMP technology, which significantly reduced the calculation time.
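The overall structure of such a split-step march along σ can be sketched as follows. This is only a schematic outline of the splitting into physical factors described above; the operator arguments are placeholders standing in for the actual Crank-Nicolson diffraction step, the exact absorption of harmonics and the Godunov-type nonlinear step, none of which are reproduced here.

```python
import numpy as np

def kzk_split_step(P, n_sigma_steps, h_sigma, diffraction_step, absorption_step, nonlinear_step):
    """Schematic split-step march of the KZK solution along the propagation coordinate.

    P : array of waveforms P(Theta) on the radial grid, shape (n_rho, n_theta)
    diffraction_step, absorption_step : operators acting on the harmonics (frequency domain)
    nonlinear_step : operator acting on the waveforms (time domain), internally doing
                     several sub-steps to satisfy the CFL-type condition
    """
    for _ in range(n_sigma_steps):
        spectrum = np.fft.rfft(P, axis=-1)            # transition to the spectral domain
        spectrum = diffraction_step(spectrum, h_sigma)
        spectrum = absorption_step(spectrum, h_sigma)
        P = np.fft.irfft(spectrum, axis=-1)           # back to the time domain
        P = nonlinear_step(P, h_sigma)                # Godunov-type step(s) over nonlinearity
    return P

# Data-flow check with identity operators (no physics applied):
identity = lambda x, h: x
P0 = np.sin(np.linspace(0, 2 * np.pi, 128, endpoint=False))[None, :]
P1 = kzk_split_step(P0, 10, 1e-3, identity, identity, identity)
```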
The parameters of the numerical scheme were chosen based on the stability condition for the numerical algorithm and a preset calculation accuracy of 2%. The calculation accuracy was estimated by comparing solutions obtained with discretization steps differing by a factor of two. If the solutions differed by less than 2%, the discretization step was taken to be equal to the current one. For the Gaussian source there are no strong pressure gradients along either the transverse ρ or the longitudinal σ coordinate; thus, the grid steps were chosen larger than in the case of the piston source. For the Gaussian source, the diffraction step along the propagation coordinate was hσ = 10⁻³ and the step in the transverse coordinate was hρ = 4.06·10⁻⁴. To satisfy the Courant-Friedrichs-Lewy condition for the Godunov-type scheme, several steps over nonlinearity were performed within each diffraction step along the propagation coordinate. The nonlinearity step h_nonl was selected automatically at each diffraction step hσ and varied within the range 7·10⁻⁵ ≤ h_nonl ≤ 3·10⁻⁴. For a piston source the step along the propagation coordinate was reduced to hρ = 1·10⁻⁴ for the pulsed field; the other steps were similar to those used in the case of the Gaussian source. For the periodic field of the piston source the steps were chosen as hσ = hρ = 5·10⁻⁵. The time step of the numerical grid, or the number of harmonics taken into account in the calculation, also varied with the propagation coordinate. Initially, 128 harmonics were taken into account for the periodic wave and 8192 for the pulse. This number of harmonics was sufficient to describe focusing with the selected precision in the linear and weakly nonlinear cases (N < 0.1). The number of retained harmonics was increased with the propagation coordinate as the wave became steeper. In the focal area 2048 harmonics were retained for the periodic wave and 8192 for the pulse in the case of the Gaussian source, while 16384 were used for the pulsed beam produced by the piston source. Thus the minimal time step was equal to ht = 5·10⁻³ and ht = 1.5·10⁻³ for the pulsed and periodic fields, respectively. Artificial absorption was introduced to smooth large field gradients in the transverse directions that would otherwise lead to divergence of the algorithm [START_REF] Bessonova | A derating method for therapeutic applications of high intensity focused ultrasound[END_REF]. Its value was selected from the condition that at least seven nodes of the time grid fit within the shock front at the focus. In this case the absorption was small in the near field and increased with the propagation coordinate. In the focal area, the artificial absorption increased up to tenfold at G = 10 and forty times at G = 40. The minimum value of the absorption coefficient was B = 5.4·10⁻³.
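The 2% accuracy criterion amounts to a simple grid-refinement test; a minimal sketch, with a hypothetical `solve` callable standing in for a full run of the KZK solver at a given step:

```python
import numpy as np

def step_is_accurate(solve, h, tol=0.02):
    """Accept the discretization step h if halving it changes the field by less than tol (2%).

    `solve(h)` is a placeholder that runs the solver with step h and returns the
    field sampled on a common comparison grid.
    """
    coarse = solve(h)
    fine = solve(h / 2)
    relative_difference = np.max(np.abs(fine - coarse)) / np.max(np.abs(fine))
    return relative_difference < tol
```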
In the numerical simulations, a priori knowledge about the focusing geometry was used. For example, in the focal area the calculation along the transverse coordinate was performed only over the region where the peak positive pressure exceeded 0.06% of its maximum value. Parameters estimated for an electrohydraulic lithotripter [START_REF] Averkiou | Modeling of an electrohydraulic lithotripter with the KZK equation[END_REF] correspond to the dimensionless parameters G = 14 and N = 1.4. In the thesis most of the results are presented for G = 10 and N = 1.0, which are close to the values typical for the fields of clinical shock-wave lithotripters. These values are also typical for medical transducers used in noninvasive surgery of soft tissues [START_REF] Hill | Physical principles of medical ultrasonics[END_REF].
Among the most important parameters of the acoustic field are the peak positive pressure P_+ and the peak negative pressure P_− at the focus of the transducer. The peak positive pressure of the shock wave determines the mechanical and thermal effects, while the peak negative pressure is responsible for cavitation. In the case of linear propagation, P_+ and P_− at the focus are the same for the periodic wave and the pulse (Fig. 3.5). Results of numerical simulations show the difference which occurs in the nonlinear case. Figure 3.6 presents two-dimensional distributions of the peak pressures; the distance σ is measured from the transducer, which is located at σ = 0. One can see clearly that greater values of both peak positive and peak negative pressures are achieved in a periodic field in comparison with the corresponding values in a pulsed field. At the same time, the focal region of the peak positive pressure in a periodic field is more compact in both the longitudinal and the transverse directions. In Fig. 3.6 the size of the focal region is determined at the e⁻¹ level of the maximum peak pressure in each case and is indicated by a white contour. One can see that focused periodic fields are preferable to pulsed fields for achieving the highest peak positive and peak negative pressures at the focus of the transducer.
The focal area of P_− is shifted toward the transducer for both periodic and pulsed fields (σ = 0.95), which should be taken into account when describing cavitation effects. The focal area of P_− has an asymmetrical dumbbell shape. For the periodic field it is larger in the direction toward the source (Fig. 3.6c), while for the pulsed field the asymmetry manifests in the opposite direction [Fig. 3.6(d)]. This difference is due to the fact that there is no rarefaction phase in the initial pulse profile; it appears only far from the source due to the manifestation of diffraction effects. Hereinafter, the location of the maximum of the peak positive pressure P_+ is called the focus of the transducer. For a periodic field, the maximum of P_+ is achieved approximately at the geometrical focus of the source (σ = 1.0). In pulsed fields, the maximum of P_+ is reached behind the geometrical focus at σ ≈ 1.1. In the linear case, the maximum of the peak positive pressure for periodic and pulsed fields is attained at σ ≈ 0.98 due to diffraction effects [START_REF] Rudenko | Theoretical foundations of nonlinear acoustics[END_REF]. Thus, the location of the focus is different in the cases of linear and nonlinear focusing and depends on the temporal structure of the signal. The shift of the focus in high-intensity pulsed fields is caused by nonlinear refraction, which is more pronounced in pulsed fields than in periodic ones. This is due to the fact that the shock front formed in an initially harmonic wave stays almost symmetrical with respect to zero up to the focus and its velocity is almost unchanged.
Waveforms on the axis of the beam at different distances σ from the transducer are presented in Fig. 3.7(c) for a periodic field. At σ = 0.8 the periodic waveform is still symmetrical with respect to zero, i.e., the front velocity in the traveling coordinate system is close to zero. The pulse profile at the same distance σ [Fig. 3.7(d)] is asymmetrical, with zero pressure just behind the shock front.
Therefore, the shock front of a pulse propagates faster than the front of a sawtooth wave. Thus, the phenomenon of nonlinear refraction manifests more strongly for a pulsed field than for a periodic one. The effect of nonlinear refraction in periodic fields is significant only in the focal region, where the waveform becomes asymmetrical (see the waveform at σ = 1.0). For pulsed fields the effect of nonlinear refraction becomes significant immediately after shock front formation. The waveforms shown at σ = 1.0 and σ = 1.07 correspond to the foci of the periodic (c) and pulsed (d) fields, respectively.
As one can see, the maximum pressure in the periodic field is higher than in the pulsed field, and the pulse duration exceeds the duration of a single period of the periodic wave. After passing the focus (waveforms at σ = 1.2), the pressure in both periodic and pulsed fields decreases and the shock front in the traveling coordinate system is shifted to the left (i.e., it arrives earlier than a linear wave would) because of the combined influence of nonlinear refraction and diffraction effects.
The white dashed lines in Fig. 3.7(a,b) indicate the wavefronts where they become plane in the paraxial area. The straightening of the wavefronts occurs behind the focus, and the wavefronts are still converging in the region where the maxima of the peak positive pressures are reached. Wavefront straightening in a pulsed field occurs farther from the focus than in a periodic field. This is also caused by the phenomenon of nonlinear refraction, which is pronounced more weakly in periodic focused beams than in pulsed ones.
The characteristic distortion of the waveforms at the focus for different values of the nonlinear parameter N can be observed in Fig. 3.8(a,b). The front position of the periodic wave changes insignificantly, while the pulse front becomes strongly shifted to the left because of nonlinear refraction. As the nonlinear parameter N grows, the pulse duration increases, while the duration of one period of the periodic wave does not change but the initially harmonic wave becomes a sawtooth wave. At the value of the linear focusing gain G = 10, shock front formation in the focal waveform occurs at N = 0.5 in both periodic and pulsed fields. In a weakly nonlinear case (at N < 0.5) the peak positive pressure in pulsed and periodic fields grows with the increase of the nonlinear parameter N and then decreases after the formation of a shock front (at N > 0.5) due to nonlinear absorption. Before the formation of a shock front (at N < 0.5), the compression phase of the pulse shortens, and then at N > 0.5 it lengthens. The rarefaction phase of the pulse lengthens monotonically with the increase of N. In the periodic wave, the compression phase becomes shorter and the rarefaction phase becomes longer as nonlinear effects become stronger.
Nevertheless, these changes are pronounced much more weakly in comparison with the pulsed case.
Saturation curves for the peak pressures at the focus are shown in Fig. 3.9, with the curves for the pulsed field plotted in solid lines. Let us assume that saturation is achieved starting from the moment at which the derivative of a saturation curve is 5% of its maximal value. In this case, for the periodic field saturation of the peak positive pressure occurs at N = 5, and for the pulsed field at N = 1.5. Thus saturation in pulsed fields occurs earlier than in periodic fields, i.e., at smaller values of the nonlinear parameter N and therefore at smaller values of the pressure amplitude on the source. In a weakly nonlinear case (at N < 0.5) the saturation curves for periodic and pulsed fields are close to each other; i.e., the fields have close values of positive and negative pressures (Fig. 3.9a,b).
At large values of the nonlinearity coefficient N, the peak positive pressure and the modulus of the peak negative pressure in the periodic field are larger than in the pulsed field. Saturation of the peak negative pressure is not observed in the studied interval of the parameter N (Fig. 3.9b).
As one can see from Fig. 3.9(a), in the case of saturation of the peak positive pressure in pulsed fields, the numerically calculated coefficient is NP_+/G ≈ 1.9. Thus in pulsed fields the limiting peak positive pressure level predicted by Eq. (3.25) is ≈ 20% smaller than the one obtained by numerical simulations taking diffraction effects into account. Note that the saturation level of the peak positive pressure does not depend on the value of G, i.e., on the initial duration of the pulse. A similar behavior was predicted by Eq. (3.24).
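The 5% derivative criterion used above to locate the onset of saturation is straightforward to apply to a sampled saturation curve P_+(N); the sketch below uses a synthetic saturating curve purely for illustration, not the actual simulation data.

```python
import numpy as np

def saturation_onset(N, P_plus, threshold=0.05):
    """First N at which dP+/dN falls to `threshold` of its maximum value."""
    dP_dN = np.gradient(P_plus, N)
    below = np.where(dP_dN <= threshold * dP_dN.max())[0]
    return N[below[0]] if below.size else None

N = np.linspace(0.05, 6.0, 500)
P_plus = 10.0 * (1.0 - np.exp(-N))        # synthetic saturating curve (illustration only)
print("saturation onset at N ~", saturation_onset(N, P_plus))
```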
Analytical expression (3.13) for the peak positive pressure at the focus of periodic fields can be written in dimensionless form as

NP_+/G = π ln(2G).

To describe the numerically calculated saturation curves over the whole range of N, approximation (3.33) can be used. The approximations are shown in Fig. 3.9 by thick light-blue lines. In both fields, a quadratic increase of the peak positive pressure is observed in the weakly nonlinear case and then gives way to a slow logarithmic increase; precisely this increase describes saturation.
An analogous approximation was also selected for estimating the limiting values of the peak negative pressure. Since these values are close in periodic and pulsed fields, identical approximations were selected for both fields. In contrast to the saturation curves for the peak positive pressure, the values of the peak negative pressure at a given N depend on the parameter G:

N|P_−|/G = lg[(G/13 + 1.2)N + 1] / (G/33 + 0.61).   (3.34)

Figures 3.10(a,b) show waveforms at different transverse distances ρ from the beam axis at the distance σ = 0.8 from the transducer. One can see that even at a small distance from the beam axis (ρ = 0.24) the waveforms are almost undistorted, although on the axis the shock front is already formed in both periodic and pulsed fields.
Larger values of the peak positive pressure in periodic fields [Fig. 3.10(a)] can be explained qualitatively in the following way. Because of the Gaussian spatial apodization of the pressure amplitude at the source, the peak pressure in the central wave is higher than in the wave coming from the source periphery. Since nonlinear effects manifest more strongly for waves with larger amplitudes, the waves coming from the source periphery are distorted much more weakly than the shocked central waves [Figs. 3.10(a,b)]. In nonlinear periodic fields, waves from the center of the source and from its periphery are focused almost to the geometrical focus of the source (Fig. 3.11). Increasing the pressure amplitude on the source leads to the saturation effect for the central wave but not for the waves coming from the source periphery: they are always linear. This is the reason why the saturation curve for a periodic field in the range of the studied parameters does not stabilize at a constant level and the value of the peak pressure at the focus continues to grow slowly [Fig. 3.9(a)].

The energy of the beam at an arbitrary distance from the source was calculated as an integral of the squared pressure over a time window and over the beam aperture. The energy was then normalized to its initial value. The energy over a single period was calculated as the energy of the periodic wave and also normalized to its initial value:

E/E_0 = ∫∫ (p²/p_0²) a da dt.

One can see from Fig. 3.12 that near the source the energy of periodic and pulsed beams remains constant. Then, starting from the distance σ corresponding to the shock formation length in the waveforms at the beam axis, the energy starts to decrease due to nonlinear absorption at the shocks. It is well known that in a plane nonlinear wave the energy of a pulsed signal after shock formation decreases with distance as 1/σ, and the energy of a periodic field decreases faster, as 1/σ² [START_REF] Rudenko | Theoretical foundations of nonlinear acoustics[END_REF]. In this case the effect of pressure saturation is observed in a plane periodic wave, while there is no saturation for a pulsed signal ([START_REF] Rudenko | Nonlinear sawtooth-shaped waves[END_REF][START_REF] Rudenko | Self-action effects for wave beams containing shock fronts[END_REF]). In the case of focusing, the energy of a periodic field also decreases with distance faster than the energy of a pulsed nonlinear field (Fig. 3.12). Thus, nonlinear absorption is more pronounced in focused periodic nonlinear fields than in pulsed ones. This allows one to conclude that the main mechanism leading to saturation in focused periodic fields is nonlinear absorption; the effect of nonlinear refraction is significant only in a very small region near the focus and is insignificant on the whole. For pulsed fields the main mechanism of saturation of the peak positive pressure is nonlinear refraction.
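A minimal sketch of how such a normalized beam energy can be evaluated from a discretized field p(a, t) on uniform grids (array names and shapes are assumptions for illustration):

```python
import numpy as np

def beam_energy(p, a, t, p0):
    """Integral of (p/p0)^2 * a over the time window and the beam aperture (uniform grids).

    p : pressure array of shape (len(a), len(t)); a : radial grid; t : time samples.
    Dividing the value at a distance sigma by the value at sigma = 0 gives E/E0.
    """
    da = a[1] - a[0]
    dt = t[1] - t[0]
    return np.sum((p / p0) ** 2 * a[:, None]) * da * dt
```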
Note that, despite the fact that the energy of a periodic beam decreases faster, the maximum attainable value of the peak positive pressure in a periodic field is higher than that in a pulsed one. These peculiarities of nonlinear propagation provide the possibility of using beams of pulses for more effective delivery of the wave energy to the focal region, and periodic waves for achieving higher values of the pressure amplitude at the focus.

In the previous section it was shown that the spatial structure and the limiting values of the acoustic parameters of focused fields depend on the temporal structure of the signal. Results of numerical simulations demonstrate that the parameters of the acoustic field are also strongly dependent on the distribution of the pressure amplitude on the source (the so-called source apodization). In the current section, Gaussian and uniform (piston) apodizations of the source are considered, since they are the most widespread and correspond to real transducers used in medicine [START_REF] Hill | Physical principles of medical ultrasonics[END_REF][START_REF] Bailey | Physical mechanisms of the therapeutic effect of ultrasound (A review)[END_REF][START_REF] Averkiou | Modeling of an electrohydraulic lithotripter with the KZK equation[END_REF].
Figures 3.13 and 3.14 demonstrate two-dimensional spatial distributions of the peak positive (Fig. 3.13) and peak negative (Fig. 3.14) pressures in the case of focused periodic (a, c) and pulsed (b, d) fields produced by sources with Gaussian (a, b) and uniform (c, d) apodizations. The sizes of the focal areas are shown by white solid lines and were determined at the e⁻¹ level of the peak pressure maximum in each field. Differences in all four cases are clearly observed: for the Gaussian source the focal areas of the peak pressures are 2-3 times narrower along the propagation direction σ than in the case of the piston source. This is due to the fact that in fields produced by Gaussian sources, shock formation occurs only in the central part of the beam, where pressures are higher than at the periphery.
In Gaussian fields, the waves from the source periphery arrive at the geometrical focus linearly. Focusing of the edge wave of the piston source occurs nonlinearly, which enhances the effect of nonlinear refraction and increases the size of the focal area. Note that the transition of the source apodization from Gaussian to uniform changes the structure of both periodic and pulsed fields. In the periodic field, an oscillating structure caused by the interference of the central and the edge waves appears [Figs. 3.13, 3.14 (c)]. In pulsed fields, there are no spatial oscillations, but nonlinear effects become significant not only for the central wave but also for the edge wave.
Consider the on-axis structure of the nonlinear pulsed field produced by a focused piston source. In Figs. 3.15, 3.16 the distributions of the peak positive and peak negative pressures are shown for different values of the nonlinear parameter N. In the case of moderate focusing (Fig. 3.15, G = 10), increasing the pressure amplitude on the source shifts the on-axis maximum of P_+ away from the source as long as shock formation has not occurred. After formation of the shock front, the focus of P_+ shifts in the opposite direction, i.e., toward the source. The trajectory of the focus is a loop (Fig. 3.15). The displacement of the focus of P_+ away from the transducer is caused by the effect of nonlinear refraction. After the shock is formed, nonlinear absorption becomes significant and the focus moves back toward the transducer. In the case of the Gaussian source, the focal area of P_+ of the pulsed field moves away from the source over the whole range of N. The difference in the trajectories indicates that nonlinear refraction, which defocuses the beam, prevails in Gaussian fields. For a piston source the structure of the focal area is determined equally by nonlinear refraction and nonlinear absorption. This is due to the fact that nonlinear effects in Gaussian fields are significant only in the paraxial part of the beam with high pressure levels, while for a piston source nonlinear effects play a significant role over the whole surface of the transducer.
Interesting features are observed for strong focusing of the pulsed field, when the influence of nonlinear refraction becomes more significant. In Fig. 3.16, on-axis distributions of the peak pressures are shown for a piston source at G = 40. Before shock formation, the shift of the focus of P_+ is negligible and its displacement from the geometrical focus (σ = 1) is about 3% of the focal length. However, after the shock is formed in the pulse profile, the focal region broadens. If nonlinear effects are strongly pronounced (curve for N = 2.0 in Fig. 3.16), the maximum of P_+ is reached before the geometrical focus, since nonlinear absorption of energy at the shock front is significant. Then there is a very gradual decline in the peak positive pressure due to the appearance of nonlinear refraction. Thus, nonlinear effects significantly change the distribution of the peak positive pressure compared to the linear case. The changes in the distribution of the peak negative pressure P_− with an increase of the pressure amplitude on the source (parameter N) are also quite strong, but more predictable: the focal area of P_− slowly shifts toward the source and |P_−| monotonically decreases (Figs. 3.15, 3.16).
To analyze the focusing efficiency of periodic and pulsed fields produced by Gaussian and piston sources, let us compare acoustic fields with the same focusing angle (parameter G) and the same pressure amplitude p_0 on the source (parameter N). Note that such a comparison describes not fully equivalent cases, since at the same pressure amplitude p_0 the piston source is driven with more power than a Gaussian source. However, the choice of the same pressure amplitude p_0 on both sources ensures the same waveforms and peak pressures at the focus in the linear case for all four acoustic fields. In this way periodic and pulsed fields generated by a Gaussian source were compared in the previous section. In Fig. 3.17, axial distributions of the peak pressures P_+ and P_− are presented for all four cases mentioned above. Dashed lines correspond to the periodic field, solid lines to the pulsed field. Curves for the Gaussian source are shown in dark blue and curves for the piston source in red. Distributions are given for two values of the diffraction parameter (G = 10 and G = 40) in the cases of linear propagation, N = 0.0 (a, b); nonlinear propagation at N = 0.5 (c, d), when the shock is just formed for Gaussian fields; and strong nonlinearity at N = 1.0 (e, f), when shocks are developed for all fields. It is clearly seen that regardless of the source apodization and the manifestation of nonlinear effects, in periodic fields the levels of P_+ and |P_−| are higher than in pulsed fields [Fig. 3.17(c-f)]. Thus, focusing of periodic fields is preferable for achieving the highest peak pressures. The second important feature is the wider focal area in fields produced by a piston source compared to the size of the focal area in Gaussian fields. This feature is observed already in the linear case.
If the goal is to concentrate the acoustic beam in a focal region of small size, one should use a focused periodic field of a Gaussian source. Periodic fields produced by a piston source contain sharp and narrow peaks in the distribution of P_+ in the region before the focus: an example of such a structure is clearly visible in Fig. 3.17(d). The reason for the formation of these peaks is the merging of two shocks within one period of the periodic wave into a single shock front and, as a consequence, a sharp increase in the nonlinear absorption and a rapid attenuation of the peak amplitude (see Fig. 3.18).
In strongly nonlinear focused fields [Fig. 3.17 (d,f)] produced by a Gaussian source, the peak pressures for periodic and pulsed fields are higher than in the case of a piston source. The reason is the stronger manifestation of nonlinear effects for a piston source: nonlinear effects are significant along the whole transverse coordinate, which leads to stronger nonlinear absorption of energy and to lower values of the peak positive pressure at the focus than in Gaussian fields. Thus, transducers with Gaussian apodization are better suited for achieving the highest peak pressures in a small focal area than piston transducers.

As shown in the previous section, during the propagation of an acoustic wave, shock fronts of the same wave can interact with each other (Fig. 3.18). To observe this phenomenon for periodic waves, shock formation should occur in the penultimate pressure maximum on the transducer axis; then, in the region of the main maximum, the central wave interferes with the edge wave. The edge wave has reversed polarity, so the shock in its waveform forms in another part of the profile. As a result, two shocks form within a single period of the wave and their interaction is observed [START_REF] Khokhlova | Numerical modeling of finite amplitude sound beams: Shock formation in the near field of a cw plane piston source[END_REF]. In the numerical simulations performed in this study, a similar phenomenon was also observed in the nonlinear focusing of pulsed fields produced by a piston source, although the oscillating near field is absent in that case. Such an interaction of fronts in space, rather than in time, is similar to the Mach stem formation described in detail in the previous chapter. This section is devoted to this phenomenon.
It is interesting that attempts to 'find' in numerical simulations a spatial structure similar to the Mach stem were not successful in fields produced by a Gaussian source, regardless of the temporal structure of the signal. The Mach stem formation does not occur in fields of a Gaussian source. This is caused by the absence of the interaction of two shocks: for Mach stem formation the edge wave should contain a shock front, whereas in the case of a Gaussian source the edge wave is always linear. It was also impossible to 'find' the Mach stem formation in the focal area of a piston source for pulses without a pronounced rarefaction phase (for example, for the pulse shown in Fig. 3.5(a)). Therefore, the nonlinear focused fields of a harmonic wave and of a bipolar pulse produced by a piston source were chosen for the study. In [START_REF] Khokhlova | Numerical modeling of finite amplitude sound beams: Shock formation in the near field of a cw plane piston source[END_REF] it was shown that, for the two-shock structure in one period of the harmonic wave, the higher shock comes from the edge of the source and the lower shock comes from its central part. The formation of the Mach stem structure in the focal area of acoustic beams is thus the result of the nonlinear interaction of the shock fronts of the edge and central waves. These two fronts merge because the velocity of a shock depends on its pressure: the higher shock in the waveform propagates faster than the lower one. The two shocks collide and the Mach stem forms in the beam region close to the axis.
In pulsed beams, the edge wave starts to collide with the central wave at the end of the pulse; therefore, the second shock front and the Mach stem structure form within the initially negative phase of the bipolar pulse [14 < Θ < 16 in Figs. 3.20(d) and (e)]. The front pattern in this case also resembles the von Neumann reflection, with a continuous slope between the fronts of the central wave and the Mach stem [Fig. 3.20(d)], but the front structure is blurred [Fig. 3.20(e)]. Smearing of the front structure occurs because the edge wave in the pulsed fields is smoother than in the periodic fields [Fig. 3.20(f), rear shocks of waveform 2 and waveform 3] and thus the values of the pressure derivative are smaller. When the edge wave front merges with that of the central wave, they turn into a sharp shock and produce an excess of the pressure amplitude at the rear shock [Fig. 3.20(f), waveform 1]. This excess of pressure is clearly observed as the white area in [Fig. 3.20(d)] at the location of the Mach stem structure.
Thus, numerical simulations based on the KZK equation for nonlinear periodic and pulsed acoustic beams in water showed a process analogous to Mach stem formation. The structure of the front patterns in the focal region of the beam resembled the von Neumann reflection resulting from the interaction between the edge and the central waves coming from the source. For pulsed beams the effect occurred only for the rear shock of the pulse. In periodic fields generated by a piston source, Mach stem formation occurs at smaller values of the nonlinear parameter N (i.e., at lower pressure amplitudes on the source) than in the case of a pulsed field.
§3.6 Conclusions
The chapter describes the effects of nonlinear saturation in focused beams of periodic waves and pulses generated by Gaussian and piston sources. Numerical simulations were based on the axially symmetric KZK equation. It is shown that in periodic fields the saturation of the peak positive pressure is mainly due to the effect of nonlinear absorption at the shock front, while in acoustic fields of single pulses the main mechanism of saturation is nonlinear refraction. The level of the peak positive pressure achieved at the focus in the periodic field appeared to be higher than that of the single pulse. The total energy of the beam of the periodic wave, however, decreases much faster with the distance from the source than that of the single pulse. These nonlinear propagation effects suggest the possibility of using pulsed beams for more effective delivery of the wave energy to the focal region, while periodic waves are preferable for achieving higher peak pressures at the focus. It was also shown that the formation of the Mach stem can be observed in the focal area of a piston source in fields of periodic waves and bipolar pulses, and can be described using the KZK equation.
Chapter 4. Characterization of nonlinear focused ultrasound fields of new medical devices

Among the therapeutic effects of ESWT are osteogenesis (new bone formation) [START_REF] Endres | Extracorporeal shock-wave therapy in the treatment of pseudoarthrosis: a case report[END_REF] and an antinociceptive effect [START_REF] Wang | An overview of shock wave therapy in musculoskeletal disorders[END_REF][START_REF] Ohtori | Shock wave application to rat skin induces degeneration and reinnervation of sensory nerve fibers[END_REF] (Fig. 4.2). After an ESWT procedure the patient feels less pain, mobility in the joints improves, local metabolism is enhanced, and blood circulation in the tissue is restored. Micro-cracks and a large number of tiny bone fragments are formed; stimulation of bone formation (osteogenesis) occurs and the blood supply in the fracture zone is improved [START_REF] Steinberg | ESWT role in Wound care[END_REF]. Many clinical studies have shown the high efficiency and effectiveness of ESWT [START_REF] Endres | Extracorporeal shock-wave therapy in the treatment of pseudoarthrosis: a case report[END_REF][START_REF] Wang | An overview of shock wave therapy in musculoskeletal disorders[END_REF][START_REF] Kudo | Randomized, placebo-controlled, double-blind clinical trial evaluating the treatment of plantar fasciitis with an extracoporeal shock wave therapy (ESWT) device: a North American confirmatory study[END_REF], Rompe et al., 2003[START_REF] Gerdesmeyer | Extracorporeal shock wave therapy for the treatment of chronic calcifying tendonitis of the rotator cuff: a randomized controlled trial[END_REF][START_REF] Furia | Safety and efficacy of extracorporeal shock wave therapy for chronic lateral epicondylitis[END_REF][START_REF] Rompe | Repetitive low-energy shock wave treatment for chronic lateral epicondylitis in tennis players[END_REF][START_REF] Steinberg | ESWT role in Wound care[END_REF][START_REF] Ohtori | Shock wave application to rat skin induces degeneration and reinnervation of sensory nerve fibers[END_REF]. However, there are a number of works [START_REF] Brown | Investigation of the immediate analgesic effects of extracorporeal shock wave therapy for treatment of navicular disease in horses[END_REF][START_REF] Buchbinder | Ultrasound guided extracorporeal shock wave therapy for plantar fasciitis[END_REF][START_REF] Haake | Extracorporeal shock wave therapy in the treatment of lateral epicondylitis: A randomized multicenter trial[END_REF] which state that the positive effect of ESWT is a placebo effect and that there is no real therapeutic action. The reason for this disagreement is an incomplete understanding of the physical and biological mechanisms of ESWT action on biological tissues and the lack of optimal treatment protocols for each disease and each available device [START_REF] Cleveland | Acoustic field of a ballistic shock wave therapy device[END_REF]. If the treatment protocol is inadequate, the therapy has no effect.
ESWT devices use the same methods of shock wave generation as lithotripters. However, there are also pneumatic (or ballistic) ESWT devices, which are considered the most inexpensive and reliable [START_REF] Cleveland | Acoustic field of a ballistic shock wave therapy device[END_REF]. ESWT uses longer pulses than lithotripsy: their duration is about 20 μs, in contrast to 5 μs in lithotripsy; peak positive pressures vary from 4 MPa up to 40 MPa depending on the particular device model; peak negative pressures are in the range from -20 MPa to -4 MPa.
Pneumatic and electromagnetic ESWT devices are the most common in clinical practice. Acoustic fields generated by a pneumatic ESWT device were studied by Cleveland [START_REF] Cleveland | Acoustic field of a ballistic shock wave therapy device[END_REF] for the model EMS Swiss Dolorclast Vet (Fig. 4.3a), used in veterinary practice. The emitter of this model is equipped with radial (unfocused) and focusing applicators. However, measurements of the field indicated that, at clinically significant distances (up to 12 cm from the source), the focused applicator produces qualitatively the same field structure as the unfocused one. This was explained by the too large focusing distance of the focused applicator. Pressure waveforms measured in water at a distance of 1 cm from the transducer are shown in Fig. 4.3 (b, c) for both applicators [START_REF] Cleveland | Acoustic field of a ballistic shock wave therapy device[END_REF]. The waveforms are strongly different from the ones used in lithotripsy (Fig. 4.1): they do not contain the pressure jump with a shock front at the beginning of the pulse that is typical for lithotripsy.
In this chapter of the thesis the acoustic field of a clinical ESWT device (Duolith SD1 T-Top produced by Storz Medical, Switzerland) is characterized in water using a combined measurement and modeling approach.
The Duolith SD1 device is used in orthopedics, cosmetology, and neurology. The Duolith SD1 T-Top has two modes of operation: "focused shock wave therapy" (focused electromagnetic head) and "radial shock wave therapy" (ballistic head). The radial mode is used for treating shallow lesions or when low-energy surface treatment is required. Focused therapy is used for treatments requiring deeper penetration. Fig. 4.4 demonstrates the treatment of a heel spur and of tennis elbow using the focused therapy head (figures and acoustic parameters of the generated pulses are taken from the technical specification sheet of the Duolith SD1).
Focused fields of electromagnetic ESWT devices are more intense than those of pneumatic devices; therefore, nonlinear effects are more substantial for electromagnetic sources. Another new and promising medical application of focused shock pulses is the treatment of kidney stones by pushing small stones out of the kidney using acoustic radiation force (Fig. 4.5) [START_REF] Shah | Novel ultrasound method to reposition kidney stones[END_REF]. The formation of kidney stones (nephrolithiasis) is a common urological disease, which affects about 10% of the population during their lifetime. Nephrolithiasis may be asymptomatic and detected only during examination of the body for other suspected diseases. However, in 85% of cases kidney stones begin to move from the kidneys through the ureters into the bladder, causing blockage of the ureter and severe pain attacks (renal colic). Patients describe renal colic as extremely painful. Every year thousands of operations to remove kidney stones are performed. Lithotripsy, mentioned above, is the medical procedure that is widely used for extracorporeal removal of kidney stones. It utilizes high-energy focused shock pulses to break the stone into small fragments which can pass from the body in a natural way. However, the retention of residual stone fragments in the lower pole of the kidney is a common problem confronted by urologists, documented in 21%-59% of patients who underwent lithotripsy [START_REF] Osman | et al, 5-year-follow-up of patients with clinically insignificant residual fragments after extracorporeal shockwave lithotripsy[END_REF].
Ultrasonic propulsion of kidney stones is a new stone-management technique under development. It uses a diagnostic ultrasound probe to create a real-time B-mode image and to generate a pulse that moves the kidney stone out of the kidney with acoustic radiation force (Fig. 4.5). Ultrasonic propulsion could be an alternative extracorporeal procedure to remove small kidney stones by pushing them toward the ureter, or an effective method to facilitate the passage of stone fragments after lithotripsy. Preliminary clinical results of ultrasonic propulsion have been successful, and the displacements of kidney stones pushed by acoustic radiation force were recorded in experiments on a porcine model [START_REF] Shah | Focused ultrasound to expel calculi from the kidney[END_REF]. Recently, the method has been successfully tested on patient volunteers who were waiting for a lithotripsy procedure [START_REF] Hickey | Trial to test using ultrasound to move kidney stones[END_REF]. In the preliminary experiments, the pushing of kidney stones was produced by standard diagnostic probes (Philips ATL HDI C5-2 and ATL HDI P4-1) generating long millisecond pulses at the highest possible applied voltage. Physical principles and treatment protocols are still not developed for this new technology.
This chapter of the thesis is devoted to the investigation of nonlinear effects in the fields of modern diagnostic (Philips C5-2 probe) and ESWT (Duolith SD1) medical devices. The combined measurement and modeling approach ([START_REF] Kreider | Characterization of a multi-element clinical HIFU system using acoustic holography and nonlinear modeling[END_REF], Canney et al., 2008, Bessonova & Wilkens, 2013) was used to describe the field structures. The approach is based on using low-power measurements to set the boundary condition in the numerical model ([START_REF] Perez | Acoustic field characterization of the Duolith: Measurements and modeling of a clinical shockwave therapy device[END_REF], Karzova et al., 2013, Karzova et al., 2015a). Experiments were performed in CIMU by its former PhD student Camilo Perez [START_REF] Perez | Characterizing ultrasound pressure fields, microbubbles and their interaction[END_REF].
Measurements were performed on the portable Duolith SD1 T-Top (Storz Medical AG, Tägerwilen, Switzerland) ESWT device, which uses a focused electromagnetic source. The electromagnetic source was coupled with a standoff of 20 mm radius that contains the oil-bag attachment to the membrane of the therapy head. The therapy head was located outside the water tank (31 cm long × 18 cm deep × 18 cm wide) and was coupled to the tank via a Tegaderm window and coupling gel (Fig. 4.6). The water was degassed to a maximum dissolved oxygen level of 8% O₂ at room temperature.
Measurements of the acoustic field were performed in water using a 3D computer-controlled positioning system (Velmex NF90, Bloomfield, NY) and a fiber optic hydrophone (FOPH 2000, RP Acoustics, Germany; the fiber tip was 100 μm in diameter). The focal length of the therapy head was set to F = 30 mm; the pulse repetition frequency of the machine ranged from 1 to 8 Hz. Alignment of the FOPH to the acoustic field was done by performing a raster scan in two separate planes: one plane intersected the acoustic axis at the focus, with the maximum pressure at the center; the other plane was distal to the first. The beam axis was found as the line crossing the pressure maxima of the two planes. The FOPH was positioned parallel to the axis x of the beam. The radial symmetry of the field emitted by the device was confirmed after initial experiments.
Setting a boundary condition in a model using a method of an equivalent source
To set a boundary condition in the numerical model, the method of an equivalent source was applied ([START_REF] Kreider | Characterization of a multi-element clinical HIFU system using acoustic holography and nonlinear modeling[END_REF][START_REF] Canney | Acoustic characterization of high intensity focused ultrasound fields: A combined measurement and modeling approach[END_REF][START_REF] Bessonova | Membrane hydrophone measurement and numerical simulation of HIFU fields up to developed shock regimes[END_REF], Perez et al., 2013, Karzova et al., 2013, Karzova et al., 2015a). The method consists of using experimental data to set the parameters of a so-called 'equivalent source', which generates the same acoustic field as the real transducer. In the experiments, pressure waveforms were measured radially in a plane as close to the therapy head as possible for the existing experimental arrangement, which was approximately 5 mm from the standoff (Fig. 4.6). The waveforms were measured from the beam axis at r = 0 up to the radial coordinate r = 14 mm. Although the standoff radius was 20 mm, measurements at distances 14 < r < 20 mm were not collected because of low signal-to-noise levels. A total of 71 waveforms were measured with a spatial step of 0.2 mm, a time step of 2 ns and a duration of 20 μs. Signals were sampled at 500 MSample/s and each waveform was averaged over 20 individual waveforms. These data were used to set the boundary condition for the numerical modeling of the ESWT field.
Representative examples of the averaged waveforms collected from the experiment for setting the boundary condition are shown in Fig. 4.7 (a,b,c) in blue. In order to make them applicable for the numerical simulations it was necessary 1) to reduce the noise level in the measured signal; 2) to add a tail to each waveform to obtain a zero mean value of the signal (which is necessary for the fast Fourier transform (FFT) in the numerical simulations); 3) to change the spatial step of the grid for the boundary condition; 4) to add waveforms to the radial scan from 14 mm to 20 mm from the axis. Each procedure is described below in more detail.
Reducing the noise level in measured waveforms
To facilitate the modeling effort, each experimental waveform was numerically smoothed to reduce the noise level in the measured signal. To avoid decreasing the signal amplitude in the smoothing process, each waveform was divided into two parts. The first part contained the region around the maximum of the peak positive pressure (within 1.25 μs) and the second part contained the other, smoother parts of the pulse. The first part was numerically smoothed 3 times over 5 points, which allowed keeping the same amplitude level. The second part of the signal was smoothed 3 times over 30 points. The resulting smoothed waveforms are also shown in Fig. 4.7 (a,b,c) in red.
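A minimal sketch of this two-zone smoothing (repeated moving averages of different widths near and away from the peak); the waveform array, the time grid, and the exact width of the peak window are placeholders rather than the actual processing script used for the data.

```python
import numpy as np

def moving_average(x, width, passes=3):
    kernel = np.ones(width) / width
    for _ in range(passes):
        x = np.convolve(x, kernel, mode="same")
    return x

def smooth_waveform(p, t, peak_window=1.25e-6, narrow=5, wide=30):
    """Light smoothing (5 points) inside a ~1.25 us window around the peak, strong (30 points) elsewhere."""
    near_peak = np.abs(t - t[np.argmax(p)]) <= peak_window / 2
    smoothed = moving_average(p.copy(), wide)
    smoothed[near_peak] = moving_average(p.copy(), narrow)[near_peak]
    return smoothed
```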
Requirement of zero mean value of the waveform
The general properties of the solution to the KZK equation yield that the time integral over the pulse must be equal to zero, as the zero frequency component in the FFT series expansion of the signal is eliminated by diffraction. To ensure that the pulses used for the boundary condition satisfy this requirement, a tail of Δt = 30 μs duration was added at the end of each pulse as
p(t*) = p_1 cos²(πt*/(2Δt)) − (p_1 + 2S/Δt) sin²(πt*/(2Δt)).   (4.1)
Here p_1 is the pressure value at the last measured time point of each waveform, t* is the time counted from this last point, and S is the integral over the averaged waveform. The absolute value of the maximum pressure in the tail did not exceed 1.6 MPa, i.e., it was of the same order as the noise level in the measured waveforms (±0.7 MPa). The tails added to the waveforms are shown in Fig. 4.7 (a,b,c) as black lines.
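As a consistency check, using only Eq. (4.1) and the standard integrals of cos² and sin² over the tail duration,

∫₀^Δt p(t*) dt* = p_1 (Δt/2) − (p_1 + 2S/Δt)(Δt/2) = −S,

so the total time integral of the padded waveform, S + (−S), vanishes as required by the KZK model.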
Changing steps of the grid
The radial step in the numerical modeling was further refined by adding 36 waveforms between each two experimental waveforms. Each of these 36 additional waveforms was obtained by linearly interpolating the pressure at each time point between the neighboring experimental waveforms. The coefficients for the interpolation were i/36 and (1 − i/36), where i is the number of the additional waveform between the two experimental ones. As a result, the radial step was refined to 5.4 μm instead of the 0.2 mm of the experiment.
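A minimal sketch of this radial refinement, following the stated interpolation coefficients i/36 and (1 − i/36); the array of measured waveforms is a placeholder:

```python
import numpy as np

def refine_radial_scan(waveforms, n_insert=36):
    """Insert n_insert linearly interpolated waveforms between each pair of measured ones.

    waveforms : array of shape (n_radial, n_time), one measured waveform per radial position.
    """
    refined = [waveforms[0]]
    for w_left, w_right in zip(waveforms[:-1], waveforms[1:]):
        for i in range(1, n_insert + 1):
            refined.append((1 - i / n_insert) * w_left + (i / n_insert) * w_right)
    return np.asarray(refined)
```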
With the numerically added tail, the number of time points increased to 25000, with a time step of 2 ns. A requirement of the FFT version used in our simulation program is that the number of time points be a power of 2. In order to satisfy this requirement, the number of time points was increased to 32768 by padding the signals with zeros.
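A minimal sketch of this padding step (the waveform length is taken from the numbers quoted above):

```python
import numpy as np

def pad_to_power_of_two(signal):
    """Zero-pad a waveform so its length is the next power of two, as required by the FFT routine."""
    n = len(signal)
    n_padded = 1 << (n - 1).bit_length()   # next power of two >= n
    return np.pad(signal, (0, n_padded - n))

print(pad_to_power_of_two(np.zeros(25000)).shape)   # (32768,)
```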
Adding waveforms at the edges of the source
To account for the non-measured waveforms at radial coordinates from 14 mm to 20 mm from the axis, additional waveforms were numerically introduced into the boundary condition by taking the last radial waveform, obtained at 14 mm, and exponentially decreasing its amplitude along the radial coordinate with a linear time delay that followed the overall geometry of the measured field.
The decrease in the pressure amplitude along the radial coordinate is shown in Fig. 4.8.
Comparison of data obtained in measurements and nonlinear modeling
Numerical simulations of the ESWT device Duolith SD1 were performed for a wide range of supplied power. First, consider the structure of the acoustic field corresponding to the boundary conditions obtained from the experiment; this situation corresponds to the supplied power used in clinical practice. To confirm the validity of the numerical simulations, a series of measurements of pressure profiles was further conducted along the beam axis and in the transverse direction in the focal plane at F = 30 mm. A comparison of the measurement and modeling results for the peak positive and negative pressures along the axis is shown in Fig. 4.9.
Experimental results are shown by black circles and numerical results by red lines. The position of the therapy head is marked by the blue dashed line. The modeling results are in good agreement with the experimental data for the peak positive pressure; there are some discrepancies for the peak negative pressure, but they are all within the experimental error. It was observed that the maxima of the peak negative and positive pressures were achieved at different spatial locations: at x = 29 mm away from the therapy head for p_+ and at x = 12 mm for p_−. This difference is caused by a combination of nonlinear and diffraction effects. The measured and modeled waveforms at different distances x from the source along the beam axis are shown in Fig. 4.10. Red corresponds to the simulated waveforms and grey to the experimental ones. There is excellent agreement between the measured and modeled axial waveforms, where again the simulations predict a slightly more negative peak pressure. Note that the simulated waveforms fit the experimental ones well in the sharp pressure jump at the front of the pulse, formed due to nonlinear propagation effects in the focused beam.
The radial (transverse) scans of the peak positive p_+ and peak negative |p_−| pressures in the focal plane F = 30 mm are shown in Fig. 4.11 for two directions perpendicular to the beam axis. The modeling results are in excellent agreement with the measurements and confirm the radial symmetry of the acoustic field generated by the Duolith SD1. Numerical simulations were performed for a wide range of initial pressure amplitudes on the source. This was done by scaling the pressure amplitudes of the boundary condition (Fig. 4.8) from 0.4 to 2 in steps of 0.1. From these scaled pressures, the axial distributions and the focusing gain can be compared for increasing source output. Although this linear scaling does not precisely correspond to changing the output level of the device, it is an adequate approach since nonlinear effects were weak at the distance where the measurements for the boundary condition were taken.
The results of these additional simulations are shown in Fig. 4.13a. The dashed lines correspond to the results simulating the experimental conditions. One can see how the focal zone changes with source pressure amplitude. With an increase in source output, the position of the spatial maximum of p + on the beam axis changes non-monotonically. It first moves away from the source and then backward. This effect is typical for nonlinear focused beams and has been observed in the earlier studies described in chapter 3 ( §3.4) for focused pulsed fields. The shift away from the source is characteristic for focusing without formation of shocks. It is caused by strengthening of the nonlinear self-refraction phenomenon because the speed of the pulse front depends on its amplitude. At very high source outputs, when a shock is formed prefocally, strong absorption at the shock results in diminishing of the peak positive pressure and the maximum moves backward.
Note also that the maximum shift in the natural focus from the lowest to highest setting is about 6 mm. The peak negative pressure maximum always moves toward the source with the increase of its output.
Therapeutic bioeffects of ESWT are often attributed to the presence of the shock front. Despite the fact that the name of the therapy implies a shock front, it was shown in [START_REF] Cleveland | Acoustic field of a ballistic shock wave therapy device[END_REF] that the field of the ballistic transducer EMS Swiss Dolorclast Vet, used in veterinary medicine, contains neither shock fronts nor even sharp jumps in pressure. In the case of the electromagnetic transducer Duolith SD1, nonlinear effects are more pronounced than in the fields of ballistic transducers, and the waveform of the pulse at the focus (Fig. 4.10) is similar to one produced by lithotripters. The shock front determines the minimum spatial scale at which the biological effects of ultrasound occur in the tissue. If the size of the cell is larger than the width of the shock front, then the shock front can break the cell membrane due to the strong spatial gradient of the field. If the shock front is wider than the cell size, then it will cause only an acceleration of the cell movement.
The average size of a human cell is about 10-30 microns, which corresponds to a time scale of the order of 1 ns in the pressure waveform. The limited temporal resolution of the hydrophone (2 ns) does not allow registering the fine structure of the front in measurements. However, the answer to the question of whether shock wave therapy is indeed a SHOCK wave therapy can be obtained using numerical simulations.
In Fig. 4.13 (b) the front at the focus obtained in the numerical simulations is shown (distance x = F = 30 mm; the whole waveform is shown in Fig. 4.10). It is clearly seen that a smooth 'pedestal' rise precedes the sharp pressure jump. The classical definition of the shock front determines its duration τ_rt from 10% to 90% of the peak positive pressure (shock amplitude A_s), and the shock is supposed to be governed by the stationary solution of the Burgers equation [START_REF] Hamilton | Nonlinear Acoustics[END_REF]
p(τ) = (A_s/2) [1 + tanh(εA_s τ/(2b))] = (A_s/2) [1 + tanh(τ/τ_0)],   (4.2)
where τ is the retarded time, ε and b are the coefficients of nonlinearity and of the thermoviscous absorption of the propagation medium, respectively, and τ_0 = 2b/(εA_s). In Fig. 4.13 (b) the part of the front corresponding to the pressure rise from 10% to 90% of its maximum value is shown. This part of the front contains a smooth 'pedestal' rise; thus, the classical definition of the shock rise time is not correct in this case. To account only for the steepest part of the front, the shock rise time should be defined using its derivative. The 0.1A_s to 0.9A_s rise time of the pressure at the shock in Eq. (4.2) is the length of time needed for the function tanh to change from -0.8 to +0.8. The shock rise time described by Eq. (4.2) is defined by the combined effects of nonlinearity and thermoviscous absorption. Nonlinear effects tend to steepen the shock, while thermoviscous energy absorption at the shock tends to smooth it. The balance of these two effects creates a shock of quasi-stationary thickness τ_rt, inversely proportional to the shock amplitude A_s. If a front of amplitude A_s has a thickness greater than τ_rt, then the front is not a shock and can potentially become steeper due to nonlinear effects. Let us now estimate the thickness of the front at the focus of the Duolith SD1 and analyze whether the front is a shock. The rise time of the front calculated at the level of 0.36 of the maximum of the pressure derivative is 8 ns, which corresponds to a pressure Δp = 25 MPa at the front (Fig. 4.14). In the case of a shock front with an amplitude A_s = Δp/0.8, the thickness of the front defined from Eq. (4.2) is τ_rt = 0.28 ns (for propagation in water at ε = 3.5, b = 4.33·10⁻³ kg/(s·m)). Thus, shock formation did not occur for the current clinical settings of the Duolith SD1.
Note that the estimate of the shock front thickness given above was obtained for propagation in water. However, if the shock front does not form in water, it will not form in biological tissue either, since the absorption there is even more significant.
In focused nonlinear fields, shock front formation occurs when the focusing gain reaches its maximum [START_REF] Bessonova | Focusing of high power ultrasound beams and limiting values of shock wave parameters[END_REF][START_REF] Rosnitskiy | Effect of the angular aperture of medical ultrasound transducers on the parameters of nonlinear ultrasound field with shocks at the focus[END_REF]. The inset of Fig. 4.13 (a) shows the peak positive focusing gain, given by the ratio of p_+ at the focus to its initial value at the boundary, as a function of the source pressure output. In our case, the experimental conditions correspond to an output level that is lower than the level of the maximum focusing gain, i.e., the shock has not yet formed. Apparently, if the device could generate an extra factor of 2 in pressure at the source, a shock might indeed form at the focus.

The experimental setup for the pressure field measurements of the Philips C5-2 abdominal imaging probe (Philips, Bothell, WA) is depicted in Fig. 4.15. The focused pressure fields were generated using a V-1 Verasonics ultrasound engine with extended transmit burst capabilities (Verasonics, Kirkland, WA); the Verasonics was controlled through an HP Z820 PC (HP, Palo Alto, CA) using Windows 7 (Microsoft, Redmond, WA) and Matlab 2011b (Mathworks, Natick, MA). The probe was fixed vertically in a large water tank facing downwards. The water was degassed to about 10% dissolved oxygen. The axes of the probe were aligned to those of a 3-axis positioner (Velmex, Bloomfield, NY). A hydrophone was mounted to the positioner by a custom L-shaped fixture so that the two were parallel to the imaging probe.
High-power measurements were performed using a fiber optic hydrophone FOPH-2000 (RP-Acoustics, Leutenbach, Germany, the size of the tip is 100 μm) that allows measurements of pressure waveforms at frequencies up to 100 MHz. Fiber optic hydrophones have a relatively low sensitivity (approximately MPa) but they are well suited for measurements of high-amplitude pressure waveforms comprising steep parts.
Low-amplitude calibration measurements were performed using a capsule hydrophone HGL-0085 in conjunction with AH-2010 preamplifier (Onda, Sunnyvale, CA). Capsule hydrophones are used for measurements in the frequency range from 1 MHz to 20 MHz with pressure levels of the order of several MPa. The sensitive surface of the capsular hydrophone HGL-0085 is a PVDF membrane of 200 μm diameter.
The transmit signals were 75 cycles at f = 2.3 MHz with a pulse repetition frequency of 20 Hz. A trigger signal was generated by the Verasonics at the beginning of the transmit signal to synchronize oscilloscope acquisition.
The C5-2 array probe comprises 128 single elements located on a cylindrical surface (see Fig. 4.16). The projection of the active probe surface onto the xy plane is a rectangle of height l_y. Steering of the focus F_x in the xz plane is performed electronically by changing the pressure phase over the probe elements in the x-direction. A cylindrical acoustic lens focuses the field at a constant depth F_y to reduce the divergence of the beam in the yz plane. Field measurements were performed for regimes with 16, 32, 40, 64, and 128 active elements; the centermost elements were used for each configuration.
The exact geometrical parameters of the Philips C5-2 probe are unknown. Nominal parameters of the C5-2 probe were measured approximately with a ruler: radius of curvature R ≈ 38 mm, angle of aperture 2θ ≈ 40°, and height l_y ≈ 12 mm. Although these values are not exact, they give an initial approximation for fitting the parameters of an equivalent source in the numerical modeling. The product specification sheet from the manufacturer provides nominal values of the width of each element (0.37 mm) and the gap between them (0.05 mm). In the experiments, the delays applied to the elements of the probe were programmed by a time-of-flight calculation using a speed of sound in water of 1480 m/s and a focal position of z = F_x = 50 mm along the axis of the probe.

The hydrophone measurements included two steps. In the first one, low-amplitude measurements of pressure waveforms were performed with the capsule hydrophone along the beam axis z and in the two perpendicular directions x and y in the focal plane at z = 50 mm. These measurements were carried out at the lowest possible voltage (2 V) applied to the probe and were used for setting the boundary condition of the numerical model. The step size along the z-axis was 0.5 mm, while transverse scans were done along the x- and y-axes with steps of 0.1 mm and 0.05 mm, respectively.
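The time-of-flight delay programming mentioned above can be illustrated with a minimal sketch. The element layout below is an idealized flat, uniformly spaced array (an assumption made for brevity; the real probe surface is cylindrical), while the pitch, sound speed, and focal depth follow the values quoted in the text.

```python
import numpy as np

# Sketch of time-of-flight focusing delays (flat-array simplification).
c = 1480.0            # speed of sound in water, m/s
pitch = 0.42e-3       # element width (0.37 mm) + kerf (0.05 mm), m
z_focus = 50e-3       # focal depth along the probe axis, m
n_active = 40         # one of the configurations used in the experiments

# Element centers, symmetric about the probe axis.
x = (np.arange(n_active) - (n_active - 1) / 2.0) * pitch

# Time of flight from each element to the focus; delays are chosen so that
# all contributions arrive at the focus simultaneously.
t_flight = np.sqrt(x**2 + z_focus**2) / c
delays = t_flight.max() - t_flight          # zero delay for the outermost elements

print(delays * 1e9)   # delays in nanoseconds
```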
The second step of measurements was carried out for a wide range of applied voltages (from 5 V to 90 V). Nonlinear pressure fields were measured using the fiber optic hydrophone. The location of the peak positive pressure was found using a transmit sequence with 128 elements at 50 V. At this location, corresponding to z = 50 mm, waveforms were collected for the different numbers of elements and different voltage levels. Each acquisition point used 128 averages with the FOPH bandwidth set to 100 MHz and a sampling rate of 320 MHz. The waveforms were calibrated and deconvolved with a manufacturer-supplied impulse response. Mean and standard deviation values for the maximum positive and minimum negative pressures were taken over 50 of the 74 cycles so that a steady state was reached in the waveform. These measurements were performed for comparison with the results of the numerical modeling of the nonlinear acoustic field of the probe.
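The deconvolution step mentioned above can be sketched as a regularized spectral division. The recorded signal v and the impulse response h below are synthetic placeholders (assumptions for illustration only); in practice the manufacturer-supplied calibration data would be used, and the exact regularization is a design choice rather than the procedure of the original measurements.

```python
import numpy as np

# Sketch of impulse-response deconvolution of a hydrophone voltage record.
fs = 320e6                                   # sampling rate, Hz (as in the measurements)
n = 4096
t = np.arange(n) / fs
v = np.sin(2 * np.pi * 2.3e6 * t)            # placeholder recorded voltage signal
h = np.exp(-t * 5e7) * np.cos(2 * np.pi * 40e6 * t)   # placeholder impulse response
h /= np.abs(np.fft.fft(h)).max()

# Frequency-domain deconvolution with Tikhonov-style regularization to avoid
# dividing by near-zero spectral components of the impulse response.
V = np.fft.fft(v)
H = np.fft.fft(h)
lam = 1e-3 * np.abs(H).max() ** 2
p = np.real(np.fft.ifft(V * np.conj(H) / (np.abs(H) ** 2 + lam)))
print(p.max())
```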
Setting the boundary condition for the numerical model using low-amplitude measurements of pressure waveforms
A boundary condition for the numerical model was set by finding the best fit between the distributions of pressure amplitude on the beam axis and in the focal plane obtained in linear modeling and in the measurements. In the numerical model, a continuous periodic wave of frequency f = 2.3 MHz was used as the initial condition. The pulse length in the experiments was chosen sufficiently long (75 periods of the harmonic wave) to justify this simplification in the modeling. The pressure amplitude was assumed to be uniform over the cylindrical surface of the equivalent source. The phase was changed continuously over the source surface to provide the focusing in the xz and yz planes:

p(R, θ, y, t) = p_0 \sin\left[\omega t + k\left(\Delta x + \frac{y^2}{2F_y}\right)\right], \qquad (4.3)

where \Delta x = \sqrt{(R\sin\theta)^2 + (R - R\cos\theta + F_x)^2} - F_x is the path difference between focused waves emitted by the apex of the probe and by the selected point on the probe surface; (R, θ, y) is a cylindrical coordinate system with the origin at the center of curvature of the probe; ω = 2πf is the cyclic frequency; k = 2πf/c_0 is the wavenumber, and t is time. A change in the number of operating elements was accounted for in the model by changing the aperture angle θ. The Rayleigh integral was used for the numerical calculation of the linear acoustic field:

p(\vec{r}) = -i\rho_0 f \int_S u(\vec{r}\,')\, \frac{\exp\!\left(ik\,|\vec{r}-\vec{r}\,'|\right)}{|\vec{r}-\vec{r}\,'|}\, dS', \qquad (4.4)

where \vec{r} = \{x, y, z\} is the observation point, ρ_0 is the density of water, and u(\vec{r}\,') is the complex amplitude of the vibration velocity at the point \vec{r}\,' on the surface S of the probe.
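A direct numerical evaluation of the Rayleigh integral (4.4) amounts to summing the contributions of small surface patches. The sketch below uses a flat rectangular source with uniform, in-phase normal velocity (an assumption made only to keep the example short); in the thesis the surface is cylindrical and the phase of Eq. (4.3) provides the focusing. The velocity amplitude u0 is arbitrary.

```python
import numpy as np

# Minimal evaluation of the Rayleigh integral (4.4) by direct summation.
rho0, c0, f = 998.0, 1486.0, 2.3e6
k = 2 * np.pi * f / c0
u0 = 0.01                                   # m/s, arbitrary illustrative amplitude

# Source patches: a 15 mm x 12 mm rectangle in the plane z = 0.
xs = np.linspace(-7.5e-3, 7.5e-3, 151)
ys = np.linspace(-6.0e-3, 6.0e-3, 121)
dS = (xs[1] - xs[0]) * (ys[1] - ys[0])
XS, YS = np.meshgrid(xs, ys)

def pressure(x, y, z):
    """Complex pressure amplitude at the field point (x, y, z)."""
    R = np.sqrt((x - XS) ** 2 + (y - YS) ** 2 + z ** 2)
    return -1j * rho0 * f * np.sum(u0 * np.exp(1j * k * R) / R) * dS

# Pressure amplitude along the axis between 10 mm and 100 mm from the source.
z_axis = np.linspace(10e-3, 100e-3, 50)
p_axis = np.array([abs(pressure(0.0, 0.0, z)) for z in z_axis])
print(p_axis.max())
```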
Since the parameters R, θ, l_y, p_0, F_x, and F_y of the probe were initially known only approximately, the numerical calculation of the linear field was carried out in several iterations. The field was first calculated for the approximate values of R, θ, and l_y given above in subsection 4.3.1, and for F_x = F_y = 50 mm. Then each of these five geometric parameters was varied individually so that the pressure amplitude distributions normalized by their maxima coincided well with those measured in the first step of the experiment on the beam axis and in the focal plane. Note that each parameter affects a particular feature of the distributions and therefore was relatively easy to determine.
For example, the parameter l_y has a predominant influence on the pressure distribution in the focal plane along the y-axis, but has almost no effect on the axial distribution along the z-axis. After finding the best-fit values of the parameters R, θ, l_y, F_x, and F_y for the configurations of 16, 32, 40, 64, and 128 active elements, the initial pressure amplitude p_0 was determined by scaling the already found normalized distributions to the required pressure at the focus. The geometric parameters of the equivalent source obtained in this way are shown in Table 4.1 for the different numbers of active elements.
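The one-parameter-at-a-time fitting described above can be summarized as a simple sweep that minimizes the mismatch between normalized model and measured distributions. In the sketch below, the "model" is a toy closed-form curve standing in for the Rayleigh-integral field, and the "measured" data are synthetic; only the structure of the search is illustrated.

```python
import numpy as np

# Brute-force one-parameter fit of a focal-depth-like parameter F.
def model_axial(z, F):
    # Toy focused-beam axial amplitude peaking near z = F (placeholder model).
    return 1.0 / (1.0 + ((z - F) / 5e-3) ** 2)

z = np.linspace(20e-3, 80e-3, 121)
rng = np.random.default_rng(0)
measured = model_axial(z, 52e-3) + 0.02 * rng.standard_normal(z.size)
measured /= measured.max()

candidates = np.linspace(40e-3, 60e-3, 201)
errors = []
for F in candidates:
    m = model_axial(z, F)
    m /= m.max()
    errors.append(np.sqrt(np.mean((m - measured) ** 2)))

best_F = candidates[int(np.argmin(errors))]
print(best_F)
```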
Distributions of the pressure amplitude calculated numerically using the Rayleigh integral with the best-fit parameters given in Table 4.1 are shown in Fig. 4.17 and are compared with the measurements. The spatial structure of the acoustic field of the C5-2 probe was strongly dependent on the number of active elements even in the case of linear propagation. Fig. 4.17 demonstrates that with an increasing number of active elements the size of the focal area is reduced in both the longitudinal and transverse directions, and the pressure amplitude at the focus increases. The numerical simulation of the linear field of the probe made it possible to characterize the field structure in the entire space, which is a time-consuming task in measurements.
Fig. 4.18 presents the two-dimensional distributions of the pressure amplitude in a plane located at a distance of 2 mm from the apex of the probe (upper series) as well as the beam focusing in the xz and yz planes (middle and lower rows, respectively). In the distributions in the plane z = 2 mm, the active surface of the probe is clearly visible and is schematically shown by white dotted lines. Note that in the area in front of the edges of the active surface the pressure amplitude is up to two times higher than in the central part; this is particularly noticeable for the configuration of 128 elements. First, this is caused by the fact that waves coming from the edges of the probe are almost in phase. Second, they travel a longer path than waves coming from the central part of the probe, and therefore the amplitude increase associated with the wave focusing is more pronounced. The distributions also clearly demonstrate that the near field of the probe is very nonuniform along its surface, since waves emitted by the different elements interfere. Let us now consider the structure of the field in both focal planes of the beam (Fig. 4.18, middle and bottom rows). The case of 16 active elements is the only configuration for which the size of the active surface along the y-axis is greater than that along the x-axis. In this case the beam is weakly focused and the pressure amplitude at the focus is only 2 times higher than its initial value.
A square active surface is obtained by powering 32 elements of the probe. In this case the structure of the field in the xz and yz planes is almost identical and the size of the focal area in both transverse directions is the same. One of the most common configurations used in clinical practice is 40 active elements. In this mode the focusing is quite effective, since the focal pressure amplitude is 6 times greater than its initial value. The transverse dimensions of the focal area, defined at the −6 dB level, are 2 × 3 mm along the x and y axes, respectively. With a further increase of the number of active elements, the focusing efficiency increases while the size of the focal area decreases (64 and 128 element configurations in Fig. 4.18).
Transfer of the boundary condition from cylindrical surface to the plane
Once the parameters of the equivalent source were found on the cylindrical surface, the boundary condition for modeling the three-dimensional nonlinear field of the C5-2 probe was set in the plane z = 0. For this purpose, the pressure distribution calculated using the Rayleigh integral was transferred onto the plane (x, y, z = 0 mm) using the angular spectrum method:

\hat{p}(z+\Delta z, k_x, k_y) = \hat{p}(z, k_x, k_y)\, \exp\!\left[i\,\Delta z\left(\sqrt{k^2 - k_x^2 - k_y^2} - k\right)\right], \qquad (4.5)

where \hat{p} is the two-dimensional spatial Fourier transform of the pressure over (x, y), k_x and k_y are the spatial frequencies, and Δz is the shift along the z-axis [START_REF] Yuldashev | Nonlinear shock waves propagation in random media with inhomogeneities distributed in space or concentrated in a thin layer[END_REF]. The resulting distribution in the plane (x, y, z = 0 mm) was used as the boundary condition for the 3D nonlinear ultrasound field modeling.
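The angular spectrum step of Eq. (4.5) can be sketched with two FFTs and a multiplication by the propagator. The grid sizes and the source field below (a focused Gaussian amplitude with a parabolic phase) are illustrative assumptions, not the boundary condition of the thesis.

```python
import numpy as np

# Angular spectrum propagation of a monochromatic field over a distance dz.
f, c0 = 2.3e6, 1486.0
k = 2 * np.pi * f / c0

nx = ny = 256
dx = dy = 0.1e-3
x = (np.arange(nx) - nx // 2) * dx
y = (np.arange(ny) - ny // 2) * dy
X, Y = np.meshgrid(x, y)

# Illustrative source field: Gaussian amplitude, parabolic focusing phase.
F = 50e-3
p0 = np.exp(-(X**2 + Y**2) / (5e-3) ** 2) * np.exp(-1j * k * (X**2 + Y**2) / (2 * F))

def angular_spectrum_step(p, dz):
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    KX, KY = np.meshgrid(kx, ky)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))   # evanescent parts decay
    propagator = np.exp(1j * dz * (kz - k))                 # as in Eq. (4.5)
    return np.fft.ifft2(np.fft.fft2(p) * propagator)

p_focus = angular_spectrum_step(p0, F)
print(np.abs(p_focus).max() / np.abs(p0).max())             # rough focusing gain
```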
Numerical model based on the Westervelt equation to calculate the three-dimensional nonlinear field
The Westervelt equation written in a retarded time coordinate was used to simulate the nonlinear field generated by the probe [START_REF] Westervelt | Parametric Acoustic Array[END_REF]:

\frac{\partial^2 p}{\partial z\,\partial\tau} = \frac{c_0}{2}\,\Delta p + \frac{\varepsilon}{2\rho_0 c_0^3}\,\frac{\partial^2 p^2}{\partial\tau^2} + \frac{\delta}{2 c_0^3}\,\frac{\partial^3 p}{\partial\tau^3}. \qquad (4.6)

Here τ = t − z/c_0 and Δp = ∂²p/∂x² + ∂²p/∂y² + ∂²p/∂z²; the parameters ρ_0, c_0, ε, and δ are the ambient density, sound speed, nonlinearity coefficient, and thermoviscous absorption (sound diffusivity) of the medium, respectively. The values of the physical constants were chosen to represent the experimental measurement conditions in water at room temperature: ρ_0 = 998 kg/m³, c_0 = 1486 m/s, ε = 3.5, δ = 4.33 · 10⁻⁶ m²/s. Equation (4.6) describes nonlinear propagation in one direction along the z-axis and, in contrast to the KZK equation, does not require the diffraction angles to be small.
The numerical algorithm for simulating the Westervelt equation (4.6) was developed earlier in our laboratory by Petr Yuldashev [START_REF] Yuldashev | Nonlinear shock waves propagation in random media with inhomogeneities distributed in space or concentrated in a thin layer[END_REF]. The author of the thesis set the boundary and initial conditions for the code and chose the numerical grids. The details of the numerical algorithm are presented in [START_REF] Yuldashev | Nonlinear shock waves propagation in random media with inhomogeneities distributed in space or concentrated in a thin layer[END_REF]; only its main stages are listed here.
The simulations were performed using the method of fractional steps with a second-order operator-splitting procedure. The diffraction operator was calculated in the frequency domain for each harmonic component using the angular spectrum method. The absorption was calculated in the frequency domain using an exact solution for each harmonic. The nonlinear operator was calculated in the frequency domain using a fourth-order Runge-Kutta method at small distances from the probe and with a conservative time-domain Godunov-type scheme at greater distances. The switch to the Godunov-type scheme was made at the distance z where the amplitude of the tenth harmonic exceeded 1% of the amplitude at the fundamental frequency f. The parameters of the numerical scheme were: longitudinal step dz = 0.075 mm, transverse steps dx = dy = 0.02 mm. The maximum number of harmonics was set to 750.
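The splitting structure can be illustrated in a strongly reduced form. The sketch below treats a single plane harmonic wave (no diffraction step) and uses a conservative Lax-Friedrichs update in the time domain as a stand-in for the Godunov-type scheme, while the thermoviscous absorption is applied exactly for each harmonic in the frequency domain. All grid parameters are illustrative assumptions, not those of the thesis code.

```python
import numpy as np

# Simplified operator splitting for plane-wave nonlinear propagation with
# thermoviscous absorption (retarded-time frame, periodic in tau).
rho0, c0, eps, delta = 998.0, 1486.0, 3.5, 4.33e-6
f0 = 2.3e6
beta = eps / (rho0 * c0**3)               # coefficient of the nonlinear term

N = 512                                    # points per period
T = 1.0 / f0
dtau = T / N
tau = np.arange(N) * dtau
p = 1.0e6 * np.sin(2 * np.pi * f0 * tau)   # 1 MPa initial harmonic wave

omega = 2 * np.pi * np.fft.fftfreq(N, dtau)
alpha = delta * omega**2 / (2 * c0**3)     # absorption rate of each harmonic

dz = 0.25e-3
for _ in range(200):                       # propagate 5 cm in z
    # Nonlinear step: dp/dz = beta * d(p^2/2)/dtau, conservative Lax-Friedrichs.
    flux = -beta * p**2 / 2.0
    p = 0.5 * (np.roll(p, 1) + np.roll(p, -1)) \
        - dz / (2 * dtau) * (np.roll(flux, -1) - np.roll(flux, 1))
    # Absorption step: exact exponential decay of each harmonic.
    p = np.real(np.fft.ifft(np.fft.fft(p) * np.exp(-alpha * dz)))

print(p.max() / 1e6, p.min() / 1e6)        # steepened (shock-forming) waveform, MPa
```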
Results of numerical simulations of nonlinear propagation and comparison with measurements
Numerical simulations of the three-dimensional nonlinear field of the diagnostic probe were performed over a wide range of applied voltages. In the modeling, the increase of the applied voltage was simulated by increasing the pressure amplitude p_0 of the initial harmonic wave. The relationship between the pressure amplitude in the modeling and the applied voltage in the experiment was found by assuming a linear dependence between these values. To validate the results of the modeling of nonlinear propagation, calculated pressure waveforms at the beam focus at z = 50 mm were compared with waveforms measured in the second stage of the experiment using the fiber optic hydrophone. Examples of measured and calculated waveforms are presented in Fig. 4.19 for the configurations of 16, 32, 40, 64, and 128 active elements at applied voltages of 20 V (upper row) and 60 V (bottom row). The waveforms obtained numerically were in good agreement (within 3%) with the experimental data for all configurations except the case of 128 active elements. In the latter case good agreement was observed only for voltages below 25 V, while for greater voltages the modeling predicts higher values of the peak positive pressure than were measured (see the last waveform in the bottom row of Fig. 4.19). It will be shown below that this discrepancy is likely due to the fact that the size of the focal area of the peak positive pressure p+ at applied voltages above 25 V becomes smaller than the size of the FOPH sensitive surface (100 μm). This kind of problem has been observed previously in the calibration of nonlinear fields produced by multi-element arrays of a clinical noninvasive therapeutic surgery system [START_REF] Kreider | Characterization of a multi-element clinical HIFU system using acoustic holography and nonlinear modeling[END_REF]. The diagnostic probe Philips C5-2 is designed for supplied voltages in the range from 2 V to 90 V. The lower boundary of this range corresponds to the ultrasound visualization regime used in clinical practice; from a physical point of view, this corresponds to linear propagation of ultrasonic waves. Trial experiments to push kidney stones were performed using the upper limit of this range (90 V), when the wave profile is highly distorted due to nonlinear effects and contains a shock front. It is interesting to note that already at voltages of 20 V steep parts of the profile begin to form and the waveforms become strongly asymmetric (top row in Fig. 4.19). The formation of the shock front in the waveforms occurred at applied voltages equal to about one-third of the maximum.
The efficacy of the treatment can be increased by using a higher transducer output to provide a stronger pushing force, which requires a greater focal pressure. However, the nonlinear acoustic saturation effect can be a limiting factor. Fig. 4.20 shows the saturation curves for the peak positive p+ and peak negative p− pressures for the configurations of 16, 32, 40, 64, and 128 active array elements. The peak pressures were calculated at a distance z = 50 mm on the beam axis. Curves obtained in numerical simulations are shown by solid lines, while measurements are shown by markers. One can see that starting from an applied voltage of 50 V the peak positive pressure p+ increases very slowly. If one assumes that the saturation of p+ occurs when the derivative of the saturation curve is less than 5% of its maximum value, then the voltage level of 50 V corresponds to this threshold. Thus, pushing of kidney stones by the acoustic radiation force of the ultrasonic beam generated by the probe occurs in the saturation regime.

The evolution of the waveforms with increasing applied voltage is shown in Fig. 4.21 at a distance z = 50 mm for the configuration of 40 active elements. It is clearly seen how the initially harmonic wave (profile at 5 V) distorts with increasing amplitude and finally turns into a sawtooth wave with a shock front (profile at 90 V).
Note that changes in waveforms in the saturation regime are minimal (profiles at 60 V and 90 V in the figure).
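The 5% slope criterion quoted above can be expressed as a short calculation. The saturation curve used in the sketch below is synthetic (an assumption); only the thresholding logic is illustrated.

```python
import numpy as np

# Detect the onset of saturation: the first voltage at which the slope of the
# saturation curve p+(V) drops below 5% of its maximum slope.
V = np.linspace(5, 90, 18)                       # applied voltage, V
p_plus = 20.0 * (1.0 - np.exp(-V / 25.0))        # synthetic saturation curve, MPa

slope = np.gradient(p_plus, V)
saturated = V[slope < 0.05 * slope.max()]
print(saturated[0] if saturated.size else None)  # first voltage beyond the threshold
```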
Measurement of pressure waveforms in a large volume of space with a fine spatial step is a time-consuming process. Numerical modeling made it possible to investigate in detail the spatial structure of the nonlinear ultrasonic field of the diagnostic probe with a spatial step 5 times smaller than the size of the hydrophone. Simulations were performed for the entire range of the voltage applied to the probe. Let us first consider the results of the nonlinear simulation of the peak pressure distributions along the z-axis for different numbers of active elements (Fig. 4.22). In the case of 16 active elements the beam is narrow in the x-direction and its focusing is less efficient than when a larger number of active elements is used. The levels of the peak pressures at z = 50 mm differ from those at the intra-focal maximum by no more than a factor of 2. When 40 active elements are used, it is necessary to take into account a significant shift of the maximum of the peak pressure from the nominal focal depth z = 50 mm. The maximum of the peak positive pressure first shifts away from the probe and then toward it; the shift of the focal area at the voltage level of 90 V is about 1 cm. A similar effect is discussed in chapter 3 (§3.4), where focusing of pulsed fields is considered. Also in chapter 3 (§3.4) the possibility of observing sharp and narrow peaks in the distribution of p+ in the prefocal area of the periodic field was discussed. In the field of the diagnostic probe this feature was observed in the case of 40 active elements in the saturation regime of p+ (shown in the inset of Fig. 4.22). The minimum of the peak negative pressure p− is displaced toward the probe as the pressure amplitude increases. If all 128 elements are active, the focusing occurs almost exactly at the focus z = 50 mm. Consider now how the spatial structure of the peak pressures in the xz plane of electronic focusing changes with the applied voltage. Fig. 4.23 shows the two-dimensional distributions of p+ and p− in the plane of electronic focusing in the case of 40 active elements of the probe.
The first column shows the distributions corresponding to the quasi-linear regime (15 V), when the waveform is not greatly distorted (see Fig. 4.21). The distributions of the peak pressures p+ and p− in this case are similar in structure, but the focal area of p+ is longer along the z-axis and is located farther from the probe than the focal area of p−. The second column shows the distributions in the nonlinear regime (30 V), when the waveform contains a shock front (see Fig. 4.21). The distributions in the last column correspond to the saturation regime of p+ (80 V). With increasing applied voltage, the focal area of p+ becomes smaller in size but remains highly elongated along the z-axis of the probe. Interestingly, there are no fundamental changes in the distributions of p− with increasing applied voltage; only the pressure levels increase and a small displacement (about 2-3 mm) of the focal area of p− toward the probe occurs.
A comparison of the two-dimensional distributions of the peak positive p+ and peak negative p− pressures in the saturation regime is presented in Fig. 4.24 for the configurations of 16, 40, and 128 active elements. It is clearly seen that the focal area of p+ shrinks dramatically in the xz plane as the number of active elements increases. In the case of 128 active elements the size of the focal area of p+ along the x-axis is only 50 μm, which is 2 times smaller than the diameter of the FOPH tip (100 μm). Such a small focal area and the sharp pressure gradient outside its border led to significant differences (35%) between the profiles measured by the hydrophone and those calculated numerically (see Fig. 4.20, case of 128 active elements, and Fig. 4.19). Significant changes with an increase in the number of active elements also occur in the yz plane, in which the beam is focused by the acoustic lens: the size of the focal region is dramatically reduced along the z-axis but remains almost unchanged along the y-axis. As a general conclusion, one can say that using a greater number of active elements in the saturation regime leads to a shift of the focal areas of the peak positive and negative pressures away from the probe, a reduction of their sizes, and an increase of the peak pressure levels.
Synth schlieren et la méthode de interférometrie Mach-Zehnder) pour les mesures des ondes de choc acoustiques dans l'air sont discutées.
Le deuxième chapitre de la thèse est consacré à l'étude expérimentale d'une réflexion irrégulière d'une onde en N sur une surface rigide . Dans le §2.1 nous présentons un examen des études théoriques et expérimentales existantes sur la réflexion d'onde de choc ainsi que sur la réflexion des chocs faibles avec le paradoxe de von Neumann. La classification des différents régimes de réflexion de faibles chocs acoustiques sur une surface rigide est donnée dans le §2.2.
Une attention particulière est portée sur les différences entre la réflexion de "step-shock" et celle de formes d'onde plus complexes typiques des applications en acoustique. Dans le §2.3 le dispositif expérimental conçu pour la visualisation optique schlieren de la réflexion de l'onde de choc sur une surface rigide est présenté. Les images schlieren obtenues dans l'expérience sont analysées. Nous avons montré l'existence d'une réflexion irrégulière qui se traduit par un pied de Mach (Mach stem) dont la longueur évolue dynamiquement lorsque l'impulsion se propage le long de la surface. Le système optique schlieren permet une visualisation du motif de réflexion pour le choc avant de l'onde en N. La méthode de interferomètre de type Mach-Zehnder a été utilisé pour mesurer les formes d'onde de pression de l'onde en N proche de la surface réfléchissante. Dans le §2.4 les formes d'ondes de pression mesurées expérimentalement sont analysés. L'interaction non linéaire entre le choc avant réfléchi et le choc arrière incident de l'onde en N est discutée dans le §2.5. L'interaction conduit à la formation de pieds de Mach au-dessus de la surface où se croisent ces chocs et une zone de surpression est formée au-dessus de la surface.
Dans le troisième chapitre les mécanismes de saturation non linéaire des champs acoustiques focalisés d'ondes périodiques et des impulsions courtes sont considérés. Dans le §3.1 un examen des approches analytiques qui fournissent l'estimation des valeurs limites de pression positive maximale dans les faisceaux acoustiques périodiques et pulsés est détaillé. La possibilité d'observer la formation d'un pied de Mach dans la zone focale axiale est discutée. Dans le §3.2 un modèle numérique basé sur l'équation parabolique KZK est décrit. Le modèle a été utilisé pour caractériser des champs focalisés non linéaires de faisceaux acoustiques impulsionnels ou périodiques générés par une source de type piston et une source gaussienne. Dans le §3.3 l'effet de la signature temporelle du signal sur les valeurs limites de pressions crêtes est discuté. Nous avons montré que dans les faisceaux acoustiques périodiques les pressions crêtes positives sont plus élevées que celles réalisées dans des faisceaux pulsés. Le §3.4 est consacrée à étudier l'effet de répartition de la pression de la source sur la structure spatiale du faisceau ulstrasonore émis et les valeurs limites de pressions crêtes dans les champs acoustiques focalisés. Nous montrons que les sources gaussiennes sont plus appropriées pour atteindre les hautes pressions dans une zone focale de faible taille que les sources de type piston. L'analyse du §3.5 de l'interaction entre les fronts de choc des champs périodiques et pulsé focalisé à symétrie axiale montre qu'elle peut être considérée comme un procédé similaire à la réflexion de la surface rigide. Il est aussi démontré que l'équation KZK permet de décrire la formation d'un pied de Mach dans la zone focale pour une source de type piston. La structure des motifs des fronts d'onde dans la région focale du faisceau ressemble à Synthèse des résultats celle de la réflexion de von Neumann comme le résultat d'une interaction entre le bord et la partie centrale de l'onde en provenance de la source ultrasonore.
Le quatrième chapitre est consacré à la caractérisation des champs acoustiques focalisés non linéaires de nouveaux dispositifs médicaux utilisés dans la thérapie par ondes de choc extracorporelle (ESWT) et dans les sondes ultrasonores de diagnostique. Dans le §4.1 un examen des perspectives d'utiliser l'ESWT pour plusieurs troubles musculo-squelettiques est présenté ainsi que les paramètres de dispositifs de ESWT. La mise en oeuvre d'une sonde de diagnostic utilisant la force de rayonnement d'une source ultrasonore focalisé pour déplacer les calculs rénaux du système de collecte urinaire est également discutée. La modélisation numérique est de nos jours un outil important pour la caractérisation des champs acoustiques de ces dispositifs médicaux. Dans le §4.2 les effets non linéaires dans le faisceau acoustique focalisé du dispositif électromagnétique Duolith SD1 de ESWT sont étudiés en combinant mesures et modélisations numériques.
La condition limite pour la modélisation non linéaire avec l'équation KZK a été obtenue à partir de l'expérience en appliquant la méthode de la source équivalente. Ainsi notre procédé utilise des mesures pour obtenir des paramètres de source équivalente, à savoir la source avec le même champ acoustique sur l'axe du faisceau comme la vraie. Il a été montré que, dans les champs ESWT la formation choc ne se produit pas pour les paramètres d'utilisation actuelle de la machine. Une véritable formation de choc pourrait être atteinte si l'amplitude de pression initiale maximale du dispositif était doublée. Dans le §4.3 l'approche combinée mesure-modélisation a été utilisée pour caractériser le champ ultrasonore non linéaire de la sonde de diagnostic standard Philips C5-2 utilisé dans les expériences cliniques pour pousser les calculs rénaux. Les mesures ont été effectuées en deux étapes. La première était la mesure des formes d'ondes de pression de faible amplitude le long de l'axe de la sonde et dans son plan focal. Ces mesures ont été effectuées à faible puissance de sortie et ont été utilisées pour définir la condition limite introduite dans le modèle numérique.
La seconde série de mesures a été effectuée à différents niveaux de sortie et a été réalisée pour comparaison avec les résultats des simulations non linéaires. Un modèle numérique 3D basé sur l'équation de Westervelt a été utilisé pour simuler le champ acoustique non linéaire générée dans l'eau par la sonde de diagnostic à des niveaux de sortie différents et pour un nombre différent d'éléments actifs dans la source ultrasonore. Il a été montré que la poussée des calculs rénaux se produit en régime de saturation. Dans le §4.4 les conclusions du quatrième chapitre sont donnés.
Introduction 4 .
4 Numerical and experimental study of nonlinear effects in the fields of modern diagnostic and ESWT medical devices. Determination levels of acoustic pressures which provide shock front formation at the focus and saturation of acoustic field parameters. Introduction Salze, Sébastien Ollivier, Emmanuel Jondeau, Jean-Michel Perrin). Measurements of acoustic fields of new medical devices Duolith SD1 and Philips C5-2 presented in Chapter 4 were performed by Camilo Perez and Bryan Cunitz (Center for Industrial and Medical Ultrasound, University of Washington, Seattle), correspondingly. The author participated in planning, discussion and data processing of these experiments.
Figure 1 . 1 :
11 Figure 1.1: The scheme of the schlieren system. 1 -a point light sorce, 2-2 -a system of lenses and mirrors, 3 -test object, 4 -an optical inhomogeneity, 5 -an optical knife, 6 -a lens, 7 -a projection screen, 8 -a schlieren image. The figure is taken from the Internet: dic.academic.ru/dic.nsf/enc_physics/2531/.
Figure 1
1 Figure 1.2: Illustration of the experimental setup, the view is from the top along the z axis. Acoustic pulses are produced by a 15 kV spark source located at r = 0. Corresponding variations of the optical refractive index are schematically shown by gradients of the gray color. A schlieren optical system used to visualize the pressure wave consists of QTH continuous light source, a beam splitter, a spherical mirror, an optical knife, and a high-speed camera (Phantom V12 CMOS). Solid lines with arrows illustrate the trajectory of the light beam in the absence of acoustic wave.
Figure 1
1 Figure 1.3: (a) A spark source, (b) a light source and a beam splitter.
Figure 1 . 4 :
14 Figure 1.4: (a) Typical waveform produced by the spark source; p 0 and T 0 are the peak positive pressure and the duration of the compression phase, correspondingly. (b) Sketch illustrating the calculation of the width of the test zone. Location of the acoustic pulse is shown in gray.
Figure1.5: Effect of a 3 μs exposure time of the camera on the reconstructed waveform. Solid curve is the initial N -wave that was numerically propagated during 3 μs; dotted curve is the wave after propagation; dashed curve is the averaged wave, which imitates the measured waveform. The half duration of the initial wave can be calculated as the half duration of the averaged wave plus the half of the exposure time (1.5 μs).
1 Figure 1
11 Figure 1.6: A typical schlieren image recorded with the high-speed camera.
Figure 1
1 Figure 1.7: Illustration of the pressure signature reconstruction from the schlieren image. The light intensity with extracted background is shown in (a). Individual distributions of light intensity were calculated along 500 radial lines (examples are shown by dashed lines). The intensity signal averaged over 500 radial lines is shown in (b). Reconstructed waveform is presented in (c). All data shown in the figure are normalized by the corresponding maximum values.
Figure 1.8: Dimensionless waveforms reconstructed from the schlieren images at different distances from the spark source. For every pulse, the distance r 0 is defined as the coordinate of the peak positive pressure.
Figure 1 .Figure 1 .
11 Figure 1.9: Experimental (markers) data for the duration of the compression phase T as a function of propagation distance r. The origin of the graph corresponds to T 0 = 13,5 μs and r 0 = 70,5 mm. Solid line is obtained by linear fitting the experimental values using the method of least squares, the coefficient of proportionality equals 0.486 with standard deviation of 0.013.
Figure 1 .Chapter 1 .Figure 1 .
111 Figure 1.11: Reconstructed temporal waveforms generated by the spark source. The radial position r 0 of the positive peak is noted in each subfigure.
Figure 1 .
1 Figure1.13: Illustration of the optical phase integration along the probing laser beam propagating through a radial distribution of the refraction index inhomogeneities induced be the acoustic wave.
Figure 1 .Figure 1 .
11 Figure 1.14: An example of measured by the Mach-Zehnder interferometer waveform at the distance r = 20 cm (black line) and corresponding optical phase signal (blue line).
Figure 1 .
1 Figure 1.16: Theoretical (solid line) and experimental (markers) data obtained for N -wave parameters: the peak positive and peak negative pressures (a), the half duration (b), and the shock rise time (c).
Figure 1
1 Figure1.17: Comparison of pressure waveforms obtained using the optical schlieren method (black line) and the Mach-Zehnder interferometry method (red line) at distance r = 10 cm from the spark source.
Figure 2 . 1 :
21 Figure 2.1: (a) Ernst Mach, (b) scheme of an experiment performed by E. Mach to study shock wave reflection, the view is from the top.
Figure 2
2 Figure 2.2: (a) John von Neumann. (b) Description of Mach reflection in a three-shock theory developed by J. von Neumann in 1942. The figure is taken from (Ben-Dor, 1992).
Figure 2
2 Figure 2.3: Reflection patterns obtained in a shock tube for strong step shocks. The figure is taken from (Semenov et al., 2012).
Figure 2.4: (a) The von Neumann reflection of step shock observed in experiments in[START_REF] Colella | The von Neumann paradox for the diffraction of weak shock waves[END_REF]. (b) A sequence of triple points observed in experiments in[START_REF] Skews | The physical nature of weak shock wave reflection[END_REF]. The image on the right is a raw shadowgram while the left image is in enhanced contrast.
Figure 2
2 Figure 2.5: Consecutive reflection patterns of the N -wave obtained for different points on the surface. On each pattern reflection of the front shock of the N -wave correspond to the left part of the image while reflection of its rear shock -to the right one. The gradients of a color correspond to the pressure level:white color corresponds to positive pressure, black color -to negative pressure. Figures were taken from[START_REF] Baskar | Nonlinear reflection of grazing acoustic shock waves: unsteady transition from von Neumann to Mach to Snell-Descartes reflections[END_REF].
Figure 2
2 Figure 2.6: Illustration (a) and a photo (b) of the experimental setup: 1 -shock acoustic pulse, 2 -a spark source, 3 -a rigid surface, 4 -reflection pattern consisting of incident and reflected fronts, 5 -QTH continuous light source, 6 -a spherical mirror, 7 -a beam splitter, 8 -an optical knife, and 9a high-speed camera (Phantom V12 CMOS). Solid lines with arrows illustrate the trajectory of the light beam in the absence of acoustic wave.
Figure 2.8: Three consecutive schlieren images from the high-speed camera obtained for the same position of the spark source (ϕ = 14 • and M a = 0.044 for the first frame).
2. 4 .Figure 2 Figure 2
422 Scheme of the reflection process
Figure 2
2 Figure 2.12: The trajectory of the triple point. Experimental points are shown by marker and a linear fit is shown by a solid line.
Figure 2
2 Figure 2.13: Pressure waveforms of N -waves measured by the Mach-Zehnder interferometer at different distances h above the rigid surface at distance l = 13 cm.
FFigure 3 . 1 :
31 Figure 3.1: Geometry of the focusing.
.13) Eqs. (3.10), (3.11), and (3.13) provide similar values of the limiting pressure level at the focus.
Figure 3 . 4 :
34 Figure 3.4: Wavefront geometry in focusing of shock waves with different Mach numbers. (a) Linear focusing, (b) focusing of weak shock (M a ∼ 10 -2 ), (c) focusing of moderately strong shock (M a ∼ 10 -1 ), (d) focusing of strong shock (M a ∼ 1). The figure is taken from PhD thesis[START_REF] Kulkarny | An experimental investigation on focussing of weak shock waves in air[END_REF].
.4(a). The intersection of the central wave and the edge wave occurs exactly at the focus. As the value of acoustic Mach number increases up to M a ∼ 10 -2 velocity of both central and edge waves is also increases and front interaction occurs closer to reflector [Fig. 3.4(b)]. The focal area becomes cigar shaped. In a paraxial area the Mach stem starts to form. With further increase of acoustic Mach number the 3.2. Numerical model based on the KZK equation focal area moves towards to reflector [Fig. 3.4(c),(d)]. The structure of fronts at the focal area resembles to the one observed in a case of strong shock reflection from a rigid boundary.
Figure 3 . 5 :
35 Figure 3.5: Initial waveforms on the transducer (a) and waveforms in its focus (b) in the case of linear focusing with G = 10. The harmonic wave is presented by the dashed curve, and pulseby the solid curve.
Chapter 3 .
3 Saturation mechanisms of shock wave parameters in pulsed and periodic high-intensity focused ultrasound beams §3.3 Effect of a signal waveform on limiting values of shock wave parameters in nonlinear focused beams For numerical simulation of focused beams the following parameters in the equation (3.26) were chosen: G = 10; 20; 40; 0 ≤ N ≤ 6. The peak pressure of p 0 = 6 MP a, pulse duration of T 0 = 4 μs, an effective reflector radius a 0 = 77 mm, and an effective focal distance F = 128 mm are typical for the Dornier HM3 lithotripter
Figure 3 .Figure 3
33 Figure 3.6 presents two-dimensional patterns of spatial distributions for the peak positive (a, b) and peak negative (c, d) pressures in a nonlinear beam (G = 10, N = 1.0) for periodic (a, c) and pulsed (b, d) fields of Gaussian transducer. The distance σ = 1 corresponds to the geometrical
3. 3 .Figure 3 . 7 :
337 Figure 3.7: The upper series: ray patterns for periodic (a) and pulsed (b) fields. Rays are shown by solid lines and shock fronts -by dashed lines. The white dashed line indicates the front position at the point of its straightening in the paraxial area. Colors indicate levels of the peak positive pressure (G = 10, N = 1.0). The lower series: waveforms at the source axis of Gaussian transducer at different distances σ in the cases of periodic (c) and pulsed (d) fields.
Figure 3 .
3 Figure 3.7 presents ray patterns for periodic (a) and pulsed (b) fields in the case of Gaussian source with G = 10 and N = 1.0. A dashed line indicates wave fronts that were determined at each spatial point according to the maximum of the derivative from a waveform at this point. Solid lines indicate rays plotted as perpendicular to wave fronts in dimensional coordinates. Let us to
Θ
Figure 3.8: Waveforms at the focus (at the location of the maximum of the peak positive pressure) for periodic (a) and pulsed (b) signals with G = 10 and different values of the nonlinear parameter N = 0.2, 0.5, 1.0, and 3.0. Solid pink lines show waveforms at the source.
Figure 3 .
3 Figure 3.9a presents saturation curves for the peak positive pressure in periodic and pulsed fields of Gaussian source. Saturation curves for a periodic field are shown in dashed lines and for a pulsed
Chapter 3 .Figure 3
33 Figure3.9: Saturation curves for the peak positive (a) and negative (b) pressures. Dashed lines correspond to the dependencies for a periodic field and solid lines -to a pulsed one. Thick light-blue lines show the saturation curves plotted using the approximations given by Eqs.(3.32)-(3.34). In the plot (a) at the right from the legend different colors indicate saturation levels calculated using the analytical solution given by Eq.(3.31).
Figure 3 Figure 3
33 Figure 3.10: Waveforms at different transverse distances ρ to the beam axis at σ = 0.8 in the cases of periodic (a) and pulsed (b) fields generated by a Gaussian source.
Figure 3
3 Figure 3.12: Dependencies of the beam energy on propagation distance σ for the periodic waves (a) and pulses (b) calculated with different values of nonlinearity N = 0.2, 0.5, 1.0, and 3.0.
3. 4 .
4 Effect of source apodization on spatial structure and limiting values of shock wave parameters in nonlinear focused beams §3.4 Effect of source apodization on spatial structure and limiting values of shock wave parameters in nonlinear focused beams
Figure 3
3 Figure 3.13: 2D spatial distributions of the peak positive pressure in cases of a Gaussian source (upper series a, b) and piston source (lower series c, d). Distributions in left column (a, c) correspond to the periodic field while the right column (b, d) -to the pulsed field. White solid lines indicate sizes of focal areas. White dashed lines show the plane of the geometrical focus. Distributions are plotted for G = 40, N = 1.0.
3. 4 .Figure 3
43 Figure 3.15: Distributions of peak positive P + and peak negative P -pressures on the axis of the piston source in pulsed field at G = 10. Values of nonlinear parameter N are shown on the legend to Fig.3.16.
Figure 3 .
3 Figure 3.16: Distributions of peak positive P + and peak negative P -pressures on the axis of the piston source in pulsed field at G = 40.
Chapter 3 .Figure 3 Figure 3
333 Figure3.17: Distributions of peak positive P + and peak negative P -pressure on the beam axis in cases of focused Gaussian source (blue curves) and piston source (red curves). Distributions are given for pulsed (solid lines) and periodic (dashed lines) fields at G = 10 (a, c, e) and G = 40 (b, d, f).
Figure 3
3 Figure 3.19: Saturation curves of peak positive (a) and peak negative (b) pressures in pulsed (solid lines) and periodic (dashed lines) fields of piston source. Curves are plotted for highest values of P + and |P -| achieved in each field.
piston sources. Saturation curves of peak positive and negative pressures in the case of piston source are shown in Fig. 3.19. In contrast to ones obtained in Gaussian fields [Fig. 3.9(a)], saturation curves for a piston source are no more superimposed for different values of the diffraction parameter G. This means that limiting values of peak pressures for a piston source depend on either pulse duration or the frequency of the harmonic wave. For a case of G = 40 the peak positive pressure in pulsed field reaches the saturation immediately after the formation of the shock front. Saturation curves for peak negative pressure for periodic and pulsed fields differ no more than 35% [Fig. 3.19(b)]. Levels of |P -| in fields produced by a piston source are below corresponding levels of |P -| in Gaussian fields [Fig. 3.9(b)]. §3.5 Interaction of shock fronts in nonlinear focused acoustic beams
Figure 3
3 Figure 3.20: Mach stem formation in the focused beams of periodic waves [(a)-(c)] and bipolar pulses [(d)-(f)]. (a), (d) Temporal pressure waveforms at different transverse distances ρ from the beam axis. (b), (e) Temporal derivatives of the pressure waveforms shown on (a) and (d) correspondingly, i.e., numerical schlieren images. The darker greys indicate higher values of the derivatives. (c), (f) Initial waveform and waveforms at different radial distances ρ from the axis indicated by arrows in (b) and (e) correspondingly.
First, consider results
of modeling obtained at G = 10 and N = 1 for nonlinear propagation of focused periodic acoustic waves [Figs. 3.20(a)-(c)]. It is clearly seen that the spatial structure of the wave front is very similar to the von Neumann reflection: it contains one front intersecting the beam axis (the Mach stem) further dividing into two fronts at each distance from the axis [Figs. 3.20(a) and (b)]. A continuous slope can be seen between the focusing front and the Mach stem that distinguishes the von Neumann reflection regime. Note that the Mach stem structure Chapter 3. Saturation mechanisms of shock wave parameters in pulsed and periodic high-intensity focused ultrasound beams corresponds to one shock in one period of the wave [Fig. 3.20(c), waveform 1], while there are two shocks in one period of the wave away from the axis at ρ = 0.016 [Fig. 3.20(c), waveform 2].
Figure 4 . 2 :
42 Figure 4.2: Biological effects of ESWT; X-ray of healing of fractures is taken from (Endres et al., 2008).
Figure 4
4 Figure 4.3: (a) Pneumatic ESWT device, model EMS Swiss Dolorclast Vet. (b), (c) Pressure waveform measured in water at a distance 1 cm from the source for unfocused and focused applicators, correspondingly. Figures are taken from[START_REF] Cleveland | Acoustic field of a ballistic shock wave therapy device[END_REF].
Figure 4 . 4 :
44 Figure 4.4: Noninvasive treatment for orthopedic disorders using ESWT device Duolith SD1 (Storz Medical, Switzerland). Figures and acoustic parameters of generated pulses are taken from technical specification sheet of Duolith SD.
Chapter 4 .Figure 4
44 Figure 4.5: Use of acoustic radiation force to push kidney stones. Diagnostic probe Philips C5-2 controlled by Verasonics is used for the procedure.
. Experiments were performed in the Center for Industrial and Medical Ultrasound (CIMU), University of Washington, Seattle. The author was participated in planning of experiments, data processing and discussion of obtained data. §4.2 Nonlinear effects in acoustic field of a
Figure 4 . 6 :
46 Figure 4.6: Measurements of pressure waveforms at the plane located 5 mm far from the source.
Chapter 4 .Figure 4
44 Figure 4.7: (a),(b),(c) Radial scan waveforms in a plane 5 mm from the standoff at radial distances r = 0, 7 and 14 mm, respectively. Typical measured waveforms are shown in blue, the waveforms after averaging in red, and the numerical tails added to each waveform are shown in black. (d) The red curve shows the decreasing pressure amplitude with radial distance. Additional waveforms were introduced from 14 mm to 20 mm with the amplitude shown in green color.
Chapter 4 .
4 . 4.7(d) as a red line, additional waveforms were introduced with the amplitude shown in Fig.4.7(d) by a green line.Obtained using the method of equivalent source boundary condition is shown in Fig.4.8 and was used in the modeling of KZK equation. Here, the time-axis of the signals is shifted by 32 Characterization of nonlinear focused ultrasound fields of new medical devices μs, so that its beginning corresponds to t = 0. In the simulations, boundary condition was set in the window of 65.5 μs × 43.2 mm. Not shown in Fig.4.8 part of boundary condition was equal to zero. One can clearly see that the profiles have a different time delay depending on the radial coordinate r, which will provide the focusing of the beam in the simulations.
Figure 4 .
4 Figure 4.8: Boundary condition map for the modeling algorithm.
Figure 4 . 9 :
49 Figure 4.9: Axial distribution of the measured peak positive and peak negative pressures (black circles) compared to the modeling results (red line). Blue dash line corresponds to the position of the therapy head of the standoff.
exper. |p -|, exper.
Figure 4 .Figure 4 .
44 Figure 4.11: Radial (transverse) scans at the focus for peak positive p + and peak negative |p -| pressures in two perpendicular to the beam axis directions.
Chapter 4 .
4 Characterization of nonlinear focused ultrasound fields of new medical devices elongated shape along the axis of the beam and is localized at about a focal depth F = 30 mm; focal areas of |p -| and I are wider in the transverse direction than the focal area of p + and located at a distance corresponding to about one third of the focal length F .
Figure 4 .Figure 4 .
44 Figure 4.12: Two-dimensional spatial distributions of the peak positive (a) and peak negative (b) pressures, and energy density (c) in the field generated with the short standoff obtained in the modeling of transducer Duolith SD1.
Figure 4 .
4 Figure 4.14: Definition of the shock rise time used its time derivative (level 0.36 of the maximum level of the derivative should be used).
1
1 Low power measurements on the axis and in the focal plane of the transducerMeasurements were performed in CIMU (Seattle) by Bryan Cunitz.
4. 3 .Figure 4 .
34 Figure 4.15: Diagram of the experimental arrangement for measurement of acoustic field in water. Fiber optic hydrophone (FOPH) was used for high-power measurements while lowamplitude measurements were performed using a capsule hydrophone.
Figure 4 .
4 Figure 4.16: Geometry of focusing from the diagnostic 2.3 MHz C5-2 array probe
Figure 4 .
4 Figure 4.17: Comparison of simulated and measured acoustic pressure amplitude distributions at the lowest probe output of 2V (linear propagation). Axial pressure distributions are depicted in the left column while two right columns depict distributions in two transverse directions in the focal plane at z = 50 mm. Results are presented for 16, 32, 40, 64, and 128 active elements of the probe.
4. 3 .Figure 4 .
34 Figure 4.18: Spatial distributions of the pressure amplitude obtained in numerical simulations of linear propagation for 16, 32, 40, 64, and 128 active elements. White dashed curves show the active surface of the probe.
Chapter 4 .Figure 4 .
44 Figure 4.19: Comparison of single periods of wave at the focus of the probe obtained in numerical simulations and measured in water.
Figure 4 .
4 Figure 4.20: Saturation curves for the peak positive and negative pressures obtained in numerical simulation (solid curves) and in hydrophone measurements (markers) for configurations of 16, 32, 40, 64, and 128 active elements of the probe.
Chapter 4 .Figure 4 .
44 Figure 4.21: Waveforms at the focus of the probe (z = 50 mm) obtained in numerical simulations for 40 active elements of the probe.
Figure 4 .
4 Figure 4.22: Distributions of peak pressures along the z-axis of the beam obtained in numerical simulations for different number of active elements.
First, consider calculation for r = 0: spline approximation (A.3) with β 0 = 0 (because (dA/dx)| x=0 = 0) in Eq. (A.4), one obtains B(r = 0) = n (ba) + 3 2 δ n (b 2a 2 ), a = n • Δ, b = (n + 1) • Δ. (A.6) Second, consider calculation for r = m • Δ, m = 1...N -1. Let A(x = N • Δ) = 0 because A(x) → 0 if x → ∞. Replace the integral by the sum of integrals over the segments: B(r = m • Δ) = approximation (A.3) for every segment, one obtains B(r = m • Δ) = -1 π n=N -1 n=1 I n , where I n = β n (ba ) + 2γ n c [sh(b )sh(a )2b )sh(2a )]), a = arch( a c ), b = arch( b c ), c = m • Δ. (A.8) Thus, using Eqs. (A.6) and (A.8) it is possible to calculate the Abel inversion transform (A.1) numerically by summing the results for each segment. However, the light intensity I is equal to zero for distances x beyond the location of the pulse and therefore the numerical integration requires a finite window. In the numerical computations, the size of the spatial window was equal to the size of the schlieren image and the spatial step was 8 μs.
Table 2 .
2 1: Experimental values of the critical parameter a for different types of reflection. Data were obtained for two distances from the spark source.
Experimental parameters Weak von Neumann reflection von Neumann re-flection Regular reflection
M a × 10 -2 4.4 ± 0.4 Probably occurs for a ≤ (0.38 ± 0.05) (0.38±0.05) < a < (1.05 ± 0.15) a ≥ (1.05 ± 0.15)
Table 4 .
4 1: Parameters of the equivalent source provided the best fit between results of linear field simulations using the Rayleigh integral with ones obtained in measurements.
Number of elements 16 32 40 64 128
Angle of aperture θ, rad ×10 -2 8.421 16.842 21.053 33.684 67.368
Initial pressure amplitude p 0 , kPa 295 275 265 217 160
Focal depth of acoustic lens F y , mm 85 86 86 70 70
Other parameters F x = 50 mm, R = 38 mm, l y = 12.5 mm
Vasil'ev & Kraiko, 1999). In[START_REF] Brio | Mach reflection for the two-dimensional Burgers equation[END_REF] numerical modeling is used to study reflection of step shocks from a rigid boundary; the model is based on a two-dimensional (2D) Burgers equation. This model allowed authors to observe the formation of a supersonic jet in a small region
Acknowledgements
Author's personal contribution
The author took part in all the steps of the research presented in the dissertation. Aeroacoustical experimental data presented in the dissertation (Chapters 1 and 2) were obtained personally by the author in collaboration with the team of LMFA, École Centrale de Lyon (Petr Yuldashev, Edouard [START_REF] Cleveland | Physics of Shock-Wave Lithotripsy[END_REF].
Focusing of shock pulses is an important problem of nonlinear acoustics since focused shock waves are widely used in medical applications. Lithotripters are among the first medical devices which uses shock pulses in clinical practice. For about 30 years lithotripters are used for destruction of kidney stones [START_REF] Hill | Physical principles of medical ultrasonics[END_REF][START_REF] Bailey | Physical mechanisms of the therapeutic effect of ultrasound (A review)[END_REF]. There are three types of lithotripters: electrohydraulic, electromagnetic, and piezoelectric [START_REF] Cleveland | Physics of Shock-Wave Lithotripsy[END_REF]. Classification is based on the method of the pulse excitation. The shock front in the initial profile of the generated pulse is contained only in fields of electrohydraulic lithotripter; the principle of its action is based on creating an electrical discharge in one of the foci of the elliptical reflector. Pulses generated by electromagnetic and piezoelectric sources initially do not contain shocks. In these cases, formation of the shock front occurs while pulse propagates and is caused by nonlinear effects pronounced due to high pressure levels.
Examples of waveforms measured at the foci of electrohydraulic (Dornier HM3) and electromagnetic (Storz SLX) lithotripters are shown in Fig. 4.1. Typical parameters of pulses used in lithotripsy are following: peak positive pressure is in the range from 30 to 110 MPa, peak negative pressure is from -20 up to -5 MPa, and a duration of pulses is several microseconds [START_REF] Cleveland | Physics of Shock-Wave Lithotripsy[END_REF].
Extracorporeal shock wave therapy (ESWT) is another important medical application of shock pulses. Since the beginning of the 90s ESWT is used for noninvasive treatment of multiple musculoskeletal disorders such as tendonopathies, plantar fasciitis, lateral epicondylitis, pain after joint replacement, bedsores, etc [START_REF] Kudo | Randomized, placebo-controlled, double-blind clinical trial evaluating the treatment of plantar fasciitis with an extracoporeal shock wave therapy (ESWT) device: a North American confirmatory study[END_REF], Rompe et al., 2003[START_REF] Gerdesmeyer | Extracorporeal shock wave therapy for the treatment of chronic calcifying tendonitis of the rotator cuff: a randomized controlled trial[END_REF][START_REF] Furia | Safety and efficacy of extracorporeal shock wave therapy for chronic lateral epicondylitis[END_REF][START_REF] Rompe | Repetitive low-energy shock wave treatment for chronic lateral epicondylitis in tennis players[END_REF][START_REF] Steinberg | ESWT role in Wound care[END_REF]. Therapeutic effects induced by ESWT include a growth of blood vessels in a sore (angiogenesis effect) [START_REF] Ito | Extracorporeal Shock Wave Therapy as a New and Non-invasive Angiogenic Strategy[END_REF]; osteogenesis effect (new number of active elements in a saturation regime lead to shift of focal areas of peak positive and negative pressures away from the probe, reducing of their sizes and increasing of peak pressure levels.
Appendix A
Numerical calculation of the inverse Abel transform from the light intensity pattern in the schlieren image
In chapter 1 the method for reconstruction of dimensionless pressure waveforms from the light intensity pattern in the schlieren image was proposed. Here we consider the numerical calculation of this integral. The inner integral over I(r') is calculated using trapezoidal numerical integration, while the outer integral contains a singularity at x → r and cannot be calculated in this way. To avoid the singularity, the integrand is approximated using a cubic spline interpolation and then calculated numerically. Note that the investigated integral multiplied by the factor (−1/π) is the Abel inversion transform, which is written in general form as [START_REF] Bracewell | The Fourier Transform and Its Applications[END_REF]

B(r) = -\frac{1}{\pi}\int_r^{\infty} \frac{dA/dx}{\sqrt{x^2 - r^2}}\, dx.
One uses the following property of this inversion:
Calculating the derivative, one finds dA/dx = 2
Let us approximate A(x) using cubic spline interpolation:
and assume that the function A(x) is given at the nodes of the uniform grid
where Δ is the mesh spacing.
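A minimal numerical inverse Abel transform can also be written directly from the general formula above. The sketch below removes the integrable singularity at x → r with the substitution x = r·cosh(u); this is a simplified stand-in for the cubic-spline quadrature of this appendix, and the Gaussian test profile is arbitrary.

```python
import numpy as np

# Inverse Abel transform: B(r) = -(1/pi) * Int_r^inf (dA/dx) / sqrt(x^2 - r^2) dx,
# evaluated with the substitution x = r*cosh(u) to remove the singularity.
def inverse_abel(x, A, r):
    dAdx = np.gradient(A, x)
    u = np.linspace(0.0, 6.0, 2000)            # truncated upper limit of u
    du = u[1] - u[0]
    out = np.empty_like(r)
    for i, ri in enumerate(r):
        xi = ri * np.cosh(u)
        integrand = np.interp(xi, x, dAdx, right=0.0)
        out[i] = -np.sum(integrand) * du / np.pi
    return out

# Self-check with a Gaussian: if B(r) = exp(-r^2), its projection is
# A(x) = sqrt(pi)*exp(-x^2); the inversion should recover B.
x = np.linspace(0.0, 8.0, 801)
A = np.sqrt(np.pi) * np.exp(-x**2)
r = np.linspace(0.05, 3.0, 60)
B = inverse_abel(x, A, r)
print(np.max(np.abs(B - np.exp(-r**2))))       # small reconstruction error
```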
Appendix B
Synthèse des résultats
Dans l'introduction nous présentons l'actualité du sujet de la thèse qui porte sur l'étude de la focalisation nonlinéaire et de la réflexion d'ondes de choc acoustiques. Dans le contexte des applications aux ultrasons médicaux et à l'aéroacoustique l'état de l'art à ce jour des questions scientifiques est exposé ainsi que les objectifs généraux. Ci-après nous résumons les travaux et les principaux résultats obtenus.
The first chapter is devoted to the implementation of optical methods for measuring acoustic pressure profiles in the case of an N-wave generated by a spark source in air. In §1.1 a review of the existing methods for measuring acoustic shock waves is presented and the limitations of measurements with condenser microphones are discussed. The optical methods proposed as an alternative for measuring acoustic shock pulses are schlieren shadowgraphy and interferometry. In §1.2 the experimental setup designed for schlieren measurements of acoustic waves generated by a spark source in a homogeneous medium is presented. A procedure for reconstructing acoustic pressure waveforms from schlieren images is described in §1.3. The pressure waveforms were reconstructed using an Abel-type inversion method from the light intensity maps recorded with a high-resolution, high-speed camera. The absolute pressure levels were determined by analyzing, at different propagation distances, the duration of the compression phase of the pulses, which is modified by nonlinear propagation effects. Examples of pressure signatures reconstructed at different distances from the source are presented in §1.4. Note that the temporal resolution of the method (3 μs) is limited by the exposure time of the high-speed camera. Another optical method proposed in the thesis for measuring spherically diverging N-waves is based on the Mach-Zehnder interferometry technique. The experimental setup is described in §1.5. In §1.6 the reconstruction method used to recover pressure waveforms from optical phase signals is described. The reconstruction is based on an Abel-type inversion. Unlike the schlieren optical method, the Mach-Zehnder method allows the quantitative reconstruction of N-wave pressure waveforms and therefore provides a broadband "laser microphone". The results of the optical measurements obtained with the Mach-Zehnder interferometer are given in §1.7. The temporal resolution of the interferometric method (0.4 μs) is mainly determined by the finite width of the laser beam (about 0.1 mm). In §1.8 the advantages and limitations of the two optical methods (method |
04121015 | en | [
"info"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04121015/file/publi-7298.pdf | Youssra Rebboud
email: [email protected]
Pasquale Lisena
email: [email protected]
Raphaël Troncy
email: [email protected]
Prompt-based Data Augmentation for Semantically-Precise Event Relation Classification
Keywords: Event Relation, Information Extraction, Knowledge Graphs, Machine Learning
The process of recognizing and classifying the relationships between events mentioned in text is a crucial task in natural language processing (NLP) known as event relation extraction. While temporal relations and causality are largely studied in the literature, other types of relations have received less interest. Our study specifically concentrates on four types of event relations: causality, enabling, prevention, and intention. Our main contribution consists of the use of a state-of-the-art language model (GPT-3) to extend an existing small dataset with synthetic examples, addressing the challenge of insufficient training data. We evaluate the quality of these generated samples by training an event relation extraction system, showing improved performance in classifying event relations.
Introduction
Relation extraction (RE) -the identification and classification of relationships between two named entities in raw text -is a classic natural language processing (NLP) task which is receiving attention from the scientific community [START_REF] Hendrickx | SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations between Pairs of Nominals[END_REF][START_REF] Nasar | Named Entity Recognition and Relation Extraction: State-of-the-Art[END_REF]. A more specific branch of RE aims to automatically detect the relations between events (Event Relation Extraction, ERE). While, in RE, the entity type is crucial for inferring the correct relation 1 , in ERE the relations involve homogeneous entities (namely, pairs of events), requiring specialised methods and approaches. Among event relations, the literature has mostly focused on temporal relations [START_REF] Vo | Extracting Temporal Event Relations Based on Event Networks[END_REF], causality [START_REF] Khetan | Causal bert: Language models for causality detection between events expressed in text[END_REF], and coreference of the same event in different textual resources [START_REF] Liu | Extracting events and their relations from texts: A survey on recent research progress and challenges[END_REF].
Apart from causal and temporal structures, event relations may include different concepts such as prevention, intention, enabling, etc. Extracting this variety of relations may serve various downstream applications, including semantic timelines, question answering, and fact checking, supporting decision making and improving information access and entertainment. Apart from a demonstrative proof-of-concept [START_REF] Rebboud | Beyond Causality: Representing Event Relations in Knowledge Graphs[END_REF], the automatic extraction of different kinds of relations has not been deeply investigated. The first step toward achieving this objective is to develop a dedicated dataset. Although [START_REF] Rebboud | Beyond Causality: Representing Event Relations in Knowledge Graphs[END_REF] attempts to construct such an initial dataset for multiple event relations, the result is particularly small in size and exhibits significant imbalances.
The recent advent of generative models has marked a paradigm shift in Natural Language Processing, thanks to their capability to generate human-like, complex text relying only on the provided prompt. In this work, we aim to use prompt-based solutions to extend The Event Relations Dataset [START_REF] Rebboud | Beyond Causality: Representing Event Relations in Knowledge Graphs[END_REF] with synthetic sentences including references to event relations, with a particular focus on causality, prevention, intention, and enabling. This dataset will then be used to further evaluate state-of-the-art techniques based on machine learning, checking whether they can be successfully applied to the prediction of event relation types.
This work has two research objectives:
• To investigate whether prompt-based generative models are suitable for generating synthetic data for the purpose of populating an event relation dataset, particularly for such rare and very specific event relation types. • To evaluate the performance of methods based on language models in predicting event relations when trained on synthetic data.
The remainder of this paper is structured as follows. First, we review the existing event relation datasets and event relation extraction approaches in Section 2. We describe our approach for constructing a synthetic event relation dataset by providing GPT-3 with prompts to generate sentences that hold prevention, intention, and enabling relations, as well as their constructs, in Section 3. Section 4 details a method for extracting event relations from the generated dataset, whose results are discussed in Section 5. Finally, we conclude and outline some future work in Section 6.
Related Work
Events and Event Relationships Datasets
Within the field of events and events relationships, several datasets have been developed with the main objective of capturing events, event coreferences, causal and temporal relations. For instance, ACE 20052 for event extraction, and TimeBank [START_REF] Uzzaman | SemEval-2013 task 1: TempEval-3: Evaluating time expressions, events, and temporal relations[END_REF] and CausalTimeBank [START_REF] Mirza | Annotating causality in the TempEval-3 corpus[END_REF] respectively for temporal and causal events relationship extraction.
In [START_REF] Rebboud | Beyond Causality: Representing Event Relations in Knowledge Graphs[END_REF], the FARO ontology for representing event relations has been introduced as a harmonisation of different models from the literature, including definitions for all relations. In addition, a first Event Relation dataset covering four event relation types -causality, prevention, enabling, and intention -is presented in [START_REF] Rebboud | Beyond Causality: Representing Event Relations in Knowledge Graphs[END_REF]. With the exception of causality, these relation types are absent from other annotated datasets. Due to the small size of the dataset and its imbalance, training a model on top of it presented a significant challenge. 3 To address this gap, our work aims to increase the size of this dataset using data augmentation techniques. In prior research, numerous techniques have been developed for data augmentation within the field of event and event relationship extraction [START_REF] Liu | Extracting events and their relations from texts: A survey on recent research progress and challenges[END_REF], such as distant supervision [START_REF] Feng | Effective deep memory networks for distant supervised relation extraction[END_REF] and translation [START_REF] Liu | Event Detection via Gated Multilingual Attention Mechanism[END_REF]. In this work, we intend to leverage the capabilities of generative models such as GPT-3 [START_REF] Brown | Language Models are Few-Shot Learners[END_REF]. It is worth mentioning that, at the time of writing, no official API for ChatGPT is available.
Events and Events Relationship Extraction
Numerous techniques have been adopted to tackle the event relation extraction problem from a general point of view, regardless of the entity type -event, person, location, etc. -including supervised, unsupervised, semi-supervised, and distant supervision approaches [START_REF] Wang | Deep Neural Network Based Relation Extraction: An Overview[END_REF]. Each approach has advantages and limitations: supervised approaches heavily rely on large training datasets; unsupervised approaches fall short in labeling the identified clusters -introducing a barrier to human understanding -and in finding unified evaluation metrics; distant supervision approaches are based on entity alignment between a corpus and a knowledge base, but have demonstrated low accuracy scores due to poor precision.
Particularly, the Cross-Modal Attention Network [START_REF] Zhao | Modeling dense cross-modal interactions for joint entity-relation extraction[END_REF] achieved state-of-the-art performances by simultaneously learning two tasks: entity recognition and relation classification. The approach involves injecting the token-level information into entity tags, rather than concatenating token and label representations.
In the literature, the extraction of events and their relationships studies mostly a subset of possible relations, namely causality, temporality and coreference [START_REF] Liu | Extracting events and their relations from texts: A survey on recent research progress and challenges[END_REF]. Previous studies demonstrated that models based on the combination of CNN, LSTM and attention mechanism are able to capture causal dependencies, even when the cause and effect are separated by a significant distance within the sentence [START_REF] Yang | A survey on extraction of causal relations from natural language text[END_REF].
Event extraction has been made possible using pretrained language models such as BERT [START_REF] Devlin | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[END_REF], as shown in [START_REF] Yang | Exploring pre-trained language models for event extraction and generation[END_REF]. SpanBERT [START_REF] Joshi | SpanBERT: Improving pre-training by representing and predicting spans[END_REF] -an improved version of BERT that excels in predicting text spans instead of single words -has also been employed for event extraction, resulting in notable performance gains [START_REF] Portelli | Improving adverse drug event extraction with spanbert on different text typologies[END_REF].
Building a Synthetic Event Relations Dataset with GPT-3
The Event Relation dataset (Section 2.1) is the only available dataset including multiple event relation types. In this section, we describe our efforts to overcome the two most important limitations of this dataset: its size and the large imbalance between relation types.
Our data augmentation strategy for expanding the dataset is based on the automatic generation of sentences using a prompt-based model. Using the right prompt as input, the model would provide new synthetic sentences for enriching the dataset.
We use the GPT-3 language model [START_REF] Brown | Language Models are Few-Shot Learners[END_REF], and more precisely the GPT-3.5 text-davinci-003 variant as described in the OpenAI documentation. 4 We are interested in generating sentences that involve events and relationships between them, particularly those related to prevention, intention, and enabling.
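As an illustration, the following snippet is a minimal sketch of how such completions could be requested through the legacy OpenAI Python client (openai < 1.0); the API key, sampling parameters and prompt text are assumptions made for the example and are not reported in the paper.

```python
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

def generate_sentences(prompt: str, n: int = 5) -> list:
    """Ask text-davinci-003 for n candidate sentences for a given relation prompt."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0.9,          # higher temperature -> more varied sentences
        n=n,
    )
    return [choice["text"].strip() for choice in response["choices"]]
```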
Starting Point: The Event Relations Dataset
The Event Relations Dataset [START_REF] Rebboud | Beyond Causality: Representing Event Relations in Knowledge Graphs[END_REF] -named later in this paper the Original Dataset -describes some of the FARO event relation types. It represents the first event and event relation dataset that encapsulates different event relations, ranging from temporal to causal, and extending beyond causality to include intention, prevention, enabling, and the explicit negation of causality -which we will not cover in this work, because it would require a separate discussion. The dataset was constructed by manually re-annotating two existing datasets, TimeBank [START_REF] Uzzaman | SemEval-2013 task 1: TempEval-3: Evaluating time expressions, events, and temporal relations[END_REF] and [START_REF] Ning | Joint Reasoning for Temporal and Causal Relations[END_REF], which previously only included temporal and causal relations. The dataset was afterwards extended with more samples for the prevention and enabling relations using the same manual validation technique.
The annotation of the aforementioned new event relation types also involved the annotation of the constructs of each relation, which we refer to in the following as event triggers. It is worth mentioning that these event triggers belong to a broader class in FARO called Relata, which is an abstraction encompassing two subclasses: Event (immanent) and Condition (transcendent). In this paper, we consider a subset of events that act as triggers for preventing, causing or intending to cause other events, and conditions that enable the happening of another event.
Example 1. "The move boosts Intelogic Chairman Asher Edelman's stake to 20% from 16.2% and may help prevent Martin Ackerman from making a run at the computer-services concern. "
𝑚𝑜𝑣𝑒 prevents -------⟶ 𝑟𝑢𝑛
In Example 1., there exists an event relationship of type prevention, and the two event triggers that participate in the relation are move and run of type Event.
Example 2. "The government of Prime Minister Brian Mulroney has been under pressure to reduce the deficit, which is expected to reach C$30 billion this year. "
𝑝𝑟𝑒𝑠𝑠𝑢𝑟𝑒 enables ------⟶ 𝑟𝑒𝑑𝑢𝑐𝑒
In Example 2., the event relationship involves the type of enabling, with the two corresponding event triggers being pressure and reduce, in which pressure is an event trigger of type Condition and reduce is of type Event.
Table 1 summarizes the number of event relations per relation type in the Original Dataset after extending it with news agency samples.
Prompt-based Sample Generation of Sentences
When designing the prompt utilized to generate synthetic examples for a specific relation type, we include:
1. the definition that the FARO ontology assigns to that relation type; 2. a subset of relevant examples from the dataset.
We consider a sequence of words X_i = [x_1, ..., x_t1, ..., x_t2, ..., x_n], representing an event relationship of a specific relation type ER_x occurring between two Relata. The words x_t1 and x_t2 respectively represent, in the text, the two Relata which are the subject and the object of the relation. The definition of the relation type, definition(ER_x), is taken from the FARO ontology.
The selection of the prompt was done after a series of attempts. For sentence generation, we started by leveraging only the task description in the prompt. However, the generated sentences were too short and basic, while we need realistic and longer sentences, similar to those in the Original Dataset.
Table 2 demonstrates an effort to prompt the model to produce sentences that showcase connection between events with the desired relation type, but the resulting answer falls short of meeting our intended expectations.
The prompt text to generate sample sentences including relations of type ERx is written as the following:
Prompt(ERx) = definition(Event) + definition(ER x ) + request(ER) + examples(ERx)
This prompt definition concerns prevention and intention relations. In the context of enabling relation, we include the definition of a condition as follows:
Prompt(ER enable ) = definition(Event) + definition(Condition) + definition(ER enable ) + request(ER) + examples(ERx)
where request(ER) refers to the task description that is given to the language model along with the definitions and examples(ERx) are randomly-selected examples from the existing dataset which will be used to iteratively expand and reformulate the dataset.
Example: Prompt used to generate sentences with event relation of type Enabling. Note that the original dataset was re-annotated based on Timebank [START_REF] Uzzaman | SemEval-2013 Task 1: TempEval-3: Evaluating Time Expressions, Events, and Temporal Relations[END_REF] and the Event Causality dataset [START_REF] Ning | Joint Reasoning for Temporal and Causal Relations[END_REF], both of which are derived from news articles. This makes the majority of the sentences fall within the political domain. Therefore, the word political was introduced in the prompt to ensure that the generated sentences are coherent and consistent with the domain of the original dataset.
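A minimal sketch of how the prompt defined above could be assembled programmatically is given below; the wording of the task request and the way the few-shot examples are sampled are assumptions made for illustration.

```python
import random

EVENT_DEF = ("An event is a possible or actual event, which can possibly be "
             "defined by precise time and space coordinates.")
CONDITION_DEF = ("A condition is the fact of having certain qualities, "
                 "which may trigger events.")

def build_prompt(relation: str, relation_def: str, examples: list, k: int = 5) -> str:
    """Compose definition(Event) [+ definition(Condition)] + definition(ER) + request + examples."""
    parts = [EVENT_DEF]
    if relation == "enable":               # enabling also involves a Condition trigger
        parts.append(CONDITION_DEF)
    parts.append(relation_def)
    parts.append(f"Write political-domain sentences in which one event {relation}s another event.")
    parts.extend(random.sample(examples, k))   # few-shot examples from the Original Dataset
    return "\n".join(parts)
```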
Prompt-based Event Trigger Annotation
Similarly to sentence generation, we include the definitions of events in the prompt, adding a few examples illustrating the right position of the event triggers for each relation type. The prompts have been chosen so that the generated samples follow as similar a pattern as possible, in order to facilitate parsing.
For an event relationship ERx of type prevention or intention, the prompt for selecting the event trigger words is designed as follows:
Table 2 Example of prompting attempts that fell short of producing the desired results
Prompt
Answer Limitation Give me an event that enables the happening of other event.
One event that enables the happening of another event is a person's decision to take an action. For example, a person's decision to get up and walk across a room enables the person to arrive at their destination.
• Prompt had no examples from the existing dataset. • Answer was too simple. • Did not describe a real-world scenario. • Explanation was given instead of an actual expression.
Describe a situation where an event is an intention to cause another event, for example: Companies such as Microsoft or a combined worldcom MCI are trying to monopolize Internet access.
An example of an event intended to cause another event is when companies such as Microsoft or a combined worldcom MCI attempt to monopolize Internet access. By controlling the majority of the market, these companies can dictate the terms and prices of access, potentially limiting consumer choice and driving up costs...
• Explanation was given instead of an actual expression. definition(Event)
An event is a possible or actual event, which can possibly be defined by precise time and space coordinates. definition(Condition) A condition is the fact of having certain qualities, which may trigger events. definition(ER enable )
The enables relationship connects a condition or an event (trigger1), with an other event (trigger 2) it is contributing to realize as an enabling factor. where the last element is the description of the task of retrieving event triggers from the text. This request takes the following shape:
If in this sentence <TEXT OF THE SENTENCE> is present an expression with a <RELATION TYPE> relationship between <𝑥 𝑡1 > (trigger1) and <𝑥 𝑡2 > (trigger2), what would be the trigger1 and trigger2 in these sentences? Give me only one single word for each trigger an only two triggers per sentence. Put each pair between parentheses in a separate line.
For event relations of type enable, the definition of the condition is modified in the following way: The prevent relationship connects an event (trigger1) with the event (trigger 2) for which is the cause of not happening. request(ET)
Prompt
If in this sentence "Subcontractors will be offered a settlement and a swift transition to new management is expected to avert an exodus of skilled workers from Waertsilae Marine's two big shipyards, government officials said." is present an expression with a prevention relationship between settlement (trigger1) and exodus (trigger2), what would be the trigger1 and trigger2 in these sentences? Give me only one single word for each trigger an only two triggers per sentence, put each pair between parentheses in a separate line.
Manual Validation
Using these prompts, we generated 600 sentences for each of the relation types.
To guarantee the accuracy of the generated set of samples and of their event triggers, we manually validated each synthetic sentence, ensuring its adherence to the given definition. Overall, 90.77% of all generated sentences correctly represented an event relation of the requested type. After removing the wrong samples from the dataset, we proceeded to check the correctness of the extracted event triggers for the remaining correct sentences.
The generated event triggers were not consistent in terms of their patterns from one generation to another. For this reason, an additional parsing step was needed, as sketched below. To do so, we identified the different textual patterns, processed and categorized them by removing irrelevant words such as '(trigger 1)', and retaining only the precise word or sequence of words that represents the essential part of the event. We were able to identify roughly 12 different patterns. Some examples are reported in Table 3.
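The following snippet is a small illustrative sketch of this normalization step for some of the observed patterns (such as those in Table 3); the regular expressions are assumptions and do not cover all twelve patterns.

```python
import re

def parse_triggers(raw: str):
    """Strip '(trigger1)'/'(trigger2)' decorations and return the two trigger words, if found."""
    text = re.sub(r"\(?\s*trigger\s*\d\s*\)?\s*:?", " ", raw, flags=re.I)  # drop the labels
    text = text.strip().strip("()\"'")
    parts = [p.strip() for p in re.split(r"[,\n]", text) if p.strip()]
    return (parts[0], parts[1]) if len(parts) >= 2 else None

# Example: parse_triggers("Approval (trigger1), Acquire (trigger2)") -> ("Approval", "Acquire")
```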
After this processing, we validated the correctness of the two trigger words, measuring an accuracy of 75.15% for trigger-1 and 66.82% for trigger-2. Sentences with wrong triggers were not eliminated from the dataset but were instead manually fixed. Table 4 shows the detailed accuracy scores for each relation type.
We merged the synthetic data with our original dataset, resulting in a larger and more diverse dataset. We managed to acquire and validate 1507 new sentences -with their relative event triggers -making a total of 2289 sentences. The statistics of this new dataset -named later in the text the Augmented Dataset -are reported in Table 5. Based on the results reported above, it is evident that GPT-3, given a clear definition of the concepts -particularly the definitions of the relations and of their construct types -along with a limited number of examples (5 for each iteration), is able to reach a considerable accuracy. For event trigger generation, we just included a single example sentence, along with its event triggers. Observing the first generated annotations and realising that, despite the limited amount of training data, the results obtained were reasonably good, we decided to continue without adding further sentences. We consider the accuracy quite high, given the difficulty of the task.
Events and Event Relation Extraction
In order to fulfil our goal of testing the effectiveness of the GPT-3-based data augmentation technique on event and event relationship extraction, and at the same time to evaluate the performance of existing models, we conducted the following experiment. We fine-tune two instances of the BERT model [START_REF] Devlin | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[END_REF]. The first one, named BERTee, is for token classification, which we apply to event extraction. The second one, named BERTer, is for sequence classification, which we apply to event relation classification. We additionally fine-tune a variant of BERT for event extraction, SpanBERT [START_REF] Joshi | SpanBERT: Improving pre-training by representing and predicting spans[END_REF], taking into consideration that some of our event triggers are represented by more than one word, i.e. a span of words, and SpanBERT is specifically designed to handle this type of representation.
X_i = [x_1, ..., x_t1, ..., x_t2, ..., x_n] is a sentence of n tokens, which is part of the studied dataset. BERTee (Figure 1a) is trained to predict a tag for each token in the input sequence. The tags are chosen among TAG_i = [O, ..., x1_type, ..., x2_type, ..., O], where x1_type = [Trigger1] and x2_type = [Trigger2] are the subject and the object of the event relation, and 'O' refers to the rest of the tokens in each sentence. The model consists of 12 transformer blocks receiving in input the sequence of tokens X_i and returning the relative contextualized representation H_i = [h_1, h_2, ..., h_n]. On top of the transformers, a classification layer -consisting of a fully connected layer followed by a softmax activation -maps each contextualized representation h_i to a probability distribution over the possible labels for that token, i.e., P_hi = [P(Trigger1|h_i), P(Trigger2|h_i), P(O|h_i)]. Finally, for x_i ∈ X_i, we select the most probable tag TAG_xi = max(P_hi).
Similarly, BERTer (Figure 1b) takes as input the same sequence of tokens X_i, and uses transformers to compute the contextualized representation H_i. This representation feeds a softmax classification layer that maps the hidden states to the event relation types, outputting the probability distribution over the labels L_i ∈ L, with L = [causality, enabling, prevention, intention, No-Relation]. We similarly select the most probable label from the output label probabilities. Some of the event mentions present in our work are denoted as a span of words, i.e. a sequence of words. Although they represent the minority, we wanted to test a variant of BERT called SpanBERT [START_REF] Joshi | SpanBERT: Improving pre-training by representing and predicting spans[END_REF], which is trained mainly to predict a sequence of words rather than a single word. During pre-training, the model masks spans of words rather than random single words, pushing the neural network to predict them. The model is similar to the overall architecture of BERTee in terms of transformer blocks: a base model with 12 transformer blocks. In addition to the classification layer, SpanBERT also includes a span classification layer that is designed to predict the label for a span of tokens. For example, given the sentence: "The United Nations passed a resolution to impose an arms embargo on Syria in an effort to pressure the government to end its civil war", the span "arms embargo" is meant to be tagged by SpanBERT more robustly with a single class label, namely as "Trigger1 Trigger1".
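A minimal sketch of the two fine-tuning setups with the Hugging Face transformers library is given below; the checkpoint names, label order and hyper-parameters are assumptions for illustration, since the paper does not report the exact training configuration.

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          AutoModelForSequenceClassification)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# BERTee: token classification over {O, Trigger1, Trigger2}
bert_ee = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=3)

# BERTer: sequence classification over the five relation labels
relations = ["causality", "enabling", "prevention", "intention", "No-Relation"]
bert_er = AutoModelForSequenceClassification.from_pretrained("bert-base-cased",
                                                             num_labels=len(relations))

# SpanBERT variant used for multi-word (span) event triggers
span_ee = AutoModelForTokenClassification.from_pretrained("SpanBERT/spanbert-base-cased",
                                                          num_labels=3)

inputs = tokenizer("The settlement is expected to avert an exodus of skilled workers.",
                   return_tensors="pt")
relation_logits = bert_er(**inputs).logits   # shape (1, 5): one score per relation type
token_logits = bert_ee(**inputs).logits      # shape (1, seq_len, 3): one score per tag per token
```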
Results
Events and Events Relationship Extraction
We trained the BERTee, BERTer, and SpanBERT models on both the Original and Augmented datasets. However, it is important to note that the test set used for evaluation was extracted from the Original Dataset in both cases. The outcomes of the experiment with BERTer are shown in Table 6. We observe that the performance varied across the different classes on the Original Dataset. The model showed relatively higher F1-score values for the 'cause' and 'enable' classes, while it struggled to perform well on the 'intend' and 'prevent' classes. This is probably due to the low support of the latter classes, with the intention relation being the least represented in the dataset, with only 44 example sentences. Data augmentation leads to significant improvements in the metrics, with an increase of the F1-score for all classes. The highest improvement is observed for the 'prevent' and 'enable' classes, and in particular for the 'intend' class, which received a 115.91% increase in the F1-score with respect to the 0.44 obtained on the Original Dataset. Despite the absence of added synthetic data for the 'cause' relation, we still notice an improvement by a modest margin. Future work will involve data augmentation for that class as well.
The performance of BERTee and SpanBERT on event classification is weaker on both the Original and Augmented datasets (Table 7). Despite testing different parameters, the results indicate that further improvements are still needed to enhance the performance on this task. Therefore, the current outcomes should be viewed as preliminary, and further investigations will be carried out as future work.
Modeling the Extracted Event Relations in a Knowledge Graph
Event Knowledge Graphs have been shown to be an effective data representation for easing navigation through event flows and their relations, and for flexibly retrieving information about these events from the stored knowledge [START_REF] Guan | What is event knowledge graph: A survey[END_REF]. This can serve many applications, such as link prediction and fact-checking, and their effectiveness tends to be greater when they are richer in terms of aspects and relationships. For this reason, we generate a Knowledge Graph (KG) of events and of the relations between them from the Augmented Dataset. In other words, this KG is an RDF version of the Augmented Dataset.
The KG that we constructed contains events and relations between them. The elements in our KG are classified according to the FARO ontology, which distinguishes between two major types of Relata: Condition and Event (see Section 3.1).
More precisely, the events are typed according to the relation between them. The 'enables' relation is used to connect two entities in which the subject represents the Condition that is necessary for the object (an Event) to occur. We also represent in our KG the other relations between events on which we focused in the previous sections, such as 'causes', 'prevents', and 'intends', which relate two entities of type Event. These relations capture different types of causal and temporal dependencies between events, and they allow reasoning about complex chains of events and their potential consequences.
To ensure the traceability of our KG, we linked each event in our KG to the sentence it was extracted from, and each sentence to its provenance corpus, using the Provenance ontology (PROV-O) [START_REF] Belhajjame | PROV-O: The PROV Ontology[END_REF]. This provenance information allows tracking the origin of each piece of information in our KG, and verifying its accuracy and relevance. When the provenance involves scientific datasets or software, it is referenced in the graph using the FaBiO ontology [START_REF] Peroni | FaBiO and CiTO: Ontologies for describing bibliographic resources and citations[END_REF], detailing information about the paper title, author and year of publication.
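The snippet below is a small sketch of how one such statement and its provenance link could be serialized with rdflib; the FARO and resource namespace URIs used here are placeholders, not the actual URIs of the published graph.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, PROV

FARO = Namespace("http://example.org/faro#")      # placeholder for the FARO namespace
DATA = Namespace("http://example.org/kflow/")     # placeholder resource namespace

g = Graph()
move, run = DATA["event/move_1"], DATA["event/run_1"]
sentence = DATA["sentence/42"]

g.add((move, RDF.type, FARO.Event))
g.add((run, RDF.type, FARO.Event))
g.add((move, FARO.prevents, run))                 # "move prevents run", as in Example 1

for ev in (move, run):                            # trace each event back to its source sentence
    g.add((ev, PROV.wasDerivedFrom, sentence))

print(g.serialize(format="turtle"))
```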
Additionally, in order to have a more complete KG in terms of event relations (temporal relations, comparative relations, etc.), we have incorporated, in the same way as described above, events and their temporal relations from the TimeBank [START_REF] Uzzaman | SemEval-2013 task 1: TempEval-3: Evaluating time expressions, events, and temporal relations[END_REF] corpus. We also leverage events which have temporal, comparative and contingent relations from the [START_REF] Hong | Building a Crossdocument Event-Event Relation Corpus[END_REF] dataset, which we call in the rest of the paper the Hong dataset.
TimeBank consists of 24k events with 3.4k temporal links, extracted from 183 news articles. On the other hand, Hong consists of the annotation of ACE2005 newswire documents and other news documents about Malaysia Airlines Flight 17, resulting in 862 events with 25,610 relations between them.
The integration of the aforementioned datasets was made after examining the overlap between some of their event relation definitions and the FARO ontology. The mapping of these relations to FARO is shown in Tables 8 and 9.
The resulting graph -containing over 68,000 statements, with 11,917 event relation links -has been loaded into a triplestore, available for querying at http://kflow.eurecom.fr/.
Conclusion and Future Work
In this work, we made a first attempt towards the automatic extraction of event relations with precise semantics from raw text, focusing on a subset of them. We applied GPT-3 to generate synthetic data for the aforementioned event relation types, obtaining good accuracy. The result of this effort is a dataset consisting of 2289 sentences -of which 1507 are synthetic -annotated with the event mentions in the text. The data augmentation method described in this paper can be used to extract even more event relations by properly replacing the definition and the examples to match the required relation types. Furthermore, we used BERT for performing two related tasks: event relation classification and event mention classification. We also utilized SpanBERT to evaluate its ability to classify events that are expressed as a sequence of words into a single class, even though such events are less frequent. With the recent release of , new experiments can be performed to improve the synthetic data generation, in particular in the extraction of relevant triggers.
Furthermore, we would like to investigate the interaction between event relation types and event mentions by jointly extracting them from text, with the aim of enhancing event trigger classification by leveraging event relation information. In particular, we plan to test the effectiveness of the [START_REF] Zhao | Modeling dense cross-modal interactions for joint entity-relation extraction[END_REF] model on our own dataset and assess its performance under similar conditions.
In future work, we intend to combine the automatic classification of event relation to classic event identification techniques, in order to automatically annotate news and encyclopedic entries, with the final goal of realizing a KG of interconnected events with precise semantics.
(a) BERTee for Event Extraction (b) BERTer for Event Relation Classification
Figure 1 :
1 Figure 1: BERTee and BERTer architectures
Table 1
1 Total number of relations in the Original Dataset.
Relation type Cause Intend Prevent Enable Not-Cause
Number of relations 283 44 89 124 3
Event Triggers ER enable = definition(Event) + definition(Condition) + definition(ER enable ) + request trig (ER enable , sentence, x t1 , x t2 ) Example: Prompt used to generate event triggers with event relation of type Prevention
definition(Event) An event is a possible or actual event, which can possibly be
defined by precise time and space coordinates.
definition(Condition) A condition is the fact of having certain qualities, which may
trigger events.
definition(ER enable )
Table 3
3 Three of the different textual patterns which GPT-3 was returning in output for the Event Triggers selection.
Pattern Number Event Triggers
0 Entitles, Buy
1 Approval (trigger1), Acquire (trigger2)
2 "Trigger1 (military): success Trigger2 (diplomatic): risks"
Table 4
Percentage of Correct Sentences and Event Trigger Words with GPT-3
Relation types Intention Prevention Enabling Total
Correct Generated Sentences(%) 93.82 97 81.5 90.77
Correct ET1 (%) 75.13 81.83 68.5 75.15
Correct ET2 (%) 73.47 77 50 66.82
Number of Checked Examples 600 600 600 1800
Table 5
5
Augmented Dataset Statistics
Relation Type Original Dataset Augmented Dataset
Prevent 93 646
Enable 118 573
Intend 44 615
Cause 283 283
No-Relation 72 172
total 610 2289
Table 6
6 Event relationship extraction results on the test set with BERTer on both the original and the Augmented dataset
Dataset Class Precision Recall F1-Score
cause 0.64 0.83 0.72
enable 0.77 0.67 0.71
Original Dataset intend 0.67 0.33 0.44
prevent 0.62 0.67 0.64
cause 0.75 0.75 0.75 (+4.16%)
enable 0.93 0.96 0.95 (+33.80%)
Augmented Dataset intend 0.98 0.92 0.95 (+115.91%)
prevent 0.96 0.93 0.94 (+46.86%)
Table 7
7 Event extraction results on the test set after finetuning BERTee and SpanBERT for token Classification on both the original Dataset and the augmented dataset. (*): The reported metrics are the macro average of Precision, Recall, and F1-score. The macro average F1 score computes the F1 score independently for each class and then takes the unweighted average of the scores. This means that each class is given equal weight, regardless of the number of samples it contains. For this reason, the F1 score values may seem to not reflect the overall precision and recall metrics
Model Dataset Precision(*) Recall(*) F1-score(*)
SpanBERT Original Dataset Augmented Dataset 0.34 0.34 0.36 0.32 0.12 0.18
BERTee Original Dataset Augmented Dataset 0.33 0.33 0.31 0.34 0.12 0.23
Table 8
8 Mapping of Event Relation Dataset (Hong) relations to FARO
FARO Hong
Category Super-type Type Type Super-type
Immediately before Before Before
Meets Before Meet
Starts Starts Starts
Temporal Temporally related to Ends Finish Finish
Contains(the subproperty) During During
Overlaps Overlaps Overlap
Simulations to Equality Equality
Causality Enabling Contingently related to Causes Enables Causality Condition Contingency
Opposite to Opposite
Comparison Comparatively related to Alternative to Negation Comparison
Contrasting version of Competition
Table 9
9 Mapping of TimeBank relations to FARO
The reported results show that the augmented dataset -and in general synthetic data -improves the ability of the model to generalize and correctly classify sequences, even for classes with a limited number of training examples in the original dataset. However, for event mention classification the performance is still relatively low and needs further improvements. All code used for the experiments reported in this paper, as well as the resulting dataset, is available at https://github.com/ANR-kFLOW/event-relation-classification.
Types FARO TimeBank
Simultanious to SIMULTANIOUS
Before BEFORE
Imediately before IBEFORE
Inverse of (Imediately before) IAFTER
Contains(the subproperty) During
Temporal Contains INCLUDES
Inverse of (contains) IS INCLUDED
Starts BEGINS
Inverse of (starts) BEGUN BY
Ends ENDS
Inverse of (ends) ENDED BY
https://catalog.ldc.upenn.edu/LDC2006T06
More details are provided in Section 3.1
https://platform.openai.com/docs/guides/completion
Acknowledgments
This work has been partially supported by the French National Research Agency (ANR) within the kFLOW project (Grant n°ANR-21-CE23-0028). |
04121020 | en | [
"math"
] | 2024/03/04 16:41:26 | 2022 | https://hal.science/hal-04121020/file/Positive%20solutions%20for%20slightly%20subcritical%20elliptic%20problems%20via%20Orlicz%20spaces.pdf | Mabel Cuesta
Rosa Pardo
POSITIVE SOLUTIONS FOR SLIGHTLY SUBCRITICAL ELLIPTIC PROBLEMS VIA ORLICZ SPACES
Keywords: Positive solutions, subcritical nonlinearity, changing sign weight. [2020]58E07, 35J20, 35B32, 35J25, 35J61
This paper concerns semilinear elliptic equations involving a sign-changing weight function and a nonlinearity of subcritical nature understood in a generalized sense. Using an Orlicz-Sobolev space setting, we consider superlinear nonlinearities which do not have polynomial growth, and state sufficient conditions guaranteeing the Palais-Smale condition. We study the existence of a bifurcated branch of classical positive solutions, containing a turning point, and providing multiplicity of solutions.
Introduction
In this paper we study the classical positive solutions to the Dirichlet problem for a class of semilinear elliptic equations whose nonlinear term is of subcritical nature in a generalized sense and involves indefinite nonlinearities. More precisely, given Ω ⊂ R^N, N > 2, a bounded, connected open subset with C^2 boundary ∂Ω, we look for positive solutions to:
(1.1)
-∆u = λu + a(x)f (u), in Ω, u = 0, on ∂Ω, where λ ∈ R is a real parameter, a ∈ C 1 ( Ω) changes sign in Ω,
(1.2)  f(s) := g(s) + h(s),  with  h(s) := |s|^{2*-2} s / [ln(e + |s|)]^α,
where 2* = 2N/(N-2) is the critical Sobolev exponent, α > 0 is a fixed exponent, and g ∈ C^1(R) satisfies
(H):
(H)_0  lim_{s→0} f(s) / (|s|^{p-2} s) = L_1, for some L_1 > 0 and some p ∈ (2, 2N/(N-2));
(H)_∞  lim_{s→∞} g(s) / (|s|^{q-2} s) = L_2, for some L_2 ≥ 0 and some q ∈ (2, 2N/(N-2));
(H)_g  |g'(s)| ≤ C(1 + |s|^{q-2}), for s ∈ R.
The second author is supported by grants PID2019-103860GB-I00, MICINN, Spain, and by UCM-BSCH, Spain, GR58/08, Grupo 920894.
We will say that g (or even f ) satisfies hypothesis (H) whenever (H) 0 , (H) ∞ , and (H) g are satisfied. Since we are interested in positive solutions, we (1.3) redefine f to be zero on (-∞, 0],
note that f(0) = 0 and that
(1.4)  lim_{s→0^+} ( f(s)/s - L_1 |s|^{p-2} ) = 0.
When λ = 0, a(x) ≡ 1 and g(s) ≡ 0, this kind of nonlinearity has been studied in [START_REF] Castro | A priori bounds for Positive Solutions of Subcritical Elliptic Equations[END_REF][START_REF] Castro | A priori estimates for positive solutions to subcritical elliptic problems in a class of non-convex regions Discrete and Continuous Dynamical[END_REF][START_REF] Castro | Equivalence between uniform L 2 * (Ω) a-priori bounds and uniform L ∞ (Ω) a-priori bounds for subcritical elliptic equations[END_REF][START_REF] Mavinga | A priori bounds and existence of positive solutions for semilinear elliptic systems[END_REF], and in [START_REF] Damascelli | A priori estimates for some elliptic equations involving the p-Laplacian Nonlinear Analysis[END_REF] for the case of the p-Laplacian operator, with α > p/(N-p). The existence of uniform L^∞ a priori bounds for any positive classical solution is known and, as a consequence, so is the existence of positive solutions. Moreover, there is a positive solution blowing up at a non-degenerate point of the Robin function as α → 0; see [START_REF] Clapp | Alberto A solution to a slightly subcritical elliptic problem with non-power nonlinearity[END_REF] for details.
From [START_REF] Crandall | Bifurcation from simple eigenvalues[END_REF] it is known that (λ 1 , 0) is a bifurcation point of positive solutions (λ, u λ ) to the equation (1.1) .
For f behaving like |u|^{p-2}u at zero with 2 ≤ p ≤ 2*, the influence of the negative part of the weight a is displayed through the sign of ∫_Ω a(x)ϕ_1(x)^p dx, where ϕ_1 is the first positive eigenfunction of -∆ in H^1_0(Ω). Specifically, whenever ∫_Ω a(x)ϕ_1(x)^p dx < 0 the bifurcation of positive solutions from the trivial solution set is 'on the right' of the first eigenvalue, in other words for values of λ > λ_1. And whenever ∫_Ω a(x)ϕ_1(x)^p dx > 0 the bifurcation from the trivial solution set is 'on the left' of the first eigenvalue, in other words for values of λ < λ_1.
Inspired by the work of Alama and Tarantello in [START_REF] Alama | On semilinear elliptic problems with indefinite nonlinearities Calculus Var[END_REF], we will focus our attention on the case where a(x) changes sign and (1.5) is satisfied, and, among other things, we will prove the existence of a turning point at a value of the parameter Λ > λ_1, and in particular the existence of solutions when λ = λ_1. We will use local bifurcation and variational techniques.
All throughout the paper, for v : Ω → R, v = v + -v -where v + (x) := max{v(x), 0} and v -(x) := max{-v(x), 0}.
Let us also define Ω ± := {x ∈ Ω : ±a(x) > 0}, Ω 0 := {x ∈ Ω : a(x) = 0}, and assume that both Ω + , Ω -are non empty sets.
For this nonlinearity the Palais-Smale condition of the energy functional becomes a delicate issue, needing Orlicz spaces and a Orlicz-Sobolev embedding theorem.
In order to prove the (PS) condition, Alama and Tarantello ([START_REF] Alama | On semilinear elliptic problems with indefinite nonlinearities Calculus Var[END_REF]) assume that the zero set Ω^0 has a non-empty interior. This is also a common hypothesis for other authors when dealing with changing-sign superlinear nonlinearities [START_REF] Chang | Mei-Yue Dirichlet problem with indefinite nonlinearities[END_REF][START_REF] Ramos | Christophe Superlinear indefinite elliptic problems and Pohožaev type identities[END_REF][START_REF] Tehrani | Infinitely many solutions for an indefinite semilinear elliptic problem in R N[END_REF]. But this is a technical hypothesis. The (PS)-condition will be proved in Proposition 3.1 without assuming that hypothesis. Nor do we use the Ambrosetti-Rabinowitz condition.
Let us now denote
(1.6)  C_0 = inf{C ≥ 0 : f(s) + Cs ≥ 0 for all s ≥ 0},
and remark that hypothesis (H) implies that C 0 < +∞. Observe also that
(1.7) f (s) + C 0 s ≥ 0, for all s ≥ 0; f (s)s + C 0 s 2 ≥ 0, for all s ∈ R.
Let u be a weak solution to (1.1). By a regularity result, see Lemma 2.1, u ∈ C 2 (Ω)∩C 1,µ (Ω). So by a solution, we mean a classical solution.
Assume that u is a non-negative nontrivial solution. It is easy to see that the solution is strictly positive. Indeed, adding ±C 0 a(x)u to the r.h.s. of the equation, splitting a = a + -a -, taking into account (1.4), and letting in each side the nonnegative terms, we can write
-∆ + a -(x) f (u) u + C 0 + C 0 a(x) + u (1.8) = λu + a(x) + f (u) + C 0 u + C 0 a(x) -u, in Ω.
Now, the strong Maximum Principle implies that u > 0 in Ω, and ∂u ∂ν < 0 on ∂Ω.
Our main result is the following theorem.
Theorem 1.1. Assume that g ∈ C 1 (R) satisfies hypothesis (H). Let C 0 > 0 be defined by (1.6). If a changes sign in Ω, and (1.5) holds, then there exists a Λ ∈ R,
λ_1 < Λ < min{ λ_1(int(Ω^0)), λ_1(int(Ω^+ ∪ Ω^0)) + C_0 sup a^+ }
and such that (1.1) has a classical positive solution if and only if λ ≤ Λ.
Moreover, there exists a continuum (a closed and connected set) C of classical positive solutions to (1.1) emanating from the trivial solution set at the bifurcation point (λ, u) = (λ 1 , 0) which is unbounded. The paper is organized in the following way. Section 2 contains a regularity result and a non existence result. (PS)-condition and an existence of solutions result for λ < λ 1 based in the Mountain Pass Theorem will be proved in Section 3. A bifurcation result for λ > λ 1 is developped in Section 4. The main result is proved in Section 5. Appendix A contains some useful estimates. Orlicz spaces, a Orlicz-Sobolev embeddings theorems, and variational techniques, also including a (PS) condition in Orlicz-Sobolev spaces setting and the Mountain Pass Theorem, will be treated in Appendix B.
A regularity result and a non existence result
Next, we recall a regularity Lemma stating that any weak solution is in fact a classical solution.
Lemma 2.1. If u ∈ H 1 0 (Ω) weakly solves (1.1) with a continuous function f with polynomial critical growth
|f(x, s)| ≤ C(1 + |s|^{2*-1}), then u ∈ C^2(Ω) ∩ C^{1,µ}(Ω) and ‖u‖_{C^{1,µ}(Ω)} ≤ C( 1 + ‖u‖^{2*-1}_{L^{(2*-1)r}(Ω)} ), for any r > N and µ = 1 - N/r. Moreover, if ∂Ω ∈ C^{2,µ}, then u ∈ C^{2,µ}(Ω).
Proof. Due to an estimate of Brézis-Kato [START_REF] Brézis | Remarks on the Schrödinger operator with singular complex potentials[END_REF], based on Moser's iteration technique [START_REF] Moser | A new proof of De Giorgi's theorem concerning the regularity problem for elliptic differential equations[END_REF], u ∈ L r (Ω) for any r > 1; and by elliptic regularity u ∈ W 2,r (Ω), for any r > 1 (see [START_REF] Struwe | Variational methods, Applications to nonlinear partial differential equations and Hamiltonian systems[END_REF]Lemma B.3] and comments below).
Moreover, by Sobolev embeddings for r > N and interior elliptic regularity
u ∈ C 1,α (Ω) ∩ C 2 (Ω). Furthermore, if ∂Ω ∈ C 2,α , then u ∈ C 2,α (Ω).
Proposition 2.2. Let f satisfy hypothesis (H) and let C 0 be defined in (1.6). Assume that a changes sign in Ω.
(1) Problem (1.1) does not admit a positive solution u ∈ H^1_0(Ω) for any λ ≥ λ_1(int(Ω^+ ∪ Ω^0)) + C_0 sup a^+.
(2) If int(Ω^0) ≠ ∅, then λ_1(int(Ω^0)) < +∞ and (1.1)
does not admit a positive solution for any
λ ≥ λ_1(int(Ω^0)). Proof. 1. Let λ ≥ λ_1(int(Ω^+ ∪ Ω^0)) + C_0 sup a^+
, and assume by contradiction that there exists a non-negative non-trivial solution u ∈ H^1_0(Ω) to (1.1) for the parameter λ. By the strong Maximum Principle, u > 0 in Ω, see (1.8).
Let φ be the positive eigenfunction of -∆ in H^1_0(int(Ω^+ ∪ Ω^0)) with L^2-norm equal to 1. For simplicity we will also denote by φ its extension by 0 to all of Ω. By Hopf's maximum principle we have
∂φ/∂ν < 0 on ∂(int(Ω^+ ∪ Ω^0))
, where ν is the outward normal.
Again if we multiply the equation (1.1) by φ and integrate along int Ω + ∪ Ω 0 we find, after integrating by parts,
0 > ∫_{∂(int(Ω^+ ∪ Ω^0))} u (∂φ/∂ν) dσ + ∫_{int(Ω^+ ∪ Ω^0)} [ λ_1(int(Ω^+ ∪ Ω^0)) - λ + C_0 a^+(x) ] u φ dx = ∫_{Ω^+} a^+(x) [ f(u) + C_0 u ] φ dx > 0, a contradiction. 2. Let λ ≥ λ_1(int(Ω^0)
) and assume by contradiction that there exists a positive solution u ∈ H 1 0 (Ω) of problem (1.1) for the parameter λ. Let φ be a positive eigenfunction associated to λ 1 int (Ω 0 ) < +∞. For simplicity we will also denote by φ the extension by 0 in all Ω. If we multiply equation (1.1) by φ and integrate along Ω 0 we find, after integrating by parts,
∫_{int(Ω^0)} ∇u • ∇φ dx = λ ∫_{int(Ω^0)} u φ dx.
On the other hand
∫_{int(Ω^0)} ∇u • ∇φ dx = λ_1(int(Ω^0)) ∫_{int(Ω^0)} φ u dx + ∫_{∂(int(Ω^0))} u (∂φ/∂ν) dσ. Hence 0 > ∫_{∂(int(Ω^0))} u (∂φ/∂ν) dσ = ( λ - λ_1(int(Ω^0)) ) ∫_{int(Ω^0)} u φ dx ≥ 0, a contradiction.
3. An existence result for λ < λ 1
In this section we prove the existence of a nontrivial solution to equation (1.1) for λ < λ 1 , through the Mountain Pass Theorem.
3.1. On Palais-Smale sequences. In this subsection, we define the framework for the functional J_λ associated to problem (1.1). Hereafter we denote by ‖•‖ the usual norm of H^1_0(Ω):
‖u‖ = ( ∫_Ω |∇u|^2 dx )^{1/2}.
Given f(s) = h(s) + g(s) defined in (1.2), and setting F(s) := ∫_0^s f(t) dt, it follows from (1.7) that
F(s) + (1/2) C_0 s^2 ≥ 0, for all s ≥ 0.
Consider the functional J λ : H 1 0 (Ω) → R given by
J_λ[v] := (1/2) ∫_Ω |∇v|^2 dx - (λ/2) ∫_Ω (v^+)^2 dx - ∫_Ω a(x) F(v^+) dx.
Observe that for all v ∈ H 1 0 (Ω),
J λ v + ≤ J λ [v].
The functional J λ is well defined and belongs to the class C 1 with
J'_λ[v]ψ = ∫_Ω ∇v • ∇ψ dx - λ ∫_Ω v^+ ψ dx - ∫_Ω a(x) f(v^+) ψ dx,
for all ψ ∈ H 1 0 (Ω). Consequently, non-negative critical points of the functional J λ correspond to non-negative weak solutions to (1.1).
The next Proposition proves that Palais-Smale sequences are bounded whenever λ < λ 1 (int Ω 0 ), where λ 1 (int Ω 0 ) may be infinite. Proposition 3.1. Assume that g ∈ C 1 (R) satisfies hypothesis (H) and assume also that λ < λ 1 (int Ω 0 ) ≤ +∞.
Then any (PS) sequence, that is, a sequence satisfying
(J 1 ) J λ [u n ] ≤ C, (J 2 ) J λ [u n ] ψ ≤ ε n ψ , where ε n → 0 as n → +∞ is a bounded sequence.
Proof. 1. Let {u n } n∈N be a (PS) sequence in H 1 0 (Ω) and assume by contradiction that u n → +∞. Let us first prove the following claim:
Claim. Let v ∈ H 1 0 (Ω) be the weak limit of v n = un un and assume that v n → v, strongly in L 2 * -1 (Ω) and a.e. Then v = 0 a.e. in Ω.
Assume that v ≡ 0 and denote
γ n = u n . Let ω n := {x ∈ Ω : v + n (x) > 1}, then for any ψ ∈ C 1 0 (Ω), ln(e + γ n ) α γ 2 * -1 n (u + n (x)) 2 * -1 [ln(e + γ n v + n (x))] α |ψ| ≤ |v + n (x)| 2 * -1 ψ ∞ , ∀x ∈ ω n .
Let x ∈ Ω \ ω n , using the estimates (A.1),
ln(e + γ n ) α γ 2 * -1 n (u + n ) 2 * -1 [ln(e + γ n v + n )] α |ψ| ≤ |v + n | 2 * -2 ψ ∞ ≤ ψ ∞
Besides, by the reverse of the Lebesgue dominated convergence theorem, see for instance [2, Theorem 4.9, p. 94] , there exists h i ∈ L 1 (Ω), 1 ≤ i ≤ 3 such that, up to a subsequence,
|v + n | 2 * -1 ≤ h 1 , |v + n | p-1 ≤ h 2 |v + n | 2 * -2 ≤ h 3 , , a.e.
x ∈ Ω, for all n ∈ N, and therefore
ln(e + γ n ) α γ 2 * -1 n f (u + n )ψ ≤ C (h 1 + h 2 + h 3 + 1)) ψ ∞ ∈ L 1 (Ω).
By Lebesgue dominated convergent theorem we have
ln(e + γ n ) α γ 2 * -1 n a(•)f (u + n )ψ → a(•)(v + ) 2 * -1 ψ strongly in L 1 (Ω).
We have used here that if v + (x) = 0, then
lim n→+∞ ln(e + γ n ) ln(e + γ n v + n (x)) = 1,
and if v + (x) = 0, then lim n→+∞ ln(e + γ n ) ln(e + γ n v + n (x)) α |v + n (x)| 2 * -1 ≤ lim n→+∞ |v + n (x)| 2 * -2 = 0.
On the other hand
ln(e + γ n ) α γ 2 * -1 n Ω ∇u n • ∇ψ dx → 0.
Hence, using (J 2 ) for an arbitrary test function ψ, multiplying by ln(e+γn) α γ 2 * -1 n and passing to the limit we find
Ω a(x)(v + ) 2 * -1 ψ dx = 0 ∀ψ ∈ C 1 0 (Ω).
In particular v + = 0 a.e. in Ω \ Ω 0 .
Assume that int Ω 0 = ∅, and that λ < λ 1 (int Ω 0 ). Thus, for any
ψ ∈ C 1 0 (int Ω 0 ) we have from (J 2 ) int Ω 0 ∇u n • ∇ψ dx -λ int Ω 0 u + n ψ dx = o(1).
Dividing by u n and passing to the limit we have
int Ω 0 ∇v • ∇ψ dx = λ int Ω 0 v + ψ dx. From the Maximum Principle, v ≥ 0 in int Ω 0 . Since λ < λ 1 (int Ω 0 ) then it must be v + ≡ 0 in int Ω 0 . Hence v + ≡ 0 in Ω.
On the other hand, taking u - n as a test function in the condition (J 2 ),
- Ω |∇u - n | 2 dx - Ω a(x)f (u + n )u - n dx = Ω |∇u - n | 2 dx ≤ n u -
n so u - n → 0 and then v -≡ 0, and we conclude the proof of the claim.
2. In order to achieve a contradiction, we use a Hölder inequality, and properties on convergence into an Orlicz space, cf. Appendix B.
To this end, the analysis of Lemma A.2 give us the existence of
α * > 0 such that the function s → s 2 * -1 [ln(e+s)] α is increasing along [0, +∞[ if α ≤ α * .
In this case we will denote
(3.2) m(s) = s 2 * -1 [ln(e + s)] α If α > α * the function s → s 2 * -1 [ln(e+s)] α possesses a local maximum s 1 in [0, +∞[. Let us denote by s 1 the unique solution s > s 1 such that s 2 * -1 1 [ln(e + s 1 )] α = s 2 * -1 [ln(e + s)] α
and define the non-decreasing function Since
(3.3) m(s) := s 2 * -1 [ln(e+s)] α if s ∈ [s 1 , s 1 ], s 2 * -1 1 [ln(e+s 1 )] α if s ∈ [s 1 , s 1 ]. It follows that (3.4) s → M (s) = s 0 m(t) dt is a N -function in [0, +∞[.
v n 0 in H 1 0 (Ω) and strongly in L 2 (Ω), it follows from (J 2 ) applied to ψ = u n that (3.5) lim n→∞ Ω a(x) f (u + n )u n u n 2 dx = lim n→∞ Ω a(x) f (u + n ) u n v + n dx = 1.
Since the Hölder inequality into Orlicz spaces, see Proposition B.11.(ii),
(3.6) Ω a(x) f (u + n ) u n v + n dx ≤ a ∞ u n f (u + n ) M * v + n M
By Theorem B.3 and Theorem B.12 we have
(3.7) v n -v M → 0.
Moreover, since there exists C > 0 such that m(s) ≤ Cs 2 * -1 and M (s) ≤ Cs 2 * for all s ≥ 0, and the sequence {u n } n∈N ⊂ H 1 0 (Ω), then, for each n ∈ N, there exists a C n such that
|u n m(u n )| ≤ C n , |M (|u n |)| ≤ C n .
By using definition B.8 of M * and identities of Proposition B.9 we have
M * |m(u n )| = |u n m(u n )| -M (|u n |) then, for each n ∈ N, Ω M * |m(u n )| dx ≤ 2C n . Observe that f (s) ≤ C(1 + m(s)), that M * f (s) ≤ M * C(1 + m(s)) , see Proposition B.11.(iii)
, and by convexity of M * , that
f (u + n ) M * ≤ Ω M * C(1 + m(u + n ) dx + 1 ≤ C n ,
see Proposition B.11.(i), and the r.h.s. is bounded for each n. Consequently, a(x [START_REF] Krasnoselskiȋ | Rutickiȋ Convex functions and Orlicz Spaces[END_REF], Theorem 14.2).
) f (u + n ) un ∈ L M * (Ω), which is the dual of L M (Ω) (see
On the other hand, from
J 2 , for all ψ ∈ C ∞ c (Ω), (3.8) Ω ∇v n ∇ψ dx -λ n Ω v n ψ dx - Ω a(x) f (u + n ) u n ψ dx ≤ ε n u n ψ .
Taking the limit, and since [START_REF] Donaldson | Trudinger Orlicz-Sobolev Ppaces and Imbedding Theorems[END_REF]),
C ∞ c (Ω) is dense in L M (Ω) (see
(3.9) lim n→∞ Ω a(x) f (u + n ) u n ψ dx = 0, for all ψ ∈ L M (Ω). Moreover, since (3.7), v n → v = 0 in L M (Ω).
Hence [2, Proposition 3.13 (iv)], and (3.9) imply
lim n→∞ Ω a(x) f (u + n ) u n v n dx = 0,
which contradicts (3.5), ending the proof.
Theorem 3.2. Assume the hypothesis of Proposition 3.1 and let {u n } n∈N be a (PS) sequence in H 1 0 (Ω). Then, there exists a subsequence, denoted by {u n } n∈N , such that u n → u in H 1 0 (Ω). Proof. From Proposition 3.1 we know that the sequence is bounded. Consequently, there exists a subsequence, denoted by {u n } n∈N , and some u ∈ H 1 0 (Ω) such that u n u weakly in
H 1 0 (Ω), (3.10) Ω a(x)g(u n )|u n -u| dx → 0, (3.11) u n → u a.e. (3.12)
By testing (J 2 ) against ψ = u n -u and using (3.10), and (3.11) we get
u n -u 2 = Ω ∇u n • ∇(u n -u) dx + o(1) ≤ a ∞ Ω |u n | 2 * -1 [ln(e + |u n |)] α |u n -u| dx + o(1). Claim. Ω |u n | 2 * -1 [ln(e + |u n |)] α |u n -u|dx = o(1),
In order to prove this claim, we use as in the above proposition, a Hölder inequality and a compact embedding into an Orlicz space, c.f. Appendix B.
By Theorem B.3 and Theorem B.12 we have
(3.13) u n -u M → 0,
where m, and M are defined by (3.2)-(3.4), as in the above proposition.
On the other hand, since there exists C > 0 such that m(s) ≤ Cs 2 * -1 and M (s) ≤ Cs 2 * for all s ≥ 0, and the sequence
{u n } n∈N is bounded in H 1 0 (Ω), then |u n m(u n )| ≤ C, |M (|u n |)| ≤ C for all n ∈ N
By using Definition B.8 of M^* and the identities of Proposition B.9 we have M^*(|m(u_n)|) = |u_n m(u_n)| − M(|u_n|), hence ∫_Ω M^*(|m(u_n)|) dx ≤ C for all n ∈ N. Now, using Hölder's inequality (B.6) and that s^{2^*-1}/[ln(e+s)]^α ≤ m(s) for all s ≥ 0, we get
∫_Ω |u_n|^{2^*-1}/[ln(e + |u_n|)]^α |u_n − u| dx ≤ ||u_n − u||_M ||m(u_n)||_{M^*} ≤ (C+1) ||u_n − u||_M
and it follows from (3.13) that ||u_n − u|| → 0.
Proof of Theorem 3.3. We verify the hypotheses of the Mountain Pass Theorem, see [13, Theorem 2, §8.5]. Observe that the derivative of the functional, J'_λ : H^1_0(Ω) → H^1_0(Ω),
is Lipschitz continuous on bounded sets of H 1 0 (Ω); also the (PS) condition is satisfied, see Proposition 3.1. Clearly J λ [0] = 0.
1. Let now u ∈ H 1 0 (Ω) with u = r, for r > 0 to be selected below. Then,
(3.14) J λ [u] = r 2 2 - λ 2 Ω (u + ) 2 dx - Ω a(x)F (u + ) dx.
From hypothesis (H) we have
∫_Ω a(x) G(u^+) dx ≤ C ∫_Ω (|u|^p + |u|^q) dx ≤ C (r^p + r^q),
where G(s) := ∫_0^s g(t) dt. Now, definition (1.2) implies that ∫_Ω a(x) F(u^+) dx ≤ C (r^p + r^q + r^{2^*}).
In view of (3.14), and thanks to the Poincaré inequality we get
J λ [u] ≥ 1 2 1 - |λ| λ 1 r 2 -C r p + r q + r 2 * ≥ C 1 r 2 ,
taking |λ| < λ 1 , r > 0 small enough, and using that p, q, 2 * > 2.
2. Now, fix some element 0 ≤ u_0 ∈ H^1_0(Ω), u_0 > 0 in Ω^+, u_0 ≡ 0 in Ω^-. Let v = t u_0 for a certain t = t_0 > 0 to be selected a posteriori. Since
(3.15) f(t u_0) = |t|^{2^*-2} t f(u_0) [ln(e + |u_0|)/ln(e + |t u_0|)]^α + g(t u_0),
then f(t u_0)/t → +∞ as t → +∞ in Ω^+.
From definition, and integrating by parts,
F (s) = s 0 t 2 * -1 ln(e + t) α + g(t) dt = 1 2 * sh(s) + G(s) + α 2 * s 0 1 ln(e + t) α+1 t 2 * e + t dt.
It can be easily seen that lim s→+∞ G(s) sf (s) = 0. Therefore, using l'Hôpital's rule we can write
(3.16) lim_{s→+∞} F(s)/(s f(s)) = 1/2^* ∈ (0, 1/2), hence
(3.17) lim_{t→+∞} F(t u_0)/(t u_0 f(t u_0)) = 1/2^* ∈ (0, 1/2) in Ω^+.
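For completeness, here is an informal verification of (3.16); it only uses that the lower-order term g is negligible at infinity, so that s f'(s)/f(s) behaves like s h'(s)/h(s) for the leading term h(s) = s^{2^*-1}/[ln(e+s)]^α.

```latex
% Informal sketch of the limit (3.16); h is the leading term of f.
\[
\frac{s\,h'(s)}{h(s)} = (2^*-1) - \frac{\alpha\,s}{(e+s)\ln(e+s)}
\;\longrightarrow\; 2^*-1 \qquad (s\to+\infty),
\]
\[
\lim_{s\to+\infty}\frac{F(s)}{s f(s)}
\;\overset{\text{l'H\^opital}}{=}\;
\lim_{s\to+\infty}\frac{f(s)}{f(s)+s f'(s)}
=\lim_{s\to+\infty}\frac{1}{1+\dfrac{s f'(s)}{f(s)}}
=\frac{1}{2^*}.
\]
```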
Let C 0 ≥ 0 be such that F (s) + 1 2 C 0 s 2 ≥ 0 for all s ≥ 0 (see (1.7)), and let (3.18)
Ω + δ := {x ∈ Ω + : a(x) = a + (x) > δ}.
By definition, u 0 ≡ 0 in Ω -, so, introducing ± 1 2 C 0 (tu 0 ) 2 , splitting the integral, and using (3.17)-(3.18) we obtain
- Ω a(x)F (tu 0 ) dx = - Ω + a + (x)F (tu 0 ) dx ≤ C 0 t 2 2 Ω + a + (x)u 2 0 dx - Ω + δ a + (x) 1 2 C 0 (tu 0 ) 2 + F (tu 0 ) dx ≤ C + C 0 t 2 2 Ω + a + (x)u 2 0 dx - δt 2 2 Ω + δ C 0 u 2 0 + u 0 f (tu 0 ) 2 * t dx.
Hence, there exists a positive constant C > 0 such that
J λ [tu 0 ] = t 2 2 u 0 2 -t 2 λ 2 u 0 2 L 2 (Ω) - Ω + a + (x)F (tu 0 ) ≤ C(1 + t 2 ) - δ t 2 2 Ω + δ C 0 (u 0 ) 2 + u 0 f (tu 0 ) 2 * t dx < 0
for t = t 0 > 0 big enough.
Step 3. We have at last checked that all the hypotheses of the Mountain Pass Theorem are satisfied. Let
Γ := {g ∈ C [0, 1]; H 1 0 (Ω) : g(0) = 0, g(1) = t 0 u 0 }, then, there exists c ≥ C 1 r 2 > 0 such that c := inf g∈Γ max 0≤t≤1 J λ [g(t)] is a critical value of J λ , that is, the set K c := {v ∈ H 1 0 (Ω) : J λ [v] = c, J λ [v] = 0} = ∅. Thus there exists u ∈ H 1 0 (Ω), u ≥ 0, u = 0 such that for each ψ ∈ H 1 0 (Ω), we have (3.19) Ω ∇u • ∇ψ dx = Ω λu + + a(x)f (u + ) ψ dx.
and thereby u is a nontrivial weak solution to (3.19). By Lemma 2.1, u is a classical solution, and by (1.8), u > 0 in Ω.
A bifurcation result for λ > λ 1
The next Proposition uses Crandall–Rabinowitz's local bifurcation theory, see [Crandall–Rabinowitz], and Rabinowitz's global bifurcation theory, see [Rabinowitz].
λ 1 < Λ < min λ 1 int (Ω 0 ) , λ 1 int Ω + ∪ Ω 0 + C 0 sup a + where C 0 > 0 is such that f (s) + C 0 s ≥ 0 for all s ≥ 0, (see definition (1.6)).
Moreover, there exists an unbounded continuum (a closed and connected set) C of classical positive solutions to (1.1) emanating from the trivial solution set at the bifurcation point (λ, u) = (λ 1 , 0).
Proof. Proposition 2.2 proves the upper bounds for Λ. Next we concentrate our attention in proving that Λ > λ 1 . Choosing λ as the bifurcation parameter, we check that the conditions of Crandall -Rabinowitz's Theorem [START_REF] Crandall | Bifurcation from simple eigenvalues[END_REF] are satisfied. For r > N , we define the set W 2,r + := {u ∈ W 2,r (Ω) : u > 0 in Ω}, and consider W 2,r + (Ω) ∩ W 1,r 0 (Ω) endowed with the topology of W 2,r (Ω). If r > N , we have that
W 2,r + (Ω) ∩ W 1,r 0 (Ω) → C 1,µ 0 (Ω) for µ = 1 -N r ∈ (0, 1)
. Moreover, from Hopf's lemma, we know that if ũ is a positive solution to (1.1) then ũ lies in the interior of W 2,r + (Ω) ∩ W 1,r 0 (Ω). We consider the map F :
R×W 2,r + (Ω)∩W 1,r 0 (Ω) → L r (Ω) for r > N , F : (λ, u) → -∆u -λu -a(x)f (u)
The map F is a continuously differentiable map. Since hypothesis (i), g(0) = 0, and so a(x)F (0) = 0, F (λ, 0) = 0 for all λ ∈ R, and since F u (x, 0) = 0,
D u F (λ 1 , 0)w := -∆w -λ 1 w, D λ,u F (λ 1 , 0)w := -w. Observe that N D u F (λ 1 , 0) = span[ϕ 1 ], codim R D u F (λ 1 , 0) = 1, D λ,u F (λ 1 , 0)ϕ 1 = -ϕ 1 ∈ R D u F (λ 1 , 0) ,
where N (•) is the kernel, and R(•) denotes the range of a linear operator. Hence, the hypotheses of Crandall-Rabinowitz theorem are satisfied and (λ 1 , 0) is a bifurcation point. Thus, decomposing
C 1,µ 0 (Ω) = span[ϕ 1 ] ⊕ Z, where Z = span[ϕ 1 ] ⊥ ,
there exists a neighbourhood U of (λ 1 , 0) in R× C 1,µ 0 (Ω), and continuous functions λ(s), w(s), s ∈ (-ε, ε), λ : (-ε, ε) → R, w : (-ε, ε) → Z such that λ(0) = λ 1 , w(0) = 0, with Ω wϕ 1 dx = 0, and the only nontrivial solutions to (1.1) in U , are (4.1) λ(s), sϕ 1 + s w(s) : s ∈ (-ε, ε) .
Set u = u(s) = sϕ 1 + s w(s). Note that by continuity w(s) → 0 as s → 0, which guarantees that u(s) > 0 in Ω for all s ∈ (0, ε) small enough.
Next, we show that λ(s) > λ 1 for all s small enough. Since (3.15), and hypothesis (H) 0 on f , note that a(x)f (su) s p-1 u p-1 → L 1 a(x) as s → 0. In fact, as w(s) → 0 uniformly as s → 0, hypothesis (H) 0 yields a(x)f sϕ 1 + s w(s)
s p-1 ϕ 1 + w(s) p-1 -→ L 1 a(x) uniformly in Ω as s → 0.
Hence, multiplying and dividing by ϕ 1 + w(s) p-1 , we deduce
1 s p-1 Ω a(x)f u(s) ϕ 1 → s→0 L 1 Ω a(x)ϕ p 1 .
Now we prove that λ(s) > λ 1 arguing by contradiction. Assume that there is a sequence (λ n , u n ) = λ(s n ), u(s n ) of bifurcated solutions to (1.1) in U , with λ(s n ) ≤ λ 1 . Multiplying (1.1) λn by ϕ 1 and integrating by parts
0 ≤ λ 1 -λ(s n ) s p-1 n Ω u(s n )ϕ 1 = 1 s p-1 n Ω a(x)f u(s n ) ϕ 1 → L 1 Ω a(x)ϕ p 1 < 0
which yields a contradiction, and consequently, Λ > λ_1. Finally, Rabinowitz's global bifurcation theorem [Rabinowitz] states that, in fact, the set C of positive solutions to (1.1) emanating from (λ_1, 0) is a continuum (a closed and connected set) which is either unbounded, or contains another bifurcation point, or contains a pair of points (λ, u), (λ, -u) with u ≠ 0. By (1.8), any non-negative non-trivial solution is strictly positive; moreover, (λ_1, 0) is the only bifurcation point to positive solutions, so C cannot reach another bifurcation point. By (1.3), C does not contain a pair of points (λ, u), (λ, -u) with u ≠ 0, which shows that C is unbounded, ending the proof.
Proof of Theorem 1.1
First we prove an auxiliary result.
Proposition 5.1. For each λ ∈ (λ 1 , Λ), the following holds: (i) Problem (1.1) λ admits a positive solution
u λ = inf u(x) : u > 0 solving (1.1) λ ,
in other words u λ is minimal. (ii) Moreover, the map λ → u λ is strictly monotone increasing, that is, if λ < µ < Λ, then u λ (x) < u µ (x) for all x ∈ Ω, and ∂u λ ∂ν (x) > ∂uµ ∂ν (x) for all x ∈ ∂Ω. (iii) Furthermore, u λ is a local minimum of the functional J λ .
Proof. (i.a) Step 1. Existence of positive solutions for any λ ∈ (λ 1 , Λ).
Let λ ∈ (λ 1 , Λ) be fixed. By definition of Λ, there exists a λ 0 ∈ (λ, Λ) such that the problem (1.1) λ 0 admits a positive solution u 0 . It is easy to verify that u 0 > 0 is a supersolution to (1.1) λ . Indeed, for any
ψ ∈ H 1 0 (Ω) with ψ ≥ 0 in Ω Ω ∇u 0 ∇ψ dx-λ Ω u 0 ψ dx- Ω a(x)f (u 0 )ψ dx = (λ 0 -λ) Ω u 0 ψ dx ≥ 0.
Moreover, for every δ > 0 satisfying (5.1)
0 < δ < λ -λ 1 2L 1 a -∞ 1 p-2 1 ϕ 1 ∞
the function u = δϕ 1 is a subsolution for (1.1) λ whenever λ > λ 1 . Let δ > 0 satisfying (5.1) and such that g(s) ≥ 0 for any s ∈ [0, δ ϕ 1 L ∞ (Ω) ]. For any ψ ∈ H 1 0 (Ω), ψ > 0 with in Ω we deduce
δ Ω ∇ϕ 1 ∇ψ dx -λδ Ω ϕ 1 ψ dx - Ω a(x)f (δϕ 1 )ψ dx = -(λ -λ 1 )δ Ω ϕ 1 ψ dx - Ω a(x)f (δϕ 1 )ψ dx = -(λ -λ 1 )δ Ω ϕ 1 ψ dx - Ω a(x) (δϕ 1 ) 2 * -1 [ln(e + δϕ 1 )] α + g(δϕ 1 ) ψ dx ≤ -(λ -λ 1 )δ Ω ϕ 1 ψ dx + a - ∞ Ω h(δϕ 1 ) + g(δϕ 1 ) ψ dx < 0.
This allows us to take u = δϕ_1 as a subsolution for (1.1)_λ with u < u_0. The sub- and supersolution method now guarantees a positive solution u to (1.1)_λ, with u ≤ u ≤ u_0.
(i.b) Step 2. Existence of a minimal positive solution u λ for any λ ∈ (λ 1 , Λ).
To show that there is in fact a minimal solution, for each x ∈ Ω we define u λ (x) := inf u(x) : u > 0 solving (1.1) λ .
Firstly, we claim that u λ ≥ 0, u λ ≡ 0. Assume by contradiction that u λ ≡ 0. This would yield a sequence u n of positive solutions to (1.1) λ such that ||u n || C(Ω) → 0 as n → ∞, or in other words, (λ, 0) is a bifurcation point from the trivial solution set to positive solutions. Set
v n := un ||un|| C(Ω)
. Observe that v n is a weak solution to the problem
(5.2) -∆v n = λv n + a(x)f (u n )/||u n || C(Ω) in Ω ; v n = 0 on ∂Ω .
It follows from (H) 0 that a(x)f (un)
||un|| C(Ω)
→ 0 in C(Ω) as n → ∞. Therefore, the right-hand side of (5.2) is bounded in C(Ω). Hence, by the elliptic regularity, v n ∈ W 2,r (Ω) for any r > 1, in particular for r > N . Then, the Sobolev embedding theorem implies that
||v n || C 1,α (Ω) is bounded by a constant C that is independent of n. Then, the compact embedding of C 1,µ (Ω) into C 1,β (Ω) for 0 < β < µ yields, up to a subsequence, v n → Φ ≥ 0 in C 1,β (Ω). Since ||v n || C(Ω) = 1, we have that ||Φ|| C(Ω) = 1. Hence, Φ ≥ 0, Φ ≡ 0.
Using the weak formulation of equation (5.2), passing to the limit, and taking into account that λ is fixed and v n → Φ, we obtain that Φ ≥ 0, Φ ≡ 0, is a weak solution to the equation
-∆Φ = λΦ in Ω , Φ = 0 on ∂Ω.
Then, by the maximum principle it follows that Φ = ϕ 1 > 0, the first eigenfunction, and λ = λ 1 is its corresponding eigenvalue, which contradicts that λ > λ 1 .
Secondly, we show that u λ solves (1.1) λ . We argue on the contrary. Observe that the minimum of any two positive solutions to (1.1) λ furnishes a supersolution to (1.1) λ . Assume that there are a finite number of solutions to (1.1) λ , then u λ (x) := min u(x) : u > 0 solves (1.1) λ and u λ is a supersolution. Choosing ε 0 small enough so that ε 0 ϕ 1 < u λ , the sub-supersolution method provides a solution ε 0 ϕ 1 ≤ v ≤ u λ . Since v is a solution and u λ is not, then v u λ , contradicting the definition of u λ , and achieving this part of the proof.
Assume now that there is a sequence u n of positive solutions to (1.1) λ such that, for each x ∈ Ω, inf u n (x) = u λ (x) ≥ 0, u λ ≡ 0. Let u 1 := min{u 1 , u 2 }. Choosing ε 1 small enough so that ε 1 ϕ 1 < u 1 , the subsupersolution method provides a solution ε 1 ϕ 1 ≤ v 1 ≤ u 1 . We reason by induction.
Let u n := min{v n-1 , u n+1 }. Choosing ε n small enough so that ε n ϕ 1 < u n , the sub-supersolution method provides a solution ε n ϕ 1 ≤ v n ≤ u n ≤ v n-1 . With this induction procedure, we build a monotone sequence of solutions v n , such that
(5.3) 0 < v n ≤ u n ≤ v n-1 ≤ u n-1 ≤ • • • ≤ v 1 .
By monotonicity and Lemma 2.1, ||v_n||_{C(Ω)} ≤ ||v_1||_{C(Ω)}; by elliptic regularity, ||v_n||_{C^{1,µ}(Ω)} ≤ C for any µ < 1, and by compact embedding v_n → v in C^{1,β}(Ω) for any β < µ.
Using the weak formulation of equation (1.1) λ , passing to the limit, and taking into account that λ is fixed, we obtain that v is a weak solution to the equation (1.1) λ . Hence
v(x) ≥ u λ > 0. Moreover, since (5.3), v n (x) ↓ v(x) pointwise for x ∈ Ω, so inf v n (x) = v(x)
. Also, and due to (5.3), u n (x) ↓ v(x) pointwise for x ∈ Ω, and inf u n (x) = v(x).
On the other hand, by construction u n ≤ u n+1 , so, for each
x ∈ Ω, v(x) = inf u n (x) ≤ inf u n (x) = u λ (x)
. Therefore, and by definition of u λ , necessarily v = u λ , proving that u λ solves (1.1) λ , and achieving the proof of step 2.
(ii) The monotonicity of the minimal solutions is concluded from a subsupersolution method. Reasoning as in step 1, u µ is a strict supersolution to (1.1) λ , so w := u µ (x) -u λ (x) ≥ 0, w ≡ 0. Moreover, w = 0 on ∂Ω, and we can always choose c 0 := C 0 a ∞ > 0 where C 0 is defined by (1.6), so that a -(x)f (s) + c 0 ≥ 0 and a + (x)f (s) + c 0 ≥ 0 for all s ≥ 0, then
-∆ + a -(x)f θu µ + (1 -θ)u λ + c 0 w = (µ -λ)u µ + λw + a + (x)f θu µ + (1 -θ)u λ + c 0 w > 0 in Ω,
Finally, the Maximum Principle implies that w > 0 in Ω and ∂w/∂ν < 0 on ∂Ω, ending the proof of (ii).
(iii) By [4, Theorem 2], if there exists an ordered pair of L^∞-bounded sub- and supersolutions u ≤ ū to (1.1)_λ, and neither u nor ū is a solution to (1.1)_λ, then there exists a solution u < u < ū to (1.1)_λ such that u is a local minimum of J_λ in H^1_0(Ω). Reasoning as in (i), ū := u_µ with µ > λ is a strict supersolution to (1.1)_λ, and u := δϕ_1 is a strict subsolution for δ > 0 small enough, such that u(x) < ū(x) for each x ∈ Ω. This achieves the proof.
Proof of Theorem 1.1. Theorem 3.3 provides the existence of positive solutions for λ < λ 1 , and Proposition 5.1 provide the existence of minimal positive solutions for λ ∈ (λ 1 , Λ).
(a) Step 1. Existence of a second positive solution for λ ∈ (λ 1 , Λ).
Fix an arbitrary λ ∈ (λ 1 , Λ), and let u λ be the minimal solution to (1.1) λ given by Proposition 5.1, minimizing J λ . A second solution follows seeking a solution through variational arguments [14, Theorem 5.10] and the Mountain Pass procedure shown below.
First, reasoning as in Proposition 5.1(iii), we get a local minimum ũλ > 0 of J λ . If ũλ = u λ , then ũλ is the second positive solution, ending the proof. Assume that ũλ = u λ . Now we reason as in [14, Theorem 5.10] on the nature of local minima. Thus, either
(i) there exists ε 0 > 0, such that inf J λ (u) : u -ũλ = ε 0 > J λ (ũ λ ), in other words, ũλ is a strict local minimum, or (ii) for each ε > 0, there exists u ε ∈ H 1 0 (Ω) such that J λ has a local minimum at a point u ε with u ε -ũλ = ε and J λ (u ε ) = J λ (ũ λ ).
Let us assume that (i) holds, since otherwise case (ii) implies the existence of a second solution.
Consider now the functional
I λ : H 1 0 (Ω) → R given by I λ [v] = J λ [u λ + v] -J λ [u λ ], more specifically I λ [v] := 1 2 Ω |∇v| 2 dx - λ 2 Ω (v + ) 2 dx - Ω Gλ (x, v + ) dx.
where
Gλ (x, s) := a(x) F (u λ (x) + s) -F (u λ (x)) -f (u λ (x))s = a(x) 1 2 f (u λ (x))s 2 + o(s 2 )
.
Obviously I λ [v + ] ≤ I λ [v],
and observe that
I λ [v] = 0 ⇐⇒ J λ [u λ + v] = 0. Fix now some element 0 ≤ v 0 ∈ H 1 0 (Ω)∩L ∞ (Ω), v 0 > 0 in Ω + , v 0 ≡ 0 in Ω -. Let v =
tv 0 for a certain t = t 0 > 0 to be selected a posteriori, and evaluate
I λ [tv 0 ] = 1 2 t 2 ∇v 0 2 L 2 (Ω) -λ v 0 2 L 2 (Ω) - Ω Gλ (x, tv 0 ) dx.
Reasoning as in the proof of Theorem 3.3 for large positive t, since F (t)/t 2 → ∞ as t → ∞, and using also (3.1) we obtain that
I λ [tv 0 ] ≤ C(1 + t + t 2 ) - Ω + a + (x) F (u λ + tv 0 ) + 1 2 C 0 (u λ + tv 0 ) 2 ≤ C(1 + t + t 2 ) -δ Ω + δ F (u λ + tv 0 ) + 1 2 C 0 (u λ + tv 0 ) 2 dx, so I λ [tv 0 ] < 0
for t = t 0 big enough, and where Ω + δ is defined by (3.18). Thus, the Mountain Pass Theorem implies that if
Γ := {g ∈ C [0, 1]; H 1 0 (Ω) : g(0) = 0, I λ [g(1)] < 0}, then, there exists c > 0 such that c := inf g∈Γ max 0≤t≤1 I λ [g(t)]
is a critical value of I λ , and thereby
K c := {v ∈ H 1 0 (Ω) : I λ [v] = c, I λ [v] = 0} is non empty.
Since for any g ∈ Γ we have
I λ [g + (t)] ≤ I λ [g(t)
] for all t ∈ [0, 1], it follows that g + ∈ Γ, and we derive the existence of a sequence v n such that
I λ [v n ] → c, I λ [v n ] → 0, v n ≥ 0.
On the other hand, w n := u λ + v n is a (PS) sequence for the original functional
J λ . Since Theorem 3.2, if λ < λ 1 (int Ω 0 ), v n → v λ en H 1 0 (Ω), so I λ [v] = 0 and I λ [v] = c > 0, hence v λ ≥ 0 is a nontrivial critical point of I λ . Consequently, w λ := u λ + v λ is a positive critical point of J λ , such that, for each ψ ∈ H 1 0 (Ω), we have Ω ∇w λ • ∇ψ dx = Ω λw λ + a(x)f (w λ ) ψ dx,
and thereby w λ := u λ + v λ ≥ u λ , w λ = u λ is a second positive solution to (1.1) λ .
(b) Step 2. Existence of a classical positive solution for λ = Λ.
We prove the existence of a solution for λ = Λ. For each λ ∈ (λ 1 , Λ), problem (1.1) admits a minimal positive weak solution u λ and λ → u λ is increasing, see Proposition 5.1. Taking the monotone pointwise limit, let us define u Λ (x) := lim λ↑Λ u λ (x).
Since step 1, for any λ ∈ (λ 1 , Λ) there exists a second positive solution to (1.1) λ . Let's denote it by ũλ = u λ . Now, define the pointwise limit (5.7) ũ λ 1 (x) := lim sup
λ→λ 1 ũ λ (x).
Reasoning as in step 2, ũ λ 1 < +∞ and ũ
λ 1 ∈ C 2 (Ω) ∩ C 1,µ (Ω) is a classical solution to (1.1) λ 1 .
Moreover, ũ λ 1 > 0. Assume on the contrary that ũ λ 1 = 0. By the Crandall-Rabinowitz's Theorem [START_REF] Crandall | Bifurcation from simple eigenvalues[END_REF], the only nontrivial solutions to (1.1) in a neighborhood of the bifurcation point (λ 1 , 0) are given by (4.1)). Since Proposition 5.1, those are the minimal solutions u λ , and due to ũλ = u λ , ũλ are not in a neighbourhood of (λ 1 , 0), contradicting the definition of ũ λ 1 (x), (5.7)
Hence, ũ λ 1 ≥ 0, and reasoning as in (1.8), the Maximum Principle implies that ũ λ 1 > 0.
Appendix A. Some estimates
First, we prove a useful estimate of ln(e+s)/ln(e+as).
Lemma A.1. Let 0 < a ≤ 1 be fixed. Then for all s ≥ 0,
(A.1) ln(e+s)/ln(e+as) ≤ ln(e/a) ≤ 1/a.
Proof. Denote ℓ(s) = ln(e+s)/ln(e+as) for all s ≥ 0. Then 1 ≤ ℓ(s) ≤ ℓ(s_0), where s_0 > 0 is the unique value where ℓ'(s_0) = 0. When computing s_0 we find ℓ'(s_0) = 0 ⟺ (e + a s_0) ln(e + a s_0) − a(e + s_0) ln(e + s_0) = 0, and therefore max ℓ = ℓ(s_0) = ln(e + s_0)/ln(e + a s_0) = (e + a s_0)/(a(e + s_0)).
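For the reader's convenience, here is the elementary derivative computation behind this characterization of s_0 (an expanded version of the step above, nothing more):

```latex
% Derivative of \ell(s)=\ln(e+s)/\ln(e+as); the critical point equation is the relation defining s_0.
\[
\ell'(s)=\frac{\dfrac{\ln(e+as)}{e+s}-\dfrac{a\,\ln(e+s)}{e+as}}{[\ln(e+as)]^{2}},
\qquad
\ell'(s_0)=0 \iff (e+as_0)\ln(e+as_0)-a(e+s_0)\ln(e+s_0)=0,
\]
and from this relation
\[
\ell(s_0)=\frac{\ln(e+s_0)}{\ln(e+as_0)}=\frac{e+as_0}{a(e+s_0)}.
\]
```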
Notice that we have (s 0 ) ≤ 1 a . In order to find a better upper bound of ln( e+as 0 e+s 0 ) let us denote for all s ≥ 0 θ(s) = (e + as) ln(e + as) -a(e + s) ln(e + s).
Then, there exists χ ∈ (0, s_0) such that
0 − e(1−a) = θ(s_0) − θ(0) = θ'(χ) s_0, that is, e(1−a)/s_0 = −θ'(χ).
Estimating −θ'(χ), the first inequality of (A.1) is achieved. The second one is obvious.
The next lemma concerns the variations of h(s) = s^{2^*-1}/[ln(e+s)]^α for s ≥ 0 (see Lemma A.2). Let us define, for s ≥ 0, θ(s) := ln(e + s) − (α/(2^*−1)) s/(s + e).
We have:
θ(0) = 1, θ(s) → +∞ as s → +∞, θ (s) = s+e(1-α 2 * -1 ) (e+s) 2
.
Hence:
(1) If α/(2^*−1) ≤ 1 then θ'(s) ≥ 0 for all s ≥ 0 and in particular θ(s) ≥ 0, and therefore h'(s) ≥ 0 for all s ≥ 0;
(2) if α/(2^*−1) > 1 then θ'(s_0) = 0 for s_0 = e(α/(2^*−1) − 1).
Let us compute θ(s 0 ):
θ(s_0) = ln(α/(2^*−1)) − α/(2^*−1) + 2,
and hence:
(i) if θ(s 0 ) ≥ 0 then θ(s) ≥ 0 for all s ≥ 0 and therefore h (s) ≥ 0 for all s ≥ 0;
(ii) if θ(s 0 ) < 0 then there exists s 1 < s 2 such that
θ(s) > 0 ∀s ∈ [0, +∞[ \ ]s 1 , s 2 [ =⇒ h (s) > 0 ∀s ∈ [0, +∞[ \ ]s 1 , s 2 [.
Notice that t → ln t is greater than t → t − 2 between some t_1 < 1 and the value t^*, the unique solution > 2 of the equation
ln t^* = t^* − 2.
Finally the statement of the lemma holds for α * = t * (2 * -1).
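For completeness, a short verification of the value θ(s_0) = ln(α/(2^*−1)) − α/(2^*−1) + 2 computed above, writing β := α/(2^*−1) so that s_0 = e(β − 1):

```latex
% Substitution of s_0 = e(\beta-1), with \beta := \alpha/(2^*-1).
\[
e+s_0=e\beta,\qquad \ln(e+s_0)=1+\ln\beta,\qquad \frac{s_0}{s_0+e}=1-\frac{1}{\beta},
\]
\[
\theta(s_0)=\ln(e+s_0)-\beta\,\frac{s_0}{s_0+e}
=1+\ln\beta-\beta+1
=\ln\frac{\alpha}{2^*-1}-\frac{\alpha}{2^*-1}+2.
\]
```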
Appendix B. A compact embedding using Orlicz spaces
For references on Orlicz spaces see [Krasnosel'skiĭ–Rutickiĭ] and [Rao]. Throughout, Ω ⊂ R^N is a bounded open set. We will denote L(Ω) = {ϕ : Ω → R : ϕ is Lebesgue measurable}. Assume also that M satisfies the ∆_2-condition, that is,
(B.1) ∃K > 0, ∀s ∈ [0, +∞[, M(2s) ≤ K M(s).
Let {u_n}_{n∈N} in H^1_0(Ω) be a sequence satisfying
(1) sup_{n∈N} ||u_n||_{2^*} < ∞,
(2) there exists u ∈ H^1_0(Ω) such that lim_{n→+∞} u_n(x) = u(x) a.e.
Then there exists a subsequence {u_{n_k}}_{k∈N} such that
(B.2) lim_{k→∞} ∫_Ω M(|u_{n_k}(x) − u(x)|) dx = 0.
In order to prove this theorem we need some definitions.
Proof. Fix ε > 0 and let δ > 0 be such that
∀n ∈ N, ∀A ⊂ Ω measurable, |A| < δ ⟹ ∫_A M(|u_n|) dx ≤ ε.
Using Fatou's lemma we infer that also
∀A ⊂ Ω mesurable , |A| < δ =⇒ A M (|u|)dx ≤ ε. Let Ω n = {x ∈ Ω : |u n (x) -u(x)| > M -1 (ε)}.
As a consequence of Egoroff's theorem, the sequence (u n ) n∈N converge in measure to u so there exists n 0 ∈ N such that
|Ω n | < δ.
Then, using the convexity of M and (B.1) it comes
Ω M |u n -u| dx = Ωn M |u n -u| dx + Ω\Ωn M |u n -u| dx ≤ 1 2 Ωn (M 2|u n | + M 2|u| dx + |Ω| M M -1 (ε) ≤ K 2 Ωn M (|u n |) + M (|u|) dx + |Ω|ε ≤ (K + |Ω|)ε.
In order to prove that, for the sequence of our theorem, the set {M(|u_n|) : n ∈ N} has equi-absolutely continuous integrals, we are going to use the following lemma.
Proof (of Lemma B.6). For de la Vallée Poussin's theorem see [Natanson], page 159. To prove the second statement, remark that the function Φ∘M^{-1} satisfies (B.3). Here M^{-1} stands for the right-hand inverse.
Proof of Theorem B.3. Let us take Φ(s) = |s|^{2^*}. From hypothesis (1) of the theorem, the set K = {u_n : n ∈ N} satisfies (B.4) for some D > 0.
Then the conclusion follows from Lemma B.5 and Lemma B.6.
Remark B.7. Whenever (B.2) is satisfied we say that the sequence {u n k } k∈N converges in M -mean to u.
One can formulate Theorem B.3 as a compact embedding of H^1_0(Ω) in some vector space endowed with the Luxemburg norm associated to M (see [Krasnosel'skiĭ–Rutickiĭ, Rao]). Instead, we are going to use the Orlicz norm, which is more suitable for our purposes. We will see later, in Theorem B.12, that convergence in M-mean implies convergence with respect to the Orlicz norm, provided that the ∆_2-condition is satisfied. || • ||_M is a norm on the real vector space L_M(Ω) = {u ∈ L(Ω) : ||u||_M < +∞}.
(see [START_REF] Krasnoselskiȋ | Rutickiȋ Convex functions and Orlicz Spaces[END_REF] for the details). Let us prove the following less trivial properties: (ii) The divide the proof in 3 steps.
Step 1: For all v ∈ L(Ω),
Ω uv dx ≤ u M if ρ(v, M * ) ≤ 1 ρ(v, M * ) u M if ρ(v, M * ) > 1
Furthermore, (a) For every, λ ∈ λ 1 , Λ), (1.1) admits at least two classical ordered positive solutions. (b) For λ = Λ, problem (1.1) admits at least one classical positive solution. (c) For every λ ≤ λ 1 , problem (1.1) admits at least one classical positive solution.
which implies that there exists K > 0 such that m(2s) ≤ Km(s) for all s ≥ 0 and consequently M satisfies the ∆ 2 -condition (B.1).
Finally, by inequality (B.5) of Proposition B.12 we get sup m(|u n |) M * , n ∈ N ≤ C + 1.
3.2. An existence result for λ < λ_1. The next theorem provides a solution to (1.1) for λ < λ_1 based on the Mountain Pass Theorem.
Theorem 3.3. Assume that Ω ⊂ R^N is a bounded domain with C^2 boundary. Assume that the nonlinearity f defined by (1.2) satisfies (H), and that the weight a ∈ C^1(Ω). Then, the boundary value problem (1.1)_λ has at least one classical positive solution for any λ < λ_1.
Proposition 4.1. Let us define Λ := sup{λ > 0 : (1.1) admits a positive solution}. If (1.5) holds then,
-θ (s) = a ln e + s e + as ≤ a ln 1 a) + 1 -a a ln(1/a) + 1 -a ≤ ln(1/a) + 1,
Definition B. 1 .
1 We will say that a function M : [0, +∞[→ [0, +∞[ is a N -function if and only if (N1) M is convex, increasing and continuous, The proof of the following property is trivial, we just quoted it for the sake of completeness. Proposition B.2. Any N -function M admits a representation of the form M (s) = s 0 m(t)dt where m : [0, +∞[→ [0, +∞[ is a non-decreasing right-continuous function satisfying m(0) = 0 and lim s→+∞ m(s) = +∞. Thus, m is the right-derivative of M .Our first aim is to prove the following result: Theorem B.3. Let M : [0, +∞[→ R be a N -function such that lim s→+∞
Definition B. 4 .
4 Let K ⊂ L(Ω). We say that K has equi-absolutely continuous integrals if and only if ∀ε > 0 there exists h > 0 such that∀ϕ ∈ K, ∀A ⊂ Ω mesurable , |A| < h =⇒ A |ϕ(x)| dx < ε.Lemma B.5. Let M : [0, +∞[→ R be a N -function satisfying the ∆ 2 condition (B.1). Let {u n } n∈N be a sequence of measurable functions converging a.e. to some function u and such that the set M |u n | : n ∈ N has equi-absolutely continuous integrals. Then (B.2) holds.
MΦthen the family K 1 =
1 |u n | : n ∈ N has equi-absolutely continuous integrals we are going to use the following lemma : Lemma B.6. Let K ⊂ L(Ω) and let Φ : [0, +∞[→ [0, +∞[ |u| dx ≤ D. Then all the functions u ∈ K are integrable and K has equi-absolutely continuous integrals (Valle Poussin's theorem). Moreover, if M : [0, +∞[→ [0, +∞[ is a continuous increasing function satisfying {M |u| : u ∈ K} has equi-absolutely continuous integrals.
Definition B. 8 .
8 Let M be a N -function. The complementary of M defined for all s ≥ 0 is the functionM * (s) := max st -M (t) : t ≥ 0 .As before, we give the following trivial result for the sake of completeness:Proposition B.9. If m is the right derivative of M then m * (s) = sup{t : m(t) ≤ s}is the right derivative of M * and M * is a N -function. Furthermore, for all s ≥ 0 we have sm(s) = M (s) + M * (m(s)), sm * (s) = M (m * (s)) + M * (s). Next, let us introduce the Orlicz norm associated to M : Definition B.10. Let M be a N -function and let M * be its complementary. Let us denote for any v ∈ L(Ω) ρ(v, M * ) = Ω M * |v| dx and define the Orlicz norm of any u ∈ L(Ω) by u M := sup Ω uv dx : v ∈ L(Ω), ρ(v, M * ) ≤ 1 .
Proposition B. 11 .
11 (i) For all u ∈ L(Ω), (B.5) u M ≤ Ω M (|u|) dx + 1.
(
ii) For any u and v in L(Ω) it holds (B.6) Ω uv dx ≤ u M v M * (Holder's inequality). (iii) For any u and v in L(Ω) we have u M ≤ v M if |u| ≤ |v| a.e. Proof. (i) This follows from the definition of • M and the inequality |uv| ≤ M (|u|) + M * |v| .
Step 2 : 3 :
23 Indeed, the first case follows directly from the definition. If ρ(v, M * ) > 1 then by convexityM * |v| ρ(v, M * ) ≤ M * |v| ρ(v, M * )and thereforeρ |v| ρ(v, M * ) , M * ≤ 1 ρ(v, M * ) Ω M * |v| dx = 1andΩ u v ρ(v, M * ) dx ≤ u M . If u M ≤ 1 then ρ m(|u| , M * ) ≤ 1. Set u n = uχ {|u|≤n} for all n ∈ N. Since u n is bounded then ρ (m(|u n | , M * < +∞. Assume by contradiction that Ω M * m(|u| dx > 1 and let n 0 ∈ N be such that Ω M * m |u n 0 | dx > 1. We have M * m(|u n 0 | < M |u n 0 | + M * m(|u n 0 |)| = |u n 0 | m |u n 0 | and therefore, by (i), ρ m(|u n 0 |), M * < Ω |u n 0 | m |u n 0 | dx ≤ u n 0 M ρ m(|u n 0 |), M * which contradicts u n 0 M ≤ u M ≤ 1.This is trivial from the definition of u M , step 1 and the fact that |u|m(|u|) = M (|u|) + M * m(|u| .Step If u M ≤ 1 then ρ(u, M ) ≤ u M . Let us remark that for all s ≥ 0 M * (m(s)) + M (s) = sm(s).
2 * -1 [ln(e+s)] α for s ≥ 0. Lemma A.2. There exists α * > 2(2 * -1) such that h is an increasing function on ]0, +∞[ if and only if α ≤ α * . Moreover, if α > α * there exists s 1 < s 2 such that h is increasing in [0, +∞[ \ ]s 1 , s 2 [.
Proof. We have
h (s) = s 2 * -2 [ln(e + s)] α+1 (2 * -1) ln(e + s) - αs s + e ,
so h (s) ≥ 0 ⇐⇒ ln(e + s) ≥ α 2 * -1 s s + e .
Acknowledgments
We would like to thank Professors Xavier Cabré, Carlos Mora and Guido Sweers for helpful discussion and references about Orlicz-Sobolev spaces.
This work was started during Pardo's visit to the LMPA, Université du Littoral Côte d'Opale ULCO, whose invitation and hospitality she thanks.
We next see that u Λ < +∞, reasoning on the contrary. Assume that there exists a sequence of solutions u n := u λn such that u λn → +∞ as λ n → Λ. Set v n := u n / u n , then there exists a subsequence, again denoted by v n such that v n v in H 1 0 (Ω), and v n → v in L p (Ω) for any p < 2 * and a.e. Arguing as in the claim of Proposition 3.1, v ≡ 0. Moreover (5.4)
On the other hand, from the weak formulation, for all
)
Taking the limit, and since
Hence [2, Proposition 3.13 (iv)], and (5.6) imply
which contradicts (5.4) and yields u Λ < +∞.
By Sobolev embedding and the Lebesgue dominated convergence theorem,
Now, by substituting ψ = u n in (5.5), using Hölder inequality and Sobolev embeddings we obtain
By compactness, for a subsequence again denoted by u n , u n u * in H 1 0 (Ω), u n → u * in L p (Ω) for any p < 2 * and a.e. By uniqueness of the limit, u Λ = u * . Finally, by taking limits in the weak formulation of u n as λ n → Λ, we get
The existence of a classical positive solution for λ < λ_1 is established in Theorem 3.3. Let us now look for a solution when λ = λ_1.
Set v 0 = m(|u|). From step 2, ρ(v 0 , M * ) ≤ 1 and then
Now we prove Holder's inequality. From step 2 applied to M * and
and Holder's inequality follows.
The proof of (iii) is trivial.
Finally, we give the following compact embedding result: |
03345853 | en | [
"math.math-nt"
] | 2024/03/04 16:41:26 | 2020 | https://hal.science/hal-03345853/file/ErratumToMultiDimRace.pdf | Abstract. As pointed out by Alexandre Bailleul, the paper mentioned in the title contains a mistake in Theorem 2.2. The hypothesis on the linear relation of the almost periods is not sucient. In this note we x the problem and its minor consequences on other results in the same paper.
Corrected statement of Theorem 2.2, and related results
First recall some notations used in [START_REF] Devin | Limiting Properties of the Distribution of Primes in an Arbitrarily Large Number of Residue Classes[END_REF]2].
Definition 1. Let P be a set of positive integers; the natural density dens(P) of P is given by
dens(P) = lim_{X→∞} (1/X) ∑_{k≤X} 1_P(k)
if the limit exists, where 1 P is the indicator function of the set P.
For γ 1 , . . . , γ N ∈ R, we denote by γ 1 , . . . , γ N Q the Q-vector space spanned by these elements.
The following theorem corrects the flawed hypothesis in [De, Th. 2.2].
Theorem 1. Let N ≥ 1 and let γ_1, . . . , γ_N ∈ (0, π) be distinct real numbers such that π ∉ ⟨γ_1, . . . , γ_N⟩_Q. Let D ≥ 1 and fix c_1, . . . , c_N ∈ C^D. Let F = (F_1, F_2, . . . , F_D) : N → R^D be the function defined by
F(k) = ∑_{n=1}^{N} (c_n e^{ikγ_n} + c̄_n e^{-ikγ_n}) = ∑_{n=1}^{N} (a_n cos(γ_n k) + b_n sin(γ_n k)).
The image of F is contained in a compact subset of the subspace
V F of R D generated by the vectors a 1 , . . . , a N , b 1 , . . . , b N .
Then, for every subspace H ⊂ R D not containing V F and every vector α ∈ R D , one has dens(F ∈ α + H) = 0.
In particular, if V_F ⊄ ∪_{d=1}^{D} {x ∈ R^D : x_d = 0}, then, for every α_1, . . . , α_D ∈ R, one has dens(F_d = α_d) = 0 for all 1 ≤ d ≤ D, and the density
dens(F 1 > α 1 , F 2 > α 2 , . . . , F D > α D )
exists.
Date: September 18, 2020.
(1) Note that the difference with [De, Th. 2.2] lies in the hypothesis on the real numbers γ_1, . . . , γ_N: we need that π ∉ ⟨γ_1, . . . , γ_N⟩_Q. In particular, the hypothesis is stronger than the one in [De, Th. 2.2], but still weaker than the full Linear Independence.
(2) In the case π ∈ ⟨γ_1, . . . , γ_N⟩_Q, Bailleul observed in [Bailleul, Th. 1.5] that the subtorus generated by γ_1, . . . , γ_N over Z may not be connected; this is the reason for the gap in the proof of [De, Th. 2.2]. This is also the cause of the difference between the continuous case ([De, Th. 1.2]) and the discrete case. Indeed, the subtorus {(yγ_1, . . . , yγ_N) : y ∈ R}/(2πZ)^N is connected unconditionally: it is the continuous image of the connected set R. This last remark corrects [De, Rem. 2.3.(i) and Rem. 3.7]; note also that one should read R/2πZ in [De, Rem. 2.3.(i)] instead of Z/2πZ.
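As a purely numerical illustration of Definition 1 and of the kind of sums appearing in Theorem 1 (the parameter values below are arbitrary and are not taken from the paper), the natural density of a set such as {k : F(k) > 0} can be estimated by averaging the indicator function up to a large X:

```python
import numpy as np

# Illustrative only: hypothetical ordinates gamma_1, gamma_2 and coefficients.
gammas = np.array([np.sqrt(2.0), np.log(3.0)])   # generic irrational choices
a = np.array([1.0, 0.5])                         # cosine coefficients
b = np.array([0.2, -0.3])                        # sine coefficients

X = 10**6
k = np.arange(1, X + 1)

# F(k) = sum_n a_n cos(gamma_n k) + b_n sin(gamma_n k)
F = sum(a[n] * np.cos(gammas[n] * k) + b[n] * np.sin(gammas[n] * k)
        for n in range(len(gammas)))

# Empirical natural density of {k <= X : F(k) > 0}; ties {F(k) = 0} have
# density zero under the hypotheses of Theorem 1, so ">" is harmless here.
print("dens(F > 0) ≈", np.mean(F > 0))
```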
The main application of this result is the analogue of Chebyshev's bias in rings of polynomials with coefficients in finite fields. Let us recall some definitions.
Definition 2. Let p^α be a prime power, and Q ∈ F_{p^α}[t]. Let a_0, a_1, . . . , a_D mod Q be distinct invertible congruence classes, and let π(k; Q, a_i) denote the number of irreducible polynomials in F_{p^α}[t] with degree at most k that are congruent to a_i mod Q.
If, for each permutation σ, the set
P Q;a σ(0) ,a σ(1) ,...,a σ(D) := {k ∈ N : π(k; Q, a σ(0) ) > π(k; Q, a σ(1) ) > . . . > π(k; Q, a σ(D)
)} admits a natural density, we say that the irreducible polynomial race is weakly inclusive. Moreover, if every set of the form {k ∈ N : π(k; Q, a i ) = π(k; Q, a j )}, i = j, has natural density equal to zero, we say that the ties have density zero.
The hypothesis in [START_REF] Devin | Limiting Properties of the Distribution of Primes in an Arbitrarily Large Number of Residue Classes[END_REF]Cor. 2.5] has to be changed accordingly.
Corollary 2. Let p α be a prime power, and Q ∈ F p α [t]. Suppose that there exists M ≥ 1 and γ 1 , . . . , γ M ∈ (0, π) such that π / ∈ γ 1 , . . . , γ M Q , and such that for each character χ mod Q, there exists
1 ≤ m ≤ M with L(1/2 + iγ_m, χ) = 0 but L(1/2 + iγ_m, χ') ≠ 0 for every χ' ≠ χ.
Then, every irreducible polynomial race in F p α [t] modulo Q is weakly inclusive and the ties have density zero.
Finally, the hypothesis in Theorem 1 is now too strong to deduce [De, Cor. 2.6]; we should thus consider this statement as not proved. Indeed, one needs to take into account all the zeros of the Dirichlet L-functions of the quadratic characters modulo f(T)(T − u), but we only have information on the zeros of the Dirichlet L-function of the primitive quadratic character.
Proof of the corrected statement
The key lemma is the following.
Lemma 3. Let N ≥ 1 and γ_1, . . . , γ_N ∈ (0, π) be distinct real numbers such that π ∉ ⟨γ_1, . . . , γ_N⟩_Q. Then the sub-torus A = {(kγ_1, . . . , kγ_N) : k ∈ Z}/(2πZ)^N is connected.
Proof. We will show that A = {(yγ_1, . . . , yγ_N) : y ∈ R}/(2πZ)^N; then the conclusion follows from the fact that this sub-torus is connected. The first inclusion (⊂) is immediate.
Let (e_1, . . . , e_d) be a basis of ⟨γ_1, . . . , γ_N⟩_Q such that for all 1 ≤ i ≤ N one has γ_i = ∑_{k=1}^d g_{i,k} e_k with g_{i,k} ∈ Z. By hypothesis, the set {2π, e_1, . . . , e_d} is linearly independent over Q; thus, by the discrete version of the Kronecker–Weyl Equidistribution Theorem (see e.g. [Bailleul, Th. 1.2]), one has {(ke_1, . . . , ke_d) : k ∈ Z} = (R/2πZ)^d. In particular, for every y ∈ R and ε > 0, there exist ℓ ∈ Z and m_1, . . . , m_d ∈ Z such that max_{1≤k≤d} |ℓe_k − ye_k − m_k 2π| < ε / max_{1≤i≤N} ∑_{k=1}^d |g_{i,k}|.
Thus, for all 1 ≤ i ≤ N , we have
| γ i -yγ i - d k=1 g i,k m k 2π| ≤ d k=1 |g i,k | • | e k -ye k -m k 2π| < .
Using the fact that g i,k ∈ Z, this shows that y(γ 1 , . . . , γ N ) ∈ A, for all y ∈ R, which concludes the proof.
We can now give the proof of Theorem 1.
Proof of Theorem 1. We follow the proof from [De]; the only flaw is in the proof of [De, Lem. 3.5].
Let S be the unit sphere of R^D and let f : (V ∩ S) × A → R be the function defined by
(1) f(u, θ) := ∑_{n=1}^{N} 2 Re(⟨u, c_n⟩ e^{iθ_n}) = ∑_{n=1}^{N} (⟨u, a_n⟩ cos θ_n + ⟨u, b_n⟩ sin θ_n),
this function is analytic in the variable θ.
Since the vectors a_1, . . . , a_N, b_1, . . . , b_N span V, there exists at least one of them, say v_j, such that ⟨u, v_j⟩ ≠ 0. So, the function f(u, •) is a linear combination of the 2N characters of A defined by (θ_1, . . . , θ_N) → e^{iθ_n}, (θ_1, . . . , θ_N) → e^{-iθ_n}, 1 ≤ n ≤ N, with at least one non-zero coefficient. Note that these 2N characters are all distinct, and also distinct from the trivial character (θ_1, . . . , θ_N) → 1. This follows from the fact that the values of the derivatives of these functions restricted to {(yγ_1, . . . , yγ_N) : y ∈ R}/(2πZ)^N ⊂ A at y = 0 are respectively γ_n, −γ_n, 1 ≤ n ≤ N, and 0, which are distinct. Thus, by a result of Dedekind–Artin ([Lang, VI, Th. 4.1]), those characters are linearly independent and the function f(u, •) is not constant on A. Moreover, Lemma 3 implies that A is connected, and the fact that N > 0 and γ_1 ≠ 0 ensures that it is not restricted to {0}, so that a non-constant analytic function on A is indeed non-locally constant.
The rest of the proof follows similarly to [De].
Acknowledgments
I thank Alexandre Bailleul for his careful reading of [De] and for helpful discussions that led to this correction, as well as the referee for their useful remarks that improved the text. |
00412103 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2009 | https://inria.hal.science/inria-00412103/file/Burciu_COST_TD_09_844.pdf | Ioan Burciu
email: [email protected]
Guillaume Villemaud
Jacques Verdier
Candidate Architecture for MIMO LTE-Advanced Receivers with Multiple Channels Capabilities and Reduced Complexity and Cost
INTRODUCTION
Internet of Things: this is the greatest challenge of mobile communications. All incoming standards need to provide higher throughputs in compact terminals. One can see that all proposed solutions converge towards the same techniques: OFDM (Orthogonal Frequency Division Multiplexing), MIMO (Multiple Input Multiple Output), adaptive coding and modulation, as well as scalable bandwidths. These techniques require an increasing complexity of terminals, leading to additional cost and consumption. From the RF front-end point of view in particular, OFDM imposes high PAPR (Peak to Average Power Ratio) constraints and good linearity [Van Nee], MIMO leads to the multiplication of RF chains, and scalable bandwidth imposes wider bandwidth characteristics on RF components [Kaiser].
Recently, we have proposed different solutions to reduce the complexity of multi-band [Burciu] and multi-antenna [Gautier][5] front-ends. In this article we merge those principles to propose a candidate architecture for simplified LTE-Advanced terminals [Seidel], LTE-Advanced being the current 3GPP [7] working group for future mobile phones with very high data rates and mobility. In fact, in the current proposals for the LTE-Advanced PHY layer, the combined use of multiple-antenna capabilities and multiple separated frequency channels is very appealing for the downlink transmission. However, the use of these combined techniques implies high-cost architectures for the user's embedded terminal.
This paper is organized as follows. Section II summarizes the main LTE-Advanced key points, focusing on multi-band and multi-antenna needs. Section III assesses the previously proposed architectures which address the multi-band and the multi-antenna problems, respectively. A novel combined architecture is presented in Section IV along with a comparative complexity study. This architecture is dedicated to the multiple-antenna reception of signals composed of two arbitrarily distributed frequency channels. Finally, conclusions and forthcoming works are drawn in Section V.
LTE-ADVANCED SPECIFICATION
A. Requirements LTE-Advanced, an evolved version of LTE, is currently under investigation in order to fulfill the requirements defined by the International Telecommunications Union (ITU) for next generation mobile communication systems [START_REF] Seidel | Progress on LTE Advanced" -the new 4G standard[END_REF].
Requirements for LTE-Advanced are similar to those imposed by LTE, except for peak data rates and spectral efficiency, which should be increased. The goal is to provide peak data rates reaching 1 Gbit/s in local areas. Such high data rates require a special spectrum allocation over a 100 MHz range in order to obtain a spectral efficiency reaching 10 bps/Hz. This spectrum allocation of the 100 MHz system bandwidth will use the aggregation of individual component carriers.
B. Proposed radio access techniques for LTE-Advanced
The 3GPP working groups are currently starting to take into consideration the technical proposals that can be implemented in order to achieve these requirements. The main novel requirements imposed by the future LTE-Advanced concern [7]:
• Asymmetric wider transmission bandwidth
• Layered OFDMA multi-access • Advanced multi-cell transmission • Discontinuous spectrum allocations • Enhanced multi-antenna transmission techniques
In general, OFDM provide simple means to increase bandwidth by simply increasing the number of subcarrier. However, due to a fragmented spectrum, the available bandwidth might not be contiguous, as shown in Fig. 1. Some of the main challenges for 100 MHz terminals are presented below [START_REF] Seidel | Progress on LTE Advanced" -the new 4G standard[END_REF]:
C. Scenario
Among the proposed radio access techniques, multi-antenna transmission and discontinuous spectrum allocation have a direct influence on the front-end part of the receiver. Each additional antenna or each additional band induces an important complexity increase of the receiver's analog part. This study assesses the multi-antenna reception of a discontinuous spectrum signal (as shown in Fig. 1).
PRIOR WORKS: LOW COMPLEXITY FRONT-END
D. General purpose
The analog complexity issue concerning advanced receiver using digital processing has been very little addressed [START_REF] Kaiser | Smart Antennas: State of the Art[END_REF]. In fact, the performance gain achieved by diversity techniques implies an increase of the digital complexity, but also an increase of the complexity and the consumption of the analog front-end [START_REF] Tsurumi | Broadband RF stage architecture for software-defined radio inhandheld terminal applications[END_REF].
In the research domain of multi-band reception, two categories of front-ends can be distinguished: non-simultaneous receivers using switching techniques and simultaneous-reception receivers. To the knowledge of the authors, the usual scheme for simultaneous multi-band reception is the front-end stack-up technique, each chain being dedicated to the reception of only one frequency band. Nonetheless, this architecture suffers from a poor complexity-performance trade-off, as well as from its cost and physical size.
Therefore our studies deal with the analog front-end architectures dedicated to simultaneous multi-band reception. Our main goal is the improvement of the performanceconsumption-complexity trade-off offered by these structures. Two studies explore the use of a single common front-end for the processing of signals received by the antennas:
• The double IQ architecture dedicated to the simultaneous reception of two separate frequency band signals.
• The code multiplexing architecture for the multiantenna reception.
The use of a single analog chain relies on the idea of multiplexing the different RF branches. In other words, the different RF contributions are multiplexed in order to use a single frequency translation step; in the digital domain, the different baseband-translated contributions are then demultiplexed.
E. Double IQ architecture
1) Context
Double orthogonal translation techniques are generally used in image-frequency-rejection front-ends [Rudell].
In other words, the double orthogonal (double IQ) structure is multiplexing the useful signal and the image frequency band signal during the translation from RF to baseband and is then demultiplexing them in the baseband domain using a signal processing technique. Starting from this structure, we proposed and developed a novel front-end architecture dedicated to the simultaneous reception of two separate frequency band signals [START_REF] Burciu | A 802.11g and UMTS Simultaneous Reception Front-End Architecture using a double IQ structure[END_REF].
2) Presentation of the structure
The proposed dual-band simultaneous reception architecture is shown in Fig. 2. The input blocks -antenna, RF filter and LNA (Low Noise Amplifier) -are parallelized in order to have a good amplification and filtering of the two useful signals S 1 and S 2 . In the same time, we concluded to a possible mutualisation of these blocks while realizing the reception of two signals representing two frequency non-overlapping channels of the same standard. Once filtered and amplified, the useful signals are translated in the baseband domain by a double IQ structure. An important aspect that has to be mentioned is the choice of the first local oscillator LO 1 frequency. This frequency is chosen in such a manner that each of the useful signals has a spectrum situated in the image frequency band of the other. In the baseband domain, the four obtained signals are digitalized. The baseband component of the two useful signals is obtained by using two dedicated basic operations processing similar with that used by the image frequency rejection structures [START_REF] Rudell | A 1.9GHz Wide-Band IF Double Conversion CMOS Integrated Receiver for Cordless Telephone Applications[END_REF].
3) Characteristics
The double orthogonal translation technique allows a theoretically perfect rejection of the image band signal when the quadrature-mounted mixers are perfectly matched, i.e. with no phase or gain mismatch. However, design and layout imperfections, such as different line lengths between the two branches and non-identical mixer gains, generate phase and gain mismatches, respectively [Traverso]. A study of this issue, together with the design and implementation of an original dedicated digital MMSE (Minimum Mean Square Error) method, has been carried out. The results show that, when integrated, this MMSE method gives the double IQ architecture the same sensitivity to IQ mismatches as that of the dedicated front-end stack-up architecture. These results were obtained in simulation using the ADS software provided by Agilent Technologies, the input signals being two non-overlapping 802.11g channels. They were then validated using an Agilent Technologies platform that integrates a real transmission channel and an accurate model of the proposed architecture. Fig. 3 presents these results. The BER evolution as a function of the SNR (Signal to Noise Ratio) of the useful signals shows that, when integrating the digital MMSE method, the double IQ (MMSE DIQ) architecture presents the same sensitivity to IQ impairments as the stack-up architecture (Stack-Up), for a phase and gain mismatch of 1° and 0.3 dB.
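The toy example below illustrates the kind of pilot-based digital compensation referred to above; the leakage model and the least-squares estimator are simplified stand-ins chosen purely for illustration, not the actual MMSE method of this paper.

```python
import numpy as np

# Toy illustration of digital IQ-imbalance compensation using pilot symbols.
rng = np.random.default_rng(1)
n_pilot, n_data = 256, 4096

def qpsk(n):
    return ((2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)) / np.sqrt(2)

s1, s2 = qpsk(n_pilot + n_data), qpsk(n_pilot + n_data)

# Gain/phase mismatch makes a small conjugate copy of the image band leak into
# each recombined output (coefficients and noise level chosen arbitrarily).
e1, e2 = 0.05 * np.exp(1j * 0.3), 0.04 * np.exp(-1j * 0.2)
noise = lambda n: 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y1 = s1 + e1 * np.conj(s2) + noise(len(s1))
y2 = s2 + e2 * np.conj(s1) + noise(len(s2))

# Least-squares estimate of the leakage coefficients from the pilot part.
e1_hat = np.linalg.lstsq(np.conj(s2[:n_pilot])[:, None],
                         y1[:n_pilot] - s1[:n_pilot], rcond=None)[0][0]
e2_hat = np.linalg.lstsq(np.conj(s1[:n_pilot])[:, None],
                         y2[:n_pilot] - s2[:n_pilot], rcond=None)[0][0]

# Compensation applied to the data part.
s1_hat = y1[n_pilot:] - e1_hat * np.conj(y2[n_pilot:])
s2_hat = y2[n_pilot:] - e2_hat * np.conj(y1[n_pilot:])
for name, ref, raw, est in [("band 1", s1[n_pilot:], y1[n_pilot:], s1_hat),
                            ("band 2", s2[n_pilot:], y2[n_pilot:], s2_hat)]:
    before = np.mean(np.abs(raw - ref) ** 2)
    after = np.mean(np.abs(est - ref) ** 2)
    print(f"{name}: residual error before = {before:.2e}, after = {after:.2e}")
```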
Several studies concerning the proposed double IQ structure were carried out. In the following we mention the one concerning the specifications of the ADC (Analog to Digital Converter) and the one concerning the overall consumption. The ADC specification study concluded that the constraints imposed on the ADC used by this structure (dynamic range and sampling frequency) are almost identical to those of a dedicated front-end stack-up structure. The overall consumption study compares the double IQ structure to the front-end stack-up structure. It concludes that, thanks to the sharing of several blocks and the use of fewer components, the proposed double IQ structure offers a significant 25% consumption gain as well as a lower complexity.
Fig. 4. Code multiplexing architecture dedicated to the simultaneous reception of a multi-antenna system.
Fig. 5. 802.11g simulated and measured BER evolution for a two-antenna SIMO reception using the code multiplexing architecture and the classical stack-up architecture.
A. Code multiplexing architecture
1) Context
An antenna diversity system must be able to simultaneously receive different versions of the transmitted signal(s). For example, an N-antenna receiver must simultaneously process N signals having their spectra situated in the same frequency band. To answer these specifications, the RF stack-up architecture is an obvious choice. It is composed of N stacked-up processing chains, each of them being dedicated to the demodulation of one of the N branches. However, this choice imposes a high complexity on the analog part of the receiver. Our study of multiple-antenna structures aims to reduce this complexity without decreasing the SNR quality of the baseband signal.
To this end, an innovative architecture is introduced based on code multiplexing. This architecture uses the direct sequence spread spectrum (DSSS) technique in order to multiplex the different antennas contributions so that a single frequency translation block is used.
2) Presentation of the structure
Since the antenna contributions overlap in both time and frequency, decorrelation can be achieved by the DSSS technique. This technique is the basis of CDMA multiple-access technology [Kohno]. The DSSS technique consists in allocating an orthogonal spreading code to each branch.
Fig. 6. Candidate architecture for LTE-Advanced receivers, capable of multi-antenna processing of a discontinuous-spectrum signal. The discontinuous spectrum is received and coded in order to be processed in the baseband domain by a single front-end structure.
The signals r_k(t) are multiplied by the periodic codes c_k(t), which are pseudorandom sequences of N binary chips with a rate N times higher than the symbol rate. Thus, the bandwidth of the resulting signal is N times larger than that of the received signal r_k(t). The addition of the N encoded signals is then performed in order to generate the radio-frequency multiplexed signal d(t). The decoding of each contribution is performed in the digital domain. It consists in using matched filters composed of digital filters followed by subsampling operations.
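The following baseband sketch illustrates the spreading and matched-filter despreading described above, for hypothetical parameter values (N = 4 branches); all front-end impairments and the frequency translation itself are ignored.

```python
import numpy as np
from scipy.linalg import hadamard

# Baseband sketch of code multiplexing: each antenna branch is spread by an
# orthogonal Walsh-Hadamard code, the branches are summed (single shared
# chain), and each branch is recovered by despreading + subsampling.
rng = np.random.default_rng(2)
n_ant, n_sym = 4, 1000                       # number of antennas / symbols
codes = hadamard(n_ant)                      # orthogonal +/-1 codes of length n_ant

def qpsk(n):
    return ((2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)) / np.sqrt(2)

branches = np.array([qpsk(n_sym) for _ in range(n_ant)])   # one signal per antenna

# Spreading: each symbol is repeated over n_ant chips and multiplied by the code.
chips = np.stack([np.repeat(branches[k], n_ant) * np.tile(codes[k], n_sym)
                  for k in range(n_ant)])
multiplex = chips.sum(axis=0)                # single signal through the shared chain

# Matched-filter decoding: multiply by the code and sum over each symbol period.
recovered = np.array([
    (multiplex.reshape(n_sym, n_ant) * codes[k]).sum(axis=1) / n_ant
    for k in range(n_ant)])

print("max reconstruction error:", np.max(np.abs(recovered - branches)))
```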
The proposed code multiplexing architecture is shown in Fig. 4. The RF input blocks (antenna, RF filter and LNA) are parallelized in order to have a good amplification and filtration of the received radiofrequency signals.
3) Characteristics
The proposed structure uses orthogonal codes to multiplex the different branches through a single IQ demodulator. The main goal was to reduce the complexity of the analog frontend.
In [START_REF] Gautier | Low complexity antenna diversity front-end: Use of code multiplexing[END_REF], we show that this system reduces the number of ADC by using only two ADC instead of the 2N used by the classic receiver. But, for the proposed architecture, the specifications of the ADC in terms of bandwidth are much more stringent. Meanwhile, a global consumption reduction of 20% could be reached for an N= 8 antennas system.
The points revealed in [START_REF] Gautier | New antenna diversity frontend using code multiplexing[END_REF] are the evaluation of the feasibility of such a structure. The implementation of analog coding and digital decoding has been validated by BER simulations and measurements (Fig. 5). Results show that, in a Gaussian case, the bit error rate does not significantly increase when the receiver integrates this type of RF coding architecture instead of the classical front-end stack-up.
Several studies have been done concerning the influence of IQ imbalance on the signal quality. Their conclusions reveal that the proposed architecture has the same sensitivity to IQ mismatches as the stack-up architecture. We note that the resulting IQ mismatches affect in the same manner the quality of each antenna baseband translated contributions. We mention that a classical dedicated front-end stack-up will have different IQ impairments for each dedicated IQ demodulator. These theoretical performance were validated in real environment simulations. In [START_REF] Gautier | IQ imbalance reduction in a SMI multi-antenna receiver by using a code multiplexing front-end[END_REF] we implement a SMI (Sample Matrix Inversion) multi-antenna digital method in order to compare the performance of the RF code multiplexing architecture with those of a classical front-end stack up for different IQ mismatches levels.
The last important point that has to be mentioned is the capability of the proposed architecture to adapt to any multi-antenna scheme. In fact, different application scenarios can be addressed, such as MIMO reception or beamforming techniques.
LTE-ADVANCED RECEIVER ARCHITECTURE
B. Context
As mentioned in Section II, the future LTE-Advanced standard takes into consideration discontinuous spectrum usage, as well as MIMO techniques for the downlink transmission. In order to meet these specifications, the current state of the art of radio-frequency receivers imposes the use of dedicated front-end stack-up architectures. For example, if we consider a dual-band discontinuous spectrum scenario where two antennas are used for the reception of a MIMO transmission, the receiver has to integrate a stack-up of four front-ends. Each of these front-ends is dedicated to the processing of one of the contributions obtained from the combination of two antennas and two non-adjacent frequency bands. It becomes obvious that this method, involving the stack-up of dedicated front-ends, imposes a high complexity, and especially a high power consumption.
One of the most important issues when designing a radio-frequency receiver is the performance-power-complexity trade-off. In order to obtain a low-power and low-complexity LTE-Advanced receiver, an obvious solution is the sharing of elements between the dedicated front-ends used by the stacked-up architecture. If we also take into consideration the simultaneous-processing constraints, we can conclude that sharing the frequency translation step requires a multiplexing of the different RF-domain contributions.
Starting from this RF multiplexing idea, we present a novel single front-end architecture able to overcome the constraints imposed on the receiver by the LTE-Advanced standard.
C. Structure presentation
Starting from the dual-band simultaneous reception and the RF code spreading architectures presented in Section III, we propose a unique front-end architecture dedicated to LTE-Advanced receivers. In this paper we chose to treat the case of a two-antenna reception of an OFDM dual-band signal, corresponding to a handheld integration scenario. Nevertheless, the structure can easily be generalized to several antennas by increasing the number of chips used by the orthogonal codes.
In the following, we consider that the two input signals S and S' are the result of the passage of a bi-band signal through two different transmission channels. Let Band1-Band2 and Band1'-Band2' be the two pairs of contributions composing S and S', respectively. Once received by two spatially separated antennas, the two input signals are separately filtered and amplified by two RF filters and two LNAs, respectively, as shown in Fig. 6.
The multiplexing of the four contributions is realized by a two-step method. First, we use the orthogonal code spreading technique in order to multiplex the two input signals once they have been filtered and amplified. Let cS, cS', cBand1-cBand2 and cBand1'-cBand2' be the signals resulting from the use of the coding technique.
The orthogonal codes used here have a length of two and a chip time twice as short as the symbol time of each of the two signals. When multiplied by the orthogonal codes, each of the signals S and S' sees each of the components of the pairs Band1-Band2 and Band1'-Band2', respectively, spread in the same manner around its own central frequency, as shown in Fig. 6. This multiplexing step is concluded by the addition of the cS and cS' signals. Therefore, by using the code spreading technique, we multiplex two by two the contributions having the same central frequency but received on different antennas. As a result of this operation we obtain a signal having a dual-band spectrum. Each of its two frequency bands is composed of the sum of cBand1 and cBand1', and of cBand2 and cBand2', respectively.
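A minimal numerical sketch of this code-spreading step, assuming ideal synchronization, arbitrary test symbols and length-two Walsh codes (the variable names and signal values below are illustrative, not those of the actual RF front-end):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sym = 1000

# Complex baseband symbols seen by the two antenna branches (illustrative QPSK-like data).
s1 = (rng.choice([-1, 1], n_sym) + 1j * rng.choice([-1, 1], n_sym)) / np.sqrt(2)
s2 = (rng.choice([-1, 1], n_sym) + 1j * rng.choice([-1, 1], n_sym)) / np.sqrt(2)

# Length-2 orthogonal (Walsh) codes, one per branch; chip time = symbol time / 2.
c1 = np.array([1, 1])
c2 = np.array([1, -1])

# Spreading: each symbol is repeated over 2 chips and multiplied by the branch code.
x1 = np.repeat(s1, 2) * np.tile(c1, n_sym)
x2 = np.repeat(s2, 2) * np.tile(c2, n_sym)

# Multiplexing: the two coded branches are simply added (single front-end path).
x = x1 + x2

# Matched-filter despreading: multiply by each code and sum over every pair of chips.
chips = x.reshape(n_sym, 2)
s1_hat = (chips * c1).sum(axis=1) / 2
s2_hat = (chips * c2).sum(axis=1) / 2

print(np.allclose(s1_hat, s1), np.allclose(s2_hat, s2))  # True True
```

Because the two codes are orthogonal over each symbol period, the matched filters recover each branch's contribution exactly, mirroring the demultiplexing described above.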
The second block of the architecture assessed in this section implements the double IQ technique, similar to that used by the dual-band simultaneous reception structure. This block performs a frequency translation of the RF orthogonal coding block output into the baseband domain. As for the structure presented in Section III, the two digital outputs are the baseband translation of the two frequency bands composing the signal at the output of the RF coding block. Therefore, each of them is the sum of the baseband-translated signals cBand1, cBand1' and cBand2, cBand2', respectively. In order to demultiplex each of these two pairs of signals, we apply two matched filters, similar to those used for the code multiplexing architecture.
Once we separately obtain the two pairs of signals corresponding to the two-antenna reception of a dual-band signal, two-antenna SIMO processing methods are used, each of them taking one of the pairs of decoded signals as input.
D. Characteristics
The single front-end receiver presented above is capable of meeting the LTE-Advanced requirements concerning the MIMO reception of a discontinuous spectrum signal.
In order to validate the theoretical study relative to this receiver, we performed several simulations using the ADS software provided by Agilent Technologies.
Knowing that the physical layer requirements concerning LTE-Advanced are not yet finalized, we chose to use a particular RF signal model for these simulations. In order to obtain an OFDM signal having a discontinuous spectrum, we use a signal composed of the addition of two non-overlapping 802.11g channels. For the transmission channel, the chosen model is the AWGN (Additive White Gaussian Noise) channel.
The results presented in Fig. 7 show the BER evolution for different E b /N 0 values of the input signal. The receivers use either the proposed unique front-end architecture or the dedicated front-end stack-up architecture. The results shown in Fig. 7 mainly concern the SIMO receivers using the two different architectures when receiving the dual-channel signal - Stack-up-SIMO, DoubleStructure-SIMO-Channel1 and DoubleStructure-SIMO-Channel2. We also show the BER evolution when the receivers perform a SISO reception - Stack-up-SISO and DoubleStructure-SISO. In both the SIMO and the SISO case, the reception performance obtained with the front-end stack-up architecture is slightly better than that obtained with the unique front-end structure. This is due to the fact that the codes used by the Double Structure are not completely orthogonal, a matter already treated in Section III. We observe that the SIMO reception offers the same performance gain with the two architectures. However, while a real SIMO AWGN channel would provide a 3 dB gain, the simulated channel offers only quasi-AWGN conditions: the two noises added to the initial signal are not completely uncorrelated. This is why the SIMO gain obtained in the ADS simulation does not reach the 3 dB level. In conclusion, we consider that the performance of the proposed architecture is the same as that of the dedicated front-end stack-up.
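As a rough sanity check of the SIMO gain discussed above, here is a simplified sketch assuming uncorrelated AWGN branches and closed-form BPSK error rates, rather than the 802.11g OFDM ADS setup used in the paper:

```python
import numpy as np
from scipy.special import erfc

def q(x):
    # Gaussian tail function Q(x)
    return 0.5 * erfc(x / np.sqrt(2))

ebn0_db = np.arange(0, 11)
ebn0 = 10 ** (ebn0_db / 10)

ber_siso = q(np.sqrt(2 * ebn0))      # BPSK over AWGN, single antenna
ber_simo = q(np.sqrt(2 * 2 * ebn0))  # ideal 2-branch combining: +3 dB effective Eb/N0

for d, b1, b2 in zip(ebn0_db, ber_siso, ber_simo):
    print(f"Eb/N0 = {d:2d} dB  SISO BER = {b1:.2e}  SIMO BER = {b2:.2e}")
```

With partially correlated noise branches, as in the simulated channel mentioned above, the observed gain remains below this ideal 3 dB bound.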
In parallel with this performance study, a comparative power consumption study was carried out. For this study we considered that the single front-end architecture, as well as the dedicated front-end stack-up, imposes the same constraints on the basic blocks (LNA, filters, amplifiers, mixers and oscillators) composing them. A bibliographic survey allowed us to choose the state of the art of these basic components in terms of the performance-power consumption trade-off. The conclusions of this study show a 33% power consumption gain in favor of the single front-end architecture (211 mW instead of 315 mW). The main reason for this important gain is the use of fewer basic blocks. In order to better illustrate this idea, Fig. 8 shows the consumption of the different types of basic blocks used by the single front-end architecture and the front-end stack-up. If we also take into account the fact that the complexity of the proposed structure is significantly smaller - fewer components and no image frequency filter - it becomes obvious that the proposed structure offers an excellent performance-consumption-complexity trade-off.
CONCLUSION
In this article, a novel low complexity architecture was presented. We consider that this type of architecture is a good candidate for integration in receivers dedicated to the future LTE-Advanced standard. One of its major advantages is its low complexity and therefore low power consumption.
Knowing that the power consumption of the analog part of the receiver is directly dependent on the number of basic processing blocks used, the excellent complexity-power-performance trade-off becomes obvious. Indeed, this is confirmed by the fact that, despite the use of a demultiplexing block in the digital domain, the proposed architecture offers a significant 33% power gain compared with the solution proposed by the current state of the art.
The proposed architecture combines two separate innovative methods: the RF orthogonal coding and the double IQ dual-band simultaneous reception technique. The expected performance of its implementation has been presented for a particular study case - the SIMO reception of an OFDM discontinuous spectrum signal. The simulation results show similar performance when comparing the proposed unique front-end structure and the dedicated front-end stack-up, which validates our theoretical study.
As mentioned in section two, one of the sensitive points of this structure is the impact that the IQ mismatches can have on the signal quality. One possible solution to this problem is the use of a digital MMSE method in order to mitigate the IQ mismatch influence on the received signal. This represents one of our forthcoming activities.
Another issue that still has to be addressed concerns the design and realization of a demonstrator. Real environment measurements concerning the single front-end architecture will soon be carried out using a radiofrequency platform with a real SIMO transmission channel.
Fig. 1 LTE-Advanced spectrum decomposition and one possible discontinuous spectrum allocation.
• Availability of a front-end for such a large bandwidth and bandwidths of variable range
• Availability of an analog-to-digital converter with such a high sampling rate and quantization resolution
• Increased decoding complexity, e.g. for channel decoding, and increased soft buffer size
In this paper, we deal with these challenges and we propose a candidate architecture for the following scenario.
Fig. 2 Double IQ architecture dedicated to the simultaneous reception of two separate frequency band signals.
Fig. 3 802.11 BER evolution during a multiband simultaneous reception using the Double IQ architecture and the Stack-up architecture integrating constant orthogonal mismatches.
Fig. 7 BER evolution during the simultaneous SIMO reception of a signal composed of two non-overlapping 802.11g channels.
Fig. 8 Consumption of the different types of basic blocks used by the single front-end architecture and the dedicated front-end stack-up.
ACKNOWLEDGEMENT
The authors wish to sincerely thank Orange Labs which supports this work. |
04121044 | en | [
"math"
] | 2024/03/04 16:41:26 | 2021 | https://hal.science/hal-04121044/file/criticalexponent.pdf | Mabel Cuesta
email: [email protected]
Liamidi Leadi
email: [email protected]
Joseph Liouville
Positive and sign-changing solutions for a quasilinear Steklov nonlinear boundary problem with critical growth
Keywords: Critical growth, indefinite weights, Steklov boundary conditions, p-laplacian operator. 35D05, 35J60, 35J65, 35J70, 35J25, 35J35
In this work we study the existence of positive solutions and nodal solutions for the following p-laplacian problem with Steklov boundary conditions on a bounded regular domain Ω ⊂ R N ,
with given numbers p, N satisfying 1 < p < N , p * := p(N -1) N -p the critical exponent for the Sobolev trace map W 1,p (Ω) → L q (∂Ω) and functions b 0 and a, V possibly indefinite. By minimization on subsets of the associated Nehari manifold, we prove the existence of positive solutions if N ≥ max{2p-1, 3} and the parameter λ close to the principal eigenvalues of the operator -∆ p + V with weighted-Steklov boundary conditions. We also prove the existence on nodal solutions for a definite and N > max{p 2 , 2p, p p-1 , 2}. Our results show striking differences between the cases p > 2, p = 2 and p < 2.
Introduction
Consider the following problem of parameter λ -∆ p u + V (x)|u| p-2 u = 0 in Ω; |∇u| p-2 ∂u ∂ν = λa(x)|u| p-2 u + b(x)|u| p * -2 u on ∂Ω;
(1.1) for 1 < p < N , a, b two given functions in C γ (∂Ω) for some γ > 0, a ≡ 0 with b ≥ 0, V ∈ L ∞ (Ω) and p * := p(N -1) N -p . The domain Ω is a bounded subset of R N of class C 2,α for some 0 < α < 1 and N ≥ 3. Our aim is to prove the existence of solutions for λ close to the principal eigenvalues of (1.5) (see below).
In the case a ≡ 0, V ≡ 1, b ≡ 1, the quasilinear problem (1.1) arises, for instance, when searching for functions u ∈ W 1,p (Ω) for which the norm of the Sobolev's trace immersion i p * ,Ω : W 1,p (Ω) → L p * (∂Ω) is achieved:
$$S_0 := \|i_{p^*,\Omega}\|^{-p} = \inf_{u \in W^{1,p}(\Omega)\setminus W^{1,p}_0(\Omega)} \frac{\int_\Omega \left(|\nabla u|^p + |u|^p\right) dx}{\left(\int_{\partial\Omega} |u|^{p^*}\, d\sigma\right)^{p/p^*}}, \qquad (1.2)$$
where σ is the restriction to ∂Ω of the the (N -1)-Hausdorff measure, which coincides with the usual Lebesgue surface measure as ∂Ω is regular enough. Due to the lack of compactness of i p * ,Ω , the existence of minimizers for (1.2) does not follows by standards methods. Following the ideas of [2], [START_REF] Bonder | Estimates for the Sobolev trace constant with critical exponent and applications[END_REF]and [START_REF] Biezuner | Best constants in Sobolev trace inequalities[END_REF], Fernandez-Bonder and Rossi proved in [START_REF] Bonder | On the existence of extremals for the Sobolev trace embedding theorem with critical exponent[END_REF] that a sufficient condition for the existence of minimizers for (1.2) is that S 0 < K -1 N,p where
$$K_{N,p}^{-1} \stackrel{\mathrm{def}}{=} \inf\left\{ \int_{\mathbb{R}^N_+} |\nabla u|^p\, dx;\ |\nabla u| \in L^p(\mathbb{R}^N_+) \ \text{and} \ \int_{\mathbb{R}^{N-1}} |u|^{p^*}\, dy = 1 \right\}. \qquad (1.3)$$
In the linear case, i.e. p = 2, with b ≡ 1 and V ≡ 0, namely, for the problem
$$(Y)\qquad \begin{cases} \Delta u = 0 & \text{in } \Omega,\\ u > 0 & \text{in } \Omega,\\ \dfrac{\partial u}{\partial \nu} + \dfrac{N-2}{2}\,\beta u = u^{2^*-1} & \text{on } \partial\Omega, \end{cases}$$
which is related to the Yamabé problem when β is constant, equal to the mean curvature of ∂Ω, Adimurthi-Yadava [START_REF] Adimurthi | Positive solution for Neumann problem with critical non linearity on boundary[END_REF] proved that problem (Y) has a solution when β ∈ C^1(∂Ω), N ≥ 3 and there exists a point x_0 ∈ ∂Ω such that
β(x 0 ) < h(x 0 ) := 1 N -1 N -1 i=1 ν i , (1.4)
where the ν i are the principal curvatures at x 0 ∈ ∂Ω with respect to the unit outward normal. Finally, problem (1.1) in case p = 2 and V = 0 can also be related to well known λ-parameter problem of the Brézis-Nirenberg [START_REF] Brézis | Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents[END_REF] with Dirichlet boundary condition
-∆u = λu + |u| 2 * -2 u in Ω, u = 0 on ∂Ω.
Among the huge amount of improvements and generalization of this pioneering work we quote the work of Cerami-Solimini-Struwe [START_REF] Cerami | Some existence results for superlinear elliptic boundary value problems involving critical exponents[END_REF] where they stated the existence of sign changing solutions of the Dirichlet problem for λ ∈ (0, λ 1 ) and N ≥ 6. We have adapted here their approach to our quasilinear problem with nonlinear boundary conditions. Quasilinear elliptic problems with an indefinite potential V have attracted a lot of attention the last decade. After the work concerning the eigenvalue problem with Dirichlet boundary condition with an indefinite weights in [START_REF] Cuesta | A weighted eigenvalue problem for the p-Laplacian plus a potential[END_REF] and the one for the eigenvalue problem with Steklov boundary conditions in [START_REF] Leadi | A weighted eigencurve for steklov problems with a potential[END_REF], some others quasilinear problems with weights have been considered with sublinear, superlinear or concave-convex nonlinear terms. In the present work we would like to explore the effect of sign-changing weights a and V on the multiplicity of solutions for a rather simple critical-exponent quasilinear problem with a parameter λ. From the variational point of view, the geometry of the related functional associated, for example, to the eigenvalue problem -∆ p u + V |u| p-2 u = 0 in Ω, |∇u| p-2 ∂u ∂ν = λa(x)|u| p-2 u on ∂Ω
(1.5) may take in consideration the disjoint subsets ∂Ω a|u| p dσ > 0 and ∂Ω a|u| p dσ < 0.
It is well known (see [START_REF] Leadi | A weighted eigencurve for steklov problems with a potential[END_REF]) that, if a changes sign, there are two principal eigenvalues λ -1 < λ 1 for the above eigenvalue problem. We will prove in this work that positive and sign-changing solutions of problem (1.1) can also be found by minimizing the energy functional on the subset of the Nehari manifolds where ∂Ω a|u| p dσ ≷ 0. By considering indefinite weights, we improve and complete several existing results for similar problems.
This paper is organised as follows. In section 1 we study under which conditions the infimum of the associated energy functional along the Nehari manifold is achieved. We prove in Proposition 2.7 that this is the case whenever this infimum is less than
$$\left(\frac{1}{p} - \frac{1}{p^*}\right) K_0^{-\frac{p^*}{p^*-p}}, \qquad \text{where } K_0 \stackrel{\mathrm{def}}{=} \|b\|_{\infty,\partial\Omega}^{p/p^*}\, K_{N,p}.$$
In order to ensure this inequality we use the well-known technique of mass concentration for the fundamental solutions, i.e. functions defined in R^N_+ realizing the infimum in (1.3). In section 2 we analyse the different Lebesgue norms of these functions and in section 3 we state our main existence result, Theorem 4.2. In section 4 we study the infimum of the associated energy Φ_λ along the so-called nodal subsets of the Nehari manifold. Finally, Theorem 6.2 states an existence result for positive weights a.
Minimization on the Nehari manifold
Let us define the following C 1 -functional on W 1,p (Ω) by
E V (u) def = Ω (|∇u| p + V (x)|u| p ) dx, A(u) def = ∂Ω a |u| p dσ, B(u) def = ∂Ω b|u| p * dσ, Φ λ (u) def = E V (u) -λA(u).
The natural norm of W 1,p (Ω) will be denoted by
• , i.e., ∀u ∈ W 1,p (Ω), u = Ω |∇u| p dx + Ω |u| p dx 1/p .
The Lebesgue norm of L q (Ω) will be denoted by • q and the Lebesgue norm of L q (∂Ω, ρ) by • q,∂Ω , for any q ∈ [q, +∞[. Solutions of problem (1.1) will be understood in the weak sense.
As in [START_REF] Cerami | Some existence results for superlinear elliptic boundary value problems involving critical exponents[END_REF] we will make use of the Nehari manifold associated to our problem. For this end, we define the energy functional
I λ (u) = 1 p Φ λ (u) - 1 p * B(u)
and the Nehari manifold associated to
I λ N = {u ∈ W 1,p (Ω)\{0}; I λ (u), u = 0} = {u ∈ W 1,p (Ω) \ {0}; Φ λ (u) = B(u)}
that we split into three sets
A + = {u ∈ N ; A(u) > 0}, A -= {u ∈ N ; A(u) < 0}, A 0 = {u ∈ N ; A(u) = 0}.
It is well known that critical points of I λ are solutions of problem (1.1) and belong to N . Notice that I λ restricted to N is equal to
I λ (u) = 1 p - 1 p * B(u) = 1 p - 1 p * Φ λ (u).
Minimizing the functional I λ along A ± provided us with positive solutions of our problem (1.1). Precisely, let us set
C ± λ = inf u∈A ± I λ (u) (2.1)
The following result is well known, we give the proof for the sake of completeness.
Lemma 2.1. If C ± λ is achieved and C ± λ > 0 then C ± λ is a critical value of I λ associated to a positive solution of (1.1).
Proof. Let u ∈ A^+ be such that C^+_λ = I_λ(u) = inf_{v ∈ A^+} I_λ(v). By taking |u| instead of u we can assume that the infimum is achieved at some u ≥ 0 in A^+. Furthermore, if we set
J λ = Φ λ -B, we have that u ∈ N =⇒ J λ (u) = 0, u ≡ 0 and J λ (u), u = pΦ λ (u) -p * B(u) = (p -p * )B(u).
Observe that, since 0
< C + λ = I λ (u) = 1 p -1 p * B(u) then B(u) = 0 and therefore J λ (u), u = 0. By Lagrange's Multipliers theorem there exists α ∈ R such that I λ (u) = αJ λ (u). Hence 0 = J λ (u) = I λ (u), u = α(p -p * )B(u) =⇒ α = 0. Thus I λ (u) = 0.
The aim of this section is to prove that the previous infima are achieved and that they are strictly positive. The positivity of I_λ depends on whether λ < λ_1 or λ > λ_{-1}, where λ_1 and λ_{-1} are defined as follows. Let us recall the following results on the eigenvalue problem (1.5) associated to our problem (see [START_REF] Leadi | A weighted eigencurve for steklov problems with a potential[END_REF]). By a principal eigenvalue we mean an eigenvalue having a positive eigenfunction.
Proposition 2.2 ( [START_REF] Leadi | A weighted eigencurve for steklov problems with a potential[END_REF]). Let
α a def = inf{E V (u); u p = 1, A(u) = 0}.
(2.2)
Then problem (1.5) possesses a principal eigenvalue if and only if α a > 0.
Precisely, 1. if α a > 0 and a changes sign then (1.5) admits exactly two principal eigenvalues λ -1 < λ 1 , with
λ 1 := min M + E V , (2.3)
where
M + := {u ∈ W 1,p (Ω); A(u) = 1} and λ -1 = -min M -E V , (2.4)
where M -:= {u ∈ W 1,p (Ω); A(u) = -1};
2. if α a > 0 and a is of definite sign then (1.5) admits exactly one principal eigenvalue, which are either λ 1 or λ -1 ;
3. if α a = 0 then (1.5) has a unique principal eigenvalue λ * given by
λ * = inf M + = -inf M -E V .
Moreover a function u ∈ S is an eigenfunction associated to λ * if and only if A(u) = 0 and E V (u) = α a = 0.
Remark 2.3. Actually, the hypotheses of Theorem 3.3 of [START_REF] Leadi | A weighted eigencurve for steklov problems with a potential[END_REF] are that both λ^D_V (the first eigenvalue of u ↦ -∆_p u + V|u|^{p-2}u with Dirichlet boundary condition) and β(V, a) = inf{E_V(u); A(u) = 0, ||u||_{p,∂Ω} = 1} are > 0. These two hypotheses are equivalent to α_a > 0.
As a straightforward consequence of the above proposition we have Corollary 2.4. Assume α a > 0. For any λ < λ 1 (resp. for any λ > λ -1 ) there exists c > 0 such that, for all u ∈ W 1,p (Ω) satisfying A(u) ≥ 0 (resp.
A(u) ≤ 0) it holds E V (u) -λA(u) ≥ c u p . (2.5)
Remark 2.5. Weak solutions of problem (1.1) and (1.5) belong to L ∞ (Ω) ∩ L ∞ (∂Ω) according to [START_REF] Cuesta | Weighted eigenvalue problems for quasilinear elliptic operators with mixed RobinDirichlet boundary conditions[END_REF]. Consequently weak solutions are of class C 1,µ (Ω) for some 0 < µ < 1 (see [START_REF] Lieberman | Boundary regularity for solutions of degenerate elliptic equations[END_REF]).
Throughout the paper we will always assume α a > 0.
Let us now study the geometry of the fibering maps and the Nehari manifold.
Lemma 2.6.
1. Assume either λ < λ 1 or λ > λ -1 . Then for any u ∈ W 1,p (Ω) such that B(u) = 0, the function t → I λ (tu) has a local maximum at
0 < t u := Φ λ (u) B(u) 1 p * -p , (2.6)
t u u ∈ N and
I λ (t u u) = 1 p - 1 p * Φ λ (u) B(u) p/p * p * p * -p . 2. If λ < λ 1 then there exists a constant c > 0 such that ∀u ∈ A + ∪ A 0 =⇒ u ≥ c and B(u) ≥ c.
(2.7)
3. If λ > λ -1 then there exists a constant c > 0 such that ∀u ∈ A -∪ A 0 =⇒ u ≥ c and B(u) ≥ c .
4. All minimizing sequences for C ± λ are bounded.
5. C ± λ > 0.
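Before the proof, here is a quick numerical check of the fibering-map maximum (2.6), with purely illustrative values of p, p^*, Φ_λ(u) and B(u):

```python
import numpy as np

p, p_star = 2.0, 4.0   # illustrative exponents with p < p*
Phi, B = 3.0, 1.5      # illustrative values of Phi_lambda(u) > 0 and B(u) > 0

def I(t):
    # t -> I_lambda(t u) for a fixed u with the values above
    return (t ** p / p) * Phi - (t ** p_star / p_star) * B

t_u = (Phi / B) ** (1.0 / (p_star - p))          # formula (2.6)

ts = np.linspace(1e-3, 3, 100_000)
t_num = ts[np.argmax(I(ts))]                      # numerical maximizer on a grid

print(t_u, t_num)                                 # should agree up to grid resolution
print(I(t_u),
      (1/p - 1/p_star) * Phi ** (p_star/(p_star - p)) / B ** (p/(p_star - p)))
```

The last line reproduces the closed-form value of I_λ(t_u u) stated in item 1 of the lemma.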
Proof. (1) For any u ∈ W 1,p (Ω) such that B(u) > 0 one easily proved that
g u (t) = t p-1 Φ λ (u) -t p * -1 B(u)
for t > 0, vanished at t u and that the function t → I λ (tu) has a global maximum at t u . Clearly, g u (t) = 0 ⇔ tu ∈ N .
(2) We know from equation (2.5) that there exists a constant c 1 > 0 such that Φ λ (u) ≥ c 1 u p . Moreover using Sobolev's embedding from the trace we have, for some constant c 2 > 0,
B(u) ≤ c 2 b ∞ u p *
and the conclusion follows using that Φ_λ(u) = B(u) because u ∈ N. One can prove (3) in a similar way. (4) Assume by contradiction that a minimizing sequence (u_n)_n ⊂ A^+ is unbounded and take v_n = u_n/||u_n||. Thus, for a subsequence, there exists v_0 ∈ W^{1,p}(Ω) such that v_n ⇀ v_0 weakly, strongly in L^p(Ω) and L^p(∂Ω). Since
1 p - 1 p * Φ λ (v n ) = I λ (u n ) u n p → 0 as n → +∞, then E V (v 0 ) -λA(v 0 ) ≤ 0. (2.8) If v 0 ≡ 0 then 0 = E V (v 0 ) -λA(v 0 ) = lim n→∞ Φ λ (v n ) =⇒ v n → 0 strongly in W 1,p (Ω),
which is in contradiction with the fact that ||v_n|| = 1. Thus v_0 ≢ 0. Also we have A(v_0) > 0, because the possibility A(v_0) = 0 is ruled out by the condition α_a > 0 and (2.8). If λ < λ_1 we then have a contradiction between (2.5) and (2.8). (5) If, for instance, C^+_λ = 0, let (u_n)_n be a bounded minimizing sequence converging to some u_0 weakly in W^{1,p}(Ω), strongly in L^p(Ω) and also strongly in L^p(∂Ω); hence A(u_0) ≥ 0 and
1 p - 1 p * Φ λ (u 0 ) ≤ lim n→∞ I λ (u n ) = C + λ = 0. (2.9)
If u 0 ≡ 0 then we will get from the last inequality that u n → 0 strongly in W 1,p (Ω), in contradiction with (2.7). Thus, u 0 ≡ 0 but now (2.9) contradicts (2.5).
In the next proposition we will prove that the values C ± λ are achieved whenever they are smaller than a certain value involving K N,p if λ is close to λ 1 . This second constraint follows from the necessity to assure that the infimum is achieved at some point lying in the open set
A + . Precisely, let us consider γ a,b def = inf{E V (u); A(u) = 0, B(u) = 1}. (2.10) Proposition 2.7. One has 1. 0 < γ a,b and
C ± λ ≤ 1 p - 1 p * γ p * p * -p a,b .
(2.11)
2. There exists δ 1 > 0 (resp. δ 2 > 0) such that
C + λ < 1 p - 1 p * γ p * p * -p a,b ∀λ ∈ (λ 1 -δ 1 , λ 1 ), ( resp. C - λ < 1 p - 1 p * γ p * p * -p a,b ∀λ ∈ (λ -1 , λ -1 + δ 2 )).
Proof. (1) It follows directly from α a > 0 that γ a,b ≥ 0. Assume by contradiction that γ a,b = 0 and let (u n ) n be a minimizing sequence for γ a,b . Assume furthermore that (u n ) n is an unbounded sequence and take v n = un un . Thus there exists a subsequence, still denoted v n , and a function v 0 such that v n v 0 , strongly in L p (Ω), in L p (∂Ω) and a.e. We have in one hand
Φ λ (v 0 ) ≤ lim inf n→+∞ Φ λ (v n ) ≤ 0, (2.12)
and in other hand A(v 0 ) = 0. Besides v 0 ≡ 0 otherwise we will deduce from (2.12) that v n → 0 strongly in W 1,p (Ω), which is in contradiction with the fact that v n = 1. Thus
α a v 0 p p ≤ E V (v 0 ) = Φ λ (v 0 ) ≤ 0,
which contradicts the hypothesis α a > 0. We conclude that the sequence (u n ) n is bounded. Hence, up to a subsequence, it converges weakly to some u 0 in W 1,p (Ω), strongly in L p (Ω) and in L p (∂Ω). Hence E V (u 0 ) ≤ 0 and A(u 0 ) = 0. If u 0 ≡ 0 we have a contradiction with the hypothesis α a > 0. If u 0 ≡ 0 hence u n converges strongly to 0, in contradiction with B(u n ) = 1.
Next, to prove for instance that
C + λ ≤ 1 p -1 p * γ p * p * -p a,b , let u n be a se- quence in W 1,p (Ω) such that A(u n ) = 0, B(u n ) = 1 and E V (u n ) → γ a,b .
We can assume also that u n ≥ 0 by taking |u n | instead of u n if necessary and, using the same argument as above, the sequence (u
n ) n is bounded in W 1,p (Ω). Let ψ ∈ C 1 (Ω) be any positive function such that supp ψ ∩ ∂Ω ⊂ {x ∈ ∂Ω; a(x) > 0}. Let us take v n = u n + ψ n . Clearly v n -u n → 0. Moreover A(v n ) = ∂Ω av p n = ∂Ω a + (u n + ψ n ) p - ∂Ω a -u p n > A(u n ) = 0, and clearly B(v n ) ≥ B(u n ) = 1.
Furthermore, using the following inequality
||x + y| q -|x| q -|y| q | ≤ C|xy| |x| q-2 + |y| q-2 , (2.13)
valid for any q ≥ 1 and any x, y ∈ R N and using also that the sequence u n is bounded we have
Φ λ (v n ) = E V (u n ) + o(1) = γ a,b + o(1).
Finally, if we consider z n := t vn v n ∈ A + , with t un defined in Proposition (2.6), it comes
C + λ ≤ 1 p - 1 p * Φ λ (v n ) B(v n ) p/p * p * p * -p → 1 p - 1 p * γ p * p * -p a,b .
(2) We only prove the estimate for C^+_λ. By taking t_{ϕ_1}ϕ_1, where ϕ_1 is the unique positive eigenfunction associated to λ_1 such that A(ϕ_1) = 1 and t_{ϕ_1} has been defined in (2.6), by definition of C^+_λ one has
$$\left[\Big(\frac{1}{p}-\frac{1}{p^*}\Big)^{-1} C^+_\lambda\right]^{\frac{p^*-p}{p^*}} \le \frac{\lambda_1-\lambda}{(B(\varphi_1))^{p/p^*}}.$$
Thus, if λ ∈ R is such that (λ_1 - λ)/(B(ϕ_1))^{p/p^*} < γ_{a,b}, i.e., λ > λ_1 - (B(ϕ_1))^{p/p^*} γ_{a,b}, then [(1/p - 1/p^*)^{-1} C^+_λ]^{(p^*-p)/p^*} < γ_{a,b}, which is the stated inequality.
Consequently, let us define
λ ± * def = inf λ ∈ R; C ± λ < 1 p - 1 p * γ p * p * -p a,b (2.14)
As a consequence of (2) in Proposition 2.7 we have λ + * < λ 1 and λ - * > λ -1 and therefore
λ ± * = sup λ ∈ R; C ± λ = 1 p - 1 p * γ p * p * -p a,b . Proposition 2.8. Let λ ∈ R. 1. If λ < λ 1 and C + λ < 1 p - 1 p * K -p * p * -p 0 (2.15) then there exists u ∈ A + ∪ A 0 such that I λ (u) = C + λ . Similarly, if λ > λ -1 and C - λ < 1 p - 1 p * K -p * p * -p 0 (2.16)
then there exists u ∈ A -∪ A 0 such that I λ (u) = C - λ .
2. If furthermore λ + * < λ < λ 1 and (2.15) holds then problem (1.1) with parameter λ possesses a positive solution u satisfying A(u) > 0 and
I_λ(u) = C^+_λ. Similarly, if λ_{-1} < λ < λ^-_* and (2.16) holds, then problem (1.1) with parameter λ possesses a positive solution u satisfying A(u) < 0 and I_λ(u) = C^-_λ.
Proof. We will only give the proof concerning C + λ since the argument is similar for C - λ . Let (u n ) n be a minimizing sequence. By (4) of Lemma 2.6 the sequence (u n ) n is bounded so assume that u n ∈ A + converges weakly to some u 0 , strongly in L p (Ω) and in L p (∂Ω). Clearly A(u 0 ) ≥ 0. Claim: We have
Φ λ (u 0 ) ≤ ( 1 p - 1 p * ) -1 C + λ p * -p p * B(u 0 ) p p * .
(2.17) Indeed, in one hand, using that (u n ) n is a minimizing sequence we have
1 p - 1 p * -1 C + λ = B(u n ) + o(1). (2.18)
Besides, we also have by the Brézis-Lieb lemma ( [START_REF] Brézis | A relation between pointwise convergence of functions and convergence of functionals[END_REF])
1 p - 1 p * -1 C + λ = Φ λ (u 0 ) + ∇(u n -u 0 ) p p + o(1) (2.19)
In other hand, let us choose > 0 such that
1 p - 1 p * -1 C + λ < (K 0 + ) -p * p * -p .
Using again Brézis-Lieb lemma and the fact that p/p * < 1, we get
B(u n ) p/p * ≤ B(u 0 ) p p * + B(u n -u) p p * + o(1),
and hence it comes from (2.18) and Lemma 2.10 (see below) gives
( 1 p - 1 p * ) -1 C + λ p/p * ≤ B(u 0 ) p p * + (K 0 + ) ∇(u n -u 0 ) p p + o(1
C + λ < (K 0 + ) -p p * -p , we obtain ( 1 p - 1 p * ) -1 C + λ p/p * ≤ B(u 0 ) p p * + 1 p - 1 p * -1 C + λ p p * -Φ λ (u 0 ) 1 p - 1 p * -1 C + λ p-p * p *
and the proof of the claim follows. Notice that u 0 ≡ 0 since, otherwise, u n → 0 strongly in W 1,p (Ω) which contradicts (3) of Lemma 2.6. As a consequence of (2.17) and that Φ λ (u 0 ) > 0 we have B(u 0 ) > 0. Finally let us prove that C + λ is achieved at t u 0 u 0 ∈ N . Indeed, again by (1) of Lemma 2.6 we have
1 p - 1 p * -1 C + λ ≤ Φ λ (u 0 ) p * p * -p B(u 0 ) p p * -p while by the claim Φ λ (u 0 ) p * p * -p B(u 0 ) p p * -p ≤ 1 p - 1 p * -1 C + λ
and the equality follows.
(2) Since λ > λ^+_*, we get C^+_λ < (1/p - 1/p^*) γ_{a,b}^{p^*/(p^*-p)} and therefore C^+_λ is achieved at some u ∈ A^+. By replacing u by |u| if necessary, we can assume that u ≥ 0. The result then comes from Lemma 2.1. By the regularity results (see Remark 2.5) and the strong maximum principle of [START_REF] Vazquez | A strong maximum principle for some quasilinear elliptic equations[END_REF], the solution u is strictly positive up to the boundary.

Remark 2.9. Notice that if λ^+_* < λ^-_* then, under the hypotheses of Proposition 2.8, we will obtain two positive solutions of problem (1.1) for any parameter λ ∈ (λ^+_*, λ^-_*) ∩ (λ_{-1}, λ_1): one in A^+ and the other in A^-. However, that λ^+_* < λ^-_* is not clear for general weights a, b and V. We have used in the previous proposition the following Cherrier-type inequality, which has been proved in [START_REF] Biezuner | Best constants in Sobolev trace inequalities[END_REF] in the case b ≡ 1 and can be trivially generalized to any positive bounded weight b:

Lemma 2.10. [START_REF] Biezuner | Best constants in Sobolev trace inequalities[END_REF] For any ε > 0 there exists C_ε > 0 such that for all u ∈ W^{1,p}(Ω) it holds
$$\left(\int_{\partial\Omega} b|u|^{p^*}\, d\sigma\right)^{p/p^*} \le (K_0 + \epsilon) \int_\Omega |\nabla u|^p\, dx + C_\epsilon \int_\Omega |u|^p\, dx, \qquad \text{where } K_0 := K_{N,p}\, \|b\|_\infty^{p/p^*} \qquad (2.21)$$
and K_{N,p} is defined in (1.3).
Estimates of the L p -norms of fundamental solutions
We turn now our attention to the problem of finding the values λ for which we have S^±_λ < K_0^{-1}, where we denote here, for simplicity,
$$S^\pm_\lambda = \left[\left(\frac{1}{p} - \frac{1}{p^*}\right)^{-1} C^\pm_\lambda\right]^{\frac{p^*-p}{p^*}}. \qquad (3.1)$$
It is well known (see [START_REF] Nazaret | Best constants in Sobolev trace inequalities on the halfspace[END_REF]), that the value K -1 N,p defined in (1.3) is achieved at functions of the form
$$U_{\epsilon,y_0}(y,t) = \epsilon^{-\frac{N-p}{p}}\, U\!\left(\frac{y-y_0}{\epsilon}, \frac{t}{\epsilon}\right),$$
with y_0 ∈ R^{N-1} arbitrary and ε ∈ ]0, +∞[, where
$$U(y,t) = \frac{1}{\left((t+1)^2 + |y|^2\right)^{\frac{N-p}{2(p-1)}}}.$$
The functions U ,y 0 are usually called fundamental solutions. The constant K -1 N,p can be computed explicitly (see [START_REF] Bonder | Estimates for the Sobolev trace constant with critical exponent and applications[END_REF][START_REF] Nazaret | Best constants in Sobolev trace inequalities on the halfspace[END_REF]) and it is equal to
$$K_{N,p}^{-1} = \left(\frac{N-p}{p-1}\right)^{p-1} \pi^{\frac{p-1}{2}} \left(\frac{\Gamma\!\left(\frac{N-1}{2(p-1)}\right)}{\Gamma\!\left(\frac{p(N-1)}{2(p-1)}\right)}\right)^{\frac{p-1}{N-1}}.$$
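For concreteness, this closed-form constant can be evaluated numerically; the sketch below (an illustration, not part of the original argument) uses SciPy and a few admissible pairs (N, p) with 1 < p < N. For N = 3, p = 2 it returns √π, in agreement with the classical sharp trace constant.

```python
import numpy as np
from scipy.special import gamma

def K_inv(N, p):
    """Numerical value of K_{N,p}^{-1} from the closed-form expression above."""
    a = ((N - p) / (p - 1)) ** (p - 1)
    b = np.pi ** ((p - 1) / 2)
    c = (gamma((N - 1) / (2 * (p - 1)))
         / gamma(p * (N - 1) / (2 * (p - 1)))) ** ((p - 1) / (N - 1))
    return a * b * c

for N, p in [(3, 2), (4, 2), (5, 3)]:
    print(N, p, K_inv(N, p))   # (3, 2) gives sqrt(pi) ~ 1.7725
```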
Let us assume for convenience that
x 0 = 0 ∈ ∂Ω and |Ω + a ∩ B s (0)| > 0 ∀0 < s < r (3.2)
for some r > 0, where Ω^+_a = {x ∈ ∂Ω; a(x) > 0}. Let φ be a smooth radial function with compact support in the ball B_{r/2}(0) satisfying φ ≡ 1 in B_{r/4}(0). For any ε > 0 let us choose the following test functions:
$$u_\epsilon(y,t) = U_\epsilon(y,t)\,\phi(y,t) = \epsilon^{\frac{N-p}{p(p-1)}}\, \frac{\phi(y,t)}{\left((t+\epsilon)^2 + |y|^2\right)^{\frac{N-p}{2(p-1)}}}. \qquad (3.3)$$
Notice that A(u ) > 0. In order to give the asymptotic development with respect to the parameter of the quotient Φ(u ) B(u ) p/p * , we will compute each of integrals involved. Much of the work have been done by [START_REF] Bonder | Estimates for the Sobolev trace constant with critical exponent and applications[END_REF] and we refer the reader to this paper for full details. To make the computations simpler, we will choose a special parametrization of the boundary ∂Ω around 0 ∈ ∂Ω.
Since we are assuming that ∂Ω is of class C 2 , there exists c > 0 and a C 2function ρ :
{y ∈ R N -1 , |y| ≤ c} → R such that Ω ∩ B r (0) = {(y, t) ∈ Q c ; t > ρ(y)} ∂Ω ∩ B r (0) = {(y, t) ∈ Q c , ; t = ρ(y)}, (3.4)
where
Q c := {(y, t), |y| ≤ c, 0 ≤ t ≤ c} and ρ(y) = 1 2 N -1 i=1 ν i y 2 i + O(|y| 3 ) (3.5) for some ν i , i = 1, • • • , N -1.
We set hereafter
h 0 = 1 N -1 N -1 i=1 ν i . (3.6)
The value h 0 is known as the mean curvature of ∂Ω at 0 with respect to the outward normal ν.
Proposition 3.1. Let N ≥ 2p - 1. Assume for convenience the hypothesis (3.2) and let u_ε be as in (3.3). Then
1. ∫_Ω |∇u_ε|^p dx = A_1 + f_1(ε), where
$$f_1(\epsilon) := \begin{cases} A_2\,\epsilon + O\!\left(\epsilon^{\frac{N-p}{p-1}}\right) & \text{if } p < \frac{N+1}{2},\\[4pt] -\frac{h_0}{2}\,\omega_{N-2}\,\epsilon\ln(1/\epsilon) + O(\epsilon) & \text{if } p = \frac{N+1}{2}, \end{cases}$$
and
$$A_1 = \frac{1}{2}\left(\frac{N-p}{p-1}\right)^{p-1} \beta\!\left(\frac{N-1}{2}, \frac{N-1}{2(p-1)}\right)\omega_{N-2}; \qquad A_2 = -\frac{h_0}{4}\left(\frac{N-p}{p-1}\right)^{p} \beta\!\left(\frac{N+1}{2}, \frac{N-2p+1}{2(p-1)}\right)\omega_{N-2}.$$
2. ∫_Ω V(x)|u_ε|^p dx = f_2(ε), where
$$f_2(\epsilon) := \begin{cases} O(\epsilon^{p}) & \text{if } p^2 < N,\\ O(\epsilon^{p}\ln(1/\epsilon)) & \text{if } p^2 = N,\\ O\!\left(\epsilon^{\frac{N-p}{p-1}}\right) & \text{if } p^2 > N. \end{cases}$$
3. Assume that b(0) = ||b||_∞ and b(y, ρ(y)) - b(0) = O(|y|^{γ+1}) for all |y| ≤ c, for some γ > 0. Then
$$\int_{\partial\Omega} b|u_\epsilon|^{p^*}\, d\sigma = B_1 + B_2\,\epsilon + o(\epsilon),$$
where
$$B_1 = \frac{1}{2}\,\|b\|_\infty\, \beta\!\left(\frac{N-1}{2}, \frac{N-1}{2(p-1)}\right)\omega_{N-2}, \qquad B_2 = -\frac{1}{2}\,\|b\|_\infty\, (N-1)h_0\, \beta\!\left(\frac{N-1}{2}, \frac{N-1}{2(p-1)}\right)\omega_{N-2}.$$
4. Assume that a ∈ C^γ close to 0 for some γ > 0. Then
$$\int_{\partial\Omega} a(x)|u_\epsilon|^p\, d\sigma = f_3(\epsilon),$$
where
$$f_3(\epsilon) = \begin{cases} C_1\,\epsilon^{p-1} + o(\epsilon^{p-1}) & \text{if } N > p^2 - p + 1,\\ a(0)\,\omega_{N-2}\,\epsilon^{p-1}\ln(1/\epsilon) + O(\epsilon^{p-1}) & \text{if } N = p^2 - p + 1,\\ O\!\left(\epsilon^{\frac{N-p}{p-1}}\right) & \text{if } N < p^2 - p + 1, \end{cases} \qquad (3.7)$$
and
$$C_1 = \frac{1}{2}\, a(0)\, \beta\!\left(\frac{N-1}{2}, \frac{N-p^2+p-1}{2(p-1)}\right)\omega_{N-2}.$$
We recall that
$$\omega_{N-1} = \text{measure of the unit sphere } S^{N-1} \text{ of } \mathbb{R}^N = \frac{2\pi^{N/2}}{\Gamma(N/2)}$$
and
$$\beta(x,y) := \int_0^\infty \frac{t^{x-1}\, dt}{(1+t)^{x+y}} = \int_0^1 t^{x-1}(1-t)^{y-1}\, dt = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$$
for x, y > 0.
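These identities can be cross-checked numerically (an illustration only; the test values of N, x and y are arbitrary):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def omega(N):
    # surface measure of the unit sphere S^{N-1} in R^N
    return 2 * np.pi ** (N / 2) / gamma(N / 2)

def beta_integral(x, y):
    # beta(x, y) computed from the integral over (0, 1)
    val, _ = quad(lambda t: t ** (x - 1) * (1 - t) ** (y - 1), 0, 1)
    return val

print(omega(2), omega(3))                 # 2*pi and 4*pi
x, y = 2.3, 1.7
print(beta_integral(x, y), gamma(x) * gamma(y) / gamma(x + y))  # should agree
```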
Proof. (1) -( 2) These estimates can be found in [START_REF] Bonder | Estimates for the Sobolev trace constant with critical exponent and applications[END_REF].
( From basic integration, we deduce for any a > -1 and b > 0 that
c/ 0 t a (1 + t 2 ) b dt = 1 2 β a+1 2 , 2b-a-1 2 + O( 2b-a-1 ) if 2b -a -1 > 0, ln(1/ ) + O(1) if 2b -a -1 = 0, O( 2b-a-1 ) if 2b -a -1 < 0. (3.8) Thus for any a, b, ∈ R + we have |y|≤c |y| a ( 2 + |y| 2 ) b dy = ω N -2 1 2 N -1+a-2b β a+N -1 2 , 2b-a-N +1 2 + O(1) if 2b -a -N + 1 > 0, ω N -2 ln(1/ ) + O(1) if 2b -a -N + 1 = 0, O(1) if 2b -a -N + 1 < 0. ( 3.9)
By expanding (( + ρ(y)) 2 and using Taylor's theorem we find -N -p
N -1 p-1 ∂Ω∩Br(0) |y| γ+1 |u | p * dσ = |y|≤c |y| γ+1 1 + |∇ρ(y)| 2 dy (( + ρ(y)) 2 + |y| 2 ) p(N -1) 2(p-1) = |y|≤c |y| γ+1 dy ( 2 + |y| 2 ) p(N -1) 2(p-1) +O |y|≤c |y| γ+3 dy ( 2 + |y| 2 ) p(N -1) 2(p-1) = O( γ+N -p p-1 (N -1) ) if N > p(γ + 1) -γ O(ln(1/ )) if N = p(γ + 1) -γ O(1) if N < p(γ + 1) -γ. Since N -1 p-1 + N + γ -p p-1 (N -1) = γ + 1 then
p-1 I = |y| c 1 + 1 2 |∇ρ(y)| 2 + O(|y| 4 ) ( 2 + ρ(y) 2 + 2 ρ(y) + |y| 2 ) p(N -p) 2(p-1) dy = |y| c dy ( 2 + |y| 2 ) p(N -p) 2(p-1) -p(N -p) p-1 |y| c
ρ(y)dy
( 2 + |y| 2 ) p(N -p) 2(p-1) +1 + O |y| c |y| 2 ( 2 + |y| 2 ) p(N -p) 2(p-1) = I 1 -p(N -p)
p-1
I 2 + O(I 3 )
where
I 1 = ω N -2 × 1 2 p-N -1 p-1 β N -1 2 , N -p 2 +p-1 2(p-1) + O(1) if N > p 2 -p + 1 ln(1/ ) + O(1) if N = p 2 -p + 1 C 2 if N < p 2 -p + 1 and C 2 = 1 p 2 -p+1-N |c| p 2 -p+1-N ω N -2 > 0. Clearly I 2 = O(I 1 ) and I 3 = 2 O(I 1 ). Consequently I = 1 2 ω N -2 p-1 β N -1 2 , N -p 2 +p-1 2(p-1) + O( N -p p-1 ) if N > p 2 -p + 1, O( N -p p-1 ln(1/ )) if N = p 2 -p + 1; O( N -p p-1 ) if N < p 2 -p + 1.
Similar computations for II give
II = N -p p-1 |y|≤c |y| γ dy ( 2 + |y| 2 ) p(N -p) 2(p-1) = O( p-1+γ ) if N > p 2 -(p -1)(1 -γ); O N -p p-1 ln(1/ ) if N = p 2 -(p -1)(1 -γ); O( N -p p-1 ) if N < p 2 -(p -1)(1 -γ).
Since by hypothesis γ > 0 then II = I + o(I) and we conclude.
In section 5 we will need the ε-asymptotics of several L^q-norms of the fundamental solution u_ε defined in (3.3). Proposition 3.2.
1. ∇u 1 = f 4 ( ), where
f 4 ( ) = O N -p p(p-1) if p > 2N -1 N , O N -p p(p-1) ln( 1 ) if p = 2N -1 N , O( N -N p ) if p ≤ 2N -1 N . 2. ∇u p-1 p-1 = O( N p -1 ).
3. u 1 = f 5 ( ), where
f 5 ( ) = O N -p p(p-1) if p > 2N N +1 , O N -p p(p-1) ln( 1 ) if p = 2N N +1 , O( N +1-N p ) if p < 2N N +1 . 4. u p-1 p-1 = O( N p -1 ).
5. ||u_ε||_{1,∂Ω} = f_4(ε). 6. ||u_ε||^{p-1}_{p-1,∂Ω} = O(ε^{N/p - 1}).
Proof. Let us denote by α = (N-p)/(p(p-1)). (1) We have
|∇u | = N -p p -1 α (t + ) 2 + |y| 2 -N -1 2(p-1) in B r/4 (0). so -α Ω |∇u | dx = N -p p -1 Ω∩B r/4 (0) (t + ) 2 + |y| 2 -γ dydt + O(1)
,
where γ = N -1 2(p-1)
. Notice that the integral on the right hand side goes to a constant as goes to 0 if 2γ -N < 0, that is, if p > 2N -1 N . In the case 1 < p ≤ 2N -1 N let us compute this integral as follows. We write Ω∩B r/4 (0)
(t + ) 2 + |y| 2 -γ dydt = Qc dtdy [(t + ) 2 + |y| 2 ] γ - Qc\Ω dtdy [(t + ) 2 + |y| 2 ] γ = I 1 -I 2 + O(1)
After firstly changing the variables t and y by t and .y respectively and secondly changing y by (t + 1).z, one gets
I 1 = C N -2γ +∞ 0 (1 + t) N -1-2γ dt +∞ 0 r N -2 (1 + r 2 ) γ dt + O(1) = O N -2γ since N -2γ = N p-2N +1 p-1 < 0 in the case 1 < p < 2N -1 N . When p = 2N -1 N
we have
I 1 = C ln 1 + O(1) = -C ln ( ) + O(1)
.
For I 2 we have
I 2 = |y| c dy ρ(y) 0 dt ((t + ) 2 + |y| 2 ) γ = |y|≤c ρ(y)dy ( 2 + |y| 2 ) γ + O |y| c |y| 4 dy ( 2 + |y| 2 ) γ+1 = O |y| c |y| 2 dy ( 2 + |y| 2 ) γ + O |y| c |y| 4 dy ( 2 + |y| 2 ) γ+1 = o N -2γ . Thus Ω |∇u | dx = α O(1) if p > 2N -1 N , C ln( 1 ) + O(1) if p = 2N -1 N , O( N -2γ ) if p < 2N -1
N , and the conclusion follows.
(2) We have
-α(p-1) Ω |∇u | p-1 dx = N -p p -1 p-1 B r/4 (0) (t + ) 2 + |y| 2 -γ(p-1) dydt + O(1).
Since in this case 2γ(p -1) -N = -1 < 0 the integral on the right handside converges to a constant as → 0 and the result follows.
(3) Set now γ 1 = N -p 2(p-1) . We have, by letting → 0 in the integral below,
-α Ω u (y, t)dtdy = Ω∩B r/4 (0) dydt [(t + ) 2 + |y| 2 ] γ 1 = O(1) if N -2γ 1 > 0. Notice that N -2γ 1 > 0 ⇐⇒ p > 2N N +1 . If p ≤ 2N N +1 we write -α Ω u (y, t)dtdy = I 1 -I 2 + O(1)
with
I 1 = Qc dydt [(t + ) 2 + |y| 2 ] γ 1 and I 2 = Qc\Ω dydt [(t + ) 2 + |y| 2 ] γ 1 .
Following the computations of (1) (with γ 1 instead of γ) we will have
I 1 = C ln(1/ ) + O(1) if p = 2N N +1 , O( N -2γ 1 ) if p < 2N N +1 ,
and besides
I 2 = O( N -2γ 1 ).
Thus
Ω |u | dx = α O(1) if p > 2N N +1 , C ln( 1 ) + O(1) if p = 2N N +1 , O( N -2γ 1 ) if p < 2N N +1,
and the conclusion follows.
(4) We have
-α(p-1) Ω |u | p-1 dx = Ω∩B r/4 (0) (t + ) 2 + |y| 2 -γ 1 (p-1) dydt + O(1).
In this case 2γ 1 (p -1) -N = -p < 0 the the integral on the right converges to a constant as → 0 and the result follows.
(5) In this case we have
-α ∂Ω |u | dσ = |y|≤c 1 + |∇ρ(y)| 2 dy (( + ρ(y)) 2 + |y| 2 ) γ 1 + O(1) = I 1 + O(1)
and
I 1 = |y|≤c dy (( + ρ(y)) 2 + |y| 2 ) γ 1 + O |y|≤c |y| 2 dy (( + ρ(y)) 2 + |y| 2 ) γ 1 = I 2 + I 3 .
By expanding ( + ρ(y)) 2 and using Taylor's theorem we have
I 2 = |y|≤c dy ( 2 + |y| 2 ) γ 1 -2γ 1 |y|≤c ρ(y) dy ( 2 + |y| 2 ) γ 1 +1 +O |y|≤c |y| 4 dy ( 2 + |y| 2 ) γ 1 +1 = I 1 2 + I 2 2 If 2γ 1 -N + 1 < 0, i.e., if p > 2N -1 N
then I 1 2 converges to a constant as goes to 0 and I 1 2 is O(ln( 1)) if p = 2N -1 N . In the case p < 2N -1
N
we have
I 1 2 = N -1-2γ 1 c/ 0 r N -2 (1 + r 2 ) γ 1 dt = O( N -1-2γ 1 ).
Finally one can easily check that the remaining terms can be neglected when compared with I 1 2 and the result follows. ( 6) By expanding ( + ρ(y)) 2 and using Taylor's theorem as above we have
-α(p-1) u p-1 p-1,∂Ω = |y|≤c 1 + |∇ρ(y)| 2 dy (( + ρ(y)) 2 + |y| 2 ) γ 1 (p-1 + O(1) = |y|≤c dy ( 2 + |y| 2 ) N -p 2 -2(N -p) |y|≤c ρ(y) dy ( 2 + |y| 2 ) N -p 2 +O |y|≤c |y| 2 dy ( 2 + |y| 2 ) N -p 2
and, since (N -p) -N + 1 = -p + 1 < 0, the first integral on the right converges as goes to 0 and we get the result. [START_REF] Bonder | On the existence of extremals for the Sobolev trace embedding theorem with critical exponent[END_REF] As previously and, using that 2γ
1 (p * -1) = N , -α(p * -1) ∂Ω |u | p * -1 dσ = |y|≤c 1 + |∇ρ(y)| 2 dy (( + ρ(y)) 2 + |y| 2 ) γ 1 (p * -1) + O(1) = |y|≤c dy (( + ρ(y)) 2 + |y| 2 ) N/2 +O |y|≤c |y| 2 dy (( + ρ(y)) 2 + |y| 2 ) N/2 .
By expanding ( + ρ(y)) 2 and using Taylor's theorem the first integral is now
I 1 = -1 c/ 0 r N -2 (1 + r 2 ) N/2 dt = O( -1 )
and all the other integral are negligible when compared with I 1 . Finally, since α(p * -1) = N p the result follows.
Existence of positive solutions
We can now give sufficient conditions on V, a, b and λ to fulfil the condition S^±_λ < K_0^{-1}. By taking -a instead of a, one can prove similar results in order to have the inequality S^-_λ < K_0^{-1}. We will assume here the following hypothesis (B): there exists a point x_0 ∈ ∂Ω such that (B)
b ∞ is achieved at x 0 , b(x 0 ) -b(x) = O(|x -x 0 | γ+1 ) for some γ > 0, a ∈ C γ close to x 0 for some γ > 0, a(x 0 ) > 0. (4.1) Proposition 4.1. Let N ≥ 2p -1.
Assume that there exists a point x 0 ∈ ∂Ω satisfying hypothesis (B) in (4.1). Then
C + λ < 1 p - 1 p K -p * p * -p 0
holds in the following cases:
1. for any λ ∈ R if p > 2 and the mean curvature h 0 at x 0 is positive;
2. for λ, a(x 0 ) and h 0 satisfying
N -2 2 h 0 + λa(x 0 ) > 0 (4.2) if p = 2; 3. for any λ > 0 if 1 < p < 2.
Proof. By definition of S ± λ in (3.1) we have
S + λ ≤ Φ λ (u ) B(u ) p/p * .
Notice that A(u_ε) > 0 if ε and r are small enough, as a consequence of the estimate (4) in Proposition 3.1. We are going to prove that there exists a positive constant Λ such that
$$K_0\, \frac{\Phi_\lambda(u_\epsilon)}{B(u_\epsilon)^{p/p^*}} = 1 - \Lambda\, g(\epsilon) \qquad (4.3)$$
with
$$g(\epsilon) \stackrel{\mathrm{def}}{=} \begin{cases} \epsilon & \text{if } p \ge 2,\ N > 2p-1;\\ \epsilon\ln(1/\epsilon) & \text{if } p \ge 2,\ N = 2p-1;\\ \epsilon^{p-1} & \text{if } 1 < p < 2. \end{cases} \qquad (4.4)$$
First case: If p > 2 and N > 2p - 1, then the integrals (c) and (d) in Proposition 3.1 are o(ε) and therefore
$$\frac{\Phi_\lambda(u_\epsilon)}{B(u_\epsilon)^{p/p^*}} = \frac{A_1 + A_2\epsilon + o(\epsilon)}{\left(B_1 + B_2\epsilon + o(\epsilon)\right)^{p/p^*}} = \frac{A_1}{B_1^{p/p^*}}\left(1 + \Big(\frac{A_2}{A_1} - \frac{p}{p^*}\frac{B_2}{B_1}\Big)\epsilon + o(\epsilon)\right).$$
We have
A 2 A 1 = - 1 2 N -p N -2p + 1 (N -1)h 0 , B 2 B 1 = - 1 2 (N -1)h 0 ,
and hence we define Λ as
Λ := - A 2 A 1 - p p * B 2 B 1 = (N -p)(p -1) N -2p + 1 h 0 > 0. Notice that A 1 B p/p * 1 = K -1 0 . Second case: If p > 2 and N = 2p -1 then Φ λ (u ) B(u ) p/p * = A 1 -h 0 ω N -2 2 ln(1/ ) + O( ) (B 1 + O( )) p/p * = A 1 B p/p * 1 1 - h 0 ω N -2 2A 1 ln(1/ ) + O( )
Here we define
Λ := h 0 ω N -2 2A 1 > 0.
Third case: If 1 < p < 2 then
K 0 Φ λ (u ) B(u ) p/p * = A 1 -λC 1 p-1 + o( p-1 ) (B 1 + o( p-1 )) p/p * = A 1 B p/p * 1 -λ C 1 B p/p * 1 p-1 + o( p-1 )
Notice that if 1 < p < 2 one has ln(1/ ) = o( p-1 ). Here
Λ := λ C 1 A 1 > 0
in the case λ > 0 and a(x 0 ) > 0.
Fourth case:
If p = 2 and N > 2p -1 = 3 then Φ λ (u ) B(u ) p/p * = A 1 + (A 2 -λC 1 ) + o( ) (B 1 + B 2 + o( )) p/p * = A 1 B p/p * 1 1 + A 2 -λC 1 A 1 - p p * B 2 B 1 + o( ).
Now we have (in this case)
-Λ := A 2 -λC 1 A 1 - p p * B 2 B 1 = - N -2 N -3 h 0 -λ 2 N -3 a(x 0 )
and therefore, if (4.2) holds, Λ > 0.
Fifth case:
If p = 2 and N = 2p -1 = 3 then Φ λ (u ) B(u ) p/p * = A 1 + -h 0 ω 1 2 -λa(x 0 )ω 1 ln(1 ) + O( ) (B 1 + O( )) p/p * = = A 1 B p/p * 1 1 + -h 0 2 -λa(x 0 ) A 1 ω 1 ln(1/ ) + O( ).
Here we have
Λ := h 0 2 + λa(x 0 ) A 1 ω 1 .
Thus, if (4.2) holds, Λ > 0.
As a direct consequence of Proposition 4.1 and Proposition 2.8 we can now formulate the main result of this section.

Theorem 4.2. Let N ≥ 2p - 1. Assume that there exists a point x_0 ∈ ∂Ω such that hypothesis (B) of (4.1) is satisfied. Let λ ∈ R satisfy
λ^+_* < λ < λ_1,   (4.5)
with λ^+_* defined in (2.14). Then problem (1.1) possesses a positive solution u satisfying A(u) > 0 in the following cases:
1. if p > 2 and the mean curvature at x 0 satisfies h 0 > 0, 2. if 1 < p < 2, λ > 0, 3. if p = 2 and a(x 0 ), h 0 satisfy (4.2).
We also have the analogous result when considering -a instead of a:

Theorem 4.3. Let N ≥ 2p - 1 and assume that there exists a point x_0 ∈ ∂Ω such that hypothesis (B) of (4.1) is satisfied for b and -a. Let λ ∈ R satisfy
λ_{-1} < λ < λ^-_*,   (4.6)
with λ^-_* defined in (2.14). Then problem (1.1) possesses a positive solution u satisfying A(u) < 0 in the following cases:
1. if p > 2 and the mean curvature at x_0 satisfies h_0 > 0, 2. if 1 < p < 2, λ < 0 and a(x_0) < 0, 3. if p = 2 and a(x_0), h_0 satisfy (4.2).

Remark 4.4. 1. Notice that if p = 2, V ≡ 0, a = β then λ_1 = 0, λ^+_* = -∞. Condition (4.2) for λ = 1 is condition (1.4) of [START_REF] Adimurthi | Positive solution for Neumann problem with critical non linearity on boundary[END_REF] for the Yamabe problem (Y).
2. In order to have λ > 0 in the case 1 < p < 2, one can require that λ_1 > 0 (resp. λ_{-1} < 0). Thus, it is enough to ask, for instance, that inf{E_V(u); ||u||_p = 1} > 0, a condition weaker than α_a > 0.
Minimisation along nodal subsets of the Nehari manifold
In order to find nodal solutions we introduce the following nodal subsets of the Nehari manifold:
N + = {u ∈ W 1,p (Ω); u ± ∈ A + }, N -= {u ∈ W 1,p (Ω); u ± ∈ A -} (5.1)
and let us define
D ± λ = inf u∈N ± I λ (u).
(5.2)
Clearly 2C ± λ ≤ D ± λ for any λ ∈ R since I λ (u) = I λ (u + ) + I λ (u -) ≥ C + λ + C + λ .
In what follows we are going to show, under certain conditions on λ, p, a and b, similar to those of the first section, that both D^±_λ are achieved, providing us with a pair of nodal solutions of problem (1.1). Our intention is to prove now that the Palais-Smale condition is satisfied. For any u ∈ N^+ we denote by
T_{N^+}(u) = {v ∈ W^{1,p}(Ω); ⟨Φ'_λ(u^±), v⟩ = ⟨B'(u^±), v⟩}   (5.5)
the tangent subspace to N^+ at u. If L ∈ W^{-1,p}(Ω), by ||L||_{T_{N^+}(u)} we mean the norm of the restriction of L to the subspace T_{N^+}(u).

Proposition 5.1. Let λ < λ_1 and c ∈ R satisfy c < C^+_λ + (1/p - 1/p^*) K_0^{-p^*/(p^*-p)}. Then I_λ satisfies the Palais-Smale condition at level c on N^+, i.e., any sequence (u_n)_n ⊂ N^+ satisfying
(PS1) I_λ(u_n) → c,  (PS2) ||I'_λ(u_n)||_{T_{N^+}(u_n)} = o(1)   (5.3)
possesses a convergent subsequence. Similarly, if λ > λ_{-1} and c < C^-_λ + (1/p - 1/p^*) K_0^{-p^*/(p^*-p)}, then I_λ satisfies the Palais-Smale condition at level c on N^-.

Proof. We only prove the first case. Let (u_n)_n be a sequence in N^+ satisfying (5.3). Using that I_λ(u_n) = (1/p - 1/p^*) Φ_λ(u_n), we can prove as in (4) of Lemma 2.6 that the sequence is bounded. Let u be such that, up to a subsequence, u_n ⇀ u weakly, strongly in L^p(Ω), in L^p(∂Ω) and a.e. Then we also have u^±_n ⇀ u^± weakly, strongly in L^p(Ω) and in L^p(∂Ω). Let us assume by contradiction that, for instance, u^+_n does not converge to u^+ strongly in W^{1,p}(Ω); since we are then assuming that ||∇u^+_n - ∇u^+||_p ↛ 0, we get ||∇u^+_n - ∇u^+||_p ≥ (K_0 + ε)^{-p^*/(p(p^*-p))} + o(1).
Besides, using again Brézis-Lieb Lemma and (5.5) we have
I λ (u + n ) = I λ (u + n -u + ) + I λ (u + ) + o(1) = ( 1 p - 1 p * ) ∇u + n -∇u + p p + I λ (u + ) + o(1) ≥ ( 1 p - 1 p * )(K 0 + ) -p * p * -p + o(1), since I λ (u + ) ≥ 0. Also one has I λ (u - n ) ≥ C + λ . Finally we have the estimate I λ (u n ) = I λ (u + n ) + I λ (u - n ) ≥ ( 1 p - 1 p * )(K 0 + ) -p * p * -p + I λ (u - n ) + o(1) ≥ ( 1 p - 1 p * )(K 0 + ) -p * p * -p + C + λ + o(1).
Since by hypothesis (PS1) we have
I λ (u n ) → c < C + λ + ( 1 p -1 p * )K -p * p * -p 0
, we get a contradiction by choosing > 0 small enough.
As in the previous section, we need to assure that the infima D ± λ are not achieved for any u ∈ N ± satisfying either A(u + ) = 0 or A(u -) = 0. Let us introduce the values
η + λ def = inf{I λ (u); u ∈ N + , A(u + ) = 0, A(u -) ≥ 0}; η - λ def = inf{I λ (u); u ∈ N -, A(u + ) = 0, A(u -) ≤ 0}.
(5.6)
Clearly D ± λ ≤ η ± λ . Now, we prove that the infima D ± λ are achieved provided they are sufficiently smaller. Proposition 5.2. Let λ < λ 1 and assume that
D + λ < min η + λ , C + λ + 1 p - 1 p * K -p * p * -p 0 .
Then there exists u ∈ N + solution of problem (1.1) satisfying
I λ (u) = D + λ . Similarly, if λ > λ -1 and D - λ < min η - λ , C - λ + 1 p - 1 p * K -p * p * -p 0 then exists v ∈ N -solution of the problem (1.1) satisfying I λ (v) = D - λ .
Proof. First we are going to prove that we can find a minimizing sequence for the infimun D + λ that satisfies the hypothesis (P S1) and (P S2) of the previous proposition. The idea is to apply Ekeland's variational principle to the complete metric space X = N + inherited with the distance of W 1,p (Ω). Notice that
N + = {u ∈ W 1,p (Ω); u ± ∈ N , A(u ± ) ≥ 0} ∩ { u ± ≥ c and B(u ± ) ≥ c}
according to the estimates (2.7). For any > 0 let u ∈ N + such that I λ (u ) ≤ D + λ + 2 . We can assume that > 0 such that 0 < 2 < η + λ -D + λ . By Ekeland's variational principle (see [START_REF] Ekeland | On the variational principle[END_REF]) there exists v ∈ X such that
(E1) I λ (v ) < I λ (u ), (E2) dist (v , u ) < , (E3) I λ (v ) ≤ I λ (w) + v -w ∀w ∈ X, w = v .
Using the fact that D + λ < η + λ , we can assume that A(v ± ) > 0 otherwise we will have
D + λ + 2 ≥ I λ (v ) ≥ η + λ ,
which is a contradiction. For any w ∈ W 1,p (Ω) consider w t = v + tw for t small enough to have B(w ± t ) > 0 and A(w ± t ) > 0. Put
s 1 (t) = Φ λ (w + t ) B(w + t ) 1 p * -p ; s 2 (t) = Φ λ (w - t ) B(w - t ) 1 p * -p so s 1 (t)w + t -s 2 (t)w - t ∈ N + . Hence, using (E3), I λ (v ) -I λ (s 1 (t)w + t -s 2 (t)w - t ) t t v -s 1 (t)w + t -s 2 (t)w - t ≤ . (5.7) If we write h(t) = I λ (s 1 (t)w + t -s 2 (t)w - t ) then h(0) = I λ (v )
and by elementary computations
h (0) = I λ (v ), s 1 (0)v + -s 2 (0)v -+ w = I λ (v ), w , lim t→0 v -s 1 (t)(v + tw) + + s 2 (t)(v + tw) - t = -v + s 1 (0) + v -s 2 (0) -w , s 1 (0) = 1 p * -p Φ λ (v + ), w -B (v + ), w B(v + ) 1 p * -p -1 , s 2 (0) = 1 p * -p Φ λ (v -), w -B (v -), w B(v -) 1 p * -p -1 and therefore s 1 (0) = s 2 (0) = 0 if w ∈ T + N (v ). Letting t → 0 in (5.7) we get I λ (v ), w ≤ w ∀w ∈ T N + (v ).
Choosing = 1/n we have that v n = v 1/n provides a minimizing sequence in N + that satisfies both (P S1) and (P S2) of the previous proposition. Then there exists a converging subsequence and we will conclude from (E1) that D + λ is achieved at some u ∈ N + . Since the possibility that A(u ± ) = 0 is excluded from the hypothesis D + λ < η + λ the conclusion comes finally from Lemma 2.1. Proof. Let u 0 > 0 be a critical point of I λ with critical value C + λ . By hypothesis we assume that a > 0 on ∂Ω ∩ B r/4 (x 0 ) for some r > 0 satisfying furthermore ∂Ω\B r/4 (x 0 ) a|u 0 | p > 0.
Let u be defined as in (3.3) and define the map σ : [0, 1] 2 → W 1,p (Ω) by σ(s, t) = Kt(su 0 -(1 -s)u ) for some K > 0 to be fixed later.
First we claim that
D + λ ≤ max (s,t)∈[0,1] 2 I λ (σ(s, t)). (6.2)
To see that, consider the map : R 2 → R 2 defined as (s, t) = f λ (σ(s, t) + ) -f λ (σ(s, t) -), f λ (σ(s, t) + ) + f λ (σ(s, t) -) -2 , where
f λ (u) = 0 if u = 0, B(u) Φ λ (u) if u = 0.
Notice that the estimate (2.5) implies that f λ is a continuous map. Moreover we have f λ (σ(0, t) + ) -f λ (σ(0, t) -) ≤ 0 f λ (σ(1, t) + ) -f λ (σ(1, t) -) ≥ 0 f λ (σ(s, 0) + ) + f λ (σ(s, 0) -) -2 ≤ 0 and we choose K > 0 big enough to have f λ (σ(s, 1) + ) + f λ (σ(s, 1) -) -2 ≥ 0.
We can apply Miranda's theorem [START_REF] Miranda | Un'osservazione sur un teorema di Brouwer[END_REF] to get the existence of some (s, t) ∈ [0, 1] 2 such that (s, t) = (0, 0), i.e., f λ (σ(s, t) + ) = f λ (σ(s, t) -) = 1. (6.3)
Thus u = σ(s, t) is such that u ± ∈ N . It remains to proof that A(u ± ) > 0 to conclude (6.2). We have As a corollary of Proposition 5.2 and Proposition 6.1 we have the following existence result. In order to assure that the condition D ± λ < η ± λ is satisfied, we only consider now weights a with definite sign.
(3) To estimate ∫_∂Ω b(x)|u_ε|^{p^*} dσ we write ∫_∂Ω b(x)|u_ε|^{p^*} dσ = b(0) ∫_{∂Ω∩B_r(0)} |u_ε|^{p^*} dσ + O(∫_{∂Ω∩B_r(0)} |y|^{γ+1} |u_ε|^{p^*} dσ), where ∫_{∂Ω∩B_r(0)} |y|^{γ+1} |u_ε|^{p^*} dσ = o(ε), and the result follows. (4) First of all we use the fact that a ∈ C^γ(∂Ω) and write a(y, ρ(y)) = a(0) + O(|y|^γ), |y| ≤ c, so ∫_∂Ω a(x)|u_ε|^p dσ = a(0) ∫_{∂Ω∩B_r(0)} |u_ε|^p dσ + O(II), where II := ∫_{∂Ω∩B_r(0)} |y|^γ |u_ε|^p dσ.
Remark 5.3. The condition D^+_λ < η^+_λ is needed here to avoid the minimizing sequences converging to some u satisfying A(u^+) = 0 or A(u^-) = 0. Notice that we required a similar condition (i.e. C^-_λ < (1/p - 1/p^*) γ_{a,b}^{p^*/(p^*-p)}) in order to prove that C^+_λ is achieved, and we have given in Proposition 2.7 a condition on λ to ensure that C^+_λ < (1/p - 1/p^*) γ_{a,b}^{p^*/(p^*-p)}. We speculate that also D^+_λ < η^+_λ for λ close to λ_1, but we have been unable to prove it. Notice that if a > 0 (or a < 0) then γ_{a,b} = η^±_λ = +∞.

Existence of nodal solutions

Proposition 6.1. Assume hypothesis (B) in (4.1), condition (4.6) and the additional constraint N > max{p^2, 2p, p/(p-1), 2} in cases (1), (2) and (3) of Proposition 4.1.
A(u^+) = K^p t^p ∫_∂Ω a |(s u_0 - (1-s) u_ε)^+|^p dσ = K^p t^p [ ∫_{∂Ω\B_{r/4}(x_0)} a |s u_0|^p + ∫_{∂Ω∩B_{r/4}(x_0)} a^+ |u^+|^p ] > 0; and also A(u^-) = K^p t^p ∫_{∂Ω∩B_{r/4}(x_0)} a^+ |u^-|^p > 0, otherwise u ≥ 0 on ∂Ω and therefore B(u^-) = 0, in contradiction with (6.3).

(2) Next we prove that max_{(s,t)∈[0,1]^2} I_λ(σ(s,t)) < C^+_λ + (1/p - 1/p^*) K_0^{-p^*/(p^*-p)}. We write for simplicity the functions u ∈ σ([0,1]^2) as u = α u_0 + β u_ε, with |α|, |β| ≤ K. Then, using the inequality (2.13), we have for some positive constants K_1, K_2, K_3, K_4

I_λ(α u_0 + β u_ε) - I_λ(|α| u_0) - I_λ(|β| u_ε) ≤ K_1 [ ||∇(α u_0)||^{p-1}_{L^∞(B_r(0))} ||∇(β u_ε)||_1 + ||∇(α u_0)||_{L^∞(B_r(0))} ||∇(β u_ε)||^{p-1}_{p-1} ] + K_2 [ ||α u_0||_{L^∞(B_r(0))} ||β u_ε||^{p-1}_{p-1} + ||α u_0||^{p-1}_{L^∞(B_r(0))} ||β u_ε||_1 ] + K_3 [ ||α u_0||_{L^∞(∂Ω∩B_r(0))} ||β u_ε||^{p-1}_{p-1,∂Ω} + ||α u_0||^{p-1}_{L^∞(∂Ω∩B_r(0))} ||β u_ε||_{1,∂Ω} ] + K_4 [ ||α u_0||_{L^∞(∂Ω∩B_r(0))} ||β u_ε||^{p^*-1}_{p^*-1,∂Ω} + ||α u_0||^{p^*-1}_{L^∞(∂Ω∩B_r(0))} ||β u_ε||_{1,∂Ω} ].   (6.5)

Using (1) of Lemma 2.6 and (4.3) in the proof of Proposition 4.1 we have I_λ(|α| u_0) ≤ I_λ(u_0) = C^+_λ and I_λ(|β| u_ε) ≤ (1/p - 1/p^*) K_0^{-p^*/(p^*-p)} (1 - Λ g(ε))^{p^*/(p^*-p)}, with g(ε) defined in (4.4). Besides, the remaining terms in (6.5) are o(ε) if (6.1) is satisfied. Indeed, notice that in the estimate of the norm ||∇u_ε||_1 all the powers of ε are > 1 if either p ≥ 2 and N > p^2, or 1 < p < 2 and N > max{2p, p/(p-1)}. For the other terms (||∇u_ε||^{p-1}_{p-1}, ||u_ε||_1, ...) the power of ε is > 1 if p < N/2.
Acknowledgments. This work was partially carried out while the first author was visiting the IMSP of the Université d'Abomey Calavi (Porto-Novo) and also while the second author was visiting the Université du Littoral Côte d'Opale (ULCO). We would like to express our gratitude to those institutions.
instance, u + n → u + strongly in W 1,p (Ω). Let us denote, for each v ∈ W 1,p (Ω) and each u ∈ N , the real number
Using estimate (2.7) we have that the sequence B(u ± n ) is bounded away from 0 and therefore
and, because we also have that u + n u + , it comes also that
Using Brézis-Lieb identity we deduce
.
Using the fact that Φ λ (u + n ) = B(u + n ) and Brézis-Lieb Lemma we get
and letting n → ∞ we see that Φ λ (u + ) = B(u + ). In particular I λ (u + ) ≥ 0. Let now > 0. We have, by using Lemma 2. |
00412106 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2008 | https://inria.hal.science/inria-00412106/file/Burciu_COST_lille.pdf | In this paper, we address the architecture of multistandard simultaneous reception receivers and we aim to reduce the complexity of the analog front-end. To this end we propose an architecture using the double orthogonal translation technique in order to multiplex two signals received on different frequency bands. A study case concerning the simultaneous reception of 802.11g and UMTS signals is developed in this article. Theoretical and simulation results show that this type of multiplexing does not significantly influence the evolution of the signal to noise ratio of the signals.
I. INTRODUCTION
OWADAYS the market presents a real interest in the development of telecommunication networks based on radiofrequency systems. Along with the already existing ones, new standards (WiFi, WiMax or the 3G standards) allow the operators to offer new and better services in terms of speed, quality and availability. Consequently, in order to handle this important diversity of telecommunication techniques, there is a growing interest in developing new frontend architectures capable of processing several standards.
For the multistandards research domain we can distinguish two different categories of receivers: non-simultaneous receivers using switching techniques [START_REF] Evans | Development and Simulation of a Multi-standard MIMO Transceiver (Report style)[END_REF][2][3][4] [START_REF] Rampmeier | A Versatile Receiver IC Supporting WCDMA, CDMA and AMPS Cellular Handset Applications[END_REF] and simultaneous receiving receivers. The state of the art of the multistandard simultaneous reception architectures uses the front-end stack-up technique -each chain being dedicated to the reception of only one standard. Nonetheless, this architecture is characterized by some inconveniences such as the bad complexity-performance trade-off, but also the price and the physical size.
The goal of the architecture proposed in this paper, subject of a patent pending [START_REF] Burciu | Technique d'orthogonalisation permettant la réduction de l'occupation spectrale pendant le traitement simultané de deux signaux indépendants (Patent style)[END_REF], is to answer a multistandard simultaneous reception need generated by the ambient or sensor network domain, while also not being restricted to that alone. In order to answer to this need we chose to study the simultaneous reception of an 802.11g signal and a UMTS signal using only one front-end.
The structure assessed in this article implements a novel and innovating multistandard simultaneous receiving architecture using a single front-end. Moreover, the baseband signal has the same bandwidth as the one of the state of the art front-end stack-up structure. This architecture uses the double orthogonal translation technique [START_REF] Mak | Analog-Baseband Architecture and Circuits for Multistandard and Low Voltage Wireless Transceivers (Book style)[END_REF] [START_REF] Rudell | A 1.9GHz Wide-Band IF Double Conversion CMOS Integrated Receiver for Cordless Telephone Applications[END_REF] in order to multiplex the two standards signals by completely overlapping their spectrums at a intermediate frequency. After the second IQ translation the baseband signals are digitized, and then are processed by a signal processing block that separately demultiplexes the baseband component of the two standards. A key point of this structure is the orthogonal mismatches of the translation blocks, which can be meanwhile digitally mitigated by a proper signal processing [START_REF] Rudell | Frequency Translation Techniques for High-Integration High-Selectivity Multi-Standard Wireless Communication Systems[END_REF] [START_REF] Çetin | Adaptive self-calibrating image rejection receiver[END_REF] [START_REF] Traverso | Decision Directed Channel Estimation and High I/Q Imbalance Compensation in OFDM Receivers[END_REF]. In addition, the image frequency impairment is no longer a problem as each of the standards occupies the image band of the other. This paper consists of three parts. Following this introduction, section II describes the double IQ principle, along with the implantation of this technique in a novel multistandard front-end architecture, based on orthogonal multiplexing of its two input branches. The last section details the implementation of such a receiver by specifying its functionality and by presenting some significant simulation results. Finally, conclusions of this study are drawn and the follow-up to this work is provided.
II. MULTI-BAND RECEIVER USING A DOUBLE IQ STRUCTURE
A. The double IQ technique
In wireless telecommunications, the integration of IQ baseband translation structures in the receiver chain has become a common procedure. The simple IQ architecture is usually used in the receiver front-end design in order to reduce the bandwidth of baseband signals treated by the ADC.
Meanwhile, this orthogonal frequency translation technique is also used to eliminate the image frequency default during the translation steps of heterodyne front-end architectures [START_REF] Rudell | A 1.9GHz Wide-Band IF Double Conversion CMOS Integrated Receiver for Cordless Telephone Applications[END_REF], [START_REF] Rudell | Frequency Translation Techniques for High-Integration High-Selectivity Multi-Standard Wireless Communication Systems[END_REF]. The image frequency rejection technique consists in using two orthogonal frequency translations of the signal. In order to realize this double translation, three IQ translation blocks are needed. After the double orthogonal translation, a signal processing block uses the four baseband signals to eliminate the image frequency signal. This type of image rejection structure relies on the advantage of orthogonalizing the useful signal s u (t) and the signal occupying its image frequency band s Im (t). Even though the spectrums of the two signals are completely overlapped after the first frequency translation, this orthogonalization allows the baseband processing to theoretically eliminate the image frequency component while reconstructing the useful one.
This paper assesses the use of the double orthogonal translation technique to develop a multi-standard simultaneous reception front-end. In fact, the main idea refers to a technique allowing the reconstruction, in the baseband domain, of the signal from the image band. This technique relies on a signal processing parallel to that dedicated to the reconstruction of the useful signal. If the image band of the useful signal is occupied by a second useful signal, we can consider that this type of structure can simultaneously treat the two useful signals. In order to fulfill this image band condition a clever choice has to be made concerning the frequency of the local oscillator used during the first orthogonal frequency translation.
In order to carry out a theoretical study of this double IQ structure dedicated to multistandard reception, the useful components s_1(t) and s_2(t) of the input signal s(t) are considered as RF domain signals. They can therefore be modeled as follows:
s_1(t) = I_1(t)·cos(2πf_1 t) + Q_1(t)·sin(2πf_1 t),   (1)
s_2(t) = I_2(t)·cos(2πf_2 t) + Q_2(t)·sin(2πf_2 t),   (2)
where {I_k(t) + jQ_k(t), k = 1, 2} are their baseband complex envelopes. The first IQ translation multiplies the input signal s(t) by two 90°-shifted sinusoids generated by a local oscillator of frequency f_LO1 = (f_1 + f_2)/2. This choice of the oscillator frequency fulfils the image band condition: each of the two signals occupies the image frequency band of the other before the first orthogonal frequency translation. Taking this frequency condition into account, the two output signals s_I(t) and s_Q(t) of the first IQ translation structure can be written as
s_I(t) = LP[cos(2πf_LO1 t)·s(t)] = ((I_1(t) + I_2(t))/2)·cos(2πf_IF t) + ((Q_1(t) - Q_2(t))/2)·sin(2πf_IF t),   (3)
s_Q(t) = LP[sin(2πf_LO1 t)·s(t)] = ((I_2(t) - I_1(t))/2)·sin(2πf_IF t) + ((Q_1(t) + Q_2(t))/2)·cos(2πf_IF t),   (4)
where LP[·] stands for low-pass filtering and the intermediate frequency is f_IF = f_1 - f_LO1 = f_LO1 - f_2. These equations highlight the overlapping of the useful spectrum and the image band spectrum after the intermediate frequency translation, as shown in Fig. 1.
In the second IQ frequency translation step, each of the two signals s_I(t) and s_Q(t) is separately multiplied by two 90°-shifted sinusoids. As the frequency of the local oscillators is chosen to be f_LO2 = f_IF, the four output signals of this second IQ translation block are translated to the baseband domain and are given by:
s_II(t) = LP[cos(2πf_IF t)·s_I(t)] = I_1(t)/4 + I_2(t)/4,   (5)
s_IQ(t) = LP[sin(2πf_IF t)·s_I(t)] = Q_1(t)/4 - Q_2(t)/4,   (6)
s_QI(t) = LP[cos(2πf_IF t)·s_Q(t)] = Q_1(t)/4 + Q_2(t)/4,   (7)
s_QQ(t) = LP[sin(2πf_IF t)·s_Q(t)] = I_2(t)/4 - I_1(t)/4.   (8)
The four output signals contain the multiplexed, baseband-translated information of the two RF components s_1(t) and s_2(t). For a mono-standard image rejection front-end architecture, only the useful component s_u(t) is of interest and is therefore reconstructed by a single signal processing. Here, however, the two baseband-translated components can be separately demultiplexed by two dedicated signal processing operations, detailed by:
s_1BB(t) = s_II(t) - s_QQ(t) + j·[s_IQ(t) + s_QI(t)],   (9)
s_2BB(t) = s_II(t) + s_QQ(t) + j·[s_QI(t) - s_IQ(t)].   (10)
Each of these combinations reconstructs one of the two components while eliminating the other. In fact, by developing (9) and (10) using (5), (6), (7) and (8), we obtain {s_kBB(t) = (I_k(t) + jQ_k(t))/2, k = 1, 2}, i.e., up to a constant scale factor, the same baseband characterizations as those of the RF input signals s_1(t) and s_2(t).
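To make the algebra of (1)-(10) concrete, the short numerical sketch below (our own illustration, not part of the original work) builds two tones with constant complex envelopes, applies the two orthogonal translations with ideal mixers, approximates the ideal low-pass filtering by averaging over an integer number of periods, and checks that (9) and (10) recover I_1 + jQ_1 and I_2 + jQ_2 up to the factor 1/2. All numerical values (normalized frequencies, number of samples, envelope values) are arbitrary choices made for the illustration.

```python
import numpy as np

# Arbitrary illustration values (normalized frequencies in cycles/sample)
f1, f2 = 0.20, 0.10            # the two RF carriers
f_lo1 = (f1 + f2) / 2          # first LO fulfils the image band condition
f_if = f1 - f_lo1              # intermediate frequency (= f_lo1 - f2)
N = 4000                       # window containing an integer number of periods of every tone
n = np.arange(N)

# Constant complex envelopes of the two standards (arbitrary test values)
I1, Q1, I2, Q2 = 0.7, -0.3, -0.5, 0.9
s1 = I1 * np.cos(2 * np.pi * f1 * n) + Q1 * np.sin(2 * np.pi * f1 * n)
s2 = I2 * np.cos(2 * np.pi * f2 * n) + Q2 * np.sin(2 * np.pi * f2 * n)
s = s1 + s2                    # signal feeding the double IQ structure

def lp(x):
    # Ideal low-pass at DC, approximated by averaging over complete periods
    return x.mean()

cos_lo1, sin_lo1 = np.cos(2 * np.pi * f_lo1 * n), np.sin(2 * np.pi * f_lo1 * n)
cos_if, sin_if = np.cos(2 * np.pi * f_if * n), np.sin(2 * np.pi * f_if * n)

# Outputs of the second IQ stage, equations (5)-(8), both translations applied
s_ii = lp(s * cos_lo1 * cos_if)
s_iq = lp(s * cos_lo1 * sin_if)
s_qi = lp(s * sin_lo1 * cos_if)
s_qq = lp(s * sin_lo1 * sin_if)

# Demultiplexing, equations (9) and (10)
s1_bb = (s_ii - s_qq) + 1j * (s_iq + s_qi)
s2_bb = (s_ii + s_qq) + 1j * (s_qi - s_iq)

print("2*s1_bb =", 2 * s1_bb, " expected", complex(I1, Q1))
print("2*s2_bb =", 2 * s2_bb, " expected", complex(I2, Q2))
```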
Usually, for the mono-standard image rejection architecture, only one of the two processing chains is implemented in the analog domain, so that only two signals have to be digitized instead of the four required by a fully digital signal processing. However, if the two dedicated signal processing operations are to be realized simultaneously, the four baseband signals have to be digitized and then used to perform the demultiplexing step in the digital domain.
B. Theoretical considerations on the implementation of the multiband double IQ architecture
All the studies presenting the integration of the double IQ technique use this method in order to cancel the image frequency impairment in a mono-standard reception front-end.
Here we propose to implement it in a novel multistandard simultaneous reception front-end architecture (Fig. 2). The input stages of the front-end are parallelized, each branch being dedicated to the processing of only one frequency band. This way, the signals from the two different frequency bands can be separately received by a dedicated antenna, then filtered and amplified by dedicated RF filters and LNAs respectively. Another key element of this structure is the power control realized in parallel for the two signals. As will be shown below, this parallel power control step allows a better rejection of the complementary standard during the digital demodulation. Once the signals are properly filtered and amplified, an addition step combines the two signals in order to generate the input signal of the double IQ structure. After the double IQ frequency translation, the four baseband signals are digitized and the two dedicated signal processing operations demultiplex the two useful signals.
As presented in the previous section, the double IQ technique allows, under ideal orthogonal mismatch conditions, a theoretically perfect rejection of the image band while reconstructing the useful signal. For receivers using a heterodyne process, the image rejection ratio is the ratio of the intermediate frequency signal level produced by the desired input signal to that produced by the image band signal. For a double IQ structure, the image rejection ratio (IRR) depends on the gain and phase mismatches between the two branches of the IQ translation structures, and especially on the mismatches of the first one, as its frequency translation is generally the largest. The orthogonal mismatches are caused by design and layout defects such as different line lengths between the two branches and non-identical mixers, which generate phase and gain mismatches respectively [START_REF] Traverso | Decision Directed Channel Estimation and High I/Q Imbalance Compensation in OFDM Receivers[END_REF]. Supposing that the first IQ stage has a gain mismatch ∆A and a phase mismatch ∆θ, the final IRR can be modeled by the equation below [START_REF] Çetin | Adaptive self-calibrating image rejection receiver[END_REF]:
IRR(dB) = 10·log10[ (1 + (1 + ∆A)^2 + 2(1 + ∆A)·cos(∆θ)) / (1 + (1 + ∆A)^2 - 2(1 + ∆A)·cos(∆θ)) ].   (11)
For a receiver implementing this kind of architecture, the image band rejection is accomplished through a combination of the front-end's input elements (antenna, external RF filter, LNA (Low Noise Amplifier)) on one hand, and the image rejection technique achieved by the double-conversion configuration on the other hand. The state of the art front-end input elements can realize an image frequency rejection of up to 40 dB, depending on the choice of the intermediate frequency.
In order to provide a sufficiently high image rejection to meet, for example, the WLAN 802.11g standard, an IRR of at least 80 dB is needed. To achieve this 80 dB of IRR, it is shown in [START_REF] Çetin | Adaptive self-calibrating image rejection receiver[END_REF] that only 0.01 dB of gain mismatch and 0.1 degrees of phase mismatch are allowed for each of the IQ blocks; this way, the remaining 40 dB of IRR are realized by the image rejection technique.
This degree of matching is not achievable using good design and layout techniques alone; additional digital signal processing techniques have to be employed in order to reach this performance. One of these techniques has been developed in the digital domain using an LMS (Least Mean Square) algorithm [START_REF] Çetin | Adaptive self-calibrating image rejection receiver[END_REF]. The results show an image rejection ratio due to the front-end architecture reaching up to 70 dB. Therefore we can assume that the total IRR of a classical double IQ image rejection receiver reaches 110 dB. This level of image rejection allows the elimination of the external band-pass filter from the receiver's design.
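The small routine below (an illustrative sketch of ours, not taken from the original paper) evaluates the IRR model of (11) for a few gain/phase mismatch pairs. The conversion of the gain mismatch expressed in dB to the linear error ∆A used in (11) is our own assumption, and the mismatch values printed are arbitrary examples.

```python
import math

def irr_db(gain_mismatch_db: float, phase_mismatch_deg: float) -> float:
    """Image rejection ratio of a double IQ structure, following equation (11)."""
    d_a = 10 ** (gain_mismatch_db / 20.0) - 1.0      # assumed dB-to-linear conversion
    d_theta = math.radians(phase_mismatch_deg)
    num = 1 + (1 + d_a) ** 2 + 2 * (1 + d_a) * math.cos(d_theta)
    den = 1 + (1 + d_a) ** 2 - 2 * (1 + d_a) * math.cos(d_theta)
    return 10 * math.log10(num / den)

for g_db, th_deg in [(1.0, 5.0), (0.1, 1.0), (0.01, 0.1)]:
    print(f"dA = {g_db} dB, dTheta = {th_deg} deg -> IRR = {irr_db(g_db, th_deg):.1f} dB")
```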
In comparison to the single antenna double IQ image rejection architecture, for the multiband architecture assessed here, the addition of the parallel branches' outputs generates supplementary parasitic signals that can degrade the final SNR (Signal to Noise Ratio) of the two useful signals. Each of the two antennas receives a signal made of two components: s_1(t) + s'_2(t) for the A_1 antenna and s'_1(t) + s_2(t) for the A_2 antenna, where s_1(t) and s'_1(t) are the same transmitted signal after two different propagation channels, and similarly for s_2(t) and s'_2(t). The parasitic components s'_2(t) and s'_1(t) are filtered by the input stage of each dedicated branch (antenna, RF band filter, LNA), but even attenuated in this way, these components have to be taken into account when studying the evolution of the useful signals' SNR.
In fact, the output signal of the adder is mainly composed of four components:
Adder_out(t) = G_1·s_1(t) + G_2·s_2(t) + G'_1·s'_1(t) + G'_2·s'_2(t),   (12)
where the coefficients G 1 , G 2 , G' 1 and G' 2 are the gains that the two input parallel branches of the receiver induce to each of the four components.
In order to evaluate the SNR evolution of the useful signal s_1BB(t) after the demultiplexing stage, the evolution of the parasitic signals s'_1(t), s_2(t) and s'_2(t) compared to that of the useful signal s_1(t) has to be taken into account:
• The s' 2 (t) signal is attenuated by the input blocks of the branch dedicated to the treatment of s 1 (t). The state of the art of the antennas, of the RF band filters and of the LNA can generate a 40 dB rejection of s' 2 (t) for an architecture such as that of Fig. 2. In addition to these 40 dB of initial rejection, the double IQ structure, along with the LMS digital processing, will achieve up to 70 dB of signal rejection from the image band of the useful signal. This means a rejection of up to 110 dB of the parasitic signal s' 2 (t).
• The s 2 (t) signal undergoes up to 70 dB of rejection compared to the useful signal s 1 (t). This rejection is generated by the double IQ structure, similar to that of s' 2 (t) as the two signals occupy the same frequency band after the addition of the two branches. In addition to this rejection, another element to be taken into account, when studying the influence of s 2 (t) on the SNR of s 1 (t), is the dedicated power control stage. In fact the worst case scenario is when s 1 (t) is at its lowest power level and the parasitic signal s 2 (t) is at its highest. This means that this is the case when s 2 (t) has its highest effect on the degradation of the useful signal. In this case, the power control will amplify s 1 (t) compared to s 2 (t) before the addition step, which means that the influence of the parasitic signal on the useful signal is decreased. The state of the art of the power controls [START_REF] Xiao | A High Dynamic Range CMOS Variable Gain Amplifier for Mobile DTV Tuner[END_REF] can provide up to 35 dB between minimum and maximum amplification. Therefore, for the worst case scenario, it can be considered that the s 2 (t) signal undergoes a 105 dB rejection compared to the useful signal s 1 (t).
• The s'_1(t) signal, along with s_2(t), is one of the two components of the radio-frequency signal received by the A_2 antenna. This signal does not undergo any rejection due to the double IQ structure, as it occupies the same frequency band as the useful signal after the addition step. The only supplementary rejection that s'_1(t) undergoes compared to the useful signal s_1(t) is realized by the input elements of the front-end. In fact, as this signal is received by the branch dedicated to s_2(t), the input elements realize an attenuation of up to 40 dB. As s'_1(t) and the useful signal s_1(t) are not received by the same antenna, even though they are generated by the same transmitter, a phase shift and a gain shift between the two appear during the hertzian transmission. For an AWGN (Additive White Gaussian Noise) transmission channel, the phase shift between the two signals can go from 0 to 360 degrees, but the gain shift can be ignored. In this case, where the two signals s'_1(t) and s_1(t) have the same power level at the input of the front-end, the 40 dB of attenuation of the parasitic signal s'_1(t) achieved before the addition step ensures a 40 dB SNR of the useful signal s_1(t) in the baseband domain after the digital signal processing. This SNR level ensures a very good reception quality. These rejection figures are gathered in the short calculation below.
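The worst-case rejection budget discussed in the three points above can be recapped as follows. This is our own summary of the figures quoted in the text (state-of-the-art values assumed above, not measured data).

```python
# Worst-case rejection budget of the parasitic components (values in dB)
front_end_rejection = 40    # antenna + RF band filter + LNA of the dedicated branch
iq_plus_lms_rejection = 70  # double IQ structure with LMS mismatch mitigation
agc_dynamics = 35           # parallel power control (state of the art)

rejection_s2_prime = front_end_rejection + iq_plus_lms_rejection  # s'2(t)
rejection_s2 = iq_plus_lms_rejection + agc_dynamics               # s2(t), worst case
rejection_s1_prime = front_end_rejection                          # s'1(t)

print("s'2 rejection:", rejection_s2_prime, "dB")
print("s2  rejection:", rejection_s2, "dB")
print("s'1 rejection:", rejection_s1_prime, "dB  (sets the ~40 dB SNR floor of s1)")
```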
In the case of a multipath channel, where neither the gain shift nor the phase shift can be ignored, a new solution can be implemented. It consists in using a digitally controlled RF phase shifter that cancels the phase shift between s'_1(t) and s_1(t) before the addition step. This way s'_1(t) is no longer a parasite, but a useful component during the digital signal processing that reconstructs the s_1(t) signal. This solution will be developed in a future document.
Considering all these arguments concerning the additional parasitic components, the SNR evolution of the useful signal can be considered the same as that of a signal treated by a classic mono-standard receiver. Therefore the single front-end multistandard simultaneous reception structure presents a performance similar to that of a front-end stack-up structure.
Meanwhile, a complexity comparison study reveals that the single front-end structure is less complex, much more compact and presents a higher on-chip integration level. The number of components is smaller because a single local oscillator is used for the first frequency translation, compared to the two dedicated oscillators of the front-end stack-up receiver. Furthermore, the greatest advantage of the single front-end receiver is the elimination of the image rejection RF filters. In fact these external components, used to mitigate the impact of the image band signal, cannot be integrated on-chip. In the proposed architecture, they are replaced by a cheaper, on-chip and especially more flexible signal processing. In the following section, a validation of the theoretical results is presented.
III. IMPLEMENTATION AND PERFORMANCE
The high image rejection multistandard receiver using a double IQ front-end architecture allows the simultaneous reception of two different frequency bands. In order to validate the theoretical study, a first implementation was made and simulated using the ADS software (Advanced Design System) provided by Agilent Technologies [13]. The selection of the standards used for this implementation was driven by their complexity and their deployment, as well as by their complementarity in terms of range. These criteria, along with a direct usefulness of such a structure in the sensor network domain, directed our choice towards the 802.11g and the WCDMA-FDD standards. Regarding this choice, an important point that should be underlined is the implementation constraints imposed by the standards' dynamics, especially those of WCDMA-FDD. These dynamic range constraints make this choice of standards the most delicate to implement.
In order to realize a fair performance comparison between the multistandard single front-end receiver and the front-end stack-up, the blocks used during the simulation have the same typical metrics (gain, noise figure, 1 dB compression point, third order intercept point) in both cases. By taking into account all these metrics, a global characterization of the multistandard single front-end receiver is made (Table 1).
During this study, it will be considered that the metrics of the blocks used by the two parallel input branches are similar and therefore the performance offered by the front-end for the two standards are identical in terms of noise figure, gain and third order intercept point.
The first results (Fig. 3) represent the evolution of the two standards BER (Bit Error Rate) depending on their SNR level at the antenna. This BER evolution was observed using both the multistandard single front-end and the front-end stack-up structures as receivers. The wireless transmission channel was chosen to be AWGN while the translation blocks are considered to be ideal in terms of IQ mismatch. During the simulation of the reception of one of the standards the antenna power level of the complementary standard is set to the maximum level so that its parasitic influence is the highest.
Under these conditions the two standards BER evolutions are almost identical for both types of receivers. In fact, using the multistandard single front-end receiver allows the complete rejection of one of the standards during the digital final signal processing as the IQ mismatches are ignored for the moment. The theoretical study underlines the importance of the IQ mismatches for the performance of a receiver using a double orthogonal translation. Indeed, for this type of receiver, it is necessary to realize a good rejection of the image frequency band, which is occupied by the complementary standard. In fact, this rejection relies on two different methods: the gain control realized in the RF domain and the image band rejection realized by the IQ structure, depending on the IQ mismatches. In order to estimate the impact of the orthogonal mismatches on the evolution of the two standards BER a second set of simulations are realized. The metrics of the receiver used during these simulations are the same as those presented in Table 1, except for the gain dynamics of the AGC which take two different values of 35 dB and 40 dB. Concerning the power level of the signals at the antenna, while testing the influence of the IQ mismatches on the BER of one of the standards, the power level of the complementary standard is maximal. Meanwhile, the power level of the concerned standard is at its reference level (the minimum power level that ensures a certain service quality). For our study case, the concerned standard power level leads to a 10 -3 level of BER, when considering ideal IQ mismatch conditions.
For each standard, two normalized BER evolutions are presented in Fig. 4, for AGC gain dynamics of 35 dB and 40 dB respectively. Depending on the AGC dynamics, the complementary signal is attenuated by a certain amount at the input of the antenna compared to the useful signal. Another rejection step is then realized by the IQ structure, but this one depends on the orthogonal mismatches.
Results show that the BER performance of the receiver depends on the one hand on the AGC gain dynamics and on the other hand on the orthogonal IQ mismatches. For an AGC gain dynamics varying from the state of the art 35 dB to 40 dB, the BER can triple for the same power levels and mismatch configuration. It can also be observed that, under significant orthogonal mismatch conditions, the influence of the complementary standard (at its maximum power level) on the useful one's SNR leads to a BER six times higher.
The graphs of Fig. 4 rely on simulations of the multistandard receiver architecture which does not integrate the digital signal processing (LMS) dedicated to the mitigation of the orthogonal mismatches [START_REF] Çetin | Adaptive self-calibrating image rejection receiver[END_REF]. The use of these signal processing techniques reduces the final influence of the complementary signal on the useful one's SNR. It can be considered that the final orthogonal mismatches are reduced to an equivalent level of 0.01 dB of gain mismatch and 0.1 degrees of phase mismatch, corresponding to a 70 dB rejection of the complementary signal from the image frequency band. For these levels of orthogonal mismatches, the influence of the complementary standard on the useful one can be ignored as it can be observed on the results shown in Fig. 4. Therefore the theoretical study concerning the rejection of the parasitic signals presented in section II is validated here.
IV. CONCLUSIONS
In this article, a novel multistandard simultaneous reception architecture was presented. The expected performance of its implementation has been presented for a particular study case: the simultaneous reception of two signals using the 802.11g and UMTS standards. Compared to the stack-up of dedicated front-ends, this architecture uses an innovative double IQ multiplexing technique in order to use a unique front-end to receive both standards. In addition to the complexity decrease offered by the use of a single front-end, the signal processed by the analog part of the receiver presents an excellent spectral efficiency, as the two standards' spectrums are overlapped after the first IQ stage. Knowing that the power consumption of the analog part of the receiver is directly dependent on the bandwidth of the signal, the excellent complexity-power-performance trade-off becomes obvious. Despite the use of a demultiplexing block in the digital domain, the power consumption of the receiver is lower compared to that of the actual state of the art. The key point of this structure is the rejection of the complementary standard during the demultiplexing stage. As a matter of fact, the rejection level depends on the orthogonal mismatches of the frequency translation blocks; a complete study of their influence has been presented.
The issues that still have to be addressed revolve around the implementation of a digital processing used to mitigate the IQ impairments. Another interesting idea concerns a possible multi-antenna multistandard simultaneous reception technique using the principles of the architecture assessed in this article.
Fig. 1 Spectral evolution of the signals in a double IQ structure
Fig. 2 Multiband simultaneous reception architecture using the double IQ structure
Fig. 3 802.11g and WCDMA BER evolution during multistandard simultaneous reception using two types of receivers: the classical front-end stack-up and the multistandard single front-end receiver
Fig. 4 802.11g and WCDMA BER evolution versus gain and phase imbalance of the IQ translation blocks. Two series are dedicated to each BER evolution, for AGC gain dynamics of 35 dB and 40 dB respectively
TABLE I
METRICS USED FOR THE SIMULATION OF THE MULTISTANDARD SINGLE FRONT-END RECEIVER
Symbol              SI UNIT   VALUE
NF                  dB        6
IIP3                dBm       -12
Maximal Gain AGC    dB        25
Minimal Gain AGC    dB        -10
|
00412107 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2008 | https://inria.hal.science/inria-00412107/file/Morlat_COST_TD_08_659.pdf | 1-Introduction
In today's wireless systems, the principle of combining the OFDM (Orthogonal Frequency Division Multiplexing) scheme and multi-antenna processing (MIMO or SIMO architectures) is often used and offers an important increase of the achievable performances. Another objective is to propose more and more integrated systems, and to reduce the size and the cost of wireless designs. In this respect, the homodyne down-conversion structure appears to be very attractive. Unfortunately, multi-antenna systems and complexity reduction are really contradictory: increasing the number of branches leads to RF component duplication and to an increase in space occupation. In order to decrease system cost, the dirty RF concept [START_REF] Fettweis | Dirty RF, a new Paradigm[END_REF], which aims at identifying constraints that can be relaxed on hardware components, is an interesting solution. However, the OFDM technique is really sensitive to RF impairments [START_REF] Woo | Combined Effects of RF impairments in the Future IEEE 802.11n WLAN Systems[END_REF], and numerical processing has to be used in order to estimate and correct the non-idealities of the hardware stage, so as to minimize the degradation of the transmission performances due to dirty RF. This is especially true for a homodyne structure, where RF impairments are much more critical than in super-heterodyne receivers [START_REF] Brandolini | Toward Multi-Standard Mobile Terminals -Fully Integrate Receivers Requirements and Architecture[END_REF]. The three most critical RF impairments have been studied in the special case of an OFDM receiver: phase noise [START_REF] Amrmanda | Understanding the effects of Phase Noise in Orthogonal Frequency Division Multiplexing (OFDM)[END_REF][START_REF] Corvaja | Phase Noise Spectral Limits in OFDM Systems[END_REF] and frequency offset [START_REF] Pollet | BER Sensitivity of OFDM Systems to Carrier Frequency Offset and Wiener Phase Noise[END_REF][START_REF] Ma | Effect of frequency offset on BER of OFDM and Single Carrier Systems[END_REF] due to local oscillator non-idealities, and IQ imbalance due to mismatch between the I and Q branches, which is especially critical in a homodyne structure [START_REF] Windish | Performance Degradation due to IQ imbalance in Multi-Carrier Direct Conversion Receivers: A Theoretical Analysis[END_REF][START_REF] Liu | Impact of IQ imbalance on QPSK OFDM QAM detection[END_REF]. For each considered impairment, numerical processing ensuring a correction of these defaults exists, but errors can still occur and the use of these algorithms increases the numerical complexity of the developed receiver. In this context, the aim of this work is to point out the potential of naturally compensating the RF impairments presented above, jointly with the fading effects, by the use of a classical SIMO antenna processing algorithm, thus without increasing the numerical complexity, in the special context of the widely used 802.11g WLAN OFDM transmission [START_REF]Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications: Further Higher Data Rate Extension in the 2.4 GHz Band[END_REF]. In this article, most of the presented results are obtained by taking into account realistic working conditions as much as possible, through the use of a global system evaluation scheme based on Agilent Technologies equipment [START_REF] Morlat | A Global System Evaluation Scheme for Multiple Antennas Adaptive Receiver[END_REF].
As presented in [START_REF] Morlat | A Global System Evaluation Scheme for Multiple Antennas Adaptive Receiver[END_REF], a realistic consideration of the characteristics of the transmission channel, the antenna coupling and also the channel correlation would indeed allow a fast and effective design of multi-antenna wireless systems, leading to an important design cycle reduction.
The first part presents the global system level approach used and the associated tools, with some theoretical, simulated and measured results in order to validate the developed testbed radio platform. In the second part, the three RF impairments presented above are considered one by one in a Zero-IF down-conversion structure, and their effects on the OFDM transmission performances are detailed. In the last part, after a short description of the SIMO processing that is used, the natural compensation of each RF impairment that can be obtained by taking advantage of space diversity is presented. The results given in this part are obtained by taking into account different measured propagation channels (AWGN or fading channels) and a complete 802.11g structure.
2-Description and validation of the global simulation scheme
This part of the work presents the global simulation scheme developed in order to efficiently simulate and measure all parts of a SIMO (Single Input Multiple Output) 802.11g transmission chain, from the global system level down to the particular model of RF components or fading channels. [START_REF] Morlat | A Global System Evaluation Scheme for Multiple Antennas Adaptive Receiver[END_REF] presents the testbed radio platform developed using Agilent Technologies equipment: the ADS software and the measurement hardware (two arbitrary waveform generators, ESG 4438C, and a vector spectrum analyzer with two RF inputs, VSA 89641). Thanks to the capabilities of the developed platform, realistic 2x2 MIMO transmissions, extending measurements up to 6 GHz with a received bandwidth analysis of 36 MHz, could be carried out (Fig. 1). In order to validate the developed testbed platform, the first tests were done for an uncoded 36 Mbps 802.11g SISO transmission, under an AWGN propagation channel and assuming a perfect Zero-IF baseband conversion stage. For such a data rate, a 16-QAM modulation scheme is used. The theoretical Symbol Error Probability P_s of an M-QAM constellation is computed by considering two independent √M-PAM modulations on the I- and Q-channels:
P_s = 1 - (1 - P_√M)^2,   (1)
with
P_√M = 2·(1 - 1/√M)·Q( √( (3·log2(M) / (M - 1))·γ_b ) ),   (2)
where γ_b denotes the energy per bit to noise ratio (Eb/No) and Q the Gaussian Q-function. The relationship between the Signal to Noise Ratio (SNR), the transmission data rate, the coding rate R_c, the signal bandwidth BW and γ_b is given by:
SNR = (data_rate / (R_c·BW))·γ_b,   (3)
where BW = 20 MHz and R_c = 3/4 in the case of an 802.11g transmission.
In the end, the theoretical Bit Error Probability (BER) is:
BER = P_s / log2(M).   (4)
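As an illustration, the short script below (our own sketch; function and variable names are ours) evaluates equations (1)-(4) for the uncoded 16-QAM case considered here, with the 802.11g parameters BW = 20 MHz, R_c = 3/4 and a 36 Mbps data rate. The inversion of (3) to obtain γ_b from the SNR is our own step.

```python
import math

def q_func(x: float) -> float:
    """Gaussian Q-function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def uncoded_mqam_ber(snr_db: float, m: int = 16,
                     data_rate: float = 36e6, r_c: float = 0.75,
                     bw: float = 20e6) -> float:
    """Theoretical BER of an uncoded M-QAM OFDM link, following (1)-(4)."""
    snr = 10 ** (snr_db / 10.0)
    gamma_b = snr * r_c * bw / data_rate          # inverted from (3)
    p_sqrt_m = 2 * (1 - 1 / math.sqrt(m)) * \
        q_func(math.sqrt(3 * math.log2(m) / (m - 1) * gamma_b))   # (2)
    p_s = 1 - (1 - p_sqrt_m) ** 2                 # (1)
    return p_s / math.log2(m)                     # (4)

for snr_db in (10, 14, 18):
    print(f"SNR = {snr_db} dB -> BER = {uncoded_mqam_ber(snr_db):.2e}")
```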
Fig. 2 compares the theoretical, simulated and measured BER performances obtained in this configuration.
3-RF impairment SISO effects
Notations
Three RF impairments at the receiver side are taken into account. Our work focuses on the effect of frequency offset, the local oscillator (LO) phase noise, the I-Q imbalance (both gain and phase). Fig. 3 presents the structure of a Zero-IF converter including the RF impairments considered in our study. To ease readability, the low pass filter stage and the ADC are intentionally omitted even though they are also responsible for I-Q imbalance. θ (rad) and α (V) refer to phase and gain imbalance respectively; ∆f (Hz) is the mismatch between the carrier frequency f c of the emitter and the receiver; φ is the phase noise due to local oscillator imperfection. The gain imbalance G is given in dB by:
G = 20·log10( (1 + α) / (1 - α) ).   (5)
The frequency offset value is often expressed in parts per million:
∆f_ppm = ∆f(Hz) / f_c.   (6)
In the following, x(t), y(t) and z(t) denote the complex emitted signal, the received signal after channel propagation without RF impairments, and the baseband signal after RF impairment degradation in the temporal domain, respectively. h(t) and n(t) refer to the temporal channel impulse response and the additive white Gaussian noise, respectively. The same notation using capital letters refers to the corresponding signals in the frequency domain. ⊗ is the convolution operator, and *
denotes complex conjugation. M is the number of sub-carriers used in the OFDM system (M = 64 for an 802.11g transmission).
Most of the presented performances are given in relative BER (i.e. the ratio between the BER with impairment and the BER under perfect RF conditions). The reference BER is about 5·10^-3; it is obtained for a perfectly simulated Zero-IF down-converter. To have a precise estimation of the relative BER, 3000 measured or simulated frames of 1600 bits are processed.
Phase noise effect
Phase noise effect can be separated into a multiplicative and an additive part in single-input single output OFDM systems. Due to phase noise impairments φ(t), the temporal received signal is defined by
z(t) = [h(t) ⊗ x(t)]·e^{jφ(t)} + n(t).   (7)
After removing the cyclic prefix and applying the DFT to the remaining samples, the demodulated carrier amplitudes Z_m(k), with k the sub-carrier index (0 ≤ k ≤ M-1) of the m-th OFDM symbol, are given by [START_REF] Petrovic | Properties of the Intercarrier Interference due to phase noise in OFDM[END_REF]:
Z_m(k) = H(k)·X_m(k)·S_{m,0} + Σ_{l=0, l≠k}^{M-1} X_m(l)·H(l)·S_{m,l-k} + N_m(k),   (8)
where the first term carries the Common Phase Error (CPE) and the sum is the Inter-Carrier Interference (ICI).
The term S_{m,i} corresponds to the DFT of one realization of e^{jφ(n)} during the m-th OFDM symbol:
S_{m,i} = (1/M)·Σ_{n=0}^{M-1} e^{j(φ(n) - 2πni/M)}.   (9)
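The following sketch (our own illustration; the sampling rate and the β value are arbitrary choices) draws one Wiener phase noise realization, computes the S_{m,i} terms of (9) as the DFT of e^{jφ(n)}, and separates the CPE term S_{m,0} from the total ICI power.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 64                  # number of sub-carriers (802.11g)
fs = 20e6               # sampling rate, Hz
beta = 10e3             # -3 dB bandwidth of the Lorentzian phase noise PSD, Hz
sigma2_step = 4 * np.pi * beta / fs        # variance of each Wiener phase increment

# One Wiener phase noise realization over one OFDM symbol
phi = np.cumsum(rng.normal(0.0, np.sqrt(sigma2_step), M))

# S_{m,i} = (1/M) * sum_n exp(j(phi(n) - 2*pi*n*i/M)) = DFT of exp(j*phi) / M
s_mi = np.fft.fft(np.exp(1j * phi)) / M

cpe = s_mi[0]                               # common phase error term
ici_power = np.sum(np.abs(s_mi[1:]) ** 2)   # power spread onto the other carriers

print("|S_m0| =", abs(cpe), " arg(S_m0) =", np.angle(cpe), "rad")
print("ICI power =", ici_power)
```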
The frequency-domain received sample (8) [START_REF] Armstrong | Analysis of new and existing methods of reducing intercarrier interference due to carrier frequency offset in OFDM[END_REF] exhibits two terms due to phase noise. All phase noise correction schemes are based on Common Phase Error (CPE) estimation and mitigation, and are relatively efficient in WLAN OFDM systems under small phase noise working conditions. However, under high phase noise, the Inter-Carrier Interference (ICI) term dominates over the CPE. In this case, the phase noise suppression becomes really difficult [START_REF] Pollet | BER Sensitivity of OFDM Systems to Carrier Frequency Offset and Wiener Phase Noise[END_REF] and the performance requirements may not be guaranteed due to the loss of orthogonality between sub-carriers.
The most common way of characterizing an oscillator's phase noise is its Power Spectral Density (PSD), S_φ(f), where f is the frequency offset to the carrier frequency. The phase noise effect is commonly characterized as a Wiener process corresponding to a PSD slope of -20 dB/dec. In this case, the critical parameter of the oscillator quality is the single-sideband -3 dB bandwidth β of the Lorentzian power density. The phase noise variance is hence computed as σ_φ^2 = 4πβT, with T being the sample time [START_REF] Pollet | BER Sensitivity of OFDM Systems to Carrier Frequency Offset and Wiener Phase Noise[END_REF]. [START_REF] Amrmanda | Understanding the effects of Phase Noise in Orthogonal Frequency Division Multiplexing (OFDM)[END_REF] gives the variance expression for a model more general than the classical Wiener process, closer to the measurements, which studies the influence of different phase noise spectrum slopes on the system performance:
σ_φ^2 = ∫_{-∞}^{+∞} S_φ(f) df.   (10)
When no phase noise suppression algorithm is applied, [START_REF] Amrmanda | Understanding the effects of Phase Noise in Orthogonal Frequency Division Multiplexing (OFDM)[END_REF] shows that the Energy per Symbol Ratio degradation D (in dB) caused by phase noise with any PSD spectrum, assuming the phase noise variance is small (σ^2 << 1), does not depend on the number of sub-carriers and is given by:
D = 10·log10(1 + σ^2·γ_s),   (11)
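As a small numerical complement (our own sketch, not part of the original study), the script below integrates a piecewise phase noise PSD mask, specified in dBc/Hz at a few breakpoints with log-linear interpolation, to obtain σ^2 as in (10), and then evaluates the degradation (11). The breakpoints, integration limits and γ_s value are illustrative assumptions; the resulting σ^2 obviously depends on them.

```python
import numpy as np

def phase_noise_variance(breakpoints, f_min=100.0, f_max=10e6, n_points=2000):
    """sigma^2 from (10), using a single-sideband dBc/Hz mask assumed symmetric
    around the carrier; `breakpoints` is a list of (frequency_Hz, level_dBc_per_Hz)."""
    f_bp = np.log10([bp[0] for bp in breakpoints])
    l_bp = np.array([bp[1] for bp in breakpoints])
    f = np.logspace(np.log10(f_min), np.log10(f_max), n_points)
    psd_dbc = np.interp(np.log10(f), f_bp, l_bp)   # log-linear interpolation of the mask
    psd_lin = 10 ** (psd_dbc / 10.0)               # rad^2/Hz
    # trapezoidal integration, factor 2 for both sidebands
    return 2.0 * np.sum((psd_lin[1:] + psd_lin[:-1]) * np.diff(f)) / 2.0

# Illustrative mask, similar to the one quoted for Fig. 4
mask = [(1e3, -60.0), (1e5, -90.0), (1e6, -110.0), (1e7, -110.0)]
sigma2 = phase_noise_variance(mask)

gamma_s_db = 18.0                                  # example energy per symbol ratio
gamma_s = 10 ** (gamma_s_db / 10.0)
d_db = 10 * np.log10(1 + sigma2 * gamma_s)         # degradation (11)
print(f"sigma^2 = {sigma2:.3e} rad^2, D = {d_db:.2f} dB at gamma_s = {gamma_s_db} dB")
```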
γ_s denotes the initial Energy per Symbol Ratio without phase noise impairments. We observe a relatively good match between the results obtained with the developed ADS 802.11g transmission scheme and the analytical results (Fig. 4). However, it is important to note that in many WLAN OFDM receivers, a CPE correction is applied (see [START_REF] Morlat | On Relaxing constraints on Multibranches RF Front-End for a SIMO OFDM -A Global System Evaluation Scheme[END_REF] for example). As the CPE can easily be suppressed, only the ICI distribution has to be considered to compute the SNR penalty. The properties of the ICI term were previously studied by several authors. In many cases, only a Wiener process is considered and the ICI term is assumed to be complex Gaussian distributed. This approximation is valid only for small phase noise processes [START_REF] Petrovic | Properties of the Intercarrier Interference due to phase noise in OFDM[END_REF]. As explained in [START_REF] Corvaja | Phase Noise Spectral Limits in OFDM Systems[END_REF], it is important to take into account different slope values in the phase noise spectrum to provide applicable models for hardware design. Our developed test-bed answers this need, but this is not the aim of the present work.
Frequency offset
The complex baseband received signal affected by frequency carrier mismatch between the receiver and the emitter's local oscillator is:
z(t) = [h(t) ⊗ x(t)]·e^{-j2π∆f t} + n(t).   (12)
We define the normalized frequency offset ε as the ratio between the carrier frequency offset in Hz and the OFDM sub-carrier spacing. With BW the occupied signal bandwidth (recalling that BW = 20 MHz for 802.11g transmissions):
ε = M·∆f / BW.   (13)
[START_REF] Armstrong | Analysis of new and existing methods of reducing intercarrier interference due to carrier frequency offset in OFDM[END_REF] gives the expression of the sampled signal for the sub-carrier k (k = 0, …, M-1) after the receiver fast Fourier transform processing:
Z(k) = H(k)·X(k)·S_0 + Σ_{l=0, l≠k}^{M-1} S_{l-k}·H(l)·X(l) + N(k),   (14)
where the first term carries the CPE, the sum is the ICI, and the sequence S_{l-k} is given by:
S_{l-k} = [sin(π(l - k + ε)) / (M·sin(π(l - k + ε)/M))]·exp(jπ(1 - 1/M)(l - k + ε)).   (15)
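To illustrate (14)-(15), the sketch below (our own code; the ε values are arbitrary examples) computes the attenuation of the useful carrier, |S_0|, and the total ICI power Σ_{m≠0}|S_m|^2 for a few normalized offsets.

```python
import numpy as np

def s_coeff(m, eps, M=64):
    """ICI coefficient S_m of (15) for an integer shift m and normalized offset eps."""
    x = m + eps
    return (np.sin(np.pi * x) / (M * np.sin(np.pi * x / M))) * \
        np.exp(1j * np.pi * (1 - 1 / M) * x)

M = 64
for eps in (0.05, 0.1, 0.2):
    s0 = s_coeff(0, eps, M)
    ici = sum(abs(s_coeff(m, eps, M)) ** 2 for m in range(1, M))
    print(f"eps = {eps}: |S_0| = {abs(s0):.4f}, ICI power = {ici:.4e}")
```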
For a quick evaluation, [START_REF] Pollet | BER Sensitivity of OFDM Systems to Carrier Frequency Offset and Wiener Phase Noise[END_REF] gives a well-known approximation of the Energy per Symbol Ratio degradation caused by the frequency offset.
I-Q imbalance
The imbalance can be modelled as either symmetrical or asymmetrical (these are equivalent representations). In the symmetrical model, which is used here [START_REF] Tubbax | Compensation of IQ imbalance in OFDM systems[END_REF], each arm (I and Q) experiences half of the phase and amplitude errors. It has been shown that multi-carrier signals are affected by a mutual inter-carrier interference between each pair of symmetric sub-carriers [START_REF] Liu | Impact of IQ imbalance on QPSK OFDM QAM detection[END_REF].
The perfect baseband received signal after channel propagation is given by:
y(t) = h(t) ⊗ x(t) + n(t).   (17)
[10] gives the expression of the imperfect baseband signal due to I-Q imbalance
z(t) = µ·y(t) + λ·y*(t),   (18)
Injecting (17) into (18), one can show that
z(t) = µ·[h(t) ⊗ x(t) + n(t)] + λ·[h(t) ⊗ x(t) + n(t)]*,   (19)
where µ and λ depend on the I-Q imbalance:
µ = cos(θ/2) + j·α·sin(θ/2),   λ = α·cos(θ/2) - j·sin(θ/2).   (20)
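A short check of the baseband imbalance model (18)-(20) is sketched below (our own illustration; the α and θ values are arbitrary, and µ and λ are computed from (20) as written above). A complex test tone is distorted by z = µy + λy*, and the power of the resulting image tone relative to the useful tone, |λ/µ|^2, is recovered from the spectrum.

```python
import numpy as np

def iq_imbalance(alpha, theta_rad):
    """mu and lambda of the symmetric I-Q imbalance model (20)."""
    mu = np.cos(theta_rad / 2) + 1j * alpha * np.sin(theta_rad / 2)
    lam = alpha * np.cos(theta_rad / 2) - 1j * np.sin(theta_rad / 2)
    return mu, lam

N, k0 = 256, 17                      # FFT size and test sub-carrier index
n = np.arange(N)
y = np.exp(2j * np.pi * k0 * n / N)  # ideal baseband test tone

alpha, theta = 0.05, np.radians(5.0) # arbitrary imbalance example
mu, lam = iq_imbalance(alpha, theta)
z = mu * y + lam * np.conj(y)        # imperfect baseband signal, (18)

spectrum = np.fft.fft(z) / N
useful = abs(spectrum[k0]) ** 2
image = abs(spectrum[-k0]) ** 2      # mirror sub-carrier -k0
print("measured image/useful:", image / useful)
print("expected |lambda/mu|^2:", abs(lam / mu) ** 2)
```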
In the frequency domain, for each data sub-carrier k of the OFDM signal (k ∈ [-26, 26]), the received signal is given by
Z(k) = µ·H(k)·X(k) + λ·H*(-k)·X*(-k) + µ·N(k) + λ·N*(-k).   (21)
[START_REF] Liu | Impact of IQ imbalance on QPSK OFDM QAM detection[END_REF] shows that the error ∆ between the estimated received symbol and the emitted symbol for each sub-carrier k due to the I-Q imbalance is
∆(k) = (λ/µ)·(H*(-k)/H(k))·X*(-k) + N(k)/H(k) + (λ/µ)·N*(-k)/H(k).   (22)
[START_REF] Zareian | Analytical BER Performance of M-QAM-OFDM Systems in the Presence of IQ imbalance[END_REF] gives the analytical BER expression for M-QAM OFDM systems in the presence of I-Q imbalance, based on constellation degradation. Fig. 6 presents, in the case of an AWGN transmission, the uncoded 16-QAM OFDM BER performances obtained with the presented radio communication platform. Simulated and measured data are compared with theoretical results. In this figure, the phase imbalance is θ = 5° and the gain imbalance is equal to 0.6 dB.
4-RF impairments mitigation using SIMO processing
The SIMO 802.11g architecture
Spatial diversity is very often used in today's radio receivers, where SIMO processing ensures an improvement of the BER performances. Instead of using only one antenna at the receiver side, N antennas are used to take advantage of several versions of the same emitted signal; combining these N signals increases the SNR of the received signal.
In our case, a Minimum Mean Square Error (MMSE) approach is used as the optimization criterion, with a Sample Matrix Inversion (SMI) technique [START_REF] Gupta | SMI adaptive antenna arrays for weak interfering signals[END_REF] to estimate the optimal complex weights to apply on each received branch. These coefficients are computed in the frequency domain using the knowledge of the two long preambles (corresponding to two OFDM symbols) at the beginning of each 802.11g frame. Considering a constant propagation channel response during the complete frame duration, the SMI processing ensures a very good trade-off between BER performance and computational complexity.
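The per-sub-carrier SMI computation can be sketched as follows. This is our own minimal illustration: the diagonal loading, the synthetic channel, the noise level and the preamble values are assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n_rx, n_pre = 64, 2, 2            # sub-carriers, receive antennas, long preamble symbols

# Known long preamble (BPSK-like values here, per sub-carrier and symbol)
x_pre = rng.choice([-1.0, 1.0], size=(M, n_pre)).astype(complex)

# Synthetic per-antenna channel and received preamble (illustration only)
h = (rng.normal(size=(M, n_rx)) + 1j * rng.normal(size=(M, n_rx))) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=(M, n_rx, n_pre)) + 1j * rng.normal(size=(M, n_rx, n_pre)))
y_pre = h[:, :, None] * x_pre[:, None, :] + noise    # shape: M x n_rx x n_pre

# SMI: per sub-carrier, w(k) = R(k)^-1 p(k), with diagonal loading for stability
delta = 1e-3
w = np.zeros((M, n_rx), dtype=complex)
for k in range(M):
    yk = y_pre[k]                                    # n_rx x n_pre snapshots
    r = yk @ yk.conj().T / n_pre + delta * np.eye(n_rx)   # sample covariance
    p = yk @ x_pre[k].conj() / n_pre                 # cross-correlation with known preamble
    w[k] = np.linalg.solve(r, p)

# MMSE combining of a later data symbol (same channel assumed constant over the frame)
x_data = rng.choice([-1.0, 1.0], size=M).astype(complex)
y_data = h * x_data[:, None] + 0.05 * (rng.normal(size=(M, n_rx)) + 1j * rng.normal(size=(M, n_rx)))
x_hat = np.sum(w.conj() * y_data, axis=1)            # w^H y, per sub-carrier
print("mean squared error:", np.mean(np.abs(x_hat - x_data) ** 2))
```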
In our equipment, the two incident signals are recorded simultaneously by the VSA, and the baseband signals are re-injected into the ADS simulated multi-antenna structure. Simulated and measured performances of 1x2 SIMO structures for different propagation channels and in realistic working conditions are presented in [START_REF] Morlat | Measured Performances of a Multistandard SIMO receiver[END_REF]. In the next part of this work, we present the effect of the RF impairments due to a non-ideal Zero-IF baseband converter. Even if dedicated numerical processing already exists to estimate and correct RF impairments, some errors can still occur. That is why it is interesting to detail the natural compensation that can be reached thanks to the SMI properties. Furthermore, this kind of study fits well within the context of a global system performance presentation.
Phase Noise Mitigation
As detailed in section 3, local oscillator phase noise impacts M-QAM OFDM transmissions significantly. Even if the CPE can be removed, residual errors due to ICI still exist. Furthermore, the design of precise local oscillators ensuring a low phase noise variance is expensive. That is why we present, in this part, the achievable performances in a SIMO configuration for different propagation channels including the phase noise impairment. The considered phase noise PSD is modelled as follows: -60 dBc/Hz at β Hz, -80 dBc/Hz at 10β Hz, -100 dBc/Hz beyond 100β Hz. This corresponds to a Wiener phase noise exhibiting a -20 dB/dec slope, with β the PSD spectrum bandwidth of the phase noise. As β increases, the SNR degrades and the hardware components get cheaper. Fig. 7 gives the relative BER performances obtained by simulation considering a complete 802.11g transmission scheme: both phase tracking ensuring CPE suppression and FEC decoding are applied.
AWGN and frequency selective channels with fading are used to simulate the wireless propagation.
The parameter β takes different values between 1 kHz and 30 kHz, i.e. between 3.2·10^-3 and 9.6·10^-2 times the OFDM sub-carrier spacing of 312.5 kHz.
As shown in Fig. 7, SIMO processing does not mitigate phase noise impairments significantly, especially in the AWGN case. This result can be explained by the fact that the optimal complex coefficients applied on each arm of the SIMO receiver are estimated with only the 128 samples of the 802.11g frame preamble (2 OFDM symbols). However, as reported in (8) and (9), the phase noise contribution is not constant during a whole frame and takes different values at each new OFDM symbol; the SIMO weights become obsolete. This limitation is mitigated if the transmission takes place over a frequency selective channel. In this case, the multi-antenna receiver is able to take advantage of spatial diversity, and H(k)·S_{m,l-k} can take small values on each received branch, ensuring a decrease of the ICI term.
Frequency Offset Mitigation
In a common 802.11g receiver, digital stages of frequency offset estimation and correction are implemented [START_REF] Gil Jimenez | Design and Implementation of Synchronization and AGC for OFDM-based WLAN Receivers[END_REF]. However, residual frequency offset can occur, and introduce errors in the propagation channel estimation. SIMO performance evaluation in a realistic 802.11g structure seems to be an interesting choice and is detailed in the next part.
Fig. 8 presents the relative BER performances for different frequency offset values, in the range of 0 to 55 kHz (i.e. 0 to 23 ppm). These results were obtained by processing measured signals in the case of an NLOS transmission with fading. Even though the carrier frequency offset takes an important value of 50 kHz, the impact of this impairment, which is obvious for a SISO transmission, is really well mitigated using diversity.
Even if the analytical impact of frequency offset and of phase noise on the received complex samples is almost the same, we observe that SIMO processing ensures a larger mitigation of the frequency offset than of the phase noise impairment. This is due to the fact that the ICI term is constant in the case of a transmission impaired by frequency offset. Hence, this constant error is taken into account by the SIMO processing, and the complex weights computed by the MMSE algorithm remain optimal during the entire frame (from a frequency offset mitigation point of view).
I-Q mismatch mitigation
At first, the SMI gain obtained from simultaneously measured signals is presented in Fig. 9 for an I-Q phase imbalance in the range of 0° to 10°. The solid line curves represent the BER performance for an AWGN propagation channel. This clearly shows that the SMI algorithm efficiently compensates the RF impairment impact, even for a large phase imbalance of 10°, and that SIMO processing allows a very good rejection of the jamming effect due to phase imbalance. In the case of a wireless link under a frequency selective and fading channel, the performance degradation and its mitigation with the SMI processing are presented with dotted lines on the same figure. SISO performances under AWGN and NLOS working conditions are fairly close, which seems to contradict (22). A possible explanation is related to the noise level at the receiver input. In the present case, the reference BER value of 5·10^-3 is relatively high, so the contribution of the first term in (22) is not significant. Furthermore, the I-Q impairment mitigation with the SMI processing is less important for frequency selective working conditions than for a transmission under an AWGN channel. This could be explained by (22), recalling that the same complex weights are applied to all the sub-carriers. The mitigation difference clearly appears for phase imbalances of up to 8°, which is an important mismatch. Following the same approach, the I-Q gain imbalance influence was studied. Fig. 10 presents the mitigation which can be reached using diversity for different I-Q gain imbalance values from 0 to 1.2 dB. Similar conclusions can be drawn for the I-Q gain imbalance mitigation with SMI processing.
Even if the mitigation with the SMI algorithm is less important in multi-path propagation channels than in AWGN conditions, excellent performance can be observed for I-Q gain imbalance values up to 1 dB.
5-Conclusion
We present a comprehensive study of a multi-antenna OFDM homodyne receiver, based on theory, simulation and measurements, obtained within a reduced design cycle. The aim of this approach is to get a realistic view of a complex system's performance without requiring many stages of prototyping.
This study highlights the key points of the RF element design for this kind of receiver.
Multi-antenna approaches, like software radio principles, call for relaxing the constraints on the quality of the components in order to achieve an acceptable cost-performance ratio. We show that the conventional multi-antenna processing used here can significantly reduce the degradation caused by the most common RF impairments: frequency offset, local oscillator phase noise and IQ imbalance. The multiplication of RF branches can be made cheaply if this natural impairment compensation is taken into account, without having to resort to specific digital processing. The result, however, is that particular attention should be paid to the problem of phase noise, either by means of a strong constraint on the selected components, or by the addition of a dedicated digital processing.
As future work, we plan to assess the same constraints and possible compensations as part of a multi-standard broadband architecture, making it possible to obtain a feasible receiver for applications such as cognitive or opportunistic radio. More results concerning this work are presented in [START_REF] Morlat | On Relaxing constraints on Multibranches RF Front-End for a SIMO OFDM -A Global System Evaluation Scheme[END_REF].
6-References
Figure 1 - The 2x2 MIMO connected solution using the interaction between the ADS software and Agilent Technologies equipment. The emitter part, channel propagation and receiver part can be studied and modelled.
Figure 3 - Zero-IF converter with RF impairments (local oscillator phase noise, frequency offset and I-Q imbalance).
Figure 4 - Comparison between simulated and theoretical BER performance for an uncoded 16-QAM OFDM 802.11g transmission under AWGN propagation conditions. These results were obtained considering a phase noise with a PSD spectrum for positive frequencies modelled as follows, corresponding to a phase noise variance of σ^2 = 0.015 rad^2: -60 dBc/Hz at 1 kHz, -90 dBc/Hz at 100 kHz, -110 dBc/Hz above 1 MHz.
Fig. 5 compares the BER performances of an uncoded 16-QAM OFDM transmission under an AWGN propagation channel, taking into account a frequency offset equal to 8.3 ppm (∆f = 20 kHz). The fact that the different curves match validates our structures and measurement methods.
Figure 5 - Frequency offset impact on AWGN SISO BER performances (theoretical, simulated and measured data). The frequency offset value is 20 kHz.
Figure 6 - I-Q imbalance impact on AWGN SISO BER performances (theoretical, simulated and measured data). The phase imbalance value is 5° and the gain imbalance is equal to 0.6 dB.
Figure 7 - Phase noise mitigation in AWGN and NLOS simulated conditions. Results are given in relative BER versus the local oscillator phase noise characteristic (β).
Figure 8 - Relative BER versus frequency offset in NLOS working conditions. SISO and 1x2 SIMO performances are given.
Figure 9 - Relative BER versus I-Q phase imbalance in AWGN and NLOS measured working conditions. SISO and 1x2 SIMO performances are given.
Figure 10 - Relative BER versus I-Q gain imbalance in AWGN and NLOS measured working conditions. SISO and 1x2 SIMO performances are given.
Figure 2 -Uncoded 802.11g AWGN BER performances. Theoretical, simulated and measured data are plotted versus SNR.
|
04121235 | en | [
"math"
] | 2024/03/04 16:41:26 | 2017 | https://hal.science/hal-04121235/file/conc-convCuesta-Leadi.pdf | Keywords: Mathematics Subject Classification (2010). 35J20, 35J70, 35P05, 35P30 Concave-convex problem, Nehari manifold, Dirichlet-Steklov boundary condition, elliptic problem, non-coerciveness, p-Laplacian, pbilaplacian, indefinite weights
On abstract indefinite concave-convex problems and applications to quasilinear elliptic equations
Mabel Cuesta and Liamidi Leadi
Abstract. In this work we study the existence of critical points of an abstract C^1 functional J defined on a reflexive Banach space X. This functional is of the form
J(u) = (1/p)E(u) - (1/r)A(u) - (1/q)B(u),
with E, A, B positive-homogeneous indefinite functionals of degree p, r, q respectively and 1 < r < p < q. The critical points are found by minimization along several subsets of the Nehari manifold associated to J.
We apply these results to various quasilinear elliptic problems, as for instance, the following p-laplacian concave-convex problem with Steklov boundary conditions on a bounded regular domain
-∆_p u + V(x)u^{p-1} = 0 in Ω;   |∇u|^{p-2} ∂u/∂ν = λa(x)u^{r-1} + b(x)u^{q-1} on ∂Ω;   u > 0 in Ω,
with given functions a, b, V possibly indefinite and 1 < r < p < q. We also apply our abstract result to a concave-convex quasilinear problem associated with the p-bilaplacian.
Introduction
After the celebrated paper of Ambrosetti-Brézis-Cerami [START_REF] Ambrosetti | Combined effect of concave and convex nonlinearities in some elliptic problems[END_REF] on the solvability of the elliptic problem
-∆u = f_λ(u) in Ω, u = 0 on ∂Ω, u > 0 in Ω, for f_λ(u) = λu^{r-1} + u^{q-1} and 1 < r < 2 < q, there has been a huge amount of research on this type of equations and on the effects of the concave (1 < r < 2) and convex (2 < q) nonlinearity on multiplicity. In [START_REF] Brown | The Nehari manifold for a semilinear elliptic equation involving a sublinear term[END_REF][START_REF] Brown | A fibering map approach to a semilinear elliptic boundary value problem[END_REF][START_REF] Brown | The Nehari manifold for a semilinear elliptic equation with a sign-changing weight function[END_REF] the authors studied the previous equation with a concave-convex forcing term f_λ(u) = λa(x)u^{r-1} + b(x)u^{q-1} with sign-changing weights a(x), b(x) by using the so-called Nehari manifold and the fibering map associated to the problem. This approach has proved to be very useful to deal with this type of problem and has become a subject of research on its own. The results of [START_REF] Ambrosetti | Combined effect of concave and convex nonlinearities in some elliptic problems[END_REF] were partially generalized to the p-laplacian operator under Dirichlet boundary conditions in [START_REF] Ambrosetti | Multiplicity results for some nonlinear elliptic equations[END_REF][START_REF] Garcia-Azorero | Multiplicity of solutions for elliptic problems with critical exponent or with a nonsymmetric term[END_REF][START_REF] De Figueiredo | Local superlinearity and sublinearity for the p-Laplacian[END_REF][START_REF] Garcia-Azorero | Some results about the existence of a second positive solution in a quasilinear critical problem[END_REF][START_REF] Wu | Multiplicity of positive solution of p-Laplacian problems with signchanging weight functions[END_REF]. Soon after, other boundary conditions were considered, as for instance in [START_REF] Sabina De Lis | A concave-convex quasilinear elliptic problem subject to a non linear boundary condition[END_REF]. Simultaneously, some attention has been devoted to quasilinear problems that are non-coercive. The first work in this direction was the study of the spectrum of the operator -∆_p u + V(x)|u|^{p-2}u with Dirichlet boundary conditions and V an indefinite bounded weight, see [START_REF] Cuesta | A weighted eigenvalue problem for the p-Laplacian plus a potential[END_REF]. In [START_REF] Ramos-Quoirin | Lack of coercivity in a concave-convex type equation[END_REF] the author studied the concave-convex problem -∆_p u + V(x)u^{p-1} = λa(x)u^{r-1} + b(x)u^{q-1} in Ω; u = 0 on ∂Ω; u > 0 in Ω, by minimization on the Nehari set and obtained, under various "coerciveness" conditions related to V, a and b, the existence of up to four solutions: two solutions satisfying the condition E_V(u) > 0 with
E_V(u) := ∫_Ω (|∇u|^p + V(x)|u|^p) dx,
and two more solutions satisfying E_V(u) < 0.
Our goal in this work is to generalize the results of [START_REF] Ramos-Quoirin | Lack of coercivity in a concave-convex type equation[END_REF] for the same quasilinear operator, namely -∆_p u + V(x)|u|^{p-2}u, but with different boundary conditions. Precisely, let Ω be a bounded smooth domain of class C^{2,α} (0 < α < 1) with outward unit normal ν on the boundary ∂Ω, and let ∆_p u := div(|∇u|^{p-2}∇u) be the well known p-laplacian operator. The functions V ∈ L^∞(Ω) and (a, b) ∈ (C^s(∂Ω))^2, for some s ∈ (0, 1), are allowed to change sign. The real number λ is a positive parameter. We will ask the exponents r, q to satisfy 1 < r < p < q < p_*, where p_* = p(N-1)/(N-p)_+ is the critical exponent for the trace operator W^{1,p}(Ω) → L^s(∂Ω, dρ), and ρ denotes the restriction to ∂Ω of the (N-1)-Hausdorff measure, which coincides with the usual Lebesgue surface measure as ∂Ω is regular enough. We will consider the following quasilinear elliptic Problem I:
-∆_p u + V(x)u^{p-1} = 0 in Ω;  |∇u|^{p-2} ∂u/∂ν = λa(x)u^{r-1} + b(x)u^{q-1} on ∂Ω;  u > 0 in Ω,   (1.1)
and Problem II:
-∆_p u + V(x)u^{p-1} = λa(x)u^{r-1} in Ω;  |∇u|^{p-2} ∂u/∂ν = b(x)u^{q-1} on ∂Ω;  u > 0 in Ω.   (1.2)
The search of solutions for these quasilinear problems can be stated in an abstract form as the search of critical points of a functional J defined on a Banach space X, which will take the form
J(u) = (1/p)E(u) − (1/r)A(u) − (1/q)B(u),   (1.3)
where E, A, B are possibly indefinite but positive-homogeneous functionals of degree p, r, q respectively, with 1 < r < p < q. We will prove several existence and multiplicity results on critical points of J by minimizing J along several subsets of the Nehari manifold N = {u ∈ X \ {0} ; ⟨J'(u), u⟩ = 0} associated to J. These existence results for general functionals E, A, B and a general space X can be applied in many cases, as for instance for the p-bilaplacian operator with Navier boundary conditions, that is the following Problem III:
∆²_p u − c|u|^{p-2}u = λa(x)|u|^{r-2}u + b(x)|u|^{q-2}u in Ω;  u = ∆u = 0 on ∂Ω,   (1.4)
with c ∈ IR. The p-bilaplacian operator is defined as ∆ 2 p u := ∆(|∆u| p-2 ∆u) and it has received recently some attention. In the case p = 2 the authors in [START_REF] Yang | On semilinear biharmonic equations with concaveconvex nonlinearities involving weight functions[END_REF] generalize the Ambrosetti-Brézis-Cerami problem in the case c = 0 (coercive case), indefinite weight a and non-negative weight b. See also [START_REF] Liu | Infinitely many solutions for p-biharmonic equation with general potential and concave-convex nonlinearity in IR N[END_REF] for similar results in IR N or [START_REF] Ji | On the p-biharmonic equation involving concave-convex nonlinearities and sign-changing weight function[END_REF] for Dirichlet boundary conditions (u = ∇u = 0) and p = 2. This paper is organized as follows. In section 2 we describe the Nehari set, the fibering map and the different "sign-subsets" of the Nehari set that will be used to find critical points of the C 1 functional J which is defined in (1.3). In sections 3 and 4 we prove four different critical points theorems of the functional J in the Nehari set, c.f. Theorem 3.3, Theorem 3.4, Theorem 4.1 and Theorem 4.2. The hypothesis needed to apply these general theorems concern the coerciveness of E along the sign-subsets of the Nehari set described in section 2. In section 5 we present various conditions in terms of eigenvalues-like numbers that will imply the required coerciveness and then we prove some existence results (see Theorem 5.1. and Theorem 5.7). In section 6 we apply the theorem of the previous section to state some existence and multiplicity results for problems I, II and III.
The Nehari set for a concave-convex functional
Let (X, • ) be a reflexive Banach space and E, A, B ∈ C 1 (X, IR). Let us assume that for some 1 < r < p < q it holds
E(tu) = t p E(u), A(tu) = t r A(u), B(tu) = t q B(u), ∀(t, u) ∈ IR + × X.
(2.1) The following hypothesis will also be assumed:
(H1) ∀(u_n)_{n∈IN} ⊂ X, if u_n ⇀ u for some u ∈ X then there exists a subsequence (u_{n_k}) such that A(u_{n_k}) → A(u) and B(u_{n_k}) → B(u).
(H2) E is bounded on bounded sets (i.e. sup_{u∈X, ‖u‖=1} |E(u)| < ∞) and it is weakly lower semi-continuous.
(H3) ∀(u_n)_{n∈IN} ⊂ X, if u_n ⇀ u for some u ∈ X and E(u_n) → E(u) then u_n → u.
By "⇀" we denote the weak convergence in X. Let us consider the functional J defined as
J(u) = (1/p)E(u) − (1/r)A(u) − (1/q)B(u)
and look for solutions of the problem J'(u) = 0. Since J may be unbounded from below on the set {u ∈ X ; B(u) > 0}, it is useful to consider the functional J restricted to the so-called Nehari set
N := {u ∈ X \ {0} ; N(u) := ⟨J'(u), u⟩ = 0},
where ⟨•, •⟩ is the usual duality pairing on X* × X. Thus u ∈ N if and only if u ≠ 0 and
E(u) = A(u) + B(u).
Let us introduce the fibering maps associated to J. For u ∈ X \{0}, we define the function
J_u : (0, ∞) → ℝ,  t ↦ J_u(t) := J(tu).
Consequently u ∈ N if and only if J'_u(1) = 0 or, equivalently, tu ∈ N (with t > 0) if and only if J'_u(t) = 0. The notation J'_u, J''_u stands here for dJ_u/dt, d²J_u/dt² respectively.
Furthermore, since J'_u(1) = 0 for u ∈ N we can write
J''_u(1) = (p − q)E(u) − (r − q)A(u) = (p − r)E(u) − (q − r)B(u).   (2.2)
It is standard to split N into three sets that, roughly speaking, correspond to local minima, local maxima and inflexion points of J u :
N^+ := {u ∈ X ; J'_u(1) = 0, J''_u(1) > 0}, N^- := {u ∈ X ; J'_u(1) = 0, J''_u(1) < 0}, N^0 := {u ∈ X ; J'_u(1) = 0, J''_u(1) = 0}.
We also introduce the following "sign-subsets":
A ± := {u ∈ X ; A(u) ≷ 0}, A 0 = {u ∈ X ; A(u) = 0}, B ± := {u ∈ X ; B(u) ≷ 0}, B 0 = {u ∈ X ; B(u) = 0}, E ± := {u ∈ X ; E(u) ≷ 0}, E 0 = {u ∈ X ; E(u) = 0}.
Finally we denote A ± 0 = A ± ∪ A 0 and similarly for B ± 0 and E ± 0 . We stress here that, without further assumptions on E, A, B, the previous sets can be void.
Next we state the following property of N that follows straight from hypothesis (H1)-(H3).
Lemma 2.1. Let u n be a sequence in N such that u n 0. Then u n → 0.
Let us give a result on the boundedness of N . We recall that a subset
C of X is called a cone if tx ∈ C, ∀(t, x) ∈ IR^+ × C. Lemma 2.2. Let C ⊂ X be a weakly closed cone such that (C ∩ B_0) \ {0} ⊂ E^+.
(2.3)
Then (i) the set N + ∩ C is bounded; (ii) if sup C∩N J(u) < ∞ then N ∩ C is bounded.
Proof. (i) Assume by contradiction that there is an unbounded sequence
u_n ∈ N^+ ∩ C. Take v_n = u_n/‖u_n‖ ∈ C and v_0 ∈ C such that v_n ⇀ v_0. From the fact that u_n ∈ N we have
E(v_n)/‖u_n‖^{q−p} = A(v_n)/‖u_n‖^{q−r} + B(v_n)
and passing to the limit we conclude that B(v_0) = 0. We have used here that E is bounded on bounded sets (i.e. (H2)). From the fact that u_n ∈ N^+ we have
E(v_n) < ((q − r)/(q − p)) · A(v_n)/‖u_n‖^{p−r}
and passing to the limit it comes that E(v_0) ≤ 0. The possibility of v_0 = 0 is ruled out by the fact that, in that case, 0 = E(v_0) = lim inf_{n→∞} E(v_n) and therefore, by (H3), we would have v_n → v_0 = 0. This is a contradiction with the property ‖v_n‖ = 1. Thus v_0 ≠ 0 and we get a contradiction with the hypothesis (2.3) of the lemma.
(ii) Assume that u_n ∈ N ∩ C is a sequence satisfying ‖u_n‖ → +∞ and denote v_n = u_n/‖u_n‖. Since the sequence v_n is bounded, there exists v_0 ∈ X and a subsequence v_{n_k} such that v_{n_k} ⇀ v_0. From the fact that u_{n_k} ∈ N we have, as previously, 0 = B(v_0). On the other hand
J(u_{n_k})/‖u_{n_k}‖^p = (1/p − 1/q)E(v_{n_k}) − (1/r − 1/q) · A(v_{n_k})/‖u_{n_k}‖^{p−r},
therefore passing to the limit we get
E(v_0) ≤ lim inf_{k→+∞} E(v_{n_k}) ≤ 0 because J(u_n) is uniformly bounded from above. If v_0 = 0 then E(v_0) = 0 = lim inf_{k→+∞} E(v_{n_k})
and, from hypothesis (H3), v_{n_k} → v_0 = 0. This is impossible because ‖v_{n_k}‖ = 1 for all k. Thus v_0 ≠ 0 and we get again a contradiction with the hypothesis (2.3) of the lemma.
We also have the following property:
Lemma 2.3. Let C ⊂ X be a weakly closed cone and assume that
(C ∩ A_0) \ {0} ⊂ E^+.   (2.4)
Then N^- ∩ C contains no sequence u_n ⇀ 0.
Proof. Assume by contradiction that there is a sequence u_n ∈ N^- ∩ C such that u_n ⇀ 0 in X. From the fact that u_n ∈ N we have 0 = E(0) ≤ lim inf E(u_n) = lim inf (A(u_n) + B(u_n)), so u_n → 0 in X. Take z_n := u_n/‖u_n‖ ∈ C and assume that for some z_0 ∈ X we have z_n ⇀ z_0, A(z_n) → A(z_0) and B(z_n) → B(z_0). By using that u_n ∈ N we have
A(z_n) = E(z_n)‖u_n‖^{p−r} − B(z_n)‖u_n‖^{q−r}
and passing to the limit it comes A(z_0) = 0. Besides, by using that u_n ∈ N^- and (2.2) we have
E(z_n) ≤ ((q − r)/(p − r)) B(z_n)‖u_n‖^{q−p},
and passing to the limit
E(z 0 ) ≤ lim inf E(z n ) ≤ 0.
Notice that the possibility z_0 = 0 is excluded because, in that case, we would have 0 = E(z_0) = lim inf E(z_n), which would imply that z_n → z_0 = 0, a contradiction with the fact that ‖z_n‖ = 1. Thus we have proved that z_0 ∈ (C ∩ A_0) \ {0}. Then from the hypothesis (2.4) of this lemma it comes that E(z_0) > 0, a contradiction.
The fibering map
Let us give a complete description of the behaviour of J u according to the sign of A(u), B(u) and E(u). Let us write
J'_u(t) = t^{p−1}E(u) − t^{r−1}A(u) − t^{q−1}B(u) = t^{r−1}[m_u(t) − A(u)],
where m_u(t) := t^{p−r}E(u) − t^{q−r}B(u).
(2.5) Clearly, for t > 0, tu ∈ N if and only if t is a solution of the equation
m u (t) = A(u).
(2.6)
If for u ∈ X \ {0} and t > 0 one has J'_u(t) = 0, then J''_u(t) = t^{r−1} m'_u(t).
Consequently tu ∈ N^+ if and only if m'_u(t) > 0 (similar results for N^- and N^0). In order to study the resolvability of (2.6) let us describe the variation of the function t → m_u(t) for any u ∉ E_0 ∩ B_0. Four possible pictures of the graph of m_u can be drawn:
Case I: E(u) > 0 and B(u) > 0, as in Figure 1(a); Case II: E(u) ≥ 0 and B(u) ≤ 0, as in Figure 1(b); Case III: E(u) ≤ 0 and B(u) ≥ 0, as in Figure 1(c); Case IV: E(u) < 0 and B(u) < 0, as in Figure 1(d).
We shall now describe the nature of the fibering maps for all possible signs of A(u), B(u) and E(u). The following behaviour of the function J_u follows from the previous description of the function m_u. As above, we will assume that u ∉ E_0 ∩ B_0. Case 1 : E(u) > 0 and B(u) > 0.
In this case the function m u has graph as shown in Figure 1(a).
Case 1.1 : If A(u) > 0 and A(u) < max t>0 m u (t) then it is clear that there are exactly two solutions 0 < t 1 (u) < t 2 (u) of (2.6) with m u (t 2 (u)) < 0 < m u (t 1 (u)). Thus there are exactly two multiples of u lying in N , namely t 1 (u)u ∈ N + and t 2 (u)u ∈ N -. It follows that J u has exactly two critical points, a local minimum at t 1 (u) and a local maximum at t 2 (u). Moreover J u is decreasing in (0, t 1 (u)) and in (t 2 (u), ∞), increasing in (t 1 , t 2 ), see Figure 2(f). Case 1.2 : If A(u) ≤ 0. Then using the graph of m u in Figure 1(a), we deduce that there exists one positive solution of (2.6). Consequently J u has graph as shown in Figure 2(g) and there is a unique value t(u) > 0 such that t(u)u ∈ N . Moreover m u (t(u)) < 0, so t(u)u ∈ N -and the fibering maps J u has a unique critical point which is a local maximum. Case 2 : E(u) ≥ 0 and B(u) ≤ 0.
In this case the function m u is an increasing function of t (see Figure 1(b)). Case 2.1 : If A(u) > 0, then m u has a graph as in Figure 1(b) and J u has a graph as shown in Figure 2(i). It is clear that there is exactly one solution of (2.6), i.e. there is a unique t(u) > 0 such that t(u)u ∈ N . Moreover m u (t(u)) > 0 (since m u is an increasing function of t) and so t(u)u ∈ N + . Thus the fibering map J u has a unique critical point which is a local minimum, as shown in Figure 2(i). Case 2.2 : If A(u) ≤ 0 then the function J u is increasing functions of t and so has graph as shown in Figure 2(h). Consequently (2.6) has no solution, for all t and thus no multiple of u lies in N . Case 3 : E(u) ≤ 0 and B(u) ≥ 0.
In this case the function m u is a decreasing function of t and has graph as shown in Figure 1(c). Case 3.1 : If A(u) < 0 then (2.6) has a unique solution. Since J u must have graph as shown in Figure 2(g), we conclude again that there is a unique t(u) > 0 such that t(u)u ∈ N and since m u (t(u)) < 0 in this case, we deduce that t(u)u ∈ N -. Hence the fibering map J u has a unique critical point which is a local maximum. Case 3.2 : If A(u) ≥ 0 then (2.6) has no solution. Moreover J u is a decreasing function of t and has graph as shown in Figure 2(e). Thus in this case no multiple of u lies in N . Case 4 : E(u) < 0 and B(u) < 0.
In this case m u has graph as shown in Figure 1(d). Case 4.1 : If A(u) < 0 and A(u) > min t>0 m u (t) then J u has graph as shown in Figure 2(j). In this case there are exactly two solutions t 1 (u) < t 2 (u) of (2.6) with m u (t 1 (u)) < 0 < m u (t 2 (u)). Thus there are exactly two multiples of u which belong to N , namely t 1 (u) ∈ N - and t 2 (u) ∈ N + . It follows that J u has exactly two critical points, a local maximum at t = t 1 (u) and a local minimum at t = t 2 (u). Furthermore J u is increasing in (0, t 1 ), decreasing in (t 1 , t 2 ) and increasing in (t 2 , ∞), as in Figure 2(j). Case 4.2 : If A(u) ≥ 0 then (2.6) has a unique solution and J u has graph as shown in Figure 2(i). Thus there is a unique value t(u) > 0 such that t(u)u ∈ N . Since m u (t(u)) > 0, we deduce that t(u)u ∈ N + and consequently J u has a unique critical point which is a local minimum as shown in Figure 2(i).
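As an aside, the qualitative picture described in Cases 1-4 can be reproduced numerically. The following Python sketch is purely illustrative and not part of the paper; the exponents and the sample values of E(u), A(u), B(u) are arbitrary placeholders chosen only to hit each sign pattern. It counts the critical points of the fibering map J_u(t) = t^p E(u)/p − t^r A(u)/r − t^q B(u)/q on (0, ∞) and reports whether each one is a local minimum or maximum.

```python
# Illustrative sketch only (not from the paper): for sample exponents 1 < r < p < q and a few
# arbitrary sign patterns of (E(u), A(u), B(u)), locate the critical points of the fibering map
# J_u(t) = t^p E/p - t^r A/r - t^q B/q on (0, +inf) and report their nature (min or max).
import numpy as np

r, p, q = 2.0, 3.0, 5.0
ts = np.linspace(1e-3, 5.0, 200_000)

def fibering_critical_points(E, A, B):
    dJ = ts**(p - 1) * E - ts**(r - 1) * A - ts**(q - 1) * B      # J_u'(t)
    signs = np.sign(dJ)
    idx = np.where(np.diff(signs) != 0)[0]                        # sign changes of J_u'
    kinds = ['min' if signs[i] < 0 else 'max' for i in idx]       # -/+ crossing = local min
    return list(zip(np.round(ts[idx], 3), kinds))

samples = {                      # placeholder values of (E(u), A(u), B(u)), one per case
    'Case 1.1 (E>0, B>0, 0<A<max m_u)': (1.0, 0.2, 1.0),
    'Case 2.1 (E>0, B<0, A>0)':         (1.0, 0.5, -1.0),
    'Case 3.1 (E<0, B>0, A<0)':         (-1.0, -0.5, 1.0),
    'Case 4.1 (E<0, B<0, min m_u<A<0)': (-1.0, -0.2, -1.0),
}
for name, (E, A, B) in samples.items():
    print(name, fibering_critical_points(E, A, B))
```

With these placeholder values the script reports a local minimum followed by a local maximum in Case 1.1, a single minimum in Case 2.1, a single maximum in Case 3.1, and a maximum followed by a minimum in Case 4.1, in agreement with the discussion above.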
Proposition 2.4.
(i) If either A^+ ∩ B^-_0 ∩ E^+ ≠ ∅ or Λ^+ := {u ∈ A^+ ∩ B^+ ∩ E^+ ; A(u) < max_{t>0} m_u(t)} ≠ ∅, then N^+ ∩ E^+ ≠ ∅.
(ii) If either A^-_0 ∩ B^+ ∩ E^+ ≠ ∅ or Λ^+ ≠ ∅, then N^- ∩ E^+ ≠ ∅.
(iii) If Λ^- := {u ∈ A^- ∩ B^- ∩ E^- ; A(u) > min_{t>0} m_u(t)} ≠ ∅, then N^+ ∩ A^- ≠ ∅.
(iv) If Λ^- ≠ ∅, then N^- ∩ B^- ≠ ∅.
A simple calculation shows that the maximum of m_u in case I is
max_{t>0} m_u(t) = ((q − p)/(q − r)) · ((p − r)/(q − r))^{(p−r)/(q−p)} · E(u)^{(q−r)/(q−p)} / B(u)^{(p−r)/(q−p)}
and the minimum of m_u in case IV is
min_{t>0} m_u(t) = −((q − p)/(q − r)) · ((p − r)/(q − r))^{(p−r)/(q−p)} · (−E(u))^{(q−r)/(q−p)} / (−B(u))^{(p−r)/(q−p)}.
The maximum (resp. the minimum) of m_u is achieved at the point
t^*(u) := [ ((p − r)E(u)) / ((q − r)B(u)) ]^{1/(q−p)}.   (2.7)
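As a quick sanity check, and not as part of the original argument, the closed-form expressions for t^*(u) and max_{t>0} m_u(t) can be verified symbolically. The exponent values below are arbitrary samples satisfying 1 < r < p < q.

```python
# Hypothetical verification (not part of the paper): check the closed-form expressions for
# t*(u) and max_{t>0} m_u(t) when E(u), B(u) > 0, on a sample exponent triple 1 < r < p < q.
import sympy as sp

t, E, B = sp.symbols('t E B', positive=True)
r, p, q = sp.Integer(2), sp.Integer(3), sp.Integer(5)       # arbitrary sample exponents

m = t**(p - r) * E - t**(q - r) * B                         # m_u(t) in Case I (E, B > 0)
t_star = [s for s in sp.solve(sp.diff(m, t), t) if s.is_positive][0]

claimed_t_star = ((p - r) * E / ((q - r) * B)) ** (sp.Integer(1) / (q - p))
claimed_max = (q - p) / (q - r) * ((p - r) / (q - r)) ** ((p - r) / (q - p)) \
              * E ** ((q - r) / (q - p)) / B ** ((p - r) / (q - p))

print(sp.simplify(t_star - claimed_t_star))                 # expected: 0
print(sp.simplify(m.subs(t, t_star) - claimed_max))         # expected: 0
```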
Let us denote
λ^+_* := inf_{u ∈ A^+ ∩ B^+ ∩ E^+} [ max_{t>0} m_u(t) / A(u) ] ≥ 0   (2.8)
in case I and
λ^-_* := inf_{u ∈ A^- ∩ B^- ∩ E^-} [ min_{t>0} m_u(t) / A(u) ] ≥ 0   (2.9)
in case IV. From the previous discussion it follows trivially that:
Lemma 2.5. (i) If λ + * > 1 then N 0 ∩ E + = ∅. (ii) If λ - * > 1 then N 0 ∩ E -= ∅.
3. Local minimizers of J restricted to the Nehari set and to E + Our purpose in this section is to prove that, under some suitable assumptions on A ± , B ± and E ± , the functional J is bounded below and achieves its infimum on some of the sign subsets of N described in cases I to IV of section 2. This will provide us critical points for J, as a consequence of the following well known result:
Lemma 3.1. Suppose that u is a local minimiser of J restricted to N. If u ∉ N^0 then u is a critical point of J relative to X.
Proof. Since u is a minimiser of J on N , there exists γ ∈ R (Lagrange multiplier) such that
J (u) = γN (u). (3.1)
Thus in particular we have
⟨J'(u), u⟩ = γ⟨N'(u), u⟩, which implies that γ⟨N'(u), u⟩ = 0 because 0 = N(u) = ⟨J'(u), u⟩ (since u ∈ N). Moreover ⟨N'(u), u⟩ = pE(u) − rA(u) − qB(u) = (p − r)E(u) − (q − r)B(u) = J''_u(1).
Consequently, if u ∉ N^0, that is J''_u(1) ≠ 0, then γ = 0 and we conclude from (3.1) that u is a critical point of J.
Let us rewrite the functional J for u ∈ N in two different forms:
J(u) = (1/p − 1/r)E(u) + (1/r − 1/q)B(u) = (1/p − 1/q)E(u) − (1/r − 1/q)A(u).   (3.2)
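The identities in (3.2) are simple algebra on the Nehari constraint E(u) = A(u) + B(u); purely as an illustration (this script is a hypothetical aid, not part of the paper), they can be confirmed symbolically.

```python
# Hypothetical aid (not part of the paper): confirm the two rewritings of J on the Nehari set,
# where the constraint E(u) = A(u) + B(u) holds, using symbolic substitution.
import sympy as sp

p, q, r, E, A, B = sp.symbols('p q r E A B', positive=True)
J = E/p - A/r - B/q

first  = sp.simplify(J.subs(A, E - B) - ((1/p - 1/r)*E + (1/r - 1/q)*B))   # substitute A = E - B
second = sp.simplify(J.subs(B, E - A) - ((1/p - 1/q)*E - (1/r - 1/q)*A))   # substitute B = E - A
print(first, second)   # expected output: 0 0
```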
We then observe the following
Lemma 3.2. (a) J(u) > 0 for all u ∈ (N^- ∩ A^-_0) ∪ (N^- ∩ B^+_0) ∪ (N^- ∩ E^-_0); (b) J(u) < 0 for all u ∈ (N^+ ∩ A^+_0) ∪ (N^+ ∩ B^-_0) ∪ (N^+ ∩ E^+_0).
Proof. From (2.2) we deduce the following inequalities:
(a) If u ∈ N^- then ((q − r)/(q − p))A(u) < E(u) < ((q − r)/(p − r))B(u).   (3.3)
Hence J(u) > max{ ((q − r)(r − p)/(pqr))A(u), ((q − p)(r − p)/(pqr))E(u), ((q − p)(q − r)/(pqr))B(u) }.
(b) Similarly, if u ∈ N^+ then ((q − r)/(p − r))B(u) < E(u) < ((q − r)/(q − p))A(u).   (3.4)
Hence J(u) < min{ ((q − r)(r − p)/(pqr))A(u), ((q − p)(r − p)/(pqr))E(u), ((q − p)(q − r)/(pqr))B(u) }.
Minimizing J along N +
A first critical point of J can be found on N + ∩ E + , provided this set is not empty and some "coerciveness" conditions:
Theorem 3.3. Assume that E^+ ∩ N^+ ≠ ∅, λ^+_* > 1 and
(H4) (A^+_0 ∩ B_0) \ {0} ⊂ E^+,
(H5) A^+ ⊂ E^+.
Then the following local infimum i := inf_{u ∈ N^+ ∩ E^+} J(u)
is achieved. Furthermore i < 0.
Proof. The fact that i < 0 readily follows from Lemma 3.2(b). First we prove that N + ∩ E + is bounded. Indeed N + ∩ E + ⊂ N + ∩ A + because of inequality (3.4) and, since A + 0 is a weakly closed cone, we have that the condition (2.3) of Lemma 2.2 corresponds to hypothesis (H4), thus we conclude that N + ∩E + is bounded. Notice that in particular we have that i > -∞. To prove that the infimum i is achieved take a minimizing sequence u n . Since this sequence is bounded there exists some u 0 ∈ X such that, up to subsequence, u n u 0 . By using (3.2) we can write
J(u n ) = 1 p - 1 q E(u n ) - 1 r - 1 q A(u n ) ≥ - 1 r - 1 q A(u n ).
and letting n → +∞ we get A(u 0 ) ≥ -i( 1 r -1 q ) -1 > 0. Then it comes then from (H5) that E(u 0 ) > 0. We claim that u n converges strongly to u 0 in X. Assume by contradiction that u n → u 0 . We discuss two alternatives : Alt. 1. B(u 0 ) > 0. Using the previous classification of the fibering maps, the graph of J u0 is as the one in Figure 2(f) so there exist 0 < t 1 (u 0 ) < t 2 (u 0 ) such that t 1 (u 0 )u 0 ∈ N + ∩E + , t 2 (u 0 )u 0 ∈ N -∩E + , J u0 is increasing between t 1 (u 0 ) and t 2 (u 0 ) and decreasing elsewhere. Since we are assuming that u n → u 0 hence u 0 ∈ N and therefore t 1 (u 0 ) = 1. Let us distinguish two cases: (a) 1 ≤ t 2 (u 0 ) and (b) 1 > t 2 (u 0 ) . In case (a) we have
J(t 1 (u 0 )u 0 ) = J u0 (t 1 (u 0 )) ≤ J u0 (1) < lim inf J un (1) = lim J(u n ) = i. (3.5)
Moreover Lemma 2.5 and hypothesis (H4) imply that N 0 = {0}. Thus (3.5) leads to a contradiction because t 1 (u 0 )u 0 ∈ N + ∩ E + . In case (b) using that
0 = J u0 (t 2 (u 0 )) < lim inf J un (t 2 (u 0 ))
we conclude that J un (t 2 (u 0 )) > 0 for n large. Since 1 is a local minimum of J un and the graph of J un looks like the one of figure 2(f) then it must be 1 < t 2 (u 0 ), a contradiction. Alt. 2. B(u 0 ) ≤ 0. The graph of J u is as the one in Figure 2(i) so there exists 0 < t(u 0 ) such that t(u 0 )u 0 ∈ N + ∩ E + and J u0 has a global minimum in t(u 0 ). If t(u 0 ) ≥ 1 again we have (3.5), a contradiction. If t(u 0 ) < 1 we use that 0 = J u0 (t(u 0 )) < lim inf J un (t(u 0 )), so then J un (1) > 0 for n large, which is also impossible because u n ∈ N .
Minimizing J along N -
We now look for solutions of
J'(u) = 0 in N^- ∩ E^+.
Theorem 3.4. Let us assume that N^- ∩ E^+ ≠ ∅, λ^+_* > 1 and
(H6) B^+_0 \ {0} ⊂ E^+.
Then the following infimum j := inf_{u ∈ N^- ∩ E^+} J(u) is > 0 and it is achieved.
Proof. We know from Lemma 3.2(a) that j ≥ 0. Observe that any minimizing sequence is bounded because (H6) implies the hypothesis (2.3) of Lemma 2.2 and the result comes from (ii) of the aforementioned lemma. Assume first that j > 0. We claim that there is a strong convergent minimizing sequence for j. Assume by contradiction that we have a minimizing sequence u n u 0 such that A(u n ) → A(u 0 ), B(u n ) → B(u 0 ) but u n → u 0 . If u 0 = 0 then Lemma 2.1 will imply that u n → 0, which is not the case we are assuming now. Thus u 0 = 0. We can also prove that B(u 0 ) > 0 by using the fact that
u n ∈ E + and q -r qr B(u n ) = J(u n ) - 1 p - 1 r E(u u ) ≥ J(u n ).
Then passing to the limit we get B(u 0 ) > 0. Consequently, (H6) implies that E(u 0 ) > 0. Now we distinguish two cases according to the sign of A(u). Alt. 1. A(u 0 ) > 0. In this case J u0 and J un look like Figure 2(f). If u n → u 0 then t 2 (u 0 ) = 1. We also have that t 2 (u 0 )u 0 ∈ N -∩ E + (we use here that λ + * > 1 to assure that t 2 (u 0 )u 0 ∈ N 0 , c.f. Lemma 2.5). Furthermore 0 = J u0 (t 2 (u 0 )) < lim inf J un (t 2 (u 0 )).
Thus J un (t 2 (u 0 )) > 0 for n large. Since t 2 (u n ) = 1 hence t 1 (u n ) < t 2 (u 0 ) < 1 and we will have
j ≤ J u0 (t 2 (u 0 )) < lim inf J un (t 2 (u 0 )) ≤ lim n→+∞ J un (1) = j, (3.6)
a contradiction. Alt. 2. A(u 0 ) ≤ 0. In this case J u0 and J un look like Figure 2(g). If u n → u 0 then t(u 0 ) = 1. We also have that t(u 0 )u 0 ∈ N -∩ E + . Then again we have
j ≤ J u0 (t(u 0 )) < lim inf J un (t(u 0 )) ≤ lim n→+∞ J un (1) = j, (3.7)
a contradiction.
Let us finally prove that j > 0. If this were not the case then for any minimizing sequence (which we know that will be bounded) we will have J(u n ) → 0. Then, up to a subsequence, there exists u 0 ∈ X such that u n u 0 , A(u n ) → A(u 0 ) and B(u n ) → B(u 0 ). From Lemma 3.2(a) we know that N -∩ E + ⊂ B + and hence the possibility u 0 = 0 is excluded from Lemma 2.3 and (H6) applied to C = B + 0 , so (2.4) is satisfied. Besides by writing
0 = lim n→+∞ J(u n ) = lim n→+∞ 1 p - 1 r E(u n ) + 1 r - 1 q B(u n ) it comes lim n→+∞ E(u n ) = p(q -r) q(p -r) B(u 0 ). (3.8) Thus, if B(u 0 ) = 0 then E(u 0 ) ≤ lim n→+∞ E(u n ) = 0, a contradiction with (H6). Hence B(u 0 ) > 0. Furthermore, from (3.8) and 0 = lim n→+∞ J(u n ) = lim n→+∞ 1 p - 1 q E(u n ) - 1 r - 1 q A(u n ) it comes A(u 0 ) = r(q -p) q(p -r) B(u 0 ) > 0.
We are going to rule out the following two alternatives: Alt. 1. u_n → u_0. In this case J(u_0) = 0 and we will have u_0 ∈ N^- ∩ E^+ (the possibility of u_0 ∈ N^0 is ruled out by the constraint λ^+_* > 1). But, according to Lemma 3.2(a), J(u_0) > 0, a contradiction. Alt. 2. u_n ↛ u_0. The map J_{u_0} will look as in Figure 2(f) and repeating the argument of Alt. 1 above we have (3.6) and we reach a contradiction. This concludes the proof of j > 0.
4. Local minimizers of J restricted to the Nehari set and to E^-
4.1. Minimizing J along N^+
We first look for solutions of our problem in N^+ ∩ A^- ⊂ N^+ ∩ E^-.
Theorem 4.1. Assume that N^+ ∩ A^- ≠ ∅, λ^-_* > 1 and
(H7) A^-_0 ∩ B_0 \ {0} ⊂ E^+,
(H8) A_0 ∩ B^- ⊂ E^+.
Then the following infimum
l := inf u∈N + ∩A -J(u)
is < 0 and it is achieved.
Proof. It comes from (H7) and Lemma 2.2(i) that N + ∩ A -is bounded and by Lemma 3.2(a) that l is negative. Let us prove that this infimum is achieved. Let u n be a minimizing sequence. Since this sequence is bounded, there exists
u 0 such that u n u 0 , A(u n ) → A(u 0 ) and B(u n ) → B(u 0 ). Thus A(u 0 ) ≤ 0, B(u 0 ) ≤ 0 and E(u 0 ) ≤ lim inf E(u n ) ≤ 0. From E(u n ) ≤ 0 and J(u n ) = 1 p - 1 r E(u n ) + 1 r - 1 q B(u n )
we infer that
B(u n ) ≤ rq q -r J(u n )
and passing to the limit it comes that B(u 0 ) < 0. In particular u 0 = 0. Now, if A(u 0 ) = 0, using (H8) we will conclude that E(u 0 ) > 0, a contradiction. Then A(u 0 ) < 0 and J u0 behaves either as in Figure 2(h) if E(u 0 ) = 0 or as in Figure 2(j) if E(u 0 ) < 0. Let us discuss this two alternatives: Alt. 1. E(u 0 ) = 0. In this case E(u 0 ) = 0 = lim inf E(u n ) = 0 so u n → u 0 by ( H3). Hence J(u 0 ) = l and also u 0 ∈ N + 0 ∩ A -. Since be in N 0 implies that E(u 0 ) = q-r q-p A(u 0 ) by (3.4) and we know that A(u 0 ) < 0 = E(u 0 ), then clearly u 0 ∈ N 0 , so have u 0 ∈ N + ∩ A -and we are done. Alt. 2. E(u 0 ) < 0. In this case there exist two values 0 ≤ t 1 (u 0 ) < t 2 (u 0 ) such that t 2 (u 0 ) > 0 is a global minimum value for J u0 , t 2 (u 0 )u 0 ∈ N + ∩ A - and J u0 is decreasing between t 1 (u 0 ) and t 2 (u 0 ), increasing elsewhere. Notice that u 0 ∈ N 0 because of condition λ - * > 1. Let us assume by contradiction that u n → u 0 . Then
l ≤ J(t_2(u_0)u_0) = J_{u_0}(t_2(u_0)) ≤ J_{u_0}(1) < lim inf J_{u_n}(1) = lim J(u_n) = l, a contradiction.
4.2. Minimizing J along N^-
Finally we minimize along N^- ∩ B^- ⊂ N^- ∩ E^-.
Theorem 4.2. Assume that N^- ∩ B^- ≠ ∅, λ^-_* > 1, (H7) and
(H9) A_0 ∩ B^-_0 \ {0} ⊂ E^+.
Then the infimum k := inf_{u ∈ N^- ∩ B^-} J(u)
is achieved and it is positive.
Proof. By Lemma 3.2(a) the value k ≥ 0. Since N -∩ B -⊂ A -, Lemma 2.2(ii) and (H7) imply that any minimizing sequence is bounded. We claim that any minimizing sequence u n possesses a convergence subsequence. Indeed, assume
u n u 0 , A(u n ) → A(u 0 ) ≤ 0 and B(u n ) → B(u 0 ) ≤ 0 and that u n → u 0 . Since u n ∈ N -we have E(u n ) ≤ q-r p-r B(u n ) ≤ 0 so E(u 0 ) ≤ 0.
Let us assume first that k = 0. If u 0 = 0 then from Lemma 2.1 it comes that u n → 0 and therefore 0 = lim J(u n ) = k, a contradiction. Thus u 0 = 0. From hypothesis (H7) and (H9) we must have u 0 ∈ A -∩ B -. Hence J u0 looks like Figure 2(j) if E(u 0 ) < 0 or like Figure 2(h) if E(u 0 ) = 0. Let us to discuss this two alternatives. Alt. 1. E(u 0 ) = 0. As we have E(u 0 ) = 0 = lim inf E(u n ) then u n → u 0 . Hence, using that u n ∈ N and passing to the limit we find 0 = E(u 0 ) = A(u 0 ) + B(u 0 ) ≤ 0 so it must be A(u 0 ) = B(u 0 ) = 0, a contradiction with (H7) or (H9). We have ruled out this alternative. Alt. 2. E(u 0 ) < 0. Thus there exists 0 < t 1 (u 0 ) < t 2 (u 0 ) such that t 1 (u 0 )u 0 ∈ N -and t 1 (u n ) = 1. We have used here that t 1 (u 0 )u 0 ∈ N 0 because of the hypothesis λ - * > 1. Let us assume by contradiction that u n → u 0 . Hence 0 = J u0 (t 1 (u 0 )) < lim inf J un (t 1 (u 0 )) which implies that J un (t 1 (u 0 )) > 0 for n large. Thus t 1 (u 0 ) < 1 or t 1 (u 0 ) > t 2 (u n ) because J un looks line Figure 2(j). In the first case we have
k ≤ J(t 1 (u 0 )u 0 ) = J u0 (t 1 (u 0 )) < lim inf J un (t 1 (u 0 )) < lim inf J un (1) = k,
a contradiction. In the second case let us suppose, up to a subsequence, that t 2 (u n ) converges to some s ∈ [1, t 1 (u 0 )]. We have
J u0 (s)) < lim inf J un (t 2 (u n )) = 0,
which is a contradiction because J u0 > 0 on (0, t 1 (u 0 )). We have just proved that the minimizing sequence converges strongly and consequently the infimum k is achieved.
To finish the proof let us check that k > 0. Assume by contradiction that k = 0 and take u n a minimizing sequence. Since u n is bounded (as above) then we can assume that u n u 0 for some u 0 . We have B(u 0 ) ≤ 0, A(u 0 ) ≤ 0 and u 0 ∈ E - 0 . It follows from Lemma 2.3 and (H9) that u 0 = 0 and from the fact that u n ∈ N -we have
A(u n ) > pqr (q -r)(r -p) J(u n ).
Hence passing to the limit we get A(u 0 ) = 0. Thus u 0 ∈ A 0 ∩ B - 0 ∩ E - 0 which contradicts (H9) if B(u 0 ) = 0 or (H7) if B(u 0 ) = 0.
On various conditions for coerciveness and existence results
5.1. On the coerciveness of E restricted to A ± 0 and B ± 0
We want to give in this section some variational conditions on E, A, B that would imply hypothesis (H4)-(H9). Those conditions will concern the following constants of coerciveness
i ± (A) := inf {E(u) ; A(u) = ±1} , i ± (B) =: inf {E(u) ; B(u) = ±1} , (5.1)
j 0 (A) := inf {E(u) ; A(u) = 0, u ∈ S} , j 0 (B) := inf {E(u) ; B(u) = 0, u ∈ S} . (5.2)
Recall that S is the unit sphere of X. We give below and existence and multiplicity result for the equation J (u) = 0 in terms of the constants j 0 , i ± .
Theorem 5.1. Let us assume hypotheses (H1) to (H3).
(i) If λ^+_* > 1, N^+ ∩ E^+ ≠ ∅, i^+(A) > 0 and either j_0(A) > 0 or j_0(B) > 0, then there exists at least one solution of J'(u) = 0 in N^+ ∩ E^+.
(ii) If λ^+_* > 1, N^- ∩ E^+ ≠ ∅, j_0(B) > 0 and i^+(B) > 0, then there exists at least one solution of J'(u) = 0 in N^- ∩ E^+.
(iii) If λ^-_* > 1, N^+ ∩ A^- ≠ ∅ and either (1) j_0(A) > 0 and i^-(A) > 0, or (2) j_0(B) > 0 and j_0(A) > 0, or (3) j_0(B) > 0 and i^-(B) > 0, then there exists at least one solution of J'(u) = 0 in N^+ ∩ A^- ⊂ N^+ ∩ E^-.
(iv) If λ^-_* > 1, N^- ∩ B^- ≠ ∅ and either (4) j_0(A) > 0 and j_0(B) > 0, or (5) j_0(A) > 0 and i^-(A) > 0, or (6) j_0(B) > 0 and i^-(B) > 0, then there exists at least one solution of J'(u) = 0 in N^- ∩ B^- ⊂ N^- ∩ E^-.
We left the proof to the reader. In the next proposition we give a variational characterization of i ± (A) and i ± (B). Proposition 5.2. Let us assume hypothesis (H1) to (H3). Assume that j 0 (A) > 0 and that i ± (A) ∈ IR. Then there exits ϕ
± ∈ X satisfying A(ϕ ± ) = ±1 such that E (ϕ ± ) = p r i ± (A)A (ϕ ± ).
Similarly, if j 0 (B) > 0 and i ± (B) ∈ IR then there exits φ ± ∈ X satisfying B(φ ± ) = ±1 such that
E (φ ± ) = p q i ± (B)B (φ ± ).
Proof. We only prove the first part. If u n ∈ X with A(u n ) = 1 is a minimizing sequence for, say, i + (A) then the sequence u n is bounded. Indeed, otherwise if we take v n := vn un hence, up to a subsequence, there exists v 0 ∈ X such that v n v 0 , A(v n ) → 0 and E(v n ) → 0. Notice that v 0 = 0 because otherwise it will follows from 0 = E(v 0 ) ≤ lim inf E(v n ) = 0 and (H3) that v n → 0. This is impossible since v n = 1. We then have v 0 = 0, A(v 0 ) = 0 and E(v 0 ) ≤ lim inf E(v n ) = 0, a contradiction with the hypothesis j 0 (A) > 0. We have just proved that u n is bounded. The proof that i + (A) is achieved is standard and we omit it here. Let us denote by ϕ + an element of X where the infimum is achieved. From the Lagrange multipliers rule it follows that there exists λ ∈ IR such that E (ϕ + ) = λA (ϕ + ). By testing this last equation against ϕ + and using the different homogeneities of E and A we get the result.
Remark 5.3. As a matter of fact the constraint A(u) = 1 in the definition of i + (A) := i + (A, 1) can be replaced by, say, A(u) = c, where c in any positive number. In that case, it is trivial to prove that i + (A, c) = c p r i + (A, 1). A similar statement can be formulated for i -(A) and i ± (B).
Remark 5.4. We can easily prove that j 0 (A) and j 0 (B) are achieved provided they are finite and j 0 (A, B) := inf{E(u) ; A(u) = 0, B(u) = 0, u ∈ S} > 0. However we can not give a variational formulation of any of them mainly because, in general, we don't know if the set A -1 ({0}) (resp. B -1 ({0})) is a manifold.
On the coerciveness of E and a principal eigenvalue
In this section we look for sufficient conditions implying (H4) to (H9) in terms of the first eigenvalue of the operator E. Precisely, let us assume the following
(H3) ∃(Y, • Y )a Banach space s.tX is cont. one-to-one embeded onY, X → Y is compact and u → u p Y is Fréchet differentiable. Let us denote χ(u) = u p Y . Then it is straightforward from (H2)-(H3)' that λ 1 := inf{E(u) ; u Y = 1} (5.3) is > -∞ and it is achieved. Moreover, if E(ϕ) = λ 1 and ϕ Y = 1 then, by Lagrange multiplier rule, E (ϕ) = λ 1 χ (ϕ).
We can give now the following sufficient conditions in terms of λ 1 to assure (H4)-(H9). Notice that the following conditions are stronger than the ones in terms of i ± (A), i ± (B) etc. Let us also stress here that λ 1 > 0 is equivalent to E - 0 \ {0} = ∅ and therefore (H4) to (H9) are trivially satisfied. Hereafter we denote
E λ1 := {ϕ ∈ X \ {0} ; E (ϕ) = λ 1 χ (ϕ)}.
We have the sequence v n = un un will satisfy E(v n ) → 0, v n Y → 0, so, up to a subsequence, v n 0, in contradiction with (H3) and the fact that v n = 1). Let u 0 ∈ X be such that u n u 0 . Thus A(u 0 ) ≥ 0, u 0 Y = 1 and consequently d ≤ E(u 0 ) ≤ lim inf E(u n ) = d. We have proved that E(u 0 ) = d and d is achieved. If A(u 0 ) = 0 hence d ≥ j 0 (A) > 0 and we are done. If A(u 0 ) > 0 hence
d = inf{E(u) ; A(u) > 0, u Y = 1}
and d is then a local minima of E under the constrain u Y = 1. By Lagrange multiplier rule, there exists λ ∈ IR such that E (u 0 ) = λχ (u 0 ). By evaluating this equation at u 0 we readily obtain
pd = pE(u 0 ) = E (u 0 ), u 0 = λ χ (u 0 ), u 0 = pλ.
Since E λ1 ⊂ A -then λ = λ 1 and therefore d = λ > 0 by (H3) .
We can now give the first main existence result of this section. This result generalizes Theorem 4.8 and Theorem 5.7 of [START_REF] Ramos-Quoirin | Lack of coercivity in a concave-convex type equation[END_REF]. Theorem 5.7. Let us assume hypotheses (H1) to (H3) .
(i) Assume that N^± ∩ E^+ ≠ ∅ and λ^+_* > 1. If either λ_1 > 0, or λ_1 = 0 and E_λ1 ⊂ A^- ∩ B^-, then there exist at least two solutions of J'(u) = 0.
(ii) Assume that N^+ ∩ E^+ ≠ ∅ and λ^+_* > 1. If λ_1 = 0 and E_λ1 ⊂ A^- ∩ B^+_0 then there exists at least one solution of J'(u) = 0.
(iii) Assume that N^+ ∩ A^- ≠ ∅, N^- ∩ B^- ≠ ∅ and λ^-_* > 1. If j_0(A) > 0 and j_0(B) > 0 then there exist at least two solutions of J'(u) = 0 in E^-.
(iv) Assume (H3)', N^+ ∩ E^+ ≠ ∅, λ^+_* > 1 and λ_1 < 0. If j_0(A) > 0 and E_λ1 ⊂ A^- then there exists at least one solution of J'(u) = 0 in E^+.
(v) Assume (H3)', N^± ∩ E^+ ≠ ∅, λ^+_* > 1 and λ_1 < 0. If j_0(A) > 0, j_0(B) > 0 and E_λ1 ⊂ A^- ∩ B^- then there exist at least two solutions of J'(u) = 0 in E^+.
Proof. (i) Clearly we have (H4), (H5) and (H6) from Proposition 5.5. Thus the local minimum of Theorem 3.3 provides us a first solution of J'(u) = 0. Since we also have (H6), a second solution comes from Theorem 3.4. (ii) We have (H4) and (H5) from Proposition 5.5 (a)-(b). Thus the local minimum of Theorem 3.3 is a solution of J'(u) = 0. (iii) From (a) and (b) of Proposition 5.6 the hypotheses (H7), (H8) and (H9) hold. Then we get two solutions from Theorem 4.1 and Theorem 4.2. (iv) From (c) of Proposition 5.6 the hypotheses (H4) and (H5) hold. Then we get one solution from Theorem 3.3. (v) From (c) and (d) of Proposition 5.6 the hypotheses (H4), (H5), (H6) and (H7) hold. Then we get two solutions from Theorem 3.3 and Theorem 3.4.
Remark 5.9. As in Proposition 5.5 one has
(i) λ_1 > 0 ⇒ j^±_0(A, B) > 0,
(ii) If λ_1 = 0 then (1) E_λ1 ∩ A^+_0 ∩ B^+_0 = ∅ ⇒ j^+_0(A, B) > 0. (2) E_λ1 ∩ A^-_0 ∩ B^-_0 = ∅ ⇒ j^-_0(A, B) > 0.
Let us show that, in a particular case, the variational equation (5.6) is equivalent to the equation J (u) = 0. By equivalent we mean here that a multiple of a solution of (5.6) is a solution of J (u). Let us observe that if ϕ ∈ X \ {0} is a solution of (5.6) and we denote for simplicity
d 1 := (β/α) α (λ + * ) β A(ϕ) , d 2 := (α/β) β (λ + * ) β B(ϕ) (5.7)
then for any c > 0 the function v = cϕ satisfies
E (v) = d 1 c p-r A (v) + d 2 c p-q B (v).
Thus we have the following result Proposition 5.10. Assume that j + 0 (A, B) > 0 and that
(λ^+_*)^β = (p/r)^β · (p/q)^α.   (5.8)
Then there exists a solution of J'(u) = 0 satisfying u ∈ A^+ ∩ B^+. The constant c_pqr := (p/r)^β · (p/q)^α is > 1. A similar result can be stated for λ^-_* if j^-_0(A, B) > 0. Proof. Recall that J'(u) = 0 if and only if E'(u) = (p/r)A'(u) + (p/q)B'(u). Let ϕ ∈ X be a point where λ^+_* is achieved. A simple computation shows that we can choose c > 0 such that cϕ is a critical point of J, that is, d_1 c^{p−r} = p/r and d_2 c^{p−q} = p/q, if and only if
(p/(r d_1))^β · (p/(q d_2))^α = 1
and using the fact that α + β = 1, A(ϕ) β B(ϕ) α = 1 and (5.7) we get (5.8).
Let us write
(c_pqr)^{(q−r)/r} = (p/r)^{q/r − 1} · (q/r)^{1 − p/r}.
One has ((q − r)/r) ln(c_pqr) = (q/r − 1) ln(p/r) + (1 − p/r) ln(q/r) = (x − 1) ln y − (y − 1) ln x, where we have put x := q/r and y := p/r. Using the fact that the function f(z) := (ln z)/(z − 1) is strictly decreasing for z > 1 we conclude that ((q − r)/r) ln(c_pqr) > 0 and therefore the constant c_pqr > 1 as claimed.
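For what it is worth, the inequality c_pqr > 1 can also be spot-checked numerically; the exponent triples below are arbitrary choices with 1 < r < p < q, and the script is only an illustration of the claim, not a substitute for the proof above.

```python
# Numerical illustration only: spot-check the claim c_pqr > 1 on a few arbitrary exponent
# triples with 1 < r < p < q.
def c_pqr(r, p, q):
    alpha = (p - r) / (q - r)
    beta = (q - p) / (q - r)
    return (p / r) ** beta * (p / q) ** alpha

for r, p, q in [(1.5, 2.0, 3.0), (2.0, 3.0, 5.0), (1.1, 4.0, 4.5), (2.5, 2.6, 9.0)]:
    print((r, p, q), c_pqr(r, p, q))   # each printed value should be strictly greater than 1
```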
Remark 5.11. Notice that, if (5.8) holds, we can not distinguish the solution of the problem J (u) = 0 obtained in Proposition 5.10 from the one obtained in Theorem 3.3 or the one in Theorem 3.4.
Existence results for quasilinear problems 6.1. Existence and multiplicity results for Problem I
Let Ω ⊂ IR N be a bounded smooth domain of class C 2,α (0 < α < 1) with outward unit normal ν on the boundary ∂Ω. Let V ∈ L ∞ (Ω) and (a, b) ∈ (C s (∂Ω)) 2 , for some s ∈ (0, 1), and allow them to change sign. The exponents r, q to satisfy 1 < r < p < q < p * where p * = p(N -1) (N -p) + the critical exponent for the trace embedding; ρ denotes the restriction to ∂Ω of the (N -1)-Hausdorff measure, which coincides with the usual Lebesgue surface measure as ∂Ω is regular enough. Finally let the number λ be a positive parameter.
Take X = W 1,p (Ω) with the usual Sobolev norm • 1,p . Solutions of Problem I are understood in the weak sense, that is,
Ω |∇u| p-2 ∇u∇ϕ+V (x)|u| p-2 uϕ = ∂Ω λa|u| r-2 u+b|u| q-2 u ϕ dρ. (6.1) ∀ϕ ∈ W 1,p (Ω). Let us consider E(u) = Ω (|∇u| p + V (x)|u| p ) dx, A(u) = λ ∂Ω a(ρ)|u| r dρ, B(u) = ∂Ω b(ρ)|u| q dρ,
and the energy functional
J(u) = 1 p E(u) - 1 r A(u) - 1 q B(u) = = 1 p Ω (|∇u| p + V (x)|u| p ) dx - λ r ∂Ω a(ρ)|u| r dρ - 1 q ∂Ω b(ρ)|u| q dρ.
It is clear that solutions of Problem I are positive critical points of J. It is also clear that A, B satisfy (2.1) and hypothesis (H1) since the trace operators W 1,p (Ω) → L r (∂Ω, ρ) and W 1,p (Ω) → L q (∂Ω, ρ) are compact (remember that r, q < p * ). Hypotheses (H2) and (H3) are well known properties of the p-laplacian operator, c.f. [START_REF] Struwe | Variational Methods, Applications to Nonlinear Partial Differential Equations and Hamiltonian Systems[END_REF].
Let us consider the compact embedding W 1,p (Ω) → Y = L p (Ω) and denote • p the Lebesgue norm of L p (Ω). The eigenvalue λ 1 defined in (5.3) takes the following expression:
λ 1 := inf Ω (|∇u| p + V (x)|u| p ) dx ; u ∈ W 1,p (Ω), u p = 1
and it corresponds to the least eigenvalue µ of the following eigenvalue problem with Newman boundary conditions :
-∆ p u + V (x)|u| p-2 u = µ|u| p-1 u in Ω, |∇u| p-2 ∂u ∂ν = 0 on ∂Ω. (6.2)
This problem should be understood in the weak sense, that is, for all ϕ ∈ W 1,p (Ω)
Ω (|∇u| p-2 ∇u∇ϕ + V (x)|u| p-2 uϕ) dx = µ Ω |u| p-2 uϕ dx. (6.3)
It is known (c.f. [START_REF] Cuesta | Weighted eigenvalue problems for quasilinear elliptic operators with mixed Robin-Dirichlet boundary conditions[END_REF]) that λ 1 is simple and isolated in the sense that inf{µ > λ 1 ; µ solves (6.3) for some u ∈ W 1,p (Ω) \ {0}} > λ 1 .
We then have that hypothesis (H3)' is satisfied and that E λ1 is of dimension 1. We will denote by ϕ 1 the unique eigenfunction of L p -norm equal to 1 associated to λ 1 . It is also known that ϕ 1 is sign definite and never vanishes in Ω. Furthermore, there is a second eigenvalue λ 2 for problem (6.2) and it can be characterized as
λ 2 = inf{µ > λ 1 ; µ solves (6.
2) for some u ∈ W 1,p (Ω) \ {0}}. (6.4)
In order to apply Theorem 5.7 we are going first to determinate under which conditions on a, b, V the Nehari sign-sets used in theorems 3.3, 3.4, 4.2, 4.1 are non empty. For this purpose we will use Proposition 2.4 of section 2. Let us denote
Γ^±_a := {ρ ∈ ∂Ω ; a(ρ) ≷ 0}, Γ^±_b := {ρ ∈ ∂Ω ; b(ρ) ≷ 0}, Γ_{a,0} := {ρ ∈ ∂Ω ; a(ρ) = 0}, Γ_{b,0} := {ρ ∈ ∂Ω ; b(ρ) = 0}.
Lemma 6.1. (1) E^+ ≠ ∅. (2) If Γ^+_a ≠ ∅ then N^+ ∩ E^+ ≠ ∅. (3) If Γ^+_b ≠ ∅ then N^- ∩ E^+ ≠ ∅. (4) If Γ^-_a ∩ Γ^-_b ≠ ∅ then N^+ ∩ A^- ≠ ∅ and N^- ∩ B^- ≠ ∅.
Proof. The proof of ( 1) is trivial and we only prove (2), the proofs of the other cases are similar.
(2) We distinguish two cases: Case
a) Γ + a ∩ (Γ - b ∪ Γ b,0 ) = ∅. In this case we can construct a C ∞ function v in ∂Ω such that A(v) > 0 ≥ B(v). Let u ∈ W 1,p (Ω) having v as its trace. Let ξ be a C ∞ cut function such that 0 ≤ ξ ≤ 1, ξ ≡ 1 in a small ball B(x 0 , r) ⊂ Ω where |u| > c for some c > 0, ξ ≡ 1 in a neighbourhood Ω δ := {x ∈ Ω ; dist (x, ∂Ω) < δ} of ∂Ω, and ξ = 0 in Ω \ B(x, 2r) ∪ Ω 2δ . We can assume that c p B(x0,2r) |∇ξ| p ≥ B(x0,2r) |ξ| p (|∇u| p + V |u| p ) (6.5)
which implies that E(ξu) > 0. Thus ξu ∈ A + ∩B - 0 ∩E + and from Proposition 2.4 (i) we infer that
N^+ ∩ E^+ ≠ ∅. Case b) If Γ^+_a ⊂ Γ^+_b the construction of u and ξ runs similarly, starting with v ∈ C^∞(∂Ω) such that A(v), B(v) > 0. The cut function ξ can be chosen in such a way that (6.5) is satisfied as well as
A(ξu) = A(v) < q -p q -r p -r q -r p-r q-p E(ξu) q-r q-p B(ξu) p-r q-p = max t>0 m ξu (t).
of [START_REF] Lieberman | Boundary regularity for solutions of degenerate elliptic equations[END_REF] gives that they are of class C 1,α (Ω) for some α ∈ (0, 1). Finally the Strong Maximum principle of [START_REF] Vazquez | A strong maximum principle for some quasilinear elliptic equations[END_REF] insures that non negative solutions of the problem are > 0 on Ω.
Remark 6.4. Unfortunately we are not able to replace the conditions j 0 (a) > 0 and j 0 (b) > 0 by, say, a condition related to some suitable eigenvalue. We just remark that both j 0 (a) > 0 and j 0 (b) > 0 imply that λ D 1 > 0, where λ D 1 denotes the first eigenvalue of -∆ p + V |u| p-2 u with Dirichlet boundary conditions. Remark 6.5. In the case a ≥ 0 (resp. b ≥ 0) and a "nice" zero set Γ a,0 one should be able to relate the condition j 0 (a) > 0 (resp j 0 (b) > 0) with the positivity of the first eigenvalue of -∆ p u + V |u| p-2 u over W 1,p (Ω, Γ a,0 ) := {u ∈ W 1,p (Ω) ; u = 0 on ∂Ω \ Γ a,0 }, as was done in [10, Proposition 11].
Existence and multiplicity results for Problem II
Let Ω ⊂ IR N be a bounded smooth domain of class C 2,α (0 < α < 1) with outward unit normal ν on the boundary ∂Ω. Let a, V ∈ L ∞ (Ω) and b ∈ C s (∂Ω) for some s ∈ (0, 1) be possibly indefinite and the exponents r, q to satisfy 1 < r < p < q < p * where p * = p(N -1) (N -p) + and let the number λ be a positive parameter. Take X = W 1,p (Ω) with the usual Sobolev norm • 1,p , the operators
E(u) = Ω (|∇u| p + V (x)|u| p ) dx, A(u) = λ Ω a(x)|u| r dx, B(u) = ∂Ω b(ρ)|u| q dρ,
and the energy functional
J(u) = 1 p E(u) - 1 r A(u) - 1 q B(u) = = 1 p Ω (|∇u| p + V (x)|u| p ) dx - λ r Ω a(x)|u| r dx - 1 q ∂Ω b(ρ)|u| q dρ.
A, B satisfy (2.1) and hypothesis (H1) since the embedding W 1,p (Ω) → L r (Ω) and the trace operator W 1,p (Ω) → L q (∂Ω, ρ) are compact (remember that r, q < p * < p * ). Solutions of Problem II are understood in the weak sense, that is,
Ω |∇u| p-2 ∇u∇ϕ + V (x)|u| p-2 uϕ = Ω λa|u| r-2 uϕ dx + ∂Ω b|u| q-2 uϕ dρ, (6.6
) for all ϕ ∈ W 1,p (Ω). Similar to the previous case, (H2) to (H3)" are also satisfied with λ 1 defined in (6.2) and λ 2 as in (6.4). Let us consider again the compact embedding W 1,p (Ω) → Y = L p (Ω) so the eigenvalues λ 1 and λ 2 have been already defined for Problem I. Let us denote
Ω ± a := {x ∈ Ω ; a(x) ≷ 0}, Γ ± b := {ρ ∈ ∂Ω ; b(ρ) ≷ 0}.
We study now the sign-sets associated to J.
Lemma 6.6. (1) E^+ ≠ ∅. (2) If Ω^+_a ≠ ∅ then N^+ ∩ E^+ ≠ ∅. (3) If Γ^+_b ≠ ∅ then N^- ∩ E^+ ≠ ∅. (4) If Ω^-_a ≠ ∅ and Γ^-_b ≠ ∅ then N^+ ∩ A^- ≠ ∅ and N^- ∩ B^- ≠ ∅.
Proof. (2)
We can easily construct a C ∞ ∩ W 1,p 0 (Ω) function ξ with support on a small ball where a > 0 such that E(ξ) > 0.
(3) Let v ∈ C ∞ be a function defined in ∂Ω such that B(v) > 0 and let u ∈ W 1,p (Ω) having v as its trace. Let ξ be a C ∞ cut function such that 0 ≤ ξ ≤ 1, ξ ≡ 1 in a small ball B(x 0 , r) ⊂ Ω where |u| > c for some c > 0, ξ ≡ 1 in a neighbourhood Ω δ := {x ∈ Ω ; dist (x, ∂Ω) < δ} of ∂Ω, and ξ = 0 in Ω \ B(x, 2r) ∪ Ω 2δ . We can assume that which implies that E(ξu) > 0. The cut function ξ can be chosen to satisfy also A(ξu) > -q -p q -r p -r q -r p-r q-p (-E(ξu))
q-r q-p (-B(ξu))
p-r q-p = min t>0 m ξu (t).
Thus ξu ∈ Λ -. The proof of the other case is similar.
We keep here the same notation for the different coerciveness constants, although they read now as follows which implies that E(ξv) > 0. Thus ξu ∈ A + ∩B - 0 ∩E + and from Proposition 2.4 (i) we infer that N + ∩ E + = ∅. b) If Ω + a ⊂ Ω + b the construction of v and ξ is analogous to the previous case, starting with v ∈ C ∞ Ω) such that A(v), B(v) > 0 with support in a small ball B(x 0 , 2r) ⊂ Ω + a . The cut function ξ can be chosen in such a way that (6.9) is satisfied as well as A(ξv) < q -p q -r p -r q -r p-r q-p E(ξv)
q-r q-p B(ξv)
p-r q-p = max t>0 m ξv (t).
The proof of the other cases are similar.
The coerciveness constants are now 1 < 0, Ω bϕ q 1 < 0 then there exist at least two solutions for any λ ∈ (0, µ -1 + ). (ii) Assume that Ω + a = ∅. If c = λ 1 and Ω aϕ r 1 < 0 then there exists at least one solution for any λ ∈ (0, µ -1 + ). (iii) Assume that Ω - a ∩ Ω - b = ∅. If j 0 (a) > 0 and j 0 (b) > 0 then there exist at least two solutions in E -for any λ ∈ (0, µ -1 -). (iv) Assume that Ω + a = ∅. If λ 1 < c < λ 2 , j 0 (a) > 0 and Ω aϕ r 1 < 0 then there exists at least one solution in E + for any λ ∈ (0, µ -1 + ) (v) Assume that Ω + a = ∅ and Ω + b = ∅. If λ 1 < c < λ 2 , j 0 (a) > 0, j 0 (b) > 0, Ω aϕ r 1 < 0, and Ω bϕ q 1 < 0 then there exist at least two solutions in E + for any λ ∈ (0, µ -1 + ). Corollary 6.12. Assume that Ω + a = ∅, Ω + b = ∅ and Ω - a ∩ Ω - b = ∅. If λ 1 < c < λ 2 , j 0 (a) > 0, j 0 (b) > 0, Ω aϕ r 1 < 0, and Ω bϕ q 1 < 0 then there exist at least 4 solutions for all λ ∈ (0, min{µ + ) -1 , µ -1 -}). Remark 6.13. Notice that we do not claim in the statement of Theorem 6.11 that the solutions are positive. The reason is that we can not use, for a local minimizer u ∈ W 2,p (Ω), the relation J(u) = J(|u|) to deduce positivity of solution, since it can happen that |u| ∈ W 2,p (Ω) even if u ∈ W 2,p (Ω). The
Figure 1. Possible forms of m_u.
Figure 2. Possible forms of J_u.
,2r) |∇ξ| p ≥ B(x0,2r) |ξ| p (|∇u| p -V |u| p ) which implies that E(ξu) > 0. Thus ξu ∈ B + ∩ E + and from Proposition 2.4 (i) we infer that N + ∩ E + = ∅. (4) We prove that Λ -= ∅. Let 0 ≤ v ∈ C ∞ be a function defined in ∂Ω such that B(v) < 0 and let u ∈ W 1,p (Ω) having v as its trace. We can assume that u ≥ 0 by replacing u by |u| if necessary. If A(u) ≥ 0 let us take a function 0 ≤ w ∈ W 1,p 0 (Ω) with support in Ω - a such that Ω - a aw r < -Ω au r which implies that A(u + w) < 0. Hence replace u by u + w, which also has v as trace. Let ξ be a C ∞ cut function such that 0 ≤ ξ ≤ 1, ξ ≡ 1 in a small ball B(x 0 , r) ⊂ Ω - a where |u| > c for some c > 0, ξ ≡ 1 in a neighbourhood Ω δ := {x ∈ Ω ; dist (x, ∂Ω) < δ} of ∂Ω, and ξ = 0 in Ω \ B(x, 2r) ∪ Ω 2δ . We can assume that c p B(x0,2r) |∇ξ| p ≥ B(x0,2r) |ξ| p (|∇u| p -V |u| p )
2 Ω
2 j 0 (a) := inf Ω |∇u| p + V (x)|u| p ; Ω a|u| r = 0, u 1,p = 1 , j 0 (b) := inf Ω |∇u| p + V (x)|u| p ; ∂Ω b|u| q = 0, u 1,p = 1 .B(x 0 , r) ⊂ Ω where |v| > for some > 0, ξ ≡ 0 in Ω \ B(x 0 , 2r). We can assume that p B(x0,2r) |∆ξ| p ≥ B(x0,2r) |ξ| p (|∆v| p + c|v| p ) -|∇v| p |∇ξ| p (6.9)
j 0
0 (a) := inf Ω |∆u| p -c|u| p ; Ω a|u| r = 0, u W = 1 , j 0 (b) := inf Ω |∆u| p -c|u| p ; Ω b|u| q = 0, u W = 1and one also has to rewrite the constants µ ± in terms of the operators E, A, B. By applying Theorem 5.7 we get Theorem 6.11. (i) Assume that Ω + a = ∅ and Ω + b = ∅. If either c < λ 1 or c = λ 1 and Ω aϕ r
Acknowledgments. This work was partially carried out while the first author was visiting the IMSP of the Université d'Abomey Calavi (Porto-Novo) and also while the second author was visiting the Université du Littoral Côte d'Opale (ULCO) and the Université Libre de Bruxelles (ULB). We would like to express our gratitude to those institutions.
Proposition 5.5. Let us assume (H1) to (H3) and λ 1 = 0. Then (a) E λ1 ∩ A + 0 ∩ B 0 = ∅ ⇒ (H4), (b) E λ1 ⊂ A -⇒ j 0 (A) > 0, i + (A) > 0 ⇒ j 0 (A) > 0 and (H5), (c) E λ1 ⊂ B -⇒ j 0 (B) > 0, i + (B) > 0 ⇒ (H6).
Proof. We only prove (b), the proof of the other cases are similar. Trivially i + (A) ≥ λ 1 = 0 and j 0 (A) ≥ λ 1 = 0. Assume by contradiction that i + (A) = 0 and let u n ∈ X with A(u n ) = 1 be a minimizing sequence for i + (A). If the sequence u n is bounded then, up to a subsequence, u n u 0 for some u 0 ∈ X. Hence A(u 0 ) = 1 (so in particular u 0 ≡ 0) and E(u 0 ) = i + (A) = 0. Hence
is an eigenfunction associated to λ 1 and we have again a contradiction with the assumption of (b). Thus the sequence (u n ) is unbounded. Let us take v n := v n u n ; hence there exists v 0 ∈ X such that, up to a subsequence, v n v 0 , A(v n ) → 0 and E(v n ) → 0. Notice that v 0 = 0 because otherwise it will follows from 0 = E(v 0 ) ≤ lim inf E(v n ) = 0 and (H3) that v n → 0. This is impossible since v n = 1. We then have v 0 = 0, A(v 0 ) = 0 and E(v 0 ) ≤ 0. By using the inequality
we deduce that E(v 0 ) = 0 so v 0 belongs to E λ1 , a contradiction with the assumption of (b). The proof of j 0 (A) > 0 rules similarly.
We can obtain two more solutions of J (u) = 0 in the case λ 1 < 0, that is, when E -= ∅. To do so, let us assume in this case that
We have
Proposition 5.6. Let us assume hypothesis (H1) to (H3). Then (a) j 0 (A) > 0 ⇒ (H8) and (H9), (b) j 0 (B) > 0 ⇒ (H4) and (H7).
Let us assume also hypothesis (H3) and (H3) . Then (c) j 0 (A) > 0 and E λ1 ⊂ A -⇒ (H4) and i + (A) > 0 ⇒ (H4) and (H5), (d) j 0 (B) > 0 and E λ1 ⊂ B -⇒ (H4) and i + (B) > 0 ⇒ (H4) and (H6).
Proof. (a), (b) are trivial. We only prove (c) as the proof of (d) is similar. Let us denote d := inf{E(u) ; A(u) ≥ 0, u Y = 1} and prove that d > 0. Clearly d > 0 ⇒ (H4). Since for all u ∈ X satisfying A(u) = 1 we have E( u u Y ) ≥ d, the conclusion i + (A) > 0 will follow, and also (H5). First we claim that d is achieved. Indeed, if u n is an admissible sequence with E(u n ) → d then we can prove that the sequence is bounded (otherwise
A variational characterization of λ ± *
We have proved in the previous section the existence of four solutions of the equation J (u) = 0 provided λ ± * > 1, where λ ± * has been defined in (2.8) and (2.9). Let us here give some variational characterization of these values. For the sake of simplicity let us denote α := p -r q -r , β := q -p q -r ,
Observe that
Proposition 5.8. Let us define
If j + 0 (A, B) > 0 then λ + * is achieved. Furthermore, for any u ∈ X where λ + * is achieved we have
A similar result holds for λ - * under the constraint j - 0 (A, B) > 0.
Proof.
is a minimizing sequence for λ + * and (u n ) is bounded then, up to a subsequence, u n u 0 for some u 0 ∈ X that will satisfy A(u 0 ) β B(u 0 ) α = 1, so u 0 will be admissible in the infimum and then it is achieved. If u n goes to +∞ then for v n = un un we will have, up to a subsequence,
un p = 0. We can rule out the possibility
Testing this identity at u we have
that is pE(u) = µ(rβ + qα) = µp and the identity (5.6) follows after a simple computation.
Thus ξu ∈ Λ + and the conclusion follows from (i) of Proposition 2.4.
We remember here the coerciveness values defined in (5.1), that in our case will be j 0 (a) := inf
Finally let us recall the definitions (2.8) and (2.9) and write for sake of simplicity
and
which do not depend on λ.
We can now reformulate Theorem 5.7 as the following existence and multiplicity result for Problem I. Theorem 6.2. (i) Assume that Γ + a = ∅ and Γ + b = ∅. If either λ 1 > 0 or λ 1 = 0 and ∂Ω aϕ r 1 < 0, ∂Ω bϕ q 1 < 0 then there exists at least two solutions for any λ ∈ (0, µ -1 + ). (ii) Assume that Γ + a = ∅. If λ 1 = 0 and ∂Ω aϕ r 1 < 0 then there exists at least one solution for any λ ∈ (0, µ -1 + ). (iii) Assume that Γ - a ∩ Γ - b = ∅. If j 0 (a) > 0 and j 0 (b) > 0 then there exist at least two solutions in E -for any λ ∈ (0, µ -1 -). (iv) Assume that Γ + a = ∅. If λ 1 < 0 < λ 2 , j 0 (a) > 0 and ∂Ω aϕ r 1 < 0 then there exist at least one solution in E + for any λ ∈ (0, µ -1
1 < 0, and ∂Ω bϕ q 1 < 0 then there exist at least two solutions in E + for any λ ∈ (0, µ -1 + ). In particular we have from cases (iii) and (v) that
1 < 0, and ∂Ω bϕ q 1 < 0 then Problem I possesses at least 4 solutions for any λ ∈ (0, min{µ -1 + , µ -1 -}). Proof. Proof of Theorem 6.2. The existence of weak solutions in each of the 4 cases have already be done in Theorem 5.7. Since each solution comes as a local minimizer of J along the sign subsets of the Nehari set and all of these subsets are invariant by taking the absolute value of a function u, we can assume that all these critical points are ≥ 0. Besides the result of [START_REF] Cuesta | Weighted eigenvalue problems for quasilinear elliptic operators with mixed Robin-Dirichlet boundary conditions[END_REF]Theorem A.1] implies that all solutions are bounded and the regularity result We left to the reader the expressions of µ ± in terms of the operators E, A, B. We can now formulate an existence and multiplicity result for Problem II. We generalize some results of [START_REF] Garcia-Azorero | A convex-concave problem with a nonlinear boundary condition[END_REF][START_REF] Sabina De Lis | A concave-convex quasilinear elliptic problem subject to a non linear boundary condition[END_REF] where this problem was studied for V ≡ 0, a ≡ b ≡ 1. Theorem 6.7. (i) Assume that Ω + a = ∅ and Γ + b = ∅. If either λ 1 > 0 or λ 1 = 0 and Ω aϕ r 1 < 0, ∂Ω bϕ q 1 < 0 then there exists at least two solutions for any λ ∈ (0, µ -1 + ). (ii) Assume that Ω + a = ∅. If λ 1 = 0 and Ω aϕ r 1 < 0 then there exists at least one solution for any λ ∈ (0, µ -1 + ). (iii) Assume that Ω - a = ∅ and Γ - b = ∅. If j 0 (a) > 0 and j 0 (b) > 0 then there exist at least two solutions in E -for any λ ∈ (0, µ -1 -). (iv) Assume that Ω + a = ∅. If λ 1 < 0 < λ 2 , j 0 (a) > 0 and Ω aϕ r 1 < 0 then there exist at least one solution in E + for any λ ∈ (0, µ -1
1 < 0, and ∂Ω bϕ q 1 < 0 then there exist at least two solutions in E + for any λ ∈ (0, µ -1 + ). The proof of this theorem comes from Theorem 5.7 and the regularity results quoted in the proof of Theorem 6.2.
1 < 0, and ∂Ω bϕ q 1 < 0 then there exists at least 4 solutions of Problem II for any λ ∈ (0, min{µ + ) -1 , µ -1 -}). Remark 6.9. In [START_REF] Garcia-Azorero | A convex-concave problem with a nonlinear boundary condition[END_REF][START_REF] Sabina De Lis | A concave-convex quasilinear elliptic problem subject to a non linear boundary condition[END_REF] the authors also proved non-existence of solutions for large values of λ. To our knowledge this is an open problem in the non coercive case.
Existence and multiplicity results for Problem III
Let us now discuss the solvability of
The open bounded set Ω ⊂ IR N is assumed here to have a Lipschitz boundary and a, b, ∈ L ∞ (Ω) and c ∈ IR. By a solution of this problem we mean a function u ∈ W 2,p (Ω) ∩ W 1,p 0 (Ω) such that
holds for all ϕ ∈ W 2,p (Ω) ∩ W 1,p 0 (Ω). Here V, a, b ∈ L ∞ (Ω) and 1 < r < p < q < p * * , where
is the critical Sobolev exponent for W 2,p (Ω). The space W (Ω) := W 2,p (Ω) ∩ W 1,p 0 (Ω) endowed with the equivalent norm u W := ∆u p is reflexive uniformly convex Banach space. Take X = W (Ω),
and the energy functional
It is clear that A, B satisfy (2.1) and hypothesis (H1) since the embedding W (Ω) → L r (Ω) and W (Ω) → L q (Ω) are compact (because r, q < p * * ). Thus (H1), (H2) and (H3) hold.
The coerciveness of E depends on the position of c with respect to the first eigenvalue of the following p-bilaplacian problem
with weak formulation
According to [START_REF] Drabek | Global bifurcation result for the p-biharmonic operator[END_REF] there exists λ 1 > 0 a least eigenvalue of (6.8), and this eigenvalue is principal, isolated and simple. Furthermore it holds
Let ϕ 1 > 0 with ϕ 1 p = 1 be an eigenfunction associated to λ 1 . Denote by λ 2 = inf{λ > λ 1 ; λ solves (6.8)}.
The fact that λ 2 ∈ IR is an eigenvalue of the Dirichlet p-bilaplacian and the existence of a sequence of eigenvalues is proved for instance in [START_REF] Talbi | On the Spectrum of the Weighted p-Biharmonic Operator with Weight[END_REF]. |
00412125 | en | [
"shs.archeo",
"sdu.stu.gp",
"phys.phys.phys-geo-ph",
"sde.mcg"
] | 2024/03/04 16:41:26 | 2007 | https://hal.science/hal-00412125/file/Benech_city_planning.pdf | Christophe Benech
email: [email protected]
New Approach to the Study of City Planning and Domestic Dwellings in the Ancient Near East
Keywords: magnetic survey, city planning, domestic unit, space syntax, Doura-Europos
This paper presents the results of a magnetic survey on the Hellenistic and Roman site of Doura-Europos in Syria. The interpretation of the magnetic data is based on an original approach by considering the use of space in a domestic unit. This type of study has been developed for sociological research but is adapted to the information carried within geophysical data. After a brief presentation of the role of geophysical methods for the study of city planning, the most important components of the 'space syntax' will be presented and applied to two blocks of Doura-Europos, one that has been excavated in the twentieth century by the Yale University and another surveyed using the magnetic method.
INTRODUCTION
The different geophysical methods available today allow the collection of very detailed information about archaeological sites and their environment. However, these spectacular results are still often confronted with scepticism and even with disbelief by archaeologists (Hesse, 1999;[START_REF] Gaffney | Project specifications or guidelines in commercial archaeological geophysics: a recipe for disaster? 6 th International Conference on Archaeological Prospection[END_REF] because of the problem of interpretation and of integration of geophysical data into archaeological maps. This is a subject that has been discussed many times and interesting solutions have been proposed (see [START_REF] Picard | GIS in Archaeology -the interface between Prospection and Excavation[END_REF].
A geophysical map is not an archaeological plan, and it would be utopian to expect from it the precision of a topographical plan of an excavated area (the base map from which archaeologists work). These two sources provide very different types of information, which cannot be treated in the same way. If we treat a geophysical map in the same way as an archaeological plan, the result will always be less reliable, and the essential and original contribution of the geophysical data will be neglected. It is more fruitful to focus on their complementary nature in order to provide a new vision of the archaeological site and renew thematic and methodological approaches.
For better utilization of the geophysical map in archaeology, geophysicists and archaeologists have to hold a dialogue concerning (i) the type of information that can enhance knowledge of the archaeological site and (ii) the development of a new approach to archaeological issues arising from documentation based on architecture, stratigraphy and, more generally, the spatial organization of archaeological remains. This sort of process is developed here, in the particular case of city planning and households in the ancient Near East. The present study is based on geophysical surveys carried out at the site of Doura-Europos (Syria), where extensive documentation concerning urbanism and households exists. In this case, data from a magnetic survey are used, but the more general discussion could extend to other geophysical methods.
THE STUDY OF HELLENISTIC AND ROMAN URBANISM IN ARCHAEOLOGY
The study of city planning, and more generally urbanism, is best undertaken by an extensive archaeological survey, the aim of which is to recognize the urban space as completely as possible. Hellenistic cities of the Near East adopted the same city plan, which is known as the 'Hippodamian plan'. This concept, brought by Macedonian settlers, is based on the parcelling of rectangular blocks separated by orthogonal streets of the same width (Figure 1). These blocks were also divided into dwelling units of the same size. The dimensions of the blocks, their inner division and the width of the streets varied from one site to another but theoretically are supposed to be the same within a given city [START_REF] Picard | GIS in Archaeology -the interface between Prospection and Excavation[END_REF]. The Hippodamian plan is therefore taken to illustrate a particularly egalitarian (restrained and austere) division of the urban space, as opposed to the more monumental and hierarchical urbanism developed by the Romans.
Until recently, the method used to study the urban space was almost the same for all the archaeological sites of the Near East, and it has also been applied in Doura-Europos (Figure 2). Wide-area excavation at the beginning of the twentieth century at some ancient cities resulted in a rich archaeological documentation. Some blocks were completely excavated, and the entire town plan was reconstructed by digging up the angles of non-excavated blocks (when it was possible, as in Doura-Europos) or by extrapolation, reproducing on the whole surface of the city the module of reference identified in excavated blocks. Those early excavations were focused on 'prestigious' monuments: administrative or religious buildings, rather than domestic dwellings. Apart from some sites such as Olynthus (Greece), Delos (Greece) and Doura-Europos, domestic dwellings were usually illustrated by the excavation of a 'standard block' which was supposed to represent all the households of the city.
Therefore, this method only allowed for the reconstitution of a schematic plan of the city. It may be possible in this way to illustrate the theoretical concept used for the organization of the city, but it cannot help us understand the real administration of the urban space and its evolution. For reasons of cost, and also because of the emergence of a better methodology of research and the question of conservation of excavated remains, such large-scale excavations are no longer undertaken. Consequently, current research suffers from a drastic lack of new data. Modern excavations, more punctual but also more rigorous, yield important data about the life of the city, but do not permit us to refine the simplified scheme of the organization of the urban space. Geophysical survey is currently the only non-destructive technique for obtaining new data. The results can be more interesting than the data obtained hitherto, because if the environmental conditions are good enough the whole urban layout of a city can be investigated. Numerous such studies have been made or are still in progress (e.g. [START_REF] Becker | Magnetische Prospektion in der Untersiedlung von Troia[END_REF][START_REF] Gaborit | Zeugmamoyenne vallée de l'Euphrate : rapport préliminaire de la campagne de fouille 1998[END_REF][START_REF] Groh | Integrated prospection in the upper town of Ephesus, Turkey -a case study, 5 th International Conference on Archaeological Prospection[END_REF][START_REF] Schmidt-Colinet | Geophysical survey and excavation in the "Hellenistic Town" of Palmyra[END_REF]. These surveys have helped to highlight the main characteristics of the urban space of different ancient cities, i.e.:
(i) limits of the city, fortifications, location of gateways;
(ii) main characteristics of the plan -length and orientation of the streets, block module;
(iii) differentiation between built and non-built areas;
(iv) differentiation between public and private areas;
(v) identification of the function of some buildings that have a typical plan.
These results are important in themselves but do not account for the richness of geophysical data, because only the most visual and the most readable information from the geophysical map is used. Mostly, it is at this step of the interpretation that dissent occurs between archaeologists and geophysicists, because the geophysical map lacks the precision of an archaeological plan despite its detailed information. The detailed interpretation of the geophysical map is normally limited to a 'line by line' restoration of visible geophysical anomalies; the result should be a non-interpretative drawing, but this type of work reduces the coherence of built structures: it involves conscious choices and, moreover, unconscious ones.
Different experiences of the interpretation of geophysical data have shown that, whatever the type of reconstruction chosen, it can be applied only when the logic and the organization of the archaeological features being analyzed are clearly identified, and understood, by a more detailed interpretation (e.g., in the case of city planning and buildings, see [START_REF] Hesse | Introduction géophysique et notes techniques[END_REF][START_REF] Groh | Geophysikalische Messungen im NordÖstlichen Stadtteil von Flavia Solva: Interpretation und archäologische-historische Auswertung[END_REF].
A detailed interpretation must first of all be based on archaeological documentation of the site (if available) or from another site from the same period and with the same type of occupation. From this point of view, it is obvious that archaeological and geophysical approaches are closely linked: the work of the geophysicist does not end with the presentation of more or less detailed maps, and that of the archaeologist does not begin with the archaeological interpretation of this presentation. A new step may be suggested, departing from our conviction that every interpretative approach is linked to the problems posed by archaeology. By contrast, it is sometimes necessary to reorient the questions of the archaeologist, in order to analyse the geophysical data with the most appropriate method.
This type of approach is applied here to the households of the Near East Hellenistic city of Doura-Europos.
DOURA-EUROPOS (SYRIA)
The site of Doura-Europos was discovered at the beginning of the twentieth century and was excavated by F. Cumont (1922-23), then by a French and American mission directed by M. Rostovtzeff (Yale University), and since 1986 by a French and Syrian mission directed by P. Leriche (Centre National de la Recherche Scientifique). Doura-Europos is located on the right bank of the River Euphrates on a limestone plateau at around 30 m above the river (Figures 2 and 3). The historical developments outlined below highlight the most important phases of the urban development of the site as they have been understood by successive archaeological excavations.
Founded at the end of the fourth century BC by the Macedonians, Doura-Europos was originally a military station. The conception of the city plan (and therefore the accession to the status of 'city') began in the middle of the second century BC. About 133 BC, the city fell under Parthian domination; by this date, only the sector of the agora and some administrative and religious buildings had been completed. The new Parthian authorities continued the urban programme, respecting the Hippodamian plan that had been established by the Macedonians. The dimensions of the blocks are on average 35 m x 70 m; the width of the streets is about 5 m, except for the main streets, starting from the Palmyra gate, which reach about 10 m. In AD 165, the city was taken over by the Romans, and the city planning was subsequently greatly modified, mainly due to the installation of a military camp in the northern part of the city. The Romans constructed many buildings in this sector without taking the pre-existing Hippodamian plan into account. Finally, in AD 256, Doura-Europos was pillaged by the Sassanids and practically abandoned. Obviously, the geophysical map shows the last phase of more than four centuries of occupation, i.e. the result of the evolution of the application of an urban theoretical concept: the Hippodamian plan.
FIRST RESULTS WITH MAGNETIC SURVEY
The magnetic survey of Doura-Europos was carried out with a caesium gradiometer, and the southern half of the site has been completely covered (Figures 2 and 4) [START_REF] Benech | The study of ancient city planning by geophysical methods: the case of Doura-Europos, Syria. 5 th International Conference on Archaeological Prospection[END_REF].
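Readings collected on such a regular traverse (one value every 0.20 m along profiles spaced 1 m apart) are usually binned onto a grid before being displayed as a greyscale map like the one in Figure 4. The snippet below is a minimal sketch of that gridding step, not the processing chain actually used for the Doura-Europos survey; the input file, its column layout, the 0.5 m cell size and the clipping thresholds are assumptions made only for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical input: one row per reading, columns x (m), y (m), gradient (nT/m).
# The real survey recorded a value every 0.20 m along profiles spaced 1 m apart.
readings = np.loadtxt("gradiometer_block_M2.txt")          # assumed file name
x, y, grad = readings[:, 0], readings[:, 1], readings[:, 2]

# Bin the readings onto a regular grid (0.5 m cells here, an arbitrary choice).
cell = 0.5
nx = int(np.ceil((x.max() - x.min()) / cell)) + 1
ny = int(np.ceil((y.max() - y.min()) / cell)) + 1
ix = ((x - x.min()) / cell).astype(int)
iy = ((y - y.min()) / cell).astype(int)

grid_sum = np.zeros((ny, nx))
grid_cnt = np.zeros((ny, nx))
for i, j, g in zip(iy, ix, grad):
    grid_sum[i, j] += g
    grid_cnt[i, j] += 1
grid = np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1), np.nan)

# Clip to +/-10 nT/m, the display range quoted for the published maps,
# and plot with white = minimum, black = maximum.
plt.imshow(np.clip(grid, -10, 10), cmap="gray_r", origin="lower",
           extent=[x.min(), x.max(), y.min(), y.max()])
plt.colorbar(label="vertical gradient (nT/m)")
plt.title("Binned magnetic gradient map (illustrative)")
plt.show()
```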
Unlike the schematic plan established by the Yale expedition (1928-1937), the geophysical data record variations in street width. In some cases the variation is considerable and indicates a hierarchical organization of the streets, probably linked to the circulation inside the city. Some of the main streets (particularly the ones starting from the Palmyra gate) conserved their original width, whereas others (probably those less frequently used) became narrower with time due to the encroachment of houses onto the public thoroughfare.
According to the results of the excavations, the Yale expedition suggested an internal division of the blocks into eight identical parcels, although this division is not really clear in the excavated blocks. The geophysical map clearly identifies some plans of houses that were already known from the excavations, and which fill exactly 1/8 of the total area of the block (Figure 5). Not all blocks of Doura-Europos, however, have the same internal division: in the southern part of the city the geophysical map reveals two blocks (I 10 and I 11) that are divided into six parcels. This is the first time that the existence of two different types of division has been observed in a Hellenistic city of the Near East, but we do not know yet if they are contemporary. In block I 11, two of these parcels are free from construction, and different hypotheses may be advanced concerning their function: they could be gardens, stock areas or enclosures for animals, a notion which might be justified by the proximity of the southern gate of the city.
THE STUDY OF DOMESTIC SPACE
The excavations of Doura-Europos present a pronounced diversity of the ground plans of domestic houses. The geophysical maps, even if they are very clear, do not identify the plan of every house inside the blocks. There are always numerous ambiguities, in particular concerning the limits of the houses, the function of the different rooms and the circulation inside the house (to which may be added the problems of blocked doors (Figure 6) or partly ruined walls, which give the illusion of a passage on a geophysical map). These aspects are, of course, all easy to solve in excavation and play an important role in the study of the household following the traditional approach.
The study of individual households traditionally has been based on the classical literature available. However, written sources are of little value with respect to the archaeological data, which are mostly fragmentary. They describe a way of life very localized geographically, and leave the door open to misleading generalizations (for a critique of the use of classical sources see the introduction of [START_REF] Nevett | House and Society in the Ancient Greek World[END_REF].
Figure 5. Comparison between the plan of an excavated house (from Hoepfner and Schwander, 1986) and the plan of a house shown by geophysical survey (from block M4; minimum white = -10 nT/m, maximum black = +10 nT/m). The entrance is not on the same side of the block, but there is a corridor, which means that the courtyard is not visible from the street. In the southern part of the house is the reception room, open to the north, with two adjacent rooms. The eastern side of the house is covered but open to the courtyard and is the place of domestic activities such as cooking. This is a typical domestic unit of Doura-Europos, which occupies exactly 1/8 of the surface of the block. (E, entrance; C, courtyard; RR, reception room.)

Study of the household can be approached from
another point of view, which is the use of space. This method has been used previously by archaeologists, in particular those working on the most ancient periods [START_REF] Hodder | Spatial analysis in archaeology[END_REF][START_REF] Renfrew | Approaches to social archaeology[END_REF]. Their aim was to introduce methods of analysis developed in ethnology, sociology and anthropology [START_REF] Rapoport | Pour une anthropologie de la maison[END_REF][START_REF] Levi-Strauss | Tristes tropiques[END_REF][START_REF] Bourdieu | La maison ou le monde renversé. In Esquisse d'une théorie de la pratique[END_REF]. Important works have shown that the study of the use of domestic space is complementary to a more traditional architectural study (Kent, 1990a;Parker Pearson and Richards, 1994;Allison, 1999). Some of these studies are concerned with the Classical period [START_REF] Jameson | Domestic space in the Greek City State[END_REF]Laurence and Wallace-Hadrill, 1997;Nevett 1999;[START_REF] Cahill | Household and city organization at Olynthus[END_REF] and they have yielded interesting, albeit at times somewhat fragmentary, results in accordance with the archaeological documentation available for the different sites that have been studied. The sociologists B. Hillier and J. Hanson have shown that the organization of space may reflect the social, economic and cultural characteristics of a society. They developed the 'space syntax' method of analysis, which can be easily applied to geophysical maps (Hillier and Hanson, 1984; for a summary see [START_REF] Grahame | Public and private in the Roman house: the spatial order of the Casa del Fauno[END_REF]). The geophysical map brings new, continuous and homogeneous documentation to this domain, which can play an important role if the methodology is adapted to the nature of geophysical data. The archaeological studies cited above present the idea that it is the use of space that influences the record of archaeological materials (ceramics, objects, etc.) and their architectural elements, not the other way around (Kent, 1990b). Such an approach is interesting to apply to geophysical maps, even if it is necessarily limited to a study of the ground plan, which admittedly could be considered extremely restrictive. The chosen approach is to use as a starting point a plan delimiting all the spaces, rather than a plan locating the walls. Such a reconstruction is more appropriate given the nature of geophysical data: it is indeed very difficult to restore the geometry of the walls with the precision inherent in a plan derived from excavation, whereas it is not hard to delimit surfaces. Moreover, a study of the use of space can tolerate the degree of error that is inevitable when working from a geophysical map.
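To make the depth measure used below more concrete, the sketch that follows builds the access graph of a hypothetical dwelling unit and computes the depth of each space from the street, in the spirit of Hillier and Hanson's justified graphs. The room names, areas and connections are invented for the example and do not describe any excavated house of Doura-Europos.

```python
import networkx as nx

# Hypothetical dwelling unit: nodes are spaces (with floor areas in m2),
# edges are doorways.  "street" is the carrier space outside the block.
areas = {"street": None, "entrance": 8, "courtyard": 45,
         "reception": 55, "room1": 12, "room2": 14, "room3": 11}

access = nx.Graph()
access.add_nodes_from(areas)
access.add_edges_from([
    ("street", "entrance"), ("entrance", "courtyard"),
    ("courtyard", "reception"), ("courtyard", "room1"),
    ("courtyard", "room2"), ("reception", "room3"),
])

# Depth of each space = number of doorways crossed from the street,
# i.e. shortest-path length in the access graph.
depth = nx.single_source_shortest_path_length(access, "street")
for space, d in sorted(depth.items(), key=lambda kv: kv[1]):
    print(space, "depth", d, "area", areas[space])

# The maximum depth corresponds to the 'levels of depth' discussed below;
# in this invented example the deepest space (room3) is four steps from the street.
print("levels of depth:", max(depth.values()))
```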
For the first time, this approach will be validated by a comparative study of two blocks of Doura-Europos, one completely excavated by the Yale mission, and another one surveyed with the caesium G-858 gradiometer. To fulfil this aim the main characteristics of a household are presented in order to understand the logic of the use of space in the houses of Doura-Europos.
MAIN CHARACTERISTICS OF DOURA-EUROPOS HOUSES
The houses of Doura-Europos are organized around three main elements, the entrance (E), the courtyard (C) and what for present purposes will be referred to as 'the reception room' (RR) (Figure 5).
The entrance is normally an 'L' shaped corridor leading to the courtyard, which generally is not visible from the street; but in the case of particularly small houses the entrance looks directly on to the courtyard. There may be more than one entrance, particularly if the house is large; there are also houses where a room is used as a shop, which has an independent entrance. The open courtyard is, generally speaking, the main place of the house. It is the point from which circulation inside the house is organized, and it controls the access to most of the other rooms, and to the roof or the first floor, when one exists. It is the focal point of the main domestic activities (cooking).
The reception room is the most significant room from a social point of view. This room stands out from the rest by its comfort, architectural decoration and orientation: it is generally open to the north, to preserve a minimum of freshness inside the room. However, in the case of smaller houses, the position of the reception room may be different.
The space syntax suggested by Hillier and Hanson (1984) allows visualization of the organization of space inside the houses. Plans that look different may in fact have the same logic of organization. We have modified the scheme proposed by Hillier and Hanson: the circles on Figure 7, which represent individual rooms, are proportional to their surface areas, and the whole scheme is enclosed in a circle proportional to the total surface area of the house.
Figure 7. Scheme of houses 2 and 4 in block C7 (see Figure 9) following the space syntax of Hillier and Hanson (1984). The scheme shows the articulation of entrance (green), courtyard (red) and reception room (blue). The largest houses are characterized above all by a greater number of rooms around the courtyard.
When using this scheme, account must be taken of ambiguities due to the interpretation of the magnetic maps:
(i) The circulation inside the house presents a problem: the passages between the rooms are not always clear. Partly ruined walls can be mistaken for an opening. Where archaeological structures are preserved only to a very low height, the magnetic signal of the stone block (most of the time gypsum) that constitutes a threshold may be mistaken for the signal of a wall; in such cases, the opening is practically undetectable.
(ii) The excavated blocks show a pronounced variation in the width of the walls (from 20 cm to more than 1 m); the thinnest dividing walls are usually completely ruined and may not appear on the magnetic map.
(iii) Another difficulty concerns the delimitation of the different houses of the block. This question is all the more complex because transformations through time inside the house are frequently seen in Doura-Europos, mainly due to alterations caused by inheritance [START_REF] Saliou | Les quatre fils de Polémocratès (P. Dura 19)[END_REF] or by the purchase of rooms by neighbouring houses. Houses 4 and 6 of block C7 are highly complex examples of this, and clearly show the type of ambiguity that our study confronts (Figure 8). These houses are exemplars of the division of a 'standard' size house (i.e. 1/8 of the surface of the block), which had a central courtyard and a distribution of rooms around it. The identification of two different courtyards and two entrances would be obvious to an excavator, but not on a magnetic map. Such an organization would presumably be interpreted as a single house.
Figure 8. Plan of houses 4 and 6 in block C7 (see Figure 9). The total surface of both houses is equivalent to 1/8 of the surface of the block, indicating that an original division has been split into two dwelling units. Such an organization could appear as a single domestic unit on a geophysical map: only the two entrances suggest that there are two units, and even then the second entrance could be interpreted as one associated with a shop.
These limitations necessitate an initial argument about the concept of space rather than about the use of space, but a statistical study may help to further the interpretation. Of course, it is impossible to present a reliable statistical analysis on the basis of a study of two blocks only, but the relevance of this application to an understanding of excavated and surveyed blocks clearly emerges, which enables a predictive model to be built that will facilitate the interpretation of geophysical maps and offer an original and new approach to the study of the household.
In archaeological publications, the terms 'house' and 'dwelling unit' are often synonymous. The term 'house' is used in the following interpretation when domestic units have been identified with certainty or with reference to the social functions of a private building; the term 'dwelling unit' will be used for units which potentially include rooms with nondomestic functions (shop, workshop, etc.).
CASE STUDY I: BLOCK C7
Block C7 is the only completely excavated block of Doura-Europos that comprises only domestic dwellings, i.e. no administrative or religious buildings (Figure 9) [START_REF] Hopkins | The houses[END_REF][START_REF] Saliou | La forme d'un îlot de Doura-Europos… L'îlot C7 revisité[END_REF]. There are two other blocks of domestic dwellings, but they constitute particular cases because they comprise particularly large houses, which nearly (block D5) or entirely (D1) occupy the area of the block.
In its final state of occupation, block C7 contained 12 dwelling units, the limits of which no longer correspond to the original division. The entrance, the courtyard and the reception room are clearly identified for all units except for unit 9, where the location of the reception room is uncertain. The area of the dwelling units is extremely variable, from 54 m 2 to 320 m 2 , which is far from the egalitarian division of the Hippodamian concept. Many uncertainties remain about the organization of the block, which was quickly excavated using the techniques current at the beginning of the twentieth century: neither its stratigraphy nor its chronology were recorded well. Some of the rooms continued in use in later periods, and it is difficult to establish their correct plan, even for the last phase of occupation (the plan of unit 9 is particularly doubtful).
There are 17 entrances for 12 dwelling units, hence the block may contain five shops. It is beyond the scope of this paper to apply the space syntax approach to each dwelling unit. The following are the important results for the interpretation of the geophysical map.
(i) The 'entrance-courtyard-reception room' layout is respected in all cases apart from units 7 and 9, where the entrance leads to a room instead of a corridor.

(ii) Generally there were four levels of depth (the number of steps that must be taken to reach the deepest room of the house starting from the entrance), whatever the size of the dwelling unit, and at times even a fifth level.
(iii) The main difference is the number of rooms around the courtyard, which increases with the size of the dwelling unit (see e.g. units 2 and 4, Figure 7).
(iv) In most cases the rooms are distributed on three sides of the courtyard. In unit 12, a reception room was lacking. Of a total of 11 reception rooms, six were located in the southern part of the dwelling unit and open to the north.
Figure 10 compares the surface area of individual rooms against the surface area of the dwelling. Except in unit 9, where the identification of the reception room is uncertain, the courtyard and the reception room are the largest rooms of the houses. The reception room is generally larger than the courtyard, and the more important the house, the greater the increase in surface area of these two rooms. On the other hand, the mean surface area of the other rooms is almost the same for all dwelling units. Figure 11 demonstrates the importance of the courtyard and the reception room to domestic life: even in the smallest dwelling unit, number 3, more than 80% of the total surface area is used for these two rooms.
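Once the room areas and their attribution to dwelling units have been tabulated, the quantities summarized in Figures 10 and 11 reduce to a few lines of computation. The sketch below illustrates that computation on invented numbers (it does not reproduce the published values): the share of each dwelling occupied by the courtyard and reception room, and a least-squares fit of reception-room area against dwelling area together with its coefficient of determination.

```python
import numpy as np

# Invented table: one entry per dwelling unit of a block,
# (total area, courtyard area, reception-room area), all in m2.
units = {
    1: (120, 30, 35), 2: (200, 48, 60), 3: (54, 20, 24),
    4: (260, 62, 78), 5: (180, 40, 55), 6: (310, 70, 95),
}

total = np.array([u[0] for u in units.values()], dtype=float)
court = np.array([u[1] for u in units.values()], dtype=float)
recep = np.array([u[2] for u in units.values()], dtype=float)

# Share of the dwelling occupied by courtyard + reception room (cf. Figure 11).
share = (court + recep) / total
print("mean share of courtyard + reception room:", round(share.mean(), 2))

# Least-squares fit of reception-room area on dwelling area (cf. Figure 10),
# with the coefficient of determination r2 of the kind quoted in the discussion.
slope, intercept = np.polyfit(total, recep, 1)
pred = slope * total + intercept
r2 = 1 - np.sum((recep - pred) ** 2) / np.sum((recep - recep.mean()) ** 2)
print(f"slope={slope:.2f}, intercept={intercept:.1f}, r2={r2:.2f}")
```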
This study is admittedly limited to only one block, but these first results highlight certain interesting social and cultural aspects of the domestic dwelling. It can be assumed that the size of the dwelling unit was proportional to the wealth of the occupier, as confirmed by excavation. Most dwelling units have a surface area of about 200 m², i.e. less than the original 1/8 division (306.25 m²). Unit 9 is the only one to have a surface area >300 m². The courtyard and the reception room were the most important places within the houses. These spaces were the parts of the house a visitor was likely to see, and it was therefore important to enhance them with exterior signs of wealth and prestige. Another surprising result is that the other rooms have almost the same surface area, whatever the size of the dwelling unit. Only the number of rooms increased with the size of the dwelling unit. This tendency seems to show that whatever the social class (probably linked to a certain level of wealth allowing an individual to have a bigger house), the inhabitants did not feel the necessity to live in larger rooms: private spaces had more or less the same dimensions.
Block C7 demonstrates an important diversity in the wealth and size of its dwelling units, but this may be relative, as all of the proprietors may have belonged to the same cultural class and had the same social origin. These are only preliminary results of an approach applied for the first time to the households of Doura-Europos and therefore must be treated with prudence until they are confirmed by excavation and survey.
CASE STUDY II: BLOCK M2
The general rectangular shape of block M2 and its dimensions hardly changed with time and remain close to the reference model (Figure 12), although its subdivision into eight units is no longer visible for most of the last stage of its occupation. The lengthwise division can still be observed for the whole length of the block even if the wall does not look continuous. The transverse middle division continues to be visible but no longer seems to function as a separation between dwelling units (see below). The field survey map produced by the Yale expedition indicates 11 entrances for block M2: three were located on the western side of the block, one on its southern side and seven on its eastern side. All of these entrances are visible on the magnetic map. The widest of the streets surrounding the block is located along its eastern side, which also has the most entrances, and we can make the same observation as in the case of block C7: an increase in the number of entrances on one side of a block (mainly due to the presence of shops) is linked to the width of the street. The next step is to delimit the spaces inside the block: as previously stated, the most important aspect is to identify the different spaces, and uncertainty about the geometry or the exact surface area of a space does not affect the study (Figure 12c). The block is accordingly divided into 83 space units. Here, as with block C7, the courtyard and the reception room were the largest rooms of the house. Figure 12d categorizes rooms with a surface area greater than 25 m²: the logical positions of the courtyards and reception rooms are thereby easily identified. It is now possible to hypothesize the delimitation of the different dwelling units based on the observations of the space syntax schemes of block C7. Thus nine dwelling units can be delimited: four on the eastern and five on the western side. In the southern part of the block, the plans of houses 3, 4, 5 and 6 are clearly visible with the classic organization observed in Doura-Europos; units 3 and 4, moreover, have two entrances. The northern part of the block is more problematic because it has been subject to important transformations in the organization of space. Nonetheless, the identification of the courtyards and reception rooms allows units 7, 8 and 9 to be delimited. In this hypothesis, only the entrance of unit 8 is not visible on the magnetic map and is not mentioned on the Yale map; the corridor between the supposed entrance and the courtyard is nevertheless clearly visible.
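The selection of spaces larger than 25 m² used for Figure 12d is straightforward to automate once the 83 space units have been digitized as polygons. The snippet below is only a schematic illustration: the polygons are made up, and the shoelace area formula stands in for whatever GIS tool would actually be used on the interpreted magnetic map.

```python
# Each delimited space is a polygon given by its corner coordinates (m).
# The coordinates below are invented; in practice they would be digitized
# from the interpreted magnetic map of block M2.
spaces = {
    "S01": [(0, 0), (6, 0), (6, 5), (0, 5)],      # 30 m2 -> candidate
    "S02": [(6, 0), (10, 0), (10, 4), (6, 4)],    # 16 m2
    "S03": [(0, 5), (7, 5), (7, 9), (0, 9)],      # 28 m2 -> candidate
}

def shoelace_area(poly):
    """Area of a simple polygon from its vertex list (shoelace formula)."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Spaces larger than 25 m2 are flagged as possible courtyards or reception rooms.
candidates = {name: shoelace_area(p) for name, p in spaces.items()
              if shoelace_area(p) > 25.0}
print(candidates)   # {'S01': 30.0, 'S03': 28.0}
```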
Units 1 and 2 are more complex because of their three entrances and their location. The entrance, the courtyard and the reception room of unit 1 are clearly identified, but its boundary with unit 2 is rather unclear; it has to be located on the line of the southern wall of the reception room, because the northern entrance of unit 2 is immediately to the south. This dwelling unit therefore has two entrances, and it is in fact the largest of the block, with four rooms with a surface area greater than 30 m². Either a large dwelling unit existed here, including an independent entrance for a shop, or this area represents two houses. In the latter case the southern house would not have an entrance corridor but direct access to a room of more than 25 m².

DISCUSSION

Even though it is not possible to draw definitive conclusions about the use of domestic space in Doura-Europos based on the study of only two blocks, it is interesting to compare the results obtained in both cases.
It was concluded that block M2 is made up of nine dwelling units compared with 12 dwelling units in block C7. The dwelling units are accordingly larger in M2, with the minimum surface area for dwelling units of M2 being >200 m². In contrast, there are six dwelling units in C7 with a surface area of <200 m². A histogram of the surface area of the rooms (Figure 13) shows that there were more small-size rooms in C7, but the global distribution of the surface is the same for both blocks.
A graph of the distribution of the surface area of the rooms for each dwelling unit in M2 produces a different result (Figure 10c).
In block M2 the mean surface area of the courtyards is greater than that of the reception rooms. The linear regression line decreases in the case of the courtyard, which is attributable to the size of unit 2. This leads to two possibilities:
(i) there are two houses, as hinted at above;
(ii) unit 2 partly or entirely comprises an administrative or, more probably, a religious building. The latter is commonly seen in Doura-Europos: many houses were transformed into religious buildings, retaining some characteristics of the domestic space while their largest rooms were transformed to receive the public. If unit 2 is divided into 2a and 2b (Figure 10d), a closer correlation between M2 and C7 is evident, even if the coefficient of determination for M2 (r²=0.22) is much lower than that of C7 (r²=0.44, or 0.83 if unit 9, which is a doubtful case, is not taken into account). As for the courtyard, the correlation between the reception room and the size of the dwelling unit is less evident in the case of M2 (r²=0.16) than for C7 (r²=0.35, or 0.59 without unit 9).
The correlation between the courtyards, reception rooms and the dwelling unit size is less clear in the case of M2 even if the proportion of surface dedicated to both these rooms is almost the same (Figure 11b). Nevertheless it must be noted that the dwelling units of M2 are on average larger than those of C7, and the correlation observed in C7 is not the same for very large dwelling units, such as unit 2 in M2 and unit 9 in C7.
CONCLUSIONS
The results presented here show that an interpretative 'space syntax' approach adapted to the specific nature of geophysical data can open rich new avenues of research that are inaccessible by the common methods used in archaeology. The results are, of course, partial, but they show how useful the study of space may be for the interpretation of geophysical maps, and at the same time the potential for developing such an approach in archaeology. A complete study would involve taking into account all excavated and surveyed dwelling units of Doura-Europos and the layout of streets in order to obtain a coherent vision of the organization of the public and private space of the city. From this point on, it will be possible to study the social and cultural aspects in relation to the use of space, at the level not only of the domestic unit but also of the city. Such an approach will considerably modify the study of city planning and will demonstrate that the role of the geophysical map is not only to offer a 'global view' for locating new places to excavate.
Figure 1. Hippodamian plan of Miletus (Turkey), known as one of the oldest city plans of this type. It comprises two different block modules: 51.60 m x 29.50 m for the southern part, and 20.75 m x 17.70 m for the north-eastern part.

Figure 2. Plan of Doura-Europos (Syria) (from Rostovtzeff, 1939, redrawn by H. David). The surveyed area is in dark grey and the excavated areas are shown in middle grey.

Figure 3. Aerial view from the south-east of the site of Doura-Europos.

Figure 4. General magnetic map of the geophysical survey of the southern part of the site (minimum white = -10 nT/m; maximum black = +10 nT/m). The magnetic survey was carried out with a caesium gradiometer recording measurements every 0.20 m along profiles 1 m apart.

Figure 6. Example of a door blocked with mudbrick.

Figure 9. Plan of block C7 (from Saliou, 2005). The block comprises 11 dwelling units but there are 17 entrances, indicating the presence of shops with independent entrances.

Figure 10. Distribution of the surface of the rooms for each domestic unit: (a) block C7; (b) block C7 without unit 9; (c) block M2; (d) block M2 with unit 2 divided into two domestic units.

Figure 11. Ratio of the surface room/house: (a) block C7; (b) block M2.

Figure 12. (a) Magnetic map of block M2. (b) Theoretical division into equal parcels of a block with dimensions 35 m x 70 m. (c) Hypothesized division of the space inside the block. (d) Hypothesized distribution of the courtyards and reception rooms.

Figure 13. Histogram of the surface area of the rooms in blocks C7 and M2.
04121251 | en | [
"shs.eco"
] | 2024/03/04 16:41:26 | 2022 | https://pastel.hal.science/tel-04121251/file/2022UPSLM081_archivage.pdf | Keywords: Energy, Household, Consumption, China-African trade, Firm, Productivity, Financial development
At the end of these four years of doctoral research, I would like to extend my sincere thanks to my supervisor, Professor Margaret Kyle, for having agreed to supervise me and, above all, for having renewed her confidence in me over time. I take this opportunity to express my full gratitude to her, for without her this thesis would hardly have become a reality. Her guidance and advice have been of immeasurable value. Despite the language gap, her dedication to her work and her determination to see her mission through led her to overcome this barrier in our discussions. Alongside her stands Professor Ahmed Tritah, my co-supervisor. Humble, versatile, resourceful, devoted and above all very flexible, he was at my side from the beginning of this thesis to its end, in social life and especially in my studies. He knew how to play a facilitating role that led to the completion of this project. His unfailing assistance can be read in the diversity displayed by the content of this thesis. Despite his busy schedule, being engaged on several fronts, he spared no effort to be available whenever needed. This is the opportunity for me to give him a special thank-you, for I have learned a great deal at his side in many respects.
Like a father figure, the immense unifying role of Professor Pierre-Noel Giraud is beyond comparison. Cornerstone and true conductor of the chair in Industrial Economics of the Emergence of Africa (EIEA), Mr Giraud has an irreproachable sense of listening and humility which, indeed, made this thesis possible despite my many shortcomings. Beyond his indispensable interventions with the Université Mohamed VI Polytechnique (UM6P) to meet the chair's fundamental needs, Mr Giraud casts, from time to time, a critical eye on the various topics addressed by the chair's doctoral students. In doing so, he gives his opinion on certain themes when needed and does not hesitate to point towards useful sources when required. This is the occasion to express all my gratitude to him for his advice and guidance.
To UM6P and to the Office Chérifien des Phosphates (OCP), I send a resounding thank-you, for this thesis would not have been possible without the funding you provided. To Doctor Jamal Azizi, director of the EIEA chair, I express all my gratitude for the effort he made throughout this long journey to meet my administrative needs with UM6P.
Many thanks to the Chaire Energie et Prospérité (Institut Louis Bachelier) for its various seminars, which were of great use to me, and above all for its financial support for my participation in various conferences.
Professor Saïd Hanchane's comments on my research papers were of crucial importance, and this is the occasion for me to give him a special thank-you. My thoughts also go to Professor Pierre Fleckinger for his assistance and kindness towards me since the beginning of this thesis, and to Professor Mathieu Glachant, director of the Cerna laboratory: thank you for all the times you discussed my topics with me. Allow me to point out that these discussions have left positive traces in the document. To all the other researchers at Cerna (Sven, Dennis, ...), please accept my deep gratitude for all your direct or indirect contributions through the seminars that brought us together from time to time.
My time at the World Trade Organization (WTO) allowed me to meet wonderful people. I take this opportunity to express all my gratitude to Mr José-Antonio Monteiro, my mentor in that institution, who spared no effort in going through every line of the thesis with a fine-tooth comb and providing relevant comments. Similarly, my thanks go to Mr Kimm Sèna Gnangnon, for the time he devoted to dissecting the document and above all for his advice and guidance.
It is time to say thank you to all the doctoral students at Cerna, in particular those of the EIEA chair, for all the moments spent together and above all for the fruitful, sometimes heated, exchanges we had during our seminars or in small groups. Heated but useful! All of you, I am fond of you, North Africans and Sub-Saharans alike, without forgetting the French and the others.
I take this opportunity to thank Dr Johanna Choumert-Nkolo and Dr Mathilde Maurel, who agreed to act as external examiners of this thesis. It is a real pleasure for me to receive your remarks and suggestions, which will certainly be very useful to me given your experience in research.
This long list of acknowledgements could not end without thanking Mrs Barbara Toussaint, who, behind the scenes, carries out every day a mission necessary for the smooth running of this thesis. Likewise, to all those who, from near or far, encouraged me to move forward and to remain resilient in the face of setbacks, please accept my deep gratitude. A special thank-you to my family, in particular to my father Hounyonou Kponou Victor and to my mother Hodonou Justine, to whom I owe a great deal for having supported me, unfailingly, both morally and financially. Please receive the honour that is your due!
Abstract

African countries aspire to industrial development in order to diversify their exports, which are currently concentrated in natural resources. However, electrification and strengthening the competitiveness of domestic firms remain a challenge when they face import competition, notably from China. This topic is at the heart of the present thesis, which is divided into three chapters. The first analyses the impact of Tanzanian households' access to electricity on their consumption and on the income derived from home-based activities. The results suggest that access to electricity leads to an increase in households' daily consumption. Similarly, turnover and the number of jobs created are higher in capital-intensive activities favoured by electricity. In the second chapter, devoted to the effect of Sino-African trade on the growth of African firms, the empirical results suggest that China's penetration reduces the growth of small and young firms, while large firms experienced a pro-competitive effect. Exporting firms were doubly affected, as their growth fell when China's penetration increased in the common external market. Finally, the third chapter studied how China's penetration into African markets affected firm performance. From a theoretical standpoint, firms' total factor productivity (TFP) and energy intensity (EI) are inversely related, while there is a logarithmic relationship between these indicators and firms' production level. The empirical results show that China's penetration into the African market led to a decline in both the TFP and the EI of small and medium-sized enterprises, with no significant impact on large firms. Based on the theoretical results, we infer that the decrease in EI could be explained by the reduction in the level of production. While electricity and financial obstacles negatively affect firm performance, small firms facing electricity obstacles and large firms facing financial obstacles improved their performance under the impetus of Chinese competition. The empirical results also revealed that TFP negatively affects EI, with no reverse causality.
Keywords: Energy, household, consumption, China-Africa trade, firm, productivity, financial development

Chapter 1
General Introduction
In economics, an important indicator of the population's well-being is the consumption of goods and services produced by the public and private sectors (e.g., [Moratti, 2012]). Access to energy is essential for the production of these goods and services (e.g., [SDG, 2021]), and hence influences populations' welfare. However, the situation regarding energy access in Africa, in particular in the Sub-Saharan area, is dramatic. According to SDG (2021), among the 733 million people (in 2020) without access to electricity in the world, 77% live in sub-Saharan Africa (Figure 1.1). The situation is even worse when we consider cooking energy (Figure 1.2). The report pointed out that access to clean cooking energy is a major concern in developing countries, especially African ones. Around 31 percent of the world's population (2.4 billion (2.1-2.7) people) still cook primarily with polluting fuels and technologies, such as charcoal, coal, crop waste, dung, kerosene, and wood. The majority of households that lack access to electricity and clean cooking energy are concentrated in rural areas. According to [SDG, 2021], in 2020, around 80 percent of the world's people without access to electricity lived in rural areas, with 75% being located in Sub-Saharan Africa. As for clean cooking fuel, in 2020, 86 percent of people in urban areas had access to clean fuels and technologies compared with only 48 percent of rural populations. While the seventh Sustainable Development Goal (SDG7) aims to ensure access to affordable, reliable, sustainable, and modern energy for all by 2030, Sub-Saharan Africa is far from achieving this goal. This is also the case for all the other SDGs, as shown in Figure 1.3, where Africa is ranked in the last position based on the SDG index ([Sachs, 2022]). The figure also reveals that Africa is the main beneficiary of the efforts accomplished by the rest of the world, mainly the developed countries, towards the SDGs. Africa's performance in terms of the attainment of the SDGs is not surprising. Indeed, there is a strong correlation between SDG7 and the other SDGs. The literature has considered the socio-economic impacts of the lack of energy access, as well as the channels through which these impacts operate at the macro and micro levels. The lack of access to electricity, or to affordable electricity, may constitute a huge impediment to the African industrialization that is necessary to lift millions of African people out of unemployment and poverty ([START_REF] Pablo | Electricity provision and industrial development : Evidence from India[END_REF], [Anning, 2018]). For instance, less electrified countries attract smaller foreign direct investment flows ([START_REF] Inglesi-Lotz | The impact of electricity prices and supply on attracting FDI to South Africa[END_REF], [Chandio, 2020]), while the latter represent an important lever for easing social and economic tensions ([START_REF] Sinkala | Chinese FDI and employment creation in Zambia[END_REF], [Brincikova, 2014],
[START_REF] Abor | Foreign direct investment and employment : host country experience[END_REF], [Klein, 2001], [START_REF] Gohou | Does foreign direct investment reduce poverty in Africa and are there regional differences ?[END_REF]).

Figure 1.3 - SDG Index score vs International Spillover Index score (source: World Bank 2022)

The health and education sectors are also among the most impacted. Due to the absence of electricity, rural areas suffer from a strong deficiency in health and education infrastructure, which explains the low life expectancy and literacy rate of the rural population ([START_REF] Kanagawa | Assessment of access to electricity and the socio-economic impacts in rural areas of developing countries[END_REF], [START_REF] Iyabo Adeola Olanrele | The impact of access to electricity on education and health sectors in Nigeria's rural communities[END_REF], [START_REF] Nadimi | Modeling of quality of life in terms of energy and electricity consumption[END_REF], [Bridge, 2016], [START_REF] Asumadu | Electricity access, human development index, governance and income inequality in Sub-Saharan Africa[END_REF]). At the microeconomic level, households that have access to electricity create income-generating activities (IGA) that improve their revenue. Electricity also offers good conditions for girls in electrified households to improve productivity in household tasks ([Peters, 2009]). Another advantage is the fact that electric light creates a good environment for pupils to work at night and allows them to use work tools such as computers, laptops, and phones ([Arraiz, 2015]). However, as clearly stated in SDG7, the affordability and reliability of electricity are also important and need to be taken into account in any analysis. According to the Energy Sector Management Assistance Program (ESMAP, 2015), access to energy is not just a simple binary variable, but rather a multi-level variable. The simplistic questions in the old surveys do not consider other dimensions of energy access, such as the use of multiple fuels and devices, varying levels of access and use, the quality and safety of the energy source, the affordability of consumer electricity service, and the importance of other household energy services such as space heating and lighting. This information is relevant in the case of Africa, characterized by poor electricity infrastructure. Furthermore, the generation of electricity and the transport sector rely heavily on fossil fuels (Figure 1.4). This explains the high volume of the main greenhouse gas (CO2) in the atmosphere, causing climate change and heart diseases ([START_REF] Dennekamp | Air quality and chronic disease : why action on climate change is also good for health[END_REF], [Bernard, 2001]). Finally, the fossil fuels that are used for cooking also generate particles (including CO2) that are seriously harmful to health.
Household welfare depends not only on the goods and services produced by local industries but also on imported ones. However, these imports can constitute an opportunity or a threat to local industries' growth. The successive waves of trade liberalization, from the creation of the General Agreement on Tariffs and Trade (GATT) in 1947 to that of the World Trade Organization (WTO) in 1995, have dominated the international economics literature. China's accession to the WTO, at the end of 2001, is a case in point. As shown in Figure 1.5, China's trade with the rest of the world has increased at an unprecedented pace, both with developed and developing countries, including Africa. Sino-African trade can be considered the second wave of trade liberalization for almost all African countries. Indeed, not only has the trade volume increased, but there has also been a strong diversification of the products exchanged, including at the extensive margin (that is, with the introduction of varieties of products, with different qualities). This penetration of Chinese products into African markets is not without consequences for African economies, both at the macro and microeconomic levels. At the macroeconomic level, the penetration of Chinese products into African markets has resulted in the creation of many enterprises, mainly service-oriented enterprises engaged in trade activities (e.g., [Sieber-Gasser, 2010]). African manufacturing enterprises can also have access to cheap Chinese products, such as physical capital and intermediate inputs, that allow them to reduce their production costs.
African exporting manufacturing firms are able to increase their market power both in domestic and foreign markets. Note: the second axis is for the bar chart.
We also notice from the literature that it is not only Chinese products that penetrate the African market but also Chinese Foreign Direct Investment (FDI). Chinese FDI flows can contribute to improving gross domestic product (GDP), economic growth, as well as the fiscal revenue that is necessary to reduce the burden of public debt in African countries (e.g., [START_REF] Horiuchi | The effect of FDI on economic growth and the importance of host country characteristics[END_REF], [START_REF] Johnson | The effects of FDI inflows on host country economic growth[END_REF]], [Agyapong, 2019]). Chinese FDI flows to Africa can also create employment and reduce poverty (e.g., [START_REF] Sinkala | Chinese FDI and employment creation in Zambia[END_REF], [START_REF] Boakye-Gyasi | [END_REF], [Onjala, 2008]). However, while the penetration of China into African markets results in greater competition in the domestic markets of African countries, it can also threaten domestic firms' growth and eventually hinder the industrialization of African economies. At the microeconomic level, Chinese products' penetration into African markets represents a huge opportunity for African households to have access to cheaper products, such as household appliances, mobile phones, computers, and laptops. The low prices of Chinese products allow households to improve their purchasing power, increase their consumption as well as savings, and ultimately contribute to the improvement of African countries' macroeconomic performance. The above description sheds light on the potential of trade with China and of energy access to significantly affect African economies. The present thesis focuses on these two factors by analyzing their microeconomic impacts on African countries. It includes three main chapters. Chapter (2) analyses the impact of electricity access on household consumption and home-based activity by focusing on Tanzanian households. Chapter (3) focuses on the effect of China's penetration into African domestic markets on firms' growth. Finally, Chapter (4) explores whether the penetration of China into African markets provides incentives for African firms to improve their productivity and energy efficiency under financial and electricity constraints.
According to the East Africa Economic Outlook (2019), Tanzania was the third fastest-growing economy in East Africa, preceded by Ethiopia and Rwanda. With an area of 947,300 km², the country counts about 61,498,438 people. About one-third of the population lives under the poverty line and almost 70% live on less than $1.25 a day. The shares of people without access to electricity and to clean cooking energy represent 39.9% and 5%, respectively, with a strong disparity between rural and urban areas. Tanzania is endowed with several energy sources, but they are little exploited. To increase the electrification rate in order to reduce poverty and unemployment and accelerate the economic development of the country, a vast electrification program was launched by the government in collaboration with the Millennium Challenge Corporation over the period from 2008 to 2013. Several findings have emerged from the socio-economic impact analysis of this program ([Chaplin, 2017]). For example, there is an increase in children's hours of studying at night, ownership of electric appliances, time spent watching television, and income, and a reduction of poverty measured by per capita consumption. There is also an increase in the likelihood of operating an IGA, but not among low-cost-connection households. We have deepened this analysis by performing a similar study, focusing not only on the impact of electricity on household consumption (per adult equivalent) but also on IGAs. The panel data (2008-2013) allowed us to compare the household situation before and after electrification. Based on the instrumental variable estimator and the difference-in-differences method combined with the propensity score matching approach, we find that households' consumption increases significantly after they are connected to the electric grid. Concerning the IGAs, the analysis, combining various approaches (three-stage least squares, instrumental Probit model, instrumental Tobit model, and endogenous switching regression model), reveals that connected households' capital stock endowment represents the main channel through which electricity affects home-based activity (i.e. IGAs). This materializes through higher turnover and more workers. The fact that electric light can prolong the daily working hours of IGAs is not enough to sustain their growth. We deduce that, to be efficient, the electrification program should be accompanied by a financial inclusion program that would enable households to equip their businesses. However, China's penetration into the African market would also be an opportunity for African firms to import cheaper goods and services. While China's penetration is an opportunity for the informal sector, it could constitute both an opportunity and an obstacle for the formal sector, mainly the manufacturing one. Chapter (3), dedicated to China-Africa trade, seeks to underline the effects of China's penetration into African markets on African manufacturing firms' growth. The impact of China's economic rise has led several researchers to undertake different analyses at the country level but also at the industry and firm levels.
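Before turning to the firm-level analyses, the household-level identification strategy described above for Tanzania (difference-in-differences on a two-wave panel, combined with propensity-score matching) can be made concrete with a stylized sketch. This is not the estimation code of Chapter 2: the data file, variable names and covariates are hypothetical, and the instrumental-variable, Tobit and switching-regression variants mentioned above are omitted.

```python
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical household panel: one row per household x wave (2008 and 2013),
# with log consumption per adult equivalent, a connection dummy and covariates.
df = pd.read_csv("tanzania_households.csv")          # assumed file
base = df[df.wave == 2008].copy()

# 1) Propensity score: probability of becoming connected by 2013,
#    estimated from baseline (2008) covariates.
X = base[["hh_size", "educ_head", "dist_grid_km", "urban"]].values
ps_model = LogisticRegression(max_iter=1000).fit(X, base["connected_2013"].values)
base["pscore"] = ps_model.predict_proba(X)[:, 1]

# 2) Nearest-neighbour matching of treated to control households on the score.
treated = base[base.connected_2013 == 1]
control = base[base.connected_2013 == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]].values)
_, idx = nn.kneighbors(treated[["pscore"]].values)
matched_ids = pd.concat([treated.hh_id, control.iloc[idx[:, 0]].hh_id])

# 3) Difference-in-differences on the matched sample.
sample = df[df.hh_id.isin(matched_ids)].copy()
sample["post"] = (sample.wave == 2013).astype(int)
did = smf.ols("log_cons_ae ~ connected_2013 * post", data=sample).fit(
    cov_type="cluster", cov_kwds={"groups": sample["hh_id"]})
print(did.params["connected_2013:post"])   # DiD estimate of the connection effect
```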
For instance, at the macroeconomic level, the variables of analysis are economic growth ([START_REF] Miao | [END_REF], [START_REF] Gebre | The impact of Africa-China trade openness on technology transfer and economic growth for Africa : A dynamic panel data approach[END_REF], [Xin, 2014], [Lu, 2018]), the poverty rate ([Jenkins, 2005], [START_REF] Kaplinsky | [END_REF]), and unemployment ([START_REF] Pigato | [END_REF], [START_REF] Ebenstein | Why are American workers getting poorer ? China, trade and offshoring[END_REF]). At the microeconomic level, industries' or firms' performance, measured by sales and labor growth ([Autor, 2013], [START_REF] Kigundu Macharia | The impact of Chinese import competition on the local structure of employment and wages : evidence from France[END_REF]) on the one hand, and by total factor productivity ([START_REF] Bloom | Trade induced technical change ? The impact of Chinese imports on innovation, IT and productivity[END_REF], [Darko, 2021]) on the other hand, has been at the heart of the analysis. Analyses have been performed upstream and downstream of firms (or industries) as well as on third markets. These studies have also sought to examine the impact of imports of China's inputs and of China's output on firm performance, as well as the competition effect on the common external market (CES). The studies have found mixed effects. Some studies found that China's competition in the domestic market is harmful to firms' performance, while others concluded that there is a pro-competitive effect. This divergence in outcomes can also be observed at the input level. Concerning the effects on the third market, there is almost unanimity that China's competition in the CES is a major threat to exporting firms. However, there are few studies on Africa. Chapter (3) aims to complement the existing literature by showing how China's imports can affect African firms upstream and downstream as well as in third markets, by distinguishing between developing and developed markets. That is, on the one hand, we analyze the impact of China's input and output penetration on African firms' growth, approximated by sales growth and labor growth. On the other hand, we consider the effect of China's penetration into the foreign common market on exporting firms' growth. We find that China's output penetration into African markets represents a huge threat to small and younger firms, 6 while it generates a pro-competitive effect for large firms. In the foreign common market, the increase in China's penetration has led to a decrease in exporting firms' growth, but the impact appears to be more pronounced in developed markets. While we expected China's inputs to generate economies of scale and boost African firms' growth, we find that they affected firms' growth negatively and significantly. We attribute this negative outcome to the fact that China's inputs might not be appropriate for African firms' working conditions, although the results may also be explained by the approach used to construct the indicator of input penetration.
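A standard way to build the exposure measure referred to above is to scale each country-industry's imports from China by the size of its domestic market and then merge this penetration rate onto the firm panel. The sketch below shows one such construction and a simple growth regression with fixed effects; the file names, column names and the small-firm interaction are illustrative assumptions rather than the exact variables used in Chapter 3.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trade data: imports from China by country, industry and year,
# together with domestic absorption (production + imports - exports).
trade = pd.read_csv("china_imports_by_industry.csv")     # assumed file
trade["china_pen"] = trade["imports_from_china"] / trade["domestic_absorption"]

# Hypothetical firm panel with sales growth, a small-firm dummy and industry codes.
firms = pd.read_csv("firm_panel.csv")                     # assumed file
panel = firms.merge(trade[["country", "industry", "year", "china_pen"]],
                    on=["country", "industry", "year"], how="inner")

# Growth regression with industry and country-year fixed effects; the
# heterogeneous effect by firm size is captured by an interaction term.
model = smf.ols(
    "sales_growth ~ china_pen * small_firm + C(industry) + C(country):C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["industry"]})
print(model.params[["china_pen", "china_pen:small_firm"]])
```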
As shown in Figure 1.6, unreliable electricity supply and hindered access to the financial market represent the main obstacles faced by manufacturing firms in African countries, mainly small and medium enterprises ([African Economic Outlook, 2019]). The results found in Chapter (3) concerning small and younger firms can become worse for firms that are confronted with these challenges. For instance, firms that face major obstacles in the financial market would not be able to make the necessary investments to improve their efficiency so as to retaliate when facing an increase in imports from China. Likewise, firms confronted with poor-quality electricity would be more vulnerable to China's penetration into the domestic market and would lose more in energy productivity if actions were not taken to improve it. Chapter (4) investigates the overall impact of China's imports on firms' performance, i.e. total factor productivity (TFP) and energy efficiency approximated by energy intensity (measured by the ratio of energy consumed to the value of output produced). It also seeks to investigate how African firms have managed to adapt to China's competition, despite their electricity and financial challenges. Finally, the chapter explores the direction of causality between the two indicators of firms' performance, namely productivity and energy intensity. There is a vast literature on the relationship between trade openness and productivity on the one hand, and between trade openness and energy efficiency on the other hand. According to the former, competition from foreign countries following trade liberalization can be detrimental to domestic firms since the foreign goods can oust the domestic ones, leading to losses of economies of scale and firms' performance ([De Loecker, 2014], [START_REF] Melitz | [END_REF]). However, there can also be an opportunity for dormant domestic enterprises to make use of all their potential ( [START_REF] Mukherjee | [END_REF], [Umoh, 2013]). In this case, China's imports can have a pro-competitive effect on firms' performance ( [START_REF] Melitz | [END_REF], [Darko, 2021]). The penetration of China's inputs is another channel through which China-Africa trade can enhance African firms' performance. As noted above, China can sell machines, appliances, and all other equipment goods as well as intermediate inputs and energy-saving technologies relevant to the African manufacturing firms' activities ( [START_REF] Gebre | The impact of Africa-China trade openness on technology transfer and economic growth for Africa : A dynamic panel data approach[END_REF], [START_REF] Munemo | Examining imports of capital goods from China as a channel for technology transfer and growth in Sub-Saharan Africa[END_REF]). Many studies have addressed this issue but not much in the context of China-Africa trade, which is the main contribution of this paper to the literature. Another contribution of this paper to the literature is the theoretical description of the relationship between firms' productivity, firms' energy intensity, and production level. This theoretical analysis is useful for the interpretation of empirical results and for understanding how China's penetration into African markets can affect firms' performance through its effect on their sales.
The empirical outcomes based on the instrumental variables approach (two-stage least squares and three-stage least squares) have shown that China's penetration into the African market has led to a decrease in both firms' productivity and energy intensity of small and medium enterprises. But, according to the theoretical model, the reduction of energy intensity could be explained by the decrease in the firm's production level that leads to the decrease in energy use. However, overall, the improvement in productivity leads to a decrease in energy intensity but there is no reverse causality. Furthermore, among small firms, only those facing a shortage of electricity improve their productivity while among large firms, better performance is observed only for those facing financial obstacles. We conclude that China's penetration into the African markets has incentivized African firms to address their challenges and develop their potential to stay in the market and continue to produce. We also deduce from our results that improving traditional performance (through higher productivity and energy efficiency) is not enough to compete with China, but firms should improve innovation at the product level to maintain their market share.
Chapter 2
Electricity access, consumption, and home-based business : Evidence from Tanzanian households
This paper evaluates the impact of access to electricity on household consumption and the income of home-based businesses in Tanzania. It uses panel data spanning 2008 to 2013 and collected in three waves. Using first-difference and instrumental variable methods, we find that this impact is positive and statistically significant, with a value of around 120%. We then use the triple-stage least squares method, the instrumental Probit model, and the endogenous switching regression model to show, respectively, that electricity acts through the capital stock to transform the income of home-based businesses and to enable them to create jobs.
Introduction
Energy, in all its forms, is important for human welfare and a healthy economy. It was a key factor in the first industrial revolution with the appearance of the steam engine that replaced human and animal labor [START_REF] Anthony | Energy and the English industrial revolution[END_REF]. Since then, labor productivity has increased, and the population's needs are covered faster than before. Today, energy exists in many forms including mechanical, thermal, chemical, and electrical, of which the last is the most useful. It is omnipresent in all human activities, and it is becoming necessary both at home and in companies. Because we can pass from one form of energy to another, electrical energy can be obtained from all other forms of energy. For the widespread use of electricity, primary sources such as fossil fuels are generally required. Unfortunately, despite an abundance of these primary resources, Africa suffers from a crucial lack of electricity.
According to World Bank statistics (2014), about 38% of the sub-Saharan African population has access to electricity, while the equivalent figure is 45% for the whole of Africa [Agency, 2014] and 85% worldwide. These statistics show the scale of the challenge in this sector, especially in sub-Saharan African countries including Tanzania. In fact, Tanzania is one of the countries whose energy situation is paradoxical. It has large energy reserves, of which very few are exploited. Its hydroelectric power potential is estimated at 4,700 megawatts, of which about 15 percent is used; its natural gas reserves are estimated at the equivalent of nine billion barrels of oil; and its solar radiation is estimated at 4 to 7 kWh/m²/day with a sunshine duration of 2,800 to 3,500 hours per year [START_REF] Group | African Development Bank Group. Renewable Energy in Africa[END_REF]. Despite this potential, only 32.8% of Tanzanian households have access to electricity [Tanzania, 2017], with 16.9% in rural areas compared to 65.3% in urban areas. Given that the installation of large factories and enterprises is conditioned, among other things, by the availability of electricity ( [START_REF] Kanagawa | Assessment of access to electricity and the socio-economic impacts in rural areas of developing countries[END_REF] ; [Avila, ]), rural areas will have to continue, for a long time, to put up with the paucity of their infrastructure and the consequent economic poverty which gives rural households a poor standard of living. In addition to this macroeconomic effect of electrification on household welfare, the literature postulates other micro-level effects that are far from negligible but very mixed. This paper examines the causal link between households' access to electricity and their income on the one hand, and, on the other, investigates how electricity transforms the revenue of home-based enterprises, thus creating externalities for the neighborhood.
This paper makes several contributions. The study is one of the few dealing with such a problem in Tanzania using panel data, to which we have applied the approach of first difference with instrumental variables to determine the effect of electricity on household consumption. The "triple stage least square" estimator is used for the transmission channel while the instrumental Probit and endogenous switching regression models are employed to assess the externality of households' access to electricity.
Results of the first model show a statistically significant positive effect of electricity on day-to-day household consumption (food, communication, and transport). This increase may be due to a reallocation of household income or, according to the second model, to a growth in the income of their home-based enterprise, which is only possible for households with a sufficient physical capital stock. Results also show that these home-based businesses are able to create employment when they are well equipped with physical capital, but the latter is dependent upon the presence of electricity. Therefore, we deduce from our analysis that households' access to electricity indirectly impacts their income and that of neighboring households through physical capital.
This paper is structured into five sections. Section 1 presents the literature review. Section 2 presents the data and some descriptive statistics while section 3 presents the methodology. Sections 4 and 5 describe the results and robustness tests.
Literature review
The omnipresence of electricity is becoming more and more evident today. Advances in various technological fields make life very dependent on energy and more particularly on electrical energy. The scarcity of the latter in the less advanced countries (and especially those of sub-Saharan Africa) has brought the question of its universal accessibility to the forefront. For example, universal access to electricity is one of the UN's priority Sustainable Development Goals (SDGs) and is also one of the major issues being addressed by the African Development Bank. To this end, several electrification projects are being implemented on the continent in order to boost its economy and provide its people with a better life. However, the literature is peppered with controversial results regarding the achievement of these expected social and economic objectives. This paper focuses on how access to electricity affects households' income, measured through their consumption.
Households' possession of electricity is not intrinsically dependent on their desire to have it. Apart from the accessibility of the area where the household is located [START_REF] Tzempelikos | The impact of shading design and control on building cooling and lighting demand[END_REF], its demographic characteristics, its level of well-being as well as the average educational level of its members ( [Behera, 2017] ; [START_REF] Choumert | Stacking up the ladder : A panel data analysis of Tanzanian household energy choices[END_REF]) are also factors determining the choice of electricity use in households. Having electricity improves the lives of households in many ways. It can improve their health, their members' educational level, and their economic status.
Access to electricity has a considerable impact on household health [START_REF] Bruce | Dynamic light scattering : with applications to chemistry, biology, and physics[END_REF] ; [START_REF] Dherani | Indoor air pollution from unprocessed solid fuel use and pneumonia risk in children aged under five years : a systematic review and meta-analysis[END_REF] ; [Samad, 2013] ; [START_REF] Harris | [END_REF] ; [START_REF] Grimm | [END_REF] ; [START_REF] Po | Respiratory disease associated with solid biomass fuel exposure in rural women and children : systematic review and meta-analysis[END_REF] ; [START_REF] Van De Walle | [END_REF]). Households traditionally use fossil fuels whose combustion emits noxious gases. Women are therefore the main victims of this phenomenon because, according to tradition in Africa and indeed many other developing countries, they usually do the cooking. Young children are also subject to this problem since they are constantly with their mothers, while schoolchildren have to spend time close to kerosene lanterns in order to study after dark. Access to electricity can thus reduce lung disease and eye problems caused by the use of kerosene ( [START_REF] Aklin | Quantifying slum electrification in India and explaining local variation[END_REF] ; [START_REF] Gurung | [END_REF] ; [Brass, 2012] ; [START_REF] Grimm | [END_REF]). The use of fans to renew the air in dwellings and the use of refrigerators to conserve the quality of food and drink for many days are also beneficial from the health and economic viewpoints for electrified households ( [Bastakoti, 2006] ; [Khandker, 2013] ; [Kooijman-van Dijk, 2012]). Television is also a non-negligible source of health improvement since household members can learn much about, and better understand how to avoid certain dangers. Periods of cold weather also show how lack of electricity can affect health, particularly for the most vulnerable (the elderly for example) and people allergic to the cold because they suffer from a particular disease that is exacerbated by the cold. [START_REF] Chirakijja | Inexpensive Heating Reduces Winter Mortality[END_REF], following a study carried out in the United States, showed that, because of the high cost of domestic heating, the mortality rate accelerates during cold periods. Most of these deaths are caused by cardiovascular and respiratory diseases. Households' access to electricity is, therefore, necessary for better health.
Albeit with reservations, the literature on education also finds that households' access to electricity increases the literacy rate ( [START_REF] Dinkelman | The effects of rural electrification on employment : New evidence from South Africa[END_REF] ; [START_REF] Gurung | [END_REF] ; [Sovacool, 2013] ; [START_REF] Kanagawa | Assessment of access to electricity and the socio-economic impacts in rural areas of developing countries[END_REF]) and allows members to spend more time studying since electric lamps provide comfortable learning conditions after dark [START_REF] Moses Kwame Aglina | Policy framework on energy access and key development indicators : ECO-WAS interventions and the case of Ghana[END_REF]. It encourages learners to do more research, read more, and use computers and other things that can help them to perform better at school ( [START_REF] Kanagawa | Assessment of access to electricity and the socio-economic impacts in rural areas of developing countries[END_REF] ; [Arraiz, 2015]). As a result, they reach higher academic levels and get jobs that can change the household's standard of living. Furthermore, in developing countries, about 83% of households do not have access to clean cooking fuel [Outlook, 2019], a percentage that is higher in rural areas where they rely on harvesting wood. Under these conditions, girls are not able to attend school since they are responsible for wood collection. When households get access to electricity, they can use electrical cooking appliances, or they manage their time better to allow girls to learn more ( [Peters, 2009] ; [START_REF] Agoramoorthy | Lighting the lives of the impoverished in India's rural and tribal drylands[END_REF] ; [START_REF] Ravindra | A case study of solar photovoltaic power system at Sagardeep Island, India[END_REF]). In the long term, this contributes to reducing socio-economic gender inequality [START_REF] Grogan | [END_REF]. The ultimate consequence of education is that being literate can help people to adopt a hygienic lifestyle and take more control over their lives which can in turn be useful to society.
The literature's main focus is on households' economic situation. What happens to household incomes when they get electricity ? Researchers have done considerable work on this issue without reaching a consensus. Some studies conclude that electrification has a positive impact on households' income and leads to a reduction in their poverty ( [START_REF] Douglas | Energy strategies for rural india : Evidence from six states[END_REF], [START_REF] Khandker | [END_REF], [Fan, 2005b]). Other studies find no significant impact ( [Bensch, 2011], [Escobal, 2001]), while yet others find negative impacts on the income of poor households. It should be noted that controversy prevails, especially in rural areas where the conclusions are far from unanimous ( [Khandker, 2012b], [Peters, 2009], [Mensah, 2014], [START_REF] Bernard | Impact analysis of rural electrification projects in Sub-Saharan Africa[END_REF], [START_REF] Khandker | [END_REF], [Khandker, 2012a]). According to the literature, electricity affects household income through several channels. The main one is income-generating activity (IGA). Many papers have shown that there is growth in IGA in newly-electrified areas, leading to a growth in the employment rate ( [Bastakoti, 2006], [Kooijman-van Dijk, 2010]). Since it is possible to have an IGA without access to electricity, once households get electricity, they can shift from non-electrified IGA to electrified IGA or extend their working time after dark ( [Cabraal, 2005] ; [Mishra, 2016]).
Furthermore, improvements in health could favor people spending more time on IGA or other economic activities. Likewise, the improvement in educational performance would increase the probability of an individual getting a good job. These two aspects would bring more income to households and increase their medium-or long-term consumption of food and non-food goods such as modern sources of energy. The transition from traditional fuels to modern energy allows women to spend less time on domestic activities (collecting wood, cooking, etc.) and to undertake other, more lucrative, activities, thus increasing their employment rate and reducing gender inequality ( [START_REF] Dinkelman | The effects of rural electrification on employment : New evidence from South Africa[END_REF] ; [Grogan, 2009] ; [START_REF] Bowlus | Gender wage differentials, job search, and part-time employment in the UK[END_REF]).
In Tanzania, very few authors have shown an interest in the question of how electrification impacts household income. Among these papers, [Fan, 2005a] state that not only does the electrification of households have a significant and positive impact on their income, but a 1% increase in the number of connected households would lift some 140,000 people out of poverty. However, this work is not exclusively devoted to electricity but to all state infrastructure expenditures (education, health, roads, electricity, agriculture), the (binary) variable "access to electricity" has not been dealt with carefully since the connection of a household to the grid depends on several of the above-mentioned factors and is therefore endogenous. Consequently, the authors' conclusion may be questioned since the coefficient of this variable could be biased and inconsistent. [Chaplin, 2017] apply a DID approach to evaluate the effect of an extension of the electricity grid on the welfare of newly connected households and also on (low-income) households that received a government subsidy. First, they find that the grid extensions had no clear impact on the number of hours children studied at night and whether the household operated any income-generating activity (IGA) while there is an increase in ownership of electrical appliances and operation of IGAs that use grid electricity. Likewise, low-cost-connection offers increased ownership of electrical appliances and worsened health outcomes, reduced poverty as measured by per capita consumption but had no clear impact on the likelihood of operating an IGA. However, being actually connected to the grid increased the number of hours children studied after dark, the likelihood of operating an electrified IGA, and income and reduce poverty. Other studies were also carried out in the country, but at the regional level, to evaluate the impact of access to electricity on the welfare of these communities. One example is Kigoma [START_REF] Vohra | [END_REF] and [Hankinson, 2011].
The following diagram summarizes the three main channels through which electricity can affect household income according to the literature. According to Channel 1, access to electricity not only allows women to better manage their time but also increases the productivity of hours allocated to domestic tasks. This creates a surplus of time for women to use for more lucrative purposes and increases household income. Channel 2 shows that access to electricity strengthens home-based businesses and increases their turnover. Finally, channel 3 is often expressed in the long term. It highlights the impact of access to electricity on education. Electricity offers better conditions for learning leading to the improvement of household members' skills. As a result, they are more likely to get a better-paying job, which in turn improves the household income in the long run.
Data and Descriptive Statistics
Data Sources
Since 2008, Tanzania has acquired a panel of data collected from a national survey, conducted in three phases at regular intervals of two years. Thus, the first phase started in October 2008 and ended in October 2009 and included 3,265 households representing the national population in its geographical and ecological diversity. This base sample is divided into 409 enumeration areas across mainland Tanzania and Zanzibar. The sample makes it possible to extrapolate to Zanzibar and three zones in mainland Tanzania : rural areas, Dar es Salaam and other urban areas. In the next two phases, these households were monitored across space and time as far as possible to avoid attrition problems. Furthermore, individuals who left their initial households to create their own households were added to the survey data. This brings the sample size of the second phase to 3,924 and the third to 5,010 households respectively. The second phase was conducted from October 2010 to November 2011 and the third from October 2012 to November 2013. Individuals were also monitored. As a result, detailed data on health, education, consumption, crime, and other topics were collected at individual, household, and geographical (ward, district, regional) levels.
This survey was designed to meet three main objectives. The first is to monitor progress toward the goals set out in the National Strategy for Growth and Poverty Reduction. The survey provides annual high quality, twelve-month data relating to indicators that are representative at the national level and over time. This, therefore, acts as a key reference for monitoring a wide range of development indicators. The second objective is to provide a better understanding of the determinants of poverty in Tanzania and how it changes at the national level over time. It is also intended to serve as a basis for analyzing the determinants of income growth, changes in and improvement of academic performance, and changes in the quality of public service delivery. Finally, the survey aims at allowing rigorous evaluation of the impact of government and nongovernment development initiatives. To achieve this, the National Bureau of Statistics worked closely with relevant ministries to link administrative data on relevant projects to changes in development outcomes measured in the survey.
Descriptive statistics
Household electricity sources
Here we give the statistics for Tanzania's main sources of electric power : generators, solar panels, and the Tanzanian electric energy production and distribution company (Tanesco). As shown in Table 2.1, the latter is the source used by most households, followed by solar panels in 2008 (used to a greater extent in rural areas) and generators in 2012. Given the small proportions of households using other sources, throughout the rest of the document we define as "electrified" only those connected to Tanesco.
The interest of our study is to capture the effect of access to electricity on household consumption and the channels through which this effect is transmitted. To this end, we distinguish two types of household : households initially without electricity in 2008 but which had it in 2012, denoted treated households, and the control group, consisting of households not connected between 2008 and 2012, denoted untreated households. Indeed, measuring the impact of electricity by comparing the "before and after" situations of treated households requires the counterfactual situation i.e. absence of electricity. As this is impossible to obtain, the control group provides an alternative. To be valid, it must have approximately the same characteristics as the treated group. Table 2.2 shows these two groups : In total, we have 169 treated households and 2,180 untreated, most of which are in rural areas.
Variable of interest : household consumption expenditure
In Africa, it is difficult to collect information about household income, partly because of social considerations and also because most people work in the informal sector where incomes are very erratic. Household income is estimated either upward or downward, depending on the survey/census conditions. If we assume that households do not consume more than they earn, the best way to approximate household income is to estimate their total actual expenditure [Datt, 1992]. However, this equivalence may be slightly wrong given that mutual aid and solidarity between households are still deeply rooted in African traditions. It is, therefore, possible for a household to consume or spend beyond its financial capacity. Its income can then drop without an accompanying drop in consumption. The sample being representative of the population, it would contain a low proportion of these types of households. Hence, consumption remains a powerful means of assessing households' income. The distribution of household consumption (in 2008) in the treatment and control groups, measured as the total monthly expenditure per adult equivalent, shows a shift of the treated households' curve to the right. That means the treated households have, on average, a consumption, and therefore an income, higher than that of others. According to the literature ( [Louw, 2008] ; [Terza, 1986] ; [Ziramba, 2008]), access to electricity bears a cost that can be an obstacle for low-income households. Apart from the unavailability of electrical energy in the household's immediate environment, its income level may also constitute an obstacle to its connection. As illustrated in Table 2.3, this assertion seems to be relevant given that the physical characteristics (roof, walls, floor) of households' dwellings show a big difference in favor of treated households. The first column presents the characteristics of all households in the sample while columns (2) and (3) show those of the treated and untreated groups, respectively. Column (4) contains the coefficients of the linear regression of each characteristic on the variable T, which takes the value of 1 if the household is treated and the value of 0 otherwise. For each modality of a variable, a significant coefficient means that the gap between the two groups of households is considerable. The table indicates that, unlike the "owner" households, there are more treated in the group of tenants. Proximity to a major road and to a market also favors treatment, as well as living in flat, low-elevation areas. Finally, living in an urban setting also offers a greater opportunity to benefit from treatment. One can therefore deduce that, in addition to income, other characteristics also prove the selective aspect that exists in the connection process of the two categories of households.
Methodology
Impact of electricity access on household consumption
As discussed above, there is a selection issue in the sample that would result from several factors. Some of these depend on households' intrinsic characteristics. However, the geographical and socio-economic characteristics of these households' location may also be an obstacle or an asset to their electrification. For instance, Table 2.3 reveals that treated households live in better housing and in more accessible geographical areas than untreated households. This selection issue in the reception of the treatment must be corrected in order to obtain consistent results.
While the location of development projects is often subject to conditions, the case of electrification is even more so. In the literature, very few papers ( [START_REF] Barron | [END_REF] ; [START_REF] Pablo | Electricity provision and industrial development : Evidence from India[END_REF], [START_REF] Grimm | [END_REF]] [Torero, 2015] ; [START_REF] Barron | [END_REF] ; [START_REF] Barron | [END_REF]) have mentioned a random distribution of electrification projects at the national or community level. In the absence of random distribution, academics refer to more or less laborious tools to overcome the endogeneity issue. This is how [Bensch, 2011] and [START_REF] Khandker | [END_REF] combined the propensity score matching (PSM) and difference-in-difference (DiD) method to ensure that the assessment of the treatment effect is performed on similar households and that the time-invariant unobservable factors likely to alter the treatment effect have been removed. The most widely-used method in the literature is that of an instrumental variable in a fixed-effects regression model. However, this instrumental variable is still not easy to find, and its nature depends not only on the available data but also on knowledge of the issue. In the specific case of electricity, geographical characteristics (slope, elevation, vegetation cover, etc.) and demographic characteristics (e.g. density) have been used as instruments in several papers ( [START_REF] Grogan | [END_REF], [START_REF] Dinkelman | The effects of rural electrification on employment : New evidence from South Africa[END_REF], [Samad, 2013], [START_REF] Lipscomb | Development effects of electrification : Evidence from the topographic placement of hydropower plants in Brazil[END_REF]). But these variables are inadequate in some cases, especially when dealing with economic variables such as income, consumption, and economic growth which are themselves related to geographical and demographic characteristics. In this case, it becomes difficult to correct the endogeneity issue relating to access to electricity. In response, several authors have decided to construct alternative instruments. This is the case of [START_REF] Khandker | [END_REF] and [Van de [START_REF] Van De Walle | [END_REF] who used the distance between the household and the nearest power line and the distance between the household and the nearest power station, respectively.
For a rigorous analysis, we also opt for the instrumental variable method. To assess the impact of electricity on household consumption, we follow [START_REF] Dinkelman | The effects of rural electrification on employment : New evidence from South Africa[END_REF] :
$$y_{hwt} = \alpha_0 + \gamma T_{hwt} + \theta_w t + \beta_h + \beta_w + \beta_t + \epsilon_{hwt} \qquad (2.1)$$
Where y hwt represents the logarithm of day-to-day consumption of household h living in ward w in period t which takes the value 0 in the baseline period and that of 1 in the final period. Day-to-day consumption is defined as the sum of three main expenditure items namely food, communication, and transport. The sum of these costs is evaluated per adult-equivalent. We suppose that these three consumption items are relevant because they represent the day-to-day consumption of most households and are likely to change when the household's income changes. Indeed, the increase in a household's income due to its electrification could lead to a change in its consumption habits. For example the purchase of products requiring cool storage, which was previously not possible. Although it is possible to have a mobile phone without being electrified, it could be a heavy burden in environments where charging is somewhat expensive, especially with cheaper, but poor-quality laptops or mobile phones that discharge quickly. However, access to electricity would encourage full-time use of these appliances and therefore increase the household's communication expenditure. Finally, when a household member starts a home-based business to increase household income, transport costs would increase because he/she has to travel to pay for inputs, for example. The treatment variable is noted T hwt while ϵ hwt is the model's error term. The former takes the value of 1 when the household h is treated and 0 otherwise. Knowing that β t captures time specific effects, the parameters β h and β w indicate unobservable fixed effects at household and ward levels respectively while θ w captures the ward trend. Unobservable changes (θ w ) between the two periods of study could affect the treatment effect by biasing it either downward or upward if it was not included in the model. For instance, if a household head obtains a job, because of an improvement in the economic situation in his ward between t 0 and t 1 , the household income would increase. Thus, it can buy a refrigerator and start using new types of food. This additional consumption might not be possible if the household income had not increased. In this case, the effect of the treatment would be biased upwards because this income growth is not due to the connection to the electricity grid. Similarly, the treatment effect may also be influenced by certain fixed characteristics of the household's residence area. As an example, urban households do not benefit from electricity in the same way as rural households. To account for these heterogeneities3 , we introduced the following variables into the model : the distance separating the household from the city center, the distance separating the household from the nearest market, the occupation status of their dwelling (owner or tenant), and the geographical slope of the area. Let X hw be these control variables.
$$y_{hwt} = \alpha_0 + \gamma T_{hwt} + \lambda X_{hw} + \sigma (X_{hw} \times t) + \theta_w t + \beta_h + \beta_w + \beta_t + \epsilon_{hwt} \qquad (2.2)$$
In order to obtain the change in consumption due to the treatment between 2008 and 2012, we used the first difference.
$$\Delta y_{hw} = y_{hw(t+1)} - y_{hwt} = \gamma_0 + \gamma \Delta T_{hw} + \sigma X_{hw} + \theta_w + \Delta \epsilon_{hw} \qquad (2.3)$$
This form allows us to avoid potential endogenous bias which could be occasioned by the specific fixed effects $\beta_w$, but mainly $\beta_h$. The constant term $\gamma_0$ is equal to $\beta_1 - \beta_0$.
The most common variable that explains the probability of electrification of an area is its degree of accessibility. To measure this, the relief (geographical slope and elevation, and vegetation cover) of the area is generally used. In addition, the transport cost of electricity from the power plant to the households' area increases with distance [ [START_REF] Van De Walle | [END_REF]. This distance takes precedence over geographical characteristics since an area very close to a power plant is likely to be electrified regardless of its relief. Also, the relief can explain the economic status of an area, i.e. the development of economic activity in this area and the consequential change in household consumption. For instance, the geography of an area can lead to floods that would be an obstacle to the smooth running of economic activity.
Furthermore, because households use a low voltage, it is more relevant to use the distance that separates them from the nearest substation rather than from the main power station as [Van de [START_REF] Van De Walle | [END_REF]] did in the case of Indian rural households, where they analyze the impact of being connected on education and consumption. However, information about the exact geographical position of households or of substations is not available in our sample. We thus maintained as the first instrument the distance separating the household's ward from the nearest station. The probability of a ward being electrified thus decreases as this distance increases.
Furthermore, in Tanzania, connection fees vary according to the distance between the household and the nearest electricity pole beyond 30 meters [Chaplin, 2017]. To take this into account, we considered as a second instrument the distance separating the household from the nearest main road. Indeed, since electricity poles are often erected along highways, the distance between the household and the nearest major road may be used as a proxy for the distance between a power pole and the household [Salmon, 2016].
In summary, the distance separating the household's ward from the nearest power plant captures the geographical obstacle while the distance separating the household from the main road captures the size of the financial barrier to be overcome. Even if these two instruments seem a priori exogenous, it is not impossible that they may be related to household income. For instance, high-income households are likely to locate close to major roads and in areas with a high probability of electrification. However, this argument is insufficient to disqualify our instruments because the power plant can also be built -or the major road route built -after the installation of the household. In light of the IV approach, the model then becomes :
$$\Delta y_{hw} = \gamma_0 + \gamma \Delta T_{hw} + \sigma X_{hw} + \theta_w + \Delta \epsilon_{hw} \qquad (2.4)$$
with
$$\Delta T_{hw} = \psi_0 + \psi_1 IV_{hw} + \mu_w + \pi X_{hw} + \tau_{hw} \qquad (2.5)$$
The instrument $IV_{hw}$ provides information on the probability that a household would connect to the national grid. It represents the vector of the two instrumental variables previously described. We also assume that the baseline household characteristics $X_{hw}$, introduced into the model as control variables, cannot significantly affect households' consumption when they do not benefit from the treatment.
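For illustration only, a specification of the form of equations (2.4)-(2.5) could be estimated with an off-the-shelf 2SLS routine. The sketch below uses the `linearmodels` package and is not the estimation code behind Table 2.4; all column names (the consumption difference, the connection difference, the two inverse distances, and the baseline controls) are hypothetical placeholders for the survey variables. The inverse of the distances is used so that larger values correspond to a higher connection probability, as in the tables.

```python
# Minimal, hedged sketch (not the original code): 2SLS on the first-differenced
# model of eqs (2.4)-(2.5) with the two distance-based instruments.
# All column names below are hypothetical placeholders for the survey variables.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("tanzania_panel_first_diff.csv")  # one row per household, 2008-2012 differences

formula = (
    "d_log_cons ~ 1 + dist_center + dist_market + owner + slope"  # Delta y and baseline controls
    " + [d_connected ~ inv_dist_plant + inv_dist_road]"           # Delta T instrumented by the inverse distances
)
res = IV2SLS.from_formula(formula, data=df).fit(cov_type="robust")
print(res.summary)
# First-stage diagnostics (partial R2, F-statistic) are available via res.first_stage
```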
Impact of electricity access on home-based business
Electricity plays an important role in home-based businesses in various ways. Very few authors ( [Samad, 2013], [Chaplin, 2017], ESMAP 2002) have however addressed this channel but only briefly. Nevertheless, their analyses have reached the conclusion that access to electricity is not essential for home-based business creation [Chaplin, 2017] but it does have a positive impact on the revenue of households with a home-based business [Samad, 2013]. Like the latter, we will study the case of Tanzania to determine whether households with a home-based business derive a real benefit from their electrification and how they proceed. To do this, we have grouped the data (capital stock, number of employees, etc.) by household. We can perform this grouping because the distribution of the number of people who run a home-based business does not vary according to their connection status (Appendix 1). We then use only data from the third wave of the panel and then restrict the study only to households with a home-based business. We have abandoned the panel data for the simple reason that the composition of households changes over time. If, for example, in 2008, there were three people in a household involved in a home-based business but in 2013 one of them left the household, the capital stock and/or the number of employees may decrease, thus leading to erroneous coefficients. To compensate for this, the analysis is limited to 2013 alone, which also allows us to make use of all the possible observations in the sample.
Electricity access and Turnover of home-based business
We analyze the effect of access to electricity on the gross income of home-based businesses as follows :
$$BusinessIncome_h = \alpha_0 + \alpha_1 E_h + \alpha_2\, capitalstock_h + \lambda X_h + \epsilon_h \qquad (2.6)$$
The variable on the left is the Business income, the variable E h takes the value of 1 if household h is electrified and 0 otherwise ; X h is the vector of control variables and ϵ h is the error term. As a reminder, the variable E h is endogenous since the possession of electricity depends both on geographical accessibility and on household income. It must be instrumented to avoid biased coefficients. Here, to be valid, the instrument must be correlated with E h without being correlated with the error term ϵ.
The proximity of a company to a major road can indeed influence its income, while its proximity to a power plant has virtually no impact on its business income. However, the proximity of a major road becomes irrelevant if the household's residence area has a higher density. We have therefore used a new instrument vector composed of the distance of the household from the nearest power plant, the power of this station, and their interaction term. We use the 2SLS method to correct the endogeneity of the electricity variable through the following model :
$$BusinessIncome_h = \alpha_0 + \alpha_1 E_h + \alpha_2\, capitalstock_h + \lambda X_h + \epsilon_h \qquad (2.7)$$
with
$$E_h = \mu_0 + \mu_1 IV_h + \mu_2\, capitalstock_h + \gamma X_h + \psi_h \qquad (2.8)$$
The vector $X_h$ includes household characteristics (e.g., occupation status and size), its geographical characteristics, and some information on its activity such as the current monthly expenses. A review of the data (Appendix 2) shows that 95% of households without capital are households not connected to the electricity grid. Also, the average capital stock is lower among households that are not connected compared to others (Appendix 3). Therefore, we can hypothesize that the household capital endowment depends on its access to electricity. We can test this hypothesis using the "instrumental Tobit" model, since the "capital stock" variable is considered to be censored at zero while "access to electricity" is endogenous and must be instrumented. A selection bias in the model may arise because the capital endowment depends on the possession or not of electricity. Moreover, electrified and non-electrified households do not necessarily have the same type of equipment. Households without electricity are likely to be more equipped with purely manual or, at most, thermal equipment while electrified households are likely to have electrical machines that will necessarily have better productivity than non-electrical ones. Since the coefficient $\alpha_2$ captures both the impact of electrical and non-electrical machines, it would be estimated downwards. To account for this recursion between access to electricity, capital stock, and income from the business, we use the triple-stage least-squares (3SLS) method as follows :
$$\begin{cases} capitalstock_h = \theta E_h + \sigma X_h + \psi_h \\ BusinessIncome_h = \alpha E_h + \beta\, capitalstock_h + \lambda X_h + \epsilon_h \end{cases}$$
In the first equation, the capital stock is estimated while in the second, its effect on business turnover is evaluated. Both include the variable "electricity" which is still considered endogenous and thus instrumented.
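As an illustration of how such a recursive system can be taken to data, the hedged sketch below uses the system estimator of the `linearmodels` package; it is not the code behind Table 2.6, and the column names (log_capital, log_business_income, connected, the instruments, and the controls) are hypothetical placeholders for the survey variables. The capital stock enters the income equation directly, mirroring the system above.

```python
# Hedged sketch (not the original code): the two-equation recursive system estimated
# jointly by 3SLS, with the connection dummy instrumented in both equations.
# Column names are hypothetical placeholders for the survey variables.
import pandas as pd
from linearmodels.system import IV3SLS

df = pd.read_csv("home_based_business_2013.csv")
df["inv_dist_x_power"] = df["inv_dist_plant"] * df["plant_power"]  # interaction instrument

equations = {
    # Eq. 1: capital stock as a function of the (instrumented) connection status
    "capital": "log_capital ~ 1 + hh_size + owner + dist_market"
               " + [connected ~ inv_dist_plant + plant_power + inv_dist_x_power]",
    # Eq. 2: business income as a function of capital and the (instrumented) connection
    "income": "log_business_income ~ 1 + log_capital + hh_size + owner + dist_market"
              " + [connected ~ inv_dist_plant + plant_power + inv_dist_x_power]",
}
res = IV3SLS.from_formula(equations, data=df).fit()
print(res.summary)
```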
Electricity access and Employment of home-based business
Job creation is a key part of the economy, and every legal and legitimate means of creating as many jobs as possible counts. Even though the vast majority are small, home-based businesses sometimes need additional employees in order to be operational. In our sample, 11.82% of households have at least one employee in their businesses. Similarly, only 28.12% of households with a home-based business are connected to Tanesco. We seek here to detect a possible causal link between the possession of electricity and the creation of employment. Indeed, we found previously that the accumulation of capital is observed to a greater extent in electrified households. Since capital and labor are two essential factors of production ( [Douglas, 1928], [START_REF] Arrow | Capital-Labor Substitution and Economic Efficiency[END_REF]), which may be complementary or substitutable, we suppose that access to electricity would justify the creation of employment in these households, in terms of both probability and number. To verify this, we constructed a binary variable ($labor_h$) that takes the value of 1 when household h has at least one employee and the value of 0 otherwise. We then model the probability that the household has an employee in its business as follows :
$$prob(labor_h = 1) = \alpha E_h + \beta X_h + \epsilon_h \qquad (2.9)$$
$$E_h^{*} = \mu IV_h + \gamma X_h + \psi_h \qquad (2.10)$$
$$E_h = \begin{cases} 1 & \text{if } E_h^{*} \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
In this "instrumental variable Probit" model, E h is the household connection status, X h the control variables, and ϵ h the error term. We use the same instrument as in the previous cases to correct for the endogeneity of Electricity (E h ) the variable.
Electrified households have a relatively larger business size than others. This can have a positive effect on their likelihood of having at least one employee. However, the fact that they are more capital-intensive can lead them to use fewer employees compared to non-electrified households. We verify these two opposing mechanisms using the "endogenous switching regression model" as follows :
$$Employment_h = \beta X_h + \theta E_h + \epsilon_h \qquad (2.11)$$
$$E_h^{*} = \gamma Z_h + \lambda \epsilon_h + \psi_h \qquad (2.12)$$
$$E_h = \begin{cases} 1 & \text{if } E_h^{*} \ge 0 \\ 0 & \text{otherwise} \end{cases}$$
The variables X h and E h are identical to those described previously. In this model, the Employment h variable takes integer values and represents the number of employees used by the household's business. The estimation of the model has two levels. The first is dedicated to the instrumentation of the endogenous binary variable, here "access to electricity" whose effect on the number of jobs created is evaluated at the last stage. At this point, we used the negative binomial model because, as shown in Appendix 4, the variable "number of jobs" has an over-dispersion that disqualifies the Poisson model.
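The full endogenous switching regression is normally estimated by maximum likelihood; as a rough, hedged illustration of the same idea with a count outcome, the sketch below takes a two-stage residual-inclusion shortcut: a Probit selection equation for the connection status, whose response residual is then added to a negative binomial model for the number of employees. Column names are hypothetical, and this simplification is not equivalent to the model reported in Table 2.7.

```python
# Hedged sketch (a simplification, not the ESR model itself): two-stage residual
# inclusion with a negative binomial outcome for the over-dispersed employee counts.
# Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("home_based_business_2013.csv")
controls = ["hh_size", "owner", "dist_market", "log_capital"]

# Selection equation: Probit for grid connection on controls and instruments
Z = sm.add_constant(df[controls + ["inv_dist_plant", "plant_power"]])
sel = sm.Probit(df["connected"], Z).fit()
df["resid_cf"] = df["connected"] - sel.predict(Z)  # response residual used as a simple control function

# Outcome equation: number of employees, negative binomial with the residual included
X = sm.add_constant(df[controls + ["connected", "resid_cf"]])
nb = sm.NegativeBinomial(df["n_employees"], X).fit()
print(nb.summary())
```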
Analysis Results
All variables used in this section are defined in Appendix 5. The results concerning the consumption analysis are summarized in Table 2.4, which presents four linear regression models. The first is estimated with the ordinary least square (OLS) estimator, which assumes that the treatment is exogenous. Its results reveal that access to electricity has a positive, but not statistically significant, impact on household consumption. When we move to the models estimated with the two-stage least square (2SLS) method, which assumes that the treatment is endogenous, the results change systematically (in size) and the instrumental variables are positive and statistically significant at least at the 10% level in all model specifications. However, our instrument is not totally immune to criticism. The low value of the R-square parameter in the first-step equation leaves some doubt about the power of these instruments. Nevertheless, all these first steps are significant regarding their F-statistics. Our small sample size constrains us to consider only the regional trend since both the number of wards and the number of districts are very high. (Note to Table 2.4 — the dependent variable is the change in household consumption between 2008 and 2012 (∆Cons), while the variable of interest, i.e. the treatment, is the connection to the electricity grid (∆T) between 2008 and 2012; the inverse of the distances is considered in the models; the slope of the regions (slop_reg) is sometimes considered instead of the fixed effects, which reduces the number of parameters to be estimated and could improve the reliability of the results. Source : author's calculation.)
This regional trend is captured by dummies in models 1 and 3. In the last model, it is replaced by the geographical slope, which is a continuous variable. These models reveal that access to electricity has a positive and statistically significant effect on household consumption. The latter doubles or even triples when households get electricity. The high value of these coefficients might not hide anything extraordinary. In addition to all the transmission channels listed above, these high values may be justified by the fact that when the purchasing power of households increases, their preference for superior goods would increase to the detriment of inferior goods. Therefore, the high value of the impact of the treatment does not necessarily mean an increase in the volume consumed but rather an increase in expensive products. Finally, the results also capture economic recovery effects after the 2008 financial crisis [START_REF] Prosper | The current global economic crisis and its impacts in Tanzania[END_REF]. Indeed, some developing countries, particularly Tanzania, adopted policies to minimize the effects of the crisis ( [START_REF] Naudé | The financial crisis of 2008 and the developing countries[END_REF], [Te Velde, 2008]). In Tanzania, these included the use of expansionary monetary and fiscal policies (reducing tax rates and increasing subsidies for both producers and consumers of goods and services) in general and appropriate policy tools such as increasing the money supply and reducing interest rates [START_REF] Prosper | The current global economic crisis and its impacts in Tanzania[END_REF]. These policies could boost household consumption, mainly that of electrified households that own home-based enterprises. For instance, if the population's income increases on average due to these different policies, demand would also increase, leading to an increase in the turnover of all companies including home-based businesses. Finally, there is an increase in household income and ultimately in consumption.
Concerning the analysis of home-based activity, the results in Table 2.5 show that access to electricity increases households' activity income by 75.6% (2SLS model). Similarly, activity-related expenditure and household size have a positive and statistically significant effect on income. As for the capital stock, an increase of 1% leads to an increase of about 25% in activity income according to the 2SLS estimates. The results of the IV Tobit model corroborate our hypothesis about the relationship between capital endowment and access to electricity. They reveal that access to electricity can induce a nearly twofold increase in capital stock.
According to the results of the 3SLS estimator (Table 2.6), the effect of the household's access to electricity on capital ranges from 3 to 5, while an increase of 1% in the capital stock results in an increase of 45% to 60% in home-based businesses' gross income. These results confirm that the effect of the physical capital stock is underestimated in Table 2.5. We describe this effect as the "indirect effect" of electricity on the business' income. The direct effect is identified in model 3 (Table 2.6) where we insert the variable tanesco that represents "access to electricity". Its coefficient is positive but not significant. We therefore deduce that electricity only acts indirectly on the household business's income. This means that a household can only benefit from electricity in its business if it is well endowed with physical capital. This conclusion then refutes the hypothesis that electric lamps would prolong daily working time and thus increase the income of home-based businesses.
Table 2.7 reports the effects of access to electricity on job creation in home-based companies. In model "M1", the variable capital stock is in continuous form, whereas in model "M2" it is in categorical form. The third model does not include this variable. These three models suggest that access to electricity does not necessarily indicate the presence of employees in their business. However, the capital stock has a positive and statistically significant effect on job creation. It increases by 30.2% the probability that the household creates jobs as a result of an infinitesimal increase in its average. However, this effect is local because the relationship between capital stock and the probability of having an employee is non-linear in this model. The model "M2" supports the previous result by reporting a positive and statistically significant effect of capital stock. The correlation between the capital stock and access to electricity could lead to the non-significance of the coefficient of the latter. To overcome this, we excluded the variable capital stock (M3). Model "M3" shows that the coefficient of the variable $E_h$ is still not statistically significant. According to the ESR model (M4), access to electricity reduces the number of employees used by the household, but this effect is not statistically significant. As for the capital stock, its impact continues to be statistically significant. Indeed, capital endowment significantly increases the number of jobs created in home-based businesses. We deduce that having electricity is not enough for households to create jobs; they also need physical capital. Since access to electricity generates capital accumulation, the effect of capital stock on employment can also be described as an indirect effect of electricity on employment.
Robustness
To estimate the treatment effect on household consumption, we referred to an instrumental variable method. We continue with this method by calculating an interval of treatment effects under certain conditions. According to the Wooldridge test, the treatment's exogeneity is often rejected at the 10% level; this may be due to the poor quality of the instrument, which would, in turn, have resulted in the high coefficients that we obtained. Another approach to address the endogeneity issue of the treatment is the use of non-parametric methods. Here, we use a difference-in-difference matching approach.
Bounds of possible treatment effects
The study of causal relationships is becoming increasingly inescapable in the social sciences. This justifies the current use of instrumental variables (IV) in many papers in the social sciences, and particularly in economics, given that it is difficult to obtain completely exogenous variables. However, obtaining a valid IV raises two key challenges. As an example, let us consider one explanatory variable and one instrumental variable :
$$Y = \beta X + \gamma Z + \epsilon \qquad (2.13)$$
$$X = \pi Z + \zeta \qquad (2.14)$$
Y is the endogenous variable to explain, and X is the explanatory variable of interest that should be exogenous but is suspected to be weak. Vector Z is that of the instrumental variables. X is endogenous when it is correlated to ϵ, thus leading the β coefficient to be inconsistent with the "ols" estimator. Instrument Z is therefore called upon to correct this bias. To be valid, Z must be strongly correlated with X (first condition) and must explain Y only by X (second condition that implies γ = 0). That means that the instrument must be exogenous and therefore not correlated to ϵ. However, it is difficult to find an instrument that faithfully respects both conditions. If having an instrument correlated to X is relatively easy, this is not the case for the second condition. Instruments that do not simultaneously satisfy these two assumptions often lead to less satisfactory results. In order to facilitate their use, recent studies ( [Small, 2007], [Nevo, 2012], [Hotz, 1997], [Conley, 2012]) have proposed alternatives that partially validate these types of instruments by allowing, for example, γ coefficients to belong to an interval ([γ min , γ max ]) rather than being equal to 0 [Conley, 2012] or by accepting a non-zero correlation between Z and ϵ [Nevo, 2012] in a given interval so that the β coefficient is estimated as an interval and not as a point estimate. We have to calculate these intervals among which treatment effects can vary for each specification because the instrument may influence household consumption independently of the household electrification status. For instance, the proximity of a household's ward to a power plant can lead to a massive presence of businesses. This would allow household members to gain employment and improve their income and hence their consumption. Table 2.8 presents the possible intervals of treatment effects using the approach of [Conley, 2012].
Table 2.8 - Possible intervals of treatment effects

                                     model 2          model 3           model 4
idistr : γ_min = 0, γ_max =          0.1              0.2               0.1
ID : γ_min = 0, γ_max =              0.3              0.4               0.3
Interval of the treatment effect     [-.688, 1.280]   [-1.173, 1.280]   [-.688, 1.280]

Source : author's calculation
The table presents the possible ranges of the treatment effect if a small correlation is allowed between the dependent variable (consumption change) and the instruments. The upper bound of each of these intervals is determined following a linear regression (OLS) of the dependent variable on the instruments and the other explanatory variables. We deduce from this table that the coefficient of each model is in the corresponding interval, except in the second specification where the effect of the treatment is outside the range. We also note that the three upper bounds are all identical. This means that the effect of the treatment can in no case exceed 1.280.
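For readers who wish to reproduce this kind of exercise, the hedged sketch below implements the union-of-confidence-intervals variant of the approach of [Conley, 2012]: for each admissible value of γ, the assumed direct effect of the instrument is subtracted from the outcome, the model is re-estimated by 2SLS, and the reported bounds are the union of the resulting confidence intervals. It is illustrative only; the column names and the choice of which instrument carries the direct effect are assumptions, not the specification behind Table 2.8.

```python
# Hedged sketch: "plausibly exogenous" bounds via the union of confidence intervals.
# Column names are hypothetical placeholders consistent with the earlier 2SLS sketch.
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

def conley_bounds(df, gamma_max, n_grid=21, alpha=0.05):
    lowers, uppers = [], []
    for gamma in np.linspace(0.0, gamma_max, n_grid):
        adj = df.copy()
        # remove the assumed direct effect of the instrument on consumption
        adj["y_adj"] = adj["d_log_cons"] - gamma * adj["inv_dist_plant"]
        res = IV2SLS.from_formula(
            "y_adj ~ 1 + dist_market + owner + [d_connected ~ inv_dist_plant + inv_dist_road]",
            data=adj,
        ).fit(cov_type="robust")
        ci = res.conf_int(level=1 - alpha).loc["d_connected"]
        lowers.append(ci["lower"])
        uppers.append(ci["upper"])
    return min(lowers), max(uppers)

print(conley_bounds(pd.read_csv("tanzania_panel_first_diff.csv"), gamma_max=0.1))
```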
Difference-in-difference matching with Mahalanobis distance
Difference-in-difference (DiD) is a powerful tool used in research, and especially in economics, to evaluate the effect of a project or development program on its beneficiaries. With data on participant and control observations before and after the program intervention, a DiD estimator can be constructed. Its particularity is that it allows for unobserved characteristics affecting program take-up, provided that these unobserved traits do not vary over time. It also assumes that, in the absence of treatment, the explained variable (i.e. consumption in our case) would follow the same trend in both the treatment and control groups. This assumption may not hold if some characteristics that may influence changes in the variable of interest do not have the same distribution in the two groups ([Abadie, 2005], [Bensch, 2011], [Khandker]). This is the problem that propensity score matching, based on the Mahalanobis distance [Rosenbaum], solves by forcing the similarity of the two groups through carefully selected features. The method follows three stages. The first stage is to estimate the predicted value p_h (propensity score, with a logit or probit) of household h's access to electricity as a function of the household characteristics and its ward attributes. In the second stage, we select all untreated households j such that |p_j − p_h| < c, where c is the caliper constant. From this selected subset, we match the treated and untreated households that are closest in the sense of the Mahalanobis distance D_hj [Rubin, 2000], based on variables Z, where :
D_hj = (Z_h − Z_j)' Σ^(−1) (Z_h − Z_j)   if |p_h − p_j| ≤ c
D_hj = +∞   otherwise
where Z is the set of key covariates and Σ is the variance-covariance matrix of Z. The caliper c is a function of the standard deviation; a low value is necessary when the variance in the treatment group is much larger than that in the control group. [Rosenbaum] suggests a caliper of 0.25 standard deviations of the linear propensity score. The third stage estimates the Average Treatment effect on the Treated (ATT) by matching and weighting the control group by the predicted values or a function of these. This combination is known as difference-in-difference matching.
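The sketch below illustrates the first two stages in Python. It is a schematic example with hypothetical column names (a treatment indicator plus matching covariates), not the exact implementation behind our estimates: the propensity score is estimated by a logit, the caliper is set to 0.25 standard deviations of the linear propensity score as suggested by [Rosenbaum], and each treated household is paired with the untreated household minimizing the Mahalanobis distance within the caliper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def match_mahalanobis(df, covariates, treat_col="treated", caliper_sd=0.25):
    """Pair each treated unit with the nearest untreated unit (Mahalanobis
    distance on `covariates`) among those within the propensity-score caliper."""
    X = sm.add_constant(df[covariates])
    p = sm.Logit(df[treat_col], X).fit(disp=0).predict(X)
    df = df.assign(pscore=p, lin_ps=np.log(p / (1 - p)))
    caliper = caliper_sd * df["lin_ps"].std()

    Z = df[covariates].to_numpy(dtype=float)
    VI = np.linalg.pinv(np.cov(Z, rowvar=False))      # inverse covariance of Z
    treated = df[df[treat_col] == 1]
    control = df[df[treat_col] == 0]

    pairs = []
    for i, row in treated.iterrows():
        eligible = control[np.abs(control["lin_ps"] - row["lin_ps"]) <= caliper]
        if eligible.empty:
            continue                                   # no control within the caliper
        diff = eligible[covariates].to_numpy(dtype=float) - row[covariates].to_numpy(dtype=float)
        d = np.einsum("ij,jk,ik->i", diff, VI, diff)   # squared Mahalanobis distances
        pairs.append((i, eligible.index[np.argmin(d)]))
    return pairs  # list of (treated_index, matched_control_index)
```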
Let us denote by Y_ht(D) the consumption of household h at period t ∈ {0, 1}, and by D the treatment variable, which takes the value 1 if the household is treated and 0 otherwise. We use the following assumptions :
Assumption 1 :
E(Y_1(0) − Y_0(0) | X, G = 1) = E(Y_1(0) − Y_0(0) | X, G = 0)    (2.15)
Assumption 2 :
P(D = 1) > 0 and P(D = 1 | X) < 1    (2.16)
where G is the household group : it takes the value 1 if the household belongs to the treatment group and 0 if it is in the control group. The vector X contains household characteristics (in t = 0), which are used for the matching. Equation 2.15 reflects the common trend hypothesis. It states that, conditional on the vector X, the expenditure gap between the two groups of households would have remained the same between 2008 (t = 0) and 2012 (t = 1) if no household had received the treatment. The second hypothesis requires that treated households are present in the sample and that, conditional on the matching characteristics, the probability of being treated is strictly below one, so that each treated household has comparable untreated counterparts. Once these two hypotheses are validated, the effect of the treatment is evaluated by the following formula :
ATT_DiD-PSM = (1/N) Σ_{i∈G=1} [ (Y_i1 − Y_i0) − Σ_{j∈G=0} ω(i, j) (Y_j1 − Y_j0) ]    (2.17)
Equation 2.17 gives the effect of the treatment on the treated. The weights derive from the characteristics of the households (owner or tenant) and of their dwellings (walls, roof, and floor). These characteristics approximate their income levels. The matching also includes other characteristics to control for geographical accessibility and similarity : the ward slope, the distance separating the ward from the nearest main road, and the distance to the nearest town center. The results of this estimation are displayed in Appendix 7. The consumption variable (Y) is expressed in logarithmic form. Appendix 6 shows that pairing is possible because of the existence of common support. According to the results, the households that received the treatment experienced an increase in their consumption of around 26.33%. Although this is lower than the values found in Section 4, it is significant and confirms that households' access to electricity affects their consumption. Since this is a non-parametric method, the values obtained are conditional and depend somewhat on the chosen pairing variables. We therefore do not have enough evidence to question the robustness of our results. We simply retain here that the treatment has a positive and significant effect on treated households and, in addition, that it is contained in all the intervals found in the previous table.
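Given the matched pairs produced by the previous sketch, Equation 2.17 reduces to a simple average of differences in consumption changes. The fragment below is a minimal illustration with hypothetical column names (cons_2008 and cons_2012 for log-consumption in t = 0 and t = 1), using one-to-one matching so that ω(i, j) equals 1 for the matched control and 0 otherwise.

```python
# ATT of equation (2.17) with one-to-one matched pairs:
# average over matched treated households i of (ΔY_i − ΔY_matched(i))
def att_did(df, pairs, y0="cons_2008", y1="cons_2012"):
    dy = df[y1] - df[y0]                     # consumption change per household
    gaps = [dy.loc[i] - dy.loc[j] for i, j in pairs]
    return sum(gaps) / len(gaps)

# usage sketch: att = att_did(households, pairs)
# with log-consumption, a value around 0.26 corresponds to roughly +26% consumption
```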
Conclusion
This paper considers the effect of household electrification on overall well-being in Tanzania. Our analyses reveal that households can experience a growth in consumption of up to 120% due to their connection to the electricity grid. The specificity of this paper is its focus on one of the potential channels through which electricity affects households' income, namely the home-based business and the related externality, i.e. job creation. We found that electrified households that own a home-based business have a larger capital stock than non-electrified households. This capital endowment explains the difference in their businesses' gross income. Since the possession of electricity does not affect the gross income of home-based businesses without acting through the capital stock, we have treated this effect as indirect rather than direct. The provision of electricity in Tanzania is therefore not enough for households to generate income and employment, because our results showed that they must also have a large stock of physical capital. In conclusion, electricity benefits households with a capital stock, which in turn can positively impact the income of their neighbors by offering them jobs.
Most of these households work in the informal sector, which is the main provider of jobs in Africa (ADB, 2019). However, since the actors of this sector do not have administrative documents, access to bank credit is almost impossible. This explains part of the difficulties experienced by workers in this sector and the precariousness of their working conditions. For this sector to be economically viable and to contribute significantly to national production, it would be necessary not only to offer its actors electricity but also to support them through financial inclusion policies that reach all social strata. Indeed, to take advantage of electricity, households that have a business in their home need to invest in physical capital. This will not be possible if the household faces financial constraints. In that case, they must receive help from microfinance organizations or banks for possible loans to equip their companies.

Note : we have considered the inverse of the distances because in most cases there is a negative correlation between the instrument (distance to the nearest major road and distance to the nearest power plant) and the variable to be explained (access to electricity). Thus, with the intention of having a positive sign, we took their inverse. For convenience, we also took the inverse of all other distances. However, since some distances are zero, we added "1" to all of their values to avoid missing observations. This is a personal choice with no scientific basis.

The above table shows the distribution of household activity sectors according to their electrification status. We deduce that, among non-electrified households, six to eight (to nine) distinct activities are observed per hundred individuals and per hundred households, respectively, while this number doubles for electrified households. Based on these statistics and our econometric results, we can say that the electrification of households gives rise to new and diverse activities.
Mechanism
Source : own elaboration

This diversification generates a widening range of activity sectors in which households and individuals specialize. This leads to lower competition compared to the case of non-electrified households, which have to struggle more to maintain their market share. However, the electrified activities require a relatively high investment in physical capital, a necessary condition to benefit from a high activity income and to create jobs. Figure 7 summarizes these main channels.
Chapitre 3
Advent of Chinese goods into African markets : Impact on firms' growth

1 Introduction
Initially, international trade was based on the theory of comparative advantage and was therefore driven by complementarity. Each country brought to the international market the goods it could produce at the lowest cost. In doing so, the exchanges were of a North-North and North-South nature. In the first category, high-tech goods were traded. For example, a country with a comparative advantage in manufacturing ships could trade with another that had a comparative advantage in manufacturing airplanes. In the second category, less developed Southern countries needed high-tech goods, in which Northern countries held a monopoly, for the construction of infrastructure. The countries of the North, for their part, needed abundant and untapped raw materials from the Southern countries, which also had a comparative advantage in the production of labor-intensive goods. Despite their importance, these exchanges resulted in only limited growth in the total volume traded until the late 1960s (Figure 3.1). In the 1970s, international trade (IT) began to take off, owing to both intensive and extensive expansion. The intensive development was explained by a rapid increase in the total volume traded by the established IT players, while the extensive expansion was the result of the appearance of new products in the international market, due to technical progress. Furthermore, the extensive development of IT can also be attributed to the appearance of new players in the international market, particularly African countries. In the 1960s, several African countries gained their independence. To build their economies, they had to trade with the rest of the world to procure necessary goods that they had neither the capacity nor the knowledge to produce. From the 1990s, South-South trade grew faster than North-North and North-South trade; the total volume traded between Southern countries became an important part of their total trade with the rest of the world. However, the share traded with the Northern countries remained the highest. The advent of China as a global trading partner in the early 2000s marked a turning point in international trade, which, as shown in Figure 3.1, grew markedly from that point onward. China has played and continues to play a remarkable role in opening Africa to trade. To understand the advantages and disadvantages of trade linking Africa and the rest of the world, it is important to distinguish two periods : before and after China's WTO accession. In the first period, Africa had only two main partners, the European Union and the United States. These partners are known for their high-tech products, while Africa was known for its low-tech and labor-intensive products. The clear difference between these trading partners and their products generated little competition in the African market. But after China's accession to the WTO, the situation changed. Indeed, China is able to produce a wide range of any given product, from low to high quality. Thus, two types of competition arise in the African market, as discussed in the literature review below. Sino-African trade has been supported by the Forum on China-Africa Cooperation (FOCAC), which especially favors economic relations between these two partners, to such an extent that Chinese products have experienced a huge rise in the African market. Indeed, Figure 3.3 shows the evolution of the share of imports to the African continent from its main partners.
China's share rose dramatically after 2000, followed by that of India, which is also a key partner of African countries. The former players experienced a gradual decline over the same period, the fall of the European Union being the most drastic. Today, China and India are respectively the first and second largest trading partners of African countries, with a trade share above that of any other single trade partner of the continent. However, the total volume traded between China and Africa is about three times that traded between India and Africa [Nowak, 2016], but the destination markets are approximately the same. In other words, the top ten African countries that trade with China are almost the same as those that trade with India. As shown in Figure 3.4, India is a net importer from Africa, while the opposite is true for China. Indeed, the figure reveals that Indo-African trade is often in deficit in favor of Africa, while China has had a permanent trade surplus with Africa since 2000. We therefore deduce from all the above that, to analyze the impact of the penetration of foreign products likely to compete with local products, it is more relevant to focus on Chinese products, given their strong penetration in the African market. But how ?
The African economy, like any other economy, generally rests on two pillars : services and manufacturing. The latter was, towards the end of the 18th century, the main engine of growth, development, and catch-up, as argued by [Szirmai, 2012]. He added that the manufacturing sector is important for development for several reasons, including that (a) there is an empirical correlation between the degree of industrialization and per capita income in developing countries ; (b) productivity is higher in the manufacturing sector than in the agricultural sector ; (c) manufacturing is more dynamic than other sectors ; and (d) developing countries with higher shares of manufacturing and lower shares of services grow faster than advanced service economies. The literature has shown that the countries which are now developed have gone through industrial development [Hallward-Driemeier], and the manufacturing sector has the capacity to create dozens of jobs at a time. These observations emphasize the relevance of focusing on the manufacturing sector. In Africa, there is a strong disparity between manufacturing sectors from a technological point of view. The manufacturing sector is at a very low level in low-income African countries, so Chinese penetration could be an obstacle to their emergence. However, it could also be an opportunity to invigorate once-dormant industries.
In recent years, many studies ([Autor, 2013] ; [Dix-Carneiro, 2017]) have documented adverse effects of Chinese product penetration in developed countries on the local labor market and on firm performance (sales, employment, productivity, etc.). In developing countries, particularly in Asia and Latin America, the harmful effects of Chinese product penetration have also been the subject of several studies ([Iacovone] ; [Molina] ; [Li, 2019] ; [Blyde, 2020] ; [Pierola, 2020] ; [Rodríguez Chatruc]). Those results are relatively similar to those found in developed countries : China has left the same traces in all these countries, whether developed or not.
In this paper, we analyze the impact of Chinese product penetration in African markets on firms' sales and employment. To do this, we follow the approach of [Dorn] and use a 2SLS estimator. The results show that Chinese product penetration into African markets has a negative impact on firms' growth. However, this impact varies according to firm and country characteristics. Finally, we find that China's exports to the same third countries as African countries significantly reduce African firms' sales but have no significant effect on their size. This impact is larger when the common market is in developed countries.
To our knowledge, this paper is the first to analyze the impact of Chinese product penetration in African markets on a sample of African countries at the firm level. The firm-level analysis allows us to take into account the disparities across countries, industries, and time to better understand the Chinese shock in Africa.
The rest of the paper is divided into five sections. After the literature review and the presentation of our methodology respectively, we describe our dataset and present our main variables. In the fourth section, we display the estimation results before checking their heterogeneity in the last main section.
2 Literature review

2.1 Possible effects of the Chinese shock
The Chinese shock has been a common subject of the economic literature in recent years. The manifestation of this shock has several aspects, namely (a) the growth of Chinese foreign direct investment (FDI) flows ; (b) the increase in the total volume of Chinese debt and aid allocated to developing countries ; and (c) the sharp increase in the total volume of trade between China and the rest of the world. According to the literature, these three manifestations of the Chinese shock are also observed on the African continent. In this paper, we focused on the trade side.
It appears from the literature that China mainly exports textiles, apparel, and machinery to Africa. These goods can be used by both households and businesses : Chinese products serve as final consumption when they are consumed by households, and as intermediate consumption or investment goods when they are used by companies. In the rest of our study, we will refer to the goods consumed directly by consumers as outputs and to those used by companies as inputs. The distinction between these two categories of goods allows us to explore all the mechanisms likely to explain the impacts of the Chinese shock on the manufacturing sector in Africa. A third dimension that also needs to be clarified is the common third market. The latter represents any market in which African and Chinese goods are simultaneously observed. Such a market may or may not be located on the African continent. Figure 3.5 illustrates the possible impacts of Chinese goods on the African market. These impacts are observed here on business characteristics such as sales, employment, and productivity, which are fundamental for assessing socio-economic well-being and industrial emergence in a given economy.
Chinese penetration downstream of manufacturing plant : Domestic market
The penetration of (Chinese) outputs, also called here Downstream products (DP), can exert downward pressure on local production. Indeed, Africa is known for its poverty rate and subsequent low purchasing power of its population. As such, competitive Chinese products are welcome regardless of their quality. The consumer's indifference to the quality of Chinese products is the main cause of the crowding out of local products. Thus, the drop in demand for local products leads to a drop in local supply, which is followed by a drop in productivity and manufacturing employment. The decline in productivity is obvious because the reduction in supply is not systematically accompanied by the reduction of inflexible inputs in the short term. As sales decline, company revenues will also decline, resulting in either lower wages or the dismissal of some employees. Chinese penetration can also be useful for African companies in the sense that it can encourage local companies, seeking to retain their market share, to invest in research and development in order to make innovations necessary to improve their productivity. This determines the leeway available to these companies in setting their selling price. The higher the productivity, the more they can reduce their price and thus maintain or increase their market share and sales. In the literature, several papers have focused on this problem, both in developed and developing countries ; China does not go unnoticed in any market, be it European, American, Asian, or African. The research conducted on Chinese trade has led to similar conclusions, especially in developed countries. In the U.S. market, Chinese products have severely crowded out local ones. The corollary of this phenomenon is a drop in sales, an increase in unemployment, and a reduction in spending on research and development by companies ( [START_REF] Dorn | [END_REF], [Autor, 2013], [Acemoglu, 2016b]). In Europe, the situation is similar. For instance, in France, [START_REF] Kigundu Macharia | The impact of Chinese import competition on the local structure of employment and wages : evidence from France[END_REF] analyzed the impact of Chinese competition on employment and wages. He found that these two variables were negatively affected in both the manufacturing and nonmanufacturing sectors. In the manufacturing sector, the local employment structure has been polarized. The distribution of wages is uniformly affected in the manufacturing sector, while the "non-tradable goods" sector experiences wage polarization, i.e. increased inequality at the top and decreased inequality at the bottom. Although overall wage inequality is on average unaffected, it has increased in response to trade shocks in areas where minimum wages are only weakly binding. For developing countries, the results are mixed. In Brazil, a study by [START_REF] Rodrıguez Chatruc | Productivity, Innovation, and Employment : Lessons from the Impact of Chinese Competition on Manufacturing in Brazil[END_REF]] revealed a pro-competitive effect of Chinese competition on the productivity of factories, while employment appears to be negatively affected. While the impact appears to be homogeneous in Brazil and in Colombia, as shown by [START_REF] Molina | The China Effect on Colombia's Manufacturing Labor Market[END_REF], this is not always the case elsewhere. 
In many other countries, Chinese penetration has had an impact that is not only heterogeneous across industries but also varies within the same industry according to the size and capital intensity of each firm. Thus, in countries where the overall effect is negative, it is not uncommon to find positive effects in some places. In general, large and capital-intensive firms tend to be more resilient to China's competition. This is the case, for instance, in El Salvador, where a study conducted by [Li, 2019] showed that Chinese penetration has negatively affected employment, productivity, and income of manufacturing firms overall, and especially those with fewer than 50 employees and low capital intensity. Relatively similar conclusions have been reported for Mexico [Blyde, 2020]. In Peru, firms reacted to increased competition from Chinese manufacturing goods mainly by altering their factor choices. For instance, smaller firms seem to have opted for reducing their demand for labor while larger firms seem to have adapted by deepening their capital requirements [START_REF] Mercado | The Impact of Import Competition from China on Firm Performance in the Peruvian Manufacturing Sector[END_REF]. Moreover, [START_REF] Medina | Import competition, quality upgrading and exporting : Evidence from the peruvian apparel industry[END_REF] had already shown that in the textile sector, the decline in profits for low-quality product segments is prompting some firms to raise the quality of their products to reallocate their production factors. This has resulted in an increase in the demand for employment in these companies.
Africa is no exception to these varied observations. Whether significant or insignificant, positive or negative, China has left its mark on the African manufacturing sector in a variety of ways across industries and countries. [START_REF] Edwards | [END_REF] analyzed the impact of Chinese penetration on the manufacturing sector in South Africa. He found that Chinese competition has led to a significant decline in sales and employment in manufacturing firms. According to the author, this is because competition is highly concentrated in labor-intensive firms, but also because this stiff competition has led to an increase in labor productivity. Africa, known for its low level of technology, could gain efficiency (improving Total Factor Productivity for instance) by duplicating the Chinese techniques hidden in the Chinese products that invade its market. This, however, is not possible if the workforce is not sufficiently trained for this purpose. For example, out of five African countries (Ghana, Kenya, Nigeria, South Africa, and Tanzania), [Elu, 2010] assess the effect of Chinese FDI and trade opening with China on firms' productivity. They find that neither has any significant effect. The result is the same for these countries' economic growth despite a large trade opening with China. However, this conclusion should be viewed with caution because the number of countries is small and the empirical techniques used have some limits. In the case of Kenya [Onjala, 2008] and Ghana [START_REF] Tsikata | [END_REF], Chinese textiles have flooded the local market resulting in a loss of competitiveness for local industries and massive job losses.
Chinese penetration downstream of plant : Foreign market
Exporting companies can suffer a double setback in the face of the Chinese shock. In addition to domestic threats, they are also likely to face China in a third market. This common third market can be found in Africa, thus reducing intra-African trade, or outside the continent especially the developed countries whose market constitutes a popular destination for African products [START_REF] Kowalski | South-South trade in goods[END_REF]. In the third market, competitive Chinese products, crowd out not only local products but also products from elsewhere, thus reducing the exports of several other countries. But as in the domestic market, these exporting companies can respond by adopting techniques necessary to improve their competitiveness both in the domestic market and abroad. They can also benefit from the support of their governments through policies that can facilitate the flow of their products to foreign markets. As a result, they would be less affected by any setbacks of the Chinese shock on the foreign market. We can cite, for instance, the cases of El Salvador [Li, 2019] and Peru [START_REF] Mercado | The Impact of Import Competition from China on Firm Performance in the Peruvian Manufacturing Sector[END_REF] where companies exporting to the same market as China have been differently affected. In El Salvador, competition in the common market led to an increase in the number of employees in charge of production in large firms, a decrease in productivity in medium-sized firms, and a reduction and increase of total income in low-and high-productivity firms respectively. In contrast to the previous case, Peruvian firms that shared an export market with China experienced a significant decline in hiring growth.
Africa, like many developing regions, remains under pressure from China in external markets, despite the preferential treatment given to its exports. In the American and European Union markets, China has severely crowded out low-cost products from Africa. Using disaggregated data, the empirical results of [Giovannetti] highlight these crowding-out effects at the sector, product, region, and market levels. Likewise, intra-African trade is currently less developed and seems to be shrinking further in the face of exports from China. The conflict arises when Chinese and African exports have the same destination in Africa. This phenomenon is most evident in the textile sector, where [Kaplinsky, 2006] showed that, between 2004 and 2005, Chinese exports to Africa increased by at least 58% (and up to 112%) while those within Africa fell by at least 3% (and up to 45%). The manufacturing sector is also gradually fracturing, especially in countries where it is relatively developed. This is the case in Cameroon which, according to [Khan, 2008], experienced a drop of about 42% in its exports to other countries on the continent. In East Africa, [Onjala, 2008] shows that this loss is around 20% for Kenya. In the case of Mauritius, [Dey] estimated that this collision of Chinese and African exports to Africa led to a reduction in production of around 12.5%, the closure of 112 factories, and the loss of 25,000 jobs.
Chinese penetration upstream of companies
It is important to stress that the effects of Chinese penetration on African companies do not appear only downstream but also upstream. Indeed, China can give companies a boost by providing them with less expensive inputs, defined here as upstream products (UP). In doing so, it can help them reduce their production costs and thus benefit from economies of scale (i.e. a price effect of inputs). However, this advantage is not systematic and could hide a disadvantage. At the beginning of China's rise, Chinese products were considered to be of low quality. If this were the case, inputs from China would negatively affect the productivity of companies and make them less competitive (a quality effect of inputs). The unsuitability or inadequacy of inputs (intermediate goods and investment goods) to production conditions in Africa can also be decisive for the efficiency of these inputs. African production facilities can be obsolete and potentially hazardous, making them more susceptible to the negative effects of poor-quality materials. Alternatively, one could hypothesize that companies lack the expertise to properly handle Chinese products. These different reasons can justify a probable negative impact of Chinese inputs on productivity and consequently on company sales. Very few studies have focused on the effects of Chinese inputs on the manufacturing sector of importing countries. Among them, some have found that Chinese inputs have no significant effect on firm sales ([Li, 2019], [Blyde, 2020]) while others have found negative effects ([Iacovone], [Mion, 2013]). According to the literature, far from being beneficial to firms, Chinese inputs aggravate the threat of Chinese outputs. Finally, China has also left its mark on the informal sector. Thanks to cheaper and more accessible inputs, companies in this sector have transformed themselves by expanding and strengthening their capital stock. The corollary is not only their transition to the formal sector but also an increase in the demand for labor and an improvement in productivity. For example, in Peru, [Pierola, 2020] revealed that as Chinese penetration increases, the demand for unskilled labor increases in the informal sector but real wages for these workers decline.
China shock effects through domestic production network
So far, we have addressed the direct aspect of the impact of Chinese penetration on the domestic market. But we still have to discuss the indirect aspect, namely the impact on the local production network, which stems from the diffusion of the direct effect along the production chain. When firms at the last stage of the production chain receive a positive or negative shock, it can trickle upstream to the input suppliers. Input suppliers are doubly impacted when they are also exposed to a direct shock. For example, this problem may arise when China exports both finished and semi-finished products that are used to manufacture finished goods to the same country. Figure 3.6 illustrates this phenomenon. Suppose this is the structure of the manufacturing sector in a given economy. The circles represent firms at different levels of the production chain. The arrows indicate that the products of the preceding firms serve as inputs for the following firms. Let us assume in our example a multi-product enterprise that serves two separate enterprises and then produces a variety of products. Thus, firms at the beginning of a production chain can be called, for instance, extractive firms in the sense that they directly handle raw materials in their raw state to obtain one or more products of low added value. The enterprises within the chain successively add value to the products whose final transformation will be carried out by the enterprises at the end of the production chain. The products thus obtained at the last stage are called finished products and are destined for final consumption by households. Even if the literature shows that China exports more goods for final consumption, its presence is nonetheless significant at the intermediate stages. When China gains market shares with goods ready for final consumption that are, at the same time, locally produced, a rivalry arises in the local market. As mentioned above, this competition can lead to a decrease or an increase in local production. Everything depends on the resilience and response capacity of the firms at the end of the production chain (WTO, 2022). To simplify the analysis, let us assume that China does not intervene in the intermediate stages. In this case, two extreme situations can arise and the consequences are trivial. Indeed, when all the firms in the last stage are negatively or positively affected, the intermediate firms are also respectively affected. But when there is heterogeneity in the impact, the extent of diffusion depends on the architecture of the manufacturing sector. For example, suppose that the sales of firms 9 and 10 fall while those of firm 11 rise. In this case, firms 9 and 10 would also reduce their demand for inputs, which would reduce the sales of firms 8, 5, and 1. Firm 11, on the other hand, would increase its demand for inputs, which would increase the sales of firms 7 and 4. The variation in inputs also extends to the labor factor and may also impact the capital factor in the medium or long term, which may fall or rise depending on the case. These phenomena have been highlighted by [START_REF] Atalay | How important are sectoral shocks ?[END_REF] and [Di Giovanni, 2014]. [START_REF] Vasco | Aggregate fluctuations and the network structure of intersectoral trade[END_REF] and [Acemoglu, 2012] find that idiosyncratic shocks can propagate through the production network and produce macroeconomic effects. 
Thus, the China shock does not only affect the directly exposed firms but spreads throughout the economy through their suppliers ( [Acemoglu, 2016a], [Acemoglu, 2016b], [Pierce, 2016]). Returning to our illustration, the impact of any shock on firms 2, 3, and 6 is undetermined. For example, firm 6 supplies two firms, one of which is negatively impacted and the other positively impacted, with distinct inputs. In this case, the literature postulates that firm 6 would specialize in producing inputs for firm 11. However, other firms of the same type as 6 may specialize in supplying inputs to firms of type 10. This mechanism was highlighted by [START_REF] Goya | The effect of Chinese competition on what domestic suppliers produce[END_REF] who studied the effect of Chinese competition on the domestic production network in Chile. His results show that when the demand for a good falls in the domestic market, this leads to a fall in the number of domestic firms producing inputs for that good. In particular, multi-product firms tend to reduce the number of varieties produced, as a result of lower demand. On the other hand, we also see these same phenomena when the shock occurs on the external market, i.e. on exports. In the case of China, this is competition in common foreign markets, which may drive up or down the exports of firms at the bottom of the production chain. For example, [Ito, 2014] show that Japanese firms employ more workers when their customers' overseas sales increase. Similarly, [START_REF] Huneeus | Production network dynamics and the propagation of shocks[END_REF] find that the export demand shock induced by the Great Recession propagated backward through connected firms in the production network.
In sum, to understand the effects of China's penetration on a country's manufacturing sector, it is important to analyze both direct and indirect effects, especially with respect to factors of production when they are mobile within or across sectors. None of the studies on Africa has addressed the problem in all its dimensions, i.e., by looking at the impacts of China-Africa trade at the input and output levels separately. In the next section, we present models and methods that can help fill this gap.
3 Data, empirical strategy and descriptive analysis
Data
We used three main sources of data. The firm-level data are taken from the "Enterprise Surveys" data (ESD) of the World Bank. They cover 28 African countries for different periods between 2001 and 2018 (see Appendixes 1, 2, and 3). For most countries, the data cover 2 or 3 years. Except for Cape Verde, which joined after 2001 (the date of China's entry to the WTO), all other countries had joined the multilateral trade institution by 1995 at the latest. Even though Ethiopia still has observer status, it has a strong trade relationship with China [Gebreegziabher]. Several pieces of information are available in this first dataset. The ESD enables us to distinguish two categories of firms : importing firms that import at least 10% of their inputs (see Appendix 2) and non-importing firms for which at least 90% of inputs come from domestic markets (see Appendix 3). Likewise, firms that export at least 10% of their production are classified as exporters. A review of the data reveals that firms that use imported goods have a high level of partial foreign ownership. A stronger presence of foreign owners may facilitate the import of foreign inputs. In addition, the average number of employees in African importing firms is statistically higher than that in non-importing firms. This reveals that large companies in Africa tend to use more foreign products.
National input-output data available at the industry level is taken from EORA Database. While there is a convergence across the major Input-Output (IO) databases (EORA, EXIO-BASE, WIOD, GTAP, and OECD), Eora has the longest annual time series from 1970 to 2015 and also covers the largest number of countries. All factories are classified into eight industries 6 .
Bilateral trade data from BACI 7 of CEPII (Centre d'Études Prospectives et d'Informations Internationales) report annual import values and quantities at the 6-digit product level for each country. This information is available for 47 African countries 8 until at least 2011. To conform with the EORA design, all imported goods have been grouped into seven industries. We used the concordance file between H0 codes and ISIC Rev.3 codes downloaded from the World Integrated Trade Solution (WITS) to convert H0 codes into ISIC Rev.3 codes. After that, we followed the [Lenzen, 2013] approach to match ISIC Rev.3 codes to the EORA industry classification.

6. Food and beverages ; textiles and wearing apparel ; wood and paper ; petroleum, chemical, and non-metallic mineral products ; metal products ; electrical and machinery ; transport equipment ; and other manufacturing.
7. International Trade Database at the Product-Level.
8. The BACI Database does not provide data on Botswana, Comoros, Eswatini, Lesotho, Somalia, Namibia, and Sudan.
Empirical strategy
In this paper, we are seeking to measure the impact of the penetration of Chinese products on African firms. This analysis will focus on three dimensions : output, input, and third market level. These dimensions allow us to capture the Chinese effect on both upstream and downstream African firms.
In the literature, many firm characteristics are used to analyze growth. These typically include assets, employment, market share, physical output, profit, and sales ([Delmar, 1997], [Ardishvili, 1998]). Based on the available data, we focused on two frequently used indicators of growth [Delmar, 1997], namely sales and employment growth. These two complementary indicators provide an overall measure of firms' growth ([Delmar], [Coad, 2009]). Indeed, the sales growth indicator provides information on short- and long-run growth. But, unlike employment growth, it is correlated with market conditions, such as inflation and exchange rate variation, especially for exporting firms. Conversely, employment growth is not suitable for short-term analysis because the dismissal and recruitment of a worker tend to be a long-term decision process. [Delmar] shows that these two indicators may evolve in opposite directions. However, they are often correlated with other growth indicators. For example, when a firm increases its assets, its sales increase and the number of employees may also increase.
To assess the effect of Chinese penetration on African firms' growth, we followed [Dorn], who evaluated the effect of Chinese competition on innovation in the US market, as measured by patent production. Our general model can thus be written as the following equation :
∆Y_ikjτ = α_τ + β ∆CHN_Pen_kjτ + γ_1 ∆X_kjτ + γ_2 Z_jt0 + λ_k + µ_j + ρ_t + ∆ϵ_ikjτ    (3.1)
where
∆Y_ikjτ = 100 (Y_ikjt1 − Y_ikjt0) / [0.5 (Y_ikjt1 + Y_ikjt0)]    (3.2)
∆Y ikjτ represents the growth of sales or employment of firm i for industry k in country j over period τ = (t 0 , t 1 ). Like the growth indicator, the measurement of growth has been the subject of controversy in the literature. There is still no consensus on this subject [Wong, 2005]. For the same growth indicator, different measurement methods can lead to different conclusions.
Researchers have distinguished several techniques to measure firm growth quantitatively and qualitatively ( [Coad, 2009] ; [McKelvie, 2010]) : organic growth, creation of new firms, concentration of existing firms (mergers, acquisitions), and growth through innovation and diffusion of new products and processes [START_REF] Delmar | Measuring Growth : Methodological Considerations and Empirical Results. W : Entrepreneurship and SME Research : On its way to the Next Millennium[END_REF]. Quantitative analysis tends to be easier to conduct because it is less data-intensive than qualitative analysis, which requires fairly detailed data.
Given that the data used in this analysis are less detailed, firm growth is measured as organic growth. There are two types of organic growth : absolute growth, which consists of calculating a simple difference in the growth indicator (here sales and employment), and relative growth, which examines the relative change in the indicator. The usual formula for assessing relative growth is to divide absolute growth by the initial (i.e., t_0) value of the growth indicator. In doing so, the former is biased in favor of large firms while the latter is biased in favor of small firms. Equation 3.2 addresses these problems by dividing the change in sales (respectively, employment) between t_0 and t_1 by the firm's simple average of sales (respectively employment) over the same period. This formula is preferred to the usual growth rate formula because it limits the influence of outliers ([Davis, 1992], [Aterido], [Haltiwanger], [Léon, 2020a], [Léon, 2020b]). One-off shocks can create a sharp rise or fall in output at the very beginning or end of the period ; in such cases, the usual formula could show a spurious boom or bust in growth.
For example, suppose we want to calculate the growth of firms between the year 2007, marked by a financial crisis, and the more or less stable year 2012. With the usual growth rate formula, we would obtain a very high growth rate that would not reflect the true performance of these firms. Similarly, outliers can also come from small and young firms [Fitzsimmons, 2005] even if Gibrat's law (much criticized in the literature) stipulates that a firm's growth is independent of its size [Gibrat, 1931]. Indeed, since small firms have low output and few employees, they can have higher growth rates ( [Scherer, 1990] ; [START_REF] Khalid Almsafir | The validity of Gibrat's law : Evidence from the service sector in Jordan[END_REF]).
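To make the difference between the two formulas concrete, the short sketch below (with hypothetical column names sales_t0 and sales_t1) computes both the usual growth rate and the symmetric measure of Equation 3.2; the latter is bounded between −200 and +200 and therefore dampens the extreme values produced by very small initial levels.

```python
import pandas as pd

firms = pd.DataFrame({"sales_t0": [100.0, 5.0, 80.0],
                      "sales_t1": [120.0, 60.0, 0.0]})

# usual growth rate: unbounded, explodes when sales_t0 is small
firms["growth_usual"] = 100 * (firms["sales_t1"] - firms["sales_t0"]) / firms["sales_t0"]

# symmetric growth of equation (3.2): bounded in [-200, 200]
firms["growth_sym"] = (100 * (firms["sales_t1"] - firms["sales_t0"])
                       / (0.5 * (firms["sales_t1"] + firms["sales_t0"])))
print(firms)
```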
The main explanatory variable is ∆CHN_Pen_kjτ, the absolute growth of Chinese (output or input) product penetration in industry k in country j over τ. It is measured at the country level and by industry. Since China is not the only country that exports to Africa, other trading partners' products may compete with both domestic firms and Chinese products. If the penetration of these potential competitors were not included in the model, their effects would be captured by the coefficient β. We have retained as potential competitors India and the countries belonging to the same integration zone 9 as the country j considered. Thus, the variable ∆X_kjτ incorporates the growth of penetration of these competitors. We also added other control variables (Z_jt0), measured in the initial period, that are likely to influence firms' response to Chinese penetration, such as the electricity access rate.

9. Integration zones include ECOWAS, SADC and COMESA, among others.
Since the analysis covers several countries, industry-specific and country-specific dummy variables are included to take into account unobservable and time-varying heterogeneity between industries (λ_k) and countries (µ_j). To account for circumstantial effects and global heterogeneity over time, we also included a time dummy. This model could suffer from one limitation : when Chinese penetration changes slowly over time, the associated coefficient may have large standard errors. Fortunately, there is a large gap between observation years for most countries, so our variables are likely to display substantial variation.
China's shock effect on downstream African firms : Output level analysis
All products locally produced are considered output. While some of these products are directly used by households, others can be used as input by others enterprises. The analysis of China's shock on downstream African firms does not require knowing "which product from which industry is used as input of a given industry". It only requires information on which product is produced by a given enterprise and what is the China penetration of this product. The literature suggests that Chinese product penetration can significantly reduce African firms market share (β<0) but can also drive up the firm's sales (β>0) if they respond to the competition by investing to become more efficient and resilient. To investigate the impact of Chinese penetration, we modified Equation 1 as follows :
∆Y_ikjτ = α_τ + β ∆CHN_Pen_out_kjτ + γ_1 ∆X_kjτ + γ_2 Z_jt0 + λ_k + µ_j + ρ_t + ∆ϵ_ikjτ    (3.3)
While most variables remain the same, the variable ∆CHN_Pen_kjτ is replaced with ∆CHN_Pen_out_kjτ, which measures the penetration of downstream products as follows ([Dorn]) :
∆CHN_Pen_out_kjτ = (M^chn_kjt1 − M^chn_kjt0) / (Prod_kj2000 − X_kj2000 + M_kj2000)
where M^chn_kjt is the import from China in industry k, in country j, at time t. The denominator measures economic absorption in the year 2000. It is composed of the share of industry k production used by the national economy, which is the difference between production (Prod_kj2000) and exports (X_kj2000) in industry k. The last term in the denominator represents imports (M_kj2000) from the rest of the world of industry k products. The year 2000 is chosen to reduce the endogeneity risk of the variable CHN_Pen_out_kjt. The underlying assumption is that a country's production in any industry could not have been disturbed by Chinese penetration, given that China's trade opening began at the end of 2001.
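As an illustration, the fragment below sketches how this penetration measure could be assembled from BACI-style bilateral imports and EORA-style industry accounts. The data frames and column names (imports_chn, prod_2000, exports_2000, imports_2000) are hypothetical; the real construction additionally requires the HS-to-ISIC-to-EORA concordance described above.

```python
import pandas as pd

def chn_pen_out(imports_chn, absorption_2000, t0, t1):
    """Growth of Chinese import penetration between t0 and t1 per (country, industry).

    imports_chn:      columns country, industry, year, imports_chn (imports from China)
    absorption_2000:  columns country, industry, prod_2000, exports_2000, imports_2000
    """
    m = imports_chn.pivot_table(index=["country", "industry"],
                                columns="year", values="imports_chn", aggfunc="sum")
    out = absorption_2000.set_index(["country", "industry"]).copy()
    # absorption in 2000 = production - exports + imports from the rest of the world
    out["absorption_2000"] = out["prod_2000"] - out["exports_2000"] + out["imports_2000"]
    out["d_chn_pen_out"] = (m[t1] - m[t0]) / out["absorption_2000"]
    return out["d_chn_pen_out"].reset_index()
```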
Endogeneity stems from correlation between the Chinese penetration variable and the error term ∆ϵ_ikjτ. Indeed, positive or negative shocks on domestic demand or on the supply side of firms might cause endogeneity. A positive demand shock would abruptly drive demand upwards, and a negative one downwards. Thus, a sudden increase in demand from local consumers will draw on both local production and imports. This is also the case for a negative shock on domestic demand : it would lead to a drop in local supply and a drop in imports. As for the supply shock, there is a problem of reverse causality. For example, a failure of the electricity network leading to untimely load shedding could reduce domestic firms' productivity and ultimately their production supply [Cole, 2018], which would reduce their competitiveness. This exogenous phenomenon would favor the penetration growth of Chinese products, which are very competitive. In the same way, a positive shock on the supply side, such as a tax exemption on the import of physical capital (e.g. equipment or machinery), would encourage business investment and therefore generate an increase in the productivity of local companies. This increase in productivity would strengthen the competitiveness of local products against imports, reducing the growth of Chinese penetration.
Finally, many African countries (mainly in the Sahel) face insecurity and terrorism, which can affect both the national business climate and trade with the rest of the world. Unfortunately, a measure of national security covering all African countries and all years of the analysis is not available. The absence of this variable in the model could be a source of endogeneity and thus a further threat to the consistency of the β coefficient. For these various reasons, the variable CHN_Pen_out_kjt is likely to be endogenous. In order to correct this endogeneity problem, and in accordance with our data structure, we opted for the instrumentation method.
As a reminder, an instrument must respect two principal conditions to be valid. First, there must be a strong causal relationship between the endogenous variable and the instrument, conditional on the control variables of the main model. The second condition, the "exclusion hypothesis", stipulates that the instrument must not have a causal relationship with the dependent variable (here, firm sales and employment) except through the endogenous variable. To achieve this, we follow [Autor, 2013]. The authors analyze the effect of rising Chinese import competition between 1990 and 2007 on US local labor markets and use other high-income countries' imports from China to instrument US imports from China. In this study, we instrument each African country's imports from China using other low-income countries' imports from China.
According to the World Bank classification, we collected all (non-African) countries belonging to the low- and intermediate-income categories. In order for the instrument to respect the exclusion hypothesis, we then kept only those countries that have no economic link or, at most, a very weak economic link with African countries. After removing the countries for which all trade data are not available in BACI, only ten countries remained : Cambodia, El Salvador, Honduras, Lao PDR, Moldova, Mongolia, Myanmar, Nepal, Nicaragua, and Papua New Guinea. Since not all African countries are economically similar, it would be incorrect to assume that China penetrates their domestic markets in a similar way and to assign the same values of the instrument to all of them. While some countries have experienced strong growth in Chinese penetration, others have shown a recession. To meet this requirement, we associate each African country in the sample with a country or set of countries belonging to the ten (non-African) countries selected. Most variables used for this association are taken from the WDI database. The sea distance between China and each country is taken from the CERDI-SeaDistance Database 11 and calculated by [Bertoli]. Applying principal component analysis (PCA), we went from twelve variables 12 to four variables. The latter represent the first four factor axes obtained after PCA. These account for about 80% of the total information contained in the initial 12 variables. Finally, we performed a hierarchical bottom-up classification based on these four variables. The analysis of the dendrogram (Appendix 4) shows a division into three classes. Each of these classes includes corresponding African and non-African countries. Thus, all African countries in the same group will have the same values for the instrument. The instrument variable is defined as follows :
∆CHN_Pen_out^Gh_kjτ = (1/n_h) Σ_j′ [ (M^chn_kj′t1 − M^chn_kj′t0) / (Prod_kj′2000 − X_kj′2000 + M_kj′2000) ] × 1{j′ ∈ G_h}
where G_h (h ∈ {1, 2, 3}) is the group to which the non-African country j′ belongs ; it includes a total of n_h non-African countries.
The instrument respects the exclusion criterion because it is not directly linked to the dependent variable. As there are no tangible economic relations between African countries and those that contributed to the construction of our instrument, Chinese penetration in the latter will have no significant impact on the performance of African companies. However, Chinese penetration in these two categories of countries is highly correlated because both are driven by the same cause, i.e., productivity gains that are internal to China and do not depend on the global economy.

11. https://zenodo.org/record/46822.X9PPithKg2y
12. The twelve variables are : access to electricity, rural (% of rural population) ; access to electricity, urban (% of urban population) ; GDP per capita (constant 2010 US$) ; final consumption expenditure (% of GDP) ; imports of goods and services (% of GDP) ; oil rents (% of GDP) ; total natural resources rents (% of GDP) ; manufacturing, value added (% of GDP) ; GDP per capita, PPP (constant 2017 international $) ; exports of goods and services (% of GDP) ; urban population (% of the total population) ; and the sea distance between China and each country.
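The sketch below illustrates, under simplifying assumptions, how the grouping, the instrument, and the estimation of Equation 3.3 could be chained together. The number of principal components (four) and clusters (three) follows the description above, but the input data frames (country_vars, pen, firm_growth), their columns, and the list of reference countries are hypothetical; the second stage is a manual 2SLS whose standard errors are not corrected for the generated regressor, so a dedicated IV routine would be used in practice.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

def build_iv_and_2sls(country_vars, feature_cols, reference_countries, pen, firm_growth):
    """country_vars: one row per country with the 12 indicators (feature_cols);
    pen: (country, industry, period) penetration growth 'd_chn_pen_out';
    firm_growth: firm-level rows with country, industry, period, sales_growth."""
    # 1) four principal components, then three groups (Ward bottom-up classification)
    scores = PCA(n_components=4).fit_transform(
        StandardScaler().fit_transform(country_vars[feature_cols]))
    country_vars = country_vars.assign(
        group=fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust"))

    # 2) instrument: mean penetration growth of the reference countries in the same group
    iv = (pen[pen["country"].isin(reference_countries)]
          .merge(country_vars[["country", "group"]], on="country")
          .groupby(["group", "industry", "period"], as_index=False)["d_chn_pen_out"]
          .mean().rename(columns={"d_chn_pen_out": "iv_pen"}))
    data = (firm_growth.merge(country_vars[["country", "group"]], on="country")
                       .merge(pen, on=["country", "industry", "period"])
                       .merge(iv, on=["group", "industry", "period"]))

    # 3) manual 2SLS with industry/country/period dummies (SEs not corrected here)
    fe = pd.get_dummies(data[["industry", "country", "period"]].astype(str),
                        drop_first=True, dtype=float)
    exog = sm.add_constant(fe)
    first = sm.OLS(data["d_chn_pen_out"], exog.join(data["iv_pen"])).fit()
    second = sm.OLS(data["sales_growth"],
                    exog.join(first.fittedvalues.rename("pen_hat"))).fit()
    return second.params["pen_hat"]   # estimate of beta in equation (3.3)
```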
China's shock effect on upstream African firm : Input level analysis
Just as trade openness benefits both firms and consumers by giving them access to cheaper inputs and final products, respectively, China's penetration can do the same. China's import penetration would allow African companies to acquire cheaper intermediate inputs and thus reduce their production costs. The literature shows that, following trade liberalization, companies that use imported inputs tend to be slightly more resilient to foreign competition ([Altomonte, 2014], [Amiti, 2007]). This resilience is due to the fact that not only are input prices affordable, but quality also improves under the effect of competition. The Chinese case could be different because, at the start of its trade opening, its technological level was relatively low compared to that of developed countries. The low price of its products was mainly explained by its relatively high labor productivity. Like [Blyde, 2020], we follow [Mion, 2013] and [Acemoglu, 2016b]. The former constructs plant-level measures of offshored inputs from China, while the latter constructs upstream industry measures of Chinese import penetration using input-output linkages. Due to the absence of detailed information on the origin of the inputs imported by each company, we computed the offshored inputs from China at the industry level, as follows :
OFF_kjt = Import^chn_kjt / Production_kjt    (3.4)

where Import^chn_kjt is the import from China to country j in industry k at time t, while Production_kjt is the total production of country j in sector k at time t. To address a potential endogeneity concern related to this measure, [Mion, 2013] used the exchange rate as an instrument, based on the idea that movements in the exchange rate are mainly driven by financial and macroeconomic determinants. This exchange rate is weighted by the share of Chinese inputs in the total inputs used by the firm. The movement of the exchange rate should therefore have a large impact on firms using only inputs from China. Given that this paper considers several countries, this instrument cannot be used because African countries do not all use the same currency : the orders of magnitude of the different exchange rates would give no meaningful information in the aggregate. The other alternative, proposed by [Acemoglu, 2016b], measures input penetration at the industry level as follows :
$$ CHN\_IP_{kjt} = \sum_{r} \frac{U_{rkt}}{\sum_{r} U_{rkt}} \, CHN\_Pen\_out_{rjt} \qquad (3.5) $$
where $U_{rkt}$ is the value of inputs from "upstream" industry r used by industry k at time t, and $CHN\_Pen\_out_{rjt}$ is the import penetration from China in industry r at time t. This measure of input penetration consists of a weighted average of the import penetrations in all the industries that provide inputs to industry k, with weights based on input-output linkages. However, in the EORA (input-output) database, the imported inputs are not broken down by industry, so it is impossible to know, in total, which part comes from which industry. The WIOD database, on the other hand, provides fairly detailed information on the origin and industry of each input used by another industry. Unfortunately, this is only available for the 28 countries of the European Union and 15 other large countries around the world. Among these countries are some developing countries, including India, which may be technologically close to African countries. Therefore, we use the technical coefficients (TC 13) of India for African countries 14. This approach encompasses several dimensions of Chinese competition at the input level. It not only takes into account the fact that companies import inputs from China, but it also encompasses the competitive effect that this could generate on the local input market [Blyde, 2020]. However, this situation leads to some ambivalence in the interpretation of this variable, because a strong penetration of Chinese inputs can exert downward pressure on local prices, so that firms that do not import from China will ultimately benefit even when sourcing from the domestic market. Unfortunately, we cannot distinguish these two effects. To correct the endogeneity of this variable, we proceeded as previously by constructing [Autor, 2013]'s instrument.
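To make the construction of these exposure measures concrete, the sketch below shows how the output penetration used throughout the paper and equations (3.4) and (3.5) could be computed with pandas. It is a minimal illustration, not the authors' actual code : the DataFrame layout and column names (m_chn, m_tot, x_tot, prod) are hypothetical, and the Indian technical coefficients are assumed to be supplied as a pre-normalised matrix.

```python
import pandas as pd

# Hypothetical panel: one row per (country, industry, year) with imports from China
# (m_chn), total imports (m_tot), total exports (x_tot) and domestic production (prod).

def output_penetration(trade: pd.DataFrame, base_year: int = 2000) -> pd.DataFrame:
    """CHN_Pen_out_kjt = 100 * M_kjt / (Prod_kj,2000 - X_kj,2000 + M_kj,2000)."""
    base = trade.loc[trade["year"] == base_year,
                     ["country", "industry", "prod", "x_tot", "m_tot"]].copy()
    base["absorption_2000"] = base["prod"] - base["x_tot"] + base["m_tot"]
    out = trade.merge(base[["country", "industry", "absorption_2000"]],
                      on=["country", "industry"])
    out["chn_pen_out"] = 100 * out["m_chn"] / out["absorption_2000"]
    return out

def offshored_inputs(trade: pd.DataFrame) -> pd.Series:
    """OFF_kjt = Import^chn_kjt / Production_kjt (equation 3.4)."""
    return trade["m_chn"] / trade["prod"]

def input_penetration(pen_out: pd.DataFrame, tech_coef: pd.DataFrame) -> pd.DataFrame:
    """CHN_IP_kjt = sum_r [U_rk / sum_r U_rk] * CHN_Pen_out_rjt (equation 3.5).

    pen_out   : rows = (country, year), columns = supplying industries r
    tech_coef : rows = supplying industries r, columns = using industries k,
                each column already normalised to sum to one (India's technical
                coefficients used as a proxy, as in the paper).
    """
    return pen_out.dot(tech_coef)  # weighted average of upstream penetrations
```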
China's shock effect on African firms through the third market
Competition between China and African businesses not only takes place in the domestic market, as discussed above, but also in foreign markets. That happens when Chinese firms and African companies export to the same market. This effect has been widely analyzed in the literature, and the results are mixed. Some studies report a negative effect (i.e., a crowding out of exports from African companies). Other studies find a pro-competitive effect, whereby competition with Chinese firms in internal markets leads African firms to improve their productivity to remain internationally competitive. While several methods exist in the literature to analyze this phenomenon, in particular through the calculation of Chinese penetration on the third market, we follow [Blyde, 2020], who studied the case of Mexico and considered the United States as its main market since the United States represents around 90% of its exports. They then used Chinese penetration in the US market to examine the effect of Chinese penetration in the common market on Mexican firms. In the BACI database, we identified 209 markets that are common to African countries (taken as a whole) and China among international markets. Among these common markets, only the main partners (with the obvious exception of China) were retained. These include the countries of the European Union, the USA, India, and Africa itself. Moreover, [Baliamoune-Lutz, 2011] finds that Africa's exports to China have, overall, no effect on Africa's economic growth. Exporting to the OECD market is found to have a U-shaped effect with ultimately a positive effect on economic growth. In line with [Baliamoune-Lutz, 2011], we distinguish two types of markets : a developing market made up of African countries and India, and a developed market made up of the European Union and the USA.
13. The technical coefficient is computed as follows : $TC_{rk} = \frac{U_{rkt}}{\sum_{r} U_{rkt}}$, where $U_{rkt}$ is the amount of input that industry k buys from industry r.
14. The technical coefficients of India are also used as a proxy for the other ten countries selected for the instrument variable.
$$ \Delta Y_{ikj\tau} = \alpha_{\tau} + \beta\,\Delta CHN\_Pen\_out_{kj\tau} + \sum_{l}\theta_{1l}\left(\Delta CHN\_Pen\_CM_{kl\tau} \times Exporter_{ikjt_0}\right) + \theta_{2}\,Exporter_{ikjt_0} + \gamma_{1}\,\Delta X_{kj\tau} + \gamma_{2}\,Z_{jt_0} + \lambda_{k} + \mu_{j} + \rho_{t} + \Delta\epsilon_{ikj\tau} \qquad (3.6) $$

where :
- $\Delta Y_{ikj\tau}$ is the growth of firm i, in industry k and country j, over period $\tau$ ;
- $CHN\_Pen\_CM_{kl\tau} = \dfrac{M^{chn}_{klt_1} - M^{chn}_{klt_0}}{Prod_{kl,2000} - X_{kl,2000} + M_{kl,2000}}$, so that $\Delta CHN\_Pen\_CM_{kl\tau}$ is China's penetration on common market l in industry k over period $\tau$ ;
- $l \in$ {developing market (Africa & India) ; developed market (European Union & USA)} ;
- $\theta_{1l}$ is the parameter of the interaction term. A positive sign means that the greater the penetration of Chinese products on common market l, the more the exporting firm's growth increases on average.

Since penetration is measured at the sector level and only exporting companies can be affected by the impact of common markets, this variable is crossed with the exporting status of the company at the beginning of the period. The variable $Exporter_{ikjt_0}$ is a binary variable taking the value of 1 if firm i in industry k, country j at time $t_0$ is an exporter and 0 otherwise. We do not need to instrument these variables because a (positive or negative) shock on demand or supply in one of the countries forming the common markets is insufficient to explain the penetration of China or any African country into the common market group considered.
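As an illustration only, the following sketch shows how the coefficient of interest in equation (3.6) could be obtained by two-stage least squares, with the Autor-style instrument replacing the endogenous penetration measure in the first stage. Variable names and the data layout are hypothetical ; in practice one would also cluster or otherwise correct the second-stage standard errors, which this bare-bones version does not do.

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z_instr, X_exog):
    """Point estimates of equation (3.6) by 2SLS with one endogenous regressor.

    y       : firm growth (Delta Y), shape (n,)
    x_endog : Delta CHN_Pen_out, the endogenous penetration measure, shape (n,)
    z_instr : the Autor-style shift-share instrument, shape (n,)
    X_exog  : exogenous controls (common-market interactions, exporter status,
              country/industry/time dummies, ...), shape (n, k)
    """
    n = len(y)
    const = np.ones((n, 1))
    # First stage: project the endogenous regressor on the instrument and controls.
    Z = np.hstack([const, z_instr.reshape(-1, 1), X_exog])
    gamma, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ gamma
    # Second stage: use the fitted values in place of the endogenous regressor.
    X2 = np.hstack([const, x_hat.reshape(-1, 1), X_exog])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta  # beta[1] is the estimated effect of Delta CHN_Pen_out
```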
Some descriptive analysis
The next figure shows the output penetrations 15 for five African countries : Egypt, Ethiopia, South Africa, Tanzania, and Togo over the period 2000-2018. It shows that there is great diversity in the dynamics of Chinese penetration over time and across countries.
15. The output penetration is defined as $CHN\_Pen\_out_{kjt} = \dfrac{M_{kjt} \times 100}{Prod_{kj,2000} - X_{kj,2000} + M_{kj,2000}}$

The layout of the curves reveals that low-income countries have a strong acceleration in Chinese penetration in their relevant markets. Even though high-income African countries have experienced this shock gradually, they nonetheless represent the largest importers of Chinese products, as shown by [Nowak, 2016]. The strong penetration of China in low-income countries can be justified by the weak development of their manufacturing sector. Therefore, there is a large gap between domestic demand and production, which would be filled by Chinese products at very affordable prices compared to Western products. The cost of transport faced by each of these countries is another relevant factor. Indeed, a high cost of transport could constitute a real obstacle to importing from China and could explain the divergence of the exposure of each to the Chinese shock within the sub-sample of low-income countries in Africa. For example, two poor countries with similar economies will likely face different transport costs if one country is landlocked and the other is not. Several other characteristics such as population growth, economic growth, and trade policies are likely to explain the evolution of Chinese penetration in each country.
Like at the country level, there is also heterogeneity in China's penetration at the industry level. In Figure 3.8, the penetration by industry is shown for the previous countries (except Tanzania). This figure shows that not all sectors are exposed (downstream) in a similar way to the Chinese shock. The textile sector is the most penetrated of the four countries, and probably in most African countries according to the literature. In South Africa, the "Electrical and Machinery" sector is the second most penetrated. As for the other countries, it is difficult to rank penetrations at the industry level. The heterogeneity of Chinese product penetration can also be explained by the heterogeneity of customs tariffs across the corresponding industries. Indeed, foreign products are not subject to the same taxes. It is therefore possible that customs taxes are a barrier to Chinese penetration in certain industries. Another argument that could explain this divergence is that not all industries have the same level of development within the same country. Some industries are more advanced than others and are able to efficiently meet a very high proportion of local demand. In other words, the greater the deficit between local production and local demand, the more China's penetration is likely to be high.
Although the most recent year of data is 2015, the calculation of input penetration can be extended up to 2018 thanks to [Acemoglu, 2016b]'s approach 16. Figure 3.9 illustrates the input penetrations by industry for four African countries between 2000 and 2018.
16. The input penetration is computed as follows : $CHN\_IP_{kjt} = \sum_{r} \dfrac{U_{rkjt}}{\sum_{r} U_{rkjt}} \times CHN\_Pen\_out_{rjt}$

The figure shows that not only does input penetration vary over time, but it also varies by industry. In other words, not all upstream industries are exposed to the Chinese shock in the same way. If we consider Egypt, the textile industry benefits the most from Chinese inputs. Ethiopia presents a slight peculiarity. There is an almost perfect correlation between Chinese input penetrations in certain industries. Nevertheless, by observing the structure of penetrations in the four countries, we see that there is a strong diversity both at the industry level and at the country level.
Estimation results
This section presents the estimation results of the evaluation of the effects of Chinese penetration through outputs (∆ChinaPenOut), inputs (∆ChinaPenInp), and common external markets on sales and employment. As discussed above, we have also added the penetration of India (∆IndPen) and that of the member countries of the same free trade area (∆FTAPen). The two types of common external markets, namely developed (∆chnX_EUSA_cm) and developing (∆chnX_AI_cm) markets, are also included in the models. These are interacted with the exporter status (Exporter) of each firm. At the country level, we have used the electricity access rate 17 because the availability of electricity is essential for the proper functioning of a company. Given the small number of observations in our sample, we used groups of countries and times instead of country and time dummies. We distinguished five groups of countries (ECOWAS, ECCAS, EAC, AMU, and SADC) according to their geographical position. The underlying assumption is that geographically close countries, linked diplomatically and economically, would undergo common systemic shocks or would be affected by economic shocks jointly. With regard to time, we distinguished three periods, namely 2001-2006, 2007-2008, and 2008-2018. This division of time is based on the assumption that there is a certain continuity and monotony in time before and after the financial crisis of 2007-2008.

17. The variable electricity access rate is a categorical variable.
China's penetration effect on firm sales
While the penetration of Chinese outputs on these markets could be a threat to African firms, Chinese input penetration on the domestic market could be an opportunity thanks to the economies of scale that it could generate. Table 3.1 presents the results of the empirical analysis of the downstream and upstream effects of Chinese penetration on African firms' sales.
Columns [1] to [4] report the results of the downstream effects, while columns [5] and [6] display the results of the upstream effects. Columns [7] to [9] report the results for both downstream and upstream effects. For each model specification, the common external market variables are taken into account except for specifications 5 and 6. All model specifications include industry, country, time-fixed effects, and the electricity dummy variable.
Overall, we retain from the first set of results that the penetration of Chinese outputs significantly reduces the sales of African companies, which confirms our initial hypothesis. The OLS model also shows a highly statistically significant effect, but smaller than the 2SLS estimates. We also see that controlling for the penetration of the potential competitors faced by China in the African market, namely India and countries of the same FTA (model 2), reduces the effect of the Chinese shock in absolute value. With regard to common external markets, competition is fiercer in developed countries.
When we consider the next set of results (that of the inputs), the OLS model indicates a negative but not significant effect, while the 2SLS model reports a significant decrease in domestic sales following the penetration of Chinese inputs. This result is counterintuitive and runs counter to our hypothesis. It can be explained if African firms lack the expertise to make the best use of Chinese inputs. Chinese inputs that are ill-suited to African firms' working conditions could also explain this result.
The last model specification, which incorporates both types of products (input and output) at the same time, presents results that go in the same direction as the previous ones. However, the upstream effect is no longer statistically significant, although it retains its sign, and the significance of the downstream (output) effect drops considerably. This may be due to collinearity between the two Chinese penetration variables. Consequently, we maintain that the Chinese shock constitutes a real threat to African companies both on the output side and on the input side. These results are in line with those found in the literature.

Notes : All models include a dummy for country (µj), industry (λk), time (ρt) and electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[ and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator, with ∆ChinaPenOut and ∆ChinaPenInp instrumented by their corresponding instruments.
China's penetration effect on firm size
Industry in Africa is slowly expanding even though the sector remains dependent on an abundant labor force and still struggles to adopt advanced technologies. African firms are still specialized in the production of labor-intensive goods. In that context, a negative or positive shock on their sales is likely to impact employment. Table 3.2 presents the results of the evaluation of the effect of the Chinese shock on African firm size, captured by the number of employees.

Notes : All models include a dummy for country (µj), industry (λk), time (ρt) and electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[ and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator with ∆ChinaPenOut instrumented with the corresponding instrument
The same model specification is used as in the firms' sales analysis : industry, country, and time fixed effects, China's penetration, other competing countries' penetration, and common markets are included. The results are consistent with our assumptions except in the OLS model. Indeed, the latter shows a positive and statistically significant effect of Chinese penetration on firms' size, but as soon as we correct for endogeneity by switching to 2SLS models, this coefficient becomes negative and statistically significant. As for the effects via the channel of the common external markets, no statistically significant impact on firms' employment appears, neither on the side of the developing countries nor on the side of the developed countries. In conclusion, Chinese competition has been found to have a negative impact on manufacturing jobs in Africa.
Robustness Check
Alternative Model
In the previous models, we considered the country, industry, and time dummies separately. This implicitly assumes that the unobservables are the same across countries, across industries, or over time. However, this is not necessarily true. For example, the "wood and paper" industries in Benin and Morocco are likely to differ in terms of development level. These two industries are also not subject to the same political, economic, and social shocks. The same goes for other countries and industries. We therefore repeated the first three model specifications in Table 3.1, including the interaction terms of country, industry, and time to take these specificities into account.

Notes : All models include the interaction terms of country, industry and time and a dummy for electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[ and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator with ∆ChinaPenOut instrumented with the corresponding instrument, while FE refers to the Fixed-Effects estimator
The results shown in Table 3.3 are relatively similar to the benchmark results. The impact of Chinese penetration on sales growth remains negative and statistically significant. As for competition in foreign markets, the magnitude of the impact on developed markets is always greater than that on developing markets. In addition, in order to verify the robustness of the results, we follow [Chauvet] and re-estimate the previous models using 2-year lagged observations. We use the same method of calculation to evaluate growth over the period [t-3, t-1]. All the other variables are also recalculated over the same period 18. To these variables, we applied two methods : the instrumental variable method and the within method (fixed effects) combined with the instrumental variable method. The results remain qualitatively similar. However, the coefficient of the variable of interest is only statistically significant when the interaction term of country, industry, and time is included, highlighting the importance of crossing unobservable heterogeneities in order to take into account local and temporary specificities.
Instrumentation of India's penetration
In all the estimation models, the effect of India's penetration is close to or often greater than that of China. This situation seems paradoxical given that the growth of Chinese penetration and China's import share are greater than those of India. The larger estimated effect of Indian penetration may be due to endogeneity. To address this risk, we instrumented India's penetration using [Autor, 2013]'s approach, applied similarly to the case of China. The results of this analysis, shown in appendix 5, confirm our assertion. While the other variables retain their sign and significance, the coefficient of India's penetration is still negative but no longer statistically significant. We therefore deduce that, in all the previous and the following model specifications, the coefficient of Indian penetration should be interpreted with caution. However, the latter remains a relevant control variable in these models.
Finally, note that our data show a very high attrition rate. If this is not corrected, it can introduce an upward or downward bias in the estimates. Indeed, the disappearance of certain companies from the sample in the second period or later may be due to an exit from the market or to a voluntary non-inclusion for reasons of financing (as was the case for some companies in the sample). [Chauvet], who worked on a similar sample 19, showed that this attrition had a negligible impact on the estimates.
Heterogeneity Analysis
China's penetration effect on sales according to firm characteristics
While one of the key benefits of free trade is low-cost access to finished and semi-finished goods by consumers and businesses, respectively, one should not lose sight of its potential adverse effects on nascent businesses ([Shafaeddin, 2005]). In the worst-case scenario, the latter are sometimes doomed to leave the market. No less important is the case of small businesses. They are often less capital-intensive and are therefore easy prey to the penetration of foreign products. A firm that is 5 years old or less is defined as young, while a firm exceeding the median size of all firms in the sample, namely 30 employees, is defined as large.
In Table 3.4, model specifications 2, 3 and 4 from Table 3.1 are repeated on different sub-samples. The results show that there is a statistically significant and negative impact of the penetration of Chinese products (outputs) on companies less than 5 years old. When we estimate the same models on the sub-sample of firms that are 10 years old or older, the estimated impact turns positive but statistically insignificant in the three model specifications. These results suggest that young companies are particularly exposed to China's import competition.
The analysis of firms' size shows that small African firms experienced a drop in sales as a result of the penetration of Chinese products into their markets. Conversely, Chinese penetration leads to a significant increase in the sales of large African firms. Large companies are often old companies ([Sleuwaegen, 2002], [Shanmugam]) with experience that allows them to adapt to the vagaries of their markets. Large companies have more access to financial markets than smaller ones [Wang, 2016] and can therefore secure the loans necessary to challenge their competitors.

19. Their sample also incorporates some African companies featured in our data.

Notes : All models include a dummy for country (µj), industry (λk), time (ρt) and electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[, and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator with ∆ChinaPenOut instrumented with the corresponding instrument
While the right combination of production factors is critical to a firm's efficiency, the Chinese shock could constitute a threat to technologically less developed firms because they are susceptible to becoming less productive. This hypothesis is tested in Table 3.5, which displays the results of the analysis interacting China's penetration with the capital intensity status (Cap_Int), considering the same model specifications as in the previous table. Capital intensity is measured by the firm's capital-labor ratio. Cap_Int takes the value of 1 if this ratio is greater than the median (models 1-3) or than the third quartile (models 4-6). These thresholds are calculated by industry. The results show that African firms with high capital intensity also experienced a reduction in sales caused by Chinese import competition, but to a lesser extent than others.

Notes : All models include the interacting term of the country (µj), industry (λk), time (ρt) and electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[, and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator with ∆ChinaPenOut instrumented with the corresponding instrument
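For concreteness, the Cap_Int dummy of Table 3.5 could be built along the following lines ; the snippet is only a sketch with hypothetical column names (capital, employees, industry, d_chn_pen_out), not the code actually used.

```python
import pandas as pd

def capital_intensity_dummy(firms: pd.DataFrame, quantile: float = 0.5) -> pd.Series:
    """Cap_Int = 1 if the firm's capital-labour ratio exceeds the industry-level
    median (quantile=0.5) or third quartile (quantile=0.75), 0 otherwise."""
    ratio = firms["capital"] / firms["employees"]
    cutoff = ratio.groupby(firms["industry"]).transform(lambda s: s.quantile(quantile))
    return (ratio > cutoff).astype(int)

# Interaction term used in Table 3.5:
# firms["chn_x_capint"] = firms["d_chn_pen_out"] * capital_intensity_dummy(firms)
```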
Effect of the Chinese shock according to the country's endowment in natural resources
The relationship between natural resources and economic development, especially in developing countries, raises many questions and has been the subject of numerous studies in the literature. Countries with large endowments of natural resources tend to be less diversified and highly dependent on the natural resource sector for tax revenue, foreign exchange, growth, and employment [Hailu, 2017]. According to [Martin, 2005], reducing natural resource dependence is important for several reasons, including (i) the deteriorating terms of trade for commodity exports ; (ii) the volatility of commodity prices and their subsequent impact on the domestic economy ; (iii) the lower rate of technological change in resource extraction activities compared to other sectors ; and (iv) the rent-seeking behavior, weak governance, and civil wars associated with resource-intensive economies. According to the literature ([Strecker], [Alden], [Powanga]), China's presence in Africa is explained by its crucial need for raw materials, particularly energy resources, to feed its industries and meet the energy needs of its population. If this is the case, we hypothesize that African countries dependent on natural resources are more exposed to Chinese competition. Countries with large endowments of natural resources tend to develop the extractive sector to the detriment of other sectors, particularly manufacturing. This phenomenon is known as Dutch disease ([Corden, 1982], [Corden, 1984], [Javaid, 2009], [Trevino, 2011], [Rodriguez]). In the context of the China-Africa partnership, this situation may be exacerbated if China mainly imports raw materials from Africa and exports its manufacturing products to Africa.
However, other scholars have questioned the Dutch disease hypothesis ([Sala-i-Martin, 2004], [Lederman]). Criticism often goes to the empirical methods used but also to the nature of the data ([De V Cavalcanti, 2015] ; [Alexeev, 2009] and [Lederman]). Others point to the periods of study ([Wright] ; [Manzano, 2001]) as well as the indicators used ([Brunnschweiler, 2008a] ; [Brunnschweiler, 2008b] ; [Ding, 2005] ; [Stijns]). The positive effects of resource dependence are justified by the fact that revenues (taxes and rents) from natural resource exploitation can be used to diversify the national economy through complementarities between the resource sector and other sectors ([Hirschman, 1958]). When these revenues are invested in other sectors throughout the economy, the benefits diffuse through the national production network. Wages paid to workers (in the extractive and non-extractive industries) and spent on consumable goods further transform the national economy. In addition, the government can also use revenues from the extractive sector to build infrastructure (electricity, roads, etc.) and invest in the human capital necessary for business growth. By doing so, these companies can reinvest the profits into the acquisition of new technologies, boost their productivity and consequently improve their competitiveness on the international market. Additional sources of foreign exchange accumulation can also be used to finance development. In conclusion, dependence on natural resources can lead to economic development and prosperity in the manufacturing sector if it is accompanied by a policy that reallocates the resources it generates to other sectors so as to make it transitory and not permanent [Hailu, 2017].
Several approaches are used in the literature to determine a country's dependence on natural resources. The share of natural resources in GDP or in total exports ([Davis, 1995] ; [Lederman]) and the resource rent in GDP [Dobbs, 2013] are among the most commonly used indicators. The distinction between natural resource-dependent and non-dependent countries is based on a threshold arbitrarily chosen in the literature, from 10% ([Dobbs, 2013]) to 20% ([Baunsgaard]) or 25% ([Haglund, 2011]). Such thresholds have several limitations, including the impossibility of making comparisons between countries and over time. Similarly, these indicators do not take into account the production environment of non-extractive sectors. To address these limitations, [Hailu, 2017] designed the Extractives Dependence Index (EDI), a composite index consisting of three indicators : i) the share of export earnings from extractives in total export earnings ; ii) the share of the revenue from extractives in total fiscal revenue ; and iii) the extractive industry value added in GDP. These indicators were weighted to take into account the productive capacity of the country in order to determine the presence of alternative sources of export and fiscal revenues as well as the industrial capacity for diversification necessary for structural change in the economy. The EDI for country j at time t is defined as follows :
$$ EDI_{jt} = \left[EIX_{jt} \times (1 - HTM_{jt})\right] \times \left[Rev_{jt} \times (1 - NIPC_{jt})\right] \times \left[EVA_{jt} \times (1 - MVA_{jt})\right] \qquad (3.7) $$
where :
- $EIX_{jt}$ : the export revenue from oil, gas, and minerals as a share of total export revenue of country j at time t ;
- $HTM_{jt}$ : the export revenue from high-skill and technology-intensive manufacturing as a share of global HTM exports in year t ;
- $Rev_{jt}$ : the revenue generated by the extractive industry as a share of total fiscal revenue of country j at time t ;
- $NIPC_{jt}$ : the total tax revenue collected from non-resource income, profits and capital gains as a share of GDP of country j at time t ;
- $EVA_{jt}$ : the extractive industries' value added as a share of GDP of country j at time t ; and
- $MVA_{jt}$ : the per capita manufacturing value added, used as a proxy for the domestic industrial capability of country j at time t.

Table 3.6 shows the effect of Chinese penetration when the country's dependence on natural resources is taken into account (∆ChinaPenOut * EDI). As in the previous model specification, the penetration of Chinese products continues to have a statistically significant negative impact on African firms' sales. The extractive dependence index (EDI) is positive and significant, invalidating the Dutch disease hypothesis. The interactive term between China's penetration and EDI is positive and statistically significant. African firms' sales tend to increase the more a country depends on the extractive sector and the more Chinese penetration increases. These findings are consistent with more recent theories about the China-Africa economic relationship, according to which agreements signed between China and natural-resource-rich African countries lead the former to invest in many infrastructure projects, particularly in the road, rail, and electricity sectors 20 ([Powanga], [Kaplinsky]). Working conditions in African firms have improved because they can serve more local and foreign markets. Hence, their TFP improves and they can become more competitive. Countries well endowed in and dependent on natural resources can earn foreign currencies, which can reduce financial constraints and facilitate the import of production tools by local firms.

Notes : All models include a dummy for country (µj), industry (λk), time (ρt) and electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[, and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator with ∆ChinaPenOut instrumented with the corresponding instrument
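As a simple illustration of equation (3.7), the EDI could be computed as in the sketch below ; the function is not taken from [Hailu, 2017] and assumes that all six components are already expressed as shares on a common scale.

```python
def extractives_dependence_index(eix, htm, rev, nipc, eva, mva):
    """EDI_jt = [EIX*(1-HTM)] * [Rev*(1-NIPC)] * [EVA*(1-MVA)]  (equation 3.7).

    eix : extractive exports / total exports      htm  : HTM exports share
    rev : extractive revenue / total fiscal rev.  nipc : non-resource taxes / GDP
    eva : extractive value added / GDP            mva  : manufacturing VA per capita (normalised)
    """
    return (eix * (1 - htm)) * (rev * (1 - nipc)) * (eva * (1 - mva))
```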
Regional and sectoral impacts of the China shock
China's penetration in the African market has not evolved uniformly across the continent, both at the country level and at the sector level. For instance, the textile industry has experienced a strong increase in the penetration of Chinese products. Conversely, the agri-food industry has been less exposed to Chinese import competition. As for the other sectors, no common trend emerges. We make the assumption that countries that belong to the same economic region (i.e., ECOWAS, AMU, EAC, CEMAC, SADC) would experience similar Chinese penetration. We estimate some model specifications for these regions. The results shown in Table 3.7 therefore relate to these regional communities, with the exception of CEMAC, whose number of valid observations is too low to give an estimate. The results are in line with the benchmark estimations. However, Chinese penetration seems to have a pro-competitive effect in Egypt and Morocco, part of the Arab Maghreb Union (AMU). But when we take into account the competition in the common external markets, this pro-competitive effect disappears. The ECOWAS zone 21 shows a negative and significant effect but of low magnitude. This shows that China has caused minor damage, on average, to the countries of West Africa. As for the other two zones (EAC, SADC), although the effects are negative, they are not significant, and this could reflect a mitigated impact of the Chinese shock in these zones but with a slightly more widespread negative effect. However, we can also assume that the non-significance may be due to the small size of the sub-samples considered.

Notes : All models include a dummy for country (µj), industry (λk), time (ρt) and electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[, and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator with ∆ChinaPenOut instrumented with the corresponding instrument
As far as the sector analysis is concerned, the results are quite surprising. Table 3.8 does not reveal any statistically significant effect of Chinese penetration at the industry level. Two explanations are possible. As in the previous case, either it is due to the small size of the sub-samples, or it is due to the non-uniformity of the effect of the Chinese shock across the continent. In other words, there is strong heterogeneity in the impact of the Chinese shock within a given industry when moving from one country to another, so that no trend is dominant.

Notes : All models include a dummy for country (µj), industry (λk), time (ρt) and electricity (<20 ; [20,40[ ; [40,60[ ; [60,80[, and [80,100]) ; Robust standard errors in parentheses ; *** p<0.01, ** p<0.05, * p<0.1 ; IV refers to the 2SLS estimator with ∆ChinaPenOut instrumented with the corresponding instrument
Conclusion
On the international scene, the Chinese shock leaves no country unaffected. This paper analyzed the impact of China's product penetration on African firms' growth. Overall, Chinese competition is causing real damage to Africa's manufacturing sector. The penetration of Chinese products leads to a significant drop in the growth of African companies. However, it should be noted that this effect is not homogeneous. On the one hand, some firms, such as small, young, and the least capital-intensive ones, are more impacted by Chinese penetration. On the other hand, we find that African firms in resource-dependent countries are less adversely affected by the penetration of Chinese goods. In addition, when the common external market is a developed country, the Chinese shock has a more severe impact on African exports. Moreover, the results reveal that the use of Chinese inputs by African companies leads to a drop in their sales. These results are very important and answer the question "is the Chinese shock an opportunity for African industrialization ?" with a "no". Even if a pro-competitive effect is observed for large companies, small and medium-sized enterprises, which constitute the main providers of jobs and contribute to the manufacturing sector, are negatively impacted.
The African Continental Free Trade Area (AfCFTA) is one of the flagship projects intended to catalyze the economic and industrial development of the continent. For this to be effective, the continent should change its posture vis-à-vis Chinese trade. As long as Chinese products are cheaper, they will likely compete with and gain market share over locally produced goods. This is why Africa must equip itself with all the necessary means to sufficiently boost its manufacturing industry in the face of the Chinese threat and possibly other shocks that could disrupt the proper functioning of the AfCFTA. Concretely, Africa should work to raise the level of total factor productivity of its companies and should invest in education to improve the quality of its human capital in order to make its investments in research and development more profitable. Furthermore, the availability of electric power is essential for the competitiveness of the manufacturing sector. In that context, the financial sector must be well developed and inclusive to encourage private and public investments.
Appendixes
Introduction
China-Africa cooperation has been growing over the years and has attracted a lot of attention. This cooperation has been strengthened since China joined the WTO, and stands out in different ways. Chinese foreign direct investment (FDI) inflows in Africa and bilateral trade between the two regions have expanded significantly. These changes are not without impacts on African economies. In this paper, we focus on the impact of Chinese imports on African firms' performance. The literature suggests that China's penetration in foreign markets has generally led to a decrease in firms' growth, but this impact is heterogeneous across firms according to their characteristics ([Bugamelli], [Mercado, 2019], [Kigundu Macharia]). For instance, while small and young enterprises are vulnerable, large ones can benefit from pro-competitive effects ([Li, 2019], [Blyde, 2020]). The Chinese competition may constrain local firms to reinforce their performance, in particular their productivity. A firm is more productive than another one if it is able to produce the same outputs with fewer inputs or if it produces more outputs using the same inputs [Biesebroeck]. To exist in the market, a firm must reach a minimum level of productivity that requires, in turn, a minimum level of capital stock according to its production function [Olley, 1992]. A firm's market power increases with its productivity [Melitz]. For a firm, an increase in the stock of capital means an improvement in productivity and hence in market power, which increases the probability of staying in the market. Without high productivity, African firms' response to China's penetration is likely to be challenging. Their productivity can decrease as a result of a decrease in production or may increase as a result of a special investment.
According to the literature, one of the major obstacles faced by African firms is low access to financial markets [African Economic Outlook, 2019]. In such a context, investment in new equipment can be a challenge, mainly for young and small enterprises ([Beck, 2006a], [Okpara, 2007], [Fowowe]). This situation could explain the heterogeneous impact of China's penetration into African markets across firms. However, as shown in Figure 4.1, for more than two decades China has become, thanks in large part to more competitive prices, the main supplier of machines, motors, and other electrical and electronic products to African countries. Access to affordable imported inputs and equipment can facilitate firms' investment at a lower cost. As a result, Chinese inputs may also generate productivity gains for African firms. Furthermore, the literature points to low electricity quality as another obstacle faced by manufacturing firms in Africa ([African Economic Outlook, 2019], [Alby, 2013], [Gelb]). Low electricity quality increases the cost of energy use, making production more energy-intensive and firms less competitive. This can lead firms to invest in energy efficiency to overcome access to low-quality electricity. As in the productivity case, and according to Figure 4.1, Chinese products constitute a good opportunity for African firms to access not only energy-efficient appliances but also energy-saving technologies. The direct advantage of importing these products is lower energy consumption, which means more profit for the enterprise. Another advantage is the reduction in CO2 emissions. Fossil fuels continue to be an important source of electricity generation in African countries ([Lin, 2021], [Lin, 2022]), making the continent a major emitter of CO2 relative to its GDP. But, as shown in Figure 4.2, even if total CO2 emissions have increased significantly since the early 2000s, their rate relative to GDP decreased significantly over the same period. The increase in total emissions can be explained by accelerated economic growth on the African continent and the adoption of new technologies. The objective of this paper is to investigate, at the firm level, the relationship between Chinese penetration, firms' productivity, and energy intensity in African countries. First, we propose a conceptual framework that explains theoretically the link between our main variables, that is, energy intensity (or energy efficiency), productivity, and production. Second, we apply the two-stage least squares (2SLS) approach to assess the impact of Chinese penetration into African markets on firms' productivity and energy intensity. In addition to the 2SLS method, we use the three-stage least squares (3SLS) estimator to estimate a simultaneous equation model that links productivity and energy intensity.
Results show that China's competition in African markets leads to a reduction in the productivity and energy intensity of small enterprises, but does not affect the large ones. According to our theoretical model, the reduction in energy intensity is associated with a reduction in production rather than productivity improvement. We also find that following China's penetration, productivity improves in SMEs facing electricity obstacles and large firms facing financial obstacles. Finally, the results suggest that improving a firm's productivity tends to decrease the energy intensity but we do not find any significant reverse causality.
This paper builds on two strands of the literature on trade openness, namely the one focusing on firms' productivity and the one addressing firms' energy efficiency. Likewise, this paper is close in spirit to the literature linking firm productivity with energy intensity, and to the literature concerning trade and environmental pollution.
The rest of the paper is organized as follows. The first two sections present the literature review on trade and firms' performance (section 1) and the theoretical relationship between energy intensity, productivity, and production at the firm level (section 2). The next section presents the empirical strategy, while section 4 presents the data and main variables. The empirical results are presented in section 5, while section 6 presents the robustness check analysis. Finally, we conclude and make some policy recommendations.
Literature Review
Trade Openness and Productivity in the Manufacturing Sector
Joining the WTO is considered a form of trade opening. Most African countries joined it when it was created in 1995, while China only joined at the end of 2001. WTO membership has resulted in an increase in trade between Africa and the rest of the world (notably China), a decrease in import costs (customs tariffs), and a diversity of imported goods in terms of quality and variety. China, in particular, is able to produce a wide range of goods at markedly lower prices than most of its competitors. Lower prices and diversity in imported products are an opportunity for both consumers and producers. For firms, we distinguish two types of products, namely (variable and fixed) inputs, and outputs. Inputs are transformed upstream to produce outputs. Trade openness allows firms to have access to low-cost, high-quality inputs, which not only reduce their production costs but can also improve their productivity ([Melitz] ; [Bernard, 2003]). As a result, firms have room to adjust their output prices ([Thompson, 1997] ; [Horiuchi] ; [Ades, 1999] ; [Frankel, 1999] ; [Alesina]). On the output side, lower tariffs create competition in the domestic market and may crowd out local products. As a result, increased import competition may lead to a reduction in the growth and productivity of less competitive local firms. Local firms can adjust to the new market conditions by reducing their production costs and increasing their productivity through the adoption of new technologies. Trade openness promotes the influx of goods designed with different technologies into their market. Local firms can absorb the technology incorporated in imported products and adjust it to local conditions ([Falvey], [Hafner, 2011]). Consequently, the adoption of these new technologies can enhance their performance.
Trade openness also leads to a reallocation of the factors of production, with some less competitive sectors contracting and other competitive sectors expanding ([Melitz], [Hoekman, 2010], [Melitz, 2014]). Moreover, due to the absence of protection of intellectual property rights, developing countries tend to invest little in research and development (R&D), which is an important instrument used by many developed countries to face Chinese competition in their domestic market ([Dorn], [Bloom] ; [Chen, 2005]). R&D allows for product and technological innovation. Innovation in product design gives companies market power over these products because they are not manufactured elsewhere. Technical innovation, on the other hand, has a direct impact on productivity by saving time and inputs ([Grazzi, 2016], [Crowley], [Hall, 2011]). These different phenomena have been highlighted in the literature. [Amiti, 2007] studied the impact of Indonesia's trade openness on firm productivity and found that lower tariffs led to an increase in productivity. The effect of intermediate goods is higher (than for finished goods), especially for firms using these goods intensively. The same result was found by [Kasahara] when analyzing the evolution of the productivity of Chilean firms that move from non-importing to importing intermediate goods, and by [Schor, 2004] in the case of Brazilian firms. This effect of intermediate goods has, however, been found to vary under certain conditions. In addition to [Amiti, 2007], [Luong, 2011] found for Mexican firms that when intermediate goods are highly substitutable rather than complementary, lower tariffs on intermediate goods have a positive effect on productivity. Conversely, lower tariffs on finished goods have a negative effect on a firm's productivity. Similarly, [Altomonte, 2014] showed that the penetration of foreign intermediate goods most improves the productivity of firms located upstream of the industrial production chain. [Elu, 2010] found that Chinese FDI and trade openness (input and output penetration) with China had no impact on firms' productivity in Ghana, Kenya, Nigeria, South Africa, and Tanzania. [Darko, 2021] found a positive impact of Chinese goods penetration in Sub-Saharan Africa (SSA) on the productivity of manufacturing firms. This impact is positive for both inputs and outputs. The discrepancy in the outcomes of these different studies may be attributed to the different datasets used 1.
Trade Openness and Energy Intensity in the Manufacturing Sector
Energy intensity (EI) is the amount of energy consumed per unit of output produced. It is calculated by dividing the total energy consumed by the total associated output. Higher energy intensity means that a firm's production process uses a lot of energy, maybe even wasting energy, which could translate into financial loss and ultimately affect the firm's competitiveness. Faced with competition, every company seeks to optimize production costs, which include tangible inputs (e.g., raw materials or semi-finished products) and non-tangible inputs (e.g., electricity, internet). Tangible inputs are less compressible than other inputs. In the case of non-tangible inputs such as electricity, the best-known and most widely adopted optimization method by companies is improving energy efficiency (IEE), which is measured as the ratio of useful energy to the energy consumed by a system. IEE is, therefore, about rationalizing the use of energy (and not about saving energy), i.e., reducing energy consumption without altering the level of production. In other words, it represents the economic gains from spending less energy than before for the same level of production.
Trade openness can influence energy intensity in several ways. It allows firms to have access to energy-saving technologies and good quality inputs that improve their productivity and, therefore, their energy intensity ([Kasahara], [Imbruno], [Cui, 2020], [Gutiérrez, 2018], [Holladay, 2016]). This phenomenon is more common among exporting firms, which take advantage of cheaper inputs to improve their productivity and energy intensity and make their exports more competitive ([Kimura, 2006], [De Loecker, 2007], [Bas]). However, competition from foreign goods in the domestic market can also increase a firm's energy intensity through the reduction of its domestic market share. Indeed, by reducing its domestic market share, the firm reduces its production level, which can reduce its productivity and consequently increase its energy intensity ([Shen]). This situation can be exacerbated in countries where companies are faced with the problem of load shedding -i.e. distributing demand for electrical power across multiple power sources- ([Adom, 2015], [Adom, 2018]). It is also important to note that a structural change in the economy, induced by trade policy, can also lead to an increase in energy intensity at the macroeconomic level ([Cole, 2006], [Mulder], [Chintrakarn, 2013], [Adom, 2016], [Ma]). For instance, foreign competition may compel a multi-product firm to specialize, or simply to shift toward less energy-intensive goods due to the high prices and bad quality of electricity (this is mainly the case in African countries). Thus, to reduce their vulnerability, some firms opt to innovate rather than import foreign technologies ([Dorn], [Liu, 2020]) in order to increase their productivity and hence improve their energy intensity. In countries where electricity is expensive or of poor quality, it is also desirable to invest in energy-saving technologies or operations capable of reducing corporate energy intensity ([Roy, 2015], [Gutiérrez, 2018], [Barrows]).
Strategies adopted by companies to adjust to trade policies often require investment, and thus access to bank credit, especially when the company's own resources are insufficient. Unfortunately, in developing countries and especially in Africa, the financial market is less developed, and access to bank credit is an obstacle to investment projects and a brake on the growth of companies ([Green, 2013], [Fauceglia]). Several papers have analyzed the impact of financial development on corporate energy intensity. Many of them report a negative relationship between access to finance and the energy intensity of firms ([Adom, 2020], [Xue]). For example, [Amuakwa-Mensah, 2018] showed that improved banking performance encourages investment in energy efficiency in Sub-Saharan Africa, both in the short and long term. However, other studies have found a positive relationship between financial development and energy intensity. For example, in the absence of enforcement measures or rules to limit air pollution, some companies have been found to pay little attention to energy efficiency when they receive bank financing ([Haider], [Ling, 2020], [Zhang, 2020]). Several factors can explain this firm behavior. First, there is a lack of sufficient information about which investments are profitable for the company [Velthuijsen]. This insufficient information may be due to the high transaction costs ([DeCanio, 1998], [Sanstad, 1994] ; [Howarth, 1995]) associated with collecting it, or to a lack of qualified staff to conduct the necessary and relevant analyses of the projects ([Beckenstein, 1986] ; [De Almeida, 1998] ; [Gabel, 1993], [Howarth, 1995], [DeCanio, 1994]). Second, when facing financial constraints, companies ration capital ([Ross, 1986], [Howarth, 1995] ; [Worrell]) so that energy efficiency becomes just one criterion used in the purchase of capital goods and, therefore, does not require any particular investment or attention [Reddy, 1991]. On the other hand, some studies have shown that despite the existence of a positive net present value of an energy efficiency investment project, some companies remain wary and reluctant to undertake this investment. This is the paradox of energy efficiency ([DeCanio, 1998], [Van Soest, 2001]). Several reasons, including uncertainty about the future, explain this mistrust [Hassett, 1993]. Indeed, investment in energy efficiency would not bring the same benefit to the company if the price of electricity were to fall in the short or medium term. Another reason put forth by [Van Soest, 2001] is that companies are willing to bear the cost associated with postponing investment in energy efficiency in the hope that, thanks to technical progress, more sophisticated technologies may appear in the near future.
Trade openness, Productivity, and Energy Intensity
The literature investigating the link between total factor productivity (TFP) and energy intensity within firms is mixed. There are two categories of analysis due to the bi-directional causality between these two variables [Berndt, 1978]. First, improving productivity can have a negative impact on energy intensity. Indeed, all investments in new equipment or accessibility to better and cheap intermediate inputs to improve productivity [Darko, 2021] reduce firm energy intensity, all else equal. The same holds for the process of production reorganization and firm innovation in response to increased Chinese competition [Shu, 2019], which can affect energy intensity through productivity improvement. This result reflects the fact that productivity improvement leads to a fall in the energy consumed per unit produced, that is, a decrease in energy intensity. Second, the firm can directly target energy efficiency [Pan] by investing in energy-saving technology or by cleaning up its energy system, reducing the energy losses (in all their forms) identified after an energy audit. Likewise, investment in energy-efficient equipment and in research and development can also lead to a drop in energy intensity [Hodson, 2018]. The decrease in total energy used can also lead to improving the productivity of the energy factor, and hence, increasing TFP. Therefore, investment in energy efficiency that decreases energy intensity results in an increase in TFP, due to the improvement of energy productivity. The literature review discussed above suggests that Chinese competition can affect firm energy intensity and productivity in several ways. In the next section, we investigate theoretically the relationship between productivity and energy intensity.
A simple production framework
As mentioned above, the general decline in firm growth due to Chinese competition in the African market can be mediated through two main channels, namely productivity and energy intensity. In this section, we present a mechanism that relates these two variables to Chinese competition. Assume a firm i with the following neoclassical production function :
Q = A K^{\alpha} L^{\beta} E^{\gamma} M^{\lambda}
where A is the productivity of the firm, K is the stock of physical capital, L is labor, E is energy and M is intermediate goods. The parameters α, β, γ, and λ represent the elasticities of production with respect to the corresponding production factors.
Energy intensity is defined as energy consumption per unit produced :
EI = \frac{E}{Q}
where EI is the energy intensity, E is the total energy consumed by the company over a given period (e.g. a year), and Q is the company's output over the same period. That is :
EI = \frac{E}{A K^{\alpha} L^{\beta} E^{\gamma} M^{\lambda}} = \frac{E^{1-\gamma}}{A K^{\alpha} L^{\beta} M^{\lambda}}
Suppose that energy consumption varies while the other factors (K, L, M) remain constant. This means that the production level also remains unchanged. This is possible if, for instance, the firm invests in energy efficiency or maintains its production process by reducing the waste of energy in all its forms (thermal, electrical, etc.). This leads to a reduction in the energy consumed but with the same level of output. Likewise, a failure in the process can increase the quantity of energy consumed without affecting the other factors or the quantity produced.
To summarize, we suppose that the firm gains or loses energy efficiency at a given level of production. To take this phenomenon into account, we differentiate the above expression with respect to the energy variable (E) while holding the other factors constant. We obtain the following expression :
\frac{dEI}{dE} = \frac{(1-\gamma) E^{-\gamma}}{A K^{\alpha} L^{\beta} M^{\lambda}} = \frac{1-\gamma}{A K^{\alpha} L^{\beta} E^{\gamma} M^{\lambda}} = \frac{1-\gamma}{Q} \quad\Rightarrow\quad dEI = \frac{1-\gamma}{Q}\, dE
Now, to see how energy intensity varies as a function of production (Q), divide the preceding equation by dQ. Here we allow energy to vary as a function of output (Q) and we suppose that the change in energy intensity as a function of Q does not depend on the initial level of energy intensity. We then obtain the following expression :
\frac{dEI}{dQ} = \frac{1-\gamma}{Q}\, \frac{dE}{dQ}
In physical science, the energy consumed by a machine is equal to the product of its power (P) and the duration (t) of operation :
E = P * t
The previous equation thus becomes :
\frac{dEI}{dQ} = \frac{1-\gamma}{Q}\, \frac{d(P \cdot t)}{dQ} \;\Rightarrow\; \frac{dEI}{dQ} = \frac{(1-\gamma) P}{Q}\, \frac{dt}{dQ} = \frac{(1-\gamma) P}{Q}\, \frac{1}{dQ/dt}
Let µ be the time productivity of the firm, defined as the firm's output per unit of time
\mu = \frac{dQ}{dt}
The change in energy intensity can be formulated as follow :
\frac{dEI}{dQ} = \frac{(1-\gamma) P}{Q}\, \frac{1}{\mu} \;\Rightarrow\; dEI = \frac{(1-\gamma) P}{\mu}\, \frac{dQ}{Q}
Passing to the integral, we obtain :
\int dEI = \frac{(1-\gamma) P}{\mu} \int \frac{dQ}{Q} \;\Rightarrow\; EI = \frac{(1-\gamma) P}{\mu} \ln(Q) + EI_0
The constant term EI_0 represents the incompressible energy intensity of the company. It corresponds to the minimum production (Q_{min}) of the company and the associated energy consumption (E_{min}). The minimum production level corresponds to zero profit. Below Q_{min}, the firm is no longer viable in the market and must leave it so as to avoid negative profits. The associated energy E_{min} is equal to the sum of the amount of fixed energy (i.e. that does not depend on production) and the variable energy consumed (by machines, motors, apparatus, etc.) in producing Q_{min}. The time productivity depends on TFP (A). When A increases, \mu also increases.
Similarly, installed capacity depends on the capital stock, all other things being equal. Let us therefore posit \mu = \mu_A and P = P_K. Hence the following expression for energy intensity :
EI = \frac{(1-\gamma) P_K}{\mu_A} \ln(Q) + EI_0
To analyze this expression from a dynamic point of view, it is necessary to distinguish between the short-term and the long-term. According to economic theory, physical capital varies only in the long run. We admit that productivity (µ A ), as well as installed power (P K ), also vary in the long run since they depend on the stock of physical capital. We, therefore, have the following expressions :
In the short term
EI_t = \frac{(1-\gamma) P_K}{\mu_A} \ln(Q_t) + EI_0 \;\Rightarrow\; \Delta EI = \frac{(1-\gamma) P_K}{\mu_A}\, \Delta \ln(Q)
According to this expression, there is a positive relationship between production and energy intensity. The latter evolves in the same direction as production but in a non-linear manner, because of the logarithmic form of Q. When production increases (decreases), energy intensity also increases (decreases), but less than proportionally to the change in production. A firm with a high elasticity of production relative to energy (i.e. γ) tends to have a lower energy intensity. While installed power is linearly related to the change in EI, there is an inverse relationship between the latter and the firm's TFP. With increased trade competition, the EI decreases less quickly in less productive firms (all else being equal) but more quickly in highly capital-intensive firms (all else being equal).
In the long term
EI_t = \frac{(1-\gamma) P_{K_t}}{\mu_{A_t}} \ln(Q_t) + EI_{0t} \;\Rightarrow\; \Delta EI_t = \Delta\!\left(\frac{(1-\gamma) P_{K_t}}{\mu_{A_t}} \ln(Q_t) + EI_{0t}\right) \;\Rightarrow\; \Delta EI_t = \Delta\!\left(\frac{(1-\gamma) P_{K_t}}{\mu_{A_t}} \ln(Q_t)\right) + \Delta(EI_{0t})
Here, the period t is at least 1 year. In the short-and long-term, γ is assumed to be constant because we suppose that the firm's production function remains unchanged. The incompressible energy intensity varies in the long term because, from one period to another, the company can expand or shrink from the administrative point of view, which would reduce the fixed energy component. Similarly, the minimum production level can also vary according to market realities. Moreover, although the relationship between each variable in the expression and EI has not changed, it is nevertheless difficult to analyze the evolution of energy intensity because it depends on the simultaneous variation of all the variables in the equation. However, it should be noted that energy intensity improves, if and only if the firm invests in energy efficiency to reduce installed capacity and/or also invests in advanced technologies to improve its productivity. Energy intensity increases with installed power while it decreases with productivity.
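To make these comparative statics concrete, the short numerical sketch below evaluates the short-run expression EI = (1-γ)P_K/μ_A · ln(Q) + EI_0 for arbitrary, purely illustrative parameter values (they are not estimates from our data); it simply shows that EI rises less than proportionally with output and that the whole profile is lower for a more productive firm.

import numpy as np

def energy_intensity(Q, gamma=0.1, P_K=50.0, mu_A=200.0, EI0=0.05):
    """Short-run energy intensity EI = (1-gamma)*P_K/mu_A * ln(Q) + EI0.
    All parameter values are illustrative only."""
    return (1.0 - gamma) * P_K / mu_A * np.log(Q) + EI0

Q = np.array([100.0, 150.0, 225.0])            # output rising by 50% at each step
low_productivity = energy_intensity(Q, mu_A=100.0)
high_productivity = energy_intensity(Q, mu_A=300.0)

# EI rises with output, but less than proportionally (logarithmic form),
# and is uniformly lower for the more productive firm.
for q, lo, hi in zip(Q, low_productivity, high_productivity):
    print(f"Q={q:6.0f}  EI(low mu_A)={lo:6.3f}  EI(high mu_A)={hi:6.3f}")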
4 Empirical strategy
Impact of China's penetration on firms' productivity
As noted above, China's penetration into the African market can influence firms' productivity in different ways, mainly through import competition in inputs and outputs. On the one hand, imports of Chinese output generate competition that leads to a loss of market share and consequently a reduction in productivity. On the other hand, Chinese input penetration can facilitate investment in new capital goods or reduce intermediate input prices, generating both productivity gains and economies of scale. In this paper, we focus on output penetration due to data limitations : in the enterprise survey data, there is no detailed information about the origin of the imported inputs used by each firm.
The effect of Chinese imports on African firms' productivity has not been extensively investigated in the literature. [Darko, 2021] is among the few papers that addressed the issue with a comprehensive dataset and empirical strategy. However, the instruments used in their analysis, namely the maritime transport shock 6 and China's comparative advantage shock, do not fully respect the required exogeneity condition. According to the maritime transport measure, a firm's exposure to China's penetration increases when a high-quality seaport (in terms of size and depth) is nearby. Yet, for the instrument to be valid, proximity to a seaport must not affect firms' productivity other than by favoring the entry of Chinese products. This assumption is problematic, because a firm located near a large seaport is exposed not only to Chinese products but to all foreign products. Moreover, this proximity can facilitate its exports, as well as its imports of goods (such as capital goods or intermediate inputs) that can improve its productivity. It is therefore possible that proximity to a large port affects a firm's productivity without going through China's penetration. Concerning the second instrument, which follows [Autor, 2013], its appropriateness is questionable since the authors used other Sub-Saharan countries to construct it. Indeed, even if intra-African trade is not yet very developed, there are strong links between many African countries, in particular between geographically close ones. For instance, insecurity in some Sahelian countries can cause higher inflation in neighboring countries that are not directly affected by such insecurity ([Noubissi]). This calls into question the validity of this second instrument. We therefore refrain from using the instruments of these authors and explore another instrumentation approach against which their results can be compared. The following model is used to analyze the impact of China's penetration into African markets on African firms' productivity :
6. The instrument takes into account the size and depth of the seaport and its proximity to the enterprise.
\ln(TFP_{ikjt}) = \alpha_0 + \alpha\, china\_pen_{ikj(t-1)} + \alpha_1\, obst_{ikjt} + \alpha_2\, obst_{ikjt} \times china\_pen_{ikj(t-1)} + \beta X_{ikjt} + \gamma Z_{jt} + D_{kj} + D_{kt} + D_{jt} + \epsilon_{ikjt} \qquad (4.1)
where \ln(TFP_{ikjt}) is the natural logarithm of the TFP of firm i of industry k in country j at time t. Our variable of interest is china\_pen_{ikj(t-1)}, which represents China's penetration, while obst_{ikjt} \times china\_pen_{ikj(t-1)} is its interaction term with the electricity or financial obstacle. We consider its lag because a firm may take time to respond to Chinese competition; for instance, the theory of the firm typically allows one year for investment in capital goods. This penetration is measured following [Dorn] :
china\_pen_{ikjt} = \frac{M^{chn}_{kjt}}{Prod_{kj,2000} - X_{kj,2000} + M_{kj,2000}}
M^{chn}_{kjt} is the import from China in industry k and country j at time t. The denominator is the economic absorption in the year 2000, composed of the part of industry k production used by the national economy (the difference between production Prod_{kj,2000} and exports X_{kj,2000} in industry k) and the imports M_{kj,2000} of industry k products from the rest of the world. The choice of the year 2000 reduces the risk of endogeneity of the variable china\_pen_{ikjt} : we assume that a country's production in any industry in 2000 could not be affected by Chinese market penetration, since China's trade opening began at the end of 2001.
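For illustration, the penetration measure can be computed as in the sketch below, where the BACI and EORA extracts are replaced by hypothetical mini-tables and the column names are placeholders.

import pandas as pd

# Hypothetical mini-extracts; the real data come from BACI (imports) and EORA (absorption).
imports_chn = pd.DataFrame({
    "country": ["KEN", "KEN"], "industry": ["food", "food"],
    "year": [2006, 2013], "m_chn": [12.0, 30.0]})          # imports from China (USD m)
eora_2000 = pd.DataFrame({
    "country": ["KEN"], "industry": ["food"],
    "prod_2000": [500.0], "x_2000": [80.0], "m_2000": [60.0]})

eora_2000["absorption_2000"] = (eora_2000["prod_2000"]
                                - eora_2000["x_2000"] + eora_2000["m_2000"])

pen = imports_chn.merge(eora_2000[["country", "industry", "absorption_2000"]],
                        on=["country", "industry"], how="left")
# china_pen_{kjt} = M^chn_{kjt} / (Prod_{kj,2000} - X_{kj,2000} + M_{kj,2000})
pen["china_pen"] = pen["m_chn"] / pen["absorption_2000"]
print(pen[["country", "industry", "year", "china_pen"]])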
The variables X_{ikjt} capture firm-level characteristics, including firms' size (size) and their exporting status (Exporter). The latter takes the value of 1 if a firm exports more than 10% of its production. As for the firm's size, we distinguish small and medium enterprises (SMEs) from large enterprises. A firm is classified as 'small' or 'medium' when it has fewer than 70 employees 7 , in which case the variable size takes the value of 0; it takes the value of 1 for large firms. Other firm-level control variables include the financial obstacle (finan_obst) and the electricity obstacle (elect_obst). While both variables contain five modalities, we dichotomize them into a binary variable taking the value of 1 if a firm faces a major or severe financial or electricity obstacle, respectively, and 0 when there is no obstacle or it remains minor. We do not consider the case where the obstacle is moderate, to avoid any inaccuracy 8 in the analysis. The interaction term of these variables is also included in the model to take into account the case where both types of obstacles are present simultaneously. The variables Z_{jt} are measured at the country-industry-year level and include the penetration in African markets of African countries' main trade partners, which are likely to compete with both domestic firms and China in local markets. These partners are India and countries that belong to the same free trade agreement area 9 . Finally, we include country-industry fixed effects D_{kj}, as well as country-year D_{jt} and industry-year D_{kt} time trends, in order to control for any other sources of unobserved variability. Given that the sample does not contain many observations, the time variable is categorized into three groups, the first covering the years 2005 and 2006, the second covering 2007 and 2008, and the last covering the years from 2009 to 2018. We recognize that the last period is not stable due to the petroleum shocks of 2014 and 2016, but in light of the composition of our sample, this classification seems, in our view, preferable.
7. Third quartile of the "full-time employees" distribution.
8. A moderate obstacle is ambiguous and may tend to be a minor obstacle for some companies and a major one for others.
9. Economic Community of West African States (ECOWAS), East African Community (EAC), Southern African Development Community (SADC), Economic Community of Central African States (ECCAS).
The variable capturing China's penetration suffers from endogeneity issues for at least three reasons. First, productive firms participate more in international trade and are more exposed to import competition from China (i.e. reverse causality). Second, it is possible that firms that remain in the market are more productive and more competitive than those that disappear (i.e. selection bias). Finally, the estimates may also be affected by unobservable characteristics that are correlated with both the exposure to imports from China and firm productivity. Like [Darko, 2021], we follow the approach taken by [Autor, 2013], but modify it slightly by selecting ten developing countries 10 that have no economic linkage with any African country. Next, we apply a principal component analysis (PCA) to 12 economic variables 11 in order to obtain four factors. Finally, based on these four factors, we match each African country, through an ascending hierarchical classification (AHC), with one or several of the selected countries. The value of the instrument for each country is then the average contemporaneous Chinese penetration in the matched countries, for each industry.
10. Cambodia ; El Salvador ; Honduras ; Lao PDR ; Moldova ; Mongolia ; Myanmar ; Nepal ; Nicaragua and Papua New Guinea.
11. These variables are : (1) access to electricity in rural areas (% of rural population) ; (2) access to electricity in urban areas (% of urban population) ; (3) GDP per capita (constant 2010 US$) ; (4) final consumption expenditure (% of GDP) ; (5) imports of goods and services (% of GDP) ; (6) oil rents (% of GDP) ; (7) total natural resources rents (% of GDP) ; (8) value added in manufacturing (% of GDP) ; (9) GDP per capita, PPP (constant 2017 international US$) ; (10) exports of goods and services (% of GDP) ; (11) urban population (% of the total population) ; and (12) sea distance between China and each country.
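A possible implementation of this instrument is sketched below; the country indicators, penetration values and variable names are placeholders, and the three steps (PCA, hierarchical classification, averaging of contemporaneous Chinese penetration over matched countries) mirror the construction described above.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# country_vars: one row per country (African sample + comparison countries),
# columns = the 12 macro indicators listed above (here random placeholders).
rng = np.random.default_rng(0)
countries = ["KEN", "GHA", "SEN", "KHM", "NPL", "NIC", "MNG", "MDA"]
country_vars = pd.DataFrame(rng.normal(size=(len(countries), 12)),
                            index=countries,
                            columns=[f"ind_{i}" for i in range(1, 13)])

# 1) PCA: reduce the 12 indicators to four components.
scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(country_vars))

# 2) Ascending hierarchical classification on the four components (Ward linkage).
groups = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
matching = pd.Series(groups, index=countries, name="cluster")

# 3) Instrument: for each African country, average the contemporaneous Chinese
#    penetration observed in the non-African countries of the same cluster,
#    industry by industry (china_pen_comp is a hypothetical long-format frame).
african = ["KEN", "GHA", "SEN"]
china_pen_comp = pd.DataFrame({
    "country": ["KHM", "NPL", "NIC", "MNG", "MDA"] * 2,
    "industry": ["food"] * 5 + ["textile"] * 5,
    "year": [2006] * 10,
    "china_pen": rng.uniform(0, 0.3, 10)})
china_pen_comp["cluster"] = china_pen_comp["country"].map(matching)

iv = (china_pen_comp.groupby(["cluster", "industry", "year"])["china_pen"]
      .mean().rename("china_pen_iv").reset_index())
instrument = (pd.DataFrame({"country": african})
              .assign(cluster=lambda d: d["country"].map(matching))
              .merge(iv, on="cluster", how="left"))
print(instrument.head())

The instrument series china_pen_iv would then be used in a two-stage least squares estimation of equation (4.1), with the interaction terms and fixed-effect dummies added as described above.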
Impact of China's penetration on firms' energy intensity
Most of the previous studies that focused on energy intensity were performed at the macroeconomic level. The variables of interest used were, among others, trade openness ([Wang, 2021], [Le, 2020]), financial development ([Xue], [Adom, 2020], [Le, 2020]), and foreign direct investment ([Yang], [Hübler, 2010]). Few other studies were performed at the microeconomic level, precisely at the household level ([Hojjati] ; [Vringer] ; [Dong Feng]) and the firm level ([Zhang, 2020], [Inés], [Xue], [Yang, 2021]). The present analysis is close in spirit to these papers. [Yang, 2021] applied the Difference-in-Difference method to analyze the effect of trade policy uncertainty, defined by China's accession to the WTO, on the energy intensity of Chinese manufacturing firms. [Wang, 2021] conducted the same analysis by using mediation models and a regression discontinuity identification strategy in Chinese manufacturing at the industry level. In the same vein, [Adom, 2014] investigated the effects of the change in the trade structure and the technical characteristics of the manufacturing sector on energy intensity in Ghana, using the Phillip-Hansen, Park, and Stock-Watson cointegration models, which are more robust to serial correlation and exogeneity problems. All these papers used different empirical approaches, although they focused mainly on a single country. Our analysis covers additional dimensions : even if the issue at hand is analyzed at the firm level, the firms concerned do not belong to the same country and were not observed in the same period. This nature of the dataset makes it difficult to choose the appropriate econometric methodology. As productivity and energy intensity are both proxies for firms' performance, we build on equation (4.1) and postulate the following model :
\ln(EI_{ikjt}) = \alpha_0 + \alpha\, china\_pen_{ikj(t-1)} + \alpha_1\, obst_{ikjt} + \alpha_2\, obst_{ikjt} \times china\_pen_{ikj(t-1)} + \beta X_{ikjt} + \gamma Z_{jt} + D_{kj} + D_{kt} + D_{jt} + \epsilon_{ikjt} \qquad (4.2)
where EI_{ikjt} stands for the energy intensity, defined as the ratio of electricity cost to the firm's sales. Two different measures of electricity cost, and consequently two energy-intensity variables, are considered. The first uses only the electricity purchased from the network grid; the second uses the total cost of electricity, measured as the cost of electricity purchased from the grid plus the cost of the fuel used to generate electricity. In addition to the control variables of the productivity model (equation (4.1)), we include the share of production lost due to power outages (lossper_due_out). This model specification also suffers from endogeneity issues, since the energy-intensity indicator is calculated using data on enterprise production, while both the latter and Chinese penetration can be influenced by either demand or supply shocks. Therefore, we use the same instruments as described above to correct this endogeneity problem.
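For clarity, the two energy-intensity measures can be constructed as in the following sketch; the column names are hypothetical and costs are assumed to be already deflated and expressed in USD, as described in the data section.

import numpy as np
import pandas as pd

firms = pd.DataFrame({
    "sales":     [1_000.0, 2_500.0, 800.0],   # annual sales, USD
    "elec_grid": [40.0,    70.0,    30.0],    # electricity purchased from the grid
    "fuel_gen":  [5.0,     0.0,     12.0]})   # fuel used to self-generate electricity

# Measure 1: grid electricity only.
firms["EI_grid"] = firms["elec_grid"] / firms["sales"]
# Measure 2: total electricity cost (grid purchases + own-generation fuel).
firms["EI_total"] = (firms["elec_grid"] + firms["fuel_gen"]) / firms["sales"]

# The dependent variable in equation (4.2) is the natural logarithm of these ratios.
firms["ln_EI_grid"] = np.log(firms["EI_grid"])
firms["ln_EI_total"] = np.log(firms["EI_total"])
print(firms)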
Relation between TFP and energy intensity
The impact of productivity on firms' energy intensity has been largely investigated in the literature using different econometric approaches. For example, [Haider, 2020] employed the instrumental variable-generalized method of moments (GMM-IV) to examine the nature of the relationship between TFP and energy intensity in the Indian paper industry. [Sahu] and [Bagchi] provided similar analyses, focusing on all manufacturing sectors and using the fixed-effects approach. [Ladu] applied a cointegration approach to dynamic panel datasets to analyze the TFP and energy intensity nexus for Italian regions. In other studies, the change in productivity is equated with technological change or technological innovation, so that TFP does not appear directly in the model ([Pan]).
We do not intend to investigate only whether TFP influences energy intensity, but to explore whether there exists a bi-directional relationship between these two variables for firms in the African manufacturing sector. Given the operating conditions of African firms, it is important to consider the effect of energy efficiency, proxied here by energy intensity, on firms' productivity. This is because, according to the literature, electricity is the first obstacle faced by firms. Thus, it is possible that trade competition, including Chinese competition, constrains these firms to invest first in energy efficiency. This direction of causality between TFP and EI has been investigated by scholars. For instance, in Kenya's manufacturing sector, [Macharia, 2022] examined empirically the effects of energy efficiency on firm productivity using the generalized method of moments (GMM) estimator for dynamic panel models. The same analysis was performed by [Filippini, 2020], who applied a Difference-in-Difference (DID) model to Chinese iron and steel firms. Likewise, [Haider, 2017] employed a vector error correction mechanism (VECM) to explore the dynamic linkage between energy efficiency and TFP in India. In cross-country analyses of the productivity gains from energy efficiency, [Cantore, 2016] applied a panel fixed-effects model to 29 low-income countries, while a pooled ordinary least squares regression model was applied by [Montalbano] to a sample of 30 Latin American and Caribbean (LAC) states.
In the present analysis, we explore the simultaneous relationship between TFP and energy intensity, by adopting a simultaneous equation model approach. This allows us to take into account all aspects governing the relationship between energy efficiency and productivity in the African context. Based on our theoretical model, we consider the following model :
\ln(EI_{ikjt}) = \alpha_0 + \alpha_1 \ln(TFP_{ikjt}) + \alpha_2\, intk_{ikjt} + \alpha_3\, log\_Va_{ikjt} + X_{ikjt}\lambda + D_{kj} + D_{kt} + D_{jt} + \epsilon_{ikjt}
\ln(TFP_{ikjt}) = \theta_0 + \theta_1 \ln(EI_{ikjt}) + \theta_2\, intk_{ikjt} + \theta_3\, log\_Va_{ikjt} + Z_{ikjt}\sigma + D_{kj} + D_{kt} + D_{jt} + \psi_{ikjt}
According to the first equation, TFP has an impact on energy intensity, while the second captures the inverse relationship. The main explanatory variables identified from the theoretical model are the capital intensity, i.e. the ratio of capital cost to labor cost (intk), and the logarithm of value added (log_Va). The production Q included in the theoretical model is replaced by the corresponding value added, while the physical capital stock is replaced by the capital intensity to avoid potential multicollinearity problems. The variables X_{ikjt} and Z_{ikjt} represent control variables, while ϵ_{ikjt} and ψ_{ikjt} are the error terms of the first and second equations, respectively. By construction, the model suffers from endogeneity issues. Apart from variables that are considered exogenous or predetermined, all dependent variables that appear on the right-hand side are treated as endogenous. Further, because some of the explanatory variables are also the dependent variables of other equations in the system, the error terms of the equations are expected to be correlated. The parameters of the system have to be identified in order to estimate it. According to the order condition rule, the number of exogenous variables 12 not appearing in the first equation must be at least as large as the number of endogenous variables appearing on the right-hand side of the first equation [Wooldridge],
and vice-versa. The next step is to consider the most appropriate estimation method. Due to the endogeneity issues, namely the correlation between regressors and error terms in each equation of the system, the traditional ordinary least squares (OLS) method leads to inconsistent parameter estimates. Available methods that deal with this problem can be classified into two different strategies : single-equation methods and full-system methods [Davidson, 1993]. The former, which include primarily the 2SLS and Limited-Information Maximum Likelihood (LIML) estimators, estimate the model equation by equation. The full-system methods, of which the main examples are 3SLS and Full-Information Maximum Likelihood (FIML), estimate all the parameters of the model at once. To deal with endogeneity, 2SLS instruments out the endogenous regressors. In the case of the "weak instrument" problem, the alternative LIML technique is warranted. LIML yields invariant 13 estimates which in many respects have better finite-sample properties than the 2SLS estimator [Davidson, 1993]. However, single-equation techniques such as the 2SLS and LIML estimators ignore two important issues : correlation between error terms and cross-equation restrictions 14 . In contrast, full-system techniques like the FIML and 3SLS estimators, in which all the parameters of the model are estimated jointly, take these problems into account and generally yield more efficient estimates. The FIML estimator brings, however, no advantage over the 3SLS estimator, and is much more complicated to implement [William, 2008]. Because the full-system methods require specifying the structure of all equations in the model, misspecification of one equation will, in general, lead to inconsistent estimates of all equations ([Davidson, 1993], [Wooldridge]). When the sample size is large and heteroskedasticity is severe, the heteroskedastic three-stage least squares (H3SLS) estimator is likely to be the preferred estimator [Davidson, 1993]. Finally, note that in the event an equation is not identified, that is, there is no instrument for a dependent variable, the single-equation method is unable to estimate the system's parameters. The use of the 3SLS method requires, however, some assumptions. For instance, we can impose a restriction by equalizing the parameters of variables present in both equations of the system, or consider that the variance-covariance matrix of the error terms (ψ_{ikjt}, ϵ_{ikjt}) is diagonal [Wooldridge]. It is also possible to introduce into the identified equation the combination (square or interaction) of other variables to generate the missing instrument.
13. Maximum Likelihood methods yield estimates that are invariant to reparameterization.
14. Sometimes, some systems of equations require that the cross-equation restriction be estimated, which is not possible with a single-equation technique.
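As an illustration of the full-system route, the sketch below estimates the two-equation system with the IV3SLS estimator of the Python linearmodels package, assuming that package is available; the data are simulated, variable names follow the model above, and the fixed effects and additional controls of the actual specification are omitted for brevity. The instruments shown (outage losses for energy intensity; skilled employment and its interaction with capacity utilization for TFP) are those discussed in the results section.

import numpy as np
import pandas as pd
from linearmodels.system import IV3SLS

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "ln_tfp":   rng.normal(size=n),
    "ln_ei":    rng.normal(size=n),
    "intk":     rng.normal(size=n),
    "log_va":   rng.normal(size=n),
    "loss_out": rng.uniform(0, 0.4, n),      # share of output lost to power outages
    "skill":    rng.integers(1, 200, n).astype(float),
    "cur":      rng.uniform(0.3, 1.0, n)})
df["skill_x_cur"] = df["skill"] * df["cur"]

# Each equation instruments the other endogenous regressor:
# ln_ei is instrumented by outage losses, ln_tfp by skilled employment and its
# interaction with the capacity utilization rate.
equations = {
    "ei_eq":  "ln_ei  ~ 1 + intk + log_va + [ln_tfp ~ skill + skill_x_cur]",
    "tfp_eq": "ln_tfp ~ 1 + intk + log_va + [ln_ei ~ loss_out]"}

res = IV3SLS.from_formula(equations, df).fit(cov_type="robust")
print(res.summary)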
Data and main variables
In this section, we present the main data that are used to estimate all the above models. The main variables are also presented in detail. Appendix 1 reports for each variable, standard descriptive statistics, including the mean, standard deviation, minimum, and maximum, over the sample.
Data
The data on firms used in this study are taken from the database 15 set up by the World Bank to estimate the productivity of firms. Only firms that have been surveyed at least twice are included. In total, there are 2,421 firms with 5,395 observations in 25 African countries. The database contains all the variables (sales and factors of production) needed to estimate productivity. However, we have supplemented it with energy variables, namely the total costs of electricity and fuel. Following the procedure used by the World Bank, we have adjusted them by removing outliers, deflating them with the GDP deflator (base 2009), and converting them into dollars according to the fiscal year of each company. Information on imports of Chinese goods is extracted from the Base pour l'Analyse du Commerce International (BACI) database of the Centre d'Études Prospectives et d'Informations Internationales (CEPII). Finally, we have extracted from the EORA database the variables needed to calculate the penetration of Chinese outputs and its instrument. This information is available at the industry level and by country. The manufacturing sector is split into seven specific industries, with an eighth industry that includes all unclassified entities. In order to harmonize the data, taking into account the variety of their sources, we also classified the 2,421 companies into eight industries, but only the first seven are considered in our study.
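The cleaning steps can be summarized as in the sketch below; the mini-panel, deflator and exchange-rate values are hypothetical, and the actual World Bank procedure is more detailed.

import pandas as pd

# Hypothetical mini-panel; the real extract comes from the World Bank firm-level database.
df = pd.DataFrame({
    "country": ["KEN", "KEN", "GHA"],
    "fiscal_year": [2007, 2013, 2013],
    "electricity_cost_lcu": [3.2e6, 5.1e6, 9.0e5],   # local currency units
    "fuel_cost_lcu":        [4.0e5, 0.0,   2.5e5],
    "deflator_2009":        [92.0,  118.0, 121.0],   # GDP deflator, base 2009 = 100
    "lcu_per_usd":          [68.0,  86.0,  2.0]})

for col in ["electricity_cost", "fuel_cost"]:
    df[col + "_usd"] = (df[col + "_lcu"] / df["lcu_per_usd"]) / (df["deflator_2009"] / 100.0)

# Outliers would then be trimmed (e.g. at the 1st/99th percentiles of each cost variable)
# before computing energy intensity and estimating the models.
print(df[["country", "fiscal_year", "electricity_cost_usd", "fuel_cost_usd"]])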
Explained Variables
Energy Intensity
Energy efficiency can be computed regardless of whether energy is consumed at home, in the office, or in the factory. Since energy exists in several forms and given that there is a diversity of energy use, there are several indicators. Generally, they are defined as the ratio of output to the energy consumed for that purpose, or sometimes the inverse of that ratio. The useful output can be expressed in various units, i.e. units of mass or volume, energy, currency or distance, etc. This is what led [Patterson, 1996] to the categorization of all indicators into four main groups : (purely) thermodynamic, physical-thermodynamic, economic-thermodynamic, or (purely) economic.
-Thermodynamic indicator
Also called enthalpic efficiency, this indicator has both its numerator and denominator expressed in units of energy. The output is energy, and the calculation of the two components of the ratio is based either on the first or on the second law of thermodynamics. To do so, it exploits the ambient conditions of the system, i.e. temperature, pressure, number of molecules, etc. It has the advantage of being useful in thermal processes where heat is exchanged or transferred. However, the information conveyed by this indicator is very technical and is therefore not understandable by energy consumers who are not energy specialists or technicians.
-Physical-thermodynamic indicator
Unlike the previous indicator, physical-thermodynamic indicators evaluate the consumer's final task in a unit that is accessible to him. These units can be kilometers (e.g., distance traveled) or tons (e.g., quantity transported or produced). Although the denominator is always expressed in units of energy, any consumer can define or appreciate his utility per unit of energy consumed and thus compare and explain two distinct situations or systems. However, the real problem here is how to define the useful task. For example, consider a company that produces wheat flour with a mill. Assume that the mill uses a thermal engine, converting thermal energy into mechanical energy through a gear system whose rotary motion crushes the raw material, wheat. In this case, there are three possible useful outputs, namely the rotation speed of the system, the quantity of wheat crushed, or the quantity of flour obtained, knowing that each of these variables can be influenced by other factors endogenous or exogenous to the whole production system.
-Economic-thermodynamic indicator
Like the traditional productivity of labor or capital, the economic-thermodynamic indicator represents the productivity of energy. The latter is useful at the macroeconomic level because it allows us to know the gross output of an economy per unit of energy consumed. The fact that useful output is expressed in monetary units is a limitation in terms of its use to compare two economies with different currencies since the exchange rate does not incorporate the purchasing power of each of these currencies in their respective economies, and this can bias a comparative analysis. However, to overcome these shortcomings, [Reister, 1987] suggests calculating GDP in purchasing power parity.
-Economic Indicator
Here, the two components of the indicator are evaluated in monetary units. It has the advantage of taking into account economic realities and facilitating comparison between countries since it is dimensionless. However, it does not truly reflect the energy efficiency of either a country or an industry. Indeed, at the company level, the value of the indicator may change not because the company's energy consumption has changed but because the energy price or the market price of the output has varied. However, in an econometric analysis, including these control variables makes it possible to rule out the influence of these economic contingencies.
-Choice of indicator
The thermodynamic indicator seems to be more objective than all the others because it does not depend on the economic variables so it is easy to compare the energy efficiency of a system over time. This last aspect is also appropriate for the physical-thermodynamic indicator, provided that the same output is maintained over time. Similarly, the comparison between any two countries is also easy with these two indicators. However, the physical-thermodynamic indicator is less adapted to our case as we are working on a sample of different types of enterprises whose outputs are not expressed in the same unit. Likewise, the fact that these indicators do not take into account the type (or quality) of energy used can also bias the comparison of the energy efficiency of two systems powered by two different types of energy. Another problem is the allocation of energy in the case of multi-product companies. Indeed, when the company produces several goods, it is difficult to know the quantity of energy consumed by each of these goods. This is especially true for thermodynamic and physical-thermodynamic indicators. As for the other types, provided that the two products are separable, a simple approach would be to allocate the energy consumed on a pro-rata basis according to the share of each product, for example, in turnover of a firm, or by using an econometric approach ( [Cleland, 1981], [Rao, 1981]).
Summing up, as we work on companies belonging to different sectors and different countries, a pure indicator is more appropriate than a hybrid energy intensity indicator. The pure thermodynamic indicator is very difficult to evaluate when the direct output of the company is not energy, whereas the economic indicator can be adapted to all types of companies, whatever their output. It follows that the latter is the most suitable and is therefore retained for the rest of our study. Moreover, the economic indicator has been widely used in the literature by different authors in various analyses. Although it does not exactly reflect the energy efficiency of a process or a company, it at least partially takes into account the quality of the energy source because the more environmentally friendly it is, the more expensive it is, and the better the system that consumes it. Then comes the question of the choice of energy source to consider. Factories generally use two sources of energy : electricity and petroleum products such as gas, gasoline, diesel, and heavy fuel oil. While electricity is ubiquitous in all businesses regardless of their nature or industry, petroleum products are not, and they differ widely in quality and characteristics. It follows that the most plausible choice would be electricity. However, since heat is essentially cheaper to produce with petroleum products, this choice would be detrimental if several firms used this energy (heat) in their production process. In conclusion, we retain (pure) economic indicators whose inverse is called energy intensity, and electricity as the main energy source of production.
Explanatory variables
Total Factor Productivity
Consider the following standard production function (4.3), where K (natural logarithm of capital) and L (natural logarithm of labor cost) are the main factors of production. The left-hand side variable is the natural logarithm of the value added in production (Val_Ad). The error term is decomposed into two components, productivity ω_{ijt} and the idiosyncratic error Ψ_{ijt}. The former is known by the producer but not by the econometrician, while the latter is observed by neither when input decisions are made. This leads the producer to increase (or decrease) inputs when productivity increases (decreases), so that inputs are adjusted according to the dynamics of productivity. Consequently, there is an endogeneity issue. To address it, many papers in the literature resorted to the GMM (Generalized Method of Moments) approach ([Arellano, 1991] ; [Arellano, 1995] ; [Blundell, 1998] ; [Arellano, 2001] ; [Elu, 2010]), fixed-effects panel methods ([Mulder], [Hoch, 1962]), instrumental variable (IV) methods ([Ackerberg, 2007], [Lu, 1999]), or input control approaches such as that of [Levinsohn, 2003] (LP), which is based on that of [Olley, 1992] (OP). In our case, the fact that our panel dataset covers only two periods makes it difficult to use the GMM method.
Val\_Ad_{ijt} = \alpha_0 + \alpha_k K_{ijt} + \alpha_l L_{ijt} + \omega_{ijt} + \Psi_{ijt} \qquad (4.3)
The panel fixed-effects approach assumes that firms' productivity does not change across time, but this is a strong assumption since productivity is very sensitive and changes according to several parameters, such as input quality ([Fox], [Halpern]) or firm size ([Dhawan]), which can themselves change over time. As for the instrumental method, the instruments used are input prices, which can vary across firms and are rarely available. Concerning the LP and OP approaches, it is important to note two fundamental points : these methods are applied to only one sector 16 in only one country 17 , which allows the assumption that all firms belonging to each sector face the same market conditions. Other scholars used non-parametric methods ([Hall]) where the production function parameters, which are also the output-input elasticities, are calculated as the share of each input's cost in total cost. This calculation is performed for each input at the industry level before inferring productivity at the firm level by a simple difference. However, this approach is based on the assumptions of constant returns and perfect competition in the market. Few studies have analyzed cross-country productivity at the firm level. [Şeker, 2018] made a cross-country analysis of TFP on a sample of 69 countries and about 19,787 firms. They applied the non-parametric approach of [Hall] to the trans-log production function 18 . They first determine the model's parameters and then compute productivity at the firm level before aggregating it by industry for each country. To check whether China transfers productivity-enhancing technology to Sub-Saharan African manufacturing firms in the context of Sino-African trade, [Elu, 2010] used the GMM method to evaluate the effect of Sino-African trade on manufacturing firms' productivity in five countries, using data from the Centre for the Study of African Economies (CSAE) at Oxford University. [Darko, 2021] used the LP approach to perform the same analyses. To respect the assumptions of this approach, they estimated productivity by industry and by country for all firms, using the Enterprise Survey Data from the World Bank.
In this paper, we also use the LP's approach. [Levinsohn, 2003] used intermediate goods to correct endogeneity while [Olley, 1992] used investment. Since very few companies have made investments in our database, it would be advantageous to use the former approach in order to exploit the maximum possible observations of our data. Following [Darko, 2021], we estimate the productivity at the firm level, by each industry-country group.
16. Telecommunication in the case of [Olley, 1992] and manufacturing industries (Food products, Metals, Textiles, and Wood products, separately) in the case of [Levinsohn, 2003].
17. USA for [Olley, 1992] and Chile for [Levinsohn, 2003].
18. This function does not impose constant elasticities of production factors, but they are not monotonic like in the Cobb-Douglas model and are harder to interpret.
Presentation of Levinsohn and Petrin (2003) method
The estimation method of the production function developed by LP is applicable when the dependent variable is revenue (price multiplied by quantity) or value added (revenue minus intermediate inputs). The latter is used in this paper. It is based on three major conditions that are described below. Let us consider our production function in equation (4.3). The first condition stems from the endogeneity problem mentioned above, that is, the fact that the company increases its intermediate inputs as its productivity increases, conditionally on its capital stock level. We translate this phenomenon into the following equation :
i_t = f_t(k_t, \omega_t) \qquad (4.4)
where i_t is the intermediate input (material) and f_t represents the demand function for intermediate inputs. According to the separability condition, the production technology must be separable in the particular input that is used as a proxy.
The condition denoted as the monotonicity condition stipulates that i_t must be strictly increasing in productivity. We need this condition in order to invert f_t and obtain ω_t as a function of k_t and i_t. Note that the demand function is indexed by t to take into account the change in intermediate input prices over time, which is common across all firms in a given industry. This leads to the second condition, denoted the perfect competition condition, which stipulates that firms in the same industry must belong to the same market and hence face the same input and output prices.
So,
\omega_t = f_t^{-1}(k_t, i_t) = w_t(k_t, i_t) \qquad (4.5)
Substituting (4.5) into (4.3) gives
Val\_Ad_{ijt} = \alpha_l L_{ijt} + \Phi_t(k_t, i_t) + \Psi_{ijt} \qquad (4.6)
with \Phi_t(k_t, i_t) = \alpha_0 + \alpha_k K_{ijt} + w_t(k_t, i_t)
Besides controlling for endogeneity, another advantage of this method 20 also lies in the fact that it gives exclusively positive values, unlike other methods such as the GMM and panel fixed-effects.
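A stylized two-step implementation in the spirit of [Levinsohn, 2003] is sketched below; the synthetic panel, the third-order polynomial approximation of Φ_t and the AR(1) law of motion for productivity are simplifying assumptions, and the estimation by industry-country group used in the paper is omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.optimize import minimize_scalar

def lp_productivity(df):
    """Stylized Levinsohn-Petrin routine on a firm panel with columns
    va (log value added), l (log labour), k (log capital), m (log materials),
    firm and year. Returns (alpha_l, alpha_k, omega series)."""
    # Stage 1: regress va on l and a polynomial in (k, m) to recover alpha_l and Phi.
    poly = np.column_stack([df.k, df.m, df.k**2, df.m**2, df.k*df.m,
                            df.k**3, df.m**3, df.k**2*df.m, df.k*df.m**2])
    X1 = sm.add_constant(np.column_stack([df.l.values, poly]))
    s1 = sm.OLS(df.va, X1).fit()
    alpha_l = float(np.asarray(s1.params)[1])
    phi_hat = df.va - alpha_l * df.l - s1.resid        # fitted Phi_t(k, m)

    # Stage 2: choose alpha_k so that the innovation in omega (from an AR(1)
    # law of motion) is orthogonal to current capital.
    d = df.assign(phi=phi_hat).sort_values(["firm", "year"])
    d["phi_lag"] = d.groupby("firm")["phi"].shift(1)
    d["k_lag"] = d.groupby("firm")["k"].shift(1)
    d = d.dropna(subset=["phi_lag", "k_lag"])

    def moment(alpha_k):
        omega = d.phi - alpha_k * d.k
        omega_lag = d.phi_lag - alpha_k * d.k_lag
        g = sm.OLS(omega, sm.add_constant(omega_lag)).fit()   # omega_t = g(omega_{t-1}) + xi_t
        xi = g.resid
        return float(np.mean(xi * d.k) ** 2)                  # target E[xi_t * k_t] = 0

    alpha_k = minimize_scalar(moment, bounds=(0.0, 1.0), method="bounded").x
    omega = df.va - alpha_l * df.l - alpha_k * df.k
    return alpha_l, alpha_k, omega

# Tiny synthetic panel just to show the call (in the paper the routine is run
# separately for each industry-country group).
rng = np.random.default_rng(2)
panel = pd.DataFrame({"firm": np.repeat(np.arange(60), 2),
                      "year": np.tile([2006, 2013], 60)})
panel["k"] = rng.normal(5, 1, len(panel))
panel["l"] = rng.normal(4, 1, len(panel))
panel["m"] = rng.normal(4.5, 1, len(panel))
panel["va"] = 0.3*panel.k + 0.6*panel.l + rng.normal(0, 0.2, len(panel))
print(lp_productivity(panel)[:2])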
China's penetration
The calculation of the Chinese penetration in the domestic market of African countries requires variables such as imports from China and absorption in the domestic economy. The former is directly extracted from the BACI database, while the latter is determined thanks to 20. Further details on this estimation method could be found in [Levinsohn, 2003].
the EORA database. In fact, production, exports, and imports are available at the industry level, and this makes it possible to calculate the absorption in the domestic economy in the year 2000. 21 Since the numerator (of this variable) varies from one year to the other, it is better to use the BACI database because its values are observed, and are therefore closer to reality than those of EORA, whose values are based on estimates. Nevertheless, the EORA Database has been used in many academic studies. The instrument is constructed following [Autor, 2013] and measures the penetration of China in economically similar countries to those in our sample that is concerned.
Financial development
A country's finance is multi-faceted and cannot be grasped by a single variable. This explains why the World Bank and the IMF have released several indicators 22 to better assess the financial system of a country. Many papers have measured financial development using bank credit to the private sector as a percentage of GDP, the ratio of total bank assets to GDP, and stock market capitalization as a percentage of GDP ([Xiao, 2012], [Arellano], [Demirguc-Kunt, 2008]). Similarly, past financial crises have highlighted the significant role of non-bank financial institutions and thus led to using total financial assets to GDP as an alternative proxy ([Cheng], [Liang, 2012], [Rateiwa]). However, these indicators do not provide information on the accessibility and efficiency of these institutions, nor do they incorporate financial market characteristics. To fill this gap and exploit the large amount of data available on financial systems, [Čihák, 2012] developed several measures of four characteristics of financial institutions and markets, namely (a) the size of financial institutions and markets (financial depth), (b) the degree to which individuals can and do use financial institutions and markets (access), (c) the efficiency of financial institutions and markets in providing financial services (efficiency), and (d) the stability of financial institutions and markets (stability). The authors recognize that financial depth, access, efficiency, and stability might not fully capture all features of financial systems, and they make no attempt to construct a single composite index. However, based on the first three characteristics, [Svirydzenka, 2016] proposed aggregating them in a pyramidal form into a single index of financial development.
Moreover, in developing countries, especially in African countries, the financial sector is poorly developed, which has led many researchers ([Jalilian], [Dawson], [Perez-Moreno, 2011]) to favor the traditional indicator of the ratio of bank credit to GDP, despite its limitations. Given that the analysis in this paper is at the firm level, we construct our variable measuring the level of financial constraints faced by firms as the level of the financial obstacle, as perceived by the firm manager ([Sleuwaegen, 2002]). This information is available in the Enterprise Surveys data of the World Bank. [Harrison] also used the same indicator, but pointed out that it is likely endogenous. They then interact its mean, at the city level, with the size of the firm to reduce the endogeneity of the variable. The financial development

All these models show a significant and negative effect of China's penetration on firms' productivity in the full sample and for small (and medium) enterprises. Large firms do not seem to react to China's competition by increasing their productivity. India's penetration in African countries' markets does not affect firms' productivity, apart from Model 1 where the coefficient of the related variable is negative and statistically significant for large firms. However, the latter finding is not sufficient to deduce that imports from India represent a threat to African firms. The penetration of FTA members in African markets has sometimes exerted a significant negative impact on African firms' productivity, but the estimate is no longer statistically significant when all control variables are taken into account. This instability in the estimates prevents us from concluding that FTA members represent a threat to African firms. Table 4.1 also shows that the firm's size and its exporting status have a strong positive and statistically significant effect on its productivity. That means that larger firms and exporting firms tend to have high productivity. This is in line with several papers, such as [Bernard, 2004] and [Melitz].
While the electricity obstacle does not affect firms' productivity in Model 2, Model 3 reports that the finance obstacle reduces firms' productivity. Both model specifications suggest that when firms face electricity or financial obstacles, they further improve their productivity as China's penetration in the market increases. This counter-intuitive result is confirmed in Models 4 and 5, where both obstacles have a negative, statistically significant effect on firms' productivity, but to a lesser extent for the finance obstacle than for the electricity obstacle in the last model. This can be explained by the fact that electricity continues to be an obstacle for an enterprise when it also faces financial obstacles. Hence, information on financial obstacles is partly captured by the existence of electricity obstacles. According to these results, firms in African countries that face market failures in the electricity and financial sectors do not address these obstacles in the absence of (foreign) threats in the local market. More precisely, these results show that China's penetration provides incentives to SMEs to improve their TFP when they face electricity obstacles, but not when they face financial obstacles. It seems that it is much more difficult for firms to overcome financial obstacles than electricity obstacles. Contrary to SMEs, China's penetration does not seem to incentivize large firms to improve their productivity when they face electricity obstacles. In contrast, China's penetration leads to an improvement in large firms' productivity when these firms face financial obstacles. This result can be explained by the fact that electricity is not a big obstacle for large enterprises because, in the presence of a power outage, they can profitably use a generator thanks to economies of scale. That is not obvious for SMEs, which perhaps have to wait for stronger market competition to make this investment. Also, large enterprises can more easily overcome financial obstacles and make investments to improve their productivity ([Beck, 2006b]). However, these results remain puzzling because they show that the average impact of China's penetration on firms' productivity in the full sample is negative, while (small) firms that face electricity obstacles improve their productivity. This finding runs against results largely found in the literature, but we can be tempted to argue that it is specific to the case of China's penetration into African markets. Let us also assume that there are two types of enterprises in the market : firms that face either an electricity or a finance obstacle, with productivity ω_1, and firms that face none of these obstacles, with productivity ω_2. Suppose that the minimum productivity to stay in the market is ω_0. The presence of these two types of enterprises in the market means that ω_1 > ω_0 and ω_2 > ω_0 (Figure 4.4). Suppose that after China's trade opening, the productivity threshold increases and becomes ω_0' because Chinese competition reduces the market price. The lower the price, the greater the need for firms to increase their productivity to sell at this price. Hence, to continue to stay in the market, firms have to invest to improve their productivity.
Based on our results, we can argue that firms that faced obstacles (before China's entry into the African market) which undermined their productivity try to overcome these obstacles by improving their productivity (from ω_1 to ω_1') in order to stay in the market. More precisely, in order to improve their productivity when facing Chinese competition in local markets, African SMEs struggle to solve their electricity problem and large enterprises bargain to access finance. SMEs that experience financial obstacles fully undergo the shock. According to our results, firms that do not face any substantial obstacle do not undertake any action to improve their productivity. This means that they do not wait for competition (including from China) to improve their productivity. Conversely, because the overall impact is negative for SMEs, the productivity of firms that do not face any obstacle decreases (from ω_2 to ω_2') when these firms face competitive pressure from China's imports. Although difficult to explain, these findings may reflect the fact that the competition from China is not only based on prices. Indeed, China is a particular trading partner, able to adapt to the market conditions of host countries. As the purchasing power of consumers in developing countries is low, China can supply low-quality goods (although of a higher quality than similar local products) at a price lower than the one prevailing in the local market of the host country. For example, [Gebre-Egziabher, 2007] found that China's penetration in the Ethiopian textile sector led to a reduction in Ethiopian firms' production due to its low product price, but also led to better product quality, mainly for firms that struggled to stay on the market. We can infer that, apart from resolving traditional problems, African firms have to innovate to face import competition from China. If firms are efficient but their products are not wanted by local consumers, their efficiency would not bring them any advantage, but would instead result in a loss of market share and consequently a loss in their productivity.
China's penetration and firm's energy intensity
The effect of China's penetration on firms' energy intensity is presented in Tables 4.2 and 4.3. The indicator of firms' energy intensity used in Table 4.2 takes into account only electricity purchased from the network grid, while the indicator used in Table 4.3 is calculated using the total cost of electricity. Table 4.2 shows that China's penetration does not affect firms' energy intensity. In fact, once the endogeneity is corrected, the coefficient becomes statistically significant and negative, but the significance disappears when we add control variables, even if in the last two model specifications the impact of China's penetration on African firms' energy intensity is negative and slightly statistically significant. This means that China's penetration leads to a slight decrease in energy intensity. Even though India's penetration is positive and statistically significant in the last two model specifications, it is not statistically significant in the previous ones. It is the same for other FTA members' penetration, which shows no statistically significant coefficients in any model. Results also show that being an exporting firm does not have a statistically significant effect on energy consumed per unit produced, while being a larger firm lowers energy intensity, perhaps due to economies of scale. All model specifications show a large statistically significant effect of the loss due to outages on energy intensity, but only for SMEs. This suggests that once the products are made, the absence of energy leads to a loss of part of the production, due perhaps to a depreciation of the goods produced. This is likely the case for the agro-industry, where product conservation is indispensable because products are perishable. In this case, the fact that not all output is sold in the market leads to a higher energy intensity. This reveals a limit of using the pure economic indicator of energy intensity to proxy for energy efficiency : even if a firm is energy efficient, it can have a high energy intensity given the way the indicator is calculated. Finally, the estimation shows that both financial and electricity obstacles do not lead African firms to invest in energy efficiency, even in the presence of competitive pressure from imported Chinese products. The results using the total cost of energy show a more pronounced and statistically significant negative impact of trade competition from China on the local market for SMEs. But this decrease in energy intensity can have many explanations. For example, due to Chinese competition, a firm can reduce its capital intensity so as to reduce its energy cost even if it becomes less productive. Likewise, based on our theoretical model, the logarithmic relationship between energy intensity and production suggests that a slight fall in the production of SMEs leads to a large reduction in energy intensity. It follows that the competitive effect of China on SMEs can lower their energy intensity, ceteris paribus. Other trading partners also influence firms' energy intensity significantly, but to a lesser extent. There is no statistically significant effect of being an exporting firm on the firm's energy intensity, while being a larger enterprise can reduce its total energy intensity. The loss due to power outages continues to increase the firm's energy intensity. This means that the use of an electrical generator does not help firms reduce their energy intensity.
Compared to the case of electricity intensity, there may be a compensatory effect here, namely, the use of a generator reduces the loss of the quantity produced but may increase the energy costs due to the purchase of diesel. We note from the results reported in Table 4.1 that large enterprises that are confronted with financial obstacles make the investment to improve their productivity in the context of competitive pressure from China. This should lead to a change in energy intensity. At the same time, the results reported in Table 4.3 do not show a statistically significant effect of the interaction term between finance obstacle and China's competition on energy intensity. This result can also be explained by the logarithm relationship between energy intensity and production level according to the theoretical model described above. For higher values of production, a large increase of production is associated with a small increase in energy intensity, ceteris paribus. Table 4.1 also shows that the productivity of SMEs that are confronted with financial obstacles does not increase under the competitive pressure from China, while firms confronted with electricity obstacles increase it. According to the results in Table 4.3, the energy intensity increases for SMEs that are confronted with financial obstacles. Hence, we can deduce that SMEs that are likely confronted with both electricity and finance obstacles struggle only to get the necessary finances to purchase generators when they face competitive pressure from China.
In the last model specification, the positive signs of the interaction terms suggest that Chinese import competition increases energy intensity whether firms face financial obstacles, electricity obstacles, or both, and that the effect is strongest when firms are confronted with both types of obstacles.
Productivity and energy intensity
We start this analysis with the single-equation approach and the 2SLS model (Table 4.4) before moving to the 3SLS estimator (Table 4.5). In the former, we instrument energy intensity with the variable lossper_due_out because it is strongly correlated with energy intensity but not with the dependent variable, productivity. For the latter, it is difficult to find an instrument that satisfies the key conditions. According to the literature, firms' productivity is correlated with firm size, that is, large firms tend to have high productivity, as observed in Table 4.1. According to Tables 4.2 and 4.3, energy intensity is also strongly correlated with firm size. Despite this, we use firm size as an instrument for productivity (no other instrument is available), but in different ways: in categorical form (size) and in continuous form. Instead of the firm's total size, we retain only the number of skilled employees (skill), as they are likely to have more impact on the firm's efficiency. The number of skilled employees is also interacted with the capacity utilization rate (cur), because firms of the same size that do not use their equipment at the same level may have very different productivities. The results are presented in Table 4.4. The variable lossper_due_out is strongly correlated with energy intensity. Likewise, the variables size and skill are significantly correlated with the productivity variable, but with an unexpected negative sign; we therefore have to be cautious when interpreting the outcomes related to these variables. According to the estimation results, a rise in firms' productivity leads to a fall in energy intensity, while the latter does not significantly affect the former. Apart from the weakness of the instruments for productivity, these results are not surprising. As shown theoretically, energy intensity and productivity are inversely correlated, but the empirical results indicate that the causality seems to run in one direction only, namely from productivity to energy intensity. Referring to our theoretical model, we can argue that the increase in energy intensity is perhaps due to the increase in both production and installed physical capital, so that, through an offsetting effect, energy intensity ultimately exerts no significant effect on productivity. On another note, the value-added and capital-intensity variables affect firms' productivity positively and statistically significantly, but not firms' energy intensity. The loss due to outages (lossper_due_out) and the firm's size, as well as its interaction with the capacity utilization rate, have the same signs as in the first stage.
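As an illustration of this single-equation strategy, the sketch below estimates the productivity equation by 2SLS, instrumenting energy intensity with the outage-loss variable; it assumes the linearmodels package, the file name is a placeholder, and the column names (log_tfp, log_tei, log_va, intk, lossper_due_out) only loosely mirror those of Table 4.4.

import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("firms.csv")  # placeholder file with the firm-level variables

# Productivity equation: log TFP on (instrumented) total energy intensity and controls.
# The bracketed term marks log_tei as endogenous, instrumented by the loss due to outages.
model = IV2SLS.from_formula(
    "log_tfp ~ 1 + log_va + intk + [log_tei ~ lossper_due_out]",
    data=df,
)
results = model.fit(cov_type="robust")  # heteroskedasticity-robust standard errors
print(results.summary)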
As the 3SLS estimator does not test the quality of the instruments but requires a correct specification and respect of the identification rules, we use the previous instruments to estimate the structural equation model (SEM) with this estimator. The results in Table 4.5 indicate that firms' energy intensity does not have a statistically significant effect on their productivity, whereas productivity reduces energy intensity in a statistically significant way. All other variables keep their sign and remain statistically significant, except for the capital intensity indicator.
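For completeness, a minimal sketch of the corresponding system estimation is given below, again assuming the linearmodels package; the two-equation system and the variable names are illustrative and do not reproduce the thesis's exact specification.

import pandas as pd
from linearmodels.system import IV3SLS

df = pd.read_csv("firms.csv")  # placeholder file with the firm-level variables

# Two simultaneous equations: productivity (log_tfp) and energy intensity (log_tei)
# each enter the other equation as an endogenous regressor with its own excluded instrument.
equations = {
    "productivity": "log_tfp ~ 1 + log_va + intk + [log_tei ~ lossper_due_out]",
    "energy_intensity": "log_tei ~ 1 + lossper_due_out + [log_tfp ~ skill]",
}

system = IV3SLS.from_formula(equations, data=df)
results = system.fit()
print(results)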
In sum, the analysis suggests that the relationship between African firms' energy intensity and productivity runs mainly from productivity to energy intensity.
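These directions of causality, together with the theoretical properties invoked above, can be summarized compactly; the display below is only a restatement of the stated properties (energy intensity decreasing in TFP and logarithmic in output), not the thesis's exact functional form.

\[
  \frac{\partial EI}{\partial TFP} < 0,
  \qquad
  EI \propto \ln Y
  \;\Longrightarrow\;
  \frac{\partial EI}{\partial Y} \propto \frac{1}{Y},
\]

so that a given change in output Y moves energy intensity strongly when Y is small (typically SMEs) and only weakly when Y is large, ceteris paribus.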
Robustness
The obstacles perceived as such by firms' managers are used above to capture firms' access to the financial market and to reliable electricity. As these variables are binary, the value of 1 has the same weight for all countries. This assumption loses its relevance once we account for the fact that countries do not have the same level of development. For example, an Egyptian firm and a Togolese firm that both declare that electricity (or finance) is a major obstacle are unlikely to face the same situation: the Egyptian firm may face, on average, three outages per week, while the Togolese firm faces, on average, ten outages per week, with different impacts on productivity and energy intensity. In this case, it could be easier for the Egyptian firm to overcome its obstacle than for the Togolese firm. If the sample is dominated by Egyptian firms, this could drive the results found previously. Hence, it is important to take into account the overall situation of the country in which each firm operates. To do so, for each firm, we interact the electricity obstacle variable with the country-level rate of electricity access, on the one side, and, on the other side, the financial obstacle variable with the financial development indicator of its country. We measure financial development with banks' credit to the private sector as a percentage of GDP and with the financial development index developed by [Svirydzenka, 2016]. The benchmark results (Tables 4.1, 4.2, 4.3) suggest that firms facing obstacles are likely to improve their performance. If this result carries over to the interaction terms, it means that firms improve their performance when they operate in an environment where the financial sector or the electrical infrastructure is developed. There is, however, another way to read the interaction term: given that the value 0 means "no obstacle or minor obstacle", higher values of the interaction variable could also mean higher levels of obstacle. This reading runs against the previous one and could generate confusion. To avoid it, we take the complementary values, that is, 100 minus the electricity access rate, and 1 minus the financial development indicator. As the share of private bank credit to GDP can exceed 100%, we normalize this indicator so that its values range between 0 and 1, using the min-max transformation

bank_credit = (privcrebybandeptogdp - min(privcrebybandeptogdp)) / (max(privcrebybandeptogdp) - min(privcrebybandeptogdp)),

where bank_credit is the normalized value, privcrebybandeptogdp is the observed value, and min and max denote the minimum and maximum of this variable, respectively. If the sample is dominated by firms located in countries that are less financially and infrastructurally vulnerable, this could drive the benchmark estimates. By taking into account the development level of countries, we expect that a firm facing an obstacle in a low-income country would not be able to improve, for example, its productivity, so that the coefficient would become negative or at least lose its significance. We have re-estimated the model specifications of the impact of China's penetration on firms' productivity and (total) energy intensity, conditioned on firms' size. The findings are reported in Appendices 2 to 7. The first column considers the financial obstacle variable (finan_obst*(1 - bank_credit)), while the second considers the electricity obstacle variable (elect_obst*(1 - elect_rate)). The third column includes both obstacles, while the fourth adds their interaction. The last three columns re-estimate the previous model specifications using the second financial obstacle variable (finan_obst*(1 - FD)).
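As a small illustration of how these country-adjusted obstacle variables could be constructed, the sketch below applies the min-max normalization and builds the interaction terms; it assumes pandas, and the data frame and column names are hypothetical stand-ins for the variables described above.

import pandas as pd

# Hypothetical firm-level data already merged with country-level indicators.
df = pd.DataFrame({
    "finan_obst": [1, 0, 1],                 # 1 = finance reported as a major obstacle
    "elect_obst": [1, 1, 0],                 # 1 = electricity reported as a major obstacle
    "priv_credit_gdp": [120.0, 15.0, 60.0],  # private bank credit to GDP (%), country level
    "elect_rate": [99.0, 35.0, 70.0],        # electricity access rate (%), country level
})

# Min-max normalization of private bank credit so that it lies between 0 and 1.
x = df["priv_credit_gdp"]
df["bank_credit"] = (x - x.min()) / (x.max() - x.min())

# Obstacle dummies weighted by the complementary (vulnerability) values of the country indicators.
df["finan_obst_adj"] = df["finan_obst"] * (1 - df["bank_credit"])
df["elect_obst_adj"] = df["elect_obst"] * (1 - df["elect_rate"] / 100)

print(df[["bank_credit", "finan_obst_adj", "elect_obst_adj"]])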
In Appendices 2 and 5, the coefficients of the variables representing China's penetration and other trading partners' penetration, as well as those of the other control variables, keep their signs and statistical significance compared with the benchmark results (Table 4.1). The estimation results reported in Appendix 2 suggest that the new finance and electricity obstacle variables keep their sign and statistical significance when they are introduced separately in the model, but they are no longer statistically significant when introduced together in the same specification. The results by firm size reported in Appendix 5 align broadly with the benchmark results. However, the electricity obstacle shows a positive effect on productivity in some SME specifications, significant only at the 10% level, which is difficult to explain. The interaction term of the two obstacle variables is not statistically significant for SMEs, but it is slightly significant for other firms. Hence, we retain from these results that competition from China leads SMEs to invest when they are confronted with electricity obstacles, but not financial obstacles. Inversely, large firms invest only when they are confronted with financial obstacles in the context of competitive pressure from China.
Appendices 3 and 6 display estimation results for electricity intensity in the full sample and by subsample according to firm size, respectively, while Appendices 4 and 7 present (in the same order) the estimation results for total energy intensity, which takes into account the fuel cost of generating electricity. In the first case, the findings align with those in Table 4.2: in almost all cases, neither the finance obstacle variable nor the electricity obstacle variable has a statistically significant impact as Chinese import competition in the market increases. However, Chinese competition tends to reduce electricity intensity in the last three columns, which measure the financial obstacle with the financial development index (FD), mainly for SMEs. The electricity intensity of large firms seems insensitive to China's penetration (Appendix 6). These results are confirmed by those reported in Appendix 7, where only the total energy intensity of SMEs is negatively and significantly affected by Chinese import competition. Likewise, Appendix 7 shows that competition from China leads to an increase in SMEs' energy intensity when they are confronted with financial obstacles, perhaps because they invest in new generators in the context of increased competition in the market.
Finally, we do not find sufficient evidence that the magnitude of the obstacles faced by African firms differs with their country's level of development, or that the effect of China's penetration on firms' performance varies with the development level of the country in which the firm operates. Overall, SMEs facing electricity obstacles invest to fix the electricity problem, while large African firms facing financial obstacles tend to invest, when competition from China intensifies in the domestic market.
Conclusions
China's emergence in the international market has attracted increasing attention in the economic literature. Many papers have analyzed its impact on developed or developing countries. This literature also suggests, for example, that trade with foreign countries leads to changes in firms' behavior. Few papers, however, have analyzed in depth the effects of China's competitive pressure on firms' characteristics in African markets. This paper contributes to partly filling this gap in the literature by analyzing the incentive effect of Sino-African trade on African firms' productivity and energy efficiency. Based on various data sources and empirical strategies, the results suggest that China's penetration into African markets has induced, on average, a productivity loss and a reduction in energy intensity among African SMEs, while we find no statistically significant impact on the performance of large firms. Drawing on our theoretical model, we deduce that this reduction in energy intensity cannot be attributed to improved energy efficiency but rather to a significant decrease in firms' production. Likewise, improving firm productivity leads to a drop in energy intensity, but there is no evidence of reverse causality. As a result, the more firms lose in technical efficiency (TFP) following Chinese penetration, the more they lose in energy efficiency.
We also investigate whether finance and electricity obstacles are potential factors behind these results. These are, in fact, the main threats to firms in less developed countries, mainly African ones. The empirical results show that both obstacles are barriers to improving SMEs' performance. But as China's penetration in the market increases, SMEs confronted with an electricity obstacle improve their productivity, while those facing financial obstacles experience an increase in their energy intensity. This result is ambiguous and can be explained in two ways. On the one hand, the pressure of China's penetration compels SMEs to overcome their financial barriers and invest in electrical generators so as to keep producing for the market; this raises their productivity at the cost of high diesel expenses. On the other hand, we can also interpret this result as financial obstacles representing a barrier that prevents SMEs from investing in energy efficiency in response to China's penetration, so that their use of energy increases. The first explanation is, however, more consistent. As for large African firms, their performance is not affected by electricity or financial obstacles. However, as China's penetration grows, large African firms that face financial obstacles tend to improve their productivity, while China's penetration does not necessarily affect their energy intensity.
Finally, we retain from the results that China's competitive pressure calls not only for an improvement in African firms' traditional performance but also for greater efficiency, including through innovation at the product level. These results are consistent with the previous literature. However, we also find that the improvement in productivity and energy efficiency has not been large enough to withstand foreign competition, mainly from China, which competes on both price and quality.
However, the penetration of Chinese products into African markets can lead struggling domestic firms to find a way to improve their performance, otherwise, they would lose part of their market share, and in the extreme case, be constrained to leave the market.
Based on these results, African firms should hire highly skilled workers and invest in research and development in order to innovate at both the technology and product levels. African governments should pay particular attention to the education sector so as to make a well-trained workforce available to companies. In addition, governments should endeavor to supply good-quality electricity to SMEs and facilitate their access to the financial market. The lack of reliable electricity leads firms to use electrical generators that are not only expensive but also highly polluting, since they are powered by fossil fuels.
An avenue for future research could be to investigate the effect of China's input penetration in African markets on African firms' performance. This exercise, however, requires information about the origin of the inputs used by these firms in order to take into account the share of Chinese inputs, and thus more detailed firm-level data.

Chapitre 5
General Conclusion
According to the Regional Economic Outlook (IMF, 2022) and the Economic Development in Africa Report (UNCTAD, 2022), most African countries displayed high rates of economic growth before the recent crises, namely the Covid-19 pandemic and the war in Ukraine. This economic performance was observed in an environment in which trade between Africa and China, Africa's main trading partner since 2009, has been on the rise. However, high economic growth did not allow Sub-Saharan African countries to reverse long-standing problems, including energy poverty and low financial development, which have impeded the real take-off of these economies toward sustainable growth and the reduction of household poverty. The first chapter of this thesis investigates the effect of access to electricity on Tanzanian households' consumption and how it affects the growth of home-based businesses. The second chapter assesses the effect of China's penetration on the growth of African manufacturing firms in both domestic and foreign markets. The last chapter explores how Chinese product competition has influenced firms' productivity and energy efficiency, distinguishing small and medium-sized enterprises from large ones, and firms that face financial and/or electricity constraints from those that do not.
Based on data from the first and last waves of the latest Tanzanian panel survey, the analysis in the first chapter generates several results. First, electricity access leads to an increase in household consumption. Second, we find from the latest wave that access to electricity does not significantly affect home-based businesses' income. However, access to electricity has a positive and significant effect on the stock of physical capital, which in turn leads to a strong increase in home-based business income. The results are similar for the workforce of these businesses: access to electricity does not exert a significant impact on the probability of having a worker or on the number of employees of the home-based business, whereas the stock of physical capital has a positive and significant impact on these variables. Hence, the extension of the operating hours of home-based businesses made possible by electric lamps is not enough to facilitate their growth.
The panel of African firms from the World Bank's Enterprise Survey data, the international trade data from CEPII, and the national input-output data from Eora's database are the main datasets used in the second chapter to study how China's penetration into African markets affects African firms' growth, measured by firms' sales and firms' size. The results reveal that only small and younger firms are victims of Chinese product competition, while large firms show a pro-competitive effect. In the common external markets, Chinese products have crowded out African ones; this negative and significant impact of Chinese products on exporting firms' growth is more pronounced in developed-country markets. Surprisingly, Chinese input penetration has affected firms' growth negatively, a peculiar result that may be attributed to the indicator used to measure Chinese input penetration.

The last chapter is as rich in data as the previous one: to the ESD, BACI, and Eora data, we add the financial development indicators from the World Bank and the International Monetary Fund. Our theoretical model of the relationship between energy intensity (a proxy for energy efficiency), productivity, and production of a manufacturing firm shows that energy intensity is inversely related to the firm's productivity, while there is a logarithmic (that is, non-linear) relationship between each of these indicators and the firm's production level. As energy intensity is inversely related to energy efficiency, we deduce from these results that improving energy efficiency can lead to higher productivity and vice versa. We also deduce that energy intensity and productivity vary slowly at high production levels but more rapidly at low production levels, ceteris paribus. The empirical results suggest that Chinese product competition in African markets has been associated with a reduction of both energy intensity and productivity for SMEs, while there is no statistically significant impact on large firms' energy intensity and productivity. Productivity improvements lead to a reduction in energy intensity, but there is no reverse causality. These results may seem conflicting, but referring to our theoretical model and the second chapter's results, we deduce that the reduction of (small) firms' production levels following China's penetration can lead to the reduction of both productivity and energy intensity: the reduction of the quantity produced results in a reduction of the energy consumed, ceteris paribus, and consequently a reduction of the ratio of energy consumed to quantity produced. Furthermore, the empirical results reveal that small firms that face the electricity obstacle and large firms that are confronted with the financial obstacle tend to improve their productivity as Chinese competition increases in the domestic market. At the same time, the energy intensity of the small firms also increases: these firms invest in electric generators in order to overcome their electricity obstacle and increase their production level and, ultimately, their productivity. We also deduce that firms that did not face any obstacle lost in performance. These results highlight the insufficiency of firms' traditional performance, in terms of productivity and energy efficiency, to face Chinese product competition.
They would also need to improve their innovation performance at the product level by investing in research and development.
Policy implications
Summing up, this thesis provides four important results. First, access to electricity benefits households that are well endowed with physical capital. Second, small and young firms are the most adversely affected by Chinese product competition. Third, Chinese competition pushes firms that face market failures to overcome these failures and improve their performance. Finally, firms need to find other levers, for instance increased innovation at the product level, to face Chinese product competition. These results could be useful, on the one hand, for African countries in orienting public investments toward sustainable and inclusive economic growth and, on the other hand, for the African Development Bank (AfDB) in accomplishing its development mission across the continent. Indeed, the AfDB Group aims to spur sustainable economic development and social progress in its regional member countries (RMCs), thus contributing to poverty reduction. As poverty reduction, electrification, and industrial development are among the first common objectives of these two institutions, it is important to note that, before an electrification project can bring households out of poverty, it must be accompanied by a micro-finance program that enables poor households to invest in their home-based businesses. Likewise, reliable electricity is critical for African firms, mainly small ones, to reduce their vulnerability to China's competition. Overall, it is important to give priority to the education sector in order to provide African firms with a skilled workforce able to generate profitable investments. However, these investments require a well-developed financial sector that enables both SMEs and large enterprises to access credit easily to finance their businesses.
Liste des figures

Figure 1.1 - Share of population with access to electricity in 2020 (source : World Bank 2022)
Figure 1.2 - Share of population with access to clean cooking fuels and technologies, by country, 2020 (source : World Bank 2022)
Figure 1.3 - SDG Index score vs International Spillover Index score (source : World Bank 2022)
Figure 1.4 - Africa's current energy generation mix (source : BP Energy Outlook 2020)
Figure 1.5 - China's export in the world market by region (source : Own elaboration [Data from World Integrated Trade Solution]). Note : The second axis is for the bar chart.
Figure 1.6 - Biggest obstacles to doing business in Africa, by firm size and sector, most recent year available during 2006-17 (source : AfDBG/AEO, 2019). Note : All values are survey-weighted.
Figure 2.1 - Principal transmission channels (source : Own elaboration)
Figure 2.2 - Comparison of the total consumption expenses of each group (source : Own elaboration)
Figure 3.1 - Evolution of world trade
Figure 3.2 - Share in world exports (%) of major players in world trade (source : WTO)
Figure 3.3 - Import share by Africa main partners (source : Own elaboration)
Figure 3.4 - Africa's trade balance with China and India (source : IMF, 2020)
Figure 3.5 - Transmission Channels of China's Penetration Effects on Manufacturing (source : Own elaboration). Notes : UP : Upstream products ; DP : Downstream products.
Figure 3.6 - Example of Domestic Production Network (source : Own elaboration)
Figure 3.7 - Evolution of China's penetration (%) across selected African countries (source : Own elaboration)
Figure 3.8 - China output penetration (%) : Time variation within countries across industries (source : Own elaboration)
Figure 3.9 - China input penetration (%) : Time variation within countries across sectors (source : Own elaboration)
Figure 4.1 - African imports of electrical appliances from major exporters (source : Own elaboration [Data from BACI, CEPII])
Figure 4.2 - Total CO2 emissions and CO2 per GDP in Sub-Saharan Africa (source : Own elaboration [Data from WDI World Bank])
Figure 4.3 - Energy intensity and productivity relationship (source : Own elaboration)
Figure 4.4 - Productivity movement before and after China's entry into the African market

Chapitre 4
African firms' adaptation to Chinese Shock under financial and electricity constraints

Trade links between China and African countries have intensified over the last two decades, making China Africa's leading trading partner. In this paper, we focus, on the one hand, on the effects of Chinese import competition in Africa on the productivity of local firms and on their energy intensity. On the other hand, we investigate the possible mutual relationship between productivity and energy consumption (that is, firms' energy intensity). Using an instrumental variable approach, we show that the productivity and energy intensity of small and medium-sized enterprises are negatively affected by competition from Chinese products, while we find no statistically significant impact on the performance of large firms. However, an improvement in a firm's productivity leads to a fall in its energy intensity, but there is no evidence of reverse causality. Consequently, the more firms lose in technical efficiency (TFP) following Chinese penetration, the more they lose in energy efficiency. We also show that, depending on the nature of the obstacles to their growth, small and large firms adapt differently to Chinese competition. Among small and medium-sized enterprises, only those confronted with an electricity shortage improve their performance, while among large firms, better performance is observed only among those confronted with financial obstacles.
Table 2.1 - Household electricity sources
Electricity source | Rural 2008 | Urban 2008 | Rural 2012 | Urban 2012
Tanesco | 57 | 532 | 218 | 495
Community generator | 15 | 13 | 2 | 0
Solar panels | 13 | 2 | 66 | 12
Own generator | 0 | 1 | 3 | 2
Total | 85 | 548 | 289 | 509
Source : author's calculation
At the baseline date (2008), no household is treated, i.e., no household has electricity. There are 1,838 households in rural areas and 511 in urban areas. In 2012, 117 urban and only 52 rural households received the treatment, i.e., were connected to Tanesco. This is in line with the national report (EASR 2016) on energy in Tanzania, which reports a low rate of electrification of rural households.

Table 2.2 - Control and treatment groups
Residence area | All (2008) | Treated (2008) | Control (2012) | Treated (2012)
Rural | 1838 | 0 | 1786 | 52
Urban | 511 | 0 | 394 | 117
Source : author's calculation
Table 2.3 - Characteristics of treated and untreated households
Characteristic | Global sample (column 1) | Treated (column 2) | Untreated (column 3) | Difference (column 4)
occupation Owners Tenants 0.78 0.14 0.63 0.24 0.87 0.07 -0.24*** 0.17***
walls cement Bamboo/mud 0.44 0.55 0.73 0.27 0.29 0.70 0.43*** -0.42***
sheet metal 0.62 0.93 0.49 0.44***
roof Tile/concrete 0.02 0.02 0.00 0.01***
Bamboo/other 0.37 0.05 0.50 -0.46***
floor cement earth 0.42 0.57 0.85 0.15 0.23 0.76 0.62*** -0.61***
gas 0.01 0.01 0.00 0.01***
cooking paraffin 0.03 0.05 0.01 0.04***
fuel coal 0.23 0.01 0.00 0.01***
firewood 0.72 0.41 0.89 -0.48***
Middle of residence urban 0.35 0.75 0.18 0.58***
Distance
from major average(km) 17.88 13.10 20.58 -7.48***
road
Distance
from nea- average(km) 71.14 58.07 79.59 -21.52***
rest market
elevation average(m) 705.15 492.87 824.14 -331.27***
slope average(%) 4.97 4.14 5.47 -1.33***
Note : * pvalue<0.1 ; ** pvalue<0.05 ; *** pvalue<0.01
Source : author's calculation
Table 2.4 - Electricity access effect on household consumption
1 | 2 | 3 | 4
VARIABLES OLS ∆T ∆Cons ∆T ∆Cons ∆T ∆Cons
owner 0.184*** -0.079** 0.292*** -0.070** 0.339*** -0.078** 0.296***
idistm 0.309 0.288* -0.477 0.282 -0.418 0.301* -0.531
idistv 0.385* 0.307*** 0.142 0.182* -0.107 0.295*** 0.136
idistr 0.046** 0.062*** 0.047**
IDPP 0.279*** 0.202** 0.266***
∆T 0.038 1.110* 1.912** 1.202*
slop_reg -0.005*** 0.012
Constant 0.088*** -0.035 0.052 -0.206 0.114*** -0.102
region fixed effects Yes Yes Yes
Observations 2,334 2,334 2,334 2,334 2,334 2,334
R-squared 0.095 0.150 0.097
F-statistic 15.10 6.05 13.62
Wooldridge(p_endog) 0.0901 0.0192 0.0786
Wooldridge(p_overid) 0.733 0.987 0.806
Note : * pvalue<0.1 ; ** pvalue<0.05 ; *** pvalue<0.01 ; In all models, the dependent variable is the change in household consumption between the two survey rounds (2008 and 2012).
Table 2.5 - Relationship between home-based business incomes and electricity access
2sls | IV Tobit
VARIABLES tanesco ln(Income) ln(Cap_stock) tanesco
ln(Cap_stock) 0.053*** 0.247***
household size 0.002 0.026** 0.105*** 0.010**
spending 2.27e-08 4.77e-07***
owner -0.144*** 0.052 0.537** -0.123***
idistv 0.700*** 0 -0.696 0.895***
idistm 0.285 0
IDPP 0.890*** 1.206***
Power 0.002*** 0.002***
IDPP2 -1.072*** -1.490***
Power2 -5.32e-06*** -5.02e-06***
IDPP*Power 0.004* 0.005**
tanesco 0.756*** 2.282***
slop_ward -0.010 -0.004
Constant -0.391*** 8.936*** 9.857*** 0.176***
Observations 817 817 844 844
R-squared 0.286 0.483
F-statistic 37.22
Wooldridge(p_endog) 0.150
Wooldridge(p_overid) 0.346
Note : * pvalue<0.1 ; ** pvalue<0.05 ; *** pvalue<0.01. The table shows how
access to electricity affects the income of home-based businesses without
considering that physical capital endowment is endogenous. This endogeneity
is proved in the second part of the model through the Tobit IV estimator. Five
instruments (IDPP, Power, IDPP2, Power2, and IDPP*Power) are used here to
correct the endogeneity of access to electricity. The Wooldridge test does not
reveal any problem of overidentification.
Table 2.6 - Relationship between electricity and gross income of home-based businesses
Note : The results in this table are obtained using the 3SLS estimator. We have two dependent variables (income and capital). Access to electricity is always endogenous and thus instrumented with the two instruments used in equation 2.5, i.e. idistr and IDPP.
1 | 2 | 3
VARIABLES ln(Cap_stock) ln(Income) ln(Cap_stock) ln(Income) ln(Cap_stock) ln(Income)
ln(Cap_stock) 0.593*** 0.519*** 0.455***
spending 4.28e-07*** 4.14e-07*** 4.05e-07***
household size 0.100*** -0.012 0.087*** -0.001 0.086*** 0.004
owner 0.707*** -0.156 0.931*** -0.125 0.948*** -0.063
idistm 0.590 0.897 0.724
idistv -1.457** -0.130 -11.89*** -0.206 -12.83*** -0.352
idistr -0.160 -0.071 -0.105
slop_ward -0.004 -0.012 -0.001 -0.014 -0.001 -0.013
tanesco 3.117*** 4.635*** 4.748*** 0.346
idistv2 17.29*** 18.90***
Constant 9.569*** 5.674*** 9.542*** 6.410*** 9.548*** 6.995***
Observations 817 817 817 817 817 817
R-squared 0.072 0.292 -0.101 0.377 -0.121 0.426
Note : * pvalue<0.1 ; ** pvalue<0.05 ; *** pvalue<0.01.
Table 2.7 - Electricity and household employment creation
IV Probit | ESR model
M1 | M2 | M3 | M4
VARIABLES Pr(labor=1) tanesco Pr(labor=1) tanesco Pr(labor=1) tanesco Employment tanesco
tanesco 0.470 0.869 0.832 -0.855
urban -0.489* 0.239*** -0.485** 0.238*** -0.278 0.277***
owner -0.085 -0.114*** 0.025 -0.117*** 0.094 -0.101** -0.062 -0.393***
household size 0.003 0.005 0.009 0.006 0.035* 0.013*** 0.068
2.stk 0.082 0.100***
3.stk 0.660** 0.106**
4.stk 0.892*** 0.180***
5.stk 0.983** 0.354***
6.stk 1.644*** 0.427***
ln(Cap_stock) 0.302*** 0.054***
IDPP 0.211 0.211** 0.215** 1.189***
idistr 0.282*** 0.284*** 0.298*** 1.950** 1.324***
Cap_stock 6.37e-08***
idistv -3.116**
idistm 1.484
Constant -2.526*** -0.058 -1.989*** 0.021 -1.603*** 0.083* -4.201*** -0.841***
Observations 844 844 892 892 892 892 914 914
sigma 2.266***
rho 0.467***
Note : * pvalue<0.1 ; ** pvalue<0.05 ; *** pvalue<0.01. The two instruments used in equation 2.5 are used here to correct for the endogeneity of access to electricity. The "physical capital" variable is used in logarithmic form in model M1, in discrete form in model M2, and without any transformation in model M4. It is not included in model M3. The dependent variable is the number of jobs created per household, while the variable of interest is access to electricity.
Source : author's calculation
Appendix 8 : Characteristics of households and individuals in 2012
Appendix 8 (continued)
Characteristics Characteristics Frequency (HbES) Percentage Frequency (GS) Percentage HbES/GS Frequency (HbES) Percentage Frequency (GS) Percentage HbES/GS
Relation with household head Study Level 1117 100% 100% 25412 10000 100% 100%
HEAD PP 619 55.42 5010 7 19.72 0.07 12.36 0.00
SPOUSE ADULT 334 29.90 3485 4 13.71 0.04 9.58 0.00
SON/DAUGHTER STEP-SON/DAUGHTER D1 5 90 4 8.06 0.36 0.54 10843 542 95 42.67 2.13 0.95 0.83 0.74 5.26
SISTER/BROTHER D2 21 7 0.63 2.26 448 258 1.76 2.58 1.56 8.14
GRANDCHILD D3 25 10 0.90 2.69 2728 291 10.74 2.91 0.37 8.59
FATHER/MOTHER D4 67 6 0.54 7.22 200 740 0.79 7.40 3.00 9.05
OTHER RELATIVE LIVE-IN SERVANT D5 OTHER NON-RELATIVES D6 16 16 38 2 7 3.40 0.18 1.72 0.63 1.72 1767 150 283 239 309 6.95 0.59 2.83 0.94 3.09 2.15 1.33 2.93 5.65 5.18
Gender D7 532 100% 57.33 25412 5411 100% 54.11 9.83
MALE D8 11 472 42.26 1.19 12351 100 48.60 1.00 3.82 11.00
FEMALE PREFORM 1 1 645 57.74 0.11 13061 9 51.40 0.09 4.94 11.11
Main Occupation (last 12 months) MS+COURSE 18 100% 1.94 25206 117 100% 1.17 15.38
AGRICULTURE/LIVESTOCK F1 16 497 44.49 1.72 7330 126 29.08 1.26 6.78 12.70
FISHING F2 47 5.06 105 548 0.42 5.48 0.00 8.58
MINING TOURISM F3 GOVERMENT F4 31 78 5 24 0.45 3.34 2.15 8.41 36 4 236 420 971 0.14 0.02 2.36 1.67 9.71 13.89 0.00 5.71 13.14 8.03
PARASTATAL 'O'+COURSE 21 1 0.09 2.26 40 224 0.16 2.24 2.50 9.38
PRIVATE SECTOR F5 39 3.49 1204 5 4.78 0.05 3.24 0.00
NGO/RELIGIOUS F6 5 1 0.09 0.54 59 44 0.23 0.44 1.69 11.36
SELF-EMPLOYED UNPAID FAMILY WORK 'A'+COURSE 4 508 30 45.48 2.69 0.43 1692 1761 33 6.71 6.99 0.33 30.02 1.70 12.12
PAID FAMILY WORK DIPLOMA 6 0.65 91 76 0.36 0.76 0.00 7.89
JOB SEEKERS U2 2 2 0.18 0.22 71 3 0.28 0.03 2.82 66.67
STUDENT U3 4 4 0.36 0.43 6552 56 25.99 0.56 0.06 7.14
DISABLED NO JOB U4 3 3 0.27 0.27 316 990 21 1.25 3.93 0.21 0.95 0.30 0.00
TOO YOUNG U5&+ 2 0.22 4535 33 17.99 0.33 0.00 6.06
Marital Status Occupation 914 100% 100% 16674 5008 100% 100% 18.25
MONOGAMOUS MARRIED Owner 778 528 47.35 85.12 5540 4047 33.23 80.81 9.53 19.22
POLYGAMOUS MARRIED Tenant 136 114 10.22 14.88 1155 961 6.93 19.19 9.87 14.15
LIVING TOGETHER Residence area 914 135 12.11 100% 1265 5010 7.59 100% 10.67 18.24
SEPARATED DIVORCED Urban 361 60 66 5.38 5.92 39.50 595 408 1791 3.57 2.45 35.75 10.08 16.18 20.16
NEVER MARRIED Rural 553 116 10.40 60.50 6857 3219 41.12 64.25 1.69 17.18
WIDOW(ER) 96 8.61 854 5.12 11.24
Note : HbES means Home-based Enterprise Sample while GS is Global Sample
Note : HbES means Home-based Enterprise Sample while GS is Global Sample
Table 3.1 - Effect of China penetration on African domestic firms' sales
Downstream effect Upstream effect Both effects
VARIABLES [1]-OLS [2]-IV [3]-IV [4]-IV [5]-OLS [6]-IV [7]-IV [8]-IV [9]-IV
∆ChinaP enOut -.315*** -6.876** -4.350*** -3.757*** -5.636* -4.193** -3.469*
(.059) (2.905) (1.349) (1.210) (3.425) (1.887) (1.837)
∆ChinaP enInp -.158 -1.659*** -0.514 -0.234 -0.345
(.115) (0.303) (0.684) (0.689) (0.710)
∆F T AP en -.334 -1.378*** -1.190*** -1.492** -1.253**
(.307) (0.462) (0.436) (0.637) (0.620)
∆IndP en -.606* -4.413*** -3.975*** -3.655* -3.093
(.311) (1.027) (0.983) (2.080) (2.173)
∆chnX_AI_cm * Exp -2.397 -6.935*** -9.068**
(1.85) (2.676) (3.706)
∆chnX_EU SA_cm * Exp -32.667*** -25.29*** -26.62***
(6.797) (8.026) (9.075)
Exporter 34.239 36.78*** 42.02***
(7.783) (8.340) (9.102)
Constant -94.505*** 72.83 34.45 13.10 -100.77*** -107.3*** 62.13 42.36 14.73
(10.037) (48.87) (28.70) (27.96) (8.575) (9.396) (88.67) (58.74) (61.17)
Observations 3,651 3,985 3,868 3,651 3,830 3,830 3,830 3,713 3,503
IV F-stat 26.30 73.27 70 770.5 4.666 15.50 12.51
Wooldridge(p_endog) 2.88e-10 6.46e-10 1.32e-07 3.03e-09 0 0 4.78e-09
Table 3.2 - Effect of China penetration on African firms' employment
VARIABLES OLS [1]-IV [2]-IV [3]-IV
∆ChinaP enOut .241*** -2.691* -1.555** -1.446**
(.049) (1.629) (0.768) (0.729)
∆F T AP en .207 -0.350 -0.241
(.181) (0.255) (0.246)
∆IndP en -.004 -1.524*** -1.513***
(.190) (0.588) (0.578)
∆chnX_AI_cm * Exp -.321 -2.503
(1.253) (1.606)
∆chnX_EU SA_cm * Exp -9.942** -6.123
(4.116) (4.448)
Exporter -4.161 -3.024
(4.393) (4.592)
Constant -7.631 54.77** 38.41** 38.28**
(6.025) (27.21) (16.83) (16.59)
Observations 4,056 4,391 4,269 4,034
IV F-stat 18.02 55.46 54.87
Wooldridge(p_endog) 0.000997 0.000606 0.00107
Table 3.3 - China's penetration effects on firms' sales (alternative approaches)
VARIABLES Sale Growth(t0,t1) Sale Growth(t,t-2)
IV IV FE-IV
∆ChinaP enOut -8.096** -10.27** -4.194***
(3.577) (4.735) (0.786)
∆F T AP en -1.105 -1.541** 1.549**
(0.760) (0.780) (0.649)
∆IndP en -9.482*** -10.77*** -6.807***
(3.370) (4.085) (1.052)
∆chnX_AI_cm * Exp -19.94*** -19.91*** -6.697**
(7.587) (7.649) (2.690)
∆chnX_EU SA_cm * Exp -3.947 -14.28 -18.71**
(15.70) (13.07) (8.742)
Exporter 43.56*** 49.19*** 27.26*** -3.224 40.09 -2.243 -12,658* -13,179 -9,578
(10.83) (11.82) (8.236) -5.092 (112.3) (3.369) 7.039 (7,036) (7,077)
∆ChinaP enOut(t, t -2) -1.842 11.36 -1.482*** -0,673 -0,591 -1,239***
(1.154) (36.23) (0.343) (0,551) (0,578) (0,476)
∆F T AP en(t, t -2) -0.0716 -0.0985 0.113 -0,083 -0,099 -0,18
(0.0719) (0.255) (0.0922) (0,114) (0,112) (0,116)
∆IndP en(t, t -2) -0.680 8.051 -0.179 -0,243 -0,291 -1,599***
(0.818) (23.06) (0.308) (0,427) (0,422) (0,572)
Exp*∆chnX_EU SA_cm(t, t -2) 3.509 -41.59 -2.336 -12,162 -13,877 -15,066*
(7.808) (102.7) (4.555) (9,183) (8,931) (8,966)
Exp*∆chnX_AI_cm(t, t -2) 1.808 -20.18 1.950 -0,618 -0,445 0,482
(2.340) (59.31) (1.242) (1,741) (1,824) (1,850)
Constant 39.73 113.9** -26.93** -7.340 -139.3 25.46*** -28,918 -37,368* 1,717
(43.62) (56.92) (12.72) (15.96) (444.9) (5.340) 22.716 (16,849) (16,944)
Observations 3,651 3,651 3,651 3,908 3,908 3,908 3,908 3,908 3,908
R-squared 0.112 0.210 0.270 0.283 0.299
IV F-stat 17.05 11.45 305.5 14.84 0.481 178.9 84.28 56.97 41.85
Industry*Country X X X
Industry*Year X X X
Country*Industry*Year X X X
Notes : All models include electricity variable (<20
Table 3.4 - Effect of Chinese penetration on African firms by age and company size
[1]-IV [2]-IV [3]-IV
age<5years -1.550** -1.139*** -1.239***
(0.622) (0.369) (0.370)
Observations 516 501 489
age>10years 26.17 38,932 161.8
(31.48) (7.961e+07) (1,764)
Observations 2,965 2,882 2,688
labor<30 -4.444*** -3.051*** -2.993***
(1.602) (0.860) (0.864)
Observations 2,015 1,944 1,886
labor>30 7.164*** 10.49** 6.490**
(2.718) (4.386) (3.125)
Observations 1,950 1,904 1,747
Table 3.5 - Effect of Chinese penetration on African firms by capital intensity
threshold=Median threshold=third quartile
VARIABLES [1]-IV [2]-IV [3]-IV [4]-IV [5]-IV [6]-IV
∆ChinaP enOut -3.294*** -1.709*** -1.749*** -3.170** -1.635*** -1.673***
(1.255) (0.589) (0.600) (1.240) (0.570) (0.579)
Cap_Int -3.563 -3.368 -3.166 2.777 4.243 3.291
(5.058) (5.063) (5.324) (6.194) (6.181) (6.291)
∆ChinaP enOut * Cap_Int 1.802** 0.916** 0.920** 1.488** 0.763* 0.735*
(0.785) (0.412) (0.420) (0.685) (0.398) (0.395)
Observations 2,688 2,592 2,424 2,688 2,592 2,424
Table 3.6 - China's penetration and countries' endowment in natural resources
VARIABLES [1]-IV [2]-IV [3]-IV
∆ChinaP enOut -6.038*** -3.447*** -3.133***
(1.383) (0.504) (0.483)
∆ChinaP enOut * EDI 0.378*** 0.295*** 0.266***
(0.0579) (0.0341) (0.0336)
EDI 2.891*** 6.622*** 6.417***
(0.589) (0.641) (0.637)
∆F T AP en 3.317*** 3.139***
(0.804) (0.783)
∆IndP en -4.086*** -3.733***
(0.940) (0.895)
∆chnX_AI_cm * Exp -5.578**
(2.585)
∆chnX_EU SA_cm * Exp -34.97***
(12.28)
Exporter 47.45***
(11.67)
Constant -93.51** -299.4*** -308.4***
(47.14) (42.79) (42.18)
Observations 2,290 2,190 2,150
R-squared 0.293 0.304
IV F-stat 151.8 482 470.5
Wooldridge(p_endog) 0 2.93e-08 5.49e-07
Table 3.7 - China shock effect on African firms by economic region
ECOWAS EAC AMU (Egypt-Maroc) SADC
variables [1] [2] [3] [1] [2] [3] [1] [2] [3] [1] [2] [3]
∆ChinaP enOut -0.442** -0.462** -0.572** -3.284 -2.823 -4.886 11.31*** 7.936*** 1.105 -23.88 -25.52 -20.06
(0.192) (0.217) (0.235) (3.878) (3.378) (5.096) (2.634) (2.950) (2.417) (17.26) (48.63) (34.58)
∆F T AP en 0.0299 -0.190 1.199 1.773 43.59** 18.40 1.249 1.253
(0.497) (0.505) (1.319) (1.590) (18.56) (16.84) (3.696) (3.120)
∆IndP en -1.422** -1.804*** 0.401 0.510 -14.81** -30.76*** -0.865 -5.969
(0.646) (0.669) (0.903) (1.210) (7.020) (8.286) (17.06) (8.298)
∆chnX_AI_cm * Exp -8.857*** 5.617 6.070 8.248
(3.206) (5.560) (5.198) (17.47)
∆chnX_EU SA_cm * Exp 7.919 -0.823 -91.67*** 21.97
(15.08) (17.45) (12.31) (29.13)
Exporter 66.18*** -26.64 45.99*** -65.15
(15.23) (31.43) (15.10) (64.03)
Constant -77.09*** -70.58*** -69.47*** -22.28 -25.51 -35.61 71.99*** 82.74*** 33.40 -49.51 -35.58 -17.65
(16.99) (21.10) (22.17) (35.01) (35.28) (38.12) (25.18) (27.43) (21.76) (72.58) (91.81) (67.78)
Observations 721 704 699 430 430 417 2,153 2,153 1,960 581 581 575
R-squared 0.134 0.158 0.176 0.008 0.046
IV F-stat 173.9 158.5 149.5 23.08 28.93 16.54 133.7 82.55 97.47 6.950 0.870 1.563
Wooldridge(p_endog) 0.0168 0.0682 0.0291 0.292 0.330 0.259 0 1.12e-08 0.00179 0.0840 0.497 0.494
Notes : All models include a dummy for industry (λk), time (ρt) and electricity (<20
Table 3.8 - China shock effect on African firms by industry
Variable Food and beverages wearing apparel Textiles and Wood and paper Petroleum, chemical, and non-metallic mineral product products Metal Electrical and machinery
∆ChinaP enOut -141.2 -43.83 -31.19 -9.467 -4.387 1.710
(177.5) (76.34) (21.33) (8.203) (6.583) (5.045)
∆F T AP en -24.12 -18.22 -7.376 0.316 0.463 -1.141
(26.44) (27.67) (6.053) (0.990) (0.908) (2.590)
∆IndP en -81.49 -25.88 13.63 -4.237** -0.953 -6.592
(100.3) (44.66) (24.11) (2.135) (4.851) (9.171)
∆chnX_AI_cm * Exp -43.57 -57.68 -11.61 -9.864 -34.66*** 24.25
(70.00) (104.9) (47.50) (6.964) (10.51) (23.96)
∆chnX_EU SA_cm * Exp -2,437 -113.9 -348.7 -166.7*** -134.2 -82.40**
(2,320) (144.1) (233.7) (37.74) (90.49) (39.81)
Exporter 335.3 249.4 75.61 114.0*** 184.0*** 98.76**
(339.3) (289.1) (68.62) (25.64) (55.10) (50.21)
Constant 523.4 4,862 130.6 -48.36 -75.97 -99.72
(657.2) (8,580) (143.0) (38.63) (90.03) (121.2)
Observations 772 991 342 803 506 211
R-squared 0.117 0.046 0.074
IV F-stat 1.569 0.492 8.630 59.85 3.873 5.710
Wooldridge(p_endog) 5.17e-05 0 0.00633 0.787 0.435 0.590
Notes : All models include a dummy for industry (λ k ), time (ρ t ) and electricity (<20
Table 4.1 - Impact of China's penetration on firms' productivity
log_tfp -OLS log_tfp -IV
model 1 model 2 model 3 Model 4 Model 5
Full Full Small Large Full Small Large Full Small Large Full Full
china_pen1 -.0013 -0.009* -0.012** -0.006 -0.012* -0.015* -0.009 -0.013** -0.017** -0.016* -0.021** -0.021**
(.001) (0.005) (0.006) (0.009) (0.006) (0.008) (0.011) (0.006) (0.008) (0.009) (0.009) (0.009)
india_pen1 0.001 0.008 0.014 -0.171** 0.009 0.012 -0.102 -0.001 0.006 -0.059 0.004 0.003
(.008) (0.009) (0.009) (0.069) (0.009) (0.008) (0.070) (0.009) (0.009) (0.089) (0.010) (0.010)
fta_pen1 -.004 -0.005* -0.006* -0.076*** -0.006* -0.006* -0.066* -0.004 -0.007** -0.026 -0.005 -0.006
(.002) (0.003) (0.003) (0.025) (0.003) (0.003) (0.036) (0.003) (0.003) (0.042) (0.004) (0.004)
exporter .234*** 0.256*** 0.220* 0.270*** 0.232*** 0.214** 0.229** 0.247*** 0.234** 0.216* 0.284*** 0.290***
(.070) (0.079) (0.117) (0.098) (0.075) (0.109) (0.107) (0.081) (0.117) (0.111) (0.091) (0.091)
size .326*** 0.287*** 0.321*** 0.350*** 0.357*** 0.354***
(.063) (0.070) (0.068) (0.074) (0.083) (0.083)
elect_obst -0.110 -0.068 -0.207 -0.209** -0.164*
(0.087) (0.099) (0.152) (0.104) (0.095)
elect_obst*china_pen1 0.010** 0.014** 0.006 0.0167***
(0.005) (0.005) (0.009) (0.006)
finan_obst -0.138** -0.183** -0.121 -0.161* -0.117
(0.070) (0.082) (0.138) (0.082) (0.074)
finan_obst*china_pen1 0.008* 0.008 0.020** 0.010*
(0.004) (0.005) (0.009) (0.005)
0.finan_obst*1.elect_obst*china_pen1 0.019***
(0.007)
1.finan_obst*0.elect_obst*china_pen1 0.015**
(0.007)
1.finan_obst*1.elect_obst*china_pen1 0.018**
(.008)
constant .007*** 0.032 0.061* 3.234*** 0.090 0.059 3.182*** 0.106** 0.139** 3.336*** 0.285** 0.220**
(.002) (0.033) (0.036) (0.433) (0.066) (0.075) (0.560) (0.051) (0.060) (0.614) (0.111) (0.094)
Observations 1650 1,337 947 390 1,390 993 397 1,650 892 348 1,055 1,055
R-squared 0.849 0.862 0.853 0.857 0.875 0.832 0.849 0.859 0.850 0.853 0.855
IV F-stat 184.3 124 53.40 124.4 74.70 52.25 200.7 77.22 26.96 65.65 76.83
Notes : Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables.
Table 4.2 - Impact of China's penetration on firms' energy intensity
log_ei -OLS log_ei -IV
model 1 model 2 model 3 Model 4 Model 5
Full Full Small Large Full Small Large Full Small Large Full Full
china_pen1 0.000 -0.001** -0.000 0.003 -0.002 -0.000 -0.057 -0.002 -0.001* 0.001 -0.001* -0.001*
(.000) (0.000) (0.000) (0.003) (0.001) (0.000) (0.644) (0.002) (0.001) (0.003) (0.001) (0.001)
india_pen1 .000 -0.000 -0.000 -0.004 0.000 0.000110 0.028 0.001 0.000 -0.001 0.001** 0.001**
(.000) (0.000275) (0.000) (0.004) (0.000) (0.000) (0.320) (0.001) (0.000) (0.004) (0.000) (0.000)
fta_pen1 .000 -0.000 -0.000 0.001 -0.000 0.000 -0.013 -0.000 -0.000 0.001 -0.000 -0.000
(.000) (0.000) (0.000) (0.001) (0.000) (0.000) (0.159) (0.000) (0.000) (0.001) (0.000) (0.000)
exporter -.002 -0.002 -0.005 0.001 -0.006 -0.004 -0.184 0.001 -0.004 0.005 -0.002 -0.00153
(.003) (0.003) (0.003) (0.007) (0.004) (0.004) (1.955) (0.005) (0.004) (0.008) (0.004) (0.004)
size -.005** -0.008*** -0.009** -0.009* -0.011*** -0.011***
(.002) (0.003) (0.004) (0.005) (0.004) (0.004)
lossper_due_out .0003*** 0.0004*** 0.0004*** -0.000 0.0003** 0.0004*** 0.000 0.0004** 0.0004*** -0.0002 0.0003*** 0.0003***
(.0001) (0.0001) (0.0001) (0.0003) (0.0001) (0.0001) (0.003) (0.0001) (0.0001) (0.0005) (0.0001) (0.0001)
elect_obst -0.026 0.005 -0.616 -0.009 -0.011
(0.019) (0.004) (6.965) (0.007) (0.007)
elect_obst*china_pen1 0.001 -0.000 0.026 0.001
(0.001) (0.0002) (0.289) (0.0003)
finan_obst -0.027 -0.002 0.007 -0.007 -0.007
(0.0258) (0.00524) (0.024) (0.005) (0.005)
finan_obst*china_pen1 0.002 0.0004 -0.0004 0.001*
(0.001) (0.0002) (0.001) (0.0003)
0b.finan_obst*1.elect_obst*china_pen1 0.001
(0.001)
1.finan_obst*0b.elect_obst*china_pen1 0.001
(0.001)
1.finan_obst*1.elect_obst*china_pen1 0.001
(0.001)
Constant .014*** 0.016*** 0.014*** 0.051 0.033** 0.007 0.643 0.039* 0.019*** 0.043 0.0264*** 0.0270***
(.004) (0.004) (0.004) (0.038) (0.017) (0.005) (7.002) (0.021) (0.006) (0.035) (0.009) (0.009)
Observations 2,368 2,368 1,738 630 1,903 1,398 505 1,761 1,274 487 1,421 1,421
R-squared 0.337 0.354 0.279 0.032 0.072 0.051
IV F-stat 13.02 72.59 5.212 10.26 52.96 0.033 4.932 35.11 0.976 13.48 12.10
Notes : Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables.
Table 4.3 - Impact of China's penetration on firms' total energy intensity
log_tei -OLS log_tei -IV
model 1 model 2 model 3 Model 4 Model 5
Full Full Small Large Full Small Large Full Small Large Full Full
china_pen1 -0.000 -0.001** -0.001* 0.006 -0.002* -0.001 -0.008 -0.002* -0.001** 0.009 -0.002** -0.002**
0.000 (0.000) (0.0003) (0.009) (0.001) (0.0005) (0.011) (0.001) (0.001) (0.187) (0.001) (0.001)
india_pen1 0.000 -0.0001 -0.000 -0.007 0.0002 -0.0001 0.003 0.001* 0.001 -0.011 0.001** 0.001*
(0.0002) (0.0003) (0.0003) (0.009) (0.0003) (0.0003) (0.006) (0.0007) (0.0003) (0.208) (0.0003) (0.0003)
fta_pen1 -0.0001 -0.0006* -0.0003 0.002 -0.0003 -0.0002 -0.0009 -0.0005** -0.0005** 0.0025 -0.0004* -0.0005**
(0.0002) (0.0003) (0.0002) (0.002) (0.0002) (0.0002) (0.002) (0.0002) (0.0002) (0.042) (0.0002) (0.0002)
exporter 0.0002 0.0008 -0.002 0.005 -0.003 -0.000 -0.029 0.005 0.0004 0.026 0.003 0.004
(0.003) (0.003) (0.004) (0.012) (0.005) (0.005) (0.036) (0.005) (0.005) (0.428) (0.005) (0.005)
size -0.004 -0.008** -0.012** -0.009** -0.014*** -0.014***
(0.003) (0.003) (0.005) (0.004) (0.005) (0.005)
lossper_due_out 0.0003*** 0.0004*** 0.0004*** -0.0002 0.0002* 0.0003*** 0.000 0.0004*** 0.0004*** -0.001 0.0003** 0.0003**
(0.000) (0.0001) (0.0001) (0.001) (0.0001) (0.0001) (0.0003) (0.0002) (0.0001) (0.0233) (0.0001) (0.0001)
elect_obst -0.0334 0.0007 -0.0632 -0.0162* -0.0182*
(0.0209) (0.0056) (0.110) (0.009) (0.010)
elect_obst*china_pen1 0.002* 0.0002 0.0033 0.001*
(0.001) (0.0002) (0.005) (0.0005)
finan_obst -0.0269 -0.005 0.069 -0.012* -0.013*
(0.018) (0.006) (1.375) (0.007) (0.007)
finan_obst*china_pen1 0.0015 0.001** -0.003 0.001**
(0.001) (0.0003) (0.0605) (0.0004)
0b.finan_obst*1.elect_obst*china_pen1 0.0013*
(0.001)
1.finan_obst*0b.elect_obst*china_pen1 0.0014*
(0.001)
1.finan_obst*1.elect_obst*china_pen1 0.002**
(0.001)
Constant 0.021*** 0.023*** 0.021*** 0.059 0.046** 0.018** 0.085 0.046*** 0.029*** -0.038 0.044*** 0.046***
(0.005) (0.005) (0.005) (0.054) (0.018) (0.007) (0.110) (0.015) (0.007) (1.900) (0.013) (0.014)
Observations 2,374 2,374 1,744 630 1,910 1,405 505 1,768 1,280 488 1,430 1,430
R-squared 0.306 0.341 0.350 0.294
IV F-stat 14.05 73.35 1.819 13.34 51.01 2.478 7.997 37.57 0.009 14.75 12.74
Notes : Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables.
Table 4.4 - Relationship between energy intensity and productivity (2SLS approach)
VARIABLES log_tfp log_tfp log_tfp log_tei log_tei log_tei
log_tfp -0.030** -0.023** -0.022**
(0.014) (0.010) (0.009)
log_tei 7.296 2.260 2.714
(7.240) (12.36) (12.40)
log_va 0.267*** 0.346*** 0.353*** 0.003 0.002 0.002
(0.039) (0.075) (0.075) (0.003) (0.003) (0.003)
intk -0.072 0.231*** 0.233*** 0.002 0.001 0.001
(0.076) (0.063) (0.064) (0.002) (0.003) (0.003)
cur 0.003* 0.0003 0.001
(0.001) (0.002) (0.002)
sizze -0.262***
(0.086)
lossper_due_out 0.0004*** 0.0003** 0.0003**
(0.0001) (0.0001) (0.0001)
skill -0.001*** -0.001
(0.0004) (0.0004)
cur*skill -0.000
(0.000)
Constant -3.059*** -4.300*** -4.423*** -0.012 0.002 0.003
(0.583) (1.282) (1.290) (0.036) (0.039) (0.038)
Observations 1,099 868 868 1,127 886 868
R-squared 0.885 0.931 0.930 0.004 0.125 0.140
Instruments
lossper_due_out .0004** .0003* .0003*
(.002) (.002) (.002)
sizze -.209***
(.071)
skill -.0011*** -.001
(.0003) (.0004)
cur .001
(.002)
skill*cur -0.000
(0.000)
IV F-stat 11.38 4.078 4.073 18.53 28.08 9.683
Notes : Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ;
all models include dummy country*time, industry*time and country*industry variables.
Table 4.5 - Relationship between energy intensity and productivity (3SLS approach)
model 1 model 2 model 3 model 4
VARIABLES log_tfp log_tei log_tfp log_tei log_tfp log_tei log_tfp log_tei
log_tfp -0.030** -0.023* -0.023** -0.022**
(0.014) (0.012) (0.011) (0.011)
log_tei 7.859 7.579 2.928 2.746
(5.597) (5.558) (7.072) (8.355)
log_va 0.269*** 0.003 0.268*** 0.001 0.346*** 0.002 0.353*** 0.002
(0.030) (0.003) (0.029) (0.002) (0.046) (0.003) (0.052) (0.003)
intk -0.069 0.002 -0.076 0.002 0.204*** 0.001 0.233*** 0.001
(0.051) (0.002) (0.053) (0.002) (0.055) (0.003) (0.057) (0.004)
sizze -0.259*** -0.453***
(0.066) (0.134)
lossper_due_out 0.0004*** 0.0004*** 0.0003*** 0.0003**
(0.0001) (0.0001) (0.0001) (0.0001)
cur -0.002 0.001
(0.003) (0.0015)
size*cur 0.003
(0.002)
skill -0.001*** -0.001
(0.0003) (0.0004)
skill*cur -0.000
(0.000)
Constant -2.914*** -0.012 -2.726*** 0.005 -4.256*** 0.002 -4.432*** 0.003
(0.443) (0.0368) (0.571) (0.0313) (0.740) (0.0425) (0.899) (0.043)
Observations 1,127 1,127 1,099 1,099 886 886 868 868
R-squared 0.881 0.004 0.884 0.088 0.929 0.125 0.930 0.140
Notes : Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include
dummy country*time, industry*time and country*industry variables.
Appendix 2 : China's penetration and firms' productivity Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables. Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables. Appendix 4 : China's penetration and firms' total energy intensity Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables. Appendix 5 : China's penetration and firms' productivity by firms' size Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables. Appendix 7 : China's penetration and firms' total energy intensity by firms' size Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include dummy country*time, industry*time and country*industry variables.
VARIABLES china_pen1 india_pen1 fta_pen1 exporter size finan_obst*(1 -bank_credit) finan_obst*(1 -bank_credit)*china_pen1 elect_obst*(1 -elect_rate) elect_obst*(1 -elect_rate)*china_pen1 finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 finan_obst*(1 -FD) finan_obst*(1 -FD)*china_pen1 finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 Constant Observations R-squared IV F-stat Notes : Appendix 3 : China's penetration and firms' electricity intensity log_tfp log_tfp log_tfp log_tfp -0.014** -0.010* -0.011** -0.011** -0.014** -0.011** -0.011** log_tfp log_tfp log_tfp (0.006) (0.006) (0.005) (0.005) (0.006) (0.005) (0.005) -0.0002 0.008 0.003 0.003 0.001 0.003 0.003 (0.009) (0.009) (0.011) (0.011) (0.010) (0.011) (0.011) -0.006 -0.006* -0.006 -0.006 -0.006 -0.006 -0.006 (0.004) (0.003) (0.004) (0.004) (0.004) (0.004) (0.004) 0.235*** 0.227*** 0.247*** 0.247*** 0.257*** 0.271*** 0.271*** (0.084) (0.075) (0.092) (0.092) (0.084) (0.091) (0.091) 0.363*** 0.320*** 0.371*** 0.373*** 0.373*** 0.380*** 0.382*** (0.078) (0.068) (0.084) (0.084) (0.077) (0.084) (0.084) -0.147* -0.063 -0.085 (0.079) (0.072) (0.073) 0.009* (0.004) -0.000 0.002 0.001 0.002 0.001 (0.002) (0.002) (0.002) (0.001) (0.002) 0.0002** (0.000) 0.000 (0.000) -0.181** -0.094 -0.118 (0.085) (0.079) (0.080) 0.009* (0.005) 0.000 (0.0001) 0.113** 0.008 -0.026 0.011 0.123** -0.011 0.023 (0.057) (0.084) (0.097) (0.106) (0.055) (0.090) (0.098) 1,165 1,389 993 993 1,182 1,011 1,011 0.846 0.858 0.854 0.854 0.840 0.849 0.849 113.3 161.2 144.4 140.5 107.5 145 139 VARIABLES log_ei log_ei log_ei log_ei log_ei log_ei log_ei china_pen1 -0.001* -0.001* -0.001* -0.001* -0.002 -0.001* -0.001** (0.0003) (0.0006) (0.0003) (0.0003) (0.0014) (0.0003) (0.0004) india_pen1 0.001** 0.0003 0.001*** 0.001*** 0.001* 0.0003 0.001*** (0.0004) (0.0002) (0.0005) (0.0005) (0.0008) (0.0003) (0.0003) fta_pen1 -0.0001 0.000 -0.0002 -0.0002 -0.0004 -0.0004 -0.0002 (0.0002) (0.0002) (0.0002) (0.0002) (0.0003) (0.0003) (0.0002) exporter -0.001 -0.005 -0.004 -0.004 0.0005 -0.003 -0.004 (0.003) (0.003) (0.004) (0.004) (0.005) (0.004) (0.004) size -0.007** -0.008*** -0.009*** -0.009*** -0.012* -0.014*** -0.0125*** (0.003) (0.003) (0.003) (0.003) (0.006) (0.004) (0.004) finan_obst*(1 -bank_credit) 0.001 0.002 0.002 (0.003) (0.003) (0.003) finan_obst*(1 -bank_credit)*china_pen1 0.0003** (0.0001) elect_obst*(1 -elect_rate) -0.0002 0.000 0.000 0.000 -0.000 (0.0001) (0.000) (0.000) (0.000) (0.000) elect_obst*(1 -elect_rate)*china_pen1 0.000* (0.000) finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 0.000 (0.000) finan_obst*(1 -FD) -0.035 0.007* -0.005 (0.028) (0.004) (0.005) finan_obst*(1 -FD)*china_pen1 0.002 (0.0014) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 0.000** (0.000) lossper_due_out 0.0003** 0.0003*** 0.0003*** 0.0003*** 0.0004** 0.0003** 0.0002** (0.0001) (0.0001) (0.0001) (0.0001) (0.0002) (0.0001) (0.0001) Constant 0.018*** 0.023** 0.011*** 0.011*** 0.042** 0.004 0.020*** (0.006) (0.009) (0.004) (0.005) (0.019) (0.006) (0.006) Observations 1,564 1,898 1,282 1,282 1,512 1,237 1,237 R-squared 0.254 0.154 0.217 0.216 0.139 IV F-stat 108.9 33.71 96.10 92.64 6.195 7.571 25.44 Notes : VARIABLES log_tei log_tei log_tei log_tei log_tei log_tei log_tei china_pen1 -0.001** -0.002** -0.001** -0.001** -0.002* -0.001** -0.001** (0.0005) (0.0008) (0.0005) (0.0005) (0.001) (0.0005) (0.001) india_pen1 0.001** 0.0002 0.001* 0.001* 0.002* 0.0001 0.001*** (0.0004) (0.0003) (0.0005) (0.0005) (0.0008) (0.0005) (0.0003) fta_pen1 -0.0003 
-0.0003 -0.0003 -0.0003 -0.001** -0.001*** -0.001** (0.0002) (0.0002) (0.0002) (0.0002) (0.0003) (0.0004) (0.0003) exporter 0.002 -0.003 -0.0003 -0.0002 0.005 0.002 0.001 (0.004) (0.004) (0.005) (0.005) (0.006) (0.005) (0.005) size -0.008** -0.009*** -0.011*** -0.011*** -0.012** -0.018*** -0.016*** (0.003) (0.004) (0.004) (0.004) (0.006) (0.006) (0.005) lossper_due_out 0.0003** 0.0002** 0.0003** 0.0003** 0.0005** 0.0003** 0.0003* (0.0001) (0.0001) (0.0001) (0.0001) (0.0002) (0.0001) (0.0001) finan_obst*(1 -bank_credit) -0.003 0.001 0.001 (0.004) (0.004) (0.004) finan_obst*(1 -bank_credit)*china_pen1 0.0005** (0.0002) elect_obst*(1 -elect_rate) -0.0003 0.0001 0.000 0.0002 -0.0001 (0.0002) (0.0001) (0.0001) (0.0001) (0.0001) elect_obst*(1 -elect_rate)*china_pen1 0.0001** (0.0001) finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 0.000 (0.000) finan_obst*(1 -FD) -0.038 0.010* -0.011 (0.023) (0.006) (0.007) finan_obst*(1 -FD)*china_pen1 0.002* (0.001) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 0.00002** (0.000) Constant 0.029*** 0.034*** 0.019*** 0.021*** 0.051*** 0.008 0.034*** (0.007) (0.012) (0.006) (0.006) (0.017) (0.009) (0.009) Observations 1,575 1,910 1,292 1,292 1,524 1,249 1,249 R-squared 0.261 0.098 0.234 0.231 0.090 IV F-stat 94.83 33.21 83.53 80.20 7.539 7.117 24.15 Notes : VARIABLES log_tfp log_tfp log_tfp log_tfp log_tfp log_tfp log_tfp Small china_pen1 -0.018** -0.013* -0.013** -0.013** -0.017** -0.012** -0.012* (0.008) (0.007) (0.006) (0.006) (0.008) (0.006) (0.006) finan_obst*(1 -bank_credit) -0.201** -0.125 -0.126 (0.0923) (0.085) (0.087) finan_obst*(1 -bank_credit)*china_pen1 0.009 (0.006) elect_obst*(1 -elect_rate) 0.0001 0.003* 0.003* 0.003* 0.003* (0.002) (0.002) (0.002) (0.002) (0.002) elect_obst*(1 -elect_rate)*china_pen1 0.0002** (0.0001) finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 0.0005 (0.0001) finan_obst*(1 -FD) -0.215** -0.154* -0.151 (0.096) (0.093) (0.094) finan_obst*(1 -FD)*china_pen1 0.009 (0.006) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 -0.000 (0.0001) Constant 0.152** 0.001 -0.055 -0.053 0.146** -0.036 -0.039 (0.067) (0.087) (0.101) (0.112) (0.063) (0.094) (0.104) Observations 838 993 710 710 849 721 721 R-squared 0.853 0.876 0.865 0.865 0.849 0.862 0.862 Large china_pen1 -0.0160* -0.0061 -0.0103 -0.0120 -0.0185* -0.0116 -0.0129 (0.009) (0.009) (0.009) (0.008) (0.009) (0.009) (0.009) finan_obst*(1 -bank_credit) -0.129 0.086 -0.057 (0.154) (0.130) (0.125) finan_obst*(1 -bank_credit)*china_pen1 0.0225** (0.00986) elect_obst*(1 -elect_rate) -0.0006 -0.005 -0.007 -0.002 -0.004 (0.004) (0.006) (0.006) (0.004) (0.005) elect_obst*(1 -elect_rate)*china_pen1 0.0002 (0.0001) finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 0.001 (0.001) finan_obst*(1 -FD) -0.185 0.0842 -0.105 (0.174) (0.151) (0.150) finan_obst*(1 -FD)*china_pen1 0.031** (0.0130) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 0.002* (0.001) Constant 4.395*** 3.022*** 4.665*** 4.910*** 3.234*** 3.119** 2.393* (0.650) (0.581) (0.558) (0.569) (1.165) (1.455) (1.319) Observations 327 396 283 283 333 290 290 R-squared 0.852 0.832 0.855 0.857 0.842 0.846 0.850 Notes : Robust standard errors in parentheses, *** p<0.01, ** p<0.05, * p<0.1 ; all models include 0.000 0.000 0.000 0.000 (0.000) (0.000) (0.000) (0.000) (0.000) elect_obst*(1 -elect_rate)*china_pen1 -0.000 (0.000) finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 -0.000 (0.000) finan_obst*(1 -FD) -0.004 0.007* 0.005 (0.006) (0.004) (0.004) 
finan_obst*(1 -FD)*china_pen1 0.0005* (0.0003) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 0.000 (0.000) Constant 0.0162*** 0.006 0.008* 0.008* 0.020*** 0.006 0.008* (0.006) (0.005) (0.004) (0.004) (0.007) (0.004) (0.004) Observations 1,169 1,398 957 957 1,114 912 912 R-squared 0.346 0.354 0.310 0.310 0.249 0.314 0.310 Large china_pen1 -0.0002 -0.005 -0.0005 -0.0005 -0.059 -0.009 -0.001 (0.001) (0.006) (0.001) (0.001) (3.732) (0.099) (0.002) finan_obst*(1 -bank_credit) 0.001 -0.006 -0.009 (0.007) (0.007) (0.008) finan_obst*(1 -bank_credit)*china_pen1 -0.0001 (0.0004) elect_obst*(1 -elect_rate) -0.0008 0.0002** 0.0001 0.001 0.000 (0.001) (.0001) (.0001) (0.008) (.0001) elect_obst*(1 -elect_rate)*china_pen1 .000 (0.000) finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 .000 (0.000) finan_obst*(1 -FD) -0.630 -0.011 -0.013 (39.88) (0.056) (0.015) finan_obst*(1 -FD)*china_pen1 0.023 (1.430) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 0.000 (0.000) Constant 0.060*** 0.083 0.013 0.039 0.750 0.073 0.052 (0.017) (0.077) (0.025) (0.032) (43.59) (0.423) (0.042) Observations 395 500 325 325 398 325 325 R-squared 0.150 0.195 0.198 0.055 Robust standard errors in parentheses log_tei log_tei log_tei log_tei log_tei log_tei log_tei Small china_pen1 -0.001** -0.001 -0.001** -0.001** -0.001*** -0.001*** -0.001*** (0.001) (0.0004) (0.0005) (0.0005) (0.0005) (0.0004) (0.0005) finan_obst*(1 -bank_credit) -0.001 0.005 0.004 (0.005) (0.005) (0.005) finan_obst*(1 -bank_credit)*china_pen1 0.0005* (0.0002) elect_obst*(1 -elect_rate) 0.0001 0.0003 0.0002 0.0004 0.0004 (0.0008) (0.0006) (0.0007) (0.0007) (0.0007) elect_obst*(1 -elect_rate)*china_pen1 0.000 (0.000) finan_obst*(1 -FD) -0.008 0.009* 0.004 (0.007) (0.005) (0.005) finan_obst*(1 -FD)*china_pen1 0.0007** (0.0003) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 0.000* (0.000) Constant 0.027*** 0.017** 0.018*** 0.018*** 0.030*** 0.013** 0.019*** (0.007) (0.007) (0.006) (0.006) (0.007) (0.006) (0.007) Observations 1,176 1,405 964 964 1,120 919 919 R-squared 0.331 0.353 0.322 0.322 0.266 0.303 0.289 Large china_pen1 -0.0002 -0.006 -0.0004 -0.0005 0.033 -0.012 -0.001 (0.001) (0.006) (0.001) (0.001) (1.333) (0.169) (0.002) finan_obst*(1 -bank_credit) -0.0002 -0.008 -0.009 (0.008) (0.007) (0.008) finan_obst*(1 -bank_credit)*china_pen1 -0.0001 (0.0004) elect_obst*(1 -elect_rate) -0.001 0.0004*** 0.0004* 0.001 0.0002 (0.001) (0.0001) (0.0002) (0.012) (0.0001) elect_obst*(1 -elect_rate)*china_pen1 0.0003 (0.0003) finan_obst*(1 -bank_credit)*elect_obst*(1 -elect_rate)*china_pen1 0.000 (0.0002) finan_obst*(1 -FD) 0.340 -0.016 -0.017 (13.92) (0.076) (0.016) finan_obst*(1 -FD)*china_pen1 -0.013 (0.511) finan_obst*(1 -FD)*elect_obst*(1 -elect_rate)*china_pen1 0.0001 (0.0001) Constant 0.063*** 0.075 -0.002 0.007 -0.316 0.080 0.039 (0.017) (0.079) (0.029) (0.041) (15.37) (0.849) (0.046) Observations 399 505 328 328 404 330 330 R-squared 0.227 0.297 0.297 0.216 Notes : VARIABLES Notes :
dummy country*time, industry*time and country*industry variables.
https://www.pwc.co.za/en/assets/pdf/africa-energy-review-2021.pdf
https://data.worldbank.org/indicator/AG.SRF.TOTL.K2?locations=TZ
https://data.worldbank.org/indicator/SP.POP.TOTL?locations=TZ
Bertelsmann Stiftung's Transformation Index (BTI) 2022.
https://www.adb.org/documents/adb-annual-report-2005
We postulate that these characteristics are not faithfully captured by the unobservable fixed effects.
Data on power plants can be downloaded from the World Bank website via the link: https://datacatalog.worldbank.org/dataset/tanzania-power-plants
In this paper, we assess the effects of China-Africa trade on the growth of African firms. To do so, we use the World Bank's Enterprise Survey data, the Eora (Input-Output) database and CEPII's BACI database. The results obtained with the instrumental-variable method show that Chinese products are a threat to the growth of African firms. China's penetration into the African market leads to a significant drop in firms' sales and size. Moreover, examining firms along several characteristics, we find that small, young and less capital-intensive firms are the most vulnerable. However, we also note that firms in natural-resource-dependent countries are less likely to suffer from Chinese penetration into their market. Finally, African firms are hit hard by the Chinese shock on common external markets, with a larger magnitude in developed countries.
African Trade Report 2020, Afreximbank, https://afr-corp-media-prod.s3-eu-west-1.amazonaws.com/afrexim/African-Trade-Report-2020.pdf
plus agriculture in Africa
Industrial Equipment, Electrical Appliances, Telecommunication Equipment, and Transport Vehicles
https://www.oecd.org/swac/publications/38409391.pdf // IMF (Direction of Trade Statistics), 2020
The World Bank has constructed an indicator that provides information on the level of national security, but this information is not available either for all countries or for all years
The variable Exporter represents the exporting status of the last period (t-1) contrary to the previous cases where it represented the exporting status at the first period t0
This infrastructure is necessary for mining and transport of natural resources
the majority of members of which took part in our estimates
without changing significantly its capital stock that serves directly to the production.
This assumption is important because we support that energy consumed changes without changing output level.
It is constant over a period (for instance 1 year) but can vary from one period to another.
A non-linear relationship between energy intensity and production has been found empirically by [START_REF] Sahu | [END_REF].
The excluded exogenous variables serve as instruments for the endogenous variable in the first equation and should not be directly related to the dependent variable of this equation.
World Bank Group, Enterprise Analysis Unit. 2017. "Firm-Level Productivity Estimates". www.enterprisesurveys.org/
The production value is only available in the EORA database. Now, whether we used the import and export values from BACI or EORA to calculate the absorption does not make a significant difference in the results.
See worldbank.org/financialdevelopment and http://data.worldbank.org/datacatalog/global-financialdevelopment.
https://www.afdb.org/en/about/mission-strategy
25 4,167 30,667 77,348 22,652 0,000 2,600 (23,847) (39,726) (39,726) (0,000) (2,191) 3 138 4,348 78,261 18,116 3,623 23,840 89,860 7,713 2,412 1,966 (18,020) (27,546) (23,948) (14,837) (2,383) 3 517 1,161 89,555 9,091 1,354 38,406 96,657 2,108 0,688 1,814 (27,150) (16,365) (12,722) (7,969) (2,413) 2 9 66,667 77,778 22,222 0 10,000 100,000 0,000 0,000 0,000 (0,000) (0,000) (0,000) (0,000) (0,000) 8 197 3,046 93,908 5,076 1,015 44,144 90,174 8,718 0,595 1,575 (33,710) (28,671) (27,304) (6,067) (2,234) 9 1222 0,491 72,013 19,067 8,919 36,439 89,648 5,500 3,520 2,939 (25,946) (28,662) (21,179) (17,905) (2,161) 1 548 1,095 78,102 16,788 5,109 36,507 94,248 1,022 1,774 1,627 (24,728) (22,421) (9,846) (12,567) (2,490) 13 390 1,538 92,051 6,923 1,025 48,771 86,856 11,186 0,653 1,699 (28,644) (31,489) (29,553) (6,148) (1,330) 3 127 4,724 96,062 3,937 0 33,333 84,148 5,696 0,635 2,279 (25,820) (35,193) (22,438) (4,435) (1,297) 3 163 3,681 78,527 17,178 4,294 48,000 95,507 3,555 0,253 2,063 (23,720) (17,073) (14,855) (3,062) (2,265) 6 637 0,942 86,813 10,047 3,139 38,489 91,550 5,915 0,714 2,195 (23,975) (25,503) (22,048) (5,638) (2,207) 2 199 3,015 96,482 3,517 0 42,619 81,166 2,209 0,982 1,583 (23,432) (36,250) (10,063) (5,796) (1,442) 1 53 11,321 64,151 33,962 1,887 47,333 88,580 11,420 0,000 4,600 (12,702) (29,961) (29,961) (0,000) (0,894) 8 434 1,382 90,092 8,295 1,613 40,432 88,301 6,427 0,963 1,622 (25,055) (29,578) (22,459) (7,356) (2,103) 3 66 9,091 93,939 6,061 0 10,000 80,476 6,349 0,000 1,091 (. . . ) (39,531) (24,580) (0,000) (1,044) 2 84 7,143 90,476 5,952 3,571 55,800 93,096 3,976 1,000 2,375 (34,851) (23,642) (18,995) (9,110) (2,264)
Source : Author's calculation
The table shows the number of outside employees per household. An outside employee is not a member of the household and therefore earns an hourly, daily, or monthly wage. It can be deduced from the table that most households (about 88%) do not use this type of employee. Those that do most often employ a single employee; others employ two (6.02%), three (1.31%), or more, occasionally several dozen.
Commentary to Appendixes 8 and 9
Appendix 8 presents the main characteristics of individuals and households both in the home-based entrepreneur sub-sample (HbES) and in the global sample (GS). The last column assesses the proportion, relative to the entire sample, of individuals and households involved in home-based activities according to the characteristics considered. The figure in Appendix 9 shows the age distribution in the sub-sample (HbES) and in the aggregate sample (GS). It appears from this description that home-based entrepreneurship is largely observed in households owning their houses and is practiced mainly by heads of households (both genders combined) or by their spouses. In urban areas, more households (20.16%) engage in home-based entrepreneurship than in rural areas (17.18%). It often serves as an alternative, i.e., a secondary activity, for individuals whose main activity is seasonal (e.g., agriculture). It involves people of all working ages (16-50 years) and all levels of education, the lowest levels being the most common.
Synthesis
Household electrification offers people several possibilities. It improves their well-being along several dimensions such as health, education, and economic situation. The latter manifests itself through several channels, including home-based entrepreneurship. indicators will be used as a robustness check.
Empirical Results
The following sections present the main results of the empirical analysis. The first two sections assess the effect of Chinese imports on firms' productivity and energy intensity, respectively. The last section reports the results of the simultaneous analysis of productivity and energy intensity.
China's penetration and firm productivity
China's influence on African firms' TFP is analyzed through different angles that are presented in Table 4.1. We first present the results obtained with the OLS estimator and then display the results of the model specification estimated with the instrumental variable (IV) estimator. In all these models, the TFP is expressed in logarithm. Although the OLS estimation suggests that China's penetration has no statistically significant effect on firms' productivity, the IV estimations show a significant and negative effect of Chinese penetration on firms' productivity. Failing to address the endogeneity of China's penetration variable leads to biased results. We estimate each model on the full sample and on the sub-samples of small (and medium) enterprises and of large enterprises separately. Model 1's specification includes China's penetration, the main trade partners' penetration, the firm's size, and its exporting status. Model 2 includes the electricity obstacle variable, while Model 3 includes the finance obstacle variable. Model 4 includes both electricity and finance obstacle variables. The interaction of both variables is included in Model 5 (see
ABSTRACT
African countries aspire to industrial development to diversify their exports, currently concentrated on natural resources.
However, the electrification and the reinforcement of the competitiveness of national companies remain a challenge when companies face fierce import competition, including from Chinese products. This topic is at the heart of the present thesis, which is divided into three chapters. The first chapter analyses the impact of Tanzanian households' access to electricity on their consumption and income from home-based activities. The findings suggest that electricity access leads to an increase in households' day-to-day consumption. Likewise, turnover and the number of jobs created have resulted in capital-intensive businesses, but electricity is essential to be capital-intensive. In the second chapter dedicated to the effect of Sino-African trade on African firms' growth, the empirical evidence suggests that China's penetration reduces the growth of small and younger firms, while larger firms have experienced a pro-competitive effect. Exporting firms were hit twice, as their growth decreased when China's penetration increased in the common external market. Finally, the third chapter investigates how China's penetration into African markets has affected firms' performance. From the theoretical perspective, firms' productivity and energy intensity are inversely related, while there is a logarithmic relationship between these indicators and firm production level. Empirical results show that China's penetration into the African market has led to a decrease in both productivity and energy intensity of small and medium firms, but has exerted no significant impact on large firms. Based on the theoretical results, we deduce that the decrease in energy intensity could be explained by the reduction of the production level. While both electricity and financial obstacles affect negatively firms' performance, small firms facing electricity barriers and large firms facing financial barriers have improved their performance under the impulse of the Chinese competition. Empirical results have also revealed that productivity negatively affects energy intensity, while there is no reverse causality.
KEYWORDS
Energy, Household, Consumption, China-African trade, Firm, Productivity, Financial development |
04121360 | en | [
"info"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04121360/file/publi-7294.pdf | Jakub Klemsa
email: [email protected]
Hitchhiker's Guide to the TFHE Scheme
Keywords: Fully homomorphic encryption, TFHE scheme, Reliable homomorphic evaluation, Correctness
Also referred to as the holy grail of cryptography, Fully Homomorphic Encryption (FHE) allows for arbitrary calculations over encrypted data. First proposed as a challenge by Rivest et al. in 1978, the existence of an FHE scheme has only been shown by Gentry in 2009. However, until these days, existing general-purpose FHE schemes suffer from a substantial computational overhead, which has been vastly reduced since the original construction of Gentry, but it still poses an obstacle that prevents a massive practical deployment of FHE. The TFHE scheme by Chillotti et al. represents the state-of-the-art among general-purpose FHE schemes. This paper aims to serve as a thorough guide to the TFHE scheme, with a strong focus on the reliability of computations over encrypted data, and help researchers and developers understand the internal mechanisms of TFHE in detail. In particular, it may serve as a baseline for future improvements and/or modifications of TFHE or related schemes.
Introduction
Fully Homomorphic Encryption (FHE), first discovered by Gentry [START_REF] Gentry | Fully homomorphic encryption using ideal lattices[END_REF] in 2009, enables the evaluation of an arbitrary (computable) function over encrypted data. As a basic use-case of FHE, we outline a secure cloud-aided computation: we describe how a user (U) may delegate a computation over her sensitive data to a semi-trusted cloud (C):
• U generates secret keys sk, and (public) evaluation keys ek, which she sends to C; • U encrypts her sensitive data d with sk, and sends the encrypted data to C; • C employs ek to evaluate function f , homomorphically, over the encrypted data (i.e., without ever decrypting it), yielding an encryption of f (d), which it sends back to U; • U decrypts the message from C with sk, obtaining the desired result: f (d) in plain.
We illustrate the concept of homomorphic evaluation of a function over the plain vs. over the encrypted data in Figure 1. Among existing FHE schemes, two main means of homomorphic evaluation can be identified: (i) the leveled approach (e.g., [START_REF] Brakerski | Leveled) fully homomorphic encryption without bootstrapping[END_REF][START_REF] Cheon | Homomorphic encryption for arithmetic of approximate numbers[END_REF]), and (ii) the bootstrapped approach (e.g., [START_REF] Gentry | Fully homomorphic encryption using ideal lattices[END_REF][START_REF] Chillotti | TFHE: fast fully homomorphic encryption over the torus[END_REF]), while a combination of both is also possible [START_REF] Cheon | Bootstrapping for approximate homomorphic encryption[END_REF]. The leveled approach (i) is particularly useful for functions that are represented by an evaluation circuit of limited depth which is known in advance. After evaluating it, no additional operation can be performed with the result, otherwise, the data might be corrupted. Based on the circuit's depth and other properties, parameters must be chosen accordingly. On the other hand, for the bootstrapped approach (ii), there is no limit on the circuit depth, which also means that the circuit does not need to be known in advance.
Fig. 1 Illustration of an evaluation of function f over the plain and over the encrypted data in the User's and in the Cloud's domain, respectively. In both ways, the same result is obtained.
The TFHE scheme by Chillotti et al. [START_REF] Chillotti | TFHE: fast fully homomorphic encryption over the torus[END_REF] is currently considered the state-of-the-art FHE scheme that follows the bootstrapped approach.
For a basic overview of the evolution of FHE schemes, we refer to a survey by Acar et al. [START_REF] Acar | A survey on homomorphic encryption schemes: Theory and implementation[END_REF] (from 2018; in particular for implementations, much progress has been made since then).
Basic Overview of TFHE
First, let us provide a high-level overview of TFHE and its abilities, and let us outline its structure.
Similar to many other FHE schemes, the TFHE scheme builds upon the Learning With Errors (LWE) encryption scheme, first introduced by Regev [START_REF] Regev | On lattices, learning with errors, random linear codes, and cryptography[END_REF]. There are two important properties of LWE: additive homomorphism, and the presence of noise; let us comment on either:
Additive homomorphism: LWE ciphertexts, also referred to as samples, are represented by vectors of (additive) group elements. By the nature of LWE encryption, LWE samples are additively homomorphic, which means that-roughly speaking-for two samples c 1 and c 2 , which encrypt respectively µ 1 and µ 2 , it holds that c 1 + c 2 encrypts µ 1 + µ 2 . Noise: To achieve security, LWE samples need to contain a certain amount of noise. However, with each homomorphic addition, noises also add up, which may ultimately destroy the accuracy/correctness of the plaintext.
Like many other FHE schemes, TFHE deals with the noise growth by defining a procedure referred to as bootstrapping. Bootstrapping aims at resetting the noise to a level that is fixed on average. Otherwise, if the noise exceeded a certain bound, the probability of correct decryption would drop rapidly.
To sum up, TFHE offers two operations: (i) homomorphic addition, and (ii) bootstrapping.
Homomorphic addition (i) is a very cheap operation, however, the noise accumulates. On the other hand, bootstrapping (ii) is a costly operation, but it refreshes the noise and-in case of TFHE-it is inherently capable of evaluating homomorphically a custom Look-Up Table (LUT), which can be moreover encrypted. These two operations are sufficient for the full homomorphism, i.e., the possibility to evaluate any computable function over encrypted data. In Figure 2, we introduce the TFHE gate, which comprises (i) homomorphic addition(s), grouped into a homomorphic dot-product with integer weights, followed by (ii) TFHE bootstrapping. For bootstrapping, we outline its internal structure that consists of four sub-operations: KeySwitch, ModSwitch, Blind-Rotate and SampleExtr.
Aim of this Work
The aim of this work is to provide FHE researchers and developers with a comprehensive and intelligible guide to the TFHE scheme. In particular, in this paper, we thoroughly analyze the noise growth of TFHE's operations, which is decisive for the correctness and reliability of homomorphic evaluations in the wild. Besides that, we comment on the purpose of selected tricks that are intended to decrease the noise growth. Therefore, our TFHE guide is supposed to provide useful insights for any prospective improvements and/or design modifications to TFHE(-related schemes).
Related Work. The original full paper on TFHE [START_REF] Chillotti | TFHE: fast fully homomorphic encryption over the torus[END_REF] is followed by other papers that recall or redefine TFHE in numerous ways [START_REF] Guimarães | Revisiting the functional bootstrap in TFHE[END_REF][START_REF] Chillotti | Improved Programmable Bootstrapping with Larger Precision and Efficient Arithmetic Circuits for TFHE[END_REF][START_REF] Chen | Multikey homomorphic encryption from tfhe[END_REF][START_REF] Kwak | Towards Practical Multi-key TFHE: Parallelizable, Key-Compatible, Quasi-linear Complexity[END_REF], while changing the notation as well as the approach to describe the bootstrapping procedure.
Joye [START_REF] Joye | Sok: Fully homomorphic encryption over the [discretized] torus[END_REF] provides a SoK paper on TFHE, supported by many examples with concrete values. Our TFHE guide complements their SoK in particular by providing a thorough noise growth analysis, which is one of its pillars.
Paper Outline
We introduce building blocks of the TFHE scheme in Section 2. Next, in Section 3, we describe the construction of TFHE in detail with a particular focus on noise propagation. We further focus on the correctness of homomorphic evaluation in Section 4. In Section 5, we briefly comment on implementation aspects of TFHE. We conclude our paper in Section 6.
Building Blocks of TFHE
In this section, we first briefly outline flavors of LWE, we outline a technical notion, referred to as the concentrated distribution, and we provide a list of symbols and notation. Then, we introduce in detail a generalized variant of LWE, denoted GLWE, and we comment on its additive homomorphism, security and other properties. Finally, we define the decomposition operation that we use to build up a compound scheme called GGSW, which enables multiplicative homomorphism.

The torus is the underlying additive group of LWE that is used in TFHE, denoted T and defined as T := R/Z with the addition operation. The torus can be represented by the interval [0, 1), with each addition followed by reduction mod 1, e.g., 0.3 + 0.8 = 0.1. Since T is an abelian group, we may perceive T as an algebraic Z-module, i.e., we further have scalar multiplication Z × T → T, defined as repeated addition.
The ring variant of LWE, introduced by Lyubashevsky et al. [START_REF] Lyubashevsky | On ideal lattices and learning with errors over rings[END_REF], extends the module's ring to a ring of polynomials with a bounded degree. In TFHE, we will work with the ring Z[X]/(X^N + 1), denoted Z^(N)[X], with N a power of two. Then, the underlying Z^(N)[X]-module comprises torus polynomials modulo X^N + 1, denoted T^(N)[X].
Concentrated Distribution
Unlike (scalar) multiplication, the division of a torus element by an integer cannot be defined without ambiguity, the same holds for the expectation of a distribution over the torus. However, this can be fixed for a concentrated distribution [START_REF] Chillotti | TFHE: fast fully homomorphic encryption over the torus[END_REF], which is a distribution with support limited to a ball of radius 1 /4, up to a negligible subset. For further details, we refer to [START_REF] Chillotti | TFHE: fast fully homomorphic encryption over the torus[END_REF].
Symbols & Notation
Throughout the paper, we use the following symbols & notation; we denote:
• B := {0, 1} ⊂ Z the set of binary coefficients,
• T the additive group R/Z, referred to as the torus (i.e., real numbers modulo 1),
• Z_n the quotient ring Z/nZ (or its additive group),
• ⌊·⌉ : R → Z the standard rounding function,
• for vector v, v_i stands for its i-th coordinate,
• ⟨u, v⟩ the dot product of two vectors u and v,
• M^(N)[X] the additive group (or ring) of polynomials modulo X^N + 1 with coefficients from M, where N ∈ N is a power of two,
• for polynomial p(X), p^(i) stands for the coefficient of p at X^i,
• for vector of polynomials w, w_i^(j) stands for the coefficient at X^j of the i-th coordinate of w,
• ∥p(X)∥_2^2 the square of the l_2-norm of polynomial p(X) (i.e., the sum of squared coefficients),
• a $← M the uniform draw of random variable a from M,
• a α← M the draw of random variable a from M with distribution α (for α ∈ R, we consider the zero-centered /discrete/ Gaussian draw with standard deviation α),
• E[X], Var[X] the expectation and the variance of random variable X, respectively.
Generalized LWE
First, we define a generalized variant of the LWE scheme, referred to as GLWE, which combines plain LWE with its ring variant. We define GLWE solely over the torus, although another underlying structure might be used, e.g., Z_q with prime q that is taken in some other schemes.

Definition 1 (GLWE Sample). Let k ∈ N be the dimension, N ∈ N, a power of two, be the degree, α ∈ R_0^+ be the standard deviation of the noise, and let the plaintext space be P = T^(N)[X], the ciphertext (sample) space C = T^(N)[X]^{1+k} and the key space K = Z^(N)[X]^k. For µ ∈ P and z χ← K, where χ is a key distribution, we call c = (b, a) =: GLWE_z(µ) the GLWE sample of message µ under key z, if

b = µ − ⟨z, a⟩ + e,    (1)

where a $← T^(N)[X]^k and e α← T^(N)[X]. If a = 0, we call the sample trivial, and if µ = 0, we call the sample homogeneous. We denote z̄ := (1, z) ∈ Z^(N)[X]^{1+k}, referred to as the extended key. For N = 1, we have the (plain) LWE sample and we usually denote its dimension by n. We also generalize GLWE sampling to vector messages, yielding a matrix of 1+k columns, with one GLWE sample per row.
GLWE sampling is actually encryption: in TFHE, plaintext data is encrypted using the plain LWE, while GLWE is used internally. To decrypt, we apply the GLWE phase function (followed by rounding if applicable).

Definition 2 (GLWE Phase). Let k, N and α be GLWE parameters as per Definition 1, and let c = (b, a) be a GLWE sample of µ under GLWE key z. We call the function φ_z : T^(N)[X] × T^(N)[X]^k → T^(N)[X],

φ_z(b, a) = b + ⟨z, a⟩ = ⟨z̄, c⟩ ( = µ + e),    (2)

the GLWE phase. We call the sample c valid iff the distribution of φ_z(c) is concentrated. Finally, for a valid sample c, we call msg_z(c) := E[φ_z(c)] the message of c, which equals µ, since the noise is zero-centered and concentrated.

Remark 1. The GLWE phase returns µ + e, i.e., the original message with a small amount of noise. We may define GLWE decryption as either:
1. an erroneous decryption via the GLWE phase: we accept some errors in the decrypted result, which might be considered harmless or even useful, e.g., in the context of differential privacy [START_REF] Dwork | The algorithmic foundations of differential privacy[END_REF]; or
2. a correctable decryption: for this purpose, we need to control the amount of noise and follow the GLWE phase by an appropriate rounding step (relevant for this paper); or
3. an expectation of the GLWE phase, i.e., msg_z(c): this is useful for formal definitions and proofs.
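To make Definitions 1 and 2 concrete, the following minimal Python sketch implements the plain LWE case (N = 1) directly over the torus, represented by Python floats modulo 1, together with the rounding decryption of option 2. The parameter values (dimension n = 630, noise α = 2^-15, cleartext precision π = 3) are illustrative placeholders rather than recommendations from this paper, and the sketch is a toy model, not a secure or optimized implementation.

import random

N_DIM = 630          # LWE dimension n (illustrative, toy value)
ALPHA = 2.0 ** -15   # noise standard deviation alpha (illustrative)
PI    = 3            # cleartext precision: cleartext space Z_{2^pi}

def keygen(n=N_DIM):
    # uniform binary LWE key, cf. Note 1
    return [random.randint(0, 1) for _ in range(n)]

def encode(m):
    # encode a cleartext m in Z_{2^pi} as the torus element m / 2^pi
    return (m % 2**PI) / 2**PI

def lwe_encrypt(mu, s):
    # b = mu - <s, a> + e  (mod 1), cf. (1)
    a = [random.random() for _ in range(len(s))]
    e = random.gauss(0.0, ALPHA)
    b = (mu - sum(si * ai for si, ai in zip(s, a)) + e) % 1.0
    return (b, a)

def phase(c, s):
    # phi_s(b, a) = b + <s, a>  (mod 1), cf. (2); returns mu + e
    b, a = c
    return (b + sum(si * ai for si, ai in zip(s, a))) % 1.0

def decrypt(c, s):
    # correctable decryption: round the phase to the closest element of M
    return round(phase(c, s) * 2**PI) % 2**PI

s = keygen()
assert decrypt(lwe_encrypt(encode(5), s), s) == 5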
In the following theorem, we state the additively homomorphic property of GLWE.
Theorem 1 (Additive Homomorphism). Let c̄_1, ..., c̄_n be valid and independent GLWE samples under GLWE key z and let w_1, ..., w_n ∈ Z^(N)[X] be integer polynomials (weights). In case c̄ = Σ_{i=1}^n w_i · c̄_i is a valid GLWE sample, it holds

msg_z( Σ_{i=1}^n w_i · c̄_i ) = Σ_{i=1}^n w_i · msg_z(c̄_i)    (3)

and for the noise variance

Var[c̄] = Σ_{i=1}^n ∥w_i∥_2^2 · Var[c̄_i].    (4)

If all samples c̄_i have the same variance V_0, we have Var[c̄] = V_0 · Σ_{i=1}^n ∥w_i∥_2^2 and we define

ν² := Σ_{i=1}^n ∥w_i∥_2^2,    (5)
referred to as the quadratic weights. We refer to the operation (3) as the (homomorphic) dot product (DP).
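Continuing the sketch above (and reusing its helpers), the following lines illustrate Theorem 1: a dot product of LWE samples with small integer weights decrypts to the corresponding dot product of cleartexts, as long as the accumulated noise, which grows with the quadratic weights ν², stays within bounds.

def lwe_add(c1, c2):
    # component-wise addition of two LWE samples (mod 1)
    (b1, a1), (b2, a2) = c1, c2
    return ((b1 + b2) % 1.0, [(x + y) % 1.0 for x, y in zip(a1, a2)])

def lwe_scale(w, c):
    # multiplication by an integer weight w (i.e., repeated addition)
    b, a = c
    return ((w * b) % 1.0, [(w * x) % 1.0 for x in a])

def dot_product(weights, samples):
    # homomorphic dot product, cf. (3); noise variance grows by nu^2 = sum w_i^2, cf. (5)
    acc = lwe_scale(weights[0], samples[0])
    for w, c in zip(weights[1:], samples[1:]):
        acc = lwe_add(acc, lwe_scale(w, c))
    return acc

s = keygen()
c1, c2 = lwe_encrypt(encode(1), s), lwe_encrypt(encode(2), s)
assert decrypt(dot_product([2, 1], [c1, c2]), s) == 4   # 2*1 + 1*2 = 4, nu^2 = 5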
Discrete-Valued Plaintext Space
As outlined in Remark 1, item 2, in this paper, we focus on a variant of GLWE that restricts its messages to a discrete subspace of the entire torus plaintext space. Denoted by M, we refer to the plaintext subspace as the cleartext space, leaving the term plaintext space for torus polynomials.
In this paper, we only focus on the cleartext space of the form M = (1/2^π) Z/Z ⊂ T (a subgroup of T isomorphic to Z_{2^π}), where we refer to the parameter π as the cleartext precision. In terms of Definition 2, if it holds for the noise e that |e| < 1/2^{π+1}, then rounding of the value φ_z(b, a) ∈ T to the closest element of M leads to the correct decryption/recovery of µ.
Discretized Torus
For the sake of simplicity of the noise growth analysis, TFHE is defined over the continuous torus, whereas in implementation, a discretized finite representation must be used instead. To cover the unit interval uniformly, TFHE implementations use an integral type, usually a 32- or 64-bit (u)int, to represent a torus element, where we denote the bit-precision by τ. E.g., for the τ = 32-bit uint32 type, t ∈ uint32 represents t/2^32 ∈ T ∼ [0, 1), where the denominator is usually denoted by q = 2^τ (in this case q = 2^32). Using such a representation, we effectively restrict the torus T to its submodule T_q := q^{-1} Z/Z ⊂ T.
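In an implementation, the float representation used in the sketches above is replaced by the discretized torus T_q: a torus element t is stored as the integer ⌊q · t⌉ mod q with q = 2^τ, and every torus addition becomes a wrapping integer addition modulo q. A minimal sketch with τ = 32:

TAU = 32
Q = 1 << TAU             # q = 2^tau

def to_torus32(x):
    # real representative of a torus element -> element of T_q
    return round(x * Q) % Q

def from_torus32(t):
    # element of T_q -> representative in [0, 1)
    return t / Q

# torus addition = wrapping integer addition modulo q: 0.3 + 0.8 = 0.1 on the torus
t = (to_torus32(0.3) + to_torus32(0.8)) % Q
assert abs(from_torus32(t) - 0.1) < 2.0 ** -TAU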
Distribution of GLWE Keys
For the coefficients of GLWE keys, a ternary distribution χ p : (-1, 0, 1) → (p, 1 -2p, p), parameterized by p ∈ (0, 1 /2), can be used. In particular, uniform ternary distribution is suggested by a draft of the homomorphic encryption standard [START_REF] Albrecht | Homomorphic encryption standard[END_REF], and it also is widely adopted by main FHE libraries like HElib [START_REF]HElib[END_REF], Lattigo [START_REF] Mouchet | Lattigo: A multiparty homomorphic encryption library in go[END_REF], SEAL [START_REF][END_REF], or HEAAN [19], although they implement other schemes than TFHE. With a fixed GLWE dimension and carefully chosen p, the distribution χ p may achieve better security as well as lower noise growth than uniform binary U 2 : (0, 1) → ( 1 /2, 1 /2). On the other hand, it is worth noting that for "small" values of p, such keys are also referred to as sparse keys (in particular with a fixed/limited Hamming weight), and there exist specially tailored attacks [START_REF] Cheon | A hybrid of dual and meet-in-the-middle attack on sparse and ternary secret lwe[END_REF][START_REF] Son | Revisiting the hybrid attack on sparse secret lwe and application to he parameters[END_REF]; we discuss security in the following paragraphs. Note 1. In TFHE, one instance of LWE and one of GLWE is employed. For LWE keys, usually, uniform binary distribution is used for technical reasons, although attempts to extend the key space can be found in the literature [START_REF] Joye | Blind rotation in fully homomorphic encryption with extended keys[END_REF]. For GLWE keys, a ternary distribution can be used immediately.
Security of (G)LWE
Estimation of the security of (G)LWE encryption is a complex task: it depends on (i) the size of the secret key (i.e., the dimension and/or the polynomial degree), (ii) the distribution of its coefficients, (iii) the distribution of the noise, which is usually given by its standard deviation, denoted by α, and (iv) the underlying structure (usually integers modulo q). As a rule of thumb, it holds that the longer key, the better security, as well as the greater noise, the better security.
A state-of-the-art tool that implements an LWE security assessment is known as lattice--estimator -a tool by Albrecht et al. [START_REF] Albrecht | contributors: Security Estimates for Lattice Problems[END_REF][START_REF] Albrecht | On the concrete hardness of learning with errors[END_REF]. Authors aim at considering all known relevant attacks on LWE, including those targeting sparse keys, as outlined previously. A plot that shows selected results of lattice-estimator can be found in Figure 3. A code example of the usage of lattice-estimator as well as raw data that were used to generate the figure can be found in our repository 1 .
Balancing Parameters
The downside of increasing the key size (improves security) is a longer evaluation time (reduces performance); similarly, increasing the amount of noise (improves security) leads to an error-prone evaluation (reduces correctness). Therefore, the goal is to find the best balance within the triangle of somewhat orthogonal goals: security, performance, and correctness.
The problem of finding such a balance is thoroughly studied by Bergerat et al. [START_REF] Bergerat | Parameter optimization & larger precision for (t)fhe[END_REF], who provide concrete results that aim at achieving the best performance, without sacrificing security, nor correctness.
Decomposition
To enable homomorphic multiplication and at the same time to reduce its noise growth, torus elements get decomposed into a series of integers. The operation is parameterized by (i) the decomposition base (denoted B; we only consider B = 2^γ, a power of two), and (ii) the decomposition depth (denoted d). We further denote

g := (1/B, 1/B², ..., 1/B^d),    (6)

referred to as the gadget vector. We define gadget decomposition of µ ∈ T ∼ [−1/2, 1/2) ⊂ R, denoted g^{-1}(µ), as the base-B representation of μ̃ = ⌊B^d · µ⌉ ∈ Z (multiplied in R) in the alphabet [−B/2, B/2) ∩ Z. Note that such a decomposition is unique. For the decomposition error, it holds that

|µ − ⟨g, g^{-1}(µ)⟩| ≤ 1/(2B^d).    (7)
We denote

ε² := 1/(12 B^{2d})    (8)

and

V_B := (B² + 2)/12    (9)

the variance of the decomposition error and the mean of squares of the alphabet [−B/2, B/2) ∩ Z (n.b., we assume B is even), respectively; for both we consider a uniform distribution. Note that with the alphabet [0, B) ∩ Z, the respective value of V_B would have been higher, i.e., this is one of the little tricks that reduce the noise growth later. For k ∈ N, k ≥ 2, we further denote

G_k := I_k ⊗ g,    (10)

where I_k is the identity matrix of size k and ⊗ stands for the tensor product, i.e., we have G_k ∈ T^{kd×k}, referred to as the gadget matrix.

We generalize g^{-1} to torus vectors and torus polynomials (and their combination) in a natural way: for a vector t ∈ T^n, g^{-1}(t) is the concatenation of the respective component-wise decompositions g^{-1}(t_i); for a polynomial t ∈ T^(N)[X], g^{-1}(t) proceeds coefficient-wise, i.e., the output is a vector of integer polynomials. Finally, for a vector of torus polynomials, g^{-1} outputs a concatenation of the respective vectors of integer polynomials.

1 https://github.com/fakub/LWE-Estimates
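The signed gadget decomposition and the reconstruction error bound (7) can be sketched as follows, in the same toy Python setting as above; the base and depth (B = 2^8, d = 3) are illustrative choices.

GAMMA, D = 8, 3
B = 1 << GAMMA                       # decomposition base B = 2^gamma

def gadget_decompose(mu):
    # g^{-1}(mu) for a torus element mu (a float in [0, 1)): d digits in [-B/2, B/2)
    rep = mu - 1.0 if mu >= 0.5 else mu      # representative in [-1/2, 1/2)
    m = round(rep * B**D)                    # closest multiple of 1/B^d
    digits = []
    for _ in range(D):
        r = m % B
        if r >= B // 2:                      # shift digit into [-B/2, B/2)
            r -= B
        digits.append(r)
        m = (m - r) // B
    digits.reverse()                         # digits for 1/B, 1/B^2, ..., 1/B^d
    return digits

def gadget_recompose(digits):
    # <g, g^{-1}(mu)> with g = (1/B, ..., 1/B^d), cf. (6)
    return sum(d / B**(i + 1) for i, d in enumerate(digits)) % 1.0

mu = 0.123456
err = (gadget_recompose(gadget_decompose(mu)) - mu) % 1.0
err = min(err, 1.0 - err)                    # distance on the torus
assert err <= 1 / (2 * B**D)                 # cf. (7)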
GGSW & Homomorphic Multiplication
Unlike GLWE, which encrypts torus polynomials, GGSW encrypts integer polynomials. The main aim of GGSW is to allow homomorphic multiplication of a (GLWE-encrypted) torus polynomial by a (GGSW-encrypted) integer polynomial. The multiplicative homomorphic operation is referred to as the External Product, denoted by ⊡ : GGSW × GLWE → GLWE.

Definition 3 (GGSW Sample). Let k, N and α be the parameters of a GLWE instance with key z. We call C̄ = Z̄ + m · G_{1+k}, C̄ ∈ T^(N)[X]^{(1+k)d × (1+k)}, the GGSW sample of m ∈ Z^(N)[X] if the rows of Z̄ are mutually independent, homogeneous GLWE samples under the key z. We call the sample valid iff there exists m ∈ Z^(N)[X] such that each row of C̄ − m · G_{1+k} is a valid homogeneous GLWE sample.

Definition 4 (External Product). For a GLWE sample c̄ = (b, a) ∈ T^(N)[X]^{1+k} and a GGSW sample Ā of corresponding dimensions, we define the External Product, ⊡ : GGSW × GLWE → GLWE, as

g^{-1}(c̄)^T · Ā =: Ā ⊡ c̄.    (11)
In the following theorem, we state the multiplicative homomorphic property of the external product and we evaluate its excess noise.

Theorem 2 (Correctness & Noise Growth of ⊡). Given a GLWE sample c̄ of µ_c ∈ T^(N)[X] under GLWE key z and noise parameter α, and a GGSW sample Ā of m_A ∈ Z^(N)[X] under the same key and noise parameters, the external product returns a GLWE sample c̄′ = Ā ⊡ c̄, which holds excess noise e_⊡, given by ⟨z̄, c̄′⟩ = m_A · ⟨z̄, c̄⟩ + e_⊡, for which it holds

Var[e_⊡] ≈ (1 + k) d N V_B α²   [amplified GGSW noise]  +  ∥m_A∥_2^2 · ε² (1 + k N V_z)   [decomposition errors],    (12)

where V_z is the variance of individual coefficients of the GLWE key z and other parameters are as per previous definitions. If e_⊡ and the noise of c̄ are "sufficiently small", c̄′ encrypts msg_z(c̄′) = m_A · µ_c, i.e., the external product is indeed multiplicatively homomorphic.
Proof. Find the proof in Appendix A.1.
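The following sketch instantiates Definitions 3 and 4 in the plain LWE case (N = 1, so polynomials degenerate to scalars), reusing keygen, encode and decrypt from the LWE sketch and gadget_decompose, B, D from the decomposition sketch; samples are handled as flat lists (b, a_1, ..., a_n). The noise parameter of the GGSW rows and the GGSW message m_A = 3 are illustrative, chosen small enough for the toy parameters to keep the result decryptable.

ALPHA_GGSW = 2.0 ** -25      # fresh noise of the GGSW rows (illustrative)

def lwe_encrypt_noisy(mu, s, alpha):
    a = [random.random() for _ in range(len(s))]
    e = random.gauss(0.0, alpha)
    b = (mu - sum(si * ai for si, ai in zip(s, a)) + e) % 1.0
    return [b] + a                            # flat sample (b, a_1, ..., a_n)

def ggsw_encrypt(m, s):
    # C = Z + m * G_{1+n}: one homogeneous LWE row per (component i, level j)
    rows = []
    for i in range(1 + len(s)):
        for j in range(1, D + 1):
            row = lwe_encrypt_noisy(0.0, s, ALPHA_GGSW)
            row[i] = (row[i] + m / B**j) % 1.0   # add m times the gadget entry
            rows.append(row)
    return rows

def external_product(ggsw, c):
    # A extprod c = g^{-1}(c)^T . C, cf. (11)
    u = [digit for comp in c for digit in gadget_decompose(comp)]
    out = [0.0] * len(c)
    for u_r, row in zip(u, ggsw):
        for col in range(len(out)):
            out[col] = (out[col] + u_r * row[col]) % 1.0
    return out

s = keygen()
c = lwe_encrypt_noisy(encode(1), s, ALPHA_GGSW)   # encrypts the cleartext 1
A = ggsw_encrypt(3, s)                            # GGSW sample of m_A = 3
c_out = external_product(A, c)
assert decrypt((c_out[0], c_out[1:]), s) == 3     # msg = m_A * mu_c, i.e., 3 * 1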
Constructing the TFHE Scheme
So far, there are two issues with (G)LWE:
1. As shown in Theorem 1, each additive homomorphic operation over (G)LWE samples leads to noise growth in the resulting aggregate sample, which limits the number of additions and which may also lead to incorrect results if the noise grows "too much". 2. Besides that, no homomorphic operation other than addition has been defined yet, which is not sufficient to achieve the full homomorphism.
The procedure referred to as bootstrapping aims at resolving them both at the same time: while refreshing the noise to a certain, constant-onaverage level, bootstrapping also inherently evaluates a function, referred to as the bootstrapping function, represented by a Look-Up Table (LUT). This makes TFHE fully homomorphic. Note 2. An approach that clearly achieves the full homomorphism is presented in the original paper by Chillotti et al. [START_REF] Chillotti | TFHE: fast fully homomorphic encryption over the torus[END_REF], where authors define several logical gates, including ¬, ∨, and ∧. We refer to this variant as the binary TFHE, however, in this paper we rather focus on the variant of TFHE with a discrete multi-value cleartext space Z 2 π , as out- lined in Section 2.1.1. Binary TFHE might then be perceived as a special case.
As already outlined in Figure 2, TFHE gate is a combination of a homomorphic dot-product (cf. Theorem 1) and the bootstrapping procedure, which consists of four algorithms: KeySwitch, Mod-Switch, BlindRotate and SampleExtr; find a more detailed illustration of bootstrapping in Figure 4. In the rest of this section, we discuss each algorithm in detail (except ModSwitch, which we cover together with BlindRotate) and we combine them into the Bootstrap algorithm.
Fig. 4 Outline of the internal structure of TFHE's bootstrapping. The aim of KeySwitch is to improve performance, whereas BlindRotate (inside the gray box) evaluates the LUT and refreshes the noise.

According to the results of Bergerat et al. [START_REF] Bergerat | Parameter optimization & larger precision for (t)fhe[END_REF], in most cases, more efficient TFHE parameters can be found if the dot-product is moved before key-switching (originally proposed by Bourse et al. [START_REF] Bourse | Fast homomorphic evaluation of deep discretized neural networks[END_REF]), as opposed to the original variant of TFHE [START_REF] Chillotti | TFHE: fast fully homomorphic encryption over the torus[END_REF]. I.e., in the new variant, key-switching appears at the beginning of bootstrapping, whereas in the original variant, key-switching is the last step. The authors of [START_REF] Bergerat | Parameter optimization & larger precision for (t)fhe[END_REF] also consider a variant that omits key-switching, but they do not find it more efficient either. Hence, we describe solely the new, re-ordered variant with key-switching in this paper.
Key-Switching
The first step towards refreshing the noise, which happens in blind-rotate, is key-switching. Since blind-rotate is a demanding operation, the aim of key-switching is to reduce the dimension of the input sample. Attempts to omit key-switching were also tested by Bergerat et al. [START_REF] Bergerat | Parameter optimization & larger precision for (t)fhe[END_REF], however, achieving a poorer performance than the variant with key-switching.
The key-switching operation, denoted Key-Switch, effectively changes the encryption key of LWE sample (b ′ , a ′ ) from LWE key s ′ ∈ B n ′ to LWE key s ∈ B n . Besides the input LWE sample, KeySwitch requires a series of key-switching keys, while the j-th key is defined as
KS_j := LWE_s(s′_j · g′),  j ∈ [1, n′],    (13)
where g ′ is a gadget vector given by decomposition base B ′ and depth d ′ , and where each component of s ′ j g ′ produces one LWE sample, independent from others. I.e., KS j ∈ T d ′ ,1+n is interpreted as a matrix, where rows are actual LWE samples. We denote the set of key-switching keys from s ′ to s as KS s ′ →s := (KS j ) n ′ j=1 . Note that key-switching keys consist of LWE samples and they can therefore be published as a part of evaluation keys.
Given LWE sample (b ′ , a ′ ) ∈ T 1+n ′ of µ under s ′ , key-switching keys KS s ′ →s , generated with gadget vector g ′ , we define key-switching as
KeySwitch_{s′→s}(b′, a′) = (b′, 0) + Σ_{j=1}^{n′} g′^{-1}(a′_j)^T · KS_j,    (14)
which returns an LWE sample of µ under s. Note that, in fact, KeySwitch homomorphically evaluates the phase function. In the following theorem, we evaluate the excess noise induced by KeySwitch.

Theorem 3 (Correctness & Noise Growth of Key-Switching). Given an LWE sample c̄′ of µ ∈ T under LWE key s′ and key-switching keys KS_{s′→s}, encrypted with noise parameter α′, KeySwitch_{s′→s} returns an LWE sample c̄, which holds excess noise e_KS, given by ⟨s̄, c̄⟩ = ⟨s̄′, c̄′⟩ + e_KS, for which it holds

Var[e_KS] ≈ n′ V_{s′} ε′²   [decomposition errors]  +  n′ d′ V_{B′} α′²   [amplified KS noise],    (15)

where ε′² and V_{B′} are as per (8) and (9), respectively, with B′ and d′, V_{s′} is the variance of individual coefficients of the LWE key s′, and other parameters are as per previous definitions. If e_KS and the noise of c̄′ are "sufficiently small", it holds µ = msg_s(c̄) = msg_{s′}(c̄′), i.e., KeySwitch indeed changes the key without modifying the message.
Proof. Find the proof in Appendix A.2.
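A sketch of key-switching in the same toy setting, reusing keygen and decrypt from the LWE sketch, gadget_decompose, B, D from the decomposition sketch and lwe_encrypt_noisy from the external-product sketch; for simplicity, the key-switching decomposition reuses the gadget base B and depth d instead of separate parameters B′, d′. In line with the encryption convention of Definition 1 (phase b + ⟨s, a⟩), the partial products are added to the trivial sample (b′, 0). All dimensions and noise values are illustrative, toy choices.

def keyswitch_keygen(s_from, s_to, alpha=2.0 ** -25):
    # KS_j encrypts s'_j * g under s_to, one LWE row per level t = 1..d, cf. (13)
    return [[lwe_encrypt_noisy(s_from[j] / B**t, s_to, alpha)
             for t in range(1, D + 1)]
            for j in range(len(s_from))]

def keyswitch(c_from, ks, n_to):
    # (b', 0) plus sum_j g^{-1}(a'_j)^T . KS_j, cf. (14)
    b_from, a_from = c_from[0], c_from[1:]
    out = [b_from] + [0.0] * n_to
    for aj, ks_j in zip(a_from, ks):
        for digit, row in zip(gadget_decompose(aj), ks_j):
            for col in range(len(out)):
                out[col] = (out[col] + digit * row[col]) % 1.0
    return out

s_in, s_out = keygen(800), keygen()     # switch from an 800-bit key to the default one
c_in = lwe_encrypt_noisy(encode(6), s_in, 2.0 ** -25)
c_out = keyswitch(c_in, keyswitch_keygen(s_in, s_out), len(s_out))
assert decrypt((c_out[0], c_out[1:]), s_out) == 6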
Blind-Rotate
The blind-rotate operation, denoted BlindRotate, is the cornerstone of bootstrapping since this is where the noise gets refreshed. It combines two ingredients: the decryption (phase) function φ s (b, a) = µ + e (cf. ( 2)), and the multiplicative homomorphism of GGSW×GLWE samples (cf. Theorem 2).
Internally, blind-rotate evaluates a relation reminiscent of the phase function φ s (b, a). However, as such, the phase function does not get rid of the noise -indeed, the error term remains present in the original amount; cf. [START_REF] Brakerski | Leveled) fully homomorphic encryption without bootstrapping[END_REF]. Therefore, a rounding step-as outlined in Section 2.1.1-must be included, too. Blind-rotate achieves rounding using a staircase LUT, i.e., a LUT that encodes a staircase function, where the "stairs" are responsible for rounding. Such a LUT is provided in a form of a (possibly encrypted) polynomial, which is referred to as the test vector, denoted by tv(X), whose coefficients represent the LUT values.
Roughly speaking, we aim at multiplying tv(X) by X -M , where M holds somehow the erroneous phase µ + e. This "shifts" the coefficients of tv(X) by M positions towards lower powers of X (we can think of discarding the coefficients that underflow for now). Then, evaluating tv(X)•X -M at X = 0 (i.e., taking the constant term of the product) yields the originally M -th coefficient of tv(X).
In the following paragraphs, we outline more concretely how a LUT can be encoded into a torus polynomial, which we further reduce modulo X N + 1, so that it can be taken as a plaintext for a (possibly trivial) GLWE sample.
Encoding a LUT into a Polynomial Modulo X N + 1
The product of degree-N polynomial tv(X) and monomial X m (with 0 ≤ m < N ) holds the coefficients of tv shifted by m positions towards higher degrees. Reducing the product tv(X) • X m modulo X N + 1 brings the coefficients of powers higher than or equal to N back to lower powers (namely by N positions) while flipping their sign (e.g., aX N +k is reduced to -aX k ). Hence, multiplication of a polynomial by a monomial modulo X N + 1 yields a negacyclic rotation. These are the consequences for LUT evaluation in blind-rotate:
• multiplying tv(X) by X -m mod X N + 1 with 0 ≤ m < N results in moving the m-th coefficient of tv(X) to the constant position, which is then taken as a result of the LUT; • for N ≤ m < 2N , the result equals to the (m -N )-th coefficient of tv(X) with a flipped sign, due to the negacyclic rotation; and • for greater m, it is worth noting that the period is 2N .
Since we assume that tv is a torus polynomial, we have LUT : Z_{2N} → T and it holds

LUT(N + m) = −LUT(m),  m ∈ [0, N),    (16)

i.e., only the first N values of a LUT need to be provided explicitly, while the other N values are given implicitly by the negacyclic extension, and the rest is periodic with a period of 2N. Encoded in a test vector tv ∈ T^(N)[X] as

tv^(m) = LUT(m),  m ∈ [0, N),    (17)

the LUT is evaluated at m ∈ Z as

(X^{−m} · tv(X) mod X^N + 1)^(0) = (−1)^{⌊m/N⌋} · tv^(m mod N) = LUT(m mod 2N).    (18)
We illustrate encoding of a LUT into a polynomial (test vector) tv(X) in Figure 5. Next, we outline the overall idea of blind-rotate.
Encoding the Stairs
Let us put forward explicitly the process of encoding the desired (negacyclic) bootstrapping function f : Z 2 π → Z 2 π (which acts on cleartexts) into the respective LUT : Z 2N → T, represented by the test vector tv(X), including the "stairs":
LUT(k) = f(k · 2^π / 2N),  k ∈ [0, 2N).    (19)
We provide an illustration of such an encoding in Figure 6. We recall that only the LUT values for k ∈ [0, N) are actually encoded into the test vector; cf. (17), (18) and Figure 5. Recall that the "stairs" are supposed to be responsible for rounding, which in turn refreshes the noise. From (19), it follows that the width of such a stair is 1/2^π. By E_max, defined as

E_max := 1/2^{π+1},    (20)

we denote the maximum error magnitude that leads to the correct LUT evaluation; cf. Figure 6.
Fig. 5 Illustration of encoding of a LUT into a polynomial mod X^N + 1 that is used in blind-rotate. We set N = 4; the provided LUT values give tv(X) = 0 + 1X − 1X² + 2X³ (mod X⁴ + 1), extended negacyclically, and we evaluate at m = 3, which means "rotation" by X^{−3}: X^{−3} · tv(X) = 2 + 0X − 1X² + 1X³ (mod X⁴ + 1). The desired output value LUT(3) is emphasized in red. N.b., in this illustration, we omit the "stairs" for simplicity.
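A few lines of Python reproduce the negacyclic rotation of Figure 5, i.e., the LUT evaluation rule (18), with a test vector given as the list of its N coefficients (a toy sketch in the same setting as the previous listings):

def negacyclic_rotate(tv, m):
    # X^{-m} * tv(X) mod X^N + 1: coefficient i of the result is the (i+m)-th
    # coefficient of tv, negated whenever the index wraps past N, cf. (18)
    N = len(tv)
    out = []
    for i in range(N):
        j = (i + m) % (2 * N)
        out.append(tv[j] if j < N else -tv[j - N])
    return out

tv = [0, 1, -1, 2]                                # the test vector of Figure 5 (N = 4)
assert negacyclic_rotate(tv, 3) == [2, 0, -1, 1]  # matches X^{-3} * tv(X) above
assert negacyclic_rotate(tv, 3)[0] == tv[3]       # constant term = LUT(3)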
Note 3. Bootstrapping cannot be applied to only refreshing the noise, i.e., setting identity as the bootstrapping function, since identity is not negacyclic. A workaround must be made, with respect to a particular use case. Specifically, many existing implementations prepend an extra bit of padding, which they set to zero and do not use. Note that such implementations need to somehow prevent possible overflows of homomorphic additions.
Idea of Blind-Rotate
First, let us get back to the phase function, which is responsible for decryption. Originally, with a LWE sample (b, a) to be bootstrapped, φ s (b, a) is (i) evaluated over the torus, and (ii) it is using known bits of the key s. For (i): based on previous observations, we first rescale & round the sample (b, a) ∈ T 1+n to the Z 2N domain, which preserves periodicity. Therefore, we calculate the scaled and rounded value of the phase function as
m = b̃ + ⟨s, ã⟩,    (21)
where b̃ = ⌊2N b⌉ and ãi = ⌊2N a_i⌉, which is also referred to as modulus switching. As outlined previously, we aim at performing the evaluation of m as per (21) in powers of X; cf. (18).
For (ii): the secret key s is clearly not known to the evaluator (the cloud). Instead, bits s_i of the key are provided in a form of GGSW samples BK_i, encrypted with GLWE key z, and referred to as the bootstrapping keys, denoted by BK_{s→z} = (BK_i)_{i=1}^n. From (i) and (ii), it follows that given the sample (b, a) and the encrypted bits of the key s (bootstrapping keys), we can apply homomorphic operations to obtain the (encrypted) monomial X^m modulo X^N + 1, which we employ for LUT evaluation as per (18). I.e., the test vector gets blindly rotated.

Fig. 6 Relation between a negacyclic bootstrapping function f : Z_{2^π} → Z_{2^π} and the respective staircase LUT : Z_{2N} → T (illustrative). If the evaluated value does not leave its "stair" (of width 1/2^π), i.e., the input's error magnitude is lower than E_max, the LUT gets evaluated correctly.
In the following paragraphs, we provide a full technical overview of modulus-switching and blind-rotate, respectively.
Modulus-Switching
As outlined, blind-rotate is preceded by a technical step, referred to as modulus-switching and denoted by ModSwitch, which is parameterized by N ∈ N. ModSwitch N , which inputs LWE sample (b, a) ∈ T 1+n under LWE key s and outputs LWE sample ( b, ã) ∈ Z 1+n 2N under the same key, is defined as
ModSwitch_N(b, a) = ( ⌊2N b⌉, (⌊2N a_i⌉)_{i=1}^n ) =: (b̃, ã),    (22)
where multiplications of type 2N •a i are performed in R (using any unit interval for T), then rounding brings result back to Z, from where we easily obtain Z 2N .
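A minimal sketch of ModSwitch_N in the toy Python setting introduced earlier (reusing keygen, encode and lwe_encrypt); the final check uses a loose, illustrative error bound motivated by Lemma 1 below.

def modswitch(c, N):
    # (b, a) over T  ->  (b~, a~) over Z_{2N}, cf. (22)
    b, a = c
    return (round(2 * N * b) % (2 * N), [round(2 * N * ai) % (2 * N) for ai in a])

s = keygen()
c = lwe_encrypt(encode(2), s)
b_t, a_t = modswitch(c, 1024)
m = (b_t + sum(si * ai for si, ai in zip(s, a_t))) % 2048
# m / 2N approximates the phase up to the modulus-switching error of Lemma 1
assert abs(m / 2048 - encode(2)) < 0.02            # loose, illustrative bound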
Due to rounding, additional noise is induced by ModSwitch; we evaluate it in the following lemma.

Lemma 1 (Noise Growth of Modulus-Switching). ModSwitch_N induces an excess noise, given by 1/2N · ⟨s, (b̃, ã)⟩ = ⟨s, (b, a)⟩ + e_MS, for which it holds

$$\mathrm{Var}[e_{\mathrm{MS}}] = \frac{1 + n/2}{48 N^2}, \qquad (23)$$

where n is the LWE dimension.
Proof. We write

$$e_{\mathrm{MS}} = \langle (1, s), (\tilde{b}/2N - b,\ \tilde{a}/2N - a)\rangle = \underbrace{(\tilde{b}/2N - b)}_{\in(-1/4N,\,1/4N]} + \sum_{i=1}^{n} s_i \cdot \underbrace{(\tilde{a}_i/2N - a_i)}_{\in(-1/4N,\,1/4N]}, \qquad (24)$$

where each underbraced term is assumed to have a uniform distribution on (−1/4N, 1/4N], i.e., a variance of 1/48N². For s_i, we have E[s_i²] = 1/2. For independent variables with E[Y] = 0, it holds Var[X·Y] = E[X²]·Var[Y], which is the case for X = s_i and Y = ã_i/2N − a_i. The result follows.
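The formula is easy to cross-check numerically; the following sketch (toy float representation, our own illustration) performs modulus switching on random samples under a random binary key and compares the empirical variance of e_MS to (23):

```python
import numpy as np

rng = np.random.default_rng(42)
n, N, trials = 630, 1024, 2000
errs = []
for _ in range(trials):
    s = rng.integers(0, 2, n)                 # binary LWE key
    b, a = rng.random(), rng.random(n)        # random torus sample
    bt, at = round(2 * N * b), np.round(2 * N * a)   # rescale & round (no reduction mod 2N needed here)
    phase_t = (bt + s @ at) / (2 * N)         # switched phase, scaled back to the torus
    phase   = b + s @ a
    e = (phase_t - phase + 0.5) % 1.0 - 0.5   # centered representative modulo 1
    errs.append(e)
print(np.var(errs), (1 + n / 2) / (48 * N**2))   # empirical vs. theoretical variance, should be close
```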
Description of Blind-Rotate
A description of blind-rotate is given in Algorithm 1. In line 3, if BK_i encrypts s_i = 0, the line evaluates to (encrypted) ACC = X^{0·ã_i} · ACC; if BK_i encrypts s_i = 1, we obtain (encrypted) X^{1·ã_i} · ACC. I.e., after blind-rotation, we obtain an encryption of X^{m̃} · tv, where m̃ = ⟨s, (b̃, ã)⟩. Line 3 also mandates s_i ∈ B, as outlined in Note 1, although generalization attempts exist [Joye]. Remark 2. During blind-rotate, the "old" noise is refreshed with a fresh noise, which comes from the bootstrapping keys and from the (possibly encrypted) test vector - the fresh noise does not depend on the noise of the input sample. Nevertheless, the "old" noise affects which value from the test vector is selected (gets rotated to), i.e., at which point the (staircase) LUT is evaluated; cf. Figure 6. We discuss two types of decryption errors later in Section 4.1.1. In the following theorem, we evaluate the noise of the output of BlindRotate - i.e., the refreshed noise, which we denote V_0.
Algorithm 1 BlindRotate
Input: LWE sample (b, a) of µ ∈ T under LWE key s ∈ B^n, modulus-switched to (b̃, ã) ∈ Z_{2N}^{1+n},
Input: (usually trivial) GLWE sample t ∈ T^{(N)}[X]^{1+k} of tv ∈ T^{(N)}[X] (aka. test vector) under GLWE key z ∈ Z^{(N)}[X]^k,
Input: for i ∈ [1, n], GGSW samples of s_i under z, referred to as bootstrapping keys, denoted BK_{s→z} := (BK_i)_{i=1}^{n}.
Output: GLWE sample of X^{m̃} · tv under z, where m̃ = ⟨s, (b̃, ã)⟩ ≈ 2Nµ.
1: ACC ← X^{b̃} · t   ▷ aka. accumulator
2: for i ∈ [1, n] do
3:   ACC ← ACC + BK_i · (X^{ã_i} · ACC − ACC)
4: end for
5: return ACC

Theorem 4 (Correctness & Noise Growth of Blind-Rotate). Given inputs of Algorithm 1, where bootstrapping keys are encrypted with noise parameter α and the test vector is (possibly) encrypted with noise parameter α_t (i.e., α_t = 0 or α), BlindRotate returns the last-step ACC with noise variance given by

$$\mathrm{Var}[\langle z, \mathrm{ACC}\rangle] \approx \alpha_t^2 + n d N V_B \alpha^2 (1 + k) + n \varepsilon^2 (1 + k N V_z) =: V_0, \qquad (25)$$

which we denote by V_0; other parameters are as per previous theorems and definitions. If the noise of ⟨z, ACC⟩ is "sufficiently small", it holds msg_z(ACC) = X^{m̃} · tv, where m̃ = ⟨s, (b̃, ã)⟩ ≈ 2Nµ, i.e., BlindRotate indeed "rotates" the test vector by the approximate phase of (b, a), scaled to Z_{2N}.
Proof. Find the proof in Appendix A.3.
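As a sanity check of the algorithm's structure (ignoring encryption and noise entirely), the following sketch runs Algorithm 1 on cleartext data, replacing the homomorphic selection of line 3 by its plaintext analogue s_i · (X^{ã_i}·ACC − ACC); the result equals X^{m̃} · tv as stated in Theorem 4. All values are toy choices of our own.

```python
import numpy as np

def mul_by_xpow(c, e):
    """Multiply polynomial c (mod X^N + 1) by X^e, for any integer e >= 0."""
    N = len(c)
    out = np.zeros(N, dtype=c.dtype)
    for i, ci in enumerate(c):
        j = (i + e) % (2 * N)
        if j < N:
            out[j] += ci
        else:
            out[j - N] -= ci          # negacyclic wrap-around
    return out

def blind_rotate_cleartext(tv, b_t, a_t, s):
    """Cleartext analogue of Algorithm 1: BK_i(...) is replaced by s_i * (...)."""
    acc = mul_by_xpow(tv, b_t)                          # line 1: ACC <- X^b~ * t
    for ai, si in zip(a_t, s):
        acc = acc + si * (mul_by_xpow(acc, ai) - acc)   # line 3, with s_i in {0, 1}
    return acc

rng = np.random.default_rng(7)
N, n = 16, 10
tv  = rng.integers(-3, 4, N)
s   = rng.integers(0, 2, n)
b_t = int(rng.integers(0, 2 * N))
a_t = rng.integers(0, 2 * N, n)
m_t = (b_t + int(s @ a_t)) % (2 * N)
assert np.array_equal(blind_rotate_cleartext(tv, b_t, a_t, s), mul_by_xpow(tv, m_t))
```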
Sample-Extract
So far, BlindRotate outputs a GLWE sample of a polynomial (the blindly-rotated test vector), which holds the desired value at its constant term and which is encrypted with a GLWE key. The goal of SampleExtr is to literally extract a partial LWE sample, which encrypts the constant term, out of the GLWE sample, which we denote by (b, a) ∈ T^{(N)}[X]^{1+k} (the last-step ACC in Algorithm 1). Note that a similar thing happens with the key: the new LWE key is also an "extract" of the original polynomial GLWE key z ∈ Z^{(N)}[X]^k. Writing down the constant term of ⟨z, (b, a)⟩, which is a torus polynomial, we obtain

$$\big(\langle z, (b, a)\rangle\big)^{(0)} = b^{(0)} + \sum_{i=1}^{k} \big(z_i(X) \cdot a_i(X)\big)^{(0)} = b^{(0)} + \sum_{i=1}^{k} \Big\langle \underbrace{\big(z_i^{(0)}, -z_i^{(N-1)}, \ldots, -z_i^{(1)}\big)}_{i\text{-th partial extr. LWE key } z_i^*},\ \underbrace{\big(a_i^{(0)}, a_i^{(1)}, \ldots, a_i^{(N-1)}\big)}_{i\text{-th partial extr. LWE sample } a_i^*} \Big\rangle, \qquad (26)$$

where we denote the i-th partial extracted LWE key and sample by z_i^* ∈ Z^N and a_i^* ∈ T^N, respectively. We obtain the full extracted LWE key and sample as their concatenations, denoted respectively as z^* ∈ Z^{kN} and (b^{(0)}, a^*) ∈ T^{1+kN}, where b^{(0)} is prepended. Note that a^* is a simple serialization of the polynomial coefficients of a, whereas for z, a rearrangement is needed, together with negative signs. Finally, we have

$$\big(\langle z, (b, a)\rangle\big)^{(0)} = \langle z^*, (b^{(0)}, a^*)\rangle, \qquad (27)$$

while the noise is preserved. Note that the extracted key z^* plays the role of the LWE key s′ that is supposed to be encrypted in the key-switching keys - we compose the four algorithms and provide further details in the next section.
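The coefficient rearrangement in (26)-(27) is purely mechanical and can be checked over the integers; the sketch below (a toy example of our own) builds z^* and a^* from random polynomials and verifies that the extracted inner product equals the constant term of the polynomial phase.

```python
import numpy as np

def poly_mul_negacyclic(p, q):
    """Schoolbook multiplication modulo X^N + 1 (integer coefficients)."""
    N = len(p)
    out = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                out[k] += p[i] * q[j]
            else:
                out[k - N] -= p[i] * q[j]
    return out

rng = np.random.default_rng(3)
N, k = 8, 2
z = rng.integers(-1, 2, (k, N))            # GLWE key polynomials
a = rng.integers(-10, 10, (k, N))          # mask polynomials
b = rng.integers(-10, 10, N)               # body polynomial

# constant term of b + sum_i z_i * a_i (the "phase" polynomial)
phase0 = (b + sum(poly_mul_negacyclic(z[i], a[i]) for i in range(k)))[0]

# extracted LWE key z* and sample a* as in (26)
z_star = np.concatenate([np.concatenate(([z[i, 0]], -z[i, :0:-1])) for i in range(k)])
a_star = a.reshape(-1)
assert phase0 == b[0] + int(z_star @ a_star)
```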
TFHE Bootstrapping
Putting the four algorithms together, we obtain the TFHE (Programmable) Bootstrapping algorithm; find it as Algorithm 2, previously outlined in Figure 4. It is worth noting that BlindRotate (on line 3 of that algorithm) inputs a negative sample: this is due to the LUT encoding that we use; cf. ( 17) and ( 18), where a negative sign at m is expected, although by Theorem 4, a positive sign appears in the power of X.
We provide a summary of parameters in Table 1. For an exhaustive technical overview of blind-rotate, preceded by modulus-switching and followed by sample-extract, we refer to Appendix B, Figure B1.
Recall that the noise gets refreshed in Blind-Rotate (cf. Remark 2) and it does not change in SampleExtr, i.e., the variance of a freshly bootstrapped sample is given by V_0 as per (25). N.b., at this point, we do not guarantee the correctness of the output - details will be given in Section 4, where we identify bounds that need to be satisfied so that the bootstrapping function is evaluated at the correct point.

Algorithm 2 Bootstrap
Input: LWE sample (b^*, a^*) ∈ T^{1+kN} of µ = m/2^π, m ∈ Z_{2^π}, under LWE key z^* ∈ Z^{kN}, extracted from GLWE key z ∈ Z^{(N)}[X]^k,
Input: (possibly trivial) GLWE sample t ∈ T^{(N)}[X]^{1+k} of test vector tv ∈ T^{(N)}[X] under the key z, where tv encodes a negacyclic bootstrapping function f : Z_{2^π} → Z_{2^π} as per (17) and (19),
Input: key-switching keys KS_{z^*→s} and bootstrapping keys BK_{s→z}.
Output: LWE sample of f(m)/2^π under key z^*.
1: (b, a) ← KeySwitch((b^*, a^*), KS_{z^*→s})
2: (b̃, ã) ← ModSwitch_N(b, a)
3: (s, r) ← BlindRotate((−b̃, −ã), t, BK_{s→z})
4: return (b′, a′) ← SampleExtr((s, r))
Correctness of LUT Evaluation
In this section, we combine the noise growth estimates from the previous section and we derive a condition for the correct evaluation of the bootstrapping function. As outlined, the noise propagates throughout various operations, let us provide an overview first.
Overview of Noise Propagation
The noise, which is present in every encrypted sample, evolves during the evaluation of a TFHE gate, which comprises (i) a homomorphic dot-product and (ii) bootstrapping (with its four sub-operations). Let us comment on either operation: Dot-product: Provided that the noise of the involved samples is independent, the error variance of a weighted sum is additive with weights squared (cf. (4) in Theorem 1).
Bootstrapping: If the noise of the sample-to-be-bootstrapped is smaller than a certain bound, the blind-rotate step of bootstrapping evaluates the bootstrapping function correctly: i.e., the error of m̃ (as per Theorem 4 and Algorithm 1) is smaller than the bound E_max; cf. Figure 6. The resulting sample then carries - on average - a fixed amount of noise (independent of the original sample), which solely depends on the TFHE parameters (cf. (25) in Theorem 4).
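The dot-product rule is simply the standard variance algebra for independent errors; a quick numerical check (our own illustration):

```python
import numpy as np

# Monte Carlo check that Var[sum w_i * e_i] = (sum w_i^2) * Var[e] for independent noise.
rng = np.random.default_rng(0)
w = np.array([2, -3, 1, 5])
sigma = 1e-3
e = rng.normal(0.0, sigma, size=(200_000, len(w)))
print(np.var(e @ w), (w @ w) * sigma**2)   # both ~3.9e-5
```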
In Figure 7, we illustrate the error propagation throughout a TFHE gate, where e_0^{(i)} denotes the actual noise of the i-th, freshly bootstrapped sample. Note that the overall maximum of relative average noise is achieved within bootstrapping when the modulus-switched sample (b̃, ã) enters blind-rotate, which refreshes the noise; cf. Remark 2.
We denote the maximum error and its variance by e max and V max , respectively, and we have
$$V_{\max} \approx \nu_{\max}^2 \cdot V_0 + V_{\mathrm{KS}} + V_{\mathrm{MS}}, \qquad (28)$$

where ν²_max is the maximum of sums of squares of integer weights of dot-products (cf. (5)) across the entire computation, and V_0, V_KS and V_MS are respectively the variance of a freshly bootstrapped sample (cf. (25), combined using (4)), the variance of the excess noise of key-switching (cf. (15)) and that of modulus-switching (cf. (23)). Note that e_max is the relative, torus-scaled error of m̃ ∈ Z_{2N} that enters blind-rotate; cf. Algorithm 2. The magnitude of this error is decisive for the correctness of the bootstrapping function evaluation as per Figure 6. Note 4. Let us outline an intuition that justifies the design where key-switching is moved to the beginning of bootstrapping (proposed in [Bourse et al.], experimentally shown to be more efficient in [Bergerat et al.], presented in this paper), as opposed to the original TFHE design [Chillotti et al.], where key-switching is the last step of bootstrapping. The variance of the maximum error of the original variant writes
$$V_{\max} \approx (\underbrace{V_{\mathrm{BR}} + V_{\mathrm{KS}}}_{V_0^{(\mathrm{or.})}}) \cdot \nu^2 + V_{\mathrm{MS}}, \qquad (29)$$

whereas in the re-ordered variant, we have

$$V_{\max} \approx \underbrace{V_{\mathrm{BR}}}_{V_0^{(\mathrm{re.})}} \cdot \nu^2 + V_{\mathrm{KS}} + V_{\mathrm{MS}}, \qquad (30)$$

where V_BR is the variance of the output of Blind-Rotate. We may notice that the re-ordered variant is expected to achieve a lower noise growth, in particular for applications with greater ν². Also, note that the re-ordered variant in fact only swaps key-switching and dot-product; let us outline both, starting after sample-extract:

original: SampleExtr → dot-product → KS → MS → . . .
re-ordered: SampleExtr → KS → dot-product → MS → . . .
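A quick numerical illustration of (29) versus (30), with made-up variance magnitudes (not parameters from this paper), shows how the advantage of the re-ordered variant grows with ν²:

```python
# Toy comparison of the two bootstrapping orderings; all variances are hypothetical.
V_BR, V_KS, V_MS = 2.0e-11, 5.0e-12, 3.0e-12
for nu2 in (1, 16, 256):
    v_original  = (V_BR + V_KS) * nu2 + V_MS    # key-switch at the end, cf. (29)
    v_reordered = V_BR * nu2 + V_KS + V_MS      # key-switch first, cf. (30)
    print(nu2, v_original, v_reordered)
# the gap, V_KS * (nu2 - 1), grows with nu2, favouring the re-ordered variant
```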
Correct Evaluation of the Bootstrapping Function
Let us define the quantity κ, which aims at quantifying the probability of correct evaluation of the bootstrapping function (i.e., |e max | < E max ), as
$$\kappa := \frac{E_{\max}}{\sqrt{V_{\max}}}. \qquad (31)$$

The aim of κ is to tell how many times the standard deviation of the maximum error, denoted σ_max = √V_max, fits into the target interval of size 2E_max around the expected value. The probability that a normally distributed random variable falls within the interval of κ times its standard deviation can be looked up from standard normal tables (aka. the Z-tables). Note that by the Central Limit Theorem, we assume a normal distribution for the value of m̃. E.g., for κ = 3, the value stays within the interval with probability ≈ 99.73%, i.e., an error rate of ≈ 1/370 (aka. the rule of 3σ); however, we recommend higher values of κ (e.g., Bergerat et al. provide their parameters with κ = 4, which gives an error rate of ≈ 1/15 787). N.b., also the size of the evaluated circuit as well as possible real-world consequences of an incorrect evaluation shall be taken into account.
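The κ-to-error-rate conversion is a one-liner with the complementary error function; the sketch below reproduces the two figures quoted above:

```python
from math import erfc, sqrt

def blind_rotate_error_rate(kappa):
    """Probability that a normally distributed error leaves the +-kappa*sigma interval."""
    return erfc(kappa / sqrt(2))

for kappa in (3, 4):
    p = blind_rotate_error_rate(kappa)
    print(kappa, p, f"~1/{round(1 / p)}")
# kappa = 3 -> ~1/370 (rule of 3 sigma); kappa = 4 -> ~1/15787, as quoted above
```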
From (20) and (31), we obtain the fundamental condition on the variance of the maximum error as

$$V_{\max} \leq \frac{1}{\kappa^2 \cdot 2^{2\pi+2}}, \qquad (32)$$

where V_max can be further broken down by (28) and other previous equalities. If the fundamental condition is satisfied, the (erroneous) value of m̃ does not leave its "stair" with high probability (related to κ), and the bootstrapping function f is evaluated correctly; cf. Figure 6.
Types of Decryption Errors
Correct blind-rotate does not itself guarantee the correctness of the result after decryption -indeed, there is a non-zero probability that the freshly bootstrapped sample (or a dot-product of them) decrypts incorrectly due to the intrinsic LWE noise. Therefore, we define two types of decryption errors that may occur after a dot-product followed by bootstrapping: one due to LWE noise (as just outlined), and one due to incorrect blind-rotate.
Fresh Bootstrap Error (Err_1)
First, let us assume that blind-rotate rotates the test vector correctly (i.e., |e_max| < E_max) and we denote the output LWE sample of bootstrapping as c′. Then, if the distance of ⟨s, c′⟩ from the expected value is greater than E_max, we refer to this kind of error as the type-1 error, denoted Err_1. The probability of Err_1 relates to the noise of a correctly blind-rotated, freshly bootstrapped sample, which can be estimated from V_0; see (25).
Blind-Rotate Error (Err_2)
Second, we consider the result of a TFHE gate, i.e., we take a dot-product of a bunch of independent, freshly bootstrapped samples, with ν² ≤ ν²_max, and we bootstrap it. Then, if blind-rotate rotates the test vector incorrectly (i.e., |e_max| > E_max), we refer to this kind of error as the type-2 error, denoted Err_2. Note that a combination of both error types may occur.²
The probability of Err_2 relates to the error of the modulus-switched sample (b̃, ã) that appears inside bootstrapping, and it can be estimated from V_max; see (28). We outline both error types in Figure 8.

Corollary 1. For the probabilities of type-1 and type-2 errors, by (28) we have

$$\Pr[\mathrm{Err}_1] < \Pr[\mathrm{Err}_2]. \qquad (33)$$

For common choices of parameters, Pr[Err_1] can be neglected. I.e., we may use the fundamental condition (32) to estimate the probability of incorrect evaluation of a single TFHE gate.
Parameter Constraints
Previously, we justified the use of the fundamental condition (32) to make error probability estimates. Next, we identify four high-level parameters that aim at characterizing the properties of an instance of TFHE. Finally, we combine them with the fundamental condition to obtain a relation between the four high-level parameters and actual TFHE parameters (like the LWE dimension or noise amplitude).
Characteristic Parameters
Given a usage scenario, an instance of TFHE can be characterized by the following four (input) parameters:
1. cleartext space bit-precision, denoted by π (cf. Section 2.1.1);
2. quadratic weights, denoted by ν² (cf. Theorem 1; we take the maximum over the computation);
3. bit-security level, denoted by λ; and
4. bootstrapping correctness, denoted by η := Pr[Err_2] (cf. Corollary 1).
Let us provide more (practical) comments on each of the input parameters.
Cleartext Bit-Precision: π
Regarding the choice of an appropriate cleartext bit-precision, we point out two things: First, it turns out that the complexity of the TFHE bootstrapping grows roughly exponentially with the cleartext bit-precision - reasonable bootstrapping times can be achieved for up to about π = 8 bits; beyond that, splitting the cleartext into multiple chunks comes into play. Second, bootstrapping is capable of evaluating a custom bootstrapping function f : Z_{2^π} → Z_{2^π}; however, such a function must be negacyclic; cf. (16) and (19), unless a workaround is adopted as per Note 3. Both limitations must be carefully considered before choosing the right cleartext space bit-precision π: it might make sense to decrease the cleartext space size at the expense of additional, but cheaper, bootstrapping.
Quadratic Weights: ν²
As outlined in (28), ν²_max := max_g {ν²_g} is defined as the maximum of the sums of squares of integer weights of dot-products across the whole circuit that comprises TFHE gates g ∈ G (with ν² defined in (5)). Note that log(ν_g) expresses the number of bits of the standard deviation of the excess noise introduced by the dot-product in gate g.
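For instance (with made-up gate names and weights of our own), ν²_g and ν²_max are computed as:

```python
# nu^2 per gate is the sum of squared integer weights of its dot-product.
gates = {"xor_like": [1, 1], "scaled_sum": [2, -3, 1], "identity": [1]}
nu2 = {g: sum(w * w for w in ws) for g, ws in gates.items()}
nu2_max = max(nu2.values())
print(nu2, nu2_max)   # {'xor_like': 2, 'scaled_sum': 14, 'identity': 1} 14
```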
Security Level: λ
We discuss LWE/GLWE security in Section 2.1.4. Recall that the higher λ is requested, the higher LWE dimension and/or the lower noise must be present, and security also depends on the distribution of keys.
Bootstrapping Correctness: η
Introduced in Section 4.1, the parameter κ characterizes the probability of an erroneous blind-rotate. In Corollary 1, we use this probability to estimate the overall probability of correct evaluation of a TFHE gate. Hence, to quantify the probability of correct evaluation of a single TFHE gate, we take η, and by standard normal tables, we deduce the value of κ, which we use for the rest of the analysis. Recall that κ relates to the correctness of a single TFHE gate, i.e., for a circuit that consists of multiple TFHE gates, the value of η needs to be modified accordingly.
Parameter Relations
To make the fundamental condition (32) hold, we may combine (28) with (25), (15) and (23), and mandate

$$V_{\max} \approx \nu^2 \cdot V_0 + V_{\mathrm{KS}} + V_{\mathrm{MS}} \approx \nu^2 \cdot \big(\alpha_t^2 + n d N V_B \alpha^2 (1+k) + n \varepsilon^2 (1 + k N V_z)\big) + k N V_z \varepsilon'^2 + k N d' V_{B'} \alpha'^2 + \frac{1 + n/2}{48 N^2} \ \overset{!}{\leq}\ \frac{1}{\kappa^2 \cdot 2^{2\pi+2}}, \qquad (34)$$
where the baseline parameters are summarized in Table 1, ε 2 and V B are defined in ( 8) and ( 9), respectively, and V z stands for the variance of coefficients of the internal GLWE secret key z.
In terms of the security ↔ correctness ↔ performance triangle given in Section 2.1.5, this inequality only provides a guarantee of correctness, which is given by η (translated into κ), for a prescribed plaintext precision π and quadratic weights ν². In particular, security is not addressed and it must be resolved separately, e.g., using the lattice-estimator [Albrecht et al.]. The combination of constraints on TFHE parameters makes it a complex task to generate a set of parameters that, in addition, achieves good performance. The authors of [Bergerat et al.] claim to have implemented a generator of efficient TFHE parameters and they provide a comprehensive list of TFHE parameters for many input setups. However, at the time of writing, the tool is not publicly available yet.
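The whole constraint can be wrapped in a small checker. The sketch below is our own helper with made-up parameter values (not a vetted or secure parameter set); since (8)-(9) are not restated here, ε², V_B (and their primed analogues) are taken as the variances of the uniform distributions used in Appendix A.1, and V_z = 1/2 corresponds to uniform binary key coefficients.

```python
def check_condition_34(pi, nu2, kappa, n, k, N, d, B, dp, Bp, alpha, alpha_p,
                       alpha_t=0.0, V_z=0.5):
    """Evaluate inequality (34) for a candidate TFHE parameter set (illustrative only)."""
    eps2,  V_B  = 1.0 / (12 * B  ** (2 * d)),  B  ** 2 / 12   # assumed forms of (8)-(9)
    eps2p, V_Bp = 1.0 / (12 * Bp ** (2 * dp)), Bp ** 2 / 12
    V_0   = alpha_t**2 + n * d * N * V_B * alpha**2 * (1 + k) + n * eps2 * (1 + k * N * V_z)
    V_KS  = k * N * V_z * eps2p + k * N * dp * V_Bp * alpha_p**2
    V_MS  = (1 + n / 2) / (48 * N**2)
    V_max = nu2 * V_0 + V_KS + V_MS
    bound = 1.0 / (kappa**2 * 2 ** (2 * pi + 2))
    return V_max, bound, V_max <= bound

# illustrative call with made-up parameters (not a recommended set)
print(check_condition_34(pi=4, nu2=4, kappa=4, n=700, N=1024, k=1, d=3, B=2**6,
                         dp=5, Bp=2**2, alpha=2**-25, alpha_p=2**-15))
```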
Implementation Remarks
In this section, we briefly comment on various implementation aspects of TFHE, namely
• (negacyclic) polynomial multiplication;
• additional errors (noise) that stem from particular implementation choices;
• estimated complexity & key sizes; and
• existing implementations of TFHE, including recent trends and advances.
Negacyclic Polynomial Multiplication
For performance reasons, modular polynomial multiplication in TFHE - which appears, e.g., in GLWE encryption (1) or in the external product (11) - is implemented using the Fast Fourier Transform (FFT). Recall that polynomials mod X^N + 1 rotate negacyclically when multiplied by X^k, unlike polynomials mod X^N − 1, which rotate cyclically. In the latter case, polynomial multiplication is equivalent to the standard cyclic convolution, which can be calculated directly using the FFT. However, for polynomials mod X^N + 1, other tricks need to be put into place; find a description of negacyclic polynomial multiplication, e.g., in [Klemsa].
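One folklore trick (a sketch of our own, not necessarily the exact method of the cited work) is to "twist" the inputs by the 2N-th roots of unity, so that a cyclic FFT-based convolution computes the negacyclic product:

```python
import numpy as np

def negacyclic_mul(a, b):
    """Multiply two coefficient vectors modulo X^N + 1 via FFT twisting."""
    N = len(a)
    zeta = np.exp(1j * np.pi * np.arange(N) / N)   # primitive 2N-th roots of unity
    c = np.fft.ifft(np.fft.fft(a * zeta) * np.fft.fft(b * zeta)) / zeta
    return np.round(c.real).astype(int)            # exact for small integer inputs

def naive(a, b):
    """Schoolbook reference: reduce the product modulo X^N + 1 with sign flips."""
    N = len(a)
    out = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                out[k] += a[i] * b[j]
            else:
                out[k - N] -= a[i] * b[j]
    return out

rng = np.random.default_rng(1)
a, b = rng.integers(-5, 5, 8), rng.integers(-5, 5, 8)
assert np.array_equal(negacyclic_mul(a, b), naive(a, b))
```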
Implementation Noise
Notably, the FFT is the major source of additional errors (noise) that are not captured by the theoretical noise analysis given in Section 4. The magnitude of FFT errors depends in particular on the number representation used by the selected FFT implementation; find a study on FFT errors in [Klemsa]. Although for commonly used parameters and FFT implementations, FFT errors are negligible compared to the (G)LWE noises, they shall be kept in mind, in particular in non-standard constructions or new designs.
In addition, compared to the theoretical results of Section 4, torus elements are represented using a finite representation (as outlined in Section 2.1.2; e.g., with 64-bit integers), which also changes the errors slightly. However, as long as the torus precision (e.g., 2^{-64}) is much smaller than the standard deviation of the (G)LWE noise - which is usually the case for common parameter choices - this contribution can be neglected.
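For illustration, with the common 64-bit integer representation (a sketch assuming the usual "scale by 2^64 and wrap" encoding), torus addition is just machine addition modulo 2^64:

```python
MOD = 1 << 64                                # torus elements stored as 64-bit integers

def to_torus(x):
    return round(x * MOD) % MOD              # encode a real in [0, 1)

def torus_add(a, b):
    return (a + b) % MOD                     # wrap-around = addition modulo 1

s = torus_add(to_torus(0.75), to_torus(0.375))
print(s / MOD)                               # 0.125, i.e., (0.75 + 0.375) mod 1
# the representation error (~2^-64) is far below a typical noise std-dev (e.g. 2^-25)
```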
Key Sizes & Bootstrapping Complexity
Below, we provide the sizes of key-switching and bootstrapping keys, as represented in a TFHE implementation with a τ -bit representation of torus elements. The complexity of TFHE bootstrapping, which is the dominant operation of TFHE, is roughly proportional to the key sizes.
Size of Key-Switching Keys
Using the notation of Table 1, the key-switching keys consist of N sub-keys KS_j ∈ T^{d′, 1+n}; altogether we have

$$\big|(\mathrm{KS}_j)_{j=1}^{N}\big| = N d' (1 + n) \tau \ \text{[bits]}. \qquad (35)$$
Note that a common method to store/transmit key-switching keys, which can also be applied to bootstrapping keys, is to keep just a seed for a pseudo-random number generator (PRNG), instead of all of the randomness. I.e., for each LWE sample, just the value of b is kept and the values of a can be re-generated from the seed.
Size of Bootstrapping Keys
Bootstrapping keys consist of n GGSW samples BK_i ∈ T^{(N)}[X]^{(1+k)d, 1+k}; altogether we have

$$\big|(\mathrm{BK}_i)_{i=1}^{n}\big| = n (1 + k)^2 d N \tau \ \text{[bits]}. \qquad (36)$$
Rough Estimate of Bootstrapping Complexity
Key-switching is dominated by N d′(1 + n) torus multiplications, followed by 1 + n summations of N d′ elements, which makes key-switching O(N d′(1 + n)τ). Blind-rotate is dominated by n(1 + k)²d degree-N polynomial multiplications, followed by a similar number of additions/subtractions, which makes blind-rotate O(n(1 + k)²dNτ). Note that the entire calculation of BlindRotate (cf. Algorithm 1) can be performed in the Fourier domain - thanks to its linearity and the pre-computed bootstrapping keys, i.e., the O(τN log N) term of the FFT can be neglected. In this rough estimate, we neglect modulus-switching and sample-extract. Also, we do not distinguish the bit-length τ for LWE and GLWE, as some implementations do [NuCypher].
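Plugging illustrative (made-up, not recommended) parameters into (35)-(36) gives a feel for the key material a client must generate and upload:

```python
def key_sizes_bits(n, k, N, d, dp, tau):
    ks = N * dp * (1 + n) * tau           # (35): key-switching keys
    bk = n * (1 + k) ** 2 * d * N * tau   # (36): bootstrapping keys
    return ks, bk

# illustrative toy parameters (our own choice, not a recommended set)
ks, bk = key_sizes_bits(n=700, k=1, N=1024, d=3, dp=5, tau=64)
print(ks / 8e6, bk / 8e6)   # ~28.7 MB of key-switching keys, ~68.8 MB of bootstrapping keys
```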
Existing TFHE Implementations
The original (experimental) TFHE library is not developed anymore; instead, there are other, more or less active implementations. Here we list selected implementations of TFHE:
• TFHE-rs: written in Rust, TFHE-rs implements the latest findings by Bergerat et al. (recently separated from the Concrete library, which implements higher-level operations and interfaces);
• FPT: an experimental FPGA accelerator for TFHE bootstrapping (a benchmark of state-of-the-art implementations in software/GPU/FPGA/ASIC can also be found in the FPT paper);
• nuFHE: a GPU implementation of TFHE.
TFHE is also implemented as a part of more generic libraries like OpenFHE, which is a successor of PALISADE and which also attempts to incorporate the capabilities of HElib and HEAAN. There exist many other implementations that are not listed here.
Conclusion
We believe that our TFHE guide helps researchers and developers understand the inner structure of TFHE, in particular probably its most mysterious operation - the negacyclic blind-rotate - thanks to a step-by-step explanation, which we support with an illustration. Not only do we provide an intelligible description of each sub-operation of bootstrapping, but we also highlight which tweaks can be put in place to limit the noise growth at little to no additional cost (e.g., the order of key-switch ↔ dot-product, or the signed decomposition alphabet). Last but not least, we provide a comprehensive noise analysis, supported by proofs, where we employ an easy-to-follow notation of the decomposition operation using g and g^{-1}. Finally, we list selected implementation remarks that shall be kept in mind when attempting to implement TFHE and its variants or modifications.
Fig. 7 Noise propagation from a bunch of freshly bootstrapped samples throughout a TFHE gate towards a new, freshly bootstrapped sample. If |e_max| < E_max, the bootstrapping function is evaluated correctly.
Fig. 8 Illustration of type-1 and type-2 errors: the LUT evaluates correctly and incorrectly to 0 and 3, respectively.
Fig. 3 Bit-security of LWE as estimated by the lattice-estimator by Albrecht et al. (commit ID f9dc7c), using underlying group size q = 2^64; interpolated between grid points. Axes: LWE dimension n (0 to 2048) versus −log(α) (0 to 40); labels indicate the bit-security level (64 to 576). Raw data can be found at https://github.com/fakub/LWE-Estimates.
Table 1 Summary of parameters' notation. Parameters ε², ε′² and V_B, V_B′ are derived from the respective decomposition parameters; cf. (8) and (9).
LWE secret key s | GLWE secret key z
LWE dimension n | GLWE dimension k
— | GLWE polyn. degree N
LWE noise std-dev α′ | GLWE noise std-dev α
KS decomp. base B′ | BK decomp. base B
KS decomp. depth d′ | BK decomp. depth d
² The result may combine both kinds of errors and decrypt correctly at the same time - in such a case, we consider that both error types occur simultaneously.
Acknowledgments
We would like to thank Melek Önen for her useful comments as well as the anonymous referees for their helpful suggestions.
Funding
This work was supported by the MESRI-BMBF French-German joint project UPCARE (ANR-20-CYAL-0003-01), and by the Grant Agency of CTU in Prague, grant No. SGS21/160/OHK3/3T/13.
Availability of Data and Material
The datasets generated during and/or analyzed during the current study are available in the following repository: https://github.com/fakub/LWE-Estimates.
Declarations
Conflicts of interest
The authors have no relevant financial or nonfinancial interests to disclose.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.

Appendix A Proofs

A.1 Proof of Theorem 2

Theorem 2 (Correctness & Noise Growth of External Product; restated). Given GLWE sample c of µ_c ∈ T^{(N)}[X] under GLWE key z and noise parameter α, and GGSW sample Ā of m_A ∈ Z^{(N)}[X] under the same key and noise parameters, external product returns GLWE sample c′ = Ā c, which holds excess noise e, given by ⟨z, c′⟩ = m_A · ⟨z, c⟩ + e, for which it holds

$$\mathrm{Var}[e] \approx (1 + k)\, d N V_B \alpha^2 + \varepsilon^2 (1 + k N V_z),$$

where V_z is the variance of individual coefficients of the GLWE key z and other parameters are as per previous definitions. If e and the noise of c are "sufficiently small", c′ encrypts msg_z(c′) = m_A · µ_c, i.e., the external product is indeed multiplicatively homomorphic.
Proof. Let us denote c = (b, a) ∈ T (N ) [X] 1+k and let us unfold the construction of Ā as
where
i . Unfolding the construction of Ā and that of external product, we obtain
while in (A3), the terms denoted ♦ and ♥ cancel out. Next, in (A4), we assume that each decomposition error term (cf. (7)) has a uniform distribution on [−1/2B^d, 1/2B^d), hence a variance of ε²; cf. (8). Finally, in (A5), we assume that the decomposition digits have a uniform distribution on [−B/2, B/2) ∩ Z, hence their mean of squares equals V_B; cf. (9). Evaluating the variance of each term, the result follows, while ≈ is due to the possible statistical dependency of variables across terms. Note that we indicate the length of inner products in lower indices.
A.2 Proof of Theorem 3
where ε ′2 and V B ′ are as per (8) and (9), respectively, with B ′ and d ′ , V s ′ is the variance of individual coefficients of the LWE key s ′ , and other parameters are as per previous definitions. If e KS and the noise of c′ are "sufficiently small", it holds µ = msg s (c) = msg s ′ (c ′ ), i.e., KeySwitch indeed changes the key, without modifying the message.
Proof. Similar to the proof of Theorem 2, we write
and the result follows.
A.3 Proof of Theorem 4
Theorem 4 (Correctness & Noise Growth of Blind-Rotate). Given inputs of Algorithm 1, where bootstrapping keys are encrypted with noise parameter α and test vector is (possibly) encrypted with noise parameter α t (i.e., α t = 0 or α), BlindRotate returns the last-step ACC with noise variance given by
$$\mathrm{Var}[\langle z, \mathrm{ACC}\rangle] \approx \alpha_t^2 + n d N V_B \alpha^2 (1 + k) + n \varepsilon^2 (1 + k N V_z) =: V_0,$$

as per (25); other parameters are as per previous theorems and definitions. If the noise of ⟨z, ACC⟩ is "sufficiently small", it holds msg_z(ACC) = X^{m̃} · tv, where m̃ = ⟨s, (b̃, ã)⟩ ≈ 2Nµ, i.e., BlindRotate indeed "rotates" the test vector by the approximate phase of (b, a), scaled to Z_{2N}.
Proof. The core of BlindRotate consists of the sample t being gradually externally-multiplied by BK i 's (plus some other additions/multiplications). For i-th step with a := ãi ∈ Z 2N and BK := BK i that encrypts s := s i ∈ {0, 1}, we write:
where (A9) is by Theorem 2 and the step towards (A10) holds for s ∈ {0, 1}. Hence, with each such
Appendix B Tech. Overview
In Figure B1, we provide an exhaustive technical overview of blind-rotate, preceded by modulus-switching and followed by sample-extract.

Fig. B1 Blind-Rotate of TFHE (technical overview). The diagram shows the encrypted bootstrapping function (aka. the test vector; assuming k = 1) as a GLWE sample of τ-bit torus polynomial coefficients, the accumulator ACC being rotated during blind-rotate, and sample-extract picking the constant coefficient of the resulting polynomial.
04121362 | en | [
"sdv.imm.ia",
"sdv.mp.vir"
] | 2024/03/04 16:41:26 | 2022 | https://theses.hal.science/tel-04121362/file/230302_corrected_va_Dynesen_Lasse.pdf | Dr Hugo Mouquet
Dr Andrés Alcover
Astrid Theis
Marija Michelle Kevin
Claudia Val
Viola Fabiana
Mara Lena
Nathan Gaëlle
Min, Michael, Chloe, Carys, Crispin Kyle Lu
Keywords (FR) : Virus foamy simien, Rétrovirus, Zoonose, Anticorps neutralisants, Épitopes
Keywords (EN): Simian foamy virus, Retrovirus, Zoonosis, Neutralizing antibodies, Epitopes
Thank you to the experts at the cytometry platform at Pasteur for their expertise and help, in particular Pierre-Henri Commere for help with single-cell sorting experiments.
la partie supérieure du trimère de l'Env du VFS. Cela confirme également que leur structure RBD est repliée dans la conformation proche de celle observée sur les particules virales.
Ma première contribution a été de valider que le RBD adopte une forme native. J'ai réalisé des tests de neutralisation dans lesquels le RBD soluble recombinant et l'Env exprimée à la surface des particules virales sont en compétition pour se lier à des AcNs présents dans des échantillons de plasma de personnes infectées par des souches homologues de VFS de gorille. La liaison des AcNs au RBD entraîne une augmentation de l'infection par rapport aux particules virales incubées avec un échantillon de plasma en présence de la protéine d'Env d'un virus non apparenté. La protéine RBD a été produite dans des cellules d'insecte S2 ou des cellules de mammifère Expi293F qui produisent des protéines présentant des types distincts de glycosylation de surface. Les deux protéines ont été diluées en série et incubées avec du plasma de donneurs infectés par le VFS avant l'ajout de vecteurs viraux foamy (VVFs). Une augmentation de l'infectivité a été observée pour les deux protéines RBD de manière dosedépendante. Ces résultats confirment que les protéines RBD adoptent une conformation reconnue par les AcNs.
Le VFS utilise l'héparane sulfate (HS) comme un facteur d'attachement pour l'entrée virale dans les cellules sensibles. Afin de rechercher un site potentiel de liaison à l'héparane (HBS) sur l'Env du VFS, nos collaborateurs ont déterminé le potentiel électrostatique de surface du RBD et ont utilisé la modélisation d'une molécule d'HS sur la surface accessible au solvant pour identifier un HBS potentiel. Leurs prédictions ont mis en évidence quatre résidus (K342, R343, R356 et R369) présentant un nombre élevé de contacts avec le HS modélisé dans une zone chargée positivement du domaine inférieur du RBD. Ces résidus ont été mutés par paires (K342/R343 et R356/R369) en alanine dans les protéines ectodomaines trimériques. J'ai mis en place un test de liaison cellulaire basé sur la cytométrie en flux pour mesurer l'impact des mutations sur la liaison aux HS. La liaison de l'ectodomaine sauvage (WT) dépend des niveaux d'expression de l'HS sur les cellules sensibles, étant plus élevé sur les cellules HT1080 que sur les cellules BHK-21. Les ectodomaines mutants se lient environ dix fois moins sur les deux lignées cellulaires que l'ectodomaine sauvage GII-K74. J'ai ensuite traité les cellules HT1080 avec de l'héparinase III pour éliminer les HS. La liaison de l'ectodomaine aux cellules traitées a été réduite pour l'ectodomaine WT alors qu'elle n'a pas été affectée pour les homologues mutants. Ces résultats confirment que les résidus identifiés K342, R343, R356 et R369 interviennent dans la liaison de l'Env du VFS à l'HS exprimé sur les cellules.
Résumé Substantiel en Français viii La structure de RBD nous permet de comprendre ses sous-domaines fonctionnels. Les deux sous-domaines essentiels à la liaison forment le sous-domaine inférieur et une partie du domaine supérieur. Le HBS est en effet situé dans le domaine inférieur. Le sous-domaine qui peut être supprimé sans affecter la liaison de l'Env aux cellules (appelé RBD de jonction, RBDj) est situé dans le domaine supérieur. La prédiction computationnelle AlphaFold 2.0 (AF) du RBD GII-K74 a révélé une structure très similaire à la structure du RBD obtenue expérimentalement. La comparaison des structures de RBD prédites par AF à partir des VFs distincts confirme que le RBD se replie dans un "centre commun" (CC) conservé entre les différents VFs. En revanche, les régions extérieures, y compris certaines boucles très flexibles au sommet du RBD, présentent une grande divergence de repliement, y compris entre des génotypes VFSs distincts. Les boucles forment des contacts entre les protomères RBD lorsqu'elles sont superposées dans la carte cryo-EM du trimère CI-PFV. Ainsi, notre hypothèse est que ces boucles mobiles stabilisent le trimère Env dans une conformation de pré-fusion.
Collectivement, nos données soutiennent que le domaine supérieur du RBD de VFS est impliqué dans la stabilisation du trimère tandis que le domaine inférieur est impliqué dans la liaison aux HS.
Manuscrit II: Caractérisation des épitopes des AcNs
Dans la partie principale de mon projet de doctorat, j'ai étudié la localisation et les caractéristiques des épitopes ciblés par les AcNs dans des échantillons de plasma provenant de chasseurs d'Afrique centrale infectés par les virus zoonotiques du gorille. Il a été démontré précédemment que ces AcNs ciblent la région variable du domaine SU de l'Env du VFS (SUvar), qui recouvre la majeure partie du RBD et définit deux génotypes du VFS. Avant mon arrivée, une étudiante de M2 a réalisé une cartographie des épitopes linéaires en utilisant des peptides couvrant les régions épitopiques de SUvar prédites in silico. Comme peu de réactivités ont été observées envers ces peptides linéaires dans le test enzyme-linked immunosorbent assay (ELISA), j'ai opté pour la cartographie des épitopes conformationnels dans notre recherche d'épitopes AcNs spécifiques au génotype. J'ai cartographié les épitopes conformationnels en utilisant des protéines recombinantes comme concurrentes de l'Env exprimée par des vecteurs viraux pour la liaison aux AcNs dans les tests de neutralisation. J'ai d'abord utilisé les données publiées sur le sous-domaine de liaison Env et les sites de glycosylation définis par des tests fonctionnels. Ensuite, je me suis concentré sur les séquences spécifiques des génotypes et sur les prédictions in silico d'épitopes B linéaires.
Lorsque la structure du RBD GII-K74 est devenue disponible, je l'ai utilisée pour la conception rationnelle de nouvelles mutations sur les protéines du domaine SU. J'ai d'abord testé plusieurs constructions pour l'expression dans des cellules de mammifères et d'insectes à partir de souches de VFS de gorilles zoonotiques des deux génotypes, GI-D468 et GII-K74, et de la souche CI-PFV adaptée en laboratoire. Ces constructions comprenaient des domaines RBD et SU monomères, des protéines immunoadhésines dimères composées du domaine SU fusionné au Fc de mIgG2a (SU-Ig) et des ectodomaines trimériques. Parmi ces constructions, les protéines chimériques SU-Ig étaient les seules à présenter un niveau d'expression protéique adéquat pour deux génotypes distincts. Pour ces raisons, j'ai utilisé les SU-Ig pour l'étude de cartographie des épitopes. J'ai mis en place la production, la purification et la validation des protéines SU-Ig homologues GII-K74 et hétérologues CI-PFV pour la cartographie des épitopes AcNs spécifiques de GII et GI, respectivement. Les protéines ont été utilisées comme concurrentes dans des tests de neutralisation. J'ai confirmé que ces protéines bloquent les AcNs plasmatiques d'une manière spécifique au génotype et dose-dépendante sans affecter l'entrée des vecteurs viraux. Ces protéines ont été titrées à plusieurs reprises contre un panel d'échantillons de plasmas provenant de donneurs infectés par le VFS, dilués à leur IC90 respectif. Cette dilution a été choisie pour permettre la saturation des AcNs par les protéines recombinantes. Deux paramètres ont été définis pour caractériser la capacité de la protéine à bloquer les AcNs; l'IC50 comme mesure de leur affinité et le % d'inhibition maximale (MaxI) qui correspond à la fraction d'AcNs inhibée. Des mutations ont ensuite été introduites dans ces protéines SU-Ig pour cartographier les épitopes des AcNs dans les tests de neutralisation en comparant les valeurs de la IC50 et du MaxI (%) des mutants à celles du WT. L'introduction de mutations a généralement donné l'un des quatre résultats suivants: I) aucun impact et une activité identique à celle de la protéine WT, II) une affinité plus faible des AcNs pour la protéine mutante, comme en témoigne une IC50 plus élevée par rapport à la WT, III) un plateau MaxI plus bas, ce qui signifie qu'une fraction des AcNs n'est plus bloquée par la protéine mutante, ou IV) les deux. J'ai d'abord étudié le rôle de la glycosylation dans les épitopes des AcNs et j'ai observé que les glycanes de type complexe et à haute teneur en mannose n'influençaient pas le blocage des AcNs dans notre test. En revanche, la déglycosylation a eu un effet notable et a diminué de manière significative l'affinité des protéines SU-Ig pour la liaison aux AcNs plasmatiques de six des huit donneurs testés. Ces résultats suggèrent que certains épitopes peuvent être composés d'un glycane. Pour identifier le glycane impliqué dans cette reconnaissance, j'ai supprimé six des sept sites de glycosylation individuels au sein de la SUvar sur la protéine SU-Résumé Substantiel en Français x Ig homologue GII-K74, tandis que le glycane conservé N8 a été ignoré car il est essentiel à l'expression de la protéine. Parmi tous les mutants, la délétion du glycane N7' a eu l'effet le plus fort et a entraîné une perte significative de l'activité de blocage de l'AcN pour cinq des sept donneurs infectés par GII testés. Ce glycane est situé au CC du RBD dans le domaine inférieur et à proximité immédiate du glycane N8 conservé. 
L'élimination du glycane N10, qui a un emplacement spécifique au génotype, n'a pas affecté le blocage des AcNs pour tous les échantillons de plasma testés.
J'ai également cherché à savoir si les AcNs reconnaîtraient le nouveau site de liaison de l'héparane sulfate que nous avons cartographié sur le domaine inférieur du RBD (manuscrit I).
Cependant, les protéines portant les quatre mutations HBS ont conservé une activité égale à celle du WT pour trois des quatre donneurs infectés par le GII testés. Nous concluons donc que le HBS n'est pas une cible dominante des AcNs.
Ensuite, nous avons examiné le rôle des domaines fonctionnels. Ainsi, nous avons généré des protéines SU-Ig mutantes RBDj des deux génotypes et testé leur capacité à inhiber les AcNs.
L'élimination du RBDj a complètement aboli l'activité bloquante de la protéine SU pour sept des huit donneurs, ce qui suggère que les principaux épitopes AcNs sont situés dans cette région. Nous avons ensuite généré un échange de SU-Ig GII avec un sous-domaine GI-RBDj qui a bloqué les AcNs plasmatiques de quatre donneurs infectés par GI. Ces résultats confirment que le sous-domaine RBDj est une cible dominante des AcNs chez les humains infectés par les souches de génotype I du gorille.
Sur la structure 3D, le RBDj se situe à l'apex du RBD et du trimère. De plus, nous avons observé que cette région comporte les quatre boucles supposées être impliquées dans la stabilisation du trimère. Parmi ces boucles (L1-4), L1 semble enfouie dans le trimère et probablement non accessible pour les AcNs. Ainsi, pour mieux définir les épitopes dans cette région, nous avons conçu des protéines SU-Ig avec des mutations au niveau des boucles pour les deux génotypes.
Les trois boucles sommitales restantes (L2; aa 278-293, L3; aa 410-433 et L4; aa 442-458) ont été supprimées individuellement. Les nouveaux mutants de boucle ont démontré un ciblage spécifique du génotype par les AcNs. Les AcNs spécifiques de GI ciblent principalement la région L3 (CI-PFV L3; aa 411-436), tandis que les AcNs spécifiques de GII ont une réponse plus large et ciblent les trois boucles. Pour confirmer les résultats pour GI, nous avons généré un mutant avec un échange GII-L3 dans le squelette SU-Ig du CI-PFV et nous avons confirmé que ce mutant perdait sa capacité à bloquer les AcNs de six échantillons de plasma spécifiques de GI testés. Ensuite, une chercheuse post-doctorale, Dr. Youna Coquin, a produit des VVFs
TABLE OF FIGURES
CHAPTER I
___________________________________________________________________________
INTRODUCTION
The emergence of pathogenic infectious agents in the human population is often the result of a transmission from an animal reservoir, a so-called zoonosis [START_REF] Jones | Global trends in emerging infectious diseases[END_REF][START_REF] Taylor | Risk factors for human disease emergence[END_REF]. Pathogenic zoonotic viruses have led to numerous outbreaks with major impacts on human society during the past century [START_REF] Jones | Global trends in emerging infectious diseases[END_REF][START_REF] Morens | Emerging infections: a perpetual challenge[END_REF]. Of notice are the 1918 and 2009 avian/swine flu pandemics [START_REF] Garten | Antigenic and genetic characteristics of swine-origin 2009 A(H1N1) influenza viruses circulating in humans[END_REF][START_REF] Potter | A history of influenza[END_REF] as well as the ongoing severe acute respiratory syndrome corona virus 2 (SARS-CoV-2) pandemic [START_REF] Wu | A new coronavirus associated with human respiratory disease in China[END_REF][START_REF] Zhou | A pneumonia outbreak associated with a new coronavirus of probable bat origin[END_REF]. Besides pandemics, outbreaks of reemerging viruses are frequent: important examples include Ebola viruses [START_REF] Malvy | Ebola virus disease[END_REF], hantaviruses [START_REF] Martínez | Super-Spreaders" and Person-to-Person Transmission of Andes Virus in Argentina[END_REF] and vector born arboviruses such as Zika, Dengue and Chikungunya [START_REF] Marston | Emerging viral diseases: confronting threats with new technologies[END_REF]. The introduction of zoonotic viruses is influenced by factors like live animal markets, rapid urbanization and ongoing climate changes [START_REF] Bloom | Emerging infectious diseases: A proactive approach[END_REF][START_REF] Daszak | Emerging infectious diseases of wildlife--threats to biodiversity and human health[END_REF]. Indeed, recent model simulations predict that hotpots for viral sharing and cross-species transmissions will expand and increase in regions with high biodiversity and dense human populations as global warming continues in the coming 50 years [START_REF] Carlson | Climate change increases cross-species viral transmission risk[END_REF]. The likelihood of infectious organisms to emerge in humans is influenced by both the organism species and presence of geographical overlap between their animal hosts and humans [START_REF] Davies | Phylogeny and geography predict pathogen community similarity in wild primates and humans[END_REF].
In line with this, the continuing rise of human density in rural forest areas including deforestation and non-human primate (NHP) bushmeat marked sales have likely favored retrovirus cross-species transmissions from simian reservoirs to humans in Central Africa [START_REF] Locatelli | Cross-species transmission of simian retroviruses: how and why they could lead to the emergence of new diseases in the human population[END_REF], and led to the emergence of human immunodeficiency virus (HIV) in the 1980s [START_REF] Rua | Origin, evolution and innate immune control of simian foamy viruses in humans[END_REF][START_REF] Sharp | Origins of HIV and the AIDS pandemic[END_REF].
A broad range of Old-World monkeys (OWMs) and Apes are natural hosts of several retroviruses including the simian immunodeficiency virus (SIV) and simian T-cell leukemia virus (STLV) [START_REF] Locatelli | Cross-species transmission of simian retroviruses: how and why they could lead to the emergence of new diseases in the human population[END_REF]. Today, more than 40 different SIVs have been discovered in African OWMs and Apes while more than 30 NHP species across Africa and Asia have been shown to naturally carry STLVs. A complex transmission pattern of SIVs from smaller monkeys including the Cercopithecus species to certain chimpanzee subspeciesand from chimpanzees to a subspecies of gorillas have influenced the emergence of the human immunodeficiency virus type 1 (HIV-1) [START_REF] Peeters | Simian retroviruses in African apes[END_REF][START_REF] Sharp | Origins of HIV and the AIDS pandemic[END_REF].
Indeed, a single cross-species transmission of SIVcpz from chimpanzees (Pan troglodytes troglodytes) in Southeastern Cameroon gave rise to the HIV-1 group M epidemic, while an additional SIVcpz transmission in South-central Cameroon led to the HIV-1 group N strains infecting a more limited number of individuals [START_REF] Keele | Chimpanzee reservoirs of pandemic and nonpandemic HIV-1[END_REF][START_REF] Sharp | Origins of HIV and the AIDS pandemic[END_REF]. The endemic HIV-1 group O and P strains have been shown related to SIVgor from Western lowland gorillas (Gorilla gorilla) [START_REF] D'arc | Origin of the HIV-1 group O epidemic in western lowland gorillas[END_REF][START_REF] Plantier | A new human immunodeficiency virus derived from gorillas[END_REF]. This gorilla subspecies likely aquired its SIV-infection from chimpanzees about 100-200 years ago [START_REF] Takehisa | Origin and biology of simian immunodeficiency virus in wild-living western gorillas[END_REF]. The SIVcpz transmission event which led to the HIV-1 group M pandemic was estimated to have occurred in Kinshasa in the Democratic Republique of Congo (DRC) in the 1920s [START_REF] Faria | HIV epidemiology. The early spread and epidemic ignition of HIV-1 in human populations[END_REF]. Concurrently, the less pathogenic and distally related HIV-2 resulted from nine independent transmission events of SIVsmm from sooty mangabey monkeys (Cercocebus atys) in West Africa [START_REF] Ayouba | Evidence for continuing cross-species transmission of SIVsmm to humans: characterization of a new HIV-2 lineage in rural Cote d'Ivoire[END_REF][START_REF] Chen | Genetic characterization of new West African simian immunodeficiency virus SIVsm: geographic clustering of household-derived SIV strains with human immunodeficiency virus type 2 subtypes and genetically diverse viruses from a single feral sooty mangabey troop[END_REF][START_REF] Hirsch | An African primate lentivirus (SIVsm) closely related to HIV-2[END_REF]. These transmissions gave rise to endemic HIV-2 group A and B strains infection approx. 1-2 million people in West Africa, and seven 'dead-end' infections (group C to I) each described only in one or two individuals to date [START_REF] Visseaux | Hiv-2 molecular epidemiology[END_REF]. Another human retrovirus with oncogenic features was isolated from a patient with cutaneous T-cell lymphoma, today known as human T-cell leukemia virus type 1 (HTLV-1) [START_REF] Poiesz | Detection and isolation of type C retrovirus particles from fresh and cultured lymphocytes of a patient with cutaneous Tcell lymphoma[END_REF]. HTLV-1 is highly endemic in areas of Japan, sub-Saharan Africa, the South Americas, the Caribbean and among Aboriginal groups of Australia. An estimated 5-10 million people are carriers of HTLV-1 and although most cases remain asymptomatic, approximately 5% are associated with severe disease like adult T-cell leukemia/lymphoma (ATL) [START_REF] Gessain | Epidemiological Aspects and World Distribution of HTLV-1 Infection[END_REF].
Epidemiology and molecular virology studies on the simian counterpart to HTLV-1 (STLV-1) suggest that cross-species transmission of STLV-1 to humans occurred approximately [START_REF] Barnett | Structure and mechanism of a coreceptor for infection by a pathogenic feline retrovirus[END_REF]300 years ago in Africa [START_REF] Jegado | Infection of CD4+ T lymphocytes by the human T cell leukemia virus type 1 is mediated by the glucose transporter GLUT-1: evidence using antibodies specific to the receptor's large extracellular domain[END_REF][START_REF] Peeters | Simian retroviruses in African apes[END_REF]. Importantly, zoonotic transmissions of SIVsmm and STLV-1 are still ongoing today in populations exposed to NHPs [START_REF] Ayouba | Evidence for continuing cross-species transmission of SIVsmm to humans: characterization of a new HIV-2 lineage in rural Cote d'Ivoire[END_REF][START_REF] Filippone | A Severe Bite From a Nonhuman Primate Is a Major Risk Factor for HTLV-1 Infection in Hunters From Central Africa[END_REF][START_REF] Kazanji | Origin of HTLV-1 in hunters of nonhuman primates in Central Africa[END_REF][START_REF] Wolfe | Emergence of unique primate T-lymphotropic viruses among central African bushmeat hunters[END_REF].
The foamy viruses (FVs, also named spumaviruses) constitute the third family of complex retroviruses found widespread among many animal species including both OWMs, Apes and New-World monkeys (NWMs). My host laboratory has described that bites from NHPs constitute the major route of zoonotic transmission of simian FVs (SFVs) into humans [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF][START_REF] Calattini | Simian foamy virus transmission from apes to humans, rural Cameroon[END_REF]. Human SFV-infection leads to the establishment of a lifelong persistent infection without reported severe pathogenicity or human-to-human transmission.
The course of animal pathogens to emerge and cause disease in the human population can be explained by five stages [START_REF] Wolfe | Origins of major human infectious diseases[END_REF]. SFVs represent a stage 2 pathogen. This stage describes a zoonotic spill-over from an animal reservoir into a human without subsequent human-to-human transmission. Most SFV-infected humans reported direct contact with NHPs and are therefore the first human host of these zoonotic viruses. In contrast, HTLV-1 and HIV-1 group M represent stage 4 and 5 pathogens, respectively (Fig. I-1). Stage 4 delineate several cycles of animal-to-human and/or human-to-human transmissions while stage 5 transmissions are driven solely by a human reservoir.
With this in mind, FVs constitute a good model to study one key step of emergence of a simian retrovirus: its persistence in the primary human host and the restriction of its transmission to other human hosts. The work of this PhD thesis has been focusing on the humoral immune responses directed against zoonotic SFVs with use of plasma samples from SFV-infected Central African inhabitants from Cameroon bitten by gorillas during hunting activities.
Neutralizing antibodies from individuals were tested against viral derived vectors and proteins whose sequence are identical to the SFV strains they are infected with. Examples of different stages of infection/spill-over (left column) by four major zoonotic viruses in accordance to the reservoir mediating transmission to humans (right column). Stage 2 agents: characterized by human acquisition from direct contact with animal reservoir and no human-human transmission, exemplified by SFV transmission from NHP reservoirs. Stage 3 agents: characterized by human cases acquired from animal reservoirs leading to smaller human outbreaks with limited human-human transmission, exemplified by Ebola virus from bat and NHP reservoirs. Stage 4 agents: described by longer endemic outbreaks with human-human transmission as main route, although animal-human transmissions still occur as exemplified by HTLV-1 originating from NHP reservoirs. Stage 5 agents: characterized by the establishment of a long epidemic outbreak with exclusive human-human transmission, as seen for HIV-1 group M originating from a single spill-over from a chimpanzee reservoir. Figure created with BioRender.com and adapted from [START_REF] Wolfe | Origins of major human infectious diseases[END_REF].
Genetic and molecular characterization of foamy viruses
FVs were discovered by coincidence in 1954 by Enders and Peebles who observed cytopathic effects (CPEs) in rhesus monkey kidney cell culture [START_REF] Enders | Propagation in tissue cultures of cytopathogenic agents from patients with measles[END_REF]. In the following years several strains of SFVs were isolated from tissue cultures of rhesus and cynomolgus macaques as well as from baboons and red grass monkeys [START_REF] Clarke | The morphogenesis of simian foamy agents[END_REF][START_REF] Johnston | A second immunologic type of simian foamy virus: monkey throat infections and unmasking by both types[END_REF][START_REF] Rustigian | Infection of monkey kidney tissue cultures with viruslike agents[END_REF]. Then in 1971, Achong and colleagues isolated a similar virus (Fig. I-2) from a nasopharyngeal carcinoma biopsy of a patient from Kenya causing CPEs in human cells [START_REF] Achong | An unusual virus in cultures from a human nasopharyngeal carcinoma[END_REF]. The virus was initially termed human foamy virus (HFV) and subsequent serological characterizations of HFV showed high similarities to strains isolated from chimpanzees [START_REF] Brown | Human foamy virus: further characterization, seroepidemiology, and relationship to chimpanzee foamy viruses[END_REF][START_REF] Nemo | Antigenic relationship of human foamy virus to the simian foamy viruses[END_REF]. Isolation of the HFV strain initially led to speculations on naturally circulating HFVs. However, a comprehensive study with appropriate testing (serology and PCR) failed to detect HFV in samples from 223 patients and did not find antibodies in any of >2600 human sera samples from suspected high-risk areas [START_REF] Meiering | Historical perspective of foamy virus epidemiology and infection[END_REF][START_REF] Schweizer | Markers of foamy virus infections in monkeys, apes, and accidentally infected humans: appropriate testing fails to confirm suspected foamy virus prevalence in humans[END_REF]. Sequencing of HFV demonstrated this strain to be closely related to a chimpanzee SFV strain [START_REF] Herchenroder | Isolation, cloning, and sequencing of simian foamy viruses from chimpanzees (SFVcpz): high homology to human foamy virus (HFV)[END_REF]. A later study found the integrase (IN) and group-specific antigen (gag) sequences of HFV to be 96% identical at the nucleotide level with SFV strains isolated from the chimpanzee subspecies Pan troglodydes schweinfurthii (SFVpsc), suggesting the Kenyan HFV isolate to represent a unique case of zoonotic spill-over from NHPs [START_REF] Switzer | Frequent simian foamy virus infection in persons occupationally exposed to nonhuman primates[END_REF]. Today, HFV is referred to as prototype foamy virus (PFV) which is the most commonly studied laboratory adapted strain. (x137,500). Right: Mature and immature (arrows) viral particles budding from cell surface plasma membrane (x180,000). Authors acquired the pictures by a Phillips EM 300 electron microscope. Figure from [START_REF] Achong | An unusual virus in cultures from a human nasopharyngeal carcinoma[END_REF].
Phylogeny of retroviruses
Retroviruses are single-stranded (ss) RNA viruses named after their characteristic enzyme, the reverse transcriptase, which reverse-transcribes their ssRNA genome into double-stranded (ds) DNA. The Retroviridae family consists of both exogenous and endogenous retroviruses (ERVs) (Fig. I-3).
The exogenous retroviruses are divided into seven distinct genera (Alpha-, Beta-, Delta-, Epsilon- and Gammaretrovirus, Lenti- and Spumavirus), which are separated into two subfamilies: the Orthoretrovirinae and the Spumaretrovirinae. The FVs constitute a single genus of the Spumaretrovirinae subfamily. This taxonomy reflects that FVs are basal and distinct from other exogenous retroviruses due to their unusual style of replication, which shares properties with that of hepatitis B virus (HBV) from the Hepadnaviridae family (Rethwilm, 1996). The exogenous retroviruses are also classified as either simple or complex type based on the absence or presence of viral accessory genes. Three genera of complex-type retroviruses are found among several vertebrate animal species, including primates and humans: Lentivirus, which comprises SIV/HIV; Deltaretrovirus, which comprises STLV/HTLV; and Spumavirus.
FV evolution
The exogenous family of Spumavirus is divided into five distinct genera: Simiispumavirus (SFVs), Prosimiispumavirus (prosimian FVs, PSFVs), Felispumavirus (feline FVs, FFV), Bovispumavirus (bovine FVs, BFVs) and Equispumavirus (equine FVs, EFVs) [START_REF] Khan | Spumaretroviruses: Updated taxonomy and nomenclature[END_REF]. Exogenous FVs are found naturally in many mammals, including cats and pumas [START_REF] Kechejian | Feline Foamy Virus is Highly Prevalent in Free-Ranging Puma concolor from Colorado, Florida and Southern California[END_REF][START_REF] Riggs | Syncytium-forming agent isolated from domestic cats[END_REF], cattle and bison [START_REF] Amborski | Isolation of a retrovirus from the American bison and its relation to bovine retroviruses[END_REF][START_REF] Malmquist | Isolation, immunodiffusion, immunofluorescence, and electron microscopy of a syncytial virus of lymphosarcomatous and apparently normal cattle[END_REF], horses [START_REF] Tobaly-Tapiero | Isolation and characterization of an equine foamy virus[END_REF], sheep [START_REF] Flanagan | Isolation of a spumavirus from a sheep[END_REF], bats [START_REF] Wu | Virome analysis for identification of novel mammalian viruses in bat species from Chinese provinces[END_REF], prosimians [START_REF] Katzourakis | Discovery of prosimian and afrotherian foamy viruses and potential cross species transmissions amidst stable and ancient mammalian co-evolution[END_REF] and notably in a broad range of NHP species (Fig. I-4). These include both Asian and African OWMs and Apes [START_REF] Hussain | Screening for simian foamy virus infection by using a combined antigen Western blot assay: evidence for a wide distribution among Old World primates and identification of four new divergent viruses[END_REF] as well as NWMs in South America [START_REF] Ghersi | Wide distribution and ancient evolutionary history of simian foamy viruses in New World primates[END_REF][START_REF] Muniz | An expanded search for simian foamy viruses (SFV) in Brazilian New World primates identifies novel SFV lineages and host age-related infections[END_REF][START_REF] Muniz | Identification and characterization of highly divergent simian foamy viruses in a wide range of new world primates from Brazil[END_REF], and SFVs have been shown to co-speciate with their NHP hosts for 30 million years [START_REF] Switzer | Ancient co-speciation of simian foamy viruses and primates[END_REF].
To reflect FV evolution, the nomenclature of FVs uses the virus host name and "foamy virus" in three-letter capitals (i.e. FFV for feline FV, SFV for simian FV) followed by a three-letter lowercase code derived from the Latin host species name. This three-letter lowercase code comprises the first letter of the host genus followed by the first two letters of the species/subspecies. Isolate-identifying information is added after an underscore, such as the host from which the virus was isolated (for instance "hu" for human) or the isolate name [START_REF] Khan | Spumaretroviruses: Updated taxonomy and nomenclature[END_REF]. As an example, the HFV/PFV strain, which was the first human isolate and later demonstrated to be a zoonotic chimpanzee (Pan troglodytes schweinfurthii) SFV strain also termed HSRV clone 13, is designated SFVpsc_huHSRV.13, and the zoonotic gorilla (Gorilla gorilla) SFV strain BAK74 isolated from an accidentally infected African hunter is designated SFVggo_huBAK74 [START_REF] Khan | Spumaretroviruses: Updated taxonomy and nomenclature[END_REF](Rua et al., 2012a). Phylogenetic tree of pol and env sequences from 21 diverse mammalian FV hosts including 15 distinct species of NWMs, OWMs and Apes. Names of viral species are indicated at branch tips with the three-letter lowercase code of the subspecies and the name of the common mammalian host species in brackets. Figure from [START_REF] Khan | Spumaretroviruses: Updated taxonomy and nomenclature[END_REF].
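As a purely illustrative aside (not part of the taxonomy references cited above), the naming convention just described can be expressed as a short Python sketch; the host and isolate names used below are the examples quoted in the text.

```python
def fv_designation(fv_prefix: str, host_genus: str, host_epithet: str, isolate: str = "") -> str:
    """Assemble a foamy virus designation following the convention described above:
    'foamy virus' prefix (e.g. SFV) + three-letter lowercase species code
    (first letter of the genus + first two letters of the species/subspecies),
    with optional isolate-identifying information after an underscore."""
    code = (host_genus[0] + host_epithet[:2]).lower()
    name = f"{fv_prefix}{code}"
    return f"{name}_{isolate}" if isolate else name

# Examples quoted above
print(fv_designation("SFV", "Pan", "schweinfurthii", "huHSRV.13"))  # SFVpsc_huHSRV.13
print(fv_designation("SFV", "Gorilla", "gorilla", "huBAK74"))       # SFVggo_huBAK74
```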
Interestingly, the polymerase (pol) gene from the human ERV type L has been shown to be related to pol from FVs [START_REF] Cordonnier | Isolation of novel human endogenous retroviruslike elements with foamy virus-related pol sequence[END_REF]. Indeed, FV-like sequences have been discovered as ERVs in a diverse group of animal species including zebra- and platyfish [START_REF] Llorens | The foamy virus genome remains unintegrated in the nuclei of G1/S phase-arrested cells, and integrase is critical for preintegration complex transport into the nucleus[END_REF][START_REF] Schartl | The genome of the platyfish, Xiphophorus maculatus, provides insights into evolutionary adaptation and several complex traits[END_REF], the ancient marine fish coelacanth (Han and Worobey, 2012a), sloths [START_REF] Katzourakis | Macroevolution of complex retroviruses[END_REF], reptiles [START_REF] Aiewsakun | The First Co-Opted Endogenous Foamy Viruses and the Evolutionary History of Reptilian Foamy Viruses[END_REF], the prosimian aye-aye (Han and Worobey, 2012b), as well as birds and snakes [START_REF] Aiewsakun | Avian and serpentine endogenous foamy viruses, and new insights into the macroevolutionary history of foamy viruses[END_REF], suggesting an extremely ancient FV evolution spanning more than 450 million years (Fig. I-5) [START_REF] Aiewsakun | Marine origin of retroviruses in the early Palaeozoic Era[END_REF]. These studies on the macroevolutionary history of FVs have shown that all major vertebrate groups have been hosts of FVs in the past and that long-term co-speciation histories exist. FVs likely originated in the ocean half a billion years ago before co-diversifying with early vertebrate hosts into fish. Amphibian and reptile FVs radiated to dry land during this process [START_REF] Aiewsakun | Avian and serpentine endogenous foamy viruses, and new insights into the macroevolutionary history of foamy viruses[END_REF]. Moreover, the results on endogenous FVs suggest that cross-group transmissions have appeared from reptiles once or maybe twice, likely from iguanas, lizards or snakes (Toxicofera group), which ultimately gave rise to mammalian and avian FV lineages (Fig. I-5) [START_REF] Aiewsakun | Avian and serpentine endogenous foamy viruses, and new insights into the macroevolutionary history of foamy viruses[END_REF]. Figure from [START_REF] Aiewsakun | Avian and serpentine endogenous foamy viruses, and new insights into the macroevolutionary history of foamy viruses[END_REF].
FV virions, genome, protein synthesis and replication cycle
Most of the knowledge on FVs has been obtained using the PFV strain (SFVpsc_huHSRV.13) and a viral vector model system based on PFV [START_REF] Hashimoto-Gotoh | Phylogenetic analyses reveal that simian foamy virus isolated from Japanese Yakushima macaques (Macaca fuscata yakui) is distinct from most of Japanese Hondo macaques (Macaca fuscata fuscata)[END_REF][START_REF] Trobridge | Improved foamy virus vectors with minimal viral sequences[END_REF], as reviewed by [START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF]. Unless otherwise specified, the literature presented refers to this strain.
FV virions and genomic organization
FV virions are enveloped spherical structures of 100-140 nm with characteristic long spikes (~14 nm) on their surface when observed by electron microscopy (EM) (Fig. I-2) [START_REF] Delelis | Biphasic DNA synthesis in spumaviruses[END_REF][START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. FVs contain two copies of a positive-sense ssRNA genome packaged inside a protein capsid (Fig. I-6). In the infected cell, a late retrotranscription generates particles with dsDNA. The genome of PFV is approximately 13 kb and follows the classical complex retrovirus organization, with the canonical structural genes gag, pol and envelope glycoprotein (env) flanked by 5' and 3' long terminal repeats (LTRs). These genes encode the structural protein Gag; the viral enzymes protease (PR), reverse transcriptase (RT), RNase H (RH) and IN; and Env. Env is cleaved into three subunits: leader peptide (LP), surface domain (SU) and transmembrane domain (TM) [START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF]. Figure from [START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF].
The genome of FVs contains two open reading frames (ORFs) in the 3' part of the genome, tas and bet, which encode two accessory proteins, Tas and Bet. The 5' LTR harbors the typical retroviral promoter in its U3 region, and an unusual internal promoter (IP) is located in env upstream of tas. The low basal activity of the IP leads to Tas expression, which activates transcription at the IP, resulting in a positive feedback loop. Transcription of structural FV genes is strictly dependent on Tas since the promoter in the 5' LTR U3 region is practically inactive [START_REF] Keller | Characterization of the transcriptional trans activator of human foamy retrovirus[END_REF][START_REF] Lukic | The human foamy virus internal promoter directs the expression of the functional Bel 1 transactivator and Bet protein early after infection[END_REF][START_REF] Löchelt | The human foamy virus internal promoter is required for efficient gene expression and infectivity[END_REF]. The only known function of Bet is to inhibit restriction factors of the APOBEC3 family, as described in further detail in section 1.3 [START_REF] Löchelt | The antiretroviral activity of APOBEC3 is inhibited by the foamy virus accessory Bet protein[END_REF][START_REF] Russell | Foamy virus Bet proteins function as novel inhibitors of the APOBEC3 family of innate antiretroviral defense factors[END_REF]. Figure adapted from [START_REF] Delelis | Biphasic DNA synthesis in spumaviruses[END_REF][START_REF] Hamann | Foamy Virus Protein-Nucleic Acid Interactions during Particle Morphogenesis[END_REF][START_REF] Pollard | The HIV-1 Rev protein[END_REF].
Synthesis of FV proteins
The Gag precursor protein p71 (approximately 71 kDa) is encoded by the full-length viral RNA and undergoes rather limited processing: FV Gag is cleaved just once near its C-terminus (C-ter), yielding two subunits (p68/p3), of which the larger product p68 joins the p71 precursor to form the viral capsid (Fig. I-7, top) [START_REF] Flügel | Proteolytic processing of foamy virus Gag and Pol proteins[END_REF]. The smaller cleavage product p3 has yet to be demonstrated in budding virions, and its role is currently unclear. This unusual and restricted cleavage of Gag is of great importance, as cleavage mutants fail to yield infectious viruses, although lack of cleavage does not affect viral particle production (Enssle et al., 1997). Thus, FV Gag is unique and is not processed into the typical matrix (MA), capsid (CA) and nucleocapsid (NC) products observed in orthoretroviruses [START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF].
FV Pol is translated as an independent p127 precursor protein from a separate messenger RNA (mRNA). The Pol precursor is cleaved into two separate enzymatic subunits: p85, with PR, RT and RH activities, and a smaller p40 product with IN activity (Fig. I-7, top) [START_REF] Cartellieri | N-terminal Gag domain required for foamy virus particle assembly and export[END_REF][START_REF] Lee | Foamy virus assembly with emphasis on pol encapsidation[END_REF]. Both of these subunits have been described to localize in the nucleus of PFV-infected cells [START_REF] Imrich | Primate foamy virus Pol proteins are imported into the nucleus[END_REF]. Inactivation of the PR results in non-infectious viral particles that are still able to bud and enter target cells [START_REF] Baldwin | The roles of Pol and Env in the assembly pathway of human foamy virus[END_REF][START_REF] Lehmann-Che | Protease-dependent uncoating of a complex retrovirus[END_REF].
An active IN is required for FV replication [START_REF] Enssle | An active foamy virus integrase is required for virus replication[END_REF]. The 3D structure of the p40 IN subunit in complex with its viral and target DNA substrates (a nucleoprotein complex termed the intasome) was the first full-length retroviral IN structure to be solved; it shows the viral DNA strands located in the cleft between two IN dimers, with the target DNA below [START_REF] Löchelt | The human foamy virus internal promoter is required for efficient gene expression and infectivity[END_REF].
Studies also demonstrated that PFV is susceptible to the clinically approved HIV-1 IN inhibitors elvitegravir and raltegravir (Hare et al., 2010b). The structure of the PFV intasome complexed with HIV-1 IN inhibitors revealed their mode of action as strand-transfer inhibitors that function by displacing the viral DNA ends from the active sites in the intasome (Hare et al., 2010a).
Overall, the synthesis of Pol independently of Gag and the regulation of protease activity are unusual compared to orthoretroviruses.
The FV Env is translated as a full-length precursor gp130 protein. The LP targets Env gp130 to the secretory pathway in the rough endoplasmic reticulum (ER) [START_REF] Lindemann | A particle-associated glycoprotein signal peptide essential for virus maturation and infectivity[END_REF][START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF]. The gp130 is proteolytically processed into its three cleavage products LP gp18, SU gp80 and TM gp48 (Fig. I-7, top) during transfer to the cell surface membrane (Geiselhart et al., 2004). All FVs contain optimal furin cleavage site consensus motifs (R-x-x-R) between the three subunits. The cleavage between SU gp80 and TM gp48 has been shown to be of particular importance for viral infectivity, while LP gp18/SU gp80 cleavage is less essential [START_REF] Bansal | Characterization of the R572T point mutant of a putative cleavage site in human foamy virus Env[END_REF][START_REF] Duda | Prototype foamy virus envelope glycoprotein leader peptide processing is mediated by a furin-like cellular protease, but cleavage is not essential for viral infectivity[END_REF].
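As a simple illustration of the consensus just mentioned (and not a reproduction of any published analysis), the minimal R-x-x-R furin-type motif can be located in a protein sequence with a short scan; the sequence below is an arbitrary toy example, not an actual Env sequence.

```python
import re

def find_furin_like_sites(protein_seq: str):
    """Return 1-based start positions of minimal furin-type R-x-x-R motifs.

    A zero-width lookahead captures overlapping motifs; this is only a motif
    scan and says nothing about whether a given site is actually cleaved.
    """
    return [m.start() + 1 for m in re.finditer(r"(?=R..R)", protein_seq.upper())]

# Arbitrary toy sequence containing two R-x-x-R motifs (positions 5 and 16)
print(find_furin_like_sites("MKTLRAARSGGLNDVRKRRAS"))  # [5, 16]
```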
FV replication cycle
The general retroviral replication cycle comprises the following sequential steps: (1) attachment, (2) entry/fusion, (3) decapsidation, (4) nuclear entrance, (5) transcription and nuclear export, (6) protein synthesis and viral assembly, and (7) budding. These steps are also followed by FVs; however, some differ from orthoretroviruses, including steps that more closely resemble what is observed for HBV, such as the late RT step giving rise to budding particles containing dsDNA (Table I-1) [START_REF] Yu | Human foamy virus replication: a pathway distinct from that of retroviruses and hepadnaviruses[END_REF].
Step 1 -Attachment: Heparan sulfate (HS) has been demonstrated as an attachment factor for FV entry [START_REF] Nasimuzzaman | Cell Membrane-associated heparan sulfate is a receptor for prototype foamy virus in human, monkey, and rodent cells[END_REF][START_REF] Plochmann | Heparan sulfate is an attachment factor for foamy virus entry[END_REF]. FVs have an extremely broad cell tropism in vitro, as illustrated by the ability of PFV to infect virtually all cell lines tested, with the exception of a few of human and zebrafish origin (Table I-2) [START_REF] Mergia | Cell tropism of the simian foamy virus type 1 (SFV-1)[END_REF][START_REF] Stirnnagel | Analysis of prototype foamy virus particle-host cell interaction with autofluorescent retroviral particles[END_REF]. Interestingly, studies on FV entry found that cells with low HS expression, including the two cell lines resistant to FV infection, still exhibited surface binding by Env, which might indicate that FVs require additional co-factors for efficient viral entry [START_REF] Nasimuzzaman | Cell Membrane-associated heparan sulfate is a receptor for prototype foamy virus in human, monkey, and rodent cells[END_REF][START_REF] Stirnnagel | Analysis of prototype foamy virus particle-host cell interaction with autofluorescent retroviral particles[END_REF]. This ability of FVs to enter a broad range of cell types provides a useful tool for delivery of therapeutic genes by foamy viral vectors (FVVs) [START_REF] Hill | Properties of human foamy virus relevant to its development as a vector for gene therapy[END_REF]. Of note, the use of FVVs as an in vivo hematopoietic stem cell (HSC) therapy has been applied to several monogenic pre-clinical animal models, most notably the model of canine X-linked severe combined immunodeficiency (SCID-X1) in dogs [START_REF] Burtner | Intravenous injection of a foamy virus vector to correct canine SCID-X1[END_REF][START_REF] Trobridge | Stem cell selection in vivo using foamy vectors cures canine pyruvate kinase deficiency[END_REF]. Such studies have highlighted the safety of in vivo FVV therapy, and these vectors were shown to be as efficient as lentiviral vectors in long-term transduction of blood CD34+ cells in the canine model [START_REF] Rajawat | In-Vivo Gene Therapy with Foamy Virus Vectors[END_REF][START_REF] Trobridge | Foamy and lentiviral vectors transduce canine long-term repopulating cells at similar efficiency[END_REF]. More recently, FVVs have been optimized and used for delivery of gene-editing tools such as CRISPR/Cas9, an interesting alternative strategy that highlights the many possibilities of FVVs in gene therapy [START_REF] Lindel | TraFo-CRISPR: Enhanced Genome Engineering by Transient Foamy Virus Vector-Mediated Delivery of CRISPR/Cas9 Components[END_REF].
Step 2 -Entry/fusion: The broad cell tropism suggests that FVs utilize one or more ubiquitously expressed cell surface receptor(s), still unknown, for entry into their target cells. The fusion process by which viral and cellular membranes merge is Env- and pH-dependent, with a preference for an acidic pH of 5.5 for most FVs, except PFV [START_REF] Picard-Maureau | Foamy virus envelope glycoprotein-mediated entry involves a pH-dependent fusion process[END_REF]. A recent study demonstrated that fusion of PFV occurs both at the plasma membrane and from endosomes, while a macaque SFV strain was found to fuse only from endosomes [START_REF] Dupont | Identification of an Intermediate Step in Foamy Virus Fusion[END_REF]. A novel intermediate fusion step was observed in which capsids and Env were still tethered despite being separated by up to 400 nm before complete separation [START_REF] Dupont | Identification of an Intermediate Step in Foamy Virus Fusion[END_REF].
Step 3 -Decapsidation: In the cytosol, released capsids bind dynein motor protein complexes and accumulate at the microtubule organizing center (MTOC) (Fig. I-8) [START_REF] Petit | Targeting of incoming retroviral Gag to the centrosome involves a direct interaction with the dynein light chain 8[END_REF][START_REF] Saïb | Nuclear targeting of incoming human foamy virus Gag proteins involves a centriolar step[END_REF][START_REF] Yu | Foamy virus capsid assembly occurs at a pericentriolar region through a cytoplasmic targeting/retention signal in Gag[END_REF]. Capsid uncoating is protease- and cell cycle-dependent [START_REF] Lehmann-Che | Protease-dependent uncoating of a complex retrovirus[END_REF][START_REF] Patton | Cell-cycle dependence of foamy virus vectors[END_REF]. Consequently, FVs do not infect non-proliferating cells, in this respect resembling gammaretroviruses but differing from lentiviruses (Bieniasz et al., 1995b).
Table I-2 (fragment): reptilian IgH-2 cells (iguana heart epithelium) are susceptible to infection (++; ND) [START_REF] Hill | Properties of human foamy virus relevant to its development as a vector for gene therapy[END_REF]; fish Pac2 cells (zebrafish embryonic fibroblasts) are resistant (-; ND) [START_REF] Stirnnagel | Analysis of prototype foamy virus particle-host cell interaction with autofluorescent retroviral particles[END_REF].
Step 4 -Nuclear entrance: In vitro, the FV genome persists in infected cells undergoing cell cycle arrest while no integration is observed (Lo et al., 2010; [START_REF] Trobridge | Cell cycle requirements for transduction by foamy virus vectors compared to those of oncovirus and lentivirus vectors[END_REF]).
The preintegration complex (PIC) comprises IN and viral DNA. The nuclear localization signal (NLS) of IN allows the translocation of the PIC into the nucleus. The viral Gag protein also harbors an NLS within the second of three Gly-Arg (GR)-rich boxes located in the C-ter [START_REF] Schliephake | Nuclear localization of foamy virus Gag precursor protein[END_REF]. Once nuclear entrance occurs, Gag enhances proviral integration by tethering to host-cell chromatin through an evolutionarily conserved arginine residue inside the GRII box (Paris et al., 2018; Tobaly-Tapiero et al., 2008) and contributes to nuclear RNA export through a nuclear export sequence (NES) in the N-ter of Gag [START_REF] Lesbats | Structural basis for spumavirus GAG tethering to chromatin[END_REF][START_REF] Renault | A nuclear export signal within the structural Gag protein is required for prototype foamy virus replication[END_REF].
Step 5 -Transcription and nuclear export: An unusual feature of FVs is that each gene is encoded by at least one separate transcript derived from either the U3 promoter or the IP. These transcripts are spliced into more than 15 different mRNAs. However, the pre-genomic RNA, which spans the complete coding region, is the transcript preferentially encapsidated into budding particles, as reviewed by [START_REF] Bodem | Regulation of foamy viral transcription and RNA export[END_REF]. The fully spliced viral mRNAs are believed to exit the nucleus by the same pathway as cellular mRNAs. In contrast to other complex-type retroviruses, FVs do not express a regulatory protein to export intron-containing transcripts.
Instead, FVs rely on host RNA-binding proteins and the cellular nuclear export machinery for the export of full-length transcripts [START_REF] Bodem | Foamy virus nuclear RNA export is distinct from that of other retroviruses[END_REF]. One such factor is host-cell exportin 1, also termed chromosomal maintenance 1 (CRM1), a member of the karyopherin superfamily of soluble nuclear transport factors that is also used by other retroviruses, as reviewed by [START_REF] Cullen | Nuclear mRNA export: insights from virology[END_REF]. Indeed, CRM1
was shown to be essential for the nuclear export of FV full-length transcripts. However, this study demonstrated that the viral RNA binds a cellular protein named HuR, and that HuR forms a complex with other cellular proteins, including CRM1, that facilitates nuclear export [START_REF] Bodem | Foamy virus nuclear RNA export is distinct from that of other retroviruses[END_REF]. In addition, viral RNA export has also been proposed to rely on the leucine-rich NES of Gag, which is recognized by CRM1 as well [START_REF] Renault | A nuclear export signal within the structural Gag protein is required for prototype foamy virus replication[END_REF]. The majority of capsids migrate to the ER and Golgi, where they fuse with intracellular membranes containing Env. Mature viral particles bud from intracellular compartments in an Env-dependent manner and are most likely released from the cell through exocytosis. Some capsids acquire Env at the plasma membrane, and small amounts of capsid-less SVPs are also released from the cell surface. Figure from [START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF].
Step 6 -Protein synthesis and viral assembly: Upon export of mRNA from the nucleus, viral protein translation takes place in the cytoplasm, with the exception of Env, which is synthesized at the rough ER (see section 1.1.3.2). Capsid assembly in the cytoplasm is driven by Gag, and the formation of capsids is observed near centrosomes [START_REF] Lindemann | The Unique, the Known, and the Unknown of Spumaretrovirus Assembly[END_REF]. Although not essential for the formation of capsids, Pol encapsidation and PR-mediated processing of the Gag precursor are required for closed capsid structures [START_REF] Baldwin | The roles of Pol and Env in the assembly pathway of human foamy virus[END_REF][START_REF] Fischer | Foamy virus particle formation[END_REF]. Moreover, the interaction of Gag (through its C-ter GR-rich domain) with viral nucleic acids via cis-acting sequence (CAS) elements in the viral RNA genome is also essential for the formation of normally shaped capsids. Similarly, Pol also engages CAS elements important for its encapsidation into the capsid [START_REF] Hamann | Foamy Virus Protein-Nucleic Acid Interactions during Particle Morphogenesis[END_REF][START_REF] Hamann | The cooperative function of arginine residues in the Prototype Foamy Virus Gag C-terminus mediates viral and cellular RNA encapsidation[END_REF]. The precise sequence and location of these events are not entirely defined [START_REF] Lindemann | The Unique, the Known, and the Unknown of Spumaretrovirus Assembly[END_REF].
One important feature of FV replication is that reverse transcription of the packaged RNA genome occurs after capsid assembly (Fig. I-8) but before particle release, resulting in FV particles containing full-length proviral DNA (up to 10-20% of total particles). This proviral DNA has been shown to contribute substantially to viral infectivity [START_REF] Moebes | Human foamy virus reverse transcription that occurs late in the viral replication cycle[END_REF][START_REF] Yu | Evidence that the human foamy virus genome is DNA[END_REF], and DNA extracted from cell-free PFV particles is directly infectious upon transfection [START_REF] Yu | Human foamy virus replication: a pathway distinct from that of retroviruses and hepadnaviruses[END_REF]. Later studies showed that early in vitro treatment of cells with the RT inhibitor zidovudine (AZT) completely abolished PFV replication, proviral integration and DNA synthesis at a low multiplicity of infection (MOI) [START_REF] Delelis | Biphasic DNA synthesis in spumaviruses[END_REF][START_REF] Zamborlini | Early reverse transcription is essential for productive foamy virus infection[END_REF].
These results suggest a requirement of an early RT event for FV replication, at least under low MOI conditions with limited presence of infectious viral DNA from incoming virions [START_REF] Zamborlini | Early reverse transcription is essential for productive foamy virus infection[END_REF].
Step 7 -Budding: Finally, the release of mature viral particles from FV-infected cells is strictly Env-dependent, in contrast to other retroviruses, for which budding is Gag-dependent.
Budding of FV virions predominantly occurs intracellularly at the ER or Golgi rather than at the plasma membrane (Fig. I-8). The mature virions are most likely exported by exocytosis and contain Env incorporated into a host cell-derived lipid bilayer surrounding the viral capsid and genome [START_REF] Baldwin | The roles of Pol and Env in the assembly pathway of human foamy virus[END_REF][START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF][START_REF] Fischer | Foamy virus particle formation[END_REF]. Budding of capsid-free virus-like particles (VLPs), so-called subviral particles (SVPs), from the plasma membrane (Fig. I-8) also occurs to a lesser extent [START_REF] Stange | Subviral particle release determinants of prototype foamy virus[END_REF], as discussed in section 1.1.4.3. Also discussed in that section, the Env LP gp18 directly interacts with the viral capsid, which is essential for viral particle budding. The precise timing of this capsid interaction with Env during the Env maturation process is unclear but may be enhanced by proteolytic cleavage of the Env precursor by furin [START_REF] Lindemann | The Unique, the Known, and the Unknown of Spumaretrovirus Assembly[END_REF].
Functional and structural characterization of SFV Env
As the main topic of this thesis is SFV-specific antiviral antibodies directed against Env, the function and structure of Env will be explained in detail. Retroviral Envs are type I transmembrane proteins composed of an extracellular SU domain and a membrane-anchored TM domain. This class of fusogens presents a post-fusion structure that forms a trimer with prominent central α-helical coiled-coils [START_REF] Rey | Effects of human recombinant alpha and gamma and of highly purified natural beta interferons on simian Spumavirinae prototype (simian foamy virus 1) multiplication in human cells[END_REF]. The Envs are synthesized as precursors that are cotranslationally imported into the ER, leading to a maturation process involving Env folding, oligomerization and attachment of surface glycans. Following these steps, the Env precursors of orthoretroviruses are transported to the trans-Golgi network, where furin or furin-like proteases cleave the SU and TM subunits apart. While FV Env shares many of these events, some aspects differ from orthoretroviruses beyond its unusual organization with LP-SU-TM subunits, as described above [START_REF] Jones | Molecular aspects of HTLV-1 entry: functional domains of the HTLV-1 surface subunit (SU) and their relationships to the entry receptors[END_REF][START_REF] Lindemann | A particle-associated glycoprotein signal peptide essential for virus maturation and infectivity[END_REF][START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF].
Env primary sequence, receptor binding domain and 3D structure
The PFV Env is a 130 kDa precursor glycoprotein of 989 amino acids (aa). Cryo-EM analysis of viral particles showed that Env folds into trimers arranged in a hexagonal assembly on the virion surface [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. These trimers are distanced ~110 Å from each other in the hexagonal assembly, with additional interactions between Env trimers observed ~45 Å above the membrane level [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF].
The LP contains an intracellular tail, also termed the cytoplasmic domain (CyD), upstream of a hydrophobic aa sequence named the H domain. This H domain is located within the LP and is 22 aa long. The TM also contains a CyD located in the C-ter downstream of the predicted and evolutionarily conserved membrane-spanning domain (MSD) (Fig. I-9, top) [START_REF] Lindemann | A particle-associated glycoprotein signal peptide essential for virus maturation and infectivity[END_REF][START_REF] Pietschmann | An evolutionarily conserved positively charged amino acid in the putative membrane-spanning domain of the foamy virus envelope protein controls fusion activity[END_REF][START_REF] Sun | Comparative analysis of the envelope glycoproteins of foamy viruses[END_REF]. Both hydrophobic regions (H domain and MSD) are predicted to fold into α-helices. Such helices were visualized by cryo-EM of the trimeric folded PFV Env in its pre-fusion state (Fig. I-9, middle). Interestingly, the helices cluster in close hexagonal proximity, with outer and inner helices buried in the membrane.
The TM gp48 subunit mediates the fusion step with the target cell membrane using an α-helical fusion peptide (FP) located in the N-ter of TM [START_REF] Sun | Comparative analysis of the envelope glycoproteins of foamy viruses[END_REF](Wang et al., 2016b). The three FPs of the trimer appear shielded by the central SU domains, which were hypothesized to fold primarily as the upper part of the trimer [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. Figure adapted from [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF][START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF].
The SU domain is exclusively extracellular and contains the RBD (Fig. I-9, top). The RBD of PFV has been characterized by flow cytometry cell-binding assays using recombinant Env [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF][START_REF] Falcone | Sites of simian foamy virus persistence in naturally infected African green monkeys: latent provirus is ubiquitous, whereas viral replication is restricted to the oral mucosa[END_REF]. By mutating progressively shorter domains of the PFV SU, Duda et al. demonstrated the RBD to be a discontinuous region spanning aa 225-555, with a non-essential joining region (RBDjoin) at aa positions 397-483 (Fig. I-10) [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. Of note, previous binding studies of recombinant Env fusion proteins suggested that PFV interacts with two receptors of different affinity [START_REF] Falcone | Sites of simian foamy virus persistence in naturally infected African green monkeys: latent provirus is ubiquitous, whereas viral replication is restricted to the oral mucosa[END_REF].
PFV Env also harbors 24 cysteine residues spanning all three subunits, which potentially form disulfide bonds important for folding. Cell-binding assays using wild-type (WT) and mutant recombinant Env revealed essential roles of cysteines located within the C-ter of the SU domain as well as of a glycosylation site within the RBD [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF].
Env glycosylation
SU primarily contains complex-type glycans, while the LP and TM carry high-mannose or hybrid-type glycans [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF]. The primary FV Env sequence usually harbors between 14 and 16 potential N-glycosylation sites (PNGS), and glycans account for about 50% of its molecular weight [START_REF] Sun | Comparative analysis of the envelope glycoproteins of foamy viruses[END_REF]. This level of glycosylation is intermediate between the highly glycosylated HIV Env (median of 25 PNGS per monomeric SU gp120 subunit) and murine leukemia virus (MLV) Env (fewer than 10 PNGS) [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF]. Sequence alignments of Env from different FVs reveal certain N-glycosylation sites to be highly conserved across species-specific FVs. For PFV, two PNGS are located in the LP, ten in the SU and three in the TM (Fig. I-10) [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF][START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF].
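As an illustrative aside (not the method used in the glycosylation studies cited above), PNGS are conventionally predicted as N-X-S/T sequons in which X is any residue except proline; a minimal scan of a made-up peptide is sketched below.

```python
import re

def find_pngs(protein_seq: str):
    """Return 1-based positions of potential N-glycosylation sequons (N-X-S/T, X != P)."""
    # Zero-width lookahead keeps overlapping sequons; [^P] excludes proline at position X
    return [m.start() + 1 for m in re.finditer(r"(?=N[^P][ST])", protein_seq.upper())]

# Made-up peptide: one valid sequon (N-A-S at position 3) and one invalid (N-P-T)
print(find_pngs("GANASRLNPTKQ"))  # [3]
```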
Env-dependent FV particle budding and subviral particle release
Egress and intracellular transport of FV particles are highly dependent on Env, in contrast to orthoretroviruses, which are able to release VLPs in the absence of Env. FV budding relies on the evolutionarily conserved W-x-x-W motif located in the N-ter region of the LP subunit (Fig. I-10). This motif interacts with Gag during budding [START_REF] Fischer | Foamy virus particle formation[END_REF]. The capsid region interacting with the N-ter of LP is located in the N-ter of Gag, as observed in crystal structures of N-ter Gag (aa 1-179) complexed with LP N-ter peptides (aa 1-20) [START_REF] Goldstone | A unique spumavirus Gag N-terminal domain with functional properties of orthoretroviral matrix and capsid[END_REF][START_REF] Reh | An N-terminal domain helical motif of Prototype Foamy Virus Gag with dual functions essential for particle egress and viral infectivity[END_REF][START_REF] Wilk | Specific interaction of a novel foamy virus Env leader protein with the N-terminal Gag domain[END_REF]. A similar LP-Gag interaction occurs in FFV [START_REF] Geiselhart | Features of the Env leader protein and the N-terminal Gag domain of feline foamy virus important for virus morphogenesis[END_REF].
FV particle transport and egress are also affected by a K-K-x-x dilysine motif (K-K-K in PFV) located in the C-ter of the TM domain, which functions as an ER retrieval signal (Fig. I-10) [START_REF] Goepfert | A sorting motif localizes the foamy virus glycoprotein to the endoplasmic reticulum[END_REF][START_REF] Goepfert | Identification of an ER retrieval signal in a retroviral glycoprotein[END_REF]. EFV TM lacks the dilysine motif, and EFV was shown to bud exclusively at the plasma membrane of in vitro infected cells [START_REF] Kirisawa | Isolation of an Equine Foamy Virus and Sero-Epidemiology of the Viral Infection in Horses in Japan[END_REF][START_REF] Tobaly-Tapiero | Isolation and characterization of an equine foamy virus[END_REF].
FVs release Env-only SVPs, as does HBV [START_REF] Hussain | Screening for simian foamy virus infection by using a combined antigen Western blot assay: evidence for a wide distribution among Old World primates and identification of four new divergent viruses[END_REF]. SVPs are non-infectious empty vesicles containing glycoproteins in the absence of both viral capsid and viral genome. These particles are released in massive amounts during HBV infection and may act as decoy factors for neutralizing antibodies (nAbs), potentially inducing immune tolerance [START_REF] Chai | Properties of subviral particles of hepatitis B virus[END_REF].
The release of FV SVPs is significantly lower than that of HBV [START_REF] Shaw | Foamy virus envelope glycoprotein is sufficient for particle budding and release[END_REF] and is regulated by ubiquitination of the CyD of the LP domain of PFV Env [START_REF] Stanke | Ubiquitination of the prototype foamy virus envelope glycoprotein leader peptide regulates subviral particle release[END_REF]. The LP domain of PFV contains five potential Ub attachment sites downstream of the tryptophan Gag-interaction motif (Fig. I-10). These sites, in particular the first two, suppress the generation of SVPs [START_REF] Stange | Subviral particle release determinants of prototype foamy virus[END_REF][START_REF] Stange | Characterization of prototype foamy virus gag late assembly domain motifs and their role in particle egress and infectivity[END_REF][START_REF] Stanke | Ubiquitination of the prototype foamy virus envelope glycoprotein leader peptide regulates subviral particle release[END_REF]. Figure adapted from [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF][START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF][START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].
Env-induced superinfection resistance
Superinfection resistance (SIR) describes the resistance of a cell infected with a certain virus to infection with the same type of exogenous virus (Fig. I-11). SIR usually results from either extracellular Env binding to the entry receptor at the surface of infected cells or Env binding to the receptor intracellularly. Both scenarios lead to masking and/or downregulation of receptor expression on the cell surface [START_REF] Nethe | Retroviral superinfection resistance[END_REF]. FV Env-transduced cells stably expressing PFV Env on their surface are less susceptible to infection than non-transduced cells [START_REF] Berg | Determinants of foamy virus envelope glycoprotein mediated resistance to superinfection[END_REF][START_REF] Falcone | Sites of simian foamy virus persistence in naturally infected African green monkeys: latent provirus is ubiquitous, whereas viral replication is restricted to the oral mucosa[END_REF][START_REF] Hill | Properties of human foamy virus relevant to its development as a vector for gene therapy[END_REF]. SIR depends upon the CyD and/or MSD domains of the TM. Artificial anchoring of the Env by substituting the MSD with a phosphoinositol signal sequence could restore SIR and cell surface expression (Berg et al., 2003). Secreted SFV SU does not mediate SIR, which contrasts with the inhibition of MLV infection by monomeric SU [START_REF] Battini | Receptor-binding domain of murine leukemia virus envelope glycoproteins[END_REF]. Expression of PFV Env was also able to induce SIR against PFV vectors pseudotyped with heterologous Env from SFV, FFV, BFV and EFV strains, suggesting that at least one host molecule is used for entry by FVs infecting different mammalian species [START_REF] Berg | Determinants of foamy virus envelope glycoprotein mediated resistance to superinfection[END_REF]. SIR was also observed against strains from different genotypes, which will be defined in section 1.1.5 [START_REF] Hill | Properties of human foamy virus relevant to its development as a vector for gene therapy[END_REF]. This lack of resistance to infection mediated by soluble recombinant Env is a key property, which made the epitope mapping strategy possible (section 5, Manuscript II).
Genetic variability and recombination of SFVs
Despite the fact that FVs contain an RNA phase in their replication cycle, their genomes are very stable in vivo. The mutational error rate of PFV in cell culture has been shown to be in the range of 1.7 x 10^-4 to 1.1 x 10^-5 substitutions per nucleotide per replication cycle [START_REF] Boyer | In vitro fidelity of the prototype primate foamy virus (PFV) RT compared to HIV-1 RT[END_REF][START_REF] Gärtner | Accuracy estimation of foamy virus genome copying[END_REF]. For comparison, the average mutation rates of HIV-1 and influenza A viruses are 6.3 x 10^-5 and 2.5 x 10^-5, respectively [START_REF] Sanjuán | Mechanisms of viral mutation[END_REF]. The genetic stability of SFVs is also evident from their historical evolution, characterized by ancient co-speciation with their species-specific NHP hosts and by evidence of cross-species transmission events [START_REF] Muniz | An expanded search for simian foamy viruses (SFV) in Brazilian New World primates identifies novel SFV lineages and host age-related infections[END_REF][START_REF] Rethwilm | Evolution of foamy viruses: the most ancient of all retroviruses[END_REF][START_REF] Switzer | Ancient co-speciation of simian foamy viruses and primates[END_REF].
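To put these per-nucleotide rates into perspective, a rough back-of-the-envelope calculation (an illustration only, assuming the ~13 kb PFV genome length quoted earlier and applying the same length to all viruses purely for comparison) converts them into expected substitutions per genome per replication cycle.

```python
# Illustration only: expected substitutions per genome per replication cycle,
# assuming a ~13 kb genome for every virus (the PFV value quoted earlier;
# real HIV-1 and influenza A genomes differ in length).
GENOME_LENGTH_NT = 13_000

error_rates = {
    "PFV (upper estimate)": 1.7e-4,
    "PFV (lower estimate)": 1.1e-5,
    "HIV-1 (average)": 6.3e-5,
    "Influenza A (average)": 2.5e-5,
}

for label, rate in error_rates.items():
    print(f"{label}: ~{rate * GENOME_LENGTH_NT:.2f} substitutions per genome per cycle")
```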
Genetic stability of FVs
FV evolution was studied in co-housed African green monkeys (AGMs) (Cercopithecus aethiops) imported from Kenya and harboring SFVagm. Viral clones obtained over a 13-year period from one of these monkeys and from an animal caretaker at the facility, who accidentally acquired an SFVagm infection, had between 99.5 and 100% aa identity [START_REF] Schweizer | Simian foamy virus isolated from an accidentally infected human individual[END_REF].
This remarkable stability of FVs in a single host and their slow genetic drift during evolution could potentially be explained by two scenarios: a low replication rate of FVs in vivo or a very high-fidelity FV RT. Indeed, viral replication (i.e., presence of SFV RNA) is primarily restricted to the oral cavity and mucosa in NHPs, while proviral DNA can be detected in the majority of tissues, indicative of a largely latent infection, discussed further in section 1.2 (Falcone et al., 1999a; [START_REF] Murray | Replication in a superficial epithelial cell niche explains the lack of pathogenicity of primate foamy virus infections[END_REF]). The PFV RT has an in vitro error rate similar to that of HIV-1 RT. However, PFV RT does not seem to concentrate errors at specific hotspots in vitro, as HIV-1 RT does, but produces more insertions and deletions overall [START_REF] Boyer | In vitro fidelity of the prototype primate foamy virus (PFV) RT compared to HIV-1 RT[END_REF].
Interestingly, a study reported that all PFV nucleotide mutations observed in vitro were guanosine-to-adenosine (G-to-A) changes, suggesting in vitro activity of the apolipoprotein B editing complex 3 (APOBEC3) family of proteins [START_REF] Gärtner | Accuracy estimation of foamy virus genome copying[END_REF]. These mutations were reduced by 50% when recombinant Bet protein was co-expressed, supporting its antagonistic role, discussed further in section 1.3 [START_REF] Gärtner | Accuracy estimation of foamy virus genome copying[END_REF].
FV diversity and recombination
A central genetic alteration giving rise to viral diversity is recombination. Genetic recombination can occur when two distinct but related viruses infect the same cell, a process also termed homologous recombination. Frequent SFV recombination events occurred in vitro with use of the PFV vector system [START_REF] Gärtner | Accuracy estimation of foamy virus genome copying[END_REF]. Phylogenetic analyses of the pol gene have been conducted on tissue samples from chimpanzees living in the Taï National Park in Côte d'Ivoire and on fecal samples collected from four chimpanzee subspecies living at 25 sites spread over equatorial Africa. They revealed intra- and interspecies transmission of SFV strains between chimpanzees and from African Colobus and Cercopithecus monkeys to Apes. In fact, since chimpanzees often prey on smaller monkeys, the occurrence of such SFV superinfections could give rise to new recombinant SFVs [START_REF] Blasse | Mother-offspring transmission and age-dependent accumulation of simian foamy virus in wild chimpanzees[END_REF][START_REF] Leendertz | Interspecies transmission of simian foamy virus in a natural predator-prey system[END_REF][START_REF] Liu | Molecular ecology and natural history of simian foamy virus infection in wild-living chimpanzees[END_REF].
Indeed, recombination has been observed in Bangladesh using gag sequences from rhesus macaque monkeys, which also showed that SFVmmu (Macaca mulatta) strains cluster according to geographical sampling [START_REF] Feeroz | Population dynamics of rhesus macaques and associated foamy virus in Bangladesh[END_REF]. Further evidence supports that deforestation and NHP translocation, as observed with nomadic people travelling with performance monkeys, are influencing SFV transmission and diversity in this region [START_REF] Feeroz | Population dynamics of rhesus macaques and associated foamy virus in Bangladesh[END_REF].
SFV env is overall more conserved than gag across NHP species, in contrast to simian lentiviruses, which present greater variability in env, leading to immune escape mutations [START_REF] Rethwilm | Evolution of foamy viruses: the most ancient of all retroviruses[END_REF]. Despite this, significant divergence within SFV env has also been observed. The first observation is based on the 19 co-housed AGMs. Their env sequences clustered into four SFVagm subtypes showing >95% sequence similarity within clusters but 3 to 25% aa divergence between clusters [START_REF] Schweizer | Genetic stability of foamy viruses: long-term study in an African green monkey population[END_REF].
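The similarity figures above come from alignment-based comparisons; as a simplified, hypothetical sketch (ignoring real alignment software and the subtleties of gap handling), pairwise percent identity over an already aligned stretch can be computed as follows.

```python
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    """Percent identity between two pre-aligned, equal-length sequences.

    Simplified illustration: columns where either sequence has a gap ('-')
    are excluded from the comparison.
    """
    if len(aligned_a) != len(aligned_b):
        raise ValueError("Sequences must be aligned to the same length")
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Toy aligned fragments (hypothetical, not real SFV sequences)
print(percent_identity("MKTLRAARSG", "MKTLKAARSG"))  # 90.0
```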
Following full-length sequencing of SFVmcy from Taiwanese Formosan rock macaques (Macaca cyclopis), it was observed that one of two isolated SFVmcy strains presented greater SU aa similarity to SFVagm than to the SU of the other SFVmcy strain. The researchers proposed that recombination occurred between SFV strains and suggested that recombination hotspots may be present in the SU region [START_REF] Galvin | Identification of recombination in the envelope gene of simian foamy virus serotype 2 isolated from Macaca cyclopis[END_REF]. Similar hotspots had previously been observed within the SU domain of Env from SFVagm [START_REF] Schweizer | Genetic stability of foamy viruses: long-term study in an African green monkey population[END_REF] and FFV strains [START_REF] Winkler | Epidemiology of feline foamy virus and feline immunodeficiency virus infections in domestic and feral cats: a seroepidemiological study[END_REF]. FFV sequences clustered into two subgroups presenting less than 60% identity in SU between each other, while identity within each group was >97% [START_REF] Winkler | Epidemiology of feline foamy virus and feline immunodeficiency virus infections in domestic and feral cats: a seroepidemiological study[END_REF].
Env diversity of SFV strains infecting humans and Apes in Central Africa
My research unit has contributed extensively to the discovery of new zoonotic SFV strains and to the characterization of their prevalence through epidemiological studies in humans living in rural areas of Cameroon and Gabon [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF][START_REF] Calattini | Multiple retroviral infection by HTLV type 1, 2, 3 and simian foamy virus in a family of Pygmies from Cameroon[END_REF][START_REF] Calattini | Simian foamy virus transmission from apes to humans, rural Cameroon[END_REF](Mouinga-Ondémé et al., 2012). Five primary zoonotic SFV strains have been isolated by coculture of peripheral blood mononuclear cells (PBMCs) from infected Central African hunters with BHK-21 cells, which are highly susceptible to SFVs. Two strains belonged to the gorilla SFV species, two to the chimpanzee species and one to the Cercopithecus species (Rua et al., 2012a). Molecular characterization of these SFV strains demonstrated a high degree of genetic conservation between zoonotic and NHP sequences. Natural SFV polymorphisms in gag, tas, bet and the U3 region of the LTR were observed (Rua et al., 2012a). The env gene was the most variable one [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].
The two SFVcpz/ggo genotypes were characterized by env recombination hotspots located within SU, as previously reported for SFVmcy/agm and FFV. More recent studies have also identified two SU genotypes circulating among mandrills (Mandrillus sphinx, SFVmsp) from Cameroon and Gabon (Aiewsakun et al., 2019a). In that study, the authors predicted SU recombination events based on env sequences from fifty-four SFV strains (forty-seven full-length); two recombination hotspots were discovered, between nucleotides 631/768 and 1369/1521, respectively. The accompanying figure shows the site-wise evolutionary rate along Env; SFV strain Bad316 was predicted to have a recombination site outside the predicted recombination hotspot that defines SUvar. Figure from (Aiewsakun et al., 2019a).
A more recent study took advantage of the species-specific SFV co-speciation with NHPs to address evolution of Japanese macaques (Macaca fuscata) [START_REF] Hashimoto-Gotoh | Phylogenetic analyses reveal that simian foamy virus isolated from Japanese Yakushima macaques (Macaca fuscata yakui) is distinct from most of Japanese Hondo macaques (Macaca fuscata fuscata)[END_REF].
This study showed that the conserved region of env presented remarkably high correlation with phylogenetic trees of NHP evolution based on host genome sequences, and it further confirmed the presence of a variable region within SU among the new SFVmfu isolates [START_REF] Hashimoto-Gotoh | Phylogenetic analyses reveal that simian foamy virus isolated from Japanese Yakushima macaques (Macaca fuscata yakui) is distinct from most of Japanese Hondo macaques (Macaca fuscata fuscata)[END_REF].
The studies on Cameroonian and Gabonese zoonotic SFVggo strains [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF] provide evidence for SFV strain clustering according to geographical location, which is in line with observations from previous studies on macaque SFVs in Bangladesh [START_REF] Feeroz | Population dynamics of rhesus macaques and associated foamy virus in Bangladesh[END_REF] and mandrill SFVs in Gabon [START_REF] Mouinga-Ondeme | Two distinct variants of simian foamy virus in naturally infected mandrills (Mandrillus sphinx) and cross-species transmission to humans[END_REF]. Moreover, these mandrill SFVmsp strains from Gabon cluster according to location in the North and South, which are separated by the Ogooué river, with the two SU variants occurring in both populations (Aiewsakun et al., 2019a). These results further support that phylogenetically distinct clades display host co-divergence and separation patterns based on geography.
After these genetic studies, we referred to the divergent SU region as the SU variant region (SUvar). The remainder of the SU domain, spanning the N-ter and C-ter regions, is very conserved (SUcon), as is the rest of Env. In addition to forming two phylogenetically distinct env clades or potential genotypes, the SUvar region also overlaps the bipartite RBD, which could explain the neutralization profiles of different serotypes (see section 1.3.3). Finally, subsequent work from our unit provided strong evidence that the SUvar region is targeted by nAbs from the majority of SFV-infected African hunters, confirming the match between SFV genotype and serotype [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. This important study by Lambert et al. serves as the basis for the work of this thesis, which aims to further characterize the zoonotic SFV-specific nAb epitopes located within SUvar, explained in depth in section 1.3. Figure adapted from [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF][START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].
Epidemiology and zoonotic transmission of SFVs
Since the discovery of FVs in the 1950s, epidemiological surveys have been performed across continents. This section will focus on what is currently known about zoonotic SFV transmission to humans, with emphasis on prevalence in NHPs, transmission modes and risk factors. Then, I will describe investigations on the in vivo tropism and pathology. In addition to SFVs, non-primate FVs are also abundant in their respective hosts (cats, cows, and horses). Although these FVs are understudied in comparison to SFVs, human transmission of FFV and EFV has not been reported, while some rare BFV antibody-positive human cases have been documented, as reviewed by [START_REF] Kehl | Non-simian foamy viruses: molecular virology, tropism and prevalence and zoonotic/interspecies transmission[END_REF]. An epidemiological review of these FVs is beyond the scope of this thesis, but the following studies have addressed the prevalence of FFV (Ledesma-Feliciano et al., 2019; [START_REF] Winkler | Epidemiology of feline foamy virus and feline immunodeficiency virus infections in domestic and feral cats: a seroepidemiological study[END_REF]), BFV [START_REF] Okamoto | Genomic characterization and distribution of bovine foamy virus in Japan[END_REF][START_REF] Romen | Serological detection systems for identification of cows shedding bovine foamy virus via milk[END_REF] and EFV [START_REF] Kirisawa | Isolation of an Equine Foamy Virus and Sero-Epidemiology of the Viral Infection in Horses in Japan[END_REF], and they are reviewed by [START_REF] Pinto-Santini | Foamy virus zoonotic infections[END_REF].
SFV prevalence and transmission in and between NHPs
Primates are distributed over a large global area spanning Central and South America, nearly all parts of Africa, southwestern parts of the Middle East and most regions of South and Southeast Asia (Fig. I-15). The frequency of SFV has been investigated in specimens from Apes, OWMs, and NWMs. The first SFV strains were successfully isolated from several NHP species [START_REF] Gajdusek | Transmission experiments with kuru in chimpanzees and the isolation of latent viruses from the explanted tissues of affected animals[END_REF][START_REF] Hooks | Characteization and distribution of two new foamy viruses isolated from chimpanzees[END_REF][START_REF] Johnston | A second immunologic type of simian foamy virus: monkey throat infections and unmasking by both types[END_REF][START_REF] Rogers | Latent viruses in chimpanzees with experimental kuru[END_REF][START_REF] Rustigian | Infection of monkey kidney tissue cultures with viruslike agents[END_REF][START_REF] Stiles | COMPARISON OF SIMIAN FOAMY VIRUS STRAINS INCLUDING A NEW SEROLOGICAL TYPE[END_REF], showing that SFV is widespread and that prevalence rates in adult NHPs are as high as 100% in some cases [START_REF] Jones-Engel | Sensitive assays for simian foamy viruses reveal a high prevalence of infection in commensal, free-ranging Asian monkeys[END_REF]. Today, the methods used for detection are PCR and/or serology. Most SFV prevalence studies have used blood and tissue samples [START_REF] Leendertz | High prevalence, coinfection rate, and genetic diversity of retroviruses in wild red colobus monkeys (Piliocolobus badius badius) in Tai National Park, Cote d'Ivoire[END_REF]. Novel sampling techniques have allowed the study of wild animals with non-invasive methods such as feces or urine [START_REF] Liu | Molecular ecology and natural history of simian foamy virus infection in wild-living chimpanzees[END_REF], discarded plants eaten by NHPs [START_REF] Smiley Evans | Detection of viruses using discarded plants from wild mountain gorillas and golden monkeys[END_REF] and ropes hidden inside distributed food sources [START_REF] Smiley Evans | Optimization of a Novel Non-invasive Oral Sampling Technique for Zoonotic Pathogen Surveillance in Nonhuman Primates[END_REF].
SFV prevalence across different NHP species has been shown to increase with age and captivity. This could reflect a predominantly horizontal transmission route: aggressive behavior in older, sexually mature monkeys, as well as forced contacts between monkeys in captivity compared to in the wild, result in bites and wounds that facilitate SFV transmission [START_REF] Calattini | Modes of transmission and genetic diversity of foamy viruses in a Macaca tonkeana colony[END_REF][START_REF] Feeroz | Population dynamics of rhesus macaques and associated foamy virus in Bangladesh[END_REF][START_REF] Ghersi | Wide distribution and ancient evolutionary history of simian foamy viruses in New World primates[END_REF], and reviewed by [START_REF] Meiering | Historical perspective of foamy virus epidemiology and infection[END_REF].
The following synthesis will report results grouped by geographical area and by living conditions (wild, semi-free ranged or captive) that influence SFV transmission. Where it was studied, the association of age with SFV prevalence is described together with the transmission modes summarized in the following paragraphs.
Figure I-15 - A view of the global distribution of NHPs
World map highlighting regions across 90 countries that are hosts for native species of primates spanning Latin America, Africa, the Middle East and Asia. SFV is endemic in primates from all regions including Madagascar. Regions hosting NWMs, OWMs and Apes are highlighted in green, pale orange and dark red, respectively. Figure created with BioRender.com and adapted from [START_REF] Santos | Simian Foamy Viruses in Central and South America: A New World of Discovery[END_REF].
African continent - OWMs and Apes
Captive: Early studies in the US and Germany investigated the SFV prevalence in baboon and AGM breeding colonies, which was above 95% in both cases [START_REF] Blewett | Simian foamy virus infections in a baboon breeding colony[END_REF][START_REF] Schweizer | Genetic stability of foamy viruses: long-term study in an African green monkey population[END_REF]. A larger screening of >350 serum samples obtained from various NHP species held in captivity, including 43 different species of OWMs and Apes, described an SFV prevalence of 68% across all species tested [START_REF] Hussain | Screening for simian foamy virus infection by using a combined antigen Western blot assay: evidence for a wide distribution among Old World primates and identification of four new divergent viruses[END_REF].
Semi-free ranged: In semi-free ranged NHPs, SFV prevalence was found to be 83% in mandrill colonies in Gabon by our lab [START_REF] Mouinga-Ondeme | Two distinct variants of simian foamy virus in naturally infected mandrills (Mandrillus sphinx) and cross-species transmission to humans[END_REF] and 44% in baboons (Papio anubis) from Uganda ([START_REF] Smiley Evans | Optimization of a Novel Non-invasive Oral Sampling Technique for Zoonotic Pathogen Surveillance in Nonhuman Primates[END_REF]).
Wild: Studies in wild-living and wild-caught NHPs from Africa also described high SFV prevalence. Our lab has demonstrated a wide distribution of SFV in wild-caught mandrills, drills, chimpanzees and lowland gorillas (Gorilla gorilla gorilla) housed in zoos and sanctuaries in Cameroon and Gabon (Calattini et al., 2006a;[START_REF] Calattini | Natural simian foamy virus infection in wild-caught gorillas, mandrills and drills from Cameroon and Gabon[END_REF]). A survey of wild-living chimpanzee communities relied on non-invasive fecal sampling [START_REF] Liu | Molecular ecology and natural history of simian foamy virus infection in wild-living chimpanzees[END_REF]; SFV was documented at all sites and the prevalence ranged from 44% to 100%. This non-invasive method had previously been used for detection of SIV-infection, and SFV-SIV coinfections have been described [START_REF] Keele | Chimpanzee reservoirs of pandemic and nonpandemic HIV-1[END_REF][START_REF] Santiago | Simian immunodeficiency virus infection in free-ranging sooty mangabeys (Cercocebus atys atys) from the Tai Forest, Cote d'Ivoire: implications for the origin of epidemic human immunodeficiency virus type 2[END_REF]. Interestingly, the prevalence of SFV (86%) was shown to exceed that of SIV (82%) and STLV-1 (50%) in a wild population of red colobus monkeys (Piliocolobus badius badius) in Côte d'Ivoire's Taï National Park (n=54) [START_REF] Leendertz | High prevalence, coinfection rate, and genetic diversity of retroviruses in wild red colobus monkeys (Piliocolobus badius badius) in Tai National Park, Cote d'Ivoire[END_REF].
The study on wild chimpanzee populations supports horizontal transmission as the primary route because infection was not detected in young animals [START_REF] Liu | Molecular ecology and natural history of simian foamy virus infection in wild-living chimpanzees[END_REF]. Another study documented frequent mother-offspring transmissions in a colony of wild chimpanzees, in which mother-infant pairs were identified (P. t. verus) [START_REF] Blasse | Mother-offspring transmission and age-dependent accumulation of simian foamy virus in wild chimpanzees[END_REF]. In addition to vertical transmission, cases of superinfections were described supporting a continuous acquisition of diverse SFV strains throughout life via the horizontal route in Apes [START_REF] Blasse | Mother-offspring transmission and age-dependent accumulation of simian foamy virus in wild chimpanzees[END_REF]. Thus, chimpanzees are the only NHP species for which vertical transmission has been described. The proportion of primary infections due to vertical transmission is however controversial.
Importantly, cross-species transmission of SFV happens in-between NHPs in the wild, likely through aggressive contacts [START_REF] Leendertz | Interspecies transmission of simian foamy virus in a natural predator-prey system[END_REF][START_REF] Liu | Molecular ecology and natural history of simian foamy virus infection in wild-living chimpanzees[END_REF].
South and Southeast Asia - OWMs and Apes
Captive: In a captive but free-breeding colony of Macaca tonkeana housed at a primatology center in Strasbourg, France, SFV was detected in up to 89.5% of adult animals [START_REF] Calattini | Modes of transmission and genetic diversity of foamy viruses in a Macaca tonkeana colony[END_REF]. In Bangladesh, SFV prevalence was 79% in performance macaques from nomadic people (Bedey) travelling with their monkeys (n=38) [START_REF] Feeroz | Population dynamics of rhesus macaques and associated foamy virus in Bangladesh[END_REF] and 52.9% among urban performance M. fascicularis monkeys in Indonesia (n=20) [START_REF] Schillaci | Prevalence of enzootic simian viruses among urban performance monkeys in Indonesia[END_REF]. In captive Orangutans (Pongo pygmaeus) hosted in zoos in London and Zurich, the seroprevalence was 100% among tested animals (n=14) and replicative virus was isolated from two of the Apes [START_REF] Mcclure | Isolation of a new foamy retrovirus from orangutans[END_REF].
No evidence for vertical transmission of SFV was observed in the breeding colony of M. tonkeana in Strasbourg [START_REF] Calattini | Modes of transmission and genetic diversity of foamy viruses in a Macaca tonkeana colony[END_REF] and most SFV-infected mother-offspring pairs were infected with distinct strains. Moreover, SFV seropositivity was extremely rare in young macaques as described in animals before their sexual maturity [START_REF] Jones-Engel | Sensitive assays for simian foamy viruses reveal a high prevalence of infection in commensal, free-ranging Asian monkeys[END_REF].
Semi-free ranged: SFV prevalence was 39% in a rhesus macaque (Macaca mulatta) population (n=74) living semi-free ranged in a zoo in Yunnan, China [START_REF] Huang | Simian foamy virus prevalence in Macaca mulatta and zookeepers[END_REF].
Wild: Macaques represent the most studied species of NHPs to date, as these monkeys live in close contact with humans across Asia, Northern Africa and Gibraltar (Fig. I-15). The total SFV prevalence rate was 92% (n=118) in adult free-ranging Asian monkeys spanning five macaque taxa (M. mulatta, fascicularis, assamensis, nemestrina and arctoides) sampled in Thailand and Singapore [START_REF] Jones-Engel | Sensitive assays for simian foamy viruses reveal a high prevalence of infection in commensal, free-ranging Asian monkeys[END_REF]. All animals above three years old were infected. SFV was also detected in 38/39 (97.4%) rhesus macaques living in and around temples in Nepal [START_REF] Engel | Risk assessment: A model for predicting cross-species transmission of simian foamy virus from macaques (M. fascicularis) to humans at a monkey temple in Bali, Indonesia[END_REF]. This contrasts with a later study which found SFV in only 18% of free-ranging M. mulatta in Nepal ([START_REF] Smiley Evans | Optimization of a Novel Non-invasive Oral Sampling Technique for Zoonotic Pathogen Surveillance in Nonhuman Primates[END_REF]). In Bangladesh, SFV prevalence was 94.4% in free-ranging macaques (n=126) [START_REF] Feeroz | Population dynamics of rhesus macaques and associated foamy virus in Bangladesh[END_REF], 88% in adult M. sylvanus macaques (n=79) in Gibraltar [START_REF] Engel | Unique pattern of enzootic primate viruses in Gibraltar macaques[END_REF], 98% among free-living Indian rhesus macaques (n=35) [START_REF] Nandi | Transmission of infectious viruses in the natural setting at human-animal interface[END_REF] and 56.5% in long-tailed M. fascicularis (n=649) across Thailand [START_REF] Kaewchot | Zoonotic pathogens survey in free-living long-tailed macaques in Thailand[END_REF].
Central and South America -NWMs
Lastly, SFVs have also been shown to have a wide distribution in neotropical monkeys inhabiting the Latin Americas, in contrast to SIVs and STLVs, which do not circulate in NWMs and are exclusively found in Asian and/or African NHP species [START_REF] Santos | Simian Foamy Viruses in Central and South America: A New World of Discovery[END_REF]. Ghersi et al. demonstrated SFV in 11/15 genera from captive and wild-caught NWMs in the US and Peru, reaching prevalences of 45.2% and 37.5%, respectively [START_REF] Ghersi | Wide distribution and ancient evolutionary history of simian foamy viruses in New World primates[END_REF]. Those data expanded results from previous reports from Brazil [START_REF] Muniz | An expanded search for simian foamy viruses (SFV) in Brazilian New World primates identifies novel SFV lineages and host age-related infections[END_REF][START_REF] Muniz | Identification and characterization of highly divergent simian foamy viruses in a wide range of new world primates from Brazil[END_REF].
The SFV prevalence in that study ranged from 0% to 100% across distinct NWM species, although sampling numbers were low for some groups. Moreover, seropositivity was observed in NWMs held at a rescue center (18.9%) and at illegal trade markets (42.9%) in Peru [START_REF] Ghersi | Wide distribution and ancient evolutionary history of simian foamy viruses in New World primates[END_REF].
Phylogenetic trees of the sequenced strains also showed strong co-speciation of SFV with their NWM hosts, as known for OWMs, and provided evidence for cross-species transmission during evolution, even across genera [START_REF] Ghersi | Wide distribution and ancient evolutionary history of simian foamy viruses in New World primates[END_REF].
SFV in vivo tropism and pathology in NHPs
SFV has been shown by many groups to latently infect most tissues in vivo in naturally infected NHPs, supporting the broad in vitro tropism observed. One of the first studies detected SFV DNA in all samples of a broad range of tissues from AGMs naturally infected with SFVagm (n=4) (Falcone et al., 1999a). Samples included tissue from less commonly tested sites such as bone marrow, brain, testes, prostate and uterus. SFV RNA was only detected in one monkey, in the oral mucosa (Falcone et al., 1999a). Following this study, Murray et al. were the first to confirm by in situ hybridization that the primary site of viral replication and presence of viral RNA in vivo is the oral cavity, more specifically a small niche of superficial epithelial cells [START_REF] Murray | Replication in a superficial epithelial cell niche explains the lack of pathogenicity of primate foamy virus infections[END_REF]. These results were proposed to explain why SFV-infection is largely non-pathogenic in NHPs, since these epithelial cells are short-lived and shed into the saliva. The data also explain the efficient transmission of SFV between monkeys and support bites as a major horizontal route of infection. Moreover, a later study on rhesus macaques in Bangladesh (n=61) demonstrated a strong correlation between viral strains obtained from blood cells and buccal mucosal samples, suggesting that the actively transcribing, and likely transmitting, viruses in the oral mucosa are also those integrated throughout the body [START_REF] Soliven | Simian foamy virus infection of rhesus macaques in Bangladesh: relationship of latent proviruses and transcriptionally active viruses[END_REF].
In blood from infected NHPs, SFV DNA was detectable in T cells, B cells, monocytes and polymorphonuclear leucocytes (PMNL) from chimpanzees (n=4) and AGMs (n=9) (Table I-4) [START_REF] Von Laer | Lymphocytes are the major reservoir for foamy viruses in peripheral blood[END_REF]. In vitro, PFV-infection induces CPEs in chronically HTLV-1 and HIV-1 infected T cell lines [START_REF] Mikovits | In vitro infection of primary and retrovirus-infected human leukocytes by human foamy virus[END_REF]. However, no transactivation of the PFV LTR by HTLV-1 Tax or vice versa was observed [START_REF] Keller | Characterization of the transcriptional trans activator of human foamy retrovirus[END_REF][START_REF] Mikovits | In vitro infection of primary and retrovirus-infected human leukocytes by human foamy virus[END_REF]. This is in contradiction with more recent data showing that HTLV-1 Tax transactivates the PFV LTR in vitro [START_REF] Alais | STLV-1 co-infection is correlated with an increased SFV proviral load in the peripheral blood of SFV/STLV-1 naturally infected non-human primates[END_REF]. In line with this, SFV-STLV-1 dual-infected baboons had significantly higher proviral DNA loads in their PBMCs compared to SFV mono-infected animals [START_REF] Alais | STLV-1 co-infection is correlated with an increased SFV proviral load in the peripheral blood of SFV/STLV-1 naturally infected non-human primates[END_REF].
To date, SFV-infection has not been directly associated with overt pathology in naturally infected NHPs. However, one study demonstrated an acceleration in progression towards AIDS-like disease and death in experimentally SIV-infected rhesus macaques with chronic SFV-infection compared to SFV negative animals [START_REF] Choudhary | Influence of naturally occurring simian foamy viruses (SFVs) on SIV disease progression in the rhesus macaque (Macaca mulatta) model[END_REF]. In SFV mono- and SFV-SIV co-infected animals, the presence of SFV RNA was tested across several tissues: buccal epithelium, pharyngeal epithelium, tongue, tonsils, lung, small intestine, mesenteric lymph node, parotid salivary glands, colon and blood (PBMCs). SFV viral transcripts were abundant in lung, tongue, tonsils, buccal and pharyngeal epithelium in both groups; however, SFV viral RNA was also found in the small intestine and lymph nodes of SIV co-infected animals, suggesting an expansion of tissue tropism [START_REF] Murray | Expanded tissue targets for foamy virus replication with simian immunodeficiency virus-induced immunosuppression[END_REF]. These results emphasize the importance of understanding SFV-SIV or SFV-HIV co-infections in humans, which have been reported [START_REF] Switzer | Coinfection with HIV-1 and simian foamy virus in West Central Africans[END_REF][START_REF] Switzer | Dual Simian Foamy Virus/Human Immunodeficiency Virus Type 1 Infections in Persons from Cote d'Ivoire[END_REF] and reviewed by [START_REF] Murray | Simian Foamy Virus Co-Infections[END_REF].
Zoonotic SFV infections
The prevalence of SFV in the human population has been investigated in 23 principal studies to date. Taking into account all published and reviewed data [START_REF] Gessain | HTLV-3/4 and simian foamy retroviruses in humans: discovery, epidemiology, cross-species transmission and molecular virology[END_REF][START_REF] Pinto-Santini | Foamy virus zoonotic infections[END_REF], including recent reports [START_REF] Halbrook | Human T-cell lymphotropic virus type 1 transmission dynamics in rural villages in the democratic republic of the congo with high nonhuman primate exposure[END_REF][START_REF] Muniz | Zoonotic infection of Brazilian primate workers with New World simian foamy virus[END_REF][START_REF] Switzer | Dual Simian Foamy Virus/Human Immunodeficiency Virus Type 1 Infections in Persons from Cote d'Ivoire[END_REF], more than 100 zoonotic SFV-infections have been documented worldwide (Fig. I-16). Reported cases include occupationally and accidentally infected individuals [START_REF] Heneine | Specific binding of recombinant foamy virus envelope protein to host cells correlates with susceptibility to infection[END_REF][START_REF] Schweizer | Simian foamy virus isolated from an accidentally infected human individual[END_REF] and a single case of accidental laboratory infection reported in Germany [START_REF] Von Laer | Lymphocytes are the major reservoir for foamy viruses in peripheral blood[END_REF].

Figure I-16 - World map highlighting documented zoonotic SFV-infections in humans, through occupational (blue) or natural (green) settings. The first isolated strain (HFV/PFV) is highlighted in red. Circular symbols are placed at the approximate location of reported SFV cases and sized according to the number of cases reported for the region. Country and region are given in boxes, with details of the studies conducted in the region shown below according to infection setting. Relevant additional information for individual studies, such as retroviral co-infections or the proportion of cases in subdivided groups, is shown in parentheses. Studies with no PCR-documented human cases (NWM SFV) are highlighted in dashed boxes. The total number of reported cases in the respective settings, according to testing method or retroviral co-infection, is detailed in Table I-3.

Table I-3 legend: The PFV strain isolated from cell culture was included as a PCR pos case. ND, not determined; Gen., general; STD, sexually transmitted disease; TB, tuberculosis. Setting colors according to Figure I-16. Table adapted from (Pinto-Santini et al., 2017).
Zoonotic cases in West and Central Africa
Our lab has performed surveys on humans exposed to NHPs and their body fluids over the past two decades in Central Africa, in collaboration with the Centre Pasteur du Cameroun in Yaoundé, Cameroon, and the Centre International de Recherche Médicale de Franceville in Gabon.
These tested individuals belonged to tribes of Pygmies and Bantus and were living in rural settlements of rainforest regions in southern Cameroon. Individuals were screened by WB;
samples with positive or indeterminate serology were then tested by two PCR assays, amplifying the LTR and IN regions. PCR assays were also carried out for all individuals who reported contact with NHPs, leading to the identification of four PCR-positive individuals who tested negative by WB.
In total, 74 individuals tested positive in at least one PCR assay. In these studies, only individuals with a positive PCR were considered as infected [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF][START_REF] Calattini | Multiple retroviral infection by HTLV type 1, 2, 3 and simian foamy virus in a family of Pygmies from Cameroon[END_REF][START_REF] Calattini | Simian foamy virus transmission from apes to humans, rural Cameroon[END_REF][START_REF] Mouinga-Ondeme | Two distinct variants of simian foamy virus in naturally infected mandrills (Mandrillus sphinx) and cross-species transmission to humans[END_REF]Mouinga-Ondémé et al., 2012).
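As an aside, the case definition applied in these surveys (serological screening followed by PCR, with only PCR-positive individuals counted as infected) amounts to a simple decision rule. The sketch below is purely illustrative; the function name and its inputs are hypothetical and do not reproduce the actual assays of the cited studies.

```python
def classify_sfv_status(wb_result: str, ltr_pcr_positive: bool, int_pcr_positive: bool) -> str:
    """Illustrative case definition: only PCR-positive individuals are counted as infected,
    whatever the Western blot (WB) result; WB reactivity alone is reported separately."""
    if ltr_pcr_positive or int_pcr_positive:
        return "infected (PCR-confirmed)"
    if wb_result in ("positive", "indeterminate"):
        return "seroreactive, not PCR-confirmed"
    return "not infected"

# Example: a WB-negative hunter with a positive LTR PCR is still counted as infected.
print(classify_sfv_status("negative", ltr_pcr_positive=True, int_pcr_positive=False))
```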
The majority of the individuals reported bites by NHPs during hunting activities and were infected with SFV strains derived from gorillas and chimpanzees. Infection by SFV from monkeys was less frequent (Mouinga-Ondeme and [START_REF] Mouinga-Ondeme | Simian foamy virus in non-human primates and crossspecies transmission to humans in Gabon: an emerging zoonotic disease in central Africa?[END_REF]. Our lab has also shown that a severe bite from an NHP is strongly associated with acquisition of SFV [START_REF] Filippone | A Severe Bite From a Nonhuman Primate Is a Major Risk Factor for HTLV-1 Infection in Hunters From Central Africa[END_REF]. In that study, 56.5% (13/23) of SFV-infected individuals were co-infected with STLV-1/HTLV-1, which could have been acquired at the same time as their SFV-infection (Fig. I-16, Table I-3) [START_REF] Filippone | A Severe Bite From a Nonhuman Primate Is a Major Risk Factor for HTLV-1 Infection in Hunters From Central Africa[END_REF].
Moreover, five studies have searched for SFV-infection by testing biobanks constituted for HIV or Monkeypox surveys in urban or rural areas of Cameroon, Côte d'Ivoire and DRC [START_REF] Halbrook | Human T-cell lymphotropic virus type 1 transmission dynamics in rural villages in the democratic republic of the congo with high nonhuman primate exposure[END_REF][START_REF] Switzer | Coinfection with HIV-1 and simian foamy virus in West Central Africans[END_REF][START_REF] Switzer | Novel simian foamy virus infections from multiple monkey species in women from the Democratic Republic of Congo[END_REF][START_REF] Switzer | Dual Simian Foamy Virus/Human Immunodeficiency Virus Type 1 Infections in Persons from Cote d'Ivoire[END_REF][START_REF] Wolfe | Naturally acquired simian retrovirus infections in central African hunters[END_REF]. Seroprevalence ranged from 0.2% to 0.91%. These studies also reported four cases of SFV-HIV-1 co-infection (2/4 PCR confirmed) [START_REF] Switzer | Coinfection with HIV-1 and simian foamy virus in West Central Africans[END_REF][START_REF] Switzer | Dual Simian Foamy Virus/Human Immunodeficiency Virus Type 1 Infections in Persons from Cote d'Ivoire[END_REF] and three cases of SFV-HTLV-1 dual-infection (3/3 PCR confirmed) [START_REF] Halbrook | Human T-cell lymphotropic virus type 1 transmission dynamics in rural villages in the democratic republic of the congo with high nonhuman primate exposure[END_REF]. Interestingly, the SFV-infected population living in rural areas of the DRC is distinct from the one described in Gabon and Cameroon: most were women (12/16) who had contact with NHPs or their body fluids, but none reported wounds [START_REF] Switzer | Novel simian foamy virus infections from multiple monkey species in women from the Democratic Republic of Congo[END_REF].
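To give an idea of the statistical uncertainty attached to such low seroprevalence figures, a binomial confidence interval can be computed from the number of reactive samples and the number tested. The sketch below is generic; the counts are invented and do not come from the cited surveys.

```python
import math

def wilson_ci(positives: int, tested: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = positives / tested
    denom = 1 + z**2 / tested
    center = (p + z**2 / (2 * tested)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / tested + z**2 / (4 * tested**2))
    return p, center - half, center + half

# Hypothetical example: 10 seroreactive samples out of 2500 tested.
p, low, high = wilson_ci(10, 2500)
print(f"seroprevalence = {p:.2%}, 95% CI [{low:.2%}, {high:.2%}]")
```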
Zoonotic cases in South and Southeast Asia
Human SFV-infection was reported in four epidemiological surveys in South and Southeast Asia [START_REF] Craig | A Seminomadic Population in Bangladesh with Extensive Exposure to Macaques Does Not Exhibit High Levels of Zoonotic Simian Foamy Virus Infection[END_REF][START_REF] Engel | Zoonotic simian foamy virus in Bangladesh reflects diverse patterns of transmission and co-infection[END_REF][START_REF] Jones-Engel | Primate-to-human retroviral transmission in Asia[END_REF][START_REF] Jones-Engel | Diverse contexts of zoonotic transmission of simian foamy viruses in Asia[END_REF].
Nine SFV WB pos cases, including four PCR confirmed, were found among temple workers in close contact with macaque monkeys living around these religious buildings in Bali and Nepal [START_REF] Jones-Engel | Primate-to-human retroviral transmission in Asia[END_REF][START_REF] Jones-Engel | Diverse contexts of zoonotic transmission of simian foamy viruses in Asia[END_REF]. Similarly, an SFVmac frequency of up to 5% was shown in villagers from Bangladesh [START_REF] Engel | Zoonotic simian foamy virus in Bangladesh reflects diverse patterns of transmission and co-infection[END_REF]. In contrast, SFV could not be confirmed in seminomadic people (n=45) from an ethnic group called Bedey who own performance monkeys in Bangladesh and generally have extensive exposure to macaques [START_REF] Craig | A Seminomadic Population in Bangladesh with Extensive Exposure to Macaques Does Not Exhibit High Levels of Zoonotic Simian Foamy Virus Infection[END_REF]. The resistance mechanism to SFV infection, if any, has not been elucidated.
One study built a model using these data and predicted that SFVmac would be transmitted to approximately six individuals out of every 1000 visitors at religious temples in Bali, Indonesia [START_REF] Engel | Risk assessment: A model for predicting cross-species transmission of simian foamy virus from macaques (M. fascicularis) to humans at a monkey temple in Bali, Indonesia[END_REF].
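The published model is not reproduced here, but the logic behind such a figure can be illustrated by combining the probability of being bitten during a temple visit with the probability of SFV transmission given a bite. The sketch below uses hypothetical placeholder values chosen only so that the product falls in the reported order of magnitude; they are not the parameters estimated by Engel et al.

```python
# Minimal per-visitor risk sketch (illustrative; all parameter values are hypothetical).
p_bite_per_visit = 0.05         # assumed probability of a macaque bite during one visit
p_transmission_per_bite = 0.12  # assumed probability of SFV transmission given a bite

p_infection_per_visit = p_bite_per_visit * p_transmission_per_bite
expected_cases_per_1000_visitors = 1000 * p_infection_per_visit
print(f"expected SFV transmissions per 1000 visitors: {expected_cases_per_1000_visitors:.1f}")
```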
Zoonotic cases of NWM SFV
Human SFV-infections from NWMs have also been reported, primarily in zookeepers and people working in primate research centers [START_REF] Muniz | Zoonotic infection of Brazilian primate workers with New World simian foamy virus[END_REF][START_REF] Stenbak | New World simian foamy virus infections in vivo and in vitro[END_REF]. Interestingly, these cases were based on serological evidence only and could not be confirmed by PCR.
Moreover, one report demonstrated potential sero-reversion in three cases over a 2-3 year period [START_REF] Muniz | Zoonotic infection of Brazilian primate workers with New World simian foamy virus[END_REF]. Thus, human NWM SFV infection may be cleared or fully silenced. Those results suggest that the persistence and/or replication levels in humans depend on the relatedness of the SFV host species with humans, with more frequent persistence of SFVs from Apes and OWMs compared to NWM SFVs.
Taking into account the number of people exposed to NHPs in many places of the world and the absence of SFV screening policies (outside of the research studies described above), it is highly probable that a large number of people are living with an undiagnosed SFV-infection.
Furthermore, SFV cross-species transmissions are currently ongoing worldwide.
Despite >100 individual zoonotic SFV cases documented, no human-to-human transmission of SFV has been reported to date. Indeed, several of the abovementioned studies investigated SFV sero-reactivity in close relatives of zoonotic and accidentally SFV-infected cases (Betsem et al., 2011;[START_REF] Boneva | Clinical and virological characterization of persistent human infection with simian foamy viruses[END_REF][START_REF] Heneine | Specific binding of recombinant foamy virus envelope protein to host cells correlates with susceptibility to infection[END_REF][START_REF] Schweizer | Simian foamy virus isolated from an accidentally infected human individual[END_REF][START_REF] Switzer | Novel simian foamy virus infections from multiple monkey species in women from the Democratic Republic of Congo[END_REF]).
Only one sample from a relative had a WB pos SFV test, which was confirmed PCR neg, supporting a general lack of secondary SFV transmission [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF]. In line with this, no evidence of SFV transmission was found in four recipients of transfused blood from an accidentally infected donor [START_REF] Boneva | Simian foamy virus infection in a blood donor[END_REF].
Pathology and clinical signs associated with SFV-infection in humans
Our lab has performed the first thorough matched case-control studies on clinical signs, blood tests and immune status of Cameroonian hunters infected with zoonotic gorilla SFVs [START_REF] Buseyne | Clinical Signs and Blood Test Results Among Humans Infected With Zoonotic Simian Foamy Virus: A Case-Control Study[END_REF][START_REF] Gessain | Case-control study of the immune status of humans infected with zoonotic gorilla simian foamy viruses[END_REF]. Only healthy individuals were included, and the frequency of clinical signs did not differ between infected cases and controls. Levels of some hematological and biochemical parameters differed between cases and controls. The most pronounced difference was the lower level of hemoglobin in SFV-infected cases. Mild to moderate anemia was observed in 58% of cases and 17% of controls matched for age and ethnicity. Urea, creatinine, protein and lactate dehydrogenase were higher than in controls [START_REF] Buseyne | Clinical Signs and Blood Test Results Among Humans Infected With Zoonotic Simian Foamy Virus: A Case-Control Study[END_REF]. Regarding plasma biomarkers and blood-cell phenotypes, gorilla SFV-infected cases had higher levels of soluble scavenger receptor CD163 in the plasma and higher levels of CD4+ T cells expressing programmed death receptor 1 (PD-1) compared to matched controls. Cases had a significantly higher percentage of CD8+ T cells, while other immune cells such as B cells, natural killer (NK) cells and CD4+ T cells were unchanged [START_REF] Gessain | Case-control study of the immune status of humans infected with zoonotic gorilla simian foamy viruses[END_REF]. Thus, chronic asymptomatic SFV-infection is associated with T cell differentiation as well as monocyte activation and reduced hemoglobin levels.
In addition to work from our lab, one early study reported on clinical and hematological status of nine accidentally SFV-infected cases [START_REF] Boneva | Clinical and virological characterization of persistent human infection with simian foamy viruses[END_REF]. Clinical laboratory testing was normal or as expected according to co-morbidities such as diabetes. On the other hand, hematological abnormalities were observed for three individuals which included one case with low eosinophil count, one case with thrombocytopenia and one case with mild thrombocytopenia and NK-cell lymphocytosis above upper limit at three independent time points [START_REF] Boneva | Clinical and virological characterization of persistent human infection with simian foamy viruses[END_REF]. No symptoms related to the clinical status were reported.
In vivo tropism of SFVs in humans
Three studies investigated the in vivo SFV tropism using PBMC samples from SFV-infected humans [START_REF] Boneva | Clinical and virological characterization of persistent human infection with simian foamy viruses[END_REF][START_REF] Rua | In vivo cellular tropism of gorilla simian foamy virus in blood of infected humans[END_REF][START_REF] Von Laer | Lymphocytes are the major reservoir for foamy viruses in peripheral blood[END_REF]. Our lab detected SFV DNA in PBMC-derived blood cells from SFVggo-infected Central African hunters, primarily in T and B cells, and only rarely in CD14+ monocytes and CD56+ NK cells (Table I-4 and Fig. I-17) [START_REF] Rua | In vivo cellular tropism of gorilla simian foamy virus in blood of infected humans[END_REF]. In accidentally infected humans (n=7) working at zoos and primate centers, SFV DNA was detected by PCR in 19/19 PBMC samples, 2/5 urine samples and 1/1 semen sample taken over a 5-year period [START_REF] Boneva | Clinical and virological characterization of persistent human infection with simian foamy viruses[END_REF]. One study readily detected SFV in CD8+ T cells from two SFV-infected humans but failed to detect it in other cell populations. In contrast, SFV was detected in T cells, B cells, NK cells and PMNLs in naturally infected NHPs (n=13) from the same study [START_REF] Von Laer | Lymphocytes are the major reservoir for foamy viruses in peripheral blood[END_REF].

Figure I-17 - SFV DNA copies in PBMC populations isolated from Central African hunters (n=11) infected with SFVggo strains. SFV DNA loads below the limit of detection (LOD) were arbitrarily set to two SFV DNA copies/10^5 cells, i.e. half the LOD. Figure from [START_REF] Rua | In vivo cellular tropism of gorilla simian foamy virus in blood of infected humans[END_REF].
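The convention mentioned in the legend above, assigning half the limit of detection to samples below the LOD, is a common way of handling left-censored viral load measurements before computing summary statistics. The sketch below only illustrates this substitution; the LOD value and the load values are invented.

```python
import statistics

LOD = 4.0  # hypothetical limit of detection, in SFV DNA copies per 10^5 cells

# Hypothetical proviral loads; None marks samples below the limit of detection.
raw_loads = [35.0, 12.0, None, 150.0, None, 8.0]

# Left-censored values are replaced by half the LOD before summarizing.
imputed = [x if x is not None else LOD / 2 for x in raw_loads]

print(f"geometric mean load: {statistics.geometric_mean(imputed):.1f} copies/10^5 cells")
```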
Since SFV replicates in the oral cavity of infected NHPs, saliva samples were examined in humans. SFV DNA was detected in saliva and throat swabs from accidentally infected zookeepers and African hunters, although at lower levels compared to PBMCs [START_REF] Boneva | Clinical and virological characterization of persistent human infection with simian foamy viruses[END_REF][START_REF] Huang | Simian foamy virus prevalence in Macaca mulatta and zookeepers[END_REF][START_REF] Rua | Viral latency in blood and saliva of simian foamy virusinfected humans[END_REF]. All these studies focused on SFV DNA, indicative of a latent infection. In contrast, viral RNA has not been detectable in the human samples tested so far, suggestive of a primarily latent infection and potential immune control in SFV-infected humans [START_REF] Rua | Viral latency in blood and saliva of simian foamy virusinfected humans[END_REF].
In vitro, PFV was shown to infect primary human CD4+ T lymphocytes, monocytes and brain-derived microglial cells, but poorly infected primary CD8+ T cells [START_REF] Mikovits | In vitro infection of primary and retrovirus-infected human leukocytes by human foamy virus[END_REF].
Moreover, whereas strong cytopathic characteristics were observed for most cell lines and primary cells productively infected in vitro, monocyte-derived macrophages did not show cytopathology upon PFV-infection [START_REF] Mikovits | In vitro infection of primary and retrovirus-infected human leukocytes by human foamy virus[END_REF]. The lack of productive in vitro PFV-infection of CD8+ T lymphocytes contrasts with the detection of SFV DNA in CD8+ T cells from Cameroonian hunters and occupationally infected humans, including one human case who acquired PFV itself from a lab (Fig. I-17) [START_REF] Rua | In vivo cellular tropism of gorilla simian foamy virus in blood of infected humans[END_REF][START_REF] Von Laer | Lymphocytes are the major reservoir for foamy viruses in peripheral blood[END_REF]. In summary, zoonotic SFV-infection of humans does not cause disease and is not transmitted to close relatives, suggesting that the virus is under a potential immune control by the host. This immune control prevents the further transmission and subsequent diffusion of SFV into the human population on a larger scale. This scenario is in strong contrast to SIVs and STLVs, which efficiently generated epidemic (HIV-1) and endemic (HTLV-1/2) outbreaks in the human population from related simian reservoirs [START_REF] Gessain | HTLV-3/4 and simian foamy retroviruses in humans: discovery, epidemiology, cross-species transmission and molecular virology[END_REF].
Immune responses to FVs
The immune response against retroviruses is characterized by both innate and adaptive immunity, the latter including cell-mediated and humoral responses. The innate immune response constitutes the first line of defense against a foreign pathogen, while the adaptive immune response is raised later, is antigen-specific and persists over time [START_REF] Sáez-Cirión | Immune Responses to Retroviruses[END_REF]. In this final introduction section, an overview of immunity to retroviruses will be given in line with what is known for FVs, including a detailed description of nAbs and epitopes.
Overview of immune responses to retroviruses
Type I and type II interferons (IFNs) signal through their receptors, leading to the induction of IFN-stimulated genes (ISGs) encoding many of the restriction factors that mediate the early innate defense [START_REF] Platanias | Mechanisms of type-I-and type-II-interferon-mediated signalling[END_REF].
Innate sensing
The early innate immune response relies on the recognition of pathogen-associated molecular patterns (PAMPs) by pattern recognition receptors (PRRs). Among these, Toll-like receptors (TLRs) 3, 7, 8 and 9 are exclusively located intracellularly in endosomes and are specialized in sensing foreign nucleic acids. These TLRs are the primary receptors to recognize viral PAMPs. TLR9 senses viral DNA while TLR7/8 and -3 recognize ssRNA and dsRNA, respectively [START_REF] Diebold | Innate antiviral responses by means of TLR7-mediated recognition of single-stranded RNA[END_REF][START_REF] Pang | The invariant arginine within the chromatin-binding motif regulates both nucleolar localization and chromatin binding of Foamy virus Gag[END_REF]. TLR2 and -4 are expressed extracellularly on the cell surface membrane of innate immune cells and recognize PAMPs unique to microbes and not produced by the host, such as viral proteins [START_REF] Barbalat | Toll-like receptor 2 on inflammatory monocytes induces type I interferon in response to viral but not bacterial ligands[END_REF]. Engagement of PRRs leads to the production of interferons, cytokines and proinflammatory molecules, activation of complement cascades and activation of cellular immunity facilitating the induction of apoptosis.
Restriction factors
Upon sensing of viral PAMPs by PRRs and/or production of IFNs, a broad variety of intrinsic host factors are induced, some with antiviral functions. Among these are some of the best-characterized antiviral restriction factors. Many viruses, including retroviruses, have evolved mechanisms to counteract these cellular host factors through adaptation of their accessory proteins [START_REF] Kirchhoff | Immune evasion and counteraction of restriction factors by HIV-1 and other primate lentiviruses[END_REF].
Cellular effectors of innate immunity
The cellular component of innate immunity is composed of a wide range of cells. Some of the most crucial cell types include granulocytes such as neutrophils and mast cells; monocytes, the precursors of DCs and macrophages, which constitute antigen-presenting cells (APCs); and innate lymphoid cells (ILCs), which include NK cells among other subtypes. As described above, these cellular effectors of the innate immune response are important for the sensing of PAMPs through binding to the cellular PRRs and subsequent induction of antiviral restriction factors and IFNs [START_REF] Rich | The Human Immune Response[END_REF]. The main driver of type I IFN production is a subtype of DCs termed plasmacytoid DCs (pDCs), which are highly activated during viral infections [START_REF] Malleret | Primary infection with simian immunodeficiency virus: plasmacytoid dendritic cell homing to lymph nodes, type I interferon, and immune suppression[END_REF]. The innate immune cells also harbor additional functions of great importance, including killing and phagocytosis of infected or damaged cells. Neutrophils, monocytes and macrophages are the primary cells with such phagocytic functions [START_REF] Rich | The Human Immune Response[END_REF].
ILCs such as NK cells are innate lymphoid cells that do not express diverse, rearranged and clonally distributed antigen-specific receptors as seen for T and B lymphocytes. Instead, they express their germ-line encoded PRRs. In addition, NK cells express killer Ig-like receptors (KIRs) that can recognize major histocompatibility complex (MHC) class I molecules and peptides presented by non-classical HLA-E. These KIRs are categorized into inhibiting and activating receptors based on the signal cascades induced by the immunoreceptor tyrosine-based inhibitory and activating motifs (ITIMs and ITAMs, respectively) located at their cytoplasmic tails, reviewed by [START_REF] Saunders | Immunoglobulin prophylaxis against milkborne transmission of human T cell leukemia virus type I in rabbits[END_REF].
Interactions between innate and adaptive immunity
Beyond serving as an early defense against invading pathogens, innate immune cells provide a link between the innate and adaptive immunity. APCs like DCs and macrophages can release stimulatory cytokines and, through their MHC class I and II molecules, present processed foreign peptides to cytotoxic T lymphocytes (CTLs) and T helper (Th) cells, respectively. Upon stimulation, NK cells and other ILCs are characterized by the production of different stimulatory cytokines including IFN-γ and tumor necrosis factor (TNF) [START_REF] Rich | The Human Immune Response[END_REF]. Moreover, NK cells, monocytes and neutrophils express a variety of crystallizable fragment (Fc) receptors (FcRs), including FcRs able to bind the Fc portion of IgGs with distinct affinities. This interaction can occur on free as well as cell-bound antigens coated by plasma IgG antibodies, leading to degranulation and release of molecules like perforin and granzyme B by NK cells [START_REF] Bruhns | Mouse and human FcR effector functions[END_REF]. In the case of cell-bound or cell-expressed antigens (such as on an infected cell), the release of these cytotoxic molecules facilitates cytolytic destruction and elimination of the target cell in a process termed antibody-dependent cellular cytotoxicity (ADCC). For monocytes and neutrophils, this FcR-antibody-antigen complex can mediate antibody-dependent cellular phagocytosis (ADCP), while the fixation of complement on antibody-bound antigen can facilitate antibody-mediated complement activation to destroy target cells, reviewed by (Lu et al., 2018). In a similar fashion, monocytes and macrophages also express complement receptors recognizing complement factor C3b opsonized on cell-free or cell-expressed antigens. This interaction also mediates phagocytosis of the foreign antigens and infected cells [START_REF] Rich | The Human Immune Response[END_REF].
Adaptive immunity
The adaptive immune response is acquired during acute infection and is highly specific for antigens; memory cells and effector molecules persist after clearance of infection to provide protection against subsequent challenge by the same pathogens. The adaptive immune cells are CD4+ and CD8+ T lymphocytes and B lymphocytes.
T lymphocytes
T cells principally recognize peptide-MHC complexes presented by APCs instead of the antigen in its native conformation, although some T cell subpopulations like γδ T cells bind to diverse but intact non-peptide antigens including proteins, glycolipids as well as other small molecules.
This recognition is mediated by the T cell receptor (TCR), which is composed of variable (V) and constant (C) Ig domains forming a heterodimer of an α- and a β-chain or a γ- and a δ-chain, respectively. TCR signal transduction is dependent on TCR association with the multimeric CD3 complex. The diversity of TCRs is generated by genetic recombination of the V domain of the TCR, by rearrangement of gene segments designated V and J (joining) for α- and γ-chains, and gene segments V, D (diversity) and J for β- and δ-chains. This TCR-peptide/MHC interaction and subsequent co-stimulatory interactions lead to proliferation and differentiation of the naïve T cell into distinct effector T cell subsets [START_REF] Rich | The Human Immune Response[END_REF]. The role of CD8+ CTLs is similar to that of NK cells. They mediate cytolytic lysis of infected cells by secretion of perforin, which creates pores in the target cell membrane, and granzymes that can passively diffuse into the cytosol to induce apoptosis through caspase cascades [START_REF] Fevrier | CD4+ T cell depletion in human immunodeficiency virus (HIV) infection: role of apoptosis[END_REF]. Moreover, the activity of CTLs is enhanced by IFN-γ. The importance of CD8+ T cells during retroviral infections is well highlighted by studies demonstrating that early, persistent and specific CTL responses during primary HIV-1 infection significantly correlate with plasma viral loads and CD4+ T cell counts [START_REF] Streeck | Human immunodeficiency virus type 1-specific CD8+ T-cell responses during primary infection are major determinants of the viral set point and loss of CD4+ T cells[END_REF]. Of note, no data currently exist on SFV antigen-specific T cell responses in infected humans or NHPs, which remains a high priority for future work on immune control of zoonotic SFV-infections.
B lymphocytes
The B cell receptor (BCR) consists of four Ig chains: two identical heavy (H) and two identical light (L) chains. The light chain exists as two isotypes, kappa (κ) and lambda (λ). These heavy and light chains have two or more domains, each consisting of two sandwiched β-pleated sheets linked by a disulfide bond. As for the TCR, these domains are grouped as either constant or variable. Each chain contains one N-ter variable domain and a varying number of C-ter constant domains. In addition, the chains together form two functional domains separated by a hinge region.
These functional domains include an antigen-binding fragment (Fab) and the Fc region. The Fab is formed by the entire light chain (VL+CL) and the VH and CH1 domains of the heavy chain.
The Fc region is composed of a varying number of CH domains and is linked to the plasma membrane in the BCR due to alternative splicing of the Ig transcript at its 3' end [START_REF] Rich | The Human Immune Response[END_REF]. On secreted antibodies, the Fc region can bind to cellular sensors that deploy host-mediated effector functions as described above (Fig. I-19). The BCR also engages noncovalently with the heterodimeric complex Igα:Igβ (CD79α:CD79β), essential for signal transduction. In contrast to the TCR, the BCR is able to bind virtually any foreign molecule in its native fold. The binding interface is mediated by the VL/VH domains and is formed by three hypervariable loop regions termed complementarity-determining regions (CDRs) interspaced between four stable framework (FR) sequences in both V domains [START_REF] Rich | The Human Immune Response[END_REF]. Upon antigen encounter, activated B cells can enter germinal centers (GCs), where they undergo affinity maturation [START_REF] Mesin | Germinal Center B Cell Dynamics[END_REF][START_REF] Victora | Germinal center dynamics revealed by multiphoton microscopy with a photoactivatable fluorescent reporter[END_REF]. In this process, co-engagement with TFH cells is highly important for the support of B cell antibody generation, as mentioned in an earlier section [START_REF] Crotty | T Follicular Helper Cell Biology: A Decade of Discovery and Diseases[END_REF]. The B cell fate upon first-time encounter with a pathogen is crucial for the establishment of long-term immunity, and a single naïve B cell is in fact capable of generating all types of B cell progenies including GC B cells, plasma cells and memory B cells (MBCs). Several factors influence the fate of B cells within the GC, including BCR affinity towards antigens and self. A deeper description of these processes is beyond the scope of this thesis, but they have been studied and reviewed elsewhere (Sabouri et al., 2014;[START_REF] Viant | Antibody Affinity Shapes the Choice between Memory and Germinal Center B Cell Fates[END_REF][START_REF] Victora | Germinal Centers[END_REF]).
Upon activation through engagement of the BCR with an antigen, B cells first produce low-affinity IgM. Affinity-matured B cells can differentiate into several subsets including short-lived Ab-secreting plasma cells and long-lived MBCs [START_REF] Moir | B-cell responses to HIV infection[END_REF]. In response to a reinfection or booster vaccination, preexisting pathogen-experienced plasma cells act as a constitutive first line of reactive defense, rapidly producing antibodies capable of neutralizing the reinvading pathogen. On the other hand, sentinel tissue-resident MBCs found in mucosae and other strategic locations provide a second line of reactive humoral immunity. The MBCs are capable of further affinity maturation by reentering the GC, reviewed by [START_REF] Inoue | Generation of memory B cells and their reactivation[END_REF]. Long-term protective immunity, however, seems to depend on both the MBC and plasma cell pools, as shown by a comprehensive study which found highly stable levels and long half-lives of circulating serum antibodies without frequent correlation between these and peripheral blood MBCs [START_REF] Amanna | Duration of humoral immunity to common viral and vaccine antigens[END_REF].

Figure I-19 - Fab and Fc binding to antigen and receptors are highly affected by features like hinge length and flexibility, glycosylation sites and disulfide bonds. Figure from (Lu et al., 2018).
Figure I-20 - Overview of dynamics during germinal center reaction and B cell fates
Schematic of the fates taken by a GC B cell. In the dark zone of the GC, a B cell acquires SHMs through AID, which yields B cell clones with higher BCR affinity for cognate antigen. In the light zone of the GC, the B cell will test its BCR affinity for foreign antigen presented by an FDC while also engaging with TFH cells through CD40-CD40L and MHC-II-TCR complexes. In case of high affinity of the BCR for the presented antigen and strong B cell-TFH interaction (1), the B cell can leave the GC as a plasma cell (PC), which undergoes Ig-class switching and produces large amounts of soluble antibodies. In case of lower BCR affinity for the antigen and weaker B cell-TFH interaction (2), the B cell can leave the GC as an MBC with an unswitched BCR. In case of inadequate BCR antigen affinity or high self-reactivity, the B cell can either die through apoptosis (3) or recycle into the dark zone of the GC for additional SHMs (4). The AID-mediated SHMs may give rise to B cell clones with increased BCR antigen affinity (5), which is tested by reentering the light zone and engaging with antigen presented by FDCs once again (6). Figure created with BioRender.com and adapted from [START_REF] Mesin | Germinal Center B Cell Dynamics[END_REF].
Role of nAbs during viral infections
The humoral antiviral response relies on antigen-specific antibodies. One of their major functions is to bind and 'neutralize' invading pathogens. These nAbs mainly interfere with early steps of the viral replication cycle by blocking viral particles from entry into host cells [START_REF] Murin | Antibody responses to viral infections: a structural perspective across three different enveloped viruses[END_REF]. A second role of nAbs is the elimination of infected cells, mediated by cellular effectors as described in section 1.3.1.4. The importance of nAbs in antiviral immunity is demonstrated by prophylactic vaccines against major viral pathogens which induce antibodies able to neutralize virus or interfere with their spread in the organism [START_REF] Plotkin | Correlates of protection induced by vaccination[END_REF]. Another example includes maternal antibodies transferred to neonates during pregnancy and through breast-feeding which protect the infant against infections during the first months of life, reviewed by [START_REF] Hansda | Plasma therapy: a passive resistance against the deadliest[END_REF]. Moreover, passively administered viral antigen-specific polyclonal Igs are able to hinder infections in occasions of no prior immunity or in absence or lack of efficacious vaccines. Such plasma therapies have been well established for prevention of rabies virus infection in particular [START_REF] Cabasso | Rabies immune globulin of human origin: preparation and dosage determination in non-exposed volunteer subjects[END_REF][START_REF] Rupprecht | Clinical practice. Prophylaxis against rabies[END_REF], and have been successfully used against hepatitis A virus [START_REF] Stapleton | Passive immunization against hepatitis A[END_REF] and varicella zoster/human herpes virus type 3 infections [START_REF] Sauerbrei | Diagnosis, antiviral therapy, and prophylaxis of varicella-zoster virus infections[END_REF].
Importantly, nAbs not only prevent infection but can also be used as treatment of established infection, as has been demonstrated for HIV-1 [START_REF] Barin | HIV-1 antibodies in prevention of transmission[END_REF].
Immunotherapy by passively administered broadly neutralizing antibodies (bnAbs) can suppress viremia in HIV-1 infected humanized mice and SHIV-infected NHPs [START_REF] Barouch | Therapeutic efficacy of potent neutralizing HIV-1-specific monoclonal antibodies in SHIV-infected rhesus monkeys[END_REF][START_REF] Gautam | A single injection of crystallizable fragment domain-modified antibodies elicits durable protection from SHIV infection[END_REF][START_REF] Klein | Somatic mutations of the immunoglobulin framework are generally required for broad and potent HIV-1 neutralization[END_REF][START_REF] Moldt | Highly potent HIV-specific antibody neutralization in vitro translates into effective protection against mucosal SHIV challenge in vivo[END_REF][START_REF] Shingai | Antibody-mediated immunotherapy of macaques chronically infected with SHIV suppresses viraemia[END_REF]. Moreover, such anti-HIV-1 bnAbs have been and are currently being tested as immunotherapy or as prophylaxis in humans. Although passive infusion of individual bnAbs has in most cases led to mAb-resistant viruses, a combination of two potent bnAbs was able to suppress viremia and delay viral rebound in HIV-1 infected individuals undergoing analytical ART interruption [START_REF] Bar-On | Safety and antiviral activity of combination HIV-1 broadly neutralizing antibodies in viremic individuals[END_REF][START_REF] Mendoza | Combination therapy with anti-HIV-1 antibodies maintains viral suppression[END_REF] and reviewed by [START_REF] Barin | HIV-1 antibodies in prevention of transmission[END_REF][START_REF] Caskey | Broadly neutralizing antibodies for the treatment and prevention of HIV infection[END_REF][START_REF] Gruell | Broadly neutralizing antibodies against HIV-1 and concepts for application[END_REF]. Although larger studies are required, a recent study reported evidence for a decrease in the size of intact proviral reservoirs in HIV-infected individuals treated with such bnAb combinations [START_REF] Gaebler | Prolonged viral suppression with anti-HIV-1 antibody therapy[END_REF]. Thus, nAbs play essential roles in the control of established viral infections and in the prevention of their transmission to new hosts.
For this reason, my host laboratory initiated a research program on antibodies raised by humans infected with zoonotic SFVs.
Innate immunity to FVs
Although the in vivo role of innate immunity during FV-infection has not been studied in depth to date, it is likely that the innate immune response restricts FV replication within infected hosts. Indeed, culture system studies have shown significant restriction of FVs by both novel and well-characterized intrinsic antiviral restriction factors, reviewed by [START_REF] Berka | Early events in foamy virus-host interaction and intracellular trafficking[END_REF] .
Two recent studies have also reported on the direct sensing of SFVs by primary hematopoietic and myeloid cells [START_REF] Bergez | Insights into Innate Sensing of Prototype Foamy Viruses in Myeloid Cells[END_REF]Rua et al., 2012b). Moreover, early and recent studies have shown that SFVs are sensitive to IFNs [START_REF] Bahr | Interferon but not MxB inhibits foamy retroviruses[END_REF]Falcone et al., 1999b;Rhodes-Feuillette et al., 1987).
Innate sensing of FVs
Type I IFNs are produced in particular by infected cells in response to sensing of PAMPs by PRRs and these cytokines can limit the spread of virus to neighboring cells by induction of antiviral ISGs. However, early in vitro culture system experiments with different serotypes of SFV showed that the majority of tested strains did not induce type I IFN production in nonhematopoietic cell lines of human, simian and murine origin (Rhodes-Feuillette et al., 1987;[START_REF] Sabile | Redemption of autoantibodies on anergic B cells by variable-region glycosylation and mutation away from self-reactivity[END_REF].
My lab showed sensing of PFV and primary zoonotic SFV strains by primary human mononuclear cells (Rua et al., 2012b). PBMCs were activated by both SFV cell-free particles and SFV-infected BHK-21 cells. Among PBMCs, the pDCs were the main type I IFN-producing cells (Rua et al., 2012b). The sensing and production of type I IFNs was dependent on expression of FV Env and independent of RT activity (Rua et al., 2012b). Inhibition of vesicular acidification and use of an endosomal TLR antagonist significantly decreased IFN induction in response to SFV particles and infected cells. Similarly, silencing of TLR7 in a pDC-like cell line significantly decreased IFN release in response to sensing of SFV, indicating that TLR7 is the main innate PRR activated upon SFV entry in pDCs (Rua et al., 2012b).
More recently, monocyte derived macrophages and monocyte derived DCs were shown to sense full-length replication-competent PFV [START_REF] Bergez | Insights into Innate Sensing of Prototype Foamy Viruses in Myeloid Cells[END_REF]. Knock-out of molecules involved in signaling pathways showed that the cyclic GMP-AMP synthase (cGAS) and STING were the principal sensors in this cell type by detection of reverse transcriped SFV DNA present in incoming viral particles. A fusion-defective mutant virus furthermore failed to induce ISGexpression indicative of cytoplasmic presence to be important for myeloid sensing of PFV.
Integration-deficient viruses and mutants lacking the accessory Tas/Bet proteins were all sensed [START_REF] Bergez | Insights into Innate Sensing of Prototype Foamy Viruses in Myeloid Cells[END_REF].
FVs are susceptible to IFNs
The addition of recombinant human IFN-α, -β or -γ to SFV-infected cells significantly decreased the CPEs in culture, indicative of antiviral activity (Rhodes-Feuillette et al., 1990;Rhodes-Feuillette et al., 1987;[START_REF] Sabile | Redemption of autoantibodies on anergic B cells by variable-region glycosylation and mutation away from self-reactivity[END_REF]). IFN-γ produced by mitogen-activated human primary blood leukocytes was shown to inhibit the replication of an SFVagm strain (SFVcae_huKa) (Falcone et al., 1999b), and more recently, IFN-β was demonstrated to inhibit the early steps of FV infection [START_REF] Bahr | Interferon but not MxB inhibits foamy retroviruses[END_REF]. In line with these studies, my lab has shown that blocking molecules involved in IFN signaling pathways, such as JAK1/2, increased the replication of zoonotic primary SFVgor strains isolated from infected African hunters [START_REF] Couteaudier | Inhibitors of the interferon response increase the replication of gorilla simian foamy viruses[END_REF]. Interestingly, FV capsid proteins contain mainly arginine as basic residues, whereas capsid proteins from other retroviruses contain a high number of lysines relative to arginines. Reversion of these arginines to lysines in PFV Gag had limited impact on the replication of PFV infectious molecular clones. However, Arg-to-Lys reversion increased the susceptibility of the PFV mutant clones to IFN-α treatment compared to WT, suggesting capsid-dependent restriction by an unknown host factor [START_REF] Matthes | Basic residues in the foamy virus Gag protein[END_REF].
FV restriction by well-characterized host factors
Tetherin is an IFN-inducible transmembrane restriction factor that also acts as a ligand for the immunoglobulin-like transcript 7 (ILT7) receptor expressed on pDCs [START_REF] Cao | Regulation of TLR7/9 responses in plasmacytoid dendritic cells by BST2 and ILT7 receptor interaction[END_REF].
Tetherin is known to block the release of viral particles from a broad range of viruses, including retro- and filoviruses, in infected cells [START_REF] Jouvenet | Broad-spectrum inhibition of retroviral and filoviral particle release by tetherin[END_REF]. The mechanism of action likely involves the C-ter glycosyl phosphatidylinositol membrane anchor of Tetherin being inserted into the viral membrane, while its N-ter domain remains anchored in the cell membrane. Tethered viral particles are then internalized and degraded in endosomes, reviewed by [START_REF] Colomer-Lluch | Restriction Factors: From Intrinsic Viral Restriction to Shaping Cellular Immunity Against HIV-1[END_REF]. For PFV, virion release is strongly inhibited by human, simian, bovine and canine Tetherin, and this inhibition was shown to depend on the membrane anchor and on Tetherin dimerization [START_REF] Jouvenet | Broad-spectrum inhibition of retroviral and filoviral particle release by tetherin[END_REF][START_REF] Xu | Novel Host Protein TBC1D16, a GTPase Activating Protein of Rab5C, Inhibits Prototype Foamy Virus Replication[END_REF].
The E3 ubiquitin ligase TRIM5α also functions as a retroviral restriction factor, in addition to its role as a PRR specific for retroviral capsids and a regulator of innate immune signaling [START_REF] Pertel | TRIM5 is an innate immune sensor for the retrovirus capsid lattice[END_REF]. PFV and SFVmac were observed to be sensitive to TRIM5α from NWMs, but resistant to TRIM5α from OWMs and Apes. FFV was restricted by TRIM5α from Apes, whereas TRIM5α from OWMs did not restrict any of the three FV isolates [START_REF] Yap | Restriction of foamy viruses by primate Trim5alpha[END_REF].
NWM SFVs are inhibited by TRIM5α from related NWM species but are not affected by their own species-specific TRIM5α, potentially reflecting FV evolution and adaptation of host TRIM5α [START_REF] Pacheco | Species-specific inhibition of foamy viruses from South American monkeys by New World Monkey TRIM5{alpha} proteins[END_REF]. Structural studies support the idea that TRIM5α restricts FVs through interaction of its B30.2 domain with the viral Gag C-ter domain, as shown for orthoretroviruses [START_REF] Goldstone | A unique spumavirus Gag N-terminal domain with functional properties of orthoretroviral matrix and capsid[END_REF].
FVs have been shown to be resistant to the human Mx proteins [START_REF] Bahr | Interferon but not MxB inhibits foamy retroviruses[END_REF][START_REF] Regad | PML mediates the interferon-induced antiviral state against a complex retrovirus via its association with the viral transactivator[END_REF]. This is in contrast to the demonstrated inhibition of influenza A virus pol activity by MxA and the late-step capsid-dependent inhibition of HIV by Mx2 [START_REF] Goujon | Human MX2 is an interferon-induced post-entry inhibitor of HIV-1 infection[END_REF][START_REF] Kane | MX2 is an interferon-induced inhibitor of HIV-1 infection[END_REF][START_REF] Mänz | Pandemic influenza A viruses escape from restriction by human MxA through adaptive mutations in the nucleoprotein[END_REF]. The restrictive function of SAMHD1 works by limiting the intracellular pool of dNTPs [START_REF] Colomer-Lluch | Restriction Factors: From Intrinsic Viral Restriction to Shaping Cellular Immunity Against HIV-1[END_REF]. An ISG library screen did not find restriction of PFV by human SAMHD1, in agreement with a functional study [START_REF] Gramberg | Restriction of diverse retroviruses by SAMHD1[END_REF][START_REF] Kane | Identification of Interferon-Stimulated Genes with Antiretroviral Activity[END_REF]. However, the screen detected modest restriction of early PFV replication by the simian SAMHD1 [START_REF] Kane | Identification of Interferon-Stimulated Genes with Antiretroviral Activity[END_REF]. The SERINC proteins have been shown to harbor antiviral activity against HIV-1, and the accessory protein Nef can counteract SERINCs, with the exception of the human paralog SERINC2 [START_REF] Colomer-Lluch | Restriction Factors: From Intrinsic Viral Restriction to Shaping Cellular Immunity Against HIV-1[END_REF][START_REF] Ramdas | Coelacanth SERINC2 Inhibits HIV-1 Infectivity and Is Counteracted by Envelope Glycoprotein from Foamy Virus[END_REF]. In contrast, FVs are resistant to SERINCs. Interestingly, SERINC2 from the ancient coelacanth was able to restrict HIV-1 but not FV. Overexpression of FV Env rescued HIV-1 infectivity, suggesting that Env counteracts SERINC2. These results point to a long evolutionary relationship between SERINCs and retroviruses [START_REF] Ramdas | Coelacanth SERINC2 Inhibits HIV-1 Infectivity and Is Counteracted by Envelope Glycoprotein from Foamy Virus[END_REF]. While IFITMs have not been shown to restrict PFV, a study demonstrated inhibition of FFV at a late step of replication by human IFITM1, -2 and -3, although this restriction was time-dependent and diminished at later time points [START_REF] Kim | IFITM proteins inhibit the late step of feline foamy virus replication[END_REF].
A class of well-studied restriction factors is the APOBEC3 proteins. These are incorporated into virions, where they deaminate cytosine residues of the viral DNA into uracil (C-to-U), leading to degradation of the viral genome. In addition, human APOBEC3F and -3G interfere with reverse transcription of HIV. The majority of U-containing viral genomes are enzymatically degraded or integrated with numerous G-to-A substitutions (C-to-U editing of the minus strand is read as G-to-A changes on the plus strand), reviewed by [START_REF] Jaguva Vasudevan | Foamy Viruses, Bet, and APOBEC3 Restriction[END_REF]. With regard to FVs, human, simian and murine APOBEC3G proteins induce G-to-A editing of the FV genome and act as inhibitors of FV infectivity in vitro [START_REF] Delebecque | Restriction of foamy viruses by APOBEC cytidine deaminases[END_REF][START_REF] Löchelt | The antiretroviral activity of APOBEC3 is inhibited by the foamy virus accessory Bet protein[END_REF][START_REF] Russell | Foamy virus Bet proteins function as novel inhibitors of the APOBEC3 family of innate antiretroviral defense factors[END_REF]. Moreover, it was shown that human APOBEC3F and -3G, as well as the three feline APOBECs, interact directly with FV Bet and some with Gag [START_REF] Chareza | Molecular and functional interactions of cat APOBEC3 and feline foamy and immunodeficiency virus proteins: different ways to counteract host-encoded restriction[END_REF][START_REF] Russell | Foamy virus Bet proteins function as novel inhibitors of the APOBEC3 family of innate antiretroviral defense factors[END_REF]. Indeed, with one exception, most studies found that the accessory protein Bet antagonizes APOBEC3 proteins and prevents their incorporation into viral particles, including for PFV [START_REF] Delebecque | Restriction of foamy viruses by APOBEC cytidine deaminases[END_REF][START_REF] Russell | Foamy virus Bet proteins function as novel inhibitors of the APOBEC3 family of innate antiretroviral defense factors[END_REF] and FFV [START_REF] Chareza | Molecular and functional interactions of cat APOBEC3 and feline foamy and immunodeficiency virus proteins: different ways to counteract host-encoded restriction[END_REF][START_REF] Lukic | The human foamy virus internal promoter directs the expression of the functional Bel 1 transactivator and Bet protein early after infection[END_REF][START_REF] Löchelt | The antiretroviral activity of APOBEC3 is inhibited by the foamy virus accessory Bet protein[END_REF]. PFV Bet also inhibits human APOBEC3B, -3C and -3G activity, with Bet forming a complex that prevents APOBEC3 dimerization without inducing APOBEC3 degradation, in contrast to Vif from SIV and HIV-2, which also restrict APOBEC3s [START_REF] Jaguva Vasudevan | Prototype foamy virus Bet impairs the dimerization and cytosolic solubility of human APOBEC3G[END_REF][START_REF] Perkovic | Species-specific inhibition of APOBEC3C by the prototype foamy virus protein bet[END_REF][START_REF] Zhang | HIV-2 Vif and foamy virus Bet antagonize APOBEC3B by different mechanisms[END_REF].
The in vivo activity of APOBEC3s on SFV has also been addressed in a few studies. Notably, G-to-A substitutions have been found in SFV genomes recovered from humans infected with gorilla SFVs [START_REF] Rua | Viral latency in blood and saliva of simian foamy virusinfected humans[END_REF]. However, these substitutions were either too rare to represent genuine in vivo hypermutation by APOBEC3 proteins [START_REF] Delebecque | Restriction of foamy viruses by APOBEC cytidine deaminases[END_REF], or may alternatively reflect the expansion of a small number of edited clones [START_REF] Rua | Viral latency in blood and saliva of simian foamy virusinfected humans[END_REF]. The latter is likely, since all the G-to-A mutations in distinct clones were observed at the very same position of the genome [START_REF] Rua | Viral latency in blood and saliva of simian foamy virusinfected humans[END_REF]. A computational analysis reported a higher frequency of APOBEC3-mutated macaque SFV genomes in human blood cells from infected donors than in blood and buccal cells from naturally infected macaques in Bangladesh [START_REF] Matsen | A novel Bayesian method for detection of APOBEC3-mediated hypermutation and its application to zoonotic transmission of simian foamy viruses[END_REF]. These genetic studies of zoonotic SFV strains are the only evidence that host restriction factors may control SFV replication in vivo. However, their interpretation is complex due to the lack of comparison between gorilla and human samples [START_REF] Delebecque | Restriction of foamy viruses by APOBEC cytidine deaminases[END_REF][START_REF] Rua | Viral latency in blood and saliva of simian foamy virusinfected humans[END_REF]. The clonal expansion of edited genomes observed by Rua et al. was not taken into account in the comparison between macaque and human samples [START_REF] Matsen | A novel Bayesian method for detection of APOBEC3-mediated hypermutation and its application to zoonotic transmission of simian foamy viruses[END_REF]. Finally, the counterselection of hypermutated defective genomes may lead to an underestimation of APOBEC3 action in the three studies [START_REF] Delebecque | Restriction of foamy viruses by APOBEC cytidine deaminases[END_REF][START_REF] Matsen | A novel Bayesian method for detection of APOBEC3-mediated hypermutation and its application to zoonotic transmission of simian foamy viruses[END_REF][START_REF] Rua | Viral latency in blood and saliva of simian foamy virusinfected humans[END_REF].
FV restriction by novel intrinsic host factors
A macaque and human ISG screen of early and late steps of viral replication identified several restriction factors, most of which target the production (late steps) of PFV in HT1080 cells [START_REF] Kane | Identification of Interferon-Stimulated Genes with Antiretroviral Activity[END_REF].
The only hits with a modest reduction of PFV infectivity in the incoming screen were the human serine/threonine protein kinase PAK3 and the macaque PHD finger domain protein 11 (PHF11). The production screen, however, identified a list of candidate host factors with a stronger impact on viral replication. The human ISGs included APOBEC3G, the oligoadenylate synthase-like (OASL) protein, the free fatty acid receptor 2 (FFAR2) and the human RNA helicase Moloney leukemia virus 10-like (MOV10) protein as the most significant hits. The macaque-specific ISG hits included APOBEC3B/G/F, Schlafen (SLFN) family member 12 (SLFN12), the disintegrin and metalloprotease-like protein decysin-1 (ADAMDEC1), OASL, mixed lineage kinase domain-like protein (MLKL) and tumor necrosis factor superfamily member 10 (TNFSF10) [START_REF] Kane | Identification of Interferon-Stimulated Genes with Antiretroviral Activity[END_REF].
Surprisingly, the screen identified restriction of late events of PFV replication by MOV10, although silencing of its expression had no effect on PFV replication in a previous report [START_REF] Yu | The DEAD-box RNA helicase DDX6 is required for efficient encapsidation of a retroviral genome[END_REF]. A recent follow-up study described that macaque and human PHF11 (discovered as a modest antiviral factor in the incoming ISG screen) restrict several spumaviruses by specifically targeting the IP, preventing its basal transcription, Tas expression and thus viral replication [START_REF] Kane | Inhibition of spumavirus gene expression by PHF11[END_REF]. Moreover, after the discovery of the SLFN12 protein as a modest productive restriction factor in the ISG screen [START_REF] Kane | Identification of Interferon-Stimulated Genes with Antiretroviral Activity[END_REF], another group demonstrated that the related family member SLFN11 from human, cattle and AGMs are potent inhibitors of PFV replication [START_REF] Guo | Human Schlafen 11 exploits codon preference discrimination to attenuate viral protein synthesis of prototype foamy virus (PFV)[END_REF]. Inhibition of viral protein expression was rescued by gene codon optimization. Moreover, the ATPase and helicase activities of SLFN11 were required for PFV restriction [START_REF] Guo | Human Schlafen 11 exploits codon preference discrimination to attenuate viral protein synthesis of prototype foamy virus (PFV)[END_REF].
Several other host factors restrict PFV, as demonstrated through in vitro overexpression and knock-down experiments. Several of these target Tas, including the promyelocytic leukemia (PML) protein [START_REF] Regad | PML mediates the interferon-induced antiviral state against a complex retrovirus via its association with the viral transactivator[END_REF], N-Myc interactor (Nmi) [START_REF] Hu | N-Myc interactor inhibits prototype foamy virus by sequestering viral Tas protein in the cytoplasm[END_REF], the p53-induced RING-H2 (Pirh-2) protein [START_REF] Dong | Human Pirh2 is a novel inhibitor of prototype foamy virus replication[END_REF] and serum/glucocorticoid regulated kinase 1 (SGK1) (Zhang et al., 2022). These factors interact with Tas either in the nucleus or in the cytoplasm [START_REF] Hu | N-Myc interactor inhibits prototype foamy virus by sequestering viral Tas protein in the cytoplasm[END_REF][START_REF] Regad | PML mediates the interferon-induced antiviral state against a complex retrovirus via its association with the viral transactivator[END_REF]. They prevent LTR and IP transactivation or target Tas for proteasomal degradation, thereby reducing viral transcription. SGK1 was also found to reduce the stability of Gag (Zhang et al., 2022). Of note, PML was identified as an inhibitor of PFV gene expression through complexing with Tas and preventing its binding to viral DNA [START_REF] Regad | PML mediates the interferon-induced antiviral state against a complex retrovirus via its association with the viral transactivator[END_REF]. However, a later study demonstrated that endogenous PML was not involved in FV latency, questioning its role as an FV restriction factor [START_REF] Meiering | The promyelocytic leukemia protein does not mediate foamy virus latency in vitro[END_REF]. Lastly, PFV replication was shown to be inhibited by a novel host factor, TBC1D16, which belongs to the TBC domain-containing family of Rab GTPase-activating proteins. Overexpression of TBC1D16 inhibited transcription and expression of Tas and Gag, while silencing enhanced PFV replication (Yan et al., 2021). A summary of host proteins identified as antiviral restriction factors against PFV is shown in Table I-5.
FV restriction by miRNAs
MicroRNAs (miRNAs) are small regulatory RNAs that can bind to complementary mRNAs and inhibit their translation and subsequent expression. A wide range of tissue-specific cellular miRNAs are expressed in cells. Some viruses also encode their own miRNAs, including SFVagm, which encodes two miRNAs that can regulate the innate immune response and harbor functional similarity to certain host miRNAs [START_REF] Kincaid | Noncanonical microRNA (miRNA) biogenesis gives rise to retroviral mimics of lymphoproliferative and immunosuppressive host miRNAs[END_REF]. Although rare, some viruses, such as hepatitis C virus, directly rely on cellular miRNAs for their replication, reviewed by [START_REF] Cullen | How do viruses avoid inhibition by endogenous cellular microRNAs?[END_REF]. The inhibition of viruses by cellular miRNAs is rare in mammalian cells, despite frequent viral RNA silencing in plants and insects. In fact, it appears that viral inhibition by endogenous cellular miRNAs is avoided for most viruses through evolution [START_REF] Bogerd | Replication of many human viruses is refractory to inhibition by endogenous cellular microRNAs[END_REF]. Despite this, a few studies have reported that miRNAs can restrict viral gene expression and replication in human cells. One example of this is PFV. The human cellular miRNA-32 was shown to inhibit translation of PFV in vitro, and it was subsequently shown that Tas acts as a suppressor of this silencing, similar to the suppressor proteins produced by plant and insect viruses [START_REF] Lecellier | A cellular microRNA mediates antiviral defense in human cells[END_REF].
Antibody responses to FVs
The humoral immune response to FV was initially used for the diagnosis of infected animals and humans. In addition, the susceptibility of viral strains to neutralizing antibodies was used until the 1990s to classify FVs into serotypes. Regarding their function, SFV-specific antibodies were shown to protect against infection in one NHP model, and their neutralizing activity has been studied in cats and in humans only. My host laboratory characterized the nAbs and viral strains from the same SFV-infected hunters and established the link between the early-described serotypes and the recently described env genotypes. FVs are transmitted as cell-free particles and as cell-associated viruses, and antibodies blocked only the first mode of transmission. The antiviral role of non-neutralizing antibodies (ADCC, ADCP, complement-dependent cytotoxicity, inhibition of viral budding) has not been studied to date.
FV serology and diagnostics
Serological studies have been performed with a range of distinct FV strains and sera from naturally infected NHPs, cats, cattle and accidentally infected humans. Sera from NHPs react against Gag and Bet proteins, and Gag doublet bands on immunoblots has become a common tool for diagnostic purposes [START_REF] Hahn | Reactivity of primate sera to foamy virus Gag and Bet proteins[END_REF]. In contrast, no response to Env is detected, even in the largest study based on samples from 16 humans and 129 NHPs representing 32
African and Asian species who were diagnosed as infected by PCR [START_REF] Hussain | Screening for simian foamy virus infection by using a combined antigen Western blot assay: evidence for a wide distribution among Old World primates and identification of four new divergent viruses[END_REF].
However, Env glycoproteins were present in radiolabeled PFV-infected cell lysates immunoprecipitated with sera from infected humans and NHPs [START_REF] Netzer | Identification of the major immunogenic structural proteins of human foamy virus[END_REF]. Thus, most epitopes on Env are likely conformational. Similarly, BFV- and FFV-specific sera mostly bind to Gag and Bet in enzyme-linked immunosorbent assay (ELISA) and western blot assays using infected cell lysates as a source of antigen [START_REF] Alke | Characterization of the humoral immune response and virus replication in cats experimentally infected with feline foamy virus[END_REF][START_REF] Romen | Serological detection systems for identification of cows shedding bovine foamy virus via milk[END_REF][START_REF] Romen | Antibodies against Gag are diagnostic markers for feline foamy virus infections while Env and Bet reactivity is undetectable in a substantial fraction of infected cats[END_REF]. Interestingly, a recombinant FFV TM protein was recognized in ELISA by sera from infected cats and immunized animals, and binding to TM and Gag antigens was concordant [START_REF] Mühle | Immunological properties of the transmembrane envelope protein of the feline foamy virus and its use for serological screening[END_REF].
The presence of mucosal and systemic antibodies against SFV was investigated using plasma, urine and saliva samples from persistently SFV-infected humans and chimpanzees [START_REF] Cummins | Mucosal and systemic antibody responses in humans infected with simian foamy virus[END_REF]. IgA responses against both Gag and Bet proteins were undetectable despite strong IgG reactivity in western blot [START_REF] Cummins | Mucosal and systemic antibody responses in humans infected with simian foamy virus[END_REF]. No data are available on the presence of Env-specific antibodies in saliva of infected animals or humans and their role in SFV transmission.
FV serotypes
SFV strains were initially characterized by their susceptibility to neutralization (serotyping), which showed the segregation of SFVs according to their host species and the existence of two serogroups among strains isolated from the same species [START_REF] Hooks | The foamy viruses[END_REF][START_REF] O'brien | Foamy virus serotypes 1 and 2 in rhesus monkey tissues[END_REF]. For example, the first macaque SFV isolate (SFVmcy_FV21) was designated SFV type 1 [START_REF] Rustigian | Infection of monkey kidney tissue cultures with viruslike agents[END_REF], while a second macaque isolate (SFVmcy_FV34) belongs to a distinct serogroup and was designated SFV type 2 [START_REF] Johnston | A second immunologic type of simian foamy virus: monkey throat infections and unmasking by both types[END_REF]. In total, 11 serotypes have been defined, and seven of these eleven isolates have been full-length sequenced (see Table I-6).
Cross-neutralization of isolates from distinct NHP species by the same reference sera was observed [START_REF] Hooks | The foamy viruses[END_REF]. For example, the first gorilla SFV strain was neutralized by sera raised against chimpanzee SFV (Bieniasz et al., 1995a). Overall, the early serologic studies have demonstrated the induction of strong nAb responses in SFV-infected hosts.
Furthermore, serum-mediated neutralization by chimpanzee sera could be inhibited by competition with recombinant Env fusion proteins, and immune sera blocked Env from binding to cells [START_REF] Falcone | Sites of simian foamy virus persistence in naturally infected African green monkeys: latent provirus is ubiquitous, whereas viral replication is restricted to the oral mucosa[END_REF]. Those findings gave first indications that Env is targeted by nAbs and that some neutralizing epitopes interfere with Env binding to susceptible cells. My host laboratory studies humans infected with zoonotic SFVs in Central Africa, most of whom acquired the virus through NHP bites [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF][START_REF] Calattini | Multiple retroviral infection by HTLV type 1, 2, 3 and simian foamy virus in a family of Pygmies from Cameroon[END_REF][START_REF] Calattini | Simian foamy virus transmission from apes to humans, rural Cameroon[END_REF](Mouinga-Ondémé et al., 2012). Viral strains were of gorilla, chimpanzee and Cercopithecus origin. They aimed to test the hypothesis that adaptive immune responses participate in the control of zoonotic SFV strains in infected humans and restrict their transmission to other human hosts. They initiated a research program to assess whether SFV-specific antibodies are present in infected individuals and to characterize their mode of action. The first step was to search for neutralizing antibodies, focusing on individuals infected by a gorilla or a chimpanzee SFV. Viral strains were those isolated from infected hunters (Rua et al., 2012a): more specifically, two gorilla SFVs belonging to genotypes I and II and one chimpanzee genotype II SFV strain. As no zoonotic chimpanzee genotype I strain was isolated, the laboratory-adapted PFV strain was used. An indicator cell line expressing the β-galactosidase gene under the control of one gorilla LTR promoter (GFAB) was constructed and used to perform microneutralization assays (i.e., in 96-well plates) [START_REF] Lambert | A new sensitive indicator cell line reveals crosstransactivation of the viral LTR by gorilla and chimpanzee simian foamy viruses[END_REF]. Interestingly, the indicator GFAB cell line detects strains of both gorilla and chimpanzee origin.
Plasma samples from the majority of gorilla SFV-infected individuals neutralized at least one SFVgor strain (40/44, 91%) (Fig. I-21A). Neutralizing titers ranged from 1:10 to 1:14,724 [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. The neutralized strains usually belonged to the same genotype as the infecting SFV strain, as previously determined by the sequence of PCR-amplified env [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF] (Fig. I-21B). However, cross-neutralization of strains from both genotypes was frequent (36%). The infecting strains were then characterized by a genotype-specific env PCR, expected to be more sensitive than the first molecular study. Eight individuals (20%) were coinfected with strains from two distinct genotypes, and all but one neutralized both genotypes.
Seven other samples from single-infected donors neutralized two strains, suggesting either cross-neutralization of both genotypes or that one genotype was undetectable by PCR (Fig. I-21A) [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF].
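As a point of reference for the titer values quoted above, the sketch below formalizes the usual convention for reporting neutralizing titers; the readout and the threshold θ are illustrative assumptions (50% or 90% reduction of infected-cell counts are common choices), not a restatement of the exact definition used in these studies.

% Minimal sketch (assumption): the neutralizing titer is the largest plasma dilution factor d
% (e.g., d = 100 for a 1:100 dilution) at which infection is still reduced by at least a
% threshold theta relative to the no-plasma control; in practice the value is often
% interpolated between tested dilutions rather than taken from a discrete series.
\[
\mathrm{titer} \;=\; \max\left\{\, d \;:\; \frac{N_{\mathrm{infected}}(d)}{N_{\mathrm{infected}}(\mathrm{control})} \;\leq\; 1-\theta \,\right\},
\qquad \theta \in \{0.5,\; 0.9\}.
\]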
The second part of this work was the demonstration that nAbs target exclusively the genotype-specific SUvar domain. FVVs were originally developed with Env, Gag and Pol from the PFV strain. These FVVs were pseudotyped with gorilla Env from each genotype and with chimeric Env in which the SUvar from both gorilla strains was swapped. Plasma antibody neutralization of the chimeric vectors demonstrated that nAbs strictly target the SUvar region [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. Of note, plasma samples from individuals infected with a gorilla SFV cross-neutralized chimpanzee SFV from matched genotype and vice versa (Fig. I-21C and D).
Two genotypes based on SUvar were also described for FFV isolates, with a correlation between genotype and serum neutralization similar to the one observed for SFV [START_REF] Phung | Genetic analyses of feline foamy virus isolates from domestic and wild feline species in geographically distinct areas[END_REF][START_REF] Phung | Characterization of Env antigenicity of feline foamy virus (FeFV) using FeFV-infected cat sera and a monoclonal antibody[END_REF][START_REF] Winkler | Epidemiology of feline foamy virus and feline immunodeficiency virus infections in domestic and feral cats: a seroepidemiological study[END_REF][START_REF] Zemba | Construction of infectious feline foamy virus genomes: cat antisera do not crossneutralize feline foamy virus chimera with serotype-specific Env sequences[END_REF]. (Legend of Fig. I-21: tables indicate the number of donors with measurable nAb activity against the four strains and statistical significance according to Fisher's exact test. Figure adapted from [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF].)
Mapping of linear epitopes on Env
Two studies searched for linear epitopes on Env using synthetic peptides. Through the use of FFV Env gp130 peptide microarrays and samples from infected cats, pumas and immunized rats, four immunodominant clusters were identified: two in the LP and two in the TM (after the fusion peptide and in the membrane-proximal external region (MPER)) [START_REF] Mühle | Epitope Mapping of the Antibody Response Against the Envelope Proteins of the Feline Foamy Virus[END_REF]. My host laboratory used a library of 169 PFV peptides covering the Env ectodomain in an ELISA assay with 36 plasma samples from SFV-infected African hunters [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF]. A single immunodominant linear B cell epitope was found in the LP, mapped to residues 98-108; this epitope is also present on FFV Env (peptide spanning residues 92-119) [START_REF] Mühle | Epitope Mapping of the Antibody Response Against the Envelope Proteins of the Feline Foamy Virus[END_REF].
Sera from an infected cat that neutralized the FUV strain, representative of one of the two FFV genotypes, bound to a peptide spanning residues 441-463 within the SUvar domain of FFV SU.
The authors hypothesized that these residues could represent a genotype-specific nAb epitopic region [START_REF] Mühle | Epitope Mapping of the Antibody Response Against the Envelope Proteins of the Feline Foamy Virus[END_REF]. A former master student from my laboratory found that a homologous peptide was recognized by plasma samples from SFV-infected hunters. However, the recognition was not genotype-specific: the peptide with the genotype I sequence was recognized by plasma from an individual infected with a genotype II strain (see Manuscript II and the Discussion and Perspectives section 6.1.2, Chapters V and VI, respectively (Dynesen et al., 2022, submitted)).
These two studies [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF][START_REF] Mühle | Epitope Mapping of the Antibody Response Against the Envelope Proteins of the Feline Foamy Virus[END_REF] showed that some linear antigenic sites are shared by FVs from different species. Peptides from the TM were recognized by cat sera only; this may reflect a different immunogenicity of the TM in felines and in humans or the use of different techniques and peptides. As mentioned, most epitopes targeted by Env-specific antibodies are conformational.
FV-specific nAbs and cell-to-cell transmission
SFVs use both cell-free and cell-associated routes to spread in cell cultures [START_REF] Couteaudier | Plasma antibodies from humans infected with zoonotic simian foamy virus do not inhibit cell-to-cell transmission of the virus despite binding to the surface of infected cells[END_REF][START_REF] Heinkelein | Retrotransposition and cell-to-cell transfer of foamy viruses[END_REF][START_REF] Hooks | The foamy viruses[END_REF]. Some EFV and BFV isolates are strictly cell-associated without release of infectious cell-free viral particles into the culture supernatants [START_REF] Bao | In Vitro Evolution of Bovine Foamy Virus Variants with Enhanced Cell-Free Virus Titers and Transmission[END_REF][START_REF] Kirisawa | Isolation of an Equine Foamy Virus and Sero-Epidemiology of the Viral Infection in Horses in Japan[END_REF][START_REF] Materniak-Kornas | Bovine Foamy Virus: Shared and Unique Molecular Features In Vitro and In Vivo[END_REF]. For BFV, the transmission route depends on both cell-type and adaptations in Gag and the C-ter of Env [START_REF] Bao | Shared and cell type-specific adaptation strategies of Gag and Env yield high titer bovine foamy virus variants[END_REF][START_REF] Zhang | The Influence of Envelope C-Terminus Amino Acid Composition on the Ratio of Cell-Free to Cell-Cell Transmission for Bovine Foamy Virus[END_REF].
Two studies addressed the capacity of antibodies to block SFV cell-to-cell transmission. The first one assessed the role of plasma antibodies in the prevention of SFV transmission through blood cell transfusion in rhesus macaques. Transfusion of blood from an SFV-infected donor macaque into a non-infected recipient animal led to infection when the plasma was removed, but not when whole blood was transferred. Thus, this study showed the importance of antibodies in protection against cell-associated SFV transmission [START_REF] Khan | Simian foamy virus infection by whole-blood transfer in rhesus macaques: potential for transfusion transmission in humans[END_REF][START_REF] Williams | Role of neutralizing antibodies in controlling simian foamy virus transmission and infection[END_REF].
The second study, by my host laboratory, showed that plasma from SFV-infected humans does not block cell-associated transmission of SFV in vitro, despite inhibiting the entry of cell-free virus [START_REF] Couteaudier | Plasma antibodies from humans infected with zoonotic simian foamy virus do not inhibit cell-to-cell transmission of the virus despite binding to the surface of infected cells[END_REF]. They showed that plasma antibodies bind to Env expressed at the surface of infected cells, opening the possibility that antibodies mediate the destruction of infected cells through the recruitment of complement or of innate immune cells with cytotoxic or phagocytic functions. Such functions of SFV-specific antibodies have not been studied so far.
nAb epitopes on Env from other retroviruses
As nAbs and their epitopes represent the main topic of this PhD thesis, I will present a brief overview of the nAb epitopes described on the Env of other retroviruses.
MLV
MLVs are simple-type retroviruses belonging to the gammaretrovirus genus. They were discovered in the 1950s as the causative oncogenic agent of leukemia in inbred laboratory mice. MLVs exist both as endogenous and exogenous viruses, reviewed by [START_REF] Kozak | Origins of the endogenous and infectious laboratory mouse gammaretroviruses[END_REF].
Three classes of MLVs are defined based on their tropism. Ecotropic strains have a strictly murine cell tropism, such as the Moloney and Friend MLV (Fr-MLV) isolates. Xenotropic strains infect cells of non-murine origin. Amphotropic/polytropic strains infect both murine and non-murine cells [START_REF] Sitbon | Les rétrovirus leucémogènes murins : pathogènes, gènes et outils génétiques[END_REF]. The use of different receptor molecules explains these tropisms: all MLV receptors belong to the transporter superfamily, and the cationic amino acid transporter mCAT-1 is used by ecotropic MLVs [START_REF] Albritton | A putative murine ecotropic retrovirus receptor gene encodes a multiple membrane-spanning protein and confers susceptibility to virus infection[END_REF].
The SU gp70 can be divided into three subunits: an N-ter domain (NTD), a C-ter domain (CTD) and a central proline-rich hinge region (PRRH) connecting the two. The PRRH has been shown to mediate conformational changes necessary for fusion [START_REF] Lavillette | A proline-rich motif downstream of the receptor binding domain modulates conformation and fusogenicity of murine retroviral envelopes[END_REF][START_REF] Lavillette | Relationship between SU subdomains that regulate the receptor-mediated transition from the native (fusion-inhibited) to the fusionactive conformation of the murine leukemia virus glycoprotein[END_REF], while the CTD forms an important disulfide bridge with the TM to generate SU/TM heterodimers [START_REF] Opstelten | Moloney murine leukemia virus envelope protein subunits, gp70 and Pr15E, form a stable disulfide-linked complex[END_REF]. The NTD harbors the RBD and contains discontinuous variable regions (VR) termed VRA, -B and -C, which determine MLV tropism (Fig. I-23, top panel) [START_REF] Battini | Receptor-binding domain of murine leukemia virus envelope glycoproteins[END_REF][START_REF] Battini | Receptor choice determinants in the envelope glycoproteins of amphotropic, xenotropic, and polytropic murine leukemia viruses[END_REF]. A crystal structure showed that the Fr-MLV RBD (SU aa 1-236) is composed of an Ig-like core of anti-parallel β-sheets (Fig. I-23, bottom panel) and a helical subdomain situated on top of the core [START_REF] Fass | Structure of a murine leukemia virus receptor-binding glycoprotein at 2.0 angstrom resolution[END_REF]. The latter comprises VRA and -B folded in close proximity, and VRC folded as a loop in a distinct region (Fig. I-23, bottom panel). In 2003, the RBD structure of another gammaretrovirus, the feline leukemia virus (FeLV), was solved [START_REF] Barnett | Structure and mechanism of a coreceptor for infection by a pathogenic feline retrovirus[END_REF]. Its structural organization is similar to that of the Fr-MLV RBD, apart from the divergence of the variable regions, consistent with the distinct tropisms of these two gammaretroviruses.
Upon infection or immunization, the majority of nAbs target the VRs within the RBD and interfere with the SU-mCAT-1 interaction [START_REF] Burkhart | Distinct mechanisms of neutralization by monoclonal antibodies specific for sites in the N-terminal or C-terminal domain of murine leukemia virus SU[END_REF]. Residues S84, D86 and W102 within VRA of Fr-MLV SU were shown to be important for its interaction with mCAT-1 [START_REF] Davey | Identification of a receptor-binding pocket on the envelope protein of friend murine leukemia virus[END_REF]. Non-neutralizing antibodies bind SU with affinities similar to those of nAbs; however, they rarely recognize the RBD and instead frequently target the PRRH and CTD [START_REF] Burkhart | Distinct mechanisms of neutralization by monoclonal antibodies specific for sites in the N-terminal or C-terminal domain of murine leukemia virus SU[END_REF].
Interestingly, some broadly cross-neutralizing or pan-neutralizing mAbs (83A25 and 573) that target the PRRH and CTD lose binding to linearized SU, suggestive of conformation-dependent recognition [START_REF] Evans | Analysis of two monoclonal antibodies reactive with envelope proteins of murine retroviruses: one pan specific antibody and one specific for Moloney leukemia virus[END_REF][START_REF] Evans | A neutralizable epitope common to the envelope glycoproteins of ecotropic, polytropic, xenotropic, and amphotropic murine leukemia viruses[END_REF]. The mode of action of such mAbs was suggested to involve inhibition of viral fusion or of the conformational changes needed for fusion, since they do not prevent the SU-mCAT-1 interaction [START_REF] Burkhart | Distinct mechanisms of neutralization by monoclonal antibodies specific for sites in the N-terminal or C-terminal domain of murine leukemia virus SU[END_REF].
Env capture among MLVs -tropism and recognition by nAbs
In the 70s and 80s, studies demonstrated that polytropic MLVs arise in mice by recombination of exogenous ecotropic strains with endogenous retroviral envelope genes, and that these novel recombinants have wider tropism and greater virulence [START_REF] Elder | Biochemical evidence that MCF murine leukemia viruses are envelope (env) gene recombinants[END_REF][START_REF] Evans | Friend and Moloney murine leukemia viruses specifically recombine with different endogenous retroviral sequences to generate mink cell focus-forming viruses[END_REF][START_REF] Ruscetti | Friend murine leukemia virus-induced leukemia is associated with the formation of mink cell focus-inducing viruses and is blocked in mice expressing endogenous mink cell focus-inducing xenotropic viral envelope genes[END_REF][START_REF] Stoye | The four classes of endogenous murine leukemia virus: structural relationships and potential for recombination[END_REF]. Later it was shown that specific sites of recombination occur at hotspots within the SU domain, in fact confined to a region of high homology between the exogenous and endogenous sequences [START_REF] Alamgir | Precise identification of endogenous proviruses of NFS/N mice participating in recombination with moloney ecotropic murine leukemia virus (MuLV) to generate polytropic MuLVs[END_REF].
Importantly, recombined endogenous env sequences impact SU recognition by nAbs in mice [START_REF] Tumas | Loss of antigenic epitopes as the result of env gene recombination in retrovirus-induced leukemia in immunocompetent mice[END_REF], and some of these endogenous env sequences were classified into two distinct serotypes [START_REF] Lavignon | Characterization of epitopes defining two major subclasses of polytropic murine leukemia viruses (MuLVs) which are differentially expressed in mice infected with different ecotropic MuLVs[END_REF]. In addition to recombination, the transfer and spread of intact non-recombined endogenous viruses has also been demonstrated to occur through pseudotyping of endogenous strains by an ecotropic virus [START_REF] Evans | Mobilization of endogenous retroviruses in mice after infection with an exogenous retrovirus[END_REF].
HTLV
The Env from the deltaretrovirus HTLV-1 possesses many similarities to the Env from gammaretroviruses such as MLV. The precursor Env gp62 from HTLV-1 is composed of SU gp46 and TM gp21, and its RBD has been proposed to adopt a structural organization similar to the one from Fr-MLV [START_REF] Johnston | Envelope proteins containing single amino acid substitutions support a structural model of the receptor-binding domain of bovine leukemia virus surface protein[END_REF].
Two main receptors have been identified for HTLV-1: Neuropilin 1 (NRP-1) [START_REF] Ghez | Neuropilin-1 is involved in human T-cell lymphotropic virus type 1 entry[END_REF] and the glucose transporter type 1 (GLUT-1) (Jin et al., 2006;[START_REF] Manel | The ubiquitous glucose transporter GLUT-1 is a receptor for HTLV[END_REF].
Residues 90-98 and residues 106 and 114 of HTLV-1 SU are essential for its binding to NRP-1 and GLUT-1, respectively. SU can directly interact with GLUT-1 and NRP-1 as a tripartite complex, reviewed by [START_REF] Jones | Molecular aspects of HTLV-1 entry: functional domains of the HTLV-1 surface subunit (SU) and their relationships to the entry receptors[END_REF]. GLUT-1 is involved in cell-to-cell transmission or fusion.
However, it is expressed at low levels on primary HTLV-1 targets (cord blood and activated CD4+ T lymphocytes), raising concern about its role as the primary binding receptor [START_REF] Takenouchi | GLUT1 is not the primary binding receptor but is associated with cell-to-cell transmission of human T-cell leukemia virus type 1[END_REF]. In addition to NRP-1 and GLUT-1, HTLV-1 uses heparan sulfate proteoglycans (HSPGs) as a third receptor, and SU binding to NRP-1 depends on conformational changes induced by HSPG binding [START_REF] Jones | Heparan sulfate proteoglycans mediate attachment and entry of human T-cell leukemia virus type 1 virions into CD4+ T cells[END_REF][START_REF] Piñon | Human T-cell leukemia virus type 1 envelope glycoprotein gp46 interacts with cell surface heparan sulfate proteoglycans[END_REF]. HTLV-1 SU bound to HSPG mimics the pro-angiogenic factor VEGF-165, the natural ligand of NRP-1 [START_REF] Lambert | HTLV-1 uses HSPG and neuropilin-1 for entry by molecular mimicry of VEGF165[END_REF]. The binding site for HSPG on HTLV-1 SU was mapped to the CTD. Interestingly, CD4+ T cells express higher HSPG levels than CD8+ T cells, while CD8+ T cells express higher GLUT-1 levels than their CD4+ counterparts, potentially explaining the distinct target-cell preferences of HTLV-1 and -2 through different receptor usage [START_REF] Jones | Human T-cell leukemia virus type 1 (HTLV-1) and HTLV-2 use different receptor complexes to enter T cells[END_REF].
As HTLV is highly cell-associated, classical neutralization assays have been adapted, using either vesicular stomatitis virus (VSV) pseudotyped with HTLV Env or syncytium formation upon co-culture of HTLV-infected donor cells with uninfected cells. Plasma nAbs were detected in HTLV-1-infected individuals, with limited cross-neutralization of HTLV-2 [START_REF] Clapham | Pseudotypes of human T-cell leukemia virus types 1 and 2: neutralization by patients' sera[END_REF][START_REF] Hoshino | Detection of lymphocytes producing a human retrovirus associated with adult T-cell leukemia by syncytia induction assay[END_REF][START_REF] Nagy | Human T-cell leukemia virus type I: induction of syncytia and inhibition by patients' sera[END_REF]. Plasma-mediated cross-neutralization of HTLV-1 Cosmopolitan and Melanesian strains was however observed, supporting that nAbs target conserved epitopes [START_REF] Benson | Crossneutralizing antibodies against cosmopolitan and Melanesian strains of human T cell leukemia/lymphotropic virus type I in sera from inhabitants of Africa and the Solomon Islands[END_REF]. The role of nAbs was addressed in vivo: plasma from HTLV-1-infected humans can protect rabbits against cell-associated HTLV-1 infection [START_REF] Kataoka | Transmission of HTLV-I by blood transfusion and its prevention by passive immunization in rabbits[END_REF][START_REF] Miyoshi | Immunoglobulin prophylaxis against HTLV-I in a rabbit model[END_REF]. In a humanized mouse model, a neutralizing mAb could block cell-to-cell transmission, while infusion of non-neutralizing mAbs reduced but did not prevent transmission [START_REF] Saito | The neutralizing function of the anti-HTLV-1 antibody is essential in preventing in vivo transmission of HTLV-1 to human T cells in NOD-SCID/γcnull (NOG) mice[END_REF]. Studies on pregnant women with HTLV-1 have provided evidence that maternally transferred HTLV-1-specific nAbs can protect against mother-to-child transmission [START_REF] Iwahara | Neutralizing antibody to vesicular stomatitis virus (HTLV-I) pseudotype in infants born to seropositive mothers[END_REF][START_REF] Takahashi | Inhibitory effect of maternal antibody on mother-to-child transmission of human T-lymphotropic virus type I. The Mother-to-Child Transmission Study Group[END_REF] and reviewed by [START_REF] Percher | Mother-to-Child Transmission of HTLV-1 Epidemiological Aspects, Mechanisms and Determinants of Mother-to-Child Transmission[END_REF]. The protection of newborns against breastmilk-transmitted HTLV-1 through passive transfer of plasma nAbs or neutralizing mAbs to the mother was shown in rabbits and rats [START_REF] Fujii | A Potential of an Anti-HTLV-I gp46 Neutralizing Monoclonal Antibody (LAT-27) for Passive Immunization against Both Horizontal and Mother-to-Child Vertical Infection with Human T Cell Leukemia Virus Type-I[END_REF][START_REF] Saunders | Immunoglobulin prophylaxis against milkborne transmission of human T cell leukemia virus type I in rabbits[END_REF].
To map the nAb targets, plasma samples from HTLV-1-infected humans were tested for inhibition of syncytium formation, alone or in competition with overlapping linear peptides spanning both SU and TM. Two peptides, spanning aa 53-75 in the NTD and aa 287-311 in the CTD, blocked the neutralizing activity [START_REF] Desgranges | Identification of novel neutralization-inducing regions of the human T cell lymphotropic virus type I envelope glycoproteins with human HTLV-I-seropositive sera[END_REF]. Another common linear nAb epitope is located in the PRRH region (aa 175-215) (Fig. I-24) [START_REF] Baba | Multiple neutralizing B-cell epitopes of human T-cell leukemia virus type 1 (HTLV-1) identified by human monoclonal antibodies. A basis for the design of an HTLV-1 peptide vaccine[END_REF][START_REF] Blanchard | Amino acid changes at positions 173 and 187 in the human T-cell leukemia virus type 1 surface glycoprotein induce specific neutralizing antibodies[END_REF].
Immunization of rabbits and mice with synthetic peptides spanning such nAb epitopic regions did not induce plasma nAbs [START_REF] Grange | Identification of exposed epitopes on the envelope glycoproteins of human T-cell lymphotropic virus type I (HTLV-I)[END_REF]. In line with this, the majority of plasma antibodies from HTLV-1-infected humans must target conformational epitopes, since a strong loss of reactivity to denatured antigens compared to native antigens was observed in ELISA [START_REF] Hadlock | The humoral immune response to human T-cell lymphotropic virus type 1 envelope glycoprotein gp46 is directed primarily against conformational epitopes[END_REF]. Indeed, human monoclonal nAbs targeting conformational epitopes on SU have been described [START_REF] Hadlock | Neutralizing human monoclonal antibodies to conformational epitopes of human T-cell lymphotropic virus type 1 and 2 gp46[END_REF][START_REF] Hadlock | Epitope mapping of human monoclonal antibodies recognizing conformational epitopes within HTLV type 1 gp46, employing HTLV type 1/2 envelope chimeras[END_REF].
HIV
The HIV-1 precursor Env gp160 is composed of an extracellular SU gp120 domain, responsible for binding to the CD4 receptor and the CCR5/CXCR4 co-receptors, and a TM gp41 subunit that contains the viral fusion machinery. The SU gp120 is composed of an inner and an outer domain (Fig. I-25, panel A) [START_REF] Chan | Core structure of gp41 from the HIV envelope glycoprotein[END_REF][START_REF] Kwong | Structure of an HIV gp120 envelope glycoprotein in complex with the CD4 receptor and a neutralizing human antibody[END_REF][START_REF] Weissenhorn | Atomic structure of the ectodomain from HIV-1 gp41[END_REF]. A key feature of the HIV-1 Env is its extensive glycosylation, with up to 90 N-linked glycans per trimer depending on the strain (Fig. I-25, panel B), shielding its protein surface from exposure to nAbs, reviewed by [START_REF] Dubrovskaya | Vaccination with Glycan-Modified HIV NFL Envelope Trimer-Liposomes Elicits Broadly Neutralizing Antibodies to Multiple Sites of Vulnerability[END_REF][START_REF] Wagh | Hitting the sweet spot: exploiting HIV-1 glycan shield for induction of broadly neutralizing antibodies[END_REF]. This dense layer of glycans is flexible, heterogeneous in its composition of sugar subunits across the Env, and varies between strains [START_REF] Stewart-Jones | Trimeric HIV-1-Env Structures Define Glycan Shields from Clades A, B, and G[END_REF].
HIV-specific nAbs are usually elicited a few weeks post-infection. The early response is directed against strain-specific regions, including the variable loops V1-4 [START_REF] Bar | Early low-titer neutralizing antibodies impede HIV-1 replication and select for virus escape[END_REF], and is usually narrow [START_REF] Piantadosi | Chronic HIV-1 infection frequently fails to protect against superinfection[END_REF]. These autologous nAbs drive the emergence of escape variants, which induce novel nAbs and an arms race between the humoral response and the virus. In a fraction of infected individuals, this process gives rise to bnAbs that target one of seven conserved sites of vulnerability on Env, which are described below. The seventh class of bnAbs recognizes the MPER, reviewed by [START_REF] Caillat | Neutralizing Antibodies Targeting HIV-1 gp41[END_REF]. Most classes of bnAb epitopes are partially or entirely dependent on surface glycans [START_REF] Wagh | Hitting the sweet spot: exploiting HIV-1 glycan shield for induction of broadly neutralizing antibodies[END_REF]. Key characteristics of each site and bnAb mechanisms of action are given below:
V1V2: This epitope is located at the apex of the Env trimer, and loops in this region undergo significant conformational changes upon Env-CD4 engagement. The bnAbs recognizing these loops often target quaternary epitopes and function by stabilizing the Env trimer, thus preventing changes in trimer states, reviewed by [START_REF] Pancera | How HIV-1 entry mechanism and broadly neutralizing antibodies guide structure-based vaccine design[END_REF].
Glycan-V3:
This site is located next to the V1V2 site and is usually composed of high-mannose type glycans (also termed the mannose patch) [START_REF] Pritchard | Structural Constraints Determine the Glycosylation of HIV-1 Envelope Trimers[END_REF]. Epitopes at this site are mainly composed of high-mannose type glycans, but complex-type glycans have also been reported for one bnAb [START_REF] Mouquet | Complex-type N-glycan recognition by potent broadly neutralizing HIV antibodies[END_REF]. Deleting or shifting glycans at this site can lead to neutralization resistance (Pritchard et al., 2015a). The bnAbs targeting this epitope usually interfere with Env binding to its co-receptor or prevent the conformational changes necessary for this interaction [START_REF] Miller | A Structural Update of Neutralizing Epitopes on the HIV Envelope, a Moving Target[END_REF].
CD4-binding site: bnAbs targeting the conserved CD4-binding site are among the most potent discovered so far and function by preventing Env-CD4 binding. This site is located in a cavity and is one of the few sites that do not require bnAb interaction with glycans, despite being highly shielded by glycosylation in the pre-fusion trimer state. Removal of glycans surrounding this site strongly influences the nAb response generated by Env-trimer immunogens in animals [START_REF] Zhou | Quantification of the Impact of the HIV-1-Glycan Shield on Antibody Elicitation[END_REF]. Some of the bnAbs targeting this site function by mimicking the CD4 interaction with Env (Parker [START_REF] Miller | A Structural Update of Neutralizing Epitopes on the HIV Envelope, a Moving Target[END_REF].
Silent face: bnAbs targeting this site were discovered more recently and completed the coverage of the Env surface [START_REF] Zhou | A Neutralizing Antibody Recognizing Primarily N-Linked Glycan Targets the Silent Face of the HIV Envelope[END_REF]. This centrally located region is highly glycosylated, and the antibodies make extensive contacts with these glycans. Although less structural information exists for bnAbs recognizing this site, their mode of action likely involves inhibition of the conformational changes needed for cell entry and complete receptor binding, reviewed by [START_REF] Miller | A Structural Update of Neutralizing Epitopes on the HIV Envelope, a Moving Target[END_REF].
Fusion peptide:
The fusion peptide at the N-ter of gp41 is located in the lower part of the Env trimer. The bnAbs recognizing this peptide function by blocking fusion through stabilization of Env in the pre-fusion 'closed' trimer state and prevention of the transition to the post-fusion 'open' state. The bnAbs targeting this conserved sequence can also engage in glycan interactions, reviewed by [START_REF] Caillat | Neutralizing Antibodies Targeting HIV-1 gp41[END_REF].
Subunit interface:
This site is located between the gp120 and gp41 Env subunits, but bnAbs have also been found to target the interface between two gp41 subunits. The mechanism of action for this class of bnAbs often involves Env trimer disassembly or decay [START_REF] Dubrovskaya | Vaccination with Glycan-Modified HIV NFL Envelope Trimer-Liposomes Elicits Broadly Neutralizing Antibodies to Multiple Sites of Vulnerability[END_REF][START_REF] Lee | Antibodies to a conformational epitope on gp41 neutralize HIV-1 by destabilizing the Env spike[END_REF].
MPER:
This conserved site is targeted by highly potent bnAbs that interfere with membrane fusion. However, their epitope is usually only accessible after Env receptor binding and thus neutralization efficacy is generally lower for cell-to-cell transmission [START_REF] Caillat | Neutralizing Antibodies Targeting HIV-1 gp41[END_REF].
Epitopes at this site are often composed of both Env and membrane components due to its proximal location just above the viral membrane [START_REF] Lee | Cryo-EM structure of a native, fully glycosylated, cleaved HIV-1 envelope trimer[END_REF]. Poly- and autoreactivity of these antibodies are therefore more frequent. Figures adapted from [START_REF] Chuang | Structural Survey of Broadly Neutralizing Antibodies Targeting the HIV-1 Env Trimer Delineates Epitope Categories and Characteristics of Recognition[END_REF][START_REF] Sliepen | HIV-1 envelope glycoprotein immunogens to induce broadly neutralizing antibodies[END_REF].
In summary, this final section highlights common and distinct features between SFV-specific nAbs and nAbs targeting other retroviruses. Key characteristics include:
• SFV-specific nAbs solely target the RBD, while other regions are recognized on Env from MLV, HTLV and HIV
• SFV share a modular env gene structure with MLV, resulting from recombination and with variants encoding for the RBD and nAb epitopes
• SFV-specific nAbs target conserved sequences (within each genotype) similar to nAbs specific for HTLV-1, but in sharp contrast to nAbs against HIV-1
In addition, besides HIV, retroviral envelopes and their recognition by nAbs are poorly described. In this respect, SFV Env has no sequence homology with other retroviral Envs. My PhD project on SFV-specific antibodies has been initiated to understand the immune control of this zoonotic virus, but my work may also provide fundamental knowledge on structural basis for retrovirus inhibition by nAbs.
CHAPTER II
___________________________________________________________________________
2 | PHD THESIS AIMS AND HYPOTHESIS
SFV generates a persistent life-long infection in humans without associated pathology.
CHAPTER III
___________________________________________________________________________
3 | PRESENTATION OF PUBLICATIONS
The work of this PhD thesis is presented as two publications. The first manuscript describes the structure of the RBD from SFV Env, while the second manuscript represents the major work of my PhD on nAb epitopes. These two papers were the results of a collaborative effort.
Accordingly, I will introduce the manuscripts and highlight my precise contribution to this work including a more detailed explanation of the experiments I performed.
Manuscript I: Novel structure of an SFV RBD
We engaged in a collaboration with the lab of Prof. Félix Rey, who is an expert in structural biology.

I first investigated the role of glycosylation in nAb epitopes and observed that complex and high-mannose type glycans did not influence the block of nAbs in our assay. In contrast, deglycosylation had a noticeable effect and significantly decreased the affinity of SU-Ig proteins for binding to plasma nAbs from six of eight donors tested. These results suggested that some epitopes may include a glycan. To identify which glycan is involved in this recognition, I deleted six of the seven individual glycosylation sites within SUvar on the homologous GII-K74 SU-Ig protein; the conserved glycan N8 was not mutated, as it is essential for protein expression. Among all mutants, deletion of glycan N7' had the strongest effect and resulted in a significant loss of nAb blocking activity for five of seven GII-infected donors tested. This glycan is located at the CC of the RBD in the lower domain, in close proximity to the conserved glycan N8. Removal of glycan N10, which has a genotype-specific location, did not affect the block of nAbs for any plasma sample tested.
I also investigated whether nAbs would recognize the novel heparan sulfate binding site that we mapped on the lower domain of the RBD in our collaborator's manuscript. However, proteins harboring the four HBS mutations retained activity equal to that of the WT for three of four GII-infected donors tested. Thus, we conclude that the HBS is not a dominant nAb target.
Next, we looked into the role of functional domains. Thus, we generated RBDj mutant SU-Ig proteins from both genotypes and tested their capacity to inhibit nAbs. Removal of RBDj completely abolished the blocking activity of the SU protein for seven of eight donors, suggesting that the major nAb epitopes are located within this region. We then generated a swap of GII SU-Ig with a GI-RBDj subdomain that blocked plasma nAbs from four GI-infected donors. These results confirm that the RBDj subdomain is a dominant target of nAbs in humans infected with gorilla genotype I strains.
In the 3D structure, RBDj locates to the very apex of the RBD and of the trimer. Moreover, we observed that this region harbors the four loops hypothesized to be involved in trimer stabilization. Among these loops (L1-4), L1 appears buried within the trimer and is likely not accessible to nAbs. Thus, to further define epitopes within this region, we designed SU-Ig proteins with loop mutations for both genotypes. The remaining three apex loops (L2, aa 278-293; L3, aa 410-433; and L4, aa 442-458) were individually deleted. The novel loop mutants demonstrated genotype-specific targeting by nAbs. GI-specific nAbs mostly target the L3 region (CI-PFV L3, aa 411-436), whereas GII-specific nAbs had a wider response and target all three loops. To confirm the findings for GI, we generated a mutant with the GII L3 swapped into the CI-PFV SU-Ig backbone and confirmed that this mutant lost its ability to block nAbs from six GI-specific plasma samples tested. Next, a post-doctoral researcher, Dr. Youna Coquin, produced FVVs with RBDj and loop deletions matching those designed on the SU-Ig proteins. She demonstrated that these mutants bound to SFV-susceptible cells but were not infectious. These data support that nAbs target epitopes on the apex of the RBD that are functionally important for viral entry.
Before our collaborators solved the RBD structure, I used in silico prediction tools to design seven mutations that insert glycans for epitope disruption on the GII SU-Ig backbone. Among these, several mutations were located within or near the apex loops, and some of them confirmed our finding that these loops contain epitopes. Some mutations were also confirmed in the CI-PFV backbone for mapping of GI-specific plasma nAbs. Interestingly, I discovered a GII-specific epitope located in a loop (aa 345-353) on the lower domain of the RBD.
Glycan insertions into this loop strongly abolished the block of GII- but not GI-specific plasma nAbs.
Additional mutations within and nearby this loop including chimeric swaps confirmed this region to be a dominant target of nAbs from humans infected with genotype II gorilla SFV strains.
Collectively, our two manuscripts and previous report allowed us to propose a new model with attribution of functional roles to certain structural features of the SFV RBD. We propose that the upper domain of the RBD and apex loops are involved in protomer-protomer interactions and potentially Env trimer stabilization. In contrast, the lower domain of the RBD contains a putative HBS and is potentially involved in binding to other receptor molecules yet to be identified. My study on nAbs identified epitopes on both domains of the RBD and on the CC of the RBD. More precisely, I identified genotype-specific targets in regions of the upper RBD, while a strictly GII-specific epitope was defined at the lower base of the RBD.
4 | MANUSCRIPT I
The crystal structure of a simian foamy virus receptor-binding domain provides clues about entry into host cells

*Corresponding author: Marija Backovic, [email protected]
Abstract
The surface envelope glycoprotein (Env) of all retroviruses mediates virus binding to cells and fusion of the viral and cellular membranes. A structure-function relationship for the HIV Env, which belongs to the Orthoretrovirus subfamily, has been well established. Structural information is however largely missing for the Env of Foamy viruses (FVs), the second retroviral subfamily. FV Envs lack sequence similarity with their HIV counterpart. We present the X-ray structure of the receptor binding domain (RBD) of a simian FV Env at 2.6 Å resolution, revealing two subdomains and an unprecedented fold. We have generated a model for the organization of the RBDs within the trimeric Env which indicates that the upper subdomain is important for stabilization of the full-length Env, and have demonstrated that residues K342, R343, R359 and R369 in the lower subdomain play key roles in the interaction of the RBD and viral particles with heparan sulfate.
Introduction
Spumaretroviruses, also known as foamy viruses (FVs), are ancient retroviruses that have coevolved with vertebrate hosts for over 400 million years [START_REF] Aiewsakun | Marine origin of retroviruses in the early Palaeozoic Era[END_REF][START_REF] Rethwilm | Evolution of foamy viruses: the most ancient of all retroviruses[END_REF]. FVs are prevalent in nonhuman primates, which can transmit them to humans, most often through bites [START_REF] Pinto-Santini | Foamy virus zoonotic infections[END_REF]. Unlike their better-studied Orthoretrovirinae relatives (HIV being the most notable member), FVs have extremely slowly mutating genomes and do not induce severe pathologies despite integrating into the host genome and establishing lifelong persistent infections [START_REF] Buseyne | Clinical Signs and Blood Test Results Among Humans Infected With Zoonotic Simian Foamy Virus: A Case-Control Study[END_REF][START_REF] Ledesma-Feliciano | Feline Foamy Virus Infection: Characterization of Experimental Infection and Prevalence of Natural Infection in Domestic Cats with and without Chronic Kidney Disease[END_REF]. These features, along with broad tropism and host range [START_REF] Meiering | Historical perspective of foamy virus epidemiology and infection[END_REF], make FVs attractive vector candidates for gene therapy [START_REF] Rajawat | In-Vivo Gene Therapy with Foamy Virus Vectors[END_REF].
Viral fusion proteins drive membrane fusion by undergoing a conformational change, which can be triggered by acidification in an endosomal compartment and / or binding to a specific cellular receptor [START_REF] Harrison | Viral membrane fusion[END_REF][START_REF] White | Fusion of Enveloped Viruses in Endosomes[END_REF]. FVs enter cells by endocytosis, with fusion of the viral and cellular membranes occurring in the endosomal compartment in a pH-sensitive manner, leading to capsid release into the cytosol [START_REF] Picard-Maureau | Foamy virus envelope glycoprotein-mediated entry involves a pH-dependent fusion process[END_REF]. The exception is the prototype FV (PFV) which can also fuse at the plasma membrane [START_REF] Dupont | Identification of an Intermediate Step in Foamy Virus Fusion[END_REF]. The FV fusion protein, the envelope glycoprotein (Env), exhibits the organization of a class I fusogen [START_REF] Rey | Effects of human recombinant alpha and gamma and of highly purified natural beta interferons on simian Spumavirinae prototype (simian foamy virus 1) multiplication in human cells[END_REF], which are synthesized as single-chain precursors and fold into trimers, within which protomers are subsequently cleaved in secretory Golgi compartments. FV Env is cleaved twice by cellular furin during maturation, giving rise to 3 fragments: the leader peptide (LP), the surface (SU) subunit, which has the receptor binding domain (RBD), and the transmembrane subunit (TM), which harbors the fusion machinery. The structural information available for FV Env is limited to cryo-electron tomography (ET) of viral particles and a 9 Å cryo-electron microscopy (EM) reconstruction of PFV Env [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF], which revealed LP-SU-TM trimers arranged in interlocked hexagonal assemblies [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF][START_REF] Wilk | The intact retroviral Env glycoprotein of human foamy virus is a trimer[END_REF], with an architecture that is different to that of HIV Env trimers [START_REF] Pancera | Structure and immune recognition of trimeric pre-fusion HIV-1 Env[END_REF].
Heparan sulfate (HS) is an attachment factor for PFV and feline FV [START_REF] Nasimuzzaman | Cell Membrane-associated heparan sulfate is a receptor for prototype foamy virus in human, monkey, and rodent cells[END_REF][START_REF] Plochmann | Heparan sulfate is an attachment factor for foamy virus entry[END_REF], but the requirements for a surface or intracellular receptor, which would trigger membrane fusion by FV Env, remain unclear. The search for a receptor has been complicated by FV binding to HS, which is ubiquitously expressed on cells, masking potential candidates. A bipartite RBD, consisting of two discontinuous regions of the polypeptide chain, was identified within the FV SU by screening a panel of recombinant SU truncations for binding to cells [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. FV Env is heavily glycosylated, with at least 13 predicted N-linked glycosylation sites. Mutational analysis has revealed that three of these N-sites are essential for PFV infectivity - two located in the TM subunit, and one in the RBD. The latter site, referred to as glycosylation site 8 or N8 [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF], is conserved across the FV subfamily, and has been suggested to play a direct role in binding to a receptor [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF] (to distinguish the nomenclature of the predicted N-linked glycosylation sites (N1 to N15) from the single letter symbol for asparagine residues (N), the former will be underlined throughout the text). The remaining molecular determinants of the RBD interaction with host cells remain elusive, largely due to a lack of structural information, which has precluded rational approaches to mutagenesis and functional analyses. A high-resolution structure of the FV RBD, structural information regarding the organization of the RBDs within the Env trimer, and an understanding of how the RBDs contribute to Env activation are not available. In this manuscript we present the first X-ray structure of the RBD from a zoonotic gorilla simian FV at 2.6 Å resolution, which reveals an entirely novel fold. We propose a model for the RBD assembly in the trimeric Env and report the identification of residues involved in HS binding, with functional and evolutionary implications discussed.
Results
The X-ray structure of the SFV RBD reveals a novel fold
Recombinant RBDs from several simian FV (SFV) strains were tested for production in Drosophila S2 insect cells, and only the RBD from gorilla SFV (strain SFVggo_huBAK74 [START_REF] Khan | Spumaretroviruses: Updated taxonomy and nomenclature[END_REF], genotype II; abbreviated as 'GII' herein) was both expressed in the quantities required for structural studies and also crystallized. The RBD was expected to be heavily glycosylated due to 8 predicted N-glycosylation sites (Fig. IV-1). To increase the chances of generating well-diffracting crystals, a fraction of the purified protein was enzymatically deglycosylated (RBD D ). Crystals were obtained for the RBD D as well as for the untreated protein (RBD G ). The RBD D diffracted better (2.6 Å) than RBD G and the structural analyses presented below were carried out using the RBD D structure, unless otherwise noted. The data collection and structure determination statistics for both crystal forms are summarized in Table S.IV-1.
The SFV RBD folds into two subdomains, each with α+β topology, which we refer to as the 'lower' (residues 218-245, 311-369 and 491-524) and 'upper' subdomains (residues 246-310 and 370-490) in reference to their positioning with respect to the viral membrane [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF] (see below). The overall RBD fold approximates a ~65 Å long bean-shape, with the upper subdomain on the wider side (~45 Å diameter), and the N- and C-termini on the opposite, narrower side (~20 Å diameter) (Fig. IV-1B). The lower subdomain is comprised of a three-helical bundle (α1, α7, α8) that packs against an anti-parallel, twisted, four-stranded β-sheet (β14-β1-β5-β15) and against helix α2 (residues 333-346), which lies perpendicularly on the side of the bundle. Within the helical bundle and the β-sheet, the regions proximal to the N- and C-termini are tied together, and each structure is reinforced by disulfide bonds (DS) DS1 (C228-C503) and DS2 (C235-C318), respectively. Seventy- and 130-residue long segments, forming most of the upper subdomain, are inserted between β1 and β5, and between α4 and β14 of the lower subdomain, respectively (Fig. IV-2). The polypeptide chain extends upwards and back twice, finally yielding the outer strands of the β-sheet (β14 and β15). These secondary structural elements in the lower subdomain contain a prominent hydrophobic core that extends into the upper subdomain, which has lower secondary structure content (Table S.IV-2) and is stabilized by several networks of polar interactions (Fig. S.IV-1, Table S.IV-3).
Four notable protrusions, designated loops 1 to 4 (L1-L4), emanate from the upper subdomain. The SFV RBD therefore represents, to the best of our knowledge, an unprecedented fold. The glycan attached at the conserved N8 site covers a hydrophobic patch of the lower subdomain and prevents aggregation, consistent with the reported misfolding and low levels of the secreted immunoadhesin carrying the RBD with a mutation in the N8 site [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF].

The N8 is the only N-glycosylation site in the SU that is strictly conserved across the FV subfamily (Fig. S.IV-5), and the hydrophobic patch residues lying beneath it are conserved as well (Fig. IV-3C). Thus, N8 likely plays an important structural role in all FV RBDs.
The RBD fold is predicted to be conserved within the Spumaretrovirinae subfamily
To investigate potential conformational differences between RBDs from different species, we used AlphaFold (AF) [START_REF] Jumper | Highly accurate protein structure prediction with AlphaFold[END_REF] software to predict the RBD structures from members of each of the 5 FV genera, some of which exist as two genotypes due to the modular nature of FV Env (Aiewsakun et al., 2019a). Within each FV Env, a 250-residue long region within the RBD, termed the variable or 'SUvar' region, defines two co-circulating genotypes, I and II, which have been found in gorillas [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF], chimpanzees [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF] and mandrills (Aiewsakun et al., 2019a), among others. The SUvar regions share less than
Fitting of the RBD atomic model into Env cryo-EM density map reveals the trimeric RBD arrangement
To investigate the RBD arrangement within trimeric Env, we fitted the RBD atomic model into the 9 Å cryo-EM map reported for trimeric PFV (a chimpanzee genotype I FV) Env expressed on foamy viral vector (FVV) particles [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. The RBD fitting was performed with the fit-in-map function of the Chimera suite [START_REF] Pettersen | UCSF Chimera--a visualization system for exploratory research and analysis[END_REF]. The correlation coefficient of 0.96 strongly suggests that the recombinantly expressed RBD adopts its biologically relevant conformation, as observed at the surface of viral particles. (Figure legend fragment: the loops belonging to each protomer are designated L, L' and L''; images in all three panels were generated in Chimera [START_REF] Pettersen | UCSF Chimera--a visualization system for exploratory research and analysis[END_REF].)
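For readers less familiar with rigid-body fitting of an atomic model into a cryo-EM map, the sketch below illustrates only the reported metric: a real-space correlation coefficient between two density grids, one experimental and one simulated from the fitted model. It is a conceptual toy on synthetic arrays, not the Chimera fit-in-map implementation; the grid size, density blob and noise level are invented for illustration.

```python
import numpy as np

def map_correlation(exp_map: np.ndarray, model_map: np.ndarray) -> float:
    """Real-space correlation coefficient (about the mean) between two grids of equal shape."""
    a = exp_map - exp_map.mean()
    b = model_map - model_map.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Toy example: a synthetic 'experimental' density and a noisy copy standing in
# for the density computed from the fitted atomic model.
rng = np.random.default_rng(0)
experimental = np.zeros((32, 32, 32))
experimental[10:22, 10:22, 10:22] = 1.0                    # block of density
model_based = experimental + 0.05 * rng.standard_normal(experimental.shape)

print(f"correlation coefficient = {map_correlation(experimental, model_based):.2f}")
```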
Positively charged residues in the lower subdomain form a heparan sulfate binding site
To locate potential HS binding regions within the RBD, we investigated the electrostatic surface potential distribution and identified a large, continuous region in the lower subdomain with strong positive potential (Fig. IV-5A). We next analyzed the RBD structure with the ClusPro server, which predicts putative HS binding sites on protein surfaces [START_REF] Kozakov | The ClusPro web server for protein-protein docking[END_REF]. K342 and R343 in the α2 helix, R359 in the following α4 helix, and R369 in an extended chain region were among the residues with the highest number of contacts with the HS models docked onto the RBD. To prove that the designed mutations specifically affected the interaction with cellular HS, we measured binding to HT1080 cells that were pre-treated with heparinase, which removed more than 90% of HS from the cells.
Discussion
FV RBD adopts a novel fold and is composed of two subdomains. We determined the X-ray structure of the RBD from a gorilla FV, revealing a fold that is unrelated to the retroviral RBD structures reported to date, namely those of gammaretroviral Envs [START_REF] Fass | Structure of a murine leukemia virus receptor-binding glycoprotein at 2.0 angstrom resolution[END_REF][START_REF] Fass | Retrovirus envelope domain at 1.7 angstrom resolution[END_REF][START_REF] Mccarthy | Structure of the Receptor Binding Domain of EnvP(b)1, an Endogenous Retroviral Envelope Protein Expressed in Human Tissues[END_REF] and gp120 from HIV (lentivirus genus) (Kwong et al., 1998) (Fig. S.IV-4). This finding expands the repertoire of unique FV features (assembly, particle release [START_REF] Lindemann | The Unique, the Known, and the Unknown of Spumaretrovirus Assembly[END_REF], replication [START_REF] Rethwilm | The replication strategy of foamy viruses[END_REF]) that are not shared with Orthoretroviruses, and is consistent with the lack of Env sequence conservation between Orthoretroviruses and FVs.
The gammaretroviral RBDs are relatively small (~200 residues) and fold into an antiparallel β-sandwich with two extended loops that give rise to a helical subdomain sitting on top of the β-sandwich [START_REF] Fass | Structure of a murine leukemia virus receptor-binding glycoprotein at 2.0 angstrom resolution[END_REF][START_REF] Fass | Retrovirus envelope domain at 1.7 angstrom resolution[END_REF] (Fig. S.IV-4). The helical subdomain defines the tropism for cellular receptors [START_REF] Battini | Receptor choice determinants in the envelope glycoproteins of amphotropic, xenotropic, and polytropic murine leukemia viruses[END_REF] and shows high sequence variability within the genus. HIV interacts with its cognate receptor CD4 through gp120, its SU, which is larger (~450 residues) and has two subdomains, inner and outer. The receptor binding surface of gp120 is formed by secondary structure elements from both subdomains [START_REF] Kwong | Structure of an HIV gp120 envelope glycoprotein in complex with the CD4 receptor and a neutralizing human antibody[END_REF].
Variable loops project out from the gp120 core, and participate in receptor binding and immune evasion [START_REF] Chen | Molecular Mechanism of HIV-1 Entry[END_REF]. It is possible to argue that the global organization of the FV RBD into two subdomains - the lower, which is better conserved, and the upper, which contains the protruding loops and is variable in sequence - is reminiscent of the characteristics described above for the Orthoretrovirus RBDs. Whether the presence of similar features implies similar function remains to be investigated.
The RBDs form a cage-like structure at the membrane-distal side of Env. We fitted the experimentally determined RBD structure into the low-resolution density map (Fig. IV-4A) obtained by cryo-EM single particle reconstruction of trimeric PFV Env expressed on FVV particles [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. The resulting model of the RBD trimer arrangement is consistent with the biochemical and functional data presented here - as expected, the HS binding residues (K342, R343, R356, R369) and seven N-linked carbohydrates map to the solvent-exposed surface of the Env trimer. Based on these observations we speculate that the inter-protomer interactions mediated by the loops maintain a closed Env conformation, stabilizing the Env protein in its metastable pre-fusion form that is displayed on the virion surface. It is possible to envisage that different sets of interacting residues in the loops in different FVs stabilize the RBDs within the trimeric Env sufficiently well to maintain its native state. We hypothesize that the RBDs are loosely bound within native Env to enable them to readily dissociate upon a fusion trigger, which could be delivered in the endosome (acidic pH) and / or by a specific cellular receptor. In that respect, the FV upper domain loops could play a role equivalent to the V1/V2/V3 loops in HIV Env (Wang et al., 2016a). It will also be important to discern the RBD molecular determinants, if any, that drive membrane fusion at the plasma membrane, as used by PFV, in comparison to all the other FVs that fuse in the endosomes [START_REF] Dupont | Identification of an Intermediate Step in Foamy Virus Fusion[END_REF].
The structure explains why the upper RBD subdomain is tolerant to deletions or substitutions. Based on the ability of a panel of recombinant SU truncations to bind to susceptible cells, a bipartite RBD had been proposed (Fig. S.IV-14A) [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. Within the proposed region, the terminal segments were found to be essential for the RBD to retain its cell binding activity, while the central region was dispensable. This RBD region, which is not essential for binding to cells [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF], also termed RBDjoin (Dynesen et al., 2022, submitted), maps to the top of the RBD, is clamped by two intra-region DS bonds, and encompasses L3 and L4 (Fig. S.IV-5). Its location, away from the HS binding residues, is consistent with the ability of the PFV SU truncation lacking the "non-essential" region to bind to cells at the levels measured for the WT protein [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. The AF model of the PFV RBD lacking this RBDjoin region reveals a 3D fold very similar to that of the complete RBD (Fig. S.IV-14B). Considering the RBD arrangement within trimeric Env, which we show is maintained by the loops in the RBDjoin region (Fig. IV-4), we hypothesized that the loop deletions, while tolerated by the isolated, monomeric RBD, would likely be detrimental to the integrity of the trimeric Env and to its function in fusion and entry.
Dynesen et al. indeed show that viruses with Env carrying deletions in the loop regions lose infectivity (Dynesen et al., 2022, submitted).
The lower RBD subdomain carries the residues involved in HS binding. Our data demonstrate that K342/R343 and R356/R369 are the key residues for the RBD interaction with HS immobilized on an inert matrix or expressed on cells. The finding that HS is an attachment factor for SFV expands upon previous reports for PFV [START_REF] Plochmann | Heparan sulfate is an attachment factor for foamy virus entry[END_REF]. The role of HS as an attachment factor for SFV Envs, rather than as an entry receptor, is consistent with the existence of another cell receptor(s) in FV entry. These HS-binding defective Env variants will be useful tools in the search for the potential proteinaceous receptor, as they eliminate binding to HS, which is a widely expressed attachment factor.
Concluding remarks.
In this manuscript we have described the first X-ray structure of a FV RBD and validated that the novel fold is the one adopted in the native Env. We identified, within the RBD, two subdomains in terms of their structure, conservation, and function: the upper subdomain, which encompasses the majority of the genotype-specific region, and is likely involved in maintaining the closed prefusion Env conformation, and a more conserved, lower subdomain, important for binding to the attachment factor HS. We generated AF models for 11 additional FV RBDs, highlighting its conserved three-dimensional conformation.
This information is critical for understanding virus-cell interactions and provides a framework for structure-driven mutagenesis studies necessary for establishing the molecular basis of FV entry and recognition by neutralizing antibodies as described in Dynesen et al. (Dynesen et al., 2022, submitted). The AlphaFold algorithm [START_REF] Jumper | Highly accurate protein structure prediction with AlphaFold[END_REF] cannot predict the arrangement of oligosaccharides at the surface of glycoproteins. The previously reported functional observations on FV Envs, along with the role of N8 can now be understood in light of the experimentally derived structure, underscoring the necessity for structure determination by experimental means. Identification of HS binding residues will aid the search for additional putative FV receptor(s). Insights into the structure-function relationship of the metastable, multimeric and heavily glycosylated FV Env, as well as unraveling the molecular basis of receptor activation and membrane fusion, will require integrated biology efforts and experimental structural methods.
Materials and Methods
Expression construct design (SFV RBD and ectodomains for HS binding studies)
A flow-cytometry assay was developed by Duda et al. to detect binding of recombinantly expressed foamy virus Env variants to cells [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. The boundaries of the RBD construct were guided by that study, which used a panel of SU truncations to delineate the region required for binding to cells. The Phyre2 webserver was also used to design the ectodomain construct, which starts after the first predicted transmembrane helix (S91) and encompasses residues up to I905.
Recombinant SFV RBD and ectodomain production and purification
For structural studies the RBD (residues 218-552, GII-K74 strain, Env accession number JQ867464) [START_REF] Krey | The disulfide bonds in glycoprotein E2 of hepatitis C virus reveal the tertiary organization of the molecule[END_REF] was cloned into a modified pMT/BiP insect cell expression plasmid (Invitrogen) designated pT350, which contains a divalent-cation inducible metallothionein promoter, the BiP signal peptide at the N-terminus (MKLCILLAVVAFVGLSLG), and a double strep tag (DST) (AGWSHPQFEKGGGSGGGSGGGSWSHPQFEK) at the C-terminus, as previously described [START_REF] Krey | The disulfide bonds in glycoprotein E2 of hepatitis C virus reveal the tertiary organization of the molecule[END_REF]. This plasmid was co-transfected into Drosophila Schneider line 2 (S2) cells with the pCoPuro plasmid for puromycin selection [START_REF] Backovic | Stable Drosophila Cell Lines: An Alternative Approach to Exogenous Protein Expression[END_REF]. The cell line underwent selection in serum-free insect cell medium (HyClone, GE Healthcare) containing 7 µg/ml puromycin and 1% penicillin/streptomycin. For the protein production stage, the cells were grown in spinner flasks until the density reached approximately 1 × 10⁷ cells/ml, at which point protein expression was induced with 4 µM CdCl2. After 6 days, the cells were separated by centrifugation, and the supernatant was concentrated and used for affinity purification using a Streptactin column (IBA).
Approximately 20 milligrams of recombinant RBD were obtained per liter of S2 cell culture. The wild-type gorilla GII FV ectodomain was cloned into the pT350 vector and used as a template for generating the heparan-sulphate binding mutants by site-directed mutagenesis.
Drosophila S2 cells were stably transfected with all the vectors, as previously mentioned. Expression of the ectodomains followed the same steps reported for the RBD production and, after 6 days, they were purified from the cell supernatants by affinity chromatography using a StrepTactin column (IBA) and SEC on a Superose 6 10/300 column (Cytiva) in 10 mM Tris-HCl, 100 mM NaCl, pH 8.0. The fractions within the peak corresponding to the trimeric ectodomain were concentrated in VivaSpin concentrators and stored at -80 °C until used.
Crystallization
Crystallization trials were performed in 200 nanoliter sitting drops formed by mixing equal volumes of the protein and reservoir solution in 96-well Greiner plates, using a Mosquito robot, and monitored by a Rock-Imager at the Core Facility for Protein Crystallization at Institut Pasteur in Paris, France [START_REF] Weber | High-Throughput Crystallization Pipeline at the Crystallography Core Facility of the Institut Pasteur[END_REF]. The native RBD D crystal used for data collection was grown in 0.1 M Tris pH 8.5, 3.5 M sodium formate (NaCOOH). For the derivative data, the RBD D crystal, grown in 0.1 M Tris pH 8.5, 3.25 M sodium formate, was soaked overnight in the same crystallization solution supplemented with 0.5 M sodium iodide and directly frozen using the mother liquor containing 33% ethylene glycol as cryo-buffer. The RBD G crystals were obtained from a solution containing 0.2 M ammonium tartrate ((NH4)2C4H4O6) and 20% w/v PEG 3350.
X-ray diffraction data collection and SFV RBD structure determination
The native, the derivative (iodine-soaked) and the 'glycosylated' data were all collected at 100K on the Proxima-1 [START_REF] Chavas | PROXIMA-1 beamline for macromolecular crystallography measurements at Synchrotron SOLEIL[END_REF] or Proxima-2A beamlines at the SOLEIL synchrotron source (Saint Aubin, France), using the Pilatus Eiger X 16M or Eiger X 9M detectors (Dectris), respectively.
We obtained trigonal crystals, space group P3221, for the RBD D (2.6 Å), P3121 (later found to be P3221) for the derivative RBD D (3.2 Å), and hexagonal crystals for the RBD G protein (2.8 Å, space group P61). Diffraction data were processed using XDS (Kabsch, 2010) and scaled and merged with AIMLESS [START_REF] Evans | How good are my data and what is the resolution?[END_REF]. The high-resolution cut-off was based on the statistical indicator CC1/2 [START_REF] Karplus | Linking crystallographic model and data quality[END_REF]. Several applications from the CCP4 suite were used throughout processing [START_REF] Winn | Overview of the CCP4 suite and current developments[END_REF]. The statistics are given in Table S.IV-1.
To solve the structure of the RBD D , the AutoSol pipeline from the Phenix suite [START_REF] Adams | PHENIX: a comprehensive Python-based system for macromolecular structure solution[END_REF][START_REF] Terwilliger | Decision-making in structure solution using Bayesian estimates of map quality: the PHENIX AutoSol wizard[END_REF] was employed, using the anomalous data set, searching for 20 iodine sites and specifying two NCS copies in the asymmetric unit (ASU). AutoSol reliably determined the substructure, composed of 20 iodine sites. The refined anomalous phases were internally used to phase the entire protein with the aid of density modification. The result of the process was a structure with a low R-factor; moreover, the density modified map showed a good contrast between the protein and the solvent and helical features clearly discernible. The initial assignment of the space group of the anomalous data was tentative, as the screw axis that is present in the cell allows for two alternatives (P3121 or P3221). The enantiomorph ambiguity was resolved after density modification with the anomalous phases and model building by looking at the map and its quality. AutoSol unambiguously selected the correct space group, which is P3221. The structure was further improved in Buccaneer [START_REF] Cowtan | The Buccaneer software for automated model building. 1. Tracing protein chains[END_REF] in 'experimental phases' mode, using the density modified map from AutoSol and the refined substructure from AutoSol. Finally, the BUCCANEER model was refined against the native data at 2.6 Å by iterative rounds of phenix.refine [START_REF] Adams | PHENIX: a comprehensive Python-based system for macromolecular structure solution[END_REF], BUSTER [START_REF] Blanc | Refinement of severely incomplete structures with maximum likelihood in BUSTER-TNT[END_REF][START_REF] Bricogne | BUSTER. 2.8.0 edn[END_REF] and Coot [START_REF] Emsley | Coot: model-building tools for molecular graphics[END_REF], which was used throughout all model building and refinement to inspect and manually correct the model.
To solve the structure of the RBD G , the RBD D was used as a search model in Molecular Replacement in Phaser (McCoy et al., 2007) from the Phenix suite. In this case, the ASU was found to contain two molecules, which were again refined using a combination of BUSTER and phenix.refine.
For both models, the 2Fo-Fc and Fo-Fc electron density maps were used to unambiguously identify the carbohydrate moieties and to build them. For both models, the final stereochemistry was assessed by MolProbity (http://molprobity.biochem.duke.edu/) [START_REF] Chen | MolProbity: all-atom structure validation for macromolecular crystallography[END_REF].
The final maps showed clear, interpretable electron density, except for a region comprising residues 419-427, precluding model building for these 9 amino acids and indicating inherent flexibility of this region. The atomic models were refined to Rwork/Rfree of 0.21/0.25 and 0.19/0.23 for the RBD D and RBD G crystals, respectively.
Cells, viral sequences and production of foamy virus viral vectors
Baby Hamster Kidney (BHK)-21 cells (ATCC CCL-10) were cultured in DMEM-glutamax-5% fetal bovine serum (FBS) (PAA Laboratories). HT1080 cells (ECACC 85111505) were cultured in EMEM-10% FBS supplemented with 1x L-glutamine and 1x non-essential amino acids (NEAA).
Human embryonic kidney 293T cells (CRL-3216) were cultured in DMEM-glutamax-10% FBS.
Foamy virus isolates were named according to the revised taxonomy [START_REF] Khan | Spumaretroviruses: Updated taxonomy and nomenclature[END_REF] and short names were used for gorilla and chimpanzee strains [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. The four-component FVV system (plasmids pcoPG, pcoPP, pcoPE, pcu2MD9-BGAL (a transfer plasmid encoding β-galactosidase)) and the gorilla Env construct containing sequences from the zoonotic GI-D468 (JQ867465) and GII-K74 (JQ867464) env genes (EnvGI-SUGII) have been described [START_REF] Hütter | Prototype foamy virus protease activity is essential for intraparticle reverse transcription initiation but not absolutely required for uncoating upon host cell entry[END_REF][START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF], and in Dynesen et al. (Dynesen et al., 2022, submitted). Briefly, the genotype II Env construct we used (EnvGI-SUGII) is comprised of the SU from the GII-BAK74 strain and the LP and TM from the GI strain BAD468, the latter two being very conserved between GI and GII [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF].
Mutations in the predicted heparan sulfate binding site of the RBD (K342A/R343A and R356A/R369A) were introduced into this gorilla Env plasmid containing the full-length GII SU. FVV genomes were quantified by RT-qPCR, with amplification cycles (15 s at 95°C, 20 s at 60°C and 30 s at 72°C) carried out with an Eppendorf realplex2 Mastercycler (Eppendorf). A standard curve prepared with serial dilutions of the pcu2MD9-BGAL plasmid was used to determine the copy number of FVVs. Results were expressed as vector particles/ml, considering that each particle carries 2 copies of the transgene.
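As a worked illustration of the titration arithmetic summarized above, the sketch below converts Ct values into transgene copies through a log-linear standard curve and then into vector particles/ml using the stated assumption of two transgene copies per particle. All numerical values (Ct values, dilution factor, reaction volume) are hypothetical placeholders, not data from this work.

```python
import numpy as np

# Standard curve: serial dilutions of the pcu2MD9-BGAL plasmid with known copies per reaction
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
std_ct = np.array([14.1, 17.5, 20.9, 24.3, 27.8])          # hypothetical Ct values

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                     # amplification efficiency

def copies_from_ct(ct: float) -> float:
    """Interpolate transgene copies per reaction from the standard curve."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical sample: Ct measured on 5 µl of a 1:100 dilution of the vector stock
ct_sample, reaction_volume_ml, dilution = 22.0, 0.005, 100
copies_per_ml = copies_from_ct(ct_sample) * dilution / reaction_volume_ml
particles_per_ml = copies_per_ml / 2                        # 2 transgene copies per particle

print(f"PCR efficiency ~ {efficiency:.2f}; vector particles/ml ~ {particles_per_ml:.2e}")
```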
Prediction of RBD heparan-binding site and mutant design
The ClusPro server (https://cluspro.org/login.php) was used to identify a potential heparin-binding site [START_REF] Desta | Performance and Its Limits in Rigid Body Protein-Protein Docking[END_REF][START_REF] Kozakov | The ClusPro web server for protein-protein docking[END_REF][START_REF] Mottarella | Docking server for the identification of heparin binding sites on proteins[END_REF](Vajda et al., 2017). The server generated 13 models of a fully sulfated tetra-saccharide heparin fragment docked to the FV RBD, together with a list of atom-atom contacts between the heparin chain and the protein residues, which was used to generate the contact plots.
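The contact analysis can be reproduced in spirit with a short script: given one docked model in PDB format, count heavy-atom contacts below a distance cutoff between the heparin fragment and each protein residue. This is a generic distance-threshold sketch, not the ClusPro output parser; the file name, the chain identifier used for heparin, and the 4.0 Å cutoff are assumptions.

```python
import numpy as np
from collections import Counter

def load_atoms(pdb_path: str):
    """Minimal PDB reader: (chain, residue name, residue number, xyz) for ATOM/HETATM records."""
    atoms = []
    with open(pdb_path) as handle:
        for line in handle:
            if line.startswith(("ATOM", "HETATM")):
                xyz = np.array([float(line[30:38]), float(line[38:46]), float(line[46:54])])
                atoms.append((line[21], line[17:20].strip(), int(line[22:26]), xyz))
    return atoms

def contacts_per_residue(protein_atoms, ligand_atoms, cutoff=4.0):
    """Count atom-atom pairs closer than `cutoff` (Å) between each protein residue and the ligand."""
    ligand_xyz = np.array([a[3] for a in ligand_atoms])
    counts = Counter()
    for chain, resname, resnum, xyz in protein_atoms:
        n_contacts = int(np.sum(np.linalg.norm(ligand_xyz - xyz, axis=1) < cutoff))
        if n_contacts:
            counts[(chain, resname, resnum)] += n_contacts
    return counts

# Hypothetical usage: heparin placed in chain 'H' of one docked model
atoms = load_atoms("docked_model_01.pdb")
protein = [a for a in atoms if a[0] != "H"]
heparin = [a for a in atoms if a[0] == "H"]
ranked = sorted(contacts_per_residue(protein, heparin).items(), key=lambda item: -item[1])
for residue, n_contacts in ranked[:10]:
    print(residue, n_contacts)
```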
Heparan sulfate removal and detection
Cells were treated with Trypsin-EDTA and 5 x 10 5 cells were labelled per condition. Cells were washed once with PBS-0.1% BSA prior to incubation with 0.1 mIU/ml heparinase III from Flavobacterium heparinum (Sigma-Aldrich, #H8891) in 20 mM Tris-HCl, 0.1 mg/mL BSA and 4 mM CaCl2, pH 7.45 for 15 min. at 37°C. Heparan sulfate was detected by staining with F58-10E4 antibody (5 µg/ml, AmsBio, UK #370255-S) and anti-mouse IgM-AF488 antibodies (2 µg/ml, Invitrogen #A-21042). The neoantigen generated by HS removal (ΔHS) was detected with the F69-3G10 antibody (10 µg/ml, AmsBio #370260-S) and anti-mIgG-AF647 antibodies (4 µg/ml, Invitrogen #A-31571). Cell staining and washing were performed in PBS-0.1% BSA at 4°C. Incubation times were 60 and 30 min for primary and secondary antibodies, respectively.
Cytometer acquisition and data analysis were performed as described for Env binding (Fig. S.IV-12). Cells labelled with secondary antibodies only were used as a reference. Levels of HS and ΔHS staining were expressed as the ratio of MFI from labelled to unlabeled cells (Fig. S.IV-12C).
FVVs binding assay
HT1080 cells were incubated with FVV particles (1, 10 and 100 particles/cell) on ice for 1h.
Cells were washed 3 times with PBS to eliminate unbound FVVs and RNAs were extracted using the RNeasy plus mini Kit (Qiagen) according to the manufacturer's protocol. RT was performed as described for FVV RNA quantification. Bound FVVs were quantified by qPCR of the bgal gene as described for vector titration; cells were quantified by a qPCR amplifying the hgapdh gene with the following primers: hGAPDH_F 5' GGAGCGAGATCCCTCCAAAAT 3' and hGAPDH_R 5' GGCTGTTGTCATACTTCTCATGG 3'. The qPCR reaction conditions were the same as those used to amplify the bgal gene. Relative expression of bgal versus hgapdh was calculated using the ΔΔCt method, and relative binding was expressed as 2^(-ΔΔCt).
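The relative-binding arithmetic can be made explicit with a short 2^(-ΔΔCt) sketch; the Ct values and the choice of the wild-type FVV condition as reference are hypothetical and only illustrate the calculation. Note that this form assumes comparable amplification efficiencies for the bgal and hgapdh reactions.

```python
def relative_binding(ct_bgal, ct_gapdh, ct_bgal_ref, ct_gapdh_ref):
    """2^-(ΔΔCt): bgal signal normalized to hgapdh, relative to a reference condition."""
    delta_ct = ct_bgal - ct_gapdh              # ΔCt of the sample
    delta_ct_ref = ct_bgal_ref - ct_gapdh_ref  # ΔCt of the reference (e.g., WT SU FVV)
    ddct = delta_ct - delta_ct_ref             # ΔΔCt
    return 2 ** (-ddct)

# Hypothetical Ct values: binding of a mutant FVV relative to the WT FVV
print(f"relative binding = {relative_binding(26.8, 18.2, 25.1, 18.0):.2f}")
```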
Statistics
The infectious titers, particle concentration, percentages of infectious particles and quantity of bound FVVs carrying wild-type and mutant SU were compared using the paired t-test.
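A minimal sketch of the paired comparison described above is given below, using SciPy's paired t-test; the titer values are invented placeholders and only illustrate how matched WT/mutant measurements from the same experiments would be compared.

```python
from scipy import stats

# Hypothetical infectious titers from paired productions (same experiment, WT vs mutant SU)
wt_titers = [5.2e5, 3.9e5, 6.1e5, 4.4e5, 5.8e5]
mutant_titers = [1.1e5, 0.9e5, 1.6e5, 1.2e5, 1.4e5]

t_statistic, p_value = stats.ttest_rel(wt_titers, mutant_titers)
print(f"paired t-test: t = {t_statistic:.2f}, p = {p_value:.4f}")
```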
Data availability
The data related to the X-ray structures determined for the SFV GII RBD D and RBD G have been deposited to the RCSB protein databank under PDB accession codes 8AEZ and 8AIZ, respectively.
CHAPTER V
___________________________________________________________________________
MANUSCRIPT II
Introduction
Foamy viruses (FVs) are the most ancient of retroviruses [START_REF] Pinto-Santini | Foamy virus zoonotic infections[END_REF][START_REF] Rethwilm | Evolution of foamy viruses: the most ancient of all retroviruses[END_REF]. Simian foamy viruses (SFVs) are widespread in nonhuman primates (NHPs), replicate in the buccal cavity, and are transmitted to humans mostly through bites by infected
NHPs [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF][START_REF] Filippone | A Severe Bite From a Nonhuman Primate Is a Major Risk Factor for HTLV-1 Infection in Hunters From Central Africa[END_REF][START_REF] Pinto-Santini | Foamy virus zoonotic infections[END_REF]. Such cross-species transmission events currently occur in Asia, Africa, and the Americas. Most individuals known to be infected with zoonotic SFV live in Central Africa, the region at the epicenter of the emergence of HIV-1 and HTLV-1 from their simian reservoirs. In their human hosts, SFVs establish a life-long persistent infection associated with subclinical pathophysiological alterations [START_REF] Buseyne | Clinical Signs and Blood Test Results Among Humans Infected With Zoonotic Simian Foamy Virus: A Case-Control Study[END_REF][START_REF] Gessain | Case-control study of the immune status of humans infected with zoonotic gorilla simian foamy viruses[END_REF]. Thus far, neither severe disease nor human-to-human transmission have been described, suggesting efficient control of SFV replication and transmission in humans.
SFV infection elicits envelope (Env)-specific neutralizing antibodies (nAbs) that block the entry of viral particles into susceptible cells in vitro [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. Although they do not block cell-to-cell infection in vitro, antibodies prevent cell-associated SFV transmission by transfusion in monkeys [START_REF] Couteaudier | Plasma antibodies from humans infected with zoonotic simian foamy virus do not inhibit cell-to-cell transmission of the virus despite binding to the surface of infected cells[END_REF][START_REF] Williams | Role of neutralizing antibodies in controlling simian foamy virus transmission and infection[END_REF]. SFV Env is cleaved into three subunits, the leader peptide (LP), the surface subunit (SU), which binds to cells, and the transmembrane (TM) subunit, which carries out fusion. In several SFV species, there are two variants of the env gene, defining two genotypes (Aiewsakun et al., 2019a;[START_REF] Galvin | Identification of recombination in the envelope gene of simian foamy virus serotype 2 isolated from Macaca cyclopis[END_REF][START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]). The two variants differ in a discrete region encoding 250 residues, named SUvar, located at the center of the SU and overlapping with most of the receptor-binding domain (RBD) [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. The RBD is thus bimorphic and the exclusive target of genotype-specific nAbs [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF].
We have determined the RBD structure, which we describe in a co-submitted paper (Fernandez et al., 2022, submitted). Here, we present the epitopic sites that were concomitantly defined by the use of plasma samples from SFV-infected African hunters. These individuals were infected by SFV of gorilla origin through bites [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF]. We have previously described the infecting strains, ex vivo blood target cells, antibody response, and medical status of these individuals [START_REF] Buseyne | Clinical Signs and Blood Test Results Among Humans Infected With Zoonotic Simian Foamy Virus: A Case-Control Study[END_REF][START_REF] Couteaudier | Inhibitors of the interferon response increase the replication of gorilla simian foamy viruses[END_REF][START_REF] Couteaudier | Plasma antibodies from humans infected with zoonotic simian foamy virus do not inhibit cell-to-cell transmission of the virus despite binding to the surface of infected cells[END_REF][START_REF] Gessain | Case-control study of the immune status of humans infected with zoonotic gorilla simian foamy viruses[END_REF][START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF][START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF][START_REF] Rua | In vivo cellular tropism of gorilla simian foamy virus in blood of infected humans[END_REF].
We expressed the SU as a soluble recombinant protein that competes with the SU present on viral vector particles for binding to plasma nAbs in a neutralization assay. We defined the regions targeted by the nAbs using mutant SU proteins modified at the glycosylation sites [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF], RBD functional subregions [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF], genotype-specific sequences that present properties of B-cell epitopes, and structural information (Fernandez et al., 2022, submitted). Finally, we tested whether immunodominant epitopes recognized by nAbs are involved in SU binding to and viral entry into susceptible cells.

Figure legend (excerpt): The genotype-specific variable SU region (SUvar, aa 248-488) partially overlaps with the RBD and is the exclusive target of nAbs [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. C. The structure of the GII-K74 RBD monomer (aa 218-552, PDB code 8AEZ, (Fernandez et al., 2022, submitted)) is presented in cartoon with glycans shown as sticks. RBD1, RBD2, and RBDj regions are color-coded as on panel A. Structural elements relevant for the present study are indicated, and the full description of the structure is available in (Fernandez et al., 2022, submitted). D. The SUvar region (red) is rendered as solvent accessible surface. The conserved region (SUcon) is displayed in cartoon. The dashed line indicates the boundary between the lower and upper RBD subdomains.
Results
Infrequent binding of plasma antibodies to linear epitopes located on the SUvar domain
We started the epitope mapping project by screening plasma samples for binding to 37 peptides covering the SUvar domain. Plasma antibodies bound to GI and/or GII peptides irrespective of the SFV genotype against which they were raised. In conclusion, the binding study showed that most SFV-specific antibodies do not recognize genotype-specific linear epitopes.

Figure legend (excerpt): Seventeen plasma samples from African hunters were tested for binding to 37 peptides located in the SUvar domain (Table S.V-2). The summary graph shows positive responses (ΔOD, y-axis) plotted against peptides identified by the position of their first aa (x-axis). Plasma samples are identified by a color code corresponding to the genotype(s) of the infecting strains (blue: infected with a GI strain, red: infected with a GII strain, purple: infected by strains of both genotypes [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]). Left, peptides spanning GI SUvar; right, peptides spanning GII SUvar; the RBDj region is indicated by the lighter color. Detailed binding activity is shown on the aligned SU sequences; the RBDj sequence is highlighted in italic characters; the sequence covered by peptides is highlighted by a grey background. Recognized peptides are underscored and designated by the same letters as those used in panel A. Reactive plasma samples are indicated above the sequence and colored according to the genotype(s) of the infecting strains. Six plasma samples from SFV-infected individuals reacted against seven peptides located in the RBDj region (BAD551, BAK55, BAK56, BAK74, BAK82, and BAK132) and two samples reacted against a peptide located in the RBD1 or RBD2 subdomains (BAD468 and BAK56, respectively). Plasma antibody binding to the peptides was not genotype-specific. For example, sample BAK56 reacted against peptide b (399W-K418) from the GI-D468 and GII-K74 strains.
SFV SU protein competes with the virus for binding to nAbs
We sought to define nAb epitopes by performing neutralization assays in the presence of recombinant SU protein that competes with the SU within Env present at the surface of foamy viral vector (FVV) particles. While wild-type (WT) SU bound to the nAbs, allowing cell infection to proceed, SU mutants with altered nAb epitopes did not (Fig. V-3A and V-3B). We tested several recombinant Env and SU expression constructs, of which the production level and stability varied according to the genotype: Gorilla genotype II (GII) Env-derived constructs were expressed at high levels, whereas gorilla genotype I (GI) Env-derived counterparts were poorly expressed and aggregated (Fig. S.V-2A). The immunoglobulin Fc domain acts as a secretion carrier for the SU, giving the construct high stability. Such chimeric proteins - referred to as immunoadhesins - have been used to produce soluble SFV SU [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. The GI SU immunoadhesin was expressed at insufficient levels, but we readily produced the chimpanzee genotype I (CI) and GII SU immunoadhesins. We therefore studied epitopes recognized by GII-specific nAbs with the GII immunoadhesin (referred to as GII SU for the WT sequence) and those recognized by GI-specific nAbs with the CI immunoadhesin ( CI SU), as allowed by the frequent cross-neutralization of gorilla and chimpanzee strains belonging to the same genotype [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF] and the CI and GI SUvar sequence identity of 70% [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].
Figure V-3 - The SFV SU blocks nAbs without affecting viral entry
A. Schematic presentation of the neutralization assay using SU immunoadhesin as competitor. Plasma samples were diluted to achieve a reduction in the number of FVV-transduced cells by 90%. WT immunoadhesins compete with Env on FVV for binding by nAbs, resulting in a higher number of FVV-transduced cells; mutations in SU that affect binding by nAbs result in inefficient competition and reduced FVV transduction. Representative images of wells with FVV-transduced cells stained by X-gal are shown. B. Schematic representation of titration curves corresponding to SU mutants that lose their capacity to block a fraction of nAbs (mut1, green), block all nAbs with reduced affinity (mut2 SU, orange), block a fraction of nAbs with reduced affinity (mut3, pink), or have no blocking activity (mut4, purple). The curves are summarized by two parameters, MaxI and IC50, which were compared to those obtained with WT SU (panels C and D) to define significant differences in binding (see Materials and Methods). C. The BAK132 (anti-GI) and MEBAK88 (anti-GII) plasma samples were incubated with immunoadhesins and the mix added to FVVs expressing matched Env before titration. The relative proportion of transduced cells is expressed as the percentage of cells transduced by untreated FVVs (no plasma and no protein), is referred to as the relative infectivity, and is presented as a function of protein concentration. The addition of CI SU (blue symbols) inhibited the action of nAbs from sample BAK132, as shown by increased CI-PFV Env FVV relative infectivity, whereas GII SU (red symbols) had no effect. Conversely, GII SU inhibited the action of nAbs from sample MEBAK88. MLV SU (grey symbols) had no effect on the plasma antibodies. D. The infectivity of the CI-PFV and GII-K74 Env vectors was quantified in the presence of CI SU, GII SU, and MLV SU; the relative infectivity is presented as a function of protein concentration. E and F. Thirteen pairs of plasma samples and genotype-matched immunoadhesins were tested at least five times for their activity against FVVs. The mean and standard error of the mean of the IC50 (panel E) and MaxI (panel F) are shown for the CI-PFV Env vectors and anti-GI plasma samples (blue symbols) and the GII-K74 Env vectors and anti-GII plasma samples (red symbols).
We validated the competition strategy with genotype-matched and mismatched immunoadhesins (Fig. V-3, Table S.V-3 and S.V-4). We tested plasma samples from gorilla SFV-infected hunters at the dilution required for 90% reduction of FVV infectivity (IC90) to allow Env-specific antibody saturation by the competitor immunoadhesin. Thus, plasma samples were tested at the same nAb concentration and responses from different individuals could be compared. We incubated immunoadhesins with plasma samples before addition to a FVV expressing the matching envelope protein and titration of residual infectivity. Both CI SU and GII SU blocked the action of nAbs present in genotype-matched plasma samples in a concentration-dependent manner but failed to block the activity of genotype-mismatched samples (Fig. V-3C). Importantly, the immunoadhesins did not affect SFV entry (Fig. V-3D, [START_REF] Berg | Determinants of foamy virus envelope glycoprotein mediated resistance to superinfection[END_REF]), allowing their use as competitors for binding to nAbs in an infectivity assay. The unrelated MLV SU construct ( MLV SU) had no impact on either the neutralizing activity of plasma samples or SFV infectivity (Fig.
V-3C and V-3D). We titrated 12 samples against matched CI SU and GII SU, including one from a coinfected individual which was tested against both immunoadhesins, leading to 13 sample-immunoadhesin pairs tested.
The effect of an immunoadhesin on nAbs can be described by its affinity and by the fraction of plasma nAbs blocked. The IC50, a measure of affinity, ranged between 3.3 and 12.6 nM for CI SU and between 2.9 and 12.5 nM for GII SU, depending on the plasma sample (Fig. V-3E). Taking FVV treated with MLV SU without plasma as the reference for full infectivity (100%), CI SU and GII SU restored the infectivity of plasma-treated FVV to between 56% and 94%. This value is referred to as maximum infectivity (MaxI) and quantifies the proportion of nAbs blocked by the immunoadhesin (Fig.
V-3B and V-3F). We used the values from at least five independent experiments to calculate the IC50 and MaxI threshold values to define the nAb blocking activity of mutated recombinant proteins as being significantly different from that of WT SU (Fig.
V-3B).
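As an illustration only, this threshold calculation (following the mean ± 3 SD rule detailed in the Statistical analysis section of the Materials and Methods) can be sketched in a few lines of Python; all values and variable names below are hypothetical and do not correspond to the actual analysis pipeline:

```python
import numpy as np

def wt_thresholds(ic50_wt, maxi_wt):
    """Per-plasma thresholds for calling a mutant significantly different from WT SU."""
    ic50_wt = np.asarray(ic50_wt, dtype=float)
    maxi_wt = np.asarray(maxi_wt, dtype=float)
    ic50_threshold = ic50_wt.mean() + 3 * ic50_wt.std(ddof=1)  # mutant IC50 above this -> reduced affinity
    maxi_threshold = maxi_wt.mean() - 3 * maxi_wt.std(ddof=1)  # mutant MaxI below this -> smaller nAb fraction blocked
    return ic50_threshold, maxi_threshold

# Hypothetical WT SU values from five independent experiments against one plasma sample
ic50_thr, maxi_thr = wt_thresholds([4.1, 5.0, 3.8, 4.6, 4.4], [88, 91, 85, 90, 87])
mutant_ic50, mutant_maxi = 14.2, 52.0  # hypothetical values for one mutant immunoadhesin
print(f"IC50 threshold {ic50_thr:.1f} nM, MaxI threshold {maxi_thr:.0f}%")
print(f"reduced affinity: {mutant_ic50 > ic50_thr}; reduced fraction blocked: {mutant_maxi < maxi_thr}")
```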
We evaluated the impact of SU oligomerization and mammalian-type glycosylation on its capacity to block nAbs by testing the following constructs: SU monomers produced in mammalian and insect cells, trimeric Env ectodomains produced in insect cells, and dimeric SU (i.e., immunoadhesin) produced in mammalian cells. All had the same capacity to block the nAbs (Fig. S.V-2B). Therefore, we used the only recombinant proteins available for both genotypes, the SU immunoadhesins. We first studied GII-specific nAbs using GII SU before testing GI-specific nAbs using CI SU.
Certain nAbs target glyco-epitopes
The GII SUvar domain carries seven of the 11 SU glycosylation sites (Fig.
V-4A and V-4B).
We assessed whether these glycans are recognized by nAbs or whether they shield epitopes from nAb recognition. We produced GII SU in the presence of the mannosidase inhibitor kifunensine, which prevents complex-type glycan addition, or treated it with endo-H to remove all glycans. The absence of complex glycans had no impact on nAb blockade by GII SU (Fig. V-4C). The N10 glycosylation site is located at different positions for the two genotypes and may be part of a potential epitopic region (N422/N423 on GI-D468/CI-PFV strains, N411 on the GII-K74 strain). The N10 glycosylation knock-down mutant ( GII ΔN10) was as efficient as GII SU in blocking nAbs from all plasma samples (Fig. V-4E). N10 belongs to a stretch of seven genotype-specific residues that were replaced by those from the GI-D468 strain. The chimeric protein ( GII swap407) blocked nAbs as efficiently as GII SU for three samples and showed decreased affinity against nAbs for one sample (Fig. V-4F). Thus, GII-specific nAbs occasionally target the N10 glycosylation site.

To identify other glycosylation sites targeted by nAbs, we knocked them down one by one, except for N8, which is critical for SU expression [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF]. The GII ΔN7' mutant showed reduced capacity to block nAbs from five of the seven samples (Fig. V-4J). GII ΔN5 and
To identify other glycosylation sites targeted by nAbs, we knocked them down one by one, except for N8, which is critical for SU expression [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF]. The GII ΔN7' mutant showed reduced capacity to block nAbs from five of the seven samples (Fig. V-4J). GII ΔN5 and A. Schematic representation of N-glycosylation sites on the SUvar domain. The N7' site (position 374, brown symbol) is absent from CI-PFV but present in zoonotic gorilla SFV and several chimpanzee SFVs [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]. The N8 site (position 391, bold stem) is required for SU expression [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF]. The N10 site has a genotype-specific location (N411 in GII strains, red symbol; N422 or 423 in GI/CI strains, blue symbol). The glycosylation sites outside SUvar are shown in grey. B. The RBD is shown as a solvent accessible surface, with SUvar in dark grey and SUcon in light grey. Side chains of the glycans are shown in orange and were identified on deglycosylated RBD; the N5 glycan was poorly resolved and N286 is colored to indicate the localization of its anchor. C and D. To determine whether SFV-specific nAbs target residue glycosylated epitopes, vectors carrying CI-PFV or GII-K74 Env were mixed with four genotype-matched plasma samples previously incubated with untreated, kifunensine (C), or kifunensine and endo-H treated (D) GII SU at several concentrations. E. To test whether nAbs target the genotype-specific N10 glycosylation site, the immunoadhesin in which N10 was inactivated ( GII ΔN10) was incubated with four genotype-matched plasma samples. F. Residues 407-413 were swapped with those from the GI-D468 strain and the resulting GII swap407 was tested for its ability to block four anti-GII plasma samples. G to K. The glycosylation sites located on the immunoadhesins were inactivated one by one (except N8) and tested for their inability to block four GII-specific plasma samples. GII ΔN7' was tested against three additional samples to confirm its impact on nAbs (J). For each plasma sample, the IC50 is presented as a function of MaxI for untreated and enzyme-treated GII SU (C and D) or GII SU and mutated immunoadhesins (E to K). The IC50 and MaxI values for untreated GII SU are presented as open symbols and are from the same experiment in which the mutant immunoadhesins were tested. For the enzyme-treated and mutated immunoadhesins, the symbols are colored according to the IC50 and MaxI thresholds that were used to statistically define significant differences from GII SU.
SFV-specific nAbs target loops at the RBD apex
Next, we aimed to locate nAb epitopes on the SU subdomains involved in binding to susceptible cells (Aiewsakun et al., 2019a). Two regions of the RBD were identified as being essential for binding (RBD1 and RBD2) and the region in between (RBDj) could be deleted without fully compromising cell binding (Fig. V-1A, V-1C, V-5A and V-5B). The SUvar region encompasses RBD1 (except its first 16 N-term residues), RBDj, and the seven N-terminal residues of RBD2 (Fig. V-1B, V-1D). We first constructed a mutant immunoadhesin with RBDj deleted. GII ΔRBDj was highly impaired in blocking nAbs from three individuals: the IC50 values were above the highest concentration tested (200 nM) and the MaxI values were close to 10%, the value of plasma samples incubated with the irrelevant MLV SU (Fig. V-5C). By contrast, GII ΔRBDj blocked nAbs from the BAD551 plasma sample as efficiently as GII SU.
Then, we used the recently discovered RBD structural elements. In the lower subdomain, the main candidate epitopic region was the heparan sulfate glycosaminoglycan binding site (HBS).
The GII Env ectodomain mutated on HBS residues ( GII K342A/R343A or GII R356A/R369A, (Dynesen et al., 2022, submitted)) blocked nAbs from three samples as efficiently as their WT counterpart and showed lower affinity for the MEBAK88 plasma sample (Fig. V-5D and V-5E). Thus, nAb activity is modestly affected by mutated HBS residues.
The X-ray structure of the RBD showed that four loops emanate from the upper domain (Fernandez et al., 2022, submitted). L1 is shielded at the center of the trimer and is probably not accessible to nAbs, whereas L2 (276-281), L3 (416-436), and L4 (446-453) are exposed to solvent, mobile, and therefore considered to be candidate epitopic regions. Furthermore, L3 and L4 are located in the RBDj region, which is a major target of nAbs (Fig. V-5C). We deleted the loops individually (Fig. V-5F-H). GII ΔL3 blocked all GII-specific samples but one, although with lower affinity. GII ΔL2 and GII ΔL4 showed plasma-dependent effects, with three patterns:
(1) full activity (such as against BAD551), (2) full loss or a strong reduction of activity (for example, against BAK133), or (3) reduction of the MaxI with unchanged affinity (such as against MEBAK88). The last pattern probably reflects the presence of nAbs targeting different epitopes within the same plasma sample, some epitopes being altered on mutant immunoadhesins while others are not.
Even though L2, L3, and L4 are mobile in the structure of the RBD monomer (Fernandez et al., 2022, submitted), the plasma samples did not recognize synthetic peptides spanning these loops (Fig. S.V-5), indicating that they may be part of conformational epitopes. Overall, we show that nAbs from most samples target conformational epitopes at the RBD apex, some being carried by L2, L3, and L4.

A. Schematic representation of the SU showing the RBD1, RBDj, and RBD2 regions defined in [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF], loops, and HBS identified in (Fernandez et al., 2022, submitted). B. The RBD is shown as a solvent accessible surface with SUvar in dark grey, SUcon in light grey, L2 in orange, L3 in green, L4 in maroon, and HBS in black. To locate SFV-specific nAb epitopes on SU functional domains, vectors carrying GII-K74 Env were treated with GII-specific plasma samples previously incubated with GII SU and mutant immunoadhesin with RBDj deleted ( GII ΔRBDj, C), the Env ectodomain with WT or mutated HBS ( GII K342A/R343A in panel D and GII K356A/R369A in panel E), and immunoadhesins with deleted loops ( GII ΔL2, GII ΔL3, and GII ΔL4 in panels F to H). For each plasma sample, the IC50 is presented as a function of MaxI for GII SU and the mutated immunoadhesins. The IC50 and MaxI values for GII SU are presented as open symbols and are from the same experiment in which the mutant immunoadhesins were tested. Symbols are colored according to the IC50 and MaxI thresholds used to define statistically significant differences from GII SU; we applied the same threshold values for the analyses of the Env ectodomain, which had the same inhibitory activity as GII SU (Fig. S.V-2).
SFV-specific nAbs target the base of the SUvar region
Before the 3D structure was available, we used the insertion of N-linked glycosylation sequences (NXS/T) to disrupt potential epitopes. We selected seven genotype-specific sequences predicted to have a disordered secondary structure or to contain a B-cell epitope and built mutant immunoadhesins with a glycosylation site inserted in the targeted sequence (Fig. V-6A, V-6B, Table S.V-5) [START_REF] Aiewsakun | Modular nature of simian foamy virus genomes and their evolutionary history[END_REF]. Four of the seven glycans were inserted in the upper subdomain and resulted in decreased affinity against certain plasma nAbs: GII 426glyc in L3, GII 450glyc in L4, GII 459glyc after L4, and GII 485glyc in RBDj (Fig. V-6C-F). GII 263glyc had no or only a modest effect on nAb activity, which is consistent with its location at the center of the trimer (Fig. V-6G, (Fernandez et al., 2022, submitted)). Overall, glycan insertions at five sites of the RBD upper subdomain confirmed its frequent recognition by nAbs.
One immunoadhesin with glycans inserted in the lower RBD subdomain ( GII 351glyc) showed a strongly reduced capacity to block the nAbs (Fig. V-6H). The modified residue is located on a solvent-exposed loop at the base of the SUvar region (aa 345-353) (Fig. V-6B and V-6B').
GII 351glyc was the first mutant to indicate that nAbs recognize a region of the RBD to which no function has yet been attributed. We therefore focused the following experiments on this novel epitopic region.
As GII 351glyc was prone to aggregation (Fig. S.V-4H), we produced a second batch, half of which was purified by affinity chromatography (standard protocol) and the other half further purified by size exclusion chromatography (SEC) to eliminate aggregates. Both GII 351glyc preparations were unable to block nAbs from the plasma samples (Fig. S.V-6A-E). Glycan insertion at an adjacent position in GII 350glyc led to the loss of nAb blockade and reduced aggregation (Fig. V-6I). The GII 345-353 loop is one residue shorter than that of the GI-D468 and CI-PFV strains (Fig. V-6B'). The mutant immunoadhesin with an extra E at position 349 ( GII 349+E) showed a strongly reduced capacity to block the nAbs (Fig. V-6J). We replaced the seven GII residues by the eight residues from the GI-D468 strain; GII swap345 showed a reduced ability to block nAbs from five of the seven plasma samples (Fig. V-6K). By contrast, swapping the α3-helix located N-terminal to the loop ( GII swap333) had no impact on the capacity to block the nAbs (Fig. V-6L). Finally, we tested the substitution of residues on the α8-helix facing the 345-353 loop; GII E502A and GII L505N showed a reduced capacity to block the nAbs (Fig. V-6M-N). Overall, these experiments define the 345-353 loop and adjacent α8-helix as a major epitopic region on GII RBD.

Seven sequences (Table S.V-5) were disrupted by inserting glycans in the immunoadhesins that were tested for their capacity to block nAbs. J to N. Five mutant immunoadhesins were tested to characterize the epitopic region in the 345-353 loop. All mutants were tested against at least four plasma samples. Those for which the capacity to block nAbs was the most altered were then tested on additional samples. For each plasma sample, the IC50 is presented as a function of MaxI for GII SU and the mutant SUs. The IC50 and MaxI values of GII SU are presented as open symbols and are those from the same experiment in which the mutated immunoadhesins were tested. Symbols are colored according to the IC50 and MaxI thresholds used to statistically define significant differences from GII SU. GII swap333 was tested twice at three concentrations and showed similar blocking capacity as GII SU; the IC50 values were arbitrarily set to the same level as those of GII SU (see Materials and Methods).
GI and GII-specific nAbs target different epitopes
We applied a similar strategy using GI-specific plasma samples and CI SU harboring the mutations with the greatest impact on GII-specific epitopes. We present these data alongside those obtained with the GII-specific samples (Fig. V-7, blue symbols over light red lines). GI-specific nAbs targeted glycans removed by endo-H but not complex glycans and did not recognize N10, whose presence is genotype-specific (Fig.
V-7A-7C). We could not test the recognition of N7', as it is absent from the CI-PFV strain.
The most striking difference between GI and GII epitopes was around residue 350: in sharp contrast to its GII counterpart, CI 350glyc efficiently blocked GI-specific nAbs (Fig. V-7D), whereas CI ΔL3 lost its capacity to block nAbs (Fig. V-7G). Thus, L3 is more important for GI- than GII-specific nAbs. Indeed, GII ΔL3 blocked GII-specific nAbs, although with lower affinity than GII SU. CI ΔL2 and CI ΔL4 showed sample-dependent effects, as did their GII counterparts (Fig.
V-7F and V-7H). CI 463glyc blocked GI-specific nAbs, with a reduced MaxI (Fig. V-7I), indicating that a fraction of GI-specific nAbs was not blocked, whereas the other was blocked as efficiently as by CI SU. The corresponding GII 459glyc mutant blocked GII-specific nAbs, but with lower affinity than GII SU (Fig. V-6E). Overall, nAbs from GI- and GII-specific plasma samples target different epitopes.
The mutations on the SU immunoadhesins may have induced epitope-specific or nonspecific conformational changes. The RBDj deletion had a likely epitope-specific effect on the GII backbone, as GII ΔRBDj retained its capacity to block one sample containing nAbs that preferentially targeted epitopes outside the RBDj (Fig. V-5C). However, CI ΔRBDj and CI ΔL3 did not display any nAb blocking activity. Thus, we confirmed the results using chimeric immunoadhesins: GII swapRBDj (i.e., GI RBDj in GII SU) blocked nAbs from the four anti-GI plasma samples with a similar or modestly reduced capacity relative to that of CI SU (Fig.
V-7J),
whereas GII SU had no blocking activity (Fig. V-7K). CI swapL3 (GII L3 in CI SU) did not block the nAbs (Fig. V-7L). These data suggest that either GI-specific nAbs recognize the L3 loop or that the L3 loop requires a matching interaction to adopt its native conformation.

CI SU immunoadhesins with mutations matching the most informative GII-K74 mutations were tested for their capacity to block nAbs from GI-specific plasma samples. A, Kifunensine-treated CI SU; B, Kifunensine and endo-H-treated CI SU; C, CI ΔN10; D, CI 350glyc; E, CI ΔRBDj; F, CI ΔL2; G, CI ΔL3; H, CI ΔL4; I, CI 463glyc; J, GII swapRBDj; K, GII SU; L, CI swapL3. All mutants were tested against four plasma samples. Those for which the capacity to block nAbs was the most altered were then tested on additional samples. For each plasma sample, the IC50 is presented as a function of MaxI for the CI SU and mutant immunoadhesins. The IC50 and MaxI values of CI SU are presented as open symbols and are those from the same experiment in which the mutant immunoadhesins were tested. For mutant immunoadhesins, the symbols are colored according to the IC50 and MaxI thresholds used to statistically define significant differences from CI SU. The red lines correspond to data obtained with GII-specific plasma samples against equivalent constructs (Panels V-4C, 4D, 4E, 5C, 5F, 5H, 6E, and 6H match panels V-7A to 7H, respectively).
Human plasma samples contain nAbs targeting a variable number of epitopic regions
The effect of the mutant SU on each plasma sample can be summarized as recognition similar to that of WT SU, recognition with a reduced affinity (i.e., increased IC50), blocking a smaller fraction of nAbs than WT SU (i.e., reduced MaxI), or having both effects (Fig. V-8). We grouped the most relevant mutants according to their location: the RBD apex, the 345-353 loop, and the N7' region. We tested the recognition of the N7' region by GII-specific samples only. This summary highlights the two genotype-specific and immunodominant epitopic regions that we identified: loop 345-353 on GII SU (Fig. V-8A) and L3 on CI SU (Fig. V-8B).
Within the RBD apex, four mutant immunoadhesins correspond to different epitopic sites (L2, L3, L4, and 459/463 glyc ). GI-specific plasma samples contained nAbs focused on a single subdomain (such as BAK55 and CI L3) or epitopes on up to four sites (BAD348 and BAK132).
Similarly, GII-specific plasma samples presented interindividual variations in their specificity.

or having both effects, orange. Black squares indicate the major epitopic regions of the GII- and GI-specific samples (panels A and B, respectively). Within each epitopic region defined by several mutant immunoadhesins, nAb specificity varied between individuals.
We examined the sequence of the identified epitopes and found that all were conserved on the SFV strains infecting individuals from this study or circulating in Central Africa (Fig. S.V-) [START_REF] Alais | STLV-1 co-infection is correlated with an increased SFV proviral load in the peripheral blood of SFV/STLV-1 naturally infected non-human primates[END_REF]. Overall, we identified three immunodominant epitopic regions (RBD apex, 345-353 loop, and N7' region), of which the sequences are conserved within each genotype. We provide evidence that SFV-infected individuals have nAbs that target several epitopes and recognize epitopes that differ between individuals.
Functional studies
Protein binding to susceptible cells
All immunoadhesins used for epitope mapping were tested for their capacity to bind to human HT1080 cells, which are highly susceptible to gorilla and chimpanzee SFVs (Fig. V-9A, Fig. S.V-8) [START_REF] Couteaudier | Inhibitors of the interferon response increase the replication of gorilla simian foamy viruses[END_REF] [START_REF] Alamgir | Precise identification of endogenous proviruses of NFS/N mice participating in recombination with moloney ecotropic murine leukemia virus (MuLV) to generate polytropic MuLVs[END_REF]. Immunoadhesin binding was enhanced after glycan removal, possibly through reduced steric hindrance. Deletion of the RBDj reduced binding to susceptible cells, while one-by-one deletion of loops L2, L3, and L4 abolished it. We observed genotype-specific differences; the RBDj deletion had a stronger impact on GII than on CI immunoadhesins (6- vs. 2.5-fold reduction). Conversely, glycan insertion after L4 had only a moderate impact on the GII immunoadhesin ( GII 459glyc, 6-fold reduction) but abolished binding of the CI immunoadhesin ( CI 463glyc, ≈100-fold reduction). In the 345-353 loop, glycan insertion strongly affected the binding of GII immunoadhesins ( GII 350glyc and GII 351glyc, ≈50-fold reduction), but there was no effect on CI immunoadhesins, mirroring the recognition by the nAbs. Overall, our data suggest that certain residues involved in cell binding may be genotype-specific.
The binding data were also considered for the interpretation of nAb blocking experiments.
The mutant immunoadhesins that retained their capacity to bind cells were considered likely to be properly folded, and loss of nAb blockade could be attributed to mutation of the epitope. Overall, a single mutated immunoadhesin failed both to bind cells and to block the nAbs from all samples tested; the related data are not presented.
The epitopic regions targeted by nAbs are involved in viral infectivity
We then tested the impact of RBDj and loop deletions in the context of FVVs. We produced FVVs with Env mutants corresponding to the immunoadhesins most affected in blocking genotype-specific nAbs (i.e., ΔRBDj and ΔL3 for CI-PFV Env; ΔRBDj, ΔL2, and ΔL4 for GII-K74 Env). The quantity of FVV particles carrying deletions was strongly reduced: over 1000-fold for CI-PFV ΔRBDj, over 100-fold for CI-PFV ΔL3 and GII-K74 ΔRBDj, and over 50-fold for GII-K74 ΔL2 and ΔL4.
A. HT1080 cells were incubated with the panel of tested immunoadhesins (Fig. V-4 to V-7). Cell-bound immunoadhesins were detected by staining with a fluorescently labeled antibody targeting the murine IgG Fc fragment. Stained cells were analyzed on a flow cytometer. Fig. S.V-8 presents the gating strategy and normalization of the results to the levels of bound WT immunoadhesins ( GII SU and CI SU). Binding levels of GII mutants (red symbols) are presented in the same order as in Fig. V-4, V-5 and V-6. CI mutants (blue symbols) are presented side-by-side with the corresponding and related GII mutants, with a colored background. Mutated HBS shows reduced binding to susceptible cells, as shown in (Fernandez et al., 2022, submitted). B to D. Three batches of FVVs carrying WT or mutated Env were produced. The concentration of vector particles was quantified by RT-qPCR amplification of the bgal transgene. Each batch was titrated twice and the mean titers are presented; lines represent the mean values from the three FVV batches. FVV infectious titers were quantified on BHK-21 cells. Each batch was titrated twice and the mean titers are presented; lines represent the mean values from the three FVV batches. FVVs carrying WT and mutated Env were incubated with HT1080 cells on ice for 1 h before washing and quantification of bgal mRNA incorporated in the vector particles and the gapdh gene of susceptible cells. The FVV dose was 10 FVV particles per cell. The levels of bgal and cellular gapdh mRNA were quantified by RT-qPCR; the ΔΔCt method was used to calculate the relative amount of FVV particles bound to cells. Values below the threshold were arbitrarily set at 0.0005. Lines represent the mean values from tested FVV batches. The dotted lines in panels B to D represent the quantification threshold. The infectious titers, particle concentration, and levels of bound particles from FVVs carrying mutant or WT SU were compared using the ANOVA test and Sidak's multiple comparison test, *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Three-dimensional map of epitopic regions
We built a model integrating the current knowledge on Env structure, genetic variability, function, and our findings on the recognition by nAbs (Fig. V-10). According to our model for the arrangement of the RBD within the trimeric Env, the RBD occupies the space at the top of Env spike [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]Fernandez et al., 2022, submitted); each RBD has two subdomains, referred to as upper and lower (Fernandez et al., 2022, submitted)
(Fig. V-10A).
The genotype-specific SUvar region is targeted by nAbs (Aiewsakun et al., 2019a;[START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF][START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF] and comprises the entire upper RBD subdomain (SUvar Up ) and one third of its lower subdomain (SUvar Lo ) (Fig. V-10B). SUvar Lo wraps around a stem formed by the conserved part of the RBD (SUcon region). The functionally defined bipartite RBD [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]) could be located as follows: RBD1 forms half of the SUvar Up and the SUvar Lo region wrapping around the conserved RBD2, while the RBDj forms the top of SUvar Up .
Here, we show that nAbs recognize the three solvent-exposed loops on SUvar Up (Fig. V-10C).
We also identified mutations located C-terminal ( GII 459glyc and CI 463glyc) that may affect nAb binding or the SU folding. For SUvar Lo , we show the 345-353 loop to be an immunodominant epitope exclusively recognized by GII-specific nAbs and that it has a role in binding to cells.
The glycosylation site N7' had the greatest impact on the nAbs. It is located at the opening of the cavity in which the essential N8 glycan is buried (Fernandez et al., 2022, submitted). Of note, the glycan inserted in GII 485glyc may interfere with N8 function and/or accessibility to nAbs. SUvar Lo comprises the HBS, which was not frequently recognized. In conclusion, the nAbs bound to the loops on SUvar Up might interfere with trimer integrity or the conformational changes required for the triggering of fusion, whereas those that bound to SUvar Lo may prevent viral attachment and/or binding.

A. The RBD domain is composed of two subdomains located at the top of Env trimers; the drawing highlights the region involved in trimer assembly and the HBS ([START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]; Fernandez et al., 2022, submitted). B. Genetic studies on SFV strains circulating in Central Africa have identified two genotypes that differ in the sequence encoding the central part of the SU (SUvar) domain (Aiewsakun et al., 2019a; [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]). The SUvar domain forms the upper part of the RBD (SUvar Up ) prolonged, in the lower subdomain, by an arm (SUvar Lo ) wrapping around the stem of which the sequence is conserved (SUcon). Binding studies defined the RBD as being bipartite, with two essential binding regions (RBD1 and RBD2) separated by RBDj [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF]. These regions form the bottom of SUvar Up and SUvar Lo (variable part of RBD1), the top of SUvar Up (RBDj), and the stem (short conserved fragment of RBD1 and RBD2). C. nAbs exclusively target the SUvar domain [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. In the present paper, we identified GII-specific epitopes in the SUvar Up and SUvar Lo domains and GI-specific epitopes in SUvar Up only. nAbs recognized L2, L3, and L4 of SUvar Up and the N7' region and 345-353 loop of SUvar Lo (lines ending with a dot). Glycans inserted in both domains revealed additional epitopic sites (lines ending with a triangle).
Discussion
Here, we demonstrate that SFV GII-specific nAbs raised by infected humans recognize two antigenic regions in the genotype-specific region of the SU and that GI-specific nAbs target at least one of these regions. Based on previous knowledge [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF][START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF], the X-ray structure presented in the co-submitted paper (Fernandez et al., 2022, submitted), and our functional data, the SFV-specific nAbs most likely target sites involved in either Env trimer formation or cell binding.
We identified conformational epitopes only, despite several attempts to capture linear epitopes using peptides (Fig. V-2, Fig. S.V-5 and (Lambert et al., 2019)). Our data are consistent with the low reactivity of NHP and human immune sera to denatured Env in immunoblot assays, which contain antibodies that bind to native Env in radioimmunoprecipitation and neutralization assays [START_REF] Cummins | Mucosal and systemic antibody responses in humans infected with simian foamy virus[END_REF][START_REF] Hahn | Reactivity of primate sera to foamy virus Gag and Bet proteins[END_REF][START_REF] Falcone | Sites of simian foamy virus persistence in naturally infected African green monkeys: latent provirus is ubiquitous, whereas viral replication is restricted to the oral mucosa[END_REF][START_REF] Netzer | Identification of the major immunogenic structural proteins of human foamy virus[END_REF]. Linear epitopes are usually located in mobile regions of the polypeptide chains. Such mobile segments are absent from the RBD lower subdomain formed by the compact assembly of α-helices and β-sheets (Fernandez et al., 2022, submitted). In RBD monomers, the upper subdomain loops appear to be mobile, but interprotomer interactions probably impose specific conformations in full-length Env. The loops likely form a number of discontinuous epitopes; L2 and L4 are in proximity in the monomer and L3 and L4 in the trimer (Fernandez et al., 2022, submitted).
Twelve plasma samples were used in this study, including one from an individual infected by strains belonging to both genotypes. Overall, the epitopic regions were recognized by all or most donors and can be considered to be immunodominant (Fig. V-8). Importantly, within each genotype, sequences from the epitopic regions were identical among SFV strains infecting the studied individuals and those circulating in the same geographical area. This observation is consistent with the genetic stability of SFV [START_REF] Switzer | Ancient co-speciation of simian foamy viruses and primates[END_REF]. The use of polyclonal plasma samples was key to providing a global picture of nAbs raised upon infection with zoonotic SFV, but only epitopes recognized by a significant fraction of plasma antibodies can be detected. Thus, future studies may identify subdominant epitopes and provide a more precise definition of those that are immunodominant.
Immunoadhesins fully blocked nAbs from certain samples. However, for most tested samples, the infectivity was maintained but at levels lower than those observed for FVVs not exposed to plasma samples (Fig. V-3). The heterogeneous glycosylation of the immunoadhesins could explain the partial blockade of nAbs [START_REF] Falcone | Sites of simian foamy virus persistence in naturally infected African green monkeys: latent provirus is ubiquitous, whereas viral replication is restricted to the oral mucosa[END_REF], as proposed for the incomplete neutralization of HIV [START_REF] Doores | Variable loop glycan dependency of the broad and potent HIV-1neutralizing antibodies PG9 and PG16[END_REF][START_REF] Kim | Antibody to gp41 MPER alters functional properties of HIV-1 Env without complete neutralization[END_REF][START_REF] Prasad | Cryo-ET of Env on intact HIV virions reveals structural variation and positioning on the Gag lattice[END_REF][START_REF] Mccoy | Incomplete Neutralization and Deviation from Sigmoidal Neutralization Curves for HIV Broadly Neutralizing Monoclonal Antibodies[END_REF]. The SU antigenicity of soluble monomeric proteins probably differed from those contained within viral particles due to the lack of the intra-and interprotomer interactions that form quaternary structures. Thus, our data identify nAb epitopes presented on the monomeric SU domain and indirectly suggest the existence of quaternary epitopes.
We have provided experimental evidence for the targeting of different epitopes by nAbs raised after infection by the two SFV genotypes. The RBD fold from different genotypes and FV from different host species is highly conserved (Fernandez et al., 2022, submitted). In superinfection-resistance experiments, the CI Env inhibited entry from genotype II strains, indicating that SFV from different genotypes share the use of at least one molecule for entry into target cells; this molecule could act as an attachment factor or receptor [START_REF] Hill | Properties of human foamy virus relevant to its development as a vector for gene therapy[END_REF].
The SUvar Up domain was recognized by both GI- and GII-specific nAbs and the SUvar Lo by GII-specific nAbs only. GI-specific nAbs may, nevertheless, recognize epitopes on SUvar Lo that are yet to be identified. Indeed, the biochemical properties of GI Env protein resulted in low expression and protein aggregation. Therefore, we used an SU from chimpanzee genotype I SFV to map epitopes recognized by antibodies induced by a gorilla genotype I SFV. We previously reported that plasma samples cross-neutralize both SFV species, with a strong correlation between nAb titers; however, nAb titers were globally higher against the GI than CI strain [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. Thus, we may have missed a number of GI-specific epitopes.
Relative to other viruses, SFV-specific nAbs target a limited region on Env, i.e., they do not recognize epitopes on the TM nor non-RBD sites of the SU [START_REF] Burkhart | Distinct mechanisms of neutralization by monoclonal antibodies specific for sites in the N-terminal or C-terminal domain of murine leukemia virus SU[END_REF][START_REF] Chuang | Structural Survey of Broadly Neutralizing Antibodies Targeting the HIV-1 Env Trimer Delineates Epitope Categories and Characteristics of Recognition[END_REF][START_REF] Murin | Antibody responses to viral infections: a structural perspective across three different enveloped viruses[END_REF]. Based on the inhibition mechanisms described for other viruses [START_REF] Murin | Antibody responses to viral infections: a structural perspective across three different enveloped viruses[END_REF], we can propose a number of possible modes of action. The targeting of RBD loops by nAbs may prevent the fusogenic conformational transition and exposure of the fusion peptide, which is located at the center of the trimer and shielded by the RBD, as visualized on viral particles [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. It is possible that certain loop-specific nAbs bind several protomers, as described for HIV-and Ebola-specific nAbs [START_REF] Mclellan | Structure of HIV-1 gp120 V1/V2 domain with broadly neutralizing antibody PG9[END_REF][START_REF] Milligan | Asymmetric and non-stoichiometric glycoprotein recognition by two distinct antibodies results in broad protection against ebolaviruses[END_REF]. An alternative neutralization mechanism could be the targeting of epitopes located close to the protomer interface, resulting in trimer disruption, as described for HIV-and Flu-specific nAbs [START_REF] Bangaru | A Site of Vulnerability on the Influenza Virus Hemagglutinin Head Domain Trimer Interface[END_REF][START_REF] Turner | Disassembly of HIV envelope glycoprotein trimer immunogens is driven by antibodies elicited via immunization[END_REF]. Many nAbs interfere with particle binding to susceptible cells. The existence of a bona fide receptor for SFV is yet to be demonstrated. Consequently, the molecular determinants of Env binding to susceptible cells within SUvar Lo are still only roughly defined. Certain nAbs recognize attachment factors, such as an HBS on SARS-CoV-2 spike [START_REF] Bermejo-Jambrina | Infection and transmission of SARS-CoV-2 depend on heparan sulfate proteoglycans[END_REF]. The SFV HBS is a patch of residues located in SUvar Lo . The mutants deficient for HS binding efficiently blocked most plasma samples, indicating that HBS is either not a major nAb target or that the antigenicity of these mutants is preserved. Of note, the N7' site located in the vicinity of HBS was recognized. All immunoadhesins designed to map potential B-cell epitopes were tested for their capacity to bind susceptible cells. SU from the two genotypes differed in certain residues involved in cell binding. Most notably, nAbs recognizing the 345-353 loop may prevent GII Env binding, whereas the loop on GI Env is not involved in cell binding and is not targeted by nAbs (Fig. V-9). This last observation highlights, for the first time, genotype-specific differences in SFV binding to susceptible cells and could be a starting point for further studies on identifying attachment factors and receptors for SFV.
Our choice of a functional strategy to map the epitopes targeted by nAbs proved to be critical due to the lack of linear epitopes. Overall, the use of human samples and soluble SU as competitor allowed us to create the first map of targeted regions. We considered several caveats (polyspecificity of plasma samples, absence of quaternary epitopes). We also carefully considered the possible nonspecific effect of mutations on protein folding, which could generate misleading results. Certain mutants were, indeed, poorly expressed and could not be used (Table S.V-3). A number of these limitations may be overcome in the future with the use of alternative tools, such as human monoclonal antibodies and subviral particles.
Human infection with zoonotic SFV represents a model for cross-species transmission of retroviruses leading to persistent infection that is successfully controlled by the immune system. We have previously reported the presence of potent nAbs in most infected individuals [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF]. Here, we have mapped major antigenic sites. Concerning the control of SFV in humans, a notable result from the present study is that two immune escape mechanisms, sequence variation and glycan shielding, were observed. The SFV RBD is structurally different from known and modelled retroviral RBDs [START_REF] Hötzel | Deep-Time Structural Evolution of Retroviral and Filoviral Surface Envelope Proteins[END_REF]. We have provided a novel model integrating structural, genetic, functional, and immunological knowledge on the bimorphic SFV RBD. We have thus gained information on the two SFV genotypes that have persisted for over 30 million years of evolution with their animal hosts.
Through the study of SFV and its unique properties, we should also gain fundamental knowledge on the structural basis for the inhibition of viruses by nAbs.
were cultured in Eagle's Minimum Essential Medium with Earle's Balanced Salts and L-glutamine supplemented with 10% FBS and 1% L-glutamine. Human embryonic kidney 293T cells (Cat. N˚ 12022001, Sigma) were cultured in DMEM-10% FBS. FreeStyle 293-F cells (Life Technologies) were cultured in Ex-cell 293 HEK serum-free medium supplemented with 5 µg/mL phenol red sodium salt, 2% L-glutamine, and 0.2x penicillin-streptomycin.
Peptides
At the beginning of the project, the SUvar (aa 248-488) sequences from the GI-D468 and GII-K74 strains were analyzed using linear B-cell epitope prediction software available on the Immune Epitope Data Base (http://tools.iedb.org/bcell/), which are based on known B-cell epitopes (LBtope, [START_REF] Singh | Improved method for linear B-cell epitope prediction using antigen's primary sequence[END_REF]), hydrophilicity (Parker prediction replaced by Bepipred software, [START_REF] Larsen | Improved method for predicting linear B-cell epitopes[END_REF]), and protrusion outside of globular proteins (Ellipro, [START_REF] Ponomarenko | ElliPro: a new structure-based tool for the prediction of antibody epitopes[END_REF]). In addition, we manually inspected viral sequences for stretches of genotype-specific epitopes. We selected 14 sequences and tested the corresponding GI-D468 and GII-K74 peptides, as well as nine CI-PFV peptides from a previously synthesized peptide set [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF] (Table S.V-2). After resolution of the RBD crystal structure, we designed a novel peptide set corresponding to the four loops located in the RBD upper subdomain (Fernandez et al., 2022, submitted). As positive controls, we used the N96-V110 (NKDIQVLGPVIDWNV from SFV Env LP) and I176-I199 (INTEPSQLPPTAPPLLPHSNLDHI from HTLV-1 gp46) peptides containing immunodominant epitopes [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF]. Peptides were synthesized by Smartox SAS (Saint-Martin d'Hères, France) or Genscript (Leiden, The Netherlands) and were tested individually.
ELISA
The protocol was described in [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF]. Briefly, peptides diluted in carbonate buffer at 1 µg/mL were coated on clear high-binding polystyrene 96-well microplates (Biotechne) overnight (ON) at +4°C. Plasma samples were diluted at 1:200 in phosphate buffered saline (PBS)-0.1% bovine serum albumin (BSA)-0.1% Tween20. Bound plasma antibodies were detected with horseradish peroxidase (HRP)-conjugated goat anti-human IgG H+L (0.02 µg/mL, #109-035-008, Jackson Immuno Research Europe). The peptide diluent was used as the negative control and antibody binding to peptides is expressed as the difference in OD (ΔOD = ODtest -ODcontrol). Plasma samples from three SFV-uninfected individuals were tested for binding to the 37 peptides. The mean + 2 SD of ΔOD (0.14) was used to define the positivity cutoff. The proportion of responding individuals was calculated among those infected with a given virus (SFV, n = 17; HTLV-1, n = 7).
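As a minimal sketch of this ΔOD and cutoff calculation (all OD values below are illustrative; this is not the actual analysis script used for the study):

```python
import numpy as np

# Hypothetical background-subtracted reactivities (dOD = OD_test - OD_control)
# measured with plasma samples from SFV-uninfected control donors
control_delta_od = np.array([0.05, 0.02, 0.08, 0.03, 0.06, 0.04])

# Positivity cutoff = mean + 2 SD of the negative-control reactivities
cutoff = control_delta_od.mean() + 2 * control_delta_od.std(ddof=1)

# One hypothetical test measurement for an infected individual
delta_od_test = 0.45 - 0.10  # OD_test - OD_control
print(f"cutoff = {cutoff:.2f}; sample dOD = {delta_od_test:.2f}; positive = {delta_od_test > cutoff}")
```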
Plasmids
The four-component CI-PFV FVV system (pcoPG, pcoPP, pcoPE, and pcu2MD9) and the gorilla SFV Env constructs containing sequence from the zoonotic GI-D468 and GII-K74 env genes have already been described [START_REF] Hütter | Prototype foamy virus protease activity is essential for intraparticle reverse transcription initiation but not absolutely required for uncoating upon host cell entry[END_REF][START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. Novel plasmids were synthesized by Genscript (Piscataway, NJ, USA). We built a FVV expressing β-galactosidase with a nuclear localization signal (puc2MD9-B-GAL) by replacement of the gfp gene in the puc2MD9 backbone for easier image analysis on our quantification device. CI-PFV Env deleted of RBDj or L3 and GII-K74 Env deleted of RBDj, L2, or L4 were constructed with the boundaries used for immunoadhesins (Table S.V-3).
Immunoadhesin constructs express a fusion protein formed by the murine IgK signal peptide, SFV SU with deleted furin cleavage site (aa 127-567), and the heavy chain (hinge-CH2-CH3) of murine IgG2a. A Twin-Strep-tag was fused after the mIgG2a to facilitate immunoadhesin purification, except for the first immunoadhesin constructs ( GII ΔRBDj, GII swapRBDj and GII ΔN10). The murine leukemia virus (MLV) gp70 SU (aa 34-475, strain FB29, NP_040334.1) was fused to the Twin-Strep-tag. All genes were codon optimized for expression in mammalian cells and placed under the control of a CMV promotor and intron in the pcZI plasmid [START_REF] Hashimoto-Gotoh | Phylogenetic analyses reveal that simian foamy virus isolated from Japanese Yakushima macaques (Macaca fuscata yakui) is distinct from most of Japanese Hondo macaques (Macaca fuscata fuscata)[END_REF].
The codon-optimized synthetic genes encoding GII-K74 SU (aa 127-566) and ectodomain were cloned into the already described pT350 plasmid [START_REF] Krey | The disulfide bonds in glycoprotein E2 of hepatitis C virus reveal the tertiary organization of the molecule[END_REF], which contains the Drosophila metallothionein promoter (inducible by divalent cations [START_REF] Bunch | Characterization and use of the Drosophila metallothionein promoter in cultured Drosophila melanogaster cells[END_REF]), the Drosophila BiP signal peptide (MKLCILLAVVAFVGLSLG) at the N-terminus, and a Twin-Strep-tag (AGWSHPQFEKGGGSGGGSGGGSWSHPQFEK) for affinity purification at the C-terminus. Stable S2 cell lines were generated by antibiotic resistance and protein production was induced by the addition of 2 M CaCl2. For expression in mammalian cells, the murine IgK signal peptide was fused at the N-terminus of codon-optimized genes encoding GII-K74 SU (aa 127-566). The genes were placed under the control of the CMV promoter and intron [START_REF] Hashimoto-Gotoh | Phylogenetic analyses reveal that simian foamy virus isolated from Japanese Yakushima macaques (Macaca fuscata yakui) is distinct from most of Japanese Hondo macaques (Macaca fuscata fuscata)[END_REF] by insertion into the pcDNA3.1 vector.
(#NP0322, Invitrogen). The gel was incubated with Bio-Safe Coomassie staining solution (#1610786, Bio-Rad) for 2 h with gentle shaking, rinsed in H2O ON, and imaged using a G:BOX (Syngene) (Fig. S.V-4).
Western-blots
Western-blot analysis of protein expression was performed on cell supernatants. Samples were heat-denaturated and reduced as described for purified proteins before loading onto precast NuPAGE 4-12% Bis-Tris gels (#NP0322, Invitrogen). Proteins were migrated for 2-3 h at 120 V in NuPAGE MOPS SDS running buffer (#NP0001, Invitrogen) and transferred onto a PVDF membrane (#1704156, Bio-Rad) using a Trans-Blot Turbo transfer system (Bio-Rad).
Membranes were blocked and antibody labeled in Tris-buffered saline (TBS)-0.1%Tween-5% BSA. For Strep-tag detection, membranes were incubated with StrepMAB-Classic monoclonal antibody conjugated to HRP (0.05 µg/mL, #2-1509-001, Iba Lifesciences) for 2 h at RT. For SFV SU detection, membranes were incubated with a biotinylated anti-SU murine antibody ON at 4°C (3E10, 1 µg/mL), washed three times in TBS-0.1% Tween for 10 min and incubated with Streptavidin-HRP (1:2000, #DY998, Biotechne). Membranes were washed three times in TBS-0.1% Tween for 10 min before revelation with SuperSignal West Pico PLUS chemiluminescence substrate (#34579, ThermoFischer Scientific) and signal acquisition using a G:BOX.
Foamy viral vectors (FVVs)
FVV particles were produced by co-transfection of HEK293T cells by the four plasmids. Fifteen µg total DNA (gag:env:pol:transgene ratio of 8:2:3:32) and 45 µl polyethyleneimine (#101-10N, JetPEI, Polyplus) were added to a 10 cm² culture dish seeded with 4 x 10⁶ 293T cells.
Supernatants were collected 48 h post-transfection, clarified at 500 x g for 10 min, filtered using a 0.45 µm filter, and stored as single-use aliquots at -80°C. FVVs were titrated as described [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF][START_REF] Lambert | A new sensitive indicator cell line reveals crosstransactivation of the viral LTR by gorilla and chimpanzee simian foamy viruses[END_REF], with minor modifications to optimize X-Gal staining of transduced cells, which was lighter than that of infected cells. Briefly, FVV samples were thawed and clarified by spinning at 10,000 x g at 4°C for 10 min. Serial five-fold dilutions were added in triplicate to sub-confluent BHK-21 cells seeded in 96-well plates and cultured for 72 h at 37°C. Plates were fixed with 0.5% glutaraldehyde in PBS for 10 min at RT, washed with PBS, and stained with 150 µl X-gal solution containing 2 mM MgCl2, 10 mM potassium ferricyanide, 10 mM potassium ferrocyanide, and 0.8 mg/mL 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside in PBS for 3 h at 37°C. Blue-stained cells were counted using an S6 Ultimate Image UV analyzer (CTL Europe, Bonn, Germany). One infectious unit was defined as one blue cell. Cell transduction by FVV is a surrogate for viral infectivity and FVV titers are expressed as infectious units (IU)/mL.
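The conversion from blue-cell counts to an infectious titer can be sketched as follows (a hypothetical helper function; the dilution factor and inoculum volume shown are illustrative, not the values used in any specific experiment):

```python
import numpy as np

def titer_iu_per_ml(blue_cells_per_well, dilution_factor, inoculum_volume_ml):
    """Infectious titer from triplicate blue-cell counts at a single countable dilution.

    One X-gal-stained (blue) cell is counted as one infectious unit (IU).
    """
    mean_iu = np.mean(blue_cells_per_well)
    return mean_iu * dilution_factor / inoculum_volume_ml

# Hypothetical example: triplicate wells at a 1:125 dilution, 0.1 mL of diluted vector per well
print(f"{titer_iu_per_ml([42, 38, 45], dilution_factor=125, inoculum_volume_ml=0.1):.2e} IU/mL")
```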
Neutralization assays
Prior to use in neutralization assays, plasma samples were diluted 1:10 in DMEM + 1 mM MgCl2, decomplemented by heating at 56°C for 30 min, and frozen at -80°C as single-use aliquots. Thawed plasma samples were clarified by spinning at 10,000 x g for 10 min at 4°C. Serial two-fold dilutions were incubated with FVV for 1 h at 37°C before titration in triplicate and residual infectivity quantified using the 96-well plate microtitration assay described above. Wells of 96-well plates seeded with 5,000 BHK-21 cells were exposed to 300 IUs of FVV. Plasma samples were initially titrated against replicating virus [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. We selected those with neutralization titers > 1:100 against the virus and performed plasma titration against the FVV. We defined the plasma dilution required for a 90% reduction of FVV infectivity and used it as the fixed concentration for the nAbs. Plasma samples were incubated with serial dilutions of recombinant Env proteins for 1 h at 37°C. FVV was then mixed with the plasma/protein preparation and incubated 1 h at 37°C before addition to BHK-21 cells. FVV infectivity was quantified 72 h later as described for their titration. All conditions were tested in triplicate and the mean IU/well calculated. Cells transduced with mock-treated FVV (i.e., incubated with MLV SU at 20 nM or with medium) were quantified on each plate and this value used as a reference. Relative infectivity was calculated for wells treated with the plasma/protein mix and is expressed as the percentage of the reference value. The WT immunoadhesin ( GII SU or CI SU) matched with the FVV Env was titrated along with the tested mutant immunoadhesins on every plate in every experiment for each plasma tested. All immunoadhesins were tested against each plasma at least twice and against at least four plasma samples. In the first assay (screening), immunoadhesins were added at three concentrations, ranging from 200 to 2 nM. In the second assay (confirmation), immunoadhesins with activity similar to that of WT or with no activity were tested a second time at three dilutions. Mutant immunoadhesins with intermediate activity were titrated in a three-fold serial dilution setting starting at 60 nM. We used two parameters to define the action of the immunoadhesins on the nAbs from each plasma sample: maximum infectivity (MaxI) and 50% inhibitory concentration (IC50). MaxI corresponds to the maximal infectivity reached when nAbs are blocked by the immunoadhesin and is defined as the mean relative infectivity at the two highest doses tested. The IC50 values were calculated by plotting the relative infectivity as a function of immunoadhesin concentration and fitting a four-parameter regression model in Prism software (Version 9, GraphPad). Mutant immunoadhesins with nAb-blocking activity similar to that of the WT were assigned its IC50, and those with minimal or no activity were given an arbitrary IC50 value of 200 nM, corresponding to the highest concentration of immunoadhesin tested.
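As an illustration of this analysis, the sketch below reproduces the relative infectivity normalization, a four-parameter logistic fit (here with SciPy rather than the Prism software actually used), and the MaxI calculation on hypothetical titration values:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic curve of relative infectivity vs immunoadhesin concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

# Hypothetical titration of one immunoadhesin against one plasma sample:
# relative infectivity (% of the mock-treated reference wells) at each concentration (nM)
conc = np.array([60.0, 20.0, 6.7, 2.2, 0.74, 0.25])
rel_inf = np.array([85.0, 78.0, 58.0, 34.0, 18.0, 12.0])

(bottom, top, ic50, hill), _ = curve_fit(four_pl, conc, rel_inf, p0=[10.0, 90.0, 5.0, 1.0], maxfev=10000)

# MaxI = mean relative infectivity at the two highest immunoadhesin doses
max_i = rel_inf[np.argsort(conc)[-2:]].mean()
print(f"IC50 = {ic50:.1f} nM, MaxI = {max_i:.0f}%")
```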
SFV Env binding to cells
Immunoadhesins were thawed at RT and clarified at 10,000 x g for 10 min. HT1080 cells were treated with trypsin-EDTA and 5 x 10⁵ cells resuspended in 0.1 mL FACS buffer (PBS-0.1% BSA) containing the immunoadhesins and incubated on ice for 1 h. Cells were washed twice and incubated with donkey anti-murine IgG-(H+L)-AF488 (20 µg/mL, #A21202, Invitrogen) on ice for 30 min. Cells were washed and resuspended in 0.2 mL PBS-2% PFA. Data were acquired on a CytoFlex cytometer (Beckman Coulter) and analyzed using Kaluza software. A minimum of 25,000 cells were acquired. Single cells were selected by sequential gating on FSC-A/SSC-A and SSC-A/SSC-H dot-plots (Fig. S.V-8A). Immunoadhesin binding is expressed as the ratio of mean fluorescence intensity (MFI) of immunoadhesin-treated over untreated cells. Each immunoadhesin was tested twice at 3, 30, and 300 nM. The MFI ratios were plotted as a function of immunoadhesin concentration and the area under the curve was calculated (Fig.
S.V-8B and S.V-8C). To limit inter-experimental variation, WT immunoadhesins were included in every experiment and the binding level of mutant immunoadhesins normalized to that of the WT.
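A minimal sketch of this normalization (MFI ratio, area under the dose-response curve, and normalization to the WT immunoadhesin); the integration over log10 concentration and all MFI values below are assumptions made for illustration only:

```python
import numpy as np

def binding_auc(conc_nm, mfi_stained, mfi_untreated):
    """Area under the MFI-ratio curve, integrated over log10(concentration)."""
    conc = np.asarray(conc_nm, dtype=float)
    ratio = np.asarray(mfi_stained, dtype=float) / mfi_untreated
    order = np.argsort(conc)
    x, y = np.log10(conc[order]), ratio[order]
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))  # trapezoidal rule

# Hypothetical MFI values at 3, 30, and 300 nM for a mutant and the WT immunoadhesin
auc_mut = binding_auc([3, 30, 300], [1200, 5200, 20100], mfi_untreated=950)
auc_wt = binding_auc([3, 30, 300], [1500, 9800, 52000], mfi_untreated=950)
print(f"relative binding (AUC_mut / AUC_WT) = {auc_mut / auc_wt:.2f}")
```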
Analysis of FVVs carrying mutated Env
The yield of FVV particles was estimated by quantifying particle-associated transgene RNA.
FVV RNA was extracted from raw cell supernatants using a QIAamp Viral RNA Extraction Kit (Qiagen), treated with a DNA-free kit (Life Technologies), and retro-transcribed with Maxima H Minus Reverse Transcriptase (Thermo Fischer Scientific) using random primers (Thermo Fischer Scientific) according to the manufacturers' instructions. qPCR was performed on cDNA using BGAL primers (BGAL_F 5' AAACTCGCAAGCCGACTGAT 3' and BGAL_R 5' ATATCGCGGCTCAGTTCGAG 3') with a 10-min denaturation step at 95°C and 40 amplification cycles (15 s at 95°C, 20 s at 60°C, and 30 s at 72°C) carried out using an Eppendorf realplex2 Mastercycler (Eppendorf). A standard curve prepared with serial dilutions of pcu2MD9-BGAL plasmid was used to determine the FVV copy number. Results are expressed as vector particles/mL, considering that each particle carries two copies of the transgene.
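The conversion from qPCR results to vector particles/mL can be sketched as follows (the standard-curve parameters and the supernatant volume represented by the qPCR template are hypothetical placeholders):

```python
def copies_from_ct(ct, slope, intercept):
    """Copy number interpolated from a plasmid standard curve (Ct = slope * log10(copies) + intercept)."""
    return 10 ** ((ct - intercept) / slope)

def particles_per_ml(ct, slope, intercept, supernatant_equivalent_ml):
    """Vector particles per mL of supernatant, assuming two transgene copies per particle."""
    copies = copies_from_ct(ct, slope, intercept)
    return copies / 2.0 / supernatant_equivalent_ml

# Hypothetical standard curve (slope close to -3.32 for ~100% PCR efficiency) and sample Ct;
# the template here is assumed to correspond to 2 µL of the original supernatant
print(f"{particles_per_ml(ct=21.5, slope=-3.32, intercept=38.0, supernatant_equivalent_ml=0.002):.2e} particles/mL")
```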
The expression level of mutated Env proteins in producer cells was assessed by western-blot analysis. 293T cells transfected to produce FVVs were collected and lysed in RIPA buffer (#R0278, Sigma) containing protease inhibitors (#11140920, Roche) for 2 h at 4°C. Samples were heat-denaturated at 70°C for 10 min in LDS sample buffer (NuPage) and 50 mM DTT before loading onto a precast NuPAGE 4-12% Bis-Tris gel (Invitrogen) and migration for 2 h at 120 V in MOPS running buffer. The content of ≈ one million cells was analyzed. Samples were transferred onto a PVDF membrane using a Trans-Blot Turbo transfer system (Bio-Rad, Hercules, CA, USA). Membrane blockade and antibody labelling were performed in TBS-0.1%Tween-5%BSA. Membranes were incubated with an anti-LP antibody ON at 4°C (clone P6G11-G11, 1 µg/mL), washed, and incubated with a goat anti-mouse secondary antibody HRP conjugate (1:10000, #31430, Invitrogen). Membranes were washed three times in TBS-0.1% Tween for 10 min before revelation using Immobilon ECL Ultra Western HRP Substrate (WBULS0100, Merck) and signal acquisition on an Amersham Imager 680 (GE Healthcare).
To test the capacity of mutated Env to mediate the binding of vector particles to cells, HT1080 cells were incubated with the FVV particles (1, 10 and 100 particles/cell) on ice for 1 h. The cells were washed three times with PBS to eliminate unbound FVV, RNA was extracted using an RNeasy Plus Mini Kit (Qiagen) according to the manufacturer's protocol, and RT was performed as described for FVV RNA quantification. Bound FVV was quantified by qPCR of the bgal gene as described for vector titration; cells were quantified by qPCR amplification of the hgapdh gene with the following primers: hGAPDH_F 5' GGAGCGAGATCCCTCCAAAAT 3' and hGAPDH_R 5' GGCTGTTGTCATACTTCTCATGG 3'. The qPCR reaction conditions were the same as those used to amplify the bgal gene. The relative expression of bgal versus hgapdh was calculated using the ΔΔCt method, and relative binding was expressed as 2^(-ΔΔCt).
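The relative binding readout is a direct application of the comparative Ct method. A minimal sketch of the 2^(-ΔΔCt) calculation is given below; the Ct values are invented and the reference condition is assumed to be FVV carrying WT Env.

```python
# Minimal sketch of the comparative Ct calculation; Ct values are invented.
def relative_binding(ct_bgal, ct_gapdh, ct_bgal_ref, ct_gapdh_ref):
    """Return 2^(-ddCt): bgal signal normalized to hgapdh, relative to a reference condition."""
    ddct = (ct_bgal - ct_gapdh) - (ct_bgal_ref - ct_gapdh_ref)
    return 2 ** (-ddct)

# Hypothetical example: FVV with mutant Env vs. FVV with WT Env bound to HT1080 cells
print(relative_binding(ct_bgal=26.4, ct_gapdh=18.0,
                       ct_bgal_ref=24.9, ct_gapdh_ref=18.1))   # ~0.33, i.e. ~33% of WT binding
```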
Statistical analysis
Mutations in the immunoadhesins modified their ability to block the nAbs in two ways: either by reducing avidity, leading to a higher IC50, or by reducing the fraction of plasma nAbs blocked, leading to lower maximal infectivity. For each plasma sample, the IC50 and MaxI were determined for the WT immunoadhesin in at least five experiments. The threshold value defining a statistically significant change relative to WT was defined as the mean + 3*SD for the IC50 and the mean - 3*SD for MaxI. Thresholds were defined for each plasma sample. The infectious titers, particle concentration, percentage of infectious particles, and quantity of bound FVV carrying WT and mutant immunoadhesins were compared using the ANOVA test for paired samples and Sidak's multiple comparisons test (GraphPad Prism 9 software).

We report limited recognition of linear peptides spanning the SUvar region by plasma nAbs, in agreement with previous work by our lab [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF]. While linear epitopes appear rare, one study on FFV-specific feline antibodies reported strong plasma reactivities to a peptide (residues 441-463) within the SU domain of FFV Env that correlated with genotype-specific neutralization [START_REF] Mühle | Epitope Mapping of the Antibody Response Against the Envelope Proteins of the Feline Foamy Virus[END_REF], as I will discuss below. Peptides used in the previous study by our lab might be suboptimal for detecting antibody binding due to their short size (15-mers) and because their sequence was from the chimpanzee CI-PFV strain, while plasma antibodies were from individuals infected with gorilla SFV strains. In contrast, the peptides used in my current study were longer, to match the usual sizes of B cell epitopes, and their sequences were derived from homologous gorilla strains. Nevertheless, few antibodies bound to these peptides and binding did not match the genotype of the infecting strain (Fig. V-2). Accordingly, these results highlight that the majority of nAb epitopes are conformational. Once the RBD structure was known, we indeed observed that the major epitopic regions map to disordered outer regions of the RBD rather than to its more structurally conserved 'common core'. Indeed, peptide sequences spanning the mobile loops on the RBD apex from homologous gorilla SFV strains were not recognized by plasma antibodies. This observation is in line with what has been observed for plasma antibodies directed against HTLV, which also mostly target conformational epitopes [START_REF] Hadlock | The humoral immune response to human T-cell lymphotropic virus type 1 envelope glycoprotein gp46 is directed primarily against conformational epitopes[END_REF]. Similarly, nAbs against MLV target variable loops on the RBD and conformational epitopes have been reported [START_REF] Evans | Analysis of two monoclonal antibodies reactive with envelope proteins of murine retroviruses: one pan specific antibody and one specific for Moloney leukemia virus[END_REF][START_REF] Fass | Structure of a murine leukemia virus receptor-binding glycoprotein at 2.0 angstrom resolution[END_REF].
Localization of nAb epitopes on the RBD and possible modes of action
While some epitopes were found at the core and lower domain of the structure (N7', base loop residues 345-351 and adjacent helices), the vast majority map to the upper apex of the RBD, in particular within three mobile loops (Fig. VI-1). Moreover, it is worth noting that the nAb epitopes I identified fall on one face of the RBD, consistent with the orientation of the RBD within the SFV Env trimer as demonstrated by our collaborators (Fig. IV-4).

Figure VI-1 -Summary of discovered genotype-specific epitopic regions. Immunodominance of GI- and GII-specific epitopes is highlighted according to a spectrum of blue and red shades, respectively. To allow localization relative to adjacent discovered epitopes, the determined HBS of GII-K74 and its equivalent site predicted on CI-PFV are highlighted for the specific residues in blue and green, respectively. Figure created in PyMOL.
Epitopes on the Upper domain
Most of the tested plasma samples target epitopes on the three mobile/flexible loops (L2/3/4) located at the RBD apex. These loops are likely to form interfaces between RBD protomers in the trimer and may be key for the stabilization of the trimeric Env in a pre-fusion conformation (Fig. S.IV-2). Indeed, superposition of the GII-K74 RBD onto the cryo-EM density map of the CI-PFV Env trimer showed an overall good fit and clearly supports that the apex loops form protomer-protomer interfaces (Fig. IV-4).
Despite their dominance, we have not defined these apex epitopes at the residue level. However, some mutations based on removal and insertion of glycosylation sites, as well as sequence swaps, allow us to exclude certain residues from, or assign them to, the discovered epitopes. For instance, mutation of glycan N5 within the central part of GII-L2 had minimal effect on the block of nAbs. Similarly, deletion of N10 had no effect on the block of nAbs specific for either genotype from eight donors; in GII, this glycan locates in the N-ter of L3. In line with those results, a chimeric mutant spanning glycan N10 and its upstream region swapped from GI into GII (swap407) blocked three of four samples from GII-infected donors to the same extent as WT. Those results exclude this region as part of a dominant epitope within GII-L3, although one donor may have nAbs targeting this region. Within L4, the glycan insertion mutant D450 only slightly affected the block of nAbs from two of four GII-infected donors tested. Thus, we may exclude this region as part of a major GII-specific epitope within L4 (Fig. V-5H vs V-6D). Interestingly, we noticed an overlap between the FFV genotype-specific peptide (residues 441-463) recognized by plasma antibodies from FFV-infected cats and the sequence location of L4 in our study with SFVs [START_REF] Mühle | Epitope Mapping of the Antibody Response Against the Envelope Proteins of the Feline Foamy Virus[END_REF]. Six residues (E-C-C-Y-P-E, highlighted in bold with grey background on sequences) within this region are conserved between FFV and both SFV strains (respective peptide sequences shown with underline). Moreover, five residues in this region are conserved between FFV and either the CI or the GII SFV strains (highlighted in bold with grey background and residue letters in color according to SFV genotype; blue = CI, red = GII). The D450N glycan insert mutation, which had limited effect on the block of GII-specific nAbs, is located in a central region of the peptides that is most divergent between FFV and SFV sequences (highlighted in bold with residue letter in orange on the GII-K74 sequence). Furthermore, the two SFV peptides border the glycan insert mutations GII-E459N and CI-W463N (highlighted in bold with residue letters in green on the respective sequences). These mutations largely affected the block of plasma nAbs from 11 of 13 plasma-protein pairs tested (Fig. V-6E vs V-7I).
Those data suggest that the SFV genotype-specific L4 epitopes may map to the C-ter region, which was not fully covered by our L4 loop peptides, or that the glycan insert mutations next to the C-ter of the peptides may disrupt a distinct epitope. The latter seems more likely since the six residues at the C-ter of both L4 peptides are 100% conserved between the two SFV strains, making this an unlikely genotype-specific epitope. However, we cannot exclude that these residues fold in genotype-specific conformations on Env from the respective strains.
Importantly, these mobile regions were crucial for infectivity of viral vectors when substituting individual loops by glycine residue linkers matched to those on loop mutant SU-Ig proteins (Fig. V-9C). Our results support that the nAbs target epitopes with functional importance. A possible mode of action could be maintenance of the Env trimer in a pre-fusion conformation unable to undergo conformational changes necessary for transition to a post-fusion conformation and fusion of viral and cellular membranes. Such apex epitope location is frequently observed for Flu-specific antibodies targeting the head domain of the hemagglutinin on influenza viruses [START_REF] Bangaru | A Site of Vulnerability on the Influenza Virus Hemagglutinin Head Domain Trimer Interface[END_REF]. Another potential mechanism of action in nAb block of viral entry is the disassembly of Env trimers into protomers which has been observed for anti-HIV-1 nAbs and bnAbs [START_REF] Lee | Antibodies to a conformational epitope on gp41 neutralize HIV-1 by destabilizing the Env spike[END_REF][START_REF] Pancera | Structure and immune recognition of trimeric pre-fusion HIV-1 Env[END_REF].
Our approach to map nAb epitopes with SU-Igs did not include quaternary structures formed by distinct protomers within an Env trimer; however, such epitopes are likely to exist. Indeed, for most donors tested, the WT SU-Ig proteins rarely restored vector infectivity in the presence of plasma to the level observed in the absence of plasma (Fig. V-3B and V-3F). Those data indicate that some nAbs were not blocked by the SU-Igs and indirectly suggest the presence of nAbs targeting quaternary epitopes. These may represent a proportion of neutralizing activity ranging from 5% to 40% (Fig. V-3F). The apex loops may form most of the quaternary epitopes based on their placement in the trimer. Indeed, quaternary epitopes have been observed for nAbs targeting the Ebola GP trimer, and one such neutralizing mAb engaged in interactions with residues from all three protomers in a GP trimer at once [START_REF] Milligan | Asymmetric and non-stoichiometric glycoprotein recognition by two distinct antibodies results in broad protection against ebolaviruses[END_REF]. A possibility to map such quaternary epitopes would be the use of Env trimers in a pre-fusion conformation or sub-viral particles which only contain Env [START_REF] Stanke | Ubiquitination of the prototype foamy virus envelope glycoprotein leader peptide regulates subviral particle release[END_REF].
Quaternary epitope mapping was not initiated during my thesis because the conformation of the apex loops is still unknown. Indeed, some parts of L3 could not be resolved in the RBD crystal structure. Furthermore, structural prediction tools were not able to give confident answers on the folding of these loops. In general, our use of prediction tools was not very informative with regard to the discovery of novel genotype-specific epitopes. These included tools for both linear (BepiPred 2.0, (Jespersen et al., 2017)) and conformational epitopes (DiscoTope 2.0, [START_REF] Kringelum | Reliable B cell epitope predictions: impacts of method development and improved benchmarking[END_REF]), which failed to pinpoint obvious genotype-specific epitopic regions. In fact, we discovered genotype-specific epitopes not predicted by these in silico tools, such as the base loop, by manually inspecting genotype-specific sequences.
In summary, we have discovered genotype-specific conformational epitopes at the upper domain of the RBD. These epitopes are located at mobile loops forming protomer interfaces at the trimer apex that may stabilize the pre-fusion conformation important for viral infectivity.
One loop (L3) appears dominantly targeted by plasma nAbs from GI-infected donors, while a second loop (L4) overlaps in sequence and location with a previously reported FFV genotype-specific linear epitope. Further studies are needed to precisely map these epitopes.
Epitopes on the Lower domain
While the SFV receptor is yet to be identified, HS is a well-established attachment factor [START_REF] Nasimuzzaman | Cell Membrane-associated heparan sulfate is a receptor for prototype foamy virus in human, monkey, and rodent cells[END_REF][START_REF] Plochmann | Heparan sulfate is an attachment factor for foamy virus entry[END_REF]. In this study, we identified four residues essential for SFV Env binding to HS, located on the CC of the GII-K74 RBD (Fig. IV-6). Mutation of these residues only affected the block of plasma nAbs to some extent for one of the four GII-infected donors tested (Fig. V-5D and V-5E). These results show that the HBS is not a dominant target for GII-specific nAbs. Additionally, we know that HS is an attachment factor for CI-PFV and thus this strain must also harbor an HBS. However, we currently do not have structural insight into a GI/CI RBD and therefore did not explore whether this site is a target for GI-specific nAbs. In the current absence of experimental CI/GI-RBD structures, AF prediction could provide a useful tool, as it gives high-confidence structures for the CC of the RBD (Fig. S.IV-8 and S.IV-9). Thus, mutations in an AF-predicted HBS on the CI-PFV SU-Ig could be designed to test whether a potential GI-specific HBS is recognized by nAbs from GI-infected individuals. Mutations could also be introduced in Env on FVVs to test whether the predicted residues impact binding and entry into cells.
While our data suggest that the HBS does not form a common nAb epitope itself, regions surrounding the HBS potentially constitute important targets. Indeed, residues around glycan N7', located above the HBS, were identified as common targets of plasma nAbs from most GII-infected donors (Fig. V-4J). Moreover, N8 is located just above N7', where it arises from a cavity; one could thus speculate whether N8 is part of an epitope formed with N7'. The buried location of N8 renders it resistant to removal by glycosidases, as observed on the solved RBD structure, and explains its essential role for Env expression (Fig. IV-3). For those reasons we could not mutate this glycan and directly investigate this question. In line with this, one could speculate whether N8 masks nAb epitopes. We assume this to be unlikely since the glycan is necessary for expression and the residues masked on the protein surface underneath this glycan form a hydrophobic patch (Fig. ). Instead, to answer these questions, I would design mutations surrounding the N8 glycan. Indeed, the glycan insertion mutant E485N, which is located close to N8, had a notable effect on the block of GII-specific nAbs (Fig. ). Although we did not find evidence of nAb recognition of the base loop among six GI-infected donors tested against the CI-PFV G350N mutant, this result remains to be confirmed with additional mutants. For example, the swap345 mutant (Fig. V-6K), containing the majority of the GI base loop sequence in the GII SU backbone, could be tested for acquired activity against GI-specific plasma samples. CI-PFV base loop deletion and swapping with GII-K74 in SU-Igs were tested but resulted in low protein expression. Importantly, studies on SFV strains from other species, including macaques and mandrills, found that our inserted N350 glycosylation site exists and circulates in nature in genotype I strains (Aiewsakun et al., 2019a). While we cannot confirm the actual attachment of a glycan at this position, these data suggest that this glycan is not deleterious for genotype I SU binding to susceptible cells or entry (Fig. V-9). In addition,
we have observed another GI-specific polymorphism just upstream at position 346 within the base loop resulting in the presence or absence of glycosylation site N7 (Fig. S.IV-7B).
In summary, we have identified two epitopic regions on the lower RBD subdomain targeted by GII-specific nAbs. Further studies are required to obtain a more precise location of these epitopes. GI-specific nAbs mostly target the upper domain and therefore we have limited the investigations on epitopes at the lower subdomain because these are likely subdominant.
Limitations of current data and opportunities to address them in future studies
Our study has some limitations, most of which are related to our experimental settings. Some of these have already been addressed in the sections above, while other important limitations have not yet been addressed. Firstly, while we have performed a broad mapping of GII-specific epitopes using a protein homologous to the strain of infection, our mapping of GI-specific epitopes was performed using a heterologous protein from the laboratory-adapted CI-PFV strain. Although this strain is cross-neutralized by most GI-infected donors [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF], nAb titers to this strain are usually lower compared to the homologous GI-D468 strain isolated from a zoonotically infected donor. Unfortunately, the GI-D468 SU was poorly expressed and tended to aggregate, and for that reason we only mapped the best candidate epitopes using the CI-PFV SU-Ig, which was well expressed. This could influence our data and we have likely missed GI-strain-specific epitopes not present in the CI-PFV strain. For example, this strain does not carry the N7' glycan, which is present in most other CI and GI strains as already mentioned. An insert of N7' within CI SU or chimeras with GI-CI sequence swaps could be used to address some of these missed GI-specific epitopes.
Secondly, the use of polyclonal plasma samples for mapping of nAb epitopes hampers the precise definition of epitopes. Our results are likely reflecting a mapping of dominant epitopes.
We would expect to observe stronger negative effects on block of nAbs by mutations affecting a larger fraction of nAbs targeting similar sites or epitopes, compared to a mutation affecting a smaller fraction of nAbs. For those reasons, it is likely that we have missed some subdominant epitopes. A solution to this would be isolation of monoclonal antibodies that recognize subdominant epitopes (see perspective section 6.3).
Other limitations include the introduction of mutations with detrimental effects on protein expression and/or folding. We observed such effects for certain mutants, as mentioned in supplementary table S.V-3. Moreover, our glycan insertion strategy has some drawbacks, including uncertainty as to whether a glycan is actually added at the desired location. Such glycan differences are difficult to observe by the size of protein bands on gels. Instead, a solution would be to perform mass spectrometry on protein samples to confirm the actual insertion of a glycan.
The glycan insert approach has been used by others for mapping of HIV-1 nAbs on viral particles [START_REF] Dingens | Comprehensive Mapping of HIV-1 Escape from a Broadly Neutralizing Antibody[END_REF][START_REF] Dingens | High-resolution mapping of the neutralizing and binding specificities of polyclonal sera post-HIV Env trimer vaccination[END_REF]. We chose our strategy with protein competitors since it was the simplest, and we indeed identified important epitopes using this strategy. Glycan insertion might be used as a screening tool, to be confirmed with additional constructs and the design of mutations which may preserve folding, such as swapping stretches of residues or aa substitutions. A model for this approach is the set of experiments performed to define the epitope on the base loop in the lower RBD (aa 345-351). This epitope was the easiest to study because the structure was known with high confidence and the loop size was genotype-specific.
Epitope comparison to nAbs targeting orthoretroviruses
MLV and HTLV: The majority of characterized nAbs against MLV, for which the RBD structure has been solved, are strain-specific and target epitopes located on the variable regions VRA, -B and -C (Fig. I-23). These loops define the viral tropism as they are critical for receptor binding. Our work on SFV also shows the targeting of variable loops on the RBD by human antibodies. In contrast to MLV, the three loops in the upper domain are not involved in cell binding. Indeed, deletion of RBDj and of the loops on recombinant SU and on Env expressed on SFV vectors had minor effects on binding to cells (Fig. V-9). Our data also confirm that most SFV-specific nAb epitopes are conformational, in line with what has been reported for HTLV-specific human antibodies [START_REF] Hadlock | The humoral immune response to human T-cell lymphotropic virus type 1 envelope glycoprotein gp46 is directed primarily against conformational epitopes[END_REF].
HIV: A wide range of bnAb epitopes have been defined on the HIV-1 Env, including the CD4 receptor-binding site. This site is a critical target of bnAbs because it is more conserved and allows fewer mutations due to its functional role across all major HIV strains [START_REF] Scheid | Sequence and structural convergence of broad and potent HIV antibodies that mimic CD4 binding[END_REF][START_REF] Zhou | Structural Repertoire of HIV-1-Neutralizing Antibodies Targeting the CD4 Supersite in 14 Donors[END_REF], as reviewed by [START_REF] Georgiev | Elicitation of HIV-1-neutralizing antibodies against the CD4-binding site[END_REF]. In addition, many epitopes formed by glycans have been well characterized for the heavily glycosylated HIV-1 Env.
Glycans on HIV Env themselves often participate in bnAb contact but can also modulate epitope focus, as seen for nAbs targeting the glycan hole supersite [START_REF] Dingens | High-resolution mapping of the neutralizing and binding specificities of polyclonal sera post-HIV Env trimer vaccination[END_REF][START_REF] Klasse | Env Exceptionalism: Why Are HIV-1 Env Glycoproteins Atypical Immunogens?[END_REF][START_REF] Mccoy | Holes in the Glycan Shield of the Native HIV Envelope Are a Target of Trimer-Elicited Neutralizing Antibodies[END_REF]. Despite less extensive glycosylation, our results pinpoint glycan N7' as a nAb target on SFV GII Env (Fig. V-4J). In contrast to HIV-1, SFV Env does not seem to use glycan masking of epitopes, which likely also contributes to the lack of immune escape variants observed in SFV infections. Moreover, our results confirm that the major nAb epitopes are located on the RBD within SU and not on the LP or TM domains of SFV Env. This is in contrast to HIV-1, for which the TM domain as well as the SU/TM subunit interface have been shown to be targeted by bnAbs. Lastly, HIV-1 nAb epitopes frequently undergo deep immune escape through sequence variations, which is in stark contrast to the highly stable genome of SFVs.
Achievements regarding the two genotypes

6.2.1 Discovery of genotype-specific epitopes
Based on previous work from our lab showing that the majority of SFV single-infected individuals neutralize only one SFV genotype [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF], we expected and directly searched for epitopes within SUvar. Indeed, I identified key epitopic regions that appear distinctly recognized by GI vs GII-infected donors.
Firstly, I identified a potent nAb epitopic region around residues 345-351 targeted by most GII-infected donors. Glycan insertion at position G350 in the base loop of the RBD resulted in a strong decrease in the block of nAbs from GII-infected donors but not GI-infected donors (Fig. V-7D). Secondly, the apex loops L2, -3 and -4 appear as major targets of nAbs for all donors; however, substantial differences were observed. In particular, L3 was a dominant target of GI-specific nAbs, while all three loops were targeted by GII-specific nAbs (Fig. V-7G and V-7L).
Further studies to precisely narrow down these genotype-specific epitopes and potentially identify novel ones may expand the list of differences.
Genotype-specific determinants of binding
While the receptor for SFV is still unknown, our results on SU-Ig binding to cells described one genotype-specific region critical for SU binding to its entry receptor(s) expressed on HT1080 cells (Fig. V-9). In line with block of nAbs, the base loop appears important for binding of SFV Env to cells for the GII-K74 strain but not for CI-PFV. Indeed, glycan insertion into this base loop only impacted binding of SU-Ig from the GII strain (Fig. V-9A). Interestingly, I also observed overall higher level of SU binding for the CI-PFV strain compared to that of GII-K74 (Fig. S.V-8B). In line with those results, our structural predictions highlight with strong confidence a clear difference in fold of this loop for which CI/GI strains harbor an additional residue as mentioned above (Fig. V-6B'). Thus, our data support that the base loop located at the lower domain of RBD may be involved in receptor binding. One could speculate that this loop interacts directly with a cellular receptor, which is different for the two genotypes.
However, this goes against published data supporting that all FVs use the same entry receptor or attachment factors [START_REF] Berg | Determinants of foamy virus envelope glycoprotein mediated resistance to superinfection[END_REF]. Alternatively, each genotype may bind the same cellular receptor in a different way. Currently, we cannot exclude the existence of several molecules required for SFV entry, including some common to all genotypes and others specific for a certain genotype.
A similar global fold of RBD among distinct FVs
The SFV RBD structure solved by our collaborators represents the first high-resolution structure of a FV RBD and adds to the list of solved exogenous retroviral RBD structures, such as those of HIV. This RBD structure represents a novel fold with no precedents and shows limited similarity to these other retroviral RBDs. Based on AF structural predictions, sequence alignments and the novel gorilla SFV RBD structure, an RBD 'common core' was shown to be structurally conserved across all FVs. In contrast, outer regions of the RBD were found to be more divergent between FVs, including among SFV strains from distinct genotypes (Fig. S.IV-9). Although the AF neural network predicts the CC with high precision for a wide range of FVs, it fails to obtain confident predictions of the outer apex loops (Fig. S.IV-8). For those reasons, experimental structures are still crucial in order to obtain and validate native protein conformations. We mainly found genotype-specific epitopes on the outer RBD regions, but epitopes located on the CC are expected and we should search for more GI epitopes in this region. Antibodies able to cross-neutralize both genotypes would likely target more 'structurally' conserved epitopes within the genotype-specific SUvar region.
Genotype-specific ELISA assay and tools to identify co-infected individuals
Our lab has previously identified co-infection by strains from the two genotypes. The neutralization assay identified one third of individuals whose plasma samples neutralized both strains. The genotype-specific PCR identified only half of these infections based on SUvar-directed primers [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF][START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]. Thus, PCR may be less sensitive than serologic assays. For those reasons, methods for the detection of genotype-specific infection by serological means would be of great interest. I tested the recognition of SU-Ig immunoadhesins by human plasma samples in ELISA and found that SU-Ig binding by plasma antibodies matched the genotype of infection. Those findings support the use of the CI-PFV and GII-K74 SU-Ig as a diagnostic tool for serological identification of the genotype of infection. Such genotype-specific binding to SU was unexpected, as nearly half of its sequence (44%) is well conserved between the two strains used, and suggests that the SUvar region (aa identity = 58%) is more immunogenic than the SUcon region (aa identity = 87%). For comparison and in support of this, the aa identity of SUcon is 97.2% between the zoonotic GII-K74 and GI-D468 strains homologous to the viruses infecting our cohort of donors. In contrast, SUvar from the heterologous CI-PFV strain used in the assay presents only 71.4% aa identity to SUvar from the gorilla GI-D468 strain. Thus, most antibodies recognizing SU are directed against the genotype-defining SUvar region.
Next step on epitope mapping
While we chose a functional assay for mapping of nAb epitopes, other approaches would be ideal to use in the future for mapping of polyclonal antibody epitopes. One recently emerged technique for that exact purpose is negative-stain electron microscopy. This tool was used for the mapping of polyclonal antibody epitopes on HIV-1 Env in sera from immunized animals [START_REF] Bianchi | Electron-Microscopy-Based Epitope Mapping Defines Specificities of Polyclonal Antibodies Elicited during HIV-1 BG505 Envelope Trimer Immunization[END_REF]. That approach, in combination with other classical epitope mapping strategies nicely recapitulated findings from other studies and gave a broad overview of major Env targets [START_REF] Dingens | High-resolution mapping of the neutralizing and binding specificities of polyclonal sera post-HIV Env trimer vaccination[END_REF]. However, this technique requires large amounts of plasma which is currently not available in our biobank from SFV-infected donors.
More precise mapping of nAb epitopes would be achieved by solving the structures of antigen-antibody complexes. However, this requires monoclonal antibodies, which are currently not available. If such mAbs were at hand, solving the structure of a mAb Fab bound to the SFV Env would directly identify the epitope and the contact residues. A Fab arm complexed to Env could potentially also favor Env in a pre-fusion state, in particular if the mAb binds a quaternary epitope. Thus, obtaining SFV Env-specific mAbs may help our collaborators obtain an Env structure in its pre-fusion conformation.
To obtain mAbs from SFV-infected individuals, I set up an assay for the isolation of memory B cells and subsequent screening for SFV Env-reactive antibodies in ELISA, based largely on a published protocol [START_REF] Huang | Isolation of human monoclonal antibodies from peripheral blood B cells[END_REF]. This approach relies on fluorescence-activated cell sorting (FACS)-based isolation of single CD19+ IgG+ memory B cells from frozen PBMCs.
Single-sorted memory B cells are cultured for two weeks in 384-well plates seeded with irradiated 3T3-msCD40L feeder cells in the presence of IL-2 and IL-21. After two weeks of culture, supernatants are harvested and screened for IgG and for binding to SFV antigens. The first cloning of memory B cells from a GI-infected donor has been initiated.
Env mutations useful for selecting Env-specific B cells
A commonly used method for the isolation of monoclonal antibodies relies on sorting of antigen-specific memory B cells with fluorescently conjugated protein baits. Such methods have been used for the isolation of potent nAbs against a broad range of pathogenic viruses, including HIV, influenza and Ebola virus, among others [START_REF] Gieselmann | Effective high-throughput isolation of fully human antibodies targeting infectious pathogens[END_REF]. In the case of SFV, using the Env protein as a bait would likely result in high background binding to all major B cell populations expressing the ubiquitous, yet to be identified, receptor. Indeed, my preliminary data showed high binding of trimeric Env and SU-Ig proteins to all major B cell, T cell, NK cell and monocyte populations from healthy donor PBMCs (data not shown). For those reasons, I initiated B cell isolation strategies that do not rely on Env baits, as described above.
Interestingly, the monomeric RBD proteins showed limited binding compared to SU-Ig and trimeric Env on the HT1080 cell line, despite the RBD blocking nAbs as efficiently as SU immunoadhesin. This knowledge is valuable for the use of SFV Env as a protein bait. Moreover, the identification of mutations which lowered cell binding but retained nAb block such as the HBS mutant pairs (K342/R343 and R356/R369) may further diminish Env background binding to HS expressed on unspecific B cells. These mutant proteins have to be tested for their binding to human primary B cells before their use to isolate SFV-specific B cells.
Next step on SFV Env structure
At the present moment, we do not have a pre-fusion Env trimer structure from SFV available.
As mentioned above, an Env structure in its pre-fusion conformation would be highly valuable and is needed for the characterization of quaternary nAb epitopes as well as for decoding the SFV fusion machinery. Conversely, isolation of mAbs targeting a quaternary epitope may allow a stabilized Env trimer 3D structure to be obtained. In the absence of mAbs, a pre-fusion Env trimer could be obtained through the introduction of stabilizing mutations preventing Env from folding into a post-fusion state. Such mutations were successfully introduced into HIV-1 Env [START_REF] Julien | Crystal structure of a soluble cleaved HIV-1 envelope trimer[END_REF] and CoV spike [START_REF] Hsieh | Structure-based design of prefusion-stabilized SARS-CoV-2 spikes[END_REF][START_REF] Kirchdoerfer | Pre-fusion structure of a human coronavirus spike protein[END_REF], among other viral fusogenic glycoproteins, and these stabilized trimers are preferred as vaccine antigens since they favor the induction of nAbs compared to non-stabilized trimers, as reviewed by [START_REF] Sliepen | HIV-1 envelope glycoprotein immunogens to induce broadly neutralizing antibodies[END_REF].
While we have structural details for the GII-K74 strain, such information is lacking for the other genotype. Prediction tools like AF did not give clues to differences in the fold of the RBD from GI strains compared to our X-ray RBD structure from the GII-K74 strain. Thus, obtaining high-resolution structures of an RBD or pre-fusion trimer from a GI strain to compare major differences between SFV genotypes would be valuable and would help to find key regions that can explain genotype-specific epitopes. Moreover, obtaining trimeric Env structures would validate our results on the location of the RBD within the pre-fusion trimer. It is expected that the flexible apex loops will have distinct conformations, which may only be visible within a trimeric context, or may only become visible upon nAb binding.
Next step on SFV receptor

6.6.1 HBS
Our results identified residues K342, R343, R356 and R369 as essential for SFV Env binding to HS expressed on cells. However, binding was readily observed for HBS mutant proteins despite removal of surface expressed HS. It is currently unclear whether residual HS mediates the low level of binding or whether the results indirectly support the presence of an additional receptor. HT1080 cells express an excessive amount of HS compared to BHK cells (approx.
10-fold higher), and HS surface expression is correlated with the susceptibility to infection in cell lines [START_REF] Plochmann | Heparan sulfate is an attachment factor for foamy virus entry[END_REF]. Indeed, one study concluded that HS acts as a direct entry receptor for PFV [START_REF] Nasimuzzaman | Cell Membrane-associated heparan sulfate is a receptor for prototype foamy virus in human, monkey, and rodent cells[END_REF], while another study concluded that an additional receptor must be present, since FVV transduction of HS-negative cells was possible, albeit at lower levels compared to the parental HS-positive clone [START_REF] Plochmann | Heparan sulfate is an attachment factor for foamy virus entry[END_REF]. Indeed, the study of SFV entry and the search for a FV-specific receptor may require the use of HS-deficient cell lines. Several studies have used Raji B cells as a control due to their overall resistance to PFV infection and low transduction by FVVs, which correlate with their near absence of surface-expressed HS. In my experiments, Raji cells are stained by SFV Env proteins at a low level.
Similarly, HS staining is very low but treatment with heparinase yields a clear staining with the control antibody against the neo-epitope present post HS cleavage (data not shown). These results suggest a low but detectable level of HS on Raji cells.
Our definition of the HS binding site and the mutant proteins will likely be useful for the experiments on SFV entry in the future.
Next step on antiviral role of antibodies
Lastly, my study has focused solely on nAbs and thus, we did not assess other functionalities beyond neutralization. Fc-mediated functions and complement may play important roles in antibody-mediated control of SFV infection, as has been observed for HIV-1 nAbs in protection against SHIV challenge in macaques [START_REF] Hessell | Fc receptor but not complement binding is important in antibody protection against HIV[END_REF]. In addition to block of viral entry, nAbs may prevent dissemination of virus in vivo and could potentially mediate reduction or elimination of viral reservoirs through ADCC or ADCP by binding to surface expressed Env on infected cells [START_REF] Barin | HIV-1 antibodies in prevention of transmission[END_REF]. In the case of HIV-1, experimental intravenous use of bnAbs in NHP models have shown that these potent antibodies can suppress viral replication in anatomical sites distant from the site of viral inoculation [START_REF] Liu | Antibody-mediated protection against SHIV challenge includes systemic clearance of distal virus[END_REF]. This study also suggested that the V3-glycan recognizing bnAb PGT121 permits mucosal translocation of SHIV. In relation to this, another possible mode of action is block of viral budding from infected cells as has been observed in vitro [START_REF] Dufloo | Broadly neutralizing anti-HIV-1 antibodies tether viral particles at the surface of infected cells[END_REF].
For SFV, two small studies have addressed the role of antibodies in protection against infection and establishment of chronic SFV-infection in NHPs [START_REF] Khan | Simian foamy virus infection by whole-blood transfer in rhesus macaques: potential for transfusion transmission in humans[END_REF][START_REF] Williams | Role of neutralizing antibodies in controlling simian foamy virus transmission and infection[END_REF]. Experimental SFV-infection in rhesus macaques upon whole blood transfusion from an SFV-infected donor monkey to an SFV-naïve recipient monkey demonstrated that infection only occurred when low nAb titers were present, or when the plasma was removed from the whole blood transfusion [START_REF] Khan | Simian foamy virus infection by whole-blood transfer in rhesus macaques: potential for transfusion transmission in humans[END_REF][START_REF] Williams | Role of neutralizing antibodies in controlling simian foamy virus transmission and infection[END_REF].
Although only few animals were studied, those results suggest that antibodies play a role in protection against SFV-infection. Non-neutralizing antibody functionalities were not investigated in these studies. Importantly, a recent paper from our lab showed that SFV-specific plasma antibodies from infected humans can bind to the surface of infected cells, possibly allowing the recruitment of innate effector cells [START_REF] Couteaudier | Plasma antibodies from humans infected with zoonotic simian foamy virus do not inhibit cell-to-cell transmission of the virus despite binding to the surface of infected cells[END_REF]. Currently, the obstacle to test antibody-mediated elimination of SFV-infected cells is the low Env surface expression on infected cells and the relatively narrow window of Env expression prior to syncytia formation and cell death upon in vitro infection with replication competent virus.
Instead, target cells transduced with an Env expression vector for stable Env surface presentation or surrogate assays using recombinant Env and reporter effector cells could be applied to circumvent this issue for the future. Such assays can be used with polyclonal plasma samples from SFV-infected individuals. Moreover, and beyond neutralization, this would also be ideal to test for any isolated mAbs specific for SFV Env.
CONCLUSIONS
Our collaborators describe the first high-resolution structure of a FV RBD, from a zoonotic gorilla SFV strain. This 3D structure shows the organization of the RBD into an upper and a lower subdomain. The upper domain is involved in inter-protomer contacts and likely stabilizes the trimeric Env. Moreover, we discovered a potent HBS within the lower domain and demonstrated that four residues, K342, R343, R356 and R369, are directly involved in SFV Env binding to immobilized heparin and cell surface-expressed HS.
I demonstrated that nAb epitopes are mainly conformational and I discovered epitopes on both domains of the RBD with evidence for genotype-specific targets. In addition, we demonstrated that mobile loops at the RBD apex likely involved in trimer stabilization are targeted by nAbs in a genotype-specific manner, for which L3 is dominantly targeted by GI-infected donors. We confirmed that these loops are necessary for viral entry or fusion and that they are not recognized as linear peptides by plasma antibodies. Moreover, a dominant genotype-specific epitope was defined around residues 345-351 in the lower domain targeted only by plasma antibodies from GII-infected donors. My results do not support the HBS as a dominant epitope, however other targets in its proximity were discovered including the N7' glycan on GII-RBD.
In conclusion, the work presented in this thesis highlights the first comprehensive mapping of conformational epitopes on the SFV Env targeted by human polyclonal plasma antibodies. My work contributes substantially to knowledge on immune responses to SFVs, and I provide evidence that human nAbs target epitopes with functional importance for the virus. This knowledge may aid the ongoing work on understanding the impact of SFV on human health, its subsequent control in the human host upon zoonotic spill-over from NHP reservoirs, and the prevention of viral emergence in the human population. Our work may also guide the future design and use of FVs as gene therapy tools.

[Table fragment - crystallographic cell dimensions a, b, c (Å): 99.5, 99.5, 120.6; 99.6, 99.6, 120.9; 123.6, 123.6, 191.6; α, β, γ (°): 90, 90, 120 for each dataset; Resolution range (Å): values not preserved.]
CHAPTER VIII
___________________________________________________________________________
LIST OF REFERENCES
Table S.IV-3 -Intramolecular interactions within GII RBD
The intramolecular interactions were analyzed with the ProteinTools program (https://proteintools.uni-bayreuth.de) [START_REF] Ferruz | ProteinTools: a toolkit to analyze protein structures[END_REF].
Van der Waals contacts in the SFV RBD
Table columns: Cluster #, Area (Å²), # of residues, Location (domain).

The FV RBD core is formed by the hydrophobic residues grouped in 6 clusters - 2 in the lower subdomain (clusters #1 and #2), 3 in the upper subdomain (clusters #4, #5, #6), and the largest hydrophobic cluster (BSA = 2451 Å² with 51 participating residues; cluster #3, shown in green) running in the direction of the longer axis of the RBD and containing residues from both domains. There are 24 networks of residues whose side chains contribute to 43 hydrogen bonds, with 21 charged residues forming 9 salt bridges. The area of the hydrophobic interfaces in the lower subdomain is about 6 times larger than in the upper subdomain, while the hydrogen bonds and salt bridges are more prevalent in the upper subdomain. The full list of intramolecular interactions and relevant details are given in Table S.IV-3.
Figure S.IV-6 -Intramolecular contacts between N8 sugar and RBD
The buried surface area (BSA) for each sugar residue was calculated as a percent of the total surface area (Å²) in ePISA [START_REF] Krissinel | Inference of macromolecular assemblies from crystalline state[END_REF] and plotted. Sugars that establish hydrogen bonds with the amino acids are indicated with the letter 'h'. Sugars 6 and 7 are colored in grey for RBD D because they were not resolved in the structure.

Models generated by the AF prediction program [START_REF] Jumper | Highly accurate protein structure prediction with AlphaFold[END_REF] are colored according to the per-residue confidence metric called 'predicted local distance difference test' (pLDDT). The pLDDT can have a value between 0 and 100, with higher model confidence corresponding to a higher pLDDT number. pLDDT > 90 (rendered in blue on the panels) is the high-accuracy cut-off, above which the backbone and rotamers are predicted with high confidence; values between 70 and 90 (cyan) correspond to regions where the backbone conformation is correct; values between 50 and 70 (yellow) have low confidence and are not reliably predicted; and regions with pLDDT below 50 (red) should not be interpreted. Structural superpositions of all AF models against each other were carried out using the mTM-align server for multiple structural alignments (Dong et al., 2018a;[START_REF] Dong | mTM-align: an algorithm for fast and accurate multiple protein structure alignment[END_REF]), available at https://yanglab.nankai.edu.cn/mTM-align/. Below each model is a 'template modelling score' (TM-score), which is a length-independent scoring function reflecting the similarity of two structures (Zhang and Skolnick, 2004). The TM-scores can take values between 0 and 1, with a higher TM-score indicating higher structural similarity. The indicated TM-scores correspond to the pairwise superimposition of each AF model onto the X-ray Gorilla GII RBD structure.

Flow cytometry gating strategy for the detection of Env binding and HS expression on SFV-susceptible cell lines. Cells were treated with trypsin-EDTA before labelling with Env proteins or anti-HS antibodies. A) Representative example of HT1080 single-cell selection: live cells were selected by a gate applied on an FSC-A/SSC-A dot-plot and a single-cell gate applied on an SSC-A/SSC-H dot-plot. B) Representative example of Env binding analysis. HT1080 cells were labelled with GII-K74 WT, K342A/R343A (mut1) or R356A/R369A (mut2) ectodomain proteins, anti-StrepMAB-Classic-HRP and anti-HRP-AF488 antibodies. Staining obtained on gated single cells is presented on the histogram overlay: MFI is presented on the x-axis and frequency is expressed as the normalized percentage of gated events on the y-axis (%Max). Cells labelled with secondary antibodies only ("control" condition, black curve) were used as a reference; Env-specific staining was quantified by the ratio of MFI from Env-treated to untreated cells. C) Representative example of heparan sulfate staining after treatment with heparinase III. HT1080 cells were treated with heparinase III or buffer and stained with the F58-10E4 antibody specific for heparan sulfate (anti-HS) and the F69-3G10 antibody specific for glycans exposed after heparan sulfate removal (anti-ΔHS). Staining obtained on gated single cells is presented on the histogram overlay: MFI is presented on the x-axis and frequency is expressed as the normalized percentage of gated events on the y-axis (%Max).
Cells labelled with secondary antibodies only ("control" condition, black curve) were used as a reference; HS- and ΔHS-specific staining was quantified by the ratio of MFI from labelled to control cells. D) HT1080 cells were treated with heparinase III or buffer and stained with antibodies specific for HS (HS) or glycans exposed after heparan sulfate removal (ΔHS). Expression levels are calculated as the ratio of MFI from labelled to unlabeled cells.

c Linear B-cell epitopes were predicted using the software available on the Immune Epitope Data Base (http://tools.iedb.org/bcell/): LBtope [START_REF] Singh | Improved method for linear B-cell epitope prediction using antigen's primary sequence[END_REF] and Parker hydrophilicity prediction, replaced by the Bepipred program [START_REF] Larsen | Improved method for predicting linear B-cell epitopes[END_REF] and Ellipro [START_REF] Ponomarenko | ElliPro: a new structure-based tool for the prediction of antibody epitopes[END_REF]. Genotype-specific sequences were manually defined. After resolution of the RBD structure (Fernandez et al., 2022, submitted), eight novel peptides overlapping the four loops were synthesized.

b The level of expression was assessed on crude supernatants of transfected cells and categorized as undetected, insufficient to perform the experiments, decreased, or normal relative to the WT counterpart.

Pygmy GI+GII

a Participants were infected with a gorilla SFV of which the genotype was defined by PCR using primers located within SUvar [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. Among the four individuals infected by both genotypes, only one (BAK55) was tested against both genotypes in epitope mapping experiments because his nAb titers were high against both genotypes; the three other samples were tested against a single viral genotype.

For the CI and GII SUs, two bands are visible, mostly for the SU-ST construct, in accordance with the results of other reports [START_REF] Falcone | Sites of simian foamy virus persistence in naturally infected African green monkeys: latent provirus is ubiquitous, whereas viral replication is restricted to the oral mucosa[END_REF]. To verify immunoadhesin purity and aggregate formation, 1.5 µg of purified immunoadhesins were heat-denatured at 70°C for 10 min, with or without DTT. Samples were loaded onto a precast NuPAGE 4-12% Bis-Tris gel and the proteins separated by electrophoresis. Gels were then stained with Coomassie blue and imaged using a G:BOX (Syngene). A western-blot control was performed for all affinity-purified immunoadhesins. The constructs are listed in Table S.V-3 and their names are indicated over the images.

Twelve plasma samples from African hunters (Table S.V-1) were tested for binding to peptides overlapping loops located in the upper subdomain of the RBD (Table S.V-2). Plasma samples from four uninfected (grey symbols), four GI-infected (blue symbols), and four GII-infected (red symbols) individuals were tested. The CMV and SFV Env6 peptides were used as positive controls and the SFV Env5 peptide as a negative control [START_REF] Lambert | An Immunodominant and Conserved B-Cell Epitope in the Envelope of Simian Foamy Virus Recognized by Humans Infected with Zoonotic Strains from Apes[END_REF]. The responses are presented as the net optical density (y-axis) for each peptide (x-axis).
and incubated with immunoadhesins at concentrations ranging from 60 to 0.02 nM. The mix was then added to FVVs expressing the GII-K74 Env before titrating infectivity. The relative infectivity is presented as a function of immunoadhesin concentration. The addition of GII SU (black symbols) inhibited the action of the nAbs, whereas affinity-purified GII 351glyc did not (green closed symbols). To exclude that GII 351glyc aggregation led to epitope masking, the chromatography-purified fractions 3 and 4 (GII 351glyc [SEC], green open symbols) were pooled, concentrated, and tested in parallel. These contained no aggregates (panel A) but were unable to block nAbs from the four individuals.

The SUvar protein sequences from the SFVggo strains circulating in Central Africa and CI-PFV were aligned. We included all available sequences: 10 genotype II sequences (nine zoonotic strains and one animal strain [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]) and 15 genotype I sequences (14 zoonotic strains and one animal strain [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]).

To compare the binding capacity of the immunoadhesins, staining was performed at three doses, the MFI ratios plotted as a function of immunoadhesin concentration, and the area under the curve (AUC) calculated. Shaded regions represent the AUC. Data from five independent experiments performed with WT immunoadhesins are presented as the mean and standard error.
Supplementary Figures -Manuscript II
CI SU bound at higher levels than GII SU. C. The treated and mutated immunoadhesins were tested for binding to susceptible cells and staining levels were normalized to that of the WT immunoadhesin included in every experiment. The graph shows lower staining by GII ΔRBDj than GII SU.
Figure I-1 - Different stages of emerging zoonotic viral agents
Figure I-2 - Electron microscopy pictures of HFV derived from tissue cultures of a Kenyan patient
Figure I-3 - Phylogenetic tree of retroviruses belonging to the Retroviridae family
Figure I-4 - Phylogenetic relationship of simian members in the Spumaretrovirinae subfamily
Figure I-5 - Evolution of FV co-speciation within vertebrate hosts
Figure I-6 - Structural organization of FV particle
Figure I-7 - Genomic organization of PFV and its viral transcripts
Figure I-8 - Schematic and electron microscopy representation of PFV replication cycle
Figure I-9 - Primary sequence and 3D structure of trimeric PFV Env
Figure I-10 - Schematic overview of PFV Env gp130
Figure I-11 - Schematic overview of superinfection resistance
Figure I-12 - Phylogenetic analysis of SFV strains based on the conserved and variable region of env
Figure I-13 - Recombination analysis of fifty-four SFV env sequences
Figure I-14 - Schematic overview of SUvar region within PFV SU gp80
Figure I-15 - A view of the global distribution of NHPs
Figure I-16 - Global distribution of zoonotic SFV-infections reported to date
Figure I-17 - SFV in vivo tropism in Central African hunters
Figure I-18 - Pyramidal scheme of zoonotic SFV transmission and potential immune control
Figure I-19 - Schematic structural overview of human Ig molecules and subclasses
Figure I-20 - Overview of dynamics during germinal center reaction and B cell fates
Figure I-21 - Neutralization profiles of donors infected with zoonotic SFV strains from two genotypes
Figure I-22 - Organization of distinct retroviral Envs
Figure I-23 - Fr-MLV SU gp70 sequence and RBD structure
Figure I-24 - HTLV-1 SU gp46 sequence and location of nAb epitopes
Figure I-25 - Structure of HIV-1 Env pre-fusion trimer and location of major classes of bnAb epitopes
Figure IV-1 - Overview of the novel fold adopted by the SFV RBD
Figure IV-2 - SFV secondary structure topology diagram
Figure IV-3 - The oligosaccharide linked to N390 plays a structural role in the RBD
Figure IV-4 - The RBDs form a trimeric assembly at the apex of the full-length Env
Figure IV-5 - Prediction of HS binding residues and design of the variants impaired in binding
Figure IV-6 - The SFV RBD residues K342, R343, R356 and R369 mediate Env binding to HS
Figure V-1 - SFV Env
Figure V-2 - Only a small proportion of plasma samples bind to peptides covering the SUvar domain
Figure V-3 - The SFV SU block nAbs without affecting viral entry
Figure V-4 - SFV-specific nAbs recognize glycans on SUvar
Figure V-5 - Most SFV-specific nAbs recognize the RBDj domain
Figure V-6 - Epitope disruption by glycan insertion revealed additional sites recognized by nAbs
Figure V-7 - GI and GII-specific nAbs target different epitopes
Figure V-8 - Most samples contain nAbs that recognize several epitopic regions on SUvar
Figure V-9 - nAbs target epitopic regions involved in SU binding to susceptible cells or required for viral infectivity
Figure V-10 - Schematic summary of current knowledge on SFV Env
Figure VI-1 - Summary of discovered genotype-specific epitopic regions
Figure S.IV-1 - The fold of the FV RBD is maintained by hydrophobic and polar interactions
Figure S.IV-2 - Mobile loops decorate the apex of the RBD
Figure S.IV-3 - Comparison of glycosylated vs deglycosylated RBD structures
Figure S.IV-4 - Comparison of the SFV RBD fold with that of the RBD of Orthoretroviruses
Figure S.IV-5 - Sequence conservation of FV Env
Figure S.IV-6 - Intramolecular contacts between N8 sugar and RBD
Figure S.IV-7 - Functional features of FV RBD mapped onto the structure
Figure S.IV-8 - AlphaFold models of FV RBDs
Figure S.IV-9 - FV RBD common core excludes a large portion of the upper subdomain
Figure S.IV-10 - The inter-protomer RBD contacts formed by the upper domain loops show poor sequence conservation
Figure S.IV-11 - Recombinant RBD variants remain monomeric in solution
Figure S.IV-12 - Flow cytometry gating strategy
Figure S.IV-13 - Effect of mutations on FVV release and infectious titer
Figure S.IV-14 - Structural basis for RBDjoin region being dispensable for binding to cells
Figure S.V-1 - CI-PFV, GI-D468, and GII-K74 Env sequence alignment
Figure S.V-2 - Recombinant SFV Env oligomerization and mammalian-specific glycosylation do not affect the capacity to inhibit GII-specific nAbs
Figure S.V-3 - Western-blot analysis of WT SU proteins used in the study
Figure S.V-4 - Purity of immunoadhesins used in the study assessed by Coomassie blue gel staining
Figure S.V-5 - Plasma antibodies do not bind to peptides covering the loops located at the apex of the RBD and targeted by nAbs
Figure S.V-6 - The GII 351glyc immunoadhesin is unable to block nAbs - exclusion of a nonspecific effect of protein aggregation
Figure S.V-7 - Sequences from gorilla SFV strains circulating in Central Africa are conserved in the epitopic regions targeted by nAbs
Figure S.V-8 - Recombinant immunoadhesins bind to susceptible cells
Figure S.V-9 - Env proteins deleted of RBDj, L2, L3 or L4 sequences are expressed in transfected cells
Figure I-1 - Different stages of emerging zoonotic viral agents

Figure I-2 - Electron microscopy pictures of HFV derived from tissue cultures of a Kenyan patient
Left: Mature viral particle budding from the cell surface plasma membrane (x137,500). Right: Mature and immature (arrows) viral particles budding from the cell surface plasma membrane (x180,000). The authors acquired the pictures with a Phillips EM 300 electron microscope. Figure from [START_REF] Achong | An unusual virus in cultures from a human nasopharyngeal carcinoma[END_REF].

Figure I-3 - Phylogenetic tree of retroviruses belonging to the Retroviridae family
Phylogenetic tree of exogenous and endogenous members of the Retroviridae family based on the conserved sequence of pol. Genera belonging to the Orthoretrovirinae and Spumaretrovirinae subfamilies are shown in blue and red, respectively. The three classes of ERVs are shown in olive. Complex-type retroviruses are highlighted with a star next to the genus name to distinguish them from simple-type retroviruses. Posterior probabilities are shown near selected nodes. Figure adapted from (Han and Worobey, 2012a).

Figure I-4 - Phylogenetic relationship of simian members in the Spumaretrovirinae subfamily

Figure I-5 - Evolution of FV co-speciation within vertebrate hosts
FV phylogeny in colored lines and host phylogeny in black lines. Colors represent aquatic, amphibian and terrestrial FVs in blue, purple and red, respectively. Dotted lines represent extinct or yet to be discovered FV lineages. Cross-clade transmissions are depicted as thick vertical transparent bars, with arrows indicating the direction of transmission. Certain transmission routes are unclear (?) to this date due to limited data. Host evolutionary timescale and scalebar are in units of million years. Figure from [START_REF] Aiewsakun | Avian and serpentine endogenous foamy viruses, and new insights into the macroevolutionary history of foamy viruses[END_REF].

Figure I-6 - Structural organization of FV particle
Structure of the PFV virion with enlargement of Env gp130 and its three subdomains LP gp18, SU gp80 and TM gp48. Other viral components, including Gag p71/p68, IN p40, PR-RT-RH p85 and the viral genome, are highlighted below. Figure adapted from [START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF].

Figure I-7 - Genomic organization of PFV and its viral transcripts
Top: The viral RNA genome harbors a cap at its 5' end and a polyadenylation tail at its 3' end. The viral ssRNA is reverse transcribed into viral dsDNA. The LTR includes the U3, R and U5 regions shown in dark grey, black and light grey boxes, respectively. The provirus is generated by integration of the viral dsDNA into the host genome. The FV genome includes two promoters: a classical retroviral promoter in the U3 region of the 5' LTR and an unusual IP in env, shown by black curved arrows. The viral genes gag, pol, env, tas and bet are shown in faded orange, blue, green, yellow and purple boxes, respectively, encoding the polyproteins shown below in dark shades. The protein Gag p71 is cleaved at its C-ter, yielding the p68 and p3 products shown in red. The protein Pol p127, containing the viral enzymes PR, RT, RH and IN, is cleaved between RT and IN, yielding PR-RT-RH p85 and IN p40 shown in dark and light blue, respectively. The surface Env gp130 is cleaved twice, yielding the LP gp18, SU gp80 and TM gp48 subunits shown in dark green, black and brown. The accessory proteins Tas and Bet are illustrated by dark yellow and purple boxes. Protein cleavage sites are shown by black vertical arrows. Bottom: Spliced and full-length primary transcripts derived from the IP or 5' LTR promoters are shown below the polyproteins, with ORFs colored accordingly. The cap and polyA tail are represented by C and An symbols, respectively. Figure adapted from [START_REF] Delelis | Biphasic DNA synthesis in spumaviruses[END_REF], [START_REF] Hamann | Foamy Virus Protein-Nucleic Acid Interactions during Particle Morphogenesis[END_REF] and [START_REF] Pollard | The HIV-1 Rev protein[END_REF].
1) Attachment of the virus to its surface-expressed receptor on the target cell
2) Entry of the virus into the cytoplasm and release of the capsid
3) Decapsidation and reverse transcription of the viral ssRNA into dsDNA
4) Transport of the viral dsDNA into the nucleus and integration into the host-cell chromosomes (creation of the provirus)
5) Transcription of the proviral genome by host-cell RNA polymerase II into genomic RNA (replication) and mRNA for the synthesis of viral proteins
6) Virus assembly in the cytoplasm
7) Maturation, including budding and egress of new mature viral particles from the host cell
Figure I-8 - Schematic and electron microscopy representation of PFV replication cycle
The PFV virion binds to ubiquitously expressed surface molecule(s) on a host target cell. After fusion of the viral and cellular membranes, the viral capsid migrates to the MTOC. Uncoating of the capsid releases the viral genome. The viral ssRNA genome is transcribed into dsDNA by RT and translocated into the nucleus. The viral genome is integrated into the host chromosomes by IN. Host-cell machinery transcribes the viral genome and differentially spliced viral RNAs are exported out of the nucleus. Pol, Gag and Tas transcripts are translated into protein by the ribosome in the cytosol, while Env is translated at the ER. Capsids containing viral RNA are formed at the MTOC, and a late RT event may occur after capsid assembly and before budding, resulting in capsids containing viral dsDNA. The majority of capsids migrate to the ER and Golgi, where they fuse with intracellular membranes containing Env. Mature viral particles bud from intracellular compartments depending on Env and are most likely released from the cell through exocytosis. Some capsids acquire Env at the plasma membrane, and small amounts of capsid-less SVPs are also released from the cell surface. Figure from [START_REF] Lindemann | Foamy virus biology and its application for vector development[END_REF].
central helices were observed by cryo-EM below the upper part of the trimer (Fig. I-9, middle).
Figure I-9 - Primary sequence and 3D structure of trimeric PFV Env
Top: Schematic of the PFV Env gp130 primary sequence with the N-ter LP gp18, central SU gp80 and C-ter TM gp48 subunits in light grey. PFV aa numbering and cleavage site (arrows) locations are indicated on top of the sequence. Essential domains of each subunit are shown by dark grey boxes with the specific aa location below. Middle: Cryo-EM 3D side-view full (left), cut (center) and grey density map (right) reconstruction of a single PFV WT Env trimer (iNAB mutant deficient in Gag-RNA binding) at a resolution of approximately 9 Å (EMD:4011). The extracellular domain is presented in salmon (left) and grey (center). The three central TM gp48 FP helices are presented in green. The LP gp18 and TM gp48 helices spanning the membrane are colored orange (outer helices) and blue (inner helices) but cannot be attributed due to insufficient resolution. The scalebar represents 50 Å. Bottom: Subtomogram averaging of WT PFV Env (left). Hexagonal assembly of PFV Env from WT particles at 32 Å resolution (right) (EMD:4006). Figure adapted from [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF] and [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF].
(Fig. I-10). A study demonstrated that 14 out of the 15 sites are in use; the only PNGS not attached by a glycan was the very first, located on the CyD of LP. Three evolutionarily conserved glycans have been shown to be of particular importance for Env expression and intracellular transport: glycan number 8 located at N391 in the SU domain, and glycans 13 and 15 located at N782 and N833 in the TM domain (Fig. I-10).
Figure I-10 - Schematic overview of PFV Env gp130
The three Env gp130 subunits are enlarged and shown in green, black and brown, respectively. Cleavage sites between domains are presented as straight arrows, and aa numbering is presented on top of the domain boxes. The 15 PNGS are presented as Y symbols on top of the domain boxes and the 24 cysteine residues below the domain boxes, respectively. Asparagine (N) and cysteine (C) residue numbering is shown by a lowered number next to the residue. N25 is highlighted by a star as this residue is not attached by a glycan. N391, N782 and N833 are highlighted as they are essential for Env expression and intracellular transport. Arrows below the LP and TM domains show the motifs involved in LP-Gag interaction, ubiquitination sites and ER retrieval signal sequences involved in SVP release and intracellular transport and egress of Env, respectively. The SU domain harbors the bipartite RBD composed of the RBD1 and -2 domains in orange, separated by the non-essential RBDjoin domain in grey, as shown below the SU domain box. Figure adapted from [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF], [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF] and [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].

Figure I-11 - Schematic overview of superinfection resistance
Comparison of superinfection resistance (SIR) scenarios for MLV and PFV. A: Cellular entry of MLV is inhibited by the soluble MLV SU gp70 monomer (grey subunit) bound to the MLV entry receptor mouse cationic aa transporter 1 (mCAT1), shown in yellow. B: Cellular entry of PFV is allowed in the presence of the soluble PFV SU gp80 monomer (black subunit), despite SU gp80 binding to the cell surface-expressed heparan sulfate proteoglycan (HSPG) attachment factor (shown in green) and a currently unknown but assumedly ubiquitously expressed FV-specific entry receptor (shown in pink). C: Cellular entry of PFV is inhibited by cell-surface expressed Env gp130 on PFV-
Therefore, env DNA was amplified from blood samples of 40 individuals infected with chimpanzee or gorilla SFVs and from wild caught NHPs living in surrounding rural areas of the study population. Phylogenetic alignment of the obtained sequences revealed presence of a variant region within the central region of SU whose sequence do not segregate according to host species, in contrast to the flanking parts of the env gene. Recombination was the most probable origin of these two variants, but one parental strain was unidentified (Fig.I-12)[START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].
Figure I-12 - Phylogenetic analysis of SFV strains based on the conserved and variable region of env
Phylogenetic alignment of SFV env sequences from NHPs and zoonotically infected humans based on the conserved (a) or variant (b) region of env located within the SU gp80 domain. SFVggo sequences are shown in purple or blue and SFVcpz sequences in yellow or orange, respectively. Alignments based on the variable region demonstrate the presence of two SFV clades or genotypes circulating among gorillas and chimpanzees in Central Africa. Figure from [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].
based genotypes circulating among NHPs to have originated approximately 30 million years ago during the diversification of OWMs and Apes (Fig. I-13) (Aiewsakun et al., 2019a).
Figure I-13 - Recombination analysis of fifty-four SFV env sequences

Figure I-14 - Schematic overview of SUvar region within PFV SU gp80
The sequence of the SU gp80 protein with the SUcon and SUvar regions shown in black and purple, respectively. Numbering and location of N-linked glycans are highlighted according to the PFV Env aa sequence. The SU domain harbors the bipartite RBD composed of the RBD1 and -2 domains in orange, separated by the non-essential RBDjoin domain in grey, as shown below the SU domain box. The 249 aa long SUvar region overlaps the majority of the RBD. Figure adapted from [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF] and [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].

Figure I-16 - Global distribution of zoonotic SFV-infections reported to date

Figure I-17 - SFV in vivo tropism in Central African hunters

Figure I-18 - Pyramidal scheme of zoonotic SFV transmission and potential immune control
Schematic illustration of zoonotic SFV transmission from NHPs to humans. SFV efficiently crosses the species barrier to humans, leading to a replication-competent and life-long persisting infection. Immune control may prevent human-to-human transmissions and subsequent diffusion of SFV into the human population, which is in stark contrast to the epidemic and endemic HIV-1 and HTLV-1/2. Figure adapted from (Gessain et al., 2013).

The findings discussed above indicate limited pathology associated with zoonotic SFV infections. Based on the current knowledge addressed so far, we propose the following model for SFV transmission and its subsequent diffusion into the human population (Fig. I-18, from top to bottom): SFV is frequently transmitted to humans in close contact with NHPs, leading to a persistent and replication-competent life-long infection. In zoonotically infected humans, SFV
The first innate defense mechanisms against viruses are constitutively expressed host restriction factors that interfere with the different steps of viral replication. Sensors of innate immunity recognize viral components and/or damage associated with infection and induce the production of IFN. IFNs belong to three families defined by their respective cell surface receptor. The type I IFN family comprises many IFN-α subtypes and IFN-β. The type II IFN family comprises IFN-γ only, and the type III IFN family comprises three subtypes of IFN-λ. Upon binding to their respective cell surface receptor complex, all IFNs initiate signaling pathway cascades involving Janus kinase (JAK) and signal transducer and activator of transcription (STAT)
patterns (PAMPs) by a limited number of universal germ-line encoded receptors termed pattern-recognition receptors (PRRs)[START_REF] Yan | Intrinsic antiviral immunity[END_REF]. PRRs include Toll-like receptors (TLRs), nucleotide oligomerization and binding domain-like receptors (NLRs) and retinoic acid-inducible gene I-like receptors (RLRs). The TLRs are expressed on several innate immune cell types like dendritic cells (DCs), macrophages, NK cells, γδ T cells, granulocyte-like mast cells, neutrophils and eosinophils. The TLR family comprises 10 members which differ from one another by ligand specificities and gene expression upon activation. TLR3, -7, -8 and -9
characterized restriction factors shown to target multiple viruses include; Tetherin, APOBEC3 family, TRIM5α of the tripartite-motif (TRIM) family, sterile alpha motif and HD domain 1 (SAMHD1), myxovirus resistance (Mx) proteins, the family of serine incorporator (SERINC) proteins and the IFN-induced transmembrane (IFITM) family members (Colomer-Lluch et al., 2018; Yan and Chen, 2012). These host factors interfere with different parts of the viral replication cycle including both early and late steps. Examples of early steps include inhibition of cytosolic entry and fusion (IFITMs, SERINCs), block of viral capsid uncoating (TRIM5α) and inhibition of RT and/or transcription (APOBEC3s, SAMHD1). Examples of late step interference include inhibition of nuclear accumulation/integration (MxB) and blocking the release of budding viral particles (Tetherin) (Colomer-Lluch et al., 2018). Moreover, many
For CD4+ Th cells, subsets are defined on the basis of the cytokines they secrete. The three major Th cell subsets are Th1, Th2 and Th17 cells. Th1 cells predominantly secrete IL-2 and IFN-γ, which exclusively promote the cytotoxic effector functions of CD8+ CTLs and NK cells capable of killing virus-infected cells. Th2 cells secrete primarily IL-4 and -13, which facilitate the induction of humoral immunity through the activation of antibody-producing B cells. Th17 cells produce IL-17 and play an important role in the exacerbation and induction of autoimmunity as well as in host defense against various pathogens. Another T cell subpopulation serving a crucial role in the regulation of T cell-dependent B cell responses are the T follicular helper (TFH) cells. TFH cells are essential for promoting survival, proliferation, Ig isotype class switching, affinity maturation and B cell differentiation in the germinal center (GC) [START_REF] Rich | The Human Immune Response[END_REF].

B cells arise from the bone marrow and transit as immature B cells with functional BCRs to the peripheral blood. From there, further development into mature naïve B cells occurs through additional selection processes in the spleen. The great variety of the Ig gene can be attributed to recombination of the V(D)J germline Ig sequences, just as for the TCR. However, in contrast to the TCR, further diversity of the BCR is generated through affinity maturation and somatic hypermutations (SHMs) in the variable domains of the Ig gene upon antigen exposure, leading to an extraordinary diversity. This BCR affinity maturation process occurs in GCs, where naïve B cells with unmutated low-affinity V(D)J germline sequences migrate in and out of GCs in search of antigens trapped and displayed by follicular DCs (FDCs) in the light zone of the GC (Fig. I-20) (Victora and Nussenzweig, 2022). SHMs are induced by the enzyme activation-induced cytidine deaminase (AID) and occur when naïve B cell clones enter the GC dark zone. During the GC reaction, SHM-acquired clones re-transit into the light zone of the GC, where an antigen affinity-based selection occurs, ultimately giving rise to affinity-matured B cell clones.
MBC subsets can express five different subclasses of Igs based on the CH domains: IgM, IgD, IgG, IgA and IgE, respectively (Fig. I-19) (Lu et al., 2018). Class-switching from IgM to IgG or IgA usually happens within the first week post-infection, and is maintained further.
Figure I-19 - Schematic structural overview of human Ig molecules and subclasses
Human Ig molecules consist of two functional domains linked by a hinge region. The Fab binds the antigen while the Fc region binds sensors which deploy host-mediated effector functions. All Igs are composed of four chains: two identical heavy chains (blue) and two identical light chains (red). These chains are further subdivided into variable (VH/VL) and constant (CH/CL) domains. Ig light chains also exist as κ and λ types. The human Ig molecules exist as five subclasses: IgM, IgD, IgE, IgG and IgA, including four IgG (IgG1-4) and two IgA (IgA1-2) isotypes. These molecules form either monomers, dimers or multimers depending on subclass and isotype, for which multimers are linked through disulfide bonds. Dimeric IgA further contains a secretory component (or J chain). Fab and Fc binding to antigen and receptors are highly affected by features like hinge length and flexibility, glycosylation sites and disulfide bonds. Figure from (Lu et al., 2018).

Figure I-21 - Neutralization profiles of donors infected with zoonotic SFV strains from two genotypes
A: Plasma neutralization titers against replicative GI-D468 and GII-K74 strains, with symbols for each donor according to the genotype of infection determined through SUvar-specific PCRs. The detection threshold (1:20) is set at the dashed lines. B-D: Neutralization titers of plasma nAbs from zoonotic SFVcpz/gor-infected African hunters (n=52) against replicative CI-PFV, CII-SFV7, GI-D468 and GII-K74 strains. Open circles and filled squares represent SFVgor- and SFVcpz-infected donors, respectively. The detection threshold (1:20) is set at the dashed lines. Tables indicate the number of donors with measurable nAb activity against the four strains and statistical significance according to Fisher's exact test. Figure adapted from [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF].
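For illustration only, the type of comparison reported in the legend above (Fisher's exact test on the number of donors with or without detectable nAb activity against two strains) can be reproduced as follows; the counts are hypothetical, not the study data.

```python
# Sketch of a 2x2 Fisher's exact test as used for comparing donor counts (hypothetical numbers).
from scipy.stats import fisher_exact

#        with nAbs   without nAbs
table = [[30, 5],    # donors tested against strain 1 (hypothetical)
         [18, 17]]   # donors tested against strain 2 (hypothetical)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```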
literature review on retroviral Envs and their recognition by nAbs focusing on three retroviruses; MLV, HTLV-1 and HIV-1. As discussed in section 1.1.4, retroviral Envs are type-I class transmembrane molecules composed of two subunits: the extracellular SU domain typically harboring the RBD and the TM domain which is involved in fusion of the viral and cellular membranes (Fig. I-22) (Rey and Lok, 2018). The signal peptide (SP) of most retroviruses is cleaved off during translation, although FVs are an exception for this as the LP gp18 remains associated with SU and TM. SU/TM heterodimers form trimers at the surface of the virus particle.
Figure I-22 - Organization of distinct retroviral Envs
Env precursors are shown for four retroviruses: PFV, Friend MLV (Fr-MLV), HTLV-1 and HIV-1. SP, SU and TM subunits are shown in green, grey and brown, respectively. The LP gp18 of FVs is cleaved by a furin-like protease and remains associated with SU and TM. Sequence numbering corresponds to proteins with the following accession numbers: PFV #P14351, Fr-MLV isolate FB29 #P26804, HTLV-1 Japanese subtype A strain ATK-1 #P03381 and HIV-1 group M subtype B strain HXB2 #P04578. Figure created in BioRender.

Figure I-23 - Fr-MLV SU gp70 sequence and RBD structure
Top: Schematic representation of the Fr-MLV SU gp70 sequence (Env aa 35-478). Subdomains within the SU are highlighted in distinct colored boxes. The NTD comprising the RBD is shown in yellow, the PRRH in green and the CTD in red. Glycosylation sites (Y) and their aa positions are shown on top of the SU subdomains. Variable regions targeted by nAbs are shown in grey boxes below the SU bar, and colored according to the VR location on the Fr-MLV RBD structure (bottom panel). The mCAT-1 receptor binding region is shown below the SU bar. Sequence numbering (aa) according to Fr-MLV isolate FB29 (accession number #P26804). Bottom: Monomeric RBD structure (PDB:1AOL) of Fr-MLV isolate 57 (accession number #P03390) at 2.0 Å resolution. Fr-MLV structure with the VRA (green), VRB (red) and VRC (blue) domains highlighted. Figures created in BioRender and PyMOL.
gp46 and TM gp21. Sequence alignments of MLV and HTLV-1/2 Env present an unusually high homology in several regions across both SU and TM suggestive of ancestral origins and potential MLV env capture by HTLV[START_REF] Kim | Emergence of vertebrate retroviruses and envelope capture[END_REF][START_REF] Kim | Definition of an amino-terminal domain of the human T-cell leukemia virus type 1 envelope surface unit that extends the fusogenic range of an ecotropic murine leukemia virus[END_REF]. HTLV-1 SU and MLV SU share the NTD-PRRH-CTD organization and the location of the RBD in the NTD (Fig.I-24 and I-23, top panel). No structure of HTLV Env has been solved to this date. The predicted structure from another deltaretrovirus, the bovine leukemia virus (BLV) resembles
Figure I-24 - HTLV-1 SU gp46 sequence and location of nAb epitopes
Schematic representation of the HTLV-1 SU gp46 sequence (Env aa 21-312). Subdomains within the SU are highlighted in distinct colored boxes. The NTD comprising the RBD is shown in yellow, PRRH shown in green
the highly mutating virus is established. Moreover, comparison of the sensitivity of historical and more contemporary HIV-1 strains to neutralization by sera from infected donors supports that circulating HIV-1 strains have undergone evolutionary changes over the course of the pandemic, resulting in enhanced resistance to nAbs at a population level [START_REF] Bouvin-Pley | Evidence for a continuous drift of the HIV-1 species towards higher resistance to neutralizing antibodies over the course of the epidemic[END_REF] [START_REF] Bunnik | Adaptation of HIV-1 envelope gp120 to humoral immunity at a population level[END_REF]. Nonetheless, bnAbs able to cross-neutralize several heterologous strains occur in approximately 20% of HIV-1 infected donors after several years of infection [START_REF] Kwong | Human antibodies that neutralize HIV-1: identification, structures, and B cell ontogenies[END_REF]. During the past two decades, monoclonal HIV-1-specific bnAbs have been isolated and cloned from such HIV-1 infected donors. The anti-HIV bnAbs target an array of different epitopes, mostly conformational [START_REF] Mccoy | The expanding array of HIV broadly neutralizing antibodies[END_REF]. Env is a highly metastable protein which adopts multiple closed, intermediate and open states. The nAb epitopes are influenced by these conformations [START_REF] Kwong | HIV-1 evades antibody-mediated neutralization through conformational masking of receptor-binding sites[END_REF] [START_REF] Munro | Conformational dynamics of single HIV-1 envelope trimers on the surface of native virions[END_REF]. Moreover, bnAbs have discrete modes of neutralization beyond inhibition of gp120-CD4 and co-receptor binding, including prevention of Env transition between different states, fusion inhibition and destabilization or disassembly of the Env trimer, as reviewed by [START_REF] Miller | A Structural Update of Neutralizing Epitopes on the HIV Envelope, a Moving Target[END_REF]. High-resolution 3D structures of the HIV Env complexed with the respective bnAb Fabs have given a detailed map of the targeted epitopes. These can be categorized into seven epitopic regions, of which six are located on pre-fusion Env: the CD4-binding site, the V1V2 loops, the V3 glycan, the silent face, the fusion peptide and the SU/TM subunit interface (Fig. I-25C) [START_REF] Chuang | Structural Survey of Broadly Neutralizing Antibodies Targeting the HIV-1 Env Trimer Delineates Epitope Categories and Characteristics of Recognition[END_REF].
Figure I-25 - Structure of HIV-1 Env pre-fusion trimer and location of major classes of bnAb epitopes
A: Structure of the HIV-1 Env ectodomain (gp140) trimer in the 'closed' pre-fusion conformation (PDB:4TVP) from strain BG505 (accession number #DQ208458) with SOSIP mutations, bound by bnAbs PGT122 and 35O22 (not shown), at 3.1 Å resolution. One gp140 protomer is shown in ribbon with the gp120 outer and inner domains and gp41 highlighted in orange, pale yellow and dark grey, respectively. B: Glycan shield on the HIV-1 Env trimer. Modelled N-linked glycans are colored in green. C: Surface representation of the pre-fusion closed Env trimer in side view (left) and top view (right) with six bnAb classes highlighted in distinct colors: V1V2, glycan-V3, CD4-binding site, silent face center, fusion peptide and subunit interface. The MPER class of bnAbs is not shown. Figures adapted from [START_REF] Chuang | Structural Survey of Broadly Neutralizing Antibodies Targeting the HIV-1 Env Trimer Delineates Epitope Categories and Characteristics of Recognition[END_REF] and [START_REF] Sliepen | HIV-1 envelope glycoprotein immunogens to induce broadly neutralizing antibodies[END_REF].
Furthermore, no human-to-human transmission has been documented to date, suggesting natural immune control of viral replication in vivo. A better understanding of the nAbs raised against SFV upon infection, their epitopes and their mechanisms of action may indicate whether they contribute to viral control and to protection against viral transmission to partners and close relatives. We hypothesize that the immune system of zoonotically SFV-infected individuals supports the efficient control of viral replication and prevents the emergence of these retroviruses in the human population. The work conducted during this PhD aimed to characterize the sites on the viral envelope recognized by nAbs from Central African hunters infected with zoonotic SFV strains. Thus, my PhD thesis had the following aims:
I. Characterize epitopes within the genotype-specific SUvar region of the SFV Env targeted by nAbs, through the use of polyclonal plasma samples from Central African hunters infected with zoonotic gorilla SFV
II. Contribute to the collaborative work aiming to solve 3D structures of the SFV Env by performing the functional study of recombinant Env-derived proteins to complement the biochemical and structural approaches
virology and in particular on the fusion mechanisms of viral glycoproteins. This work was led by Dr. Ignacio Fernandez and Dr. Marija Backovic, post-doctoral researcher and permanent scientist in the Rey unit, respectively. The aim was to obtain a structure of the Env from SFV and gain insights into the fusion mechanisms of SFV Env and potential receptor usage. In addition, this knowledge has aided and will aid the characterization of nAb epitopes, including their mechanisms of action. Moreover, as FVs are extremely ancient viruses that have co-evolved with their hosts for millions of years, such structural knowledge could potentially give new insights into FV evolution and its relationship with orthoretroviruses.

About 18 months after initiation of this project, our collaborators succeeded in solving a high-resolution (2.6 Å) X-ray crystal structure of an RBD from the zoonotic gorilla genotype II strain BAK74 (GII-K74), a strain isolated in our lab using PBMCs derived from an accidentally infected Central African hunter. The structure shows a novel fold with no precedents, and thus does not show similarity to the RBDs from other retroviruses (MLV, FeLV and HIV-1). The novel RBD structure has a bean-like shape with an upper and a lower domain. A previous low-resolution cryo-EM structure by German researchers, obtained on viral particles of the chimpanzee genotype I strain PFV (CI-PFV), found the Env to form trimeric structures on the surface [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. The novel SFV GII-K74 RBD structure fitted well into the low-resolution cryo-EM map of CI-PFV Env, supporting that the RBD locates at the upper part of the SFV Env trimer. This also supports that the RBD structure is folded in the correct conformation seen on viral particles.

My first contribution was to validate that the RBD adopts a native fold. I performed neutralization assays in which the recombinant soluble RBD and the Env expressed at the surface of viral particles compete for binding to nAbs present in plasma samples of individuals infected with homologous gorilla SFV strains. Binding of nAbs to the RBD results in increased infection when compared to viral particles incubated with the plasma sample in the presence of an unrelated recombinant protein. The RBD protein was produced in S2 insect cells or mammalian Expi293F cells, which give rise to proteins with distinct types of surface glycosylation. Both proteins were serially diluted and incubated with plasma from SFV-infected donors before addition of FVVs. An increase in infectivity was observed for both RBD proteins in a dose-dependent manner. These results confirm that the RBD proteins adopt a conformation recognized by nAbs raised in the context of a natural infection.

SFV uses HS as an attachment factor for viral entry into susceptible cells. To search for a potential heparan binding site (HBS) on the SFV Env, our collaborators determined the electrostatic surface potential of the RBD and used structural docking/modelling of an HS molecule onto the solvent accessible surface to identify a potential HBS. Their predictions highlighted four residues (K342, R343, R356 and R369) with a high number of contacts with the modelled HS within a positively charged area of the lower domain of the RBD. These residues were mutated in pairs (K342/R343 and R356/R369) to alanine in trimeric ectodomain proteins.
I set up a flow cytometry-based cell binding assay to measure the impact of the mutations on binding to HS. Binding of the WT ectodomain depended on the HS expression levels on susceptible cells, being higher on HT1080 than BHK-21 cells. Mutant ectodomains bound about ten-fold less on both cell lines compared to the parental GII-K74 ectodomain. I then treated HT1080 cells with heparinase III to remove HS. Ectodomain binding to treated cells was lowered for the WT ectodomain while unaffected for the mutant counterparts. These results confirm that the identified residues K342, R343, R356 and R369 mediate SFV Env binding to HS expressed on cells.

The RBD structure allows us to understand its functional subdomains. The two subdomains essential for binding form the lower subdomain and part of the upper domain. The HS binding site is indeed located in the lower domain. The subdomain which can be removed without affecting Env binding to cells (called joining RBD, RBDj) is located on the upper domain. AlphaFold 2.0 (AF) computational prediction of the GII-K74 RBD revealed a structure highly similar to the experimentally obtained RBD structure. Comparison of AF-predicted RBD structures from distinct FVs supports that the RBD folds into a 'common core' (CC) conserved in overall fold between different FVs. In contrast, outer regions, including some highly flexible loops at the apex of the RBD, show high divergence in fold, even between distinct SFV genotypes. The loops form contacts between RBD protomers when superimposed into the CI-PFV trimer cryo-EM map. Thus, our hypothesis is that these mobile loops stabilize the Env trimer in a pre-fusion conformation. Collectively, our data support that the upper domain of the SFV RBD is involved in trimer stabilization while the lower domain is involved in binding to HS.

3.2 Manuscript II: Characterization of nAb epitopes

In the main part of my PhD project, I have investigated the location and characteristics of epitopes targeted by nAbs in plasma samples from Central African hunters infected with zoonotic gorilla SFVs. These nAbs were previously shown to target the variable region, SUvar, which overlaps most of the RBD and defines the two SFV genotypes. Before my arrival, a master student performed linear epitope mapping using peptides spanning in silico predicted epitopic regions of SUvar. As few reactivities were observed towards these linear peptides in ELISA, I went towards conformational epitope mapping in our search for genotype-specific nAb epitopes.

I mapped the conformational epitopes using recombinant proteins as competitors to Env expressed by viral vectors for binding to nAbs in neutralization assays. I firstly used the published data on the functionally defined Env binding subdomain and glycosylation sites. Then, I focused on the genotype-specific sequences and linear in silico B-cell epitope predictions. When the X-ray crystal structure of the GII-K74 RBD became available, I used it for the rational design of novel mutations on SU domain proteins. I first tested several constructs for expression in mammalian and insect cells, based on zoonotic gorilla SFV strains from the two genotypes, GI-D468 and GII-K74, and the laboratory-adapted CI-PFV strain. These constructs included monomeric RBD and SU domains, dimeric immunoadhesin proteins composed of the SU domain fused to the Fc of mIgG2a (SU-Ig) and trimeric ectodomains. Among these, the only constructs with adequate protein expression levels for two distinct genotypes were the chimeric SU-Ig proteins.
For those reasons, I used SU-Igs for the epitope mapping study. I set up the production, purification and validation of the homologous GII-K74 and heterologous CI-PFV SU-Ig proteins for mapping of GII- and GI-specific nAb epitopes, respectively. The proteins were used as competitors in neutralization assays. I confirmed that these proteins block plasma nAbs in a genotype-specific and dose-dependent manner without affecting the entry of Env-pseudotyped viral vectors. These proteins were repeatedly titrated against a panel of plasma samples from SFV-infected donors diluted to their respective IC90. This dilution was chosen to allow the saturation of nAbs by the recombinant proteins. Two parameters were defined to characterize the capacity of a protein to block nAbs: the IC50, as a measure of their affinity, and the % maximum inhibition (MaxI), which corresponds to the fraction of nAbs inhibited. Mutations were then introduced into these SU-Ig proteins to map nAb epitopes in neutralization assays, by comparing mutant IC50 and MaxI (%) values to those of the WT. The introduction of mutations generally demonstrated one of four outcomes: I) no impact, with activity the same as for WT; II) lower affinity of nAbs for the mutant protein, as seen by a higher IC50 compared to WT; III) a lower MaxI plateau, meaning a fraction of nAbs was no longer blocked by the mutant protein; or IV) both.
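As a sketch of how these two parameters can be extracted from a titration (an assumed workflow, not the software actually used for the analyses), a four-parameter logistic fit yields the IC50 and the upper plateau (MaxI); the concentrations and % inhibition values below are illustrative.

```python
# Sketch: fit a four-parameter logistic curve to % inhibition at serial competitor dilutions
# and report IC50 (affinity proxy) and the upper plateau (MaxI). Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: % inhibition as a function of competitor concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

conc = np.array([0.02, 0.06, 0.2, 0.6, 2, 6, 20, 60])      # nM, hypothetical dilution series
inhibition = np.array([2, 5, 14, 35, 58, 72, 78, 80])        # % inhibition, hypothetical

popt, _ = curve_fit(four_pl, conc, inhibition, p0=[0, 80, 1.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"IC50 ~ {ic50:.2f} nM, MaxI ~ {top:.0f} %")
```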
loop 1 (L1, residues 253-270, connecting β2 and α1), loop 2 (L2, residues 276-281, connecting α1 and β3), loop 3 (L3, residues 414-436, connecting β6 and β9) and loop 4 (L4, residues 446-453, connecting β10 and β11). The loops L3 and L4 are particularly mobile in our structures, as indicated by high B-factors (>105 Å²) for their Cα atoms (Fig. S.IV-2). Electron density was observed for 7 out of the 8 predicted N-glycosylation sites (Fig. IV-1C), allowing the modelling of at least one N-acetyl glucosamine (NAG) at each site (Fig. IV-2, Fig. S.IV-3).
Figure IV-1 - Overview of the novel fold adopted by the SFV RBD
Schematic representation of SFV Env protein organization indicating the three constituent chains: leader peptide (LP), surface subunit (SU), and transmembrane subunit (TM). The transmembrane domains anchoring the LP and TM in the membrane are represented as black boxes; the receptor binding domain (RBD) within SU is highlighted in blue-red spectrum; the fusion peptide at the N-terminus of the TM is shown in blue. The furin sites between the LP and SU (RIAR126), and SU and TM (RRKR570) are indicated with scissors symbols. The expression construct

Figure IV-2 - SFV secondary structure topology diagram
The horizontal dashed line designates the boundary between the lower and upper subdomains. The NAG and MAN units built only into the RBDG (and not RBDD) structure are indicated with red frames. The figure was created with BioRender.com.
4.3.2 The sugar attached to the strictly conserved 8th N-glycosylation site plays a structural role

There were no major differences between the X-ray structures of RBDD and RBDG (their superposition yielded a root mean square deviation (rmsd) below 1 Å (Fig. S.IV-3B)), except for the different number of sugar units that we could build into the electron density maps (Fig. S.IV-3A). A prominent feature of the upper subdomain is the eighth N-linked sugar (N8) attached to the α4 helix residue N390. The N390 and the first two NAG residues are buried in the RBD, rendering the Endo-H/D cleavage site inaccessible (Fig. IV-3B), which allowed the building of 10 sugars in the RBDG and 8 in the deglycosylated protein crystals (Fig. IV-2, Fig. IV-3A). The N8 glycan emerges from a cavity that has N390 at its base and extends upwards, remaining in contact with the protein and preserving the same conformation in both crystal forms.
Figure IV-3 - The oligosaccharide linked to N390 plays a structural role in the RBD
A) Molecular surface representation of the SFV RBDG colored by residue hydrophobicity. Hydrophobicity for each residue was calculated according to the Kyte and Doolittle scale [START_REF] Kyte | A simple method for displaying the hydropathic character of a protein[END_REF] in Chimera [START_REF] Pettersen | UCSF Chimera--a visualization system for exploratory research and analysis[END_REF], with the gradient color key indicating the lowest hydrophobicity in blue and the highest hydrophobicity in yellow. The sugars at sites N6, N7, N7', and N9 are displayed as white sticks, and the sugar attached to N8 as cyan sticks. The inset shows the bond cleaved by the glycosidases Endo D/H, which is protected in N8. B) The N8 sugar attached to Asn 390 covers a hydrophobic region. Zoom into the region within the dashed-line rectangle in panel A). The NAG and MAN residues are labeled with numbers that match the N-oligosaccharide drawn in panel A. C) The hydrophobic patch covered by N8 is well conserved. The SFV RBD surface is rendered by residue conservation in Chimera [START_REF] Pettersen | UCSF Chimera--a visualization system for exploratory research and analysis[END_REF], according to the % of identical residues in the 11 FV Env sequences (alignment shown in Fig. S.IV-5). Residues conserved in less than 30% and more than 90% of sequences are colored in white and purple, respectively, and residues in between with a white-purple gradient, as indicated on the color key below the surface representation.

Structural analyses revealed that the glycan establishes extensive van der Waals contacts with the residues underneath (buried surface area of 803 Å²) and forms hydrogen bonds with main chain atoms from Y394 and I484 and the side chain of E361 (Fig. S.IV-6). The oligosaccharide covers a well-conserved and hydrophobic surface (Fig. IV-3C) and thus maintains the RBD fold
70% identity (Fig. S.IV-5), while the rest of the Env residues are highly conserved (>95% sequence identity). The SUvar is located within the upper subdomain and encompasses loops L1-L4 (residues 282-487 in GII RBD; Fig. S.IV-7).

All the generated models have high confidence metrics (Fig. S.IV-8) and display a conserved fold, in agreement with a sequence identity >30%. Significant deviations were found only in the loops within the SUvar. The Template Modelling score (TM-score), which is, unlike rmsd, a length-independent measure of structural similarity [START_REF] Zhang | Scoring function for automated assessment of protein structure template quality[END_REF], has an average value of 0.891 for the 11 compared structures. The AF model of the GII RBD and our experimentally determined structure of the same strain superimpose with a TM-score of 0.96 and an rmsd of 1.5 Å for 320 out of 328 Cα atoms aligned, confirming the high accuracy of the AF model. A 'common core' (CC), which includes the ensemble of residues with Cα rmsd values smaller than 4 Å for all the pairwise superpositions, was calculated by the mTM-align webserver (Dong et al., 2018a). The CC of the FV RBD contains 239 out of 308 aligned residues (Fig. S.IV-9), with most CC residues belonging to the secondary structure elements forming the lower subdomain. The loops in the upper subdomain are largely not part of the CC (Fig. S.IV-9).
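The comparisons above were performed with AlphaFold, mTM-align and the TM-score; purely as an illustration of the rmsd metric quoted for paired Cα atoms, the following sketch superimposes two coordinate sets with the Kabsch algorithm (the coordinate arrays are placeholders, not actual model coordinates).

```python
# Illustration: rmsd between paired C-alpha coordinate sets after optimal rigid superposition.
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after Kabsch superposition of P onto Q."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                              # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))       # correct for a possible reflection
    D = np.diag([1.0, 1.0, d])
    P_rot = P @ U @ D @ Vt                   # apply the optimal rotation to P
    return np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1)))

# Placeholder coordinates standing in for paired C-alpha atoms of two aligned RBD models
rng = np.random.default_rng(0)
model_a = rng.normal(size=(320, 3))
model_b = model_a + rng.normal(scale=0.5, size=(320, 3))
print(f"rmsd = {kabsch_rmsd(model_a, model_b):.2f} Å")
```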
The fitting was justified by the high structural conservation between gorilla and chimpanzee RBDs, indicated by a TM-score of 0.88 for the superposition of the GII RBD structure and the predicted PFV RBD model (Fig. S.IV-8).
particles. The three RBDs are arranged around a central cavity at the apex (membrane-distal region) of Env (Fig.IV-4A). The analyses of the macromolecular surfaces of the trimeric RBD model, carried out in PDBePISA[START_REF] Krissinel | Inference of macromolecular assemblies from crystalline state[END_REF], revealed a limited interprotomer interface (<10% of the entire RBD solvent accessible surface) established solely by loops L1, L3 and L4 that form a ring-like structure at the RBD apex, leaving most of the RBD exposed (Fig.IV-4B). The three L1 loops engage in homotypic interactions at the center of the RBD, forming an inner ring, while each L3 loop contacts the L4 of a neighboring protomer, further stabilizing the interface. The sequences of the interacting loops are poorly conserved within the FV subfamily (Fig. S.IV-10). The seven N-linked glycans that we could resolve in the RBD structure are all fully solvent-exposed (Fig.IV-4C). The RBD N-and C-termini point towards the membrane, indicating that the lower half of the Env density is occupied by the TM subunit and the remaining SU residues, as previously suggested[START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF].
Figure IV-4 - The RBDs form a trimeric assembly at the apex of the full-length Env
A) Three SFV RBD protomers were fitted in the 9 Å cryo-EM map (EMDB: 4013) obtained by cryo-EM 3D reconstruction of the full-length PFV Env expressed on viral vector particles [START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. The map is shown as a light grey surface, and the RBDs in cartoon mode, with each protomer colored differently (yellow, white, light blue). B) The three RBDs fitted as explained in panel A) are shown to illustrate that the α2 and α4 helices, which carry the HS binding residues (K342, R343, R359, R369), as well as the N-linked glycosylations (N6, N7, N7', N8, N9 and N11), point outward and are solvent accessible. The boxed region on the left panel is magnified for clarity on the right panel (only one protomer, colored in white, is represented for clarity purposes). C) The views of the trimeric RBD arrangement from the top, i.e. looking at the membrane (left), and bottom, i.e. looking from
docked onto the surface (Fig. IV-5B, IV-5C). The four residues also mapped within the positively-charged region in the lower subdomain.
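The docking and the per-residue contact counts referred to here were generated with ClusPro; the sketch below only illustrates, under assumed inputs, how contacts between a docked HS ligand and protein residues can be tallied within a distance cutoff (the coordinates and the cutoff value are placeholders).

```python
# Rough illustration (not the ClusPro pipeline): count ligand atoms within a distance
# cutoff of each residue's side-chain atoms to rank candidate HS-binding residues.
import numpy as np
from collections import Counter

CUTOFF = 4.0  # Å, assumed contact distance

# Placeholder inputs: (residue_id, atom_xyz) for protein side-chain atoms and an (M, 3)
# array of heparan sulfate atom coordinates; in practice these come from the docked model.
rng = np.random.default_rng(1)
protein_atoms = [(res_id, rng.normal(size=3) * 10)
                 for res_id in range(340, 372) for _ in range(5)]
hs_atoms = rng.normal(size=(40, 3)) * 10

contacts = Counter()
for res_id, xyz in protein_atoms:
    dists = np.linalg.norm(hs_atoms - xyz, axis=1)
    contacts[res_id] += int(np.sum(dists < CUTOFF))

# Residues with the most predicted contacts are the HS-binding candidates
for res_id, n in contacts.most_common(5):
    print(f"residue {res_id}: {n} contacts")
```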
Figure IV-5 - Prediction of HS binding residues and design of the variants impaired in binding
A) The electrostatic potential distribution was calculated using the Adaptive Poisson-Boltzmann Solver [START_REF] Jurrus | Improvements to the APBS biomolecular solvation software suite[END_REF] module in PyMOL [START_REF] Delano | The PyMOL Molecular Graphics System[END_REF] and plotted on the solvent accessible surface of the RBD, with red corresponding to negative, and blue to positive potentials (2 left panels). The ensemble of HS molecules modeled by ClusPro [START_REF] Kozakov | The ClusPro web server for protein-protein docking[END_REF] map to the lower subdomain and are displayed in sticks on the two right panels, which show the RBD in cartoon mode and in two orientations to illustrate the location of the predicted HS binding secondary structure elements. B) The predicted number of contacts per residue and per side chain atoms was calculated by ClusPro and plotted for each RBD residue, revealing the most likely candidates to be engaged in HS binding. C) The structure of the RBD is shown in cartoon, with the region containing the α2 and α4 helices highlighted in grey. A magnification of the grey boxed region is shown on the right panel, with the relevant secondary structure elements and predicted HS binding residues shown in sticks. Two disulfide bonds are indicated with yellow circles. The figure was created with PyMOL (DeLano, 2002) and BioRender.com.

Based on the ClusPro predictions, we produced two GII RBD variants (K342/R343, termed 'mut1', and R359/R369, termed 'mut2') and tested their binding to HS immobilized on a Sepharose matrix (Fig. IV-6A). The two RBD variants eluted at the same volume on size exclusion chromatography, consistent with the expected size of a monomer (Fig. S.IV-11), indicating that the introduced mutations did not cause protein misfolding. The WT RBD was retained on the column and eluted at 300 mM sodium chloride concentration, while the mut1 and mut2 variants were not retained by the column and eluted in the flow-through fraction. The observed loss of heparin binding capacity strongly suggests that residues K342, R343, R359, and R369 are directly involved in interactions with HS.
(Fig. S.IV-12D). Binding of the WT ectodomain to heparinase-treated cells was diminished about 100-fold when compared to buffer-treated cells, while the binding of the mut1 and mut2 variants was not affected by heparinase treatment (Fig. IV-6D). The importance of residues K342, R343, R356, R369 for viral entry into susceptible cells was tested using FVVs that express either GII WT, mut1 or mut2 Env on their surface. The total number of FVV particles released by the transfected cells, measured by RT-qPCR, was 6-fold lower for mut1 compared to WT FVVs, while mut2 had the same particle production as WT (Fig. S.IV-13A). The infectious titers were 34- and 65-fold lower for mut1 and mut2, respectively, compared to WT (Fig. S.IV-13B). The proportion of infectious particles, defined as the infectious titer (Fig. S.IV-13B) divided by the total number of FVVs (Fig. S.IV-13A), was 0.7% for WT Env FVVs, while the values for mut1 and mut2 FVVs were 3- and 22-fold lower, respectively (Fig. IV-6E). We measured the binding of FVVs to cells by RT-qPCR, and found that the binding was also reduced 3- and 23-fold for FVVs carrying mut1 and mut2 Envs, respectively, compared to the WT Env FVVs (Fig. IV-6F). Thus, binding to cells and entry levels were decreased to the same extent for the FVVs carrying Env proteins with mutations in the HS binding site.

The results described for the recombinant Env proteins and FVVs carrying full-length Env agree with the biochemical data (Fig. IV-6A) and demonstrate that residues K342, R343, R356, R369 play a crucial role in virus interaction with HS.
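As a worked example of the ratio defined above (with placeholder counts, not the study's raw measurements), the proportion of infectious particles and the fold-change relative to WT can be computed as follows.

```python
# Proportion of infectious particles = infectious titer / total vector particles;
# fold-change is expressed relative to the WT Env vector. Numbers are hypothetical.
particles = {"WT": 1.0e9, "mut": 2.0e8}   # FVV particles released (hypothetical)
titers    = {"WT": 7.0e6, "mut": 3.0e5}   # infectious units (hypothetical)

fractions = {env: titers[env] / particles[env] for env in particles}
print(f"WT : {100 * fractions['WT']:.2f}% infectious particles")
print(f"mut: {100 * fractions['mut']:.2f}% infectious particles, "
      f"{fractions['WT'] / fractions['mut']:.1f}-fold lower than WT")
```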
Figure IV-6 - The SFV RBD residues K342, R343, R356 and R369 mediate Env binding to HS. A) Chromatogram of the recombinant SFV RBD, WT (red line) and variants with mutations in HS binding residues: mut1 (K342A/R343A) in blue and mut2 (R356A/R369A) in green, on a heparin-Sepharose column. The dotted line shows the salt concentration, which is plotted on the right y-axis. B) Binding of recombinant WT RBD and ectodomains to HT1080 cells. To be comparable with the RBD, the concentration for the ectodomain is calculated
(Fig. IV-1, Fig. IV-2), distinct from the available orthoretroviral RBD structures, i.e. from the Friend murine leukemia virus, feline leukemia virus, and human endogenous retrovirus EnvP(b)1 (gammaretrovirus genus)
surface that is exposed to the solvent (Fig.. According to our model, the L1-L4 loops, located at the top of the upper subdomain of each protomer, form the inter-protomer interface (Fig. S.IV-10) leaving, just below, a cavity that was clearly observable in the cryo-EM maps[START_REF] Effantin | Cryo-electron Microscopy Structure of the Native Prototype Foamy Virus Glycoprotein and Virus Architecture[END_REF]. The loop sequences are poorly conserved across the FV family (Fig. S.IV-10), and the superposition of the AF models of 11 FV RBDs reveal only slight structural differences, which are mostly limited to this variable region containing the loops (Fig. S.IV-8, Fig. S.IV-9).
Env truncated variants to bind to cells, Duda et al. defined the RBD of PFV Env as a region spanning residues 225-555 (residues 226-552 in gorilla GII Env (Fig. S.IV-
the residue at position equivalent to 343 in gorilla GII Env is always an arginine or lysine, while arginine is strictly conserved at position 356 (Fig. S.IV-5). Residues at positions 342 and 369 are less conserved among SFV Envs, although they are usually surrounded by positive or polar residues. This suggests that the R343 and R356 may be important for HS binding in all FVs, while other positively charged residues, specific to each virus, can be dispersed in the patch with high positive electrostatic potential (Fig. IV-5A) contributing to the HS binding in a virusspecific context. Existence of an FV receptor had been proposed by Plochmann et al. since a total lack of HS did not abolish FV infection, but HS has also been proposed to function as a true FV receptor (Nasimuzzaman and Persons, 2012). The residual Env binding to cells devoid of HS, which we observed both for the WT and the HS-binding impaired variants (Fig. IV-6D),
truncations fused to the Fc region of murine IgG (immunoadhesins), the authors showed that the RBD, defined as the minimal region of the PFV Env sufficient for binding to cells, encompassed residues 225 to 555 (corresponding to residues 226 to 552 in the gorilla FV RBD, GII-K74 strain, accession number JQ867464) (Rua et al., 2012a) (Fig. S.IV-5). When designing the expression construct for the SFV RBD, we also considered the secondary structure prediction generated by the Phyre2 webserver (Kelley et al.). Residue I225 was in the middle of a putative helix (residues 220-230), leading us to choose an upstream residue, R218, as the N-terminus of the construct (Fig. IV-1A, Fig. S.IV-5).
The DST was removed by incubating the protein with 64 units of Enterokinase light chain (BioLabs) in 10 mM Tris-HCl, 100 mM NaCl, 2 mM CaCl2, pH 8.0, at room temperature, overnight. The proteolysis reaction was buffer exchanged into 10 mM Tris-HCl, 100mM NaCl, pH 8.0, and subjected to another affinity purification, recovering the flow-through fraction containing the untagged RBD. The protein was concentrated and its enzymatic deglycosylation with EndoD and EndoH was set up at room-temperature following overnight incubation with 1000 units of each glycosidase in 50 mM Na-acetate, 200 mM NaCl, pH 5.5. The protein was further purified on a size exclusion chromatography (SEC) column Superdex 200 16/60 (Cytiva) in 10 mM Tris-HCl, 100 mM NaCl, pH 8.0, concentrated in VivaSpin concentrators to 8.2 mg/ml and used as such for crystallization trials.For cell binding experiments the RBD construct was cloned in a pcDNA3.1(+) derived plasmid, for expression in mammalian cells. The expression plasmid was modified by inserting a CMV exon-intron-exon sequence that increases the expression of recombinant proteins. The RBD was cloned downstream of the CD5 signal peptide (MPMGSLQPLATLYLLGMLVASCLG) with an enterokinase cleavage site and a DST tag in the C-terminus. The HS mutants were generated by site-directed mutagenesis. The plasmids coding for the recombinant proteins were transiently transfected in Expi293F TM cells (Thermo Fischer) using FectroPRO DNA transfection reagent (Polyplus), according to the manufacturer's instructions. The cells were incubated at 37 C for 5 days after which the cultures were centrifuged. The protein was purified from the supernatants by affinity chromatography using a StrepTactin column (IBA), followed by SEC on a Superdex 200 10/300 column (Cytiva) equilibrated in 10 mM Tris-HCl, 100 mM NaCl, pH 8.0. The peak corresponding to the monomeric protein was concentrated and stored at -80 C until used.
were produced by co-transfection of four plasmids (gag:env:pol:transgene (β-galactosidase) ratio of 8:2:3:32). Three µg total DNA and eight µl polyethyleneimine (JetPEI, #101-10N, Polyplus, Ozyme) were added to 0.5 x 10^6 HEK 293T cells seeded in 6-well plates. Supernatants were collected 48 hours post transfection, clarified at 1,500 x g for 10 min, and stored as single-use aliquots at -80°C. Vector infectivity was determined by transducing BHK-21 cells with serial five-fold dilutions of vectors and detecting β-galactosidase expression after 72 hours of culture at 37°C. Plates were fixed with 0.5% glutaraldehyde in PBS for 10 min at room temperature (RT), washed with PBS and stained with 150 µl X-gal solution containing 2 mM MgCl2, 10 mM potassium ferricyanide, 10 mM potassium ferrocyanide and 0.8 mg/ml 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside in PBS for 3 hours at 37°C. Plates were counted on a S6 Ultimate Image UV analyzer (CTL Europe, Bonn, Germany). One blue cell was defined as one infectious unit. Cell transduction by FVV is a surrogate for viral infectivity and FVV titers were expressed as infectious units/ml. The yield of FVV particles was estimated by the quantification of particle-associated transgene RNA. FVV RNAs were extracted from raw cell supernatants with the QIAamp Viral RNA Extraction Kit (Qiagen). RNAs were treated with the DNA-free kit (Life Technologies) and retro-transcribed with Maxima H Minus Reverse Transcriptase (Thermo Fisher Scientific) using random primers (Thermo Fisher Scientific), according to the manufacturer's instructions. qPCR was performed on cDNA using BGAL primers (BGAL_F 5' AAACTCGCAAGCCGACTGAT 3' and BGAL_R 5' ATATCGCGGCTCAGTTCGAG 3') with a 10-min-long denaturation step at 95°C and 40
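For the particle quantification step, converting measured Ct values into transgene copy numbers is typically done with a standard curve. The sketch below outlines that calculation; the standard dilutions, Ct values, and the sample Ct are invented placeholders and do not correspond to the assay's actual calibration.

# Hedged sketch: estimate transgene copies from Ct values via a standard curve.
# All numbers below are illustrative placeholders.
import numpy as np

std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])       # copies per reaction (assumed)
std_ct     = np.array([14.1, 17.5, 20.9, 24.3, 27.8])  # measured Ct values (assumed)

# Linear fit of Ct vs log10(copies): Ct = slope*log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                 # amplification efficiency

def copies_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

sample_ct = 22.6                                        # example sample Ct (assumed)
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}, "
      f"copies={copies_from_ct(sample_ct):.3g}")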
Env interactions with heparan sulfate assayed by binding to heparin-Sepharose: 100 µg of recombinant FV RBDs (wild-type, R356A/R369A, K342A/R343A) were injected at 1 ml/min onto a Heparin-Sepharose column (Cytiva) previously equilibrated with running buffer (10 mM Tris-HCl, 100 mM NaCl, pH 8.0). After washing, a linear gradient (50% in 30 minutes) of elution buffer (10 mM Tris-HCl, 2 M NaCl, pH 8.0) was applied.

Env interactions with heparan sulfate on cells (in vitro): Env protein binding assay. HT1080 and BHK-21 adherent cells were detached with Trypsin-EDTA and 5 x 10^5 cells were used per condition. Cell washing and staining steps were performed in PBS-0.1% BSA at 4°C. SFV Env ectodomains were added to the cell pellet for 1 hour. Cells were washed twice, incubated with anti-StrepMAB-Classic-HRP antibody that recognizes the Strep tag at the C-terminus of the SFV Env ectodomain (7.5 µg/ml, IBA Lifesciences #2-1509-001) for 1 hour, washed twice and incubated with the secondary antibody coupled to fluorophore AF488, anti-HRP-AF488 (0.75 µg/ml, Jackson ImmunoResearch, #123-545-021) for 30 min. Cells were washed and fixed in PBS-2% PFA at RT for 10 min and kept at 4°C until acquisition. A minimum of 25,000 cells were acquired on a CytoFLEX cytometer (Beckman Coulter). Data were analyzed using Kaluza software (Beckman Coulter). Viable single cells were selected by the sequential application of gates on FSC-A/SSC-A and SSC-A/SSC-H dot-plots (Fig. S.IV-12A). Cells labelled with the two secondary antibodies only were used as a reference. SFV Env binding was expressed as the ratio of mean fluorescence intensity (MFI) from the cells that were incubated with the recombinant ectodomains vs untreated cells (Fig. S.IV-12B).
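The final normalization step, the MFI ratio of stained over untreated cells, is simple enough to express directly. The sketch below illustrates it on made-up per-cell intensity values, assuming the gated AF488 intensities have already been exported from the cytometry software.

# Hedged sketch: Env binding expressed as the MFI ratio of treated vs untreated cells.
# The intensity arrays are illustrative placeholders, not exported experimental data.
import numpy as np

treated_af488   = np.array([5200., 4800., 6100., 5750., 4950.])  # gated single cells
untreated_af488 = np.array([310., 290., 345., 305., 335.])

mfi_ratio = treated_af488.mean() / untreated_af488.mean()
print(f"Env binding (MFI ratio) = {mfi_ratio:.1f}")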
Figure V-1 - SFV Env. A. Schematic representation of SFV Env. The precursor Env protein is cleaved by furin-like protease(s) at two sites (vertical bars) to generate LP, SU, and TM. The dark sections highlight the transmembrane regions of LP (H) and TM (MSD), and the fusion peptide (F). The minimal continuous RBD (aa 225-555, blue background) comprises two regions essential for SU binding to cells, RBD1 (aa 225-396) and RBD2 (aa 484-555) (Duda et al.). The intervening region (aa 397-483), named RBDj, can be deleted without abrogating binding to susceptible cells. B. The genotype-specific variable SU region (SUvar, aa 248-488) partially overlaps with the RBD and is the exclusive target of nAbs (Lambert et al.). C. The structure of the GII-K74 RBD monomer (aa 218-552, PDB code 8AEZ, (Fernandez et al., 2022, submitted)) is presented in cartoon with glycans shown as sticks. RBD1, RBD2, and RBDj regions are color-coded as in panel A. Structural elements relevant for the present study are indicated, and the full description of the structure is available in (Fernandez et al., 2022, submitted). D. The SUvar region (red) is rendered as solvent-accessible surface. The conserved region (SUcon) is displayed in cartoon. The dashed line indicates the boundary between the lower and upper RBD subdomains.
synthetic peptides spanning the SUvar domain from three viral strains (Fig. V-1, Table S.V-1 and S.V-2, Fig. S.V-1). By ELISA, seven of 17 (41%) plasma samples from SFV-infected individuals reacted against at least one peptide; five locations represented by peptides with GI and/or GII sequences were recognized by at least one plasma sample (Fig. V-2). In comparison, 76%, and 71% of plasma samples displayed binding activity against immunodominant viral peptides used as positive controls (see Materials and Methods).
Figure V-2 - Only a small proportion of plasma samples bind to peptides covering the SUvar domain
(Fig. V-4C), confirming data obtained with GII Env constructs expressed in insect cells, which also lack complex glycans (Fig. S.V-2). By contrast, the removal of all glycans decreased nAb blockade, and protein titration showed lower affinity for the nAbs present in three out of four plasma samples (Fig. V-4D). Endo-H treatment did not induce protein aggregation (Fig. S.V-4) but may have a global effect on the SU affecting SFV-specific nAbs. Conversely, glycans did not shield major nAb epitopes on SFV Env, as their removal did not enhance nAb blockade in the samples tested.
Figure V-4 - SFV-specific nAbs recognize glycans on SUvar
Figure V-5 - Most SFV-specific nAbs recognize the RBDj domain. A. Schematic representation of functional RBD subdomains (Duda et al.), loops, and HBS identified in (Fernandez et al., 2022, submitted). B. The RBD is shown as a solvent-accessible surface with SUvar in dark grey, SUcon in light grey, L2 in orange, L3 in green, L4 in maroon, and the HBS in black. To locate SFV-specific nAb epitopes on SU functional domains, vectors carrying GII-K74 Env were treated with GII-specific plasma samples previously incubated with GII SU and a mutant immunoadhesin with RBDj deleted (GII ΔRBDj, C), the Env ectodomain with WT or mutated HBS (GII K342A/R343A in panel D and GII R356A/R369A in panel E), and immunoadhesins with deleted loops (GII ΔL2, GII ΔL3, and GII ΔL4 in panels F to H). For each plasma sample, the IC50 is presented as a function of MaxI for GII SU and the mutated immunoadhesins. The IC50 and MaxI values for GII SU are presented as open
Figure V-6 - Epitope disruption by glycan insertion revealed additional sites recognized by nAbs
In contrast to GII 351glyc (Fig. V-6H) and GII 350glyc (Fig. V-6I), CI 350glyc fully blocked GI-specific nAbs (Fig. V-7D). Neither CI ΔRBDj nor CI ΔL3 proteins blocked the nAbs (Fig. V-7E and V-7G).
Figure V-7 - GI and GII-specific nAbs target different epitopes
Figure V-8 - Most samples contain nAbs that recognize several epitopic regions on SUvar. The results from competition experiments were summarized for each plasma sample and for the mutant immunoadhesins from the most targeted regions (apex, 345-353 loop, and the N7' region). Four outcomes are presented: same recognition as WT SU (green), recognition with reduced affinity (i.e., increased IC50, orange vertical shading), blocking a smaller fraction of nAbs than WT SU (i.e., reduced MaxI, orange horizontal shading),
(Fig. V-9B), despite all mutant Env being expressed in transfected cells (Fig. S.V-9). Moreover, none of the FVVs with mutant Env were infectious (Fig. V-9C). However, most bound to susceptible cells at levels 2- to 4-fold below those of FVVs with WT Env, except the CI-PFV ΔRBDj mutant, for which binding was reduced ≈ 30-fold (Fig. V-9D). In conclusion, the regions from the upper RBD subdomain targeted by nAbs are required for viral infectivity independently of their capacity to bind to susceptible cells.
Figure V-9 - nAbs target epitopic regions involved in SU binding to susceptible cells or required for viral infectivity
Figure V-10 - Schematic summary of current knowledge on SFV Env
Figure VI-1 - Summary of discovered genotype-specific epitopic regions. SFV nAb epitope footprints determined in the current study using human plasma samples from SFV-infected donors are presented on the CI-PFV (AlphaFold-predicted) and GII-K74 (experimentally determined) RBD structures. Immunodominance of GI- and GII-specific epitopes is highlighted with a spectrum of blue and red shades, respectively. To relate them to the adjacent discovered epitopes, the determined HBS of GII-K74 and its equivalent site predicted on CI-PFV are highlighted, with the specific residues in blue and green, respectively. Figure created in PyMOL.
that recognize the N-ter of L3 (Fig. V-4E vs V-4F). In contrast, glycan insertion on residue N426 at the C-ter of L3 demonstrated similar loss of activity as for deletion of L3 for two of three GII-donors (Fig. V-5G vs V-6C). Those results imply that the central part of L3 may form a more common epitope.
FFV (Fig. V-6F), which indirectly suggests the presence of epitopes in near proximity to this evolutionarily conserved glycan. GI-specific nAbs target the L3 loop, which is clearly an immunodominant epitope. For this reason, I have not intensively searched for epitopes on the lower domain, as these would probably be subdominant and likely difficult to study with polyclonal plasma samples. To build a more comprehensive study of nAb epitopes on GI Env, some additional mutants could be tested on CI-PFV SU-Ig. HBS mutations could be determined through AF structural prediction as discussed above. Moreover, the N7' glycosylation site is absent on CI-PFV. Thus, a mutant with glycan insertion at this position would indicate whether GI-specific nAbs target this glycan, as was observed for GII-specific plasma nAbs. In such a case, N7' insertion should enhance the block of nAbs compared to WT CI-PFV SU-Ig. My data support that one prominent and genotype-specific epitopic region is located on the base loop (residues 345-353) within the lower domain of the GII-K74 RBD. Insertion of a glycan at residue G350 completely abrogated the block of nAbs from seven GII-infected donors (Fig. V-6I). Repositioning of the inserted glycan one residue downstream at position N351 resulted in the same effect as observed for the glycan at position G350 (Fig. V-6H). Although these proteins tended to form more aggregates compared to WT, we excluded this as the reason for the lack of mutant activity through size exclusion chromatography (Fig. S.V-6). Moreover, swapping of base loop residues 345-351 from GI-D468 into GII-K74 SU supported the location of an epitope (Fig. V-6K). Interestingly, sequence alignments and structural predictions by AF found substantial variation in this loop, primarily due to an additional residue present in CI/GI strains compared to GII strains (Fig. V-6B'). Insertion of this missing residue (E349) from the GI-D468 strain into the backbone of GII-K74 SU confirmed our results (Fig. V-6J). Lastly, we showed that insertion of a glycan on the helix adjacent to the base loop also disrupted binding of most nAbs (Fig. V-6N). Collectively, these results support that the base loop residues 345-351 are part of a GII-specific nAb epitope.
Figure S.IV-1 - The fold of the FV RBD is maintained by hydrophobic and polar interactions
Figure S.IV-2 - Mobile loops decorate the apex of the RBD. A) The upper subdomain loops are designated as follows: loop 1 (L1, residues 253-270, connecting 2 and 1) in blue, loop 2 (L2, residues 276-281, connecting 1 and 3) in orange, loop 3 (L3, residues 414-436, connecting 6 and 9) in green, and loop 4 (L4, residues 446-453, connecting 10 and 11) in dark purple. B) The RBD structure is shown in 'tube' representation to illustrate the mobility, with the more flexible regions shown as thicker tubes. The coloring scheme corresponds to the Cα atomic B-factors (low to high B-factors shown in a blue to orange spectrum). The images were generated in PyMOL (DeLano, 2002).
Figure S.IV-3 - Comparison of glycosylated vs deglycosylated RBD structures. A) Schematic representation of SFV Env and the 17 predicted N-glycosylation sites, labeled N1 to N15 (Luftenegger et al., 2005). The sites that are 100% conserved in all FV Envs (Asn 141 (N3), Asn 390 (N8), Asn 781 (N13), Asn 807 (N14), Asn 832 (N15)) are indicated with red underscored letters. The two furin sites are represented by scissors. LP, SU and TM are the abbreviations for the leader peptide, surface subunit and transmembrane subunit, respectively. The sugar residues, N-acetyl glucosamine (NAG) and mannose (MAN), that could be resolved in RBD D or RBD G are shown. The N-linked oligosaccharide core is shown in the grey inlet, with the cleavage sites indicated for the EndoD and EndoH glycosidases. A fraction of proteins expressed in insect cells contains an α1-6 fucose bound to the first NAG, rendering the sugar sensitive to cleavage by EndoD, but resistant to EndoH. Thus, both EndoD and EndoH were used for deglycosylation of the recombinant RBD. The figure was created in Biorender.com. B) Superposition of the RBD G (purple) and RBD D (grey) structures done in PyMOL (DeLano, 2002).
Figure S.IV-4 - Comparison of the SFV RBD fold with that of the RBD of Orthoretroviruses. Structures of RBDs from gammaretroviruses and of the SU from HIV are shown to illustrate the lack of structural homology between the RBDs from different genera of retroviruses.
5 Figure S.IV-5 -Sequence conservation of FV EnvSequences corresponding to 11 FV Env were aligned in Clustal Omega5 and the alignment was plotted using ESPript https://espript.ibcp.fr[START_REF] Robert | Deciphering key features in protein structures with the new ENDscript server[END_REF], with colors that indicate % identity (white letter, red background 100% identical; red letters, white background >70% identity; black letters, white background, <70% identity). The black, horizontal line separates simian from other FVs. The secondary structure elements corresponding to the SFV RBD X-ray structure and the AF model for the feline FV RBD are plotted above and below the alignment, respectively. The N-linked glycosylation sites are indicated with stars. The already established nomenclature for the N-glycosylation sites (N1 to N15) is applied. The strictly conserved Nglycosylation sites have a thicker border, and the sites that carried sugars, which could be resolved in our structure, have grey filling. The residues interacting with heparan-sulfate are marked with blue ovals. Loops 1-4 are indicated with bars above the alignment and labeled as L1-L4, using the same color code as in Fig.S2. The boundaries for the LP, SU, RBD, TM subunit are shown, as well as for the RBD variable and RBDjoin regions (the numbering corresponds to that of gorilla SFV Env, GII-K74 genotype). To distinguish the TM subunit from the TM domain, which is the region spanning the membrane, the latter is referred to as the TM anchor . The two furin sites are indicated with the scissors drawing. The Env sequences used in the alignment were obtained from public databases and with following accession numbers: SFVggo_huBAK74 (GII-K74, genotype II gorilla SFV, GenBank: AFX98090.1), SFVggo_huBAD468 (GI-D468, genotype I gorilla SFV; GenBank: AFX98095.1), SFVpsc_huHSRV13 (CI-PFV, known as Prototype Foamy Virus genotype I chimpanzee SFV; GenBank: AQM52259.1), SFVpvePan2 (CII-SFV7, genotype II chimpanzee SFV; UniProtKB/Swiss-Prot: Q87041.1), SFVcae_LK3 (Genotype II African green monkey SFV; NCBI Reference: YP_001956723.2), SFVmcy_FV21 (genotype I macaque SFV; UniProtKB/Swiss-Prot: P23073.3), SFVcja_FXV (Marmoset FV; GenBank: GU356395.1), SFVppy_bella (Orangutan SFV; GenBank: CAD67563.1), BFVbta_BSV11 (Bovine FV; NCBI Reference: NP_044930.1), EFVeca_1 (Equine FV; GenBank: AAF64415.1), and FFVfca_FUV7 (Feline FV; UniProtKB/Swiss-Prot: O56861.1). Genotypes I and II have been defined for gorilla, chimpanzee, green monkey and macaque FVs(Aiewsakun et al., 2019a;[START_REF] Galvin | Identification of recombination in the envelope gene of simian foamy virus serotype 2 isolated from Macaca cyclopis[END_REF][START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF].
Figure S.IV-7 - Functional features of FV RBD mapped onto the structure. Functional features plotted on the RBD structure. The conserved 'RBDcons' (residues 218-241 and 488-552) and variable 'RBDvar' (residues 242-487) regions are plotted on the X-ray structure of the gorilla SFV RBD and colored in light grey and red, respectively. The glycosylation sites are indicated with the stars at the bottom.
Figure S.IV-8 - AlphaFold models of FV RBDs
Figure S.IV-9 - FV RBD common core excludes a large portion of the upper subdomain. Superposition of the RBD experimental structure and 11 AF models (Fig. S.IV-8) yielded a 'common core' model that includes the residues with Cα rmsd < 4 Å for all pairwise superpositions. Those residues are indicated with purple bars above the sequence alignment, which is colored using the same scheme as in Fig. S.IV-5. The small inlet in the upper left corner represents the entire RBD as a reference for comparison, with the common core colored in purple and the remaining residues in grey. The structural and sequence alignments were carried out as explained in Figs. S.IV-8 and S.IV-5, respectively.
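The common-core selection described in this legend can be illustrated with a short calculation: given Cα coordinates of the experimental structure and the models, already superposed and restricted to residues present in all of them, keep residues whose pairwise Cα deviations all stay below 4 Å. The array shapes and the random placeholder coordinates below are assumptions for illustration; this is not the exact pipeline used for the figure.

# Hedged sketch: select "common core" residues whose Calpha positions deviate by
# less than 4 A between every pair of already-superposed RBD models.
import numpy as np

CUTOFF = 4.0  # Angstrom, as stated in the legend

# coords: shape (n_models, n_aligned_residues, 3); placeholder random data here.
rng = np.random.default_rng(0)
coords = rng.normal(size=(12, 335, 3))

# Pairwise per-residue Calpha-Calpha distances between all model pairs
diffs = coords[:, None, :, :] - coords[None, :, :, :]   # (m, m, r, 3)
dists = np.linalg.norm(diffs, axis=-1)                  # (m, m, r)

core_mask = (dists < CUTOFF).all(axis=(0, 1))           # per-residue criterion
print(f"{core_mask.sum()} of {core_mask.size} residues fall in the common core")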
Figure S.IV-10 - The inter-protomer RBD contacts formed by the upper domain loops show poor sequence conservation. Three SFV RBD protomers fitted into the electron density maps obtained for PFV Env (Effantin et al.), as shown in Fig. IV-4, are rendered by residue conservation. The % identity was calculated in Chimera (Pettersen et al.) according to the sequence alignment shown in Fig. S.IV-5, and residues were colored with a white to maroon gradient as indicated by the color key. The residues showing less than 30% and more than 90% sequence identity are colored in solid white and maroon, respectively.
Figure S.IV-11 - Recombinant RBD variants remain monomeric in solution. The size exclusion chromatography (Superdex 200) profiles for the GII RBD expressed in mammalian cells are shown for the WT protein (red line) and the variants (blue and green lines).
Figure S.IV-12 - Flow cytometry gating strategy
(Fig. S.IV-12C). Mean and SD from two independent experiments are shown.
Figure S.IV-13 - Effect of mutations on FVV release and infectious titer. Five batches of FVVs carrying wild-type GII-K74 SU (red), mut1 (blue) and mut2 (green) SU were produced, each represented by a single dot. A) The concentration of the vector particles was quantified by RT-qPCR of the β-galactosidase transgene. Each batch was titrated twice, and mean titers are presented; lines represent mean values from the five FVV batches. The dotted line represents the quantification threshold. B) FVV infectious titers were quantified on BHK-21 cells. Each batch was titrated twice, and mean titers are presented; lines represent mean values from the five FVV batches. The dotted line represents the quantification threshold. The different FVVs were compared using the paired t-test, * p<0.05, ** p < 0.01.
Figure S.IV-14 - Structural basis for the RBDjoin region being dispensable for binding to cells. Functional features plotted on the RBD structure. A) The regions identified in the bipartite PFV RBD as essential (indicated as RBD1 and RBD2 according to the more recent nomenclature) (Duda et al., 2006) and non-essential (or RBDjoin) (Dynesen et al., 2022, submitted) for SFV entry are colored in dark grey and green, respectively, and plotted on the X-ray structure of the gorilla SFV RBD. The numbering corresponds to the gorilla GII RBD. B) The AF models of the PFV RBD lacking the non-essential RBDjoin region (left panel) and of the whole PFV RBD (right panel) are colored according to the pLDDT values, using the same palette as in Fig. S.IV-8.
a) aa positions indicated are those of each protein; for GII-K74, some differ from the CI-PFV-based numbering (Fig. S.V-1). The symbols > and Δ designate aa substitutions and deletions, respectively.
Figure S.V-1 - CI-PFV, GI-D468, and GII-K74 Env sequence alignment. Env sequences from the CI-PFV, GI-D468, and GII-K74 strains were aligned using CLC Main Workbench software. Identical residues are indicated with dots. Boundaries of the leader peptide (LP), surface protein (SU), transmembrane protein (TM), receptor binding domain (RBD)1, RBDj, and RBD2 are indicated over the sequences. The RBDj domain is highlighted by italic characters and the SUvar domain is highlighted by the grey colored background.
Figure S.V-3 - Western blot analysis of WT SU proteins used in the study. Mammalian cell supernatants collected 72 h post-transfection were heat-denatured before immunoblotting with either an anti-Strep-tag antibody (A and C) or an anti-SU antibody (B). SFV SU was expressed as GII-K74 immunoadhesin without a Strep-tag, GII-K74 immunoadhesin with a Strep-tag, GII-K74 SU with a Strep-tag, and as CI SU with a Strep-tag (A and B). MLV SU was expressed fused to a Strep-tag (C). For the CI and GII SUs, two bands are visible, mostly for the SU-ST construct, in accordance with the results of other reports (Falcone et al.).
Figure S.V-4 - Purity of immunoadhesins used in the study assessed by Coomassie blue gel staining
Figure S.V-5 - Plasma antibodies do not bind to peptides covering the loops located at the apex of the RBD and targeted by nAbs
Figure S.V-6 - The GII 351glyc immunoadhesin is unable to block nAbs: exclusion of a nonspecific effect of protein aggregation. 293-F cells were transfected with the plasmid encoding GII 351glyc. The supernatant was collected after 72 h of culture and the immunoadhesin affinity-purified using the Strep-tag fused to the C-terminus. Then, half the volume was purified by size exclusion chromatography. A. The affinity-purified and four fractions of chromatography-purified GII 351glyc were analyzed on a Coomassie-stained gel, with or without reducing treatment. High molecular weight proteins were present in the affinity-purified sample and the first two SEC fractions. B to E. The plasma samples from four individuals infected with a GII SFV were diluted to their ≈ IC90
Figure S.V-7 - Sequences from gorilla SFV strains circulating in Central Africa are conserved in the epitopic regions targeted by nAbs
Identical residues are indicated by dots; the background colors correspond to the physical properties (RasMol color code). Black squares indicate the identified epitopic regions: L2 (A), the 345-353 loop and N7' (B), L3 (C), and L4 (D). Within each genotype, we observed identical sequences or conservative aa changes. The only exception was an N351/D polymorphism in the 345-353 loop from GI that may alter the expression of the N7' site in this strain.
Figure S.V-8 - Recombinant immunoadhesins bind to susceptible cells. HT1080 cells were incubated with immunoadhesins and the bound protein detected by staining with an anti-mouse Fc antibody and flow cytometry analysis. A. The gating strategy for viable single cells and the staining intensity are shown for GII SU added at three concentrations. Levels of bound GII SU are expressed as the ratio of the MFI from immunoadhesin-treated cells to the MFI of untreated cells. B. To compare the binding capacity of the immunoadhesins, staining was performed at three doses, the MFI ratios plotted as a function of immunoadhesin concentration, and the area under the curve (AUC) calculated. Shaded regions represent the AUC. Data from five independent experiments performed with WT immunoadhesins are presented as the mean and standard error. CI SU bound at higher levels than GII SU. C. The treated and mutated immunoadhesins were tested for binding to susceptible cells and staining levels were normalized to that of the WT immunoadhesin included in every experiment. The graph shows lower staining by GII ΔRBDj than GII SU.
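The AUC summary used in panel B can be computed with a simple trapezoidal rule over the dose-response points. The sketch below is a minimal illustration with invented doses and MFI ratios, not the measured values.

# Hedged sketch: area under the dose-response curve from three binding measurements.
import numpy as np

dose_ug_ml = np.array([1.0, 3.0, 10.0])   # immunoadhesin concentrations (assumed)
mfi_ratio  = np.array([2.1, 4.8, 9.5])    # staining over untreated cells (assumed)

auc = np.trapz(mfi_ratio, dose_ug_ml)     # trapezoidal AUC over the tested dose range
print(f"AUC = {auc:.1f} (arbitrary units)")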
Table I-1 - Properties of FV in comparison to orthoretroviruses and hepadnaviruses

Property | Orthoretroviridae | Spumaretroviridae | Hepadnaviridae
Viral genome | RNA | RNA/DNA | DNA
Reverse transcription stage | Early | Early/Late | Late
Synthesis of pol transcript independently of gag | No | Yes | Yes
Integration step | Yes | Yes | Rarely
Particle budding site | Plasma membrane | Intracellular membrane, ER or Golgi | Intracellular membrane, ER
Table I-2 - In vitro cell tropism of replicative or vector-based PFV and HS expression

Cell line | Species | Tissue | Sensitivity to PFV infection | Cell surface HS-expression level | References
Mammalian
293/293T | Human | Embryonic kidney epithelium | ++ | ++ | (Hill et al., 1999; Nasimuzzaman and Persons, 2012)
A549 | Human | Lung epithelium | ++ | ++ | (Nasimuzzaman and Persons, 2012)
BHK-21 | Hamster | Kidney fibroblasts | +++ | ++ | Reference cell line
CHO-K1 | Hamster | Cervix/ovary epithelium | +++ | +++ | (Hill et al., 1999; Nasimuzzaman and Persons, 2012; Plochmann et al., 2012)
COS-7 | AGM | Kidney fibroblast-like | +++ | +++ | (Hill et al., 1999; Nasimuzzaman and Persons, 2012; Plochmann et al., 2012)
CRFK-LL | Cat | Kidney epithelium | +++ | +++ | (Plochmann et al., 2012)
G1E-ER4 | Mouse | Pro-erythroblasts | - | ND | (Stirnnagel et al., 2010)
HEL | Human | Lung fibroblasts | ++++ | ND | (Hill et al., 1999)
Hep G2 | Human | Liver epithelium-like | +++ | ++ | (Plochmann et al., 2012)
hMSC-Tert | Human | Bone marrow mesenchymal stem cells | ++++ | +++ | (Plochmann et al., 2012)
HT1080 | Human | Fibrosarcoma epithelium | ++++ | +++++ | Reference cell line
HT29 | Human | Colorectal epithelium | + | ND | (Hill et al., 1999)
Indian Muntjac | Deer | Skin fibroblasts | ++++ | ND | (Hill et al., 1999)
Jurkat | Human | T lymphocyte | ++ | ++ | (Nasimuzzaman and Persons, 2012; Stirnnagel et al., 2010)
K562 | Human | Myeloid bone marrow | ++ | ++ | (Nasimuzzaman and Persons, 2012)
LMtk- | Mouse | Fibroblasts | ++++ | ND | (Hill et al., 1999)
Mouse L | Mouse | Subcutaneous, adipose, areolar fibroblasts | +++ | +++ | (Stirnnagel et al., 2010)
MPK | Minipig | Kidney fibroblasts | ++ | ND | (Hill et al., 1999)
MRC-5 | Human | Lung fibroblasts | ++++ | +++ | (Plochmann et al., 2012)
Mv.1.Lu | Mink | Lung fibroblasts | ++ | ND | (Hill et al., 1999)
NIH-3T3 | Mouse | Embryonic fibroblasts | +++ | +++ | (Hill et al., 1999; Nasimuzzaman and Persons, 2012)
Its three subunits, the N-ter LP, central SU and C-ter TM, are 126, 446 and 417 aa long, respectively. The LP and TM subunits mediate membrane anchorage while the SU subunit contains the receptor binding domain (RBD). FV Env folds as a heterotrimeric spike protein which further assembles, mostly in hexagonal arrangements or lattices, in clusters on the surface of virions, as shown by cryo-electron tomography (Fig. I-9, bottom).
Global distribution of wild NHPs: NWMs, OWMs, Apes.
; Mouinga-Ondeme et al., 2010). In contrast, SFV was not detected in wild living populations of mountain gorillas
(Gorilla beringei beringei) in the rain forest spanning the border of Rwanda, Uganda and DRC
using a non-invasive sampling method of discarded plants. SFV was readily detected in
sympatric golden monkeys (Cercopithecus mitis kandti) from same region using the same
sampling method, excluding this non-invasive technique as reason for lack of detection in
mountain gorillas (Smiley Evans et al., 2016). In a survey using fecal samples (n=724) from
wild chimpanzees collected at 25 field sites over equatorial Africa, Liu et al. demonstrated that
SFVcpz is wide-spread across the four chimpanzee (Pan troglodytes) subspecies (P.t. verus,
vellerosus, troglodytes and schweinfurthii)
Table I-3 below. Figure created with BioRender.com.

Table I-3 - Details on zoonotic SFV infections documented to date

Location | Setting | No. (Study cases) | WB pos (%) | PCR pos (%) | Reference
Europe
Germany | Occupational | 2 (Primate center and lab worker) | ND / 1 (50) | 2 (100) | (von Laer et al., 1996; Schweizer et al., 1997)
Africa
Kenya | First isolate | 1 (HFV/PFV) | ND | 1 (Culture) | (Achong et al., 1971)
Cameroon | Natural | 1099 (NHP contact) | 10 (0.91) | 3 (0.27) | (Wolfe et al., 2004)
Cameroon | Natural | 1164 (Gen. population); 85/102 (NHP contact) | 21 (1.8); 10 (9.8) | 4 (0.34); 9 (8.8) | (Calattini et al., 2007)
Cameroon, DRC | Natural | 139 (Sex workers); 41 (STD patients); 179 (Blood donors) | 1 (0.72); 0 (0); 1 (0.56) | 0 (0); 0 (0); 1 (0.56) | (Switzer et al., 2008)
Gabon | Occupational | 20 (Primate center) | 2 (10) | 2 (10) | (Mouinga-Ondeme et al., 2010)
Cameroon | Natural | 35 (HTLV-3 case family) | 5 (14.3) | 1 (2.9) | (Calattini et al., 2011)
Cameroon | Natural | 1321 (Gen. population); 198 (NHP contact) | 26 (2); 53 (26.7) | 2 (0.2); 41 (20.7) [4/41 WB neg] | (Betsem et al., 2011)
Gabon | Natural | 78 (NHP contact) | 19 (24.4) | 15 (19.2) | (Mouinga-Ondémé et al., 2012)
DRC | Natural | 3846 (Rural population) | 16 (0.34) | 3 (0.08) | (Switzer et al., 2012)
Côte d'Ivoire | Natural | 1529 (Sick patients, pregnant women and TB patients) | 3 (0.20) | 1 (0.07) | (Switzer et al., 2016)
Asia
Indonesia | Natural | 82 (Temple workers) | 1 (1.2) | 1 (1.2) | (Jones-Engel et al., 2005)
Bangladesh, Indonesia, Nepal, Thailand | Natural | 305 (Total): 234 (Temple workers), 21 (Pet owners), 23 (Bushmeat hunters), 8 (Zoo workers), 19 (Villagers) | 8 (2.6) | 3 (0.98) | (Jones-Engel et al., 2008)
China | Occupational | 12 (Zoo workers) | ND | 2 (16.7) | (Huang et al., 2012)
Bangladesh | Natural | 209 (Villagers); 13 (Bedey nomads) | 18 (8.1); 0 (0) | 11 (5); 0 (0) | (Engel et al., 2013)
Bangladesh | Natural | 269 (Villagers); 45 (Bedey nomads) | 17 (6.4); 1 (2.2) | 12 (4.5); 0 (0) | (Craig et al., 2015)
North America
Canada, US | Occupational | 231 (Research centers) | 4 (1.8) | 4 (1.8) | (Heneine et al., 1998)
Canada, US | Occupational | 133 (Zoo workers) | 4 (3) | ND | (Sandstrom et al., 2000)
Canada | Occupational | 46 (Primate centers) | 2 (4.3) | 1 (2.2) | (Brooks et al., 2002)
Canada, US | Occupational | 187 (Research and zoos) | 10 (5.3) | 9 (4.8) | (Switzer et al., 2004)
US | Occupational | 116 (Primatologists) | 8 (6.9) | 0 (0) | (Stenbak et al., 2014)
South America
Brazil | Occupational | 56 (Research and zoos) | 10 (17.8) | 0 | (Muniz et al., 2017)
Sum of cases
Global | Occupational | 803 | 41 (5.1) | 20 (2.5) | WB pos only: n=21
Global | Natural | 10655 | 210 (2.0) | 108 (1.0) | WB pos only: n=106
Global | Total | 11456 | 251 (2.2) | 128 (1.1) | WB pos or PCR pos: n=129 [WB neg/ND, PCR pos: n=6]
Retroviral co-infections
Cameroon | SFV-HTLV | 16 | 13 | 13 (100) | (Filippone et al., 2015)
DRC | | 3 | | 3 (100) | (Halbrook et al., 2021)
Cameroon, DRC | SFV-HIV | 4 | 2 | 1 (50) |
Côte d'Ivoire | | 2 | | 1 (
Table I-4 - In vivo SFV cell tropism in infected NHPs and humans

Infecting virus (# donors) | CD4+ T cells | CD8+ T cells | CD19+ B cells | CD14+ monocytes | Other cell types | References
NHP
SFVagm (9) | 1/3 | 9/9 | 5/9 * | 2/7 | PMNL: 1/7 | (von Laer et al., 1996)
SFVcpz (4) | 3/4 | 4/4 | 3/4 * | 1/4 | PMNL: 3/4 | (von Laer et al., 1996)
Human
SFVagm (1), PFV (1) | 0/2 | 2/2 | 0/2 | 0/2 | CD56+ NK cells: 0/2 | (von Laer et al., 1996)
SFVgor (11) | 9/11 | 10/11 | 7/11 | 2/11 | CD56+ NK cells: 1/11 | (Rua et al., 2014)
Table I-5 - Summary of PFV restriction by intrinsic host factors

Host factor | Restriction | Host | Interaction | Stage of viral life cycle | Counteraction | References
Early steps
PAK3 | Yes | Human | ND | Early step | - | (Kane et al., 2016)
SAMHD1 | Yes | Macaque | ND | Early step | - | (Kane et al., 2016)
SAMHD1 | No | Human, Apes, OWMs | - | - | - | (Gramberg et al., 2013; Kane et al., 2016)
TRIM5α | No | Human | - | - | - | (Yap et al., 2008)
MxA | No | Human | - | - | - | (Regad et al., 2001)
Mx2 | No | Human | - | - | - | (Bahr et al., 2016)
SERINC2 | No | Coelacanth, Human | Env | - | - | (Ramdas et al., 2021)
Late steps
ADAMDEC1 | Yes | Macaque | ND | Late step | - | (Kane et al., 2016)
APOBEC3 | Yes | Feline | RT+Gag | RT | Bet | (Chareza et al., 2012; Lukic et al., 2013; Löchelt et al., 2005)
APOBEC3 | Yes | Murine | RT | RT | - | (Russell et al., 2005)
APOBEC3B | Yes | Human | RT | RT | Bet | (Delebecque et al., 2006)
APOBEC3B | Yes | Macaque | ND | Late step | - | (Kane et al., 2016)
APOBEC3C | Yes | Human | RT | RT | Bet | (Russell et al., 2005)
APOBEC3G | Yes | Human | RT+Gag | RT | Bet | (Delebecque et al., 2006; Russell et al., 2005)
APOBEC3G | Yes | Murine | RT | RT | - | (Delebecque et al., 2006; Russell et al., 2005)
APOBEC3G | Yes | Simian | RT | RT | - | (Delebecque et al., 2006; Russell et al., 2005)
APOBEC3G | Yes | Macaque | ND | Late step | - | (Kane et al., 2016)
APOBEC3F | Yes | Macaque | ND | Late step | - | (Kane et al., 2016)
APOBEC3F | Yes | Human | RT | RT | - | (Delebecque et al., 2006; Russell et al., 2005)
FFAR2 | Yes | Human | ND | Late step | - | (Kane et al., 2016)
MLKL | Yes | Macaque | ND | Late step | - | (Kane et al., 2016)
MOV10 | Yes* | Human | ND | Late step | - | (Kane et al., 2016; Yu et al., 2011)
Nmi | Yes | Human | Tas | Transcription | - | (Hu et al., 2014)
OASL | Yes | Human | ND | Late step | - | (Kane et al., 2016)
OASL | Yes | Macaque | ND | Late step | - | (Kane et al., 2016)
PHF11 | Yes | Human | IP | Transcription | - | (Kane et al., 2020; Kane et al., 2016)
Pirh-2 | Yes | Human | Tas | Transcription | - | (Dong et al., 2015)
PML | Yes | Human | Tas | Transcription | - | (Meiering and Linial, 2003; Regad et al., 2001)
SGK1 | Yes | Human | Tas + Gag | Transcription / Gag stability | - | (Zhang et al., 2022)
SLFN11 | Yes | Human, Bovine, AGM | | | | (Guo et al., 2021)
SLFN12 | Yes | Human | ND | Late step | - | (Kane et al., 2016)
TBC1D16 | Yes | Human | ND | Transcription | - | (Yan et al., 2021)
Tetherin | Yes | Human | Env | Virion release | - | (Jouvenet et al., 2009; Xu et al., 2011)
TNFSF10 | Yes | Macaque | ND | Late step | - | (Kane et al., 2016)
TRIM5α | Yes | NWMs | Gag | Capsid assembly | - | (Yap et al., 2008)
Resistant
APOBEC1 | No | Human, Murine | - | - | - | (Delebecque et al., 2006)
APOBEC2 | No | Human, Murine | - | - | - | (Delebecque et al., 2006)
APOBEC3A | No | Human | - | - | - | (Russell et al., 2005)

*Contradictory results. ND; Not determined.
Ignacio Fernández 1, Lasse Toftdal Dynesen 2, Youna Coquin 2, Riccardo Pederzoli 1, Delphine Brun 1, Ahmed Haouz 3, Antoine Gessain 2, Felix A. Rey 1, Florence Buseyne 2 and Marija Backovic 1*
1 Institut Pasteur, Université Paris Cité, CNRS UMR3569, Unité de Virologie Structurale, 75015
Paris, France.
2 Institut Pasteur, Université Paris Cité, CNRS UMR3569, Unité d'Epidémiologie et Physiopathologie des Virus Oncogènes, 75015 Paris, France. 3 Institut Pasteur, Université Paris Cité, Plateforme de cristallographie-C2RT, CNRS UMR 3528, 75015 Paris, France.
9 | APPENDICES
9.1 Supplementary Tables - Manuscript I
Table S.IV-1 - X-ray crystallography data collection and refinement statistics

 | SFV GII RBD D (native) data (PDB 8AEZ) | RBD D (derivative) data | SFV GII RBD G (native) data (PDB 8AIC)
Data collection
Wavelength | 0.9786 | 1.907 | 0.9786
Space group | P3221 | P3121 | P61
Table S.V-4 - Plasma samples used for the neutralization study
Participant Ethnicity SFV infection a
BAD448 Bantu GI
BAK132 Pygmy GI
LOBAK2 Pygmy GI
BAD551 Bantu GII
BAK133 Pygmy GII
BAK228 Pygmy GII
BAK232 Pygmy GII
MEBAK88 Pygmy GII
BAD348 Bantu GI+GII
BAD447 Bantu GI+GII
BAD468 Bantu GI+GII
BAK55
Cell surface HS-expression level according to[START_REF] Nasimuzzaman | Cell Membrane-associated heparan sulfate is a receptor for prototype foamy virus in human, monkey, and rodent cells[END_REF][START_REF] Plochmann | Heparan sulfate is an attachment factor for foamy virus entry[END_REF][START_REF] Stirnnagel | Analysis of prototype foamy virus particle-host cell interaction with autofluorescent retroviral particles[END_REF], ND; not determined. Tableadapted from[START_REF] Hill | Properties of human foamy virus relevant to its development as a vector for gene therapy[END_REF].
ACKNOWLEDGEMENTS
I sincerely thank my PhD advisor Dr. Florence Buseyne for giving me the opportunity to be part of her group. I am forever grateful for the scientific guidance and support during the past four-plus years. It allowed me to stay curious and, importantly, become a better scientist.
I would like to send a special thank you to all jury members who agreed to evaluate my work and be part of my PhD Thesis dissertation: Prof. Sylvie Van Der Werf, Dr. Arnaud Moris, Dr.
Martine Braibant, Prof. Alessia Zamborlini and Dr. Ahidjo Ayouba.
A big thank you to Prof. Antoine Gessain and all members of the EPVO unit for the support and for providing a brilliant and professional environment for scientific growth. I want to thank Dr. Philippe Afonso for the many important discussions during my PhD and interests in my work. Thank you to Olivier Cassar for many great moments and all sorts of administrative support, especially when arriving as a foreigner in Paris. Big thank you to Dr. Youna Coquin and Thomas Montange for sharing office and for your vital contributions to my manuscript. I am grateful for your help and company during the many hours spend together in the P2+. Also, thank you to fellow and former PhD students in the lab for creating a lively, fun and open space: Dr. Jim Zoladek, Dr. Jill-Lea Ramassamy and (soon to be Dr.) Sophie Desgraupes. Thank you to Prof. Pierre-Emmanuel Ceccaldi, Dr. Aurore Vidy-Roche, Patricia Jeannin, Jeanne Pascard, Jocelyne Creff and Isma Ziani as well as former EPVO members Dr. Mathilde Coutaudier, Ingrid Fert, Dr. Claudia Filippone and Dr. Mathieu Hubert for help, support and all the good moments in the lab. A very special thank you to my #TeamFoamy collaborators Dr. Ignacio Fernandez, Dr. Marija Backovic and Prof. Félix Rey. Their contributions and support in my project are beyond words. I am so grateful for all their structural and biochemical expertise and inputs including countless discussions on experimental settings and trouble-shootings. Thank you for helping me bring this project to the next level. It has been an absolute privilege to work with all of you! I would like to thank the PPU for giving me this opportunity to come and pursue my PhD at Institut Pasteur. This program made a big difference for me and I am grateful for all the support, network and friendships I have made through the PPU. Thank you to Dr. Susanna Celli, Dr. Nathalie Pardigon, Katri Vuollet and everybody involved. Also thank you to all my fellow PhD students from the PPU Trefouël Class for many great moments, both scientifically and socially despite the COVID pandemic.
4.6 Acknowledgements
This work was supported by the 'Agence Nationale de la Recherche' [ANR-10-LABX62-IBEID, Intra-Labex Grant, M.B.] and the 'Programme de recherche transversal from Institut Pasteur' [PTR2020-353 ZOOFOAMENV, F.B.]. The funding agencies had no role in the study design, generation of results, or writing of the manuscript. We thank the staff from the Utechs
Cytometry & Biomarkers and Crystallography platform at the Institut Pasteur, the synchrotron source Soleil (Saint-Aubin, France) for granting access to the facility, and to the staff of Proxima 1 and Proxima 2A beamlines for helpful assistance during X-ray data collection. We are grateful to Jan Hellert, Pablo Guardado-Calvo and Philippe Afonso for the discussions and advice, with special thanks to Max Baker for reading the manuscript and English language corrections.
Acknowledgements
We thank H. Mouquet and V. Lorin for sharing the 293-F cells and advice. We are grateful to Mathilde Couteaudier for mentoring MD and LTD in their performance of the experimental work. We used devices from the Cytometry and Biomarker Utechs at the Institut Pasteur and we thank their staff for their support and advice. We thank Delphine Brun from the Unité de Virologie Structurale for technical assistance. The manuscript has been edited by a native English speaker.
Thank you to the FRM and the Department of Virology at Institut Pasteur for funding to extend my PhD
Funding
This work was supported by the Agence Nationale de la Recherche (ANR-10-LABX62-IBEID, Intra-Labex Grant (M.B.)), and the Programme de Recherche Transversal from the Institut Pasteur (PTR2020-353 ZOOFOAMENV, F.B.). SFV protein production was supported by the European Virus Archive-GLOBAL project, which has received funding from the EU Horizon 2020 Research and Innovation Programme (grant agreement number 871029). L.T.D. was supported by the Pasteur-Paris-University (PPU) International Doctoral Program and the Fondation pour la Recherche Médicale, including additional supportive funding from the Danish Pasteur Society, Augustinus Fonden, Knud-Højgaards Fond, and Viet-Jacobsen Fonden. The funding agencies had no role in the study design, generation of results, or writing of the manuscript.
Competing interests
The authors have no competing interests to declare.
Abstract
Simian foamy viruses (SFVs) are retroviruses that are frequently cross-transmitted to humans.
SFVs establish life-long infection in their human hosts, with the persistence of replication-competent virus. Zoonotic SFVs do not induce severe pathology and are not transmitted between humans. Infected individuals develop potent neutralizing antibodies (nAbs) that target the SFV envelope protein (Env). Env carries a variable region that defines two SFV genotypes and is the exclusive target of nAbs. However, its antigenic determinants are not understood. Here, we characterized nAbs from SFV-infected individuals living in Central Africa.
The nAbs target conformational epitopes within two major antigenic areas located at the Env apex: One mediates the interaction between Env protomers to form Env trimers and one harbors several determinants of Env binding to susceptible cells. One binding determinant is genotype-specific. We propose a model integrating structural, genetic, functional, and immunological knowledge on the SFV receptor binding domain.
Materials and Methods
Human plasma samples
Blood samples were drawn from adult populations living in villages and settlements throughout the rainforests of Cameroon. Participants gave written informed consent. Ethics approval was obtained from the relevant national authorities in Cameroon (the Ministry of Health and the National Ethics Committee) and France (Commission Nationale de l'Informatique et des Libertés, Comité de Protection des Personnes Ile de France IV). The study was registered at www.clinicaltrials.gov, https://clinicaltrials.gov/ct2/show/NCT03225794/.
SFV infection was diagnosed by a clearly positive Gag doublet on Western blots using plasma samples from the participants and the amplification of the integrase gene and/or LTR DNA fragments by PCR using cellular DNA isolated from blood buffy-coats [START_REF] Betsem | Frequent and recent human acquisition of simian foamy viruses through apes' bites in central Africa[END_REF]. We identified the SFV origin by phylogenetic analysis of the integrase gene sequence (Betsem et al., 2011). The SFV genotype was determined by amplification of the SUvar DNA fragment by PCR [START_REF] Lambert | Potent neutralizing antibodies in humans infected with zoonotic simian foamy viruses target conserved epitopes located in the dimorphic domain of the surface envelope protein[END_REF]. Plasma samples from 24 participants were used for this study (Tables S.V-1 and S.V-4). Four participants were not infected with SFV and 20 were infected with a gorilla SFV.
Viral strains, amino-acid numbering, and Env domain nomenclature
We used sequences from primary zoonotic gorilla SFVs, SFVggo_huBAD468 (GI-D468, JQ867465) and SFVggo_huBAK74 (GII-K74, JQ867464) (Rua et al., 2012a) and the laboratoryadapted chimpanzee SFV, SFVpsc_huPFV (CI-PFV, KX087159) [START_REF] Wagner | Sequence errors in foamy virus sequences in the GenBank database: resequencing of the prototypic foamy virus proviral plasmids[END_REF] for the synthesis of foamy viral vectors (FVV) and envelope proteins and peptides. For simple reference to previously described Env sequences and functions, we used the amino-acid positions from the CI-PFV strain, unless otherwise stated. The GI-D468, GII-K74, and CI-PFV sequence alignment are shown in Fig. S.V-1. When referring to infecting SFV strains and the antibodies raised against them, gorilla and chimpanzee genotype I SFV are referred to as GI and CI, respectively; gorilla genotype II SFV is referred to as GII.
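Because positions are reported in CI-PFV numbering even when the construct comes from a gorilla strain, an equivalence map between numbering schemes is useful. The sketch below is a hedged illustration of how such a map can be built from a pairwise alignment with Biopython; the FASTA file names are placeholders and the alignment parameters are generic choices, not the exact procedure used for Fig. S.V-1.

# Hedged sketch: map residue numbering of one Env sequence onto another strain's
# numbering via a global pairwise alignment. File names are placeholders.
from Bio import SeqIO, Align
from Bio.Align import substitution_matrices

ci_pfv = str(SeqIO.read("CI-PFV_env.fasta", "fasta").seq)    # hypothetical file
gii_k74 = str(SeqIO.read("GII-K74_env.fasta", "fasta").seq)  # hypothetical file

aligner = Align.PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score, aligner.extend_gap_score = -10, -0.5
aln = aligner.align(ci_pfv, gii_k74)[0]

# Build a map: GII-K74 position (1-based) -> equivalent CI-PFV position
gii_to_ci = {}
for (ci_start, ci_end), (gii_start, gii_end) in zip(*aln.aligned):
    for offset in range(ci_end - ci_start):
        gii_to_ci[gii_start + offset + 1] = ci_start + offset + 1

print(gii_to_ci.get(342))  # e.g., CI-PFV-equivalent position of GII-K74 residue 342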
Cells
Baby hamster kidney (BHK)-21 cells (ATCC CCL-10, hamster kidney fibroblast) were cultured in DMEM-5% fetal bovine serum (FBS). HT1080 cells (ECACC 85111505, human fibrosarcoma)
The secondary structure content was calculated using the 2StrucCompare webserver (Klose et al.) at https://2struccompare.cryst.bbk.ac.uk/index.php.
Authors contribution
Supplementary Tables -Manuscript II
Table S.V-1 -Plasma samples used for the ELISA assays
Participant Ethnicity SFV infection a Fig.
… Pygmy GI+GII X
a Participants were infected with a gorilla SFV of which the genotype (GI or GII) was defined by PCR using primers located within SUvar (Lambert et al.). Some participants were coinfected by strains from both genotypes (GI+GII).
Table S.V-5 -Methods used to predict epitopic regions and to design the mutant SU proteins
Name | Prediction | Genotype-specific features
GII ΔN5, GII ΔN6, GII ΔN7, GII ΔN9 | Functional study [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF] |
GII ΔN7' | Presence varied according to the viral strain [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF] | Absent from CI-PFV strain
GII ΔN10, CI ΔN10 | Functional study [START_REF] Luftenegger | Analysis and function of prototype foamy virus envelope N glycosylation[END_REF] | Genotype-specific localization of N10 [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]
GII swap407 | Genotype-specific sequence before N10 | Genotype-specific localization of N10 [START_REF] Richard | Cocirculation of Two env Molecular Variants, of Possible Recombinant Origin, in Gorilla and Chimpanzee Simian Foamy Virus Strains from Central Africa[END_REF]
GII ΔRBDj, GII swapRBDj, CI ΔRBDj, CI swapRBDj | Functional study [START_REF] Duda | Characterization of the prototype foamy virus envelope glycoprotein receptor-binding domain[END_REF] | GII SU with CI RBDj (GII swapRBDj) was expressed; CI SU with GII RBDj was not
GII K342A/R343A, GII R356A/R369A | Functional study (Fernandez et al., 2022, submitted) | Mutations introduced in GII-K74 ectodomain; CI-PFV ectodomain is expressed at low levels
GII ΔL2, CI ΔL2 | Crystal structure (Fernandez et al., 2022, submitted) + disordered secondary structure (a) |
GII ΔL3, CI ΔL3, CI L3swap, GII ΔL4, CI ΔL4 | Crystal structure (Fernandez et al., 2022, submitted) |
GII 263 glyc, GII 426 glyc, GII 450 glyc, GII 459 glyc | Genotype specific + CBtope (b) |
GII 351 glyc, GII 364 glyc, GII 485 glyc | Genotype specific + CBtope (b) + disordered secondary structure (a) | Loop size around aa351 differs between genotypes
GII 350 glyc, GII 349+E, GII swap345, GII swap333, GII E502A, GII L505N, CI G350 glyc | Designed after testing GII 351 glyc |
CI 463 glyc | Designed after testing GII 459 glyc |
(a) The Protein Homology/analogY Recognition Engine V 2.0 (Phyre2) web portal was used for secondary structure prediction [START_REF] Kelley | The Phyre2 web portal for protein modeling, prediction and analysis[END_REF]. Disordered secondary structures predicted by the software were considered to define epitopic regions to be tested.
(b) CBtope software predicts conformational B-cell epitopes on the basis of their primary sequence (http://crdd.osdd.net/raghava/cbtope/, [START_REF] Ansari | Identification of conformational B-cell Epitopes in an antigen from its primary sequence[END_REF]). We applied the recommended parameters, i.e. a 19 aa-long window and a -0.3 threshold, and considered values ≥ 4 as potential epitopes.

Membranes were stained with a mouse monoclonal antibody that recognizes the LP subunit of Env. The WT gp130 Env precursor was observed, as well as mutant Env with a lower molecular weight, as expected.
ABSTRACT
Simian foamy viruses (SFVs) are ancient and wide-spread complex-type retroviruses that have co-evolved with their non-human primate (NHP) species for millions of years. These viruses can be transmitted to humans, primarily through bites, leading to the establishment of a life- |
04121400 | en | [
"info"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04121400/file/publi-7237.pdf | Mohsen Ahadi
email: [email protected]
Florian Kaltenberger
email: [email protected]
5GNR Indoor Positioning By Joint DL-TDoA and DL-AoD
Keywords: Positioning, Time Difference of Arrival, Angle of Departure, Position Estimation, Beamforming, 5G New Radio
Positioning a user terminal based on cellular signals is an attractive solution in situations where accurate positioning based on global navigation satellite systems (GNSS) is not possible, such as inside buildings. The 5G New Radio (NR) networks based on 3GPP have introduced several enhanced features to allow the accurate positioning of user terminals. Frequency Range 2 (FR2) mmWave setups are especially attractive for localization because the large bandwidths allow for high-resolution estimation of the Time Difference of Arrival (TDoA), while the beamforming capabilities can be exploited to estimate the Angle of Arrival (AoA) or Angle of Departure (AoD) of the signals. In this work, we propose a new UE-based positioning algorithm that combines the Downlink Time Difference of Arrival (DL-TDoA) and Downlink Angle of Departure (DL-AoD) information estimated from the downlink positioning reference signals (DL-PRS) introduced in 3GPP Rel-16.
TDoA is a widely used horizontal positioning technique that does not require tight synchronization between base stations and mobile stations. Moreover, multiple antennas beamforming on base stations leads to high vertical positioning accuracy with AoD. We use ray-tracing-based site-specific channel models to evaluate our joint positioning algorithm's performance in an Indoor Factory (InF) scenario. The simulation results show sub-meter user localization error which is a significant improvement compared to applying the previous methods separately.
I. INTRODUCTION
Location information of mobile devices provides valuable data that can be exploited for new services and applications, for example, Industry 4.0 and the Internet of Things (IoT). This information may also be used to support wireless operators to enhance network performance [START_REF] Wang | Mobile device localization in 5G wireless networks[END_REF]. However, a precise position estimation based on GNSS is not possible for indoor scenarios where no Line of Sight (LOS) to a satellite is available. 5G-NR Stand Alone (SA) networks propose mmWave in FR2 which enables wideband signals, low latency, beamforming, and precise angle estimation with multiple antennas. These possibilities in combination with the 5GNR positioning architecture can be exploited for an accurate indoor positioning system. Several well-known positioning techniques have been proposed for determining the position of a point: Time of Arrival (ToA) [START_REF] Le | Closed-form and near closed-form solutions for TOA-based joint source and sensor localization[END_REF], [START_REF] Nguyen | Optimal geometry analysis for multistatic TOA localization[END_REF] Time Difference of Arrival (TDoA), Received Signal Strength (RSS), and Direction of Arrival [START_REF] Del Peral-Rosado | Survey of cellular mobile radio localization methods: From 1G to 5G[END_REF] and Departure. (also known as Angle of Arrival and Angle of Departure) Although ToA-based positioning results in a proper horizontal accuracy along the x, and y-axis, it requires highly accurate clock synchronization among all Base Stations (BSs) and mobile devices. RSS-based techniques [START_REF] Tarrio | A new positioning technique for RSS-based localization based on a weighted least squares estimator[END_REF], [START_REF] Stoyanova | RSS-based localization for wireless sensor networks in practice[END_REF], are very sensitive to fast and slow fading and tend to provide only rough localization estimates. On the other hand, TDoA systems avoid the requirements of clock synchronization at the point of interest or tag-end by considering the different arrival times of signals that originate at two distinct reference points [START_REF] Munoz | Position Location Techniques and Applications[END_REF]. The time difference calculation at the mobile avoids synchronization requirements between the mobile and the BSs. Note that TDoA-based positioning schemes still require clock synchronization between all the BSs in the system. Thus, excluding synchronization issues, the sources of error of TDoA-based systems are the same as the ones existing for ToA schemes. The utilization of antenna arrays on the transmitter or receiver, allows the angle parameters to be measured and exploited for high positioning precision especially vertical. Time and angle-based positioning have been pursued in many previous research works separately. The authors of [START_REF] Wong | A geometric approach to passive target localization[END_REF] propose a geometric approach for positioning problems, but optimization is not considered. In the papers [START_REF] Chanandk | A simple and efficient estimator for hyperbolic location[END_REF]- [START_REF] Yang | Constrained total least-squares location algorithm using time-difference-of-arrival measurements[END_REF], refinements for TDoA-based positioning algorithms are proposed to make the position estimations robust. However, the practical implementation is not well-considered, and neither are simulations in realistic environments. 
They also assume (in the simulations) that all the TDoA (or ToA) estimates have the same statistical characteristics, which is not the case in practice. In [START_REF] Chen | Positioning Algorithm and AoD Estimation for mmWave FD-MISO System[END_REF], a simple least-squares method is used to estimate the user's coordinates based on DL-AoD. In other related works, finding the UE position depends on measuring the AoA, which requires multiple antennas on the UE. This is not feasible for every type of equipment, such as mobile devices. Therefore, we propose using the DL-AoD, which is reported from the gNBs to the UE over network protocols [START_REF]Stage 2 functional specification of User Equipment (UE) positioning in NG-RAN[END_REF], without requiring a large antenna array on the user terminal. In addition, a joint TDoA-AoD position estimation algorithm for an indoor scenario with no GNSS LOS is presented for precise UE self-positioning. The novelty of this paper is the application of a joint DL-TDoA and DL-AoD positioning algorithm with sub-meter accuracy in both the horizontal and vertical planes. In the following sections, we go through the latest 5G positioning enhancements in architecture, signals, and measurements. Moreover, we demonstrate the positioning accuracy that is possible with these new capabilities.
II. 5G POSITIONING ARCHITECTURE AND PROTOCOLS
A. RAN and Core Network Architecture
The 5G positioning architecture in Release 16 [START_REF]Stage 2 functional specification of User Equipment (UE) positioning in NG-RAN[END_REF] follows the LTE positioning model by introducing new entities in the 5G Next Generation Radio Access Network (5G-RAN) and the 5G Core Network (5GC), depicted in Figure 1. The LTE Positioning Protocol (LPP) is used as the communication protocol between the UE and the Location Management Function (LMF) via the Access and Mobility Management Function (AMF) in the 5GC. Moreover, signaling between the gNB and the LMF is carried over the New Generation Positioning Protocol A (NRPPa). Multiple antennas on the gNBs and the availability of mmWave in 5G NR enable the measurement of the Downlink Angle of Departure (DL-AoD) as supporting information for the UE to localize itself. Through a new on-demand System Information (SI) procedure, a UE can request positioning System Information Blocks (posSIBs) including the coordinates of the transmitting antennas as well as the beam angle information. The gNB provides this information to the LMF over NRPPa. For DL-AoD estimation, the UE measures the Reference Signal Received Power (RSRP) of the DL-PRS beams. Moreover, by performing a Reference Signal Time Difference (RSTD) estimation on the determined beams with the highest power from multiple gNBs, the UE measures the DL-TDoA of the DL-PRS resources [START_REF]LTE; 5G; LTE Positioning Protocol (LPP)[END_REF]. Based on the PRS DL-TDoA, the DL-AoD, and the geographical coordinates of the gNBs shared by the LMF, the UE can estimate its position. In Section V we demonstrate how to jointly use angle and time parameters along with the geographical coordinates of the gNBs to estimate the UE position accurately in both the horizontal and vertical planes. When configuring a device to measure on a specific PRS resource in a PRS resource set, the location server learns not only which site the reported measurements for this PRS resource set belong to, but also the specific beam from that site [START_REF]Physical channels and modulation[END_REF].
2) PRS Comb Size: PRS setup enables permuted staggered comb. In this application, permuted indicates that the comb in each OFDM symbol has a distinct offset in the frequency domain. The comb factor can be adjusted to 2, 4, 6, or 12 sub-carriers, which means that the comb is used on every 2nd to every 12th sub-carrier. Using various combs, numerous simultaneous PRSs may be multiplexed.
3) PRS Bandwidth: A PRS resource in the frequency domain can be configured with up to 264 Physical Resource Blocks (PRB) at a Subcarrier Spacing (SCS) of 120 kHz, which makes a 400 MHz bandwidth (BW) in FR2, with all PRS resources in a PRS resource set having the same bandwidth and frequency-domain location. In the time domain, one PRS resource spans two, four, six, or twelve OFDM symbols.
III. AOD ESTIMATION
A. Beamforming
Beam management is a set of Layer 1 (physical) and Layer 2 (medium access control) procedures to acquire and maintain a set of beam pair links [START_REF]Study on New Radio access technology physical layer aspects." 3rd Generation Partnership Project[END_REF] (a beam used at gNB paired with a beam used at UE). Beam sweeping is necessary at both transmit/receive point (TRP) side and UE side to establish a beam pair link.
The transmitted beamformed PRS waveform is received sequentially across each receive beam for receive-end beam sweeping. In general, for N transmit beams and M receive beams, each of the N beams is sent M times from gNB, so that each transmit beam is received over the M receive beams. Using assistance data sent to the UE, it can estimate the AoA and AoD based on the indices of the best beam pair.
B. Phased Antenna Array
To illustrate this procedure, assume that we use a Uniform Rectangular Array (URA). Array elements are distributed in the yz-plane in a rectangular lattice, with the array boresight along the x-axis. For a wave propagating in a direction described by azimuth ϕ and zenith angle θ, the wave vector k is given by:

k(θ, ϕ) = (k_x, k_y, k_z) = (2π/λ) (sin θ cos ϕ, sin θ sin ϕ, cos θ)   (1)
The steering vector represents the set of phase delays for an incoming wave at each sensor element. For a plane wave described by a wave vector k and an antenna array with N elements, the steering vector w(k) is an N × 1 complex vector representing the relative phases at each antenna and is given by:

w(θ, ϕ) = [e^(-j k·r_1), e^(-j k·r_2), ..., e^(-j k·r_N)]^T,   (2)

where r_i = (x_i, y_i, z_i) is the location of the i-th antenna element.
To estimate the AoD, each gNB goes through a beam training process with a codebook containing a set of codewords C_P = {w_1, ..., w_P} [START_REF] He | Codebook-Based Hybrid Precoding for Millimeter Wave Multiuser Systems[END_REF]. Therefore, a coarse estimation for the AoD of the propagation path is described as:

p* = arg max_{p ∈ P} A_p   (3)
where the beam with codeword w_p(θ, ϕ) maximizes the received power A_p at the UE. Without considering noise, the ideal received signal strength is given by:

A_p = (h w_p(θ, ϕ))^H (h w_p(θ, ϕ))   (4)
Thus, using the beam index p, the UE can derive the azimuth (θ) and elevation (ϕ) from the positioning assistance data sent to the UE [START_REF] He | Codebook-Based Hybrid Precoding for Millimeter Wave Multiuser Systems[END_REF].
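As a rough illustration of this beam-training logic, the following Python/NumPy sketch builds the steering vectors of a URA codebook and picks the codeword that maximizes the received power, from which a coarse AoD is read off. It is an illustrative sketch only: the array geometry, the hypothetical codebook grid, and the function names are assumptions, not the implementation used in this paper.

import numpy as np

def ura_steering_vector(azimuth, zenith, ny, nz, spacing=0.5):
    # Steering vector of an ny x nz uniform rectangular array in the yz-plane, boresight
    # along x. 'spacing' is the element spacing in wavelengths, so the wave vector of
    # Eq. (1) is expressed in units of 2*pi/lambda.
    k = 2 * np.pi * np.array([np.sin(zenith) * np.cos(azimuth),
                              np.sin(zenith) * np.sin(azimuth),
                              np.cos(zenith)])
    iy, iz = np.meshgrid(np.arange(ny), np.arange(nz), indexing="ij")
    r = np.stack([np.zeros(ny * nz), iy.ravel() * spacing, iz.ravel() * spacing], axis=1)
    return np.exp(-1j * r @ k)        # Eq. (2), one phase term per element

# Hypothetical 4x4 codebook sweeping azimuth in [-45, 45] deg and elevation in [-45, 0] deg
# (elevation is converted to the zenith angle used in Eq. (1)).
azimuths = np.deg2rad(np.linspace(-45, 45, 4))
elevations = np.deg2rad(np.linspace(-45, 0, 4))
codebook = [(az, el, ura_steering_vector(az, np.pi / 2 - el, 4, 4))
            for az in azimuths for el in elevations]

def coarse_aod(h):
    # Pick the codeword maximizing the received power A_p = |h w_p|^2 (Eqs. (3)-(4));
    # h is the (16,) complex channel vector from the gNB array to the UE antenna.
    powers = [np.abs(h @ w) ** 2 for _, _, w in codebook]
    best = int(np.argmax(powers))
    return codebook[best][0], codebook[best][1]   # coarse (azimuth, elevation) of departure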
IV. TDOA ESTIMATION
Based on the RSRP measurements, the pair of TX and RX beams have been identified for further processing of the TDoA. The time difference calculation on the UE avoids exact time synchronization requirements to the gNB. Note that TDoAbased positioning schemes still require clock synchronization between all the gNBs in the system and frequency synchronization between UE and gNBs.
To measure the DL-TDoA based on PRS resources, a channel estimation algorithm is used to calculate the timing offset between the reception and transmission of PRS resources. As Channel Estimation in OFDM Systems describes [START_REF] Van De Beek | On channel estimation in OFDM systems[END_REF], the least-squares estimates of the channel frequency response at the pilot symbols are computed. To minimize any undesired noise from the pilot symbols, the least squares estimates are then averaged across time, then frequency bandwidth, with a growing window size that is proportional to the channel coherence time. Later, the findings will be interpolated over the whole sub-carriers in the bandwidth.
In the next step, the frequency domain channel estimation is converted into an impulse response using an inverse Fourier transform. In this work, we use the position of the strongest peak in the impulse response as our Time of Arrival (ToA) estimate, but more sophisticated peak search algorithms could be used as well. Finally, the timing difference between ToAs of the different pairs of gNBs is considered as the time difference of arrival or TDoA [START_REF]NR Physical layer measurements." 3rd Generation Partnership Project; Technical Specification Group Radio Access Network[END_REF].
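A minimal sketch of this processing chain is given below (Python/NumPy). It assumes that the least-squares pilot estimates have already been averaged over OFDM symbols as described above, and it skips the interpolation over the full bandwidth; the function names and the simple strongest-peak picking are assumptions made for illustration, not the exact implementation of this work.

import numpy as np

def toa_from_prs(rx_pilots, tx_pilots, subcarrier_idx, n_fft, sample_rate):
    # Least-squares channel estimate on the PRS comb, zero-filled onto the FFT grid,
    # converted to an impulse response; the strongest peak gives the ToA estimate.
    h_ls = rx_pilots / tx_pilots
    h_grid = np.zeros(n_fft, dtype=complex)
    h_grid[subcarrier_idx] = h_ls
    cir = np.fft.ifft(h_grid)
    cir_power = np.abs(cir) ** 2
    toa = np.argmax(cir_power) / sample_rate      # strongest-path delay, in seconds
    return toa, cir_power

def tdoa_from_toas(toas, ref=0):
    # DL-TDoA of each gNB with respect to the reference gNB (the RSTD observations).
    toas = np.asarray(toas, dtype=float)
    return np.delete(toas - toas[ref], ref)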
V. JOINT AOD-TDOA POSITION ESTIMATION
The trigonometric functions of the estimated azimuth and zenith (90° - elevation) angles of departure, as well as the range differences between the gNBs, are used in the positioning equations to compute the unknown UE coordinates. Taking (x, y, z) as the coordinates of the UE, the system of location equations can be written in vector form as follows:

f(x, y, z) + e = r   (5)

where f contains (3N - 1) equations for N gNBs (with 2N angle and N - 1 time observations):

f_AoD,2n-1(x, y, z) = cos(θ_n) = (z - z_n) / sqrt((x - x_n)^2 + (y - y_n)^2 + (z - z_n)^2)   (6)

f_AoD,2n(x, y, z) = sin(ϕ_n) = (y - y_n) / sqrt((x - x_n)^2 + (y - y_n)^2)   (7)

f_TDoA,n(x, y, z) = d_n - d_1 = sqrt((x - x_n)^2 + (y - y_n)^2) - sqrt((x - x_1)^2 + (y - y_1)^2)   (8)
In the above equations, (x_n, y_n, z_n) are the coordinates of the n-th gNB and d_n is the distance between the n-th gNB and the UE, for n > 1 in f_TDoA,n. Here d_1 refers to the reference gNB, to which the UE is synchronized and which has the strongest received signal; it is assumed to be the nearest gNB with LOS.
A. Taylor's Series Decomposition
The set of non-linear equations (6)-(8) at the point (x^(i), y^(i), z^(i)) can be solved by applying a Taylor series decomposition and keeping only the linear term [START_REF] Foy | Position-location solutions by Taylor-series estimation[END_REF] [START_REF] Chanandk | A simple and efficient estimator for hyperbolic location[END_REF]:

f(x, y, z) = f(x^(i), y^(i), z^(i)) + D^(i) (x - x^(i)),   (9)

where D^(i) is the (3N - 1) × 3 differential (Jacobian) matrix estimated at the point (x^(i), y^(i), z^(i)), x is the 3 × 1 UE coordinate vector, and x^(i) is the 3 × 1 coordinate vector estimated at the point (x^(i), y^(i), z^(i)):

D^(i) = [ ∂f_j/∂x  ∂f_j/∂y  ∂f_j/∂z ], j = 1, ..., 3N - 1 (rows evaluated at (x^(i), y^(i), z^(i))),   x = [x y z]^T,   x^(i) = [x^(i) y^(i) z^(i)]^T.   (10)
Substituting (9) into (5) gives the following linear system of equations with respect to (x - x^(i)):

D^(i) (x - x^(i)) + e = b^(i),   (11)
where b^(i) describes the difference between the observation vector r and the vector f estimated at the point (x^(i), y^(i), z^(i)):

b^(i) = r - f(x^(i), y^(i), z^(i)).   (12)
B. Gauss-Newton Process
A Minimum Variance Unbiased (MVU) estimator exists if the error vector e in (11) has a Gaussian Probability Density Function (PDF) with zero mean and covariance matrix C [START_REF] Kay | Fundamentals of Statistical Signal Processing -Detection Theory[END_REF]. Therefore, (x - x^(i)) can be found [START_REF] Sosnin | DL-AOD Positioning Algorithm for Enhanced 5G NR Location Services[END_REF] as:

x - x^(i) = ((D^(i))^T C^-1 D^(i))^-1 (D^(i))^T C^-1 b^(i)   (13)
Ultimately, by using (13), an iterative equation can be written to update the vector x based on its previous estimate:

x^(i+1) = x^(i) + ((D^(i))^T C^-1 D^(i))^-1 (D^(i))^T C^-1 b^(i)   (14)
Here C^-1 is the (3N - 1) × (3N - 1) inverse covariance matrix of the vector e. The error in the measurements comes from the reliability of LOS and NLOS identification. By measuring Λ²_{l,m} after beam determination, i.e. the channel impulse response power of the l-th gNB and the m-th time-domain path out of the total number of channel time-domain paths N_m used in the channel estimation, we obtain the test statistic u_l for the first-path channel impulse response power Λ²_{l,1} from the l-th gNB:

u_l = Λ²_{l,1} / Σ_{m=1}^{N_m} Λ²_{l,m}   (15)
Therefore, we assume that the first received path in the time-domain channel impulse response from each gNB corresponds to the LOS channel component. We take C^-1_{l,l} = 1 for LOS if u_l exceeds the threshold, and C^-1_{l,l} = 0 otherwise (NLOS):

C^-1_{l,l} = F(u_l) = 1 if u_l > γ, and 0 if u_l ≤ γ   (16)

where the threshold γ = 0.5 gives the best trade-off between the probability of NLOS detection and the false alarm rate. The resulting equation (14), starting from an initial guess x^(0), converges after a few iterations, as shown in Figure 7, and the vector x^(i+1) found at the last iteration is used as the estimate of the UE coordinates (x, y, z).
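The Gauss-Newton loop of Eqs. (5)-(16) can be sketched as follows (Python/NumPy). The Jacobian is obtained here by finite differences rather than in closed form, and the function and variable names are assumptions for illustration, not the code used to produce the results below.

import numpy as np

def los_weight(path_powers, gamma=0.5):
    # Diagonal entry of C^-1 from the first-path power ratio u_l (Eqs. (15)-(16)).
    p = np.asarray(path_powers, dtype=float)
    return 1.0 if p[0] / p.sum() > gamma else 0.0

def _model(p, gnb_pos, eps=1e-9):
    # Evaluate f(x) of Eqs. (6)-(8): interleaved [cos(theta_n), sin(phi_n)] terms,
    # followed by the range differences d_n - d_1.
    dx = p - gnb_pos
    d3 = np.linalg.norm(dx, axis=1) + eps
    d2 = np.linalg.norm(dx[:, :2], axis=1) + eps
    angles = np.column_stack([dx[:, 2] / d3, dx[:, 1] / d2]).ravel()
    return np.concatenate([angles, (d2 - d2[0])[1:]])

def joint_fix(gnb_pos, zenith, azimuth, range_diffs, c_diag, x0, n_iter=10):
    # Gauss-Newton update of Eq. (14); range_diffs are the TDoA observations converted
    # to metres, c_diag the (3N-1,) LOS/NLOS weights, x0 the initial position guess.
    r = np.concatenate([np.column_stack([np.cos(zenith), np.sin(azimuth)]).ravel(),
                        range_diffs])
    x = np.asarray(x0, dtype=float)
    W = np.diag(c_diag)
    for _ in range(n_iter):
        f0 = _model(x, gnb_pos)
        D = np.zeros((f0.size, 3))
        for j in range(3):                         # finite-difference Jacobian D^(i)
            step = np.zeros(3); step[j] = 1e-4
            D[:, j] = (_model(x + step, gnb_pos) - f0) / 1e-4
        b = r - f0                                 # Eq. (12)
        x = x + np.linalg.solve(D.T @ W @ D, D.T @ W @ b)   # Eq. (14)
    return x                                       # estimated UE coordinates (x, y, z)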
VI. SIMULATION SETUP
To evaluate the proposed algorithm's performance, we created a simulation environment in Matlab, based on the 3GPP standard indoor scenario with a map-based channel [START_REF]Study on channel model for frequencies from 0.5 to 100 GHz[END_REF], which represents a 60 m × 120 m × 10 m factory hall. Eight gNBs with known locations are installed above the clutters, which improves the availability of LOS. There are 12 metal clutters with various dimensions scattered on the ground. The clutters may block the LOS between the gNBs and the UE as well as reflect the downlink rays. The UE can have a random horizontal position with a standard height of 1.5 m. Each of the gNBs transmits the DL-PRS with the configuration in Table I.

1) Ray Tracing: This method employs ray tracing to display and compute propagation pathways, with surface geometry provided by the 'Map' attribute. Each displayed propagation path is color-coded according to the received power (dBm) or path loss (dB) along the path. The ray tracing analysis includes surface reflections but does not include effects from diffraction, refraction, or scattering.
B. Channel Model
To obtain the channel-impaired signal, we pass the transmit signal through a Clustered Delay Line (CDL) multi-input multi-output (MIMO) link-level fading channel. The CDL delay profile is configured using the ray-tracing propagation model outputs, including path delay, path gain, and angles of departure and arrival. The first path follows a Ricean fading distribution, which can point to LOS, and the other paths follow a Rayleigh fading distribution coming from multipath propagation.
VII. RESULTS
Figures 5 and 6 show the cumulative distribution of horizontal and vertical error for 100 randomly chosen user locations. TDoA and AoD positioning methods were employed both independently and jointly. As the results reveal, TDoA can achieve satisfactory horizontal precision, whereas AoD leads to good vertical accuracy. The joint position estimation approach, on the other hand, produces a precise estimate of the user coordinates in both directions. The graphs also show how a bigger antenna array enhances accuracy. This improvement is due to the larger number of beams produced by a bigger antenna array, as well as enhanced transmitter angle coverage and beamforming. Table II gives an analytical comparison between the two positioning setups based on the Horizontal Root Mean Square Error (HRMSE) and Vertical Root Mean Square Error (VRMSE), where M is the number of user positions in the simulations. In this paper, M = 100. Although the vertical and horizontal root mean square errors are below 1 m in the multi-antenna configurations, a larger antenna array enhances accuracy even further.
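For reference, HRMSE and VRMSE over the M simulated positions can be computed as in the following sketch (Python/NumPy); the exact definitions used in Table II are assumed to be the usual root mean square errors.

import numpy as np

def rmse_metrics(estimated, true):
    # estimated, true: (M, 3) arrays of UE coordinates; returns (HRMSE, VRMSE).
    err = np.asarray(estimated) - np.asarray(true)
    hrmse = np.sqrt(np.mean(np.sum(err[:, :2] ** 2, axis=1)))   # x-y plane error
    vrmse = np.sqrt(np.mean(err[:, 2] ** 2))                    # z-axis error
    return hrmse, vrmse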
Fig. 1. 5GNR Positioning Architecture
Fig. 2. Array geometry and coordinate system
Fig. 3. Zenith and azimuth angle of departure
To show the effect of transmitter antenna array size on the final positioning result, two different array sizes, 2 × 2 and 4 × 4, are placed on the surrounding walls at 8 m height, as shown in Figure 4. For instance, with the 4 × 4 TX antenna array used here, we have a combination of 16 [azimuth, elevation] pairs to transmit the DL-PRS over (-45°:45°) in azimuth and (-45°:0°) in elevation, where ϕ = [-45° : 90/4 : 45°] and θ = [-45° : 45/4 : 0°].
Fig. 4. Factory hall simulation environment
Fig. 5. Horizontal positioning error
Fig. 6. Vertical positioning error
Fig. 7. Gauss-Newton process error convergence
ACKNOWLEDGMENT
The work included in this paper has been supported by the "France 2030" investment program through the project 5G-OPERA as well as the European Space Agency Navigation Innovation and Support Programme (NAVISP) through the project HOP-5G. The view expressed herein can in no way be taken to reflect the official opinion of the European Space Agency. The authors would like to thank Rakesh Mundlamuri for producing the indoor map of this paper's simulation. |
04081199 | en | [
"info",
"shs.langue"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04081199v2/file/Exploring_Machine_Learning_perspectives_for_electroglottographic_signals___Minh_Ch%C3%A2u_NGUY%C3%8AN___livrable_CLD2025.pdf | Châu Nguyên
Keywords: language documentation, unwritten language, natural language processing, machine learning, neural networks, phonation types, creaky voice
Automatic Speech Recognition for less-studied languages: Exploring Machine Learning perspectives for electroglottographic signals by Minh-Châu Nguyên
List of Figures The work reported here was carried out within the framework of the research project entitled "Computational Language Documentation by 2025" (hereafter, the CLD2025 project). The main objective of the CLD2025 project is to facilitate the urgent task of documenting endangered languages by leveraging the potential of computational methods. To recapitulate the core argument of the project application: until a decade ago, attempts at using Automatic Speech Recognition for low-resource languages (including newly documented languages) yielded modest results: there were interesting developments, but practical usefulness remained limited, and deployment as part of language workers' workflows still appeared as a prospect for the future [START_REF] Besacier | Automatic speech recognition for under-resourced languages: A survey[END_REF][START_REF] Do | Towards the automatic processing of Yongning Na (Sino-Tibetan): developing a 'light' acoustic model of the target language and testing 'heavyweight' models from five national languages[END_REF]. A breakthrough is now possible: machine learning tools (such as artificial neural networks and Bayesian models) have improved to a point where they can effectively help to perform linguistic annotation tasks such as automatic transcription of audio recordings, automatic glossing of texts, and automatic word discovery [START_REF] Thieberger | LD&C possibilities for the next decade[END_REF][START_REF] Michaud | Integrating automatic transcription into the language documentation workflow: Experiments with Na data and the Persephone toolkit[END_REF][START_REF] Anastasopoulos | Endangered Languages meet Modern NLP[END_REF]. Significant achievements in this space include [START_REF] Partanen | Speech Recognition for Endangered and Extinct Samoyedic languages[END_REF], [START_REF] Prud'hommeaux | Automatic speech recognition for supporting endangered language documentation[END_REF], [START_REF] Liu | Enhancing documentation of Hupa with automatic speech recognition[END_REF], [START_REF] Macaire | Automatic Speech Recognition and query by example for Creole languages documentation[END_REF], and [START_REF] Rodríguez | Speech-to-text recognition for multilingual spoken data in language documentation[END_REF].
The CLD 2025 project, "Computational Language Documentation by 2025", is organized in six work packages. The present task contributes to the second work package: "Semi-automatic tonal models", which aims at designing workflows involving automatic speech recognition tools to supersede purely manual workflows. In other words, the ultimate goal here is to contribute to the implementation of processing chains for the documentation of low-resource languages, which integrate automatic speech recognition tools.
Beyond the staple tasks of Natural Language Processing, there exists a considerable space for computerassisted exploration and analysis of phonetic and phonological properties of languages. New strands of interdisciplinary research include linguistic reflections based on error analysis: reflecting on unexpected output from NLP tools, which offer a fresh perspective on the data (see e.g. [START_REF] Michaud | La transcription du linguiste au miroir de l'intelligence artificielle : réflexions à partir de la transcription phonémique automatique[END_REF]. Another strand consists in using machine-learning-assisted analysis of signals to obtain phonetic parameters to explore the phonetic and phonological characteristics of a language or dialect. The present work therefore aims to contribute to these new and challenging approaches.
Goals
A first-level goal of the present investigation is to automate the extraction of the glottal open quotient from electroglottographic recordings (some details on this technique will be provided in section 1.3).
That is a time-consuming annotation task which requires a certain degree of expertise. Based on my experience of linguistic fieldwork, I concur with the observation that computer-assisted work is highly desirable over fully manual workflows. In this respect, the annotation of electroglottographic signals is similar to standard tasks in Natural Language Processing, such as speech recognition, translation, and glossing. Manual annotation and semi-automatic analysis of electroglottographic signals using the PeakDet tool (as described in M.-C. Nguyen 2021, pp. 99-110) took me roughly 15 months to carry out for the data of 20 speakers. The recording duration for each speaker's data is just about 15 minutes, and the corpus has a simple structure: the 66 target syllables were embedded inside a carrier sentence (details are provided in Chapter 2 below). Performing the same work on a corpus of spontaneous speech would be even more complex and tedious in view of the well-documented variability of spontaneous speech. Moreover, workflows based on manual decisions are not technically reproducible, raising thorny epistemological issues. It would clearly not be desirable for electroglottographic analysis to become a craft proudly performed by a guild of experts on the rather vague basis of personal experience. Such a situation would be reminiscent of the problem with Cardinal Vowels, for which the ultimate reference was personal instruction from Daniel Jones... and notable variation is found from one generation of students to the next, defeating the purpose of this set of reference vowels [START_REF] Vaissière | On the acoustic and perceptual characterization of reference vowels in a cross-language perspective[END_REF].
Conversely, under the hypothesis (which, as we shall see below, was clearly over-optimistic) that a model could be trained to replicate the task of analysis of the electroglottographic signal, then the entire set of available (open-access) electroglottographic recordings on a range of typologically diverse languages could be processed in a consistent and reproducible manner, opening new avenues for the cross-linguistic study of phonation types in human speech, in full-reproducibility mode -a highly desirable development for the field of speech sciences [START_REF] Kobrock | Assessing the replication landscape in experimental linguistics[END_REF].
My contribution consisted in: (i) preparing and managing linguistic data for statistical processing and for a pilot study on the application of a neural network for the estimation of the glottal open quotient;
(ii) writing this technical report as a reference for linguists (non-computer scientists) in approaching and applying machine learning tools to data processing and archiving.
The work was carried out at LIG (Laboratoire d'Informatique de Grenoble, UMR 5217), under the joint supervision of a linguist, Solange Rossato, and a computer scientist, Maximin Coavoux (in collaboration with Alexis Michaud, from LACITO, UMR 7107).
Electroglottography: principles, analysis methods, and prospects for automation of analysis processes
In this section, we present a short introduction to the collection and analysis of electroglottographic data.
The technical explanations here aim to summarize the underlying principles of electroglottography, and the ways in which electroglottographic signals can be used in research. This part is crucially important because the linguist must communicate well with the computer scientist who will take over the data to train them in the neural network. The more precise and elaborate explanation is provided about the data and its usefulness to research, the better a computer scientist can understand the targets of the task and come up with solutions for automation. As an example: among the limitations acknowledged in Section 3.4, we will discuss an instance of misleading communication on my part that affected the outcome of the study.
Principles
The electroglottographic signal provides an estimate of variation in the contact area between the two vocal folds. Electroglottography (often abbreviated to EGG) was invented by Fabre in the mid-20th century. The initial report about the invention [START_REF] Fabre | Un procédé électrique percutané d'inscription de l'accolement glottique au cours de la phonation: glottographie de haute fréquence[END_REF] was followed by further studies by the same author over the following years (Fabre, 1958;Fabre, 1959;Fabre, 1961), initiating strands of research that are still active to this day.
Electroglottography is a common, widespread technique that enables the investigation of vocal-fold contact area in phonation in an easy and noninvasive way. A high frequency modulated current (F = 1 MHz) is sent through the neck of the subject. Between the electrodes, electrical admittance varies with the vibratory movements of the vocal folds, increasing as the vocal folds increase in contact. [START_REF] Henrich | On the use of the derivative of electroglottographic signals for characterization of nonpathological phonation[END_REF](Henrich et al., , p. 1321) ) The EGG signal is a continuous signal, like the audio signal. It can therefore be stored in the same format as the audio, and displayed with the same tools.
Methods to analyze the electroglottographic signal
In the previous manual/semi-automatic processing, the method chosen for analysis of the electroglottographic signal uses its derivative signal (dEGG). Glottis-closure instants are approximated through detection of positive peaks in the first derivative of the signal, and glottis-opening instants through detection of negative peaks in-between the positive peaks. This method is set out in full by [START_REF] Henrich | On the use of the derivative of electroglottographic signals for characterization of nonpathological phonation[END_REF]. Henrich's paper goes into technical detail concerning the four main phases of a glottal cycle: (i) closing phase, (ii) closed phase, (iii) opening phase, and (iv) open phase. Increase in vocal fold contact area is reflected by the closing phase (itself followed by the closed phase) in the electroglottographic signal, and the moment of fastest increase in vocal fold contact area corresponds to the glottis-closure instant. Decrease in vocal fold contact area begins during the closed phase, and continues into the opening phase; the moment of fastest decrease in vocal fold contact area is the glottis-opening instant.
But the correspondence between these four main phases, on the one hand, and detectable events on the electroglottographic signal, on the other hand, is not easy to establish. Instead, the electroglottographic signal corresponding to one glottal cycle can be divided into two portions only, as shown on Figure 1.1. These two portions are named closed phase and open phase of the vocal-fold vibratory cycle, and defined as follows: the closed phase extends from a glottis-closure instant to the next glottis-opening instant; and the rest of the cycle is the open phase (for a more detailed description, see [START_REF] Henrich | On the use of the derivative of electroglottographic signals for characterization of nonpathological phonation[END_REF], pp. 1321-1322, as well as D. G. Childers and Krishnamurthy 1984[START_REF] Colton | Problems and pitfalls of electroglottography[END_REF][START_REF] Orlikoff | Scrambled EGG: the uses and abuses of electroglottography[END_REF]). The derivative of the electroglottographic signal (dEGG) typically has a positive peak at glottis closure and a negative peak at glottis opening (see Figure 1.1); the latter corresponds to the fastest decrease in vocal fold contact area and is considered as the beginning of the glottal open phase. These peaks in the dEGG signal serve as the basis for estimating glottal parameters. The most well-known parameter is speech fundamental frequency (f0). The glottal open quotient is less well-known among linguists, as it cannot be easily estimated from the audio signal only, but it is used in many linguistic studies of phonation types and tones: see in particular Michaud (2004b), Brunelle, Nguyên, and K. H. Nguyen (2010), [START_REF] Abramson | Voice register in Mon: Acoustics and electroglottography[END_REF], [START_REF] Brunelle | Transphonologization of voicing in Chru: Studies in production and perception[END_REF], and [START_REF] Kirby | Transphonologization of onset voicing: revisiting Northern and Eastern Kmhmu[END_REF]. Additionally, the amplitude of the closing peak (Derivative-Electroglottographic Closure Peak Amplitude, abbreviated as DECPA) can be measured from the dEGG signal (Michaud, 2004a;[START_REF] Kuang | Vocal fold vibratory patterns in tense versus lax phonation contrasts[END_REF]); it is not presented and not studied here because the relationship of this parameter to phonation types is still not well established.
Fundamental frequency (f 0 , unit: Hz) is the inverse of the glottal period (i.e. the inverse of glottal cycle duration). Specifically, f 0 dEGG is obtained by measuring the duration between two consecutive glottal closing instants, corresponding to a period (one glottal cycle). The inverse of the duration of the cycle yields the fundamental frequency of the voice (the formula is simple: F = 1 / T). The values of f 0 have pitch as their perceptual counterpart: low f 0 is heard as low pitch, and high f 0 as high pitch.
Glottal open quotient (O q , unit: %). The measurement of O q dEGG requires the measurement of the duration of the glottal cycle, plus the detection of the glottal opening instant. This allows for computing the glottis-open interval; the open quotient is the ratio of the open-glottis interval to the entire cycle (the ratio between open time and fundamental period). This can be stated as the following equation: O q = (Open phase) / (Open phase+Closed phase). O q is a parameter that relates to phonation types: low O q demonstrates pressed phonation; medium O q reflects modal phonation; and high O q reflects flow phonation (whispery voice, shading into breathy voice). This relates to the following observation:
There might be a continuum of phonation types, defined in terms of the aperture between the arytenoid cartilages, ranging from voiceless (furthest apart), through breathy voiced, to regular, modal voicing, and then on through creaky voice to glottal closure (closest together). This continuum is depicted schematically in Figure 1.2. (Gordon and Ladefoged, 2001, p. 384)
A strong motivation for adopting the dEGG method for estimating O q is comparability across studies. This method is fairly widely used in phonetic studies of phonation types published since Nathalie Henrich's methodological article [START_REF] Michaud | Nasal release, nasal finals and tonal contrasts in Hanoi Vietnamese: an aerodynamic experiment[END_REF][START_REF] Mazaudon | Tonal contrasts and initial consonants: a case study of Tamang, a 'missing link' in tonogenesis[END_REF]Gao 2016, among others), and also in various other phonetic studies (e.g. [START_REF] Recasens | Voicing assimilation in Catalan three-consonant clusters[END_REF]. The use of similar algorithms facilitates comparison across studies, and hence across languages as well as across speakers and across datasets.
Moreover, the dEGG method is grounded in explicit assumptions that relate to physiological observations in a way which, although not simple and straightforward, is intuitively clear.
Criticism of estimation of the glottal open quotient by electroglottography
Estimation of the glottal open quotient by electroglottography has come under criticism, which it appears useful to review here.
In a review article entitled "Electroglottography -an update", [START_REF] Herbst | Electroglottography -An Update[END_REF] recapitulates important caveats about the interpretation of the electroglottographic signal. Some of them are well-known: "Vocal fold vibration, a complex phenomenon taking place in three spatial dimensions, is mapped onto a single time-varying value" (Herbst, 2020, p. 4). It needs to be borne in mind that electroglottography provides a linear insight into phenomena that are not linear, and thus only offers glimpses into complex phenomena, which ideally need to be addressed through an array of exploratory techniques: a multisensor platform [START_REF] Vaissière | Multisensor platform for speech physiology research in a phonetics laboratory[END_REF]. But Herbst's criticism cuts deeper. He questions the assumption that underpins the method employed here: that peaks on the derivative of the EGG signal provide reliable estimates of the timing of glottisclosure instants. Reviewing recent studies, he considers that they "strongly suggest that positive and negative dEGG peaks do not necessarily precisely coincide with GCI (i.e. glottis closing instant) and GOI (i.e. glottis opening instant), a notion that was already put forward by D. [START_REF] Childers | Vocal quality factors: Analysis, synthesis and perception[END_REF], who maintained that the EGG signal may not provide an exact indication for the instant of glottal closure" (Herbst, 2020, p. 7). As emphasized by [START_REF] Hampala | Relationship between the electroglottographic signal and vocal fold contact area[END_REF], "any quantitative and statistical data derived from EGG should be interpreted cautiously, allowing for potential deviations from true VFCA [Vocal Fold Contact Area]". The criticism is then extended to the very notions of glottis-closure instant and glottis-opening instant: "vocal fold contacting and de-contacting (as measured by EGG) actually do not occur at infinitesimally small instants of time, but extend over a certain interval, particularly under the influence of anterior-posterior (...) and inferior-superior phase differences of vocal fold vibration" (Herbst 2020, p. 7; emphasis in original).
Figure 1.2: Continuum of phonation types (Reproduced from Gordon and Ladefoged, 2001, p. 384).

From a theoretical point of view, there may be a slight confusion here, as surely no one among users of the method of estimating f0 and Oq by means of peaks on the dEGG signal believes that glottal activity consists of instantaneous events of glottis closing and opening. The notions of glottis-closure instant and glottis-opening instant should, as a matter of course, be delivered complete with due precautions and careful hedging for their proper interpretation, but these precautions do not detract from the usefulness of these concepts. It should suffice to say once and for all that f0 and Oq as estimated through the dEGG method should not be confused with the physical parameters that they aim to capture. A good way to make this distinction consistently consists in embedding a reminder about the estimation method within the acronym used for the parameter. Therefore, the notations f0 dEGG and Oq dEGG are adopted throughout the present report to refer to the measured parameters, as distinct from f0 and Oq, the latter being understood either as abstract and ideal, or as generic labels.
From a practical point of view, a key point here is what is meant by "precisely" when claiming that dEGG peaks do not coincide "precisely" with glottis-closure instants and glottis-opening instants. The weak claim that "perhaps the glottal area waveform, if available, would be a more suitable candidate" than the dEGG signal as a ground truth for glottal events is perfectly safe as a hypothesis, but hardly helpful for those to whom the glottal area waveform is simply not available. In practice, the difficulty of obtaining low-noise electroglottographic signals is a much more serious subject of concern to me than the fully accepted theoretical limitation whereby "the determination of contacting and de-contacting instants or events is an artificial concept" (Herbst, 2020, p. 10). The fact that glottis-opening instants as estimated from dEGG signals may be slightly earlier than those obtained by other methods does not detract from cross-token, cross-speaker and cross-language comparability, and common sense suggests that those are precious assets.
On topics of terminology, Herbst's proposals are not particularly straightforward to implement. He uses 'closed quotient' (C q ) rather than 'open quotient' (O q ), which is not a real difference at all:
C q = 1 -O q
He argues that 'closed quotient' should be replaced by 'contact quotient':
Given that the underlying EGG signal measures relative vocal fold contact area and not glottal closure, the terminology for that parameter should be limited to "contact quotient" instead of "closed quotient". Consequently, the term "open quotient" is also inappropriate, because EGG does not measure glottal opening. Instead, the term "quasi open quotient" (QOQ) might be used. (Herbst, 2020, p. 11) I leave it to more established researchers to decide whether to take the turn towards use of the term "quasi open quotient" (QOQ). Trying to weigh the advantages, I find them very slight, compared to O q . The suggestion to prefix "quasi" to the term "open quotient" strikes me as standing in contradiction to the statement (made by the author earlier on in the same paragraph) that this parameter "is not an ersatz closed quotient" (Herbst, 2020, p. 11). Among prefixes, "quasi" sounds like a reasonable equivalent for description as "ersatz": an inferior substitute or imitation, used to replace something that is unavailable and can only be approached, not equated.
Within studies related to electroglottography, "quasi" also brings to mind a proposal to build a "quasiglottogram signal" from the EGG signal [START_REF] Kochanski | A Quasi-glottogram signal for voicing and power estimation[END_REF]. While I can make no claim to understanding the maths sustaining the attempt to build a "quasi-glottogram signal", it is intuitively clear that the relationship between glottogram and electroglottogram in this proposal (published in the Journal of the Acoustical Society of America) was a much less straightforward one than that which links QOQ with O q . To conclude, it does not seem completely fair or productive to dismiss O q along with all other estimations of the glottal open quotient through electroglottography.
However, this method is more reliable for quasi-periodic phonation, where glottal cycles can be easily detected by performing an auto-correlation analysis. The case of non-periodic phonation, on the other hand, is more complicated to address (to the point that it is doubtful whether this method can be made to succeed). In the article, this problem was also pointed out, and an alternative algorithm for glottal cycle detection was proposed: "the period should be rather determined on a cycle-to-cycle basis from direct inspection of the electroglottographic signal and its first derivative in the time domain", i.e. falling back on a manual workflow.
The material of the current study involves creaky voice, a case of non-periodic phonation, as the target phenomenon. It could be interesting in the future to apply and test the wavegram in our data and compare it to the available results to see to what extent this technique can be applied to non-modal phonation and if it is a better approach to the EGG signal than the dEGG signal. For now, we retain the open quotient as the main glottal parameter extracted specifically from the electroglottographic signal.
Chapter 2
Input: corpus
The corpus: content (speech materials) and status of the data
The speech materials for this experiment are composed of minimal sets of real words. In total, they consist of 12 minimal sets that contrast for tone in smooth syllables (i.e., open syllables or syllables ending with a nasal coda), plus 3 minimal pairs that contrast for tone in checked syllables (i.e., syllables ending with a stop coda). Tables A.1 and A.2 in Appendix A provide full detail about these minimal sets and pairs.
Method of collection: each target word is required to be spoken four times: twice in isolation and twice in a carrier sentence.
The carrier sentence is a question including 4 words:
The total corpus (per speaker): Figure 2.1 recapitulates the total corpus of this study. Not only the target words but also the three frame words of the carrier sentence are annotated and processed. Thus, for each speaker, we have a total of 660 items, of which 264 items are target syllables and 396 items are frame syllables. A more detailed list of the amount of materials is given in Table 2.2. In some cases, the maximum number of items is not reached because some frame words are missing, as speakers tend to shorten the carrier sentence during a series of repetitions. The most serious case is in the data of the speaker F10. For some technical reason, we made a pause but mistakenly did not press the record button to resume, so the last part of the experiment was missed on the first run. In particular, the minimal set No. 11 in carrier sentence, the minimal set No. 12 and all three minimal pairs both in isolation and in carrier sentence were not recorded. As a consequence, this data lacks 75 items, including 11 target words in isolation and 16 target words in carrier sentence, which leads to the sorely felt absence of 48 frame words at all three positions (i.e. 16 items for each). The actual status of the data of each speaker is summarized in Table A.3. There are a total of 26 participants, 28 data files (F1 and M12 performed the experiment 2 times), twenty of which have been processed.
Note on some irregularities in the corpus
Some asymmetrical points of the data

There were some asymmetries in the data related to factors not accounted for in data collection, annotation, and pre-processing.
The most notable asymmetry in the data is the large difference in number between the sets for smooth tones and the pairs for checked tones. We have 12 minimal sets of smooth tones but only 3 minimal pairs of checked tones. Therefore, the target samples of the checked tones are four times fewer than those of the smooth tones. This is due to the fact that when designing the data collection, I did not consider the balance of the data between tones. I only considered syllables that were already found in the system of smooth tones, and then combined them with a final stop. This is unnecessary and limits the checked pairs that could be found. There are a few minimal pairs that were found but were omitted during the recording process due to inaccuracies in meaning or loanwords from Vietnamese. A detailed report on this topic is found in my Ph.D. dissertation at the end of the table of minimal pairs (M.-C. Nguyen, 2021, p. 67).
The second asymmetry is that the total number of items processed is not identical in the data of the twenty speakers. As previously mentioned in Section 2.1, the data for each speaker normally contains a total of 660 items. However, due to a technical problem during the recording session and the tendency to shorten the carrier sentence during a series of repetitions in some cases, there are a few words
missing from the data of 3 among the 20 speakers. The actual status of the data of each speaker is summarized in Table A.3. There are a total of 26 participants and 28 data files (F1 and M12 performed the experiment twice), of which 20 have been processed. Among the 20 data files that have been processed, 17/20 files have the full size of 660 items, 2/20 files (the data of F13 and M14) have missing items among the frame words, and 1/20 file (the data of F10) has missing items among both frame words and target words.
One of the data chunks that has the full size of 660 items but that nonetheless calls for special attention is that contributed by speaker F20. An issue with these data is that the ratio of excluded O q values stands at 83%, which is especially high and makes it an outlier. The average of this ratio for the other speakers is 18%, with the highest ratio being 36.3% (for F9) and the lowest ratio being 4.7% (for M9).
The suppression of Oq values is often due to unclear opening peaks when the voice breaks into creaky voice. But this is not the case for F20. Not only the syllables carrying Tone 4 but all other tones show the same situation, with imprecise opening peaks making it impossible to measure Oq values. The consistent behavior of these peaks throughout the experiment, up until the last minimal set over the syllable /ku/ (which was performed twice separately), provides evidence which makes me believe that this is related to a physiological phenomenon rather than to artifacts. This interesting case would be worth studying further and should be kept in mind in the present study, when the Oq values from the semi-manual process are used for the machine learning process and later for evaluating the results.
During the pre-processing to prepare data for this experiment and run it using machine learning methods, two unusual points have been detected from the previous manual processing.
Error on data of F3:
The first unusual point was noticed during the process of preparing data by extracting MFCC frames. In the result of the speaker F3 from manual processing, there were some cases (11 cases, exactly) where the last annotated cycles of a syllable were outside its time interval.
In order to understand this problem precisely, we must first get to know the process explained later in Section 3.1. In short, for running the machine learning models, the required data are the EGG signal and the results from the manual processing, which are stored in a three-dimensional matrix in Matlab files. The necessary information extracted from the Matlab matrices is stored in Excel files, including: the identifier and time interval of each item (syllable), the starting and ending time of each glottal cycle detected within the syllable, and the corresponding values of f0 dEGG and Oq dEGG if measurable.
Logically, the sum of the durations of all detected glottal cycles inside a syllable would be exactly the duration of that syllable. Therefore, the error was detected when, in some items, the last periods were out of range of the syllable duration. Thanks to automatic processing, such errors can be easily detected.

This occurred only in the data from speaker F3, and there were 11 items, which are not consecutive, that had this error. Considering all these erroneous items, I found a pattern in which the spurious length of each erroneous item is duplicated from the item that follows immediately after it. For example, the item with UID 0431 is one of the erroneous items. The incorrect length in the result is 41 glottal cycles, which is the correct length of the next item, which has UID 0432. By checking the autosave of each individual item, I figured out that the real length of item 0431 is 31 cycles instead of 41 cycles. The length is copied from item 0432, but the values of the extra 10 cycles at the end of item 0431 are not the same as the ones in item 0432. This part is patched from somewhere else and actually does not correspond to any part of the data.

The solution for this error is to simply cut off the extra spurious cycles of the 11 erroneous items.
It was easy to check the erroneous part since the starting time of this part is not consecutive to the correct part.
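A small consistency check of this kind is easy to automate. The sketch below (Python/pandas) flags items whose last annotated cycle ends after the item's time interval and trims the spurious cycles; the column names are hypothetical, since the actual Excel exports may label them differently.

import pandas as pd

def overflowing_items(cycles: pd.DataFrame, items: pd.DataFrame) -> pd.DataFrame:
    # cycles: one row per glottal cycle, with columns ['uid', 'cycle_start', 'cycle_end'];
    # items: one row per syllable, with columns ['uid', 'item_start', 'item_end'] (seconds).
    last_end = cycles.groupby("uid")["cycle_end"].max().rename("last_cycle_end")
    merged = items.set_index("uid").join(last_end)
    return merged[merged["last_cycle_end"] > merged["item_end"]].reset_index()

def trim_spurious_cycles(cycles: pd.DataFrame, items: pd.DataFrame) -> pd.DataFrame:
    # Drop cycles ending outside their item's interval, as was done for speaker F3.
    bounds = items.set_index("uid")["item_end"]
    keep = cycles["cycle_end"] <= cycles["uid"].map(bounds)
    return cycles[keep]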
I could not explain how this happened during the semi-manual processing. But by referring to the manual processing log, I can guess that this error occurred because the data from F3 were annotated and processed twice, owing to some mistakes made during the first processing. In the second round of processing, I did not start from the beginning, but from the token where I had made the mistake the first time; the part from the beginning up to this token was resumed from the previous processing.

This shows a side benefit of computational tools: they can help detect manual processing errors, which are difficult to check manually in a systematic way.
Manual workflow
In order to be better prepared and to have a better understanding of what the neural network has to learn automatically, this part tries to briefly recap the procedure of manual processing that produced the data which will be used down the line as a basis for tests with machine-learning tools.
In order to study the tone system of the target language, two phonetic parameters, fundamental frequency (f 0 dEGG ) and glottal open quotient (O q dEGG ), were estimated from the derivative of the EGG signal, DEGG [START_REF] Henrich | On the use of the derivative of electroglottographic signals for characterization of nonpathological phonation[END_REF], using PeakDet, a script available from the COVAREP repository [START_REF] Degottex | COVAREP-A collaborative voice analysis repository for speech technologies[END_REF] (An implementation in Praat is also available: Kirby, 2020.) PeakDet is designed for semi-automatic measurement: the results for each token are verified visually.
General process
Figure 2.3 shows the basic procedure that was applied to the data set of twenty speakers to obtain results.
For the most part, the initial input materials for this process are audio files obtained from the minimal set experiment. They are first segmented and annotated (using the Sound Forge software) to obtain the annotated EGG file in mono-channel format and the Regions List indicating the time codes for each token, together with its unique identifier (UID). These are the two inputs required by PeakDet, a semi-automatic tool for estimating f 0 dEGG and O q dEGG from the EGG signals: the values are computed automatically and then verified visually by the user. This meticulous (and time-consuming) verification process is summarized in the algorithm shown as Figure 2.4.
Fundamental frequency measurement
In this study, f 0 is estimated from the derivative of the electroglottographic (EGG) signal, hence we call it f 0 dEGG . To measure f 0 , the period is determined by detecting two successive closing peaks: the duration between two consecutive glottal closing instants corresponds to a fundamental period, and its inverse gives the fundamental frequency of the voice. In the majority of cases, the closing peak is unique and precise in each glottal period, so that f 0 dEGG can be obtained straightforwardly. Figure 1.1 shows an ideal example of well-defined closing peaks. However, in a few cases, imprecise multiple closing peaks linked to a physiological phenomenon can be encountered.
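As a minimal sketch (assuming the glottal closing instants have already been detected, for instance by PeakDet), the computation of f 0 dEGG per cycle reduces to:

```python
import numpy as np

def f0_from_closing_instants(closing_times_s):
    """f0 per glottal cycle, from the instants of glottal closure (in seconds).

    The duration between two consecutive closing instants is one fundamental
    period; its inverse is the fundamental frequency of that cycle (in Hz).
    """
    closing_times_s = np.asarray(closing_times_s, dtype=float)
    periods = np.diff(closing_times_s)   # one period per detected glottal cycle
    return 1.0 / periods
```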
Glottal open quotient measurement
Whereas f 0 dEGG is calculated based on closing peaks, which in the great majority of cases are welldefined (often with a unique peak), estimating O q dEGG requires the detection of opening peaks, which often runs into difficulties due to imprecise peaks: either cases where no peak stands out clearly, or cases where two or more peaks are present (multiple peaks). The search for opening peaks is even more difficult in the case of nonmodal phonation, such as when voicing transitions into creaky voice.
This makes user verification of O q dEGG a delicate business, quite unlike the verification of f 0 dEGG : it requires more than just a few adjustments for peculiar situations.
PeakDet asks for the verification of O q dEGG after the verification of f 0 dEGG has been completed. It first processes the signal automatically and offers O q dEGG calculated in four different ways:
1. maxima on the unsmoothed dEGG signal (displayed as orange squares);
2. maxima on the smoothed dEGG signal (displayed as orange stars);
3. barycentre of the peak on the unsmoothed dEGG signal (displayed as blue squares);
4. barycentre of the peak on the smoothed dEGG signal (displayed as blue stars).
The methods are divided into two sets:
• Detection of the local minimum on the signal in-between two closure peaks (technically, the 'maxima' mentioned above should be referred to as 'minima', since the peak is a negative peak). This method is applied twice: on the unsmoothed dEGG signal, and on the smoothed dEGG.
• Analysis of the shape of the opening peaks and calculation of a barycentre of the detected 'peaks-within-the-peak', giving each of the peaks a coefficient proportional to its amplitude. Again, this method is applied twice: on the unsmoothed dEGG signal, and on the smoothed dEGG.
A schematic sketch of how these two families of estimates translate into O q dEGG values is given below.
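The sketch below illustrates, under simplifying assumptions, how the two families of estimates can be turned into an O q dEGG value for one glottal cycle; the function names are illustrative, and the actual MATLAB implementation of PeakDet differs in its details (smoothing, thresholds, etc.):

```python
import numpy as np

def opening_instant_minimum(degg_chunk, times):
    """Opening instant taken as the most prominent negative peak of the dEGG chunk."""
    return float(times[np.argmin(degg_chunk)])

def opening_instant_barycentre(degg_chunk, times):
    """Barycentre of the negative-going parts, weighted by their amplitude."""
    weights = np.clip(-np.asarray(degg_chunk, dtype=float), 0.0, None)
    return float(np.sum(weights * times) / np.sum(weights))

def open_quotient(t_close, t_close_next, t_open):
    """O_q = duration of the open phase divided by the fundamental period."""
    return (t_close_next - t_open) / (t_close_next - t_close)
```

Here degg_chunk and times stand for the dEGG samples (smoothed or unsmoothed, hence the four variants) between two successive closing peaks and their time stamps.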
The manual task in this step requires the researcher to visually examine the dEGG signal to decide which method is most reliable for detecting negative peaks, which directly affects the calculation of O q dEGG values. This is a two-step verification. The first step is to decide which method is selected as the most reliable among the four, so that the O q dEGG values calculated with that method are stored in the last column (10th column) of the PeakDet results matrix. The second step is to examine each glottal cycle of the dEGG signal in detail, to check which cycles have a precise opening instant with a clear negative peak; these are retained as reasonable O q dEGG values. Conversely, cycles in which no single clear negative peak can be detected, because there are multiple peaks or no clear peak at all (as demonstrated in Figure 2.6), are removed by setting them to zero, to indicate that the automatically calculated O q dEGG value for these cycles is inapplicable.
If the decision to retain or remove O q dEGG values were simply based on whether or not a single precise opening peak is present in a glottal cycle, it would not be challenging to do it automatically, and it would not require the effort of visually verifying each glottal cycle of each syllable, which is the most time-consuming task and thus the biggest hurdle for the EGG analysis. However, since information on O q dEGG is frequently missing due to the absence of a single, clear opening peak during glottal cycles, it is worth trying to keep the O q dEGG values in the nearly clear cases: those where there is more than one opening peak but visual inspection reveals that one peak really stands out from the others. The two examples in Figure 2.7 show "good" cases of multiple opening peaks: cases where a prominent peak can be noticed in almost every cycle, and the distance between peaks inside the same cycle is small. In cases like this, as a rule I tried to keep as many O q dEGG values as possible by choosing the most reasonable method. For example, in the case of Figure 2.7a, I would select the barycentre method and keep all the O q dEGG values: even though the double opening peaks in cycles 62-64 are clear (compared to their neighboring cycles) and thus make it difficult to decide which one should be the main opening instant, the distance between the two peaks is nonetheless small enough that it is safe to take an intermediate point between them (by the barycentre method) as representative of the approximate opening instant. In the second example, in Figure 2.7b, all glottal cycles have triple opening peaks, but the first peak is always much more salient than the next two. In that case, I would select the method of maxima on the dEGG signal to catch the first peak of every cycle as the opening instant used in the calculation of the open quotient.
These two examples illustrate that it is feasible to keep some O q dEGG values in the case of "good" multi-opening peaks, as long as the main peaks stand out clearly and the distance between them is small (less than 5% difference between the methods of measuring O q dEGG ). In practice, there are many tricky cases where the decision of choosing a method and suppressing certain O q dEGG values is much more delicate, particularly when there is a transition between "good" peaks and "hopeless" peaks (Figure 2.7a is a simple example of this). Human visual verification is not a simple task, and I am not completely confident that my decisions were fully consistent throughout the analysis (and even if I were confident, proving that consistency would be yet another matter). These observations are crucial to the investigation reported here: they make it clear that estimating O q dEGG automatically on the basis of an electroglottographic signal is a challenging task for neural networks to learn.
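A possible automatic proxy for this rule of thumb, which is only an assumption on my part and not the criterion that was actually applied during annotation, would be to suppress a cycle whenever its candidate O q dEGG values disagree by more than 5%:

```python
def keep_cycle(oq_candidates, tolerance=0.05):
    """Keep a cycle only if its non-zero candidate Oq values agree within `tolerance`."""
    valid = [v for v in oq_candidates if v > 0]
    if not valid:
        return False
    return (max(valid) - min(valid)) / max(valid) <= tolerance
```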
In view of the information set out so far, we can now move on to the core of the report: investigating to what extent neural networks can learn from human decisions (as encapsulated in the available corpus) to carry out O q estimation from the dEGG signal.
O q dEGG estimation using machine learning methods
This chapter presents a set of experiments whose aim is to select among several estimations of the open quotient for each cycle of an EGG signal. The basis for selection consists of two inputs of a different nature: (i) the EGG signal and (ii) the output of PeakDet, consisting of the time codes of each EGG cycle, 4 candidate values based on the four methods presented in Section 2.3.3, as well as the f 0 dEGG of each cycle, also computed by PeakDet. The objective is to describe a neural network that reproduces the manual annotation pipeline described in Section 2.3, i.e. predicting for each cycle whether the cycle has a computable O q dEGG and, if so, what the most appropriate method is (among the 4 methods).
Data Description and Preprocessing
The results from manual processing in MATLAB are extracted and stored in Excel files (one file per speaker's data), which are made available in a GitHub repository (see: https://github.com/MinhChauNGUYEN/CLD2025_EGG). Each Excel file includes the following information:
• (Column A) The UID (Unique Identifier) of the item.
• (Column B) The beginning time of the syllable/item in the recording (in seconds)
• (Column C) The end time of the syllable/item in the recording (in seconds)
• (Column D) The beginning time of each glottal cycle inside the syllable
• (Column K) The Oq values that were retained after checking the opening peaks in the DEGG signal (by the user). The zeros mean that the Oq values at these cycles have been suppressed due to imprecise opening peaks.
• (Column L) The result of a Creak Detection algorithm: (0) means no creak, (1) means pressed voice or single-pulsed creak, (2) means aperiodic creak, and (3) means double-pulsed creak.
Columns B-J will be used as the input to the machine learning system (together with the EGG signal), whereas column K codes the target we need to predict. We present some statistics about the numbers of syllables and glottal cycles and the distribution of labels in Table 3.1. Since several of the 4 methods implemented in PeakDet may sometimes output the same O q dEGG values, some cycles may have several correct labels; as a result, the sum of percentages is higher than 100%.
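For instance, the proportion of suppressed cycles reported in Table 3.1 (label 0) can be recomputed from these files with a few lines of pandas, assuming one row per glottal cycle and column K in the eleventh position:

```python
import pandas as pd

def suppressed_oq_rate(xlsx_paths):
    """Fraction of glottal cycles whose Oq value was suppressed (column K == 0)."""
    frames = [pd.read_excel(path, header=None) for path in xlsx_paths]
    cycles = pd.concat(frames, ignore_index=True)
    retained_oq = cycles.iloc[:, 10]   # column K (the 11th column, 0-indexed 10)
    return float((retained_oq == 0).mean()), len(cycles)
```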
We performed two preprocessing steps on the wav files of the corpus: (i) segmentation of the signal into multiple shorter files (10 seconds maximum), so that each file contains one or a small number of items/syllables, and (ii) resampling to 32,000 Hz.
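A sketch of this preprocessing is given below; it uses a simple fixed-length split for illustration, whereas the actual segmentation follows item boundaries:

```python
import librosa
import soundfile as sf

def preprocess(wav_path, out_prefix, target_sr=32000, max_dur_s=10.0):
    """Resample a recording to 32 kHz and split it into chunks of at most 10 s."""
    signal, _ = librosa.load(wav_path, sr=target_sr, mono=True)
    chunk = int(max_dur_s * target_sr)
    for i in range(0, len(signal), chunk):
        sf.write(f"{out_prefix}_{i // chunk:03d}.wav", signal[i:i + chunk], target_sr)
```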
Neural Network Architecture
The machine learning system we implemented is a neural network based on a bi-LSTM, built with the SpeechBrain library [START_REF] Ravanelli | SpeechBrain: A General-Purpose Speech Toolkit[END_REF]. As stated above, its input consists of the EGG signal, together with additional information for each glottal cycle, namely: time codes (start time and end time), the f 0 dEGG value and the 4 O q dEGG values computed by PeakDet. The EGG signal is represented with Mel-frequency cepstral coefficient (MFCC) vectors, with a 6 ms window sliding every 2 ms (these values are lower than those typically used in speech recognition, in order to take into account the granularity of the representations we need).
MFCC vectors form an N × F matrix M^{(0)}, where N is the number of MFCC frames (i.e. the length of the signal in milliseconds divided by 2) and F is the number of MFCC features for each frame. This matrix is fed to a feedforward neural network:
M^{(1)} = \tanh\big(W^{(1)} \cdot \mathrm{LayerNorm}(M^{(0)}) + b^{(1)}\big),
and contextualized with a bidirectional LSTM:
M^{(2)} = \text{bi-LSTM}(M^{(1)}). \qquad (3.1)
Then, we represent each glottal cycle c by the concatenation of 3 vectors:
v_c = [M^{(2)}_{c_b} ;\, M^{(2)}_{c_e} ;\, o_c],
where c_b and c_e are the time codes for the beginning and end of the cycle, and o_c ∈ R^5 is a vector containing the 4 O q dEGG values and the f 0 dEGG of the cycle, as computed by PeakDet. Finally, we use another feedforward network to compute scores for each label and predict a label:
P = \mathrm{Sigmoid}\big(W^{(3)} \cdot \mathrm{ReLU}(W^{(2)} \cdot \mathrm{LayerNorm}(v_c) + b^{(2)}) + b^{(3)}\big),
where P = [P(y_0 = 1 \mid c), \ldots, P(y_4 = 1 \mid c)] gives a probability for each label. An illustration of the neural network is presented in Figure 3.1.
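A condensed PyTorch sketch of this architecture is given below; the number of MFCC coefficients (20) is an assumption, the hidden sizes follow the hyperparameters listed in Section 3.3, and the SpeechBrain training wrappers are omitted:

```python
import torch
import torch.nn as nn

class OqLabelNet(nn.Module):
    def __init__(self, n_mfcc=20, hidden=128, n_labels=5):
        super().__init__()
        self.norm_in = nn.LayerNorm(n_mfcc)
        self.ff_in = nn.Linear(n_mfcc, hidden)
        self.bilstm = nn.LSTM(hidden, hidden, bidirectional=True, batch_first=True)
        cycle_dim = 2 * (2 * hidden) + 5       # two contextual frames + o_c
        self.norm_cycle = nn.LayerNorm(cycle_dim)
        self.ff_hidden = nn.Linear(cycle_dim, hidden)
        self.ff_out = nn.Linear(hidden, n_labels)

    def forward(self, mfcc, cycle_bounds, oq_f0):
        # mfcc: (T, n_mfcc) frames of one syllable; cycle_bounds: (C, 2) frame
        # indices of each cycle's beginning and end; oq_f0: (C, 5) PeakDet values.
        m1 = torch.tanh(self.ff_in(self.norm_in(mfcc)))          # M^(1)
        m2, _ = self.bilstm(m1.unsqueeze(0))                     # M^(2)
        m2 = m2.squeeze(0)
        v = torch.cat([m2[cycle_bounds[:, 0]],
                       m2[cycle_bounds[:, 1]], oq_f0], dim=-1)   # v_c per cycle
        scores = self.ff_out(torch.relu(self.ff_hidden(self.norm_cycle(v))))
        return torch.sigmoid(scores)           # (C, 5) label probabilities
```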
Experiments
Training We train the model by maximizing the probabilities of gold labels on the training set, using the Adam algorithm [START_REF] Kingma | Adam: A Method for Stochastic Optimization[END_REF]. When evaluating the model, we take the highest probability label as the model's prediction. The hyperparameters of the model are:
• For MFCCs: window size (6ms), hop size (2ms), context size (2 frames on each side);
• For the network: dimension of hidden layers (128 for feed-forward networks, 128 for each direction of the bi-LSTM);
• For training: optimization algorithm (Adam), learning rate (0.008 for models that use the signal as input, 0.001 for the model ablation that does not use the signal), size of batches (8), number of training epochs.
We calibrated hyperparameters (in particular the learning rate) on the dev set during preliminary experiments. For final experiments, we train the models for 100 epochs and keep the checkpoint that maximizes accuracy on the dev corpus to evaluate it on the test section.
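Schematically, and leaving aside batching, data loading and the SpeechBrain-specific wrappers (here `accuracy` is a placeholder for whichever dev metric is monitored), the training and model-selection loop boils down to:

```python
import copy
import torch

def train(model, train_batches, dev_batches, accuracy, epochs=100, lr=0.008):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCELoss()        # maximise the probability of gold labels
    best_acc, best_state = -1.0, None
    for _ in range(epochs):
        model.train()
        for mfcc, bounds, oq_f0, gold in train_batches:
            optimizer.zero_grad()
            loss = loss_fn(model(mfcc, bounds, oq_f0), gold)
            loss.backward()
            optimizer.step()
        dev_acc = accuracy(model, dev_batches)
        if dev_acc > best_acc:          # keep the checkpoint that is best on dev
            best_acc, best_state = dev_acc, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```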
Experimental settings Our objective is to determine whether the use of the EGG signal improves the prediction of labels and contributes additional information compared to the PeakDet O q dEGG and f 0 dEGG values. Moreover, we would like to assess whether the EGG signal provides better information than the more easily accessible audio signal. To answer these questions, we experiment with several configurations, namely models with different types of input:
i. EGG signal + PeakDet O q dEGG + PeakDet f 0 dEGG : [M^{(2)}_{c_b} ;\, M^{(2)}_{c_e} ;\, o_c];
ii. EGG signal only: [M^{(2)}_{c_b} ;\, M^{(2)}_{c_e}];
iii. PeakDet O q dEGG + PeakDet f 0 dEGG : [o_c];
iv. as (i), but replacing the EGG signal by the audio signal;
v. as (ii), but replacing the EGG signal by the audio signal.
Results and discussion
We report results in Table 3.2. We use the following evaluation metrics: 3-class accuracy (0 vs 1-2 vs 3-4), 2-class accuracy and 2-class F-score (label 0 vs the other labels). These metrics are meant to take into account class imbalance, as well as the massive overlap between labels 1-2 and labels 3-4, respectively. For comparison, we also present results for a most-frequent-label baseline.
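Concretely, these grouped metrics amount to mapping the five labels onto coarser classes before scoring; a sketch using scikit-learn (and assuming, for simplicity, a single gold label per cycle, whereas in the data a cycle may have several correct labels) is:

```python
from sklearn.metrics import accuracy_score, f1_score

GROUP3 = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2}   # 0 vs 1-2 vs 3-4
GROUP2 = {0: 0, 1: 1, 2: 1, 3: 1, 4: 1}   # label 0 vs all other labels

def grouped_scores(gold, pred):
    g3, p3 = [GROUP3[y] for y in gold], [GROUP3[y] for y in pred]
    g2, p2 = [GROUP2[y] for y in gold], [GROUP2[y] for y in pred]
    return {"acc_3class": accuracy_score(g3, p3),
            "acc_2class": accuracy_score(g2, p2),
            "f1_2class": f1_score(g2, p2)}
```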
From Table 3.2 we see that the neural network outperforms the baseline, albeit by a small margin, indicating that some learning has occurred but modestly. Model (ii), with only the EGG signal as input, has lower results than model (iii), which suggests that the shape of the signal as provided as input is either insufficient to make a prediction, or underexploited by the model. The best models (i, iv) have access to both a signal and the additional information (PeakDet O q dEGG and f 0 dEGG ). Unexpectedly, models (i) and (iv) have comparable scores, with a slight advantage for using the audio signal (iv) instead of the EGG signal (i).
This conclusion is indeed unexpected and not good news for those who, like us, work on the EGG signal, but it is actually not unreasonable. If we look back and compare all the available signals of syllables bearing a creaky tone (the item in Figure 3.3 is an example), we notice that, in many cases, the audio signal and its spectrogram already provide good evidence for the presence of creaky voice, often even more clearly visible than in the EGG signal with its series of discrete rails along the whole syllable. Nevertheless, the negative results obtained here by machine learning do not mean that the electroglottographic approach to phonetic analysis should be rejected or discredited; instead, they bring us back to the fundamental issue of what these signals really capture. Ultimately, neither of these signals is fully representative of what actually happens during speech articulation. They are all linear views of a non-linear phenomenon. They are therefore not in a mutually contradictory or mutually exclusive relationship, but can actually support each other in the analysis of complex phonetic characteristics. We chose to collect and analyze the EGG signal not because we think it is better than the acoustic signal, but in the hope that it might provide an additional or alternative approach to the unresolved limitations of audio signals for speech analysis. In this corpus, all the EGG signals were recorded and stored simultaneously with the acoustic signal. This allows us, after taking a big jump from manual processing to end-to-end processing, to step back and see the possibility of improving the current task, for instance by further examining the correlation between EGG and acoustic signals on the same parameters, to figure out how they could complement and reinforce each other in an automatic workflow.
Confusion matrix Going into details of the results, we present in Table 3.3 the confusion matrix of model (iv), which obtained the best results. Each column represents a combination of labels seen in the data (e.g., column 3/4 represents glottic cycles where PeakDet methods 3 and 4 gave the correct value.) We observe that labels 1 and 3, which are the least present in the data (and often give the same result as methods 2 and 4 respectively), are almost never predicted, which shows that the system does not discriminate between classes 1 and 2 on the one hand and classes 3 and 4 on the other hand. The most frequent errors are the difficulty in predicting classes 3, 4 and 0, where the model falls back on the most frequent class (2). This reflects to some extent what actually happened during the semi-automatic processing.
As mentioned in Section 2.3.3, among the 4 proposed options for O q dEGG verification in PeakDet, there are actually only two different methods: (i) "maxima" (detection of the local minimum on the signal in-between two closure peaks) in methods 1 and 2, and (ii) "barycentre" (analysis of the shape of the opening peaks and calculation of a barycentre of the detected 'peaks-within-the-peak', giving each of the peaks a coefficient proportional to its amplitude) in methods 3 and 4. Each method is then subdivided according to whether it was applied to the unsmoothed dEGG signal (method 1 or 3) or to the smoothed signal (method 2 or 4).
Since the opening peak of a glottal cycle is less clear than the closing peak, notably because multiple negative peaks can often be detected during the open phase, methods applied to the smoothed signal are preferable to the unsmoothed ones in order to avoid redundant detection of peaks caused by signal noise. This explains why methods 1 and 3 are much less often selected than methods 2 and 4. Even in cases where there is a clear opening peak, for which all four methods give identical or quasi-identical O q dEGG values, I kept selecting the methods on the smoothed signal to keep the results consistent.
The barycentre method on the unsmoothed dEGG signal (method 3) was barely selected, as it is the least appropriate in all cases, whether for a single peak or for multiple peaks. Cases like the examples in Figures 2.7a or 3.2 involve double or multiple opening peaks, but the distance between the peaks is small enough that it is worth keeping some of the values with the barycentre method. The local minimum method applied to the smoothed signal (i.e., method 2), for its part, is the most reliable option for eliminating all negative peaks that are not prominent enough. This explains why method 2 has been the most frequent option, as it is the safest in most cases, especially for "good" multiple opening peaks.
On the other hand, the local minimum method on the unsmoothed signal (method 1) was particularly used in a few cases involving creaky voice. During manual processing, we observed that in most cases, once creaky voice occurs, it frequently causes irregularities in the glottal cycles. The mess in the EGG signal, and consequently in the dEGG signal, does not in this case imply a bad signal due to a recording artifact; in fact, it faithfully reflects a messy vocal fold contact area. The loss of periodicity is one of the main factors that make the analysis of creaky voice in particular, and glottalization in general, a real challenge. Every single value is precious for assessing the phonetic characteristics of this non-modal voice quality. Therefore, the primary goal in the manual analysis of the data was to try to retain as much information as possible about the values in the tokens containing creaky voice, without of course falling into the "creaky voice lover" bias. This is to say that in this case (where the barycentre method is hopeless), it makes sense to consider picking either method 1 (minimum on the unsmoothed signal) or method 2 (minimum on the smoothed signal) to ensure that the most reliable values will be retained, while undergoing an honest verification. We know that creaky voice is the lowest voice quality, with low f 0 and O q . But the choice of method 1 or 2 was not driven by obtaining the lowest recorded values, but by obtaining the most reliable values of the creaky portion, where each cycle is visually verified to ensure that there is a prominent negative peak worth saving.
Figure 3.3 provides an example of a syllable bearing a creaky tone (data from speaker F3, syllable /na4/ "archery"). As per the principle just mentioned, in this case, since most glottal cycles have multiple opening peaks along the open phases, the two barycentre methods are not appropriate. The choice here is therefore only between the two methods based on the local minimum peak on the dEGG signal, i.e. method 1 or 2. Despite the messy glottal cycles, we can still clearly spot that in many cycles a good negative peak can be found among plenty of minor peaks. In a zoom on cycles 6 to 10, cycles 6, 7 and 9 clearly have a precise opening peak and should thus be retained, whereas cycles 8 and 10 also tend to have a major peak at about two thirds of the open phase, but a much less clear one, and should thus be eliminated. In general, for the whole syllable, method 1 often yields lower O q dEGG values than method 2, but in this case I chose method 2 over method 1 because it additionally gives a correct value on cycle 6, which has a good opening peak as verified in the previous step. The analysis and examination were carried out through such a delicate and meticulous process. It intrigued me how a neural network could learn to perform the same task.
Limitations In retrospect, it becomes clear that the task is too hard to be addressed with an end-to-end statistical tool used "off-the-shelf", as it were. Specifically, we identified two weaknesses in the design of the experiments we have presented.
First, the data includes both target syllables and frame syllables from carrier sentences. Since carrier sentences remain stable (quasi-identical) across items, the model might be biased towards easier predictions. For a closer look at this issue, we need to refer to the total number of each component of the corpus, as summarized in Figure 2.2. Out of the total of 660 items for each speaker's data, the number of processed items is 264 for the target syllables and 396 for the frame syllables. Each target syllable was repeated twice in isolation and twice in a carrier sentence (2.1). The frame syllables of the carrier sentence are composed of only three different words that are repeated to carry each of the 132 target syllables, i.e. each frame word is also repeated 132 times. It is therefore obvious that this data is highly repetitive and simple. The most diverse part is the target words, which belong to 12 minimal sets and 3 minimal pairs, i.e., within the sets, they are completely or almost exactly the same in terms of syllables, and differ only in tone. This is apparently not an ideal corpus for training automatic machine translation or transcription tasks. Indeed, this corpus was one of the test subjects for the Persephone tool within the framework of the project on phonemic transcription of low-resource languages carried out by [START_REF] Wisniewski | Phonemic transcription of lowresource languages: To what extent can preprocessing be automated?[END_REF] and, as anticipated, with only one hour of training, the results were rather good. In this pilot study exploring the capability of machine learning approaches for the analysis of electroglottographic signals, despite the simplicity of the corpus, we still obtained a negative result. That shows how challenging this task is.
If the first limitation comes from the inherent design of the data that cannot be changed because the starting point of the study is not initially set for an end-to-end approach, the second limitation comes from the flaw in communication between, on the one hand, the linguists who own and understand the data and, on the other hand, the scientist who takes over the data for training in the neural network model. In detail, the manual annotation process makes 2 decisions: (i) a decision on the best PeakDet method (at the level of the syllable) (ii) a decision about whether to keep or discard the O q dEGG (a decision made at the level of glottal cycle). In contrast, our model only makes predictions at the level of the cycle, thus making the task harder. Aggregating cycle-level predictions into a syllable-level prediction (e.g. through voting) might lead to better results. This limitation is due to a misunderstanding about the process of verifying O q dEGG using PeakDet. The most effective way to explain this would be to make a demo of what actually happened during the semi-automatic processing using PeakDet.
However, due to a technical problem (PeakDet was not working properly on my current computer), I was not able to do this demo directly, and apparently written explanations, even if very specific and detailed, are still not sufficient for data scientists to receive and understand the data thoroughly. As a consequence, the process of machine learning (learning patterns of statistical association between data and labels) clearly appears not to have followed the path that I (somewhat naively) expected, namely: kindly following in my footsteps, emulating the analytic process that I had adopted.
Such a finding is by no means new: features extracted by end-to-end models should not be expected to match the features commonly used in 'manual' workflows. Thus, [START_REF] Gendrot | Analyse phonétique de la variation inter-locuteurs au moyen de réseaux de neurones convolutifs: voyelles seules et séquences courtes de parole[END_REF] report that a convolutional neural network that is (moderately) successful at speaker classification makes use of spectral and temporal features that are not related to classical phonetic measures in any straightforward way. Such observations open into an exciting mid-term research program: studying statistical models to see how they encapsulate relevant information, and how this information can shed light on the languages found in the datasets used at training. No more will be said on this topic here, though, as studies on explicability of neural models are best based on highly successful models, whereas the experiments reported here did not yet reach the level of practical usefulness (which would be technically reflected in low error rates).
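As a concrete illustration of the syllable-level aggregation evoked above for the second limitation, a simple majority vote over the cycle-level predictions could be sketched as follows (this is only one possible aggregation rule, not the one used in the experiments):

```python
from collections import Counter

def syllable_vote(cycle_predictions):
    """Aggregate per-cycle method predictions (labels 0-4) into one method per syllable."""
    votes = Counter(label for label in cycle_predictions if label != 0)
    if not votes:                  # no computable Oq anywhere in the syllable
        return 0
    return votes.most_common(1)[0][0]
```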
Study of phonation types
The multidisciplinary work described here has implications for the phonetic study of phonation types. Indeed, the set of results obtained here highlights the fact that the process of assessing the reliability of the open quotient estimate in view of the signal is a non-trivial task. This provides an opportunity to return to the fundamental question of what the glottal open quotient reflects, and how it is interpreted. The open quotient is a linear projection of the non-linear phenomena involved in phonation, and therefore obviously cannot by itself be a sufficient descriptor of the various types of phonation: whispered voice, pressed voice, creaky voice; among the commonly used references, see in particular [START_REF] Laver | The phonetic description of voice quality[END_REF]. Specifically in the case of creaky voice, we observe a good correlation between the presence of creaky voice and the spectral slope information reflected in the spectrogram (a higher intensity in the upper half of the spectrogram, from 5 to 10 kHz, than in the lower half). By contrast, the glottal open quotient does not clearly show the same degree of correlation with creaky voice. Thus, the audio signal can sometimes be a better guide than the open quotient for detecting creaky voice. The EGG signal contains other information -most obviously the fundamental frequency -which provides clearer indications than O q regarding the phonatory type.
Acoustically, it is known that the open quotient is neither the only nor the most important of the glottal source parameters. The fact that it can be estimated from the EGG signal has undoubtedly led to it being given particular importance, in comparison, for example, with the speed quotient, which is not easily accessible for estimation. Thus, the height of the positive peak on the derivative (corresponding to the moment of glottal closure at the beginning of the cycle), DECPA (for Derivative-Electroglottographic Closure Peak Amplitude: Michaud 2004a), does not constitute a means to estimate the speed quotient robustly and reliably. Clearly, O q would benefit from being included in machine learning experiments in which it would be integrated into a larger set of acoustic parameters, in order to characterize various types of phonation in an objective and complete way.
Perspectives
In future work, we consider developing and evaluating regression models that directly predict O q dEGG as a continuous variable, instead of predicting it indirectly through a classification task.
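In such a regression formulation, only the output head and the loss change with respect to the classifier of Section 3.2; a minimal sketch of the modified head (keeping the same per-cycle representation v_c) is:

```python
import torch.nn as nn

class OqRegressionHead(nn.Module):
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(in_dim), nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # Oq is a ratio in (0, 1)
        )

    def forward(self, v_cycle):
        return self.net(v_cycle).squeeze(-1)      # predicted Oq per cycle

loss_fn = nn.MSELoss()   # trained against the retained Oq values of column K
```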
Another future direction that we consider investigating consists in the replacement of MFCC features by available multilingual pretrained acoustic models [START_REF] Conneau | Unsupervised cross-lingual representation learning for speech recognition[END_REF], which have rapidly become mainstream in speech processing (including applications to fieldwork corpora: [START_REF] Guillaume | Fine-tuning pre-trained models for Automatic Speech Recognition, experiments on a fieldwork corpus of Japhug (Trans-Himalayan family)[END_REF].
Contents
1.1 Context: "Computational Language Documentation by 2025"
1.2 Goals
1.3 Electroglottography: principles, analysis methods, and prospects for automation of analysis processes
1.3.1 Principles
1.3.2 Methods to analyze the electroglottographic signal
1.3.3 Criticism of estimation of the glottal open quotient by electroglottography
2.1 The corpus: content (speech materials) and status of the data
2.2 Note on some irregularities in the corpus
2.3 Manual workflow
2.3.1 General process
2.3.2 Fundamental frequency measurement
2.3.3 Glottal open quotient measurement
3 O q dEGG estimation using machine learning methods
3.1 Data Description and Preprocessing
3.2 Neural Network Architecture
3.3 Experiments
3.4 Results and discussion
Figure 1.3: Basic procedure of creating an electroglottographic wavegram. Reproduction from Herbst, Fitch, and Švec (2010, p. 3072).
Figure 2.1: Calculation of the size of the total corpus.
Figure 2.2: A brief summary view of the corpus.
Figure 2.3: Basic procedure of data processing.
Figure 2.4: A schematic representation of data processing with PeakDet.
Figure 2.7: Examples needing manual visual verification: cases where multi-opening peaks appear but are worth retaining with different methods.
Figure 3.1: Neural network architecture (centered on a single glottal cycle). In practice, the bi-LSTM encodes a full syllable.
Figure 3.2: O q dEGG verification: case where methods 3 and 4 (barycentre of the peaks on the dEGG signal) have been considered to be chosen.
Table A.1: Speech materials: eight minimal sets and four near-minimal sets that contrast for five tones in smooth syllables.
Table A.2: Speech materials: three minimal pairs that contrast the two tones of checked syllables.
List of Figures
1.1 Example of EGG and dEGG signals
1.2 Continuum of phonation types
1.3 Procedure of creating an EGG wavegram
2.1 Calculation of the size of the total corpus
2.2 A brief summary view of the corpus
2.3 Basic procedure of data processing
2.4 A schematic representation of data processing with PeakDet
2.5 Examples of imprecise closing peaks
2.6 Examples of precise and imprecise opening peaks on DEGG signal
2.7 Example shows the role of manual O q dEGG verification: "good" multi-opening peaks
3.1 The neural network architecture
3.2 O q dEGG verification: case where methods of barycenter must be chosen
3.3 O q dEGG verification: case where methods of local minimum must be chosen
List of Tables
3.1 Statistics on the corpus
3.2 Final results on development and test sets (%)
3.3 Confusion matrix
A.1 Speech materials: minimal sets for tones in smooth syllables
A.2 Speech materials: minimal sets for tones in checked syllables
A.3 Summary of the processing status of the entire corpus
Figure 1.1 illustrates visually a synchronization of EGG and dEGG signals. In the case of clear signals, one closing peak is clearly visible for each cycle, corresponding to the peak increase in vocal fold contact area and considered as the beginning of the glottal closed phase, and one (less salient) opening peak, corresponding to a peak in the decrease in vocal fold contact area.
Figure 1.1: Example of EGG and dEGG signals with indication of glottis closure and opening (top: EGG; middle: dEGG; bottom: smoothed dEGG; closing and opening peaks and the closed/open phases are marked for each cycle; time scale in ms). Reproduced with permission from the author, Alexis Michaud.
Table 3.1: Statistics on the corpus. Since several of the 4 methods implemented in PeakDet may sometimes output the same O q dEGG values, some cycles may have several correct labels; as a result, the sum of percentages is higher than 100%.

Section                            Train             Dev              Test             Complete corpus
Number of syllables                9050              1913             2011             12974
Number of glottal cycles           295753            62350            65727            423830
Label distribution
0 = No O q dEGG computable         69237 (23.41%)    14583 (23.39%)   14670 (22.32%)   98490 (23.24%)
1 = maxima without smoothing       60527 (20.47%)    13227 (21.21%)   13461 (20.48%)   87215 (20.58%)
2 = maxima with smoothing          137461 (46.48%)   29909 (47.97%)   30386 (46.23%)   197756 (46.66%)
3 = barycentre without smoothing   42603 (14.40%)    7870 (12.62%)    10102 (15.37%)   60575 (14.29%)
4 = barycentre with smoothing      93583 (31.64%)    18775 (30.11%)   21723 (33.05%)   134081 (31.64%)
In the 12 minimal sets, there are 8 complete minimal sets and 4 near-minimal sets. "Pairs that show segments in nearly identical environments, such as azure/assure or author/either, are called near-minimal pairs. They help to establish contrasts where no minimal pairs can be found." [START_REF] Dobrovolsky | Phonology: the function and patterning of sounds[END_REF].
Part of this chapter is adapted from[START_REF] Nguyễn | Apprentissage profond pour l'estimation du quotient ouvert à partir du signal électroglottographique[END_REF].
All data sets of the 20 speakers are available in the Pangloss Collection (https://pangloss.cnrs.fr/corpus/Mường). The audio and EGG signals are both freely accessible and downloadable under the CC BY-NC-SA 3.0 license, in the spirit of the open data movement in phonetic research [START_REF] Garellek | Toward open data policies in phonetics: What we can gain and how we can avoid pitfalls[END_REF].
The data of this example is available here: https://doi.org/10.24397/pangloss-0006761#W90
Acknowledgments
The work presented here is funded by the French-German project "Computational Language Documentation by 2025 / La documentation automatique des langues à l'horizon 2025" (CLD 2025, ANR-19-CE38-0015).
Abbreviations
Detailed information on the speech material and the total corpus
A.1 Speech material
Tables A.1 and A.2 provide full detail about the minimal sets and pairs. The tables include:
• First column: The numbering of minimal sets (from 1 to 12) and minimal pairs (from 1 to 3).
• Second column: The numbering of target syllables, labeled as "UID" (for "Unique Identifier") because this number constitutes the unique identifier of target syllables. This number is used in the annotation of audio files, and in data processing down the line.
• Third column: The target syllables. These constitute the actual speech material of the recording session. In other words, the speakers were asked to pronounce these monosyllabic morphemes (roots).
• Fourth column: The full form of the target words from which monosyllables were extracted, in cases where the usual form of the word at issue is disyllabic. This point will be elaborated on below.
• Fifth-sixth-seventh columns: The translations in English, French and Vietnamese, respectively.
This information is important to know in order to be able to retrieve the content of the data later, as it is encoded as labels in the annotation list.
A.2 Corpus status |
04121467 | en | [
"info",
"math",
"phys"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/tel-04121467/file/111424_HADJADJ_2023_archivage.pdf | Machine Learning predictive models have been applied to many fields and applications so far. The majority of these learning algorithms rely on labeled training data which may be expensive to obtain as they require labeling by an expert. Additionally, with the new storage capabilities, large amounts of unlabeled data exist in abundance.
In this context, the development of new frameworks to learn efficient models from a small set of labeled data, together with a large amount of unlabeled data, is a crucial
emphasis of the current research community. Achieving this goal would significantly elevate the state-of-the-art machine intelligence to be comparable to or surpass the human capability of learning to generalize concepts from very few labeled examples. Semi-supervised learning and active learning are two ongoing active research subdomains that aim to achieve this goal.
In this thesis, we investigate two directions in machine learning theory for semisupervised and active learning. First, We are interested in the generalization properties of a self-training algorithm using halfspaces with explicit mislabel modeling. We propose an iterative algorithm to learn a list of halfspaces from labeled and unlabeled training data, in which each iteration consists of two steps, exploration and pruning.
We derive a generalization bound for the proposed algorithm under a Massart noise mislabeling model. Second, we propose a meta-approach for pool-based active learning strategies in the context of multi-class classification tasks, which relies on the proposed concept of learning on Proper Topological Regions (PTR) with an underlying smoothness assumption on the metric space. PTR allows the pool-based active learning strategies to obtain a better initial training set than random selection and increase the training sample size during the rounds while operating in a low-budget regime scenario. Experiments carried out on various benchmarks demonstrate the efficiency of our proposed approaches for semi-supervised and active learning compared to state-of-the-art methods.
A third contribution of the thesis concerns the development of practical deeplearning solutions in the challenging domain of Transmission Electron Microscopy i (TEM) for material design. In the context of orientation microscopy, ML-based approaches still need to catch up to traditional techniques, such as template matching or the Kikuchi technique, when it comes to generalization performance over unseen orientations and phases during training. This is due mainly to the limited experimental data about the studied phenomena for training the models. Nevertheless, it is a realistic and practical constraint, especially for narrow-domain applications where actual data are not widely available. Some successful attempts have been made to use unsupervised learning techniques to gain more insight into the data, but clustering information does not solve the orientation microscopy problem. To this end, we propose a multi-task learning framework based on neural architecture search for fast automation of phase and orientation determination in TEM images.
Résumé
Les modèles prédictifs d'apprentissage automatique ont été appliqués à de nombreux domaines et applications jusqu'à présent. La majorité de ces algorithmes d'apprentissage reposent sur des données d'apprentissage étiquetées qui peuvent être coûteuses à obtenir car elles nécessitent l'étiquetage par un expert. De plus, avec les nouvelles capacités de stockage, une grande quantité de données non étiquetées existe en abondance. Dans ce contexte, le développement de nouveaux cadres pour apprendre des modèles efficaces à partir d'un petit ensemble de données étiquetées, ainsi qu'une grande quantité de données non étiquetées est un accent crucial de la communauté de recherche actuelle. Atteindre cet objectif élèverait considérablement l'état de l'art de l'intelligence artificielle pour être comparable ou surpasser la capacité humaine sur comment apprendre à généraliser des concepts à partir de très peu d'exemples étiquetés. L'apprentissage semi-supervisé et l'apprentissage actif sont deux sous-domaines de recherche actifs en cours qui visent à atteindre cet objectif.
Dans cette thèse, nous étudions deux directions de la théorie de l'apprentissage automatique pour l'apprentissage semi-supervisé et actif. Premièrement, nous nous intéressons aux propriétés de généralisation d'un algorithme d'auto-apprentissage utilisant des demi-espaces avec une modélisation explicite des erreurs d'étiquetage. Nous proposons un algorithme itératif pour apprendre une liste de demi-espaces à partir de données d'apprentissage étiquetées et non étiquetées, dans lequel chaque itération consiste en deux étapes, l'exploration et l'élagage. Nous dérivons une borne de généralisation pour l'algorithme proposé sous un modèle d'étiquetage de bruit de Massart. Deuxièmement, nous proposons une méta-approche pour les stratégies d'apprentissage actif basées sur des pools dans le contexte de tâches de classification multi-classes, qui s'appuie sur le concept proposé d'apprentissage sur les régions topologiques propres (RTP) avec une hypothèse sous-jacente de lissage sur l'espace métrique. Le TRP permet aux stratégies d'apprentissage actif basées sur le pool d'obtenir un meilleur ensemble d'entraînement initial que la sélection aléatoire et d'augmenter la taille de l'échantillon d'entraînement pendant les tours tout en fonciii tionnant dans un scénario de régime à petit budget. Des expérimentations menées sur différents benchmarks démontrent l'efficacité de nos approches proposées pour l'apprentissage semi-supervisé et actif par rapport aux méthodes de l'état de l'art. Une troisième contribution de la thèse concerne le développement de solutions pratiques d'apprentissage en profondeur dans le domaine difficile de la microscopie électronique à transmission (TEM) pour la conception de matériaux. Dans le contexte de la microscopie d'orientation, les approches basées sur ML doivent encore rattraper les techniques traditionnelles, telles que l'appariement de modèles ou la technique de Kikuchi, en ce qui concerne les performances de généralisation sur des orientations et des phases inconnu lors de l'apprentissage. Cela est dû principalement au peu de données expérimentales sur les phénomènes étudiés pour l'entraînement des modèles. Néanmoins, il s'agit d'une contrainte réaliste et pratique, en particulier pour les applications à domaine étroit où les données réelles ne sont pas largement disponibles. Certaines tentatives réussies ont été faites pour utiliser des techniques d'apprentissage non supervisées pour mieux comprendre les données, mais le regroupement des informations ne résout pas le problème de la microscopie d'orientation. À cette fin, nous proposons un cadre d'apprentissage multi-tâches basé sur la recherche d'architecture neuronale pour l'automatisation rapide de la détermination de la phase et de l'orientation dans les images TEM.
List of Figures
Average balanced classification accuracy and standard deviation of dif-
ferent pool-based active learning strategies and budgets on protein dataset, using random forest estimator over 20 stratified random splits.
3.3 Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on banknote dataset, using random forest estimator over 20 stratified random splits. 3.4 Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on coil-20 dataset, using random forest estimator over 20 stratified random splits.
Average balanced classification accuracy and standard deviation of
different pool-based active learning strategies and budgets on isolet dataset, using random forest estimator over 20 stratified random splits.
3.6 Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on pendigits dataset, using random forest estimator over 20 stratified random splits.
Introduction
In this chapter, we will introduce our study by first presenting its context in Section 1.1; then by describing the main application we considered in Material Science that is the Transmission Electron Microscopy in Section 1.2, and the motivation of using Machine Learning for this task in Section 1.3. We will finally present the structure of this thesis in Section 1.4 and give our personal publications and those that have been submitted for this work.
Context
This thesis is being written as part of the "Multidisciplinary Institute in Artificial Intelligence's" (MIAI) 1 Magnet chair. MIAI's mission is to establish a center of excellence in AI for research in Grenoble, where scientists from different research domains may meet and form new alliances. Magnet Chair's purpose is to create new learning frameworks that incorporate contextual knowledge in training models that explore the multidimensional design space of materials. The development of new materials is at the heart of any technological transition, many of which are on the horizon: lighter transportation (more efficient structures), energy production (renewable materials), circular economy (recyclable materials), resource crisis (substitution of critical chemical species), gas capture and release (CO2, toxic gases). The vastness of the materials design space presents a critical opportunity to combine high-throughput materials exploration tactics based on experimental and simulation strategies with powerful artificial intelligence capabilities. In this regard, the objective is to create novel materials with optimum functionality to address the industrial difficulties posed by future societal restrictions. The functions that are being sought include structural, chemical, and physical qualities, as well as those linked to safety, ecology, recycling, low cost, and accessibility. The quest for novel materials with specified qualities remains highly empirical, typically directed by intuition and trial and error.
Beyond that, data-to-knowledge methodologies in materials research are particularly promising. The chair has used novel Machine Learning (ML) methodologies to produce genuine materials based on specified attributes (local structure, microstructure, thermodynamic and mechanical properties), anticipate new phases, and the atomic structure of materials.
The goal of my thesis is to use context to increase the effectiveness of learning models in two ways: first, by utilizing the structure of the data by using unlabeled instances in the training of the models, and secondly, by considering related tasks to the primary one in the training phase.
The learning paradigms that we take into consideration in this way are self-training, active learning, and multi-task learning; the next sections give a quick introduction to these frameworks. In active learning, two settings are usually distinguished: in the pool-based setting, the learning algorithm is presented with all the unlabeled examples at once in a pool, while in the stream-based setting, the unlabelled examples are presented in a stream, where each example is sent individually to the learning algorithm [START_REF] Settles | Active learning literature survey[END_REF]. In Chapter 3, we propose a study on pool-based active learning algorithms.
Self-training
Transfer learning
In transfer learning, the knowledge of an already trained ML model is applied to a different but related problem, so that learning starts with some patterns already learned from solving a related task [START_REF] Yang | Transfer Learning[END_REF]. In Chapter 4, we show how to use TL in our applicative contributions to material science.
Multi-task learning
Multi-task learning (MTL) is another subfield of ML in which multiple learning tasks are solved at the same time by the learning algorithm. It has many denominations, such as joint learning, learning to learn, and learning with auxiliary tasks, etc., but they all share the same principle, improving the performance of multiple tasks by learning them jointly rather than solving them separately [127]. In Chapter 4, we design a use case for MTL and show its benefits for our applicative contributions to material science.
Figure 1.3: Multi-task learning framework in ML.
Application to Material Design: Transmission Electron Microscopy (TEM)
Transmission Electron Microscopy (TEM) is a particular type of microscopy that uses an electron beam traversing a thin sample (of the order of 100 nm) to characterize its microstructure (nature, orientation and spatial distribution of phases) at high magnification (down to the nanometer resolution or less). TEM has many applications and observation modes in a number of different fields, such as life sciences, nanotechnology, biological and material research, industry, etc. In Figure 1.4, we depict the general process of a TEM experiment under the particular observation mode considered in the present work, namely scanning transmission electron microscopy (STEM) accompanied by Automated Crystal Orientation Mapping (ACOM). In this mode, the sample is scanned (in 2D) by a small electron beam, generating a 2D map of typically 10 × 10 µm2 ; on each point of the map the diffraction diagram is acquired, which results in a 4D dataset [START_REF] Carter | Transmission Electron Microscopy: Diffraction, Imaging, and Spectrometry[END_REF]. Further analysis of the diffraction diagrams allows to generate many different information. For example, a virtual brightfield image (VBF) of the scanned area can be drawn by plotting the intensity of the transmitted spot More advanced interpretation requires the analysis of all the diffraction diagrams.
Because it is resource intensive, this is performed in an offline fashion after the experiment's end. The algorithms designed to analyze these data are versions of the template matching algorithm [START_REF] Brunelli | Template Matching Techniques in Computer Vision: Theory and Practice[END_REF], where the objective is to find the best match to each
Motivation
As previously mentioned, effectively reducing the training cost of learning algorithms in terms of labeled examples for ML is of utmost importance. It will contribute directly to the spread of ML in narrow-domain applications where publicly available data collections are in their premise growth and other applications where the cost of labeling is such that having extensive training collections for Deep-Learning (DL) is simply unrealistic. Therefore, the main aspect of this thesis was not only to implement well-known strategies from the previously mentioned sub-fields of ML, which aim to solve this problem but also to contribute to these subdomains by proposing in Chapter 2, and Chapter 3 novel approaches and algorithms with solid theoretical foundations. TEM is a good example of a narrow-domain application with relatively scarce publicly available datasets. Most of the matching algorithms designed to analyze TEM data, with millions of image to retrieve, have a high time complexity by definition, which implies that in the standard TEM workflow, the data analysis is performed offline after the experiment's end. ML approaches have a clear advantage over these algorithms because the model's prediction time is instantaneous, which enables online solutions during the TEM experiment. However, there are practical challenges and constraints to take into consideration for the development of ML solutions in this application:
The limited amount of available data to train ML models.
The algorithms should generalize to unseen orientations during training.
The TEM data is dependent on the experiment's setting and microscope.
TEM data exhibits a high frequency of duplicates, reducing the training data size even further.
Chapter 4 presents a detailed investigation of what DL has to offer in order to solve these challenges and proposes a DL model for the real-time analysis of TEM data.
Thesis structure
The rest of the thesis consists of the following:
Chapter 3 uses topological clustering to provide a meta-approach for pool-based active learning algorithms in low-budget regime scenarios. We demonstrate how different active-learning algorithms might profit from this technique in order to operate on limited budgets and handle the cold-start problem in a cohesive manner.
Chapter 4 shows how we can successfully use DL to automate the analysis of TEM data collections to achieve real-time prediction during the TEM experiment.
Finally, Chapter 5 concludes our study in general and offers some future prospects.
Personal References
Under Review
[HDAL22] Lies Hadjadj, Alexis Deschamps, Massih-Reza Amini, and Sana Louichi. Deep learning for automated phase and orientation determination in tem maps with ceramic microstructures. Materials Characterization Journal,
2022.
[HDMA22] Lies Hadjadj, Emilie Devijver, Rémi Molinier, and Massih-Reza Amini.
Pool-based active learning with topological clustering. Machine Learning Journal, 2022.
Chapter 2
Self-Training of Halfspaces with Generalization Guarantees under Massart Mislabeling Noise Model
We investigate the generalization properties of a self-training algorithm with halfspaces. The approach learns a list of halfspaces iteratively from labeled and unlabeled training data, in which each iteration consists of two steps: exploration and pruning. In the exploration phase, the halfspace is found sequentially by maximizing the unsigned-margin among unlabeled examples and then assigning pseudo-labels to those that have a distance higher than the current threshold. The pseudo-labeled examples are then added to the training set, and a new classifier is learned. This process is repeated until no more unlabeled examples remain for pseudo-labeling. In the pruning phase, pseudo-labeled samples that have a distance to the last halfspace greater than the associated unsigned-margin are then discarded. We prove that the misclassification error of the resulting sequence of classifiers is bounded and show that the resulting semi-supervised approach never degrades performance compared to the classifier learned using only the initial labeled training set. Experiments carried out on a variety of benchmarks demonstrate the efficiency of the proposed approach compared to state-of-the-art methods. This chapter is based on the following papers [HALD22? ].
Introduction
In recent years, several attempts have been made to establish a theoretical foundation for semi-supervised learning. These studies are mainly interested in the generalization ability of semi-supervised learning techniques [START_REF] Rigollet | Generalization error bounds in semi-supervised classification under the cluster assumption[END_REF][START_REF] Maximov | Rademacher complexity bounds for a penalized multi-class semi-supervised algorithm[END_REF] and the utility of unlabeled data in the training process [START_REF] Castelli | On the exponential value of labeled samples[END_REF][START_REF] Singh | Unlabeled data: Now it helps, now it doesn[END_REF][START_REF] Li | Towards Making Unlabeled Data Never Hurt[END_REF]. The majority of these works are based on the concept called compatibility in [START_REF] Balcan | An augmented PAC model for semisupervised learning[END_REF], and try to exploit the connection between the marginal data distribution and the target function to be learned. The common conclusion of these studies is that unlabeled data will only be useful for training if such a relationship exists.
The three key types of relations considered in the literature are cluster assumption, manifold assumption, and low-density separation [130,[START_REF] Chapelle | Semi-Supervised Learning[END_REF]. The cluster assumption states that data contains homogeneous labeled clusters, and unlabeled training examples allow to recognize these clusters. In this case, the marginal distribution is viewed as a mixture of class conditional distributions, and semi-supervised learning has been shown to be superior to supervised learning in terms of achieving smaller finitesample error bounds in some general cases, and in some others, it provides a faster rate of error convergence [START_REF] Castelli | On the exponential value of labeled samples[END_REF][START_REF] Rigollet | Generalization error bounds in semi-supervised classification under the cluster assumption[END_REF][START_REF] Maximov | Rademacher complexity bounds for a penalized multi-class semi-supervised algorithm[END_REF][START_REF] Singh | Unlabeled data: Now it helps, now it doesn[END_REF]. In this line, [START_REF] Ben | Does unlabeled data provably help? worstcase analysis of the sample complexity of semi-supervised learning[END_REF] showed that the access to the marginal distribution over unlabeled training data would not provide sample size guarantees better than those obtained by supervised learning unless one assumes very strong assumptions about the conditional distribution over the class labels. Manifold assumption stipulates that the target function is in a low-dimensional manifold. [START_REF] Niyogi | Manifold regularization and semi-supervised learning: Some theoretical analyses[END_REF] establishes a context through which such algorithms can be analyzed and potentially justified; the main result of this study is that unlabeled data may help the learning task in certain cases by defining the manifold. Finally, low-density separation states that the decision boundary lies in low-density regions. A principal way, in this case, is to employ a margin maximization strategy which results in pushing away the decision boundary from the unlabeled data [30, ch. 6]. Semi-supervised approaches based on this paradigm mainly assign pseudo-labels to high-confident unlabeled training examples with respect to the predictions and include these pseudo-labeled samples in the learning process [START_REF] Vittaut | Learning classification with both labeled and unlabeled data[END_REF]. However, [START_REF] Chawla | Learning from labeled and unlabeled data: An empirical study across techniques and domains[END_REF] investigated empirically the problem of label noise bias introduced during the pseudo-labeling process in this case and showed that the use of unlabeled examples could have a minimal gain or even degraded performance, depending on the generalization ability of the initial classifier trained over the labeled training data.
In this chapter, we study the generalization ability of a self-training algorithm with halfspaces that operates in two steps. In the first step, halfspaces are found iteratively by maximizing the unsigned margin over the unlabeled examples and pseudo-labeling those whose margin exceeds the current threshold; in the second step, pseudo-labeled examples whose distance to the last halfspace exceeds the associated threshold are pruned from the training set. In the remainder of the chapter, Section 2.2 presents the definitions and the learning objective. In Section 2.3, we present in detail the adaptation of the self-training algorithm for halfspaces. Section 2.4 presents a bound over the misclassification error of the classifier outputted by the proposed algorithm and demonstrates that this misclassification error is upper-bounded by the misclassification error of the fully supervised halfspace. In Section 2.5, we present experimental results, and we conclude this work in Section 2.6.
Framework and Notations
We consider binary classification problems where the input space X is a subset of R d , and the output space is Y = {-1, +1}. We study learning algorithms that operate in hypothesis space H d = {h w : X → Y} of centered halfspaces, where each h w ∈ H d is a Boolean function of the form h w (x) = sign(⟨w, x⟩), with w ∈ R d such that ∥w∥ 2 ≤ 1.
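For readers who prefer code to notation, the following minimal NumPy sketch makes the centred halfspace, its unsigned margin, and the projection onto the unit ℓ2-ball concrete; the function names are illustrative and not part of the original text.

import numpy as np

def halfspace_predict(w, X):
    """h_w(x) = sign(<w, x>), with labels in {-1, +1}."""
    return np.where(X @ w >= 0.0, 1, -1)

def unsigned_margin(w, X):
    """|<w, x>|, the confidence measure used later for pseudo-labeling."""
    return np.abs(X @ w)

def project_unit_ball(w):
    """Project w onto the constraint set {w : ||w||_2 <= 1}."""
    norm = np.linalg.norm(w)
    return w / norm if norm > 1.0 else w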
Our analysis succeeds the recent theoretical advances in robust supervised learning of polynomial algorithms for training halfspaces under large margin assumption [START_REF] Diakonikolas | Distributionindependent pac learning of halfspaces with massart noise[END_REF][START_REF] Montasser | Efficiently learning adversarially robust halfspaces with noise[END_REF][START_REF] Diakonikolas | Hardness of learning halfspaces with massart noise[END_REF], where the label distribution has been corrupted with the Massart noise model [START_REF] Massart | Risk bounds for statistical learning[END_REF].
Learning objective
Given S_ℓ and X_u, our goal is to find a learning algorithm that outputs a hypothesis h_w ∈ H_d such that, with high probability, the misclassification error P_{(x,y)∼D}[h_w(x) ≠ y] is minimized, and to show that, with high probability, the performance of such an algorithm never degrades compared to the halfspace learned from the labeled examples alone.

By considering the indicator function 1_π, defined as 1_π = 1 if the predicate π is true and 0 otherwise, and writing η_w(x) for the conditional probability P_{y|x}[h_w(x) ≠ y] that h_w misclassifies x, we prove in the following lemma that the misclassification probability of a halfspace over examples with an unsigned margin greater than a threshold γ > 0 is bounded by the same quantity η̄, 0 < η̄ < 1, that upper-bounds the misclassification error of these examples.

Lemma 2.1. For all h_w ∈ H_d, if there exist η̄ ∈ ]0, 1[ and γ > 0 such that P_{x∼D_x}[|⟨w, x⟩| ≥ γ] > 0 and E_{x∼D_x}[(η_w(x) − η̄) 1_{|⟨w,x⟩| ≥ γ}] ≤ 0, then P_{(x,y)∼D}[h_w(x) ≠ y | |⟨w, x⟩| ≥ γ] ≤ η̄.
Proof. For all hypotheses h_w in H_d, we know that the error achieved by h_w in the region of margin γ from w satisfies E_{x∼D_x}[(η_w(x) − η̄) 1_{|⟨w,x⟩| ≥ γ}] ≤ 0; by rewriting the expectation, we obtain
E_{x∼D_x}[η_w(x) 1_{|⟨w,x⟩| ≥ γ}] − η̄ P_{x∼D_x}[|⟨w, x⟩| ≥ γ] ≤ 0.
We then have E_{x∼D_x}[η_w(x) 1_{|⟨w,x⟩| ≥ γ}] / P_{x∼D_x}[|⟨w,x⟩| ≥ γ] ≤ η̄, and the result follows from the equality
P_{(x,y)∼D}[h_w(x) ≠ y | |⟨w, x⟩| ≥ γ] = E_{x∼D_x}[η_w(x) 1_{|⟨w,x⟩| ≥ γ}] / P_{x∼D_x}[|⟨w, x⟩| ≥ γ]. □
Suppose that there exists a pair (w̄, γ̄) minimizing
(w̄, γ̄) ∈ argmin_{w∈R^d, γ≥0} E_{x∼D_x}[η_w(x) 1_{|⟨w,x⟩| ≥ γ}] / P_{x∼D_x}[|⟨w, x⟩| ≥ γ].   (2.1)
By defining η̂ as
η̂ = inf_{w∈R^d, γ≥0} E_{x∼D_x}[η_w(x) 1_{|⟨w,x⟩| ≥ γ}] / P_{x∼D_x}[|⟨w, x⟩| ≥ γ],
the following inequality holds:
η̂ ≤ inf_{w∈R^d} E_{x∼D_x}[η_w(x) 1_{|⟨w,x⟩| ≥ 0}] / P_{x∼D_x}[|⟨w, x⟩| ≥ 0] = η*.
This inequality paves the way for the following claim, which is central to the self-training strategy described in the next section.
Claim 2.2. Suppose that there exists a pair (w̄, γ̄) satisfying the minimization problem (2.1) with P_{x∼D_x}[|⟨w̄, x⟩| ≥ γ̄] > 0. Then P_{(x,y)∼D}[h_{w̄}(x) ≠ y | |⟨w̄, x⟩| ≥ γ̄] ≤ η*.
Proof. The requirements of Lemma 2.1 are satisfied with (w, γ) = (w̄, γ̄) and η̄ = η̂. The claim is then proved using the conclusion of Lemma 2.1 together with the fact that η̂ ≤ η*. □
The claim above demonstrates that, for examples generated by the probability distribution D, there exists a region in X on either side of a margin γ̄ from the decision boundary defined by the solution w̄ of Eq. (2.1), where the misclassification probability of the corresponding halfspace is upper-bounded by the optimal misclassification error η*. This result is consistent with semi-supervised learning studies that consider the margin as an indicator of confidence and search for the decision boundary in low-density regions [START_REF] Joachims | Transductive inference for text classification using support vector machines[END_REF][START_REF] Grandvalet | Semi-supervised learning by entropy minimization[END_REF][START_REF] Massih | A transductive bound for the voted classifier with an application to semi-supervised learning[END_REF][START_REF] Feofanov | Transductive bounds for the multi-class majority vote classifier[END_REF][START_REF] Usunier | Multiview semisupervised learning for ranking multilingual documents[END_REF].
Problem resolution
We use a block coordinate minimization method for solving the optimization problem (2.1). This strategy consists in first finding a halfspace with parameters w that minimizes Eq. (2.1) with a threshold γ = 0, and then, by fixing w, finding the threshold γ for which Eq. (2.1) is minimum. We resolve this problem using the following claim, which links the misclassification error η_w and the perceptron loss ℓ_p : Y × Y → R_+, defined by ℓ_p(y, h_w(x)) = −y⟨w, x⟩ 1_{y⟨w,x⟩ ≤ 0}.
Claim 2.3. For a given weight vector w, we have:
E_{x∼D_x}[|⟨w, x⟩| η_w(x)] = E_{(x,y)∼D}[ℓ_p(y, h_w(x))].   (2.2)
Proof. For a fixed weight vector w, the perceptron loss is zero whenever y⟨w, x⟩ > 0 and equals |⟨w, x⟩| whenever h_w(x) ≠ y. Taking the expectation and conditioning on x, we then have
E_{(x,y)∼D}[ℓ_p(y, h_w(x))] = E_{x∼D_x}[|⟨w, x⟩| P_{y|x}[h_w(x) ≠ y]] = E_{x∼D_x}[|⟨w, x⟩| η_w(x)]. □
Lemma 2.4. Let ŵ be a minimizer of the margin-weighted error E_{x∼D_x}[|⟨w, x⟩| η_w(x)] over {w : ∥w∥_2 ≤ 1}, and let R = max_{x∼D_x} ∥x∥_2. Then, for any γ > 0,
(γ/R) E_{x∼D_x}[η_ŵ(x) | |⟨ŵ, x⟩| ≥ γ] ≤ min_{w, ∥w∥_2 ≤ 1} E_{x∼D_x}[η_w(x) | |⟨w, x⟩| ≥ γ] ≤ E_{x∼D_x}[η_ŵ(x) | |⟨ŵ, x⟩| ≥ γ].
Proof. Let w̄ be a minimizer of the middle term. From the condition |⟨ŵ, x⟩| ≥ γ in the expectation, we have
γ E_{x∼D_x}[η_ŵ(x) | |⟨ŵ, x⟩| ≥ γ] ≤ E_{x∼D_x}[|⟨ŵ, x⟩| η_ŵ(x) | |⟨ŵ, x⟩| ≥ γ].
Applying the definition of ŵ to the right-hand side of the above inequality gives
γ E_{x∼D_x}[η_ŵ(x) | |⟨ŵ, x⟩| ≥ γ] ≤ E_{x∼D_x}[|⟨w̄, x⟩| η_w̄(x) | |⟨w̄, x⟩| ≥ γ].
Using the Cauchy-Schwarz inequality and the definition of R, we get
γ E_{x∼D_x}[η_ŵ(x) | |⟨ŵ, x⟩| ≥ γ] ≤ R E_{x∼D_x}[η_w̄(x) | |⟨w̄, x⟩| ≥ γ].
Then, from the definition of w̄, we know that
R E_{x∼D_x}[η_w̄(x) | |⟨w̄, x⟩| ≥ γ] ≤ R E_{x∼D_x}[η_ŵ(x) | |⟨ŵ, x⟩| ≥ γ].
Dividing the two inequalities above by R gives the result. □
Lemma 2.4 guarantees that the perceptron loss approximates the misclassification error more accurately for examples that lie at a comparable distance from the halfspace. This result paves the way for our implementation of the self-training algorithm.

Algorithm 1 Self-Training with Halfspaces
1: Input: S_ℓ = (x_i, y_i)_{1≤i≤ℓ}, X_u = (x_i)_{ℓ+1≤i≤n}, p = 5 the number of threshold tests.
2: Set k ← 0, S^(0) = S_ℓ, U^(0) = X_u, ω = |S^(0)|/p, L = [].
3: while |S^(k)| ≥ ℓ do
4:   Let R̂_{S^(k)}(w) = (1/|S^(k)|) Σ_{(x,y)∈S^(k)} ℓ_p(y, h_w(x)).
5:   Run projected SGD on R̂_{S^(k)}(w) to obtain w^(k) such that ∥w^(k)∥_2 ≤ 1.
6:   Order S^(k) by decreasing order of margin from w^(k).
7:   Set a window of indices I = [ω, 2ω, ..., pω].
8:   Find t = argmin_{i∈I} (1/|S^(k)_{≥i}|) Σ_{(x,y)∈S^(k)_{≥i}} 1_{h_{w^(k)}(x) ≠ y}.
9:   Set γ^(k) to the margin of the sample at position I[t].
10:  Let U^(k) = {x ∈ X_u : |⟨w^(k), x⟩| ≥ γ^(k)}.
11:  if |U^(k)| > 0 then
12:    S_u^(k) = {(x, y) : x ∈ U^(k) ∧ y = sign(⟨w^(k), x⟩)}
13:    S^(k+1) ← S^(k) ∪ S_u^(k)
14:    X_u ← X_u \ U^(k)
15:  else
16:    L = L ∪ [(w^(k), γ^(k))]
17:    S^(k+1) = {(x, y) ∈ S^(k) : |⟨w^(k), x⟩| < γ^(k)}
18:  end if
19:  Set k ← k + 1, ω = |S^(k)|/p.
20: end while
21: Output: the list L of pairs (w^(k), γ^(k)).
Self-Training with Halfspaces
Given S ℓ and X u drawn i.i.d. from a distribution D corrupted with O(f, D x , η (0) ). Algorithm 1 learns iteratively a list of halfspaces L m = [(w (1) , γ (1) ), ..., (w (m) , γ (m) )] with each round consisting of exploration and pruning steps. The goal of the exploration phase is to discover the halfspace with the highest margin on the set of unlabeled samples that are not still pseudo-labeled. This is done by first, learning a halfspace that minimizes the empirical surrogate loss of R D (w) = E (x,y)∼D [ℓ p (y, h w (x))] over a set of labeled and already pseudo-labeled examples S (k) from S ℓ and X u :
min_w  R̂_{S^(k)}(w) = (1/|S^(k)|) Σ_{(x,y)∈S^(k)} ℓ_p(y, h_w(x))   s.t. ∥w∥_2 ≤ 1.   (2.3)
At round k = 0, we have S^(0) = S_ℓ. Once the halfspace with parameters w^(k) is found, a threshold γ^(k) is set, defined as the highest unsigned margin in S^(k) such that the empirical loss over the set of examples in S^(k) with unsigned margin above γ^(k) is the lowest. In the pseudo-code of the algorithm, S^(k)_{≥i} refers to the subset of examples in S^(k) whose unsigned margin is greater than or equal to that of the example at position i (a multiple of the window size ω). Unlabeled examples x ∈ X_u that are not yet pseudo-labeled are assigned labels, i.e., y = sign(⟨w^(k), x⟩) iff |⟨w^(k), x⟩| ≥ γ^(k). These pseudo-labeled examples are added to S^(k) and removed from X_u, and a new halfspace minimizing Eq. (2.3) is found. Examples in S^(k) are supposed to be misclassified by the oracle O(f, D_x, η^(k)) following Definition 2.1, with the parameter function η^(k) referring to the conditional probability of corruption in S^(k), defined as η^(k)(x) = P_{y∼S^(k)_y(x)}[f(x) ≠ y] ≤ η^(k). Once a halfspace with parameters w^(k) and threshold γ^(k) is found such that there are no more unlabeled samples having an unsigned margin larger than γ^(k), the pair (w^(k), γ^(k)) is added to the list L_m, and samples from S^(k) having an unsigned margin above γ^(k) are removed (pruning phase). Recall that γ^(k) is the largest threshold above which the misclassification error over S^(k) increases.
In detail. The self-training algorithm takes as input the labeled set S ℓ , the unlabeled set X u and p, which refers to the number of tests for threshold estimation, set to 5. After finding the weight vector w (k) at round k, with projected SGD (step 5), we order the labeled set S (k) (with S (0) = S ℓ ) by decreasing order of unsignedmargin to w (k) . The threshold γ (k) is defined as the largest margin such that the error of examples in S (k) with an unsigned margin higher than γ (k) increases (step 9). At this stage, observations x ∈ X u with an unsigned margin greater than γ (k)
(steps 12-13) are pseudo-labeled, added to the labeled set S^(k), and removed from the unlabeled set. This exploration phase of finding a halfspace with the largest threshold γ^(k) is repeated until there are no more unlabeled samples with an unsigned margin larger than this threshold. After this phase, the pruning phase begins by keeping only the examples in S^(k) with an unsigned margin strictly less than γ^(k)
(step 17). The parameters of the halfspace and the corresponding threshold are added to the list of selected classifiers L m , and the procedure is repeated until the size of the labeled set becomes less than ℓ.
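To complement the description above, here is a condensed NumPy sketch of the exploration/pruning loop. The helper names (fit_halfspace, choose_threshold, self_train_halfspaces), the plain subgradient step size, and the fixed number of SGD steps are illustrative simplifications, not the exact implementation used in the experiments.

import numpy as np

def fit_halfspace(X, y, steps=2000, seed=0):
    """Projected SGD on the empirical perceptron loss, keeping ||w||_2 <= 1 (step 5)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    w_sum = np.zeros_like(w)
    for t in range(1, steps + 1):
        i = rng.integers(len(X))
        if y[i] * (X[i] @ w) <= 0:          # subgradient of -y<w,x> 1[y<w,x> <= 0]
            w = w + y[i] * X[i] / np.sqrt(t)
        norm = np.linalg.norm(w)
        if norm > 1.0:                       # projection onto the unit l2-ball
            w = w / norm
        w_sum += w
    return w_sum / steps                     # averaged iterate

def choose_threshold(w, X, y, p=5):
    """Pick gamma among p margin windows so that the error above gamma is smallest (steps 6-9)."""
    margins = np.abs(X @ w)
    order = np.argsort(-margins)             # decreasing unsigned margin
    window = max(1, len(X) // p)
    best_err, best_gamma = np.inf, margins[order[0]]
    for i in range(window, p * window + 1, window):
        i = min(i, len(X))
        idx = order[:i]
        err = np.mean(np.where(X[idx] @ w >= 0, 1, -1) != y[idx])
        if err < best_err:
            best_err, best_gamma = err, margins[order[i - 1]]
    return best_gamma

def self_train_halfspaces(X_l, y_l, X_u, p=5):
    """Exploration / pruning loop of Algorithm 1; returns the list L_m of (w, gamma) pairs."""
    S_x, S_y, U = X_l.copy(), y_l.astype(float).copy(), X_u.copy()
    ell, L_m = len(X_l), []
    while len(S_x) >= ell:
        w = fit_halfspace(S_x, S_y)
        gamma = choose_threshold(w, S_x, S_y, p)
        mask = np.abs(U @ w) >= gamma if len(U) > 0 else np.zeros(0, dtype=bool)
        if mask.any():                        # exploration: pseudo-label confident points
            S_x = np.vstack([S_x, U[mask]])
            S_y = np.concatenate([S_y, np.where(U[mask] @ w >= 0, 1.0, -1.0)])
            U = U[~mask]
        else:                                 # pruning: freeze (w, gamma), drop covered points
            L_m.append((w, gamma))
            keep = np.abs(S_x @ w) < gamma
            S_x, S_y = S_x[keep], S_y[keep]
    return L_m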
To classify an unknown example x, the prediction of the first halfspace with normal vector w (i) in the list L m , such that the unsigned-margin |⟨w (i) , x⟩| of x is higher or equal to the corresponding threshold γ (i) , is returned. By abuse of notation, we note that the prediction for x is L m (x) = h w (i) (x). From Claim 2.2, we know that the misclassification error of this halfspace on the region where x lies is bounded by the optimal misclassification error η * . If no such halfspace exists, the observation is classified using the prediction of the first classifier h w (1) that was trained over all the labeled and the pseudo-labeled samples without pruning; i.e.,
L_m(x) = h_{w^(1)}(x). Overall,
L_m(x) = h_{w^(i)}(x) if ∃ i such that i = argmin_{1 ≤ k ≤ m : |⟨w^(k), x⟩| ≥ γ^(k)} (|⟨w^(k), x⟩| − γ^(k)), and L_m(x) = h_{w^(1)}(x) otherwise.
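A corresponding prediction routine, again only a sketch under the same illustrative naming as the training sketch above: w_first stands for the halfspace w^(1), i.e., the first pair stored in L_m, trained on all labeled and pseudo-labeled samples before any pruning.

import numpy as np

def predict_with_list(L_m, w_first, x):
    """Return L_m(x): the stored halfspace whose region covers x with the smallest slack
    |<w, x>| - gamma, falling back on h_{w^(1)} when no stored region covers x."""
    best_w, best_slack = None, np.inf
    for w, gamma in L_m:
        slack = abs(w @ x) - gamma
        if 0 <= slack < best_slack:
            best_w, best_slack = w, slack
    w_used = best_w if best_w is not None else w_first
    return 1 if w_used @ x >= 0 else -1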
Corruption noise modeling and Generalization guarantees
In the following, we relate the process of pseudo-labeling to the corruption noise model O(f, D_x, η^(k)) for all pseudo-labeling iterations k in Algorithm 1; then we present a bound over the misclassification error of the classifier L_m outputted by the algorithm and demonstrate that this misclassification error is upper-bounded by the misclassification error of the fully supervised halfspace.
Claim 2.5. Let S^(0) = S_ℓ be a labeled set drawn i.i.d. from D = O(f, D_x, η^(0)) and U^(0) = X_u an initial unlabeled set drawn i.i.d. from D_x. For all iterations k ∈ [K] of Algorithm 1, the active labeled set S^(k) is drawn i.i.d. from D = O(f, D_x, η^(k)), where the corruption noise distribution η^(k) is bounded by:
∀k ∈ [K],  E_{x∼D_x}[η^(k)(x) | x ∈ S^(k)] ≤ max_{j∈[K]} η^(j).
Proof. We know that ∀k ∈ [K], S^(k) ⊆ S^(0) ∪ ⋃_{i=0}^{k−1} S_u^(i), where S_u^(i) is the set of pseudo-labeled pairs of examples x from U^(i), and S_u^(i) = ∅ for the iterations i ∈ [K] when no examples are pseudo-labeled. Then the noise distribution η^(k) satisfies, for all k ∈ [K]:
E_{x∼D_x}[η^(k)(x) 1_{x∈S^(k)}] = E_{x∼D_x}[η^(k)(x) 1_{x∈S^(k)∩S^(0)}] + Σ_{i=0}^{k−1} E_{x∼D_x}[η^(k)(x) 1_{x∈S^(k)∩S_u^(i)}].
If we condition on x ∈ S^(k), we obtain for all k ∈ [K]:
E_{x∼D_x}[η^(k)(x) | x ∈ S^(k)] = P[x ∈ S^(0) | x ∈ S^(k)] E_{x∼D_x}[η^(0)(x) | x ∈ S^(k) ∩ S^(0)] + Σ_{i=0}^{k−1} P[x ∈ S_u^(i) | x ∈ S^(k)] E_{x∼D_x}[η_{w^(i)}(x) | x ∈ S^(k) ∩ S_u^(i)].
This equation includes the initial corruption of the labeled set S^(0) = S_ℓ in addition to the noise injected by each classifier h_{w^(k)} at each round k when pseudo-labeling occurs. Now that we have modeled the process of pseudo-labeling, the result follows from the facts that E_{x∼D_x}[η^(0)(x)] ≤ η^(0); that ∀k ∈ [K], E_{x∼D_x}[η_{w^(k)}(x)] ≤ η^(k); and that
P_{x∼D_x}[x ∈ S^(0) | x ∈ S^(k)] + Σ_{i=0}^{k−1} P_{x∼D_x}[x ∈ S_u^(i) | x ∈ S^(k)] = P_{x∼D_x}[x ∈ S^(0) ∪ ⋃_{i=0}^{k−1} S_u^(i) | x ∈ S^(k)] ≤ 1. □
We can now state our main contribution that bounds the generalization error of the classifier L m outputted by Algorithm 1 with respect to the optimal misclassification error η * in the case where projected SGD is used for the minimization of Eq. (2.3).
Note that in this case the time complexity of the algorithm is polynomial with respect to the dimension d, the upper bound on the bit complexity of examples, the total number of iterations, and the upper bound on SGD steps.
Theorem 2.6. Let S_ℓ be a set of i.i.d. samples of size ℓ drawn from a distribution D = O(f, D_x, η^(0)) on R^d × {−1, +1}, where f is an unknown concept function and η^(0) an unknown parameter function bounded by 1/2, and let X_u be an unlabeled set of size u drawn i.i.d. from D_x. Algorithm 1 terminates after K iterations and outputs a non-proper classifier L_m of m halfspaces such that, with high probability:
P_{(x,y)∼D}[L_m(x) ≠ y] ≤ η* + max_{k∈I} ε^(k) + π_{K+1},
where I is the set of rounds k ∈ [K] at which the halfspaces were added to L_m, ε^(k) is the projected SGD convergence error rate at round k, and π_{K+1} a negligible unaccounted mass of D_x.
The proof of Theorem 2.6 is based on the following property of projected SGD.
Lemma 2.7 (From [START_REF] Duchi | Introductory lectures on stochastic convex optimization[END_REF]). Let R̂ be a convex function. Consider the projected SGD iteration, which starts from w^(0) and computes at each step
w^(t+1/2) = w^(t) − α^(t) g^(t),   w^(t+1) = argmin_{w : ∥w∥_2 ≤ 1} ∥w − w^(t+1/2)∥_2,
where g^(t) is a stochastic subgradient such that E_{x∼D_x}[g(w, x)] ∈ ∂R̂(w) = {g : R̂(w′) ≥ R̂(w) + ⟨E[g], w′ − w⟩ for all w′} and E_{x∼D_x}[∥g(w, x)∥_2^2] ≤ M^2. For any ε, δ > 0, if the projected SGD is executed T = Ω(log(1/δ)/ε^2) times with a step size α^(t) = 1/(M√t), then for w̄ = (1/T) Σ_{t=1}^T w^(t) we have, with probability at least 1 − δ:
E_{x∼D_x}[R̂(w̄)] ≤ min_{w, ∥w∥_2 ≤ 1} E_{x∼D_x}[R̂(w)] + ε.
Proof of Theorem 2.6. We consider the steps of Algorithm 1. At iteration k of the while loop, we consider the active training set S (k) consisting of examples not handled in previous iterations.
We first note that the algorithm terminates after at most K iterations. From the fact that at every iteration k, we discard a non-empty set from S (k) when we do not pseudo-label or from U (k) when we pseudo-label, and that the empirical distributions S ℓ and X u are finite sets. By the guarantees of Lemma 2.7, running SGD (step 4) on RS (k) for T = Ω(log(1/δ)/ϵ 2 ) steps, we obtain a weight vector w (k) such that with probability at least 1 -δ:
E x∼Dx [ RS (k) (w (k) )] ≤ min w,∥w∥ 2 ≤1 E x∼Dx [ RS (k) (w)] + ϵ (k) ,
from Claim 2.3, we derive with high probability:
E x∼Dx [|⟨w (k) , x⟩|η w (k) (x)] ≤ min w,∥w∥ 2 ≤1 E x∼Dx [|⟨w, x⟩|η w (x)] + ϵ (k) .
Then the margin γ^(k) is estimated by minimizing Eq. (2.1) given w^(k). Following Lemma 2.4, with R^(k) = max_{x∼D_x} ∥x∥_2 the radius of the truncated support of the marginal distribution D_x at iteration k, we can assume that γ^(k)/R^(k) ≈ 1, ∀k ∈ [K]. One may argue that this assumption is unrealistic knowing that the sequence (γ^(k))_{k=1}^m decreases overall; but, as we show in the supplementary material, we prove in Theorem B.1 that under some convergence guarantees on the pairs {(w^(k), w^(k+1))}_{k=1}^{m−1}, the sequence {R^(k)}_{k=1}^m decreases as a function of γ^(k) with respect to k. As a result, we can derive with high probability:
E_{x∼D_x}[η_{w^(k)}(x) | |⟨w^(k), x⟩| ≥ γ^(k)] ≤ min_{w, ∥w∥_2 ≤ 1} E_{x∼D_x}[η_w(x) | |⟨w, x⟩| ≥ γ^(k)] + ε^(k),
Using this bound and the fact that for each round k only points with comparably large margins are considered, we can assume that the conditional covariance for these examples with an unsigned margin greater than γ^(k) satisfies Cov_{x∼D_x}(|⟨w^(k), x⟩|, η_{w^(k)}(x)) ≈ Cov_{x∼D_x}(|⟨w*, x⟩|, η_{w*}(x)), which implies that at round k:
E_{x∼D_x}[η_{w^(k)}(x) | |⟨w^(k), x⟩| ≥ γ^(k)] − E_{x∼D_x}[η_{w*}(x) | |⟨w^(k), x⟩| ≥ γ^(k)] ≤ ε̃^(k),
where ε̃^(k) = 3R^(k) / (2 √(P^(k)) E_{x∼D_x}[|⟨w*, x⟩| | |⟨w^(k), x⟩| ≥ γ^(k)]), ∀k ∈ [K].
From the statement of Claim 2.2, and given the pair (w^(k), γ^(k)), we obtain with high probability that at round k:
P_{(x,y)∼D}[h_{w^(k)}(x) ≠ y | |⟨w^(k), x⟩| ≥ γ^(k)] ≤ η* + ε^(k).   (2.4)
When the while loop terminates, we have accounted for m ≤ K halfspaces in the list L_m satisfying Eq. (2.4). For all k ∈ I, every classifier h_{w^(k)} in L_m has guarantees on an empirical distribution mass of at least κ̂ = min_{k∈I} P_{x∼S^(k)}[|⟨w^(k), x⟩| ≥ γ^(k)]; the DKW (Dvoretzky-Kiefer-Wolfowitz) inequality [START_REF] Dvoretzky | Asymptotic Minimax Character of the Sample Distribution Function and of the Classical Multinomial Estimator[END_REF] implies that the true probability mass κ = min_{k∈I} P_{x∼D_x}[|⟨w^(k), x⟩| ≥ γ^(k)] of this region is at least κ̂ − √(log(2/δ) / (2|S^(n)|)) with probability 1 − δ, where n = argmin_{k∈I} P_{x∼S^(k)}[|⟨w^(k), x⟩| ≥ γ^(k)].
The pruning phase in the algorithm ensures that these regions are disjoint for all halfspaces in L m , it follows that using the Boole-Fréchet inequality [START_REF] Boole | An Investigation of the Laws of Thought: On Which Are Founded the Mathematical Theories of Logic and Probabilities[END_REF] on the conjunctions of Eq. (2.4) overall rounds k ∈ [I], implies that L m classifies at least a (1 -mκ)-fraction of the total probability mass of D with guarantees of Eq. (2.4) with high probability, let π K+1 = P x∼Dx [x ∈ S (K+1) ] be the probability mass of the region not accounted by L m . We argue that this region is negligible from the fact that |S (K+1) | < ℓ and ℓ ≪ u, such that setting ϵ = max k∈I ϵ (k) + π K+1 provides the result. □ In the following, we show that the misclassification error of the classifier L m output of Algorithm 1 is at most equal to the error of the supervised classifier obtained over the labeled training set S ℓ , when using the same learning procedure. This result suggests that the use of unlabeled data in Algorithm 1 does not degrade the performance of the initially supervised classifier.
Theorem 2.8. Let S_ℓ be a set of i.i.d. samples of size ℓ drawn from a distribution D = O(f, D_x, η^(0)) on R^d × {−1, +1}, where f is an unknown concept function and η^(0) an unknown parameter function bounded by 1/2, and let X_u be an unlabeled set of size u drawn i.i.d. from D_x. Let L_m be the output of Algorithm 1 on input S_ℓ and X_u, and let h_{w^(0)} be the halfspace of the first iteration obtained from the empirical distribution S^(0) = S_ℓ. Then, with high probability:
P_{(x,y)∼D}[L_m(x) ≠ y] ≤ P_{(x,y)∼D}[h_{w^(0)}(x) ≠ y].
Proof. By the guarantees of Lemma 2.7, the classifier h_{w^(0)} obtained by running SGD on R̂_{S^(0)}, with projection onto the unit ℓ_2-ball, for P^(0) steps satisfies:
E_{(x,y)∼D}[Relu(−y⟨w^(0), x⟩)] − E_{(x,y)∼D}[Relu(−y⟨w*, x⟩)] ≤ 3 max_{x∈S_ℓ} ∥x∥_2 / √(P^(0)).
Let k be the iteration at which the first pair (w^(1), γ^(1)) is added to L_m. The first pruning phase in Algorithm 1 results in a set S^(k) ⊆ S_ℓ ∪ ⋃_{i=1}^{k−1} S_u^(i). Claim 2.5 ensures that the probability of corruption in the pseudo-labeled set ⋃_{i=1}^{k−1} S_u^(i) is bounded by max_{j∈[k]} η^(j) ≤ η* + ε.
In other words, the weight vector w^(1) is obtained from an empirical distribution that includes both the initial labeled set S_ℓ and a pseudo-labeled set from X_u. In particular, if this pseudo-labeled set is not empty, then its pseudo-labeling error is nearly optimal, which implies that
P_{(x,y)∼D}[h_{w^(1)}(x) ≠ y] ≤ P_{(x,y)∼D}[h_{w^(0)}(x) ≠ y].
Ultimately, L_m classifies a large fraction of the probability mass of D with nearly optimal guarantees (i.e., Eq. (2.4) in the proof of Theorem 2.6) and the rest using h_{w^(1)}, with a misclassification error at most equal to P_{(x,y)∼D}[h_{w^(0)}(x) ≠ y]. □
We prove the assumption admitted in the proof of Theorem 2.6 in the following theorem.
Theorem 2.9. Let S_ℓ be a set of i.i.d. samples of size ℓ drawn from a distribution D = O(f, D_x, η^(0)) on B_d × {−1, +1}, where f is an unknown concept function and η^(0) an unknown parameter function bounded by 1/2; let X_u be an unlabeled set of size u drawn i.i.d. from D_x; let L_m = [(w^(i), γ^(i))]_{i=1}^m be the list outputted by Algorithm 1 on input S_ℓ and X_u; and let α^(k) be the smallest angle between two consecutive halfspaces (w^(k), w^(k+1)) in L_m, for all k ∈ [m − 1]. We define (R^(k))_{k=1}^m as the sequence of bounds where each R^(k) is the bound on the margin |⟨w^(k), x⟩| over D_x when the pair (w^(k), γ^(k)) is obtained. We have, for all k in [m]:
R^(1) = 1;   R^(k+1) = sin(α^(k) + arcsin γ^(k)) if α^(k) ∈ [0, arccos γ^(k)], and R^(k+1) = 1 otherwise.
Proof. For k = 1, it is trivial to say that R (1) is the upper-bound for the margin distribution of w (1) using Cauchy-Schwarz inequality and given ∥w (1) ∥ 2 ≤ 1. Next, we suppose that the definition is true for k, and we will show in the following that the definition holds for k + 1, for that we will distinguish two different cases:
Case where α^(k) ∈ [0, arccos γ^(k)]. In Figure 2.1 (left), we show that if the halfspace with weight vector w^(k+1) does not deviate too much from w^(k), then we can express the upper bound R^(k+1) of its margin distribution on the truncated support of D_x as a function of γ^(k) and the angular deviation α^(k). Note that if α^(k) = 0, then R^(k+1) = γ^(k).
Case where α^(k) ∈ (arccos γ^(k), π/2]. In Figure 2.1 (right), we show that if the halfspace with weight vector w^(k+1) deviates strongly from w^(k), then the upper bound R^(k+1) of its margin distribution on the truncated support of D_x is equal to the radius of the unit ball. □
Figure 2.1: Case where α^(k) ∈ [0, arccos γ^(k)] (left) and α^(k) ∈ (arccos γ^(k), π/2] (right) for a pair (w^(k), w^(k+1)) in L_m.
For a pair (w^(k), γ^(k)) estimated in Algorithm 1 at iteration k ∈ [K], the conditional covariance Cov_{x∼D_x}(|⟨w^(k), x⟩|, η_{w^(k)}(x) | |⟨w^(k), x⟩| ≥ γ^(k)) of examples in D with an unsigned margin greater than or equal to γ^(k) from w^(k) is comparable to the conditional covariance Cov_{x∼D_x}(|⟨w*, x⟩|, η_{w*}(x) | |⟨w^(k), x⟩| ≥ γ^(k)) of these same examples. Similarly, the unsigned margin average of these examples with respect to w^(k), denoted E_{x∼D_x}(|⟨w^(k), x⟩| | |⟨w^(k), x⟩| ≥ γ^(k)), is also comparable to the unsigned margin average with respect to w*, denoted E_{x∼D_x}(|⟨w*, x⟩| | |⟨w^(k), x⟩| ≥ γ^(k)).
Empirical Results
We compare the proposed approach to state-of-the-art strategies developed over the three fundamental working assumptions in semi-supervised learning over ten publicly available datasets. We shall now describe the corpora and methodology.
Datasets. We mainly consider benchmark data sets from [START_REF] Chapelle | Semi-Supervised Learning[END_REF]. Some of these collections such as baseball-hockey, pc-mac and religion-atheism are binary classification tasks extracted from the 20-newsgroups data set. We used tf-idf representation for all textual data sets above. spambase is a collection of spam e-mails from the UCI repository [START_REF] Dua | UCI machine learning repository[END_REF]. one-two, odd-even are handwritten digits recognition tasks originally from optical recognition of handwritten digits database also from UCI repository, one-two is digits "1" versus "2"; odd-even is the artificial task of classifying odd "1, 3, 5, 7, 9" versus even "0, 2, 4, 6, 8" digits.
weather is a data set from Kaggle which contains about ten years of daily weather observations from many locations across Australia, and the objective is to classify the next-day rain target variable. We have also included data sets from extreme classification repository [START_REF] Bhatia | The extreme classification repository: Multi-label datasets and code[END_REF] mediamill2 and delicious2 by selecting the label which gives the best ratio in class distribution. The statistics of these data sets are given in Table 2.1.
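As an illustration, the newsgroups-based binary tasks can be reproduced along these lines with scikit-learn; the exact preprocessing used in the thesis may differ from this sketch.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

# baseball-hockey: binary task extracted from the 20-newsgroups collection
data = fetch_20newsgroups(subset="all",
                          categories=["rec.sport.baseball", "rec.sport.hockey"],
                          remove=("headers", "footers", "quotes"))
X = TfidfVectorizer().fit_transform(data.data)   # tf-idf representation
y = 2 * data.target - 1                          # map {0, 1} labels to {-1, +1}
print(X.shape, y.shape)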
Baseline methods. We implemented the halfspace, or Linear Threshold Function (LTF), using TensorFlow 2.0 in Python. Aside from Algorithm 1 (L_m), we ran a Support Vector Machine (SVM) [START_REF] Cortes | Support-vector networks[END_REF] with a linear kernel from the LIBLINEAR library [START_REF] Fan | Liblinear: A library for large linear classification[END_REF] as another supervised classifier. We compared results with a semi-supervised Gaussian naive Bayes model (GM) [START_REF] Chapelle | Semi-Supervised Learning[END_REF] from the scikit-learn library. The working hypothesis behind GM is the cluster assumption, stipulating that data contains homogeneous labeled clusters which can be detected using unlabeled training samples.
We also compared results with label propagation (LP) [131] which is a semi-supervised graph-based technique. We used the implementation of LP from the scikit-learn library. This approach follows the manifold assumption that the decision boundary is located on a low-dimensional manifold and that unlabeled data may be utilized to identify it. We also included entropy regularized logistic regression (ERLR) proposed by [START_REF] Grandvalet | Semi-supervised learning by entropy minimization[END_REF] from [START_REF] Krijthe | Rssl: Semi-supervised learning in r[END_REF]. This approach is based on low-density separation that stipulates that the decision boundary lies on low-density regions. In the implementation of [START_REF] Krijthe | Rssl: Semi-supervised learning in r[END_REF], the initially supervised classifier is a logistic regression that has a similar performance to the SVM classifier. We tested these approaches with relatively small labeled training sets ℓ ∈ {10, 50, 100}, and because labeled information is scarce, we used the default hyper-parameters for all approaches.
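For reference, a sketch of how the scikit-learn baselines can be instantiated with their default hyper-parameters; dense NumPy feature matrices and labels in {0, 1} are assumed (scikit-learn reserves -1 for unlabeled points), and the entropy-regularized logistic regression baseline, available as an R package, is omitted here.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.semi_supervised import LabelPropagation

def run_baselines(X_l, y01_l, X_u, X_test, y01_test):
    """Accuracy of the SVM, GM and LP baselines on a held-out test set."""
    scores = {}
    scores["SVM"] = LinearSVC().fit(X_l, y01_l).score(X_test, y01_test)
    scores["GM"] = GaussianNB().fit(X_l, y01_l).score(X_test, y01_test)
    X_all = np.vstack([X_l, X_u])                         # LP uses labeled + unlabeled data
    y_all = np.concatenate([y01_l, -np.ones(len(X_u), dtype=int)])
    scores["LP"] = LabelPropagation().fit(X_all, y_all).score(X_test, y01_test)
    return scores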
Experimental setup. Comparisons with these algorithms, exhibiting the generalization ability of the method, can provide a better understanding of the usefulness of unlabeled training data in the learning process.
Conclusion
In this chapter, we presented a first bound over the misclassification error of a selftraining algorithm that iteratively finds a list of halfspaces from partially labeled training data. Each round consists of two steps: exploration and pruning. The exploration phase aims to determine the halfspace with the largest margin and assign pseudo-labels to unlabeled observations with an unsigned margin larger than the discovered threshold. The pseudo-labeled instances are then added to the training set, and the procedure is repeated until there are no more unlabeled instances to pseudolabel. In the pruning phase, the last halfspace with the largest threshold is preserved, ensuring that there are no more unlabeled samples with an unsigned margin greater than this threshold and pseudo-labeled samples with an unsigned margin greater than the specified threshold are removed. Our findings are based on recent theoretical advances in robust supervised learning of polynomial algorithms for training halfspaces under large margin assumptions with a corrupted label distribution using the Massart noise model. We ultimately show that the use of unlabeled data in the proposed self-training algorithm does not degrade the performance of the initially supervised classifier. As future work, we are interested in quantifying the real gain of learning with unlabeled and labeled training data compared to a fully supervised scheme.
Chapter 3
Pool-Based Active Learning with Proper Topological Regions
Pool-based active learning methods are one of the most promising paradigms of active learning in solving the problem of annotation efficiency. One of the main criticism of these approaches is that they are supposed to operate under low-budget regimes where they can have an advantage over semi-supervised or self-supervised methods.
However, most of these approaches rely on the underlying trained estimator accuracy, which often has a higher sample complexity than the low-budget regime scenario.
Secondly, they commonly share the initial training set selection drawback, namely the cold-start problem. This chapter presents a meta-approach for pool-based active learning strategies in the context of multi-class classification tasks. Our approach relies on the proposed concept of learning on proper topological regions with an underlying smoothness assumption on the metric space. This allows us to increase the training sample size during the rounds while operating in a low-budget regime scenario. We show empirically on various real benchmark datasets that our approach dramatically improves the performance of uncertainty-based sampling strategies over the random selection, not only for the cold-start problem but overall the iterations in a low-budget regime. Furthermore, comparisons on the same benchmarks show that the performance of our approach is competitive to the state-of-the-art methods from the literature that address the cold-start problem in active learning. This chapter is based on the following paper [HDMA22].
Introduction
In recent years, machine learning has found gainful application in diverse domains.
However, it still has a heavy dependence on expensive labeled data: advances in cheap computing and storage have made it easier to store and process large amounts of unlabeled data, but labeling needs often to be done by humans or using costly tools. Therefore, there is a need to develop general domain-independent methods to learn models effectively from a large amount of unlabeled data at the disposal, along with a minimal amount of labeled data. Active learning specifically aims to detect the observations to be labeled to optimize the learning process and efficiently reduce the labeling cost. In this setting, learning occurs iteratively. At each round, the algorithms can interactively query a ground truth oracle to label unlabeled examples.
Then, after training, the algorithms proactively select the subset of examples to be labeled next from the pool of unlabeled data. The primary assumption behind the active learner algorithm concept is that machine learning algorithms could reach a higher level of performance while using a smaller number of training labels if they were allowed to choose the training data set [START_REF] Settles | Active learning literature survey[END_REF]. Most common and straightforward active learning approaches are iterative, also known as pool-based methods [START_REF] Lewis | Heterogeneous uncertainty sampling for supervised learning[END_REF][START_REF] Cohn | Improving generalization with active learning[END_REF],
where we first derive a model trained on a small random labeled subsample. Then, at each iteration, we choose unlabeled examples to query based on the predictions of the current model and a predefined priority score. These approaches show their limitations in low-budget regime scenarios from their need for a sufficient budget to learn a weak model [START_REF] Pourahmadi | A simple baseline for low-budget active learning[END_REF]. The literature has shown that for active learning to operate in a low-budget regime successfully, we need to introduce a form of regularization in training [START_REF] Guyon | Results of the active learning challenge[END_REF] usually found in other sub-domains, such as semi-supervised learning or self-learning [START_REF] Chapelle | Semi-Supervised Learning[END_REF]. Another line of work shows that the choice of the initial seed set in these approaches significantly impacts the end performance of their models [START_REF] Hu | Off to a good start: Using clustering to select the initial training set in active learning[END_REF][START_REF] Chen | Making your first choice: To address cold start problem in vision active learning[END_REF], also known as the cold-start problem in active learning. Our work is a step further in this direction. We propose a unified meta-approach for pool-based active learning methods to efficiently resolve these previously mentioned drawbacks and to enhance even further the performance of these methods while reducing the amount of queried examples.
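A generic pool-based loop of this kind fits in a few lines of Python. The sketch below is illustrative: the scoring function is passed in as a parameter, the oracle is simply the hidden ground-truth labels used in simulated experiments, the estimator is an arbitrary stand-in (logistic regression), and the seed set is assumed to contain at least two classes, which is exactly the cold-start issue discussed here.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pool_based_active_learning(X_pool, oracle, seed_idx, score_fn, budget, batch=1):
    """Iteratively fit a model, score the unlabeled pool, and query the top-scoring points."""
    labeled = list(seed_idx)
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]
    model = LogisticRegression(max_iter=1000)
    while len(labeled) < budget and unlabeled:
        model.fit(X_pool[labeled], oracle[labeled])
        probs = model.predict_proba(X_pool[unlabeled])
        ranked = np.argsort(-score_fn(probs))             # higher score = more informative
        picked = [unlabeled[i] for i in ranked[:batch]]
        labeled.extend(picked)                             # query the oracle for these points
        unlabeled = [i for i in unlabeled if i not in set(picked)]
    return model.fit(X_pool[labeled], oracle[labeled]), labeled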
Topological data analysis (TDA) [START_REF] Edelsbrunner | Computational Topology -an Introduction[END_REF][START_REF] Adams | Persistence images: A stable vector representation of persistent homology[END_REF] has been successful in various fields [START_REF] Xia | Persistent homology analysis of protein structure, flexibility, and folding[END_REF][START_REF] Rieck | Uncovering the topology of time-varying fmri data using cubical persistence[END_REF][START_REF] Jiang | Topological representations of crystalline compounds for the machine-learning prediction of materials properties[END_REF][START_REF] Aditi | Machine learning with persistent homology and chemical word embeddings improves prediction accuracy and interpretability in metalorganic frameworks[END_REF], including machine learning. A critical insight in TDA, a widely accepted assumption, is that data sets often have nontrivial topologies that should be exploited in their analysis [START_REF] Carlsson | The Shape of Data[END_REF]. TDA provides mathematically well-founded and flexible tools based on the algebraic topology to recover topological information from data to get insights from this hidden information. Many of these tools use persistent homology which allows studying the underlying topological information of a wide variety data types, even in high-dimension. To understand the underlying topology, we can construct Vietoris-Rips complexes [START_REF] Hausmann | On the Vietoris-Rips complexes and a cohomology theory for metric spaces[END_REF] from the data which are then inspected through persistent homology, topological information is then encoded with persistence modules and diagrams [START_REF] Edelsbrunner | Topological persistence and simplification[END_REF]. These topological insights can then be exploited to enhance the study of the structure of the data [START_REF] Singh | Topological methods for the analysis of high dimensional data sets and 3d object recognition[END_REF][START_REF] Lum | Extracting insights from the shape of complex data using topology[END_REF][START_REF] Carlsson | Topological approaches to deep learning[END_REF].
Related literature
Different attempts have been made to reduce the annotation burden of machine learning algorithms. We can refer to the remarkable advances made in semi-supervised learning [131, [START_REF] Grandvalet | Semi-supervised learning by entropy minimization[END_REF][START_REF] Belkin | Manifold regularization: A geometric framework for learning from labeled and unlabeled examples[END_REF][START_REF] Amini | Learning with Partially Labeled and Interdependent Data[END_REF][START_REF] Zhang | mixup: Beyond empirical risk minimization[END_REF][START_REF] Berthelot | Mixmatch: A holistic approach to semi-supervised learning[END_REF], these methods take as input a small set of labeled training data together with a large number of unlabeled examples. They introduce a form of consistency regularization to the supervised loss function by applying data augmentation using unlabeled observations [START_REF] Chapelle | Semi-Supervised Learning[END_REF]. Similarly, pool-based active learning methods also take as input a large number of unlabeled examples together with an expert in which they iteratively query to annotate data samples in order to maximize the model knowledge while minimizing the number of queries. Most commonly known pool-based strategies are uncertainty sampling [START_REF] Lewis | Heterogeneous uncertainty sampling for supervised learning[END_REF]129], margin sampling and entropy sampling strategies [START_REF] Settles | Active learning literature survey[END_REF]. Some proposed strategies rely on the query-by-committee approach [128, 120, 78], which learns an ensemble of models at each round. Query by bagging and query by boosting are two practical implementations of this approach that use bagging and boosting to build the committees [START_REF] Abe | Query learning strategies using boosting and bagging[END_REF]. There has been exhaustive research on how to derive efficient disagreement measures and query strategies from a committee, including vote entropy, consensus entropy, and maximum disagreement [START_REF] Settles | Active learning literature survey[END_REF], whereas [START_REF] Ali | Active learning with model selection[END_REF] introduces model selection for a committee. Some research focuses on solving a derived optimization problem for optimal query selection, in [START_REF] Roy | Toward optimal active learning through sampling estimation of error reduction[END_REF] they use Monte Carlo estimation of the expected error reduction on test examples. In contrast, other strategies employ Bayesian optimization on acquisition functions such as the probability of improvement or the expected improvement [START_REF] Garnett | Bayesian Optimization[END_REF], and in [START_REF] Auer | Finite-time analysis of the multiarmed bandit problem[END_REF], the authors propose to cast the problem of selecting the most relevant active learning criterion as an instance of the multi-armed bandit problem. Aside from the pool-based setting, we also find the stream-based setting for active learning in the literature [START_REF] Lughofer | Single-pass active learning with conflict and ignorance[END_REF][START_REF] Baram | Online choice of active learning algorithms[END_REF]. In this case, the learner has no access to any unlabeled examples. Instead, each unlabeled sample is given to him individually, and he queries its label if he finds it helpful. For instance, an example can be marked as valuable if the prediction is uncertain, and acquiring its label would remove this uncertainty.
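The three classical priority scores mentioned above can be computed directly from the class-membership probabilities predicted by the current model; a small sketch, usable as the score_fn of the loop sketched earlier (the function names are illustrative).

import numpy as np

def uncertainty_score(probs):
    """1 minus the probability of the most likely class."""
    return 1.0 - probs.max(axis=1)

def margin_score(probs):
    """Negative gap between the two most likely classes (small gap = high priority)."""
    part = np.sort(probs, axis=1)
    return -(part[:, -1] - part[:, -2])

def entropy_score(probs):
    """Shannon entropy of the predicted class distribution."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)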
Recent advances in active learning propose enhancing the pool-based methods by extracting knowledge from the distribution of unlabeled examples [START_REF] Bonnin | A cluster-based strategy for active learning of rgb-d object detectors[END_REF]. [START_REF] Perez | Cluster-based active learning[END_REF] propose to use clustering of unlabeled examples to boost the performance of pool-based active learners, with the expert annotating at each iteration cluster rather than single examples. Such strategy allows to reduce the annotation effort under the assumption that the cost of cluster annotation is comparable to single example labeling.
Similarly, [START_REF] Krempl | Clustering-based optimised probabilistic active learning (copal)[END_REF][START_REF] Krempl | Optimised probabilistic active learning (opal)[END_REF] propose to combine clustering with Bayesian optimization in the stream-based setting. In [START_REF] Urner | Plal: Cluster-based active learning[END_REF], the authors propose a procedure for binary domain feature sets that recovers the labeling of a set of examples while minimizing the number of queries, and they show that this routine reduces the label complexity of training learners. [START_REF] Yu | Active learning based constrained clustering for speaker diarization[END_REF] propose a two-stage clustering constraint in the active learning algorithm: a first exploration phase discovers representative clusters of all classes, and a post-clustering reassignment phase constrains the learner to the initial clusters found at the first stage. Finally, unsupervised algorithms such as clustering have shown promising results for addressing the cold-start problem in pool-based active learning strategies [START_REF] Hu | Off to a good start: Using clustering to select the initial training set in active learning[END_REF][START_REF] Chen | Making your first choice: To address cold start problem in vision active learning[END_REF].
The meta-method that we propose for pool-based active learning relies on notions from topological data analysis (TDA), which has recently brought exciting new ideas to the machine learning community. Primarily, topological clustering has been used in unsupervised learning [START_REF] Bonis | A fuzzy clustering algorithm for the modeseeking framework[END_REF][START_REF] Cabanes | A new topological clustering algorithm for interval data[END_REF]. Among these studies, ToMATo [START_REF] Frederic | Persistence-based clustering in riemannian manifolds[END_REF] is a mode-seeking clustering algorithm with a cluster merging phase guided by topological persistence [START_REF] Polterovich | Topological Persistence in Geometry and Analysis[END_REF].
Framework and topological considerations
Framework and notations
We consider multi-class classification problems such that the input space X is a subset of R^m, the output space Y = {1, ..., c} is a set of unknown classes of cardinality c ∈ N, c ≥ 2, the pair (X, d) is a metric space, and d : X × X → [0, ∞) is a fixed and known distance metric. Let S = {(x_i, y_i)}_{i=1}^n be the data sample of size n generated by some unknown distribution over X × Y with unknown labels y_i, and let P be the unknown marginal distribution over X. An active learner has access to the unlabeled sample set S_x = {x_i}_{i=1}^n and to the conditional concept function O : X → Y. We denote by C_R : {1, ..., n} → {1, ..., k} the partition function induced by a graph R = (S_x, E) with k connected components, k ∈ N*, and write C_R(x_i) = {x_j ∈ S_x : C_R(i) = C_R(j)} for the connected component of graph R that includes the node x_i. For all i ∈ {1, ..., n}, we define C_R^P : S_x → Y as the labeling function that propagates the label of the sample with the highest density in the connected component C_R(x_i) to all the examples of the same connected component in graph R:
C_R^P(x_i) = O(argmax_{x_j ∈ C_R(x_i)} P(x_j)), ∀i ∈ {1, ..., n}.
Note that C_R^P(·) is a crucial notion in our proposed meta-approach, allowing pool-based active learning strategies to operate in a low-budget regime.
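Concretely, C_R^P can be simulated with networkx: each connected component is labeled by querying the oracle on its highest-density point only, so the number of oracle calls equals the number of components. The function name and the use of networkx are illustrative assumptions, not part of the original formulation.

import networkx as nx
import numpy as np

def propagate_component_labels(graph, density, oracle):
    """For each connected component, query the densest vertex and copy its label to the others."""
    labels = {}
    for component in nx.connected_components(graph):       # graph nodes are indices 0..n-1
        nodes = list(component)
        densest = nodes[int(np.argmax(density[nodes]))]     # argmax of P over the component
        label = oracle(densest)                             # single oracle call per component
        for node in nodes:
            labels[node] = label
    return labels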
Definition 3.1 (Rips graph). Given a finite point cloud S_x = {x_i}_{i=1}^n from a metric space (X, d) and a real number δ ≥ 0, the Rips graph R_δ(S_x) is the graph with vertex set S_x whose edges correspond to the pairs of points x_i, x_j ∈ S_x such that d(x_i, x_j) < δ:
R_δ(S_x) = (V, E) : V = S_x, E = {(x_i, x_j) ∈ V², i ≠ j, d(x_i, x_j) < δ}.
Let the hypothesis class of Rips graphs over S_x be H_R = {R_δ(S_x), ∀δ ∈ R_+}.
Definition 3.2 (σ-Rips graph). Given a finite point cloud S_x = {x_i}_{i=1}^n from a metric space (X, d) and a real-valued function σ_δ : X² → R*_+ with parameters δ, the σ-Rips graph R^σ_δ(S_x) is the graph with vertex set S_x whose edges correspond to the pairs of points x_i, x_j ∈ S_x such that d(x_i, x_j) < σ_δ(x_i, x_j):
R^σ_δ(S_x) = (V, E) : V = S_x, E = {(x_i, x_j) ∈ V², i ≠ j, d(x_i, x_j) < σ_δ(x_i, x_j)}.
Let the hypothesis class of σ-Rips graphs over S_x be H_{R^σ} = {R^σ_δ(S_x), ∀δ ∈ dom(σ)}.
Classification algorithms in machine learning generally assume (often implicitly) that close samples in the considered metric space (X, d) are associated with similar labels, also known as the smoothness assumption. Given a data sample S, the Rips graph R_δ(S_x) encodes this notion to some extent with an appropriate threshold δ.
However, class similarity might be different over the metric space. For example, samples in high-density regions should be closer to being associated with similar labels than those in low-density regions. Consequently, we need to generalize the definition of the Rips graph to account for such cases, namely the σ-Rips graph R σ δ (S x ) with an appropriate threshold function σ δ , and parameters δ. In this work, we choose the following threshold function:
σ_{(a,r,t)} : X × X → R*_+,   (x, x') ↦ a (r − max(P(x), P(x')))^{1/t},   (3.1)
with t ∈ (0, 1] and a, r ∈ R*_+ such that r > max_x P(x). Note that the Rips graph is a special case of the σ-Rips graph when σ_δ is a constant function. Next, we compare the persistence of the Rips and the σ-Rips graphs.
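Both graphs are straightforward to build once pairwise distances and a density estimate are available. The sketch below uses networkx and the threshold function of Eq. (3.1); the kernel density estimate and its bandwidth are illustrative choices, not those used in the thesis.

import networkx as nx
import numpy as np
from scipy.spatial.distance import squareform, pdist
from sklearn.neighbors import KernelDensity

def rips_graph(X, delta):
    """Rips graph: connect x_i, x_j whenever d(x_i, x_j) < delta."""
    D = squareform(pdist(X))
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    G.add_edges_from((i, j) for i in range(len(X)) for j in range(i + 1, len(X)) if D[i, j] < delta)
    return G

def sigma_rips_graph(X, a, r, t):
    """sigma-Rips graph with sigma(x, x') = a * (r - max(P(x), P(x')))**(1/t), Eq. (3.1)."""
    D = squareform(pdist(X))
    P = np.exp(KernelDensity(bandwidth=1.0).fit(X).score_samples(X))   # density estimate P(x_i)
    assert r > P.max(), "Eq. (3.1) requires r > max_x P(x)"
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            sigma = a * (r - max(P[i], P[j])) ** (1.0 / t)
            if D[i, j] < sigma:
                G.add_edge(i, j)
    return G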
Persistence on superlevel sets
We refer to [START_REF] Edelsbrunner | Computational Topology -an Introduction[END_REF] for an introduction to topological persistence and its applications.
Let X be a Riemannian manifold, and P be a K-Lipschitz-continuous probability density function with respect to the Hausdorff measure for a real constant K ≥ 0.
A persistence module is a sequence of vector spaces X = (X_α)_{α∈R̄}, where R̄ = R ∪ {−∞, +∞}, together with linear maps X_β → X_α whenever α ≤ β (setting X_α → X_α as the identity). The persistence diagram DX of this persistence module is then a multi-set of points in R̄² containing the diagonal ∆ = {(x, x) | x ∈ R̄} and points (i, j) corresponding to the lifespan of some generator appearing at time i and dying at time j < i (here, according to the way our spaces are connected, we see α as running from +∞ down to −∞). The multiplicity of a point of D_0X is +∞ for the points of ∆, and a finite alternating sum of ranks of composed linear maps otherwise (see for example [133] for more details).
Persistence is often used with homology, and we refer the reader to [START_REF] Hatcher | Algebraic topology[END_REF] for more details. Here we will only use the 0-dimensional homology, which detects connected components. More precisely, if X is a topological space or a graph, H_0(X) will be the vector space spanned by the (path-)connected components of X. Moreover, if g : X → Y is a continuous map between spaces or a graph homomorphism between graphs, it induces a natural linear application g_* : H_0(X) → H_0(Y). Let us now look at meaningful examples for the rest of the chapter. The first is the 0-dimensional persistence homology induced by P. More precisely, for α ∈ R, we set F_α = P^{-1}([α, +∞]). If α ≤ β are two reals, then there is an inclusion F_β ⊆ F_α, and this induces linear maps H_0(F_β) → H_0(F_α). We will denote by D_0P the corresponding persistence diagram (the 0 is there to remind us that we are working with the 0-dimensional homology). Another persistence diagram we will consider is the one induced by the Rips graph.
Definition 3.3 (upper-star Rips filtration). Given a finite point cloud S x from a metric space (X , d) with the probability density function P, a real value δ ∈ R + , The upper-star Rips filtration of P, denoted R δ (S x , P), is the nested family of subgraphs of the Rips graph R δ (S x ) defined as R δ (S x , P) = (R δ (S x ∩ P -1 ([α, +∞])) α∈R .
Such a nested family of graphs gives rise, after applying the 0-dimensional homology, to a persistence module R_δ(S_x, P) and a persistence diagram D_0R_δ(S_x, P). A word of caution might be necessary: technical difficulties can occur when there are infinitely many points (counted with multiplicity) away from the diagonal in a persistence diagram. Fortunately, with an upper-star Rips filtration this is not an issue as long as the point cloud S_x is finite, since only a finite number of changes appear in the nested family of graphs. When considering the persistence diagram D_0P induced by a continuous function P, one can for example require that P has finitely many critical points to avoid problems. The bottleneck distance is an effective and natural proximity measure to compare persistence diagrams:
Definition 3.4 (bottleneck distance). Given two multi-subsets A_1, A_2 of R̄², a multi-bijection γ between A_1 and A_2 is a bijection
γ : ⨿_{p∈|A_1|} ⨿_{i=1}^{µ(p)} p → ⨿_{q∈|A_2|} ⨿_{i=1}^{µ(q)} q,
where, for i ∈ {1, 2}, |A_i| denotes the support of A_i and µ(p) denotes the multiplicity of point p ∈ |A_i|. The bottleneck distance d_B^∞(A_1, A_2) between A_1 and A_2 is the quantity
d_B^∞(A_1, A_2) = min_γ max_{p∈A_1} ∥p − γ(p)∥_∞.
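In practice, the 0-dimensional diagram of an upper-star filtration and the bottleneck distance between two diagrams can be computed with the gudhi library by negating the density, since gudhi filtrations are sublevel-based; this is a hedged sketch under that assumption, not the implementation used in the thesis.

import gudhi
import numpy as np

def upper_star_diagram(graph, density):
    """0-dimensional persistence diagram of the upper-star filtration of `density` on `graph`."""
    st = gudhi.SimplexTree()
    for v in graph.nodes:
        st.insert([v], filtration=-density[v])                        # vertex enters at level P(v)
    for u, v in graph.edges:
        st.insert([u, v], filtration=-min(density[u], density[v]))    # edge enters when both ends exist
    st.persistence()                                                  # compute persistence pairs
    return st.persistence_intervals_in_dimension(0)

# bottleneck distance between two diagrams (birth-death pairs):
# d = gudhi.bottleneck_distance(diag_rips, diag_sigma_rips)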
To show that two persistence diagrams are close to one another with respect to the bottleneck distance, one can use the following notion introduced in [START_REF] Chazal | Proximity of persistence modules and their diagrams[END_REF]. Definition 3.5 (ε-interleaved). Let X = (X_α)_{α∈R} and Y = (Y_α)_{α∈R} be two persistence modules and let D_0X and D_0Y be their associated persistence diagrams. We say that X and Y are strongly ε-interleaved if there exist two families of linear applications {φ_α : X_α → Y_{α−ε}}_{α∈R} and {ψ_α : Y_α → X_{α−ε}}_{α∈R} such that, for all α, β ∈ R with α ≤ β, the following diagrams, whenever they make sense, are commutative:
(The four commutative diagrams, not reproduced here, express the compatibility of the families φ and ψ with the structure maps X_β → X_α and Y_β → Y_α at the shifted indices β + ε and α − ε.)
The idea behind these diagrams is that every component appearing (resp. dying) in X at some time α must appear (resp. die) in Y within [α -ε, α +ε], and vice-versa.
The following lemma highlights how important this notion is. Lemma 3.1. Let X and Y be two persistence modules such that D 0 X and D 0 Y have only finitely many points away from the diagonal, and let ε > 0. If X and Y are strongly ε-interleaved, then D 0 X and D 0 Y are at a distance at most ε with respect to the bottleneck distance.
Proof. This is a direct consequence of [START_REF] Chazal | Proximity of persistence modules and their diagrams[END_REF]Theorem 4.4] where the result is proven for every homological dimension.
For example, in [START_REF] Chazal | Scalar field analysis over point cloud data[END_REF]Theorem 5], it is proven that, given the density function P on a point cloud S_x with sufficient sampling density, the persistence diagram D^0 R_δ(S_x, P) built upon the Rips graph R_δ(S_x) with an appropriate δ is a good approximation of D^0 P, the persistence diagram of P. Consequently, D^0 R_δ(S_x, P) encodes the 0th homology groups of the underlying space of S_x; this is a crucial ingredient in the proof of the theoretical guarantees of ToMATo.
Persistence of Rips graph and σ-Rips graph
ToMATo is a clustering method that applies the hill-climbing algorithm on the Rips graph together with a merging rule based on the Rips graph's persistence, and it comes with theoretical guarantees under the manifold assumption. We would like to derive similar guarantees for our proposed approach. As previously mentioned, ToMATo's guarantees are deduced from a careful comparison (with respect to the bottleneck distance) between D^0 P and D^0 R_δ(S_x, P); one way to get similar guarantees for our procedure is to control the bottleneck distance between D^0 R_δ(S_x, P) and the persistence diagram of the σ-Rips graph. To do so, we need to introduce the following. Lemma 3.2. Given a finite point cloud S_x from a metric space (X, d), for every σ-Rips graph R^σ_δ(S_x) with real-valued threshold function σ_δ : X² → R^*_+ and parameter set δ, there exists a non-metric space (X, d̄) such that R^σ_δ(S_x) is the Rips graph R_1(S_x) over (X, d̄).
Proof. The proof is trivial by the following definition of d̄:

d̄ : X × X → R_+, (x, x′) ↦ d(x, x′) / σ_δ(x, x′).
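In practice, the lemma says that building the σ-Rips graph amounts to thresholding the rescaled distances d(x, x′)/σ_δ(x, x′) at 1. A minimal NumPy sketch is given below; the callable sigma is a stand-in for the threshold function σ_δ of (3.1), which is not reproduced here, and the point cloud is random illustrative data.

```python
import numpy as np
from scipy.spatial.distance import cdist

def sigma_rips_adjacency(X, sigma):
    """Adjacency matrix of the sigma-Rips graph.

    X     : (n, d) array of points.
    sigma : callable taking two points and returning a positive threshold;
            it stands in for the threshold function sigma_delta of (3.1).
    Two points are connected iff d(x, x') / sigma(x, x') <= 1.
    """
    n = len(X)
    D = cdist(X, X)  # pairwise Euclidean distances
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = D[i, j] / sigma(X[i], X[j]) <= 1.0
    return A

# Illustrative constant threshold: this recovers the usual Rips graph at scale 0.5.
X = np.random.rand(100, 2)
A = sigma_rips_adjacency(X, sigma=lambda x, y: 0.5)
```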
Given a graph R(S_x), we will denote by P_R(x_i, x_j) the set of all paths in R(S_x) from the vertex x_i to the vertex x_j, where a path p is a sequence of vertices of R(S_x) such that any two consecutive vertices of p are adjacent in R(S_x).
Definition 3.6 (appearance level). Given a finite point cloud S_x = {x_i}_{i=1}^n from a metric space (X, d) with the probability density function P, and a graph R(S_x), we define f_R(x_i, x_j) as the level α at which the first connection between the vertices x_i and x_j appears in the upper-star Rips filtration R_δ(S_x, P), for all i, j ∈ {1, . . . , n}, i ≠ j:

f_R(x_i, x_j) = max_{p ∈ P_R(x_i, x_j)} min_{x ∈ p} P(x)   if P_R(x_i, x_j) ≠ ∅,
f_R(x_i, x_j) = 0   otherwise.

Now we can, for example, prove that the persistent module R_δ(S_x, P) of the Rips graph R_δ(S_x) over the metric space (X, d) and the persistent module R_1(S_x, P) of the σ-Rips graph R^σ_δ(S_x) over the non-metric space (X, d̄) of Lemma 3.2 are ε-interleaved.
Theorem 3.3. Given a finite point cloud S_x = {x_i}_{i=1}^n from a metric space (X, d) with the probability density function P, consider the Rips graph R_{δ′}(S_x) with parameter δ′ and the σ-Rips graph R^σ_δ(S_x) with a threshold function σ of parameter set δ.

Denote by R = R_{δ′}(S_x, P) and R^σ = R_1(S_x, P) the associated persistent modules, and by R = R_{δ′}(S_x) and R^σ = R^σ_δ(S_x) the graphs. For α ∈ R, we set:

R_α = R_{δ′}(S_x ∩ P^{-1}([α, +∞]))   and   R^σ_α = R^σ_δ(S_x ∩ P^{-1}([α, +∞])).

Concretely, for all i, j ∈ {1, . . . , n} such that i ≠ j, x_i and x_j appear in R and R^σ at α = P(x_i) and P(x_j), respectively. They are then merged in R, resp. R^σ, at α = f_R(x_i, x_j), resp. α = f_{R^σ}(x_i, x_j). Finally, by choosing ε as

ε = max_{x_i, x_j ∈ S_x, i ≠ j} |f_R(x_i, x_j) − f_{R^σ}(x_i, x_j)|,

we have that R and R^σ are strongly ε-interleaved.
Proof. For α ∈ R, let C_1, . . . , C_k be the connected components of R_α. For every i ∈ {1, . . . , k} and all vertices x, x′ ∈ C_i, we have f_R(x, x′) ≥ α and thus, by definition of ε, f_{R^σ}(x, x′) ≥ α − ε. Hence C_i is contained in a connected component of R^σ_{α−ε}. This gives a linear map:

φ_α : H_0(R_α) → H_0(R^σ_{α−ε}).
By a similar argument, we get a linear map:

ψ_α : H_0(R^σ_α) → H_0(R_{α−ε}).
By construction, the following diagrams are commutative (the linear maps involved are induced by inclusions on connected components): they are the four interleaving diagrams of Definition 3.5, with X_α = H_0(R_α) and Y_α = H_0(R^σ_α). Consequently, R and R^σ are strongly ε-interleaved.
In other words, when switching from the metric distance d to the non-metric distance d̄, if the dendrogram induced by the upper-star Rips filtration stays mostly the same during the persistence process, our procedure enjoys the same theoretical guarantees as the ToMATo method.
Learning with proper topological regions
In the following, we will first clarify our proposed notion of proper topological regions in the context of our framework, and then derive our meta-approach for pool-based active learning strategies.
Proper topological regions
The proper topological regions of a sample set S = {(x_i, y_i)}_{i=1}^n are given by the σ-Rips graph R^σ_{δ*}(S_x), with an appropriate threshold function σ, resulting from solving the following optimization problem:

minimize_{R^σ ∈ H_{R^σ}}   PuritySize(R^σ_δ(S_x)) = k/n + (1/n) ∑_{i=1}^n 1_{C^P_R(x_i) ≠ y_i}  ∈ [0, 1]
subject to   d^∞_B(R^σ, D^0 P) ≤ ε.    (3.2)
Similarly, we can define the optimal Rips graph R_{δ*}(S_x) that encodes the proper topological regions of S. The PuritySize is an objective function that penalizes the labeling error made when propagating the labels inside the connected components of the graph with C^P_R, and the coverage with k, the number of connected components in the graph R. However, in the context of our approach we need to derive an unsupervised objective function. To this end, we empirically investigated the following score functions, typically used to assess clustering quality. For a given graph R(S_x), let C_1, . . . , C_k be the connected components of this graph, µ_i = (1/|C_i|) ∑_{x∈C_i} x, for all i ∈ {1, . . . , k}, and µ = (1/n) ∑_{i=1}^n x_i the mean-sample per connected component C_i and the mean-sample of S_x, respectively.

Calinski-Harabasz score

S_ch(R(S_x)) = ((n − k) B) / ((k − 1) ∑_{i=1}^k W_i)  ∈ [0, +∞),

with B = ∑_{i=1}^k |C_i| ‖µ_i − µ‖² the inter-group variance and W_i = ∑_{x∈C_i} ‖x − µ_i‖² the intra-group variance, for all i ∈ {1, . . . , k}. It translates that a good partitioning should maximize the average inter-group variance and minimize the average intra-group variance; some well-known clustering algorithms, such as K-means [START_REF] Lloyd | Least squares quantization in pcm[END_REF], maximize this criterion by construction.
Davies-Bouldin score

S_db(R(S_x)) = (1/k) ∑_{i=1}^k max_{j≠i} (δ̄_i + δ̄_j) / d(µ_i, µ_j)  ∈ [0, +∞),

with δ̄_i = (1/|C_i|) ∑_{x∈C_i} d(x, µ_i) the average distance of all samples in the group to their mean-sample group, for all i ∈ {1, . . . , k}; lower values indicate a better partitioning.
Dunn score

S_d(R(S_x)) = min_{i≠j} d(µ_i, µ_j) / max_i ∆_i  ∈ [0, +∞),

with ∆_i = max_{x,x′∈C_i} d(x, x′) the diameter of group C_i. Similarly to the Calinski-Harabasz score, we aim to maximize the minimum distance between the mean-sample groups and minimize the maximum group diameter.
Silhouette score

S_sil(R(S_x)) = (1/k) ∑_{i=1}^k (1/|C_i|) ∑_{x∈C_i} s_il(x)  ∈ [−1, 1],

this score assigns to every sample the silhouette coefficient s_il(x) = (b(x) − a(x)) / max(a(x), b(x)), with

a(x) = (1/(|C_R(x)| − 1)) ∑_{x′∈C_R(x), x′≠x} d(x, x′)

the average distance of sample x to its own group, and

b(x) = min_{j≠i} (1/|C_j|) ∑_{x′∈C_j} d(x, x′),  where C_i = C_R(x),

the average distance of sample x to its nearest neighboring group.
We observed promising empirical evidence that the Silhouette score is the best candidate among the above scores to substitute the propagation error term in the PuritySize cost function in (3.2). Therefore, we define the unsupervised cost function SilSize(R, S x ) that we use in solving the unsupervised setting of problem (3.2).
SilSize_α(R(S_x)) = S_sil(R(S_x)) − α k/n,   with α ∈ R_+.    (3.3)
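As an illustration, the sketch below evaluates the clustering quality scores discussed above and the SilSize objective from a vector of connected-component labels, assuming scikit-learn is available. Note that scikit-learn's silhouette score averages the coefficient over samples rather than over groups as in the definition above, so it is only a close proxy; the data and the α value are illustrative.

```python
import numpy as np
from sklearn.metrics import (calinski_harabasz_score,
                             davies_bouldin_score,
                             silhouette_score)

def silsize(X, labels, alpha=1.0):
    """SilSize objective of Eq. (3.3): silhouette score minus alpha * k / n.

    X      : (n, d) array of samples.
    labels : length-n array, labels[i] is the connected component of sample i.
    alpha  : trade-off parameter penalizing the number of components.
    """
    n = len(X)
    k = len(np.unique(labels))
    return silhouette_score(X, labels) - alpha * k / n

# Illustrative usage with random data and an arbitrary 3-group partition.
X = np.random.rand(300, 5)
labels = np.random.randint(0, 3, size=300)
print("Calinski-Harabasz:", calinski_harabasz_score(X, labels))
print("Davies-Bouldin   :", davies_bouldin_score(X, labels))
print("SilSize          :", silsize(X, labels, alpha=0.5))
```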
Given our choice of the threshold function σ_δ in (3.1) and the new objective function (3.3), our problem (3.2) becomes:

maximize_{δ=(a,r,t)∈dom(σ)}   SilSize_α(R^σ_δ(S_x)) = S_sil(R^σ_δ(S_x)) − α k/n  ∈ [−1 − α, 1 − α/n]
subject to   d^∞_B(R^σ, D^0 P) ≤ ε.    (3.4)
Theorem 3.3 tells us that one way to ensure the bottleneck constraint in (3.4) is to apply the same post-processing phase used in the ToMATo algorithm on the σ-Rips graph. It consists of applying a merging rule along the hill-climbing method on the graph with P. This merging rule compares the topological persistence of groups to an additional merging parameter τ ∈ [0, max_{x∈S_x} P(x)] [START_REF] Frederic | Persistence-based clustering in riemannian manifolds[END_REF]. We will refer to this procedure as ToMATo_τ(R(S_x), P), which returns the partition function C_R that encodes the proper topological regions of S_x. We describe in Algorithm 2 a two-stage black-box optimization scheme to uncover the σ-graph parameters δ and the merging parameter τ, solutions of our optimization problem (3.4), for the proper topological regions of S.
Algorithm 2 Optimization procedure for PTR
Require: S_x := {x_i}_{i=1}^n, d : X × X → [0, ∞), s the step size for the line search, and l the number of trials for the optimization strategy.
1: Set α = s.
2: Compute the density estimator P with (3.5).
3: Optimize the unconstrained problem (3.4) for l trials, and return δ̃ = (ã, r̃, t̃).
4: Build the σ-Rips graph R^σ_δ̃(S_x).
5: while R^σ_δ̃(S_x) is not a degenerate graph do
6:    Save the current graph parameters δ̃.
7:    Update α ← α + s.
8:    Optimize the unconstrained problem (3.4) for l trials.
9:    Build the σ-Rips graph R^σ_δ̃(S_x), with the new parameters δ̃.
10: end while
11: Update α ← α − s.
12: Optimize problem (3.4) with ToMATo_τ(R^σ_δ̃(S_x), P) for l trials, on the merging parameter τ, given the fixed parameters δ̃ of line 6, and return τ̃.
13: Output: parameters δ̃ = (ã, r̃, t̃), and τ̃.
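A minimal sketch of the inner unconstrained optimization of Algorithm 2 (lines 3 and 8) is shown below, using the TPE sampler from Optuna, which is the Tree-structured Parzen Estimator mentioned in the practical considerations. The function build_sigma_rips and the search ranges for (a, r, t) are placeholders standing in for the construction based on the threshold function of (3.1); they are assumptions, not the exact settings of the experiments.

```python
import optuna
from scipy.sparse.csgraph import connected_components
from sklearn.metrics import silhouette_score

def silsize_of_graph(X, adjacency, alpha):
    """SilSize objective (3.3) of the graph given by a boolean adjacency matrix."""
    k, labels = connected_components(adjacency, directed=False)
    if k < 2 or k >= len(X):          # silhouette is undefined in these cases
        return -1.0 - alpha
    return silhouette_score(X, labels) - alpha * k / len(X)

def optimize_sigma_rips(X, build_sigma_rips, alpha, n_trials=1000, seed=0):
    """One TPE run over the sigma-Rips graph parameters delta = (a, r, t).

    build_sigma_rips(X, a, r, t) is assumed to return the boolean adjacency
    matrix of the sigma-Rips graph for these parameters.
    """
    def objective(trial):
        a = trial.suggest_float("a", 1e-3, 10.0, log=True)   # illustrative range
        r = trial.suggest_float("r", 1e-3, 10.0, log=True)   # illustrative range
        t = trial.suggest_float("t", 0.0, 1.0)               # illustrative range
        A = build_sigma_rips(X, a, r, t)
        return silsize_of_graph(X, A, alpha)

    study = optuna.create_study(direction="maximize",
                                sampler=optuna.samplers.TPESampler(seed=seed))
    study.optimize(objective, n_trials=n_trials)
    return study.best_params
```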
Remark 1. Note that the trade-off parameter α in (3.3) is key in uncovering the proper topological regions of the sample set S. Higher α values penalize the coverage compactness, resulting in partitions with a high degree of agglomeration, meaning fewer groups with large cardinals, which is typically the case in clustering methods, for example. However, an additional way to control the labeling propagation error term of the PuritySize objective in an unsupervised setting is to control the size distribution of groups in the resulting partition. Put differently, minimizing the group size is an excellent proxy for minimizing the labeling propagation error. Inversely, lower α values result in highly fragmented partitions with many groups of small cardinals; such a setting is sub-optimal for our purpose of increasing the training sample size. Additionally, the Silhouette is a score metric used to evaluate clustering quality and is defined only for non-singleton groups; using it solely as an objective function in (3.4) often converges to graphs with a single non-singleton connected component. Accordingly, there should be a middle ground for α values, which we find with a line search method and a stopping criterion on the size distribution of groups. We will clarify other technical details and choices in the next subsection.

Algorithm 3 Pool-based active learning on PTR
Require: S_x := {x_i}_{i=1}^n, oracle O, budget B, d : X × X → [0, ∞), graph parameters δ and merging parameter τ, active learner agent h_st(S_x, B) with an underlying pool-based strategy st, and r the number of active training rounds.
1: Compute the density estimator P with (3.5).
2: Build the σ-Rips graph R^σ_δ(S_x).
3: Apply ToMATo_τ(R^σ_δ(S_x), P) to get C_{R^σ_δ}, the partition function that encodes the proper topological regions of S_x.
4: S_0 = {B largest connected components (ccs) of R^σ_δ(S_x) labeled with C^P_{R^σ_δ}}.
5: for i = 0, . . . , r do
6:    Train the learning agent h_st(S_i, B).
7:    Ask a set S_r from h_st of size B.
8:    S_{i+1} = S_i ∪ {label ccs containing S_r with C^P_{R^σ_δ}}.
9:    if extra budget then
10:       S_{i+1} = S_{i+1} ∪ {label largest ccs of R^σ_δ(S_x) with C^P_{R^σ_δ}}.
11:    end if
12: end for
13: Output: trained agent h_st

Algorithm 3 presents the main contribution of this work, our meta-approach for training pool-based active learning strategies on the proper topological regions of a sample set S_x. In addition to standard inputs of active learning methods, such as the unlabeled sample set S_x, the oracle O, the budget B, and the number of rounds r, it also takes as input the metric distance d, a given pool-based active learning method h_st(S_x, B), and the parameters δ and τ tuned by Algorithm 2. The proper topological regions of S_x are encoded in the partition function C_{R^σ_δ}; we typically obtain a much higher number of regions than the number of classes c, or than what we would expect with clustering methods. Initially, we aim to maximize the initial training set size by labeling the largest regions. Then the underlying active learner strategy consumes the rest of the regions during the rounds of active training. We might receive queries during these rounds which contain samples from the same region; in that case, we use the extra budget to label the largest regions. After r rounds, we return the trained estimator of the strategy. Note that we can also have cases where all the regions are consumed before the end of the rounds of active training. Next, we shall discuss practical considerations and technical details related to implementing this approach.
Practical considerations
We consider for d the Euclidean distance over all experiments. Furthermore, we use ℓ-nearest neighbors for the estimation of the distance matrix D as follows. Let D = (d_{i,j}) ∈ R^{n×n} be a sparse distance matrix with only ℓ non-zero values in each row:

d_{i,j} = d(x_i, x_j)   if x_j is one of the ℓ-nearest neighbors of x_i,
d_{i,j} = 0   otherwise.
The density estimation P : S_x → R_+ is calculated as follows:

P(x_i) = ( (1/ℓ) ∑_{j=1}^n d_{i,j}² )^{−1/2},   for all i ∈ {1, . . . , n}.    (3.5)
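A minimal scikit-learn sketch of this estimation step could look as follows; the data is random and the neighbor count matches the ℓ = 2000 value used below for the larger datasets. Note that, in this sketch, each point is counted among its own neighbors (with distance 0).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density(X, n_neighbors):
    """Sparse k-NN distances and the density estimator of Eq. (3.5).

    Returns (distances, density) where distances[i, :] holds the Euclidean
    distances from x_i to its n_neighbors nearest neighbors (including x_i
    itself), and density[i] = (mean of squared neighbor distances) ** (-1/2).
    """
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    distances, _ = nn.kneighbors(X)          # shape (n, n_neighbors)
    density = np.mean(distances ** 2, axis=1) ** -0.5
    return distances, density

X = np.random.rand(5000, 10)
_, P = knn_density(X, n_neighbors=2000)
```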
We consider ℓ = 2000 for all datasets of greater size. We choose the Tree-structured Parzen Estimator (TPE) [START_REF] Bergstra | Algorithms for hyper-parameter optimization[END_REF] for the optimization procedure of Algorithm 2, with a number of trials l = 1000 and a step size s of 0.1 for the line search procedure.
Empirical results
In this section, we describe the datasets, the baseline methods, and the experimental methodology.
Datasets
We conduct experiments on benchmark datasets for classification problems also often used in active learning: coil-20 [START_REF] Yang | Robust affine invariant descriptors[END_REF], isolet [START_REF] Fanty | Spoken letter recognition[END_REF], protein [START_REF] Higuera | Self-organizing feature maps identify proteins critical to learning in a mouse model of down syndrome[END_REF], banknote, pendigits, nursery, and adult [START_REF] Joseph D Romano | Pmlb v1.0: an open source dataset collection for benchmarking machine learning methods[END_REF].
Baseline methods

For the cold-start experiments, we consider the following approaches from the literature:

K-Means clustering

The K-Means algorithm [START_REF] Lloyd | Least squares quantization in pcm[END_REF] partitions a collection of examples into K clusters by minimizing the sum of squared distances to the cluster centers. It has been used for active learning in [129] to generate the initial training set by labeling the closest sample to each centroid. Another variation proposed in [70] adds artificial samples from the centroids, named model examples, to the initial training set. This approach is named K-Means+ME and leads to an initial training set twice as large as the one created using K-Means.

K-Medoids clustering

The K-Medoids algorithm [START_REF] Kaufman | Finding Groups in Data: An Introduction To Cluster Analysis[END_REF] is very similar to K-Means except that it uses actual samples, namely medoids, as the center of each cluster. These medoids are then used to form the initial training set in active learning.
Agglomerative Hierarchical Clustering (AHC)
Agglomerative hierarchical clustering [START_REF] Voorhees | The Effectiveness & Efficiency of Agglomerative Hierarchic Clustering in Document Retrieval[END_REF] is a bottom-up clustering approach that builds a hierarchy of clusters. Initially, each sample represents a singleton cluster. Then the algorithm recursively merges the closest clusters using a linkage function until only one cluster is left. This process is usually presented in a dendrogram, where each level refers to a merge in the algorithm.
AHC has been used for active learning in [START_REF] Dasgupta | Hierarchical sampling for active learning[END_REF] by pruning the dendrogram at a certain level to obtain clusters, then, similarly to K-Means, selecting the samples closest to the cluster centroids to generate the initial training set.
Furthest-First-Traversal (FFT)
The farthest-first traversal of a sample set is a sequence of selected samples, where the first sample in the sequence is selected arbitrarily, and each successive sample is the one located furthest away from the set of previously selected samples. The resulting sequence is then used as the initial training set for active learning. The FFT algorithm has been used for active learning in [START_REF] Baram | Online choice of active learning algorithms[END_REF].
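For concreteness, a short NumPy sketch of the traversal follows; the Euclidean metric, the random data, and the budget value are illustrative choices.

```python
import numpy as np

def farthest_first_traversal(X, budget, seed=0):
    """Return the indices of `budget` samples chosen by farthest-first traversal."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]        # arbitrary first sample
    # distance of every sample to the closest already-selected sample
    dist = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(budget - 1):
        nxt = int(np.argmax(dist))                # furthest from the selected set
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected

X = np.random.rand(1000, 8)
init_idx = farthest_first_traversal(X, budget=10)
```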
Affinity Propagation Clustering (APC)
Affinity propagation is a clustering algorithm designed to find exemplars of the sample set which are representative of clusters. It simultaneously considers all samples of the sample set as possible exemplars and uses a message-passing procedure to converge to a relevant set of exemplars. The exemplars found are then used as an initial training set for active learning [START_REF] Hu | Off to a good start: Using clustering to select the initial training set in active learning[END_REF].
For the active learning experiments, and following the results from [START_REF] Siméoni | Rethinking deep active learning: Using unlabeled data at model training[END_REF], we only consider the comparison against the random labeling strategy, as it outperforms many recent active learning strategies in small budget scenarios. In all our experiments, we use the random forest classifier [START_REF] Tin | Random decision forests[END_REF] as base estimator for the different strategies with default parameters; we also consider several budgets B ∈ [3, …] [START_REF] Amini | Learning with Partially Labeled and Interdependent Data[END_REF][START_REF] Bonnin | A cluster-based strategy for active learning of rgb-d object detectors[END_REF], and 20 stratified random splits, with 70% of the data in the training set and 30% in the test set.

Rips graph vs σ-Rips graph

For all the datasets, we observe that the optimal threshold rule's values found in the hypothesis class of the σ-Rips graph with our proposed threshold function σ_δ in (3.1) are negatively correlated with the estimated density P. We also observe that the best σ-Rips graph achieves better PuritySize scores over all datasets, except for the coil-20 and nursery datasets, where it has scores similar to the best Rips graph found.
Particularly for the coil-20 dataset, where the optimal threshold seems to be a constant, we notice that both graphs converge to the same threshold function; this shows that our proposed threshold function (3.1) can effectively approximate constant threshold functions. These findings confirm our hypothesis that class similarity is a density-aware measure. They also support our choice of σ_δ in (3.1) as an appropriate threshold function to generalize the Rips graph.
Cold-start results
Table 3.2 presents the cold-start results of our meta-approach for pool-based active learning strategies, denoted TPR(USRG), where USRG refers to the unsupervised setting of our optimization procedure for solving Problem (3.4) using the σ-Rips graph, next to the Random Selection (RS) and the other considered baseline methods over all the datasets. In all cases, TPR(USRG) provides significantly better results than the random selection approach, except for the imbalanced dataset nursery with the largest budget. This shows that using the largest proper topological regions found by Algorithm 2 as an initial training set (Line 4 of Algorithm 3) provides a better starting point for pool-based active learning strategies than random selection. Furthermore, our meta-approach shows very competitive results over all datasets when compared to the baseline methods that are solely designed to tackle the cold-start problem in active learning.
Active learning results
Lastly, we present the results of our meta-approach for pool-based active learning strategies. We consider the strategies mentioned above, namely the uncertainty sampling, entropy sampling, and margin sampling query strategies, used with our meta-approach presented in Algorithm 3 with USRG as described in the previous subsection, and compared to RS, the vanilla training approach in pool-based active learning, where random selection initiates the active learning strategies. We also show SSRG to illustrate the difference in performance when we minimize the supervised setting for TPR (Problem (3.2)) compared to the unsupervised setting (Problem (3.4)). We present one figure per dataset, and each figure consists of six error-bar plots, showing the average balanced accuracy and standard deviation over the splits, for all the active learning rounds plus the initial round. The plots are indexed to show a specific budget per row and a specific active learning strategy per column.
We can see from the results that all the considered pool-based active learning strategies clearly benefit from our approach in comparison to the vanilla setting. The only instance where we do not see such an advantage is the nursery dataset. The reason is that nursery has a high class imbalance ratio. In Algorithm 3, we choose to prioritize the gain in training sample size, regardless of class discovery or class ratio, which may amplify the class imbalance in such a case. This shows that we need to introduce other sampling criteria for TPR in Algorithm 3 than simply selecting the largest regions when training with highly class-imbalanced datasets.
Conclusion
In this chapter, we proposed a data-driven meta-approach for pool-based active learning strategies for multi-class classification problems. Our approach is based on the introduced notion of Topological Proper Regions (TPR) of a given sample set; we showed the theoretical foundations of this notion and derived a black-box optimization problem to uncover the TPR. Our empirical study validates our meta-approach on different benchmarks, in a low-budget scenario, for various pool-based active learning strategies. Challenging open questions are left: a theoretical analysis that guarantees good performance in active learning, such as generalization bounds, and the use of semi-supervised approaches to conclude the analysis with a model-dependent approach, by having a regularization term derived from the TPR.
Chapter 4 Deep Learning for Rapid Automation of Transmission Electron Microscopy Analysis
Deep learning is revolutionizing many areas of science and technology, including the analysis of data obtained by Transmission Electron Microscopy (TEM). This chapter presents our practical contributions aimed at automating the analysis of phase and orientation mapping from scanning diffraction data obtained during TEM analysis.
More precisely, we aim to derive a DL approach capable of accurately predicting a crystal's phase and orientation, given its diffraction diagram. These contributions aim to achieve real-time orientation and phase determination maps during the acquisition experiments. This chapter is based on the following papers [HDR + 22, HDAL22].
Introduction
Transmission Electron Microscopy (TEM) has expanded the type of information obtained on nanocrystalline microstructures, such as phase and orientation [START_REF] Carter | Transmission Electron Microscopy: Diffraction, Imaging, and Spectrometry[END_REF]. Orientation microscopy is a technique that enables spatially resolved measurements of crystal phases and orientations in a sample and reconstructs the microstructure from this information. Using a scanning mode and acquiring a full diffraction diagram on each scanning data point, orientation mapping TEM experiments, alternatively called ACOM (Automated Crystal Orientation Mapping) or SPED (Scanning Precession Electron Diffraction), produce large collections of datasets, which are often impossible to process manually. As a result, extensive research [START_REF] Rauch | Rapid diffraction patterns identification through template matching[END_REF][START_REF] Zaefferer | Development of a TEM-Based Orientation Microscopy System[END_REF] has proposed semi-automated approaches to analyze these datasets. These deterministic methods rely on classical computer vision techniques (e.g., Hough transform, Fourier filtering, segmentation, and cross-correlation as a similarity measure), which typically require manual hyperparameter tuning and incur a computation cost for each experiment. Deep neural networks (DNNs) have shown superior performance compared to classical computer vision techniques in most benchmark tasks. This led to the emergence of fully automated approaches [132,[START_REF] Aguiar | Decoding crystallography from high-resolution electron imaging and diffraction datasets with deep learning[END_REF] and tools [START_REF] Xu | Automating electron microscopy through machine learning and usetem[END_REF][START_REF] Munshi | 4d crystal: Deep learning crystallographic information from electron diffraction images[END_REF] for various TEM tasks. In the context of orientation microscopy, ML-based approaches still fall behind traditional techniques such as template matching [START_REF] Rauch | Rapid diffraction patterns identification through template matching[END_REF] or the Kikuchi technique [START_REF] Kikuchi | Transmission Electron Microscopy (TEM[END_REF] when it comes to generalization performance on crystal orientations and phases unseen during training. This is mainly due to the limited experimental data about the studied phenomena available for training the models. It is a realistic and practical constraint, especially for narrow-domain applications where real data is not widely available. Some successful attempts have been made to use unsupervised learning techniques to gain more insight into the data [START_REF] Ben | Unsupervised machine learning applied to scanning precession electron diffraction data[END_REF][START_REF] Shi | Rapid and semiautomated analysis of 4d-stem data via unsupervised learning[END_REF], but clustering information does not directly solve the orientation microscopy problem.
Early deep learning breakthroughs were primarily in the computer vision domain, mainly due to the increased availability of new big-data benchmarks and organized competitions such as ImageNet [START_REF] Jia | Imagenet: A large-scale hierarchical image database[END_REF] since 2009. This dynamic resulted in many sophisticated models for image classification [START_REF] He | Deep residual learning for image recognition[END_REF][START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF][START_REF] Huang | Convolutional networks with dense connectivity[END_REF][START_REF] Chollet | Xception: Deep learning with depthwise separable convolutions[END_REF]. There is a clear potential for automated image analysis tools using state-of-the-art machine learning techniques for phase and orientation determination to complement the existing relatively slow, complex, and hyperparameter-dependent approaches. To this end, we investigate multi-task DL solutions with the purpose of complementing the existing slow phase and orientation determination techniques with DL models that provide fast predictions during data acquisition, at the cost of lower generalization accuracy, to be used as a less accurate but real-time analysis for TEM experiments.
Experimental
We conduct our experiments using two batches of maps from two different studies. The second batch of TEM data was collected in a study investigating the structural and mechanical properties of different ultrafine-grained (UFG) structures obtained from aluminum alloys [START_REF] Duchaussoy | Structure and mechanical behavior of ultrafine-grained aluminum-iron alloy stabilized by nanoscaled intermetallic particles[END_REF]. The study investigated a specimen of an Al-2wt%Fe alloy after severe plastic deformation (SPD) by high-pressure torsion (HPT) at different strain levels.
Labeling strategy
Transmission Electron Microscopy (TEM) was performed using a JEOL 2100F microscope with a Stingray camera recording the phosphor screen, equipped with automated crystal orientation mapping using the (ASTAR) package [START_REF] Rauch | Rapid diffraction patterns identification through template matching[END_REF][START_REF] Edgar | Orientation and phase mapping in tem microscopy (ebsd-tem like): Applications to materials science[END_REF][START_REF] Viladot | Orientation and phase mapping in the transmission electron microscope using precession-assisted diffraction spot recognition: State-of-the-art results[END_REF]. The ASTAR template matching package was used to analyze the phase and orientation distribution maps, which will be considered as the ground truth in the following. (c) Map 3 : 97.4% alpha iron (α-Fe), 0.3% niobium carbide (NbC), and 2.3% cementite (Fe 3 C).
(d) Map 10T : 98.0% aluminium (Al), and 2.0% aluminium-iron (Al 6 Fe).
(e) Map as-cast : 89.6% aluminium (Al), and 10.4% aluminium-iron (Al 6 Fe).
(f) Map 100T : 99.1% aluminium (Al), and 0.9% aluminium-iron (Al 6 Fe).
The crystal orientations in the specimen are also provided by the template matching analysis. The Euler angles [START_REF] Goldstein | Classical Mechanics. Addison-Wesley series in physics[END_REF], usually denoted as (ϕ_1, Φ, ϕ_2), can fully describe these crystal orientations. A compact approach for representing these orientations is to map them to the RGB channels to create an image showing a given micrograph's orientation map. Additionally, we have access to template diffraction diagrams where the diffraction patterns are simulated using the crystal information and the TEM experimental settings; Figure 4.4 shows a simulated diffraction pattern of alpha iron at a given orientation with its Euler angles. Each simulated template is provided as a set of spot coordinates with the corresponding intensities, rendered as the radius at these spots. We draw simulated DPs using OpenCV [START_REF]Itseez. Open source computer vision library[END_REF], and we cover the Euler space of each crystal by simulating the fundamental zone corresponding to the class symmetry of each considered phase.
As mentioned previously, the objective of this study is to retrieve information about the crystal phase and orientation from the data collected during TEM experiments. Ultimately, we want to investigate deep learning methods to infer Euler's orientation and phase determination maps from real diffraction images. From the descriptions provided in this section, we can already identify important challenges to take into consideration when designing such approaches: TEM data is highly heterogeneous, and the experimental settings in which the data is collected are often different from one TEM experiment to another for practical reasons. This adds another difficulty for DL approaches, which must adapt to new experimental settings when analyzing new micrographs.
Data preprocessing for ML
A lot of effort has gone into researching and developing new ways of preprocessing real diffraction patterns (DPs) and simulated DPs to improve the prediction accuracy of ML-based approaches [132,[START_REF] Munshi | 4d crystal: Deep learning crystallographic information from electron diffraction images[END_REF]. Nowadays, computer vision models rely on the end-to-end approach with minimal feature engineering. Nonetheless, such an approach has some underlying assumptions that cannot be met in our case, namely a highly favorable signal-to-noise ratio, which is usually the case in standard high-resolution RGB images in datasets such as ImageNet [START_REF] Jia | Imagenet: A large-scale hierarchical image database[END_REF], and a tremendous amount of relevant training data to allow the underlying algorithm to model and adapt to the noise distribution. Initial end-to-end experiments for phase classification with off-the-shelf DL models showed that these models indeed fail to generalize to DPs from unseen micrographs, confirming our belief in the need for an appropriate preprocessing to alleviate this phenomenon. Additionally, traditional normalization techniques are not adapted to our use case. In Figure 4.5, we present the mean experimental diffraction diagram for each map (calculated on all of the map's datapoints). These spots are present in the mean images because of the high frequency of duplicates in TEM data from the major grains. This is relevant information in frequent DPs, and subtracting it in a normalization scheme would result in a signal loss. However, the mean images also contain statistical noise intrinsic to the TEM experiment, which should be subtracted.
g_σ(i, j) = (1 / (2πσ²)) e^{−(i² + j²)/(2σ²)}    (4.1)
In our study, after standardizing the images, we propose to filter out the statistical noise by fitting the centered Gaussian filter of Eq. (4.1), where i is the distance from the center along the horizontal axis, j is the distance from the center along the vertical axis, and σ is the standard deviation, to the mean image of each micrograph (map), and subtracting the resulting filter from all the DPs of that map. After subtraction, all negative points of the image are assigned a value of 0. Concretely, for each mean image m of a given micrograph, we solve:

σ* = arg min_σ ∑_{i,j} (m(i, j) − g_σ(i, j))².
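A minimal NumPy/SciPy sketch of this preprocessing is given below, assuming the diffraction patterns of one map are stacked in a 3-D array and a simple bounded least-squares fit over σ; in practice an amplitude factor for the Gaussian may also be needed, and the bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def centered_gaussian(shape, sigma):
    """Centered 2-D Gaussian filter g_sigma of Eq. (4.1) on an image grid."""
    h, w = shape
    i = np.arange(w) - (w - 1) / 2.0
    j = np.arange(h) - (h - 1) / 2.0
    ii, jj = np.meshgrid(i, j)
    return np.exp(-(ii**2 + jj**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def denoise_map(dps):
    """Subtract the fitted Gaussian background from all DPs of one map.

    dps : float array of shape (n_patterns, h, w), already standardized.
    """
    mean_img = dps.mean(axis=0)
    # Least-squares fit of sigma to the mean image (illustrative criterion).
    loss = lambda s: np.sum((mean_img - centered_gaussian(mean_img.shape, s)) ** 2)
    sigma_star = minimize_scalar(loss, bounds=(1.0, max(mean_img.shape)),
                                 method="bounded").x
    background = centered_gaussian(mean_img.shape, sigma_star)
    return np.clip(dps - background, 0.0, None)   # negative values are set to 0
```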
Deep learning for TEM data analysis
This section considers two different research directions. The first direction was to investigate the capability of DL approaches to train with simulated TEM data and to evaluate how well this translates to our problem of real TEM data analysis. The second direction of this work was to focus on the performance of SOTA DL models in predicting accurate unseen maps while relying directly on real TEM data for training.
Training on simulated TEM data
To solve our objective of analyzing TEM data using DL models, we address two known tasks in ML: a classification task to identify phases and predict the phase determination map of a micrograph (see Figure 4.2) given its diffraction data, and a regression task to predict the Euler's orientation map given the diffraction data of the considered datapoint (see Figure 4.3). In the following, we will only consider the first batch of micrographs, namely maps 1, 2, and 3 of Figure 4.1.
Additionally, we consider the descriptors, extracted following the procedure detailed in the preprocessing section, of the fundamental zone of the phases alpha iron, niobium carbide, and cementite. We consider the task of classifying the considered phases with their code label described in Table 4.1, and the regression task of Euler's angle prediction, such that for the angles of all signatures we apply the following transformation in order to map the cyclical angle space [0, 2π] to a continuous space:

f(ϕ_1, Φ, ϕ_2) = (sin(ϕ_1), cos(ϕ_1), sin(Φ), cos(Φ), sin(ϕ_2), cos(ϕ_2))
f^{−1}(x_1, x_2, x_3, x_4, x_5, x_6) = (arctan2(x_1, x_2), arctan2(x_3, x_4), arctan2(x_5, x_6))    (4.2)
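A direct NumPy implementation of this transformation and its inverse (angles in radians) is shown below.

```python
import numpy as np

def angles_to_features(phi1, Phi, phi2):
    """Map Euler angles (radians) to the 6-D continuous representation of Eq. (4.2)."""
    return np.stack([np.sin(phi1), np.cos(phi1),
                     np.sin(Phi),  np.cos(Phi),
                     np.sin(phi2), np.cos(phi2)], axis=-1)

def features_to_angles(x):
    """Inverse mapping: recover the Euler angles with arctan2, wrapped to [0, 2*pi)."""
    phi1 = np.arctan2(x[..., 0], x[..., 1])
    Phi  = np.arctan2(x[..., 2], x[..., 3])
    phi2 = np.arctan2(x[..., 4], x[..., 5])
    return np.mod(np.stack([phi1, Phi, phi2], axis=-1), 2 * np.pi)
```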
We reserved 20% of the total simulated data as a separate test set to estimate the performance of the trained DL models using two metrics: the balanced accuracy score for the classification task, and, to evaluate the quality of the regression predictions, the mean absolute angle error, defined between a 1-D vector A of true angles and a 1-D vector Â of angle predictions, both of size n, as:

maae(A, Â) = (1/n) ∑_{i=1}^n min(|A_i − Â_i|, 360 − |A_i − Â_i|)    (4.3)
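The metric of Eq. (4.3) (angles in degrees) is straightforward to implement:

```python
import numpy as np

def maae(a_true, a_pred):
    """Mean absolute angle error (degrees), wrapping around the 0/360 boundary."""
    diff = np.abs(np.asarray(a_true) - np.asarray(a_pred))
    return float(np.mean(np.minimum(diff, 360.0 - diff)))

print(maae([350.0, 10.0], [5.0, 355.0]))  # both errors are 15 degrees
```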
The rest of the simulated data is used for training and validating the considered DL models, with an 80%/20% split; we consider three DL architectures in this experiment:
MLP: a full Multi-Layer Perceptron [START_REF] Malsburg | Frank rosenblatt: Principles of neurodynamics: Perceptrons and the theory of brain mechanisms[END_REF], with three initial dense layers of size 500 and gelu activations, and two final dense layers of sizes 100 and 50, also with gelu activations.
CNN: a Convolutional Neural Network [START_REF] Lecun | Gradientbased learning applied to document recognition[END_REF], with four consecutive 1D convolutions of size 128, kernel size 7, and relu activations, followed by a 1d max pooling layer of kernel size 3, a flatten operation and a final dense layer of size 64 with relu activation.
LSTM: a Long Short-Term Memory model [START_REF] Hochreiter | Long short-term memory[END_REF], with a first CNN layer of size 128, kernel size 7, and relu activation, followed by a 1D max pooling layer of kernel size 3, the LSTM layer with 256 units, and a flatten layer that outputs the full sequence of the LSTM layer.
The CNN and MLP models were retrieved using a Neural Architecture Search (NAS) procedure with AutoKeras [START_REF] Jin | Auto-keras: An efficient neural architecture search system[END_REF]; the LSTM model was manually handcrafted based on the architecture presented in [START_REF] Meng | lncrnalstm: Prediction of plant long non-coding rnas using long short-term memory based on p-nts encoding[END_REF]. In this experiment, we compare these models in two different settings. The first is a hierarchical setting, where a first instance of a model is trained to classify the signatures, and three other dedicated instances handle the regression task, one per class. The second setting is the multi-task approach (MT), where we trained a single instance of each model with two heads, one for the regression task and the other for the classification task; the classification head has an output of size 3, the number of classes, and the regression head has an output of size 6 to predict the angle transformation f of Eq. (4.2) of the signature's angles. All the models were trained with a batch size of 1024 for 500 epochs, with early stopping on the validation loss. The nuances between these two settings for predicting phase and angle information from DP descriptors are depicted in Figure 4.8. From Table 4.1, we show that the classification task on the descriptors extracted from simulated DPs is a relatively simple task compared to angle prediction; even during training, all the models reached a perfect accuracy score in training and validation after a few epochs. Each model is compared in the two settings described above, multi-task (MT) or hierarchical with one model instance per class for the regression and another instance for classification only. The MT setting has the advantage of being less complex: having a single model handling both predictions greatly simplifies the approach, and, as we can see in the results, it makes better use of the training data and takes advantage of the implicit knowledge shared between the two tasks to enhance the accuracy of each of them.
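The multi-task setting can be sketched in Keras as follows; the trunk follows the MLP description above, while the descriptor length, the loss weights, and the tanh activation on the regression head are placeholder assumptions rather than the exact experimental settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_mt_mlp(descriptor_len, n_classes=3):
    """Multi-task MLP: shared trunk, one classification head, one regression head."""
    inputs = layers.Input(shape=(descriptor_len,))
    x = inputs
    for units in (500, 500, 500, 100, 50):          # trunk as described in the text
        x = layers.Dense(units, activation="gelu")(x)
    phase = layers.Dense(n_classes, activation="softmax", name="phase")(x)
    # sin/cos targets lie in [-1, 1]; tanh is a reasonable (assumed) choice here
    angles = layers.Dense(6, activation="tanh", name="angles")(x)
    model = tf.keras.Model(inputs, [phase, angles])
    model.compile(optimizer="adam",
                  loss={"phase": "sparse_categorical_crossentropy",
                        "angles": "mse"},
                  loss_weights={"phase": 1.0, "angles": 1.0},  # placeholder weights
                  metrics={"phase": "accuracy"})
    return model

model = build_mt_mlp(descriptor_len=128)   # descriptor length is illustrative
```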
Next, we investigate the capabilities of these models to adapt from simulated descriptors to the descriptors extracted from real DPs. Transfer learning has become a common technique in ML; from NLP to computer vision, pretrained models are widely used in various applications to enhance model performance [START_REF] Yang | Transfer Learning[END_REF]. In the next set of experiments, we will investigate whether the trained models from the previous step can transfer their knowledge from simulated TEM data to successfully analyze real TEM diffraction data. For this experiment, we consider the first batch of maps, where map 2 and map 3 are used for fine-tuning the models and map 1 for the performance evaluation. First, the models manage to classify real diffraction images correctly. The preprocessing adopted in the previous section has played a critical role in facilitating this transfer. Secondly, fine-tuning shows superior performance compared to using the pre-trained models without fine-tuning, especially for the angle predictions. This result is expected: there is an intrinsic difference between real TEM data and simulations, and allowing DL models to fine-tune their acquired knowledge on real data naturally leads to better predictions. We also note that the balanced accuracy is relatively low; it is a consequence of the highly class-imbalanced distribution in TEM micrographs (we refer the reader to Table 4.4).
Lastly, fine-tuning increased the overall prediction accuracy of Euler's angles for map 1. However, the accuracy is still not comparable to the level of precision that we obtained with simulated TEM data. In Figure 4.9, we depict the predicted maps for the orientation determination and the phase determination of map 1 by all the fine-tuned models. As expected from the classification accuracy, the phase maps are well predicted. Concerning the orientation map, the situation is of course less favorable. However, regardless of the relatively high maae error reported in the previous table by these models on map 1, the predicted Euler's orientation map highlights the main grain boundary in the micrograph of map 1, although some other boundaries are not so clearly visible. The FT/MT/LSTM model achieved the best balanced accuracy, and this is visually confirmed by its predicted phase determination map, compared to the two other maps from the FT/MT/CNN and FT/MT/MLP models. Finally, we can notice in the predicted orientation maps the presence of noise in the predictions compared to the ground truth map provided by ASTAR.
The lack of relevant real DP data covering the Euler space affects the robustness of these estimators.
In summary, the quality of the similarity between the simulated and real descriptors heavily depends on the preprocessing steps, which has a direct implication on the final model performance. Fine-tuning, multi-task learning, and an efficient preprocessing procedure allow us to reduce this bias when training with real descriptors. By combining all these tools, we successfully analyzed TEM experimental data. The promising qualitative and quantitative results show that DL approaches can successfully be used for real-time TEM data analysis, especially for phase identification. The approach presented in this section has, however, some limitations. First, it requires the simulation of all DPs in the fundamental zone of each considered phase. This simulation might be expensive depending on the symmetry class of the considered crystals. Our choice of using compact descriptors rather than raw 2D images reduces this cost significantly, at the expense of an information loss regarding the experimental DPs. Finally, even if the orientation maps present relevant information, the high noise rate in the predictions makes this initial DL analysis of the data much less efficient than deterministic algorithms such as template matching.
Alternatively, in the next section, we will investigate the potential of state-of-the-art DL models in analyzing experimental TEM data, relying solely on real diffraction diagrams and using the images as inputs.
Training on real TEM data
In the following, we are interested in the task of analyzing TEM data by relying solely on experimental DPs. For this purpose, we will use all the diffraction data available to us from the considered micrographs of Figure 4.1 to train and evaluate SOTA DL models from computer vision; we will first show the potential of these models and their limitations in solving the TEM data analysis task. The aggregated diffraction data is presented in Table 4.4. We consider for this set of experiments the SOTA models in computer vision tasks: the ResNet model [START_REF] He | Deep residual learning for image recognition[END_REF] and its variations, ResNet50, ResNet50V2, ResNet101, InceptionResNetV2, and ResNet152; the EfficientNet model [START_REF] Tan | Efficientnet: Rethinking model scaling for convolutional neural networks[END_REF] and its variations from B0 to B7; the DenseNet architecture [START_REF] Huang | Convolutional networks with dense connectivity[END_REF] and its variations; the Xception model [START_REF] Chollet | Xception: Deep learning with depthwise separable convolutions[END_REF]; the Inception model [START_REF] Szegedy | Rethinking the inception architecture for computer vision[END_REF]; and VGG16 and VGG19 [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF]. The barplot in Figure 4.10 presents the results of these SOTA models in phase classification, with the unique DPs dataset in blue and the 100-duplicate DPs dataset in orange. All models were trained on diffraction data from all the maps except map 1 (from the steel sample) and 100T (from the aluminium sample), which were kept for test set evaluation.
From the results depicted in Figure 4.10, we observe a significant drop in performance when the models are trained and evaluated on the dataset with at most 100 DP duplicates. The class label distribution does not significantly shift between these two datasets, as shown in Table 4.5. Our interpretation of this phenomenon relies on the fact that the SOTA DL models fail to learn meaningful representations of TEM data when trained with limited diffraction TEM data containing noisy duplicated DPs, thus making them inadequate for TEM data analysis. To remedy this inefficiency, we introduce in the following a new DL model, designed in the spirit of analyzing TEM diffraction data in experimental conditions. But first, we introduce the segmentation map of a micrograph as the result of a deterministic function that maps unique DPs, with a unique label and Euler angles, to a unique identification. Figure 4.11 presents the segmentation maps of all considered micrographs. We implement the encoding function that maps the angles and phase label to a unique identifier using a label encoder; these identifiers are then used as indices into the XKCD color list to retrieve the segmentation map.
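A possible implementation of this encoding is sketched below, assuming the per-pixel phase labels and Euler angles are available as arrays; the rounding of the angles to a fixed number of decimals is an assumption made to turn floating-point angles into hashable keys.

```python
import numpy as np
from matplotlib.colors import XKCD_COLORS, to_rgb
from sklearn.preprocessing import LabelEncoder

def segmentation_map(phases, euler_angles, decimals=2):
    """Encode (phase, Euler angles) per pixel into a unique id and an RGB image.

    phases       : (h, w) integer array of phase labels.
    euler_angles : (h, w, 3) float array of Euler angles.
    """
    h, w = phases.shape
    keys = [f"{phases[i, j]}_" + "_".join(
                f"{a:.{decimals}f}" for a in euler_angles[i, j])
            for i in range(h) for j in range(w)]
    ids = LabelEncoder().fit_transform(keys).reshape(h, w)
    palette = np.array([to_rgb(c) for c in XKCD_COLORS.values()])
    return ids, palette[ids % len(palette)]   # (h, w) ids and (h, w, 3) RGB image
```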
By definition of the mapping function, a segmentation map aggregates the information of both the phase determination and Euler's orientation maps. It is therefore an essential tool to facilitate the interaction between DL models and existing software for TEM data analysis: correctly predicted segmentation maps will significantly reduce the cost of post-processing analysis of the data with deterministic approaches such as template matching. The accurate prediction of a segmentation map requires a DL model to learn to distinguish between real DPs. Intuitively, differentiating DP images seems more accessible than orientation prediction and has a better chance of generalizing to unseen DP orientations. Consequently, we will focus on a specifically designed DL model for solving the segmentation map prediction problem for the rest of this study. This model is denoted the Multi-task Pairwise Siamese Network (MPSN). Siamese networks [START_REF] Bromley | Signature verification using a "siamese" time delay neural network[END_REF] are a particular class of DL models that internally contain two or more identical subnetworks. The siamese architecture is used in many applications, from anomaly detection and classification to similarity and representation learning [START_REF] Chopra | Learning a similarity metric discriminatively, with application to face verification[END_REF][START_REF] Chicco | Siamese Neural Networks: An Overview[END_REF]. The subnetworks in the siamese architecture are trained by mirroring the gradients during backpropagation; they can take pairs or triplets as inputs, and a rich literature exists on different training algorithms and losses, such as the contrastive loss or the triplet loss (for more details regarding siamese networks, please refer to [START_REF] Chicco | Siamese Neural Networks: An Overview[END_REF]). Furthermore, siamese networks are known to be more robust to class imbalance when a suitable sampling strategy for the input pairs is implemented during training.
This embedding is then connected to two separate heads. The siamese head is composed of a dense layer with relu activation for feature extraction. The feature vectors are used to estimate the distance metric. Then the result is mapped to the 0-1 space with the sigmoid function S(x) = 1/(1 + e -x ). The second head is a standard classification architecture, composed of a dense layer with the same number of units as the number of classes and a softmax activation function defined as σ(x) i = e x i /( j x j ) for each coordinate i of x, to map the logits to a probability distribution vector. The image pairs are also forwarded sequentially to obtain the probability vectors for phase prediction of each DP image.
The MPSN model is trained in an MTL fashion with a composite loss function balanced by a hyperparameter λ, constituted of a binary cross-entropy for the siamese head and another cross-entropy loss for the classification head: ℓ λ (x 0 , x 1 , y) = -λ[(y 0 log(f s (x 0 , x 1 ; θ e , θ s )) + (1 -y 0 ) log(1 -f s (x 0 , x 1 ; θ e , θ s )))] Given the predicted segmentation maps by the MPSN model, we can recover Euler's orientation map and the phase map by querying a deterministic approach, such as template matching to label a single DP from each segment and propagate the labels of this DP through all the segments. Table 4.6 and Table 4.7 present the precision of these retrieved maps with respect to the ground truth maps provided by ASTAR. In addition to the metrics used in the previous section, we introduce the reduction metric as the proportion of queries saved when relying on label propagation with the predicted segmentation maps rather than querying all DPs of the micrograph.
-(1 -λ)
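The following Keras sketch outlines this architecture and composite loss; the embedding size, the input resolution, the choice of the absolute feature difference as the distance, and the λ value are assumptions for illustration, not the exact settings of the experiments. The classification head is applied to a single image of the pair here, since both images share the same phase.

```python
import tensorflow as tf
from tensorflow.keras import layers

N_CLASSES, EMB = 6, 128          # placeholder sizes
INPUT_SHAPE = (144, 144, 3)      # placeholder DP image resolution

def build_mpsn(lmbda=0.5):
    encoder = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                              pooling="avg",
                                              input_shape=INPUT_SHAPE)
    feat = layers.Dense(EMB, activation="relu")                       # siamese features
    cls_head = layers.Dense(N_CLASSES, activation="softmax", name="phase")

    x0 = layers.Input(INPUT_SHAPE)
    x1 = layers.Input(INPUT_SHAPE)
    e0, e1 = encoder(x0), encoder(x1)

    # Siamese head: distance between features mapped to [0, 1].
    d = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([feat(e0), feat(e1)])
    same = layers.Dense(1, activation="sigmoid", name="same")(d)

    phase = cls_head(e0)

    model = tf.keras.Model([x0, x1], [same, phase])
    model.compile(optimizer="adam",
                  loss={"same": "binary_crossentropy",
                        "phase": "categorical_crossentropy"},
                  loss_weights={"same": lmbda, "phase": 1 - lmbda})
    return model

mpsn = build_mpsn(lmbda=0.5)
```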
Note that with smaller κ we obtain segmentation maps with fewer segments, which significantly reduces the number of queries needed to retrieve the Euler's orientation and phase maps, as shown in the tables. On the other hand, high κ values ensure better precision in predicting the phases and angles. These results show that the MPSN model architecture can drastically reduce the cost of analyzing TEM data with deterministic approaches, by over 90%, while providing a high accuracy on the resulting maps after the analysis. Furthermore, we show from our experimental protocol that the model can generalize to new experimental TEM data without the need for training on simulated data or fine-tuning. These arguments support the realistic deployment of such a DL approach in an interactive setting with an existing deterministic method to significantly speed up TEM data analysis for a real-time use case.
Conclusion
In this work, we investigated the potential of deep learning for the rapid automation of TEM data analysis. We proposed a preprocessing of simulated and experimental DPs with descriptors. We demonstrated the value of multi-task learning for the regression and classification tasks. In the current state of the presented methods, it appears that accurate phase determination can be reached both by training on simulated images and by training directly on experimental data. In the latter case, the main issue is obviously the amount and diversity of data available, since experimental maps contain a large proportion of duplicates. As for the simulated data strategy, in order to become more user-friendly, future work should also address how a pre-trained model could be easily transferred to different experimental configurations (camera length, precession conditions, etc.). Concerning the prediction of orientation angles, promising precision has been obtained on some Euler angles but not all, and further research is needed to improve this precision to a degree where the noise on experimental maps can be reduced enough to observe relevant features in most microstructures. The possibility of implementing a real-time approximation of the phase and orientation map during TEM acquisition is not out of reach given the results presented here, and it would be a valuable asset for more useful data collection by the experimentalists.
Chapter 5
Conclusions and Future Perspectives
The work in this thesis has concentrated on demonstrating the relevance of implicit information in the design of machine learning algorithms. This study direction has been extensively investigated by the community, with its different subdomains in semi-supervised learning, active learning, and transfer learning. We demonstrated that implicit information may take numerous forms and is especially useful in learning circumstances where labeled information is sparse and expensive. This implicit information can also come from a variety of sources, such as data distribution, a learning algorithm's knowledge model, or simulations. We studied these many sources of implicit information by developing tailored learning algorithms to recognize and exploit this knowledge to improve learning efficiency.
Further research into new approaches for designing learning algorithms that recognize and apply implicit information is critical. To carry out our daily activities, we humans create effective ways of combining contextual knowledge and prior experience. However, without enough labeled data, such activities remain beyond the capabilities of machine intelligence.
Future goals should be to achieve human intelligence not by solving a specific set of problems, as was previously regarded as a milestone for artificial intelligence, but rather by developing learning models and algorithms that better blend memory, experience, and perception. With emerging research directions such as few-shot learning or self-learning, the research community has already set its course in this direction. In my opinion, the research done in this area will be important in advancing AI to the next level.
Figure 1.1: Self-learning algorithm.
(central spot in the diffraction diagram); see the micrograph in Figure 1.4.

Figure 1.4: Transmission Electron Microscopy (TEM) analysis workflow.
diffraction diagram from a predefined set of banks of different crystals (Section 4.1 of Chapter 4 details the literature). The resulting match provides information about the type of crystal (phase detection) and its orientation in the specimen (orientation detection) at the coordinate of the queried diffraction diagram. The analysis result is presented as two different maps, the phases map and the orientation map, shown on the right-hand side of Figure 1.4.
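As a rough illustration of this matching step, the sketch below correlates one preprocessed diffraction pattern against a bank of simulated templates and returns the phase and orientation of the best match; the function and variable names are placeholders, and the normalized cross-correlation criterion is only one possible choice, not the exact ASTAR procedure.

import numpy as np

def match_diffraction(dp, template_bank, template_meta):
    """Match one experimental diffraction pattern against a bank of simulated templates.

    dp            : 2-D array, preprocessed diffraction pattern.
    template_bank : (n_templates, h*w) array of flattened simulated templates.
    template_meta : list of dicts with 'phase' and 'euler' for each template.
    Returns the phase label and Euler angles of the best-correlated template.
    """
    v = dp.ravel().astype(float)
    v = (v - v.mean()) / (v.std() + 1e-9)
    bank = template_bank - template_bank.mean(axis=1, keepdims=True)
    bank = bank / (template_bank.std(axis=1, keepdims=True) + 1e-9)
    scores = bank @ v / v.size            # normalized cross-correlation with each template
    best = int(np.argmax(scores))
    return template_meta[best]["phase"], template_meta[best]["euler"]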
In Chapter 2, we investigate a new self-training algorithm for binary classification with halfspaces under the assumption that the training set is corrupted by label noise. We use a Massart noise model to describe the label corruption and examine the generalization properties of the classifier found by the self-training algorithm.
The algorithm learns iteratively over the set of labeled and unlabeled training data by maximizing the unsigned margin of unlabeled examples and then assigning pseudo-labels to those with a distance greater than a found threshold. The pseudo-labeled unlabeled examples are then added to the training set, and a new classifier is learned. This process is repeated until there are no more unlabeled examples to pseudo-label. In the second step, pseudo-labeled examples with an unsigned margin greater than the last found threshold are removed from the training set. Our contribution is twofold: (a) we present a first generalization bound for self-training with halfspaces in the case where class labels of examples are supposed to be corrupted by a Massart noise model; (b) we show that the use of unlabeled data in the proposed self-training algorithm does not degrade the performance of the first halfspace trained over the labeled training data.
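A compact sketch of this pseudo-labeling loop is given below, with a logistic-regression classifier standing in for the halfspace learner and a simple fixed-decay threshold instead of the threshold search of the actual algorithm; it only illustrates the mechanism, not the analyzed procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for a halfspace learner

def self_train(X_lab, y_lab, X_unlab, theta=1.0):
    """Iteratively pseudo-label unlabeled points whose unsigned margin exceeds a threshold."""
    X_l, y_l, X_u = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression().fit(X_l, y_l)
    while len(X_u) > 0:
        w_norm = np.linalg.norm(clf.coef_)
        margins = np.abs(X_u @ clf.coef_.ravel() + clf.intercept_[0]) / w_norm
        keep = margins >= theta                  # unlabeled points confident enough to pseudo-label
        if not keep.any():
            theta *= 0.9                         # relax the threshold if nothing qualifies
            continue
        pseudo = clf.predict(X_u[keep])
        X_l = np.vstack([X_l, X_u[keep]])        # add pseudo-labeled points to the training set
        y_l = np.concatenate([y_l, pseudo])
        X_u = X_u[~keep]
        clf = LogisticRegression().fit(X_l, y_l) # learn a new classifier and repeat
    return clf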
algorithm is better or equal to any hypothesis in H_d obtained from S_ℓ only. Here we denote by η_w(x) = P_{y∼D_y(x)}[h_w(x) ≠ y] the conditional misclassification error of a hypothesis h_w ∈ H_d with respect to D, and by w* the normal vector of h_{w*} ∈ H_d that achieves the optimal misclassification error η* = min_{w, ∥w∥_2 ≤ 1} P_{(x,y)∼D}[h_w(x) ≠ y].
20: end while
21: Output: L_m = [(w^(1), γ^(1)), ..., (w^(m), γ^(m))]
where, ∀i ∈ {1, ..., k}, µ_i = (1/|C_i|) Σ_{x∈C_i} x and µ = (1/n) Σ_{i=1}^{n} x_i are the mean-sample per connected component C_i and the mean-sample of S_x, respectively: Calinski-Harabasz score
with a(x) being the average distance of sample x to its group, and b(x) = min_{j≠i, x∈C_i} (1/|C_j|) Σ_{x'∈C_j} d(x, x') the smallest average distance of x to the samples of any other component.
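Both clustering-quality criteria are available directly in scikit-learn, so a candidate partition of S_x can be scored as follows (the arrays below are placeholders):

import numpy as np
from sklearn.metrics import calinski_harabasz_score, silhouette_score

X = np.random.rand(200, 5)                  # placeholder for the samples S_x
labels = np.random.randint(0, 3, size=200)  # placeholder component assignment

print("Calinski-Harabasz:", calinski_harabasz_score(X, labels))
print("Silhouette:", silhouette_score(X, labels))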
Algorithm 3: Pool-based active learning on PTR
Require: S_x := {x_i}_{i=1}^n, oracle O, budget B and d : X × X → [0, ∞), graph parameter δ and merging parameter τ, active learner agent h_st(S_x, B) with an underlying pool-based strategy st, and r the active training rounds.
1: Compute density estimator P with (3.5).
2: Build the σ-Rips graph R_δ^σ(S_x).
3: Apply ToMATo_τ(R_δ^σ(S_x), P) to get C_{R_δ^σ}, the partition function that encodes the proper topological regions of S_x.
4: S_0 = {B largest connected components (ccs) of R_δ^σ(S_x), labeled with C^P_{R_δ^σ}}.
5: for i = 0, ..., r do
6: ...
Baseline methods. For the cold-start experiments, we consider the following approaches from the literature. K-Means clustering: the K-Means algorithm [Lloyd, Least squares quantization in PCM] partitions a collection of examples into K clusters by minimizing the sum of squared distances to the cluster centers. It has been used for active learning in [129] to generate the initial training set by labeling the closest sample to each centroid. Another variation proposed in [70] adds artificial samples from the centroids, named model examples, to the initial training set. This approach is named K-Means+ME and leads to an initial training set twice as large as the one created using K-Means.
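A minimal version of this K-Means cold-start baseline could look as follows, where the sample nearest to each centroid is queried from the oracle; the oracle is assumed here to be a function from sample indices to labels.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

def kmeans_cold_start(X, budget, oracle):
    """Select the initial training set by querying the sample nearest to each centroid."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=0).fit(X)
    idx = pairwise_distances_argmin(km.cluster_centers_, X)  # nearest real sample per centroid
    idx = np.unique(idx)                                      # drop possible duplicates
    return X[idx], np.array([oracle(i) for i in idx])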
To validate our hypothesis of a density-aware threshold (3.1) for class similarity, and to motivate our generalization of the Rips graph to express this notion, we present a comparison study in Figure3.1 between the Rips and the σ-Rips graphs overall considered datasets. Each plot presents the threshold of the best Rips graph and σ-Rips graph in minimizing the PuritySize cost function on all the datasets with the same practical considerations of Subsection 3.4.2. Note that the Rips graph's threshold is a constant presented as a horizontal line in the plots. We also show two additional side plots. The x-axis plot shows the distribution of the density estimation P in the dataset. In contrast, the y-axis plot shows the distribution of the Euclidean distances in the dataset's distance matrix D. We are interested in comparing the threshold rules within these intervals because, from Definition 3.1 of the Rips graph, and Definition 3.2 of the σ-Rips graph, threshold values greater than the maximum distance will result in a clique graph.
Figure 3.1: Comparison study between the Rips graph and the σ-Rips graph over all datasets; the PuritySize score is reported for each minimizer.
Figure 3.2: Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on the protein dataset, using a random forest estimator over 20 stratified random splits.

Figure 3.3: Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on the banknote dataset, using a random forest estimator over 20 stratified random splits.

Figure 3.4: Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on the coil-20 dataset, using a random forest estimator over 20 stratified random splits.

Figure 3.5: Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on the isolet dataset, using a random forest estimator over 20 stratified random splits.

Figure 3.6: Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on the pendigits dataset, using a random forest estimator over 20 stratified random splits.

Figure 3.7: Average balanced classification accuracy and standard deviation of different pool-based active learning strategies and budgets on the nursery dataset, using a random forest estimator over 20 stratified random splits.
The first batch consists of three different experimental micrographs presented in (a), (b), and (c) of Figure 4.1. Each collected map consists of 500 × 500 diffraction images of size 144 × 144 pixels each. The three micrographs contain four phases, namely the alpha iron (α-Fe), the gamma iron (γ-Fe), the niobium carbide (NbC), and the cementite (Fe3C).
The figure corresponds to the as-cast material microstructure used in the study, meaning the microstructure of the studied alloy in the as-cast conditions with no further SPD.
Figure 4.1: Collected micrographs of size 500 × 500.
Figure 4.2 shows the phase determination maps provided by ASTAR analysis of TEM data. We have the following microstructure composition:
Figure 4.2: ASTAR phase determination maps for all considered micrographs.
Figure 4.3: ASTAR Euler's orientation determination maps for all considered micrographs.
Figure 4.4: Simulation of diffraction patterns for TEM experiments.
Figure 4.5 shows the Gaussian filters g_σ* estimated from each micrograph, and Figure 4.6 shows the effect of our approach on random DPs from each micrograph before and after applying our preprocessing. On the other hand, we employ a preprocessing similar to the one introduced in [Zaefferer, Development of a TEM-Based Orientation Microscopy System] to preprocess simulated DPs, or templates. The simulation is parametrized by the number of diffraction spots and their size, tuned to best fit the real DPs. Next, we apply a blurring filter to smooth the intensities; then we extract intensity descriptors from their polar representation to obtain descriptors similar to those extracted from real diffraction diagrams.
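A schematic version of such a preprocessing chain for a single diffraction pattern is sketched below, using a Gaussian blur followed by a simple radial (polar) intensity descriptor; the filter width and the descriptor are illustrative and do not reproduce the exact pipeline of the thesis.

import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_dp(dp, sigma=2.0, n_bins=64):
    """Blur a diffraction pattern and extract a radial intensity descriptor from its polar form."""
    img = gaussian_filter(dp.astype(float), sigma=sigma)
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)            # distance to the central spot
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.digitize(r.ravel(), bins) - 1        # radial bin index of each pixel
    flat = img.ravel()
    descriptor = np.array([flat[which == b].mean() if (which == b).any() else 0.0
                           for b in range(n_bins)])
    return descriptor / (descriptor.max() + 1e-9)   # normalized radial intensity profile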
Figure 4.5: The first two rows correspond to the mean diffraction diagrams of DPs from each micrograph, and the following rows show their resulting Gaussian filters.
Figure 4.6: Preprocessing result on randomly sampled DPs. The first and second rows from the top are the raw DPs from each specimen, whereas the following rows are the resulting DPs after preprocessing.
Figure 4.8.
Figure 4.8: Multi-Task Learning approach compared to the hierarchical learning approach for training DL models in analyzing TEM data.
(a) Phases and Euler's orientation maps of the micrograph of map 1. (b) Phases and Euler's orientation maps predicted by the FT/MT/LSTM model. (c) Phases and Euler's orientation maps predicted by the FT/MT/MLP model. (d) Phases and Euler's orientation maps predicted by the FT/MT/CNN model. (Color legend: α-Fe, NbC, Fe3C.)
Figure 4.9: Predicted Euler's orientation maps and phase determination maps of all fine-tuned DL models for the micrograph of map 1.
Figure 4.10: Classification results of SOTA DL models; accuracy scores are on experiments with two datasets, the unique DPs dataset and the 100 DPs duplicates dataset.
Figure 4.11: Encoded segmentation maps for all considered micrographs.
Siamese networks have been successfully applied in representation learning algorithms; they focus on learning a latent representation with a semantic similarity, encoded in the similarity measure used during training.
Figure 4.12: Multi Pairwise Siamese Network (MPSN) architecture.
Figure 4.12 illustrates the architecture of the Multi-task Pairwise Siamese Network.
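A condensed PyTorch-style sketch of such a two-head network is given below, with a shared encoder, a pair-similarity (siamese) head and a phase-classification head over the six phases; the layer sizes and the equal weighting of the two losses are assumptions for illustration, not the exact MPSN configuration.

import torch
import torch.nn as nn

class TwoHeadSiamese(nn.Module):
    def __init__(self, in_dim, hidden=128, n_phases=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.pair_head = nn.Linear(2 * hidden, 1)    # same / different diffraction pattern
        self.cls_head = nn.Linear(hidden, n_phases)  # phase label of the first input

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        pair_logit = self.pair_head(torch.cat([z1, z2], dim=1))
        phase_logits = self.cls_head(z1)
        return pair_logit.squeeze(1), phase_logits

# combined objective: pair loss + classification loss on a toy batch
model = TwoHeadSiamese(in_dim=64)
pair_loss, cls_loss = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
x1, x2 = torch.randn(8, 64), torch.randn(8, 64)
y_pair = torch.randint(0, 2, (8,)).float()
y_phase = torch.randint(0, 6, (8,))
logit, phases = model(x1, x2)
loss = pair_loss(logit, y_pair) + cls_loss(phases, y_phase)
loss.backward()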
Σ_i Σ_{k=1}^{6} y_i^k log(f_c(x_i; θ_e, θ_c))    (4.4)

Where f_s(·; θ_e, θ_s) is the siamese parametric function, with the shared encoder parameters θ_e and the task-specific parameters θ_s, and f_c(·; θ_e, θ_c) is the classification parametric function with the shared encoder parameters θ_e and the task-specific parameters θ_c. MPSN is trained to optimize the following minimization formulation:
Figure 4.13: The first row corresponds to the segmentation maps of the micrographs 1 and 100T; the second row shows the ones predicted by the MPSN model with κ = 0.7.
Figure 4.14 and Figure 4.15 show the predicted orientation and phase maps obtained by label propagation, given the segmentation map predicted by the MPSN model with a confidence level κ = 0.8, for the micrographs of map 1 and map 100T, respectively.
(a) Phases and Euler's orientation maps of the micrograph of map 1. (b) Predicted phases and Euler's orientation maps using the MPSN model with κ = 0.8. (Color legend: α-Fe, NbC, Fe3C.)
Figure 4.14: Phases and Euler's orientation maps found using the segmentation map predicted by the MPSN model on map 1.
Figure 4.15: Phases and Euler's orientation maps found using the segmentation map predicted by the MPSN model on map 100T.
We addressed the regression task of orientation prediction and the classification task of phase determination, as well as transfer learning, by successfully learning to analyze simulated TEM diffraction data and applying it to experimental diffraction data. In parallel, we performed an extensive comparative study of SOTA DL models to analyze diffraction data with a realistic experimental protocol. Furthermore, we introduced the segmentation map, a new visualization support allowing DL-based solutions to interact with existing deterministic approaches for TEM data analysis. In this context, we address the problem of TEM data analysis for DL as the task of differentiating between experimental DPs, in order to predict the corresponding segmentation map. We proposed a new DL model, namely the Multi-task Pairwise Siamese Network, with dedicated training and inference procedures. The model is able to drastically reduce the labeling cost, in terms of the number of queries to traditional deterministic algorithms for TEM analysis, while showing favorable results in the prediction of maps using label propagation on the predicted segmentation map, with promising performance in generalizing to unseen micrographs.
For example, suppose we trained a model to solve a classification problem on simulated data. In that case, we could use the knowledge gained during its training to solve a new classification problem on real data. The advantage of this technique is that it uses the knowledge a model has learned from a source task with a lot of available labeled training data in a new task that does not have much data. Instead of starting the learning process from scratch, we start from the knowledge already acquired on the source task.
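A minimal PyTorch-style illustration of this idea is sketched below: an encoder assumed to be pre-trained on simulated signatures is frozen, and only a new classification head is trained on a small set of real, labeled examples; all names and sizes are placeholders.

import torch
import torch.nn as nn

# encoder assumed pre-trained on simulated DP signatures (weights would be loaded from disk in practice)
pretrained_encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU())
for p in pretrained_encoder.parameters():   # freeze the transferred knowledge
    p.requires_grad = False

new_head = nn.Linear(128, 6)                # new task: 6 phase classes on real data
model = nn.Sequential(pretrained_encoder, new_head)

optimizer = torch.optim.Adam(new_head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, 64)                     # small batch of real, labeled signatures
y = torch.randint(0, 6, (16,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()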
Table 2.1: data set statistics, -1 and +1 refer to the size of the negative and positive class respectively, and test is the size of the test set.

data set            d      -1     +1     ℓ + u   test
one-two             64     177    182    251     108
banknote            4      762    610    919     453
odd-even            64     906    891    1257    540
pc-mac              3868   982    963    1361    584
baseball-hockey     5724   994    999    1395    598
religion-atheism    7829   1796   628    1696    728
spambase            57     2788   1813   3082    1519
weather             17     43993  12427  37801   18619
delicious2          500    9610   6495   12920   3185
mediamill2          120    15969  27938  30993   12914
Table 2 .
2 2: Mean and standard deviations of accuracy on test sets over the 20 trials for each data set. The best and the second-best performance are respectively in bold and underlined. ↓ indicates statistically significantly worse performance than the best result, according to a Wilcoxon rank-sum test (p < 0.01)[START_REF] Wolfe | Nonparametrics: Statistical Methods Based on Ranks and Its Impact on the Field of Nonparametric Statistics[END_REF].[START_REF] Baram | Online choice of active learning algorithms[END_REF].71 ↓ 70.87 ± 13.24 ↓ 48.61 ± 3.98 ↓ 75.09 ± 1.30 53.65 ± 10.65 ↓ 77.77 ± 1.75 92.77 ± 3.05 88.00 ± 3.24 ↓ 49.35 ± 4.20 ↓ 84.67 ± 4.98 ↓ 75.78 ± 8.74 ↓ 91.34 ± 3.21 100 96.15 ± 1.38 92.50 ± 1.43 ↓ 67.82 ± 12.99 ↓ 86.52 ± 3.26 ↓ 55.41 ± 3.16 ↓ 56.53 ± 5.18 49.86 ± 1.77 ↓ 49.88 ± 1.89 ↓ 56.47 ± 5.50 58.66 ± 6.90 ↓ 69.29 ± 4.32 50.11 ± 1.84 ↓ 66.76 ± 5.40 ↓ 50.16 ± 1.90 ↓ 72.85 ± 6.52 100 68.40 ± 4.65 ↓ 76.25 ± 2.41 ↓ 49.97 ± 1.82 ↓ 71.12 ± 5.06 ↓ 50.35 ± 1.89 ↓ 79.48 ± 4.36 religion-atheism 67.30 ± 6.95 57.30 ± 4.89 ↓ 67.59 ± 6.36 60.67 ± 16.37 ↓ 71.95 ± 5.03 64.25 ± 7.24 ↓ 74.61 ± 1.62 71.79 ± 1.98 ↓ 67.43 ± 6.05 62.59 ± 9.42 ↓ 74.99 ± 6.04 61.15 ± 0.86 ↓ 78.25 ± 2.62 53.63 ± 9.86 ↓ 76.13 ± 3.08 100 69.43 ± 10.19 ↓ 80.07 ± 4.08 61.24 ± 10.26 ↓ 79.08 ± 2.83
Dataset ℓ SVM LTF LP GM ERLR L m
one-two 61.38 ± 79.25 ± 6.87 ↓ 94.62 ± 2.46
57.50 ± 7.21 ↓ 69.40 ± 5.53 ↓ 55.98 ± 2.00 ↓ 69.04 ± 4.60 ↓ 56.71 ± 4.53 ↓ 77.24 ± 3.81
banknote 61.67 ± 4.86 ↓ 82.31 ± 2.13 ↓ 56.28 ± 1.89 ↓ 75.48 ± 5.30 ↓ 65.95 ± 2.01 ↓ 85.64 ± 5.36
100 71.65 ± 6.24 ↓ 89.38 ± 3.24 57.20 ± 2.19 ↓ 77.56 ± 4.34 ↓ 70.95 ± 3.24 ↓ 90.82 ± 3.31
53.45 ± 4.80 ↓ 58.20 ± 4.71 ↓ 50.37 ± 1.95 ↓ 60.69 ± 7.48 50.40 ± 2.21 ↓ 63.21 ± 7.51
odd-even 64.75 ± 5.65 ↓ 76.84 ± 2.99 ↓ 50.37 ± 1.95 ↓ 62.67 ± 5.82 ↓ 53.17 ± 4.80 ↓ 80.61 ± 3.10
100 75.89 ± 6.25 ↓ 77.68 ± 4.56 ↓ 53.37 ± 1.95 ↓ 64.25 ± 8.18 ↓ 59.23 ± 6.28 ↓ 84.58 ± 2.12
51.00 ± 3.22 ↓ 54.92 ± 2.00 ↓ 50.93 ± 1.59 ↓ 54.76 ± 3.42 ↓ 50.14 ± 2.06 ↓ 57.75 ± 3.19
pc-mac 58.85 ± 5.09 ↓ 61.78 ± 2.86 ↓ 50.83 ± 2.08 ↓ 58.78 ± 4.31 ↓ 49.71 ± 1.99 ↓ 64.31 ± 3.55
100 64.57 ± 4.42 ↓ 67.98 ± 2.37 50.76 ± 2.26 ↓ 62.49 ± 1.88 ↓ 50.36 ± 2.19 ↓ 68.15 ± 5.66
51.57 ± 2.98 ↓
baseball-hockey
80 ↓
60.04 ± 0.62 54.78 ± 2.57 ↓ 60.00 ± 0.59 48.35 ± 1.31 ↓ 53.48 ± 8.66 ↓ 55.37 ± 3.33 ↓
100 58.88 ± 3.70 56.04 ± 1.83 83
100 63.64 ± 0.15 ↓ 64.26 ± 4.79 36.37 ± 0.15 ↓ 67.34 ± 0.73 63.64 ± 0.16
In our experiments, we have randomly chosen 70% of each data collection for training and the remaining 30% for testing. We randomly selected sets of different sizes (i.e., ℓ ∈ {10, 50, 100}) from the training set as labeled examples; the remaining was considered as unlabeled training samples. Results are evaluated over the test set using the accuracy measure. Each reported performance value is the average over the 20 random (labeled/unlabeled/test) sets of the initial collection. All experiments are carried out on a machine with an Intel Core i7 processor (2.2 GHz, quad-core) and 16 GB of 1600 MHz RAM.
Analysis of results. Table 2.2 summarizes the results. We used boldface (resp. underline) to indicate the highest (resp. the second-highest) performance rate, and the symbol ↓ indicates that performance is significantly worse than the best result, according to a Wilcoxon rank-sum test with a p-value threshold of 0.01 [Wolfe, Nonparametrics: Statistical Methods Based on Ranks and Its Impact on the Field of Nonparametric Statistics]
. From these results, it comes out that the proposed approach (L m ) consistently outperforms the supervised halfspace (LTF). This finding is in line with the result of Theorem 2.8. Furthermore, compared to other techniques, L m generally performs the best or the second best. We also notice that in some cases, LP, GM, and ERLR outperform the supervised approaches, SVM and LTF (i.e., GM on spambase for ℓ ∈ {10, 50}), but in other cases, they are outperformed by both SVM and LTF (i.e., GM on religionatheism). These results suggest that unlabeled data contain useful information for classification and that existing semi-supervised techniques may use it to some extent. They also highlight that developing semi-supervised algorithms following the given assumptions are necessary for learning with labeled and unlabeled training data but not sufficient. The importance of developing theoretically founded semi-supervised ↓ 69.16 ± 7.88 74.16 ± 1.88 72.47 ± 2.00 100 74.66 ± 1.59 73.67 ± 1.76 62.84 ± 19.33 ↓ 70.45 ± 4.39 ↓ 73.21 ± 1.75 73.77 ± 1.82 spambase 61.20 ± 5.15 ↓ 57.80 ± 5.29 ↓ 60.82 ± 0.84 ↓ 74.41 ± 6.64 53.38 ± 11.23 ↓ 68.92 ± 5.83 ↓ ↓ 58.21 ± 6.34 ↓ 81.93 ± 2.46 weather 74.85 ± 0.51 68.09 ± 1.73 ↓ 75.49 ± 0.34 75.02 ± 2.79 40.35 ± 17.29 ↓ 75.08 ± 4.18 75.79 ± 0.28 75.30 ± 3.85 77.99 ± 0.31 75.68 ± 2.78 41.55 ± 27.39 ↓ 75.34 ± 3.80 100 77.99 ± 0.25 76.27 ± 3.64 77.99 ± 0.25 74.92 ± 1.92 46.00 ± 24.87 ↓ 77.28 ± 2.99 delicious2 51.83 ± 9.88 50.59 ± 2.65 ↓ 60.02 ± 0.61 49.41 ± 3.83 ↓ 51.83 ± 10.42 ↓ 51.08 ± 1.↓ 59.87 ± 0.67 48.92 ± 0.94 ↓ 54.43 ± 7.27 ↓ 56.54 ± 1.87 ↓ mediamill2 62.54 ± 2.62 ↓ 60.98 ± 6.85 ↓ 36.35 ± 0.15 ↓ 63.92 ± 1.71 47.24 ± 14.08 ↓ 64.31 ± 3.14 63.64 ± 0.15 ↓ 60.88 ± 7.45 ↓ 36.36 ± 0.15 ↓ 65.98 ± 3.32 58.58 ± 11.88 ↓ 65.41 ± 4.↓ 67.80 ± 2.21
Table 3.1 presents statistics of the datasets.

Table 3.1: dataset statistics, n is the size of the training set, test is the size of the test set, and imbalance corresponds to the class imbalance ratio. The third column corresponds to the dimension of the feature space R^p.
dataset n p c imbalance test
protein 756 77 8 0.70 324
banknote 943 4 2 0.83 405
coil-20 1008 1024 20 1.00 432
isolet 4366 617 26 0.99 1872
pendigits 7694 16 10 0.92 3298
nursery 9070 8 4 0.09 3888
adult 34.2k 14 2 0.31 14.7k
Table 3 .
3 2: Average balanced classification accuracy (in %) and standard deviation of random forest classifier with the initial training set obtained from different methods over 20 stratified random splits for different budgets B. ↑ / ↓ indicate statistically significantly better/worse performance than Random Selection RS, according to a Wilcoxon rank sum test (p < 0.05) [117]. 84.32 ± 5.58 ↑ 62.48 ± 3.32 ↑ 63.69 ± 4.49 ↑ 58.22 ± 7.30 ↑ 58.74 ± 7.96 70.21 ± 14.73 ↑ 10 79.88 ± 9.91 85.23 ± 5.68 ↑ 86.80 ± 4.75 ↑ 87.59 ± 3.33 ↑ 85.59 ± 5.08 ↑ 70.58 ± 5.30 ↓ 82.40 ± 6.92 88.68 ± 4.43 ↑ 20 87.58 ± 2.91 90.74 ± 2.40 ↑ 92.43 ± 2.00 ↑ 92.34 ± 2.44 ↑ 92.58 ± 2.86 ↑ 71.89 ± 7.16 ↓ 90.92 ± 3.18 ↑ 93.88 ± 3.44 ↑ ± 0.00 ↑ 14.99 ± 0.00 ↑ 14.99 ± .00 ↑ 14.99 ± 0.00 ↑ 10.83 ± 1.98 ↓ 11.70 ± 2.26 13.59 ± 1.66 10 28.97 ± 5.66 36.68 ± 4.24 ↑ 38.15 ± 2.70 ↑ 32.85 ± 5.14 ↑ 36.00 ± 3.66 ↑ 18.56 ± 3.38 ↓ 27.21 ± 4.80 44.18 ± 2.43 ↑ 20 42.02 ± 5.83 56.69 ± 3.74 ↑ 62.99 ± 2.78 ↑ 42.30 ± 3.47 58.10 ± 4.07 ↑ 25.61 ± 2.46 ↓ 41.40 ± 4.74 71.05 ± 3.78 ↑ 07.81 ± 1.63 09.05 ± 1.86 ↑ 09.24 ± 0.96 ↑ 07.51 ± 1.84 10.82 ± 1.05 ↑ 10 13.78 ± 2.97 22.30 ± 1.55 ↑ 27.61 ± 1.59 ↑ 07.06 ± 1.85 ↓ 23.26 ± 1.76 ↑ 16.47 ± 1.67 ↑ 15.41 ± 3.18 27.53 ± 2.84 ↑ 20 19.20 ± 2.69 27.88 ± 2.46 ↑ 40.37 ± 3.22 ↑ 10.73 ± 1.98 ↓ 28.19 ± 2.14 ↑ 18.79 ± 2.42 21.14 ± 3.13 ↑ 38.63 ± 3.17 ↑ ± 2.55 ↑ 19.42 ± 1.79 ↓ 17.26 ± 3.72 ↓ 17.81 ± 4.86 ↓ 29.89 ± 0.04 ↑ 10 37.35 ± 7.19 62.54 ± 3.46 ↑ 65.63 ± 2.25 ↑ 53.90 ± 5.21 ↑ 61.39 ± 1.86 ↑ 27.17 ± 4.87 ↓ 38.33 ± 8.20 80.11 ± 2.60 ↑ 20 54.25 ± 5.86 72.26 ± 2.72 ↑ 75.80 ± 2.31 ↑ 64.00 ± 3.64 ↑ 72.34 ± 2.45 ↑ 34.78 ± 4.49 ↓ 52.24 ± 5.93 87.68 ± 4.10 ↑ ± 0.15 ↓ 28.33 ± 3.86 ↓ 29.97 ± 3.24 29.99 ± 3.74 35.07 ± 5.57 ↑ 10 42.74 ± 7.20 44.45 ± 5.71 49.26 ± 4.01 ↑ 28.43 ± 1.30 ↓ 44.92 ± 7.22 39.06 ± 3.52 45.12 ± 6.70 46.47 ± 5.98 20 55.33 ± 2.77 52.78 ± 3.27 ↓ 54.42 ± 2.99 32.91 ± 1.10 ↓ 53.82 ± 2.71 39.79 ± 1.07 ↓ 52.52 ± 4.90 ↓ 54.08 ± 4.51
Dataset B RS K-Means K-Means+ME K-Medoids AHC FFT APC TPR(USRG)
3 16.91 ± 3.98 21.21 ± 1.80 ↑ 23.87 ± 2.16 ↑ 21.24 ± 4.36 ↑ 22.74 ± 2.54 ↑ 17.35 ± 3.33 16.72 ± 3.32 22.07 ± 6.20 ↑
protein 10 28.20 ± 3.21 30.61 ± 4.56 ↑ 31.40 ± 4.53 ↑ 29.28 ± 4.35 31.58 ± 3.67 ↑ 21.78 ± 3.80 ↓ 28.79 ± 3.43 40.49 ± 3.92 ↑
20 36.42 ± 3.76 42.07 ± 3.90 ↑ 45.53 ± 2.52 ↑ 39.16 ± 4.84 43.43 ± 3.40 ↑ 26.11 ± 3.37 ↓ 39.18 ± 3.70 53.99 ± 3.39 ↑
3 55.48 ± 7.22 73.98 ± 4.59 ↑
banknote
coil-20 14.99 isolet 3 12.35 ± 2.62 3 07.64 ± 1.54 08.69 ± 0.85 ↑ 09.68 ± 0.63 ↑
pendigits 26.57 nursery 3 21.45 ± 3.49 21.27 ± 1.93 22.53 ± 2.09 3 30.71 ± 4.00 29.20 ± 5.19 30.23 ± 6.50 25.04
Table 4.1: Statistics of signatures extracted from simulated DPs of three phases: the alpha iron, the niobium carbide, and the cementite.
Table 4.2: Balanced accuracy score and mean absolute angle error of the models LSTM, CNN, and MLP trained on hierarchy or in MT with simulated DPs.
it can provide the models with better accuracy by letting a single model ingest all available data.
Table 4.3 presents the results of our experiments. For each MT model, we report performance results for the prediction of angles with the maae metric, and for phases with the accuracy and balanced accuracy metrics. The results are reported with and without fine-tuning for the models MT/LSTM, MT/MLP, and MT/CNN; fine-tuned models are prefixed with FT, and the best performances are in bold. * refers to the best overall performance for each metric.
models accuracy balanced accuracy maae (degrees) ϕ 1 Φ ϕ 2
MT/LSTM 98.6% * 34.5% 92.6 8.9 18.8
FT/MT/LSTM 98.1% 37.8% * 46.0 6.2 7.5 *
MT/MLP 83.7% 31.5% 101.4 6.5 17.1
FT/MT/MLP 97.8% 34.4% 39.9 * 5.9 * 8.9
MT/CNN 93.9% 35.0% 78.0 9.6 26.0
FT/MT/CNN 93.2% 36.1% 40.6 6.37 8.2
Table 4.3: Balanced accuracy score and mean absolute angle error of the pretrained models MT/LSTM, MT/CNN, and MT/MLP on map 1; each model is compared with and without fine-tuning using maps 2 and 3.
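The two figures of merit reported in these tables can be computed as in the following sketch; the balanced accuracy comes from scikit-learn, and the angular error shown here simply takes the shorter way around the circle, ignoring any crystal-symmetry reduction.

import numpy as np
from sklearn.metrics import balanced_accuracy_score

def mean_absolute_angle_error(pred_deg, true_deg):
    """Mean absolute angular error in degrees, taking the shorter way around the circle."""
    diff = np.abs(np.asarray(pred_deg) - np.asarray(true_deg)) % 360.0
    return np.mean(np.minimum(diff, 360.0 - diff))

y_true, y_pred = [1, 3, 3, 5], [1, 3, 4, 5]                 # placeholder phase labels
print(balanced_accuracy_score(y_true, y_pred))
print(mean_absolute_angle_error([359.0], [1.0]))            # shorter arc: 2 degrees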
From Table 4.3, we note that the classification accuracy of all pre-trained models is high. It indicates a benefit in transferring from training on simulated DP signatures. All the models can successfully apply the learned knowledge of classifying extracted signatures from simulated diffractions to classify extracted signatures from real diffraction diagrams.
Therefore, discarding the duplicates will drastically reduce the data available for training DL models. Additionally, for experimental TEM data, DP duplicates do not contain the same signal: depending on the detailed electron beam diffraction and the superposition of grains, each diffraction pattern is slightly different and results in a different signal among the duplicates. Hence, trained models should be able to correctly predict these DPs in experimental conditions, to provide a realistic evaluation of TEM data analysis by DL approaches. To this end, we design the following experimental protocol to evaluate the potential of DL models to analyze TEM data. We use all available diffraction data from Table 4.4 to constitute two datasets: the first dataset contains only unique DPs, and the second dataset contains DP duplicates, up to 100 duplicates. To sample this dataset, we undersampled all DP images that have more than 100 duplicates; Table 4.5 summarizes the statistics of these two sampled datasets.
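This undersampling step can be expressed, for instance, as a pandas group-by that keeps at most 100 entries per identical (phase, Euler angles) group; the column names and file are hypothetical.

import pandas as pd

def cap_duplicates(df, max_per_group=100):
    """Keep at most `max_per_group` diffraction patterns per identical (phase, Euler angles) label."""
    keys = ["phase", "phi1", "Phi", "phi2"]
    return df.groupby(keys).head(max_per_group)

# df = pd.read_parquet("tem_dps.parquet")   # hypothetical table of DP records
# sampled = cap_duplicates(df)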
phase size proportion label
Al 818260 56.1% 1
Al 6 Fe 35246 2.4% 2
α-Fe 557322 38.2% 3
NbC 3943 0.3% 4
Fe 3 C 40390 2.8% 5
γ-Fe 2252 0.2% 6
Table 4.4: Raw TEM data statistics over all micrographs.
The majority of traditional ML experimental protocols include the filtering of sample duplicates in their feature engineering steps. Sample duplicates are, in our case, all DP images that share the same class label and the same Euler's angles. Thus, dropping sample duplicates is a crucial step in learning efficient models. Yet, when analyzing TEM data, sample duplicates may be essential for training DL models. First, as previously mentioned, TEM data exhibits a high volume of duplicates.
Table 4.5: Sampled TEM data statistics from all available diffraction data.
Table 4.6: Accuracy, balanced accuracy and mean absolute angle error of the MPSN model predictions on map 1; the reduction is in terms of number of queries.
κ reduction accuracy balanced accuracy maae (degrees) ϕ 1 Φ ϕ 2
0.6 99.58% 95.7% 55.9% 10.0 5.3 7.2
0.7 98.59% 97.5% 63.5% 5.7 3.2 4.7
0.8 94.70% 98.4% 75.7% 3.6 2.1 3.2
0.9 66.54% 99.4% 88.8% 1.3 0.8 1.2
Table 4.7: Accuracy, balanced accuracy and mean absolute angle error of the MPSN model predictions on map 100T; the reduction is in terms of number of queries.

κ reduction accuracy balanced accuracy maae (degrees) ϕ1 Φ ϕ2
0.6 99.50% 99.2% 66.6% 50.9 39.3 51.0
0.7 97.84% 99.3% 73.3% 22.4 18.1 24.3
0.8 92.37% 99.6% 87.9% 11.3 9.0 12.8
0.9 65.67% 99.9% 95.2% 4.8 3.9 5.8
https://miai.univ-grenoble-alpes.fr
For research purposes, the code will be freely available.
https://blog.xkcd.com/2010/05/03/color-survey-results/
size allows us to efficiently simulate the Euler's orientation space of a given phase. Once the Euler's orientation space is simulated for a given phase, the last step of preprocessing is to filter out the symmetries and keep only the descriptors of angles in the fundamental zone of each phase. This last step depends on the symmetry class of each phase since, for different symmetry classes, different fundamental zones need to be considered [Nolze, Euler angles and crystal symmetry].
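As an illustration, the orientation space can be sampled on a regular grid of Euler angles before this filtering step; the fundamental-zone test is left as a placeholder predicate because it depends on the symmetry class of the phase.

import itertools
import numpy as np

def sample_euler_grid(step_deg=5.0, in_fundamental_zone=lambda e: True):
    """Regular grid over (phi1, Phi, phi2) in degrees, filtered by a symmetry-dependent predicate."""
    phi1 = np.arange(0.0, 360.0, step_deg)
    Phi = np.arange(0.0, 180.0, step_deg)
    phi2 = np.arange(0.0, 360.0, step_deg)
    grid = itertools.product(phi1, Phi, phi2)
    return np.array([e for e in grid if in_fundamental_zone(e)])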
Where t refers to the t-th pair in the created sample set of pairs. The strategy for sampling image pairs is as crucial for standard siamese models as it is for MPSN. After training the model, we can predict the class labels for the phase determination map and the segmentation map from preprocessed DPs using the algorithm below, which takes as input the TEM diffraction data of a micrograph and returns phase labels and segmentation ids to construct the maps.

Predict pair labels on pairs (g, duplicate(x)) with the classification head of MPSN using the confidence level κ.
Add the set of positive pairs from the prediction to L with labels (p, c).
Remove this set from g. Set c = c + 1.
end while
end for
Output: L, the set of labeled TEM DP images, given unlabeled as input in S.
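A simplified rendering of this greedy grouping loop is sketched below, where mpsn_same stands for the classification head's decision on a pair at confidence level κ and query_phase for the single deterministic query made per segment; both callables and the data structures are placeholders.

def label_by_pairs(dps, mpsn_same, query_phase):
    """Greedily group DPs into segments; each segment keeps one queried phase label."""
    remaining = list(range(len(dps)))
    labeled = {}          # index -> (phase, segment_id)
    segment_id = 0
    while remaining:
        anchor = remaining.pop(0)
        phase = query_phase(dps[anchor])                       # one deterministic query per segment
        same = [i for i in remaining if mpsn_same(dps[anchor], dps[i])]
        for i in [anchor] + same:
            labeled[i] = (phase, segment_id)                   # propagate the label to positive pairs
        remaining = [i for i in remaining if i not in same]
        segment_id += 1
    return labeled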
The algorithm above takes advantage of both predictions in the MPSN model to retrieve the segmentation map of a given micrograph; the parameter κ determines the precision of the retrieved map: higher values will result in a segmentation map with a high number of segments, whereas values close to 0.5 will produce segmentations with fewer and larger segments. The figure below shows the predicted segmentation maps of the micrographs 1 and 100T by the MPSN model. Figure 4.13 shows the segmentation maps of map 1 and 100T, predicted by the trained MPSN model using the algorithms described above. For the purpose of visualization clarity, we fixed κ to 0.7 rather than the default value of 0.8. We observe that the predicted maps are visually similar to the ground-truth maps; note that for the segmentation maps, we are interested in the segment shapes and not their color. The color id is just a code to differentiate between different segments. In our case, we suppose that each segment contains a unique DP with a given phase label and orientation. Especially for the micrograph of map 100T, the MPSN model is able to retrieve the mosaic patterns in the Euler's angle map; this confirms our intuition that using DL to differentiate between DP pairs may lead to better results than addressing the regression task of angles, in particular when generalizing to unseen orientations.
04121477 | en | [ "shs" ] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04121477/file/Panos%20Survey%20Report_English_FRENCH%20Version.pdf

table of contents
ACRONYMS AND ABBREVIATIONS
Finding 5: Currently, the preparation for possible trips abroad is the most significant motivation for vaccination.
Approximately 61% of respondents report that they take the vaccine at this time (during the survey, March 2023) for reasons of foreign travel, while 39% indicate a late awareness of the danger of COVID-19.
Executive Summary
The objective of this study -conducted in March 2023 at vaccination sites in the West and North Departments of Haiti -is to determine the reasons why, in the period between January and March 2023, people who turned up in large numbers to be vaccinated against COVID-19, did not want to do so before. Therefore, the specific objective of this survey report is to bring to the surface three main categories of information:
a. Information available to respondents about the need for CO-VID-19 vaccination; b. The reasons that prevented them from getting the vaccination before; c. The main reasons for the decision to get vaccinated now.
This study was conducted among a sample of 446 people (48% women and 52% men). The questionnaire was administered both faceto-face (80% of respondents) and by telephone (20% of respondents). The study revealed that most respondents had at least one piece of information relevant to the need for vaccination, with reducing the risk of contagion and the development of serious disease being the main reasons cited (70%).
The main reasons mentioned that prevented them from getting vaccinated before the survey include lack of information (85%), rumors (71%), and lack of confidence in the authorities (51%). Finally, besides a minority indicating that they only recently became aware of the existence of COVID-19, a large majority of respondents mentioned their preparation for possible trips abroad (61%) as a reason for getting vaccinated now.
The impact of rumors and misinformation on vaccination against COVID-19
Survey Report :
APRIL 2023
Summary of results
The main findings of this study are the following:
Finding 1: Overall, the factors indicating the need for COVID-19 vaccination are partially known by participants.
Most participants were able to cite at least one reason for COVID-19 vaccination. For example, for 70% of respondents, vaccination against COVID-19 can help protect against more serious diseases. In addition, 66% of the respondents think that vaccination is a determining factor in reducing the risk of contagion, while 25% indicate that it ensures the health security of the family and the community.
The obstacles that prevented the respondents from being vaccinated before can be broken down as follows:
Finding 2: Exposure to rumors and misinformation had a negative impact on the decision of respondents.
Rumors, distrust of authorities, and lack of information were the main trends, often repeated, in the responses collected, regarding barriers to vaccination.
Finding 3: Respondents report that they were not able to properly access quality information.
85% of respondents said that they did not get vaccinated before because of lack of access to quality information about COVID-19. In addition, 61% said that they did not know where the vaccination centers were located. 51% indicated that information was not available through media such as radio and television. In the scope of these direct responses (yes or no), 71% of respondents aged 18-39 years said that rumor and misinformation were the reasons for their lack of motivation to get vaccinated against COVID-19.
Finding 4: According to the respondents, the negative representation of the COVID-19 pandemic has also negatively influenced their decision about getting vaccinated.
66% of respondents indicated that they did not believe that COVID-19 existed. Furthermore, 51% of the respondents indicated that they had no confidence at all in the Haitian authorities. Finally, 45% of respondents believed that Christians should not be vaccinated.
Introduction
Infectious diseases represent high risks for our health and that of our loved ones. Most of the time, these constitute life-threatening factors. Immunization through vaccination is one of the many strategies for promoting robust health, which in turn is one of the elements that lead to well-being. It is on this basis that the WHO adopted in 2021 its new Program for Immunization1 which «provides a strategic framework for addressing key immunization issues in the context of primary health care and universal health coverage during the period 2021-2030.» 2 Also, PAHO regularly emphasizes the importance of immunization in the scope of efforts to prevent or eliminate certain diseases. Hence, the annual organization of recurrent vaccination campaigns in the Americas.
In recent years, COVID-19 has been among the priorities of the above-mentioned vaccination campaigns and policies. Declared by the WHO on March 11, 2020, as a pandemic, the Coronavirus, an emerging disease, entered Haiti and various measures were imposed to limit the fallout. This infectious pathology required the adoption of new behaviors and/or innovative strategies related to preventive measures. From quarantine to physical distancing, the wearing of masks, the promotion of hand washing, and the closing of borders, the country arrived at a much more reassuring strategy: the anti-CO-VID-19 vaccination, which aims at collective immunity of the population and the reduction of the chain of transmission from one person to another.
The Ministry of Public Health and Population (MSPP), in collaboration with its local and international partners, has made great efforts to monitor and respond to this pandemic. This has been done through communication campaigns, vaccination and mass community and group awareness, in order to persuade people to be fully vaccinated. However, until recently, the results have shown that these efforts have not led the people to get vaccinated. Some centers, such as in the North and West, have even been closed due to a lack of demand for vaccination. In total, as of December 30, 2022, only 2% of the population is fully vaccinated 3 .
Since January 2023, with the announcement of the US Humanitarian Parole Program, this trend has reversed. Requests for vaccinations are becoming more and more urgent. This is shown by the statistics of the North department. The health authorities in the North reported that since January 2023, the number of people who voluntarily want to be vaccinated has increased considerably. For example, the Health Directorate of the North Department (DSN) reports that from January through March alone, one of their vaccination sites administered approximately 135 doses of vaccine, compared to 200 doses over the past six months. For the entire department, a total of 3,483 doses of vaccine were administered for the period January to March 2023. This is significantly higher than the cumulative number of doses administered during the previous six months. The same trend was noted in the West.
This is obviously a phenomenon to be understood, but it is also necessary to determine why people who have come en masse to be vaccinated since January 2023 did not want to be vaccinated before. This survey intends to provide two types of information: to verify the level of information available to new applicants for vaccination, and to assess their perception of the reasons why they had not yet been vaccinated. It should be noted that at the time of this survey, some centers which had been closed for non-attendance had to reopen due to a surge in demand.
COVID-19 in the global health ecosystem in Haiti
Since the official announcement by the government of the presence of COVID-19 in Haiti, the country is living with the pandemic in a form of paradox: on the one hand, there are state and institutional decisions that impose a pattern of behavior on citizens (notably prohibitions on meeting in large groups and the obligation to adopt barrier measures).
On the other hand, despite government announcements and measures, citizens adopt their own strategies for managing the pandemic. These include distrusting the official and institutional word and referring to local and traditional medicinal approaches. It could be said that COVID-19 is a new factor to measure the relationship between Haitian institutions and society and vice versa. Its relations can be qualified, to use a concept of the philosopher Jacques Rancière, "as relations of non-relation,» a mode of parallel coexistence where the State and the citizens settle in a form of dialogue of the deaf.
There is not a culture of substantial care for public health in Haiti. The State only devotes 13 US dollars per capita per year according to its budget. This makes it one of the lowest rates in the Western Hemisphere 4. The various reports on health in Haiti show that, in general, there is no investment in the health sectors and public resources for health are far from meeting the growing needs of the population 5. In other words, the latter are left to their own devices and must either find the means to go to a private doctor or turn to traditional medicine, even though its results are sometimes mixed. The use of official medicine remains very low (in 2017, the rate of attendance at health institutions was only 7% of the population) 6 and is usually done either because it is difficult to find solutions elsewhere, or because this attendance is part of the conditions required by external institutions, including immigration, to be allowed to enter a target country.
As already mentioned in the introduction, the current influx of vaccine seekers to anti-COVID-19 vaccination centers seems to result from the wariness of Haitians of anti-COVID-19 vaccine campaigns until now. But it should be noted that the anti-vaccination trend is not specifically Haitian. It must be understood as part of an international movement of distrust of state authorities. This distrust, in Haiti, is motivated by different reasons and has taken different forms of dispute depending on the context. The literature review below will discuss a few anti-vaccine movements before highlighting the main characteristics of this trend in Haiti.
Conceptual approach
Anti-vax: an international trend
The anti-vax movement is a trend noticed in several countries, from when the first doses of the COVID-19 vaccine were released. It is a form of protest claiming the freedom not to be vaccinated. This distrust is interpreted in some contexts as a generalized citizen distrust of all institutions and sometimes as a form of resistance to the State. In France, 7 the United States of America and Canada 8 , the movement has taken different forms and magnitudes: public statements in the media, street demonstrations, defiance of the authorities and confrontations with law enforcement officers in some contexts.
Within the framework of the anti-vax movements in the fall of 2021, the refusal to be vaccinated was justified for several reasons: the protesters sometimes denounced a conspiracy of the political and wealthy elites against ordinary people; accused the big pharmaceutical laboratories of considering them as guinea pigs; or saw in the imposition of the vaccine a form of political despotism, depriving each individual of the right to freely arrange his or her life. Together, these protests called into question the very notion of trust in public and private institutions. In other words, the state logic of collective protection and the institutional ethic of international solidarity is opposed by a logic of individual sovereignty, including in the face of a pandemic that could prove fatal.
Vaccine protest was thus motivated either by political reasons through opposition to an established power, or through much more imprecise forms such as conspiracy theories, government corruption, or feelings of inequality. As such, the issue was both the object of political expression and of demonstrations by citizens on the ground in the name of individual freedom. In the United States, the latter aspect has been expressed in a most systematic way. Several States, including Texas, have in fact widely supported both citizens' decisions not to be vaccinated and not to wear masks in public spaces.
Haiti: distrust and mistrust of the authorities
The distrust towards vaccination against COVID-19 in Haiti has not taken a structured form and has not given rise to public protests, as was the case in the contexts mentioned above. In the case of Haiti, one could speak of a soft antivax trend. The mobilization was more done on social networks than in the streets or in confrontation with the forces of order. Nevertheless, the questioning of the benefits of vaccination was effective, as rumors circulated, some more unfounded than others: at times vaccines were connected to a diabolical project aiming at the materialization of Satan's hold on the world, at times they were part of a political project seeking to draw the attention of the international community to Haiti, or sometimes vaccines would have long-term impacts on men's libido, etc. 9
There was a clear manifestation of a categorical refusal to vaccinate, knowing that of the first 500,000 vaccine doses received from the U.S. in 2021, the Haitian government had to return nearly half because of non-use. 10 By the end of 2021, only 27,000 persons had received the two doses out of 43,000 Haitians declared vaccinated, 11 at the time that laboratories were showing the success of their trials and also at the same time that international solidarity in terms of vaccines was strengthening. By the end of 2022, only 87,906 people or 2% of the Haitian population had been vaccinated. 12 All in all, Haitians seem to have been, contrary to other contexts, [START_REF] Malpel | The influence of the media on COVID-19 vaccine decision-making[END_REF] very little influenced by the information and vaccination campaigns, conducted both on the radio and on social networks. [START_REF] Benjamin | COVID-19: Perception of the Pandemic and the Importance of Barrier Measures by the Fruit and Vegetable Sellers of Port-au-Prince[END_REF] While the reasons for the influx of applicants at the centers are more than obvious -the U.S. government's requirement that they be vaccinated in order to qualify for the humanitarian program -the reasons that led Haitians to shun the various moments of the vac-cination campaign are less well known. In order to bring these reasons to light, this study wants to use the momentum provided by the program carried by the American president (Joe Biden) to analyze the various influences on the population, both of COVID-19 and the reasons that kept them away from the vaccination centers.
The presence of diseases in Haiti and their management
The issues around the diagnosis and treatment of diseases in traditional societies are not always obvious. Diagnosis and treatment are regularly confronted with a double rationality: a scientific rationality that seeks to establish the cause-and-effect relationships of the detected symptoms, but also an extra-scientific mystical rationality whose objective is to establish the non-evidence of the scientific finding. This second rationality proposes, as Jean-Pierre Dozon states, to look for the truth elsewhere; 15 an elsewhere to which science does not always have access.
The Haitian patient is regularly caught between these two rationalities and sometimes pays the price at his own expense. In the social representation of illnesses in Haiti, these are generally understood through supernatural phenomena. 16 Moreover, the recognition and acceptance of the power to heal is a complex process in which morality, religion, politics, etc. intervene at the same time. 17 However, in addition to this double dimension through which diseases are understood, there is also the phenomenon of rumors that generally distracts from the true nature of the problems. In a country where the illiteracy rate is high and information circulates in the form of hearsay, the spread of rumors becomes current and impacts people's behavior in very high proportions. Social networks and online media, having taken an important place in people's daily lives, play a major role in the circulation of rumors. For many, it becomes sometimes difficult to differentiate the true from the false.
The interest of this study lies in the fact that it allows for revisiting the foundations of citizens' distrust of public information, but also in the fact that it will allow for characterization of the perceptions and motivations of the citizens towards the pandemic.
Methodology
The methodology of this survey is exploratory. It focuses on semi-structured interviews with a random sample selected at different immunization sites in Port-au-Prince and Cap-Haitien. The data was collected using the Kobo Tool Box software before being processed in Excel format for standardization, analysis and interpretation. The study was structured around three main research questions:
• What information is available to Haitian patients about the need for immunization? • What are the main reasons why Haitians distrust the COVID-19 vaccination campaigns and did not get vaccinated before the period of this survey? • What are the reasons which explain someone's motivation to get vaccinated now?
Sample and data collection sites
The study was carried out with a sample of 446 people chosen at random from vaccination sites. However, two main criteria were taken into account: (1) gender, as the target was 50% of the interviewees to be women; (2) profiling by age group, considering that it seemed relevant to take into account several age groups. For the second criterion, the interviewer chose according to the estimated age. The survey was carried out in two departments (the North and the West), at the different vaccination sites represented in the graph below. Cap-Haitien has the largest share of the sample, with 301 respondents, or 68%, compared to 32% in the West. It should be noted that the surveys were conducted in heterogeneous socio-demographic environments, including people with different educational and socio-economic status.
Data collection strategies and profile of the interviewers
Two collection strategies were used:
(1) A telephone collection strategy (20% of the sample) : Use of records of sites where the vaccine was no longer available at the time of the survey. In this case, an agent randomly called a few people who were vaccinated at the sites in question between January and March 2023.
(2) An in situ collection strategy (80% of the sample): Interviewers were deployed to the immunization sites and administered the questionnaires to randomly selected participants.
The interviewers were all health professionals working with Panos Caribbean and had experience in data collection.
Data collection tool
The survey was conducted using a semi-structured questionnaire with three sets of questions:
• First question concerning the level of information about COVID-19 to which the respondent had access; • Second semi-structured question, divided into multiple choice sub-questions concerning the respondents' understanding of the pandemic; • Third question, also with multiple-choice sub-questions, asking about the reasons why the respondent had decided to be vaccinated at the time of the survey (March 2023). A space is reserved for the interviewer to indicate relevant remarks that would not be taken into account by the questionnaire.
Presentation of the findings
The results are presented as follows:
(1) Presentation of the demographic information from the survey, with emphasis on factors of respondents relating to gender and age;
(2) Presentation of findings through consideration of three levels of information:
-Respondents' level of knowledge about the need for vaccination; -Evaluation of the reasons that prevented them from being vaccinated until the time of the survey; -Reasons that finally motivated the decision to be vaccinated.
Participant demographic information
The demographic information takes into account variables such as gender, age, and locations where participants reside. Consideration of the collection modality will complete the methodology section.
Distribution by age and sex
These results show that gender equity was respected in practical terms in this survey: 48% of participants are women in the West and 49% in Cap-Haitien. Furthermore, the majority (55.6%) of respondents are between 25 and 39 years old. Women represent nearly 60% of the respondents who are between 18 and 25 years old.
Typology of information mentioned by the respondents in favor of vaccination against COVID-19
QR1: What information is available to Haitian patients about the need for immunization?
What justifies the need to get vaccinated against COVID-19?
The justifying factors mentioned in the participants' argumentation regarding vaccination against COVID-19, reveal the level of difficulty citizens have to access reliable information on COVID-19 related issues. 70% of the respondents believe that vaccination against COVID-19 can protect against more serious diseases. In addition, 66% believe that vaccination is a key factor in reducing the risk of contagion and 25% maintain that it helps to ensure family and community health security.
The main reasons that prevented respondents from getting vaccinated
QR2. What are the main reasons why Haitians distrust the COVID-19 vaccination campaigns?
This research question is answered in two stages:
(1) In the first stage, the interviewers collected the free expression of respondents, through a semi-structured question (What prevented you from getting vaccinated before this campaign?); (2) In a second phase, the respondents answer the same question, but justify their choice among a series of answers proposed by the interviewer. For these closed answers (the respondent had to answer yes or no), the results were grouped into two sets of data: (a) One set of data expresses the difficulties respondents have in accessing information about COVID-19; (b) Another set is related to survey participants' perception of the COVID-19 pandemic.
Overall, two key pieces of information emerged from the results: (1) information about COVID-19 is not accessible to everyone; and (2) COVID-19 was perceived as something very distant from the concerns of Haitians. A third pattern reveals that respondents may have been very poorly informed about the benefits of vaccination.
Difficulty in access to information as a barrier to vaccination
The findings show that generally, regardless of age and region, 8 out of 10 respondents are concerned about access to information about COVID-19 vaccination.

Difficulty of access to information (%)
This graph shows the potential determinants of people's lack of motivation to be vaccinated against COVID-19. According to the responses to the survey, 85% of the participants mentioned a lack of access to quality information on COVID-19 as a reason for not getting vaccinated. This is consistent with the 61% who did not know where the vaccination centers were located and the 51% who indicated that information was not available through media such as radio and television. This 51% is primarily among respondents over the age of 40.
These findings also show that the younger the respondent, the more likely s/he is to be influenced by rumors and misinformation about COVID-19 vaccines. 71% of respondents between the ages of 18 and 39, or a great majority of all the young participants in the survey, said that rumors and misinformation were the reasons for their lack of motivation with regard to vaccination against COVID-19. Overall, 65% of respondents -across all age groups -said they did not get vaccinated before because of rumors that COVID-19 vaccines cause discomfort and fertility problems and sometimes death.
The negative image of the vaccine as a blocking factor
The survey revealed that the COVID-19 vaccine was perceived negatively by the people, though not always for the same reasons. Several factors contributed to this: religious belief, lack of trust between citizens and authorities, and the socioeconomic conditions of citizens, to name the three most important factors. Thus, respondents' perceptions are linked to the socio-cultural, economic and political specificities of Haitian society.
Other reasons for not getting the vaccine before now (%): the vaccines carry the mark of the beast; Christians do not let themselves be vaccinated.
This graph shows that, with a rate of 66%, the belief in the non-existence of COVID-19 was very strong. This is consistent with the 51% of citizens who have no confidence in the Haitian authorities. This lack of trust contributes largely to the negative image of the vaccine. Additionally, the religious influence -45% of respondents believe that Christians should not be vaccinated -demonstrates the gap between citizens' perceptions and the authorities' concern for public health.
Rationale for the motivation of newly vaccinated people
QR3. What are the reasons which explain someone's motivation to get vaccinated now?
The response to this question is constructed on the basis of the repeated statements given by the interviewees. Each time, interviewees mention two reasons: e.g., I am vaccinating for travel reasons/ because the information is now available. The most repeated responses are provided in the graph below. These findings show that the motive for vaccination is travel/going abroad in 61% of the responses. 39% of the respondents mention an awareness of the danger of COVID-19 and 33% indicate that they only recently got access to information about COVID-19.
The other reasons given in response to this question should be analyzed with caution, as the increased demand for the vaccine corresponds to a particular requirement for travel.
In this case, it is quite likely that a higher percentage of people are getting vaccinated for immigration reasons, but prefer to cite other reasons. For example, it is questionable to argue that the most accurate information is only now becoming available. One may wonder if attention to this information is not simply related to the primary reason, i.e., to comply with U.S. immigration.
Discussion of the findings (conclusions and recommendations)
Overall, the findings of the survey are consistent with the literature review. This shows that, despite its socio-economic particularity (poor country with an extremely precarious health system), Haiti has not escaped the global debate on the COVID-19 pandemic. Beyond the differences in the level of challenges, the refusal and distrust expressed, the mixed reception of the vaccines in Haiti matches with the rejection at the international level. There is one constant: the authorities are regularly singled out and suspected of serving interests different from those of the people.
However, is such an attitude, no matter how legitimate it may be (a matter of individual freedom), not detrimental to the interests of the community in general? For, in spite of the suspicions that one may have concerning the management of the pandemic, it remains true that COVID-19 is a dangerous disease that can rapidly take the lives of hundreds of thousands of people. The cases of the massive deaths in Italy and the USA are still in mind, as we talk about the harmfulness of this virus. The ideal would be to find a way to balance the respect of individual liberties with the safeguarding of the community from whatever puts it at risk.
Conclusion
Despite the value of this research, it must be acknowledged that it remains limited because of the sample size (446) compared to the population that has received both doses of COVID-19 vaccine (240,967 as of December 30, 2022). A generalization of the results would have required a larger sample. It is also recognized that many other questions regarding other aspects of the pandemic (such as impacts on the economy, education, traffic, the impact of traditional medicines consumed, etc.) could also be investigated. This would undoubtedly give a broader perspective on COVID-19 in different areas. But this was not the objective of this survey, which was designed to verify the reasons why Haitians were so reluctant to participate in vaccination campaigns. From this point of view, we can say that the survey achieved its objectives and that the results obtained will allow further studies on the virus. Despite its limitations, the results obtained express deep-seated trends in the population's perception of COVID-19 vaccines.
The investigation confirms that the lack of information was a major handicap to vaccination, that rumors discouraged many people from getting vaccinated, and that the negative image of the disease (seen by many as a lie or as a foreigners' affair) played an important role in this demotivation. Finally, it was confirmed that the current vaccine craze is largely explained by migration reasons.
Recommendations
Based on the findings presented above, and considering that there is currently a resurgence of COVID-19 in some countries of the world, the following recommendations are made:
Recommendation 1: Training and information strategies on COVID-19 should be strengthened and diversified. The majority of responses collected in this survey indicate that respondents are only partially aware of why they should be vaccinated. They are also highly exposed to rumors and misinformation. Hence the need to better target the population for educational activities; the responses provided show that the oldest age group is the least informed about COVID-19.
Recommendation 2: Public health authorities in the country should set up "rumor collection teams" so that they can better adjust their responses in supporting the population. Nowadays, social networks amplify the spread of rumors. No effective communication strategy can be designed without careful attention to the circulation of these rumors and rigorous work on how to respond to them. As such, technology and social networks must also be put to good use in this "counter-attack".
Recommendation 3: Vaccination centers should be set up in places frequented by patients (churches and public market, etc.). Consequently, community leaders should be solicited and become involved in managing outreach.
A high percentage of people indicated that they did not know where the vaccination centers were located. Many of the centers were set up in hospitals and similar locations assumed to have high attendance. The problem is that, in general, a Haitian who does not feel seriously ill does not go to the hospital. Therefore, the locations of vaccination centers should be diversified to bring them as close as possible to the places the targeted population actually frequents. The interest of this approach with community leaders is twofold: first, it will make it possible to better address negative representations of the pandemic in particular, and of care in general (for example, many people think that Christians should not be vaccinated). It would also allow the population to be involved in managing public health.
Annex
Mini survey on the motivation to get vaccinated
The objective of this mini-survey in a few vaccination sites is to determine the reasons why people, who came to be vaccinated against COVID-19 now, did not want to do so before. The methodology consists of interviewing people (semi-structured interviews) who have come to be vaccinated.
In sites where the vaccine is not currently available, data will be collected by telephone in agreement with the managers of the vaccination sites: an agent will randomly call a few people who took the vaccine between January and March 2023 (a register of names and phone numbers is available). We retain the option of a focus group with people who came to get vaccinated (arranged with the site manager to give them priority if they agree to talk to us for 10-15 minutes away from the long waiting line).
Disclaimer
The content of this report does not in any way reflect the opinions or positions of USAID or the United States Government.
As already mentioned in the introduction, the current influx of vaccine seekers at COVID-19 vaccination centers must be read against the background of Haitians' long-standing distrust of COVID-19 vaccination campaigns. It should be noted that the anti-vaccination tendency is not specifically Haitian: it must be understood within the framework of an international movement of defiance toward state authorities. In Haiti, this distrust has been motivated by different reasons and has taken different forms of protest depending on the context. In the literature review below, we mention a few anti-vaccine movements before highlighting the main characteristics of this tendency in Haiti.
Conceptual approach
Anti-vaxxers: an international trend
The anti-vax movement is a trend observed in several countries upon the release of the first doses of COVID-19 vaccine. It is a form of protest claiming the freedom not to be vaccinated. In some contexts this distrust is interpreted as a generalized citizen defiance toward all institutions, and sometimes as a form of resistance to the State. In France 7, in the United States of America and in Canada 8, the movement took different forms and scales: public positions taken in the media, street demonstrations, defiance of governments and, in some contexts, confrontations with law enforcement officers. Within the anti-vax movements of autumn 2021, the refusal to be vaccinated was justified by several reasons: protesters denounced a plot by political and wealthy elites against ordinary people, accused the large pharmaceutical laboratories of treating them as guinea pigs, or saw in the imposition of the vaccine a form of political despotism depriving each individual of the right to dispose of his or her own life. Overall, these protests called into question the very notion of trust in public and private institutions. In other words, against the state logic of collective protection and the institutional ethic of international solidarity stands a logic of individual sovereignty, even in the face of a pandemic that could prove fatal.
Protest against the vaccine was therefore motivated either by political reasons, through opposition to an established power, or through much vaguer forms such as conspiracy theories, government corruption, or a sense of inequality. As such, the issue was both an object of political exploitation and of real citizen demonstrations on the ground in the name of individual freedom. It is in the United States that this last aspect was expressed most systematically: several federal states, including Texas, broadly supported both citizens' decisions not to be vaccinated and not to wear masks in public spaces.
Haiti: distrust and defiance toward the authorities
Distrust of COVID-19 vaccination did not take a structured form in Haiti and did not give rise to public protests as it did in the contexts mentioned above. In the case of Haiti, one could speak of a soft anti-vax tendency: mobilization took place more on social networks than in the streets or in confrontations with law enforcement. It remains true, however, that this questioning of the benefits of vaccination was effective, circulating rumors each more unfounded than the last: the vaccines were sometimes likened to a diabolical project aimed at materializing Satan's hold over the world, sometimes presented as part of a political project seeking to draw the international community's attention to Haiti, sometimes said to have long-term impacts on men's libido, etc. 9
There was therefore clearly a categorical refusal of vaccination: of the first stock of 500,000 vaccines received from the USA in 2021, the Haitian government 10 had to return nearly half for non-use. At the end of 2021, only 27,000 of the 43,000 Haitians declared vaccinated had received both doses 11, at the very moment when laboratories were showcasing the success of their trials and international solidarity on vaccines was strengthening. Toward the end of 2022, only 87,906 people, i.e. 2% of the Haitian population, had been vaccinated 12. All things considered, Haitians seem, unlike in other contexts 13, to have been very little influenced by the information and vaccination campaigns carried out both on the radio and on social networks 14. While the reasons for the influx of applicants observed in the centers are more than obvious - the U.S. government's requirement to be vaccinated in order to qualify for the Humanitarian Parole program - the reasons that led Haitians to shun the various stages of the vaccination campaign are less well known. It is precisely to bring out these reasons that this study takes advantage of the momentum of the program launched by the U.S. president (Joe Biden) to analyze both the various influences of COVID-19 and the reasons that kept people away from vaccination centers.
On the representation of diseases and their treatment in Haiti
The issue of the diagnosis and treatment of diseases in traditional societies is not always straightforward. Diagnosis and care are regularly confronted with a double rationality: a scientific rationality that seeks to establish the cause-and-effect relationships of the symptoms detected, but also a mystical, extra-scientific rationality whose objective is to establish the non-evidence of the evidence of the scientific finding. The interest of this study lies in the fact that it makes it possible both to return to the foundations of citizens' distrust of official public speech and to characterize their perception of, and motivation regarding, the pandemic.
The data were entered into a data-collection toolbox before being processed in Excel, where they were normalized, analyzed and interpreted. This study was structured around three main research questions:
• What information do Haitian patients have among that establishing the need to be vaccinated?
• What are the main reasons for Haitians' distrust of COVID-19 vaccination campaigns that prevented them from being vaccinated before this survey?
• What are the reasons that explain their motivation to get vaccinated now?
Sample and data collection sites
The study was carried out on a sample of four hundred and forty-six (446) people chosen at random at vaccination sites. Two main criteria were nevertheless taken into account: 1/ sex, since we wanted 50% of the people questioned to be women; 2/ profiling by age group, since it seemed relevant to us to take several age groups into account in this exercise. For this last criterion, the interviewer chose on the basis of estimated age. The survey was carried out in two departments (the North and the West), at the different vaccination sites shown in the graph below.
Graph 1: Distribution of respondents by vaccination center
Cap-Haïtien accounts for the largest share of the sample, with 301 respondents, i.e. 68%, against 32% for the West. It should be noted that data were collected in heterogeneous sociodemographic settings, including people with different educational and socioeconomic backgrounds.
Demographic information about the participants
The demographic information takes into account variables such as sex, age and the regions where the participants are located. Taking the collection modality into account completes the methodology section. The answer to this research question is constructed in two steps:
(1) First, the interviewers collected the respondents' free expression through a semi-structured question (What prevented you from getting vaccinated before this campaign?);
(2) Second, the respondents answered the same question by choosing among a series of answers proposed by the interviewer. For these closed answers (the respondent had to answer yes or no), we group the results into two data sets: a/ a set of data expressing the respondents' difficulties in accessing information about COVID-19; b/ another set related to the survey participants' perception of the COVID-19 pandemic.
Overall, two key findings emerge from the results: 1. information about COVID-19 was not accessible to everyone; 2. COVID-19 was perceived as something far removed from Haitians' concerns.
Graphs 2 and 3: Distribution of respondents by age and gender, as well as by region
Graph 4: Why should one be vaccinated against COVID-19?
Figure 1: Respondents' comments about rumors
Figure 2: Participants' statements about their distrust of the authorities
Figure 3: Participants' comments on lack of access to information
Graph 7: Reasons for requesting vaccination, from January to March 2023
Response categories shown in the graphs include: "COVID-19 doesn't exist", "Lack of trust in the authorities", "The COVID-19 vaccine is for foreigners" and "I didn't have quality information about the vaccine".
Questionnaire excerpt (annex). "Why should a person be vaccinated against Corona?": it prevents someone from developing severe forms of the disease; because s/he will be traveling; to get a job; to protect his/her family and community; to protect his/her loved ones; other reasons given by the person (please cite these reasons). "What prevented you from getting vaccinated before?": a. Tell us the first reason that prevented you from getting the vaccine before; b. Could these reasons have prevented you as well? I did not have quality information about the COVID vaccine; radio and TV do not provide information; I didn't know where to get the vaccine; Christians do not need to be vaccinated; the vaccine can cause several discomforts; other reasons.
Panos collected the data for this report as part of the project "Expanding Multimedia and Community Engagement Activities to Increase COVID-19 Vaccine Uptake," funded by USAID. It was written by Francklin BENJAMIN, Ph.D. Panos thanks the Direction Sanitaire de l'Ouest (DSO), the Institut pour la Santé, Population et Développement (ISPD), and SEROvie, through the ECP2 project, for their collaboration.
These results show that gender equity was almost achieved in this survey: 48% women in the West and 49% in Cap-Haïtien. Moreover, the majority of respondents are between 25 and 39 years old (55.6%), and women represent nearly 60% of respondents aged between 18 and 25.
5.2 Types of information cited by respondents in favor of COVID-19 vaccination
QR1: What information do Haitian patients have among that establishing the need to be vaccinated?
The justifying factors mentioned in the participants' arguments about COVID-19 vaccination reveal the level of difficulty citizens face in accessing reliable information on issues related to the COVID-19 pandemic. For 70% of respondents, COVID-19 vaccination can protect against more serious illness. In addition, 66% of them think that vaccination is a determining factor in reducing the risk of contagion, and 25% maintain that it helps ensure the health security of the family and the community.
5.3 The main reasons that prevented respondents from getting vaccinated
QR2. What are the main reasons for Haitians' distrust of COVID-19 vaccination campaigns?
5.4 Factors cited as obstacles that prevented respondents from getting vaccinated
5.4.1 Negative influence of rumors, lack of trust and lack of information
The analysis of the various open-ended responses reveals that not all participants are well informed about COVID-19 vaccination. The reasons are often related to lack of awareness, misinformation or rumors. The issue of access to vaccination centers was also relatively common. The open-ended responses can be grouped into three major trends: (a) the most recurrent trend is the impact of rumors and the fear of being harmed by vaccination; in the open-ended responses, this was a major barrier to respondents' being vaccinated prior to the survey period.
Table of contents
Acronyms and abbreviations; Executive summary; Summary of results; Recommendations; 1. Introduction; 2. COVID-19 in Haiti's overall health ecosystem; 3. Conceptual approach; 4. Methodology; 5. Presentation of results; 6. Discussion of the results (conclusions and recommendations); 7. Conclusion; 8. Bibliography; Annex

Acronyms and abbreviations
BM: Banque Mondiale (World Bank)
DSO: Direction Sanitaire de l'Ouest (West Department Health Directorate)
HUEH: Hôpital de l'Université d'État d'Haïti (State University of Haiti Hospital)
MSPP: Ministère de la Santé Publique et de la Population (Ministry of Public Health and Population)
OMS: Organisation Mondiale de la Santé (World Health Organization)
OPS: Organisation Panaméricaine de la Santé (Pan American Health Organization)
QR: Question de Recherche (research question)
WHO: World Health Organisation
ECP2: Epidemic Control among Priority Populations
ISPD: Institut pour la Santé, la Population et le Développement (Institute for Health, Population and Development)

Executive summary
The objective of this study, carried out in March 2023 at vaccination sites in the West and North departments of Haiti, is to determine the reasons why the people who came in large numbers between January and March 2023 to be vaccinated against COVID-19 did not want to do so before. This survey report therefore has the specific objective of bringing out three main categories of information:
a. The information respondents have about the need to be vaccinated against COVID-19;
b. The reasons that prevented them from doing so before;
c. The main reasons that motivated the decision to get vaccinated now.
This study was carried out on a sample of 446 people (48% women and 52% men). The questionnaire was administered both in person (80% of respondents) and by telephone (20% of respondents). It revealed that most respondents have at least one relevant piece of information concerning the need to be vaccinated, with prevention of the risk of contagion and of the development of serious illness being the main reasons cited (70%). As for the reasons that prevented them from being vaccinated before the survey, lack of information (85%), rumors (71%) and lack of trust in the authorities (51%) were the main reasons given. Finally, alongside a minority indicating a newfound awareness of the existence of COVID-19, a large majority of respondents cite preparation for possible travel abroad (61%) as the reason motivating them to get vaccinated now.
The main results of this study are as follows:
Result 1: Overall, the factors justifying the need for COVID-19 vaccination are only partially known by the participants. Most participants were able to cite at least one reason justifying COVID-19 vaccination. Thus, for 70% of respondents, COVID-19 vaccination can protect against more serious illness. In addition, 66% of respondents think that vaccination is a determining factor in reducing the risk of contagion, while 25% indicate that it helps ensure the health security of the family and the community.
As for the obstacles that prevented respondents from getting vaccinated before, they can be summarized as follows:
Result 2: Exposure to rumors and misinformation had a negative impact on respondents' decisions. In these direct (yes or no) answers, 71% of respondents aged between 18 and 39 state that rumor and misinformation are the reasons for their demotivation regarding COVID-19 vaccination. Indeed, rumors, distrust of the authorities and lack of information are the main recurring trends in respondents' answers regarding obstacles to vaccination.
Result 3: Respondents indicate that they were not able to properly access quality information. Indeed, 85% of respondents say they had not been vaccinated before because they did not have access to quality information about COVID-19. In addition, 61% indicate that they did not know where the vaccination centers were, and 51% indicate that information was not available through media such as radio and television.
Result 4: According to respondents, the negative representation of the COVID-19 pandemic also negatively influenced their decision not to get vaccinated. Indeed, 66% of participants indicated that they believed COVID-19 did not exist. In addition, 51% of those questioned indicated that they had no confidence in the Haitian authorities. Finally, 45% of respondents believe that Christians should not be vaccinated.
Result 5: As for the justifications for the current motivation for the vaccine, preparation for possible travel abroad is the most recurrent. About 61% of respondents indicate that they were taking the vaccine at the time of the survey (March 2023) for reasons of travel abroad, against 39% who indicate that they became aware only belatedly of the danger posed by COVID-19.
Recommendations
Based on these results, and considering that there is currently a resurgence of COVID-19 in some countries of the world, the following recommendations are made:
Recommendation 1: Training and information strategies on COVID-19 should be strengthened and diversified. Most of the responses gathered in this survey reveal that respondents are only partially aware of the reasons why they should be vaccinated. Moreover, they are highly exposed to rumors and misinformation. Hence the need to better target the population to be educated; the responses provided show that the oldest are the least well informed about COVID-19.
Recommendation 2: The country's public health authorities should set up rumor-monitoring teams so as to better adjust their responses in supporting the population. The proliferation of social networks and easy access to technology (the survey was largely conducted in urban areas) increase the possibility of rumors spreading. No effective communication strategy can be designed without careful attention to the circulation of these rumors and rigorous work on how to respond to them. As such, technology and social networks must also be put to good use in this "counter-attack".
Recommendation 3: Vaccination centers should be set up in the places patients actually frequent (churches, public markets, etc.). Accordingly, community leaders should be solicited and involved in managing outreach. A high percentage of people indicated that they did not know where the vaccination centers were located. Yet many centers were set up in places such as hospitals. The problem is that, in general, a Haitian who does not feel seriously ill does not go to the hospital. Hence the need to diversify the locations of vaccination centers so as to bring them as close as possible to the places of interest of the targeted population. The value of this engagement with community leaders is twofold: first, it will make it possible to better address negative representations of the pandemic in particular and of care in general (many think, for example, that Christians should not be vaccinated); it would also involve the population in the management of public health.
Infectious diseases represent high risks for our health and that of our loved ones. Most of the time they are life-threatening. Immunization through vaccination therefore represents, for any population, one of the many strategies for promoting robust health, which is itself one of the elements that brings well-being. It is on this basis that WHO adopted in 2021 its new Immunization Agenda 1, which "provides a strategic framework for addressing key immunization issues in the context of primary health care and universal health coverage over the period 2021-2030" 2. Moreover, PAHO does not fail to stress the importance of vaccination in efforts to prevent or eliminate certain diseases, hence the recurring annual organization of vaccination campaigns in the Americas. Some centers, such as in the North and the West, were even closed for lack of demand for vaccination. In total, as of December 30, 2022, only 2% of the population was fully vaccinated 3. Since January 2023, with the announcement of the U.S. Humanitarian Parole program, this trend has reversed: requests for vaccination have become increasingly pressing.
1. Introduction
In recent years, COVID-19 has been among the priorities of the aforementioned vaccination campaigns and policies. Declared a pandemic by WHO on March 11, 2020, the coronavirus, an emerging disease, entered Haiti and immediately imposed various measures aimed at limiting its impact. This infectious pathology required the adoption of new behaviors and innovative strategies linked to preventive measures. From quarantine through physical distancing, mask wearing, the promotion of handwashing and the closure of borders, the country arrived at a much more reassuring strategy, namely COVID-19 vaccination, which aims at collective immunity of the population and the reduction of the chain of person-to-person transmission.
The Ministry of Public Health and Population (MSPP), in collaboration with its local and international partners, has made considerable efforts in the surveillance of and response to this pandemic, notably through communication, vaccination and mass and group community-awareness campaigns intended to persuade the population to get fully vaccinated. Yet, until recently, the results showed that these efforts had not really led the population to get vaccinated.
The reversal of this trend is revealed in particular by the statistics of the North department. The health authorities of the North have reported that, since January 2023, the number of volunteers wishing to be vaccinated has increased considerably. For example, the Health Directorate of the North department (DSN) indicates that, from January to March alone, one of its vaccination sites administered about 135 doses of vaccine, against 200 doses over the previous six (6) months. Across the department as a whole, a total of 3,483 doses of vaccine was administered for the period from January to March 2023, which is far higher than the cumulative doses administered during the previous 6 months. The same trend was observed in the West. There is clearly a phenomenon to be understood here, and it is also the moment to try to determine the reasons why the people who have come massively to be vaccinated since January 2023 did not want to do so before. More precisely, this survey intends to bring out two types of information: the level of information available to the new vaccine seekers, and their perception of the reasons why they had not yet been vaccinated. It should be noted that, at the time of this survey, some centers closed for lack of attendance had to reopen their doors because of an influx of requests. One could say that COVID-19 is a new factor for measuring the relationship of Haitian institutions to society and vice versa. This relationship can be described, to use a concept of the philosopher Jacques Rancière, as a "relation of non-relation", that is, a mode of parallel coexistence in which the State and citizens settle into a form of dialogue of the deaf. It must be said that there is no culture of large-scale public health care in Haiti, when one knows that the State devotes only 13 US dollars per inhabitant per year in its budget, one of the lowest rates in the Western hemisphere 4.
2. COVID-19 in Haiti's overall health ecosystem
Since the government officially acknowledged the presence of COVID-19 in Haiti in 2020, the country has experienced the pandemic as a kind of paradox. On the one hand, there are the state and institutional decisions that impose a mode of behavior on citizens (in particular, bans on gathering in large groups and the obligation to adopt barrier measures). On the other hand, there are the citizens who, despite the government's announcements and measures, adopt their own strategies for managing the pandemic. These consist in particular of distrusting official and institutional speech and turning to local and traditional medicinal approaches.
The various health reports on Haiti show that, in general, there is no investment in the health sector and that public health resources are far from meeting the growing needs of the population 5. In other words, the population is left to itself and must either find the means to go to a private medical practice or turn precisely to traditional medicine, even though the latter's results sometimes prove mixed. Recourse to official medicine remains very low (in 2017, the attendance rate at health institutions was only 7% of the population) 6 and generally occurs either because it is difficult to find solutions elsewhere, or because such attendance is among the conditions required by external institutions, notably immigration authorities, in order to be allowed to enter the target country.
As Jean-Pierre Dozon reminds us, this second rationality proposes each time to look for the truth elsewhere 15, an elsewhere to which science does not always have access. The Haitian patient is regularly caught between these two rationalities, sometimes at his or her own expense. In the social representation of diseases in Haiti, diseases are generally apprehended through supernatural phenomena 16. Moreover, the recognition and acceptance of the power to heal is a complex process in which morality, religion, politics, etc. all come into play 17. But alongside this double frame of reference through which diseases are apprehended, there is also the phenomenon of rumors, which generally distances people from the true nature of problems. In a country where the illiteracy rate is high and information circulates as hearsay, the spread of rumors becomes more and more recurrent and affects people's behavior to a very high degree. Social networks and online media, which have taken an important place in people's daily lives, play a leading role in the circulation of rumors. It sometimes becomes difficult for many to tell the true from the false.
These results also show that the younger the respondent, the more likely he or she is to be influenced by the rumors and misinformation circulating around COVID-19 vaccines. Indeed, 71% of respondents aged between 18 and 39, that is, nearly all of the young participants in the survey, state that rumors and misinformation are the reasons for their demotivation regarding COVID-19 vaccination. Overall, 65% of respondents, across all age groups, state that they would not get vaccinated because of rumors claiming that COVID-19 vaccines cause discomfort and lead to fertility problems and sometimes death.
Notes
5. Groupe de la Banque Mondiale, 2017. Mieux dépenser pour mieux soigner. Regard sur le financement de la santé en Haïti.
6. Institut Haïtien de l'Enfance (IHE) et ICF International, 2019. Évaluation de la Prestation des Services de Soins de Santé, Haïti, 2017-2018. Rockville, Maryland, USA: IHE et ICF International, p. 3.
7. Bergem, I. M. (2022). Anti-vaccination as political dissent - a post-political reading of Yellow Vests' accounts of Covid-19, vaccines and the Health pass. Philosophy & Social Criticism. https://doi.org/10.1177/01914537221141462
8. Emilie Dubreuil, « Vax et antivax : les moins belles histoires des Pays-d'en-Haut », Radio-Canada.ca
10. www.unicef.fr
11. www.lapresse.ca
12. OPS (2022), Vaccination contre la Covid-19 en Haïti. Synthèse des résultats. Port-au-Prince (document de travail).
13. Malpel, S., Demonceaux, S. & Lévêque, G. (2022). L'influence des médias sur la prise de décision vaccinale contre la Covid-19. Hegel, 2, 120-129. https://doi.org/10.3917/heg.122.0120
14. Benjamin F., Jean K., Antoine R., Prou M., Millien M., Balthazard-Accou K. & Emmanuel E. (2021). COVID-19: Perception of the Pandemic and the Importance of Barrier Measures by the Fruit and Vegetable Sellers of Port-au-Prince. European Scientific Journal, ESJ, 17(5), 165. https://doi.org/10.19044/esj.2021.v17n5p165
15. Dozon, J.-P. (2017). La vérité est ailleurs. Complots et sorcellerie. Paris: Éditions de la FMSH. DOI: 10.4000/books.editionsmsh.16126
16. Apply A., Benjamin F., Raymond L., Michel D., St Louis D. & Emmanuel E. (2021). Social Representations Of Diseases Linked To Climate Change In The Population Of A Slum District: A Case Study From Haiti. European Scientific Journal, ESJ, 17(15), 262. https://doi.org/10.19044/esj.2021.v17n15p26
1. Implementing the Immunization Agenda 2030: A Framework for Action through Coordinated Planning, Monitoring & Evaluation, Ownership & Accountability, and Communication & Advocacy. Available at: www.immunizationagenda2030.org/framework-for-action.
2. World Health Organization, Program for Immunization by 2030, May 2021. A74/9 Add.4 (who.int)
3. PAHO, 2022.
4. Nadège Mézié et Obrillant Damus, « « Se mèt kò ki veye kò (chacun doit protéger farouchement son corps) » : représentations et thérapeutiques de la pandémie de Covid-19 en Haïti », Études caribéennes [En ligne], 49 | Août 2021, mis en ligne le 30 août 2021, consulté le 24 mars 2023. URL: http://journals.openedition.org/etudescaribeennes/22299 ; DOI: https://doi.org/10.4000/etudescaribeennes.22299
17. Brodwin, P. (1996). Medicine and Morality in Haiti. The Contest for Healing Power. New York: Cambridge University Press. DOI: 10.1017/CBO9780511613128
Collection strategies and profile of the interviewers
Two collection strategies were used:
(1) A telephone collection strategy (20% of the sample): based on the registers of sites where the vaccine was no longer available at the time of the survey. In this case, an agent randomly called a few people who had been vaccinated at the sites in question between January and March 2023.
(2) An in situ collection strategy (80% of the sample): interviewers were deployed at vaccination sites and administered the questionnaires to randomly selected participants.
The interviewers are all health professionals collaborating with Panos Caraïbes and experienced in data collection.
Collection tool
The survey was conducted using a semi-structured questionnaire comprising three sets of questions:
• A first question on the level of information about COVID-19 to which the respondent has access;
• A second semi-structured question, broken down into multiple-choice sub-questions, on respondents' representation of the pandemic;
• A third question, also broken down into multiple-choice sub-questions, asking about the reasons that motivated the respondent to get vaccinated at the time of the survey (March 2023). A space is reserved for the interviewer to note relevant remarks not covered by the questionnaire.
Presentation of the results
The results are presented in the following order:
(1) Presentation of the demographic information of the survey, with emphasis on the gender and age of respondents;
(2) Presentation of the results considering three levels of information: the respondents' level of knowledge about the need to be vaccinated; the assessment of the reasons that prevented them from being vaccinated until the time of the survey; and the reasons that ultimately motivated the decision to get vaccinated.
QUÈ SÓN?
Les tertúlies literàries dialògiques són una activitat cultural i educativa que s'està desenvolupant en diferents tipus d'entitats -com escoles de persones adultes, associacions de mares i pares, grups de dones o entitats culturals i educatives-on un grup de persones es reuneixen al voltant d'un llibre per a reflexionar i dialogar al voltant d'ell.
Els resultats d'aquesta activitat són contundents. La tertúlia literària a través de la seua metodologia aconsegueix que persones que mai hagen llegit un llibre, apleguen a gaudir de les obres de la literatura clàssica universal.
Tertúlia literària dialògica a Castelló de la Plana
LES TERTÚLIES LITERÀRIES DIALÒGIQUES A L'ESTAT ESPANYOL
El naixement i el desenvolupament de les tertúlies literàries dialògiques ha tingut lloc en un determinat context social i cultural a l'Estat Espanyol. Encara que les tertúlies literàries es van desenvolupar com a programes de lectura en educació de persones adultes durant els anys 80, els seus orígens els hem de trobar en els moviments socials i culturals del segle XIX. Per a entendre el significat i la influència de les tertúlies literàries com a experiència educativa transformadora i com a motor de canvi inclosa en l'educació de persones adultes, hem de tenir en compte la rellevància de la il•lustració de la classe treballadora. Avui l'educació de persones adultes va cap a un procés de democratització seguint les tendències de la societat.
Els i les participants s'estan organitzant i demanen el compartir amb ells mateixos la presa de decisions En aquest moviment, ells estan reinventant l'educació popular i les tertúlies literàries són una de les claus del programes que promouen (Soler, 2001).
La tertúlia literària dialògica va néixer en 1980 en l'escola d'adults de La Verneda-Sant Martí, un barri obrer de Barcelona. Els anys 80 van ser uns anys de transició en l'Estat Espanyol. La Dictadura va acabar en 1975 i es va indicar un període de transició democràtica que va acabar cap a mitjans dels anys 80. Durant aquests anys l'educació de persones adultes espanyola ha experimentat un gir complet des d'un model compensatori que el règim dictatorial havia imposat a un model més democràtic. En aquest nou marc, un grup d'educadors/es crítics/ques de La Vermeda-Sant Martí van crear una tertúlia literària inspirats/des en les iniciatives de l'educació llibertària que van florèixer a final del s. XIX i principis del s. XX.
El primer precedent de les tertúlies literàries el podem trobar en la tradició dels cercles literaris que van començar durant el segle XVIII amb el naixement de les Sociedades Económicas de Amigos del País. Aquestes societats van ser creades per a subministrar cursos de nit per a treballadors i gent pobra, organitzant trobades nocturnes per a reunir-se i conversar. Aquests cercles literaris continuaren en els ateneus populars del segle XIX, on les conferències, els cursos culturals i les activitats van ser mantingudes amb la intenció d'estendre una educació general per a tot el món. Desafortunadament, totes eixes institucions van ser dràsticament aturades en 1939, al final de la Guerra Civil amb la imposició de la Dictadura de Franco.
Per a descriure l'evolució de l'educació d'adults en l'Estat espanyol en general i en les tertúlies literàries en particular distingim entre tres períodes:
1. Des del segle XVIII fins a la II República (1850República ( -1936)): Emergència i rellevància dels ateneus de treballadors i treballadores. 2. Des de la Guerra Civil fins a la fi de la Dictadura (1936Dictadura ( -1975)): Lluita contra el feixisme i la regressió cultural posterior 3. Des de la Transició democràtica fins als nostres dies : Floreixement de l'esperit democràtic en educació.
LITERARY GATHERINGS WITHIN A DEMOCRATIC ADULT EDUCATION

In a society that is becoming increasingly dialogic, and where organizational structures are being reshaped, there are new possibilities for horizontal dialogue and active citizenship. As a result, new channels and projects focused on the radicalization of democracy are emerging. In this scenario, the Democratic Adult Education (EDA) movement has recently arisen in Spain, following democratic and dialogic trends, particularly in civil society. The dialogic literary gathering, with its democratic nature and its active participants, has become a driving force in promoting the movement.

The EDA movement (Democratic Adult Education) represents the interests of people whom the lack of academic studies has excluded from positions in a literate society, and of the social movements that fight for social emancipation.

EDA is promoted by CONFAPEA (Confederación de Asociaciones de Educación de Personas Adultas), with the support of the Red de Educación Democrática de Personas Adultas and Grupo 90 (the Spanish network of researchers and scholars in adult education).

THE PARTICIPANTS' MOVEMENT AND ITS CHARTER OF RIGHTS

The participants' movement coordinated by CONFAPEA at the national level has as its main objective the promotion of a participatory, democratic and transformative model of adult education. Thus, participants are promoting a social model of education in which their voices, interests and needs are recognized. Through these associations and federations, participants organize themselves not only to fight for their educational rights, but also to take part in society in the broad sense.

A key issue in CONFAPEA is its Declaration of Participants' Rights. This declaration is composed of 13 articles and constitutes a public affirmation in which participants in adult education processes assert their will for rights that allow them to have an education adapted to their interests and defined by consensus.
ONE THOUSAND AND ONE DIALOGIC LITERARY GATHERINGS
Literary gatherings are becoming a social movement with CONFAPEA, and participants are spreading their experience of dialogic reading through a project called One thousand and one dialogic literary gatherings (Mil i una tertúlia literària dialògica).
The extension of dialogic literary gatherings is one of CONFAPEA's most important projects, owing to its transformative dimension. This project is also creating a coordinated network among the different gatherings and publishing an international newsletter. One of the most important events is the annual Conference of dialogic literary gatherings, coordinated and promoted by CONFAPEA to disseminate dialogic reading.
The organization is interested in extending the literary gatherings in order to democratize access to classical literature, culture and education for all adults. Doing so means promoting dialogue, critical reflection and active participation among people who have traditionally been excluded. One thousand and one dialogic literary gatherings seeks to promote literary gatherings not only in Spain but throughout the world, thus fighting for democratic adult education.
Dialoguing around books
6. GATHERING IN CYBERSPACE
Gathering in Cyberspace 1 is a European project coordinated by Àgora, a participants' association in Barcelona. Àgora started this project to promote the experience of dialogic literary gatherings and to widen their possibilities through the Internet. It began in December 1999, and at the first meeting in Barcelona a coordinated methodology and a work plan were defined. It started with the participation of adult learners from Denmark, the Czech Republic and France. The people taking part in these gatherings were also adult learners without an academic background who, after the first international meeting, began reading the classics with Franz Kafka's The Metamorphosis.
As part of this project, each partner had to hold a new literary gathering every week. The cyber literary gatherings in Denmark, France, the Czech Republic and Spain read the same book and share their collective reflections in a virtual forum on the Internet.
The Gatherings in Cyberspace follow the methodology of the dialogic literary gathering, in which the participants of each circle:
1. Decide the number of pages to read at home, choose a paragraph, contribute to their group's reflections, and begin a dialogue around those contributions. 2. The discussion held in the session is then shared in the forum with participants from other countries, who are doing the same in their own groups. 3. Before each weekly session, at least one participant of the group reads in the forum the comments, reflections or opinions of participants from circles in other countries, so as to include them in the local discussion as well.
In these Gatherings in Cyberspace, not all participants need to know how to use computers, since the most important activity is the dialogic reading in each circle. Those who have never used a computer can take part by listening to the companions who have been reading the forum. Participants do not need to speak English to join these meetings, since someone who knows English can translate.
This project has helped to develop many dimensions of learning: the dialogic reading of classical works, the collective reflections around the gathering, and the creation of knowledge among all participants, and it has increased access to new information and to information technologies.
Getting in touch with people from other cultures through the cyber literary gatherings has broken down prejudices and false assumptions. People realize that they have many things in common even though they live in different countries, and that many social problems are similar around the world. This fosters solidarity, anti-racist attitudes and cultural coexistence.
THE LITERARY GATHERING IN A DIALOGIC SOCIETY
Society is becoming more and more dialogic. Freire (1997) defines dialogism as a social interaction, which he sees as a requirement of human nature and a claim in favour of democratic options.
Recent changes such as the proliferation of new information technologies and globalization increase risk and uncertainty in people's lives while social relations are being redefined in the family, at work, in the community and in private life. Increasingly, this redefinition takes place in everyday dialogues.
In the literary gatherings all contributions and opinions are valued, and this pedagogy ensures that no voices are silenced. People without academic credentials, who had traditionally been excluded from education and cultural activities, find a place in which they can listen and read, expanding, acquiring and creating knowledge (Soler, 2001). The values and skills of people in popular culture must be reinforced by a pedagogy that pays more attention to their competences than to their deficiencies. This encourages participation in a society where people use dialogue more and more in their daily negotiations.
The new dialogic turn of society and of the social, human and educational sciences is a useful framework in which the dialogic reading experience of the literary gathering takes place.
In the dialogic literary gatherings we rely on four basic points of the communicative paradigm:
- Intersubjectivity
- Universal capacity for learning
- Recreation of the lifeworld
- Transformation
These are the elements we will apply to the reading experience in dialogic reading, forming a new model of reading comprehension.
In studying the application of these four elements to the literary gathering we draw on the work of Paulo Freire and Jürgen Habermas. Both propose social transformation through the promotion of relationships based on egalitarian dialogue.
While Habermas (1987) develops a theory aimed at the radicalization of democracy through people's communicative rationality, Freire focuses on the importance of democratization through dialogue. Freire defends the transformative power of a dialogic pedagogy against those who maintain that education merely reproduces power structures.
On the other hand, although he never mentions the term dialogism, Vygotsky (1978) states that potential is developed through social interactions.
NEW POSSIBILITIES IN THE INFORMATION AGE
Changes in our society such as the information and communication technology revolution, the phenomenon of globalization and increased economic flexibility have given rise to a society organized as a network, in which the transmission of information is crucial (Castells, 1997-8).
In an information society, the economy is organized through networks and companies that operate at a global level. In this new social order, information and knowledge are the main sources of productivity and, consequently, the key that gives access to the network and helps in the process of selecting information in order to participate in the social, political and economic sphere (Ayuste et al., 1994; Castells et al., 1999). People must therefore succeed in education in order to escape marginalization and social exclusion.
Differences in people's educational backgrounds have created a new form of inequality, widening the social gap between those who have access to information and those who do not (Giroux & Flecha, 1992).
Thus the effects of the information society have gone in two directions. On the one hand, the new world economy has created new forms of exclusion and has widened the gaps in the distribution of wealth, so that the richest people have become even richer and the poorest more marginalized, depending on their capacity to access this new society.
On the other hand, the development of an information society is opening new possibilities for dialogue and for the creation of new forms of solidarity (Castells, 1996).
These possibilities have given rise to alternative projects that use dialogue outside the control and mediation of institutions. A very clear example can be seen in the literary gatherings themselves.
The Information Age has opened possibilities for dialogue among networks of citizens, and a cultural project such as the dialogic literary gatherings has been a success because of its enormous potential. Promoting learning opportunities for those who have had less access to information, and fostering their participation in the new dialogic opportunities emerging in society, helps to fight their exclusion.
DIALOGIC ELEMENTS MANIFESTED IN THE LITERARY GATHERING
In the dialogic society, educational projects that provide real opportunities for transformation and for overcoming inequalities have a dialogic orientation. That is, they promote instrumental learning as well as critical reflection and democratic participation.
The literary gathering is a project based on the dialogic trend of our society that provides adult learners without an academic background with new opportunities to access literature and culture.
Adult educators have fostered dialogic learning through diverse experiences in different educational settings and in different parts of the world. This learning has often been promoted by emancipatory pedagogies that take into account learners' life experiences, as well as their voices and opinions. This has made the subjects themselves the actors of the transformation of their own contexts.
The dialogic literary gathering is a space where people can participate in an educational project that promotes literature and culture. In it, participants transform their lives and their environments. To explain the process of learning and transformation we focus on four elements of the dialogic approach present in the dialogic literary gathering:
1. Intersubjective dialogue 2. Universal capacity to learn 3. Recreation of the lifeworld 4. Transformation
ANALYSIS OF THE DIALOGIC ELEMENTS
The first element we will analyse is the notion of intersubjectivity, that is, the interaction between subjects capable of language and action (Habermas, 1987). Habermas and Freire reflect in many cases relations between people and actions based on dialogue. Habermas (1987) argues that, in their practices, people use communicative actions to restructure their lifeworld on the basis of understanding and agreement.
The idea of communicative rationality and intersubjectivity is also the basis of proposals such as Freire's (1970), which elaborate the centrality of the interaction between educators and students.
Looking at the foundations of communicative rationality and egalitarian dialogue, we can see the key to understanding and explaining the dialogic nature of the gathering and the belief that all participants can learn and take part in this cultural activity. We must also consider some requirements for egalitarian dialogue, such as argumentation, validity claims (rather than power claims), and the objective, subjective and social worlds that form people's background.
Another major theme in the gathering is that the capacity to learn is not questioned. No one's ability to read, understand or formulate opinions is put in doubt. When people find a space in which to develop their potential to learn and to overcome barriers, it becomes clear that they really can achieve it.
CREA (1999) has developed a theory of dialogic learning articulated in seven principles. As we can see, all of them relate to the methodology of the gathering:
- Egalitarian dialogue: the use of communicative skills as instruments for resolving situations that a person alone would not be able to solve. It is possible when, in the dialogue, the validity of the arguments put forward is taken into account rather than the position of power or privilege of the people involved.
- Cultural intelligence: refers to academic and practical intelligence as well as to communicative skills. Through communicative skills it is possible to transfer competences from academic to practical settings, or from practical to academic ones. The definition of the concept of cultural intelligence starts from the capacities we all possess, thereby discrediting educational theories based on deficits.
- Transformation: dialogic learning transforms the relationships between people and their environment. It is based on Freire's premise that we are beings of transformation, not of adaptation. It defends the possibility and desirability of egalitarian transformations resulting from dialogue.
- The instrumental dimension: learning is based not on hierarchical reproduction or competitiveness, but on the creation, through dialogue, of positive expectations and on the dialogued selection of what people want to learn and how they want to learn it.
- Creation of meaning: implies the development of autonomy, commitment and people's responsibility towards themselves in order to orient their own existence around the life project they choose. With the creation of meaning, the capacity for transformation implied by dialogic learning extends to the most personal sphere of decision.
- Solidarity: the community based on dialogic learning is constituted as a space of solidarity created by everyone's contributions, made not according to social status but to common interest. It implies a struggle against the exclusion derived from social dualization.
- Equality of differences: means respecting all differences equally. Two of these principles, cultural intelligence and the instrumental dimension, are the key to the dialogic literary gathering.
The third element, the reinvention of the lifeworld, manifests itself in the literary gathering in two ways. First, in the literary gathering people rediscover the communicative rationality of their lifeworld, as well as the communicative spaces they had lost when these were taken over by bureaucracy and routine. In the dialogic literary gatherings people question the superiority of the official knowledge studied, for instance, by a teacher or by specialists in literature. All the participants' contributions create knowledge about reality in a critical way, as part of a very rich intellectual debate.
The fourth element is transformation. Freire (1997) says that we are beings of transformation, not of accommodation. The people who take part in the literary gatherings have been excluded from education and from a certain kind of culture, and many have internalized a feeling of inferiority. Freire is interested in explaining how people experience forms of domination and oppression and how they have fallen into the culture of silence. He believes, nevertheless, in people's protagonism and, with Habermas, defines it as communicative power. This author affirms that we must denounce inequalities and oppressions, and he proposes alternative ways to overcome these situations. In this sense he also rejects those intellectuals who criticize situations without offering solutions, thus perpetuating situations of injustice and oppression. In opposition to these intellectuals we find authors such as Freire, Giroux, Macedo or Flecha, who propose alternatives for social change.
The dialogic literary gatherings show that egalitarian practices in these spaces are usually reproduced in other aspects of people's lives. This framework of social transformation helps us analyse how the literary gatherings, beyond being a centre of accelerated learning, critical reflection and personal transformation, are expanding and becoming an important social movement.
THE DIMENSION OF DIALOGIC READING
Participants have many reasons for attending the literary gatherings, but the most important is the process of sharing the readings in a dialogic environment, which leads to a better understanding of them.
This new understanding is not opposed to the traditional view of reading and the elements associated with learning to read. In the literary gathering, participants use egalitarian dialogue to decide what they want to read in the group. Then each person reads the agreed text at home, connecting it with their personal experiences. When they return to the group, everyone shares what they have read with the other participants and a process of reflection begins. Later, those reflections return to the various spheres of their lives. Reading thus becomes a singular process that includes different dimensions, which Soler (2001) defines as dialogic reading.
Dialogic reading is the intersubjective process of reading and establishing the meaning of a text, in which readers reinforce their instrumental reading comprehension beyond their literary interpretation and reflect critically on life and society through egalitarian dialogue, thus opening new possibilities of social and personal transformation as readers and as people in the world.
A key element in dialogic reading is intersubjectivity, which joins the individual, subjective experience with the collective experience of the group. Intersubjectivity refers to the interaction between subjects and their social contexts.
Dialogic reading is a communicative process rather than an individual one; this implies that participants engage in a process of interaction in which their personal experiences (in reference to their objective, subjective and social worlds) meet, interact, and are often questioned and reshaped with the collective.
Reading has traditionally been seen as a process in which one tries to understand the meaning the author intended to convey. Meaning is located in the interaction between the reader and the text, and reading is understood as a subjective experience in which readers bring their subjectivity to the task and interpret the text by interacting with it. In short, notions such as intertextuality also focus on the subjective construction of a new text that connects current and previous experiences (Hartman, 1994). However, those perspectives, although all different, centre on the reader as an individual actor and ignore the group when analysing the reading process. Instead, in the literary gatherings, dialogic reading goes a step further because it includes everyone. Dialogic reading is an intersubjective experience, based on egalitarian dialogue and solidarity, in which the interaction between the reader, the text and other readers becomes the interpretive experience.
Dialogic reading focuses on reading comprehension and on engagement through the text. Becoming a dialogic reader means being more than a deep reader with a critical eye: it means understanding the text in depth and bringing one's views of life to the text itself. It also means establishing a dialogic relationship with other readers. Dialogic readers interpret and make critical reflections on the book together with the reflections of the other participants. The shared experience makes participants more curious about everything surrounding the reading, such as the author's life or the context in which the work was written.
In the dialogic literary gathering, understanding and interpreting the classics, starting a critical reflection on them, and rethinking society and lifelong learning are dimensions associated with literacy that can only take place through dialogue among the participants. Dialogue is the basis of the functioning and of the learning process that takes place in the literary gathering. This learning process is a circle that sustains itself, growing and transforming, from everyday reality to the gathering and from the gathering to the community, and so on.
HOW THE SESSIONS OF THE DIALOGIC LITERARY GATHERINGS UNFOLD
Before describing how the sessions unfold, we must bear in mind several premises, such as that all the people in the gathering are equal and different, or that formal equality leads the group to agree on rules to overcome the difficulties that arise, for example the unequal number of interventions. Collaboration and the self-creation of meaning mean that the aim of reaching homogeneous opinions is rejected, and that personal reflection and a personal vision of the world and of literature are encouraged.
Turning to the sessions themselves, we observe the following (Aguilar, 2002):
- The group meets every week in a two-hour session.
- The number of people in the group ranges between twenty and thirty participants. "Most go from not reading any book to devouring works by Proust, Baudelaire or Cortázar. The secret of such a transformation lies in the people themselves. Until then, their very rich capacities had received little social recognition, because they were expressed in forms different from those of academic powers" (Flecha, 1997).
- They decide on a book together and agree on the number of pages they will read that week.
- All members read the agreed pages at home, and the next day they meet to dialogue about the content or about topics arising from the reading.
- The dialogue proceeds as follows: each participant brings at least one chosen passage to read aloud and explains why they found it interesting or especially significant. Dialogic learning includes the instrumental learning decided by the participants. This drives them to look for more information about the authors, their lives, influences and historical contexts. These topics are taken up in the group as suggestions for dialogue.
This proposal of dialogic literary gatherings is growing day by day and has managed to bring the classics of literature closer to those who previously had no possibility of accessing them, while giving literary value to everyday things. We hope the day will come when the One thousand and one dialogic literary gatherings are finally formed.
BIBLIOGRAPHY
AGUILAR, C. (2002): "La tertulia literaria dialógica del CREA o cómo aprender a saltar las tapias de la desigualdad social a través de la literatura", in VII Congreso Internacional de la Sociedad Española de Didáctica de la Lengua y la Literatura, Santiago de Compostela, 27-29 November. (Proceedings in press.)
AYUSTE, A., FLECHA, R., LÓPEZ, F., LLERAS, J. (1994): Planteamientos de la pedagogía crítica. Comunicar y transformar. Barcelona: Aula Graó.
BECK, U. (1998): La sociedad del riesgo: hacia una nueva modernidad. Barcelona: Paidós.
CASTELLS, M. (1997-1998): La era de la información. Economía, sociedad y cultura. Vol. I: La sociedad en red. Vol. II: El poder de la identidad. Vol. III: Fin del Milenio. Madrid: Alianza.
CASTELLS, M., FLECHA, R., FREIRE, P., GIROUX, H., MACEDO, D., WILLIS, P. (1994): Nuevas perspectivas críticas en educación. Barcelona: Paidós.
CREA (1999): "Cambio educativo. Teorías y prácticas que superan las desigualdades", I Jornadas Educativas en el Parc Científic de Barcelona, 22-23 November.
FLECHA, R. (1997): Compartiendo palabras. El aprendizaje de las personas adultas a través del diálogo. Barcelona: Paidós.
FREIRE, P. (1970): Pedagogy of the Oppressed. New York: Continuum.
FREIRE, P. (1997): Pedagogy of the Heart. New York: Continuum.
GIROUX, H. A., FLECHA, R. (1992): Igualdad educativa y diferencia cultural. Barcelona: Roure.
HABERMAS, J. (1987): Teoría de la acción comunicativa. Vol. I: Racionalidad de la acción y racionalización social. Vol. II: Crítica de la razón funcionalista. Madrid: Taurus.
HARTMAN, D. K. (1994): "The Intertextual Links of Readers Using Multiple Passages: A Postmodern/Semiotic/Cognitive View of Meaning Making", in Ruddell, R., Rapp Ruddell, M., Singer, H. (eds.), Theoretical Models and Processes of Reading. International Reading Association.
SOLER, M. (2001): Dialogic reading: A new understanding of the reading event (doctoral dissertation). Harvard University.
VYGOTSKY, L. S. (1978): Mind in Society. The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
NOTES
1. In Catalan, "Gathering in Cyberspace" would be translated as "Trobades al Ciberespai". |
00412149 | en | ["math.math-co"] | 2024/03/04 16:41:26 | 2008 | https://hal.science/hal-00412149/file/Symmetric_CFTP_final.pdf | Philippe Duchon, Florent Le Gac
Exact Random Generation of Symmetric and Quasi-symmetric Alternating-sign Matrices
We show how to adapt the Monotone Coupling from the Past exact sampling algorithm to sample from some symmetric subsets of finite distributive lattices. The method is applied to generate uniform random elements of all symmetry classes of alternating-sign matrices.
Introduction
Coupling from the Past (CFTP) is an algorithm due to Propp and Wilson [6] to sample from the exact stationary distribution (rather than from an approximation, like the classical MCMC method) of an ergodic Markov chain. In its arguably most practical version of monotone-CFTP, the Markov chain's state space is a partially ordered set with unique minimum and maximum elements, and the chain is run through a monotone update mechanism: at each time step, an appropriately-distributed increasing function mapping the state space to itself is randomly chosen, and arbitrarily many copies of the Markov chain can be coupled using the same function by defining the new state of each of them to be the image of their previous state. The key element in the CFTP algorithm lies in being able to (backward) compose a number of such random update functions until the resulting function is found to be a constant. Using only increasing functions makes it possible to detect this coalescence by computing the successive images of only the minimum and maximum elements of the state space.
Thus, CFTP is particularly well suited to sampling from distributive lattices, either uniformly or according to some well-behaved distribution. In favorable cases, this makes it possible to completely eliminate initiation bias from the corresponding MCMC simulations, in time comparable to what a well-tuned (for small enough bias) MCMC algorithm would require. It is also worth mentioning that tight bounds on the mixing time of Markov chains are often tricky to obtain, so that MCMC simulations typically either have to be run for an empirically determined time, or for a sometimes widely overestimated guaranteed mixing time.
Many combinatorially interesting families of objects fall in the "monotone CFTP on a distributive lattice" framework, but in some cases this fails because of some imposed symmetry condition. As an example, consider alternating-sign matrices (ASM for short; see Section 3 for definitions). These are in bijection with another set of matrices which form a distributive lattice, and CFTP is an efficient way to generate random ASMs.
Random generation of non-symmetric ASMs can be considered a textbook exercise in monotone-CFTP. For some symmetry classes, however, the defining symmetries correspond to a decreasing automorphism on the lattice of ASMs, and as a result the set of symmetric ASMs no longer has a natural ordering.
In this paper, we show how to adapt the monotone-CFTP algorithm in a systematic way to this kind of situation. The new variant, which we call symmetric-CFTP, uses the fact that the set we are trying to sample from can be seen as the set of symmetric ideals of some partially ordered set.
The paper is organized as follows. In Section 2, we review the usual monotone-CFTP and describe the general idea of symmetric-CFTP. In Section 3, we apply it to random sampling from arbitrary symmetry classes of alternating-sign matrices, and give some experimental results.
2 Monotone and symmetric coupling from the past
Let (P, ≤) denote a finite partially ordered set (poset). Recall that a subset E ⊂ P is an ideal of P if it is closed under taking smaller elements (x ≤ y and y ∈ E imply x ∈ E), and that y is said to cover x if x < y but no z exists satisfying x < z < y. The set J(P) of lower ideals of P forms a finite distributive lattice, and it is a standard result (see, e.g., Stanley, Enumerative Combinatorics) that any finite distributive lattice is isomorphic to the set of lower ideals of some finite poset. We consider the problem of sampling uniformly at random from J(P), or from a specific subset.
Monotone CFTP
The standard application of monotone-CFTP in this context is as follows. For any x ∈ P, let T_x^+ and T_x^- denote the applications J(P) → J(P) defined by
T_x^+(E) = E ∪ {x} if E ∪ {x} ∈ J(P), and E otherwise;
T_x^-(E) = E − {x} if E − {x} ∈ J(P), and E otherwise.
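As a concrete illustration (a minimal Python sketch, not taken from the paper or from any released code), the two update maps can be written as follows; the poset is assumed to be given by precomputed dictionaries `below` and `above` listing, for each element, the elements strictly below and strictly above it, and an ideal is a plain Python set.

```python
def t_plus(ideal, x, below):
    """T_x^+: add x if every element strictly below x is already in the ideal."""
    return ideal | {x} if below[x] <= ideal else ideal

def t_minus(ideal, x, above):
    """T_x^-: remove x if no element strictly above x lies in the ideal."""
    return ideal - {x} if x in ideal and not (above[x] & ideal) else ideal
```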
Thus, T_x^+ (resp. T_x^-) acts on any ideal by trying to add (resp. remove) x, leaving it unchanged if this violates the ideal condition. Note that each such application is monotone increasing for the inclusion order, i.e. E ⊆ F ⇒ T(E) ⊆ T(F). Now consider any probability distribution π on P (subject to the condition that π(x) > 0 for all x ∈ P). We can define a Markov chain with state space J(P) where one step of the chain consists in choosing a random x ∈ P according to π and an independent sign ε uniformly in {+, −}, and replacing the current state E by E′ = T_x^ε(E). Such a Markov chain is clearly irreducible (one can move from any state E to the empty ideal ∅ by applying the transforms T_x^-, x ∈ E, in decreasing order, and back by applying the transforms T_x^+, x ∈ E, in increasing order) and aperiodic (T_x^-(E) = E whenever x is not a maximal element of E, and T_x^+(E) = E whenever x is not a minimal element of P − E, which implies that the transition matrix has nonzero diagonal elements), with a symmetric transition matrix; thus, its unique stationary distribution is the uniform distribution on the whole lattice J(P). In fact, the above description actually defines a grand coupling of an arbitrary number of copies of the Markov chain: one can start one copy of the chain from each state and run them in parallel using the same update functions T_x^ε; if at any time two copies are in the same state, they will be in the same state forever. With probability 1, all copies will ultimately be in the same state; this is equivalent to the fact that, since one can obtain a constant function by composing the update functions (for instance, for any linear extension x_1 < x_2 < ... < x_n of the order ≤ on P, the compose T_{x_1}^- ∘ T_{x_2}^- ∘ ... ∘ T_{x_n}^- is the constant function which maps each ideal to the empty ideal), composing random functions will ultimately yield a constant function.
The idea of CFTP is to compose such update functions in reverse order until the compose is constant, i.e. generate a sequence of independent pairs (x_i, ε_i) until T_{1,n} = T_{x_1}^{ε_1} ∘ ... ∘ T_{x_n}^{ε_n} is constant, and then output the single value of this function, which is uniformly distributed on J(P). The monotonicity of the update functions is what makes the method practically useful: since each T_{1,n} is an increasing mapping of J(P) to itself, one can decide whether it is constant by computing the images of the minimum and maximum ideals ∅ and P; T_{1,n} is constant if and only if T_{1,n}(∅) = T_{1,n}(P).
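A possible implementation of this scheme (again only a sketch, reusing the `t_plus`/`t_minus` helpers above and the usual doubling of the time window into the past) could look as follows.

```python
import random

def monotone_cftp(elements, below, above, seed=None):
    """Sample a uniform lower ideal: compose random updates from the past until the
    chains started at the bottom (empty) and top (full) ideals have coalesced."""
    rng = random.Random(seed)
    moves = []                              # moves[k] is the update used at time -(k+1)
    horizon = 1
    while True:
        while len(moves) < horizon:         # extend the same random sequence further back
            moves.append((rng.choice(elements), rng.choice('+-')))
        low, high = set(), set(elements)
        for k in reversed(range(horizon)):  # replay from time -horizon up to time -1
            x, sign = moves[k]
            if sign == '+':
                low, high = t_plus(low, x, below), t_plus(high, x, below)
            else:
                low, high = t_minus(low, x, above), t_minus(high, x, above)
        if low == high:                     # the composed map is constant on J(P)
            return low
        horizon *= 2                        # double the window into the past and retry
```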
Automorphisms of finite posets
Now consider the group of increasing or decreasing automorphisms of P . Increasing automorphisms are the "true" automorphisms of the poset, that is, bijections σ of P to itself such that σ(x) ≤ σ(y) ⇐⇒ x ≤ y; decreasing automorphisms are isomorphisms of P to its dual poset, that is, bijections σ of P to itself such that σ(x) ≤ σ(y) ⇐⇒ x ≥ y.
For any automorphism σ, we say that a lower ideal E ∈ J(P ) is σ-symmetric if σ(E) = E when σ is increasing, or if σ(E) = P -E when σ is decreasing. For any subgroup G of the group of automorphisms, we say that E is G-symmetric if it is σ-symmetric for all σ ∈ G. Generally speaking, given P and G we are interested in sampling from the set of G-symmetric lower ideals of P .
When G contains only increasing automorphisms, G-symmetric ideals are in natural bijection with the ideals of the quotient poset P/G, so that one can naturally apply the CFTP algorithm to the quotient poset. The situation is different, however, when G contains decreasing automorphisms. In this case, the quotient set P/G no longer has a naturally defined partial order. Note, however, that quotienting by the subgroup G + of increasing automorphisms reduces all increasing automorphisms to the identity, and all decreasing automorphisms to a single decreasing involution. As a result, we can, without loss of generality, assume that the group G only contains the identity and a single decreasing involution σ on P .
Note that if σ has any fixed points, the set of σ-symmetric ideals is empty. In such a case, fixed points are necessarily pairwise incomparable (since σ is decreasing). To obtain the "next best thing" to truly σ-symmetric ideals, we can change the definition of a (quasi) σ-symmetric ideal to σ(E) = P -E -F σ , where F σ denotes the set of fixed points of σ. Equivalently, we could replace (P, ≤) by the induced order on P -F σ and sample from the set of "truly" symmetric lower ideals on this smaller set. In the rest of this section, we assume that σ has no fixed points; we will see in Section 3 that this extended definition of symmetric ideals makes sense from a combinatorial point of view when dealing with symmetric ASMs.
Symmetric CFTP
We now turn to a description of symmetric-CFTP. In view of the discussion in the previous paragraph, we assume that we are given a finite poset P and a decreasing involution on P , and we are interested in random sampling from the set J σ (P ) of (quasi) σ-symmetric lower ideals of P .
A simple observation is the following: Lemma 1. Let x and y be two distinct elements of a finite poset P, and let the maps T_x^± : J(P) → J(P) be defined as in Subsection 2.1.
• if x and y do not cover each other, then any T_x^± and any T_y^± commute;
• if y does not cover x, then T_y^+ and T_x^- commute.
As a consequence, when x and σ(x) are incomparable in the poset P, S_x^ε = T_x^ε ∘ T_{σ(x)}^{-ε} maps symmetric ideals to symmetric ideals in a similar way that T_x^ε maps ideals to ideals; and, as a compose of increasing mappings of J(P) to itself, it is also increasing. Now if, say, x > σ(x), every symmetric ideal contains σ(x) but not x. In this situation, S_x^+ is still defined as a commutative compose, and leaves each symmetric ideal unchanged. It may not be true that T_x^+ and T_{σ(x)}^- commute when x covers σ(x), and in this case we simply set S_x^+ = S_x^-. With these definitions, we have, for any two symmetric ideals E and E′ and any sign ε, S_x^ε(E) = E′ ⟺ S_{σ(x)}^ε(E′) = E. Thus, provided we can check that the resulting Markov chain on J_σ(P) is irreducible (we will see that this is always true), we obtain a CFTP algorithm on J_σ(P) by taking the previous monotone-CFTP algorithm on J(P) and simply using "symmetric" update functions S_x^ε instead of the original T_x^ε.
One possible problem lies in making this algorithm practical. J σ (P ) does not have a naturally ordered structure -any symmetric ideal must contain exactly half the elements of P , so no two symmetric ideals can be compared by inclusion. This makes it potentially difficult to detect coalescence of the process.
Fortunately, we still have a naturally ordered space on which the Markov coupling using the S x update functions is monotone, namely, J(P ). When run with J(P ) as a state space, the Markov chain is no longer irreducible since no transitions lead out of J σ (P ) (no non-symmetric state is accessible from any symmetric state), but it is still a monotone grand coupling, and coalescence of the whole state space can still be detected by the condition that the copies started from states ∅ and P have met.
One can save some time by precomputing a lower and upper bound (in J(P)) for J_σ(P), and using these two ideals as starting states instead of ∅ and P. If we define
E_low = {x ∈ P : x < σ(x)},   E_high = {x ∈ P : x ≯ σ(x)},
then any symmetric ideal contains E_low and is included in E_high, so that checking coalescence for the coupled chains started at E_low and E_high is sufficient. Alternately, one can note that J_σ(P) is isomorphic to J_σ(E_high − E_low) (any symmetric ideal of P is the disjoint union of E_low and some symmetric ideal of the induced order on E_high − E_low) and work on the smaller induced order, where no comparable pairs (x, σ(x)) remain.
The whole symmetric-CFTP method can be summed up in the following theorem:
Theorem 1. Let (P, ≤) be some finite poset, and assume there exists a decreasing involution σ on P with no fixed points. Let P′ denote the set of elements x of P such that x does not cover, and is not covered by, σ(x).
Then the set J_σ(P) of σ-symmetric ideals of P is nonempty. Furthermore, for any probability distribution π on P′ such that π(x) = π(σ(x)) > 0 for all x ∈ P′, if (x_k)_{k≥1} is a sequence of independent random elements of P′ with common distribution π, (ε_k)_{k≥1} is an independent sequence of independent, uniform random signs, and S_k = S_{x_1}^{ε_1} ∘ ... ∘ S_{x_k}^{ε_k}, then
• the random variable N = inf{k : S_k(∅) = S_k(P)} is almost surely finite, with finite expectation;
• S_N(∅) is uniformly distributed on J_σ(P).
Proof. We construct a symmetric ideal I explicitly by examining each pair (x, σ(x)) in an arbitrary order, each time selecting exactly one of the two to be included in I. The rules are as follows:
• if x < σ(x), or if I already contains an element y with y > x, then add x to I;
• if σ(x) < x, or if I already contains an element y with y > σ(x), then add σ(x) to I;
• otherwise, arbitrarily select one of x or σ(x) to be added to I.
The first and second rules ensure that the set I obtained is indeed a lower ideal of P (which will be σ-symmetric by construction), provided one never runs into a situation where they are in contradiction. Assume that one has x < y and σ(x) < y′ with both y and y′ in I; then one has σ(y′) < x < y and σ(y) < y′, so that whichever of y and y′ was added last was not added according to the rules; a similar argument holds for situations where x and σ(x) are comparable.
The proof that N is almost surely finite and that S N (∅) does have the required distribution is very similar to the classical proofs for CFTP, and is omitted due to space constraints.
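The constructive part of the proof translates directly into code; the following sketch (illustrative only, with `leq(a, b)` testing a ≤ b in P and `sigma` the involution as a dictionary) builds one symmetric ideal by applying the three rules to each pair.

```python
def build_symmetric_ideal(elements, leq, sigma):
    """Construct one sigma-symmetric lower ideal following the rules of the proof."""
    ideal, done = set(), set()
    for x in elements:
        if x in done:
            continue
        y = sigma[x]
        if leq(x, y) or any(leq(x, z) and z != x for z in ideal):
            ideal.add(x)      # rule 1: x < sigma(x), or some element above x already chosen
        elif leq(y, x) or any(leq(y, z) and z != y for z in ideal):
            ideal.add(y)      # rule 2: the symmetric condition for sigma(x)
        else:
            ideal.add(x)      # rule 3: free choice, arbitrarily keep x
        done.update((x, y))
    return ideal
```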
A note on coalescence detection
The stopping criterion for the symmetric-CFTP algorithm, as suggested by Theorem 1, is that the coupled chains started at the minimum and maximum states (or at the lower and higher bounds E low and E high as defined earlier) have coalesced to the same state, which, by monotonicity, implies that the composed update function is constant on the whole of J(P ). This is not necessary, and is not optimal. We only need a stopping criterion which ensures that the composed update function is constant on J σ (P ). Since J σ (P ) is an antichain in J(P ) (no two symmetric ideals are comparable), a sufficient condition for coalescence on J σ (P ) is that S N (∅) ∈ J σ (P ) (or, symmetrically, S N (P ) ∈ J σ (P )). Using this condition in the algorithm will always save time (provided checking symmetry in an ideal is not significantly longer than checking for equality of two ideals), not only because one only has to run one copy of the Markov chain instead of two, but also because this corresponds to running the coupling until the chain started at ∅ reaches J σ (P ), and not until the two chains started at ∅ and P reach J σ (P ).
Random sampling from symmetry classes of alternating-sign matrices
In this section, we demonstrate the usefulness of our symmetric-CFTP framework by applying it to the random generation of alternating-sign matrices with arbitrary symmetry conditions. Though all but one symmetry classes can be tackled using only the classical monotone-CFTP algorithm, our approach has the added benefit of providing a unified treatment of all symmetry classes.
An alternating-sign matrix (ASM for short) of size N is an N × N square matrix whose entries are all 0, 1 or -1, and such that nonzero entries in each line and column alternate in sign, starting and ending with a 1. These matrices have many fascinating combinatorial properties; see [START_REF] Propp | The many faces of alternating-sign matrices[END_REF] for a survey, or [START_REF] Bressoud | Proofs and confirmations: the story of the alternating-sign matrix conjecture[END_REF] for the story of their enumeration.
There is a simple bijection between ASMs of size N and a set of height matrices of size N , which are defined as those (N + 1) × (N + 1) matrices with integer entries h i,j (0 ≤ i, j ≤ N ) satisfying the conditions
• for all 0 ≤ i ≤ N: h_{i,0} = h_{0,i} = h_{N−i,N} = h_{N,N−i} = i;
• for all 0 ≤ i, j < N: |h_{i,j} − h_{i,j+1}| = |h_{i,j} − h_{i+1,j}| = 1.
A simple way of expressing the bijection is as follows: if the entries of an ASM A are (a_{i,j})_{1≤i,j≤N}, set
h_{i,j} = i + j − 2 Σ_{k≤i, l≤j} a_{k,l}
to obtain the corresponding height matrix. Height matrices of a given size are naturally ordered by entry-wise comparison, and one easily checks that the resulting poset is a distributive lattice where the meet and join operations are defined by taking entry-wise maximums and minimums. The minimum (respectively, maximum) matrices have their entries defined by h_{i,j} = |i − j| (respectively, h_{i,j} = N − |N − i − j|). One easily checks that the set of height matrices of size N is isomorphic to the set of ideals of the set
P_N = {(i, j, k) : 1 ≤ i, j < N, k ≡ i + j (mod 2), |i − j| < k ≤ N − |N − i − j|},
with a partial order defined by
(i, j, k) ≤ (i′, j′, k′) ⇔ k′ ≥ k + |i − i′| + |j − j′|.
Given a lower ideal E of P_N, the corresponding height matrix has its entries defined by h_{i,j} = max(|i − j|, sup{k : (i, j, k) ∈ E}), and the inverse bijection defines the lower ideal corresponding to a height matrix as
E = {(i, j, k) : k ≤ h_{i,j}}.
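To make the ASM/height-matrix correspondence explicit, here is a small illustrative Python helper (0-indexed lists of lists, with no attempt at efficiency) implementing the bijection in both directions.

```python
def asm_to_heights(a):
    """Corner-sum map: h[i][j] = i + j - 2 * (sum of ASM entries a[k][l] with k < i, l < j)."""
    n = len(a)
    h = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            s = sum(a[k][l] for k in range(i) for l in range(j))
            h[i][j] = i + j - 2 * s
    return h

def heights_to_asm(h):
    """Inverse map: each ASM entry is half a mixed second difference of the heights."""
    n = len(h) - 1
    return [[(h[i][j + 1] + h[i + 1][j] - h[i][j] - h[i + 1][j + 1]) // 2
             for j in range(n)] for i in range(n)]
```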
All these bijections make it easy to apply the monotone-CFTP algorithm to the random generation of ASMs. The most basic application of the algorithm would be to select some arbitrary probability distribution on P_N (typically, uniform) and use the update functions T_x^± as we defined them in Section 2; a very simple variant, which is easier to implement in practice, would only ask for a probability distribution on coordinates (i, j) (again, the uniform distribution is an obvious and easy choice) and use update functions of the form T_{i,j}^± = ∏_k T_{(i,j,k)}^±.
(In the above formula, the product denotes composition; note that all involved T_{(i,j,k)}^± update functions commute, so that the notation is not ambiguous.)
In their matrix formulation, the update functions T_{i,j}^+ (respectively, T_{i,j}^-) correspond to the instructions "increase (respectively, decrease) entry h_{i,j} by 2 if this is compatible with the height matrix conditions, otherwise do nothing", and seem more natural than the T_{(i,j,k)}^±, which correspond to "increase (decrease) entry h_{i,j} to k if possible, otherwise do nothing".
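In this matrix formulation a single Markov-chain move is a purely local test on the four neighbours of an interior entry; a minimal illustrative sketch (forward simulation only, not the CFTP wrapper) is given below.

```python
import random

def try_flip(h, i, j, sign):
    """T_{i,j}^+/-: raise or lower interior entry h[i][j] by 2 when each of its
    four neighbours still differs from the new value by exactly 1."""
    new = h[i][j] + (2 if sign == '+' else -2)
    if all(abs(new - v) == 1 for v in (h[i - 1][j], h[i + 1][j], h[i][j - 1], h[i][j + 1])):
        h[i][j] = new

def random_sweep(h, steps, seed=None):
    """Plain forward chain on height matrices: pick a random interior entry and sign."""
    rng = random.Random(seed)
    n = len(h) - 1
    for _ in range(steps):
        try_flip(h, rng.randrange(1, n), rng.randrange(1, n), rng.choice('+-'))
    return h
```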
The 8-element dihedral group of symmetries of the square acts naturally on ASMs by permuting entries, so that, for each of its subgroups, one may consider the set of ASMs that are invariant under its action. It turns out that each of these symmetries corresponds, through the standard isomorphism we described above, to an (increasing or decreasing) automorphism of the poset P N , so that either classical monotone-CFTP or symmetric-CFTP can be used to sample from each symmetry class. In some cases, no truly symmetric ASMs exist for some sizes, due to the existence of fixed points for the decreasing automorphisms; in such cases, we instead turn to quasi-symmetric ASMs.
Diagonal symmetry
Height matrices can be symmetric around their main diagonal (h i,j = h j,i for all i, j), which corresponds exactly to symmetric ideals for the (increasing) automorphism σ D : (i, j, k) → (j, i, k).
Symmetry around the other diagonal corresponds to the automorphism
σ A : (i, j, k) → (N -j, N -i, k).
Diagonally and doubly-diagonally symmetric ASMs of all sizes exist, though no formula for their enumeration is known (and no simple formula is likely to exist, since the enumerating sequence does not seem to factor into small primes).
Vertical or horizontal symmetry
ASMs invariant under symmetry around a vertical axis correspond to height matrices satisfying h_{i,N−j} = N − h_{i,j} for all i, j. The corresponding involution on P_N is
σ_V : (i, j, k) → (i, N − j, N + 2 − k),
which is a decreasing automorphism. Note that, when N is even, σ V has fixed points of the form (i, N/2, N/2 + 1) with odd i, so that truly vertically symmetric ASMs of even sizes do not exist (this is, of course, very easy to see directly in the ASM formulation: since every ASM has a single nonzero entry in its first line, this entry must be in the middle column for vertical symmetry, and even sized matrices do not have a middle column).
If we turn to quasi-vertically symmetric ASMs of even sizes, these correspond to σ Vsymmetric ideals of P N with the fixed points removed. The corresponding height matrices have entries in their middle column alternate between N/2 and N/2 -1. These were counted by Kuperberg [4] under the guise of UASMs (U-turn ASMs), with a slight change of size; there is an easy bijection between quasi-vertically symmetric ASMs of size 2N and Kuperberg's UASMs of size 2N -2.
Horizontal symmetry is of course similar, corresponding to the decreasing automorphism σ_H : (i, j, k) → (N − i, j, N + 2 − k).
For the random generation of vertically symmetric ASMs, Symmetric-CFTP can be used. Note, however, that the set of fixed entries in height matrices (the two middle columns for odd size; the middle column and one entry in every two in adjoining columns for even sizes) separates nonfixed entries in the left and right halves of the matrix. This makes it possible to define a distributive lattice structure on the set of (quasi-)symmetric ASMs by using only the left half of the height matrix columns for entry-wise comparison; with this ad hoc partial ordering, classical Monotone-CFTP can also be used (and is actually the same algorithm) in this case.
ASMs that are both horizontally and vertically symmetric can be treated in a similar way, using both σ H and σ V . Again, one can use Monotone-CFTP by using only the top left quarter of the height matrices for comparisons. Generating functions for quasi-symmetric ASMs of even size 4n were described by Kuperberg under the name of UUASMs (double U-turn ASMs) of size 4n -4. Quasi-symmetric ASMs of size 4n + 2 seem to be missing from current literature; their enumeration sequence, starting with 1, 2, 28, 3146, 2855320, does not appear in [START_REF] Sloane | The on-line encyclopedia of integer sequences[END_REF].
Rotational symmetry
Half-turn symmetric ASMs were enumerated by Kuperberg [4] for even size, and by Razumov and Stroganov [Combinatorial nature of ground state of O(1) loop model] for odd size. The symmetry condition on height matrices (h_{N−i,N−j} = h_{i,j}) corresponds to σ_C-symmetry for the increasing automorphism σ_C : (i, j, k) → (N − i, N − j, k), and again classical Monotone-CFTP can be used to sample from this symmetry class.
Quarter-turn symmetry corresponds to the condition h_{j,N−i} = N − h_{i,j} for height matrices, or to σ_Q-symmetry for the decreasing automorphism
σ_Q : (i, j, k) → (j, N − i, N + 2 − k).
This automorphism has a single fixed point (N/2, N/2, N/2 + 1) if N is of the form 4n + 2, and none in other cases. Truly quarter-turn-symmetric ASMs of size divisible by 4 were enumerated by Kuperberg [Symmetry classes of alternating-sign matrices under one roof], and those with odd size by Razumov and Stroganov [8]; a similar enumeration formula for quasi-quarter-turn-symmetric ASMs with size 4n + 2 is conjectured in [Duchon, On the link pattern distribution of quarter-turn symmetric FPL configurations].
For all sizes of quarter-turn (quasi-)symmetric ASMs, classical Monotone-CFTP does not seem to be adapted to random sampling, as there is no natural way to define a partial ordering on such ASMs. In this case, Symmetric-CFTP is a natural and efficient choice.
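For reference, the involutions discussed in this section act on the triples (i, j, k) of P_N as follows (an illustrative Python transcription; the horizontal case is the one inferred above by analogy with the vertical one).

```python
def square_symmetries(N):
    """Symmetries of the square as maps on P_N; diagonal, antidiagonal and half-turn
    are increasing automorphisms, the remaining three are decreasing."""
    return {
        'diagonal':     lambda i, j, k: (j, i, k),
        'antidiagonal': lambda i, j, k: (N - j, N - i, k),
        'half_turn':    lambda i, j, k: (N - i, N - j, k),
        'vertical':     lambda i, j, k: (i, N - j, N + 2 - k),
        'horizontal':   lambda i, j, k: (N - i, j, N + 2 - k),
        'quarter_turn': lambda i, j, k: (j, N - i, N + 2 - k),
    }
```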
Full symmetry
The maximum amount of symmetry one can ask for in ASMs is invariance under the action of the whole group of symmetries of the unit square. In the P N formulation, this corresponds to (quasi-)symmetry under σ D , σ V , and their composes. As was the case for vertical symmetry, despite the fact that a decreasing automorphism is used, the whole symmetry class can be turned into a distributive lattice by using only a triangular-shaped region of the matrix (e.g., entries h i,j with i ≤ j ≤ N/2) to define the partial ordering, so that an ad hoc version of Monotone-CFTP can be used instead of Symmetric-CFTP.
Simulation results
We implemented our algorithm to generate random ASMs with prescribed symmetry conditions, and some results with moderate size are presented in Figure 1. White dots represent positive entries, black dots represent negative entries, and gray areas cover zones where the symmetry constraints are not respected due to quasi-symmetry; one sample from each symmetry class is shown. For large sizes, one observes that nonzero entries are very rare outside of a roughly circular shape, a phenomenon known as the arctic circle phenomenon; the exact shape (or even existence) of this frozen region has not yet been proved for uniform random ASMs, though Colomo and Pronko (email communication on the domino forum) recently announced a partial proof that this limit shape exists and is not circular.
We also give experimental results on coalescence times (in number of time steps) in Table 1. We ran the coupled copies of the chain, starting from the minimum and maximum states (or from E_low and E_high when appropriate), forward in time until both met. While the resulting states would not be uniform, the distribution of the coalescence time is the same as when the chain is run backward, and this takes less simulation time. This number of steps is a good measure for the running time of the CFTP algorithm, which will have to perform a number of simulation steps roughly proportional to it, both in the "binary-backoff" and "read-once" [Wilson, How to couple from the past using a read-once source of randomness] variants.
All simulations were done using symmetric moves of the form S_{i,j} = ∏_k S_{(i,j,k)}, with the definition of S_{(i,j,k)} appropriate to the considered symmetry class, and using the corresponding starting states. In cases where the symmetry class is defined through a group with only increasing automorphisms, the coalescence time seems to be roughly divided by the order of the group between nonsymmetric and symmetric ASMs of the same size. This relationship also holds between the vertically and horizontally symmetric and totally symmetric cases. In cases where the group contains negative automorphisms, no such simple relationship seems to hold; the situation is made more complex by the change in the starting states.
Concluding remarks
In this paper, we have described a new variant of the Monotone-CFTP algorithm, which is well suited to sampling from some symmetric subsets of finite distributive lattices. While we have mainly studied its application to generating random symmetric alternating-sign matrices, it can be applied to other similar situations; an example which immediately comes to mind is that of symmetric plane partitions (the fact that the enumeration sequences for symmetry classes of ASMs and plane partitions often derive from each other is irrelevant here).
Strictly speaking, (quarter-turn) rotational invariance is the only symmetry class for which one cannot adapt the usual Monotone-CFTP algorithm, but even for other symmetry classes, our setting has the advantage of giving a unified treatment.
On the theoretical side, an unsolved issue is that of the coalescence time. In all the examples we have experimented with, the coalescence time for sampling from the symmetric ideals is significantly lower than that for sampling from the whole ambient lattice. It would be interesting to give a precise and rigorous statement of this observation, and to know whether this is a general phenomenon or is due to the special structure of the posets we studied.
Figure 1: Random ASMs of size 30
Table 1: Observed coalescence times (average and standard deviation)
Acknowledgements
This research was supported by French ANR project SADA (ANR-05-BLAN-0372). |
00412150 | en | ["info.info-mo", "info.info-ni", "info.info-mc"] | 2024/03/04 16:41:26 | 2009 | https://inria.hal.science/inria-00412150/file/ebenhami_article.pdf | Elyes Ben Hamida (email: [email protected]), Guillaume Chelius, Jean-Marie Gorce
Impact of the Physical Layer Modeling on the Accuracy and Scalability of Wireless Network Simulation
Keywords: wireless network simulation, simulation scalability, accuracy, simulation environment, physical layer modeling, interference modeling
Recent years have witnessed a tremendous growth of research in the field of wireless systems and networking protocols. Consequently, simulation has appeared as the most convenient approach for the performance evaluation of such systems and several wireless network simulators have been proposed in the last years. However, the complexity of the wireless physical layer (PHY) induces a clear tradeoff between the accuracy and the scalability of simulators. Thereby, the accuracy of the simulation results varies drastically from one simulator to another. In this paper, we focus on this tradeoff and we investigate the impact of the physical layer modeling accuracy on both the computational cost and the confidence in simulations. We first provide a detailed discussion on physical layer issues, including the radio range, link and interference modeling, and we investigate how they have been handled in existing popular simulators. We then introduce a flexible and modular new wireless network simulator, called WSNet. Using this simulator, we analyze the influence of the PHY modeling on the performance and the accuracy of simulations. The results show that the PHY modeling, and in particular interference modeling, can significatively impact the behavior of the evaluated protocols at the expense of an increased computational overhead. Moreover, we show that the use of realistic propagation models can improve the simulation accuracy without inducing a severe degradation of scalability.
INTRODUCTION
In wireless multi-hop networks (i.e., ad hoc, sensor and mesh networks), there is a growing need for the performance evaluation of protocols and distributed applications. Three main methodologies are generally adopted to investigate the performance and the behavior of networking protocols: theoretical analysis, experimentation and simulation.
Due to the high complexity of wireless communications, analytical studies are often based on unrealistic assumptions and inaccurate physical layer, e.g., the socalled Unit Disk Graph model that has been widely used to model the radio range of wireless nodes [START_REF] Dousse | Connectivity in ad-hoc and hybrid networks[END_REF]- [START_REF] Frey | On delivery guarantees of face and combined greedy-face routing in ad hoc and sensor networks[END_REF]. Moreover, analytical studies usually focus on a given layer ignoring the impact of the other network and physical layers. The experimentation approach may provide valuable insight into the performance and the behavior of wireless systems. However, setting-up testbeds is a tedious and expensive task. For these reasons, simulations are generally considered as the most convenient methodology to explore the behavior of protocols and distributed applications. Nonetheless, the complexity of the physical phenomena constituting the radio medium introduces a tradeoff between accuracy and computational cost in wireless network simulation.
Several wireless network simulators have been proposed in recent years. Examples are NS-2 [START_REF]The network simulator[END_REF], GloMoSim [START_REF] Bajaj | Simulation of large-scale heterogeneous communication systems[END_REF], JiST/SWANS [START_REF] Barr | Scalable Wireless Ad hoc Network Simulation. Handbook on Theoretical and Algorithmic Aspects of Sensor, Ad hoc Wireless, and Peer-to-Peer Networks[END_REF], GTSNetS [START_REF] Ould-Ahmed-Vall | Simulation of large-scale sensor networks using gtsnets[END_REF], etc. They all provide an advanced and complete simulation environment to investigate and evaluate networking protocols and wireless systems. However, the complexity of the wireless physical layer (PHY) enforces the use of simplified models and unrealistic assumptions for the design of simulators. As a consequence, the retained tradeoff between accuracy and computational load varies drastically from one simulator to another. As it has been highlighted in previous publications [START_REF] Cavin | On the accuracy of manet simulators[END_REF]- [START_REF] Heidemann | Effects of detail in wireless network simulation[END_REF], these variations largely impact the results of a simulation, and the obtained simulation results may significantly diverge from experimental results [START_REF] Newport | Experimental evaluation of wireless simulation assumptions[END_REF]. A correct modeling of the PHY layer is then crucial for confidence in the simulation results. Nonetheless, most wireless network simulators are still based on inaccurate and heterogeneous PHY models [START_REF] Takai | Effects of wireless physical layer modeling in mobile ad hoc networks[END_REF]. The reason that generally prevails to justify this low accuracy is scalability.
In this paper, we focus on the physical layer modeling issue and we investigate its impact on the accuracy and the complexity of wireless network simulations. The question we raise is what is the real cost of PHY simulation accuracy ? We deliberately keep aside optimizations and scalability of the node and protocol aspects of simulations which have been the subject of other studies [START_REF]Simulation scalability issues in wireless sensor networks[END_REF].
To evaluate the PHY tradeoff, we introduce a flexible and modular simulation framework, called WSNet [START_REF]The wsnet wireless network simulator[END_REF]. We dropped the idea of using existing simulators for such a study as none of them offer a sufficient diversity in PHY models. Comparing several existing simulators would not have helped, as they differ in many other aspects than PHY modeling. We thus use WSNet to better understand the impact of the PHY layer on the obtained simulation results.
The remainder of this paper is organized as follows. In Section 2, we review the related works. In Section 3, we discuss the main aspects of wireless communications. Then, in Section 4, we review some common wireless network simulators and compare their PHY modeling. Next, the WSNet simulation framework is presented in Section 5. In Section 6 we investigate the impact of the PHY modeling on the simulation speedup, and in Section 7 we analyze the limited interference model. Finally, the tradeoff involving accuracy and complexity is illustrated in Section 8.
BACKGROUND
Wireless network simulators. Numerous wireless network simulators have been developed and are concurrently used in the academic research world.
• The NS-2 [START_REF]The network simulator[END_REF] network simulator is one of the most popular environment for wired and wireless network simulations. NS-2 is developed in C++ and uses OTcl for scripting and configuration. However it suffers from a limited scalability though some recent optimizations have been proposed to support simulations of a few thousand nodes [START_REF] Naoumov | Simulation of large ad hoc networks[END_REF].
• GloMoSim [START_REF] Bajaj | Simulation of large-scale heterogeneous communication systems[END_REF] is a simulation environment based on a C-derived language, called Parsec, which supports the sequential and parallel execution of discrete-event simulations. Thanks to parallelization, GloMoSim was shown to scale up to 10,000 nodes [START_REF] Bagrodia | A modular and scalable simulation tool for large wireless networks[END_REF].
• JiST/SWANS [START_REF] Barr | Scalable Wireless Ad hoc Network Simulation. Handbook on Theoretical and Algorithmic Aspects of Sensor, Ad hoc Wireless, and Peer-to-Peer Networks[END_REF] is a Java-based discrete-event simulator for wireless networks. It was shown that JiST/SWANS outperforms NS-2 and GloMoSim in terms of scalability and memory usage [START_REF] Barr | Scalable Wireless Ad hoc Network Simulation. Handbook on Theoretical and Algorithmic Aspects of Sensor, Ad hoc Wireless, and Peer-to-Peer Networks[END_REF].
• The Georgia Tech Network Simulator, GTSNeTS [START_REF] Ould-Ahmed-Vall | Simulation of large-scale sensor networks using gtsnets[END_REF], is a C++ object-oriented simulation environment dedicated to the simulation of wireless sensor networks. GTSNeTS claims to scale to networks of several hundred thousand nodes [START_REF] Ould-Ahmed-Vall | Largescale sensor networks simulation with gtsnets[END_REF]. This review is far from exhaustive; other available simulators with wireless support include OPNET, OMNeT++, J-Sim, QualNet, etc.
Accuracy and complexity in wireless network simulations. The literature provides a lot of papers analyzing the accuracy and the complexity of wireless network simulations.
In [START_REF] Cavin | On the accuracy of manet simulators[END_REF], the authors investigate the accuracy of three popular simulators (NS-2, GloMoSim and OPNET). They show a significant divergence in the obtained results between the simulators while simulating a basic flooding algorithm. The major reason for this issue is the PHY layer modeling which is implemented differently from one simulator to another. Indeed, in [START_REF] Takai | Effects of wireless physical layer modeling in mobile ad hoc networks[END_REF], the physical layer models of these three simulators are presented in detail, and the authors discuss some PHY layer factors which are relevant to the performance evaluation of protocols. In [START_REF] Newport | Experimental evaluation of wireless simulation assumptions[END_REF], the authors review some assumptions used in many wireless networking studies by comparing simulation results and measurements taken from real experiments. They show the weakness of these assumptions and describe their impact on the accuracy of the simulation results.
Regarding the performance and the complexity of simulations, [START_REF] Heidemann | Effects of detail in wireless network simulation[END_REF] discuss the effects of details in wireless simulation. The authors show how the evaluated performance of protocols can vary when the level of details is tuned. Such details are the energy consumption model, the radio propagation model, the MAC protocol, etc. They suggest to adapt the level of details required by a given case study. In [START_REF] Lee | Efficient simulation of wireless networks using lazy mac state update[END_REF], the authors describe a new method called LAMP (LAzy MAC state uPdate) whose aim is to reduce the overhead introduced by the MAC layer with no loss of accuracy. The idea is to increase the scalability of simulators particularly when simulating large scale networks. Still considering the scalability issue, [START_REF]Simulation scalability issues in wireless sensor networks[END_REF] discusses several aspects impacting the scalability of simulations, including the radio channel, the environment modeling, the energy model, etc.
Contribution. Our contribution is twofold. First, we provide an in-depth analysis of the physical layer modeling issue. We describe analytical models used to model the radio range, the radio link and the interference. We then introduce a new simulation environment called WSNet which provides a large diversity in PHY models. Second, using WSNet we investigate the impact of the physical layer modeling on the accuracy, the complexity and the performance of wireless network simulations.
PHY-LAYER MODELING TRENDS
For the sake of realism and confidence in simulation results, an accurate PHY modeling is a key point. In analytical studies as in simulations, the disk model, shown in Figure 1-(a), has long prevailed. It relies on a set of strong assumptions:

time stationarity: $l_{ij}(t) = l_{ij}$ (1)
independence: $l_{ij} = f(x_i, x_j)$ (2)
switched link (on/off): $l_{ij} \in \{0, 1\}$ (3)
symmetry: $l_{ij} = l_{ji}$ (4)
isotropy: $l_{ij} = f(x_i, d_{ij})$ (5)
homogeneity: $l_{ij} = f(d_{ij})$ (6)

where $l_{ij}$ refers to the radio link between nodes i and j and $d_{ij}$ to the geometric distance between i and j.
The disk model provides the radio network with three axioms: the radio range is constant, the radio link is switched, and the network is interference free. The appeal of this model lies in its simplicity for both theoretical studies and simulations, but it suffers in return from a lack of realism. Nonetheless, improving this model is not a trivial task, as a hard tradeoff between complexity and realism must be faced. Basically, this model can be improved by relaxing any of the three previously stated axioms, as discussed below.
Radio range modeling
The range of a radio system is based upon the definition of a signal-to-noise ratio (SNR) threshold denoted $\gamma_{lim}$. If the system is interference free, the range is a constant and the radio link is defined by:

$l_{ij}: \Omega^2 \rightarrow B = \{0, 1\}, \quad (x_i, x_j) \mapsto l(x_i, x_j) = \begin{cases} 1 & \text{if } \gamma_{ij} \geq \gamma_{lim} \\ 0 & \text{else} \end{cases}$ (7)

where the SNR $\gamma_{ij}$ is given by $\gamma_{ij} = h_{ij} \cdot P_i / N_j$, where $h_{ij}$ is the path-loss and $P_i$ and $N_j$ are the transmission power and the noise level respectively.
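To make the difference concrete, the following sketch contrasts the unit-disk link of equations (1)-(6) with the SNR-threshold link of equation (7), using a simple log-distance path-loss. It is an illustration only, not WSNet code, and all parameter values (range, threshold, reference loss) are arbitrary.

```python
import math

def disk_link(xi, xj, radio_range=60.0):
    """Unit-disk model: the link is up iff the distance is below a fixed range."""
    d = math.dist(xi, xj)
    return 1 if d <= radio_range else 0

def snr_threshold_link(xi, xj, p_tx_dbm=0.0, noise_dbm=-100.0,
                       gamma_lim_db=10.0, pl_exponent=2.0, pl_d0_db=40.0):
    """SNR-threshold link of eq. (7) with a log-distance path-loss.

    h_ij is the path-loss (reference loss pl_d0_db at 1 m, exponent pl_exponent),
    and the link is up iff SNR = P_i * h_ij / N_j exceeds gamma_lim.
    All default values are illustrative, not WSNet parameters.
    """
    d = max(math.dist(xi, xj), 1.0)
    path_loss_db = pl_d0_db + 10.0 * pl_exponent * math.log10(d)
    snr_db = p_tx_dbm - path_loss_db - noise_dbm
    return 1 if snr_db >= gamma_lim_db else 0

print(disk_link((0, 0), (50, 0)))           # 1: within the 60 m disk
print(snr_threshold_link((0, 0), (50, 0)))  # depends on path-loss and threshold
```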
The transceiver properties. They are the transmission power $P_i$, the noise level $N_i$, the antenna gain and its radiation pattern $g_i(\theta, \varphi)$. Variations of $P_i$, $N_i$ or antenna gains affect the spatial homogeneity assumption (and thus the symmetry), which means that all nodes no longer have the same range. Note that a non-uniform noise level $N_j$ is highly probable for low-cost small radio systems. The isotropy, which is not statistically affected by these parameters, no longer holds if the radiation patterns of the antennas are introduced according to:

$h_{ij} = g(x_i, x_j) \cdot g_i(\theta_{ij}, \varphi_{ij}) \cdot g_j(\theta_{ji}, \varphi_{ji})$
where g(x i , x j ) is the propagation path-loss (see Figure 1-(b)). It should be noted that the use of 3D radiation patterns and a 3D spatial distribution of radio nodes may be pertinent in some cases, in small indoor environments for example.
Propagation models. The simplest model refers to the line of sight (LOS) scenario but in urban and indoor environments, more complicated scenarios occur due to shadowing and multiple paths. Two complementary approaches can be used to deal with propagation.
The former approach relies on a deterministic modeling of the wave propagation and provides fine simulations of any environment. The most usual algorithms are ray-tracing based [START_REF] Almers | Survey of channel and radio propagation models for wirelessmimo systems[END_REF] but discrete methods have also been proposed [START_REF] Gorce | A deterministic approach for fast simulations of indoor radio wave propagation[END_REF]. The high accuracy of these methods is definitely balanced by their high computational cost. Another limitation of purely deterministic models is that simulating one real environment is often too specific. Thus, the latter approach relies on a statistical description complementing the deterministic model. A stochastic variable s ij is then introduced in the propagation pathloss to handle shadowing:
$\tilde{g}_{ij} = g_{ij} \cdot s_{ij}$
The most usual model is the log-normal shadowing which produces large scale SNR variations as depicted in Figure 1-(c). Introducing a spatial correlation between radio links is not yet proposed in current simulators but represents a challenging issue. For this purpose, s ij should refer to a spatially correlated stochastic process, slowly varying (see [START_REF] Graziosi | A general correlation model for shadow fading in mobile radio systems[END_REF] for instance). Shadowing is sometimes confused with fading. Fading instead relies on SNR time variations due to multi-path self-interference (see Figure 1-(d)). It has a leading role in wireless communications and is introduced also as a stochastic variable f ij :
$g_{ij}(t) = g_{ij} \cdot s_{ij} \cdot f_{ij}(t)$
This latter variable is not spatially correlated as it relies on small-scale phenomena. It is, however, a time-variant parameter. Considering its temporal correlation may be highly relevant [START_REF] Pham | New cross-layer design approach to ad hoc networks under Rayleigh fading[END_REF] and represents another challenge for wireless simulators.
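As a rough illustration of how these stochastic terms combine (not the simulator's propagation bloc; the distributions and parameter values are illustrative), one realization of the overall gain can be drawn as follows:

```python
import random

def channel_gain(g_det, shadow_sigma_db=4.0, rayleigh=True):
    """Draw one realization of g_ij(t) = g_ij * s_ij * f_ij(t).

    g_det  : deterministic path-loss gain g_ij (linear scale)
    s_ij   : log-normal shadowing, i.e. a zero-mean Gaussian drawn in dB
    f_ij(t): Rayleigh fading, i.e. a unit-mean exponentially distributed power gain
    A real simulator would also model the spatial correlation of s_ij and the
    temporal correlation of f_ij(t), which are ignored here.
    """
    s_db = random.gauss(0.0, shadow_sigma_db)
    s = 10.0 ** (s_db / 10.0)
    f = random.expovariate(1.0) if rayleigh else 1.0
    return g_det * s * f

print(channel_gain(1e-7))  # one sample of the shadowed, faded gain
```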
Radio link modeling
A frame error rate (FER), or a packet error rate (PER), as a function of the mean SNR can substitute for the SNR threshold of Equation 7. It derives from the bit error rate (BER) function which itself relies on the radio interface properties. $l_{ij}$ then relates to the probability of a successful transmission. Some theoretical asymptotic expressions are well-known for various modulation techniques [START_REF] Wang | A simple and general parametrization quantifying performance in fading channels[END_REF]. However, the exact derivation at low SNR is sometimes not straightforward, more specifically in the case of fading channels, which can be introduced at this level with:
$P_s(E|\bar{\gamma}) = \int_0^{\infty} P_s(E|\gamma) \cdot f_{\gamma}(\gamma|\bar{\gamma}) \, d\gamma$ (8)
where $f_{\gamma}(\gamma|\bar{\gamma})$ stands for the instantaneous SNR distribution due to fading. The threshold-based radio link model, as opposed to the PER-based radio link model, is depicted in Figure 2. It can also be of great importance to consider channel coding. For instance, block coding can be introduced by bounding the codeword error probability. Last but not least, the radio link can be more complex if the pulse time-spreading due to multi-path extends beyond the symbol period. In this case, the impulse response should be considered.
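The following sketch illustrates this chain for an uncoded BPSK link: the instantaneous BER, its average over a Rayleigh-faded SNR as in equation (8) (estimated here by Monte Carlo rather than a closed form), and the resulting FER. It is a numerical illustration under simplifying assumptions (independent bit errors, no coding), not the modulation bloc of any particular simulator.

```python
import math, random

def ber_bpsk(snr):
    """Instantaneous BPSK bit error rate P_s(E|gamma) = 0.5 * erfc(sqrt(gamma))."""
    return 0.5 * math.erfc(math.sqrt(snr))

def ber_bpsk_rayleigh(mean_snr, samples=100_000):
    """Fading-averaged error rate of eq. (8), estimated by Monte Carlo: the
    instantaneous SNR is drawn from an exponential law of mean `mean_snr`
    (Rayleigh power fading)."""
    total = sum(ber_bpsk(random.expovariate(1.0 / mean_snr)) for _ in range(samples))
    return total / samples

def frame_error_rate(ber, frame_bytes=100):
    """FER for independent bit errors over an uncoded frame."""
    return 1.0 - (1.0 - ber) ** (8 * frame_bytes)

print(ber_bpsk(10.0))                    # AWGN case, mean SNR = 10 (linear)
print(ber_bpsk_rayleigh(10.0))           # same mean SNR under Rayleigh fading
print(frame_error_rate(ber_bpsk(10.0)))  # per-frame view used by a link model
```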
Interference modeling
Interference disturbs the packet reception at the physical layer. It appears as a crucial point in PHY simulations as final results can be strongly influenced by the interference model [START_REF] Cavin | On the accuracy of manet simulators[END_REF], [START_REF] Takai | Effects of wireless physical layer modeling in mobile ad hoc networks[END_REF], [START_REF] Heidemann | Effects of detail in wireless network simulation[END_REF]. As we will see in Section 4, interference management is probably the point where current simulators differ the most largely. Sources of interference include nodes operating in the same frequency band or in different frequencies. The first type of interference is known as co-channel interference, while the latter is termed adjacent channel interference.
The most efficient approach for introducing interference consists in replacing the SNR by a signal to interference plus noise ratio, SINR, which can be derived according to:
$\gamma_{ij} = \dfrac{h_{ij} \cdot P_i}{N_j + \sum_{k \neq i,j} h_{kj} \cdot P_k}$ (9)
The proper derivation of the SINR requires the knowledge, at a given time, of all the signals which are concurrently received at a given receiver.
To be exhaustive, it should be noted that non linear receivers (with multi-user detection for instance) [START_REF] Andrews | Interference cancellation for cellular systems: a contemporary overview[END_REF] can outperform classical receivers in the presence of interference. In this case, it is necessary to compute the FER not from the general SINR, but rather from the vector of received powers at each node.
MIMO systems
Multiple input multiple output (MIMO) interfaces are very promising and their simulation can be addressed in two ways. The former and simplest approach is referred to as the node-based approach and exploits the single-antenna framework described above by adjusting the FER function according to a MIMO-specific model. This approach is however not relevant for realistic systems where the radiation patterns of the multiple antennas differ from each other, nor for correlated channels. The latter, referred to as the antenna-based approach, is more powerful and simulates the channel state for each antenna-to-antenna link separately. For instance, in a 2 × 2 MIMO system, four channels are simulated. It is very important to account for the correlation between these channels to obtain accurate simulations. In this context, modeling spatially correlated shadowing and time-correlated fading is of great importance [START_REF] Almers | Survey of channel and radio propagation models for wirelessmimo systems[END_REF].
PHY LAYER MODELING IN COMMON SIMULATORS
Numerous wireless network simulators have been developed and are concurrently used in the wireless networking research domain. In this section, the PHY modeling of four widely used simulators is presented and briefly investigated. Table 1 summarizes the PHY models implemented in NS-2 [START_REF]The network simulator[END_REF], GloMoSim [START_REF] Bajaj | Simulation of large-scale heterogeneous communication systems[END_REF], JiST/SWANS [START_REF] Barr | Scalable Wireless Ad hoc Network Simulation. Handbook on Theoretical and Algorithmic Aspects of Sensor, Ad hoc Wireless, and Peer-to-Peer Networks[END_REF], GTSNetS [START_REF] Riley | Large-scale network simulations with gtnets[END_REF] and WSNet. This review is far from being exhaustive and other simulators with wireless support are available, such as OPNET, OMNeT++, etc.
Let us consider the PHY modeling as implemented in these simulators. As shown in Table 1, interference modeling, which is one of the most crucial aspects for the evaluation of protocols, is probably the point on which current simulators differ most.
The first step toward interference evaluation is to identify which signals are interfering with each other in order to assess the terms in the denominator of eq. 9, on the basis of timing considerations only. This set of interfering signals can be very large for large scale simulations. As a consequence, various simulators rather limit the range at which any signal can propagate and thus interfere. In other words, disregarding the radio range model effectively used for the received signal strength computation, the simulator does not generate receptions at nodes further than a given range from the source. Consequently, the considered source cannot induce interference at nodes further than this range. This optimization, which we will call Limited interference model as opposed to a Full interference model, is depicted in Figure 4. It privileges performance at the cost of accuracy. This optimization is implemented in JiST/SWANS, GloMoSim, GTSNeTS, etc.
Regarding the SINR computation, several strategies have been investigated and implemented in existing simulators. They are all variations of eq. 9 regarding timing granularity. They induce a varying level of realism, precision but also complexity. They are summed up in Figure 3: Note that none of these simulators supports multichannel systems and adjacent channel interference. For this purpose, the interference model in ( 9) should be replaced by:
$\gamma_{ij} = \dfrac{h_{ij} \cdot P_i}{N_j + \sum_{k \neq i,j} \alpha_{ik} \cdot h_{kj} \cdot P_k}$ (10)
where α ik stands for the rejection factor between the channels associated with signals i and k.
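As an illustration of equations (9) and (10) (names, structures and numerical values are placeholders, not a simulator API), the SINR of a signal can be computed from the set of concurrent signals and a channel rejection matrix as follows:

```python
def sinr(desired, concurrent, noise, alpha):
    """SINR of eq. (10) for one desired signal at a given receiver.

    desired    : (tx_power, path_loss_gain, channel) of the signal of interest
    concurrent : list of the same triples for all signals overlapping in time
    alpha      : alpha[c_int][c_des] = rejection factor between two channels
    Everything is in linear scale; the example values below are arbitrary.
    """
    p_i, h_ij, ch_i = desired
    interference = 0.0
    for p_k, h_kj, ch_k in concurrent:
        interference += alpha[ch_k][ch_i] * h_kj * p_k
    return (h_ij * p_i) / (noise + interference)

# Co-channel signals fully interfere (alpha = 1); adjacent channels only partially.
alpha = [[1.0, 0.3, 0.0],
         [0.3, 1.0, 0.3],
         [0.0, 0.3, 1.0]]
print(sinr((1e-3, 1e-7, 0), [(1e-3, 5e-8, 0), (1e-3, 5e-8, 1)], 1e-12, alpha))
```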
It is obvious that these interference models offer different levels of complexity, accuracy or realism. In order to better evaluate the computational cost of interference modeling, we introduce the WSNet PHY simulation framework.
THE WSNET SIMULATION FRAMEWORK
The WSNet simulation framework is well-designed for the purpose of evaluating the performance of wireless networks. We first present a general overview of the WSNet simulator. We then focus on the physical layer modeling, including the radio range, the radio link and the interference modeling, and we show how these have been handled in WSNet.
WSNet: a general overview
WSNet [START_REF]The wsnet wireless network simulator[END_REF] operates either as a complete event-driven simulator for wireless networks (e.g., ad hoc, sensor, mesh, etc.), including the simulation of both the wireless nodes (i.e., application, routing, MAC, radio and antenna layers) and the radio medium as shown in Figure 5, or as a physical layer simulator with nodes being simulated by external simulators in a distributed simulation framework [START_REF] Chelius | Worldsens: development and prototyping tools for application specific wireless sensors networks[END_REF]. The design of WSNet has been mainly guided by two constraints: modularity and flexibility. Thanks to modularity, WSNet can offer a wide range of PHY models, from ideal ones to pseudo-realistic ones.
Thanks to flexibility, WSNet can be tuned to offer various trade-offs between accuracy and computational complexity in the radio medium simulation. This choice obviously impacts the WSNet performance. The WSNet simulator, which is under the CeCILL Free Software License Agreement, is implemented in the C language and runs on the Linux operating system. The WSNet source code can be downloaded from our SVN repository [START_REF]The wsnet wireless network simulator[END_REF]. Thanks to modularity, all simulation models (e.g., application, routing, MAC, propagation model, interference model, antenna, etc.) are compiled into dynamic libraries to ease the development and the integration of new simulation models. Moreover, WSNet uses an XML file to configure a simulation. This XML file describes the simulation setup and specifies, for example, the number of nodes to simulate, the libraries used to model the radio medium and the wireless nodes (e.g., application, routing, mobility, battery, etc.).
Once WSNet is compiled and installed on a Linux operating system, users can run simulations using the following command: "wsnet -c ./demo/cbr.xml", where ./demo/cbr.xml is a file describing the parameters of the simulation (i.e., number of nodes, size of the simulation area, node architecture, PHY models, etc.). Readers can refer to the WSNet website [START_REF]The wsnet wireless network simulator[END_REF] for more information. In the rest of this section, we describe the WSNet PHY modeling.
PHY layer simulation
In WSNet, a radio signal is characterized by the following values: the transmission power, the symbol rate, the modulation scheme and the channel number. The PHY layer is abstracted as a combination of independent blocs which represent radio medium properties or hardware components. The radio simulation is built upon the following blocs: (i) a propagation bloc dealing with propagation aspects including path-loss, shadowing and fading according to the definitions provided in section 3;
(ii) an interference bloc which implements eq. 10; (iii) modulation blocs which provide SINR-to-BER conversions; and (iv) antenna blocs which implement antenna properties: loss, gain, radiation pattern, orientation and position. Note that WSNet offers a 3D spatial representation.
These blocs are black-boxes with well-defined interfaces. At simulation runtime, they are linked to dynamic libraries which effectively implement the interface. As an example, for the propagation bloc, a dynamic library implements the free space propagation model while another one provides a Rayleigh channel and a third one inputs path-loss values from an external file or an external propagation tool. More generally, this architecture is an opportunity to offer a wide range of radio medium models, from a basic ideal physical layer with no interference and no path-loss to a more realistic one including a Rayleigh channel, multiple frequencies, complex modulation schemes and smart antennas. It is thus a good opportunity to study the computational cost of using pseudo-realistic radio models.
Radio range simulation
When considering a transmission initiated by an antenna i, WSNet generates two events at each receiving antenna. The first one notifies the appearance of the signal while the second one notifies its end. Given a receiving antenna j, the two events are scheduled according to the distance between the emitting and receiving antennas, the radio propagation speed and the signal symbol rate. By default, the set of receiving antennas is extended to all antennas of the network. Thus, by default, WSNet operates in a full interference mode. However, the simulation setup may be initialized with a limited interference mode together with a maximum range beyond which receptions and interference are no longer computed. This range is independent of the radio range model implemented in the propagation bloc.
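A toy version of this scheduling logic is sketched below; it only illustrates how the two reception events can be derived from the distance, the propagation speed and the symbol rate, and is not the actual WSNet scheduler. Units and default values are arbitrary.

```python
def schedule_reception(now, distance_m, frame_bytes, symbol_rate_bauds,
                       bits_per_symbol=1, c=3e8):
    """Return the (start, end) times, in seconds, of the two events generated
    at a receiving antenna: signal appearance and signal end."""
    start = now + distance_m / c                                  # propagation delay
    duration = (8 * frame_bytes) / (symbol_rate_bauds * bits_per_symbol)
    return start, start + duration

print(schedule_reception(now=0.0, distance_m=60.0, frame_bytes=100,
                         symbol_rate_bauds=20_000))
```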
For each reception, i.e. each receiving antenna j, the received signal strength is computed according to the operations and properties depicted in Figure 6: (1) the emitting antenna noise $N_i$, (2) the emitting antenna gain $g_i(\theta_{ij}, \varphi_{ij})$, (3) the signal attenuation including shadowing and fading effects $g_{ij}(t)$, (4) the receiving antenna gain $g_j(\theta_{ji}, \varphi_{ji})$ and (5) the receiving antenna noise $N_j$. Channels have also been introduced to support multichannel and MIMO systems. Channels divide the radio resource into sub-resources which mutually interact. As an example, channels can be used to model frequencies in an FDMA (Frequency Division Multiple Access) network as well as codes in a CDMA (Code Division Multiple Access) one. We do not make explicit what channels are; rather, we specify their correlation, i.e. how they interfere with each other. While computing the interference on channel i, the contribution of a signal emitted on channel j is weighted by a correlation factor $\alpha_{ji}$ as given in eq. 10. The $\alpha_{ji}$ values are provided by the interference bloc. A simple SINR example with adjacent-channel interfering signals is depicted in Figures 7-8 (from the figure: noise at R = $P_1 \times \alpha_{13} + P_2 \times \alpha_{23} + P_3 \times \alpha_{33}$, where $\alpha_{ij}$ is the correlation factor between channels i and j).
Radio link simulation
Upon signal reception and for each computed SINR, the impact of interference on the radio signal is evaluated through the computation of a BER value. The SINR-to-BER conversion is provided by the modulation bloc that characterizes the radio signal. As for the SINR, a BER value is associated with each slot of the radio signal. Based on the BER values and the slot lengths, a FER is finally computed. Contrary to GloMoSim, WSNet forwards the radio signal to the receiving antenna whatever the FER value, even when a first error arises early in the frame. Indeed, in real systems, more specifically when complex coding and scrambling are used, the final decision (success or failure) cannot be taken before the frame is completely received.
An optional feature of WSNet is to randomly introduce errors in the radio signal with a probability equal to the computed BER. In this case, an erroneous signal is transmitted to the receiving antenna. Given this feature, it becomes possible to study precisely the performance of channel coding and error correction algorithms through their implementation in the node simulation part. More generally, CDMA schemes can be either simulated statistically using channels and correlation factors or studied precisely through their implementation and the error introduction feature.
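The per-slot processing and the optional error-introduction feature can be sketched as follows (an illustration of the behaviour described above, not the WSNet source; independent bit errors are assumed):

```python
import random

def slot_fer(slot_sinrs, bits_per_slot, ber_of_sinr):
    """Combine per-slot SINR values into a frame error rate: one BER per slot,
    independent bit errors within and across slots."""
    p_ok = 1.0
    for s in slot_sinrs:
        p_ok *= (1.0 - ber_of_sinr(s)) ** bits_per_slot
    return 1.0 - p_ok

def inject_errors(payload: bytes, ber: float) -> bytes:
    """Optional error-introduction feature: flip each bit independently with
    probability `ber` and hand the (possibly corrupted) frame to upper layers."""
    out = bytearray(payload)
    for i in range(len(out)):
        for b in range(8):
            if random.random() < ber:
                out[i] ^= (1 << b)
    return bytes(out)
```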
IMPACT OF THE PHYSICAL LAYER MODELING
We now study the cost of PHY modeling accuracy using WSNet, focusing on the three aspects that have been developed in Section 3: radio range modeling, interference modeling and radio link modeling. As an estimation of the computational cost, we use the speedup metric. The speedup of a simulation is the ratio between the logical simulated time and the effective simulation time. Unless specified otherwise, the simulations consist of 100 nodes deployed randomly in a 2D 100m × 100m area. Each node emits a 100B/s Constant Bit Rate (CBR) broadcast traffic through an IEEE 802.15.4 868 MHz compliant radio. No MAC protocol is executed in order to focus on the PHY simulation. The presented speedups are averaged over 100 simulations with the confidence intervals being omitted as they have been found to be very small.
Radio range models
To assess the impact of radio range models on the speedup of simulations, we compare four common propagation models: a disk model (no path-loss) with transmission ranges of 60m and 200m, the free-space model, the two-ray ground model and a Rayleigh channel with a path-loss exponent of 2 for the last three cases. Interference and modulation supports are disabled. The results are drawn in Figure 9. Quite obviously, a better accuracy in the radio range model induces a decrease in the simulation speedup. There are two reasons for this overhead. The first one is the computational cost associated with the radio range model. It takes more time to compute a complex pathloss with random variables than a single distance. The second one is an increase in the number of receptions generated by a single emission. Consider for example the Range(60) and Range(200) models. They have the same computational cost but the number of receivers is larger with a range of 200 meters. In a realistic modeling, this overhead is even higher as the signal reaches all nodes. If the first overhead can hardly be reduced, the second one can be reduced using the limited interference model.
Interference models
Keeping the free-space propagation model, we now analyze the impact of interference management on the scalability of simulations. In a first step, we vary the number of computed SINR slots from 0 (no interference simulation) to n (one computed SINR per byte). Results are depicted in Figure 10. In a second step, we evaluate the cost of multi-channel support and adjacent channel interference computation. We vary the number of simultaneously simulated channels from 1 to 16. Results are depicted in Figure 11.
Support of a cumulative interference model induces a strong overhead as the speedup is roughly reduced by 50% from no interference simulation to a single SINR computation. However, in interference modeling, it seems that the first step is the one that costs the most. Indeed, from a single SINR to a per-byte computation, the extra overhead remains quasi constant for each refinement: from 4% for two SINR values to 45% for n values. In consequence, once the interference price has been paid, there is no strong reason to decline paying for a better accuracy. This same conclusion holds for multichannel support as its cost is negligible compared to the one associated with an accurate interference modeling.
Radio link models
In addition to interference, we now evaluate the cost of an accurate radio link modeling by integrating three modulation models in the simulations: SINR threshold, bpsk and oqpsk. The results are shown in Figure 12.
As a BER value is derived from each computed SINR, the overhead induced by the radio link model is a function of the interference modeling accuracy. As in Section 6.1, a realistic model induces a higher overhead: the computational cost of the erfc function used in the bpsk model is much higher than a simple comparison between a SINR and a threshold. However, from a SINR threshold to a bpsk function, the speedup only decreases by 4% to 17% depending on the number of SINR slots. This price may clearly be worth the gain.
LIMITED INTERFERENCE MODEL
In this Section, we study the speedup gain that can be achieved through the use of a limited interference model. If this model decreases the accuracy of the PHY layer simulation, it is supposed to increase the scalability of simulators, justifying its use in GloMoSim, JiST/SWANS, NS-2, GTSNeTS, etc. Next, we discuss the feasibility of a statistical interference model whose aim is to improve the accuracy of the limited interference model without loss of performance.
Limited interference model and range searching
When simulating a wireless network, nodes are generally spread over a two- or three-dimensional simulation area. Each node is then characterized by geographic coordinates (i.e., an x-y-z position) and communicates with its neighbor nodes which are located within communication range. Thus, during the physical layer simulation step, the simulator has to determine which nodes will receive a given transmitted packet. This operation is called range searching. It is an important operation in wireless network simulation and can be defined more formally as follows: given a set of N nodes deployed over an X × Y × Z area, and a source node located at (x_1, y_1, z_1) and broadcasting a packet, which nodes lie in the sphere with a radius equal to the limited interference range and centered at the source node?
A naive solution to the problem of range searching is to organize the nodes into a linear structure, as in GTSNetS [START_REF] Riley | Large-scale network simulations with gtnets[END_REF] and NS-2 [START_REF]The network simulator[END_REF]. To determine the nodes located at a given range from a source node, the idea is to parse the entire linear structure and to compute the euclidean distance between the source node and all possible destination nodes. If the distance is below a given threshold (i.e., the limited interference range), the node is thus selected as a possible receiver for a transmitted packet (see Figure 13-(a)). The complexity of range searching using a linear list is O(n).
Several optimized data structures have been proposed for the problem of range searching [START_REF] Bentley | Data structures for range searching[END_REF]. For example, grid partitioning, which have been shown to increase the scalability of NS-2 [START_REF] Naoumov | Simulation of large ad hoc networks[END_REF], can be considered. In this strategy, the simulation area is divided into cells of fixed size as shown in Figure 13-(c). Each cell represents a linear list containing the nodes which are located within the cell. The range search operation consists thus in searching the nodes in the adjacent cells, reducing the number of queries compared to a flat or a linear scheme where all nodes are considered (see Figure 13-(a)).
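A minimal grid-partitioning structure for this range search operation could look as follows; it is illustrative only and does not reproduce WSNet's actual implementation.

```python
import math
from collections import defaultdict

class Grid:
    """Grid partitioning for range searching: nodes are hashed into square
    cells, so only the cells around the query point need to be scanned
    instead of every node in the network."""
    def __init__(self, cell):
        self.cell = cell
        self.cells = defaultdict(list)

    def insert(self, node_id, x, y):
        self.cells[(int(x // self.cell), int(y // self.cell))].append((node_id, x, y))

    def in_range(self, x, y, r):
        cx, cy = int(x // self.cell), int(y // self.cell)
        reach = int(math.ceil(r / self.cell))
        hits = []
        for i in range(cx - reach, cx + reach + 1):
            for j in range(cy - reach, cy + reach + 1):
                for node_id, nx, ny in self.cells[(i, j)]:
                    if (nx - x) ** 2 + (ny - y) ** 2 <= r * r:
                        hits.append(node_id)
        return hits

g = Grid(cell=30.0)
g.insert("a", 10, 10); g.insert("b", 25, 5); g.insert("c", 200, 200)
print(g.in_range(0, 0, 30))   # ['a', 'b']: node 'c' is never examined
```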
To cope with irregular topologies, kd-tree [START_REF] Bentley | Data structures for range searching[END_REF] (see Figure 13-(b)) can be used to aggregate adjacent empty grid cells to form a larger empty grid cell. Range search operation is then performed with a complexity of O(log n). However, the main drawback of this approach is that it is very hard to rebalance the tree when inserting or deleting a record. Such an approach will thus be ineffective in very dynamic scenarios.
To optimize simulations under the limited interference model, WSNet implements the grid partitioning strategy. In addition, to cope with irregular topologies an optimized data structure, called db-tree (dynamic balanced tree), is also implemented (see Figure 13-(d)). The idea, similar to the one presented in [START_REF] Tropf | Multidimensional range search in dynamically balanced trees[END_REF], is to combine the grid strategy, for fast query processing, and the tree-based approach for the aggregation of adjacent empty grid cells. We study, in what follows, the speedup gain that can be achieved through the use of a limited interference model and optimized data structures. If this model decreases the accuracy of the PHY layer simulation, it is supposed to increase the performance of simulators, justifying its use in GloMoSim, JiST/SWANS, etc. For this evaluation, we consider a network of 2000 nodes randomly deployed over an area of size 1000m×1000m. No interference nor modulation is simulated. We compare the speedup achieved using the limited interference model for various maximum interference/reception ranges to the one achieved using the full interference model. The results, depicted in Figure 14, show the speedup achieved by the linear, the grid and the db-tree data structures.
As it was already pointed out in Section 6.1, a reduction in the number of receivers induces a large gain in the simulation speedup. For a limited interference range of 30 meters, with the correct data structure, the speedup can be increased by a factor of 332. In large-scale networks, this gain remains high even for larger ranges. With a limited range of 120 meters, a gain of 37 is still achieved.
Towards a statistical interference model
It is quite obvious that the limited interference model is a must to ensure a high performance. However, its use raises the issue of determining a correct range (i.e., the distance limit for the propagation). As a solution, [START_REF] Ji | Improving scalability of wireless network simulation with bounded inaccuracies[END_REF] proposes an empirical method to derive a range with a limited impact on the simulation accuracy. We state that a statistical approach may offer a good tradeoff between the computational overhead of the completely deterministic Full interference model and the lower accuracy of the Limited interference model. In this Statistical interference model, the interference received at a node j is computed according to the following equation:
$I_j = \sum_{k \neq i,j \,|\, d(j,k) \leq R} h_{kj} \cdot P_k + \xi_R$ (11)
where ξ R is the realization of a random variable which statistically models the interference stemming from emitters outside the ball of radius R centered in j, B(j, R). As for the Limited interference model, the validity of this statistical model depends on the considered value of R, which can be easily determined as in [START_REF] Ji | Improving scalability of wireless network simulation with bounded inaccuracies[END_REF], but also on the distribution of ξ R . In such a context, the challenge is to find the exact distribution of ξ R that better approximates the level of interference stemming from long-range emitters.
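A sketch of how such a model could be plugged in is given below; the distribution chosen for ξ_R is a deliberately arbitrary placeholder, since finding its exact form is precisely the open question stated above.

```python
import random

def limited_interference(emitters, R):
    """Limited interference model: only emitters within radius R of the receiver
    contribute. Each emitter is a (tx_power, path_loss_gain, distance) triple."""
    return sum(p * h for p, h, d in emitters if d <= R)

def statistical_interference(emitters, R, xi_sampler):
    """Statistical model of eq. (11): deterministic sum inside the ball B(j, R)
    plus a random term xi_R standing for the emitters beyond R. `xi_sampler`
    is a placeholder for whatever distribution is finally chosen for xi_R."""
    return limited_interference(emitters, R) + xi_sampler()

# Purely illustrative choice: model the residual long-range interference as an
# exponential random variable with a small mean.
emitters = [(1e-3, 1e-8, 20.0), (1e-3, 1e-9, 80.0), (1e-3, 1e-10, 400.0)]
print(statistical_interference(emitters, R=100.0,
                               xi_sampler=lambda: random.expovariate(1e12)))
```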
ACCURACY versus COMPLEXITY OF SIMULATIONS
Finally, we propose a case study to clearly summarize the impact of the physical layer modeling on the accuracy and the complexity of simulations.
Assumptions
We consider a varying number of static nodes randomly deployed in a 2D 200m × 200m area. Each node transmits periodically a hello packet (100B/s) through an IEEE 802.15.4 868Mhz compliant radio. We consider five metrics: the speedup, the average number of discovered neighbors, the average number of connex components, the distance to the farthest discovered neighbor node, and the application throughput. The first metric assesses the impact of the physical layer modeling on the performance and the scalability of the simulator while the four latter metrics show the impact of the PHY models on the evaluation of higher level protocols in terms of network connectivity and application throughput.
We perform the same set of simulations with various PHY models. We start from (i) an ideal PHY layer, with no collisions nor interference, where all transmitted packets are received according to a basic disk propagation model with a range of 50m. We then (ii) introduce an IEEE 802.15.4 868 MHz compliant radio layer with a transmission power of 0dBm, where collisions can occur, and a limited interference model with a range of 50m. Next, we slightly increase the PHY simulation accuracy through the introduction of a (iii) free-space propagation model with a path-loss exponent of 2, and a (iv) log-normal uncorrelated shadowing propagation model with a standard deviation of 4dBm and a close-in reference distance of 1m. Finally, we consider a cumulative interference model with a limited interference range of 100m and BPSK modulation for (v) one slot per packet and (vi) n slots per packet. We finish with the (vii) full interference model. To better investigate the impact of the Medium Access Control (MAC) layer, we design two sets of simulations with and without an 802.11 DCF MAC protocol. The simulation results are averaged over 30 runs and are reported in Figures 15 to 19.
Simulation results
The first set of simulations is designed to investigate the impact of the physical layer modeling on the performance of WSNet through the evaluation of the speedup. As shown in Figure 15, the PHY models impact the obtained average simulation speedup. With no MAC layer (see Figure 15-(a)), we notice that the ideal PHY model induces a high simulation speedup which decreases while improving the accuracy of the physical layer modeling. Still considering the speedup, the main gap occurs with the introduction of interference and modulation support. Next, as previously observed in Section 6.2, considering n SINR slots instead of 1 induces a regular extra overhead. Finally, the full interference model does not induce too much overhead as the network area is small in this particular case.
When considering a MAC layer, the number of events and states increases during the simulation, particularly for a high number of nodes. In that case, the computational overhead is expected to rise. However, we notice from Figure 15-(b) that the obtained global speedup remains high and slightly decreases when using more accurate interference models. As the medium access control layer reduces the number of collisions and interferers, the overhead required for computing the SINR is also reduced and thus the global simulation speedup remains high compared to the case without a MAC layer. The extra overhead introduced by the MAC layer is thus balanced by the complexity reduction of the physical layer simulation.
The second set of simulations is designed to analyze the behavior of layer-3 protocols under variable PHY models. With no MAC layer, we can observe from Figures 16-(a), 17-(a), and 18-(a) that the number of discovered neighbors, the number of connex components and the distance from the farthest discovered neighbor vary systematically. Although the largest gap again occurs when interference and modulation are introduced, the results still degrade when the accuracy of the interference modeling is increased. This degradation is more important with a high number of nodes. Sensitivity to PHY accuracy increases with the network size. Indeed, regarding the number of connex components (see Figure 17-(a)), the introduction of the interference modeling makes the network less connected and the distance to the farthest discovered neighbor is very low. When considering an 802.11 medium access control protocol, we observe from Figures 16-(b), 17-(b), and 18-(b) that the simulation results are quite different from the case with no MAC layer. Indeed, we observe that the maximal distance of discovery as well as the network connectivity are improving with the introduction of more accurate PHY models. Again, this is a direct consequence of the MAC layer which reduces the number of interferers and collisions, thus yielding better network performance.
Finally, the third set of simulations is designed to study the behavior of application-layer protocols under variable PHY models. From the results depicted in Figure 19, we can make two observations. First, when simulating the wireless nodes without a MAC layer (see Figure 19-(a)), we notice that the application throughput deteriorates while improving the accuracy of the physical layer modeling. Indeed, a first gap occurs when moving from the ideal physical layer to the basic model. Next, a second gap can be observed when adding the interference and modulation support. Second, we observe from the results obtained with a MAC layer (see Figure 19-(b)) a constant application throughput even with an accurate interference model. Again, the MAC layer enhances the network performance by avoiding collisions reducing thus the level of interference.
CONCLUSION
A multitude of research papers have recently presented simulation results for the performance evaluation of wireless networks. Most of these simulations are based on unrealistic and inaccurate physical models which may impact largely the confidence in the results. In this paper, through the introduction of the WSNet simulator, we have studied the impact of the physical layer modeling on the scalability and the accuracy of wireless network simulations.
To the question raised in the introduction, what is the real cost of PHY simulation accuracy, we can give the following conclusions. Accuracy has a variable cost depending on the considered PHY aspect. Regarding the radio range modeling, the cost of using a realistic model remains low, especially compared to the gain that can be achieved in terms of accuracy. As a consequence, the unit-disk graph model can easily be replaced by a more realistic propagation model without a severe degradation of the simulation scalability. However, interference modeling, which is the point on which current simulators differ most, induces a genuinely high overhead. This extra overhead remains steady as the interference modeling gets refined through the addition of realistic link and modulation models. As a consequence, it seems regrettable to precisely simulate interference without considering a realistic modulation or link model. As shown in this paper, interference and link modeling have a significant impact on the behavior and the performance of the evaluated protocols.
Finally, we conclude that given these trends, it remains up to the user choice to trade PHY accuracy for a desired scalability. However, we must keep in mind that, as highlighted in this work, the validity of simulation results highly depends on the choices made for the PHY modeling. In particular, if the use of optimization techniques drastically reduces the computational load, it does not solve the accuracy versus scalability tradeoff. Considering the limited interference model for example, the optimization performance and correctness now depend on the chosen limited range. In the future, we plan to investigate the statistical interference model in more details. In parallel to the empirical model proposed by [START_REF] Ji | Improving scalability of wireless network simulation with bounded inaccuracies[END_REF], we think that a stochastic theoretical analysis would be helpful in determining an adequate range and the distribution of interference stemming from nodes outside the interference range. The goal is to still enhance the scalability of simulators without affecting the confidence in the simulation results.
Fig. 1. Propagation models.
Fig. 2. Radio link modeling.
Fig. 3. SINR computation strategies at the node receiving the packet B.
Fig. 4. Limited versus full interference model (T: transmitter, R: receiver and I: interfering nodes).
Fig. 5. The WSNet simulation environment.
Fig. 6. Signal reception.
Fig. 7.
Fig. 8. Co-channel and adjacent channel interference computation.
Fig. 9. Impact of propagation models.
Fig. 10. Impact of co-channel interference.
Fig. 11.
Fig. 12. Impact of modulation on speedup.
Fig. 13. Space partitioning optimization for range search operation under the limited interference model.
Fig. 14. Space partitioning methods.
TABLE 1. PHY layer modeling in common simulation environments: NS 2.31, JiST/SWANS 1.06, GloMoSim 2.03, GTSNetS and WSNet 2.0.

Simulation environment | Pathloss | Shadowing | Fading | Link model | Modulation model | Interference model | SINR computation
NS-2 | free-space, two-ray | log-normal | rician, rayleigh | threshold | - | limited | strongest signal
GloMoSim [6] | free-space, two-ray | log-normal | rician, rayleigh | threshold, BER | BPSK, QPSK | limited | adaptive
JiST/SWANS [7] | free-space, two-ray | - | rician, rayleigh | threshold, BER | BPSK | limited | cumulative
GTSNetS [8] | free-space, two-ray | - | - | threshold | - | limited | strongest signal
WSNet [23] | free-space, two-ray | log-normal | rician, rayleigh | threshold, BER | BPSK, STEP, OQPSK, FSCK | limited, full | adaptive, cumulative
Fig. 15. Impact of the PHY modeling accuracy on the simulation speedup.
Fig. 16. Impact of the PHY modeling accuracy on the average number of discovered neighbors.
Fig. 17. Impact of the PHY modeling accuracy on the average number of connex components.
Fig. 18. Impact of the PHY modeling accuracy on the distance of the farthest discovered neighbor.
Fig. 19. Impact of the PHY modeling accuracy on the throughput. |
04121503 | en | [
"info.info-ai"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04121503/file/RLD.pdf | Eduardo Hugo Sanchez
email: [email protected]
Read, look and detect: Bounding box annotation from image-caption pairs
Various methods have been proposed to detect objects while reducing the cost of data annotation. For instance, weakly supervised object detection (WSOD) methods rely only on image-level annotations during training. Unfortunately, data annotation remains expensive since annotators must provide the categories describing the content of each image and labeling is restricted to a fixed set of categories. In this paper, we propose a method to locate and label objects in an image by using a form of weaker supervision: image-caption pairs. By leveraging recent advances in vision-language (VL) models and self-supervised vision transformers (ViTs), our method is able to perform phrase grounding and object detection in a weakly supervised manner. Our experiments demonstrate the effectiveness of our approach by achieving a 47.51% recall@1 score in phrase grounding on Flickr30k Entities and establishing a new state-of-the-art in object detection by achieving 21.1 mAP 50 and 10.5 mAP 50:95 on MS COCO when exclusively relying on image-caption pairs.
Introduction
Locating and classifying objects within an image is a fundamental task in computer vision that enables the development of more complex tasks such as image captioning [START_REF] Yang | Image captioning with object detection and localization[END_REF], visual reasoning [START_REF] Harold | Grounded language-image pre-training[END_REF], among others. Nevertheless, the success of object detection models [START_REF] Jocher | ultralytics/yolov5: v7.0 -YOLOv5 SOTA Realtime Instance Segmentation[END_REF][START_REF] Shaoqing Ren | Faster r-cnn: Towards real-time object detection with region proposal networks[END_REF] typically relies on human supervision in the form of bounding box annotations. In particular, data annotation is a time-consuming and arduous task that requires annotators to draw bounding boxes around objects and label each bounding box with a category from a fixed set of categories. Furthermore, modifying the number of categories may require annotators to relabel or add new bounding boxes.
Several approaches have been proposed to reduce the cost of data annotation in object detection by using image-level labels [START_REF] Bilen | Weakly supervised deep detection networks[END_REF][START_REF] Diba | Weakly supervised cascaded convolutional networks[END_REF][START_REF] Tang | Pcl: Proposal cluster learning for weakly supervised object detection[END_REF][START_REF] Zhang | W2f: A weaklysupervised to fully-supervised framework for object detection[END_REF][START_REF] Gao | C-midn: Coupled multiple instance detection network with segmentation guidance for weakly supervised object detection[END_REF][START_REF] Zeng | Wsod2: Learning bottom-up and top-down objectness distillation for weakly-supervised object detection[END_REF][START_REF] Ren | Instance-aware, context-focused, and memory-efficient weakly supervised object detection[END_REF][START_REF] Huang | Comprehensive attention self-distillation for weakly-supervised object detection[END_REF], a dataset containing both labeled and unlabeled data [START_REF] Sohn | A simple semi-supervised learning framework for object detection[END_REF][START_REF] Liu | Unbiased teacher for semi-supervised object detection[END_REF][START_REF] Liu | Unbiased teacher v2: Semi-supervised object detection for anchor-free and anchor-based detectors[END_REF] or sparsely-annotated data [START_REF] Xu | Missing labels in object detection[END_REF][START_REF] Zhang | Solving missing-annotation object detection with background recalibration loss[END_REF][START_REF] Wang | Co-mining: Self-supervised learning for sparsely annotated object detection[END_REF][START_REF] Li | Siod: single instance annotated per category per image for object detection[END_REF]. For instance, WSOD methods only use image-level annotations along with the multiple instance learning (MIL) [START_REF] Maron | A framework for multiple-instance learning[END_REF] approach. However, the annotation effort is still significant and similar to that required for supervised classification.
In this paper, we take a step forward by learning to locate and label objects within an image from image-caption pairs. Not only captions provide a more natural description of the image content than image-level labels but also constitute a form of weaker supervision since image-caption pairs are easier to collect in vast amounts (e.g. from the Web [START_REF] Sharma | Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning[END_REF]). Our approach combines recent advances in vision-language (VL) models [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] and self-supervised vision transformers (ViTs) [START_REF] Caron | Emerging properties in self-supervised vision transformers[END_REF].
Figure 1: Our approach leverages language from captions via ALBEF to annotate multiple objects per image (e.g. we detect a dog and a frisbee while LOST generates a single categoryless bounding box).

VL models leverage large-scale image-caption datasets and have strong performance on zero-shot image classification, image-text retrieval, and visual reasoning tasks. These models align images with their corresponding captions via contrastive learning. Notably, models that include a cross-modality encoder seem to implicitly learn a more fine-grained word-region alignment without using additional supervision [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF]. We propose to use the location ability of VL models to automatically annotate objects of interest mentioned in captions. Moreover, VL models do not require retraining when the number of categories to annotate changes as they are already included in the VL model's vocabulary.
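For readers unfamiliar with this training objective, the following sketch shows a generic symmetric image-text contrastive loss of the kind used by such VL models; it is a simplified illustration, not ALBEF's full objective (which also relies on momentum distillation and additional losses).

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matching image-caption pairs:
    each image should be closest to its own caption, and vice versa."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)           # image -> caption matching
    loss_t2i = F.cross_entropy(logits.t(), targets)       # caption -> image matching
    return 0.5 * (loss_i2t + loss_t2i)

loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
```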
Despite the strong ability of VL models to locate objects, they aim at the most distinctive part of the object rather than the whole object. For example, ALBEF [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] and VilBERT [START_REF] Lu | Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks[END_REF] perform phrase grounding by ranking the object proposals provided by the supervised detector MattNet [START_REF] Yu | Mattnet: Modular attention network for referring expression comprehension[END_REF]. On the other hand, recent work has shown that representations from self-supervised ViTs contain explicit information about the scene layout of images and produce heatmaps that highlight salient objects [START_REF] Caron | Emerging properties in self-supervised vision transformers[END_REF]. LOST [START_REF] Siméoni | Localizing objects with self-supervised transformers and no labels[END_REF] and TokenCut [START_REF] Wang | Selfsupervised transformers for unsupervised object discovery using normalized cut[END_REF] show the effectiveness of self-supervised ViT representations to perform unsupervised object discovery and detection without any labels.
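As an illustration of how such representations can be turned into a box (a simplified re-implementation in the spirit of LOST, not the authors' code), one can pick a seed patch and expand it through patch-to-patch similarities:

```python
import torch
import torch.nn.functional as F

def lost_style_box(patch_feats, grid_h, grid_w):
    """Seed-and-expand localization from self-supervised ViT patch features.

    patch_feats: (N, D) tensor with N = grid_h * grid_w. The seed is the patch
    with the fewest positive correlations to the others (assumed to lie on a
    salient object); the mask keeps patches positively correlated with the
    seed, and the box is the mask's extent in patch-grid coordinates.
    """
    feats = F.normalize(patch_feats, dim=-1)
    sims = feats @ feats.t()                     # (N, N) patch-to-patch similarities
    degree = (sims > 0).sum(dim=-1)              # number of positive correlations
    seed = degree.argmin()                       # least-connected patch as seed
    mask = (sims[seed] > 0).reshape(grid_h, grid_w)
    ys, xs = mask.nonzero(as_tuple=True)
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()
```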
We make the following contributions in this work. First, we propose a novel method to locate and label objects in images by combining the ability of VL models to point at objects and the ability of self-supervised ViTs to extract whole objects in Section 3. By building upon ALBEF [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] and LOST [START_REF] Siméoni | Localizing objects with self-supervised transformers and no labels[END_REF], our method is able to locate multiple objects and generate accurate bounding boxes without human supervision. Figure 1 illustrates the improved ability of our model over LOST. Second, we use our approach to perform phrase grounding and object detection in a weakly supervised fashion.
In Section 4, we demonstrate that our method achieves competitive performance in phrase grounding on Flickr30k Entities [START_REF] Bryan A Plummer | Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models[END_REF] and establish a new state-of-the-art in object detection on MS COCO [START_REF] Lin | Microsoft coco: Common objects in context[END_REF] when exclusively relying on image-caption pairs as unique source of supervision. Additionally, we perform ablation experiments to investigate the key components of our approach, transfer learning experiments on PASCAL VOC2007 [START_REF] Everingham | The pascal visual object classes (voc) challenge[END_REF] and pseudo-labeling experiments to improve the performance in WSOD. In Section 5, we discuss the limitations, future work and conclusions of our work.
2 Related work
Weakly supervised object detection: To reduce the cost of data annotation, several methods propose to train object detectors using only image-level annotations without the need for bounding box annotations. WSDDN [START_REF] Bilen | Weakly supervised deep detection networks[END_REF] introduces the first end-to-end WSOD framework that adopts MIL [START_REF] Maron | A framework for multiple-instance learning[END_REF]. Since then, several improvements have been proposed: PCL [START_REF] Tang | Pcl: Proposal cluster learning for weakly supervised object detection[END_REF] performs clustering to improve object proposals and W2F [START_REF] Zhang | W2f: A weaklysupervised to fully-supervised framework for object detection[END_REF] leverages pseudo-label mining from a WSOD model to train a supervised object detector. C-MIDN [START_REF] Gao | C-midn: Coupled multiple instance detection network with segmentation guidance for weakly supervised object detection[END_REF] introduces a method for coupling proposals to prevent the detector from capturing the most discriminative object part rather than the whole object. WSOD 2 [START_REF] Zeng | Wsod2: Learning bottom-up and top-down objectness distillation for weakly-supervised object detection[END_REF] performs pseudo-label mining and incorporates a bounding box regressor to fine-tune the location of each proposal. Likewise, MIST [START_REF] Ren | Instance-aware, context-focused, and memory-efficient weakly supervised object detection[END_REF] performs pseudo-label mining where highly overlapping proposals are assigned to the same label. CASD [START_REF] Huang | Comprehensive attention self-distillation for weakly-supervised object detection[END_REF] combines self-distillation with multiple proposal attention maps generated via data augmentation. Closely related to our work, Cap2Det [START_REF] Ye | Cap2det: Learning to amplify weak caption supervision for object detection[END_REF] learns from image-caption pairs by extracting image-level annotations from captions using a supervised text classifier. These predicted image-level annotations are subsequently used to train a WSOD model based on MIL. Additionally, Cap2Det [START_REF] Ye | Cap2det: Learning to amplify weak caption supervision for object detection[END_REF] refines the WSOD model by retraining on instance-level pseudo-labels multiple times. Most of the existing WSOD methods rely on object proposal algorithms (e.g. Selective Search [START_REF] Jasper Rr Uijlings | Selective search for object recognition[END_REF] or Edge Boxes [START_REF] Zitnick | Edge boxes: Locating object proposals from edges[END_REF]). By exclusively leveraging self-supervision on image-caption pairs, our approach outperforms the state-of-the-art model Cap2Det [START_REF] Ye | Cap2det: Learning to amplify weak caption supervision for object detection[END_REF] without the need for a supervised text classifier. Additionally, our approach outperforms relevant WSOD baselines [START_REF] Tang | Pcl: Proposal cluster learning for weakly supervised object detection[END_REF][START_REF] Gao | C-midn: Coupled multiple instance detection network with segmentation guidance for weakly supervised object detection[END_REF] that use a form of stronger supervision (image-level annotations) and object proposal algorithms.
Learning from unlabeled or partially labeled data: Some approaches alleviate the lack of bounding box annotations by leveraging a small labeled dataset and a large unlabeled dataset via semi-supervised learning [START_REF] Sohn | A simple semi-supervised learning framework for object detection[END_REF][START_REF] Liu | Unbiased teacher for semi-supervised object detection[END_REF][START_REF] Liu | Unbiased teacher v2: Semi-supervised object detection for anchor-free and anchor-based detectors[END_REF] and active learning [START_REF] Wang | Towards human-machine cooperation: Self-supervised sample mining for object detection[END_REF][START_REF] Huy | Active learning strategies for weakly-supervised object detection[END_REF]. Li et al. [START_REF] Li | Siod: single instance annotated per category per image for object detection[END_REF] propose to train an object detector using only a single instance annotation per category per image. Other methods combine image-level and instance-level pseudo-annotations during training [START_REF] Xu | Missing labels in object detection[END_REF][START_REF] Ren | Instance-aware, context-focused, and memory-efficient weakly supervised object detection[END_REF]. Sohn et al. [START_REF] Sohn | A simple semi-supervised learning framework for object detection[END_REF] propose a two-stage training in which an object detector is trained on available labeled data. This model is subsequently used to select high-confidence bounding boxes on unlabeled data as pseudo-labels. Wang et al. [START_REF] Wang | Co-mining: Self-supervised learning for sparsely annotated object detection[END_REF] address the missing annotation problem by introducing a siamese network where each branch is used to generate pseudo-labels for the other. Likewise, recent works [START_REF] Liu | Unbiased teacher for semi-supervised object detection[END_REF][START_REF] Liu | Unbiased teacher v2: Semi-supervised object detection for anchor-free and anchor-based detectors[END_REF] leverage the teacher-student framework in object detection. In this work, we also explore the use of pseudo-labels to improve WSOD performance.
VL models: Learning joint VL representations from image-caption pairs in a self-supervised fashion has proven to be effective to perform multiple downstream tasks [START_REF] Lu | Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks[END_REF][START_REF] Lu | 12-in-1: Multi-task vision and language representation learning[END_REF] such as visual question answering, image retrieval, image captioning, zero-shot classification, etc. VL models [START_REF] Tan | Lxmert: Learning cross-modality encoder representations from transformers[END_REF][START_REF] Harold | Visualbert: A simple and performant baseline for vision and language[END_REF][START_REF] Su | Vl-bert: Pre-training of generic visual-linguistic representations[END_REF][START_REF] Kim | Vilt: Vision-and-language transformer without convolution or region supervision[END_REF][START_REF] Gan | Large-scale adversarial training for vision-and-language representation learning[END_REF][START_REF] Li | Oscar: Object-semantics aligned pre-training for vision-language tasks[END_REF][START_REF] Chen | Uniter: Universal image-text representation learning[END_REF][START_REF] Jia | Scaling up visual and vision-language representation learning with noisy text supervision[END_REF][START_REF] Radford | Learning transferable visual models from natural language supervision[END_REF][START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] are generally trained on a combination of loss functions: masked language modelling (MLM), where a masked word token is predicted; masked image modelling (MIM), where a masked image region feature or object category is predicted, image-text contrastive learning (ITC), where positive/negative image-caption pairs are assigned to high/low similarity scores, respectively; and image-text matching (ITM), that predicts whether an image and a caption match. Many strategies have been proposed to achieve improved VL representations. VisualBERT [START_REF] Harold | Visualbert: A simple and performant baseline for vision and language[END_REF] uses a supervised object detector to extract visual embeddings. VILLA [START_REF] Gan | Large-scale adversarial training for vision-and-language representation learning[END_REF] performs adversarial training in the representations space. OSCAR [START_REF] Li | Oscar: Object-semantics aligned pre-training for vision-language tasks[END_REF] uses object tags to ease VL alignment. UNITER [START_REF] Chen | Uniter: Universal image-text representation learning[END_REF] encourages alignment between words and image regions extracted by an object detector. More recently, CLIP [START_REF] Radford | Learning transferable visual models from natural language supervision[END_REF] leverages a massive amount of image-caption pairs and achieves impressive performance at zero-shot classification. However, CLIP underperforms at other VL tasks as the interaction between vision and language is very shallow (i.e. a simple dot product). Li et al. 
[START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] propose a new model called ALBEF, which builds upon previous models [START_REF] Tan | Lxmert: Learning cross-modality encoder representations from transformers[END_REF][START_REF] Lu | Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks[END_REF][START_REF] Radford | Learning transferable visual models from natural language supervision[END_REF][START_REF] Kim | Vilt: Vision-and-language transformer without convolution or region supervision[END_REF][START_REF] Jia | Scaling up visual and vision-language representation learning with noisy text supervision[END_REF] and is composed of a vision encoder, a language encoder, and a cross-modality encoder for deeper VL interaction. By leveraging a large image-caption dataset [START_REF] Sharma | Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning[END_REF], ALBEF outperforms previous models at many VL tasks without the need for a supervised object detector to extract region-based image representations. Our approach leverages a pre-trained ALBEF model to locate the image region that corresponds to a word or a textual description.
Open vocabulary detection: Classifying an object or image has traditionally been limited to a small set of fixed categories. Zhang et al. [START_REF] Zhang | Online collaborative learning for open-vocabulary visual classifiers[END_REF] leverage the vocabulary from image-caption datasets to perform image classification across more than 30k classes. The recent success of VL models [START_REF] Radford | Learning transferable visual models from natural language supervision[END_REF][START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] has motivated other methods to leverage image-caption pairs to perform object detection on a larger number of categories. Zareian et al. [START_REF] Zareian | Open-vocabulary object detection using captions[END_REF] use bounding box annotations from base classes to generalize detection to target classes mentioned in captions. Gao et al. [START_REF] Gao | Open vocabulary object detection with pseudo bounding-box labels[END_REF] use a supervised object detector trained on MS COCO [START_REF] Lin | Microsoft coco: Common objects in context[END_REF] to generate pseudo-bounding box annotations for categories mentioned in captions. Similar approaches [START_REF] Zhong | Regionclip: Region-based language-image pretraining[END_REF][START_REF] Shi | Proposalclip: unsupervised open-category object proposal generation via exploiting clip cues[END_REF] have been proposed by extending CLIP [START_REF] Radford | Learning transferable visual models from natural language supervision[END_REF]. We also leverage VL models to annotate objects using an arbitrary number of categories in a self-supervised manner, without relying on bounding box annotations, unlike previous methods.
Object discovery: Recently, several studies explore methods for object localization that rely solely on visual cues. LOST [START_REF] Siméoni | Localizing objects with self-supervised transformers and no labels[END_REF] extracts image representations via a self-supervised ViT [START_REF] Caron | Emerging properties in self-supervised vision transformers[END_REF] which are subsequently used to identify the image patches corresponding to an object based on their correlation. Wang et al. [START_REF] Wang | Selfsupervised transformers for unsupervised object discovery using normalized cut[END_REF] also leverage DINO representations which are used to build a graph. A normalized graph-cut is used to split the foreground object from the background. Both methods can only locate a single object per image without providing its category. Our approach builds upon LOST by integrating the language modality, enabling it to locate and label multiple objects per image.
3 Method
To annotate objects from image-caption pairs, our approach consists of two main stages. First, we leverage the cross-modality encoder from a pre-trained VL model to automatically select the image patches (or seeds) that may belong to a given object (defined by a word token or a set of word tokens).
The seed selection process is described in Section 3.1. Second, we use a self-supervised ViT to compute the similarity between image patches. Intra-image similarity is used to filter out image patches selected in the first stage and generate a heatmap corresponding to the object. This process is known as seed expansion. Then, a heatmap threshold is computed via a Gaussian mixture model (GMM) to separate the object patches from the background ones. Finally, a bounding box enclosing the object patches is generated. Section 3.2 describes the process to generate a heatmap and extract an object from it. Figure 2 shows an overview of our approach.
3.1 Pointing at objects with VL models
Our proposed method is motivated by the observation that VL models implicitly learn to align words in the captions with patches in the images even though these models are only trained to align images with their corresponding captions [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF]. Furthermore, we can annotate a large number of objects since the number of object categories is as large as the vocabulary used in the captions during the training of VL models. We leverage the ability of VL models to point at objects and the fact that most of the salient objects in an image are mentioned in its respective caption [START_REF] Li | Oscar: Object-semantics aligned pre-training for vision-language tasks[END_REF].
In this section, we explain how the fine-grained alignment between words and patches is computed in VL models implementing a cross-modality encoder (e.g. ALBEF [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF]) and how we leverage it to point at objects in an image. Let $X = \{p_1, p_2, \ldots, p_{N_P}\}$ be an image composed of $N_P$ patches and $C = \{w_1, w_2, \ldots, w_{N_T}\}$ be its corresponding caption composed of $N_T$ word tokens. An image encoder and a text encoder are used to extract image and text representations which are both fed into the cross-modality encoder. In the $l_{vl}$-th cross-attention layer of this encoder, we compute the value and key representations for each image patch, i.e. $V = \{v_0, v_1, \ldots, v_{N_P}\}$ and $K = \{k_0, k_1, \ldots, k_{N_P}\}$, respectively, where $v_0$ and $k_0$ are the representations of the classification token [CLS].

Given a word token of interest $w_c$ (e.g. 'person', 'dog', etc.), we compute its query representation $q_c$. The relation between the word token $w_c$ and the image patches $\{p_i\}_{i=1}^{N_P}$ is given by the hidden representation $h_c$, as shown in Equation 1, where $d$ is the dimension of the query representations.

$$h_c = \sum_{i=0}^{N_P} a_{c,i} \cdot v_i \quad \text{where} \quad a_{c,i} = \frac{\exp(q_c^\top k_i / \sqrt{d})}{\sum_{j=0}^{N_P} \exp(q_c^\top k_j / \sqrt{d})} \qquad (1)$$
As observed, the hidden representation of $w_c$ is a linear combination of the value representations corresponding to the image patches. Furthermore, these representations are weighted according to the attention scores $a_{c,i}$ that implicitly provide the similarity between $w_c$ and $p_i$ via the product $q_c^\top k_i$. Through the use of the cross-modality encoder, one can identify the image regions that are most closely aligned with a particular word token. We use Grad-CAM [START_REF] Ramprasaath R Selvaraju | Grad-cam: Visual explanations from deep networks via gradient-based localization[END_REF] to rank the image patches in order of importance. Equation 2 displays the importance score of the image patch $p_i$ with respect to the word token $w_c$, where $L_{ITM}(X, C)$ is the binary cross-entropy loss that measures whether the image $X$ and the caption $C$ match or not. When ranking image patches, we do not take into account the attention score corresponding to the classification token [CLS], $a_{c,0}$.

$$\Phi_{c,i} = \frac{\partial L_{ITM}(X, C)}{\partial a_{c,i}} \cdot a_{c,i} \qquad (2)$$
Unfortunately, Grad-CAM scores are insufficient to generate an accurate bounding box by themselves (see Section 4). For example, Gao et al. [START_REF] Gao | Open vocabulary object detection with pseudo bounding-box labels[END_REF] use a supervised Mask R-CNN [START_REF] He | Mask r-cnn[END_REF] to generate bounding boxes that cover the image patches activated by the word token $w_c$ for object detection. Similarly, Li et al. [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] rank MattNet [START_REF] Yu | Mattnet: Modular attention network for referring expression comprehension[END_REF] proposals based on Grad-CAM maps for phrase grounding.
However, we observe that while Grad-CAM scores do not highlight the image patches corresponding to the whole object, they are useful to point at the most discriminative parts of it. Therefore, we propose to use a set $D = \{f_i\}_{i=1}^{M}$ of $M$ image patches with the highest score $\Phi_{c,i}$ for a given word token of interest $w_c$ to point at the object. The image patches in $D$ are referred to as potential seeds and this process is referred to as seed selection. Pointing is a natural way for humans to refer to an object [START_REF] Bearman | What's the point: Semantic segmentation with point supervision[END_REF] and constitutes the first stage of our proposed approach.
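To make the seed-selection step more concrete, the following sketch shows how the initial and potential seeds could be picked once per-patch Grad-CAM scores are available. It is only an illustrative outline: the `gradcam_scores` array is assumed to have been extracted from the cross-modality encoder beforehand, and all function and variable names are ours rather than part of any released ALBEF code.

```python
import numpy as np

def select_seeds(gradcam_scores: np.ndarray, num_potential: int = 10, num_initial: int = 3):
    """Pick potential seeds and an initial seed from per-patch Grad-CAM scores.

    gradcam_scores: shape (H, W), one importance score per image patch
                    (the [CLS] token score is assumed to be excluded already).
    Returns the (row, col) indices of the potential seeds and the initial seed,
    where the initial seed is the average location of the top `num_initial` patches.
    """
    h, w = gradcam_scores.shape
    flat = gradcam_scores.ravel()
    # Indices of the M highest-scoring patches (potential seeds D).
    top_m = np.argsort(flat)[::-1][:num_potential]
    potential = np.stack(np.unravel_index(top_m, (h, w)), axis=1)  # (M, 2)
    # Initial seed s: average of the N best patch locations, snapped to a grid cell.
    initial = np.round(potential[:num_initial].mean(axis=0)).astype(int)
    return potential, initial

# Toy example on a 14x14 patch grid.
rng = np.random.default_rng(0)
scores = rng.random((14, 14))
potential_seeds, initial_seed = select_seeds(scores)
print(potential_seeds.shape, initial_seed)
```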
3.2 Extracting objects with self-supervised ViTs
We make use of the capability of self-supervised ViTs [START_REF] Caron | Emerging properties in self-supervised vision transformers[END_REF] to measure the similarity between image patches. Using the location information provided in the previous stage, our approach takes advantage of the fact that object patches correlate positively with each other but negatively with background patches. This idea is successfully applied in LOST [START_REF] Siméoni | Localizing objects with self-supervised transformers and no labels[END_REF] to perform object discovery. Our work is inspired by LOST and extends its capabilities by incorporating the language modality. Assuming that the object area is smaller than the background area, LOST uses the patch with the smallest number of positive correlations with other patches in order to point at an object. However, this assumption may not always hold in practice (e.g. an object covering more area than the background, multiple objects, etc.). Compared to LOST, our method is able to generate multiple bounding boxes per image (as many objects as mentioned in the caption). Furthermore, our method can annotate each object with a label while LOST can only retrieve a single object without specifying its category. Figure 1 displays the differences between our approach and LOST.
In this work, we average the first $N$ patch locations with the highest value of $\Phi_{c,i}$ in $D$ to compute the initial seed $s$ for a given $w_c$. Following LOST, we extract the key representations of the initial and potential seeds, i.e. $k_s$ and $\{k_{f_i}\}_{i=1}^{M}$, respectively, from the $l_{vit}$-th self-attention layer of a self-supervised ViT [START_REF] Caron | Emerging properties in self-supervised vision transformers[END_REF]. Then, the similarity between the initial seed and potential seeds is computed via the dot product of their respective representations to determine the image patches belonging to the object. We assume that potential seeds that are positively correlated to the initial seed belong to the object while potential seeds that are negatively correlated to the initial seed belong to the background. Thus, patches belonging to the object are defined by the set $O = \{s\} \cup \{f_i \mid f_i \in D \text{ and } k_s^\top k_{f_i} \ge 0\}$. Each patch $p \in O$ generates a heatmap $\Psi^p \in \mathbb{R}^{N_P}$, where the $i$-th dimension $\Psi^p_i$ is computed via the dot product between its key representation $k_p$ and the key representation $k_{p_i}$ of the patch $p_i$ (also extracted by the ViT), $\forall i \in \{1, \ldots, N_P\}$, as shown in Equation 3.

$$\Psi^p_i = k_p^\top k_{p_i} \qquad (3)$$

Finally, the heatmap of the object $w_c$ is defined by the sum of the heatmaps corresponding to the patches in $O$, as shown in Equation 4. This process is referred to as seed expansion.

$$\Psi^c = \sum_{p \in O} \Psi^p \qquad (4)$$
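The seed-expansion step can be sketched in a few lines once the per-patch key features are available. The snippet below is a simplified illustration under the assumption that `patch_keys` has already been extracted from a self-supervised ViT; the flat patch indexing and all names are our own choices, not the authors' code.

```python
import numpy as np

def expand_seed(patch_keys: np.ndarray, initial_idx: int, potential_idx: list):
    """Seed expansion as a similarity sum over positively correlated seeds.

    patch_keys: (N_P, d) key features of all image patches from a self-supervised ViT.
    initial_idx: flat index of the initial seed s.
    potential_idx: flat indices of the potential seeds D.
    Returns the object heatmap (N_P,) obtained by summing the per-seed heatmaps.
    """
    k_s = patch_keys[initial_idx]
    # Set O: the initial seed plus potential seeds positively correlated with it.
    object_patches = [initial_idx] + [
        i for i in potential_idx if i != initial_idx and patch_keys[i] @ k_s >= 0
    ]
    # Each patch in O contributes a heatmap of dot products with all patches (Eq. 3);
    # the object heatmap is their sum (Eq. 4).
    heatmap = sum(patch_keys @ patch_keys[p] for p in object_patches)
    return heatmap, object_patches
```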
To extract the object from the heatmap $\Psi^c$, we define a threshold $t$. While LOST sets $t=0$, we assume that patches belonging to the object and background are defined by two normal distributions $p_o = \mathcal{N}(\mu_o, \sigma_o^2)$ and $p_b = \mathcal{N}(\mu_b, \sigma_b^2)$, respectively. The parameters $\mu_o, \sigma_o, \mu_b, \sigma_b \in \mathbb{R}$ are estimated via a GMM per heatmap with $k=2$ components. Then, the threshold is calculated by solving $p_o(t) = p_b(t)$ such that $\mu_b < t < \mu_o$. For small objects, $p_o$ is barely noticeable and hard to estimate via GMM since only the background component is recognizable. We assume only one component is distinguishable when the overlapping between the estimated distributions $p_o$ and $p_b$ is significant (i.e. $\mu_b + 1.5\sigma_b < \mu_o - 1.5\sigma_o$). In such a case, we use the threshold $t = \mu + \gamma\sigma$, where $\gamma$ is a constant and $\mu$ and $\sigma$ are the mean and the standard deviation of $\Psi^c$, respectively. Supplementary material provides bounding box examples using multiple $t$ values. To generate a bounding box, a mask $m^c$ is obtained by thresholding the heatmap $\Psi^c$, as shown in Equation 5, where $\Psi^c_i$ is the $i$-th dimension of the heatmap $\Psi^c$. Later, a bounding box is drawn by enclosing the segment that includes the initial seed $s$.

$$m^c_i = \mathbb{1}_{\Psi^c_i \ge t} \qquad (5)$$
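A minimal sketch of the thresholding and box-extraction step is given below. It assumes the heatmap has been reshaped to the patch grid, uses scikit-learn's Gaussian mixture and SciPy's connected-component labelling as convenient stand-ins, and replaces the exact root of $p_o(t)=p_b(t)$ and the exact overlap test with simpler heuristics, so it should be read as an approximation of the procedure rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def heatmap_to_box(heatmap, seed_rc, gamma=1.75):
    """Threshold a (H, W) object heatmap and return the box around the seed's segment."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(heatmap.reshape(-1, 1))
    means = gmm.means_.ravel()
    stds = np.sqrt(gmm.covariances_.reshape(-1))
    o, b = int(np.argmax(means)), int(np.argmin(means))   # object / background components
    if (means[o] - means[b]) > 1.5 * (stds[o] + stds[b]):
        # Components clearly separated: midpoint between the two modes
        # (simplified stand-in for solving p_o(t) = p_b(t)).
        t = 0.5 * (means[o] + means[b])
    else:
        # Hard-to-separate case (e.g. very small objects): global fallback t = mu + gamma*sigma.
        t = heatmap.mean() + gamma * heatmap.std()
    mask = heatmap >= t
    if not mask[seed_rc]:
        return None                                        # seed fell below the threshold
    labels, _ = ndimage.label(mask)
    segment = labels == labels[seed_rc]
    rows, cols = np.where(segment)
    return int(cols.min()), int(rows.min()), int(cols.max()), int(rows.max())
```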
4 Experiments and results
4.1 Setup details
Tasks and datasets: We perform weakly supervised phrase grounding and object detection to demonstrate the effectiveness of our method to annotate objects. In Section 4.2, we present our experimental results for phrase grounding on Flickr30k Entities [START_REF] Bryan A Plummer | Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models[END_REF], an extension of Flickr30k [START_REF] Young | From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions[END_REF] which consists of ≈ 32k images collected from Flickr each of which is described with 5 captions. Image-caption samples are split into ≈ 30k training, 1k validation, and 1k test samples. Flickr30k Entities includes manually-annotated bounding boxes that are linked with entities mentioned in captions. Results are reported in terms of recall@1 on the test set. In Section 4.3, we perform WSOD on MS COCO [START_REF] Lin | Microsoft coco: Common objects in context[END_REF] which contains 113k training and 5k validation images. Each image is described with 5 captions. Additionally, the dataset provides bounding box annotations covering 80 object categories such as person, bicycle, car, plane, etc. We also conduct transfer learning experiments using samples from MS COCO to train an object detector that predicts PASCAL VOC2007 [START_REF] Everingham | The pascal visual object classes (voc) challenge[END_REF] categories since this dataset does not provide captions. Model architecture: To point at objects, we use ALBEF pre-trained on 14M image-caption pairs [START_REF] Li | Align before fuse: Vision and language representation learning with momentum distillation[END_REF] and fine-tuned on 20k image-caption pairs [START_REF] Yu | Modeling context in referring expressions[END_REF]. It is worth mentioning that any VL model that includes a cross-modality encoder can be used. To perform seed expansion, we use the self-supervised ViT from DINO (i.e. ViT-S/16 [START_REF] Caron | Emerging properties in self-supervised vision transformers[END_REF]). For comparative purposes, we also use the image encoder from ALBEF (i.e. ViT-B/16 [START_REF] Dosovitskiy | An image is worth 16x16 words: Transformers for image recognition at scale[END_REF]) to compute the similarity between image patches. In WSOD, our approach generates bounding box annotations to train a YOLOv5 object detector [START_REF] Jocher | ultralytics/yolov5: v7.0 -YOLOv5 SOTA Realtime Instance Segmentation[END_REF] in a supervised manner.
Hyperparameters:
We set the VL cross-attention layer to $l_{vl}=8$ and the ViT self-attention layer to $l_{vit}=11$. To compute the initial seed, we average the first $N=3$ patch locations from $D$ and set the number of potential seeds to $M=10$. To compute the threshold, we use $\gamma = 1.75$. Our experiments are executed on an NVIDIA GeForce RTX 3090.
4.2 Weakly supervised phrase grounding
We conduct experiments on Flickr30k Entities to evaluate the ability of our approach to associate phrases describing objects to image regions. While a single word can define the category of an object, a phrase provides additional attributes (e.g. color, size, position, etc.). Our method processes phrases by simply adding up the heatmaps of each word $w_{c_i}$ in the phrase $P$, i.e. $\Psi^{phrase} = \sum_{c_i \in P} \Psi^{c_i}$.
In Table 1, we report our results in terms of recall@1, i.e. the ratio of the number of phrases whose ground-truth bounding box has significant overlap with the bounding box generated by our model (IoU ≥ 0.5) to the total number of phrases.
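For reference, recall@1 under the IoU ≥ 0.5 criterion can be computed with a few lines of plain Python; the helper below assumes boxes are given as (x_min, y_min, x_max, y_max) tuples and is purely illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(predictions, ground_truths, thr=0.5):
    """Fraction of phrases whose single predicted box overlaps its ground truth with IoU >= thr."""
    hits = sum(iou(p, g) >= thr for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)
```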
Our baseline model (referred to as ALBEF C-A maps) uses the cross-modality encoder to produce heatmaps $\Phi^{phrase}$, which are then thresholded to generate bounding boxes. As shown in Section 3.1, our approach builds upon $\Phi^{phrase}$ via a self-supervised ViT to generate the expanded heatmaps $\Psi^{phrase}$. We evaluate two variants of our approach by using the ViT from ALBEF and DINO to generate the object heatmaps (referred to as ALBEF ViT maps and DINO ViT maps, respectively).
As observed, the variants ALBEF ViT maps and DINO ViT maps achieve higher performance compared to the baseline (improvements of 7.11% and 10.65%, respectively). As hypothesized, the baseline model exhibits limitations in accurately capturing the spatial extent of objects despite its ability to point at them in the image as shown in Figure 3. Moreover, DINO ViT maps outperform ALBEF ViT maps by a margin of 3.54%. This difference suggests that DINO's loss function is more effective to capture the underlying relationships between image patches.
For the sake of comparison, we also report the performance of the state-of-the-art model for weakly supervised phrase grounding, i.e. InfoGround [START_REF] Gupta | Contrastive learning for weakly supervised phrase grounding[END_REF]. Our approach achieves a competitive score of 47.51% comparable to InfoGround performance (47.88% and 51.67% when trained on Flickr30k Entities and MS COCO, respectively). Nevertheless, InfoGround uses a Faster R-CNN [START_REF] Shaoqing Ren | Faster r-cnn: Towards real-time object detection with region proposal networks[END_REF] pretrained on Visual Genome [START_REF] Krishna | Visual genome: Connecting language and vision using crowdsourced dense image annotations[END_REF] to generate object proposals and extract object features. Thus, our approach offers an efficient solution for phrase grounding without the need for an object detector. Our approach represents a promising alternative to InfoGround, particularly in scenarios where the object detector does not include some categories or where obtaining bounding box annotations is difficult.
4.3 Weakly supervised object detection
We investigate the ability of our approach to perform WSOD. Our methodology involves defining a set of object categories and searching through captions to identify if any of these categories are mentioned. If a category is found, our approach generates a corresponding bounding box as described in Section 3. Then, we train an object detector (i.e. Yolov5 [START_REF] Jocher | ultralytics/yolov5: v7.0 -YOLOv5 SOTA Realtime Instance Segmentation[END_REF]) from scratch in a supervised manner using the generated bounding box annotations. We evaluate our approach on MS COCO [START_REF] Lin | Microsoft coco: Common objects in context[END_REF] and PASCAL VOC 2012 [START_REF] Everingham | The pascal visual object classes (voc) challenge[END_REF]. While our method is capable of labeling a large number of object categories, we use these datasets as they provide bounding box annotations for evaluation purposes.
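The caption-scanning step can be illustrated with a deliberately naive sketch: a fixed category vocabulary is matched against the caption tokens, and only matched categories are passed on to the bounding-box generation stage. The tiny category set and the exact-word matching below are assumptions for illustration; a real pipeline would at least need plural forms and synonyms.

```python
import re

# Hypothetical subset of target categories; synonyms would be needed in practice.
CATEGORIES = {"person", "dog", "frisbee", "car", "boat"}

def categories_in_caption(caption: str):
    """Return the target categories literally mentioned in a caption (naive word matching)."""
    tokens = set(re.findall(r"[a-z]+", caption.lower()))
    return sorted(CATEGORIES & tokens)

print(categories_in_caption("A woman is steering a boat with a pole"))  # ['boat']
```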
Comparison with WSOD methods: We compare our approach with state-of-the-art WSOD methods to demonstrate its effectiveness in Table 2. Our approach achieves 21.1 mAP50 and 10.5 mAP50:95 on MS COCO, outperforming the variants of Cap2Det [48] that learn from image-caption pairs: Cap2Det EM, which generates image-level annotations from captions via lexical matching, and Cap2Det CLSF, which employs a supervised text classifier to process captions and extract image-level annotations. Our approach demonstrates better performance without the need for an object proposal algorithm, a supervised text classifier or refinement. Compared to methods that learn from image-level annotations [START_REF] Gao | C-midn: Coupled multiple instance detection network with segmentation guidance for weakly supervised object detection[END_REF][START_REF] Zeng | Wsod2: Learning bottom-up and top-down objectness distillation for weakly-supervised object detection[END_REF][START_REF] Ren | Instance-aware, context-focused, and memory-efficient weakly supervised object detection[END_REF][START_REF] Huang | Comprehensive attention self-distillation for weakly-supervised object detection[END_REF], our approach demonstrates competitive performance and outperforms relevant baselines such as PCL [START_REF] Tang | Pcl: Proposal cluster learning for weakly supervised object detection[END_REF] and C-MIDN [10] (8.5 mAP50:95 and 9.6 mAP50:95, respectively) by achieving 10.5 mAP50:95. It is worth noting that these WSOD methods rely on pseudo-labeling techniques and image-level annotations that constitute a form of stronger supervision.
For the sake of comparison, we also report the results of Yolov5 trained in a fully-supervised manner.
Transfer learning and pseudo-labeling (P-L): Due to the lack of captions in PASCAL VOC2007, our approach generates annotations by searching for PASCAL VOC2007 object categories in MS COCO image-caption pairs. Results in terms of mAP50 per category are reported in Table 3, where the best results are highlighted in bold. Our approach achieves 40.9 mAP50, outperforming Cap2Det EM (39.9 mAP50) while being behind Cap2Det CLSF (43.1 mAP50). To further improve performance, we propose a simple pseudo-labeling (P-L) technique. First, we use the trained object detector to generate predictions on the training images of PASCAL VOC2007. Pseudo-labels are selected by setting the confidence and IoU thresholds to 0.2 and 0.5, respectively, in the NMS algorithm. Then, we fine-tune our trained object detector on these pseudo-labels. We report an improvement of 1.6 mAP50 and 1.1 mAP50:95. Despite the global mAP50 being inferior to that of Cap2Det CLSF, it is worth noting that our approach implementing P-L outperforms Cap2Det CLSF in many categories.
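The pseudo-label selection described above (confidence threshold 0.2 and IoU threshold 0.5 in NMS) can be sketched as a greedy filter over the detector's outputs. The snippet is an illustrative approximation in plain NumPy, not the YOLOv5 post-processing code.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two (x_min, y_min, x_max, y_max) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def select_pseudo_labels(boxes, scores, conf_thr=0.2, iou_thr=0.5):
    """Greedy NMS-style filtering of detector outputs into pseudo-labels."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thr]
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if box_iou(boxes[best], boxes[i]) < iou_thr]
    return keep  # indices of boxes kept as pseudo-labels for fine-tuning
```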
Ablation experiments: We also perform ablation experiments to identify the key components of our approach in WSOD. To annotate objects, we employ the variants of our approach presented in Section 4.2, i.e. ALBEF C-A maps, ALBEF ViT maps and DINO ViT maps. Tables 4 and 5 display the results of our experiments on MS COCO and PASCAL VOC2007, respectively. As observed, ALBEF C-A maps perform poorly at object detection, achieving the lowest mAP50 and mAP50:95 scores. While ALBEF C-A maps are able to accurately point at objects, they fail to correctly detect their extent. On the other hand, self-supervised ViTs (ALBEF ViT maps and DINO ViT maps) are effective at capturing the extent of objects through seed expansion. On MS COCO, DINO ViT maps outperform ALBEF ViT maps, as expected, since DINO ViT maps are less noisy and generate visually more accurate bounding boxes, as shown in Figure 3. Surprisingly, ALBEF ViT maps achieve slightly better results than DINO ViT maps on PASCAL VOC2007.
5 Conclusion
In this paper, we present a two-stage method to locate and label objects by leveraging image-caption pairs without additional supervision. We demonstrate the effectiveness of our approach by performing two tasks in a weakly supervised setting: phrase grounding and object detection. We have performed extensive experiments on Flickr30k Entities, MS COCO and PASCAL VOC2007, achieving state-of-the-art results without the need for supervised object proposal algorithms or text classifiers to process captions. Despite the remarkable performance of our approach, we acknowledge some limitations. Our approach produces a single bounding box per object mentioned in the caption. An interesting direction for further investigation is to extend our method to produce multiple bounding boxes for words representing more than one object instance in the image (e.g. "people", "group of animals", etc.). This is particularly challenging, especially when object instances are overlapping in the image. Also, our approach does not generate bounding boxes for objects present in the image but not mentioned in the caption (or missed due to spelling mistakes). We believe that an important direction for future work is to extend our approach to explicitly take into account missing annotations. Improved performance could be achieved using a more sophisticated pseudo-labeling framework [START_REF] Xu | Missing labels in object detection[END_REF][START_REF] Wang | Co-mining: Self-supervised learning for sparsely annotated object detection[END_REF][START_REF] Li | Siod: single instance annotated per category per image for object detection[END_REF].
Figure 2: Bounding box generation using the caption "A woman is steering a boat with a pole". First, we select the initial and potential seeds (red and gray patches, respectively) via a VL model for each category identified in the caption. Second, we perform seed expansion by measuring similarity between patches via a ViT. Finally, each heatmap is thresholded and a bounding box is drawn on top.
Figure 3: Heatmaps and generated bounding boxes corresponding to 'man', 'ball' and 'racquet'. ALBEF C-A maps successfully point at objects but struggle to capture the object extent. ALBEF ViT maps tend to be noisier than DINO ViT maps, which generate high-quality bounding boxes.
PASCAL VOC2007 is an object detection dataset that contains 2501 training, 2510 validation, and 4952 test images. Objects are labeled into 20 classes (e.g. person, bird, cat, cow, dog, etc.). Results are reported in terms of mean average precision at IoU=0.5, i.e. mAP 50 , and average mAP over multiple IoU values ranging from 0.5 to 0.95 with a step of 0.05, i.e. mAP 50:95 . Results are reported on the MS COCO validation set and the PASCAL VOC2007 test set. In all cases, bounding box annotations are only used during evaluation.
Table 1: Weakly supervised phrase grounding performance on Flickr30k Entities.

Method | Training data | Supervised object proposal generator? | Recall@1
ALBEF C-A maps | 14M image-caption pairs [19] | No | 36.86
ALBEF ViT maps | 14M image-caption pairs [19] | No | 43.97
DINO ViT maps | 14M image-caption pairs [19] + ImageNet images [START_REF] Caron | Emerging properties in self-supervised vision transformers[END_REF] | No | 47.51
InfoGround [START_REF] Gupta | Contrastive learning for weakly supervised phrase grounding[END_REF] | Flickr30k Entities [START_REF] Bryan A Plummer | Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models[END_REF] | Yes, Faster R-CNN [START_REF] Shaoqing Ren | Faster r-cnn: Towards real-time object detection with region proposal networks[END_REF] | 47.88
InfoGround [START_REF] Gupta | Contrastive learning for weakly supervised phrase grounding[END_REF] | MS COCO [START_REF] Lin | Microsoft coco: Common objects in context[END_REF] | Yes, Faster R-CNN [START_REF] Shaoqing Ren | Faster r-cnn: Towards real-time object detection with region proposal networks[END_REF] | 51.67
Table 2: Comparison with WSOD models on MS COCO.
Model | Supervision source | mAP50 | mAP50:95
Cap2Det EM [48] | image-caption pairs | 19.7 | 8.9
Cap2Det CLSF [48] | image-caption pairs | 20.2 | 9.1
Ours | image-caption pairs | 21.1 | 10.5
PCL [40] | image-level annotations | 19.4 | 8.5
C-MIDN [10] | image-level annotations | 21.4 | 9.6
WSOD 2 [53] | image-level annotations | 22.7 | 10.8
MIST [32] | image-level annotations | 25.8 | 12.4
CASD [13] | image-level annotations | 26.4 | 12.8
Fully supervised [15] | bounding box annotations | 66.2 | 46.7
Table 3: Comparison with WSOD models on PASCAL VOC2007.

Model | aero bike bird boat bottle bus car cat chair cow table dog horse mbike person plant sheep sofa train tv | mAP50
Cap2Det EM [48] | 63.0 50.3 50.7 25.9 14.1 64.5 50.8 33.4 17.2 49.0 48.2 46.7 44.2 59.2 10.4 14.3 49.8 37.7 21.5 47.6 | 39.9
Cap2Det CLSF [48] | 63.8 42.6 50.4 29.9 12.1 61.2 46.1 41.6 16.6 61.2 48.3 55.1 51.5 59.7 16.9 15.2 50.5 53.2 38.2 48.2 | 43.1
Ours | 58.8 64.6 52.3 28.9 10.0 57.2 42.2 50.7 12.8 54.3 32.4 38.8 37.4 61.9 24.2 17.6 47.3 39.0 52.3 34.4 | 40.9
Ours + P-L | 56.1 68.5 55.6 31.1 12.3 64.8 48.6 48.8 15.5 57.8 22.9 34.8 42.3 59.1 23.2 19.1 51.8 42.8 54.8 41.0 | 42.5
Supervised [15] | 70.2 74.3 42.8 40.4 40.8 73.6 83.3 62.0 37.7 61.3 58.3 56.1 77.5 71.2 78.0 35.5 50.5 55.0 75.1 60.2 | 60.2
Table 4: Ablation experiments on MS COCO.

Method | mAP50 | mAP50:95
ALBEF C-A maps | 9.4 | 3.7
ALBEF ViT maps | 18.4 | 9.0
DINO ViT maps | 21.1 | 10.5
Table 5: Ablation experiments on VOC2007.

Method | mAP50 | mAP50:95
ALBEF C-A maps | 9.2 | 3.3
ALBEF ViT maps | 42.9 | 20.8
DINO ViT maps | 40.9 | 18.0
Acknowledgments
This work was conducted as part of the MINDS project of IRT Saint Exupéry. We would like to thank Michelle Aubrun, Ahmad Berjaoui, David Bertoin and Franck Mamalet for useful feedback and suggestions and Jérôme Mathieu for invaluable technical support.
04121602 | en | [ "info", "info.info-ai" ] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04121602/file/APS-VSG.pdf

Zhiwen Chen
Zhigang Lv
Ruohai Di
Peng Wang
Xiaoyan Li
Xiaojing Sun
Yuntao Xu
A novel virtual sample generation method to improve the quality of data and the accuracy of data-driven models
Keywords: small sample, data-driven model, virtual sample generation (VSG), acceptable area, joint probability distribution sampling
A small data volume and data imbalance often lead to statistical failure and severely restrict the accuracy of data-driven models, which has become a bottleneck problem to be solved in small-sample modeling. Data expansion has become the main way to address small-sample modeling. However, the randomness in the process of virtual sample generation and combination produces many invalid data, resulting in poor consistency between the expanded data and the original data. For this reason, this paper proposes a virtual sample generation method based on an acceptable area and joint probability distribution sampling (APS-VSG) to limit the randomness in data expansion, reduce the proportion of invalid data, improve data consistency after expansion, and improve the accuracy of data-driven models under small-sample conditions. Firstly, the concept of a "compact range of interaction (CRI)" is proposed, which further limits the domain estimation range of the data to approximate its valid area. Secondly, prior knowledge is used to improve mega-trend-diffusion (MTD), and the CRI is delineated according to the trend dispersion to obtain the acceptable area of the virtual data. Finally, a joint probability distribution is constructed based on the true values of the small samples within the acceptable area, and data are sampled from this distribution to generate virtual data. Experimental results on standard function datasets show that the virtual samples generated by the proposed method achieve a validity of more than 85%.
The experimental results of the NASA li-ion battery dataset show that, compared with Interpolation, Noise, MD-MTD, GAN, and GMM-VSG methods, the error of the data-driven model trained with virtual data generated by the proposed method is significantly reduced.
Compared with GAN and GMM-VSG, MSE, RMSE, MAE, and MAPE are reduced by at least 19.3%, 10.6%, 15.4%, and 16.7%, respectively.
Introduction
Machine learning is a central focus in the fields of artificial intelligence and pattern recognition.
Novel machine-learning algorithms emerge constantly. In the age of big data, data-driven machine learning algorithms have gradually become a research focus and are widely used to solve complex problems in scientific fields and engineering applications. However, there are still many fields in which a large amount of data cannot be obtained, due to factors such as high experimental cost and long test cycles. Therefore, it is difficult to apply advanced deep learning algorithms to solve problems [START_REF] Lu Yiyong | Researches on Few-shot Learning Based on Deep Learning:an Overview[END_REF], such as voiceprint recognition [START_REF] Nasef | Voice gender recognition under unconstrained environments using self-attention[END_REF] in the multimedia field, disease diagnosis [START_REF] Qezelbash-Chamak | A survey of machine learning in kidney disease diagnosis[END_REF] [START_REF] Mirzaei | Machine learning techniques for diagnosis of alzheimer disease, mild cognitive disorder, and other types of dementia[END_REF] and water analysis [START_REF] Wang | Moisture quantitative analysis with a small sample set of maize grain in filling stage based on near infrared spectroscopy[END_REF] in the biological and medical fields, product sales prediction [START_REF] Li | Lifecycle Forecast for Technology Products with Limited Sales Data[END_REF] in the economic field, and life prediction of fuel cells [START_REF] Wang | Data-driven prognostics based on time-frequency analysis and symbolic recurrent neural network for fuel cells under dynamic load[END_REF] in the industrial and military fields. When machine learning is used to deal with the above problems, issues such as small data volume and data imbalance arise. All of the above belong to small sample problems, because the sample size of the target object is too small to train a model that meets the accuracy requirements [START_REF] Zhao Kai-Lin | Survey on Few-shot Learning[END_REF][START_REF] Xulong | Virtual sample generation method and its application in reforming data modeling[END_REF].
For the small sample problem, scholars have carried out data-centric research. Data expansion is the main method used in data-centric research. It is the most direct method to deal with small sample problems by constructing virtual samples to increase sample size and balance data sets. The main ideas for constructing virtual samples are knowledge-based, disturbance-based, and distribution-based [START_REF] Xu | Research on Virtual Sample Generation Technology[END_REF].
The knowledge-based idea is mainly to artificially generate virtual samples based on expert knowledge in the research field [START_REF] Jing | A novel virtual sample generation method based on Gaussian distribution[END_REF]. Xu [START_REF] Xu Rong-Wu | RESEARCH ON VIRUTAL SAMPLE BASED IDENTIFICATION OF NOISE SOURCES IN RIBBED CYLINDRICAL DOUBLE-SHELLS[END_REF] found, in the engineering problems of the double-cylindrical shell structure, that the signals of the double-cylindrical shell structure collected by sensors are often mixed with the noise from the exciter and the seawater pump.
According to the frequency characteristics of these noises, he proposed a method, based on the frequency response function and the Fourier transform, to realistically simulate the sampled signals of the double-cylindrical shell structure mixed with the noise signals of the exciter and the seawater pump.
These simulated virtual signal used to train the model can effectively improve the noise recognition rate of the model for the double cylindrical shell structure. The threshold for creating virtual samples based on knowledge is very high, which requires rich expert knowledge and experience, and is difficult for nonprofessional personnel to use. In various methods of constructing virtual samples based on the idea of disturbance, Chris M. Bishop [START_REF] Bishop | Training with Noise is Equivalent to Tikhonov Regularization[END_REF] found that a model with higher generalization performance was obtained by adding a certain amount of noise to the input data and inputting it into the neural network for training. Guozhong An [START_REF]The Effects of Adding Noise During Backpropagation Training on a Generalization Performance[END_REF] further confirmed that adding noise to input samples can effectively improve the generalization performance of classification and regression problems. Wang [START_REF] Weidong | Quadratic Discriminant Analysis Using Virtual Training Samples [J][END_REF] added disturbance based on training samples to obtain new virtual samples and used these virtual samples to make the model have a better recognition rate. The idea based on distribution is mainly to determine the range and probability of virtual sample generation according to the domain distribution of small sample data [START_REF] Li | Using mega-trend-diffusion and artificial samples in small data set learning for early flexible manufacturing system scheduling knowledge[END_REF][START_REF] Huang | A diffusion-neural-network for learning from small samples[END_REF][START_REF] Li | data-fuzzification technology in small data set learning to improve FMS scheduling accuracy[END_REF][START_REF] Li | Using mega-fuzzification and data trend estimation in small data set learning for early FMS scheduling knowledge[END_REF][START_REF] Zhu Bao | A novel mega-trend-diffusion for small sample[END_REF][START_REF] Zhu | Research on Virtual Sample Generation Technologies and Their Modeling Application[END_REF][START_REF] Junfei | Virtual Sample Generation Method Based on improved General Trend Diffusion and Hidden layer interpolation and its application[END_REF][START_REF] Li | A Gaussian mixture model based virtual sample generation approach for small datasets in industrial processes[END_REF][START_REF] Li | A genetic algorithm-based virtual sample generation technique to improve small data set learning[END_REF][START_REF] Zhu | Novel SVD integrated with GBDT based Virtual Sample Generation and Its Application in Soft Sensor[END_REF][START_REF] He | Enhanced virtual sample generation based on manifold features: Applications to developing soft sensor using small data[END_REF]. Der Chiang Li [START_REF] Li | Using mega-trend-diffusion and artificial samples in small data set learning for early flexible manufacturing system scheduling knowledge[END_REF] proposed mega-trend-diffusion (MTD), which uses a common diffusion function to spread a group of data and determine the possible coverage of the data set based on group consideration to generate reasonable virtual data. 
Chongfu Huang [START_REF] Huang | A diffusion-neural-network for learning from small samples[END_REF] and Der Chiang Li [START_REF] Li | data-fuzzification technology in small data set learning to improve FMS scheduling accuracy[END_REF] [19] used information diffusion, MTD, and other methods to turn a crisp-valued sample point into a fuzzy set, turning a small number of single-valued sample points into a large number of set-valued sample points. This method can make the generated virtual samples carry more information. Zhu Bao [START_REF] Zhu Bao | A novel mega-trend-diffusion for small sample[END_REF] [START_REF] Zhu | Research on Virtual Sample Generation Technologies and Their Modeling Application[END_REF] proposed multi-distribution mega-trend-diffusion (MD-MTD) to generate virtual samples, in which uniform and triangular distributions are added to describe the characteristics of small sample data. Compared with MTD, the virtual samples generated by MD-MTD are more realistic. Ling Li [START_REF] Li | A Gaussian mixture model based virtual sample generation approach for small datasets in industrial processes[END_REF] proposed a Gaussian mixture model-based virtual sample generation (GMM-VSG) method to generate virtual samples under multiple working conditions. Der-Chiang Li [START_REF] Li | A genetic algorithm-based virtual sample generation technique to improve small data set learning[END_REF] proposed a genetic algorithm-based virtual sample generation (GABVSG) method, which generates more valid virtual samples by considering the overall integrated effects of the attributes. Qun Xiong Zhu [START_REF] Zhu | Novel SVD integrated with GBDT based Virtual Sample Generation and Its Application in Soft Sensor[END_REF] proposed a VSG method based on singular value decomposition (SVD) feature decomposition and a gradient boosting decision tree (GBDT) prediction model. Yan Lin He [START_REF] He | Enhanced virtual sample generation based on manifold features: Applications to developing soft sensor using small data[END_REF] generated virtual samples by using the proposed t-SNE-based virtual sample generation (t-SNE-VSG). SVD-VSG and t-SNE-VSG both use the distribution of the original data as a reference to generate virtual samples, which makes them difficult to use for small-sample data with an unknown distribution. In addition, Der Chiang Li [START_REF] Li | A non-linearly virtual sample generation technique using group discovery and parametric equations of hypersphere[END_REF] proposed a nonlinear virtual sample generation technique based on a hypersphere parametric equation combined with group discovery technology, which can effectively improve the learning accuracy of the model. He also used the interval kernel density estimator to generate more similar virtual samples, overcoming the problem of learning difficulties when data is insufficient [START_REF] Li | Using virtual sample generation to build up management knowledge in the early manufacturing stages[J][END_REF]. Chen Zhongsheng [START_REF] Chen Zhongsheng | Quantile regression CGAN based virtual samples generation and its applications to process modeling[END_REF] proposed a new virtual sample generation method, QRCGAN, which embeds quantile regression into a conditional generative adversarial network.
Yan Lin He [START_REF] He | A novel virtual sample generation method based on a modified conditional Wasserstein GAN to address the small sample size problem in soft sensing[J][END_REF] proposed a novel virtual sample generation method that embeds a deep neural network as a regressor into a conditional Wasserstein generative adversarial network with gradient penalty (rCWGAN).
Embedding such networks can improve the validity of virtual samples to some extent, but training them is time-consuming and computationally expensive.
Comparing the virtual data generated by different methods, knowledge-based virtual samples are limited by the accuracy and diversity of the expert knowledge available in the research field; manual sample production is expensive and time-consuming and is difficult to apply in most scenarios. Disturbance-based virtual samples mainly expand the boundary area of small samples by adding disturbances to improve the generalization of the model. However, there is no unified answer to how much disturbance is appropriate for different problems, which has to be determined through a large number of experiments. For this reason, we propose an improved acceptable-area estimation method in this paper. This method delimits a more compact range of interaction (CRI) of multiple variables within the theoretical upper and lower limits of each variable (the region bounded by these limits is called the wide range of permissible, WRP), to ensure that the generated multidimensional data are acceptable within this area. This avoids the problem of the generated virtual samples deviating too far from the real samples. The basis for creating virtual samples based on distribution is statistics, but it remains doubtful whether conclusions obtained by applying statistics to small samples that do not satisfy the large-sample theorem can express the properties of large samples. For this reason, this paper reduces the reliance on statistical methods for estimating the data distribution from small samples and instead artificially constructs a general probability distribution function, based on the small samples within the acceptable area, to describe the distribution of the data, thereby addressing the problem that the sample size is too small to accurately estimate the sample distribution.
A large number of scholars have found that, in the process of generating multidimensional virtual data, the approach of first generating one-dimensional virtual data and then recombining it into high-dimensional data often produces combination errors, resulting in a large number of invalid virtual samples [START_REF] Yao-San Lin | A Virtual Sample Screening Mechanism[END_REF]. These invalid virtual samples greatly reduce the accuracy of the model, and it is difficult to build an appropriate screening mechanism to filter them out. For this reason, this paper analyzes the probability distribution between the input variables and the target variable and then determines the joint distribution of the input variables conditioned on the target variable to solve the data combination problem. A long-standing problem with virtual samples is that it is difficult to obtain both validity and robustness. In this paper, edge virtual samples are added during sampling to explore edge information not contained in the small samples, improving the robustness of the virtual samples in the data-driven model.
The main contributions of this paper are as follows:
(1) A virtual sample generation method, based on CRI and joint probability distribution sampling, was proposed in this paper. This method can expand the data for small samples with unknown data characteristics, limit the randomness in the process of virtual sample generation, and greatly improve the validity of virtual samples;
(2) An estimation method of acceptable area was proposed, which uses prior knowledge to improve MTD and estimates CRI according to trend dispersion to comprehensively estimate the generation space of virtual samples, to provide a guarantee for generating high-quality virtual samples;
(3) The conditional distribution of each variable under specific conditions was constructed to ensure a high sampling probability near the real sample and increase the overall sampling probability in the acceptable area to include part of the edge virtual samples to improve the robustness;
(4) Joint probability sampling was used to generate high-dimensional data as a whole to avoid the problem of data combination.
The rest of this paper is organized as follows. Section 2 introduces the basic theory. Section 3 describes the proposed method in detail. Section 4 presents the experimental verification. Section 5 draws the conclusions.
Basic theory
In this section, the idea source and basic theory of the method in this paper is briefly introduced, including MTD, confidence interval, and sampling methods.
Mega-trend-diffusion
MTD [START_REF] Li | Using mega-trend-diffusion and artificial samples in small data set learning for early flexible manufacturing system scheduling knowledge[END_REF] was proposed by Der Chiang Li to determine the data coverage, which uses a common diffusion function to spread a group of data and determine the possible coverage of the data set based on group consideration. Zhu added uniform distribution and triangular distribution to describe the characteristics of small samples. The new method was called MD-MTD. The core idea of MTD and MD-MTD is to determine the floating range of data according to the number and distribution density of data. Fig. 1 shows that the extrapolation boundary of virtual samples was estimated based on small samples under the triangular diffusion function.
$$L = CL - \frac{n_L}{n_L + n_U}\sqrt{\frac{-2\,\hat{s}^2\,\ln(10^{-20})}{n_L}} \qquad (1)$$
$$U = CL + \frac{n_U}{n_L + n_U}\sqrt{\frac{-2\,\hat{s}^2\,\ln(10^{-20})}{n_U}} \qquad (2)$$
$$CL = \frac{\min + \max}{2} \qquad (3)$$
where min and max are the minimum and maximum values in the small sample set, n_L and n_U are the numbers of samples falling in [min, CL] and [CL, max] respectively, and $\hat{s}^2 = \sum_{i=1}^{n}(x_i-\bar{x})^2/(n-1)$ is the sample variance. It is meaningful to use MTD to estimate the acceptable area of the data, and it is also effective for generating one-dimensional virtual data over the extension field. However, aligning multi-dimensional data after generating each one-dimensional virtual datum causes combination errors; especially when the one-dimensional distributions differ greatly from each other, most of the generated virtual data are invalid. For this reason, this paper optimizes MTD based on prior knowledge, proposes the CRI to limit an excessive acceptable area, and uses joint probability sampling to avoid combination errors.
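To make the diffusion bounds concrete, the short Python sketch below computes CL, L and U for a one-dimensional small sample following Eqs. (1)-(3) as reconstructed above; the function name and the use of NumPy are our own choices rather than part of the original MTD implementation.

```python
import numpy as np

def mtd_bounds(x):
    """Mega-trend-diffusion bounds for a 1-D small sample (Eqs. (1)-(3))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    cl = (x.min() + x.max()) / 2.0                 # central location, Eq. (3)
    n_l = np.sum(x < cl)                           # samples below CL
    n_u = np.sum(x > cl)                           # samples above CL
    s2 = np.sum((x - x.mean()) ** 2) / (n - 1)     # sample variance
    skew_l = n_l / (n_l + n_u)
    skew_u = n_u / (n_l + n_u)
    lower = cl - skew_l * np.sqrt(-2.0 * s2 / max(n_l, 1) * np.log(1e-20))
    upper = cl + skew_u * np.sqrt(-2.0 * s2 / max(n_u, 1) * np.log(1e-20))
    return lower, cl, upper

print(mtd_bounds([2.1, 2.4, 2.6, 3.0, 3.3]))
```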
Confidence interval
Confidence interval is a common interval estimation method. It is an estimation interval of population parameters constructed by sample statistics, with upper and lower confidence limits of statistics as upper and lower bounds respectively. It refers to the range where the true value appears with the measured value as the center under a certain confidence, which is the probability that the true value will occur within a certain range. The real data are often not known, and we can only estimate. For example, in[ , ] ab, the probability that the real value appears between a and b is 95% (there is also a 5% probability that it appears outside this interval). The smaller the interval is, the lower the confidence is, and the higher the accuracy is. The 95% confidence level is general. [32] [33] In regression analysis, confidence intervals are usually used to judge the reliability of data and increase the robustness of the analysis. This paper uses this idea for reference and sets a certain confidence level to expand the range of small samples according to the data distribution statistical chart of small samples, which is regarded as one of the criteria for defining the acceptable area.
Sampling
Stratified sampling
Stratified sampling is a method of randomly selecting samples from the different layers of a population that can be divided into multiple layers, according to a specified proportion. The advantage of this method is that the sample is representative and the sampling error is small. For data X, if n samples are to be obtained by stratified sampling, X is divided into k layers according to certain rules, n/k samples are drawn from each layer, and finally all samples are merged.
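A minimal sketch of this procedure is given below; the layering rule used here (splitting the sorted data into k contiguous groups) is our own illustrative assumption, since the paper only states that the layers are formed "according to certain rules".

```python
import numpy as np

def stratified_sample(x, n, k, seed=0):
    """Draw n samples from x by splitting it into k layers and
    sampling n // k points (with replacement) from each layer."""
    rng = np.random.default_rng(seed)
    x = np.sort(np.asarray(x, dtype=float))
    layers = np.array_split(x, k)                 # one simple layering rule
    picked = [rng.choice(layer, size=n // k, replace=True) for layer in layers]
    return np.concatenate(picked)

print(stratified_sample(np.arange(20.0), n=6, k=3))
```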
Acceptance-Rejection Sampling
Acceptance-rejection sampling (ARS) draws candidate samples from a proposal distribution q(x) and accepts or rejects them so that the retained samples follow the target distribution p(x).

In the high-dimensional case, there are two problems with ARS. First, it is difficult to obtain the general form of the multidimensional distribution when only the conditional distributions p(x_1 | y), p(x_2 | y), …, p(x_n | y) are available. Second, it is difficult to find a suitable proposal distribution q(x).
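The following sketch illustrates one-dimensional acceptance-rejection sampling with a uniform proposal; the target density, the interval and the envelope constant m are illustrative assumptions of ours, chosen only to make the example runnable.

```python
import numpy as np

def ar_sample(p, lo, hi, m, size, seed=0):
    """Accept-reject sampling of a density p on [lo, hi] with a uniform
    proposal q(x) = 1/(hi-lo) and envelope constant m >= max p(x)*(hi-lo)."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < size:
        x = rng.uniform(lo, hi)
        u = rng.uniform(0.0, 1.0)
        if u <= p(x) * (hi - lo) / m:      # accept with probability p(x)/(m*q(x))
            out.append(x)
    return np.array(out)

# example: a truncated normal-shaped target on [0, 4]
target = lambda x: np.exp(-0.5 * (x - 2.0) ** 2)
print(ar_sample(target, 0.0, 4.0, m=4.0, size=5))
```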
Proposed method
In this section, the methods proposed in this paper are described in detail. The objects of this algorithm are X^(m×n) and Y^(1×n). When the target object is Z^((m+1)×n), we try, by correlation analysis, to single out one vector that has a strong correlation with the other vectors, take this vector as Y^(1×n), and take the remaining m vectors as X^(m×n):

$$Z^{((m+1)\times n)} \;\xrightarrow{\ \text{partition}\ }\; X^{(m\times n)} = \begin{bmatrix} X1_1 & X1_2 & \cdots & X1_n \\ \vdots & \vdots & & \vdots \\ Xm_1 & Xm_2 & \cdots & Xm_n \end{bmatrix},\quad Y^{(1\times n)} = [\,Y_1, Y_2, \ldots, Y_n\,] \qquad (4)$$

In this paper, we first use prior knowledge and trend dispersion to delineate a more reasonable acceptable area in which generating virtual samples greatly improves their validity. Then, two-stage sampling is conducted on the joint distribution constructed in this paper to obtain virtual samples, so that a large number of valid virtual samples is generated while a certain number of edge virtual samples is also generated to improve the robustness.
Algorithm framework
Different from MTD, the proposed method estimates not only the domain range of the small samples but also the compact range of interaction (CRI) between the multiple variables, to limit the acceptable area of the data. To avoid the problem of data combination and to generate high-dimensional data as a whole, the proposed method constructs the conditional distribution of each variable and the multidimensional probability distribution according to the true values of the small samples, which provides technical support for the generation of virtual data. The algorithm framework is shown in Fig. 2.
Fig. 2. The algorithm framework: (a) acceptable area construction based on a priori knowledge and trend dispersion; (b) joint probability distribution sampling
Acceptable area construction based on a priori knowledge and trend dispersion
All virtual sample generation methods need to estimate the distribution range of the data before generating virtual data. In this paper, this range is referred to as the acceptable area, and we propose an estimation method for it. Firstly, the upper and lower limits of each variable are determined according to prior knowledge, and a large acceptable area is determined from these limits; this large area is referred to as the wide range of permissible (WRP). Then, the trend of each X and Y is determined, and a distribution curve is fitted to the distribution histogram of the small samples along the data trend direction; the 99% or 100% confidence interval of this curve is taken as the CRI. Finally, the acceptable area is obtained by using the CRI to restrict the large acceptable area, so the acceptable area Q is the intersection of Q_WRP and Q_CRI:
$$Q = Q_{WRP} \cap Q_{CRI} \qquad (5)$$
Estimation method of WRP
In practical application, the measured values collected have practical meanings. Generally, these measured values have critical values, which are objective limitations and are usually determined by physical factors such as material and shape. Especially for some products produced by assembly lines, each parameter has a rated working range and a maximum fluctuation range. In this paper, the interval bounded by the theoretical upper limit and lower limit is called the wide range of permissible (WRP), which is the large acceptable area mentioned above.
Fig. 3. The estimation method for WRP
Fig. 3 shows a diagram of the estimation method for the WRP, where
1 () 2 1 22 1 () 2 n nn Z if n is odd CL Z Z if n is even (7) [ ] 1 A CL (8)
For the case that only the theoretical lower limit
[ ] max( ) [max( )] [ ] [max( )] [ ] max( ) (1 [min( )]) [min( )] min( ) [ ] max( ) (1 [ ] ) min( ) [] [max( ) min( )] max( A CL Zp A Zp CL Zp A CL A Zp A CL Zp A Zp CL A Zp Zp Zp A CL Zp A CL CL CL Zp Zp Zp A CL CL Zp CL Zp Zp CL Zp min min ) min( ) Zp Zp Zp (12)
Similarly, when only the theoretical upper limit max Zp of p Z is known but the theoretical lower limit min Zp is not known. [max( ) min( )] min( ) max( )
A CL Zp A Zp CL Zp A CL A Zp CL Zp Zp CL Zp Zp Zp Zp (13)
Estimation method of CRI
It is feasible for MTD to generate virtual samples X'_N on the estimated acceptable area of X^(1×n) and to use a hyperplane to give the corresponding Y'_N. When the dimension of X increases, however, many of the X^(m×N) obtained by combining data after the expansion of X1^(1×n), X2^(1×n), …, Xm^(1×n) deviate from the real samples. The reason is that there may be correlation factors between X1^(1×n), X2^(1×n), …, Xm^(1×n), and this coupling can greatly narrow the acceptable area of the multivariate data. Therefore, this paper further reduces the acceptable area by estimating the area of such influence, which is referred to as the compact range of interaction (CRI).
As is well known, in the feature selection of a neural network model, the input features with a strong correlation with the output features are selected. Based on this, this paper analyzes the relationship between each input feature X1^(1×n), X2^(1×n), …, Xm^(1×n) and the output feature, and estimates the range of each input feature corresponding to Y^(1×n), to limit the acceptable area of the input features and prevent combination errors.
Firstly,
1 , 2 , ,
n n n X X Xm
is extracted from X in order, and takes vector
min( )
XpY V of ( ) (2 1) XpY XpY XpY V , then (2 2) XpY O is: (2 2) (2 1) (2 1) (1 2) (2 1) (2 1) [max( ), min( )] * T XpY XpY XpY XpY O XpY V V V ( 16
)
where is 1.05.
In the direction of O_XpY, the distribution curve is fitted according to the distribution histogram of the small samples, and the 99% or 100% confidence interval of this curve is taken as the CRI Q_XpY = [QButtom_XpY, QTop_XpY].
Fig. 4. The estimation method for CRI
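As a rough illustration of the CRI idea, the sketch below fits a normal curve to the small-sample values of one input variable Xp and takes its 99% interval, clipped to the WRP, as [QButtom_XpY, QTop_XpY]; the choice of a normal curve and the omission of the trend-direction projection are simplifying assumptions of ours.

```python
import numpy as np
from scipy import stats

def cri_interval(xp, wrp, level=0.99):
    """Estimate a compact range of interaction for one variable:
    fit a normal curve to the small sample and take its `level`
    confidence interval, clipped to the wide range of permissible (WRP)."""
    mu, sigma = np.mean(xp), np.std(xp, ddof=1)
    lo, hi = stats.norm.interval(level, loc=mu, scale=sigma)
    return max(lo, wrp[0]), min(hi, wrp[1])

print(cri_interval([2.1, 2.4, 2.6, 3.0, 3.3], wrp=(0.0, 10.0)))
```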
Joint probability distribution sampling
As verified by the authors, aligning multi-dimensional data after generating each one-dimensional virtual datum causes combination errors; especially when the one-dimensional distributions differ greatly from each other, most of the generated virtual data are invalid.
To solve this problem, this paper constructs an artificial multi-variable joint distribution based on the acceptable area, so as to generate X1, X2, …, Xm simultaneously as a whole and avoid combination errors.
Artificial joint probability distribution
In small-sample modeling, virtual samples combined with small sample sets are used for model training. The basic reason why the accuracy is not improved or decreased is the low quality of the data. To improve the data quality, it is necessary to start from the data validity and generate virtual samples conforming to the characteristics of small samples. Secondly, to improve the fault tolerance ability of the model, it is necessary to generate a small amount of differentiated virtual samples to improve the robustness.
To make the generated virtual data conform to the characteristics of the small samples, the conditional probability distribution P(Xp | y = k) of each input variable is constructed from the real samples, and the joint distribution is then obtained by integration. Fig. 5 shows the process of constructing a joint probability distribution.

Assume that sample i is X_i = [x1_i, x2_i, …, xm_i] and Y_i = [y_i] with y_i = k_j. The diffusion width of x1_i inside the CRI is

$$\sigma = \frac{1}{3}\min\left(\,\left|QTop_{X1Y} - x1_i\right|,\ \left|x1_i - QButtom_{X1Y}\right|\,\right) \qquad (18)$$

A normal distribution N(x1_i, σ²) is constructed according to the sample truth value (x1_i, y_i) and the acceptable area boundary. To include part of the edge virtual samples and improve the robustness, a uniform distribution U(QButtom_{X1Y}, QTop_{X1Y}) is superimposed on this normal distribution, which gives the conditional distribution P(X1 | y = k_j) of Eq. (19). From this, the conditional probability distribution P(X1 | y) is obtained. A discrete set of y values is, however, not sufficient to describe the probability distribution of a continuous variable, so the conditional probability distribution P(Xp | l) corresponding to a value l lying between adjacent values of k is also calculated; the more values l are calculated, the denser the distribution and the smoother the transition.

Then, the joint probability distribution P(X | y) is constructed:

$$P(X \mid y = k_l) = P(X1, X2, \ldots, Xm \mid y = k_l) = P(X1 \mid y = k_l)\,P(X2 \mid y = k_l)\cdots P(Xm \mid y = k_l) \qquad (20)$$

Finally, the joint probability distributions P(X | y) for all values of y in [QButtom_Y, QTop_Y] are integrated to obtain the final joint probability distribution P(X | Y).
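A minimal sketch of this construction is given below: the density of one input given y is a normal curve centred on the real value with the σ of Eq. (18), superimposed with a uniform component over the CRI, and the joint density is the product of the per-variable conditionals as in Eq. (20). The mixture weight w is an assumption of ours, not a value given in the paper.

```python
import numpy as np
from scipy import stats

def conditional_density(x, x_i, q_bottom, q_top, w=0.9):
    """Density of one input variable given y: normal around the real value
    x_i (sigma from Eq. (18)) plus a uniform component over [q_bottom, q_top]."""
    sigma = min(abs(q_top - x_i), abs(x_i - q_bottom)) / 3.0
    normal = stats.norm.pdf(x, loc=x_i, scale=sigma)
    uniform = stats.uniform.pdf(x, loc=q_bottom, scale=q_top - q_bottom)
    return w * normal + (1.0 - w) * uniform

def joint_density(xs, sample, bounds, w=0.9):
    """Joint density of all inputs given y (Eq. (20)): product of conditionals."""
    return np.prod([conditional_density(x, s, b[0], b[1], w)
                    for x, s, b in zip(xs, sample, bounds)])

print(joint_density([2.0, 5.1], sample=[2.2, 5.0], bounds=[(1.0, 4.0), (3.0, 8.0)]))
```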
Two-stage sampling
Since all the methods above construct the joint probability distribution of X1, X2, …, Xm on the premise that the value of y is determined, it is first necessary to take N samples from [QButtom_Y, QTop_Y] to obtain Y'^(1×N), and then to sample the input vectors from the joint distribution conditioned on these values. The process of ARS is equivalent to using the CRI and the distribution of the real samples as a filter to remove invalid samples from the virtual samples obtained within the WRP.
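The two-stage sampling step might look like the following sketch: first draw y over [QButtom_Y, QTop_Y], then, for each y, draw the input vector inside its acceptable area by acceptance-rejection against the joint conditional density. The toy density and the envelope constant m are illustrative stand-ins.

```python
import numpy as np

def two_stage_sample(joint_pdf, y_range, x_bounds, n, m=1.0, seed=0):
    """Stage 1: sample y uniformly on y_range.  Stage 2: for each y, sample
    the input vector x inside its acceptable area by acceptance-rejection
    against the joint conditional density joint_pdf(x, y)."""
    rng = np.random.default_rng(seed)
    virtual = []
    while len(virtual) < n:
        y = rng.uniform(*y_range)                                # stage 1
        x = np.array([rng.uniform(lo, hi) for lo, hi in x_bounds])
        if rng.uniform() <= joint_pdf(x, y) / m:                 # stage 2 (ARS)
            virtual.append((x, y))
    return virtual

# toy joint density peaked around x = (2, 5) regardless of y
pdf = lambda x, y: np.exp(-np.sum((x - np.array([2.0, 5.0])) ** 2))
print(two_stage_sample(pdf, (0.0, 1.0), [(1.0, 4.0), (3.0, 8.0)], n=3))
```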
Algorithm description
In summary, the structure and details of the algorithm in this paper are expanded, and then the pseudo-code of the algorithm is given.
Algorithm: APS-VSG
Input: small samples X^(m×n), Y^(1×n)
Construct the joint probability distribution P(X | Y) = P(X1 | Y) P(X2 | Y) … P(Xm | Y)
Sample to obtain Y'^(1×N) and X'^(m×N) = [x1'; x2'; …; xm']
Output: the virtual sample set D^(m×N) = {X'^(m×N), Y'^(1×N)}
Experiments
The method presented in this paper was tested on three standard functions and NASA lithium battery data sets to analyze the validity of the acceptable area and the virtual samples. Compared with five other methods, including Interpolation, Noise, MTD, GAN and GMM-VSG, the advantages and disadvantages of the proposed method are discussed in the experimental analysis.
Datasets
Standard function datasets
The three artificial standard function datasets used in the experiment have different data characteristics. The first standard function is a classical linear model, called Linear. The second standard function is a classical nonlinear model, called Nonlinear. The third standard function is a model with sinusoidal oscillation, called Oscillation. The three models have three inputs and one output and contain a certain amount of noise to simulate data collected in a real environment; their definitions and define intervals are given in Table 1. The applicability of the proposed method to various types of data can be explored, and the factors that affect its performance can be analyzed, by testing on these different standard function datasets. 2000 samples are taken evenly from each standard function as the real sample set, and 50 samples are taken from the real sample set as the small sample set.
NASA li-ion battery dataset
The ultimate purpose of generating virtual samples is to use the right amount of data to train a data-driven model that meets the target performance. Therefore, to verify the validity of the method proposed in this paper, from the performance degradation data of NASA li-ion battery [START_REF] Chen | Combining empirical mode decomposition and deep recurrent neural networks for predictive maintenance of lithium-ion battery[END_REF],
data of two batteries of the same model were selected to build a real sample set with five inputs and one output, and 34 samples were taken from the real sample set as the small sample set.

The criteria used to evaluate the acceptable area are

$$ca = \frac{V_{va}}{V_a}\times 100\% \qquad (24)$$
$$cv = \frac{V_{va}}{V_v}\times 100\% \qquad (25)$$
$$v = \frac{V_{va}}{V_a + V_v - V_{va}}\times 100\% \qquad (26)$$
where V_va is the size of the valid area within the acceptable area, V_a is the size of the acceptable area, and V_v is the size of the valid area.
Direct evaluation of virtual sample
Each generated virtual sample is judged to be valid if it lies in the valid area and invalid otherwise. The numbers of valid and invalid virtual samples are counted, and the ratio v of the number of valid virtual samples to the total number of virtual samples is used to evaluate the validity of the virtual samples. The expression of v is as follows:
$$v = \frac{n_v}{n_v + n_i} \qquad (27)$$
where n_v is the number of valid virtual samples and n_i is the number of invalid virtual samples.
Indirect evaluation of virtual sample
Results and Analysis
Experiment on the standard function datasets
The method proposed in this paper is used to estimate the acceptable area and generate 200 virtual samples through the small sample set of standard functions to evaluate the validity of the acceptable area and the validity of the virtual samples. The experimental results are shown in Fig. 6, 7 and 8. Linear is the classical linear model, which can be perfectly described by linearity. However, the data trends determined based on the small sample set differ slightly from the real sample trends. There is an overall tilt in the acceptable area, which leads to the loss of part of the valid area and the inclusion of part of the invalid area. This leads to the subsequent generation of partially invalid data. The virtual samples generated by APS-VSG mainly surround small samples, which avoids invalid virtual data from being generated in large quantities in sparse areas.
Nonlinear is the classical nonlinear model, which is difficult to describe by linearity. Therefore, the acceptable area is perceptible as too large. Although the acceptable area includes nearly all of the valid areas, most of them are invalid. In particular, the invalid area is as much as 6 times more than the valid area within the acceptable area of 2 X and Y , which greatly increases the difficulty of generating valid virtual samples. However, most of the virtual data generated by APS-VSG surrounds small samples, which greatly improves the validity of virtual samples, because of its sampling based on the distribution of small samples. Oscillation is a model with sinusoidal oscillations and is one of the most complex models. And the general VSG method tends to have huge problems with this type of model. Due to the presence of sinusoidal oscillations, the acceptable area inevitably includes invalid area of peaks and valleys, which results in the invalid area that accounts for almost half of the acceptable area. However, the acceptable area obtained by APS-VSG includes almost all the valid area. APS-VSG generates a large number of valid virtual samples around sinusoidal oscillations by its property of sampling based on small samples.
The superimposed uniform-distribution-based sampling does, however, lead to a small number of virtual samples falling outside the valid area.
From the final evaluation results of the experiment, metric ca is not satisfactory. Using ca as a criterion to evaluate the acceptable area, the best is Linear, followed by Oscillation, and finally Nonlinear. However, it cannot be ignored that cv of the acceptable area are high enough to include as many valid areas as possible, which provides strong support for the subsequent sampling. Therefore, it is not feasible to blindly pursue ca while ignoring cv . Using cv as a criterion to evaluate the acceptable area, the best is Oscillation, followed by Linear, and finally Nonlinear. The results of the follow-up the direct evaluation of virtual samples also illustrate exactly this.
In summary, the results of virtual data evaluation on three standard function datasets show that APS-VSG can guarantee the high validity of the virtual samples. This proves that the method proposed in this paper can improve the quality of data.
Experiment on NASA Li-ion battery dataset
Generate 50 virtual samples based on the constructed NASA li-ion battery small sample set, and train BP neural networks with the small sample set, the real sample set, and the virtual samples generated by the six methods, to verify whether the virtual samples generated by the various methods can effectively improve the accuracy of the model. A 3-layer BP neural network with a 5-10-1 structure, 1000 iterations, a learning rate of 10⁻³, a momentum factor of 0.9, and a target error of 4 × 10⁻³ is constructed. The training data are the union of the small samples and the virtual samples, and the test data are the real samples. The average prediction results and errors of multiple runs are shown in Fig. 9. The performance criteria of the model are shown in Table 5.
In summary, the small sample modeling test results on NASA li-ion battery dataset show that APS-VSG can improve the accuracy of the data-driven model, and the method proposed in this paper is progressiveness compared with other methods.
Conclusion
This paper proposes a virtual sample generation method based on an acceptable area and joint probability distribution sampling (APS-VSG) for the expansion of small samples, which breaks the bottleneck of the low validity of MTD in generating high-dimensional virtual samples. The proposed CRI estimation method and the improved acceptable-area estimation method are used to limit the area in which data are generated and to improve the validity of the virtual samples. A reasonable joint probability distribution is constructed to ensure a high sampling probability near the real samples while increasing the overall sampling probability in the acceptable area, so that part of the edge virtual samples is included and the robustness is improved. Two-stage sampling is used to generate high-dimensional data as a whole and avoid the problem of data combination. Experiments on standard function datasets and the NASA li-ion battery dataset show that APS-VSG can improve the quality of the data and the accuracy of data-driven models, and that it performs better than MD-MTD, GAN and GMM-VSG.
Fig. 1. Diagram of MTD

Fig. 5. The process of constructing a joint probability distribution

Fig. 6. Experimental results of Linear

Fig. 7. Experimental results of Nonlinear

Fig. 8. Experimental results of Oscillation

Fig. 9. Prediction results and errors of the BP model trained with different training data
Table 1 Definition of the standard functions (Dataset; Standard function; Define interval of x1, x2, x3 and y)
Table 2 NASA Li-ion Battery Dataset (Real sample set)

Data source | Feature | Meaning | Length | Remarks
B0005, B0007 | F1 | Time for the voltage to rise to 4.2V during the constant current charging process | 326 | The theoretical upper limit of the capacity of the battery is 2Ahr
B0005, B0007 | F2 | Time for the current to drop to 20mA during the constant voltage charging process | 326 |
B0005, B0007 | F3 | Time for the voltage to drop to 2.7V during the constant current discharge process | 326 |
B0005, B0007 | F4 | Total time spent in the discharge process | 326 |
B0005, B0007 | F5 | The average temperature during the discharge process | 326 |
B0005, B0007 | Capacity | Capacity of battery | 326 |
4.2 Evaluation

4.2.1 Evaluation of Acceptable area

Compare the degree of overlap between the acceptable area and the valid area. If the acceptable area overlaps highly with the valid area, the valid coverage level is high. If the acceptable area does not completely cover the valid area and contains lots of invalid areas, the valid coverage level is low.
ca, cv, and v are used to evaluate the acceptable area. ca is the ratio of the valid area covered by the acceptable area to the acceptable area, cv is the ratio of the valid area covered by the acceptable area to the total valid area, and v is the ratio of the valid area covered by the acceptable area to the total space of the valid area and acceptable area. The expressions are given in Eqs. (24)-(26).
In this paper, we use the different virtual sample generation methods to obtain the same number of virtual samples, integrate the virtual samples with the small samples as the training set of the neural network model, and evaluate the virtual samples through the prediction results of the model. The evaluation criteria used are MAE, MSE, RMSE, and MAPE. MAE is the average of the absolute errors, which reflects the actual size of the prediction errors. MSE is the average of the squared differences between the true and predicted values, and RMSE is the square root of the MSE; both are used to detect the deviation between the predicted and true values of the model. MAPE is the mean of the absolute percentage error between the predicted and actual values. The four evaluation criteria are described as follows:
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \qquad (28)$$
$$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 \qquad (29)$$
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2} \qquad (30)$$
$$MAPE = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|\times 100\% \qquad (31)$$
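For completeness, Eqs. (28)-(31) can be computed directly; the sketch below assumes y holds the true values and y_hat the model predictions.

```python
import numpy as np

def regression_metrics(y, y_hat):
    """MAE, MSE, RMSE and MAPE as defined in Eqs. (28)-(31)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y - y_hat
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / y)) * 100.0
    return mae, mse, rmse, mape

print(regression_metrics([1.80, 1.75, 1.70], [1.79, 1.77, 1.68]))
```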
Table 3 Evaluation of the acceptable area

Dataset | Acceptable area | ca | cv | v
Linear | Q_x1y | 0.9962 | 0.9623 | 0.9588
Linear | Q_x2y | 0.9666 | 0.9062 | 0.8787
Linear | Q_x3y | 0.9980 | 0.8653 | 0.8638
Nonlinear | Q_x1y | 0.9427 | 0.8586 | 0.8160
Nonlinear | Q_x2y | 0.1548 | 0.9797 | 0.1543
Nonlinear | Q_x3y | 0.6742 | 0.9609 | 0.6562
Oscillation | Q_x1y | 0.7502 | 0.9945 | 0.7471
Oscillation | Q_x2y | 0.4104 | 0.9731 | 0.4058
Oscillation | Q_x3y | 0.5392 | 0.9760 | 0.5321
Table 4 Direct evaluation of virtual samples

Dataset | n_v | n_i | v
Linear | 286 | 14 | 95.33%
Nonlinear | 258 | 42 | 86.00%
Oscillation | 281 | 19 | 93.67%
Table 5 Performance criteria of the BP model trained with different training data

Virtual Data Sources | MSE | RMSE | MAE | MAPE
Small samples | 8.29×10⁻⁴ | 2.88×10⁻² | 2.09×10⁻² | 1.30×10⁻²
Interpolation | 1.96×10⁻⁴ | 1.40×10⁻² | 0.91×10⁻² | 0.56×10⁻²
Noise | 3.60×10⁻⁴ | 1.89×10⁻² | 1.49×10⁻² | 0.93×10⁻²
MD-MTD | 3.72×10⁻⁴ | 1.93×10⁻² | 1.25×10⁻² | 0.75×10⁻²
GAN | 2.40×10⁻⁴ | 1.54×10⁻² | 1.05×10⁻² | 0.66×10⁻²
GMM-VSG | 2.02×10⁻⁴ | 1.42×10⁻² | 1.04×10⁻² | 0.66×10⁻²
APS-VSG | 1.63×10⁻⁴ | 1.27×10⁻² | 0.88×10⁻² | 0.55×10⁻²
Real samples | 1.02×10⁻⁴ | 1.01×10⁻² | 0.69×10⁻² | 0.43×10⁻²
Data expansion algorithms have their own advantages and disadvantages, but they can improve the accuracy of the model under small sample conditions to a certain extent by analyzing the result of the BP model trained with virtual samples generated by five methods. Through quantitative analysis of experimental results, it can be found that the accuracy of the model trained by the proposed method in this paper is improved more compared with the data-driven model under small sample condition. MSE, RMSE, MAE, and MAPE are decreased by 80.3%, 55.9%, 57.9%, and 57.7%, respectively. Compared with Noise and MD-MTD, MSE, RMSE, MAE, and MAPE are decreased by at least 54.7%, 32.8%, 29.6%, and 26.7%, respectively. Compared with GAN and GMM-VSG, MSE, RMSE, MAE, and MAPE are decreased by at least 19.3%, 10.6%, 15.4%, and 16.7%, respectively.The error criteria of Interpolation and APS-VSG are very close. However, Interpolation has a fatal flaw in that the BP model trained with virtual samples generated by Interpolation has difficulty in accurately predicting the interval during which some of the battery capacity picks up.The underlying reason is that the virtual samples generated by Interpolation are basically linear, and when the small sample set is not able to characterize the capacity rebound process, then the virtual data generated by Interpolation also does not have the battery capacity rebound property.Both Noise and MD-MTD have relatively high error criteria. The higher deviation of the result of the BP model trained with virtual samples generated by MD-MTD proves that there are more singular values with large or small values in the prediction results of MD-MTD. If the invalid virtual samples generated by MD-MTD can be further sieved, the accuracy of the model can be greatly improved. In theory, GAN can generate virtual samples that are infinitely close to small sample sets through continuous training. However, due to the small number of samples, the generated virtual samples cannot completely reproduce the real samples in a large number of tests, which leads to an unbreakable bottleneck in the model trained with virtual samples generated by GAN. Due to the stable performance of the Gaussian model, GMM-VSG performs much better than Noise and MD-MTD. However, in localized areas, excessive deviations lead to a slight decrease in the overall performance of the result of the BP model trained with virtual samples generated by GMM-VSG. In many areas where small samples are not well characterized, APS-VSG, like other methods, suffers from large biases. Relatively speaking, APS-VSG delivers excellent performance. The reason is that most of the virtual samples generated by APS-VSG are valid and retain the distribution characteristics of the small sample set well, and the BP model
trained with these virtual samples can make accurate predictions for both batteries. Benefiting from both the limits of the acceptable area and the probability distribution based on the small samples, the resulting virtual samples are both highly valid and robust.
Acknowledgment
This work was supported by National Natural Science Foundation of China (62171360), Shaanxi Science and Technology Department (2022GY-110), Xi'an Key Laboratory of Intelligence (2019220514SYS020CG042), National key research and development program (2022YFF0604900), 2022 Shaanxi University Youth Innovation Team Project, Shandong Key Laboratory of Smart Transportation (Preparation), 2023 Shaanxi Provincial University Engineering Research Center. |
04121608 | en | [ "info.info-lo" ] | 2024/03/04 16:41:26 | 1993 | https://inria.hal.science/hal-04121608/file/complete.pdf | Gilles Dowek
email: [email protected]
A Complete Proof Synthesis Method for the Cube of Type Systems
We present a complete proof synthesis method for the eight type systems of Barendregt's cube extended with η-conversion. Because these systems verify the proofs-as-objects paradigm, the proof synthesis method is a one level process merging unification and resolution. Then we present a variant of this method, which is incomplete but much more efficient. At last we show how to turn this algorithm into a unification algorithm.
Introduction
The Calculus of Constructions is an extension of Church higher order logic in which the terms belong to a λ-calculus with dependent types, polymorphism and type constructors. Because of the richness of the terms structure, proofs can be represented as terms through Heyting semantics and Curry-Howard isomorphism. So a proof of a proposition P is merely a term of type P .
A proof synthesis method for Church higher order logic is given in [START_REF] Huet | Constrained Resolution A Complete Method for Higher Order Logic[END_REF]. In this paper we generalize this method to find terms of a given type in the systems of Barendregt's cube (in particular in the Calculus of Constructions) extended with η-conversion. In some sense, because these type systems are more powerful than Church higher order logic, the proof synthesis problem is more complicated. But we claim that because the proofs-as-objects paradigm simplifies the formalisms, it also simplifies the proof synthesis algorithms. In particular resolution and unification can be merged in a uniform algorithm.
We first present a complete method, then we discuss some efficiency improvements, we present a variant of this method which is incomplete but much more efficient and we show how to turn this algorithm into a unification algorithm.
1 Outline of the Method
Resolution and Unification
In first order logic or higher order logic, we have two syntactical categories, terms and proofs. Terms are trees (in first order logic) or simply typed λ-terms (in higher order logic) and proofs are built, for instance, with natural deduction rules. These rules combine some proofs and terms to form new proofs. For instance the rule →-elim (modus ponens) combines a proof of A → B and a proof of A into a proof of B. In first order logic and higher order logic, the resolution algorithm searches for proof-trees. When, during the search of a proof-tree, a term-tree is needed, the unification algorithm is called. In type systems, term-trees and proof-trees belong to a single syntactical category, so our algorithm is a one level process merging resolution and unification.
We shortly present now the unification and resolution algorithms for higher order logic [START_REF] Huet | Constrained Resolution A Complete Method for Higher Order Logic[END_REF] [21] [START_REF] Huet | Résolution d' Équations dans les Langages d[END_REF], our goal is to show that a single idea on term enumeration underlies both algorithms.
Higher Order Unification
Higher order unification is based on an algorithm that enumerates all the normal η-long terms of a given type in the simply typed λ-calculus.
Let P 1 → ... → P n → P (P atomic) be a type. All normal η-long terms of this type begin by n abstractions t = [x 1 : P 1 ]...[x n : P n ]t ′
The term t ′ must have type P so it is an atomic term. A variable w : Q 1 → ... → Q p → Q of the context, or which is one of the x i , can be the head of this term only if Q = P . Then this variable must be applied p times in order to get a term of the right type, i.e. t ′ = (w (h 1 x 1 ... x n ) ... (h p x 1 ... x n )) for new variables h 1 , ..., h p . We then use our algorithm recursively to instantiate the h i : P 1 → ... → P n → Q i .
In unification, flexible-rigid equations are solved by instantiating the head of the flexible term using this method, the choice of the variable w is very restricted by the rigid term. Rigid-rigid equations are simplified. And flexible-flexible equations are solved in a trivial way.
So the method to enumerate terms underlying higher order unification can be summarized:
• try all the possible head variables of the normal η-long form of the term,
• generate new variables for the rest of the term,
• generate a typing constraint to enforce well-typedness of the term,
• use recursively this method to fill these variables.
Higher Order Resolution
Let us now present the resolution method. We present only an incomplete restriction of this method to Horn clauses i.e. propositions of the form
Q 1 → ... → Q n → Q, where Q 1 , ..., Q n , Q are atomic
propositions (free variables are considered as universally quantified in the head of the clause).
When the goal P is atomic we unify it with Q, the head of an hypothesis Q 1 → ... → Q n → Q, we generate subgoals σQ 1 , ..., σQ n (where σ is a unifier of P and Q) and we recursively search for proofs of these propositions.
When the goal is also a Horn clause P 1 → ... → P n → P , then the clause form of the negation of this goal introduces P 1 , ..., P n as hypothesis and the atomic goal P .
This restriction is called the introduction-resolution algorithm. It can be presented with two rules: the resolution rule between an atomic goal and the head of an hypothesis is just called the resolution rule and the introduction of the hypothesis P 1 when we want to prove P 1 → P is called the introduction rule. It is incomplete. In [START_REF] Huet | Constrained Resolution A Complete Method for Higher Order Logic[END_REF] another rule (the splitting rule) is added to make it complete.
Remark that this method can also be used for hereditary Horn clauses i.e. propositions of the form
Q 1 → ... → Q n → Q, where Q is atomic and Q 1 , ..., Q n are hereditary Horn clauses.
Introduction-Resolution in Type Systems
In type systems, all the propositions can be considered as hereditary Horn clauses where the arrow is generalized to a dependent product, so we can apply the same method.
Furthermore, in type systems, a proof-synthesis method must not only assert that the proposition is provable but also exhibit a proof-term of this proposition.
The introduction-resolution algorithm can be presented that way:
• Introduction: To find a proof of a proposition (x : P 1 )P in a context Γ, find a proof of P in the context Γ[x : P 1 ]. When we have a proof t of P , the term [x : P 1 ]t is a proof of (x : P 1 )P in the context Γ.
• Resolution: To find a proof of P in a context Γ where there is a proposition
f : (x 1 : Q 1 )...(x n : Q n )Q
unify P and Q (consider x 1 , ..., x n as variables in Q) and then if x i has not been instantiated during the unification process, find a proof of σQ i (where σ is a unifier of P and Q). For each i, we get (by unification or as a proof of a subgoal) a term c i of type
Q i [x 1 ← c 1 , ..., x i-1 ← c i-1 ]. The term (f c 1 ... c n ) is a proof of P .
This method is fully described in [START_REF] Helmink | Resolution and Type Theory[END_REF]. Let us give an example. We have an hypothesis
f : (x : T )(y : T )(z : T )(u : (R x y))(v : (R y z))(R x z)
and we want to prove (R a c). We unify (R x z) and (R a c) this gives the substitution x ← a, z ← c.
Then we have to find terms b : T , t : (R a b) and u : (R b c). The term (f a b c t u) is a proof of (R a c). This method is in general incomplete. Another drawback is that no general unification algorithm is known for type systems. Algorithms are known only for the simply typed λ-calculus [START_REF] Huet | A Unification Algorithm for Typed λ-calculus[END_REF] [START_REF] Huet | Résolution d' Équations dans les Langages d[END_REF] and the λΠ-calculus [START_REF] Elliott | Higher-order Unification with Dependent Function Types[END_REF] [11] [START_REF] Pym | Proof, Search and Computation in General Logic[END_REF].
An alternative presentation of this method is the following
• Introduction: we give the proof [x : P 1 ]h for the proposition and the variable h is a subgoal to be proved.
• Resolution: we give the proof (f h 1 ... h p ) for the proposition together with an equation
Q[x 1 ← h 1 , ..., x p ← h p ] = P (in our example (R h 1 h 3 ) = (R a c))
to force the term (f h 1 ... h p ) to have type P . We solve this unification problem which fills some of the variables h i (here h 1 and h 3 ) and then the other variables (here h 2 , h 4 and h 5 ) are subgoals to be proved.
When, to prove a proposition, we first apply the introduction rule n times then the resolution rule, we give the proof [x 1 :
P 1 ]...[x n : P n ](f h 1 ... h p ) i.e.
we give f for the head variable of proof.
Remark that the dependence of the h i on the x j is implicit. So the method to enumerate terms underlying the introduction-resolution method is
• try all the possible head variables of the normal η-long form of the term,
• generate new variables for the rest of the term,
• generate a typing constraint to enforce well-typedness of the term,
• use recursively this method to fill these variables.
So this method is the same as the method underlying higher order unification.
A Method to Enumerate the Terms of a Given Type
Now we use this idea to construct a complete proof synthesis method for type systems. To enumerate the terms t that we can substitute to a variable x of type T we imagine the normal η-long form of t
t = [x 1 : P 1 ]...[x n : P n ](w c 1 ... c p ) or t = [x 1 : P 1 ]...[x n : P n ](y : A)B
and we perform all the elementary substitutions
x ← [x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h p x 1 ... x n )) x ← [x 1 : P 1 ]...[x n : P n ](y : (h 1 x 1 ... x n ))(h 2 x 1 ... x n y)
Then we enumerate all the terms that can be substituted to the variables h 1 , ..., h p . Because we consider terms in η-long form, the number of abstractions n is the number of products in the type T and the types P 1 , ..., P n are the types of the variables bound in these products. So for completeness we have to consider • all the possible w,
• all the possible p and types for h 1 , ..., h p .
When we perform such a substitution, we have to make sure to get a term of type T . This well-typedness constraint is an equation.
In the introduction-resolution algorithm, the solution of this constraint is called the unification step of resolution. In unification in simply typed λ-calculus, types are always ground, so the constraint relates ground terms an we only have to check that they are identical, this is the selection of the head variables step. In unification in the λΠ-calculus [START_REF] Elliott | Higher-order Unification with Dependent Function Types[END_REF] [11] [START_REF] Pym | Proof, Search and Computation in General Logic[END_REF] this constraint is added to the set of equations, it is the accounting equation.
Here we keep this equation in the context. Equations are constraints that the forthcoming substitutions must verify. Since we do not have a special algorithm to solve these equations but use the standard method to fill the variables and use the equations as constraints on the substitutions, this method is a one level process merging resolution and unification.
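To fix ideas, the sketch below represents the term syntax of the paper as a small Python datatype and builds the skeleton of the first elementary substitution, [x1 : P1]...[xn : Pn](w (h1 x1 ... xn) ... (hp x1 ... xn)), for a chosen head w and a chosen number of applications p; the class names and the way the fresh variables h_i are produced are our own conventions, not part of the paper.

```python
from dataclasses import dataclass

# terms: variables/sorts, applications and abstractions (products omitted here)
@dataclass
class Var:
    name: str

@dataclass
class App:
    fun: object
    arg: object

@dataclass
class Abs:
    var: str
    ty: object
    body: object

def elementary_substitution(head, binders, p):
    """Skeleton [x1:P1]...[xn:Pn](head (h1 x1..xn) ... (hp x1..xn)),
    where binders is a list of (xi, Pi) and h1..hp are fresh variables."""
    xs = [Var(x) for x, _ in binders]
    body = Var(head)
    for i in range(1, p + 1):
        arg = Var(f"h{i}")
        for x in xs:                      # build (hi x1 ... xn)
            arg = App(arg, x)
        body = App(body, arg)
    for x, ty in reversed(binders):       # wrap the abstractions
        body = Abs(x, ty, body)
    return body

print(elementary_substitution("w", [("x1", Var("P1")), ("x2", Var("P2"))], p=2))
```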
Number of Applications
In the introduction-resolution algorithm when we want to prove an atomic proposition P and we resolve it with a variable w : (y 1 : Q 1 )...(y q : Q q )Q (Q atomic) we only consider proofs of the form (w h 1 ... h q ) and the constraint is that the type of this term must be equal to P i.e. Q[y 1 ← h 1 , ..., y q ← h q ] = P .
But in polymorphic type systems the term (w c 1 ... c q ) can still have a type which is a product and we can go on applying variables. This is why the introduction-resolution method is incomplete and why a splitting rule is needed in [START_REF] Huet | Constrained Resolution A Complete Method for Higher Order Logic[END_REF].
Here we consider an infinite number of possibilities for the number of applications and an infinitely branching search tree; we use interleaving to enumerate its nodes. In an incomplete but more efficient method we consider a restriction which is more or less similar to the introduction-resolution method.
Variables in the Proposition to be Proved
In some type systems, the variables h i may have occurrences in the type of h j for j > i. We could first instantiate the variable h i and then instantiate the variable h j when it has a ground type, but it is well known that it is more efficient to solve the variable h j first. For instance to enumerate the terms of the type T = ∃x : N at.(P x) we must not enumerate all the terms n of type N at and for each n enumerate the terms of type (P n), but enumerate first the terms of type (P x).
Thus in polymorphic type systems when we have a variable in the proposition to be proved, it is not always possible to know the number and the types of abstractions: if the proposition to be proved is T = (x 1 : P 1 )...(x n : P n )P (P atomic) and the head of P is a variable that can be substituted then a substitution may increase the number of products in T and therefore the number of abstractions in t. So we have to delay the instantiation of x : T until we have instantiated T in a term (x 1 : P 1 )...(x n : P n )P where the head of P cannot be substituted.
η-conversion
In this paper we consider pure type systems extended with η-conversion. The methods developed here should also be applicable for pure type systems without η-conversion but, as in higher order unification [START_REF] Huet | A Unification Algorithm for Typed λ-calculus[END_REF] [22], we would need to consider more elementary substitutions.
2 Pure Type Systems Extended with η-conversion
Definition
A pure type system extended with η-conversion [START_REF] Barendregt | Introduction to Generalized Type Systems[END_REF] is a λ-calculus given by a set S (the elements of S are called sorts), a subset Ax of S × S and a subset R of S × S × S.
Definition 1 Functional Type System
A type system is said to be functional if
< s, s ′ >∈ Ax and < s, s ′′ >∈ Ax implies s ′ = s ′′
< s, s ′ , s ′′ >∈ R and < s, s ′ , s ′′′ >∈ R implies s ′′ = s ′′′
In this paper we consider systems such that S = {P rop, T ype}, Ax = {< P rop, T ype >}, if < s, s ′ , s ′′ >∈ R then s ′ = s ′′ and < P rop, P rop, P rop >∈ R. These systems are the eight systems of Barendregt's cube. All these systems are functional. Examples are the simply typed λ-calculus, the λΠ-calculus [19] [7] [16], the system F, the system Fω [START_REF] Girard | Interprétation fonctionnelle et élimination des coupures dans l'arithmétique d'ordre supérieur[END_REF] and the Calculus of Constructions [2] [5].
Definition 2 Syntax T ::= s | x | (T T ) | [x : T ]T | (x : T )T
In this paper we ignore variable renaming problems. A rigorous presentation would use de Bruijn indices [START_REF] De Bruijn | Lambda Calculus Notation with Nameless Dummies, a Tool for Automatic Formula Manipulation, with Application to the Church-Rosser Theorem[END_REF]. The terms s are sorts, the terms x are called variables, the terms (T T ′ ) applications, the terms [x : T ]T ′ λ-abstractions and the terms (x : T )T ′ products. The notation T → T ′ is used for (x : T )T ′ when x has no free occurrence in T ′ .
Let t and t ′ be terms and x a variable. We write t[x ← t ′ ] for the term obtained by substituting t ′ for x in t. We write t ≡ t ′ when t and t ′ are βη-equivalent.
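As an illustration of the substitution t[x ← t ′ ] and of one step of β-reduction on the syntax of Definition 2, here is a naive Python sketch; as in the paper, variable-renaming (capture) issues are ignored, and the term representation is our own choice.

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class App:
    fun: object
    arg: object

@dataclass
class Abs:
    var: str
    ty: object
    body: object

def subst(t, x, u):
    """t[x <- u], ignoring capture as the paper does informally."""
    if isinstance(t, Var):
        return u if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.fun, x, u), subst(t.arg, x, u))
    if isinstance(t, Abs):
        if t.var == x:                  # x is rebound: stop substituting
            return t
        return Abs(t.var, subst(t.ty, x, u), subst(t.body, x, u))
    return t                            # sorts / constants

def beta_step(t):
    """Contract the outermost redex ([x:T]b a) -> b[x <- a] if present."""
    if isinstance(t, App) and isinstance(t.fun, Abs):
        return subst(t.fun.body, t.fun.var, t.arg)
    return t

redex = App(Abs("x", Var("T"), App(Var("f"), Var("x"))), Var("a"))
print(beta_step(redex))    # App(fun=Var(name='f'), arg=Var(name='a'))
```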
Definition 3 Context
A context Γ is a list of pairs < x, T > (written x : T ) where x is a variable and T a term. The term T is called the type of x in Γ.
We write [x 1 : T 1 ; ...; x n : T n ] for the context with elements x 1 : T 1 , ..., x n : T n and Γ 1 Γ 2 for the concatenation of the contexts Γ 1 and Γ 2 .
Definition 4 Typing Rules
We define inductively two judgements: Γ is well-formed and t has type T in Γ (Γ ⊢ t : T ) where Γ is a context and t and T are terms.

Empty:
  ───────────────
  [ ] well-formed

Declaration:
  Γ ⊢ T : s     s ∈ S
  ──────────────────────
  Γ[x : T ] well-formed

Sort:
  Γ well-formed     < s, s ′ >∈ Ax
  ─────────────────────────────────
  Γ ⊢ s : s ′

Variable:
  Γ well-formed     x : T ∈ Γ
  ───────────────────────────
  Γ ⊢ x : T

Product:
  Γ ⊢ T : s     Γ[x : T ] ⊢ T ′ : s ′     < s, s ′ , s ′′ >∈ R
  ─────────────────────────────────────────────────────────────
  Γ ⊢ (x : T )T ′ : s ′′

Abstraction:
  Γ ⊢ (x : T )T ′ : s     Γ[x : T ] ⊢ t : T ′     s ∈ S
  ──────────────────────────────────────────────────────
  Γ ⊢ [x : T ]t : (x : T )T ′

Application:
  Γ ⊢ t : (x : T )T ′     Γ ⊢ t ′ : T
  ───────────────────────────────────
  Γ ⊢ (t t ′ ) : T ′ [x ← t ′ ]

Conversion:
  Γ ⊢ T : s     Γ ⊢ T ′ : s     Γ ⊢ t : T     T ≡ T ′     s ∈ S
  ───────────────────────────────────────────────────────────────
  Γ ⊢ t : T ′

Definition 5 Well-typed Term
A term t is said to be well-typed in a context Γ if there exists a term T such that Γ ⊢ t : T .
Definition 6 Type
A term T is said to be a type in the context Γ if it is a sort or if there exists a sort s such that Γ ⊢ T : s.
Proposition 1 If Γ ⊢ t : T then T is a type.
Proof By induction on the length of the derivation of Γ ⊢ t : T .
Proposition 2
The βη-reduction is strongly normalizable and confluent on well-typed terms. Thus a well-typed term has a unique βη-normal form. Two well-typed terms are equivalent if they have the same βη-normal form. Proof Proofs of strong normalization and confluence for β-reduction are given in [START_REF] Th | Une Théorie des Constructions[END_REF] [START_REF] Geuvers | A Modular Proof of Strong Normalization for the Calculus of Constructions[END_REF]. As remarqued in [START_REF] Geuvers | The Church-Rosser Property for βη-reduction in Typed Lambda Calculi[END_REF] the strong normalization proof of [START_REF] Geuvers | A Modular Proof of Strong Normalization for the Calculus of Constructions[END_REF] can be adapted to βη-reduction and does not need confluence. Then confluence proofs for βη-reduction are given in [START_REF] Geuvers | The Church-Rosser Property for βη-reduction in Typed Lambda Calculi[END_REF] [30] [START_REF] Th | An Algorithm for Testing Conversion in Type Theory, Logical Frameworks I[END_REF].
Proposition 3 In a functional type system, in which reduction is confluent, a term t well-typed in a context Γ has a unique type modulo βη-equivalence. Proof By induction over the structure of t.
Definition 7 Atomic Term
A term t is said to be atomic if it has the form (u c 1 ... c n ) where u is a variable or a sort. The symbol u is called the head of the term t.
Proposition 4 A normal well-typed term t is either an abstraction, a product or an atomic term. Proof If the term t is neither an abstraction nor a product then it can be written in a unique way
t = (u c 1 ... c n )
where u is not an application. The term u is not a product (if n ≠ 0 because a product is of type s for some sort s and therefore cannot be applied, and if n = 0 because t is not a product). It is not an abstraction (if n ≠ 0 because t is in normal form and if n = 0 because t is not an abstraction). It is therefore a variable or a sort.
Proposition 5 Let T be a well-typed normal type, the term T can be written in a unique way T = (x 1 : P 1 )...(x n : P n )P with P atomic. Proof By induction over the structure of T .
Definition 8 η-long form
Let Γ be a context and t be a βη-normal term well-typed in Γ and T the βη-normal form of its type. The η-long form of the term t is defined as
• If t = [x : U ]u then the η-long form of t is [x : U ′ ]u ′ where U ′ is the η-long form of U in Γ and u ′ the η-long form of u in Γ[x : U ]. • If t = (x : U )V then the η-long form of t is (x : U ′ )V ′ where U ′ is the η-long form of U in Γ and V ′ the η-long form of V in Γ[x : U ]. • If t = (w c 1 ... c p ) then we let T = (x 1 : P 1 )...(x n : P n )P (P atomic). The η-long form of t is [x 1 : P ′ 1 ]...[x n : P ′ n ](w c ′ 1 ... c ′ p x ′ 1 ... x ′ n )
where c ′ i is the η-long form of c i in Γ, P ′ i the η-long form of P i in Γ[x 1 : P 1 ; ...; x i-1 : P i-1 ] and x ′ i the η-long form of x i in Γ[x 1 : P 1 ; ...; x i :
P i ].
The well-foundedness of this definition is proved in [8] [9].
Definition 9 normal η-long form Let t be a term well-typed in a context Γ, the normal η-long form of t is the η-long form of its βη-normal form.
Definition 10 Subterm
We consider well-typed normal η-long terms labeled with the contexts in which they are welltyped: t Γ . Let t Γ such a term, we define by induction over the structure of
t Γ the set Sub(t Γ ) of strict subterms of t Γ • if t Γ is a sort or a variable then Sub(t Γ ) = {}, • if t Γ is an application, t = (u v), then Sub(t Γ ) = {u Γ , v Γ } ∪ Sub(u Γ ) ∪ Sub(v Γ ), • if t Γ is an abstraction, t = [x : P ]u, then Sub(t Γ ) = {P Γ , u Γ[x:P ] } ∪ Sub(P Γ ) ∪ Sub(u Γ[x:P ] ), • if t Γ is a product, t = (x : P )u, then Sub(t Γ ) = {P Γ , u Γ[x:P ] } ∪ Sub(P Γ ) ∪ Sub(u Γ[x:P ] ).
The M eta Type System
Let T be a type system of the cube. In order to express the proof synthesis method, we have to be allowed to declare a variable that stands for any well-typed term of T . For instance a variable that stands for P rop i.e. a variable of type T ype and this is not possible in T because the term T ype is not well-typed.
We want also to be allowed to express a term t : T ′ which is well-typed in Γ[x : T ] as t = (f x) with f well-formed in Γ. We want actually to be allowed to define f = [x : T ]t whatever the terms T and t may be. In general this cannot be done in T , because the type (x : T )T ′ may be not well-typed. So we are going to embed our type system T in another type system: the M eta type system.
Definition 11 M eta M eta =< S ′ , Ax ′ , R ′ > S ′ = {P rop, T ype, Extern}
Ax ′ = {< P rop, T ype >, < T ype, Extern >} R ′ = {< P rop, P rop, P rop >, < P rop, T ype, T ype >, < T ype, P rop, P rop >, < T ype, T ype, T ype >, < P rop, Extern, Extern >, < T ype, Extern, Extern >}
The strong normalization and confluence of βη-reduction for this system are not yet fully proved. But since all the terms typable in the M eta type system are typable in the Calculus of Constructions with Universes [START_REF] Th | An analysis of Girard's paradox[END_REF] [25] (identifying the sorts Extern and T ype 1 ), the β-reduction is strongly normalizable and confluent [START_REF] Th | An analysis of Girard's paradox[END_REF] [START_REF] Luo | An Extended Calculus of Constructions[END_REF]. It is conjectured in [START_REF] Geuvers | The Church-Rosser Property for βη-reduction in Typed Lambda Calculi[END_REF] that in a pure type system in which the β-reduction is strongly normalizable the βη-reduction also is. Then assuming strong normalization, confluence is proved in [START_REF] Geuvers | The Church-Rosser Property for βη-reduction in Typed Lambda Calculi[END_REF] [30] [START_REF] Th | An Algorithm for Testing Conversion in Type Theory, Logical Frameworks I[END_REF].
Proposition 6 Because the M eta type system is functional and βη-reduction is confluent, if a term t is well-typed in a context Γ then it has a unique type modulo βη-equivalence.
3 Constrained Quantified Contexts and Substitution
Constrained Quantified Contexts Definition 12 Constrained Quantified Contexts
A quantified declaration is a triple < Q, x, T > (written Qx : T ) where Q is a quantifier (∀ or ∃), x a variable and T is a term. A constraint is a pair of terms < a, b > (written a = b). A constrained quantified context is a list of quantified declarations and constraints. If Γ contains the declaration ∀x : T then the variable x is said to be universal in Γ. If it contains the declaration ∃x : T then x is said to be existential in Γ. These constrained quantified contexts are generalizations of Miller's mixed prefixes [START_REF] Miller | Unification Under a Mixed Prefix[END_REF] which are lists of quantified declarations.
Usual contexts are identified with constrained quantified contexts with only universal variables.
Definition 13 Equivalence Modulo Constraints
Let Γ be a constrained quantified context, we define the relation between terms ≡ Γ as the smallest equivalence relation compatible with terms structure such that
• if t ≡ t ′ then t ≡ Γ t ′ , • if (a = b) ∈ Γ then a ≡ Γ b.
Definition 14 Typing Rules
First we modify the rules to deal with the new syntax. The declaration rule is modified in
  Γ ⊢ T : s     s ∈ S
  ───────────────────────
  Γ[Qx : T ] well-formed

the variable rule is modified in

  Γ well-formed     Qx : T ∈ Γ
  ────────────────────────────
  Γ ⊢ x : T

the product rule is modified in

  Γ ⊢ T : s     Γ[Qx : T ] ⊢ T ′ : s ′     < s, s ′ , s ′′ >∈ R
  ──────────────────────────────────────────────────────────────
  Γ ⊢ (x : T )T ′ : s ′′

the abstraction rule is modified in

  Γ ⊢ (x : T )T ′ : s     Γ[Qx : T ] ⊢ t : T ′     s ∈ S
  ───────────────────────────────────────────────────────
  Γ ⊢ [x : T ]t : (x : T )T ′

and we add a constraint rule

  Γ ⊢ a : T     Γ ⊢ b : T
  ────────────────────────
  Γ[a = b] well-formed

Then we extend the system by replacing the conversion rule by

  Γ ⊢ T : s     Γ ⊢ T ′ : s     Γ ⊢ t : T     T ≡ Γ T ′     s ∈ S
  ─────────────────────────────────────────────────────────────────
  Γ ⊢ t : T ′
This defines two new judgements: Γ is well-formed using the constraints and t has type T in Γ using the constraints.
Remark that a term may be well-typed in Γ using the constraints and still be not normalizable.
Definition 15 Well-typed Without Using the Constraints
Let Γ be a context and t and T be two terms. The term t is said to be of type T in Γ without using the constraints if there exists ∆ subcontext of Γ (i.e. obtained by removing some items of Γ) such that ∆ has no constraints, is a well-formed context and ∆ ⊢ t : T . Proposition 7 If a term is well-typed in a context without using the constraints then it is strongly normalizable.
Definition 16 Normal Form of a Context
Let Γ be a well-formed context, the normal form of Γ is obtained by normalizing (in normal η-long form) all the types of variables that are well-typed without using the constraints and all the constraints which terms are well-typed without using the constraints.
Proposition 8 Let Γ be a well-formed context and Γ ′ be its normal form. The context Γ ′ is well-formed and if Γ ⊢ t : T then Γ ′ ⊢ t : T . Proof By induction on the length of the derivation of Γ well-formed and Γ ⊢ t : T .
Definition 17 Ground Term
A term t is said to be ground in a context Γ if it has no occurrence of an existential variable.
Definition 18 Rigid and Flexible Terms A normal η-long term well-typed without the constraints which is an abstraction, a product or an atomic term (w c 1 ... c n ) with w universal variable or sort is said to be rigid. A normal η-long term well-typed without the constraints which is atomic (w c 1 ... c n ) with w existential variable is said to be flexible.
Definition 19 Success and Failure Contexts
A normal well-formed context Γ is said to be a success context if it has only universal variables and constraints relating identical terms. It is said to be a failure context if it contains a constraint relating two normal η-long ground terms which are not identical.
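Groundness (Definition 17) and the success/failure classification (Definition 19) can be sketched in the same style. The classification below assumes terms are kept in normal η-long form and again reuses the illustrative datatype introduced after Definition 12:

  (* A term is ground when no existential variable of gamma occurs in it. *)
  let rec is_ground gamma = function
    | Var x -> not (is_existential gamma x)
    | Sort _ -> true
    | App (u, v) -> is_ground gamma u && is_ground gamma v
    | Lam (_, ty, u) | Prod (_, ty, u) -> is_ground gamma ty && is_ground gamma u

  type status = Success | Failure | Open

  (* Success: only universal declarations and trivial constraints remain.
     Failure: some constraint relates two distinct ground normal terms. *)
  let classify gamma =
    let success =
      List.for_all
        (function Decl (Exists, _, _) -> false | Constr (a, b) -> a = b | _ -> true)
        gamma in
    let failure =
      List.exists
        (function Constr (a, b) -> a <> b && is_ground gamma a && is_ground gamma b | _ -> false)
        gamma in
    if success then Success else if failure then Failure else Open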
Proposition 9 Let Γ be a normal context which is neither a success context nor a failure context. Then there exists an existential variable x : T whose type T is well-typed without using the constraints and has the form (x 1 : P 1 )...(x n : P n )P with the term P atomic and rigid in the context Γ[∀x 1 : P 1 ; ...; ∀x n : P n ]. Proof Let us consider the leftmost item which is neither a universal variable nor a normal η-long constraint relating identical terms (such an item exists since the context is not a success context).
If this item is a constraint (a = b), we let Γ = ∆[a = b]∆ ′ , the terms a and b are well-typed in ∆ so, since there are neither existential variables nor (non trivial) constraints in ∆, they are ground, well-typed without the constraints and normal. These terms are different and the context Γ is a failure context. So if Γ is not a failure context then this item is an existential variable ∃x : T and T is well-typed without the constraints and ground.
Substitution
Definition 20 Existential Context
A context (well-formed or not) is said to be existential if it contains only declarations of existential variables and constraints but no declarations of universal variables.
Definition 21 Substitution
A finite set σ of triples < x, γ, t > where x is a variable, γ an existential context and t a term is said to be a substitution if for each variable x there is at most one triple of the form < x, γ, t > in σ. When σ contains only one triple < x, γ, t > we write it σ = x ← γ, t.
Definition 22 Variable Bound by a Substitution
Let x be a variable and σ be a substitution. If the substitution σ contains a triple < x, γ, t > then x is said to be bound by σ. The context γ is said to be the context associated to x in σ.
Definition 23 Substitution Applied to a Term
Let x be a variable and σ a substitution. If there is a triple < x, γ, t > in σ then we let σx = t else we let σx = x. This definition extends straightforwardly to terms by
• σs = s, • σ(t u) = (σt σu), • σ[x : T ]u = [x : σT ]σu,
• σ(x : T )u = (x : σT )σu.
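Only the term component of the triples matters when a substitution is applied to a term, so a sketch may represent a substitution as an association list from variable names to terms. The function below is a naive illustration of Definition 23; it ignores variable capture, which is acceptable only under the assumption that substituted terms do not mention the bound variables they are pushed under:

  let rec subst_term (sigma : (string * term) list) (t : term) : term =
    match t with
    | Sort _ -> t
    | Var x -> (try List.assoc x sigma with Not_found -> t)
    | App (u, v) -> App (subst_term sigma u, subst_term sigma v)
    | Lam (x, ty, u) -> Lam (x, subst_term sigma ty, subst_term sigma u)
    | Prod (x, ty, u) -> Prod (x, subst_term sigma ty, subst_term sigma u)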
Definition 24 Substitution Applied to a Context
Let Γ be a context and σ a substitution, the context σΓ is defined inductively by
• σ[ ] = [ ]
• σ(∆[Qx : T ]) = (σ∆)γ where γ is the context associated to x by σ if the variable x is bound by σ and γ = [Qx : σT ] otherwise.
• σ(∆[a = b]) = (σ∆)[σa = σb]
Definition 25 Substitution Well-typed in a Context
The substitution σ is said to be well-typed in a context Γ if and only if the context σΓ is well-formed, no universal variable of Γ is bound by σ and for each existential variable x of type T in Γ and bound by σ, we have (σ∆)γ ⊢ t : σT where ∆ is the unique context such that Γ = ∆[∃x : T ]∆ ′ and < x, γ, t > is the unique triple of σ binding the variable x.
Proposition 10 Let Γ be a context, σ a substitution and T and T ′ terms such that T ≡ Γ T ′ . Then σT ≡ σΓ σT ′ . Proof By induction on the length of the derivation of T ≡ Γ T ′ .
Proposition 11 Let σ be a substitution, t and u two terms and x a variable which is not bound by σ and which is not free in any σy for y ̸ = x. Then σ(t
[x ← u]) = (σt)[x ← σu].
Proof By induction over the structure of t.
Proposition 12 Let Γ be a context, σ a substitution well-typed in Γ and x a variable declared of type T in Γ. Then σΓ ⊢ σx : σT . Proof Since the substitution σ is well-typed in Γ, the context σΓ is well-formed. If x is not bound by σ then σx = x and Qx : σT ∈ σΓ so σΓ ⊢ σx : σT . If x is bound by σ then there exists a unique triple < x, γ, t > in σ. Let us write Γ = ∆[∃x : T ]∆ ′ . We have σx = t. Since σ is well-typed in Γ we have (σ∆)γ ⊢ t : σT and (σ∆)γ is a prefix of σΓ. So σΓ ⊢ σx : σT .
Proposition 13 Let Γ be a context, σ a substitution well-typed in Γ, t and T two terms such that Γ ⊢ t : T . We have σΓ ⊢ σt : σT . Proof By induction on the length of the derivation of Γ ⊢ t : T .
• If the last rule is the sort rule then we use the fact that σΓ is well-formed.
• If the last rule is the variable rule then we use the fact that σΓ is well-formed and σΓ ⊢ σx : σT .
• If the last rule is the product rule then by induction hypothesis we have σΓ ⊢ σT : s and (σΓ)[∀x : σT ] ⊢ σT ′ : s ′ , so σΓ ⊢ σ(x : T )T ′ : s ′′ .
• If the last rule is the abstraction rule then by induction hypothesis we have σΓ ⊢ σ(x : T )T ′ : s and (σΓ)[∀x : σT ] ⊢ σt : σT ′ , so σΓ ⊢ σ[x : T ]t : σ(x : T )T ′ .
• If the last rule is the application rule then by induction hypothesis we have σΓ ⊢ σt : σ(x : T )T ′ and σΓ ⊢ σt ′ : σT , so we get σΓ ⊢ σ(t t ′ ) : (σT ′ )[x ← σt ′ ] and since x is not bound by σ and is not free in any σy for y ̸ = x, we have σΓ ⊢ σ(t t ′ ) : σ(T ′ [x ← t ′ ]).
• If the last rule is the conversion rule then by induction hypothesis we have σΓ ⊢ σT : s, σΓ ⊢ σT ′ : s and since T ≡ Γ T ′ we have σT ≡ σΓ σT ′ , so σΓ ⊢ σt : σT ′ .
Definition 26 Composition of Substitutions
Let σ and τ be two substitutions. The substitution τ • σ is defined as
(τ • σ) = {< x, τ γ, τ t > | < x, γ, t >∈ σ} ∪ {< x, γ, t > | < x, γ, t >∈ τ and x not bound by σ}
Proposition 14 Let σ and τ be two substitutions and t be a term, we have (τ • σ)t = τ σt. Proof By induction over the structure of t.
Proposition 15 Let σ and τ be two substitutions and Γ be a context, we have (τ • σ)Γ = τ σΓ. Proof By induction over the length of Γ.
Proposition 16 Let Γ be a context and σ and τ two substitutions, such that σ is well-typed in Γ and τ is well-typed in σΓ, then τ • σ is well-typed in Γ. Proof We have (τ • σ)Γ = τ σΓ, so the context (τ • σ)Γ is well-formed. No universal variable of Γ is bound by τ • σ. If Γ = ∆[∃x : T ]∆ ′ then σ(∆[∃x : T ]) ⊢ σx : σT so, using the previous proposition, τ σ(∆[∃x : T ]) ⊢ τ σx : τ σT , i.e. (τ • σ)(∆[∃x : T ]) ⊢ (τ • σ)x : (τ • σ)T .
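With the same simplified representation as in the earlier sketches (contexts omitted, subst_term as defined after Definition 23), the composition of Definition 26 can be written as follows; it is an illustration only:

  (* Triples of sigma get tau applied to their term; triples of tau survive
     only for variables not already bound by sigma. *)
  let compose (tau : (string * term) list) (sigma : (string * term) list) =
    List.map (fun (x, t) -> (x, subst_term tau t)) sigma
    @ List.filter (fun (x, _) -> not (List.mem_assoc x sigma)) tau

Proposition 14 can then be checked on small examples by comparing subst_term (compose tau sigma) t with subst_term tau (subst_term sigma t).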
A Complete Method Informal Introduction
When we search a proof of a proposition T in a context Γ we search a substitution σ well-typed in Γ[∃x : T ] such that σ(Γ[∃x : T ]) is a success context. When we have a context which contains several existential variables we choose such a variable x of type T = (x 1 : P 1 )...(x n : P n )P with P atomic rigid in Γ[∀x 1 : P 1 ; ...; ∀x n : P n ] and we perform an elementary substitution instantiating this variable. In the general case the elementary substitutions have the form
x ← [x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h p x 1 ... x n ))
Let us write (y 1 : Q 1 )(y 2 : (Q 2 y 1 ))...(y q : (Q q y 1 ... y q-1 ))(Q y 1 ... y q ) the type of w. For the substitutions such that p = q we only need to declare the new variables h 1 , ..., h q used in the substitution and the constraint expressing the well-typedness of the substitution
(⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) = (⃗ x : ⃗ P )P
(where (⃗ x : ⃗ P )t is an abbreviation for (x 1 : P 1 )...(x n : P n )t). For the substitutions such that p = q + r, r ≥ 1, we first declare the variables h 1 , ..., h q , then we need the type of the term (w (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) to be a product, we introduce therefore two existential variables H 1 and K 1 and a constraint
(⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) = (⃗ x : ⃗ P )(z : (H 1 x 1 ... x n ))(K 1 x 1 ... x n z)
Then we can introduce the variable h q+1 with the type (⃗ x : ⃗ P )(H 1 x 1 ... x n ). If r = 1 we just need a last constraint expressing the well-typedness of the substitution
(⃗ x : ⃗ P )(K 1 x 1 ... x n (h q+1 x 1 ... x n )) = (⃗ x : ⃗ P )P
and in the general case we introduce in the same way 4r items and then a well-typedness constraint.
We also have to include other elementary substitutions in which the variable x is substituted by a term of the form [x 1 : P 1 ]...[x n : P n ]u where u is a sort or a product.
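Before the formal definition below, the shape of the substituted terms can be sketched concretely. The function below, written with the illustrative datatype introduced earlier, builds [x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h p x 1 ... x n )) from the binders, a head w and fresh names h 1 , ..., h p ; the names are assumptions of the sketch:

  let elementary_body (binders : (string * term) list) (w : term) (hs : string list) : term =
    (* (h x1 ... xn): the fresh variable applied to all the bound variables *)
    let applied_to_xs h =
      List.fold_left (fun acc (x, _) -> App (acc, Var x)) (Var h) binders in
    (* (w (h1 x1 ... xn) ... (hp x1 ... xn)) *)
    let body = List.fold_left (fun acc h -> App (acc, applied_to_xs h)) w hs in
    (* wrap the body in the abstractions [x1:P1]...[xn:Pn] *)
    List.fold_right (fun (x, ty) acc -> Lam (x, ty, acc)) binders body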
Definition 27 Elementary Substitutions
Let Γ be a normal context well-formed in the M eta type system such that the type of the type of universal variables is P rop or T ype but not Extern and which is neither a success nor a failure context. We choose in Γ an existential variable x whose type is normal η-long and has the form T = (x 1 : P 1 )...(x n : P n )P with P atomic rigid in Γ[∀x 1 : P 1 ; ...; ∀x n : P n ] (such a variable exists because the context in neither a success nor a failure context) and we construct the set of substitutions Σ(Γ) in the following way.
Notation If t is a term, (⃗ x : ⃗ P )t is an abbreviation for (x 1 : P 1 )...(x n : P n )t.
• For every w which is a universal variable declared in the left of x or which is an x i Γ[∀x 1 : P 1 ; ...; ∀x n :
P n ] ⊢ w : (y 1 : Q ′ 1 )...(y q : Q ′ q )Q ′ (Q ′ atomic)
We let
Q 1 = Q ′ 1
Q 2 = [y 1 : Q ′ 1 ]Q ′ 2
...
Q q = [y 1 : Q ′ 1 ]...[y q-1 : Q ′ q-1 ]Q ′ q
Q = [y 1 : Q ′ 1 ]...[y q : Q ′ q ]Q ′
We have w : (y 1 : Q 1 )(y 2 : (Q 2 y 1 ))...(y q : (Q q y 1 ... y q-1 ))(Q y 1 ... y q )
For every r ≥ 0 we consider the following substitutions with r-splitting. Let s be the sort type of Q ′ and s ′ the sort type of P in Γ[∀x 1 : P 1 ; ...; ∀x n :
P n ]. For every sequence s 1 , s ′ 1 , ..., s i , s ′ i , ..., s r , s ′ r such that < s 1 , s ′ 1 , s >∈ R, ..., < s i , s ′ i , s ′ i-1 >∈ R and s ′ r = s ′ , we consider the substitution x ← γ, t where t = [x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+r x 1 ... x n )) γ = φχ 1 ...χ r ψ with φ = [∃h 1 : (⃗ x : ⃗ P )Q 1 ; ∃h 2 : (⃗ x : ⃗ P )(Q 2 (h 1 x 1 ... x n ));
...;
∃h q : (⃗ x : ⃗ P )(Q q (h 1 x 1 ... x n ) ... (h q-1 x 1 ... x n ))] χ 1 = [∃H 1 : (⃗ x : ⃗ P )s 1 ; ∃K 1 : (⃗ x : ⃗ P )(z : (H 1 x 1 ... x n ))s ′ 1 ; (⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) = (⃗ x : ⃗ P )(z : (H 1 x 1 ... x n ))(K 1 x 1 ... x n z); ∃h q+1 : (⃗ x : ⃗ P )(H 1 x 1 ... x n )] for all i, 1 < i ≤ r χ i = [∃H i : (⃗ x : ⃗ P )s i ; ∃K i : (⃗ x : ⃗ P )(z : (H i x 1 ... x n ))s ′ i ; (⃗ x : ⃗ P )(K i-1 x 1 ... x n (h q+i-1 x 1 ... x n )) = (⃗ x : ⃗ P )(z : (H i x 1 ... x n ))(K i x 1 ... x n z); ∃h q+i : (⃗ x : ⃗ P )(H i x 1 ... x n )] and if r = 0 ψ = [(⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) = (⃗ x : ⃗ P )P ] otherwise ψ = [(⃗ x : ⃗ P )(K r x 1 ... x n (h q+r x 1 ... x n )) = (⃗ x : ⃗ P )P ]
• If P is a sort then for every sort s, such that < s, P >∈ Ax, we consider also the substitution
x ← [ ], [x 1 : P 1 ]...[x n : P n ]s
and for every pair of sorts < s, s ′ > such that < s, s ′ , P >∈ R we also consider the substitution
x ← γ, t
where t = [x 1 : P 1 ]...[x n : P n ](y : (h x 1 ... x n ))(k x 1 ... x n y) γ = [∃h : (⃗ x : ⃗ P )s ; ∃k : (⃗ x : ⃗ P )(y : (h x 1 ... x n ))s ′ ]
The set Σ(Γ) is the set which contains all the substitutions considered above.
Definition 28 Derivation A derivation of a context Γ is a list of substitutions [σ 1 ; ...; σ m ] such that σ i ∈ Σ(σ i-1 σ i-2 ...σ 1 Γ) and σ m σ m-1 ...σ 1 Γ is a success context.
Definition 29 Search tree
Let Γ be a context, we build a tree, called the search tree of Γ. Nodes are labeled by contexts and edges by elementary substitutions. The root is labeled by Γ. Nodes labeled by success and failure contexts are leaves. From a node labeled by a context ∆ which is neither a success context nor failure context, for each σ of Σ(∆) we grow an edge labeled σ to a new node labeled by σ∆.
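A fair enumeration of this tree can be sketched as a plain breadth-first loop. In the sketch below, expand is assumed to return the finitely many sons of an open node (for instance by delaying r-splitting, as explained below) and classify is the classification sketched after Definition 19:

  let rec search (frontier : context list) (expand : context -> context list) : context option =
    match frontier with
    | [] -> None
    | gamma :: rest ->
        (match classify gamma with
         | Success -> Some gamma                       (* a derivation has been found *)
         | Failure -> search rest expand               (* prune this branch *)
         | Open -> search (rest @ expand gamma) expand (* enqueue the sons *))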
Success nodes are in bijection with the derivations of Γ. A semi-algorithm of proof synthesis is to enumerate the nodes of the tree in order to find a success node. Since the number of sons of a node may be infinite we have, in order to get a finitely branching tree, to delay the exploration of nodes with r-splitting for r generations.
A First Order Example
Let ∆ = [∀T : P rop; ∀R : T → T → P rop; ∀Eq : T → T → P rop; ∀Antisym : (x : T )(y : T )((R x y) → (R y x) → (Eq x y)); ∀a : T ; ∀b : T ; ∀u : (R a b); ∀v : (R b a)] and Γ = ∆[∃x : (Eq a b)].
For x, we perform the substitution with head Antisym and 0-splitting
x ← (Antisym h 1 h 2 h 3 h 4 ) We get the context ∆[∃h 1 : T ; ∃h 2 : T ; ∃h 3 : (R h 1 h 2 ); ∃h 4 : (R h 2 h 1 ); (Eq h 1 h 2 ) = (Eq a b)].
For h 1 , we perform the substitution with head a and 0-splitting
h 1 ← a
For h 2 , we perform the substitution with head b and 0-splitting
h 2 ← b
We get the context ∆[∃h 3 : (R a b); ∃h 4 : (R b a)] (we do not write the trivial constraints). For h 3 , we perform the substitution with head u and 0-splitting
h 3 ← u We get the context ∆[∃h 4 : (R b a)].
For h 4 , we perform the substitution with head v and 0-splitting
h 4 ← v
And we get the success context ∆. Let θ be the composition of these substitutions, θx = (Antisym a b u v).
As we will see in the following, in this example, at each step, all the substitutions but the one considered lead to obviously hopeless contexts.
An Example with Splitting
Let ∆ = [∀A : P rop; ∀B : P rop; ∀I : P rop → P rop; ∀u : (P : P rop)((I P ) → P ); ∀v : (I (A → B)); ∀w : A] and Γ = ∆[∃p : B]. For p, we perform the substitution with head u and 1-splitting
p ← (u h 1 h 2 h 3 )
with the new variables: h 1 : P rop, h 2 : (I h 1 ), H : P rop, K : H → P rop and h 3 : H. We get the context ∆[∃h 1 : P rop; ∃h 2 : (I h 1 ); ∃H : P rop; ∃K : H → P rop; ∃h 3 :
H; h 1 = (x : H)(K x); (K h 3 ) = B].
For h 1 , we perform a product substitution
h 1 ← (x : h ′ )(k ′ x)
We get the context ∆[∃h ′ : P rop; ∃k ′ : h ′ → P rop; ∃h 2 : (I ((x : h ′ )(k ′ x))); ∃H : P rop; ∃K : H → P rop; ∃h 3 : H;
(x : H)(K x) = (x : h ′ )(k ′ x); (K h 3 ) = B].
For h 2 we perform the substitution with head v and 0-splitting
h 2 ← v
We get the context ∆[∃h ′ : P rop; ∃k ′ : h ′ → P rop; (I ((x : h ′ )(k ′ x))) = (I (A → B)); ∃H : P rop; ∃K : H → P rop;
∃h 3 : H; (x : H)(K x) = (x : h ′ )(k ′ x); (K h 3 ) = B].
For h ′ we perform the substitution with head A and 0-splitting
h ′ ← A ∆[∃k ′ : A → P rop; (I ((x : A)(k ′ x))) = (I (A → B)); ∃H : P rop; ∃K : H → P rop; ∃h 3 : H; (x : H)(K x) = (x : A)(k ′ x); (K h 3 ) = B].
For k ′ we perform the substitution with head B and 0-splitting
k ′ ← [x : A]B
We get the context ∆[∃H : P rop; ∃K : H → P rop; ∃h 3 : H;
(x : H)(K x) = (A → B); (K h 3 ) = B].
For H we perform the substitution with head A and 0-splitting
H ← A
We get the context ∆[∃K : A → P rop; ∃h 3 : A; (x : A)(K x) = (A → B);
(K h 3 ) = B].
For K we perform the substitution with head B and 0-splitting
K ← [x : A]B
We get the context ∆[∃h 3 : A]. For h 3 we perform the substitution with head w and 0-splitting
h 3 ← w
And we get the success context ∆. Let θ be the composition of all these substitutions, θp = (u (A → B) v w).
Properties
Well-typedness
In this section Γ is a normal context well-formed in the M eta type system such that the type of the type of universal variables is P rop or T ype but not Extern and which is neither a success nor a failure context, and σ = x ← γ, t is a substitution of Σ(Γ). We write Γ = ∆[∃x : T ]∆ ′ . We prove that the substitution σ is well-typed in the context Γ.
Soundness
Definition 30 Solution to a Context
Let Γ be a well-formed context, the substitution θ is said to be a solution to Γ if
• the substitution θ is well-typed in Γ,
• the context θΓ is a success context.
Definition 31 Normal Solution to a Context
Let Γ be a well-formed context, the substitution θ, solution to Γ, is said to be a normal solution to Γ if
• the substitution θ binds exactly the existential variables of Γ,
• for each existential variable x of Γ, the context associated to x by θ is empty and σx is normal η-long in the context θ∆ where ∆ is the unique context such that Γ = ∆[∃x : T ]∆ ′ .
Definition 32 Normal Form of a Solution Let Γ be a well-formed context and θ a solution to Γ. We define θ ′ the normal form of θ in Γ by induction over the length of Γ.
• If Γ = [ ] then we let θ ′ = {}. • If Γ = ∆[∀x : T ] or Γ = ∆[a = b]
then θ is a solution to ∆, we let θ ′ be the normal form of θ in ∆.
• If Γ = ∆[∃x : T ] then θ is a solution to ∆. We let θ 1 be the normal form of θ in ∆. We have θΓ ⊢ θx : θT , let t be the normal η-long form of θx in θΓ. We let
θ ′ = θ 1 ∪ {< x, [ ], t >}.
Proposition 20 Let Γ be a context and Γ ′ be a context obtained by removing some constraints relating identical terms. If Γ is well-formed then Γ ′ is well-formed. If Γ ⊢ t : T then Γ ′ ⊢ t : T .
Proof By induction on the length of the derivation of Γ well-formed or Γ ⊢ t : T .
Proposition 21 Let ∆ and γ be two contexts such that ∆γ is a success context and γ is an existential context. Then γ is a list of constraints relating terms well-typed without using the constraints and whose normal forms are identical.
Proof By induction on the length of γ.
splitting algorithm this is found in both cases, but the price to pay is inefficiency, with the weak splitting this is found when the type of the existential variable is A → B but not when it is B. Roughly speaking, this means that when we must use a proposition with a quantifier on a proposition or a predicate (for instance an induction axiom) the weak splitting algorithm finds the proposition to be used only when it can be seen in the type of the existential variable, and not when the proof requires an induction loading [START_REF] Helmink | Resolution and Type Theory[END_REF] (as here from B to A → B). This weak splitting algorithm is quite similar to the introduction-resolution method [START_REF] Helmink | Resolution and Type Theory[END_REF] for some unification algorithm. Actually some proofs that cannot be synthesized by introductionresolution can be synthesized with weak splitting. For instance in a context that contains the universal variables u : (P : P rop)(P → A) and v : (B → C) the proof (u (B → C) [x : B](v x)) : A is synthesized by the weak-splitting algorithm and not by the introduction-resolution method. To get exactly the introduction-resolution method we would have to forbid completely product substitutions.
This method with weak splitting is incomplete but its transitive closure is complete [START_REF] Dowek | Démonstration Automatique dans le Calcul des Constructions[END_REF]. This means that even if it cannot be synthesized, each proof can be broken up in smaller proofs that can be synthesized.
Unification 10.1 Ground Unification
The unification problem is to decide if given a context Γ (with universal and existential variables but no constraints) and two terms a and b well-typed in Γ with the same type, there exists a substitution θ well-typed in Γ, such that for every variable x bound by θ, the context associated to x by θ contains no constraints and such that θa = θb. In first order unification and higher order unification we do not require θ to fill all the existential variables of Γ, moreover the substitution θ may introduce new existential variables. This tolerance is due to the fact that in first order and higher order logic, types are all supposed to be inhabited, so from any unifier we can deduce a ground unifier by filling the unfilled variables with arbitrary terms. So unifiability is equivalent to ground unifiability. This is not the case any more in type systems where we have empty types. In unification in simply typed λ-calculus under a quantified context [START_REF] Miller | Unification Under a Mixed Prefix[END_REF] types may also be empty and unifiability is not equivalent to ground unifiability. The notion of unifiability considered in [START_REF] Miller | Unification Under a Mixed Prefix[END_REF] is the one of ground unifiability.
A semi-algorithm for deciding ground unifiability is to apply the method developed in this paper to the context Γ[a = b]. Moreover this algorithm enumerates all the ground unifiers.
In contrast with what happens in simply typed λ-calculus, we have to solve flexible-flexible equations. Indeed these equations may have no solution since the head variables of the terms may have empty types. Moreover it is proved in [START_REF] Miller | Unification Under a Mixed Prefix[END_REF] that the problem of deciding the existence of solutions for flexible-flexible unification problems under a quantified context is undecidable in simply typed λ-calculus. This proof generalizes easily to type systems.
Toward Open Unification
The utility of an open unification algorithm is not obvious, since in a proof-search algorithm (or more generally in an algorithm that uses unification) existential variables are introduced to be filled-in and not to remain forever.
If we still need such an algorithm we may remark that flexible-flexible equations have to be solved since, in contrast with what happens in simply typed λ-calculus, they may have no solution. Consider [∀A : P rop; ∀B : P rop; ∀u : A; ∃x : (P : P rop)P ] with a = (x (A → B) u) and b = (x B). The equation a = b is flexible-flexible, but there is no solution to this problem. Indeed, let us suppose there is one and consider t = θx. We have t : (P : P rop)P so t = [P : P rop]t ′ with t ′ : P . So t ′ is neither an abstraction nor a product. It is an atomic term (f c 1 ... c n ). If f were the variable P we would have n = 0 and thus t ′ = P which is impossible for type reasons, so f ̸ = P . Since (t ′ [P ← (A → B)] u) = t ′ [P ← B] we have (f c 1 [P ← (A → B)] ... c n [P ← (A → B)] u) = (f c 1 [P ← B] ... c n [P ← B]).
It may be possible to recognize flexible-flexible equations that have solutions and those that do not have any and keep the former unsolved until the end of the process as we do in unification in simply typed λ-calculus. In this case the normalization of terms well-typed using such constraints is not obvious, we may have to perform one normalization step as an elementary operation of the algorithm. Also some product substitutions have to be performed when the types of the variables to be instantiated in flexible-rigid and flexible-flexible equations has an existential head variable.
In contrast with ground unifiability, open unifiability seems a quite difficult problem with small interest.
Conclusion
In this paper we have described a semi-decision procedure for type systems. In fact, since proof checking is decidable in these systems, the existence of such a procedure is obvious: we just need to enumerate all the lists of characters until we get a proof of the proposition to be proved. Of course this method (which is sometimes used to prove the semi-decidability of predicate calculus) is of no practical interest. We can make it more realistic by enumerating not all the lists of characters but only the normal λ-terms (or proof-trees); this method is comparable to the methods of the early sixties based on the search for a Herbrand counter-model.
Resolution excludes more hopeless attempts by remarking that in a premise ∀x.(P x), we need to instantiate x by t only if (P t) is an instance of the proposition to be proved or a hypothesis of another premise. This method, formulated by Robinson [START_REF] Robinson | A Machine-Oriented Logic Based on the Resolution Principle[END_REF] for first order logic and by Huet [START_REF] Huet | Constrained Resolution A Complete Method for Higher Order Logic[END_REF] for higher order logic, is generalized here to type systems. The notion of proof-term that appears in these systems simplifies the method and makes more explicit the idea of blind enumeration of proof-terms regulated by a failure anticipation mechanism exploiting type constraints.
The rule with premises A → B and A and conclusion B combines two proofs (one of A → B and one of A) to get a proof of B. The rule ∀-elim, with premises ∀x : A.B and t : A and conclusion B[x ← t], combines a proof (of ∀x : A.B) and a term (of type A) to get a proof of B[x ← t]. So proofs are heterogeneous trees. For instance, in the proof of (Q [x : T ]x) obtained from ∀f : T → T.((P f ) → (Q f )), the derivation of [x : T ]x : T → T and a proof of (P [x : T ]x), the subtree deriving [x : T ]x : T → T is a term derivation.
t = [x 1 : P 1 ]...[x n : P n ](w c 1 ... c p ). To mark the dependency of the c i on the x j we write t = [x 1 : P 1 ]...[x n : P n ](w (d 1 x 1 ... x n ) ... (d p x 1 ... x n )). To find the d i we use recursively the same algorithm. So we first take t = [x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h p x 1 ... x n ))
Acknowledgements
The author thanks Gérard Huet who has supervised this work and Amy Felty, Herman Geuvers, Christine Paulin and the anonymous referees for many helpful comments and criticisms.
This research was partly supported by ESPRIT Basic Research Action "Logical Frameworks".
Proposition 17The context ∆γ is well-formed in the M eta type system. Proof By induction on the length of γ, we check that the types of the existential variables and the constraints of γ are well-typed in the M eta type system.
Since (x 1 : P 1 )...(x n : P n )P is well-typed in the M eta type system, the terms P i are well-typed in the M eta type system of type P rop or T ype but not Extern. The term (y 1 : Q ′ 1 )...(y q : Q ′ q )Q ′ is either the type of a universal variable or one of the P i , its type is therefore P rop or T ype but not Extern. So the types of the Q ′ i and of Q ′ are P rop or T ype but not Extern. So the terms Q i and Q are well-typed in the M eta type system. The terms (H i x 1 ... x n ) and (h x 1 ... x n ) are well-typed in the M eta type system and have the type s with < s, s ′ , s ′′ >∈ R for some s ′ and s ′′ . So s is equal to P rop or T ype, but not to Extern.
So the terms (⃗ x : ⃗ P )(Q i (h 1 x 1 ... x n ) ... (h i-1 x 1 ... x n )), (⃗ x : ⃗ P )s i , (⃗ x : ⃗ P )(z : (H i x 1 ... x n ))s ′ i , (⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n )), (⃗ x : ⃗ P )(K i-1 x 1 ... x n (h q+i-1 x 1 ... x n )), (⃗ x : ⃗ P )(z : (H i x 1 ... x n ))(K i x 1 ... x n z), (⃗ x : ⃗ P )(H i x 1 ... x n ), (⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n )), (⃗ x : ⃗ P )(K r x 1 ... x n (h q+r x 1 ... x n )), (⃗ x : ⃗ P )P , (⃗ x : ⃗ P )s and (⃗ x : ⃗ P )(y : (h x 1 ... x n ))s ′ are well-typed in the M eta type system and have the type P rop, T ype or Extern. Proposition 18 In the M eta type system we have ∆γ ⊢ t : T . Proof For the substitution with 0-splitting we have
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) : (⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n ))
then using the constraint ψ
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) : (⃗ x : ⃗ P )P
For the substitution with r-splitting (r ≥ 1), we prove by induction on i that in ∆γ, for all i ≥ 1
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+i x 1 ... x n )) : (⃗ x : ⃗ P )(K i x 1 ... x n (h q+i x 1 ... x n ))
For i = 1 we have
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) : (⃗ x : ⃗ P )(Q (h 1 x 1 ... x n ) ... (h q x 1 ... x n ))
and using the constraint of χ 1 we deduce
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) : (⃗ x : ⃗ P )(z : (H 1 x 1 ... x n ))(K 1 x 1 ... x n z)
so in the context ∆γ[∀x 1 : P 1 ; ...; ∀x n : P n ] we have
(w (h 1 x 1 ... x n ) ... (h q x 1 ... x n )) : (z : (H 1 x 1 ... x n ))(K 1 x 1 ... x n z)
and since
h q+1 : (⃗ x : ⃗ P )(H 1 x 1 ... x n )
we get, in the context ∆γ,
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+1 x 1 ... x n )) : (⃗ x : ⃗ P )(K 1 x 1 ... x n (h q+1 x 1 ... x n ))
Then if we assume this for i we have
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+i x 1 ... x n )) : (⃗ x : ⃗ P )(K i x 1 ... x n (h q+i x 1 ... x n ))
and using the constraint of χ i+1 we deduce
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+i x 1 ... x n )) : (⃗ x : ⃗ P )(z : (H i+1 x 1 ... x n ))(K i+1 x 1 ... x n z)
so in the context ∆γ[∀x 1 : P 1 ; ...; ∀x n : P n ] we have
(w (h 1 x 1 ... x n ) ... (h q+i x 1 ... x n )) : (z : (H i+1 x 1 ... x n ))(K i+1 x 1 ... x n z)
and since
h q+i+1 : (⃗ x : ⃗ P )(H i+1 x 1 ... x n )
we get
(w (h 1 x 1 ... x n ) ... (h q+i+1 x 1 ... x n )) : (K i+1 x 1 ... x n (h q+i+1 x 1 ... x n ))
so in the context ∆γ
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+i+1 x 1 ... x n )) : (⃗ x : ⃗ P )(K i+1 x 1 ... x n (h q+i+1 x 1 ... x n ))
So we deduce
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+r x 1 ... x n )) : (⃗ x : ⃗ P )(K r x 1 ... x n (h q+r x 1 ... x n ))
then using the last constraint (ψ)
[x 1 : P 1 ]...[x n : P n ](w (h 1 x 1 ... x n ) ... (h q+r x 1 ... x n )) : (⃗ x : ⃗ P )P
In the same way, if σ = x ← γ, t with t = [x 1 : P 1 ]...[x n : P n ]u where u is a sort or a product then the term t also is well-typed and also has type T in the context ∆γ in the M eta type system.
Proposition 19 The substitution σ is well-typed in the context Γ in the M eta type system. Proof Let us write ∆ ′ = [e 1 ; ...; e n ]. We prove by induction on i that the substitution σ is well-typed in ∆[∃x : T ][e 1 ; ...; e i ] in the M eta type system. We have σ(∆[∃x : T ]) = ∆γ and σT = T , so the context σ(∆[∃x : T ]) is well-formed in the M eta type system and we have σ(∆[∃x : T ]) ⊢ t : σT . Obviously σ binds no universal variables of ∆[∃x : T ]. So σ is well-typed in ∆[∃x : T ]. Let us assume now that the substitution σ is well-typed in ∆[∃x : T ][e 1 ; ...; e i ]. The item e i+1 is either the declaration of a variable Qy : P or a constraint a = b. In the first case we have ∆[∃x : T ][e 1 ; ...; e i ] ⊢ P : s for some sort s, and in the second ∆[∃x : T ][e 1 ; ...; e i ] ⊢ a : U and ∆[∃x : T ][e 1 ; ...; e i ] ⊢ b : U for some U . So since the substitution σ is well-typed in ∆[∃x : T ][e 1 ; ...; e i ] we have in the first case σ(∆[∃x : T ][e 1 ; ...; e i ]) ⊢ σP : s and in the second σ(∆[∃x : T ][e 1 ; ...; e i ]) ⊢ σa : σU and σ(∆[∃x : T ][e 1 ; ...; e i ]) ⊢ σb : σU . So the context σ(∆[∃x : T ][e 1 ; ...; e i+1 ]) = σ(∆[∃x : T ][e 1 ; ...; e i ])[σe i+1 ] is well-formed. Then since we have ∆γ ⊢ t : T and σ obviously binds no universal variables of ∆[∃x : T ][e 1 ; ...; e i+1 ], the substitution σ is well-typed in ∆[∃x : T ][e 1 ; ...; e i+1 ] in the M eta type system.
Proposition 22 Let Γ be a well-formed context and θ a solution to Γ. Let θ ′ be the normal form of θ. Then θ ′ is a normal solution to Γ.
Proof By induction on the length of Γ.
Lemma 1 Let Γ be a context, if there exists a derivation [σ 1 ; ...; σ m ] of Γ, then there exists a normal solution to Γ.
Proof The substitution σ m • ... • σ 1 , is a solution to Γ. Let θ be its normal form.
The substitution θ is called the substitution denoted by the derivation [σ 1 ; ...; σ m ].
Now we prove that the substitution θ which is well-typed in the M eta type system is also well-typed in the original system T .
Proposition 23 Let Γ be a context well-formed in a type system T and T be either a term welltyped in Γ in the system T or the symbol T ype. Let t be a normal η-long term such that Γ ⊢ t : T in the M eta type system such that for every subterm of t which is a product (x : U )U ′ if we let s be the type of U , s ′ be the type of U ′ and s ′′ be the type of (x : U )U ′ , we have < s, s ′ , s ′′ >∈ R (the set of rules of the system T ) and for every subterm of t which is a sort s we have s = P rop (and not T ype), then we have Γ ⊢ t : T in the system T . Proof By induction over the structure of t.
U ] in the type system T and by induction hypothesis Γ[∀x : U ] ⊢ u ′ : U ′ in the system T . So Γ ⊢ t : T in the system T .
• If t = (x : U )U ′ then by induction hypothesis U and U ′ are well-typed in the system T and since the rule < s, s ′ , s ′′ > is a rule of the system T , we have Γ ⊢ t : T in the system T .
• If t = (x c 1 ... c n ) with x variable declared in Γ then x is well-typed in Γ in the system T and we prove by induction on i that the type of c i is well-typed in the system T and c i is well-typed in the system T and we conclude that Γ ⊢ t : T in the system T .
• If t is a sort then by hypothesis t = P rop, it is well-typed in the system T .
Proposition 24 Let Γ be a well-formed context in the system T , [σ 1 ; ...; σ m ] be a derivation of Γ and θ the substitution denoted by this derivation. Let x be an existential variable of Γ and t = θx. For every subterm of t which is a product (x : U )U ′ if we let s be the type of U , s ′ be the type of U ′ and s ′′ be the type of (x : U )U ′ , we have < s, s ′ , s ′′ >∈ R (the set of rules of the system T ) and for every subterm of t which is a sort s we have s = P rop (and not T ype), Proof By induction on m.
Proposition 25 Let Γ be a context well-formed in the system T , [σ 1 ; ...; σ m ] a derivation of Γ and θ the substitution denoted by this derivation. The substitution θ is well-typed in Γ in the system T .
Proof By induction on the length of Γ.
Theorem 1 Soundness
Let Γ be a (non constrained, non quantified) context and P a well-typed type in Γ. Let Γ ′ = Γ[∃x : P ]. If there exists a derivation of Γ ′ then there exists a proof t of P in Γ in T . Proof Let θ be the substitution denoted by the derivation of Γ ′ . The substitution θ is well-typed in Γ ′ in the system T and is a normal solution to Γ ′ . Let t = θx. Since θ is normal we have θΓ ′ = θΓ. We have θΓ ′ = θΓ, θΓ = Γ, θP = P and θΓ ′ ⊢ θx : θP so Γ ⊢ t : P . The proof t is called the proof denoted by the derivation.
Completeness
Definition 33 The Relation < Let < be the smallest transitive relation defined on normal η-long terms such that
As proved in [START_REF] Dowek | A Second Order Pattern Matching Algorithm in the Cube of Typed λ-Calculi[END_REF] [START_REF] Dowek | Démonstration Automatique dans le Calcul des Constructions[END_REF], this relation is well-founded, which means that a function that recurses both on a strict subterm and on the type of its argument is total.
Definition 34 Size of a Term
Let Γ be a context and t a term well-typed in Γ. Let T be the normal η-long form of the type of t in Γ. We define by induction over <, the size of t Γ (|t Γ |)
Definition 35 Size of a Substitution
The size of a substitution θ = {< x i , γ i , t i >} is the sum of the sizes of the t i .
Proposition 26 If Γ is a failure context, then for every substitution σ, σΓ is also a failure context and therefore is not a success context.
Proof Let a = b be a constraint of Γ relating two ground different terms, then (σa = σb) is (a = b) which is a constraint relating two well-typed ground different terms.
Lemma 2 Let Γ be a well-formed context and θ a normal solution to Γ, then there exists a derivation of Γ which denotes θ.
Proof By induction on the size of θ. The context Γ is not a failure context because there exists a substitution θ such that θΓ is a success context. If it is a success context then [ ] is a derivation of Γ. Otherwise let us chose an existential variable x with a normal η-long type (x 1 : P 1 )...(x n : P n )P such that P is atomic rigid. Let t = θx. Since θ is well-typed in Γ, θΓ ⊢ t : (x 1 : θP 1 )...(x n : θP n )θP . Since the term P is atomic rigid, the term θP is atomic. Then t = [x 1 : θP 1 ]...[x n : θP n ]u with u atomic or a product (η-long form).
• If u = (w u 1 ... u p ) then let q the number of products of the type of w and r = p -q. Since the type of u is atomic we have p ≥ q, i.e. r ≥ 0. We let
with γ defined in the algorithm. Then we build the terms to be substituted to the variables h i (1 ≤ i ≤ p) and H i and K i (1 ≤ i ≤ r). We let
..[x n : P n ]u i And for all 1 ≤ i ≤ r, (w u 1 ... u q+i-1 ) has a type which is a product, let (y : U i )V i be this product. We let
Let us prove that the substitution θ ′ is smaller than θ. We have
with γ defined in the algorithm. Then we build the terms to be substituted to the variables h and k. We let
We have θ = θ ′ • σ. Let us prove that the substitution θ ′ is smaller than θ. We have
In both cases, by induction hypothesis, there exists a derivation D of σΓ that denotes θ ′ and [σ]D is a derivation of Γ that denotes θ.
Theorem 2 Completeness
Let Γ be a (non constrained, non quantified) context and P a type well-typed in Γ such that there exists a term t such that Γ ⊢ t : P then there exists a derivation of Γ ′ = Γ[∃x : P ] which denotes the normal η-long form of the proof t. Proof Let t ′ be the normal η-long form of t, and θ = {< x, [ ], t ′ >}. The substitution θ is a normal solution to Γ ′ , so there exists a derivation of Γ ′ that denotes θ.
Efficiency Improvements
To improve the efficiency of this method we have to recognize nodes that cannot lead to success nodes and prune the search tree.
Incremental Partial Checking of Constraints
First we must perform incremental partial checking of the constraints. So we define a function of simplification, which is very similar to the SIMPL function from [START_REF] Huet | A Unification Algorithm for Typed λ-calculus[END_REF] [START_REF] Huet | Résolution d' Équations dans les Langages d[END_REF]. In order to define this function we have to modify a little the syntax of constraints: a constraint is now a triple < δ, a, b > where δ is a context with universal variables only. If Γ = ∆[< δ, a, b >]∆ ′ then the terms a and b must be well-typed and have the same type in ∆δ.
Definition 36 Simplification
We consider only constraints well-typed without the constraints. While there are rigid-rigid constraints in Γ and the simplification has not failed we iterate the process of replacing Γ by Γ ′ as follows, for a rigid-rigid constraint < δ, a, b > of Γ:
• if a and b are both atomic with the same head variable and they have the same number of arguments, a = (w u 1 ... u p ), b = (w t 1 ... t p ), we let Γ ′ be the context obtained by replacing this constraint by the constraints < δ, u 1 , t 1 >, ..., < δ, u p , t p >,
• in the other cases the simplification fails.
If the simplification fails then the branch is hopeless, it can be pruned.
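One simplification step on a rigid-rigid constraint between atomic terms can be sketched as follows, in the same illustrative style as the earlier sketches (spine returns the head and the argument list of an atomic term; the function only covers the two cases stated above):

  exception Simplification_failure

  let rec spine t =
    match t with
    | App (u, v) -> let (h, args) = spine u in (h, args @ [v])
    | _ -> (t, [])

  (* Same head and same number of arguments: produce the argument-wise
     constraints u_i = t_i; anything else is a failure. *)
  let simplify_rigid_rigid (a : term) (b : term) : (term * term) list =
    let (ha, args_a) = spine a and (hb, args_b) = spine b in
    if ha = hb && List.length args_a = List.length args_b
    then List.combine args_a args_b
    else raise Simplification_failure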
Using the Flexible-Rigid Constraints
When we have a well-typed flexible-rigid constraint, and we can solve the head variable of the flexible term (i.e. when the head of its type is rigid) we must solve it first, the only candidates for the head variable in an elementary substitution are the bound variables and the head of the rigid term [21] [22]. The other variables lead to unsolvable rigid-rigid constraints.
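This restriction of the candidate heads is easy to state with the sketches used so far (bound_variables stands for the variables bound in the type of the flexible head, a name chosen here for illustration; head is the helper defined earlier):

  let candidate_heads (bound_variables : string list) (rigid : term) : term list =
    List.map (fun x -> Var x) bound_variables @ [ head rigid ]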
Using the Flexible-Flexible Constraints
When we have a well-typed flexible-flexible constraint and we can solve one of the head variables, we must solve it first, the constraint does not help to restrict the number of substitutions, but it may become a flexible-rigid constraint the next step. Moreover solving constraints helps to unfreeze types of existential variables and constraints that are not well-typed without the constraints.
Avoiding Splitting
The term (Q (h 1 x 1 ... x n )...(h q x 1 ... x n )) in the definition of the method is atomic; if it is rigid then all the solutions with r-splitting r ̸ = 0 lead to hopeless constraints and therefore they can be avoided. In particular in the λΠ-calculus, we never consider existential variables for types or predicates and so splitting can always be avoided.
Solving some Decidable Unification Problems
When we have a constraint well-typed without the constraints which is either a first order unification problem [START_REF] Robinson | A Machine-Oriented Logic Based on the Resolution Principle[END_REF], an argument-restricted unification problem [START_REF] Miller | Unification Under a Mixed Prefix[END_REF] [27], a second order matching problem [START_REF] Huet | Résolution d' Équations dans les Langages d[END_REF] [24] [8] [START_REF] Dowek | Démonstration Automatique dans le Calcul des Constructions[END_REF], a second-order-argument-restricted matching problem [START_REF] Dowek | A Second Order Pattern Matching Algorithm in the Cube of Typed λ-Calculi[END_REF] [9] then we can solve it using always terminating algorithms and apply the substitutions obtained in this way. In a system with such heuristics, the elimination of a first-order universal quantifier is a one-step operation and the elimination of a higher-order universal quantifier is a more complex operation that needs several steps and can lead to backtracking.
Priority to the Rightmost Variable
When we have two existential variables x and y such that y is declared on the right of x and x has an occurrence in the type of y, if both variables may be instantiated then we have to begin with the rightmost. For instance if we have an axiom u : (P n) and we search an integer x such that (P x). We have two existential variables: x : N at and y : (P x). If we instantiate y by u then x will be automatically instantiated by n. But if we begin by instantiating x, we will instantiate x by 0, 1, 2, etc. and fail to prove (P 0), (P 1), (P 2), etc. before we reach x = n.
Normalizing some ill-typed terms
An important improvement of the method would be to recognize some normalizable ill-typed terms and then be able to use the constraints more efficiently. For instance it is possible to prove that if t is a normal η-long term and σ an elementary substitution x ← [x 1 : P 1 ]...[x n : P n ]u where u is a product or an atomic term with a head variable which is a universal variable or a sort (but not one of the x i ), then σt is normalizable.
Extension to The Calculus of Constructions With Universes
Definition 37 The Calculus of Constructions With Universes and Without Cumulativity The Calculus of Constructions With Universes and Without Cumulativity is a pure type system which has an infinite number of sorts : P rop, T ype 0 (= T ype), T ype 1 , T ype 2 , T ype 3 , ..., the axioms P rop : T ype 0 and T ype i : T ype i+1 and the rules < P rop, P rop, P rop >, < T ype i , P rop, P rop >, < P rop, T ype i , T ype i >, < T ype i , T ype j , T ype max{i,j} >
The strong normalization and confluence of βη-reduction for this system are not yet fully proved. But since all the terms typable in this system are typable in the Calculus of Constructions with Universes [START_REF] Th | An analysis of Girard's paradox[END_REF] [25], the β-reduction is strongly normalizable and confluent [3] [25]. It is conjectured in [START_REF] Geuvers | The Church-Rosser Property for βη-reduction in Typed Lambda Calculi[END_REF] that in a pure type system in which the β-reduction is strongly normalizable the βη-reduction also is. Then assuming strong normalization, confluence is proved in [START_REF] Geuvers | The Church-Rosser Property for βη-reduction in Typed Lambda Calculi[END_REF] [30] [START_REF] Th | An Algorithm for Testing Conversion in Type Theory, Logical Frameworks I[END_REF].
In an extension of the method presented here to the Calculus of Constructions With Universes and Without Cumulativity we do not need a M eta type system any more because all the terms we consider in the algorithm in this system are well-typed in this system. More generally, the formulation of this method for an arbitrary normalizable pure type system T seems to require only the definition of a M eta type system for T .
In a system which has a large number of sorts, there may be a large number of substitutions of the form σ = x ← γ, t with t = [x 1 : P 1 ]...[x n : P n ]u where u is a sort or a product. For instance, in the Calculus of Constructions With Universes and Without Cumulativity, we can instantiate a variable x : P rop by an infinite number of terms (y : h)(k y) with h : T ype(i) and k : (y : h)P rop. In order to avoid more inefficiencies it seems possible to use sort variables and constraints on these variables as in the method of floating universes [17] [23].
An Incomplete but more Efficient Method
In the example in section 5.2, in order to get the proof (u (A → B) v w) : B we have considered the elementary substitution with 1-splitting p ← (u h 1 h 2 h 3 ) with three applications although the type of u begins by only two products.
Because we have to consider all these possible degrees of splitting, the method is quite inefficient, we get a much more efficient one if we restrict the rule to elementary substitutions with 0-splitting and a number of abstractions ranging from 0 to the number of products of the type of the existential variable we substitute. Let us call this algorithm algorithm with weak splitting (by opposition the previous one can be called algorithm with strong splitting). The search tree of the weak splitting algorithm is finitely branching.
This algorithm with weak splitting cannot synthesize a proof of B, but it can synthesize a proof of A → B. Indeed with the variable p ′ : A → B we perform the substitution
with the types h 1 : P rop, h 2 : (I h 1 ) (remark that we have no abstraction although the type of p ′ begins by a product). The constraint h 1 = A → B suggests the substitution
then we have an existential variable h 2 : (I (A → B)) we perform the substitution
The synthesized proof is (u (A → B) v) : (A → B). Remark that this term is not in η-long form, this form is [w : A](u (A → B) v w) : (A → B).
In both proofs considered above (of B and A → B) the problem is to remark that the variable u of type (P : P rop)((I P ) → P ) must be applied to the proposition (A → B). With the strong |
04115390 | en | [
"sdu"
] | 2024/03/04 16:41:26 | 2010 | https://hal.science/hal-04115390/file/EPSC2010-410.pdf | Jean-Loup Bertaux
A revised UV albedo spectrum of Phobos
We present a revised spectral albedo of PHOBOS, collected in the UV (190-310 nm) with SPICAM during several encounters of Mars Express with Phobos.
Introduction
The SPICAM instrument on board Mars Express was used at almost all encounters of Mars Express with Phobos, in order to retrieve the albedo spectrum and try to determine the composition of its soil. Early results in the UV range (180-310 nm) were reported at DPS in 2004 by Perrier et al. (2004). They showed no sign of variation on Phobos, a low albedo, and three broad absorption spectral features at 210-240 nm, 255 nm and 300 nm.
Reason for revision
However, this early analysis was based on a calibration of SPICAM using observations of the star Delta Scorpii in 2004, and based on its IUE absolute flux. We realized that this star is not appropriate, since it became highly variable since 2001. Therefore, new calibrations were performed on other stars in 2005 and 2009. They show no variation between these dates, but the absolute calibration level and shape is somewhat different.
Results
As a result, the revised albedo spectrum of Phobos is higher by a factor of about two (at ∼ 3%) in the range 200-300 nm; the spectral features have decreased significantly. There is still a significant absorption at 210-240 nm, similar to the well known interstellar dust absorption peak, which has been assigned by some authors to PAH molecules.
04121620 | en | [
"info.info-ni"
] | 2024/03/04 16:41:26 | 2023 | https://imt-atlantique.hal.science/hal-04121620/file/Companion%20article%20Berrou.pdf | The invention of turbo codes Claude Berrou [email protected]
The institution where I spent my entire career was called, at its creation in 1977, École Nationale Supérieure des Télécommunications de Bretagne. Today the name is IMT Atlantique. I was recruited as a lecturer as soon as this institution was founded and for about ten years, I devoted myself entirely to teaching semiconductor physics, digital and analog integrated circuits, and microwaves. It was exciting and research did not seem necessary to me, even if of course constant technology survey was indispensable.
In 1988, a colleague I did not know very well asked me for a meeting. He was Alain Glavieux, a recognized expert in digital communications who was working at that time on the improvement of an underwater acoustic video transmission system. This system had already been used to observe the wreck of the Titanic, but its quality needed to be improved.
Alain explained to me in detail what he was proposing to study. The surface receiver contained an equalizer followed by an error-correcting decoder, both functions based on the Viterbi algorithm. But the equalizer could only provide the decoder with binary decisions. I was therefore asked to imagine an equalization circuit, as simple as possible, able to provide the decoder with weighted decisions. These probabilistic data would increase the correction power of the decoder and a gain of about 2 dB could be expected in the link budget.
At that time, the subject was topical and two publications, one by Gérard Battail [START_REF] Battail | Pondération des symboles décodés par l'algorithme de Viterbi[END_REF] and the other by Joachim Hagenauer and Peter Hoeher [START_REF] Hagenauer | A Viterbi Algorithm with Soft-Decision Outputs and its Applications[END_REF] were precious starting points for me. I knew the Viterbi algorithm because I had supervised students in the design of a gate-array integrated circuit on this subject. However, my expertise was still limited.
So here I was, spending days in convolutional code lattices, in the company of probability logarithms and under the double injunction of good performance and simplicity. This work was certainly the most intense of all that I undertook. A convincing result was finally obtained, with the help of my colleague Patrick Adde, and was presented at ICC' 1993 [START_REF] Berrou | A low complexity soft-output Viterbi decoder architecture[END_REF].
This was also the period when some experts were questioning the validity of the limits established by Claude Shannon or, at least, the reasons why they were inaccessible. The best performing code that had been imagined until then was a concatenated code: a Reed-Solomon encoder followed by a convolutional encoder. About three decibels kept it away from the theoretical limit.
Others [4], before me, had observed that the corresponding decoder was suffering from a strong asymmetry. The outer decoder (according to the Berlekamp-Massey algorithm for example) can benefit not only from the redundancy produced by the Reed-Solomon code but also from the work of the inner decoder, and thus indirectly from the redundancy produced by the convolutional code. The reverse is not true: the inner decoder does not benefit from the outer decoder's power of correction.
It seemed to me that this asymmetry could be easily corrected if the Reed-Solomon code was replaced by a second convolutional code. The weighted output Viterbi decoder was now available to implement a near-optimal feedback. A few days were enough to validate this idea and the rest went very quickly: introduction of extrinsic information (the concept had already been introduced in [START_REF] Gallager | Low-density parity-check codes[END_REF], a publication I did not know), invention of recursive systematic convolutional (RSC) codes, replacement of serial concatenation by parallel concatenation. On this last point, the motivation was linked to my intention to concretize this new coding/decoding scheme by an integrated circuit which would be designed in part by my students. Parallel concatenation simplifies considerably the specifications in that only one clock is needed to drive all the elements of the circuit, as opposed to two decoders that work with distinct rates.
Meanwhile, Alain Glavieux and I had become very friendly. One of his doctoral students, Punya Thitimajshima, a Thai national, started to study recursive systematic convolutional codes and made it his main thesis topic. He was also interested in an algorithm that I did not know: the maximum a posteriori (MAP) algorithm [START_REF] Bahl | Optimal decoding of linear codes for minimizing symbol error rate (Corresp.)[END_REF]. Alain and his PhD student pointed out to me that what separated the performance of my new coding/decoding scheme from Shannon's limits could be reduced by replacing the output-weighted Viterbi algorithm with this MAP algorithm. I therefore got interested in it, without being really convinced by the possibility of making a version that could be implemented on silicon. Thus, the patent filed in 1991 on the parallel concatenation of RSC codes and its iterative decoding mentioned the Viterbi decoding with weighted output while the publication, two years later and under the name of turbo code, was based on the MAP algorithm. I salute the memory of Alain and Punya, who were taken from us too soon.
Then came the time of invited conferences and awards. This gave me the opportunity to meet a lot of colleagues, most of whom I was discovering as I was taking my first steps in this vast international coding and communications community. Scientific research is not practiced in a world without relief, fortunately. The diversity of approaches and actors is to be respected and promoted, whether they are experimenters like Marconi or Tesla, or theorists like Poincaré or Einstein. Let us keep in mind what the latter of these great scientists had observed and of which I became convinced: "We can't solve problems by using the same kind of thinking we used when we created them". In this state of mind, interdisciplinarity is a valuable path, albeit a somewhat perilous one, because it is not always fully appreciated by expert committees. |
04121642 | en | [
"shs.geo"
] | 2024/03/04 16:41:26 | 2021 | https://hal.science/hal-04121642/file/Poster_EnzoLANA_DSSE2021.pdf | Les résidus pharmaceutiques dans la nappe alluviale du Gave de Pau : de l'état des lieux à la gouvernance de la contamination pour garantir une alimentation en eau potable de qualité
Enzo Lana
To cite this version:
Enzo Lana. Les résidus pharmaceutiques dans la nappe alluviale du Gave de Pau : de l'état des lieux à la gouvernance de la contamination pour garantir une alimentation en eau potable de qualité.
Introduction
- A problematic situation today: micropollutants are responsible for the disappearance of roughly one aquatic species every ten years, and their effects on human health are still poorly understood (Synteau; INRAE, 2020).
- A diffuse form of pollution, since every living being that consumes medicines releases metabolites into the natural environment through faeces and urine.
Supervision: Sylvie Clarimont, Professor of Geography, Université de Pau et des Pays de l'Adour, UMR 6031 TREE. PhD student: Enzo Lana, Université de Pau et des Pays de l'Adour, UMR 6031 TREE.
The methodology is mixed (quantitative and qualitative) and exploratory and iterative: M1 and M2 student workshops, two internships and a doctoral thesis organise back-and-forth between the field and the university.
The REMANAP project
Approaches and objectives
REMANAP is an interdisciplinary research programme bringing together chemists, geographers, economists and lawyers. It combines three approaches:
- Analytical approach: characterise the contamination and the origins of the pollution.
- Territorial approach: actors, modes of governance, degree of awareness of elected officials, water managers, consumers and health professionals; grasp the representations of water and of medicines.
- Comparative approach: the Swiss case and the Swedish case, to provide a counterpoint and identify levers for action.
Hybrid methodology
Presentation of the study area: the territory of the Plan d'Action Territorial du Gave de Pau, created in 2008, has 174 000 inhabitants and 1 100 farmers. It is made up of 50 communes and 5 drinking-water production syndicates, lies close to the Lacq industrial basin, and already faces a problem of agricultural pollution of the resource.
Timeline (2nd semester 2020 to 2nd semester 2021):
- 3-month internship: construction and administration of a test questionnaire; Master 1 DAST diagnostic workshop; distribution of an online Sphinx questionnaire over the territory of the CA Pau Béarn Pyrénées.
- 3-month internship: territorial diagnosis and health issues.
- E. Lana's thesis: state of the art, press review, exploratory semi-structured interviews, construction of interview grids based on the first results of the quantitative survey, first theoretical framing, comparative approach.
Doctoriales Sciences Sociales de l'eau 2021, Sep 2021, Chateauroux, France. hal-04121642
- In France, checks for the presence of pharmaceutical residues in the water resource are limited, and they do not make it possible to establish, spatially, the duration of the impact or the danger to human health.
- Approaches from the social sciences are still fairly rare, especially in medium-sized cities.
- An original stance of the thesis: through the analysis of practices and of the territorial context, to better grasp the representations of water and of medicines.
04121649 | en | [
"shs"
] | 2024/03/04 16:41:26 | 2021 | https://shs.hal.science/halshs-04121649/file/Ode%20to%20an%20Empty%20Plinth%20%E2%80%93%20ISSUE.pdf | teaching and research institutions in France or abroad, or from public or private research centers. |
04059523 | en | [
"info.info-cy"
] | 2024/03/04 16:41:26 | 2023 | https://hal.science/hal-04059523/file/main.pdf | Gaël Guennebaud
email: [email protected]
Aurélie Bugeau
email: [email protected]
Antoine Dudouit
Assessing VoD pressure on network power consumption
Keywords: Access network, Internet power consumption, video streaming, peak usage
Assessing the energy consumption or carbon footprint of data distribution of video streaming services is usually carried out through energy or carbon intensity figures (in Wh or gCO2e per GB). In this paper, we first review the reasons why such approaches are likely to lead to misunderstandings and potentially to erroneous conclusions. To overcome those shortcomings, we propose a new methodology whose key idea is to consider a video streaming usage at the whole scale of a territory, and evaluate the impact of this usage on the network infrastructure. At the core of our methodology is a parametric model of a simplified network and Content Delivery Network (CDN) infrastructure, which is automatically scaled according to peak usage needs. This allows us to compare the power consumption of this infrastructure under different scenarios, ranging from a sober baseline to a generalized use of high bitrate videos. Our results show that classical efficiency indicators do not reflect the power consumption increase of more intensive Internet usage, and might even lead to misleading conclusions.
I. INTRODUCTION
Internet traffic has grown exponentially in the last decade. Cisco [START_REF] Cisco | Cisco visual networking index: Forecast and trends, 2017-2022[END_REF] projected an increase of IP traffic from 122 exabytes per month in 2017 to 365 in 2022. Studying energy consumption and impacts on climate change of data transmission over the Internet has therefore received much attention over the last decade. Such work usually strives to estimate the overall yearly energy consumption of the Internet (in TWh/year) from which energy intensity estimates of data transmission (in Wh/GB) are extracted. A recent review [START_REF] Coroamȃ | Investigating the Inconsistencies among Energy and Energy Intensity Estimates of the Internet, Metrics and Harmonising Values[END_REF] of these works shows high variability in the results and even inconsistencies between overall energy consumption, energy intensity, and Internet traffic. Those variations are likely explained by differences in Internet modeling, system boundaries, hypotheses, and methodologies.
Limits of energy intensity estimates: Despite those high uncertainties, such energy intensity estimates are frequently used to assess the energy consumption, or environmental impacts, of data transmission of a given Internet service such as, for instance, video-streaming. A commonly raised question is: "what is the energy consumption of transferring one GB of data?", which exhibits severe limitations.
(This study has been financially supported by the French Research Agency through the PostProdLEAP project, ANR-19-CE23-0027-01.)
Firstly, we argue such a question is ill-posed as it depends on numerous variables that go way beyond distinguishing the core, fixed-access, and mobile-access networks. Some other variable examples include the technological maturity of the considered network (from old energy-intensive equipment, to the newest generation) or the actual route taken by the data: a data intensive service hosted in the US but used in Europe will have a very different impact on the network than another service hosted in the same city as its primary users.
Secondly, unlike what their units convey, those numbers exhibit a poor proportionality with the physical reality. Indeed, one can quickly come to the false conclusion that, e.g., reducing by two the amount of data of a given usage will reduce by two its impacts. This limitation might not be a problem when those numbers are exclusively used as an attributional key to allocate the overall shared footprint across the different usages retrospectively. Their use in a consequential manner is, however, very frequent and misleading both for a short or long term point of view [START_REF] Schien | Help, i shrunk my savings! assessing the carbon reduction potential for video streaming from short-term coding changes[END_REF]. On the shorter term because the infrastructure is permanently switched on, and the volume of data passing through it at a given time has very little influence on the power consumption of the equipment (especially for fixed network equipment). On the longer term, one could expect a correlation because if the traffic volume increases, the traffic peak is expected to increase too, yielding to an increase in the infrastructure equipment, and thus an increase of the overall consumption of the infrastructure [START_REF] Schien | Rethinking allocation in high-baseload systems: A demand-proportional network electricity intensity metric[END_REF]. Conversely, if the traffic is maintained or decreased, oldest equipments might be replaced by smaller and more efficient ones when renewed. However, this long term correlation is only partial because i) only a subset of network hardware is subject to such correlation with peak demands, ii) the energy efficiency of such equipment improves quickly over time, and iii) two identical volumes of transferred data might have very distinct effects on local traffic peaks (because of different bitrates, different routes, or different burstiness [START_REF] Guerin | Equivalent capacity and its application to bandwidth allocation in high-speed networks[END_REF]).
Thirdly, such intensity numbers (in Wh/GB or gCO2e/GB) are only efficiency indicators hiding the true absolute energy consumption or absolute impacts, which are the only numbers that really matter. This observation combined with the aforementioned second point yield a paradox: increasing the total amount of traffic increases load percentage and enables scaling gains. Both lead to an improvement (i.e, a decrease) of those efficiency indicators, while the absolute impacts increase. In contrast, sobriety behaviors are certain to maintain or decrease absolute impacts even though they might degrade those efficiency indicators.
Fourthly, such indicators tend to put the responsibility of the impacts solely on the consumer side, while we believe environmental impacts are systemic problems that concern manufacturers, content providers and users. In other words, by hiding the global absolute impacts, such indicators tend to lead to individualization at the expense of a collective vision at which different and more effective levers could emerge.
Contributions:
To address those shortcomings and convey a more realistic understanding of data transmission, we propose a new methodology whose central idea is to consider a given Internet usage at the whole scale of an appropriately chosen territory, and evaluate the impact of this usage, or variants of this usage, on the IT infrastructure. This is accomplished through a parametric bottom-up network modeling of a simplified network infrastructure. Starting from a minimalistic infrastructure (the baseline), our model automatically scales the required hardware according to peak usage scenarios, from which absolute power and energy consumption can be estimated and compared to the baseline or other scenarios. Our simplified model relies on a tree representation of the network infrastructure, allowing us to adjust it to peak access rates in a hierarchical manner.
Having a precise estimate of the overall energy intensity of the Internet is out of the scope of this paper. The version of the parametric model we propose in this paper is rather designed to analyze and compare given use-cases relatively. It is an approximation of the reality with the aim of comparing past and future scenarios for a known usage under the same boundaries. For the sake of simplicity and clarity, we restrict ourselves to the scale of a territory with a limited geographical extent. The general principle of our methodology is presented in Section III.
In Section IV, we demonstrate our methodology on video-on-demand (VoD) at the scale of the French territory. This use case implies a high data traffic playing an important role on the scale of current infrastructures. Through our experiments, we propose an evaluation of the impacts that would result from watching only HD video in contrast to higher quality streams. Our model enables analyzing which network equipment is most impacted by each parameter of the scenario. The proposed methodology is therefore a first step towards a better understanding of the consequences of political, industrial or societal decisions on infrastructure sizing. In particular, our results confirm the aforementioned claims and paradox, and show that using a simple efficiency indicator may lead to misleading decisions. Owing to the lack of data, in this paper we restrict our analysis to the estimation of power consumption; however, our model could easily be extended to account for the manufacturing and other life-cycle phases to estimate other environmental indicators.
II. RELATED WORKS
This section presents prior related works on energy consumption of data transmission. The ICT infrastructure is commonly decomposed into three parts: datacenters, transmission networks and user equipment. Many works have tackled the estimation of energy intensity for each part. We here mainly focus on the network. Note that some earliest works estimating the energy consumption of the Internet included datacenters while others did not, but it is now more common to consider them apart [START_REF] Coroamȃ | The energy intensity of the internet: Home and access networks[END_REF]. Early works [START_REF] Koomey | Network electricity use associated with wireless personal digital assistants[END_REF], [START_REF] Gupta | Greening of the internet[END_REF] were bottomup approaches. They were based on an inventory of all US computing and networking equipment and their average yearly energy consumption to compute the total energy of networking devices in the US. Those overall energy consumption were then normalized by some estimates of the overall traffic volume, yielding to energy intensity indicators. Such indicators have been re-evaluated on a regular basis, and we refer to Aslan et al. [START_REF] Aslan | Electricity intensity of internet data transmission: Untangling the estimates[END_REF] and Coroama [START_REF] Coroamȃ | Investigating the Inconsistencies among Energy and Energy Intensity Estimates of the Internet, Metrics and Harmonising Values[END_REF] for recent surveys and analysis. In the rest of this section, we rather focus on network models and alternative approaches.
Baliga et al. [START_REF] Baliga | Energy consumption in optical ip networks[END_REF] proposed one of the first bottom-up network infrastructure models. It is composed of the access, metro and core networks. The total power is the sum of the power $P_i$ of each piece of equipment multiplied by i) the power usage effectiveness (PUE), ii) a redundancy factor $\eta$ to ensure functionality in case of failure, and iii) a scaling factor based on peak access rate $R_i$ over individual capacity $C_i$:
$$P = \sum_i PUE \times \eta \times P_i \times \frac{R_i}{C_i}.$$
Baliga's model has inspired many later works [START_REF] Coroamȃ | The energy intensity of the internet: Home and access networks[END_REF], [START_REF] Schien | The energy intensity of the internet: Edge and core networks[END_REF]- [START_REF] Wu | Intelligent efficiency for data centres and wide area networks[END_REF]. In particular, Hinton et al. [START_REF] Hinton | Energy consumption modelling of optical networks[END_REF] presented an extension to assess the energy consumption of optical networks for different services and scenarios. Each network element is associated with an affine power profile distinguishing the static power P idle and the linear component which is assumed to be proportional to the current throughput (in bit/s). They observed that for such fixed network elements, this proportional part is very small with P idle > 0.9P max . This observation is also confirmed by Malmodin [START_REF] Malmodin | The power consumption of mobile and fixed network data services -the case of streaming video and downloading large files[END_REF]. With such an affine power model, allocating the proportional part boils down to a simple volumebased allocation. To accommodate for the idle power, they proposed different allocation strategies, and in particular one based on relative throughputs. Malmodin's power model [START_REF] Malmodin | The power consumption of mobile and fixed network data services -the case of streaming video and downloading large files[END_REF] is also based on an affine power-profile. The idle power is first equally spread to each line, and then distributed to the potentially multiple users and devices using this line, which can be rather sketchy to do in practice. In a similar vein, Ullrich et al. [START_REF] Ullrich | Estimating the resource intensity of the internet: A meta-model to account for cloud-based services in lca[END_REF] described an hybrid allocation: durationbased for the customer premises equipment (CPE) and access network equipment, and volume-based for the core network. All those strategies [START_REF] Hinton | Energy consumption modelling of optical networks[END_REF], [START_REF] Malmodin | The power consumption of mobile and fixed network data services -the case of streaming video and downloading large files[END_REF], [START_REF] Ullrich | Estimating the resource intensity of the internet: A meta-model to account for cloud-based services in lca[END_REF], however, ignore the energy consumption during standby time, while the last twos assume that the idle power consumption is unrelated to traffic demand, which tends to artificially minimize the network part of the impacts of a given usage. Our approach overcomes those shortcomings by replacing the allocation issues by a more systemic view, and estimating the ideal idle-power for a given usage or service.
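To make the affine power profile concrete, here is a minimal sketch (the function name and the 90% idle share are illustrative, chosen to match the order of magnitude reported above) showing why a purely volume-based allocation only captures a small fraction of a fixed network element's consumption:

```python
def element_power(load_gbps, capacity_gbps, p_max_w, idle_share=0.9):
    """Affine power profile: a large constant idle part plus a small
    component proportional to the instantaneous throughput."""
    p_idle = idle_share * p_max_w
    return p_idle + (p_max_w - p_idle) * (load_gbps / capacity_gbps)

# A router drawing 1 kW at full load still draws ~900 W when idle:
print(element_power(0.0, 100.0, 1000.0))    # 900.0
print(element_power(100.0, 100.0, 1000.0))  # 1000.0
```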
To better understand network energy consumption or GHG emissions, several studies focused on a narrow and specific use case. For instance, Schien et al. [START_REF] Schien | Impact of location on the energy footprint of digital media[END_REF] used traceroute data to estimate the number and type of network devices involved in digital media transmission. Coroama et al. [START_REF] Coroamȃ | The direct energy demand of internet data flows[END_REF] considered a 40 Mbps videoconferencing transmission between Switzerland and Japan, and modeled all Internet nodes and links along the way, distributing the energy according to the relative traffic volumes. Ficher et al. [START_REF] Ficher | Rapport : évaluation de l'empreinte carbone de la transmission d'un gigaoctet de données sur le réseau renater[END_REF] estimated the carbon footprint of transmitting one gigabyte of data on a specific segment of the RENATER network. Golard et al. [START_REF] Golard | Evaluation and projection of 4g and 5g ran energy footprints: the case of belgium for 2020-2025[END_REF] evaluated and projected the total energy consumption of broadband radio networks at the scale of Belgium. Another use-case that has received attention is the assessment of the carbon footprint of watching one hour of video streaming, as discussed in Section IV.
III. METHODOLOGY
In this section, we present a general overview of our methodology. A concrete instance on the VoD streaming use case is given in Section IV. Let us recall that our goal here is not to estimate the impact of an existing infrastructure, but rather to estimate the impacts of a given service or use-case through its pressure on the dimensioning of a hypothetical infrastructure. It is instanced by a bottom-up model that automatically scales the infrastructure to different scenarios and hypotheses. From this infrastructure, we can then estimate its absolute power consumption and other environmental impacts. To estimate the absolute impacts of a given usecase, one starts to define a minimalist baseline scenario from which a baseline infrastructure is generated and evaluated. Then, a second infrastructure is generated and evaluated for the given use-case and the difference between the absolute impacts of these two scenarios is attributed to this use-case. In practice, this approach also permits to compare different hypotheses for the same use-case, hence enabling a better understanding of the consequences of different choices on the infrastructure. Through this exercise, it is important to consider the whole service/use-case at the scale of a largeenough territory to be representative of the service/use-case at hand. Those few methodological principles are key to avoid the pitfalls discussed in the introduction, but also to avoid tricky allocation issues of shared or multi-purpose equipment.
Below we present our methodology as four main steps.
Step 1 - Use-case
In order to guide the next steps, we first need to define the service or use-case that we aim to model, evaluate, and analyze. Examples encompass video streaming, videoconferencing, large file downloading, email communication, etc. In addition, one also has to identify the main parameters and variables associated to this use-case, and their range of values that will be explored (e.g., video resolutions, number of viewers, server localization, file sizes, frequencies, etc.). At this step, one can already define the baseline scenario through the choice of the most sober values for those variables (e.g., no streaming, a few emails a day without attachments, etc.).
Step 2 - Boundary
This step covers two aspects. First, which parts of the Internet infrastructure are included: datacenters, core, edge, fixed and/or radio network, fiber and/or copper, customer premises equipment (CPE), etc. Second, which geographical territory: a city, a country, the world? The choice of a territory might be dictated by the purpose of the evaluation, e.g., one might be interested in evaluating a service for a given country. Otherwise, for the sake of simplicity, it might be wise to choose the smallest possible territory that is representative of the scenarios identified in step 1.
Step 3 - Design of the parametric infrastructure model
This step consists in designing the parametric model that will generate infrastructures according to some dimensioning variables. To this end one must start to design a minimalist infrastructure, e.g., every home and datacenters of the considered territory must be connected with the capacity to exchange some bits, the radio network must cover 99% of the territory, every user of our scenarios possesses at least one smartphone, tablet or laptop, etc. This minimalist instance completes the baseline scenario identified at step 1. The model has to be designed to be able to scale up and to cover the range of use-cases and boundaries defined in the previous steps.
For the network, as a proof of concept, in this work we propose to use a simplified tree representation that goes from the user houses up to the main datacenters hosting the considered service, passing through nodes representing the different pooling points of the fixed-access, edge, and core network layers. More complex structure representations shall be used in future work. Figure 1 illustrates part of the tree of the infrastructure. Putting aside the end-user devices, the leaves correspond to the home-routers, and the nodes represent converging/splitting points where congestion might occur. Edges represent fiber links which might include only passive equipment, or also active equipment for long-distance hops. Following previous bottom-up network models, its main dimensioning variables are the bandwidth capacities required at the different nodes and links of the tree.
Fig. 1. Abstract tree representation of the infrastructure. In this example, the tree goes from a main datacenter to many devices through nodes and links. A node can, for instance, represent an IXP that includes a CDN or routers.
The detailed infrastructure for our use-case is presented in Figure 3. At each node and link $l$, the equipment is scaled up to the minimal quantity enabling: i) a connection to every subscriber (household); ii) a bandwidth capacity equal to or greater than $R_l$ (in bit/s), which has to be adjusted with respect to the peak access rate estimated at the node or link $l$ for the given scenario (see Step 4). Throughout this paper, peak rates are assumed to refer to averaged traffic rates over a few seconds. This quantity is then multiplied by the redundancy factor $\eta$. The global power consumption is estimated as a sum over all equipment and facilities of the infrastructure. To this end, each piece of equipment must be associated with a power consumption profile, i.e., a static (or idle) power (in W) and a power intensity factor (in W/Gbps) for the dynamic power consumption part, which is assumed to be proportional to the actual traffic. Because we scale the infrastructure to peak needs, we are able to understand the physical reality and bottlenecks behind power consumption.
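The dimensioning rule described above can be summarised in a few lines; this is only a sketch of the principle (the function names are ours, and the PUE and redundancy values are those used later in the paper):

```python
import math

PUE = 1.8   # power usage effectiveness of network facilities
ETA = 2     # redundancy factor

def scaled_quantity(peak_rate_gbps, unit_capacity_gbps):
    """Minimal number of identical units able to carry the peak rate."""
    return math.ceil(peak_rate_gbps / unit_capacity_gbps)

def scaled_static_power(peak_rate_gbps, unit_capacity_gbps, unit_power_w):
    """Static power of a node or link once scaled to its peak demand."""
    n = scaled_quantity(peak_rate_gbps, unit_capacity_gbps)
    return PUE * ETA * n * unit_power_w
```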
Step 4 - Scenario evaluation and peak demand modeling
Finally, to evaluate one of our scenarios, we need to translate it to the dimensioning variables exposed by the model defined in the previous step. In our case, this mainly requires estimating the capacities $R_i$ from peak access rates estimated at every node and link of the tree. This step usually embeds a growing margin factor ($\alpha$ in the following) allowing exceptional traffic peaks to be anticipated and future growth to be provisioned for.
Our scenarios are constructed based on global average statistics of different kinds. The first kind are expressed in terms of a percentage of "active" users (e.g., percentage of active subscribers for the baseline, or the percentage of simultaneous VoD watchers). The second kind are numbers such as the average number of inhabitants per house. Whereas using such statistical means would be sufficient when considering a large pool of inhabitants, they cannot be used to reflect worst-case scenarios near the leaves of the tree, where some equipment is shared by a few dozen to a few hundred inhabitants only. For better accuracy, we propose to consider their respective distributions, say $d^n$, for a given sub-pool of $n$ inhabitants. To neglect the most unlikely occurrences only, we use the smallest quantity $q$ such that the probability of having an occurrence $x$ greater than $q$ is extremely low, i.e., $d^n(x > q) < \epsilon$. We used $\epsilon = 10^{-9}$.
For the statistics of the first kind, defined by a percentage $s$ of "active" inhabitants, the distribution $d_s^n$ corresponds to a hypergeometric distribution. Assuming that $n$ is small compared to the total number of inhabitants, it is well approximated by the simpler binomial distribution, and we define the function $q_s(n)$ as the smallest $q \leq n$ such that $d_s^n(x > q) < \epsilon$, which corresponds to the $(1-\epsilon)$ percentile, itself computed through the inverse of the cumulative distribution function (CDF) of $d_s^n$. Note that for very large $n$, we have $q_s(n) \approx s \times n$, but $q_s(n)$ can be significantly greater than $s \times n$ otherwise (e.g., $q_{3\%}(64)/64 = 21\%$).
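For illustration, $q_s(n)$ can be obtained directly from the inverse CDF of the binomial distribution; a minimal sketch using SciPy (with $\epsilon = 10^{-9}$ as above) is given below.

```python
from scipy.stats import binom

EPS = 1e-9  # confidence parameter epsilon used throughout the paper

def q_s(n, s, eps=EPS):
    """Smallest q such that P(X > q) < eps for X ~ Binomial(n, s), i.e.
    the (1 - eps) percentile of the number of simultaneously active
    users among a pool of n inhabitants."""
    return int(binom.ppf(1.0 - eps, n, s))

# For a small pool of 64 subscribers, the worst-case share of active
# users is far above the 3% average (close to the 21% quoted above):
print(q_s(64, 0.03) / 64)
```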
For our statistics of the second kind, such as the number of inhabitants per home, we first have to define the respective discrete distribution $d_i(x)$, and consider the sum of $n$ random variables having this discrete distribution. Again, we assume that $n$ is small compared to the total number of inhabitants. The resulting distribution $d_i^n(x)$ is thus obtained by the convolution of $d_i$ with itself $n$ times. As before, we then define $q_i(n)$ as the $(1-\epsilon)$ percentile of $d_i^n(x)$. Since there is no closed-form formula for $d_i^n$, computing $q_i(n)$ for many different values of $n$ can be very tedious. Instead, we found that $q_i(n)$ can be very well approximated by a function of the form:
$$q_i(n) \approx \max(a_i n + b_i n^{c_i},\; n \bar{d}_i) \quad (1)$$
where $\bar{d}_i$ is the mean of $d_i(x)$. The three coefficients $a_i$, $b_i$, $c_i$ are found numerically to interpolate three points taken at $n \in \{16, 128, 1024\}$. Values of $d_i$ for our use-case, presented in the next section, can be found in Table I.
IV. USE CASE: VOD STREAMING
To illustrate our methodology, we consider VoD streaming. This use-case has already been addressed by many studies [START_REF] Ullrich | Estimating the resource intensity of the internet: A meta-model to account for cloud-based services in lca[END_REF], [START_REF]Carbon impact of video streaming[END_REF], [START_REF] Efoui-Hess | Climat : l'insoutenable usage de la vidéo en ligne -Un cas pratique pour la sobriété numérique[END_REF]- [START_REF] Preist | Evaluating sustainable interaction design of digital services: The case of youtube[END_REF]. All of them, however, focus on estimating the electricity intensity of one hour of video or the yearly electricity usage of a video service, while allocating the network part based on volume of data (Wh/GB), or a mix of volume and duration. As motivated in the introduction, those a posteriori attributional allocation strategies can hardly be used to predict the impacts of intensifying VoD streaming, nor to understand the real pressure of VoD streaming on the network infrastructure. Applying our methodology to this use-case will thus reveal new insights.
This section follows the four steps presented in the previous section. For the sake of clarity, the steps 3 and 4 are detailed per kind of equipment. Owing to the lack of power profile data, and since we consider a fixed network, we will focus on the estimation of the static power only, leaving the dynamic part for discussions in Section V-A.
A. Design overview
1) Step 1 - Use-cases definition: Video streaming in general covers a large range of different types of services, each having their own infrastructure design with different usage patterns. In this study, we limit ourselves to over-the-top (OTT) streaming from a unique service provider for the whole territory. We further assume a bounded catalog controlled by the service provider. We thus exclude other types of video streaming such as Youtube (unbounded catalog), live streaming, IPTV, and advertising videos.
Figure 2 depicts the main components of an OTT VoD service. After the content creation stage, the service provider manages its catalog with many different encoding settings and redundant storage in its main datacenters. In general, videos are not delivered directly from the main datacenters but from closer servers belonging to content delivery networks (CDNs). CDNs are storage servers usually located at Internet service provider (ISP) or Internet exchange points (IXP). CDNs reduce data traffic in the core network (and in particular in submarine and longhaul optical fibers) and improve user experience with faster loads. Those CDN servers are partly updated every days during low traffic periods. The videos are then delivered to the customers through the core and access networks.
The main variables are the video quality $R_v$ (ranging from 0 to 30 Mbps), and the percentage $s_v$ of inhabitants accessing the service simultaneously during the peak period. Other variables include the size of the catalog and the percentage of video content accessed through the CDN.
In our baseline scenario we consider a traffic expected to represent a "minimalist and sober" use of the Internet as communication and information sharing means, excluding all traffic-intensive usages such as video streams, large file transfers, heavy web pages, etc. (both $R_v$ and $s_v$ are set to 0). We further ignore all the B2B traffic. At peak hours, this baseline traffic is modeled as a global percentage of active customers $s_b = 2\%$, and a per-customer speed rate $R_b = 10$ Mbps. Those numbers yield an average peak rate of 200 kbps per customer, which matches average peak rates for copper lines in 2013 in France. Those numbers are thus rather high for a "minimalist and sober" use of the Internet, but they can be considered as conservative. We further assume that the baseline and VoD traffic peaks are fully correlated, which is also a rather conservative choice.
2) Step 2 - VoD streaming boundaries: For the sake of simplicity, we chose a narrow boundary in this study. Parts in dark gray in Figure 2 are those that we include, while we ignore steps in light gray (content creation, encoding, customer management, end-user devices, ...). To be consistent with our tree structure, we consider a unique CDN that includes both servers and dedicated routers.
Territory: The territory is chosen to be representative of the geographical scale typically covered by a unique CDN. This makes Metropolitan France, which hosts the ICT4S 2023 conference, a rather good candidate to scale our network scenarios. We thus consider 65e6 inhabitants for about #home = 30e6 households, each having an internet connection and being a customer of the VoD service. We assume that 2/3 of our baseline traffic stays in this territory. The CDN is naturally located in an IXP in Paris.
Main datacenters: The main datacenter servers and routers of the VoD service provider are expected to be shared by many countries. We thus chose to ignore their own power consumption. However, in order to account for the Internet traffic load required to update our CDN, as well as to deliver videos that are not cached in the CDN servers, we still consider one main datacenter located in North America at about 900 km from the Atlantic submarine cable landing point. We also assume that the 1/3 of the baseline traffic coming from international sources goes through this same route.
Network: We include both the core and edge network active equipment, but ignore passive ones. For the access network, since the VoD service we consider is mostly used at home, we consider only a fixed-access network that we assume to be fully implemented through the GPON (Gigabit Passive Optical Network) FTTH (fiber-to-the-home) architecture.
CPE: Since our study focuses on the network, on the customer side we consider only the ONU needed for the GPON fiber architecture, but ignore all other devices such as home-routers, set-top-boxes, TVs, laptops, etc.
Summary of the infrastructure: Figure 3 summarizes the infrastructure for our use-case. Our infrastructure, and the number of equipment units, is scaled according to a peak usage that depends on the scenario. In the next subsections, we detail each node and how we compute the quantity and power of all equipment. Some general parameters are considered. The PUE indicator for the network is set to 1.8 [START_REF] Aslan | Electricity intensity of internet data transmission: Untangling the estimates[END_REF] for all experiments, while the redundancy factor in case of failure is $\eta = 2$. Our design is largely inspired by the model of Baliga et al. [START_REF] Baliga | Energy consumption in optical ip networks[END_REF], with some power consumption figures coming from this paper. When it is the case, we update the energy intensity (W/Gbps) of the equipment considering the energy efficiency gains over the years, using the formula proposed by the authors:
$$I = \frac{P}{C} = \frac{P_0}{C_0} \times (1 - \gamma)^t,$$
where $t$ is the number of years between 2020 (our reference year) and 2008, $P_0$ and $C_0$ are the power and capacity from 2008, and $\gamma = 0.1$.
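For illustration, updating a 2008 intensity figure to the 2020 reference year amounts to the following (illustrative numbers, not values from Table II):

```python
def updated_intensity(p0_w, c0_gbps, year=2020, ref_year=2008, gamma=0.1):
    """Energy intensity (W/Gbps) extrapolated from a 2008 power/capacity
    pair, assuming a 10% yearly efficiency gain."""
    return (p0_w / c0_gbps) * (1.0 - gamma) ** (year - ref_year)

# A 2008 module at 10 W/Gbps drops to about 2.8 W/Gbps by 2020:
print(updated_intensity(1000.0, 100.0))
```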
B. GPON-based FTTH access network
Step 3 -Access network design: To model FTTH, we have chosen the GPON architecture as our outermost link (Figure 4). Its main component is the Optical Line Terminal (OLT) hosted in central offices (hubs) of the operator. It is primarily composed of GPON cards with up to #gpon/card = 16 GPON ports per card, and up to 16 cards per OLT. Each GPON port has a maximal capacity of 2.5Gbps shared by #sub/gpon ≤ 128 subscribers through a unique output fiber. This fiber is then split in a tree structure through passive optical elements. This tree ends with one Optical Network Unit (ONU) in each household. This active equipment is required to deal with the temporal multiplexing of the underlying GPON protocol. In our model, the OLT is connected to the backbone through 10 Gigabit Ethernet ports (10GE ports). We have at least one 10GE port per OLT and thus at most 256 × #sub/gpon subscribers per 10GE port. The number #hub of central offices is fixed and determined to cover the whole territory. An opendata list of central offices in France in 2020 1 reports about 1500 hubs dedicated to OLT hosting, and 4300 other ones tagged as "copper". In practice, many of them also host OLTs, and the presence of four main operators also tends to increase the number of hubs compared to a single operator scenario. All in all, we took #hub = 3000 (in practice this parameter has a very small influence on the overall results). For the sake of simplicity, we also assume that the subscribers are equally spread among the hubs. Static power consumption and capacity of those equipments are given in Table II.
From this design, the total number of GPON ports is:
$$\#gpon = \frac{\#home}{\#sub/gpon} + \#hub \times \#gpon/card. \quad (2)$$
1 https://www.data.gouv.fr/fr/datasets/localisations-des-noeuds-de-raccordement-abonnes-nra-et-optiques-nro-dans-openstreetmap/
The right-hand side term accounts for the fact that GPON cards are, in practice, not completely filled. We therefore add the equivalent of one GPON card with #gpon/card = 16 ports per hub. The number of 10GE ports is scaled with respect to the peak-rate demand $R^*_{olt}$ at one OLT:
$$\#ge = \eta \, \#olt \, \frac{R^*_{olt}}{C_{GE}}. \quad (3)$$
The number of OLTs is obtained as:
$$\#olt = \max\left(\#hub,\; \frac{\#gpon}{\#gpon/card \times \#card/olt}\right), \quad (4)$$
with #card/olt the maximal number of GPON cards per OLT (#card/olt = 16). The total power of the access network is the cumulative power of all GPON and GE cards:
$$P_{access} = PUE \times (\#gpon \times P_{gpon} + \#ge \times P_{ge}). \quad (5)$$
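A direct transcription of equations (2)-(5) could look as follows; this is a sketch only, where the per-port powers stand in for the values of Table II (not reproduced here) and fractional counts are rounded up:

```python
import math

PUE, ETA = 1.8, 2
N_HOME, N_HUB = 30_000_000, 3000
GPON_PER_CARD, CARD_PER_OLT = 16, 16
C_GE = 10.0                 # Gbps per 10GE uplink port
P_GPON, P_GE = 8.0, 10.0    # W per port -- placeholder values

def access_network(sub_per_gpon, r_olt_gbps):
    """Equations (2)-(5): GPON ports, OLTs, 10GE ports and total static
    power of the access network, given the subscribers per GPON tree
    and the peak-rate demand at one OLT."""
    n_gpon = math.ceil(N_HOME / sub_per_gpon) + N_HUB * GPON_PER_CARD       # eq. (2)
    n_olt = max(N_HUB, math.ceil(n_gpon / (GPON_PER_CARD * CARD_PER_OLT)))  # eq. (4)
    n_ge = ETA * n_olt * math.ceil(r_olt_gbps / C_GE)                       # eq. (3)
    p_access = PUE * (n_gpon * P_GPON + n_ge * P_GE)                        # eq. (5)
    return n_gpon, n_olt, n_ge, p_access
```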
Step 4 - Access network peak demand: In order to evaluate the above equations, we have to compute the actual maximal number of subscribers per GPON tree (#sub/gpon) such that the peak demand traffic of a given scenario does not exceed its bandwidth capacity, as well as the demand rate $R^*_{olt}$. To this end, we exploit the statistical tools presented in Section III (Step 4): #sub/gpon is taken as the largest $n$ satisfying
$$\alpha_t R(n) < C_{GPON}, \quad (6)$$
where R(n) is the peak demand for a pool of n subscribers:
$$R(n) = R_v \times \frac{q_{s_v} \circ q_i(n)}{S} + R_b \times q_{s_b}(n). \quad (7)$$
We recall that $\alpha_t$ is a growing margin factor (Section III, Step 3), $s_b$ is the average percentage of active subscribers with baseline usage, $s_v$ is the average percentage of VoD viewers among the population at peak hours, and $S$ is the average number of viewers sharing the same device/video flux ($S = 1.5$ [START_REF]Carbon impact of video streaming[END_REF]). We used $\alpha_t = 1.5$. In practice, we compute this solution using a binary search while accelerating the evaluations of $q_{s_v} \circ q_i(n)$ and $q_{s_b}(n)$ by approximating both of them by functions of the same form as equation (1), with coefficients computed using the same three-point interpolation strategy. The logic for the 10GE uplink ports is slightly different. Indeed, once the number of subscribers per GPON tree is known, the number of subscribers per OLT is fixed, and we thus avoid the need for the binary search by directly setting the peak-rate demand:
$$R^*_{olt} = \alpha_t R(\#sub/\#olt). \quad (8)$$
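To make the search concrete, a toy version of $R(n)$ and of the binary search could look like the sketch below. All numeric values are illustrative stand-ins for the scenario parameters and Table I; in particular, $q_i(n)$ is crudely replaced by a mean-based approximation rather than the exact convolution percentile or the fitted form (1).

```python
from scipy.stats import binom

C_GPON = 2500.0            # Mbps, capacity of one GPON port
R_V, S_V = 16.0, 0.20      # video bitrate (Mbps) and share of simultaneous viewers
R_B, S_B = 10.0, 0.02      # baseline rate (Mbps) and share of active subscribers
S, ALPHA_T, EPS = 1.5, 1.5, 1e-9
MEAN_INHAB = 2.2           # assumed mean household size (stand-in for Table I)

def q(n, s):
    """(1 - EPS) percentile of Binomial(n, s)."""
    return binom.ppf(1.0 - EPS, int(n), s)

def peak_rate(n_sub):
    """Equation (7), with q_i(n) approximated by MEAN_INHAB * n_sub."""
    viewers = q(MEAN_INHAB * n_sub, S_V) / S
    return R_V * viewers + R_B * q(n_sub, S_B)

# Equation (6): largest n <= 128 such that ALPHA_T * peak_rate(n) < C_GPON.
lo, hi = 1, 128
while lo < hi:
    mid = (lo + hi + 1) // 2
    if ALPHA_T * peak_rate(mid) < C_GPON:
        lo = mid
    else:
        hi = mid - 1
print("subscribers per GPON tree:", lo)
```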
C. National edge and core network
This subsection focuses on the part of the network tree connecting the OLTs to the main IXP located in Paris. The link from the IXP to North America will be addressed in the next subsection.
Step 3 -Edge/core network design: To connect the entire country to the main IXP, we use a tree topology that interconnects children to core router nodes with a star topology. We consider that each core router node has 8 child nodes, and 3 core levels L 0 to L 2 , with L 0 corresponding to the main IXP in Paris. A core router is made of a variable number of modules. Given a capacity requirement of R Gbps, the actual number of modules is given by:
$$\#CN(R) = \frac{R}{C_{crm}},$$
with an electrical power of:
$$P_{CN}(R) = PUE \times \eta \times \#CN(R) \times P_{crm}.$$
Each of the 64 core nodes of level 2 are then linked to 8 edge nodes (level 3). An edge node is composed of a modular edge router, a modular broadband network gateway (BNG), and a modular Ethernet switch connected to the OLTs. The actual quantity of modules composing one edge node and its respective power is computed just as #CN (R) and P CN but using the capacity and power features given in Table II:
$$P_{EN}(R) = PUE \times \eta \times \sum_{k \in \{eth, BNG, erm\}} \frac{R}{C_k} \times P_k.$$
Core and edge routers are connected through a wavelength division multiplexing (WDM) transport system, composed of two terminal multiplexers (one at each extremity, with a power of 4.6 W per channel), amplifiers every 100 km (with a power of 3.5 W per channel), and a capacity of 40Gbps per channel. For a capacity R and distance dist, the electrical power of such a link is thus:
$$P_{wdm}(R, dist) = \eta \, \frac{R}{40} \left( PUE \times 9.2 + \left(\frac{dist}{100} - 1\right) \times 3.5 \right).$$
Let $R^*_l$ be the required capacity at a node of level $l$, and $dist_l$ be the average distance separating a pair of nodes at levels $l-1$ and $l$. In our design, $dist_1 = dist_2 = 300$ km, and $dist_3 = 100$ km. The overall power $P_{nat}$ of national core and edge nodes and links is finally:
$$P_{nat} = 2^3 \times P_{EN}(R^*_3) + \sum_{l \in [0,2]} 2^l \left( P_{CN}(R^*_l) + P_{wdm}(R^*_{l+1}, dist_{l+1}) \right).$$
Step 4 - Edge/core network peak demand modeling: The capacity $R^*_l$ required at each level $l$ is estimated from the peak demand of our scenarios in a similar fashion as for the uplink ports of the access network. Observing that the number of subscribers related to one node is equal to $\#home/2^l$, we directly have: $R^*_l = \alpha_t R(\#home/2^l)$.
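The building blocks above translate directly into code; the sketch below uses placeholder module capacities and powers in place of Table II, and $P_{nat}$ is then obtained by summing these terms over the nodes and links of the tree with their respective peak capacities $R^*_l$.

```python
import math

PUE, ETA = 1.8, 2
# Placeholder module characteristics (capacity in Gbps, power in W).
C_CRM, P_CRM = 560.0, 3000.0
EDGE_MODULES = {"eth": (640.0, 2000.0), "bng": (320.0, 2500.0), "erm": (560.0, 3000.0)}

def p_core_node(r_gbps):
    """Power of a core node scaled to the required capacity."""
    return PUE * ETA * math.ceil(r_gbps / C_CRM) * P_CRM

def p_edge_node(r_gbps):
    """Power of an edge node (Ethernet switch + BNG + edge router modules)."""
    return PUE * ETA * sum(math.ceil(r_gbps / c) * p for c, p in EDGE_MODULES.values())

def p_wdm(r_gbps, dist_km):
    """Power of a WDM link: two terminal multiplexers plus one amplifier
    every 100 km, with 40 Gbps channels."""
    channels = math.ceil(r_gbps / 40.0)
    return ETA * channels * (PUE * 9.2 + (dist_km / 100.0 - 1.0) * 3.5)
```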
D. International longhaul link
Step 3 -International longhaul design: The main IXP is connected to the main datacenter in North America through a longhaul WDM transport system made of two terrestrial parts of 600 and 900 km respectively, and one WDM submarine section of 8000km. We further assume that about 7 core nodes are crossed over along this route. Let R * u be the required capacity for this line. The electrical power for the terrestrial part is thus:
$$P_{intT} = 7 \times P_{CN}(R^*_u) + \sum_{dist \in \{600, 900\}} P_{wdm}(R^*_u, dist).$$
Submarine WDM systems are slightly more complex to model. It is composed of two terminal multiplexers (35W per channel), with repeaters (0.2W per channel) placed to amplify the signal every 50 kilometers and a capacity of 40 Gbps per channel. The repeaters are powered by electrical suppliers located on the coast with 80% energy efficiency through cables having a resistance yielding power loss of about 0.004 W/km [START_REF] Baliga | Energy consumption in optical ip networks[END_REF]. This sums up to 142 W per channel for our 8000 km cable, yielding for the final electrical intensity of the undersea connection:
$$P_{intU} = PUE \times \eta \times \alpha_u \times \frac{R^*_u}{40} \times 142. \quad (9)$$
Submarine cables are often designed with a larger margin factor than terrestrial links, hence the dedicated scale factor α u = 2 [START_REF] Baliga | Energy consumption in optical ip networks[END_REF].
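A minimal sketch of equation (9), taking the required international capacity as an input:

```python
import math

PUE, ETA, ALPHA_U = 1.8, 2, 2
# 142 W per 40 Gbps channel is consistent with 2 x 35 W terminals,
# 160 x 0.2 W repeaters fed at 80% efficiency, and 0.004 W/km line losses.
W_PER_CHANNEL = 142.0

def p_submarine(r_u_gbps):
    """Electrical power of the 8000 km undersea WDM section for a
    required capacity R_u (Gbps), following equation (9)."""
    return PUE * ETA * ALPHA_U * math.ceil(r_u_gbps / 40.0) * W_PER_CHANNEL
```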
Step 4 - International longhaul peak demand modeling: The peak demand rate $R^*_u$ between the main datacenter and our main IXP/CDN is the maximum between the peak rate $R^*_{fill}$ needed to fill the CDN servers and the peak traffic rate corresponding to the baseline and VoD scenarios. For the latter, we assume that only 1/3 of the baseline traffic goes through this international link, and that %CDN percent of the VoD traffic is handled by the CDN (we used %CDN = 80%). We end up with:
$$R^*_u = \max\left( R^*_{fill},\; (1 - \%CDN) R^*_v + \tfrac{1}{3} R^*_b \right). \quad (10)$$
This equation assumes that the hours of CDN filling might overlap with the baseline peak, which is unlikely but conservative. The quantities $R^*_b$ and $R^*_v$ are the global peak rates for all customers for the baseline and VoD usages respectively:
$$R^*_b = s_b \times \#home \times R_b; \qquad R^*_v = s_v \times \bar{d}_i \times \#home \times R_v.$$
E. CDN
Step 3 - CDN design: Our CDN includes different kinds of servers. Following Netflix's CDN design [START_REF]Open connect website[END_REF], we consider a CDN made of a few storage servers having a large storage capacity to hold a large percentage of the catalog content, and many flash servers having a limited storage capacity but a very high throughput. It is completed with dedicated edge routers, yielding an overall electrical power modeled as:
$$P_{CDN} = PUE \times \left( \frac{R^*_{CDN}}{C_{flash}} P_{flash} + \#sto \, P_{sto} + \eta \, \frac{R^*_{CDN}}{C_{erm}} P_{erm} \right). \quad (11)$$
where R * CDN is the required throughput capacity. The average power and throughput of the flash and storage servers are given in Table II.
Step 4 -CDN peak demand modeling: Following Netflix documentation [START_REF]Open connect website[END_REF], we set the number of storage servers to #sto = 40, and kept it fixed for all our VoD scenarios even though one could slightly adjust it with respect to the maximal video bitrate. The throughput capacity R * CDN is estimated from the global peak demand R * v , and the percentage %CDN of content effectively provided by the CDN:
$$R^*_{CDN} = \alpha_t \times \%CDN \times R^*_v.$$
Finally, we found the bitrate $R^*_{fill}$ needed to update the #sto CDN storage servers on a daily basis to be quite negligible. According to Netflix [START_REF]Open connect website[END_REF], ∼1.8% of the content of CDN storage servers is updated every day. With 320 TB of capacity each, and assuming they are updated during a period of 8 h, this yields $R_{fill} = 72$ Gbps, which is ∼30 times lower than our estimated baseline peak for the international link.
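Equation (11) can be sketched as follows, with placeholder server and router characteristics standing in for Table II:

```python
import math

PUE, ETA, ALPHA_T = 1.8, 2, 1.5
N_STO, PCT_CDN = 40, 0.80
C_FLASH, P_FLASH = 100.0, 1000.0   # Gbps and W per flash server (placeholder)
P_STO = 500.0                      # W per storage server (placeholder)
C_ERM, P_ERM = 560.0, 3000.0       # Gbps and W per edge-router module (placeholder)

def p_cdn(r_v_gbps):
    """CDN power for a global VoD peak rate R*_v, using
    R*_CDN = alpha_t * %CDN * R*_v and equation (11)."""
    r_cdn = ALPHA_T * PCT_CDN * r_v_gbps
    return PUE * (math.ceil(r_cdn / C_FLASH) * P_FLASH
                  + N_STO * P_STO
                  + ETA * math.ceil(r_cdn / C_ERM) * P_ERM)
```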
V. RESULTS ON DIFFERENT SCENARIOS
In this section, we compare our baseline with other scenarios. We first analyze the influence of video quality and discuss proportionality (Section V-A), and then evaluate a hybrid OTT+DTT scenario (Section V-B). We implemented our model as a static web-application allowing the user to modify all the parameters and hypotheses of our model, including the capacity and static power of each network element. This application will be made available online upon acceptance.
A. Effect of video quality
With our consequential methodology, we wish to understand what could be, or could have been, the consequences of some restrictions over infrastructure dimension and energy consumption. In this section, we propose to compare the influence of several video qualities, namely HD (1280×720 at 3 Mbps), FHD (1920×1080 at 5 Mbps), UHD (3840×2160 at 16 Mbps), and a fourth UHD++ scenario with 4K resolution, high-dynamic-range (HDR) color depth, and 60 frames per second (at 27 Mbps). In all our scenarios, we assume a baseline peak rate $R_b = 10$ Mbps for a percentage $s_b = 2\%$ of subscriptions. All our VoD scenarios assume a peak percentage $s_v = 20\%$ of viewers through OTT. We recall that contrary to $s_b$, this percentage is relative to the whole population.
Moreover, in order to understand the effect of different usage patterns, we also sketched a fictitious download scenario ("DL") of very large files such as OS updates and AAA video games. The latter become larger and larger, with the top 26 ranging from 64 GB to 200 GB at the end of 2021 [START_REF] Ridley | The biggest games by install size, real mighty storage hogs[END_REF], while generating heavy loads on the network at every release or patch update of the most famous titles. Our VoD architecture can easily be adapted to such a use-case by setting $R_v = 200$ Mbps, $s_v = 3\%$, %CDN = 95%, and replacing the VoD term of equation (6) by a simpler subscriber-based term: $R_v q_{s_v}(n)$. For this term, we also relaxed the confidence parameter to $\epsilon = 10^{-7}$ (Section III, Step 4), hence accepting that a few users will very likely experience slightly degraded download rates.
Applying the model detailed in the previous section to our scenarios, and integrating static powers over one year, leads to annual power consumptions presented in Table III. This table also reports the percentage of energy consumption increase relative to the baseline scenario.
A first observation is that the overall energy consumption is not directly proportional to bitrates. This is especially true when including the ONUs that are plugged in 24/7 in every home. The increase of the access network is negligible for HD/FHD bitrates, and it remains limited even for the UHD scenario. This is because the baseline 1:128 configuration of the GPON trees is enough to handle such bitrates; only the 10GE uplinks have to be upscaled. The UHD++ scenario, however, yields a much higher pressure on the GPON trees, which have to be upscaled to a 1:75 configuration. For all scenarios, we observed that the submarine cable counts for about 33% of the whole international longhaul connection. Figure 5 shows the energy consumption difference between our VoD/DL scenarios and the baseline. One can observe that for the VoD scenarios, for which only the streaming bitrate changes, this difference is much more proportional to the VoD bitrate than the absolute consumption is, though not perfectly. However, when including a different use-case such as our large-file download scenario, this apparent correlation breaks.
Owing to the lack of precise power consumption profiles for each of the considered equipment, the previous energy estimates cover static power consumption only, ignoring the dynamic part. In order to get a rough estimate of what could be the effect of accounting for the dynamic power consumption, we used the average dynamic intensity factor estimated by Malmodin [START_REF] Malmodin | The power consumption of mobile and fixed network data services -the case of streaming video and downloading large files[END_REF] for a fixed line (i.e., ∼0.1 Wh/GB). Considering an average of 2 GB (resp. 25 GB) of data per subscriber per month for the baseline (resp. DL) use-case, and an average of 3.2 h of OTT video per day per subscriber, we obtained the total yearly traffic volume and updated yearly energy consumptions reported in Table IV. As expected, for fixed lines the dynamic power consumption part is quite negligible. Comparing the UHD++ and DL scenarios, we see that both the absolute (1048 vs 1114 GWh) and incremental (300 vs 366 GWh) energy consumptions are even less correlated to volume (433 vs 10 EB) than to bitrates (27 vs 200 Mbps). This table also reports the energy intensity indicators obtained as the ratio of those two numbers in Wh/GB. This clearly shows that increasing the video bitrate enables decreasing the relative data transmission consumption when expressed in Wh/GB, confirming that such an efficiency indicator does not reflect the power consumption increase of more intensive Internet usage.
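The paradox is easy to verify from the table's own figures: absolute energy rises across scenarios while the Wh/GB indicator moves in a way that bears little relation to it. A quick numerical check:

```python
# (yearly volume in EB, yearly energy in GWh), values taken from Table IV
scenarios = {"Baseline": (0.73, 748), "HD": (49, 772),
             "UHD++": (433, 1048), "DL": (10, 1114)}

for name, (volume_eb, energy_gwh) in scenarios.items():
    # 1 EB = 1e9 GB and 1 GWh = 1e9 Wh, hence Wh/GB = GWh / EB
    print(f"{name:8s} {energy_gwh:5d} GWh   {energy_gwh / volume_eb:7.1f} Wh/GB")
```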
B. Evaluation of DTT caching
The LoCaT report [START_REF] Pickett | Quantitative study of the GHG emissions of delivering TV content[END_REF] estimates GHG emissions associated with serving TV content across different platforms, including digital terrestrial television (DTT), IPTV and OTT. Their functional unit was delivering one hour of video to a TV set. In one of their scenarios, the authors studied the opportunity of viewing VoD content through DTT broadcasting. This requires one additional home-caching device in every home, which would enable receiving content via the DTT signal, cataloging it, and storing it locally. When the user requests a given video, the device would first search for the content locally and would only stream it through OTT if absent. In a sense, this home-caching device plays the role of a personal CDN filled through DTT. The authors forecast a reduction of about 25% of OTT downloads, and devices exhibiting a stand-by power of 0.5 W and an on-power of 10 W during 3.5 h per day. This yields an average power of 1.9 W. We compare our previous FHD and UHD scenarios with variants adding one home-caching device per home, and a very optimistic reduction of both the peak percentage of OTT users ($s_v$) and total hours of OTT videos by 50% for the FHD scenario, and 25% for the UHD one. This difference could be explained by the fact that, FHD videos being significantly smaller, many more videos could be cached in the same device. Yearly energy consumption results are summarized in Table V. In both scenarios, the additional power consumption induced by the home-caching devices cannot counter-balance the minor absolute gains. This result contradicts the conclusions of the LoCaT study, which is based on misleading Wh/GB energy intensity indicators. Indeed, as already observed and as confirmed again in this table, reducing the OTT video traffic by 25% or 50% (both in volume and peak bitrate) yields a rather small absolute reduction of the global energy consumption. Surprisingly, a 25% reduction on the bandwidth-intensive UHD scenario leads to an energy reduction of 5.4%, whereas a higher usage reduction of 50% on the FHD scenario leads to a smaller relative energy reduction of 3%. Yet another example of how relative indicators can be counter-intuitive.
Finally, these different results suggest that an ideal strategy for such a small caching device would be to insert it directly within the OLTs with a direct forward connection to the GPON cards. This way the gains of this cache on the rest of the network elements (10GE uplinks, edge, core, CDN, etc.) would be preserved, but the additional power consumption of a single caching device would be shared by thousands of users. Using a conservative number of 8000 subscribers per OLT, and a constant power of 30W per caching device would yield a negligible additional energy consumption of about 0.7 GWh. Our model also reveals that filling this cache through DTT or over the network during low traffic periods will not make any practical difference regarding the energy consumption of the rest of the network. Note that this option is purely speculative since it is unclear whether this is even technically possible, and what would be the actual power of such a caching card.
VI. DISCUSSIONS
In this section, we discuss some choices made, limits and future work for our model. a) Conservative hypotheses: We emphasize that our observations on the non-proportionality (even for the relative plot of Figure 5) and on the sobriety vs efficiency conflict would be exacerbated through less conservative choices such as:
• Home-routers: we considered a very efficient ONU (2.5 W), but adding the associated home-router energy consumption would significantly flatten the absolute consumption variations.
• Peak correlation: decorrelating the baseline and VoD streaming peaks would allow the baseline infrastructure to better absorb part of the VoD traffic.
• Homogeneous tree: a more realistic, non-homogeneous distribution of the population would result in some nodes of the baseline scenario being overdimensioned, hence leading to better handling of the additional VoD traffic.
• Homogeneous technology: all our scenarios are based on the same building elements with a rather low granular capacity. Basing the baseline infrastructure on bleeding-edge technologies (e.g., WDM at 100 Gbps per channel, 10 Gbps GPON) would again yield an overdimensioned baseline network, whereas allowing the most traffic-intensive usages to use different technologies would flatten the variations. This last option should, however, be accompanied by an increase of embodied impacts because of anticipated renewal.
We also assumed that the ONUs are always on. Enabling ONUs/home-routers to be switched off when unneeded would result in a significant reduction of the absolute energy consumption. It is thus interesting to question how much a given usage prevents such equipment from being switched off. In this regard, VoD streaming has a rather low impact, with an average of 3.2 h a day in our scenarios. For the baseline usage, a reasonable assumption would be that they are switched off a few hours over nights and when the households are empty. An obvious worst-case usage is, however, smart-home equipment that requires a permanent connection.
b) Complexity of the real world: Just like previous bottom-up models, ours can only offer a simplified vision of a reality which is much more complex. For instance, network equipment exhibits a huge variability both in terms of capacity and efficiency. Equipment evolves with time, with increased capacity and efficiency. With many actors deploying network equipment, peak bitrate demand is not the only driver for the growth of the infrastructure: economic competition and geopolitical strategies also play an important role, leading to overdimensioning. As future work, it would thus be interesting to integrate all those aspects in such a model.
c) Restricted perimeters: In this work, we have not included user devices, datacenters, nor content creation and encoding. On the datacenter side, maximal video resolution and quality is expected to have a significant effect on the computing (encoding) and storage resources. Those parameters are also expected to play an indirect but important role in accelerating the renewal of end-user equipment, for instance towards larger 4K, HDR-enabled TVs, hence increasing the overall electricity consumption, but also manufacturing impacts.
d) Carbon footprint: Converting energy consumption to carbon footprint requires knowing emission factors. These factors are country-dependent but also time-dependent as the energy mix depends on the time or season. Moreover averaged emission factors, even if made temporally varying, are not necessarily correlated to the consequential effect of adding or removing a large body of electricity demand. Therefore, in this study, we have omitted this step on purpose.
e) Life cycle assessment: Most studies are limited to the use phase of equipments, omitting other phases of their life cycle (material production, manufacturing, transport, installation, maintenance, end-of-life). The main reasons are the high level of uncertainty of estimating the emissions of these other phases. Some works do attempt to include embodied impacts [START_REF] Ullrich | Estimating the resource intensity of the internet: A meta-model to account for cloud-based services in lca[END_REF], [START_REF] Pickett | Quantitative study of the GHG emissions of delivering TV content[END_REF] from an average ratio method that computes the scale of embodied emissions compared to the use phase. In this paper, we have focused on the use phase only, but we acknowledge the strong importance of including the other phases, as well as accounting for other environmental impacts (e.g., water footprint, human toxicity, abiotic resource depletion, ...). Properly allocating embodied emissions of shared equipment is as tricky as allocating static power consumption. In this regards, since our global approach bypasses the need for arbitrary allocation, we argue that the methodology proposed in this paper is well suited to be extended to estimate embodied impacts. Our model shall therefore be extended to enumerate all passive equipment that we have neglected so far (cables, shelters, buildings, racks, etc.), as well as installation and maintenance operations.
VII. CONCLUSION
In this paper we have presented a novel methodology to assess the relationship between a given usage and network power consumption. Looking at the global energy consumption rather than attempting to arbitrarily allocate power between the different usages allowed us to avoid the classical pitfalls. Our results confirmed that classical efficiency indicators do not reflect the power consumption increase of more intensive Internet usage, and might even lead to misleading conclusions. The bottom-up parametric network model we presented has the notable property of translating global average statistics to local smaller pools of inhabitants. This theoretical network model is, however, necessarily imperfect and this paper discussed many future work opportunities such as variation of the density of population, variability of equipments, broader boundary, adding a broadband radio access network, and modeling other use-cases. Another interesting future work would be to investigate how to extend our methodology to properly account for multiple use-cases whose peak demands are expected not to overlap.
Fig. 2. General OTT VoD infrastructure. Steps in black are the ones included in our boundary. Figure modified from [20].
Fig. 3. Network modeling of our video on demand use case. We restrict our boundary to the fiber-to-the-home network, a unique datacenter and one CDN.
Fig. 4. Representation of the fiber-to-the-home structure that connects the core network to the access network with GE and GPON ports.
Fig. 5. Energy consumption difference between our VoD/DL scenarios and the baseline. The respective video bitrates $R_v$ are shown as dots.
TABLE III. ANNUAL POWER CONSUMPTION IN GWh FOR EACH NETWORK PART.
Scenario | ONU | Access | National Core+Edge | Int. longhaul | CDN | Total
Baseline | 667 |   69   |          9         |      2.8      |  0  |  748
HD       | 667 |   71   |         17         |      10       | 2.2 |  767 (+3%)
FHD      | 667 |   72   |         25         |      15       | 2.5 |  782 (+5%)
UHD      | 667 |   84   |         70         |      43       | 11  |  875 (+17%)
UHD++    | 667 |  136   |        114         |      70       | 18  | 1005 (+34%)
DL       | 667 |  271   |        140         |      2.8      | 32  | 1113 (+49%)
TABLE IV. EFFICIENCY ESTIMATION BASED ON YEARLY VOLUME AND ENERGY CONSUMPTION.
Scenario | Yearly volume (EB) | Energy (GWh) | Wh/GB
Baseline |        0.73        |      748     | 1024
HD       |       49           |      772     | 15.8
FHD      |       81           |      790     | 9.7
UHD      |      257           |      901     | 3.5
UHD++    |      433           |     1048     | 2.4
DL       |       10           |     1114     | 111
TABLE V. OTT VERSUS {DTT + HOME CACHING DEVICES} SCENARIOS.
Scenario | OTT: s_v, GWh | OTT + DTT caching: s_v, GWh (network + home-cache)
FHD      | 20%, 790      | 10%, 1271 (768 + 502)
UHD      | 20%, 901      | 15%, 1362 (860 + 502) |
04121688 | en | [
"shs.eco"
] | 2024/03/04 16:41:26 | 2022 | https://theses.hal.science/tel-04121688/file/CompleteThesis.pdf | Acknowledgements (fragment): thanks to Igwe, Ayaz, Mamoudou, Nabeel, Sumeet, Ubaid, Asma, Houssem, Gotham, Charles, Camille, Geofroy, Chapo, Sacko, Lucas Oussama, Kevin, Kadiatou I Ali.
Résumé
Civil conflicts are frequent in Nigeria and have contributed to holding back the country's economic progress. The first empirical essay of this thesis studies the effects of one of the most recent episodes, the Boko Haram conflict. It examines the effects of conflict exposure on household food security and determines whether households cope with this major risk to their well-being by relying on their resilience capacity, notably wage labour supply and access to livelihood networks and collective infrastructure.
The main conclusion of this study is that households cope better with shocks if they are endowed with a strong ex ante resilience capacity, which shows the usefulness of resilience as a development concept in the Nigerian context. The second and third empirical essays focus on the theme of relative deprivation, which ranks among the main explanations for the frequent conflicts. Both essays approach deprivation from the angle of access to education. The second essay examines questions relating to access to university education and the overall implication for accumulated human capital. It finds that the nature of the distribution of universities affects not only the distribution of human capital at the tertiary level, but also at the basic primary and secondary education levels. The third empirical essay addresses the question of the persistence of inherited inequalities, in particular whether colonial inequalities, proxied by districts' human capital levels, contribute to the intergenerational transmission of human capital. The essay mainly finds that district inequalities are fairly persistent, principally because of the legacy of Christian missionaries in the provision of social services such as schools and general social infrastructure.
Setting the agenda
The following questions remain of interest to economists: who gets what, when, how, why and at whose cost? How to increase national economic productivity and ensure its equitable distribution occupies a central position in Nigeria's political and economic development. Many of the political and economic struggles the country has faced during her six decades of independent existence originate from this central issue.
The country survived this multitude of struggles, occurring mostly after her independence from Great Britain in 1960, but the consequences were not lost on her: in spite of an abundant endowment of human and material resources, the struggles reduced her to a position where nearly 50% of her over 210 million population live in abject poverty and remain vulnerable to frequent economic shocks. Although contemporary political governance leaves much to be desired, it has been shown that legacies from colonialism, including extractive institutions and regional economic imbalance, make the political economic system difficult to sustain. Notable examples such as China nevertheless demonstrate that robust policies based on empirical evidence may reverse such trends. This thesis empirically investigates three key areas of potential policy intervention and makes policy recommendations.
Nigeria's political economy context
Nigeria is home to about 371 nationalities, ranks as the 3rd most culturally diverse country in the world, and has a long history of ethno-religious tensions dating back to the precolonial period. The existence of Nigeria in its current geographical and administrative form results mainly from the amalgamation of the Southern and Northern colonial protectorates in 1914, which has also been blamed for many of the bitter rivalries that have been part of the political and social landscape of the country ([START_REF] Papaioannou | Political instability and discontinuity in nigeria: The pre-colonial past and public goods provision under colonial and post-colonial political orders[END_REF]). Shortly after independence in 1960, Nigeria was plunged into a deadly three-year civil war beginning in 1967, in which part of the territories of the former southern protectorate (the eastern region) attempted unsuccessfully to secede. The war was essentially provoked by perceptions of marginalisation in the distribution of political and economic power. In the immediate aftermath of the civil war, Nigeria reformed her political institutions, evolving a more formal and proportional revenue-sharing arrangement under its fiscal federalism structure, and provided a more explicit sharing formula for her oil revenues. This, however, did not eliminate the ethno-regional-religious agitations and rising religious violence that have been a destabilising force in the political economy. Currently escalating unrest by the Niger-Delta militants in the south-south region, farmers and herders in the north central, and Boko Haram in the northeast is a constant reminder of the Nigerian political economy realities ([Krakrafaa-Bestman, 2018]). As such, policy makers need not only to design policies to cushion the effects of these crises, but also to address the root causes, which have been attributed to relative deprivation and the inequitable distribution of economic resources ([Agbiboa, 2013]). This policy thrust goes to the core of this thesis: the research theme borders on the consequences of the Boko Haram conflict and the role of resilience capacity, the distribution and human capital consequences of access to schools, and the nature of the persistence of economic inequalities over time. The outlines of the chapters are presented below:
Drawing lessons from the Boko Haram conflict
Global early warning systems hardly suffice to prevent man-made disasters, nor is aid sufficient to overcome the consequences. Rather, if domestic policies cannot immediately address the root causes, they are expected to build livelihood systems that are resilient to the demands of shocks and stressors. This is the resilience approach to development, which calls for the integration of sustainable livelihood systems comprising simultaneous growth in human capital accumulation, social protection and public goods in the process of development ([START_REF] Tendall | Food system resilience: defining the concept[END_REF]; [Bene et al., 2016]). This is expected to ensure that unexpected adverse stressors do not have long-lasting consequences. In the political economy of Nigeria, the penetration of social protection is abysmal despite the prevalence of economic shocks from natural and man-made sources. Such settings provide an ideal environment to explore the interactions among vulnerability, resilience and economic shocks. Exploiting this setting, chapter one uses a unique identification strategy based on three rounds of panel data to test the role of resilience capacity in mitigating shocks arising from a deadly conflict. It has been difficult to identify the causal effects of violent conflicts due to the dearth of longitudinal data in the appropriate settings. The chapter overcomes this shortcoming by drawing on panel data collected before and after population exposure to the Boko Haram conflict. Furthermore, while most of the previous studies in this area only investigate the short-term consequences and assume a direct relationship between conflict shocks and food security, the chapter investigates resilience capacity as an intervening variable and as a potential channel extending the consequences of the conflict from the short to the long term. It casts resilience as an absorber of the food security shocks generated by the conflict, and by identifying that resilience cushions the effects of the conflict through its various pillars, the paper aligns with the growing literature on a resilience-based approach to managing development.
The human capital consequences of limited access to universities
Improved access to education, especially at the tertiary level, should be a policy priority in developing countries. Schooling generally supports social and economic development through the human capital of the population, which is part of the resilience of the general economy ([Valero and Reenen, 2019]). As a result, the nature of the spatial distribution of the different categories of schools may generate economic inequalities within territories ([Gibbons and Vignoles, 2012]; [START_REF] Spiess | Does distance determine who attends a university in germany?[END_REF]). This chapter investigates how distance to university during childhood affects individuals' educational attainment. It uses the first three waves of the Living Standard Measurement Survey (LSMS) dataset for Nigeria, which provides information on households' location and individuals' completed years of schooling, and combines it with a self-constructed database of the historical spatial distribution of universities. In particular, relying on GPS coordinates, the database retrieves the shortest straight-line distance between the residence of each household and the nearest university for each individual at the ages of 12 and 18 years.
Using instrumental variable strategies, the chapter finds a negative effect of distance to university on completed years of schooling. The result is robust to accounting for potential migration bias, which may occur if the individual's current place of residence differs from the area of residence during the teenage years. It also confirms strong neighbourhood effects indicating long-term inequality of geographical access to universities. Furthermore, taking advantage of the recent establishment of 12 public universities and a difference-in-differences approach, the chapter provides evidence of a positive spin-off effect of access to universities on student retention in secondary schools. The new universities lead to a reduction of 2.5 percentage points in the intention to drop out of secondary school for those who live near the new universities, which is a significant contribution because the cross influence of universities on the lower levels of education is rarely considered in previous studies.
Making sense of persistence of inequalities of historical origin
That African countries rank highly in indices of relative inequality is generally well known.
Although earlier studies trace the origin of this to colonial heritage, they generally fail to account for the mechanisms of transmission across generations. The theory of intergenerational transmission is such that dynasties (which may be families or social groups) maintain inherited inequalities. However, the estimation of intergenerational elasticities is fraught with potential biases, including unobserved ability correlation and endogenous neighbourhood quality ([Zimmerman, 2001]; [Glick and Sahn, 2000]). Using the 2019 Nigerian Living Standards Survey (NLSS), this chapter investigates the persistence of economic opportunities captured through educational attainment, and relies on the original distribution of schools in Nigeria via the colonial Christian missionaries to identify the important estimates. I employ neighbourhood capital, defined as the average years of education within the neighbourhood, as an intervening influence on the intergenerational transmission of human capital. This sets up a causal mediation framework where parental and neighbourhood capitals are jointly determined. Then, drawing from historical accounts, I use measures of the historical activities of the missionaries as instruments to empirically parse out the direct and indirect effects of parental human capital. I argue that parents influence their children's human capital directly by investing in their education, and indirectly by deciding the schooling neighbourhoods. The parental and neighbourhood capitals are necessarily linked through neighbourhood sorting based on educational preferences. The instrumental variable causal mediation analysis which harnesses these relationships is then employed to parse out the direct and indirect effects. The historical activities of the missionaries are assumed to jointly affect the parental and neighbourhood capitals, which is the identifying assumption of the causal mediation framework. Using conventional and heteroscedasticity-based instruments, the study finds that parental and neighbourhood capitals significantly contribute to the human capital of offspring, where capitals are respectively denoted by the average education of parents and neighbourhoods. Furthermore, when the total effect is decomposed based on the causal mediation framework, parental capital accounts for about 25% while neighbourhood capital accounts for 75%. This implies that supply-side policies such as raising the quantity and quality of schools across districts could be an important option for raising equality of economic opportunities.
References
[Spiess and Wrohlich, 2010] Spiess, C. K. and Wrohlich, K. (2010). Does distance determine who attends a university in Germany? Economics of Education Review, 29(3):470-479.
[Tendall et al., 2015] Tendall, D. M. et al. (2015). Food system resilience: defining the concept. Global Food Security, 6:17-23.
[Valero and Reenen, 2019] Valero, A. and Reenen, J. V. (2019). The economic impact of universities: Evidence from across the globe. Economics of Education Review, 68:53-67.
[Zimmerman, 2001] Zimmerman, F. J. (2001). Determinants of school enrollment and performance in Bulgaria: The role of income among the poor and rich. Contemporary Economic Policy, 19(1):87-98.
Introduction
As the frequency of natural disasters and civil conflicts spikes globally, rapid response systems, the likes of early warning systems facilitating rapid intervention, assume prominence ( [START_REF] Smith | Famine early warning and response: the missing link[END_REF]). While such interventions alleviate crises, they seldom address the underlying vulnerability. Occasionally, the short-term interventions generate serial dependence of individuals and households on aid and handouts ( [Bene et al., 2016]; [START_REF] Alinovi | Towards the measurement of household resilience to food insecurity: applying a model to palestinian household data. Deriving food security information from national household budget surveys[END_REF]). Some of these concerns motivate the recent calls for the resilience approach to development, whereby building resilience capacity becomes a primary concern of development planning and emergency interventions [START_REF] Tendall | Food system resilience: defining the concept[END_REF]). The resilience approach prioritizes the mobilization of resources through integrative livelihood strategies, human capital combination, social protection, nutritional health, and other private and public goods, which in times of shock protect households from extreme consequences ( [Bene et al., 2016]). In the political economy of most developing countries, the penetration of social protection is abysmal despite the prevalence of economic shocks from natural and man-made sources. Such settings provide an ideal environment to explore the interactions among vulnerability, resilience and economic shocks. Exploiting this setting in the case of Nigeria, this paper uses a unique identification strategy based on three rounds of panel data to test the role of resilience capacity in mitigating shocks arising from a deadly conflict. The evidence from this paper might be useful for general development policies, particularly those related to emergency interventions.
From the perspective of economic theory, resilience protects households from loss of economic welfare and facilitates recovery from shocks ([Bene et al., 2016]; [START_REF] Alinovi | Towards the measurement of household resilience to food insecurity: applying a model to palestinian household data. Deriving food security information from national household budget surveys[END_REF]).
Acknowledging these roles, development agencies such as the World Bank (WB), the Food and Agriculture Organization (FAO), and the World Food Program (WFP) devote substantial resources to encouraging the build-up of household resilience and, to facilitate empirical assessment of the importance of the concept, the agencies commission work on streamlining its measurement. This incentive has motivated more vigorous assessment of the theoretical links between resilience and food security in particular, with most of the studies adopting the harmonized framework. In the meantime, the frame of analysis is predominantly cross-sectional, whereas it might be more appropriate to identify the role of resilience in dynamic settings where the dynamics of shocks, welfare and the intervention of resilience may be exploited ([START_REF] Smith | Does resilience capacity reduce the negative impact of shocks on household food security? evidence from the 2014 floods in northern bangladesh[END_REF]). In the particular case of shocks relating to violent conflicts, the difficulty of obtaining before-and-after longitudinal data with which to investigate the role of resilience confines most of the studies to cross-sectional analysis. The present study is distinguished from most of the previous studies in this respect because of the adoption of the setup of [START_REF]Household resilience to food insecurity: evidence from tanzania and uganda[END_REF] and the replacement of its self-reported shocks with objective conflict shocks.
Therefore, this paper uses the shocks originating from the battles of Boko Haram, one of the leading violent terror groups in the world, to test the role of resilience capacity in shock mitigation. Most of the studies linking conflict and food security only investigate the short-term consequences and assume a direct cause-and-effect relationship between conflict shocks and food security. This study extends this literature by investigating resilience capacity as an intervening factor and as a potential channel extending the immediate consequences of the conflict. The study casts resilience as an absorber of the food shocks generated by the Boko Haram conflict, which are expected to threaten household food security. By identifying that resilience cushions the effects of the conflict through its various pillars, the paper aligns with the growing literature on a resilience-based approach to managing development.
The remainder of the paper is organized as follows: Section 2 discusses the related literature and background of the study. Section 3 provides an overview of the data and descriptive statistics. Section 4 estimates the basic short-run relationships among the key variables.
Section 5 extends the analysis to the long-run through the effects of the conflict exposure on dimensions of resilience capacity. Section 6 reports some robustness checks, and section 7 concludes with policy recommendations.
Literature and background of the study
The conflict
Violent conflicts such as the Boko Haram insurgency herald many disruptions, including in food systems ([START_REF] Souza | Conflict, food price shocks, and food insecurity: The experience of afghan households[END_REF]). The Boko Haram conflict targets important economic activities such as farming and informal trading, and previous studies acknowledge that this pattern is behind most of its economic impact, particularly on the ability of households to access food and other livelihood resources ([Falode, 2016]; [START_REF] Adelaja | Effects of conflict on agriculture: Evidence from the boko haram insurgency[END_REF]). While the apparent objective of Boko Haram is not directly related to the food system, food is certainly used as a means to the end, and the food system is incidentally compromised through violent exchanges between the state and the insurgents ([START_REF] Bertoni | Education is forbidden: The effect of the boko haram conflict on education in north-east nigeria[END_REF]; [START_REF] Messer | Conflict, food insecurity and globalization[END_REF]). Boko Haram adopts a menu of strategies to drive its objectives: first, it launched battles using massive numbers of foot soldiers, annexing and occupying territories in the north-east of the country. This form of attack usually involved clashes with the state forces, and from 2013 the conflict intensified following a more determined drive by the state to recapture annexed territories and eradicate the insurgency ([Onapajo, 2017]). Despite this, the Boko Haram attacks did not stop but grew more clandestine and concentrated in less-governed spaces such as farmlands and local markets. Hence, the estimates of this study are most relevant in the rural areas where the attacks remained consistent throughout the period. In this later phase, the group became the world's most deadly terrorist group in terms of casualty count, and direct confrontation with state forces was eschewed in favor of targeted and mostly suicide attacks ([Omeni, 2018]). Consequently, the sabotaging of economic activities through raids on agricultural farms and the general disruption of essential economic activities escalated within a few soft-target areas ([START_REF] Campbell | Boko haram's deadly impact[END_REF]). Figure 2.1 clearly demonstrates this transition, where suicide fatalities rose sharply from 2014.
One can therefore imagine the extent of disruption in the food system, given that the transition focused attacks on agrarian hot spots ([Onapajo, 2017]). Cases of damage to infrastructure and personal assets reportedly followed a similar trend ([Hoek, 2017]). This scenario illustrates potential mechanisms that might drive the expected negative consequences of the conflict on food security, namely through the limitation of food production and distribution ([Kimenyi et al., 2014]; [START_REF]Household resilience to food insecurity: evidence from tanzania and uganda[END_REF]).
Food security and resilience capacity
According to [Spedding, 1988], the household is a central unit of the food system and is subject to destabilization by economic shocks, both idiosyncratic and general. Under normal circumstances, the household maintains its members' economic welfare by aligning its components with the immediate social and economic environments. Similarly, while facing economic shocks, the household remains central to austerity-coping decisions, including deciding income-generating activities, allocating food and non-food expenditures, and choosing risk management strategies, which makes it a suitable nucleus of resilience analysis ([START_REF] Cherchye | The collective model of household consumption: a nonparametric characterization[END_REF]). The concept and measurement of resilience have been tremendously transformed, driven mainly by the evolution of the construct and the diversity of disciplines in which the construct is appropriated. The FAO of the United Nations defined resilience as "the capacity of a household to bounce back to a previous level of food security after a shock" and pioneered the Resilience Index Measurement and Analysis (RIMA) approach, which is widely applied in the field of food security analysis ([START_REF] Alinovi | Towards the measurement of household resilience to food insecurity: applying a model to palestinian household data. Deriving food security information from national household budget surveys[END_REF]).
RIMA denotes resilience as a latent proxy index, which may be directly or indirectly measured ([START_REF] Alinovi | Measuring household resilience to food insecurity: application to palestinian households[END_REF]). Under the framework, the latent proxy is usually estimated by reducing a large number of theoretical variables to a single resilience index derived from known pillars measured at the first stage of the two-stage estimation procedure.
The direct measurement approach mostly uses structural models such as the MIMIC (Multiple Indicators Multiple Causes) and aims at describing households that may be more/less likely to resist shock at a particular point in time. On the other hand, the indirect approach focuses on the theoretical determinants of resilience to draw inference for policies or predict the dynamic path of resilience. However, [START_REF] Ciani | Testing for household resilience to food insecurity: Evidence from nicaragua[END_REF] pioneered a method of resilience capacity measurement that may be applied to predict the consequences of shocks on food security in dynamic settings, and which acts as a bridge between the direct and indirect RIMA measures. This method has been tested by [START_REF] Errico | A dynamic analysis of resilience in Uganda[END_REF] and [Kozlowska et al., 2015] and is adopted for the present study. The Technical Working Group on Resilience Measurement (TWGRM), a group of expert stakeholders, provides the recommendations guiding the selection of variables for the estimation [START_REF] Errico | A dynamic analysis of resilience in Uganda[END_REF]).
The resilience capacity measured under this approach incorporates the idea that households respond to economic shocks by drawing down on accumulated resources and utilizing available capacities to develop optimal coping strategies [START_REF] Fao | Rima -ii resilience index measurement and analysis -ii[END_REF]). Shocks such as violent conflicts may specifically target pillars of resilience including public infrastructure and income generation assets, which might then extend the duration of the consequences of the initial shocks. Therefore, in addition to the short-term analysis of the effects of the conflict on food security, the study considers the potential long-term impact through its effects on the resilience resources that should mitigate future shocks.
Relating the conflict, food security and resilience capacity
The prevailing state of conflict and humanitarian crisis in north-east Nigeria is attributed to the Boko Haram insurgency, which is rooted in a complex combination of institutional failures, extreme religiosity and welfare limitations ([Iyekekpolo, 2016]). Apparently, the general state of economic welfare including food security has dipped since the inception of the crisis. The FAO reports that about 3.7 million individuals would become food insecure in the region by 2018, and the WFP estimates that out of the 14.8 million people exposed to the crisis, about 8 million have become food insecure ( [START_REF] Baliki | Drivers of resilience and food security in north-east nigeria: Learning from micro data in an emergency setting[END_REF]).
These reports indicate that agricultural productivity has declined in the region, but most importantly the functioning of local agricultural markets has been hampered. This implies that food scarcity and rising food prices might be prevalent. As a result, food provisioning strategies such as relying on less preferred foods, skipping meals and so forth have risen among the exposed households who are desperately attempting to survive the conflict ([Marc et al., 2015]).
Unlike other settings of shocks, "food wars" are usually part of civil conflicts, whereby food supply channels are targeted by actors in the conflict. The "Boko Haram" (BH) insurgency emerged primarily to discourage non-Islamic forms of education in the Muslim-dominated territories through advocacy. The first violence claimed by the insurgents in the country was a series of attacks against military formations in Bauchi state on the 29th of July 2009 after their first leader, Mohammed Yusuf, was killed by the Nigerian security forces ( [Adesoji, 2010]). Subsequently, Boko Haram metamorphosed into a terror group involved in violent confrontations with the state. Millions of people have been displaced from their homes and thousands killed in the course of the war ( [START_REF] Adelaja | Effects of conflict on agriculture: Evidence from the boko haram insurgency[END_REF]).
The group employs diverse tactics in the struggle against the state, one of which is the widely condemned kidnapping of about 300 high school girls in 2014 ([Iyekekpolo, 2016]).
Of most concern for this study is the targeting of agricultural production through the kidnapping of farmers and the destruction of farm infrastructure such as irrigation and storage facilities. Additionally, the BH targets and destroys markets, roads, bridges, and other factors that constitute an enabling environment for the production and distribution of foods ( [Campbell, 2019]). These have raised the concerns of stakeholders about possible long-term damages to economic welfare, and food security in particular ( [Fao, 2015]).
Pioneering studies of this conflict detected negative effects on food supply due to substantial loss of agricultural production (see; [START_REF] Adelaja | Effects of conflict on agriculture: Evidence from the boko haram insurgency[END_REF]), leading to widespread food insecurity ([George et al., 2019]). However, beyond the immediate food systems, essential resources supporting household food security and general welfare resilience have also been affected. [Hoek, 2017] reports direct disruption of the functioning of local markets by the conflict due to actual attacks and threats of attacks, while [START_REF] Bertoni | Education is forbidden: The effect of the boko haram conflict on education in north-east nigeria[END_REF] reports substantial decrease in human capital accumulation from the destruction of schooling infrastructures and threats to life. Similarly, [START_REF] Chukwuma | Armed conflict and maternal health care utilization: evidence from the boko haram insurgency in nigeria[END_REF] document substantial decrease in the production and consumption of health services.
Theoretically, the erosion of resilience capacities as reported here would potentially leave affected households stuck in poor economic welfare and food insecurity long after the conflict might have been eliminated. For example, inadequate supply of health services could increase the frequencies of illnesses and draw down on household food consumption budget, and this would likely be the case in this particular conflict given that cases of epidemics are already being reported in the exposed communities [START_REF] Adamu | Trends of non-communicable diseases and public health concerns of the people of northeastern nigeria amidst the boko haram insurgency[END_REF]).
Therefore, policy makers might be interested in understanding the immediate and the long-term implications of the conflict on food security and related outcomes. Using similar shocks, previous studies have demonstrated the importance of resilience capacity for the household's long-term survival. This paper stands out from the previous studies on this conflict through its investigation of the role of resilience in the context of violent conflict that compromises food security and resilience capacity simultaneously. In other contexts, while previous studies document negative effects of shocks on food security, they also demonstrate that resilience capacity intervenes by wholly or partially absorbing the food supply shocks thereby alleviating the adverse welfare consequences of the shocks for households. After the 2014 catastrophic floods in Bangladesh, [START_REF] Smith | Does resilience capacity reduce the negative impact of shocks on household food security? evidence from the 2014 floods in northern bangladesh[END_REF] demonstrate the role of resilience in ensuring household food security recovery, particularly the pillars of asset holdings, and access to basic services and social safety nets. [START_REF] Bruck | The effects of violent conflict on household resilience and food security: Evidence from the 2014 gaza conflict[END_REF] demonstrate similar pattern of resilience mediation in the case of the Israel-Palestine conflict in the Gaza strip. The study identifies social safety nets and access to basic services as important dimensions of resilience, which attenuated the welfare-reducing effects of the conflict. The same mechanism operates also in the case of idiosyncratic shocks as self-reported by the households. For this case, [START_REF] Bruck | The effects of violent conflict on household resilience and food security: Evidence from the 2014 gaza conflict[END_REF] identify the cushioning effects of resilience capacity, particularly the adaptive capacity.
Data
Conflict in the neighborhood of households
Overall, the empirical strategy relies on the difference-in-differences (DiD) estimator to identify the effects of exposure to the conflict on the relevant outcomes. Accordingly, this section adapts the estimation data to the DiD set-up, including the main assumption of parallel trends. To create the required treatment and control groups, the relevant households are classified as exposed and non-exposed based on their proximity to the Boko Haram conflict battles. Under this type of classification, the parallel trend assumption may be violated due to certain time-varying economic conditions that predispose locations to conflicts, such as poverty ([Abadie, 2006]; [START_REF] Blattman | Civil war[END_REF]; [START_REF] Pinstrup-Andersen | Do poverty and poor health and nutrition increase the risk of armed conflict onset?[END_REF]). To mitigate this, the dynamic spatial extension of the conflict is closely monitored and used to pick out the locations to be included in each of the exposed and control groups. This mitigation requires that each of the designated treatment and control groups experiences exposure to the conflict, but during different data collection rounds. This maneuver potentially mitigates endogenous selection into conflict exposure because economic conditions in exposed locations are likely to be comparable, irrespective of the time of exposure. Hence, the identification relies on variation in the timing of exposure and on successive data collection rounds. The data selection process is described below:
The post-harvest visits of the first three waves of the Nigerian Living Standard Measurement Survey (LSMS), collected by the World Bank and Nigeria's National Bureau of Statistics (NBS), are used in the study. The nationally representative LSMS panel contains comprehensive information on household socio-demographic characteristics and consumption, including a dedicated module for food security. The periods covered by the three waves are between February 2011 and April 2011 (hereafter: baseline), between February 2013 and April 2013 (hereafter: period 1), and between February 2016 and April 2016 (hereafter: period 2). The survey data is accompanied by location longitudes and latitudes, which may be used to merge the data with other geo-referenced data sources such as the Armed Conflict Location and Event Data (ACLED) project (Raleigh, 2010). Using a string search within the ACLED database, conflict event data involving Boko Haram in Nigeria are selected and spatially merged with the LSMS households. This allows a spatial proximity analysis determining the distance in kilometers (km) of a household's location from dated conflict events.
In partitioning the households into exposed and non-exposed households, the former must live within a distance close to any Boko Haram battle involving at least one fatality.
However, the distance should be such that not all the households are considered exposed in a given period. Two buffers of radii 5 km and 7 km are created around each conflict event based on distance bands already established for this conflict (see [START_REF] Bertoni | Education is forbidden: The effect of the boko haram conflict on education in north-east nigeria[END_REF]). Only the households residing within any of these buffers are included in the estimation sample. Restricting the main estimations to the baseline and period one only, the dichotomy of exposed and control groups is determined by time of exposure as follows: the households exposed to events occurring during the time interval between the baseline and period one are designated as the exposed group, while those exposed to events occurring between periods one and two fall into the control group. Under this restriction, the control group is strictly exposed between periods one and two, whereas the exposed group is allowed to include certain households exposed consecutively in the two periods. Finally, households never exposed to any conflict were eliminated. The geographical distribution of the samples is shown in figure 2.2 (data source: LSMS and ACLED). The classification of households by proximity and timing can be sketched as in the code below.
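The following Python sketch illustrates, under simplifying assumptions, how household GPS coordinates could be matched to ACLED battle events to build the 5 km / 7 km exposure buffers and the timing-based treatment/control split. The column names (lat, lon, event_date, fatalities) and the cut-off dates are illustrative placeholders rather than the exact variables and survey windows of the LSMS/ACLED files.

import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometers between points given in decimal degrees.
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def classify_exposure(households, events, radius_km=5.0):
    # households: DataFrame with columns hhid, lat, lon
    # events: DataFrame of Boko Haram battles with columns lat, lon, event_date, fatalities
    events = events[events["fatalities"] >= 1]            # keep battles with at least one fatality
    early = events[events["event_date"] < "2013-05-01"]    # between baseline and period 1 (assumed cut-off)
    late = events[(events["event_date"] >= "2013-05-01") & (events["event_date"] < "2016-05-01")]
    rows = []
    for _, hh in households.iterrows():
        d_early = haversine_km(hh.lat, hh.lon, early["lat"].values, early["lon"].values)
        d_late = haversine_km(hh.lat, hh.lon, late["lat"].values, late["lon"].values)
        if (d_early <= radius_km).any():
            group = "exposed"    # exposed between baseline and period 1
        elif (d_late <= radius_km).any():
            group = "control"    # exposed only between periods 1 and 2
        else:
            group = "drop"       # never exposed: excluded from the estimation sample
        rows.append({"hhid": hh.hhid, "group": group})
    return pd.DataFrame(rows)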
Description of main non-conflict variables
Food security and controls
Three main food security measures are considered in this paper: the coping strategy index (CSI), the food consumption score (FCS), and the share of food consumption expenditure in total household expenditure per capita. While the CSI captures the behavioral and food utilization aspect of food insecurity ([Maxwell, 1996]; [START_REF] Maxwell | Alternative food-security indicators: revisiting the frequency and severity of 'coping strategies[END_REF]), the share of food expenditure captures access to food through household purchasing ability, and the FCS captures food availability through the diversity of household nutritional intake ([Lovon and Mathiassen, 2014]). Except for the FCS, which runs in the opposite direction, higher values of the measures indicate higher food insecurity. Having utilized other household heterogeneities in the computation of household resilience capacity, the control variables are selected to reflect mainly the structural characteristics of the households, including age, gender, schooling and occupation of the household head, as well as household size and the proportion of children in the household. Table 2.1 summarizes the baseline control variables for all the estimations and compares them across exposure status. The household heads in the exposed group are slightly younger and are more likely to be in a polygamous marriage; otherwise the control variables are balanced across the exposure divide. This is in line with the objective of the data selection strategy and reveals that the households are quite comparable in the absence of conflict exposure, lending credence to the identification strategy.
The relevant food security measures are computed as follows:
FS_{it} = \sum_{i=1}^{n} f_{it} \times w

FS_{it} stands for both the CSI and the FCS. For the CSI, f_{it} represents the frequency of a coping strategy based on the number of days in the past seven days that such strategies were used, and w represents weights based on the severity of the strategy ([WFP, 2008]; [Maxwell, 1996]).
For the FCS, f_{it} represents the number of standard food classes that the household consumed during the past seven days and w the weights based on the micro-nutrient contents of the food classes ([WFP, 2008]). The food ratio is calculated as the weekly per capita household food expenditure divided by the total weekly expenditure per capita. A small worked sketch of these computations is given below.
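As an illustration only, the following Python sketch computes the three measures for one household-period. The FCS weights shown are the standard WFP food-group weights, while the CSI severity weights are placeholders; both should be replaced by the weights actually used in the study.

# Assumed WFP food-group weights for the FCS (staples 2, pulses 3, vegetables 1,
# fruit 1, meat/fish 4, milk 4, sugar 0.5, oil 0.5).
FCS_WEIGHTS = {"staples": 2, "pulses": 3, "vegetables": 1, "fruit": 1,
               "meat_fish": 4, "milk": 4, "sugar": 0.5, "oil": 0.5}

# Placeholder severity weights for coping strategies (illustrative, not the study's).
CSI_WEIGHTS = {"less_preferred_food": 1, "borrow_food": 2, "limit_portion": 1,
               "restrict_adult_intake": 3, "skip_days": 4}

def food_consumption_score(days_consumed):
    # days_consumed: dict food_group -> number of days (0-7) consumed in the past week
    return sum(min(days, 7) * FCS_WEIGHTS[g] for g, days in days_consumed.items())

def coping_strategy_index(days_used):
    # days_used: dict coping_strategy -> number of days (0-7) used in the past week
    return sum(min(days, 7) * CSI_WEIGHTS[s] for s, days in days_used.items())

def food_ratio(weekly_food_exp_pc, weekly_total_exp_pc):
    # Share of weekly per capita food expenditure in total weekly expenditure per capita.
    return weekly_food_exp_pc / weekly_total_exp_pc

# Example usage
fcs = food_consumption_score({"staples": 7, "pulses": 3, "vegetables": 5, "fruit": 1,
                              "meat_fish": 2, "milk": 0, "sugar": 4, "oil": 6})
csi = coping_strategy_index({"less_preferred_food": 3, "borrow_food": 1,
                             "limit_portion": 2, "restrict_adult_intake": 0, "skip_days": 0})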
Computing resilience capacity
As discussed in section 2.2.2, the method adopted to measure the RCI in this study is based on the approach developed by [START_REF] Ciani | Testing for household resilience to food insecurity: Evidence from nicaragua[END_REF]. Under this approach, which embraces the framework of the TWGRM, the resilience to food insecurity of a given household at a given time is assumed to depend primarily on the pillars of adaptive capacity (AC), access to assets (Assets), access to basic services (ABS) and social safety nets (SSN), which are indices computed at the first stage of a two-stage factor analysis strategy using variables reported at the household level (see table A.6).
At the second stage, the index is calculated as specified below using the already defined index notations:
RCI_i = f(AC_i, Assets_i, ABS_i, SSN_i)    (2.1)
where i indexes households. [Bene et al., 2016] provide useful guidelines which the study followed in the selection of suitable variables from the LSMS. In practical terms, since the variables measure similar tendencies, the first-stage variables are retained if they score up to 60 percent factor uniqueness. The retained variables used in the second stage are the ones displayed in Figure 2.3. A schematic sketch of the two-stage procedure is given below.
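The two-stage construction can be sketched as follows, here using principal component analysis as the dimension-reduction step purely for illustration; the study itself relies on factor analysis, and the variable groupings per pillar are hypothetical placeholders standing in for table A.6-style inputs.

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical mapping of observed household variables to resilience pillars.
PILLARS = {
    "ABS": ["dist_water", "dist_school", "dist_health", "electricity"],
    "AC": ["educ_head", "n_income_sources", "share_active_members"],
    "SSN": ["cash_transfers", "remittances", "food_aid"],
    "Assets": ["land_owned", "livestock_tlu", "wealth_index"],
}

def first_component(df_block):
    # Standardize the block and return the first principal component score.
    scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(df_block))
    return scores[:, 0]

def resilience_capacity(df):
    # Stage 1: one index per pillar; Stage 2: combine the pillar indices into the RCI.
    pillar_scores = pd.DataFrame({p: first_component(df[cols]) for p, cols in PILLARS.items()},
                                 index=df.index)
    pillar_scores["RCI"] = first_component(pillar_scores[list(PILLARS)])
    return pillar_scores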
Attrition from the sample
Overall, the attrition in the Nigerian sample of the LSMS is known to be low at about 4% for the data collected during the period of estimations in this paper (see: [Osabohien, 2018]).
However, the sample was further restricted for the analysis, eliminating four out of six political regions of Nigeria where the Boko Haram conflict does not exist. In particular, the North-East and North-West regions (the epicentre of Boko Haram) were retained, whereas the South-East, South-West, South-South and North-Central regions were eliminated.
This technically reduced the sample from 5,000 to 1,589 households, of which 1,511 were interviewed in two consecutive waves (2010-2016), and 11 further households were dropped due to missing values, resulting in an attrition rate of 5.6%. In general, under normal residential relocation of households, the LSMS team traces and re-interviews such households in their new locations, but this was not the case for these 78 households, suggesting that the attrition was due to the conflict. Although this attrition rate is considerably low, this section conducts a falsification test to confirm that attrition does not bias the estimates. Defined as households missing during estimation period one, attrition is estimated as a function of conflict exposure using the equation specified below:
Attrition_{i1} = \alpha + X'_{i0}\delta + \theta_c + \epsilon_i    (2.2)
where Attrition_{i1} is the attrition dummy variable, and the other variables and parameters remain as defined previously. As shown in table A.4, attrition is related to neither conflict exposure nor the control variables, which is reassuring. Nevertheless, the levels of resilience are weakly correlated with attrition: the coefficients of ABS and AC are negative and marginally significant at the 10 percent level. If households of low resilience capacity in their baseline conditions disintegrate or relocate upon exposure to the conflict, this might introduce a downward bias to the moderating effects of resilience capacity, and this should be remembered when interpreting the estimated role of resilience.
Estimation of the direct effects
2.4.1 The conflict and food (in)security: direct relationship
In the meantime, this section ignores the potential linkage between food security and resilience capacity and investigates the basic relationships between the Boko Haram shocks and food insecurity. In particular, the section estimates the average effects of the conflict without accounting for the mediation of resilience capacity. The extension of these analyses in section 4.4 explores the heterogeneous effects according to the level of resilience capacity, and this sheds some light on the theoretical roles of resilience capacity. In general, the identification is based on the difference-in-differences (DID) estimator, where the main outcomes are continuous variables FS_{it} denoting the various measures of food (in)security.
The treatment variable Conflict_i assumes two forms: when denoted as a dummy variable, Conflict_i equals 1 if, as at the 2012/2013 period, the household resides within any of the buffer zones described earlier; as a partially continuous variable, Conflict_i equals the conflict intensity conventionally represented by the fatalities arising from the conflict. The non-parametric DID estimator \alpha estimates the impact of exposure to the conflict on food security as specified in equation 2.3 below:
\alpha_{DID} = E[FS_{i1} - FS_{i0} \mid Conflict = 1] - E[FS_{i1} - FS_{i0} \mid Conflict = 0]    (2.3)
If households were randomly exposed to the conflict, the exposure effect would simply be the difference in food security between the exposed and control households, which is not the case in the present study. However, given that exposure is eventually realized for all households in the sample in monitored time intervals, an empirical approximation of this difference may be obtained by monitoring the trends of food security across the defined groups, through, for instance, the DID framework. The non-parametric DID framework assumes that, except for the conflict exposure, the treatment and control groups would have followed similar trends. Then, controlling for time-invariant household characteristics, the differences in food (in)security between the exposed and non-exposed households in the presence of group-based exposure are considered unbiased estimates of the average treatment effects of the conflict on the outcomes. The tests of mean differences by exposure status in table 2.2 provide the bivariate approximation of these differences. In nearly all the cases, the outcome levels are significantly different between the pre-exposure (PRE) and post-exposure (POST) periods, suggesting the occurrence of trend discontinuities that likely arose from exposure to the conflict. Nevertheless, these may only be considered associative since the trend may be conflated with other time-varying household characteristics. Hence the multivariate extensions include all the available controls to narrow the sources of the remaining differences to the conflict exposure. A minimal numerical illustration of the non-parametric estimator is given below.
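As a purely illustrative sketch, the non-parametric estimator of equation 2.3 is just a difference of group-mean changes and can be computed directly from the panel; the column names below are placeholders.

import pandas as pd

def did_of_means(df, outcome):
    # df: household-period panel with columns hhid, period ("PRE"/"POST"),
    # exposed (0/1) and the outcome of interest.
    means = df.groupby(["exposed", "period"])[outcome].mean()
    change_exposed = means.loc[(1, "POST")] - means.loc[(1, "PRE")]
    change_control = means.loc[(0, "POST")] - means.loc[(0, "PRE")]
    return change_exposed - change_control   # alpha_DID of equation 2.3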
Econometric specifications
Drawing from the preceding discussions, this section estimates two multivariate econometric approximations of the DID model. The first multivariate regression is estimated for the levels of the outcomes in POST conditioning on baseline control variables, including the baseline levels of the outcomes to capture the effects of differences in initial levels of the outcomes, whereas the second version is dynamic, with household fixed effects capturing any time-invariant household characteristics. These estimations are specified in equations 2.4 and 2.5 below:
FS_{i1} = \delta + \rho\,Conflict_i + \gamma FS_{i0} + \beta X_{i0} + \phi H_i + \epsilon_i    (2.4)
where FS_{i1} denotes the level of food (in)security for household i measured at period POST, and Conflict_i is a dummy variable indicating exposure to the conflict or the conflict intensity represented by the battle fatalities, which is equal to zero when the dichotomous Conflict_i equals zero and strictly positive when the dichotomous Conflict_i equals one. FS_{i0} is the baseline level of food (in)security, H_i denotes household fixed effects, X_{i0} is the set of baseline household characteristics, and \epsilon_i is the idiosyncratic error term. Given the household controls, equation 2.4 yields an unbiased estimate of \rho, the impact of exposure to the conflict on the outcomes. Nevertheless, to capture potential sources of bias relating to unobserved household characteristics that may be correlated with conflict exposure and household food (in)security, household fixed effects are included in the estimations as follows:
FS_{it} = \tau_t + \theta_i + \alpha\,Conflict_i \times POST_t + \varepsilon_{it}    (2.5)
where FS_{it} is the food security indicator, \tau_t is a time fixed effect, \theta_i is a set of household fixed effects, \alpha is the DiD parameter obtained through the interaction of Conflict_i and the post-exposure period (POST), and \varepsilon_{it} is the error term, while the other parameters and variables maintain their previous definitions. Equation 2.5 exploits the panel structure of the data through the within transformation to evaluate the relationships between the food security measures and the measures of the conflict. The panel structure allows time-invariant household-specific unobservable factors to be differenced out. A sketch of how this two-way fixed-effects DiD could be estimated is given below.
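The following Python sketch estimates equation 2.5 by demeaning within households (the within transformation) and regressing with time dummies; the variable names are placeholders, and the sketch omits the clustering of standard errors typically used in applied work.

import pandas as pd
import statsmodels.api as sm

def fe_did(df, outcome):
    # df: balanced household-period panel with columns hhid, period,
    # conflict (time-invariant exposure dummy) and post (0/1).
    df = df.copy()
    df["conflict_post"] = df["conflict"] * df["post"]

    # Within transformation: subtract household means from the outcome and the regressor.
    for col in [outcome, "conflict_post"]:
        df[col + "_dm"] = df[col] - df.groupby("hhid")[col].transform("mean")

    # Time fixed effects enter as period dummies after demeaning.
    X = pd.get_dummies(df["period"], prefix="t", drop_first=True).astype(float)
    X["conflict_post"] = df["conflict_post_dm"]
    X = sm.add_constant(X)
    return sm.OLS(df[outcome + "_dm"], X).fit()

# Usage: res = fe_did(panel, "csi"); res.params["conflict_post"] is the DiD estimate alpha.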
Estimates of direct effects
Using the various measures of conflict exposure, the application of equations 2.4 and 2.5 yields the results reported in table 2.3. Panel A of table 2.3 reports estimates of the direct effects of the conflict exposure Conflict_i denoted as a dummy variable, whereas Panel B reports direct effects of the conflict intensity. In Panel A, the estimates indicate significant negative effects of the conflict on the various indicators of food security. Estimates in Panel A, columns 1, 2 and 3 derive from equation 2.4 estimated without household fixed effects, and indicate that exposure is associated with an increase of about 1.29 points in the coping strategy index, an increase of about 7.2 percent in the food expenditure share (food ratio), and no significant effect on the food consumption score (FCS).
The fixed effects DID estimates reported in Panel A, columns 4, 5 and 6 are prescriptively similar to the previously discussed estimates. Mostly, estimates regarding the FCS are insignificant, whereas those of the CSI and food ratio increased by 1.24 points and 8.6 percent, respectively. In magnitude, the increases in the CSI and food ratio constitute 69 percent and 11 percentage points of their respective pre-exposure pooled means. Similarly, most of the outcomes respond strongly to the conflict intensity, as shown in Panel B. Based on the DID fixed effect estimations in columns 4, 5 and 6, a unit increase in fatalities increases the CSI by about 0.023 points and the food ratio by about 0.075 percent, with no effect on the FCS. However, ignoring household fixed effects, a unit increase in fatalities is associated with a reduction of about 0.12 points in the FCS. There are several discussions about the nature of households' consumption trade-offs during shocks, in terms of quality, represented by indicators of dietary diversity such as the FCS, and quantity, represented by indicators such as the CSI ([D'Souza and Jolliffe, 2013]; [Jensen and Miller, 2010]). In the current estimates, quality seems to have been traded off for quantity, but there might as well be other nuances, some of which might operate through resilience capacity, which is addressed in section 4.4. In general, the models without the fixed effects seem to overestimate the relevant effects, as unaccounted fixed effects induce positive bias in the estimates, hence the preference for the fixed effects model. Finally, in columns 1, 2 and 3, the initial values of the outcomes are included, and they significantly predict the current values as expected.
The role of resilience capacity
The main question of this section is how much resilience is required to cushion households against adverse stressors. However, since the conflict might itself affect resilience, all estimations in this subsection use pre-conflict levels of resilience, which also accords with the literature on resilience insurance against risks ([START_REF] Alinovi | Measuring household resilience to food insecurity: Application to palestinian households[END_REF]). Specifically, the pooled estimation sample is partitioned into low and high resilience capacity groups of households according to whether baseline resilience capacity was below or above the resilience of the median household. Thereafter, the following equation is estimated:
FS_{it} = \tau_t + \theta_i + \alpha\,Conflict_i \times POST_t + \eta\,Conflict_i \times R^{high}_{i0} \times POST_t + \varepsilon_{it}    (2.6)
where R^{high}_{i0} is a dummy variable indicating whether the resilience capacity of the household at baseline exceeds the median resilience among the pool of households. The specification follows previous applications of resilience thresholds ([START_REF] Smith | Does resilience capacity reduce the negative impact of shocks on household food security? evidence from the 2014 floods in northern bangladesh[END_REF]; [START_REF] Bruck | The effects of violent conflict on household resilience and food security: Evidence from the 2014 gaza conflict[END_REF]). A sketch of the construction of the interaction terms is given below.
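Reusing the fixed-effects helper sketched above, the heterogeneous-effects specification of equation 2.6 only adds the triple interaction; the snippet below is illustrative and the variable names are placeholders.

import pandas as pd
import statsmodels.api as sm

def fe_did_heterogeneous(df, outcome):
    # df: household-period panel with hhid, period, post, conflict,
    # and rci_baseline (pre-conflict resilience capacity index).
    df = df.copy()
    median_rci = df.loc[df["post"] == 0, "rci_baseline"].median()
    df["r_high"] = (df["rci_baseline"] > median_rci).astype(int)
    df["conflict_post"] = df["conflict"] * df["post"]
    df["conflict_rhigh_post"] = df["conflict"] * df["r_high"] * df["post"]

    # Within transformation by household, then OLS with time dummies (as in eq. 2.5).
    cols = [outcome, "conflict_post", "conflict_rhigh_post"]
    dm = df[cols] - df.groupby("hhid")[cols].transform("mean")
    X = sm.add_constant(pd.concat(
        [pd.get_dummies(df["period"], prefix="t", drop_first=True).astype(float),
         dm[["conflict_post", "conflict_rhigh_post"]]], axis=1))
    return sm.OLS(dm[outcome], X).fit()

# The cushioning effect for high-resilience households is alpha + eta,
# i.e. params["conflict_post"] + params["conflict_rhigh_post"].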
In particular, while the effect of the conflict on the FCS outcome as reported in table 2.3 is insignificant, table 2.4 shows that households with higher overall resilience capacity seem to have gained in food security. Estimated as \alpha + \eta, high overall resilience capacity is associated with a gain of about 4 FCS points, SSN alone with about 3 FCS points, and high assets with about half an FCS point. The strong response of the FCS to the RCI is somewhat puzzling, especially from the perspective of households under destabilizing conflict, and requires further accounting. It might be argued that SSN, comprising public and private remittances to households, reacts sharply to emergencies and thus accounts for much of this effect. Thus, in table A.9, the entire set of variables used to compute the RCI is included in the estimation to explore the possibility that the entire interaction effect may be accounted for by a few variables. Nevertheless, the estimate of interest did not change significantly; rather, the overall effect of the conflict disappears, indicating that the conflict impacts the FCS only through the variables constituting household resilience capacity. Among the variables constituting SSN, only the variables indicating whether the household was migrant are significant, but all the included variables collectively could not fully account for the estimate of interest, which is the triple interaction coefficient of resilience, conflict and post. Another potential mechanism might derive from the interference of the conflict with the inseparable production and consumption of agricultural households ([START_REF] Bardhan | Development microeconomics[END_REF]). [Hoek, 2017] describes the disruption of local markets by the Boko Haram conflict, which prevents the households from engaging in the usual market exchange of commodities. A plausible consequence of this might be that households discouraged from routine engagement in market exchanges would then resort to autoconsumption of home production, thereby increasing dietary diversity. This is partially supported by the fact that proximity to markets accounts for a significant part of the FCS increase in table A.13, and this pattern is also reported in [George et al., 2019]. Unfortunately, the precise location of markets is not available in the data used in this study, which prevents further investigation of this intuition through strategies that would have accounted for conflict attacks within market locations.
Table 2.4: Impact of baseline resilience levels on food security during exposure to the conflict (3,000 observations and 1,500 households in each column).
Notes: RCI = resilience capacity index; ABS = access to basic services index; AC = adaptive capacity index; SSN = social safety nets index; Asset = household assets index; CSI = coping strategy index; FCS = food consumption score; Food ratio = share of household per capita food expenditure. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Longer term effects
2.5.1 Conflict exposure and the household resilience capacity
The preceding section demonstrates the importance of resilience in protecting household food security despite the conflict shocks. However, conflict could also diminish resilience through the destruction of the various pillars upon which resilience is anchored, such as assets and ABS ([Justino, 2012]; [START_REF] Minoiu | Armed conflict, household victimization, and child health in côte d'ivoire[END_REF]). In large part, this portends the critical channel that extends current shocks to long-term consequences, described by [Bene et al., 2016] as a "vulnerability trap." This section estimates the empirical approximation of these relationships. To do this, the empirical specification developed in section 2.4.2 is replicated in equation 2.7 below:
RC_{it} = \tau_t + \theta_i + \alpha\,Conflict_i \times POST_t + \varepsilon_{it}    (2.7)
All variables and parameters remain as described in section 2.4.2, except that the outcome variable RC_{it} stands for the overall resilience index (RCI) or its pillars, denoted by ABS, SSN, AC and access to assets (Asset). The inclusion of household fixed effects \theta_i accounts for some time-invariant unobserved household characteristics that may be correlated with exposure to the conflict. Table 2.5 reports these estimations (3,000 observations and 1,500 households per column). The results in table 2.5 show that exposure to the conflict is negatively associated with overall resilience and most of its pillars, except the SSN, which actually increased. The RCI declined by 0.097 points, in the magnitude of about 42 percent of the pre-exposure mean.
The ABS declined by 32 percent (0.065 points reduction), AC declined by 40 percent (0.076 points reduction). As for ASSETS, the reduction was insignificant, and the study attributes this to the nature of the Boko Haram conflict being a guerrilla rather than a full-blown war. Such conflicts use mainly the strategies of kidnappings, petty thefts and scaremongering, which may not have enough intensity to significantly decimate assets by destruction or forced sales ( [Falode, 2016]; [START_REF] Baliki | Drivers of resilience and food security in north-east nigeria: Learning from micro data in an emergency setting[END_REF]; [Hoddinott, 2006]).
The increase in SSN aligns with previous findings (e.g., [START_REF] Bruck | The effects of violent conflict on household resilience and food security: Evidence from the 2014 gaza conflict[END_REF]). The average effect on the SSN is 0.19 points, about 56 percent of the pre-exposure mean. However, the incentive structures of SSN make the long-term extrapolation of this effect complex.
Increased safety nets may safeguard household welfare during shocks or enable them to quickly recover lost economic welfare, inclusive of food security. Yet at the same time, disaster transfers can create moral hazard problems that may produce the so-called conflict merchants who create violence to attract aid ( [Dercon, 2002]; [Heemskerk et al., 2004]).
Nevertheless, this effect should be interpreted cautiously given that the exposed households were slightly more endowed with safety nets prior to conflict exposure, creating some doubts on the attribution of the observed increases to responses targeted at households exposed to the conflict. The effect on safety nets aside, other pillars are negatively affected;
AC, which incorporates informal networks within and outside households, declined, and so did access to critical infrastructures, which are part of the ABS. Hence, except when timely policies are well-targeted with respect to these aspects, the above results taken together may suffice to conclude in favor of negative long-run consequences within this partial equilibrium framework. In fact, [START_REF] Sanders | Misallocation of entrepreneurial talent in postconflict environments[END_REF] argue that, except when institutional resources are rapidly restored after violent conflicts, an upsurge in aid during conflicts may have negative social and economic consequences through destructive entrepreneurship. Nevertheless, a general equilibrium framework may be most appropriate to reach such a conclusion, especially in view of the lack of a conclusion on the domain of safety nets.
Robustness checks
Results in the preceding sections established rather strong negative effects of the conflict, directly on food security but attenuated through the resilience capacity. The conflict also produced potential long-term effects through the reduction of the level of household resilience capacity. However, these effects are obtained conditional on the controls for observable characteristics and the study sample restriction strategy, which assumes a balanced distribution of unobservable characteristics (potential confounders) between the treated and control groups. This section tests the robustness of these results by relaxing some of the critical assumptions of the previous estimations, particularly relating to critical resilience capacity thresholds and sample selection that might bias the results.
For the resilience thresholds, tables A.10, A.11 and A.12 replace the thresholds derived based on the median level of household resilience with the top quartiles of resilience in the estimations, and the results remain conclusively similar. Similarly, to further disaggregate the estimated effects of the conflict on resilience as reported in table 2.5, the original variables used to compute the resilience indices are employed as the outcome variables and estimated. While the results reported in table A.13 demonstrate that the conflict affected most of the original variables across the various resilience pillars, much of the effects fall on the components of the SSN.
Selection into conflict exposure
Although the determination of exposure and control groups by means of realized and future exposure to the conflict strongly suggests balance in treatment confounders, there remains some chance that time-varying confounders unrelated to the conflict might disrupt the parallel trend assumption and bias the estimations. In this subsection, I test for any indication of this that might have started during the pre-treatment period.
Following the sample restriction adopted in the study, I estimate the probability of being included in the exposure group based on baseline control characteristics. The probability is specified as follows:
Conflict_{i1} = \alpha + X'_{i0}\delta + \theta_c + \epsilon_i \qquad (2.8)
where Conflict_i1 is a dummy variable which takes the value 1 if household i living in community c is included in the exposure group (5 km or 7 km buffer), and zero otherwise. X_i0 is a vector of household and household head characteristics used as controls in the previous estimations, with coefficient vector δ, and ε_i is the error term. On the premise that certain community characteristics are important determinants of conflict onset, θ_c is included in the selection model. θ_c denotes a vector of community dummy variables, where the survey enumeration areas are used as proxies for communities although they are a bit larger than communities geographically.
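As an illustration only, the selection probit in equation (2.8) could be estimated along the following lines; the input file and all variable names (conflict, head_age, head_female, head_educ, hh_size, ea_id) are hypothetical placeholders rather than the study's actual variables.

```python
# Hedged sketch of Eq. (2.8): probability of being in the exposure group as a
# function of baseline household/head controls and enumeration-area dummies.
import pandas as pd
import statsmodels.formula.api as smf

base = pd.read_csv("baseline.csv")  # hypothetical baseline cross-section
probit = smf.probit(
    "conflict ~ head_age + head_female + head_educ + hh_size + C(ea_id)",
    data=base,
).fit()
print(probit.summary())
```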
Table A.3 reports the results estimated by a binary probit model. Clearly, exposure is not selective on the observed control variables. Additionally, indicators of resilience capacity are included to further assess the randomness of exposure in this dimension. The results implicate the SSN dimension of resilience capacity, which is more favorable to the exposure group at the baseline. This has to be borne in mind while interpreting this aspect of the estimations. The implication might be that the exposed group has a stronger external base of resilience that could be leveraged during emergencies, and this might be a source of potential bias in the estimated role of resilience capacity and its pillars. To further ensure the robustness of this aspect of the analyses, particularly in relation to the puzzling finding in the case of the FCS, the entire array of first-stage variables used in the computation of the pillars of resilience is additionally employed in the estimation of the role of resilience capacity in table A.9. The finding did not significantly change, except that the direct effect of the conflict was taken over by the added variables.
Alternative measure of exposure
In order to partition the sample into exposed and control households, the paper creates a series of buffers around any conflict event, some of which prove too large to allow the separation of the two groups of households. The largest radius that allows reasonable separation is the 7 km radius, which makes it the reference radius of exposure for the study. Nevertheless, in this section, the smaller alternative buffer (5 km radius) is used. All the previous estimations were repeated under the new exposure measure, and the new sets of estimates mirror the former. However, in some cases, coefficients appear stronger but are never statistically different from their previous levels. The baseline estimates for food security are reported in table A.5; to conserve space, the remaining results are retained by the author and are available on demand.
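Purely for illustration, the buffer-based exposure coding could be reproduced as sketched below, using great-circle distances from each household to the nearest conflict event; the file and column names (households.csv, events.csv, lat, lon) are hypothetical.

```python
# Toy sketch of the exposure definition: distance to the nearest conflict
# event, then 7 km (reference) and 5 km (alternative) buffers.
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres; lat2/lon2 may be arrays."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

hh = pd.read_csv("households.csv")   # hypothetical: hhid, lat, lon
ev = pd.read_csv("events.csv")       # hypothetical: conflict event lat, lon

dist = np.array([
    haversine_km(r.lat, r.lon, ev["lat"].to_numpy(), ev["lon"].to_numpy()).min()
    for r in hh.itertuples()
])
hh["exposed_7km"] = (dist <= 7).astype(int)   # reference buffer
hh["exposed_5km"] = (dist <= 5).astype(int)   # alternative buffer
```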
Conclusion and policy recommendations
Using three main indicators of food (in)security, the coping strategy index (CSI), share of food expenditure per capita (Food ratio) and the food consumption score (FCS), this paper demonstrates that exposure to the Boko Haram conflict caused the households to move down the ladder of food security. The overall effects of the conflict are substantial and negative on all the dimensions of food security. However, these overall effects hide substantial heterogeneities across levels of resilience capacity. These heterogeneities are further explored by comparing households of high and low levels of resilience through a model of triple interactions of resilience, conflict exposure and time. The estimations yield the unambiguous prediction that resilience protects household welfare during conflict shocks in line with the theoretical prediction of resilience as a place holder for household welfare. While social safety nets (SSN) dominate the other pillars of resilience in absorbing the shocks, other pillars also play significant roles.
It is anticipated that violent conflict might decimate resilience and thus push households into traps of poverty and vulnerability to food insecurity. Accordingly, it was further estimated that the conflict reduced the overall resilience capacity by 42 percent, ABS by 32 percent, and AC by 40 percent. By contrast, the index of SSN increased, in line with theoretical expectation. The increase in SSN reflects the humanitarian aid from donor agencies and private individuals provoked by the need to cushion the conflict-induced suffering. In all, this study supports the ongoing arguments about the merits of the resilience approach to development, which aims to enhance the ability of systems (households, communities, and states) to withstand and recover from shocks. The study demonstrates that resilience cushions shocks while being susceptible to the same. Therefore, resilience deserves important consideration during post-disaster interventions.
While short-term interventions such as food and cash aid may curtail immediate and direct welfare losses, serial vulnerability may only be eliminated through interventions that rebuild resilience. Advising on specific projects for enhancing resilience lies beyond the scope of this study. However, it is clear from this study that the enabling environment for resilience comprises public services such as markets, roads, health facilities and other basic infrastructure that policy would be able to target. To incorporate these in development, public policies in shock-prone regions need to be multi-sectoral and forward-looking. The paper invites governments, inter-governmental, and non-governmental organizations to incorporate the enhancement of resilience in future intervention programs.
While the study has employed a number of rigorous estimation procedures to arrive at the reported estimates, no strong claim is being made as to causality given that certain aspects of the estimations do not yield clear-cut identification. In particular, it may be acknowledged that whereas the computation of the resilience index follows a well-established procedure, the constructed index may not fully capture the essence of the concept. Resilience being multifaceted and data-driven, its computation may easily be compromised (see [Brück et al., 2019]; [Smith and Frankenberger, 2018]). In this light, the structural relationships underlying the concept of resilience capacity risk being undermined by data quality, in turn compromising the estimated effects. This, therefore, invites cautious interpretation of this aspect of the results.
[George et al., 2019] George, J., Adelaja, A., and Weatherspoon, D. (2019). Armed conflicts and food insecurity: Evidence from Boko Haram's attacks. American Journal of Agricultural Economics.
[Heemskerk et al., 2004] Heemskerk, M., Norton, A., and de Dehn, L. (2004). Does public welfare crowd out informal safety nets? Ethnographic evidence from rural Latin America.

Improved access to education, especially to high-skill-producing institutions such as universities, features in the policy priorities of developing countries ([Moja, 2000]). Universities support social and economic development through the production of specific and general human capital ([Valero and Reenen, 2019]; [Cantoni and Yuchtman, 2014]), as part of the process of economic growth ([Barro, 2001]; [Becker et al., 2011]). In spatial terms, the costs of access may increase with distance ([Gibbons and Vignoles, 2012]; [Spiess and Wrohlich, 2010]).
Therefore, optimising the value of higher education might involve an appropriate spatial placement of the stock of universities and an increase of university capacity in outlying areas ([Frenette, 2009]). In a number of cases, distance to university has been used as an identifying instrument for human capital attainment, anecdotally supporting the claim that it influences educational decisions, at least at the tertiary level (see [Carneiro et al., 2011]; [Kling, 2001]; [Card, 2001]; [Card, 1993]). However, this pattern is mostly observed in developed countries, which have wider access to university education and more developed credit markets. In contrast, the less developed credit markets in developing countries imply that individuals mainly rely on personal or family income to fund their tertiary education ([Molina et al.]). In addition, developing countries, particularly in Africa, are not well served by universities ([Yusuf et al., 2009]).
Nigeria provides a typical case study, with approximately one university for every 1.2 million inhabitants, compared to one for every sixty thousand inhabitants in the US ([Kazeem, 2017]; [Ejoigu et al.]). Moreover, Nigeria is currently committed to increasing tertiary education capacity through the construction of more universities ([Osili and Long, 2008]).
The main objective of the paper is to investigate how distance to university during childhood affects individuals' educational attainment. We use the first three waves of the Living Standard Measurement Survey (LSMS) dataset for Nigeria, which provides information on households' location and individuals' completed years of schooling. We combine this dataset with one we build that provides the historical spatial distribution of universities. In particular, relying on GPS coordinates, we retrieve the shortest straight-line distance between the residence of households and the nearest university for each individual at the ages of 12 and 18 years. We chose these target ages because they mark the beginning of post-basic and tertiary education in Nigeria, respectively ([Lincove, 2009]). Our empirical strategy confronts several complexities surrounding the relationship between distance to university and educational attainment. The identification of the causal effect of geographical constraints is plagued by the fact that households and individuals are not randomly located relative to universities. First, specific to the context, there are disparities between the southern and northern regions in terms of the location of educational infrastructures, mainly due to the consequences of colonial rule.
Second, households may consider the provision of tertiary education in a given area when determining where to settle. Unobserved household characteristics may explain both their location and their educational decisions. For instance, one may argue that parents with high expectations for educational completion are likely to locate in areas with a large education supply and to have children with the longest and most successful schooling.
Hence, schooling preferences are not exogenous to the quantity of university supply. We address this endogeneity issue by adopting an instrumental variable approach drawn from the theory of general equilibrium residential sorting ([Tiebout, 1956]). Specifically, we use households' distance to border posts and the population density of the local government area (LGA) where households locate to instrument for households' distance to university. We argue that these two components gather preferences through the aggregation of public goods, which renders any specific preference insignificant. The validity of our instruments relies on the assumption that distance to border posts and population density have no direct effect on educational attainment other than through households' proximity to university, conditional on the included control variables. The instrumental variable estimates show a negative effect of distance to university on completed years of schooling. This result is robust when accounting for the potential migration bias. The latter may occur if the individual's current place of residence differs from the area of residence during the teenage years. By considering sub-samples of individuals who never left their place of birth and of households headed by individuals aged 35 years and above, we show that the migration concern does not represent a serious threat to the validity of our empirical strategy. Lastly, we find no gender-specific impact of geographical constraints on human capital accumulation. We also provide evidence of the existence of a neighborhood effect that may attenuate the impact of geographical constraints.
Next, we take advantage of the large-scale establishment of 12 public universities from 2011, which reduced the distance to the nearest university for certain households living in areas close to the newly opened campuses. In particular, we look at the effect of the creation of new universities on the secondary school market.
Using a standard difference-in-differences approach, we provide evidence of a positive spin-off effect on student retention in secondary school. We find that the policy leads to a reduction of 2.5 percentage points in the intention to drop out of secondary school for those who live near the new universities (i.e. individuals located within a 25 km radius). We show that our estimates are not explained by the presence of differential pre-trends in education levels. We also provide suggestive evidence that the results are not driven by our definition of treatment and control groups. This paper contributes to the literature on education economics in the following aspects. The first is the literature that deals with how geographical distance to schools affects human capital acquisition ([Afoakwah et al.]; [Gibbons and Vignoles, 2012]; [Falch et al., 2013]).
Nevertheless, most studies relate proximity to specific categories of schools to participation in the corresponding level of education (e.g. distance to university and participation in university education), whereas the presence of universities may generate trickle-down effects.
For instance, the establishment of universities may spur participation in primary and secondary education, instead of solely in the tertiary level ([Jagnani and Khanna, 2020]).
Therefore, we focus on completed years of schooling, without any restriction on which level takes advantage of the proximity to the university. We contribute to this literature by using the GPS coordinates of the villages where the households reside to construct a measure of the distance to the nearest university. Second, this paper fits within the emerging literature on the so-called trickle-down effects of universities, which argues that proximity to higher education institutions affects lower levels of schooling ([Jagnani and Khanna, 2020]).
This paper is laid out as follows. Section 2 describes the Nigerian context. Section 3 presents the empirical framework for the first part of the paper that explores the effect of university proximity on completed schooling. The empirical results follow in Section 4.
Section 5 presents the analyses relating to the effects of the new universities on current schooling. Finally, Section 6 concludes.
The Nigerian educational system and development
Due to colonial ties, Nigeria's formal education took off with administrative structures modelled after the British system of education, consisting of primary, secondary, and tertiary levels. However, starting from 2004, the system has been adjusted to encompass the levels of basic, post-basic or senior secondary, and tertiary education ([Feda et al.]). According to the latest national policy on education, the basic level of education, comprising six years of elementary and three years of junior secondary education, is now compulsory ([FRN]). The senior secondary and tertiary levels are not compulsory. The tertiary level comprises the university and non-university sectors, where the latter encompasses the polytechnics, monotechnics and colleges of education, which offer opportunities for undergraduate, graduate, and vocational and technical education. Since Nigeria implemented the Structural Adjustment Programme (SAP) in 1986, there have been deficiencies in the educational sector, ranging from low participation at the basic level to severe capacity constraints at the tertiary level ([Obasi, 1997]). According to the 2010 World Development Indicators (WDI), Nigeria's elementary school enrolment rate of 64 percent still falls short of the global average of 89 percent. Furthermore, one quarter of this enrolled population is expected to drop out of school before reaching the final grade ([Oyelere, 2010]). The state of the basic level of education naturally reflects the health of the entire educational system; in the case of Nigeria, it shows up through overall poor educational attainment and widespread illiteracy, especially among young people. As of 2015, Nigeria's youth and adult literacy rates of 72.8 and 59.6 percent were substantially below the global averages of 90.6 and 85.3 percent, respectively. The entrenched regional disparities masked by these national-level statistics paint an even more dire picture. For instance, in 2010, whereas one in four youths could not read or write in the southern region, the ratio was three in four in the northern region ([Favara et al., 2015]).
At the tertiary level, deficient capacity remains a significant constraint. Despite the fact that the ongoing university expansion increased admission capacity by almost 20 percent between 2010 and 2015, data from the Joint Admissions and Matriculation Board (JAMB) show that up to 70 percent of admission requests were rejected due to lack of capacity in 2015 alone. In fact, the average acceptance rate for admission requests made in the decade preceding 2015 is below 20%. The admission market is highly competitive and relies on the quality of precedent basic school qualifications and a general selection examination. The competition is politically regarded as unfair to applicants from the northern region because their precedent qualifications are generally poorer than in other regions ([Lebeau et al.]; [Oyebade]). With these disparities in mind, the national higher education policy was redesigned to promote the policy of admitting undergraduate students on the basis of catchment areas, rather than purely on the basis of competition ([Adeyemi, 2001]).
In part, the regional differences in access to education originated in the colonial period via the roles of the colonial Christian missionaries through whom formal education was introduced in Nigeria ([Okoye and Pongou, 2014]; [Okoye, 2021]). As part of its divide-and-rule policy, the colonial administration made the missionaries confine education activities and the associated infrastructures to the southern region because the administrators feared that education might disrupt the northern region's culture of conservatism and allegiance to the colonial government ([Mustapha, 1986]; [Daun, 2000]).
This singular act brought about a long lasting trend of educational polarisation subsisting until today, whereby the northern region lags behind the rest of the country in educational achievement. However, since decolonisation, there have been national policies aimed at tackling this imbalance along with improving the general level of education nationally.
Most of these policies have focused on the construction of more schools at various levels ([Moja, 2000]; [Oyelere, 2010]; [Osili and Long, 2008]). In particular, the Nigerian National Universities Commission (NUC) was established in 1962 to manage the establishment of universities and to ensure political equity through their geographical distribution ([NUC, 1993]). 2 The NUC sets conditionalities for the location of new universities, including recommending the range of proximity to urban centres, road networks, and other local amenities. Due to urban-biased development, these standards are more likely to be fulfilled in urban areas, thereby leading universities to concentrate within or near urbanised cities. This is demonstrated in Figure 3.1, which indicates that universities are generally closer to households living in densely populated LGAs, typically the urban areas. It might therefore be safe to say that the NUC conditionalities drive universities to be established near areas of high residential concentration; this is the pattern we exploit in our empirical strategy, discussed in the next section.
2 From the time of Nigeria's independence in 1960, only the central (federal) government was allowed by legislation to establish and run universities (designated as "unity schools", now the federal universities). Driven by the perceived inadequacy of the federal universities, the legislation was amended in 1972, and states that could afford it began as well to establish and run universities (designated as the state universities). In 1999, the tertiary education system was fully deregulated, thereby allowing private universities to operate. However, up until the time of writing, most students seek admission into the federal universities because they are better funded by the federal government and tend to have higher capacity for student admission compared to the state and private universities. In addition, government subsidies are available for students attending federal or state universities, especially the federal ones, whereas students attending the private universities have to pay the full costs of their tuition.
Empirical specification
To quantify the effect of distance on completed years of schooling, we conceptualize individuals' educational attainment as determined by both supply and demand side factors. Specifically, we model educational attainment as a function of distance to university (measured at ages 12 and 18) and a number of demand-side variables at individual and household levels. Therefore, our main regression model is as follows:
Schooling_{ihlk} = \alpha + \beta\, Dist^{a}_{hlk} + \sigma\, X_{ih} + \theta_k + \varepsilon_{ihlk} \qquad (3.1)
where Schooling_ihlk is the number of years of education completed by individual i from household h living in LGA l and belonging to cohort k, where the cohort is defined at age a (12 and 18 years). We consider the ages of 12 and 18 as they constitute the end of primary and secondary schooling, respectively, and therefore represent critical stages in the process of human capital accumulation. The variable of interest, Dist^a_hlk, is the log distance of household h to the nearest university. X_ih is a vector of current individual and household characteristics. These include the individual's age and gender, the household's distance to secondary school, the household's sector of residence (urban vs rural) and average parental education. We also include average village-level completed years of schooling to control for a potential neighborhood effect. The model also incorporates birth cohort fixed effects to account for unobserved factors specific to particular age cohorts. Specifically, this may capture particular developments in the educational system that may have affected particular cohorts. Lastly, ε_ihlk is the error term, clustered at the household level to allow for arbitrary correlation within households.
In the model, distance to university is expected to be endogenous due to selection issues.
We address this problem by adopting an instrumental variable strategy. Hence, our identification strategy relies on variations in households' proximity to state boundary posts and neighbourhood (LGA) population density, while conditioning on relevant controls and fixed effects. The justification for these instruments is provided in the next subsection.
The first stage equation is specified below:
Dist^{a}_{hlk} = \alpha + \lambda_1\, Distborder_h + \lambda_2\, Popdens_l + \sigma\, X_{ih} + \theta_k + \varepsilon_{hlk} \qquad (3.2)
where Distborder_h and Popdens_l represent the logs of the current household h's distance to the nearest state boundary post and of the population density of LGA l, respectively. We use the population figures of the 1991 population census for the administrative units existing before 1991 and the figures of the 2006 census for those created or adjusted after 1991. The coefficients λ_1 and λ_2 measure the relevance of our instruments for distance to university. All other variables are defined as in Eq. 3.1. The vector X_ih includes individual and household characteristics and θ_k denotes the birth cohort fixed effects. Lastly, ε_hlk is the error term.
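A minimal, hypothetical sketch of the two-stage estimation of Eqs. (3.1)-(3.2) with the linearmodels package is given below; the input file and all variable names are illustrative assumptions, and standard errors are clustered at the household level as stated in the text.

```python
# Hedged sketch of the 2SLS estimation: log distance to university is
# instrumented with log distance to the state border post and log LGA
# population density, conditional on controls and birth-cohort dummies.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("schooling.csv").dropna()  # hypothetical cross-section
formula = (
    "years_school ~ 1 + age + female + log_dist_secondary + urban "
    "+ parental_educ + village_mean_school + C(cohort) "
    "+ [log_dist_univ ~ log_dist_border + log_popdens]"
)
res = IV2SLS.from_formula(formula, data=df).fit(
    cov_type="clustered", clusters=df["hhid"]
)
print(res.first_stage)  # instrument relevance diagnostics
print(res.summary)
```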
Identification Issues
If there are no systematic biases, Eq. 3.1 can estimate the causal relationship of interest. However, given our reliance on pooled cross-sectional data, we are not able to rule out the existence of potential unobserved confounders. In particular, on the one hand, we acknowledge that households with a stronger preference for education may systematically sort into residency in localities in the neighbourhoods of the existing universities (see [Gingrich et al.]). On the other hand, it is plausible to argue that universities are not randomly distributed across locations, as the siting of educational facilities may be driven by political preferences. Addressing these issues is the major motivation for our instrumental variables approach. We follow a similar strategy to [Falch et al., 2013], who exploited instrumental variables derived from household residential patterns. In our case, we rely on the theory of "Tiebout sorting" ([Tiebout, 1956]), which contends that households reveal their public goods preferences by sorting into neighbourhoods offering varying public goods and taxation packages. The theory has been tested in different contexts and has inspired a large body of literature confirming its validity under strict assumptions ([Rhode and Strumpf, 2003]; [Bayer et al., 2007];
[Martin et al.]). The standard Tiebout theory predicts uniformity in the public goods preferences of residents of the same neighbourhood, conditional on income ([Gramlich and Rubinfeld, 1982]). Contrary to this, preference mixing is often found in the urban sector, where there are abundant varieties of public goods. The degree of within-neighbourhood mixing denotes the extent of nonconformity to the theory ([Bayer et al., 2007]). Outlines of the major indicators of this nonconformity may be found in a number of studies (see [Bayer et al., 2007]; [Bayer and McMillan]). These include the clustering of local amenities and a range of distinct neighbourhood attributes, including varieties of housing characteristics and the overall convenience of the geographical location in relation to access to jobs and other essential services.
Based on the general equilibrium theory of residential sorting, we chose a set of instruments that are likely to capture the variety of attributes influencing household residential choices. These include neighbourhood (LGA) population density and the household's distance to the nearest state border post. Population density increases the capacity to provide public goods through taxation and exploits the non-rivalry property of public goods to deliver savings on costs ([Salmon and Tanguy, 2016]; [Grogan and Sadanand, 2013]).
As a result, abundant varieties of local amenities and development infrastructures are expected in more densely populated areas. However, population density alone may not be able to attract certain kinds of public goods. In particular, strategic infrastructures such as universities may additionally require a central location, in the sense of not being situated at the borders of administrative regions ([Asher and Novosad]; [Lee, 2018]).
We exploit this complementarity by over-identifying the model with population density at the LGA level and the distance to the border posts of administrative states. The validity of these instruments derives from the assumption that they gather preferences through the aggregation of public goods while rendering any specific preference insignificant.
The residential sorting theory hinges on understanding the administrative level at which public goods are provided. The administration of Nigeria is managed under three levels: the central (federal) government, 37 states, and 774 LGAs. The functions of each level are provided under the principles of fiscal federalism in the Nigerian constitution ([Ekpo, 1994]). Although the LGA is the administrative level closest to the population, it has no meaningful power of public goods provision. Most of the infrastructures and amenities found in the LGAs are provided by the state, and in some cases by the federal government ([Feda et al.]; [Alm et al.]). It is important to stress the overriding influence of the state in the location of infrastructures because only then can the state be expected to steer residential sorting in the manner discussed above, in particular by taking population density and peripheral locations into consideration. For instance, Figure 3.1 shows that universities tend to be located closer to densely populated LGAs, and within LGAs not located at the state borders.
One possible concern with our identification strategy is that households' proximity to state border posts and population density at the LGA level might be correlated with a range of unobserved factors affecting individuals' educational attainment. However, we attempt to reduce the influence of these unobservables by including the village-level average completed years of schooling, households' distance to secondary school and parental education in the model.
Descriptive evidence
Before moving to the empirical estimates, we first provide descriptive evidence on the estimation sample.
Findings
Main estimates
We present the main estimates in this section. In general, the results indicate that geographical distance constrains educational attainment, irrespective of the age at which it was measured. Table 2, columns 1 and 3, reports the first-stage estimates specified in Eq. 3.2, whereas columns 2 and 4 report the second-stage estimates specified in Eq. 3.1. The instruments appear to be strong predictors of proximity to university.
The standard F-statistics for the test of joint significance of our IVs are 44.05 and 31.12, respectively. The F-statistics provide additional evidence of the strength of our instruments based on the "larger than 10" rule of thumb ([Staiger and Stock, 1997]). We also take advantage of having over-identified the model to additionally test the validity of the instruments using the Sargan over-identification test. With p-values higher than 5% (0.316 and 0.216), we fail to reject the validity of our instruments. From the main model with full controls and fixed effects, we can infer that a 1% increase in the distance to university is associated with a 0.05-year reduction in schooling when distance is measured at the age of 12, and a 0.06-year reduction when measured at 18 years. These estimates agree with the inference of previous studies (e.g. [Falch et al., 2013]), to the extent that geographical constraints discourage schooling.
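For readers wishing to reproduce the overidentification diagnostic, a simple Sargan-type check can be computed by hand from the 2SLS residuals, as sketched below; it reuses the hypothetical `res` and `df` objects from the earlier IV sketch and is only meant to illustrate the logic of the test, not the paper's exact procedure.

```python
# Hedged sketch of a Sargan-style test: regress the 2SLS residuals on all
# instruments and exogenous regressors; N * R^2 is approximately chi-square
# with (number of instruments - number of endogenous regressors) = 1 d.f.
import numpy as np
import statsmodels.formula.api as smf
from scipy import stats

df["u_hat"] = np.asarray(res.resids)  # 2SLS residuals from the IV sketch
aux = smf.ols(
    "u_hat ~ log_dist_border + log_popdens + age + female + log_dist_secondary"
    " + urban + parental_educ + village_mean_school + C(cohort)",
    data=df,
).fit()
sargan = aux.nobs * aux.rsquared
p_value = 1 - stats.chi2.cdf(sargan, df=1)
print(round(sargan, 3), round(p_value, 3))
```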
Heterogeneity analysis
In this section, we explore the distribution of the negative effects of distance to university estimated in the previous section with respect to household and individual characteristics. In Table 3, we explore gender differences in the impact of distance to university on individuals' completed years of schooling. The estimates indicate that there is no gender-specific impact of geographical constraints on human capital accumulation in Nigeria. That said, decreasing geographic barriers will result in greater human capital for both girls and boys. We also test the hypothesis of a "neighborhood effect" or "information network effect", which highlights that the educational attainment of peers in the neighbourhood pushes individuals and parents towards higher human capital demand ([Patacchini and Zenou, 2011]). The neighbourhood effect is most operative at adolescence ([Agostinelli et al.]; [Do, 2004]; [Spiess and Wrohlich, 2010]). We empirically test the "neighborhood effect" hypothesis by interacting proximity to university with the LGA educational attainment dummy. The estimates provided in Table 3.4 reveal evidence of a neighbourhood effect. That is, living in a community or village with a higher level of schooling mitigates the adverse effects of a distant university.
Addressing potential migration bias
We measured household proximity to university by spatially matching the permanent location of universities to the current location of households, thereby invariably assuming that the sample had maintained a permanent residential location since the commencement of schooling.
In this context, where rural-urban migration is rampant ([Lall et al., 2006]), this is a strong assumption. Ideally, we would measure the proximity to university from the location where each individual started and completed schooling, or at least from the place of birth, which would approximate the residential location at the time of schooling.
Unfortunately, the fact that the LSMS reports only the current location of households and their members lets labour migration threaten our estimates. Furthermore, given that the location of most of the universities is broadly urban, where job prospects are higher, the strength of this mechanism could drive us to find negative effects of distance to university even if none truly exists. Therefore, we undertake the following steps to rule out the effects of this mechanism:
First, we drop households that are headed by individuals aged below 35 years, because such households are most likely formed by individuals who recently completed schooling and thus have a higher probability of having changed residential location. This maneuver reduces the number of observations, but there is no significant change in the estimated effect of distance to university compared to the full-sample estimates (see Table 5, columns 1-2).
Second, following [Cannonier and Mocan], we repeat the estimations on a unique non-movers sub-sample of the LSMS: those providing answers to questions relating to the cultures and institutions of the communities (LSMS enumeration areas). Prior to answering the community questions, the individuals were asked how many years they had lived in the community, so our sample of non-movers comprises those living in the community from birth, defined as those whose residence duration in the community is equal to their age. There is one caveat with respect to estimations on this sample: since we cannot match the non-movers sample to households, only individual and LGA controls are available for the estimation. 10 The estimates reported in Table 5 (columns 3-4) remain similar to the baseline ones.
10 Household controls such as parents' education are unavoidably omitted.
New universities
So far, we have found that the distance to university to which individuals are subjected at different critical ages acts as a deterrent to human capital accumulation. This section extends the analysis to current schooling and concentrates on the federal universities newly created in 12 states during the 2011-2013 period. More precisely, we exploit an episode of massive roll-out of public universities in Nigeria, specifically targeting states that previously had only indirect access to the federal university system.
Typically, the federal government intervenes in the provision of education to balance access among the states ([Isumonah]). We aim to examine how the latest intervention affects the secondary school market. In particular, we investigate how the establishment of universities affects pupils' secondary school drop-out intention. From an economic point of view, the initial rationale behind the positive indirect effects of university establishment on pupils' secondary schooling relates to financial matters, the so-called "transaction cost effect". There is also what is described as the "neighbourhood effect" or "information network effect", which explains the benefits of the establishment of a university for the local secondary education market. Young people surrounded by a university environment can grow up to consider post-secondary education a natural goal, thus enhancing their school achievement ([Do, 2004]).
For the empirical framework, given that universities are not randomly assigned across hosting states, we step down the analysis to sub-state levels, using proximity to university to identify the individuals that received the most (and least) impact from the establishment of new universities. We define treatment as living within a buffer of 25 km radius around the new universities. The panel dimension of the data allows us to estimate a difference-in-differences (DiD) design whereby we compare changes over time in school drop-out intention between the treatment and control groups. In particular, the estimation outcome is a dummy variable that indicates whether an individual attending secondary school in time t intends to discontinue in t + 1 (Dropout). There are two estimation periods for the drop-out intention variable, as it was collected only in the first and second waves of the LSMS survey. In the existing literature on university enrollment, a number of direct and indirect factors have been highlighted ([Molina et al.]; [Bahrs and Siedler]; [Spiess and Wrohlich, 2010]). As part of this paper, we estimate the effect of the establishment of universities on the secondary school market. Explicitly, at the secondary school level, students are forward-looking and include information about their next expected level of schooling in their subsisting human capital plans when choosing effort levels ([Oreopoulos and Dunn, 2013]). For instance, introducing or increasing university fees affects enrollment behavior by lowering the intention of secondary school students to attend university ([Bahrs and Siedler]; [Hübner, 2012]). Thus, the benefits of completing secondary school may fall if the intention to attend university decreases, and we claim that the latter depends on costs, notably distance to university.
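The difference-in-differences comparison described above could be implemented, for illustration only, as a linear probability model with an interaction between the treatment indicator (living within 25 km of a new university) and the post-2011 wave dummy; the file, variable names and clustering variable below are hypothetical placeholders.

```python
# Hedged sketch of the DiD estimation of secondary-school drop-out intention.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("secondary_students.csv").dropna()  # hypothetical two-wave sample
did = smf.ols(
    "dropout_intention ~ near_new_univ * post + age + female + urban + C(state)",
    data=students,
).fit(cov_type="cluster", cov_kwds={"groups": students["ea_id"]})
print(did.params["near_new_univ:post"])  # the DiD coefficient of interest
```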
Data and raw differences
As in the main analysis, we rely on the first two waves (2010/2011 and 2012/2013) of the Nigerian Living Standard Measurement Survey (LSMS). The pre-treatment period (t = 0) corresponds to the first wave of 2010/2011, as universities had not yet been introduced in the twelve states concerned, whereas the second wave of 2012/2013 refers to the post-treatment period (t = 1), when the universities had already been established.
Our sample consists of students in secondary school. The dependent variable equals 0 if the individual answers yes to the question "Do you plan to attend school next year?", and 1 otherwise. In Table 6, we provide descriptive evidence of the effect of the university creation on university attendance and secondary school drop-out intention. We split individuals according to their proximity to the new universities: "near university (treated)"
includes individuals living within a 25 km radius of the newly created universities, and "far from university (controls)" covers those living beyond the 25 km radius. The "near university" group accounts for about 17.83% (2,247 obs.) of the total observations. The raw statistics in Table 3.6 indicate that the intervention led to a reduction in the intention to drop out of secondary school in the two groups. The reduction is greater in the treatment group than in the control one, which provides a first insight into the positive effect of the introduction of universities on the lower level of schooling.
Findings
As the results discussed in the first part of this paper emphasize the importance of geographical constraints in human capital accumulation through increased marginal costs of education, it is relevant to grasp a potential mechanism that may be at work at lower levels of schooling. In Table 3.7, we report estimates relating to the intention to drop out of secondary school. The DiD estimation finds a 2.5 percentage point reduction in the intention to drop out of secondary school, attributed to the establishment of a new university. The results show that beyond the obvious monetary impact (the transaction cost explanation) of a new university on the local population, there is also a neighborhood effect (or information network effect). When a new university is formed, it provides strong incentives for individuals to continue education. One may argue that secondary students who live very close to a university face lower information costs when seeking information on the decision to participate in higher education.
Additional diagnoses of the DiD estimates
This subsection provides various robustness checks on the previously discussed estimates of the effect of the introduction of new universities on secondary school drop-out intention. First, the validity of the difference-in-differences strategy relies on the common trend assumption. The key identifying assumption here is that the trend in secondary school drop-out intention would have been the same for treatment and control groups in the absence of treatment (i.e. university establishment). Ideally, the parallel trend assumption can be investigated by exploiting data on multiple periods (at least two data periods prior to university introduction), but we do not have this in the present case.
Instead, we provide a partial test of the common trend hypothesis by specifying a model that examines the effect of a placebo treatment on the outcome. More particularly, we regress the outcome prior to the introduction of the new universities on the "future" treatment dummy. Following [Senne, 2014] and [Havnes and Mogstad, 2011], we specify a placebo model in which the pre-treatment drop-out intention is regressed on the future treatment indicator and the control variables.
The results of this placebo regression are reported in Table 3.8 and indicate that there is no statistically significant difference between the two groups prior to treatment in secondary school drop-out intention.
Next, we revisit the composition of our sample. In fact, as already discussed, the selection of states where the universities are sited is known to depend on the level of educational development, which we do not fully observe. The definition of the treatment based on physical distance rather than state political borders therefore rests on the assumption that localisation within a state may be quasi-random, conditional on the NUC localisation criteria. However, the free use of physical distance runs the risk of extending the groups (particularly the control group) beyond state administrative boundaries, which might then include samples from states at dissimilar levels of educational development and other characteristics. To ensure that this is not the case, we restrict the sample to the intervention states only. The idea is to neutralise the catchment policy, given that catchment is equally distributed within a state and all hosting states are expected to offer the same catchment advantage (see [Adeyemi, 2001]). Thus, the restricted sample removes those residing in states other than the hosting states, thereby isolating the pure effect of proximity by guaranteeing the catchment advantage to all individuals in the sample.
In other words, apart from helping to disentangle the effects of the catchment policy and proximity to university, this strategy also helps to reduce the potential bias that may arise from comparing dissimilar groups. The estimates from the restricted sample are displayed in Table 3.9. Similar to the main estimates, the results highlight a significant negative effect on the intention to drop out of secondary school. In this case, one may argue that the entire estimated effect might be attributed to proximity and none to the fact that the treatment group was unduly favoured in admissions through the catchment policy.
Conclusion and policy implications
The question of enhancing human capital in developing countries has been attracting concern from policy-makers and development stakeholders. While an inadequate quantity of institutions of higher learning remains a significant constraint, most of these countries have constructed a large number of higher education institutions in recent times. Nevertheless, not much is known about the nature of their spatial distribution and whether it constitutes a dimension of educational constraints. This study follows this question, focusing on one of the most populous countries in the world and one of the most under-served higher education markets. In two related analyses, the paper interrogates the relationship between geographical distance and the accumulation of schooling. The first estimates completed schooling as a function of the distance experienced at the time of schooling, and finds an unambiguously negative effect of distance on completed schooling. The second exploits the recent mass creation of universities and the associated dramatic increase in proximity to university. Using a difference-in-differences strategy, it finds that the intervention led to a reduction in the intention to drop out of secondary school.
The article has a number of policy implications. In a geographically large country like Nigeria, while university agglomeration may attract substantial external economies, it necessarily affects equality of access and impedes overall human capital accumulation. This calls attention to spatial distribution in higher education access policies. At lower levels of education this is already standard, but it has not applied to higher education because it was considered an elite good. If developing countries are to compete in the present knowledge economy, they must universally expand access to higher learning and create an abundance of the skills in demand for the twenty-first century global economy.
Some of these countries still prioritize basic education, neglecting the expansion of higher learning. However, this paper also demonstrates a possible synergy between the two: it shows, in line with a few other studies, that access to higher education institutions may enhance the quality of basic schooling, in this case by discouraging dropout from secondary schools.
Introduction
In the literature on human capital accumulation, studies frequently find inequalities across individuals and groups which are known to spill over into economic opportunities.
Earlier studies of inequality of opportunity in Africa trace its origin to colonial heritage (e.g., [Cogneau and Mesplé-Somps]), but fail to account for the mechanisms of transmission across generations. While much of this may be due to the distribution of innate abilities, it has been shown that family backgrounds and neighbourhood quality make substantial contributions (see [Zimmerman, 2001]; [Glick and Sahn, 2000]). Studies under the framework of intergenerational transmission focus specifically on the role of parental endowments, including education, and this has produced overwhelming evidence that parents have large effects on the education of their children. However, the question is no longer whether they do, but to what degree. The degree to which parents shape the economic outcomes of their children captures the degree of persistence of social and economic inequalities. Policies might need to alter this and improve economic equality, but through what policies? From the point of view of human capital accumulation, the optimal set of policies might depend on whether the inertia is due to demand or supply constraints.
Nigeria has in the past deployed a mix of demand- and supply-side policies but with little achievement ([Lincove, 2015]). Using the 2019 Nigerian Living Standards Survey, this study examines the roles of parental and neighbourhood capital in children's human capital and decomposes their contributions within a causal mediation framework. The historical activities of the missionaries are assumed to jointly affect the parental and neighbourhood capitals, which fulfils the identifying assumption of the causal mediation framework.
On the whole, the study finds large effects of parental and neighbourhood capital on child human capital. However, the causal mediation analysis finds that the total effect may be distributed 25% and 75%, respectively, between parental and neighbourhood capital. The decomposition of the effects is robust to the inclusion of a battery of controls. The policy implication is that supply-side educational policies remain relevant for the objective of achieving a more egalitarian distribution of economic opportunities at this stage of Nigeria's economic development. The rest of the paper is organised as follows: section 4.2 discusses the background of the study, section 4.3 presents the empirical framework, section 4.4 presents the data and descriptive statistics, section 4.5 presents the results, section 4.6 estimates and discusses the mechanisms, and section 4.7 concludes.
Background
Historical missionary activities in colonial Nigeria
In the existing literature, colonial experience has been shown to vary broadly between centralised and decentralised ethnic groups within the various colonies, and the economic development legacies of colonialism have been linked to this variation. This is because the leadership of the centralised ethnic groups had more intimate interactions with the colonial regimes, which enabled them to imbibe positive institutional behaviours that sustain economic and social development ([Gennaioli and Rainer, 2007]; [La Porta et al., 1999]; [Archibong, 2019]).
In the Nigerian case, the special attraction of the regime to the areas of high ethnic centralisation was predicated on the propriety of the latter for direct taxation, a factor considered essential for internal stability by the British colonial regime.
Therefore, the regime adopted two different approaches to administration in the centralised and decentralised ethnic groups, whereby important policies were bargained with rulers in the centralised ethnic groups but imposed in the decentralised ethnic groups ([Archibong, 2016]; [Okoye, 2021]; [Fields, 2021]). The bargaining institutionalised the Native Authority Proclamation of 1907, which guaranteed colonial non-intervention in matters of culture and religion, and further hegemonised traditional rulership over the people. With Muslim emirs ruling virtually all the centralised ethnic groups, this hegemony later became the single most important tool used to limit the advancement of the missionaries and their activities ([Archibong, 2018]; [Berger, 2009]; [Burdon, 1904]). In the end, the major achievement in these locations was that the local Islamic population was spared widespread conversion to Christianity, unlike the experience of the other locations where leadership was decentralised. However, this came at the expense of the social services, including education and health, plus the associated infrastructures which the missionaries would have provided. The persistence of the resultant inequality in educational opportunities long after the missionaries had left the country is well documented ([Bauer et al.]).
In the literature of economic history, more rapid post-colonial economic and social development is often seen as a product of the closer involvement of ethnic rulers in colonial administration, such as that indicated above, due to the leadership training such involvement entails ([Gennaioli and Rainer, 2007]). However, the case in point stands out:
such areas in Nigeria are the centralised Muslim dominated ethnic groups which today more strongly associated with poor indices of development in the areas of education and other economic outcomes despite being more advanced prior to colonisation ( [START_REF] Gennaioli | The modern impact of precolonial centralization in africa[END_REF]; [Okoye and Pongou, 2014]). This unexpected turnout of outcomes has been described as "reversal of fortunes", and attributed to limited exposure to the externalities of the missionaries' activities, education in particular ( [START_REF] Okoye | Missions and heterogeneous social change: Evidence from border discontinuities in the emirates of nigeria[END_REF]; [Lincove, 2009]; [START_REF] Agwu | University proximity at teenage years and educational attainment[END_REF]) .
Nigeria's educational system
Nigeria's education system consists of primary, secondary, and tertiary education. Grades 1-6, the primary school level leading to the award of the First School Leaving Certificate (FSLC), typically last between the ages of 6 and 11. Grades 7-12, the secondary school level, follow and typically last between the ages of 12 and 17; the Basic Education Certificate Examination (BECE) and the Senior Secondary Certificate Examination (SSCE) are awarded after the first and second successful three years respectively. The tertiary level, which typically consists of 3-5 year programmes in colleges and universities, is accessed upon completion of secondary school. Nigerian children face significant demand- and supply-side constraints in accessing the educational system, and participation at every level remains low compared with the global average: with a 52% dropout rate, Nigeria accounted for about 10% of the world's out-of-school children in 2010 ([Bertoni]). According to the World Bank, Nigeria's gross primary and secondary school enrolment rates hover around 87.5% and 44% respectively. In terms of distribution, there are wide educational disparities between the Nigerian geographical regions: for example, most likely due to the mechanisms discussed in section 4.2.1, school participation is generally higher in the south than in the north, and more than 66% of school participants in the north remain illiterate even after completing primary school, whereas the equivalent is only about 18-28% in the south ([Favara et al., 2015]; [Lincove, 2009]).
Nigeria's first major national education expansion policy, the Universal Primary Education (UPE) policy, was implemented in 1976 and, among other provisions, abolished tuition for grades 1-6. Although the programme faced huge financial and implementation capacity challenges, a number of studies acknowledge that it helped alleviate the country's human capital deficit ([Csapo, 1983]; [Osili and Long, 2008]). However, one of its objectives, eliminating the educational imbalance between the northern and southern regions, failed to materialise. In fact, some studies suggest that the imbalance was reinforced by the programme because of inadequate capacity to implement it in the north ([Lincove, 2009]).
Empirical framework

4.3.1 The model of intergenerational mobility
The standard empirical framework of intergenerational mobility, inspired by the seminal works of [Becker] and [Solon, 1992], focuses solely on the role of parental education and is specified as follows:
y_{ij,t} = \gamma_0 + \gamma_1 y_{ij,(t-1)} + \epsilon_{ij,t} \qquad (4.1)
where y_{ij,t} represents the completed years of schooling of child i within group j belonging to generation t, and y_{ij,(t-1)} that of the parents belonging to generation t-1. The error term \epsilon_{ij,t} is assumed to be independently and identically distributed (i.i.d.), while standard controls such as child gender and age are suppressed. The parameter of interest, \gamma_1, is the inverse measure of the extent of regression towards the mean across generations.
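For concreteness, a minimal sketch of estimating equation 4.1 by OLS is shown below. The column names (child_yrs, parent_yrs, male, age) are hypothetical placeholders rather than the variable names in the NLSS data.

```python
# Minimal sketch of the baseline persistence model (4.1), estimated by OLS.
# Column names are hypothetical, not the paper's actual variable names.
import pandas as pd
import statsmodels.formula.api as smf

def estimate_baseline(df: pd.DataFrame):
    # child schooling on parental schooling, with the suppressed controls (gender, age)
    model = smf.ols("child_yrs ~ parent_yrs + male + age", data=df)
    result = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    return result.params["parent_yrs"], result  # estimate of gamma_1 and full output
```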
The model incorporating group capital
Recent studies of intergenerational mobility tend to augment equation 4.1 with measures of group capital, measured as the average years of education at the level of the family ([Adermon]), the ethnic group ([Borjas, 1992]; [Güell]) or the geographical location ([Patacchini and Zenou, 2011]). These studies argue that equation 4.1 underestimates \gamma_1 due to the omission of the group capital. I therefore define the group capital at the district level and augment the model accordingly. This captures the level at which basic education services are provided in Nigeria. Furthermore, many previous studies contend that ethnic capital operates mainly through the geographical clustering of ethnic groups, which suggests that group capitals may be picking up certain fixed factors within the ethnic homelands (see [Leon, 2005]; [Borjas, 1995]; [Alesina]). Equation 4.1 is then modified as follows:
y_{ij,t} = \gamma_0 + \gamma_1 y_{ij,(t-1)} + \gamma_2 y_{v(j),(t-1)} + \theta_3 X_{ij} + \epsilon_{ij,t} \qquad (4.2)
where y_{v(j),(t-1)} represents the educational level of the group, which is partially transmitted across generations. In addition to individual controls such as gender and age, X_{ij} captures group-level controls. Previous studies generally find that the parental and group capitals jointly influence human capital accumulation. This is consistent with the theoretical proposition that parents value the human capital of the next generation and have to invest in it within neighbourhoods whose capitals may act as complements or substitutes ([Becker]).
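To make the construction of the group (neighbourhood) capital concrete, the sketch below computes the district-level average of parental years of schooling, optionally as a leave-one-out mean so that a household's own parents do not enter its own neighbourhood measure. The column names (district, parent_yrs) are hypothetical.

```python
import pandas as pd

def add_neighbourhood_capital(df: pd.DataFrame, leave_one_out: bool = True) -> pd.DataFrame:
    # District-level mean of parental years of schooling
    grp = df.groupby("district")["parent_yrs"]
    if leave_one_out:
        # Exclude the household's own parents from its neighbourhood average
        total, count = grp.transform("sum"), grp.transform("count")
        df["neigh_capital"] = (total - df["parent_yrs"]) / (count - 1)
    else:
        df["neigh_capital"] = grp.transform("mean")
    return df
```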
The model with interacted parental and group capitals
Given that the group effect is captured at the spatial level (the district, in this case), y_{v(j),(t-1)} denotes fixed effects capturing not only parental social interactions, but also the quality of the neighbourhood, including the quality of its school system. Then, in line with neighbourhood interaction models ([Benabou, 1993]; [Ioannides, 2003]; [Patacchini and Zenou, 2011]), I adapt the model to include an interaction between the parental and neighbourhood capitals. The interacted model is specified as follows:
y_{ij,t} = \gamma_0 + \gamma_1 y_{ij,(t-1)} + \gamma_2 y_{v(j),(t-1)} + \delta \, y_{ij,(t-1)} \times y_{v(j),(t-1)} + \theta_3 X_{ij} + \epsilon_{ij,t} \qquad (4.3)
The interaction coefficient \delta captures the nature of the relationship between the two capitals: if \delta is less than zero, intergenerational persistence is lower for those living in neighbourhoods with higher capital, and vice versa. Another advantage of the interaction is that it enables the decomposition of the direct effect of the parental capital and the indirect effect of the neighbourhood capital under the causal mediation framework ([Imai]).
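The role of \delta can be read directly from the marginal effect of parental education implied by equation 4.3:

\[ \frac{\partial y_{ij,t}}{\partial y_{ij,(t-1)}} = \gamma_1 + \delta \, y_{v(j),(t-1)}, \]

so a negative \delta means that each additional year of average neighbourhood schooling lowers the measured persistence of parental education by |\delta|.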
Identification
In order to identify the parameters of interest, I apply an instrumental variables strategy to the models above. The identifying instrument is the intensity of the historical activities of the missionaries, which is relevant because of the documented role of the missionaries in the provision of social services (education and health) at a time when the colonial governments were not financially and/or administratively in a position to provide them ([Bauer]). Economists generally find that modern economies are affected by past institutions, even after those institutions have ceased to exist ([Acemoglu]; [Nunn, 2010]).
In the history of colonial formal education, the public goods investments of the missionaries have been found to endure through the current provision of schools, and through collective and individual educational attainment and literacy levels ([Gallego]; [Lankina]; [Okoye]).
I assume that, after controlling for the historical factors that influenced the penetration of the missionaries, including ethnic precolonial centralisation, the precolonial ethnic share of Muslims, location on the coast, population density and the presence of cities in the 1940s, the influence of the missionaries passes only through the accumulation of human capital at the neighbourhood level.
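As an illustration of the estimation step, the sketch below runs the 2SLS version of equation 4.2 with the intensity of missionary activity as the excluded instrument for parental education. It relies on the linearmodels package and on hypothetical column names (child_yrs, parent_yrs, log_missions, plus a list of controls), so it is a sketch of the approach rather than the paper's exact specification.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

def estimate_iv(df: pd.DataFrame, controls: list[str]):
    # Outcome: child years of schooling; endogenous regressor: parental years of schooling;
    # excluded instrument: log(1 + missions per district land area).
    dep = df["child_yrs"]
    exog = df[controls].assign(const=1.0)   # exogenous controls plus a constant
    endog = df[["parent_yrs"]]
    instr = df[["log_missions"]]
    return IV2SLS(dep, exog, endog, instr).fit(cov_type="robust")
```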
Furthermore, as the theory proposes, equation 4.3 interacts the parental and neighbourhood capitals, and I then rely on the causal mediation framework to decompose the total effect into a direct effect due to parental capital and an indirect effect due to neighbourhood capital. The causal mediation framework requires just one instrument to identify the direct effect of the treatment T and the indirect effect of the mediator M on the relevant outcome Y.
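The decomposition used throughout can be written as the simple identity below, which is what the causal mediation estimates in section 4.5 report:

\[ \text{Total effect} \;=\; \underbrace{\text{Direct effect of } T}_{\text{parental capital}} \;+\; \underbrace{\text{Indirect effect via } M}_{\text{neighbourhood capital}}, \qquad \text{share mediated} \;=\; \frac{\text{Indirect}}{\text{Total}}. \]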
The decomposition is identified as long as the endogeneity originates from confounders that jointly influence T and M but not T and Y ([Dippel]). The intensity of the missionaries' activities could provide this joint influence because it is expected to correlate with the neighbourhood capital M through school quality, as discussed in section 4.2.1, and also with parental capital T via institutional persistence and possible parental neighbourhood sorting ([Ioannides, 2003]; [Patacchini and Zenou, 2011]). Assuming that parents sort into neighbourhoods of better school quality, this strengthens the case for the causal mediation framework because the interaction between the parental and neighbourhood capitals would then be endogenous.

The vector X of equation 4.2 includes measures of historical population density and land suitability for agriculture, which are potential drivers of the spread of Islam and Christianity ([Michalopoulos]), obtained from the History Database of the Global Environment ([Goldewijk]). Considering that waterways may facilitate trade and economic activity and affect incentives for human capital accumulation, I measure the total length of rivers within the ethnic group's homeland with data from the Natural Earth Project. Disease vectors may also have persistent effects on development ([Alsan, 2015]):
hence, I include climatic susceptibility to malaria from the Malaria Atlas Project in the vector of controls, and, to account for the economic and social consequences of the slave trade, I include controls for the total number of slaves exported through the Atlantic and Indian Ocean trade routes for each ethnic group, as derived from [Nunn].
As a measure of the colonial government's investment, I include the length of colonial railroads in the ethnic group's territory, also drawn from [Nunn]. Finally,
given that there may be other important geographic features of an ethnic homeland that are not directly observed, I include measures of average elevation and terrain roughness from the U.S. Geological Survey's global digital elevation model, the size of the ethnic group's land territory, latitude and longitude.
Measurement of key variables and descriptive statistics

Human capital variables
The human capital variables are measured in accordance with the fundamental theory, in which human capital is conceived as the embodiment of sustained investments of time and financial resources in education; this must be distinguished from snapshot school enrolment. Thus, I measure child and parental human capital as stocks, capturing the highest number of years of schooling completed. With respect to parental human capital, I follow [Behrman] by using the highest number of years of schooling of either parent. I transform the categorical education levels, collected as the highest grade completed or the highest qualification attained, into a continuous variable, years of schooling, using the number of years of schooling required to attain the different levels in the Nigerian educational system; the format may be found in [Bertoni]. The neighbourhood capital is derived from the years of schooling of the parents and is measured as the average over the parents living in a specific neighbourhood, defined as the district of current residence.
This fundamentally assumes that the respondents currently reside in the districts where they were principally schooled. To check this, I use the migration history contained in the NLSS to determine how long individuals have lived in their current locations; it shows that less than 0.5% have ever migrated out of their districts since birth. Though the rate of migration is negligible, each migrant is assigned back to their original district if they migrated after reaching 18 years of age, and discarded otherwise.
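A minimal sketch of the conversion from reported education levels to years of schooling is shown below. The mapping dictionary is illustrative, based on the 6-3-3 school structure described in section 4.2.2, and is an assumption rather than the exact lookup table used in the paper.

```python
# Illustrative mapping from highest level attained to years of schooling.
# The exact lookup table used in the paper may differ; this follows the 6-3-3 structure.
YEARS_BY_LEVEL = {
    "none": 0,
    "primary (FSLC)": 6,
    "junior secondary (BECE)": 9,
    "senior secondary (SSCE)": 12,
    "tertiary": 16,
}

def years_of_schooling(level: str) -> int:
    return YEARS_BY_LEVEL.get(level, 0)
```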
Missionaries activities
To construct measures of the historical influence of the missionaries (Catholic and Protestant) within Nigerian districts, I use the mission base stations digitised by [Nunn, 2010].
I sum the number of missions within each polygon of the 774 administrative districts of Nigeria. The sum is then normalised by the land area of each district. Given the absence of the missionaries in many districts, the measure is highly skewed. I therefore follow [Nunn, 2010] and take the natural log of one plus the normalised number of missions. I also measure the distance from the centroid of each district (LGA) to the nearest mission station as an alternative measure of the presence of the missionaries. The locations of the missionary schools and hospitals are not necessarily within the mission stations, but they are not far from them, since mission stations operated as walking-distance supervision stations for the schools and hospitals. Hence, both the measured distance and the normalised number of missions are only indicative of areas that benefited from the missionaries' establishments.
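The construction of both measures can be sketched with geopandas as below. The file paths and column names are hypothetical, and the snippet assumes both layers share a projected CRS in which areas and distances are meaningful (and a geopandas version recent enough to accept the predicate argument to sjoin).

```python
import numpy as np
import geopandas as gpd

# Hypothetical inputs: district polygons and mission-station points in a common projected CRS.
districts = gpd.read_file("districts.shp")   # one polygon per LGA
missions = gpd.read_file("missions.shp")     # one point per mission station

# Missions per unit of district land area, then log(1 + x) as described in the text.
counts = gpd.sjoin(missions, districts, predicate="within").groupby("index_right").size()
districts["missions_per_area"] = counts.reindex(districts.index, fill_value=0) / districts.geometry.area
districts["log_missions"] = np.log1p(districts["missions_per_area"])

# Alternative measure: distance from each district centroid to the nearest mission station.
all_missions = missions.geometry.unary_union
districts["dist_to_mission"] = districts.geometry.centroid.distance(all_missions)
```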
Main results and discussion
The main results pertain to the effects of the parental and neighbourhood capitals, independently and jointly. The results identified through the instrument measured as the log number of missions per district land area are discussed in this section. The rest, relying on the alternative measurement of the instrument as the nearest distance to a mission, are reported in tables C.2 and C.3 as robustness checks. Table 4.3 reports the results for the parental capital estimated in isolation from the neighbourhood capital. In column 1, which is based on OLS, an increase of parental education by one year leads to roughly a 4-month increase in the child's years of education. This estimate of persistence falls within the range of OLS estimates for Nigeria from recent studies (see [Funjika]; [Azomahou]);
it indicates a substantial impact of parental endowments on children's outcomes, and therefore persistent inequalities of economic opportunity in Nigeria and other African countries. Note, however, that the OLS estimates in the current and most of the previous studies are subject to a number of estimation biases, the most common of which are the non-random distribution of educational opportunities and the influence of unobserved innate abilities ([Holmlund]). Columns 2-4 attempt to address these concerns using a 2SLS identification framework. Using the intensity of the missionaries' activities as the identifying instrument, column 2 shows that the intergenerational elasticity is around one year of education, suggesting that the OLS underestimates the effect of parental capital. The 2SLS based solely on the above instrument may fail to eliminate the innate ability bias.
Therefore, columns 3 and 4 employ internal instruments based on [Lewbel, 2012]. I use the internally generated variables not only as alternative instruments, but also as complements to the external instrument under overidentification restrictions. Having addressed the two main sources of bias, the estimates in columns 3 and 4 are highly similar and suggest that the effect of parental capital lies within the range 0.144-0.150. As already indicated in the literature, this might imply that the correlation of innate abilities constitutes a large part of the intergenerational transmission process ([Holmlund]).
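For readers unfamiliar with [Lewbel, 2012], the internally generated instruments exploit heteroskedasticity: the exogenous regressors are demeaned and interacted with the first-stage residuals. A rough sketch of the construction is shown below; it is a simplified illustration for a single endogenous regressor, not a substitute for a packaged implementation.

```python
import numpy as np
import statsmodels.api as sm

def lewbel_instruments(X_exog: np.ndarray, x_endog: np.ndarray) -> np.ndarray:
    """Heteroskedasticity-based instruments: (Z - Zbar) * e_hat, where e_hat are the
    residuals from regressing the endogenous regressor on the exogenous regressors."""
    Z = sm.add_constant(X_exog)
    e_hat = x_endog - Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    return (X_exog - X_exog.mean(axis=0)) * e_hat[:, None]
```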
On the other hand, table 4.4 reports the effects of the neighbourhood schooling capital, which constitute a larger portion of the total effects than the parental capital, also in line with related previous studies ([Funjika]). It suggests that a one-year increase in the average years of education in the neighbourhood increases the child's years of education by between 1.284 and 1.362.
Table 4.5 attempts to measure the relative contributions of the parental and neighbourhood capitals through their interaction in the model. Assuming, in line with the literature, that the parental capital exerts a direct effect while the neighbourhood capital exerts an indirect effect, the table shows that their interaction supports intergenerational mobility. In particular, a one-year increase in the mean years of education in the neighbourhood reduces the effect of parental capital by 0.077-0.080, depending on whether or not the geographical controls are included. Since there are no separate instruments for the parental and neighbourhood capitals, straightforward identification of the interaction effect is problematic. However, the causal mediation framework allows the interaction to be identified via decomposition under a recently developed algorithm relying on a single identifying instrument ([Dippel]). I apply this approach using the previously used external instrument (the intensity of the missionaries' activities), and the results are reported in table 4.6.

Table 4.6 summarises the findings of the causal mediation analysis: the human capital of generation t-1 (parents and neighbourhood) jointly promotes the human capital of generation t via the positive total effects. The direct and indirect effects, due respectively to the parental and neighbourhood capitals, are also positive. In relative terms, the table shows that the indirect effect outweighs the direct effect, accounting for 73.7-74.8% of the total, suggesting that increasing access to education for the public at the meso or macro level might be highly rewarding in this context. Given the instrument, the decomposition of the effects remains a local average treatment effect for those affected by the historical education provision of the missionaries. In the next section, I discuss the possible mechanisms, including the persisting inequalities in the quantity and quality of schools traceable to the settlements of the missionaries.

This section addresses how the historical settlements of the missionaries remain relevant for human capital to this day. Events during Nigeria's colonial period, in which the missionaries rather than the colonial government championed the provision of social services including education, must have generated inequality of schooling opportunities because the missionaries only had access to selected localities. However, one might be puzzled that the inequality remains in the present despite governmental policies since independence in 1960.
Mainly since 1976, Nigeria has pursued an aggressive campaign for mass education and allocated large budgets for the construction and redistribution of educational infrastructure across the country (see [Osili and Long, 2008]). What is not clear, therefore, is whether these attempts failed to redistribute educational opportunities, or whether the education provided failed to substitute for the one originally provided by the missionaries. In this section, I obtain the contemporary distribution of the quality and quantity of primary and secondary schools in Nigeria, employ them as mediators of the intergenerational transmission in place of the average neighbourhood schooling, and then use measures of exposure to the missionaries as the identifying instrument.
The persistence mechanisms
This section pertains to the mechanisms of the effect of the parental capital. The potential mechanisms also necessarily intersect with indicators of economic inequality, which the theory of intergenerational transmission aims to capture. Hence, for example, while a household credit constraint might capture the extent of income poverty, it also limits parental ability to invest in their offspring's education. Theoretically, parents choose education levels for their children subject to costs and benefits under credit constraints and inefficient credit markets ([Becker, 1985]; [Becker]). To test this, I focus on proxies of credit constraints, particularly household income represented by consumption expenditure per capita, and family size. Fortunately, a recent paper, [Okoye], provides a suitable background to the analysis in this section: using a regression discontinuity design (RDD) and exploiting discontinuities in mission stations around the borders of the Emirates of Northern Nigeria (that is, the areas where missionary activities were restricted during the colonial period), the paper finds that areas with greater historical missionary activity have higher levels of schooling, lower levels of fertility, and higher household wealth in the present. I merely add to this evidence by estimating, within the current data, the relationships between parental capital and household consumption expenditure and family size. There may be reverse causality between education and most of these outcomes; hence, I apply the two-stage least squares method using the missionaries' activities as instrument. The results are reported in table 4.9. They suggest that parental education is negatively correlated with credit constraints in terms of per capita household consumption as well as family size. Although it is not possible to determine parental investments in other dimensions, such as the non-monetary efforts known to complement financial investments (see [Patacchini and Zenou, 2011]), the results show that more educated parents have more financial resources at their disposal for educational investments. Table 4.9 shows that, on average, each additional year of parental education is associated with a 6.9% increase in per capita household expenditure (which also includes educational expenditure). The theory also emphasises the trade-off between the size of the family, or number of children, and parental investment in education (see [Becker]).
In line with this, table 4.9 shows that parental capital is negatively associated with family size. Given the established trade-off between the quality and quantity of children, this is a potential channel of the intergenerational persistence. Collectively, the results indicate that credit constraints may be less binding for more educated parents, allowing them the leverage to invest more in their children. Nevertheless, there is no way to determine whether these resources are actually employed in educational investment for the children, but it suffices to infer that more educated parents have a higher capacity to invest in their children as a potential channel for the intra-household persistence of human capital.
Conclusion
In this paper, I investigate intergenerational mobility in Nigeria using education as a marker of economic status. Drawing from intergenerational mobility theory, I argue that the quality of neighbourhoods acts as an externality in the production function for human capital. In particular, the quality of the neighbourhoods in which children are raised influences their human capital and economic opportunities. Previous studies denote this as ethnic capital, but in the context of Nigeria, ethnicities, geographies and administrative boundaries inexorably intersect and jointly interfere in human capital production.
Therefore, I chose to conduct the analysis on the basis of the administrative level where basic schools are provided. This approach not only exploits the contribution of access to education, but also aligns the analysis to potential policies intervention. I rely on the human capital intervention of the colonial Christian missionaries to identify and disentangle for the first time the effects of parental and neighbourhood capitals.
The empirical results yield two main sets of findings. First, the neighbourhood capital supports intergenerational mobility by acting as a mediator of the productivity of parental capital, which is consistent with previous findings. However, I further decompose the total effect into direct and indirect effects, due respectively to the parental and neighbourhood capitals. As it turns out, the parental and neighbourhood capitals are substitutes, implying that intergenerational persistence is higher in neighbourhoods of lower quality. What is more, the ratio of the effect of the parental capital to the neighbourhood capital is 25:75. I confirm that the effect of the neighbourhood capital is driven by the quality of schools in the neighbourhood, which means that policies aimed at improving school supply could be relevant in alleviating the persistence of inequalities.
Compared to recent related studies, the current study connects with the political realities of Nigeria by exploiting variation in the provision of basic education across the 774 districts.
It also exploits a combination of identification approaches that accounts not only for the endogenous human capital accumulation process, but also for the bias of innate ability correlation. Based on the findings, policymakers have a clear path towards ensuring upward mobility opportunities for individuals across the country, through enhancing the provision of schools at the district level.
This thesis examined a number of important themes relating to Nigeria's economic development. These themes are quite recurrent in the political economy discussions and include households' vulnerability to economic shocks mostly as a result of civil conflicts, the nature and consequences of access to schools, and the nature of socioeconomic mobility.
Given the wide ethnic and religious diversity of Nigeria's population, economic shocks due to inter-group rivalries and other types of civil conflict are rampant and often have grave consequences for households' standard of living. Policy makers are therefore expected to adopt strategies to protect household welfare, which may include adhering to the theory of systems' resilience that can provide cushions against economic shocks. In the literature, social services investments and collective capitals such as roads, health and educational institutions support resilient systems. On the other hand, existing studies show that civil conflicts in Nigeria fit the relative deprivation theory, which implies that the nature of the distribution of economic opportunities is part of the causes and potentially an integral part of the alleviation. The thesis first assesses the effects of one of the latest civil conflicts through the lens of resilience theory, finding that the resilience approach is an important tradition to incorporate in development policies.
Afterwards, it examines deprivation from the standpoint of access to education and the intergenerational transmission. I summarise the key findings of the empirical chapters of the thesis below:
Chapter 2 examines the consequences of the recent Boko Haram conflict and the mechanisms of coping with the shocks emanating from it. It is difficult to causally identify the effects of the conflict given the highly selective choices of the conflict actors. The chapter exploits both longitudinal data and a robust identification strategy that addresses the selection bias, and finds that resilience capacity is an important factor in mitigating household welfare risks due to conflict shocks. The identification is based on a non-parametric difference-in-differences strategy applied to closely matched exposed and non-exposed groups. As expected, the chapter finds that the conflict shock affects food security negatively, attenuated by resilience capacity, but the strength of resilience is weakened in the process. The consistency of the findings with the hypotheses of the resilience approach to sustainable development yields the recommendation that development policies should always aim to establish development ecosystems that are household-oriented and resilient to shocks.
The major limitation of the study is data: obtaining high-quality survey data in the extreme conditions of the Boko Haram insurgency is difficult. While I find the existing LSMS panel and the ACLED datasets sufficient for the analysis, a dataset tailored to a wider range of welfare indicators and more direct measures of conflict exposure could support a more robust analysis. Furthermore, I was unable to observe the possible inducement of migration by the conflict, which could bias the estimates, nor how the conflict might have affected the data collection. In the future, as better data become available, other researchers could assess the robustness of the current estimates. In addition, the span of the panel data, approximately six years, is short for the long-term concept that resilience represents. The long-term effects of the conflict on economic welfare and resilience are another aspect that future studies could explore.
Chapter 3 tackles the question of the strategic distribution of universities in order to enhance human capital accumulation evenly across the country, which remains a topical issue in Nigeria's policy circles. Social services and institutions of higher learning inherited from the colonial period were not equitably distributed, but how much has the contemporary government been able to affect the distribution? In two related analyses, the chapter interrogates the relationship between geographical distance and the accumulation of schooling. The first set of analyses estimates completed schooling as a function of distance experienced at the time of schooling, and finds an unambiguous negative effect of distance on completed schooling. The second set exploits the recent mass creation of universities and the associated dramatic increase in proximity to a university. Using a difference-in-differences strategy, it finds that the intervention led to a reduction in the intention to drop out of secondary school. The overall policy implication is that much still needs to be done to spread schooling opportunities more evenly in order to reduce the incidence of relative deprivation.
However, one of the main concerns in the analysis is the effect of internal migration and residential sorting, given that I only observe the location of individuals when they have completed schooling and not when they were schooling. I measured an individual's proximity to university by spatially matching the permanent location of universities to the current location of households, thereby invariably assuming that the sample had maintained a permanent residential location since the commencement of schooling. In a context where rural-urban migration is rampant, this is a strong assumption. Ideally, I would measure the proximity to university from the location where each individual started and completed schooling, or at least from the place of birth, which would approximate the residential location at the time of schooling. Unfortunately, the fact that the LSMS reports only the current location of households and their members lets labour migration threaten the estimates. Furthermore, given that most of the universities are located in approximately urban areas, where job prospects are higher, this mechanism may reinforce the migration bias.
Future studies may well exploit datasets that includes historical residential re-locations which might mitigate these concerns.
Finally, chapter 4 focuses on how inherited inequality, denoted by educational attainment, is transmitted across generations and whether this is mitigated by the educational quality of the growing-up environment. The empirical results yield two main sets of findings. First, the neighbourhood capital supports intergenerational mobility by acting as a mediator of the productivity of parental capital, which is consistent with previous findings. However, I further decompose the total effect into direct and indirect effects, due respectively to the parental and neighbourhood capitals. As it turns out, the parental and neighbourhood capitals are substitutes, implying that intergenerational persistence is higher in neighbourhoods of lower quality. What is more, the ratio of the effect of the parental capital to the neighbourhood capital is 25:75. I confirm that the effect of the neighbourhood capital is driven by the quality of schools in the neighbourhood, which means that policies aimed at improving school supply could be relevant in alleviating the persistence of inequalities.
Compared to related recent studies, the current study connects more with the political realities of Nigeria by exploiting variation in the provision of basic education across the 774 districts. It also exploits a combination of identification approaches that helps to address not only the endogenous human capital accumulation process, but also the bias of innate ability correlation. Based on the findings, policymakers have a clear path towards ensuring upward mobility opportunities for individuals across the country, through enhancing the provision of schools at the district level. The main limitation of the study, however, is that it was not possible to distinguish between ethnic capital per se and neighbourhood characteristics, due to the clustering of ethnicities in the Nigerian case. This could lead to the conflation of the theoretical ethnic and neighbourhood capitals. Therefore, separately identifying the unique contributions of environments and ethnic networks is not possible in this case.
Figure 2.1: Trends of Boko Haram attacks and casualties

Figure 2.2: Conflict-defined treatment and control geographical locations

Figure 2.3: Indicators of resilience capacity and pillars
Notes: RCI = resilience capacity index; ABS = index of access to basic services; AC = index of adaptive capacity; SSN = index of social safety nets; Asset = index of household assets.
Chapter 3

Accessing schools when it matters: the effect of university proximity during teenagehood on educational attainment

3.1 Introduction
Figure 3.1: LGA population density and proximity to university
y_{ih,2010} = \alpha + \sigma \, NewUniv25_{2011} + \gamma X_{ih,2010} + \epsilon_{ih,2010} \qquad (3.4)

Eq. 3.4 relates the secondary school drop-out intention in 2010 to a dummy NewUniv25_{2011} that indicates living within 25 km of the location where any of the new universities would be established in 2011 or 2013, as the case may be. The regressions equally condition on the previous controls. The estimates are presented in Table 3.7.
(NLSS), this paper investigates the persistence of economic opportunities designated in terms of educational attainment, and relies on the original distribution of schools in Nigeria via the colonial Christian missionaries to identify the important estimates. The missionaries pioneered the provision of social services comprising education and health services in Nigeria at a time when the colonial governments had limited capacity to provide such services. The missionaries also transmitted various forms of cultural change towards modern economic systems, but in general were not allowed to spread their activities all over the country. I employ the neighbourhood capital, defined as the average years of education within a neighbourhood, as an intervening influence on the intergenerational transmission of human capital. This sets up a causal mediation framework where parental and neighbourhood capitals are jointly determined. Then, drawing from historical accounts, I use measures of the historical activities of the missionaries as an instrument to empirically parse out the direct and indirect effects of parental human capital. I argue that parents influence their children's human capital directly by investing in their education, and indirectly by deciding the schooling neighbourhoods. The parental and neighbourhood capitals are necessarily linked through neighbourhood sorting based on educational preferences. The instrumental variable causal mediation analysis which harnesses these relationships is then employed to parse out the direct and indirect effects.
The sample for analysis derives from the nationally representative Nigerian Living Standards Survey (NLSS), collected by the Nigerian Bureau of Statistics (NBS) across the 774 Local Government Areas (districts) of Nigeria in 2019 ([NBS, 2019]). The NLSS follows the format of the World Bank's Living Standards Measurement Surveys (LSMS), but is preferred in this study because of its universal coverage of the Nigerian districts and larger sample size. The information collected includes years of education of individuals and parents, and demographic characteristics at the individual and household levels. I limit the sample to those who were at least 25 years old in the year of the survey (i.e. those born between 1950 and 1994), because these are expected to have completed schooling based on Nigeria's educational system ([Lincove, 2009]). The restriction reduces the sample to 29,522 individuals in 16,621 households belonging to 103 ethnic homelands.
Figure 4.1: Distribution of child education over cohort

Notes: RCI = resilience capacity index; SSN = social safety nets; AC = adaptive capacity; ABS = access to basic services; ASSET = access to assets; Q1-Q4 = resilience quartiles (Q1 = reference category). Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Table 2.1: Summary statistics for the control variables at baseline by household exposure status

Variable | Pooled sample (obs, mean, sd) | Treatment 7KM (obs, mean, sd) | Control 7KM (obs, mean, sd) | t-test
Urban 1,500 0.17 0.37 1,062 0.17 0.35 438 0.22 0.42 -0.06
Age of HH head 1,500 47.68 15.25 1,062 49.98 15.53 438 46.89 14.45 3.09*
HH head is wage worker 1,500 0.41 0.28 1,062 0.53 0.27 438 0.48 0.32 0.06
HH is agricultural worker 1,500 0.68 0.14 1,062 0.72 0.14 438 0.68 0.15 0.04
Household size 1,500 6.58 3.37 1,062 6.30 3.04 438 7.34 4.04 -1.05
Female HH head 1,500 0.07 0.25 1,062 0.08 0.27 438 0.04 0.19 0.04
HH head is literate 1,500 0.51 0.50 1,062 0.52 0.50 438 0.49 0.50 0.03
Ratio of children 1,500 0.36 0.23 1,062 0.35 0.23 438 0.38 0.22 -0.02
HH head marital status
Never married 1,500 0.02 0.15 1,062 0.02 0.14 438 0.03 0.18 -0.01
Monogamous marriage 1,500 0.61 0.49 1,062 0.63 0.48 438 0.57 0.50 0.06
Polygamous marriage 1,500 0.28 0.45 1,062 0.27 0.43 438 0.35 0.48 -0.08*
Notes: The treatment group comprises households exposed to conflicts occurring before September 2012. The control group comprises households exposed to conflicts occurring after September 2012. The date is chosen because the treatment-period survey commenced in September 2012. The t-test column refers to mean differences between the treatment and control groups. *** p<0.01, ** p<0.05, * p<0.1.
Table A.7 presents the summary statistics of the resilience variables at the first stage, table A.8 displays the corresponding factor loadings, and table 2.2 compares the household indices across conflict exposure status and over time.
Table 2.2: Summary statistics of the food security and resilience capacity outcomes by time and treatment status

Variable | Pooled sample (obs, mean, sd) | Treatment (obs, mean, sd) | Control (obs, mean, sd) | t-test of means
Table 2.3: Effect of conflict exposure on food (in)security
A: Conflict exposure within 7KM
(1) (2) (3) (4) (5) (6)
VARIABLES CSI FCS Food ratio CSI FCS Food ratio
Conf ict × P OST 1.287*** -1.384 0.072*** 1.240** -1.942 0.086***
(0.257) (0.858) (0.008) (0.502) (1.566) (0.015)
Baseline CSI 0.884***
(0.027)
Baseline FCS 0.343***
(0.014)
Baseline food ratio 0.103***
(0.010)
Baseline controls yes yes yes No No No
Household fixed effect No No No yes yes yes
Constant 1.277* 41.070*** 0.721*** 12.962** 53.421*** 1.297***
(0.691) (2.318) (0.021) (5.527) (16.401) (0.151)
Observations 3,000 3,000 3,000
Number of households 1,500 1,500 1,500 1,500 1,500 1,500
B: Conflict intensity (no. of fatalities)
Conflict intensity (100s of fatalities) 3.394*** -12.232*** 0.204*** 2.281*** -0.456 0.075***
(0.561) (1.891) (0.062) (0.561) (1.930) (0.006)
Baseline CSI 0.951***
(0.026)
Baseline FCS 0.471***
(0.017)
Baseline Food ratio 0.281***
(0.013)
Constant 0.226 31.665*** 0.623*** 8.778*** 27.734*** 0.872***
(0.664) (2.263) (0.021) (2.502) (7.424) (0.069)
Baseline controls yes yes yes No No No
Household fixed effects No No No yes yes yes
Observations 3,000 3,000 3,000
Number of households 1,500 1,500 1,500 1,500 1,500 1,500
Notes: CSI = coping strategy index; FCS = food consumption score; Food ratio = share of household per capita food expenditure. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Table 2.5: Effects of conflict exposure on resilience capacity

(1) (2) (3) (4) (5) (6) (7) (8) (9) (10)
VARIABLES RCI RCI ABS ABS SSN SSN AC AC ASSET ASSET
Conf × P OST -0.097** -0.065** 0.190*** -0.076** -0.107
(0.044) (0.032) (0.063) (0.031) (0.077)
f atalities × 100 -0.137*** -0.104*** 1.653** -0.251*** -0.222***
(0.021) (0.021) (0.792) (0.038) (0.046)
Household FE ? yes yes yes yes yes yes yes yes yes yes
Constant 1.559***1.593***1.675***1.625***4.211***3.782***1.559***1.559***7.914***7.881***
(0.189) (0.180) (0.059) (0.056) (0.203) (0.196) (0.067) (0.064) (0.149) (0.140)
Observations 3,000 3,000 3,000 3,000 3,000 3,000 3,000 3,000 3,000 3,000
Households 1,500 1,500 1,500 1,500 1,500 1,500 1,500 1,500 1,500 1,500
Table 3.1 summarises the main variables of the sample, which consists of 16,581 individuals aged over 25 years. The average number of years of schooling is slightly higher than 6, suggesting that most individuals in our sample did not go to university. Therefore,
Table 3.1: Descriptive Statistics
Variable Mean Std. dev. Min Max
Years of schooling 6.28 5.61 0 20
Distance to univeristy at 12 y.o 128.31 134.73 1.24 1007.29
Distance to univeristy at 18 y.o 97.07 105.04 0.44 960.57
Individual is a female 0.48 0.50 0 1
Individual's age 39.15 13.71 25 86
Urban 0.32 0.46 0 1
Parental education 0.04 0.52 0 12.50
the effect we measure is at a lower level of schooling. The sample is gender balanced (48% female) and the average age is about 39 years. Moreover, most of the individuals live in rural areas (68%). For our variable of interest, individuals' average distance to university is about 128 km when measured at age 12, and slightly more than 97 km when taken at age 18.
Table 3.2: Main estimates - The impact of distance to university on years of schooling
(1) (2) (3) (4)
At 12 years At 18 years
VARIABLES Dist. Univ Yrs of Schooling Dist. Univ Yrs of Schooling
Dist. Univ -5.423*** -6.520***
(0.562) (0.810)
Dist. Border -0.176*** -0.119***
(0.0225) (0.0247)
LGA pop. Density -0.0555*** -0.0642***
(0.0105) (0.0104)
Controls Yes Yes Yes Yes
Birth Cohort FE Yes Yes Yes Yes
Observations 14,797 14,749 14,889 14,841
F-test 44.05 31.12
Sargan statistic p-value 0.316 0.216
Robust standard errors in parentheses, clustered at the household level.
*** p<0.01, ** p<0.05, * p<0.1
Table 3.3: Heterogeneity - The impact of distance to university on years of schooling by gender
(1) (2) (3) (4)
Years of schooling
At 12 years At 18 years
VARIABLES Male Female Male Female
Dist. Univ -4.787*** -6.088*** -6.123*** -6.415***
(0.610) (0.767) (0.921) (0.994)
Controls Yes Yes Yes Yes
Birth Cohort FE Yes Yes Yes Yes
Observations 7,642 7,107 7,710 7,131
Robust standard errors in parentheses, clustered at the household-level.
*** p<0.01, ** p<0.05, * p<0.1
Table 3.5: Alternative subsamples
(1) (2) (3) (4)
HH head aged more than 35 Non-movers
VARIABLES At 12 years At 18 years At 12 years At 18 years
Dist. Univ -3.448*** -4.297*** -4.616*** -4.338***
(0.846) (1.212) (0.770) (0.682)
Controls Yes Yes Yes Yes
Birth Cohort FE Yes Yes Yes Yes
Observations 8,554 8,604 8,047 7,974
Robust standard errors in parentheses, clustered at the household level.
*** p<0.01, ** p<0.05, * p<0.1
Table 3.6: Raw differences
Table 3.7: DiD estimates - The effect of new university on secondary school drop-out intention
(1) (2) (3) (4)
VARIABLES Drop-out intention
NewUniv [0, 25 kms] × Post -0.0265*** -0.0257** -0.0258** -0.0245**
(0.0103) (0.0102) (0.0102) (0.0101)
Constant 0.0617*** 0.235*** 0.281*** 0.284***
(0.00302) (0.0163) (0.0189) (0.0199)
Observations 12,605 12,603 12,603 12,603
Individual Controls No Yes Yes Yes
HH controls No Yes Yes Yes
Distance variables No No Yes Yes
State FE No No No Yes
Robust standard errors in parentheses
*** p<0.01, ** p<0.05, * p<0.1
Table 3.8: Test of parallel trend assumption
(1)
VARIABLES Drop-out intention
NewUniv [0,25 Kms] × Post (t+1) 0.116
(0.171)
Constant 0.410**
(0.173)
Observations 6,426
R-squared 0.059
Individual Controls Yes
HH controls Yes
Distance variables Yes
Robust standard errors in parentheses
*** p<0.01, ** p<0.05, * p<0.1
Table 4.3: Effect of parental capital on child education
(1) (2) (3) (4) (5)
VARIABLES OLS IV Stage-2 IV Stage-2 IV Stage-2 Stage-1
Instrument external onlyinternal onlyexternal & internal
log(number of missions) 103.510***
(13.159)
parental capital 0.318*** 1.002*** 0.144*** 0.150***
(0.005) (0.148) (0.012) (0.012)
Child age 0.079*** 0.148*** 0.061*** 0.062*** -0.102***
(0.016) (0.024) (0.016) (0.016) (0.016)
Child is male 1.761*** 1.504*** 1.827*** 1.824*** 0.374***
(0.045) (0.077) (0.046) (0.046) (0.046)
child is muslim -1.822*** -0.482 -2.163*** -2.151*** -1.928***
(0.073) (0.302) (0.078) (0.078) (0.078)
Rural -2.383*** -1.046*** -2.723*** -2.711*** -1.951***
(0.051) (0.295) (0.055) (0.055) (0.052)
Time trends yes yes yes yes yes
Geographical Controls yes yes yes yes yes
Cragg-Donald Wald F statistic 72.042
Kleibergen-Paap Wald F statistic 61.874
Hansen J statistic (pvalue) 118.45 173.524
(0.000) (0.000)
Constant 1.156 -5.531*** 8.060*** 8.041*** 9.908***
(1.104) (1.966) (0.045) (0.045) (1.096)
Observations 29,522 29,522 29,522 29,522 29,522
R-squared 0.410 0.140 0.393 0.394 0.235
Robust standard errors in parentheses
*** p<0.01, ** p<0.05, * p<0.1
Table 4.5: Relative contributions of parental and neighbourhood capitals
VARIABLES (1) (2) (3) (4)
Parental capital (pc) 0.239*** 0.227*** 0.789*** 0.762***
(0.005) (0.005) (0.019) (0.020)
Neighbourhood capital (nc) 0.828*** 0.854*** 0.956*** 0.993***
(0.013) (0.015) (0.013) (0.016)
Interaction (pc×nc) -0.080*** -0.077***
(0.003) (0.003)
Child age 0.060*** 0.041*** 0.056*** 0.039***
(0.015) (0.015) (0.014) (0.015)
Child is male 1.736*** 1.760*** 1.704*** 1.727***
(0.041) (0.043) (0.041) (0.042)
Child is muslim -1.486*** -1.066*** -1.360*** -1.089***
(0.055) (0.069) (0.055) (0.068)
Rural -1.163*** -1.013*** -1.202*** -1.041***
(0.051) (0.055) (0.050) (0.054)
Time trends Yes Yes Yes Yes
Geographical controls No Yes No Yes
Constant -2.510*** -2.445** -2.955*** -2.768***
(0.972) (1.040) (0.958) (1.025)
Observations 32,006 29,522 32,006 29,522
R-squared 0.461 0.474 0.475 0.488
Robust standard errors in parentheses
*** p<0.01, ** p<0.05, * p<0.1
Table 4.6: Causal mediation decomposition of parental and neighbourhood effects
(1) (2)
Total effect (Parental and neighbourhood capitals) 1.037*** 1.002***
(0.137) (0.144 )
Direct effect (Parental capital) 0.257*** 0.252***
(0.013) (0.0123)
Indirect effect (Neighbourhood capital) 0.764*** 0.750***
(0.142) (0.150)
% of total effects mediated by neighbourhood capital 73.7% 74.8%
Geographical controls No Yes
Number of observation 29522 29522
F-statistics of excluded instruments
-first stage one (T on Z) 79.668 72.042
-first stage two (M on Z-T) 524.254 591.907
Standard errors in parentheses
*** p<0.01, ** p<0.05, * p<0.1
4.6 Mechanisms
4.6.1 The mediation mechanisms
In this case, the local average treatment effects (LATE) should capture the effects of quality differences in local schools due to the historical settlements of the missionaries, implying that the persistence of the missionaries' activities might pass through school quality. I estimate various measures of school quality: the teacher-pupil ratio, the qualified teacher-pupil ratio and pupils per class. The source data come from [OSSAP, 2012], a comprehensive survey of Nigerian schools by the Nigerian government in collaboration with the Sustainable Engineering Lab at Columbia University. I start by analysing the correlation of exposure to the missionaries and the contemporary quality of local schools. I regress the measure of exposure to the missionaries on the measures of contemporary school quality and the locational controls. The results reported in table 4.7 show that access to the missions, whether measured as a continuous or a dummy variable, is positively correlated with local school quality: the teacher-pupil ratio and pupils per class are respectively negatively and positively related to distance to missions, suggesting that both the quality and quantity of schools are still more constrained in the places that experienced less presence of the missionaries, all things being equal. I then proceed to apply the causal mediation framework as described above. The results, which correspond to the earlier results in table 4.6, are reported in table 4.8. However, the percentage of effects mediated is slightly lower than previously, suggesting the existence of other channels.
Table 4.7: The correlation of access to missions and school quality
Outcomes: Teacher-pupils ratio, columns (1)-(2); Qualified teacher-pupils ratio, columns (3)-(4); Pupils per class, columns (5)-(6)
log(dist. to mission): -0.100*** (0.032) | -0.032* (0.017) | 6.337*** (0.794)
Mission (dummy): 0.282*** (0.107) | 0.122** (0.056) | -8.068*** (2.714)
Constant: 1.046*** (0.123), 0.642*** (0.042) | 0.439*** (0.065), 0.305*** (0.022) | 27.763*** (3.018), 51.886*** (1.071)
Observations: 700 in each column; R-squared: 0.013, 0.010 | 0.005, 0.007 | 0.084, 0.013
Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1

Table 4.8: Mediation effects of school quality
Average causal mediation effects based on school quality as the mediator; outcome: child education; instrument: log(number of missions)
Mediator: (1) Teacher-pupil ratio, (2) Qualified teacher-pupil ratio, (3) Pupils per class, (4) PCA
Total effect: 1.34*** | 1.34*** | 1.34*** | 1.34***
Direct effect: 0.61*** | 0.48*** | 0.31*** | 0.47***
Indirect effect: 0.73*** | 0.86*** | 1.03*** | 0.87***
% of effects mediated: 54.6% | 64.3% | 76.9% | 64.79%
Number of observations: 48749 | 48749 | 48749 | 48696
Controls: Yes | Yes | Yes | Yes
F-statistic of excluded instruments, first stage one (T on Z): 4240 | 4240 | 315 | 4239.878
F-statistic of excluded instruments, first stage two (M on Z-T): 261.977 | 603.524 | 11000 | 818.954
Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1
Table 4.9: Mechanism of persistence
(1) (2) (3) (4) (5)
VARIABLES log consumption family size
OLS IV Stage-2 OLS IV Stage-2 Stage-1
log (number of missions) 1.154***
(0.072)
Parental education 0.008*** 0.069*** 0.050*** -0.114**
(0.001) (0.008) (0.005) (0.055)
Full controls Yes Yes Yes Yes Yes
State FE Yes Yes Yes Yes Yes
Constant 14.252*** 13.316*** 2.952*** 6.640*** 7.893***
(0.252) (0.308) (0.617) (0.680) (2.194)
Observations 16,019 16,019 16,019 16,095 16,019
R-squared 0.416 0.261 0.113 0.055 0.198
Robust standard errors in parentheses
*** p<0.01, ** p<0.05, * p<0.1
Table A.2: Effects of conflict exposure on resilience capacity
(1) (2) (3) (4) (5) (6) (7) (8) (9) (10)
VARIABLES RCI RCI ABS ABS SSN SSN AC AC ASSET ASSET
Conf lict × P OST -0.077** -0.065** 0.104** -0.079** -0.081
(0.037) (0.029) (0.056) (0.037) (0.069)
f atalities × 100 -0.152*** -0.090** 1.216** -0.206*** -0.189***
(0.029) (0.038) (0.622) (0.044) (0.041)
Pre-treatment controls ? yes yes yes yes yes yes yes yes yes yes
Household FE ? yes yes yes yes yes yes yes yes yes yes
Constant 1.214***1.211***1.333*** 1.159*** 4.012*** 3.244*** 1.373***1.378***6.017***6.006***
(0.167) (0.163) (0.066) (0.065) (0.307) (0.216) (0.081) (0.083) (0.209) (0.195)
Observations
Table A.3: Probit estimation of selection into conflict exposure
Table A.4: Probit estimation of sample attrition
Table A.5: Food (in)security and conflict exposure within the 5km radius of exposure
Notes: CSI = Coping strategy index; FCS = Food consumption score; Food ratio = Share of household per capita food expenditure. RCI = Resilience capacity index; ABS = Access to basic services; SSN = Social safety nets; AC = Adaptive capacity; ASSET = Assets index. Regressors include household head characteristics (age, wage work, literacy, marital status, gender), household size, the ratio of children, baseline CSI, FCS and food ratio, and the resilience indices; village dummies, baseline controls and household fixed effects are included where indicated. Table A.5 reports estimates separately for conflict exposure within 5 km (Panel A) and for conflict intensity measured in fatalities (Panel B). Observations: 1,703 (Tables A.3 and A.4); 2,766 observations from 1,383 households (Table A.5). Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Table A.7: Summary statistics for variables used to compute the resilience indices
Notes: t-test reports mean differences (treatment minus control). *** p<0.01, ** p<0.05, * p<0.1.
Table A.8: Factor loadings for resilience capacity index and pillars
Notes: NA obtains when the indicated factor number does not apply to the component.
Table A.9: Effects of resilience on FCS, disaggregated
Table A.10: Top resilience capacity quartiles, interactions and FCS
Notes: RCI = Resilience capacity index; SSN = Social safety nets; AC = Adaptive capacity; ABS = Access to basic services; ASSET = Assets index. Resilience components include safety net and other transfers, asset indices, access to basic services (distances to primary and secondary schools, health services, market and major road) and adaptive capacity (participation, household education, dependency ratio and diversity of income sources). Observations: 3,000 from 1,500 households. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Table A.11: Top resilience capacity quartiles, interactions and CSI

                        (1) RCI     (2) SSN     (3) AC      (4) ABS     (5) ASSET
Conflict×Post           1.867***    2.982***    1.569***    2.345***    3.406***
                        (0.586)     (0.631)     (0.600)     (0.647)     (0.616)
Conflict×Post×Q2        -0.461      -1.993**    0.390       -0.862      -2.603***
                        (0.780)     (0.855)     (0.803)     (0.841)     (0.833)
Conflict×Post×Q3        -0.318      -0.741      0.249       -0.422      -2.264***
                        (0.861)     (0.854)     (0.814)     (0.858)     (0.815)
Conflict×Post×Q4        0.385       -2.679***   0.156       -1.208      -1.602*
                        (0.842)     (0.887)     (0.855)     (0.843)     (0.825)
Observations            3,000       3,000       3,000       3,000       3,000
Number of hhid          1,500       1,500       1,500       1,500       1,500

Notes: Q2-Q4 denote the second to fourth quartiles of the resilience index named in the column header. Standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Table C.2: Effect of parental capital based on distance to the mission
Table C.3: Effect of neighbourhood capital based on distance to the missions
Columns: (1) OLS; (2) IV stage-2 with external instruments only; (3) IV stage-2 with internal instruments only; (4) IV stage-2 with external and internal instruments; (5) Stage-1. Controls include distance to the mission, child age, child is male, child is muslim, rural residence, a time trend and geographical controls. First-stage diagnostics report the Cragg-Donald and Kleibergen-Paap Wald F statistics and the Hansen J statistic (p-value). Observations: 27,090.
Notes: Robust standard errors in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
The visits are chosen because they contain the most comprehensive modules on agricultural production, which is an important aspect of household consumption.
Although the data collection spans more than one month, it is not possible to distinguish which households were interviewed in which month. Hence, any mention of a period in this paper refers, as a snapshot, to data collection within a specific round of the LSMS.
Buffers above 7KM do not provide room to separate the exposed and control groups, because then nearly all the relevant data points fall within the buffer at any given event-date combination
Note that households in this second group will be exposed in the future but remain unexposed as of the time of the estimations.
The food classes include staples, pulses, vegetables, fruits, animal products, sugar, dairy products, and fats and oil, with micro-nutrient weights obtained from the West African food composition table[START_REF] Barbara | West african food composition table[END_REF]
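For orientation, the food consumption score is usually built by aggregating weighted consumption frequencies of these food groups over a 7-day recall. The weights shown below are the standard WFP reference weights and are given only for illustration, since the chapter uses weights derived from the West African food composition table:

\[
FCS = \sum_{g} w_g \, x_g, \qquad x_g \in \{0, 1, \dots, 7\},
\]

where $x_g$ is the number of days food group $g$ was consumed in the past week and, in the standard WFP scheme, $w_g$ equals 2 for staples, 3 for pulses, 1 for vegetables and for fruits, 4 for animal products and for dairy, and 0.5 for sugar and for oils.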
The same strategy is also applied to test whether conflict is linked to future vulnerability through the depletion of household endowments of resilience.
The catchment policy assigns a fixed number of admissions (quota) to each state, where the catchment states of each university are defined by ad hoc proximity to the university ([Isumonah and Egwaikhide, 2013])
The LSMS team applies a set of random offsets to the GIS points of the households' residences to preserve their confidentiality, while indicating their approximate location within the primary sampling areas ([NBS, 2012])
This measure corresponds to the current distance of households from secondary schools. It would be better to compute this distance at ages of 12 and 18, but we do not have detailed information on secondary schools in Nigeria. Nevertheless, it is noteworthy that the establishment of secondary schools in Nigeria has not been substantial over the past few decades. From this perspective, taking the current distance to secondary schools may be a suitable measurement.
We define it as the average number of years of father's and mother's schooling.
While the federal and state governments may each establish and manage universities, in all cases the state influences the location in terms of LGAs.
While we have 16,581 individuals, the number of observations is usually reduced in the regressions due to the availability of the covariates included.
A comparison of the IV and OLS estimates suggests that the OLS underestimates the negative effects of distance to university. The OLS estimates are available upon request.
The dummy is constructed on the median basis. It is equal to 1 for individuals belonging to LGAs with an education level above the median and 0 otherwise.
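As a minimal sketch of that construction (the symbols here are illustrative, not the chapter's notation):

\[
D_{\ell} = \mathbf{1}\left\{\bar{E}_{\ell} > \operatorname{median}_{\ell'}\left(\bar{E}_{\ell'}\right)\right\},
\]

where $\bar{E}_{\ell}$ is the average education level of LGA $\ell$, so that $D_{\ell} = 1$ for individuals belonging to LGAs with education above the median and 0 otherwise.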
In Table B.1 in the Appendix, we provide a comprehensive list of federal universities in Nigeria.
Balance is usually determined by the number of similar institutions already existing ([Adeyemi, 2001])
We follow the treatment definition provided by[START_REF] Molina | The schooling and labor market effects of eliminating university tuition in ecuador[END_REF].
At 52 and 48 percent for boys and girls, the incidence of secondary school drop-out is a major problem in Nigeria ([NBS, 2020]; [Oyelere, 2008])
This is mainly because students at this level wish to continue on to professional levels, which depends on performance at the current level (see [START_REF] Simon | OECD Reviews of Vocational Education and Training A Skills beyond School Review of South Africa[END_REF]
This is acceptable in Nigeria because, even though rural-urban migration is high, there is little migration across districts ([Funjika and Getachew, 2022])
Further discussion of this is provided in the identification section
I estimate $y_{ij(t-1)}$ and $y_{v(j),(t-1)}$ separately using equation 4.2 before including them together in subsequent equations. This is to demonstrate the effect of omitting either of them from the model.
Given that the presence of heteroskedasticity in the parental and neighbourhood capital equations is required, table 4.1 presents the results of the Breusch-Pagan tests. The null hypothesis of homoscedastic errors is strongly rejected, confirming the presence of heteroskedasticity in both equations.
I made use of the STATA algorithm contributed by [START_REF] Baum | Ivreg2h: Stata module to perform instrumental variables estimation using heteroskedasticity-based instruments[END_REF] which facilitates the construction of the instruments and the implementation of the procedure, and offers the option to over-identify the parameters by combining the internally generated instruments with available external instruments. The combination offers additional advantages, including improved efficiency of the instruments and the possibility of performing the Sargan-Hansen tests of the orthogonality conditions of both the external and internal instruments ([START_REF] Baum | Advice on using heteroskedasticity-based identification[END_REF]
This overcomes the potential bias due to the generally low levels of mothers' education in the sample and the fact that fathers' education is often not reported in female-headed households. However, this approach is also generally associated with a lower correlation between the child's and parents' education, as a result of which the resulting estimate of intergenerational persistence should be interpreted as a lower bound (see [START_REF] Funjika | Colonial origin, ethnicity and intergenerational mobility in africa[END_REF]
The survey is further described in section C.1
Acknowledgements
appreciate her offer of research assistant employment, which provided me with hands-on research experience and also helped me cushion the extreme instability of my study funding. I will not fail to acknowledge Marion Mercier for facilitating the initial meeting with my supervisors, and also for participating in my "comité de suivi".
Abstract
Viewing access to education as a serious constraint on human capital accumulation in developing countries, we investigate the impact of geographical proximity to universities at critical ages on educational attainment in Nigeria. We rely on three available rounds of Nigeria's Living Standards Measurement Survey to match individuals' educational attainment to their spatial distance to universities during schooling years, measured by pairing residential locations and university campuses using geographical information systems. To address potentially endogenous residential choices and identify the effect of university proximity, we derive identifying instruments based on Tiebout's (1956) theory of residential sorting. Specifically, we instrument distance to university with variations in households' proximity to state boundary posts and in neighbourhood population density, which capture the Tiebout hypothesis of "voting with the feet" in response to economic opportunities, including education. Under the instrumental variables strategy, we find that distance to university limits schooling irrespective of sector of settlement and gender. The main result indicates that a 10 km increase in distance leads to a 0.5-year reduction in completed schooling. We further support this finding with an impact evaluation of recently established large-scale universities in selected states of Nigeria.
Exploiting the quasi-random nature of the establishment under a difference-in-differences strategy, we find that the universities have positive spillovers on the secondary school market by decreasing the intention to drop out, supporting the earlier finding that access to education may be a critical constraint on the accumulation of human capital in Nigeria.
chance of being admitted to university.
The empirical approach used in this paper is to exploit the introduction of federal universities in some Nigerian states. The analysis is at the individual level. Our difference-in-differences strategy assumes that the drop-out intention variable $Dropout_{iht} \in \{0, 1\}$ of an individual i can be written as:
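Based on the variable definitions given in the following paragraphs, the specification takes the standard two-way interaction form; the coefficient and loading labels below are illustrative rather than the paper's original notation:

\[
Dropout_{iht} = \beta_0 + \beta_1 New25km_h + \beta_2 Post_t + \beta_3 \left(New25km_h \times Post_t\right) + X_{it}'\gamma + Z_{ht}'\delta + \sigma_s + \epsilon_{iht}
\]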
Subscripts i and h denote individuals and households, respectively. The dependent variable, $Dropout_{iht}$, denotes the secondary school drop-out intention of individual i from household h. It is important to note that assignment to the treatment or control group is based on the place of residence of individuals at the time of the establishment of the new universities. The variables $New25km_h$ and $Post_t$ capture individuals living within 25 km of any of the new universities and the dummy for the period after the establishment of the new universities, respectively. The effect of interest is captured by the coefficient $\beta_3$, which represents the average treatment effect on the treated. It enables us to infer the counterfactual schooling outcome amongst individuals in the treated states. As the establishment of universities diminishes the marginal costs of education, we expect this coefficient to be negative. If this is the case, it indicates that the establishment of universities will improve the entire educational system, and in particular the secondary schooling stage. The estimation of Eq.(3) allows us to compare secondary school drop-out intentions across treated and untreated households subsequent to the introduction of a federal university.
$X_{it}$ denotes the vector of individual characteristics, including pupils' gender and age. $Z_{ht}$ represents the vector of household characteristics, which consists of household expenditures and the current distances of households from the main road, market, administrative center and population center. We include state fixed effects $\sigma_s$ to account for any observed and unobserved time-invariant state characteristics. Lastly, $\epsilon_{iht}$ represents the stochastic error term.
[Tiebout, 1956]
This paper exploits the original geographical distribution of schools in Nigeria to identify the parental and neighbourhood contributions to educational mobility. Using conventional and heteroscedasticity-based instruments, the study finds that parental and neighbourhood capital significantly contribute to the human capital of offspring, where the capitals are denoted by the average education of parents and of neighbourhoods, respectively. Furthermore, when the total effect is decomposed based on the causal mediation framework, parental capital accounts for about 25% while neighbourhood capital accounts for 75%. This implies that supply-side policies such as raising the quantity and quality of schools across districts could be an important option for raising equality of economic opportunities.
As a robustness check and to overcome potential bias due to the correlation of innate ability within family dynasties, I additionally follow the approach proposed by [Lewbel, 2012] to alternatively identify the models using internally generated instruments. The instruments are generated internally, relying on the presence of heteroscedasticity in the error term of the first-stage equation. 4 The instruments are derived as the deviations from the mean of a vector of independent exogenous variables interacted with the residual from the first-stage regression, and may be used exclusively or to support external instruments. Theoretically, the heteroscedasticity is generated by the interaction of an unobserved common factor with included exogenous variables. Within the current framework, the interaction of unobserved innate ability with socioeconomic factors is commonly exploited through the procedure ( [Postepska, 2019]; [START_REF] Farre | A parametric control function approach to estimating the returns to schooling in the absence of exclusion restrictions: an application to the nlsy[END_REF]; [START_REF] Klein | Estimating a class of triangular simultaneous equations models without exclusion restrictions[END_REF]). To illustrate the basic framework: an attempt to estimate $\gamma_1$ through an OLS regression of
standard OLS assumptions are exploited by the procedure to overcome the endogeneity. In the current setup, the latter assumption ensures that the contribution of unobserved ability to intergenerational transmission depends on the individual's socioeconomic factors ( [Postepska, 2019]; [START_REF] Klein | Estimating a class of triangular simultaneous equations models without exclusion restrictions[END_REF]). Therefore, [Lewbel, 2012] proposes $(X - \bar{X})\epsilon_2$ as instruments: the orthogonality condition ensures that $(X - \bar{X})\epsilon_2$ is uncorrelated with $\epsilon_1$, while the heteroscedasticity ($cov(X, \epsilon_2^2) \neq 0$) guarantees that $(X - \bar{X})\epsilon_2$ is correlated with the endogenous regressor, so that the generated instruments are relevant.
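For concreteness, a minimal sketch of the triangular system underlying this construction, written in generic notation rather than the chapter's own symbols, is:

\[
y_1 = X\beta_1 + y_2\gamma_1 + \epsilon_1, \qquad y_2 = X\beta_2 + \epsilon_2,
\]
\[
E[X\epsilon_1] = E[X\epsilon_2] = 0, \qquad cov(X, \epsilon_1\epsilon_2) = 0, \qquad cov(X, \epsilon_2^2) \neq 0,
\]

where $y_1$ is the outcome, $y_2$ the endogenous regressor, and the generated instruments are $(X - \bar{X})\hat{\epsilon}_2$, with $\hat{\epsilon}_2$ the residual from the first-stage regression of $y_2$ on $X$.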
Descriptive statistics
The descriptive statistics presented in table 4.2 provide insight into the human capital accumulation of the respondents in comparison to their parents and according to cohorts of birth. The summary of the statistics according to cohorts shows that there have been improvements over time in human capital accumulation across children, fathers and mothers.
A Appendix to chapter two
ABS omits distance to primary and secondary schools, both of which did not have significant variation.
B Appendix to chapter three
I focus on the measures of school quality that capture the three main dimensions of the educational system: the teacher-pupil ratio, the trained-teacher-pupil ratio and pupils per class ([?]). There is a high degree of collinearity among the indicators, so I used them individually in the model as well as in a composite index.