https://inria.hal.science/hal-01490911/file/978-3-642-40358-3_21_Chapter.pdf

Bram Klievink
email: [email protected]
Inge Lucassen
email: [email protected]
Facilitating adoption of international information infrastructures: a Living Labs approach
Keywords: Data quality, supply chains, public-private information infrastructures, living labs
One of the key challenges that governments face in supervising international supply chains is the need to improve the quality of the data accompanying the logistics flow. In many supply chains, individual parties in the chain work with low-quality data for their operations and compliance, even though better data is available somewhere in the supply chain. In the European CASSANDRA project, ICT-supported information infrastructures are developed to exchange data between businesses and government, to support visibility on the supply chain and the re-use of information. However, to gain better data, actors need to be open about their operations, processes and systems to parties that are geographically and culturally on the other side of the world. This adds (perceived) vulnerabilities for parties already operating in a highly competitive environment, which could be a major barrier for making the innovation work. We argue that Living Labs, as a collaborative innovation approach, are able to support the adoption of innovative information infrastructures. They help identify the gains that innovations may bring. Furthermore, the trust-based setting also mitigates the added (perceived) vulnerability such innovations bring for the participants. We illustrate this with examples from the CASSANDRA Living Labs.
Introduction
The main topics in government supervision of today's international trade are efficiency and security. Outsourcing, cargo consolidation and multi-modal transport chains have complicated the organisation and optimisation of logistics and have posed additional challenges to managing information and data in these logistics chains. In addition, the information system in international logistics is much influenced by its own legacy; as a result, documents originating from the 19th century are still being used. The results of these complications are easily found: carriers and importers are asked to make legal declarations about goods they have never seen, documents that contain crucial information can lag three days behind the goods, and these documents contain information that obscures the true values, such as the identity of the real seller or buyer. These issues have a common solution: supply chain visibility, where data can be shared between business and government, providing end-to-end visibility for all stakeholders, and where information can be provided by the originating party and re-used by others. This should result in a significant reduction of transaction costs while also improving information quality when advanced mechanisms are used for data capture and for cross-checking data from various sources. However, this can only be realised when government inspection agencies and the business community work together.
With the advancement of technology innovations it becomes possible to improve information exchange worldwide by creating electronic connections between organisations [START_REF]Accelerating Global Supply Chains with IT-Innovation[END_REF]. Data can be made available digitally and is instantaneously available to supply chain partners and authorities around the globe. However, a wide range of complexities arises. Gathering, sharing and combining information from various sources requires the development of information exchange platforms that are used by a wide variety of stakeholders with diverse interests [START_REF] Klievink | Enhancing Visibility in International Supply Chains: The Data Pipeline Concept[END_REF]. Both business systems and digital government infrastructures are connected to each other, owned and operated by a diverse set of public and private actors. The technical complexity of implementing these platforms is compounded by the number of stakeholders affected by and involved in the decision-making process. For such a platform to work in the dynamic context of international trade and logistics, the system needs to be flexible, heterogeneous, interoperable, and above all entirely secure. The use of such platforms also introduces new complexity and new uncertainties for the parties involved, due to increased interdependence and potential vulnerability [START_REF] Hart | Power and trust: Critical factors in the adoption and use of electronic data interchange[END_REF][START_REF] Kumar | Sustainable collaboration: managing conflict and cooperation in interorganizational systems[END_REF]. Ultimately, this requires the collaboration of many different stakeholders, each of which has to make a strategic decision on their willingness to share information about their processes, products, etc., and on the necessary investments. A perceived increase in vulnerability due to this kind of innovation may hinder industry uptake.
In this paper, we present the experiences from the Living Labs (LL) approach used in the CASSANDRA project. The LL approach fosters collaboration between the parties in a small-group community for a specific supply chain. As a result, this group, operating in a neutral Living Labs setting, is able to build inter-organisational trust that allows the various parties to assess their supply chain from an end-to-end perspective and to openly discuss activities, logistics processes, information processing and production, data requirements, and how these may be disclosed to others. Consequently, benefits or business cases may be realised that go far beyond the initial quest for better data and move towards more sophisticated use of the electronic links that are formed between parties [START_REF] Massetti | Measuring the extent of EDI usage in complex organizations: strategies and illustrative examples[END_REF]. We show how the Living Labs enable parties to find these more sophisticated uses by creating a trusted setting that counters the added vulnerability that is introduced by adopting an information exchange platform to electronically connect to partners in the supply chain [START_REF] Hart | Power and trust: Critical factors in the adoption and use of electronic data interchange[END_REF].
Background: data exchange infrastructures and IOS adoption concepts
Data exchange infrastructures in international trade
The research and development project that we focus on in this paper aims to improve efficiency, security and compliance in international trade and logistics by integrating information flows on goods, actors, (commercial) contracts and logistics (e.g. transportation). Both the businesses in the supply chain and supervision authorities gain enhanced visibility benefits based on better-quality information, in terms of accuracy, timeliness and completeness [START_REF] Klievink | Enhancing Visibility in International Supply Chains: The Data Pipeline Concept[END_REF]. This is even more so if enhanced data quality is combined with innovative information integration and visibility, tracking and scanning technologies, and linked to operational risk management [START_REF]Accelerating Global Supply Chains with IT-Innovation[END_REF]. In addition to addressing supply chain inefficiencies, businesses can employ this high-quality information to facilitate compliance. This innovation requires the business community to consider the supply chain from end to end, with special attention to data quality and the processes from which the data originates. This is the concept of 'data from the source'. Depending on the configuration of a specific supply chain and on who executes which activity, different parties may be best equipped to provide certain data elements that can be re-used by other parties as well. For example, a purchaser of goods knows which goods were ordered, in what quantity and for what value. But a logistics service provider that performs container stuffing may actually be matching the order with the container manifest, thus providing better-quality data on the shipment and the contents of the container, which is then the basis for other logistic activities and many (legal) documents along the chain.
To enable the wide variety of actors in the supply chain to share data amongst businesses and between the business community and government agencies, ICT infrastructural facilities need to be developed. Hanseth et al. [START_REF] Hanseth | Developing information infrastructure: The tension between standardization and flexibility[END_REF] depict these infrastructures as information infrastructures to emphasize a more holistic, socio-technical and evolutionary perspective, placing the growth in combined social and technical complexity at the centre of empirical scrutiny [START_REF] Henningson | Inscription of behaviour and flexible interpretation in Information Infrastructures: The case of European e-Customs[END_REF]. Realising these infrastructures requires transformations, i.e. radical changes in core processes within and across organisational boundaries [START_REF] Murphy | Beyond e-government the world's most successful technology-enabled transformations, executive summary[END_REF][START_REF] Weerakkody | Moving from E-Government to T-Government: A Study of Process Re-engineering Challenges in a UK Local Authority Perspective[END_REF][START_REF] Kim | Managing IT-enabled transformation in the public sector: A case study on e-government in South Korea[END_REF][START_REF] Weerakkody | Transformational Change and Business Process Reengineering (BPR): Lessons from the British and Dutch Public Sector[END_REF]. Organisations that have implemented information integration solutions have reported significant benefits that support the IT/IS evolution process [START_REF] Irani | Developing a frame of reference for ex-ante IT/IS investment evaluation[END_REF].
The European CASSANDRA project proposes a solution for an innovative information infrastructure for international trade, called the data pipeline. It is a concept based on the use of Service-Oriented Architectures (SOA) to enable access to the existing information systems that are used and operated by the various parties in global supply chains [START_REF] Klievink | Enhancing Visibility in International Supply Chains: The Data Pipeline Concept[END_REF][START_REF] Overbeek | A Web-Based Data Pipeline for Compliance in International Trade[END_REF]. It is a virtual bus, created by linking ERPs, existing inter-organisational trade and logistics platforms (connecting e.g. port and business community platforms), and systems for tracking, tracing and monitoring the goods [START_REF] Overbeek | A Web-Based Data Pipeline for Compliance in International Trade[END_REF]. The data pipeline provides one integrated access point to the different sets of information that already exist, but are currently fragmented throughout the supply chain.
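To make the idea of a single integrated access point more tangible, the following minimal Java sketch models the pipeline as an aggregator over adapters onto existing systems. The names (DataPipeline, DataSource, DataElement) and the single query method are our illustrative assumptions, not the interfaces of the actual CASSANDRA implementation:

```java
import java.util.ArrayList;
import java.util.List;

/** One fragment of consignment data held by a single supply chain party. */
record DataElement(String name, String value, String providingParty) {}

/** Adapter onto one existing system, e.g. an ERP or a port community platform. */
interface DataSource {
    List<DataElement> query(String consignmentId);
}

/**
 * The "virtual bus": one integrated access point over data that already
 * exists but is fragmented across the systems of the supply chain parties.
 */
final class DataPipeline {
    private final List<DataSource> sources = new ArrayList<>();

    void register(DataSource source) {
        sources.add(source);
    }

    /** Collects everything the connected systems hold about one consignment. */
    List<DataElement> integratedView(String consignmentId) {
        List<DataElement> view = new ArrayList<>();
        for (DataSource source : sources) {
            view.addAll(source.query(consignmentId));
        }
        return view;
    }
}
```

In this reading, giving a business or government party visibility amounts to registering the right adapters, rather than copying documents along the chain.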
However, to set the right requirements for a data-sharing platform, the parties literally need to sit together to share and assess the end-to-end supply chain as well as data availability and quality: e.g., does the data originate from a standardized and verifiable process; was it made available by manual entry, or automatically from an ERP system? Due to competitive pressure, these parties normally do not discuss this as part of their day-to-day operations. As a result, any ICT innovation to improve end-to-end supply chain visibility has a high risk of failure when it focuses primarily on improving data sharing techniques and standards, without making an in-depth assessment of the best data quality sources in each individual supply chain and of the challenges in gaining mutual trust and cooperation.
Inter-organisational relationships in IOS
The data pipeline can be considered an inter-organisational information system (IOS), as information crosses the boundaries of individual parties and data and (potentially) functionality are shared between organisations [START_REF] Vitale | Creating competitive advantage with interorganizational information systems[END_REF]. Most early literature assumed a lead party (or 'sponsor') of an IOS, which has an important role in defining its functionality, its participants and the funding structure of the system [START_REF] Vitale | Creating competitive advantage with interorganizational information systems[END_REF]. However, given the complexity of international trade and logistics, all parties (and thus potential 'sponsors') also depend on supply chain partners to realise the expected or intended benefits of introducing, supporting or otherwise sponsoring the system. Therefore, the adoption of an IOS, and the factors influencing adoption, is a key research topic [START_REF] Robey | Theoretical Foundations of Empirical Research on Interorganizational Systems: Assessing Past Contributions and Guiding Future Directions[END_REF][START_REF] Chwelos | Research report: empirical test of an EDI adoption model[END_REF].
In studying the factors that influence the adoption of Electronic Data Interchange (EDI) as a form of IOS, Chwelos et al. [START_REF] Chwelos | Research report: empirical test of an EDI adoption model[END_REF] seek factors at three levels: the technology, the organisation and the inter-organisational level. In the findings of their empirical research, two parts stand out in determining the intention to adopt an IOS: external pressure and readiness. External pressure consists of both competitive pressures and enacted pressure from the trading partners. Readiness concerns the financial resources and the IT sophistication, including management support, which a party needs to adopt an IOS. As, due to its nature, an IOS only works if other partners also use the system, trading partner readiness was also found to be important.
The contemporary field of information or digital infrastructures, which can be considered a direct successor to the IOS studies, puts more emphasis on the inter-organisational aspect. Studies in this field focus on the role that social and political factors play in enacting, adopting and supporting information infrastructures [START_REF] Tilson | Digital Infrastructures: The Missing IS Research Agenda[END_REF]. Here, information infrastructures are seen as open, heterogeneous, evolving and IT-facilitated socio-technical systems that are shared between multiple actors [START_REF] Hanseth | Design theory for dynamic complexity in information infrastructures: the case of building internet[END_REF].
In their 1997 paper, Hart and Saunders emphasise that IOS innovations lead to new complexity and, above all, new vulnerabilities for organisations that adopt an IOS, through the increased interdependence [START_REF] Hart | Power and trust: Critical factors in the adoption and use of electronic data interchange[END_REF]. They take the existing relationship between parties as the starting point for their reflection. The way in which the innovation impacts the existing relationship, and the way in which parties treat each other during the innovation, determines the extent of the potential future benefits of the innovation.
2.3 Building trust for depth of use of the innovation
The three main elements of Hart and Saunders' thesis are power, trust and vulnerability. They argue that although pressure can be used to stimulate adoption of an IOS, this can also negatively impact the relationships between parties, because the perceived vulnerability of the partners increases. An IOS enables information to cross the boundaries of the organisation [START_REF] Hart | Power and trust: Critical factors in the adoption and use of electronic data interchange[END_REF]. Trust, in this context, means that parties need to be confident that their partners will neither misuse the information they gain in the IOS nor exploit the increased vulnerability. If trust is built, there is a tendency to continue the collaboration; hence trust and continuity are mutually reinforcing. Ultimately, this is important in order to gain the maximum benefits of the IOS, as the innovation often starts with only a small transaction set and a limited number of parties. Based on the work of Massetti, Hart and Saunders [START_REF] Hart | Power and trust: Critical factors in the adoption and use of electronic data interchange[END_REF] argue that the use of an IOS has various characteristics:

• Breadth of use, i.e. the number of IOS partners;
• The diversity of the transaction/document set;
• The volume of transactions via the IOS;
• Depth of use, i.e. at first just electronic document transfer; deeper use is inter-organisational application-to-application transfer, next interconnected or shared databases, and ultimately coupled or shared (automated) work environments.
The more the IOS is used in depth, the greater the vulnerability of the parties opening up to each other. Furthermore, Hart and Saunders argue that most parties start with the simple exchange of documents and, at the time of adoption of the IOS, do not yet foresee to what extent the IOS will impact their future operations and collaboration. The expectation is that more benefits can be gained if the integration is deeper. The principle that trust and continuity are mutually reinforcing is reflected in this gradual implementation of an IOS.
In this paper, we argue that a Living Lab research and development approach can help in providing a safe environment and in building inter-organisational trust, which enables parties to design, implement, adopt and evaluate the use of an IOS. The Living Lab setting especially enables commercial partners to move beyond arm's-length trading relationships and to collaborate more in order to discover the shared benefits of introducing the IOS.
Trust and the Living Labs approach
Hart and Saunders use the classification of Mishra [START_REF] Mishra | Organizational responses to crisis: The centrality of trust, in Trust In Organizations[END_REF] to identify four dimensions of trust in the context of sharing information in an IOS:
• Competence: partner is able to process information properly and efficiently;
• Openness to innovation;
• Care: no misuse of new interdependence and vulnerabilities; and
• Reliability, i.e. actors do as they say.
In this paper, we state that a Living Labs approach is not only a valuable research and development tool to put innovative information infrastructures into practice, but also creates a setting that facilitates trust, which is what we focus on. It enables the various actors to gain confidence that the innovation can indeed help in realising mutual benefits, and that the partners will not act opportunistically and misuse the vulnerability.
A Living Lab is defined as a "gathering of public-private partnerships in which businesses, researchers, authorities, and citizens work together for the creation, validation, and test of new services, business ideas, markets, and technologies in real-life contexts" [START_REF] Bergvall-Kåreborn | Ståhlbröst, Concept Design with a Living Lab Approach[END_REF]. Continuity, openness and empowerment of the actors are three of the key principles that Bergvall-Kåreborn et al. [START_REF] Bergvall-Kåreborn | Ståhlbröst, Concept Design with a Living Lab Approach[END_REF] identified with respect to Living Labs. A recurring aspect of Living Labs is the crucial, high level of collaboration, which enables the building of inter-organisational trust [START_REF] Pavlou | Institution-based trust in interorganizational exchange relationships: the role of online B2B marketplaces on trust formation[END_REF]. Limited trust and collaboration will inevitably result in sub-optimised designs and solutions, no commercially valuable resulting services, and no valid proof of concept. Innovation through Living Labs is therefore only possible when partners aim for collaborative innovation.
With respect to the governance of choices for new solutions, it is crucial that Living Labs are open and neutral with respect to technology and business models. This is needed to get the most out of a collaborative innovation process, "by avoiding the problem of path dependency & lock-in and at the same time optimizing interaction among organizations" [START_REF] Niitamo | State-of-the-art and good practice in the field of living labs[END_REF]. Consequently, Niitamo et al. [START_REF] Niitamo | State-of-the-art and good practice in the field of living labs[END_REF] argue that a Living Lab needs to bring access to state-of-the-art technology that is diverse, i.e. not just one technology, but competing technologies delivered through different business models. Hence, on the technology side, cooperation with (and between) technology vendors is necessary, including both SMEs and larger firms [START_REF] Niitamo | State-of-the-art and good practice in the field of living labs[END_REF]. When asking these directly competing companies to invest in creating innovative products, creating an environment of trust, cooperation and willingness to teach and learn together is also one of the crucial success factors and first points of attention when setting up a Living Lab.
Compared to other test and experimentation platforms, e.g. prototyping platforms, testbeds, field trials, market pilots and societal pilots, Living Labs have a higher level of design focus [START_REF] Niitamo | State-of-the-art and good practice in the field of living labs[END_REF]. In Living Labs more effort is put into the design phase, resulting in less commercial maturity at inception; on the other hand, the end solution should better fit the requirements of the users and therefore have higher chances of success in the long run. Because of all these complexities, it is evident that the goal of the Living Lab study needs to be clear and equally understood by all participants at the start of the project. If this is not the case, it will become increasingly difficult to create an environment of trust and to maintain the right focus and commitment of the individual partners, who each have their own importance and contribution.
3 The Living Labs approach

Within the Living Labs of the CASSANDRA project, actors from business and government cooperate to develop and evaluate new ICT solutions that support international trade, logistics and compliance in a real-life pilot setting. These partnerships are facilitated by the research environment, where partners from academia and other research institutions provide a "neutral ground" for the interactions, aiming to initiate and facilitate processes of consensus building, networking and policy making.
Select demonstration trade lanes
The first step in setting up the Living Labs was the identification of suitable trade lanes. A trade lane was defined as a single lane from origin to destination, from a specific seller to a specific buyer, across fixed transport modes. These trade lanes were selected with the business partners based on flow and process stability and the relationship with the trade lane partners. After selection of the trade lanes, the next step was to assign a dedicated IT solution provider to each trade lane. This was important as it also made the solutions that would be tested in a trade lane more specific. The choice of a solution provider was driven by various factors, such as the estimated maturity of IT in the trade lane, specific requirements of business partners on the trade lane, geographical presence, and the need to end up with a set of demonstrations with enough variety to support scientific research and a good basis for calculating the final results in the overall evaluation of the CASSANDRA solutions, including an extrapolation to the logistics, international trade and compliance industry at large. Thus the closed user groups, or trade lane teams, took form.
Creating mutual understanding
The next step was to work on a common level of understanding, which in the case of CASSANDRA included getting acquainted on a personal level and understanding each other's businesses and interests as well as the detailed processes and products. This was established in a workshop with local trade lane partners to get a first mapping of the trade lane's parties, processes, IT implementations, etc. This was done in working sessions in which process mappings, dossier analyses and interviews were performed. Coming to the necessary level of understanding in the CASSANDRA project was a challenge for the partners less acquainted with the supply chain processes, as the principle of getting data from the best or even the original source means that detailed knowledge of processes, the resulting information exchange, and data control mechanisms is needed. In this stage, the team reached a first common understanding and, importantly, a common mission in finding answers to the unanswered questions. Many of these answers were found during team trips to the other ends of the trade lanes (i.e. Asian countries). Here, supply chain parties were visited and interviewed, and possible solutions were discussed. In many cases, earlier assumptions, sometimes held with 99% certainty upfront, turned out to be incorrect. Parties that had been working closely together for many years had never got around to talking about such details. There was an open and collaborative attitude towards the supply chain partners. Meetings never resulted in a closed attitude from either side, but almost without exception in open and animated discussions. In many cases, the group was expanded with experts from the partners (literally on the other side of the world), bringing their expertise and dynamic to the team. Working in a Living Lab setting opened up possibilities to discuss operations openly. As a result, businesses learned more about the way their partners operate and the sources of the data they worked with in a day than they had in years of collaboration before.
In the next section, we describe findings from the recent Living Labs; at this stage it is not yet possible to assess whether relationships lasted or whether the IT solutions reached commercialisation.
4 Findings from the CASSANDRA Living Labs
Within the CASSANDRA project, there are three Living Labs that share a common end vision, solution providers and project partners, but each also has its own specific user group, regional characteristics, level of IT maturity and stakeholder community. In terms of building inter-organisational trust, this creates additional complications, as a larger group of partners, who bring not only specific expertise but also different backgrounds, need to collaborate and agree on a final solution. Therefore, it was agreed upfront that local implementations of the solution might differ according to the needs of local stakeholders. Detailing the overall vision while simultaneously starting the Living Labs from a bottom-up approach brings an additional complication in realising and aligning project and Living Lab goals. Coordination of these two levels needs to be a joint effort of a dedicated smaller group of people who can continuously monitor developments and targets and who steer and communicate clearly. As this innovation also transforms the way government agencies operate, and digital government infrastructures are part of the overall information infrastructures, government needs to be involved from the beginning. Due to the project schedule, various trade lane options were selected while a lack of understanding still existed of the potential benefits for the various supply chain parties whose support was requested. The trade lanes where this problem did not occur had the advantage that they had already been used in earlier research projects, or that strategic, long-term partners and customers of the consortium members were involved.
Building trust through open collaboration
Within the trade lane teams it was easier to build a high level of trust and collaboration, since most partners in the logistics chain were already acquainted over several tiers and the project provided them with a neutral discussion platform. It might have been beneficial that the coordinating partner in most of the trade lanes was a neutral knowledge institute. Involving solution providers in the various trade lanes from the first stages also supported trust. First, because the team worked jointly on a common understanding of trade, logistics, compliance and possible solutions. Second, because there were only a limited number of external parties, namely the solution providers, that needed to get acquainted with the trade lane and the more or less confidential data related to it. Third, the solution providers in a trade lane were chosen such that they were not competing but cooperating. This also stimulated knowledge sharing between solution providers.
Trusted setting enables depth of understanding and potential use
Defining the information requirements of the various stakeholders and designing the solution is less of a challenge, provided that the partners have become acquainted and all have the same understanding of the processes, the problems to be solved and the possible solutions. This is complicated by practicalities arising from the international character of the topic. Supply chain parties sometimes know each other only by name, are affected by complex relationships due to the different levels of contracting in international logistics, and have no opportunity to meet face-to-face regularly to know each other well enough to be convinced of mutual understanding. Furthermore, having a common and complete understanding of end-to-end supply chains means that all parties need to understand both the physical and administrative processes of shipping goods around the globe. This includes not only the business side of these processes, but also compliance processes, the interests of inspection authorities, and the systems that support these processes and potentially offer relevant solutions.
The exercises that contributed to this common understanding have proven to be of great importance. First, it turned out that a project partner that had been working closely for several years with an overseas party, jointly developing new logistics services, was completely unaware of the existence of an IT system that holds all the forwarding information. This system only came to the team's attention after a day of discussion at the forwarder's premises in Asia. The system proved to be essential for data capture in the project. Sharing this data, instead of the information they exchanged before, enabled the project partner to significantly improve their operations, including warehouse planning, and to enhance their service level to their customer.
Second, the trips overseas also brought more understanding of logistics processes for the partnering authorities. Existing control mechanisms in supply chains, such as the use of a tallyman during container stuffing, can be a crucial indicator of a supply chain being in control. Information about these control mechanisms can support compliance and risk management.
Getting a common understanding of the trade lane required the whole team to be able to work on various levels of detail at the same time, understanding each other's detailed processes and products without losing the bigger picture and the effects on the final solution. A Living Lab approach requires a certain set of skills and competences, both analytical and social, for its participants as well as for the coordinator. The role of the coordinator in these sessions is to bring the participants together on a social level but also on a content level. The coordinator therefore needs to have a basic understanding of the various perspectives and be able to act as a translator and moderator in discussions. When a Living Lab includes a public-private partnership, this adds another dimension to the standard complexity. It is essential to formulate the constraints and incentives imposed by authorities to keep the discussions focused and within their legal frame. This is important to steer the project's efforts in a direction that results not only in a technically viable solution but also in a solution that fits current policy, and as such is acceptable and accessible for the market after the project ends. Managing a public-private partnership in Living Lab innovation requires both public and private parties to reach a certain level of understanding and trust, which is much influenced by the power relations that naturally exist.
Supporting depth of use creates bigger advantages
In the example mentioned earlier, the Living Labs setting enabled the business community in the trade lane to identify better sources of data. This is partly a result of the collaborative approach, as the Asian owner of the information system did not know what use their partners on the other side of the world could have for their data. At the same time, being able to connect this data source to the inter-organisational data sharing system was also a result of the trust basis created in the Living Lab setting. The information system owner got in-depth information on the way the information would be used, and on what the supply-chain-wide benefits could be. For them, this mitigated the perceived risk that the information would be used opportunistically. They saw how it could instead strengthen their existing relationship with their European partner. Sharing the information through the IOS enabled them to increase the information exchange and the efficiency of the supply chain as a whole, instead of optimising the individual steps that any single party could control.
The careful assessment of where data comes from, and the finding that this party could provide information from primary records, also enables benefits from compliance and governmental inspection perspectives. As the customs organisation got a better understanding of the source of the data and of the way the business community itself asserted its correctness (i.e. data from the source, combined with a tallyman), it is able to assess the security of this supply chain as a whole. This may result in a decrease of the inspection burden that was caused by incomplete or inaccurate information at import.
Conclusion and discussion
The development of information exchange infrastructures for international trade and logistics is a complex undertaking. Much of the data that is important can come from multiple sources and is often altered, inaccurate and sometimes intentionally vague.
To gain better data on global goods flows, and thereby enhance the visibility of those flows in an inter-organisational information system or IOS, just interconnecting systems is insufficient. The stakeholders need to provide detailed requirements and specifications for such a system, meaning that knowledge is needed of the source of the data, the processes in other organisations that produce the data, the existing control mechanisms and the various IT systems. With this, they are able to assess the quality level that is needed for each data element at which point in time, and the quality level that each partner and system can provide. Aspects that can influence innovative developments in this area are external pressure, readiness, and the trust and relationship between partners. The more the IOS is used in depth, the greater the vulnerability of the parties that are opening up to each other, and the higher the need for mutual trust. This already starts in the specification phase, where the discussion requires openness about operations, processes and systems towards parties that may geographically and culturally be on the other side of the world. Parties might perceive themselves to be vulnerable when opening up, especially since they operate in a highly competitive environment. Living Labs offer the possibility to create a safe environment in which parties can build sufficient mutual understanding and trust to perform the crucial first steps in specifying the requirements for an IOS. The idea that trust and continuity are mutually reinforcing is also reflected in the crucial role of these first steps in a gradual implementation of an IOS. Therefore, we argue that small user group innovation in Living Labs is not just a good instrument to get this done in a research setting, but also supports the eventual adoption of the information infrastructures and the required transformation. Such a collaborative innovation approach gives the ultimate adoption a boost by focusing not just on the benefits that parties can gain from the innovation, but also by respecting and dealing with the added (perceived) vulnerability that such innovations bring for the participants.
The examples from the CASSANDRA Living Labs show that working in dedicated teams can work very well in creating an open and safe environment. The lessons learnt show that a specific, shared and well-understood objective for the cooperation is crucial for selecting the team and for final success. Working jointly on a common understanding of the trade lane and the stakeholders' needs not only brings knowledge but also improves the relationship and team spirit. The people working in the team need a certain set of competences and skills in order to create this positive atmosphere and work effectively. Also, the role of the neutral coordinator is important to moderate the discussions and to facilitate mutual understanding with the necessary functional translations. The common understanding is the crucial starting point for developing a common roadmap to implement an IOS for a group of organisations.
Acknowledgements. This paper results from the CASSANDRA project, which is supported by funding from the 7th Framework Programme of the European Commission (FP7; SEC-2010.3.2-1) under grant agreement no. 261795. Ideas and opinions expressed by the authors do not necessarily represent those of all partners of the project.
"994118",
"1004212"
] | [
"333368",
"451137"
] |
https://inria.hal.science/hal-01490913/file/978-3-642-40358-3_23_Chapter.pdf

Vesna Krnjic
email: [email protected]
Klaus Stranacher
email: [email protected]
Tobias Kellner
email: [email protected]
Andreas Fitzek
email: [email protected]
Modular Architecture for Adaptable Signature-Creation Tools: Requirements, Architecture, Implementation and Usability
Keywords: Electronic Signatures, Qualified Signature, Signature-Creation, Usability, User-Centered Design
Electronic signatures play an important role in e-Business and e-Government applications. In particular, electronic signatures fulfilling certain security requirements are legally equivalent to handwritten signatures. Nevertheless, existing signature-creation tools have crucial drawbacks with regard to usability and applicability. To solve these problems, we define appropriate requirements for signature-creation tools to be used in e-Government processes. Taking care of these requirements we propose a modular architecture for adaptable signature-creation tools. Following a user-centered design process we present a concrete implementation of the architecture based upon the Austrian Citizen Card. This implementation has been used to prove the applicability of the architecture in real life. Our tool has been successfully tested and has been assessed as usable and intuitive. It has already been officially released and is widely used in productive environments.
Introduction
Electronic services have gained importance in the last years. Compared to conventional services they allow cost reduction and more efficient procedures. An increasing number of electronic services are being provided in all e-Business domains. For security and privacy sensitive services such as e-Government, electronic signatures guarantee authenticity and integrity.
Especially in the e-Government sector the legal aspects of electronic signatures play a major role. In 1999, the European Commission published the EU Signature Directive [1]. The Directive had to be implemented by national laws and defines equivalence between a handwritten signature and an electronic signature fulfilling certain security requirements ('qualified signature').
The European Commission Decision 2011/130/EU [2] defines standard signature formats for advanced electronic signatures. In addition, the Digital Agenda for Europe [START_REF] Fitzek | Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions A Digital Agenda for Europe[END_REF] and the e-Government action plan [START_REF]The European eGovernment Action Plan 2011-2015 Harnessing ICT to promote smart, sustainable & innovative Government[END_REF] aim to create a single digital market for Europe. Obviously, these activities demand appropriate signature tools.
Currently a variety of signature-creation tools and applications are on the market. Unfortunately, most of them lack usability or applicability: either they do not support 'qualified signatures' or all standard formats, or they are available as online tools only. Nevertheless, many citizens and companies want or have to use an offline tool due to security and privacy obligations. Therefore there is a need for an offline tool creating 'qualified signatures'. In addition, current signature-creation tools do not allow users to freely position a visual representation of the signature in the document. To fill this gap, this paper presents a modular and adaptable architecture for signature-creation tools. In addition, to validate the applicability of our proposed architecture, we present a concrete and user-oriented implementation of the architecture based on the Austrian Citizen Card. The main reasons for choosing the Austrian Citizen Card as a basis are: (a) electronic signatures are widely used in Austria and thus we expect a high volume of users, and (b) the Austrian official signature as introduced by Leitold et al. [START_REF] Leitold | Mediabreak resistant eSignatures in eGovernment-An Austrian experience[END_REF] defines a visual representation of the signature and therefore an adequate positioning of this representation is needed.
The remainder of this paper is structured as follows: Section 2 gives an overview of the legal and technical framework our solution is based on. In Section 3 we elaborate on requirements for adaptable and secure signature-creation tools and applications. Section 4 presents our modular architecture for signature-creation tools. In addition, details about the implementation of this architecture are given. Section 5 describes the user-centered design method we followed to achieve a high grade of usability of our solution. Finally, we draw conclusions and discuss future work.
2 Legal and Technical Framework
Legal Regulations
The Digital Agenda for Europe aims to "develop a digital single market in order to generate smart, sustainable and inclusive growth in Europe" [START_REF]European Union, Digital Agenda for Europe, Summaries of EU Legislation[END_REF]. To achieve this objective, (cross-border) electronic services are one of the key enabling factors. This has been refined in the e-Government action plan for the period 2011-2015 [START_REF]The European eGovernment Action Plan 2011-2015 Harnessing ICT to promote smart, sustainable & innovative Government[END_REF]. The action plan's objective is to create a new generation of administrative services. However, electronic signatures are necessary to provide secure and reliable electronic services. Electronic signatures were discerned as a key factor for successful e-Government early on. Already in 1999, the European Commission published the Directive on a Community framework for electronic signatures [1]. The Directive defines a basis for the legal recognition of electronic signatures. It includes a definition of different characteristics of electronic signatures and defines their legal effect. In particular, it defines that an advanced electronic signature must meet the following requirements:
"(a) it is uniquely linked to the signatory;
(b) it is capable of identifying the signatory;
(c) it is created using means that the signatory can maintain under his sole control; and
(d) it is linked to the data to which it relates in such a manner that any subsequent change of the data is detectable;" [1]

In addition, Article 5 of the Directive defines that electronic signatures fulfilling certain security requirements ('qualified signatures') are legally equivalent to handwritten signatures, which is a common precondition for e-Government processes.

2.2 Technical Background
From a technical perspective we concentrate on the Austrian Citizen Card concept [START_REF] Leitold | Security Architecture of the Austrian Citizen Card Concept[END_REF], as the implementation of our solution is based on it. This concept defines the Citizen Card as a technology-neutral instrument that enables the creation and verification of electronic signatures according to the Austrian e-Government act [START_REF]The Austrian E-Government Act: Federal Act on Provisions Facilitating Electronic Communications with Public Bodies[END_REF] and e-Signature law [START_REF]The Austrian Signature Law: Federal Electronic Signature Law[END_REF]. That means different forms of Citizen Card tokens can exist. Currently, smart card-based as well as mobile phone-based Citizen Card implementations are available. To integrate these various tokens, a middleware is used. This Citizen Card Software (CCS) implements a high-level interface.
Different types of this Citizen Card Software exist:

• Online-based CCS: This smart card-based CCS runs on the server side and provides the desired functionality via a Java applet to the user. Currently, the only available online-based CCS is MOCCA Online, which provides diverse functionality such as the creation and verification of electronic signatures.
• Mobile phone signature-based CCS: This CCS, which uses a simple mobile phone, is available at https://www.handy-signatur.at/. It is based upon two-factor authentication using a password and a TAN (sent via SMS to the mobile phone).
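As a rough illustration of how such a middleware decouples applications from the concrete token, consider the following Java sketch. The interface and class names are hypothetical assumptions for illustration; the real CCS is accessed through the 'Security Layer' interface (see Section 4.2) rather than a Java API:

```java
import java.security.cert.X509Certificate;

/** What every Citizen Card token delivers: a signature value plus the signer certificate. */
record SignatureResult(byte[] signatureValue, X509Certificate signerCertificate) {}

/**
 * Hypothetical common interface of a Citizen Card Software (CCS): callers do
 * not need to know whether the key sits on a smart card, behind a server-side
 * applet, or with a mobile phone signature service.
 */
interface CitizenCardSoftware {
    SignatureResult createSignature(byte[] dataToBeSigned) throws Exception;
}

/** Smart card-based CCS: PIN entry and card communication happen locally. */
final class SmartCardCcs implements CitizenCardSoftware {
    @Override
    public SignatureResult createSignature(byte[] dataToBeSigned) {
        throw new UnsupportedOperationException("sketch only: talk to the card reader here");
    }
}

/** Mobile phone signature: password plus SMS TAN authorise a server-held key. */
final class MobilePhoneCcs implements CitizenCardSoftware {
    @Override
    public SignatureResult createSignature(byte[] dataToBeSigned) {
        throw new UnsupportedOperationException("sketch only: password + TAN round trip here");
    }
}
```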
Concerning signature formats, the European Commission, in its Decision 2011/130/EU [2] from 2011, published a set of standard signature formats which must be processable by all competent authorities acting under the EU Services Directive [START_REF]Directive 2006/123/EC of the European Parliament and of the Council of 12 December 2006 on services in the internal market[END_REF]. Namely, these formats are the advanced electronic signatures CAdES, XAdES, and PAdES. However, Austria rolled out a proprietary PDF-based signature format (PDF-AS) several years ago [START_REF] Leitold | Reconstruction of electronic signatures from eDocument printouts[END_REF][START_REF] Leitold | Mediabreak resistant eSignatures in eGovernment-An Austrian experience[END_REF]. This format is going to be replaced by PAdES, but currently it is still widely used. Therefore, we have chosen to implement this signature format in our signature tool (see Section 4 for details).
Requirements
The secure and reliable signing of electronic documents plays a central role in most e-Government solutions. Signature-creation tools must meet several requirements to satisfy legal regulations as well as the needs of all user groups. On the one hand, signature-creation tools must fulfill the requirements of the public sector and organizations. On the other hand, the tools should be intuitive and convenient to use for every single citizen. Considering the needs of all user groups, reliability, usability, adaptability, and modularity have been identified as core requirements for signature-creation tools. These requirements are refined as follows:
• Reliability and Privacy
Signature-creation tools typically process sensitive personal and business data. Misuse of this data may seriously compromise citizens and businesses. Hence, reliability and trustworthiness of this data is an essential requirement. In addition, the public administration needs certainty about the identity of the citizens or businesses. The same applies for the identity of the public administration. So, reliability of the affected parties must be achieved. Finally, citizens, businesses, and public administration need assurance that the data processing satisfies legal and privacy regulations.
• Usability
Usability is another major requirement for signature-creation tools. Signature-creation tools use cryptographic techniques like public key infrastructure (PKI) or secure signature-creation devices (SSCD) such as smart cards, as required for generating 'qualified signatures'. Most likely, users do not have the necessary background knowledge about complex cryptographic concepts and legal regulations. Plenty of security-sensitive tools are simply too complex for most users. In general, users are not interested in technical details. To improve the usability of signature-creation tools, this complexity must be hidden from the user. Instead, the focus has to be on presenting the important information to users. To ensure usability, the identified user groups must be involved in the design and development process of such tools.
• Comprehensive Format Support
In the next years a significant increase in electronic-signature-enabled cross-border services is expected (see the Digital Agenda for Europe [START_REF]European Union, Digital Agenda for Europe, Summaries of EU Legislation[END_REF] or the EU Services Directive [START_REF]Directive 2006/123/EC of the European Parliament and of the Council of 12 December 2006 on services in the internal market[END_REF], for instance). Although the European Commission Decision 2011/130/EU [2] has defined standard signature formats, various other (partly proprietary and national) formats are still in use. This implies that support for these signature formats is still required. Hence, the ability to enhance signature-creation tools to support additional signature formats is crucial. Obviously, these enhancements should be possible with minimal effort.
• Cross-Platform Applicability
Usually, e-Government applications and services must not be limited to specific hardware or software components. Services provided by public authorities must be accessible for all citizens without any restrictions and irrespective of the used environment. Thus, the availability of cross-platform applications is an essential requirement for signature-creation tools.
• Offline Availability
In many cases electronic documents contain personal or sensitive data. Therefore document owners are interested in keeping this data undisclosed, whether due to privacy regulations, business policies or other reasons. Server-based approaches are problematic in this context, because users do not want to upload sensitive data to a remote server. Therefore, signature-creation tools should offer a client-based implementation for creating electronic signatures.
Architectural Design
In this section we elaborate on a modular architecture and design for signature-creation tools satisfying the identified requirements. To verify the applicability of this architecture, we have implemented a signature-creation tool for use cases of the Austrian e-Government. Due to the widespread usage of the Austrian signature format PDF-AS, we have given this format priority. The following subsections describe the proposed architecture and give details on the implementation.
Architecture
Fig. 1 illustrates our proposal for a modular and adaptable architecture for signature-creation tools. The architecture supports various document formats, allows for the creation of different signature formats and makes use of different signature-creation devices. This modular approach is achieved by defining a generic signature-creation process. Depending on the current state of the process, specific implementations of the various components are used to create a signature for the current document. All these generic components are adaptable and open for further implementations and extensions to support new document types, signature formats, or signature-creation devices. Subsequently we describe our architecture and the involved components or modules (see Fig. 1):
• Input
The input module reads a given document and determines the MIME type for further processing. It generates a document-dependent state which is used during the whole signature-creation process. This module can support local files, network files or even streams, and presents this input data in a common form to the other modules. When the input module has finished its task, the state is handed over to the viewer module.
• Viewer
The viewer module enables presentation of the document to be signed. It uses document-specific implementations for the presentation. These may be e.g. PDF renderer, MS-Word renderer, XML renderer, HTML renderer, and so on. Depending on the used signature format, a visual signature representation and a customized signature positioning can be supported. In this case the viewer module provides a Positioning component which presents an overlay to allow the user to position the visual signature representation. The chosen position is then stored in the state of the signature process.
• Signer
The signer module is responsible for the delegation between the signature component adapter and the signature creation device component. Depending on the state, the signer component chooses an appropriate signature component for the given document, or uses a preconfigured component for the given document class. It provides the chosen signature component adapter with a specific instance of a matching signature creation device, which again is either chosen on the fly or may be preconfigured.
• Signature Component Adapter
This adapter is used to provide a common interface to e.g. a signature library. The signature format implementation generates the signature data and uses an abstract signature-creation device to obtain a valid signature for this signature data. Given the signature and the signer certificate, the concrete signature component is able to create a valid digitally signed document. This signed document is again stored in the process state.

• Output
When a signed document is available within the process state, the output module allows the user to save the signed document, to open it with the default application, or to view it again with the appropriate viewer module.
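To summarise the architecture, the sketch below shows one way the generic modules and the shared state could be expressed in Java. This is a simplified reading of Fig. 1 under assumed names, not the actual source of our tool; the signer module's delegation logic is folded into a single run() method here, and an input module might, for instance, determine the MIME type via Files.probeContentType:

```java
import java.nio.file.Path;

/** State handed from module to module during one signature-creation run. */
final class ProcessState {
    Path document;          // the document being signed
    String mimeType;        // determined by the input module
    int posX, posY, page;   // chosen position of the visual signature, if any
    byte[] signedDocument;  // produced by the signature component
}

interface InputModule {             // reads the document, determines the MIME type
    void read(Path document, ProcessState state) throws Exception;
}

interface ViewerModule {            // renders the document, lets the user position the signature
    void displayAndPosition(ProcessState state) throws Exception;
}

interface SignatureCreationDevice { // e.g. a Citizen Card token
    byte[] sign(byte[] dataToBeSigned) throws Exception;
}

interface SignatureComponent {      // format-specific adapter, e.g. for PDF-AS
    void createSignedDocument(ProcessState state, SignatureCreationDevice device) throws Exception;
}

interface OutputModule {            // saves or opens the signed document
    void save(ProcessState state) throws Exception;
}

/** The generic, linear process: input, view/position, sign, output. */
final class SignatureProcess {
    void run(Path doc, InputModule in, ViewerModule viewer,
             SignatureComponent component, SignatureCreationDevice device,
             OutputModule out) throws Exception {
        ProcessState state = new ProcessState();
        in.read(doc, state);
        viewer.displayAndPosition(state); // may be skipped if positioning is automatic
        component.createSignedDocument(state, device);
        out.save(state);
    }
}
```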
Implementation
To put the proposed architecture into practice and to verify its applicability, a well-defined subset of this architecture has been implemented: signing PDF documents with the Austrian PDF-based signature format PDF-AS. Our implementation is based on Java, thus achieving platform independence. Fig. 1 highlights the modules that have been implemented in our application. Namely, these main modules are:
• PDF-Viewer module, including positioning of the visual signature representation
• Signature Component Adapter for PDF-AS
• Signature-Creation Devices based on the Austrian Citizen Card via the 'Security Layer'
The process flow starts with the input component, which allows the user to select a PDF document to sign, either via drag and drop, or via an operating system file selection dialog. The viewer displays the PDF document and enables the user to position the visual signature representation. This step can be skipped if the user configured the application for automatic signature positioning. The signer component receives the document to be signed and the desired position of the signature block. With this information, a signature request for the citizen card software is built by the signature component. Here, the user chooses the concrete implementation of the signature creation software (online, local or mobile phone-based implementation). Subsequently, the signature request is signed using the selected citizen card software. Finally, this signature is incorporated into the PDF document by the PDF-AS signature component and the thus signed document is sent to the output component. Within the output component the user is able to save and open the signed PDF document.
The user interface is based on this linear process flow and guides the user through the necessary steps. Fig. 4 shows a screenshot of this interface. Depending on the configuration of the tool, certain process steps can be shortened or entirely skipped for daily use by advanced users. For instance, the document to be signed can be selected by dropping it on the program icon, the signature block can be positioned automatically, the citizen card software can be preselected, or the output filename or folder can be set in advance.
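Such shortcuts could, for example, be captured in a small configuration object that the process consults before each step. The field names below mirror the options just described, but they are our assumptions, not PDF-Over's actual configuration format:

```java
import java.nio.file.Path;
import java.util.Optional;

/** The Citizen Card Software variants a user may preselect. */
enum CcsType { ONLINE, LOCAL, MOBILE_PHONE }

/**
 * Hypothetical user preferences that shorten or skip process steps;
 * an empty Optional means "ask the user during the process".
 */
record SignatureConfig(
        boolean automaticPositioning,     // skip the manual positioning step
        Optional<CcsType> preselectedCcs, // skip the CCS selection dialog
        Optional<Path> outputFolder) {    // save without asking for a location

    /** Interactive defaults: every step is performed explicitly. */
    static SignatureConfig interactive() {
        return new SignatureConfig(false, Optional.empty(), Optional.empty());
    }
}
```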
Our tool called PDF-Over has been officially launched in Austria 12 and is already widely used 13 . As we followed a user-centered design method for the implementation, the tool has been assessed as easily understandable and usable as well as intuitive. The following section gives detailed insights into this design methodology as applied to PDF-Over.

5 User-Centered Design Method
To fulfill the usability requirements of signature-creation tools discerned above, we followed the user-centered design (UCD) principles [START_REF]Ergonomics of human-system interaction --Part 210: Human-centred design for interactive systems[END_REF] in order to implement a security-sensitive application that is effective and usable in practice. UCD is a design methodology that at each stage of the design process focuses on users' needs, goals, preferences, and limitations. It is an iterative design process that requires continuous user feedback and tests. As shown in Fig. 2, the methodology consists of four design stages: analysis, design, implementation and validation. The method enables a complete remodeling and rethinking of the design by early testing of conceptual models and design ideas. For the development of PDF-Over we decided to repeat the entire design process three times 14 before launching an official release. The different stages in the creation of PDF-Over were:
• Analysis
At the beginning of the process we identified the end-users of PDF-Over. It turned out that the user groups of the signature-creation tool are citizens and authorities. In both groups, users can again be divided into standard users and advanced users. After identifying those user groups, we posed the question what each user group's main tasks and goals are and what functions are needed to accomplish them. The use case for citizens as standard users is to electronically sign a PDF document. They expect a simple and usable interface without any complexity. The authorities as standard users are interested in applying official signatures. To fulfill the requirements of the Austrian official signature as introduced by Leitold et al. [START_REF] Leitold | Mediabreak resistant eSignatures in eGovernment-An Austrian experience[END_REF] certain criteria must be met, such as the placement of the visual signature representation. Additionally, advanced users need the possibility to e.g. pre-select the citizen card software or enable automatic positioning of the visual signature representation. We also analyzed the users' need for previous knowledge. In our case the end-user must know what the Austrian Citizen Card is and how to use it.
• Design
The second step in the iteration process is the design stage. First, paper-based prototypes (see Fig. 3) and the initial architectural design were created. The focus on the end-users is very important in this early phase of the design. In order to get feedback from the users before writing any code, we performed usability tests with paper mockups.
• Implementation
In the implementation stage, the detailed design and specifications were implemented and the first source code was written. This stage builds upon the results of all prior stages. End-users were not directly involved during the implementation. Fig. 4 illustrates a first implementation of the tool, showing the positioning of the visual signature representation.
• Validation
After the implementation phase, two approved usability methods were applied to evaluate PDF-Over. First of all, an expert review was conducted. Here, an evaluator used the tool and assessed its usability against a set of usability principles, the so-called heuristics 15 . The heuristic evaluation provided quick and inexpensive feedback to the design, and its results were implemented in the following implementation iteration. In the last iteration, we performed a thinking-aloud test with five representative end-users. As indicated by Nielsen [START_REF] Hinderer | How to Recruit Participants for Usability Studies[END_REF], five test users are sufficient to find almost all usability problems one would find using many more test participants. Test users were asked to do representative tasks, while observers, including the developers, watched the test and took notes. The obtained results were analyzed and implemented in the last iteration. With the conducted usability analysis we improved the acceptability and usability of PDF-Over.
Conclusions
Signature-creation is essential for many e-Government processes. Especially the creation of 'qualified signatures' is of high importance. In this paper we have presented a modular architecture for adaptable signature-creation tools. To prove its practical applicability and flexibility, we have presented a concrete implementation of this architecture. To achieve a high impact, our solution is based on the Austrian Citizen Card concept. We have followed a user-centered design to achieve a high usability of our tool. The tool has been successfully tested and is ready to meet current and upcoming challenges. It has already been officially launched in Austria and is licensed under the European Union Public Licence (EUPL) [START_REF]European Union Public Licence[END_REF]. The current number of downloads amounts to about 2,000 per month, which confirms the high acceptance and usability of our solution. Currently, we are integrating additional signature formats. Based upon the European Commission Decision 2011/130/EU, we are implementing a PAdES signature component adapter to support PDF advanced electronic signatures. In addition, we are working on the support of batch signatures to allow signing several documents in one step.
• Local/Client-based CCS: This CCS is also smart card-based and has to be installed locally on the client machine. Here, different implementations exist, e.g. MOCCA Local 7 , a.sign Client 8 or TrustDesk 9 .
The terms 'qualified certificate' and 'secure signature creation' and their requirements are defined in Article 2 of the Signature Directive.
Fig. 1. Modular and adaptable architecture for signature-creation tools
Fig. 2. Four design phases of the User-Centered Design process
Fig. 3. Design prototypes
Fig. 4. PDF-Over: free positioning of the visual signature representation
Better known as the EU Signature Directive
The term 'qualified signature' is not explicitly defined in the Signature Directive. However, this term is usually used in literature.
The Citizen Card offers additional functionality, such as identification of citizens and data encryption. However, these are not needed for our use cases.
The so-called 'Security Layer'
MOCCA Online: http://joinup.ec.europa.eu/software/mocca/description
MOCCA Local: http://joinup.ec.europa.eu/software/mocca/description
http://www.a-trust.at/
http://www.itsolution.at/digitale-signatur-produkte/desktop/trustDesk.html
The MIME type defines the document format.
Using the PC/SC (Personal Computer/Smart Card) interface
PDF-Over, Version 4.0.0, 15.1.2013, http://www.buergerkarte.at/pdf-signatur.de.php
Since the official launch, about 2,000 users per month have been gained.
This is a common approach for most developments as indicated in http://www.nngroup.com/articles/iterative-design/.
http://www.nngroup.com/articles/ten-usability-heuristics/ | 26,111 | [
"1004214",
"994640",
"1004215",
"1004216"
] | [
"65509",
"65509",
"65509",
"65509"
] |
01490915 | en | [
"shs",
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01490915/file/978-3-642-40358-3_25_Chapter.pdf | Michael Räckers
email: [email protected]
Sara Hofmann
email: [email protected]
Jörg Becker
email: [email protected]
The Influence of Social Context and Targeted Communication on e-Government Service Adoption
Keywords: electronic ID card, eID, e-Government, TAM
Introduction
After years of debate, the German electronic ID card (eID) was introduced in 2010. [START_REF] Hornung | An ID card for the Internet -The new German ID card with 'electronic proof of identity[END_REF] Besides being a piece of identification in the 'offline' world, it offers an electronic proof of identity and could also be used for signing documents electronically in the future. Although its launch was accompanied by much public interest, it could not meet the expectations of the government. So far, 17 million eID cards have been handed out to German citizens. However, less than 30% of the respective citizens have activated the electronic identification function. [START_REF] Keitzel | Mittelstandsoffensive Neuer Personalausweis. Expertise und Handlungsempfehlungen für die Etablierung zentraler eID-Infrastrukturen für den Mittelstand[END_REF] The number of online applications for the eID is currently limited to 42. [START_REF]Federal Ministry of the Interior[END_REF] Qualified electronic certificates, which are a prerequisite for using the qualified electronic signature function of the eID, were still in the test phase in December 2012. [START_REF] Kirsch | Digitale Signatur mit dem ePerso zum Sonderpreis[END_REF] There are several reasons for the reluctant start of the eID in Germany. On the one hand, the limited number of applications probably causes citizens to ignore possible advantages. On the other hand, security concerns contribute to the rather negative image of the eID. [START_REF] Grote | Vom Client zur App: Ideenkatalog zur Gestaltung der Software[END_REF] Shortly after the release of the eID card in 2010, the Chaos Computer Club, one of the largest hacker organisations in the world, demonstrated that the reading devices that were distributed together with the card were insecure and could easily be cracked. [START_REF] Kirk | Hacking of new German ID card[END_REF] Thus, in order to overcome these barriers and to increase the adoption of the electronic use of the eID, it is crucial to offer services that are consumed by many citizens and that provide an actual benefit for the user. Furthermore, the government should implement a suitable strategy to counteract the reluctant intention to use the new eID card. The arising question is what such a strategy should look like. For answering this question, it is first of all necessary to understand the factors that influence the adoption of the eID.
In order to identify the influencing factors, we propose the application and enrolment process at a German university as a use case for the eID. Universities provide a suitable context for eID studies as many students are likely to possess the new eID card. Furthermore, applying and enrolling for a study programme is a process all students have to go through. We propose a research model to analyse the variables determining the behavioural intention to use the electronic ID card. Our basic hypotheses are based on the Technology Acceptance Model. [START_REF] Davis | User Acceptance of Computer Technology: A Comparison of Two Theoretical Models[END_REF] As proposed by other studies, we extend our research model by demographic variables, experience and social influence [START_REF] Venkatesh | User acceptance of information technology: Toward a unified view[END_REF] as well as by perceived risk [START_REF] Bélanger | Trust and risk in e-government adoption[END_REF], [START_REF] Carter | The utilization of e-government services: citizen trust, innovation and acceptance factors[END_REF]. Furthermore, as suggested by previous research [START_REF] Hofmann | Adoption of Municipal e-Government Services -A Communication Problem?[END_REF], we integrate (targeted) communication into our research model. Our resulting research questions are:
RQ1: Which factors influence the behavioural intention of students to use an eID application?
RQ2: Which influence does communication have on the adoption of eID services at universities?
The article is organised as follows. In the next section, we give a short overview of the eID in Germany and derive our research model based on e-government and IT adoption literature. In Section 3, we describe our research methodology, including the design and structure of the questionnaire as well as the process of data collection and analysis. This is followed by Section 4, where we present the results of our analysis, and Section 5, in which we discuss the impact of our findings. Section 6 summarises our results and shows both the limits of our study and the agenda for future research in this domain.
Related Work
The eID was introduced in Germany in November 2010 and since then Germany ranks among Belgium, Estonia, Finland, Italy, Portugal and Spain as one of the seven European countries that offer eIDs to their citizens. [START_REF]Electronic identification, signatures and trust services: Questions and Answers[END_REF] The German eID provides an electronic proof of identity as well as an electronic signature function. [START_REF] Hornung | An ID card for the Internet -The new German ID card with 'electronic proof of identity[END_REF] Although the eID is judged as rather secure, resistance to using it is rather strong. [START_REF] Keitzel | Mittelstandsoffensive Neuer Personalausweis. Expertise und Handlungsempfehlungen für die Etablierung zentraler eID-Infrastrukturen für den Mittelstand[END_REF] The aim of our study is to understand both how eID services should be designed and how the accompanying communication should take place in order to increase the adoption of these services. Our research model is based on the Technology Acceptance Model (TAM) by [START_REF] Davis | User Acceptance of Computer Technology: A Comparison of Two Theoretical Models[END_REF]. TAM is an often-applied model that describes the influence of different variables on IT adoption and that can be adapted easily for different application areas. [START_REF] Pikkarainen | Consumer acceptance of online banking: an extension of the technology acceptance model[END_REF], [START_REF] Zhou | Voluntary adopters versus forced adopters: integrating the diffusion of innovation theory and the technology acceptance model to study intraorganizational adoption[END_REF] For our purposes, we use the version of TAM from 1996, which includes the variables perceived usefulness, perceived ease of use, behavioural intention and actual system use. [START_REF] Venkatesh | A Model of the Antecedents of Perceived Ease of Use: Development and Test[END_REF] As the analysed IT application does not exist yet and therefore the actual system use (see greyed-out box in Fig. 1) cannot be measured, our dependent variable is behavioural intention.
We extend the basic TAM by the factors perceived risk, social influence, communication at the university as well as experiences. Our research model is shown in Fig. 1. Whereas the basic components of TAM are displayed in white rectangles, the added variables are displayed in light grey ones. The variables on the dark grey background are tested for their dependence on demographic factors (age, gender, computer skills, study programme). The assumed relations are shown by arrows.
Basic model: The basic model of TAM has been used in a variety of studies dealing with the adoption of IT. [START_REF] Venkatesh | A Model of the Antecedents of Perceived Ease of Use: Development and Test[END_REF] TAM explains the actual use of a system as being influenced by the behavioural intention to use the system, which in turn is influenced by perceived usefulness as well as perceived ease of use. Furthermore, perceived ease of use also impacts perceived usefulness. Despite its popularity, TAM has been subject to a lot of criticism due to lacking objective antecedents as well as "black box" concepts. [START_REF] Benbasat | Quo Vadis, TAM?[END_REF] However, as it is one of the most tested and most successful theories in IT adoption and also in e-government adoption (cf. e.g. [START_REF] Carter | The utilization of e-government services: citizen trust, innovation and acceptance factors[END_REF], [START_REF] Sipior | The digital divide and t-government in the United States: using the technology acceptance model to understand usage[END_REF]), we base our basic hypotheses on the assumptions of TAM:
H 1 : Perceived usefulness will positively influence behavioural intentions.
H 2 : Perceived ease of use will positively influence behavioural intentions.
H 3 : Perceived ease of use will positively influence perceived usefulness.
Perceived risk: In the research area of IT adoption and especially in e-government adoption, the notion of (perceived) risk plays an important role, as much sensitive data is processed [START_REF] Hofmann | Adoption of Municipal e-Government Services -A Communication Problem?[END_REF] and due to the impersonality of transactions on the internet [START_REF] Pavlou | Institutional trust and familiarity in online interorganizational relationships[END_REF]. In the context of e-government, perceived risk is understood as "the citizen's subjective expectation of suffering a loss in pursuit of a desired outcome". [START_REF] Warkentin | Encouraging citizen adoption of egovernment by building trust[END_REF] Previous research on e-government acceptance suggests that perceived risk has a negative impact on perceived usefulness and behavioural intention. [START_REF] Bélanger | Trust and risk in e-government adoption[END_REF], [START_REF] Horst | Perceived usefulness, personal experiences, risk perception and trust as determinants of adoption of e-government services in The Netherlands[END_REF], [START_REF] Featherman | Predicting e-services adoption: a perceived risk facets perspective[END_REF] In our survey, we especially focus on security concerns in terms of privacy and data security.
H 4 : Perceived risk will negatively influence perceived usefulness.
H 5 : Perceived risk will negatively influence behavioural intentions.
Social context: A further concept that has been analysed in IT adoption studies is social context. [START_REF] Venkatesh | User acceptance of information technology: toward a unified view[END_REF] identified a relation between social influence and behavioural intention. This hypothesis has been studied in different e-government specific studies, too (cf. e.g. [START_REF] Gupta | Adoption of ICT in a government organization in a developing country: An empirical study[END_REF]). Further research suggests a correlation between social context and the attitudes of a system user [START_REF] Guo | Factors influencing perceived usefulness of wikis for group collaborative learning by first year students[END_REF], which is especially the case for social context and perceived usefulness. [START_REF] Shen | Social Influence for Perceived Usefulness and Ease-of-Use of Course Delivery Systems[END_REF]
H 6 : The social context will positively influence perceived usefulness.
H 7 : The social context will positively influence behavioural intentions.
Experiences: Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), an extension of TAM [START_REF] Venkatesh | User acceptance of information technology: Toward a unified view[END_REF], we assume a correlation between the previous assessment of the eID, based on experiences and information, and perceived risk, perceived usefulness and perceived ease of use.
H 8 : The previous assessment of risk based on experiences and information on the eID will positively influence perceived risk.
H 9 : The previous assessment of perceived usefulness based on experiences and information on the eID will positively influence perceived usefulness.
H 10 : The previous assessment of perceived ease of use based on experiences and information on the eID will positively influence perceived ease of use.
Demographics: Demographic factors have been identified as influencing factors in previous studies of IT acceptance. [START_REF] Agarwal | A Conceptual and Operational Definition of Personal Innovativeness in the Domain of Information Technology[END_REF], [START_REF] Gefen | Gender Differences in the Perception and Use of E-Mail: An Extension to the Technology Acceptance Model[END_REF] In our case, we analyse the impact of age, gender, field of study and computer skills on perceived risk, perceived usefulness, perceived ease of use and behavioural intention.
H 11 : Age will influence perceived risk, perceived usefulness, perceived ease of use and behavioural intention.
H 12 : Gender will influence perceived risk, perceived usefulness, perceived ease of use and behavioural intention.
H 13 : The field of study will influence perceived risk, perceived usefulness, perceived ease of use and behavioural intention.
H 14 : Computer skills will influence perceived risk, perceived usefulness, perceived ease of use and behavioural intention.
Communication: A further question is how communication influences the adoption of IT. Therefore, we analyse whether targeted communication content and channels can influence the attitude towards the eID. Communication can play a vital role in spreading information. [START_REF] Rogers | Diffusion of Innovations[END_REF] Furthermore, previous research suggests that lacking communication is one reason for reluctant e-government adoption in Germany. [START_REF] Hofmann | Adoption of Municipal e-Government Services -A Communication Problem?[END_REF] H 15 : Targeted communication at university on the eID will influence perceived usefulness.
Methodology
In most studies, technology acceptance has been researched using surveys. [START_REF] Venkatesh | User acceptance of information technology: Toward a unified view[END_REF], [START_REF] Bélanger | Trust and risk in e-government adoption[END_REF] As we propose a quantitative model to understand the influencing factors of the adoption of the eID card, our instrument was a standardised online questionnaire. We tested our hypotheses in the context of an application case for the eID card at a German university. In collaboration with the university's IT service provider as well as based on an online research, we identified several possible services in which an eID card could be used. We presented these possible applications in our questionnaire asking the participants for their assessment. Our items were developed from previous studies of e-government respectively IT adoption.
Our survey was structured as follows: The first part contained the introduction and introductory questions, in which we gave an overview of the topic and asked for general information about the participant. This was followed by questions on the assessment of the eID concerning perceived ease of use, perceived risk and perceived usefulness, as well as questions concerning the frequency of eID usage and the level of satisfaction. Afterwards, we presented possible applications. We identified the services Single sign-on (SSO) and signing forms electronically as well as the application and enrolment process. Participants were asked for their assessment of the variables of our research model. The items for our constructs were measured on a four-point scale (ranging from strongly agree to strongly disagree). Afterwards, we recorded the information need and the influence of communication by querying which factors the participants would like to be informed about and through which channels (assuming that the eID service was implemented at the university). In the final part, we collected demographic data and allowed for comments. Completing the survey took about 15 minutes. As an incentive to take part, we gave away three Amazon gift cards.
We pre-tested the survey in several rounds. First of all, an expert from the IT service provider checked it for logical mistakes. We conducted the first round of pre-tests in person. The second round of pre-tests was sent via e-mail. As this only led to few changes, we launched the survey afterwards. We sent an invitation link to the online survey to all employees and students of the university via e-mail, about 45,000 persons in total. Filter questions ensured that the different target groups would only answer questions relevant to them.
We analysed our data with the aid of Excel for descriptive statistics as well as with SPSS for correlation analysis and linear regression. Besides calculating the Spearman correlation, we ran a regression analysis for the influencing factors of behavioural intention and perceived usefulness.
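As an illustration of the correlation step, the following self-contained Java sketch computes Spearman's rank correlation for two item vectors, using average ranks for ties, which is appropriate for ordinal four-point items. It is a didactic re-implementation under our own assumptions, not the SPSS procedure that was actually used.

```java
import java.util.Arrays;

/** Didactic sketch of Spearman's rank correlation. */
final class SpearmanSketch {

    /** Average ranks (1-based); ties receive the mean of their rank positions. */
    static double[] ranks(double[] v) {
        Integer[] idx = new Integer[v.length];
        for (int i = 0; i < v.length; i++) idx[i] = i;
        Arrays.sort(idx, (a, b) -> Double.compare(v[a], v[b]));
        double[] r = new double[v.length];
        int i = 0;
        while (i < v.length) {
            int j = i;
            while (j + 1 < v.length && v[idx[j + 1]] == v[idx[i]]) j++;
            double avg = (i + j) / 2.0 + 1.0;   // mean rank of the tie group
            for (int k = i; k <= j; k++) r[idx[k]] = avg;
            i = j + 1;
        }
        return r;
    }

    /** Spearman's rho = Pearson correlation of the rank vectors. */
    static double spearman(double[] x, double[] y) {
        double[] rx = ranks(x), ry = ranks(y);
        double mx = Arrays.stream(rx).average().orElse(0);
        double my = Arrays.stream(ry).average().orElse(0);
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < rx.length; i++) {
            cov += (rx[i] - mx) * (ry[i] - my);
            vx  += (rx[i] - mx) * (rx[i] - mx);
            vy  += (ry[i] - my) * (ry[i] - my);
        }
        return cov / Math.sqrt(vx * vy);
    }
}
```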
Data and Results
The questionnaire was implemented in an online survey tool and was open from June 26th to July 10th. Overall, 1,632 students participated and completed the questionnaire. 37% of the participants stated that they have the new ID card, 35% of these have additionally activated the electronic functionalities of the eID, and 29% have an appropriate card reader to use these functionalities. This is in line with the number of new eIDs and released electronic functionalities in Germany in general. [START_REF] Borchers | Zwei Jahre später: der neue Personalausweis[END_REF] One important aspect for the evaluation of the results was the participants' level of information regarding the eID. In all aspects we asked for, less than 25% of the participants answered that they are very well or well informed (cf. Table 1). For assessing the influence of the social context on the behavioural intentions, one main aspect was the importance of different sources of information about the eID card in general. We did not further divide the sources of information into subsets such as sources of information on applications or on security aspects. Regarding the sources of information on which the participants build their own opinion, the results show that they rely on official information and information from newspapers as well as on information from their direct social environment like family, friends or colleagues (cf. Table 2). In the following we will, by way of example, concentrate on the last case, as the results for all three cases show the same tendency. Overall, 58% of the students state that they would use the eID card for an online application at the university (categories 1 and 2 of 4 on a scale from "most likely" to "no") and 19% would not use the eID card for a completely electronic application process (category 4). Furthermore, 57% of the students think that their family or friends would use the eID card for the application process and 12% think that their family or friends would not use it. Condensing the further questions regarding the scenario of a complete online application and calculating the correlation based on Spearman, we determine an influence for each of our hypotheses H1 to H10 (cf. Fig. 2 and Table 3). Additionally, we performed a regression analysis for perceived usefulness and behavioural intention to use the eID card. Perceived usefulness has an R2 of 0.722. The social context has a high influence on perceived usefulness; perceived risk has a small influence on it. Additionally, usefulness in general also has a high influence. Behavioural intention also shows an R2 of 0.722. Here, too, perceived risk has a rather small influence, while perceived usefulness and social context have a high influence. Additionally, perceived ease of use does not have a noticeable influence on the behavioural intention to use the eID card for the online application for a university place.
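For readers unfamiliar with the reported R2 values, the following Java sketch shows how R2 is defined for a least-squares fit. For simplicity it uses a single predictor, whereas our SPSS models included several; it is purely illustrative and not the procedure used for the reported figures.

```java
/** Didactic sketch: R^2 (explained variance) of a one-predictor
 *  least-squares fit; the study itself used multiple regression in SPSS. */
final class RSquaredSketch {
    static double rSquared(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
        }
        double b = sxy / sxx, a = my - b * mx;  // least-squares line
        double ssRes = 0, ssTot = 0;
        for (int i = 0; i < n; i++) {
            double e = y[i] - (a + b * x[i]);   // residual
            ssRes += e * e;
            ssTot += (y[i] - my) * (y[i] - my);
        }
        return 1.0 - ssRes / ssTot;
    }
}
```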
From the set of demographic aspects, only individual computer skills have a significant influence on the behavioural intention to use. However, even in this category, the behavioural intention to use the eID for the online application only slightly depends on the level of computer skills, as it ranged between 61% (very good skills) and 50% (low skills).
We asked for indicators of perceived usefulness in every usage scenario. Based on the hypothetical process of a completely online application, the students said that it would be a useful, easy and fast alternative to the classical, (partly) paper-based way. However, they did not see any improvements regarding safety and were indifferent regarding financial issues (cf. Table 4).
Finally, we asked about communication aspects to examine the influence of communication on the behavioural intention to use the German eID. Overall, about 60% of the students indicated that they (most) likely think that their attitude regarding the eID could be influenced through communication by the university (cf. Table 5). Further results show that students wish to be informed by the university about security aspects of the eID (80%), the university's online services for the eID card (72%) as well as about the general functionality (72%). Only 10% do not want to get information about the eID card from their university. Regarding the preferred channel, students want to be informed by mail (74%), via the university's website (74%) or via the website of the university's IT provider (66%). They do not want to get information via the social websites of the university (e.g., the Facebook page of the university).
Discussion
It is striking that the results show a low level of awareness of the functionality and fields of application of the new eID card but a significantly high interest in this new technology and related services. Moreover, this interest only slightly depends on individual technical skills. However, the low level of awareness first of all has to be ascribed to the governmental level, as governments should have a high interest in informing people about the functionality and services of the eID. If students increasingly use their eID card for university services and provide all their information and applications digitally, process costs within the universities can be reduced in the future. This would not only be true for universities but also for other organizations with high rates of document processing such as governments, insurance companies, or banks. They all provide various services which would benefit from completely digital processing. It would help to optimize business processes in order to provide faster and thus cheaper services to their customers. Furthermore, our results show that the participants prefer several channels for information about the eID card and related services. They not only seek information from official sources or newspapers, but rely heavily on the opinion of their social environment about the eID card and its related services. Within our correlation analysis, the influence of the social context on the behavioural intention to use the eID card for the online application is very strong, which conforms to previous research. This is an important finding as it should have an influence on the information and communication strategies of the stakeholders of the eID, be it the governments or further providers of services around the eID such as a university.
It is striking that only one third of the citizens who have a new eID card have activated the electronic functionalities. This reduces the actual number of potential users for services relying on the eID functionalities to only roughly 10%. This, furthermore, raises the question of the reasons for this low adoption rate. Recent research shows that various aspects have an influence on this acceptance rate. Besides technical, environmental, service and user characteristics, trust and communication have to be mentioned here. [START_REF] Hofmann | Identifying Factors for e-Government Acceptance -a Literature Review[END_REF] Our results presented in Section 4 show that ease of use (technical and service characteristics), social context and skills (environmental and user characteristics), security issues (trust) and communication have a significant influence on the behavioural intention to use the eID card at university. To underline this, the participants furthermore explicitly mentioned security concerns, the feeling of being insufficiently informed, and the fear of increased complexity as main reasons for not using the eID card. However, based on our results, many of the participants see the eID card as a useful, fast and easy alternative to the current way of applying for a university place. This shows the importance of targeted communication efforts to explain the advantages of the new technologies to potential users.
Of course, we cannot transfer this impression of ease of use directly to other eID-ready services. However, in our example case it underlines the importance of the further influencing factors, i.e. trust and the already mentioned targeted communication. From our perspective, these two factors interrelate to some extent. On the one hand, the eID itself as well as the services built upon the eID card have to be secure in terms of privacy of data and protection against data loss, which is in line with further results [START_REF] Bélanger | Trust and risk in e-government adoption[END_REF], [START_REF] Horst | Perceived usefulness, personal experiences, risk perception and trust as determinants of adoption of e-government services in The Netherlands[END_REF], [START_REF] Featherman | Predicting e-services adoption: a perceived risk facets perspective[END_REF]. On the other hand, the central government needs to convincingly communicate that the underlying techniques are safe. One interpretation could be that the central government is either not trusted from the citizens' point of view, so that citizens do not believe in the communication of the government, or that the central government is not able to communicate the security of its techniques and services in the right way. Following on from this, service providers such as a university also encounter problems in communicating the safety of the technology. Recent research indeed allows for the conclusion that there are deficits in the communication efforts of governments. [START_REF] Hofmann | Adoption of Municipal e-Government Services -A Communication Problem?[END_REF] Therefore, improving the way of communication would help to increase the acceptance rate of eID-supported services.
Additionally, our results show that it is not only important that the central government as provider of the eID card informs about and communicates the possibilities, services and security aspects. It is also up to the university as (potential) provider of concrete services to inform about these aspects and to provide information about the individual services as well as the overall technology behind them.
To sum up, a holistic concept is necessary in which the information strategies of the central government and the university, as representative of the individual service providers, are aligned. Our results show that the citizens, i.e. the students, feel uninformed about the possibilities of the eID card. Additionally, this concept has to reflect the importance of the social environment of every single (potential) user. For example, this leads to a somewhat grassroots approach, where users are stimulated to encourage their friends and colleagues to also use the eID card for the services available. The university and the individual service providers have the advantage that they are to some extent closer to the customers. It is easier for them to perform targeted communication, dedicated to the special needs of the (potential) users. A close interrelation between these stakeholders is obligatory. From our perspective, this could be one way to break the chicken-and-egg problem we are facing: on the one hand, there are few (potential) users because few citizens activate the eID functionality, so it is not that promising for service providers like the university to adapt their authentication services to the eID card; on the other hand, citizens hesitate to activate this functionality because only few services are actually available, at universities as well as at public administrations.
Conclusion
Our analysis clearly shows the dilemma that the central government and the service providers offering the new eID card and related services are facing: on the one side, the problem of few services being available; on the other side, potential users hesitating to activate and use the services because of the few possibilities to use them. One main reason that could be synthesized is the feeling of lacking information expressed by the students who participated in our survey. Thus, our main conclusions and answers to our research questions are that better, targeted communication and the inclusion of the social context of the (potential) users could be key factors for improvement.
As such, our study contributes to the current body of knowledge. We were able to confirm and extend existing theories and models of technology acceptance research, especially research related to TAM. We particularly focussed on the aspects of communication and the social context of the users. Additionally, as an implication for practice, we synthesized from the data that the central government, together with the aligned service providers, has to extend its communication concepts, targeted at individual user groups, in order to enhance current efforts and achieve better results in terms of eID adoption. In particular, we were able to demonstrate the strong influence of the social context on the attitude towards eID services.
Our study has a number of limitations which should be addressed by further research. We focussed on the field of services combined with an eID card in the university context. Furthermore, we limited our work to exemplary services within a university environment. On the one hand, this limits the generalization of our findings; on the other hand, it was necessary to reduce the complexity of the field to a level that was understandable for the attendees of the survey.
Further research should follow two main directions. Firstly, the results achieved in the university context have to be extended to further services which are relevant for eID card applications. This has to be done in the university area as well as for services of public administrations and private organizations. The purpose would be to further validate the findings and make them generalizable. Secondly, the aspects of communication and the inclusion of the social context into communication strategies and concepts have to be put into practice. This application subsequently has to be evaluated from a scientific perspective to ensure an integrated proof of the findings of our work.
Fig. 1. Research model for the adoption of eID services at the university.
Table 1. Level of information about the eID card (1=very good to 4=not informed).

Level of information            1       2       3       4
eID functionality             3.01%  15.82%  42.24%  38.93%
eSign functionality           4.50%  20.12%  36.10%  39.28%
Security aspects              2.97%  14.58%  34.48%  47.97%
Fields of application         4.36%  14.14%  34.31%  47.18%
General level of information  2.45%  13.72%  50.07%  33.77%
Table 2. Sources of information on which the participants build their own opinion (1=very important to 4=not important).
Fig. 2. Spearman correlation of factors on behavioural intention of students to use the eID card for online application for a university place (0.01% level).

Table 3. Spearman correlation for the scenario of complete online application for a university place (0.01% level) for hypotheses H1 to H10.

Hypothesis  H1     H2     H3     H4      H5      H6     H7     H8     H9     H10
Spearman    0.729  0.488  0.517  -0.350  -0.408  0.660  0.803  0.743  0.708  0.678
Table 4. Perceived usefulness of the eID card for online application at university (1=strongly agree to 4=disagree).

                     1       2       3       4
Useful alternative  35.43%  34.70%  16.00%  13.87%
Fast alternative    40.28%  37.04%  13.51%   9.17%
Easy alternative    24.95%  40.53%  21.15%  13.37%
Safer alternative    8.11%  18.68%  36.42%  36.79%
Cheap alternative   18.84%  30.81%  24.82%  25.53%
Table 5. Possible influence by the university's communication about the eID (1=most likely to 4=no).
Acknowledgements. We would like to thank Adrian Beheschti, Lisa Eckey, Heinrich Hüppe, Magdalena Lang, Philipp Simon, Fritz Ulrich, Patrick Vogel and Till Weber for their dedicated effort in conducting the study. | 33,144 | [
"1004217",
"1004218",
"1004219"
] | [
"325710",
"325710",
"325710"
] |
01490917 | en | [
"shs",
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01490917/file/978-3-642-40358-3_27_Chapter.pdf | Timo Wandhöfer
email: [email protected]
Beccy Allen
Steve Taylor
Paul Walland
Sergej Sizov
email: [email protected]
Online Forums vs. Social Networks: Two Case Studies to support eGovernment with Topic Opinion Analysis
Keywords: eGovernment, Twitter, Facebook, Online Forums, Topic Opinion Analysis, Sentiment, Controversy
This paper suggests how eGovernment and public services can apply "topic-opinion" analysis (developed in the EC IST FP7 WeGov project) to citizens' opinions on the Internet. In many cases, discussion tracks on the Internet become quite long and complex. Stakeholders are often interested in gaining a quick overview of such a discussion, including understanding its thematic aspects and identifying key arguments and key users. The topic opinion analysis that is part of the WeGov toolbox aims to provide appropriate summarization techniques by identifying latent themes of discussion (topics) and the most relevant contributions and arguments for each topic, as well as by identifying the most active users that influenced a certain aspect of the discussion. In this paper we focus on online forums and social networks as digital places where users discuss potential political issues. We therefore set up two different case studies to validate the accuracy and usefulness of the analysis results of the topic opinion analysis.
Introduction
Governments and public institutions are increasingly working with citizens to give them more of a stake in the policy-shaping process, for example through public consultations on new legislation [START_REF] Koop | Leitfaden Online-Konsultation: Praxisempfehlungen für die Einbeziehung der Bürgerinnen und Bürger über das Internet[END_REF]. E-participation platforms foster communication and interaction between politicians and government bodies on the one side, and citizens on the other [START_REF] Koop | Leitfaden Online-Konsultation: Praxisempfehlungen für die Einbeziehung der Bürgerinnen und Bürger über das Internet[END_REF]. Notwithstanding the benefits brought about by existing eParticipation platforms, there remains the unsolved challenge of how to involve a larger number of affected individuals, groups and communities in discussions than is currently achieved through dedicated web sites. This problem has, for example, been analyzed with the 10 Downing Street Debate Mapper as a case in point [START_REF] Miller | Third Phase Report[END_REF]. The study found that very few people (7% of the 309 specifically addressed invitees) took part using the dedicated Debate Mapper website, but many of them did comment on the same subject on other Web platforms.
The use of social networking platforms has a significant part to play in political engagement. Social media [START_REF] Hrdinova | Designing social media policy for government: Eight essential elements[END_REF] and blogs [START_REF] Coleman | Blogs and the New Politics of Listening[END_REF] have high potential for eGovernment to interact with citizens. From a sociological point of view, platforms like Twitter are interesting for analyzing the dissemination of topics as well as the opinions and sentiments of society regarding particular topics [START_REF] Savage | Twitter as Medium and Message. Researcher are mining Twitter's vast flow of data to measure public sentiment, follow political activity, and detect earthquakes and flu outbrakes[END_REF]. Beyond that, online platforms have the power to influence the process of opinion making [START_REF] Coleman | Blogs and the New Politics of Listening[END_REF]. That is why [START_REF] Coleman | Connecting parliament to the public via the Internet: two case studies of online consultations[END_REF] uses the label "new politics of listening". In the UK, for instance, the Internet and social networks are everyday tools for Parliamentarians [START_REF] Miller | Third Phase Report[END_REF].
There is thus a huge potential in online discussion places, but there is the problem of making sense of the huge amounts of text in them. The aim of this paper is to suggest topic opinion analysis, validated in two case studies, to support eGovernment by exploiting the potential of online discussions and addressing the problem of "too much information". In the next section, we introduce the WeGov toolbox resulting from the EC IST FP7 WeGov project, and especially its topic opinion analysis component that provides summarization for political decision-makers. Subsequently we explain the process model behind this paper and describe two case studies that were selected to evaluate the topic opinion analysis with real data. Finally, we draw general conclusions.
Background
Both case studies that are described within this paper are based on the WeGov Toolbox as the technical framework. The WeGov Toolbox supports diverse components for analyzing huge amounts of online available text for stakeholders 1 .

WeGov Toolbox
The WeGov Toolbox (hereafter "the toolbox") is a web-based system that enables the user to collect and analyze postings 2 and users from social networks and the HeadsUp forum. The toolbox is deployed and hosted at a server, and the user connects to this using their web browser. The key features of the toolbox are as follows.
• The user can specify and run searches on the social networks Facebook and Twitter, and raw data reflecting users' comments is collected.
─ On Facebook, the user can monitor public groups and pages -the user can instruct the toolbox to collect posts and comments on those posts from a Facebook group or page by specifying the URL of the page. ─ On Twitter, the user can search for keywords or hashtags.
• Searches can be scheduled, so that they repeat automatically. This is useful for collecting data over an extended period, which is particularly suitable for monitoring a news story. The system is designed so that when a search is executed multiple times by a schedule, it will not collect any duplicate posts, as duplicates can skew analysis results. • Search results can be fed into the toolbox's two analysis components to provide summaries and automated insights into the (sometimes very large) data set returned from the social networks. ─ Behavior analysis has been developed by the Open University, Knowledge Media Institute (KMi), and monitors the discussion activity, categorizes users into behavior types and highlights key posts and users [START_REF] Rowe | Predicting Discussions on the Social Semantic Web[END_REF][START_REF] Rowe | Anticipating Discussion Activity on Community Forums[END_REF][START_REF] Rowe | Behaviour analysis across different types of Enterprise Online Communities[END_REF]. ─ Topic-opinion analysis has been developed by the University of Koblenz [START_REF] Sizov | GeoFolk: latent spatial semantics in web 2.0 social media[END_REF],
and determines themes of the documents (posts, comments, etc.) in the discussion by identifying sets of terms that frequently occur together in multiple posts and grouping them together into topic groups. In addition, opinions are determined by sentiment analysis, and the topic groups can be measured in terms of whether they express positive or negative opinion.
• We have adopted a methodical approach for the development process of the toolbox with frequent and iterative engagement of end users, such as the German Parliament, the German State Parliament of North Rhine-Westphalia, the EC Parliament, city administrations, parties and NGOs [START_REF] Wandhöfer | Engaging politicians with citizens on social networking sites: the WeGov Toolbox[END_REF], so as to get requirements and feedback on the toolbox's functions and usability [START_REF] Joshi | Paradox of Proximity -Trust & Provenance within the context of Social Networks & Policy[END_REF]. As part of user engagement, a number of use cases were designed [START_REF] Addis | New ways for policy makers to interact with citizens through open social network sites -a report on initial results[END_REF] showing how the toolbox analysis tools could provide a two-way dialogue with citizens, and the work reported here develops one of these use cases.
• An important aspect of the work in the WeGov project is to protect the rights and privacy of citizens and policy makers. To address this, a legal and ethical analysis was conducted to provide us with an understanding of data protection issues and give an insight into transparency. This work has influenced the design and use of all parts of the toolbox, and has been reported elsewhere [START_REF] Wilson | Appendix A, Legal Analysis of Issues Surrounding Social Networking Sites[END_REF]. The impact it has on the work here is that we only collect postings from publicly accessible sources.
Topic Opinion Analysis
In many cases, discussion tracks in social media become long and complex. Stakeholders of the toolbox technology (such as politicians, political researchers, active users) are often interested in gaining a quick overview of such a discussion, including understanding its thematic aspects, identifying key pro-and contra-arguments and finding the most influential users. However, completely reading hundreds (or even thousands) of posts is too time-consuming to be practical. There is thus a huge need to summarize the discussion tracks, and the Topic-Opinion analysis component of the toolbox provides this by identifying latent themes of discussion (topics), most relevant contributions and arguments for each topic, as well as identifying the most active users that influenced a certain aspect of discussion. [START_REF] Sizov | GeoFolk: latent spatial semantics in web 2.0 social media[END_REF]. The topic-opinion tool employs state of the art methods of Bayesian learning and opinion mining [START_REF] Blei | Latent Dirichlet Allocation[END_REF][START_REF] Deerwester | Indexing by Latent Semantic Analysis[END_REF] for finding the most relevant pieces of information that should be presented to the user, and the methods are briefly described next.
Modeling topics: Probabilistic Bayesian models are used for mining the latent semantic structure of the online discussion. The toolbox approach can be seen as an extension to the state-of-the-art method named "Latent Dirichlet Allocation" (LDA) [START_REF] Blei | Latent Dirichlet Allocation[END_REF]. The collection of postings is represented by means of probabilistic distributions over terms (words) that appear in particular discussion postings with different frequencies. The analysis runs across many posts and looks for words that occur together in the same post. Topics are formed from groups of words that frequently occur together in a post, and the more posts that contain the same group of words, the stronger the topic is. Each topic is therefore characterized by its most relevant terms. A post can belong to more than one topic (for example if it contains words commonly occurring in two topic groups); consequently, postings are represented by means of probabilistic membership of topics (e.g. a post can be 50% in topic 1, 25% in topic 2 and 25% in topic 3). Postings that belong to a certain topic with high probability are considered as the most characteristic examples for a certain aspect of the online discussion. [START_REF] Sizov | GeoFolk: latent spatial semantics in web 2.0 social media[END_REF]

Modeling opinions: The toolbox employs state-of-the-art techniques for mining user opinions and affect states. Conceptually, they are based on structured vocabularies that indicate the emotional state of a post's author (e.g. skepticism, positive or negative emotions, anger, etc.). Consequently, postings with strong opinions or emotions are selected for presentation to the user. [START_REF] Sizov | GeoFolk: latent spatial semantics in web 2.0 social media[END_REF]

Topic-opinion summarization: Results of topic and opinion analysis are combined for presentation to the user. First, candidate postings are chosen with respect to their high relevance regarding particular discussion aspects (i.e. topics). Second, for each pre-selected posting, opinion/emotion analysis is performed. The output is constructed in such a way that a) all topics identified in the dataset are appropriately reflected, and b) postings chosen for each topic reflect different opinions and emotions. As a result, the output contains a limited number of "must-see-first" contributions from the online discussions, covering a broad spectrum of its contextual and emotional facets. [START_REF] Sizov | GeoFolk: latent spatial semantics in web 2.0 social media[END_REF]

Topic-opinion analysis is intended to provide quick summaries of the themes in a debate and the opinions expressed by citizens in digital places. As an example of this, Figure 1 shows the topic analysis results when the input was multiple sets of responses on Twitter to the query 'cyprus'. Each line includes a list of five keywords that build the topic (e.g. "banks", "reopen", "prepare", "controls", "cyprus"). The next column shows the number of tweets assigned to each topic (e.g. 375 tweets for the first topic). The last two columns show the sentiment and controversy of the tweets, measured for each of the twelve topics. The indication of sentiment shows whether the tweets related to one topic are rather positive, neutral or negative. The indication of controversy shows the ratio of positive and negative posts.
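To make the summarization step more concrete, the following Java sketch shows how per-topic figures like those in Figure 1 (number of posts, sentiment, controversy) could be aggregated from per-post topic memberships and sentiment scores. It is an illustrative sketch under our own assumptions (for instance, controversy is computed here as the share of the minority polarity among polar posts) and not the actual WeGov implementation.

```java
import java.util.List;

/** Illustrative aggregation of topic-opinion summaries (not the WeGov code). */
final class TopicOpinionSummary {

    /** One analyzed post: topic-membership probabilities and a sentiment score
     *  in [-1, 1] (negative .. positive), e.g. from a sentiment lexicon. */
    record Post(double[] topicMembership, double sentiment) {}

    /** For each topic: number of assigned posts, mean sentiment and a simple
     *  controversy measure (share of the minority polarity among polar posts). */
    static void summarize(List<Post> posts, int numTopics) {
        int[] count = new int[numTopics];
        double[] sentimentSum = new double[numTopics];
        int[] positive = new int[numTopics], negative = new int[numTopics];

        for (Post p : posts) {
            // Assign the post to its most probable topic, as in Figure 1.
            int topic = 0;
            for (int t = 1; t < numTopics; t++)
                if (p.topicMembership()[t] > p.topicMembership()[topic]) topic = t;
            count[topic]++;
            sentimentSum[topic] += p.sentiment();
            if (p.sentiment() > 0) positive[topic]++;
            else if (p.sentiment() < 0) negative[topic]++;
        }
        for (int t = 0; t < numTopics; t++) {
            double mean = count[t] == 0 ? 0 : sentimentSum[t] / count[t];
            int polar = positive[t] + negative[t];
            double controversy = polar == 0 ? 0
                    : Math.min(positive[t], negative[t]) / (double) polar;
            System.out.printf("topic %d: %d posts, sentiment %.2f, controversy %.2f%n",
                    t, count[t], mean, controversy);
        }
    }
}
```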
Applied Process Model
Figure 2 shows the applied process model of how stakeholders were engaged, both to determine requirements and to evaluate the toolbox. The idea behind this approach is to identify potential use cases that are part of the end users' daily working lives. These cases are therefore of value to the end user, and can be used for validating the toolbox and its analysis results. Figure 2 shows two examples of such use cases ("HeadsUp" and "social networks"), and these are discussed throughout the paper to illustrate how the topic opinion analysis can be applied in everyday politics.
The top row in Figure 2 shows users on the Internet (the digital society), for instance users of online forums or social networks. The second row shows stakeholders, and how they interact with the users on the Internet. In the use cases, the stakeholders already perform (often manual) analyses on the data they get from citizens on social networks. The results of their existing analyses are shown in the bottom row; here we call these data the "control group". The control group is compared with the toolbox's analyses of the same data. For instance, the operator of the HeadsUp 3 discussion forum (cp. left) analyses the forum discussions manually to get an insight into the debate. Another example is the policy maker (cp. right) who extracts topics from social networks to get insight into the discussion. Civil society groups run forums and blogs to connect with their members and supporters, but analyzing the themes of these discussions is often beyond the organizations' resources. The toolbox could play an important role in helping small not-for-profit organizations, larger media organizations, as well as politicians and policy-makers to understand feedback across a range of communication channels.
HeadsUp is a UK initiative, launched in June 2003 to promote political awareness and participation amongst young people. It is an online debating space for 11-18 year olds that gives them the opportunity to debate political issues with their peers, elected representatives and other decision-makers.
Five debates happen each year, each lasting three weeks and are fitted around both the school and parliamentary calendar. The forum discussions are based around political topics of interest to young people, as well as those related to key political events, issues of debate in Parliament and the media, and current government policy. Each forum is supported by background materials and teaching resources to ensure that the discussions are of a high quality.
The discussions are analyzed by the Hansard Society and are summarized in a report, which is disseminated widely. The report contains the key themes of the debate with direct quotes from participants, other information about the forum and the political context at the time the debate happened.
The core reason for analyzing the forums and distributing the report is to allow young people to have their voices heard by those that make decisions on their behalf, and to highlight that their perspectives are often different to those of adults. This is a vital aspect of HeadsUp: the report provides a channel to feed back information from the forums to policy-makers, politicians and journalists; thereby allowing young people's perspectives to inform a wide audience of those with the power to effect change.
Methodology
HeadsUp was used to evaluate the usefulness of the toolbox with regard to forum data, and it provided a case study using a real-world data set. It was a useful test case because the reports on each forum were generated before the WeGov project started, so they were a good, independent control group for comparison with the results emerging from the toolkit. Three forums of different sizes were used:
• Sex Education -Do you get enough? (36 posts)
• Youth Citizenship Commission: are young people allergic to politics? (317 posts)
• How equal is Britain? (1186 posts)
The output of the toolbox was compared to the forum reports to assess how well it determined the themes of the debate as analyzed by a human. The toolbox's assessment of sentiment was compared with the reports and a selection of posts was double-checked for accuracy by a human. The user interface and the options available to view the data were assessed for their usefulness when populated with forum data.
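The accuracy double-check can be pictured as a simple label comparison, as in the sketch below; the two label lists are hypothetical stand-ins for the toolbox output and the human-coded control group.

```python
# Hedged sketch of the sentiment double-check: compare the toolbox's per-post
# labels against the human analyst's labels. Both lists are hypothetical.
toolbox_labels = ["positive", "negative", "neutral", "negative"]
human_labels = ["positive", "negative", "negative", "negative"]

matches = sum(t == h for t, h in zip(toolbox_labels, human_labels))
accuracy = matches / len(human_labels)
print(f"agreement with the human control group: {accuracy:.0%}")  # 75%
```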
Findings
The toolbox has applications outside social networks. Comments from blogs and forums or other data sets could be analyzed using the toolbox. This could help both small non-profits and large media organizations to analyze large-scale interactions.
• The toolbox is best at dealing with large quantities of data, amounts that could not be analyzed effectively by a human without significant resources to do so.
• The toolkit performs well on relatively in-depth data; this lends itself to blogs and forums that encourage more considered and less immediate responses.
• The toolbox also performed well in showing the nuances between different elements of a wider debate, e.g. the women's sport debate (sexism in sport & the types of sports played by men and women vs. mixed sports).
• The toolkit works best when analyzing medium-length comments that focus on one issue and when spelling and vocabulary are good.
Improvements
• A plain English explanation of how the algorithm understands and processes data is very important to ensure users trust the results. This includes an explanation of irregularities such as:
─ Why the same data sometimes yields different results.
─ Why keywords appear in the order and frequency that they do.
• Showing the hidden workings of the toolkit, such as:
─ The relative influence of a greater number of keywords, e.g. via a tag cloud.
─ Highlighting positive/negative words that contribute to sentiment scores.
─ A separate group for excluded comments so they are still visible to the user.
• Implementing more options for users to refine the data and customize it to their situation and needs. For example:
─ The ability to exclude certain posts or words from analysis.
─ Splitting up long posts into sections that can be analyzed separately, to avoid conflicting analysis of longer posts.
Although elements of the toolkit interface and the algorithm could be improved to help users understand the results better, the toolbox worked very well with the longer, more in-depth posts that are more common to blogs and forums than to social media.
Facebook and Twitter Case Study -Social Networks
The intention of this case study was to validate the usefulness of topic-opinion analysis of social media for politics. Therefore we designed three use cases showing how this technology may support politicians' everyday lives:
1. Local Facebook topics: Within this use case, we monitored a sample of at least ten Facebook pages representing a geographical area such as an MP's constituency. Here topic-opinion analysis was applied to extract the topics that people discussed on the pages. Each topic is a combination of words that represents a theme of the discussion, and comes with key users and key comments.
2. Monitoring topics on Twitter: The intention of the second use case is to identify subtopics on Twitter. For instance, the general debate on climate change covers subtopics like green energy or new kinds of technologies. Here we collected data three times a day from Twitter by searching for e.g. "climate change", and used topic analysis to detect the topics in the results. Because the results are already filtered by the search, the analysis produces subtopics.
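A sketch of this collection loop is shown below. The query, the thrice-daily schedule and the four-week horizon mirror the study; the `search_twitter` stub and everything else are illustrative assumptions rather than the toolbox's actual collection code.

```python
# Illustrative collection loop; `search_twitter` is a placeholder stub, not a
# real API binding. Replace it with an actual search client to use this.
import time

def search_twitter(query):
    # Placeholder: a real implementation would call the Twitter search API here.
    return [f"sample tweet mentioning {query}"]

def collect(query="climate change", runs_per_day=3, days=28, sleep=time.sleep):
    """Pool tweets from repeated searches over the four-week study period."""
    pooled = []
    for _ in range(runs_per_day * days):
        pooled.extend(search_twitter(query))
        sleep(24 * 3600 / runs_per_day)
    return pooled

# Because every pooled tweet already matches the query, running the topic
# analysis sketched earlier over the pool surfaces subtopics of the debate.
# Dry run without waiting: collect(sleep=lambda seconds: None)
```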
Methodology
This case study was conducted with a number of governmental representatives as end-user stakeholders: two members of the German Bundestag, four employees who work directly for a member of the German Bundestag, two members of the State Parliament North Rhine-Westphalia, one small German city (Kempten), one big German city (Cologne), and a German state chancellery (Saarland). In total this evaluation consisted of 11 questionnaires and 12 semi-structured interviews following the questionnaires. The questionnaires and interviews were based on an individual analysis report that was created from four weeks of data collected from Facebook and Twitter using the toolbox search tools and scheduler.
To address the aims above, we configured the toolbox to collect data relevant to our proposed interviewees: we created user accounts for them and set up automatic scheduled searches that were relevant to them. This enabled us to demonstrate and evaluate the analysis components with the external users on data that contained subject matter they were interested in. Our reasoning was that if they were interested, they would be better engaged, and therefore the quality of feedback would be better than if we had used arbitrary searches. Feedback from previous meetings with end-user stakeholders showed us that local or constituency-based searches were of high importance to them, so these featured strongly in the searches we set up.
• Strategy: Our strategy was the preparation of an individual analysis report. Each report was created using the same structure, but with data targeted to the end-user stakeholder it was intended for, and included approx. ten Facebook pages related to the local area. For Twitter we used approx. five keyword searches using phrases around the end-user stakeholder's areas of interest. The unique data profile was initially created by the WeGov project team and was updated over several iterations based on the feedback end users provided concerning their profile. For the collection from Facebook pages, we used the Facebook search tool, where we queried the constituency and the names of cities and towns within the constituency. The pages represent a selection of the available pages related to or managed by cities, public institutions, associations, local associations, arts and culture, politics, tourism and the local press. Pages with more likes, posts and comments were selected before those that displayed less public engagement. If the MP had "liked" one of the selected pages, this information was noted.
• Analysis Report: After four weeks of data collection, the data was analyzed by the toolbox following the "social network" use case pattern described above, and the results were collated into reports. The reports included a description of the evaluation strategy and the results at a glance, on one page where possible, and were sent to end users approx. two weeks before the interviews to allow them time to prepare their comments and feedback.
• Questionnaire: In addition to the analysis report, the participants received a questionnaire that covered concrete examples from the report. All questionnaires contained the same information and the same questions; the only difference was the sample of analysis results, which was tailored to the target end user. The questions were similar to the following examples: Is the topic clear? What is the label for this topic? Do you know the topic from presswork? Is this an interesting topic?
• Follow-up Interviews: Follow-up interviews were conducted to obtain more in-depth assessments of the analysis results provided within the analysis report and the questionnaire. Here the interviews focused on the reasons stakeholders answered the questionnaires in the way that they did.
Findings
• Sensible and expected topics: The toolbox provides topics that were sensible and expected given the source data. All topics from local Facebook pages that were assessed as understandable were known beforehand. The reasons: stakeholders are well informed about topics that arise or are discussed within their constituency. Stakeholders follow local social network channels and are part of social network discussions, so they are 'aware of the public area'. Regarding the further Twitter analysis results, the assessment was often the same: stakeholders monitor topics, are therefore 'aware of the public area' and know which subtopics are being discussed. Within the samples that were shown to the interviewees, the subtopics were identified and the topic of discussion was clear to them. The analysis is therefore able to provide the topics that are relevant for the queried search on Twitter, provided there are enough tweets.
• Quality of topics: When comparing the 'Facebook topics' and 'Twitter topics' use cases, the Twitter results were more useful to the stakeholders. For topics like 'federal armed forces', all of the relevant subtopics were identified. Concerning Facebook, the topics were more understandable and helpful for the interviewees when they were extracted from Facebook pages with high discussion activity (e.g. Angela Merkel or the press).
• Different meanings for topics: All interviewees mentioned that the combination of five words for one topic could have multiple meanings. It is often the case that two or three words fit together and another word gives the group of words as a whole a completely different meaning. Another problem is that single words can also have different meanings. For instance 'dear': one interviewee mentioned that it was not clear to him whether this word meant the form of address, a verb, an adjective or part of a noun. Depending on the meaning of the single word, its combination with other words can have different meanings.
• Less clear topics with local Facebook pages: All interviewees observed that the topics are often unclear for local pages. The reason why 42% of the 110 topics were nevertheless assessed as understandable is that policy makers know what is happening in the area of their electorate. The interviewees confirmed that the 42% figure in the questionnaire is very optimistic, because they often guessed at what the meaning of a topic could be. Most of the topics were clear to them because they know the 'real world' case and can therefore infer the topic. All interviewees further confirmed that this background information is necessary for most of the provided topics. The interviewees argued that the analysis is only as good as the input data; at the local level there are not many political debates that are public. But the results with Twitter have shown that the topic analysis is able to provide useful results.
The validation of sentiment analysis was not part of the questionnaire, but the analysis report covered at least one example similar to Figure 1, which was discussed during the expert interviews. Most of the interviewees could guess the meaning of 'sentiment' and 'controversy' within the toolbox, and could use these indicators to choose a topic and to read the posts contained within the group. However:
• It is not clear to them why a discussion is either positive or negative, as the visualization provides only one scale. For instance, it may help to show the total numbers of both positive and negative comments (a possible computation of both indicators is sketched below).
• When combined with 'controversy', the 'sentiment' is less clear. End users have difficulties understanding what the discussion looks like when seeing only the two scales.
• The 'controversy' scale is easier to understand when viewed separately. In general, the UI needs improvement to give end users a better understanding of its parameters.
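A sketch of how both indicators could be computed from post-level polarity scores is given below. It assumes the opinion-mining step yields a score of -1, 0 or +1 per post, and it reads 'controversy' as the balance between positive and negative counts; both the scoring scheme and this reading of the ratio described for Figure 1 are assumptions, not the toolbox's actual formulas.

```python
# Sketch of topic-level sentiment and controversy indicators, including the raw
# positive/negative counts the interviewees asked for. Polarity scores per post
# (-1, 0, +1) are assumed to come from the opinion-mining step.
def summarize_topic(polarities):
    pos = sum(1 for p in polarities if p > 0)
    neg = sum(1 for p in polarities if p < 0)
    total = len(polarities)
    sentiment = (pos - neg) / total if total else 0.0  # one scale from -1 to +1
    # One plausible reading of "ratio of positive and negative posts":
    # 0 means one-sided, 1 means perfectly split.
    controversy = min(pos, neg) / max(pos, neg) if pos and neg else 0.0
    return {"posts": total, "positive": pos, "negative": neg,
            "sentiment": round(sentiment, 2), "controversy": round(controversy, 2)}

print(summarize_topic([1, 1, -1, 0, -1, 1]))
# {'posts': 6, 'positive': 3, 'negative': 2, 'sentiment': 0.17, 'controversy': 0.67}
```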
Conclusions
This paper shows two case studies on how to apply and validate topic-opinion analysis for user comments on the Internet. While the first case study focused on the HeadsUp online discussion forum, the second focused on local Facebook pages and Twitter as social networks. Even though the approaches are different, both case studies follow the same process model and show added value as well as possible limitations.
Both evaluation approaches were very effective with respect to the quality of validation and the end users' feedback. While the HeadsUp case focused on the accuracy and reliability of the analysis data, the social network case focused on the usability of analysis results within decision-makers' everyday lives. However, both cases were very time consuming. For instance, the analysis reports and the extracted sample for the questionnaire needed current and personalized data: Facebook pages and Twitter topics of interest to the stakeholder. This approach therefore needed research time on the social web and continuous coordination with the end users to design an individual data report. Including all steps that were necessary to run this study, about one week was needed for each end user.
The toolkit returns on average one topic group for every 30 posts (HeadsUp) when the number of topic groups is not set manually by a user. However, the comments will not be distributed equally across the topic groups. Without being able to manually set the number of topic groups returned, the results were very hard to understand. With a medium-sized forum of around 300 posts the outcome may be understandable, but with smaller or larger forums the topic groups are either not refined enough, or so many topic groups are returned that patterns are hard to see or too many similar topic groups appear. However, it is important to note that the data being tested on the toolkit had already been analyzed manually, so there was already an understanding of what the debates were about; discussions that were previously unseen may be more challenging for a user to understand.
In the case of HeadsUp, the toolbox could be helpful in analyzing forum data, particularly the larger forums with hundreds or thousands of comments. The toolkit takes seconds to analyze hundreds of comments, whereas human analysis takes days to achieve similar results. In the social network case, the interviewees argued that the toolbox is a tool that sits between them and the large amount of social network data. The toolbox therefore needs to consider that the behavior of a social network may change frequently, for instance through new privacy settings on Facebook, or through the way that a political party in Germany, the Pirates, has revolutionized discussions on social networks using open and transparent methods. However, the results on Twitter were very useful for informing on particular topics and seeing the breadth of a debate.
The toolbox is best at dealing with large quantities of data, amounts that could not be analyzed effectively by a human without significant resources to do so. The toolkit performs well on relatively in-depth data; this lends itself to blogs and forums that encourage more considered and less immediate responses. The toolbox also performed well in showing the nuances between different elements of a wider debate, both in the HeadsUp case and in the Twitter case. With local Facebook pages the quality of results worsened, owing to the quality of the input data and the scarcity of political conversations. Instead of monitoring a large set of local Facebook pages, the interviewees proposed selecting fewer pages, but ones with higher-quality and more active political debates.
Although the toolbox was primarily conceived of as a project focusing on the analysis of political conversations on social media, it also has applications for forums and blogs. Most websites now support comments and sites such as the BBC or Daily Mail regularly have hundreds of comments on each article.
Civil society groups also run forums and blogs to connect with their members and supporters. Analyzing the themes of these discussions is often beyond the resources these organizations have. The toolbox could play an important role in helping small not-for-profit organizations, larger media organizations, as well as politicians and policy-makers to understand feedback across a range of communication channels.
Fig. 1. Topic opinion analysis
Fig. 2. Applied Evaluation Model
Here we are referring to the definition from [START_REF] Mitchell | Toward a Theory of Stakeholder Identification and Salience: Defining the Principle of Who and What Really Counts[END_REF]: "Any group or individual who can affect or is affected by the achievement of the organization's objectives". Beyond that, stakeholders within the context of this paper are potential end users of the WeGov toolbox within the field of politics and public administration.
A posting (abbreviated post) is a digital user contribution within online forums, blogs or social networks. Generally a post is a text message. Here a post subsumes seed posts, status updates and comments.
URL: http://www.headsup.org.uk/content/ (Retrieved 13/3/13).
URL: http://en.wikipedia.org/wiki/Landtag_of_North_Rhine-Westphalia (Retrieved 9/3/13).
URL: https://www.facebook.com/AngelaMerkel (Retrieved 13/03/2013).
Acknowledgements
The WeGov project (no. 248512) is funded with support from the European Commission under the SEVENTH FRAMEWORK PROGRAMME THEME ICT 2009.7.3 ICT for Governance and Policy Modelling.
"1004222",
"1004223",
"1004224",
"1004225",
"1004226"
] | [
"208955",
"489112",
"300666",
"300666",
"147310"
] |
01490920 | en | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01490920/file/978-3-642-40358-3_3_Chapter.pdf
Akemi Takeoka Chatfield
Uuf Brajawidagda
Political Will and Strategic Use of YouTube to Advancing Government Transparency: An Analysis of Jakarta Government-Generated YouTube Videos
Keywords: Local government transparency, social media-enabled transparency, YouTube, bureaucratic reform process, political will, net-savvy citizens
Government transparency is critical to cut government bureaucracy and corruption, which diminish political accountability and legitimacy, erode trust in government, and hinder citizen engagement and government performance. Previously, Jakarta's local governments lacked government transparency, holding high-level meetings under a closed-door policy, sustaining a critical and fundamental flaw in policy-making and fueling government inefficiency and corruption. Social media radically increased the speed, reach and transparency of information. Yet, social media-enabled government transparency has not been sufficiently investigated. This research presents the strategic use of YouTube by Jakarta's new local government to "open doors" to high-level political meetings and other reform-oriented government activities for greater local government transparency. We conducted an analysis of 250 government-generated videos on YouTube viewed and liked by Jakarta's 7.8 million net-savvy citizens. We conclude that transformational leadership's political will and strategic use of YouTube are the keys to advancing local government transparency and facilitating citizen engagement with the government's reform initiatives.
Introduction
Government transparency is critically important to cut government bureaucracy and corruption. Government bureaucracy and corruption diminish political accountability and legitimacy [START_REF] Levi | Conceptualizing Legitimacy, Measuring Legitimating Beliefs[END_REF], erode trust in government [START_REF] Kim | Public Trust in Government in Japan and South Korea: Does the Rise of Critical Citizens Matter?[END_REF][START_REF] Levi | Conceptualizing Legitimacy, Measuring Legitimating Beliefs[END_REF], discourage citizen engagement [START_REF] Hetherington | Why Trust Matters: Declining Political Trust and the Demise of American Liberalism[END_REF][START_REF] Levi | Conceptualizing Legitimacy, Measuring Legitimating Beliefs[END_REF] and hinder government performance, including the provision of effective and efficient public services [START_REF] Kim | Public Trust in Government in Japan and South Korea: Does the Rise of Critical Citizens Matter?[END_REF]. Berlin-based Transparency International found that two-thirds of the countries surveyed had a Corruption Perceptions Index (CPI) below 50, indicating a serious corruption problem [28]. With a 2012 CPI of 32, Indonesia faces a very serious corruption problem. Indonesia's central government has identified bureaucratic reform and governance as the top priority of national development in 2013 [START_REF] Prasojo | Bureaucratic Reform Beyond Realization?[END_REF]. While the central government did not specifically mention government transparency, Jakarta's new local government won the 2012 elections by promoting "The New Jakarta" to create a transparent local government that can deliver citizen-centric public services through bureaucratic reforms and corruption eradication. Previously, Jakarta's local governments held high-level political meetings under a closed-door policy, sustaining a critical and fundamental flaw in policy-making and fueling government inefficiency and corruption.
The effects of government transparency have been heavily examined in the literature [START_REF] Grimmelikhuijsen | Do Transparent Government Agencies Strengthen Trust? Information Polity[END_REF][START_REF] Kim | E-Participation, Transparency, and Trust in Local Government[END_REF][START_REF] Meijer | Being Transparent or Spinning the Message? An Experiment into the Effects of Varying Message Content on Trust in Government[END_REF]. E-government research has also emerged to examine the potential benefits of social media to promote government transparency [START_REF] Bertot | Using Icts to Create a Culture of Transparency: E-Government and Social Media as Openness and Anti-Corruption Tools for Societies[END_REF][START_REF] Lee | An Open Government Maturity Model for Social Media-Based Public Engagement[END_REF][START_REF] Picazo-Vela | Understanding Risks, Benefits, and Strategic Alternatives of Social Media Applications in the Public Sector[END_REF]. Yet there has been very little research focusing on how government actually promotes transparency and how government transparency is communicated to citizens for their support in the process of bureaucratic reforms. Social media have radically increased the speed, reach and transparency of information [START_REF] Shirky | The Political Power of Social Media[END_REF]. Yet, the power of social media in government to facilitate and advance government transparency has not been sufficiently investigated. This research, therefore, aims to answer the following central question: How does government use social media tools to advance and communicate local government transparency? In this exploratory empirical study, we conducted a content analysis of 250 YouTube videos generated by Jakarta's local government, which were viewed and highly rated by Jakarta's 7.8 million net-savvy citizens. Based on our analysis results, we conclude that both the new transformational leadership's political will to achieve its reform visions and its strategic use of YouTube as a mechanism for communicating bureaucratic reforms in action are the keys to advancing local government transparency and facilitating citizen engagement with the government's reform initiatives.
The remainder of this paper is structured as follows: Section 2 reviews the literature on bureaucratic reform, government transparency and social media-enabled government transparency. Section 3 presents a brief research context on government corruption in Indonesia and Jakarta"s new local government inaugurated in October, 2012. Section 4 presents our central research question and describes our research methodology on sampling and content analysis of government-generated YouTube videos. Section 5 presents key findings of our analysis of 250 YouTube videos. Section 6 presents our discussion and the conclusion of this study, including our research limitations and future research directions.
Literature Review
Bureaucratic Reform
Government bureaucracy and corruption diminish political accountability and legitimacy [START_REF] Levi | Conceptualizing Legitimacy, Measuring Legitimating Beliefs[END_REF], erode trust in government [START_REF] Kim | Public Trust in Government in Japan and South Korea: Does the Rise of Critical Citizens Matter?[END_REF][START_REF] Levi | Conceptualizing Legitimacy, Measuring Legitimating Beliefs[END_REF], discourage citizen engagement [START_REF] Hetherington | Why Trust Matters: Declining Political Trust and the Demise of American Liberalism[END_REF][START_REF] Levi | Conceptualizing Legitimacy, Measuring Legitimating Beliefs[END_REF] and hinder government performance, including the provision of effective and efficient public services [START_REF] Kim | Public Trust in Government in Japan and South Korea: Does the Rise of Critical Citizens Matter?[END_REF]. When a new government is elected, a key factor influencing its success is the degree to which it can establish legitimacy among its citizens [START_REF] Gibson | Overcoming Apartheid: Can Truth Reconcile a Divided Nation?[END_REF][START_REF] Levi | Conceptualizing Legitimacy, Measuring Legitimating Beliefs[END_REF]. New governments often initiate bureaucratic (or administrative) reforms for legitimacy or other reasons when their previous governments' bureaucracies and corruption have been public issues. Bureaucratic reform in the context of government performance and new public management involves transforming government through the strategic objectives of cutting bureaucratic inefficiency and corruption and improving government responsiveness to citizen demands. Empirical studies show that bureaucratic reforms require institutional transformation, such as new transformational leadership [START_REF] Lee | The Korean Government's Electronic Record Management Reform: The Promise and Perils of Digital Democratization[END_REF], strategic use of e-government (or ICTs in government) and citizen participation [START_REF] Ahn | Politics of E-Government: E-Government and the Political Control of Bureaucracy[END_REF][START_REF] Bertot | Using Icts to Create a Culture of Transparency: E-Government and Social Media as Openness and Anti-Corruption Tools for Societies[END_REF][START_REF] Lee | The Korean Government's Electronic Record Management Reform: The Promise and Perils of Digital Democratization[END_REF]. The literature suggests that bureaucratic reform initiatives need institutional, technological and/or social mechanisms for producing desired outcomes.
Government Transparency
In recent years there has been an increased interest in the institutional, social and economic determinants and the effects of government transparency. While conceptions of government transparency and their foci of study are diverse, government transparency encompasses policy-making transparency, openness of the political process and transparency of public service program priorities [START_REF] Grigorescu | International Organizations and Government Transparency: Linking the International and Domestic Realms[END_REF][START_REF] Kim | An Institutional Analysis of an E-Government System for Anti-Corruption: The Case of Open[END_REF][START_REF] Sol | The Institutional, Economic and Social Determinants of Local Government Transparency[END_REF][START_REF] Von Haldenwang | Electronic Government (E-Government) and Development[END_REF]. Specifically, empirical research on the effects of public service reform found that the institution's act of adopting administrative reform by itself produced a beneficial impact on government transparency in new Eastern European democracies [START_REF] Neshkova | The Effectiveness of Administrative Reform in New Democracies[END_REF]. Survey studies in East Central Europe explained variation in local government transparency as a function of institutions, as opposed to socioeconomic development or locality size [START_REF] Dowley | Local Government Transparency in East Central Europe[END_REF]. In another cross-national study, regression analysis found that telecommunications infrastructure and a free press influenced perceptions of government transparency in a positive and significant way [START_REF] Relly | Perceptions of Transparency of Government Policymaking: A Cross-National Study[END_REF]. Similarly, citizens' online information-seeking was positively associated with their increased support for government transparency [START_REF] Cuillier | Internet Information-Seeking and Its Relation to Support for Access to Government Records[END_REF], and citizens' e-participation seemed to be positively related to their assessment of local government transparency and their trust in government [START_REF] Kim | E-Participation, Transparency, and Trust in Local Government[END_REF]. Despite the increased interest in government transparency, however, there has been very little research focusing on the mechanisms for advancing and communicating government transparency to stakeholders. Finally, studies on local government transparency are still very limited [START_REF] Sol | The Institutional, Economic and Social Determinants of Local Government Transparency[END_REF].
Social Media-enabled Government Transparency
Open government policies, such as the US Open Government Directive [29], aim to create a new culture of openness in government for achieving greater government transparency, citizen participation and inter-agency collaboration through social media use in government. The policies acknowledge the rapid technological changes in societies across the globe. Social media, with the proliferation of multimedia data as well as multimedia mobile devices, including laptops, tablets, iPods and smart phones, have become increasingly integrated into citizens' daily lives. In this dynamically changing information environment, "the political power of social media" [START_REF] Shirky | The Political Power of Social Media[END_REF] in providing new forms of organizing for active citizen political engagement was demonstrated during recent political upheavals that, for example, toppled dictatorial regimes in the Arab world. Social media in the hands of networked citizens who have no hierarchical organization facilitated the leaderless "social media revolution" in the turbulent aftermath of the 2009 Iranian Presidential election [START_REF] Chatfield | Interactive Effects of Networked Publics and Social Media on Transforming the Public Sphere: A Survey of Iran's Leaderless 'Social Media Revolution[END_REF].
With regard to social media-enabled government transparency, e-government research on social media in government is emerging but still new. One of its first studies examined the ways in which social media and advanced ICTs were integrated into collaborative e-government initiatives at the state government level to facilitate greater government transparency [START_REF] Bertot | Promoting Transparency and Accountability through Icts, Social Media, and Collaborative E-Government[END_REF]. However, the maturity of social media-mediated local government transparency is still at a very early stage [START_REF] Bonsón | Local E-Government 2.0: Social Media and Corporate Transparency in Municipalities[END_REF]. There are institutional barriers to implementing a culture of government transparency. Not only the effective use of social media [START_REF] Hetherington | Why Trust Matters: Declining Political Trust and the Demise of American Liberalism[END_REF] but also the political will of government leadership must be mobilized to overcome these challenges [START_REF] Bertot | Using Icts to Create a Culture of Transparency: E-Government and Social Media as Openness and Anti-Corruption Tools for Societies[END_REF].
Research Context: Jakarta's New Local Government
Government Corruption
Like many developing countries, Indonesia has been fighting a serious government corruption problem. It is public knowledge that politicians and public administrators have been engaging in widespread corrupt practices for personal gain. Berlin-based Transparency International, a non-profit organization for the global coalition against corruption, uses the Corruption Perceptions Index (CPI) to measure the level of government corruption in a given country. The CPI measures countries on a scale from 0 for highly corrupt to 100 for very clean. While no country has a perfect score, two-thirds of the countries surveyed have a CPI below 50, which indicates a serious corruption problem. Indonesia scored 32 on the scale in 2012 [28], indicating a very serious corruption problem.
"The New Jakarta": Transparency, Reforms and Corruption Eradication
In terms of government structure, the province is the highest level of local government hierarchies in Indonesia. Provinces are broken down further into regencies and cities. Jakarta as a province is officially known as the Special Capital Region of Jakarta, which geographically encompasses a regency and five cities. However, public services in the Special Capital Region of Jakarta are centralized, with government agencies such as public housing and public transportation providing public services to all the people in the Special Capital Region of Jakarta. With its metropolitan population of over 28 million, Jakarta is not only the capital city but also the largest city and primary port city in Indonesia. Jakarta is the third largest city in the world based on metropolitan population [START_REF]American Live Wire: Top 10 Largest Cities in the World 2013 -Metropolitan Populations[END_REF]. As the economic center of Indonesia, Jakarta generates approximately 70% of Indonesia's capital flows. On September 29, 2012, Joko Widodo and Basuki Tjahaja Purnama were elected as the Governor and the Deputy Governor, respectively, of the local government in Jakarta. The Governor won the second-round voting despite his absolute lack of experience in either national or state-level politics. Traditionally, the Governors of the Special Capital Region of Jakarta descended from the Indonesian military or high politics. The Governor had been the Mayor of Surakarta (a small city on the island of Java), whereas the Deputy Governor had been the head of Belitung Regency (a small island in the Sumatra region). They (and the coalition of two political parties) won the second-round election over the incumbent, who had built his career in Jakarta politics since 1987, by promoting their shared reform visions, "The New Jakarta" ("Jakarta Baru" in Indonesian), during the gubernatorial election campaign. It promised the provision of citizen-centric public services through greater local government transparency and cuts in bureaucracy to improve government performance [START_REF] Baru | The New Jakarta Vision and Mission[END_REF]. It also promised that the Governor would spend one hour in his office and the rest of the time on site visits to identify bureaucratic inefficiency and corruption problems, provide timely decision-making and closely monitor government performance and quality.
Research Methodology
This exploratory empirical research was undertaken in the context of Jakarta's new local government to examine the following central question: How does government use social media tools to advance and communicate local government transparency? As of March 17, 2013, Jakarta's new local government had uploaded a total of 473 government-generated videos on YouTube since its inauguration on October 15, 2012. YouTube is a video-sharing social media channel operated as a Google subsidiary since 2006. While unregistered users can view videos, registered users (individuals, media corporations and other organizations) can upload, view and share a wide variety of user-generated video content [31]. In this research we collected videos generated by Jakarta's new local government and uploaded on YouTube through its YouTube account "PemprovDKI". The period of data collection was limited to the 80 days from the inauguration day to January 3, 2013. The crawler retrieved information on both the number of uploaded government videos and the viewers of the government-generated YouTube videos through the YouTube API.
During the data collection period, 266 government-generated videos were uploaded on YouTube, all of which used the same title format: [upload date] [actor] [activity] [part]. The [part] field was occasionally used to split long videos into separate parts. In this research, each part of a video is treated as a single video because it shows its own viewer-generated comments, rating and number of viewers. Four videos on the inauguration and twelve videos showing other actors were excluded from analysis. Using the information contained in the video title, we selected a sample of 250 videos which showed the Governor and/or the Deputy Governor as actor(s) for analysis, because they are the two key drivers of "The New Jakarta" reform visions. We then classified the political activities shown in the 250 YouTube videos into categories. For each of the categories, we collected statistics on videos and viewers. Of the political activity categories, we conducted a content analysis of the "High-Level Political Meetings" category videos to identify the government's key political issues.
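To make the classification step concrete, the sketch below parses the fixed title format and aggregates per-activity statistics. It is illustrative only: the regular expression assumes each field is bracketed, and the two sample titles with their view and length figures (borrowed from the Table 1 averages) are invented stand-ins, not actual video titles.

```python
# Sketch of parsing the "[upload date] [actor] [activity] [part]" title format
# and aggregating per-activity statistics. Sample titles are invented.
import re
from collections import defaultdict
from statistics import mean

TITLE = re.compile(
    r"\[(?P<date>[^\]]+)\]\s*\[(?P<actor>[^\]]+)\]\s*"
    r"\[(?P<activity>[^\]]+)\](?:\s*\[(?P<part>[^\]]+)\])?"
)

videos = [
    {"title": "[2012-10-17] [Governor] [High-Level Political Meeting] [1]",
     "views": 48773, "seconds": 3194},
    {"title": "[2012-10-18] [Governor] [Site Visit]",
     "views": 21022, "seconds": 1183},
]

by_activity = defaultdict(list)
for video in videos:
    match = TITLE.match(video["title"])
    if match:  # each [part] of a long video counts as a video of its own
        by_activity[match.group("activity")].append(video)

for activity, vids in sorted(by_activity.items()):
    print(f"{activity}: {len(vids)} video(s), "
          f"avg views {mean(v['views'] for v in vids):.0f}, "
          f"avg length {mean(v['seconds'] for v in vids):.0f}s")
```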
Analysis Results
New Local Government's Strategic Use of YouTube
The new leaders in Jakarta demonstrated strong political will to advance their shared reform visions by engaging in rapid-fire, energetic political activities: high-level political meetings with internal and external stakeholders, and site visits with local citizens and government officials alike for fact-finding in various parts of the city. They then made strategic use of the YouTube video-sharing social media channel to capture and communicate their reform-oriented political activities to Jakarta's net-savvy citizens. Only two days after the Governor and the Deputy Governor were inaugurated, government-generated videos were uploaded on YouTube. All the videos show the Special Capital Region of Jakarta logo at the upper-right corner of the video frames. Through the strategic use of YouTube, the local government aimed to "open doors" to its net-savvy citizens, who could view the YouTube videos and assess the new leadership's efforts to make government policy-making, governance and political actions transparent. The videos were shot, accompanied by narration in the form of audio and/or captions, and uploaded without any video frame editing. On the one hand, the absence of video frame editing can be viewed as a reflection of the new leadership's political will to demonstrate government transparency. On the other hand, the videos showing idle activities made the duration of the YouTube videos longer than necessary.
Reform-Oriented Political Activities Captured in YouTube Videos
On average, 3.1 videos per day were uploaded during our data collection period of 80 days. Political activities captured in the sample of 250 videos on YouTube were classified into seven categories. High-Level Political Meetings refer to high-level meetings with internal and/or external stakeholders (e.g. policy makers, politicians, decision makers and senior public administrators) to discuss key political issues of interest to the public from the perspective of "The New Jakarta" reform visions. Community Engagement activities aim to promote informal social interactions and exchanges between the Governor (or, less frequently, the Deputy Governor) and local citizens through community events. Site Visits are defined as direct observation activities for face-to-face fact-finding with citizens and government officials alike, engaged in by the Governor (or on rare occasions by the Deputy Governor) outside his Executive Office. Press/Media Conferences refer to news media interviews given by either the Governor or the Deputy Governor. Ceremonies include activities of the government officials who represent the local government in sponsoring an official ceremonious event. Public Speeches include invited keynote speeches delivered by the Governor or the Deputy Governor at seminars and workshops. Making/Hosting Honorary Visits include official gubernatorial visits to a place or an event to represent the government in interacting with other parties or agencies, as well as official gubernatorial receptions for other parties or agencies. Of the seven categories, we consider High-Level Political Meetings, Community Engagement and Site Visits clearly as bureaucratic reform-oriented political activities, whereas Ceremonies and Making/Hosting Honorary Visits are not reform-oriented in nature. Press/Media Conferences and Public Speeches are mixed in terms of the reform-oriented activities that were briefed or explained during the media interviews.
Our analysis results on the categories of political activities in the 250 YouTube videos are shown in Figure 1. Overall, the top four activities shown in the YouTube videos are: High-Level Political Meetings (90/250 videos, 36%), Community Engagement (77/250 videos, 31%), Site Visits (33/250 videos, 13%) and Press/Media Conferences (33/250 videos, 13%). On the one hand, the Governor has still engaged in traditional gubernatorial activities such as Ceremonies and Making/Hosting Honorary Visits. However, these activities accounted for only 5% of the activities shown in the 250 videos. These traditional gubernatorial activities do not add value towards advancing the new local government's strategic visions of bureaucratic reform and corruption eradication.
Fig. 1. Activities Captured in the Government-Generated Videos on YouTube
On the other hand, the Governor has engaged in new and radically different political activities. YouTube-hosted "open doors" high-level political meetings were held in which he raised pointed questions about some of the unrealistic budget proposals, corruption issues and political reforms. The Governor made site visits to Jakarta's "Marunda Flat", apartment building blocks for the poor with a recurring problem of very low occupancy rates among the poor. The Governor made site visits to the local government's Corruption Eradication Commission to call for its involvement in monitoring the government budget misuse problem and in implementing budget transparency. The Governor visited local communities, participated in social events for community engagement, and listened to the citizens' views and concerns.
Statistics on Government-Generated YouTube Videos
Descriptive statistics on the 250 government-generated videos on YouTube are shown in Table 1 below. For each category of political activities captured in the videos, Table 1 shows the number of YouTube videos (and percentage), the average length of a YouTube video (in seconds), the average number of viewers, the average number of viewer-generated comments per video and the average viewer-generated rating of a video. Our analysis results shown in the second column of Table 1 were graphically presented in Figure 1 above. The third column of Table 1 shows that the longest average video length (3,194 seconds) is found in the High-Level Political Meetings category; for comparison, an analysis of 2.5 million unique YouTube videos found that the average length of a YouTube video was 4 minutes and 12 seconds [START_REF]Sysomos: Inside Youtube Videos[END_REF]. The remaining columns of Table 1 present statistics on the video viewers, which will be discussed in the next section.
Statistics on Net-Savvy Viewers of the Government YouTube Videos
The 250 government-generated YouTube videos attracted a total of 7,815,549 viewers during the 80-day data collection period of this research. Descriptive statistics on Jakarta's net-savvy citizens who viewed the government-generated YouTube videos are shown in columns 4-6 of Table 1. Given the vast array of other user-generated videos available on YouTube for choice, we argue that the average number of viewers for a category may be used as a proxy for measuring the level of citizen interest and participation in the category of political activities captured by the government-generated videos on YouTube. We found that the High-Level Political Meetings, Community Engagement and Site Visits categories attracted the highest (48,773), the second highest (29,161) and the fourth highest (21,022) average numbers of viewers. As we discussed earlier, videos in these three categories show political activities that are bureaucratic reform-oriented. We also found that the Public Speeches category received the third highest average number of viewers (24,296), even though it is not reform-oriented and the number of videos was only 5. One of the videos showed that the Governor, who had been criticized by the opposition for his lack of national and state-level public administration and political experience, represented the Office of the Governor of the Jakarta Capital Region when he interacted competently and confidently with Singapore's ambassadors and diplomats. Jakarta's net-savvy citizens must have liked these videos.
An analysis of the average number of viewer-generated comments per video showed that the High-Level Political Meetings and Community Engagement categories generated the second (310) and the third (268) highest numbers of comments per video from the viewers. These two categories are bureaucratic reform-oriented. In contrast, one of the non-reform categories, Public Speeches, attracted the highest average number of viewer-generated comments per video (379). Finally, the YouTube video-sharing website provides users with a video rating function. All categories of political activities received an excellent average rating (out of a maximum of 5.0), ranging from 4.94 for the Making/Hosting Honorary Visits category to 4.98 for the Community Engagement and Ceremonies categories. The overall high ratings mean that the viewers would recommend that their friends view the government-generated YouTube videos.
Issues Captured in the High-Level Political Meetings Category
We performed a content analysis of the 90 videos classified into the High-Level Political Meetings category to identify the key issues captured and communicated by the local government. Figure 2 shows our analysis results. We found four categories of political issues: Bureaucratic Reform and Governance, Budget Transparency, Investment Climate Improvement and Corruption Eradication. There are 47 videos (or 52%) in which bureaucratic reform and governance issues were discussed. A prime example is a video in which the Governor told the Mayors and the Heads of Districts and Sub-Districts at the meeting about the need to transform the ways in which they interact with local citizens, by adopting the new mindset of a public servant, away from their bureaucratic mindset. In another video, the Deputy Governor had a series of meetings with several government agencies in healthcare services about radically improving healthcare access for the poor through the new "Jakarta Health Card" program. The Deputy Governor discussed a coherent and fair governance structure and processes for enhancing agency readiness for the Jakarta Health Card program, which was one of his campaign promises. The governance structure and processes for better inter-agency cooperation seem to reflect the new local government leadership's political will to better respond to citizen needs.
Fig. 2. Issues in the High-Level Political Meetings category videos
There are 33 videos (37%) in which the 2013 budget transparency issues were discussed. A prime example is a video which was viewed by 1,470,188 viewers. In this video the Deputy Governor discussed 25% deep cuts in the 2013 budget proposed by the Department of Civil Works and others. In other videos, the Governor and the Deputy Governor discussed the priority programs for the 2013 budget at the parliament. These videos made the political decision-making process transparent regarding the priority programs and the budget allocation. There are 6 videos (or 7%) in which investment climate issues were discussed. A prime example is a video in which the Deputy Governor met with labor union representatives during their street demonstration on October 24, 2012. The Deputy Governor discussed their demand for a 40% regional minimum wage increase, which generated strong responses from 90 companies in Jakarta, indicating their intent to move their investments out of Jakarta. Other videos show the Deputy Governor's meetings with businesses to create a new investment climate of transparency through a new public service office, "One Stop Service," to facilitate new business investment in Jakarta. Finally, there are 4 videos (4%) in the High-Level Political Meetings category in which corruption eradication issues were discussed. A prime example is a video in which the Governor visited the Corruption Eradication Commission (KPK) to discuss new measures for identifying potential government corruption based on the KPK's analysis of the proposed 2013 budget. The Governor also discussed the KPK's recommendations for a better procurement process, a better fraud reporting mechanism and enhanced public information access regarding government fraud cases. Another video shows the Governor's meeting with the Audit Board of the Republic of Indonesia (BPK) regarding the implementation of a new "e-audit" system for the local government. The BPK argued for the new "e-audit" as an ICT tool for detecting budget misuse or corruption.
Discussion and Conclusion
In this exploratory empirical research, we have addressed the central research question: How does government use social media tools to advance and communicate local government transparency? Our analysis results show that the local government-generated YouTube videos captured and dynamically communicated the new government's central message of greater local government transparency. Much of the leaders' political activity captured in the videos adds value towards advancing their shared reform visions of "The New Jakarta." This central message seemed to be well received, as the analysis results show that a total of 7.8 million net-savvy citizens viewed and highly rated the videos during our data collection period of 80 days. The viewer-generated comments and ratings suggest that the bureaucratic reform process and the political activities necessary to achieve the reforms were made visible, transparent and comprehensible to the net-savvy citizens through the visualization power of YouTube. However, we argue that social media tools by themselves are not sufficient to demonstrate local government transparency. Jakarta's new local government leadership signalled and communicated strong political will to fulfil their "New Jakarta" visions through its rapid-fire reform-oriented political activities. Leadership's strong political will is important to align their political activities with the shared reform visions they promised during the elections campaign. Leaders also play a critical role in governing the government's strategic communication: what information is produced and communicated to citizens in a manner which is in alignment with their reform visions. The inclusion or exclusion of certain attributes in content has important implications for signaling government transparency to stakeholders and building public trust in government. This new social media-enabled government transparency is radically different from the traditional Jakarta politics where high-level political meetings were held under a closed-door policy and the Governors engaged in non-value-adding political activities.
Based on our analysis results, we conclude that two enabling factors are important to increase transparency in government. Transformational leadership's strong political will to advance its reform visions and YouTube tools for capturing and dynamically communicating reform-oriented political activities are the keys to advancing local government transparency and gaining the support of net-savvy citizens. This exploratory empirical research contributes to the emerging literature on social media-enabled local government transparency. As discussed, very little has been written in the political science and public administration literatures about effective mechanisms for advancing and communicating government transparency to stakeholders. In particular, studies on local government transparency are still very limited [START_REF] Sol | The Institutional, Economic and Social Determinants of Local Government Transparency[END_REF]. E-government research on social media-enabled government transparency is emerging but still new [START_REF] Bertot | Using Icts to Create a Culture of Transparency: E-Government and Social Media as Openness and Anti-Corruption Tools for Societies[END_REF][START_REF] Bertot | Promoting Transparency and Accountability through Icts, Social Media, and Collaborative E-Government[END_REF][START_REF] Bonsón | Local E-Government 2.0: Social Media and Corporate Transparency in Municipalities[END_REF][START_REF] Hetherington | Why Trust Matters: Declining Political Trust and the Demise of American Liberalism[END_REF]. In this exploratory empirical research, we have addressed this research gap in understanding how governments are using social media to promote transparency and increase citizens' awareness and understanding of their reform activities. Our research limitations include our research attention on transformational leadership behavior as well as our limited data collection period of 80 days. Our future research directions include a longitudinal study of government-generated videos on YouTube and viewer-generated comments and ratings to observe potential changes in communicating local government transparency over a period of time.
Table 1. Statistics on the 250 Government-Generated YouTube Videos

| Activities | Number of YouTube Videos (%) | Average Length of a Video (in seconds) | Avg. Number of Viewers | Avg. Viewer-Generated Comments per Video | Avg. Viewer-Generated Rating of a Video |
| High-Level Political Meetings | 90 (36%) | 3,194 | 48,773 | 310 | 4.96 |
| Community Engagement | 77 (31%) | 2,195 | 29,161 | 268 | 4.98 |
| Site Visits | 33 (13%) | 1,183 | 21,022 | 212 | 4.95 |
| Press/Media Conferences | 33 (13%) | 485 | 17,044 | 222 | 4.97 |
| Ceremonies | 8 (3%) | 2,513 | 16,214 | 130 | 4.98 |
| Public Speeches | 5 (2%) | 2,600 | 24,296 | 379 | 4.97 |
| Making/Hosting Honorary Visits | 4 (2%) | 1,134 | 15,936 | 226 | 4.94 |
| Total | 250 (100%) | | 7,815,549 (total) | | |
"1004229",
"1004230"
] | [
"313498",
"313498"
] |
01490921 | en | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01490921/file/978-3-642-40358-3_4_Chapter.pdf
Anneke Zuiderwijk
Marijn Janssen
A Coordination Theory Perspective to Improve the Use of Open Data in Policy-Making
Keywords: open data, coordination, coordination theory, coordination mechanisms, challenges, improvement, open data process
Introduction
The open data process consists of many activities that are performed by different stakeholders [START_REF] Zhang | Exploring stakeholders' expectations of the benefits and barriers of e-government knowledge sharing[END_REF][START_REF] Braunschweig | The State of Open Data. Limits of Current Open Data Platforms[END_REF]. Following [START_REF] Zuiderwijk | Socio-technical impediments of open data[END_REF] and [START_REF] Zuiderwijk | The potential of metadata for linked open data and its value for users and publishers[END_REF], we define the open data process as all activities between the moment that data are starting to be created and the moment that data are being discussed, including the activities to publish, find, use and discuss data. The intention of open data publication is to make data available to have them reused by external users, in this way profiting from the wisdom of the crowd, and subsequently to support and improve policy-making and decision-making by discussing data and providing feedback to open data providers. However, as stated by Braunschweig et al., the activities of the open data community are largely uncoordinated [START_REF] Braunschweig | The State of Open Data. Limits of Current Open Data Platforms[END_REF]. This statement is based on a survey of open data platforms, focusing on the technical aspects of using open data, but not on other parts of the open data process. Open data publishers are often unaware of what is done with the data, which value they can create and how the data can be used for improving their own policies and decisions. Open data publishers and users are often not aware of each other's needs and activities. For instance, many open data providers are primarily focused on making data available and do not know which format is preferred by users or how the way that they publish data can stimulate the use of open data. Stimulating the use of open data is an important factor in creating the intended effects [START_REF] Zuiderwijk | Open data policies, their implementation and impact: A comparison framework[END_REF]. Coordination is important, because it may lead to increased understanding of the open data process and could result in concerted action [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF], improved performance [START_REF] Arshinder | Supply chain coordination: Perspectives, empirical studies and research directions[END_REF][START_REF] Gopal | Coordination and performance in global software service delivery: The vendor's perspective[END_REF] and improved policies.
In addition, it can help to accomplish advantages, such as increased transparency [START_REF] Bertot | Using ICTs to create a culture of transparency: Egovernment and social media as openness and anti-corruption tools for societies[END_REF][START_REF] Janssen | The influence of the PSI directive on open government data: An overview of recent developments[END_REF], economic growth and innovation [START_REF] Janssen | The influence of the PSI directive on open government data: An overview of recent developments[END_REF][START_REF] Dawes | Stewardship and usefulness: Policy principles for information-based transparency[END_REF], empowerment of open data users [START_REF] Bertot | Using ICTs to create a culture of transparency: Egovernment and social media as openness and anti-corruption tools for societies[END_REF][START_REF] Geiger | Open Government and (Linked) (Open) (Government) (Data)[END_REF] and improvement of policy and decision making [START_REF] Bertot | Using ICTs to create a culture of transparency: Egovernment and social media as openness and anti-corruption tools for societies[END_REF][START_REF] Janssen | The influence of the PSI directive on open government data: An overview of recent developments[END_REF].
This research aims 1) to determine which coordination needs and challenges exist in the open data process and 2) to investigate how coordination in the open data process can be improved. A literature review is performed to examine coordination theory and to define coordination. On the basis of concepts derived from the literature and an analysis of interdependencies between activities in the open data process, coordination needs and challenges are identified. Finally, coordination mechanisms are described to deal with the coordination challenges and to improve coordination in the open data process.
Coordination theory
In this section background information is given about how coordination can be defined (Section 2.1) and which coordination mechanisms are identified in the literature to improve coordination (Section 2.2).
Coordination
Coordination theory provides an approach to the study of processes [START_REF] Crowston | A coordination theory approach to organizational process design[END_REF] and has been studied in numerous disciplines, such as computer science, sociology, political science and management science [START_REF] Malone | What is coordination theory and how can it help design cooperative work systems?[END_REF]. Although we all have an intuitive sense of what the word 'coordination' means, debate has been going on for years about what it really is. According to Van de Ven, Delbecq and Koenig [15, p. 322], "coordination means integrating or linking together different parts of an organization to accomplish a collective set of tasks." Heath and Staudenmayer [16, p. 156] state that coordination in organizations refers to "organizing individuals so that their actions are aligned". In line with these definitions, Thompson [17, p. 37] postulates that coordination means that "the elements in the system are somehow brought into an alignment, considered and made to act together". From this perspective, the division of labour in organizations leads to the need for coordination, as interdependencies between tasks and the individuals performing them need to be coordinated [START_REF] Heath | Coordination neglect: How lay theories of organizing complicate coordination in organizations[END_REF]. For this reason, Malone and Crowston [14, p. 361, 18] define coordination as "the act of managing interdependencies between activities performed to achieve a goal". In line with this, Gosain, Lee and Kim [19, p. 372] define coordination as "a process of linking together different activities of organizations to accomplish a common goal". Coordination is thus needed to map goals to activities, relate activities performed by different actors and to manage the interdependencies between these activities [START_REF] Malone | What is coordination theory and how can it help design cooperative work systems?[END_REF][START_REF] Malone | The Interdisciplinary Study of Coordination[END_REF].
In this research, interdependence is viewed as the extent to which activities in the open data process require the elements, such as the actors, systems and divisions, to work together [START_REF] Klievink | Unraveling interdependence. Coordinating Public-Private Service Networks[END_REF][START_REF] Cheng | Interdependence and coordination in organizations: A role-system analysis[END_REF]. The management of interdependencies between activities could result in the alignment of actions of stakeholders in the open data process and in this way result in coordination.
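To make the notion of managing interdependencies between activities concrete, the following minimal sketch models a handful of open data activities together with the activities they depend on, and derives one feasible order of execution. The activity names and the dependency structure are illustrative assumptions rather than an established model, and a topological ordering is of course only a very simple stand-in for a coordination mechanism.

```python
from graphlib import TopologicalSorter

# Illustrative open data activities, mapped to the activities they
# depend on: an activity can only start once its predecessors are done.
dependencies = {
    "publish data": {"create data"},
    "find data": {"publish data"},
    "use data": {"find data"},
    "discuss data and give feedback": {"use data"},
    "improve policy": {"discuss data and give feedback"},
}

# Managing these interdependencies (coordination in Malone and
# Crowston's sense) at minimum requires a feasible sequencing of the
# activities performed by the different stakeholders.
order = TopologicalSorter(dependencies).static_order()
print(" -> ".join(order))
```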
Coordination mechanisms
Coordination, i.e. the management of interdependencies between activities, can be achieved by coordination mechanisms. On the basis of the work of March and Simon [START_REF] March | Organizations[END_REF], Thompson [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF] expounds three types of coordination mechanisms. First, coordination by standardization refers to the development of routines or rules, which constrain action of each organizational part or position. This type of coordination requires an internally consistent set of rules and a stable and repetitive situation to be coordinated [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF]. Second, coordination by plan requires a lower degree of stability and routines than coordination by standardization and refers to the creation of schedules for interdependent organizational parts. These schedules may govern their actions and they are appropriate for dynamic situations, such as changing tasks [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF][START_REF] March | Organizations[END_REF]. Third, coordination by mutual adjustment is suitable for reciprocal interdependence. This type of coordination needs most communication and decisions, as it "involves the transmission of new information during the process of action" [p. 56]. Coordination by mutual adjustment is possible for variable and unpredictable situations [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF]. March and Simon [START_REF] March | Organizations[END_REF] refer to this as coordination by feedback. Also based on March and Simon's [START_REF] March | Organizations[END_REF] work, Gosain, Malhotra and El Sawy [START_REF] Gosain | Coordinating for Flexibility in e-Business Supply Chains[END_REF] argue that in an inter-enterprise setting, coordination outcomes can be achieved by combining advanced structuring and a dynamic adjustment approach. Advanced structuring refers to structuring information flows and interconnected processes that exist between organizations before they take place (i.e. in advance). The advantage of this approach is that the effort related to adjusting to changing environments is reduced. Advanced structuring makes use of 'loose coupling', which means that certain elements of systems are linked (i.e. "coupled") to attain some degree of structuring, while spontaneous change may occur, leading to a certain degree of independence (i.e. "looseness"). Gosain et al. (2004) identified three aspects that advance the 'coupling' and 'looseness' in the advanced structuring approach. First, standardization of process and content interfaces concerns "explicit or implicit agreement on common specifications for information exchange formats, data repositories, and processing tasks at the interfaces between interacting supply chain partners" [23, p. 14]. Second, modular interconnected processes, which means "the breaking up of complex processes into sub processes (activities) that are performed by different organizations independently (such that sub processes occur through overlapping phases, or better still, fully simultaneously) with clearly specified interlinked outputs" [p. 16]. Third, structured data connectivity refers to "the ability to exchange structured transaction data and content with another enterprise in electronic form" [p. 17].
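As a hypothetical illustration of what standardization of content interfaces and structured data connectivity could look like for open data exchange, the sketch below checks a dataset description against an agreed interface before exchange. The required field names loosely follow the W3C DCAT vocabulary for data catalogues, but the interface itself is an assumption made for illustration only.

```python
# An assumed, agreed content interface for dataset exchange; the field
# names loosely follow the W3C DCAT vocabulary and are illustrative.
REQUIRED_FIELDS = {"title", "description", "publisher", "license",
                   "distributions"}

def conforms_to_interface(record: dict) -> bool:
    """Return True if a dataset description carries the agreed fields,
    so that partner systems can process it without ad-hoc adjustment
    (coordination by standardization)."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    # Structured data connectivity: each distribution must state a
    # download URL and a machine-readable media type.
    return all({"download_url", "media_type"} <= set(dist)
               for dist in record["distributions"])

dataset = {
    "title": "Air quality measurements",
    "description": "Hourly NO2 readings per monitoring station.",
    "publisher": "Example municipality",
    "license": "public-domain",
    "distributions": [{"download_url": "https://example.org/airq.csv",
                       "media_type": "text/csv"}],
}
print(conforms_to_interface(dataset))  # True
```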
The dynamic adjustment approach refers to effectively and quickly reconfiguring interorganizational processes, so that these processes become appropriate for a changed organizational environment. The reconfiguration is supported through (IT) learning and adaptation [START_REF] Gosain | Coordinating for Flexibility in e-Business Supply Chains[END_REF]. Aspects that advance the dynamic adjustment approach are 1) the breadth of information shared with supply chain partners, 2) the quality of information shared with supply chain partners and 3) deep coordination-related knowledge. Breadth of shared information is required to react to unexpected change, while information of high quality is needed to make effective and efficient inferences. Deep coordination-related knowledge consists of knowledge of partner competencies, process and content, organization memory of past change episodes and understanding of causal linkages [START_REF] Gosain | Coordinating for Flexibility in e-Business Supply Chains[END_REF].
Coordination needs and challenges in the open data process
In the previous section it was stated that coordination refers to the management of interdependencies between activities [START_REF] Malone | What is coordination theory and how can it help design cooperative work systems?[END_REF][START_REF] Malone | The Interdisciplinary Study of Coordination[END_REF]. Crowston [START_REF] Crowston | Chapter 3: A Taxonomy of Organizational Dependencies and Coordination Mechanisms[END_REF] argues that "to analyze an organizational process, it is important to identify the dependencies that arise and the coordination mechanisms that are used to manage those dependencies" (p. 86). In this section, we elaborate on the need to coordinate the open data process (Section 3.1) and analyze interdependencies to determine which coordination challenges currently exist (Section 3.2).
Coordination needs
Project and organization complexities, interdependencies in work activities and uncertainty in the environment of the organization lead to a need for coordination [START_REF] Gosain | The management of cross-functional inter-dependencies in ERP implementations: emergent coordination patterns[END_REF].
Realizing coordination in the open data process is important, as coordinating by tightly coupling relationships provides the advantage of jointly exploiting the capabilities of process partners [START_REF] Dyer | The relational view: Cooperative strategy and sources of interorganizational competitive advantage[END_REF][START_REF] Saraf | IS Application capabilities and relational value in interfirm partnerships[END_REF]. For instance, open data providers can use the wisdom of open data users to discuss their data. Moreover, coordination may lead to an increased understanding of the open data process and could result in concerted action [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF] and improved performance [START_REF] Arshinder | Supply chain coordination: Perspectives, empirical studies and research directions[END_REF][START_REF] Gopal | Coordination and performance in global software service delivery: The vendor's perspective[END_REF].
In the open data process, concerted action of the actors could deal with the complexities, interdependencies and uncertainties and stimulate the realization of benefits of the open data process [START_REF] Janssen | Benefits, Adoption Barriers and Myths of Open Data and Open Government[END_REF]. For instance, to achieve economic growth, providers of open data should take into account the needs of open data users, such as needs for certain data formats or metadata, and they should actively discuss those needs, to ensure that open data will actually be used. Furthermore, to improve public policies and policy and decision making, open data users can communicate with open data providers about the way that they used open data and recommend policy improvements. Thus, there is a need for coordination in the open data process.
Coordination challenges
As there is a need for coordination in the open data process, it is important to identify the coordination challenges that currently exist. We define a coordination challenge as a situation in which a goal is defined, but coordination (i.e. the act of managing interdependencies between activities performed to achieve this goal) is inappropriate. We refer to Figure 1 to show in which part of the open data process the challenges exist. Each number in the figure refers to a challenge that is described thereafter.
As Figure 1 shows, the current open data process is characterized by four main activities. First, data are created by governmental and non-governmental organizations. Second, these organizations can decide to publish the created data on the internet. Third, the published data can be found by (potential) users, such as researchers and citizens. And, fourth, the found data can be used.
Fig. 1. Coordination challenges in the open data process.
Inappropriate regulatory environment (challenge 1 in Figure 1)
Actors: Open data legislators, providers and users. Activity goal: To publish data in such a way that they can be reused by external users, in this way profiting from the wisdom of the crowd, and subsequently to support and improve policy-making and decision-making by discussing data and providing feedback to open data providers. Interdependence: The way that the open data provider makes data available highly influences the way that external users can make use of the data. As a consequence, this influences the type of feedback that open data providers can obtain from the crowd and the way that they can apply this feedback to their own processes, such as policy-making and decision-making. Coordination mechanisms: There are only limited coordination mechanisms in the form of legal frameworks, policies and guidelines. Although coordination by plan is applied to the open data process in the form of legislation, open data policies and organizational guidelines, these mechanisms provide little improvement of coordination in the open data process. There are many differences among open data legislation and policies, for instance with regard to policy objectives and policy instruments, and there are many opportunities for improving open data policies [START_REF] Zuiderwijk | Open data policies, their implementation and impact: A comparison framework[END_REF]. Additionally, the legal frameworks, policies and guidelines do not refer to standards or plans that reflect what users need or how the data provider can obtain feedback from its own data.
Fragmentation of open data (challenge 2 in Figure 1)
Actor: Open data providers. Activity goal: Open data providers aim to publish data in such a way that the data can be found and reused easily, so that the advantages of open data can be realized. Interdependence: The ease of finding open data influences the way that open data can be reused effectively. When the data cannot be found easily, they are less likely to be reused and the benefits of open data are not fully realized. Coordination mechanisms: Open data are fragmented, which makes it difficult to find them. One reason for this is that data are published via various open data platforms. Even though some catalogues exist, describing which data can be found on different open data platforms, these catalogues are incomplete and usually not linked to other catalogues. In addition, there is no overview of who creates which data. As a consequence, open data users often do not know where they can find the data that they want to use.
Unclear boundaries of responsibilities (challenge 3 in Figure 1)
Actor: Open data providers and users. Activity goal: Clearly define the boundaries of responsibilities of open data providers and users, so that they know what they can expect from each other and use this information to effectively execute their processes. Interdependence: Open data providers and users adapt themselves to the information and knowledge that they have about the boundaries of their own responsibilities and the responsibilities of the other stakeholders in the open data process to effectively perform their work. Coordination mechanisms: The boundaries of the responsibilities of stakeholders in the open data process are often unclear. There is no widely accepted agreement about which stakeholders perform which activities. Furthermore, there is no coordinator who is responsible for the whole open data process. A cause of this boundary uncertainty could be that stakeholders in the open data process lack information concerning each other's status and activities and that different organizational units observe different parts of the process [START_REF] Sheombar | Logistics coordination in dyads: Some theoretical foundations for EDIinduced redesign[END_REF].
Lack of feedback on and discussion of data use (challenge 4 in Figure 1)
Actor: Open data providers and open data users. Activity goal: To discuss with other stakeholders in the open data process and to provide them with feedback on their activities. Interdependence: Applying discussion and feedback mechanisms in the open data process is important, as they can be beneficial for data providers as well as data users. Open data providers and users can use discussion and feedback mechanisms to improve the quality of the data, the data publishing processes and open data and other policies. Additionally, they can help users to better understand how they can use and interpret data and what the value of the data is [START_REF] Dawes | Information strategies for open government: Challenges and prospects for deriving public value from government transparency[END_REF]. Coordination mechanisms: The current open data process is lacking discussion and feedback mechanisms. For instance, after open data have been used, there are usually no coordination mechanisms that facilitate the provision of feedback to data providers and that facilitate a discussion about the reused data.
Lack of interconnected processes (challenge 5 in Figure 1)
Actor: Open data providers and users. Activity goal: Connect sub processes of the open data process, so that open data providers and users can gear the activities that they perform to one another. Interdependence: The open data process is divided into main processes, which can be divided into sub processes. An example of such a sub process is the preparation of the dataset or checking whether the dataset can be published. The way that stakeholders perform their activities influences the extent to which other stakeholders are able to perform their activities in other sub processes. A lack of interconnected processes can lead to the situation in which one stakeholder executes activities in such a way that other stakeholders are hindered in performing their own activities. For instance, when a data provider does not have the insight that open data users need considerable metadata to be able to use the data, he or she may not provide these metadata, hindering the open data user in using the data and realizing their benefits. Coordination mechanisms: The sub processes in the open data process usually do not have clearly specified interlinked outputs. The coordination mechanism of deep coordination-related knowledge, including knowledge of partner competencies, process and content, organization memory of past change episodes and understanding of causal linkages, is lacking in the open data process. For example, many organizations merely release data on the internet without considering the way that their data can be used or how they can get feedback on the data [START_REF] Zuiderwijk | Towards an e-infrastructure to support the provision and use of open data[END_REF].
Lack of standardized and planned processes (challenge 6 in Figure 1)
Actor: Open data providers and open data users. Activity goal: To perform the open data process in a standardized way. Interdependence: The extent of standardization used in the open data process influences the easiness, time-consumption and efficiency of participating in it. Coordination mechanisms: The mechanisms of coordination by standardization and coordination by plan are barely applied in the open data process. This may be caused by the fact that the sub processes of the open data process are not stable and sometimes not repetitive, which makes it difficult to apply coordination by standardization and plan. For instance, open data can be published and reused in various ways and feedback can be provided and received in many ways. This finding is in line with research of Braunschweig et al. [START_REF] Braunschweig | The State of Open Data. Limits of Current Open Data Platforms[END_REF], who write that considerable differences exist between the ways that data can be reused in open data repositories.
Mechanisms to improve coordination in the open data process
In the previous section, various coordination challenges for the open data process were identified. In this section we focus on the second aim of this paper, namely to investigate how coordination in the open data process can be improved. Table 1 shows an overview of the coordination challenges that were identified in the previous sections and the related coordination mechanisms that may help in solving these challenges.
Table 1. An overview of coordination challenges related to coordination mechanisms that may help in solving these challenges (columns: coordination challenges; coordination mechanisms to solve these challenges).
Although the coordination mechanisms that were described by Thompson [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF] and Gosain et al. [START_REF] Gosain | Coordinating for Flexibility in e-Business Supply Chains[END_REF] are only applied on a small scale in the open data process at this moment, all the coordination mechanisms that they describe seem appropriate for improving coordination in the open data process, as all of them could be related to at least one of the identified coordination challenges. Therefore, we recommend using the identified coordination mechanisms.
Table 1 shows that different coordination challenges might be solved by using different coordination mechanisms. For example, the lack of interconnected processes could be solved by applying all mechanisms, but solving the lack of communication would benefit mainly from coordination by mutual adjustment and deep coordination-related knowledge, rather than other mechanisms, such as the breadth and quality of shared information. Coordination by deep coordination-related knowledge could be used for all of the coordination challenges and was mentioned most often.
The second column of Table 1 shows that we propose to use a combination of coordination mechanisms from all three approaches that we analyzed, namely from Thompson's [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF] approach and from Gosain et al.'s [START_REF] Gosain | Coordinating for Flexibility in e-Business Supply Chains[END_REF] approach of advanced structuring and dynamic adjustment. For instance, to solve the challenge of unclear boundaries of responsibilities, we propose to combine mechanisms from all three approaches, namely coordination by plan defined by Thompson [START_REF] Thompson | Organizations in action. Social sciense bases of administrative theory[END_REF], modular interconnected processes defined by Gosain et al.'s [START_REF] Gosain | Coordinating for Flexibility in e-Business Supply Chains[END_REF] approach of advanced structuring and deep coordination-related knowledge defined by Gosain et al.'s [START_REF] Gosain | Coordinating for Flexibility in e-Business Supply Chains[END_REF] approach of dynamic adjustment.
But although several useful coordination mechanisms are described in the literature, these coordination mechanisms cannot be directly applied to solve all the challenges in an appropriate way. For instance, while it is clear that fragmentation of open data could be solved by coordination by standardization, existing research does not explain how this standardization could be applied to the open data process. It is difficult to coordinate the open data process due to its complexity, lack of structure, uncertainty, dynamism, and the involvement of many stakeholders. Because of these characteristics of the open data process, it is unclear how activities in the open data process could be interconnected and how deep coordination-related knowledge could be obtained by stakeholders involved in the open data process. It is hard to define suitable coordination mechanisms in advance, and in different circumstances different coordination mechanisms might be appropriate. Further research is needed to investigate whether and how coordination theory could be extended to provide more appropriate coordination mechanisms in the context of open data. As a first step towards examining how coordination mechanisms can be applied, we suggest the development of an open data e-infrastructure where open data providers and users can find and contact each other and collaborate. Such an open data e-infrastructure has the advantage that it brings together different stakeholders who are involved in the open data process and, as a consequence, it provides an overview of these stakeholders and it gives more insight into how the open data process could be coordinated. Open data e-infrastructures may provide, among others, the functionalities of data provision, data retrieval and use, data linking, user rating and user cooperation [START_REF] Zuiderwijk | Towards an e-infrastructure to support the provision and use of open data[END_REF]. An e-infrastructure may be helpful in supporting coordination by:
• Using an Application Programming Interface (API) that allows publishers to integrate the publishing workflow in their own dataset management systems and upload or update datasets automatically on open data infrastructures (see the sketch after this list);
• Interconnecting processes performed by data providers and data users, for example, by keeping track of their status from the phase of publication until the phase of data reuse and discussion;
• Describing and clarifying the responsibilities of stakeholders involved in the open data process;
• Providing deep coordination-related knowledge;
• Linking data and showing them in linked catalogues to improve their findability;
• Giving information about open data regulations (e.g. policies and guidelines);
• Enabling the discussion of reused data by making it possible for users to discuss datasets individually or in groups of users, in this way stimulating iterative open data processes;
• Enabling the provision of feedback on data and on policies;
• Enabling monitoring of data reuse, data discussions and feedback on datasets and policies by providing tools to monitor these;
• Standardizing processes of uploading, downloading, reusing and discussing data, for instance by describing formats in which data could be published to facilitate their reuse [START_REF] Zuiderwijk | Towards an e-infrastructure to support the provision and use of open data[END_REF].
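The sketch below, referred to in the first bullet of the list above, illustrates how a publisher's dataset management system might create a dataset on an open data infrastructure automatically. It assumes a CKAN-style action API, and the portal URL, API key and dataset fields are placeholders rather than a reference to an actual deployment.

```python
import json
import urllib.request

# Placeholder portal and credentials; a CKAN-style action API is assumed.
PORTAL = "https://opendata.example.org"
API_KEY = "publisher-api-key"

def publish_dataset(metadata: dict) -> dict:
    """Push a dataset description to the portal from the publisher's own
    dataset management workflow, instead of uploading it manually."""
    request = urllib.request.Request(
        f"{PORTAL}/api/3/action/package_create",
        data=json.dumps(metadata).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": API_KEY},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

result = publish_dataset({
    "name": "air-quality-2012",           # machine-readable identifier
    "title": "Air quality measurements 2012",
    "notes": "Hourly NO2 readings per monitoring station.",
})
print(result.get("success"))
```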
Conclusions
The aim of this research was 1) to determine which coordination needs and challenges exist in the open data process, and 2) to investigate how coordination in the open data process can be improved. A literature review was performed, which pointed at coordination mechanisms that can be applied to improve the open data process. Subsequently, the open data process was analyzed. Interdependencies between activities were examined and it was found that some of the coordination mechanisms derived from the literature are used in the open data process, but only on a very small scale. Six different coordination challenges were identified in the open data process, namely 1) an inappropriate regulatory environment, 2) fragmentation of open data, 3) unclear boundaries of responsibilities, 4) a lack of feedback on and discussion of data use, 5) a lack of interconnected processes, and 6) a lack of standardized and planned processes. Coordination mechanisms can be used to overcome these challenges. The use of coordination mechanisms in the open data process could stimulate the realization of the advantages of open data, such as enabling open data users to reuse data and enabling open data providers to profit from publishing data and to use this for improving their policy-making and decision-making. Yet, we found that it is difficult to coordinate the open data process due to its complexity, lack of structure, uncertainty, dynamism, and the involvement of varying stakeholders. Further research is necessary to investigate which coordination mechanisms are appropriate in the context of open data publication and use.
Acknowledgements. This paper is related to the ENGAGE FP7 Infrastructure Project (An Infrastructure for Open, Linked Governmental Data Provision Towards Research Communities and Citizens; www.engage-project.eu; www.engagedata.eu), which started in June 2011. The authors would like to thank their colleagues of the ENGAGE project for their input for this paper, although the views expressed are the views of the authors and not necessarily of the project.
| 33,077 | ["994145", "985668"] | ["333368", "333368"] |
01490924 | en | ["shs", "info"] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01490924/file/978-3-642-40358-3_8_Chapter.pdf | Nathalie Stembert
email: [email protected]
Peter Conradie
email: [email protected]
Ingrid Mulder
email: [email protected]
Sunil Choenni
email: [email protected]
Participatory Data Gathering for Public Sector Reuse: Lessons Learned from Traditional Initiatives
Keywords: Data Gathering, Participatory Citizenship, Local knowledge, Open Data
Local governments are increasingly looking for new ways to involve citizens in policy and decision-making, for example by combining public sector data sources with data gathered by citizens. Several examples exist of data gathering where personal mobile devices act as data collectors. While these efforts illustrate the technical capability of data sourcing, they neglect the value of local knowledge where people use their senses to capture and interpret data. Traditional data gathering initiatives, however, exploit this local knowledge to inform policy makers, e.g., neighborhood policing. To understand the data gathering processes of these traditional initiatives, three cases are examined. We analyze these cases, focusing on the various elements they contain, and conclude how digital data gathering can be informed by these traditional variants, what the benefits of using digital means for data gathering can be, and how traditional initiatives ensure data re-use by the public sector.
Introduction
Local governments aim for new forms of policy and decision-making processes, with an emphasis on greater citizen involvement and participatory government, where active partnerships and collaboration between citizens, the private sector and the municipality are stimulated [START_REF] Reddel | From consultation to participatory governance? A critical review of citizen engagement strategies in Queensland[END_REF]. Internet has shown to be a promising platform for eParticipation [START_REF] Amichai-Hamburger | Potential and promise of online volunteering[END_REF] and local governments are increasingly using digital tools to inform and communicate with citizens [START_REF] Johannessen | Choosing the Right Medium for Municipal eParticipation Based on Stakeholder Expectations Theoretical Premises: Technology Evaluation through Genres[END_REF]. This is also manifested in the many 'Open' movements, e.g., in Open Data Initiatives, where government data is released for reuse [START_REF] Maier-Rabler | Open": the changing relation between citizens, public administration, and political authority[END_REF].
In this paper we focus on how traditional data gathering initiatives can inform digital means of data gathering, with the data being re-used by the public sector to contribute to policy and decision-making, and how data gathering can benefit from digital tools. Digital means that enable people to passively gather data are emerging, among others to map noise pollution [START_REF] Maisonneuve | Participatory noise pollution monitoring using mobile phones[END_REF], indicate the quality of roads [START_REF] Eriksson | The Pothole Patrol: Using a Mobile Sensor Network for Road Surface Monitoring[END_REF] or map congestion [START_REF] Savage | Cycling through data[END_REF]. These examples highlight how mobile devices can be used as data gathering tools. Involving people as carriers of such sensors can be seen as a successor to traditional forms of data gathering. However, in contrast with digital data gathering, traditional initiatives use human senses and intelligence to observe and interpret local events, such as in crime prevention initiatives, where people walk inspection rounds to map neighborhood safety [START_REF] Levine | Citizenship and service delivery: The promise of coproduction[END_REF]. Digital means might help overcome disadvantages of traditional data gathering like data credibility, non-comparability of data, data incompleteness and logistical issues [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF]. Yet, traditional data gathering approaches still offer certain advantages, such as making better use of qualitative knowledge embedded in communities [START_REF] Coleman | Bowling Together: Online Public Engagement in Policy Deliberation Online Public Engagement in Policy Deliberation[END_REF] [START_REF] Corburn | Community knowledge in environmental health science: co-producing policy expertise[END_REF]. In this paper we explore what digital data gathering processes can learn from traditional data gathering initiatives, to inform local governments on the organization of digital data gathering initiatives and empower people to gather data in collaboration with local authorities to contribute to policy and decision-making. Our research question for this study is: How can digital data gathering processes benefit from traditional data gathering initiatives?
The remainder of the current work is structured as follows: Section 2 describes other data gathering projects exploiting the potential of emerging technologies and discusses the value people can contribute to data gathering initiatives, distinguishing initiatives involving people as mobile sensor carriers as well as those involving people as sensors. Section 3 introduces our approach and provides an overview of the cases studied. In Section 4 we present our findings from the multiple case studies and describe important elements in traditional data gathering initiatives. Section 5 discusses how these elements can inform digital data gathering, the challenges associated with it, the benefits of using digital means, and ensuring data re-use by the public sector. In Section 6 we elaborate on directions for future activities.
2 Related work
Benefits associated with involving citizens through data gathering are widely acknowledged in decision making, planning, and policy development [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF][11] [START_REF] Corburn | Bringing Local Knowledge into Environmental Decision Making: Improving Urban Planning for Communities at Risk[END_REF]. These benefits include education of citizens [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF][13], cost effectiveness [START_REF] Bromenshenk | Public participation in environmental monitoring: A means of attaining network capability[END_REF], or having access to information that non-residents might not be aware of [START_REF] Stokes | Public participation and volunteer help in monitoring programs: An assessment[END_REF]. Also, when given training, citizens can provide high quality data using less expensive methods [START_REF] Au | Methodology for public monitoring of total coliforms, Escherichia coli and toxicity in waterways by Canadian high school students[END_REF].
The increased availability of mobile devices and emerging technologies has encouraged projects where data is collected digitally. These include Pothole Patrol [START_REF] Eriksson | The Pothole Patrol: Using a Mobile Sensor Network for Road Surface Monitoring[END_REF], where sensor data submitted by smartphones is used to assess road quality, the Copenhagen Wheel [START_REF] Savage | Cycling through data[END_REF], where sensors attached to city bicycles submit pollution, road conditions and congestion data, or data mining uploaded photos to map tourist movement [START_REF] Girardin | Digital Footprinting: Uncovering Tourists with User-Generated Content[END_REF]. Gathering data in a digital way improves the validation of results and increases access, in addition to offering better ways of exploring and communicating findings about the data [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF]. This type of data carrying refers to 'citizens as mobile sensor carriers', where submission and gathering of data is digital, and citizens do not actively decide what to submit. Although these examples illustrate the technical pos-sibility of submitting or analyzing mined data, they do not necessarily make use of local contextual knowledge found in communities. Firsthand experience, sometimes only available to local residents, can be important to experts in planning or developing policy [START_REF] Fischer | Citizens, experts, and the environment: The politics of local knowledge[END_REF][19] [START_REF] Stokes | Public participation and volunteer help in monitoring programs: An assessment[END_REF].
Digital data gathering stands in contrast with more traditional efforts of data gathering, for example logging water quality [START_REF] Au | Methodology for public monitoring of total coliforms, Escherichia coli and toxicity in waterways by Canadian high school students[END_REF], hunters providing wildlife samples [START_REF] Stokes | Public participation and volunteer help in monitoring programs: An assessment[END_REF], monitoring pollution with bees [START_REF] Bromenshenk | Public participation in environmental monitoring: A means of attaining network capability[END_REF], or the three cases introduced in this paper. Within this category, data gathering has traditionally been analog and people apply contextual knowledge while gathering data. In this 'citizens as sensors' category, citizens actively contribute to the data collection, by gathering data through their senses, and applying contextual knowledge when finding facts.
Due to the benefits of technology, as mentioned earlier, examples of 'citizens as sensors' enabled by digital means are appearing. One such example is FixMyStreet [START_REF] King | Fix my street or else: using the internet to voice local public service concerns[END_REF], where citizens can log problems in the public space, such as broken lanterns or pavements. Here, citizens, empowered by digital tools sense data and apply contextual knowledge and judgment on what is being logged and submitted.
In this 'citizens as sensors' category, the information is mostly qualitative, as a result of life experience, and is instrument independent [START_REF] Corburn | Community knowledge in environmental health science: co-producing policy expertise[END_REF]. Given this, data gathering, validation, and testing by local residents differ largely from the methods and techniques of professional practitioners. Despite evidence that local knowledge can offer valuable insights, the differences in methods and techniques can cause professionals to view the public as having either a deficit of technical understanding or as solely complementing the work of experts [START_REF] Yearley | Experts in Public: Publics' Relationships to Scientific Authority[END_REF], while data credibility, logistical issues, non-comparability and incompleteness of data [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF] are cited as issues, posing a challenge for 'citizens as sensors' initiatives.
Co-production could be an approach to overcome disagreement about credibility, validation and testing methods and techniques [START_REF] Susskind | Learning from Western Europe. Paternalism, Conflict, and Coproduction: Learning from Citizen Action and Citizen Participation in Western Europe[END_REF], since all stakeholders are accepted as potential contributors and hard distinctions between expert and novice are rejected. Joint fact-finding, in turn, similarly assists in increasing data credibility, while also contributing to more cohesive relationships among stakeholders and a better understanding of differing views [START_REF] Ehrmann | Joint Fact-finding And The Use Of Technical Experts[END_REF].
Given the advantages of 'citizens as sensors' approaches, combined with using digital means to gather, store and analyze data, we propose examining existing, successful traditional data gathering initiatives. We introduce an analysis of the processes these initiatives currently use, to inform digital initiatives and to better understand re-use by the public sector of the data gathered by these traditional initiatives.
Approach
In order to understand the human involvement in analog data gathering initiatives, a multiple case study approach was used to study traditional data gathering initiatives within real-life contexts [START_REF] Yin | Case Study Research Design and Methods[END_REF]. This case study setup allows us to analyze within and across settings, to understand similarities and differences between cases [START_REF] Baxter | Qualitative case study methodology: Study design and implementation for novice researchers[END_REF].
3.1 Case introduction
The three cases are set in Rotterdam, the second largest city in The Netherlands. They were selected based on a predefined set of criteria: (a) it is an initiative where citizens use their senses to gather data, (b) ownership of the initiative lies with citizens, and (c) the data is gathered to influence local policy and decision making. The cases are briefly described below.
Case 1, Drugs in Color (DC):
This initiative attempts to lower drug-related nuisance. Groups of trained volunteers walk together with police officers, representatives of housing corporations and community workers in inspection rounds through the neighborhood. They search for predefined 'drug-related objects' and rate their observations according to a five-step analog color standardization.
Case 2, Housing Report (HR): To better understand housing shortages, a neighborhood-led initiative was started to identify the causes of a lack of living space. To do so, an objective researcher, together with the local council, social housing company and the statistical bureau of the municipality, analyzed data about the situation. This research was complemented with qualitative interviews with neighbors.
Case 3, Citizen Blue (CB): Residents patrol the neighborhood, in collaboration with the municipality, public maintenance service and local police, to increase safety and foster social cohesion. During inspection rounds, volunteers map whether the neighborhood is clean, intact and safe, e.g. by paying attention to overflowing dumpsters, broken streetlights and drug dealing. Observations are reported over a handheld transceiver and are summarized by a trained citizen and, depending on the origin of the issue, presented to the authority responsible.
Table 1 presents an overview of the three cases, their goal, type and frequency of data gathering as well as the actors involved in the initiatives.
Data collection
From each of the case studies we derived a large amount of data, in the form of direct observations and raw interview material. Observations of each case were documented and analyzed, mapping the actors, triggers for data gathering, data transactions and the level of data enrichment during every transaction. The analyses of all three cases were compared and led to insights into the process steps of traditional data gathering initiatives. Interviews were conducted with members of citizen initiatives (n=5), community workers (n=1), members of the municipality (n=6), data gatherers (n=9), independent researchers associated with data gathering (n=2) and local police (n=3); the interviews were transcribed and interpreted, and a total of 433 statement cards were categorized. Statement cards show quotes, interpretations and paraphrases of the data found in our interviews and observations. This method allows a team to collectively organize and reorganize data to discuss interpretations, observe similarities and draw conclusions [START_REF] Visser | Participatory design needs participatory communication: New tools for sharing user insights in the product innovation process[END_REF]. The actor analysis together with the statement card analysis led to an overview of their gathering processes and six elements that distinguish these processes.
Findings
Actors in traditional data gathering initiatives
From the analysis we identified process supporters, data gatherers, data recipients and data interpreters: a set of actors that was found in all cases examined. Process supporters, like a community worker, are actors who organize the process and give guidance to the other actors in the process, while they can also give legitimacy to an initiative. Data gatherers are the citizens who actively gather data in their local environment. Data interpreters receive, interpret and enrich the data gathered, after which they provide the other actors with verbal, written or visual feedback. This role is largely determined by the ability and expertise to analyze and add value to data.
Processes of a traditional data gathering initiative
To illustrate a traditional data gathering process we describe the process of the DC initiative. First, the community worker is triggered by complaints about drug nuisance in the neighborhood and organizes inspection rounds. The method of data gathering is standardized, allowing the initiative to be taken seriously by local authorities and to receive their support. The community worker continuously plans inspection rounds and motivates citizens to participate. In the second step, citizens actively gather data and give the data to the community worker. While walking the inspection rounds, citizens are supported by local police, who can directly intervene when necessary and can ensure the safety of data gatherers. In the third step, the community worker (re)arranges the data and gives the data to the autonomous chairman. In this stage, the data is only synthesized and not enriched. In step four, the chairman analyzes, interprets and enriches the data, after which he provides the recipients with feedback (step five). This feedback is given in the form of a quarterly feedback meeting, where data serves as a tool to form a common vision. The feedback meetings are attended by local authorities, i.e. police officers, representatives of housing corporations, members of the municipality and community workers, and the data is presented in a form that all actors can understand. Everyone receives the same information, and the local authorities directly provide feedback on solved or unsolved matters and explain underlying causes based on their domain expertise.
The other two cases were analyzed accordingly and the process steps of the cases were identified. These process steps were abstracted and merged into our presentation of these traditional data gathering processes (figure 1). Below we elaborate on each of these elements.
Support through mother organizations. Mother organizations understand existing social structures, have access to communication channels and have ties with local authorities. The three examined cases all built on local neighborhood collectives, which acted as a launch platform for the gathering initiatives. They offered access to potential subsidies and domain experts due to their existing network. In our cases, mother organizations initially acted as catalysts for data collection, after which they all became semi-autonomous working groups within the community organization that also undertakes other community actions.
Internal and external motivational triggers. Actors within the mother organizations of the cases, articulated a local problem that formed the trigger and main motivation to start gathering data. DC was triggered by the degree of problems caused by drugs in the neighborhood, whereas in HR, the lack of suitable housing motivated residents to hire an objective researcher, while CB came into existence as a result of high levels of crime and public problems in the neighborhood, in combination with dissatisfaction with the action undertaken by local authorities to combat the issues.
Legitimacy through authority and partnerships. A central issue for gatherers was the need to have data taken seriously. Through interviews with public sector organizations involved with data gathering (police, local council, bureau of statistics or environmental protection agencies), we found that these organizations have specific data norms they adhere to. To assure legitimacy, groups involved local authorities or trusted third parties. In the case of DC and CB, the police, local council and housing corporation actively partake in the project, while stressing their non-ownership. The HR research was performed by an objective third party with domain knowledge, while the housing company provided statistics for the researchers.
Standardization and methodology. In order for qualitative measurements to be usable by external organizations, actors must agree on standardized measurements. It was important that data gatherers involve or consult the earlier mentioned objective third parties to decide together what will be measured and how the data will be gathered, e.g. with a digital camera or notepad. In DC, this occurred at the initial phases of the project, when a decision was made about the types of nuisances to record, when to record them, and how they should be classified in the system. Similarly, HR worked in collaboration with all actors to understand how the types of data recorded and presented by the social housing company, the council, and the statistics department can be interpreted. CB also agreed on the types of data being collected and how disturbances in the public sphere could be recorded.
These four elements conclude the organization and planning phase of the process. What follows are two potentially iterative and repeating elements: active gathering and feedback.
Data Gathering: Gathering, Interpretation, Presentation and Acknowledgement.
Gathering: The initial active element is physical data collection, where citizens gather data using the chosen standardization and methodology. During this process they are supported by trusted third parties or involved authorities. The degree of support offered can vary from participating in inspection rounds, to education and logistical support. Notable during this step is the pre-interpretation and decisions made by gatherers to not capture certain data because of contextual knowledge. This includes the occurrence of homeless persons that are not considered a nuisance (DC), tolerance towards broken up street areas as a result of construction work (CB), or reliance on storytelling (HR).
Interpretation: Having gathered the data, a certain amount of data interpretation is needed in order to gain insights. This interpretation is either done by a trained volunteer, or a paid professional. Local authorities can also take a role, by offering domain expertise, as is the case with DC, where police officers actively explain certain drug related issues. In our cases, care is taken during the interpretation phase to guarantee data quality and validity, making sure the data retains its legitimacy.
Presentation: Following the interpretation, actors are presented with the gathered data in a tailored form. This might include graphs, statistics or text summaries. Results were compared to earlier data gathering moments or data supplied by local authorities. Presenting insights in the presence of third parties and local authorities enabled discussion and clarification of the data by domain experts. For example, HR used a special neighborhood newspaper for direct stakeholders, combined with a special supplement in the local newspaper for other interested parties. CB, in contrast, keeps track of data using a spreadsheet, centrally visible at the physical gathering place. The data is also communicated to the appropriate authorities, either whilst attending, or via email. These platforms are also notable for including data that is logged during unofficial sightings, occurring outside the regular times of data gathering.
Acknowledgement: The acknowledgement phase introduces the last iterative step in the data gathering element, where local authorities react to the gathered data in short feedback loops. This is an important element, since it is an acknowledgement of the effort of gathering, and can act as a motivator. Additionally it has a controlling function to make sure that action is planned, although it does not necessarily include active change. To illustrate, CB sends their reports of the week's activities to the local authorities and informs gatherers about the prospective feedback by the council. DC chose quarterly meetings to get the feedback of local authorities, whereas HR received feedback on their gathering at a final event where the report was presented.
Feedback about short, mid and long-term outcomes. This final element of the process is the outcome of data gathering initiatives. During this phase, we define short-term, mid-term and long-term outcomes. At the very least, as a short-term outcome, common ground is hoped for, from which understanding about the problem is cultivated on both sides of the issue. Mid-term outcomes focus on concrete actions. These might be a more pro-active approach to garbage collecting, alternative route suggestions for police patrol, or more inspection rounds in certain areas. Long-term outcomes can mean lower crime or a behavior change of one of the parties involved. In the case of DC, the data gatherers have a better understanding of the issues facing the police, while in HR, parties understand the problem better due to the increased availability of information.
This section presented an analyzed process of traditional data gathering by citizens, where people apply community knowledge by gathering data, enabling gathered data to be re-used by the public sector. In the following section, we discuss how this process can inform data gathering with digital means, in addition to what the benefits of using digital tools and techniques might be.
Discussion
As mentioned in Section 2, local knowledge during data gathering can yield important insights [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF][11] [START_REF] Corburn | Bringing Local Knowledge into Environmental Decision Making Improving Urban Planning for[END_REF]. Issues like differing methods and techniques [START_REF] Yearley | Experts in Public: Publics' Relationships to Scientific Authority[END_REF], data credibility, logistical issues, non-comparability and incompleteness of data [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF] can prevent the data gathered from being re-used by the public sector. Digital means of data gathering offer advantages such as better validation of results and increased access, in addition to better means of exploring the data and communicating about the findings [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF]. Digital means may be exploited to support and accelerate traditional data gathering initiatives, while the lessons learned from traditional gathering initiatives may be beneficial to digital means for data collecting purposes. As we have observed, contextualizing collected data is important in the interpretation of the data, to retain community knowledge. A major challenge of digital means for collecting purposes is to capture and process the context that pertains to data, i.e., to contextualize data. Digital means potentially solve traditional logistical issues by offering automated processes, potentially resulting in efficiency and cost gains. Where the triggers for traditional data gathering are mostly introduced by local mother organizations, digital means can likewise allow likeminded but scattered groups to connect, which is beneficial. Also, digital means may bring emerging triggers to attention. By processing large amounts of collected data in real time, emerging phenomena may be exposed, serving as early warning triggers for local authorities [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF].
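As a sketch of such real-time early warning on a stream of citizen reports, the following counts recent reports per category against a threshold; the time window, threshold and report category are arbitrary assumptions made for illustration.

```python
from collections import Counter
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)   # arbitrary observation window
THRESHOLD = 10               # arbitrary reports-per-category trigger

def early_warnings(reports, now):
    """Flag categories whose report volume within the window exceeds
    the agreed threshold, as an early warning for local authorities."""
    recent = Counter(category for reported_at, category in reports
                     if now - reported_at <= WINDOW)
    return [category for category, count in recent.items()
            if count >= THRESHOLD]

now = datetime(2013, 6, 1)
reports = [(now - timedelta(days=i % 5), "illegal dumping")
           for i in range(12)]
print(early_warnings(reports, now))  # ['illegal dumping']
```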
Traditional data gathering initiatives, however, have proven to be effective: they rely on human senses and insight, and their data is re-used by the public sector. We ascribe this partially to the support of underlying mother organizations, which function as the backbone of traditional data gathering initiatives. Digital data gathering initiatives, on the other hand, are loosely bound together by common interest; without the backing of an existing mother organization, it may prove challenging to achieve the same level of logistical support and connections as traditional initiatives. Furthermore, the local authority is commonly not the initiator of data gathering, but is often attracted by the mother organization to collaborate at a later stage, offering valuable feedback and legitimacy. Not having legitimizing partnerships in digital initiatives can make outcomes uncertain and can be a hurdle to realizing change, since it is important to communicate to volunteers that the authorities support the data gathering initiative and that a valuable outcome of some sort is guaranteed. Attracting an objective trusted third party that can act as the 'face' of the initiative could substitute for the absence of a mother organization, since it can provide access to an existing community and can address local authorities to lend legitimacy to the data gathering initiative.
When actively gathering data, digital means offer the potential for incidental, ad-hoc measurement, for example through using a smartphone on location, rather than measuring during predefined walks through the neighborhood. Moreover, digital standardization is not influenced by human inconsistencies and can increase the possibility of gathering more credible, complete and comparable data [START_REF] Gouveia | Promoting the use of environmental data collected by concerned citizens through information and communication technologies[END_REF]. However, care must be taken to allow human observations and interpretations into the determination of the methodology. Choosing and agreeing on standards and methods is important, since qualitative information gathered by citizens acting as human sensors is valuable in traditional data gathering. However, involving qualitative information increases complexity, making the training of data gatherers crucial to assure data credibility. In traditional initiatives, this preceding training and support during data gathering is provided by mother organizations; in the digital case this would entail a different type of support, such as a tutorial. Moreover, the traditional efforts in our cases involve authorities in gathering the data in the field, enabling joint fact-finding, which can be beneficial for the relationship among actors [START_REF] Susskind | Learning from Western Europe. Paternalism, Conflict, and Coproduction: Learning from Citizen Action and Citizen Participation in Western Europe[END_REF].
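A minimal sketch of what such digital standardization could look like is given below, assuming a measurement protocol agreed on by the stakeholders; the allowed categories and required fields are hypothetical examples.

ALLOWED_CATEGORIES = {"litter", "noise", "drug nuisance"}
REQUIRED_FIELDS = ("category", "value", "timestamp", "location")

def validate(report: dict) -> list:
    """Return a list of problems; an empty list means the report is usable."""
    problems = [f for f in REQUIRED_FIELDS if not report.get(f)]
    if report.get("category") not in ALLOWED_CATEGORIES:
        problems.append("category not in the agreed standard")
    # A qualitative note is accepted but never required, so human
    # observation can still enter the data set alongside the standard fields.
    return problems

Such checks can reject incomplete or non-comparable reports at the moment of entry, which is harder to achieve with paper forms.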
In the examined traditional data gathering initiatives, interpretation of the data plays a major role and is often done by an objective trusted third party, making on-the-spot clarification possible. In digital efforts, transparency about this process is also important, giving participants insight into how data is translated, compared and combined. It must be clear who presents the data, a role that can be fulfilled by an objective third party who is trusted by all stakeholders to interpret and communicate the results. Digital means make it easier to combine and compare data sets from different stakeholders and, just as in traditional means, to present them to all stakeholders in an understandable and transparent way. Furthermore, an analysis of combined data sets results in a more comprehensive description of a phenomenon than an analysis of a single set. Digital variants could benefit from near-instant translation and presentation of results, but might lack the ability to consult domain experts on why results are translated and presented in a certain way. Providing methods of consultation on data interpretation can make this process more transparent.
However, adequate acknowledgement by the local authority of the data collected remains important. In traditional initiatives, local authorities give data gatherers credit in the form of appreciation of their efforts, resulting in engaged and active participants. In line with the importance of the element of acknowledgement, we note that providing data gatherers with feedback about short-, mid- and long-term outcomes, even when change has not been realized, empowers citizens and acknowledges their efforts. Similar efforts in digital data gathering initiatives would be beneficial.
Finally, we emphasize the need to be transparent in cases of automatic digital data gathering. In our examined cases, participants take part actively, with full knowledge of the data collection and its intent. This transparency remains important in digital data gathering initiatives.
Conclusion and future work
Digital data gathering has taken a leap and has already shown its potential for supporting citizens in contributing to policy and decision-making processes. Local governments express the need to be informed by citizen data, but are not familiar with the organization of digital data gathering processes. In this paper we propose that digital data gathering can learn from traditional data gathering initiatives that have proven to be effective. We also argue that traditional means stand to benefit from digital tools and techniques. To achieve this, we present a traditional data gathering process with six elements that can serve as guidance for local governments. We aim to contribute to the efforts of combining government data and citizen-gathered data for public sector re-use, in order to contribute to policy and decision-making. Also important is the potential of data gathering to build better relationships between authorities and citizens through joint fact-finding. Due to the limited number of cases, we emphasize that the presented process elements need to be tested in future research.
Our own efforts will focus on standardization and methodology, where we aim to investigate how existing standards, as determined by government, can be translated into measurement standards that can incorporate human knowledge whilst still retaining the legitimacy of the data. With the development of a use case based on the data gathering demand articulated by local authorities in Rotterdam and an accordingly selected target group, we aim to develop a mobile application following an iterative research and design approach.
Fig. 1. Representation of traditional data gathering processes
Table 1. Overview of the three cases analyzed

                 Drugs in Color (DC)        Housing Report (HR)           Citizen Blue (CB)
Goal             Stop long-term annoyance   Raise attention for the       Stop long-term annoyance
                 of drug nuisance in the    housing shortage in the       of disturbance in the
                 neighborhood               neighborhood                  neighborhood
Data             Quantitative               Quantitative & Qualitative    Qualitative
Frequency        Quarterly                  Non-recurrent half-year       Once every two weeks
                                            project
Supporters       Community worker,          Community center              Community center,
                 Local Police                                             Local Police
Gatherers        Concerned citizens         Independent researcher        Concerned citizens
Interpreters     Independent interpreter    Independent researcher        Trained citizens
Data Recipients  Municipality, Local        Local residents,              Municipality, Local police,
                 police, Housing            Municipality, Housing         Housing corporation,
                 corporation                corporation                   maintenance service
Acknowledgements. We thank our colleagues of the Rotterdam University involved in the Rotterdam Open Data project and the participants for this study. The work has been partly funded by SIA RAAK Publiek. | 36,243 | [
"1004233",
"1004234",
"1004235",
"1004236"
] | [
"475786",
"475786",
"475786",
"333368",
"475786",
"489120"
] |
01490938 | en | [
"shs",
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01490938/file/978-3-642-40358-3_7_Chapter.pdf | Mauricio Solar
email: [email protected]
Luis Meijueiro
email: [email protected]
Fernando Daniels
email: [email protected]
A Guide to Implement Open Data in Public Agencies
Keywords: open data, open government data, roadmap, maturity model
This article presents a guide to implement open data in Public Agencies (PAs).
The guide is the result of a worldwide study of proposals, of the application of a maturity model to diagnose the situation of PAs in Latin American countries, and of the opinions of experts in different centers of excellence, e-government authorities, and developers of open data applications around the world. The guide is simple and orients decision makers, so that PAs following the actions of the guide will see their capacities improved when facing a diagnosis of their institutional maturity in open data implementation.
Introduction
The need to encourage data reuse is key, since reuse promotes the innovation capacity of developers and infomediaries. According to de la Fuente (2012), the promotion of policies and the availability of mature technology standards such as the semantic web are enabling a great revolution in the way public information is distributed and consumed. Both websites and data will link to one another, dramatically reducing the cost of reuse and simplifying their integration with future applications ([START_REF] Berners-Lee | Linked Open Data. What is the idea? Retrieved from[END_REF]). The CTIC Foundation (2011) suggests configuring a kind of ecosystem among government, companies and citizens to promote the re-use of Public Sector Information (PSI) and thus contribute to social welfare [START_REF] Harrison | Creating Open Government Ecosystems: A Research and Development Agenda[END_REF]. [START_REF] Yu | The New Ambiguity of 'Open Government[END_REF] indicate that publishing data in a structured way is only a necessary condition for having new applications and services; in most cases it is not realistic to expect innovation to occur automatically.
An action plan is then required to stimulate the consumption of datasets among infomediary companies and developers; this promotes the creation of new applications, driving an economic area whose axes are technical on one side and business-related on the other. Promoting interoperability and transparency between Public Agencies (PAs) must be a key element of the plan (CTIC, 2011).
Other components to consider in a plan are: gathering information from citizens to learn which information is most relevant to them, and incorporating social networks into the common channels of participation, since Open Government Data (OGD) projects succeed where they satisfy the existing demand for information and commitment [START_REF] Yu | The New Ambiguity of 'Open Government[END_REF]. In short, there must be an active partnership between government and private stakeholders.
It is also necessary to publicize the efforts made by PAs. The plan proposed by the CTIC Foundation incorporates access points (open data portals) and the measurement and follow-up of the action plan's impact. This last point requires establishing a set of indicators that will facilitate measuring compliance with the plan's targets.
In this context, we give some recommendations to address this complex task, synthesizing in a roadmap the implementation of a sort of OGD ecosystem between government and the concerned community (stakeholders). Then, based on this roadmap, the Open Data Implementation Guide (ODIG) provides 15 recommendations. If PAs follow the ODIG recommendations, they will reach a maturity level equal to or above 3, out of a maximum of 4, in the maturity model shown in [START_REF] Solar | A Model to Asses Open Government Data in Public Agencies[END_REF].
In Section 2 we present the state of the art on which we base the open data implementation guide presented in Section 3. Finally, we conclude in Section 4.
2 State of the art
Three sources provided information for the development of recommendations tailored to reality. The first relates to the bibliography on OGD, from which different proposals for open data implementation arise. The second relates to the results of applying the Open Data Maturity Model, known as OD-MM, from which the recommendations suggested by the model are collected. The last is a survey carried out among stakeholders and OGD experts around the world.
2.1 Experiences reported in the bibliography
In Obama's emblematic memorandum, described in detail in [START_REF] Mcdermott | Building open government[END_REF], the open government directive instructs PAs to include in their plans a link to a website with information about their Freedom of Information Act (FOIA) processing and processes. This includes a description of the staffing, the organizational structure, the agency's capacity to analyze, coordinate, and respond to such requests in a timely manner, and, if the PA has a significant backlog, milestones that detail how the agency will reduce its pending backlog of outstanding FOIA requests by at least 10% each year. The directive requires executive departments and PAs to take the following steps toward the goal of creating a more open government: (1) Publish government information online; (2) Improve the quality of government information; and (3) Create and institutionalize a culture of open government. Required components of the open government plans developed by PAs are: Transparency; Participation; Collaboration; Flagship initiative; and Public and agency involvement. In May 2012 the U.S. released the digital strategy of the Federal Government in the document entitled "Digital Government: Building a 21st Century Platform to Better Serve the American People". This document establishes 4 strategic principles that guide initiatives in digital government, as follows: Focus on information; Shared platform; Focus on the customer; and Security and privacy.
Another case is the government of Australia, which provides 13 recommendations as a guide to Government 2.0, summarized in [START_REF] Gruen | Engage: Getting on with Government 2[END_REF].
Based on the selected Open Government Initiatives (OGI), relevant literature, and interviews with several PAs, Lee & Kwak (2011) identify ten key challenges for open government implementation in three dimensions: organizational, technological, and government-wide challenges. [START_REF] Lee | An Open Government Implementation Model: Moving to Increased Public Engagement[END_REF] also present 15 recommendations that PAs can use to effectively implement their OGI.
As measures for a local (municipal) government, ten measures are proposed in ORSI (2010). In the same sense, but in a specific application scope in the U.S., the 8-step guide in [START_REF] Kaufman | Getting Started with Open Data: A Guide for Transportation Agencies[END_REF], designed for transportation agencies, shows how to open and maintain data, overcome potential obstacles, and create a relationship with users. The steps are: Find your data; Convert data; Test your output; Write up a license agreement; Publish and publicize; Update and modify as needed; and Create and maintain a dialogue.

2.2 Recommendations collected from the OD-MM maturity model

The OD-MM maturity model (Solar et al., 2012), applied to six PAs in three Latin American countries (Chile, Colombia and El Salvador), provided a diagnosis for each PA. Each diagnosis generates a corresponding roadmap with recommendations for continuing to evolve to the next level of the maturity model. The recommendations addressing the lower capacity levels detected in the application of the OD-MM model are typical of a more advanced stage of this new way of governing. It is therefore necessary to move towards an integrated, transparent and participatory State that solves the problems of citizens and private institutions.

From the recommendations obtained directly from the roadmaps automatically generated by the OD-MM model, the following are the suggestions most frequently generated in the application of the model:

1 Create training initiatives on issues related to OGD, such as the use of IT tools, digital communication systems, office automation, e-services, etc. Create an OGD-specific training plan identifying training needs and other pertinent matters, where the staff responsible for OGD training itself receives appropriate training.
2 Manage projects with established procedures. Create a Project Management Office (Letavec & Bolles, 2010) to ensure compliance with standard procedures in the management of all OGD projects of the organization.
3 Establish metrics to assess OGI. Prepare standards and compliance goals to measure the results of programs and initiatives. Create a regular and systematic assessment plan to identify a set of appropriate metrics to evaluate OGI performance, such as compliance with external regulations, among others.
4 Publish numerous open data to the community, covering the entire organization.
5 Create full indicators, with internal tracking. Perform a light analysis of results and propose evident improvement measures.
6 Offer documents and materials of an informative as well as strategic and technical character. Organize workshops with a more technical and/or business delineation.
7 Offer some self-financing opportunity, or give adequate information about the possibility of external financing usable in OGD project development.
2.3 Experts' Opinion
Consultations with stakeholders and OGD experts from 15 countries on four continents (America, Asia, Australia and Europe) provided first-hand perceptions of the social and economic impacts of open data and its eventual reuse, and recommendations for fostering OGD.
Survey Methodology
The methodology consisted primarily in building a poll that could be answered in a few minutes, in order to obtain a higher response rate. Most of the questions were closed; the only requirement was to mark one of the offered alternatives. In the end, the poll had 9 questions, of which two were open and optional. Built in a Google Docs format, the poll could be answered directly on the web. The second step was the identification of a group of OGD experts to whom the poll was delivered. Experts contacted before were favored, since this increased the chances of a response within the time limits.
The first three questions try to capture the importance of OGD for the expert, specifically its social and economic impact. It is also important to know the experts' opinion on issues such as the most relevant scopes, costs, benefits, risks and barriers for OGD implementation. Finally, they answered an open question about their immediate experience regarding specific social or economic results observed from OGD and/or its subsequent reuse. A final open question invited further comments to help promote OGI.
Survey Results
Each of the experts selected the three areas in which they thought OGD and its reuse could create the greatest impacts. As shown in Table 1, the areas of "Transport" and "Transparency" together account for a third of the mentions among the 15 scopes shown, followed at a distance by the environmental issue.
According to the experts polled, the most important benefits or advantages of OGD and its reuse for society are "Transparency" with 25%, followed closely by increased citizen "Participation" (20%). It is interesting that the experts also mentioned, with significant frequency (14%), benefits within PAs, reflected in the increased effectiveness and efficiency of public policies. On the other hand, "Updated information/knowledge" (rank 8 in Table 2) was an alternative not offered in the poll, but added spontaneously by respondents. The economic benefits "Economic development" and "Entrepreneurship", jointly mentioned by 18%, are a significant figure and consistent with the level of importance the same experts assign to the economic impact. Table 3 presents the main barriers or difficulties in initiating open data projects. As shown, the "Lack of political will" is the most frequently mentioned factor preventing the implementation of OGI. Second, and with similar percentages (16% each), come the "cultural" factors, the "lack of appropriate laws", and the "lack of leadership" (very similar to the "Lack of political will").
Table 4 presents the responses associated with risks, limitations and costs related to OGD in general. It is remarkable that "Sensitive data" appears as the main constraint or risk associated with OGI; second come the "processing costs" of large volumes of data, and far behind, in third place, the associated "investment". Some comments given by the experts are detailed below: • Difficulties in measuring impacts: there is consensus that impacts exist, but measuring, for example, the increase of trust in PAs does not seem easy. Other impacts associated with private benefits are measurable, but difficult to capture.
• In relation to the previous point, impacts within PAs are also mentioned, through the improvement of internal processes with respect to the organization and classification of information, making it more accessible. Additionally, fostering innovation in the country is not a minor issue. • More training and dissemination: several comments point to the need to sensitize authorities through seminars and dissemination and training campaigns on OGD issues. • Finally, other comments emphasize the urgency of these issues and the need to accelerate data opening processes; otherwise public pressure will become stronger, increasing the discredit of government institutions.
The Australian recommendation: "Make PSI open, accessible and reusable" is not different to "Public data opening ", from ORSI, or "consider conducting pilot projects and/or establishing centers for excellence" from Lee & Kwak (2011), or "flagship initiatives" in Obamas' memorandum. Just as "Find your data; Convert data; and Test your output" is only the way to operationalize this.
The proposal "Encourage public servants to engage online" [START_REF] Gruen | Engage: Getting on with Government 2[END_REF], is not different from the proposals "using tools of internal collaborative work" and "encourage internal participation in the city council" in ORSI (2010), and is similar to "integrate public engagement applications" [START_REF] Lee | An Open Government Implementation Model: Moving to Increased Public Engagement[END_REF].
Related to the "accessibility" issue, we have "create and maintain a dialogue" in Kaufman & Wagner (2012), "platforms of participation and citizen's collaboration" in ORSI (2010), and "use a democratic, bottom-up approach" in Lee & Kwak (2011).
3 Open Data Implementation Guide (ODIG)
The following are some of the principles and criteria considered for the ODIG design:
• Simplicity: The implementation time should not be too extensive; for example, if no new laws are required, initial results should be available within 18 months. • Quick-win: This principle means a quick initial development with some visible results that can help legitimize the initiative and obtain additional support from stakeholders. This implies the need for subsequent long-term development, but the experience acquired at the beginning, and the legitimization it brings, can facilitate the completion of the following steps. ODIG is a consequence of the OD-MM application, since it incorporates elements detected as weak in the pilot PAs, as well as elements of both the bibliographic exploration and the field research carried out through first-hand sources.
For the development and organization of ODIG, the roadmap displayed in the following section serves as a reference, conceptualized along the OD-MM maturity model domains. For this reason, ODIG is organized in three groups, corresponding to the three OD-MM model domains: the first is the "Institutional and Legal Perspective", including eight recommendations concerning organizational and management issues; the second is the "Technological Perspective", with two recommendations; and the third is the "Citizen's and Business Perspective", incorporating 5 recommendations on issues related to data reuse by the concerned community.
Nevertheless, and following the principle of simplicity already mentioned, ODIG develops the technological domain in a simplified way, focusing only on fundamental issues addressed to decision makers and trying to avoid technicalities that could obstruct its comprehension. The reasons for this are: (1) technology is by no means the main issue in OGD, since technology is just a means used to achieve it (Calderon & Lorenzo, 2010); and (2) technology changes constantly and repeatedly surprises the experts themselves; therefore, the risk is high when offering detailed technology standards that will be obsolete in the short term.
3.1 A Roadmap
The following are the general guidelines, called the roadmap, oriented to the formulation of ODIG. The order of this roadmap is only referential; it does not pretend to be exhaustive, nor is it necessary to complete all steps at the most developed level, and some tasks can certainly be performed in parallel:
1 Have an organization appropriate for building OGD, which need not be identical to the existing one used for managing traditional e-government activities. 2 Hire staff and generate a training plan to provide qualified professionals in OGD.
3 Articulate an institutional statement by the Presidency in favor of open government, as soon as possible; this should be part of the definition of objectives and of the strategy to follow on this topic (de la Fuente, 2012).
4 Have an interoperability platform between different PAs (desirable).
5 Open datasets. Prior to this, carry out market research on the most relevant and priority data that companies and the citizenship in general require.
6 Develop an OGD policy that includes the adoption of standard open formats for data and metadata, to facilitate their later reuse.
7 Construct an official OGD website that includes the results of a previous study following the best international practices in the field.
8 Establish an action plan to stimulate data consumption among companies and especially among infomediaries.
9 Create alliances and agreements with stakeholders from civil society and the private sector to promote specific data reuse projects of public value for citizens and/or PAs. [START_REF] Harrison | Open Government and E-Government: Democratic Challenges from a Public Value Perspective[END_REF] alternatively proposes that the planning and assessment of OGD be addressed within a "public value" framework.
10 Establish an initial diagnostic measurement of the PAs' maturity level regarding OGD, to serve as a baseline for the periodic measurement of expected progress and to facilitate necessary corrective and timely decision making (a minimal scoring sketch follows this list).
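As announced in point 10, the baseline measurement can start very simply. The sketch below, in Python, scores each OD-MM domain and takes the weakest domain as the overall level; the scoring rule and the example figures are our own illustrative assumptions, not the official OD-MM procedure.

# Illustrative baseline: overall maturity = weakest OD-MM domain (1..4).
domains = {
    "Institutional and Legal Perspective": 3,
    "Technological Perspective": 2,
    "Citizen's and Business Perspective": 2,
}

baseline = min(domains.values())
weakest = [d for d, s in domains.items() if s == baseline]
print(f"Baseline maturity: {baseline} of 4; focus on: {', '.join(weakest)}")

Repeating such a measurement periodically provides the progress indicator that point 10 asks for.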
This Decalogue is presented in generic, aggregated terms; therefore, it represents the general framework and starting point for governments developing the ODIG points.
3.2 Fifteen Recommendations of ODIG
Considering the first-hand expert opinions, the bibliographic research, and the development of elements in each of the two non-technological perspectives of the open data maturity model, there is a sequence and prioritization of steps that the executor must consider when implementing ODIG. The estimated time horizon ranges between 18 months and two years, depending on the starting point in each case:
1. THE EXISTENCE OF AN INSTITUTIONAL FRAMEWORK WITH A RECOGNIZED ORGANIZATION FOR OGD: This is the starting point for implementing an OGI with some probability of success sustained over time. It might be ascribed to an existing e-government initiative or another related organization. This institutional framework must generate an organizational structure in which formally defined positions and proficiencies cover the areas of management, planning and technology. For example, it is desirable to have a person in charge of PSI re-use (known as the PSI manager); a content manager who knows the procedures for data processing, with knowledge of databases and their applications, and of web portals; and, finally, a systems manager with competences in the catalog of equipment and IT systems able to support the storage and publishing of data.
2. EXISTENCE OF A RECOGNIZED LEADER IN CHARGE OF IMPLEMENTING AN OGD INITIATIVE: It is necessary to appoint a suitable and sufficiently empowered person for a position that requires not only technical skills, but also good political management to interact with different levels of the public sector and with organizations of civil society. This person should be responsible for developing a strategy and driving the implementation process.
3. FORMULATION OF AN OGD STRATEGIC DEVELOPMENT PLAN: When formulating a plan, the recommendation is to involve different social actors at diverse instances; namely, to hold seminars inviting diverse social organizations to make their contributions and, in parallel, to leave enough room on the website for citizen feedback. Although the process may seem slower, in the long term it will generate greater legitimacy. Additionally, it is worthwhile to establish early bonds with civil society, the actors of an OGI. Furthermore, the advice is to plan activities for the short, medium and long term. Short-term measures should, as far as possible, be visible enough to generate a positive impact on the population and thus more support for long-term actions. This approach must incorporate a communication strategy.
4. CONSTRUCTION AND DELIVERY OF THE NECESSARY LAWS TO CONGRESS FOR A BETTER OPERATION OF OGD: This activity may vary from country to country and in some cases be omitted; however, in general it is necessary to promulgate laws regulating the transparency of information and the protection of sensitive information. The OGD supervisor must seek legal advice for the development of these laws and promote their swift submission and promulgation. After the promulgation of the laws, the next step is training on and dissemination of them among the interested stakeholders, for a better understanding and interpretation of their scope.
5. PROMULGATION OF POLICIES AND INTERNAL REGULATIONS: Not only are the standards themselves important (i.e., those related to information management procedures and the conversion of information to standard formats); it is also important to keep evidence of compliance with them through various control mechanisms. Additionally, the pertinent authority must ensure a proper understanding of the standards. An issue related to this point is the formulation of an open data policy that includes the adoption of open standard formats for data and metadata to facilitate their later reuse.
6. TRAINING PLAN IN OGD: Several studies infer that the support of trained personnel is essential, something that in general has not been formally resolved. Some institutions consider that learning along the way is enough, or that the more advanced professionals in charge should be able to solve problems. Experience shows that, in the long term, this strategy is inadequate and may increase costs. The point, again, is to follow the plan. This implies that after about six months a significant number of key personnel will have been trained in OGD techniques, digital communication systems, IT tools, e-services, etc.
7. PROJECT MANAGEMENT OFFICE DEVELOPMENT: The OGD implementation requires the development of several projects; therefore, it is necessary to ensure compliance with standard procedures in the management of OGD projects. This addresses a weakness well documented in the field of software engineering, which reports a high percentage of IT project failures, or at least delays in meeting the defined goals.
8. HAVE A PERFORMANCE ASSESSMENT SYSTEM FOR THE PROJECTS: In general, it is possible that such a system already exists in the PA. However, the results of documented experiences indicate that often no formal metrics mechanisms are in place to measure project performance, nor are specific goals established.
9. DEVELOPMENT OF A STUDY OF REQUIRED ICT INFRASTRUCTURE CAPACITY: Decision makers must be aware of the need to safeguard that systems will have sufficient capacity to manage the demands and requirements of citizens and infomediary companies (i.e., avoid saturating the equipment with web service requests).
10. GRADUALLY INCORPORATE SEMANTIC TECHNOLOGIES: These technologies are available today and need to be incorporated, even if their use is initially on an experimental basis, to train the technical staff of the PA. With this, the PA can readily reach the 4- and 5-star levels of Berners-[START_REF] Berners-Lee | Linked Open Data. What is the idea? Retrieved from[END_REF]. The use of these technologies also makes it possible to better manage a multiplicity of catalogs from different sources, from both national and local governments, as well as from other State authorities, private sources, etc., if the option is a distributed model of catalogs. In other words, it facilitates interoperability, data aggregation and catalog management, plus their updating from external sources (a small publication sketch follows recommendation 15).
11. DEVELOPMENT OF A FIRST OGI: This first initiative, recommended as a pilot, must be emblematic and have significant impact in the short term. Which information is most relevant and at the same time easiest to open will vary from country to country; however, it is important to consider citizens' participation in the development of this initiative, at least in some of its stages. We suggest identifying data categories that are less sensitive but have a high impact, applying the Quick-win criterion. This will avoid controversies and complexity that could delay the project.
12. EXISTENCE AND MANAGEMENT OF DATASET INDICATORS FOR ACCESS AND/OR DOWNLOADING, TOGETHER WITH DATA MONITORING: Monitoring access to and downloading of data is essential to assess the level of success, especially of the first initiative, and allows taking appropriate corrective actions. It is highly recommended to complement this with periodic polls of public opinion (re-users) (a simple indicator sketch follows recommendation 15).
13. PROMOTION ACTIONS FOR RE-USE: Offering various documents and materials, both informative and of a strategic and technical character, encourages the use of OGD. Speeches, workshops, seminars and application contests are some of the initiatives that government must make available systematically to support developers and data reuse. Likewise, the recommendation is to publish success stories of great impact on the portal site, where possible with metrics that establish the benefits and impacts they had for users.
14. EXISTENCE OF A CHANNEL FOR COMPLAINTS AND CONFLICT RESOLUTION: The portal must have available at least a form with clear instructions, helpful to channel the difficulties that developers and users have with data reuse. This mechanism will be essential for improving aspects related to reuse.
15. EXISTENCE OF A FORMAL CHANNEL FOR PARTICIPATION AND COLLABORATION OF CIVIL SOCIETY: As mentioned above, the participation and cooperation of citizens should be the cornerstone of any OGD project to ensure its success. Procedures for handling and verifying opinions and suggestions must be available for consultation before future improvements. Opinions should have a rating system (public vote).
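To make recommendation 10 more tangible, the sketch below shows, with the Python rdflib library, how a single dataset could be described with semantic vocabularies. The dataset URI, titles and property choices (DCAT and Dublin Core) are illustrative assumptions, one common option rather than a prescribed standard.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

# Hypothetical catalog entry; a real PA would use its own URIs and metadata.
ds = URIRef("http://data.example.gov/dataset/public-budget-2012")
g.add((ds, RDF.type, DCAT.Dataset))
g.add((ds, DCTERMS.title, Literal("Public budget 2012")))
g.add((ds, DCTERMS.publisher, Literal("Ministry of Finance")))
g.add((ds, DCAT.keyword, Literal("budget")))

print(g.serialize(format="turtle"))

Likewise, the access and download indicators of recommendation 12 can start as simple counts derived from the portal's access log; the log format below is an assumption for illustration.

from collections import Counter

# Hypothetical access log entries: (dataset_id, action).
log = [
    ("public-budget-2012", "view"),
    ("public-budget-2012", "download"),
    ("transport-schedules", "download"),
    ("public-budget-2012", "download"),
]

downloads = Counter(ds for ds, action in log if action == "download")
views = Counter(ds for ds, action in log if action == "view")

for ds in sorted(set(views) | set(downloads)):
    print(f"{ds}: {views[ds]} views, {downloads[ds]} downloads")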
4 Conclusions
One of the recurrent elements in the bibliography is political leadership, cited as critical to a successful OGD implementation; it is present in all the proposed OGD implementation plans. This element also emerges as one of the weaknesses found in applying the OD-MM maturity model, and all the OGD experts polled emphasize it as well, so its presence in the ODIG is natural. The new elements in the actions proposed by the ODIG are the formulation of OGD training plans, the formulation by the PA of a strategic OGD development plan, and having a performance evaluation system for OGD projects, including the development of a PMO.
The experience of applying a pilot to six PAs in three Latin American countries demonstrates that the presented ODIG addresses the weaknesses detected in the diagnosis of these PAs. Therefore, by following the actions proposed by ODIG, these PAs can be expected to reach maturity level 3, or come very close to it (out of a maximum of 4).
Table 1: Scope of OGD Higher Impacts
Rank Scope %
1 Transport 17%
2 Transparency 16%
3 Environment 9%
4 Culture and Recreation 7%
5 Public Administration 7%
6 Meteorology 6%
7 Tourism 5%
8 Delinquency 5%
9 Education 5%
10 Finances 5%
11 Health 4%
12 Business 4%
13 Properties and Land Registry 4%
14 Political scope 4%
15 Scientific 2%
Total 100%
Table 2: OGD Benefits
Rank Benefit %
1 Transparency 25%
2 Participation 20%
3 Trust 17%
4 Efficiency and effectiveness of public policies 14%
5 Economic development 12%
6 Entrepreneurship 6%
7 Quality of life 3%
8 Updated information/knowledge 3%
Total 100%
Table 3: Barriers to OGD Implementation
Rank Barriers %
1 Lack of political will 20%
2 Cultural problems 16%
3 Lack of laws and regulations 16%
4 Lack of leadership 14%
5 Lack of qualified personnel 12%
6 Ignorance 12%
7 Lack of confidence 8%
8 Very high costs 0%
Total 100%
Table 4: OGD Costs and Limitations
Acknowledgements. The authors would like to thank Gonzalo Valdes, Gastón Concha, Cristián Torres, and José Gleiser for their work in the project. This work was partially supported by the grants DGIP 241142, International Development Research Center (IDRC/CDRI) with the collaboration of Inter-american Organization for Higher Education (OUI). | 30,850 | [
"1004237",
"1004238",
"1004239"
] | [
"406898",
"211688",
"489127"
] |
01490970 | en | [
"shs",
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01490970/file/978-3-642-40358-3_20_Chapter.pdf | Karin Axelsson
email: [email protected]
Ulf Melin
email: [email protected]
Ida Lindgren
email: [email protected]
Stakeholder Salience Changes in an e-Government Implementation Project
Keywords: e-government project, e-government implementation, stakeholder salience, IT driven change
Introduction
Many studies of information technology (IT) implementation projects have focused on users' reluctance to use new systems and their resistance towards changes in working routines and processes [10; 11]. There have been numerous attempts to explain the reasons behind such change inertia in IT projects [START_REF] Kim | Investigating User Resistance to Information Systems Implementation: A Status Quo Bias Perspective[END_REF], in both the private and the public sector. The argumentation has often been that reluctant groups are afraid of new things [START_REF] Marakas | Passive resistance misuse: Overt support and covert recalcitrance in IS implementation[END_REF] or negative because they risk losing power, freedom of action or influence [2; 16]. These explanations of failure and success are applicable to e-government projects as well [START_REF] Ho | Toward the Success of eGovernment Initiatives: Mapping Known Success Factors to the Design of Practical Tools[END_REF]. In this article we discuss a case which started out as yet another example of a group of agency employees being rather negative towards the introduction of a public e-service and doubting their abilities to change work practices. However, during the process this group's position changed. They went from being a marginalized group, in their own as well as in others' eyes, to becoming influential and modern IT users. We use this empirical example to discuss the ways in which IT can have both expected and unexpected effects. By analyzing our case we show that an implemented public e-service, besides aiming to give benefits to different stakeholders, also changes the role of a professional group, this group's self-image, and the way others apprehend them as a professional group. This understanding renders implications for other e-government development and implementation projects, as it illustrates that technology can transform marginalized groups into powerful ones.
When discussing different stakeholders in e-government projects, we often distinguish between stakeholders with visibility and power to influence the result and stakeholders without such opportunities. Building on Mitchell et al.'s [START_REF] Mitchell | Toward a theory of stakeholder identification and salience: Defining the principle of who and what really counts[END_REF] argumentation, stakeholder salience depends on the stakeholders' degree of power, urgency and legitimacy towards a certain issue. In relation to e-service design and implementation, a truly salient stakeholder possesses power to influence the process, experiences it to be an urgent matter and has legitimate claims to get involved in the process. A stakeholder that has none of these three attributes is, on the other hand, not salient at all. Previous studies show that stakeholder salience differs over time in a project [START_REF] Kamal | Analyzing the role of stakeholders in the adoption of technology integration solutions in UK local government: an exploratory study[END_REF], but also that some stakeholders might remain invisible throughout the project and also afterwards [START_REF] Axelsson | Public e-services for agency efficiency and citizen benefit -Findings from a stakeholder centered analysis[END_REF]. Kamal et al.'s [START_REF] Kamal | Analyzing the role of stakeholders in the adoption of technology integration solutions in UK local government: an exploratory study[END_REF] study intends to describe four case organizations' perspectives so that other researchers can relate their experiences to this. Our study has similarities with Kamal et al.'s as both focus on detailed stakeholder analysis. However, we do not consider stakeholder influence to be the only affecting aspect in the studied case. Instead, we contribute with the notion of the interaction between stakeholders' possibilities to influence the project outcome and IT's force to change the state of things when introduced in a government setting.
E-government implementation projects often trigger changes in work practices and in the organization of work. When reviewing research in the information systems (IS) field, we identify many examples of changes that occur in work practices when IT systems are introduced or changed. Acknowledging that IT has the possibility to change how people perform their work tasks, how processes are (re)structured, and how work practices are organized has been central in IS research for decades. Despite this being a well-researched area, Vaast and Walsham [START_REF] Vaast | Representations and actions: the transformation of work practices with IT use[END_REF] point out that there are still few studies explicitly illustrating and discussing how IT use changes work practices. More detailed studies of stakeholders' IT adoption in e-government settings are also requested by Kamal et al. [START_REF] Kamal | Analyzing the role of stakeholders in the adoption of technology integration solutions in UK local government: an exploratory study[END_REF].
The purpose of this article is to illustrate that, besides e-government projects' aim to increase agency efficiency and citizen benefit, implementing public e-services might also change the involved stakeholders' salience. The article addresses this issue by studying how a marginalized, reluctant stakeholder group is involved in an e-government project in a way that actively influences the design of the implemented e-service. Together with these stakeholder influence aspects, which turn the stakeholder from a reluctant user into an empowered and strengthened user, we find IT-driven change aspects, which imply that the use of the implemented e-service also triggers changes in this stakeholder group's salience in the organization.
The article is organized in the following way: In Section Two we discuss a selection of views from previous research on stakeholders' roles in IT projects and IT's impact on social and organizational change. The research approach and case study design are reported in Section Three. The empirical findings are presented in Section Four and in Section Five the findings are discussed. The article is concluded in Section Six.
2 The Roles and Influence of Stakeholders and IT
User reluctance and resistance towards new IT systems are often proposed as reasons for IT projects' failure [START_REF] Kim | Investigating User Resistance to Information Systems Implementation: A Status Quo Bias Perspective[END_REF]. Signs of user resistance are likely to occur early in IT projects, expressed as fear and negative opinions towards the future IT system. Such user resistance and negative rumors prior to IT implementation are especially threatening to the project's success [START_REF] Markus | Technochange Management: Using IT to Drive Organizational Change[END_REF], as negative users might oppose and hinder the project from proceeding. Leonardi [START_REF] Leonardi | Why Do People Reject New Technologies and Stymie Organizational Changes of Which They Are in Favor? Exploring Misalignments Between Social Interactions and Materiality[END_REF] claims that users shape their views of new technology in various ways. Users discuss technology with colleagues and this influences their perceptions of it, but they also use the technology. The experience they get from using technology might change the perception they formed through social interaction with others. These misalignments between the information generated in users' interactions with others and with the technologies' material features can lead to the failure of planned organizational change (ibid.). This is in line with Markus' [START_REF] Markus | politics, and misimplementation[END_REF] early claims that user resistance can be explained as the interaction between system characteristics and the social context of its use.
Besides the aim to define and explain the reasons for user reluctance and resistance, there have also been many attempts to find ways of avoiding or decreasing it. The main theme in these studies has been to involve users, since the reluctance has been seen as a result of a lack of information and of users' limited possibilities to influence the process and the outcome. If users are allowed to participate in the project, less user resistance is expected to occur [START_REF] Cavaye | User Participation in System Development Revisited[END_REF]. Identifying and involving users and other stakeholder groups in IT projects is a key issue that relates to stakeholder salience [1; 18; 22]. Stakeholders might possess more or less power, urgency and legitimacy, as mentioned in the introduction, but participation in development and implementation projects might also change stakeholder salience. By inviting stakeholders to participate, and by taking an active part in such work, a stakeholder group might increase their power in the organization and also perceive the project as more urgent.
User participation in IT projects has, thus, been proposed as a solution to the user resistance problem, but there is no definite causality between user participation and user satisfaction. Many studies question the effects of user participation on system success [4; 13] and discuss the paradoxes of participatory practices [e.g. 6]. System developers and managers might also have differing motives for promoting participation [START_REF] Land | Evaluation in a socio-technical context[END_REF]. This implies that participation in itself does not necessarily give all participants the possibility to influence the result. Sefyrin and Mörtberg [START_REF] Sefyrin | We do not Talk about this" -Problematical Silences in e-Government[END_REF] have studied a marginalized user group that participated in an e-government project, but still had no power to influence the outcome. In their case, a group of administrative officers in a public agency possessed crucial knowledge for the IT project to succeed and was therefore asked to participate in the project. Nevertheless, they were not in any sense rewarded or recognized in the project. Instead, they risked being reorganized, dismissed or offered an early retirement after the project had ended (ibid.). This is an explicit example of a participating stakeholder group that does not gain any stakeholder salience from its participation. The case, thus, shows that marginalized stakeholder groups might remain without salience even though they participate in the project, implying that there is no given causality between participation and salience.
This leads us from stakeholder influence aspects to IT-driven change aspects. IT has the power to change what we do and how we perceive things [START_REF] Orlikowski | IT and the structuring of organizations[END_REF]. As discussed in the introduction, many studies have focused on what happens when IT systems are introduced in organizations. IT implementation is done with intentions to support users' work tasks, which might include changes in work practices and organization. But not all changes are planned and expected: when IT is introduced, unplanned and unexpected changes of both a positive and a negative nature occur.
Among other challenges, Leonardi and Barley [START_REF] Leonardi | Materiality and change: Challenges to building better theory about technology and organizing[END_REF] outline that researchers need to study the relationship between IT development and use in order to understand how the practices of designers affect users and vice versa. When differentiating between development and use in order to focus on, for example, IT-driven change aspects, we risk missing important findings. Even though the IT system (or e-service) is primarily developed during the development project, continued development might occur when it is implemented and in use. This could be conducted either by the system developers, who adjust the IT system according to the users' needs, or by the users, who modify the IT system during use (ibid.). This implies that user experiences can affect re-design, meaning that development activities continue after implementation. Likewise, studies of IT use that start after implementation often treat the IT system as a black box, in the sense that the understanding of the development process is limited or comes from secondary sources. We use this as a motive for our study, which ranges from development through implementation to use of an e-service.
Vaast and Walsham [START_REF] Vaast | Representations and actions: the transformation of work practices with IT use[END_REF] explain how users might experience dissonance between their representations, practices, and IT use when they use IT systems in a context that is perceived as changing. In such cases, the users will transform their use of IT so that consonance is re-established. This is another explanation of the fact that changes occur both during implementation and during use of IT systems. As shown in their study (ibid.), the dissonance can occur due to perceived changes in the users' context (e.g., the work practice), but it can also be caused by changes in users' own actions or in other users' actions. A third explanation put forth is that dissonance can arise from unintended consequences of actions (ibid.). By discussing this in terms of consonance and dissonance, Vaast and Walsham illustrate that we have to study users' understanding of their work tasks and IT systems in order to understand how, and to what extent, IT use can initiate practice change.
3 Research Approach and Case Introduction
In this article we analyze findings from a case study performed at a Swedish university. We have conducted a qualitative, interpretive [START_REF] Walsham | Doing interpretive research[END_REF] study of a process in which a public e-service for handling student anonymity during written exams has been developed, implemented, and used. The project was called 'Anonymous Exams' by the university management. At the studied university, 100,000 written exams are administered each year, which makes this an extensive process. The e-service that was developed to handle student anonymity electronically consists of several components: 1) one part handling the information transfer from a student administrative IT system to a mobile IT device (a Personal Digital Assistant, PDA) that is used on site during the examination events, 2) a web-based interface where students sign up for the exam, and 3) another web-based interface that the teachers and administrators use when reporting the results. The case study covers several stakeholder groups which were involved in the development project and affected by the different components of the e-service, but in this article we focus on one of these stakeholder groups: the examination supervisors. Thus, we also focus on the IT solution that was developed for this user group: the PDAs. A single case study leaves us with no possibilities to draw statistically validated conclusions, but this is not our intention. Instead, we use the case in order to illustrate and discuss how stakeholder influence aspects and IT-driven change aspects can interact and result in changes in stakeholder salience. Furthermore, an advantage of case study research is that a well-written case study has 'face validity' [START_REF] Myers | Qualitative Research in Business & Management[END_REF], implying that it represents a real story that people can identify with.
The origin of the initiative to develop this e-service was student demands for higher legal certainty in the marking process of written exams. Students argued that teachers cannot be totally fair in their marking as long as they know who the student is. Students were afraid that some of them could be "punished" with a lower grade if they had been critical towards the teacher, or that some of them would receive a higher grade than appropriate because the teacher liked them. Thus, the student demand for anonymity is in line with a general strive for equal opportunities in higher education; i.e., no one should be discriminated against because of his or her sex, age, sexual orientation, ethnicity, religion or other faith, disability or social background. The student demand for anonymity was articulated through the students' union and resulted in a strategic decision, made by the university's vice-chancellor, that an e-service should be developed to guarantee student anonymity during the marking process of all written exams. A project group was formed consisting of a project owner, a project leader, a systems developer, administrative personnel, a representative of the examination supervisors and a central examination administrator. A reference group was also organized, consisting of representatives of the teachers, the students' union, and examination supervisors from all faculties. This implies that the following stakeholders were represented in these two groups: students, teachers, course administrators, examination supervisors, and the university (represented by the project leader, systems developers, and technical personnel).
The examination supervisors' task during the examination event is to monitor the students in order to control the process and prevent cheating. In short, the development, implementation and use processes that we have studied resulted in the following e-service and work process for the examination supervisors: The PDA is a mobile device that the examination supervisors use on the spot during the examination event. First, the examination supervisors load and sync the PDAs against a database with information about which students have signed up for the examination. The PDAs are equipped with card readers with which the Student Identity Cards can be read. When the students arrive, the examination supervisors can check that each student is in the right place by scanning these cards with their PDAs. The PDAs are designed to signal that the students have arrived at the right venue by producing an audio signal (a 'beep' sound). If a student arrives at the wrong venue, the PDA responds with a different audio signal. When the students are seated and handed their exams, the examination supervisors supply the students with their anonymous ID (AID) by scanning their Student Identity Cards once more. The students, and the examination supervisors, write the students' AIDs on the cover of their exams. When the students are done writing their exams, they hand them in to the examination supervisors, who scan the Student Identity Card with the PDA once more in order to register that the student has handed in the exam. After all students have handed in their exams, the examination supervisors synchronize the PDAs against the database once more. The rest of the process, in which teachers mark the anonymous exams and course administrators register the results before the anonymity is revealed, is not further discussed here. As will be discussed later in the article, this re-designed work procedure differs a lot from how the examination supervisors used to work before this project.
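As a reading aid, the following sketch reconstructs the supervisor-facing logic described above. It is a simplified, hypothetical reconstruction: the names, return values and data layout are our own, not taken from the actual system.

class ExamSession:
    def __init__(self, venue, registrations):
        # registrations: card_id -> (registered_venue, anonymous_id),
        # loaded when the PDA is synced against the student database.
        self.venue = venue
        self.registrations = registrations
        self.handed_in = set()

    def check_in(self, card_id):
        # First scan: verify registration and venue.
        entry = self.registrations.get(card_id)
        if entry is None:
            return "refuse"  # no prior registration, no participation
        registered_venue, _ = entry
        return "beep" if registered_venue == self.venue else "wrong venue"

    def assign_aid(self, card_id):
        # Second scan: retrieve the anonymous ID for the exam cover.
        return self.registrations[card_id][1]

    def hand_in(self, card_id):
        # Third scan: register that the exam was handed in.
        self.handed_in.add(card_id)

The design hinges on the prior registration step: because the AID only exists for registered students, the PDA check enforces the rule that unregistered students cannot take part in the examination.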
The case study was conducted from 2008 until 2010. During the pre-implementation phase, the authors followed the development project (the project group and the reference group) in their project activities. During the post-implementation phase, the authors returned to the case in order to study the stakeholders' implementation and use experiences. Data was generated in several different ways. Six project meetings were observed and notes from these observations were taken. During the last project meeting, respondent validation [START_REF] Sefyrin | We do not Talk about this" -Problematical Silences in e-Government[END_REF] of the findings was carried out. Data was also collected through observations of three information meetings open to university employees, one systems training activity for examination supervisors, and two evaluation meetings. 24 interviews were conducted during the case study; they lasted 30-60 minutes and were recorded. In addition, project documentation as well as e-mails sent from university employees to the project group were analyzed. Responses to a qualitative, open-ended questionnaire sent to all examination supervisors a year after the implementation were also analyzed. Altogether, this case study design resulted in rich empirical material covering the development project from several perspectives. The empirical data is of a qualitative nature and has been analyzed with an interpretive approach [e.g., 26].
Empirical Findings
The examination supervisors are contracted by the university and temporarily hired for each examination event. This group mainly consists of senior citizens (mostly retired women working on a temporary basis) who want to earn some extra money. Their responsibility is to supervise the students during the examination event in order to control the process and prevent cheating or the use of prohibited aids. Prior to the development of the PDAs, their work was entirely paper-based. It was therefore obvious that this group faced the largest changes in their work tasks due to the e-service and the re-designed process. However, very few outside the project group were concerned with this fact, implying that the examination supervisors indeed belonged to a marginalized group prior to the project. In the pre-implementation phase, this group expressed fears that they would not be able to learn the new process and how to use the new technology. The degree of IT maturity differed between individuals in this group, but was overall low. The examination supervisors were afraid that the re-designed process would lead to increased time pressure during the examination, as the registration of each student in the PDA would take some time. Their greatest fear concerned how they were supposed to solve technical problems that might occur when they were alone in the classroom with a lot of students eager to start working on their written examination. They were not sure what kind of help they could get, and from whom. Besides these fears regarding the transition from manual to IT-based work, the examination supervisors also expressed positive expectations, as they hoped to be able to influence the examination process once the e-service was implemented. For example, they hoped that the re-designed process would make it easier for them to refuse students who had not registered for the exam in advance to take part in the examination. These students are not allowed to sit the examination, but the paper lists often contained invalid information, and students could claim that they were registered even though their names were not on the attendance list. In the new process, the AID is generated when the student registers for the exam and later retrieved when the Student Identity Card is scanned by the PDA, which means that no student can be permitted to participate without this card and prior registration. In spite of these positive expectations, the dominating feeling towards the e-service was fear. The examination supervisors were worried that the initial problems when introducing the new technology would last too long and that this could make some of them quit working.
Despite the examination supervisors being characterized as a marginalized group in the organization, the members of the project group took their expectations and fears seriously. They were worried that several of them would quit their jobs if the design of the work process and the e-service was not intuitive and easy to learn. Hence, the examination supervisors were seen as a user group whose needs and wishes had to be met to the extent possible. During training sessions organized for the examination supervisors close to the end of the development project, the participants were dissatisfied with the design of the interface of the PDAs and protested against using the PDAs in their current design. Based on the examination supervisors' feedback, the interface was considerably re-designed late in the project. The examination supervisor representative in the project group was a strong driving force in this re-design of the PDAs and worked closely together with the systems developer on this task. This representative turned out to be very important for promoting the examination supervisors' interests. She was selected as a representative in the project group based on her formal position as an examination supervisor, but she turned out to be a real project champion with a lot of prior experience in development projects.
In a questionnaire sent out to the examination supervisors when the re-designed working process and the e-service had been in use for a year, a majority responded that the PDA was an invaluable tool in their work and that they could not imagine going back to the old ways of working. Some individuals reported that they had initially been skeptical towards the changes, but that they now had only positive associations with the e-service. The examination supervisors were very content with the training they had received on how to use the PDA and considered it easy to learn and use. They emphasized that the PDA was a useful tool for them and used adjectives such as "fast, smooth, supportive, easy to work with, professional, modern, and good" to describe the e-service. Several respondents also reported that their work had become less stressful, safer and more trustworthy. Interestingly, the respondents also reported that their work required more precision and carefulness after the implementation of the PDA.
The main advantage of the new ways of working was that the entrance procedure had become less troublesome when using the PDAs. The examination supervisors could now see information about each student when scanning their Student Identity Cards with the PDA. Based on this information, the entrance procedure was now faster and easier than before; paper lists of the expected participants were no longer needed, and the audio signal from the PDA told the supervisor whether the student was expected to participate and whether she/he was in the correct room. Some respondents also experienced that the students' behaviour had improved as a result of the changes; e.g., one respondent reported that "Previously, unregistered students tried to sneak into the room or obstinately tried to maintain that they had registered for the exam even though they were not on the registration list. This behaviour has ceased."
Overall, the examination supervisors were very content with how things had turned out with the PDA and the changed process. One questionnaire respondent wrote that "It's fun; you feel more engaged, a few more tasks, also good for the students". Another respondent wrote that "They [the students] probably didn't expect that an 'exam lady' would be able to handle a palm. We sort of have more authority now" and "Now when I know the routines I believe that the work is easy, I feel 'modern', somehow".
Discussion
Based on the empirical findings reported above, it is obvious that the examination supervisors' attitudes towards the project and its outcome changed between the pre-implementation and post-implementation phases. In the beginning of the project, this actor group displayed a more or less reluctant and hesitant attitude towards the changes. It is easy to trace their doubts about their future work to fears of the new e-service and the re-designed process, which is a common reason for people's reluctance towards change [START_REF] Marakas | Passive resistance misuse: Overt support and covert recalcitrance in IS implementation[END_REF]. The feelings of fear were mostly connected to uncertainty about having sufficient skills and competence to learn how to use the technology. We see no signs of fear regarding, for example, the risk of losing power, freedom of action or influence, which are other common explanations for change inertia [2; 16]. A possible explanation could be that the examination supervisors did not possess any power or influence in the organization prior to the project. The situation in which a stakeholder group, prior to an implementation project, is uncertain and afraid of not being able to cope with new demands, and then, after the implementation, experiences that this fear did not come true, is probably not unusual. Nevertheless, an IT project in general, or an e-government project in particular, might fail if such negative expectations take over and threaten the acceptance of the outcome [8; 15]. In the studied case, the risk of this happening was quite low, since the examination supervisors as a group have few connections to other stakeholders. Their formal status in the organization prior to the project was low, as they are temporarily hired on contract and rather easy to replace. Although their position was not a potential threat to the success of the project, it would have been a huge drawback if many of the examination supervisors had resigned all at once. Thus, the fact that the project group recognized the examination supervisors as the stakeholder facing the most severe changes in their work, and also as the group with the least IT experience, was crucial. After the e-service implementation, the examination supervisors express that they are satisfied with the changes. No one wants to return to the old working process, and they claim that they are proud and enjoy their work even more than before. They changed their view of the project and of the e-service developed and used, which resembles stakeholders' dynamic role as discussed by Kamal et al. [START_REF] Kamal | Analyzing the role of stakeholders in the adoption of technology integration solutions in UK local government: an exploratory study[END_REF].
When analyzing the examination supervisors' stakeholder salience, it is evident that they did not possess the salience attributes [START_REF] Mitchell | Toward a theory of stakeholder identification and salience: Defining the principle of who and what really counts[END_REF] at the very beginning of the project. The IT project was not initiated as a response to any needs or requirements that this group initially had. On the contrary, the examination supervisors did not express any need for the new working process or e-service before the project started. Neither did they have any formal power to initiate such a project, nor would their claims have been regarded as legitimate. As mentioned above, this stakeholder group could instead be seen as a marginalized group in the organization in many respects. However, early in the project, the examination supervisors were identified as a group that would face much change, and they were therefore brought into focus and partly prioritized during the e-service development. In retrospect, this might have several explanations: 1) the other user groups (course administrators, teachers, and students) were all difficult to engage in the project, 2) the systems developer was particularly interested in developing the technical PDA solution, and 3) the examination supervisor representative was a strong force during the development phase. All these aspects interplayed in the same direction, making the examination supervisors more influential on the design of the PDA than anyone would have expected from the beginning. This is a good example of the fact that stakeholder salience might change over time [1; 7]. During the project, the stakeholder salience of the examination supervisors increased radically, from being a marginalized to an influential group. This can be explained by the interaction between their involvement in the development project and the changes imposed by the implemented IT solution.
As a result of the re-designed process, the examination supervisors' work content was completely changed. There are several new IT-based operations that they now have to conduct, where the process before was mainly about ticking off a list and watching for cheating students. The examination supervisors now perceive their work situation as requiring much more precision and carefulness, which can be seen as a sign of the increased complexity of the work content. Their part in the administrative process of written examinations has become much more active and transparent thanks to the e-service. This has changed the examination supervisors from being passive guards of the examination event to possessing an active and important role in the university's educational processes. These changes have nothing to do with the influence the supervisors had on the PDA design, which was focused on interface and interaction issues. Instead, this is a consequence of the changed working process in combination with the new e-service. This is an illustrative example of how technology-driven organizational change activities might occur and be viewed from different perspectives [START_REF] Markus | Technochange Management: Using IT to Drive Organizational Change[END_REF]. It is noticeable that none of the changes in the working process were implemented in order to achieve these benefits for the examination supervisors. Nevertheless, they did occur and are appreciated as positive aspects of the changed work content. This is also an example of a beforehand unintended, but realized, benefit [17; 20].

One aspect of the above-mentioned changes in the examination supervisors' working process is that these changes not only influenced their notion of work satisfaction. The changes also made the students look at the supervisors with new eyes. Prior to the project, some students had tried to convince the supervisors to let them participate in the examination even though they had not registered their attendance prior to the examination. They begged, yelled, and even lied in order to be able to write the exam. This was a true problem for the supervisors, who before the implementation mentioned that a possible benefit of the project would be to gain more authority towards the students. Thanks to the PDA and the changed administrative process, this expectation came true. The supervisors now experience that the students obey them much better and perceive them as more legitimate and powerful. Hence, the examination supervisors' role towards the students has changed.
Changes in how others, in this case the students, view us also influence how we perceive ourselves. What started out as the examination supervisors' main source of concern, being able to handle the PDA or not, turned out to be the key element in their positive judgment of the outcome. After the implementation phase, it was the use of the PDA that was emphasized as most positive, both regarding its usability and its implications for the process being safer, more trustworthy, and efficient [cf. 21]. They explained this as a transformation they had gone through, from being a technology-hostile 'exam lady' to a modern IT user. They commented upon the fact that this had also influenced their relation to technology outside their work. This could be seen as an example of dissonance [START_REF] Vaast | Representations and actions: the transformation of work practices with IT use[END_REF] between the re-designed process and the e-service, on the one hand, and the old image of the supervisors on the other. Maybe it was this dissonance that made the supervisors start viewing themselves differently and, consequently, acting with more authority. In any case, the result was a changed self-image.
Conclusions
In this article we have shown how a marginalized stakeholder, who at the beginning of an e-government implementation project lacked power, urgency, and legitimacy, can still turn into a salient actor during the process. We have identified several types of change related to the studied stakeholder group. They changed the way they viewed the project, going from a reluctant and hesitant attitude to a sense of satisfaction and pride in their PDA and working process. The examination supervisors started this journey as a somewhat marginalized group that did not have a prominent role in the planned project, but they were prioritized by the systems developer, who at a late stage of the project involved them in the design of the PDA. This made their stakeholder salience increase during the project. In the old process, the examination supervisors mainly served as passive guards making sure that the process and the rules were followed.
After the e-service implementation, the supervisors were empowered with distinct assignments as an important and legitimate actor in the examination process; thus, the work content had shifted [cf. 15]. As a result of these IT- and process-related changes, both the role of the supervisors as perceived by others (the students) and their self-image changed. They went from being a marginalized and reluctant stakeholder to an influential and modern IT user.
The purpose of this article has been to illustrate that, besides e-government projects' aim to increase agency efficiency and citizen benefit, implementing e-services might also change the salience of involved stakeholders. We have done this by focusing on one stakeholder group's transformation during an e-government project. The main conclusion from this case is that in e-government projects we need to acknowledge both stakeholder influence aspects and IT-driven change aspects in order to understand the effects and consequences.
Finding ways to involve stakeholders and enable them to influence the design and development of e-services and working processes is an important but complex task, since there are many stakeholders with differing needs and possibilities to participate in e-government settings. This study shows that stakeholder involvement in itself is not enough, since both intended and unintended IT-driven changes will occur during and after the project. Stakeholder influence aspects and IT-driven change aspects are intertwined. This makes it necessary for any e-government project to address the notion of stakeholder involvement in decision-making during the development and implementation phases, but also to acknowledge the force of IT and e-services to change how things and people are perceived during these phases. The view of a planned and rational change project is here challenged by an emergent, dynamic, and intertwined process [cf. 17].
We have illustrated these matters with a "successful" case, in which a marginalized group turned out to be a winner in the end. The next step would be to study less successful cases in order to find out whether the intertwined relation between stakeholders and IT works in both directions, turning marginalized actors into powerful ones but also decreasing the authority and prominence of others.
"994114",
"994115",
"995637"
] | [
"302346",
"302346",
"302346"
] |
01491002 | en | [
"sdv"
Valérie Schini-Kerth; Marc Derive (INOTREM); Alessandro Corti

Eugenia Belcastro; Maria Franzini; Silvana Cianchetti; Evelina Lorenzini; Silvia Masotti; Vanna Fierabracci; Angela Pucci; Alfonso Pompella

S-Nitrosoglutathione: Eugenia Belcastro; Wen Wu; Caroline Perrin-Sarrado; Isabelle Fries; Pierre Leroy; Isabelle Lartaud; Caroline Gaucher

Mario Pongas; Caroline Perrin-Sarrado; Sabrina Ceragioli; Mauro Ferrari; Fulvio Basolo; Michele Emdin; Aldo Paolicchi

- "Monocytes/macrophages activation contributes to b-gamma-glutamyltransferase accumulation inside atherosclerotic plaques"
- "Vascular oxidative stress toward S-nitrosothiols bioavailability"
ACKNOWLEDGEMENTS
It is hard to write acknowledgments, for many reasons. First, because in retracing my course through research I cannot help thinking of all those who have taken me under their wing and of how many people have helped shape the person I am today, and it is impossible to list them all in a page or so (but they know...). Second, because even with the enormous joy of reaching a milestone such as the PhD, you always fear losing something: the friends and "colleagues" of these three years, the professors in the fruitful relationship of "give and take" (broadly defined), the very environment of an experience that is a fundamental part of life.
The work of a doctoral thesis is among the most challenging, not so much for its extension in time or for the continuous intellectual and physical effort it requires, but rather because it is the result of a training program in which you choose to get personally involved. In this way, professional growth is just one of the challenges, inasmuch as it is also necessary to develop a set of transversal skills needed to confront realities and situations, academic and non-academic, that are changing and uncertain.
Precisely for this reason, this thesis is for me the expression of a human and scientific experience conceived through the encounter with many special people and places. I met many mentors on this "journey", each of whom dedicated precious time to me, spent discussing and finding answers to my questions, my beliefs and my ideas. I cannot refrain from thanking them and all the people who, for various reasons, have accompanied me on this journey; without them this project would not have been realized. I wish to thank Pr. Valerie Schini-Kerth (University of Strasbourg) and Pr. Céline Demougeot (University of Franche-Comté, Besançon) for accepting to be the reviewers of this work. I extend my sincere thanks to Dr. Derive (University of Lorraine) and Dr. Devaux (Luxembourg Institute of Health) for their participation in this thesis committee and for useful discussions.
It is a duty, and a pleasant task, to sincerely thank the French supervisor of the thesis, Pr. Isabelle Lartaud: thanks for having been willing to offer me her invaluable theoretical and methodological contribution during the phases of my research work, giving me precious suggestions to improve it. Thanks for having done her utmost to ensure that I could have constructive experiences useful to my growth as a research doctor.
My sincere and particular thanks go to Pr. Pompella, the Italian supervisor of this thesis, for always believing in me and in the validity of the project. His tenacity, accompanied by his usual irony, supported me even in the most difficult moments. Thanks for guiding me in my course of research with wise counsel and for following me constantly in the realization of the doctoral thesis. Furthermore, I express my gratitude for giving me the great opportunity to get in touch with the international dimension of research, transmitting to me, at the same time, his professional and human experience.
A warm thank you to Dr. Caroline Gaucher, the French co-supervisor of the thesis. Thanks for her availability, her valuable scientific support and her precious suggestions. Thanks for guiding me through this international experience with sincere friendship and indispensable support in the development of the theoretical framework of the research and of the experimental phase.
Special thanks I then dedicate to Dr. Alessandro Corti, for having extended his support to my work well beyond the institutional duties of an Italian co-supervisor of the thesis. I wish to thank him not only for the immense theoretical and methodological contribution he offered me and his dedication in following my research, but also for the example he set of intelligence, fairness, love for research and professionalism, which will always be a model for me to pursue in life and work. I would also like to express my gratitude to Pr. Leroy, for the respect he has shown me and for making the time spent with his research team highly stimulating, providing me with precious knowledge for the preparation of the present thesis.
Fond thanks go to Dr. Maria Franzini, for her valuable scientific support and friendship. With sincere affection, many thanks to Evelina Lorenzini, Silvia Dominici and Isabelle Fries, for their technical and experimental support during the project, but above all for making the time during these three years more cheerful, between laughs and coffee breaks!!! I thank all my Italian and international work colleagues, on whom I have always been able to rely in all moments of joy or sadness during my PhD course... a sincere thanks. I cannot forget the immense debt of gratitude to my parents, my sisters and a special person, who have supported the most important professional and personal decisions of my life. They have never failed to give me unconditional love, listening and attention, always urging me to go my own way.
LIST OF TABLES

CHAPTER I
Table 1. Overview of macrophage phenotypes observed in humans and mice
GENERAL INTRODUCTION
According to the World Health Organization, cardiovascular diseases are responsible for 30% of deaths, making them the leading cause of death worldwide. Atherosclerosis is a pathology characterized by the formation of a lipid-rich plaque in the arterial wall. This pathology, which combines inflammation and oxidative stress, represents the major risk factor for several cardiovascular diseases, such as myocardial infarction (120,000 cases per year) and stroke (130,000 victims per year), and is responsible for 90% of angina cases.

Atherosclerosis develops through several successive stages: the fatty streak, the fibrolipidic lesion and the complicated plaque. Lipoproteins and four cell types, namely monocytes/macrophages, endothelial cells (ECs), smooth muscle cells (SMCs) and lymphocytes, are the main actors in plaque genesis. The infiltration into the intima of low-density lipoproteins (LDL), composed mainly of apolipoprotein B and cholesterol (LDL-cholesterol), is the starting point of fatty streak formation. Their oxidation (ox-LDL) plays a key role in the pro-inflammatory activation of macrophages, ECs and SMCs. In parallel, endothelial dysfunction, with decreased bioavailability of nitric oxide (NO), is at the origin of the adhesion of monocytes to the endothelial surface. The decrease in NO bioavailability is caused essentially by reactive oxygen species (ROS), which either directly oxidize NO into peroxynitrite ions or uncouple endothelial NO synthase, thereby preventing NO synthesis and promoting superoxide anion production.
As the development of atherosclerosis is associated with an increased plasma cholesterol concentration, the current treatment of this pathology relies on lipid-lowering statins. However, statins do not halt the disease; at best, they slow its progression. Other avenues have therefore been considered, such as inhibitors/antagonists of the renin-angiotensin-aldosterone system (RAAS), a major system in the regulation of cardio-renal homeostasis, since they are also able to decrease inflammation markers such as TNF-α, IL-6 and C-reactive protein [1] and to limit oxidative stress. These two therapeutic classes (statins and RAAS inhibitors/antagonists) display pleiotropic effects converging towards an increase in NO bioavailability [2] (Fig. 1).
Figure 1. Potential of cardiovascular therapeutics for atherosclerosis through "NO"-dependent actions. GTP cyclohydrolase I (GCH1) is the rate-limiting enzyme in the synthesis of (6R)-5,6,7,8-tetrahydro-L-biopterin (BH4), a cofactor of the inducible and endothelial NO synthases (eNOS, iNOS). BH4 is recycled through the reduction of 7,8-dihydrobiopterin (BH2) by dihydrofolate reductase (DHFR), or by dihydropteridine reductase from quinonoid 6,7-[8H]-BH2. The superoxide anion (O2•-) produced by NADPH oxidase oxidizes nitric oxide (NO) into the peroxynitrite anion (ONOO-). The latter in turn oxidizes BH4 into BH2, causing the uncoupling of eNOS, which then produces O2•-. L-arginine, the substrate of eNOS, and folic acid, which stabilizes BH4, improve eNOS function. Statins, angiotensin II type 1 receptor blockers (ARBs), estrogens and erythropoietin (EPO) promote BH4 synthesis by stimulating GCH1 expression/activity. Statins, ARBs, angiotensin-converting enzyme (ACE) inhibitors, eplerenone (an aldosterone antagonist) and aliskiren (a renin inhibitor) also prevent BH4 oxidation by decreasing NADPH oxidase expression and/or activity. NO is antiatherogenic, so NO donors limiting oxidative stress could prevent atherosclerosis. GTP: guanosine 5'-triphosphate. (Adapted from [START_REF] Li | Prevention of atherosclerosis by interference with the vascular nitric oxide system[END_REF] [3])

The decrease in NO bioavailability due to endothelial dysfunction, together with the oxidation of circulating LDL, is thought to be at the origin of inflammation and thus of the initiation of atheromatous plaque formation. It therefore appears essential to study the interplay between inflammation and oxidative stress in the cell types composing the atheromatous plaque (monocytes/macrophages and smooth muscle cells), in relation to a restoration of NO bioavailability, in order to curb the formation of the atheromatous plaque. Various NO donors, such as molsidomine, a member of the sydnonimine family used clinically in the treatment of angina, have been the subject of clinical trials to evaluate their therapeutic value for improving the endothelial function of patients with stable angina eligible for coronary angioplasty (MEDCOR study) [4] [5]. This study, conducted in 165 patients receiving 16 mg of molsidomine per day for one year after coronary angioplasty, showed no significant improvement in markers of endothelial dysfunction (soluble ICAM-1, C-reactive protein, myeloperoxidase, ox-LDL), except for the myeloperoxidase activity/antigen ratio, which decreased significantly in the molsidomine group. However, molsidomine does not seem to have deleterious effects on endothelial function, unlike isosorbide dinitrate [6] or nitroglycerin [7].
Other NO donors, such as S-nitrosothiols, which represent the physiological storage form of NO, are currently in preclinical development. Indeed, S-nitrosothiols have shown the capacity to prevent LDL oxidation [8], to decrease oxidative stress by inactivating NADPH oxidase, and to induce a specific depletion of macrophages in the atheromatous plaque [9]. Finally, they could prevent early atheromatous plaque formation and moderately modify the circulating lipid profile [10].
Among the S-nitrosothiols, we focused on S-nitrosoglutathione (GSNO), a storage form of NO in tissues. GSNO is specifically metabolized by gamma-glutamyltransferase (GGT), whose plasma concentration correlates with an increased risk of cardiovascular disease [11].
Moreover, GGT has recently been identified within atheromatous plaques [12]. This thesis work is thus divided into three parts:
- Identification of the cellular origin (monocyte/macrophage or smooth muscle cell) of the GGT found within the atheromatous plaque, and the role of inflammation
- Study of the link between inflammation and oxidative stress in monocytes/macrophages
- Evaluation of the impact of oxidative stress on NO bioavailability from GSNO in smooth muscle cells, together with the evaluation of the involvement of two redox enzymes, GGT and protein disulfide isomerase (PDI), in GSNO metabolism and in protein S-nitrosation.
CHAPTER I
1.1 Cardiovascular diseases and the atherogenesis process

Cardiovascular diseases (CVD) are the leading cause of mortality in developed countries and are likely to attain this status worldwide, accounting for 16.7 million deaths each year [13,14]. Coronary artery disease and stroke, whose underlying pathological characteristic is atherosclerosis, are the most common forms of CVD. Atherosclerosis is a slowly progressing chronic disease of large and medium-sized arteries characterised by the formation of atherosclerotic plaques consisting of accumulated modified lipids, leukocytes, foam cells, migrated smooth muscle cells (SMCs) and altered endothelial cells (ECs), leading to the formation of necrotic cores with calcified regions [15].
Since the term atherosclerosis was first introduced by Jean Lobstein in 1829 [16], it was long believed that atherosclerosis involved merely the passive accumulation of cholesterol in arterial walls.

Today, the picture of atherosclerosis is much more complex: it is now considered a chronic inflammatory disease combined with oxidative stress, involving both the innate and adaptive immune systems, which modulate the initiation and progression of the lesions, potentially followed by devastating thrombotic complications [17]. Understanding the principles of the inflammatory and oxidative processes is important for deciphering the complex processes involved in atherosclerosis progression.
Atherosclerotic plaques are characterised by an accumulation of oxidized lipids in arterial walls combined with immunocyte infiltration. The degree of infiltration of inflammatory cells in atherosclerotic lesions is determined by monocyte recruitment, macrophage exit, and the balance between proliferation, survival, and apoptosis of several cell types within the arterial wall [18,19].

Moreover, oxidative stress, characterized by an increased production of free oxygen and nitrogen radicals, also represents a basic pathogenetic process of atherosclerosis: it is closely related to endothelial dysfunction, promotes the vascular inflammatory response and is involved in the initiation and progression of the disease. There is now consensus that atherosclerosis represents a state of heightened oxidative stress involving lipids and proteins in the vascular wall. Reactive oxygen species (ROS) are key mediators of the signaling pathways that underlie vascular inflammation in atherogenesis, from the initiation of fatty streak development, through lesion progression, to ultimate plaque rupture [20].
Although there is considerable overlap and synergy between oxidative stress and pro-inflammatory conditions, it is not clear whether they can be controlled independently of each other or how they reciprocally affect each other.
Development of the atherosclerotic process
Atherosclerotic lesions have a slow evolution, with an initial phase of endothelial damage leading to endothelial dysfunction, accumulation of oxidized lipids and cell infiltration into the intima. The formation of an atheromatous core, composed of oxidized lipids and phagocytic cells, and the formation of a fibrous cap due to smooth muscle cell proliferation determine the so-called fibroatheroma, with or without calcification. Considering the importance of inflammatory processes in atherosclerosis development, the involvement of different infectious agents has been suggested [21]. Among the most studied pathogens are Chlamydophila pneumoniae [22], the herpes viruses, especially human cytomegalovirus (CMV) [23], and periodontal pathogens [24,25]. To date, however, no statistically valid evidence supports a role for these agents in the etiology of atherosclerotic disease.
Atherosclerotic lesions are localized in different vascular beds, with different clinical manifestations, and at preferential sites such as vessel bifurcations/branchings, where conditions favor increased blood flow turbulence [26]. Over the decades, the progressive growth of the lesions into the vascular lumen reduces the blood perfusion of the affected vessel, determining morphological (atrophy/remodeling) and functional (failure) modifications. This process occurs very slowly, and the majority of individuals, while carrying many atherosclerotic plaques, never develop clinical consequences of atherosclerosis over the course of their lives. The most dangerous and frequent consequences of atherosclerosis are instead acute events, such as myocardial infarction and cerebral stroke. These events involve the sudden obstruction of blood flow due to thrombotic and bleeding events of the plaque, independent of the size of the lesion. The occurrence of individual clinical consequences of atherosclerosis depends on the structural evolution of the plaque. In the case of chronic consequences, proliferative/hyperplastic processes prevail, reducing the lumen of the vessel. In the case of acute events, plaques undergo structural alterations resulting in plaque rupture. The rupture or ulceration (i.e. the loss of the endothelial lining) of the plaque allows contact between the highly procoagulant contents of the lesion and the blood, triggering hemostasis, coagulation and thrombosis of the vessel. At this stage, the unstable plaque may announce itself with characteristic clinical syndromes (unstable angina, transient ischemic attacks) that precede events of greater severity.
Although in recent years great strides have been made in understanding the mechanisms of atherosclerosis development and the related clinical syndromes, through the development of animal and cell models, some aspects still remain to be clarified. The role of traditional risk factors (family history, high blood pressure, dyslipidemia, cigarette smoking, obesity, insulin resistance and diabetes mellitus) has now been established in epidemiological studies of large populations, but the absence of these risk factors does not fully protect against the development of the disease.
Morphology of the atherosclerotic plaque
Initial stages. The first signs of atherosclerosis occur with lipid accumulation, the so-called fatty streaks. They are formed from lipid-rich macrophages infiltrated below the endothelial layer. One of the most important underlying causes of this phenomenon is the oxidation of the lipid components of low-density lipoproteins (LDL), forming oxidized LDL (oxLDL) (Fig. 2A). The oxidation can be minimal (minimally modified LDL, mmLDL) and does not then prevent LDL recognition by the LDL receptors expressed on macrophages. The oxidation can also be extensive, as is the case for apolipoprotein B (apoB), which is fragmented by oxidation and shows lysine residues covalently linked to oxidized lipids. These oxLDL are no longer recognized by LDL receptors, but rather by neutralizing or "scavenger" receptors present on macrophages and on smooth muscle cells of the media layer. The place and time of LDL oxidation are not clearly defined. Circulating plasma LDL are protected from oxidation; however, they undergo oxidation during their infiltration into the vessel wall and their binding to the matrix proteins of endothelial cells and macrophages. The mmLDL and oxLDL can then exert a series of proinflammatory and prothrombotic activities, leading first to the establishment and then to the expansion of the atherosclerotic plaque [27].
OxLDL increase the expression of intercellular adhesion molecule 1 (ICAM-1) and vascular cell adhesion molecule 1 (VCAM-1) on the endothelial surface, resulting in monocyte recruitment, adhesion and transmigration, as well as the production of chemokines (Fig. 2A). This leads to monocyte diapedesis into lipid-accumulating zones. Infiltrated monocytes subsequently differentiate into macrophages and overexpress a number of cell surface molecules, including scavenger receptors. The latter mediate the internalization of oxLDL and other modified LDL, resulting in the formation of lipid-laden foam cells.
The infiltration of monocytes is a defense mechanism to eliminate oxLDL, which are damaging for the vascular wall. However, the uncontrolled accumulation of macrophages (usually regulated by monocyte recruitment, macrophage exit, and the balance between proliferation, survival and apoptosis) contributes to the progression of the lesion.
More advanced stages (fibroatheroma). During this stage, two major events occur inside the plaques:

- Formation of a lipid core limited by a fibrous cap
- Phagocytosis of oxLDL by macrophages
The lipid core is composed of oxidized lipids and foam cells undergoing apoptosis. It is delimited by a fibrous capsule combining matrix proteins such as fibronectin and collagen. Inside the lipid core, infiltrated macrophages engulf oxLDL and turn into foam cells. This process is mainly mediated by neutralizing or scavenger receptors and by the cluster of differentiation 36 (CD36), which binds many ligands, including collagen and oxLDL, and is involved in the phagocytosis process of macrophages. Cholesterol phagocytosed from oxLDL is esterified and stored in lipid droplets, conferring the typical foamy appearance to macrophages [19]. Intracellular accumulation of free cholesterol triggers macrophage death [28].
As mentioned above, macrophages contribute to the migration of smooth muscle cells into the intima and sub-endothelial layers, a phenomenon that characterizes the later stages of the atherosclerotic plaque [29]. Smooth muscle cells can proliferate, but also bind and internalize oxLDL through their scavenger receptors, thereby also acquiring the characteristics of foam cells [30,[START_REF] Pirillo | LOX-1, OxLDL, and Atherosclerosis[END_REF]]. Moreover, these cells produce matrix proteins and contribute to the formation of the fibrous tissue that characterizes the later stages of the plaque. At this stage, areas of calcification are frequently found; they may take the form of granules (microcalcifications or spotty calcifications) or, less often, of macrocalcifications (lamellar calcifications) with the appearance of hard, brittle flakes or sharp splinters, and sometimes of real areas of ossification. Spotty calcifications prevail in the early stages of atherosclerosis, while macrocalcifications are typical of advanced forms. Calcification (the precipitation of calcium phosphate in the intima in the form of hydroxyapatite crystals) appears to be driven both by apoptotic vesicles derived from the necrosis of foam cells and by smooth muscle cells, which are capable of morphological and functional pluripotency and can acquire fibroblast-like and even osteoblast-like phenotypes [32,33,34,35,36]. Moreover, it has been attributed to a specific subpopulation of smooth muscle cells of the media referred to as "calcifying vascular cells" [35]. This active process initiates and propagates hydroxyapatite mineral deposition in the vessel media, leading to arterial stiffening and atherosclerotic plaque rupture [32,37].
Moreover, activated T lymphocytes are present in the plaque and may affect plaque progression. Type 1 helper (Th1) lymphocytes produce interferon gamma (IFN-γ), which has opposing effects on atherogenesis. These include the ability to reduce the expression of scavenger receptors on monocytes and to inhibit the proliferation of smooth muscle cells. However, IFN-γ also induces the production of inflammatory cytokines by macrophages and the expression of major histocompatibility complex class II (MHC II) antigens. The ability of macrophages to present antigen is related to the presence of antibodies against oxLDL or against proteins modified during the process of plaque formation. These antibodies can be measured in the circulation and constitute possible markers of lesion progression [29].
Ulceration and plaque rupture. Contrary to the overall atherosclerotic process, plaque rupture is a rapid and unpredictable event, and the most dangerous one in the evolution of atherosclerosis. The search for specific markers of atherosclerotic plaque inflammation has led to the identification of a myriad of surrogate markers, variously located in the vast transcriptional program triggered by the inflammatory response [38,39]. Among those proposed are serum amyloid A (SAA) [40] and C-reactive protein (CRP). The latter is one of the most studied inflammatory markers; indeed, several studies have demonstrated a correlation between circulating CRP values and the instability of coronary plaques [41].
Moreover, macrophage foam cells undergo cell death through apoptotic or non-apoptotic pathways (e.g. oncosis). Foam cell lysis, impaired efferocytosis (clearance of apoptotic cells) and subsequent secondary necrosis promote the formation of necrotic cores and further inflammatory responses within advanced atherosclerotic plaques. In vulnerable plaques, these inflammatory mechanisms lead to the thinning of the protective fibrous cap and the expansion of the necrotic core, predisposing these lesions to mechanical destabilization and plaque rupture [42] (Fig. 2B). Whereas the thickness of the cap is critical in determining its fragility, the size of the plaque is not prognostic of its rupture. Indeed, the size of the lipid core and the thickness of the cap are determined by the accumulation of foamy apoptotic cells, by the release of lipids and by the production of lytic enzymes by macrophages. Furthermore, the progression of the plaque is favored by the formation of new blood vessels (angiogenesis). Neovascularization of the plaque seems to be very important in the establishment of micro-haemorrhages and in inducing fragility [27].
Figure 2. An overview of the atherogenesis process. (B) The advanced atherosclerotic plaque [42].
Although substantial efforts have been made to dissect the molecular mechanisms of atherogenesis, a full understanding of the underlying processes is still missing. However, the activation of immunocompetent cells, leading to local and finally systemic inflammatory phenomena, and the associated state of heightened oxidative stress are central events [43]. Oxidative stress is thought to play an important role in the pathophysiology of injury in atherosclerosis through the induction of various cellular and molecular reactions.
In atherosclerosis, during the Th1-type response, IFN-γ is probably the most important trigger for high ROS production in macrophages [44], due to nicotinamide adenine dinucleotide phosphate (NADPH) oxidation by the granules of resting and phagocytizing cells [45]. The main reactive species are hydrogen peroxide (H2O2) and the superoxide anion (O2•-), but also reactive nitrogen species such as peroxynitrite (ONOO-), nitrogen dioxide (NO2) and dinitrogen trioxide (N2O3) [46]. IFN-γ signaling initiates a variety of cellular defense mechanisms, such as pro-inflammatory cytokine production via nuclear factor kappa B (NF-κB) signaling, enhancement of antigen presentation [47] and other important mechanisms, e.g., neopterin formation via guanosine triphosphate (GTP)-cyclohydrolase I (GTP-CH-I) and indoleamine 2,3-dioxygenase (IDO)-mediated tryptophan breakdown [48]. Under normal conditions, low levels of ROS are mainly byproducts of electron transport chain reactions in the mitochondria [49]. They are important regulators of several redox-sensitive pathways involved in the maintenance of cellular homeostasis [50], and act by modifying molecules, enzymes and transcription factors as well as by interfering with the endogenous antioxidant pool [46,50,51]. Depletion of endogenous redox buffer systems under conditions of overwhelming oxidative stress is critical, not only because it triggers immune responses but also because it induces endothelial and smooth muscle dysfunction, and thus the progression of atherosclerosis [52,53].
Oxidative stress therefore plays not only an aetiopathogenetic role, but is also strongly linked to the inflammatory process, since many reactive oxygen species are mediators of inflammation or of inflammatory cell functions. Oxidative stress also plays an important role in the genesis of the plaque, i.e. in the oxidation of LDL [54]. Although a complex network of inflammatory signaling regulates cellular activities inside the plaque, oxidation-reduction (redox) reactions also contribute to the pathogenesis of the plaque, modulating the proliferation of smooth muscle cells, apoptosis and, in general, the remodeling of lesions through the regulation of proteases and antiproteases. Some aspects of endothelial dysfunction (dysregulation of nitric oxide synthesis and expression of adhesion molecules) are regulated by oxidative events, as are other key steps in the regulation of platelet function and signal transduction [54]. For these reasons, elements of redox regulation have also been considered as potential biomarkers of atherosclerosis and of the related risk.
1.2 Main cellular components in the atherosclerotic plaque
Endothelial cells in atherosclerosis
Blood vessels are made of three layers: the intima, consisting of a single layer of endothelium; the media, containing a mixture of SMCs and elastic fibers; and the adventitia, composed of fibroblasts, collagen fibers, and perivascular nerves.

The atherosclerotic process begins with ECs, i.e. in the innermost layer of the arterial vessel.

Considering the endothelium as a simple lining of the vessels would be overly simplistic: because of its unique localization between the circulating blood and the vessel wall, the endothelium has been suggested to play a crucial role in the development and progression of atherosclerosis. The endothelium is in fact a metabolically active organ, playing a crucial role in the maintenance of vascular homeostasis by releasing a variety of vasoactive factors that can either dilate or constrict the blood vessels, depending on the type of stimulus [55].
Vascular homeostasis entails keeping a tightly controlled balance between a vasodilatory state, which is often associated with anti-oxidant, anti-inflammatory and anti-thrombotic effects on one hand, and a vasoconstrictory state on the other, which is associated with pro-oxidant, pro-inflammatory and prothrombotic effects [56]. The vasodilatory state is mediated by factors such as nitric oxide (NO), endothelium-derived hyperpolarising factor (EDHF) and prostacyclins, while a vasoconstrictory state is mediated by factors such as endothelin-1 (ET-1), angiotensin II and thromboxane A2 [55,56]. Of these endothelial-derived factors, NO, which was originally identified as the endothelial-derived relaxing factor (EDRF), has since evoked much interest as it is considered to be the most potent endogenously synthesised vasodilator in the body, and a key marker of endothelial function and dysfunction.
Endothelial dysfunction, characterized by reduced NO bioavailability, is now recognised by many as an early, reversible precursor of atherosclerosis. Oxidative stress appears to be the common underlying cellular mechanism in the ensuing loss of vaso-active, inflammatory, haemostatic and redox homeostasis in the body's vascular system. For these reasons, endothelial dysfunction has emerged as a potentially valuable prognostic tool in predicting the development of atherosclerosis. The progression from the early changes observed in compromised vascular endothelium (endothelial activation and dysfunction) to atherosclerosis is complex and multifactorial [57].
The healthy, intact endothelium is a highly selectively permeable barrier and does not promote leukocyte adhesion and invasion, or platelet aggregation and adhesion [58]. However, as the endothelium progresses to a dysfunctional state, vascular homeostasis becomes impaired, leading to reduced antioxidant, anti-inflammatory and anti-thrombotic properties (due to reduced NO bioavailability), enhanced endothelial permeability (barrier dysfunction), upregulated pro-inflammatory cytokine levels, and expression of adhesion molecules such as VCAM-1 and ICAM-1, which facilitate leukocyte adhesion to the endothelium [58]. Leukocyte adhesion represents one of the first steps in the initiation of atherosclerosis.
After adhering to the endothelium, leukocytes (monocytes and lymphocytes) cross the endothelium and migrate into the intima [59,60]. Migration to the intima is mediated by chemo-attractants such as MCP-1 [61]. Upon reaching the intima, monocytes transform into macrophages and express receptors that facilitate the uptake of lipids. Uptake and accumulation of lipids lead to the transformation of macrophages into foam cells, which initiate an atherosclerotic lesion and further enhance the release of inflammatory cytokines [59,62]. Through these complex mechanisms, a cascade of events ensues, beginning with the formation of an early atherosclerotic lesion and leading to an advanced lesion characterised by plaque formation [62].
Although endothelial cells represent the first cellular component involved in the atherosclerotic process, this aspect has not been investigated in depth in this thesis work. Instead, the attention is mainly focused on two cell types found in the atherosclerotic lesion, macrophages and smooth muscle lineage cells, as described in the subsections below.
Monocyte/macrophages in atherosclerosis
Monocytes and macrophages play important roles in the initiation and progression of many chronic inflammatory diseases associated with oxidative stress, such as atherosclerosis.
A pivotal step in atherogenesis involves the subendothelial accumulation of monocyte-derived macrophages at predisposed sites of endothelial dysfunction and intimal lipoprotein retention, and their transformation into foam cell macrophages (FCMs) [15]. In nascent lesions, these cells orchestrate the scavenging of lipids and cellular debris, as well as the local inflammatory equilibrium, ultimately defining the likelihood of plaque complications [63,64]. Therefore, monocyte-derived macrophages are instrumental to the atherogenic process and contribute to its initiation, progression and symptomatology.
As plaque development may originate not only from persistent inflammation, but also from inadequate anti-inflammatory responses, the macrophage polarization balance holds clear implications for lesion formation [START_REF] Wolfs | Differentiation factors and cytokines in the atherosclerotic plaque microenvironment as a trigger for macrophage polarisation[END_REF][START_REF] Shalhoub | Innate immunity and monocyte-macrophage activation in atherosclerosis[END_REF]. Foam cells are not able to leave the initial plaque and contribute to the failure of inflammation resolution and further establishment of a complicated atherosclerotic plaque [START_REF] Randolph | Mechanisms that regulate macrophage burden in atherosclerosis[END_REF].
Macrophages can commonly be identified in the lesion shoulder and in calcified plaque regions [START_REF] Stary | A definition of advanced types of atherosclerotic lesions and a histological classification of atherosclerosis. A report from the Committee on Vascular Lesions of the Council on Arteriosclerosis[END_REF][START_REF] Bobryshev | Monocyte recruitment and foam cell formation in atherosclerosis[END_REF]. Dying macrophages extensively contribute to the formation of the necrotic core and to the aggravation of the proatherosclerotic inflammatory response [START_REF] Moore | Macrophages in the pathogenesis of atherosclerosis[END_REF][START_REF] Seimon | Mechanisms and consequences of macrophage apoptosis in atherosclerosis[END_REF].
Despite major advances in understanding the various functions of macrophages in atherosclerotic lesions [START_REF] Ley | Monocyte and macrophage dynamics during atherogenesis[END_REF][START_REF] Mantovani | Orchestration of macrophage polarization[END_REF], it is generally accepted that the tissue microenvironment determines macrophage phenotypic polarization [START_REF] Williams | Macrophage differentiation and function in atherosclerosis: opportunities for therapeutic intervention?[END_REF]. On the one hand, macrophages specifically respond to extracellular cues ranging from bacterial components to oxidatively modified molecules [START_REF] Adamson | Phenotypic modulation of macrophages in response to plaque lipids[END_REF], translating this information by utilizing a range of cell surface receptors and their associated intracellular signaling [START_REF] Shalhoub | Innate immunity and monocyte-macrophage activation in atherosclerosis[END_REF]. On the other hand, macrophages also respond to changes in their intracellular environment, such as cholesterol loading or endoplasmic reticulum stress, by inducing highly specialized adaptive mechanisms and reactions [START_REF] Moore | Macrophages in the pathogenesis of atherosclerosis[END_REF][START_REF] Prieur | Lipotoxicity in macrophages: evidence from diseases associated with the metabolic syndrome[END_REF], using intracellular sensors such as redox-dependent transcription factors or nuclear hormone receptors.
Even though there are reports suggesting that macrophages can transdifferentiate into dendritic cells [START_REF] Melián | CD1 expression in human atherosclerosis[END_REF][START_REF] Bobryshev | CD1 Expression and the nature of CD1-expressing cells in human atherosclerotic plaques[END_REF][START_REF] Shen | Oxidized low-density lipoprotein induces differentiation of RAW264.7 murine macrophage cell line into dendritic-like cells[END_REF], it is commonly thought that plaque macrophages represent a population of terminally differentiated cells of monocytic origin. Nevertheless, macrophages are influenced by multiple microenvironmental stimuli that can drive their polarization towards a more or less proinflammatory phenotype [START_REF] Leitinger | Phenotypic polarization of macrophages in atherosclerosis[END_REF], and several macrophage phenotypes can be observed in the plaque [81].
Indeed, the macrophage phenotype is reversible and can change in response to different microenvironmental signals: macrophages are hallmarked by phenotypic heterogeneity and express a spectrum of activational programs as a function of their immediate surroundings.
In the onset of atherosclerosis, LDL particles present in the circulation enter the activated or damaged vessel wall and become modified (e.g. by oxidation, resulting in oxLDL). These modified lipoproteins trigger an immune response and thereby attract blood monocytes. Circulating monocytes in the mouse exist as two equally abundant major subsets with differing cell surface marker and chemokine receptor expression patterns, that is, Ly6Chigh CCR2+ CX3CR1low ("inflammatory") vs. Ly6Clow CCR2- CX3CR1high ("resident"). Ly6Chigh monocytes are short-lived in the circulation and rapidly move into foci of acute inflammation, such as recent myocardial infarctions [START_REF] Nahrendorf | The healing myocardium sequentially mobilizes two monocyte subsets with divergent and complementary functions[END_REF], and into early atherosclerotic plaques [START_REF] Swirski | Ly-6Chi monocytes dominate hypercholesterolemia associated monocytosis and give rise to macrophages in atheromata[END_REF][START_REF] Tacke | Monocyte subsets differentially employ CCR2, CCR5, and CX3CR1 to accumulate within atherosclerotic plaques[END_REF]. By contrast, Ly6Clow monocytes persist longer in the circulation, where they engage in so-called patrolling behaviour, interacting with the endothelium without extravasation [START_REF] Geissmann | Blood monocytes: distinct subsets, how they relate to dendritic cells, and their possible roles in the regulation of Tcell responses[END_REF]. Ly6Clow monocytes show delayed incorporation into inflamed and damaged tissues, including infarcted myocardium [START_REF] Nahrendorf | The healing myocardium sequentially mobilizes two monocyte subsets with divergent and complementary functions[END_REF]. In humans, blood monocytes are mainly divided into two populations based on cluster of differentiation 16 (CD16) and 14 (CD14) expression:
CD14high CD16low monocytes (about 80-90% of the total), which are the phenotypic equivalent of the Ly6Chigh population in mice, and CD14low CD16high monocytes, the phenotypic equivalent of the Ly6Clow population [START_REF] Geissmann | Blood monocytes: distinct subsets, how they relate to dendritic cells, and their possible roles in the regulation of Tcell responses[END_REF][START_REF] Libby | Diversity of denizens of the atherosclerotic plaque: not all monocytes are created equal[END_REF][START_REF] Auffray | Blood monocytes: development, heterogeneity, and relationship with dendritic cells[END_REF].
However, functional equivalence with the mouse subpopulations is unclear, and directly extrapolating findings from mouse to humans is therefore premature. The majority of CD14high CD16low monocytes seem anti-inflammatory, as they produce the cytokine IL-10 in response to bacterial lipopolysaccharide (LPS). Conversely, the smaller CD14low CD16high population seems proinflammatory, because it produces proinflammatory mediators in response to LPS and increases in plasma during inflammatory conditions, including atherosclerosis [START_REF] Schlitt | CD14+CD16+ monocytes in coronary artery disease and their relationship to serum TNF-alpha levels[END_REF]. Although different monocyte types may enter atherosclerotic lesions, the majority of plaque-infiltrating monocytes are of Ly6Chigh origin [START_REF] Swirski | Ly-6Chi monocytes dominate hypercholesterolemia associated monocytosis and give rise to macrophages in atheromata[END_REF][START_REF] Tacke | Monocyte subsets differentially employ CCR2, CCR5, and CX3CR1 to accumulate within atherosclerotic plaques[END_REF]. After migrating into the subendothelial space, monocytes differentiate into macrophages, which engulf the modified lipids and become foam cells.
In vitro, monocyte differentiation is driven by two growth factors, granulocyte-macrophage colony-stimulating factor (GM-CSF) and macrophage colony-stimulating factor (M-CSF), leading to the formation of macrophages phenotypically similar to the M1 and M2 macrophage subsets [81]. Classically activated M1 macrophages display a proinflammatory profile, expressing a broad spectrum of proinflammatory cytokines (TNF-α, IL-1β, IL-12, and IL-23) and chemokines (the C-X-C motif chemokines CXCL9, CXCL10, and CXCL11) [START_REF] De Duve | The role of lysosomes in cellular pathology[END_REF]. The anti-inflammatory M2 macrophages typically secrete high amounts of anti-inflammatory IL-10 and contribute to tissue remodeling, vasculogenesis and tumor development [START_REF] Mantovani | The chemokine system in diverse forms of macrophage activation and polarization[END_REF].
However, the M1/M2 dichotomy does not reflect the full complexity of macrophage phenotypic subsets. Depending on the activating stimulus, M2 macrophages can be divided into four subgroups: M2a (induced by M-CSF plus IL-4 or IL-13), M2b (immune complexes together with LPS or IL-1β), M2c (M-CSF and IL-10) and M2d (TLR (Toll-like receptor) agonists acting through the adenosine A2A receptor, ADORA2A) [START_REF] Pinhal-Enfield | An angiogenic switch in macrophages involving synergy between Toll-like receptors 2, 4, 7, and 9 and adenosine A(2A) receptors[END_REF][START_REF] Johnson | Macrophage heterogeneity in atherosclerotic plaques[END_REF].
Boyle et al. [START_REF] Boyle | Coronary intraplaque hemorrhage evokes a novel atheroprotective macrophage phenotype[END_REF] described a new macrophage population (HA-mac) in hemorrhagic areas of human plaques that expresses high levels of CD163 but low levels of human leukocyte antigen (HLA)-DR. This population was shown to possess atheroprotective properties, sensing Hb-Hp (hemoglobin-haptoglobin) complexes via the receptor CD163 and mediating subsequent Hb clearance and reduction of oxidative stress. In principle, Hb exerts atheroprotective effects by preventing foam cell formation [START_REF] Finn | Hemoglobin directs macrophage differentiation and prevents foam cell formation in human atherosclerotic plaques[END_REF] and by activating nuclear factor (erythroid-derived 2)-like 2 (Nrf2), which in turn induces expression of heme oxygenase 1 [START_REF] Boyle | Heme induces heme oxygenase 1 via Nrf2: role in the homeostatic macrophage response to intraplaque hemorrhage[END_REF].
Another human atheroprotective macrophage subset, called Mhem, is closely related to HA-mac macrophages and is also involved in Hb clearance through phagocytosis of erythrocytes. This subset is characterized by high expression of CD163 and of the heme-dependent activating transcription factor (ATF)-1 [START_REF] Boyle | Activating transcription factor 1 directs Mhem atheroprotective macrophages through coordinated iron handling and foam cell protection[END_REF][START_REF] Bories | Liver X receptor activation stimulates iron export in human alternative macrophages[END_REF]. The expression of these factors in turn induces other genes central to the regulation of cholesterol efflux. Indeed, the Mhem phenotype is characterized by increased cholesterol efflux associated with increased production of IL-10 and apolipoprotein (Apo)E [START_REF] Boyle | Heme and haemoglobin direct macrophage Mhem phenotype and counter foam cell formation in areas of intraplaque haemorrhage[END_REF]. Furthermore, both Mhem and HA-mac macrophages exhibit increased adaptation to intraplaque hemorrhage [START_REF] Boyle | Activating transcription factor 1 directs Mhem atheroprotective macrophages through coordinated iron handling and foam cell protection[END_REF][START_REF] Boyle | Heme and haemoglobin direct macrophage Mhem phenotype and counter foam cell formation in areas of intraplaque haemorrhage[END_REF]. This subset is also characterized by increased resistance to foam cell formation but enhanced ROS production.
Collectively, Mhem, HA-mac and M(Hb) macrophages are able to reduce the impact of intraplaque hemorrhages that arise from plaque neovascularization and erythrocyte infiltration into the plaque. These macrophage subsets are involved in the utilization and recycling of the iron accumulated in the plaque [START_REF] Bories | Liver X receptor activation stimulates iron export in human alternative macrophages[END_REF].
Mice share the M1, M2 and M(Hb) macrophage phenotypes with humans but have a specific proinflammatory subset called Mox macrophages. Mox macrophages were shown to be abundantly present in murine lesions, accounting for about 30% of plaque macrophages in low-density lipoprotein receptor knock-out (LDLR-/-) mice [START_REF] Kadl | Identification of a novel macrophage phenotype that develops in response to atherogenic phospholipids via Nrf2[END_REF]. This macrophage population can be induced by oxidized phospholipids and produces high levels of heme oxygenase 1 in an Nrf2-dependent manner. Mox macrophages show proatherogenic properties through elevated production of IL-1β and ROS.
Finally, the recently discovered M4 macrophages could play a proatherogenic role in unstable plaques and may be involved in late complications of atherosclerosis such as acute coronary syndrome and arterial thrombosis, since they are activated by platelet-derived CXCL4. M4 macrophages lose the Hb-Hp scavenger receptor CD163, which is essential for hemoglobin clearance after plaque hemorrhage. As a consequence, the lack of CD163 leads to an inability to induce the atheroprotective enzyme heme oxygenase-1 in response to Hb-Hp complexes, further suggesting a potential role of the M4 subset in atherosclerosis [START_REF] Gleissner | Macrophage Phenotype Modulation by CXCL4 in Atherosclerosis[END_REF]. M4 macrophages possess proinflammatory properties, since they express IL-6, TNF-α and the matrix metalloproteinase MMP-12 [START_REF] Erbel | CXCL4-induced plaque macrophages can be specifically identified by coexpression of MMP7+S100A8+ in vitro and in vivo[END_REF]. However, the actual role of M4 cells in atherogenesis is unknown and remains to be investigated.
An overview of the macrophage subsets observed in humans and mice is provided in Table 1.
Smooth muscle cells in atherosclerosis
The primary functions of vascular smooth muscle cells are contraction and regulation of blood vessel tone, thereby distributing blood flow and regulating blood pressure. Fully differentiated vascular SMCs display an elongated, spindle-shaped morphology and express a unique repertoire of contractile proteins that serve as SMC markers, including α-smooth muscle actin (α-SMA), calponin, smooth muscle protein 22-alpha (SM22α), SM myosin heavy chain (SMMHC) and smoothelin [START_REF] Owens | Molecular regulation of vascular smooth muscle cell differentiation in development and disease[END_REF]. In contrast to skeletal and cardiac muscle cells, which differentiate terminally, vascular SMCs retain a high degree of plasticity in vivo and in vitro.
During so-called "phenotypic modulation", SMCs can dedifferentiate from a contractile phenotype to a highly proliferative synthetic phenotype in response to local environmental cues, including growth factors/inhibitors, mechanical influences, cell-cell/cell-matrix interactions and various inflammatory mediators [START_REF] Owens | Molecular regulation of vascular smooth muscle cell differentiation in development and disease[END_REF]. Vascular SMC differentiation is an important process during vascular development.
The highly differentiated, mature SMCs play critical roles in maintaining the structural and functional integrity of blood vessels. Therefore, alterations in SMC phenotype contribute to a number of major cardiovascular diseases such as atherosclerosis, hypertension and restenosis following angioplasty. Accumulation of SMCs is a hallmark of neointimal formation and can be considered the "soil" of atherosclerosis. Focusing on atherosclerosis, the evolution of a fatty streak into an atherosclerotic plaque involves activation of macrophages by T lymphocytes through IFN-γ and cluster of differentiation 40 (CD40) [58,[START_REF] Boyle | Macrophage activation in atherosclerosis: pathogenesis and pharmacology of plaque rupture[END_REF]. Once activated, macrophages produce cytokines, chemokines and growth factors, promoting SMC migration and proliferation in the intimal layer [START_REF] Takahashi | Multifunctional roles of macrophages in the development and progression of atherosclerosis in humans and experimental animals[END_REF]. SMCs then synthesize the collagen forming the fibrous cap that surrounds the lipid and necrotic core of the atherosclerotic plaque. The fibrous cap prevents exposure of the thrombogenic lipid core to arterial blood and provides mechanical strength and stability to the plaque [START_REF] Libby | Pathophysiology of coronary artery disease[END_REF]. Accumulation of activated inflammatory cells, such as macrophages and T lymphocytes, in the shoulder region of the plaque amplifies the inflammatory process through the release of proinflammatory cytokines. However, as the plaque matures, the SMCs and macrophages within the fibrous cap also release MMPs [START_REF] Newby | Vulnerable atherosclerotic plaque metalloproteinases and foam cell phenotypes[END_REF][START_REF] Back | Matrix metalloproteinases in atherothrombosis[END_REF] that degrade the extracellular matrix. All these events weaken the fibrous cap and render the plaque prone to rupture [15,[START_REF] Gomez | Smooth muscle cell phenotypic switching in atherosclerosis[END_REF].
SMC differentiation is orchestrated by a precisely coordinated molecular network that integrates a variety of factors, including environmental cues, signaling pathways, transcription factors, ROS, the extracellular matrix (ECM), microRNAs, and chromosome structural modifiers [START_REF] Sinha | Transforming growth factor-beta1 signaling contributes to development of smooth muscle cells from embryonic stem cells[END_REF][START_REF] Suzuki | Effects of extracellular matrix on differentiation of human bone marrow-derived mesenchymal stem cells into smooth muscle cell lineage: Utility for cardiovascular tissue engineering[END_REF][START_REF] Xiao | Embryonic stem cell differentiation into smooth muscle cells is mediated by Nox4-produced H2O2[END_REF]. In particular, hydrogen peroxide and superoxide anion are traditionally considered harmful substances that cause cellular dysfunction in various systems, including the cardiovascular system [START_REF] Finkel | Signal transduction by reactive oxygen species in non-phagocytic cells[END_REF].
The major source of ROS in the cardiovascular system is represented by the nicotinamide adenine dinucleotide phosphate oxidases (NADPH oxidases, NOX). For example, NOX1 and NOX4, two major NOX isoforms, have been identified in human and rodent aortic SMCs [START_REF] Clempus | Reactive oxygen species signaling in vascular smooth muscle cells[END_REF]. Interestingly, the literature supports an important role for NOX4 in maintaining the differentiated SMC phenotype, whereas NOX1 is involved in signal transduction leading to SMC hypertrophy and proliferation [START_REF] Clempus | Reactive oxygen species signaling in vascular smooth muscle cells[END_REF]. The role of ROS in SMC differentiation is further supported by a study showing that NOX4-produced H2O2 mediates the differentiation of mouse embryonic stem cells towards the SMC lineage when collagen IV is used as a coating substrate [START_REF] Xiao | Embryonic stem cell differentiation into smooth muscle cells is mediated by Nox4-produced H2O2[END_REF]. In addition, Nrf3, a nuclear factor erythroid 2-related factor, has been found to promote NOX4-mediated ROS production and to enhance SMC differentiation of embryonic stem cells by binding to promoter regions of the Pla2g7 (phospholipase A2, group 7) gene [START_REF] Xiao | Nrf3-Pla2g7 interaction plays an essential role in smooth muscle differentiation from stem cells[END_REF]. These results demonstrate the importance of NOX4-ROS signaling in SMC differentiation.
Moreover, the deposition of LDL particles in the vessel wall and their oxidative modification seem to initiate, or at least accelerate, the atherosclerotic process through several mechanisms, including SMC phenotypic modulation. OxLDL is detected in atherosclerotic plaques and in the plasma of atherosclerotic patients, where it contributes to disease evolution. Depending on the extent of its oxidation, oxLDL can induce proliferation, monocyte chemotaxis, and apoptosis or necrosis of vascular endothelial cells and SMCs.
H2O2 is a major oxidative component of oxLDL-induced ROS, and it can induce an increase in ROS formation, specific alterations in gene/protein expression, as well as apoptosis of vascular smooth muscle cells [START_REF] Sukhanov | Novel effect of oxidized low-density lipoprotein: cellular ATP depletion via downregulation of glyceraldehyde-3phosphate dehydrogenase[END_REF]. Indeed, several studies show that oxLDL and H2O2 can significantly decrease the expression of important proteins expressed in SMCs and involved in their ECM adhesion and migration, such as lipoma preferred partner (LPP). At higher concentrations, oxLDL also upregulates the expression of oxidized low-density lipoprotein receptor-1 (LOX-1) and induces apoptosis of vascular SMCs [START_REF] Eto | Expression of lectin-like oxidized LDL receptor-1 in smooth muscle cells after vascular injury[END_REF], a process that may contribute to atherosclerotic plaque destabilization. Furthermore, another LDL receptor, the low-density lipoprotein receptor-related protein (LRP), which mediates the binding and internalization of aggregated LDL (agLDL), has been identified in vascular SMCs [START_REF] Pentikainen | Aggregation and fusion of modified low density lipoprotein[END_REF]. LRP, contrary to LOX-1, has multiple binding sites and is not regulated by the intracellular cholesterol concentration. Therefore, LRP-mediated endocytosis can be considered a low-specificity, high-capacity mechanism that allows the uptake of large amounts of ligand, i.e. agLDL. LRP is highly expressed in atherosclerotic plaques and, given subendothelial LDL retention and aggregation, the uptake of agLDL through LRP could play a crucial role in SMC lipid deposition in atherosclerotic plaques [30].
It is conceivable that a mix of mechanical cues, accompanied by oxidative stress and substrate composition, may dominate and account for the downregulation of several important mediators involved in SMC differentiation. An upregulation of these molecules may occur in the neointima following vascular injury, associated with increased cell migration and proliferation, presumably reflecting different changes in substrate composition and stiffness as well as different signaling pathways.
In summary, phenotypic switching of vascular SMCs from the contractile type towards proliferation and mobility is a physiological response to repair vessel damage. However, in atherosclerosis, the normal ability of vascular SMCs to change phenotype can be impaired by proinflammatory stimuli and oxidative stress. In particular, it has been known for a decade that the loss of endothelial NO production impairs endothelium-dependent dilatation and promotes vasospasm in atherosclerotic arteries. More recent evidence indicates that dysfunction of the endothelial NO pathway may promote atherosclerosis, in view of the protective effects of NO against leukocyte adhesion, oxidative processes, and smooth muscle cell migration and proliferation. On the other hand, there is ample evidence to consider NO a molecular aggressor in chronic inflammatory processes like atherosclerosis. This latter aspect will be discussed in detail in the next section.
Atherosclerosis and decrease in nitric oxide bioavailability

Synthesis and role of nitric oxide
In the cardiovascular system, the deficiency of NO, resulting either from endothelial dysfunction or from oxidative consumption of NO (increased oxidative or nitrosative stress, decreased antioxidant enzyme activity), is one of the key factors in the initiation and progression of many diseases, including atherosclerosis. As described previously, the endothelium plays an important role in this context by maintaining the balance between vasodilating substances with antiproliferative activity (such as NO) and vasoconstrictor substances with mitogenic properties (such as ET-1); any disturbance of this balance may damage the arterial wall and promote the appearance of endothelial dysfunction. The pathophysiology of endothelial dysfunction is a complex phenomenon engaging different mechanisms: it leads to vasoconstriction, platelet aggregation, monocyte adhesion and smooth muscle cell proliferation, and has been related to reduced NO bioavailability, an excess of ROS and an oxidative stress-dependent increase in ET-1 action.
NO, one of the most important substances produced by the endothelium, plays a key role in the maintenance of homeostasis. NO is a gaseous radical with a half-life of ≈ 6-30 s, continuously synthesized from L-arginine by nitric oxide synthase (NOS) [START_REF] Palmer | Vascular endothelial cells synthesize nitric oxide from L-arginine[END_REF].
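As a point of reference, the overall reaction catalyzed by the NOS isoforms can be written as follows; this is the textbook stoichiometry (two O2 and 1.5 NADPH consumed per NO formed, via an Nω-hydroxy-L-arginine intermediate), given here as a sketch rather than a scheme taken from the works cited:

\[
2\ \text{L-arginine} + 3\ \mathrm{NADPH} + 3\ \mathrm{H^{+}} + 4\ \mathrm{O_{2}}
\;\longrightarrow\;
2\ \text{L-citrulline} + 2\ \mathrm{NO^{\bullet}} + 3\ \mathrm{NADP^{+}} + 4\ \mathrm{H_{2}O}
\]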
There are three distinct isoforms of NOS, which differ in structure and function [START_REF] Stuehr | Structure-function aspects in the nitric oxide synthases[END_REF]. Endothelial NOS (eNOS) and neuronal NOS (nNOS) are constitutively expressed, are referred to as Ca 2+ -dependent enzymes [START_REF] Ayajiki | Intracellular pH and tyrosine phosphorylation but not calcium determine shear stress-induced nitric oxide production in native endothelial cells[END_REF] and generate small amounts of NO for signaling. They are present as dimers, containing a flavin adenine dinucleotide molecule (FAD), a flavin mononucleotide (FMN), a heme group and a tetrahydrobiopterin (BH4), whose absence leads to the production of superoxide rather than NO. These cofactors, together with calmodulin and NADPH, are needed for enzymatic activity [START_REF] Stuehr | Mammalian nitric oxide synthases[END_REF]. The third type is the inducible isoform (iNOS), which is Ca 2+ -independent and inducible by immunological stimuli [START_REF] Schulz | Induction and potential biological relevance of a Ca2+-independent nitric oxide synthase in the myocardium[END_REF]. This latter isoform is activated and generates high amounts of NO in response to inflammation [START_REF] Knowles | Nitric oxide synthases in mammals[END_REF].
NO is a chemical messenger, particularly in the vascular and immune systems, playing a critical role in the regulation of a wide range of physiological processes. Its production within the cell is very finely adjusted to ensure the correct action of NO. Indeed, under physiological conditions, low concentrations of NO (≈10 nM), acting as a vasodilator and inhibitor of platelet aggregation, are produced by eNOS, while the activation of iNOS during pathophysiological processes such as inflammation produces much higher concentrations of NO (> 1 µM). iNOS expression is low under physiological conditions, but it is induced by certain inflammatory cytokines (IL-1, IFN-γ, TNF-α), by LPS and by oxidizing agents. This induction is inhibited by glucocorticoids and by some cytokines (including TGF-β).
The effect of NO depends on its site of formation, its concentration and the type of targeted tissue. NO released from the endothelium stimulates soluble guanylyl cyclase, producing increased concentrations of cyclic guanosine monophosphate (cGMP). Cyclic GMP interacts with three types of intracellular proteins: cGMP-dependent protein kinases (PKGs), cGMP-regulated ion channels, and cGMP-regulated cyclic nucleotide phosphodiesterases (PDEs). Thus, cGMP can alter cell function through mechanisms dependent on or independent of protein phosphorylation. Depending on the direction of NO release and the site of cGMP activation, differing biological effects can be observed. In vascular SMCs, increased cGMP concentrations activate cGMP-dependent kinases that decrease intracellular calcium, producing relaxation [START_REF] Moncada | Nitric oxide physiology, pathophysiology and pharmacology[END_REF], whereas increased cGMP in platelets decreases platelet activation and adhesion to the surface of the endothelium [START_REF] Radomski | Biological role of nitric oxide in platelet function[END_REF].
NO can modulate protein activity through three main reactions:
Nitrosylation, the reversible coordination of NO to transition metal ions in enzymes, such as the ferrous (Fe2+) heme prosthetic group within the soluble guanylyl cyclase (sGC) enzyme, leading to enzyme activation and increased formation of cGMP.
Protein nitrosation, in which NO forms a covalent bond with cysteine (S-nitrosation) or tryptophan (N-nitrosation) residues. Modification of free cysteine residues present at the active sites of effector proteins and peptides subsequently changes the activity or function of these proteins; this corresponds to a posttranslational modification of proteins as important as phosphorylation [126,[START_REF] Lima | S-nitrosylation in cardiovascular signaling[END_REF]. Numerous studies have focused on the mechanistic aspects of S-nitrosation; however, it has also been shown that denitrosation plays a major role in controlling the levels of S-nitrosated proteins (Pr-SNO) and the release of NO. S-nitrosothiols can undergo spontaneous or assisted transnitrosation, transferring the NO group from high-molecular-weight thiols to low-molecular-weight ones and vice versa.
Nitration, the introduction of an NO2 group covalently bound to the aromatic ring of tyrosine or tryptophan residues. These changes are often related to loss of function, owing to oxidation or nitration of functionally important residues, as well as to the proteolytic degradation of damaged proteins. Nitration mainly consists of the formation of nitrotyrosine, occurring via peroxynitrite in the context of severe tissue inflammation and oxidative stress. Nitrotyrosine is regarded as a marker of cell damage and inflammation, as well as of high NO production and of protein modification by NO-derived oxidants. Previous studies have demonstrated that nitrotyrosine is enriched in human atherosclerotic lesions and in low-density lipoprotein (LDL) isolated from human atheromas [START_REF] Shishehbor | Association of nitrotyrosine levels with cardiovascular disease and modulation by statin therapy[END_REF].
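In simplified form, and leaving aside the radical intermediates involved, the three modifications above can be sketched as follows (the S-nitrosation route is shown via N2O3, one plausible nitrosating species; nitration is shown via peroxynitrous acid):

\[
\begin{aligned}
&\text{Nitrosylation:} && \mathrm{sGC{-}Fe^{2+} + NO^{\bullet} \rightleftharpoons sGC{-}Fe^{2+}(NO)}\ \text{(active)};\quad \mathrm{GTP \xrightarrow{\ sGC(NO)\ } cGMP + PP_{i}}\\
&\text{S-nitrosation:} && \mathrm{RSH + N_{2}O_{3} \longrightarrow RSNO + NO_{2}^{-} + H^{+}}\\
&\text{Nitration:} && \mathrm{Tyr{-}H + ONOOH \longrightarrow Tyr{-}NO_{2} + H_{2}O}
\end{aligned}
\]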
Endothelial dysfunction appears to represent the earliest event of atherosclerotic plaque formation, before a structural lesion of the vessel wall becomes visible [START_REF] Ludmer | Paradoxical vasoconstriction induced by acetylcholine in atherosclerotic coronary arteries[END_REF]. However, the decrease in NO bioavailability can have several other origins: (i) impairment of the endothelial membrane receptors that interact with agonists or physiological stimuli to release NO; (ii) diminished levels or impaired utilization of the L-arginine substrate or of the BH4 cofactor of nitric oxide synthase; (iii) reduction in the concentration or activity of nitric oxide synthase; (iv) enhanced degradation of NO by oxygen free radicals; (v) impaired diffusion from the endothelium to SMCs; (vi) impaired interaction of NO with soluble guanylate cyclase and the subsequent limitation of the increase in intracellular cGMP levels; (vii) a generalized decrease in smooth muscle cell sensitivity to vasodilators [START_REF] Briasoulis | Endothelial dysfunction and atherosclerosis: focus on novel therapeutic approaches[END_REF]. In turn, diminished NO bioactivity may facilitate vascular inflammation, which can lead to oxidation of lipoproteins and foam cell formation, smooth muscle proliferation, extracellular matrix deposition or lysis, accumulation of lipid-rich material, platelet activation, and thrombus formation. All of these consequences of endothelial dysfunction may contribute to the development and clinical expression of atherosclerosis [START_REF] Tousoulis | The role of nitric oxide on endothelial function[END_REF].
Finally, several studies have highlighted the relationships between NO, inflammation and oxidative stress, e.g. (i) NO is released during inflammatory processes (mainly upon induction of the inducible NO synthase), (ii) NO exerts positive or negative effects on vascular homeostasis according to its environment and concentration (with a deleterious impact during oxidative stress and inflammation), and (iii) the released NO itself can modulate inflammatory mediators. The reason for such dual effects relates to concentration, duration of exposure and production by NO synthases, but also to release from the endogenous reservoir represented by S-nitrosothiols (RSNOs), an important storage form of NO. A summary of NO effects in relation to its concentration and to pathophysiological conditions is reported in Fig. 3.
S-nitrosothiols: a class of nitric oxide donors
NO, despite being a very reactive molecule, appears to produce effects at a distance from its site of synthesis. These effects must be mediated by more stable molecules, capable of transporting and storing NO in its active form [START_REF] Moncada | The L-arginine-nitric oxide pathway[END_REF]. Stamler and colleagues postulated the existence of a "NO reserve" in the plasma, in which this radical is in equilibrium with S-nitrosoproteins and/or peptides characterized by a covalent S-NO bond [START_REF] Stamler | Nitric oxide circulates in mammalian plasma primarily as an S-nitroso adduct of serum albumin[END_REF]. These adducts have been suggested to play an important role in NO transport, signal transduction pathways and the regulation of gene expression [START_REF] Stamler | Redox signaling: nitrosylation and related target interactions of nitric oxide[END_REF]. Ignarro and co-workers showed that these adducts can stimulate the conversion of GTP to cGMP by guanylate cyclase and suggested that they are key intermediates in the action of various nitrovasodilating compounds such as sodium nitroprusside and nitroglycerin [START_REF] Ignarro | Biosynthesis and metabolism of endothelium-derived nitric oxide[END_REF]. In fact, endothelium-derived relaxing factor may itself be a Pr-SNO adduct.
S-nitrosoalbumin (SNO-Alb) is the most abundant nitrosothiol in human plasma, with concentrations reported to be as high as 5 μM [START_REF] Stamler | Redox signaling: nitrosylation and related target interactions of nitric oxide[END_REF][START_REF] Gaucher | S-nitrosation/Denitrosation in Cardiovascular Pathologies: Facts and Concepts for the Rational Design of S-nitrosothiols[END_REF]. Many of the high- and low-molecular-weight S-nitrosothiols can release NO either spontaneously or via metabolism, so they are able to mediate many of the biological functions of NO. In addition, Stamler and co-workers [START_REF] Stamler | Blood flow regulation by S-nitrosohemoglobin in the physiological oxygen gradient[END_REF] proposed that the binding of oxygen to the heme irons of Hb also promotes the binding of NO to specific cysteine residues located in the β-subunits of Hb, forming S-nitrosohemoglobin (SNOHb) [START_REF] Stamler | Blood flow regulation by S-nitrosohemoglobin in the physiological oxygen gradient[END_REF]. Deoxygenation is accompanied by an allosteric transition in SNOHb that releases the NO group. Therefore, SNOHb has been proposed to participate in the regulation of blood flow [START_REF] Stamler | Blood flow regulation by S-nitrosohemoglobin in the physiological oxygen gradient[END_REF] and platelet aggregability.
The formation of S-nitrosothiols may also play an important role in leukocyte adhesion to the microvascular endothelium. For example, S-nitrosothiols are known to inhibit leukocyte adhesion to microvascular endothelial cells in vivo, presumably via the release of NO. Free -SH groups are essential for normal leukocyte-endothelial cell adhesion [START_REF] Grisham | Modulation of leukocyte-endothelial interactions by reactive metabolites of oxygen and nitrogen: relevance to ischemic heart disease[END_REF]. S-nitrosation of these critical -SH groups on the surface of endothelial cells and/or polymorphonuclear neutrophils (PMNs) could decrease adhesion, thereby limiting leukocyte infiltration. Furthermore, the formation of endogenous antiadhesive S-nitrosothiols by NO-derived nitrosating agents may be inhibited by superoxide (O2•-), suggesting that enhanced O2•- production during inflammation may promote PMN-endothelial cell adhesion [START_REF] Wink | Superoxide modulates the oxidation and nitrosation of thiols by nitric oxide derived reactive intermediates[END_REF]. Indeed, it is well established that exogenous NO donors are very effective at inhibiting PMN adhesion in vivo [START_REF] Grisham | Modulation of leukocyte-endothelial interactions by reactive metabolites of oxygen and nitrogen: relevance to ischemic heart disease[END_REF][START_REF] Granger | Nitric oxide as anti-inflammatory agent[END_REF]. On the other hand, the formation of S-nitrosothiols may promote or perpetuate chronic inflammation.
Lander and colleagues [START_REF] Lander | Redox regulation of cell signaling[END_REF] demonstrated that S-nitrosation of one specific cysteine residue of the p21 Ras protein (involved in cellular signal transduction and able to activate genes involved in cell growth, differentiation and survival) in lymphocytes is critical for guanine nucleotide exchange and downstream signaling resulting in the formation of proinflammatory cytokines such as TNF-α.
Among endogenous RSNOs, attention has mainly focused on S-nitrosoglutathione (GSNO) and S-nitrosocysteine (CysNO), the most abundant low-molecular-weight nitrosothiols in vivo.
GSNO - formed by the S-nitrosation of reduced glutathione (GSH) - is involved in the storage and transport of NO. It is mostly located intracellularly within the vascular wall [START_REF] Bramanti | The determination of S-nitrosothiols in biological samples -procedures, problems and precautions[END_REF] and may release NO depending on the environmental conditions. It exhibits higher stability than NO, mediates protein S-nitrosation processes and is thought to play an important role in vascular signaling and protection, especially in a context of inflammation [START_REF] Khan | Cerebrovascular protection by various nitric oxide donors in rats after experimental stroke[END_REF]. The biological activity of GSNO, and particularly its vasorelaxant effect, has been documented in ex vivo isolated vessel models [START_REF] Sogo | S-nitrosothiols cause prolonged, nitric oxide-mediated relaxation in human saphenous vein and internal mammary artery: therapeutic potential in bypass surgery[END_REF][START_REF] Alencar | S-Nitrosating nitric oxide donors induce long-lasting inhibition of contraction in isolated arteries[END_REF] and is directly linked to its decomposition, which results in the release of NO.
The potential routes of NO decomposition, the potential NO biomarkers in human blood, their respective significance and their fields of application are summarized in Fig. 4 and Table 2. In plasma, NO may react with molecular oxygen to form nitrite (NO2-) or with superoxide (O2•-) to form peroxynitrite (ONOO-), which subsequently decomposes to yield nitrate (NO3-). Alternatively, the nitrosonium moiety of NO may react with thiols to form S-nitrosothiols (RSNO). Furthermore, NO may reach the erythrocytes and react either with oxyhemoglobin to form methemoglobin (metHb) and nitrate (NO3-), with deoxyhemoglobin to form nitrosylhemoglobin (NOHb), or with the Cys93 residue of the β-subunit to form S-nitrosohemoglobin (SNOHb). In addition, plasma NO2- can be taken up by erythrocytes, where it is oxidized in a Hb-dependent manner to NO3- (SNOAlb, S-nitrosoalbumin; GSNO, S-nitrosoglutathione; CysNO, S-nitrosocysteine; RSH, sulfhydryl group) [START_REF] Lauer | Indexes of NO bioavailability in human blood[END_REF].
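The routes just described can be condensed into the following simplified scheme (charges and stoichiometries simplified; the autoxidation reaction is shown with its overall aqueous stoichiometry):

\[
\begin{aligned}
&\mathrm{4\,NO + O_{2} + 2\,H_{2}O \longrightarrow 4\,NO_{2}^{-} + 4\,H^{+}} && \text{(plasma autoxidation)}\\
&\mathrm{NO + O_{2}^{\bullet-} \longrightarrow ONOO^{-} \longrightarrow NO_{3}^{-}} && \text{(peroxynitrite route)}\\
&\mathrm{NO + RSH \xrightarrow{\ oxidation\ } RSNO} && \text{(S-nitrosothiol formation)}\\
&\mathrm{NO + HbFe^{2+}O_{2} \longrightarrow HbFe^{3+}\,(metHb) + NO_{3}^{-}} && \text{(dioxygenation)}\\
&\mathrm{NO + HbFe^{2+} \longrightarrow HbFe^{2+}NO\ (NOHb)} && \text{(nitrosylation of deoxyHb)}\\
&\mathrm{NO_{2}^{-} \xrightarrow{\ oxyHb\ } NO_{3}^{-}} && \text{(erythrocyte nitrite oxidation)}
\end{aligned}
\]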
S-nitrosothiols are quite stable in vitro at 37°C and pH 7.4 [START_REF] Mcaninly | Metal ion catalysis in nitrosothiol (RSNO) decomposition[END_REF], but they are degraded in vivo, especially by enzymatic activities. The cellular mechanisms of GSNO degradation are reported in Table 3. According to the literature, GSNO reductase (GSNOR), using NADH as a cofactor, produces an unstable intermediate, S-hydroxylaminoglutathione (GSNHOH), which can either react with GSH to produce oxidized glutathione (GSSG) and hydroxylamine (NH2OH), or rearrange and then spontaneously hydrolyze to produce glutathione sulfinic acid (GSO2H) and ammonia [START_REF] Jensen | S-Nitrosoglutathione is a substrate for rat alcohol dehydrogenase class III isoenzyme[END_REF]. In either case, nitric oxide is not liberated during GSNO catabolism, and the nitroso moiety is reduced, effectively removing it from the "NO pool".
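Sketched from the description above, the GSNOR pathway can be written as the following simplified scheme (GSNHOH denotes the S-hydroxylaminoglutathione intermediate):

\[
\begin{aligned}
&\mathrm{GSNO + NADH + H^{+} \xrightarrow{\ GSNOR\ } GSNHOH + NAD^{+}}\\
&\mathrm{GSNHOH + GSH \longrightarrow GSSG + NH_{2}OH}\\
&\mathrm{GSNHOH + H_{2}O \xrightarrow{\ rearrangement,\ hydrolysis\ } GSO_{2}H + NH_{3}}
\end{aligned}
\]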
Carbonyl reductase 1 (CR1) metabolizes GSNO to an intermediate product, which can then react with GSH to produce NH2OH and GSSG; thus, similar to GSNO reductase, CR1 does not liberate NO by its catalytic reaction [START_REF] Staab | Reduction of Snitrosoglutathione by alcohol dehydrogenase 3 is facilitated by substrate alcohols via direct cofactor recycling and leads to GSH-controlled formation of glutathione transferase inhibitors[END_REF]. Other important enzymes reported to be involved in GSNO decomposition are protein disulfide isomerase (PDI) and the thioredoxin system (Trxs).
An emerging literature has identified Trxs - composed of thioredoxin, thioredoxin reductase (TrxR) and NADPH - as major players in the reduction of low-molecular-weight and protein S-nitrosothiols, participating in both denitrosation and transnitrosation reactions [START_REF] Mitchell | Thioredoxin catalyzes the S-nitrosation of the caspase-3 active site cysteine[END_REF][START_REF] Benhar | Regulated protein denitrosylation by cytosolic and mitochondrial thioredoxins[END_REF][START_REF] Sengupta | Thioredoxin and thioredoxin reductase in relation to reversible Snitrosylation[END_REF]. In its denitrosation capacity, reduced Trx reacts directly with either a low-molecular-weight S-nitrosothiol (including GSNO) or a protein SNO, thus liberating NO and thiols [START_REF] Wu | Redox regulatory mechanism of transnitrosylation by thioredoxin[END_REF]. Through transnitrosation, the low-pKa active-site thiol (Cys32) of Trx becomes S-nitrosated, leaving behind a low-molecular-weight or protein thiol. The nitroso group now residing on Trx must then be turned over, through as yet unknown mechanisms releasing nitroxyl (HNO) to form oxidized Trx. Oxidized Trx is then reduced by TrxR and NADPH. Unlike GSNOR and CR1, the thioredoxin system results in the release of NO and GSH, coupled to the oxidation of NADPH.
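One plausible way to write the cycle described above, with Trx(SH)2 the reduced and Trx(S-S) the oxidized form of thioredoxin:

\[
\begin{aligned}
&\mathrm{Trx(SH)_{2} + GSNO \longrightarrow Trx(SH)(S{-}NO) + GSH} && \text{(transnitrosation at Cys32)}\\
&\mathrm{Trx(SH)(S{-}NO) \longrightarrow Trx(S{-}S) + HNO} && \text{(turnover, mechanism unresolved)}\\
&\mathrm{Trx(S{-}S) + NADPH + H^{+} \xrightarrow{\ TrxR\ } Trx(SH)_{2} + NADP^{+}}
\end{aligned}
\]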
PDI, another redoxin, has been shown to denitrosate GSNO [START_REF] Sliskovic | Characterization of the S-denitrosation activity of protein disulfide isomerase[END_REF]. The NO released through this reaction combines with dioxygen in the hydrophobic environment of either the cell membrane or the PDI protein itself to form dinitrogen trioxide (N2O3). In this way, PDI becomes a NO carrier, with N2O3 mediating auto-S-nitrosation of its thiol active sites.
Finally, a cell membrane enzyme implicated in the metabolism of GSH, γ-glutamyltransferase (GGT), must also be taken into account for specific GSNO catabolism [START_REF] Angeli | A kinetic study of gamma-glutamyltransferase (GGT)-mediated S-nitrosoglutathione catabolism[END_REF][START_REF] Hogg | S-Nitrosoglutathione as a substrate for gammaglutamyl transpeptidase[END_REF]. S-nitrosoglutathione is labile in the reducing environment of the cytosol; in the extracellular space, on the other hand, GGT specifically catalyzes its breakdown, releasing its γ-glutamyl residue through a hydrolytic pathway (1) or transferring this residue to an acceptor such as glycylglycine (GlyGly) in a transpeptidation reaction (2).
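In simplified form (a reconstruction consistent with the chemistry described in the cited kinetic studies, not reproduced verbatim from them), the two reactions can be written as:

\[
\begin{aligned}
&(1)\quad \mathrm{GSNO + H_{2}O \xrightarrow{\ GGT\ } \text{S-nitroso-cysteinylglycine} + \text{L-glutamate}}\\
&(2)\quad \mathrm{GSNO + GlyGly \xrightarrow{\ GGT\ } \text{S-nitroso-cysteinylglycine} + \gamma\text{-glutamyl-GlyGly}}
\end{aligned}
\]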
The presence of glycylglycine, added to serve as a co-substrate for the transpeptidation reaction [START_REF] Angeli | A kinetic study of gamma-glutamyltransferase (GGT)-mediated S-nitrosoglutathione catabolism[END_REF], accelerates the decomposition rate of GSNO, thus producing S-nitrosocysteinylglycine more quickly. The latter, being less stable than GSNO, is rapidly decomposed in the presence of divalent metallic ions such as Fe2+ and Cu2+, leading to the release of NO and oxidized cysteinylglycine [START_REF] Angeli | A kinetic study of gamma-glutamyltransferase (GGT)-mediated S-nitrosoglutathione catabolism[END_REF]. S-nitrosocysteinylglycine is itself endowed with vasorelaxant activity, as is its metabolite S-nitroso-L-cysteine, produced by the action of dipeptidases.
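The metal-dependent step follows the general pattern of transition-metal-catalyzed S-nitrosothiol decomposition, of which copper is the best-documented case (Cu+ generated in situ by thiol reduction of Cu2+ is thought to be the active species); in overall, simplified form:

\[
\mathrm{2\ RSNO \xrightarrow{\ Cu^{+}/Cu^{2+}\ } RSSR + 2\ NO^{\bullet}}
\]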
Indeed, under physiological conditions, the rapid vasorelaxant effects of GSNO are largely dependent on the presence of the endothelium and are mediated through its GGT-dependent metabolism [START_REF] Dahboul | Endothelial γ-glutamyltransferase contributes to the vasorelaxant effect of S-nitrosoglutathione in rat aorta[END_REF]. This is an important observation from a pathophysiological perspective, because elevated GGT activity has been associated with various pathologies such as cystic fibrosis [START_REF] Corti | Contribution by polymorphonucleate granulocytes to elevated gamma-glutamyltransferase in cystic fibrosis sputum[END_REF], hypertension [11] and atherosclerosis [START_REF] Franzini | Gamma-glutamyltransferase activity in human atherosclerotic plaques--biochemical similarities with the circulating enzyme[END_REF], and because GSNO-based therapeutics have already been proposed at the clinical level [START_REF] Rassaf | Positive effects of nitric oxide on left ventricular function in humans[END_REF][START_REF] Snyder | Acute effects of aerosolized Snitrosoglutathione in cystic fibrosis[END_REF].
GSNO: a possible nitric oxide donor in atherosclerosis
Nitric oxide, as described previously, is highly reactive, has a short half-life in vivo (estimated in the range of a few seconds) and mediates SMC relaxation [START_REF] Arnold | Nitric oxide activates guanylate cyclase and increases guanosine 3':5' -cyclic monophosphate levels in various tissue preparations[END_REF]. In this context, several classes of NO donors based on more complex chemical systems, such as nitrosamines, organic nitrates and metal-NO complexes, but also S-nitrosothiols, have therefore emerged.
Such NO-donating compounds are referred to as "NO donors". Three NO-release mechanisms have been described, depending on chemical structure. The first type is "spontaneous" NO donation, which releases NO through thermal or photochemical self-decomposition (e.g. S-nitrosothiols, N-diazeniumdiolates, oximes) [START_REF] Wang | Nitric oxide donors: chemical activities and biological applications[END_REF]. The second type is "chemically catalyzed" NO release, triggered by acid, alkali, metals or thiols [START_REF] Wang | Nitric oxide donors: chemical activities and biological applications[END_REF]; this is the case, e.g., of organic nitrates, nitrites, sydnonimines and S-nitrosothiols. The third type is "enzymatically catalyzed" NO donation, depending on enzymatic oxidation: e.g. N-hydroxyguanidines need metabolic activation by NO synthases or oxidases for NO release [START_REF] Wang | Nitric oxide donors: chemical activities and biological applications[END_REF].
Moreover, some NO donors release NO in more than one way; for example, some S-nitrosothiols can also generate NO by enzymatic catalysis [START_REF] Singh | Mechanism of superoxide dismutase/H(2)O(2)-mediated nitric oxide release from S-nitrosoglutathione--role of glutamate[END_REF], as previously described for GGT and GSNO.
The delivery of supplementary NO in the form of NO-donor drugs represents an attractive therapeutic option for the treatment of cardiovascular diseases. Some NO-donor drugs are already in widespread clinical use, in particular the organic nitrates (e.g. nitroglycerin, isosorbide dinitrate or isosorbide-5-mononitrate), the organic nitrites (e.g. amyl nitrite), the ferrous nitro complexes (e.g. sodium nitroprusside) and the sydnonimines (e.g. molsidomine) [START_REF] Tullett | Use of NO donors in biological systems[END_REF]. Nitroglycerin and amyl nitrite are widely used in both the prophylaxis and the treatment of angina pectoris. Sodium nitroprusside is used in the treatment of hypertensive emergencies as well as severe cardiac failure, but never as a chronic treatment [START_REF] Doni | S-Nitrosothiols: a class of nitric oxide-donor drugs[END_REF]. Other commercialized NO-donor drugs contain molsidomine as the active agent, which is degraded in the liver into 3-morpholinosydnonimine (SIN-1) and liberates NO. Although these drugs are effective, they present some drawbacks. Indeed, continuous administration of organic nitrates can rapidly induce tolerance in patients, with a reduced therapeutic effect over time. Moreover, administration of sodium nitroprusside may be toxic, since this drug is converted into cyanide and thiocyanate [START_REF] Yamamoto | Nitroprusside intoxication: protection of alphaketoglutarate and thiosulfate[END_REF][START_REF] Johanning | A retrospective study of sodium nitroprusside use and assessment of the potential risk of cyanide poisoning[END_REF]. Toxic accumulation of cyanide may lead to severe lactic acidosis, arrhythmia and excessive hypotension. SIN-1 decomposition can generate large amounts of NO and superoxide simultaneously, leading to the formation of ONOO-, which has been associated with cytotoxic effects through oxidation and nitration reactions [START_REF] Hogg | Production of hydroxyl radicals from the simultaneous generation of superoxide and nitric oxide[END_REF][START_REF] Ischiropoulos | Peroxynitrite-mediated oxidative protein modifications[END_REF][START_REF] Szabo | DNA damage induced by peroxynitrite: subsequent biological effects[END_REF].
S-nitrosothiols are proposed as promising NO-donor candidates because they induce neither tolerance nor cyanide poisoning [START_REF] Doni | S-Nitrosothiols: a class of nitric oxide-donor drugs[END_REF] and have been shown to be effective in several disease models [START_REF] Lima | Endogenous S-nitrosothiols protect against myocardial injury[END_REF]. In contrast to the other NO donors described above, only small initial clinical studies have been reported for S-nitrosothiols, suggesting that they may be valuable therapeutic agents in a variety of cardiovascular disorders.
Recent studies have highlighted the implication of NO homeostasis in atherosclerosis and the potential therapeutic benefit of drugs that donate NO (e.g. organic nitrates, nicorandil and sydnonimines) or that increase the availability of endogenous NO (e.g. statins, angiotensin-converting enzyme inhibitors, L-arginine and tetrahydrobiopterin) [START_REF] Herman | Therapeutic potential of nitric oxide donors in the prevention and treatment of atherosclerosis[END_REF]9,[START_REF] Meyer | Therapeutic strategies to deplete macrophages in atherosclerotic plaques[END_REF]. The rationale for this implication lies in the potential effects of NO on macrophages, SMCs and ECs, the main cellular components of the atherosclerotic plaque.
On the other hand, it is known that NO by itself [START_REF] Chang | Oxidation of LDL to a biologically active form by derivatives of nitric oxide and nitrite in the absence of superoxide. Dependence on pH and oxygen[END_REF][START_REF] Wang | Oxidation of LDL by nitric oxide and its modification by superoxides in macrophage and cell-free systems[END_REF], or in combination with superoxide anions [START_REF] Darley-Usmar | The simultaneous generation of superoxide and nitric oxide can initiate lipid peroxidation in human low-density lipoprotein[END_REF], can stimulate the oxidation of LDL, which constitutes a critical triggering event in atherogenesis. Thus, S-nitrosothiols, in spite of their advantages, could also contribute to oxidative stress.
Little was known about the effect of S-nitrosothiols on native LDL oxidation in normal ECs and SMCs.
Therefore, Jaworski et al. compared the effects of S-nitroso-N-acetylpenicillamine (SNAP) and of two other known NO donors, SIN-1 and sodium nitroprusside, on LDL oxidation either in an acellular system or in the presence of normal endothelial cells or smooth muscle cells [8]. They demonstrated that sodium nitroprusside strongly oxidized LDL in medium alone, as well as in endothelial or smooth muscle cell cultures, and that it should therefore be used with caution in therapeutic vascular treatments, especially under conditions of high concentration and prolonged administration. SIN-1 also oxidized LDL in the absence of cells and clearly enhanced LDL oxidation in cell cultures, in agreement with other studies [START_REF] Darley-Usmar | The simultaneous generation of superoxide and nitric oxide can initiate lipid peroxidation in human low-density lipoprotein[END_REF][START_REF] Mital | Synergy of amlodipine and angiotensinconverting enzyme inhibitors in regulating myocardial oxygen consumption in normal canine and failing human hearts[END_REF]. SNAP was not able to oxidize LDL either in the acellular system or in the presence of cells, showing that the amounts of superoxide and other reactive oxygen species released by these cells, contrary to those liberated by macrophages, did not suffice to combine with NO and provide oxidant activity.
It has been shown that other S-nitrosothiols, such as GSNO, could protect endothelial cells from the toxic effect of oxidized LDL [START_REF] Struck | Nitric oxide compounds inhibit the toxicity of oxidized low-density lipoprotein to endothelial cells[END_REF]. In addition, SNAP was reported to be useful in the treatment of heart failure, reducing myocardial oxygen consumption [START_REF] Mital | Synergy of amlodipine and angiotensinconverting enzyme inhibitors in regulating myocardial oxygen consumption in normal canine and failing human hearts[END_REF], inducing vasodilation in rat femoral arteries [START_REF] Megson | Prolonged effect of a novel S-nitrosated glyco-amino acid in endothelium denuded rat femoral arteries: potential as a slow release nitric oxide donor drug[END_REF] and inhibiting platelet activation and aggregation through cGMP accumulation [START_REF] Salas | Comparative pharmacology of analogues of S-nitroso-N-acetyl-DL-penicillamine on human platelets[END_REF][START_REF] Gordge | Evidence for a cyclic GMP-independent mechanism in the antiplatelet action of S-nitrosoglutathione[END_REF]. Taken together, these observations suggest that S-nitrosothiols could limit the progression of vascular disorders such as atherogenesis without increasing oxidative stress, and thus encourage their clinical applications in the therapeutic treatment of atherogenesis.
Along these lines, Martinet and colleagues demonstrated in earlier work [START_REF] Meyer | Nitric oxide donor molsidomine favors features of atherosclerotic plaque stability during cholesterol lowering in rabbits[END_REF] that treatment of cholesterol-fed rabbits with the NO donor molsidomine preferentially eliminates macrophages, exerting several beneficial effects on plaque structure and stability. In a more recent study, they investigated the underlying mechanisms and reported that this effect is mediated, at least in part, by the induction of endoplasmic reticulum (ER) stress [9]. Macrophages and SMCs were treated in vitro with the NO donors spermine NONOate or SNAP, as well as with the well-known ER stress inducers thapsigargin, tunicamycin, dithiothreitol or brefeldin A, and several markers of macrophage death and of ER stress induction were measured. Macrophages and SMCs treated with spermine NONOate or SNAP showed several signs of ER stress, including upregulation of CHOP (CCAAT/enhancer-binding protein homologous protein) expression, hyperphosphorylation of eIF2α, inhibition of de novo protein synthesis and splicing of XBP1 (X-box binding protein 1) mRNA. These effects were similar in macrophages and SMCs, yet only macrophages underwent apoptosis. Interestingly, selective induction of macrophage death could also be initiated with well-known ER stress inducers such as the Ca2+ homeostasis disruptor thapsigargin and the N-linked glycosylation inhibitor tunicamycin, reinforcing the finding that induction of ER stress during treatment with NO donors initiates selective macrophage death. In conclusion, Martinet et al. provided evidence that NO-induced ER stress can induce macrophage cell death in atherosclerotic plaques without affecting SMC viability, probably via inhibition of protein synthesis. From a clinical perspective, it is important to note that protein synthesis inhibitors, e.g. cycloheximide, cannot be administered systemically because they cause dramatic cell death in the liver [START_REF] Higami | Intravenous injection of cycloheximide induces apoptosis and up-regulates p53 and Fas receptor expression in the rat liver in vivo[END_REF]. Moreover, local drug delivery by means of coated stents avoids unwanted systemic effects but does not guarantee permanent depletion of macrophages, owing to the fast release rates (hours to months) of the coated drug. In the case of short-term drug delivery, it is only a matter of time before blood monocytes reinvade the "purified" plaque. NO donors are widely used by patients with coronary artery disease to relieve the symptoms of ischaemia evoked by atherosclerosis and can be administered safely for many years [START_REF] Herman | Therapeutic potential of nitric oxide donors in the prevention and treatment of atherosclerosis[END_REF]. Therefore, it is conceivable that NO donors, if necessary in combination with a statin or with local therapy (for example, stent-based delivery of cycloheximide), would offer new opportunities for a long-term macrophage-depleting effect in atherosclerotic plaques.
Table 4 summarizes the various NO donors described above and their fields of application.
Recently, indeed, a new class of compounds, the NO-donating statins, has been reported to combine the ability to donate bioactive NO with the capacity to inhibit 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMG-CoA reductase), thus displaying a variety of biological effects in addition to those of statins, including anti-thrombotic and anti-inflammatory properties [START_REF] Ongini | Nitric oxide (NO)-releasing statin derivatives, a class of drugs showing enhanced antiproliferative and antiinflammatory properties[END_REF][START_REF] Rossiello | A novel nitric oxide-releasing statin derivative exerts an antiplatelet/antithrombotic activity and inhibits tissue factor expression[END_REF][START_REF] Momi | Nitroaspirin plus clopidogrel versus aspirin plus clopidogrel against platelet thromboembolism and intimal thickening in mice[END_REF] (Table 5).
Momi et al. [START_REF] Momi | Nitric oxide enhances the anti-inflammatory and anti-atherogenic activity of atorvastatin in a mouse model of accelerated atherosclerosis[END_REF] compared the anti-inflammatory and anti-atherothrombotic properties of the NO-donating atorvastatin NCX 6560 with those of plain atorvastatin in an animal model of severe endothelial dysfunction, oxidative stress and accelerated atherosclerosis (LDLR-/- mice fed a cholesterol-rich diet and undergoing intravascular photochemically induced generation of oxygen free radicals). In this model, atorvastatin, despite its lipid-lowering and anti-inflammatory activity [START_REF] Momi | NCX 6560, a nitric oxide-releasing derivative of atorvastatin, inhibits cholesterol biosynthesis and shows antiinflammatory and anti-thrombotic properties[END_REF][START_REF] Ongini | Nitric oxide (NO)-releasing statin derivatives, a class of drugs showing enhanced antiproliferative and antiinflammatory properties[END_REF], only partly prevented atherosclerosis and inflammation, whereas the NO-releasing atorvastatin showed superior activity. This suggests that, in conditions of severe endothelial injury associated with the generation of oxygen radicals, the direct supply of NO allows a significant anti-atherosclerotic effect to be attained through the enhancement of vascular protective effects of atorvastatin unrelated to lipid lowering.
The conclusion that NO donation confers superior anti-atherosclerotic activity on atorvastatin is supported by several observations. The results showed a significant decrease in plasma IL-6 levels one week after the beginning of treatment with NCX 6560, while atorvastatin required at least three weeks. Moreover, the anti-atherosclerotic activity of NCX 6560 was superior to that of atorvastatin, despite a similar reduction of serum cholesterol and similar drug plasma levels. The administration of NCX 6560, but not of atorvastatin, strongly enhanced the plasma levels of NO-degradation products and of cGMP, confirming the in vivo release of biologically relevant amounts of NO. Furthermore, NCX 6560, but not atorvastatin, reduced the expression in the injured arterial wall of several inflammatory proteins, e.g. MMP-2, which participate in atherosclerotic vascular remodelling [START_REF] Toutouzas | Inflammation and restenosis after percutaneous coronary interventions[END_REF]. A direct effect of NCX 6560-released NO on the vascular wall is documented by the decrease of inflammatory markers (MMP-2, COX-2, and iNOS) in the vessel wall, by enhanced endothelium-dependent vasorelaxation, by the lowering of blood pressure in eNOS-/- mice, and by the increase of phosphorylated eNOS in the vascular tissue [START_REF] Ye | The role of eNOS, iNOS, and NF-kB in upregulation and activation of cyclooxigenase-2 and infarct size reduction by atorvastatin[END_REF].
In conclusion, this study showed that in a model of severe endothelial dysfunction, systemic peroxidation, inflammation and accelerated atherosclerosis, atorvastatin, even at high doses, displays suboptimal anti-atherogenic and anti-inflammatory activity, while NO-donating atorvastatin has strong and prompt anti-inflammatory and anti-atherosclerotic effects.
Taken together, these studies reinforce the importance of NO donors, with their protective and beneficial effects, as a basis for novel strategies to treat atherosclerosis or, at least, to minimize its development.
Finally, in recent years much attention has focused on cellular responses to GSNO [START_REF] Broniowska | S-Nitrosoglutathione[END_REF].
Indeed, GSNO does not induce tolerance or oxidative stress. To date, nearly 20 clinical trials have investigated the therapeutic efficacy of GSNO in multiple pathological contexts, though most have focused on its effects in cardiovascular diseases. GSNO has been administered by intravenous infusion [START_REF] Rassaf | Positive effects of nitric oxide on left ventricular function in humans[END_REF], as an aerosolized inhalant [START_REF] Snyder | Acute effects of aerosolized Snitrosoglutathione in cystic fibrosis[END_REF], and more recently as a topical gel [START_REF] Souto | Vascular modifications of the clitoris induced by topic nitric oxide donor gel-preliminary study[END_REF] and a polyvinyl alcohol film [START_REF] Simoes | Poly (vinyl alcohol) films for topical delivery of S-nitrosoglutathione: effect of freezing-thawing on the diffusion properties[END_REF]. The best-characterized effects of GSNO in humans are its direct and selective actions on platelets [START_REF] Kaposzta | S-nitrosoglutathione reduces asymptomatic embolization after carotid angioplasty[END_REF][START_REF] Langford | Inhibition of platelet activity by S-nitrosoglutathione during coronary angioplasty[END_REF]. GSNO has been shown to decrease embolism from symptomatic carotid plaques and after carotid endarterectomy [START_REF] Kaposzta | L-arginine and S nitrosoglutathione reduce embolization in humans[END_REF], carotid angioplasty [START_REF] Kaposzta | S-nitrosoglutathione reduces asymptomatic embolization after carotid angioplasty[END_REF], and vein grafting [START_REF] Salas | S-nitrosoglutathione inhibits platelet activation and deposition in coronary artery saphenous vein grafts in vitro and in vivo[END_REF] by limiting platelet activation.
Its beneficial effects in the vasculature extend to cardiac left ventricular function [START_REF] Rassaf | Positive effects of nitric oxide on left ventricular function in humans[END_REF], systemic vasodilation [START_REF] Rassaf | Plasma nitrosothiols contribute to the systemic vasodilator effects of intravenously applied NO: experimental and clinical Study on the fate of NO in human blood[END_REF], and preeclampsia [START_REF] Lees | The effects of Snitrosoglutathione on platelet activation, hypertension, and uterine and fetal Doppler in severe preeclampsia[END_REF] (Table 4).
Although with some discrepancies related to the analytical method used [START_REF] Giustarini | Detection of S-nitrosothiols in biological fluids: a comparison among the most widely applied methodologies[END_REF][START_REF] Bramanti | Exogenous vs. endogenous gamma-glutamyltransferase activity: Implications for the specific determination of Snitrosoglutathione in biological samples[END_REF], GSNO has been found at nano- to low micromolar concentrations in extracellular fluids and tissues [START_REF] Giustarini | Detection of S-nitrosothiols in biological fluids: a comparison among the most widely applied methodologies[END_REF][START_REF] Gaston | Endogenous nitrogen oxides and bronchodilator S-nitrosothiols in human airways[END_REF][START_REF] Kluge | S-nitrosoglutathione in rat cerebellum: identification and quantification by liquid chromatography-mass spectrometry[END_REF] and, being an endogenous compound, it has been considered an "attractive candidate" for GSNO-based therapies [START_REF] Snyder | Acute effects of aerosolized Snitrosoglutathione in cystic fibrosis[END_REF][START_REF] Corti | Mechanisms and targets of the modulatory action of Snitrosoglutathione (GSNO) on inflammatory cytokines expression[END_REF].
Although GSNO is the main physiological source of NO in tissues [START_REF] Maron | S-Nitrosothiols and the S-Nitrosoproteome of the cardiovascular system[END_REF] and plays an important role in NO signaling and protein S-nitrosation, there is not yet enough experimental evidence supporting its use for the treatment of the atherosclerotic process. If one possible aim in the "treatment" of atherosclerosis is to introduce new therapeutic interventions capable of reducing plaque macrophage content at different stages of the disease - either by reducing lipids in the circulation or in the plaque, decreasing monocyte recruitment, inhibiting macrophage activation, or inducing macrophage death in atherosclerotic plaques - the question naturally arises: why not employ GSNO? It is tempting to propose GSNO as a potential therapeutic agent in the prevention and treatment of atherosclerotic plaques. More specifically, it is conceivable that GSNO could be employed to counteract NO deficiency in cardiovascular disorders, and particularly in atherosclerosis, a condition characterized by decreased NO bioavailability and increased oxidative stress. Given the multitude of factors that influence NO bioavailability in atherosclerosis, and the fact that this bioavailability is tightly regulated by multiple fine-tuned mechanisms, it is logical to target one or more of these processes to enhance NO levels. Indeed, the development and/or progression of atherosclerosis is associated with dysregulation of one or more of these mechanisms, resulting in the production of suboptimal levels of NO by the endothelium. Mechanisms by which existing drugs or future therapeutics may increase NO bioavailability include supplementation of L-arginine or NO-releasing molecules, restoration of cofactors and cosubstrates for eNOS, stabilization of eNOS mRNA or protein, increasing eNOS expression or activity, reducing the synthesis or increasing the metabolism of methylarginines, reducing oxidative stress, and/or enhancing the clearance of harmful lipoproteins.
Precisely because the ability of GSNO to regulate NO bioavailability under oxidative stress conditions is poorly understood, investigating this aspect in depth must be regarded as a priority. The elucidation of these mechanisms could open the way for the use of GSNO as another useful NO donor in cardiovascular diseases associated with oxidative stress.
Finally, as previously described, GSNO is an endogenous low molecular weight S-nitrosothiol involved in the storage and transport of NO. It exhibits higher stability than NO, mediates protein S-nitrosation processes and plays an important role in vascular signaling [START_REF] Marozkina | S-Nitrosylation signalling regulates cellular protein interactions[END_REF]. The biological activity of GSNO, and particularly its vasorelaxant effect, has been reported in ex vivo isolated vessel models [START_REF] Sogo | S-nitrosothiols cause prolonged, nitric oxide-mediated relaxation in human saphenous vein and internal mammary artery: therapeutic potential in bypass surgery[END_REF][START_REF] Alencar | S-Nitrosating nitric oxide donors induce long-lasting inhibition of contraction in isolated arteries[END_REF][START_REF] Dahboul | Endothelial γ-glutamyltransferase contributes to the vasorelaxant effect of S-nitrosoglutathione in rat aorta[END_REF] and is directly linked to its decomposition, which results in the release of NO. Indeed, the potential therapeutic use of GSNO rests on its unique capacity to deliver NO through its degradation by enzymatic activities, with beneficial effects particularly in cardiovascular diseases associated with oxidative stress.
One of these enzymes, GGT (EC 2.3.2.2), involved in the metabolism of GSH [START_REF] Griffith | Transport of gamma-glutamyl amino acids: role of glutathione and gamma-glutamyl transpeptidase Proceedings of the National Academy of Sciences of United States of America[END_REF], has also been shown to specifically metabolize GSNO [START_REF] Angeli | A kinetic study of gamma-glutamyltransferase (GGT)-mediated S-nitrosoglutathione catabolism[END_REF][START_REF] Hogg | S-Nitrosoglutathione as a substrate for gammaglutamyl transpeptidase[END_REF], generating S-nitrosocysteinylglycine [START_REF] Hogg | S-Nitrosoglutathione as a substrate for gammaglutamyl transpeptidase[END_REF], which has been reported to have biological consequences in several cellular responses. The profile of GGT is analyzed in depth in the following section.
Role of gamma-glutamyl transferase (GGT) in atherosclerosis
γ-Glutamyl transferase is an enzyme widely distributed and conserved in the living world: in bacteria [START_REF] Okada | Crystal structure of the gammaglutamyltranspeptidase precursor protein from Escherichia coli. Structural changes upon autocatalytic processing and implications for the maturation mechanism[END_REF], in plants [START_REF] Martin | Purified gamma-glutamyl transpeptidases from tomato exhibit high affinity for glutathione and glutathione S-conjugates[END_REF] and throughout the animal kingdom. In mammals, GGT is a type II membrane glycoprotein consisting of a heavy (380 amino acids, 55-62 kDa) and a light (189 amino acids, 20-30 kDa) subunit, linked by non-covalent bonds. Immunohistochemical studies have shown that GGT is located on the membrane of nearly all cells and is preferentially expressed in epithelial tissues with secretory and absorptive activity [START_REF] Hanigan | Immunohistochemical detection of gamma-glutamyl transpeptidase in normal human tissue[END_REF]. The highest GGT activity has been found in the kidney, on the luminal surface of the cells of the proximal convoluted tubule, while the cells of the distal tubule and the glomeruli are practically devoid of it. In the liver, GGT activity is found in the epithelial cell layer of the extrahepatic biliary tract and liver canaliculi, while in the pancreas most of it is in acinar cells. In the brain, GGT activity appears to contribute to the functionality of the blood-brain barrier by promoting the metabolism of leukotrienes and the detoxification of xenobiotics [START_REF] Garcion | 1,25-dihydroxyvitamin D3 regulates the synthesis of gamma-glutamyl transpeptidase and glutathione levels in rat primary astrocytes[END_REF]. GGT is also expressed at the endothelial level, where it plays a role in vascular relaxation, as shown by Dahboul et al. in a rat aorta model in which GGT activity mediates the release of NO from GSNO [START_REF] Dahboul | Endothelial γ-glutamyltransferase contributes to the vasorelaxant effect of S-nitrosoglutathione in rat aorta[END_REF]. GGT activity is also present on the membrane and intracytoplasmic granules of platelets, granulocytes and lymphocytes, where its increased activity is considered a marker of differentiation and malignant transformation [START_REF] Khalaf | Cytochemistry of gamma-glutamyltransferase in haemic cells and malignancies[END_REF].
Serum GGT activity is probably due to the release of the enzyme from the cell membranes of various parenchymatous organs, and reflects quantitative changes in the production, release and removal of the circulating enzyme.
GGT is a highly glycosylated glycoprotein and, based on its primary sequence, six possible consensus sequences for N-glycosylation have been identified. N-glycosylation accounts for 25-30% of the total mass of GGT and is tissue specific. In fact, GGT purified from different organs has a different molecular weight and, therefore, a different electrophoretic mobility. This heterogeneity of glycosylation suggests the occurrence of "isoforms" of the GGT enzyme, whereas no isoenzymes differing in amino acid sequence have been identified.
As a cell surface enzyme of the Meister (γ-glutamyl) cycle, GGT catalyzes the cleavage of the γ-glutamyl moiety of L-γ-glutamyl-L-cysteinyl-glycine (GSH) and its conjugates [START_REF] Tate | Gamma-Glutamyl transpeptidase: catalytic, structural and functional aspects[END_REF]. This is thought to occur through the formation of an intermediate γ-glutamyl enzyme, which can then either release free γ-glutamate by hydrolysis, or transfer the γ-glutamyl group to acceptor molecules (e.g. amino acids) through its transpeptidation activity.
Kinetic studies have suggested a "ping-pong" mechanism of action for GGT, which can be represented by the following scheme:
L-γ-glutamyl-L-cysteinyl-glycine + GGT ⇔ γ-glutamyl-GGT + L-cysteinylglycine
γ-glutamyl-GGT + acceptor ⇔ γ-glutamyl-acceptor + GGT
GGT is specific to the type of bond and not to the substrate, since only the γ-glutamyl position is critical for the interaction with the enzyme; therefore, all γ-glutamyl compounds are potential GGT substrates. The main physiological GGT substrates are GSH, the glutathione conjugates produced by GSH-transferase, and GSNO. As acceptor molecules, GGT only recognizes L-amino acids (Cys, Met, Gln, Glu) or, preferably, dipeptides. The best acceptors are Cys, Met, Gln, and the aminoacyl-glycine dipeptides (CysGly, MetGly, GlnGly, GlyGly) [START_REF] Thompson | Interrelationships between the binding sites for amino acids, dipeptides, and gammaglutamyl donors in gamma-glutamyl transpeptidase[END_REF][START_REF] Allison | Gamma-glutamyl transpeptidase: kinetics and mechanism[END_REF].
The GGT enzyme appears to be involved in various physiological functions and displays both antioxidant and pro-oxidant activities.
Anti-oxidant role of GGT
GGT is localized at the cell surface and only cleaves extracellular substrates. GSH and oxidized GSH (GSSG) are the most abundant physiological substrates, although GGT cleaves any gamma-glutamyl substrate including GSH S-conjugates [START_REF] Wickham | Gamma-glutamyl compounds: Substrate specificity of gamma-glutamyl transpeptidase enzymes[END_REF].
The ratio of GSH to oxidized glutathione (GSSG) can be used as an estimate of the overall oxidation state of the cellular thiol redox environment [START_REF] Schafer | Redox environment of the cell as viewed through the redox state of the glutathione disulfide/glutathione couple[END_REF][START_REF] Jones | Redox potential of GSH/GSSG couple: assay and biological significance[END_REF]. Indeed, the maintenance of the intracellular concentration of GSH is essential for cell survival [START_REF] Meister | Amino acid sequence of rat kidney glutathione synthetase[END_REF]. The cytosol is considered a reducing environment, with a cytosolic GSH/GSSG ratio equal to or greater than 100 [START_REF] Schafer | Redox environment of the cell as viewed through the redox state of the glutathione disulfide/glutathione couple[END_REF]. Measuring the GSH/GSSG ratio, or calculating the overall thiol redox potential based on these values, is a method often used to determine the level of (thiol) oxidative stress of cells or tissues. A shift toward a more oxidized thiol redox status of cells and tissues (high GSSG, low GSH/GSSG ratio) correlates with disease processes like atherosclerosis and diabetes [42,[START_REF] Asmis | A novel thiol oxidation-based mechanism for adriamycin-induced cell injury in human macrophages[END_REF][START_REF] Qiao | Thiol oxidative stress induced by metabolic disorders amplifies macrophage chemotactic responses and accelerates atherogenesis and kidney injury in ldl receptor-deficient mice[END_REF].
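As an illustration, the thiol redox potential mentioned here is commonly estimated with the Nernst equation for the GSSG/2GSH half-cell, following the formulation of Schafer and Buettner cited above (with a standard potential E°' of about -240 mV at pH 7.0 and 25 °C, as given in that work):

GSSG + 2H+ + 2e- ⇔ 2GSH

Ehc = E°' - (RT/2F) ln([GSH]²/[GSSG])

Because [GSH] enters as a squared term, the half-cell potential depends on the absolute GSH concentration and not only on the GSH/GSSG ratio; GSH depletion alone therefore shifts the couple towards a more oxidized potential.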
GSH has many important biological functions, including the conjugation of electrophilic compounds, the maintenance of the intracellular redox environment through thiol-disulfide exchange reactions, and the scavenging of free radicals. The intracellular concentration of glutathione depends on the balance between its synthesis, regulated by the supply of its constituent amino acids, and its consumption. Inasmuch as GGT initiates the degradation of extracellular GSH, making possible the subsequent cellular uptake of the constituent amino acids (the γ-glutamyl cycle), it is considered an ancillary enzyme of the antioxidant systems. From this point of view the enzyme could have a protective role against oxidative intracellular damage [223] (Fig. 5). Moreover, GGT cleaves the γ-glutamate group from extracellular GSH and GSSG, providing an additional source of cysteine for intracellular GSH synthesis. Indeed, another key function of GSH is the transport of cysteine (Cys), which would be highly unstable in the extracellular environment, as it self-oxidizes into cystine (Cys-Cys), favoring the formation of reactive oxygen species [START_REF] Yao | Abnormal expression of hepatoma specific gammaglutamyl transferase and alteration of gamma-glutamyl transferase gene methylation status in patients with hepatocellular carcinoma[END_REF]. Following the degradation of glutathione into cysteinyl-glycine (CysGly), GGT thus allows the recovery of cysteine. Cystine is the most effective acceptor of the transpeptidation reaction, and the dipeptide γ-glutamyl-cysteine (γ-GluCys) also contributes to the recovery of cysteine. Cysteine, a conditionally essential amino acid, is a limiting substrate for protein and glutathione synthesis [START_REF] Zhang | Gamma-glutamyl transpeptidase in glutathione biosynthesis[END_REF]. GGT acts together with other membrane-bound enzymes (dipeptidases) capable of hydrolysing the peptide bond of CysGly to recover the individual amino acids, to reconstitute glutathione or to effect the recovery of cysteine.
Pro-oxidant role of GGT
The substrate of GGT, GSH, is a tripeptide whose antioxidant properties are based on the reducing capacity of the thiol group (-SH) of its cysteine residue. Thiols, however, especially in their ionized thiolate form (R-S-), can donate electrons to reduce metal cations such as iron and copper. Indeed, the balance between the antioxidant and pro-oxidant actions depends on the availability of metal ions.
The pro-oxidant effects of GGT depend on the high reactivity of the thiol group (-SH) of CysGly, which is present as a thiolate anion (S-) at physiological pH. The latter reduces Fe3+ to Fe2+, generating the CysGly thiyl radical (S•). Fe2+, in turn, reduces molecular oxygen to the superoxide anion (O2•-), which is converted into hydrogen peroxide (H2O2) either spontaneously or by superoxide dismutase (SOD). H2O2, in the presence of free Fe2+ (Fenton reaction), can generate hydroxyl radicals, which together with the cysteinylglycine thiyl radical formed upon the reduction of Fe3+ can initiate the chain reaction of lipid peroxidation, resulting in loss of structure and stability of the cell membrane. The same pro-oxidants also have important functions [START_REF] Zalit | The role of chelators in the catalysis of glutathione-gamma-glutamyl transpeptidase-dependent lipid peroxidation by transition metals[END_REF], e.g. regulating protease and antiprotease activities, modulating apoptosis or proliferation depending on the cell line, and leading to transcriptional activation of NF-κB (Fig. 6) [START_REF] Paolicchi | Glutathione catabolism as a signaling mechanism[END_REF]. It is important to emphasize that these phenomena occur, at least initially, on the outer side of the plasma membrane of the cell, where they are not countered by the intracellular antioxidant defense systems [START_REF] Dominici | Pro-oxidant reactions promoted by soluble and cell-bound gamma-glutamyltransferase activity[END_REF]; rather, their consequences are propagated throughout the cell.
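For clarity, the reaction cascade just described can be condensed into the following simplified scheme (the CysGly thiolate is written as CysGly-S-; side reactions and the overall stoichiometric balance of the cycle are omitted):

CysGly-S- + Fe3+ → CysGly-S• + Fe2+
Fe2+ + O2 → Fe3+ + O2•-
2 O2•- + 2 H+ → H2O2 + O2 (spontaneous dismutation or SOD-catalyzed)
Fe2+ + H2O2 → Fe3+ + •OH + OH- (Fenton reaction)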
While GSH, owing to the peculiar properties of the free carboxyl group of its γ-glutamic acid residue, is relatively ineffective in reducing metal cations, its hydrolysis by GGT releases a catabolite, CysGly, which is much more reactive and efficient in reducing metal cations, thus triggering the production of free radicals and reactive oxygen species. The pro-oxidant action of GGT is linked to the presence of redox-active metals in the extracellular environment, which in vivo is largely prevented by the sequestration of metals in complexes such as ferritin, transferrin and ceruloplasmin, which do not allow metals to catalyze free radical reactions. In this regard, it should be underscored that GGT activity itself is able to reduce and promote the release of metal ions from transferrin [START_REF] Drozdz | Gamma-Glutamyltransferase dependent generation of reactive oxygen species from a glutathione/transferrin system[END_REF][START_REF] Dominici | Possible role of membrane gamma-glutamyltransferase activity in the facilitation of transferrin-dependent and -independent iron uptake by cancer cells[END_REF] as well as from ceruloplasmin [START_REF] Glass E Stark | Promotion of glutathione-gamma-glutamyl transpeptidase-dependent lipid peroxidation by copper and ceruloplasmin: the requirement for iron and the effects of antioxidants and antioxidant enzymes[END_REF], i.e. two physiological sources of transition metal ions.
Excess ROS production by GSH metabolism can induce DNA damage [START_REF] Schmidt | Cellular receptors for advanced glycation end products. Implications for induction of oxidant stress and cellular dysfunction in the pathogensis of vascular lesions[END_REF][START_REF] Gimbrone | Vascular endothelium: An integrator of pathophysiologic stimuli in atherosclerosis[END_REF] or trigger lipid peroxidation, documented in vitro with linoleic acid [START_REF] Ross | The pathogenesis of atherosclerosis: a perspective for the 1990s[END_REF] and isolated LDL lipoproteins [START_REF] Ludmer | Paradoxical vasoconstriction induced by acetylcholine in atherosclerotic coronary arteries[END_REF] as substrates. Indeed, LDL oxidation is known to play a central role in atherogenesis and vascular damage, and iron is a potential catalyst of LDL oxidation, provided that electron donors convert Fe3+ to Fe2+.
In fact, thiol compounds such as cysteine and homocysteine are known to reduce Fe3+ and promote Fe2+-dependent LDL oxidation [START_REF] Bradley | TNF-mediated inflammatory disease[END_REF]. A series of experiments showed that the γ-glutamate residue of GSH affects the interaction of the juxtaposed cysteine thiol with iron, precluding Fe3+ reduction and hence LDL oxidation [START_REF] Paolicchi | Gamma-glutamyl transpeptidase-dependent iron reduction and LDL oxidation-a potential mechanism in atherosclerosis[END_REF][START_REF] Berliner | The role of oxidized lipoproteins in atherogenesis[END_REF]. Both processes increase remarkably after the addition of purified GGT, which acts by removing the γ-glutamate residue [START_REF] Dominici | Possible role of membrane gamma-glutamyltransferase activity in the facilitation of transferrin-dependent and -independent iron uptake by cancer cells[END_REF].
GGT in the atherosclerotic plaque
GGT has been successfully used as a predictor of cardiovascular risk: studies conducted since the late 1990s have shown the association of relatively higher serum GGT levels with the progression of atherosclerosis and its complications [11]. Its accumulation in carotid [START_REF] Emdin | Serum gamma-glutamyltransferase as a risk factor of ischemic stroke might be independent of alcohol consumption[END_REF] and coronary artery atherosclerotic plaques [12] has also been documented. Histochemical studies have shown that intense GGT activity is detectable in the intimal layers of human atherosclerotic lesions, where it is apparently expressed by CD68+ macrophage-derived foam cells [START_REF] Paolicchi | Gamma-glutamyl transpeptidase-dependent iron reduction and LDL oxidation-a potential mechanism in atherosclerosis[END_REF]. Moreover, GGT-positive foam cells were found to colocalize with immunoreactive oxidized LDL, suggesting a possible role for GGT in the cellular processes of iron-mediated LDL damage. Interestingly, catalytically active GGT was also demonstrated within microthrombi adhering to the surface of atheromas [START_REF] Paolicchi | Gamma-glutamyl transpeptidase-dependent iron reduction and LDL oxidation-a potential mechanism in atherosclerosis[END_REF][START_REF] Emdin | Serum gamma-glutamyltransferase as a risk factor of ischemic stroke might be independent of alcohol consumption[END_REF][START_REF] Emdin | Prognostic value of serum gamma-glutamyl transferase activity after myocardial infarction[END_REF].
There are two possible origins of GGT in the plaque: an endogenous and an exogenous component. The first could depend on the cellular elements of the plaque (inflammatory and smooth muscle cells). Recently, RT-PCR analysis of plaque extracts revealed the presence of GGT mRNA transcribed from the GGT-1 gene, one of the thirteen homologs of the human GGT gene [START_REF] Franzini | Gamma-glutamyltransferase activity in human atherosclerotic plaques--biochemical similarities with the circulating enzyme[END_REF]. This indicates that plaque GGT may, at least in part, also derive from local synthesis of the protein, and a likely endogenous source of GGT could be represented by inflammatory cells [START_REF] Khalaf | Cytochemistry of gamma-glutamyltransferase in haemic cells and malignancies[END_REF][START_REF] Grisk | The activity of gammaglutamyl transpeptidase (gamma-GT) in populations of mononuclear cells from human peripheral blood[END_REF][START_REF] Sener | Activity determination, kinetic analyses and isoenzyme identification of gammaglutamyltransferase in human neutrophils[END_REF].
As for the exogenous component, this hypothesis is supported by the finding that the serum GGT of patients suffering from CVD seems to be transported by lipoproteins, including LDL [START_REF] Huseby | Multiple forms of serum gamma-glutamyltransferase. Association of the enzyme with lipoproteins[END_REF][START_REF] Watanabe | Association of gamma-glutamyltransferase with plasma lipoprotein and lipid-protein complex in cholestasis[END_REF][START_REF] Wenham | Physical properties of gamma-glutamyltransferase in human serum[END_REF]. A first characterization of GGT extracted from atherosclerotic plaques showed the presence of two distinct enzymatic forms differing in molecular size, surface charge and precipitability. The high molecular weight form (similar to the GGT-LDL complex found in serum), with a high negative charge (high N-glycosylation) and precipitable with polyanions (under the same conditions as serum LDL), may be of serum origin [START_REF] Franzini | Gamma-glutamyltransferase activity in human atherosclerotic plaques--biochemical similarities with the circulating enzyme[END_REF]. Serum GGT can be divided into two fractions, hydrophilic and hydrophobic, with different characteristics of density, size and charge. The hydrophobic fraction is formed by a set of molecular complexes consisting of GGT transported by the lipoproteins VLDL, LDL and HDL, and by chylomicrons, thanks to the lipophilic domain located at the N-terminus of the GGT heavy chain, which is responsible for the normal insertion of the enzyme in the plasma membrane of cells. The hydrophilic form is constituted by GGT devoid of this lipophilic N-terminal domain and therefore not bound to any plasma transporter [START_REF] Huseby | Multiple forms of serum gamma-glutamyltransferase. Association of the enzyme with lipoproteins[END_REF].
Using a method based on the separation of GGT complexes by size exclusion chromatography and identification of the enzyme activity, the distribution of GGT fractions was screened in 200 blood donors (100 men and 100 women) [START_REF] Franzini | Fractions of plasma gamma-glutamyltransferase in healthy individuals: reference values[END_REF]. Four sub-fractions, corresponding to macromolecular complexes of different molecular size, called big-GGT (b-GGT, >2000 kDa), medium-GGT (m-GGT, 940 kDa), small-GGT (s-GGT, 140 kDa) and free-GGT (f-GGT, 70 kDa), were identified [START_REF] Franzini | Fractions of plasma gamma-glutamyltransferase in healthy individuals: reference values[END_REF][START_REF] Franzini | A high performance gel filtration chromatography method for gamma-glutamyltransferase fraction analysis[END_REF] (Fig. 7).
Figure 7. Plasma GGT elution profiles. High performance gel filtration chromatography method for GGT fraction analysis [START_REF] Franzini | Fractions of plasma gamma-glutamyltransferase in healthy individuals: reference values[END_REF].
The molecular weights of the b-, m-, and s-GGT fractions are compatible with those of complexes with the lipoproteins VLDL, LDL and HDL, respectively, while that of f-GGT corresponds to the free form of the enzyme.
Nevertheless, the gel filtration elution volumes of the individual fractions (i.e., their molecular weights) did not change, independently of the amount of associated GGT activity (Fig. 7). This suggests a specific interaction of the GGT enzyme with each lipoprotein particle, rather than nonspecific adsorption of lipophilic GGT onto circulating lipoproteins; the latter would produce a shift in the elution volume peaks as a consequence of the varying extent of GGT protein adsorption. Analysis of the activity of the four fractions as a function of total GGT activity in both genders showed that the f-GGT fraction was the most represented at low values of total GGT, while increases in total GGT concentration depended mainly on the s- and b-GGT fractions [START_REF] Franzini | Fractions of plasma gamma-glutamyltransferase in healthy individuals: reference values[END_REF] (Table 6).
Table 6 note: the parametric Student's t test was applied; NS, not significant.
These observations are of particular interest, considering that serum GGT levels have repeatedly been proposed to play an independent role in the clinical evolution of CVD related to atherosclerosis, including stroke [11,[START_REF] Emdin | Serum gamma-glutamyltransferase as a risk factor of ischemic stroke might be independent of alcohol consumption[END_REF]. A series of studies indicated that GGT is associated with overall mortality and cardiovascular events. The association occurs both in unselected populations [START_REF] Conigrave | Prediction of alcohol-related harm by laboratory test results[END_REF][START_REF] Brenner | Distribution, determinants, and prognostic value of gamma-glutamyltranspeptidase for all-cause mortality in a cohort of construction workers from south Germany[END_REF] and in patients with ascertained coronary artery disease, independently of potential confounders including liver disease and alcohol abuse [START_REF] Emdin | Prognostic value of serum gamma-glutamyl transferase activity after myocardial infarction[END_REF][START_REF] Karlson | Ten-year mortality amongst patients with a very small or unconfirmed acute myocardial infarction in relation to clinical history, metabolic screening and signs of myocardial ischaemia[END_REF]. Interestingly, although conditions associated with increased atherosclerosis such as obesity, elevated serum cholesterol, high blood pressure and myocardial infarction are positive determinants of serum GGT activity, all the studies mentioned above found that the predictive value of GGT for cardiovascular disease is independent of these determinants [START_REF] Emdin | Prognostic value of serum gamma-glutamyl transferase activity after myocardial infarction[END_REF][START_REF] Conigrave | Prediction of alcohol-related harm by laboratory test results[END_REF][START_REF] Jousilahti | Serum gammaglutamyltransferase, self-reported alcohol drinking, and the risk of stroke[END_REF]. These findings were finally confirmed by a prospective study in a population of over 160,000 subjects [START_REF] Ruttmann | γ-Glutamyltransferase as a risk factor for cardiovascular disease mortality. An investigation in a cohort of 163,944 Austrian adults[END_REF]: serum GGT, with a dose-response relationship, was shown to predict the occurrence of fatal events from coronary heart disease, congestive heart failure and stroke [START_REF] Ruttmann | γ-Glutamyltransferase as a risk factor for cardiovascular disease mortality. An investigation in a cohort of 163,944 Austrian adults[END_REF] (Fig. 8).

Macrophages play central roles in the initiation, progression and vulnerability of the atherosclerotic plaque. Like other immune cells, macrophages are not static but show high plasticity and heterogeneity in response to the multitude of stimuli received from their micro-environment. This macrophage heterogeneity is reflected by the functional polarization of differentiated macrophages and the dynamic switching between phenotypes. Under physiological conditions, the M1 and M2 populations coexist in a mixed state, while an unbalanced shift from M1 to M2, or vice versa, results in disease progression.
This chapter will describe the fundamental differentiation and polarization processes of macrophage lineage cells, and will also highlight the main differences in lipid and iron metabolism, as well as the critical role of glutathione and reactive oxygen species in M1 and M2 macrophages.
Based on this broad background, the final aim will be to evaluate macrophage lineage cells as a potential source of the GGT found in atherosclerotic plaques, in order to understand whether macrophages may provide a source of b-GGT within the plaque ("Monocytes/macrophages activation contributes to b-gamma-glutamyltransferase accumulation inside atherosclerotic plaques" [START_REF] Belcastro | Monocytes/macrophages activation contributes to b-gamma-glutamyltransferase accumulation inside atherosclerotic plaques[END_REF] - the first published article is presented at the end of chapter II).
Macrophage heterogeneity in atherosclerotic plaques
Macrophage functions can broadly be categorized by a series of dichotomies, for example, innate or acquired immunity, tissue destruction or repair, immigration or emigration, cholesterol accumulation or release, pro-inflammatory or anti-inflammatory phenotype. These functions could in principle be carried out by distinct subpopulations of macrophages or, in some cases, lineages related to different monocyte precursors as suggested in the previous chapter.
M1 and M2 macrophages are thought to represent the extreme polarization phenotypes of a continuum of pro- and anti-inflammatory macrophages simultaneously present in atherosclerotic lesions. Pro-inflammatory macrophages were found in plaques several decades ago, while M2 macrophages were detected more recently [START_REF] Bouhlel | PPARγ activation primes human monocytes into alternative M2 macrophages with anti-inflammatory properties[END_REF]. Stöger et al. [START_REF] Stöger | Distribution of macrophage polarization markers in human atherosclerosis[END_REF] studied the distribution of M1 and M2 macrophages in human plaques and showed that both types increase in number with plaque progression and are equally distributed in the fibrous cap region. M1-specific cell markers were preferentially detected in the rupture-susceptible shoulder regions, whereas M2-specific markers were predominant in more stable plaque regions away from the lipid core and in the adventitia [START_REF] Chinetti-Gbaguidi | Human atherosclerotic plaque alternative macrophages display low cholesterol handling but high phagocytosis because of distinct activities of the PPARγ and LXRα pathways[END_REF]. M2-enriched lesion areas expressed high levels of IL-4, a cytokine that is essential for M2 polarization. The observed M2 macrophage population was relatively more resistant to foam cell formation and had an increased ability to store engulfed cholesterol esters compared to M1 and resting macrophages. M1-specific markers were shown to be increased in carotid atherosclerotic lesions, while M2 markers were preferentially located in femoral atherosclerotic plaques [START_REF] Shaikh | Macrophage subtypes in symptomatic carotid artery and femoral artery plaques[END_REF]. In addition, plaque M1 macrophages showed upregulated expression of several MMPs [START_REF] Huang | Classical macrophage activation upregulates several matrix metalloproteinases through mitogen activated protein kinases and nuclear factor-Κb[END_REF]. These observations suggest that M1 macrophages preferentially accumulate in symptomatic and unstable plaques. An important task, therefore, is to link the presence of subset-specific surface markers to the actual function of the macrophages in the atherosclerotic plaque.
The contribution of these subsets to the atherogenic process may be better understood by studying the functions and signaling pathways of the plaque macrophage subsets in more depth, as described in the following paragraphs.
GM-CSF and M-CSF: factors of macrophage differentiation in the plaque
The heterogeneity of the macrophage population in atherosclerotic lesions has been a topic of great interest. As previously described, macrophage differentiation refers to the differentiation of monocytes into macrophages when monocytes infiltrate the arterial wall and transform from round-shaped cells into irregularly shaped cells capable of taking up antigens and migrating within the wall. In humans, blood monocytes are divided into two subpopulations based on CD14 and CD16 expression: CD14+CD16- cells are considered the counterpart of the murine Ly6C high monocytes ("inflammatory"), and CD14-CD16+ cells are the phenotypic equivalent of the murine Ly6C low monocytes ("resident") [START_REF] Geissmann | Blood monocytes: distinct subsets, how they relate to dendritic cells, and their possible roles in the regulation of Tcell responses[END_REF][START_REF] Libby | Diversity of denizens of the atherosclerotic plaque: not all monocytes are created equal[END_REF][START_REF] Auffray | Blood monocytes: development, heterogeneity, and relationship with dendritic cells[END_REF] (Fig. 9).
In vitro, monocyte differentiation is driven by two growth factors, granulocyte-macrophage colony-stimulating factor (GM-CSF) and macrophage colony-stimulating factor (M-CSF), which lead to the formation of macrophages phenotypically similar to M1 (pro-inflammatory) and M2 (anti-inflammatory), respectively [81]. Both pro-inflammatory M1 and anti-inflammatory M2 macrophages have been detected in atherosclerotic plaques. M1 cells - widely known as "classically activated" - are characterized by a round morphology. They are involved in the management of Th1-dependent pro-inflammatory immune responses, producing a range of inflammatory cytokines - such as IL-1, IL-6, IL-8, IL-12, IL-23 and TNF-α - as well as ROS and NO. On the other hand, M2 cells, known as "alternatively activated", are characterized by an anti-inflammatory phenotype, with a stretched, spindle-like morphology. They can be induced by Th2 cytokines, produce IL-10 and express various scavenger receptors such as CD36, macrophage scavenger receptor 1, the macrophage receptor with collagenous structure and the mannose receptor (MRC1 or CD206) [START_REF] Martinez | Macrophage activation and polarization[END_REF]. In general, they promote angiogenesis, resolution and repair processes, and tumor progression [START_REF] Gordon | Monocyte and macrophage heterogeneity[END_REF][START_REF] Mantovani | The chemokine system in diverse forms of macrophage activation and polarization[END_REF] (Fig. 9).
Figure 9. Signalling pathways in macrophages involved in atherosclerosis [19].
The evidence for an M-CSF/GM-CSF balance governing plaque macrophage differentiation in vivo is mainly based on in vitro studies showing this regulation in atherosclerosis-related cell types, and on the detection of both factors in murine and human plaques. In vitro studies show that M-CSF is constitutively expressed under physiological conditions by ECs, fibroblasts, macrophages and vascular SMCs [START_REF] Filonzi | Cytokine regulation of granulocyte-macrophage colony stimulating factor and macrophage colony-stimulating factor production in human arterial smooth muscle cells[END_REF]. M-CSF expression has also been shown to increase after stimulation by pro-inflammatory stimuli (e.g. TNF-α, IFN-γ) [START_REF] Filonzi | Cytokine regulation of granulocyte-macrophage colony stimulating factor and macrophage colony-stimulating factor production in human arterial smooth muscle cells[END_REF] and oxLDL [START_REF] Liao | Minimally modified low density lipoprotein is biologically active in vivo in mice[END_REF]. M-CSF is detected both in healthy arteries and in murine and human atherosclerotic lesions, and in the latter it correlates with macrophage and foam cell content as well as with plaque progression [START_REF] Brocheriou | Antagonistic regulation of macrophage phenotype by M-CSF and GM-CSF: implication in atherosclerosis[END_REF]. In contrast, GM-CSF is poorly expressed by ECs, vascular SMCs and macrophages under basal conditions and requires inflammatory stimuli (e.g. TNF-α or IL-1) [START_REF] Filonzi | Cytokine regulation of granulocyte-macrophage colony stimulating factor and macrophage colony-stimulating factor production in human arterial smooth muscle cells[END_REF] or oxLDL [START_REF] Sakai | Glucocorticoid inhibits oxidized LDL-induced macrophage growth by suppressing the expression of granulocyte/macrophage colony-stimulating factor[END_REF] for its induction. GM-CSF co-localises only at very low levels with vascular SMCs and ECs in healthy human arteries, while it is increased in these cell types during macrophage accumulation and atherosclerosis development [START_REF] Plenz | Smooth muscle cells express granulocyte-macrophage colonystimulating factor in the undiseased and atherosclerotic human coronary artery[END_REF].
In line with these data, both factors may contribute to the macrophage heterogeneity observed in plaques. Since M-CSF is constitutively expressed, macrophages infiltrating early lesions are likely differentiated towards an M0 (intermediate state) or M2-like phenotype. As the plaque progresses, oxLDL and other inflammatory stimuli may increase the production of both M-CSF and GM-CSF, thereby producing an imbalance between these differentiation factors. Since GM-CSF expression seems to be specifically high in advanced lesions, this may favor a phenotypic switch towards pro-inflammatory M1-like macrophages upon plaque progression, similar to what has been observed in murine lesions [START_REF] Khallou-Laschet | Macrophage plasticity in experimental atherosclerosis[END_REF]. Finally, monocyte-macrophage differentiation is also induced by chemokine (C-X-C motif) ligand 4 (CXCL4, or platelet factor 4), which primes monocyte differentiation towards "M4 cells", another pro-inflammatory macrophage phenotype whose actual role in atherogenesis is unknown [START_REF] Gleissner | CXC chemokine ligand 4 induces a unique transcriptome in monocyte-derived macrophages[END_REF].
The current classification of macrophage differentiation reveals a far more complex picture. In fact, upon differentiation, macrophages are exposed to a wide variety of cytokines in the inflammatory environment. This aspect of macrophage polarization is discussed next, on the basis of the published evidence.
Macrophage polarization
Polarization of macrophages, i.e. their ability to switch phenotype and functional characteristics in response to external signals, defines their plasticity. M1 and M2 macrophages can be polarized from M0 macrophages by LPS and IFN-γ [START_REF] Haq | The effect of gamma interferon on IL-1 secretion of in vitro differentiated human macrophages[END_REF], or by IL-4 [START_REF] Stein | Interleukin-4 potently enhances murine macrophage mannose receptor activity: a marker of alternative immunologic macrophage activation[END_REF], respectively. Within the anti-inflammatory M2 set, several subsets are distinguished depending on the polarizing stimuli - M2a, M2b and M2c - inducible from M0 by IL-4/IL-13, by immune complexes plus IL-1β or LPS, and by IL-10/TGF-β/glucocorticoids, respectively [START_REF] Mantovani | Macrophage diversity and polarization in atherosclerosis: a question of balance[END_REF]. Moreover, each M2 subset is characterized by a unique function. M2a macrophages express anti-inflammatory cytokines and contribute to tissue remodelling [START_REF] Martinez | Macrophage activation and polarization[END_REF]; M2b macrophages produce high amounts of IL-10 and low amounts of the pro-inflammatory cytokines IL-1β, IL-6, IL-12 and TNF-α, and are characterized by immunoregulatory properties [START_REF] Nairz | The struggle for iron -a metal at the host-pathogen interface[END_REF]. The M2c subset expresses IL-10, TGF-β, the pattern-recognition receptor pentraxin-3 and high levels of the Mer receptor kinase (MerTK), essential for efferocytosis [START_REF] Zizzo | Circulating levels of soluble MER in lupus reflect M2c activation of monocytes/macrophages, autoantibody specificities and disease activity[END_REF]. M2a, M2b and M2c macrophages have been found in both humans and mice, whereas the M2d subset has been identified only in the mouse [START_REF] Wang | Fra-1 protooncogene regulates IL-6 expression in macrophages and promotes the generation of M2d macrophages[END_REF] (Fig. 10). This subset can be induced by treating M0 macrophages with toll-like receptor agonists that activate the adenosine A2 receptor [START_REF] Pinhal-Enfield | An angiogenic switch in macrophages involving synergy between Toll-like receptors 2, 4, 7, and 9 and adenosine A(2A) receptors[END_REF]. Activation of the A2 receptor downregulates the secretion of inflammatory cytokines like TNF-α, IL-1β and IFN-γ, and also induces a unique proangiogenic activity in M2d macrophages, associated with the production of vascular endothelial growth factor (VEGF), NO and IL-10 [START_REF] Leibovich | Synergistic up-regulation of vascular endothelial growth factor expression in murine macrophages by adenosine A(2A) receptor agonists and endotoxin[END_REF][START_REF] Ferrante | The adenosinedependent angiogenic switch of macrophages to an M2-like phenotype is independent of interleukin-4 receptor alpha (IL-4Ra) signaling[END_REF]. Furthermore, treatment of M0 macrophages with haeme degradation products can result in polarization to three phenotypically distinct macrophages, HA-mac, M(Hb) and Mhem (Table 1), as described in the previous chapter (see chapter 1, section 1.2.2). Recently, an additional plaque-specific macrophage phenotype has been identified, termed Mox, which is discussed in detail below.
Mox macrophages associated with atherosclerosis
An additional macrophage phenotype is represented by oxidized phospholipid-derived macrophages (Mox) (Fig. 10). In particular, the Mox subpopulation has been described to arise from the accumulation of oxidized phospholipids in atherosclerotic lesions.
Mox macrophages show different gene expression patterns and biological functions compared to the M1 and M2 phenotypes. The expression of several genes in Mox macrophages is mediated by the redox-sensitive transcription factor Nrf2 [START_REF] Kadl | Identification of a novel macrophage phenotype that develops in response to atherogenic phospholipids via Nrf2[END_REF]. It has been demonstrated that oxidative modification of phospholipids is necessary for the activation of Nrf2-dependent gene expression. Nrf2 translocates to the nucleus and activates genes involved in the synthesis of antioxidant enzymes, including glutamate-cysteine ligase, the first enzyme of the cellular GSH biosynthetic pathway [START_REF] Kadl | Identification of a novel macrophage phenotype that develops in response to atherogenic phospholipids via Nrf2[END_REF]. In fact, an increased GSH/GSSG ratio has been described in Mox macrophages compared to the M1 and M2 phenotypes, suggesting that Mox macrophages are better able to cope with oxidative stress. Control of the redox status of macrophages by Nrf2 may be important in the regulation of several cellular functions that influence tissue homeostasis and inflammation. Hence, an oxidizing environment, such as that found in atherosclerotic lesions, induces the formation of a novel macrophage phenotype (Mox) that is characterized by Nrf2-dependent gene expression and may significantly contribute to pathologic processes in atherosclerotic vessels. Defective redox regulation may lead to exacerbated cell death, as seen in chronically inflamed tissue [START_REF] Kadl | Identification of a novel macrophage phenotype that develops in response to atherogenic phospholipids via Nrf2[END_REF].
Differential metabolism in GM-CSF and M-CSF differentiated macrophages
The clear metabolic differences existing between M1 (GM-CSF differentiation factor) and M2 (M-CSF differentiation factor) macrophages contribute to the shaping of their activation state [START_REF] Biswas | Orchestration of metabolism by macrophages[END_REF].
Arginine catabolism is the most studied metabolic pathway differentiating M1 and M2 macrophages [START_REF] El-Gayar | Translational control of inducible nitric oxide synthase by IL-13 and arginine availability in inflammatory macrophages[END_REF][START_REF] Modolell | Local suppression of T cell responses by arginase-induced L-arginine depletion in nonhealing leishmaniasis[END_REF]. L-arginine can be a substrate for either NOS, which produces L-citrulline and NO, or arginase 1 (Arg-1), which produces polyamines, L-ornithine and urea. M1-derived NO is a major effector molecule in macrophage-mediated cytotoxicity, playing an important role in controlling bacterial and parasitic infections, whilst Arg-1 expression is linked to the wound-healing actions of the M2 macrophage population [START_REF] El-Gayar | Translational control of inducible nitric oxide synthase by IL-13 and arginine availability in inflammatory macrophages[END_REF][START_REF] Modolell | Local suppression of T cell responses by arginase-induced L-arginine depletion in nonhealing leishmaniasis[END_REF].
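The two alternative fates of L-arginine can be summarized in the following simplified scheme (cofactor stoichiometry is omitted and only the main products are shown):

NOS: L-arginine + O2 + NADPH → L-citrulline + NO (predominant in M1)
Arg-1: L-arginine + H2O → L-ornithine + urea, with L-ornithine feeding polyamine synthesis (predominant in M2)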
M1 macrophages induce an anaerobic glycolytic pathway, which involves an increase in glucose uptake as well as the conversion of pyruvate to lactate [START_REF] Rodríguez-Prados | Substrate fate in activated macrophages: a comparison between innate, classic, and alternative activation[END_REF]. In parallel, the pentose phosphate pathway (PPP) is also induced following IFN-γ/LPS activation. This pathway generates NADPH for the NADPH oxidase, which is important for ROS production. Increased lactate formation and activation of the PPP after phagocytosis had already been observed in macrophages, suggesting the potential importance of an adapted metabolism for the activation cascade [START_REF] Schnyder | Role of phagocytosis in the activation of macrophages[END_REF]. Additionally, downregulation of the carbohydrate kinase-like protein (CARKL), which catalyzes the production of sedoheptulose-7-phosphate, is required for the development of an M1 phenotype [START_REF] Galván-Peña | Metabolic reprograming in macrophage polarization[END_REF]. Indeed, it has been observed that the drop in NADH levels in cells overexpressing CARKL during macrophage activation results in a redox shift. PPP activity contributes to the reduction of redox couples via NADPH. Hence, increased GSH and NADH generation is observed during M1 activation, while M2 activation results in an upregulation of CARKL that is not followed by increased GSH or NADH formation. These findings represent a CARKL-dependent functional distinction between the two polarization states. CARKL can therefore be considered a kinase orchestrating pro- and anti-inflammatory immune responses through metabolic control. On the other hand, fatty acid oxidation and oxidative metabolism are the preferential pathways in IL-4-activated macrophages [START_REF] Odegaard | Alternative macrophage activation and metabolism[END_REF]. IL-4 activates the transcription factor STAT6, which can trigger a pathway inducing mitochondrial respiration [START_REF] Galván-Peña | Metabolic reprograming in macrophage polarization[END_REF]. Upon activation, M2 macrophages can drive pyruvate into the Krebs cycle and can induce the expression of components of the electron transport chain. Increased glycolysis in M1-polarized macrophages allows microbicidal activity to be triggered quickly and helps these cells cope with a hypoxic tissue microenvironment. In contrast, oxidative glucose metabolism in M2-polarized macrophages provides sustained energy for tissue remodeling and repair (Table 7, see page 75).
Lipid metabolism in GM-CSF and M-CSF differentiated macrophages
Both GM-CSF and M-CSF differentiated macrophages (M1 and M2, respectively) are able to take up lipids in vitro [START_REF] Waldo | Heterogeneity of human macrophages in culture and in atherosclerotic plaques[END_REF][START_REF] Kazawa | Expression of liver X receptor alpha and lipid metabolism in granulocytemacrophage colony-stimulating factor-induced human monocyte-derived macrophage[END_REF] and in vivo [START_REF] Waldo | Heterogeneity of human macrophages in culture and in atherosclerotic plaques[END_REF]. This cholesterol load is accompanied by significant changes in the macrophage transcriptome. Currently, however, it is unclear whether the two types of macrophages differ in lipid handling. Only a few studies have focused on this subject, and the results so far are rather contradictory. In primary murine macrophages, the genetic profile of M-CSF differentiated macrophages pointed towards foam cell formation [START_REF] Plenz | Smooth muscle cells express granulocyte-macrophage colonystimulating factor in the undiseased and atherosclerotic human coronary artery[END_REF]. M-CSF upregulates enzymes involved in cholesterol biosynthesis and downregulates a transporter involved in cholesterol efflux, the ATP-binding cassette transporter G1 (ABCG1). Moreover, in human monocyte-derived macrophages, oxLDL accumulation was shown to be higher in M-CSF differentiated macrophages than in GM-CSF differentiated cells, and correlates with the upregulation of CD36 and SR-A, two membrane proteins involved in the uptake of modified lipids [START_REF] Van Tits | Oxidized LDL enhances proinflammatory responses of alternatively activated M2 macrophages: a crucial role for Kruppel-like factor 2[END_REF]. In particular, oxLDL loading of M-CSF differentiated macrophages causes a shift towards an M1-like phenotype via downregulation of the nuclear transcription factor Krüppel-like factor 2 (KLF-2). In agreement with this, the uptake of unmodified LDL was also found to be increased in M-CSF compared to GM-CSF differentiated human-derived macrophages [START_REF] Sierra-Filardi | Heme Oxygenase-1 expression in M-CSF-polarized M2 macrophages contributes to LPS-induced IL-10 release[END_REF]. In contrast, GM-CSF differentiated macrophages showed increased expression of several cholesterol efflux regulatory proteins, such as ABCG1, the ATP-binding cassette transporter A1 (ABCA1), peroxisome proliferator-activated receptor gamma (PPARγ) and liver X receptor alpha (LXRα), all involved in reducing foam cell formation. Moreover, based on CD68 and CD14 expression, Waldo et al. found lipid accumulation in both CD68+/CD14+ (M-CSF-like) and CD68+/CD14- (GM-CSF-like) cells in advanced human atherosclerotic lesions [START_REF] Waldo | Heterogeneity of human macrophages in culture and in atherosclerotic plaques[END_REF]. In contrast to these studies, Kazawa et al. [START_REF] Kazawa | Expression of liver X receptor alpha and lipid metabolism in granulocytemacrophage colony-stimulating factor-induced human monocyte-derived macrophage[END_REF] found an increased uptake and storage of oxLDL in human GM-CSF differentiated macrophages, although they did find ABCA1 and LXRα to be upregulated. Future studies are therefore necessary to further characterize the difference in lipid handling between these two macrophage subtypes and to determine the contribution of GM-CSF and M-CSF differentiated macrophages to in vivo foam cell formation during atherosclerosis progression.
Macrophages recognize modified lipoproteins through Toll-like receptors (TLR2 and TLR4). The scavenger receptor CD36, known to serve as a TLR co-receptor, binds oxLDL and triggers the assembly of certain TLR heterodimers, leading to the induction of pro-inflammatory mediators implicated in the deleterious effects of oxLDL [START_REF] Stewart | CD36 ligands promote sterile inflammation through assembly of a Toll-like receptor 4 and 6 heterodimer[END_REF]. The activation of TLR heterodimers, regulated by signals from CD36, activates pro-inflammatory signaling pathways involving NFκB, MAP kinases and ROS-dependent signaling, with an overall switch of the macrophage phenotype towards M1. In addition, oxLDL-mediated stimulation of CD36 leads to activation of the inflammasome, which further aggravates vascular inflammation [START_REF] Jiang | Oxidized low-density lipoprotein induces secretion of interleukin-1β by macrophages via reactive oxygen species-dependent NLRP3 inflammasome activation[END_REF][START_REF] Sheedy | CD36 coordinates NLRP3 inflammasome activation by facilitating intracellular nucleation of soluble ligands into particulate ligands in sterile inflammation[END_REF] (Table 7, see page 75).
Iron metabolism in GM-CSF and M-CSF differentiated macrophages
Macrophages play an important role in iron homeostasis by recycling iron through phagocytosis of senescent red blood cells and their polarization is associated with differential regulation of iron metabolism [START_REF] Cairo | Iron trafficking and metabolism in macrophages: contribution to the polarized phenotype[END_REF]. On the other hand, iron can be used by bacteria for proliferation, virulence, and persistence [START_REF] Nairz | The struggle for iron -a metal at the host-pathogen interface[END_REF].
M1 macrophages show a metabolic profile that favors iron retention, characterized by low heme uptake, high ferritin (iron storage) and low ferroportin (iron export). In this way, M1 macrophages reduce the labile iron pool, the metabolically active fraction of cytosolic iron that is available for metabolic purposes [START_REF] Muraille | TH1/TH2 paradigm extended: macrophage polarization as an unappreciated pathogen-driven escape mechanism[END_REF]. Sequestration of iron by M1 macrophages would therefore have a bacteriostatic and tumoristatic effect. By contrast, M2 macrophages show sustained heme uptake as well as reduced iron storage and enhanced iron release (low ferritin and high ferroportin) [START_REF] Cairo | Iron trafficking and metabolism in macrophages: contribution to the polarized phenotype[END_REF]. Iron release from M2 macrophages would favor tissue repair and cell proliferation (Table 7, see page 75).
2.1.7 Role of glutathione and reactive oxygen species in macrophage polarization
Glutathione and M1/M2 macrophages
In recent years, GSH has been described to play an important role in the immune response by influencing the activity of both lymphocytes and macrophages [START_REF] Guerra | Glutathione and adaptive immune responses against Mycobacterium tuberculosis infection in healthy and HIV infected individuals[END_REF][START_REF] Short | Defective antigen processing correlates with a low level of intracellular glutathione[END_REF][START_REF] Peterson | Glutathione levels in antigen-presenting cells modulate Th1 versus Th2 response patterns[END_REF]. Many in vitro and in vivo studies have demonstrated that GSH depletion in antigen-presenting cells, such as macrophages, correlates with defective antigen processing and inhibited Th1-associated cytokine production. In murine studies, GSH depletion in antigen-presenting cells decreases the secretion of IL-12, known to regulate IFN-γ production, and leads to a switch from the typical Th1 cytokine profile towards Th2 response patterns [START_REF] Short | Defective antigen processing correlates with a low level of intracellular glutathione[END_REF][START_REF] Murata | The polarization of T(h)1/T(h)2 balance is dependent on the intracellular thiol redox status of macrophages due to the distinctive cytokine production[END_REF].
Hence, M1 cells have a higher ratio of reduced to oxidized glutathione (GSH/GSSG), with opposite effects of IFN-γ and IL-4 on the reductive status [START_REF] Dobashi | Regulation of LPS induced IL-12 production by IFN-gamma and IL-4 through intracellular glutathione status in human alveolar macrophages[END_REF]. It is possible to regulate the amount of secreted IL-12 by modulating the intracellular GSH/GSSG balance, and an IL-12-low M2-like phenotype can be changed into an IL-12-high M1-like phenotype. This has been demonstrated in a mouse asthma model, where the release of Th2 cytokines orchestrates the recruitment and activation of the primary effector cells of the allergic response [START_REF] Koike | Glutathione redox regulates airway hyperresponsiveness and airway inflammation in mice[END_REF]. Infection by intracellular pathogens, such as Mycobacterium tuberculosis, provides an interesting example of an intra-macrophage pathogen able to modulate T cell responses by altering the intracellular redox state. In fact, the Th1 immune response is down-regulated in patients with active tuberculosis infection [START_REF] Balikó | Th2 biased immune response in cases with active Mycobacterium tuberculosis infection and tuberculin anergy[END_REF], who have an altered glutathione balance [START_REF] Venketaraman | Glutathione levels and immune responses in tuberculosis patients[END_REF]. Alam et al. demonstrated that IL-12 induction in naïve macrophages is controlled directly by the intracellular glutathione redox state, and that manipulation of the macrophage redox state by N-acetylcysteine (NAC), a GSH precursor, can influence the in vitro cellular immune response to pathogens of peripheral blood mononuclear cells obtained from patients with active tuberculosis [START_REF] Alam | Glutathione-redox balance regulates crel-driven IL-12 production in macrophages: possible implications in antituberculosis immunotherapy[END_REF]. These data support the critical importance of the glutathione redox status in regulating the induction of IL-12, which activates the Th1 T cell immune response crucial for protection against intracellular pathogens (e.g. Mycobacterium tuberculosis). It is therefore possible that these pathogens, by modulating the GSH/GSSG balance in macrophages, transform these cells into M2-like macrophages characterized by low IL-12 production, thus polarizing the immune environment in their favor; indeed, M1 macrophages display microbicidal activity against a wide range of intracellular parasites. A low GSH/GSSG ratio characterizes an M2-like phenotype in both murine and human macrophages, and GSH replenishment may represent a strategy for repolarization in both species [START_REF] Short | Defective antigen processing correlates with a low level of intracellular glutathione[END_REF][START_REF] Peterson | Glutathione levels in antigen-presenting cells modulate Th1 versus Th2 response patterns[END_REF] (Table 7, see page 75). Hence, it may be suggested that a low macrophage GSH level provides a consistent biomarker for human and murine M2 macrophages.
Reactive oxygen species and M1/M2 macrophages
Redox signalling is implicated in macrophage polarization, and the key roles of M1 and M2 macrophages in the tissue environment provide a clue to the differing ROS abundance in each phenotype.
As previously described, M1 macrophages are mainly responsible for clearing pathogens, and ROS may be crucial for the regulation of the M1 phenotype, whereas M2 macrophages resolve inflammation, which favours oxidative metabolism. How ROS contribute to maintaining the homeostatic functions of macrophages, and in particular to macrophage polarization, is therefore not completely understood. The regulation may appear contradictory, but the distinct roles of ROS in macrophages are likely shaped by the different sources of ROS as well as by the plasticity of the macrophage itself.
As macrophages are endogenous scavengers of dying cells in various pathological conditions, the interaction between macrophages and dying cells determines their phagocytic function. Dying cells produce high levels of ROS, which are released into the extracellular space when the cellular membrane is degraded during cell death. Attachment of dying cells to macrophages requires intercellular communication, in which ROS may play a role. On the other hand, extracellular and intracellular ROS may differentially control the phagocytic process by regulating the ability and capacity of macrophages to take up and degrade dying cell material. In this regard, ROS play a critical regulatory role in determining the initiation and outcome of cellular phagocytosis.
As mentioned above, M1 macrophages possess high bactericidal activity, as defense against invading pathogens is their primary function. To clear the site of injury, M1 macrophages trigger a bactericidal response involving the production of ROS and NO upon contact with pathogens.
The phagocytic function of M1 macrophages mainly depends on the Nox2 gene. The production of ROS and NO derives from NADPH oxidase and nitric oxide synthase; both enzymes consume NADPH, which is generated through the pentose phosphate pathway [START_REF] Ghesquière | Metabolism of stromal and immune cells in health and disease[END_REF], and Nox2 negatively regulates phagosomal proteolysis [START_REF] Rybicka | NADPH oxidase activity controls phagosomal proteolysis inmacrophages throughmodulation of the lumenal redox environment of phagosomes[END_REF]. The effective microbicidal function of M1 macrophages requires continuous production of ROS followed by delayed maturation of phagosomes. In vitro stimulation of M1 macrophages with LPS promotes recognition by TLRs, the primary LPS receptors; occasionally, LPS binds to the phagocytic macrophage-1 antigen (MAC1) receptor independently of TLRs. The association of LPS with its receptors drives ROS production and alterations in gene expression. LPS-induced ROS generation is Nox-dependent and further supports ROS-induced TNF-α production, as evidenced by the reduced TNF-α in the PHOX-/- mouse model [START_REF] Qin | NADPH oxidase mediates lipopolysaccharide-induced neurotoxicity and proinflammatory gene expression in activated microglia[END_REF]. In addition, ROS may serve as second messengers in LPS-induced signal transduction, facilitating the regulation of downstream pathways such as mitogen-activated protein kinase (MAPK) and NF-κB. Activation of these pathways by H2O2 promotes the expression of pro-inflammatory genes (Table 7, see page 75). Moreover, M1 macrophage activation is typically correlated with upregulation of the TNF-α-mediated inflammatory response.
TNF-α signalling is thought to depend on the interaction of TNF with its receptors, which again triggers downstream MAPK and IκB-kinase (IKK) signalling and thereby activates NF-κB [START_REF] Kohchi | ROS and innate immunity[END_REF]. In this context, H2O2 tends to accumulate in NF-κB-deficient cells exposed to TNF; it oxidizes the catalytic cysteine of MAPK phosphatases and triggers activation of MAPK cascades, including JNK and p38 MAPK. Excessive H2O2 also promotes IKK activation and drives tyrosine phosphorylation of IκBα, leading to stimulation of NF-κB signalling [START_REF] Takada | Hydrogen peroxide activates NF-𝜅B through tyrosine phosphorylation of I𝜅B𝛼 and serine phosphorylation of p65: evidence for the involvement of I𝜅B𝛼 kinase and Syk protein-tyrosine kinase[END_REF]. Given that ROS are closely related to the activation of MAPK and NF-κB, they may partially regulate macrophage polarization towards M1.
In contrast to M1 macrophages, M2 activation stimulates increased arginase-1 activity and is accompanied by reduced ROS and NO generation. The tissue remodeling and wound healing functions of M2 macrophages are attributed to increased expression of cathepsin S and cathepsin L together with reduced Nox2 activity, which improve the phagosomal proteolytic capacity of M2 (IL-4) macrophages. Reduced Nox2 also improves the wound healing function of M2 through enhanced degradation of disulphide proteins [START_REF] Balce | Alternative activation of macrophages by IL-4 enhances the proteolytic capacity of their phagosomes through synergistic mechanisms[END_REF]. Furthermore, the interaction between M2 macrophages and apoptotic bodies triggers destabilization of NADPH oxidase Nox2 mRNA by blocking the binding of an RNA-binding protein (e.g. SYNCRIP) to the Nox2 3'-UTR. This further impairs ROS production and promotes M2 macrophage polarization [START_REF] Kuchler | SYNCRIP-dependent Nox2 mRNA destabilization impairs ROS formation in M2-polarized macrophages[END_REF] (Table 7, see page 75).
Moreover, although the impact of redox signalling or ROS production on M1 macrophage activation seems to interfere with M2 macrophage priming, a study by Zhang et al. postulated that ROS production is also important for M2 macrophage differentiation. Treatment with the antioxidant butylated hydroxyanisole (BHA), which inhibits Nox-mediated O2-• production, before M-CSF-driven differentiation blocked monocyte differentiation into M2 macrophages, suggesting that ROS are implicated in the early stage of M2 macrophage differentiation [START_REF] Zhang | ROS play a critical role in the differentiation of alternatively activated macrophages and the occurrence of tumor-associated macrophages[END_REF].
In summary, ROS have been shown to be involved in the functional and phenotypic regulation of macrophages. They can control the cell death, proliferation, motility and phagocytic ability of macrophages. It has recently been observed that ROS may play a complicated role in regulating macrophage polarization, which can be summarized as follows:
- Involvement of ROS in regulating M1 macrophages supports phagocytic activity and the inflammatory response: multiple pathways are involved in generating NADPH, followed by Nox-dependent ROS production; the high ROS level mediates the phagocytic activity of M1 macrophages. In addition, ROS serve as second messengers mediating the inflammatory response of M1 macrophages, primarily through MAPK and NF-κB signalling as well as inflammasome activation.
- Involvement of ROS in regulating M2 macrophages supports inflammation resolution and wound healing: in contrast to M1, multiple pathways reduce NADPH availability and Nox activity, leading to reduced ROS generation. The low ROS level is accompanied by reduced inflammatory mediators, increased expression of M2-regulated genes responsible for inflammation resolution, and increased disulphide protein degradation, which enhances the wound healing effect of M2 macrophages.
Specific phenotypic markers in M1 and M2 macrophages
The concept of polarization has been confirmed by the clear differences, described above, in cytokine production, NO metabolism and phagocytosis [START_REF] Mantovani | The chemokine system in diverse forms of macrophage activation and polarization[END_REF][START_REF] Martinez | Macrophage activation and polarization[END_REF][START_REF] Gordon | Monocyte and macrophage heterogeneity[END_REF][START_REF] Martinez | Alternative activation of macrophages: an immunologic functional perspective[END_REF][START_REF] Mosser | Exploring the full spectrum of macrophage activation[END_REF] and in transcriptional profiles [START_REF] Ghassabeh | Identification of a common gene signature for type II cytokine-associated myeloid cells elicited in vivo in different pathologic conditions[END_REF][START_REF] Lang | Shaping gene expression in activated and resting primary macrophages by IL-10[END_REF][START_REF] Martinez | Transcriptional profiling of the human monocyte-tomacrophage differentiation and polarization: new molecules and patterns of gene expression[END_REF]. Macrophage polarization is also accompanied by specific changes in cell morphology and phenotype, as described above [START_REF] Mantovani | The chemokine system in diverse forms of macrophage activation and polarization[END_REF][START_REF] Martinez | Macrophage activation and polarization[END_REF][START_REF] Gordon | Monocyte and macrophage heterogeneity[END_REF][START_REF] Martinez | Alternative activation of macrophages: an immunologic functional perspective[END_REF][START_REF] Mosser | Exploring the full spectrum of macrophage activation[END_REF]. The emerging concept that macrophage functions are, at least in part, determined by the polarization status of the cells raises the question whether these distinct polarized macrophage subsets can be identified by specific phenotypic markers. Additionally, studies focusing on macrophage polarization are mostly performed in vitro and do not reflect the complexity of immune responses observed in vivo [START_REF] Sica | Macrophage plasticity and polarization: in vivo veritas[END_REF][START_REF] Mantovani | Macrophage plasticity and polarization in tissue repair and remodelling[END_REF]. For these reasons, the validation of specific phenotypic markers is highly relevant for further investigation of different aspects of human macrophage biology.
M1 and M2 macrophages are distinguished by the differential expression of several molecules, e.g. iNOS, MMPs and arginase [START_REF] Wynn | Macrophages: master regulators of inflammation and fibrosis[END_REF]. None of these antigens is suitable as a single marker to identify polarized macrophages. According to the literature, CD80, CD14 and CD68 (a surface receptor for LDL) are phenotypic markers for M1-like macrophages [START_REF] Martinez | The M1 and M2 paradigm of macrophage activation: time for reassessment[END_REF]. The mannose receptor (CD206) and the scavenger receptor CD163, mainly found in advanced human lesions near intraplaque hemorrhage areas [START_REF] Boyle | Coronary intraplaque hemorrhage evokes a novel atheroprotective macrophage phenotype[END_REF], and whose expression is enhanced by IL-4 [START_REF] Stein | Interleukin-4 potently enhances murine macrophage mannose receptor activity: a marker of alternative immunologic macrophage activation[END_REF][START_REF] Chroneos | Differential regulation of the mannose and SP-A receptors on macrophages[END_REF] and IL-10 [START_REF] Högger | Identification of the integral membrane protein RM3/1 on human monocytes as a glucocorticoid-inducible member of the scavenger receptor cysteine-rich family (CD163)[END_REF], respectively, are specific markers for M2-like macrophages. However, cell markers alone do not fully define the overall subpopulations of macrophages [START_REF] Geissmann | Unravelling mononuclear phagocyte heterogeneity[END_REF]. Indeed, the same cell markers are also expressed by other cell types: for instance, increased levels of CD163 are found in the microglia of Alzheimer's disease patients, as well as in the frontal and occipital cortices and brainstems of Parkinson's disease patients [START_REF] Pey | Phenotypic profile of alternative activation marker CD163 is different in Alzheimer's and Parkinson's disease[END_REF]. Moreover, several results in the literature show that CD68 (clone PG-M1) may also identify dendritic cell subsets [START_REF] Vakkila | A basis for distinguishing cultured dendritic cells and macrophages in cytospins and fixed sections[END_REF].
In summary, a combination of markers is required to identify macrophage subsets [START_REF] Barros | Macrophage polarisation: an immunohistochemical approach for identifying M1 and M2 macrophages[END_REF].
CD68 or CD163 can be used to identify M1-polarized macrophages when combined with signal transducer and activator of transcription 1 (pSTAT1) or recombination signal binding protein for immunoglobulin kappa J region (RBP-J), while the same markers combined with c-MAF (Maf transcription factor), an essential transcription factor for IL-10 gene expression in macrophages [START_REF] Cao | The protooncogene c-Maf is an essential transcription factor for IL-10 gene expression in macrophages[END_REF], identify M2 macrophages (Table 7, see page 75). This observation supports the notion that macrophage polarization is a dynamic process, and the use of subset-specific phenotypic markers may open a new avenue for in vitro functional studies as well as a more accurate characterization of macrophage infiltration in a variety of immune-mediated inflammatory diseases.
Finally, the findings considered above also suggest a functional plasticity of macrophages, which are capable of changing phenotype in response to microenvironmental stimuli in the plaque (e.g. cholesterol, oxLDL, oxidized phospholipids, chemokines, cytokines). Basically, two initial levels of regulation can be distinguished [START_REF] Gordon | Alternative activation of macrophages: mechanism and functions[END_REF]: i) Factors that drive monocyte-to-macrophage differentiation, such as M-CSF, GM-CSF and CXCL4;
ii) Cell-derived cytokines that drive subsequent polarization of macrophages and highly influence their inflammatory phenotype.
However, it is unclear whether different phenotypes coexist within the same location. Existing data suggest that plaque macrophages express features intermediate between the M1 and M2 phenotype extremes [START_REF] Leitinger | Phenotypic polarization of macrophages in atherosclerosis[END_REF]. M0, the intermediate state, provides atherosclerotic macrophages with the dynamic plasticity to activate or downregulate an overlapping set of transcription factors in response to extrinsic signals, and to switch phenotype to adjust to environmental changes such as lipid and ROS levels.
Consequently, discriminating plaque macrophages solely as M1 or M2 is too simplistic. Studies identifying the presence of diverse macrophage subsets in plaques, followed by examination of their phenotype and function in vivo and in vitro, can eventually be coupled back to the in vivo situation and in this way teach us more about the role of different macrophage subsets in the atherosclerotic process. Until now, detailed information about the role and function of the macrophage subsets identified so far in atherosclerotic processes remains minimal and is mainly based on in vitro observations. Furthermore, as detailed above, the field is still in need of a well-characterized panel of markers to identify subsets and correlate them to plaque phenotype characteristics. In any case, growing evidence indicates that understanding the mechanisms of macrophage plasticity and resolving the functional characteristics of distinct macrophage phenotypes should help in the development of new strategies for the treatment of chronic inflammation in cardiovascular disease.
[Figure: schematic overview of macrophage polarization, contrasting pro-inflammatory (M1) with anti-inflammatory & wound healing (M2) functions. For abbreviations, see list of abbreviations.]
Macrophage lineage cells: a source of GGT in the atherosclerotic plaque?
Considering all the background above, one of the objectives of my experimental work has focused on the relation between monocyte/macrophage lineage cells and GGT, an independent risk factor for cardiovascular mortality related to atherosclerotic disease [START_REF] Ruttmann | γ-Glutamyltransferase as a risk factor for cardiovascular disease mortality. An investigation in a cohort of 163,944 Austrian adults[END_REF] (see chapter 1, section 1.5.3). Indeed, previous results indicate that neutrophils contain GGT in their granules and release it upon activation [START_REF] Corti | Contribution by polymorphonucleate granulocytes to elevated gamma-glutamyltransferase in cystic fibrosis sputum[END_REF].
Importantly, human mononuclear cells have long been known to possess GGT activity [START_REF] Khalaf | Cytochemistry of gamma-glutamyltransferase in haemic cells and malignancies[END_REF][START_REF] Hultberg | L-gamma-glutamyl transpeptidase activity in normal and leukemic leukocytes[END_REF], and the enzyme is also present in GGT-positive CD68+ macrophage-derived foam cells in the intimal layers of human atherosclerotic plaques [12,[START_REF] Emdin | Serum gamma-glutamyltransferase as a risk factor of ischemic stroke might be independent of alcohol consumption[END_REF]. In addition, GGT can be found in macromolecular complexes of different sizes [START_REF] Franzini | A high performance gel filtration chromatography method for gamma-glutamyltransferase fraction analysis[END_REF]. Only the big GGT (b-GGT, > 2000 kDa) fraction accumulates in atherosclerotic plaques, where it correlates with other histological markers of plaque vulnerability (such as high macrophage infiltration) as well as with plasma b-GGT levels [15,[START_REF] Angeli | A kinetic study of gamma-glutamyltransferase (GGT)-mediated S-nitrosoglutathione catabolism[END_REF][START_REF] Khalaf | Cytochemistry of gamma-glutamyltransferase in haemic cells and malignancies[END_REF][START_REF] Wickham | Gamma-glutamyl compounds: Substrate specificity of gamma-glutamyl transpeptidase enzymes[END_REF][START_REF] Dominici | Pro-oxidant reactions promoted by soluble and cell-bound gamma-glutamyltransferase activity[END_REF][START_REF] Dominici | Possible role of membrane gamma-glutamyltransferase activity in the facilitation of transferrin-dependent and -independent iron uptake by cancer cells[END_REF][START_REF] Paolicchi | Gamma-glutamyl transpeptidase-dependent iron reduction and LDL oxidation-a potential mechanism in atherosclerosis[END_REF][START_REF] Emdin | Serum gamma-glutamyltransferase as a risk factor of ischemic stroke might be independent of alcohol consumption[END_REF][START_REF] Hultberg | L-gamma-glutamyl transpeptidase activity in normal and leukemic leukocytes[END_REF][START_REF] Pang | Increased ferritin gene expression in atherosclerotic lesions[END_REF][START_REF] Pucci | b-Gamma-glutamyltransferase activity in human vulnerable carotid plaques[END_REF].
These pieces of evidence have prompted the need to characterize the exact origin of the GGT accumulating inside atherosclerotic plaques, in order to better understand its role in plaque instability and to assess whether monocytes/macrophages may contribute to, and represent a source of, b-GGT within the atherosclerotic plaque.
Article 1
MONOCYTES/MACROPHAGES ACTIVATION CONTRIBUTES TO B-GAMMA-GLUTAMYLTRANSFERASE ACCUMULATION INSIDE ATHEROSCLEROTIC PLAQUES (Journal of Translational Medicine, 2015, 13:325)
Atherosclerotic lesions are characterized by an abundance of monocytes and macrophages, which can acquire different phenotypes and biological functions depending on the microenvironment and their metabolic state. Leukocyte recruitment plays a key role in the complex processes leading to atherosclerosis. In particular, monocytes invade atherosclerotic lesions and differentiate into macrophages. The heterogeneity of the macrophages detectable in atherosclerotic lesions has lately been a subject of great interest. Particular attention has been paid to the two main macrophage subpopulations involved in pro-inflammatory processes ("M1" phenotype) and in the resolution of inflammation and repair ("M2" phenotype).
Moreover, gamma-glutamyltransferase (GGT) is a well-established independent risk factor for cardiovascular mortality related to atherosclerotic disease. Four GGT complexes, ranging from free GGT (f-GGT, 70 kDa) to a 2000 kDa macromolecular complex (b-GGT), have been identified in plasma. However, only b-GGT accumulates in atherosclerotic plaques, in parallel with the appearance of other histological markers of vulnerability. The present study aimed to assess whether monocytes and/or the two types of pro- or anti-inflammatory macrophages (M1 and M2) may represent a source of b-GGT within the atherosclerotic plaque. GGT expression and release were studied in human monocytes isolated from the peripheral blood of healthy donors.
The growth factors GM-CSF and M-CSF were used to induce differentiation of monocytes into M1 and M2 macrophages, respectively, while plaque GGT was studied in tissue samples obtained from patients undergoing carotid endarterectomy. The results show that M1-type macrophages express higher levels of GGT than M2-type macrophages and monocytes. Moreover, M1-type macrophages, but not M2, are able to release the b-GGT fraction after activation with pro-inflammatory stimuli. Western blot analysis of b-GGT extracted from plaques confirmed the presence of GGT in parallel with the presence of macrophages.
These results indicate that macrophages characterized by a pro-inflammatory phenotype may contribute to the intra-plaque accumulation of b-GGT, which in turn may play a role in the progression of atherosclerosis by modulating inflammatory processes and promoting plaque instability.
2.4 Supplementary study: Monocyte/macrophage differentiation and oxidative stress markers.
As described in the previous chapter, M1 and M2 macrophages show opposite inflammatory profiles and are both present in atherosclerotic plaques. However, even though the pathogenesis of atherosclerosis is driven by inflammation and oxidative stress, little is known about the redox status of M1 and M2. Redox status mainly concerns the production of ROS and reactive nitrogen species (RNS), antioxidant enzyme activities and glutathione concentration. Evidence in the literature has shown that pro-inflammatory M1 macrophages produce high, bactericidal concentrations of NO, an RNS, resulting from increased expression of inducible nitric oxide synthase (iNOS). They also secrete large quantities of a major pro-inflammatory cytokine, IL-1β [START_REF] Gordon | Monocyte and macrophage heterogeneity[END_REF], and of ROS [START_REF] Rees | Monocyte and macrophage biology: an overview[END_REF]. In contrast, M2 macrophages are characterized by an anti-inflammatory phenotype and show elevated activity of Arg-1, which is essential for quenching the inflammatory state [START_REF] Varin | Alternative activation of macrophages: immune function and cellular biology[END_REF].
If pro-inflammatory M1 macrophages produce larger amounts of ROS than M2, they should also bear higher antioxidant defenses.
To our knowledge, few studies have linked macrophage differentiation with redox status (production of pro-oxidant and antioxidant molecules). Thus, the aim of this section is to evaluate the redox profile of macrophage populations, with particular regard to M1 and M2 in comparison to monocytes. In order to better understand the complex macrophage differentiation and activation processes, selected oxidative and nitrosative stress markers and antioxidant enzyme activities of these populations have been considered.
Main results
Markers of differentiation
Human monocytes from the peripheral blood of healthy donors were stimulated for 6 days with two different growth factors, GM-CSF (50 ng/ml) and M-CSF (50 ng/ml), to induce differentiation into M1 or M2 macrophages, respectively [START_REF] Belcastro | Monocytes/macrophages activation contributes to b-gamma-glutamyltransferase accumulation inside atherosclerotic plaques[END_REF][START_REF] Rey-Giraud | In Vitro generation of monocyte-derived macrophages under serumfree conditions improves their tumor promoting functions[END_REF]. Macrophage differentiation was characterized by immunofluorescence, evaluating specific cell membrane markers such as CD32 for M1 and CD163 for M2 [START_REF] Lau | CD163: a specific marker of macrophages in paraffin-embedded tissue samples[END_REF][START_REF] Barros | Macrophage polarisation: an immunohistochemical approach for identifying M1 and M2 macrophages[END_REF][START_REF] Graversen | Drug Trafficking into Macrophages via the Endocytotic Receptor CD163[END_REF][START_REF] Cho | The Phenotype of Infiltrating Macrophages Influences Arteriosclerotic Plaque Vulnerability in the Carotid Artery[END_REF]. As shown in Figure 1, M1 are positive for their specific marker CD32 and M2 show specificity for the CD163 antigen.
Production of free radicals
Nitric oxide, a generic macrophage marker, was evaluated in the intracellular and extracellular compartments of monocytes, M1 and M2 macrophages through nitrite quantification using the Griess reaction. The extracellular NO concentration was below the detection limit of the Griess reaction for monocytes, M1 and M2 macrophages. In the intracellular compartment, M1 and M2 macrophages showed NO quantities between 4 and 5 mmol/mg of proteins, whereas intracellular NO in monocytes was below the detection limit (Fig. 2). This result is further proof of monocyte differentiation into macrophages. However, even though M1 are pro-inflammatory and M2 anti-inflammatory macrophages, no difference in the intracellular nitrite amount was seen between M1 and M2. Based on several lines of evidence in the literature supporting the critical role of ROS in the functioning of monocytes and macrophages [START_REF] Zhang | ROS play a critical role in the differentiation of alternatively activated macrophages and the occurrence of tumor-associated macrophages[END_REF][START_REF] Covarrubias | ROS sets the stage for macrophage differentiation[END_REF], intracellular and extracellular ROS were measured via the oxidation of 2',7'-dichlorodihydrofluorescein (DCF). As shown in Figure 3, monocytes, M1 and M2 macrophages produced ROS in both the intracellular and extracellular compartments. The level of ROS produced was lower in the extracellular compartment than in the intracellular compartment.
However, no statistically significant differences were seen between monocytes, M1 or M2. These results indicate that the basal level of ROS is the same for the three types of immune cells, but they do not predict ROS production when the cells are stimulated.
[Fig. 2: intracellular nitrite ions. Fig. 3: intracellular and extracellular ROS. The amount of ROS was expressed as a ratio of total protein mass. Data are shown as mean ± SEM, n=5 (monocytes) and n=10 (M1 and M2); one-way ANOVA test.]
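Although the calibration details are not given here, nitrite quantification by the Griess reaction is conventionally performed against a sodium nitrite standard curve and then normalized to protein, as done for the data above. The following minimal Python sketch illustrates this interpolation; all numerical values are hypothetical and are not taken from this work.

```python
import numpy as np

# Hypothetical sodium nitrite standard curve (absorbance after the
# Griess reaction); none of these numbers come from the thesis.
std_conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # µM NaNO2
std_abs = np.array([0.02, 0.07, 0.12, 0.27, 0.52, 1.01])   # absorbance

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear fit: A = m*C + b

def nitrite_uM(sample_abs):
    """Interpolate a sample nitrite concentration (µM) from the curve."""
    return (sample_abs - intercept) / slope

# Normalization to protein content (µM * mL = nmol, so the result is
# in nmol nitrite per mg protein); volumes and protein are hypothetical.
well_volume_ml, protein_mg = 0.1, 0.05
print(nitrite_uM(0.35) * well_volume_ml / protein_mg)
```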
Antioxidant defenses
Glutathione is the major intracellular antioxidant and partly reflects the redox status of the cells.
Intracellular GSH was quantified with the 2,3-naphthalene dicarboxaldehyde probe. M1 and M2 showed a significant increase in intracellular GSH content compared to monocytes (Fig. 4), although the intracellular quantity of GSH was the same in M1 and M2 macrophages. This increased GSH concentration could indicate that macrophages have larger antioxidant defenses than monocytes. GSH homeostasis in the cell is regulated by redox enzymes, such as glutamate-cysteine ligase and glutathione synthase for its synthesis and GGT for its catabolism. GGT activity has already been evaluated in Article 1. As our focus is on the redox status of the cells, the de novo synthesis of GSH was not investigated. We therefore focused on glutathione peroxidase (GPx), an antioxidant enzyme that detoxifies ROS, mainly H2O2, through the oxidation of two GSH into GSSG, and then on glutathione reductase (GR), the critical enzyme for the reduction of one GSSG back to two GSH molecules. Moreover, GSH is also able to react with critical Cys residues in proteins, forming mixed disulfides (GSSR), a process that can be reverted by glutathione S-transferase (GST). More details on cellular antioxidant defenses will be given in chapter III.
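For clarity, the reactions catalysed by these three enzymes can be summarized by their standard textbook stoichiometry (with CDNB as the GST substrate used in the assay below):

$$\begin{aligned}
\text{GPx:}\quad & 2\,\mathrm{GSH} + \mathrm{H_2O_2} \;\longrightarrow\; \mathrm{GSSG} + 2\,\mathrm{H_2O}\\
\text{GR:}\quad & \mathrm{GSSG} + \mathrm{NADPH} + \mathrm{H^+} \;\longrightarrow\; 2\,\mathrm{GSH} + \mathrm{NADP^+}\\
\text{GST:}\quad & \mathrm{GSH} + \mathrm{CDNB} \;\longrightarrow\; \mathrm{GS\text{-}DNB} + \mathrm{HCl}
\end{aligned}$$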
Therefore, in order to understand the changes in thiol status, the GSH-dependent enzyme activities were evaluated in monocytes, M1 and M2 lysed in Tris-HCl buffer (10 mM, pH 7.8) (Fig. 5), as follows:
-GPx activity was evaluated using the Paglia and Valentine method, with minor modifications [START_REF] Paglia | Studies on the quantitative and qualitative characterization of erythrocyte glutathione peroxidase[END_REF].
Briefly, cell lysate (100 µL) was mixed with 0.1 M potassium phosphate buffer, pH 7, in the presence of 1 mM EDTA, 1 mM sodium azide, 1 mM reduced glutathione, 2 U/mL glutathione reductase and 0.2 mM NADPH. The enzymatic reaction was initiated by the addition of 0.1 mL of 1.5 M cumene hydroperoxide. The oxidation of NADPH was followed spectrophotometrically at 340 nm between 3 and 5 min after initiation of the reaction.
- GR activity was measured using 100 µL of cell lysate mixed with 0.06 M sodium phosphate buffer, pH 7.6, containing 3 mM EDTA and 2 mg/mL bovine serum albumin, supplemented with 3.25 mM GSSG and 0.1 mM NADPH. The oxidation of NADPH was followed spectrophotometrically at 340 nm between 3 and 5 min after initiation of the reaction [START_REF] Massey | On the reaction mechanism of yeast glutathione reductase[END_REF].
- GST activity was measured using 10 µL of cell lysate mixed with 0.1 M potassium phosphate buffer, pH 6.5, in the presence of 1 mM reduced glutathione and 1 mM 1-chloro-2,4-dinitrobenzene (CDNB), which forms GS-dinitrobenzene (GS-DNB). The absorbance of the GS-DNB adduct was followed spectrophotometrically at 340 nm between 3 and 5 min after initiation of the reaction [START_REF] Habig | Glutathione S-transferases. The first enzymatic step in mercapturic acid formation[END_REF].
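All three assays reduce to the same Beer-Lambert calculation: the slope of A340 over time is converted into a substrate conversion rate and normalized to protein. A minimal Python sketch follows, assuming a 1 cm path length and the standard 340 nm extinction coefficients (6.22 mM⁻¹ cm⁻¹ for NADPH, 9.6 mM⁻¹ cm⁻¹ for the GS-DNB adduct); the slope, volumes and protein values below are hypothetical, not taken from the protocols above.

```python
# Converting a spectrophotometric slope (dA340/min) into a specific enzyme
# activity, as required by the GPx, GR and GST assays described above.
EXT_COEFF_MM = {"NADPH": 6.22, "GS-DNB": 9.6}  # mM^-1 cm^-1 at 340 nm

def specific_activity(dA_per_min, chromophore, assay_volume_ml,
                      sample_volume_ml, protein_mg_per_ml, path_cm=1.0):
    """Return specific activity in nmol/min/mg protein (i.e. mU/mg)."""
    # Beer-Lambert: dC (mM/min) = dA / (epsilon * path)
    dc_mM_per_min = abs(dA_per_min) / (EXT_COEFF_MM[chromophore] * path_cm)
    # mM * mL = µmol, hence *1000 to obtain nmol converted per minute
    nmol_per_min = dc_mM_per_min * assay_volume_ml * 1000.0
    # normalize to the amount of protein added with the lysate
    return nmol_per_min / (protein_mg_per_ml * sample_volume_ml)

# Example: GPx assay, 100 µL lysate (2 mg/mL protein) in a 1 mL reaction,
# A340 decreasing by 0.012 per min -> about 9.6 nmol/min/mg (hypothetical).
print(specific_activity(-0.012, "NADPH", 1.0, 0.1, 2.0))
```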
As shown in Figure 5, activities of all the investigated enzymes were detected in monocytes and macrophages. GPx activity was significantly higher in the anti-inflammatory M2 than in M1 and monocytes (Fig. 5A), whereas GR activity was highest in the pro-inflammatory M1 compared to M2 (Fig. 5B). Finally, GST activity was much lower in both macrophage types than in monocytes (Fig. 5C).
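The group comparisons in this section (monocytes vs. M1 vs. M2, see the figure legends) rely on one-way ANOVA. As an illustration only, with hypothetical activity values rather than the actual data, such a comparison can be run as:

```python
from scipy import stats

# Hypothetical GPx specific activities (nmol/min/mg), for illustration only;
# the real values are those plotted in Fig. 5A.
monocytes = [3.1, 2.8, 3.4, 3.0, 2.9]
m1 = [3.5, 3.2, 3.8, 3.6, 3.3]
m2 = [5.9, 6.4, 6.1, 5.7, 6.3]

f_stat, p_value = stats.f_oneway(monocytes, m1, m2)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```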
Discussion
To complete the phenotypic macrophage characterization started in the first published article, we assessed the redox status of the differentiated macrophages in comparison with each other and with monocytes.
The differentiation of monocytes into M1-like and M2-like macrophages was attested by immunofluorescence labeling of CD32 and CD163, specific markers for each macrophage
subtype, respectively. Then, NO quantification confirmed macrophage differentiation, as NO was only quantified in M1 and M2. These results showed that macrophages present an active NO synthesis.
However, to prove that inducible NOS is present and active, mainly in pro-inflammatory (M1) macrophages, a combination of inflammatory stimuli (TNF-α, IL-1β) was used to stimulate the cells; no increase in NO production was observed (data not shown). All three types of immune cells were able to produce ROS in the extracellular and intracellular compartments, although no significant variations were observed. The antioxidant molecule GSH and the antioxidant enzymes (GPx, GR and GST) showed different profiles in monocytes, M1 and M2 (Table 8). In our results, the concentration of intracellular GSH increased during the differentiation process, whereas GST activity strongly decreased. Furthermore, the increased GPx activity could be specific to M2 differentiation, whereas increased GR activity appears specific to the M1 differentiation process. This redox characterization can thus be associated with other phenotypic markers of the differentiation of monocytes into macrophages, and GPx, GR and GST activities can be used to discriminate between the M1 and M2 phenotypes. Moreover, the significant increase in intracellular GSH concentration could indicate that macrophages are ready and able to protect themselves from oxidative stress. The high intracellular GSH concentration in M1 could be due to their higher GR activity. In contrast, M2 macrophages are characterized by increased GPx activity, which is also supported by the literature [START_REF] Nagashima | Native incretins prevent the development of atherosclerotic lesions in apolipoprotein E knockout mice[END_REF]. This increase in GPx activity helps the anti-inflammatory macrophage to detoxify ROS under oxidative stress. In fact, the glutathione system is the major antioxidant system in cells, where glutathione peroxidase, together with GSH, acts to neutralize lipid peroxides and hydrogen peroxide [START_REF] Meister | Glutathione[END_REF][START_REF] Sies | Glutathione and its role in cellular functions[END_REF][START_REF] Kehrer | Cellular reducing equivalents and oxidative stress[END_REF][START_REF] Shan | Glutathione-dependent protection against oxidative injury[END_REF][START_REF] Harlan | Glutathione redox cycle protects cultured endothelial cells against lysis by extracellularly generated hydrogen peroxide[END_REF][START_REF] Kuzuya | Protective role of intracellular glutathione against oxidized low density lipoprotein in cultured endothelial cells[END_REF].
Overall, it is conceivable that GSH status is connected to macrophage development. Indeed, the redox status of cells is known to play an important role in cellular development, including proliferation, differentiation and apoptosis [START_REF] Schafer | Redox environment of the cell as viewed through the redox state of the glutathione disulfide/glutathione couple[END_REF][START_REF] Rohrschneider | Growth and differentiation signals regulated by the M-CSF receptor[END_REF][START_REF] Jenkins | Imbalanced gp130-dependent signaling in macrophages alters macrophage colony-stimulating factor responsiveness via regulation of c-fms expression[END_REF]. Many studies have reported that, during development, most cells shift towards a more oxidizing environment with a lower reducing state [START_REF] Schafer | Redox environment of the cell as viewed through the redox state of the glutathione disulfide/glutathione couple[END_REF][START_REF] Hutter | Redox state changes in density-dependent regulation of proliferation[END_REF][START_REF] Takahashi | Gamma-glutamyl transpeptidase and glutathione in aging IMR-90 fibroblasts and in differentiating 3T3 L1 preadipocytes[END_REF]. In general, the redox state decreases according to the following rank order: proliferation > differentiation > apoptosis. During cellular development, the balance between reducing and oxidizing equivalents determines the intracellular redox state. The redox state of the cell can therefore modulate the ratios of the reversible oxidized and reduced biological redox couples, such as [NADH/NAD+], [NADPH/NADP+], [GSH/GSSG], [thioredoxin-SH2/thioredoxin-SS] and [protein-SH/protein-SS] [START_REF] Schafer | Redox environment of the cell as viewed through the redox state of the glutathione disulfide/glutathione couple[END_REF]. Among these redox couples, GSH is a major reducing agent that can catabolize H2O2 and other peroxides by enzymatic coupling reactions and protect protein thiol groups from oxidation. Thiol groups are themselves involved in LDL oxidation [START_REF] Sparrow | Cellular oxidation of low density lipoprotein is caused by thiol production in media containing transition metal ions[END_REF][START_REF] Graham | Human (THP-1) macrophages oxidize LDL by thiol-dependent mechanism[END_REF][START_REF] Heinecke | Oxidation of low density lipoprotein by thiols: Superoxidedependent an dindependent mechanisms[END_REF], and LDL oxidation by arterial cells, including macrophages, is known to play a major role in atherogenesis [START_REF] Raines | The role of macrophages[END_REF][START_REF] Chisolm | The oxidation of lipoproteins by monocytes-macrophages: biochemical and biological mechanisms[END_REF]. With particular regard to GSH, several studies have shown that the macrophage GSH content affects the cellular oxidative status and thus the capacity of these cells to oxidize LDL and to accumulate cholesterol, leading to foam cell formation.
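As a quantitative aside, following the Nernst formalism of Schafer and Buettner cited above (a textbook relation, not a result of this work), the half-cell potential contributed by the glutathione couple at pH 7.0 and 25 °C can be written as:

$$E_{\mathrm{GSSG/2GSH}} = E^{0'} - \frac{RT}{2F}\ln\frac{[\mathrm{GSH}]^{2}}{[\mathrm{GSSG}]} \approx -240\ \mathrm{mV} - 29.5\ \mathrm{mV}\cdot\log_{10}\frac{[\mathrm{GSH}]^{2}}{[\mathrm{GSSG}]}$$

Because [GSH] enters squared, the potential depends on the absolute glutathione concentration and not only on the GSH/GSSG ratio, which is one reason why the total GSH content measured above is informative beyond the ratio itself.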
As shown by Rosenblat et al. [START_REF] Rosenblat | Macrophage glutathione content and glutathione peroxidase activity are inversely related to cell-mediated oxidation of ldl: in vitro and in vivo studies[END_REF], the macrophage GSH content and cellular GPx activity (a selenium-dependent enzyme) are inversely related to cell-mediated oxidation of LDL [START_REF] Schuckelt | Phospholipid hydroperoxide glutathione peroxidase is a selenoenzyme distinct from the classical glutathione peroxidase as evident from cDNA and amino acid sequencing[END_REF]: upon selenium supplementation of a murine macrophage-like cell line (J-774 A.1), the cellular GPx activity and GSH content increased, and cell-mediated oxidation of LDL was reduced. Both GSH and GPx can protect macrophages from the toxic effects of oxidized LDL and thus may affect the development of the atherosclerotic lesion. These results suggest that the major effect of cellular glutathione on macrophage-mediated oxidation of LDL is attributable to the cellular GSH content. Furthermore, Rosenblat et al. [START_REF] Rosenblat | Increased macrophage glutathione content reduces cell-mediated oxidation of LDL and atherosclerosis in apolipoprotein E-deficient mice[END_REF] demonstrated that these phenomena are also operative in vivo: using the apo E-/- mouse model, in which the mice are under oxidative stress and develop atherosclerosis within 3-4 months [START_REF] Maor | Oxidized monocyte-derived macrophages in aortic atherosclerotic lesion from apolipoprotein E-deficient mice and from human carotid artery contain lipid peroxides and oxysterols[END_REF][START_REF] Hayek | Increased plasma lipoprotein lipid peroxidation in apo Edeficient mice[END_REF], they showed that manipulation of the macrophage GSH content affected the oxidative status of these cells, their capability to oxidize LDL and, ultimately, the development of atherosclerotic lesions.
In conclusion, further experiments are needed to complete and confirm these preliminary data, with particular regard to thiol status: the evaluation of GSSG concentration and the analysis of GSH-dependent enzyme activities in monocytes, M1 and M2 under inflammatory conditions. These results will also be supplemented with experiments planned under oxidative stress conditions, in order to obtain a complete overview. Finally, experiments are under way to evaluate iNOS expression, another important macrophage marker, under basal and inflammatory conditions (stimulation with TNFα/IL-1β, 10 ng/ml, 24 h).
CHAPTER III Vascular smooth muscle cells and Oxidative stress
During the development and progression of atherosclerosis, there is an important cross-talk between inflammation, generation of reactive oxygen and nitrogen species, and lipid metabolism, leading to vascular remodeling following injury [START_REF] Harrison | Role of oxidative stress in atherosclerosis[END_REF][START_REF] Napoli | Nitric oxide and atherosclerosis: an update[END_REF][START_REF] Patel | Cell signaling by reactive nitrogen and oxygen species in atherosclerosis Free Radical[END_REF]. At physiological levels, NO plays an important role in the maintenance of vascular homeostasis and inhibits the proliferation and migration of vascular SMCs. By contrast, during atherosclerosis, the decrease in NO bioavailability, caused mainly by increased oxidative stress and endothelial dysfunction, may lead, through SMC proliferation and migration, to vascular remodelling.
As previously described in the first chapter (section 1.4), several NO-related therapeutics have been developed to counteract the decrease in NO bioavailability. Among them, only S-nitrosothiols, which are physiological stores of NO, avoid both tolerance phenomena and oxidative stress. However, even if RSNO do not themselves induce oxidative stress, NO donors are still considered potential oxidative stress enhancers, leading to the formation of peroxynitrite and thereby to deleterious protein nitration, especially under oxidative stress conditions. The ability of GSNO to regulate NO bioavailability under oxidative stress conditions is poorly studied. The following chapter will focus on the possible use of GSNO to counteract the NO deficiency occurring under oxidative stress. First, the selection of the most appropriate oxidative stress model will be discussed in light of the literature, as well as the "phenotypic switching" of SMCs occurring during atherosclerosis and induced by ROS signaling. The metabolism of S-nitrosothiols by SMCs under oxidative stress will then be experimentally investigated. Specifically, protein S-nitrosation (Pr-SNO), a biomarker of the NO pool [START_REF] Stamler | Nitric oxide circulates in mammalian plasma primarily as an S-nitroso adduct of serum albumin[END_REF], will be used to evaluate the capacity of GSNO to deliver NO to SMCs cultured in an oxidative stress environment. As nitrosation can also modulate signaling pathways, the different classes and functions of S-nitrosated proteins will be identified and analyzed (Article 2, submitted to Free Radical Biology and Medicine, "Oxidative stress enhances and modulates protein S-nitrosation in smooth muscle cells exposed to S-nitrosoglutathione", presented at the end of chapter III).
As little is known about the possible interferences between free radicals and the metabolism of GSNO, a deeper appreciation of these mechanisms will open the way for the use of GSNO in cardiovascular diseases associated with oxidative stress.
Phenotypic switching of smooth muscle cells in atherosclerosis
SMCs exhibit phenotypic and functional plasticity in order to respond to vascular injury. In the case of vessel damage, SMCs are able to switch from a "contractile" phenotype to a "proliferative" (or "proinflammatory") phenotype. This change is accompanied by a decrease in the expression of smooth muscle (SM)-specific markers responsible for contraction and by the production of proinflammatory mediators, which induce proliferation and chemotaxis. However, during chronic inflammatory diseases like atherosclerosis, arterial SMCs become aberrantly switched to the proliferative phenotype, which leads to SMC dedifferentiation, proliferation and extracellular matrix formation for migration within plaque areas. This proatherosclerotic switch is a complex and multistep mechanism that may be induced by a variety of proinflammatory stimuli, oxidative stress and hemodynamic forces.
Molecular mechanism of the phenotypic switching: some key players
The phenotypic switching of SMCs is characterized by a decrease in the expression of SM contractile proteins such as SM α-actin and SM-myosin heavy chain (MHC). The mechanism of transcriptional repression of the SM22α gene (an important contractile protein), for example, was found to be involved in the changes in regulation of SMC marker genes [START_REF] Chen | Myocardin: a component of a molecular switch for smooth muscle differentiation[END_REF][START_REF] Yoshida | Myocardin is a key regulator of CArG-dependent transcription of multiple smooth muscle marker genes[END_REF]. Of major significance, Owens et al. previously demonstrated that mutation of a highly conserved G/C repressor element located 5' to the proximal CArG element (a DNA consensus sequence present within the promoters of SMC genes that plays a pivotal role in controlling their transcription) in the SM22α promoter, an element also found in the promoters of many other SMC marker genes [START_REF] Owens | Molecular regulation of vascular smooth muscle cell differentiation in development and disease[END_REF], nearly abolished the down-regulation of this gene in vivo in response to vascular injury [START_REF] Regan | Molecular mechanisms of decreased smooth muscle differentiation marker expression after vascular injury[END_REF] or in atherosclerotic lesions of ApoE -/- mice [START_REF] Wamhoff | A G/C element mediates repression of the SM22alpha promoter within phenotypically modulated smooth muscle cells in experimental atherosclerosis[END_REF]. Further studies showed that Krüppel-Like Factor-4 (Klf4), a transcriptional regulator activated by oxidized phospholipids, leads to the inhibition of expression of SM22α and other SM marker genes [START_REF] Dandré | Platelet-derived growth factor-BB and Ets-1 transcription factor negatively regulate transcription of multiple smooth muscle cell differentiation marker genes[END_REF][START_REF] Cherepanova | Oxidized phospholipids induce type VIII collagen expression and vascular smooth muscle cell migration[END_REF][START_REF] Salmon | Cooperative binding of KLF4, pELK-1, and HDAC2 to a G/C repressor element in the SM22a promoter mediates transcriptional silencing during SMC phenotypic switching in vivo[END_REF]. During atherosclerosis, Klf4 is upregulated and activated in SMCs [START_REF] Cherepanova | Oxidized phospholipids induce type VIII collagen expression and vascular smooth muscle cell migration[END_REF] and is responsible for SMC dedifferentiation, driving atherosclerotic plaque progression and vascular remodelling [START_REF] Zheng | Role of Krüppel-like factor 4 in phenotypic switching and proliferation of vascular smooth muscle cells[END_REF]. Finally, Klf4 is also involved in the upregulation of vascular SMC-mediated extracellular matrix (ECM) gene expression, such as type VIII collagen [START_REF] Cherepanova | Oxidized phospholipids induce type VIII collagen expression and vascular smooth muscle cell migration[END_REF].
In quiescent SMCs, NF-kB, a pro-inflammatory transcription factor, exists as an inactive p50-p50 homodimer that is constitutively bound to DNA, thereby repressing proinflammatory genes [START_REF] Cao | NF-kB1 (p50) homodimers differentially regulate pro-and antiinflammatory cytokines in macrophages[END_REF]. However, inflammatory stimuli cause the formation of the p65-p50 heterodimer, which activates the transcription of proinflammatory genes [START_REF] Bourcier | The nuclear factor kappa-B signaling pathway participates in dysregulation of vascular smooth muscle cells in vitro and in human atherosclerosis[END_REF]. Activation of NF-kB in SMCs plays a crucial role in their phenotypic switching. Indeed, NF-kB drives the expression of IL-8 [START_REF] Wang | Nitric oxide donors: chemical activities and biological applications[END_REF], MCP-1 [START_REF] Landry | Activation of the NF-kappa B and I kappa B system in smooth muscle cells after rat arterial injury. Induction of vascular cell adhesion molecule-1 and monocyte chemoattractant protein-1[END_REF], the chemokine (C-X-C motif) ligand 1 (CXCL1) [START_REF] Kim | Upregulation of interleukin-8/CXCL8 in vascular smooth muscle cells from spontaneously hypertensive rats[END_REF], VCAM-1 [START_REF] Kim | Upregulation of interleukin-8/CXCL8 in vascular smooth muscle cells from spontaneously hypertensive rats[END_REF], ICAM-1 [START_REF] Cercek | Nuclear factor-kappa B activity and arterial response to balloon injury[END_REF] and metalloproteases (MMP-1, -2, -3 and -9) [START_REF] Bond | Inhibition of transcription factor NF-kappaB reduces matrix metalloproteinase-1, -3 and -9 production by vascular smooth muscle cells[END_REF][START_REF] Moon | PTEN induces G1 cell cycle arrest and inhibits MMP-9 expression via the regulation of NF-kappaB and AP-1 in vascular smooth muscle cells[END_REF][START_REF] Cui | Platelet-derived growth factor-BB induces matrix metalloproteinase-2 expression and rat vascular smooth muscle cell migration via ROCK and ERK/p38 MAPK pathways[END_REF]. CXCL1 can attract neutrophils to inflamed sites [START_REF] Schumacher | High-and low-affinity binding of GRO alpha and neutrophil-activating peptide 2 to interleukin 8 receptors on human neutrophils[END_REF], while MMPs are required for ECM degradation and increase the migration of SMCs [START_REF] Chen | Matrix metalloproteinases: inflammatory regulators of cell behaviors in vascular formation and remodeling[END_REF].
Finally, many changes occur in ECM composition during SMC phenotypic switching. Normally, SMCs are surrounded by an ECM (basal lamina) composed of laminin, collagen IV and perlecan. In the steady state, the basal lamina provides signals that support the SMC "contractile" phenotype and prevent dedifferentiation, growth and proliferation [START_REF] Barnes | Collagens and atherosclerosis[END_REF]. In contrast, in the case of vessel damage and vascular proliferative diseases, the composition of the basal lamina changes, with the appearance of osteopontin, fibronectin and syndecan-4, which in turn are able to mediate the proatherogenic dedifferentiation of SMCs.
Overall, the modification of the SMC environment by inflammation and oxidative stress will affect the vascular wall. Thus, in the next section, the impact of ROS signaling on SMC phenotypic modulation will be investigated.
ROS signaling and smooth muscle cells
Vascular oxidative stress is a mechanistic link between macrophage infiltration, MMP activation and SMC apoptosis in plaque instability. The cellular redox environment is a balance between the production of ROS, RNS and reactive sulphur species (RSS), and their removal by antioxidant enzymes and small-molecular-weight antioxidants. ROS include free radicals such as the superoxide anion (O2•-), perhydroxyl radical (HO2•) and hydroxyl radical (•OH), as well as other species such as hydrogen peroxide (H2O2), singlet oxygen (1O2) and hypochlorous acid (HOCl) [START_REF] Vajragupta | Manganese complexes of curcumin analogues: evaluation of hydroxyl radical scavenging ability, superoxide dismutase activity and stability towards hydrolysis[END_REF]. RNS are derived from nitric oxide through its reaction with O2•- to form ONOO-. RSS are easily formed from thiols by reaction with ROS [START_REF] Giles | Reactive sulfur species: an emerging concept in oxidative stress[END_REF]. ROS generation has been further localized to the tunica media, suggesting a prominent role for SMCs in their production [START_REF] Rajagopalan | Angiotensin II-mediated hypertension in the rat increases vascular superoxide production via membrane NADH/NADPH oxidase activation. Contribution to alterations of vasomotor tone[END_REF]. Directly exposing SMCs to ROS-generating systems stimulates migration, proliferation, and growth, implicating ROS in these processes [START_REF] Kunsch | Oxidative stress as a regulator of gene expression in the vasculature[END_REF][START_REF] Irani | Oxidant signaling in vascular cell growth, death, and survival: a review of the roles of reactive oxygen species in smooth muscle and endothelial cell mitogenic and apoptotic signaling[END_REF][START_REF] Griendling | NAD(P)H oxidase: role in cardiovascular biology and disease[END_REF][START_REF] Rao | Active oxygen species stimulate vascular smooth muscle cell growth and protooncogene expression[END_REF]. Under physiological conditions, ROS are produced in a controlled manner at low concentrations and function as inter- and intracellular signaling molecules [START_REF] Chen | Regulation of ROS signal transduction by NAD(P)H oxidase 4 localization[END_REF][START_REF] Chen | Matrix metalloproteinases: inflammatory regulators of cell behaviors in vascular formation and remodeling[END_REF]. By contrast, under pathological conditions, increased activity/expression of ROS-generating enzymes or decreased antioxidant defenses in the vasculature result in increased bioavailability of ROS and reduced bioavailability of NO (which is redirected towards ONOO- formation), increasing oxidative stress and vascular damage [START_REF] Paravicini | NAD(P)H oxidases, reactive oxygen species, and hypertension: clinical implications and therapeutic possibilities[END_REF][START_REF] Schulz | Nitric oxide, tetrahydrobiopterin, oxidative stress, and endothelial dysfunction in hypertension[END_REF].
Enzymatic sources of ROS
There are multiple potential enzymatic sources of ROS in vascular cells, including xanthine oxidase, uncoupled nitric oxide synthases, cyclooxygenase, and cytochrome P-450, which can potentially generate and/or release ROS in amounts that have a major influence on the control of vascular function [START_REF] Kukreja | PGH synthase and lipoxygenase generate superoxide in the presence of NADH or NADPH[END_REF][START_REF] Mueller | Redox mechanisms in blood vessels[END_REF] (Fig. 11). In order to limit ROS-induced oxidative damage, cells normally possess intracellular molecules and enzymes that keep ROS homeostasis at a low signalling level. The antioxidant defence system includes enzymatic (i.e. superoxide dismutase (SOD), catalase (CAT), GPx, GR, GST, TrxR), metal-chelating and free radical-scavenging activities that neutralize these radicals after they have been formed. However, there is currently substantial evidence that NADPH oxidases (NOX) and mitochondria are significant sources of ROS generation in SMCs [START_REF] Wolin | Oxidant and redox signaling in vascular oxygen sensing mechanisms: Basic concepts, current controversies, and potential importance of cytosolic NADPH[END_REF][START_REF] Lyle | Modulation of vascular smooth muscle signaling by reactive oxygen species[END_REF][START_REF] Archer | Mitochondrial metabolism, redox signaling, and fusion: a mitochondria-ROS-HIF-1α-Kv1.5 O2-sensing pathway at the intersection of pulmonary hypertension and cancer[END_REF] (Fig. 11). Indeed, as previously mentioned in the first chapter, NOX 1 and NOX 4 have been identified in aortic smooth muscle cells, where they participate in SMC proliferation and differentiation, respectively [START_REF] Clempus | Reactive oxygen species signaling in vascular smooth muscle cells[END_REF].
NADPH oxidases are the primary source of intracellular ROS in atherosclerosis [START_REF] Lassegue | Biochemistry, physiology, and pathophysiology of NADPH oxidases in the cardiovascular system[END_REF], as confirmed by several studies, e.g. using mice deficient in the catalytic subunits NOX 1 or NOX 2 or the cytosolic subunit p47 phox [START_REF] Sheehan | Role for Nox1 NADPH oxidase in atherosclerosis[END_REF][START_REF] Judkins | Direct evidence of a role for Nox2 in superoxide production, reduced nitric oxide bioavailability, and early atherosclerotic plaque formation in ApoE-/-mice[END_REF][START_REF] Barry-Lane | p47phox is required for atherosclerotic lesion progression in ApoE(-/-) mice[END_REF][START_REF] Vendrov | Atherosclerosis is attenuated by limiting superoxide generation in both macrophages and vessel wall cells[END_REF]. However, NOX homolog-specific changes in gene expression occur at different stages of atherosclerosis development. In atherosclerotic human coronary arteries, NOX 4 expression progressively increases from stage I to stage IV [START_REF] Sorescu | Expression of the NAD(P)H oxidase subunits in human coronary arteries[END_REF]. NOX 1 expression is increased early after wire injury, whereas NOX 4 is increased later [START_REF] Szöcs | Upregulation of Nox-based NAD(P)H oxidases in restenosis after carotid injury[END_REF]. In a recent study examining the contribution of NADPH oxidase in plaque-derived SMCs, Xu et al. identified a relationship between NOX 4 expression and lesion progression, ROS levels, apoptosis, and cell cycle arrest. These results implicate NOX 4 in the senescence and apoptosis of plaque-derived SMCs, suggesting an integral role for NOX 4 in plaque stability [START_REF] Xu | Nox4 NADPH oxidase contributes to smooth muscle cell phenotypes associated with unstable atherosclerotic plaques[END_REF].
In recent years, several reports have supported the idea that cellular ROS levels can function as "second messengers" regulating numerous cellular processes, including proliferation [START_REF] Sundaresan | Requirement for generation of H2O2 for plateletderived growth factor signal transduction[END_REF][START_REF] Sarsour | Manganese superoxide dismutase protects the proliferative capacity of confluent normal human fibroblasts[END_REF][START_REF] Sarsour | Manganese superoxide dismutase activity regulates transitions between quiescent and proliferative growth[END_REF][START_REF] Bartosz | Reactive oxygen species: destroyers or messengers?[END_REF]. The second-messenger properties of ROS are believed to activate signaling pathways through tyrosine kinases, tyrosine phosphatases, MAP kinases, or ion channels [START_REF] Paravicini | Redox signaling in hypertension[END_REF]. This dual function of ROS, as signaling molecules or as inducers of oxidative stress, could result from differences in their concentrations, pulse duration, and subcellular localization. Indeed, although higher levels of ROS can be toxic, low levels of ROS may serve as signaling molecules regulating many cellular processes, including proliferation.
Non-enzymatic sources of ROS
In biological systems, ROS can also arise from non-enzymatic sources. In fact, as the direct reactions of peroxides with thiols show slow rates, alternative processes may be involved. One possibility is that protein-binding interactions bring the catalytic site of oxidases together with thiols, in a manner that exposes the thiols to the high local concentrations of peroxide needed to drive direct peroxide-thiol reactions. Alternatively, catalysts such as transition metals might promote peroxide reactions with reactive protein thiol groups. Oxidized forms of NO, including ONOO- and NO2-, have chemical properties that are potentially involved in this regulation through their reactions with protein sites such as thiols and tyrosine, with unsaturated fatty acids, and with other molecules (GSH, tetrahydrobiopterin, etc.) [START_REF] Pacher | Nitric oxide and peroxynitrite in health and disease[END_REF].
ROS generated during SMC proliferation can also originate from growth factor signaling, for instance by PDGF [START_REF] Raines | PDGF and cardiovascular disease[END_REF]. Increased expression of PDGF and its receptors has been found in atherosclerotic lesions [START_REF] Raines | PDGF and cardiovascular disease[END_REF]. Furthermore, PDGF is known to activate redox factor 1 (Ref-1) by altering its redox status, to enhance AP-1 activity, and to increase the expression of cell-cycle-regulatory proteins, facilitating progression from G0/G1 to S phase in SMCs [START_REF] He | Redox factor-1 contributes to the regulation of progression from G0=G1 to S by PDGF in vascular smooth muscle cells[END_REF]. Besides PDGF, TGF-β, epidermal growth factor, insulin-like growth factor, basic fibroblast growth factor and angiotensin-II (Ang-II) can also initiate SMC proliferation [START_REF] Dzau | Molecular mechanisms of vascular renin-angiotensin system in myointimal hyperplasia[END_REF][START_REF] Grant | Localization of insulin-like growth factor I and inhibition of coronary smooth muscle cell growth by somatostatin analogues in human coronary smooth muscle cells: a potential treatment for restenosis?[END_REF][START_REF] Lindner | Proliferation of smooth muscle cells after vascular injury is inhibited by an antibody against basic fibroblast growth factor[END_REF][START_REF] Majesky | Production of transforming growth factor beta 1 during repair of arterial injury[END_REF][START_REF] Nabel | Recombinant plateletderived growth factor B gene expression in porcine arteries induce intimal hyperplasia in vivo[END_REF].
The renin-angiotensin system as a source of ROS
With particular regard to Ang-II, both clinical and experimental evidence supports a potential role of the renin-angiotensin-aldosterone system (RAAS) in contributing to phenotypic switching in SMCs. Cross-talk between the main mediators of the RAAS has been shown to participate in the development of vascular dysfunction, with SMCs being key participants in this cross-talk [START_REF] Rautureau | Cross-talk between aldosterone and angiotensin signaling in vascular smooth muscle cells[END_REF].
SMCs were shown to express the angiotensin II receptor type 1 (AT1R), which binds Ang-II, a vasoconstrictor [START_REF] Park | Characterization of angiotensin receptors in vascular and intestinal smooth muscles[END_REF]. Previous studies have supported a role for Ang-II in the generation of oxidative stress in the vasculature via NADPH oxidase-dependent superoxide production [START_REF] Griendling | Angiotensin II stimulates NADH and NADPH oxidase activity in cultured vascular smooth muscle cells[END_REF]. In addition, Ang-II has been shown to increase the expression of a variety of proinflammatory mediators, such as MCP-1 and VCAM-1 mRNA, in rat aortic SMCs [START_REF] Chen | Angiotensin II induces monocyte chemoattractant protein-1 gene expression in rat vascular smooth muscle cells[END_REF]. In these experiments, the increased expression of both VCAM-1 and MCP-1 in response to Ang-II could be blocked by NADPH oxidase inhibitors and catalase, suggesting that NADPH oxidase contributes to oxidative stress and to the regulation of vascular inflammatory genes via the generation of H2O2. Thus, Ang-II stimulates adhesion of monocytes to SMCs and promotes the transition from the contractile, quiescent phenotype towards SMC proliferation and migration [START_REF] Cai | Growth factors induce monocyte binding to vascular smooth muscle cells: implications for monocyte retention in atherosclerosis[END_REF]. Through the formation of oxidative stress and increased levels of proinflammatory gene products in the vessel wall, Ang-II may also serve as a molecular link between hypertension and the pathogenesis of atherosclerosis [START_REF] Alexander | Hypertension and the pathogenesis of atherosclerosis: oxidative stress and the mediation of arterial inflammatory response: a new perspective[END_REF].
ROS and lipoprotein oxidative modifications
The last aspect to analyze is the involvement of free radicals in lipoprotein oxidative modifications, which is supported by several lines of evidence [START_REF] Steinberg | Beyond cholesterol. Modifications of lowdensity lipoprotein that increase its atherogenicity[END_REF]. Indeed, macrophages accumulate oxLDL via scavenger receptors, resulting in cellular cholesterol accumulation and subsequent foam cell formation. For SMCs, in addition to the expression of the several LDL receptors already described in the first chapter (section 1.2.3), cellular cholesterol accumulation results, on the one hand, in an increase in cholesterol biosynthesis and esterification and, on the other hand, in a decrease in cholesteryl ester (CE) hydrolysis and cholesterol efflux, processes fully regulated by specific enzymatic activities (HMG-CoA reductase, acyl-coenzyme A:cholesterol acyltransferase (ACAT) and neutral cholesteryl ester hydrolase (NCEH)). Therefore, disturbances in cholesterol metabolism favor the accumulation of cholesterol and cholesteryl esters in vascular cells and thus may contribute to the formation of SMC foam cells. Indeed, the idea that free radicals may modulate the activities of some enzymes involved in cellular cholesterol metabolism is supported by several studies. Gesquiere et al. observed that in vitro treatment of SMCs with free radicals increased both HMG-CoA reductase and ACAT activities, whereas NCEH activity was decreased. They proposed that these changes in the activities of enzymes involved in cholesterol homeostasis could be the result of a free radical-mediated decrease in cyclic AMP (cAMP) concentration [START_REF] Gesquiere | Role of the cyclic amp-dependent pathway in free radical induced cholesterol accumulation in vascular smooth muscle cells[END_REF]. In fact, cAMP has also been shown to modulate LDL receptor activity [START_REF] Krone | Effects of prostaglandins on LDL receptor activity and cholesterol synthesis in freshly isolated human mononuclear leukocytes[END_REF][START_REF] Middleton | Cyclic AMP stimulates the synthesis and function of the low-density lipoprotein receptor in human vascular smooth-muscle cells and fibroblasts[END_REF], HMG-CoA reductase [START_REF] Krone | Effects of prostaglandins on LDL receptor activity and cholesterol synthesis in freshly isolated human mononuclear leukocytes[END_REF][START_REF] Edwards | The effect of glucagon, norepinephrine, and dibutyryl cyclic AMP on cholesterol efflux and on the activity of 3-hydroxy-3-methylglutaryl CoA reductase in rat hepatocytes[END_REF], ACEH [START_REF] Hajjar | Prostacyclin modulates cholesteryl ester hydrolytic activity by its effect on cyclic adenosine monophosphate in rabbit aortic smooth muscle cells[END_REF] and NCEH [START_REF] Pomerantz | Signal transduction in atherosclerosis: Second messengers and regulation of cellular cholesterol trafficking[END_REF], and cholesterol efflux has likewise been demonstrated to be regulated by a cAMP-dependent pathway [START_REF] Hokland | Cyclic AMP stimulates efflux of intracellular sterol from cholesterolloaded cells[END_REF][START_REF] Bernard | cAMP stimulates cholesteryl ester clearance to high density lipoproteins in J774 macrophages[END_REF].
In summary, the oxidative stress-mediated cholesterol accumulation in SMCs may be related to a decrease in cAMP concentration. In light of these results, all of these metabolic modifications may contribute to cholesterol accumulation in SMCs and, extrapolating to the in vivo situation, to the formation of the foam cells found in atherosclerotic plaques.
Antioxidative systems: enzymatic defenses against ROS
Superoxide, in addition to undergoing spontaneous dismutation, is converted to hydrogen peroxide by the superoxide dismutase enzymes (MnSOD, CuZnSOD, and Ec-SOD). CAT and GPx neutralize H2O2 to water. Hydroperoxides are also neutralized by thioredoxin/thioredoxin reductase, glutaredoxin/glutaredoxin reductase, and the six-member family of peroxiredoxins (Prxs) [START_REF] Rhee | A family of novel peroxidases, peroxiredoxins[END_REF]. More specifically, the generation of ROS usually induces thiol oxidation, forming disulfide bonds in proteins or GSSG. Among ROS, H2O2 can influence the redox state of protein thiols through two-electron reactions. The reduced form of cysteine in proteins can undergo oxidation reactions to form sulfenic (RSOH), sulfinic (RSO2H), and sulfonic (RSO3H) acids. The sulfinic and sulfonic forms are believed to be irreversible, whereas the sulfenic form can conjugate with other reduced thiols (RSH) to form a disulfide bridge (RSSR). Cellular antioxidant systems can then reduce the disulfide bond and regenerate the reduced form of the cysteine in proteins. In addition, superoxide can initiate one-electron reactions that can alter the redox state of metal cofactors (e.g. Fe and Zn) present in many kinases and phosphatases, thereby affecting their activities. The generation of oxidized thiols can be reversed by the thioredoxin/thioredoxin reductase systems or by the protein disulfide isomerase (PDI) family.
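To make the enzymatic steps just described explicit, the canonical reactions can be summarized as follows (a standard textbook scheme, not specific to the studies cited here):
(i) 2 O2•- + 2 H+ → H2O2 + O2 (SOD)
(ii) 2 H2O2 → 2 H2O + O2 (CAT)
(iii) H2O2 + 2 GSH → GSSG + 2 H2O (GPx)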
PDI has emerged as a key redox-sensitive player in protein folding and ER stress. For instance, it is known that ER stress generally increases cellular ROS generation [START_REF] Laurindo | Protein disulfide isomerase in redox cell signaling and homeostasis[END_REF], in part by increasing calcium release, which increases ROS production by the mitochondria [START_REF] Görlach | The endoplasmic reticulum: folding, calcium homeostasis, signaling, and redox control[END_REF]. In particular, the ER lumen is an oxidative environment compared with the cytosol, with a high GSSG:GSH ratio [START_REF] Malhotra | Endoplasmic reticulum stress and oxidative stress: a vicious cycle or a doubleedged sword? Antioxidant & Redox Signaling[END_REF]. A recent study by Wu et al. [START_REF] Wu | Nox4-derived H2O2 mediates endoplasmic reticulum signaling through local Ras activation[END_REF], however, did not find increased free H2O2 in the ER compared with the cytosol using the redox probe HyPer (a hydrogen peroxide sensor), suggesting that much of the observed redox potential is protein bound. The oxidative ER environment is conducive to the formation of disulfide bonds, and a key role is played by PDI, whose expression increases at the plasma membrane under oxidative stress conditions [Belcastro et al., to be submitted; section 3.5, Article 2]. PDI catalyzes the formation and breakage of disulfide bonds between cysteine residues of proteins in a process known as oxidative folding. It is composed of four thioredoxin domains and contains redox-sensitive cysteines whose oxidation state can influence the protein binding and activity of PDI. PDI has two active sites, both of which are characterized by the presence of a CGHC motif, which either forms a disulfide, for the enzyme to become active as an oxidase, or a dithiol, for the enzyme to act as an isomerase. In its reduced form, PDI acts as an isomerase, whereas oxidation of PDI enables it to form disulfide bridges [START_REF] Rutkevich | Functional relationship between protein disulfide isomerase family members during the oxidative folding of human secretory proteins[END_REF]. Reduced PDI can bind another ER oxidoreductase, ER oxidoreductin 1 (Ero1), which is capable of oxidizing PDI and, as a consequence, produces H2O2 in the ER [START_REF] Laurindo | Nox NADPH oxidases and the endoplasmic reticulum[END_REF]. Ero1 is thought to be essential for PDI activity, generating internal disulfide bonds and transferring them via PDI to target proteins [START_REF] Gross | Structure of Ero1p, source of disulfide bonds for oxidative protein folding in the cell[END_REF]. The oxidation of PDI by Ero1 is thought to occur through disulfide exchange, which results in the formation of reduced Ero1 [START_REF] Gross | Structure of Ero1p, source of disulfide bonds for oxidative protein folding in the cell[END_REF]. Ero1 can be oxidized rapidly in the presence of FAD and oxygen, indicating that oxygen is the ultimate electron acceptor [START_REF] Tu | The FAD-and O(2)-dependent reaction cycle of Ero1-mediated oxidative protein folding in the endoplasmic reticulum[END_REF]. The reactivation of Ero1 by molecular oxygen generates ROS in the ER. How the cell is protected from the damage caused by ER-generated ROS is poorly understood; however, recent data suggest that GSH might have an important role in this process. Recent evidence also suggests that the inhibition of PDI in cells can contribute to ER stress and apoptosis. Toldo et al.
[START_REF] Toldo | The role of PDI as a survival factor in cardiomyocyte ischemia[END_REF] found that overexpression of PDI protects against myocardial damage in an acute myocardial infarction (MI) mouse model. The authors suggest that PDI activity is antiapoptotic, in part, by increasing the activity of SOD1.
Antioxidative systems: non-enzymatic defenses against ROS
The neutralization of ROS by additional intracellular small-molecular-weight antioxidants involves cysteine, vitamin C (ascorbic acid), and vitamin E (α-tocopherol) [START_REF] Mascio | Antioxidant defense systems: the role of carotenoids, tocopherols, and thiols[END_REF]. However, changes in antioxidant enzyme activities or in small-molecular-weight antioxidant levels, or both, can perturb the cellular redox environment, which in turn can affect the redox regulation of cell-cycle progression. In fact, another critical role played by ROS lies in the redox regulation of cell-cycle progression. It is known that the mammalian cell cycle has five distinct phases: G0 (the quiescent state) and the proliferative phases G1, S, G2, and M. In response to mitogenic stimuli, quiescent cells enter the proliferative cycle and may later transit back to the quiescent state.
Reentry into quiescence is essential to prevent aberrant proliferation as well as to protect the cellular life span. Precisely, thiol-disulfide exchange reactions can regulate many cell-cycle-regulatory protein functions during the redox regulation of the cell cycle. Indeed, progression through the cell-cycle phases is regulated by the sequential and periodic activation of positive regulators, the cyclins and cyclin-dependent kinases (CDKs); e.g., progression from G0/G1 to S is largely regulated by the D-type cyclins (cyclin D1 and D2) in association with CDK4-6.
In addition, ROS signaling is known to regulate many of the transcription factors that influence development. For instance, Xu et al., using SMCs derived from a murine model of atherogenesis (ApoE-/-/LDLR-/- mice) to examine the contribution of NADPH oxidase in plaque-derived SMCs, observed decreased cellular growth, hypophosphorylation of the retinoblastoma protein (Rb), increased expression of cyclin-dependent kinase inhibitors, and decreased expression of cyclin D1 (G0/G1 arrest) [START_REF] Xu | Nox4 NADPH oxidase contributes to smooth muscle cell phenotypes associated with unstable atherosclerotic plaques[END_REF][START_REF] Han | Mechanisms of liver injury. III. Role of Glutathione redox status in liver injury[END_REF]. In conclusion, by altering the redox state of several proteins, ROS can affect the redox regulation of cell-cycle proteins during progression from one cell-cycle phase to the next.
Antioxidant glutathione effects and glutathione-related post-translational modifications
Glutathione homeostasis
The most abundant endogenous antioxidant molecule is the tripeptide glutathione, which is critical for the maintenance of the cellular redox balance [START_REF] Han | Mechanisms of liver injury. III. Role of Glutathione redox status in liver injury[END_REF][START_REF] Wu | Glutathione Metabolism and Its Implications for Health[END_REF]. GSH is synthesized in vivo by the consecutive action of two ATP-dependent enzymes, from the precursor amino acids cysteine, glutamate and glycine (Fig. 12). The first enzyme, glutamate-cysteine ligase (GCL), formerly called γ-glutamylcysteine synthase (GCS), is a heterodimeric, rate-limiting enzyme. GCL is sensitive to oxidative stress, and its expression is regulated by nuclear factor (erythroid-derived 2)-like 2 (NFE2L2), a transcription factor that regulates a wide array of antioxidant-responsive-element-driven genes in various cell types [START_REF] Baldelli | Punctum on two different transcription factors regulated by PGC-1 alpha: nuclear factor erythroid-derived 2-like 2 and nuclear respiratory factor 2[END_REF]. Glutathione synthase (GS) is the second enzyme required for GSH synthesis, catalysing the ATP-consuming addition of glycine to γ-glutamylcysteine. Finally, the chemical structure of GSH confers distinctive properties: the γ-glutamyl linkage renders the tripeptide resistant to proteolysis, while the cysteine residue provides redox (thiol) catalysis. The overall rate of GSH synthesis is controlled by several factors, including: (i) the availability of L-cysteine [START_REF] Meister | Glutathione[END_REF]; (ii) the relative ratio between the two subunits of GCL [START_REF] Chen | Glutamate cysteine ligase catalysis: dependence on ATP and modifier subunit for regulation of tissue glutathione levels[END_REF]; (iii) feedback inhibition of GCL by GSH [START_REF] Taylor | Nutritional and hormonal regulation of glutathione homeostasis[END_REF]; and (iv) ATP provision. All cell types synthesize GSH; however, the main source of the tripeptide in the body is the liver. In contrast to GSH synthesis, which occurs intracellularly, GSH degradation occurs exclusively in the extracellular space, on the surface of cells that express GGT (chapter I, section 1.5), the only enzyme that catabolizes GSH and GSH adducts (e.g. oxidized glutathione, glutathione S-conjugates and glutathione complexes).
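As a compact summary of this two-step, ATP-dependent synthesis (standard biochemistry, given here as a scheme rather than a finding of the references above):
(i) L-Glu + L-Cys + ATP → γ-Glu-Cys + ADP + Pi (GCL)
(ii) γ-Glu-Cys + Gly + ATP → GSH + ADP + Pi (GS)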
Therefore, the intra-and extracellular GSH levels are determined by the balance between its production, consumption, and cellular export. This redox cycle, known as the GSH cycle (Fig. 11),
incorporates other important antioxidant, redox-related enzymes. Due to the important physiological functions of GSH, these processes are tightly controlled at the transcriptional, translational, and post-translational levels. Indeed, GSH is required for several cell processes interconnected with the maintenance and regulation of the thiol-redox status, owing to its capability to exist in different redox species [START_REF] Forman | Glutathione: overview of its protective roles, measurement, and biosynthesis[END_REF].
Under physiological conditions, the concentration of reduced GSH is 10- to 100-fold higher than that of the oxidized species (GSSG and GSSR).
Glutathione and oxidative stress
GSH scavenges free radicals, ROS and RNS (hydroxyl radical, lipid peroxyl radical, superoxide anion, and hydrogen peroxide) both directly and indirectly through enzymatic reactions. The one-electron reduction of radicals by GSH is not chemically favourable, because it generates the unstable thiyl radical GS•. However, the reaction is driven kinetically in the forward direction by the removal of GS• through reaction with the thiolate anion (GS-) and O2. The first reaction generates GSSG•-, which, in the presence of O2, yields GSSG and O2•-. Ultimately, the O2•- produced is dismutated to H2O2 by SOD, and the H2O2 is then handled by catalase or GPx [START_REF] Winterbourn | Superoxide as an intracellular radical sink[END_REF]. GSH does not react directly with hydroperoxides; however, it is a cosubstrate of the selenium-dependent GPx, which has been recognized as the most important mechanism for the reduction of H2O2 and of lipid peroxidation products such as malonyl dialdehyde and 4-hydroxy-2-nonenal [START_REF] Comporti | Glutathione depleting agents and lipid peroxidation[END_REF]. GPx catabolizes hydroperoxides, converting two molecules of GSH into GSSG, its oxidized form. GSSG is then recycled to GSH by GR, at the expense of NADPH, via enzyme-bound FAD.
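Schematically, the radical-sink sequence described above can be written as (a standard summary of these reactions):
(i) GSH + R• → GS• + RH
(ii) GS• + GS- → GSSG•-
(iii) GSSG•- + O2 → GSSG + O2•-
followed by dismutation of O2•- by SOD and removal of the resulting H2O2 by catalase or GPx.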
Therefore, the GSH/GSSG redox couple is of great importance in maintaining the cellular redox status. The estimated in vivo redox potential for the GSH/GSSG couple ranges from -260 mV to -150 mV depending on the conditions [START_REF] Jones | Redox potential of GSH/GSSG couple: assay and biological significance[END_REF]. Thus, changes in the GSH/GSSG ratio are fundamental in the fine tuning of signal transduction, such as cell cycle regulation [START_REF] Schafer | Redox environment of the cell as viewed through the redox state of the glutathione disulfide/glutathione couple[END_REF]. Under oxidative stress, the concentration of GSH decreases, leading to irreversible cell degeneration and death. In fact, shifting the GSH/GSSG redox couple toward the oxidizing state activates several signalling pathways, including protein kinase B, calcineurin, NF-κB, c-Jun N-terminal kinase, apoptosis signal-regulated kinase 1, and mitogen-activated protein kinase, thereby reducing cell proliferation and increasing apoptosis [START_REF] Sen | Cellular thiols and redox-regulated signal transduction[END_REF].
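The dependence of this potential on both the GSH/GSSG ratio and the absolute GSH concentration follows from the Nernst equation for the half-reaction GSSG + 2 H+ + 2 e- → 2 GSH; a minimal worked form, assuming a standard potential E°' of about -240 mV at pH 7.0:
E = E°' - (RT/2F) · ln([GSH]²/[GSSG])
At 37 °C, RT/2F ≈ 13.4 mV. For example, with [GSH] = 5 mM and a GSH:GSSG ratio of 100:1 ([GSSG] = 50 µM), [GSH]²/[GSSG] = 0.5 M and E ≈ -240 + 9 ≈ -231 mV, within the in vivo range quoted above. Because [GSH] enters as a squared term, a 10-fold dilution of both species at a constant ratio shifts E by about +31 mV, i.e. toward a more oxidizing potential.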
In recent years, additional roles for the antioxidant function of GSH related to signal transduction have emerged. The most common covalent post-translational modifications of protein cysteine residues are modification of the cysteine by NO, i.e. S-nitrosation (described in section 3.4), and incorporation of a glutathione moiety onto the thiol, i.e. S-glutathionylation (Fig. 13). Protein S-glutathionylation protects protein cysteines from irreversible oxidation and serves to transduce a redox signal by changing the structure/function of the target protein [START_REF] Ghezzi | Regulation of protein function by glutathionylation[END_REF][START_REF] Ghezzi | Protein glutathionylation in health and disease[END_REF][START_REF] Jones | Radical-free biology of oxidative stress[END_REF]. This process is observed both under physiological redox signalling and under oxidative stress. Protein S-glutathionylation involves the reaction of a protein cysteine residue, or of an oxidized derivative thereof such as an S-nitrosyl (S-NO), sulfenic acid (S-OH) or thiyl radical (S•) form, with GSH. The reversal of S-glutathionylation (i.e., de-glutathionylation) is catalysed by glutaredoxin (Grx) at the expense of GSH as a cosubstrate [START_REF] Kalinina | Role of glutathione, glutathione transferase, and glutaredoxin in regulation of redox dependent processes[END_REF]. (Legend of Fig. 13: horizontal dotted lines separate one-electron oxidation states; dashed lines represent oxidations by oxidative species not depicted, including molecular oxygen (O2); for simplicity, lines representing the reaction of P-SH with GSSG or GSNO to yield mixed disulfide protein (P-SS-G) have been omitted. GSH: reduced glutathione; GSNO: S-nitrosoglutathione; GSSG: oxidized glutathione [START_REF] Martinez -Ruiz | Signalling by NO-induced protein S-nitrosylation and S-glutathionylation: Convergences and divergences[END_REF].)
In conclusion, the scenario depicted clearly indicates that GSH is a central protagonist in the network governing the decision between cell life and death, through the modulation of the cellular redox state. A simplified overview of the roles of GSH, both those described here and others, is reported in Table 8.
Different models to induce oxidative stress
In the past decades, cell culture and animal models have been established in order to better understand the effects and mechanisms of ROS and antioxidants. To induce oxidative stress, two alternative approaches may be followed to disturb the prooxidant-antioxidant balance: either increasing the radical load or inhibiting the antioxidant defenses. The radical load can be increased by exposing cells to γ-irradiation, elevated oxygen tension (hyperoxia), extracellular O2•- and/or H2O2, by using free radical-generating azo compounds such as 2,2'-azobis(2-amidinopropane) dihydrochloride (AAPH), or by using free radical-generating drugs such as paraquat and menadione.
These drugs contain a quinone structure and can take part in an intracellular oxidation-reduction cycle by which O2•- is generated. However, in addition to O2•-, reactive semiquinone radicals are also formed that may contribute to the stress induced by these compounds [START_REF] Morrison | Induction of DNA damage by menadione (2-methyl-l,4-naphthoquinone) in primary cultures of rat hepatocytes[END_REF][START_REF] Krall | Superoxide mediates the toxicity of paraquat for cultured mammalian cells[END_REF].
In the following subparagraphs, several oxidative stress models will be described, with particular attention to the AAPH model employed in Article 2.
3.3.1 Exposure to extracellular H2O2
H2O2 added directly to the culture medium results in a short-term exposure to a quickly decreasing concentration of H2O2. H2O2 is rather stable in most culture media, but in the presence of cells its concentration diminishes quickly, depending on the cell population density and the cellular catalase content. H2O2 readily penetrates the cellular membrane; inside the cell, it is detoxified by catalase. Sensitivity towards the cytotoxic effects of H2O2 is inversely correlated with catalase activity when relatively high bolus doses of H2O2 are given [START_REF] Spitz | Stable H202-resistant variants of Chinese hamster fibroblasts demonstrate increases in catalase activity[END_REF]. Glutathione peroxidase may act as the predominant defense enzyme against lower H2O2 concentrations [START_REF] Engstrom | Mechanisms of extracellular hydrogen peroxide clearance by alveolar type 11 pneumocytes[END_REF]. H2O2 itself is a relatively inert species, and most of its deleterious effects, such as the induction of lipid peroxidation and DNA damage, are due to its ability to stimulate •OH formation via the transition metal-catalyzed Haber-Weiss reaction. The availability of transition metals in the cells is thought to be limiting for •OH formation.
Therefore, the formation of •OH can only occur at those intracellular sites where these transition metals are available in reduced form. For continuous •OH formation, it is necessary to keep the transition metals in their reduced form. This can be achieved by O2•- (the classical Haber-Weiss reaction) but also by other reducing agents, such as ascorbate and GSH.
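For clarity, the iron-catalyzed Haber-Weiss cycle referred to here can be written as follows (standard chemistry, given as a general scheme):
(i) Fe3+ + O2•- → Fe2+ + O2
(ii) Fe2+ + H2O2 → Fe3+ + •OH + OH- (Fenton reaction)
Net: O2•- + H2O2 → O2 + •OH + OH-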
Exposure to extracellular O2•-
O2•- can be added directly to the culture medium in the form of potassium superoxide. In this way, a high but quickly diminishing concentration of O2•- is obtained. Due to spontaneous as well as enzyme-catalyzed dismutation, O2•- is converted into H2O2, which in turn may give rise to •OH if transition metal ions are present. The effects of O2•- will therefore be almost similar to those of H2O2. However, extracellular O2•- does not readily penetrate the cellular membrane, in contrast to the hydroperoxyl radical HO2•, the protonated (uncharged) form of O2•-, which is thought to induce lipid peroxidation [START_REF] Kappus | Lipid peroxidation: mechanisms, analysis, enzimology and biological relevance[END_REF].
Theoretically, •OH is also able to penetrate the cellular membrane; however, this radical is so reactive that it is most likely to react with components of the plasma membrane. Moreover, when the effect of 'pure' O2•- is to be studied, catalase should be added to the culture medium to remove H2O2 and to prevent •OH formation. The purity of the catalase preparation is extremely important; indeed, many catalase preparations appear to be contaminated with SOD activity. To obtain a flux of O2•- over a longer period of time, it is possible to generate O2•- enzymatically by adding xanthine (X)
and xanthine oxidase (XO) to the culture medium. In this way, it is possible to generate continuous fluxes of O2•- for a longer period of time, and the progress of the X/XO reaction can be followed by monitoring the formation of urate, the primary product of the reaction. In such experiments, it was observed that the formation of urate slows down after 30-40 minutes, due to the inactivation of XO [START_REF] Zimmerman | Active oxygen acts as a promoter of transformation in mouse embryo C3H/10T½/C18 fibroblasts[END_REF]. Using this system, a mixture of O2•- and H2O2 will be generated, the ratio of which depends on the SOD and catalase activities present in the culture system.
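As an indicative overall stoichiometry (XO reduces O2 by both one- and two-electron transfers, so the actual O2•-:H2O2 ratio varies with conditions):
(i) xanthine + 2 O2 + H2O → urate + 2 O2•- + 2 H+ (univalent route)
(ii) xanthine + O2 + H2O → urate + H2O2 (divalent route)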
There are several factors that modulate the effects of extracellular H2O2 and O2•-. When cells are exposed to H2O2 and O2•- in complex medium, the composition of the medium may modulate the ultimate effect. For example, pyruvate, present in almost all cell culture media, is known to scavenge H2O2 [START_REF] Andrae | Pyrnvate and related alpha-ketoacids protect mammalian cells in culture against hydrogen-peroxide induced cytotoxicity[END_REF], resulting in a reduced effective dose. On the other hand, ascorbic acid, which is present at high concentrations in some standard culture media, may act either as an antioxidant or as a prooxidant. Ascorbic acid can stimulate •OH formation, probably due to its ability to reduce transition metals. Another important factor influencing H2O2 toxicity is the cell population density.
Cells at high densities appear to be much less susceptible to H2O2 than cells at low densities [START_REF] Spitz | Hydrogen peroxide or heat shock induces resistance to hydrogen peroxide in Chinese hamster fibroblasts[END_REF].
This effect is caused by the ability of the cells to catabolize H2O2.
Besides the above-mentioned effects, the addition or generation of O2•- or H2O2 in complex medium might result in the formation of many other secondary reactive species. Therefore, when using these types of models, it is recommended to use the simplest possible medium. For short-term exposures, when O2•- or H2O2 is added directly to the cultures, the exposure time may be as short as 30 min. It makes no sense to 'expose' cells for many hours or even overnight, since the effective exposure is much shorter. Short exposure times have the advantage that complicating factors, such as fetal calf serum, can be omitted from the medium. During relatively short exposure times, however, it is important to add glucose and glutamine to the exposure media, in order to provide the cells with their major energy sources. In contrast, for longer exposure times, when a more chronic stress is desirable, it may be necessary to include serum proteins in the exposure medium.
Therefore, the application of peroxides leads only to a transient oxidative stress, depending on the ability of the cultured cells to detoxify the peroxide. Moreover, using O2•- or H2O2 models to generate ROS in vitro makes it difficult to quantify the generated reactive oxygen species or to clearly define the identity of the ROS inducing the oxidative stress. Precisely for these reasons, and because the purpose of this experimental thesis work was to develop a reproducible oxidative stress model in SMCs (A-10 cell line) under controlled conditions, we did not favour these kinds of models.
Oxidative stress and metal ions
Detailed studies have shown that redox-active metals such as iron (Fe), copper (Cu), chromium (Cr) and cobalt (Co) undergo redox-cycling reactions and possess the ability to produce reactive radicals such as the superoxide anion radical and nitric oxide in biological systems. Disruption of metal ion homeostasis may lead to oxidative stress, causing interference with signal transduction pathways that play important roles in cell growth and development [START_REF] Valko | Free radicals, metals and antioxidants in oxidative stress-induced cancer[END_REF]. The mechanism of metal-induced formation of free radicals is tightly influenced by the action of cellular antioxidants. Many low-molecular-weight antioxidants (ascorbic acid (vitamin C), α-tocopherol (vitamin E), GSH, carotenoids, flavonoids, and others) are capable of chelating metal ions, thus reducing their catalytic ability to form ROS. In particular, the oxidative stress models induced by Fe and Cu are described below.
Iron induces oxidative stress
Iron occurs in two oxidation states, Fe2+ (ferrous ion) and Fe3+ (ferric ion). Ferrous ions are soluble in biological fluids and, in the presence of hydrogen peroxide, generate hydroxyl radicals. They are also unstable in aqueous media and tend to react with molecular oxygen to form ferric ions and O2•-. This oxidized form of iron is insoluble in water at neutral pH and precipitates in the form of ferric hydroxide [START_REF] Jones-Lee | Role of iron chemistry in controlling the release of pollutants from resuspended sediments[END_REF].
The toxic effects of the free ferrous ion are substantiated by its ability to catalyze, via the Fenton reaction, the generation of damaging reactive free radicals [START_REF] Ganz | Hepcidin, a key regulator of iron metabolism and mediator of anemia of inflammation[END_REF]. Indeed, the ferrous ion is oxidized by H2O2 to the ferric ion, forming a hydroxyl radical and a hydroxide ion (i). The ferric ion is then reduced back to the ferrous ion by another molecule of hydrogen peroxide, forming a hydroperoxyl radical and a proton (ii):
(i) Fe2+ + H2O2 → Fe3+ + •OH + OH-
(ii) Fe3+ + H2O2 → Fe2+ + HOO• + H+
The hydroxyl radical is highly reactive, with a half-life in aqueous solution of less than 1 ns [START_REF] Pastor | A detailed interpretation of OH radical footprints in a TBP-DNA complex reveals the role of dynamics in the mechanism of sequence-specific binding[END_REF], and it is able to abstract a hydrogen atom from polyunsaturated fatty acids to initiate lipid peroxidation. When the metal is iron or copper, the hydroxyl radical produced according to the Fenton reaction is the most abundant reactive species formed in vivo, and it reacts close to its site of formation.
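A back-of-the-envelope estimate illustrates this locality (assuming a diffusion coefficient D ≈ 10^-9 m²/s, a typical value for small solutes in water): the mean diffusion distance within the ~1 ns lifetime is
d ≈ √(6Dτ) = √(6 × 10^-9 m²/s × 10^-9 s) ≈ 2.4 nm,
i.e. less than the diameter of a single average protein, consistent with •OH reacting essentially at its site of formation.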
Although Fenton chemistry is known to occur in vitro, its significance under physiological conditions is not fully understood.
Copper induces oxidative stress
The most common oxidation states of copper in living organisms are Cu2+ (cupric ion) and Cu+ (cuprous ion). Copper is a cofactor of many enzymes involved in redox reactions, such as cytochrome c oxidase, ascorbate oxidase, and superoxide dismutase. In addition to its enzymatic roles, copper is used in biological systems for electron transport [START_REF] Valko | Free radicals, metals and antioxidants in oxidative stress-induced cancer[END_REF]. Copper can catalyze ROS formation via Fenton and Haber-Weiss chemistry. Under physiological conditions, free copper exists only very rarely inside cells. Copper can induce oxidative stress by two mechanisms. First, it can directly catalyze the formation of ROS via a Fenton-like reaction [START_REF] Valko | Free radicals, metals and antioxidants in oxidative stress-induced cancer[END_REF][START_REF] Liochev | The Haber-Weiss cycle-70 years later: an alternative view[END_REF]. Second, exposure to elevated levels of copper significantly decreases glutathione levels [START_REF] Speisky | Generation of superoxide radicals by copper-glutathione complexes: redox-consequences associated with their interaction with reduced glutathione[END_REF].
The cupric ion, in the presence of superoxide anion radical or biological reductants such as ascorbic acid or GSH, can be reduced to cuprous ion (i), which is capable of catalyzing the formation of reactive hydroxyl radicals through the decomposition of hydrogen peroxide via the Fenton reaction (ii) [480,[START_REF] Prousek | Fenton reaction after a century[END_REF][START_REF] Barbusinski | Fenton reaction-controversy concerning the chemistry[END_REF]:
(i) Cu2+ + O2•- → Cu+ + O2
(ii) Cu+ + H2O2 → Cu2+ + •OH + OH- (Fenton reaction)
Again, copper-induced formation of ROS can cause peroxidation of lipids, as clearly demonstrated in in vitro studies. GSH can suppress copper toxicity by directly chelating the metal [START_REF] Mattie | Copper-inducible transcription: regulation by metal-and oxidative stressresponsive pathways[END_REF], maintaining it in a reduced state that is unavailable for redox cycling. Disruption of copper homeostasis resulting in elevated pools of copper may contribute to a shift of the redox balance towards a more oxidizing environment by depleting glutathione levels [START_REF] Linder | Biochemistry of Copper[END_REF]. The depletion of glutathione may enhance the cytotoxic effects of ROS and allow the metal to be more catalytically active, thus producing higher levels of ROS. The large increase in copper toxicity following GSH depletion clearly demonstrates that GSH is an important cellular antioxidant acting against copper toxicity [START_REF] Steinebach | Role of cytosolic copper, metallothionein and glutathione in copper toxicity in rat hepatoma tissue culture cells[END_REF].
Iron and copper are two redox-active metal ions of great importance in the organism: both are cofactors of many enzymes involved in redox reactions and metabolic processes, and metal-induced oxidative stress could therefore provide a useful oxidant stress model. However, both metals are implicated in the non-enzymatic decomposition of S-nitrosothiols and S-nitrosated proteins, causing chemical reduction of the S-NO bond [START_REF] Jaffrey | Detection and characterization of protein nitrosothiols[END_REF]. Therefore, considering the object of this second part of the thesis, this model cannot be applied.
3.3.4 Oxidative stress and free radical generators: 2,2'-azobis(2-amidinopropane) dihydrochloride (AAPH)
2,2'-Azobis(2-amidinopropane) dihydrochloride (AAPH) is a water-soluble azo compound that is often employed in the study of lipid peroxidation and for the characterization of antioxidants in vitro. Spontaneous decomposition of AAPH at physiological temperature (37 °C) produces one mole of nitrogen and two moles of carbon radicals (R•). The carbon radicals can either combine to produce stable products or react with molecular oxygen to generate peroxyl radicals (ROO•), or react with the polyunsaturated lipids of cell membranes to initiate their peroxidation [START_REF] Noguchi | 2,29-Azobis (4-Methoxy-2,4-Dimethylvaleronitrile), a new lipid-soluble azo Iinitiator: application to oxidations of lipids and low-density lipoprotein in solution and in aqueous dispersions[END_REF]. At 37 °C and pH 7, the half-life of AAPH is about 175 h; consequently, the ROO• generation rate is virtually constant for the first few hours [START_REF] Peluso | Intestinal motility disorder induced by free radicals: a new model mimicking oxidative stress in gut[END_REF] and is directly proportional to the AAPH concentration.
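From this half-life, the radical generation rate can be estimated (a minimal sketch; the radical escape efficiency e is an assumed value of about 0.5, within the range usually reported for azo initiators):
k_d = ln 2 / t1/2 = 0.693 / (175 h × 3600 s/h) ≈ 1.1 × 10^-6 s^-1
R_g = 2 · e · k_d · [AAPH] ≈ 1.1 × 10^-6 · [AAPH] per second
so a 10 mM AAPH solution generates radicals at roughly 10^-8 M/s, i.e. about 40 µM per hour, which explains why the radical flux is effectively constant over the first few hours of an experiment.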
Because AAPH is water soluble, the rate of free radical generation from AAPH can be easily controlled and measured. AAPH has been used in vitro to determine both the antioxidant properties of compounds [START_REF] Blache | Determination of sterols, oxysterols and fatty acids of phospholipids in cells and lipoproteins: a one sample method[END_REF][START_REF] Blache | Oxidant stress: the role of nutrients in cell-lipoprotein interactions[END_REF] and the total defense against free radicals [START_REF] Durand | Pro-thrombotic effects of a folic acid deficient diet in rat platelets and macrophages related to elevated homocysteine and decreased n-3 polyunsaturated fatty acids[END_REF][START_REF] Girodon | Effect of two year supplementation with low dose antioxidant vitamins and/or minerals in elderly subjects on levels of nutrients and on antioxidant defense parameters[END_REF]. It has been extensively used as a free radical initiator for biological studies, and the haemolysis induced by AAPH provides a good approach for studying membrane damage induced by free radicals [START_REF] Cai | Antioxidative and free radical scavenging effects of ecdysteroids from Serratula strangulata[END_REF]. This small molecule has been used as a thermal source of free radicals in studies of the oxidation of red blood cells, plasma, whole blood, HeLa cells, various tissues, and even the whole body [START_REF] Tang | Free-radical-scavenging effect of carbazole derivatives on AAPH-induced hemolysis of human erythrocytes[END_REF], and these studies report that AAPH is able to cause various types of pathological changes through cellular oxidative damage.
The use of this azo compound on different cell culture types is supported by several lines of evidence. Indeed, Hyuck et al. evaluated intracellular oxidants in human premonocytic U937 cells after exposure to AAPH, in order to analyze the role of cytosolic NADP+-dependent isocitrate dehydrogenase (ICDH) in cellular defense against lipid peroxidation-mediated oxidative damage [START_REF] Yang | Oxalomalate, a competitive inhibitor of NADP + -dependent isocitrate dehydrogenase, enhances lipid peroxidation-mediated oxidative damage in U937 cells[END_REF]. In the same year, Aldini et al. investigated the molecular mechanisms of procyanidins (polyphenols) as cardioprotective agents in an endothelial stress model based on AAPH stimulation.
They studied the efficacy of procyanidins in protecting ECs against ONOO- and in modulating endothelium-dependent NO release in the human internal mammary artery [START_REF] Aldini | Procyanidins from grape seeds protect endothelial cells from peroxynitrite damage and enhance endothelium-dependent relaxation in human artery: new evidences for cardio-protection[END_REF]. More recently, Scarpato et al. investigated the cytotoxic and genotoxic effects of ROO• and ONOO- generated from AAPH and SIN-1 in a human microvascular endothelial cell line and in human peripheral lymphocytes, respectively [START_REF] Scarpato | Cytotoxicity and genotoxicity studies of two free-radical generators (AAPH and SIN-1) in human microvascular endothelial cells (HMEC-1) and human peripheral lymphocytes[END_REF]. Furthermore, AAPH was also used to study the effects of oxidative stress on the cardiovascular system during chick embryo development [START_REF] He | A new oxidative stress model, 2,2-azobis(2-amidinopropane) dihydrochloride induces cardiovascular damages in chicken embryo[END_REF]. In particular, when nine-day-old (stage HH 35) chick embryos were treated with different concentrations of AAPH inside the air chamber, the LD50 value for AAPH was established to be 10 mmol/egg. At this concentration, AAPH was found to significantly reduce the density of the blood vessel plexus developed in the chorioallantoic membrane of HH 35 chick embryos. The impact of AAPH on younger embryos was also examined; AAPH inhibited the development of the vascular plexus on the yolk sac in HH 18 embryos and dramatically repressed the development of blood islands in HH 3+ embryos.
These results imply that AAPH-induced oxidative stress can impair the developmental processes associated with vasculogenesis and angiogenesis. Furthermore, He et al. observed heart enlargement in HH 40 embryos following AAPH treatment, where the left ventricle and interventricular septum were thickened in a dose-dependent manner due to myocardial cell hypertrophy. Therefore, oxidative stress induced by AAPH can lead to damage of the cardiovascular system in the developing chick embryo. Finally, AAPH was used in a vascular smooth muscle cell line to study the direct effects of free radicals on SMC cholesterol metabolism [START_REF] Gesquie`re | Oxidative stress leads to cholesterol accumulation in vascular smooth muscle cells[END_REF], on cAMP concentration and on the activity of the cAMP-dependent enzymes related to cholesterol homeostasis [START_REF] Gesquiere | Role of the cyclic amp-dependent pathway in free radical induced cholesterol accumulation in vascular smooth muscle cells[END_REF].
Based on this information, and excluding the oxidative stress models based on ROS and metal ions for the reasons mentioned above (instability, interference with S-nitrosothiols), AAPH can be considered an applicable model for our study, allowing stable and reproducible induction of oxidative stress in SMCs without any interference with an S-nitrosothiol treatment.
Smooth muscle cells and protein S-nitrosation
Apart from ROS and the interaction of NO with ROS (which causes the production of several RNS and occurs only under conditions of excessive ROS production), and besides its direct role in vascular function, NO also participates in redox signaling by modifying lipids (via nitration of fatty acids) and proteins (via S-nitrosation of cysteine residues) [START_REF] Villacorta | Electrophilic nitro-fatty acids inhibit vascular inflammation by disrupting LPSdependent TLR4 signalling in lipid rafts[END_REF][START_REF] Cui | Nitrated fatty acids: Endogenous antiinflammatory signaling mediators[END_REF]. With respect to the latter, NO can directly modify sulfhydryl residues of proteins through S-nitrosation, resulting in the formation of S-nitrosothiols. As already described (chapter 1, section 1.3), S-nitrosation is a redox-dependent, thiol-based, reversible post-translational protein modification that involves attachment of an NO moiety to a nucleophilic protein sulfhydryl group, and it is involved in redox-based cellular signalling [START_REF] Foster | Protein S-nitrosylation in health and disease: a current perspective[END_REF][START_REF] Gaston | S-Nitrosothiol signaling in respiratory biology[END_REF][START_REF] Gow | Basal and stimulated protein Snitrosylation in multiple cell types and tissues[END_REF][START_REF] Paige | Nitrosothiol reactivity profiling identifies S-nitrosylated proteins with unexpected stability[END_REF] (Fig. 14). S-nitrosation is analogous to phosphorylation, glutathionylation, palmitoylation, acetylation and other physiological modifications of proteins. (Legend of Fig. 14: ROS trigger oxidative modification and NO triggers S-nitrosation of many target molecules, together with activation of pro-oxidant and antioxidant enzymes, to regulate the redox status of SMCs and ECs; adapted from [START_REF] Hsieh | Shear-induced endothelial mechanotransduction: the interplay between reactive oxygen species (ROS) and nitric oxide (NO) and the pathophysiological implications[END_REF].)
There are emerging data suggesting that S-nitrosation of proteins plays an important role both in normal physiology and in a broad spectrum of human diseases. One instance relates to the regulation of vascular tone, where the balance between Ang-II and NO appears crucial for maintaining the homeostasis of the cardiovascular and renal systems. In fact, when this homeostatic balance becomes perturbed, the actions of Ang-II predominate over those of nitric oxide [START_REF] Schulman | Interaction between nitric oxide and angiotensin II in the endothelium: role in atherosclerosis and hypertension[END_REF]. In this regard, an important point under investigation in the laboratory is precisely the effect of S-nitrosation of Ang-II receptors by GSNO (the S-nitrosothiol thoroughly studied in our laboratory) in the cerebral circulation, in order to understand the real impact of this important post-translational mechanism. There is cross-talk between S-nitrosation, phosphorylation and other post-translational signaling mechanisms that affect protein interactions. In many cases, pathophysiology correlates with hypo- or hyper-S-nitrosation of specific protein targets, rather than with a general cellular insult due to loss of, or enhanced, nitric oxide synthase activity. In addition, dysregulated S-nitrosation results from a modification of NO availability (quantity and/or localization). NO availability results not only from alterations of the expression, compartmentalization and/or activity of nitric oxide synthases, but also reflects the contribution of denitrosylases, including GSNO-metabolizing enzymes such as GSNO reductase, which releases GSNHOH, a non-NO-active molecule, or GGT, which releases Cys-Gly-NO, an active NO molecule [START_REF] Dahboul | Endothelial γ-glutamyltransferase contributes to the vasorelaxant effect of S-nitrosoglutathione in rat aorta[END_REF]. If, on the one hand, the irreversible oxidation of thiols can block physiological modification by S-nitrosation or S-glutathionylation and thereby interfere with normal physiological signaling [START_REF] Adachi | S-glutathiolation by peroxynitrite activates SERCA during arterial relaxation by nitric oxide[END_REF], on the other hand it has been suggested that NO can protect cells from oxidative stress, whereas loss or inhibition of NOS enhances oxidative stress. Altogether, the emerging picture shows that protein S-nitrosation not only leads to changes in protein structure and function, but also prevents the modified thiol(s) from further irreversible oxidative/nitrosative modification.
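As a schematic illustration of how such modifications can be exchanged between thiol pools (a commonly invoked route, given here as a general scheme rather than a finding of the studies cited above), low-molecular-weight S-nitrosothiols such as GSNO can transfer their NO moiety to protein thiols by transnitrosation:
GSNO + P-SH ⇌ P-S-NO + GSH
so that the extent of protein S-nitrosation depends on both the GSNO pool and the cellular GSH level.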
S-nitrosation influences cell signalling and redox regulation
Protein S-nitrosation is a reversible redox process with high spatial and temporal specificity. One determinant governing the specificity of post-translational protein modification by NO is the colocalization of NO sources and target proteins, which is based, at least in part, on specific protein-protein interactions with NO synthases. S-nitrosation is also a temporally controlled signaling event, which depends on the formation of NO by NOS and of other nitrosylating equivalents.
Many Cys-containing proteins, such as signaling molecules and transcription factors, are potential targets of ROS-dependent oxidative and RNS-dependent nitrosative modifications. Physiologically, NO regulates numerous cellular responses through S-nitrosation of proteins. NO acts as an antioxidant by inhibiting NADPH oxidase activity via S-nitrosation [START_REF] Selemidis | Nitric oxide suppresses NADPH oxidase-dependent superoxide production by S-nitrosylation in human endothelial cells[END_REF], and it was shown to promote the ROS-scavenging activity of thioredoxin-1 via S-nitrosation of the Cys69 residue [START_REF] Liu | Thioredoxin-1 ameliorates myosin-induced autoimmune myocarditis by suppressing chemokine expressions an leukocyte chemotaxis in mice[END_REF][START_REF] Haendeler | Redox regulatory and antiapoptotic functions of thioredoxin depend on S-nitrosylation at cysteine 69[END_REF]. Indeed, ECs subjected to physiological shear stress show increased protein S-nitrosation [START_REF] Huang | Shear flow increases S-nitrosylation of proteins in endothelial cells[END_REF][START_REF] Hoffmann | Shear stress increases the amount of S-nitrosylated molecules in endothelial cells: important role for signal transduction[END_REF], independently of cGMP-dependent signaling. In contrast, treatment of ECs with TNF-α or mildly oxidized LDL reduces S-nitrosation [START_REF] Hoffmann | TNF alpha and oxLDL reduce protein S-nitrosylation in endothelial cells[END_REF]. Early studies demonstrated that the activity of AP-1 (an important transcription factor that regulates gene expression in response to a variety of stimuli and controls a number of cellular processes, including differentiation, proliferation and apoptosis [START_REF] Ameyar | A role for AP-1 in apoptosis: the case for and against[END_REF]) is altered by S-nitrosation [START_REF] Marshall | Nitrosation and oxidation in the regulation of gene expression[END_REF] and by oxidation of Cys residues [START_REF] Xanthoudakis | Redox Activation of Fos Jun DNA-Binding Activity Is Mediated by a DNA-Repair Enzyme[END_REF]. Furthermore, H2O2 treatment inhibited AP-1 activity and decreased eNOS promoter activity [START_REF] Stuehr | Mammalian nitric oxide synthases[END_REF]. NF-κB, AP-1 and p53 all contain reactive thiols in their DNA-binding regions, the modification of which alters their binding to DNA.
Regulation of protein interactions with chromatin can also involve S-nitrosation. For example, in neuronal development, brain-derived neurotrophic factor activates nNOS, which nitrosates histone deacetylase 2 (HDAC2; cysteines 262 and 274), causing HDAC2 to dissociate from chromatin [START_REF] Nott | S-nitrosylation of histone deacetylase 2 induces chromatin remodeling in neurons[END_REF]. This increases histone acetylation, permitting transcription of beneficial target genes regulating dendritic growth [START_REF] Nott | S-nitrosylation of histone deacetylase 2 induces chromatin remodeling in neurons[END_REF]; but it can also have potentially adverse effects, including increased expression of metastatic tumor antigen 1 [START_REF] Pakala | Regulation of NF-kappaB circuitry by a component of the nucleosome remodeling and deacetylase complex controls inflammatory response homeostatis[END_REF]. Additionally, cell-cycle regulation appears to involve S-nitrosation and denitrosylation of critical proteins involved in mitosis [START_REF] Marozkina | S-Nitrosylation signalling regulates cellular protein interactions[END_REF]. As with other cellular effects, however, the role of S-nitrosation signaling in epigenetics and cell-cycle regulation is only beginning to be understood.
Furthermore, a number of nuclear regulatory protein interactions are modified by S-nitrosation. For example, hypoxia-inducible factor 1α (HIF1-α) is stabilized by physiological S-nitrosothiol levels through S-nitrosation of the von Hippel-Lindau protein (Cys162), preventing its degradation [START_REF] Pawloski | Export by red blood cells of nitric oxide bioactivity[END_REF]. HIF1-α can then interact with HIF1-β to bind to hypoxia-responsive elements in gene promoter regions, resulting in the transcription of genes such as VEGF. S-nitrosation also has a variety of effects on NF-κB activity, including S-nitrosation of both NF-κB subunits p50 (Cys62) and p65 (Cys38), as well as of IκB kinase [START_REF] Marshall | S-nitrosylation: physiological regulation of NF-kappaB[END_REF][START_REF] Reynaert | Nitric oxide represses inhibitory IκB kinase through S-nitrosylation[END_REF]. The net effect of these reactions is generally to increase the cytosolic NF-κB-IκB interaction and/or to prevent nuclear translocation of NF-κB; these effects prevent the interaction of NF-κB with inflammatory gene promoters, thereby inhibiting inflammation.
It should therefore be noted that, on the one hand, S-nitrosation can have a strong, targeted impact on protein signaling, and alterations in this process can promote disorders in many diseases. On the other hand, S-nitrosation in normal and disturbed cell function offers, in principle, novel therapeutic opportunities in a wide range of human diseases, such as the cardiovascular diseases associated with oxidative stress. Indeed, in a context characterized by a significant decrease in NO bioavailability and an increase in oxidative stress, which underlies most of these pathologies, S-nitrosothiols represent promising candidates as NO donors to maintain an appropriate level of NO and treat NO deficiency, in contrast to other NO-related therapeutics, many of which produce tolerance phenomena and oxidative stress (Chapter I, section 1.4).
GSNO: a key player in S-nitrosation
Among S-nitrosothiols, most of our attention is focused on GSNO, the endogenous/physiological storage and transport form of NO. Precisely for these reasons, the regulation of the cardiovascular system by this low-molecular-weight intracellular S-nitrosothiol is of particular physiological interest and is under active investigation in our laboratory. Our common purpose is to evaluate the ability of GSNO to regulate NO bioavailability, in different cell-free or cell-based systems and under different conditions, in order to develop GSNO pharmaceutical forms, which are still lacking, for the treatment of vascular diseases. Indeed, despite its powerful antiplatelet, vasodilator, antimicrobial and antithrombotic effects (Chapter I, section 1.4), this molecule is not yet present in any pharmaceutical composition. This may be related to the fast and unpredictable rate of decomposition of GSNO. In vitro, GSNO decomposition is promoted by factors such as pH, temperature and metal ions. In vivo, it is affected by enzymatic activities such as those of GSNO reductase, CR1, PDI, the Trx system and GGT.
Therefore, considering that its stability, limited by enzymatic and non-enzymatic degradation, is too low for clinical application to provide a long-lasting effect and deliver appropriate NO concentrations to target tissues, GSNO, and RSNO in general, have to be protected. Encapsulation is an interesting way to overcome degradation and provide protection, but GSNO is difficult to encapsulate because of its hydrophilic nature and the instability of the S-NO bond during the formulation process. In a previous study in our laboratory [START_REF] Wu | Time lasting Snitrosoglutathione polymeric nanoparticles delay cellular protein S-nitrosation[END_REF], the direct encapsulation of GSNO within polymeric nanoparticles was described, demonstrating that the activity of this fragile molecule is preserved through the formulation process. Even if the potential of GSNO for NO supplementation is limited by its poor stability and high hydrophilicity (the release obtained may not be sustained enough for a chronic in vivo therapeutic effect), these studies represented a first step towards chronic oral delivery of GSNO, providing opportunities for the treatment of vascular diseases.
GSNO: NO storage and transportation
The existence of NO stores implicated in physiological responses has been proposed in the literature to explain endothelium-dependent relaxing effects that persist after blockade of NO synthase by inhibitors [START_REF] Kakuyama | Endothelium-dependent sensory NANC vasodilatation: involvement of ATP, CGRP and a possible NO store[END_REF][START_REF] Chauhan | NO contributes to EDHF-like responses in rat small arteries: a role for NO stores[END_REF]. NO stores are also involved in the long-lasting hyporesponsiveness to vasoconstrictors elicited by endotoxin [START_REF] Muller | Evidence for N-acetylcysteine-sensitive nitric oxide storage as dinitrosyl-iron complexes in lipopolysaccharide-treated rat aorta[END_REF] and by NO donors [START_REF] Terluk | Involvement of soluble guanylate cyclase and calciumactivated potassium channels in the long-lasting hyporesponsiveness to phenylephrine induced by nitric oxide in rat aorta[END_REF][START_REF] Silva-Santos | Long lasting changes of rat blood pressure to vasoconstrictors and vasodilators induced by nitric oxide donor infusion: involvement of potassium channels[END_REF]. Alencar et al. showed that rat aorta exposed to GSNO displayed, even after washout of the drug, a persistent increase in cysteine-NO residues and in NO content, a persistent attenuation of the effect of vasoconstrictors, and a relaxant response upon addition of low-molecular-weight thiols [START_REF] Alencar | S-Nitrosating nitric oxide donors induce long-lasting inhibition of contraction in isolated arteries[END_REF]. Rat mesenteric and porcine coronary arteries exposed in vitro to GSNO, as well as aorta and mesenteric arteries removed from rats treated in vivo with GSNO by infusion, displayed similar modifications of contraction [START_REF] Alencar | S-Nitrosating nitric oxide donors induce long-lasting inhibition of contraction in isolated arteries[END_REF].
Together, these studies support the idea that S-nitrosation of cysteine residues is involved in the long-lasting effects of NO on arterial tone. They suggest that S-nitrosation of tissue thiols is a mechanism for the formation of local NO stores from which biologically active NO can subsequently be released [START_REF] Alencar | S-Nitrosating nitric oxide donors induce long-lasting inhibition of contraction in isolated arteries[END_REF][START_REF] Sarr | Targeted and persistent effects of NO mediated by S-nitrosation of tissue thiols in arteries with endothelial dysfunction[END_REF]. Based on these considerations, in which NO stores are involved in the vascular response to vasoconstrictors and in the regulation of vascular tone, modifications of NO stores can be used as a model to evaluate the therapeutic efficiency of free NO donors or of NO-donor delivery systems. Precisely, in another work [START_REF] Wu | Polymer nanocomposites enhance S-nitrosoglutathione intestinal absorption and promote the formation of releasable nitric oxide stores in rat aorta[END_REF], we evaluated the efficiency of oral delivery of GSNO-loaded alginate/chitosan nanocomposite particles (GSNO-acNCP) through the formation of NO stores in rat aorta. After validation of NO absorption across a cell model of the intestinal barrier, the selected GSNO-acNCP were orally administered to Wistar rats. The efficient loading, protection and sustained release of GSNO provided by these formulations allowed GSNO to reach the bloodstream and contributed to the formation of an NO reservoir, by transnitrosation, inside the vascular wall.
Accordingly, 17 h after oral administration of this formulation to Wistar rats, we observed vascular hyporeactivity to the vasoconstrictor phenylephrine. This is probably related to the formation of a releasable NO store, as it was possible to mobilize this store using N-acetylcysteine (NAC), a thiol which can displace NO from cysteine-NO residues to induce relaxation. These new delivery systems for NO donors may be particularly well suited to the oral treatment of cardiovascular diseases.
At this stage, it is conceivable that GSNO could be one of the best NO donors to use in the cardiovascular field, precisely because it is a physiological source of NO, exhibits higher stability than NO, does not induce tolerance or oxidative stress, is well suited to pharmaceutical formulation, and mediates protein S-nitrosation, thereby playing an important role in vascular signaling [START_REF] Marozkina | S-Nitrosylation signalling regulates cellular protein interactions[END_REF]. Even if the role of ROS in oxidative modifications and that of NO in the S-nitrosation of many target molecules emerge from the literature, little is known about the possible interference of free radicals with the metabolism of GSNO and the subsequent NO-mediated molecular events.
For this reason, the last part of this chapter will focus on the possible use of GSNO to counteract the NO deficiency occurring under oxidative stress, and on the effects of S-nitrosation on specific target molecules involved in the smooth muscle cell phenotype. Considering that SMCs are involved in contraction and in the regulation of blood vessel tone, thereby distributing blood flow and regulating blood pressure, as well as in signaling communication, the identification of proteins nitrosated under oxidant conditions could help us to better understand possible changes in their phenotype, function and biological processes in cardiovascular disorders associated with oxidative stress.
Rat embryonic aortic smooth muscle cells (A10 cell line), exposed in vitro to the free radical generator AAPH discussed above, will be used as the oxidative stress model. The effects will be evaluated on the expression/activity of selected GSNO-metabolizing enzymes and on the levels of NO release. Moreover, considering that protein S-nitrosation can itself be modified, and in turn modulates signaling pathways under oxidative stress, the impact of GSNO on S-nitrosation will be assessed. The extent and distribution of GSNO-induced S-nitrosation of cellular proteins will also be analyzed through the biotin switch method [START_REF] Jaffrey | The biotin switch method for the detection of S-nitrosylated proteins[END_REF] in combination with proteomic approaches [START_REF] Hao | SNOSID, a proteomic method for identification of cysteine S-nitrosylation sites in complex protein mixtures[END_REF][START_REF] Martinez-Ruiz | Detection and proteomic identification of S-nitrosylated proteins in endothelial cells[END_REF].
This analysis reveals that many of the additionally identified proteins are involved in the SMC cytoskeleton and contractile machinery.
These studies underline the significant role of S-nitrosation by GSNO, suggesting novel mechanisms for protein S-nitrosation in disease and adding to an expanding list of potential therapeutic targets. Overall, even though oxidative stress differentially modulated GGT and PDI, higher levels of proteins S-nitrosated by GSNO were identified. In conclusion, these results may prompt the identification of appropriate biomarkers for the evaluation of GSNO bioactivity in the treatment of cardiovascular diseases.
Introduction
Cardiovascular diseases such as atherosclerosis, pulmonary hypertension, thrombosis, ischemia and cardiac arrhythmia are usually associated with oxidative stress and a reduced bioavailability of nitric oxide (NO) [1]. To overcome this deficiency, several NO-related therapeutics have emerged over the past few decades, such as nitrosamines [2], organic nitrates [3], and N-diazeniumdiolates [4].
However, these compounds induce undesirable effects, such as tolerance and hypotension, and are often considered oxidative stress enhancers in environments rich in oxygen and/or radical species, where they may favour the formation of peroxynitrite ions (ONOO-), a reactive nitrogen species (RNS) producing deleterious protein nitration [5,6,7,8]. Other NO compounds, such as S-nitrosothiols, may represent safer alternatives [9,10]. Several investigations on the therapeutic potential of S-nitrosothiols have focused on S-nitrosoglutathione (GSNO), the physiological storage form of NO in tissues, owing to the absence of recorded side effects in preclinical studies [11,12,13].
However, even though S-nitrosothiols are not prooxidant per se, the ability of GSNO to regulate NO bioavailability under oxidative stress conditions has not yet received sufficient attention.
Oxidative stress in the vessel wall has been shown to involve the tunica media, where smooth muscle cells (SMC) can produce reactive oxygen species (ROS), e.g. the superoxide anion (O2•-), following the activation of their own NADPH oxidase. SMC probably represent a privileged target of ROS [14]: their exposure to ROS-generating systems actually stimulates migration, proliferation and growth [15,16,17,18]. SMC are also a main target of (endothelium-derived or exogenous) NO, which thereby exerts its vasorelaxant effects.
NO, besides its direct role in vascular function, also participates in redox signaling by modifying proteins via S-nitrosation. S-nitrosation, the formation of a covalent bond between NO and the sulfhydryl group of a cysteine residue, is a redox-dependent, thiol-based, reversible post-translational modification of proteins [19,20,21]. There are emerging data suggesting that S-nitrosation of proteins plays an important role both in physiology and in a broad spectrum of pathologies [22]. Pathophysiology correlates with hypo- or hyper-S-nitrosation of specific protein targets. This dysregulation of protein S-nitrosation results from a modification of NO availability (quantity and/or localization). NO availability reflects not only alterations of the expression, compartmentalization and/or activity of NO synthases, but also the contribution of denitrosylases, including GSNO-metabolizing enzymes such as GSNO reductase, which releases GSNHOH, a non-active NO-related molecule, and gamma-glutamyltransferase (GGT), which releases cys-gly-NO, an active NO-related molecule [23]. Redoxins such as protein disulfide isomerase (PDI), known to reverse thiol oxidation, can also catabolize GSNO to release NO [24].
In the present study, we aimed to assess the suitability and potency of GSNO as an NO donor in an oxidative stress environment. Its metabolism by two specific redox enzymes (GGT and PDI), the cellular thiol redox status and protein S-nitrosation were analyzed in SMC exposed to oxidative stress induced by a free radical generator, 2,2'-azobis(2-amidinopropane) dihydrochloride (AAPH). We more specifically evaluated whether oxidative stress modulates the bioactivity of GSNO by favouring the release of NO and the S-nitrosation of potentially critical protein targets related to cell contraction, morphogenesis and movement.
Materials and methods
Materials
Chemicals
All reagents were of analytical grade and all solutions were prepared with ultrapure deionized water (>18.2 MΩ.cm). The BCA Protein Assay Kit was purchased from Pierce and the protease inhibitor cocktail from Roche. Ez-Link Biotin-HPDP and the high capacity NeutrAvidin agarose resin were obtained from Fisher Scientific. All other reagents came from Sigma (France).
Synthesis of S-nitrosoglutathione
GSNO was synthesized as previously described [25]. Briefly, reduced glutathione (GSH) was incubated with an equivalent amount of sodium nitrite under acidic conditions (0.626 M HCl). The concentration of GSNO was calculated from the specific molar absorbance of the S-NO bond at 334 nm (ε = 922 M-1 cm-1) using the Beer-Lambert law.
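As a point of illustration, this calculation reduces to c = A/(ε·l). The minimal sketch below is not part of the original protocol; the absorbance value used in the example is hypothetical:

```python
# Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l).
EPSILON_SNO_334 = 922.0  # M^-1 cm^-1, molar absorbance of the S-NO bond at 334 nm

def gsno_concentration(a334: float, path_length_cm: float = 1.0) -> float:
    """Return the GSNO concentration (mol/L) from the absorbance at 334 nm."""
    return a334 / (EPSILON_SNO_334 * path_length_cm)

# Example: an absorbance of 0.046 in a 1 cm cuvette corresponds to ~50 µM GSNO.
print(f"{gsno_concentration(0.046) * 1e6:.1f} µM")
```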
Cell culture and oxidative stress model
Vascular smooth muscle cells derived from embryonic rat aorta (A-10 cell line) were cultured and exposed to oxidative stress (50 mM AAPH); 50 µM of GSNO (or the same volume of PBS) was added for an additional incubation period of 1 h at 37°C.
Quantification of intracellular reduced glutathione
Intracellular GSH was measured as previously described [27,28], with some adaptations.
Cells were lysed in a cold 3.3 % (v/v) perchloric acid solution and centrifuged for 15 min at 10,000 × g.
Quantification of intracellular protein reduced thiols
Intracellular protein reduced thiols were quantified using the DTNB method. After treatments, cells were lysed in a cold 3.3% (v/v) perchloric acid solution and centrifuged for 15 min at 10,000 × g. The pellets were resuspended in PBS containing 0.5% (v/v) sodium dodecylsulfate (SDS).
Samples were then incubated for 10 min in the dark with 700 µL of 1 mM DTNB. After incubation, 200 µL were transferred in triplicate into a 96-well plate and the absorbance was read at 405 nm. The intracellular thiol concentration was calculated using a GSH standard curve ranging from 3.25 µM to 32.5 µM and expressed relative to protein quantity (see section 2.7).
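By way of illustration, the conversion from raw absorbance to a protein-normalised thiol content amounts to a linear standard-curve fit followed by interpolation. The sketch below is only indicative; the standard-curve readings, sample absorbance, assay volume and protein amount are invented for the example:

```python
import numpy as np

# Hypothetical blank-corrected A405 readings for the GSH standards (µM).
std_conc = np.array([3.25, 6.5, 13.0, 26.0, 32.5])
std_a405 = np.array([0.045, 0.089, 0.176, 0.351, 0.442])

slope, intercept = np.polyfit(std_conc, std_a405, 1)  # linear fit: A = m*c + b

def thiol_um(a405: float) -> float:
    """Interpolate a sample absorbance on the GSH standard curve (µM)."""
    return (a405 - intercept) / slope

assay_ml = 0.7                     # DTNB reaction volume (mL)
nmol = thiol_um(0.210) * assay_ml  # µM x mL = nmol of reduced thiols
protein_ug = 85.0                  # from the BCA assay (section 2.7)
print(f"{nmol / protein_ug:.3f} nmol thiols per µg protein")
```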
Determination of gamma-glutamyltransferase activity
Gamma-glutamyltransferase activity was determined kinetically using the synthetic GGT substrate L-γ-glutamyl-3-carboxy-4-nitroanilide (GCNA). After treatments, incubation media were replaced with 750 µL of 1 mM GCNA in 100 mM Tris buffer (pH 7.4) containing 20 mM glycylglycine and 10 mM MgCl2, with or without 50 mM AAPH or 20 mM of the GGT inhibitor serine-borate complex (SBC). Cells were then incubated at 37°C, 50 µL of incubation medium were transferred to a 96-well plate every 30 min, and the absorbance was read at 405 nm. At the end of the kinetic assay, cells were lysed in 500 µL of 0.1 M HCl containing 0.4% (m/v) Triton X-100 for protein quantification (see section 2.7).
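For the record, such a kinetic readout is typically converted into a specific activity by taking the slope of absorbance versus time and applying the Beer-Lambert law. The sketch below is a rough outline only: the molar absorbance of the released chromophore, the effective path length, the time-course readings and the protein amount are all assumptions, not values reported here:

```python
import numpy as np

t_min = np.array([0, 30, 60, 90, 120])           # sampling times (min)
a405 = np.array([0.02, 0.11, 0.20, 0.29, 0.38])  # hypothetical aliquot readings

slope_per_min, _ = np.polyfit(t_min, a405, 1)    # delta A405 per minute

EPS_mM = 9.5       # assumed mM^-1 cm^-1 for the released nitroaniline chromophore
PATH_CM = 0.15     # assumed path length of a 50 µL aliquot in a 96-well plate
MEDIUM_UL = 750.0  # incubation volume; mM/min x µL gives nmol/min

rate_mM_per_min = slope_per_min / (EPS_mM * PATH_CM)
protein_mg = 0.12  # hypothetical protein content of the lysate (BCA assay)
print(f"GGT activity ~ {rate_mM_per_min * MEDIUM_UL / protein_mg:.1f} nmol/min/mg")
```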
Cell membrane PDI expression
Cells were scraped in
Total protein quantification
Protein determination was performed using the Pierce BCA Protein Assay Kit, following the manufacturer's instructions. A standard curve ranging from 0.025 to 1 mg.mL-1 was built with bovine serum albumin to calculate protein concentrations.
Purification and identification of S-nitrosated proteins
Cells incubated in a 75 cm² flask were lysed in 500 µL of 50 mM Tris (pH 6.8) containing 0.15 M NaCl, 1% (v/v) NP-40, 0.1% (v/v) SDS, 1 mM EDTA, 0.1 mM neocuproine and protease inhibitor cocktail. S-nitrosated proteins were purified by the biotin switch technique as previously described [29,30], with some adaptations. Briefly, free thiols in cell lysates were blocked with 50 mM N-ethylmaleimide (NEM). S-nitrosated proteins were then labeled with pyridyl disulfide-biotin (N-[6-(biotinamido)hexyl]-3'-(2'-pyridyldithio)-propionamide, biotin-HPDP) after cleavage of the S-NO bond with sodium ascorbate. Biotin-HPDP-labeled proteins were purified on NeutrAvidin beads (High Capacity NeutrAvidin Agarose Resin) and eluted in a buffer containing 1.5% (v/v) 2-mercaptoethanol.
After purification, S-nitrosated proteins were identified by mass spectrometry, as follows.
Samples diluted 4-fold in 6 M urea, 50 mM Tris (pH 8.0) were processed for cysteine reduction and alkylation, followed by overnight digestion in 10 volumes of 50 mM Tris (pH 8.0), 1 mM CaCl2 containing 100 ng of sequencing-grade trypsin (Promega). Protein digests were purified through C18 mini spin columns (Pierce, Thermo Fisher Scientific, France), resuspended in 8 µL of 2% (v/v) acetonitrile, 0.1% (v/v) trifluoroacetic acid and analyzed by label-free LC-MALDI as previously described [START_REF] Pirillo | LOX-1, OxLDL, and Atherosclerosis[END_REF].
Proteins and peptides were identified from fragmentation spectra by interrogation of the whole SwissProt database through the public Mascot server (taking into account protein scores above 80.0 and peptide scores above 20.0 at first rank, allowing one trypsin miscleavage, and considering cysteine carbamidomethylation and methionine oxidation as optional modifications). Finally, identified proteins were classified using the Panther database [32].
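The acceptance criteria above boil down to a simple filter over the search-engine output. The snippet below is a schematic illustration only: the record structure and accession numbers are invented and do not reproduce the actual Mascot export format:

```python
from dataclasses import dataclass

@dataclass
class Peptide:
    score: float
    rank: int
    missed_cleavages: int

@dataclass
class ProteinHit:
    accession: str
    score: float
    peptides: list

def accept(hit: ProteinHit) -> bool:
    """Protein score > 80 with at least one rank-1 peptide scoring > 20
    and no more than one missed trypsin cleavage."""
    good = [p for p in hit.peptides
            if p.rank == 1 and p.score > 20.0 and p.missed_cleavages <= 1]
    return hit.score > 80.0 and bool(good)

hits = [ProteinHit("P63259", 95.2, [Peptide(34.1, 1, 0)]),
        ProteinHit("Q9EPH8", 61.0, [Peptide(25.0, 1, 0)])]
print([h.accession for h in hits if accept(h)])  # -> ['P63259']
```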
Statistical analysis of data
Results are expressed as means ± standard error of the mean (sem). Statistical analyses were performed using either the Student t-test (for enzyme activity/expression or inhibition) or two-way ANOVA (pcondition, basal versus AAPH; ptreatment, with or without GSNO; pinteraction, between condition and treatment), followed by Bonferroni's multiple comparisons test. The GraphPad Prism software (version 5.0, GraphPad Software, San Diego, USA) was used.
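For readers who prefer a scripted equivalent of this design, a two-way ANOVA with interaction can be run as sketched below; the data frame is fabricated purely to show the model structure, and statsmodels is used here in place of GraphPad Prism:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented GSH values for a 2x2 design: condition (basal/AAPH) x treatment (PBS/GSNO).
df = pd.DataFrame({
    "gsh": [12.1, 11.8, 12.4, 7.2, 7.5, 6.9, 12.0, 12.6, 11.9, 7.4, 7.1, 7.6],
    "condition": ["basal"] * 3 + ["AAPH"] * 3 + ["basal"] * 3 + ["AAPH"] * 3,
    "treatment": ["PBS"] * 6 + ["GSNO"] * 6,
})

# The interaction term yields p_condition, p_treatment and p_interaction.
model = ols("gsh ~ C(condition) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```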
Results

Oxidative stress biomarkers

The intracellular GSH content significantly decreased under oxidative stress (Fig. 1-A), while extracellular GSH increased (Fig. 1-B) (pcondition < 0.0001 for both). The addition of GSNO produced a very slight (1.9%) but significant increase in extracellular GSH (ptreatment = 0.0395), both in basal and oxidative conditions (pinteraction ns), whereas it did not restore the intracellular levels of GSH, which remained low under AAPH exposure (ptreatment and pinteraction ns, Fig. 1-A).
Reduced thiols at the plasma membrane (0.015 ± 0.002 nmol/µg of protein) did not change with oxidative stress. Intracellular protein thiols evolved differently according to condition and treatment (pinteraction < 0.0001, Fig. 1-C): they decreased under oxidative stress in the absence of GSNO.
The addition of GSNO under oxidative stress almost doubled the intracellular protein thiol content.
Extracellular GSNO metabolism and intracellular formation of S-nitrosothiols
After one hour in contact with cells, only approx. 20 µM of GSNO (Fig. 2-A), out of the 50 µM initially added, and approx. 12 µM of nitrite ions (Fig. 2-B) were found in the extracellular space, indicating that GSNO is partly metabolized by SMC to release NO (detected as nitrite ions). Under basal conditions, GGT and PDI inhibition increased the extracellular GSNO content, to 27 ± 0.5 µM for SBC and 28 ± 0.5 µM for bacitracin (p < 0.05 versus GSNO in the absence of inhibitor, t-test), and decreased the extracellular nitrite ion concentrations (10 ± 0.4 µM for SBC, p > 0.05, and 7 ± 0.9 µM for bacitracin, p < 0.05 versus GSNO in the absence of inhibitor, t-test), attesting to a decrease in GSNO catabolism. Similar profiles were obtained with GGT and PDI inhibition under oxidative stress.
At the intracellular level, the addition of GSNO for 1 h induced the formation of S-nitrosothiols, which nearly doubled under oxidative stress compared with basal conditions (Fig. 3). Both enzymes were implicated in the formation of intracellular S-nitrosothiols under both conditions. Inhibition of GGT by SBC approximately halved the content of S-nitrosothiols, both in basal conditions (2.
Identification of S-nitrosated proteins
Purification and identification of proteins undergoing S-nitrosation revealed that 32 proteins were S-nitrosated under basal conditions, whereas 51 were S-nitrosated under oxidative stress. GSNO-nitrosated proteins were mainly present in macromolecular complexes and organelles under basal conditions, while the membrane and extracellular-region pools prevailed under oxidative stress (Table 1). The identified proteins belonged to 20 different classes under basal conditions, and to 23 under oxidative stress. Three additional classes were S-nitrosated by GSNO under oxidative stress, designated as cell adhesion, transfer/carrier and transporter proteins (Fig. 4). Among these classes, importin subunit beta-1 and procollagen C-endopeptidase enhancer 1, involved respectively in the cell cycle and in the proliferation of vascular smooth muscle cells [33,34], were identified (Table 2).
Discussion
The present study was designed to evaluate the bioactivity of GSNO in vascular SMC exposed to oxidative stress. Experiments were thus planned to assess the efficiency of GSNO-dependent NO release, and to verify possible quantitative and qualitative changes induced by oxidative conditions in cellular protein S-nitrosation.
From an experimental point of view, two main approaches can be used to induce oxidative stress: inhibiting cellular antioxidant defenses or increasing the free radical load. The latter can be achieved by exposing cells to extracellular ROS (e.g. O2•-) or to free radical generators such as AAPH, whose derived peroxyl radicals (ROO•) can react with cell components or with the polyunsaturated lipids of cell membranes, thus initiating lipid peroxidation [36]. A number of studies have employed AAPH to investigate antioxidant defenses in cellular systems [37,38]. More recently, the cytotoxic and genotoxic effects of ROO• originating from AAPH have been studied in a human microvascular endothelial cell line [39]. The effects of oxidative stress on the development of the cardiovascular system were also investigated after administration of AAPH into the air chamber of chicken embryos [40]. As far as SMC are concerned, AAPH was used to study the direct effects of free radicals on cyclic AMP-related cholesterol homeostasis [41]. On this background, the exposure of vascular SMC to AAPH was chosen as a simple and reproducible model of oxidative stress.
AAPH-induced oxidative stress caused no significant change in the extracellular GSNO metabolism, as the consumption of added GSNO and the corresponding NO release were largely the same in both conditions. In principle, the GSNO degradation might ensue from a direct oxidation by AAPH radicals. However, when checked in the absence of cells, the direct AAPH-mediated oxidation of GSNO (50 µM) actually released 4 ± 0.1 µM nitrite ions, i.e. much less than the concentrations detected in the presence of SMC (18.3 ± 0.6 µM), suggesting that most of the observed GSNO metabolism in oxidative stress conditions in SMC occurs through the activity of diverse cellular enzymes.
Oxidative stress induced a remarkable increase in the formation of intracellular S-nitrosothiols upon GSNO addition, a finding whose interpretation requires additional considerations.
GGT is a critical enzyme in GSNO metabolism [42], essential for the release of NO and its subsequent utilization in S-nitrosothiol formation. Indeed, the intracellular concentrations of S-nitrosothiols detected under GGT inhibition were markedly decreased. However, even though GGT activity was decreased by oxidative stress, it remained implicated to the same extent in extracellular GSNO catabolism and intracellular S-nitrosothiol formation. GGT inhibition did not entirely suppress GSNO metabolism, as it could not restore the initial extracellular concentration of GSNO: approx. 26 µM GSNO was detected at the end of incubation with the GGT inhibitor, versus 50 µM initially added. Taken together, these findings indicate that GGT activity is certainly involved in the extracellular metabolism of GSNO, but that other enzymes must also be implicated in the process. As regards PDI, its inhibition produced similar results: it did not entirely suppress extracellular GSNO metabolism, and approx. 28 µM GSNO (of 50 µM initially added) was still detectable in the extracellular compartment at the end of incubation. Cellular GGT and PDI activities therefore remain implicated in GSNO catabolism, even though they were oppositely affected by oxidative stress (GGT activity decreased, while PDI expression at the membrane level increased).
The observed increase in S-nitrosation of SMC proteins under oxidative stress was rather unexpected, as prooxidants should oxidize reduced thiols to disulfides and/or other sulfur species, which are then unavailable for nitrosation. The oxidation of thiols during oxidative stress might prevent S-nitrosation, thus interfering with NO-based physiologic signaling [43]. However, it has also been shown that S-nitrosation by NO can protect thiols from oxidation [44].
In our experiments, oxidative stress (in the absence of added GSNO) caused a decrease in both intracellular GSH and the SH groups of cellular proteins, accompanied by an increase in extracellular GSH. The latter was likely the result of the AAPH-induced decrease in GGT activity at the SMC plasma membrane, resulting in a lower consumption of extracellular GSH. The AAPH-induced decrease in GGT activity can be explained either by direct inactivation of the enzyme protein by AAPH radicals or by the large increase in the extracellular redox potential. Direct oxidation of plasma membrane proteins by AAPH is a known phenomenon [45]. Loss of cellular GGT activity was also reported following exposure of lung epithelial cells to hyperoxia-induced lipid peroxidation [46], and AAPH is itself known to induce lipid peroxidation [47].
The decrease in protein SH groups induced by AAPH was reversed by GSNO (Fig. 1-C). In principle, the addition of GSNO in an oxidative stress environment would rather be expected to enhance oxidative stress through the production of peroxynitrite anions. In our system, no peroxynitrite was detected (using the DHR probe) in smooth muscle cells under basal conditions upon GSNO addition.
AAPH-induced oxidative stress increased the intracellular peroxynitrite concentration up to 6.7 ± 0.8 µM; however, the concomitant addition of GSNO did not modify this concentration (5.9 ± 0.4 µM). The protection offered by GSNO to intracellular protein SH groups could be explained by the release of GSH concomitant with the release of NO. Released GSH can be incorporated into the intracellular GSH pool to support GSH-dependent antioxidant defenses. However, our data did not show any recovery of intracellular GSH levels after GSNO addition, probably because of direct oxidation of GSH by the AAPH challenge. Furthermore, our data showed that GSNO is able to reverse the oxidation of protein thiols caused by oxidative stress (Fig. 1-C). In this perspective, the ability of GSNO to protect reduced protein thiols from oxidative stress, making them again available to react with NO, may represent the mechanism explaining the increased formation of intracellular S-nitrosothiols observed under oxidative conditions (Fig. 3).
Protein S-nitrosation is considered an important mechanism for the post-translational modulation of protein function, and several studies have described such modulatory effects on a series of Cys-containing proteins that are potential targets of RNS-dependent nitrosative modifications [48,49,50]. In addition to the direct modulation of protein function, protein S-nitrosation can also represent a means of constituting 'NO stores' in tissues. Indeed, different studies support the idea that S-nitrosation of tissue thiols is a mechanism for the constitution of local reservoirs from which biologically active NO can subsequently be released [51,52,53]. The identification and assessment of such NO stores could provide a valuable biomarker for evaluating the therapeutic efficiency of NO donors.
The increase in S-nitrosation mostly concerned proteins belonging to the plasma membrane and extracellular region, which is not surprising considering that the AAPH-dependent oxidative challenge originated in the extracellular compartment. In particular, the detailed pattern of S-nitrosated proteins indicates that most of the proteins selectively S-nitrosated under oxidative stress conditions are of primary relevance for SMC functions that are often altered in vascular diseases, such as cell communication, cytoskeletal organization, contraction, morphogenesis and movement. Interestingly, several key regulatory proteins of actin cytoskeleton dynamics are involved, including Calponin-2 (CNN2), Myosin Light Polypeptide 6 (MYL6), Transgelin (TAGLN) and Lipoma Preferred Partner (LPP). The role of S-nitrosation in regulating these proteins is not completely understood, particularly during oxidative stress. Each of these proteins has been shown to play a role in regulating and modulating smooth muscle contraction or nitric oxide signaling. In vitro S-nitrosation of skeletal muscle myosin, for example, increases the force of the actomyosin interaction while decreasing its velocity, indicating a relaxed state [54]. The calcium-binding protein CNN2 has been shown to participate in the regulation of smooth muscle contraction by binding to actin, calmodulin, troponin C and tropomyosin; the interaction of calponin with actin inhibits the actomyosin Mg-ATPase activity [55,56]. This tonic inhibition of the ATPase activity of myosin in smooth muscle is relieved by Ca2+-calmodulin, which inhibits CNN2 binding to actin [57]. MYL6 is a regulatory light chain of myosin; it does not bind calcium but is nevertheless involved in muscle contraction and skeletal muscle tissue development. Transgelin (also designated SM22α and p27) is a smooth muscle protein that physically associates with cytoskeletal actin filament bundles in contractile smooth muscle cells. Studies in transgelin knockout mice have demonstrated a pivotal role for transgelin in the regulation of Ca2+-independent contractility [58], and it is proposed to be necessary for actin polymerization and bundling [59]. Moreover, LPP, a nucleocytoplasmic shuttling protein, is located in focal adhesions and associates with the actin cytoskeleton [60]. LPP can function as an adaptor protein, constituting a platform that orchestrates protein-protein interactions and contributes to the migratory phenotype of SMC [61]. As ROS have been shown to enhance cell migration [62,63] and GSNO to decrease the migration capacity of smooth muscle cells [64], we can speculate that LPP S-nitrosation could protect against oxidative stress-induced cell migration.
Taken together, all these proteins, largely implicated in Ca2+-dependent contractility and in NO signalling, constitute a potential interactome, and characterizing their behavior as S-nitrosated proteins may further our understanding of several processes in the vascular system, such as the contraction-relaxation signaling of SMC or their phenotype switching.
In conclusion, our study documented that oxidative stress can significantly modify the smooth muscle cell metabolism of GSNO, an endogenous NO donor presently under active investigation as a potential therapeutic agent. In particular, oxidative stress was shown to increase the extent, and profoundly modify the pattern, of GSNO-dependent protein S-nitrosation, with the additional involvement of several proteins critical for SMC homeostasis and function. These data can represent a valuable basis for the identification of biomarkers of GSNO bioactivity in the vascular system, as well as for the appraisal of the possible beneficial effects of this NO donor in the treatment of cardiovascular diseases.

vascular responses. In particular, focusing on macrophages and SMCs, which are the main cell types found in atherosclerotic lesions and appear to be colocalized with GGT, the experimental plan of this thesis was divided into two parts:
The first objective was to identify the origin of the GGT accumulated in atheromatous plaques, and to decipher which of the inflammatory and oxidative stimuli is responsible for GGT accumulation in atherosclerotic plaques, in order to understand whether macrophages may provide a potential source of b-GGT within the atherosclerotic plaque (first article) [START_REF] Belcastro | Monocytes/macrophages activation contributes to b-gamma-glutamyltransferase accumulation inside atherosclerotic plaques[END_REF].
The second objective, closely linked to the first, was devoted to the restoration of NO bioavailability from GSNO in smooth muscle cells under oxidative stress, and to the evaluation of the involvement of two specific enzymes, the already mentioned GGT and PDI, in GSNO metabolism and in protein S-nitrosation, the latter being used as a biomarker of the NO pool [START_REF] Stamler | Nitric oxide circulates in mammalian plasma primarily as an S-nitroso adduct of serum albumin[END_REF].
Recent studies have shown that inflammatory cells from patients with cardiovascular disease contain GGT granules that are released upon stimulation [START_REF] Corti | Contribution by polymorphonucleate granulocytes to elevated gamma-glutamyltransferase in cystic fibrosis sputum[END_REF].
Indeed, work carried out in our laboratory has shown that the released GGT is associated with other proteins, forming large aggregates with characteristics similar to those of exosomes or microparticles [START_REF] Corti | Contribution by polymorphonucleate granulocytes to elevated gamma-glutamyltransferase in cystic fibrosis sputum[END_REF]. There is thus a link between elevated serum GGT concentrations and the migratory phenotype [21]. Moreover, oxidative stress causes a decrease in NO bioavailability and, as described previously (section 1.4), several NO-donor molecules have been developed to counteract this loss of bioavailability. However, even if some NO donors such as GSNO do not themselves induce oxidative stress, they are still considered potential enhancers of oxidative stress, leading to the formation of peroxynitrite ions liable to produce deleterious effects on proteins, particularly under oxidative stress conditions [START_REF] Hogg | Production of hydroxyl radicals from the simultaneous generation of superoxide and nitric oxide[END_REF][START_REF] Ischiropoulos | Peroxynitrite-mediated oxidative protein modifications[END_REF][START_REF] Szabo | DNA damage induced by peroxynitrite: subsequent biological effects[END_REF]. Although oxidative stress is concomitant with the development of cardiovascular diseases, little is known about the capacity of GSNO to restore NO bioavailability in vascular tissues under oxidative stress, or about the possible interference of free radicals with GSNO metabolism.
A deeper understanding of these mechanisms could therefore open an avenue for the use of GSNO in cardiovascular diseases associated with oxidative stress. Furthermore, the literature has also shown that the S-nitrosation of proteins involved in the smooth muscle cell phenotype is of great importance [START_REF] Hsieh | Shear-induced endothelial mechanotransduction: the interplay between reactive oxygen species (ROS) and nitric oxide (NO) and the pathophysiological implications[END_REF], especially considering that SMCs take part in the contraction and regulation of blood vessel tone, thereby distributing blood flow and regulating blood pressure. Consequently, the identification of proteins nitrosated under oxidative stress could reveal new therapeutic targets for NO donors and help to better understand the pathological processes involved in the development of cardiovascular diseases associated with oxidative stress. These results may prompt the identification of appropriate biomarkers for evaluating GSNO bioactivity in the treatment of such diseases, and may be a starting point to support the use of GSNO for the treatment or prevention of cardiovascular diseases, notably atherosclerosis, by reducing macrophage accumulation and restoring NO levels. This extensive work raises several questions that remain open and will need to be addressed in the future.
Conclusions and perspectives
This work aimed to study the impact of inflammation and oxidative stress on the cell types that predominate within the atheromatous plaque: macrophages and smooth muscle cells.
L'inflammation a dans un premier temps été considérée sur les monocytes/macrophages avec la mise en place d'un protocole de différenciation des monocytes en macrophages de type M1 L'autre composante de l'athérosclérose, l'inflammation a un impact sur la prolifération et la dédifférenciation de cellules musculaires lisses. Pour le moment, aucune donnée n'est disponible quant à l'impact de l'inflammation sur l'activité/expression/localisation de la GGT au sein des cellules musculaires lisses. Lors de nos études préliminaires, nous avons montré (données non présentées dans ce manuscrit), que l'activité de la GGT était augmentée, au sein des cellules musculaires lisses, par l'inflammation. Cependant, aucune libération de la GGT dans le milieu extracellulaire n'a pu être mise en évidence. Ces premiers résultats, combinés à ceux menés sur macrophages attestant la relargage de la GGT sous stimulus inflammatoire, augurent que l'utilisation du S-nitrosoglutathion en tant que donneur de NO dans le cadre du traitement de l'athérosclérose permettrait la libération préférentielle de NO au niveau de la plaque d'athérome. L'inflammation induisant la prolifération et la dédifférenciation des cellules musculaires lisses, il serait opportun d'évaluer l'impact du Snitrosoglutathion sur ces deux phénomènes en condition inflammatoire (Fig. 16). L'athérosclérose est une maladie chronique à évolution lente caractérisée par la formation de plaques d'athérome, consistant en l'accumulation de lipoprotéines de basse densité (LDL), de leucocytes, de cellules spumeuses, la migration des cellules musculaires lisses (CML) et l'altération des cellules endotheliales (ECs). Ces phénomènes conduisent à la formation d'un noyau nécrotique incluant des régions calcifiées. La genèse de l'athérosclérose et de l'instabilité de la plaque d'athérome sont le résultat d'une synergie entre inflammation et stress oxydant. Les données actuelles identifient plusieurs populations de macrophages dans la plaque d'athérome présentant différents phénotypes en lien avec l'inflammation (pro-inflammatoire: M1, anti-inflammatoire: M2) ou avec des modifications redox de l'environnement (Mox). Stress oxydant et inflammation sont liés et jouent un rôle important dans (i) la dysfonction endothéliale induisant une diminution de la biodisponibilité du monoxyde d'azote (NO), (ii) l'oxydation des LDL, (iii) le remodelage de la lésion (régulation de protéases et d'antiprotéases) et (iv) la prolifération des CML. Les CML sont le deuxième type cellulaire le plus abondant dans la lésion athérosclérotique après les macrophages, leur hyperprolifération est la conséquence d'une dédifférenciation cellulaire d'un phénotype contractile à sécrétoire, augmentant leur capacité proliférative et migratoire. Les donneurs de NO, comme les S-nitrosothiols, connus également pour protéger contre le stress oxydant grâce essentiellement à la S-nitrosation, peuvent contrer la carence en NO. Parmi eux, le Snitrosoglutathion (GSNO), forme physiologique de stockage de NO dans les tissus, spécifiquement metabolisé par la gamma-glutamyl transférase (GGT) peut être envisagé. La corrélation entre l'augmentation des concentrations sériques de GGT et les facteurs de risque cardiovasculaire a récemment été démontrée. En particulier, seule la b-GGT s'accumule dans les plaques d'athérome, et concorde avec l'apparition d'autres marqueurs histologiques de vulnérabilité de la plaque. 
Atherosclerosis is a slowly progressing chronic disease characterized by the formation of atherosclerotic plaques consisting of accumulated low-density lipoprotein (LDL), leukocytes, foam cells, migrated smooth muscle cells (SMCs) and altered endothelial cells (ECs), leading to the formation of necrotic cores with calcified regions. Atherosclerosis genesis and the subsequent instability of atherosclerotic plaques result from a synergy between inflammation and oxidative stress. Current data identify several macrophage populations within the atherosclerotic plaque, showing different inflammatory phenotypes (pro-inflammatory: M1; anti-inflammatory: M2) or functions in response to redox changes in the environment (Mox). The oxidative stress linked to inflammation plays an important role in (i) endothelial dysfunction, with reduced nitric oxide (NO) bioavailability, (ii) LDL oxidation, (iii) lesion remodeling (regulation of proteases and antiproteases) and (iv) SMC proliferation. Indeed, SMCs are the second most abundant cell type, after macrophages, in the atherosclerotic lesion, because their dedifferentiation from a contractile to a secretory phenotype increases their proliferation and migration capacity. NO donors, like S-nitrosothiols, also known to protect from oxidative stress through S-nitrosation, could counteract this NO deficiency. Among them, S-nitrosoglutathione (GSNO), a physiological storage form of NO in tissues, specifically catabolized by gamma-glutamyltransferase (GGT), is considered. Recently, it has been shown that an increased serum level of GGT is an independent risk factor for cardiovascular mortality related to atherosclerotic disease. In particular, only the big fraction (b-GGT) has been detected inside human atherosclerotic plaques, associated with CD68+ macrophage-derived foam cells. As macrophages and SMCs are the main cell types found in atherosclerotic lesions and seemed to be colocalized with GGT, this thesis work focused on understanding the provenance of GGT and its role in GSNO metabolism within the atherosclerotic plaque. A first part of the thesis was to identify the origin of the GGT accumulating inside atherosclerotic plaques, and to decipher which of the inflammatory and oxidative stress stimuli is responsible for GGT accumulation in atherosclerotic plaques. The second part was dedicated to the restoration of NO bioavailability within SMCs under oxidative stress, with a focus on the identification of S-nitrosated proteins.
Keywords: Inflammation, oxidative stress, atherosclerosis, GGT, S-nitrosoglutathione, S-nitrosation
List of Tables

Table 2. Potential markers of NO bioavailability in human blood
Table 3. Cellular mechanism of GSNO degradation
Table 4. Potential NO donors and their application field
Table 5. Enhancers of NO availability and NO-donating statins and beneficial effects
Table 6. Total and fractional GGT activities (mean ± SD, U/L) in both genders
CHAPTER II
Table 7. Characteristics of M1 and M2 macrophages subsets
Article: Table 1. Histological features of the selected plaques used
Supplementary study: Table 8. GSH and antioxidant enzymes in monocytes, M1 and M2 macrophages
CHAPTER III
Table 9. Role of glutathione
Article 2: Table 1. Distribution of S-nitrosated proteins among distinct cell compartments upon treatment with GSNO (50 µM) of smooth muscle cells cultured under basal or oxidative stress (50 mM AAPH) conditions
Article 2: Table 2. Molecular function and biological implications of cytoskeletal proteins S-nitrosated by 50 µM GSNO under basal or oxidative stress conditions

List of Figures

Fig 1. Potential of cardiovascular therapeutics for atherosclerosis via "NO"-dependent actions
Fig 4. A model of intravascular metabolism of nitric oxide
Fig 5. The γ-glutamyl cycle
Fig 6. GGT implication in the recovery of extracellular GSH and associated pro-oxidant reactions
Fig 7. Plasma GGT elution profiles: high performance gel filtration chromatography method for GGT fraction analysis
Fig 8. GGT and cardiovascular mortality
Article: Fig 1. Cytochemical staining for GGT enzyme activity expressed in activated monocytes
Fig 2. Cytokines in the supernatant of monocyte-derived macrophages
Fig 3. Effects of GM-CSF and M-CSF exposure on cellular GGT expression
Fig 4. GGT release by activated macrophages
Fig 5. Effects of TNFα and IL-10 on GGT expression in monocytes
Fig 6. Gel filtration chromatography of b-GGT released by TNFα/IL-1-activated monocytes (a) and M1-like macrophages (b)
Fig 7. GGT expression in atherosclerotic plaques
Fig 8. Elution profile of GGT activity from whole homogenate of three selected plaque samples (a, b, c)
Supplementary study: Fig 1. Immunofluorescent detection of specific markers of M1-like and M2-like macrophages
Fig 2. Intracellular nitrite ions production
Fig 3. Quantification of ROS in intracellular and extracellular compartments
Fig 4. Quantification of intracellular GSH
Fig 5. Evaluation of GSH-dependent enzyme activities
Article 2: Fig 1. Intracellular and extracellular thiol status in basal and oxidative stress conditions
Fig 2. Extracellular metabolism of S-nitrosoglutathione
Fig 3. Intracellular formation of S-nitrosothiols
Fig 4. Identification and classification of smooth muscle cells proteins S-nitrosated in basal or oxidative stress conditions
exit and apoptosis) may contribute to further progression of the damage. Infiltrated macrophages induce the recruitment of smooth muscle cells from the media through the production of growth factors, such as Platelet Derived Growth Factor (PDGF), Transforming Growth Factor β (TGF-β) and Fibroblast Growth Factor (FGF).
Figure 2. An overview of the atherogenesis process. (A) The initial atherosclerotic stage [42].
because it is followed by activation of the coagulation cascade, leading to the rapid formation of a thrombus that may occlude the lumen of the vessel and cause ischemia and necrosis of the perfused tissues. The persistent production of inflammatory mediators, including chemokines (Monocyte Chemoattractant Protein-1 [MCP-1] and Regulated on Activation, Normal T cell Expressed and Secreted chemokine [RANTES]), cytokines (interleukin 1 [IL-1], IL-6, tumor necrosis factor alpha [TNF-α] and IFN-γ), proteases (matrix metalloproteinases and cathepsins), and ROS formed by infiltrated immune cells, generates an inflammatory environment which promotes the recruitment, accumulation, and activation of additional inflammatory cells and smooth muscle cells, resulting in plaque expansion. As inflammation is a systemic
Figure 3. Vascular aspect: NO & physiopathological effects.
Figure 4. A model of intravascular metabolism of nitric oxide. NO produced by eNOS may diffuse into the vascular lumen as well as into the underlying smooth muscle. In plasma, NO may react with molecular oxygen to form nitrite (NO2-) or with superoxide (O2•-) to form peroxynitrite (ONOO-), which subsequently decomposes to yield nitrate (NO3-). Alternatively, the nitrosonium moiety of NO may react with thiols to form S-nitrosothiols (RSNO). Furthermore, NO may reach the erythrocytes and react with oxyhemoglobin to form methemoglobin (metHb) and NO3-, with deoxyhemoglobin to form nitrosylhemoglobin (NOHb), or with the Cys93 residue of the β-subunit to form S-nitrosohemoglobin (SNOHb). In addition, plasma NO2- can be taken up by erythrocytes, where it is oxidized in a Hb-dependent manner to NO3- (SNOAlb, S-nitrosoalbumin; GSNO, S-nitrosoglutathione; CysNO, S-nitrosocysteine; RSH, sulfhydryl group) [Lauer et al., Indexes of NO bioavailability in human blood].
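For quick reference, the plasma reactions named in this caption can be written out schematically; this is our summary of the caption, with the equations left unbalanced and following the species listed above:

```latex
\begin{align*}
\mathrm{NO} + \mathrm{O_2} &\longrightarrow \mathrm{NO_2^-} \\
\mathrm{NO} + \mathrm{O_2^{\bullet -}} &\longrightarrow \mathrm{ONOO^-} \longrightarrow \mathrm{NO_3^-} \\
\mathrm{NO}\ (\text{as } \mathrm{NO^+}) + \mathrm{RSH} &\longrightarrow \mathrm{RSNO}
\end{align*}
```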
Figure 5. The γ-glutamyl cycle.
Figure 6. GGT implication in the recovery of extracellular GSH and associated pro-oxidant reactions [Paolicchi et al., Glutathione catabolism as a signaling mechanism]. 1: recovery of precursor amino acids for intracellular GSH synthesis; 2: pro-oxidant reactions in the extracellular environment.
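Step 1 of the caption corresponds to the GGT-catalyzed transpeptidation that initiates extracellular GSH catabolism; written out below in a standard textbook rendering (not a formula taken from this thesis), with AA denoting an acceptor amino acid:

```latex
\begin{align*}
\mathrm{GSH} + \mathrm{AA} &\xrightarrow{\;\mathrm{GGT}\;} \gamma\text{-Glu-AA} + \text{Cys-Gly}\\
\text{Cys-Gly} &\xrightarrow{\;\text{dipeptidases}\;} \mathrm{Cys} + \mathrm{Gly}
\end{align*}
```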
Figure 8. GGT and cardiovascular mortality. Adjusted cumulative survival from CVD mortality according to categories of GGT among 136,944 women and men (mean age 42 years) in the Vorarlberg Health Monitoring and Promotion Program (VHM&PP), estimated at the average values of covariates. Survival curves were calculated with a Cox proportional hazards model adjusted for sex, age, body mass index, systolic blood pressure, cholesterol, triglycerides, glucose, smoking, work status, and year of examination. Numbers at the bottom of the graph indicate participants available for analysis at given time points [251].
Figure 10. Polarization process of macrophages in the atherosclerotic lesions. M1: classical phenotype; M2: alternative phenotype; Mox: macrophages induced by oxidized low-density lipoproteins (oxLDL); ROI: reactive oxygen intermediates; iNOS: inducible nitric oxide synthase; Arg-1: arginase-1; HO-1: heme oxygenase-1; SRXN1: sulphiredoxin-1; VEGF: vascular endothelial growth factor [272].
(Figure panel summary: M1 markers IL-12 high, IL-23 high, TNF-α high, IL-1β high, IL-10 low, iNOS, ROS/ROI, M1 chemokines, Th1 response; M2 markers IL-10 high, Arg-1, IL-23 low, IL-12 low, TNF-α low. Redox-dependent effects: high Nox2-dependent ROS increase phagocytic activity and act as second messengers; low ROS increase phagosomal proteolytic activity; decreased Nox2 activity improves wound healing (increased disulphide protein degradation) and lowers the inflammatory mediators TNF-α and IL-1β (decreased ATP plasma membrane ion channel); Cu,Zn-SOD action increases Arg-1, FIZZ-1 and YM-1, regulated by redox-dependent STAT6 translocation.)
Figure 1. Immunofluorescent detection of specific markers of M1-like and M2-like macrophages. Cells, seeded in 8-well plates and differentiated with GM-CSF or M-CSF, were fixed with paraformaldehyde 4% (m/v) and labeled with 1:50 diluted goat CD32 and mouse CD163 antibodies (Santa Cruz Biotechnology) for 60 min in blocking solution (PBS, 8% (v/v) milk, 1% (v/v) Triton X-100). Then a 1:100 dilution of secondary anti-mouse FITC and anti-goat PE antibodies (Santa Cruz Biotechnology) was incubated with the cells for 30 min. n=1 (magnification: 40x, scale bar: 25 µm).
Figure 2. Intracellular nitrite ions production. Monocytes were differentiated into M1 and M2 by GM-CSF and M-CSF, respectively, during 6 days. After differentiation, cells were lysed in Tris-HCl buffer (10 mM, pH 7.8) and nitrite ions were quantified with the Griess reaction. The amount of intracellular nitrite ions was expressed as a ratio of total protein mass. Data are shown as mean ± sem, n=6-8 (M1-like and M2-like). * p < 0.05 vs monocytes, one-way ANOVA test + Bonferroni post-test.
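For context, the Griess reaction used here is the classical two-step colorimetric detection of nitrite. The scheme below is our recollection of the standard assay, assuming the usual reagents sulfanilamide and N-(1-naphthyl)ethylenediamine (NED); the exact reagents used in this work are not specified here:

```latex
\mathrm{NO_2^-} \;+\; \text{sulfanilamide} \;\xrightarrow{\;\mathrm{H^+}\;}\; \text{diazonium salt} \;\xrightarrow{\;\text{NED}\;}\; \text{azo dye}\quad(\lambda_{\max}\approx 540\ \mathrm{nm})
```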
Figure 3. Quantification of ROS in intracellular and extracellular compartments. Monocytes were differentiated into M1 and M2 by GM-CSF and M-CSF, respectively, during 6 days. After differentiation, extracellular and intracellular ROS were quantified, in the cell culture supernatants and in the cells lysed in lysis buffer (Tris-HCl 0.5 M, NaCl 1.5 M, Triton X-100, SDS 10% and ultra-pure H2O), through the oxidation of 2',7'-dichlorodihydrofluorescein (DCF).
Figure 4. Quantification of intracellular GSH. Monocytes were differentiated into M1 and M2 by GM-CSF and M-CSF, respectively, during 6 days. After differentiation, cells were lysed in 3.3% perchloric acid buffer and the GSH concentration was quantified with the 2,3-naphthalene dicarboxaldehyde (NDA) probe. The amount of intracellular GSH was expressed as a ratio of total protein mass. Data are shown as mean ± sem, n=6 (monocytes, M1 and M2). * p < 0.05 vs monocytes, one-way ANOVA test + Bonferroni post-test.
Figure 5. Evaluation of GSH-dependent enzyme activities. Monocytes were differentiated into M1 and M2 by GM-CSF and M-CSF, respectively, during 6 days. After differentiation, cells were lysed in Tris-HCl buffer (10 mM, pH 7.8) and subjected to 5 freeze/thaw cycles in a mixture of ice and salt. After further homogenization with a Dounce homogenizer (20 strokes) on ice, centrifugation at room temperature (2500 rpm, 5 minutes) and resuspension of the sample in lysis buffer, the enzyme activities were evaluated. Data are shown as mean ± sem, n=6 (monocytes, M1 and M2). * p < 0.05, one-way ANOVA test + Bonferroni post-test.
alterations. Disturbances in hemodynamic forces could initiate the proinflammatory switch in SMC phenotype even before the onset of symptoms of atherosclerosis. Proinflammatory signals play a crucial role in the further dedifferentiation of SMCs in affected vessels and in the propagation of pathological vascular remodelling. The majority of SMCs within lesions appear to be derived from phenotypic modulation of preexisting SMCs in response to environmental changes such as lipid internalisation, lipid peroxidation products, ROS, inflammatory cytokines, altered cell-cell and cell-matrix contacts, exposure to circulating blood products, platelet-derived products, growth factors, and perhaps specific negative regulators of SMC differentiation. The first chapter (section 1.2.3) focused on the mechanisms that regulate the differentiated state of the SMC under normal circumstances and on how these regulatory processes are altered during the formation of intimal lesions. The resulting model shows that regulation of SMC differentiation is extremely complex and involves constant interplay between environmental cues and the genetic program. Therefore, this next section will focus on the molecular mechanisms that control phenotypic switching of SMC in atherosclerosis.
Figure 11. Summary of ROS types and sources, and action points of antioxidants (R•, lipid alkyl radical; RH, lipid; ROO•, lipid peroxyl radical; ROOH, lipid hydroperoxide) [395].
Figure 12. GSH biosynthetic route and GSH cycle. GSH biosynthesis occurs in two ATP-dependent steps. The first, limiting step is the formation of a covalent bond between γ-Glu and L-Cys, catalysed by glutamate-cysteine ligase (GCL), formed by the two subunits GCLc and GCLm. The addition of L-Gly in the second step is catalysed by glutathione synthase (GS) to form γ-glutamyl-cysteinyl-glycine (GSH). GSH is then used by GPx to detoxify H2O2, generating GSSG, which can be recycled by glutathione reductase (GR) consuming NADPH, which is regenerated by the pentose phosphate pathway (PPP) [Espinosa-Diez et al., Antioxidant responses and cellular adjustments to oxidative stress].
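The two ATP-dependent steps and the GPx/GR cycle named in this caption read explicitly as follows (standard stoichiometry, supplied here for convenience):

```latex
\begin{align*}
\text{L-Glu} + \text{L-Cys} + \mathrm{ATP} &\xrightarrow{\;\mathrm{GCL}\;} \gamma\text{-Glu-Cys} + \mathrm{ADP} + \mathrm{P_i}\\
\gamma\text{-Glu-Cys} + \text{L-Gly} + \mathrm{ATP} &\xrightarrow{\;\mathrm{GS}\;} \mathrm{GSH} + \mathrm{ADP} + \mathrm{P_i}\\
2\,\mathrm{GSH} + \mathrm{H_2O_2} &\xrightarrow{\;\mathrm{GPx}\;} \mathrm{GSSG} + 2\,\mathrm{H_2O}\\
\mathrm{GSSG} + \mathrm{NADPH} + \mathrm{H^+} &\xrightarrow{\;\mathrm{GR}\;} 2\,\mathrm{GSH} + \mathrm{NADP^+}
\end{align*}
```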
Figure 13. Chemical pathways leading to the formation of protein S-nitros(yl)ation and S-glutathionylation. Horizontal dotted lines separate one-electron oxidative states. Dashed lines represent oxidations by oxidative species not depicted, including molecular oxygen (O2). For the sake of simplicity, lines representing the reaction of P-SH with GSSG or GSNO to yield mixed disulfide protein (P-SS-G) have been omitted. GSH: reduced glutathione; GSNO: S-nitrosoglutathione; GSSG: oxidized glutathione [Martínez-Ruiz et al., Signalling by NO-induced protein S-nitrosylation and S-glutathionylation: convergences and divergences].
Figure 14. Mechanisms for oxidant stress-induced modifications of target proteins in cardiovascular diseases. In this process, ROS trigger oxidative modification and NO triggers S-nitrosation of many target molecules, together with activation of pro-oxidant and antioxidant enzymes to regulate the redox status of SMCs and ECs (adapted from [Hsieh et al., Shear-induced endothelial mechanotransduction: the interplay between reactive oxygen species (ROS) and nitric oxide (NO) and the pathophysiological implications]).
[Bouressman et al., to be submitted].
Acidic supernatants were neutralized with 10 M NaOH and diluted 10 times in 0.1 M HCl containing 2 mM EDTA. Sixty µL of diluted samples or standard GSH solutions (0.65-3.25 µM) were transferred to a 96-well plate; 120 µL of 0.4 M borate buffer (pH 9.2) and 20 µL of 5.4 mM 2,3-naphthalene dicarboxaldehyde (NDA) solution prepared in ethanol were then added into each well. The microplate was incubated for 25 min on ice in the dark. The fluorescence intensity of the GSH-NDA adducts was measured using a microplate reader (Synergy 2 model, Biotek Instruments, Colmar, France) with excitation set at 485 ± 20 nm and emission at 528 ± 20 nm, and expressed relative to protein quantity (see section 2.7).
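A minimal sketch of how the standard-curve quantification described above could be implemented is given below. The fluorescence readings are placeholders, and the assumption of a linear GSH-NDA response over 0.65-3.25 µM is ours, not a statement from the protocol:

```python
import numpy as np

# GSH standards (µM) and their GSH-NDA fluorescence readings
# (placeholder values; real readings come from the plate reader, Ex 485/Em 528 nm)
standards_uM = np.array([0.65, 1.30, 1.95, 2.60, 3.25])
standards_fi = np.array([1200.0, 2350.0, 3540.0, 4690.0, 5860.0])

# Assume a linear fluorescence response over this range: FI = a*[GSH] + b
a, b = np.polyfit(standards_uM, standards_fi, deg=1)

def gsh_per_mg_protein(sample_fi, dilution_factor, protein_mg):
    """Convert a background-corrected sample reading to nmol GSH per mg protein."""
    conc_uM = (sample_fi - b) / a      # GSH concentration in the diluted well (µM)
    well_volume_L = 60e-6              # 60 µL sample aliquot per well
    # µmol/L * L = µmol; *1e3 converts to nmol; correct for the dilution step
    nmol = conc_uM * well_volume_L * 1e3 * dilution_factor
    return nmol / protein_mg

# Example: hypothetical sample read at FI = 3000, diluted 10x, 0.05 mg protein
print(gsh_per_mg_protein(sample_fi=3000.0, dilution_factor=10, protein_mg=0.05))
```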
A 3-h incubation of SMC in the presence of 50 mM AAPH significantly increased the redox potential of the culture medium from 256 ± 19 (basal) to 484 ± 8 mV (AAPH) (n = 3, Student t test, p < 0.05 vs basal). Addition of GSNO (50 µM) during the final 1-h incubation had no impact on this redox potential, neither in control nor under AAPH exposure. No variation of the pH (7.4) was observed throughout the experiment. Under oxidative stress, the GGT activity decreased 3.5-fold, from 1.35 ± 0.20 to 0.39 ± 0.14 nmol/min/mg of proteins (n = 3, Student t test, p < 0.05). In contrast, PDI localization at the plasma membrane increased 1.7-fold, from 0.72 ± 0.02 to 1.22 ± 0.31 (PDI/actin ratio, n = 4, Student t test, p < 0.05).
Fig. 1. Intracellular and extracellular thiol status in basal and oxidative stress conditions. Smooth muscle cells were incubated for a total of 3 h without (basal) or with 50 mM AAPH. In each condition, 50 µM GSNO or GSNO+AAPH were added during the 3rd h of incubation. Intracellular reduced thiols (C) were quantified by reacting precipitated proteins with DTNB. Intracellular (A) and extracellular GSH (B) were quantified with the NDA probe in the supernatant after protein precipitation. Results are presented as means ± sem of three independent experiments and compared using a two-way ANOVA (p_condition (Basal vs AAPH), p_treatment (Control, GSNO) and p_interaction); * p<0.05 (Bonferroni's multiple comparisons test).
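The two-way ANOVA layout invoked throughout these captions (condition × treatment, with interaction) can be sketched as follows. This is a minimal illustration with placeholder data, and statsmodels is our choice of tool, not necessarily the software used in the thesis:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Placeholder measurements: three independent experiments per group
df = pd.DataFrame({
    "condition": ["Basal"] * 6 + ["AAPH"] * 6,
    "treatment": (["Control"] * 3 + ["GSNO"] * 3) * 2,
    "value":     [1.0, 1.1, 0.9, 1.2, 1.3, 1.1,
                  0.6, 0.5, 0.7, 0.9, 1.0, 0.8],
})

# Two-way ANOVA with interaction: value ~ condition * treatment
model = smf.ols("value ~ C(condition) * C(treatment)", data=df).fit()
table = anova_lm(model, typ=2)  # yields p_condition, p_treatment, p_interaction
print(table)
# Bonferroni-style pairwise comparisons would follow as a separate post-hoc step.
```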
Fig. 2. Extracellular metabolism of S-nitrosoglutathione. Smooth muscle cells were incubated for 3 h without (basal) or with 50 mM AAPH. In each condition, 50 µM GSNO or GSNO+AAPH were added during the 3rd hour of incubation. S-nitrosothiols (A) and nitrite ions (B) were quantified by the Griess-Saville and Griess methods, respectively. Results are presented as means ± sem of three independent experiments and compared using a two-way ANOVA (p_condition (Basal vs AAPH), p_treatment (Control, GSNO) and p_interaction).
Fig. 3. Intracellular formation of S-nitrosothiols. Intracellular S-nitrosothiols were quantified by the DAN/Hg2+ method after incubation of smooth muscle cells for 3 h without (basal) or with 50 mM AAPH. In each condition, 50 µM GSNO or GSNO+AAPH were added during the 3rd hour of incubation. Results are presented as means ± sem of three independent experiments and compared using a two-way ANOVA (p_condition (Basal vs AAPH), p_treatment (Control, GSNO) and p_interaction); * p<0.05 (Bonferroni's multiple comparisons test).
Fig. 4. Identification and classification of smooth muscle cell proteins S-nitrosated in basal or oxidative stress conditions. Proteins were S-nitrosated by 50 µM GSNO in cells exposed or not to oxidative stress (50 mM AAPH). After purification (biotin switch technique), proteins identified by mass spectrometry were classified using the Panther database.
probably of inflammatory origin, and the identification of intra-plaque GGT. The release of GGT may therefore interfere with the physiological metabolism of GSNO in blood and/or tissues. However, it is also well documented that inflammatory cells stimulate the production of endothelial adhesion molecules and chemotactic factors, which increase the recruitment of monocytes and macrophages as well as their accumulation in atherosclerotic lesions. Current data identify several macrophage populations within the atherosclerotic plaque, displaying different phenotypes related to inflammation (pro-inflammatory: M1; anti-inflammatory: M2) [Leitinger et al.] or to redox modifications of the environment (Mox) [Wolfs et al.]. Consequently, in line with the first objective of this work, particular attention was paid to macrophage heterogeneity within atherosclerotic lesions. A first experimental approach was to isolate human monocytes from the peripheral blood of healthy donors. Monocyte differentiation into M1 and M2 macrophages was induced by the growth factors GM-CSF and M-CSF, respectively. Following differentiation, we studied GGT expression and release in these cells, and GGT analysis in the plaque was performed on tissue samples obtained from patients who had undergone carotid endarterectomy. The results revealed that M1-like macrophages express higher levels of GGT than M2-like macrophages and monocytes. Moreover, M1-like macrophages, but not M2, are able to release the b-GGT fraction after activation by pro-inflammatory stimuli (a combination of the pro-inflammatory cytokines TNF-α/IL-1β). Western blot analysis of b-GGT extracted from plaques confirmed the presence of GGT in parallel with the presence of macrophages. These results indicate that macrophages with a pro-inflammatory phenotype (M1) may contribute to the intra-plaque accumulation of b-GGT, which in turn may play a role in the progression of atherosclerosis by modulating inflammatory processes and promoting plaque instability. These data are consistent with those observed for neutrophils, which contribute to the release of b-GGT in the inflammatory exudates of cystic fibrosis [Corti et al.]. This adds further clear support for the connection between GGT and inflammation. In order to complete the phenotypic characterization of macrophages, we began to evaluate the redox status of differentiated macrophages, M1 vs M2, compared with monocytes. Indeed, little is known about the redox state of M1 and M2. But as confirmed by the literature and the preliminary data obtained (Chapter II, section 2.4), ROS, RNS and GSH are interconnected and influence macrophage development.
Indeed, the cellular redox state is known to play an important role in cell fate processes such as proliferation, differentiation and apoptosis [Schafer & Buettner; Rohrschneider et al.; Jenkins et al.]. However, these results are preliminary and need to be completed by further experiments, in particular under oxidative stress conditions. Indeed, as already mentioned, increased oxidative stress and endothelial dysfunction during atherosclerosis lead to SMC proliferation and migration. The vascular remodelling that follows the hyperproliferation of these cells is the consequence of a cellular dedifferentiation from a contractile to a secretory phenotype, increasing their proliferative capacity.
(pro-inflammatory) and M2 (anti-inflammatory). The impact of an inflammatory stimulus on GGT expression/activity was evaluated. Regarding oxidative stress, assessed only on smooth muscle cells, the aim was to show that S-nitrosoglutathione could be used as an NO donor in an unfavourable environment.

We demonstrated that:
- Inflammation, induced by IL-1β and TNF-α, triggered the degranulation of M1-type (pro-inflammatory) macrophages and thereby the release of intracellular GGT in the form of a macrocomplex named b-GGT. This first publication allowed us to put forward a hypothesis on the origin of the GGT identified within the atherosclerotic plaque.
- Oxidative stress amplified the S-nitrosoglutathione-induced protein nitrosation in smooth muscle cells. The identification of these proteins revealed that the contractile machinery is the preferred target of S-nitrosoglutathione, and that oxidative stress also involves the nitrosation of transport and intracellular trafficking proteins. This second publication allowed us to consider GSNO as a potential NO donor able to restore the vascular NO pool that is lacking in atherosclerosis.

In light of these two studies, several parameters deserve further investigation, and others that could not be studied would merit our attention. Indeed, atherosclerosis is a pathology combining inflammation and oxidative stress; thus, macrophages and smooth muscle cells experience both conditions. Oxidative stress induces the differentiation of monocytes into Mox macrophages displaying over-activated antioxidant defences. However, at present we have no data on the impact of oxidative stress on M1- and M2-type macrophages. Since our preliminary study showed that M1 and M2 macrophages display different redox profiles (intracellular GSH concentration, antioxidant enzyme activities), it would be interesting to study their response to an oxidizing environment induced, for example, with AAPH. This study would also make the link with the one carried out on smooth muscle cells under oxidative stress. The impact of S-nitrosoglutathione in this oxidizing environment, with regard to GGT activity, combined with the study of nitrosated proteins within macrophages, could confirm the capacity of S-nitrosoglutathione to restore NO bioavailability within the vascular compartment. Studying the evolution of the macrophage phenotype under oxidative stress in the presence of S-nitrosoglutathione, together with the identification of S-nitrosated proteins, could reveal new NO target proteins and open new doors for the therapeutic use of NO donors (Fig. 15).
Figure 15. Overview of the experimental perspectives concerning the development of an oxidative stress model in macrophages.
Figure 16. Overview of the experimental perspectives on smooth muscle cells.
Keywords: Inflammation, oxidative stress, atherosclerosis, GGT, S-nitrosoglutathione, S-nitrosation

Inflammation and oxidative stress in atherosclerosis: role of S-nitrosothiols in the vascular responses.
Table 8. GSH and antioxidant enzymes in monocytes, M1 and M2 macrophages.

        Monocytes   M1    M2
GSH     +           ++    +++
GPx     +           +     ++
GR      ++          +++   ++
GST     +++         +     ++
Table 9. Role of glutathione.

ANTIOXIDANT DEFENSE                        METABOLISM    REGULATION
Scavenging free radicals
and other reactive species
3.5 Article 2

OXIDATIVE STRESS ENHANCES AND MODULATES PROTEIN S-NITROSATION IN SMOOTH MUSCLE CELLS EXPOSED TO S-NITROSOGLUTATHIONE
(Submitted to Free Radical Biology and Medicine, October 2016)

S-nitrosothiols such as GSNO, the main physiological storage form of NO in tissues, display a reversible binding of NO to a reduced thiol group (-SH) and are potential therapeutic agents for the treatment of cardiovascular diseases featuring reduced NO bioavailability. Oxidative stress is concomitant with the development of cardiovascular diseases, yet few studies document the capacity of GSNO to restore NO bioavailability in vascular tissues under oxidative stress. The present study evaluates the impact of oxidative stress on the bioavailability to SMC of GSNO-derived NO, taking into account the implication of two redox enzymes, GGT and PDI, involved in GSNO metabolism and in protein S-nitrosation.

Oxidative stress was induced in vitro on SMC using a spontaneous free radical generator, 2,2'-azobis(2-amidinopropane) (AAPH). The effects of oxidative stress on NO release as well as on GGT and PDI expression/activity were evaluated. The redox state of cellular thiols was quantified through the intra- and extracellular concentrations of glutathione and the reduced thiol (-SH) groups of proteins. The capacity of GSNO to induce protein S-nitrosation was globally quantified, and the S-nitrosated proteins were then identified by mass spectrometry.

As expected, oxidative stress decreased the intracellular concentrations of glutathione and of reduced protein thiol groups. However, GGT activity decreased 3.5-fold while membrane expression of PDI increased 1.7-fold, without these modifications affecting the extracellular catabolism of GSNO. The addition of GSNO to SMC under oxidative stress restored the reduced thiol groups of proteins and produced a greater protein S-nitrosation than under basal conditions.

Moreover, mass spectrometry analysis of S-nitrosated proteins revealed a higher number of proteins S-nitrosated by GSNO under oxidative stress (51 proteins, vs 32 under basal conditions), including a higher number of cytoskeletal proteins (17, vs 8 under basal conditions) involved in vascular contraction, morphogenesis and cell movement. In addition, supplementary classes of proteins (involved in cell adhesion, transfer/carrier, and transport) are S-nitrosated only under oxidative stress conditions.
The pellet containing membrane proteins was resuspended in 50 mM Tris buffer (pH 8) supplemented with 1% (v/v) SDS and agitated for 30 min on ice. Membrane proteins were then centrifuged for 20 min at 21,000 × g. Finally, proteins were precipitated with 100% cold acetone for 1 h at -20°C. After centrifugation (3,000 × g, 10 min), the pellet was resuspended in 50 mM Tris-HCl buffer (pH 6.8) containing 0.15 M NaCl, 1% (w/v) SDS and 1% (w/v) Triton X-100. Proteins were quantified (see section 2.7) and 10 µg were loaded on an SDS-PAGE gel with a 10% separating gel and a 4% stacking gel. After migration, proteins were transferred onto a polyvinyl membrane and labelled with anti-PDI (sc-20132, Santa Cruz Biotechnology) or anti-actin antibody, diluted 1/1000 or 1/2000, respectively. A secondary antibody conjugated with HRP (sc-2004, Santa Cruz Biotechnology), diluted 1/5000, was used to quantify the PDI/actin ratio using ImageJ 1.47v software (NIH, USA).
Lysis was performed in 50 mM Tris buffer (pH 8) supplemented with 50 mM 2-mercaptoethanol and protease inhibitor cocktail. After a 30-min incubation on ice, cell lysates were centrifuged at 17,600 × g for 20 min.
Table 1. Distribution of S-nitrosated proteins among distinct cell compartments upon treatment with GSNO (50 µM) of smooth muscle cells cultured under basal or oxidative stress (50 mM AAPH) conditions. Percentage of total identified proteins.

Cell compartment          Basal + GSNO    AAPH + GSNO
Cell junction             -               2.20
Membrane                  -               43.50
Macromolecular complex    48              2.20
Extracellular matrix      -               2.20
Cytosol                   8               6.50
Organelle                 44              2.20
Extracellular region      -               41.30
Table 2. Molecular function and biological implications of cytoskeletal proteins S-nitrosated by 50 µM GSNO under basal (Basal + GSNO) or oxidative stress (GSNO + AAPH) conditions.

Panther family/subfamily       Molecular function                        Biological process
Elongation Factor 1-Gamma      Structural constituent of cytoskeleton    Cell communication
Actin, Aortic Smooth Muscle
O2•- or H2O2. Prolonged enzymatic generation of O2•- can be sustained e.g. by the xanthine/xanthine oxidase system, which however can introduce a major bias in the results, as it can itself denitrosate S-nitrosothiols [35]. H2O2 pro-oxidant effects are mediated by the formation of the hydroxyl radical, •OH, through the transition metal-catalyzed Fenton reaction. However, metal cations are also known to catalyze the direct degradation of S-nitrosothiols, preventing their use in our study. On the other hand, the water-soluble azo compound 2,2'-azobis(2-amidinopropane) dihydrochloride (AAPH) can be considered as a 'clean' and reproducible free radical generator, as it spontaneously decomposes at 37°C into one mole of nitrogen and two moles of carbon-centered radicals. AAPH-derived radicals can either combine with each other to produce a stable product, or react with molecular oxygen to generate peroxyl radicals (ROO•).
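Written out, the decomposition route just described is the following (schematic, with R denoting the 2-amidinopropane moiety):

```latex
\begin{align*}
\mathrm{R{-}N{=}N{-}R} &\xrightarrow{\;37^{\circ}\mathrm{C}\;} \mathrm{N_2} + 2\,\mathrm{R^{\bullet}}\\
\mathrm{R^{\bullet}} + \mathrm{R^{\bullet}} &\longrightarrow \text{stable products}\\
\mathrm{R^{\bullet}} + \mathrm{O_2} &\longrightarrow \mathrm{ROO^{\bullet}}
\end{align*}
```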
Thus, the second experimental approach presented in this manuscript was to develop an oxidative stress model on smooth muscle cells. It is known that, to induce oxidative stress, two alternative approaches can be followed to disturb the pro-oxidant/antioxidant balance: either increasing the radical load or inhibiting the antioxidant defences. The radical load can be increased by exposing cells to gamma irradiation, elevated oxygen tension (hyperoxia), extracellular O2•- and/or H2O2, by using azo compounds generating free radicals such as AAPH, or by using free radical-generating drugs (Chapter III, section 3.3). Here, in order to develop an oxidative stress model on SMC (A-10 cell line) under reproducible and controlled conditions, we used AAPH, a water-soluble molecule used in the study of lipid peroxidation and in the in vitro characterization of antioxidants, and capable of causing different types of pathological changes through oxidative damage. Furthermore, the use of this compound on different types of cell cultures is supported by several sources of data [Yang et al.; Aldini et al.; Scarpato et al.; He et al.; Gesquière et al.]. As described in the second article, cells were exposed to AAPH, and the effects of oxidative stress on NO release as well as on PDI and GGT expression/activity, respectively, were evaluated. In addition, the redox state of cellular thiols was quantified through the intra- and extracellular concentrations of glutathione and the reduced thiol (-SH) groups of proteins. Finally, the capacity of GSNO to induce protein S-nitrosation was globally quantified, and the S-nitrosated proteins were identified by mass spectrometry. As expected, oxidative stress decreased the intracellular concentrations of glutathione and of reduced protein thiol groups. The results show that oxidative stress caused a differential modulation of GGT (decreased activity) and PDI (increased expression at the plasma membrane) without any effect on the extracellular catabolism of GSNO. However, the addition of GSNO to SMC under oxidative stress restored the reduced protein thiol groups and produced a greater protein S-nitrosation than under basal conditions.
Moreover, mass spectrometry analysis of S-nitrosated proteins revealed a higher number of proteins S-nitrosated by GSNO under oxidative stress, including a higher number of cytoskeletal proteins involved in vascular contraction, morphogenesis and cell movement. Even more interestingly, supplementary classes of proteins involved in cell adhesion, transfer/carrier and transport are S-nitrosated only under oxidative stress conditions. Overall, even though oxidative stress differentially modulated GGT and PDI, higher levels of proteins S-nitrosated by GSNO were identified. These studies underline the important role of
Acknowledgments
We thank the proteomics platform (Dr Jean-Baptiste Vincourt) of the Fédération de Recherche (FR3209 CNRS - BMCT), based at the Biopôle on the biology-health campus of the Université de Lorraine, for the identification of S-nitrosated proteins (LC-MALDI MS).
Funding
This work was supported by the Université de Lorraine and the Région Lorraine (UHP_2011_EA3452_BMS_0062, RL 21/11, RL 140/12, CPER 2007-13 PRST « Ingénierie Moléculaire et Thérapeutique - Santé »), and by the Programme VINCI 2014 - Université Franco-Italienne, project number C2-56.
CHAPTER II
Macrophages and Inflammation
Abstract
Among S-nitrosothiols, which show a reversible binding between NO and an -SH group, S-nitrosoglutathione (GSNO) represents a potential therapeutic to treat cardiovascular diseases (CVD) associated with reduced nitric oxide (NO) availability. It also induces S-nitrosation of proteins, responsible for the main endogenous storage form of NO. Although oxidative stress parallels CVD development, little is known on the ability of GSNO to restore NO supply and storage in vascular tissues under oxidative stress conditions. Aortic rat smooth muscle cells (SMC) were stressed in vitro with a free radical generator (2,2'-azobis(2-amidinopropane) dihydrochloride, AAPH). The cellular thiol redox status was reflected through levels of reduced glutathione and protein sulfhydryl (-SH) groups. The ability of GSNO to deliver NO to SMC and to induce protein S-nitrosation (investigated via mass spectrometry, MS), as well as the implication of two redox enzymes involved in GSNO metabolism (activity of gamma-glutamyltransferase, GGT, and expression of protein disulfide isomerase, PDI), were evaluated.

Oxidative stress decreased both intracellular glutathione and protein -SH groups (by 53% and 32%, respectively) and caused a 3.5-fold decrease in GGT activity, while PDI expression at the plasma membrane was 1.7-fold increased, without any effect on extracellular GSNO catabolism. Addition of GSNO (50 µM) increased protein -SH groups and protein S-nitrosation (50%). Mass spectrometry analysis revealed a higher number of proteins S-nitrosated under oxidative stress (51 proteins, vs 32 in basal conditions), including a higher number of cytoskeletal proteins (17, vs 8 in basal conditions) related to cell contraction, morphogenesis and movement. Furthermore, proteins belonging to additional protein classes (cell adhesion, transfer/carrier, and transporter proteins) were S-nitrosated under oxidative stress.

In conclusion, higher levels of GSNO-dependent S-nitrosation of proteins from the cytoskeleton and the contractile machinery were identified under oxidative stress conditions. These findings may prompt the identification of suitable biomarkers for the appraisal of GSNO bioactivity in the treatment of CVD.

Keywords: Oxidative stress, S-nitrosoglutathione, Protein S-nitrosation, Gamma-glutamyl transferase, Protein disulfide isomerase, Mass spectrometry.
CHAPTER IV General Discussion, Conclusions and Perspectives

| 360,539 | [ "781571" ] | [ "422739" ] |
01491112 | en | [ "math" ] | 2024/03/04 23:41:50 | 2019 | https://hal.science/hal-01491112/file/1703.04658.pdf | Jean-Baptiste Meilhan
email: [email protected]
Akira Yasuhara
email: [email protected]
ARROW CALCULUS FOR WELDED AND CLASSICAL LINKS
We develop a diagrammatic calculus for welded and classical knotted objects. We define Arrow presentations, which are essentially equivalent to Gauss diagrams but carry no sign on arrows, and more generally w-tree presentations, which can be seen as 'higher order Gauss diagrams'. We provide a complete set of moves for Arrow and w-tree presentations. This Arrow calculus is used to characterize finite type invariants of welded knots and long knots. Using S. Satoh's Tube map, which realizes welded objects as knotted surfaces in 4-space, we recover several topological results due to K. Habiro, A. Shima, and to T. Watanabe. We also classify welded string links up to homotopy, thus recovering a result of the first author with B. Audoux, P. Bellingeri and E. Wagner.
Introduction
A Gauss diagram is a combinatorial object, introduced by M. Polyak and O. Viro in [Polyak-Viro], which faithfully encodes 1-dimensional knotted objects in 3-space. To a knot diagram, one associates a Gauss diagram by connecting, on a copy of S^1, the two preimages of each crossing by an arrow, oriented from the over- to the under-passing strand and labeled by the sign of the crossing. Gauss diagrams form a powerful tool for studying knots and their invariants. In particular, a result of M. Goussarov [Goussarov-Polyak-Viro] states that any finite type (Goussarov-Vassiliev) knot invariant admits a Gauss diagram formula, i.e. can be expressed as a weighted count of arrow configurations in a Gauss diagram. A remarkable feature of this result is that, although it concerns classical knots, its proof heavily relies on virtual knot theory. Indeed, Gauss diagrams are inherently related to virtual knots, since an arbitrary Gauss diagram doesn't always represent a classical knot, but a virtual one [Goussarov-Polyak-Viro; Kauffman].
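Schematically, such a Gauss diagram formula expresses an invariant v of a knot K with Gauss diagram G_K as a finite weighted sum; the display below is our paraphrase of the 'weighted count' just mentioned, not a formula quoted from the source:

```latex
v(K) \;=\; \sum_{A} c_A \, \langle A,\, G_K \rangle ,
```

where A runs over a finite collection of arrow configurations, the c_A are fixed coefficients, and ⟨A, G_K⟩ denotes the (sign-weighted) number of occurrences of the configuration A among the arrows of G_K.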
More recently, further topological applications of virtual knot theory arose from its welded quotient, where one allows a strand to pass over a virtual crossing. This quotient is completely natural from the virtual knot group viewpoint, which naturally satisfies this additional local move. Hence all virtual invariants derived from the knot group, such as the Alexander polynomial or Milnor invariants, are intrinsically invariants of welded knotted objects. Welded theory is also natural in that classical knots and (string) links embed in their welded counterparts. The topological significance of welded theory was brought to light by S. Satoh [Satoh]; building on early works of T. Yajima [Yajima], he defined the so-called Tube map, which 'inflates' welded diagrams into ribbon knotted surfaces in dimension 4. Using the Tube map, welded theory was successfully used in [Audoux-Bellingeri-Meilhan-Wagner] to classify ribbon knotted annuli and tori up to link-homotopy (for knotted annuli, it was later shown that the ribbon case can be used to give a general link-homotopy classification [Audoux et al., On link-homotopy in codimension 2]).
In this paper, we develop an arrow calculus for welded knotted objects, which can be regarded as a kind of 'higher order Gauss diagram' theory. We first recast the notion of Gauss diagram into so-called Arrow presentations for classical and welded knotted objects. These are like Gauss diagrams without signs, and they satisfy a set of Arrow moves, which we prove to be complete, in the following sense.
Theorem 1 (Thm. 4.5). Two Arrow presentations represent equivalent diagrams if and only if they are related by Arrow moves.
We stress that, unlike the Gauss diagram analogues of Reidemeister moves, which involve rather delicate compatibility conditions in terms of the arrow signs and local strand orientations, Arrow moves involve no such restrictions.
More generally, we define w-tree presentations for diagrams, where arrows of Gauss diagrams are generalized to oriented trees. Arrow moves are then extended to a calculus of w-tree moves, i.e. we have a w-tree version of Theorem 1.
This work should also be regarded as a welded version of the Goussarov-Habiro theory [Habiro; Gusarov], solving partially a problem set by M. Polyak in [Ohtsuki, Problems on invariants of knots and 3-manifolds, Problem 2.25]. In [Habiro], Habiro introduced the notion of clasper for (classical) knotted objects, which is a kind of embedded graph carrying a surgery instruction. Any knot or (string) link can be obtained from the trivial one by clasper surgery, and a set of moves is known, relating any two such presentations. A striking result is that clasper theory gives a topological characterization of the information carried by finite type invariants of knots. More precisely, Habiro used claspers to define the C_k-equivalence relation, for any integer k ≥ 1, and showed that two knots share all finite type invariants up to degree < k if and only if they are C_k-equivalent. This result was also independently obtained by Goussarov in [Gusarov]. In this paper, we use w-tree presentations to define a notion of w_k-equivalence, and prove similar characterization results. More precisely, we use Arrow calculus to show the following.
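For later comparison with the w_k-equivalence introduced below, the Goussarov-Habiro characterization just recalled can be displayed as follows (a restatement of the preceding sentence, not an additional result):

```latex
K \sim_{C_k} K' \quad \Longleftrightarrow \quad v(K) = v(K') \ \text{ for all finite type invariants } v \text{ of degree } < k .
```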
Theorem 2 (Cor. 8.2). There is no non-trivial finite type invariant of welded knots.
Theorem 3 (Cor. 8.6). The following assertions are equivalent, for any k ≥ 1:
(1) two welded long knots are w_k-equivalent, (2) two welded long knots share all finite type invariants of degree < k, (3) two welded long knots have the same invariants {α_i} for 2 ≤ i ≤ k.
Here, the invariants α_i are given by the coefficients of the power series expansion at t = 1 of the normalized Alexander polynomial.
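Concretely, denoting by ∆̃_K the normalized Alexander polynomial of a welded long knot K, one way to phrase this definition is the following (our rendering of the sentence above, assuming the normalization ∆̃_K(1) = 1, so that α_0 = 1):

```latex
\widetilde{\Delta}_K(t) \;=\; \sum_{i \ge 0} \alpha_i(K) \, (t-1)^i .
```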
Theorem 2 and the equivalence (2)⇔(3) of Theorem 3 were independently shown for rational-valued finite type invariants by D. Bar-Natan and S. Dancso [Bar-Natan-Dancso]. Using Satoh's Tube map, we can promote these results to topological ones. More precisely, we obtain that there is no non-trivial finite type invariant of ribbon torus-knots (Cor. 8.3), and reprove a result of Habiro and A. Shima [Habiro-Shima] stating that finite type invariants of ribbon 2-knots are determined by the (normalized) Alexander polynomial (Cor. 8.7). Moreover, we show that Theorem 3 implies a result of T. Watanabe [Watanabe] which characterizes topologically finite type invariants of ribbon 2-knots. See Section 10.3.
We also develop a version of Arrow calculus up to homotopy. Here, the notion of homotopy for welded diagrams is generated by the self-(de)virtualization move, which replaces a classical crossing between two strands of the same component by a virtual one, or vice versa. We use the homotopy Arrow calculus to prove the following.
Theorem 4 (Cor. 9.5). Welded string links are classified up to homotopy by welded Milnor invariants. This result, which is a generalization of Habegger-Lin's classification of string links up to link-homotopy [Habegger-Lin], was first shown by B. Audoux, P. Bellingeri, E. Wagner and the first author in [Audoux-Bellingeri-Meilhan-Wagner]. Our version is stronger in that it gives, in terms of w-trees, an explicit representative for the homotopy class of a welded string link, see Theorem 9.4. Moreover, this result can be used to give homotopy classifications of ribbon annuli and torus-links, as shown in [Audoux-Bellingeri-Meilhan-Wagner].
The rest of this paper is organized as follows. We recall in Section 2 the basics on classical and welded knotted objects, and the connection to ribbon knotted objects in dimension 4. In Section 3, we give the main definition of this paper, introducing w-arrows and w-trees. We then focus on w-arrows in Section 4. We define Arrow presentations and Arrow moves, and prove Theorem 1. The relation to Gauss diagrams is also discussed in more detail in Section 4.4. Next, in Section 5 we turn to w-trees. We define the Expansion move (E), which leads to the notion of w-tree presentation, and we provide a collection of moves on such presentations. In Section 6, we give the definitions and some properties of the welded extensions of the knot group, the normalized Alexander polynomial, and Milnor invariants. We also review the finite type invariant theory for welded knotted objects. The w_k-equivalence relation is introduced and studied in Section 7. We also clarify there the relation to finite type invariants and to Habiro's C_n-equivalence. Theorems 2 and 3 are proved in Section 8. In Section 9, we consider Arrow calculus up to homotopy, and prove Theorem 4. We close this paper with Section 10, where we gather several comments, questions and remarks. In particular, we prove in Section 10.3 the topological consequences of our results, stated above.
Acknowledgments. The authors would like to thank Benjamin Audoux for stimulating conversations. This paper was completed during a visit of the first author at Tsuda College, Tokyo, whose hospitality and support are warmly acknowledged. The second author is partially supported by a Grant-in-Aid for Scientific Research (C) (#23540074) of the Japan Society for the Promotion of Science.
2. A quick review of classical and welded knotted objects

2.1. Basic definitions. A classical knotted object is the image of an embedding of some oriented 1-manifold in 3-dimensional space. Typical examples include knots and links, braids, string links, and more generally tangles. It is well known that such embeddings are faithfully represented by a generic planar projection, where the only singularities are transverse double points endowed with a diagrammatic over/under information, as on the left-hand side of Figure 2.1, modulo Reidemeister moves I, II and III.
This diagrammatic realization of classical knotted objects generalizes to virtual and welded knotted objects, as we briefly outline below.
A virtual diagram is an immersion of some oriented 1-manifold in the plane, whose singularities are a finite number of transverse double points that are labeled, either as a classical crossing or as a virtual crossing, as shown in Figure 2.1.

Convention 2.1. Note that we do not use here the usual drawing convention for virtual crossings, with a circle around the corresponding double point.
There are three classes of local moves that one considers on virtual diagrams.
• the three classical Reidemeister moves,
• the three virtual Reidemeister moves, which are the exact analogues of the classical ones with all classical crossings replaced by virtual ones, • the Mixed Reidemeister move, shown on the left-hand side of Figure 2.2. We call these three classes of moves the generalized Reidemeister moves. A virtual Figure 2.2. The Mixed, OC and UC moves on virtual diagrams knotted object is the equivalence class of a virtual diagram under planar isotopy and generalized Reidemeister moves. This notion was introduced by Kauffman in [START_REF] Louis | Virtual knot theory[END_REF], where we refer the reader for a much more detailed treatment.
Recall that generalized Reidemeister moves in particular imply the so-called detour move, which replaces an arc passing through a number of virtual crossings by any other such arc, with same endpoints.
Recall also that there are two 'forbidden' local moves, called OC and UC moves (for Overcrossings and Undercrossings Commute), as illustrated in Figure 2.2.
In this paper, we shall rather consider the following natural quotient of virtual theory.
Definition 2.2. A welded knotted object is the equivalence class of a virtual diagram under planar isotopy, generalized Reidemeister moves and OC moves.
There are several reasons that make this notion both natural and interesting. The virtual knot group, introduced by Kauffman in [Kauffman] at the early stages of virtual knot theory, is intrinsically a welded invariant. As a consequence, the virtual extensions of classical invariants derived from (quotients of) the fundamental group are in fact welded invariants, see Section 6. Another, topological, motivation is the relation with ribbon knotted objects in codimension 2, see Section 2.2.
In what follows, we will be mainly interested in welded links and welded string links, which are the welded extensions of classical link and string link diagrams. Recall that, roughly speaking, an n-component welded string link is a diagram made of n arcs properly immersed in a square with n points marked on the lower and upper faces, such that the kth arc runs from the kth lower to the kth upper marked point. A 1-component string link is often called long knot in the literature -we shall use this terminology here as well.
Welded (string) links are a genuine extension of classical (string) links, in the sense that the latter embed into the former. This is shown strictly as in the knot case [7, Thm.1.B], and actually also holds for virtual objects.
Convention 2.3. In the rest of this paper, by 'diagram' we will implicitly mean an oriented diagram, containing classical and/or virtual crossings, and the natural equivalence relation on diagrams will be that of Definition 2.2. We shall sometimes use the terminology 'welded diagram' to emphasize this fact. As noted above, this includes in particular classical (string) link diagrams.
Remark 2.4. Notice that the OC move, together with generalized Reidemeister moves, implies a welded version of the detour move, called w-detour move, which replaces an arc passing through a number of over-crossings by any other such arc, with same endpoints. This is proved strictly as for the detour move, the OC move playing the role of the Mixed move.
2.2. Welded theory and ribbon knotted objects in codimension 2. As already indicated, one of the main interests of welded knot theory is that it allows to study certain knotted surfaces in 4-space. As a matter of fact, the main results of this paper will have such topological applications, so we briefly review these objects and their connection to welded theory.
Recall that a ribbon immersion of a 3-manifold M in 4-space is an immersion admitting only ribbon singularities. Here, a ribbon singularity is a 2-disk with two preimages, one being embedded in the interior of M , and the other being properly embedded.
A ribbon 2-knot is the boundary of a ribbon immersed 3-ball in 4-space, and a ribbon torus-knot is, likewise, the boundary of an immersed solid torus in 4-space. A ribbon 2-link (resp. torus-link) is the boundary of a ribbon immersed disjoint union of 3-balls (resp. solid tori) in 4-space. More generally, one can define ribbon 2-string links, which are the natural analogues of string links, see [START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF][START_REF] Audoux | On link-homotopy in codimension 2[END_REF]. Ribbon 2-links admit two natural closure operations, by either capping off each component by 2-disks or by a braid-type closure operation; the first closure operation produces a ribbon 2-link, while the second one yields a ribbon torus-link. By ribbon knotted object, we mean a knotted surface obtained as the boundary of some ribbon immersed 3-manifold in 4-space.
Using works of T. Yajima [Yajima], S. Satoh defined in [Satoh] a surjective Tube map, from welded diagrams to ribbon 2-knotted objects. Roughly speaking, the Tube map assigns, to each classical crossing of a diagram, a pair of locally linked annuli in a 4-ball as shown in Figure 2.3 (we use the same drawing convention as [Satoh]); next, it only remains to connect these annuli to one another by unlinked annuli, as prescribed by the diagram. Although not injective in general,¹ the Tube map acts faithfully on the 'fundamental group'. This key fact, which will be made precise in Remark 6.1, will allow us to draw several topological consequences from our diagrammatic results. See Section 10.3.

¹ The Tube map is not injective for welded knots [Ichimori et al.], but is injective for welded braids [Brendle-Hatcher] and for welded string links up to homotopy [Audoux-Bellingeri-Meilhan-Wagner].
Remark 2.5. One can more generally define k-dimensional ribbon knotted objects in codimension 2, for any k ≥ 2, and the Tube map generalizes straightforwardly to a surjective Tube_k map from welded diagrams to k-dimensional ribbon knotted objects. See for example [Audoux et al., On link-homotopy in codimension 2]. As a matter of fact, most of the topological results of this paper extend freely to ribbon knotted objects in codimension 2.
3. w-arrows and w-trees

Let D be a diagram. The following is the main definition of this paper.

Definition 3.1. A w-tree for D is a connected uni-trivalent tree T, immersed in the plane of the diagram such that:
• the trivalent vertices of T are pairwise disjoint and disjoint from D,
• the univalent vertices of T are pairwise disjoint and are contained in D \ {crossings of D},
• all edges of T are oriented, such that each trivalent vertex has two ingoing and one outgoing edge,
• each vertex is equipped with a cyclic order on the three incident edges/strands of D,
• we allow virtual crossings between edges of T, and between D and edges of T, but classical crossings involving T are not allowed,
• each edge of T is assigned a number (possibly zero) of decorations •, called twists, which are disjoint from all vertices and crossings.
A w-tree with a single edge is called a w-arrow.
For a union of w-trees for D, vertices are assumed to be pairwise disjoint, and all crossings among edges are assumed to be virtual. See Figure 3.1.

We call tails the univalent vertices of T with outgoing edges, and we call the head the unique univalent vertex with an ingoing edge. We will call endpoint any univalent vertex of T when we do not need to distinguish between tails and head. The edge incident to the head is called terminal.
Two endpoints of a union of w-trees for D are called adjacent if, when travelling along D, these two endpoints are met consecutively, without encountering any crossing or endpoint.

Remark 3.2. Note that, given a uni-trivalent tree, picking a univalent vertex as the head uniquely determines an orientation on all edges respecting the above rule. Thus, we usually only indicate the orientation of w-trees at the terminal edge. However, it will occasionally be useful to indicate the orientation on other edges, for example when drawing local pictures.

Definition 3.3. Let k ≥ 1 be an integer. A w-tree of degree k, or w_k-tree, for D is a w-tree for D with k tails.

Convention 3.4. We will use the following drawing conventions. Diagrams are drawn with bold lines, while w-trees are drawn with thin lines. The cyclic order at each vertex is always counterclockwise. See Figure 3.1. We shall also use the symbol • to describe a w-tree that may or may not contain a twist at the indicated edge.
4. Arrow presentations of diagrams

In this section, we focus on w-arrows. We explain how w-arrows carry 'surgery' instructions on diagrams, so that they provide a way to encode diagrams. A complete set of moves is provided, relating any two w-arrow presentations of equivalent diagrams. The relation to the theory of Gauss diagrams is also discussed.

Suppose that there is a disk in the plane that intersects D ∪ A as shown in Figure 4.1. If some w-arrow of A intersects the diagram D (at some virtual crossing disjoint from its endpoints), then this introduces pairs of virtual crossings as indicated on the left-hand side of the figure below. Likewise, the right-hand side of the figure indicates the rule when two portions of (possibly the same) w-arrow(s) of A intersect.
Finally, if some w-arrow of A contains some twists, we simply insert virtual crossings accordingly, as indicated below.
An example is given in Figure 4.2. We say that two Arrow presentations are equivalent if the surgeries yield equivalent diagrams. We will simply denote this equivalence by =.
In the next section, we address the problem of generating this equivalence relation by local moves on Arrow presentations.
As Figure 4.3 illustrates, surgery along a w-arrow is equivalent to a devirtualization move, which is a local move that replaces a virtual crossing by a classical one. This observation implies the following. In other words, the wA-presentation of a diagram should be thought of as its canonical Arrow presentation. For example, for the diagram of the trefoil shown in Figure 4.13, the wA-presentation is given in the center of the figure.

Proof. Virtual Isotopy moves (1) are easy consequences of the surgery definition of w-arrows and virtual Reidemeister moves. This is clear for the Reidemeister-type moves, since all such moves locally involve only virtual crossings. The remaining local moves essentially follow from detour moves. For example, the figure below illustrates the proof of one instance of the second move, for one choice of orientation at the tail:
= = =
All other moves of (1) are given likewise by virtual Reidemeister moves.
Move [START_REF] Audoux | Extensions of some classical local moves[END_REF] follows from the definition of a twist and the virtual Reidemeister II move, as shown below:
= = =
Having proved these first sets of moves, we can freely use them to simplify the proof of the remaining moves. For example, we can freely assume that the w-arrow involved in the Tail Reversal move ( 4) is either as shown on the left-hand side of [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF]. There, the second and fourth identities are applications of the detour move, while the third move uses the OC move. In Figure 4.6, we had to choose a local orientation for the upper strand. This implies the result for the other choice of orientation, by using the Tail Reversal move [START_REF] Bar-Natan | Vassiliev homotopy string link invariants[END_REF].
Moves ( 6) and ( 7) are direct consequences of the definition, and are left to the reader. Finaly, we prove [START_REF] Gusarov | On n-equivalence of knots and invariants of finite degree[END_REF]. There are a priori several choices of local orientations to consider, which are all declined in two versions, depending on whether we insert a twist on the •-marked w-arrow or not. Figure 4.7 illustrates the proof for one choice of orientation, in the case where no twist is inserted. The sequence of identities in this figure is given as follows: the second and third identities use isotopies and detour moves, the fourth (vertical) one uses the OC move, then followed by isotopies and detour moves which give the fifth equality. The final step uses the Tails Exchange move (5). Now, notice that the exact same proof applies in the case where there is a twist on the •-marked w-arrow. Moreover, if we change the local orientation of, say, the bottom strand in the figure, the result follows from the previous case by the Reversal moves (3) and ( 4), the Tails Exchange move (5) and the Involutivity move (2), as the following picture indicates:
= = = = =
= = =
We leave it to the reader to check that, similarly, all other choices of local orientations follow from the first one.
The main result of this section is that this set of moves is complete. The if part of the statement is shown in Lemma 4.4. In order to prove the only if part, we will need the following. Virtual Reidemeister moves and the Mixed move follow from Virtual Isotopy moves [START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF]. For example, the case of the Mixed move is illustrated in Figure 4.8 (the argument holds for any choice of orientation). The OC move is, expectedly, essentially a consequence of the Tails Exchange move [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF]. More precisely, Figure 4.9 shows how applying the Tails Exchange together with Isotopy moves (1), followed by Tail Reversal moves (4), and further Isotopy moves, realizes the OC move. III, we first note that, although there are a priori eight choices of orientation to be considered, Polyak showed that only one is necessary [START_REF] Polyak | Minimal generating sets of Reidemeister moves[END_REF]. We consider this move in Figure 4.12. There, the second equality uses the Reversal and Isotopy moves (3), ( 4) and ( 1), the third equality uses the Inverse move [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF], and the fourth one 8) as well as the Tails Exchange move [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF]. Then the fifth equality uses the Inverse move back again, the sixth equality uses the Reversal, Isotopy and Tails Exchange moves, and the seventh one uses further Reversal and Isotopy moves.
= = =
= = = =
Remark 4.7. We note from the above proof that some of the Arrow moves appear as essential analogues of the generalized Reidemeister moves: the Isolated move [START_REF] Brendle | Configuration spaces of rings and wickets[END_REF] gives Reidemeister I move, while the Inverse move [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF] and Slide move [START_REF] Gusarov | On n-equivalence of knots and invariants of finite degree[END_REF] give Reidemeister II and III, respectively. Finaly, the Tails Exchange move (5) corresponds to the OC move.
We can now prove the main result of this section.
Proof of Theorem 4.5. As already mentioned, it suffices to prove the only if part. Observe that, given a diagram D, any Arrow presentation of D is equivalent to the wA-presentation of some diagram. Indeed, by the Involutivity and Head Reversal moves (2) and (3), we can assume that the Arrow presentation of D contains no twist. We can then apply Isotopy and Tail Reversal moves (1) and ( 4) to assume that each w-arrow is contained in a disk where it looks as on the left-hand side of Figure 4.1; by using virtual Reidemeister moves II, we can actually assume that it is next to a (virtual) crossing, as on the left-hand side of Figure 4.3. The resulting Arrow presentation is thus a wA-presentation of some diagram (which is equivalent to D, by Lemma 4.4). Now, consider two equivalent diagrams, and pick any Arrow presentations for these diagrams. By the previous observation, these Arrow presentations are equivalent to wA-presentations of equivalent diagrams. The result then follows from Lemma 4.6.
4.4.
Relation to Gauss diagrams. Although similar-looking and closely related, w-arrows are not to be confused with arrows of Gauss diagrams. In particular, the signs on arrows of a Gauss diagram are not equivalent to twists on w-arrows. Indeed, the sign of the crossing defined by a w-arrow relies on the local orientation of the strand where its head is attached. The local orientation at the tail, however, is irrelevant. Let us clarify here the relationship between these two objects.
Given an Arrow presentation V ∪ A for some diagram K (of, say, a knot) one can always turn it by Arrow moves into an Arrow presentation V 0 ∪ A 0 , where V 0 is a trivial diagram, with no crossing. See for example the case of the trefoil in Figure 4.13. There is a unique Gauss diagram for K associated to V 0 ∪ A 0 , which is simply = = Conversely, any Gauss diagram can be converted to an Arrow presentation, by attaching the head of an arrow to the right-hand (resp. left-hand) side of the (trivial) diagram if it is labeled by a + (resp. -).
Theorem 4.5 provides a complete calculus (Arrow moves) for this alternative version of Gauss diagrams (Arrow presentations), which is to be compared with the Gauss diagram versions of Reidemeister moves. Although the set of Arrow moves is larger, and hence less suitable for (say) proving invariance results, it is in general much simpler to manipulate. Indeed, Gauss diagram versions of Reidemeister moves III and (to a lesser extent) II contain rather delicate compatibility conditions, given by both the arrow signs and local orientations of the strands, see [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF]; Arrow moves, on the other hand, involve no such condition.
Moreover, we shall see in the next sections that Arrow calculus generalizes widely to w-trees. This can thus be seen as an 'higher order Gauss diagram' calculus.
Surgery along w-trees
In this section, we show how w-trees allow to generalize surgery along w-arrows. 5.1. Subtrees, expansion, and surgery along w-trees. We start with a couple preliminary definitions.
A subtree of a w-tree is a connected union of edges and vertices of this w-tree. Given a subtree S of a w-tree T for a diagram D (possibly T itself), consider for each endpoint e of S a point e on D which is adjacent to e, so that e and e are met consecutively, in this order, when running along D following the orientation. One can then form a new subtree S , by joining these new points by the same directed subtree as S, so that it runs parallel to it and crosses it only at virtual crossings. We then say that S and S are two parallel subtrees.
We now introduce the Expansion move (E), which comes in two versions as shown in Figure 5 By applying (E) recursively, we can eventually turn any w-tree into a union of w-arrows. Note that this process is uniquely defined. An example is given in Figure 5.3.
Definition 5.2. The expansion of a w-tree is the union of w-arrows obtained from repeated applications of (E). 5.3 illustrates, the expansion of a w k -tree T takes the form of an 'iterated commutators of w-arrows'. More precisely, labeling the tails of T from 1 to k, and denoting by i a w-arrow running from (a neighborhood of) tail i to (a neighborhood of) the head of T , and by i -1 a similar w-arrow with a twist, then the heads of the w-arrows in the expansion of T are met along D according to a k-fold commutator in 1, • • • , k. See Section 6.1.2 for a more rigorous and detailed treatment.
The notion of expansion leads to the following. Definition 5.4. The surgery along a w-tree is surgery along its expansion.
As before, we shall denote by D T the result of surgery on a diagram D along a union T of w-trees.
Remark 5.5. We have the following Brunnian-type property. Given a w-tree T , consider the trivial tangle D given by a neighborhood of its endpoints: the tangle D T is Brunnian, in the sense that deleting any component yields a trivial tangle. Indeed, in the expansion of T , we have that deleting all w-arrows which have their tails on a same component of D, produces a union of w-arrows which yields a trivial surgery, thanks to the Inverse move [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF].
Moves on w-trees.
In this section, we extend the Arrow calculus set up in Section 4 to w-trees. The expansion process, combined with Lemma 4.4, gives immediately the following. Lemma 5.6. Arrow moves (1) to ( 5) hold for w-trees as well. More precisely:
• one should add the following local moves to (1):
strand of diagram or edge of w-tree s =
• The Tails Exchange move (5) may involves tails from different components or from a single component.
Remark 5.7. As a consequence of the Tails Exchange move for w-trees, the relative position of two (sub)trees for a diagram is completely specified by the relative position of the two heads. In particular, we can unambiguously refer to parallel w-trees by only specifying the relative position of their heads. Likewise, we can freely refer to 'parallel subtrees' of two w-trees if these subtrees do not contain the head.
Convention 5.8. In the rest of the paper, we will use the same terminology for the w-tree versions of moves ( 1) to [START_REF] Brendle | Configuration spaces of rings and wickets[END_REF], and in particular we will use the same numbering. As for moves [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF] and ( 8), we will rather refer to the next two lemmas when used for w-trees.
As a generalization of the Inverse move (7), we have the following.
Lemma 5.9 (Inverse). Two parallel w-trees which only differ by a twist on the terminal edge yield a trivial surgery. 3
=
Proof. We proceed by induction on the degree of the w-trees involved. The w-arrow case is given by move [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF]. Now, suppose that the left-hand side in the above figure involves two w k -trees. Then, one can apply (E) to both to obtain a union of eight w-trees of degree < k. Note that S can be described explicitly from S, by using the Inverse Lemma 5.9 recursively.
Likewise, we have the following natural generalization of the Slide move (8). Proof. The proof is done by induction on the degree of the w-trees involved in the move, as in the proof of Lemma 5.9. The degree 1 case is the Slide move (8) for w-arrows. Now, suppose that the left-hand side in the figure of Lemma 5.11 involves two w k -trees, and apply (E) to obtain a union of eight w-trees of degree < k. Remark 5.12. The Slide Lemma 5.11 generalizes as follows. If one replace the warrow in Figure 5.5 by a bunch of parallel w-arrows, then the lemma still applies. Indeed, it suffices to insert, using the Inverse Lemma 5.9, pairs of parrallel w-trees between the endpoints of each pair of consecutive w-arrows, apply the Slide Lemma 5.11, then remove pairwise all the added w-trees again by the Inverse Lemma. Note that this applies for any parallel bunch of w-arrows, for any choice of orientation and twist on each individual w-arrow.
We now provide several supplementary moves for w-trees. Lemma 5.17 (Antisymmetry). The cyclic order at a trivalent vertex may be changed, at the cost of a twist on the three incident edges at that vertex:
=
Proof. The proof is by induction on the number of edges from the head to the trivalent vertex involved in the move. When there is only one edge, the result simply follows from (E), isotopy of the resulting w-trees, and (E) back again, as shown in Figure 5 The inductive step is clear: applying (E) to a w-tree containing a fork yields four w-trees, two of which contain a fork, by the Tails Exchange move [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF]. Using the induction hypothesis, we are thus left with two w-trees which cancel by the Inverse Lemma 5.9. 5.3. w-tree presentations for welded knotted objects. We have the following natural generalization of the notion of Arrow presentation. Let us call w-tree moves the set of moves on w-trees given by the results of Section 5. More precisely, w-tree moves consists of the Expansion move (E), Moves (1)-( 6) of Lemma 5.6, and the Inverse (Lem. 5.9), Slide (Lem. 5.11), Head Traversal (Lem. 5.13), Heads Exchange (Lem. 5.14), Head/Tail Exchange (Lem. 5.16), Antisymmetry (Lem. 5.17) and Fork (Lem. 5.18) moves. Clearly, w-tree moves yield equivalent w-tree presentations.
Examples of w-tree presentations for the right-handed trefoil are given in Figure 5.16. There, starting from the Arrow presentation of Figure 4.13, we apply the Head/Tail Exchange Lemma 5.16, the Tails Exchange move (5) and the Isolated Arrow move [START_REF] Brendle | Configuration spaces of rings and wickets[END_REF]. As mentioned in Section 4.4, these can be regarded as kinds of = = = It follows from Theorem 4.5 that w-tree moves provide a complete calculus for w-tree presentations. In other words, we have the following.
Theorem 5.20. Two w-tree presentations represent equivalent diagrams if and only if they are related by w-tree moves.
Note that the set of w-tree moves is highly non-minimal. In fact, the above remains true when only considering the Expansion move (E) and Arrow moves (1)-(8).
Welded invariants
In this section, we review several welded extensions of classical invariants. Since virtual crossings do not produce any generator or relation, virtual and Mixed Reidemeister moves obviously preserve the group presentation [START_REF] Louis | Virtual knot theory[END_REF]. It turns out that this 'virtual knot group' is also invariant under the OC move, and is thus a welded invariant [START_REF] Louis | Virtual knot theory[END_REF][START_REF] Satoh | Virtual knot presentation of ribbon torus-knots[END_REF].
6.1.1. Wirtinger-like presentation using w-trees. Given a w-tree presentation of a diagram L, we can associate a Wirtinger-like presentation of G(L) which involves in general fewer generators and relations. More precisely, let U ∪ T be a w-tree presentation of L, where T = T 1 ∪ • • • ∪ T r has r connected components. The r heads of T split U into a collection of n arcs, 5 and we pick a generator m i for each of them. Consider the free group F generated by these generators, where the inverse of a generator m i will be denoted by m i . Arrange the heads of T (applying the Head Reversal move (3) if needed) so that it looks locally as in Figure 6.2. Then we have
G(L) = {m i } i | R j (j = 1, • • • , r)
, where R j is a relation associated with T j as illustrated in the figure . There,w(T j ) is a word in F , constructed as follows. First, label each edges of T j which is incident to a tail by the generator m i inherited from its attaching point. Next, label all edges of T j by elements of F by applying recursively the rules illustrated in Figure 6.2. More precisely, assign recursively to each outgoing edge at a trivalent vertex the formal bracket [a, b] := abab, where a and b are the labels of the two ingoing edge, following the cyclic orientation of the vertex; we also require that a label meeting a twist is replaced by its inverse. This procedure yields a word w(T j ) ∈ F associated to T j , which is defined as the label at its terminal edge. Note that this procedure more generally associates a formal word to any subtree of T j , and that, by the Tail Reversal move (4), the local orientation of the diagram at each tail is not relevant in this process.
In the case of a wA-presentation of a diagram, the above procedure recovers the usual Wirtinger presentation of the diagram, and it is easily checked that, in general, this procedure indeed gives a presentation of the same group. Remark 6.1. As outlined in Section 2.2, the Tube map that 'inflates' a welded diagram L into a ribbon knotted surface acts faithfully on the virtual knot group, in the sense that we have π 1 Tube(L) ∼ = G(L), 6 and that it maps meridians to meridians and (preferred) longitudes to (preferred) longitudes, so that the Wirtinger presentations are in one-to-one correspondence (see [START_REF] Satoh | Virtual knot presentation of ribbon torus-knots[END_REF][START_REF] Yajima | On the fundamental groups of knotted 2-manifolds in the 4-space[END_REF][START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF]). 6.1.2. Algebraic formalism for w-trees. Let us push a bit further the algebraic tool introduced in the previous section.
Given two w-trees T and T with adjacent heads in a w-tree presentation, such that the head of T is met before that of T when following the orientation, we define w(T ∪ T ) := w(T )w(T ) ∈ F. Convention 6.2. Here F denotes the free group on the set of Wirtinger-like generators of the given w-tree presentation, as defined in Section 6.1. 1. In what follows, we will always use this implicit notation.
Note that, if T is obtained from T by inserting a twist in its terminal edge, then w(T ) = w(T ), and w(T ∪ T ) = 1, which is compatible with Convention 5.10. Now, if we denote by E(T ) the result of one application of (E) to some w-tree T , then we have w(T ) = w(E(T )). More precisely, if we simply denote by A and B the words associated with the two subtrees at the two ingoing edges of the vertex where (E) is applied, then we have
w(T ) = [A, B] = A B A B.
We can therefore reformulate (and actually, easily reprove) some of the results of Section 5.2 in these algebraic terms. For example, the Heads Exchange Lemma 5.14 translates to
AB = B[B, A]A,
and its variants given in Figure 5.9, to
AB = B[A, B]A = BA[A, B] = [A, B]BA.
The Antisymmetry Lemma 5.17 also reformulates nicely; for example the 'initial case' shown in Figure 5.12 can be restated as
[B, A] = [A, B]
. 6 Here, π 1 Tube(L) denote the fundamental group of the complement of the surface Tube(L) in 4-space.
Finally, the Fork Lemma 5.18 is simply
[ • • • [A, A] • • •] = 1.
In the sequel, although we will still favor the more explicit diagrammatical language, we shall sometimes make use of this algebraic formalism. 6.2. The normalized Alexander polynomial for welded long knots. Let L be a welded long knot diagram. Suppose that the group of L has presentation
G(L) = x 1 , • • • , x m |r 1 , • • • , r n for some m, n such that m ≥ n. Consider the n×m Jacobian matrix M = ϕ ∂r i ∂x j i,j
, where ∂ ∂xj denote the Fox free derivative in variable x j , and where ϕ :
ZF (x 1 , • • • , x m ) → Z[t ±1
] is the ring homomorphism mapping each generator x i of the free group
F (x 1 , • • • , x m ) to t.
The Alexander polynomial of L, denoted by ∆ L (t) ∈ Z[t ±1 ], is defined as the greatest common divisor of the n × n-minors of M , which is well-defined up to a unit factor [START_REF] Sawollek | On alexander-conway polynomials for virtual knots and links[END_REF][START_REF] Silver | Alexander groups and virtual links[END_REF]. Remark 6.3. If the presentation for G(L) is given from the Wirtinger procedure, with one generator for each classical crossing, then each line of M has exactly 3 nonzero entries, which are -t -1 , 1 and -1 + t -1 , so that the sum of its columns yields a column of zeros. Consequently, ∆ L (t) is given by any n × n-minors of M . This observation extends to any presentation of G(L) with deficiency one, and in particular for the Wirtinger-like presentations extracted from w-tree presentations of L, as presented in the previous section.
In order to remove the indeterminacy in the definition of ∆ L (t), we further require that ∆ L (1) = 1 and that d∆ L dt (1) = 0. The resulting invariant is the normalized Alexander polynomial of L, denoted by ∆L (see e.g. [START_REF] Habiro | Finite type invariants of Ribbon 2-knots[END_REF]); it decomposes as
∆L (t) = 1 + k≥2 α k (L)(1 -t) k ∈ Z[t -1],
thus defining a sequence of integer-valued invariants α k of welded long knots. 7Definition 6.4. We call the invariant α k the kth normalized coefficient of the Alexander polynomial.
We now give a realization result for the coefficients α k in terms of w-trees. Consider the welded long knots L k or L k (k ≥ 2) defined in Figure 6
(t) = 1 + (1 -t) k and ∆L k (t) = 1 -(1 -t) k .
Note that these are genuine equalities: there are no higher order terms. In particular, we have Proof of Lemma 6.5. The presentation for G(L k ) given by the defining w k -tree presentation is l, r|R k lR -1 k r -1 , where
α i (L k ) = -α i (L k ) = δ ik . ... ... L k ... ... L k
R k = [• • • [[[l, r -1 ], r -1 ], r -1 ] • • • ], r -1 is a length k commutator. One can show inductively that ∂R k ∂l = (1 -r) k-1
, so that the normalized Alexander polynomial is given by
∆L k (t) = ϕ ∂R k lR -1 k r -1 ∂l = 1 + (1 -t) k ,
thus completing the computation for L k . The result for L k is completely similar, and is left to the reader.
Although the following might be well-known, we add a short proof as we could not find any in the literature. Lemma 6.6. The normalized Alexander polynomial of welded long knots is multiplicative.
Proof. The proof is straightforward. Let K and K be two welded long knots, with groups given by x
1 , • • • , x m+1 |r 1 , • • • , r m and y 1 , • • • , y n+1 |s 1 , • • • , s n .
Then the group of the welded long knot K • K is given by
x 1 , • • • , x m , y 1 , • • • , y n+1 | r1 , • • • , r m , s 1 , • • • , s n ,
where ri is obtained from r i by replacing x m+1 by y 1 . The Jacobian matrix thus decomposes as
ϕ ∂ ri ∂xj 1≤i≤m 1≤j≤m ϕ ∂ ri ∂y1 1≤i≤m 0 0 ϕ ∂si ∂y1 1≤i≤n ϕ ∂si ∂yj 1≤i≤n 2≤j≤n+1 i,j
, and the result follows by taking the determinant after removing, say, the (m + 1)st column. Theorem 6.6 implies the following additivity result. Corollary 6.7. Let k be a positive integer and let K be a welded long knot with α i (K) = 0 (i ≤ k -1). Then, for any welded long knot K , α k (K • K ) = α k (K) + α k (K ). 6.3. Welded Milnor invariants. We now recall the general virtual extension of Milnor invariants given in [START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF], which is an invariant of welded string links. This construction is intrasically topological, since it is defined via the Tube map as the 4-dimensional analogue of Milnor invariants for (ribbon) knotted annuli in 4-space; we will however give here a purely combinatorial reformulation.
Given an n-component welded string link L, consider the group G(L) defined in Section 6.1. Consider also the free group F l and F u generated by the n 'lower' and 'upper' Wirtinger generators, i.e. the generators associated with the n arcs of L containing the initial, resp. terminal, point of each component. Recall that the lower central series of a group G is the family of nested subgroups {Γ k G} k≥1 defined recursively by Γ 8 This relies heavily on the topological realization of welded string links as ribbon knotted annuli in 4-space by the Tube map, which acts faithfully at the level of the group system: see Section 6 of [START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF] for the details.
1 G = G and Γ k+1 G = [G, Γ k G]. Then, for each k ≥ 1, we have a sequence of isomorphisms 8 F n /Γ k F n F l /Γ k F l G(L)/Γ k G(L) F u /Γ k F u F n /Γ k F n ,
where F n is the free group on m 1 , • • • , m n . In this way, we associate to L an element ϕ k (L) of Aut(F n /Γ k F n ). This is more precisely a conjugating automorphism, in the sense that, for each i, ϕ k (L) maps m i to a conjugate m λ k i i ; we call this conjugating element λ k i ∈ F n /Γ k F n the combinatorial ith longitude. Now, consider the Magnus expansion, which is the group homomorphism E : F n → Z X 1 , • • • , X n mapping each generator m i to the formal power series 1 + X i . Definition 6.8. For each sequence
I = i 1 • • • i m-1 i m of (possibly repeating) indices in {1, • • • , n}, the welded Milnor invariant µ w I (L) of L is the coefficient of the monomial X i1 • • • X im-1 in E(λ k im ), for any k ≥ m.
The number of indices in I is called the length of the invariant.
For example, the simplest welded Milnor invariants µ w ij indexed by two distinct integers i, j are the so-called virtual linking numbers lk i/j (see [7, §1.7]. Remark 6.9. This is a welded extension of the classical Milnor µ-invariants, in the sense that if L is a (classical) string link, then µ I (L) = µ w I (L) for any sequence I. The following realization result, in terms of w-trees, is to be compared with [20, pp.190] and [START_REF] Yasuhara | Self delta-equivalence for links whose Milnor's isotopy invariants vanish[END_REF]Lem. 4.1]. Lemma 6.10.
Let I = i 1 • • • i k be a sequence of indices in {1, • • • , n}, and, for any σ in the symmetric group S k-2 of degree k -2, set σ(I) = i σ(1) • • • i σ(k-2) i k-1 i k .
Consider the w-tree T I for the trivial n-string link diagram 1 n shown in Figure 6.4.
1 k i k- i ... i k- i 2 1 2 i Figure 6.4. The w k-1 -tree T i1,i2,••• ,i k-1 ,i k for 1 n
Then we have
µ w σ(I) ((1 n ) T I ) = 1 if σ =Id, 0 otherwise. Moreover for all σ ∈ S k-2 , we have µ w σ(I) ((1 n ) T I ) = -µ w σ(I) (1 n ) T I
, where T I is the w-tree obtained from T I by inserting a twist in the terminal edge.
Proof. This is a straightforward calculation, based on the observation that the combinatorial i k th longitude of T I is given by
λ k i k = [i 1 , [i 2 , • • • , [i k-3 , [i k-2 , i -1 k-1 ] -1 ] -1 • • • ] -1
] (all other longitudes are clearly trivial). Remark 6.11. The above definition can be adapted to welded link invariants, which involves, as in the classical case, a recurring indeterminacy depending on lower order invariants. In particular, the first non-vanishing invariants are well defined integers, and Lemma 6.10 applies in this case.
Finally, let us add the following additivity result. Lemma 6.12. Let L and L be two welded string links of the same number of components. Let m, resp. m , be the integer such that all welded Milnor invariants of L, resp. L , of length ≤ m, resp. ≤ m , are zero. Then
µ w I (L • L ) = µ w I (L) + µ w I (L ) for any sequence I of length ≤ m + m .
The proof is strictly the same as in the classical case, as for example in [START_REF] Meilhan | On Cn-moves for links[END_REF]Lem. 3.3], and is therefore left to the reader.
S ⊂S (-1) |S | v (L S ) = 0. An invariant is of degree k if it is of degree ≤ k, but not of degree ≤ k -1.
Remark 6.14. This definition is strictly similar to the usual notion of finite type (or Goussarov-Vassiliev) invariants for classical knotted objects, with the virtualization move now playing the role of the crossing change. Since a crossing change can be realized by (de)virtualization moves, we have that the restriction of any welded finite type invariant to classical objects is a Goussarov-Vassilev invariants.
The following is shown in [START_REF] Habiro | Finite type invariants of Ribbon 2-knots[END_REF] (in the context of ribbon 2-knots, see Remark 6.1). Lemma 6.15. For each k ≥ 2, the kth normalized coefficient α k of the Alexander polynomial is a finite type invariant of degree k.
It is known that classical Milnor invariants are of finite type [START_REF] Bar-Natan | Vassiliev homotopy string link invariants[END_REF][START_REF] Lin | Power series expansions and invariants of links[END_REF]. Using essentially the same arguments, it can be shown that, for each k ≥ 1, length k + 1 welded Milnor invariants of string links are finite type invariants of degree k. The key point here is that a virtualization, just a like a crossing change, corresponds to conjugating or not at the virtual knot group level. Since we will not make use of this fact in this paper, we will not provide a full and rigorous proof here. Indeed, formalizing the above very simple idea, as done by D. Bar-Natan in [START_REF] Bar-Natan | Vassiliev homotopy string link invariants[END_REF] in the classical case, turns out to be rather involved. Note, however, that we will use a consequence of this fact which, fortunately, can easily be proved directly, see Remark 7.6. Remark 6.16. The Tube map recalled in Section 2.2 is also compatible with this finite type invariant theory, in the following sense. Suppose that some invariant of welded knotted objects v extends naturally to an invariant v (4) of ribbon knotted objects, so that v (4) (Tube(D)) = v(D), for any diagram D. Note that this is the case for the virtual knot group, the normalized Alexander polynomial and welded Milnor invariants, essentially by Remark 6.1. Then, if v is a degree k finite type invariant, then so is v (4) , in the sense of the finite type invariant theory of [START_REF] Habiro | Finite type invariants of Ribbon 2-knots[END_REF][START_REF] Kanenobu | Two filtrations of ribbon 2-knots[END_REF]. Indeed, if two diagrams differ by a virtualization move, then their images by Tube differ by a 'crossing changes at crossing circles', which is a local move that generates the finite type filtration for ribbon knotted objects, see [START_REF] Kanenobu | Two filtrations of ribbon 2-knots[END_REF].
w k -equivalence
We now define and study a family of equivalence relations on welded knotted objects, using w-trees. We explain the relation with finite type invariants, and give several supplementary technical lemmas for w-trees. 7.1. Definitions. Definition 7.1. For each k ≥ 1, the w k -equivalence is the equivalence relation on welded knotted objects generated by generalized Reidemeister moves and surgery along w l -trees, l ≥ k. More precisely, two welded knotted objects W and W are w k -equivalent if there exists a finite sequence {W i } n i=0 of welded knotted objects such that, for each i ∈ {1, • • • , n}, W i is obtained from W i-1 either by a generalized Reidemeister move or by surgery along a w l -tree, for some l ≥ k.
By definition, the w k -equivalence becomes finer as the degree k increases, in the sense that the w k+1 -equivalence implies the w k -equivalence.
The notion of w k -equivalence is a bit subtle, in the sense that it involves both moves on diagrams and on w-tree presentations. Let us try to clarify this point by introducing the following. Notation 7.2. Let V ∪ T and V ∪ T be two w-tree presentations of some diagram, and let k ≥ 1 be an integer. Then we use the notation
V ∪ T k → V ∪ T
if there exists a union T of w-trees for V of degree ≥ k such that V ∪T = V ∪T ∪T .
Note that we have the implication
V ∪ T k → V ∪ T ⇒ V T k ∼ V T .
Therefore, statements given in the terms of Notation 7.2 will be given when possible.
The converse implication, however, does not seem to hold in general. In other words, we do not know whether a w k -equivalence version of [START_REF] Habiro | Claspers and finite type invariants of links[END_REF]Prop. 3.22] holds -see also Section 7.5.
7.2.
Cases k = 1 and 2. We now observe that w 1 -moves and w 2 -moves are equivalent to simple local moves on diagrams.
We already saw in Figure 4.3 that surgery along a w-arrow is equivalent to a devirtualization move. Clearly, by the inverse move [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF], this is also true for a virtualization move. It follows immediately that any two welded links or string link of the same number of components are w 1 -equivalent.
Let us now turn to the w 2 -equivalence relation. Recall that the right-hand side of Figure 2.2 depicts the UC move, which is the forbidden move in welded theory. We have Lemma 7.3. A w 2 -move is equivalent to a UC move.
Proof. Figure 7.1 below shows that the UC move is realized by surgery along a w 2 -tree. Note that, in the figure, we had to choose several local orientations on the strands: we leave it to the reader to check that the other cases of local orientations follow from the same argument, by simply inserting twists near the corresponding tails. It was shown in [START_REF] Audoux | Extensions of some classical local moves[END_REF] that two welded (string) links are related by a sequence of UC move, i.e. are w 2 -equivalent, if and only if they have same welded Milnor invariants µ w ij . In particular, any two welded (long) knots are w 2 -equivalent. Remark 7.4. The fact that any two welded (long) knots are w 2 -equivalent can also easily be checked directly using arrow calculus. Starting from an Arrow presentation of a welded (long) knot, one can use the (Tails, Heads and Head/Tail) Exchange move (5) and Lemmas 5.14 and 5.16 to separate and isolate all w-arrows, as in the figure of the Isolated move (6), up to addition of higher order w-trees. Each w-arrow is then equivalent to the empty one by move (6). 7.3. Relation to finite type invariants. One of the main point in studying welded (and classical) knotted objects up to w k -equivalence is the following. Proposition 7.5. Two welded knotted objects that are w k -equivalent (k ≥ 1) cannot be distinguished by finite type invariants of degree < k.
Proof. The proof is formally the same as Habiro's result relating C n -equivalence (see Section 7.5) to Goussarov-Vassiliev finite type invariants [11, §6.2], and is summarized below.
First, recall that, given a diagram L and
k unions W 1 , • • • , W k of w-arrows for L, the bracket [L; W 1 , • • • , W k ] stands for the formal linear combination of diagrams [L; W 1 , • • • , W k ] := I⊂{1,••• ,k} (-1) |I| L ∪ i∈I Wi .
Note that, if each W i consists of a single w-arrow, then the defining equation (6.1) of finite type invariants can be reformulated as the vanishing of (the natural linear extension of) a welded invariant on such a bracket. Note also that if, say, W 1 is a union of w-arrow W 1 1 , • • • , W n 1 , then we have the equality
[L; W 1 , W 2 , • • • , W k ] = n j=1 [L W 1 1 ∪•••∪W j-1 1 ; W j 1 , W 2 , • • • , W k ].
Hence if an invariant is of degree ≤ k, then it vanishes on [L;
W 1 , • • • , W k ].
Now, suppose that T is a w k -tree for some diagram L, and label the tails of T from 1 to k. Consider the expansion of T , and denote by W i the union of all w-arrows running from (a neighborhood of) tail i to (a neighborhood of) the head of T . Then L T = L ∪ k i=1 Wi and, according to the Brunnian-type property of w-trees noted in Remark 5.5, we have
L ∪ i∈I Wi = L for any I {1, • • • , k}. Therefore, we have L T -L = (-1) k [L; W , • • • , W k ],
which, according to the above observation, implies Proposition 7.5.
We will show in Section 8 that the converse of Proposition 7.5 holds for welded knots and long knots.
Remark 7.6. It follows in particular from Proposition 7.5 that Milnor invariants of length ≤ k are invariants under w k -equivalence. This can also be shown directly by noting that, if we perform surgery on a diagram L along some w k -tree, this can only change elements of G(L) by terms in Γ k F . 7.4. Some technical lemmas. We now collect some supplementary technical lemmas, in terms of w k -equivalence. We will use an additional tool, called the index, which we define below.
Let D be a welded (string) link diagram with n components labeled by 1 to n. Let T be a w k -tree for D. Definition 7.7. The index of T , denoted by i(T ), is the subset of {1, ..., n} of all indices i such that T intersects the ith component of D at some endpoint.
Note that i(T ) has cardinality at most k + 1. When referring to a w-tree using a word W ∈ F (using the algebraic formalism of Section 6.1.2), we will freely use the notation i(W ) to refer to the index of the corresponding w-tree.
The next result allows to move twists across vertices. Lemma 7.8 (Twist). Let k ≥ 2. The following holds for a w k -tree. k+1 Note that this move implies the converse one, by using the Antisymmetry Lemma 5.17 and Involutivity move (2).
Proof. Denote by d + 1 the number of edges of T in the unique path connecting the trivalent vertex shown in the statement to the head. Note that 0 ≤ d ≤ k -1. We will prove by induction on d the following claim, which is a stronger form of the desired statement. Claim 7.9. For all k ≥ 2, and for any w k -tree T , the following equalities hold T = ...
G
where G denotes a union of w-trees of degree > k, each with index equal to i(T ).
The case d = 0 of the claim is given in Figure 7.3. There, the first equality uses (E) and the second equality follows from the Heads Exchange Lemma 5.14 (actually, from Remark 5.15) applied at the two rightmost heads, and the Inverse Lemma 5.9. The third equality also follows from Remark 5.15 and the Inverse Lemma. Note = = = that this can equivalently be shown using the algebraic formalism of Section 6.1.2; more precisely, the above figure translates to the simple equalities
[A, B] = A B AB = A [A, B] A = [A, [A, B]] [A, B].
Observe that, in this algebraic setting, d is the depth of
[A, B] in an iterated commutator [ • • • , [A, B]•••] ∈ Γ m F , which is defined as the number of elements D i ∈ F such that [ • • • , [A, B]•••] = [D d , [D d-1 , • • • , [D 1 , [A, B]]•••]]. For the inductive step, consider an element C, [ • • • , [A, B]•••] ∈ Γ k F , where C ∈ Γ l F and [ • • • , [A, B]•••] ∈ Γ m F
for some integers l, m such that l + m = k. Observe also that the induction hypothesis gives the existence of some S ∈ Γ m+1 F , with i(S) = i([
• • • , [A, B]•••]), such that [ • • • , [A, B]•••] = S [ • • • , [A, B]•••]. The inductive step is then given by C, [ • • • , [A, B]•••] = C [ • • • , [A, B]•••] C [ • • • , [A, B]•••] = C [ • • • , [A, B]•••] S C S [ • • • , [A, B]•••] (induction hypothesis) = G C [ • • • , [A, B]•••] C [ • • • , [A, B]•••] (Heads Exch. Lem. 5.14) = G C, [ • • • , [A, B]•••] ,
where
G is some term in Γ k+1 F with i(G) = i( C, [ • • • , [A, B]•••] ).
(The reader is invited to draw the corresponding diagrammatic argument.)
Remark 7.10. By a symmetric argument, we can prove a variant of Claim 7.9 where the heads of G are to the right-hand side of the w-tree in the figure.
Next, we address the move exchanging a head and a tail of two w-trees of arbitrary degree.
Lemma 7.11. The following holds.
W W'
1 k+k'+ T Here, W and W are a w k -tree and a w k -tree, respectively, for some k, k ≥ 1, and T is a w k+k -tree shown.
Proof. Consider the path of edges of W connecting the tail shown in the figure to the head, and denote by n the number of edges in this path: we have 1 ≤ n ≤ k . The proof is by induction on n. More precisely, we prove by induction on n the following stronger statement. Claim 7.12. Let k, k ≥ 2. Let W , W and T be as above. The following equality holds.
T W W' = ... S
where S denotes a union of w-trees of degree > k + k , each with index equal to i(W ) ∪ i(W ).
The case n = 1 of the claim is a consequence of the Head/Tails Exchange Lemma 5.16, Claim 7.9 and the Involutivity move [START_REF] Audoux | Extensions of some classical local moves[END_REF]. The proof of the inductive step is illustrated in Figure 7.4 below. The first equality in Figure 7.4 is an application of (E) to the w k -tree W , while the second equality uses the induction hypothesis. The third (vertical) equality then follows from recursive applications of the Heads Exchange Lemma 5.14, and uses also Convention 5.10. Further Heads Exchanges give the fourth equality, and the final one is given by (E). We note the following consequence of Lemma 5.14 and Claim 7.12.
Corollary 7.13. Let T and T be two w-trees, of degree k and k . We can exchange the relative position of two adjacent endpoints of T and T , at the expense of additional w-trees of degree ≥ k + k and with index equal to i(T ) ∪ i(T ).
Proof. There are three types of moves to be considered. First, exchanging two tails can be freely performed by the Tails Exchange move [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF]. Second, it follows from the Heads Exchange Lemma 5.14 that exchanging the heads of these two w-trees can be performed at the cost of one w k+k -tree with the desired index. Third, by Claim 7.12, exchanging a tail of one of these w-trees and the head of the other can be achieved up to addition of w-trees of degree ≥ k + k with index i(T ) ∪ i(T ).
Let us also note, for future use, the following consequence of these Exchange results. We denote by 1 n the trivial n-component string link diagram, without crossings, and, given a union of w-trees W for 1 n , we call a w-tree T ⊂ W separated if (1 n ) T is a factor of the welded string link (1 n ) W , i.e. if W splits as disjoint union
W 1 T W 2 such that (1 n ) W = (1 n ) W1 • (1 n ) T • (1 n ) W2 .
Corollary 7.14. Let k, l be integers such that k ≥ l ≥ 1. Let W be a union of w-trees for 1 n of degree ≥ l. Then (1 n ) W is w k+1 -equivalent to a welded string link obtained from 1 n by surgery along separated w l -trees and w-trees of degree in {l + 1, • • • , k}.
Proof. This is shown by repeated applications of Corollary 7.13. More precisely, we use Exchange moves to rearrange the w l -trees T 1 , . . . , T m in W so that they sit in disjoint disks D i (i = 1, . . . , m), which intersects each component of 1 n at a single trivial arc, so that (1 n ) ∪iTi = (1 n ) T1 • . . . • (1 n ) Tm . By Corollary 7.13, this is achieved at the expense of w-trees of degree ≥ l + 1, which may intersect those disks. But further Exchange moves allow to move all higher degree w-trees under ∪ i D i , according to the orientation of 1 n , now at the cost of additional w-trees of degree ≥ l + 2, which possibly intersect ∪ i D i . We can repeat this procedure until the only higher degree w-trees intersecting ∪ i D i have degree > k, which gives the equivalence
(1 n ) W k+1 ∼ (1 n ) T1 • . . . • (1 n ) Tm • (1 n ) W ,
where W is a union of w-trees of degree in {l + 1, • • • , k}.
Finally, we give a w-tree version of the IHX relation. Proof. We prove this lemma using the algebraic formalism of Section 6.1.2, for simplicity (we leave it as an exercise to the interested reader to reformulate the arguments diagrammatically). We prove the following stronger version. Claim 7.16. For all k ≥ 2, we have
[ • • • , [A, [B, C]]•••] ∈ Γ k F = S [ • • • , [[A, B], C]•••][ • • • , [[A, C], B]•••], for some S ∈ Γ k+1 F , with i(S) = i([ • • • , [A, [B, C]]•••]).
The proof is by induction on the depth
d of [A, [B, C]] in the iterated commutator [ • • • , [A, [B, C]]•••],
as defined in the proof of Claim 7.9. Recall that, diagrammatically, the depth of [A, [B, C]] is the number of edges connecting the vertex v in Figure 7.5 to the head. The case d = 0 is given by Proposition 7.17. For all n ≥ 1, C n -equivalence implies w n -equivalence.
[A, [B, C]] = A C B C B A B C B C = A C B C A [A, B] C B C = A C B C A [[A, B], C] C [A, B] B C = R [[A, B], C] A C B C A C [A, B] B C (Heads Exchange Lem. 5.14) = R [[A, B], C] A C B C A C A A C = R [[A, B], C] A C B [C, A] B A C = R [[A, B], C] A C [C, A] [A, C], B A C = R [[A, B], C] [A, C], B (
Proof. It suffices to show that a C n -move can be realized by surgery along w-trees of degree ≥ n, which is done by induction. Actually, we prove the following. Before showing Claim 7.18, let us observe that it implies Proposition 7.17. Note that, if we delete those w-trees in F n having index {0, 1, ..., n}, we obtain a w-tree presentation of the right-hand side of Figure 7.6. Such w-trees have degree ≥ n, and by the Inverse Lemma 5.9, deleting them can be realized by surgery along w-trees of degree ≥ n. Therefore we have shown Proposition 7.17.
Let us now turn to the proof of Claim 7.18. The case n = 1 is clear, since it was already noted that a crossing change can be achieved by a sequence of (de)virtualization moves or, equivalently, by surgery along w-arrows (see Section 7.2). Now, using the induction hypothesis, consider the following w-tree presentation for the (n + 1)-strand diagram on the left-hand side of Figure 7.6:
= 2 n- 1 1 ... n n ... ... 2 1 n- 0 0 1 n- F 1
(Here, we have made a choice of orientation of the strands, but it is not hard to check that other choices can be handled similarly.) By moving their endpoints accross F n-1 , the four depicted w-arrows with index {n -1, n} can be cancelled pairwise. By Corollary 7.13, moving w-arrow ends accross F n-1 can be made at the expense of additional w-trees with index {0, 1, • • • , n}. This completes the induction.
Finite type invariants of welded knots and long knots
We now use the w k -equivalence relation to characterize finite type invariants of welded (long) knots. Topological applications for surfaces in 4-space are also given. 8.1. w k -equivalence for welded knots. The fact, noted in Section 7.2, that any two welded knots are w i -equivalent for i = 1, 2, generalizes widely as follows.
Theorem 8.1. Any two welded knots are w k -equivalent, for any k ≥ 1.
An immediate consequence is the following.
Corollary 8.2.
There is no non-trivial finite type invariant of welded knots.
This was already noted for rational-valued finite type invariants D. Bar-Natan and S. Dancso [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF]. Also, we have the following topological consequence, which we show in Section 10.3.
Corollary 8.3.
There is no non-trivial finite type invariant of ribbon torus-knots. Theorem 8.1 is a consequence of the following, stronger statement. Lemma 8.4. Let k, l be integers such that k ≥ l ≥ 1 and let K be a welded knot. Then there is a welded knot W l such that
K k+1 ∼ W l and W l l → O.
Proof. The proof is by induction on l. The initial case, i.e., l = 1 for any fixed integer k ≥ 1, was given in Section 7.2, so we assume that K is w k+1 -equivalent to a welded knot W l that W l l → O. Using Corollary 7.14, we have that W l is w k+1 -equivalent to a welded knot which is obtained from O by surgery along a union of isolated w l -trees, and w-trees of degree in {l + 1, • • • , k}. Here, a w l -tree T for O is called isolated if it is contained in a disk B which is disjoint from all other w-trees and intersects O at a single arc.
Consider such an isolated w l -tree T . Suppose that, when traveling along O, the first endpoint of T which is met in B is its head; then, up to applications of the Tails Exchange move (5) and Antisymmetry Lemma 5.17, we have that T contains a fork, so that it is equivalent to the empty w-tree by the Fork Lemma 5.18. Note that these moves can be done in the disk B. If we first meet some tail when traveling along O in B, we can slide this tail outside B and use Corollary 7.13 to move it around O, up to addition of w-trees of degree ≥ l + 1, until we can move it back in B. In this case, by Corollary 7.14, we may assume that the new w-trees of degree ≥ l + 1 do not intersect B up to w k+1 -equivalence. Using this and the preceding argument, we have that T can be deleted. This completes the proof. 8.2. w k -equivalence for welded long knots. We now turn to the case of long knots. In what follows, we use the notation 1 for the trivial long knot diagram (with no crossing).
As recalled in Section 7.2, it is known that any two welded long knots are w iequivalent for i = 1, 2. The main result of this section is the following generalization. Theorem 8.5. For each k ≥ 1, welded long knots are classified up to w k -equivalence by the first k -1 normalized coefficients {α i } 2≤i≤k of the Alexander polynomial.
Since the normalized coefficients of the Alexander polynomial are of finite type, we obtain the following, which in particular gives the converse to Proposition 7.5 for welded long knots.
Corollary 8.6. The following assertions are equivalent, for any integer k ≥ 1:
(1) two welded long knots are w k -equivalent, (2) two welded long knots share all finite type invariants of degree < k, (3) two welded long knots have same invariants {α i } for i < k.
Theorem 8.5 also implies the following, which was first shown by K. Habiro and A. Shima [START_REF] Habiro | Finite type invariants of ribbon 2-knots[END_REF].
Corollary 8.7. Finite type invariant of ribbon 2-knots are determined by the (normalized) Alexander polynomial.
Actually, we also recover a topological characterization of finite type invariants of ribbon 2-knots due to T. Watanabe, see Section 10.3.
Moreover, by the multiplicative property of the normalized Alexander polynomial (Lemma 6.6), we have the following consequence. Actually, the fact that welded long knots, and more generally welded string links, up to w k -equivalence form a finitely generated group can be proved directly, using w-tree moves, as in Section 5.2 of [START_REF] Habiro | Claspers and finite type invariants of links[END_REF].
The proof of Theorem 8.5 uses the next technical lemma, which refer to the welded long knots L k or L k defined in Figure 6.3. Lemma 8.9. Let k, l be integers such that k ≥ l ≥ 1, and let L be a welded long knot obtained from 1 by surgery along w-trees of degree ≥ l ( i.e. L l → 1). Then
L k+1 ∼ L x l • L , for some x ∈ Z, where L -1 l := L l and L l + 1 → 1.
Let us show how Lemma 8.9 allows to prove Theorem 8.5.
Proof of Theorem 8.5 assuming Lemma 8.9. We prove that, for any k, l such that k ≥ l ≥ 1, a welded long knot K satisfies (8.1)
K k+1 ∼ l-1 i=2 L xi(K) i • W l ,
where W l l → 1, and where
x i (K) = α i (K) if i = 2, α i (K) -α i i-1 j=2 L xj (K) j if i > 2.
We proceed by induction on l. Assume Equation (8.1) for some l ≥ 1 and any fixed k ≥ l. By applying Lemma 8.9 to the welded long knot W l , we have W l k+1 ∼ L x l • W l+1 , where W l+1 l + 1 → 1. Using the additivity (Corollary 6.7) and finite type (Lemma 6.15 and Proposition 7.5) properties of the normalized coefficients of the Alexander polynomial, we obtain that x = x l (K), thus completing the proof.
Proof of Lemma 8.9. By Corollary 7.14, we may assume that K is w k+1 -equivalent to a welded long knot which is obtained from 1 by surgery along a union of separated w l -trees and w-trees of degree in {l + 1, • • • , k}.
Consider such a separated w l -tree T . Let us call 'external' any vertex of T that is connected to two tails. In general, T might contain several external vertices, but by the IHX Lemma 7.15 and Corollary 7.14, we can freely assume that T has only one external vertex, up to w k+1 -equivalence.
By the Fork Lemma 5.18 and the Tails Exchange move (5), if the two tails connected to this vertex are not separated by the head, then T is equivalent to the empty w-tree. Otherwise, using the Tails Exchange move, we can assume that these two tails are at the leftmost and rightmost positions among all endpoints of T along 1, as for example for the w l -tree shown in Figure 8 Let us prove the equivalence of Figure 8.1. To this end, consider the union A ∪ F of a w-arrow A and a w l-1 -tree F as shown on the left-hand side of Figure 8.2. On one hand, by the Fork Lemma 5.18, followed by the the Isolated move (6), we have that 1 A∪F = 1. On the other hand, we can use the Head/Tail Exchange We can then apply the Head/Tail Exchange Lemma to move the head of A across the head of F , which by Corollary 7.14 yields the second equivalence. Further applications of Corollary 7.14, together with the Antisymmetry and Twist Lemmas 5.17 and 7.8, give the third equivalence. Finally, the first term in the right-hand side of this equivalence is trivial by the Isolated move [START_REF] Brendle | Configuration spaces of rings and wickets[END_REF] and the Fork Lemma. The equivalence of Figure 8.1 is then easily deduced, using the Inverse Lemma 5.9 and Corollary 7.14.
Homotopy arrow calculus
The previous section shows how the study of welded knotted objects of one components is well-understood when working up to w k -equivalence. The case of several components (welded links and string links), though maybe not out of reach, is significantly more involved.
One intermediate step towards a complete understanding of knotted objects of several components is to study these objects 'modulo knot theory'. In the context of classical (string) links, this leads to the notion of link-homotopy, were each individual component is allowed to cross itself; this notion was first introduced by Milnor [START_REF] Milnor | Link groups[END_REF], and culminated with the work of Habegger and Lin [START_REF] Habegger | The classification of links up to link-homotopy[END_REF] who used Milnor invariants to classify string link up to link-homotopy. In the welded context, the analogue of this relation is generated by the self-virtualization move, where a crossing involving two strands of a same component can be replaced by a virtual one. In what follows, we simply call homotopy this equivalence relation on welded knotted objects, which we denote by h ∼. This is indeed a generalization of link-homotopy, since a crossing change between two strands of a same component can be generated by two self-(de)virtualizations.
We have the following natural generalization of [START_REF] Milnor | Isotopy of links. Algebraic geometry and topology[END_REF]Thm. 8]. → by h ∼. This is a consequence of Claims 7.9 and 7.16, which show that the equality in these lemmas is achieved by surgery along repeated w-trees. In what follows, we will implicitly make use of this fact, and freely refer to the lemmas of the previous sections when using their homotopy versions. 9.2. Homotopy classification of welded string links. Let n ≥ 2. For each integer i ∈ {1, • • • , n}, denote by S l (i) the set of all sequences i 1 • • • i l of l distinct integers from {1, • • • , n} \ {i} such that i j < i l for all j = 1, . . . , l -1. Note that the lexicographic order endows the set S l (i) with a total order. For any sequence Ii := (1 n ) T Ii .
I = i 1 • • • i k-1 ∈ S k-1 (i),
We prove the following (compare with Theorem 4.3 of [START_REF] Yasuhara | Self delta-equivalence for links whose Milnor's isotopy invariants vanish[END_REF]). As a consequence, we recover the following classification results. This result was first shown by Audoux, Bellingeri, Wagner and the first author in [START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF]: their proof consists in defining a global map from welded string links up to homotopy to conjugating automorphisms of the reduced free group, then to use Gauss diagram to build an inverse map. Corollary 9.5 is a generalization of the classification of string links up to link-homotopy of Habegger and Lin [START_REF] Habegger | The classification of links up to link-homotopy[END_REF]: it is indeed shown in [START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF] that string links up to link-homotopy embed in welded string links up to homotopy. Remark 9.6. Theorem 9.4 does not allow to recover the result of [START_REF] Habegger | The classification of links up to link-homotopy[END_REF]. By Remark 6.9, it only implies that two classical string link diagrams are related by a sequence of isotopies and self-(de)virtualizations if and only if they have same Milnor invariants.
Proof of Theorem 9.4. Let L be an n-component welded string link. Pick an Arrow presentation for L. By Corollary 7.14, we can freely rearrange the w-arrows up to w n -equivalence, so that
L n ∼ j =i (W ji ) xji • (1 n ) R1 • (1 n ) S ≥2 ,
where R 1 is a union of self-arrows, and S ≥2 is a union of w-trees of degree ≥ 2 and < n. Up to homotopy, we can freely delete all self-arrows, and using the properties of Milnor invariants (Lemmas 6.12 and 6.10, Remark 7.6, and Lemma 9.1), we have that x ji = µ w ji (L) for all j = i. Hence we have
\[ L \;\overset{h}{\sim}\; l_1 \cdot (1_n)_{S_{\geq 2}}. \]
Next, we can separate, by a similar procedure, all w_2-trees in S_{≥2}. We need the following general fact (Claim 9.7 below), which is easily checked using the Antisymmetry, IHX and Twist Lemmas 5.17, 7.15 and 7.8.
10.2. Welded arcs.
There is yet another class of welded knotted objects that we should mention here. A welded arc is an immersed oriented arc in the plane, up to generalized Reidemeister moves, OC moves, and the additional move of Figure 10.1 (left-hand side). There, we represent the arc endpoints by large dots. We emphasize that these large dots are 'free', in the sense that they can be freely isotoped in the plane. It can be checked that welded arcs have a well-defined composition rule, given by gluing two arc endpoints, respecting the orientations. This is actually a very natural notion from the 4-dimensional point of view, see Section 10.3. In terms of w-tree presentations, the corresponding extra move of Figure 10.1 (right-hand side) says that we can freely delete a w-tree whose head is adjacent to an arc endpoint. This is reminiscent of the case of welded long knots. Indeed, if a welded long knot is obtained from the trivial diagram 1 by surgery along a w-tree T whose head is adjacent to an endpoint of 1, then by the Fork Lemma 5.18, we have 1_T = 1. This was observed in the proof of Lemma 8.5. A consequence is that the proof of this lemma can be applied verbatim to welded arcs (in particular, the key fact of Figure 8.1 applies). This shows that welded arcs up to w_k-equivalence form an abelian group, which is isomorphic to that of welded long knots up to w_k-equivalence, for any k ≥ 1. Finite type invariants of welded arcs are thus classified similarly.
To be more precise, there is a natural capping map C from welded long knots to welded arcs, which replaces the (fixed) endpoints by (free) large dots. This map C is clearly surjective, and the above observation says that it induces a bijective map when working up to w_k-equivalence. It seems, however, unknown whether the map C itself is injective.
10.3. Finite type invariants of ribbon 2-knots and torus-knots. As outlined above, the notion of welded arcs is relevant for the study of ribbon 2-knots in 4-space. Indeed, applying the Tube map to a welded arc, capping off by disks at the endpoints, yields a ribbon 2-knot, and any ribbon 2-knot arises in this way [START_REF] Satoh | Virtual knot presentation of ribbon torus-knots[END_REF]. Combining this with the surjective map C from Section 10.2 above, we obtain:
Fact 5. Any ribbon 2-knot can be presented, via the Tube map, by a welded long knot.
Recall that K. Habiro introduced in [START_REF] Habiro | Claspers and finite type invariants of links[END_REF] the notion of C_k-equivalence, and more generally the calculus of claspers, and proved that two knots share all finite type invariants of degree < k if and only if they are C_k-equivalent. As a 4-dimensional analogue of this result, T. Watanabe introduced in [START_REF] Watanabe | Clasper-moves among ribbon 2-knots characterizing their finite type invariants[END_REF] the notion of RC_k-equivalence, and a topological calculus for ribbon 2-knots. He proved the following.
Theorem 10.1. Two ribbon 2-knots share all finite type invariants of degree < k if and only if they are RC_k-equivalent.
We will not recall the definition of the RC_k-equivalence here, but only note the following.
Fact 6. If two welded long knots are w_k-equivalent, then their images by the Tube map are RC_k-equivalent.
This follows from the definitions for k = 1 (see Figure 3 of [START_REF] Watanabe | Clasper-moves among ribbon 2-knots characterizing their finite type invariants[END_REF]), and can be verified using (E) and Watanabe's moves [START_REF] Watanabe | Clasper-moves among ribbon 2-knots characterizing their finite type invariants[END_REF]Fig. 6] for higher degrees.
Figure 2.1. A classical and a virtual crossing.
Figure 2.3. The Tube map.
Figure 3.1. Example of a union of w-trees.
4.1. Surgery along w-arrows. Let A be a union of w-arrows for a diagram D. Surgery along A yields a new diagram, denoted by D_A, which is defined as follows.
Figure 4.1. Surgery along a w-arrow. Note that the orientation of the portion of diagram containing the tail needs to be specified to define the surgery move. If some w-arrow of A intersects the diagram D (at some virtual crossing disjoint from its endpoints), then this introduces pairs of virtual crossings as indicated on the left-hand side of the figure; likewise, the right-hand side of the figure indicates the rule when two portions of (possibly the same) w-arrow(s) of A intersect.
Figure 4.2. An example of a diagram obtained by surgery along w-arrows.
Proposition 4.2. Any diagram admits an Arrow presentation.
Figure 4.3. Surgery along a w-arrow is a devirtualization move.
4.3. Arrow moves. We call Arrow moves the following eight types of local moves among Arrow presentations. (1) Virtual Isotopy: virtual Reidemeister-type moves involving edges of w-arrows and/or strands of diagram, along with further local moves in which the vertical strand is either a portion of diagram or of a w-arrow.
Lemma 4.4. Arrow moves yield equivalent Arrow presentations.
The proof of (4) is given in Figure 4.4, in the case where the w-arrow has no twist and the strand is oriented upwards (in the figure of the lemma); the general case either coincides with this figure or differs from it by a single twist. It only uses the …
Figure 4.4. Proving the Tail Reversal move.
Figure 4.6. Proving the Tails Exchange move.
Figure 4.7. Proving the Slide move.
Theorem 4.5. Two Arrow presentations represent equivalent diagrams if and only if they are related by Arrow moves.
Lemma 4.6. If two diagrams are equivalent, then their wA-presentations are related by Arrow moves.
Proof. It suffices to show that generalized Reidemeister moves and OC moves are realized by Arrow moves among wA-presentations.
Figure 4.8. Realizing the Mixed move by Arrow moves.
Figure 4.9. Realizing the OC move by Arrow moves.
We now turn to classical Reidemeister moves. The proof for the Reidemeister I move is illustrated in Figure 4.10. There, the second equality uses move (1), while the third equality uses the Isolated Arrow move. (More precisely, one has to consider both orientations in the figure, as well as the opposite crossing, but these other cases are similar.) The proof for the Reidemeister II move is shown in Figure 4.11.
Figure 4.12. Realizing the Reidemeister III move by Arrow moves.
Figure 4.13. The right-handed trefoil as obtained by surgery on w-arrows.
The corresponding Gauss diagram is obtained by the following rule. First, each w-arrow in A_0 inherits a sign, which is + (resp. −) if, when running along V_0 following the orientation, the head is attached to the right-hand (resp. left-hand) side. Next, change this sign if and only if the w-arrow contains an odd number of twists. For example, the Gauss diagram for the right-handed trefoil shown in Figure 4.13 is obtained from the Arrow presentation on the right-hand side by labeling all three arrows by +. Note that, if the head of a w-arrow is attached to the right-hand side of the diagram, then the parity of the number of twists corresponds to the sign. Conversely, any Gauss diagram can be converted to an Arrow presentation, by attaching the head of an arrow to the right-hand (resp. left-hand) side of the (trivial) diagram if it is labeled by a + (resp. −). Theorem 4.5 provides a complete calculus (Arrow moves) for this alternative version of Gauss diagrams (Arrow presentations), which is to be compared with the Gauss diagram versions of Reidemeister moves. Although the set of Arrow moves is larger, and hence less suitable for (say) proving invariance results, it is in general much simpler to manipulate. Indeed, Gauss diagram versions of Reidemeister moves III and (to a lesser extent) II contain rather delicate compatibility conditions, given by both the arrow signs and local orientations of the strands, see [START_REF] Goussarov | Finite type invariants of virtual and classical knots[END_REF]; Arrow moves, on the other hand, involve no such condition. Moreover, we shall see in the next sections that Arrow calculus generalizes widely to w-trees, and can thus be seen as a 'higher order Gauss diagram' calculus.
Figure 5.1. Expanding w-trees by using (E).
Figure 5.3. Expansion of a w_3-tree.
Figure 5.4. Proving the Inverse move for w-trees.
Figure 5.5. The Slide move for w-trees.
Figure 5.6. Proving the Slide move for w-trees. One then uses the Tails Exchange move (5), and applies (E) back again to express the right-hand side of Figure 5.6 as the desired pair of w_k-trees.
Figure 5.7. Proving the Head Traversal move.
Figure 5.12. Proving the Antisymmetry move: initial step.
Definition 5.19. Suppose that a diagram is obtained from a diagram U without classical crossings by surgery along a union T of w-trees. Then U ∪ T is called a w-tree presentation of the diagram. Two w-tree presentations are equivalent if they represent equivalent diagrams.
Figure 5.16. Tree-presentation for the trefoil. This gives a 'higher order Gauss diagram' presentation for the trefoil. It follows from Theorem 4.5 that w-tree moves provide a complete calculus for w-tree presentations; in other words, we have the following.
Figure 6.1. Wirtinger relation at each crossing.
Figure 6.2. Wirtinger-type relation at a head, and the procedure to define w(T).
Figure 6.3. The welded long knots L_k and L'_k, given by surgery along a single w_k-tree (k ≥ 2).
6.4. Finite type invariants. The virtualization move is a local move on diagrams which replaces a classical crossing by a virtual one. We call the converse local move the devirtualization move. Given a welded diagram L, and a set C of classical crossings of L, we denote by L_C the welded diagram obtained by applying the virtualization move to all crossings in C; we also denote by |C| the cardinality of C.
Definition 6.13. An invariant v of welded knotted objects, taking values in an abelian group, is a finite type invariant of degree ≤ k if, for any welded diagram L and any set S of k + 1 classical crossings of L, we have
\[ \sum_{C \subseteq S} (-1)^{|C|}\, v(L_C) = 0. \tag{6.1} \]
Figure 7.1. Surgery along a w_2-tree implies the UC move.
Figure 7.3. Proof of the Twist Lemma: case d = 0.
Figure 7.4. Here, S (resp. G and H) represents a union of w-trees of degree > k − 1 (resp. degree > k) with index i(T).
Figure 7.5. The IHX relation for w-trees. Here, I, H and X are three w_k-trees for some k ≥ 3.
For the base case we have, using the Heads Exchange Lemma 5.14, the Twist Lemma 7.8 and the Heads Exchange Lemma 5.14 again, equalities of the form R [[A, B], C] S [[A, C], B] = S' [[A, B], C] [[A, C], B], where R, R', S and S' are some elements of Γ_{k+1}F with index i([A, [B, C]]). For the inductive step, let I' = [···, [A, [B, C]] ···] be an element of Γ_m F, for some m ≥ 3, such that [A, [B, C]] has depth d, and set H' = [···, [[A, B], C] ···] and X' = [···, [[A, C], B] ···]. By the induction hypothesis, there exists an element S ∈ Γ_{m+1} with index i(I') such that I' = S H' X'. Let D ∈ Γ_l F be such that l + m = k. Then, expanding the commutator [D, I'] and applying the induction hypothesis and the Heads Exchange Lemma 5.14 repeatedly, we obtain [D, I'] = R [D, H'] [D, X'], where R, R' ∈ Γ_{k+1}F are some elements with index i([D, I']).
7.5. Relation to C_n-equivalence. Recall that, for n ≥ 1, a C_n-move is a local move on knotted objects involving n + 1 strands, as shown in Figure 7.6. (A C_1-move is by convention a crossing change.) The C_n-equivalence is the equivalence relation generated by C_n-moves and isotopies. The next result states that this equivalence relation is a refinement of the w_n-equivalence.
Figure 7.6. A C_n-move.
Claim 7.18. For all n ≥ 1, the diagram shown on the left-hand side of Figure 7.6 is obtained from the (n + 1)-strand trivial diagram by surgery along a union F_n of w-trees, such that each component of F_n has index {0, 1, ..., i} for some i.
Corollary 8.8. Welded long knots up to w_k-equivalence form a finitely generated abelian group, for any k ≥ 1.
The result then follows from the technical observations shown in Figure 8.1: indeed, combining these equalities with the Involutivity move (2) and the Twist Lemma 7.8, we have that T can be …
Figure 8.1. The shaded part contains all non-represented edges of the w_l-tree, and G is a union of w-trees of degree in {l+1, ..., k}.
Figure 8.2. Here, G, G' and G'' are unions of w-trees of degree in {l+1, ..., k}.
Let k ≥ 2. The normalized Alexander polynomials of L_k and L'_k are given by ∆_{L_k} …
More precisely, the heads of T split U into a collection of arcs and possibly several circles, corresponding to closed components of U with no head attached.
Note that our definition for α_k slightly differs from the one used in [START_REF] Habiro | Finite type invariants of Ribbon 2-knots[END_REF], by a factor (−1)^k.
Figure 10.1. Additional moves for welded arcs, and the corresponding extra w-tree move.
Lemma 9.1. If I is a sequence of non-repeated indices, then µ^w_I is invariant under homotopy.
Proof. The proof is essentially the same as in the classical case. Set I = i_1 ··· i_m, with i_j ≠ i_k if j ≠ k. It suffices to show that µ^w_I remains unchanged when a self-(de)virtualization move is performed on the ith component, which is done by distinguishing two cases. If i = i_m, then the effect of this move on the combinatorial i_m-th longitude is multiplication by an element of the normal subgroup N_i generated by m_i; each (non-trivial) term in the Magnus expansion of such an element necessarily contains X_{i_m} at least once, and thus µ^w_I remains unchanged. If i ≠ i_m, then this move can only affect the combinatorial i_m-th longitude by multiplication by an element of [N_i, N_i]: any non-trivial term in the Magnus expansion of such an element necessarily contains X_i at least twice.
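The two counting arguments rest on a Magnus expansion computation which is worth spelling out (a standard calculation, included here for illustration). The expansion sends $m_i \mapsto 1 + X_i$ and $m_i^{-1} \mapsto 1 - X_i + X_i^2 - \cdots$, so for $u, v \in N_i$ one can write $E(u) = 1 + U$ and $E(v) = 1 + V$, where every term of $U$ and of $V$ contains $X_i$. Then
\[
E([u,v]) \;=\; E(u)\,E(v)\,E(u)^{-1}E(v)^{-1} \;=\; 1 + (UV - VU) + (\text{higher-order terms}),
\]
and every non-trivial term on the right-hand side must involve at least one factor from $U$ and one from $V$ (pure $U$-terms cancel, since setting $v = 1$ gives $E([u,1]) = 1$, and symmetrically for $V$), hence contains $X_i$ at least twice. Since $I$ has non-repeated indices, such terms cannot contribute to the coefficient of $X_{i_1} \cdots X_{i_{m-1}}$.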
9.1. w-tree moves up to homotopy. Clearly, the w-arrow incarnation of a self-virtualization move is the deletion of a w-arrow whose tail and head are attached to a same component. In what follows, we will call such a w-arrow a self-arrow. More generally, a repeated w-tree is a w-tree having two endpoints attached to a same component of a diagram.
Lemma 9.2. Surgery along a repeated w-tree does not change the homotopy class of a diagram.
Proof. Let T be a w-tree having two endpoints attached to a same component. We must distinguish between two cases, depending on whether these two endpoints contain the head of T or not. Case 1: the head and some tail t of T are attached to a same component. Then we can simply expand T: the result contains a bunch of self-arrows, joining (a neighborhood of) t to (a neighborhood of) the head of T. By the Brunnian-type property of w-trees (Remark 5.5), deleting all these self-arrows yields a union of w-arrows which is equivalent to the empty one. Case 2: two tails t_1 and t_2 of T are attached to a same component. Consider the path of edges connecting these two tails, and denote by n the number of edges connecting this path to the head; we proceed by induction on n. The case n = 1 is illustrated in Figure 9.1. As the first equality shows, one application of (E) yields four w-trees T_1, T'_1, T_2, T'_2. For the second equality, expand the w-tree
T_2, and denote by E(T_2) the result of this expansion. Let us call 't_2-arrows' the w-arrows in E(T_2) whose tails lie in a neighborhood of t_2. We can successively slide all other w-arrows in E(T_2) along the t_2-arrows, and next slide the two w-trees T_1 and T'_1, using Remark 5.12: the result is a pair of repeated w-trees as in Case 1 above, which we can delete up to homotopy. Reversing the slide and expansion process in E(T_2), we then recover T_2 ∪ T'_2, which can be deleted by the Inverse Lemma 5.9. The inductive step is clear, using (E) and the Inverse Lemma 5.9.
Remark 9.3. Thanks to the previous result, the Twist and IHX Lemmas given in Section 7.4 for w-tree presentations still hold when working up to homotopy. More precisely, Lemmas 7.8 and 7.15 remain valid when replacing, in their statements, the w-tree equivalence by h∼. This is a consequence of Claims 7.9 and 7.16, which show that the equality in these lemmas is achieved by surgery along repeated w-trees. In what follows, we will implicitly make use of this fact, and freely refer to the lemmas of the previous sections when using their homotopy versions.
Claim 9.7. Let T be a w_k-tree for 1_n. Suppose that i(T) = {i_1, ..., i_k},
where the head of T is attached to the i_k-th component, and with i_j < i_{k−1} for all j < k − 1. Then
\[ (1_n)_T \;\overset{h}{\sim}\; (1_n)_{T_1 \cup \cdots \cup T_N \cup S_{\geq k+1}} \]
for some N ≥ 1, where S_{≥k+1} is a union of w-trees of degree ≥ k + 1 and < n, and where each T_i is a copy of either T_{Ii_k} or T'_{Ii_k}, with I = i_1 ··· i_{k−1}.
Hence, we obtain
\[ L \;\overset{h}{\sim}\; l_1 \cdot \prod_{i=1}^{n} \prod_{I \in S_2(i)} (W_{Ii})^{x_I} \cdot (1_n)_{R_2} \cdot (1_n)_{S_{\geq 3}} \]
for some integers x_I, where R_2 is a union of repeated w_2-trees, and where S_{≥3} is a union of w-trees of degree ≥ 3 and < n. By using the properties of Milnor invariants, we have
\[ x_I = \mu^w_{Ii}(L) - \mu^w_{Ii}(l_1) \quad \text{for all } i \text{ and all } I \in S_2(i), \]
thus showing, using Lemma 9.2, that
\[ L \;\overset{h}{\sim}\; l_1 \cdot l_2 \cdot (1_n)_{S_{\geq 3}}. \]
Iterating this procedure, using Claim 9.7 and the same properties of Milnor invariants, we eventually obtain that
\[ L \;\overset{h}{\sim}\; l_1 \cdot l_2 \cdots l_{n-1} \cdot (1_n)_{S_{\geq n}}. \]
The result follows by Lemma 9.2, since a union of w-trees of degree ≥ n for 1_n is necessarily repeated.
Remark 9.8. It was shown in [START_REF] Audoux | Homotopy classification of ribbon tubes and welded string links[END_REF] that Corollary 9.5, together with the Tube map, gives homotopy classifications of ribbon tubes and ribbon torus-links (see Section 2.2). Actually, one can easily deduce a homotopy classification of ribbon string links in codimension 2, in any dimension; see [START_REF] Audoux | On link-homotopy in codimension 2[END_REF].
10. Concluding remarks and questions.
10.1. Arrow presentations allowing classical crossings. In the definition of an Arrow presentation (Def. 4.1), we have restricted ourselves to diagrams with only virtual crossings. Actually, we could relax this condition, and consider more general Arrow presentations with both classical and virtual crossings. The inconvenience of this more general setting is that some of the moves involving w-arrows and crossings are not valid in general. For example, although passing a diagram strand above a w-arrow tail is a valid move (as one can easily check using the OC move), passing under a w-arrow tail is not permitted, as it would violate the forbidden UC move. Note that passing above or under a w-arrow head is allowed. Since one of the main interests of Arrow calculus resides, in our opinion, in its simplicity of use, we do not develop this more general (and delicate) version further in this paper.
Corollary 8.6 gives a welded version of Theorem 10.1, and can actually be used to reprove it.
Proof of Theorem 10.1. Let R and R' be two ribbon 2-knots and, using Fact 5, let K and K' be two welded long knots representing R and R', respectively. If R and R' share all finite type invariants of degree < k, then they have the same normalized coefficients α_i of the Alexander polynomial for 1 < i < k, by [START_REF] Habiro | Finite type invariants of Ribbon 2-knots[END_REF]. As seen in Remark 6.1, this means that K and K' have the same α_i for 1 < i < k, hence are w_k-equivalent by Corollary 8.6. By Fact 6, this shows that R and R' are RC_k-equivalent, as desired. (The converse implication is easy, see [START_REF] Watanabe | Clasper-moves among ribbon 2-knots characterizing their finite type invariants[END_REF]Lem. 5.7].)
Using very similar arguments, we now provide quick proofs for the topological consequences of Corollaries 8.6 and 8.2.
Proof of Corollary 8.7. If two ribbon 2-knots have the same invariants α_i for 1 < i < k, then the argument above using Corollary 8.6 shows that they are RC_k-equivalent. This implies that they cannot be distinguished by any finite type invariant ([START_REF] Watanabe | Clasper-moves among ribbon 2-knots characterizing their finite type invariants[END_REF]Lem. 5.7]).
Proof of Corollary 8.3. Let T be a ribbon torus-knot. In order to show that T and the trivial torus-knot share all finite type invariants, it suffices to show that they are RC_k-equivalent for any integer k. But this is now clear from Fact 6, since any welded knot K such that Tube(K) = T is w_k-equivalent to the trivial diagram, by Theorem 8.1.
10.4. Welded string links and universal invariant. We expect that Arrow calculus can be successfully used to study welded string links, beyond the homotopy case treated in Section 9. In view of Corollary 8.2, and of Habiro's work in the classical case [START_REF] Habiro | Claspers and finite type invariants of links[END_REF], it is natural to ask whether finite type invariants of degree < k classify welded string links up to w_k-equivalence. A study of the low-degree cases, using the techniques of [START_REF] Meilhan | Characterization of finite type string link invariants of degree < 5[END_REF], seems to support this fact.
A closely related problem is to understand the space of finite type invariants of welded string links. One can expect that there are essentially no further invariants than those studied in this paper, i.e. that the normalized Alexander polynomial and welded Milnor invariants together provide a universal finite type invariant of welded string links. One way to attack this problem, at least in the case of rational-valued invariants, is to relate those invariants to the universal invariant Z^w of D. Bar-Natan and Z. Dancso [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF]. It is actually already shown in [START_REF] Bar-Natan | Finite-type invariants of w-knotted objects, I: wknots and the Alexander polynomial[END_REF] that Z^w is equivalent to the normalized Alexander polynomial for welded long knots, and it is very natural to conjecture that the 'tree part' of Z^w is equivalent to welded Milnor invariants, in view of the classical case [START_REF] Habegger | The Kontsevich integral and Milnor's invariants[END_REF]. Observe that, from this perspective, w-trees appear as a natural tool, as they provide a 'realization' of the space of oriented diagrams where Z^w takes its values (see also [START_REF] Polyak | On the algebra of arrow diagrams[END_REF]), just like Habiro's claspers realize Jacobi diagrams for classical knotted objects. In this sense, Arrow calculus provides the Goussarov-Habiro theory for welded knotted objects, partially solving a problem posed by M. Polyak in [START_REF] Ohtsuki | Problems on invariants of knots and 3-manifolds[END_REF]Problem 2.25].
"15345"
] | [
"21",
"425321"
] |
01365108 | en | ["spi"] | 2024/03/04 23:41:50 | 2017 | https://hal.science/hal-01365108v2/file/paperR2.pdf | QP-based Adaptive-Gains Compliance Control in Humanoid Falls
Vincent Samy¹, Karim Bouyarmane², and Abderrahmane Kheddar¹
Abstract - We address the problem of humanoid falling with a decoupled strategy consisting of a pre-impact and a post-impact stage. In the pre-impact stage, geometrical reasoning allows the robot to choose appropriate impact points in the surrounding environment and to adopt a posture to reach them while avoiding impact singularities and preparing for the post-impact stage. The surrounding environment can be unstructured and may contain cluttered obstacles. The post-impact stage uses a quadratic program controller that adapts on-line the joint proportional-derivative (PD) gains to make the robot compliant, so as to absorb impact and post-impact dynamics, which lowers possible damage risks. This is done by a new approach incorporating the stiffness and damping gains directly as decision variables in the QP, along with the usually considered variables of joint accelerations and contact forces. Constraints of the QP prevent the motors from reaching their torque limits during the fall. Several experiments on the humanoid robot HRP-4 in a full-dynamics simulator are presented and discussed.
I. INTRODUCTION
In order to effectively make use of humanoid robots in real-life applications such as daily home services¹, large scale manufacturing², or disaster-response scenarios exemplified by the DARPA Robotics Challenge (DRC), it is mandatory to properly address the risk of the robot falling and to attenuate as much as possible the damage inherent to falling. It is indeed widely accepted that (i) even if the environment is well structured and even if we devote advanced strategies to walking, a humanoid robot will fall; and (ii) we are not able to list all the possible cases and situations where this will occur. A general common-sense approach that accounts for the humanoid falling event would ideally operate as follows: (a) devise strategies to avoid falling in the first place; (b) if falling cannot be avoided in (a), or if, for some reason, the robot must fall on purpose, then, if the robot is in servo-on, reduce as much as possible the damage resulting from the fall; (c) when the two previous solutions are not applicable, i.e. if the robot is no longer under control, it is better to simply resort to an extra shock-absorbing system, such as an airbag, that can be triggered independently from the robot's embedded control board. In [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF], we proposed a falling strategy for case (b) above, consisting of:
• A taxonomy to choose appropriate falling postures to adopt when falling is detected;
• Active reshaping, during which PD gains are high, to meet the impact in the best possible posture;
• Impact absorption by reducing the PD gains.
The above high and low PD gain values were manually tuned in an ad-hoc way in [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF]. The contribution of the present paper is a method to tune them automatically, in an adaptive way. Our novel idea consists in integrating the gain adaptation problem directly into the multi-objective QP formulation. This way, we can benefit from the online capabilities of the QP control approach, which has been widely adopted for controlling humanoid robots, and at the same time use the remaining 'degrees of freedom' of the control for other tasks that can be useful or necessary during falling, such as posture and CoM tasks, as will be demonstrated later. The rest of the paper is organized as follows. Section II reviews the related work in humanoid falling. Section III introduces the notation and hypotheses used throughout the paper. The two components of our approach are detailed in Sections IV and V. Section IV describes the pre-impact stage, with the geometrical search of appropriate landing points and posture reshaping to prepare for the impact. Section V deals with the post-impact stage, detailing the joint motor PD gain computation inside the QP. Section VI presents simulation experiments in Gazebo on the humanoid robot HRP-4 to validate our approach, and Section VII concludes the paper with future work.
II. RELATED WORK
We chose to focus on two main problems. The first one relates to the strategy the robot should apply when falling in a cluttered environment. This kind of problem has been addressed in [START_REF] Goswami | Direction-changing fall control of humanoid robots: theory and experiments[END_REF]. Based on inertia reshaping principles, the authors suggested three ways of modifying the direction of the fall, the concern being to avoid falling onto a human.
The second problem we treat in this paper focuses on implicit damage reduction at impact. This has been studied in [START_REF] Fujiwara | UKEMI: falling motion control to minimize damage to biped humanoid robot[END_REF], [START_REF] Fujiwara | Safe knee landing of a human-size humanoid robot while falling forward[END_REF], [START_REF] Fujiwara | Towards an optimal falling motion for a humanoid robot[END_REF], [START_REF]An optimal planning of falling motions of a humanoid robot[END_REF]. The authors proposed an off-line nonlinear method and an on-line solution to minimize the impact at landing for front and back falls. To prevent damaging the actuators, these are turned off just before the impact and turned on again right after. They also added an extra soft skin on the robot in order to absorb part of the shock.
In [START_REF] Ogata | Falling motion control for humanoid robots while walking[END_REF], [START_REF]Real-time selection and generation of fall damage reduction actions for humanoid robots[END_REF], an on-line solution for a front fall from a walking state is proposed. The idea is to track a CoM trajectory that aims at minimizing the impact. Another method, proposed in [START_REF] Lee | Fall on backpack: Damage minimization of humanoid robots by falling on targeted body segments[END_REF], consists in making the robot fall on its backpack, which prevents the damage.
Finally, a tripod fall has also been considered in [START_REF] S.-K. Yun | Tripod fall: Concept and experiments of a novel approach to humanoid robot fall damage reduction[END_REF]. The idea comes from the simple observation that the earlier the impact occurs, the lower the kinetic energy is. So the method aims at breaking the fall as soon as possible by having the two hands and a foot touch the ground.
In very recent work [START_REF] Ha | Multiple contact planning for minimizing damage of humanoid falls[END_REF], Ha and Liu presented an off-line strategy in which an algorithm finds a sequence of contacts that minimizes the damage of a humanoid robot fall. [START_REF] Lee | An active compliant impact protection system for humanoids: Application to walk-man hands[END_REF] and [START_REF] Kajita | Impact acceleration of falling humanoid robot with an airbag[END_REF] proposed strategies based on an active cover.
In our previous work [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF], we made a taxonomy of singular falls and proposed a simple fall strategy based on geometrical properties. We also tuned the PD gains of the motors to experimentally found values that allowed compliance at the impact.
Our contribution with respect to that previous work and to the listed state of the art³ is twofold. First, we extend the fall strategy to handle any fall direction in a cluttered environment, with more than just a posture task for falling on a flat floor as was the case in [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF]; see Fig. 1. Secondly, we propose a novel method to automatically tune the PD gains within the whole-body QP controller, instead of manually fixing experimentally drawn values as was the case in [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF].
III. TERMINOLOGY AND NOTATION
The mathematical notation we use is mostly the same as in [START_REF] Vaillant | Multi-contact vertical ladder climbing with an HRP-2 humanoid[END_REF] and [START_REF] Featherstone | Rigid body dynamics algorithms[END_REF]. Bold characters denote vectors. If A and B are two spatial frames, then:
• ${}^{B}E_{A}$ denotes the rotation from A to B.
• ${}^{B}r_{A}$ denotes the position of the origin of B in A.
• $u$ denotes a 3 × 1 column vector in world coordinates.
In the notation ${}^{r,b,p}X_{0}$, the left-hand side superscripts and the right-hand side subscript mean that the transformation is made from 0 to the point p of body (link) b of the robot r (i.e. p ∈ b ∈ r). 0 is the index denoting the world; r stands for robot and e for environment. Leaving left-hand side superscripts unspecified, as in the notation $u$, implicitly stands for ${}^{0}u$. Finally, right-hand side superscripts will occasionally be used for complementary information.
IV. GLOBAL FALLING STRATEGY
Fall control can be divided into four main parts: 1) fall detection, 2) environment analysis, 3) pre-impact strategy execution, and 4) post-impact strategy execution.
In step 1) a fall detection system must be constantly running in parallel to the performed tasks as a background process. The system should be able to stop the execution of the current tasks and switch to the pre-impact strategy execution whenever necessary. Note that this step might also include a fall recovery mode if possible.
In step 2) the robot performs an analysis of the situation (we exclude having humans or valuable items in the surroundings) in order to process useful information such as estimating the current pose of the robot and building a map of the surrounding's planar surfaces. Step 2) is out of this paper's scope. We shall consider it as a black-box module and assume that the environment, the robot state, and the available environment falling surfaces are known. This is a plausible assumption considering the advances made recently in SLAM technology [START_REF] Salas-Moreno | Dense planar slam[END_REF].
When the fall is detected, the controller goes through different states at each iteration loop in step 3), as follows:
(i) estimate the fall direction, (ii) search landing points, (iii) update falling tasks, This step is detailed in subsections IV-A, IV-B, and IV-C. At step 4), the robot has touched down. This step is considered independently from the previous steps, although the same whole-body controller is essentially used for both, as detailed in Section V. Additionally, the one we propose for this step will ensure an active compliance of the actuators/joints after the impact has occurred.
In this work, since the robot is under multiple tasks and constraints, we rely on a multi-objective, weighted, quadratic-program-based (QP) controller. The highest priority level is the QP constraints, which must be satisfied without compromise:
• Joint limits, joint velocity limits, and torque limits • Self-collision avoidance,
• The equation of motion and the physics model.
The second priority level (the lowest one) consists of the tasks, which are formulated as set-point tasks and combined in a weighted sum to form the objective function of the QP. A set-point task is defined as the minimization of the following quadratic cost:
\[ E_{sp} = \frac{1}{2}\,\Big\| S_M \big( k_p\, g + k_v\, \dot{g} + J_g\, \ddot{q} + \dot{J}_g\, \dot{q} \big) \Big\|^2, \tag{1} \]
where $S_M$ is a selection matrix, $k_p$ and $k_v$ are proportional and damping task gains (not to be confused with the low-level joint PD gains that will be computed in Section V), $g$ is the task error and $J_g$ is its Jacobian. More details on the controller can be found in [START_REF] Vaillant | Multi-contact vertical ladder climbing with an HRP-2 humanoid[END_REF].
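For concreteness, a minimal sketch (ours, not the authors' code; all names are placeholders) of how the cost (1) can be put in least-squares form over the joint accelerations:

```python
import numpy as np

def setpoint_task(S_M, k_p, k_v, g, g_dot, J_g, Jdot_g, q_dot):
    """Least-squares form of the set-point cost (1).

    Returns (A, b) such that E_sp = 0.5 * || A @ qddot - b ||**2,
    i.e. the QP drives k_p*g + k_v*g_dot + J_g@qddot + Jdot_g@q_dot to zero.
    """
    A = S_M @ J_g                                         # maps qddot to task space
    b = -S_M @ (k_p * g + k_v * g_dot + Jdot_g @ q_dot)   # feedback + drift terms
    return A, b
```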
In the following three subsections, we give further details on the three states described in step 3).
A. Direction of the fall
The fall direction is derived from the center of mass (CoM) trajectory. Let ${}^{0}pc^{r}_{t}$ be the CoM at time t, projected on the plane $D_0$ normal to gravity and passing through a point on the ground. At each time step the fall direction is computed as
\[ d_f = \frac{{}^{0}pc^{r}_{t} - {}^{0}pc^{r}_{t_d}}{\big\| {}^{0}pc^{r}_{t} - {}^{0}pc^{r}_{t_d} \big\|}, \tag{2} \]
where $t_d$ is the fall detection time and $t > t_d$.
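A direct transcription of (2) (illustrative snippet, ours; it assumes the world z-axis is aligned with gravity, so that projecting on $D_0$ amounts to dropping the vertical coordinate):

```python
import numpy as np

def fall_direction(com_now, com_at_detection):
    """Unit fall direction of eq. (2), from the CoM ground projections
    at the current time t and at the detection time t_d."""
    d = com_now[:2] - com_at_detection[:2]   # projection on the plane D_0
    return d / np.linalg.norm(d)
```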
B. Search of landing points
In order to choose the landing/impact points, we first need to know the potential impact surfaces. For a humanoid robot, the impact points are on the hands, feet and knees [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF]. We also assume here that a SLAM routine coupled with appropriate segmentation algorithms can return the available planar surfaces in the environment, as in [START_REF] Salas-Moreno | Dense planar slam[END_REF]. In simulation, however, this information is readily available.
To lower damage risks resulting from the fall, we need to decide where to locate the impact points. These should be reachable, meaning that the robot can change its configuration in order to meet the desired landing spots during the falling time.
We approximate the falling robot as a rigid stick, see Fig. 2 (green model in the figure). This stick lives in the plane defined by a contact point (if any, or a projection of the nearest point from the feet), the fall direction vector and the gravity vector. The length of the stick is set to the distance between the latter point and the middle of the two shoulders. Both the plane of motion of the stick and its length are adjusted at each time step. The trajectory of this stick in the defined plane is a 2D circle. The shoulders' trajectories are directly computed from it, and the desired whole-body posture of the robot is then generated, aiming for the hands to be on their respective shoulders' trajectories.
Fig. 2: Illustration of the search of possible impact points. The yellow arrow is the fall direction; the green lines represent a simplified stick model. The dotted arc is its trajectory. The transparent red plane is the plane where an impact surface exists. The transparent green ellipsoid is a gross representation of the polyhedron representing the arm's reachable workspace. Black and white points are the MPIP and BIP respectively. The red line represents the minimum distance between MPIP and BIP.
Finally, we compute all the intersections between the shoulders' trajectories and the planes of the surfaces returned by step 2). We call these points most probable impact points (MPIP) hereafter. Fig. 2 represents one such MPIP as a black point. These points may or may not lie on the environment surfaces (in the example of Fig. 2, the MPIP does not belong to the environment); this is why we also need to compute, for each MPIP, its closest point belonging to its respective environment surface. These closest points on the environment surfaces are called best impact points (BIP) (Fig. 2 represents the BIP corresponding to the MPIP as a white point).
We now need to make a choice between the different available BIPs. First, the arms' workspace gives two polyhedra, which are split in two by the coronal plane, leading to one polyhedron of reachable front-fall points and one of reachable back-fall points. We also compute the centroid of each polyhedron. Note that these are calculated off-line, only once, and are associated with the geometric model of the robot. Placing the centroid on the MPIP, a BIP is selected if the line segment between the MPIP and the BIP is inside the given polyhedron.
In case more than one BIP satisfies the condition, the highest BIP (highest vertical coordinate) is chosen, because the impact then happens sooner and less potential energy is converted into kinetic energy before the impact. In case none of the points is inside the polyhedron, the BIP having the minimum distance to its respective MPIP is chosen.
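The selection logic can be summarized by the following sketch (ours; `polyhedron_contains` stands in for the precomputed reachable-workspace test described above):

```python
import numpy as np

def select_best_impact_point(pairs, polyhedron_contains):
    """Choose a BIP among candidate (mpip, bip) pairs.

    pairs: list of (mpip, bip) 3D points, one pair per intersected surface
    plane (the BIP is the point of the surface closest to its MPIP).
    polyhedron_contains(mpip, bip): True when the MPIP-BIP segment lies
    inside the reachable-workspace polyhedron centered on the MPIP.
    """
    reachable = [(m, b) for (m, b) in pairs if polyhedron_contains(m, b)]
    if reachable:
        # Highest BIP: the impact happens sooner, so less potential energy
        # is converted into kinetic energy before touchdown.
        return max(reachable, key=lambda mb: mb[1][2])[1]
    # Fallback: BIP with minimum distance to its MPIP.
    return min(pairs, key=lambda mb: np.linalg.norm(mb[0] - mb[1]))[1]
```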
C. Reshaping tasks
To make the robot directly face the impact environment (front fall) or directly oppose it (back fall), since those two falls are the safest [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF], we propose to use a vector orientation task, which aims to align two vectors, one linked to the torso and the other to the environment. A posture task is also included, to help avoid singular falls as defined in [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF] and to bend the knees to lower the CoM. Finally, end-effector position tasks are used to reach the desired impact points. All of these tasks are run, and their targets updated, at each control loop. To implement a new set-point task (1), the task error g, the task Jacobian $J_g$ and the time derivative of the task Jacobian $\dot{J}_g$ are needed. We describe these derivations in the next subsections.
1) Vector orientation task: Let $u_{target}$ be the desired goal unit vector and ${}^{r,b}u$ a unit vector in robot body coordinates. The task error is given by:
\[ g_{vo} = u_{target} - {}^{0}E_{r,b}\;{}^{r,b}u = u_{target} - {}^{0}u, \tag{3} \]
where ${}^{0}E_{r,b}$ is the rotation from the body b to the world. As the target vector is considered fixed in the world, only the time derivative of the robot vector is considered:
\[ {}^{0}\dot{u} = {}^{0}E_{r,b}\big[{}^{r,b}\omega \times {}^{r,b}u\big] = -{}^{0}E_{r,b}\,({}^{r,b}u\times)\;{}^{r,b}J^{ang}_{r,b}\,\dot{q} = J_{g_{vo}}\,\dot{q}. \tag{4} \]
Here, ${}^{r,b}\omega$ is the angular velocity of the body in body coordinates and ${}^{r,b}J^{ang}_{r,b}$ is the angular part of the body Jacobian in body coordinates. Differentiating one more time, the time derivative of the task Jacobian, multiplied by the joint velocities, is then:
\[ \dot{J}_{g_{vo}}\,\dot{q} = {}^{0}E_{r,b}\big[{}^{r,b}\omega \times ({}^{r,b}\omega \times {}^{r,b}u) + {}^{r,b}a^{vp,ang} \times {}^{r,b}u\big], \tag{5} \]
where ${}^{r,b}a^{vp,ang} = {}^{r,b}\dot{J}^{ang}_{r,b}\,\dot{q}$ is the angular part of the velocity-product acceleration in body coordinates [START_REF] Featherstone | Rigid body dynamics algorithms[END_REF].
The target vector is set so that $u_{target} \in D_0$ and $u_{target} \cdot d_f = 0$ (Fig. 3). ${}^{r,b}u$ is chosen perpendicular to the torso sagittal plane, in torso coordinates. There are four possible solutions, so both vectors must be chosen according to whether a front fall or a back fall is desired.
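A possible implementation of (3)-(5) (illustrative, ours; inputs follow the notation above):

```python
import numpy as np

def skew(u):
    """Matrix form of the cross product: skew(u) @ v == np.cross(u, v)."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def vector_orientation_task(u_target, E_0b, u_b, J_ang, a_vp_ang, w_b):
    """Task error (3), Jacobian (4) and velocity-product term (5)."""
    g = u_target - E_0b @ u_b                 # eq. (3)
    J = -E_0b @ skew(u_b) @ J_ang             # eq. (4)
    Jdot_qdot = E_0b @ (np.cross(w_b, np.cross(w_b, u_b))
                        + np.cross(a_vp_ang, u_b))   # eq. (5)
    return g, J, Jdot_qdot
```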
2) Relative distance task: Ideally, we would like the multiple impacts to occur all at the same time, but this is difficult to achieve in practice because it requires estimating the exact impact time. A solution is to manipulate the distances between the desired environment impact surfaces and the robot impacting bodies, so that the relative error between the distances of two surface-impacting-body pairs is zero. One of the advantages of this task is that it handles surfaces of different heights. We recall here that the considered surfaces are planar, so the time derivatives of their normals are zero. Let ${}^{r,b_1,p_1}r_0$ be the closest point of a body $b_1$ to a surface $s_1$ and ${}^{r,b_2,p_2}r_0$ the closest point of a body $b_2$ to a surface $s_2$. Let ${}^{e,s_1,p_1}r_0$ and ${}^{e,s_2,p_2}r_0$ be points on $s_1$ and $s_2$ respectively. The distance for a pair of impacting body and surface is:
\[ d_i = {}^{r,b_i,p_i}r_0 - {}^{e,s_i,p_i}r_0, \qquad i = 1, 2. \tag{6} \]
Here, we do not want to consider the minimal distance but rather a distance along an axis, which is more useful in our application. The task is designed to modify the distance between robot bodies and surface planes, so the distance along the normal of a plane is more relevant. This method controls only the motion along the normal of the plane, while the motion along the plane itself is left free and will be handled by a position task for reaching the desired impact points.
Let now u 1 and u 2 be unit vectors linked to s 1 and s 2 respectively. The task error is:
\[ g_{rd} = d_1 \cdot u_1 - d_2 \cdot u_2. \tag{7} \]
The surfaces $s_1$ and $s_2$ are considered fixed, so the time derivatives of $u_i$ and ${}^{e,s_i,p_i}r_0$ are zero ($i = 1, 2$). Note that if this assumption is false, it means that the robot would fall on a moving environment; it is possible to adapt the tasks to handle such cases, but we will not consider them here. The time derivative of g is given by:
\[ \dot{g}_{rd} = v_1 \cdot u_1 - v_2 \cdot u_2 = \big( u_1^T J^{lin}_{r,b_1,p_1} - u_2^T J^{lin}_{r,b_2,p_2} \big)\,\dot{q} = J_{g_{rd}}\,\dot{q}, \tag{8} \]
where $J^{lin}_{b_i,p_i}$ is the linear part of the body Jacobian of point $p_i$ associated to body $b_i$ ($i = 1, 2$). The time derivative of the Jacobian is:
\[ \dot{J}_{g_{rd}} = u_1^T\,\dot{J}^{lin}_{b_1,p_1} - u_2^T\,\dot{J}^{lin}_{b_2,p_2}. \tag{9} \]
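Stacking (6)-(9) gives the following sketch (ours; inputs follow the notation above, expressed in world coordinates):

```python
import numpy as np

def relative_distance_task(p1, s1, u1, J1, Jdot1, p2, s2, u2, J2, Jdot2):
    """Task error (7), Jacobian (8) and Jacobian derivative (9).

    p_i: closest point of body b_i to surface s_i; s_i: a point of
    surface s_i; u_i: unit normal of s_i; J_i, Jdot_i: linear body
    Jacobian of p_i and its time derivative.
    """
    d1, d2 = p1 - s1, p2 - s2        # eq. (6)
    g = d1 @ u1 - d2 @ u2            # eq. (7)
    J = u1 @ J1 - u2 @ J2            # eq. (8): row vectors u_i^T J_i
    Jdot = u1 @ Jdot1 - u2 @ Jdot2   # eq. (9)
    return g, J, Jdot
```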
3) End-effector position task: The end-effector position task is a common task [START_REF] Vaillant | Multi-contact vertical ladder climbing with an HRP-2 humanoid[END_REF]. The points on the hands to control are the closest points to their respective chosen BIP. We also mention that the task should be written in the surface frame, so that only the x and y coordinates are controlled by the task. The z coordinate (normal to the surface) is handled by the relative distance task above.
V. POST-IMPACT STRATEGY
The pre-impact process described in Section IV shapes the robot into a relatively 'compliable' posture just before the impact. The impact occurs whenever the feet, knees or hands are about to touch down. From that instant, the controller behaves as an active compliance for lowering the damage, using a single QP whole-body controller.
In position-controlled humanoids, the low-level actuator controller consists in a proportional-derivative (PD) controller, which leads to the simplified governing equation:
\[ H\ddot{q} + C - J^T G\lambda = \tau = Ke + B\dot{e}, \tag{10} \]
where $H \in \mathbb{R}^{N_{dof} \times N_{dof}}$ is the robot inertia matrix, $C \in \mathbb{R}^{N_{dof} \times 1}$ the gravity and Coriolis vector, $J \in \mathbb{R}^{6N_c \times N_{dof}}$ the contact Jacobian, $G \in \mathbb{R}^{6N_c \times N_cN_g}$ the matrix of friction cone generators, and $\tau$ the generalized forces (comprising the actuation torques for the actuated joints and zero entries for the non-actuated ones). The parameters $K \in \mathbb{R}^{N_{dof} \times N_{dof}}$ and $B \in \mathbb{R}^{N_{dof} \times N_{dof}}$ are the diagonal matrices of PD gains, and $e = q_{ref} - q$ and $\dot{e} = \dot{q}_{ref} - \dot{q}$ are respectively the errors in joint position and velocity. $q_{ref}$ is set to the current configuration just before the impact and $\dot{q}_{ref}$ is set to zero. Note that in the case of joints without motors (e.g. the free-floating base) the corresponding entries in the diagonals of K and B are zero. We denote by $\mathbf{K}$ and $\mathbf{B}$ the vectors containing the diagonal entries of K and B respectively, i.e. $K = \mathrm{diag}(\mathbf{K})$ and $B = \mathrm{diag}(\mathbf{B})$. $N_{dof}$, $N_c$ and $N_g$ are respectively the number of degrees of freedom (dof) of the system, the number of contact points and the number of generators of the linearized friction cones. We also define $N_m$ as the number of motors. $\mathbf{K}$ and $\mathbf{B}$ initially have constant values that encode the default high-stiffness behavior of the motors. These values are generally very high, to make the motors track the reference values as fast as possible while accounting for perturbations, inertia and more general dynamics, and avoiding overshooting. In order to comply, we need to relax and adapt these values.
Our novel idea is to use a multi-objective QP formulation in the decision vector $X = (\ddot{q}, \lambda, \mathbf{K}, \mathbf{B})$.
First, to handle the impact/contact, a constraint is added so that the velocity at the contact points is zero. This condition is realized with the following constraint [START_REF] Vaillant | Multi-character physical and behavioral interactions controller[END_REF]:
\[ S_M\,\frac{\underline{v} - v}{\Delta T} \;\leq\; S_M \big( J\ddot{q} + \dot{J}\dot{q} \big) \;\leq\; S_M\,\frac{\overline{v} - v}{\Delta T}, \tag{11} \]
where $S_M$ is an $n \times 6$ ($n \leq 6$) selection matrix, and $\underline{v}$ and $\overline{v}$ are the minimal and maximal body velocities. The primary requirement for a compliant behavior is the motors' limits. They are modeled as box torque constraints and added to the QP as follows:
\[ \underline{\tau} \;\leq\; Ke + B\dot{e} \;\leq\; \overline{\tau}. \tag{12} \]
The other box constraints are the bounds on the parameters in X:
\[ \underline{\ddot{q}} \leq \ddot{q} \leq \overline{\ddot{q}}, \qquad \lambda \geq 0, \qquad \mathbf{K} \geq 0, \qquad \mathbf{B} \geq 0. \tag{13} \]
Important note: in post-impact, constraints on joint limits and velocity limits are purposely not inserted as constraints in the QP. The reason is that we have no control over the impact behavior. Indeed, the impact is imposed on the robot in a very limited time; if it is large enough, the generated velocity would make the QP fail to find a feasible solution, so the robot would remain stiff. Thus, the main advantage of not taking limits as constraints is that the robot will always comply, until it has fully absorbed the post-impact dynamics or until it reaches a mechanical stop (joint limit). Conversely, if the impact is not large enough, nothing guarantees that the robot will not reach a joint limit. In order to ensure that the joints are kept inside their limits, a basic strategy is to give a high weight and stiffness to a posture task instead. This amounts to changing the 'priority' level, i.e. shifting the joint limit constraints from the constraint set to the cost function of the QP.
Finally, the QP writes as follows:
\[ \min_{\ddot{q},\,\lambda,\,\mathbf{K},\,\mathbf{B}} \;\; \sum_k \omega^{sp}_k E^{sp}_k + \omega_\lambda \|\lambda\|^2 + \omega_G \big( \|\mathbf{K}\|^2 + \|\mathbf{B}\|^2 \big), \quad \text{s.t. (10), (11), (12), (13),} \tag{14} \]
where k is an index over the tasks (posture, CoM). The number of parameters is equal to $\dim(X) = N_X = N_{dof} + N_c N_g + 2N_m$, which is almost three times the number of variables of the more usual form of the QP used for general-purpose control (i.e. without K and B). In order to improve performance, we chose to restrict the gain adaptation to a selected set of joints directly involved in impact absorption. We propose to select all the motors in the kinematic chain between the end-effector contact points and the root of the kinematic chain of the robot. For example, if a contact is on the hand (actually the wrist on the HRP-4 robot), then the motors of the elbow and the shoulder are retained.
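Since e and ė are measured quantities at each control step, the torque model Ke + Bė is linear in the gain vectors, so (14) remains a standard QP. As a purely illustrative sketch (our own variable layout; qpsolvers is one possible off-the-shelf backend, not necessarily the one used here), the enlarged decision vector can be handled as follows:

```python
import numpy as np
from qpsolvers import solve_qp  # hypothetical choice of backend

def solve_adaptive_gain_qp(P, c, G, h, A, b, n_dof, n_lambda, n_motors):
    """Solve (14) with decision vector x = (qddot, lambda, K, B).

    P, c: quadratic/linear cost stacking the weighted set-point tasks,
          omega_lambda*||lambda||^2 and omega_G*(||K||^2 + ||B||^2).
    G, h: inequality rows encoding (11)-(13); A, b: equality rows for (10),
          where K and B enter linearly through diag(e) and diag(edot).
    """
    lb = np.concatenate([
        np.full(n_dof, -np.inf),   # qddot: bounded through G, h
        np.zeros(n_lambda),        # lambda >= 0
        np.zeros(2 * n_motors),    # K >= 0 and B >= 0
    ])
    return solve_qp(P, c, G, h, A, b, lb=lb, solver="quadprog")
```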
Once the joints are selected, we just need to extract their corresponding rows in the matrices H and $J^TG$ and in the vectors C and τ. The other joints also need to be consistent with the dynamics and the torque limits, so new constraints are added to the QP: the constraints (10) and (12) are removed, and the following constraints (15) are added to the QP (14), with the subscript S (resp. NS) designating the rows of selected (resp. non-selected) joints:
\[
\begin{aligned}
H_S\,\ddot{q} + C_S - (J^TG)_S\,\lambda &= K e_S + B \dot{e}_S, \\
\underline{\tau}_S \leq K e_S + B \dot{e}_S &\leq \overline{\tau}_S, \\
\underline{\tau}_{NS} \leq H_{NS}\,\ddot{q} + C_{NS} - (J^TG)_{NS}\,\lambda &\leq \overline{\tau}_{NS}.
\end{aligned} \tag{15}
\]
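A minimal sketch of the joint selection and row extraction (our own helper names; the parent-map encoding of the kinematic tree is an assumption):

```python
import numpy as np

def selected_joints(parent, contact_joints, root=-1):
    """Collect all joints on the chains from the contact bodies to the
    root, e.g. elbow and shoulder for a contact on the wrist."""
    sel = set()
    for j in contact_joints:
        while j != root:       # walk the chain up to the root
            sel.add(j)
            j = parent[j]
    return sorted(sel)

def split_rows(M, sel):
    """Split the rows of M (e.g. H, C or J^T G) into the selected and
    non-selected blocks appearing in (15)."""
    non_sel = [i for i in range(M.shape[0]) if i not in set(sel)]
    return M[sel], M[non_sel]
```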
Note that the pre-impact and post-impact stages can also run together. Since all bodies do not impact at the same time (in a front fall, the knees generally impact well before the hands), this QP form allows performing the pre-impact and post-impact strategies in parallel: at the knees' impact, the legs are switched to the complying behavior, whereas the upper part of the robot continues its pre-impact stage.
VI. SIMULATIONS
To demonstrate the capabilities of the adaptive-gain QP, we performed several falling simulations of the HRP-4 robot in the Gazebo simulator (see the companion video).
We focus in this section on the very first experiment, consisting in dropping the robot from a given height (1 m) and letting it land on its feet (impact at time t ≈ 0.45 s with a velocity of 4.43 m/s). Four methods are compared: 1) keeping the robot's stiff initial gains, 2) using predefined static gains (as in [START_REF] Samy | Falls control using posture reshaping and active compliance[END_REF]), 3) using zero gains (shutting down the robot, in servo-off mode), and 4) adaptive gains (our proposed method). This experiment illustrates the post-impact gain adaptation part of the paper; the pre-impact geometric reshaping part is illustrated along with the gain adaptation in all the other experiments of the video.
In order to back up our claim that the adaptive QP complies with the post-impact dynamics and lowers the risk of damaging the robot, we chose to look at two indicators: (i) the IMU acceleration (in the waist) (Fig. 6a), and (ii) the joint positions (Fig. 6b). As the floating-base (waist) acceleration is proportional to the applied external forces, and as there are many contacts, we found the IMU acceleration to be a good indicator of how much total impact/contact force is applied on the structure. We use the IMU acceleration as a damage-quantification comparison quantity: the lower the acceleration, the better and the safer for the robot.
Let us first analyse the data from our proposed approach alone (Fig. 5). We can see that the damping coefficient increases very fast until 0.6 s. This is mostly due to the fact that right after the impact the position error is almost null whereas the velocity is high; hence the solver mostly uses the damping gains B (Fig. 5). To understand the large variation of the damping gain around 0.6 s (from 60 to 0 N.m.s/rad), note in Fig. 4 that around 0.6 s the torque keeps its negative sign while the joint velocity switches sign (from positive to negative). Considering the equation τ = Ke + Bė, in order to obtain a negative torque with a positive velocity error (ė = 0 − q̇), we would need B < 0. But the constraint B ≥ 0 enforces the non-negativity of the damping coefficient, resulting in a zero value for it.
Fig. 6b shows that using the predefined static PD gains, or turning off the motors, can be extremely risky, since joint limits are reached quickly. On the other hand, keeping the initial (stiff) gains does not make the robot reach the mechanical stops, but leads to very high jerk and IMU acceleration at the impact (Fig. 6a), which is a prediction of a high impact force. The proposed adaptive QP approach avoids all these issues: it has a low-jerk and low-acceleration profile while still staying under the joint limits and not reaching any mechanical stop at the joints. Fig. 4 shows that the adaptive QP also keeps the torques under their limits.
VII. CONCLUSION AND FUTURE WORK
We proposed an original way of addressing falls in a cluttered environment. First, an active reshaping fall strategy prepares the robot for the impact, from the moment the fall is detected up to just before the impact occurs. Then, during the post-impact stage, a QP controller allows the robot to become compliant in order to absorb the impact energy while satisfying its structural constraints.
In order to implement this strategy on a real robot, two modules are necessary and were assumed to be available as black boxes: robot state estimation and computation of landing-surface candidates; both can be provided by SLAM in future work. For now, sliding contacts are not perfectly handled in the QP. This is a challenging problem that we are currently working on for general multi-contact planning and control purposes. A temporary solution we implemented was to release the tangent-space dof of the contacts to allow sliding.
Finally, fall detection itself needs to be improved. Many methods have been suggested, but all of them fail in several cases. In real situations, falling extends beyond what the current state of the art can detect. For example, falling does not necessarily reduce to the notion of loss of balance, because the latter may be dictated by a task to achieve. In all generality, it should be thought of as the loss of task-based controllability; this novel concept is out of this paper's scope and needs to be researched as a new direction.
1 V. Samy and A. Kheddar are with CNRS - University of Montpellier, LIRMM, 34000 Montpellier, France. [email protected]
2 K. Bouyarmane is with University of Lorraine - INRIA - CNRS, LORIA, 54600 Villers-lès-Nancy, France.
1 www.projetromeo.com
2 www.comanoid.eu
Fig. 1: Examples of falling in a cluttered environment.
Fig. 3: The four possibilities for the vector orientation task. The blue vectors are the two possible body vectors ${}^{r,b}u$. The yellow vector is the fall direction and the two orange vectors are the possible targets $u_{target}$.
Fig. 4: Evolution of the torque for the three right-leg pitch joints (hip, knee, ankle) resulting from our adaptive QP method. The dashed vertical line is the impact time. Dashed horizontal lines are torque limits.
Fig. 5: Evolution of (5a) the stiffness and (5b) the damping gains for the right-leg pitch joints resulting from our adaptive QP method. The dashed line is the impact time.
Fig. 6: (6a) IMU acceleration and (6b) joint positions for the compared methods. The black dotted line is the joint limit of the knee and the colored dotted lines represent mechanical stops (joint limits).
3 https://icra2016wsfallingrobots.wordpress.com/program/
"9653",
"934229",
"176001"
] | [
"181",
"415996",
"395113"
] |
01491182 | en | ["info"] | 2024/03/04 23:41:50 | 2017 | https://inria.hal.science/hal-01491182/file/draft_13Mar.pdf |
Imran Sheikh ([email protected]), Irina Illina ([email protected]), Dominique Fohr ([email protected])
Segmentation and Classification of Opinions with Recurrent Neural Networks
Automatic opinion/sentiment analysis is essential for analysing the large amounts of text as well as audio/video data communicated by users. This analysis provides highly valuable information to companies, governments and other entities who want to understand the likes, dislikes and feedback of their users and of people in general. Opinion/sentiment analysis can follow a classification approach or perform a detailed aspect-level analysis. In this paper, we address a problem in between these two: the segmentation and classification of opinions in text. We propose a recurrent neural network model, with a bi-directional LSTM-RNN, to perform joint segmentation and classification of opinions. We introduce a novel method to train neural networks for segmentation tasks. With experiments on a dataset built from the standard RT movie review dataset, we demonstrate the effectiveness of our proposed model. The proposed model gives promising results on opinion segmentation and can be extended to general sequence segmentation tasks.
I. INTRODUCTION
With the growing number of users on the internet, social media and online shopping websites, a large amount of data is generated in which people voluntarily publish their opinions on products, stocks, policies, etc. Automatic systems are necessary to analyse such large data and to derive facts from it. Accordingly, the area of automatic opinion/sentiment analysis is receiving interest from both industry and academia, with challenges and shared tasks being held every year [START_REF] Nakov | Semeval-2016 task 4: Sentiment analysis in twitter[END_REF]- [START_REF] Mohammad | Semeval-2016 task 6: Detecting stance in tweets[END_REF]. Research in sentiment analysis involves building systems and algorithms which can understand text from the perspective of the opinions or sentiments expressed in it [START_REF] Pang | Opinion mining and sentiment analysis[END_REF], [START_REF] Medhat | Sentiment analysis algorithms and applications: A survey[END_REF]. Sentiment analysis systems are very useful for industries to obtain feedback on their products, which get reviewed on social networks and online shopping websites [START_REF] Hu | Mining and summarizing customer reviews[END_REF]- [START_REF] Gryc | Leveraging textual sentiment analysis with social network modelling[END_REF]. Similarly, they have been used for analysing sentiments in political tweets and election data [START_REF] Ceron | Using sentiment analysis to monitor electoral campaigns: Method matters-evidence from the united states and italy[END_REF]- [START_REF] He | Quantising opinions for political tweets analysis[END_REF]. Apart from text data, videos posted on social media and news websites [START_REF] Morency | Towards multimodal sentiment analysis: Harvesting opinions from the web[END_REF]- [START_REF] Rosas | Multimodal Sentiment Analysis of Spanish Online Videos[END_REF], as well as audio conversations from call centres [START_REF] Mishne | Automatic analysis of call-center conversations[END_REF]- [START_REF] Ezzat | Sentiment analysis of call centre audio conversations using text classification[END_REF], are analysed for sentiments.
A common task is to classify a given sentence or text as expressing positive or negative sentiment using a text categorization approach [START_REF] Pang | Thumbs up?: Sentiment classification using machine learning techniques[END_REF]. On the other hand, classifying sentiments at the sentence and document level may not provide a detailed analysis of the entity or product being reviewed. For example, the movie review -The actors did their job but the characters are simply awesome- attributes different levels of opinion to different entities. In such cases, aspect-level sentiment analysis [START_REF] Schouten | Survey on aspect-level sentiment analysis[END_REF] is performed. Aspect-level sentiment analysis is concerned with the identification of sentiment-target pairs in the text, their classification, and also the aggregation over each aspect to provide a concise overview. (Although in practice, a method may not implement all of these steps, or not in this order.)
We focus on a sentiment analysis problem which lies in between sentence/document level classification and aspect-level sentiment analysis. We investigate the problem of analysing text which can contain segments corresponding to both positive and negative sentiments, with one following the other. For example, the review -Comes with a stunning camera and high screen resolution. Quick wireless charging but the battery life is a spoiler. Given such a text, our task is to segment (and classify) the text into positive and negative segments. So, the goal is to automatically identify the segment 'Comes with a stunning camera and high screen resolution. Quick wireless charging', classify it as positive, and have the segment 'but the battery life is a spoiler' classified as negative. As in our previous example, in aspect-level sentiment analysis the text can have different sentiment-target pairs or aspects. However, our task is segmentation and classification of text based on opinions/sentiments, without performing a detailed aspect-level analysis.
An important point to note is that the segmentation models cannot simply rely on sentence boundaries, punctuation marks or any other linguistic features. As in the example we presented, the segment boundaries are not always at the end of a sentence. Another common scenario where the segmentation models cannot rely on such features is sentiment analysis on audio/video data. Automatic Speech Recognition (ASR) transcripts of these audio/video documents are used for sentiment analysis, and they do not contain any kind of punctuation marks. To perform a robust segmentation and classification under such conditions, we propose discriminatively trained neural network models.
Developments in neural networks and deep learning have led to new state-of-the-art results in text and language processing tasks. Text classification is commonly performed with compositional representations learned with neural networks or by training the network specifically for text classification [START_REF] Goldberg | A primer on neural network models for natural language processing[END_REF]. Fully connected feed forward neural networks [START_REF] Le | Distributed representations of sentences and documents[END_REF]- [START_REF] Nam | Largescale multi-label text classification -revisiting neural networks[END_REF], Convolutional Neural Networks (CNN) [START_REF] Kim | Convolutional neural networks for sentence classification[END_REF]- [START_REF] Wang | Semantic clustering and convolutional neural network for short text categorization[END_REF] and also Recurrent/Recursive Neural Networks (RNN) [START_REF] Socher | Recursive deep models for semantic compositionality over a sentiment treebank[END_REF]- [START_REF] Dai | Semi-supervised sequence learning[END_REF] have been used successfully. The approaches based on CNN and RNN capture rich compositional information and have been outperforming previous results on standard tasks in natural language processing. Of particular interest to this work are the Recurrent Neural Networks (RNN). Previous works have shown that RNNs are very good at modelling word sequences in text. Since our task is to segment a text sequence and classify the segments into positive and negative sentiments, we exploit RNNs to perform the segmentation and classification of opinions.
In this paper, we present our approach to train RNNs for segmentation and classification of opinions in text. We propose a novel cost function to train the RNN in a discriminative manner to perform the segmentation (Equation (12)). We evaluate our proposed model on the task of segmentation and classification of movie reviews into positive and negative sentiments. The dataset used in our experiments is built using sentences from the standard Rotten Tomatoes (RT) movie review dataset [START_REF] Pang | Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales[END_REF]. The rest of the paper is organised as follows. In Section II we present previous work related to ours. We provide a description of RNNs in Section III. Our proposed model is presented in Section IV. Section V describes our experiment setup, including the experiment dataset, model training and evaluation measures. This is followed by a discussion of the results obtained with our model in Section V-C and the conclusion in Section VI.
II. RELATED WORK
Classical sentiment analysis approaches traditionally relied on classifiers trained using sentiment-specific lexicons or knowledge structures and other hand-crafted features [START_REF] Pang | Opinion mining and sentiment analysis[END_REF], [START_REF] Medhat | Sentiment analysis algorithms and applications: A survey[END_REF]. These classical feature-based approaches also applied neural networks to sentiment classification [START_REF] Sharma | An artificial neural network based approach for sentiment analysis of opinionated text[END_REF]. Maas et al. [START_REF] Maas | Learning word vectors for sentiment analysis[END_REF] proposed models which automatically learned word features (or word vector representations) targeted at sentiment classification. Neural network models and automatically learned word vector features came together to achieve state-of-the-art results on sentiment classification with the model proposed in [START_REF] Le | Distributed representations of sentences and documents[END_REF]. Later, the application of deep learning techniques in natural language processing led to new state-of-the-art results on sentiment classification tasks [START_REF] Tang | Deep learning for sentiment analysis: Successful approaches and future challenges[END_REF], mainly with CNN [START_REF] Kim | Convolutional neural networks for sentence classification[END_REF], [START_REF] Johnson | Effective use of word order for text categorization with convolutional neural networks[END_REF] and RNN [START_REF] Dai | Semi-supervised sequence learning[END_REF] architectures. Neural network architectures, including both CNN and RNN, have also been applied to aspect-level sentiment analysis [START_REF] Pontiki | Semeval-2016 task 5: Aspect based sentiment analysis[END_REF], [START_REF] Poria | Aspect extraction for opinion mining with a deep convolutional neural network[END_REF], [START_REF] Nguyen | Phrasernn: Phrase recursive neural network for aspect-based sentiment analysis[END_REF].
In this paper, we explore RNN models for segmentation and classification of opinions/sentiments in text. As mentioned earlier, our task is in between sentiment classification and aspect-level sentiment analysis. Compared to earlier works using RNNs for sentiment analysis [START_REF] Dai | Semi-supervised sequence learning[END_REF], [START_REF] Irsoy | Opinion mining with deep recurrent neural networks[END_REF]- [START_REF] Tang | Document modeling with gated recurrent neural network for sentiment classification[END_REF], we propose a novel method for discriminative training of RNNs for joint text segmentation and classification. Text segmentation approaches have been studied extensively in previous works. These approaches scan the text and determine the locations of segment cuts/boundaries based on coherence calculated between adjacent blocks of text. After the initial work in this area [START_REF] Hearst | Texttiling: Segmenting text into multi-paragraph subtopic passages[END_REF], [START_REF] Choi | Advances in domain independent linear text segmentation[END_REF], most approaches used topic models and Bayesian approaches for the text segmentation task [START_REF] Misra | Text segmentation via topic modeling: An analytical study[END_REF]- [START_REF] Misra | Text segmentation: A topic modeling perspective[END_REF]. We employ word embeddings [START_REF] Pennington | Glove: Global vectors for word representation[END_REF], vector representations of words which carry both syntactic and semantic information about words and their context, to learn the sentiment-level cohesion of segments in text. As opposed to the methods relying on Bayesian and topic models, our approach can detect very short segments containing only a few words.
More recently, word embeddings from neural network models have been utilised for text segmentation in general [START_REF] Alemi | Text segmentation based on semantic word embeddings[END_REF] as well as specifically for sentiment analysis [START_REF] Tang | A joint segmentation and classification framework for sentence level sentiment classification[END_REF]. The approach in [START_REF] Alemi | Text segmentation based on semantic word embeddings[END_REF] uses an iterative refinement technique, whereas the work in [START_REF] Tang | A joint segmentation and classification framework for sentence level sentiment classification[END_REF] focuses on finding appropriate phrase-like segments which imply the correct sentiment (for example, not good actually implies bad). In contrast to these approaches, we exploit Long Short-Term Memory (LSTM) RNNs to capture and remember sentiment-level cohesion, mainly to perform appropriate segmentation and classification of opinions and sentiments in text.
III. RECURRENT NEURAL NETWORKS
Figure 1 shows the schema of a typical Recurrent Neural Network (RNN) (the most commonly used Elman network [START_REF] Elman | Finding structure in time[END_REF]). Similar to most neural networks, an RNN has an input layer, a layer of hidden units and an output layer. Given a discrete input sequence $\{x_t\}_{t=1,2,3,\dots,N}$, a hidden layer activation $h_t$ is obtained using the current input $x_t$ and the previous hidden layer activation $h_{t-1}$. The corresponding output $y_t$ is then obtained using $h_t$. The computation of the hidden layer and output activations is given as:
$$h_t = f_x(x_t \cdot W_x + h_{t-1} \cdot W_h + b_x) \quad (1)$$
$$y_t = f_y(h_t \cdot W_y + b_y) \quad (2)$$
where $W_x$, $W_h$ and $W_y$ are the weight parameters at the input, hidden and output layers, $b_x$ and $b_y$ are the bias parameters at the input and output layers, and $f_x$ and $f_y$ denote non-linearity functions such as the sigmoid and the hyperbolic tangent (tanh). Training the RNN involves learning the weight and bias parameters. Given a training dataset with input sequences and output labels, this can be achieved using gradient descent and error back-propagation algorithms [START_REF] Lecun | Effiicient backprop[END_REF], [START_REF] Bengio | Practical recommendations for gradient-based training of deep architectures[END_REF].
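To make Equations (1)-(2) concrete, here is a minimal NumPy sketch of the Elman forward pass (our own illustration, not code from the original work; the function name, toy dimensions and the choice of tanh/sigmoid non-linearities are assumptions made for the example):

```python
import numpy as np

def elman_forward(xs, Wx, Wh, Wy, bx, by):
    """Forward pass of Equations (1)-(2); xs has shape (N, d_in)."""
    h = np.zeros(Wh.shape[0])  # h_0 = 0
    hs, ys = [], []
    for x in xs:
        h = np.tanh(x @ Wx + h @ Wh + bx)            # Eq. (1), f_x = tanh
        y = 1.0 / (1.0 + np.exp(-(h @ Wy + by)))     # Eq. (2), f_y = sigmoid
        hs.append(h)
        ys.append(y)
    return np.array(hs), np.array(ys)

# toy usage: sequence of 5 inputs of dimension 3, hidden size 4, output size 2
rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))
Wx, Wh, Wy = rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), rng.normal(size=(4, 2))
hs, ys = elman_forward(xs, Wx, Wh, Wy, np.zeros(4), np.zeros(2))
```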
As is evident from Figure 1, the RNN can remember activations of past inputs. This enables it to model sequences such as discrete time sequences in speech signals, word sequences in a document, protein sequences, etc. However, training an RNN requires error back-propagation through time, and as the length of the sequence/time increases this leads to vanishing and exploding gradient problems [START_REF] Pascanu | On the difficulty of training recurrent neural networks[END_REF]. To address the more severe vanishing gradient problem, the Long Short-Term Memory (LSTM) cell [START_REF] Hochreiter | Long short-term memory[END_REF] has become a popular alternative to the hidden layer unit in the classical RNN.
Figure 2 shows an illustration of the LSTM cell. The LSTM cell borrows ideas from a hardware memory cell and, as shown in the figure, it consists of a cell state $c_t$ and a forget gate $f_t$ which controls the amount of past activations to be memorised and/or forgotten by the cell. The computations of the activations at the input gate ($i_t$), forget gate ($f_t$), cell state ($c_t$), output gate ($o_t$) and the hidden layer ($h_t$) are given as:
$$i_t = \sigma(x_t \cdot W_i + h_{t-1} \cdot U_i + b_i) \quad (3)$$
$$\tilde{c}_t = \tanh(x_t \cdot W_c + h_{t-1} \cdot U_c + b_c) \quad (4)$$
$$f_t = \sigma(x_t \cdot W_f + h_{t-1} \cdot U_f + b_f) \quad (5)$$
$$c_t = i_t * \tilde{c}_t + f_t * c_{t-1} \quad (6)$$
$$o_t = \sigma(x_t \cdot W_o + h_{t-1} \cdot U_o + c_t \cdot V_o + b_o) \quad (7)$$
$$h_t = o_t * \tanh(c_t) \quad (8)$$
where $W$, $U$ and $b$ are weight and bias parameters, with the suffixes $i$, $f$, $c$ and $o$ denoting the input gate, forget gate, cell state and output gate, respectively. $\sigma$ and $\tanh$ denote the sigmoid and hyperbolic tangent non-linearities, and $V_o$ is another weight parameter at the output gate. $*$ denotes simple (element-wise) multiplication and $\cdot$ denotes vector/matrix dot products in a multi-dimensional setup.
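A single LSTM step following Equations (3)-(8) can be sketched as below (our own illustrative implementation of the gate equations as written above, including the $c_t \cdot V_o$ term of Equation (7); parameter names mirror the equations and the toy dimensions are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, P):
    """One LSTM step following Equations (3)-(8); P is a dict of parameters."""
    i = sigmoid(x @ P["Wi"] + h_prev @ P["Ui"] + P["bi"])                # Eq. (3)
    c_tilde = np.tanh(x @ P["Wc"] + h_prev @ P["Uc"] + P["bc"])          # Eq. (4)
    f = sigmoid(x @ P["Wf"] + h_prev @ P["Uf"] + P["bf"])                # Eq. (5)
    c = i * c_tilde + f * c_prev                                         # Eq. (6)
    o = sigmoid(x @ P["Wo"] + h_prev @ P["Uo"] + c @ P["Vo"] + P["bo"])  # Eq. (7)
    h = o * np.tanh(c)                                                   # Eq. (8)
    return h, c

# toy parameters: input dimension 3, hidden dimension 4
rng = np.random.default_rng(0)
P = {k: rng.normal(size=(3, 4)) for k in ("Wi", "Wc", "Wf", "Wo")}
P.update({k: rng.normal(size=(4, 4)) for k in ("Ui", "Uc", "Uf", "Uo", "Vo")})
P.update({k: np.zeros(4) for k in ("bi", "bc", "bf", "bo")})
h, c = lstm_step(rng.normal(size=3), np.zeros(4), np.zeros(4), P)
```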
IV. PROPOSED APPROACH
We propose a model based on LSTM-RNN to perform joint segmentation and classification of opinions/sentiments in text. More specifically, we employ a bidirectional LSTM-RNN, in which a forward LSTM-RNN models the word sequence from left to right and a backward LSTM-RNN models the word sequence from right to left. In previous works, bidirectional LSTM-RNNs have been shown to perform better than unidirectional LSTM-RNNs for modelling and classifying sequences [START_REF] Schuster | Bidirectional recurrent neural networks[END_REF]- [START_REF] Sundermeyer | Translation modeling with bidirectional recurrent neural networks[END_REF]. However, we specifically choose a bidirectional LSTM-RNN to compare and measure cohesion between past and future word sequences. In our task, this enables us to detect changes in sentiment as well as in context, and hence to perform segmentation. At the same time, the activations from the bidirectional LSTM-RNN are combined to perform a classification of the sentiments corresponding to the segments. Figure 3 shows a diagrammatic representation of our proposed model for joint segmentation and classification. As shown in the figure, the model operates on word embeddings corresponding to words in a text sequence (denoted as $\dots, x_{t-1}, x_t, x_{t+1}, \dots$). Following a bidirectional LSTM-RNN architecture, our model has two layers of LSTM-RNN. The hidden layer activation of the forward LSTM-RNN at time $t$ is denoted as $h^F_t$ and that of the backward LSTM-RNN as $h^B_t$. Compared to Figure 1, here the RNN schema is unrolled across $t$. The hidden layer activations of the forward and backward LSTM-RNN are obtained using Equations (3)-(8). These hidden layer activations ($h^F_t$, $h^B_t$) are used in both the segmentation and classification sub-parts, shown in the top part of Figure 3.
We first present the segmentation part of our model, which is shown in the top left part of Figure 3. Each of the hidden layer activations of the forward and backward LSTM-RNN is transformed using a feed forward neural network as follows:
$$s^F_t = h^F_t \cdot W_{seg} + b_{seg} \quad (9)$$
$$s^B_t = h^B_t \cdot W_{seg} + b_{seg} \quad (10)$$
where $W_{seg}$ and $b_{seg}$ are the weight and bias parameters of the segmentation feed forward network. Following the feed forward layer, the outputs corresponding to the forward and backward LSTM-RNN at each $t$ are compared as:
$$d_t = s^F_t \cdot s^B_t \quad (11)$$
where $\cdot$ denotes a dot product. This dot product measures the similarity between the context up to $t$, as captured in $s^F_t$, and the context following $t$, which is captured in $s^B_t$. Thus $\{d_t\}_{t=1:N}$, where $N$ is the length of the text sequence, represents the similarity across the text sequence, and this similarity should be minimal at the segment boundaries. The similarity calculation is followed by a softmin function, given as:
$$\mathrm{softmin}(d_{t^*}) = \frac{e^{-d_{t^*}}}{\sum_{t=1}^{N} e^{-d_t}} \quad (12)$$
The softmin function gives the highest output probability to the lowest $d_t$. Additionally, it enables discriminative training of the segmentation model, by maximising the likelihood of the true segmentation point ($t^*$, known at training time) compared to all other points ($t = 1, 2, \dots, N$).
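The segmentation head of Equations (9)-(12) reduces to a few vectorised operations. The following sketch (our illustration, not the authors' code; array shapes and the numerical stabilisation are assumptions) computes the softmin distribution over candidate boundary positions:

```python
import numpy as np

def boundary_distribution(hF, hB, Wseg, bseg):
    """Segmentation head, Equations (9)-(12).
    hF, hB: (N, d) forward/backward hidden states for an N-word text."""
    sF = hF @ Wseg + bseg          # Eq. (9)
    sB = hB @ Wseg + bseg          # Eq. (10)
    d = np.sum(sF * sB, axis=1)    # Eq. (11): d_t = sF_t . sB_t
    e = np.exp(-(d - d.min()))     # softmin of Eq. (12), numerically stabilised
    return e / e.sum()             # probability of a boundary at each position t

# training would minimise -log p[t_star] for the known true boundary t_star
```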
Given the output of segmentation, the opinions/sentiments in the text segments can be classified using separate models which are trained for sentiment classification [START_REF] Iyyer | Deep unordered composition rivals syntactic methods for text classification[END_REF], [START_REF] Kim | Convolutional neural networks for sentence classification[END_REF], [START_REF] Johnson | Effective use of word order for text categorization with convolutional neural networks[END_REF], [START_REF] Dai | Semi-supervised sequence learning[END_REF], [START_REF] Sheikh | Learning word importance with the neural bag-of-words model[END_REF]. However, we would like to study the power of our model for joint segmentation and classification. Thus, in addition to the segmentation part, our model also has a classification part as shown in top right of Figure 3. The hidden layer activations of the forward and backward LSTM-RNN are averaged to form a single vector representation h of the entire text, as:
$$h = \frac{1}{2N}\left( \sum_{t=1}^{N} h^F_t + \sum_{t=1}^{N} h^B_t \right) \quad (13)$$
This vector representation of the text is then fed into a feed forward neural network as follows:
$$\hat{y} = h \cdot W_{class} + b_{class} \quad (14)$$
where $W_{class}$ and $b_{class}$ are the weight and bias parameters of the classification feed forward network. This is followed by a softmax function, given as:
$$\mathrm{softmax}(\hat{y}_{l^*}) = \frac{e^{\hat{y}_{l^*}}}{\sum_{l=1}^{L} e^{\hat{y}_l}} \quad (15)$$
The softmax function gives the highest output probability to the highest $\hat{y}_l$. Additionally, it enables discriminative training of the classification model, by maximising the likelihood of the true sentiment/opinion class ($l^*$, known at training time) compared to all possible classes. In our experiments, we perform segmentation and classification of text containing two opinionated segments, which can be categorised into four classes: --, -+, +- and ++ (with - denoting negative and + denoting positive).
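Similarly, the classification head of Equations (13)-(15) can be sketched as follows (again our own illustration; the class ordering mentioned in the comment is an assumption for the example):

```python
import numpy as np

def class_distribution(hF, hB, Wclass, bclass):
    """Classification head, Equations (13)-(15)."""
    h = 0.5 * (hF.mean(axis=0) + hB.mean(axis=0))  # Eq. (13): averaged states
    y = h @ Wclass + bclass                         # Eq. (14)
    e = np.exp(y - y.max())                         # softmax of Eq. (15), stabilised
    return e / e.sum()

# with the four opinion classes ordered, e.g., as (--, -+, +-, ++),
# the predicted opinion pair is class_distribution(...).argmax()
```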
V. EXPERIMENTS AND RESULTS
In this section we first present a description of the dataset used in our experiments, followed by the details on our experiment setup and finally the results obtained from our experiments.
A. Experiment Dataset
The dataset used in our experiments is built using sentences from the standard Rotten Tomatoes (RT) movie review dataset [START_REF] Pang | Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales[END_REF]. We obtained the v1.0 balanced binary dataset1 containing 10,662 reviews. Each movie review presents a user's opinion on a movie in about 1-2 sentences. Figure 4 shows the distribution of the number of reviews in the original RT dataset for different review lengths (in number of words). About 320 reviews have 5 or fewer words and about 1,200 reviews have 10 or fewer words, which can be a severe problem for classical segmentation techniques based on sliding windows and statistics of features within these windows. Examples of some reviews from the RT dataset are shown in Table I.
In our experiments we perform a 10-fold cross-validation using the balanced binary dataset of 10,662 reviews. In each fold, 90% of the dataset (9,596 reviews) is used to build our train set and the remaining 10% (1,066 reviews) is used to form our test set. To build the train set for our task, we randomly sample two reviews from the 9,596 reviews allocated for training and concatenate them into a training sample. It will be given a sentiment label --, -+, +- or ++ (with - denoting negative and + denoting positive) depending on the sentiment labels of the individual reviews. This label is used to train the sentiment classification part of the model, as discussed in Section IV. Additionally, the length of the first review is taken as the label for training the segmentation part of the model, as discussed in Section IV. A similar procedure is used to form our test set and its ground-truth labels. Table II shows some samples from our training and test set, along with their corresponding classification and segmentation labels.
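The dataset construction described above can be sketched as follows (a hypothetical helper, not the authors' released code; the label encoding and random seed are illustrative choices):

```python
import random

def make_pair_samples(reviews, polarities, n_samples, seed=0):
    """Build joint segmentation/classification samples by concatenating
    two randomly drawn RT reviews, as described above.
    polarities: '+' or '-' per review (illustrative encoding)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        (t1, p1), (t2, p2) = rng.sample(list(zip(reviews, polarities)), 2)
        text = t1 + " " + t2
        label = p1 + p2             # one of --, -+, +-, ++
        boundary = len(t1.split())  # segmentation label: length of first review
        samples.append((text, label, boundary))
    return samples
```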
Figure 5 shows the distribution of segmentation boundaries (length of the first segment) for the test set (in fold-0). This distribution is similar for the test sets in all 10 folds of validation. It indicates that the test set contains segments of different lengths and of varying difficulty.
B. Model Training and Evaluation
As discussed in Section IV, our LSTM-RNN model operates on word embeddings. For our task the word embeddings are initialised with publicly available 300-d (300-dimensional) GloVe word vectors [START_REF] Pennington | Glove: Global vectors for word representation[END_REF], originally trained over the Common Crawl2 . During model training these word embeddings are treated as model parameters and are updated by back-propagation, so that the resulting word embedding representations are optimised for our task. Training of all the model parameters is performed with the ADADELTA [START_REF] Zeiler | ADADELTA: an adaptive learning rate method[END_REF] gradient descent algorithm, which provides an adaptive learning rate. To regularise the neural networks and avoid overfitting, we apply word dropout [START_REF] Iyyer | Deep unordered composition rivals syntactic methods for text classification[END_REF], [START_REF] Sheikh | Learning word importance with the neural bag-of-words model[END_REF] with a dropout probability of 0.7. We also apply dropout to the non-sequential units of the LSTM-RNN, as discussed in [START_REF] Zaremba | Recurrent neural network regularization[END_REF], with a dropout probability of 0.5. Additionally, we use an early stopping criterion which monitors the classification error and stops model training when the error starts to increase (model overfitting). The hyper-parameters are chosen based on experiments on the RT sentiment classification task in [START_REF] Iyyer | Deep unordered composition rivals syntactic methods for text classification[END_REF], [START_REF] Sheikh | Learning word importance with the neural bag-of-words model[END_REF].
As mentioned earlier, we perform a 10-fold cross-validation over the RT dataset. Since our task involves segmentation and classification, we report the Segmentation Error Rate (SER) and the Classification Error Rate (CER), averaged over the 10 folds. The misclassification of the sentiments corresponding to the two segments (--, -+, +- or ++) in a test set sample contributes to the classification errors. In segmentation, if the model produces a segment boundary that is more than 3 words away from the actual boundary, the segmentation is treated as erroneous and contributes to the segmentation error.
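The SER criterion described above can be computed as in the following sketch (our illustration; the 3-word tolerance matches the text):

```python
def segmentation_error_rate(predicted, true, tolerance=3):
    """SER: a predicted boundary counts as an error when it lies more than
    `tolerance` words away from the true boundary."""
    errors = sum(abs(p - t) > tolerance for p, t in zip(predicted, true))
    return errors / len(true)
```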
C. Model Performance
Table III presents the segmentation error rate and classification error rate results obtained with our proposed model. We present the results obtained with our model when trained only for segmentation, only for classification, and for both segmentation and classification. Moreover, we also present results when the models are trained and tested on the dataset with full stops and commas, after removing full stops, and after removing both full stops and commas. These punctuation marks may carry information about the end of segments in the text, and it is important to analyse how the segmentation model performs with and without this information. It can be observed from Table III that our proposed model can perform almost perfect segmentation of opinions when the information about sentence boundaries (full stops) and other breaks (commas) is available. In this case the classification error rate is also lower, and it improves slightly when the models are trained with both the segmentation and classification cost functions. The segmentation error rate increases when the sentence boundary (full stop) information is not available in train and test, and it increases further when the commas are also removed. However, an error rate of only 22%, under a strict segmentation criterion of only 3 words, shows that our proposed model can perform even when the full stop and comma punctuation marks are not available. It is also observed that the removal of the punctuation marks has a small effect, only 2-5% absolute, on the classification error rate.
We analysed whether the errors from our model are due to any particular type of segment or sentiment. In Figure 6 we present the distribution of segmentation errors (in fold-0) for the model trained without full stops and commas. The distribution is plotted against the length of the first segment. We can see that the distribution of segmentation errors is quite similar to the distribution of the lengths of the first segment in the test set, shown in Figure 5. This implies that segmentation errors are evenly spread across segments of different lengths, and that the proposed model is not biased towards shorter or longer segments.
We also verified whether the segmentation errors are biased towards particular sentiment classes (--, -+, +- or ++). Again, the segmentation errors were evenly distributed across these classes. The fact that the samples whose two segments share the same polarity (-- or ++) did not induce more segmentation errors shows that our model learns not only the segmentation of different types of sentiment, but also the segmentation of different levels of sentiment (of the same polarity) and of different contexts which carry sentiments of the same polarity. We further analysed the errors in sentiment classification and found that they are more or less evenly distributed over the different sentiment classes (--, -+, +- and ++), confirming that the model is not biased towards any particular type of sentiment combination.
VI. CONCLUSION AND FUTURE WORK
In this paper we proposed a recurrent neural network model with a bi-directional LSTM-RNN to perform joint segmentation and classification of opinions in text. Our proposed model was trained by optimising the network parameters using two cost functions, one for segmentation and the other for classification. We introduced a novel method to train bi-directional recurrent neural networks for segmentation. The segmentation cost function compares the sentiment context in the past with that of the future, for every word position in the text, and uses a softmin function to maximise the segmentation accuracy. With experiments on a dataset built from the standard RT movie review dataset, we demonstrated the effectiveness of our proposed model. Our model can perform almost perfect segmentation with knowledge of full stops and commas, which carry information useful for segmentation. We also showed that the model produces good segmentation results even when it is trained and tested on data without full stops and commas.
In this work, we discussed segmentation in the context of opinions in text. However, our model readily extends to other sequence segmentation tasks, for example the segmentation of topics in text and in automatic speech recognition transcripts. Similarly, it can be extended to speech and audio signal segmentation, by operating on automatically trained or pre-computed acoustic features. We are working on these extensions as part of our future work. Moreover, in this paper we focused on sequences with two segments. In future work we will extend our model to sequences with more than two segments.
Fig. 1. Schema of a Recurrent Neural Network (RNN).
Fig. 2. Long Short-Term Memory (LSTM) cell. (Taken from [59].)
Fig. 3. Proposed model for joint segmentation and classification.
Fig. 4. Distribution of reviews in the RT dataset based on review length.
Fig. 5. Distribution of segmentation boundaries (length of first segment) for the test set (in fold-0).
Fig. 6. Distribution of segmentation errors (in fold-0) for the model trained without full stops and commas.
TABLE I: Examples of reviews from the RT dataset

Review | Sentiment Polarity
an exhilarating experience. | positive
over-the-top and a bit ostentatious, this is a movie that's got oodles of style and substance. | positive
decent but dull. | negative
i suspect that there are more interesting ways of dealing with the subject. | negative

TABLE II: Examples of reviews from our train and test set

Sample | Sentiment label | Segment boundary
the jokes are flat, and the action looks fake. truly terrible. | -- | 11
a big fat pain. few films this year have been as resolute in their emotional nakedness. | -+ | 5
... there are enough moments of heartbreaking honesty to keep one glued to the screen. an extremely unpleasant film. | +- | 16
boisterous, heartfelt comedy. as green-guts monster movies go, it's a beaut. | ++ | 5
https://www.cs.cornell.edu/people/pabo/movie-review-data/
http://nlp.stanford.edu/projects/glove/
ACKNOWLEDGMENT
This work is funded by the ContNomina project supported by the French National Research Agency (ANR) under contract ANR-12-BS02-0009. Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria, CNRS, RENATER and other Universities and organisations (https://www.grid5000.fr). | 33,337 | [
"1000772",
"15663",
"15652"
] | [
"420403",
"420403",
"420403"
] |
01491197 | en | [
"phys",
"math"
] | 2024/03/04 23:41:50 | 2018 | https://hal.science/hal-01491197/file/KKP%20AOS%20REVISED%203.pdf | Karim Lounici
email: [email protected]
Katia Meziani
email: [email protected]
Gabriel Peyré
email: [email protected]
Adaptive sup-norm estimation of the Wigner function in noisy quantum homodyne tomography
Keywords: Non-parametric minimax estimation, Adaptive estimation, Inverse problem, L 2 and L∞ Risks, Quantum homodyne tomography, Wigner function, Radon transform, Quantum state
Quantum optics is a branch of quantum mechanics which studies physical systems at the atomic and subatomic scales. Unlike classical mechanics, the result of a physical measurement is generally random. Quantum mechanics does not predict a deterministic course of events, but rather the probabilities of various alternative possible events. It provides predictions on the measurement outcomes; therefore exploring measurements involves non-trivial statistical methods, and inference on the result of a measurement should be done on identically prepared quantum systems. In this paper, we study a severely ill-posed inverse problem that has arisen in quantum optics. Let $(Z_1,\Phi_1),\dots,(Z_n,\Phi_n)$ be $n$ pairs of independent identically distributed random variables with values in $\mathbb{R}\times[0,\pi]$ satisfying $Z_\ell := X_\ell + \sqrt{2\gamma}\,\xi_\ell$, where $X_\ell$ admits density $p(x,\phi)$ w.r.t. the Lebesgue measure on $\mathbb{R}\times[0,\pi]$, $\xi_\ell$ is a standard normal variable independent of $X_\ell$, and $\gamma \in (0,1)$ is a known scalar. Due to the particular structure of this quantum optics problem, the density $p(x,\phi)$ satisfies
$$p(x,\phi) = \frac{1}{\pi}\, R[W](x,\phi)\,\mathbb{1}_{[0,\pi]}(\phi),$$
where W : R 2 → R is the unknown function to be estimated based on indirect observations (Z 1 , Φ 1 ), . . . , (Z n , Φ n ) and R[W ] is the Radon transform of W . The Radon transform will be properly defined in Section 1 below. The target W is called the Wigner function and is used to describe the quantum state of a physical system of interest.
For the interested reader, we provide in Section 1 a short introduction to the required quantum notions. This section may be skipped at first reading. Section 2 introduces the statistical model by making the link with quantum theory. The interested reader can become further acquainted with quantum concepts through the textbooks and review articles of [START_REF] Helstrom | Quantum Detection and Estimation Theory[END_REF]; [START_REF] Holevo | Probabilistic and Statistical Aspects of Quantum Theory[END_REF]; [START_REF] Barndorff-Nielsen | On quantum statistical inference (with discussion)[END_REF] and [START_REF] Leonhardt | Measuring the Quantum State of Light[END_REF].
Physical background
In quantum mechanics, the measurable properties (ex: spin, energy, position, ...) of a quantum system are called "observables". The probability of obtaining each of the possible outcomes when measuring an observable is encoded in the quantum state of the considered physical system.
Quantum state and observable
The mathematical description of the quantum state of a system is given in the form of a density operator ρ on a complex Hilbert space H (called the space of states) satisfying the three following conditions:
1. Self-adjoint: $\rho = \rho^*$, where $\rho^*$ is the adjoint of $\rho$.
2. Positive: $\rho \ge 0$, or equivalently $\langle \psi, \rho\psi \rangle \ge 0$ for all $\psi \in H$.
3. Trace one: $\operatorname{Tr}(\rho) = 1$.
Notice that D(H), the set of density operators ρ on H, is a convex set. The extreme points of the convex set D(H) are called pure states and all other states are called mixed states.
In this paper, the quantum system we are interested in is a monochromatic light in a cavity. In this setting of quantum optics, the space of states H we are dealing with is the space of square integrable complex valued functions on the real line. A particular orthonormal basis for this Hilbert space is the Fock basis {ψ j } j∈N :
$$\psi_j(x) := \frac{1}{\sqrt{\sqrt{\pi}\, 2^j\, j!}}\; H_j(x)\, e^{-x^2/2}, \quad (1)$$
where $H_j(x) := (-1)^j e^{x^2} \frac{d^j}{dx^j} e^{-x^2}$ denotes the $j$-th Hermite polynomial. In this basis, a quantum state is described by an infinite density matrix $\rho = [\rho_{j,k}]_{j,k\in\mathbb{N}}$ whose entries are equal to $\rho_{j,k} = \langle \psi_j, \rho\psi_k \rangle$, where $\langle\cdot,\cdot\rangle$ is the inner product. The quantum states which can currently be created in the laboratory are matrices whose entries decrease exponentially to 0, i.e., these matrices belong to the natural class $\mathcal{R}(C,B,r)$ defined below, with $r = 2$. For $C \ge 1$, $B > 0$ and $0 < r \le 2$, the class $\mathcal{R}(C,B,r)$ is defined as
$$\mathcal{R}(C,B,r) := \left\{\rho \text{ quantum state} : |\rho_{m,n}| \le C \exp\!\left(-B(m+n)^{r/2}\right)\right\}. \quad (2)$$
In order to describe mathematically a measurement performed on an observable of a quantum system prepared in state $\rho$, we give the mathematical description of an observable. An observable $\mathbf{X}$ is a self-adjoint operator on the same space of states $H$, and
$$\mathbf{X} = \sum_{a}^{\dim H} x_a\, P_a,$$
where the eigenvalues $\{x_a\}_a$ of the observable $\mathbf{X}$ are real and $P_a$ is the projection onto the one-dimensional space generated by the eigenvector of $\mathbf{X}$ corresponding to the eigenvalue $x_a$. As a quantum state $\rho$ encompasses all the probabilities of the observables of the considered quantum system, when performing a measurement of the observable $\mathbf{X}$ of a quantum state $\rho$, the result is a random variable $X$ with values in the set of eigenvalues of the observable $\mathbf{X}$. For a quantum system prepared in state $\rho$, $X$ has the following probability distribution and expectation:
$$P_\rho(X = x_a) = \operatorname{Tr}(P_a \rho) \quad \text{and} \quad E_\rho(X) = \operatorname{Tr}(\mathbf{X}\rho).$$
Note that the conditions defining the density matrix $\rho$ ensure that $P_\rho$ is a probability distribution. In particular, the characteristic function is given by $E_\rho(e^{itX}) = \operatorname{Tr}(\rho\, e^{it\mathbf{X}})$.
Quantum homodyne tomography and Wigner function
In quantum optics, a monochromatic light in a cavity is described by a quantum harmonic oscillator. In this setting, the observables of interest are usually $\mathbf{Q}$ and $\mathbf{P}$ (resp. the electric and magnetic fields). But according to Heisenberg's uncertainty principle, $\mathbf{Q}$ and $\mathbf{P}$ are non-commuting observables and may not be simultaneously measurable. Therefore, by performing measurements on $(\mathbf{Q},\mathbf{P})$, we cannot get a probability density of the result $(Q,P)$. However, for every phase $\phi \in [0,\pi]$ we can measure the quadrature observables
$$\mathbf{X}_\phi := \mathbf{Q}\cos\phi + \mathbf{P}\sin\phi.$$
Each of these quadratures can be measured on a laser beam by a technique developed by Smithey and called Quantum Homodyne Tomography (QHT). The theoretical foundation of quantum homodyne tomography was outlined by [START_REF] Vogel | Determination of quasiprobability distributions in terms of probability distributions for the rotated quadrature phase[END_REF].
When performing a QHT measurement of the observable $\mathbf{X}_\phi$ of the quantum state $\rho$, the result is a random variable $X_\phi$ whose density conditionally on $\Phi = \phi$ is denoted by $p_\rho(\cdot|\phi)$. Its characteristic function is given by
$$E_\rho(e^{itX_\phi}) = \operatorname{Tr}(\rho\, e^{it\mathbf{X}_\phi}) = \operatorname{Tr}(\rho\, e^{it(\mathbf{Q}\cos\phi + \mathbf{P}\sin\phi)}) = \mathcal{F}_1[p_\rho(\cdot|\phi)](t),$$
where $\mathcal{F}_1[p_\rho(\cdot|\phi)](t) = \int e^{itx}\, p_\rho(x|\phi)\, dx$ denotes the Fourier transform with respect to the first variable. Moreover, if $\Phi$ is chosen uniformly on $[0,\pi]$, the joint probability density of $(X_\phi,\Phi)$ with respect to the Lebesgue measure on $\mathbb{R}\times[0,\pi]$ is
$$p_\rho(x,\phi) = \frac{1}{\pi}\, p_\rho(x|\phi)\,\mathbb{1}_{[0,\pi]}(\phi).$$
An equivalent representation for a quantum state ρ is the function W ρ : R 2 → R called the Wigner function, introduced for the first time by [START_REF] Wigner | On the quantum correction for thermodynamic equations[END_REF]. The Wigner function may be obtained from the momentum representation
$$\widetilde{W}_\rho(u,v) := \mathcal{F}_2[W_\rho](u,v) = \operatorname{Tr}\!\left(\rho\, e^{i(u\mathbf{Q}+v\mathbf{P})}\right), \quad (3)$$
where $\mathcal{F}_2$ is the Fourier transform with respect to both variables. By the change of variables $(u,v)$ to polar coordinates $(t\cos\phi, t\sin\phi)$, we get
$$\widetilde{W}_\rho(t\cos\phi,\, t\sin\phi) = \mathcal{F}_1[p_\rho(\cdot|\phi)](t) = \operatorname{Tr}(\rho\, e^{it\mathbf{X}_\phi}). \quad (4)$$
The origin of the appellation quantum homodyne tomography comes from the fact that the procedure described above is similar to positron emission tomography (PET), where the density of the observations is the Radon transform of the underlying distribution
$$p_\rho(x|\phi) = R[W_\rho](x,\phi) = \int W_\rho(x\cos\phi + t\sin\phi,\; x\sin\phi - t\cos\phi)\, dt, \quad (5)$$
where R[W ρ ] denotes the Radon transform of W ρ . The main difference with PET is that the role of the unknown distribution is played by the Wigner function which can be negative.
Physicists consider the Wigner function as a quasi-probability density of $(Q,P)$, as if one could measure $(\mathbf{Q},\mathbf{P})$ simultaneously. Indeed, the Wigner function satisfies
$$W_\rho: \mathbb{R}^2 \to \mathbb{R}, \qquad \int\!\!\int W_\rho(q,p)\, dq\, dp = 1, \quad (6)$$
and other boundedness properties unavailable for classical densities. However, the Wigner function can, and normally does, take negative values for states which are not associated with any classical model. This property of the Wigner function is used by physicists as a criterion to discriminate non-classical states of the field.
In the Fock basis, we can write W ρ in terms of the density matrix [ρ jk ] as follows (see [START_REF] Leonhardt | Measuring the Quantum State of Light[END_REF] for the details).
$$W_\rho(q,p) = \sum_{j,k} \rho_{jk}\, W_{j,k}(q,p),$$
where for $j \ge k$,
$$W_{j,k}(q,p) = \frac{(-1)^j}{\pi}\left(\frac{k!}{j!}\right)^{1/2}\left(\sqrt{2}(ip-q)\right)^{j-k} e^{-(q^2+p^2)}\, L_k^{j-k}\!\left(2q^2+2p^2\right), \quad (7)$$
and $L_k^{\alpha}(x)$ denotes the generalized Laguerre polynomial of degree $k$ and order $\alpha$.
Pattern functions
The ideal result of the QHT measurement provides $(X_\phi, \Phi)$ with joint probability density, with respect to the Lebesgue measure on $\mathbb{R}\times[0,\pi]$, equal to
$$p_\rho(x,\phi) = \frac{1}{\pi}\, p_\rho(x|\phi)\,\mathbb{1}_{[0,\pi]}(\phi) = \frac{1}{\pi}\, R[W_\rho](x,\phi)\,\mathbb{1}_{[0,\pi]}(\phi). \quad (8)$$
The density p ρ (•, •) can be written in terms of the entries of the density matrix ρ (see [START_REF] Leonhardt | Measuring the Quantum State of Light[END_REF])
$$p_\rho(x,\phi) = \sum_{j,k=0}^{\infty} \rho_{j,k}\, \psi_j(x)\,\psi_k(x)\, e^{-i(j-k)\phi}, \quad (9)$$
where {ψ j } j∈N is the Fock basis defined in (1). Conversely (see D 'Ariano, Macchiavello and Paris (1994); [START_REF] Leonhardt | Measuring the Quantum State of Light[END_REF] for details), we can write
$$\rho_{j,k} = \int_0^\pi\!\!\int p_\rho(x,\phi)\, f_{j,k}(x)\, e^{i(j-k)\phi}\, dx\, d\phi, \quad (10)$$
where the functions f j,k : R → R introduced by [START_REF] Leonhardt | Tomographic reconstruction of the density matrix via pattern functions[END_REF] are called the "pattern functions". An explicit form of the Fourier transform of f j,k (•) is given by [START_REF] Richter | Realistic pattern functions for optical homodyne tomography and determination of specific expectation values[END_REF]: for all j ≥ k
$$\widetilde{f}_{j,k}(t) = \widetilde{f}_{k,j}(t) = \pi(-i)^{j-k}\sqrt{2^{k-j}\,\frac{k!}{j!}}\;|t|\, t^{j-k}\, e^{-\frac{t^2}{4}}\, L_k^{j-k}\!\left(\frac{t^2}{2}\right), \quad (11)$$
Note that by writing $t = \|w\| = \|(q,p)\| = \sqrt{q^2+p^2}$ in equation (7), we can define, for all $j \ge k$,
$$l_{j,k}(t) := |W_{j,k}(q,p)| = \frac{2^{\frac{j-k}{2}}}{\pi}\sqrt{\frac{k!}{j!}}\; t^{j-k}\, e^{-t^2}\, \left|L_k^{j-k}(2t^2)\right|. \quad (12)$$
Therefore, there exists a useful relation: for all $j \ge k$,
$$|\widetilde{f}_{j,k}(t)| = \frac{\pi}{2}\,|t|\; l_{j,k}(t/2). \quad (13)$$
Moreover [START_REF] Aubry | State estimation in quantum homodyne tomography with noisy data[END_REF] have given the following Lemma which will be useful to prove our main results.
Lemma 1 ([START_REF] Aubry | State estimation in quantum homodyne tomography with noisy data[END_REF]). For all $j,k \in \mathbb{N}$ and $J := j+k+1$, for all $t \ge 0$,
$$l_{j,k}(t) \le \frac{1}{\pi} \begin{cases} 1 & \text{if } 0 \le t \le \sqrt{J}, \\ e^{-(t-\sqrt{J})^2} & \text{if } t \ge \sqrt{J}. \end{cases} \quad (14)$$
Statistical model
In practice, when one performs a QHT measurement, a number of photons fails to be detected. These losses may be quantified by one single coefficient η ∈ [0, 1], such that η = 0 when there is no detection and η = 1 corresponds to the ideal case (no loss). The quantity (1 -η) represents the proportion of photons which are not detected due to various losses in the measurement process.
The parameter η is supposed to be known, as physicists argue that their machines actually have high detection efficiency, i.e. η ≈ 0.9. In this paper we consider the regime where more photons are detected than lost, that is η ∈ (1/2, 1]. Moreover, as the detection process is inefficient, an independent Gaussian noise interferes additively with the ideal data X φ . Note that the Gaussian nature of the noise is imposed by the Gaussian nature of the vacuum state which interferes additively.
To sum up, for $\Phi = \phi$ and for a known efficiency $\eta \in (1/2, 1]$, the effective result of the QHT measurement is
$$Y = \sqrt{\eta}\; X_\phi + \sqrt{\frac{1-\eta}{2}}\; \xi, \quad (15)$$
where ξ is a standard Gaussian random variable, independent of the random variable X φ having density, with respect to the Lebesgue measure on R×[0, π], equal to p ρ (•, •) defined in equation ( 8).
For the sake of simplicity, we re-parametrize (15) as follows:
$$Z := Y/\sqrt{\eta} = X_\phi + \sqrt{\frac{1-\eta}{2\eta}}\; \xi =: X_\phi + \sqrt{2\gamma}\; \xi, \quad (16)$$
where $\gamma = (1-\eta)/(4\eta)$ is known and $\gamma \in [0, 1/4)$ as $\eta \in (1/2, 1]$. Note that $\gamma = 0$ corresponds to the ideal case.
Let us denote by $p^\gamma_\rho(\cdot,\cdot)$ the density of $(Z,\Phi)$, which is the convolution of the density of $X_\phi$ with $N_\gamma(\cdot)$, the density of a centered Gaussian distribution with variance $2\gamma$, that is
$$p^\gamma_\rho(z,\phi) = \left[\frac{1}{\pi}\, R[W_\rho](\cdot,\phi)\,\mathbb{1}_{[0,\pi]}(\phi)\right] * N_\gamma\,(z) = \left[p_\rho(\cdot,\phi) * N_\gamma\right](z) = \int p_\rho(z-x,\phi)\, N_\gamma(x)\, dx. \quad (17)$$
For $\Phi = \phi$, a useful equation in the Fourier domain, deduced from relation (17) and equation (4), is
$$\mathcal{F}_1[p^\gamma_\rho(\cdot,\phi)](t) = \mathcal{F}_1[p_\rho(\cdot,\phi)](t)\, \widetilde{N}_\gamma(t) = \widetilde{W}_\rho(t\cos\phi,\, t\sin\phi)\, \widetilde{N}_\gamma(t), \quad (18)$$
where $\mathcal{F}_1$ denotes the Fourier transform with respect to the first variable, and the Fourier transform of $N_\gamma(\cdot)$ is $\widetilde{N}_\gamma(t) = e^{-\gamma t^2}$.
This paper aims at reconstructing the Wigner function $W_\rho$ of a monochromatic light in a cavity prepared in state $\rho$ from $n$ observations. As we cannot measure the quantum state precisely in a single experiment, we perform measurements on $n$ independent, identically prepared quantum systems. The measurement carried out on each of the $n$ systems in state $\rho$ is done by QHT as described in Section 1. In practice, the results of such experiments are $n$ independent identically distributed random variables $(Z_1,\Phi_1),\dots,(Z_n,\Phi_n)$ such that
$$Z_\ell := X_\ell + \sqrt{2\gamma}\;\xi_\ell, \qquad \ell = 1,\dots,n, \quad (19)$$
with values in $\mathbb{R}\times[0,\pi]$ and distribution $P^\gamma_\rho$ admitting the density $p^\gamma_\rho(\cdot,\cdot)$ defined in (17) with respect to the Lebesgue measure on $\mathbb{R}\times[0,\pi]$. For all $\ell = 1,\dots,n$, the $\xi_\ell$'s are independent standard Gaussian random variables, independent of all $(X_\ell,\Phi_\ell)$.
In order to study the theoretical performance of our different procedures, we use the fact that the unknown Wigner function belongs to the class of very smooth functions $\mathcal{A}(\beta,r,L)$ (similar to those of [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF]; [START_REF] Aubry | State estimation in quantum homodyne tomography with noisy data[END_REF]), described via its Fourier transform:
$$\mathcal{A}(\beta,r,L) := \left\{ f: \mathbb{R}^2 \to \mathbb{R} \;:\; \int\!\!\int |\widetilde{f}(u,v)|^2\, e^{2\beta\|(u,v)\|^r}\, du\, dv \le (2\pi)^2 L \right\}, \quad (20)$$
where $\widetilde{f}(\cdot,\cdot)$ denotes the Fourier transform of $f$ with respect to both variables and $\|(u,v)\| = \sqrt{u^2+v^2}$ denotes the usual Euclidean norm. Note that this class is reasonable from a physical point of view. Indeed, it follows from Propositions 1 and 2 in [START_REF] Aubry | State estimation in quantum homodyne tomography with noisy data[END_REF] that any Wigner function whose density matrix belongs to the realistic class $\mathcal{R}(C,B,r)$ lies in a class $\mathcal{A}(\beta',r,L')$ where $\beta' > 0$ and $L' > 0$ depend only on $B$, $C$, $r$. To the best of our knowledge, there exists no converse result proving that the density matrix of any Wigner function in the class $\mathcal{A}(\beta',r,L')$ belongs to $\mathcal{R}(C,B,r)$.
Previous works and outline of the results
The problem of reconstructing the quantum state of a light beam has been extensively studied in physics literature and in quantum statistics. We only mention papers with a theoretical analysis of the performance of their estimation procedure. Additional references to physics papers can be found therein. Methods for reconstructing a quantum state are based on the estimation of either the density matrix ρ or the Wigner function W ρ . In order to assess the performance of a procedure, a realistic class of quantum states R(C, B, r) has been defined in many papers such as in (2) where the elements of the density matrix decrease rapidly. From the physics point of view, all the states which have been produced in the laboratory up to now belong to such a class with r = 2, and a more detailed argument can be found in the paper of [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF].
The estimation of the density matrix from averages of data has been considered in the framework of ideal detection (η = 1 i.e. γ = 0) by [START_REF] Artiles | An invitation to quantum tomography[END_REF] while the noisy setting has been investigated by [START_REF] Aubry | State estimation in quantum homodyne tomography with noisy data[END_REF] for the Frobenius norm risk. More recently in the noisy setting, an adaptive estimation procedure over the classes of quantum states R(C, B, r), i.e. without assuming the knowledge of the regularity parameters, has been proposed by [START_REF] Alquier | Adaptive Estimation of the Density Matrix in Quantum Homodyne Tomography with Noisy Data[END_REF] and an upper bound for Frobenius risk has been given. The problem of goodness-of-fit testing in quantum statistics has been considered in [START_REF] Meziani | Nonparametric goodness-of fit testing in quantum homodyne tomography with noisy data[END_REF]. In this noisy setting, the latter paper derived a testing procedure from a projection-type estimator where the projection is done in L 2 distance on some suitably chosen pattern functions.
The Wigner function is an appealing tool for physicists to determine particular features of the quantum state of a system; therefore, this work is of practical interest. For instance, non-classical quantum states correspond to negative parts of the Wigner function. This paper deals with the problem of reconstruction of the Wigner function $W_\rho$ in the context of QHT, taking into account the detection losses occurring in the measurement, which lead to an additional Gaussian noise in the measurement data ($\eta \in (1/2, 1]$). In the absence of noise ($\gamma = 0$), [START_REF] Guţă | Minimax estimation of the Wigner in quantum homodyne tomography with ideal detectors[END_REF] obtained the sharp minimax rate of pointwise estimation over the class of Wigner functions $\mathcal{A}(\beta, 1, L)$ for a kernel-based procedure. The same problem in the noisy setting was treated by [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF], who obtained minimax rates for the pointwise risk over the class $\mathcal{A}(\beta, r, L)$ for the procedure defined in (21). Moreover, a truncated version of their estimator was proposed by [START_REF] Aubry | State estimation in quantum homodyne tomography with noisy data[END_REF], where an upper bound is computed for the $L_2$-norm risk over the class $\mathcal{A}(\beta, r, L)$. The estimation of a quadratic functional of the Wigner function, as an estimator of the purity, was explored in [START_REF] Meziani | Nonparametric Estimation of the Purity of a Quantum State in Quantum Homodyne Tomography with Noisy Data[END_REF].
The reconstruction problem considered in this paper belongs to the class of linear inverse problems. It requires to solve simultaneously a tomography problem and a density deconvolution problem. We refer to [START_REF] Cavalier | Nonparametric statistical inverse problems[END_REF] for a survey of the literature on general inverse problems in statistics.
Tomography problems, such as noisy integral equations of the form $y = R[f](x,\phi) + \xi$, where $(x,\phi) \in \mathbb{R}\times[0,\pi]$, $\xi$ is some random noise and $f$ is the unknown function to be recovered, have been investigated in Korostelëv and Tsybakov (1991, 1993); [START_REF] Klemelä | Empirical risk minimization in inverse problems[END_REF] and the references cited therein. For density-type tomography problems closer to our setting, [START_REF] Johnstone | Speed of estimation in positron emission tomography and related inverse problems[END_REF] considered uncorrupted observations, corresponding to $\gamma = 0$ in (19), and established the minimax rate of the inverse Radon transform over Sobolev classes of density functions for the quadratic risk. Under a similar framework, [START_REF] Donoho | Renormalization exponents and optimal pointwise rates of convergence[END_REF] obtained the pointwise minimax rate of reconstruction. The deconvolution problem has been studied extensively in the literature. We refer to [START_REF] Bissantz | Non-parametric confidence bands in deconvolution density estimation[END_REF]; [START_REF] Bissantz | Statistical inference for inverse problems[END_REF]; Butucea and Tsybakov (2008a,b); [START_REF] Carroll | Optimal rates of convergence for deconvolving a density[END_REF]; [START_REF] Delaigle | Practical bandwidth selection in deconvolution kernel density estimation[END_REF]; [START_REF] Diggle | A Fourier approach to nonparametric deconvolution of a density estimate[END_REF]; [START_REF] Fan | On the optimal rates of convergence for nonparametric deconvolution problems[END_REF][START_REF] Fan | Adaptively local one-dimensional subproblems with application to a deconvolution problem[END_REF]; [START_REF] Goldenshluger | On pointwise adaptive nonparametric deconvolution[END_REF]; [START_REF] Hesse | Optimal iterative density deconvolution[END_REF]; Johnstone, Kerkyacharian, Picard and Raimondo (2004); Johnstone and Raimondo (2004); [START_REF] Meister | Deconvolution from Fourier-oscillating error densities under decay and smoothness restrictions[END_REF]; [START_REF] Pensky | Functional deconvolution in a periodic setting: Uniform case[END_REF]; [START_REF] Pensky | Adaptive wavelet estimator for nonparametric density deconvolution[END_REF]; [START_REF] Stefanski | Rates of convergence of some estimators in a class of deconvolution problems[END_REF]; [START_REF] Stefanski | Deconvoluting kernel density estimators[END_REF]. Most of these papers concern the quadratic risk or the pointwise risk. [START_REF] Lounici | Global uniform risk bounds for wavelet deconvolution estimators[END_REF] established the first minimax uniform risk estimation result for a wavelet deconvolution density estimator over Besov classes of density functions.
The remainder of the article is organized as follows. In Section 3, we establish in Theorem 1 the first L ∞ -norm risk upper bound for the estimation procedure (21) of the Wigner function while in Theorem 2 we establish the first minimax lower bounds for the estimation of the Wigner function for the L 2 -norm and the L ∞ -norm risks. As a consequence of our results, we determined the minimax L ∞ -norm and L 2 -norm rates of estimation for this noisy QHT problem up to a logarithmic factor in the sample size. We propose in Section 4 a Lepski-type procedure that adapts to the unknown smoothness parameters β > 0 and r ∈ (0, 2] of the Wigner function of interest. The only previous result on adaptation is due to [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF] but concerns the simplest case r ∈ (0, 1) where the estimation procedure (21) with a proper choice of the parameter h independent of β, r is naturally minimax adaptive up to a logarithmic factor in the sample size n. Theoretical investigations are complemented by numerical experiments reported in Section 5. The proofs of the main results are deferred to the Appendix.
Wigner function estimation and minimax risk
From now on, we work in the practical framework and we assume that $n$ independent identically distributed random pairs $(Z_i,\Phi_i)_{i=1,\dots,n}$ are observed, where $\Phi_i$ is uniformly distributed in $[0,\pi]$ and the joint density of $(Z_i,\Phi_i)$ is $p^\gamma_\rho(\cdot,\cdot)$ (see (17)). As in [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF], we use a modified version of the usual tomography kernel in order to take into account the additive noise on the observations, and construct a kernel $K^\gamma_h$ which asymptotically performs both deconvolution and inverse Radon transform on our data. Our estimation procedure is
$$\hat{W}^\gamma_h(q,p) = \frac{1}{2\pi n} \sum_{\ell=1}^{n} K^\gamma_h\!\left([z,\Phi_\ell] - Z_\ell\right), \quad (21)$$
where $0 \le \gamma < 1/4$ is a fixed parameter and $h > 0$ tends to 0 as $n \to \infty$ in a proper way to be chosen later. The kernel is defined via its Fourier transform,
$$\widetilde{K}^\gamma_h(t) = |t|\, e^{\gamma t^2}\, \mathbb{1}_{|t| \le 1/h}, \quad (22)$$
where $z = (q,p)$ and $[z,\phi] = q\cos\phi + p\sin\phi$.
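For readers wishing to reproduce the estimator numerically, the following sketch (our own illustration, not code from the paper) evaluates (21) at a point by approximating the kernel through numerical inversion of its Fourier transform (22); the Fourier normalisation convention, the quadrature grid and keeping $h$ away from 0 to avoid overflow of $e^{\gamma t^2}$ are all assumptions of this sketch:

```python
import numpy as np

def kernel_Kh(u, h, gamma, n_grid=4096):
    """K_h^gamma(u) = int_{|t|<=1/h} |t| exp(gamma t^2) cos(t u) dt,
    a real, even inverse Fourier transform of Eq. (22); the 1/(2*pi)
    normalisation is kept in the estimator below (assumed convention)."""
    t = np.linspace(-1.0 / h, 1.0 / h, n_grid)
    integrand = np.abs(t) * np.exp(gamma * t**2) * np.cos(np.outer(np.atleast_1d(u), t))
    return np.trapz(integrand, t, axis=1)

def wigner_estimate(q, p, Z, Phi, h, gamma):
    """Estimator (21) at z = (q, p) from the sample {(Z_l, Phi_l)}."""
    u = q * np.cos(Phi) + p * np.sin(Phi) - Z   # [z, Phi_l] - Z_l
    return kernel_Kh(u, h, gamma).mean() / (2 * np.pi)
```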
From now on, $\|\cdot\|_\infty$, $\|\cdot\|_2$ and $\|\cdot\|_1$ will denote respectively the $L_\infty$-norm, the $L_2$-norm and the $L_1$-norm. The $L_\infty$-norm risk can be trivially bounded as follows:
$$\|\hat{W}^\gamma_h - W_\rho\|_\infty \le \|\hat{W}^\gamma_h - E[\hat{W}^\gamma_h]\|_\infty + \|E[\hat{W}^\gamma_h] - W_\rho\|_\infty, \quad (23)$$
so in order to study the $L_\infty$-norm risk of our procedure $\hat{W}^\gamma_h$, we study in Propositions 1 and 2, respectively, the bias term and the stochastic term.
Proposition 1. Let $\hat{W}^\gamma_h$ be the estimator of $W_\rho$ defined in (21), with $h > 0$ tending to 0 as $n \to \infty$. Then, for $W_\rho \in \mathcal{A}(\beta,r,L)$ defined in (20) with $r \in (0,2]$,
$$\|E[\hat{W}^\gamma_h] - W_\rho\|_\infty \le \sqrt{\frac{L}{(2\pi)^2\,\beta r}}\; h^{(r-2)/2}\, e^{-\beta h^{-r}}\, (1+o(1)).$$
The proof is deferred to Appendix A.1.
Proposition 2. Let Ŵ^γ_h be the estimator of W_ρ defined in (21) and 0 < h < 1. Then there exists a constant C₁ > 0, depending only on γ, such that

$$\mathbb E\big\|\hat W_h^\gamma - \mathbb E[\hat W_h^\gamma]\big\|_\infty \le C_1\, e^{\gamma h^{-2}}\Big( \sqrt{\frac{\log n}{n}} + \frac{\log n}{n} \Big). \qquad (24)$$

Moreover, for any x > 0, we have with probability at least 1 − e^{−x} that

$$\big\|\hat W_h^\gamma - \mathbb E[\hat W_h^\gamma]\big\|_\infty \le C_2\, e^{\gamma h^{-2}} \max\Big\{ \sqrt{\frac{\log n + x}{n}},\; \frac{\log n + x}{n} \Big\}, \qquad (25)$$

where C₂ > 0 depends only on γ. The proof is deferred to Appendix A.2. The following theorem establishes the upper bound of the L∞-norm risk.
Theorem 1. Assume that W_ρ belongs to the class A(β, r, L) defined in (20) for some r ∈ (0, 2] and β, L > 0. Consider the estimator (21) with h* = h*(r) such that

$$\frac{\gamma}{(h^*)^{2}} + \frac{\beta}{(h^*)^{r}} = \frac12\log\big(n/\log n\big) \ \text{ if } 0 < r < 2, \qquad h^* = \Big(\frac{2(\beta+\gamma)}{\log(n/\log n)}\Big)^{1/2} \ \text{ if } r = 2. \qquad (26)$$

Then we have

$$\mathbb E\big\|\hat W_{h^*}^\gamma - W_\rho\big\|_\infty \le C\, v_n(r),$$

where C > 0 can depend only on γ, β, r, L, and the rate of convergence v_n is given by

$$v_n(r) = \begin{cases} (h^*)^{(r-2)/2}\, e^{-\beta (h^*)^{-r}} & \text{if } 0 < r < 2,\\[2pt] \big(\tfrac{\log n}{n}\big)^{\frac{\beta}{2(\beta+\gamma)}} & \text{if } r = 2. \end{cases} \qquad (27)$$
Note that for r ∈ (0, 2) the rate of convergence v n is faster than any logarithmic rate in the sample size but slower than any polynomial rate. For r = 2, the rate of convergence is polynomial in the sample size.
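As a small illustration, the bandwidth equation (26) can be solved numerically by bisection, since the left-hand side is decreasing in h; this sketch and its function name are ours, not part of the paper.

```python
import math

def optimal_bandwidth(n, beta, r, gamma):
    """Solve (26): gamma/h^2 + beta/h^r = (1/2) log(n / log n) for 0 < r < 2;
    closed form for r = 2."""
    target = 0.5 * math.log(n / math.log(n))
    if r == 2:
        return math.sqrt((beta + gamma) / target)
    lo, hi = 1e-6, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gamma / mid**2 + beta / mid**r > target:
            lo = mid   # left side too large, h must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)
```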
Proof of Theorem 1: Taking the expectation in (23) and using Propositions 1 and 2, we get for all 0 < h < 1

$$\mathbb E\big\|\hat W_h^\gamma - W_\rho\big\|_\infty \le \mathbb E\big\|\hat W_h^\gamma - \mathbb E[\hat W_h^\gamma]\big\|_\infty + \big\|\mathbb E[\hat W_h^\gamma] - W_\rho\big\|_\infty \le C_1\, e^{\gamma h^{-2}}\sqrt{\frac{\log n}{n}}\,(1+o(1)) + C_B\, h^{(r-2)/2} e^{-\beta h^{-r}}(1+o(1)),$$

where C_B = L/((2π)² βr), h → 0 as n → ∞ and W_ρ ∈ A(β, r, L). The optimal bandwidth parameter h*(r) := h* is such that

$$h^* = \arg\inf_{h>0}\Big\{ C_B\, h^{(r-2)/2} e^{-\beta h^{-r}} + C_1\, e^{\gamma h^{-2}}\sqrt{\frac{\log n}{n}} \Big\}. \qquad (28)$$

Therefore, taking the derivative and setting it to zero, we get

$$\frac{\gamma}{h^{2}} + \frac{\beta}{h^{r}} = \frac12\log\big(n/\log n\big) + C_1(1+o(1)).$$

For 0 < r < 2, (26) thus provides an accurate approximation of the optimum h* when the number of observations n is large. Plugging it into (28), we get

$$(h^*)^{(r-2)/2}\, e^{-\beta (h^*)^{-r}} = (h^*)^{(r-2)/2}\, \sqrt{\frac{\log n}{n}}\; e^{\gamma (h^*)^{-2}}.$$

It follows that the bias term is much larger than the stochastic term for 0 < r < 2. It is easy to see that for r = 2 we have h* = (2(β+γ)/log(n/log n))^{1/2} and that the bias term and the stochastic term are of the same order.
We now derive a minimax lower bound. We consider specifically the case r = 2 since it is relevant to quantum physics applications. The only known lower bound result for the estimation of a Wigner function is due to [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF] and concerns the pointwise risk. In Theorem 2 below, we obtain the first minimax lower bounds for the estimation of a Wigner function W_ρ ∈ A(β, 2, L) with the L2-norm and L∞-norm risks.

Theorem 2. Assume that (Z₁, Φ₁), …, (Z_n, Φ_n) come from the model (16) with γ ∈ [0, 1/4). Then, for any β, L > 0 and p ∈ {2, ∞}, there exists a constant c := c(β, L, γ) > 0 such that, for n large enough,

$$\inf_{\hat W_n}\ \sup_{W_\rho\in A(\beta,2,L)} \mathbb E\big\|\hat W_n - W_\rho\big\|_p \ge c\, n^{-\frac{\beta}{2(\beta+\gamma)}},$$

where the infimum is taken over all possible estimators Ŵ_n based on the i.i.d. sample {(Z_i, Φ_i)}_{i=1}^n. We believe similar arguments can be applied to the case 0 < r < 2 up to several technical modifications; this is left for future work. The proof is deferred to Appendix B. This theorem guarantees that the L∞-norm upper bound derived in Theorem 1, and also the L2-norm risk upper bound of [START_REF] Aubry | State estimation in quantum homodyne tomography with noisy data[END_REF], are minimax optimal up to a logarithmic factor in the sample size n.
Adaptation to the smoothness
As we see in (28), the optimal choice of the bandwidth h* depends on the unknown smoothness parameters β and r ∈ (0, 2]. We propose here to implement a Lepski-type procedure to select an adaptive bandwidth ĥ. The Lepski method was introduced in [START_REF] Lepskiȋ | Asymptotically minimax adaptive estimation. I. Upper bounds. Optimally adaptive estimates[END_REF][START_REF] Lepskiȋ | Asymptotically minimax adaptive estimation. II. Schemes without optimal adaptation[END_REF] and has since become a popular method to solve various adaptation problems. We will show that the estimator obtained with this bandwidth achieves the optimal minimax rate for the L∞-norm risk. Our adaptive procedure is implemented in Section 5.

Let M ≥ 2 and let 0 < h_M < … < h_1 < 1 be a grid of (0, 1). We build the estimators Ŵ^γ_{h_m} associated with the bandwidths h_m, for any 1 ≤ m ≤ M. For any fixed x > 0, define r_n(x) = max{√((log n + x)/n), (log n + x)/n}. We denote by L_κ(·) the Lepski functional

$$L_\kappa(m) = \max_{j>m}\Big\{ \big\|\hat W^\gamma_{h_m} - \hat W^\gamma_{h_j}\big\|_\infty - 2\kappa\, e^{\gamma h_j^{-2}}\, r_n(x+\log M) \Big\} + 2\kappa\, e^{\gamma h_m^{-2}}\, r_n(x+\log M), \qquad (29)$$

where κ > 0 is a fixed constant. Our final adaptive estimator, denoted Ŵ^γ_{h_m̂}, is the estimator defined in (21) with the bandwidth h_m̂, where

$$\hat m = \operatorname*{argmin}_{1\le m\le M} L_\kappa(m). \qquad (30)$$
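A minimal sketch of the selection rule (29)-(30), assuming the estimators have been precomputed on a common evaluation grid so that sup-norms can be approximated by maxima over that grid; the function name and the defaults (κ = 1 and x = log M, the values used in Section 5) are ours.

```python
import numpy as np

def lepski_select(estimates, hs, n, gamma, kappa=1.0, x=None):
    """estimates[m]: array of values of W^gamma_{h_m} on a common grid."""
    M = len(hs)
    x = np.log(M) if x is None else x
    y = np.log(n) + x + np.log(M)
    rn = max(np.sqrt(y / n), y / n)                      # r_n(x + log M)
    pen = 2.0 * kappa * np.exp(gamma / np.asarray(hs) ** 2) * rn
    L = np.empty(M)
    for m in range(M):
        diffs = [np.max(np.abs(estimates[m] - estimates[j])) - pen[j]
                 for j in range(m + 1, M)]
        L[m] = (max(diffs) if diffs else 0.0) + pen[m]   # empty max set to 0
    return int(np.argmin(L))                             # index m-hat of (30)
```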
Note that the following result is valid for any β > 0 and r ∈ (0, 2].
Theorem 3. Assume that W_ρ ∈ A(β, r, L). Take κ > 0 sufficiently large and M ≥ 2, and choose 0 < h_M < … < h_1 < 1. Then, for the bandwidth h_m̂ with m̂ defined in (30) and for any x > 0, we have with probability at least 1 − e^{−x}

$$\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C \min_{1\le m\le M}\Big\{ h_m^{r/2-1}\, e^{-\beta h_m^{-r}} + e^{\gamma h_m^{-2}}\, r_n(x+\log M) \Big\}, \qquad (31)$$

where C > 0 is a constant depending only on γ, β, r, L. In addition, we have in expectation

$$\mathbb E\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C \min_{1\le m\le M}\Big\{ h_m^{r/2-1}\, e^{-\beta h_m^{-r}} + e^{\gamma h_m^{-2}}\, r_n(\log M) \Big\}, \qquad (32)$$

where C > 0 is a constant depending only on γ, r, β, L.
The proof is deferred to the Appendix C.
The idea is now to build a sufficiently fine grid 0 < h_M < … < h_1 < 1 to achieve the optimal rate of convergence over the whole range β > 0. Take M = ⌊(log n/(2γ))^{1/2}⌋. We consider the following grid for the bandwidth parameter h:

$$h_1 = \frac12, \qquad h_m = \frac12\Big(1 - (m-1)\sqrt{\frac{2\gamma}{\log n}}\Big), \quad 1 \le m \le M. \qquad (33)$$
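For completeness, a one-function sketch of the grid (33), assuming the choice M = ⌊(log n/(2γ))^{1/2}⌋ consistent with the proof of Corollary 1 below; as elsewhere, this code is illustrative only.

```python
import math

def bandwidth_grid(n, gamma):
    """Grid (33): h_1 = 1/2, h_m = (1/2) * (1 - (m-1) * sqrt(2*gamma/log n))."""
    M = int(math.sqrt(math.log(n) / (2.0 * gamma)))
    step = math.sqrt(2.0 * gamma / math.log(n))
    return [0.5 * (1.0 - (m - 1) * step) for m in range(1, M + 1)]
```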
We build the corresponding estimators W γ hm and we apply the Lepski procedure ( 29)-( 30) to obtain the estimator W γ h m . The next result guarantees that this estimator is minimax adaptive over the class Ω := {(β, r, L), β > 0, 0 < r ≤ 2, L > 0} .
Corollary 1. Let the conditions of Theorem 3 be satisfied. Then the estimator Ŵ^γ_{h_m̂}, with m̂ defined in (30), satisfies for any (β, r, L) ∈ Ω

$$\limsup_{n\to\infty}\; \sup_{W_\rho\in A(\beta,r,L)} \mathbb E\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C\, v_n(r),$$
where v n (r) is the rate defined in ( 27) and C is a positive constant depending only on r, L, β and γ.
Proof of Corollary 1: First note that for all m = 1, …, M, since h_m ∈ ((γ/(2 log n))^{1/2}, 1/2], the bias term h_m^{r/2−1} e^{−β h_m^{−r}} is larger than the stochastic term e^{γ h_m^{−2}} r_n(log M), up to a numerical constant. Define

$$\bar m := \operatorname*{argmax}_{1\le m\le M}\big\{ |h_m - h^*| \,:\, h_m \le h^* \big\},$$

which is well defined. Indeed, we have

$$\frac{h_M}{h^*} = \frac12\Big(1 - (M-1)\sqrt{\tfrac{2\gamma}{\log n}}\Big)\Big(\tfrac{\log n}{2\gamma} - \tfrac{\beta}{\gamma}(h^*)^{-r}\Big)^{1/2} = \frac12\Big(1 - M + \sqrt{\tfrac{\log n}{2\gamma}}\Big)\Big(1 - \tfrac{2\beta}{\log n}(h^*)^{-r}\Big)^{1/2}.$$

Moreover, as 0 ≤ (log n/(2γ))^{1/2} − M ≤ 1, we get

$$\frac{h_M}{h^*} \le \Big(1 - \frac{2\beta}{\log n}(h^*)^{-r}\Big)^{1/2} \le 1.$$

Therefore, from (32),

$$\mathbb E\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C\, h_{\bar m}^{r/2-1} e^{-\beta h_{\bar m}^{-r}} = C\, h_{\bar m}^{r/2-1} e^{-\beta h_{\bar m}^{-r}}\, v_n(r)\, v_n(r)^{-1} = C\Big(\frac{h_{\bar m}}{h^*}\Big)^{r/2-1} e^{-\beta( h_{\bar m}^{-r} - (h^*)^{-r})}\, v_n(r).$$

By the definition of m̄, it follows that h_{m̄}^{−r} ≥ (h*)^{−r}; hence

$$\mathbb E\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C\Big(\frac{h_{\bar m}}{h^*}\Big)^{r/2-1} v_n(r) = C\Big(\frac{h_{\bar m}-h^*}{h^*} + 1\Big)^{r/2-1} v_n(r).$$

By construction |h_{m̄} − h*| ≤ (γ/(2 log n))^{1/2}, so we have

$$\mathbb E\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C\Big(1 - \frac{(\gamma/(2\log n))^{1/2}}{h^*}\Big)^{r/2-1} v_n(r).$$

As (h*)^{−1} ≤ (log n/(2γ))^{1/2}, it holds that 1 − (γ/(2 log n))^{1/2}/h* ≥ 1/2. Therefore, there exists a numerical constant C' > 0 such that, for any 0 < r ≤ 2, we have

$$\mathbb E\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C'\, v_n(r).$$
Experimental evaluation
We test our method on two examples of Wigner functions, corresponding to the single-photon and the Schrödinger's cat states, which are respectively defined as

$$W_\rho(q,p) = -(1 - 2(q^2+p^2))\, e^{-q^2-p^2}, \qquad W_\rho(q,p) = \frac12 e^{-(q-q_0)^2-p^2} + \frac12 e^{-(q+q_0)^2-p^2} + \cos(2q_0 p)\, e^{-q^2-p^2}.$$

We used q₀ = 3 in our numerical tests. The toolbox to reproduce the numerical results of this article is available online1. Following [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF], and in order to obtain a fast numerical procedure, we implemented the estimator Ŵ^γ_h defined in (21) on a regular grid. More precisely, 2-D functions such as W_ρ are discretized on a fine 2-D grid of 256 × 256 points. We use the Fast Slant Stack Radon transform of [START_REF] Averbuch | A framework for discrete integral transformations: II. The 2D discrete Radon transform[END_REF], which is both fast and faithful to the continuous Radon transform R; it also implements a fast pseudo-inverse which accounts for the filtered back-projection formula (21). The filtering against the 1-D kernel (22) is computed along the radial rays in the Radon domain using fast Fourier transforms. We computed the Lepski functional (29) using the values x = log(M) and κ = 1. Figures 1 and 2 report the numerical results of our method on both test cases. The left part compares the error ‖Ŵ^γ_h − W_ρ‖_∞ (displayed as a function of h) to the parameters h_m̂ selected by the Lepski procedure (30). The error ‖Ŵ^γ_h − W_ρ‖_∞ (its empirical mean and its standard deviation) is computed in an "oracle" manner (since for these examples the Wigner function W_ρ to estimate is known) using 20 realizations of the sampling for each tested value (h_i)_{i=1}^M. The histogram of values h_m̂ is computed by solving (29)-(30) for 20 realizations of the sampling. This comparison shows, on both test cases, that the method is able to select a parameter value h_m̂ which lies around the optimal parameter value (as indicated by the minimum of the L∞-norm risk). The central and right parts show graphical displays of Ŵ^γ_{h_m̂}, where m̂ is selected using the Lepski procedure (30), for a given sampling realization.
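As a short illustration, the two test Wigner functions can be evaluated on a regular grid as follows; the grid range is our choice and the code is a sketch, not the online toolbox.

```python
import numpy as np

def wigner_single_photon(q, p):
    return -(1.0 - 2.0 * (q**2 + p**2)) * np.exp(-q**2 - p**2)

def wigner_cat(q, p, q0=3.0):
    return (0.5 * np.exp(-(q - q0)**2 - p**2)
            + 0.5 * np.exp(-(q + q0)**2 - p**2)
            + np.cos(2.0 * q0 * p) * np.exp(-q**2 - p**2))

# Discretize on the 256 x 256 grid used in the experiments (range assumed).
q = np.linspace(-6.0, 6.0, 256)
Q, P = np.meshgrid(q, q)
W_true = wigner_cat(Q, P)
```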
Appendix A: Proof of Propositions
A.1. Proof of Proposition 1
First remark that, by the inverse Fourier transform formula, for w = (q, p) ∈ R² and x = (x₁, x₂),

$$W_\rho(w) = \frac{1}{(2\pi)^2}\int \widetilde W_\rho(x)\, e^{-i(qx_1+px_2)}\, dx. \qquad (34)$$

Let Ŵ^γ_h be the estimator of W_ρ defined in (21); then

$$\mathbb E\big[\hat W_h^\gamma(w)\big] = \frac{1}{2\pi}\,\mathbb E\big[K_h^\gamma([w,\Phi_1]-Z_1)\big] = \frac{1}{2\pi}\int_0^\pi\!\!\int K_h^\gamma([w,\phi]-z)\, p^\gamma_\rho(z,\phi)\, dz\, d\phi = \frac{1}{2\pi}\int_0^\pi \big(K_h^\gamma * p^\gamma_\rho(\cdot,\phi)\big)([w,\phi])\, d\phi.$$

In the Fourier domain the convolution becomes a product; combining with (18), we obtain

$$\mathbb E\big[\hat W_h^\gamma(w)\big] = \int_0^\pi \frac{1}{(2\pi)^2}\int \widehat{K_h^\gamma}(t)\, \mathcal F_1[p^\gamma_\rho(\cdot,\phi)](t)\, e^{-it[w,\phi]}\, dt\, d\phi.$$

As N̂_γ(t) = e^{−γt²}, the definition (22) of the kernel combined with (18) gives

$$\mathbb E\big[\hat W_h^\gamma(w)\big] = \int_0^\pi \frac{1}{(2\pi)^2}\int \widehat{K_h^\gamma}(t)\, \widetilde W_\rho(t\cos\phi, t\sin\phi)\, \widehat N_\gamma(t)\, e^{-it[w,\phi]}\, dt\, d\phi = \int_0^\pi \frac{1}{(2\pi)^2}\int_{|t|\le 1/h} |t|\, \widetilde W_\rho(t\cos\phi, t\sin\phi)\, e^{-it[w,\phi]}\, dt\, d\phi.$$

Therefore, by the change of variables x = (t cos φ, t sin φ), it follows that

$$\mathbb E\big[\hat W_h^\gamma(w)\big] = \frac{1}{(2\pi)^2}\int_{\|x\|\le 1/h} \widetilde W_\rho(x)\, e^{-i(qx_1+px_2)}\, dx. \qquad (35)$$

From equations (34) and (35), we have

$$\big|\mathbb E[\hat W_h^\gamma(w)] - W_\rho(w)\big| \le \frac{1}{(2\pi)^2}\int_{\|x\|>1/h} \big|\widetilde W_\rho(x)\big|\, dx \le \frac{1}{(2\pi)^2}\Big(\int \big|\widetilde W_\rho(x)\big|^2 e^{2\beta\|x\|^r}\, dx\Big)^{1/2}\Big(\int_{\|x\|>1/h} e^{-2\beta\|x\|^r}\, dx\Big)^{1/2} \le \frac{L}{(2\pi)^2\,\beta r}\; h^{(r-2)/2}\, e^{-\beta h^{-r}}\,(1+o(1)), \quad h\to0,$$

by applying Lemma 7 (see Section D.5 below) and since W_ρ belongs to the class A(β, r, L) defined in (20).
A.2. Proof of Proposition 2
We first recall the notion of covering numbers for a functional class. For any probability distribution Q, we denote by L²(Q) the set of real-valued functions on R endowed with the L²(Q)-norm ‖·‖_{L²(Q)} = (∫_R |·|² dQ)^{1/2}. For any functional class H in L²(Q), the covering number N(ε, H, L²(Q)) denotes the minimal number of L²(Q)-balls of radius at most ε that cover H.

The following lemma is needed to prove Proposition 2.

Lemma 2. Let δ_h := h^{−1} e^{γh^{−2}} > 0 for any 0 < h ≤ 1. Then the class

$$\mathcal H_h = \big\{\delta_h^{-1} K_h^\gamma(\cdot - t),\; t\in\mathbb R\big\}, \quad h>0, \qquad (36)$$

is uniformly bounded by U := h/(2γπ). Moreover, for every 0 < ε < A and for finite positive constants A, v depending only on γ,

$$\sup_Q N(\varepsilon, \mathcal H_h, L^2(Q)) \le (A/\varepsilon)^v, \qquad (37)$$

where the supremum extends over all probability measures Q on R.
The proof of this Lemma can be found in D.1. To prove (24), we have to bound the following quantity :
$$\mathbb E\big[|K_h^\gamma([z,\Phi_\ell]-Z_\ell)|^2\big] \le \|K_h^\gamma\|_\infty^2 \le \big\|\widehat{K_h^\gamma}\big\|_1^2 = \Big(\int_{|t|\le h^{-1}}|t|\,e^{\gamma t^2}\, dt\Big)^2 = \Big(2\int_0^{h^{-1}} t\, e^{\gamma t^2}\, dt\Big)^2 = \Big(\frac{1}{\gamma}e^{\gamma h^{-2}} - \frac1\gamma\Big)^2 \le \frac{1}{\gamma^2}\, e^{2\gamma h^{-2}}. \qquad (38)$$

Moreover, for δ_h = h^{−1}e^{γh^{−2}}, we have

$$\delta_h^{-2}\,\mathbb E\big[|K_h^\gamma([z,\Phi_\ell]-Z_\ell)|^2\big] \le \frac{h^2}{\gamma^2}. \qquad (39)$$
By Lemma 2, the class H_h is of VC type. Next, we note that the supremum over R² is the same as a countable supremum since K^γ_h is continuous. Hence we can apply (85) to get

$$\mathbb E\big\|\hat W_h^\gamma - \mathbb E[\hat W_h^\gamma]\big\|_\infty = \mathbb E\sup_{z\in\mathbb R^2}\Big|\frac{1}{2\pi n}\sum_{\ell=1}^n\big\{K_h^\gamma([z,\Phi_\ell]-Z_\ell) - \mathbb E[K_h^\gamma([z,\Phi_\ell]-Z_\ell)]\big\}\Big| = \frac{\delta_h}{2\pi n}\,\mathbb E\sup_{z\in\mathbb R^2}\Big|\sum_{\ell=1}^n\big\{\delta_h^{-1}K_h^\gamma([z,\Phi_\ell]-Z_\ell) - \mathbb E[\delta_h^{-1}K_h^\gamma([z,\Phi_\ell]-Z_\ell)]\big\}\Big| \le \frac{C(\gamma)\,\delta_h}{2\pi n}\Big( \sigma\sqrt{n\log\frac{AU}{\sigma}} + U\log\frac{AU}{\sigma} \Big), \qquad (40)$$

where U = h/(2γπ) is the envelope of the class H_h defined in Lemma 2. By choosing

$$\sigma^2 := \frac{h^2}{\gamma^2} \ge \sup_{z\in\mathbb R^2}\mathbb E\big[\big(\delta_h^{-1}K_h^\gamma([z,\Phi_\ell]-Z_\ell)\big)^2\big]$$

in (40), we get the result in expectation (24). We now prove the result in probability (25). In view of the previous display (38), we have

$$\operatorname{Var}\Big( \gamma(h\delta_h)^{-1}\big\{K_h^\gamma([\cdot,\Phi_1]-Z_1) - \mathbb E[K_h^\gamma([\cdot,\Phi_1]-Z_1)]\big\} \Big) \le \gamma^2(h\delta_h)^{-2}\,\mathbb E\big[|K_h^\gamma([\cdot,\Phi_1]-Z_1)|^2\big] \le \gamma^2(h\delta_h)^{-2}\,\frac{1}{\gamma^2}e^{2\gamma h^{-2}} = 1.$$

Since U = h/(2γπ) and by (72), it follows that

$$\gamma(h\delta_h)^{-1}\big\|K_h^\gamma([\cdot,\Phi_1]-Z_1) - \mathbb E[K_h^\gamma([\cdot,\Phi_1]-Z_1)]\big\|_\infty \le 2\gamma(h\delta_h)^{-1}\|K_h^\gamma\|_\infty \le 2\gamma h^{-1} U = \frac1\pi \le 1.$$

We use Talagrand's inequality as in Theorem 2.3 of [START_REF] Bousquet | A Bennett concentration inequality and its application to suprema of empirical processes[END_REF]. Let us define

$$Z := \frac{n\gamma}{h\delta_h}\,\big\|\hat W_h^\gamma - \mathbb E[\hat W_h^\gamma]\big\|_\infty.$$

Then, for any x > 0 and with probability at least 1 − e^{−x}, we obtain

$$Z \le \mathbb E[Z] + \sqrt{2xn + 4x\,\mathbb E[Z]} + \frac x3 \le \mathbb E[Z] + \sqrt{2xn} + 2\sqrt{x\,\mathbb E[Z]} + \frac x3 \le 2\,\mathbb E[Z] + \sqrt{2xn} + \frac{4x}{3},$$

where we have used the decoupling inequality 2ab ≤ a² + b² with a = √x and b = √(E[Z]). Thus, with probability at least 1 − e^{−x}, we get

$$\big\|\hat W_h^\gamma - \mathbb E[\hat W_h^\gamma]\big\|_\infty = \frac{h\delta_h}{n\gamma}\, Z \le 2\,\mathbb E\big\|\hat W_h^\gamma - \mathbb E[\hat W_h^\gamma]\big\|_\infty + \frac{e^{\gamma h^{-2}}}{\gamma}\Big( \sqrt{\frac{2x}{n}} + \frac{4x}{3n} \Big).$$
Plugging our control (24) on E‖Ŵ^γ_h − E[Ŵ^γ_h]‖_∞ into the previous display, the result in probability follows.

Appendix B: Proof of Theorem 2 - Lower bounds

B.1. Proof of Theorem 2 - Lower bounds for the L2-norm

The proof of the minimax lower bounds follows a standard scheme for deconvolution problems, as in the papers of [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF]; [START_REF] Lounici | Global uniform risk bounds for wavelet deconvolution estimators[END_REF]. However, additional technicalities arise in building a proper set of Wigner functions and then deriving a lower bound. From now on, for the sake of brevity, we denote A(β, 2, L) by A(β, L), as we consider only the practical case r = 2. Let W₀ ∈ A(β, L) be a Wigner function. Its associated density function will be denoted by
$$p_0(x,\phi) = \frac1\pi\, R[W_0](x,\phi)\,\mathbf 1_{[0,\pi]}(\phi).$$

We construct a family of two Wigner functions W₀ and W₁ such that, for all w ∈ R²,

$$W_1(w) = W_0(w) + V_h(w),$$

where the constructions of W₀ and V_h are given in Appendices B.1.1 and B.1.2 and the parameter h = h(n) → 0 as n → ∞. We denote by

$$p_m(x,\phi) = \frac1\pi\, R[W_m](x,\phi)\,\mathbf 1_{[0,\pi]}(\phi), \qquad m = 0, 1,$$

the density functions associated with the Wigner functions W₀ and W₁. As we consider the noisy framework (16), in view of (17) we set, for m = 0, 1,

$$p^\gamma_m(z,\phi) = \big[p_m(\cdot,\phi) * N_\gamma\big](z).$$

If the following conditions (C1) to (C3) are satisfied, then Theorem 2.6 in the book of [START_REF] Tsybakov | Introduction to Nonparametric Estimation[END_REF] gives the lower bound.

(C1) W₀, W₁ ∈ A(β, L).
(C2) We have ‖W₁ − W₀‖²₂ ≥ 4φ_n², with φ_n² = O(n^{−β/(β+γ)}).
(C3) We have nχ²(p^γ₁, p^γ₀) := n ∫₀^π ∫ (p^γ₁(z,φ) − p^γ₀(z,φ))² / p^γ₀(z,φ) dz dφ ≤ 1/4.

Proofs of these three conditions are provided in Appendices B.1.3 to B.1.5.
B.1.1. Construction of W 0
The Wigner function W₀ is the same as in the paper of [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF]. For the sake of completeness, we recall its construction here. The probability density function associated with any density matrix ρ in the ideal noiseless setting is given by equation (9). In particular, for a diagonal density matrix ρ, the associated probability density function is

$$p_\rho(x,\phi) = \sum_{k=0}^\infty \rho_{kk}\,\psi_k^2(x).$$

For all 0 < α, λ < 1, we introduce a family of diagonal density matrices ρ^{α,λ} such that, for all k ∈ N,

$$\rho^{\alpha,\lambda}_{kk} = \int_0^1 z^k\, \frac{\alpha(1-z)^\alpha}{(1-\lambda)^\alpha}\,\mathbf 1_{\lambda\le z\le 1}\, dz. \qquad (41)$$

Therefore the probability density associated with this diagonal density matrix ρ^{α,λ} can be written as

$$p^{\alpha,\lambda}(x,\phi) = \sum_{k=0}^\infty \rho^{\alpha,\lambda}_{kk}\,\psi_k^2(x) = \sum_{k=0}^\infty \psi_k^2(x)\int_0^1 z^k\,\frac{\alpha(1-z)^\alpha}{(1-\lambda)^\alpha}\,\mathbf 1_{\lambda\le z\le 1}\, dz. \qquad (42)$$

Moreover, by the well-known Mehler formula (see [START_REF] Erdélyi | Higher transcendental functions[END_REF]), we have

$$\sum_{k=0}^\infty z^k\,\psi_k^2(x) = \frac{1}{\sqrt{\pi(1-z^2)}}\,\exp\Big(-x^2\,\frac{1-z}{1+z}\Big).$$

It then follows that

$$p^{\alpha,\lambda}(x,\phi) = \frac{\alpha}{(1-\lambda)^\alpha}\int_0^1 \frac{(1-z)^\alpha}{\sqrt{\pi(1-z^2)}}\,\exp\Big(-x^2\,\frac{1-z}{1+z}\Big)\,\mathbf 1_{\lambda\le z\le 1}\, dz.$$

The following lemma, proved in the paper of [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF], controls the tails of the associated density p^{α,λ}(x, φ) = p^{α,λ}(x), which does not depend on φ.

Lemma 3 (Butucea, Guţă and Artiles (2007)). For all φ ∈ [0, π], all 0 < α, λ < 1 and |x| > 1, there exist constants c, C depending on α and λ such that

$$c\,|x|^{-(1+2\alpha)} \le p^{\alpha,\lambda}(x) \le C\,|x|^{-(1+2\alpha)}.$$

In view of Lemma 3, the Wigner function W₀ will be chosen in the set

$$\mathcal W^{\alpha,\lambda} = \big\{ W^{\alpha,\lambda} = W_{\rho^{\alpha,\lambda}} : \text{Wigner function associated with } \rho^{\alpha,\lambda},\ 0 < \alpha, \lambda < 1 \big\},$$

where λ is such that W₀ is a Wigner function belonging to A(β, L) (see Section 6.1 in [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF] or the proof of Theorem 2 in Guţă and Artiles (2007)).
B.1.2. Construction of V_h for the L2-norm

Let δ := log^{−1}(n).  (43)

We define two infinitely differentiable functions g and g₁ such that:

• g₁ : R → [0, 1];
• the support of g₁ is Supp(g₁) = (δ, 2δ);
• g₁(t) = 1 for all t ∈ [4δ/3, 5δ/3];
• g : R → [−1, 1] is an odd function such that, for some fixed ε > 0, g(x) = 1 for any x ≥ ε.

Define also the following parameters:

$$a_1 := (h^{-2} + \delta)^{1/2}, \qquad b_1 := (h^{-2} + 2\delta)^{1/2}. \qquad (44)$$

$$\bar a_1 := \big(h^{-2} + \tfrac43\delta\big)^{1/2}, \qquad \bar b_1 := \big(h^{-2} + \tfrac53\delta\big)^{1/2}. \qquad (45)$$

$$C_0 := \sqrt{\pi L(\beta+\gamma)}. \qquad (46)$$

We also introduce an infinitely differentiable function V_h such that:

• V_h : R² → R is an odd real-valued function;
• setting t = ‖w‖ = (w₁² + w₂²)^{1/2}, the function V_h admits the following Fourier transform with respect to both variables:

$$\widetilde V_h(w) := \mathcal F_2[V_h](w) := i\, a\, C_0\, h^{-1}\, e^{\beta h^{-2}}\, e^{-2\beta |t|^2}\, g_1(|t|^2 - h^{-2})\, g(w_2), \qquad (47)$$

where a > 0 is a numerical constant chosen sufficiently small. The bandwidth is such that

$$h = \Big(\frac{\log n}{2(\beta+\gamma)}\Big)^{-1/2}. \qquad (48)$$
Note that Ṽ_h(w) is infinitely differentiable and compactly supported, hence it belongs to the Schwartz class S(R²) of fast decreasing functions on R². The Fourier transform being a continuous mapping of the Schwartz class onto itself, V_h is also in the Schwartz class S(R²). Moreover, Ṽ_h(w) is an odd function with purely imaginary values; consequently, V_h is an odd real-valued function. Thus we get

$$\int V_h(p,q)\, dp\, dq = \int R[V_h](x,\phi)\, dx = 0, \qquad (49)$$

for all φ ∈ [0, π], where R[V_h] is the Radon transform of V_h. Now we can define the function W₁ as follows:

$$W_1(z) = W_0(z) + V_h(z), \qquad (50)$$

where W₀ is the Wigner function associated to the density p₀ defined in (41). As in (8), we also define

$$p_1(x,\phi) = \frac1\pi\, R[W_1](x,\phi)\,\mathbf 1_{(0,\pi]}(\phi),$$

and

$$\rho^{(1)}_{j,k} = \int_0^\pi\!\!\int p_1(x,\phi)\, f_{j,k}(x)\, e^{-i(j-k)\phi}\, dx\, d\phi. \qquad (51)$$

By Lemma 6 in Appendix D.4, the matrix ρ^{(1)} is proved to be a density matrix. Therefore, in view of (9) and (49), the function W₁ is indeed a Wigner function.
B.1.3. Condition (C1)
By the triangle inequality, we have

$$\big\|\widetilde W_1\, e^{\beta\|\cdot\|^2}\big\|_2 \le \big\|\widetilde W_0\, e^{\beta\|\cdot\|^2}\big\|_2 + \big\|\widetilde V_h\, e^{\beta\|\cdot\|^2}\big\|_2.$$

The first term in the above sum has been bounded in Lemma 3 of [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF]:

$$\big\|\widetilde W_0\, e^{\beta\|\cdot\|^2}\big\|_2^2 \le \pi^2 L. \qquad (52)$$

For the second term, the change of variables w = (t cos φ, t sin φ) and the bound |g| ≤ 1 give, using (43), (44) and (46),

$$\big\|\widetilde V_h\, e^{\beta\|\cdot\|^2}\big\|_2^2 \le \big(aC_0h^{-1}e^{\beta h^{-2}}\big)^2\int e^{-2\beta\|w\|^2}\, g_1^2(\|w\|^2-h^{-2})\, dw \le \pi a^2C_0^2\, h^{-2}e^{2\beta h^{-2}}\int_{a_1}^{b_1} 2t\, e^{-2\beta t^2}\, dt \le \pi a^2C_0^2\, h^{-2}\, e^{-2\beta\delta}\,(b_1^2-a_1^2) = \pi a^2C_0^2\, h^{-2}\,\delta\, e^{-2\beta\delta} \le \pi^2 L, \qquad (53)$$
for a small enough. It follows from ( 52) and ( 53) that W 1 ∈ A(β, L).
B.1.4. Condition (C2)
By applying Plancherel's theorem and the change of variables w = (t cos φ, t sin φ), we have

$$\|W_1-W_0\|_2^2 = \|V_h\|_2^2 = \frac{1}{4\pi^2}\big\|\widetilde V_h\big\|_2^2 = \frac{1}{4\pi^2}\int_0^\pi\!\!\int |t|\,\big|\widetilde V_h(t\cos\phi,t\sin\phi)\big|^2\, dt\, d\phi = \frac{a^2C_0^2}{4\pi^2}\, h^{-2}e^{2\beta h^{-2}}\int_0^\pi\!\!\int |t|\, e^{-4\beta t^2}\, g^2(t\sin\phi)\, g_1^2(t^2-h^{-2})\, dt\, d\phi. \qquad (54)$$

Note that for a fixed μ ∈ (0, π/4), there exists a numerical constant c > 0 such that sin(φ) > c on (μ, π − μ). From now on, we denote by Ā₁ the set

$$\bar A_1 := \big\{ w\in\mathbb R^2 : \bar a_1 \le \|w\|_2 \le \bar b_1 \big\}, \qquad (55)$$

where ā₁ and b̄₁ are defined in (45). By definition of g and for a large enough n, we have g²(t sin(φ)) = 1 for any (t, φ) ∈ Ā₁ × (μ, π − μ), with t = ‖w‖. Therefore, (54) can be lower bounded as follows:

$$\|W_1-W_0\|_2^2 \ge \frac{a^2C_0^2}{4\pi^2}\, h^{-2}e^{2\beta h^{-2}}\int_\mu^{\pi-\mu}\!\!\int_{\bar a_1}^{\bar b_1} |t|\, e^{-4\beta t^2}\, g_1^2(t^2-h^{-2})\, dt\, d\phi = \frac{\pi-2\mu}{4\pi^2}\, a^2C_0^2\, h^{-2}e^{2\beta h^{-2}}\int_{\bar a_1}^{\bar b_1} |t|\, e^{-4\beta t^2}\, g_1^2(t^2-h^{-2})\, dt. \qquad (56)$$

On [ā₁, b̄₁], by construction of the function g₁, we have g₁²(t² − h^{−2}) = 1. Hence it follows that

$$I := \int_{\bar a_1}^{\bar b_1} |t|\, e^{-4\beta t^2}\, g_1^2(t^2-h^{-2})\, dt \ge e^{-4\beta\bar b_1^2}\int_{\bar a_1}^{\bar b_1} t\, dt = \frac12\, e^{-4\beta\bar b_1^2}\,(\bar b_1^2-\bar a_1^2) = \frac{\delta}{6}\, e^{-4\beta\bar b_1^2}. \qquad (57)$$

Combining (56) and (57), we get, since C₀²h^{−2}δ = πL/2, that

$$\|W_1-W_0\|_2^2 \ge \frac{\pi-2\mu}{24\pi^2}\, a^2C_0^2\, h^{-2}\delta\, e^{2\beta h^{-2}}e^{-4\beta\bar b_1^2} = \frac{\pi-2\mu}{48\pi}\, a^2L\, e^{2\beta h^{-2}-4\beta\bar b_1^2} = \frac{\pi-2\mu}{48\pi}\, a^2L\, e^{-2\beta h^{-2}}\, e^{-\frac{20}3\beta\delta}.$$

It follows from (48) that

$$\|W_1-W_0\|_2^2 \ge \frac{\pi-2\mu}{48\pi}\, a^2L\, n^{-\frac{\beta}{\beta+\gamma}}\, e^{-\frac{20}3\beta} \ge 4c\, n^{-\frac{\beta}{\beta+\gamma}} =: 4\varphi_n^2,$$
where c > 0 is a numerical constant possibly depending only on β.
B.1.5. Condition (C3)
Denote by C > 0 a constant whose value may change from line to line, and recall that N_γ is the density of the Gaussian distribution with zero mean and variance 2γ. Note that p₀ and N_γ do not depend on φ; consequently, in the framework of noisy data defined in (16), p^γ₀(z, φ) = p^γ₀(z)·(1/π)·1_{(0,π)}(φ).

Lemma 4. There exist numerical constants c > 0 and c' > 0 such that

$$p_0^\gamma(z) \ge c\, z^{-2}, \quad \forall\, |z| \ge 1+\sqrt{2\gamma}, \qquad (58)$$

and

$$p_0^\gamma(z) \ge c', \quad \forall\, |z| \le 1+\sqrt{2\gamma}. \qquad (59)$$

The proof of this lemma is given in Appendix D.3. Using Lemma 4, the χ²-divergence can be upper bounded as follows:

$$n\chi^2(p^\gamma_1, p^\gamma_0) = n\int_0^\pi\!\!\int \frac{(p^\gamma_1(z,\phi)-p^\gamma_0(z,\phi))^2}{p^\gamma_0(z,\phi)}\, dz\, d\phi \le \frac{n}{c'}\int_0^\pi\!\int_{-(1+\sqrt{2\gamma})}^{1+\sqrt{2\gamma}} (p^\gamma_1-p^\gamma_0)^2(z,\phi)\, dz\, d\phi + \frac nc\int_0^\pi\!\int_{\mathbb R\setminus(-(1+\sqrt{2\gamma}),1+\sqrt{2\gamma})} z^2\,(p^\gamma_1-p^\gamma_0)^2(z,\phi)\, dz\, d\phi =: \frac{n}{c'}\, I_1 + \frac nc\, I_2. \qquad (60)$$

Note that, as in (18), the Fourier transforms of p^γ₁ and p^γ₀ with respect to the first variable are respectively equal to

$$\mathcal F_1[p_1^\gamma(\cdot,\phi)](t) = \widetilde W_1(t\cos\phi,t\sin\phi)\,\widehat N_\gamma(t) = \big(\widetilde V_h(t\cos\phi,t\sin\phi) + \widetilde W_0(t\cos\phi,t\sin\phi)\big)\, e^{-\gamma t^2}, \qquad (61)$$

$$\mathcal F_1[p_0^\gamma(\cdot,\phi)](t) = \widetilde W_0(t\cos\phi,t\sin\phi)\, e^{-\gamma t^2}, \qquad (62)$$

since N̂_γ(t) = e^{−γt²}. Using Plancherel's theorem together with equations (47), (61) and (62), the first integral I₁ in the sum (60) is bounded by

$$I_1 \le \int_0^\pi\!\!\int (p^\gamma_1-p^\gamma_0)^2(z,\phi)\, dz\, d\phi = \frac{1}{2\pi}\int_0^\pi\!\!\int \big|\mathcal F_1[p^\gamma_1(\cdot,\phi)](t) - \mathcal F_1[p^\gamma_0(\cdot,\phi)](t)\big|^2\, dt\, d\phi = \frac{1}{2\pi}\int_0^\pi\!\!\int \big|\widetilde V_h(t\cos\phi,t\sin\phi)\big|^2\, e^{-2\gamma t^2}\, dt\, d\phi = \frac{a^2C_0^2}{2\pi}\, h^{-2}e^{2\beta h^{-2}}\int_0^\pi\!\!\int e^{-(4\beta+2\gamma)t^2}\, g_1^2(t^2-h^{-2})\, g^2(t\sin\phi)\, dt\, d\phi.$$

By construction, the function g is bounded by 1 and the function g₁ has support Supp(g₁) = (δ, 2δ). Thus,

$$I_1 \le \frac{a^2C_0^2}{2}\, h^{-2}e^{2\beta h^{-2}}\int_{a_1}^{b_1} e^{-(4\beta+2\gamma)t^2}\, dt \le \frac{a^2C_0^2}{2}\,(b_1-a_1)\, h^{-2}\, e^{2\beta h^{-2}-(4\beta+2\gamma)a_1^2} \le \frac{a^2C_0^2}{2}\,\frac{b_1^2-a_1^2}{2a_1}\, h^{-2}\, e^{2\beta h^{-2}-(4\beta+2\gamma)a_1^2}.$$

Some basic algebra, together with (43), (44), (46) and (48), yields

$$\frac{n}{c'}\, I_1 \le \frac{a^2\, C}{\sqrt{\log n}}, \qquad (63)$$
for some constant C > 0 which may depend on β, γ, L and c'. For the second term I₂ in the sum (60), with the same tools, and using in addition the spectral representation of the differentiation operator (multiplication by it in the Fourier domain), we obtain

$$I_2 \le \int_0^\pi\!\!\int z^2\,(p^\gamma_1-p^\gamma_0)^2(z,\phi)\, dz\, d\phi = \int_0^\pi\!\!\int\Big|\frac{\partial}{\partial t}\big(\mathcal F_1[p^\gamma_1(\cdot,\phi)]-\mathcal F_1[p^\gamma_0(\cdot,\phi)]\big)(t)\Big|^2\, dt\, d\phi = \int_0^\pi\!\!\int\Big|\frac{\partial}{\partial t}\Big(\widetilde V_h(t\cos\phi,t\sin\phi)\, e^{-\gamma t^2}\Big)\Big|^2\, dt\, d\phi \le 2\int_0^\pi\!\!\int e^{-2\gamma t^2}|I_{2,1}|^2\, dt\, d\phi + 16\gamma^2\int_0^\pi\!\!\int t^2 e^{-2\gamma t^2}|I_{2,2}|^2\, dt\, d\phi, \qquad (64)$$

where I_{2,2} = Ṽ_h(t cos φ, t sin φ) and I_{2,1}, the partial derivative (∂/∂t)Ṽ_h(t cos φ, t sin φ), equals

$$iaC_0h^{-1}e^{\beta h^{-2}-2\beta t^2}\Big[ g_1(t^2-h^{-2})\big(-4\beta t\, g(t\sin\phi) + g'(t\sin\phi)\sin\phi\big) + 2t\, g_1'(t^2-h^{-2})\, g(t\sin\phi) \Big].$$

Since g₁ and g belong to the Schwartz class, there exists a numerical constant c_S > 0 such that max{‖g₁‖_∞, ‖g₁'‖_∞, ‖g‖_∞, ‖g'‖_∞} ≤ c_S. Furthermore, the support of g₁ is Supp(g₁) = (δ, 2δ); hence

$$|I_{2,1}|^2 \le a^2c_S^4C_0^2\, h^{-2}\, e^{2\beta h^{-2}-4\beta t^2}\big((4\beta+2)|t|+1\big)^2\,\mathbf 1_{(a_1,b_1)}(t), \qquad (65)$$

with a₁ and b₁ defined in (44). Proceeding similarly, we have

$$|I_{2,2}|^2 = \big|aC_0h^{-1}e^{\beta h^{-2}}e^{-2\beta t^2}g_1(t^2-h^{-2})g(t\sin\phi)\big|^2 \le a^2c_S^4C_0^2\, h^{-2}\, e^{2\beta h^{-2}-4\beta t^2}\,\mathbf 1_{(a_1,b_1)}(t). \qquad (66)$$

Combining (65) and (66) with (64), and since 0 ≤ δ ≤ 1,

$$I_2 \le 2a^2c_S^4C_0^2\, h^{-2}e^{2\beta h^{-2}}\int_0^\pi\!\int_{a_1}^{b_1} e^{-2\gamma t^2}e^{-4\beta t^2}\Big(\big((4\beta+2)|t|+1\big)^2 + 8\gamma^2t^2\Big)\, dt\, d\phi \le 2\pi a^2c_S^4C_0^2\, h^{-2}e^{2\beta h^{-2}}\int_{a_1}^{b_1} e^{-2(2\beta+\gamma)t^2}\Big(\big((4\beta+2)t+1\big)^2 + 8\gamma^2t^2\Big)\, dt.$$

Bounding the integrand by its supremum over [a₁, b₁] gives

$$I_2 \le 2\pi a^2c_S^4C_0^2\, h^{-2}e^{2\beta h^{-2}}\, e^{-(4\beta+2\gamma)a_1^2}\Big(\big((4\beta+2)b_1+1\big)^2 + 8\gamma^2b_1^2\Big)(b_1-a_1) \le 2\pi a^2c_S^4C_0^2\, c\, h^{-2}\delta\, e^{-2(\beta+\gamma)h^{-2}}\, e^{-2(2\beta+\gamma)\delta},$$

where c > 0 depends only on γ, β. Some basic algebra, together with (43), (44), (46) and (48), yields

$$\frac nc\, I_2 \le a^2\, C, \qquad (67)$$

for some constant C > 0 possibly depending on β, γ, L, c_S and c. Combining (67) and (63) with (60), we get for n large enough

$$n\chi^2(p^\gamma_1, p^\gamma_0) \le a^2\, C,$$

where C > 0 is a constant which may depend on β, γ, L, c_S, c and c'. Taking the numerical constant a > 0 small enough, we deduce from the previous display that

$$n\chi^2(p^\gamma_1, p^\gamma_0) \le \frac14.$$
B.2. Proof of Theorem 2 -Lower bounds for the sup-norm
To prove the lower bound for the sup-norm, we need to slightly modify the construction of the Wigner function W₁ defined in (50). In the new construction, the Wigner function W₀, associated with the density p₀ defined in (41), stays unchanged compared to the L2 case. However, the function V_h given in (47) is modified as follows: we replace the functions g₁ and g by g_{1,ε} and g_ε, respectively, for some 0 < ε < 1.

We introduce an infinitely differentiable function g_{1,ε} such that:

• g_{1,ε} : R → [0, 1];
• the support of g_{1,ε} is Supp(g_{1,ε}) = (δ, 2δ);
• using a construction similar to that of g₁, we can also assume that

$$g_{1,\varepsilon}(t) = 1, \quad \forall t \in A_{1,\varepsilon} := [(1+\varepsilon)\delta, (2-\varepsilon)\delta], \qquad (68)$$

and

$$\|g_{1,\varepsilon}'\|_\infty \le \frac{c}{\varepsilon\delta}, \qquad (69)$$

for some numerical constant c > 0;
• an odd function g_ε : R → [−1, 1] satisfies the same conditions as g above, and in addition

$$\|g_\varepsilon'\|_\infty \le \frac{c}{\varepsilon}, \qquad (70)$$

for some numerical constant c > 0.

Condition (70) will be needed to check Condition (C3). Such a function can be easily constructed: consider for instance a function g_ε whose derivative satisfies

$$g_\varepsilon'(t) = \big(\psi_\varepsilon * \mathbf 1_{(0,\varepsilon)}\big)(t)$$

for any t ∈ (0, ε), where ψ_ε is a mollifier. Integrate this function and renormalize it properly so that g_ε(t) = 1 for any t ≥ ε. Complete the function by symmetry to obtain an odd function defined on the whole real line. Such a construction satisfies condition (70).
It is easy to see that Condition (C1) is always satisfied by the new test functions W₀ and W_{1,ε}. We now check Condition (C2). Set C_h = iaC₀h^{−1}e^{βh^{−2}}. Then we have

$$W_{1,\varepsilon}(z) - W_0(z) = \frac{1}{4\pi^2}\int e^{-i\langle z,w\rangle}\big(\widetilde W_{1,\varepsilon}(w)-\widetilde W_0(w)\big)\, dw = \frac{1}{4\pi^2}\int_0^\pi\!\!\int e^{-it[z,\phi]}\, |t|\, C_h\, e^{-2\beta t^2}\, g_{1,\varepsilon}(t^2-h^{-2})\, g_\varepsilon(t)\, dt\, d\phi.$$

Note that A₁ = lim_{ε→0} A_{1,ε}, where A_{1,ε} is defined in (68). For all z ∈ R², we define the quantity

$$I(z) := \int_0^\pi\!\!\int e^{-it[z,\phi]}\, |t|\, C_h\, e^{-2\beta t^2}\,\mathbf 1_{A_1}(t^2-h^{-2})\,\big(\mathbf 1_{(0,\infty)}(t)-\mathbf 1_{(-\infty,0)}(t)\big)\, dt\, d\phi.$$

Lebesgue's dominated convergence theorem guarantees that

$$\lim_{\varepsilon\to0}\int_0^\pi\!\!\int e^{-it[z,\phi]}\, |t|\, C_h\, e^{-2\beta t^2}\, g_{1,\varepsilon}(t^2-h^{-2})\, g_\varepsilon(t)\, dt\, d\phi = I(z).$$
Therefore, there exists an ε > 0 (possibly depending on n, z) such that

$$|W_{1,\varepsilon}(z) - W_0(z)| \ge \frac{1}{8\pi^2}\, |I(z)|.$$

Taking z = (0, 2h), Fubini's theorem gives

$$I(z) = \int \Big(\int_0^\pi e^{-2iht\sin\phi}\, d\phi\Big)\, |t|\, C_h\, e^{-2\beta t^2}\,\mathbf 1_{A_1}(t^2-h^{-2})\,\big(\mathbf 1_{(0,\infty)}(t)-\mathbf 1_{(-\infty,0)}(t)\big)\, dt.$$

Note that

$$\int_0^\pi e^{-2iht\sin\phi}\, d\phi = \pi\big(J_0(2ht) - i\,\mathbf H_0(2ht)\big),$$

where H₀ and J₀ denote respectively the Struve and Bessel functions of order 0. By definition, H₀ is an odd function while J₀ and t ↦ |t|C_h e^{−2βt²}1_{A₁}(t² − h^{−2}) are even functions. Consequently, the J₀ part integrates to zero and we get

$$I(z) = -2\pi i\, C_h\int_0^\infty t\,\mathbf H_0(2ht)\, e^{-2\beta t^2}\,\mathbf 1_{A_1}(t^2-h^{-2})\, dt = -2\pi i\, C_h\int_{a_1}^{b_1} t\,\mathbf H_0(2ht)\, e^{-2\beta t^2}\, dt,$$

with a₁ and b₁ defined in (44). Note that for all t ∈ [a₁, b₁] and for a large enough n, it follows that 2ht ∈ [2, 3]. Therefore, on [a₁, b₁], the function t ↦ H₀(2ht) is decreasing and (see [START_REF] Erdélyi | Higher transcendental functions[END_REF])

$$\min_{t\in[a_1,b_1]}\mathbf H_0(2ht) > \frac12.$$

We easily deduce from the previous observations that

$$|I(z)| \ge \pi\, |C_h|\int_{a_1}^{b_1} t\, e^{-2\beta t^2}\, dt \ge \frac{\pi\,|C_h|}{4\beta}\big(e^{-2\beta a_1^2}-e^{-2\beta b_1^2}\big).$$

Therefore, some simple algebra gives, for n large enough, that

$$|I(z)| \ge c\, 2\beta a_1\delta\,(1-\beta a_1\delta)\,\delta\, |C_h|\, n^{-\frac{\beta}{\beta+\gamma}} \ge a\, c'\, n^{-\frac{\beta}{2(\beta+\gamma)}},$$

for some numerical constants c, c' > 0 depending only on β. Taking the numerical constant a > 0 small enough, independently of n, β, γ, we get that Condition (C2) is satisfied with φ_n = c n^{−β/(2(β+γ))}. Concerning Condition (C3), we proceed similarly as above for the L2-norm risk. The only modification appears in (65)-(66), where we now use (68)-(69) combined with the fact that |Supp(g_ε')| ≤ 2ε and |Supp(g_{1,ε}')| ≤ 2εδ, by construction of these functions. Therefore, the details are omitted here.
Appendix C: Proof of Theorem 3 -Adaptation
The following Lemma is needed to prove Theorem 3.
Lemma 5. For a constant κ > 0, let E_κ be the event

$$\mathcal E_\kappa = \bigcap_{m=1}^M\Big\{ \big\|\hat W^\gamma_{h_m}-\mathbb E[\hat W^\gamma_{h_m}]\big\|_\infty \le \kappa\, e^{\gamma h_m^{-2}}\, r_n(x+\log M) \Big\}. \qquad (71)$$

Then, on the event E_κ,

$$\big\|\hat W^\gamma_{h_{\hat m}} - W_\rho\big\|_\infty \le C\min_{1\le m\le M}\Big\{ h_m^{r/2-1}\, e^{-\beta h_m^{-r}} + e^{\gamma h_m^{-2}}\, r_n(x+\log M) \Big\},$$

where C > 0 is a constant depending only on γ, β, L, r, κ, and Ŵ^γ_{h_m̂} is the adaptive estimator with the bandwidth h_m̂ defined in (30).
The proof of the previous lemma is given in Appendix D.2. For any fixed m ∈ {1, …, M}, we have in view of Proposition 2 that

$$\mathbb P\Big(\big\|\hat W^\gamma_{h_m}-\mathbb E[\hat W^\gamma_{h_m}]\big\|_\infty \le C_2\, e^{\gamma h_m^{-2}}\, r_n(x)\Big) \ge 1-e^{-x},$$

where r_n(x) = max{√((log n + x)/n), (log n + x)/n}. By a simple union bound, we get

$$\mathbb P\Big(\bigcap_{1\le m\le M}\big\{\|\hat W^\gamma_{h_m}-\mathbb E[\hat W^\gamma_{h_m}]\|_\infty \le C_2\, e^{\gamma h_m^{-2}}\, r_n(x)\big\}\Big) \ge 1-Me^{-x}.$$

Replacing x by x + log M implies

$$\mathbb P\Big(\bigcap_{1\le m\le M}\big\{\|\hat W^\gamma_{h_m}-\mathbb E[\hat W^\gamma_{h_m}]\|_\infty \le C_2\, e^{\gamma h_m^{-2}}\, r_n(x+\log M)\big\}\Big) \ge 1-e^{-x}.$$

For κ > C₂, we immediately get that P(E_κ) ≥ 1 − e^{−x}, and the result in probability (31) follows by Lemma 5. To prove the result in expectation (32), we use the identity E[Z] = ∫₀^∞ P(Z ≥ t) dt, valid for any non-negative random variable Z. We have indeed, for any 1 ≤ m ≤ M,

$$\mathbb P\Big(\|\hat W^\gamma_{h_{\hat m}}-W_\rho\|_\infty \ge C\big(h_m^{r/2-1}e^{-\beta h_m^{-r}} + e^{\gamma h_m^{-2}}\, r_n(x+\log M)\big)\Big) \le e^{-x}, \quad \forall x>0.$$

Note that, using √(u+v) ≤ √u + √v,

$$r_n(x+\log M) \le r_n(\log M) + \max\Big\{\sqrt{\tfrac xn},\ \tfrac xn\Big\}.$$

Combining the two previous displays, we get, for all x > 0,

$$\mathbb P\Big(\|\hat W^\gamma_{h_{\hat m}}-W_\rho\|_\infty \ge C\Big(h_m^{r/2-1}e^{-\beta h_m^{-r}} + e^{\gamma h_m^{-2}}\big[r_n(\log M)+\max\{\sqrt{x/n},\, x/n\}\big]\Big)\Big) \le e^{-x}.$$

Set Y = ‖Ŵ^γ_{h_m̂} − W_ρ‖_∞/C, a = h_m^{r/2−1}e^{−βh_m^{−r}} + e^{γh_m^{−2}} r_n(log M) and b = e^{γh_m^{−2}}. We have

$$\mathbb E[Y] \le a + \mathbb E[(Y-a)_+] = a + \int_0^\infty \mathbb P(Y-a\ge u)\, du = a + b\int_0^\infty \mathbb P(Y-a\ge bt)\, dt.$$

Set now t = max{√(x/n), x/n}: if 0 < t < 1, then t = √(x/n), while if t ≥ 1, then t = x/n. Thus, by the change of variables t = √(x/n), we get

$$\int_0^1 \mathbb P(Y-a\ge bt)\, dt = \int_0^n \mathbb P\Big(Y-a\ge b\sqrt{\tfrac xn}\Big)\,\frac{dx}{2\sqrt{xn}} \le \frac{1}{2\sqrt n}\int_0^n \frac{e^{-x}}{\sqrt x}\, dx \le \frac{c}{\sqrt n},$$

where c > 0 is a numerical constant. Similarly, by the change of variables t = x/n,

$$\int_1^\infty \mathbb P(Y-a\ge bt)\, dt = \int_n^\infty \mathbb P\Big(Y-a\ge b\,\tfrac xn\Big)\,\frac{dx}{n} \le \frac1n\int_n^\infty e^{-x}\, dx \le \frac{c}{n},$$

where c > 0 is a numerical constant. Combining the last three displays, we obtain the result in expectation.
Appendix D: Proof of Auxiliary Lemmas
D.1. Proof of Lemma 2
To prove the uniform bound in (36), note that

$$\delta_h = h^{-1}e^{\gamma h^{-2}} = \max_{|t|\le h^{-1}} |t|\, e^{\gamma t^2}.$$

Then, by definition of K^γ_h and using the inverse Fourier transform formula, we have

$$\delta_h^{-1}\|K^\gamma_h\|_\infty = \frac{1}{2\pi}\,\delta_h^{-1}\sup_{x\in\mathbb R}\Big|\int e^{-itx}\,\widehat{K^\gamma_h}(t)\, dt\Big| \le \frac{1}{2\pi}\,\delta_h^{-1}\int_{-h^{-1}}^{h^{-1}} |t|\, e^{\gamma t^2}\, dt = \frac1\pi\,\delta_h^{-1}\int_0^{h^{-1}} t\, e^{\gamma t^2}\, dt = \frac{1}{2\gamma\pi}\,\delta_h^{-1}\big(e^{\gamma h^{-2}}-1\big) \le \frac{h}{2\gamma\pi} =: U. \qquad (72)$$
For the entropy bound (37), we need to prove that K^γ_h ∈ V₂(R), where V₂(R) is the set of functions of finite quadratic variation (see Theorem 5 of Bourdaud, Lanza de Cristoforis and Sickel (2006)). To do this, it is enough to verify that K^γ_h ∈ B^{1/2}_{2,1}(R); the result is then a consequence of the embedding B^{1/2}_{2,1}(R) ⊂ V₂(R).

Let us define the Littlewood-Paley characterization of the seminorm ‖·‖_{1/2,2,1} as follows:

$$\|g\|_{1/2,2,1} := \sum_{\ell\in\mathbb Z} 2^{\ell/2}\,\big\|\mathcal F_1^{-1}[\alpha_\ell\,\mathcal F_1[g]]\big\|_2,$$

where (α_ℓ) is a dyadic partition of unity with α_ℓ symmetric with respect to 0, supported in [−2^{ℓ+1}, −2^{ℓ−1}] ∪ [2^{ℓ−1}, 2^{ℓ+1}] and 0 ≤ α_ℓ ≤ 1 (see e.g. Theorem 6.3.1 and Lemma 6.1.7 in the book of [START_REF] Bergh | Interpolation spaces. An introduction[END_REF]). Then K^γ_h ∈ B^{1/2}_{2,1}(R) if and only if ‖K^γ_h‖_{1/2,2,1} is bounded by a fixed constant. By isometry of the Fourier transform combined with the definitions of α_ℓ and K^γ_h, we get

$$\big\|\mathcal F_1^{-1}[\alpha_\ell\,\mathcal F_1[K^\gamma_h]]\big\|_2^2 = \big\|\alpha_\ell\,\widehat{K^\gamma_h}\big\|_2^2 = \int_{[0,h^{-1}]\cap[2^{\ell-1},2^{\ell+1}]} \alpha_\ell(t)^2\, 2|t|^2\, e^{2\gamma t^2}\, dt \le \int_{[0,h^{-1}]\cap[2^{\ell-1},2^{\ell+1}]} 2t^2\, e^{2\gamma t^2}\, dt.$$

A primitive of t ↦ 2t²e^{2γt²} is (1/(2γ)) t e^{2γt²} − (1/(2γ)) ∫₀^t e^{2γu²} du. Thus we get

$$\big\|\mathcal F_1^{-1}[\alpha_\ell\,\mathcal F_1[K^\gamma_h]]\big\|_2 \le \frac{1}{\sqrt\gamma}\, h^{-1/2}\, e^{\gamma h^{-2}}, \quad \forall\,\ell\in\mathbb Z, \qquad\text{and}\qquad \|K^\gamma_h\|_{1/2,2,1} \le \frac{1}{\sqrt\gamma}\, h^{-1/2}\, e^{\gamma h^{-2}}\sum_{\ell=-\infty}^{L_h} 2^{\ell/2},$$

where L_h = ⌈log₂(h^{−1})⌉ + 1. A simple computation gives that

$$\sum_{\ell=-\infty}^{L_h} 2^{\ell/2} \le \frac{\sqrt2}{\sqrt2-1} + \frac{2^{(L_h+1)/2}-1}{\sqrt2-1} \le \frac{\sqrt2}{\sqrt2-1} + \frac{2}{\sqrt2-1}\, h^{-1/2}.$$

Combining the last two displays, and since h^{−1} ≥ 1, we get

$$\|K^\gamma_h\|_{1/2,2,1} \le \frac{c}{\sqrt\gamma}\, h^{-1}\, e^{\gamma h^{-2}},$$

where c > 0 is a numerical constant. This shows that δ_h^{−1}‖K^γ_h‖_{1/2,2,1} is bounded by a fixed constant depending only on γ. Therefore K^γ_h ∈ V₂(R), and the entropy bound (37) is obtained by applying Lemma 1 of [START_REF] Giné | Uniform limit Theorems for wavelet density estimators[END_REF].
D.2. Proof of Lemma 5
We recall that the bandwidth h_m̂ with m̂ is defined in (30). Let r_n(x) = max{√((log n + x)/n), (log n + x)/n} and define

$$m^* := \operatorname*{argmin}_{1\le m\le M}\Big\{ h_m^{r/2-1}e^{-\beta h_m^{-r}} + e^{\gamma h_m^{-2}}\, r_n(x+\log M) \Big\}, \qquad (73)$$

and

$$B(m) = \max_{j: j>m}\Big\{ \big\|\hat W^\gamma_{h_m}-\hat W^\gamma_{h_j}\big\|_\infty - 2\kappa\, e^{\gamma h_j^{-2}}\, r_n(x+\log M) \Big\}.$$

On the one hand, we have

$$\|\hat W^\gamma_{h_{\hat m}}-\hat W^\gamma_{h_{m^*}}\|_\infty\,\mathbf 1_{\hat m>m^*} = \Big(\|\hat W^\gamma_{h_{\hat m}}-\hat W^\gamma_{h_{m^*}}\|_\infty - 2\kappa e^{\gamma h_{\hat m}^{-2}} r_n(x+\log M)\Big)\mathbf 1_{\hat m>m^*} + 2\kappa e^{\gamma h_{\hat m}^{-2}} r_n(x+\log M)\,\mathbf 1_{\hat m>m^*} \le \Big(B(m^*) + 2\kappa e^{\gamma h_{\hat m}^{-2}} r_n(x+\log M)\Big)\mathbf 1_{\hat m>m^*}.$$

On the other hand, similarly, we have

$$\|\hat W^\gamma_{h_{\hat m}}-\hat W^\gamma_{h_{m^*}}\|_\infty\,\mathbf 1_{\hat m\le m^*} \le \Big(B(\hat m) + 2\kappa e^{\gamma h_{m^*}^{-2}} r_n(x+\log M)\Big)\mathbf 1_{\hat m\le m^*}.$$

Combining the last two displays, and by definition of L_κ(·) in (29), we get

$$\|\hat W^\gamma_{h_{\hat m}}-\hat W^\gamma_{h_{m^*}}\|_\infty \le B(m^*) + B(\hat m) + 2\kappa\, r_n(x+\log M)\big(e^{\gamma h_{\hat m}^{-2}} + e^{\gamma h_{m^*}^{-2}}\big) = L_\kappa(m^*) + L_\kappa(\hat m) \le 2L_\kappa(m^*), \qquad (74)$$
where the last inequality follows from the definition of m̂ in (30). By the definition of B(·), it follows that

$$L_\kappa(m^*) = B(m^*) + 2\kappa e^{\gamma h_{m^*}^{-2}} r_n(x+\log M) \le \max_{j>m^*}\Big\{ \|\hat W^\gamma_{h_{m^*}}-\mathbb E[\hat W^\gamma_{h_{m^*}}]\|_\infty + \|\mathbb E[\hat W^\gamma_{h_{m^*}}]-W_\rho\|_\infty + \|W_\rho-\mathbb E[\hat W^\gamma_{h_j}]\|_\infty + \|\mathbb E[\hat W^\gamma_{h_j}]-\hat W^\gamma_{h_j}\|_\infty - 2\kappa e^{\gamma h_j^{-2}} r_n(x+\log M)\Big\} + 2\kappa e^{\gamma h_{m^*}^{-2}} r_n(x+\log M).$$

On the event E_κ, it follows that

$$L_\kappa(m^*) \le \max_{j>m^*}\Big\{ \|\hat W^\gamma_{h_{m^*}}-\mathbb E[\hat W^\gamma_{h_{m^*}}]\|_\infty + \|\mathbb E[\hat W^\gamma_{h_{m^*}}]-W_\rho\|_\infty + \|W_\rho-\mathbb E[\hat W^\gamma_{h_j}]\|_\infty - \kappa e^{\gamma h_j^{-2}} r_n(x+\log M)\Big\} + 2\kappa e^{\gamma h_{m^*}^{-2}} r_n(x+\log M).$$

As h_{m*} > h_j for all j > m*, we have −e^{γh_j^{−2}} < −e^{γh_{m*}^{−2}}. Therefore, on the event E_κ, we get

$$L_\kappa(m^*) \le \|\mathbb E[\hat W^\gamma_{h_{m^*}}]-W_\rho\|_\infty + \max_{j>m^*}\|\mathbb E[\hat W^\gamma_{h_j}]-W_\rho\|_\infty + 2\kappa\, e^{\gamma h_{m^*}^{-2}}\, r_n(x+\log M). \qquad (75)$$

From (74), and on the event E_κ, we have

$$\|\hat W^\gamma_{h_{\hat m}}-W_\rho\|_\infty \le \|\hat W^\gamma_{h_{\hat m}}-\hat W^\gamma_{h_{m^*}}\|_\infty + \|\hat W^\gamma_{h_{m^*}}-W_\rho\|_\infty \le \|\hat W^\gamma_{h_{m^*}}-\mathbb E[\hat W^\gamma_{h_{m^*}}]\|_\infty + \|\mathbb E[\hat W^\gamma_{h_{m^*}}]-W_\rho\|_\infty + 2L_\kappa(m^*) \le \kappa e^{\gamma h_{m^*}^{-2}} r_n(x+\log M) + \|\mathbb E[\hat W^\gamma_{h_{m^*}}]-W_\rho\|_\infty + 2L_\kappa(m^*).$$

Combining the last inequality with (75),

$$\|\hat W^\gamma_{h_{\hat m}}-W_\rho\|_\infty \le 5\kappa\, e^{\gamma h_{m^*}^{-2}}\, r_n(x+\log M) + 3\,\|\mathbb E[\hat W^\gamma_{h_{m^*}}]-W_\rho\|_\infty + 2\max_{j>m^*}\|\mathbb E[\hat W^\gamma_{h_j}]-W_\rho\|_\infty.$$

From Proposition 1, the bias is bounded by t ↦ t^{r/2−1}e^{−βt^{−r}}, an increasing function for sufficiently small t > 0; since h_{m*} > h_j for all j > m*, we can write

$$\|\hat W^\gamma_{h_{\hat m}}-W_\rho\|_\infty \le C\Big( \kappa\, e^{\gamma h_{m^*}^{-2}}\, r_n(x+\log M) + h_{m^*}^{r/2-1}\, e^{-\beta h_{m^*}^{-r}} \Big).$$

The result follows from (73), the definition of m*.
D.3. Proof of Lemma 4
In view of Fatou's lemma, we have

$$\liminf_{|z|\to\infty} z^2 p_0^\gamma(z) \ge \liminf_{|z|\to\infty}\int z^2\, p_0(z-x)\, N_\gamma(x)\, dx \ge \int_{-\sqrt{2\gamma}}^{\sqrt{2\gamma}}\liminf_{|z|\to\infty} z^2\, p_0(z-x)\, N_\gamma(x)\, dx.$$

Recall that γ = (1−η)/(4η) ≤ 1/4; then for |z| ≥ √(2γ) + 1 and any x ∈ (−√(2γ), √(2γ)), it follows by Lemma 3 that p₀(z − x) ≥ c(z − x)^{−2}. Thus,

$$\liminf_{|z|\to\infty} z^2 p_0^\gamma(z) \ge c\int_{-\sqrt{2\gamma}}^{\sqrt{2\gamma}} N_\gamma(x)\, dx = c\int_{-1}^1 \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx \ge c > 0,$$

where c > 0 is a numerical constant; this yields (58). Choose now a numerical constant c'' ≥ 0 such that ∫_{−c''}^{c''} p₀(x) dx ≥ 1/2. Then, for any |z| ≤ 1 + √(2γ) and some numerical constant c' > 0, we get

$$p_0^\gamma(z) \ge \int_{-c''}^{c''} p_0(x)\, N_\gamma(z-x)\, dx \ge \min_{|y|\le c''+1+\sqrt{2\gamma}}\{N_\gamma(y)\}\int_{-c''}^{c''} p_0(x)\, dx \ge \frac12\min_{|y|\le c''+1+\sqrt{2\gamma}}\{N_\gamma(y)\} \ge c' > 0,$$

which yields (59).
D.4. Lemma 6
Lemma 6. The density matrix ρ (1) defined in (51) satisfies the following conditions :
(i) Self adjoint: ρ (1) = (ρ (1) ) * .
(ii) Positive semi-definite: ρ (1) ≥ 0.
(iii) Trace one: Tr(ρ (1) ) = 1.
Proof:
• Note first that V_h is not a Wigner function; however, it belongs to the linear span of Wigner functions. Consequently, it admits the following representation:

$$\frac1\pi\, R[V_h](x,\phi)\,\mathbf 1_{(0,\pi]}(\phi) = \sum_{j,k=0}^\infty \tau_{j,k}\,\psi_j(x)\psi_k(x)\, e^{-i(j-k)\phi}, \qquad\text{where}\qquad \tau_{j,k} = \int_0^\pi\!\!\int \frac1\pi\, R[V_h](x,\phi)\, f_{j,k}(x)\, e^{-i(j-k)\phi}\, dx\, d\phi. \qquad (76)$$

For the sake of brevity, we set from now on ρ = ρ^{(1)}. Note that the matrix ρ satisfies ρ_{j,k} = ρ^{(0)}_{j,k} + τ_{j,k}. Exploiting the above representation of τ, it is easy to see that τ_{j,k} = τ̄_{k,j} for any j, k ≥ 0. On the other hand, ρ^{(0)} is a diagonal matrix with real-valued entries. This gives (i) immediately.

• We consider now (iii). First, note that R[V_h](·, φ) is an odd function for any fixed φ. Indeed, its Fourier transform with respect to the first variable,

$$\mathcal F_1\big[R[V_h](\cdot,\phi)\big](t) = \widetilde V_h(t\cos\phi, t\sin\phi),$$

is an odd function of t for any fixed φ. Thus, it is easy to see that τ_{j,j} = 0 for any j ≥ 0. Since ρ^{(0)} is already known to be a density matrix, this implies that Tr(ρ^{(1)}) = Tr(ρ^{(0)}) + Tr(τ) = 1.
• Let us now prove (ii). Define r_k := Σ_{j: j≠k} |τ_{j,k}|. We will prove that ρ_{k,k} ≥ 2r_k for all k and combine this fact with Gershgorin's disk theorem to get the result. We omit the numerical constants in our analysis, since we are only interested in bounding the coefficients τ_{j,k}. More specifically, we will prove that there exists a numerical constant c > 0 such that, for any k ≥ 2,

$$|\tau_{k,1}| \le \frac{c\, a}{\sqrt{k!}}, \qquad (77)$$

and also, for any k ≥ 2 and l ≥ 1,

$$\tau_{k+2l,k} = 0, \qquad |\tau_{k+2l+1,k}| \le \frac{c\, a}{k^{\,l+\frac12}}, \qquad |\tau_{k+1,k}| \le \frac{c\, a}{k^{5/4}}. \qquad (78)$$

We now combine the previous display with Gershgorin's disk theorem. Since ρ is a Hermitian matrix by (i), it admits real eigenvalues. For any eigenvalue μ of ρ, in view of Theorem 5 in Appendix D.5, there exists an integer k ≥ 1 such that

$$\big|\mu - \rho^{(0)}_{kk}\big| \le \sum_{j\ge1: j\neq k}|\tau_{j,k}| =: r_k. \qquad (79)$$

If k = 1, we get r₁ ≤ ca Σ_{j≥2} 1/√(j!) ≤ c'a for some numerical constant c' > 0. If k ≥ 2, we get in view of (77)-(78) that

$$r_k = \sum_{j<k}|\tau_{j,k}| + \sum_{j>k}|\tau_{j,k}| \le ca\Big( \frac{1}{\sqrt{k!}} + \sum_{l=2}^{k-2}\frac{1}{(k-l)^{l+1/2}} + (k-1)^{-5/4} + k^{-5/4} + \sum_{l=2}^\infty\frac{1}{k^{l+\frac12}} \Big) \le \frac{c'a}{k^{5/4}},$$

for some numerical constant c' > 0. Recall that ρ^{(0)} = ρ^{α,λ} for some 0 < α, λ < 1, where ρ^{α,λ} is defined in (41). Lemma 2 in the paper of [START_REF] Butucea | Minimax and adaptive estimation of the Wigner function in quantum homodyne tomography with noisy data[END_REF] guarantees that

$$\rho^{\alpha,\lambda}_{kk} = \frac{\alpha}{(1-\lambda)^\alpha}\,\Gamma(\alpha+1)\, k^{-(1+\alpha)}\,(1+o(1)),$$

as k → ∞. We note that ρ^{(0)}_{kk} > 0 decreases polynomially at the rate k^{−(1+α)} as k → ∞, whereas r_k decreases polynomially at the rate k^{−5/4}. Fix the numerical constant α ∈ (0, 1/4). Then, taking the numerical constant a > 0 small enough in (47), independently of k, we get ρ_{kk} ≥ 2r_k for any k ≥ 1. Thus, any eigenvalue μ of ρ is non-negative. Consequently, ρ is positive semi-definite.
We now prove (77)-(78). In (76), we fix the variable φ and apply Plancherel's theorem to the integral with respect to x to get that

$$\tau_{j,k} = \int_0^\pi \frac1\pi\!\int \widetilde V_h(t\cos\phi, t\sin\phi)\,\widehat{f_{j,k}}(t)\, e^{-i(j-k)\phi}\, dt\, d\phi. \qquad (80)$$

Plugging the definition of the pattern functions (11) and (47) into the previous display, we get for any j > k that

$$\tau_{j,k} = (-i)^{j-k}\sqrt{\frac{2^{k-j}\,k!}{j!}}\int_0^\pi\!\!\int \widetilde V_h(t\cos\phi, t\sin\phi)\, |t|\, t^{j-k}\, e^{-\frac{t^2}{4}}\, L_k^{j-k}\Big(\frac{t^2}{2}\Big)\, e^{-i(j-k)\phi}\, dt\, d\phi. \qquad (81)$$

Set C_{a,h} = aC₀h^{−1}e^{βh^{−2}}. Plugging now (47) into the previous display, we get for any j > k that

$$\tau_{j,k} = C_{a,h}\,(-1)^{j-k}\, i^{\,j-k+1}\sqrt{\frac{2^{k-j}\,k!}{j!}}\int_0^\pi\!\!\int e^{-2\beta t^2}\, g_1(t^2-h^{-2})\, g(t\sin\phi)\, |t|\, t^{j-k}\, e^{-\frac{t^2}{4}}\, L_k^{j-k}\Big(\frac{t^2}{2}\Big)\, e^{-i(j-k)\phi}\, dt\, d\phi.$$

By construction, for any fixed φ ∈ (0, π), the map t ↦ e^{−2βt²} g₁(t² − h^{−2}) g(t sin φ) |t| t^{j−k} e^{−t²/4} L_k^{j−k}(t²/2) is an odd function whenever j − k is even. Hence, if j − k is even, then τ_{j,k} = 0.

Set α = j − k. We will study separately the four different settings: (a) k and α are bounded; (b) k is bounded and α is large; (c) k is large and α is also large; (d) k is large and α is bounded.

Case (a). For any pair (j, k) such that k ≤ 34 and α = j − k ≤ 23, in view of Theorem 1 in Krasikov (2007), we get that |τ_{j,k}| ≤ Ca, provided n is taken large enough.

Case (b). Assume that k ≤ 34 and α = j − k ≥ 24. Again in view of Theorem 1 in Krasikov (2007), we have

$$\sup_{x>0}\, x^{\alpha+1}\, e^{-x}\,\big(L_k^\alpha(x)\big)^2 \le 6\, k^{1/6}\sqrt{k+\alpha+1}.$$
Plugging the above display into the definition of τ_{j,k}, we obtain that

$$|\tau_{j,k}| \le \sqrt6\, C_{a,h}\int_{a_1}^{b_1} e^{-\beta t^2}\, dt\;\sqrt{\frac{2^{k-j}\,k!}{j!}}\; k^{\frac1{12}}\,(j+1)^{1/4}\,\sqrt2^{\,j-k+1} \le \sqrt{12}\, C_{a,h}\int_{a_1}^{b_1} e^{-\beta t^2}\, dt\;\sqrt{\frac{k!}{j!}}\; k^{\frac1{12}}\,(j+1)^{1/4} \le \frac{C\, a}{j^{3/2}},$$

where C > 0 is an absolute constant.

Case (c). Assume that k ≥ 35 and α = j − k ≥ 24. Theorem 2 in [START_REF] Krasikov | Inequalities for orthonormal Laguerre polynomials[END_REF] guarantees the existence of a numerical constant C > 0 such that

$$\sup_{x>0}\, x^{\alpha+1}\, e^{-x}\,\big(L_k^\alpha(x)\big)^2 \le C\, k^{-1/6}\sqrt{k+\alpha+1}.$$

Similarly to the previous case, we obtain |τ_{j,k}| ≤ C C_{a,h} ∫_{a₁}^{b₁} e^{−βt²} dt √(k!/j!) k^{−1/12} (j+1)^{1/4} ≤ Ca/j^{3/2}, for some numerical constant C > 0.
Case (d). Assume that k ≥ 35 and α = j − k ≤ 24. In view of Theorem 1 in [START_REF] Krasikov | Inequalities for orthonormal Laguerre polynomials[END_REF], we have

$$M_k^\alpha(x) := \big(L_k^\alpha(x)\big)^2\, e^{-x}\, x^{\alpha+1} \le \frac{x\,(s^2-q^2)}{r(x)}, \quad \forall x\in(q^2, s^2),$$

where s = √(k+α+1) + √k, q = √(k+α+1) − √k and r(x) = (x − q²)(s² − x). Note that s → ∞ as k → ∞ whereas q ≤ √(α+1). Thus, for any n large enough such that a₁²/2 ≥ α + 1, there exists k₀ large enough such that for any k ≥ k₀ we have (a₁²/2, b₁²/2) ⊂ (q², s²). Thus, the above display gives

$$M_k^\alpha\Big(\frac{t^2}{2}\Big) \le \frac{t^2}{2}\,\frac{s^2-q^2}{(t^2/2-q^2)(s^2-t^2/2)} \le C$$

for some absolute constant C > 0. Combining the above display with (81), we get that

$$|\tau_{j,k}| \le C\, C_{a,h}\int_{a_1}^{b_1} e^{-2\beta t^2}\, dt\;\sqrt{\frac{2^{k-j}\,k!}{j!}} \le C\, a\,\sqrt{\frac{k!}{j!}},$$

for some numerical constant C > 0.

We consider now the case 3 ≤ α ≤ 24 (we recall that τ_{k+2,k} = 0). We have, in view of the previous display, that

$$|\tau_{j,k}| = |\tau_{k+\alpha,k}| \le \frac{C\, a}{\sqrt{(k+1)\cdots(k+\alpha)}} \le \frac{C\, a}{k^{\alpha/2}}, \quad \forall k\ge1,$$

for some numerical constant C > 0.

Note that the previous bound is also valid for α = 1, but it is not sufficient for our needs. The case α = 1 is actually the most difficult, as the previous bounds on the Laguerre polynomials are not sufficient to yield the desired control. In that case, we should rather exploit the oscillatory properties of the Laguerre polynomials. For large k, the Laguerre polynomial L¹_k behaves essentially like a trigonometric function on any fixed compact interval. Combining this fact with the stationary phase principle, we can obtain a small enough bound.
In view of (1.1) in [START_REF] Muckenhoupt | Asymptotic forms for Laguerre polynomials[END_REF], we have, as k → ∞, for any x ∈ (a₁²/2, b₁²/2), that

$$L_k^1(x) = \sqrt{2(k+1)}\;\frac{e^{x/2}}{\nu_k^{1/4}\sqrt{x}}\;\psi\Big(\sqrt{\frac{x}{\nu_k}}\Big)\, J_1\big(\sqrt{\nu_k\, x} + \epsilon(x,\nu_k)\big) + O\Big(\frac{x^{1/4}}{\nu_k^{7/4}}\Big),$$

where ν_k = 4k + 6, ε(x, ν_k) = O(ν_k^{−1/2} x^{3/2}), ψ(t) = (g(t)/g'(t))^{1/2} with (by a slight abuse of notation) g(t) = arcsin(t), and J₁ is the Bessel function of order 1. Note that ψ(t) = √t (1 + o(1)) as t → 0. The Bessel function J₁ admits the following asymptotic expansion as t → ∞:

$$J_1(t) = \sqrt{\frac{2}{\pi t}}\,\cos\Big(t - \frac{3\pi}{4}\Big) + O\big(t^{-3/2}\big).$$
We present some well-known results about the theory of empirical processes that are used in our proof. We refer the interested reader to [START_REF] Giné | Uniform limit Theorems for wavelet density estimators[END_REF] for more details about this theory.
Let Z₁, …, Z_n be i.i.d. with law P on R, and let F be a P-centered (i.e., Pf = ∫ f dP = 0 for all f ∈ F) countable class of real-valued functions on R, uniformly bounded by the constant U, called the envelope of the class. We say that F is a VC-type class for the envelope U and with VC characteristics A, v if its L²(Q) covering numbers satisfy, for all probability measures Q and ε > 0, N(F, L²(Q), ε) ≤ (AU/ε)^v. For such classes, assuming Pf = 0 for f ∈ F, there exists a universal constant L such that

$$E := \mathbb E\sup_{f\in\mathcal F}\Big|\sum_{i=1}^n f(Z_i)\Big| \le L\Big( \sqrt v\,\sqrt n\,\sigma\sqrt{\log\frac{AU}{\sigma}} + v\, U\log\frac{AU}{\sigma} \Big), \qquad (85)$$

where σ is any positive number such that

$$\sigma^2 \ge \sup_{f\in\mathcal F}\mathbb E\big[f^2(Z)\big].$$

See, e.g., Giné and Guillou (2001).
Talagrand's inequality bounds the deviations of suprema of empirical processes. The following version of this inequality is due to [START_REF] Bousquet | A Bennett concentration inequality and its application to suprema of empirical processes[END_REF].

Theorem 6. Assume that the Z_i are identically distributed according to P. Let F be a countable class of functions from a set X to R, and assume that all functions f in F are P-measurable, square-integrable and satisfy E[f(Z₁)] = 0, with envelope equal to 1. Let σ² ≥ sup_{f∈F} Var(f(Z₁)). Then, for all x ≥ 0, we have

$$\mathbb P\Big( \sup_{f\in\mathcal F}\sum_{i=1}^n f(Z_i) \ge \mathbb E\Big[\sup_{f\in\mathcal F}\sum_{i=1}^n f(Z_i)\Big] + \sqrt{2xv} + \frac x3 \Big) \le e^{-x},$$

with v = nσ² + 2 E[sup_{f∈F} |Σ_{i=1}^n f(Z_i)|].
Figure 1: Single photon state estimation, with η = 0.9, n = 100 × 10³. Left, top: display of ‖Ŵ^γ_h − W_ρ‖_∞/‖W_ρ‖_∞ as a function of 1/h; the central curve is the mean of this quantity, while the shaded area displays ±2 standard deviations. Left, bottom: histogram of the empirical repartition of m̂ computed by the Lepski procedure (30). Center: display as a 2-D image, using level sets, of W_ρ (top) and Ŵ^γ_{h_m̂} (bottom). Right: the same, displayed as an elevation surface.

Figure 2: Schrödinger's cat state estimation (same displays as in Figure 1).
https://github.com/gpeyre/2015-AOS-AdaptiveWigner
Acknowledgement
We thank Laetitia Comminges for her careful reading of a preliminary version of the paper and her helpful comments that led to a logarithmic improvement in Theorem 2.
* Supported in part by Simons Collaboration Grant 315477 and NSF CAREER Grant DMS-1454515. † Supported in part by "Calibration" ANR-2011-BS01-010-01. ‡ Supported by the European Research Council (ERC project SIGMA-Vision).
For the case α = 1, the resulting oscillatory integral in (81) is split as

$$\int_0^\pi\!\!\int C^0_k(t,\phi)\, dt\, d\phi = I_1 + I_2 + I_3, \qquad (82)$$

where the first two terms I₁ and I₂ are bounded as required, for some numerical constant c > 0. For the last integral in (82), a quick inspection gives a bound of the same order, for some numerical constant c > 0. Combining the last two displays with (82) gives the desired bound.
D.5. Auxiliary results
For the sake of completeness, we collect here a few results used in our proofs.
The following lemma, due to Butucea and Tsybakov (2008a), describes the asymptotic behaviour of integrals of exponentially decreasing functions.
Lemma 7. For any positive α, β, r, s and for any A ∈ R and B ∈ R, we have
The following classical result describes the asymptotic behaviour of integrals with non-stationary phase functions. See for instance page 631 in [START_REF] Zorich | Mathematical analysis[END_REF].
Theorem 4 (Localisation principle). Fix a compact domain Ω ⊂ R and let f ∈ C_c^∞(Ω). Let S ∈ C^∞(Ω) be a function such that S'(x) ≠ 0 for any x ∈ supp(f). Then, as ν → ∞, we have

$$\int_\Omega f(x)\,\cos\big(\nu S(x)\big)\, dx = O(\nu^{-k}) \quad\text{for every } k\in\mathbb N.$$

The same conclusion holds valid for the cosine function replaced by the sine function.
The following theorem provides a localization bound on the eigenvalues of square matrices. See for instance [START_REF] Feingold | Block Diagonally Dominant Matrices and Generalizations of the Gerschgorin Circle Theorem[END_REF].

Theorem 5 (Gershgorin Disk Theorem). Let A be an infinite square matrix and let μ be any eigenvalue of A. Then, for some j ≥ 1, we have |μ − A_{j,j}| ≤ r_j(A), where

$$r_j(A) = \sum_{k\ge1:\, k\neq j} |A_{j,k}|.$$
"751536",
"2110",
"1211"
] | [
"7772",
"60",
"66",
"454660"
] |
01491224 | en | [
"shs",
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01491224/file/978-3-642-40346-0_10_Chapter.pdf | Shun Shiramatsu
email: [email protected]
Tadachika Ozono
email: [email protected]
Toramatsu Shintani
Approaches to Assessing Public Concerns: Building Linked Data for Public Goals and Criteria Extracted from Textual Content
Keywords: Linked Data, Public Involvement, Concern Assessment, Goal Matching Service, Text Mining
The importance of public involvement in Japanese regional societies is increasing because they currently face complicated and ongoing social issues due to the post-maturity stage of these societies. Since citizens who have beneficial awareness or knowledge are not always experts on relevant social issues, assessing and sharing public concerns are needed to reduce barriers to public participation. We propose two approaches to assess public concerns. The first is building a linked open data set by extracting public goals for a specific social issue aimed at by citizens or agents from articles or public opinions. This paper deals with hierarchical goals and subgoals for recovery and revitalization from the Great East Japan Earthquake manually extracted from related articles. The data set can be used for developing services to match citizens and agents who aim at similar goals to facilitate collaboration. The second approach is building a linked data set by extracting assessment criteria for a specific social issue from public opinions. This paper deals with candidate terms that potentially represent such criteria for a specific public project automatically extracted from clusters of citizens' opinions. The data set can be used as evidence for policy-making about the target project.
Introduction
Japanese regional societies currently face complicated and ongoing social issues, e.g., disaster risks, dilapidated infrastructures, radiation pollution, and an aging population. Some Japanese researchers regard such troubling situations, that are partially due to the post-maturity stage of societies, as "a front-runner of emerging issues" [START_REF] Komiyama | Vision 2050 and the role of japan toward the sustainable society[END_REF]. Public involvement is an interactive communication process between stakeholders in deciding public policy [START_REF] Jeong | Discourse analysis of public debates: A corpus-based approach[END_REF] and has thus become more important to explore optimal solutions to complicated issues. For example, interactive and bottom-up communication is essential to design an optimal policy toward recovery and revitalization from the Great East Japan Earthquake [START_REF] Nishimori | Optimal policy design for disaster-hit area of japan: bottom-up systems analysis of special zone for reconstruction by the isdm[END_REF].
Since citizens who have beneficial awareness or knowledge are not always experts on relevant social issues, public concerns need to be assessed and shared to reduce barriers to public participation. It is difficult to participate in issues without contextual or background information. Linked open data (LOD), which are semantically connected data on the basis of universal resource identifiers (URIs) and the resource description framework (RDF), plays an important role in fostering open government [START_REF] Hochtl | Linked open data -a means for public sector information management[END_REF]. We aim to accrue LOD to share public concerns among citizens, governments, and experts to increase transparency that facilitates eParticipation. The structure of public concerns is an important context when building consensus. In this paper, we call the process of structuring public concerns "concern assessment".
Social networking systems (SNSs) such as Facebook are used for developing collaborative relationships not only in private but also in public spheres. To reduce barriers to participation and collaboration in public spheres, we consider that SNSs need to incorporate functions to share public concerns. In this paper, we focus on two types of public concerns, i.e., public goals and assessment criteria. Public goals that are aimed at by citizens are important for facilitating public collaboration. Assessment criteria for a specific social issue are also important for comparing and deliberating on multiple options to solve the issue.
We have developed an LOD set called SOCIA (Social Opinions and Concerns for Ideal Argumentation) that consists of Web content related to Japanese geographic regions, e.g., regional news articles, microblog posts, and minutes of city council meetings. The vocabularies for structuring relationships among such content and opinions are partially defined in the SOCIA ontology that we designed.
The main reason SOCIA deals with such web content is to share the background context behind regional social issues. The background context that should be supported, however, includes not only relationships among such content but also personal context, e.g., public goals and assessment criteria. The conventional SOCIA dataset and ontology could not yet support these kinds of personal context. In this paper, we expand the SOCIA ontology for structuring public goals and criteria and present a prototype dataset consisting of goals for revitalization from the Great East Japan Earthquake.
Literature Review
The International Association for Public Participation (IAP2) [START_REF]IAP2: Iap2 spectrum of public participation[END_REF] and the Obama administration's Open Government Initiative (OGI) [START_REF]WhiteHouse: Open government initiative[END_REF] have presented similar stages for public participation, i.e., the Spectrum of Public Participation and the Principles of Open Government shown in Fig. 1. The gradation in the figure represents the public impact of each stage. The figure also indicates the expected coverage of the use of LOD. Open data generally contributes to transparency and informativity, i.e., to the first stage. However, non-linked open data (e.g., CSV table data) generally lack interoperability. LOD is expected to be able to contribute also to the higher, collaborative stages, because semantic links compliant with RDF increase the interoperability of data and help us to reuse data for inter-organizational collaboration. The contextual information provided by the semantic links offers the potential for developing social web services to facilitate public collaboration. For example, the architecture based on the linked data paradigm for participatory decision-making proposed by Kalampokis et al. [START_REF] Kalampokis | Combining social and government open data for participatory decision-making[END_REF] can potentially be expanded into an architecture for supporting inter-organizational collaboration.
Over 40 countries currently have open data portals. 1 The number of open data portals has been increasing since 2009. In Japan, the Ministry of Economy, Trade and Industry operates a web site called the "Open Government Laboratory" 2 as an experimental site toward achieving eParticipation and eGovernment. The LOD Challenge Japan, modeled on the Open Data Challenge in Europe, has been held since 2011. SOCIA and our system [START_REF] Shiramatsu | Structuring japanese regional information gathered from the web as linked open data for use in concern assessment[END_REF] received the ChallengeDay Award at the LOD Challenge Japan 2011. 3 The utilization of open data has been actively promoted by the "e-Government Open Data Strategy" of the IT Strategy Headquarters of the Japanese Government since 2012.
There are several vocabularies that can be used for public participation or collaboration, e.g., the participation schema [START_REF] Styles | Participation schema[END_REF] and the weighted interests vocabulary [START_REF] Brickley | The weighted interests vocabulary 0[END_REF]. However, these vocabularies have not focused on assessing public concerns to facilitate public collaboration. This study shows how public goals and assessment criteria can be handled on the basis of LOD.
Manual Extraction of Public Goals
Public collaboration and consensus building between stakeholders are essential to enable revitalization from disasters, e.g., the Great East Japan Earthquake. Collaboration between multiple agents generally requires the following conditions:
- Similarity of the agents' goals or objectives
- Complementarity of the agents' skills, abilities, or resources

As the first step, this study focuses on the similarity of the goals. Sharing a data set of public goals can help citizens who have similar goals build consensus and collaborate with one another.
We focus on the following three problems related to public collaboration.
1. Citizens cannot easily find somebody whose goals are similar to their own.
2. Stakeholders who have similar goals occasionally conflict with one another when building consensus, because subgoals are sometimes difficult to agree on even if the final goal is generally agreed on.
3. An overly abstract and general goal is hard to contribute to collaboratively.
We presume that hierarchies of goals and subgoals play important roles in addressing these problems. First, the hierarchical structure can make methods of calculating the similarity between public goals more sophisticated: the hierarchy provides rich context that improves the retrieval of similar goals. If the data set of public goals had only short textual descriptions without hierarchical structures, calculating the similarity between goals would be difficult and the recall ratio in retrieving similar goals would be lower. Second, visualizing the hierarchies is expected to help people in conflict attain compromises. Third, dividing goals into fine-grained subgoals reduces barriers to participation and collaboration because small contributions to fine-grained subgoals are more easily provided.
We are planning to develop a Web service that matches citizens and agents who are aiming at similar goals in order to facilitate collaboration. Toward this end, we expanded the SOCIA ontology to describe public goals, as shown in Fig. 2. The property socia:subgoal enables us to describe the hierarchical structure of goals and subgoals. The public goal matching service that we aim to develop requires high-recall retrieval of similar goals to facilitate inter-domain, inter-area, and inter-organizational collaboration.
To develop a service for matching public goals, data on public goals need to be input by the stakeholders who are aiming at the goals in person. Before developing such an SNS-like mechanism to input stakeholders' goals and match them, we built an LOD set4 by manually extracting public goals from news articles and related documents. In total, 657 public goals and 4349 RDF triples were manually extracted from 96 news articles and two related documents by one human annotator. The most abstract goal, which is the root node of the goal-subgoal hierarchy, is "revitalization from the earthquake".5 The subgoals are linked from this goal with the socia:subgoal property.
The manually built LOD set can be used for developing a method of calculating the similarities between public goals. It can also be used as example seed data when citizen users input their own goals for revitalization. Fig. 3 shows an instance of a public goal to revitalize the Tohoku region from the Great East Japan Earthquake. This goal, "developing a new package tour product", has a title in Japanese, a description in Japanese, and two subgoal data resources.
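As a concrete illustration, the sketch below shows how such a goal instance and its subgoal link could be serialized with Python's rdflib. Note that the socia: namespace URI, the use of dcterms:title, the resource URIs, and the English placeholder titles (the dataset itself is in Japanese) are all assumptions made for illustration; the paper only fixes the socia:Goal class and the socia:subgoal property.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Assumed namespace URI; the paper only fixes the prefix "socia:".
SOCIA = Namespace("http://data.open-opinion.org/socia/ns#")

g = Graph()
g.bind("socia", SOCIA)
g.bind("dcterms", DCTERMS)

# Hypothetical resource URIs for the goal and one of its subgoals.
tour = URIRef("http://data.open-opinion.org/socia/data/Goal/package-tour")
course = URIRef("http://data.open-opinion.org/socia/data/Goal/tour-course")

g.add((tour, RDF.type, SOCIA.Goal))
g.add((tour, DCTERMS.title, Literal("Developing a new package tour product")))
g.add((course, RDF.type, SOCIA.Goal))
g.add((course, DCTERMS.title, Literal("Planning the tour course")))  # hypothetical subgoal

# The goal-subgoal hierarchy is expressed with the socia:subgoal property.
g.add((tour, SOCIA.subgoal, course))

print(g.serialize(format="turtle"))
```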
The cosine similarity between public goals can be calculated on the basis of a recursive definition of a bag-of-features vector as:
$$\mathrm{sim}(g_i, g_j) = \frac{\mathrm{bof}(g_i) \cdot \mathrm{bof}(g_j)}{\|\mathrm{bof}(g_i)\|\,\|\mathrm{bof}(g_j)\|} \quad (1)$$

$$\mathrm{bof}(g) = \frac{\alpha}{\|\mathrm{tfidf}(g)\|}\,\mathrm{tfidf}(g) + \frac{\beta}{\|\mathrm{lda}(g)\|}\,\mathrm{lda}(g) + \frac{\gamma}{|\mathrm{sub}(g)|} \sum_{sg \in \mathrm{sub}(g)} \frac{\mathrm{bof}(sg)}{\|\mathrm{bof}(sg)\|} \quad (2)$$

$$\mathrm{tfidf}(g) = \big(\mathrm{tfidf}(w_1, g), \ldots, \mathrm{tfidf}(w_{|W|}, g), 0, \ldots, 0\big)^{\top} \in \mathbb{R}^{|W|+|Z|},$$
$$\mathrm{lda}(g) = \big(0, \ldots, 0, p(z_1|g), \ldots, p(z_{|Z|}|g)\big)^{\top} \in \mathbb{R}^{|W|+|Z|}, \quad (3)$$
where g denotes a public goal, bof(g) denotes the bag-of-features vector of g, and sub(g) denotes the set of subgoals of g. Here, w ∈ W denotes a term, z ∈ Z denotes a latent topic derived by a latent topic model [START_REF] Blei | Latent Dirichlet Allocation[END_REF], and tfidf(w, g) denotes the TF-IDF, i.e., the product of term frequency and inverse document frequency, of w in the title and description of g. The p(z|g) denotes the probability of z given g, 0 ≤ α, β, γ ≤ 1, and α + β + γ = 1. The reason this definition incorporates a latent topic model is to enable short descriptions of goals to be dealt with, because TF-IDF alone is insufficient for calculating similarities between short texts. The parameters α, β, and γ are empirically determined on the basis of actual data. This prototype method of calculating similarities should be tested, verified, and refined through experiments in future work using the LOD set of public goals that we present.
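A minimal sketch of how Eqs. (1)-(2) could be implemented is given below. It assumes each goal is a dict holding precomputed tfidf and lda vectors (Eq. (3)) and an optional list of subgoal dicts; the default parameter values are illustrative placeholders, since the paper determines α, β, and γ empirically.

```python
import numpy as np

def normalized(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def bof(goal, alpha=0.4, beta=0.3, gamma=0.3):
    """Recursive bag-of-features vector of Eq. (2).

    `goal` is assumed to be a dict with precomputed "tfidf" and "lda"
    vectors (Eq. (3)) and an optional "subgoals" list of goal dicts.
    """
    vec = alpha * normalized(goal["tfidf"]) + beta * normalized(goal["lda"])
    subs = goal.get("subgoals", [])
    if subs:
        vec = vec + (gamma / len(subs)) * sum(
            normalized(bof(sg)) for sg in subs)
    return vec

def sim(g_i, g_j):
    """Cosine similarity between two goals, Eq. (1)."""
    a, b = normalized(bof(g_i)), normalized(bof(g_j))
    return float(np.dot(a, b))
```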
Automatic Extraction of Assessment Criteria
Citispe@k, our system for online public debate, supports manual tagging of assessment criteria for public opinions and Web content [START_REF] Swezey | An eparticipation support system for regional communities based on linked open data, classification and clustering[END_REF]. Transparent and participatory management of public issues requires assessing public concerns about the issue as well as criteria for the assessment; we call the criteria for this concern assessment "assessment criteria" in this paper. Although assessment criteria differ for each public issue, citispe@k does not yet support suggestion functions for setting new criteria. Here, we investigate the applicability of text mining for extracting assessment criteria from public opinions gathered at public workshops about a specific public project to maintain mountainous areas.
These workshops were held four times in four different areas. Participants at each workshop were divided into three to four debate teams, for a total of 15 debate teams. Each debate team had about five to six participants, one of whom was a facilitator who did not state opinions. The opinions stated in each debate were manually structured according to the KJ-method [START_REF] Kunifuji | Consensus-making support systems dedicated to creative problem solving[END_REF], which consisted of brainstorming and grouping phases. In the brainstorming phase, the opinions were written on colored cards: red cards for positive opinions, blue cards for negative ones, and yellow cards for demands or hopes. In the grouping phase, the opinions on the cards were manually classified into several groups, and the opinion groups were manually assigned labels. The group labels could have represented assessment criteria if their expressions had been uniform; in practice, however, their expressions were nonuniform and differed across debate teams. Hence, we applied text mining techniques to automatically extract candidate terms for assessment criteria.

We employed a method of cluster analysis using text mining techniques, i.e., we clustered the opinions and extracted feature terms for each cluster. The frequency of terms in short texts was insufficient for calculating similarities: the shorter the text content became, the lower the probability that the same term concurrently occurred in two pieces of content became, even if they were semantically close. Indeed, participants at the workshops could not write lengthy opinions on the cards; the average number of morphemes in each opinion on a card was only 13.4. To address this problem, we used latent Dirichlet allocation (LDA) [START_REF] Blei | Latent Dirichlet Allocation[END_REF], a frequently used model of latent topics in a document set. We used an implementation of the hierarchical Dirichlet process-LDA (HDP-LDA) 6 in the training phase of the topic model [START_REF] Teh | Hierarchical dirichlet processes[END_REF]. Although conventional LDA needs to be manually given the number of latent topics, HDP-LDA can determine this automatically.
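The sketch below illustrates this training phase with gensim's HdpModel as a stand-in for the HDP-LDA implementation cited above. The tiny corpus and the omission of MeCab-based Japanese tokenization are simplifications for illustration; `docs` would in practice hold the tokenized opinions and in-range utterances.

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

# `docs` stands in for the 611 card opinions plus the 2494 in-range
# utterances, each already tokenized (MeCab-based tokenization omitted).
docs = [["climbing", "trail", "maintained", "handrail"],
        ["view", "sea", "mountain", "town", "landscape"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# HDP-LDA determines the number of latent topics automatically.
hdp = HdpModel(corpus, id2word=dictionary)

# Topic distribution p(z|o) for one opinion, used later in bof(o).
print(hdp[dictionary.doc2bow(docs[0])])
```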
Table 1 summarizes the number of opinions written on cards for each area. A total of 611 opinions were written by the 15 debate teams. Since there were insufficient opinions to train the latent topic model, we also used utterances that were related to the opinions on the cards in the 15 debate transcripts. Table 2 lists the number of utterances in the debate transcripts for each area. We regarded adjacent sentences sandwiched by interlocutors' names as one utterance. Although there were a total of 5329 utterances, the transcripts also included irrelevant utterances, e.g., introductory and concluding remarks and facilitators' utterances. To divide the transcripts into relevant (in-range) and irrelevant (out-of-range) segments, we appended boundary markers to them. There were 2494 utterances other than those by facilitators in the in-range segments. These 611 opinions and 2494 utterances were used as the corpus to train HDP-LDA.
The procedure for clustering opinions and extracting feature terms that potentially represent assessment criteria is detailed in Fig. 4. Hereafter, let o ∈ O be an opinion written on a card, u ∈ U be an utterance in the transcripts of the debate, and d ∈ D = O ∪ U be a document (i.e., d is any one of o or u). Let w ∈ W be a morpheme N-gram (N = 1, 2, 3), g ∈ G be a manually assigned label for an opinion group, s ∈ S be a speaker (interlocutor) of an utterance in the debate transcripts, and z ∈ Z be a latent topic derived by HDP-LDA.
To prepare for step 1 in the figure, we determine the feature set F = W ∪ G ∪ S and the document set D = O ∪ U appearing in the corpus. Each w ∈ W can be extracted from the corpus through morphological analysis using MeCab7, a morphological analyzer. Morpheme N-grams that appear fewer than three times in the corpus are excluded because such rare expressions are not suitable for statistical processing. In step 1, the feature-document matrix is built from the frequencies of features in each document (i.e., opinions on cards or utterances in transcripts). In step 2, an LDA model is trained from the feature-document matrix with the HDP-LDA tool, and the probability p(z|o) is calculated using the parameters of the trained model. In step 3, bof(o), which is a bag-of-features vector for o, is generated as:
$$\mathrm{bof}(o) = \frac{\alpha}{\|\mathrm{tfidf}(o)\|}\,\mathrm{tfidf}(o) + \frac{1-\alpha}{\|\mathrm{lda}(o)\|}\,\mathrm{lda}(o), \quad (4)$$
where the vectors tfidf(o) and lda(o) are defined in the same way as in Eq. (3) in the previous section, and the parameter α satisfies 0 ≤ α ≤ 1. In step 4, opinions o_i and o_j whose cosine similarity is greater than a particular threshold θ are grouped as a cluster c. Clusters whose centroids have a cosine similarity greater than θ are also merged into one cluster. One opinion can belong to multiple clusters, i.e., this method is a kind of soft clustering. In step 5, the opinion clusters c ∈ C are ranked in descending order of the number of opinions.
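A minimal sketch of steps 4 and 5 is shown below, assuming the L2-normalized bag-of-features vectors are already available. The single-pass assignment is a simplification for illustration; it omits the subsequent merging of clusters whose centroids exceed the threshold.

```python
import numpy as np

def soft_cluster(vectors, theta=0.65):
    """Group opinions whose cosine similarity to a cluster centroid
    exceeds theta; an opinion may join several clusters (soft clustering).
    """
    clusters = []  # each cluster is a list of opinion indices
    for i, v in enumerate(vectors):
        joined = False
        for members in clusters:
            centroid = np.mean([vectors[j] for j in members], axis=0)
            centroid /= np.linalg.norm(centroid)
            if float(np.dot(v, centroid)) > theta:
                members.append(i)
                joined = True
        if not joined:
            clusters.append([i])
    # Step 5: rank clusters in descending order of the number of opinions.
    return sorted(clusters, key=len, reverse=True)
```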
In step 6, candidate feature terms w for each opinion cluster c are ranked with the following score based on pointwise mutual information (PMI):

$$\mathrm{score}(w, c) = \frac{\mathrm{PMI}(w, c) - E_w}{\sigma_w} \quad (5)$$

$$\mathrm{PMI}(w, c) = \log \frac{p(w, c)}{p(w)\,p(c)}, \quad (6)$$

where

$$E_w = \frac{1}{|C|} \sum_{c \in C} \mathrm{PMI}(w, c) \quad \text{and} \quad \sigma_w = \sqrt{\frac{1}{|C|} \sum_{c \in C} \big(\mathrm{PMI}(w, c) - E_w\big)^2}.$$
Standardization using the standard deviation σ_w in Eq. (5) is necessary because rare terms tend to be over-emphasized when the PMI value alone is used.
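The following sketch illustrates Eqs. (5)-(6) on precomputed term-cluster co-occurrence counts. Treating unobserved term-cluster pairs as PMI = 0 is an assumption made here for simplicity, since log 0 is undefined.

```python
import math

def pmi_table(counts, n_total):
    """counts[(w, c)] -> co-occurrence count of term w in cluster c; Eq. (6)."""
    p_w, p_c = {}, {}
    for (w, c), n in counts.items():
        p_w[w] = p_w.get(w, 0) + n
        p_c[c] = p_c.get(c, 0) + n
    return {(w, c): math.log((n / n_total)
                             / ((p_w[w] / n_total) * (p_c[c] / n_total)))
            for (w, c), n in counts.items()}

def standardized_scores(pmi, clusters):
    """score(w, c) = (PMI(w, c) - E_w) / sigma_w, Eq. (5)."""
    scores = {}
    for w in {w for (w, _) in pmi}:
        vals = [pmi.get((w, c), 0.0) for c in clusters]  # assumption: 0 if unseen
        e_w = sum(vals) / len(clusters)
        sigma_w = math.sqrt(sum((v - e_w) ** 2 for v in vals) / len(clusters))
        for c in clusters:
            scores[(w, c)] = ((pmi.get((w, c), 0.0) - e_w) / sigma_w
                              if sigma_w > 0 else 0.0)
    return scores
```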
We empirically set α = 0.5 and θ = 0.65 in this experiment; the four top-ranked clusters and the extracted feature terms with high scores are listed in Table 3. The feature terms for each cluster represent facets of the opinions in the cluster, and they potentially represent the assessment criteria focused on by the opinion cluster. For example, the feature terms in the first cluster can be interpreted as "maintaining hiking trails" and those in the second cluster as "landscapes of mountains". The obtained clusters can thus be interpreted as facets or assessment criteria of opinions. The ratio of negative opinions and demands (blue and yellow cards), weighted according to the distance from the cluster centroid, represents the degree of need to be addressed by the target public project. For example, the ratio for the second cluster is low because the participants were satisfied with the landscapes of the mountains.

Fig. 5 outlines the classes that are newly needed in SOCIA to describe the assessment criteria extracted from opinions. All opinion clusters correspond to socia:Facet. Clusters interpreted as assessment criteria can be instances of socia:AssessmentCriterion, which is a subclass of socia:Facet. An LOD set for assessment criteria can be built according to the classes in the figure. The links between assessment criteria and opinion clusters enable government and citizens to check the context behind the concern assessment, and such structure can be utilized to develop tools for assessing and sharing public concerns.

Furthermore, we visualized the distribution of opinions to enable users to grasp an overview, using non-metric multidimensional scaling (NMDS) based on the inverse cosine similarity of bof(o), as shown in Fig. 6. The colors of the points in the figure correspond to the colors of the cards, and semantically close opinions are located close together by the NMDS algorithm. We used the function isoMDS for NMDS, which is included in the library MASS in the statistical software R. 8 On the basis of this visualization, we developed an exploratory browsing interface for the Web browser, shown in Fig. 7. Users can interactively browse neighboring opinion clusters of their clicked points in this browsing interface.
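For illustration, the sketch below reproduces this visualization step with scikit-learn's non-metric MDS in place of R's isoMDS, interpreting the inverse cosine similarity as the dissimilarity 1 − cos. The input arrays are assumed to be precomputed and L2-normalized.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

def plot_opinion_map(vectors, card_colors):
    # vectors: (n_opinions, |W|+|Z|) array of L2-normalized bof(o) vectors.
    sims = np.clip(vectors @ vectors.T, -1.0, 1.0)
    dissimilarity = 1.0 - sims  # inverse cosine similarity
    nmds = MDS(n_components=2, metric=False,
               dissimilarity="precomputed", random_state=0)
    coords = nmds.fit_transform(dissimilarity)
    # Point colors follow the card colors (red/blue/yellow), as in Fig. 6.
    plt.scatter(coords[:, 0], coords[:, 1], c=card_colors)
    plt.show()
```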
Conclusion
We focused on two types of public concerns, i.e., public goals and assessment criteria, and presented our approaches to assessing them.

First, we manually built an LOD set of public goals for revitalization from the Great East Japan Earthquake that are aimed at by citizens or agents. It contains 657 public goals and 4349 RDF triples manually extracted from 96 news articles and two related documents. The data set deals with the hierarchical structure of goals and subgoals, which plays an important role in attaining compromises. The hierarchy of subgoals is recursively used to generate the bag-of-features vector of a public goal in order to avoid decreasing the recall ratio. We are planning to test and verify our method of calculating the similarities between goals and to develop a goal matching service using this data set. The effectiveness of the recursive definition of the bag-of-features vector can be verified by empirically determining the parameters α, β, and γ on the basis of the LOD of public goals: if the optimal value of γ turns out to be significantly greater than 0, the subgoal structure can be regarded as actually significant.

Second, we investigated the applicability of text mining for extracting assessment criteria from public opinions gathered at workshops on a public project to maintain mountainous areas. The feature terms automatically extracted from an opinion cluster helped us to interpret what kinds of assessment criteria were indicated by the clusters. We also presented an extension of our ontology for building LOD for assessment criteria. Moreover, we developed an exploratory browsing interface that enables overviews of opinion clusters to be understood.
Fig. 1. Expected coverage of Linked Open Data
Fig. 2. Expanded classes in SOCIA ontology to represent public goals
Fig. 3. An instance of a public goal
Fig. 4. Clustering opinions and extracting feature terms that potentially represent assessment criteria
Fig. 5. Expanded classes in SOCIA ontology to represent assessment criteria
Fig. 6. Visualizing opinion distribution on NMDS
Fig. 7. Exploratory browsing interface
Table 1. Citizens' opinions written on sticky notes with the KJ method

                      Area A  Area B  Area C  Area D  Total
No. of debate teams        4       4       3       4     15
Positive (red)            50      63      45      54    212
Negative (blue)           40      57      48      59    194
Demand (yellow)           37      48      57      53    205
Total                    127     168     150     166    611
Table 2. Utterances in debate transcripts

                        Area A  Area B  Area C  Area D  Total
No. of debate teams          4       4       3       4     15
In-range, citizens         678     685     450     681   2494
In-range, facilitators     509     401     279     279   1468
Out-of-range               293     534     288     252   1367
Total                     1480    1620    1017    1212   5329
Table 3. Feature terms of top four clusters that potentially represent assessment criteria

1st cluster
  Feature N-grams (translated from Japanese): Desirable, climbable, near, stroll, mountain, everyday climbing, hiking trail, Suma Alps
  Interpretation by a user: Maintaining hiking trails
  No. of opinions: 82
  (Negative+demand) ratio: 0.488
  Opinions near to cluster centroid (translated from Japanese): "The climbing trails are maintained → ease of use"; "The hiking trails are maintained."; "There are no handrails on …"

2nd cluster
  Feature N-grams (translated from Japanese): Far, break, Osaka, mountains, foliage tree, observation deck, broad-leafed tree, landscape, seasons
  Interpretation by a user: Landscapes of mountains
  No. of opinions: 54
  (Negative+demand) ratio: 0.315
  Opinions near to cluster centroid (translated from Japanese): "Are there any collaborative tasks to make Mt. Takatori better"; "Great because the sea, mountain, and town can all be seen."; "Good perspective from the sea."; "I have a view of Mt. …"

3rd and 4th clusters
  Opinions near to cluster centroids (translated from Japanese): "Number of crows increased."; "Artificial forests increased."; "Now there is a lot of greenery, more than when the Hanshin earthquake occurred."; "Citizens groups for vitalizing mountainous areas need government financial help."
1 http://www.data.gov/opendatasites
2 http://openlabs.go.jp/ (in Japanese)
3 http://lod.sfc.keio.ac.jp/challenge2011/result2011.html (in Japanese)
4 http://data.open-opinion.org/socia/data/Goal?rdf:type=socia:Goal&limit=100 (in Japanese)
5 http://data.open-opinion.org/socia/data/Goal/%E9%9C%87%E7%81%BD%E5%BE%A9%E8%88%88 (in Japanese)
6 http://www.cs.princeton.edu/~blei/topicmodeling.html
7 https://code.google.com/p/mecab/ (in Japanese)
Acknowledgments. This work was supported by the Revitalization Promotion Program (A-STEP) (No. 241FT0304) of JST, the Grant-in-Aid for Young Scientists (B) (No. 25870321) from JSPS, the SCOPE from the Ministry of Internal Affairs and Communications, the Rokko Sabo office of the Ministry of Land, Infrastructure, Transport and Tourism, and Yachiyo Engineering Co., Ltd. We would also like to sincerely thank Dr. Hayeong JEONG for her valuable advice.
"993341",
"993343",
"993344"
] | [
"66716",
"66716",
"66716"
] |
Joachim Åström
email: [email protected]
Hille Hinsberg
email: [email protected]
Magnus E Jonsson
email: [email protected]
Martin Karlsson
email: [email protected]
Crisis, Innovation and e-Participation: Towards a Framework for Comparative Research
Keywords: Democratic Innovation, Crisis, e-Participation, Comparative Research, ICT
Why and how do e-participation policies sometimes flow with politics as usual and sometimes lead to challenging powerful elites and institutions? With the aim of investigating this question, we introduce a framework for comparative research that includes not only systemic but also circumstantial factors. The approach is tested in a comparative case study of three northern European countries--Sweden, Estonia and Iceland--that are all experimenting with e-participation but which are experiencing rather different levels of crisis. The results show that innovation and elite-challenging aspirations are very much related to the type and degree of crisis. It is therefore argued that the interplay between institutional constraints and circumstantial catalysts needs further scholarly attention and elaboration.
Introduction
Regardless of all the differences among European democracies, similar challenges regarding a gap between citizens and their governments seem to work as a starting point for democratic renewal initiatives, which show remarkable similarities across countries [START_REF] Daemen | Renewal of Local Democracies: Puzzles, Dilemmas and Options[END_REF]. One intriguing development is the introduction of "democratic innovations", which refers to institutions that have been specifically designed to increase and deepen citizen participation in the political decision-making process, such as different forms of e-participation [START_REF] Smith | Democratic Innovations -Designing Institutions for Citizen Participation[END_REF]. In the scholarly debate on these innovations, much attention has been paid to finding successful recipes of design. Following Archon Fung [START_REF] Fung | Recipes for Public Spheres: Eight Institutional Design Choices and Their Consequences[END_REF], many scholars have argued that the success of democratic innovations and their consequences for democratic governance depend, to a large extent, on "the details of their institutional construction". Empirical research supports this view by showing that the methods by which participants are selected, the timing of consultations within the policy cycle, and the mode of communication adopted set a decisive context for participant interaction. Even though not all democratic innovations succeed in engaging citizens, some do, and the odds for success and failure differ considerably according to aspects of design [START_REF] Åström | Online Consultations in Local Government: What works, When and Why[END_REF].

However, democratic innovations, just as any innovation, are more than ideas and designs; they are ideas in action. Therefore, as Newton aptly points out [START_REF] Newton | Curing the democratic malaise with democratic innovations[END_REF], they depend on implementation, and "good innovations depend on good ideas and designs that can be implemented successfully". But empirical research on the implementation of democratic innovations is still in its infancy, and there is little knowledge of how similar designs are in fact mediated by various local contexts: how they are translated locally, why they are implemented differently, and what consequences they have for democracy.
Examples of comparative internet political research are growing in number, but these have mainly been conducted in the field of electoral politics. Concentrating primarily on European and U.S. case studies, this research has criticized the idea that American innovations in e-campaigning could simply be replicated elsewhere. By accounting for mainly institutional variables such as party structure and funding, electoral regulations, and media systems, research shows how similar instances of e-campaigning are shaped by national contexts [START_REF] Trevisan | Same Recipe but different Ingredients? Challenges and Methodologies of Comparative Internet/Politics Research[END_REF][START_REF] Åström | Blogging in the shadow of parties: Collectivism and individualism in the Swedish 2010 election[END_REF].
When it comes to e-participation, it is local government that is the laboratory for research and experimentation. In most countries, experimentation in e-participation, if there is any, takes place primarily at this level [START_REF] Åström | Online Consultations in Local Government: What works, When and Why[END_REF]. This focus on local-level experimentation offers researchers some specific advantages and challenges. The closeness of political actors makes the effects of new processes more readily observable to those who govern localities -and to researchers -than to those at higher levels of governance. Furthermore, the large number of local governments is advantageous when it comes to generalizing results [START_REF] John | The Comparative Study of Local Government and Politics: Overview and Synthesis[END_REF]. However, local governments vary across countries on multiple dimensions (i.e., their purpose; their autonomy and relationship to other levels of government; their relationship to their residents; their structure, form and setting; and their politics and policy), which is challenging [START_REF] Wolman | Comparing local government systems across countries: conceptual and methodological challenges to building a field of comparative local government studies[END_REF]. As a consequence of this complexity, there is a general lack of systematic comparison of local politics across countries. This is not only the case in the field of e-participation. Urbanists in general, Pierre [10, p. 446] states, "have been surprisingly slow in using comparison as a research strategy", and according to Wolman [9, p. 88], the main threshold problem is "the lack of a common framework to conduct such research, to place results, and build upon them".
The general aim of this paper is to help remedy this research gap by exploring what constitutes the "context" that surrounds and influences the implementation of e-participation at the local government level. The specific question we set out to explain is why and how e-participation policies sometimes flow with politics as usual and sometimes lead to challenging powerful elites and institutions. While "elite-challenging" or "citizen-centric" forms of participation reflect a critical citizenry whose members want to put incumbent authorities under pressure to respond to their demands [START_REF] Dalton | Citizenship Norms and the Expansion of Political Participation[END_REF], and while they tend to have a positive impact on democracy [START_REF] Inglehart | Modernization, cultural change, and democracy: the human development sequence[END_REF], they are still rare exceptions [START_REF] Blaug | Engineering Democracy[END_REF][START_REF] Coleman | The Internet and democratic citizenship: theory, practice and policy[END_REF].
To gain a better understanding of why, when and how these rare exceptions occur, we will start by proposing a new approach to comparative research, based on a wider, more flexible understanding of "context" that accounts for the interplay between institutional constraints and circumstantial catalysts [START_REF] Trevisan | Same Recipe but different Ingredients? Challenges and Methodologies of Comparative Internet/Politics Research[END_REF]. This framework is subsequently used to compare e-participation implementation in Sweden, Estonia and Iceland, three northern European countries that are all experimenting with e-participation but are experiencing rather different levels of crisis. While different "systemic elements" come to the fore in the implementation of e-participation depending on the political system, circumstantial factors--or different degrees of crisis--transcend boundaries and thus provide an interesting starting point for empirical analysis. Information about the three cases is based on a series of personal interviews conducted with local politicians and civil servants in Sweden, Estonia and Iceland during spring 2012, as well as a joint workshop with participants from all three countries in autumn 2012. Additionally, we have used evaluations and case reports from the respective countries (our own and others) as well as public data.
2 What context matters for e-participation?
Encouraged by Trevisan and Oates [START_REF] Trevisan | Same Recipe but different Ingredients? Challenges and Methodologies of Comparative Internet/Politics Research[END_REF], this study thus introduces a broader framework that accounts for the interplay between institutional constraints and circumstantial catalysts in the implementation of e-participation. Following in the footsteps of previous comparative local government research, we will particularly analyze the local government system, or central-local relations, as well as the character of local democracy. When it comes to circumstantial factors, we will distinguish between the nature of policy problems on the one hand and the political climate on the other. In combination, these elements are expected to be key factors influencing the degree of innovation in the implementation of e-participation. Innovation is usually seen as offering an opportunity to change the rules of the game, which is more motivating in situations characterized by crisis. However, this is seldom recognized in theoretical frameworks in comparative digital research. Instead, "context" is usually narrowed down to different institutional constraints.
Sweden
Systemic factors. The Swedish political system is first and foremost characterized by the strong position of political parties. Every elected politician represents a political party, and Swedish elections are centered on parties rather than individual candidates. Also, political participation in Sweden has traditionally been channeled through political parties and popular mass movements, fostering a collectivist ideal for citizen participation and democratic citizenship. In a recent comparative analysis of sixteen European countries, investigating the extent to which local democratic institutions and political cultures are "party democratic" or "citizen democratic", Sweden is found to be the most "party democratic" political system [15, p. 9]. The Swedish system is also based on strong local government with far-reaching autonomy [16, p. 233]. Local governments raise the majority of the income taxes from the population and have gained responsibility for a growing number of welfare services (from national as well as county governments) during the last fifty years [16, p. 236]. The local government sector is also a large employer, employing around 20% of the Swedish workforce.
Swedish citizens are usually considered relatively informed about politics [START_REF] Fraile | A comparative study of Political Knowledge in the 2009 European Elections[END_REF], and turnout in elections is comparatively high (approximately 80% of the electorate vote in local authority, county council and national elections) [START_REF]Electoral results 2010[END_REF].
Circumstantial factors.
During the last decades, there has been a growing debate over the state of democracy in Sweden. As in many other European countries, the Swedish public is becoming more dissatisfied with the traditional institutions of representative democracy and with conventional forms of participation [START_REF] Goul Andersen | Udviklingen i den politiske deltagelse i de nordiske lande[END_REF][START_REF] Holmberg | Down and down we go: political trust in Sweden[END_REF]. A notable strengthening of socio-economic conditions in the country has resulted in a shift towards individualization among its citizens. In recent comparative studies, Swedes are found among the most individualized citizens in the world [START_REF] Inglehart | Modernization, cultural change, and democracy: the human development sequence[END_REF]. As a result of this transformation, the strong collectivistic tradition of political engagement in the country has been questioned. The formerly strong mass popular movements, including the political parties, have lost a large share of their members. A widespread decline in political trust, party membership and party identification has caused some scholars to claim that political parties are losing their legitimacy [START_REF] Montin | Mobilizing for participatory democracy? The case of democracy policy in Sweden[END_REF][START_REF] Holmberg | Down and down we go: political trust in Sweden[END_REF]. However, the contemporary political situation is still one of stability. Sweden has not been directly involved in a war since 1809 and is, along with Canada, the only state rewriting its constitution despite the absence of a political crisis [START_REF] Elster | Forces and mechanisms in the constitution-making process[END_REF]. The country has managed through the current economic crisis better than most other European countries, and the parliamentary situation is still characterized by pragmatism, coalition-building and striving for consensus. While turnout and levels of trust are going down, they remain relatively high from a comparative perspective.
E-participation implementation.
In attempts to mend the apparent challenges of Swedish representative democracy, a trend of introducing new forms of citizen participation (e.g., e-participation initiatives) has nonetheless emerged [START_REF] Karlsson | Participatory initiatives and political representation: The case of local councillors in Sweden[END_REF]. In line with the Swedish tradition of strong local governments, the vast majority of these participatory initiatives have been implemented at the local level, championed by local governments. However, local e-participation initiatives are still rather few and unevenly diffused among Swedish local governments. The local governments that have implemented e-participation initiatives are often characterized by relatively low electoral participation as well as relatively weak political trust among citizens. Case studies report that local politicians often view e-participation initiatives as a potential remedy for these challenges [START_REF] Åström | Mot en digital demokrati? Teknik[END_REF]; [START_REF] Granberg | Civic Participation and Interactive Decision-making: A Case Study[END_REF], which indicates a link between circumstantial factors and e-participation initiatives. However, it soon becomes evident that the catalyst for change is not very strong. First, the initiatives are implemented as potential remedies for declining trust in political parties and institutions, but not as processes for solving specific policy problems. One illustrative example is the online referendums in the municipality of Sigtuna: the local government decided to implement a large number of local referendums in order to spur greater political participation and foster political trust, but the policy issues for these referendums were chosen at a later stage and were not the main focus of the participatory process. Second, the new arenas of engagement have often been detached from the traditional party arena of representative democracy. Local governments are locked into old structures and ways of working, with only islands of participatory practice [cf. [START_REF] Amnå | Playing with fire? A Swedish mobilization for deliberative democracy[END_REF][START_REF] Åström | Apple Pie-Spinach Metaphor: Shall e-Democracy make Participatory Planning More Wholesome?[END_REF].
Estonia
Systemic factors. Since the restoration of independence in 1991, Estonia has built and developed a democratic structure. Practically starting from scratch after the Soviet occupation, all the functions and apparatus of a modern state, including a legal code, a civil service, and national and sub-national institutions, have been built up. Today, Estonia is a parliamentary republic. The political parties are the main instrument for channeling power from the citizens, and the general elections are the central mechanism that gives the people influence over policy-making. Another important trait of the Estonian political system is central authority. This is partly due to the fact that the "cornerstone of Estonian local governance-the municipality-was abolished by the Soviet regime" [28, p. 168]. The local authorities are thus formally autonomous within the framework on fiscal and normative matters, but the framework has "not been conducive to actual autonomy and, hence, the development of local democracy" [28, p. 190]. Due to the regulations on tax collection and private enterprise in social services, the revenues of local governments are low. The scope for local government maneuvering is therefore limited.
Circumstantial factors.
The political parties dominate Estonian political life, yet they do not enjoy high public trust, with around 40% of citizens claiming they would not vote for any of the competing parties in an election [START_REF] Madise | Elections, Political Parties, and Legislative Performance in Estonia: Institutional Choices from the Return to Independence to the Rise of E-Democracy[END_REF]. The public distrust was recently manifested in a much-publicized debate article in November 2012, in which Charter 12 was presented. The article was published in connection with a political scandal concerning party financing, with the explicit message that "Estonia's democracy is crumbling before our eyes" and "democratic legitimation [sic] has ceased" [30]. The Charter caught the interest of both the public (17,000 people supported the Charter in an online process) and the President (who supported the call). The Charter then became the platform for an online political process, leading to the creation of the People's Assembly site, 'Rahvakogu', which concluded with a "Deliberation Day" in April 2013. The political climate can thus be said to be characterized by instability and low trust in the political establishment. The policy problems facing Estonia are, despite the current Charter 12 events, of a resource character. The political parties are currently under pressure and scrutiny and, in general, enjoy low trust.
E-participation implementation.
With internet access reaching approximately 75% of the population, Estonia is among the top 30 states in the world in this regard. Already early in the process of democratization, the Estonian government turned to ICT solutions to enhance citizen participation. The earliest and most notable actions were the introduction of the TOM (Today I Decide) system in 2001, the introduction of e-voting in 2005 and the osale.ee platform in 2007. Due to the national-level crisis and the creation of the "People's Assembly" platform in 2013, Estonia gained its first elite-challenging e-participation process. The process was initiated by the President and shaped in co-operation with civil society. Aimed at inviting lay citizens to discuss and propose fundamental changes in the party structure, the initiative must be viewed as rather radical.
On the local level, however, e-participation tools have been developing at a considerably slower pace than those created by the central government or citizen initiatives. For example, the VOLIS system is an online decision-making system for local councils that aims to integrate e-governance, participatory democracy, and records management. It is in some sense similar to the TOM system, offering possibilities for citizens to propose issues to the council for discussion or adoption and to collect supporting signatures. However, the system has not been widely adopted. The basic reasons for this are the centralized institutional framework and the additional costs imposed on the municipalities and regions interested in the system.
Iceland
Systemic factors. Despite Iceland's long tradition of democracy, with the first parliament, the Althing (Alþingi), established as early as 930 CE, the modern democratic state took shape after independence from Denmark in 1944, when Iceland constituted itself as a semi-presidential republic with parliamentary rule. Iceland is a decentralized state with strong local democratic traditions. Municipalities as a political unit date back to the first "free men" who settled Iceland around 800 CE. Iceland consists of 75 municipalities, which, with a total population of 320,000 citizens, makes for a "great number of small, sparsely populated municipalities" [31, p. 21].
With its structure as a unitary state, the national government of Iceland rules the state, while the local authorities, with restrictions from the national level, rule the municipalities. Tax revenue is collected by the municipalities, and a large share of welfare services is thus provided by the municipalities. The municipalities are therefore entitled, under the law, to maneuver within the frames of their budgets.
Circumstantial factors.
The contemporary Icelandic political context can be characterized as a stable post-crisis condition, following the economic and political crisis that began in the wake of the global financial crisis in 2008. In the aftermath of the financial and political crisis, the sitting government was toppled in 2009, and a public discussion on the fundamentals of the Icelandic political system took shape. One of the main issues was the drafting of a new constitution. The constitution was drafted online in a crowdsourcing process that is unique of its kind.
Iceland has traditionally been considered to be in good democratic health, with high turnout in parliamentary elections (e.g., 85.1% in 2009) and relatively high turnout in presidential elections (e.g., 69.3% in 2012). Parallel to this high level of participation, Icelanders are dissatisfied with the political establishment, a trend visible in the fact that political parties attract fewer members. Some scholars argue that "political parties in Iceland have become almost empty shells" [START_REF] Kristjánsson | Iceland: A Parliamentary Democracy with a Semi-Presidential Constitution[END_REF].
These recent developments (i.e., the financial and political crisis), in combination with the notion that the political parties resemble "empty shells", have affected the levels of general trust in politics in Iceland. In a poll conducted in 2011, only one in ten Icelanders expressed "great trust" in parliament [START_REF] Gylfason | From Collapse to Constitution: The Case of Iceland[END_REF]. The political climate is thus characterized by low trust. The policy problems in Iceland concern fundamental political issues: the crisis affected all political and societal institutions and must therefore be considered fundamental in character.
E-participation implementation.
Early on, Iceland's government was positive toward ICT solutions. Already in 1996, the prime minister announced, "The chief objective is that Iceland shall be in the forefront of the world's nations in the utilization of information technology in the service of improved human existence and increased prosperity" [START_REF] Fung | Recipes for Public Spheres: Eight Institutional Design Choices and Their Consequences[END_REF]. But despite the infrastructure and governmental rhetoric, the rate of democratic innovation and e-participation was low in Iceland until 2008, on both the local and national levels. Only a few examples of e-participation innovations could be found at the local level, such as the e-voting referendum in 2001 on whether or not to move the national airport located in Reykjavik, and a deliberative online project concerning the "Local Agenda 21" policy in 2004 [START_REF] Guðmundsson | The role of public participation in creating a sustainable development policy at the local level. An example from the City of Reykjavík, Iceland. Report[END_REF].
After the crisis in 2008, the rate of e-participatory innovation rose and became ever more elite-challenging. The first and most prominent example on the national level was the process of drafting the new constitution, while the most prominent example on the local level was the launch and implementation of the Better Reykjavik system. The process of drafting the new constitution began with the National Forum in 2010, at which it was decided that a citizen assembly, the "Constitutional Council", should be elected by popular vote. The Constitutional Council then chose to put the process online, and the drafting of the constitution soon metamorphosed into an online process that invited every citizen of Iceland to participate in the writing of the new constitution [START_REF] Gylfason | Constitutions: Send in the Crowds[END_REF]. The social media platforms used were YouTube, Twitter, Facebook and Flickr. A first draft of the new constitution was handed over to parliament in July 2011, and an advisory referendum was held in October 2012. The drafting is thus still in progress. On the local level, the Better Reykjavik system was implemented by the new local party, the Best Party, in co-operation with the non-profit organization Citizen Foundation. The system allows citizens of Reykjavik to participate by posting, discussing and voting upon citizen initiatives.
Comparative analysis
At first glance, Sweden, Estonia and Iceland share many similarities in relation to e-participation. All three countries are characterized by strong ICT infrastructure and high levels of ICT modernization among their populations. However, the e-participation initiatives implemented in these countries differ widely in terms of innovation and elite-challenging aspirations. How can these differences be understood in relation to the interplay between circumstantial and systemic factors?
In the Swedish case, e-participation initiatives are primarily implemented at the local level and are seldom initiated to handle specific policy problems. Rather, these initiatives are used as a tool among many to foster citizen participation and political trust in the light of a declining trend in electoral participation and political trust (although from comparably high levels). The degree of pervasiveness and innovation is low in comparison to the other cases, which might be understood in relation to systemic (party-centric) as well as circumstantial (stability) factors. In a comparably stable political climate, without any imminent policy problems, e-participation initiatives have not been implemented so as to challenge the party-centric style of policy making. Despite the use of some interesting tools such as e-referendums, online discussion forums, e-panels and e-petitioning, implementation pretty much speaks in favour of "politics as usual".
By comparison, the Estonian case is interesting, since the country is beginning to move beyond the elites' comfort zone. While the Estonian local governments do not have the capacity for innovative e-participation implementation, the country was early in developing a national e-participation platform. The government thus "paved the way" for innovation by constructing a physical and cultural infrastructure for e-participation early on. However, it was not until the emergence of a legitimacy crisis that a more elite-challenging practice developed, with the creation of the "Rahvakogu" and "Deliberation Day". With its roots in an unstable political climate, this crisis was also related to a specific policy problem: how to regulate party finance.
However, the Icelandic case is sui generis due to the extent of the economic crisis as well as the degree of elite-challenging democratic innovations in its wake. The financial crisis facing Iceland in 2008 brought with it not only policy problems related to financial issues but also a substantial challenge to the political climate, in terms of governmental and local government institutions lacking legitimacy. These developments spurred innovative and pervasive forms of e-participation processes at the national level as well as in the city of Reykjavik. In contrast to the elements of crisis described in Sweden and Estonia, the situation in Iceland could be described as a more fundamental crisis affecting several sectors and functions of society. Table 1 summarizes the main differences and similarities among the cases.

As has been argued above, both systemic and circumstantial factors seem to influence e-participation implementation. When it comes to circumstantial factors, which are of particular interest in this paper, the results suggest that the lower the trust and the deeper the policy problems, the higher the chances of an elite-challenging implementation. An interesting common feature of the more innovative and elite-challenging initiatives is their lack of sole top-down management by state bodies. Instead, they have come into being from outside the state, with civil society participating in deciding the rules of the game. It would thus seem that crisis makes e-participation more innovative by making it less government-organised and interpreted more in terms of citizen or civil-society concerns, reflecting the tension in democratic theory between models of participation promoted by incumbent power-holders and autonomous initiatives driven by self-actualizing citizens [START_REF] Bennet | Changing Citizenship in the Digital Age[END_REF].
Conclusions
The crisis of representative democracy may work as a starting point for democratic renewal initiatives in many cities around the world, but these initiatives vary considerably in terms of their elite-challenging aspirations. In this article, we have argued that circumstantial factors are as important as systemic factors for understanding why. Within this framework, it becomes just as important to distinguish between crises as to distinguish between institutions. Without both these sets of factors, the results of the comparative case study would be more difficult to understand; empirical work on less straightforward case studies would help to achieve a more sophisticated understanding of the relationship between crisis and e-participation.
Earlier studies of e-participation have focused foremost on the influence of institutional and systemic factors on e-participation implementation, while largely ignoring or overlooking crisis, a concept that has had a central position in other related fields of social scientific research interested in innovations in government and society (i.e., organizational studies and economics). The findings of this analysis encourage more research on the interplay between crisis and institutions in shaping e-participation processes and on how different kinds, or degrees, of crisis affect e-participation implementation. [START_REF]Nordforsk, the Icelandic Centre for Research (Rannis) and Estonian Ministry for Economic Affairs and[END_REF]
Table 1. Key contexts influencing e-participation implementation
"1004256",
"1004257",
"1004258",
"1004259"
] | [
"301188",
"171767",
"489196",
"489197"
] |
Donald F Norris
email: [email protected]
Christopher G Reddick
email: [email protected]
Michael Cannon
Ira Levy
David Molchany
Elliot Schlanger
E-Participation among American Local Governments
Keywords: E-participation, e-democracy, e-government
1 Introduction

In this paper, we examine empirically whether American local governments have adopted electronic participation (e-participation), also known as e-democracy (herein, we use these terms synonymously). For years, scholars and advocates have argued that e-government has the potential not simply to deliver governmental information and services online but also to produce e-democracy (e.g., [START_REF]Nugent | If e-democracy is the answer, what's the question?[END_REF][START_REF]Garson | The Promise of Digital Government[END_REF][START_REF]Ward | Introduction: The potential of the Internet revisited[END_REF]).
Proponents" claims about the potential of edemocracy suggest that it will produce primarily positive results in such areas as democratic engagement and deliberation, citizen participation in government and politics, and voter turnout in elections (e.g., [START_REF] Meeks | Better Democracy through technology[END_REF]Baum and DiMaio, 2000;[START_REF] Becker | Rating the impact of new technologies on democracy[END_REF][START_REF] Gronlund | Democracy in an IT-framed society[END_REF][START_REF] Hiller | Privacy Strategies for Electronic Government[END_REF]and Westcott, 2001;OECD, 2003;[START_REF] King | Democracy in the Information Age[END_REF][START_REF] Ward | Introduction: The potential of the Internet revisited[END_REF][START_REF] Amoretti | International Organizations ICTs Policies: E-Democracy and E-Government for Political Development[END_REF].
2 Defining E-democracy
Currently, there is little agreement in the literature about what e-democracy means in theory or constitutes in practice, which should not be surprising because the same can be said of democratic theory in general. According to [START_REF] Dahl | A Preface to Democratic Theory[END_REF], "One of the difficulties one must face at the outset is that there is no democratic theory - there are only democratic theories" [START_REF] Akdogan | Evaluating and improving e-participation in Istanbul[END_REF]. The term e-democracy is often conflated with constructs labeled e-participation, virtual democracy, teledemocracy, digital democracy and cyber democracy. Several authors have offered definitions of e-democracy (e.g., Hacker and van Dijk, 2000; [START_REF] Gronlund | Democracy in an IT-framed society[END_REF][START_REF] Kakabadse | Reinventing the Democratic Governance Project through Information Technology? A Growing Agenda for Debate[END_REF]; European Commission, 2005; [START_REF] Pratchett | Barriers to edemocracy: Local government experiences and responses. A report prepared for the Local e-Democracy National Project in the UK[END_REF][START_REF] Tambouris | A survey of e-participation research projects in the European Union[END_REF][START_REF] Spirakis | The impact of electronic government on democracy: edemocracy through e-participation[END_REF], among many others). Most commonly, definitions of e-democracy involve the use of ICTs for citizen participation. Additional elements common to such definitions are normative in nature and suggest purposes for e-democracy, such as improving or enhancing democracy, involving citizens in decision-making, fomenting organizational (that is, governmental) change and transforming governments.
For the purposes of this paper, we define e-democracy descriptively as: The use of electronic means, principally although not solely through government websites and the Internet, to promote and enhance citizen engagement with and participation in governmental activities, programs and decision-making. (This is the same definition that we used in our survey.)
3 Literature Review
For this research, we conducted an extensive review of the e-democracy literature.
The great majority of the works we found were speculative or theoretical in nature or addressed e-government applications. Very few were empirical. We reviewed the empirical works to find those that sought hard evidence (e.g., through case studies, surveys, website analyses, etc.) of the existence of e-democracy anywhere around the world. We discuss findings from this review in the following paragraphs.

[START_REF] Gibson | The Australian public and politics on-line: Reinforcing or reinventing representation?[END_REF] reported that there was little citizen uptake of e-participation efforts in Australia. They also suggested that "…widespread mobilization is unlikely to occur in the near future (111)." [START_REF] Medaglia | Measuring the diffusion of eParticipation: A survey on Italian local government[END_REF] found that very few Italian municipal websites provided opportunities for active citizen participation (93 percent did not). In an examination of Korean government websites, [START_REF] Lyu | The public's e-participation capacity and motivation in Korea: A web analysis from a new institutionalist perspective[END_REF] discovered low citizen uptake of and demand for e-participation efforts.
In a paper about e-government in Istanbul, Turkey, Akdogan (2010) was unable to identify any significant amount of e-democracy via governmental websites in that metropolis. Similarly, [START_REF] Sobaci | What the Turkish parliamentary web site offers to citizens in terms of e-participation: A content analysis[END_REF] found that the Turkish parliament website offered very little in terms of e-participation. In a web based survey of civil servants in six New Zealand government departments, [START_REF] Baldwin | What public servants really think of e-government[END_REF], found that while civil servants generally had favorable views of e-government (though not of etransformation), the actual extent of e-participation efforts among those agencies was limited. This, the authors argued, "…suggests that "e-participation" largely remains a method of informing, keeping happy and convincing the public (116)."
After conducting an analysis for the Local e-Democracy National Project in the UK, [START_REF] Pratchett | Barriers to edemocracy: Local government experiences and responses. A report prepared for the Local e-Democracy National Project in the UK[END_REF] found that "Despite the existence of a range of e-democracy tools and some significant experience of using them in different contexts, the penetration and take-up of e-democracy in the UK, as elsewhere, remains limited." Writing about the effect of the Internet on citizen participation in politics in the UK, [START_REF] Ward | Introduction: The potential of the Internet revisited[END_REF] reported only a limited impact. Indeed, they cautioned that, based on the extant evidence, "the Internet per se is unlikely to stimulate widespread mobilization or participation… (215)." [START_REF] Polat | E-citizenship: Reconstructing the public online[END_REF] reviewed the UK's local e-government program that operated between 2000 and 2006, which they argued was "…arguably one of the biggest initiatives of its kind in the world," and found that it largely ignored what the authors called online practices of citizenship and instead favored themes of modernization and efficiency.
Studies in the US have similarly failed to find evidence of the adoption of e-democracy by governments there. Using data from a survey of residents of the state of Georgia, [START_REF] Thomas | E-democracy, e-commerce and eresearch: Examining the electronic ties between citizens and governments[END_REF] categorized citizen visits to government websites as e-commerce, e-research or e-democracy. E-democracy visits were the least frequent. [START_REF] Norris | Electronic Democracy at the American Grassroots[END_REF] conducted focus groups with local officials and found that e-democracy was not a consideration when these governments initiated their e-government efforts nor a part of their future planning for e-government.
After examining planning-related websites among US municipalities with populations of 50,000 and greater, Conroy and Evans-Crowley (2006) found little evidence of the use of e-participation tools. [START_REF] Scott | E" the People: Do U. S Municipal Government Web Sites Support Public Involvement[END_REF] reviewed the websites of the 100 largest US cities and found little evidence that these websites supported "…significant public involvement in accordance with direct democracy theory (349)." Finally, D'Agostino [START_REF] Agostino | A study of g-government and e-governance: An empirical examination of municipal websites[END_REF] reviewed the websites of the 20 largest American cities for their practices of e-government (information and services) and e-governance (participation) and found that information and service delivery predominated and that "…governance applications are only marginally practiced via the Internet (4)."
A number of scholars have conducted comparative studies, mostly concerning e-government and e-democracy initiatives in the US, the UK, European nations, and by the European Union (EU) and the European Commission (EC). These works, like those reviewed above, have also failed to find evidence that governments in those nations have adopted or are practicing e-democracy (see, for example: Anttiroiko (2001); [START_REF] Chadwick | Interaction between states and citizens in the age of the Internet: "E-government" in the United States, Britain and the European Union[END_REF]; [START_REF] Needham | The citizen as consumer: e-government in the United Kingdom and the United States[END_REF]; [START_REF] Zittel | Digital parliaments and electronic democracy: A comparison between the US House, the Swedish Riksdag and the German Bundestag[END_REF]; and [START_REF] Chadwick | Web 2.0: "New challenges for the study of e-democracy in an era of informational exuberance[END_REF]). The principal conclusion that we draw from these empirical studies is that, despite much early enthusiasm, there is little evidence that governments anywhere around the world have adopted or are practicing e-democracy.
4 Research Methods
We study e-democracy at the American grassroots for two important reasons. First, the US has a large number of general purpose local governments -- about 39,000: 19,429 municipalities; 16,504 towns and townships; and 3,034 counties (Census, 2002). Second, local governments are the closest governments to the people and have the greatest direct impacts on people's lives.
To produce the data needed for this study, we contracted with the International City/County Management Association (ICMA) to conduct a survey of e-democracy among American local governments. 2 (For readers from outside of the US, the ICMA is a major and highly respected local government association that, among other things, conducts and publishes considerable research for its members.) The questionnaire that we used for this study is based in part on an e-democracy survey conducted by ICMA in 2006 (Norris, 2006b). Because we wanted to be able to compare the results from our 2011 survey with data from the 2006 survey, we based the 2011 instrument on the instrument from 2006. However, recognizing that much has changed in the world of e-government and e-democracy in the five years between the surveys, we needed to update the 2006 instrument at least somewhat to capture recent e-democracy issues and trends.
Therefore, prior to developing the 2011 instrument, we asked a convenience sample of local Information Technology (IT) directors and Chief Information Officers (CIOs) to review the 2006 instrument and make recommendations to us based on their expert knowledge of local e-democracy developments since then (see Appendix A). Armed with these expert practitioners' suggestions, we worked cooperatively with the ICMA survey research staff to write the 2011 questionnaire. While many of the questions are identical to those in the 2006 survey, we added a number of new questions. In order to keep the length of the survey manageable, as we added new questions to the 2011 instrument, we deleted a nearly equal number from the 2006 instrument. Note that we told survey respondents that, for our purposes, the terms e-participation and e-democracy were synonymous and that, to simplify things for the questionnaire, we used the term e-participation to mean both.
Of 2,287 surveys mailed in 2011, 684 local governments responded, for a response rate of 29.9 percent. This response rate is consistent with other surveys recently conducted by the ICMA, at around 30 percent, although lower than the 36.8 percent response rate of the 2006 survey. ICMA has noticed a decline in responses to its surveys in recent years and attributes this, in part, to the impact of the "Great Recession" and the local staff cutbacks it has caused. As a result, local governments understandably have fewer resources to devote to completing surveys [START_REF] Moulder | Personal Communication[END_REF].
When we examined the responses for representativeness (that is, of the responding governments to US local governments as a whole), we found that local governments with over 1 million in population were underrepresented. Local governments in the Northeast were underrepresented, while those in other regions of the nation were about evenly represented. Among municipalities, the council-manager form of government was substantially overrepresented, while among counties the council-administrator form of government was also overrepresented when compared with governments with elected executives.
5 Findings
We begin by examining whether responding governments had implemented one or more of several possible e-participation activities (Table 1). The first and most important finding from these data is that very few local governments had undertaken any of these e-participation activities. Second, most of the e-participation activities that the governments had undertaken did not provide much, if any, opportunity for meaningful citizen participation, at least by our definition (that is, activities that promote and enhance citizen engagement with and participation in governmental activities, programs and decision-making). Only one e-participation activity had been implemented by more than half of the governments responding to the 2011 survey (enabling citizens to view a hearing or meeting, 68.3 percent; not asked in 2006). While an adoption rate of this magnitude might appear impressive, merely viewing a hearing or meeting hardly constitutes meaningful citizen participation. Far fewer governments (only one in five, 19.8 percent) enabled active citizen participation in meetings or hearings. This does, however, represent a substantial increase over 2006.
Two other activities were reported by nearly half of local governments. The first, enabling citizens to post comments (49.9 percent), was not asked in 2006. The second, enabling citizens to participate in a poll or survey (47.9 percent), was asked in 2006, when a quarter of governments (25.2 percent) said that they had conducted web surveys. Posting comments and responding to surveys or polls represent a type of active citizen participation, but one that is one-way communication (citizen to government).
Next we inquired about why local governments engage in e-participation projects and activities (Table 2). We did not ask this question in 2006. The great majority (82.5 percent) responded that it was "the right thing to do." Although the survey instrument did not delve into the meaning, we suspect that doing the right thing is driven by both professional norms and a public service motivation.
Slightly more than four in ten governments said that both top local administrators (43.8 percent) and local elected officials (43.3 percent) demanded e-participation. About one third (32.3 percent) cited demand by local citizens. Next we asked (2011 only) whether these governments' e-participation projects were mostly one-way from governments to citizens or mostly citizen to government (Table 3). The great majority of governments (71.0 percent) said mostly one-way. Only 2.9 percent said mostly citizen to government, while about one-quarter (26.0 percent) said a combination of one- and two-way. To help understand why so few local governments had adopted e-democracy, we asked about barriers to adoption (Table 4). The top four barriers, all of which were reported by greater than a majority of governments, were lack of funding (83.5 percent, up eight percent from 2006); need to upgrade technology (69.6 percent, up seven percent); lack of technology staff (60.7 percent, down nearly three percent); and concerns about the digital divide (55.7 percent, up nine percent). The second and third of these barriers are directly related to the first, funding. The survey also asked whether local elected officials and local administrators promoted e-participation (Tables 5 and 6). Answers here could also be important to understanding why so few local governments have adopted e-participation. Three in 10 respondents (30.9 percent) to the 2011 survey said elected officials actively promoted e-participation (up 8.7 percent over 2006); a similar fraction (31.1 percent) said that elected officials promoted it some (up 2.9 percent); and 38.0 percent said these officials did not support e-participation (down 11.5 percent). More than four in 10 respondents (43.6 percent, up 8.6 percent over 2006) said that appointed officials actively supported e-participation; one-third (32.2 percent) said they promoted it some (up 3.2 percent); and one-quarter (24.1 percent) said they did not promote it (down 11.6 percent).
Finally, we wanted to know whether these local governments perceived any citizen demand for e-participation (Table 7). This, too, could be important to an understanding of why so few local governments had adopted e-participation. Here we asked whether citizens or grassroots organizations actively pushed for e-participation opportunities. We asked the respondents to answer based on a scale of 1 to 5, with 1 meaning no citizen demand and 5 meaning significant citizen demand. For ease of analysis, we collapsed responses 1 and 2 to mean little or no citizen demand, 3 to mean some citizen demand, and 4 and 5 to mean significant citizen demand. The data suggest a slight trend in the direction of greater citizen demand, but the trend is so small that it could be an artifact of the survey, rather than an indication of anything substantive. The percentage of governments indicating the existence of significant citizen demand nearly doubled between 2006 and 2011, but only from 4.4 percent to 8.2 percent (still minuscule). Those indicating no citizen demand diminished slightly (from 79.8 percent to 72.5 percent). "Some" citizen demand remained at around three in 10 respondents (29.0 percent in 2006 and 32.2 percent in 2011).
6 Findings and Conclusion
The most striking finding from this study is that few American local governments have adopted e-participation, and those that have adopted it have, for the most part, not implemented what we would consider meaningful citizen participation. Data from the 2011 survey strongly suggest two explanatory factors: lack of funding and lack of demand. The responding governments most frequently cited lack of funding as a barrier to their adopting e-participation in both 2006 and 2011. Respondents also cited the need to upgrade technology, lack of technology staff, difficulty justifying costs, and lack of technology expertise as barriers -- all of which are directly related to lack of funding. This finding is also consistent with surveys of local e-government in the US, where lack of funding nearly always tops the list of barriers to adoption (e.g., [START_REF] Coursey | Models of E-Government: Are They Correct? An Empirical Assessment[END_REF]).
A second important reason for the lack of local e-democracy in the US may well be lack of demand -- from local officials and citizens alike. When asked about barriers to e-participation, 46 percent of local governments cited lack of demand by citizens and 42 percent said lack of demand by elected officials. Moreover, only three in 10 respondents felt that elected officials actively promoted e-participation, and about four in 10 said appointed officials did so. And only about a quarter of governments perceived any citizen demand at all.
Finally, the literature on e-government increasingly points to the probability that early predictions for e-government were simply wrong. In part, they were technologically deterministic [START_REF] Coursey | Models of E-Government: Are They Correct? An Empirical Assessment[END_REF], and they were also based on a lack of, or an incomplete understanding of, the prior relevant literature [START_REF] Coursey | Models of E-Government: Are They Correct? An Empirical Assessment[END_REF][START_REF] Kraemer | Information Technology and Administrative Reform: Will E-Government be different?[END_REF][START_REF] Danziger | The Impacts of Information Technology on Public Administration: An Analysis of Empirical Research from the Golden Age of Transformation[END_REF]. Whatever the causes, the reality is that there is very little e-democracy among US local governments. Based on the available evidence, we suspect that the state of e-democracy at the American grassroots is not likely to change much in the foreseeable future (see also, Norris 2010). Moreover, based on our reading of the empirical studies of e-democracy, we strongly suspect that the state of local e-democracy in the US is more similar to than it is different from that of local e-democracy elsewhere in the world. Of course, only further study will allow us to support or reject these suspicions.
Table 1. Has your local government done any of the following electronically within the past 12 months?
                     2006          2011
                     N      %      N      %
Table 2. Why does your local government engage in e-participation? (2011)
                     N      %
Table 3. Are your local government's e-participation projects and activities today mostly communication from the government to citizens or mostly from citizens to government? (2011)
                     N      %
Table 4. Barriers
                     2006          2011
                     N      %      N      %
Table 5. Elected officials promote e-participation?
                     2006          2011
                     N      %      N      %
Don't promote        363    49.5   243    38.0
Promote some         207    28.2   199    31.1
Actively promote     163    22.2   198    30.9
Table 6. Top appointed officials promote e-participation?
                     2006          2011
                     N      %      N      %
Don't promote        260    35.9   152    24.1
Promote some         210    29.0   203    32.2
Actively promote     253    35.0   275    43.6
Table 7. Are citizen groups actively pushing for e-participation?
                     2006          2011
                     N      %      N      %
[START_REF] Amoretti | International Organizations ICTs Policies: E-Democracy and E-Government for Political Development[END_REF]We wish to thank UMBC"s Research Venture Fund and the College of Public Policy research grant at UTSA that enabled us to conduct the survey that produced the data on which this paper is based.
Appendix A Expert Practitioners
We wish to acknowledge and express our appreciation to the following local government officials who reviewed the 2006 survey instrument and provided comments and suggestions that we then used in developing the 2011 instrument. Any errors or omissions are those of the authors and in no way reflect on these officials or their advice.
| 24,453 | ["1004260", "1004261"] | ["61727", "335543"] |
01491231 | en | ["shs", "info"] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01491231/file/978-3-642-40346-0_6_Chapter.pdf |
Robin Effing
email: [email protected]
Jos Van Hillegersberg
email: [email protected]
Theo Huibers
email: [email protected]
Social Media Participation and Local Politics: a Case Study of the Enschede Council in the Netherlands
Keywords: social media, council, politics, participation, web 2.0
Social media such as Facebook, Twitter and YouTube are often seen as political game changers. Yet little is known of the effects of social media on local politics. In this paper the Social Media Participation Model (SMPM) is introduced for studying the effects of social media on local political communities. The SMPM aims to explore the relationship between Social Media Participation and Community Participation. The model comprises four constructs: Social Media Choice, Social Media Use, Sense of Community and Community Engagement. The design of the case study was based on the SMPM, and the study took place among the members and parties of the council of Enschede, a large municipality in the Netherlands. Social media participation levels were measured and compared with the Social Media Indicator (SMI). A negative correlation between Social Media Use and Sense of Community was discovered. However, we could not find a causal effect that explains this correlation. To analyze the effects in more detail, we show directions for further improvement of the model.
Introduction
Social media change the game of politics on both a national and a local scale. Politicians increasingly use social media such as Facebook, Twitter, blogs, YouTube and LinkedIn. Recent political events showed that social media influence the rules of political participation today. During the "Arab spring" in 2011, social media allowed social movements to reach once-unachievable goals, eventually leading to the fall of oppressive regimes [START_REF] Howard | The role of digital media[END_REF]. In presidential elections, the cases of Barack Obama (US) and Ségolène Royal (France) show that effective social media campaigns can make a difference in politics [START_REF] Christakis | Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives[END_REF][START_REF] Citron | Fulfilling Government 2 . 0 ' s Promise with Robust Privacy Protections[END_REF][START_REF] Greengard | The First Internet President[END_REF][START_REF] Lilleker | Political Parties and Web 2 . 0 : The Liberal Democrat Perspective[END_REF][START_REF] Montero | Political e-mobilisation and participation in the election campaigns of Ségolène Royal (2007) and Barack Obama[END_REF][START_REF] Talbot | How Obama Really Did It: The Social-networking Strategy that took an Obscure Senator to the Doors of the White House[END_REF][START_REF] Ren | Drawing Lessons from Obama for the European Context[END_REF][START_REF] Zhang | The Revolution Will be Networked[END_REF]. A US Congress Facebook message increased voter turnout by 340,000 voters [START_REF] Kiderra | Facebook Boosts Voter Turnout[END_REF].
Yet we know little about which social media strategies contribute to political party communities and which do not. The Twitter campaign from the CDU (Germany), for instance, did not result in high levels of reach and engagement [START_REF] Jungherr | Online Campaigning in Germany : The CDU Online Campaign for the General Election 2009 in Germany[END_REF].
Furthermore, more research should be carried out to understand how social media affects local politics. Local politicians may think that integrating social media in their political work is easy. However, an effective social media strategy requires more than just creating profiles to have a presence on social media. "Considering the novel culture of social media and the shift in power relations, the internalization of social media expertise within an organization may prove to be a much harder task than expected" [START_REF] Munar | Destination Management Social Media Strategies and Destination Management[END_REF]. Beyond the national election cases, more attention should be paid to the effects of social media on a local scale as well. Local concerns should be an explicit part of the social media strategy in order to be effective [START_REF] Bottles | Who should manage your social media strategy?[END_REF][START_REF] Berthon | Marketing meets Web 2.0, social media, and creative consumers[END_REF].
In order to maximize the impact of time and effort spent on social media, members of local councils would like to understand the effects of these tools on their work and political communities. Municipalities and their councils are relatively near to the citizens. Social media can potentially help people to establish and foster authentic relations with each other [START_REF] Bottles | Who should manage your social media strategy?[END_REF]. However, little is known about the effects of social media participation by politicians on such local political communities. Political party communities are relational communities for a professional cause and are not necessarily territorially bounded [START_REF] Mcmillan | Sense of community: A definition and theory[END_REF]. The members of political parties are engaged in their communities because of shared beliefs, goals or interests. Does social media participation by members of the party contribute to a stronger party-community? Or is the opposite true? Some parties make social media participation too much of a goal in itself, without any underlying strategy. Such initiatives seem destined to fail.
The aim of this paper is to investigate social media effects within local politics and to learn from local practices. To achieve our goal, we conducted a case study based on the Social Media Participation Model. This model can be used for exploring causal effects of social media participation on communities. The model is still in an early stage of development. By applying this model as the theoretical lens for a case study we can both validate the model and increase understanding of the effects of social media.
Only a few models and methods are aimed at understanding the effects of social media participation. The Unified Theory of Acceptance and Use of Technology (UTAUT) from Venkatesh et al. [START_REF] Venkatesh | User Acceptance of Information Technology: Toward a Unified View[END_REF], [START_REF] Curtis | Adoption of social media for public relations by nonprofit organizations[END_REF] is known to have been used to study social media acceptance. However, this theory, and related ones such as the Technology Acceptance Model (TAM), focus merely on the adoption of technology and do not capture the effects of use. Other theoretical frameworks from the field of e-participation, such as the participation ladder from Macintosh [START_REF] Macintosh | Citizen Participation in Public Affairs[END_REF][START_REF] Grönlund | ICT Is Not Participation Is Not Democracy -eParticipation Development Models Revisited[END_REF][START_REF] Medaglia | Measuring the diffusion of eParticipation : A survey on Italian local government[END_REF][START_REF] Sommer | Participation 2.0: a Case Study of e-Participation In[END_REF], do help to place social media use against a theoretical background, but are too abstract to investigate effects in detail. Therefore, we designed the Social Media Participation Model (Figure 1), aimed at capturing the effects of Social Media Participation on Community Participation. We conducted a case study within the Enschede council and its members based on this model. The municipality of Enschede is located in the eastern part of the Netherlands and has more than 150,000 citizens.
The remainder of this paper is structured as follows. First, we will introduce the Social Media Participation Model. Second, we will clarify our methodology. Third, we will share results from the case of the local government in Enschede. Finally, we will discuss our observations and we will present our future research agenda.
Introducing the Social Media Participation Model
Since models to study the effects of social media within the non-profit sector are still scarce [START_REF] Effing | Measuring the Effects of Social Media Participation on Political Party Communities[END_REF], we decided to design the Social Media Participation Model for this purpose. This model is aimed at exploring the relationship between Social Media Participation on the one hand and Community Participation on the other hand (Figure 1).
Fig. 1. The Social Media Participation Model
The model takes a high-level approach to a complex reality of social behavior of politicians in both the online and offline world. This means that there can be many (causal) relationships between the included concepts. However, the model concentrates on the assumed causal relation between Social Media Participation by politicians and their Community Participation. We assume that being active on social media affects to some extent the community participation of a politician.
We have three grounds to assume the causality. First, the number of relationships between people tends to increase when people use social network sites, because these sites reveal relationships by making them transparent [START_REF] Boyd | Social Network Sites: Definition, History, and Scholarship[END_REF]. As a result, users of social media tend to make more connections with each other, bridging relationship networks [START_REF] Tomai | Virtual communities in schools as tools to promote social capital with high schools students[END_REF]. Second, social media do not completely replace offline communication, but augment it, reducing the transaction costs of communication [START_REF] Ren | Drawing Lessons from Obama for the European Context[END_REF], [START_REF] Vergeer | Consequences of media and Internet use for offline and online network capital and well-being. A causal model approach[END_REF]. Third, by taking part in online communities, people become more aware of their connections to others in the community, which leads to a stronger bonding to the community in general [START_REF] Tomai | Virtual communities in schools as tools to promote social capital with high schools students[END_REF].
After a literature review [START_REF] Effing | Measuring the Effects of Social Media Participation on Political Party Communities[END_REF] regarding social media, participation and communities, we decided to derive four more specific constructs from the two concepts in the model.
Social Media Choice
According to Kaplan and Haenlein, social media "is a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of User Generated Content" [START_REF] Kaplan | Users of the world, unite! The challenges and opportunities of Social Media[END_REF]. While some politicians start using social media just because they feel they cannot stay behind, others approach them as being part of underlying communication strategies. The choice of certain social media out of the vast number of available social media channels can depend on multiple factors, but: "Nothing impacts the success of a Social Media effort more than the choice of its purpose" [START_REF] Bradley | Social Media Success Is About Purpose[END_REF]. However, not all communication by social media is appropriate for all communication strategies. Therefore, we have to take the choice and appropriateness of social media into account when determining variations in impact on the dependent concept of Community Participation.
After a literature review [START_REF] Effing | Measuring the Effects of Social Media Participation on Political Party Communities[END_REF], we selected the theories of Social Presence [START_REF] Short | Social Psychology of Telecommunications[END_REF], Media Appropriateness [START_REF] Daft | Organizational Information Requirements, Media Richness and Structural Design[END_REF], [START_REF] Rice | Media appropriateness: using social presence theory to compare traditional and new organizational media[END_REF] and the Theory of Cognitive and Affective Organizational Communication [START_REF] Te'eni | Review: A Cognitive-Affective Model of Organizational Communication for Designing IT[END_REF] to warrant Social Media Choice as a construct in the model. All of these theories have shortcomings and should not be applied too rigidly. Nevertheless, they provide us theoretical backgrounds for media choice and communication strategies.
The expected capacity of the social media channels regarding social presence [START_REF] Short | Social Psychology of Telecommunications[END_REF] and interaction [START_REF] Daft | Organizational Information Requirements, Media Richness and Structural Design[END_REF] can influence the choice made by a political party or its members. Social presence is "the degree to which a medium is perceived as conveying the presence of the communicating participants" [START_REF] Short | Social Psychology of Telecommunications[END_REF]. While the theory was initially created for telecommunications, it is currently used for social media as well [START_REF] Kaplan | Users of the world, unite! The challenges and opportunities of Social Media[END_REF]. For different forms of media, differences exist in their capacity to transmit immediate feedback, the interaction capacity [START_REF] Daft | Organizational Information Requirements, Media Richness and Structural Design[END_REF]. Different communication strategies (for example, exchanging information, problem solving and generating ideas [START_REF] Short | Social Psychology of Telecommunications[END_REF], or contextualization, affectivity or control [START_REF] Te'eni | Review: A Cognitive-Affective Model of Organizational Communication for Designing IT[END_REF]) can require different choices in social media channels (e.g. Twitter for a higher level of interaction, YouTube for a higher level of social presence). There can be differences in the effectiveness of social media use for different purposes. The choice can even be inappropriate for certain communication tasks [START_REF] Kaplan | Users of the world, unite! The challenges and opportunities of Social Media[END_REF], [START_REF] Rice | Media appropriateness: using social presence theory to compare traditional and new organizational media[END_REF]. Therefore, the strategy and choice determine for a large part the effectiveness of social media for communication goals. Decisions can be made on both the individual and the group level. Sometimes decisions are based on a strategy plan considering goals, media-channel choice, target audiences and local concerns [START_REF] Munar | Destination Management Social Media Strategies and Destination Management[END_REF][START_REF] Bottles | Who should manage your social media strategy?[END_REF][START_REF] Berthon | Marketing meets Web 2.0, social media, and creative consumers[END_REF].
To explore to what extent politicians use these social media channels, we introduce the next construct of the model: Social Media Use, which is a more quantitative approach.
Social Media Use
We decided to create an instrument, the Social Media Indicator, with metrics based on the participation ladder of Macintosh, including a distinction between e-Enabling and e-Engaging. Macintosh [START_REF] Macintosh | Citizen Participation in Public Affairs[END_REF][START_REF] Grönlund | ICT Is Not Participation Is Not Democracy -eParticipation Development Models Revisited[END_REF][START_REF] Medaglia | Measuring the diffusion of eParticipation : A survey on Italian local government[END_REF][START_REF] Sommer | Participation 2.0: a Case Study of e-Participation In[END_REF] created a three-step participation ladder, which can be applied to the social media phenomenon from a high-level perspective. The first step on the ladder is e-Enabling. At this step, party members provide access and information to citizens. The second step is e-Engaging. At this step, politicians react, have conversations and interact with citizens based on dialogue. The third step is e-Empowering. At this step, citizens are invited to take part in the political activities. Politicians start working together with citizens, empowering them with responsibilities, tasks and opportunities to collaborate with the party's community. Previous efforts at trying to empower citizens often failed because of low levels of citizen engagement with electronic tools and other technological and democratic shortcomings [START_REF] Phang | A Framework of ICT Exploitation for E-Participation Initiatives[END_REF][START_REF] Roeder | Public Budget Dialogue " -An Innovative Approach to E-Participation[END_REF][START_REF] Stern | Web-based and traditional public participation in comprehensive planning[END_REF]. As social media mature, the question remains whether social media will eventually lead to the step of e-Empowering. This step is left out of the instrument because it is too difficult to recognize by direct metrics without additional content analysis. In the methods section, we present our instrument for the measurements. We now continue with the dependent side of the causal model: community participation.
Sense of Community
Community participation has both a tacit and an apparent construct. We address the tacit construct as Sense of Community (SOC), which is: "a feeling that members have of belonging, a feeling that members matter to one another and to the group and a shared faith that members' needs will be met through their commitment to be together" [START_REF] Mcmillan | Sense of community: A definition and theory[END_REF]. The Sense of Community can be further divided into four elements [START_REF] Mcmillan | Sense of community: A definition and theory[END_REF]: membership, influence, reinforcement and shared emotional connection. The importance of these four elements can vary depending on the type of the community [START_REF] Mcmillan | Sense of community: A definition and theory[END_REF]. The theory can be used for studying and comparing different kinds of communities, including political parties and council communities.
Community Engagement
Community engagement is the more apparent construct of community participation.
The construct reflects the actual behavior of community members, such as time spent in the community and existing communication ties between members. Since communities are networks of people, the communication ties between politicians can also reflect their actual engagement levels. Christakis and Fowler [START_REF] Christakis | Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives[END_REF] found, for instance, that being connected to each other in a social network influences political party campaigns, voting and co-sponsorship within politics. The social network is more of a group characteristic and makes sense on a higher level: the community level instead of the individual level. Community engagement can be approached from various abstraction levels. In our case we distinguish the individual level and the network level. All four constructs have been explained and we can continue with the methodology that underpinned our case study.
Methodology
The proposed design we present here is based on comparative case study research [START_REF] Talbot | How Obama Really Did It: The Social-networking Strategy that took an Obscure Senator to the Doors of the White House[END_REF], including both quantitative and qualitative data collection techniques.
We propose a multi-level approach for studies, incorporating both the individual level (e.g. the politician) and the group level (e.g. the political faction). The level of inquiry is the individual level. To recognize effects, it is necessary to conduct multiple measurements with time intervals in between.
Social Media Choice: Qualitative Interviews
A selection of members from all parties in the Enschede council were invited for face-to-face semi-structured open interviews. Specific members were selected for interviews in collaboration with the municipality of Enschede. The questions were partly exploratory, for getting to know their existing strategy plans. Another part of the interviews was based on the theories of Short et al. [START_REF] Short | Social Psychology of Telecommunications[END_REF], Rice [START_REF] Rice | Media appropriateness: using social presence theory to compare traditional and new organizational media[END_REF] and Te'eni [START_REF] Te'eni | Review: A Cognitive-Affective Model of Organizational Communication for Designing IT[END_REF] and was more directly aimed at understanding their choices for their social media practices. Eight parties accepted the invitation. All interviews were recorded, transcribed and analyzed.
Social Media Use: Quantitative Metrics of the Social Media Indicator
The Social Media Indicator (SMI) has been developed to compare how actively community members use social media [START_REF] Effing | Measuring the Effects of Social Media Participation on Political Party Communities[END_REF]. This instrument was used in five prior studies, mostly regarding social media use by political candidates and election outcomes. The division between e-Enabling and e-Engagement is based on the e-Participation ladder from Macintosh [START_REF] Macintosh | Citizen Participation in Public Affairs[END_REF][START_REF] Grönlund | ICT Is Not Participation Is Not Democracy -eParticipation Development Models Revisited[END_REF][START_REF] Medaglia | Measuring the diffusion of eParticipation : A survey on Italian local government[END_REF][START_REF] Sommer | Participation 2.0: a Case Study of e-Participation In[END_REF]. Due to privacy settings and application programming interface (API) limitations, some potentially valuable metrics are excluded from the Social Media Indicator, such as Wall Posts on Facebook. Nevertheless, the SMI provides us with indicative scores. The metrics of the SMI are presented in Table 1 and are based on social media reach numbers from market researchers ComScore and NewCom.
Other social media with high reach can be added for specific studies. The symbol # means "the number of". Scores can be calculated with the metrics of the SMI to indicate the use of members. We calculated scores for all members of the Enschede council based on the profile information they provided us through an online questionnaire.
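As an illustration, the SMI calculation can be sketched in a few lines of Python. The metric names, the example profile and the assignment of # followers to the e-Enabling side all follow our reading of Table 1 and are illustrative assumptions rather than part of the published instrument.

# Minimal sketch of an SMI calculation; metric names are illustrative.
ENABLING = ["friends", "tweets", "followers", "videos", "connections", "posts"]
ENGAGING = ["likes", "following", "retweets", "replies_mentions",
            "comments", "recommendations", "blog_replies"]

def smi_score(profile):
    # Sum the e-Enabling and e-Engagement counts of one member's profile;
    # missing metrics simply count as zero.
    contribution = sum(profile.get(metric, 0) for metric in ENABLING)
    interaction = sum(profile.get(metric, 0) for metric in ENGAGING)
    return {"contribution": contribution,
            "interaction": interaction,
            "smi": contribution + interaction}

member = {"tweets": 1200, "followers": 800, "following": 400,
          "retweets": 35, "replies_mentions": 60}
print(smi_score(member))  # {'contribution': 2000, 'interaction': 495, 'smi': 2495}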
Sense of Community: Questionnaire with 24 Statements
To measure the Sense of Community we make use of the SCI-2 instrument [START_REF] Chavis | The Sense of Community (SCI) Revised: The reliability and Validity[END_REF]. It consists of 24 statements that respondents can respond to on a Likert scale, and it provides a standardized scoring instruction to evaluate belonging, influence, reinforcement and shared emotional connection. We sent this questionnaire to all members of the Enschede council. The total SCI-2 score gives an assessment of the individual sense of community of a member. We asked them about both the overall council community and their own party community as a part of the Enschede council. Because the scale is 24 * 3 points, there is a maximum of 72 points for the SCI-2.
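A minimal scoring sketch follows, assuming the usual SCI-2 coding of 0 ("Not at all") to 3 ("Completely") per statement and six statements per element; the exact item-to-element grouping shown here is an assumption for illustration.

# SCI-2 scoring sketch: 24 items coded 0-3, so totals range from 0 to 72.
SUBSCALES = {
    "reinforcement_of_needs": range(0, 6),   # assumed item grouping
    "membership": range(6, 12),
    "influence": range(12, 18),
    "shared_emotional_connection": range(18, 24),
}

def sci2_scores(responses):
    # responses: list of 24 integers in 0..3, one per SCI-2 statement.
    assert len(responses) == 24 and all(0 <= r <= 3 for r in responses)
    scores = {name: sum(responses[i] for i in items)
              for name, items in SUBSCALES.items()}
    scores["total"] = sum(responses)  # maximum 24 * 3 = 72
    return scores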
Community Engagement: Questionnaire and Social Network Analysis
In the questionnaire we inquired about the average time members spent per week on their council affiliation. While such a question does not lead to very reliable information, since members can exaggerate or have different ways of counting, it does help to understand how actively the politicians perceive their own engagement. If there are more reliable ways to obtain the engagement, these should be preferred. Additionally, we created social network diagrams of the primary communication relationships within the community. These network diagrams can be made after asking each member for a top-five list of other members of the council with which they communicate the most. This helps to understand how communication, power and influence within a community are distributed [START_REF] Christakis | Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives[END_REF].
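How such diagrams can be derived from the questionnaire answers is sketched below with the networkx library (the figures in this study were produced with Gephi); the input format is our assumption.

import networkx as nx

def build_council_network(top_fives, parties):
    # top_fives: member -> list of up to five most-contacted colleagues.
    # parties: member -> party name, stored as a node attribute for shading.
    graph = nx.DiGraph()
    for member, contacts in top_fives.items():
        graph.add_node(member, party=parties.get(member, "unknown"))
        for contact in contacts:
            graph.add_edge(member, contact)
    return graph

# Fragmentation can then be inspected, for example by counting weakly
# connected components or edges that cross party lines.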
Data Analysis, Statistics and Social Networking Analysis
We analyzed our four constructs and relationships as follows. For the quantitative constructs we applied regular forms of statistical analysis (means, graphs and standard deviations). For the qualitative interviews we took an exploratory approach to capture motivations and underlying reasons for social media choices. Social network analysis software, such as Gephi, was used to create network diagrams of the existing ties between members. Furthermore, to discover effects, we applied various statistical methods with SPSS to explore possible relationships between the constructs.
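For instance, the rank correlation reported in the results section can be reproduced outside SPSS with scipy; the variable names below are placeholders.

from scipy.stats import spearmanr

def use_versus_sense_of_community(smi_totals, sci_totals):
    # Spearman's rho between members' SMI and SCI-2 totals (paired lists).
    rho, p_value = spearmanr(smi_totals, sci_totals)
    return rho, p_value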
Case Study Results from the Enschede Council
The municipality of Enschede, located in the eastern part of the Netherlands with more than 150,000 citizens, was interested in how social media affected the work of politicians within their council. In April 2011 the initiative was taken to start this research project. The case study was conducted between June 2011 and March 2013. The 39 members of the Enschede council were elected in March 2010. The members represent nine different parties (political factions).
The design of the case study was based on the Social Media Participation Model. The researchers took the role of observers and did not interfere with any social media planning or helping the candidates. Two measurements were carried out regarding the entire population of the council (n=39). The first measurement (T1) was from November 2011 until April 2012 (n = 29, response rate 74%). The second measurement (T2) was from October 2012 until December 2012 (n = 26, response rate 67%). We will now present results from this study.
Social Media Choice
Twitter is the preferred social platform of the interviewees of the council. The members believe Twitter can contribute the most towards increasing political participation of citizens. However, members stress the importance of "the physical side of communication … it is important to keep having conversations" (Interviewee). And, based on their experiences, the members do not think that social media is revolutionary for local politics: "Twitter did not deliver the miracle we hoped for in advance". Only one of the nine political parties prepared a social media strategy. Some parties had a few loosely defined agreements about what they do with social media. Generally, the parties did not approach social media strategically: "we are in the end amateurs, we just do something, in our free time … we would like social media strategies, but we need external help for that" (Interviewee).
One interviewee, one of the most active social media users of the council, mentioned difficulties with interaction: "during the past months where I have been spokesman on Facebook, I have created links to the documents we discuss so that people can read them and you would like to see interaction as a result, but that does not happen."
Social Media Use
Based on the Social Media Indicator we discovered that 93% of the members of the Enschede council use social media (n=28). 93% of all members use LinkedIn, and 82% use Twitter. The second measurement captures a shorter period of time (April to November) and shows increased levels of interaction (e-Engagement) and fewer differences between members in comparison with measurement one (figure 3). The highest score in the second measurement was 7,598, from the same member as in the first measurement.
Sense of Community
One of the four factors of the Sense of Community was relatively low for all members: the shared emotional connection. This makes sense, since the parties (fractions) are primarily professional communities and offer fewer incentives for emotional bonding.
Community Engagement
The average member of the Enschede council spends 23.8 hours per week (n=20, Std. Dev. 7.9) on his or her job. The social network diagrams in figure 5 show the primary communication ties between members. Every dot is a person and every line a connection. Different shades of grey refer to different parties. In the diagrams, we can see that the community of the Enschede council is less fragmented in November, indicating a more connected council community.
Analysis of Relationships between Concepts
We found a statistically significant negative correlation between Social Media Use and Sense of Community of members. The Spearman's rho correlation is -.454* (*significant at the 0.05 level, n=23). This means that, on average, members who are relatively more active users of social media have relatively lower scores on sense of community. This correlation remained present in the second measurement. We also analyzed effects from SMI on SCI. We checked for variance (with a split-plot ANOVA with repeated measures) based on splitting the council into two groups: one group of frequent social media users (SMI above 1,000) and a control group. No variance could be proven that would signal a causal effect in the Enschede council. This analysis also showed that the group of frequent social media users had a lower sense of community than the control group of infrequent users.
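As a sketch of how this check could be reproduced outside SPSS, the pingouin library offers a mixed (split-plot) ANOVA; the column names are placeholders and only the SMI > 1,000 grouping threshold is taken from the text.

import pandas as pd
import pingouin as pg

def split_plot_anova(df: pd.DataFrame) -> pd.DataFrame:
    # df: one row per member per measurement, with columns
    # 'member', 'time' ('T1' or 'T2'), 'sci' and 'smi_total'.
    df = df.copy()
    frequent = df.groupby("member")["smi_total"].transform("max") > 1000
    df["group"] = frequent.map({True: "frequent", False: "infrequent"})
    # 'time' is the within-subject factor, 'group' the between-subject factor.
    return pg.mixed_anova(data=df, dv="sci", within="time",
                          between="group", subject="member")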
Discussion and Future Research
In the case of the Enschede council we see that social media participation by political parties and their members has not yet made much of a difference to the political game. The parties seem to struggle with finding ways to use social media for their own benefit. In Enschede, the parties have not yet professionalized their social media campaigns. Furthermore, strategic approaches considering social media choice, goals, target audience selection and local concerns are still to be defined. We found a negative correlation (-.454) between social media use and sense of community. However, during the period of the research, the sense of community of members was not influenced by the social media use. This leaves us with a paradox. What does cause the negative correlation? Do members who already have lower levels of sense of community tend to use social media more? This may be the case if there is less bonding with colleagues. Or is it valuable to be connected to others outside the party fraction, bridging with citizens and organizations? These questions still remain unanswered.
The model can be further refined to recognize more precisely how social media affects political communities such as the Enschede council and its parties. By working with the Social Media Participation Model and the specific design of the case study we encountered four limitations. First, monitoring the use of social media with this instrument has limitations for representing the real-world behavior of members. Members could have use scores that actually reflect a different background, or a large offline network could inflate the SMI score. Second, the Sense of Community cannot be separated into online and offline components, and it can span various echelons (such as local versus national communities). Third, the decision for members whether or not to participate in social media sometimes depends on a higher authority level in the political party, exceeding the communication strategies as provided in our model. Fourth, the SMPM is based on a linear causal view, while in reality the constructs can also influence each other in cycles, and Community Participation may in turn influence Social Media Participation.
In the near future we will finish measuring two other types of non-profit communities: church communities and charities. Additionally, we will develop a tool to automatically retrieve SMI scores for social media use. We have also planned to integrate more social media strategy theory into our work. We encourage other scholars to contribute to more refined models and methods to investigate how social media affects local politics and their communities.
Figure 2 shows a chart of the social media use by all members of the council, sorted from high to low (entire history of use until April 2012). The lighter areas in the bars indicate the part of the communication that is interaction (e-Engagement), while the darker areas indicate contribution (e-Enabling). The highest SMI score of a member in this measurement was 19,141.
Fig. 2. Social Media Use (sorted high to low) of members until April 2012
Fig. 3. Social Media Use (sorted high to low) of members from April 2012 to November 2012
Fig. 4. Measurement of SCI of members in April (left) and November (right) 2012.
Fig. 5. Social network diagrams, April (left) and November (right) 2012.
Table 1. The Social Media Indicator.
Social media       e-Enabling               e-Engagement
Facebook profile   # friends                # likes
Twitter account    # tweets, # followers    # following, # retweets*, # replies and mentions*
YouTube channel    # videos                 # comments
LinkedIn           # connections            # recommendations
Blog               # posts                  # replies
Total              Sub score Contribution   Sub score Interaction
TOTAL SMI = SUB SCORE CONTRIBUTION + SUB SCORE INTERACTION
*of the last 200 Tweets (to limit contribution to total score).
Acknowledgements. The case study was initiated in collaboration with the Municipality of Enschede and the broader research project is supported and funded by Saxion University of Applied Sciences, Enschede, The Netherlands.
| 33,529 | ["1004262", "1004263", "1004264"] | ["303060", "303060", "303060"] |
01491232 | en | ["shs", "info"] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01491232/file/978-3-642-40346-0_7_Chapter.pdf |
Eirik Rustad
email: [email protected]
Øystein Saebø
email: [email protected]
How, Why and with Whom Do Local Politicians Engage on Facebook?
Keywords: eParticipation, political engagement, social media, Facebook, case study
This article focuses on how, why and with whom local politicians engage on Facebook. Based on a literature review of the public sphere, eParticipation and research related to social media, we propose a theoretical framework that identifies thematic areas integral to understanding the nature of political participation. The explanatory potential of our 'ENGAGE' model (Exchange, Narcissist, Gather, Accented, General and Expense) is exemplified by conducting a qualitative case study focusing on politicians in a local municipality in southern Norway. The findings indicate various uses of Facebook among the respondents, and a dissonance between what the politicians state as being important (engaging in dialogue with citizens) and what they really do (posting statements). We conclude our paper by discussing the use and usefulness of our proposed model, and by summarising how, why and with whom local politicians use social media.
Introduction
Despite a growing research interest in the use of social media in the area of eParticipation [START_REF] Hong | Which candidates do the public discuss online in an election campaign?: The use of social media by 2012 presidential candidates and its impact on candidate salience[END_REF][START_REF] Linders | From e-government to we-government: Defining a typology for citizen coproduction in the age of social media[END_REF], more work is needed to understand the role of politicians in this capacity. Most studies within the field of eParticipation focus on citizens' roles, whereas the role of politicians is emphasised to a lesser degree [START_REF] Saebø | The shape of eParticipation: Characterizing an emerging research area[END_REF]. This paper focuses on the politicians by exploring how, why and with whom local politicians engage on Facebook.
Based on current literature and empirical findings from an exploratory case study, we introduce an explanatory framework entitled "ENGAGE". Understanding politicians' use of social media is essential to understanding how political discourse among citizens, politicians and other external stakeholders may influence and impact decision-making processes. Facebook is currently the most common social media platform, with most age groups now well represented and with more than one billion members globally. Thus, we choose to focus on politicians' use of Facebook for our research purpose. By doing so, we attempt to understand how and why politicians and citizens alike engage on a technical platform that upholds many of the characteristics normally associated with formal eParticipation efforts.
Habermas' ideas represent our point of departure to understand how communication between politicians and citizens enhances democracy. Although Habermas has been criticised for leaving organisations and politicians out of the mix [START_REF] Westling | Expanding the public sphere: The impact of Facebook on political communication[END_REF], many recent eParticipation efforts are based on norms and a theoretical backdrop strongly influenced by Habermasian ideas. We argue that the theory of 'the public sphere' [START_REF] Habermas | The Structural Transformation of the Public Sphere[END_REF] is a valuable point of departure to understand eParticipation initiatives. The public sphere is a separate common ground where ordinary citizens can enlighten each other through discussions and find common causes that transform into real politics through the intervention of traditional media, which set the agenda to which politicians must adhere and respond. Our framework aims to apply Habermas' normative concept to a modern-day view of society, where social media plays a major role. Social media provide users the ability to interact, collaborate, contribute and share online content [START_REF] Mcgrath | Exploring the Democratic Potential of Online Social Networking: The Scope and Limitations of E-Participation[END_REF], and to communicate and maintain their networks [START_REF] Medaglia | Characteristics of social networking services[END_REF]; the rapid growth in use and number of members increases the importance of understanding their effects on society and people.
We propose a theoretical framework that identifies thematic areas integral to understanding the nature of political participation. By doing so, we aim to identify different attitudes and motivations which are important to understanding various forms of engagement. We illustrate the explanatory potential of the framework by conducting a qualitative case study analysing local politicians' use of Facebook in a Norwegian municipality.
Theoretical approaches
Our 'ENGAGE' framework, introduced below, is mainly based upon Habermas' theory of 'the public sphere' [START_REF] Habermas | The Structural Transformation of the Public Sphere[END_REF], the discourse on how technology influences democracy (see [START_REF] Päivärinta | Models of e-democracy[END_REF] for a more detailed discussion), and research focusing on the use of social media in the eParticipation area [START_REF] Hong | Which candidates do the public discuss online in an election campaign?: The use of social media by 2012 presidential candidates and its impact on candidate salience[END_REF][START_REF] Linders | From e-government to we-government: Defining a typology for citizen coproduction in the age of social media[END_REF][START_REF] Mcgrath | Exploring the Democratic Potential of Online Social Networking: The Scope and Limitations of E-Participation[END_REF]. ENGAGE is an acronym that helps identify important thematic characteristics in order to gain knowledge of politicians' motivations for using social media. The framework focuses on thematic areas that are related to the theory of the public sphere and eParticipation by finding answers to certain key questions: Who is engaging with each other? What are the outcomes for citizen input? Why is the politician participating? ENGAGE borrows its structure from 'SLATES' (Search, Links, Authoring, Tags, Extensions and Signals) [START_REF] Mcafee | Enterprise 2.0: The dawn of emergent collaboration[END_REF]. [START_REF] Mcafee | Enterprise 2.0: The dawn of emergent collaboration[END_REF] calls attention to components that should be included in an understanding of 'Enterprise 2.0'. By emphasising which new technologies could enhance effective knowledge sharing within enterprises, this framework helps simplify the thematic areas that are critical to successful operation in a new world. Hence, even though the two schemes do not share a focus on the use of social media in the eParticipation area, the structure of McAfee's (2006) work nevertheless proved important for our purpose.
The framework for our study is presented below, followed by a summary model demonstrating connections between the framework and current streams of research. The individual categories of the ENGAGE model were devised by reviewing literature discussing dimensions of the public sphere theory and literature discussing eParticipation, and introduce a conceptual representation of important aspects related to 'Web 2.0' technologies, eParticipation and the public sphere. The scope and length of this paper limit the possibility to discuss in detail the theories upon which the framework is based.
ENGAGE
The framework focuses on thematic areas necessary to understand politicians' behaviour on social media. Building on major themes of the public sphere within an eParticipation context, 'ENGAGE' aims to improve our understanding of the nature of politicians' participation.
E -Exchange.
Communication is the core of a thriving democracy [START_REF] Habermas | The Structural Transformation of the Public Sphere[END_REF]. Whether the politician wears his or her private 'hat' or engages in political activity is essential to understanding his or her activity online. In this regard, we do not value exchanges of, for example, cake recipes as equal to an exchange of opinions in a political debate. Thus, centring on exchange involves focusing on the content and context of the online communication exchange.
N -Narcissist. Politicians may or may not consider Facebook to be an isolated arena where 'what happens on Facebook stays on Facebook'. It is important to consider the extent to which politicians change their own opinion, or bring forward prevalent views, into formal political decision-making processes [START_REF] Westling | Expanding the public sphere: The impact of Facebook on political communication[END_REF]. Politicians may very well be on Facebook solely for the exposure and visibility that it can provide.
G -Gather. Politicians may use social media to engage with citizens either by broadcasting political victories, or by asking for their input on topical issues; both strategies represent important aspects of politics. By preferring the broadcasting strategy with limited gathering of input from other stakeholders, their use of social media becomes more of a one-way dialogue than a deliberative discourse. The distribution of questions asked or statements posed may indicate the type of participation that the politician prefers.
A -Accented. Language is a form of capital that can translate to power, but is also a differentiating factor that can either create distance or 'close the gap' between a political power elite and public participants [START_REF] Habermas | The Structural Transformation of the Public Sphere[END_REF]. An indicator for exclusion or inclusion is the extent to which politicians post detailed or general questions or statements, and how they inform the public and discuss the political process.
G -General. Many groups are marginalised and underrepresented, and eParticipation studies have long warned about the digital divide [START_REF] Saebø | The shape of eParticipation: Characterizing an emerging research area[END_REF]. Thus, an important area of interest is who the politician receives 'friend requests' from, and whom they befriend. Journalists, old classmates, friends, family and other acquaintances are part of politicians' networks. It is of common interest to investigate the potential predominance of particular groups in their networks, and the potential targeting of specific groups, in explaining politicians' participation.
E -Expense. Available resources limit politicians' engagement on social media. Time, competence and perceived gains are all important factors to consider when we look at the quality of engagement. If a politician does not answer enquiries, or does not actively follow up on discussions and comments, this may diminish citizens' willingness to engage with the politician in question. Expense can therefore be used as a tool to determine whether the participation is meaningful in a comprehensive way, or if it merely resembles a casual pastime without direction.
Relationship between 'ENGAGE' and current research
Figure 1 illustrates the relationship between the theoretical backdrop of the public sphere, eParticipation literature and the proposed ENGAGE framework. The theory of public sphere relates to:
• (E)xchange by focusing on engagement of a public or private nature, and prevalent two-way communication. This is a fundamental principle at the core of deliberation in a public sphere.
• (N)arcissist by discussing whether communication has practical implications on policy or political process, or if politicians' communication on social media is isolated from decision-making processes.
• (G)ather by exploring potential prioritising from politicians concerning input and output.
• (A)ccented by discussing how engagement might be influenced by specific goals, and identifying potential target groups.
eParticipation theory relates to:
• (G)ather by exploring the socio-technical characteristics of social media, and how such systems influence the amount of discourse with external stakeholders.
• (G)eneral by discussing the digital divide by asking who is left out and who is included.
• (E)xpense by discussing resources needed in relation to time spent utilising social media, in comparison to competing communication platforms.
Introducing the case, data collection and analysis strategy
To illustrate the explanatory potential of ENGAGE, we designed a qualitative study to explore politicians' use of Facebook in the Norwegian city of Kristiansand (approximately 75,000 inhabitants). The politicians were selected using a snowball method [START_REF] Yin | Case study research : design and methods[END_REF] among city council members with a Facebook account. The politicians were contacted on Facebook and everyone asked agreed to participate. Ages ranged from 20 to 41 years with four female politicians and one male politician interviewed.
Fig. 2. Data analysis strategy
Given the emphasis on understanding phenomena within their real-life context through a rich description of particular instances [START_REF] Yin | Case study research : design and methods[END_REF], it is appropriate to adopt a case study approach [START_REF] Kirsch | The enactments and consequences of token, shared, and compliant participation in information systems development[END_REF]. The study is exploratory in nature, aiming to define questions, propose new constructs and eventually identify new theoretical propositions, additional constructs and the relationships between constructs [START_REF] Cavaye | Case study research: a multi-faceted research approach for IS[END_REF] that may complement the original framework.
Data came from two sources. The primary source of data was semi-structured interviews covering the thematic areas introduced above (the ENGAGE framework). Since we limited our case study to interviews with five politicians, we also had a secondary data source, which consisted of a content analysis [START_REF] Silverman | Interpreting qualitative data[END_REF] of data from the five politicians' Facebook pages. Interviews were transcribed and analysed based on a pattern-matching logic, in which themes were identified and put in context within the framework presented. The data from the politicians' Facebook pages were analysed by placing 20 status updates from each politician in a spreadsheet. The spreadsheet categorised the status updates according to their different thematic areas of focus.
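As a minimal illustration of this categorisation step, the sketch below tallies coded status updates per thematic area and counts how many of them are phrased as questions. The politician labels, category names and sample records are hypothetical stand-ins; the actual coding in this study was performed manually in a spreadsheet.

```python
from collections import Counter

# Hypothetical coded records: (politician, thematic area, is_question)
# per status update; illustrative only, not data from the study.
coded_updates = [
    ("Politician 1", "political opinion", False),
    ("Politician 1", "link to online content", False),
    ("Politician 1", "question to followers", True),
    ("Politician 2", "private/everyday life", False),
    ("Politician 2", "political opinion", False),
]

# Tally status updates per thematic area.
theme_counts = Counter(theme for _, theme, _ in coded_updates)

# Count questions per politician (an indicator of input-gathering).
question_counts = Counter(p for p, _, is_question in coded_updates if is_question)

for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")
for politician, n in question_counts.items():
    print(f"{politician} asked {n} question(s)")
```

Such a tally makes the distribution of statements versus questions directly comparable across politicians, which is the comparison drawn on in the findings below.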
Findings
Below we introduce our findings related to the ENGAGE framework, illustrated by quotations from interviews.
E -Exchange. 'I'm very conscious about what I post because I know that my profile is in fact public.' (Respondent #4).
The politicians are very much aware that what they post on Facebook can turn into news stories. However, they have several concerns on the use of Facebook as a public arena, including a lack of confidence with the technical use of Facebook, as well as how to use the systems in place to develop and maintain their networks. The politicians have limited resources available to really exploit the potential uses of Facebook and social media. Politicians may be wary of becoming tiresome, boring their audience with political discourses that do not necessarily interest everyone.
During the interviews, most politicians expressed a belief in using Facebook for two-way dialogue, but the content analyses of their Facebook accounts provided contradictory indicators. The limited number of questions asked of their followers, and the number of times the original poster responded to subsequent comments, do not uniformly support the claims that politicians engage in two-way dialogues on Facebook. Further questioning as to what constitutes 'two-way dialogue' could have clarified what each individual politician meant by the term: is 'liking' a statement tantamount to commenting in the eyes of the politician? The politicians do not alter their opinions based on inputs and dialogue on social media, but might be open to minor adjustments. This might relate to the fact that politicians post 'status updates' with opinions expressing their strong beliefs, rather than in the form of open questions welcoming inputs to the decision-making processes. Two of the interviewees argued that by mediating wishes expressed by citizens, Facebook might actually influence the political process.
'I have absolutely done that (changed opinions based on inputs on Facebook). Both in political meetings as well as in letters or op-eds in the newspaper.' (Respondent 3).
The above statement is an example from one politician stating that she uses Facebook actively as a tool for gathering citizens' opinions on political issues.
'In theory it (Facebook) must be a good arena for discussion, especially for youth, since we are always online and the smartphone is always on the table with Facebook present. Me, I don't discuss much on Facebook because it is tiring. The discussion quickly gets out of hand and I have a perception that people discuss just for discussion's sake on Facebook. I discuss so much every day that I can't manage. But in theory it is a good arena.' (Respondent 2).
The statement above echoes other politicians' points of view, while another politician argues that Facebook is a good arena for deliberation because the quality of argumentation is preferable to alternative sources, such as the comment section below online newspaper articles, and since 'everyone' is using Facebook.
N -Narcissist

'I may have strengthened my belief in an opinion or had second thoughts because of input or the like. But I have not changed my opinion.' (Respondent 5)

G -Gather
'It is probably input because I scroll and browse more on Facebook than I write myself.' Most of the 'status updates' analysed in the content analysis are not questions, but opinions or links to online content. Among the five politicians included in our study, and out of twenty 'status updates' each, the highest number of questions was three. When interviewed, the politicians stated that they gather information by browsing Facebook, which takes up a lot of the time that they are 'on' Facebook. Politicians are aware of journalists seizing material posted on their Facebook pages, potentially presenting part of the communication outside its natural context. One politician reported fearing 'flame wars' when posting political 'status updates', thus reducing their willingness to participate in the online discussions, and limiting the potential for improved political deliberation. The content analysis revealed mixed results on how commenting and 'liking' posts related to their own 'status updates' proliferate. The most active politician commented on 13 out of 16 of their own 'status updates' that generated comments by external stakeholders, while the least active politician commented on only three out of 15.
A -Accented
'[…]if there is something I want to put out there, a purpose may be that one of the journalists I have on Facebook will pick up the story. But that is also why I need to be sure that I want to post it, because then I know it is something I can defend, something that may be printed.' (Respondent 1) The politicians consider themselves to be very conscious of what they are posting. They know that journalists are watching and, as one politician commented, this represents a double-edged situation in which politicians strive for exposure in traditional media, but at the same time are reluctant to make certain comments or statements out of fear of negative exposure in the same media channels. Most of the 'status updates' indicate that the more general statements dominate. Few of the 'status updates' seem to have specific target groups in mind.
G -General
'I have made a choice that Facebook is a public arena for me, a politician's arena, not my personal playground.' (Respondent 3) Some of the politicians view Facebook as a private arena where they do not accept 'friend requests' from unknown individuals, unlike the view expressed by the quotation above. One respondent commented on who the 'friends' are:
'There are probably many, except those that are my friends and people I know personally, involved in societal matters of some sort. I think that if you took a few hundred people in there that I don't know, but are from Kristiansand, or the southern part of Norway… I would think almost everyone is either politicians, youth politicians, media people, business people or people from the arts. There are probably some people I don't know that "add me" because they are involved in societal matters of some sort.' (Respondent 3). The politicians' networks of 'friends' do not really indicate any expansion of the public sphere. Their networks are mainly made up of family, friends, colleagues, other politicians and journalists. The people in their networks whom the politicians do not personally know are mainly people who are active (and visible) in local society and belong to elite groups. However, there is a shortcoming in our research approach concerning network effects. If a 'friend' of the politician comments on something, this might be visible to friends of that friend, depending on the personal settings chosen by the politicians. Hence, a broader audience may possibly be informed by politicians' online activities, compared to those actually able to participate in the discussions.
E -Expense
'Many will say too much time (is spent on Facebook)! And that is probably true as well. I can say this: generally, independently of if I'm in front of the computer or cell phone, Facebook is always on in the background. As TV is always on in the background for some, Facebook is always on for me.' (Respondent 3). All the politicians use Facebook on a daily basis, some for several hours a day. Time spent on Facebook does not easily translate to an activity identified within the framework of this case study, since politicians most probably read more than they write. Moreover, one of our interviewees explained that personal messages are often used to answer comments on the politicians' wall, which are not publicly visible. One politician argues that to successfully use Facebook as a political tool, you need to invest time and presence when engaging your audience; if you don't follow up your initial statement when comments are made, you may lose out on what could have been an interesting discussion.
Discussion and conclusion
The findings from our case study concerning the thematic areas identified in the current literature are summarised in Table 1.
Table 1. Case findings summarised
Exchange: Politicians differ in their views on what two-way communication is. Politicians believe they engage in two-way dialogue, while the secondary data source points to differences in what politicians consider to be two-way dialogue.

Narcissist: The politicians rarely change their minds on political issues. With one exception they mainly post opinions that they strongly believe in.

Gather: Politicians mostly post statements, not questions. Questions regarding political issues are not prevalent. Politicians engage on Facebook through activity that is not necessarily easily monitored. Some use private messages in an extensive way; others centre their activity on their 'wall'. Much of the time spent on Facebook is not spent actively posting or commenting, but browsing profiles, an activity that may influence politicians but is difficult to measure.

Accented: The politicians emphasise that they will use different means in different parts of the political process, but the main finding is that politicians are more likely to use Facebook to broadcast outputs than to gather opinion early in the political processes. The politicians vary the level of detail in their posts.

General: Politicians' friends are mostly made up of already engaged citizens and cultural, political or civic elites. There is no active strategy for 'adding' ordinary citizens into the mix. This suggests a strengthening of bonds between established elites, even though the platform has the potential to provide more democratic influence, as politicians don't differentiate among authors of opinions.

Expense: The politicians allocate a great deal of time to maintaining a presence on Facebook. The activity that results from this presence is highly diverse. One of the politicians stated that while she checks Facebook four to five times a day, she has posted less than 20 'status updates' in over a year. Others are extremely active in posting status updates or commenting. Facebook represents a channel in addition to present communication channels.
How politicians engage on Facebook varies greatly, with different modes of operation. Most politicians in our study believe in the use of Facebook primarily for collecting opinions from citizens and other stakeholders. However, the content analysis of the politicians' Facebook accounts suggests otherwise.
Why politicians engage also varies within our sample. Some view Facebook as a personal and private arena where politicians enter only as private persons, while others view Facebook as a tool for gathering relevant information vital to their role as a politician.
With whom the politicians engage is more uniform. Friends, family, colleagues and journalists are well represented in all respondents' Facebook networks. The remaining 'friends' of the politicians consist mostly of already established elites within cultural or political sectors, or civic society in general.
Politicians with an explicit strategy for how to engage citizens through the use of Facebook are more likely than others to make sense of comments, discussions and other forms of feedback in a meaningful way. Most of our respondents do not find their online presence, or the input they may receive through their use of Facebook, to be especially valuable (with one exception, who aims to use her Facebook presence as a valuable resource). Even so, social media remains a potentially useful tool if used in a systematic manner.
The general characteristics of social media, and particularly the fact that relevant information can be distributed, gathered and discussed within minutes of posting, may represent a shift in how politicians interact. We argue that politicians may benefit from viewing Facebook as a democratic arena by gathering valuable and relevant information that can influence decisions in political processes. However, our empirical results indicate that politicians, in a local Norwegian context, still have some work to do to strategically harness their use, or non-use, of social media in political discourse.
For now, Facebook as an arena for deliberation may not live up to the strict ideals of public sphere theorists. Habermas' critique of the internet as an arena for public deliberation is centred on the fear of echo chamber effects, and the fragmentation that leads to many separate public spheres [START_REF] Gentikow | Habermas, medienes rolle for den offentlige meningsdannelsen, og en fotnote om Internettet i fire versjoner[END_REF]. Habermas still believes traditional news media are key to setting the agenda. A common criticism when applying public sphere theory to social media is the lack of the face-to-face interaction that is an essential aspect of what deliberation should ideally entail [START_REF] Habermas | Political communication in media society: Does democracy still enjoy an epistemic dimension? the impact of normative theory on empirical research1[END_REF]. Although there is a gap between a face-to-face meeting between peers and the nature of Facebook as an arena for engagement, we strongly believe that the use of such media is a step in the right direction concerning public deliberation. As one of the respondents comments, the 'friend' relationship on Facebook is likely to increase the incentive to engage by removing barriers. Habermas' critique of 'new media' is based mainly on web forums and the like [START_REF] Geiger | Does Habermas understand the Internet? The algorithmic construction of the blogo/public sphere[END_REF]. A non-anonymous arena such as Facebook arguably has different qualities and characteristics than face-to-face encounters, but is still relevant to counter some of Habermas' concerns about an online public sphere.
An interesting observation is the role Facebook and other social media can play in the agenda-setting phase of public decision-making. News stories published in traditional news media often originate from Facebook. Hence, social media is not only a valuable direct source for news, but also plays a major role in aggregating news stories, with an increasing number of referrals to news stories originating from social media channels. Facebook may contribute to a transformation of power from news editors towards a more democratic form of involvement by the public themselves.
The strength of our proposed model is its potential to encompass a range of important dimensions within different fields of study that are needed in order to obtain a more comprehensive understanding of the phenomena of engagement. The ENGAGE model could easily be confirmed and/or elaborated by further research. Further research is also needed to answer questions regarding how the size of the community influences the quality of the participation. Moreover, the role of the technology could be further investigated, to explore whether the difference in participation (e.g. inability to maintain a dialogue) could be caused by specific technicalities (interface) of the Facebook platform itself rather than politicians' competencies or desire to deepen engagement and participation.
Fig. 1. Relational model for 'ENGAGE'
"1004265",
"1004202"
] | [
"301147",
"301147"
] |
Asbjørn Følstad
email: [email protected]
Marika Lüders
email: [email protected]
Online Political Debate: Motivating Factors and Impact on Political Engagement
Online political debate is increasing in importance, both as a real world phenomenon and as an object of scientific study. We present a survey study exploring people's motivations for engaging in online political debate and how such debate may impact their general political engagement. The survey was conducted among 90 participants of an online environment for political debate hosted by one of the main Norwegian political parties. We found four motivational factors with relevance for participation in online political debate: engaging topic, want to contribute, frustration, and reciprocal learning. Sixty-four per cent of the participants answered that the online environment for political debate could make them more politically engaged. These participants reported that such an increase in political engagement could be due to the online environment providing a sense of influence, access to political debate, a means for getting updated, a lowered threshold for participation, motivating local political engagement, and awareness concerning political events.
Introduction
Political debate is increasingly conducted online. This trend has been welcomed with enthusiasm as it has been assumed that such political debate may lower the threshold for participation, increase citizen involvement, and, in consequence, strengthen democracy [START_REF] Lüders | Expectations and experiences with MyLabourParty: From right to know to right to participate?[END_REF]. The enthusiasm has seemed warranted as citizens do make use of online arenas for political debate to share their opinions and engage themselves politically [5; 7]. It is suggested that online political debate may be beneficial to public involvement in policymaking [START_REF] Stromer-Galley | Political Discussion Online[END_REF]. Also, it has been suggested that online arenas for political debate may serve as a public sphere supporting rational-critical discourse among its participants [START_REF] Dahl | Democracy and its critics[END_REF], though this has been severely criticized [START_REF] Stromer-Galley | Political Discussion Online[END_REF].
A range of studies have been conducted to characterize those that engage in online political debate, for example in terms of gender, age, and education. Also, efforts have been made to assess the quality of such online debate [START_REF] Stromer-Galley | Political Discussion Online[END_REF]. The contribution of this study is to provide insight into the motivation of those engaging in online political debate and the perceived impact of such debate on the debaters' political engagement. Thus, this study extends the current knowledge of online political debate as it provides knowledge on how such political debate is perceived from the perspective of those who engage in it. Furthermore, it suggests how online political debate may strengthen the participants' general political engagement; the latter being a needed addition to the current literature on the correlation between online and offline political engagement [1; 13].
The remainder of the paper is structured as follows: first we provide an overview of previous work. Then we formalize the research questions and present our research method, followed by a presentation of the results of our study. Finally, we discuss the results, their implications, and the study limitations, as well as suggest future work.
Previous work
Online political participation
Political participation is hardly an unambiguous term in the scientific literature. Teorell [START_REF] Teorell | Political participation and three theories of democracy: A research inventory and agenda[END_REF] distinguished between responsive, participatory and deliberative models of democracy. Voting and participation in election campaigns are key aspects of a responsive democratic model [START_REF] Verba | Participation in America: Political democracy and social equality[END_REF]. Taking part in decision-making processes is at the core of a participatory model [START_REF] Vitale | Between deliberative and participatory democracy: A contribution on Habermas[END_REF]. Participating in the political opinion formation is central to a deliberative model [START_REF] Teorell | Political participation and three theories of democracy: A research inventory and agenda[END_REF].
Participatory and deliberative democracy depends on debate and dialogue between citizens. Significant participatory divides have been found concerning gender and education, with males being more active in online political debates, and with educational levels correlating with online political participation [11; 15]. Yet, in multiple regression analyses, demographic variables (such as gender and age) have been found to explain far less of the variance in online political participation than factors associated with political engagement in general [START_REF] Valenzuela | Social networks that matter: Exploring the role of political discussion for online political participation[END_REF].
Individuals' general political interest, offline political engagement, and civic engagement may be better predictors of online political participation than mere demographic variables. Vesnic-Alujevic [START_REF] Vesnic-Alujevic | Political participation and web 2.0 in Europe: A case study of Facebook[END_REF], in a survey study among citizens using the European parliament Facebook pages, found that online political participation correlated strongly with political interest. Likewise, Conroy, Feezell, and Guerrero [START_REF] Conroy | Facebook and political engagement: A study of online political group membership and offline political engagement[END_REF] found a strong correlation between online and offline political engagement in their study of political Facebook groups. De Zúñiga, Jung, and Valenzuela [START_REF] De Zúñiga | Social media use for news and individuals' social capital, civic engagement and political participation[END_REF] found strong correlations between online political engagement, offline political participation, civic engagement, and the use of social networking sites for news. An experimental study by Min [START_REF] Min | Online vs. face-to-face deliberation: Effects on civic engagement[END_REF] showed that online deliberation may increase the participants' sense of political efficacy and willingness to participate in politics.
Motivated by the promise that online political debate, adhering to the principles of deliberative democracy [START_REF] Teorell | Political participation and three theories of democracy: A research inventory and agenda[END_REF], may strengthen the public sphere, several studies have analysed the quality of such debate. Stromer-Galley and Wichowski [START_REF] Stromer-Galley | Political Discussion Online[END_REF] summarized this literature, and concluded that "online political debate, created by and for citizens left to their own devices tends not to produce high-quality discussions" [ibid., p. 180]. However, the quality of the discussion, that is, the discussion's adherence to the principles of deliberative democracy, may be higher for debates involving both ordinary citizens and politicians [ibid., p. 179]. Also, the design of the online environment for political debate may affect the quality of the discussions; higher quality discussions are found in online environments such as blogs that motivate more contemplative comments rather than a speedy exchange of messages [ibid., p. 178].
Online debate connecting citizens and politicians
It is noteworthy that the involvement of politicians in online political debate among citizens may improve the quality of the debate. As politicians are elected to represent citizens, they also need to listen to the opinions of the same citizens [4; 17]. Furthermore, politicians listening to, and debating with, ordinary citizens may strengthen the involvement of citizens in policymaking. Stromer-Galley and Wichowski suggest that online discussions "hosted by government agencies or policymakers, enact democracy by situating citizens as agents within the policymaking process" [11, p. 182].
The possible use of online political debate as a means to involve citizens in policymaking may be a way to implement Dahl's [START_REF] Dahl | Democracy and its critics[END_REF] characteristic of democratic participation, where all citizens should have the same opportunity to set political agendas and influence political decision-making. Furthermore, online political debate involving ordinary citizens and politicians could have an added democratic value as it may strengthen the openness of political processes [START_REF] Sevland | Det lokale folkestyret i endring? Om deltaking og engasjement i lokalpolitikken[END_REF].
Research questions
Our research questions are designed to fill what we perceive as two gaps in the current knowledge on online political debate: the motivation for participating in such debates and the impact of such participation on the debaters' political engagement. Two research questions were formulated.
RQ1: Which factors motivate participation in online political debate?
The current literature provides ample insight into the characteristics of online political debaters. However, the current knowledge on motivational factors is limited. Extending this knowledge is important as it may help us improve the online environments for such debates, as well as understand the role such debates may have in society.
RQ2: How may participation in online political debate impact the general political engagement of the debaters?
From the current literature we know that the tendency to participate in online political debate is closely associated with political engagement in general. However, we find that there is a lack of knowledge concerning how online political debate may come to affect such general political engagement. Extending our knowledge on this issue is relevant both for understanding the role of online political engagement for the individual debater as well as to set up political debate so as to increase general political engagement in the population.
As the literature suggests that the quality of online political debate may be positively affected by involving both politicians and ordinary citizens, we wanted to investigate our research questions in a context where both these groups participated; this in order to prevent our findings from being unduly biased by the participants' perception of the online political debate as being of low quality.
Furthermore, we wanted to investigate our research questions in an online environment promoting contemplative comments rather than a fast exchange of messages, also for the purpose of controlling against low-quality political debate.
Method
To gain in-depth understanding of online debaters' motivation, and the impact of such debate on general political engagement, this study was conducted in the context of a single case: an online environment for political debate run by one of the main political parties in Norway.
We wanted to gather data from a relatively large number of participants. Consequently, we decided to conduct an online questionnaire survey. As we wanted the study to be exploratory, we included questionnaire items with free-text answers to gather qualitative data.
The case
The case was an online environment for political debate hosted by one of the main political parties in Norway. The environment was divided into sections concerning specific topics (such as education, health, employment), specific parts of the party organization (local and higher level party bodies), and blogs for individual politicians. The online environment was set up to foster deliberative dialogue involving central party members / politicians, peripheral party members, and politically interested citizens who are not members of the party organization. The overall design of the online environment was a portal structure including a number of blogs for specific topics or parts of the party organization. In the separate blogs, discussions were organized as threads following an introductory text. The comment field was located below the discussion thread, to motivate the participants to read others' comments before posting their own. Upon posting a comment, the online debater by default was set to follow the discussion, and notified by e-mail when new comments were posted. The online debaters had to log in to comment, either as a user of the online environment or through their Facebook or Twitter accounts. The vast majority of debaters participated in their own full name.
The participants and recruitment process
The participants were selected on the basis of their participation in four sections of the online environment; three thematic sections (foreign affairs, education, and employment) and a section serving as the blog for the party leader. In total, 464 persons had made one or more comments in the four sections during a given two month period in 2010; 87 in the three thematic sections, the others in the party leader blog only. Those that had commented in the three thematic sections, or that had made two or more comments in the party leader blog, were invited. Furthermore, among those that had made only one comment in the party leader blog, 40 were randomly selected. We did not invite persons that had published blog posts (in addition to comments) in the online environment, as we assumed these to be closer to the central party administration. Furthermore, we did not invite persons that had logged in with Twitter or Facebook accounts, as we wanted our participants to be regular visitors of the online environment. These filters excluded 48 of the 464 commenters.
In total we invited 204 persons to participate in the study by invitations sent through the internal messaging system of the studied online environment. Of these, 90 responded to the invitation (44%). For the purpose of anonymity, no couplings were made in the data set between (a) the debaters and content in the studied online environment and (b) the participants' questionnaire responses.
The questionnaire
The questionnaire contained 17 questions on demographics, the participants' use of social media, the participants' use of the studied online environment, their motivation for providing comments in the studied online environment, the impact of their online participation on their general political engagement, their experience of the online environment, and suggested changes for the online environment. Due to limited general interest, the findings concerning the latter theme are not presented.
The analysis process
The participants' free-text responses concerning their experience of the studied environment, their motivation for commenting, and the impact of their online participation, were subjected to thematic analysis [START_REF] Ezzy | Qualitative analysis[END_REF]. For each of these questions, an initial set of coding categories was established after the first reading of the comments. The initial categories were then refined following pilot coding. After having established a stable set of coding categories, all comments were coded. Following this, the comments within each coding category were subjected to a second round of analysis for detailed findings.
Results
The participants
The average age of the participants was 51 years (SD = 13, min = 22, max = 83). Sixty-three per cent were male. Nearly half of the participants (44%) had used the studied online environment for a year or more. The participants were also active in other social media; 73% reported that they were regular users of Facebook, 21% were regular users of Twitter.
The majority of the participants were members of a political party; 30% reported that they were active members, 27% were passive members. About one-sixth (17%) reported participating in political meetings.
Upon being asked about their experience of the studied online environment, the most prominent themes were statements on satisfaction (16 comments in this category) and critique of the discussions (nine comments). Statements on satisfaction concerned various aspects of the online environment and the way it was run. The critique of the discussions in particular concerned disrespectful treatment of other participants, varying quality in the comments, and difficulties in getting an overview of discussions; the latter having the consequence that themes were seen as repeated multiple times in the same discussion thread.
Which factors motivate participation in online political debate?
The participants were asked to explicate why they had commented in the studied online environment. The participants' answers were found to reflect four overall motivations:
1. Engaging topic (32%). These participants reported being engaged by the topic under discussion and/or having strong opinions. Several provided details on the actual topic of interest.
2. Want to contribute (19%). These participants reported that they had knowledge or experience that they found to be a needed or useful addition to an on-going debate. They typically also reported a desire for their opinion to have some kind of impact.
3. Frustration (12%). These participants typically reported anger or frustration concerning general societal or political issues. Three of these also aired frustration concerning the debate in the studied online environment.
4. Reciprocal learning (2%). Two of the participants reported that they found the studied online environment to be an arena for learning.
See Table 1 for examples of participant reports concerning motivational factors.

Table 1. Example participant comments concerning their motivation for commenting in the studied online environment for political debate.

Theme: Example comments (translated from Norwegian)

1. Engaging topic: It was something that caught my interest. Issues that I have experienced or will experience myself.
I am very interested in questions on the politics of drug abuse. I see a connection between drug addiction and sick leave, crime and health in general.

2. Want to contribute: I disagreed with the post starting this discussion, and feel that I have both the competency and the engagement.
Disagree with many of the comments on the causes for sick leave, and wanted to present my point of view.

3. Frustration: I commented out of frustration following this year's election and the subsequent unfulfilled promises concerning students [...]
I am annoyed concerning the sick leave discussion.

4. Reciprocal learning: I look at the comments as introductions or replies in a knowledge debate where the goal is to reciprocally learn and develop one's own position and opinion in interplay with politically interested people.
Interesting and sensible debates are pleasant and instructive to participate in.
Of relevance to the question on motivation, we found that 38% of the participants voiced a general wish for even more engagement on the part of politicians in the studied online environment. This was not the topic of any of the questions in the questionnaire, but something that was reported in response to several of the free-text questions.
In particular, the participants wanted feedback from central party members and politicians in the form of comments in the online discussions, clarity concerning the impact of the participants' comments, and clarifications concerning whom from the party organization one may expect to respond to comments.
How may participation in online political debate impact the political engagement of the debaters?
The participants were asked whether they thought the studied online environment could affect the strength of their political engagement. Sixty-four per cent answered that the studied online environment could make them more politically engaged, 31% answered that it had no effect, 5% answered that it could make them less politically active.
The participants who answered that the online environment could make them more politically active were asked, in a separate question, to report in free-text on how the environment could have this effect. The other participants were not asked this question. The thematic analysis yielded six answer categories:
1. Sense of influence (reported by 17). These participants see the studied online environment as an opportunity for having an influence and communicating their own opinion. This opportunity in turn is reported to motivate an increase in political engagement. However, several of the participants reported that such an increase in their political engagement presupposed an active engagement from central party members and politicians in the studied online environment.
2. Access to debate (reported by 14). These participants described the access to debate, made possible by the studied online environment in particular or by the general increase in arenas for online political debate, as engaging and inspiring in itself. Three of the participants noted that the discussions in the studied online environment could also serve as a basis for political debate outside this environment. Three explained that the main value of online arenas for political debate is to increase the transparency in political processes and to support grassroots movements.
3. Getting updated (reported by 5). These participants reported that the studied online environment helped them to get updated on political issues. Three of these specifically associated such updates with engagement in political activity. The described updates concerned, for example, general political trends, particular topics under debate, and news concerning particular persons.
4. Lowered threshold for participation (reported by 4). These participants reported that the studied online environment represents a low-threshold offer for persons who want to engage politically, and that it makes it easier to be politically active.
5. Local participation (reported by 4). These participants reported that their activity in the studied online environment could motivate them to participate actively in local politics.
6. Information on events (reported by 4). The studied online environment is used for spreading information on events such as meetings, seminars, and campaigns. Some of the participants reported that such information increases their chances for participating in the events.
See Table 2 for example participant comments concerning how the studied online environment could make them more politically active.
Table 2. Example participant comments on how the studied online environment could increase their political engagement.
Theme: Example comments

1. Sense of influence: By this I mean that it is possible for me to reach out with my opinions to a wider audience, I have on several occasions received "likes" on my comments and to me this is motivating.
Closeness to the power - provided that the comments are read by someone in charge. Share experiences from the real world.

2. Access to debate: It is easier to get an interest in particular issues if you have an arena for speaking out.
I have just discovered political blogs, it is a new arena for me. Otherwise, I am engaged in political discussions at work and would like to be more engaged in other (non-political) organizations.

3. Getting updated: The […]
Discussion
In this section, we will first discuss our findings relative to the two main research questions. Then we will discuss the limitations of the study and suggest future work.
Motivators for online political debate
The participants' responses provided relevant insights into possible motivators for participation in online political debate. We find it reassuring that the most frequently reported motivator was engagement in the discussed topic, and that the second most frequent motivator was a wish to contribute to the debate. Both these motivators are in compliance with the ideals of online deliberation. It is useful for developers and hosts of online environments for political debate to know that engaging topics and a wish to contribute may be key motivators for online debaters. In particular, this may have implications for how topics should be presented and moderated. Given that the findings are general, developers and hosts of such online environments need to look for topics and content triggering the participants' engagement, and present topics in an engaging manner, rather than, for example, just present content for informational purposes. It may also be important to strengthen participants' opportunities for making contributions that may actually impact political policymaking, thereby "situating citizens as agents within the policymaking process" [11, p. 182].

That said, it is noteworthy that general frustration was the third most frequently reported motivation to make comments in the online environment. While frustration may possibly help people get started in online debate, such motivation is hardly an optimal basis for the rational-critical discourse of deliberative democracy [START_REF] Dahlgren | The Internet, public spheres, and political communication: Dispersion and deliberation[END_REF]. Possibly, debaters venting their frustration online may be the reason why some of the study participants criticize what they perceive as disrespectful treatment of others in the debates. Although political debate may benefit from having nerve and temperature, it is an important challenge for the hosts of online political debate to reduce the effect of online debaters motivated mainly by their frustration. In particular, this is important in cases such as the one in this study, where frustration only motivates a small proportion of the online debaters.
Finally, it may be noted that there is still a way to go before online deliberation [START_REF] Dahlgren | The Internet, public spheres, and political communication: Dispersion and deliberation[END_REF] becomes second nature to the participants in the studied online environment. Only two participants reported reciprocal learning as their motivation. Being engaged and wanting to contribute are indeed necessary requisites for online political debate. However, in terms of online deliberation, it will also be necessary to listen to others' perspectives and appreciate the possible learning that may come out of the political debate.
Impact on general political engagement
From the existing literature we know that participation in online political debate is highly correlated with general political engagement [15; 1; 18]. Furthermore, online deliberation may strengthen political efficacy and willingness to participate in politics [START_REF] Min | Online vs. face-to-face deliberation: Effects on civic engagement[END_REF]. In our study, the majority of the participants reported that their participation in online political debate might strengthen their political engagement. This finding is in line with Min's conclusion that online deliberation may increase political efficacy and willingness to participate in politics [START_REF] Min | Online vs. face-to-face deliberation: Effects on civic engagement[END_REF]. Furthermore, our findings indicate how such increased willingness to engage politically may be explained.
The most frequently reported reason for a strengthened political engagement is the perceived promise of influence associated with an online environment hosted by a political party. This perceived promise may be strengthened by politicians and central party members participating in the same environment. However, although party members indeed were present as debaters, several of the survey participants voiced concern that central party members and politicians were not more active. This concern reflects a scalability challenge in the interchange between politicians and ordinary citizens in online political debate; as the number of active debaters increases, it will be next to impossible for central party members and politicians to follow up all comments.
Consequently, we need sustainable approaches to support interaction between politicians, central party members, and ordinary citizens in online political debates. One approach may be to clarify the promise of the online political debate: that the online environment is an arena for debate mainly among citizens and local party members; however, central party members and politicians may be active to the extent possible. A second approach may be to conduct regular summaries of the content of political debate, for example as input in political policymaking, and be clear on how the online debaters have contributed to the summaries.
Other reasons for strengthened political engagement included the motivation for involvement in local politics, and an increased awareness of political events. Political parties hosting online environments for political debate may benefit from these effects of the online political debate by making easily available offerings to the online debaters, for example by promoting selected offline political events.
Limitations and future research
This study was conducted in an online environment for political debate where ordinary citizens, party members and politicians participated. Furthermore, the online environment was designed to foster contemplative comments rather than a fast-paced exchange of messages. Consequently, the generality of our findings is limited to contexts for online political debate that share these characteristics. Future work comparing the kind of online environment used in this study to other online environments for political debate is needed to make more general claims.
The case of the present study was arguably a suitable object of study for our research questions, in particular as Norway is an egalitarian society with high Internet penetration and online maturity in the population. However, the generality of the findings may depend on the characteristics of the society in which the study was conducted. Consequently, it will be beneficial to replicate the study in other cases, preferably in other countries.
Acknowledgement. This work was conducted as part of the research project NETworked Power (http://networkedpower.origo.no) supported by the Norwegian Research Council VERDIKT programme, project number 193090.
"993329",
"993330"
] | [
"556764",
"556764"
] |
Stephan Neumann
email: [email protected]
Anna Kahlert
email: [email protected]
Maria Henning
email: [email protected]
Philipp Richter
email: [email protected]
Hugo Jonker
email: [email protected]
Melanie Volkamer
email: [email protected]
Modeling the German Legal Latitude Principles
Introduction
Holding regular parliamentary elections is essential for the exercise of popular sovereignty and an expression of the democratic form of government. The fundamental decision for democracy is established in Article 20.1 and 2 of the German Constitution. According to this, the authority of the state originates with the people and is exercised in elections and votes. The Federal Electoral Act was enacted in 1956. At this time, the legislator considered traditional paper-based polling station voting as the main voting channel. Postal voting was only allowed in exceptional cases. However, the number of absentee voters constantly rose in the following years as society became more and more mobile (in the 2009 federal elections 21.4% of the cast votes were postal votes). De facto, postal voting became an alternative to the conventional voting process.
In 1967, the Federal Constitutional Court decided on the constitutionality of postal voting for the first time. In these proceedings, the Constitutional Court declared that the principles of the free and secret elections were not violated [3, Decision: 21, 200:1967]: the increase in election participation offered by postal voting, which translates to an improvement of the principle of the universal elections, is strong enough to offset the impairment of the secret elections, and thus can be accepted. This means that the legislature is entitled to broad latitude when lending concrete shape to the principles of electoral law, within which it must decide whether and to what degree deviations from individual principles of electoral law are justified in the interest of the uniformity of the entire voting system and to ensure the state policy goals which they pursue [3, Decision: 123, 39 (71):2009]. In its 2009 decision on electronic voting machines, however, the Court declared the use of the deployed voting computers unconstitutional, because essential steps of the election were not subject to public examinability [3, Decision: 123, 39:2009].

In order to avoid such a debacle with future new voting systems, it is necessary to have clear guidelines on what is and what is not acceptable when balancing legal provisions. Then the compliance of proposed voting systems with the legal latitude can be properly analyzed before their use. This is especially pertinent in the case of Internet voting: Internet voting systems are already used in various European countries, and the possibility of voting in such a manner seems to enjoy support amongst German constituents [START_REF] Forsa Survey | Jeder zweite würde online wählen[END_REF].
Contribution. This work supports an interdisciplinary dialog by constructing a model for comparing newly proposed voting systems, e.g. an Internet voting system, with established voting systems, e.g. postal voting in the German federal election. We therefore identify and model the principles of the legal latitude. The developed model makes it possible to compare voting systems on the basis of the legal latitude. As such, the model helps developers of new voting systems in identifying and mitigating constitutional shortcomings of their systems, which ultimately should lead to the identification or construction of a constitutionally compliant (electronic) voting system. The model is meant as a guideline that allows conceptual design to be carried out in the right direction, but its results will still need legal review in case of a planned application of a voting system in political environments. While the model is specifically tailored to the German constitution, we believe the election principles therein to be of a generic nature. As such, adapting the model to another constitution should be straightforward.
Explanation of Legal Latitude
The election of the representatives is regulated in Article 38 of the German constitution. Correspondingly, the principles of universal, direct, free, equal, and secret elections established in Article 38.1 sentence 1 are of particular relevance. While the principle of universal elections concerns the eligibility to vote without regard to personal qualities or political, financial or social aspects [3, Decision: 15, 165 (166f):1962. Decision: 36, 139 (141):1973], the principle of equal elections addresses the impact of every valid vote on the election result. That is, every voter needs to have the same number of votes and must be able to cast his or her vote in the same way as any other one [7, § 1, Rn. 43]. Furthermore, all candidates need to be presented equally, so that all of them have the same chance to win the election [7, § 1, Rn. 48f]. The principle of direct elections forbids the integration of electoral delegates [3, Decision: 7, 63 (68):1957. Decision: 47, 253 (279):1978] and requires that the representatives get elected by the voters only, through casting their vote personally [2, Art. 38, Rn. 75] [5, Art. 38, Rn. 101]. The principle of secret elections demands that the voting decision remain secret during and after the election process [9, Art. 38, Rn. 67]. It needs to remain secret whether voters split their votes or cast them based on a single preferred party, whether they spoiled their vote or abstained from voting at all [7, § 1, Rn. 95]. The secrecy of the vote guarantees the principle of free elections, which covers the process of opinion making prior to the election as well as the process of vote casting within the election. In formal aspects it ensures the right to choose whether one wants to cast a vote or not; in material regards it provides the freedom to cast a vote for the preferred candidate or party [7, § 1, Rn. 21]. In addition to these principles, another election principle emerging from Articles 20.1, 20.2 and 38.1 of the German constitution has been emphasized by the Federal Constitutional Court in 2009 [3, Decision: 123, 39:2009]: the so-called public nature of elections requires that all essential steps in the elections are subject to public examinability unless other constitutional interests justify an exception. However, the German constitution only gives the election principles but does not prescribe a specific voting system. The legislator needs to provide a system that fulfills the illustrated principles as well as possible. This follows from Article 38.3 of the German constitution, according to which a federal act needs to define the full particulars regarding the federal elections. Note that this article contains no legal proviso but authorizes and obligates the federal legislator to enact an execution law [START_REF] Mangoldt | Kommentar zum Grundgesetz[END_REF], [2, Art. 38, Rn. 61]. In essence, this article constitutes a regulation that assigns the exclusive law-making authority to the Federation in order to shape the German electoral law [2, Art. 38, Rn. 125]. Even though all election principles are of equal importance in the context of parliamentary elections [3, Decision: 99, 1 (13):1998], they cannot be fulfilled simultaneously [3, Decision: 59, 119 (124):1981]. Due to the necessity to balance all principles, a legal latitude is open to the legislator [2, Art. 38, Rn. 62]. Colliding election principles need to be reconciled with one another to such an extent that each of them is fulfilled in the best possible way [3, Decision: 59, 119 (124):1981]. Insofar, the legislature is entitled to broad latitude when lending concrete shape to the principles of electoral law, within which it must decide if deviations from individual principles of electoral law are justified in the interest of the uniformity of the entire election system and to ensure the state policy goals which they pursue [3, Decision: 123, 39 (71):2009]. Furthermore, while weighing the election principles, the convention of the unity of the constitution needs to be respected [START_REF] Dreier | Grundgesetz-Kommentar[END_REF] Art. 38, Rn. 166]. According to this, restrictions of constitutionally required positions are possible only in case a collision with other principles of constitutional status is given and "practical accordance" [START_REF] Hesse | Grundzüge des Verfassungsrechts der Bundesrepublik Deutschland[END_REF] regarding the restricted principle can be achieved [2, Art. 38, Rn. 61]. During the necessary consideration, the basic principle of commensurability is of great importance, i.e., a relation of two mutable values that comes as close as possible to the particular optimization, not a relation between a constant purpose and one or more variable instruments [START_REF] Hesse | Grundzüge des Verfassungsrechts der Bundesrepublik Deutschland[END_REF]. Since all election principles have equal potential [3, Decision: 99, 1 (13):1998], it needs to be decided in each individual case which election principle can be restricted in favor of another one.
In case the legislator decides to realize one election principle in the best possible way, as happened with the implementation of postal voting in view of the principle of universal elections, this is not objectionable from a constitutional point of view as long as the decision does not go along with an excessive restriction or endangerment of other election principles [3, Decision: 59, 119 (125):1981]. The Federal Constitutional Court only reviews whether the legislature has remained within the boundaries of the latitude or whether it has violated a valid constitutional election principle by overstepping these boundaries [3, Decision: 123, 39 (71):2009]. From the legal latitude discussed in this section, three principles can be derived: the principle of minimum degree of fulfillment, the principle of necessity, and the principle of overall degree of fulfillment. The general view is that the current voting system fulfills the election principles in an acceptable way, allowing it to be used as the reference system: any new voting system must therefore simultaneously fulfill all three principles with reference to the current voting system.
Modeling the Legal Latitude Principles
In this section, the three principles of the legal latitude are modeled. Before diving into the modeling process, we shall first provide the reader with conventions used throughout this work.
Foundations of the Model
The degree to which individual election principles are fulfilled by a specific voting system can be charted by a network diagram, having one axis for each considered principle (see Figure 1 for a reference system and Figure 2 for a proposed new voting system). On each axis is marked to which degree the election principle is fulfilled by the system under consideration. Higher degrees of fulfillment are plotted further out from the center than lower ones.
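As an illustration, such a diagram can be drawn with a few lines of code. The following Python sketch uses matplotlib's polar projection; the principle labels follow the model, while the degree values and the 0-10 scale are hypothetical, since the model itself does not prescribe a measurement scale.

```python
# Illustrative radar ("network") diagram of degrees of fulfillment;
# the degree values and the 0-10 scale are hypothetical examples.
import math
import matplotlib.pyplot as plt

principles = ["universal", "direct", "free", "equal", "secret", "public"]
degrees = [9, 8, 7, 6, 5, 7]  # hypothetical degrees of fulfillment

# One axis per principle, evenly spaced around the circle.
angles = [2 * math.pi * i / len(principles) for i in range(len(principles))]
angles.append(angles[0])      # repeat the first point to close the polygon
values = degrees + degrees[:1]

ax = plt.subplot(projection="polar")
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.2)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(principles)
plt.show()
```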
The Principle of Minimum Degree of Fulfillment
The principle of minimum degree of fulfillment requires that a minimum degree of fulfillment has to be achieved for all election principles. That means that a voting system is tied to the minimum degree of fulfillment of all election principles. For a given minimum degree of fulfillment deg_min, the correlation is modeled as follows:

min_{a ∈ SEP} (degree_a^{system_new}) ≥ deg_min   (1)
For an election principle a from the set of election principles SEP, the term degree_a^S denotes the degree of fulfillment of a in system S. Figure 3 shows the proposed voting system in reference to a potentially prescribed minimum degree of fulfillment. It can be seen that the hypothetical voting system complies with the principle of minimum degree of fulfillment.
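Formula (1) can also be checked mechanically. The following Python sketch is illustrative only: the degree values and the 0-10 scale are invented, since the model does not prescribe how degrees of fulfillment are measured.

```python
# Sketch of Formula (1): the proposed system complies when even its
# worst-fulfilled election principle reaches the prescribed minimum.
SEP = ["universal", "direct", "free", "equal", "secret", "public"]

# Hypothetical degrees of fulfillment of the new system (0-10 scale).
degree_system_new = {"universal": 9, "direct": 8, "free": 7,
                     "equal": 6, "secret": 5, "public": 7}

def meets_minimum(degrees, deg_min):
    """Formula (1): min over all a in SEP of degree(a) >= deg_min."""
    return min(degrees[a] for a in SEP) >= deg_min

print(meets_minimum(degree_system_new, deg_min=4))  # True
print(meets_minimum(degree_system_new, deg_min=6))  # False: 'secret' is at 5
```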
The Principle of Necessity
An election principle may be fulfilled to a lesser degree in a proposed voting system than in a reference system if and as far as this is necessary to fulfill another election principle to a higher degree than in the reference system, thus enhancing the reference system with respect to that principle.
Since not all possible alternative voting systems are available for comparison, the satisfaction of the principle of necessity cannot be strictly proven.
The Principle of Overall Degree of Fulfillment
The principle of overall degree of fulfillment is an optional principle when a proposed system is only meant to enhance a reference system. In that case the two principles described before apply strictly, and overall degree of fulfillment may be viewed as good practice. However, the principle of overall degree of fulfillment is obligatory when a proposed voting system is meant to replace a reference system or to be applied on an equal footing with a reference system. Compliance with the principle of overall degree of fulfillment is achieved when all election principles are fulfilled at least to an equal degree as in the reference system (refer to Formula (2a)) or when more principles are fulfilled to a higher degree than to a lesser one (refer to Formula (2b)). These alternate correlations are modeled as follows:

∀a ∈ SEP : degree_a^{system_new} ≥ degree_a^{system_ref}   (2a)

|{a ∈ SEP : degree_a^{system_new} > degree_a^{system_ref}}| > |{a ∈ SEP : degree_a^{system_new} < degree_a^{system_ref}}|   (2b)

There may also be cases where one election principle is fulfilled to a very high degree in the proposed system and may balance more than one lesser fulfillment, but such cases cannot be appropriately expressed in abstract rules; they depend very much on the individual case and must be reviewed legally from the beginning.
Figure 4 shows a comparison of two voting systems (the voting system depicted by the solid line in Figure 1 and the system depicted by the broken line in Figure 2), where moving from the reference system to the new system adheres to the principle of overall degree of fulfillment as modeled by Formula (2b). This is shown by the fact that four election principles are improved in the new system (public, universal, direct, free), while only two principles are weakened (equal, secret).
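Both variants of the principle lend themselves to a mechanical check. The following Python sketch uses hypothetical degree values chosen to mirror the situation of Figure 4: Formula (2a) fails because two principles are weakened, while Formula (2b) holds because four principles are improved against two weakened ones.

```python
# Sketch of Formulas (2a) and (2b) with hypothetical degree values;
# a replacement system must satisfy at least one of the two.
SEP = ["universal", "direct", "free", "equal", "secret", "public"]

degree_ref = {"universal": 7, "direct": 7, "free": 6,
              "equal": 8, "secret": 8, "public": 5}
degree_new = {"universal": 8, "direct": 8, "free": 7,
              "equal": 7, "secret": 7, "public": 7}

def fulfills_2a(new, ref):
    """(2a): every principle fulfilled at least as well as in the reference."""
    return all(new[a] >= ref[a] for a in SEP)

def fulfills_2b(new, ref):
    """(2b): more principles improved than weakened."""
    improved = sum(1 for a in SEP if new[a] > ref[a])
    weakened = sum(1 for a in SEP if new[a] < ref[a])
    return improved > weakened

print(fulfills_2a(degree_new, degree_ref))  # False: 'equal' and 'secret' drop
print(fulfills_2b(degree_new, degree_ref))  # True: 4 improved vs. 2 weakened
```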
Model Compliance
If a system fulfills the two (optionally three) legal latitude principles (where the reference system acts as a baseline for comparison), it can most likely be regarded as legally acceptable. If not all principles are fulfilled, an ad hoc decision is entailed, which requires an additional interdisciplinary evaluation.
Conclusion and Future Work
In the development and usage of voting systems for federal elections, not all constitutional election principles can be implemented in their pure form, and mutual impairments of these principles must be accepted. From the legal point of view, the legal latitude enables the legislator to constrain the fulfillment of certain constitutional principles in favor of others. Based on an analysis of the legal latitude, we developed a model capable of comparing voting systems with regard to the fulfillment of election principles. To build our model, we decomposed the legal latitude into its basic principles and modeled these principles. The developed model will support technical developers in the creation of new voting systems on a legal basis.
The focus of this work is on the evaluation of voting systems based on election principles. In the context of Internet voting and electronic authentication in the polling station certain additional constitutional rights play an important role: the Right to Informational Self-Determination and Secrecy of Telecommunications. How these two basic rights have to be considered in the procedure described in here is a topic for future research.
In its current state, the model does not specify measures to assess the degree of fulfillment of specific election principles. In order to estimate the degree for abstract election principles, these principles must be refined into more precise requirements. Consequently, in the future, the herein developed reference model will be refined by integrating measures to assess the degree of fulfillment of election principles built upon fine-grained requirements.
To date, the model serves as a reference model for parliamentary elections in Germany. In the future, we plan to apply the developed model to a concrete election scenario and a concrete newly proposed voting system. At this point in time, the authors do not consider any Internet voting system an adequate substitute for postal voting in German federal elections. The most promising scenario in which to consider Internet voting seems to be the upcoming German social election in 2017 (see the footnote below). As outlined by Richter [START_REF] Richter | Wahlen im Internet rechtsgemäß gestalten[END_REF], social elections do not demand the public nature principle of elections in its full strength.
Fig. 1. Reference voting system.
Fig. 2. Proposed new voting system.
Fig. 3. Minimum degree of fulfillment in new voting system.
Fig. 4. Overall degree of fulfillment in both voting systems.
Footnote: The social election is conducted via postal voting every six years to elect the bodies of the social insurances; there are over 40 million eligible voters.
Acknowledgment. This paper has been partially developed within the projects "ModIWa2" -Juristisch-informatische Modellierung von Internetwahlen and "VerKonWa" -Verfassungskonforme Umsetzung von elektronischen Wahlen, which are funded by the German Science Foundation (DFG), and partially within the project "BoRoVo" -Board Room Voting -which is funded by the German Federal Ministry of Education and Research (BMBF).
"1004268",
"1004269",
"1004270",
"1004271",
"1004272",
"983849"
] | [
"161409",
"253513",
"253513",
"253513",
"366875",
"161409"
] |
01491260 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01491260/file/978-3-642-40346-0_9_Chapter.pdf | Enrico Ferro
email: [email protected]
Euripidis Loukis
email: [email protected]
Yannis Charalabidis
email: [email protected]
Michele Osella
email: [email protected]
Analyzing the Centralised Use of Multiple Social Media by Government from Innovations Diffusion Theory Perspective
Governments have started increasingly using web 2.0 social media as a new channel of interaction with citizens in various phases of public policies lifecycle. In this direction they have started moving from simpler forms of exploitation of these strong bi-directional communication channels to more complex and sophisticated ones. These attempts constitute important innovations for government agencies, so it is necessary to analyse them from this perspective as well. This paper analyzes an advanced form of centralised use of multiple social media by government agencies from this perspective, using the well established Diffusion of Innovations Theory of Rogers. It is based on a pilot application of the above approach for conducting a consultation campaign in multiple social media concerning the large scale application of a telemedicine program of the Piedmont Regional Government, Italy. It has been concluded that this approach has the fundamental preconditions for a wide diffusion (relative advantage, compatibility with existing values and processes, reasonable complexity, trialability and observability), at least in government organizations having a tradition of bi-directional communication with citizens in all phases of policy making, and also some experience in using social media for this purpose.
Introduction
Governments have started increasingly using web 2.0 social media as a new channel of interaction with citizens in various phases of the public policy life-cycle (agenda setting, policy design, adoption, implementation and monitoring -evaluation) [START_REF] Osimo | Web 2.0 in Government: Why and How? European Commission[END_REF][START_REF] Punie | Public Services 2.0: The Impact of Social Computing on Public Services[END_REF][START_REF] Bertot | The impact of policies on government social media usage: Issues, challenges and recommendations[END_REF][START_REF] Bonsón | Local e-government 2.0: Social media and corporate transparency in municipalities[END_REF][START_REF] Snead | Social media use in the U.S. Executive branch[END_REF]. Initially they adopted simpler forms of exploitation of these strong bi-directional communication channels, which involved manually setting up and operating accounts in some social media, manually posting content to them and then reading citizens' comments in order to draw conclusions. Recently they have tended to shift towards more complex and sophisticated forms of social media use, based on the automated posting of content to multiple social media and the retrieval of various types of citizens' interactions with it (e.g. numbers of views, likes, retransmissions, etc.) and relevant content, using the Application Programming Interfaces (APIs) of these social media, followed by highly sophisticated processing of these data [START_REF] Ferro | Policy Gadgets: Paving the Way for Next-Generation Policy Making[END_REF][START_REF] Charalabidis | Participative Public Policy Making Through Multiple Social Media Platforms Utilization[END_REF][START_REF] Wandhöfer | Engaging Politicians with Citizens on Social Networking Sites: The WeGov Toolbox[END_REF][START_REF] Charalabidis | Public Policy Formulation Through Non-Moderated Crowdsourcing in Social Media[END_REF]. These attempts constitute important innovations for government agencies, so it is necessary to analyse them from this perspective as well. It is important to investigate to what extent they have the fundamental preconditions for a wider diffusion in government. For this purpose we can use methods and frameworks developed by the extensive previous research on innovation diffusion [START_REF] Macvaugh | Limits to the diffusion of innovation -A literature review and integrative model[END_REF]. Such research can reveal both strengths and weaknesses from this perspective, i.e. characteristics and contextual factors that favour diffusion in government, and also characteristics and contextual factors that hinder it; it can thus provide guidelines concerning required improvements in the relevant systems and methods, and also the contexts they are most suitable for.
This paper makes a contribution in this direction. It analyzes an advanced form of multiple social media use by government agencies, based on a central system that uses social media APIs (a more detailed description of it is provided in section 3), from an innovation perspective, using the well established Diffusion of Innovations Theory of Rogers [START_REF] Rogers | Diffusion of Innovations -Fifth Edition: The Free Press[END_REF]. Our analysis is based on a pilot application of the above approach for conducting a consultation campaign in multiple social media concerning the large scale application of a telemedicine program of the Piedmont Regional Government, Italy. This research has been conducted as part of the project PADGETS ('Policy Gadgets Mashing Underlying Group Knowledge in Web 2.0 Media', www.padgets.eu), supported by the 'ICT for Governance and Policy Modeling' research initiative of the European Commission.
The paper is organized in seven sections. In the following section 2 the background of this study is outlined. It is followed by a brief description of the abovementioned advanced form of social media use in government in section 3, and its pilot application in section 4. Then in section 5 the research methodology is described, while in the following section 6 the results are presented. The final section 7 contains conclusions and future research directions.
Background
Social Media in Government
As mentioned in the introduction, social media, though they were initially used mainly by private sector firms in their marketing and customer service activities, are increasingly adopted and utilised by government agencies. It is gradually recognised that social media offer to government agencies significant opportunities for: i) increasing citizens' participation and engagement in public policy making, by providing to more groups a voice in discussions of policy development, implementation and evaluation; ii) promoting transparency and accountability, and reducing corruption, by enabling governments to open up large quantities of activity and spending related data, and at the same time citizens to collectively take part in monitoring the activities of their governments; iii) public services co-production, by enabling government agencies and the public to jointly develop and design government services; iv) crowdsourcing solutions and innovations, by exploiting public knowledge and talent in order to develop innovative solutions to the increasingly complex societal problems [START_REF] Bertot | The impact of policies on government social media usage: Issues, challenges and recommendations[END_REF][START_REF] Bertot | Engaging the public in open government: The policy and government application of social media technology for government transparency[END_REF][START_REF] Bertot | Promoting transparency and accountability through ICTs, social media, and collaborative e-government[END_REF][START_REF] Linders | From e-government to we-government: Defining a typology for citizen coproduction in the age of social media[END_REF].
Highly useful for public policy making can be the capabilities offered by social media to apply 'crowdsourcing' ideas [START_REF] Surowiecki | The wisdom of crowds[END_REF][START_REF] Brabham | Crowdsourcing as a Model for Problem Solving: An Introduction and Cases[END_REF], which were initially developed in the private sector, but have subsequently taken root in the public sector as well (with appropriate adaptations to the specificities of government); these Web 2.0 platforms enable government agencies to mine useful fresh ideas from large numbers of citizens concerning possible solutions to social needs and problems, new public services or improvements of existing ones, or other types of innovations [START_REF] Linders | From e-government to we-government: Defining a typology for citizen coproduction in the age of social media[END_REF][START_REF] Bovaird | Beyond engagement and participation: User and community coproduction of public services[END_REF][START_REF] Torres | Citizen sourcing in the public interest[END_REF][START_REF] Lukensmeyer | Citizensourcing: Citizen participation in a networked nation[END_REF][START_REF] Chun | Government 2.0: Making connections between citizens, data and government[END_REF][START_REF] Hilgers | Citizensourcing: Applying the concept of open innovation to the public sector[END_REF][START_REF] Nam | Suggesting frameworks of citizen-sourcing via Government 2.0[END_REF][START_REF] Margo | A Review of Social Media Use in E-Government[END_REF]. This can lead to the application of open innovation ideas in the public sector [START_REF] Hilgers | Citizensourcing: Applying the concept of open innovation to the public sector[END_REF], and gradually result in 'co-production' of public services by government and citizens in cooperation [START_REF] Linders | From e-government to we-government: Defining a typology for citizen coproduction in the age of social media[END_REF][START_REF] Bovaird | Beyond engagement and participation: User and community coproduction of public services[END_REF]. According to [START_REF] Lukensmeyer | Citizensourcing: Citizen participation in a networked nation[END_REF] such 'citizen-sourcing' may change the government's perspective from viewing citizens as "users and choosers" of government services to "makers and shapers" of them. However, at the same time the relevant literature notes that social media not only offer important opportunities to government agencies, but also might pose some risks under specific circumstances [START_REF] Picazo-Vela | Understanding risks, benefits, and strategic alternatives of social media applications in the public sector[END_REF]. It is widely recognized that further research is required both for developing new, advanced and more efficient and effective forms of exploiting the capabilities offered by social media in government, and also for evaluating them from various perspectives in order to better understand their capabilities and strengths on one hand, and their weaknesses and risks on the other [START_REF] Bertot | The impact of policies on government social media usage: Issues, challenges and recommendations[END_REF][START_REF] Chun | Editorial -Social media in government[END_REF]. The research presented in this paper contributes in this direction, focusing on the evaluation of an advanced form of social media use by government from an innovation diffusion perspective.
Diffusion of Innovations Theory
Extensive research has been conducted on innovation diffusion, in order to understand it better and identify factors that favor it [START_REF] Macvaugh | Limits to the diffusion of innovation -A literature review and integrative model[END_REF]. One of the most widely accepted and used theories of innovation diffusion is the one proposed by [START_REF] Rogers | Diffusion of Innovations -Fifth Edition: The Free Press[END_REF], which has been extensively employed for analyzing ICT-related innovations in both the public and the private sector [START_REF] Wonglimpiyarata | In support of innovation management and Roger's Innovation Diffusion theory[END_REF][START_REF] Raus | Electronic customs innovation: An improvement of governmental infrastructures[END_REF][START_REF] Loukis | Barriers to the adoption of B2B emarketplaces by large enterprises: lessons learnt from the Hellenic Aerospace Industry[END_REF][START_REF] Al-Jabri | Mobile Banking Adoption: Application of Diffusion of Innovation Theory[END_REF]. According to this theory, there are five critical characteristics of an innovation that determine the degree of its adoption, which are shown with their definitions in Table 1.
Table 1. Innovation characteristics that determine the degree of its adoption

Relative Advantage: The degree to which an innovation is perceived as better than the idea, work practice or object it supersedes.
Compatibility: The degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters.
Complexity: The degree to which an innovation is perceived as difficult to understand, implement and use.
Trialability: The degree to which an innovation may be experimented with on a limited scale basis.
Observability: The degree to which the results of an innovation are visible to others.

Therefore it is paramount to assess to what extent the various proposed approaches to social media usage by government agencies for supporting public policy making, both simpler and more advanced ones, have the above characteristics, which result in higher levels of adoption and diffusion.
An Advanced Form of Multiple Social Media Use in Government
An advanced form of social media exploitation by government is under development in the abovementioned European project PADGETS (for more details on it see [START_REF] Ferro | Policy Gadgets: Paving the Way for Next-Generation Policy Making[END_REF] and [START_REF] Charalabidis | Participative Public Policy Making Through Multiple Social Media Platforms Utilization[END_REF]), which is shown schematically in Figure 1. It hinges on the use of a central system for conducting consultation campaigns on a policy-related topic in multiple social media, carefully selected so that each of them attracts a different targeted group of citizens. In particular, the frontend of this system allows a policy maker to create a consultation campaign, which includes the definition of its topic, the targeted social media and relevant multimedia content (e.g. a short and a longer textual description, images, videos, etc.), termed 'Policy Gadgets' (or 'Padgets'). The backend of the system, using the APIs of these social media, posts to each of them the appropriate subset of this content (e.g. the short text to Twitter, the longer text to Blogger, the video to YouTube), and then retrieves data on citizens' interactions with it (e.g. numbers of views, likes, ratings, comments) from the afore-mentioned social media. Finally, these data undergo three levels of processing in the backend in order to extract useful information for policy makers:
1. calculation of various analytics (e.g. numbers of views, likes, ratings, comments, etc., per region, gender, age and education group, for each of the social media and in total), 2. text mining of the textual comments based on opinion mining techniques (for a review of them see [START_REF] Maragoudakis | A Review of Opinion Mining Methods for Analyzing Citizens' Contributions in Public Policy Debate[END_REF]), in order to determine the 'sentiment' of citizens' comments (positive or negative), and the main issues and suggestions expressed by them, 3. future projections through simulation (e.g. using system dynamics or agent-based simulation -for more details see [START_REF] Charalabidis | Enhancing Participative Policy Making Through Simulation Modelling -A State of the Art Review[END_REF]).
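To make this workflow more concrete, the following Python sketch outlines the campaign logic just described. It is purely illustrative: the adapter classes and method names are hypothetical and stand in for wrappers around the official APIs of the targeted platforms; they are not the actual PADGETS implementation or any real client library.

```python
# Hypothetical sketch of the centralised campaign workflow; adapter and
# method names are invented for illustration.
class PlatformAdapter:
    """Wraps one social medium's API (hypothetical interface)."""
    def post(self, content):
        raise NotImplementedError   # publish the platform-specific content
    def fetch_interactions(self):
        raise NotImplementedError   # views, likes, shares, comments, ...

class Campaign:
    def __init__(self, topic, adapters):
        self.topic = topic
        self.adapters = adapters    # one adapter per targeted platform

    def launch(self, content_by_platform):
        # Post the appropriate subset of the multimedia content to each
        # platform (e.g. short text to Twitter, video to YouTube).
        for name, adapter in self.adapters.items():
            adapter.post(content_by_platform[name])

    def collect(self):
        # Retrieve citizens' interactions, to be fed into the three
        # processing levels listed above.
        return {name: adapter.fetch_interactions()
                for name, adapter in self.adapters.items()}
```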
A Pilot Application
In order to evaluate the abovementioned advanced form of centralised multiple social media use by government agencies from an innovation perspective, a pilot application of it was made in cooperation with Piedmont's Regional Government, Italy. One of its major problems has for a long time been its high level of spending for providing health services to its citizens (on average about 80% of its total budget). The increasing budget reductions currently experienced at local and national level require regional governments to face a major challenge: to significantly reduce health related expenditures without deteriorating the quality of service. For achieving these conflicting objectives Piedmont's Regional Government examined various measures, one of them being the introduction of telemedicine methods. In this direction it launched a pioneering small scale telemedicine project in one of the least populated and most mountainous of its provinces, Verbano-Cusio-Ossola (VCO). This telemedicine project was supported by the Local Health Authority of VCO, which serves a population of about 172,000 citizens, 23% of whom are over 65 years old. The evaluation of this small scale project was positive, so Piedmont's Regional Government had to decide whether it should proceed to the large scale application of telemedicine practices in the whole of Piedmont. Since this was a difficult and complex decision, for which a plethora of factors had to be taken into account, and also due to their long tradition of bi-directional communication with citizens in policy making (mainly using off-line methods, while recently they had gained some experience in using social media for this purpose), they decided to conduct a consultation with citizens on this in multiple social media, using the approach and the supporting central system described in the previous section. In this way they expected to take advantage of the high penetration of social media (at the level of 30% of its population) in this region.
In particular, the objective of this social media campaign was to convey information on the planned extension of the telemedicine initiative to the whole Piedmont region to interested and affected citizens (e.g., patients and their families, doctors, health management employees), and then to collect feedback from them. The regional government expected through this campaign to gain a better understanding of the levels of final users' interest in and acceptance of these telemedicine services and the technology mediated model proposed for their provision; also, to identify possible barriers due to practical problems or internal organizational resistance, so that appropriate actions could be taken for addressing them. This project was associated with the competences of four different departments of the Piedmont Regional Government, the Public Health, Budget and Finance, Institutional Communication and Regional Innovation ones, so it was decided that all of them would be involved in it. For this consultation campaign Facebook was used as the central channel, due to both its peculiar interaction patterns as well as its noteworthy penetration rate in Piedmont's population. Besides Facebook, the campaign made use of Twitter and YouTube, while Flickr and LinkedIn assumed an ancillary role. For all these five social media the existing accounts of the Piedmont Regional Government were used. The duration of this campaign was one month, during which six videos on telemedicine were created and published on YouTube, and ten policy messages were published via Twitter and Facebook. This campaign was promoted through the websites and the social media accounts of the Piedmont Regional Government and other local organizations.
Research Methodology
After the end of this consultation campaign an analysis of it was conducted, which included three stages:
1. Initially we examined the analytics of this campaign provided by each of the above social media, with main emphasis on the following:
─ for Facebook: impressions per post, unique users per post, engaged users per post, users who generated 'stories' per post (sharings, likes, comments), organic reach per post, viral reach per post, virality percentage per post (number of storytellers divided by reach), ─ for YouTube: impressions per video, unique users per video, active interactions per video (likes, dislikes, comments, sharings), ─ for Twitter: retweets, replies and mentions, click-throughs on links included in tweets.
2. Next we examined the results of the text mining performed on the textual comments of the citizens, with main emphasis on the issues and suggestions it extracted (a simplified illustration of this kind of processing is sketched after this list); for each of them we identified and examined the most representative of the comments mentioning it. 3. Finally, two semi-structured interviews were conducted with the senior staff most involved in this pilot: the Head of the Public Health Department and a senior member of the Regional Innovation Department. The main objective of these interviews was to assess to what extent the proposed approach (centralised automated use of multiple social media), viewed as an innovation in government agencies' policy making processes, has the five preconditions/characteristics for wide diffusion and adoption proposed by the innovations diffusion theory of [START_REF] Rogers | Diffusion of Innovations -Fifth Edition: The Free Press[END_REF] (in particular, the part of it that deals with the intrinsic characteristics of an innovation that influence an individual's decision to adopt or reject it): relative advantage, compatibility, complexity, trialability and observability. The main questions discussed are shown in Table 2. Each interview lasted about one hour, was tape-recorded and then transcribed. Open coding [START_REF] Maylor | Researching Business and Management[END_REF] of the interview transcripts followed, in order to extract the main points.
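The following Python sketch gives a rough flavour of the first two kinds of processing mentioned above. It is illustrative only: the virality formula restates the indicator defined in stage 1, while the tiny word lists and example numbers are invented; the actual pilot relied on considerably more sophisticated opinion mining techniques.

```python
# Minimal illustration of campaign analytics and lexicon-based sentiment
# scoring; word lists and figures are invented for the example.
POSITIVE = {"useful", "efficient", "good", "savings"}
NEGATIVE = {"scares", "risk", "problems", "concerns"}

def virality(storytellers, reach):
    # virality percentage per post = storytellers / reach
    return 100.0 * storytellers / reach if reach else 0.0

def sentiment(comment):
    words = {w.strip(".,!?'\"").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(virality(storytellers=60, reach=3000))   # 2.0 (per cent)
print(sentiment("Telemedicine is a useful and efficient way to cut costs"))
print(sentiment("Technology scares those not born with the PC in the cradle"))
```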
Table 2. Main questions of the interviews
To what extent the proposed approach:
-is a better way for consultations with citizens on various public policies than the other existing 'physical' (i.e. through 'physical' meetings) or 'electronic' ways for this (relative advantage)? What are its advantages and also disadvantages? -is compatible with the values and the policy formulation processes of government agencies (compatibility)?
-can be applied practically by government agencies policy makers without requiring much effort (complexity)?
-can be initially applied in small scale pilot applications by government agencies, in order to assess its capabilities, advantages and disadvantages, before proceeding to a larger scale application (trialability)?
-is an innovation highly visible to other public agencies, policy makers and the society in general, which can create positive impressions and comments (observability)?
Results
Citizens' Reach and Engagement
In terms of reach, the policy messages of this campaign that were posted in the above social media generated 28,165 impressions. This figure, which has to do with the mere reception of the policy message in the social media realm, is characterized by a cross-platform nature. In Facebook, the figure encompasses the views of posts associated with the campaign which are located on the fan page chosen by the policy makers (we had 27,320 views). Regarding YouTube, the principle does not change, so the indicator includes the views of the telemedicine-related videos uploaded as part of this campaign (we had 783). With respect to Twitter, it is important to point out that the number of impressions of a given message ("tweet") cannot be computed resorting to either native or third parties' tools. On this platform, the only viable solution has been to estimate impressions using click-throughs on links as well as YouTube referrals (we had 62); as a consequence, this value represents a significant underestimation (at least one order of magnitude) of the actual performance on this specific platform. Translating impressions into unique user accounts, the data from the platforms' analytics show that over 11,000 accounts were reached.
Moving from passive interactions to active engagement, platforms' analytics reveal the participation of more than 300 (unique) individuals during the campaign lifecycle. The inherent cross-platform nature of this consultation campaign implies the use of different measures from each platform for the calculation of this indicator: unique users who generated a story through comments, likes, and public sharing in Facebook, unique users who performed actions such as like, dislike, comments and sharing in YouTube and, in Twitter, unique users who re-tweeted or replied to tweets representing policy messages published by the campaign initiator.
As a supplement to the afore-mentioned figures, it is worth stressing that the performance exhibited by campaign messages published during the pilot on Piedmont Regional Government's accounts was remarkably superior to that of the other messages posted in the same period outside the institutional campaign, which may be seen as a control group. A quintessential example in this vein has to do with the regional Facebook channel: on this platform, the messages of this campaign had a reach three times larger than the others (on average), while in terms of active engagement they generated about twenty times more reactions than usual.
Going beyond reach and engagement numbers, precious stimuli for policy makers derive from the main perceptions, issues and suggestions extracted through text mining of citizens' textual comments. First of all, telemedicine is perceived as a useful means for the rationalization of public spending, especially in a period when budget constraints are tighter than ever. Some messages in this vein are as follows:
'The project has very good prospects and it can certainly represent an efficient way to reduce the cost of public health and prevention services'.
'An example to follow for regions like mine, Lazio, where -more and more frequently -past and present spending reviews are leading to the closure of hospitals'. Substantial benefits are also expected to arise for patients: the continuous remote supervision of the patient's condition is expected to result in an improvement of the quality of healthcare provision and of the patient's life, while the reduction in the number of trips between dwelling places and local hospitals will have a remarkable impact in terms of savings (i.e., time devoted to mobility and cost of fuel) and environmental footprint (i.e., containment of CO2 emissions). For instance, one message remarks that:
'Telemedicine can remarkably reduce the queue for particular clinical examinations whose waiting time has now become eternal'.
However, despite rosy expectations and fervent impulses coming from technophiles, there are still some major roadblocks clearly perceived by the population. In fact, a number of concerns have been expressed about the uneven technological literacy among patients, in light of the relentless aging phenomenon. A message on this states that: 'Technology scares, especially those who are not born with the PC in the cradle'.
Finally, citizens involved in the campaign outlined the risk of applying a technocratic approach that does not take into account the human aspects of the physicianpatient relationship, or having problems due to insufficient training of healthcare personnel:
'In any case, data interpretation -especially in more complex situations -requires always a thorough (and human) assessment'.
Innovation Diffusion Determinants Assessment
The interviewees agreed that this approach (centralised automated use of multiple social media) offers strong relative advantages in comparison with both existing 'physical' alternatives (e.g. physical meetings for communicating with citizens concerning various public policies under design or implementation) and 'electronic' ones (e.g. government e-participation/e-consultation portals). The inherent nature of this approach is perceived as going beyond the traditional schemes of 'official' e-participation/e-consultation portals developed and operated by government organizations. It was stressed that such a 'formal' e-consultation gives citizens some opportunities to offer comments in response to a limited set of questions posed by government. However, these designated 'official' e-consultation spaces are largely unknown to the general public due to the high costs of promotion and the slow pace of dissemination. Furthermore, the tools they provide are not sufficiently user-friendly, and are often usable only by an affluent and acculturated minority. Another problem is that when the consultation period ends, policy makers are hit by a wave of textual comments, without obtaining a clear picture of the vox populi. However, the examined novel approach is perceived as overcoming the above weaknesses and problems. It leverages the already established large installed bases of social media users, and paves the way to a friction-less (i.e., faster and more frequent) interaction between policy makers and society. A substantial relative advantage arises with respect to the previous generation of e-participation models due to the fact that the government makes the first step towards citizens (moving to the electronic spaces they chose for discussion and content production), rather than expecting the citizenry to move their content production activity onto the 'official' spaces created for e-participation. It was also mentioned that the high levels of citizens' reach and engagement achieved in this pilot application of the examined approach, and the useful insights offered by citizens' textual comments and opinions, as discussed in the previous subsection, are indicative of the significant relative advantages that the examined approach provides. In general, the interviewees agree that this pilot confirms the expectations of the relevant literature, as mentioned in the Background section, concerning the potential of social media in government along four dimensions: increasing citizens' participation and engagement in public policy making, promoting transparency and accountability (as the main advantages and disadvantages of various policy options can be widely communicated and discussed), public services co-production, and crowdsourcing solutions and innovations.
With respect to compatibility, the interviewees found that the pronounced cross-sectoral nature of this approach renders it a precious decision support tool capable of maximizing the 'horizontality' of its application scope; as a consequence, it may be easily and effectively employed for any kind or thematic area of public policy. Furthermore, it can be used in every stage of the policy life-cycle (agenda setting, policy design, adoption, implementation and monitoring and evaluation). As a result, with regard to compatibility, the recourse to multiple social media seems to fit in with the policy formulation processes of government agencies. The interviewees concluded that the whole approach was compatible with the values and policy formulation processes of the Piedmont Regional Government.
However, it was mentioned that the above relative advantages and compatibility are to a significant extent associated with two positive characteristics of this particular government agency, which might not exist in other contexts: i) its long tradition and culture of bi-directional communication with citizens in all phases of policy making, and ii) its previous familiarity with and experience in using social media for the above purpose. If these do not exist, then it is likely that the above relative advantage and compatibility might be lower, or there might even be important relative disadvantages in comparison with existing alternative channels. Most government agencies have already developed some 'organizational capabilities' in using the abovementioned alternative physical and electronic channels of communication with citizens, but this has not happened yet with social media. The interviewees stressed that a 'typical public servant' might initially not feel 'culturally fit' for and familiar with the language and style of dialogue of most social media, and might find it difficult to participate effectively in such dialogues; so if adequate training is not provided, there might be a risk of ineffective or even problematic communication between government agencies and citizens in the social media, which would have a negative impact on the public image of the former. Furthermore, if a tradition and culture of bi-directional communication with citizens is lacking in some government agencies, this might become more visible to citizens due to the extensive, direct and informal interaction that characterises social media, with negative consequences.
With respect to complexity, it was mentioned that the proposed approach, in combination with the ICT tools supporting it, has the distinctive trait of keeping moderate the cognitive effort required of policy makers: data are processed behind the scenes, and decision makers are provided with a set of synthetic, fresh and relevant data through intuitive visual outputs. The easily understandable way of reporting campaign results determines a substantial simplicity of usage that clears the hurdle of complexity, creating fertile soil for a smooth adoption by every policy maker inclined to embrace 'open' policy making. Furthermore, the successful completion of the pilot held in the Piedmont Region corroborates the a priori conviction that this approach benefits from a noticeable scalability that allows moving all along the continuum ranging from small scale to full scale. All interviewees agreed that this innovation may be experimented with without particular obstacles, since there is no 'minimum efficient scale' for running a campaign, so it is characterised by trialability. It was recognised that this approach can be initially applied by government agencies in small scale pilot applications, in order to assess its capabilities and to fine-tune the underpinning mechanisms, before proceeding to larger scale applications.
Finally, the interviewees mentioned that the unprecedented exposure (at least in the digital world) given by social media to public policy campaigns makes this innovation highly visible to other public agencies, policy makers and society in general. In fact, policy messages make their appearance on public pages accessible by everyone (i.e., Facebook Fan Pages, Twitter Pages, YouTube Channels), and viral 'contagious' phenomena occurring in the social media realm in light of intertwined social connections play their part in garnering a rapid and vast spreading of the policy proposal at stake. The resulting observability of the innovation has, according to the interviewees, a twofold advantage: on one hand, it stimulates the citizenry to step into the debate, boosting the adoption rate; on the other hand, the opportunity to observe how the tool works in the field can create awareness in the public realm about the potential of tapping social media in order to let 'collective intelligence' percolate across governmental boundaries.
Conclusions
The increasing adoption of social media by government agencies, initially in simpler and gradually in more complex and sophisticated forms, constitutes an important innovation in their public policy making processes. Therefore it is important to analyse it from an innovation diffusion perspective as well, taking advantage of the extensive previous research in this area. This will allow us to understand to what extent various existing or emerging forms of social media exploitation in government, simpler or sophisticated ones, have the fundamental preconditions for a wider diffusion. Also, it will allow identifying characteristics of these approaches and the supporting systems, or of their context (e.g. characteristics of the adopting government organizations or the targeted citizens' groups), which do not favour their diffusion, and taking appropriate actions for addressing them.
This paper aims to make a contribution in this direction. It analyses an advanced approach to using social media by government agencies, which includes the centralized combined exploitation of multiple complementary social media platforms in an automated manner taking advantage of their APIs, initially for posting to them various types of policy-related content, and then for retrieving users' interactions with it on these platforms, which finally undergo sophisticated processing. As the theoretical foundation of our research we use the Diffusion of Innovations Theory proposed by [START_REF] Rogers | Diffusion of Innovations -Fifth Edition: The Free Press[END_REF]. Our analysis is based on a pilot application of this approach for conducting a consultation campaign concerning the large scale application of a telemedicine program of the Piedmont Regional Government, Italy.
It has been concluded that this approach has the fundamental preconditions for a wide diffusion according to the above theory: relative advantage, compatibility with existing values and processes, reasonable complexity, trialability and observability. However, its relative advantage and compatibility rely to a significant extent on the context: i) on the history and tradition of the adopting government agency with respect to bi-directional communication with citizens, and ii) on its familiarity with and experience in using social media for this purpose. If these do not exist, the relative advantage and compatibility might be lower, or there might even be relative disadvantages in comparison with the alternative physical and electronic channels of communication with citizens. The use of social media by government agencies without sufficient preparation, training of the responsible staff, and in general without the development of 'organizational capabilities' in this area and a culture of bi-directional communication with citizens, might have a negative impact on the image of government agencies.
The findings of this paper have interesting implications for research and management. With respect to research, it provides a framework for the future analysis of existing or emerging forms, systems and methods of social media use by government agencies from an innovation diffusion perspective, which is definitely quite an important one. In general, it opens up a new research direction, which combines theories, frameworks and methods from innovation, political sciences and e-participation research, in order to provide a deeper understanding of social media based innovations in political communication. With respect to the management of government agencies, the findings indicate that such a complex and sophisticated form of multiple social media use for bi-directional communication with citizens has the fundamental preconditions for a wide diffusion and adoption. However, this might depend on previous history and tradition in communication with citizens, and at the same time might necessitate training and familiarization with a new language and style of dialogue with citizens, quite different from the ones dominant previously.
Further research is required on the existing and the emerging more complex and sophisticated forms of social media use in government from various innovation related perspectives, in different contexts (e.g. different government agencies with different cultural -organizational characteristics and relevant experiences, for different types of topics), examining the viewpoints of all stakeholders (politicians, public servants and citizens) and using both qualitative and quantitative methodologies.
Fig. 1. An advanced form of centrally managed multiple social media use in government.
"1004273",
"993317",
"993316",
"1004274"
] | [
"142019",
"300999",
"300999",
"142019"
] |
01491274 | en | [
"shs"
] | 2024/03/04 23:41:50 | 2017 | https://hal-hprints.archives-ouvertes.fr/hprints-01491274/file/Preprint-C-Test%20validity.pdf | Fahimeh Khoshdel-Niyat
email: [email protected]
The C-Test: A Valid Measure to Test Second Language Proficiency?
Keywords: C-Test, Validation, Construct identification, LLTM
The C-Test is a gap-filling test belonging to the family of the reduced redundancy tests which is used as an overall measure of general language proficiency in a second or a native language. There is no consensus on the construct underlying the C-Test and many researchers are still puzzled by what is actually activated when examinees take a C-Test. The purpose of the present study is to cast light on this issue by examining the factors that contribute to C-Test item difficulty. A number of factors were selected and entered into regression model to predict item difficulty. Linear logistic test model was also used to support the results of regression analysis. Findings showed that the selected factors only explained 12 per cent of the variance in item difficulty estimates. Implications of the study for C-Test validity and application are discussed.
Introduction
The C-Test is a text completion test and is based on the reduced redundancy principle. The noise test and oral cloze procedures are other kinds of reduced redundancy tests. These tests were developed on the basis of the assumption that natural languages are "redundant". "This means that in natural communication messages contain elements which are not necessary" (Baghaei, 2011, p.7). According to information theory principles, redundancy can be reduced by eliminating words from a text and asking the learner to fill in the gap.
The C-Test is a variation of the cloze test and thus has the same basic theoretical assumptions as the cloze test [START_REF] Grotjahn | C-Tests and language processing[END_REF]. The difference is that in a C-Test parts of words are omitted, not whole words. The cloze test is an appropriate instrument for measuring general language proficiency, as [START_REF] Oller | Evidence of a general language proficiency factor: An expectancy grammar[END_REF] concluded. The C-Test is based on the reduced redundancy principle [START_REF] Spolsky | Reduced Redundancy as a Language Testing Tool[END_REF], i.e., the assumption that natural languages are redundant, so advanced learners can be distinguished from beginners by their ability to deal with reduced redundancy [START_REF] Beinborn | Predicting the Difficulty of Language Proficiency Tests[END_REF]. [START_REF] Raatz | The C-Testa modification of the cloze procedure[END_REF] suggested C-Tests because of several problems with cloze tests. These problems were:
(1) cloze tests have to be long in order to contain a sufficient number of items; (2) cloze tests usually consist of one longer text because of the deletion principles, which makes the test specific and also biased; (3) the validity and reliability of cloze tests are affected by factors such as the text, the deletion rate and the starting point of deletion; (4) most cloze tests are less reliable than they are assumed to be; (5) scoring in cloze tests is based on two methods: the exact method, in which gaps must be filled with the exact deleted words, and the acceptable method, in which gaps can be filled with any appropriate words; scoring is therefore more subjective and much more time-consuming; (6) the difficulty of the deleted words depends on the grammatical structures and the content of the words [START_REF] Alderson | The effect on the cloze test of changes in deletion frequency[END_REF][START_REF] Alderson | The cloze procedure and proficiency in English as a foreign language[END_REF][START_REF] Klein-Braley | Empirical investigations of cloze tests[END_REF].
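To illustrate the mutilation principle that distinguishes C-Tests from the cloze tests discussed above, the following Python sketch implements a simplified version of the classical rule of deleting the second half of every second word. Operational details, such as sparing the first sentence, one-letter words, numbers and proper names, vary across studies and are only approximated here.

```python
# Simplified C-Test constructor: deletes the second half of every second
# word; refinements used in practice are deliberately omitted.
def make_ctest(text, start=1):
    words, items = text.split(), []
    for i, w in enumerate(words):
        if i >= start and (i - start) % 2 == 0 and len(w) > 1:
            keep = len(w) // 2 + len(w) % 2   # keep the (larger) first half
            items.append(w[:keep] + "_" * (len(w) - keep))
        else:
            items.append(w)
    return " ".join(items)

print(make_ctest("Natural languages are redundant so gaps can be restored"))
# -> "Natural langu____ are redun____ so ga__ can b_ restored"
```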
A frequent question about C-Tests is how test takers' behavior can be characterized while they fill the gaps [START_REF] Klein-Braley | Psycholinguistics of C-Test taking[END_REF]. Many investigations have focused on the mental processes involved in working on a C-Test (Feldmann & Stemmer, 1987; [START_REF] Grotjahn | On the development and evaluation of a C-Test for French[END_REF]). To identify test-taking processes, three approaches have been suggested: statistical item analysis, text-linguistic item analysis, and analysis of individual performance (Feldmann et al., 1986; Grotjahn, 1986, as cited in Klein-Braley, 2002).
All of the above-mentioned approaches seek to uncover the mental processes that examinees engage in while answering the test items. The mental processes tapped by a test should be in line with the construct of the test; therefore, investigating these mental processes is one way to validate a given test.
Construct identification is concerned with the factors that are involved in the test content and the methods that are used to obtain subjects' scores [START_REF] Sigott | Towards identifying the C-Test construct[END_REF]. It thus sheds light on the validity of a test by studying the characteristics that affect test difficulty. Test difficulty reflects learners' ability to answer items that can be easy or difficult depending on the content and method features of a particular test. In other words, item difficulty is the proportion of wrong responses for each item of a given test [START_REF] Farhady | Test language Skills: From Theory to Practice. The organization for researching and composing university textbooks in the humanities (SAMT)[END_REF].
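To make this definition concrete, the short R sketch below computes item difficulty as the proportion of wrong responses from a scored response matrix. It is a minimal illustration: the simulated matrix resp and its dimensions are hypothetical stand-ins, not the study data.

```r
# Minimal sketch: item difficulty as the proportion of wrong responses.
# 'resp' is a hypothetical 0/1 matrix (rows = examinees, columns = gaps),
# where 1 means the mutilated word was restored correctly.
set.seed(1)
resp <- matrix(rbinom(352 * 100, 1, 0.7), nrow = 352, ncol = 100)

item_difficulty <- 1 - colMeans(resp)  # proportion of wrong answers per gap
head(round(item_difficulty, 2))
```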
In the present study, we investigate the validity of the C-Test as a test of general language proficiency by analyzing the difficulty of its items within the framework of construct identification.
Cloze tests belong to the family of reduced redundancy tests and were proposed as a measure of text difficulty by [START_REF] Taylor | Cloze procedure: a new tool for measuring readability[END_REF]. In cloze tests, every nth word (typically every seventh word or more) is deleted from a text [START_REF] Brown | Cloze item difficulty[END_REF]. In these tests, the reading passage should have a topic familiar enough for learners to engage with the text and must not be very difficult [START_REF] Brown | The emperor"s new cloze: strategies for revising cloze tests[END_REF][START_REF] Douglas | Testing methods in context-based second language research[END_REF][START_REF] Sasaki | Effects of cultural schemata on students' test-taking processes for cloze tests: a multiple data source approach[END_REF]. The main problem with cloze tests is the ambiguity of the gaps, so the set of potential solutions cannot be anticipated (Horsmann & Zesch, 2014). Hence, because of these criticisms of cloze tests, C-Tests were developed as a replacement in 1981 by Raatz and Klein-Braley from both theoretical and psychometric viewpoints [START_REF] Babaii | On the interplay between test task difficulty and macro-level processing in the C-test[END_REF]. Notably, "the letter C stands for cloze to call to mind the relationship between the two tests" (Baghaei, 2008a, p. 33).
Another variant is the oral cloze procedure. Unlike a standard cloze test, the mutilated passage is not given to testees in writing; it is recorded and presented acoustically. The blanks are numbered, or a pause occurs where a blank is located, and students guess the missing parts while listening to the material. An advantage of this technique is that it can be used with non-literate subjects [START_REF] Oller | Language tests at school: a pragmatic approach[END_REF].
Background to the Theoretical and Conceptual Views
A C-Test includes several authentic texts, usually between four and six, with twenty to twenty-five gaps in each text. Each text is usually eighty to one hundred words in length, and the texts differ in content. Five minutes are allocated per text, so a test with four texts takes twenty minutes, one with five texts takes twenty-five minutes, and so on. In the literature, 20 to 25 gaps per passage are suggested [START_REF] Raatz | Introduction to the language and the C-Test[END_REF]; however, Baghaei (2011a, [START_REF] Baghaei | Optimal number of gaps in C-Test passages[END_REF]) demonstrated that C-Tests with smaller numbers of gaps work as well as 25-gap C-Tests. In C-Tests, the exact answer should be given, although on some occasions, as [START_REF] Raatz | Introduction to the language and the C-Test[END_REF] noted, alternative solutions are accepted. Numbers and proper names are usually left unchanged [START_REF] Raatz | Introduction to the language and the C-Test[END_REF]. Furthermore, the content of the texts should be neutral, drawn from the general knowledge domain without any specialized vocabulary. In other words, texts can be selected from newspapers, brochures, magazines, non-fiction books and so forth [START_REF] Grotjahn | C-Tests and language processing[END_REF].
In C-Tests, first and last sentences should remain without any deletion. Beginning at word two in sentence two, the second half of every second word is deleted [START_REF] Raatz | Introduction to the language and the C-Test[END_REF].
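As an illustration of this deletion rule, here is a rough R sketch of the "rule of two" (refined below for odd-length words). It is a simplified toy implementation: the exemptions for numbers and proper names mentioned above, and language-specific tokenization issues, are deliberately ignored.

```r
# Simplified sketch of the "rule of two": starting at word two of sentence two,
# delete the second half of every second word; the first and last sentences
# stay intact. Numbers and proper names are NOT exempted here.
mutilate <- function(text) {
  sentences <- unlist(strsplit(text, "(?<=[.!?])\\s+", perl = TRUE))
  for (s in seq_along(sentences)) {
    if (s == 1 || s == length(sentences)) next   # keep first/last sentence
    words <- unlist(strsplit(sentences[s], " "))
    if (length(words) < 2) next
    idx <- seq(2, length(words), by = 2)         # every second word, from word two
    words[idx] <- vapply(words[idx], function(w) {
      n <- nchar(w)
      if (n < 2) return(w)                       # leave one-letter words alone
      keep <- floor(n / 2)                       # odd length: larger "half" deleted
      paste0(substr(w, 1, keep), strrep("_", n - keep))
    }, character(1))
    sentences[s] <- paste(words, collapse = " ")
  }
  paste(sentences, collapse = " ")
}

mutilate("This first sentence stays intact. Here every second word loses its second half as required. The last sentence also stays intact.")
```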
Thus, according to [START_REF] Raatz | Introduction to the language and the C-Test[END_REF], the C-Test is a kind of reduced redundancy test, because it uses the cloze principle mentioned earlier, which is derived from information theory. This means that a redundant message includes more information than is necessary for understanding it. Hence, when a message is damaged, the intact parts can help to recover what the complete message was.
However, [START_REF] Koberl | Adjusting C-test difficulty in German[END_REF] and Sigott and Koberl (1996) developed two other variations for German and English C-Tests: (1) deletion of two thirds of every second word; (2) deletion of all but the first letter of every second word.
Deleting the first halves of words produced the most reliable test in German, but not in English. In words with an odd number of letters, the larger "half" of the word is deleted. This technique is called the "rule of two" for C-Test texts (Klein-Braley & Raatz, 2002). It has to be noted that in this research we apply the original C-Test deletion principle when studying test difficulty. Furthermore, a C-Test has its own construction rules, which are as follows:
1. The target population and the test format should be defined. 2. More appropriate texts than needed should be chosen, and the best ones selected. 3. The selected texts should be brought into C-Test format (rule of two). 4. The difficulty of the texts should be analyzed. 5. The adequacy of each text should be judged, with damaged words changed, added, or removed where gaps are too difficult or too easy. 6. The good texts should then be combined. 7. Item analysis should be performed and reliability and validity examined. 8. The test should be improved if needed. 9. The final form of the test should be administered to a sample of the target population. 10. The test norms should be calculated (Klein-Braley & Raatz, 2002).
C-Test Validation Studies
Validating C-Tests has been a concern of researchers for several decades. C-Tests have been developed and validated for different groups of learners, whether L1 learners, L2 learners, or foreign language learners [START_REF] Baghaei | Construction and validation of a C-Test in Persian[END_REF]. There is ample convincing evidence for the validity of C-Tests as measures of general language proficiency. For example, C-Tests correlate highly with other proficiency measures such as teacher ratings, students' self-assessments, and composite scores of various language skills. Their factorial structure and their fit to the Rasch model are further evidence of C-Test validity (Baghaei, 2008a, 2011; [START_REF] Baghaei | The effects of the rhetorical organization of texts on the C-test construct: A Rasch modelling study[END_REF][START_REF] Baghaei | An investigation of the invariance of Rasch item and person measures in a C-Test[END_REF][START_REF] Eckes | Rasch-Modelle zur C-Test-Skalierung[END_REF][START_REF] Eckes | Item banking for C-tests: A polytomous Rasch modeling approach[END_REF][START_REF] Eckes | Using testlet response theory to examine local dependency in C-Tests[END_REF][START_REF] Eckes | A closer look at the construct validity of C-tests[END_REF][START_REF] Raatz | The factorial validity of C-Tests[END_REF][START_REF] Raatz | Investigating dimensionality of language testsa new solution to an old problem[END_REF]).
Moreover, [START_REF] Borgards | Sind C-Tests trainierbar[END_REF] examined the sensitivity of German C-Tests to a construct-irrelevant attribute, namely a coaching effect. In their study, control and experimental groups of 43 secondary-level students took a pretest, after which the experimental group received 45 minutes of coaching in C-Test taking. The posttest showed that the means of both the control and the experimental group increased to a similar extent compared with the pretest. In other words, coaching did not influence C-Test scores, which supports the view that C-Tests measure general language proficiency. Also, [START_REF] Baghaei | Construction and validation of a C-Test in Persian[END_REF] developed and validated a Persian C-Test. The results showed that the C-Test can serve as a measure of general language proficiency for Persian speakers aged 12-14 and can therefore be used to measure proficiency in Persian as a second or foreign language. The validity of the C-Test was supported by its fit to the Rasch model.
Another line of C-Test validity research concerns the role of text type or genre. For example, [START_REF] Shohamy | Validation of listening comprehension tests: the effect of text and question type[END_REF] found that in a listening comprehension test, test takers' performance depends on text genre. "When they constructed identical listening comprehension questions on the basis of texts which contained exactly the same information but only differed in their genre, examinees' performances were noticeably affected" (Baghaei & Grotjahn, 2014a, p. 163; [START_REF] Baghaei | Establishing the construct validity of conversational C-Tests using a multidimensional Item Response Model[END_REF]). Based on Baghaei, Monshi Tousi and Boori's (2009) and Baghaei and Grotjahn's (2014a) research, text type influences the construct of C-Tests, so they developed C-Tests from spoken-discourse texts to tap test takers' oral abilities.
In addition, Baghaei and Grotjahn (2014a; [START_REF] Baghaei | Establishing the construct validity of conversational C-Tests using a multidimensional Item Response Model[END_REF]) examined the componential structure of an English C-Test that included two spoken-discourse passages and two written-discourse passages, using unidimensional and multidimensional Rasch models. Their C-Test fitted Adams, Wilson, and Wang's (1997) multidimensional partial credit model better than [START_REF] Masters | A Rasch model for partial credit scoring[END_REF] unidimensional partial credit model, revealing two separate dimensions: spoken-discourse and written-discourse C-Test passages. Note that the sample size of 99 is a limitation of their study, which can be considered a pilot study. The results showed that different C-Test text types can measure different constructs. For example, spoken-discourse C-Test texts may be better at testing students' listening/speaking skills than written-discourse texts. Moreover, if a C-Test includes both spoken and written discourse, it can be a better operationalization of the C-Test construct as general language proficiency. Furthermore, as [START_REF] Sigott | Towards identifying the C-Test construct[END_REF] stated, C-Tests can be multidimensional because of the various interpretations of C-Test scores: "the same C-Test passage could well be different tests for subjects at different levels of proficiency…without [the test user] knowing to what extent different aspects of the construct are reflected in the individual test scores" (p. 203).
Construct Validity
Construct validity is one of the most complicated aspects of test validation. It is based not only on analyzing test scores but also on analyzing test performance [START_REF] Sigott | Towards identifying the C-Test construct[END_REF]. Messick (1975, p. 975) states that "a measure estimates how much of something an individual displays or possesses", and he defines validity as "an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment" (1989, p. 13). He also notes that there has been a shift from prediction to explanation in the concept of validity; that is, a clear interpretation of test scores is more important than opaque prediction. Baghaei (2007) holds a similar view on the importance of score meaning and its interpretation for construct validity.
Besides, Baghaei (2007) focused on another crucial aspect of construct validity, namely "construct-irrelevant variance". Test scores always contain some irrelevant sub-dimensions that we do not intend to measure but that affect the construct. Baghaei distinguished two sources of construct-irrelevant variance: "construct-irrelevant easiness" and "construct-irrelevant difficulty". Construct-irrelevant difficulty means the inclusion of tasks that make the construct more difficult and result in "invalidly low scores" for some people. Construct-irrelevant easiness, on the other hand, is easiness of the test caused by the inclusion of faulty items that give clues to individuals (in the case of multiple-choice items) who are familiar with the test format and can benefit from this (Baghaei, 2011).
Construct Identification Studies
"Construct representation is concerned with identifying the theoretical mechanisms that underlie item responses, such as information processes, strategies, and knowledge stores" (Embretson, 1983, p 179). Also, it can be considered as "construct identification", and "construct specification" (Perkins & Linville, 1987;[START_REF] Stenner | Toward a theory of construct definition[END_REF]). When a person scores higher than another one, it indicates that he/she processes more of the construct in question or an item that score higher in difficulty presumably demands more in construct [START_REF] Stenner | Toward a theory of construct definition[END_REF]. [START_REF] Stenner | Toward a theory of construct definition[END_REF] believed the process reveals "something" that happens while examines provide responses to items and is called "construct definition". Based upon some research such as [START_REF] Klein-Braley | Towards a theory of C-Test processing[END_REF], focus of construct identification is twofold: first one is investigating C-Test takers" psycholinguistic strategies; another one is predicting the difficulty of C-Test passages from text characteristics. The ability of responding to the item and method features affect the difficulty of the test, subtest or items.
In previous work, [START_REF] Klein-Braley | Advance Prediction of Difficulty with C-Tests[END_REF], 1985) used German C-Tests with 9- and 11-year-old L1 German speakers and an English C-Test with L1 German-speaking students of English at Duisburg University, applying a multiple regression equation. She listed the following text characteristics:
(1) number of words in the text and number of different word types, (2) number of sentences in the text, (3) type-token ratio, (4) average sentence length in syllables, (5) average number of words per sentence, and (6) average number of syllables per word.
For the English students, the type-token ratio and the average sentence length in syllables were the best predictors of scores. For the German students, the type-token ratio and the average number of words per sentence were the best predictors. These results predict the difficulty of C-Test passages for specific groups and cannot be generalized to other groups. In our study we use the framework of construct identification to identify factors that influence the item difficulty of C-Tests.
Remarkably, [START_REF] Eckes | Item banking for C-tests: A polytomous Rasch modeling approach[END_REF] used a Rasch model to compare different C-Tests by constructing a calibrated item bank for C-Tests. "In a calibrated item bank the parameter estimates for all items in the bank have been placed on the same difficulty scale" (Szabó, 2008; Vale, 2006; Wright & Stone, 1999, as cited in Eckes, 2011). Besides, the fit of data to a latent trait model is evidence of the existence of a construct underlying the responses, and hence of validity [START_REF] Baghaei | The logic of latent variable analysis as validity evidence in psychological measurement[END_REF].
The present study
C-Tests, like any other tests, consist of several items with different item difficulties, so we should find out which factors make items easier or more difficult. To this end, the following factors that may affect item difficulty were studied: (1) the frequency of the mutilated word [START_REF] Brown | Cloze item difficulty[END_REF][START_REF] Sigott | The C-test: some factors of difficulty[END_REF], (2) whether the word is a content or a function word, (3) the length of the mutilated word, (4) the length of the sentence where the gap is (Klein-Braley, 1984), (5) the number of propositions in the sentence where the gap is, (6) the propositional density of the sentence where the gap is, (7) inflections [START_REF] Beinborn | Predicting the Difficulty of Language Proficiency Tests[END_REF], (8) text difficulty as measured by Lexile (www.lexile.com), (9) the frequency of the word before the mutilated word, (10) the frequency of the word after the mutilated word, (11) text difficulty as measured by the p-values of the texts [START_REF] Beinborn | Predicting the Difficulty of Language Proficiency Tests[END_REF], (12) dependency among items [START_REF] Beinborn | Predicting the Difficulty of Language Proficiency Tests[END_REF], and (13) word class (noun, verb, adjective, adverb, pronoun, preposition, conjunction, and determiner) [START_REF] Sigott | The C-test: some factors of difficulty[END_REF].
It should be borne in mind that some of these factors come from the C-Test literature, while others were hypothesized by the researcher to affect item difficulty. The present study thus examines which of these factors affect item difficulty in a C-Test.
Method
Participants and Setting
The participants in the present study were 352 undergraduate EFL students at the Islamic Azad University of Mashhad and Neyshabour and at Ferdowsi, Khayyam, and Binalood universities. Both male (N=108) and female (N=244) students participated in this research, with an age range of 20 to 35 (M=20, SD=10.33). They were assured that their information would remain confidential and were thanked for their cooperation.
Instrumentation
The instrument employed in this study was a C-Test with four texts of differing general knowledge content, each with 25 gaps. In this C-Test the first and the last sentences remained without any deletions. Beginning at word two in sentence two, the second half of every second word was deleted [START_REF] Raatz | Introduction to the language and the C-Test[END_REF]. The texts were selected from CAE [START_REF] Norris | Ready for CAE: coursebook[END_REF] and FCE coursebooks [START_REF] Norris | Ready for FCE: coursebook[END_REF]. Furthermore, the online Collins dictionary was used to obtain the frequency of each word.
Procedure
The test was given to the 352 EFL students described above. They were instructed to read the directions carefully and fill in the 25 mutilated words in each text (100 items in total) based on the available textual information in the passages. By computing the difficulty of individual gaps from their answers, we could explore the factors that make an item more difficult or easier.
Based on the literature, as mentioned earlier, 13 factors were selected for investigation. To compute item difficulty, participants had 20 minutes to answer all 100 items (gaps). Item difficulty was computed as the proportion of wrong answers. Furthermore, the participants were asked to write down their email addresses if they wished to be informed of their test results.
Study Design and Analysis
In this study, correlational analysis, multiple regression, one-way analysis of variance (ANOVA), and linear logistic test modeling (LLTM) [START_REF] Fischer | The linear logistic test model as an instrument in educational research[END_REF] were used to analyze the data. First, the dependent and independent variables were defined: the item difficulty of the C-Test items was the dependent variable, and the 13 word-level and text-level factors mentioned above were the independent variables. The correlation coefficients between these 13 factors and item difficulty were computed. Next, item difficulty was regressed on the independent variables.
Finally, LLTM was used to cross-check the results of the regression analysis. LLTM is an extension of the Rasch [START_REF] Rasch | Probabilistic models for some intelligence and attainment tests[END_REF][START_REF] Rasch | Probabilistic models for some intelligence and attainment tests[END_REF] model which imposes linear constraints on the item parameters [START_REF] Baghaei | Linear logistic test modeling with R. Practical Assessment[END_REF]. It postulates that several basic parameters underlie item difficulty; the difficulty of each item is therefore obtained as the sum of the difficulties of its basic parameters. If the Rasch model-based item parameters and the LLTM-reconstructed item parameters are close, LLTM has a good fit [START_REF] Baghaei | A cognitive processing model of reading comprehension in English as a foreign language using the linear logistic test model[END_REF][START_REF] Baghaei | Linear logistic test modeling with R. Practical Assessment[END_REF]. For example, for a mathematics question that requires subtraction, addition, and division, LLTM considers these three operations as the basic operations that influence the difficulty of the item. The difficulty of the item should then be recoverable by adding the difficulties of the three operations needed to solve it.
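In standard IRT notation (our own summary sketch, not a formula quoted from the sources above), the Rasch model and the LLTM decomposition read:

```latex
% Rasch model: probability that person v solves item i
P(X_{vi} = 1 \mid \theta_v, \beta_i)
  = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}

% LLTM: item difficulty decomposed into m basic parameters
\beta_i = \sum_{j=1}^{m} w_{ij}\,\eta_j + c
```

Here θ_v is the ability of person v, β_i the difficulty of item i, η_j the difficulty of basic operation j, w_ij the Q-matrix weight of operation j in item i, and c a normalization constant.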
Notably, the difficulty of the item is also calculated with the standard Rasch model and then compared with the LLTM-reconstructed item difficulty. If the two are close to each other, we can conclude that the basic parameters (here, subtraction, addition, and division) account for the overall difficulty of the item.
SPSS version 22 was employed for the correlational and regression analyses, and the eRm package (Mair, Hatzinger, & Maier, 2014) in R version 3.11 (R Core Development Team, 2015) was used for the LLTM analysis.
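A sketch of the first step of this pipeline with eRm is given below. The object resp stands for the scored 352 x 100 response matrix (cf. the earlier sketch); the calls follow the documented eRm interface, but the snippet is an illustration rather than the exact script used in the study.

```r
# Sketch: fit the Rasch model and run Andersen's LR test with eRm.
# 'resp' is the scored 0/1 response matrix (352 examinees x 100 gaps).
library(eRm)

rm_fit <- RM(resp)                         # standard dichotomous Rasch model
lr <- LRtest(rm_fit, splitcr = "median")   # Andersen's likelihood ratio test
lr                                         # chi-square, df, p-value
plotGOF(lr)                                # graphical model check (cf. Figure 1)
```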
Results
Descriptive Statistics
Table 1 displays the minimum, maximum, means, and standard deviations of the 12 quantitative independent variables chosen as predictors of C-Test item difficulty in the present study. The first analysis examined whether the frequency of the mutilated word and C-Test item difficulty are significantly related. Correlational analysis showed a negative correlation between the two variables, r = -.248, n = 100, p < .05. That is, as a word becomes more frequent in the language, item difficulty decreases. Content words are usually nouns, verbs, adjectives, and sometimes adverbs; they carry the content of the message and tell addressees where to focus their attention. Function words, in contrast, are the words we use to make our sentences grammatically correct; pronouns, determiners, prepositions, and auxiliary verbs are examples [START_REF] Bell | Predictability effects on durations of content and function words in conversational English[END_REF]. If we use them incorrectly we sound like poor speakers of English, but listeners can still get the main idea. Function words do not convey the main information, so we do not use them to attract attention, and we sometimes reduce them in speech.
For this analysis, content words were coded "1" and function words "0". Correlational analysis revealed a positive correlation between the two variables, r = .216, n = 100, p < .05. This means that content words are more difficult to reconstruct than function words and contribute more to C-Test item difficulty.
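These correlational results can be reproduced with ordinary Pearson correlations; the sketch below uses hypothetical predictor vectors (one value per gap) in place of the real codings.

```r
# Hypothetical predictors, one value per gap (length 100):
# word_freq  - Collins frequency band of the mutilated word (here 3-5)
# is_content - 1 = content word, 0 = function word
# 'item_difficulty' is the vector computed in the earlier sketch.
set.seed(2)
word_freq  <- sample(3:5, 100, replace = TRUE)
is_content <- rbinom(100, 1, 0.56)

cor.test(item_difficulty, word_freq)    # expected negative: frequent = easier
cor.test(item_difficulty, is_content)   # point-biserial: content vs. function

# Propositional density = propositions / words in the sentence with the gap
n_props  <- sample(1:10, 100, replace = TRUE)
sent_len <- sample(5:74, 100, replace = TRUE)
cor.test(item_difficulty, n_props / sent_len)
```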
Table 4: An example of length of each word and item difficulty of some words
Table 4 illustrates the length of each word and its difficulty (word length was counted for all 100 items). There was a negligible positive relationship between these two variables, r = .013, n = 100, p = .70. The researcher had hypothesized that word length may affect item difficulty, but the analysis showed that it has no effect on the difficulty of C-Test items. Sentence length was measured as the number of words in the sentence containing each mutilated word. Correlational analysis showed a negligible negative correlation between sentence length and item difficulty, r = -.012, n = 100, p = .91, so this factor was not a good predictor of item difficulty. Therefore, the fourth null hypothesis is accepted.
Table 6: An example of number of propositions and item difficulty of some words
The number of propositions is another predictor chosen to predict item difficulty; it is based on the number of verbs in a given sentence. Correlational analysis found no significant correlation between item difficulty and the number of propositions, r = -.054, n = 100, p = .61, so the fifth null hypothesis is also accepted. For calculating propositional density, as another predictor, sentence length and the number of propositions in each sentence are needed: propositional density is computed by dividing the number of propositions in a sentence by the number of words in the sentence. We hypothesized that the higher the density, the more difficult the item. However, there was no significant correlation between these two variables, r = -.024, n = 100, p = .84. Propositional density was thus not a good predictor, and the sixth null hypothesis is accepted as well.
Table 8: An example of inflections and item difficulty of some words
In grammar, inflection is the modification of a word to mark grammatical categories such as number, gender, voice, or tense; inflected forms were coded "1" and uninflected forms "0". As Table 8 illustrates, seven types of inflection were considered (-ed, -s (plural), -s (3rd person), -al (adjective), -ly (adverb), -er, and -est (superlative)). Correlational analysis revealed that inflections had no significant correlation with item difficulty, r = .13, n = 100, p = .17, so the seventh null hypothesis is accepted. Text difficulty was computed according to the Lexile framework (www.lexile.com); there was no significant correlation between these two variables, r = .12, n = 100, p = .28, so the eighth null hypothesis is accepted.
Table 10: An example of dependency and item difficulty of some words
Dependency among items was identified by examining the relationships between item residuals after a Rasch model [START_REF] Rasch | Probabilistic models for some intelligence and attainment tests[END_REF][START_REF] Rasch | Probabilistic models for some intelligence and attainment tests[END_REF] analysis. Residuals are the unexplained variance in the data after the latent trait is factored out; correlations among residuals indicate dependency beyond the effect of the latent trait. A Rasch analysis using Winsteps (Linacre, 2014) identified 12 items with high residual correlations. These items were coded "1" and the rest "0". Correlational analysis showed no significant correlation between dependency and item difficulty, r = .11, n = 100, p = .25. The researcher further postulated that the frequency of the word before the mutilated word may affect item difficulty, so the frequencies of those words were obtained from the online Collins dictionary. Correlational analysis revealed no significant correlation between item difficulty and the frequency of the word before the mutilated word (r = -.05, n = 100, p = .62); thus, the tenth null hypothesis is accepted.
Table 12: An example of frequency of the words after the mutilated word and C-Test item difficulty
Similarly, the researcher predicted that the frequency of the word after the mutilated word may affect item difficulty, so the frequencies of those words were obtained from the online Collins dictionary. Correlational analysis revealed no significant correlation between item difficulty and the frequency of the word after the mutilated word (r = .05, n = 100, p = .65); this variable was not a good predictor of item difficulty either. Thus, the eleventh null hypothesis is accepted.
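Returning briefly to the dependency coding described above: since the original screening was done in Winsteps, the following eRm-based sketch is only an approximation of that procedure, flagging item pairs with high standardized residual correlations.

```r
# Approximate local-dependence screening via Rasch residual correlations.
# 'rm_fit' is the eRm Rasch fit from the earlier sketch.
pp  <- person.parameter(rm_fit)          # person ability estimates
res <- residuals(pp)                     # standardized residual matrix
rcor <- cor(res, use = "pairwise.complete.obs")
diag(rcor) <- 0
which(abs(rcor) > 0.3, arr.ind = TRUE)   # flag highly correlated item pairs
```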
The independent variables all together explained 8% of the variance in item difficulties, which was not statistically significant, F(11, 86) = 1.79, p = .06, R² = .18, adjusted R² = .08. Table 15 shows the Beta weights of the independent variables, their statistical significance, and their part correlations. The square of a part correlation shows the unique contribution of the corresponding independent variable to explaining item difficulty. As Table 15 shows, word length makes the strongest contribution to item difficulty, followed by content/function word status and then word frequency. In this study, word frequency, content/function word status, and word length have part correlation coefficients of -.16, .16, and -.17, respectively; squaring them gives .025, .025, and .030, indicating that frequency and content/function status each explain about 2.5 percent, and word length about 3 percent, of the variance in item difficulty. Although these three factors explain only a small percentage of the variance, they are better predictors than the other factors.
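A sketch of this regression in R follows; pred is a hypothetical data frame of predictor columns (reusing vectors from the earlier sketches), and the part (semi-partial) correlations are obtained via the spcor function, assumed available from the ppcor package.

```r
# Standard multiple regression of item difficulty on the predictors.
# 'pred' is a hypothetical data frame holding predictor columns.
pred <- data.frame(word_freq, is_content,
                   word_len = sample(2:10, 100, replace = TRUE))
fit <- lm(item_difficulty ~ ., data = pred)
summary(fit)                              # R^2, adjusted R^2, F test, weights

# Semi-partial (part) correlations: unique contribution of each predictor
library(ppcor)                            # assumed installed
spcor(cbind(item_difficulty, pred))$estimate[1, -1]
```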
One-way Analysis of Variance (ANOVA)
To answer the thirteenth research question, one-way analysis of variance (ANOVA) was run.
As Table 16 displays, in this analysis there is one independent variable (word class) with eight levels.
Table 16: Mean Item Difficulties of Different Word Classes
Table 16 shows "verbs" and "adjectives" are more difficult to answer in C-Tests and "determiners" are easier. It can be said that word classes affect item difficulty in C-Test items. So, Hypothesis 13 that was "word classes" have no effect on item difficulty is rejected.
However, one-way ANOVA showed no statistically significant difference at the p < .05 level in item difficulty across the eight word classes: F(7, 92) = 1.21, p = .30. The robust test of equality of means was therefore also run and was likewise non-significant (p = .27).
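The word-class comparison can be sketched in R as follows; word_class is a hypothetical factor with the eight levels listed above, and Welch's oneway.test stands in for the robust test of equality of means.

```r
# One-way ANOVA of item difficulty by word class (hypothetical coding).
set.seed(3)
word_class <- factor(sample(c("noun", "verb", "adjective", "adverb", "pronoun",
                              "preposition", "conjunction", "determiner"),
                            100, replace = TRUE))
summary(aov(item_difficulty ~ word_class))   # classical F test
oneway.test(item_difficulty ~ word_class)    # Welch's robust test
tapply(item_difficulty, word_class, mean)    # cell means (cf. Table 16)
```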
The Linear Logistic Test Model (LLTM) Analysis
The linear logistic test model (LLTM; Fischer, 1973) was also used to study the contribution of the different factors to C-Test item difficulty. Since LLTM is an extension of the Rasch model, the standard Rasch model [START_REF] Rasch | Probabilistic models for some intelligence and attainment tests[END_REF][START_REF] Rasch | Probabilistic models for some intelligence and attainment tests[END_REF] should fit the data first [START_REF] Fischer | The linear logistic test model as an instrument in educational research[END_REF][START_REF] Baghaei | Linear logistic test modeling with R. Practical Assessment[END_REF][START_REF] Baghaei | A cognitive processing model of reading comprehension in English as a foreign language using the linear logistic test model[END_REF][START_REF] Ghahramanlou | Understanding the cognitive processes underlying performance in the IELTS listening comprehension test[END_REF][START_REF] Hohensinn | Does the position of response options in multiplechoice tests matter?[END_REF]. Andersen's likelihood ratio test [START_REF] Andersen | A goodness of fit test for the Rasch model[END_REF] showed that the 100 items do not fit the Rasch model, χ² = 533.16, df = 99, p = .00. The graphical model check (Figure 1) revealed that 36 of the 100 items were misfitting. These 36 misfitting items were deleted, and a Q-matrix for the 64 remaining items and the 11 basic parameters was developed. The 64 Rasch-fitting items and the Q-matrix were subjected to LLTM analysis using the eRm package (Mair, Hatzinger, & Maier, 2014) in R version 3.11 (R Core Development Team, 2015). Table 17 shows the easiness parameters of the 11 operations, their standard errors, and their 95% confidence intervals. The LLTM analysis revealed high errors for basic parameters 8 and 11, so they were omitted and LLTM was estimated again with 9 basic parameters (Table 18). LLTM imposes a linear constraint on the difficulty parameters; that is, we should be able to reconstruct the Rasch model-based item parameters by adding the difficulties of the operations needed to solve each item.
Comparing the fit of LLTM and the Rasch model with the likelihood ratio test showed that the Rasch model fits significantly better than LLTM, χ² = 9856, df = 54, p = .00. The correlation between the Rasch model-based item estimates and the LLTM-reconstructed item estimates was .37; that is, we managed to explain 12% of the variance in item difficulties with the nine factors (Figure 3).
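The LLTM step and the model comparison can be sketched with eRm as below. Here resp64 and the Q-matrix W (64 items x 9 basic parameters) are hypothetical stand-ins for the data described above, and the likelihood ratio statistic is assembled by hand from the fitted log-likelihoods.

```r
# Sketch: LLTM on the 64 Rasch-fitting items; W is the 64 x 9 Q-matrix.
resp64 <- resp[, 1:64]                 # placeholder for the fitting items
W <- matrix(rbinom(64 * 9, 1, 0.3), nrow = 64)

rm64 <- RM(resp64)
lltm <- LLTM(resp64, W)

# Likelihood ratio test: Rasch model vs. its LLTM restriction
lr_stat <- -2 * (lltm$loglik - rm64$loglik)
df <- length(rm64$etapar) - length(lltm$etapar)   # 63 - 9 = 54, as reported
pchisq(lr_stat, df, lower.tail = FALSE)

# Correlation between Rasch-based and LLTM-reconstructed item parameters
cor(rm64$betapar, lltm$betapar)        # cf. Figure 3
```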
Results and Discussion
The purpose of this study was to establish whether the 13 independent variables listed earlier (the frequency of the mutilated word, content versus function word, the length of the mutilated word, the length of the sentence where the gap is, and so on) contribute to C-Test item difficulty. After collecting the data, correlational analysis, regression, ANOVA, and LLTM were conducted. Based on the participants' responses, all 13 hypotheses were tested and the results presented. The correlational analyses show that the frequency of the mutilated word, content/function word status, and text difficulty as measured by p-values have significant correlations with C-Test item difficulty. However, text difficulty as a p-value correlates significantly because this measure of text difficulty is based on the difficulty of the individual C-Test items within a text; it is therefore bound to correlate with item difficulty. The other variables had no significant correlation with item difficulty in this study.
ANOVA was used to analyze the effect of word class, with the eight levels mentioned above, on item difficulty. The mean difficulties of the word classes differed, with verbs and adjectives harder for participants to reconstruct and determiners easier, although the overall F test did not reach significance.
Multiple regression was used to assess the ability of 11 independent variables to predict the item difficulty of the C-Test items (the number of propositions was deleted because of its high correlation with sentence length, r = .94, and word class was not used in this analysis). The results revealed that the 11 independent variables together explained only 8% of the variance in item difficulties. The Beta weights showed that word length made the strongest contribution to item difficulty, followed by content/function word status and then word frequency.
Linear logistic test modeling (LLTM) was used to cross-check the variance explanation obtained in the multiple regression analysis. First, the Rasch model was fitted to the 100 items to determine whether they fit; 36 items did not, so after their deletion the Rasch model was run again on the remaining 64 items. After developing the Q-matrix for the 64 items and 11 basic operations (dependency and word class were omitted in this analysis), LLTM was run. Two basic operations were then deleted due to high errors and LLTM was rerun. The 9 remaining parameters were: 1. frequency of the mutilated word, 2. content/function word, 3. word length, 4. sentence length, 5. number of propositions, 6. propositional density, 7. inflections, 8. frequency of the word before the mutilated word, 9. frequency of the word after the mutilated word.
This analysis showed that content words, inflections, and the frequency of the mutilated word had the greatest impact on item difficulty; the remaining parameters had no remarkable effect. Overall, LLTM explained 12% of the variance in item difficulties.
Conclusions
As mentioned before, in this study the researcher hypothesized that 13 factors contribute to C-Test item difficulty. These factors were entered into a regression analysis as independent variables to predict C-Test item difficulties; correlational analysis, ANOVA, and LLTM were also used. The aim of the present study was thus to identify which factors make items more difficult or easier.
To determine the effect of the 13 independent variables on item difficulty, 352 students from several universities in Mashhad and Neyshabour answered the 100 items of four C-Test texts. The texts were chosen from CAE [START_REF] Norris | Ready for CAE: coursebook[END_REF] and FCE coursebooks [START_REF] Norris | Ready for FCE: coursebook[END_REF].
The results of the correlational analysis were as follows. First, the frequency of the mutilated word had a significant relationship with item difficulty: a high-frequency mutilated word helps test takers answer better than a low-frequency one. For instance, the mutilated word "sch…….." = "school", with a high frequency band of 5, was easier to answer, whereas "instr………" = "instructed", with a frequency band of 3, was more difficult. Therefore, word frequency affects the difficulty of each item. Moreover, whether the mutilated word is a function or a content word affects item difficulty: content words are harder to restore. For example, "students", a content word, was more difficult to answer than "into", a function word. In addition, there was a significant correlation between text difficulty as measured by p-values and C-Test item difficulty; since this p-value is based on the difficulty of the individual C-Test items within a text, this correlation is expected. Finally, the analysis of the eight word classes showed that verbs and adjectives were more difficult to answer in C-Tests and determiners were easier.
In contrast, the other 9 independent variables had no significant correlation with item difficulty in C-Tests: (1) word length, (2) sentence length, (3) the number of propositions in the sentence where the C-Test item is, (4) the propositional density of the sentence where the C-Test item is, (5) inflections, (6) text difficulty as measured by Lexile, (7) item dependency, (8) the frequency of the word before the mutilated word, and (9) the frequency of the word after the mutilated word.
The results of the linear logistic test model (LLTM; Fischer, 1973) were largely consistent with the correlational analysis. Andersen's likelihood ratio test [START_REF] Andersen | A goodness of fit test for the Rasch model[END_REF] showed that the 100 items did not all fit the Rasch model; as mentioned earlier, the graphical model check revealed that 36 of the 100 items were misfitting. These 36 items were deleted and the Rasch model was estimated again. A Q-matrix was then developed for the 64 remaining items and the 11 basic parameters: (1) frequency of the mutilated word, (2) function/content word, (3) word length, (4) sentence length, (5) number of propositions, (6) propositional density, (7) text difficulty (Lexile), (8) inflections, (9) frequency of the word before the mutilated word, (10) frequency of the word after the mutilated word, and (11) text difficulty (p-value, i.e., the difficulty of each super-item or passage). The LLTM analysis showed high errors for the two parameters "inflections" and "text difficulty (p-value)", so they were omitted and LLTM was estimated again with the other 9 basic parameters. The results revealed that the 9 independent variables together explained 12% of the variance in item difficulties. In general, based on the findings of the correlational analysis and LLTM, it is concluded that the frequency of the mutilated word, content/function word status, and text difficulty as measured by p-values contribute significantly to C-Test item difficulty. According to the LLTM results, content words, inflections, and the frequency of the mutilated word had the greatest impact on item difficulty.
The findings of this study revealed that the 13 selected factors explained only a small portion of the variance in C-Test item difficulties. Some of these factors came from the literature and some were added by the researcher, i.e., whether the word is a content or a function word, the length of the mutilated word, the number of propositions in the sentence where the gap is, the propositional density of that sentence, text difficulty as measured by Lexile (www.lexile.com), the frequency of the word before the mutilated word, and the frequency of the word after the mutilated word. The researcher included all the factors deemed likely to affect C-Test item difficulty, and no construct identification study on the C-Test has so far covered as many factors as this one. Nevertheless, the portion of variance explained, 12%, is rather small considering the number of factors entered into the analysis.
One explanation for these findings is that test takers may use different skills and strategies to answer C-Test items, so item difficulties cannot be explained by one set of factors for all test takers. According to [START_REF] Sigott | Towards identifying the C-Test construct[END_REF], C-Tests have a fluid construct: the construct underlying the C-Test changes as a function of person ability and text difficulty, that is, a C-Test could measure different things for different examinees. If the fluid construct phenomenon is real, explaining and modeling item difficulty in C-Tests is very difficult, if not impossible. Future researchers must nevertheless consider additional relevant factors that might contribute to item difficulty.
Another issue that must be given attention is that correlation is sensitive to restriction of range: when the range of the measured variables is small, correlation coefficients are depressed. Our analysis suffered from this problem. Almost all of our independent variables, such as word frequency and content/function status, had restricted ranges; frequency was measured on a scale from 1 to 5 and content/function was dichotomous, with only the two values 0 and 1. The small correlations observed in this study are therefore partly due to the small ranges of the variables. Moreover, different levels of proficiency invite different interpretations of C-Test scores because "the same C-Test passage could well be different tests for subjects at different levels of proficiency…without [the test user] knowing to what extent different aspects of the construct are reflected in the individual test scores" (Sigott, 2004, p. 203). If the fluid construct phenomenon (FCP) is true, then it is very difficult to understand what factors make C-Test items hard: different factors may influence the difficulty of each item for different test takers, and it would be hard to pin down exactly why an item becomes easy or hard.
The findings of this study have implications for other researchers. In the present study, the effects of 13 independent variables on C-Test item difficulty were investigated. In materials and test development, it is crucial to find out which factors make an item easier or more difficult; knowing what makes a test or task hard can guide teachers and materials developers toward optimal use of such tasks.
According to [START_REF] Grotjahn | C-Tests and language processing[END_REF], C-Tests are based on a variation of the cloze principle and thus share its basic theoretical assumptions, but many EFL teachers and educators do not know the similarities and differences between cloze tests and C-Tests. The results of the current research suggest that a C-Test can be used to test knowledge of vocabulary, whereas reading or grammar cannot be measured through such tests. C-Tests can therefore be used as vocabulary tests at schools for different levels and as vocabulary tasks in coursebooks. It is important that a C-Test measures exactly what we intend: a vocabulary test should test knowledge of vocabulary, a grammar test knowledge of grammar, and so on. Moreover, the literature shows that C-Tests can measure crystallized intelligence [START_REF] Baghaei | The C-Test: An Integrative Measure of Crystallized Intelligence[END_REF]. Crystallized intelligence is the ability to use "culturally approved, previously acquired problem solving methods" (Hunt, 2000, p. 127) and represents abilities that result from education and experience (Baghaei & Tabatabaee, 2015, p. 47). In general, it should be noted that different abilities and intelligences can affect the way test takers answer a test, and their answers will differ accordingly.
Suggestions for Further Research
The researcher suggests the following areas for further research related to the difficulty of C-Test items.
According to [START_REF] Sigott | Towards identifying the C-Test construct[END_REF], the fluid nature of the C-Test construct may influence the interpretation of C-Test scores, so it should be treated as a crucial aspect of C-Tests in test scoring. Further studies should also examine the effects of paragraph- and text-level variables on C-Test item difficulty, since the present study focused on the gap level. Moreover, the independent variables together explained only 12% of the variance in item difficulties; more research is therefore needed to identify exactly which factors affect item difficulty at the gap level and how.
Another important consideration is that some researchers regard the C-Test as a general language proficiency test [START_REF] Eckes | A closer look at the construct validity of C-tests[END_REF][START_REF] Klein-Braley | A cloze-up on the C-Test: a study in the construct validation of authentic tests[END_REF][START_REF] Sigott | The C-test: some factors of difficulty[END_REF], while others regard it as a vocabulary test [START_REF] Chapelle | Are C-tests valid measures for L2 vocabulary research[END_REF][START_REF] Singleton | The second language lexicon: some evidence from university-level learners of French and German[END_REF] or a grammar test [START_REF] Babaii | The C-test: a valid operationalization of reduced redundancy principle?[END_REF]. On the basis of the present study, the researcher concluded that the C-Test can serve as a vocabulary test, although this still needs further investigation.
Finally, the current study did not include ESL or ESP students; future studies could examine these groups using the same design and procedures as for the EFL learners here.
Figure 1: Graphical Model Check for the 100 C-Test Gaps
Figure 2: Graphical Model Check for 64 C-Test Gaps
Figure 3: RM Item Parameters vs. LLTM Item Parameters
Table 1: Descriptive statistics for the predictors in the analysis
N Minimum Maximum Mean Std. Deviation
1.Frequency 100 3 5 4.61 .601
2.F.C 100 0 1 .56 .499
3.L.Word 100 2 10 4.93 2.114
4.L.Sentence 100 5 74 29.48 23.808
5.Proposition 100 1 10 3.87 3.368
6.P.Density 100 .06 .50 .1346 .07455
7.Inflection 100 0 1 .18 .386
8.Lexile 100 700 1170 980.00 183.264
9.Dependency 100 0 1 .20 .402
10.Frequency before mutilated word 100 2 5 4.71 .556
11.Frequency after mutilated word 100 3.00 5.00 4.8800 .38350
12.p value 100 9 13 11.25 1.486
Valid N (listwise) 100
Table 2: An example of word frequency and item difficulty for some words
Table 3: An example of content/function words and item difficulty of some words
Dependent variable Independent variable
Item Item difficulty Word frequency
1(students) .36
2(selected) .29
3(first) .11
4( into) .50
5(just) .14
Table 5: An example of sentence length and item difficulty of some words
Dependent variable Independent variable
Item Item difficulty Content /function
1 (students) .36 1
2(selected) .29 1
3(first) .11 1
4( into) .50 0
5(just) .14 0
Table 7: An example of propositional density and item difficulty of some words
Dependent variable Independent variable
Item Item difficulty Sentence length
1 (students) .36 24
2(selected) .29 24
3(first) .11 24
4( into) .50 24
5(just) .14 24
Dependent variable Independent variable
Item Item difficulty Number of propositions
1 (students) .36 2
2(selected) .29 2
3(first) .11 2
4( into) .50 2
5(just) .14 2
Table 9: An example of text difficulty and item difficulty of some words in the four C-Test texts
Dependent variable Independent variable
Item Item difficulty Sentence length Number of propositions Propositional density
1 (students) .36 24 2 2/24
2(selected) .29 24 2 2/24
3(first) .11 24 2 2/24
4( into) .50 24 2 2/24
5(just) .14 24 2 2/24
Dependent variable Independent variable
Item Item difficulty Inflections
1 students) .36 1
2(selected) .29 0
3(first) .11 0
4( into) .50 0
5(just) .14 0
Table 11: An example of frequency of the word before the mutilated word and C-Test item difficulty
Dependent variable Independent variable
Item Item difficulty Dependency
1 (students) .36 0
2(selected) .29 0
3(first) .11 0
4( into) .50 0
5(just) .14 0
Table 13: An example of text difficulty (p-value) and C-Test item difficulty of some words in the four C-Test texts
Table 13 revealed that the p-value, or difficulty, of each passage was computed by treating each passage as a super-item or testlet. This was considered another measure of difficulty for each passage alongside the Lexile estimates. The correlation between individual C-Test item difficulties and passage difficulties was r = -.24, n = 100, p < .05; that is, as the text gets easier, its items get easier too. However, since this measure of text difficulty is based on the difficulty of the individual C-Test items within a text, finding a correlation is expected. Table 14 displays the coefficients of correlation between all the variables in this study. In summary, there are three significant correlations: frequency of the mutilated word and item difficulty, r = -.24, n = 100, p < .05; function/content words and item difficulty, r = .21, n = 100, p < .05; and text difficulty, estimated as passage difficulty measured by the super-item p-value, and item difficulty, r = -.246, n = 100, p < .05.
Table 14: Correlational Analysis
Regression
Standard multiple regression was used to estimate the contribution of the 11 independent variables in explaining C-Test item difficulty. The assumptions of multicollinearity and independence of residuals were first checked. The number of propositions was deleted because of its high correlation with sentence length (r = .94).
Table 15: Multiple Regression
Independent variable Beta T P Part correlation
.300 .765
Inflection .187 1.340 .184 .130
Frequency -.216 -1.671 .098 -.162
F.C .227 1.674 .098 .163
Length of Word -.262 -1.800 .075 -.175
Length of Sentence -.094 -.753 .454 -.073
Propositional Density .014 .137 .892 .013
Frequency before mutilated word -.144 -1.333 .186 -.130
Frequency after mutilated word .033 .318 .752 .031
Text difficulty (super-item p-value) .122 .998 .321 .097
Dependency .115 1.130 .262 .110
Text difficulty (Lexile) .142 .953 .343 .093
Table 17: Easiness of the basic parameters, standard errors, and 95% confidence intervals for the 11 operations
Estimate Std. Error lower CI upper CI
Table 18: Easiness of the basic parameters, standard errors, and 95% confidence intervals for the 9 operations
Estimate Std. Error lower CI upper CI
"1004277"
] | [
"489475"
] |
00413474 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2009 | https://hal-lirmm.ccsd.cnrs.fr/lirmm-00413474/file/soulier09_optim_layout_multipolar%20%281%29.pdf | Fabien Soulier
email: [email protected]
Olivier Rossel
Serge Bernard
Guy Cathébras
David Guiraud
An optimized layout for multipolar neural recording electrode
Keywords: electro-neurogramm, electrode array, selectivity, simulation
The propagation of action potentials along the axons can be recorded via the electrical activity of the nerve (electroneurogramm). This paper focuses on the use of cuff electrodes being very usual for chronic measurement. The main issues with this kind of electrode are parasitic noise and poor selectivity of the recorded signal. Then, we propose an optimized layout for multipolar recording electrode. The main idea is to find the best value for the inter-pole distance and the most relevant processing in order to both improve selectivity in the nerve and to reject external parasitic signals. In this study, we put emphasis on simulation of action potential as a method to help the electrode specification. The amplitude of the expected signal is evaluated in both spatial and frequency domains, with respect to axons variability. Then, the selectivity of the proposed design is compared to state-of-art electrode layout. The proposed design, with the associated pre-processing shows a real improvement of the electrode selectivity. The drawback is a decrease of the sensitivity that nevertheless remains compatible with integrated micro-circuit amplifiers.
Introduction
In the context of functional electrical stimulation it is of high interest to have an objective measure of the effect of stimulation and to get a sensitive feedback from afferent neural signal. Many studies aim to detect or record this information from inside the nerve but they have to overcome several difficulties. In chronic implants, measure of the nerve activity has to be little invasive making the use of a cuff electrode very suitable.
Unfortunately, the electro-neurogramm (ENG) appears to be of very low level and even often below the micro-volt. Moreover, bioelectrical activity makes the in-vivo environment very noisy, the worst noise being the signal generated by muscle activity (electro-myogram, or EMG). This parasitic signal will inevitably hide the ENG signal. Analog preprocessing must therefore be carried out in order to reject EMG-type noise.
The majority of ENG recording systems are based on tripolar [START_REF] Papathanasiou | An implantable CMOS signal conditioning system for recording nerve signals with cuff electrodes[END_REF][START_REF] Nielsen | A low-power CMOS frontend for cuff-recorded nerve signals[END_REF], multipolar [START_REF] Rieger | Very low-noise ENG amplifier system using cmos technology[END_REF] or flat [START_REF] Yoo | Selective recording of the canine hypoglossal nerve using a multicontact flat interface nerve electrode[END_REF] cuff electrodes. The electrodes consist of three or more conductor contact (poles) distributed around the nerve. Based on the flat shape, we propose a new multipolar electrode with a selectivity optimized according to the nerve physiology and topology.
The first sections of this paper detail the axon model used for simulation and give some characteristics of the expected signal. Then, the electrode layout is optimally adapted to the appropriate spatial filtering. Eventually, a comparative selectivity study will be presented, with some remarks and perspectives.
Methods
Axon model
The ENG signal can be described as the superimposition of extracellular potentials generated by several axons activated at the same moment. Our objective is to detect activity from a set of these axons. We can assume that if the electrode characteristics are optimized to detect one axon activity then they can be extrapolated to be the best solution for a set of axons. We thus start by simulating an individual axon thanks to the Neuron simulation software [START_REF] Carnevale | The Neuron Book[END_REF].
Figure 1: Structure of a myelinated nerve fiber (adapted from [START_REF] Plonsey | Bioelectricity: A Quantitative Approach[END_REF]).
For the simulation, we consider the myelinated axon illustrated in figure 1, where l_my is the distance between two successive nodes of Ranvier (the internode distance) and d is the fiber diameter. According to the study by McIntyre et al. [START_REF] Mcintyre | Modeling the excitability of mammalian nerve fibers: Influence of afterpotentials on the recovery cycle[END_REF], l_my can vary from 0.5 to 1.5 mm, with d varying from 5.7 to 16 µm. We used these characteristics to build several NEURON models composed of 150 myelinated sections.
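For readers who wish to reproduce this kind of model, the following is a minimal sketch (not the authors' exact model) of how such a chain of nodes of Ranvier and passive internodes might be assembled with NEURON's Python interface; the 'hh' mechanism, the stimulus values, and the node and myelin parameters are illustrative assumptions rather than the McIntyre parameters used in the study.

```python
# Minimal sketch of a myelinated axon as an alternating chain of active
# nodes of Ranvier and passive internodes, using NEURON's Python API.
# Membrane mechanisms and geometry below are illustrative assumptions;
# the study's models follow McIntyre et al.
from neuron import h

h.load_file("stdrun.hoc")

N_SECTIONS = 150      # number of myelinated sections, as in the paper
l_my = 1000.0         # internode length in um (0.5-1.5 mm in the paper)
d = 10.0              # fiber diameter in um (5.7-16 um in the paper)

nodes, internodes = [], []
for i in range(N_SECTIONS):
    node = h.Section(name=f"node[{i}]")
    node.L, node.diam = 1.0, 0.7 * d   # short active node (assumed sizes)
    node.insert("hh")                  # stand-in for mammalian node channels
    myel = h.Section(name=f"internode[{i}]")
    myel.L, myel.diam, myel.nseg = l_my, d, 11
    myel.insert("pas")                 # passive, low-capacitance myelin
    myel.cm = 0.005                    # uF/cm2, crude myelin approximation
    for seg in myel:
        seg.pas.g = 1e-6               # S/cm2, near-perfect insulation
    if i > 0:
        node.connect(internodes[-1](1))
    myel.connect(node(1))
    nodes.append(node)
    internodes.append(myel)

# Stimulate the first node and record the membrane voltage mid-chain.
stim = h.IClamp(nodes[0](0.5))
stim.delay, stim.dur, stim.amp = 0.1, 0.1, 2.0   # ms, ms, nA (assumed)
v = h.Vector().record(nodes[50](0.5)._ref_v)
h.finitialize(-65)
h.continuerun(10.0)
print(f"peak membrane voltage at node 50: {max(v):.1f} mV")
```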
Analysis of extracellular potential
A typical result is shown in fig. 2. We can clearly distinguish the pseudo-periodic variations due to the discontinuities along the myelin sheath. These variations are specific to the neural signal and can be highlighted by spatial frequency analysis.
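The spatial frequency analysis itself is straightforward. The sketch below (with a synthetic potential profile standing in for the simulated one, since the real profile comes out of the NEURON runs) shows how the pseudo-periodic component at k = 1/l_my can be extracted with a spatial FFT:

```python
# Sketch of the spatial-frequency analysis: FFT of the extracellular
# potential sampled along the axon's axis. The potential used here is a
# synthetic stand-in (a smooth envelope modulated at the internode
# period); in the study it comes from the NEURON simulations.
import numpy as np

l_my = 1.0                       # internode distance, mm
dx = 0.01                        # spatial sampling step, mm
x = np.arange(0.0, 150.0, dx)    # 150 mm of axon

# Illustrative profile: global propagation envelope plus pseudo-periodic
# variations of period l_my caused by the myelin discontinuities.
v = np.exp(-((x - 75.0) / 30.0) ** 2) * (1.0 + 0.5 * np.cos(2 * np.pi * x / l_my))

spectrum = np.abs(np.fft.rfft(v))
k = np.fft.rfftfreq(len(v), d=dx)        # spatial frequency, mm^-1

mask = k > 0.2                           # skip the low-frequency envelope lobe
k_peak = k[mask][np.argmax(spectrum[mask])]
print(f"dominant wave-number ~ {k_peak:.2f} mm^-1 (expected ~ {1.0 / l_my:.2f})")
```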
Filtering
The classical preprocessing used for ENG recording consists of computing the difference between the potential on a central pole and the average of the potentials on the outer poles [START_REF] Struijk | Tripolar nerve cuff recording: stimulus artifact, EMG and the recorded nerve signal[END_REF][START_REF] Pflaum | Performance of alternative amplifier configurations for tripolar nerve cuff recorded ENG[END_REF]. This method, known as Laplacian filtering, can be generalized to multipolar configurations and has proven very efficient for EMG rejection [START_REF] Soulier | Multipolar electrode and preamplifier design for eng-signal acquisition[END_REF]. We have recently shown that this kind of filter exhibits a passband between 1/4h and 3/4h, h being the inter-pole distance [START_REF] Soulier | Design of Nerve Signal Biosensor[END_REF]. The first maximum of the ENG spectrum is centered on k = 1/l_my (fig. 3) for 0.5 mm < l_my < 1.5 mm. To fit the passband of the filter to all fiber types (fig. 4), the optimal inter-pole distance is simply h = 375 µm.
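This choice is easy to verify numerically. For a tripole with inter-pole distance h, the Laplacian output y(x) = v(x) - (v(x - h) + v(x + h))/2 has the spatial transfer function H(k) = 1 - cos(2πkh), whose half-maximum band is exactly [1/4h, 3/4h]. The short check below (our own illustration, not code from the paper) confirms that h = 375 µm places this band over the ENG peaks k = 1/l_my for all fiber classes:

```python
# Check of the tripolar Laplacian's spatial response and of the optimal
# inter-pole distance quoted in the text (illustrative code, not from
# the paper). H(k) = 1 - cos(2*pi*k*h) peaks at k = 1/(2h) and crosses
# half of its maximum at k = 1/(4h) and k = 3/(4h).
import numpy as np

def laplacian_gain(k, h):
    """Spatial gain of y(x) = v(x) - (v(x-h) + v(x+h))/2 at wave-number k."""
    return 1.0 - np.cos(2.0 * np.pi * k * h)

h = 0.375                                  # proposed inter-pole distance, mm
lo, hi = 1.0 / (4.0 * h), 3.0 / (4.0 * h)  # half-maximum band edges, mm^-1
print(f"passband: {lo:.3f} to {hi:.3f} mm^-1")   # -> 0.667 to 2.000

# The ENG spectrum peaks at k = 1/l_my; every fiber class with l_my
# between 0.5 and 1.5 mm should fall inside the passband.
for l_my in (0.5, 1.0, 1.5):
    k_peak = 1.0 / l_my
    print(f"l_my = {l_my} mm -> k = {k_peak:.2f} mm^-1, "
          f"gain = {laplacian_gain(k_peak, h):.2f}, "
          f"in band: {lo <= k_peak <= hi}")
```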
Selectivity results
The selectivity of the electrode is its ability to discriminate signals generated by different fascicles within the nerve. In the ideal case, each recording channel, after filtering, would convey information from a single fascicle, and thus from a single location of the sensory organs. To evaluate the selectivity, we use the method of Yoo et al. [START_REF] Yoo | Selective recording of the canine hypoglossal nerve using a multicontact flat interface nerve electrode[END_REF] to compute the selectivity index (SI). This quantity denotes the relative contribution of two distinct sources over the different channels. In the case of perfect separation, SI = 1. In order to get a realistic model of the flat cuff electrode, we used the layout presented in fig. 5. The poles have a width of 250 µm and are regularly distributed over the whole surface. We computed the SI for each possible location of two axons inside the nerve. Figure 6 shows the average SI as a function of the distance between the two sources, along with the standard deviation. The result is compared to the multipolar electrode presented in [START_REF] Yoo | Selective recording of the canine hypoglossal nerve using a multicontact flat interface nerve electrode[END_REF], with a more typical inter-pole distance of h = 5 mm and a pole width of 500 µm.
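As an illustration of the principle (the exact formula of Yoo et al. is not reproduced here), a simple proxy for an SI-style measure can be computed from the per-channel magnitudes of two sources; like the index used in the study, it equals 1 for perfect separation:

```python
# Hypothetical sketch of a selectivity-index-style computation. This is
# a simple proxy, not Yoo et al.'s exact formula: it only reproduces the
# property quoted in the text, namely SI = 1 when the two sources are
# seen on disjoint channels, with lower values as their footprints overlap.
import numpy as np

def selectivity_index(va, vb):
    """va, vb: per-channel signal magnitudes of two sources."""
    va, vb = np.abs(va), np.abs(vb)
    share_a = va / (va + vb + 1e-12)   # source a's share on each channel
    # Mean dominance of the locally stronger source, rescaled so that
    # complete mixing gives 0 and perfect separation gives 1.
    dominance = np.maximum(share_a, 1.0 - share_a).mean()
    return 2.0 * dominance - 1.0

# Perfectly separated sources: each drives its own channel only.
print(selectivity_index(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 1.0
# Fully mixed sources: identical footprint on every channel.
print(selectivity_index(np.array([1.0, 1.0]), np.array([1.0, 1.0])))  # 0.0
```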
Discussion and Conclusions
These first results indicate that the proposed layout, with a smaller-than-usual inter-pole distance, consistently exhibits better selectivity with regard to the ENG sources. This conclusion must be moderated on two points. 1. First, the better selectivity implies some loss in signal power due to the filtering; simulations show that an attenuation of 6 dB is to be expected. However, this issue can be overcome by multiplying the recording channels. 2. Second, the use of smaller poles degrades the impedance of the bio-electronic interface, so very high input impedance, low-noise pre-amplifiers are needed to obtain a useful signal.
These remarks make evident that a specific multichannel pre-amplifier is necessary for the new electrode presented here. The most obvious solution will take the form of an ASIC performing low-level analog filtering (see [START_REF] Soulier | Design of Nerve Signal Biosensor[END_REF] for an implementation example).
This paper has presented a new approach to the design of ENG recording multipolar electrodes. It highlights the importance of the physiological conformation in the signal characteristics and shows that these characteristics can be used to optimize the electrode layout. It proposes signal analysis in the space-frequency domain as a design tool. Together with the development of a dedicated ASIC, future work will include validation of the concepts presented here through animal experiments.
Figure 2: Extracellular potential plotted along the axon's axis at a distance of 200 µm from the membrane. The nodes of Ranvier are separated by l_my = 1 mm.

Figure 3: Typical spectrum of the extracellular potential with regard to the spatial frequency in the direction of the axon's axis.

Fig. 3 gives a typical spectrum in the spatial frequency domain. The frequencies with the highest amplitude are clearly linked to the internode distance l_my and occur for wave-numbers equal to

k_n = n / l_my (1)

whereas the low-frequency energy denotes the global variations due to signal propagation along the axon.

Figure 4: Typical ENG spectra for l_my = 0.5, 1 and 1.5 mm, and the optimal Laplacian filter gain.

Figure 5: Flat electrode layout with three poles highlighted, used to compute the Laplacian (one recording channel).

Figure 6: The selectivity index of the proposed electrode versus a state-of-the-art multipolar electrode.
"16319",
"1263572",
"1186",
"6335",
"8582"
] | [
"391198",
"391198",
"391198",
"391198",
"391198"
] |
01491417 | en | ["shs"] | 2024/03/04 23:41:50 | 2017 | https://hal.science/hal-01491417/file/LiMooreSmythe-2017.pdf | Jing Li
Danièle Moore
Suzanne Smythe
Simon Fraser
Voices from the "Heart": Understanding a Community-Engaged Festival in Vancouver's Downtown Eastside
Keywords: small group, local culture, festival, multiliteracies
This study presents findings from an ethnographic case study of a community-engaged festival held annually in Vancouver's Downtown Eastside. It explores how the festival functions as a small group that contributes to the establishment of local culture and place identities in order to resist engrained stereotypes. This study also examines the ephemeral space of the festival as an interactional arena where participants co-engage in the construction of community, identity, and meaning. The study expands the discussion of community festivals as socially meaningful devices for collective action, community building, and multiliterate meaning-making in urban environments.
A scratch at the surface, however, uncovers rich stories of local residents who act collectively to contest these engrained stereotypes. For many years, the DTES community has taken collective action to affirm a common identity oriented to resisting processes of gentrification and surveillance that threaten to divide and disperse its residents. One of these actions, and our study focus, is an annual community festival: the Heart of the City Festival (HOTCF), which has taken place in the DTES of Vancouver since 2004. Through multiple modes of visual and audio expression (e.g., visual arts, dancing, music, digital stories, and poetry), Festival participants produce metaphoric and creative reflections on social realities and issues concerning themselves and their community. By doing so, they create a unique local culture that negotiates and changes the inequalities that keep them disadvantaged.
Festivals are a vital aspect of cultural and urban life and have gained currency in multiple disciplines because of "the universality of festivity and the popularity of festival experiences" (Getz 2010, 1). A review of the festival literature in sociology and cultural anthropology highlights how such communal celebratory events have provided an analytical window for studying their impacts on cities and communities (e.g., [START_REF] Delgado | Celebrating Urban Community Life: Fairs, Festivals, Parades, and Community Practice[END_REF][START_REF] Foley | Glasgow's Winter Festival: Can Cultural Leadership Serve the Common Good?[END_REF][START_REF] Jepson | An Introduction to Planning and Managing Communities, Festivals, and Events[END_REF][START_REF] Moscardo | Analyzing the Role of Festivals and Events in Regional Development[END_REF][START_REF] Reid | Identifying Social Consequences of Rural Events[END_REF]), community identity construction and cohesion [START_REF] Derrett | Making sense of how festivals demonstrate a community's sense of place[END_REF][START_REF] Elias-Vavotsis | Festivals and Events-(Re)interpreting Cultural Identity[END_REF], the symbolic relationships between festivals, place, and branding [START_REF] Mcclinchey | Urban Ethnic Festivals, Neighborhoods, and the Multiple Realities of Marketing Place[END_REF][START_REF] Reid | The Politics of City Imaging: A Case Study of the MTV Europe Music Awards Edinburgh 03[END_REF][START_REF] Wynn | Music/City: American Festivals and Placemaking in Austin[END_REF], and urban development and renewal [START_REF] Che | Sports, Music, Entertainment and the Destination Branding of Post-Fordist Detroit[END_REF][START_REF] Hughes | Urban Revitalization: The Use of Festive Time Strategies[END_REF]. Despite the multitude of festival studies, theory still lags behind practice with regard to urban community practice and celebratory events [START_REF] Delgado | Celebrating Urban Community Life: Fairs, Festivals, Parades, and Community Practice[END_REF]. As [START_REF] Delgado | Celebrating Urban Community Life: Fairs, Festivals, Parades, and Community Practice[END_REF] points out, ordinary residents, not only event planners or community organizers, are responsible for most community celebratory events. However, the ongoing discussions of festivals leave little room for understanding how "ordinary" individuals, especially disadvantaged groups in community settings, experience festivals and how they make sense of those experiences. In his analysis of the nature and scope of festival studies, [START_REF] Getz | The Nature and Scope of Festival Studies[END_REF] suggests that much research has been conducted on the economic, operational, and motivational aspects of festivals, while an analytic focus on actual experiences and the meanings attached to them is absent.
Sociologists in recent years have begun to apply a local/group sociological approach to festival studies (see [START_REF] Delgado | Celebrating Urban Community Life: Fairs, Festivals, Parades, and Community Practice[END_REF]; Wynn 2015 for further discussion). Gary Alan Fine (2012, 117, 116) describes festivals as "focused microgatherings" and "the archetypal form of wispy communities." He argues that wispy communities/small groups are the basic building blocks of society and play a pivotal role in organizing social life and developing local cultures and identities. In Music/City: American Festivals and Placemaking in Austin, Nashville, and Newport, Wynn (2015, 9) conceptualizes festivals as "an occasional public" wherein local actions and greater social forces come together for bounded periods to engage in cultural work. He accordingly suggests that a microstructural lens be applied to the study of urban festivals. [START_REF] Delgado | Celebrating Urban Community Life: Fairs, Festivals, Parades, and Community Practice[END_REF] also recognizes the value of meso-level analysis in enhancing knowledge about the multiple functions of community celebratory events. Attending to the often-neglected and unexplored microdynamics of local fields of action, the group sociological approach provides a useful explanatory framework for understanding how our social worlds are locally organized in group settings, which sheds light on our understanding of community festivals in new ways.
Combining the local/group sociological approach of [START_REF] Fine | The Sociology of the Local: Action and Its Publics[END_REF][START_REF] Fine | Tiny Publics: A Theory of Group Action and Culture[END_REF] and Borer's (2006b) urban culturalist perspective, the current study explores the roles of the HOTC Festival in the lives of community residents and in the construction of community and identity. We wish to explore the Festival as a context for the formation of place identities and local culture that counters an unwanted Othering identity formed by those outside the community. This ethnographic study also involves exploring the ephemeral space of festivals as a local interactional arena where participants co-engage in meaning-making and place-making. Specifically, the study addresses three related questions: (1) How does the Festival foster participants' connections to the community and to the (re)construction of community identity? (2) What are the effects of the local culture created within the Festival on community building, meaning-making, and place-making? and (3) What are the affordances of the theory of group action and culture [START_REF] Fine | The Sociology of the Local: Action and Its Publics[END_REF][START_REF] Fine | Tiny Publics: A Theory of Group Action and Culture[END_REF] and the urban culturalist perspective (Borer 2006b) for conceptualizing the roles of the Festival in local culture development and group-culture-place relationships?
Theoretical Framework
The theoretical perspectives from which we approach the research questions are the group action and culture theory [START_REF] Fine | The Sociology of the Local: Action and Its Publics[END_REF][START_REF] Fine | Tiny Publics: A Theory of Group Action and Culture[END_REF] and the urban culturalist framework (Borer 2006b). In his Tiny Publics, Fine develops a local sociological framework for describing and analyzing small groups and the establishment of local/group culture. A local sociology holds that small groups, which Fine thinks of as local interactional arenas, are the microfoundation of social structure. The group is society writ small and of paramount significance as an explanatory tool in recognizing local diversity and local effects, argues Fine. Within the boundaries of group settings, local culture, which he refers to as "idioculture" (2012, 3), a unique set of meanings, knowledge, behaviors, and customs, is produced and performed as the outcome of participant interactions. Discussing his view of culture, Fine asserts that culture is locally produced in microcommunities and embedded within interaction. He writes, "We must conceptualize culture in light of those behavioral domains in which it is embedded. Societies are said to 'have' culture, but culture is performed and displayed to particular audiences (even through media productions, created by groups that have extended audiences)" (Fine 2012, 35). The interpretation of culture as a message embedded in interaction and created locally for audiences provides new analytical insights into urban festivals as local contexts for culture formation, learning, and place/meaning-making. Wynn (2015, 9) has stated that festivals are occasions and moments of collaborative meaning-making wherein "creative activities of individuals and the constraining and empowering forces of social structures" occur simultaneously. Fine (2012, 117) goes so far as to define festivals as "focused microgatherings" in which social ties are developed and identities generated. He contends that occasions and gatherings are typical forms of small groups where individuals with common interests and a shared past congregate and interact with each other to create meaning and order. Festivals' inherent nature as small communities that encourage local individuals' actions and forge local culture accordingly returns us to a meso-level analysis of such group contexts. The small-group perspective allows us to situate individual agency and group dynamics within festival contexts to address "many of the elements associated with celebratory events and why they can transform communities" (Delgado 2016, 6).
To address the multiple roles of the HOTC Festival in place-making and local culture development, we turn to the "urban culturalist perspective" (Borer 2006b) to explore the connections between culture and place. This critical model comprises six distinct but interrelated domains of research: (1) images and representations of the city; (2) urban community and civic culture; (3) place-based myth, narratives, and collective memories; (4) sentiment and meaning of and for places; (5) urban identities and lifestyles; and (6) interactive places and practices. The urban culturalist perspective is used in this study particularly to analyze images/representations, civic culture, and narratives/collective memories. It helps us position the present study within culture-place relationships to understand how Festival participants make sense of the meanings of place through Festival participation, and how they address the issues that they see as a threat to their community. By combining the small group theory and the urban culturalist perspective, we hope to discern and recognize the connections among group, culture, and place in the Festival context in order to understand how the Festival functions as a site of identity formation, community building, and collective action among community residents.
We also draw upon the concepts of multiliteracies, or multimodal literacies, to refer to the variability and multiple modes (language, visual, spatial, and digital) of meaning-making [START_REF] Albers | The Arts, New Literacies, and Multimodality[END_REF][START_REF] Kress | Multiliteracies: Literacy Learning and the Design of Social Futures[END_REF][START_REF] London | A Pedagogy of Multiliteracies: Designing Social Futures[END_REF]. Multiliteracies, according to [START_REF] Sanders | Multimodal Literacies: An Introduction[END_REF], incorporate the arts, literacies, and new media. They signal multiple communication channels, hybrid text forms, new social relations, and the increasing salience of linguistic and cultural diversity [START_REF] Cope | Multiliteracies: Literacy Learning and the Design of Social Futures[END_REF][START_REF] Hull | Literacy and Learning Out of School: A Review of Theory and Research[END_REF][START_REF] London | A Pedagogy of Multiliteracies: Designing Social Futures[END_REF]. Acknowledging that any single mode is only partial [START_REF] London | A Pedagogy of Multiliteracies: Designing Social Futures[END_REF], multiliteracies approaches to literacy studies reflect a shift from understanding literacy as a singular form of verbal or written expression (reading and writing) to expression across multiple modes (multiliteracies). In the Festival setting, the concept brings attention to how meaning-making is distributed across visual and auditory modes and movements (literacies beyond written texts), and to the "complex relationships among and between modes in constructed texts" (Albers and Harste 2007, 11).
The Downtown Eastside Context and Heart of the City Festival
Located just a few blocks from the city's affluent business center, the DTES is Vancouver's oldest neighborhood. As the historic heart of the city, from which Vancouver grew and developed, the DTES neighborhood is culturally and ethnically diverse. The DTES, like Vancouver itself, is located on the unceded territories of the Coast Salish Musqueam, Squamish, and Tsleil-Waututh First Nations. Indigenous people from all over Canada call the DTES their home. Chinese and Japanese communities and new immigrants from Europe were among its earliest settlers, arriving in the 1870s as economic migrants and temporary foreign workers in the village nicknamed Gastown. The neighborhood was also once home to a thriving African Canadian community. It is still home to Chinese opera and New Year parades, Japanese Taiko drumming, Aboriginal ceremony and anticolonial activism, and Ukrainian New Year celebrations.
Despite its cultural richness, the DTES is often recognized as one of the most impoverished neighborhoods in Canada. It was estimated that "18,477 people lived in the neighborhood in 2011. More than half of the residents are poor, dependent on Income Assistance support, pensions, charitable and social services" (DTES Plan, City of Vancouver 2015, 18-19), and many rely upon the relatively affordable but quickly gentrifying supply of run-down single-room occupancy hotels. People here disproportionately struggle with homelessness, poverty, mental illness, and addiction, struggles exacerbated by unaffordable and insecure housing. Ungenerous journalistic and media representations have framed the area as "Canada's poorest neighbourhood" [START_REF] Hopper | Vancouver's 'Gulag': Canada's Poorest Neighbourhood Refuses to Get Better Despite $1M a Day in Social Spending[END_REF] and "the poorest of poor neighbourhoods" [START_REF] Brethour | Exclusive Demographic Picture[END_REF], and so forth. These descriptions, used by both international and local media, reinforce a unidimensional view of the community as a rough environment. Ironically, as Robinson (2012, 16) observes, "the DTES is at once within the city and apart from it."
It is in the midst of these harsh media representations that the HOTC Festival was launched in 2004. It is an annual two-week community event featuring arts performances, music, dancing, comedy, poetry, craftwork, and other cultural and heritage activities created by and for community members and allies with close ties to the community. Unlike most festivals, which are alliances between arts and commerce, the Festival is mostly free, remaining accessible to the public in an area where many people are unable to pay even a token entry fee. The Festival originally developed out of a successful local arts production in celebration of the centennial of the historic Carnegie building that is home to the Carnegie Community Centre:
In the Heart of a City: The Downtown Eastside Community Play. The HOTC Festival is currently coproduced by Vancouver Moving Theatre (VMT) with the Carnegie Community Centre, the Association of United Ukrainian Canadians, and more than one hundred community partners, including local First Nations, Chinese, and Japanese cultural associations, local history and cultural societies, artists, and writers.
Data Collection and Participants
Data in this paper come mostly from fieldwork conducted over a sixteen-month period from 2013 to 2015 (the Festival falls between the last week of October and the first week of November each year). The project started in 2013 as a Languages, Cultures and Literacies course assignment that asked education graduate students to explore ethnographic research approaches while attending the Festival, drawing upon field journaling and photo documentation. As part of the coursework, Author 1 observed Festival events and mingled with Festival participants, and then obtained a more comprehensive ethics protocol in 2014 to discuss their experiences of the Festival with participants. Based on these experiences and intensive observation, fifteen semi-structured interviews were conducted in 2014 with an interview protocol that addressed the questions guiding the study. To obtain a fuller picture and establish trustworthiness, further interviews were conducted after the 2014 Festival with participants across a wide range of roles and cultural backgrounds, including community residents, actors/performers, organizers, audiences, volunteers, sponsors, and working staff. A total of twenty-five interviews were conducted. Interview questions generally focused on participants' roles in, experiences with, and perceptions of the Festival. Questions for Festival producers were mainly about the production process and logistics. Twenty Festival events/performances were selected for observation from those recommended on the Festival website and those closely relevant to the annual theme of the Festival. A research journal was kept to document reflections on particular events and interactions. AHA Media (the Festival's media partner) uploaded all major events to YouTube and thus provided social media artifacts of the Festival (e.g., videos, photos, and blogs), which offered us more flexibility to review a range of Festival productions and media texts generated during the Festival.
Multiple sources of data were collected, including interview transcripts, written records of informal conversations, reflexive postobservation fieldnotes, visual artifacts, and media texts (e.g., photos and video clips). The Festival website, program guides, comment books, and other related documents were also referred to.
Using pseudonyms to protect study participants' identities is common practice in research. However, we followed the advice and examples of [START_REF] Borer | From Collective Memory to Collective Imagination: Time, Place, and Urban Redevelopment[END_REF] and [START_REF] Wynn | Music/City: American Festivals and Placemaking in Austin[END_REF] in keeping places' and participants' real names, and consulted with community members regarding their preferences when they signed the consent form. Except for those who were quoted from secondary sources (e.g., newspapers, Festival comment books) or from whom no permission was obtained, we present in this article the real names of the interviewees to respect their preference that their genuine voice be included. By making the names of places and people known without causing any significant harm to the participants, we hope to preserve the particular identities and histories that make them unique (Borer 2010, 99) and to offer "the public voice and recognition" (p. 99) that supports the Festival's goal of sharing experiences and concerns with others, as part of the work of public education about colonialism and about the creativity and agency of a community that is so often maligned.
Negotiating Access to the Site
Preliminary negotiation to carry out interviews was made through casual conversations with Festival participants when we attended the Festival in 2013, developing relationships with local residents who had differing levels of involvement in the Festival. As the fieldwork proceeded, these initial contacts led to closer relationships with some long-time participants/residents. Occasionally, the first author was invited by local residents to join their dress rehearsals and pre- and post-Festival workshops, which allowed for a unique insider perspective from which to observe and record participants' collaboration and interaction at and beyond the Festival. Throughout the three years of fieldwork, the researcher appreciated these generous offers of friendship and insight into the values and cultures of the Festival and sought to reciprocate by volunteering in the community centre in return for residents' generosity and hospitality. Gaining access, as [START_REF] Yin | Qualitative Research from Start to Finish[END_REF] notes, is a process, not a one-time event. It was an essential process in this study to maintain access throughout, establishing new relationships at each Festival while nurturing the relationships made in previous years, based on mutual respect and trust with community members.
Conducting Reciprocal and Collaborative Research
As argued by Lather (1986, 73), the methodological task of any empowering research is to proceed in a reciprocal, dialogic manner; the researcher's role should be that of a "catalyst who works with local participants to understand and solve local problems." From the outset, we included participants' voices to promote collaborative learning and research design. During our initial contacts and interviews with Festival participants, we assured them that their names would be replaced by pseudonyms and that other personal information would not be disclosed in public dissemination. However, many of them expressed a preference to have their real names included so their genuine voice could be heard. Some participants also pointed out that the consent form we first used with them was jargon-laden and might cause difficulty in understanding for community residents who lacked adequate literacy skills. Adopting these suggestions, we co-developed a revised consent form in response to the participants, in which they were offered the choice of whether to keep their real names in the project. At a later stage of the fieldwork, an open discussion was facilitated in the community centre to share research results with local residents and Festival participants. In doing so, we were able to share the preliminary findings of the fieldwork with community members and listen to their feedback and interpretations. With these mutual feedback processes, we sought to establish shared trust with participants and create a space for better research practices and protocols [START_REF] Pidgeon | Researching with Aboriginal Peoples: Practices and Principles[END_REF] that are more relevant to the community and its residents.
Making Sense of the Community through Collective Memories/Narratives
The DTES neighborhood is the ancestral lands of the Coast Salish peoples, who were forcibly removed from the area during industrialization. The First Nations still live and work on the shores of the Pacific and in the community, and indeed have never ceded the territory that has been claimed as the jurisdiction of the City of Vancouver and of the provincial and federal governments. The community is also home to Japanese Canadians, Ukrainian Canadians, and other settlers. During WWII, Japanese Canadians in the neighborhood were removed from their homes, their property was confiscated, and their communities were moved to internment camps across Canada. Some of the community returned, but the area around the Vancouver Japanese Language School & Japanese Hall and Powell Street, known as "Japantown," has become more of a historic site today. Second- and third-generation Japanese Canadians and Ukrainian Canadians who live in other areas of the city and province now return to the neighborhood every year for community celebratory events such as the HOTC Festival. Many participate in traditional cultural and heritage activities and, as some of the Japanese Canadians whom we met at the Festival said, "learn about the history of Powell Street . . . and our own cultural and historical backgrounds." An example of the entangled histories of the Indigenous and Japanese communities is the multicultural performance Against the Current (2015), a remarkable cultural and artistic experience at the Festival that brought together the Japanese Canadian and Salish communities in the DTES, featuring Japanese Taiko drumming, Salish songs, and storytelling to celebrate the shared role of salmon in Salish and Japanese history, culture, mythology, and economy. The following fieldnote is drawn from the performance.
(8:30 p.m., Friday) I arrived late at the Vancouver Japanese Language School & Japanese Hall where the show was held and missed the first part. Had to stand at the back of the hall to avoid disrupting the show. It was quiet in the hall except for the soothing voices of two storytellers, an elderly Japanese lady and a First Nations woman, telling of the salmon swimming upstream. . . . The drumming took over and the intensity of the drumming beats steadily grew. . . . The First Nations woman then began to tell a story of her sharing a meal with a group of Japanese Canadian fishermen while moored with her father on Galiano Island. . . . Throughout the performance, snippets of memories of early Japanese immigrants and Indigenous peoples enduring hardships were beautifully woven into a multi-textured fabric of song, dance, drumming and spoken word. The salmon and its lifecycle were a unifying theme and a metaphor for the lifecycle and journey home of Japanese Canadians and First Nations. The Indigenous and Japanese drummers, singers, and dancers, both old and young, overwhelmed the stage with their energetic and passionate presence as they revealed a shared experience of swimming against the current. At the end of the show, over 25 DTES community participants carried a three-foot-long papier mâché salmon prop walking through the audience. . . . So for an hour and a half, I stood there as one of the audience at this performance and found that my pulse kept time with the Taiko drumming and Salish music. Perhaps others felt the same sensation that engaged me so intensely. When it was over, the full room erupted in applause, rising in a standing ovation. (Author 1, fieldnotes, November 6, 2015)
Conversing with some attendees and actors after the performance, we noticed that many were long-time residents of the DTES or had once lived there. It was not hard to see that witnessing or participating in the performance offered them a window into shared memories that bound individuals and groups together. A Japanese Canadian elder said, "We don't live there anymore but we are coming to have our festival, commemorate our lives there but at the same time remembering that there are people living there and that they must be brought into our community." Grace, one of the storytellers in the performance, who had experienced the forced removal of Japanese Canadians during WWII, talked later in an interview about how the shared history of Japanese Canadians and Aboriginal people fighting against injustices helped her better engage with her role as the storyteller.
Because I was quite, very much aware of the WWII Holocaust history and I also knew partially about the Aboriginal, the treatment of the Aboriginal. So I know that other people have suffered much more than we did. . . . So when I look at our history and I look at the larger community histories, I realized that we really have something common to share. . . . After we had a couple of rehearsals, even with the rehearsal, I wasn't really getting into it as I should. . . . So finally when it was getting close to the time, I was starting to really focused and I read it at home and I really got into the theme of against the current, that Aboriginal people, Japanese people, you know, who were all working against the current, and the whole idea of using the salmon. Fighting against the current, they go to multiply and to lay their eggs. You know, all those things became very relevant to me and I really started to appreciate those who wrote those words. (Transcripts, November 26, 2015) Grace was one of those actors and onlookers for whom the performance evoked past memories. "Against the Current has given me an opportunity to go back into my memories and look at our history as a First Nations family on this coast and growing up as fishermen," said the Coast Salish storyteller quoted in the performance brochure (2015, 10), "Being part of this project has let me look back once again in awe and amazement at a way of life that's almost disappeared in my lifetime. . . . Memories of family and teachings on how to be and to live in this world, feeling connected to my past and my family again." Coming from a long line of fishermen, the storyteller wove her own stories into the performance. The script for her was not only a play script written for the performance, but a real-life script as well. "The stories that I share are all real from a different time and era," she wrote in the performance brochure, "our way of life that we all knew so well is disappearing, going the way of the salmon . . . just lost" (2015, 11). Here, memories of what has been and what was become evident in the reproducing and retelling of the story. In a sense, Against the Current became a moment of memory retrieval in which participants/actors were invited to walk in the world of their ancestors, re-experiencing the rich history that has shaped the community but has often been forgotten or gone unrecognized.
Bread & Salt (2013), an annual HOTC Festival event, is another example of the memory-retrieving and place-making role of the Festival. Inspired by stories and memories from the Ukrainian community in the DTES, the performance created a Ukrainian Canadian story woven from personal and collective memories. Professional and community actors interwove the oral history of Ukrainian Canadians with multimodal expressions of live theatre, music, dance, and projected images to pay tribute to the East End's historic Ukrainian community. The story took attendees on a journey of discovery. Shared memories and narratives of struggle and solidarity resonated, connecting audiences not only to their cultural roots but also to contemporary experiences of place. The following excerpt illustrates this point:
As a 4th-generation Jewish immigrant to Canada, whose great Grandfather helped to build the 1st Synagogue in Bytown, it was fascinating for me and quite touching to discover Vancouver's first synagogue, just down the road from the Ukrainian Hall. As so many Jews came from Ukraine and my own ancestors came from Poland/Russia, I felt a connection with this history and community I was discovering. (Downtown Eastside Heart of the City Festival Comment Book 2013)
Collective memory "connects people in the present to the facts of yesterday and how those facts were ascertained and currently received" (Borer 2006b, 186). At the Festival, actors and attendees hear in performances such as Against the Current and Bread & Salt echoes of the many people who have arrived and lived in this community as survivors of all forms of injustice, of families broken up by the residential school system, internment camps, and contemporary approaches to social welfare that make families and children vulnerable to homelessness and addiction. They also hear stories of resilience and strength: immigrants from different cultural backgrounds and Aboriginal people who fought hard "against the current" and contributed to Vancouver's prosperity. S. Elizabeth Bird (2002, 526) contends that "local narratives are less about 'history' and more about how people construct their sense of place and cultural identity." At the Festival, these shared, retrospective narratives, articulated through the multimodal forms of poems, personal memories, lived stories, songs, dances, projected images, and historical chronologies, serve as common reference points for Festival participants to make sense of their own cultural identities and connections to the place.
Research on myths, narratives, and collective memory from the urban culturalist lens holds that "social, public, collective memories are 'stored and transmitted' in and through places" (Borer 2006b, 186). Eviatar Zerubavel (1996, 292) reminds us that "(t)he preservation of social memories need not depend on either oral or written transmission. After all, material culture plays a very similar role in helping us retain them." In the case of the Festival, we argue that festival culture, what Wynn (2015, 228) refers to as "liquid culture," is also entangled with the materialities of place and things, and with the work of memory and place-making. In reproducing and retrieving collective memories/narratives, the Festival took on the dual role of a "mnemonic device" (Borer 2006a, 210) and knowledge-keeper, reminding people of the intertwined cultural and social histories that make the DTES community unique and complex, while keeping these narratives/memories alive and sharing them with the next generation and with neighbors in broader communities.
Creating an Inclusive Civic Culture/Community
Unlike festivals founded by municipal or other government officials or created by professional arts groups, one special characteristic of the HOTC Festival is that it is largely community initiated, community led, and community engaged in content and production. The executive director of the Festival, Terry Hunter, said that "there is a strong sense of ownership from the community around the Festival. It's something people look forward to every year." He added, "Last year we had over a thousand residents participating in the festival as performers or presenters or artists" (transcripts, August 3, 2015). 24 Hours, a local newspaper, describes the Festival in one of its 2015 news reports: "DTES residents tell story through acting. . . . It might sound like professional actors tensely rehearsing-instead it's DTES residents-turned-actors weaving together a narrative about homelessness" (as cited in the Downtown Eastside Heart of the City Festival Comment Book 2015). It was hard to ignore a strong sense of community engagement during the encounters and interviews with Festival participants. Adrienne, a long-time community resident, has been consistently and actively involved in the Festival since it first started in 2004: I have been in the audience, a performer as well as helping to organize events like the Learning Centre events and my own digital storytelling shows. . . . Jimmy and I wrote a play together called "who stole the spirit of Carnegie" which was first performed at the festival in 2014 . . . the actors were all Carnegie (Community Centre) regulars who were great to work with. . . . Having it (the festival) in the rainiest part of the year makes it so downtown eastside for me. Nobody else puts on a festival at that time of year. (personal communication, August 19, 2016) "So I have been continuing on with my long involvement with the Heart of the City Festival," wrote Adrienne in an email exchange after the 2016 Festival, noting that she had presented another digital story called A Year in the Learning Centre (personal communication, December 1, 2016). Having attended the Festival for three consecutive years, we witnessed active individual participation from local residents such as Adrienne and observed cross-cultural performances that intertwined culture, history, and place-making, such as Against the Current (2015) and All Our Father's Relations: Stories of Shared Chinese and First Nations Heritage (2016). Community members' collective engagement and shared participation manifested themselves in the process of working on a variety of Festival productions. In Against the Current, large papier mâché salmon were members of the cast; more than twenty community residents were invited to carry them during the performance. And many residents were involved in creating the big salmon before the performance. Over a period of three months before the 2015 Festival, project staff organized workshops in different venues in the DTES community and invited local residents, adults and children alike, to create the papier mâché salmon. Some workshops involved making salmon molds, while others focused on papier mâché-ing those molds and painting the fish. For those who were involved, the experience of making the salmon was also a practice of remembering and renewing the central place of the salmon in Canadian west coast ecology, economy, story, and culture.
Speaking of community collaboration, John Endo Greenaway, the artistic director and co-writer of the performance, remarked: So many of these shows come out of cultural groups brainstorming about something that can be done which speaks to their experience and the neighbourhood. . . . It wasn't me sitting there thinking of the story and writing it. It really was a collaborative process. It took several years towards its completion. It (the project) developed very organically. . . . Because there was no budget for a big rehearsal, you know, very low budget, and it depended on the good will of a lot of people. (Transcripts, April 19, 2016) Like Adrienne and those who work together on various Festival events, community members from diverse cultural and age groups in the DTES, from street-involved youth, First Nations elders, and volunteers to community artists and activists, have engaged in the preparation and production of the Festival to varying degrees throughout the year toward the annual show. The creation of the Festival can be seen as the experience of ordinary community residents' collaboration and engagement, providing a place for involvement and pulling individuals, both within the community and outside it, into shared participation and place/meaning-making. Indeed, the very acts of addressing the threats of gentrification and displacement are powerful place-making practices. [START_REF] Fine | Tiny Publics: A Theory of Group Action and Culture[END_REF] thinks of the small group as an interactional arena where participants engage themselves and collaboration emerges. Small groups, in most cases, "serve as the gravitational centers of civic life, drawing individuals into participation not only through compelling ideas but through material resources" (130). This statement suits the Festival case well in that the Festival is an action space that engages DTES residents in collectively challenging mainstream media discourses and actual policies that construct the DTES community as "down-and-out" and entrench practices of gentrification, welfare, and housing that further marginalize low-income members.
Furthermore, participants expressed that engaging in these common activities offered them an opportunity to create, experiment, and meet new people. They expressed their joy, excitement, and a feeling of satisfaction at being part of the project. "I had tremendous fun at the HOTCF," one Festival participant said with feeling, "and I feel and believe it truly reflects our community and I think it's a lot more meaningful and gratifying when one is involved in some aspect of the Festival, whatever that may be." Apart from personal satisfaction, the process of collaborative action also brought about a local "energy." Eyoalha Baker, the artist who created the poster mural "Wall of Joy" on the side wall of a hotel in the neighborhood and presented this work at the 2015 Festival, mentioned how the shared energy helped forge an open and welcoming atmosphere.
It (the mural) really belongs to everyone. . . . A lot of people would come down to help me out (when I installed it on the wall). And they were so proud of this. . . . It brought so many people together to create this thing. You made it part of their experiences too. So it's really an interactive thing, this mural. . . . I just fell in love with the neighbourhood, and people, and their openness. . . . I just felt so supported, like they really got the energy. They could really feel the energy. (personal communication, November 1, 2015)
Her feeling was echoed by another Festival visitor: I appreciated the festival, not only for its events but also for the warm atmosphere. From the outset, I felt welcomed and included. . . . In addition to good entertainment, I also learnt much about the residents, their issues and their area. I felt that I had joined a very human group whose wit, intelligence and humor contributed greatly to my pleasure attending the festival. (Downtown Eastside Heart of the City Festival Comment Book 2014) What characterizes a city with a strong community is the respect and recognition afforded to its members, regardless of their personal, social, and economic statuses or ethnic and cultural backgrounds. [START_REF] Borer | From Collective Memory to Collective Imagination: Time, Place, and Urban Redevelopment[END_REF] argues that a vibrant civic culture is built around the variety and depth of social interactions and common activities among diverse persons across races, classes, and ages. In a similar vein, Monti (1999, 104) states that "civic culture makes it possible for different groups to claim the same piece of land as their own and to become part of a more inclusive community." In Fine's local sociology, shared participation is an important feature that characterizes a group; in the Festival case, it should also be recognized as contributing to an inclusive civic culture and community. In the course of shared participation and collaborative engagement, new relationships of respect and reciprocity have been formed, strong ties between local communities established, and richer understandings of the community's diversified cultural traditions realized. On the other hand, local residents work with each other at the Festival to contest injustices as "members of different populations with different ideas, interests, and intentions to coexist in the same geographic and social territory" (Borer 2010, 102).
Accentuating a Common Community Identity
Within the urban culturalist perspective, how place is made through the image and symbolic representation of place is also of interest. In this section, we examine media texts produced at the Festival and analyze such features as the layout of visual images and their modes of expression. We are interested in how linguistic and semiotic resources are selected and combined at the Festival to address its hoped-for audiences and create a space for dialogue between the DTES and broader communities. It is to this aspect of symbolic representation that we now turn.
The Symbolic Image
At the Festival, visual representations (e.g., photos, paintings, and posters) are strategically used as supplements to local narratives and to form a collective identity of the DTES community. An example is the Festival symbol (Figure 1), a phoenix associated with regeneration and rebirth, designed by the DTES artist Diane Wood. An excerpt from In the Heart of a City: The Downtown Eastside Community Play in 2003 beautifully elaborated the spirit of the phoenix and explained why it was chosen to represent the community identity:
Leanne: A phoenix is the most beautiful bird in the world. It lives forever. Whenever people chase it, to steal its glorious feathers, the phoenix flies to a distant land to sing in peace. When its wings grow heavy with age and death approaches, the phoenix builds itself a nest of sweet-scented twigs. . . . There it sits on the nest and waits for the sun's rays to ignite into flame. Out of the flames emerges a beautiful young phoenix. Out of yesterday's tragedy, the phoenix will always return. (Festival Program Guide 2014, 40) The visual representation of the phoenix signifier evokes a symbolic sense of renewal and becoming and facilitates a reclamation of identity-making. The legendary bird represents a life cycle that has no end, a perpetually renewing resource, just like the community and the people living in it: an enduring place with a durable people. This phoenix symbol reflects the desire and imagination of the DTES people. It also expresses the hope and longing of community members for something better, even for those experiencing displacement as a result of rapid gentrification and high housing prices in the city.
Community Art Projects
Community art projects, too, were used as a tool to refute "a deficit-focused public narrative of the DTES" (Szöke 2015, 10). In 2016, Richard Tetrault had his new work, a mural banner, The Gathering (Figure 3), presented at the Festival opening ceremony. The Gathering featured the extraordinary artists, activists, and people, past and present, of the DTES community. A Ukrainian folk performer, First Nations artists, an African Canadian/Cherokee singer, a Japanese Taiko drummer, a Latin American guitarist, and a Chinese pipa player all found space in Tetrault's latest work. With this new artwork, the artist sought to assert a positive identity of the DTES through visual arts. "Not that I don't know the negative side. . . . But if they think that's all there is to the Downtown Eastside, they're missing the boat," remarked the artist, quoted in the Georgia Straight, a local newspaper [START_REF] Smith | Artists at Heart of the City Festival Throw Fresh Light on the Downtown Eastside[END_REF], "only when you go into the Downtown Eastside and don't drive through it do you realize the complexity and mutual support there is. And that's what the Festival is: accentuating the positive." Symbolic images or representations of a place can be used to create common reference points for visitors to understand that place and allow a space for dialogue between groups in the city (Borer 2006b). Cultural representations are all socioculturally situated and embedded. What we observed at the Festival is that DTESiders create symbolic representations to express the cultural and artistic richness and lived stories that shape their community. In these representations, community members add their individual and collective memories and dreams to the Festival's ongoing making of the neighborhood's identity. This is well reflected in the words of one interviewee: It (the Festival) shows to people this is not just a dead zone down here. It shows there are artists here and there is a real sense of community, people helping each other; people coming together for this Festival; people opening their doors at different venues and allowing people to come in. (Transcripts, October 28, 2014)
Photographs
In addition to public art projects, harnessing photographs also plays an essential role in countering negative media portrayals. The Carnegie Jazz Band (Figure 4), taken from the 2014 Festival program guide, is a good example. The photo shows different artists who represent the ethnic diversity of the community. They are depicted in close shot rather than from a distance, indicating that they are socially close to the viewer looking at the photo. The angle from which the photo was taken is at eye level, offering a sense of equality and intimacy with the viewer. Posing with their musical instruments communicates vitality and talent. This photo, among others shot in a similar manner, symbolically exudes a sense of energy and warmth. Indeed, visual images have a special role to play in identity formation and in the communication of ideas because they "convey layered and concretized, personalized messages, connotations and metaphors and to arouse strong feelings" (Hamilton 2012, 48). Terry Hunter emphasized the importance of visual images in properly representing the DTES and its residents: So the visual component of the festival is a very, very important aspect of the work we do. . . . I thought it's very important that people in the community look really good and that is one color. Because the DTES style is always considered black and white and cheap. Cheap printing, black and white. . . . I thought that it's really important that people get down and look really good and that the image is really strong and captures the community. . . . The photos capture the people of the community and show their humanity and show them in the context of the work they are doing, in this case it is the artistic practice. So we put a lot of emphasis on taking photos both during the Festival and after the Festival. (transcripts, April 12, 2016) The small group perspective contends that groups permit communities to represent themselves in symbolic terms through the collective development, appropriation, and interpretation of meanings and objects [START_REF] Fine | Tiny Publics: A Theory of Group Action and Culture[END_REF]. In the Festival setting, static images that carry emblematic significance are collectively developed and strategically utilized to enable audiences to see and sense a dynamic and vibrant community. For example, Figure 4 offers a perspective on the community from within the lived experiences of those who live there. It is an image of the DTES as an alive and resilient community with talented musicians, poets, dancers, and visual artists, one that blurs the distinction between professionals and community members to explore the affordances of collective creativity and the processes involved in making the Festival.
Discussion: The Festival and Local Culture Development
According to Fine (2012, 26), the small group is of great value in explaining how meaning and order are established in "mesostructures" (Wynn 2015, 255) because of its four powerful forces of control, contestation, representation, and allocation. Looking at the Festival from the small group perspective, we particularly see the practices of representation and contestation. By this, we mean that when individuals participate in producing and performing the Festival, a meaning-making process, the Festival becomes (1) a "performative space" [START_REF] Hamilton | Literacy and the Politics of Representation[END_REF] in which community residents represent themselves and their community in symbolic, multiliterate productions and (2) an action space that encourages co-engagement and active resistance to stigmatized labeling. In the process, a local culture is simultaneously generated within the Festival, a cultural field, to support community identity construction.
As previously mentioned, most Festival productions feature the work of community residents and local artists, who conceive and contribute experience and meanings and then perform their artistic works at the Festival, either through spoken word and text, or through visual and multimedia arts. Meaning-making is "an ongoing process that is achieved through shared history" (Fine 2010, 356). Building on shared memories and lived experiences, Festival participants/community residents mobilize multiliterate resources to produce new constructions and representations of social realities. While participating in meaning-making, they create communal and personal identification with each other and their community at the same time. Multiliteracies approaches view this active process of working upon emergent meaning as "Designing" (New London Group 1996; [START_REF] Kress | Multimodal Discourse: The Modes and Media of Contemporary Communication[END_REF]). And those cultural and multiliterate texts/events that are produced in the process are considered "performative spaces" [START_REF] Hamilton | Literacy and the Politics of Representation[END_REF]. At the Festival, community residents, taking on multiple roles "as meaning-makers, as agents, as participants, and even as active citizens" (Kalantzis and Cope 2012, 142), co-engage with critical agency in the "Designing" process to represent themselves and their community in symbolic terms and to construct their sense of place and cultural identity.
As Wynn (2015, 44) has shown in his study of music festivals, "festivals are the result of participants' collective action." More than that, the fact that a wide range of multiliterate resources are entwined with artistic performances that include community residents who collaborate during the year toward the annual show highlights the fact that the HOTC Festival also facilitates co-engagement, and consequently, inclusive civic culture/community. Such a civic culture space enables local residents of diverse ethnic and cultural backgrounds to interact with each other to share knowledge and skills, feel interpersonal satisfactions, and experience a sense of hope and dignity. "It's the highlight of the entire city that we put this (the Festival) together and it's about the re-establishing and re-collecting the fire within this community," noted Stephen, a local DTES resident and community actor, "and it gives people a sense of hope, a sense of dignity, a sense of being alive" (transcripts, August 15, 2015).
It is without a doubt that voices of self or groups are (re)shaped "by the context and strategies used to produce them" (Hamilton 2012, 76). Nonetheless, not only does the context frame texts, but the inverse also occurs. In the case of the Festival, performances/cultural texts embody values and tradition that have been embedded within the DTES community, "a particular cultural context in which real people live, work, and practice the art of community and politics, together" (Borer 2006b, 179). Meanwhile, the Festival helps (re)define place identity by (re)producing symbolic representations and images of the community for community advocacy. Collective memories, semiotic resources of visualization, and metaphorical symbols are tactically used to create a counter-discourse. In this sense, the Festival functions as a site for "unofficial protest" [START_REF] Wynn | Music/City: American Festivals and Placemaking in Austin[END_REF], on which community residents challenge the undesired identities by reshaping the community's cultural landscape.
We further bring together the small-group theory and urban culturalist perspective to theorize the group-culture-place relationships (Figure 5) and consider the Festival as a vital but neglected space for place-making, conscientization, and multiliterate meaning-making. In the active "Designing" process, a unique local culture is developed within the Festival. This group culture is created through the public display of the diversity, variety, and cultural wealth of the DTES, and performed and practiced in multimodal productions. It is also a local achievement of participants' agency developed through shared participation and co-engagement. This culture is produced, represented, and perceived through narratives/collective memories, a vibrant civic space, and symbolic and expressive images/representations. Here, the Festival operates as the arena for cultural praxis in which individuals renegotiate and redevelop community identity while attaching meaning and emotional value to place. In the meantime, the Festival, a cultural text itself that is deeply rooted in and nourished by the DTES/place, helps people make sense of their identity and the structures and forces that shape their experiences by positioning participants in shared memories and symbolic representations. In doing so, the Festival successfully cultivates strong ties between local communities, deepens education about the complexity and cultural wealth of the DTES, and crafts unique urban festival experiences.
Conclusion
Drawing upon the small-group theory and urban culturalist perspective, we have explored in this ethnographic case study a community-engaged festival in Downtown Vancouver and its multiple roles in helping give voice to community residents who are otherwise discursively and politically "silenced." We conceive the Festival as a local, group context in which community residents and their diaspora, through shared participation and collective engagement, develop and perform local culture for identity construction and meaning/place making. We demonstrate with multiple examples how the Festival, transitory and temporally bounded as it is, has powerful influences upon local culture development and identity building. With this study, we hope to expand further discussion on community festivals as a socially meaningful means for collective action, community building, and multiliterate meaning-making in urban environments.
Notes
1. The term multiliteracies was created by the New London Group [START_REF] London | A Pedagogy of Multiliteracies: Designing Social Futures[END_REF] and refers to the two major aspects of meaning-making: social diversity and multimodality [START_REF] Kalantzis | Literacies[END_REF]. Our focus here is placed on the multimodality dimension of multiliteracies. In this article, multiliteracies and multimodal literacies are used as two interchangeable terms and both refer to the multiple modes (language, visual, spatial, and digital) of meaning-making.
2. Vancouver Moving Theatre is a professional arts organization founded in 1983 by two DTES artists/residents, Terry Hunter and Savannah Walling. It is now the lead producer of the Festival.
3. Author 1 did three years of fieldwork as part of her doctoral dissertation research. Both Author 2 and Author 3 have contributed to the writing of the article.
4. Wynn (2015, 228) points out that the process of producing festivals depends on the "crafting of temporary cultural and entertainment-based spaces" and flexible programming that "can more fluidly respond to the changing needs of the city, its residents, and the audience that attends." He refers to this kind of festival culture as "liquid culture," as opposed to concrete culture.
Figure 1. The phoenix illustration. Source: Downtown Heart of the City Festival Program Guide, 2006.
and exhibited the cultural diversity of the DTES. The photo of the mural Through the Eyes of Raven (Figure 2) was taken during a fieldtrip at the 2013 Festival. The mural was painted on the side of a hotel wall in the neighborhood by local artists Richard Tetrault and a team including Haisla Collins, Sharifa Marsden, Richard Shorty, and Jerry Whitehead. The first and third authors had a conversation with Jerry Whitehead about the theme of the mural in the First Nations artist studio in 2013. The mural depicts relations among Aboriginal people, coastal ecosystems, and colonialism, and also demonstrates the strength of their ancestry, as well as contemporary urban Aboriginal events.
Figure 2. Through the Eyes of Raven.
Figure 3. The Gathering. Source: The Georgia Straight, October 19, 2016.
Figure 4. Carnegie Jazz Band. Source: Festival Program Guide, 2014.
Figure 5. Group-culture-place relationships in the Festival context.
Acknowledgment
It is with honor and gratitude that we acknowledge that this research has been conducted on the unceded ancestral territories of the Coast Salish peoples, specifically the xʷməθkʷəy̓əm (Musqueam), Skwxwú7mesh (Squamish), and səlilwətaɬ (Tsleil-Waututh) First Nations.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Author Biographies | 60,281 | [
"1004285",
"12941",
"1004286"
] | [
"117565",
"267720",
"117565",
"117565"
] |
01379971 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2016 | https://hal.science/hal-01379971/file/01-final-cdc-time-trig-discrete-time.pdf | Romain Postoyan
email: [email protected].
Dragan Nešić
Time-triggered control of nonlinear discrete-time systems
We investigate the time-triggered control of nonlinear discrete-time systems using an emulation approach. We assume that we know a controller which stabilizes the origin of a discrete-time nonlinear system. We then provide conditions to preserve stability when the control input is no longer updated at each step, but within N steps of the previous update, where N is a strictly positive integer. We consider general output-feedback controllers and we allow for various holding strategies of the control input between two updates, such as the zero-input or hold-input policies for example. An easily computable bound on the maximum number of steps between two updates, i.e. N, is provided. The results are applied to linear time-invariant systems, in which case the assumptions are written as a linear matrix inequality, and a nonlinear physical example is provided as an illustration. This study is relevant for networked control systems, as well as for any system for which sparse or sporadically changing control inputs are advisable because of resource limitations, for instance.
I. INTRODUCTION
Control strategies that generate sparse or sporadically varying inputs are suitable in a number of situations. Networked control systems (NCS) are a typical example. These are systems in which the plant and the controller communicate with each other via a digital channel, which may be used by other tasks. In this set-up, the frequent transmission of data between the plant and the controller, and thus the frequent update of the control input, may exceed the communication channel capacity, and lead to delays or packet losses, which may destroy the desired properties of the closed-loop system. Another example is embedded systems, for which the computation of the control input is limited by the available computation resources. Similar situations arise in a variety of other control applications. In medicine for instance, more specifically in photodynamic therapy [START_REF] Dougherty | Photodynamic therapy[END_REF], developing sparse control policies would be beneficial for the patients in order to reduce the pain induced by the treatment [START_REF] Bastogne | Biomedical device to control the activation of a photosensitizing agent in a biological tissue[END_REF].
Several types of such control strategies are available in the literature. The simplest option is to hold the control input for a fixed amount of time, leading to time-triggered control, see e.g., [START_REF] Donkers | Stability analysis of networked control systems using a switched linear systems approach[END_REF], [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF]. An alternative consists in adapting the transmissions to the current state of the plant, a paradigm known as event-triggered control, see [START_REF] Heemels | An introduction to event-triggered and self-triggered control[END_REF], [START_REF] Postoyan | A framework for the event-triggered stabilization of nonlinear systems[END_REF] and the references therein. The idea is to evaluate a state-dependent criterion at each step; whenever the latter is satisfied, a transmission is triggered. Event-triggered control usually generates fewer transmissions than time-triggered control, however the evaluation of the transmission condition may be difficult to implement in some applications. In self-triggered control, transmissions also adapt to the system evolution but the next triggering instant is decided based on the value of the state at the last control input update (and not the current state as in event-triggered control, to avoid the frequent evaluation of the triggering condition), see [START_REF] Heemels | An introduction to event-triggered and self-triggered control[END_REF] and the references therein. The choice between these paradigms depends on the considered problem; all are relevant and deserve being investigated.
We study nonlinear discrete-time systems in this paper. This is justified by the fact that many processes are conveniently modeled in discrete time because of their intrinsic nature or because of the tools used to construct the model, see [START_REF] Ljung | System identification[END_REF] for instance. Also, any sufficiently smooth continuous-time model can be (approximately) discretized. Results, which relate the stability of nonlinear discretized models to the stability of the original system can be found in [START_REF] Nešić | Sufficient conditions for stabilization of sampled-data systems via discrete-time approximations[END_REF]; we do not address this issue in this paper and leave it for future work. In the context of NCS, working with a discrete model may be convenient as it allows considering holding strategies in which the input is set to zero when no transmission has occurred to reduce the actuation cost, see [START_REF] Schenato | To zero or to hold control inputs with lossy links[END_REF] for instance. Furthermore, equidistant transmissions naturally give rise to discrete-time models of the form considered here. Results on the event-triggered control of discrete-time systems can be found in [START_REF] Cogill | Event-based control using quadratic approximate value functions[END_REF], [START_REF] Eqtami | Event-triggered control for discrete-time systems[END_REF], [START_REF] Li | Weakly coupled event triggered output feedback system in wireless networked control systems[END_REF] for instance. Recently, many works have been devoted to the design of predictive strategies for NCS, see e.g., [START_REF] Bemporad | Predictive control of teleoperated constrained systems with unbounded communication delays[END_REF], [START_REF] Chaillet | Delay compensation in packet-switching networked controlled systems[END_REF], [START_REF] Hu | Event-driven networked predictive control[END_REF], [START_REF] Quevedo | Robust stability of packetized predictive control of nonlinear systems with disturbances and Markovian packet losses[END_REF]. At each triggering instant, a packet containing future values of the control input is sent to the plant, and then stored in a buffer until the next transmission occurs. Hence, in [START_REF] Bus ¸oniu | Near-optimal strategies for nonlinear and uncertain networked control systems[END_REF], time-triggered and self-triggered policies are presented for general nonlinear discrete-time systems in order to optimize a discounted cost. While this strategy allows reducing the usage of the network, it still updates the control input value at each iteration. To overcome this potential issue, self-triggered controllers for constrained nonlinear systems are proposed in [START_REF] Gommans | Resource-aware MPC for constrained nonlinear systems: a self-triggered control approach[END_REF], which generate sparse or sporadically changing control inputs, depending on the desired specification, while guaranteeing stability and a desired level of optimality.
In this study, we focus on time-triggered control and our objective is to address the following problem. We assume that we know an output-feedback controller, which uniformly asymptotically stabilizes the origin of the plant when it is updated at each step. We then consider the scenario where a new control input is transmitted to the plant within every N steps from the last transmission. Between two updates, the input is held using a general type of functions, which captures the hold-input and the zero-input [START_REF] Schenato | To zero or to hold control inputs with lossy links[END_REF] policies as particular cases. We provide conditions on the closed-loop system as well as an explicit bound on the maximum allowable transmission interval (MATI), i.e. N, to guarantee stability. In particular, sufficient conditions for local/global, asymptotic/exponential stability are given. These conditions are written as a linear matrix inequality for linear time-invariant systems, and a discretized Lorenz model of a thermal convection loop is proved to satisfy the required assumptions.
The approach is inspired by the continuous-time results in [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF]. It appears that the discrete-time nature of the problem generates differences and some non-trivial technical difficulties compared to [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF]. Indeed, we first had to modify the assumptions on the closed-loop system, and we had to make an assumption on an ℓ₂-gain contrary to [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF], as will be explained later. We then provide a bound on the MATI, which is different from the one in continuous time. Indeed, in contrast with [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF], we do not provide an explicit formula for the MATI bound but a simple algorithm for its computation, which is easy to implement. Compared to the related works based on model predictive ideas [START_REF] Bus ¸oniu | Near-optimal strategies for nonlinear and uncertain networked control systems[END_REF], [START_REF] Gommans | Resource-aware MPC for constrained nonlinear systems: a self-triggered control approach[END_REF], the controller is an output-feedback law (we do not require measurement of the full state) and we do not rely on optimization techniques, which may be important for applications with low computational capacities.
II. PRELIMINARIES
Let R := (-∞, ∞), R_{≥0} := [0, ∞), and Z_{≥0} := {0, 1, 2, . . .}. For (x, y) ∈ R^{n+m}, (x, y) stands for [xᵀ, yᵀ]ᵀ. A function χ : R_{≥0} → R_{≥0} is of class K if it is continuous, zero at zero and strictly increasing, and it is of class K∞ if, in addition, it is unbounded. A continuous function χ : R²_{≥0} → R_{≥0} is of class KL if, for each t ∈ R_{≥0}, χ(•, t) is of class K and, for each s > 0, χ(s, •) is decreasing to zero. The distance of a point x ∈ R^n to a set A ⊆ R^n is denoted by |x|_A := inf{|x - y| : y ∈ A}. Let P be a real, square, and symmetric matrix; λ_max(P) and λ_min(P) are respectively the largest and the smallest eigenvalues of P. The notation I stands for the identity matrix. The symbol ⋆ used in matrices denotes the symmetric block component.
We will use the following stability definition. Definition 1: Consider the system x⁺ = F(x) with x ∈ R^n. The compact set S ⊂ R^n is:
• uniformly locally asymptotically stable (ULAS) if there exist β ∈ KL and c > 0, such that for any x(0) with |x(0)| S ≤ c, the corresponding solution verifies
|x(k)|_S ≤ β(|x(0)|_S, k) for any k ∈ Z_{≥0}.
• uniformly globally asymptotically stable (UGAS) if the previous property holds for any x(0). • uniformly locally (respectively, globally) exponentially stable (ULES (respectively, UGES)) if it is ULAS (respectively, UGAS) and β(s, k) = Cse^{-σk} with C > 0 and σ > 0, for any s ∈ R_{≥0} and k ∈ Z_{≥0}.
III. PROBLEM STATEMENT
Consider the system
x_p⁺ = f_p(x_p, u), y = g_p(x_p), (1)
with state x p ∈ R np , input u ∈ R nu , and output y ∈ R ny . We design the following controller to stabilize system (1)
x_c⁺ = f_c(x_c, y), u = g_c(x_c, y), (2)
where x c ∈ R nc is the controller state. System (2) covers static controllers, in which case u = g c (y) (and no variable x c is needed).
We study the scenario where the input to system (1) is no longer updated by (2) at each step but sporadically, such that there exists a maximum of N steps between two successive updates. The underlying idea is to transmit as rarely as possible. Hence, denoting by k_i, i ∈ Z_{≥0}, the sequence of transmission instants, k_{i+1} - k_i ≤ N for any i ∈ Z_{≥0}. At each k_i, the controller receives the current value of the plant output y, generates an updated control input u, which is immediately transmitted to the plant. We assume that the delays induced by the transmissions and the computation time are negligible. For the sake of convenience, we introduce the clock variable τ ∈ Z_{≥0} to count the number of steps since the last transmission. Variable τ has the following dynamics
τ⁺ = 1 when τ ∈ {1, . . . , N} (transmission), and τ⁺ = τ + 1 when τ ∈ {1, . . . , N-1} (no transmission). (3)
In (3), when a transmission occurs, which may happen at any time in {1, . . . , N}, τ is reset to 1; otherwise τ is incremented by 1, and this may occur when τ ∈ {1, . . . , N-1}, but not when τ = N as a transmission necessarily needs to be triggered in this case (otherwise the inter-transmission time would be strictly bigger than N, which is excluded). When τ ∈ {1, . . . , N-1}, a transmission may or may not occur. We see that system (3) generates non-unique solutions for a given initial condition: that is not an issue for the forthcoming analysis.
The objective of this paper is to provide conditions to guarantee the stability of the closed-loop system (1)-( 2) in presence of communication constraints. In particular, we aim at providing an easily computable bound on N . We first need to model the overall system for this purpose.
The control input applied to system (1) is no longer u, but a sampled version of input u, which we denote û. Similarly, controller (2) receives the sampled version of y, denoted ŷ. Hence, (1) and (2) respectively become x_p⁺ = f_p(x_p, û) (the output equation does not change) and x_c⁺ = f_c(x_c, ŷ), u = g_c(x_c, ŷ).
The variables û and ŷ have the following dynamics like in [START_REF] Lješnjanin | Controllability of discrete-time networked control systems with try once discard protocol[END_REF]
(û⁺, ŷ⁺) = (u⁺, y⁺) when τ ∈ {1, . . . , N} (transmission), and (û⁺, ŷ⁺) = (ĝ_c(x_p, x_c, û, ŷ, τ), ĝ_p(x_p, x_c, û, ŷ, τ)) when τ ∈ {1, . . . , N-1} (no transmission), (4)
where y⁺ = g_p(x_p⁺) = g_p(f_p(x_p, û)) and u⁺ = g_c(x_c⁺, y⁺) = g_c(f_c(x_c, ŷ), g_p(f_p(x_p, û))).
The mappings ĝ_c and ĝ_p model the way the variables û and ŷ are respectively generated by the plant and the controller when there is no transmission. A typical implementation is zero-order-hold, meaning that û and ŷ remain constant in the absence of transmission, i.e. ĝ_c(x_p, x_c, û, ŷ, τ) = û and ĝ_p(x_p, x_c, û, ŷ, τ) = ŷ. Another common possibility is to 'zero' ŷ and û when no transmission occurs [START_REF] Schenato | To zero or to hold control inputs with lossy links[END_REF], which gives ĝ_c = 0 and ĝ_p = 0. We allow ĝ_c and ĝ_p to depend on all the variables of the problem for the sake of generality, hence covering other types of holding policy, such as the model-based one [START_REF] Lunze | A state-feedback approach to event-based control[END_REF].
For the sake of convenience, and as commonly done in the NCS literature, we introduce the sampling-induced error e := (e_u, e_y) ∈ R^{n_e} with e_u := û - u, e_y := ŷ - y and n_e := n_u + n_y. The overall system is modeled as follows
x⁺ = f(x, e), (e⁺, τ⁺) = (g₁(x, e, τ), 1) when τ ∈ {1, . . . , N} (transmission), and (e⁺, τ⁺) = (g₂(x, e, τ), τ + 1) when τ ∈ {1, . . . , N-1} (no transmission), (5)
where x := (x_p, x_c) ∈ R^{n_x}, n_x := n_p + n_c, f(x, e) := (f_p(x_p, g_c(x_c, g_p(x_p) + e_y) + e_u), f_c(x_c, g_p(x_p) + e_y)), g₁(x, e, τ) := (0, 0) and g₂(x, e, τ) := (ĝ_c(x, e_u + u, e_y + y, τ) - g_c(f_c(x_c, y + e_y), ĝ_p(x, e_u + u, e_y + y, τ)), ĝ_p(x, e_u + u, e_y + y, τ) - g_p(f_p(x_p, u + e_u))), with u = g_c(x_c, g_p(x_p) + e_y) and y = g_p(x_p).
IV. MAIN RESULT
In this section, we first state the assumptions we make on system (5), we then provide a bound on N , which we analyse, and we finally state the main stability results.
A. Assumption
We make the following assumption on system (5).
Assumption 1: There exist V : R^{n_x} → R_{≥0} continuous, W : R^{n_e} → R_{≥0} continuous, α_V, ᾱ_V, α_W, ᾱ_W, α, ε ∈ K∞, H : R^{n_x} → R continuous, θ ≥ γ > 0, L ∈ R_{≥0} and ∆ > 0 such that the following holds.
(i) For any (x, e) ∈ R^{n_x+n_e}, α_V(|x|) ≤ V(x) ≤ ᾱ_V(|x|) and α_W(|e|) ≤ W(e) ≤ ᾱ_W(|e|).
(ii) For any (x, e) ∈ R^{n_x+n_e} such that max{|x|, |e|} ≤ ∆, V(f(x, e)) - V(x) ≤ -α(|x|) - ε(|e|) - θH²(x) + γW²(e).
(iii) For any (x, e) ∈ R^{n_x+n_e} such that max{|x|, |e|} ≤ ∆ and any τ, W(g₂(x, e, τ)) ≤ LW(e) + H(x).
When items (ii)-(iii) hold for any (x, e) ∈ R^{n_x+n_e}, we say that the assumption holds globally. Item (i) of Assumption 1 means that V and W are positive definite and radially unbounded. Item (ii) of Assumption 1 is a robust local stability property of the system x⁺ = f(x, e). When the loop is closed at each step, e = 0, and items (i)-(ii) imply that the origin is uniformly locally asymptotically stable for the system x⁺ = f(x, 0), which corresponds to the closed-loop system (1)-(2). When e ≠ 0, these items imply that the x-system satisfies an input-to-state stability property with input e, and also that the x-system is (locally) ℓ₂-stable from W(e) to H(x) with gain γ/θ. It is interesting to note that this gain is less than 1 since γ ≤ θ. This is an important difference compared to the continuous-time results in [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF] where there is no condition on the corresponding L₂-gain. A possible explanation is the following. The underlying idea in [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF] is that the e-system is L₂-stable from H(x) to W(e), with a gain that can be made arbitrarily small by selecting the MATI bound accordingly (see Proposition 6 in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF] for a formal statement); then a small-gain analysis allows ensuring the stability of the overall system. In our study, we cannot make this ℓ₂-gain as small as desired because the MATI, N, is a strictly positive integer. When γ > θ, our analysis gives N = 1, that is we transmit at each step, as e is always equal to 0. Item (iii) of Assumption 1 is an exponential growth condition on the dynamics of the e-system when there is no transmission. When system (1) is linear and time-invariant, the conditions in Assumption 1 can be written as a linear matrix inequality, as explained in Section V-A. An example of a nonlinear system that verifies Assumption 1 is provided in Section V-B.
B. MATI estimate
To provide a bound on N , we introduce the variable φ ∈ R ≥0 which has the following dynamics
φ⁺ = λ(φ - 1)/(φ - 1 + λL²), φ(1) = λ, (6)
where λ := θ/γ ≥ 1, and θ, γ, L come from Assumption 1. We define the MATI as
N⋆ := sup{k ∈ Z_{>0} : φ(k) ≥ 1}. (7)
Note that the set on the right-hand side of (7) is never empty as φ(1) = λ = θ/γ ≥ 1 (θ ≥ γ according to Assumption 1). Moreover, the denominator in (6) never vanishes when φ ≥ 1 (unless L = 0, in which case we define φ⁺ = λ). We provide guidelines on how to compute N⋆ below, but before that we state the following result, which provides necessary and sufficient conditions under which N⋆ is respectively (in)finite and strictly bigger than 1.
Proposition 1: The following holds.
(i) N⋆ is finite if and only if L > 1 - 1/√λ.
(ii) N⋆ ≥ 2 if and only if L ≤ √λ - 1/√λ.
Proof. We first prove item (i) of Proposition 1. The idea is to show that φ(k), as iteratively defined by (6), does not increase. We will see that two situations can then happen: either the iterative map in (6) has a fixed point that belongs to [1, λ], in which case φ converges to it as time grows and N⋆ is infinite, or there is no such fixed point and φ becomes strictly less than 1 in finite time.
For the sake of convenience, we introduce ψ := φ - 1. Showing that ψ(k) is non-increasing is thus equivalent to showing that φ(k) is non-increasing. We have
ψ⁺ = φ⁺ - 1 = λψ/(ψ + λL²) - 1 = ((λ - 1)ψ - λL²)/(ψ + λL²). (8)
Let ψ ∈ [0, λ - 1]. Then ψ⁺ ≤ ψ ⇔ ((λ - 1)ψ - λL²)/(ψ + λL²) ≤ ψ ⇔ (λ - 1)ψ - λL² ≤ ψ² + λL²ψ ⇔ 0 ≤ ψ² + (λ(L² - 1) + 1)ψ + λL². (9)
We interpret the last term on the right-hand side above as a second-order polynomial in ψ, which we denote p. Its discriminant is δ := (λ(L² - 1) + 1)² - 4λL². When δ < 0, p has no real root, therefore p(ψ) > 0 for all ψ, which means that any solution to (8) strictly decreases and (8) has no fixed point, hence N⋆ is finite. Consider now the case where δ ≥ 0 and let X := λ(L² - 1) + 1. We show that the maximal root of p, ψ̄ := (-X + √δ)/2, is such that ψ̄ < λ - 1. This statement is equivalent to -X + √δ = -X + √(X² - 4λL²) < 2(λ - 1), which leads to -X - 2(λ - 1) < -√(X² - 4λL²). Note that -X - 2(λ - 1) ≤ 0, otherwise we would have -2(λ - 1) > X = λ(L² - 1) + 1, which gives 0 > λL² + λ - 1, which is false as λL² ≥ 0 and λ - 1 ≥ 0 by definition of λ. Thus -X - 2(λ - 1) < -√(X² - 4λL²) is equivalent to (-X - 2(λ - 1))² > X² - 4λL², which we write as X² + 4(λ - 1)X + 4(λ - 1)² > X² - 4λL². After simplifying both sides of the last inequality, we obtain (λ - 1)X + (λ - 1)² + λL² = (λ - 1)(λ(L² - 1) + 1) + (λ - 1)² + λL² = λ²L² > 0, which is true as long as L > 0. Hence, ψ̄ < λ - 1 when L > 0. For any ψ > max{ψ̄, 0}, ψ⁺ > ψ̄ since ψ > ψ̄ implies ψ⁺ > ψ̄⁺ = ψ̄, as the right-hand side of (8) is strictly increasing in ψ on R_{≥0}. Consequently, ψ(k) iteratively defined by (8) and initialized at λ - 1 strictly decreases when L > 0. When L = 0, ψ(k) = λ - 1 for any k ∈ Z_{≥0} according to (8).
We have thus proved that ψ(k) is non-increasing in all cases. We deduce that N⋆ is finite if and only if (8) has no fixed point (i.e. δ < 0) or when ψ̄ < 0. The first case, i.e. δ < 0, is equivalent to L ∈ (1 - 1/√λ, 1 + 1/√λ). We next analyse the second case. Having ψ̄ < 0 and δ ≥ 0 is equivalent to -X + √(X² - 4λL²) < 0 and X² - 4λL² ≥ 0. These inequalities are equivalent to X ≥ 2√λ L, that is λ(L² - 1) + 1 ≥ 2√λ L, which gives (√λ L - 1)² ≥ λ and finally L ≥ 1 + 1/√λ. As a result, N⋆ is finite if and only if L > 1 - 1/√λ.
To prove item (ii) of Proposition 1, we study when φ(2) ≥ 1 with φ(1) = λ, that is when λ(λ - 1)/(λ - 1 + λL²) ≥ 1. The latter is equivalent to λ(λ - 1) ≥ λ - 1 + λL², which we rewrite as λ² - 2λ + 1 = (λ - 1)² ≥ λL². Hence φ(2) ≥ 1 if and only if λ - 1 ≥ √λ L, which corresponds to the condition in item (ii) of Proposition 1.
According to Proposition 1, when L > 1 - 1/√λ, N⋆ is finite. To compute it, we can run a simple program where we initialize φ at λ and iterate it according to (6) until it becomes strictly less than 1, which happens at iteration N⋆ + 1. When, in addition, L > √λ - 1/√λ, we immediately know that N⋆ = 1 according to item (ii) of Proposition 1, which means that we transmit at each step. When L < 1 - 1/√λ, N⋆ is infinite, which means that we only need to close the feedback loop once. This situation may happen in specific scenarios, like when y = x and model-based holding functions are used to generate û and ŷ, or when controller (2) stabilizes the origin of system (1) in one step and the zero-input holding strategy is employed.
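As a sanity check, this computation takes only a few lines; the sketch below is ours (the function name and the k_max safeguard are our choices), with the infinite case of item (i) of Proposition 1 screened out first.

```python
import math

def mati_bound(theta, gamma, L, k_max=10**6):
    """Compute N* of (7) by iterating (6); assumes theta >= gamma > 0 and L >= 0."""
    lam = theta / gamma                    # lambda := theta / gamma >= 1
    if L <= 1.0 - 1.0 / math.sqrt(lam):    # item (i) of Proposition 1: N* infinite
        return None
    phi, k = lam, 1                        # phi(1) = lambda
    while phi >= 1.0 and k < k_max:
        # phi(k+1) = lambda (phi(k) - 1) / (phi(k) - 1 + lambda L^2), cf. (6)
        phi = lam * (phi - 1.0) / (phi - 1.0 + lam * L ** 2)
        k += 1
    return k - 1                           # largest k such that phi(k) >= 1

print(mati_bound(2.0, 1.0, 0.5))           # illustrative values; prints 2
```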
Remark 1: In [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF], where the plant and the controller have continuous-time dynamics, a similar variable φ is introduced to compute the MATI. In particular, φ is given by the solution to a nonlinear ordinary differential equation (see (27) in [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF]). The latter is solved analytically and then the MATI bound is obtained. Analytically determining N⋆ is a difficult task, which is the reason why we have decided to define it as in (7), which is easy to evaluate as explained above.
C. Stability guarantees
The next result ensures asymptotic stability properties for system (5).
Theorem 1: Consider system (5) and suppose Assumption 1 holds. For any N ∈ {1, . . . , N⋆}, where N⋆ is defined in (7), the compact set A := {(x, e, τ) : x = 0, e = 0, τ ∈ {1, . . . , N}} is:
(i) ULAS;
(ii) UGAS when Assumption 1 holds globally;
(iii) ULES (respectively, UGES) when Assumption 1 holds (respectively, globally) with α_V(s) = a_V s², ᾱ_V(s) = ā_V s², α_W(s) = a_W s², ᾱ_W(s) = ā_W s², α(s) = as², ε(s) = ǫs² for any s ≥ 0, with a_V, ā_V, a_W, ā_W, a, ǫ > 0.
Proof. We prove item (i) of Theorem 1; the two other items follow similarly. We write system (5) as q⁺ ∈ F(q) with q := (x, e, τ) for the sake of convenience. We introduce U(q) = V(x) + γφ(τ)W²(e) for any (x, e) ∈ R^{n_x+n_e} and τ ∈ {1, . . . , N}, where V, W, γ come from Assumption 1. Let (x, e) ∈ R^{n_x+n_e} be such that max{|x|, |e|} ≤ ∆, where ∆ comes from Assumption 1, and τ ∈ {1, . . . , N}. Let ϕ ∈ F(q). From item (ii) of Assumption 1, when ϕ = (f(x, e), g₁(x, e, τ), 1), i.e. when a transmission occurs,
U(ϕ) - U(q) ≤ -α(|x|) - ε(|e|) - θH²(x) + γW²(e) + γφ(1)W²(0) - γφ(τ)W²(e) = -α(|x|) - ε(|e|) - θH²(x) + γW²(e) - γφ(τ)W²(e). (10)
By definition of N⋆ in (7) and since τ ≤ N ≤ N⋆, φ(τ) ≥ 1, thus γW²(e) - γφ(τ)W²(e) ≤ 0 and
U(ϕ) - U(q) ≤ -α(|x|) - ε(|e|) ≤ -ρ(|(x, e)|), (11)
for some ρ ∈ K ∞ . When ϕ = (f (x, e), g 2 (x, e, τ ), τ + 1), i.e. when no transmission occurs, τ ∈ {1, . . . , N -1} and in view of items (ii)-(iii) of Assumption 1,
U(ϕ) - U(q) ≤ -α(|x|) - ε(|e|) - θH²(x) + γW²(e) + γφ(τ+1)W²(g₂(x, e, τ)) - γφ(τ)W²(e) ≤ -α(|x|) - ε(|e|) - θH²(x) + γW²(e) + γφ(τ+1)(LW(e) + H(x))² - γφ(τ)W²(e) = -α(|x|) - ε(|e|) - θH²(x) + γW²(e) + γφ(τ+1)(L²W²(e) + H²(x) + 2LW(e)H(x)) - γφ(τ)W²(e) = -α(|x|) - ε(|e|) - ⟨(H(x), W(e)), M(τ)(H(x), W(e))⟩, (12)
where
M(τ) := [θ - γφ(τ+1), -γφ(τ+1)L; ⋆, γ(-1 + φ(τ) - φ(τ+1)L²)]. (13)
The matrix M(τ) is positive semi-definite if and only if
0 ≤ θ - γφ(τ+1) and 0 ≤ (θ - γφ(τ+1))γ(-1 + φ(τ) - φ(τ+1)L²) - γ²φ(τ+1)²L². (14)
The first inequality follows from the fact that φ(τ) ≤ θ/γ for all τ ∈ {1, . . . , N}, as φ is shown to be non-increasing in the proof of Proposition 1 and φ(1) = θ/γ. The definition of φ(τ+1) in (6) is such that the second inequality in (14) holds with equality. Indeed, the corresponding expression can be written as (-γ²(-1 + φ(τ)) - γθL²)φ(τ+1) + θγ(-1 + φ(τ)) (the terms in φ(τ+1)² cancel), whose root, when interpreting φ(τ+1) as an unknown, is φ(τ+1) = θγ(1 - φ(τ))/(-γ²(-1 + φ(τ)) - γL²θ) = (θ/γ)(φ(τ) - 1)/(φ(τ) - 1 + L²θ/γ), which corresponds to (6). The matrix M(τ) is therefore positive semi-definite, hence U(ϕ) - U(q) ≤ -α(|x|) - ε(|e|) ≤ -ρ(|(x, e)|) with ρ as in (11).
We have proved that U(ϕ) - U(q) ≤ -ρ(|(x, e)|) for any ϕ ∈ F(q). Let ∆̄ = α_U(∆) where α_U(s) = min{α_V(s/2), γα_W²(s/2)} for s ≥ 0 (see for instance Lemma 2 in [START_REF] Postoyan | A framework for the event-triggered stabilization of nonlinear systems[END_REF]); U(q) ≤ ∆̄ implies max{|x|, |e|} ≤ ∆ in view of item (i) of Assumption 1. The set {q : U(q) ≤ ∆̄, τ ∈ {1, . . . , N}} is thus forward invariant for system (5). We note that α_U(|q|_A) ≤ U(q) ≤ ᾱ_U(|q|_A) where ᾱ_U : s ↦ ᾱ_V(s) + θᾱ_W²(s) ∈ K∞ and α_U ∈ K∞. We deduce that the set A is ULAS using Theorem 8 in [START_REF] Nešić | Formulas relating KL stability estimates of discrete-time and sampled-data nonlinear systems[END_REF].
Remark 2: The stability properties ensured by Theorem 1 are robust to the so-called σ- or ρ-perturbations when f(x, e), g₁(x, e, τ), g₂(x, e, τ) in (5) are compact and non-empty for any x, e, τ (which is the case when the corresponding mappings are continuous for instance), according to Theorem 2.8 in [START_REF] Kellett | On the robustness of KL-stability for difference inclusions: smooth discrete-time Lyapunov functions[END_REF], as the Lyapunov function used in the proof of Theorem 1 is continuous.
V. APPLICATIONS
In this section, we apply the results of the previous section to linear time-invariant systems and to a nonlinear example.
A. Linear time-invariant systems
Consider the system
x_p⁺ = A_p x_p + B_p u, y = C_p x_p, (15)
where (A p , B p ) is stabilizable and (A p , C p ) is detectable. We can therefore stabilize the origin of system (15) using a dynamic controller of the form
x_c⁺ = A_c x_c + B_c y, u = C_c x_c + D_c y. (16)
We take into account the communication constraints between system (15) and controller (16), and we obtain the model below in view of Section III
x⁺ = A₁x + B₁e, (e⁺, τ⁺) = ((0, 0), 1) when τ ∈ {1, . . . , N} (transmission), and (e⁺, τ⁺) = (A₂x + B₂e, τ + 1) when τ ∈ {1, . . . , N-1} (no transmission), (17)
where x = (x_p, x_c) ∈ R^{n_x}, and
A₁ := [A_p + B_pD_cC_p, B_pC_c; B_cC_p, A_c], B₁ := [B_pD_c, B_p; B_c, 0],
A₂ := [-C_p(A_p + B_pD_cC_p), -C_pB_pC_c; -C_cB_cC_p, -C_cA_c], and B₂ := [-C_pB_pD_c, -C_pB_p; -C_cB_c, 0].
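For reference, these blocks can be assembled numerically as in the sketch below (our own illustration; the function name and the use of NumPy are our choices, and the block layout follows the matrices above).

```python
import numpy as np

def closed_loop_matrices(Ap, Bp, Cp, Ac, Bc, Cc, Dc):
    """Assemble the blocks A1, B1, A2, B2 of (17)."""
    nc, nu = Ac.shape[0], Bp.shape[1]
    A1 = np.block([[Ap + Bp @ Dc @ Cp, Bp @ Cc],
                   [Bc @ Cp,           Ac]])
    B1 = np.block([[Bp @ Dc, Bp],
                   [Bc,      np.zeros((nc, nu))]])
    A2 = np.block([[-Cp @ (Ap + Bp @ Dc @ Cp), -Cp @ Bp @ Cc],
                   [-Cc @ Bc @ Cp,             -Cc @ Ac]])
    B2 = np.block([[-Cp @ Bp @ Dc, -Cp @ Bp],
                   [-Cc @ Bc,      np.zeros((Cc.shape[0], nu))]])
    return A1, B1, A2, B2
```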
The next result shows that system (17) satisfies Assumption 1 provided a linear matrix inequality holds, which then implies that the set A is UGES according to Theorem 1. Its proof is omitted for space reasons.
Proposition 2: Consider system (17) and suppose that there exist a symmetric positive definite matrix P and θ ≥ γ > 0 such that
[A₁ᵀPA₁ - P + θA₂ᵀA₂, A₁ᵀPB₁; ⋆, -γI + B₁ᵀPB₁] < 0. (18)
Then, Assumption 1 holds globally and the set A defined in Theorem 1 is UGES when N ≤ N⋆ with N⋆ given by (7).
B. Lorenz model
We consider the Euler discretization of the controlled Lorenz model of a thermal convection loop¹ studied in [START_REF] Wan | Nonlinear feedback control with global stabilization[END_REF] with sampling period T > 0
x₁⁺ = x₁ + T(-ax₁ + ax₂), x₂⁺ = x₂ + T(bx₁ - x₂ - x₁x₃ + u), x₃⁺ = x₃ + T(x₁x₂ - cx₃), y = x₁, (19)
where a, b, c > 0; see [START_REF] Wan | Nonlinear feedback control with global stabilization[END_REF] for details on the meaning of the state variables, the control input and the parameters. We take a = 10, b = 28, c = 8/3 and T = 10⁻³. We design the static output-feedback law u = -by. After taking into account the communication constraints, the input applied to the plant is u = -b(y + e), where e = ŷ - y (there is no need to introduce the error on u since the controller is static). When using the hold-input strategy, the corresponding system satisfies Assumption 1 with W(e) = |e|, α_W(s) = ᾱ_W(s) = s, L = 1, H(x) = T|-ax₁ + ax₂|, V(x) = 0.0059288x₁⁴ + 2.8058·10⁻⁶x₂⁴ + 0.0044086x₃⁴ + 2.5782x₁² - 4.9865x₁x₂ + 8.0907·10⁻⁷x₁x₃ + 5.3294x₂² - 7.4358·10⁻⁷x₂x₃ + 3.0477x₃², which was obtained using SOSTOOLS [START_REF] Papachristodoulou | SOSTOOLS: Sum of squares optimization toolbox for MATLAB[END_REF], α_V(s) = s², some ᾱ_V ∈ K∞, α(s) = ε(s) = 10⁻³s², θ = 200, γ = 2.001, and ∆ = √1000, for x = (x₁, x₂, x₃) ∈ R³, e ∈ R and s ≥ 0. In particular, item (ii) of Assumption 1 holds for any |x| ≤ ∆ and e ∈ R, and item (iii) of Assumption 1 is verified for any x, e. As a consequence, the set A defined in Theorem 1 is ULAS. Furthermore, in this case, the set {(x, e, τ) : V(x) + γφ(τ)W²(e) ≤ ∆²} belongs to the basin of attraction of the origin (since V(x) ≥ |x|² for any x ∈ R³).
The formula in (7) gives N⋆ = 18, while simulations with periodic transmissions have shown that the asymptotic stability of the set A for the closed-loop system is preserved up to N = 77. The bound we have obtained can be further improved by taking 'smaller' functions ε and α; nevertheless this may affect the robustness of the closed-loop system. Also, a different Lyapunov function would generally lead to a different N⋆ (as well as a different basin of attraction). Finally, we note that Assumption 1 does not hold when 'zeroing' ŷ when no transmission occurs (i.e. when g₂ = 0 in (5)). In this case, item (iii) of Assumption 1 is verified with L = 0 (and H(x) = |x₁ + T(-ax₁ + ax₂)|) but item (ii) of Assumption 1 cannot be satisfied, otherwise N⋆ would be infinite according to item (i) of Proposition 1 (as L = 0), which means that the origin of the open-loop system would be locally exponentially stable, which is not the case.
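The periodic-transmission simulations mentioned above can be reproduced along the following lines; this is a sketch under our own choices of initial condition and horizon, implementing (19) with the hold-input strategy and transmissions every N steps.

```python
import numpy as np

a, b, c, T = 10.0, 28.0, 8.0 / 3.0, 1e-3   # parameters of Section V-B

def step(x, u_hat):
    x1, x2, x3 = x
    return np.array([x1 + T * (-a * x1 + a * x2),
                     x2 + T * (b * x1 - x2 - x1 * x3 + u_hat),
                     x3 + T * (x1 * x2 - c * x3)])

def simulate_periodic(x0, N, k_end=200000):
    x, u_hat = np.array(x0, dtype=float), 0.0
    for k in range(k_end):
        if k % N == 0:            # transmission: update the held input
            u_hat = -b * x[0]     # u = -b y with y = x1, held until next update
        x = step(x, u_hat)
    return x

print(np.linalg.norm(simulate_periodic([1.0, -1.0, 2.0], N=18)))
```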
VI. CONCLUSIONS
We have investigated the scenario in which a discrete-time controller and a discrete-time plant communicate with each other at least every N steps. Assuming that the corresponding closed-loop system satisfies a robust asymptotic stability property when there is no communication constraint, we have provided an explicit bound on N to preserve stability. The results have been applied to linear time-invariant systems in which case the assumptions are written as a linear matrix inequality, as well as to a nonlinear physical example.
This study was motivated by [START_REF] Varma | Energy efficient time-triggered control over wireless sensor/actuator networks[END_REF], where we develop energy-efficient transmission strategies for time-triggered controlled discrete-time systems implemented over a wireless network.
A careful analysis of the relationship between the stability of this model and the stability of the original sampled-data one would be interesting but is outside the scope of this paper, see [START_REF] Nešić | Sufficient conditions for stabilization of sampled-data systems via discrete-time approximations[END_REF].
VII. ACKNOWLEDGEMENT
Romain Postoyan would like to thank Vineeth S. Varma for the discussions which led to this work, and Giorgio Valmorbida for his valuable advice regarding the use of SOSTOOLS in Section V-B.
His work was partially supported by the ANR under the grant COMPACS (ANR-13-BS03-0004-02). The work of D. Nešić was supported by the Australian Research Council under the Discovery Project DP1094326. | 34,851 | [
"845",
"858580"
] | [
"185180",
"32324"
] |
01379968 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2016 | https://hal.science/hal-01379968/file/CDC16_0452_FI.pdf | Wei Wang
Romain Postoyan
email: [email protected]
Dragan Nesic
email: [email protected]
W P Maurice H Heemels
email: [email protected]
Stabilization of nonlinear systems using state-feedback periodic event-triggered controllers
Stabilization of nonlinear systems using state-feedback periodic event-triggered controllers W. Wang, R. Postoyan, D. Nešić and W.P.M.H. Heemels
Abstract-We investigate the scenario where a controller communicates with a plant at discrete time instants generated by an event-triggering mechanism. In particular, the latter collects sampled data from the plant and the controller at each sampling instant, and then decides whether the control input needs to be updated, leading to periodic event-triggered control (PETC). In this paper, we propose a systematic design procedure for periodic event-triggered controllers that stabilize general nonlinear systems. The design is based on the existence of a continuous-time state-feedback controller, which stabilizes the system in the absence of communication constraints. We then take into account the sampling and we design an event-triggering condition, which is only updated at some of the sampling instants, to preserve stability. An explicit bound on the maximum sampling period with which the triggering rule is evaluated is provided. We show that there exists a trade-off between the latter and a parameter used to define the triggering condition. The results are applied to a van der Pol oscillator as an illustration.
I. INTRODUCTION
Major advancements over the last decades in wired and wireless communication networks gave rise to networked control systems (NCS). These are systems in which the sensors and the actuators communicate with the controller via a shared digital channel. A major challenge in this context is to design control strategies which do not "overuse" the network, to limit the transmission delays and the occurrence of packet losses, which may destroy the desired closed-loop system properties. An attractive solution consists in adapting the transmissions to the current state of the plant, a paradigm known as event-triggered control, see [START_REF] Heemels | An introduction to event-triggered and self-triggered control[END_REF] and the references therein. This paradigm consists in continuously evaluating a state/output-dependent condition and, when the latter is satisfied, a transmission is triggered. Many works have shown that event-triggered control is able to significantly reduce the number of transmissions compared to the traditional periodic sampling, see [START_REF] Lunze | A state-feedback approach to event-based control[END_REF], [START_REF] Mazo | An ISS self-triggered implementation of linear controllers[END_REF], [START_REF] Postoyan | Event-triggered tracking control of unicycle mobile robots[END_REF], [START_REF] Wang | Event-triggering in distributed networked control systems[END_REF] for instance. Nevertheless, the continuous evaluation of the triggering condition is not possible when the implementation platform is digital. Instead, the triggering criterion can only be evaluated at some sampling instants, leading to periodic event-triggered control (PETC), see [START_REF] Heemels | Periodic eventtriggered control for linear systems[END_REF], [START_REF] Heemels | L 2 -gain analysis for a class of hybrid systems with applications to reset and event-triggered control: A lifting approach[END_REF], [START_REF] Heemels | Periodic event-triggered control[END_REF] and the references therein.
Results on the design of PETC for linear systems are presented, for instance, in [START_REF] Heemels | Periodic eventtriggered control for linear systems[END_REF], [START_REF] Heemels | L 2 -gain analysis for a class of hybrid systems with applications to reset and event-triggered control: A lifting approach[END_REF]. In [START_REF] Heemels | Periodic event-triggered control[END_REF], a methodology is proposed for nonlinear systems. The idea is to start from a given event-triggered controller and to redesign it to obtain a periodic event-triggered controller, based on a condition on the successive Lie derivatives of the original triggering condition. A bound on the sampling period is provided, which is based on the minimum inter-transmission time of the event-triggered controller, which is often difficult to precisely estimate. On the other hand, the generic results in [START_REF] Sanfelice | Lyapunov analysis of sampled-andhold hybrid feedbacks[END_REF] on the sampling of hybrid controllers show that, if an event-triggered controller ensures a uniform global asymptotic stability property, the latter is preserved semiglobally and practically when emulating the controller with sufficiently fast sampling.
In this paper, we design periodic event-triggered controllers for nonlinear systems using a different approach compared to [START_REF] Heemels | Periodic event-triggered control[END_REF], [START_REF] Sanfelice | Lyapunov analysis of sampled-andhold hybrid feedbacks[END_REF]. We start from a continuous-time controller which stabilizes the plant in the absence of network (and not an event-triggered controller as in [START_REF] Heemels | Periodic event-triggered control[END_REF], [START_REF] Sanfelice | Lyapunov analysis of sampled-andhold hybrid feedbacks[END_REF]). We then take into account the communication network and we design a triggering condition, which is only evaluated at given sampling instants. The stability of the overall system is guaranteed provided that the maximum sampling period, with which the triggering rule is evaluated, is less than a given bound. For that purpose, we model the overall system as a hybrid system using the formalism of [START_REF] Goebel | Hybrid dynamical systems: Modeling, Stability, and Robustness[END_REF] and the analysis relies on the construction of a novel hybrid Lyapunov function. The results are applied to a van der Pol oscillator as an illustration.
Compared to [START_REF] Heemels | Periodic event-triggered control[END_REF], the bound on the sampling period does not rely on the estimation of the minimum inter-transmission time of a predesigned event-triggered controller, and is therefore easier to compute. Furthermore, the triggering condition we propose is easy to construct as we only need to verify an input-to-state stability property, as opposed to a local condition on the Lie derivatives of the triggering rule of the original event-triggered controller in [START_REF] Heemels | Periodic event-triggered control[END_REF]. In addition, our results clearly show that there is a trade-off between the parameter of the triggering condition and the maximum time between two sampling instants. Also, the sampling instants at which the triggering rule is evaluated are not necessarily periodic. On the other hand, we cope with a more specific type of triggering rule than in [START_REF] Heemels | Periodic event-triggered control[END_REF]. In contrast with [START_REF] Sanfelice | Lyapunov analysis of sampled-andhold hybrid feedbacks[END_REF],
we provide an explicit bound on the sampling period and we ensure uniform global asymptotic stability properties, as opposed to uniform semiglobal practical stability.
The paper is organized as follows. The notation and preliminaries on hybrid systems are given in Section II. We state the problem and present the model in Section III. The main results are stated in Section IV. Simulation results and conclusions are respectively provided in Sections V and VI. The proofs are given in the appendix.
II. PRELIMINARIES
Let Z_{>0} := {1, 2, . . .}, Z_{≥0} := {0, 1, 2, . . .} and R_{≥0} := [0, ∞). Let |x| denote the Euclidean norm of the vector x ∈ R^n. For (x, y) ∈ R^{n+m}, (x, y) stands for [xᵀ, yᵀ]ᵀ. Given a set A ⊂ R^n and x ∈ R^n, we define the distance of x to A as |x|_A := inf_{y∈A} |x - y|. A set-valued mapping M : R^m ⇒ R^n is locally bounded if, for any x ∈ R^m, there exists a neighborhood U_x of x such that M(U_x) is a bounded set. A set-valued mapping M : R^m ⇒ R^n is outer semi-continuous when its graph {(y, z) ∈ R^{m+n} : z ∈ M(y)} is closed, see Lemma 5.10 in [2]. A function γ : R_{≥0} → R_{≥0} is of class-K if it is continuous, zero at zero and strictly increasing, and it is of class-K∞ if, in addition, it is unbounded. A function γ : R²_{≥0} → R_{≥0} is of class-KL if it is continuous, for each r ∈ R_{≥0}, γ(•, r) is of class-K, and, for each s ∈ R_{≥0}, γ(s, •) is decreasing to zero. For x, v ∈ R^n and locally Lipschitz U : R^n → R, let U°(x; v) denote the Clarke derivative of the function U at x in the direction v, i.e. U°(x; v) := lim sup_{y→x, λ↓0} (U(y + λv) - U(y))/λ. This notion will be useful as we will be working with locally Lipschitz Lyapunov functions, which are not differentiable everywhere.
Consider the following hybrid system [START_REF] Goebel | Hybrid dynamical systems: Modeling, Stability, and Robustness[END_REF]
q̇ = F(q), q ∈ C; q⁺ ∈ G(q), q ∈ D, (1)
with state q ∈ R n and where C, D ⊂ R n are respectively the flow and the jump sets. We assume that: the sets C and D are closed; F : R n → R n is a continuous function; G : R n ⇒ R n is outer semi-continuous and locally bounded; and G(q) is nonempty for each q ∈ D.
We now recall some definitions from [START_REF] Goebel | Hybrid dynamical systems: Modeling, Stability, and Robustness[END_REF].
A set S ⊂ R_{≥0} × Z_{≥0} is called a compact hybrid time domain if S = ∪_{j=0}^{J-1}([t_j, t_{j+1}], j) for some finite sequence of times 0 = t₀ ≤ t₁ ≤ t₂ ≤ . . . ≤ t_J. The set S is a hybrid time domain if, for all (T, J) ∈ S, S ∩ ([0, T] × {0, 1, . . . , J}) is a compact hybrid time domain. A function q : S → R^n is a hybrid arc if S is a hybrid time domain and q(•, j) is locally absolutely continuous for each j. A hybrid arc q : dom q → R^n is a solution to (1) if q(0, 0) ∈ C ∪ D and: 1) for all j ∈ Z_{≥0} and almost all t such that (t, j) ∈ dom q, q(t, j) ∈ C and q̇(t, j) = F(q(t, j)); 2) for all (t, j) ∈ dom q such that (t, j + 1) ∈ dom q, q(t, j) ∈ D and q(t, j + 1) ∈ G(q(t, j)).
Definition 1: The closed set A ⊂ R n is called uniformly globally asymptotically stable (UGAS) for system (1) if there exists β ∈ KL such that all solutions q to system (1) satisfy
|q(t, j)|_A ≤ β(|q(0, 0)|_A, t + j) for all (t, j) ∈ dom q, (2)
and all maximal solutions to system (1) are complete. We will need the following result, which corresponds to Lemma II.1 in [START_REF] Liberzon | Lyapunov-based smallgain theorems for hybrid systems[END_REF].
Lemma 1: Consider two functions U₁ : R^n → R and U₂ : R^n → R that have well-defined Clarke derivatives for all x ∈ R^n and v ∈ R^n. Introduce three sets A := {x : U₁(x) > U₂(x)}, B := {x : U₁(x) < U₂(x)}, Γ := {x : U₁(x) = U₂(x)}. Then, for any v ∈ R^n, the function U(x) := max{U₁(x), U₂(x)} satisfies U°(x; v) = U₁°(x; v) for all x ∈ A, U°(x; v) = U₂°(x; v) for all x ∈ B, and U°(x; v) ≤ max{U₁°(x; v), U₂°(x; v)} for all x ∈ Γ.
III. PETC MODEL
We consider the plant model
ẋ_p = f_p(x_p, u), (3)
where x p ∈ R np is the state and u ∈ R nu is the control input.
We assume that the full state vector x p is measured. Suppose that the following state-feedback controller is designed to stabilize the origin of ( 3)
ẋ_c = f_c(x_c, x_p), u = g_c(x_c, x_p), (4)
where x c ∈ R nc is the state of the controller. When the controller is static, (4) becomes u = g c (x p ) and there is no need to introduce the state x c .
Fig. 1: PETC schematic.
We consider the scenario where plant (3) and controller (4) communicate with each other via a network, see Figure 1. An event-triggering mechanism is used to define the sequence of transmission instants in the following manner. A triggering condition is evaluated at each sampling instant s i , i ∈ Z ≥0 , where
ε ≤ s_{i+1} - s_i ≤ T, i ∈ Z_{≥0}, (5)
where T > 0 is the upper bound on the sampling period and ε ∈ (0, T ] is the minimum time between two successive evaluations of the triggering condition. When the triggering condition is satisfied, the plant state measurement x p and the control input u are respectively sent to the controller and to the plant. Consequently, the sequence of transmission instants, which we denote {t i } i∈I , I ⊆ Z ≥0 , is a subsequence of {s i } i∈Z ≥0 , and two successive transmissions are spaced by at least ε units of time in view of (5), thereby avoiding the Zeno phenomenon. Parameter ε reflects the minimum achievable transmission interval given by the hardware constraints. We assume that the transmission delays and the quantization effects are negligible. For the sake of generality, we allow the triggering condition to depend on x p , x c , u at the current transmission time. While this may be difficult to implement in practice, this formulation encompasses the practically relevant cases where the controller is directly connected to the actuators and only the plant state is transmitted over the network, or vice versa when the controller is directly connected to the sensors and only the control input is sent over the channel.
Because of the communication network, plant (3) no longer has access to u, but only to its networked version, which we denote by û. Similarly, controller (4) has access to x̂_p, the networked version of x_p. Between two successive transmission instants, x̂_p and û are governed by
d/dt x̂_p = f̂_p(x_p, x_c, x̂_p, û),  d/dt û = f̂_c(x_p, x_c, x̂_p, û),  t ∈ (t_i, t_{i+1}),  (6)
where f̂_p and f̂_c are the holding functions. Zero-order-hold devices correspond to f̂_p = 0 and f̂_c = 0 for instance, but other holding functions can be envisioned as well, see [START_REF] Postoyan | A framework for the event-triggered stabilization of nonlinear systems[END_REF] for example. Before modeling the dynamics of x̂_p and û at each sampling instant s_i, we introduce the vector x := (x_p, x_c) ∈ R^{n_x}, which is the concatenation of the plant and the controller state, and the vector of network-induced errors e := (e_p, e_u) ∈ R^{n_e}, where e_p := x̂_p − x_p is the network-induced error on the state measurement x_p, e_u := û − u is the network-induced error on the control input, n_x := n_p + n_c and n_e := n_p + n_u. At each sampling instant s_i, i ∈ Z_{≥0}, the updates of x̂_p and û satisfy
(x̂_p(s_i^+), û(s_i^+)) ∈ { (x_p(s_i), u(s_i))  when Υ(e(s_i), x(s_i)) > 0;
  (x̂_p(s_i), û(s_i))  when Υ(e(s_i), x(s_i)) < 0;
  {(x̂_p(s_i), û(s_i)), (x_p(s_i), u(s_i))}  when Υ(e(s_i), x(s_i)) = 0,  (7)
where Υ describes the triggering condition, which is evaluated at each sampling instant by the event-triggering mechanism. We explain later how to construct Υ (see Section IV-B). In view of (7), when Υ(e(s_i), x(s_i)) > 0, a transmission occurs at s_i, and x̂_p and û are reset to the actual values of x_p and u, respectively. When Υ(e(s_i), x(s_i)) < 0, no transmission occurs and x̂_p and û remain unchanged. When Υ(e(s_i), x(s_i)) = 0, the model allows two possibilities: either a transmission occurs or not; our results apply in both cases. This construction ensures that the jump map in (7) is outer semi-continuous, which is essential for the hybrid model presented below to be (nominally) well-posed, see Chapter 6 in [START_REF] Goebel | Hybrid dynamical systems: Modeling, Stability, and Robustness[END_REF] for more details. As a result, (7) generates non-unique solutions, which is not an issue for the forthcoming results.
We deduce from (7) that the variable e has the following dynamics at jumps
e(s_i^+) ∈ h(e(s_i), x(s_i)),  (8)
where
h(e, x) := (1 − Γ(e, x)) e  (9)
and Γ : R^{n_e+n_x} ⇒ {0, 1} is the set-valued function that indicates whether a transmission occurs or not. In particular, Γ(e, x) = {1} when Υ(e, x) > 0, which corresponds to a transmission. When Υ(e, x) < 0, Γ(e, x) = {0}, which corresponds to no transmission, and h(e, x) = e in this case. When Υ(e, x) = 0, Γ(e, x) = {0, 1} covers the above two possibilities. In agreement with [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], we call (8) the protocol equation. We note that h depends on the state x contrary to [START_REF] Heemels | Networked control systems with communication constraints: Tradeoffs between transmission intervals, delays and performance[END_REF], [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF], which will have important consequences on the stability property of the protocol equation compared to the latter references (see Remark 1 in Section IV-B).
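For illustration purposes, a minimal Python sketch of one jump of the protocol equation (8)-(9) is given below; the function names are ours and, when Υ(e, x) = 0, we arbitrarily pick the transmission outcome, which is one of the two admissible selections of Γ:

```python
import numpy as np

def jump_map(e, x, upsilon):
    """One jump of the protocol equation (8)-(9).

    upsilon(e, x) evaluates the triggering condition; gamma is a
    selection of Gamma(e, x): 1 means a transmission occurs (the
    network-induced error is reset to zero), 0 means no transmission.
    """
    gamma = 1 if upsilon(e, x) >= 0 else 0
    return (1 - gamma) * np.asarray(e)   # h(e, x) = (1 - Gamma(e, x)) e
```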
We introduce the variable τ ∈ R_{≥0} to keep track of the time elapsed since the last evaluation of the triggering criterion, which has the following dynamics
τ̇ = 1 when τ ∈ [0, T],  τ^+ = 0 when τ ∈ [ε, T].
We then model the complete system as
q̇ = F(q) for q ∈ C,  q^+ ∈ G(q) for q ∈ D,  (10)
where q := (x, e, τ),
C := R^{n_x+n_e} × [0, T],  D := R^{n_x+n_e} × [ε, T].  (11)
The mapping F in (10) is defined as, for q ∈ R^{n_x} × R^{n_e} × R_{≥0},
F(q) := (f(x, e), g(x, e), 1),  (12)
where f and g can be calculated from (3) and (4), and the set-valued mapping G is defined as
G(q) := (x, h(e, x), 0)  (13)
with h coming from (9). Our objective is to design the triggering condition Υ in (7) and to provide an explicit bound on the sampling period T to ensure asymptotic stability properties for system (10).
IV. MAIN RESULTS
In this section, we first state the assumption we make on system (10), based on which we then construct the triggering condition Υ and the bound on T. We finally present the main stability result.
A. Assumption
We assume that system (10) verifies the following properties.
Assumption 1: There exist locally Lipschitz functions V : R^{n_x} → R_{≥0} and W : R^{n_e} → R_{≥0}, α_V, ᾱ_V, α_W, ᾱ_W ∈ K_∞, and a_V, γ, L_W > 0 and L_V ≥ 0 such that the following holds.
(i) For all x ∈ R^{n_x}, α_V(|x|) ≤ V(x) ≤ ᾱ_V(|x|).
(ii) For almost all x ∈ R^{n_x} and all e ∈ R^{n_e}, ⟨∇V(x), f(x, e)⟩ ≤ −a_V V(x) + γ²W²(e).
(iii) For any e ∈ R^{n_e}, α_W(|e|) ≤ W(e) ≤ ᾱ_W(|e|).
(iv) For any x ∈ R^{n_x} and almost all e ∈ R^{n_e}, ⟨∇W(e), g(x, e)⟩ ≤ L_W W(e) + L_V V(x).
Items (i)-(iii) in Assumption 1 imply that the system ẋ = f(x, e) is input-to-state stable (ISS) with respect to e. The fact that the Lyapunov function V has an exponential decay rate in item (ii) of Assumption 1 is not restrictive, as any input-to-state stable Lyapunov function can be modified accordingly in view of [START_REF] Praly | Stabilization in spite of matched unmodelled dynamics and an equivalent definition of input-to-state stability[END_REF]. Item (iii) says that W is positive definite and radially unbounded. Item (iv) is an exponential growth condition on the e-system between two consecutive sampling instants, like in [START_REF] Postoyan | A framework for the observer design for networked control systems[END_REF], [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF]. A nonlinear physical example, which satisfies Assumption 1, is provided in Section V.
B. Triggering mechanism
We define Υ in (7) by, for any e ∈ R^{n_e} and x ∈ R^{n_x},
Υ(e, x) = W²(e) − λV(x),  (14)
where λ ≥ 0 is a design parameter. We select λ such that λ < λ* with
λ* := a_V / γ²,  (15)
where a_V and γ > 0 come from Assumption 1.
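For illustration purposes, the following Python sketch builds the triggering condition (14)-(15), assuming V and W from Assumption 1 are available as callables; the helper name and the `safety` factor are ours:

```python
def make_trigger(V, W, a_V, gamma, safety=0.9):
    """Construct the triggering condition (14) with lambda < lambda*.

    V, W: callables implementing the functions of Assumption 1.
    safety in (0, 1) places lambda strictly below lambda* = a_V / gamma**2,
    as required by (15).
    """
    lam = safety * a_V / gamma**2
    def upsilon(e, x):
        return W(e)**2 - lam * V(x)   # a transmission occurs when this is positive
    return upsilon, lam
```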
Remark 1: The definition of Υ in (14) guarantees that the corresponding protocol (8) is input-to-state stable (see Definition 5.3 in [19]). In particular, W²(e) ≥ (λ + ν)V(x) implies W(h(e, x)) = 0 for any (x, e) ∈ R^{n_x+n_e} and ν > 0.
For each λ ∈ [0, λ*), we select T in (5) such that T < T_MASP, where T_MASP is the maximum allowable sampling period defined as
T_MASP := (2 / L_W) arctanh( L_W(√a_V − γ√λ) / ((L_W√λ + 2L_V)γ + L_W√a_V) ),  (16)
where L_V ≥ 0 and L_W, a_V, γ > 0 come from Assumption 1. The numerator L_W(√a_V − γ√λ) in (16) is non-negative in view of (15).
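For completeness, here is a small Python helper (the function name is ours) that evaluates the bound (16); the argument of arctanh always lies in [0, 1) for 0 ≤ λ < λ*, so the expression is well defined:

```python
import math

def t_masp(L_W, L_V, a_V, gamma, lam):
    """Evaluate the MASP bound (16); requires 0 <= lam < a_V / gamma**2."""
    num = L_W * (math.sqrt(a_V) - gamma * math.sqrt(lam))
    den = (L_W * math.sqrt(lam) + 2 * L_V) * gamma + L_W * math.sqrt(a_V)
    return (2.0 / L_W) * math.atanh(num / den)
```

Setting lam = 0 recovers the time-triggered bound (17) below.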
The bound in (16) is decreasing in λ. In other words, the larger λ, which leads to later triggering instants as ensured by (14), the smaller T_MASP. Loosely speaking, this suggests that there is a trade-off between the sampling period T and the number of transmissions generated by the event-triggering condition: typically, fast sampling would lead to less frequent transmissions but to more computation, and vice versa.
Remark 2: When λ = 0, the triggering condition Υ is always non-negative, and consequently transmissions can occur at every sampling instant according to (7). Hence, in this case we recover the time-triggered control scenario investigated in [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF]. The bound on the MASP in (16) is then
T_MASP = (2 / L_W) arctanh( L_W√a_V / (2L_V γ + L_W√a_V) ),  (17)
which differs from the one in [START_REF] Nešić | Explicit computation of the sampling period in emulation of controllers for nonlinear sampled-data systems[END_REF] because the assumptions and the analysis are different. We cannot therefore assess whether one is more conservative than the other in general.
C. Stability guarantee
We show that Assumption 1, together with a proper selection of λ and T, ensures the stability of system (10), as formalized in Theorem 1.
Theorem 1: Consider system (10) and suppose the following hold.
1) Assumption 1 is verified.
2) λ < λ* with λ* defined in (15).
3) T < T_MASP with T_MASP defined in (16).
Then, the compact set
A := {q ∈ R^{n_x} × R^{n_e} × R_{≥0} : x = 0, e = 0, τ ∈ [0, T]}
is UGAS.
Remark 3: The stability property ensured in Theorem 1 is robust to ρ-perturbations as the attractor A is compact and system (10) satisfies the hybrid basic conditions, which implies that it is well-posed, see Chapter 7 in [START_REF] Goebel | Hybrid dynamical systems: Modeling, Stability, and Robustness[END_REF].
V. ILLUSTRATIVE EXAMPLE
We consider the following van der Pol oscillator
ẋ_1 = x_2,  ẋ_2 = (1 − x_1²)x_2 − x_1 + u,  (18)
where x_1, x_2 ∈ R, whose origin is exponentially stabilized by the controller
u = −x_2 − (1 − x_1²)x_2.
We consider the case where the sensors and the actuators are connected via a communication network: the controller is collocated with the sensors, and the control signal u is submitted to the network and received as û. Suppose zero-order-hold devices are used to implement the controller, which gives f̂_c = 0 for f̂_c in (6). Then, with e := û − u being the network-induced error (there is no need to introduce x̂_p − x_p since the controller is static), we obtain the system in (10) with
f(x, e) := (x_2, −x_1 − x_2 + e),  g(x, e) := (2 − x_1²)(−x_1 − x_2 + e) − 2x_1 x_2².  (19)
Assumption 1 is satisfied with polynomial functions V and W in (x_1, x_2), computed numerically together with γ, a_V, L_V, L_W (see footnote 1). We choose λ = 1.246 × 10⁻⁴, which gives T_MASP = 0.0105, and we take T = 0.01, which satisfies T < T_MASP. Figure 3 illustrates the convergence of the plant state to the origin. On the other hand, the convergence no longer occurs when we increase λ to 2 × 10⁻⁴ or when we increase T to 0.1 (which both violate the conditions of Theorem 1). This suggests that the bounds on T and on λ are not very conservative for this example.
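To reproduce this kind of experiment, a bare-bones forward-Euler simulation of the PETC loop can be sketched as follows; the flow maps come from (19), while `upsilon` would be built from the SOS-computed V and W (here it is passed as an input, since the explicit polynomials are not reproduced). This is a sketch under these assumptions, not the authors' simulation code:

```python
import numpy as np

# Flow maps of (10) for the van der Pol example, cf. (19); e is scalar here.
f = lambda x, e: np.array([x[1], -x[0] - x[1] + e])
g = lambda x, e: (2 - x[0]**2) * (-x[0] - x[1] + e) - 2 * x[0] * x[1]**2

def simulate_petc(upsilon, x0, T=0.01, t_end=10.0, dt=1e-4):
    """Integrate the flow with Euler steps and check the triggering
    condition (14) every T units of time (so s_{i+1} - s_i = T)."""
    x, e, t, next_sample = np.asarray(x0, float), 0.0, 0.0, T
    while t < t_end:
        x = x + dt * f(x, e)
        e = e + dt * g(x, e)
        t += dt
        if t >= next_sample:          # sampling instant s_i
            if upsilon(e, x) >= 0:    # transmission: û is refreshed, so e -> 0
                e = 0.0
            next_sample += T
    return x
```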
VI. CONCLUSIONS
We have addressed the design of periodic event-triggered controllers for a class of nonlinear systems. We have followed an emulation approach for this purpose, in the sense that we start from a given state-feedback controller which stabilizes the origin of the continuous-time plant, and we then explain how to derive a periodic event-triggered condition to preserve stability in the presence of a network. The triggering condition is of the same type as in [START_REF] Tabuada | Event-triggered real-time scheduling of stabilizing control tasks[END_REF] where continuous event-triggered control was addressed. An easily computable bound on the sampling period used to evaluate the triggering condition is provided. The analysis reveals a trade-off between the parameter of the triggering criterion and the considered sampling period.
Fig. 2: Triggering parameter λ vs T_MASP.
Fig. 3: A solution to system (10) when T = 0.01 and λ = 1.246 × 10⁻⁴.
¹ We have obtained V, W, γ, a_V, L_V, L_W using SOSTOOLS [START_REF] Valmorbida | SOSTOOLS: Sum of squares optimization toolbox for MATLAB[END_REF].
This work is supported by the Australian Research Council under the Discovery Project DP1094326, the ANR under the grant COMPACS (ANR-13-BS03-0004-02), and the Innovational Research Incentives Scheme under the VICI grant Wireless control systems: A new frontier in automation (No. 11382) awarded by NWO (The Netherlands Organisation for Scientific Research) and STW (Dutch Technology Foundation).
APPENDIX
Proof of Theorem 1. Let λ ∈ (0, λ*) with λ* defined as in (15). Let T ∈ (0, T_MASP) with T_MASP determined by (16).
We define, for any q ∈ C ∪ D,
U(q) := max{V(x), φ(τ)W²(e)},  (21)
where W and V come from Assumption 1, and φ : [0, T_MASP] → [µ̲, µ̄] is defined in the next lemma, whose proof is omitted due to space limitations.
Lemma 2: There exist µ̄ > µ̲ > 0 with µ̲* < µ̲ < µ̄ < µ̄*, where µ̲* := γ²/a_V and µ̄* := 1/λ, such that the solution φ to the differential equation (22), initialized at φ(0) = µ̄, satisfies φ(τ) ∈ [µ̲, µ̄] for all τ ∈ [0, T_MASP].
We first show that the following properties hold for system (10). There exist ν > 0 and α_U, ᾱ_U ∈ K_∞ such that: 1) U is locally Lipschitz in x, e and τ, and, for all q ∈ C ∪ D, α_U(|q|_A) ≤ U(q) ≤ ᾱ_U(|q|_A); 2) for all q ∈ C, U°(q; F(q)) ≤ −νU(q); 3) for all q ∈ D and g ∈ G(q), U(g) ≤ U(q).
It follows from Assumption 1 and the definition of φ in Lemma 2 that the Lipschitz property of U in item 1) is satisfied. In view of Lemma 2, φ(τ) ∈ [µ̲, µ̄] for all τ ∈ [0, T]. It then follows from (21) that item 1) holds with suitable α_U, ᾱ_U ∈ K_∞ built from α_V, ᾱ_V, α_W, ᾱ_W and µ̲, µ̄.
We now consider item 2). Let U_1(q) := V(x) and U_2(q) := φ(τ)W²(e) for any q ∈ C ∪ D. Let q ∈ C. We distinguish three cases according to Lemma 1.
Case 1: q ∈ C and U_1(q) > U_2(q). In this case, φ(τ)W²(e) < V(x) and U(q) = U_1(q) = V(x); since φ(τ) ≥ µ̲, this gives W²(e) < V(x)/µ̲. Consequently, in view of Lemma 1 and item (ii) of Assumption 1,
U°(q; F(q)) ≤ −a_V V(x) + γ²W²(e) ≤ −(1 − σ)a_V U(q),  (24)
where σ := γ²/(µ̲ a_V) ∈ (0, 1) in view of Lemma 2.
Case 2: q ∈ C and U_1(q) < U_2(q). In this case, U(q) = U_2(q) = φ(τ)W²(e). Hence, in view of Lemma 1,
U°(q; F(q)) = φ̇(τ)W²(e) + 2φ(τ)W(e)W°(e; g(x, e)).  (25)
We omit below the dependency of φ on τ for the sake of convenience. In view of item (iv) in Assumption 1, (22), (25) and the fact that φ(τ) ≥ µ̲ for all τ ∈ [0, T], the construction of φ yields
U°(q; F(q)) ≤ −c W²(e)  (26)
for some c > 0. Since U(q) = φ(τ)W²(e) ≤ µ̄W²(e) in this case, (26) gives that
U°(q; F(q)) ≤ −(c/µ̄) U(q).  (27)
Case 3: q ∈ C and U_1(q) = U_2(q). In view of Lemma 1, (24) and (27), we have that U°(q; F(q)) ≤ −min{(1 − σ)a_V, c/µ̄} U(q).
Combining (24) and (27) leads to item 2) with ν := min{(1 − σ)a_V, c/µ̄} for all q ∈ C. We now investigate the evolution of U at jumps, i.e., item 3). Let q ∈ D. We distinguish two cases depending on whether a transmission occurs or not. When a transmission occurs, the corresponding g ∈ G(q) is such that W²(h(e, x)) = 0 and thus, since W is positive definite in view of item (iii) of Assumption 1,
U(g) = max{V(x), φ(0)W²(h(e, x))} = V(x) ≤ U(q).  (28)
When no transmission occurs, it follows from (9) and (14) that W²(h(e, x)) = W²(e) ≤ λV(x). Since φ(0) = µ̄ and µ̄ < 1/λ according to Lemma 2, φ(0)W²(h(e, x)) ≤ µ̄λV(x) < V(x), hence
U(g) = max{V(x), φ(0)W²(h(e, x))} = V(x) ≤ U(q)  (29)
for all g ∈ G(q) satisfying h(e, x) = e. This with (28) ensures that item 3) holds for all q ∈ D. The satisfaction of items 1)-3) implies that items (i)-(iii) of Theorem 1 in [START_REF] Postoyan | A framework for the event-triggered stabilization of nonlinear systems[END_REF] hold, and item (iv) of Theorem 1 also holds by noting (5).
We then invoke Theorem 1 in [START_REF] Postoyan | A framework for the event-triggered stabilization of nonlinear systems[END_REF] and have that system (10) is uniformly globally pre-asymptotically stable (UGpAS). Note that condition (VC) of Proposition 6.10 in [2] holds for system (10). Moreover, we can exclude item (b) of Proposition 6.10 in [START_REF] Goebel | Hybrid dynamical systems: Modeling, Stability, and Robustness[END_REF] in view of items 1)-3), and item (c) of this proposition is also excluded as G(q) ⊂ C for any q ∈ D in view of (10)-(13). Then, in view of Proposition 6.10 in [START_REF] Goebel | Hybrid dynamical systems: Modeling, Stability, and Robustness[END_REF] and the fact that t_{j+1} − t_j ≥ ε holds for any (t, j) ∈ dom q, all maximal solutions q of system (10) are complete in the t direction, i.e., sup_t dom q = ∞. As a result, the set A is UGAS for system (10).
"845",
"858580",
"990341"
] | [
"32324",
"185180",
"32324",
"32881"
] |
Navid Noroozi
email: [email protected].
Romain Postoyan
email: [email protected].
Dragan Nesic
email: [email protected].
Stefan S H J Heijmans
email: [email protected].
W. P. M. H. Heemels
email: [email protected]
Stability analysis of networked control systems with direct-feedthrough terms: Part I - the nonlinear case
A popular design approach for NCSs is via the so-called emulation method, see [START_REF] Walsh | Asymptotic behavior of nonlinear networked control systems[END_REF]- [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF]. The idea is to first ignore communication constraints and design a continuous-time controller for a continuous-time plant. Then, the controller is implemented via the network and it is shown (under suitable conditions) that the closed-loop system is stable when the transmission frequency is sufficiently high, i.e. the maximum allowable transmission interval (MATI) is sufficiently small. This approach was shown to work well for a large class of systems whose scheduling protocols are uniformly globally exponentially stable (UGES) in an appropriate sense. The emulation approach enjoys considerable advantages in terms of its simplicity and applicability to a large class of nonlinear NCSs. Indeed, any continuous-time design approach can be used to obtain the controller.
Most existing emulation results on NCSs concentrate on the stabilization using dynamic controllers without directfeedthrough terms, see [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF], [START_REF] Heemels | Networked control systems with communication constraints: tradeoffs between transmission intervals, delays and performance[END_REF] for example. However, direct-feedthrough terms are essential to model controllers commonly used in the industry such as proportionalintegral(-derivative) regulators. Considering dynamic controllers with direct-feedthrough terms in general complicates the analysis as it will be shown in this paper. Some results exist in the NCSs literature when the channel is only used to ensure the communication between the sensors and the controller, or between the controller and the actuators but not both, see [START_REF] Quevedo | Networked PID control[END_REF] for instance. The main purpose of the present paper is to consider NCSs with dynamic controllers that contain direct-feedthrough inputs and for which both the sensor data and the control input are transmitted over a network. In particular, we focus on the effect of scheduling and sampling 1 . First, we extend the modelling framework developed in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]- [START_REF] Nešić | A unified framework for design and analysis of networked and quantized control systems[END_REF] to cover NCSs with direct-feedthrough terms. Then, we revisit the notion of uniformly globally exponentially stable (UGES) scheduling protocols as given in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF] in order to incorporate direct-feedthrough terms. In particular, an auxiliary system induced by the plant and the protocol turns out to be instrumental in the stability analysis of NCSs designed via emulation; this auxiliary system depends only on the protocol in the simpler case of dynamic controllers without direct-feedthrough terms, as was considered in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF], [START_REF] Carnevale | A Lyapunov proof of an improved maximum allowable transfer interval for networked control systems[END_REF], [START_REF] Heemels | Networked control systems with communication constraints: tradeoffs between transmission intervals, delays and performance[END_REF], but this is no longer true in the NCS setup considered here.
We investigate two cases. In the first case, we assume that all inputs and outputs are sent over one serial communication channel and that some nodes may contain both inputs and outputs. Using a Lyapunov based approach, we show that the auxiliary system induced by the plant and the protocol is UGES for Round-Robin (RR) and Try-Once-Discard (TOD) protocols. In the second case, we assume that there are two dedicated channels that are respectively used to send the outputs and the inputs over the network. In this case, the analysis is greatly simplified since the auxiliary system induced by the plant and the protocol is a cascaded system. Once UGES of the auxiliary system induced by the plant and the protocol is established, we can use stability results of [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]- [START_REF] Nešić | A unified framework for design and analysis of networked and quantized control systems[END_REF] to conclude stability of NCSs.
In the companion paper [START_REF] Heijmans | Stability analysis of networked control systems with directfeedthrough terms: Part II -The linear case[END_REF], we address the scenario where the plant and the controller dynamics are linear and both contain direct-feedthrough terms, which is a source of additional difficulties, while here we investigate nonlinear systems but only the controller (or the plant) has directfeedthrough terms.
The remainder of this paper is organized as follows. Section II provides the used notation. The problem is stated and the model formulation is developed in Section III. The main results are given in Sections IV-V. Section VI provides the concluding remarks. All the proofs of our results are omitted due to space constraints.
II. NOTATION
Let R := (−∞, +∞), R_{≥0} := [0, +∞), R_{>0} := (0, +∞), Z_{≥0} := {0, 1, 2, . . .}, Z_{≤0} := {. . . , −2, −1, 0} and Z_{>0} := {1, 2, . . .}. The Euclidean norm of a vector x ∈ R^n and its 1-norm are respectively denoted |x| and |x|_1. We denote the identity matrix of dimension n ∈ Z_{>0} by I_n. A function f : Z_{≥0} → R^n belongs to ℓ_∞ if sup{|f(i)| : i ∈ Z_{≥0}} is bounded. By convention, for any m ∈ Z_{≤0}, Σ_{i=1}^{m} w(i) = 0 where w : Z_{>0} → R. The function θ : Z → {0, 1} is defined by θ(m) := 1 when m ∈ Z_{>0} and θ(m) := 0 when m ∈ Z_{≤0}. We write (x, y) to represent [x^T, y^T]^T for any (x, y) ∈ R^n × R^m.
III. PROBLEM STATEMENT
Consider the nonlinear plant model
ẋ_p = f_p(x_p, u),  y = g_p(x_p),  (1)
where x_p ∈ R^{n_p} is the plant state, u ∈ R^{n_u} is the control input and y ∈ R^{n_y} is the plant output. We follow the emulation approach to design the controller. Hence, we assume that we know a continuous-time controller which stabilizes the origin of system (1) in the absence of the network. We focus on dynamic controllers of the form
ẋ_c = f_c(x_c, y),  u = g_c(x_c, y),  (2)
where x_c ∈ R^{n_c} is the controller state. In contrast with e.g.
[7]- [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF], the controller output map g c depends on the plant output y. This direct-feedthrough term appears for standard dynamic controllers such as the popular linear proportionalintegral(-derivative) controllers. This term prevents the application of the results in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]- [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF] because it changes the protocol equation as we will see later.
Remark 1: The forthcoming results apply mutatis mutandis to the case where the plant (1) has direct-feedthrough terms but not the controller (2). The presence of direct-feedthrough terms in both (1) and (2) leads to an algebraic constraint, which requires particular care as shown in [START_REF] Heijmans | Stability analysis of networked control systems with directfeedthrough terms: Part II -The linear case[END_REF] where linear NCSs are studied.
Remark 2: Existing results as in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]- [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF] allow for the controller output map g c to depend on y when the controller is directly connected to the plant, see Section IX in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]. When this is not the case, substantial differences arise, which prevent to apply the results of [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]- [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF].
We consider the scenario where the plant and the controller are connected via a digital network that is composed of ℓ ∈ Z_{>0} nodes. A node corresponds to a collection of sensors and/or actuators. Let ℓ_y ∈ Z_{≥0} denote the number of nodes which are not associated to any actuators. Similarly, let ℓ_u ∈ Z_{≥0} denote the number of nodes which are not associated to any sensors. Hence ℓ_u + ℓ_y ≤ ℓ (there is no equality in general as a node may be associated to both sensors and actuators).
The network generates various constraints on the communication of both u and y. In this paper, we concentrate on the effects due to sampling and scheduling. Transmissions occur only at some given time instants t_j, j ∈ Z_{≥0}, such that υ ≤ t_{j+1} − t_j ≤ τ_MATI, where υ ∈ (0, τ_MATI] and τ_MATI respectively represent the minimum and the maximum time between any two transmission instants, and υ can be arbitrarily small. Furthermore, at each transmission instant, a single node is granted access to the network. This selection is done by the scheduling protocol. As in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], the overall system can be modelled by the following impulsive system
ẋ_p = f_p(x_p, û),  t ∈ [t_j, t_{j+1}]
y = g_p(x_p)
ẋ_c = f_c(x_c, ŷ),  t ∈ [t_j, t_{j+1}]
u = g_c(x_c, ŷ)
d/dt ŷ = f̂_p(x_p, x_c, ŷ, û),  t ∈ [t_j, t_{j+1}]
d/dt û = f̂_c(x_p, x_c, ŷ, û),  t ∈ [t_j, t_{j+1}]
ŷ(t_j^+) = y(t_j) + h_y(j, e(t_j))
û(t_j^+) = u(t_j) + h_u(j, e(t_j))  (3)
where x := (x_p, x_c) ∈ R^{n_x}, and û ∈ R^{n_u} and ŷ ∈ R^{n_y} are, respectively, the vector of the most recently transmitted controller output values and the vector of the most recently transmitted plant output values. These two variables are generated by the holding functions f̂_p and f̂_c between two successive transmission instants. The use of zero-order-hold devices leads to f̂_p = 0 and f̂_c = 0 for instance. The functions h_y and h_u model the network protocol that can, for instance, be RR, TOD, or any other protocol discussed in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF] and [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF]. In addition, e := (e_y, e_u) ∈ R^{n_e} denotes the network-induced error, where e_y := ŷ − y ∈ R^{n_y} and e_u := û − u ∈ R^{n_u}. It is more convenient to rewrite (3) as
ẋ = f(x, e)  (4a)
ė = g(x, e)  (4b)
e(t_j^+) = h(j, e(t_j), x(t_j))  (4c)
where f and g are assumed to be continuously differentiable and are obtained by direct calculations from (3) (cf. [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF] for more details). Contrary to (14) in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], the function h not only depends on (j, e) but also on x in (4c). As seen below, this extension comes from the fact that g_c in (3) depends on ŷ. Let us illustrate this point through the RR protocol, which periodically grants access to each node. At any transmission instant t_j, j ∈ Z_{≥0}, we obtain, according to (3),
e_y(t_j^+) = ŷ(t_j^+) − y(t_j^+) = y(t_j) + h_y(j, e(t_j)) − y(t_j) = h_y(j, e(t_j))
e_u(t_j^+) = û(t_j^+) − u(t_j^+) = u(t_j) + h_u(j, e(t_j)) − u(t_j^+)
  = h_u(j, e(t_j)) + g_c(x_c(t_j), ŷ(t_j)) − g_c(x_c(t_j^+), ŷ(t_j^+))
  = h_u(j, e(t_j)) + g_c(x_c(t_j), y(t_j) + e_y(t_j)) − g_c(x_c(t_j), y(t_j) + h_y(j, e(t_j)))  (5)
where h_y(j, e) = (I_{n_y} − ∆_y(j))e_y and h_u(j, e) = (I_{n_u} − ∆_u(j))e_u with
∆_y(j) := diag(δ_{y1}(j)I_{n_{y1}}, . . . , δ_{yℓ̄_y}(j)I_{n_{yℓ̄_y}})  (6)
∆_u(j) := diag(δ_{u1}(j)I_{n_{u1}}, . . . , δ_{uℓ̄_u}(j)I_{n_{uℓ̄_u}}).  (7)
In (6)-(7), ℓ̄_y ∈ Z_{≥0} and ℓ̄_u ∈ Z_{≥0} are respectively the number of nodes associated to at least one sensor and to at least one actuator (hence ℓ̄_y ≥ ℓ_y, ℓ̄_u ≥ ℓ_u and ℓ̄_y + ℓ̄_u ≥ ℓ), e_y = (e_{y1}, . . . , e_{yℓ̄_y}) and e_u = (e_{u1}, . . . , e_{uℓ̄_u}) (after reordering, if needed), e_{yi} ∈ R^{n_{yi}} for i ∈ {1, . . . , ℓ̄_y}, and e_{ui} ∈ R^{n_{ui}} for i ∈ {1, . . . , ℓ̄_u}, so that n_e = Σ_{i∈{1,...,ℓ̄_y}} n_{yi} + Σ_{i∈{1,...,ℓ̄_u}} n_{ui}. To define the δ terms in (6)-(7), we need to number the nodes. We use for that purpose the mapping π such that π(y_i) is the number of the node associated to y_i, for i ∈ {1, . . . , ℓ̄_y}, and π(u_i) is the number of the node associated to u_i, for i ∈ {1, . . . , ℓ̄_u}. We consider, for i ∈ {1, . . . , ℓ̄_y},
δ_{yi}(j) := 1 when j = π(y_i) − 1 + kℓ, k ∈ Z_{≥0}, and δ_{yi}(j) := 0 otherwise,  (8)
and we similarly define δ_{ui} for i ∈ {1, . . . , ℓ̄_u}.
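As a quick illustration of the δ functions in (8), the following Python sketch performs one Round-Robin jump over the node partition of e; it only captures which node is reset, while the additional g_c correction appearing in (9b) for nodes carrying inputs is omitted (the function name is ours):

```python
import numpy as np

def rr_jump(j, e_nodes):
    """One Round-Robin transmission: node (j mod ell) is granted network
    access and its local network-induced error is reset to zero."""
    ell = len(e_nodes)                 # number of nodes
    granted = j % ell                  # delta_i(j) = 1 iff node i - 1 = j mod ell
    return [np.zeros_like(ek) if k == granted else np.asarray(ek)
            for k, ek in enumerate(e_nodes)]
```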
The dynamics of e_u at each transmission involve y, and thus x, because of the term g_c(x_c, y + e_y) − g_c(x_c, y + h_y(j, e)) in (5). This is due to the direct-feedthrough term in (2), which implies that u(t_j^+) ≠ u(t_j) in general². On the other hand, the dynamics of e_y at each transmission are the same as in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF] due to the absence of direct-feedthrough terms in (1). To be more precise, for the RR protocol,
e_y(t_j^+) = (I_{n_y} − ∆_y(j))e_y(t_j)  (9a)
e_u(t_j^+) = (I_{n_u} − ∆_u(j))e_u(t_j) + (I_{n_u} − ∆_u(j))[g_c(x_c(t_j), y(t_j) + e_y(t_j)) − g_c(x_c(t_j), y(t_j) + (I_{n_y} − ∆_y(j))e_y(t_j))]  (9b)
where the product I_{n_u} − ∆_u(j), which multiplies g_c(x_c, y + e_y) − g_c(x_c, y + (I_{n_y} − ∆_y)e_y), is required to accommodate transmissions in which a node corresponding to a collection of both sensors and actuators is granted access to the network.
² When the controller is such that u = g_c(x_c) (as in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]), u(t_j^+) = u(t_j) and we recover the protocol equation studied in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF].
As a result, it is not clear that the stability properties proved for the RR protocol in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF] are preserved in this case. Going back to the general case, the fact that h depends on both the network protocol model, as well as the plant model is in stark contrast with [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF], and introduces significant technical difficulties in the analysis and is the topic of this paper. We will consider in this context commonly used protocols, such as RR, TOD 3 protocols and we will also present results for any other protocols discussed in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF], [START_REF] Nešić | A unified framework for design and analysis of networked and quantized control systems[END_REF] in the two channel case. In particular, we focus on the following stability definition for system (4c) which extends Definition 7 in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF].
Definition 1: The discrete-time system e(j + 1) = h(j, e(j), x(j)) is UGES with a Lyapunov functional W : Z_{≥0} × R^{n_e} × ℓ_∞ → R_{≥0} if there exist a_1, a_2 ∈ R_{>0} and ρ ∈ [0, 1) such that for all j ∈ Z_{≥0}, e ∈ R^{n_e}, x ∈ ℓ_∞ the following holds
a_1|e| ≤ W(j, e, x) ≤ a_2|e|  (10)
W(j + 1, h(j, e, x), x) ≤ ρW(j, e, x).  (11)
The UGES property is required to combine our results with those developed in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]- [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF] to guarantee that (4) is uniformly globally asymptotically stable (UGAS). Particularly, in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF]- [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF], conditions on (4) are provided to ensure asymptotic properties of the origin when the mapping h only depends on j and e. The general idea is to assume that (i) the controller ( 2) is such that the system ẋ = f (x, e) satisfies a robust asymptotic stability property with respect to e, (ii) the system ė = g(x, e) satisfies an exponential growth condition, (iii) the origin of the system e + = h(j, e) is UGES or UGAS with a Lyapunov function(al) W , which is locally Lipschitz in e. Item (iii) is shown to be verified by RR and TOD protocols in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], and other examples are provided in [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF], [START_REF] Nešić | A unified framework for design and analysis of networked and quantized control systems[END_REF], [START_REF] Postoyan | Tracking control for nonlinear networked control systems[END_REF].
Then, by selecting τ MATI sufficiently small, the stability of the overall system can be guaranteed. We can apply the same approach to analyse the stability of system (4). However, it is no longer clear that the e-system at jumps is still UGES (or UGAS) when the mapping h depends on x for standard protocols such as RR and TOD. More generally, if the system e + = h(j, e, 0) is UGES (or UGAS), it may be the case that this property is lost when x = 0, which arises when the controller (or the plant) has direct-feedthrough terms. The objective of this paper is to provide conditions under which, if the origin of system e + = h(j, e, 0) is UGES (or UGAS), this property is preserved when x = 0.
IV. ONE CHANNEL CASE
In this section, we consider the case of networks in which all nodes are transmitted over one channel and any given node may contain both inputs and outputs. In particular, we provide conditions for system (4c) to be UGES. While results of this paper are applicable to a large class of protocols, for illustration purposes we will concentrate on a special form of protocols including two particular examples that have been studied in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF].
A. UGES of system (4c)
Consider the case where system (4c) can be written as
e_y(t_j^+) = (I_{n_y} − Ψ_y(e(t_j), j))e_y(t_j)  (12a)
e_u(t_j^+) = (I_{n_u} − Ψ_u(e(t_j), j))e_u(t_j) + (I_{n_u} − Ψ_u(e(t_j), j))[g_c(x_c(t_j), y(t_j) + e_y(t_j)) − g_c(x_c(t_j), y(t_j) + (I_{n_y} − Ψ_y(e(t_j), j))e_y(t_j))],  (12b)
where the protocol is fully defined by the functions
Ψ_y(e, j) := diag(ψ_{y1}(e, j)I_{n_{y1}}, . . . , ψ_{yℓ̄_y}(e, j)I_{n_{yℓ̄_y}})  (13)
Ψ_u(e, j) := diag(ψ_{u1}(e, j)I_{n_{u1}}, . . . , ψ_{uℓ̄_u}(e, j)I_{n_{uℓ̄_u}}),  (14)
where ψ_{yi} and ψ_{ui} are mappings from R^{n_e} × Z_{≥0} to {0, 1}. Equations (12a)-(12b) respectively describe the update of e_y and e_u at each transmission. Model (12) encompasses the RR and TOD protocols as special cases.
Consider the following auxiliary discrete-time system induced by the plant and the protocol, for j ∈ Z_{≥0},
e_y(j + 1) = (I_{n_y} − Ψ_y(e, j))e_y(j)  (15a)
e_u(j + 1) = (I_{n_u} − Ψ_u(e, j))e_u(j) + (I_{n_u} − Ψ_u(e, j))[g_c(x_c(j), y(j) + e_y(j)) − g_c(x_c(j), y(j) + (I_{n_y} − Ψ_y(e, j))e_y(j))].  (15b)
We refer to system (15) as the auxiliary discrete-time system induced by the plant and the protocol (12). For any initial time j_0 ∈ Z_{≥0}, any initial condition e_0 ∈ R^{n_e} and any input x ∈ ℓ_∞, φ(•, j_0, e_0, x) denotes the corresponding solution to (15). As e = (e_y, e_u), we partition the solution φ into φ := (φ_y, φ_u) whenever it is convenient. We use the following assumption.
Assumption 1: There exists L_g > 0 such that |g_c(x_c, y_1) − g_c(x_c, y_2)|_1 ≤ L_g|y_1 − y_2|_1 for all x_c ∈ R^{n_c} and all y_1, y_2 ∈ R^{n_y}.
Assumption 1 means that g_c is globally Lipschitz in its second argument, uniformly in x_c. This condition is always verified by linear systems for instance. The next assumption is that the solutions to (15) converge in (uniform) finite time to the origin. Examples of protocols that satisfy Assumption 2 are the RR and the TOD protocols, which is the purpose of Sections IV-B and IV-C.
Assumption 2: There exists j̄ ∈ Z_{≥0} such that, for any e_0 ∈ R^{n_e}, any x ∈ ℓ_∞ and any j_0 ∈ Z_{≥0}, φ(j, j_0, e_0, x) = 0 for all j − j_0 ≥ j̄.
When Assumptions 1-2 hold, we can construct a Lyapunov function to prove that system (15) is UGES, using a similar construction as in the proof of Theorem 1 in [START_REF] Jiang | A converse Lyapunov theorem for discretetime systems with disturbances[END_REF].
Lemma 1: Let Assumptions 1 and 2 hold. Also, let the functional W : Z_{≥0} × R^{n_e} × ℓ_∞ → R_{≥0} be given by
W(j, e, x) = ( Σ_{k=j}^{+∞} |φ(k, j, e, x)|² )^{1/2}  (16)
for any j ∈ Z_{≥0}, e ∈ R^{n_e} and x ∈ ℓ_∞. Then the auxiliary discrete-time system induced by the protocol (12) is UGES. In particular, W satisfies Definition 1 with a_1 = 1, a_2 = √M and ρ = √((M − 1)/M), with
M := (((1 + L_g)√n_e)^{j̄+1} − 1) / ((1 + L_g)√n_e − 1)
and j̄ coming from Assumption 2.
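Since Assumption 2 makes the sum in (16) finite, the functional can be evaluated numerically by simply iterating the auxiliary system, as in the following Python sketch (the function signature is ours; `step` implements one jump of (15) for the protocol at hand):

```python
import numpy as np

def lyapunov_W(j0, e0, x_seq, step, j_bar):
    """Evaluate the functional (16): the sum stops after j_bar steps
    because, by Assumption 2, phi(j, j0, e0, x) = 0 for j - j0 >= j_bar."""
    e, total = np.asarray(e0, float), 0.0
    for j in range(j0, j0 + j_bar):
        total += float(e @ e)          # |phi(j, j0, e0, x)|^2
        e = np.asarray(step(j, e, x_seq[j]))
    return np.sqrt(total)
```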
It follows from Lemma 1 that protocol (15) is dead-beat stable, which is a stronger property than UGES. However, the UGES property is sufficient for our purposes.
Remark 3: Suppose that the system
e_y(j + 1) = (I_{n_y} − Ψ_y(e, j))e_y(j),  e_u(j + 1) = (I_{n_u} − Ψ_u(e, j))e_u(j)  (17)
is UGES. Then, without invoking Assumption 2, one can easily show that there exists a sufficiently small L_g > 0 such that system (15) is UGES as well. However, such a condition is not needed for the RR and TOD protocols, as we show in the remaining part of this section.
B. Round-Robin Protocol
For the ease of reference, we rewrite the RR protocol model (9):
e_y(j + 1) = (I_{n_y} − ∆_y(j))e_y(j)  (18a)
e_u(j + 1) = (I_{n_u} − ∆_u(j))e_u(j) + (I_{n_u} − ∆_u(j))[g_c(x_c(j), y(j) + e_y(j)) − g_c(x_c(j), y(j) + (I_{n_y} − ∆_y(j))e_y(j))]  (18b)
where ∆_y(j) := diag(δ_{y1}(j)I_{n_{y1}}, . . . , δ_{yℓ̄_y}(j)I_{n_{yℓ̄_y}}), ∆_u(j) := diag(δ_{u1}(j)I_{n_{u1}}, . . . , δ_{uℓ̄_u}(j)I_{n_{uℓ̄_u}}) and the functions δ_{ui} and δ_{yi} are defined in (8). It should be pointed out that, here, we replace the notation Ψ_i (and ψ_i, respectively) with ∆_i (and δ_i, respectively) to be as consistent as possible with the NCS context.
To establish UGES for system (18), we only need to show that system (18) satisfies Assumption 2 in view of Lemma 1.
Proposition 1: Under Assumption 1, system (18) verifies Assumption 2 with j̄ = 2ℓ, where ℓ is the number of nodes.
In [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], where no direct-feedthrough term is considered, the RR protocol is dead-beat stable in ℓ steps. Here, the maximal number of steps to reach 0 has doubled, which will lead to a smaller MATI bound in view of [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], [START_REF] Carnevale | A Lyapunov proof of an improved maximum allowable transfer interval for networked control systems[END_REF]. A similar observation applies in Section IV-C. The following lemma concludes this subsection and is a direct consequence of Proposition 1 and Lemma 1.
Lemma 2: Under Assumption 1, system (18) is UGES with the Lyapunov functional (16) and
M = (((1 + L_g)√n_e)^{2ℓ+1} − 1) / ((1 + L_g)√n_e − 1).
Moreover, the Lyapunov functional (16) is locally Lipschitz in e, uniformly in j and x.
C. Try-Once-Discard Protocol
The TOD protocol grants access to the node whose network-induced error is the largest. This gives
e_y(j + 1) = (I_{n_y} − Ψ_y(e(j)))e_y(j)  (19a)
e_u(j + 1) = (I_{n_u} − Ψ_u(e(j)))e_u(j) + (I_{n_u} − Ψ_u(e(j)))[g_c(x_c(j), y(j) + e_y(j)) − g_c(x_c(j), y(j) + (I_{n_y} − Ψ_y(e(j)))e_y(j))],  (19b)
where
Ψ_y(e) := diag(ψ_{y1}(e)I_{n_{y1}}, . . . , ψ_{yℓ̄_y}(e)I_{n_{yℓ̄_y}})  (20)
Ψ_u(e) := diag(ψ_{u1}(e)I_{n_{u1}}, . . . , ψ_{uℓ̄_u}(e)I_{n_{uℓ̄_u}})  (21)
with, for each node i ∈ {1, . . . , ℓ},
ψ_i(e) := 1 when i = min(arg max_{k∈{1,...,ℓ}} |e_k|), and ψ_i(e) := 0 otherwise,  (22)
and e = (e_1, . . . , e_ℓ) is the partition of e according to the nodes (after reordering, if needed). With the same arguments as those for the RR protocol, the difference with [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF] is the term g_c(x_c, y + e_y) − g_c(x_c, y + (I_{n_y} − Ψ_y(e))e_y) ≠ 0 on the right-hand side of (19b).
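The TOD selection rule (22) is straightforward to implement; the following Python sketch (function name ours) returns the index of the node granted access, with ties broken by the smallest index as in (22):

```python
import numpy as np

def tod_grant(e_nodes):
    """TOD scheduling: grant access to the node whose local
    network-induced error has the largest norm, cf. (22)."""
    norms = [np.linalg.norm(ek) for ek in e_nodes]
    return int(np.argmax(norms))   # argmax returns the first (smallest-index) maximiser
```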
The next proposition shows that Assumption 2 is verified by system (19).
Proposition 2: Under Assumption 1, system (19) satisfies Assumption 2 with⁴
j̄ = ℓ̃ := ℓ − 1 + (ℓ − ℓ_u)(ℓ_u + 1) + θ(ℓ − ℓ_u − 2)(ℓ − ℓ_u − 1)(ℓ − ℓ_u − 2)/2.
We conclude this subsection with the following result.
⁴ The function θ is defined in Section II.
Lemma 3: Under Assumption 1, system (19) is UGES with the Lyapunov functional (16) with
M = (((1 + L_g)√n_e)^{ℓ̃+1} − 1) / ((1 + L_g)√n_e − 1),
with ℓ̃ defined in Proposition 2.
Remark 4: We have not been able to prove that the Lyapunov functional in Lemma 3 obtained from Lemma 1 is locally Lipschitz in e, uniformly in j and x. We will work on this point in future work.
V. TWO CHANNEL CASE
In this section, we consider NCSs, which have two channels that are respectively dedicated to transmission of y and u signals. Hence, all nodes in e y are sent over one channel and all nodes in e u are sent via a different channel. In other words, there are no nodes in which both inputs and outputs are included and using our notation from the previous sections we have that
ℓ = ℓ_y + ℓ_u.  (23)
Each channel has its own protocol that governs transmissions of inputs only or outputs only. With this stronger assumption, we show in this section that it is possible to obtain different results via proofs and constructions that are simpler than in the general case that was considered in the previous section. Hence, a slightly higher cost of implementation (i.e. two channels rather than one channel) simplifies the controller design and the subsequent analysis significantly⁵. Under our assumptions, the central point in the analysis is an auxiliary system of the form
e_y(j + 1) = (I_{n_y} − Ψ_y(e_y, j))e_y(j) =: h_y(j, e_y)  (24a)
e_u(j + 1) = (I_{n_u} − Ψ_u(e_u, j))e_u(j) + (I_{n_u} − Ψ_u(e_u, j))[g_c(x_c(j), y(j) + e_y(j)) − g_c(x_c(j), y(j) + (I_{n_y} − Ψ_y(e_y, j))e_y(j))],  (24b)
where the first term of (24b) is denoted h_u(j, e_u).
Note the difference with (15): Ψ_y(e_y, j) and Ψ_u(e_u, j) respectively depend on e_y and e_u rather than on e, as is the case in (15). This simplifying but reasonable assumption allows us to consider the system (24) as a cascade consisting of the e_y subsystem and the e_u subsystem with inputs e_y and x. The main result of this section is given next.
Lemma 4: Suppose that the following conditions hold.
1) System (24a) is UGES with a Lyapunov function⁶ W_y : Z_{≥0} × R^{n_y} → R_{≥0}, which is locally Lipschitz in its second argument, uniformly in the first one.
2) System (24b) with x = 0, e_y = 0 is UGES with a Lyapunov function W_u : Z_{≥0} × R^{n_u} → R_{≥0}, which is globally Lipschitz in its second argument, uniformly in the first one.
3) Assumption 1 holds.
Then, there exists d > 0 such that system (24) is UGES with the Lyapunov function W(j, e) := W_y(j, e_y) + dW_u(j, e_u), where j ∈ Z_{≥0} and e ∈ R^{n_e}, which is locally Lipschitz in e, uniformly in j.
The advantage of the setup in this section is a simplified analysis that allows us to use known results on UGES protocols to combine them. Indeed, we note that items 1) and 2) of Lemma 4 were shown to hold in [START_REF] Nešić | Input-output stability properties of networked control systems[END_REF], [START_REF] Nešić | A unified framework for design and analysis of networked and quantized control systems[END_REF] for a large class of commonly used protocols, such as the RR and TOD protocols. Hence, results of this section apply to a range of situations when either RR or TOD protocols are used for the output and input channels. Note that we do not have to use the same protocol for the output and input channels. For instance, we can use the RR protocol for the output channel and the TOD protocol for the input channel, or vice versa.
Remark 5: A notion of UGAS protocols was introduced in [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF]. We can rephrase Lemma 4 so that instead of UGES protocols, we use UGAS protocols for each subsystem. Since the system (24) is a cascade system, we can use results in [START_REF] Nešić | Changing supply functions in input-to-state stable systems: the discrete-time case[END_REF] to construct a UGAS Lyapunov function for the system (24) using UGAS Lyapunov functions for subsystems. In this case, we would need to appeal to stability results in [START_REF] Nešić | Input-to-state stability of networked control systems[END_REF] to conclude stability of the overall NCSs.
Remark 6: The simplified analysis of this section can be applied also to situations when we use one channel for all inputs and outputs, but the protocol is such that it yields a cascade system in (12), so that the auxiliary system takes the form in (24). Note that in this case it is necessary that no node contains both inputs and outputs, i.e. ℓ = ℓ_u + ℓ_y. This may give rise to new classes of protocols. For instance, suppose that we use a periodically time-varying protocol that on different time intervals acts as TOD for all output nodes and then as TOD for all input nodes. This idea can be used to generate a range of novel protocols that fit within our analysis framework. This flexibility is very useful since different protocols perform differently when applied to a given plant.
VI. CONCLUSIONS
We have extended the emulation-based controller design framework for NCSs to systems controlled with dynamic controllers that contain direct-feedthrough terms. The existing stability analysis applies to this case once the stability of the auxiliary system induced by the protocol and the plant is established. We have provided several results that can be used in the case of a single channel or two channels setups.
We foresee that our results can be extended under minor changes to the case where the transmission delays are not negligible as well as in presence of quantization errors, in view of[START_REF] Nešić | A unified framework for design and analysis of networked and quantized control systems[END_REF],[START_REF] Heemels | Networked control systems with communication constraints: tradeoffs between transmission intervals, delays and performance[END_REF].
TOD consists in granting access to the node, which has the largest local network-induced error; a mathematical definition is given in Section IV-C.
For simplicity, we assume here that transmissions in both channels occur at the same time and we use the same MATI for both channels. However, it is possible to further generalise these results to the case when the two channels are not synchronised in this manner and different MATIs are used, see[START_REF] Abdelrahim | Robust event-triggered output feedback controllers for nonlinear systems[END_REF] for example.
Wy is a function (and not a functional as in Section IV-A, since it only depends on j and ey, and not on the input x). The same comment applies to Wu.
The work of R. Postoyan was partially supported by the ANR under the grant COMPACS (ANR-13-BS03-0004-02). The work of D. Nešić was supported by the Australian Research Council under the Discovery Project DP1094326. The work of S. Heijmans and M. Heemels is supported by the Innovational Research Incentives Scheme under the VICI grant Wireless control systems: A new frontier in automation (No. 11382) awarded by STW (Dutch Science Foundation) and NWO (The Netherlands Organization for Scientific Research).
"845",
"858580",
"990341"
] | [
"98761",
"185180",
"32324",
"32881",
"32881"
] |
Mohamed-Amine Baazizi
email: [email protected]
Houssem Ben Lahmar
email: [email protected]
Dario Colazzo
email: [email protected]
Giorgio Ghelli
email: [email protected]
Carlo Sartiani
email: [email protected]
Schema Inference for Massive JSON Datasets
CCS Concepts: • Information systems → Semi-structured data; Data model extensions; • Theory of computation → Type theory; Logic
Keywords: JSON, schema inference
Recent years have seen the widespread use of JSON as a data format to represent massive data collections. JSON data collections are usually schemaless. While this ensures several advantages, the absence of schema information has important negative consequences: the correctness of complex queries and programs cannot be statically checked, users cannot rely on schema information to quickly figure out structural properties that could speed up the formulation of correct queries, and many schema-based optimizations are not possible.
In this paper we deal with the problem of inferring a schema from massive JSON data sets. We first identify a JSON type language which is simple and, at the same time, expressive enough to capture irregularities and to give complete structural information about input data. We then present our main contribution, which is the design of a schema inference algorithm, its theoretical study and its implementation based on Spark, enabling reasonable schema inference time for massive collections. Finally, we report about an experimental analysis showing the effectiveness of our approach in terms of execution time, precision and conciseness of inferred schemas, and scalability.
INTRODUCTION
Big Data applications typically process and analyze very large structured and semi-structured datasets. In many of these applications, and in those relying on NoSQL document stores in particular, data are represented in JSON, a data format that is widely used thanks to its flexibility and simplicity.
JSON data collections are usually schemaless. This ensures several advantages: in particular it enables modern applications to quickly consume huge amounts of semi-structured data without waiting for a schema to be specified. Unfortunately, the lack of a schema makes it impossible to statically detect unexpected or unwanted behaviors of complex queries and programs (i.e., lack of correctness), users cannot rely on schema information to quickly figure out structural properties that could speed up the formulation of correct queries, and many schema-based optimizations are not possible.
In this paper we deal with the problem of inferring a schema from massive JSON datasets. Our main goal in this work is to infer structural properties of JSON data, that is, a description of the structure of JSON objects and arrays that takes into account nested values and the presence of optional values. These are the main properties that characterize semi-structured data, and having a tool that ensures fast, precise, and concise inference is crucial in modern applications characterized by agile consumption of huge amounts of data coming from multiple and disparate sources.
The approach we propose here is based on a JSON schema language able to capture structural irregularities and complete structural information about input data. This language resembles and borrows mechanisms from existing proposals [START_REF]Pezoa | Foundations of json schema[END_REF], but it has the advantage of being simple yet very expressive.
The proposed technique infers a schema that provides a global description of the whole input JSON dataset, while having a size that is small enough to enable a user to consult it in a reasonable amount of time, in order to get a global knowledge of the structural and type properties of the JSON collection. The description of the input JSON collection is global in the sense that each path that can be traversed in the tree-structure of each input JSON value can be traversed in the inferred schema as well. This property is crucial to enable a series of query optimization tasks. For instance, thanks to this property JSON queries [1,[START_REF] Beyer | Jaql: A scripting language for large scale semistructured data analysis[END_REF] can be optimized at compile-time by means of schema-based path rewriting and wildcard expansion [START_REF] Mchugh | Query optimization for xml[END_REF] or projection [START_REF] Benzaken | Type-based xml projection[END_REF]. These optimizations are not possible if the schema hides some of the structural properties of the data, as happens in related approaches [START_REF] Wang | Schema management for document stores[END_REF].
At the same time, our inferred schemas precisely capture the presence of optional and mandatory fields in collections of JSON records. Thanks to our approach, the user has precise knowledge about i) all possible fields of records, ii) optional ones, and iii) mandatory ones. Property i) is crucial, as thanks to it the user can avoid time consuming, error-prone (approximated) data explorations to realize what fields can really be selected, while property ii) guides the user towards the adoption of code to handle the optional presence of certain fields; property iii), finally, indicates fields that can always be selected for each record in the collection.
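To make the role of optional and mandatory fields concrete, the following toy Python sketch fuses the types of two records; it is a drastically simplified illustration of the idea (the paper's type language also handles arrays, repetition and richer unions), and all function names are ours:

```python
def infer(value):
    """Map a JSON value to a simple structural type."""
    if isinstance(value, dict):
        return {"record": {k: infer(v) for k, v in value.items()}}
    return {"type": type(value).__name__}

def fuse(t1, t2):
    """Fuse two types: fields present in both records stay mandatory,
    the others become optional."""
    if "record" in t1 and "record" in t2:
        f1, f2 = t1["record"], t2["record"]
        fused = {}
        for k in f1.keys() | f2.keys():
            if k in f1 and k in f2:
                fused[k] = fuse(f1[k], f2[k])
            else:
                fused[k] = {"optional": f1.get(k) or f2.get(k)}
        return {"record": fused}
    return t1 if t1 == t2 else {"union": [t1, t2]}

r1 = {"id": 1, "name": "a"}
r2 = {"id": 2, "tags": ["x", "y"]}
print(fuse(infer(r1), infer(r2)))
# 'id' stays mandatory; 'name' and 'tags' come out as optional fields
```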
A precise schema, like the one that can be inferred by our approach, can be very useful when very large datasets must be analyzed or queried with main-memory tools: indeed, by identifying the data requirements of a query or a program through a simple static analysis technique, it is possible to match these requirements with the schema in order to load in main memory only those fragments of the input dataset that are actually needed, hence improving both scalability and performance.
It is worth stressing that, even if in some cases JSON data feature a rather regular structure, the only alternative way for the user to be sure that all possible (optional) fields are identified is to explore the entire dataset either manually or by means of scripts that must be manually adapted to each particular JSON source, with weak guarantees of efficiency and soundness. Our approach instead applies to any JSON data collection, and is shown to be sound and effective on massive datasets. In addition, it is worth observing that, while in many cases processed JSON data come from remote, uncontrolled sources, in other particular cases JSON data are generated by applications whose code is known. In these cases more knowledge is available about the structure of the program output, but again schema inference is important as it can highlight subtle structural properties that can arise only in outputs of some particular program runs; also, when the code starts being complex, it is difficult to precisely figure out the structure of output JSON data. In some other cases, remote JSON sources can be accessed by APIs (e.g., Twitter APIs) that sometimes are provided with some schema descriptions. Unfortunately, these descriptions are often incomplete, often some fields are ignored, and the distinction between optional and mandatory fields is often omitted.
Our Contribution. Our main contribution is the design of a schema inference algorithm and its implementation based on Spark, in order to ensure reasonable schema inference time for massive collections. Our schema inference approach consists of two main steps. In the first one, an input collection of JSON values is processed by a Map transformation in order to infer a simple type for each value. The resulting output is processed by a Reduce action, which fuses inferred types that are not necessarily identical, but that share similar structure. This step relies on a binary function that takes two JSON types as input and fuses them. This function inspects the two input types, identifying parts that are mandatory, optional, or repeated, in order to obtain a type which is a supertype of the two input types (it includes them), but is potentially much more succinct than their simple union. A theoretical study shows that the fusion function is correct and, very importantly, associative.
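The overall pipeline can be sketched in a few lines of Spark code. The following is only an illustrative skeleton: SchemaType, inferType and fuse are hypothetical placeholders for the components described above, not names taken from an actual implementation.

    import org.apache.spark.{SparkConf, SparkContext}

    object SchemaInferencePipeline {
      type SchemaType = String                       // stands in for the real type AST
      def inferType(json: String): SchemaType = ???  // Map phase: value -> isomorphic type
      def fuse(t1: SchemaType, t2: SchemaType): SchemaType = ??? // Reduce phase: fusion

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("json-schema-inference"))
        // One JSON value per line; the whole inference is a map followed by a reduce.
        val schema = sc.textFile("hdfs:///data/records.json")
          .map(inferType)
          .reduce(fuse)   // safe to distribute: fuse is commutative and associative
        println(schema)
        sc.stop()
      }
    }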
Associativity is crucial as it allows Spark to safely distribute and parallelize the fusion of a massive collection of values. Associativity is also important to enable incremental evolution of the inferred schema under updates. In many applications the JSON sources are dynamic, and new values can be added at any time, with a structure that can differ from that already inferred for previous records. In this situation, thanks to associativity, upon insertion of a new record in an existing collection we simply need to fuse the existing schema with the schema of the new record. For incremental maintenance under other forms of updates, in the usual case where a massive dataset is kept partitioned and the updated parts are known, it suffices to re-infer the schema for the updated parts and to fuse it with the previously inferred schemas of the unchanged parts.
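Under the same hypothetical API as in the sketch above, incremental maintenance reduces to additional calls to the fusion function, for instance:

    object IncrementalFusion {
      type SchemaType = String
      def inferType(json: String): SchemaType = ???
      def fuse(t1: SchemaType, t2: SchemaType): SchemaType = ???

      // Insertion: fuse the current schema with the type of the new record.
      def onInsert(schema: SchemaType, newRecord: String): SchemaType =
        fuse(schema, inferType(newRecord))

      // Partitioned dataset: re-infer only the changed partitions, then fuse
      // the partial schemas; associativity makes the fusion order irrelevant.
      def refreshSchema(partialSchemas: Seq[SchemaType]): SchemaType =
        partialSchemas.reduce(fuse)
    }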
Our last contribution consists of an implementation of the proposed approach based on Spark, and an experimental evaluation validating our claims of succinctness, precision, and efficiency. We based our tests on four real JSON datasets. Our experiments confirm that our schema inference algorithm returns very succinct yet precise schemas, even in the presence of poorly organized data (i.e., the Wikidata dataset). Also, a scalability analysis reveals that our approach ensures reasonable execution times, and that a simple partitioning strategy allows the performance to be improved.
Paper Outline. The paper is organized as follows. In Section 2 we illustrate some scenarios that motivate our work. In Section 3, then, we survey existing works. In Section 4, we describe the data model and the schema language we use here, while in Section 5 we present our schema inference approach. In Sections 6 and 7, finally, we show the results of our experimental evaluation and draw our conclusions.
MOTIVATION AND OVERVIEW
This section overviews the two steps of our schema fusion approach: type inference and type fusion. To this end, we first briefly recall the general syntax and semantics of JSON values. As in most semi-structured models, JSON distinguishes between basic values, which range over numbers (e.g., 123), strings (e.g., "abc"), and booleans (i.e., true/false), and complex values which can be either (unordered) sets of key/value pairs called records or (ordered) lists of values called arrays. The only constraint that JSON values must obey is key uniqueness within each record. Arrays can mix both basic and complex types. In the following, we will use the term mixed-content arrays for arrays mixing atomic and complex values.
A sample JSON record is illustrated in Figure 1. Syntactically, records use the conventional curly braces symbols whereas arrays use square brackets; finally, string values and keys are wrapped inside quotes in JSON (but we will avoid quotes around keys in our formal syntax).
Type inference.
Type inference, during the Map phase, is dedicated to inferring individual types for the input JSON values and yields a set of distinct types to be fused during the Reduce phase. Some proposals of JSON schemas exist in the literature. With one exception [START_REF] Pezoa | Foundations of json schema[END_REF], none of them uses regular expressions which, as we shall illustrate, are important for concisely representing types for array values. Moreover, a clean formal semantics specification of types is often missing in these works, hence making it difficult to understand their precise meaning.

    { "A": 123,
      "B": "The ...",
      "C": false,
      "D": ["abc", "cde", "fr12"] }

Figure 1: A JSON record r1.
The type language we adopt is meant to capture the core features of the JSON data model with an emphasis on succinctness. Intuitively, basic values are captured using standard data types (String, Number, Boolean), complex values are captured by introducing record and array type constructors, and a union type constructor is used to add flexibility and expressive power. To illustrate the type language, observe the following type that is inferred for the record r1 given in Figure 1:
{A : Num, B : Str, C : Bool, D : [Str, Str, Str]}
As we will show, the initial type inference is a quite simple and fast operation: it consists in a simple traversal of the input value, producing a type that is isomorphic to the value.
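The traversal can be sketched as follows over a minimal JSON abstract syntax; the constructor names below are ours, chosen for illustration, not the paper's code.

    sealed trait Json
    case class JNum(n: Double)                 extends Json
    case class JStr(s: String)                 extends Json
    case class JBool(b: Boolean)               extends Json
    case object JNull                          extends Json
    case class JRec(fields: Map[String, Json]) extends Json
    case class JArr(items: List[Json])         extends Json

    sealed trait Type
    case object TNum  extends Type
    case object TStr  extends Type
    case object TBool extends Type
    case object TNull extends Type
    case class TRec(fields: Map[String, Type]) extends Type
    case class TArr(items: List[Type])         extends Type

    // One constructor per case: the inferred type mirrors the value's shape.
    def inferType(v: Json): Type = v match {
      case JNum(_)  => TNum
      case JStr(_)  => TStr
      case JBool(_) => TBool
      case JNull    => TNull
      case JRec(fs) => TRec(fs.map { case (k, x) => k -> inferType(x) })
      case JArr(xs) => TArr(xs.map(inferType))
    }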
Type fusion.
Type fusion is the second step of our approach and consists in iteratively merging the types produced during the Map phase. Because it is performed during the Reduce phase in a distributed fashion, type fusion relies on a fusion operator which enjoys the commutativity and associativity properties. This fusion operator is invoked over two types T1 and T2 and produces a supertype of the inputs. To do so, the fusion collapses the parts of T1 and T2 that are identical and preserves the parts that are distinct in both types. To this end, T1 and T2 are processed in a synchronised top-down manner in order to identify common parts. The main idea is to only represent once what is common, and, at the same time, to preserve all the parts that differ.
Fusion treats atomic types, record types and array types differently, as follows.
• Atomic types: the fusion of atomic types is obvious: identical types are collapsed while different types are combined using the union operator.
• Record types: recall that valid record types enjoy key uniqueness. Therefore, the fusion of T1 and T2 is led by two rules:
(R1) matching keys from both types are collapsed and their respective types are recursively fused;
(R2) keys without a match are deemed optional in the resulting type and decorated with a question mark ?.
To illustrate those cases, assume that T1 and T2 are, respectively, {A:Str, B:Num} and {B:Bool, C:Str}. The only matching key is "B" and hence its two atomic types Num and Bool are fused, which yields Num + Bool. The other keys will be optional according to rule R2. Hence, fusion yields the type

T12 = {(A:Str)?, B:Num + Bool, (C:Str)?}

Assume now that T12 is fused with

T3 = {A:Null, B:Num}

Rules R1 and R2 need to be slightly adapted to deal with optional types. Intuitively, we should simply consider that optionality '?' prevails over the implicit total cardinality '1'. The resulting type is thus

T123 = {(A:Str + Null)?, B:Num + Bool, (C:Str)?}

Fusion of nested records eventually associates keys with types that may be unions of atomic types, record types, and array types. We will see that, when such types are merged, we separately merge the atomic types, the record types, and the array types, and return the union of the result. For instance, the fusion of types {l:(Bool + Str + {A:Num})} and {l:(Bool + {A:Str, B:Num})} yields {l:(Bool + Str + {A:(Num + Str), (B:Num)?})}.

• Array types: array fusion deserves special attention. A particular aspect to consider is that an array type obtained in the first phase may contain several repeated types, and may feature mixed-content. To deal with this, before fusing types we perform a kind of simplification on bodies by using regular expression types, and, in particular, union types + and repetition types *. To illustrate this point, consider the array value ["abc", "cde", {"E": "fr", "F": 12}], containing two strings followed by a record (mixed-content). The first phase infers for this value the type [Str, Str, {E:Str, F:Num}]. This type can actually be simplified. For instance, one can think of a partition-based approach which collapses adjacent identical types into a star-guarded type, thus transforming [Str, Str, {E:Str, F:Num}] into [(Str)*, {E:Str, F:Num}] by collapsing the string types. The resulting schema is indeed succinct and precise. However, succinctness cannot be guaranteed after fusion. For instance, if that type were to be merged with [{E:Str, F:Num}, Str, Str], where strings and record have swapped positions, succinctness would be lost because we would need to duplicate at least one sub-expression, (Str)* or {E:Str, F:Num}. As we are mainly concerned with generating types that are human-readable, we trade some precision for succinctness and do not account for position anymore. To achieve this, in our simplification process (performed before fusing array types) we generalize the above partition-based solution by returning the star-guarded union of all distinct types occurring in an array. So, simplification of either [Str, Str, {E:Str, F:Num}] or [{E:Str, F:Num}, Str, Str] yields the same type S = [(Str + {E:Str, F:Num})*].

After the array types have been so simplified, they are fused by simply recursively fusing their content types, applying the same technique described for record types: when the body type is a union type, we separately merge the atomic components, the array components, and the record components, and take the union of the results.
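A compact sketch of the record-fusion rules R1 and R2, using our own illustrative encoding in which each field carries a mandatory/optional flag (this is not the paper's actual code):

    sealed trait Type
    case object TNum  extends Type
    case object TBool extends Type
    case object TStr  extends Type
    case object TNull extends Type
    case class TUnion(l: Type, r: Type) extends Type
    case class TRec(fields: Map[String, (Type, Boolean)]) extends Type // Boolean: mandatory?

    def fuseAtomic(t1: Type, t2: Type): Type =
      if (t1 == t2) t1 else TUnion(t1, t2) // the full fusion also merges same-kind types

    def fuseRec(r1: TRec, r2: TRec): TRec = {
      val keys = r1.fields.keySet ++ r2.fields.keySet
      TRec(keys.map { k =>
        (r1.fields.get(k), r2.fields.get(k)) match {
          // R1: matching keys are collapsed; mandatory only if mandatory on both sides
          case (Some((u1, m1)), Some((u2, m2))) => k -> (fuseAtomic(u1, u2), m1 && m2)
          // R2: unmatched keys become optional
          case (Some((u, _)), None) => k -> (u, false)
          case (None, Some((u, _))) => k -> (u, false)
          case (None, None)         => sys.error("unreachable")
        }
      }.toMap)
    }

    // The first example above: fusing {A:Str, B:Num} with {B:Bool, C:Str}
    // yields {(A:Str)?, B:Num+Bool, (C:Str)?}.
    val t12 = fuseRec(
      TRec(Map("A" -> (TStr, true), "B" -> (TNum, true))),
      TRec(Map("B" -> (TBool, true), "C" -> (TStr, true))))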
RELATED WORK
The problem of inferring structural information from JSON data collections has recently attracted attention in the database research community. The closest work to ours is the very preliminary investigation that we presented in [START_REF] Colazzo | Typing massive json datasets[END_REF]. While [START_REF] Colazzo | Typing massive json datasets[END_REF] only provides a sketch of a MapReduce approach for schema inference, in this paper we present the results of a much deeper study. In particular, while in [START_REF] Colazzo | Typing massive json datasets[END_REF] a declarative specification of only a few cases of the fusion process is presented, in this paper we fully detail this process, providing a formal specification as well as a fusion algorithm. Furthermore, differently from [START_REF] Colazzo | Typing massive json datasets[END_REF], we present here an experimental evaluation of our approach validating our claims of parallelizability and succinctness.
In [START_REF] Wang | Schema management for document stores[END_REF] Wang et al. present a framework for efficiently managing a schema repository for JSON document stores. The proposed approach relies on a notion of JSON schema called skeleton. In a nutshell, a skeleton is a collection of trees describing structures that frequently appear in the objects of JSON data collection. In particular, the skeleton may totally miss information about paths that can be traversed in some of the JSON objects. In contrast, our approach enables the creation of a complete yet succinct schema description of the input JSON dataset. As already said, having such a complete structural description is of vital importance for many tasks, like query optimisation, defining and enforcing access-control security policies, and, importantly, giving the user a global structural vision of the database that can help her in querying and exploring the data in an effective way. Another important application of complete schema information is query type checking: as illustrated in [START_REF] Colazzo | Typing massive json datasets[END_REF] our inferred schemas can be used to make type checking of Pig Latin scripts much stronger.
In a very recent work [START_REF] Pezoa | Foundations of json schema[END_REF], motivated by the need of laying the formal foundations for the JSON Schema language [3], Pezoa et al. present the formal semantics of that language, as well as a theoretical study of its related expressive power and validation problem. While that work does not deal with the schema inference problem, our schema language can be seen as a core part of the JSON Schema language studied therein, and shares union types and repetition types with that one. These constructors are at the basis of our technique to collapse several schemas into a more succinct one. An alternative proposal for typing JSON data is JSound [2]. That language is quite restrictive wrt ours and JSON Schemas: for instance it lacks union types.
In a very recent work [START_REF] Discala | Automatic generation of normalized relational schemas from nested key-value data[END_REF] Abadi and Discala deal with the problem of automatically transforming denormalised, nested JSON data into normalised relational data that can be stored in an RDBMS; this is achieved by means of a schema generation algorithm that learns the normalised, relational schema from data. Differently from that work, we deal with schemas that are far from being relational, and are closer to tree regular grammars [START_REF] Murata | Taxonomy of xml schema languages using formal language theory[END_REF]. Furthermore, the approach proposed in [START_REF] Discala | Automatic generation of normalized relational schemas from nested key-value data[END_REF] ignores the original structure of the JSON input dataset and, instead, depends on patterns in the attribute data values (functional dependencies) to guide its schema generation. So, that approach is complementary to ours.
In [START_REF] Liu | Json data management: Supporting schema-less development in rdbms[END_REF] Liu et al. propose storage, querying, and indexing principles enabling RDBMSs to manage JSON. The paper does not deal with schema inference, but indicates a possible optimisation of their framework based on the identification of common attributes in JSON objects that can be captured by a relational schema for optimization purposes. In [START_REF] Scherzinger | Finding and fixing type mismatches in the evolution of object-nosql mappings[END_REF] Scherzinger et al. propose a plugin to track changes in object-NoSQL mappings. The technique is currently limited to only detect mismatches between base types (e.g., Boolean, Integer, String), and the authors claim that a wider knowledge of schema information is needed to enable the detection of other kinds of changes, like, for instance, the removal or renaming of attributes.
It is important to state that the problem of schema inference has already been addressed in the past in the context of semi-structured and XML data models. In [START_REF] Nestorov | Infering structure in semistructured data[END_REF] and [START_REF] Nestorov | Extracting schema from semistructured data[END_REF], Nestorov et al. describe an approach to extract a schema from semistructured data. They propose an object-oriented type system where nodes are captured by classes built starting from nodes sharing the same incoming and outgoing edges, and where data edges are generalized to relations between the classes. In [START_REF] Nestorov | Extracting schema from semistructured data[END_REF], the problem of building a type out of a collection of semistructured documents is studied. The emphasis is put on minimizing the size of the resulting type while maximizing its precision. Although that work considers a very general data model captured by graphs, it does not suit our context. Firstly, we consider the JSON model, which is tree-shaped by nature and features specific constructs such as arrays that are not captured by the semi-structured data model. Secondly, we aim at processing potentially large datasets efficiently, a problem that is not directly addressed in [START_REF] Nestorov | Infering structure in semistructured data[END_REF] and [START_REF] Nestorov | Extracting schema from semistructured data[END_REF].
More recent effort on XML schema inference (see [START_REF] Freydenberger | Fast learning of restricted regular expressions and dtds[END_REF] and works cited therein) is also worth mentioning since it is somewhat related to our approach. The aim of these approaches is to infer restricted, yet expressive enough forms of regular expressions starting from a positive set of strings representing element contexts of XML documents. While XML and JSON both allow one to represent tree-shaped data, they have radical differences that make existing XML related approaches difficult to apply to the JSON setting. Similar remarks hold for related approaches for schema inference for RDF [START_REF] Cebiric | Query-oriented summarization of RDF graphs[END_REF]. Also, none of these approaches is designed to deal with massive datasets.
DATA MODEL AND TYPE LANGUAGE
This section is devoted to formalizing the JSON data model and the schema language we adopt.
We represent JSON values as records and arrays, whose abstract syntax is given in Figure 2. Basic values B comprise null values, booleans, numbers n and strings s. As outlined in Section 2, records are sets of fields, each field being an association of a value V to a key l, whereas arrays are sequences of values. The abstract syntax is practical for the formal treatment, but we will typically use the more readable notation introduced at the bottom of Figure 2, where records are represented as {l1 : V1, . . . , ln : Vn} and arrays are represented as [V1, . . . , Vn].
    V ::= B | R | A                      Top-level values
    B ::= null | true | false | n | s    Basic values
    R ::= ERec | Rec(l, V, R)            Records
    A ::= EArr | Arr(V, A)               Arrays

Figure 2: Syntax of JSON data.

Semantics: In JSON, a record is well-formed only if all its top-level keys are mutually different. In the sequel, we only consider well-formed JSON records, and we use Keys(R) to denote the set of the top-level keys of R.
Records (Domain: FS(Keys × Values)):
    ⟦ERec⟧ = ∅
    ⟦Rec(l, V, R)⟧ = {(l, V)} ∪ ⟦R⟧
Since a record is a set of fields, we identify two records that only differ in the order of their fields.
The syntax of the JSON schema language we adopt is depicted in Figure 3. The core of this language is captured by the non-terminals BT , RT , and AT which are a straightforward generalization of their B, R and A counterparts from the data model syntax.
As previously illustrated in Section 2, we adopt a very specific form of regular types in order to prepare an array type for fusion. Before fusion, an array type [T1, . . . , Tn] is simplified as [(T1 + . . . + Tn) * ], or, more precisely, as [LFuse(T1, . . . , Tn) * ]: instead of giving the content type element by element as in [T1, . . . , Tn], we just say that it contains a sequence of values all belonging to LFuse(T1, . . . , Tn) that will be defined as a compact upper bound of T1 + . . . + Tn. This simplification is allowed by the fact that, besides the basic array types AT = [T1, . . . , Tn] we also have the simplified array type SAT = [T * ], where T may be any type, including a union type.
A field OptRecT (l, T, . . .), represented as l : T ? in the simplified notation, represents an optional field, that is, a field that may be either present or absent in a record of the corresponding type. For example, a type {l : Num?, m : (Str + Null)} describes records where l is optional and, if present, contains a number, while the m field is mandatory and may contain either null or a string.
A union type T + U contains the union of the values from T and those of U. The empty type ⊥ denotes the empty set.¹ We now define schema semantics by means of the function ⟦·⟧, defined as the minimal function mapping types to sets of values that satisfies the following equations. For the sake of simplicity we omit the case of basic types.
Auxiliary functions (lists of values drawn from a set S):
    S^0 = {[ ]}
    S^{n+1} = {[V] :: a | V ∈ S, a ∈ S^n}
    S^* = ⋃_{i∈N} S^i

Records (Domain: Sets(FS(Keys × Values))):
    ⟦ERecT⟧ = {∅}
    ⟦RecT(l, T, RT)⟧ = {{(l, V)} ∪ R | V ∈ ⟦T⟧, R ∈ ⟦RT⟧}
    ⟦OptRecT(l, T, RT)⟧ = ⟦RecT(l, T, RT)⟧ ∪ ⟦RT⟧

Arrays and Simplified Arrays (Domain: Sets(Lists(Values))):
    ⟦EArrT⟧ = {[ ]}
    ⟦ArrT(T, AT)⟧ = {[V] :: A | V ∈ ⟦T⟧, A ∈ ⟦AT⟧}
    ⟦[T*]⟧ = ⟦T⟧^*

Union types:
    ⟦⊥⟧ = ∅
    ⟦T + U⟧ = ⟦T⟧ ∪ ⟦U⟧
The basic idea behind our type fusion mechanism is that we always generalize the union of two record types to one record type containing the keys of both, and similarly for the union of two array types. We express this idea as 'merging types that have the same kind'. A kind() function, mapping each type to an integer ranging over {0, . . . , 5} (one value for each of the six syntactic categories: Null, Boolean, Number, String, record types, and array types), is used to implement this approach.
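A possible encoding of kind(); the exact numbering below is our own choice, since only equality of kinds matters to the fusion process:

    sealed trait Type
    case object TNull extends Type
    case object TBool extends Type
    case object TNum  extends Type
    case object TStr  extends Type
    case class TRec(fields: Map[String, Type]) extends Type
    case class TArr(body: Type)                extends Type

    def kind(t: Type): Int = t match {
      case TNull   => 0
      case TBool   => 1
      case TNum    => 2
      case TStr    => 3
      case TRec(_) => 4
      case TArr(_) => 5
    }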
In the sequel, generic types are indicated by the metavariables T, U, W , while BT , RT and AT are reserved for basic types, record types and array types.
Later on, in order to express correctness of the fusion process we rely on the usual notion of sub-typing (type inclusion).

Definition 4.1 (Sub-typing). Let T and U be two types. Then T is a sub-type of U, denoted T <: U, if and only if ⟦T⟧ ⊆ ⟦U⟧.
The sub-typing relation is a partial order among types. We do not use any subtype-checking algorithm in this work, but we exploit this notion to state properties of our schema inference approach.

¹The type ⊥ is never used during type inference, since no value belongs to it. In greater detail, ⊥ is actually a technical device that is only useful when an empty array type EArrT is simplified, before fusion, into a simplified array type: EArrT (that is, the type [ ]) is simplified as [⊥*], which has the same semantics as EArrT, and our algorithms never insert ⊥ in any other position.
SCHEMA INFERENCE
As already said, our approach is based on two steps: i) type inference for each single value in the input JSON data collection, and ii) fusion of types generated by the first step. We present these steps in the following two sections.
Initial Schema Inference
The first phase of our approach consists of a Map phase that performs schema inference for each single value of the input collection. Type inference for single values is done according to the inference rules in Figure 4. Each rule allows one to infer the type of a value indicated in the conclusion (part below the line) in terms of types recursively determined in the premises (part above the line). Rules with no premises deal with the terminal cases of the recursive typing process, which infers the type of a value by simply reflecting the structure of the value itself. Note the particular case of record values where uniqueness of attribute keys li is checked. Also notice that these rules are deterministic: each possible value matches at most the conclusion of one rule. These rules, hence, directly define a recursive typing algorithm. The following lemma states soundness of value typing, and it can be proved by a simple induction.
Lemma 5.1. For any JSON value V, if the inference rules assign the type T to V, then V ∈ ⟦T⟧.
It is worth noticing that schema inference done in this phase does not exploit the full expressivity of the schema language. Union types, optional fields and repetition types (the Simplified Array Types) are never inferred, while these types will be produced by the schema fusion phase described next.
Schema Fusion
The second phase of our approach is meant to fuse all the types inferred in the first Map phase. The main mechanism of this phase is a binary fusion function that is commutative and associative. These properties are crucial as they ensure that the function can be iteratively applied over n types in a distributed and parallel fashion.
When fusion is applied over two types T and U , it outputs either a single type obtained by recursively merging T and U if they have the same kind, or the simple union T + U otherwise. Since fusion may result in a union type, and since this is in turn fused with other types, possibly obtained by fusion itself, the fusion function has to deal with the case when union types T = T1 + . . . + Tn and U = U1 + . . . + Um need to be fused. In this case, our fusion function identifies and fuses types Tj and U h with matching kinds, while types of non-matching kinds are just moved unchanged into the output union type. As we will see later, the fusion process ensures the invariant property that in each output union type a given kind may occur at most once in each union; hence, in the two union types above, n ≤ 6 and m ≤ 6, since we only have six different kinds.
The auxiliary functions KMatch and KUnmatch, defined in Figure 5, respectively have the purpose of collecting pairs of types of the same kind in two union-types T1 and T2, and of collecting non-matching types. In Figure 5, two similar functions FMatch and FUnmatch are defined. They identify and collect fields having matching/unmatched keys in two input body record types RT1 and RT2.
These two functions are based on the auxiliary functions •(T) and ◦(RT). The function •(T) transforms a union type T1 + . . . + Tn into the list of its non-union addends [T1, . . . , Tn]. The function ◦(RT) transforms a record type {(l1:T1)^m1, . . . , (ln:Tn)^mn} into the set of its fields {(l1:T1)^m1, . . . , (ln:Tn)^mn}; in this case we can use a set since no repetition is possible. Here we use (l:T)^1 to denote a mandatory field, (l:T)^? to denote an optional field, and the symbols m and n as metavariables that range over {1, ?}.
We are now ready to present the fusion function. Its formal specification is given in Figure 6. We use a function ⊕L that is a right inverse of •(T) and rebuilds a union type from a list of non-union addends, and a function ⊕S that is a right inverse of ◦(RT) and rebuilds a record type from a set of fields. We also use min(m, n), a partial function that picks the "smallest" cardinality, by assuming ? < 1.
The general case, where the types T1 and T2 to be fused may be union types, is dealt with by the Fuse(T1, T2) function. As said before, it recursively applies LFuse to pairs of types coming from T1 and T2 and having the same kind, while unmatched types are simply returned in the output union type.
The specification of LFuse is captured by lines 2 to 7. Line 2 deals with the case where the input types are two identical basic types: the fusion yields the input basic type. Line 3 deals with the case where the input types are records. In this case, pairs of fields whose keys match are recursively fused by calling LFuse, and the lowest cardinality is chosen for each, so that a field is mandatory only if it is mandatory in both record types, whereas the unmatched fields are copied into the result type as optional fields.
The remaining lines of LFuse deal with the case where the input types are arrays. Each of these lines handles a combination of original and simplified arrays by ensuring that Fuse is called over the body types of arrays that have been simplified through a call to collapse. While line 4 deals with the case where the two types have not been subject to fusion yet, lines 5-7 deal with the case where one of the inputs is the result of previous fusion operations, and therefore has a *-expression as a body (recall the discussion in Section 2). Lines 8 and 9 are dedicated to the array simplification function collapse. This function simply relies on Fuse in order to generate an over-approximation of all the different types that are found in the original array type, thereby preparing the array type for the fusion process.
To illustrate both body array type simplification and record fusion, consider the following type T :
T = [Num, Bool, Num, {l1 : Num, l2 : Str}, {l1 : Num}, {l2 : Bool, l3 : Str}]
We have that collapse(T) is equal to:

(Num + Bool + {(l1 : Num)?, (l2 : Str + Bool)?, (l3 : Str)?})
Note that only one record type is created, by iterating fusion over the three record types. Also note that there is a good level of size reduction entailed by simplification. This happens in the most frequent cases (where elements of an array share most of their structure), while size reduction becomes weaker when very heterogeneous records appear in the array body type (in the particular case where no field key is shared among records, the unique record type given by simplification contains all keys, with their associated types, as optional fields).
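In code, collapse is simply a fold of the fusion function over the element types of the array. The sketch below uses a stubbed fusion and our own names; a non-empty array is assumed, the empty case yielding [⊥*] as discussed above.

    sealed trait Type
    case class TUnion(ts: Set[Type]) extends Type
    case class TStar(body: Type)     extends Type // the simplified array body [T*]

    // Stub: a real implementation merges same-kind addends instead of just unioning.
    def fuse(t1: Type, t2: Type): Type = (t1, t2) match {
      case (TUnion(a), TUnion(b)) => TUnion(a ++ b)
      case (TUnion(a), b)         => TUnion(a + b)
      case (a, TUnion(b))         => TUnion(b + a)
      case (a, b) if a == b       => a
      case (a, b)                 => TUnion(Set(a, b))
    }

    // collapse([T1, ..., Tn]) = [ (T1 fuse ... fuse Tn)* ]
    def collapse(elementTypes: List[Type]): TStar =
      TStar(elementTypes.reduce(fuse))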
To conclude this section the following theorems prove the main theoretical properties of the fusion process: correctness, commutativity and associativity. The crucial role played by these properties has already been discussed in the previous sections.
All these properties hold for types that respect the invariant that types of a given kind can occur at most once in each union. We use the term "normal types" to refer to such types. All of our algorithms respect this invariant, that is, they only generate normal types.
We first deal with correctness.
Theorem 5.2 (Correctness of Fuse) Given two normal types T1 and T2, if T3 = Fuse(T1, T2), then T1 <: T3 and T2 <: T3.
The proof of the above theorem relies on the following lemma.
Lemma 5.3 (Correctness of LFuse) Given two non-union normal types T1 and T2 with the same kind, if T3 = LFuse(T1, T2), then it holds that T1 <: T3 and T2 <: T3.
Another important property of fusion is commutativity: for any two normal types T1 and T2, Fuse(T1, T2) = Fuse(T2, T1). Together with the associativity discussed above, this guarantees that the schema obtained by fusing a collection of types does not depend on the order in which Spark processes its elements.
EXPERIMENTAL EVALUATION
In this section we present an experimental evaluation of our approach whose main goal is to validate our precision and succinctness claims. We also incorporate a preliminary study on using our approach in a cluster-based environment for the sake of dealing with complex large datasets.
Experimental Setup and Datasets
For our experiments, we used Spark 1.6.1 [7] installed on two kinds of hardware. The first configuration consists of a single Mac mini machine equipped with an Intel dual-core 2.6 GHz processor, 16GB of RAM, and a SATA hard drive. This machine is mainly used for verifying the precision and succinctness claims. In order to assess the scalability of our approach and its ability to deal with large datasets, we also exploited a small cluster of six nodes connected by a Gigabit link with 1Gb speed. Each node is equipped with two 10-core Intel 2.2 GHz CPUs, 64GB of RAM, and a standard RAID hard drive.
The choice of using Spark is intuitively motivated by its widespread use as a platform for processing large datasets of different kinds (e.g., relational, semi-structured, and graph data). Its main characteristic lies in its ability to persist large datasets into main-memory in order to process them in a fast and efficient manner. Spark offers APIs for major programming languages like Java, Scala, and Python. In particular, Scala serves our case well since it makes the encoding of pattern matching and inductive definitions very easy. Using Scala has, for instance, allowed us to implement both the type inference and the type fusion algorithms in a rather straightforward manner starting from their respective formal specifications.
The type inference implementation extends the Json4s library [START_REF]Json4s library[END_REF] for parsing the input JSON documents. This library yields a specific Scala object for each JSON construct (array, record, string, etc.), and this object is used by our implementation to generate the corresponding type construct. The type fusion implementation follows a standard functional programming approach and requires no further comment.
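For illustration, the Json4s-to-type mapping can look like the following sketch; JValue and its constructors are the library's actual API, while the Type constructors are our own placeholders.

    import org.json4s._
    import org.json4s.jackson.JsonMethods.parse

    sealed trait Type
    case object TNull extends Type
    case object TBool extends Type
    case object TNum  extends Type
    case object TStr  extends Type
    case class TRec(fields: Map[String, Type]) extends Type
    case class TArr(items: List[Type])         extends Type

    // Map each Json4s construct to the corresponding type constructor.
    def inferType(v: JValue): Type = v match {
      case JBool(_)                                      => TBool
      case JInt(_) | JLong(_) | JDouble(_) | JDecimal(_) => TNum
      case JString(_)                                    => TStr
      case JObject(fields) => TRec(fields.map { case (k, x) => k -> inferType(x) }.toMap)
      case JArray(items)   => TArr(items.map(inferType))
      case _               => TNull // JNull, JNothing, and any remaining constructor
    }

    val t = inferType(parse("""{"A": 123, "B": "The ...", "C": false, "D": ["abc"]}"""))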
It is important to mention that the Spark API offers a feature for extracting a schema from a JSON document. However, this schema inference suffers from two main drawbacks. First, the inferred schemas do not contain regular expressions, which prevents one from concisely representing repeated types, while our type system uses the Kleene-Star to encode the repetition of types. Second, the Spark schema extraction is imprecise when it comes to deal with arrays containing mixed content, such as, for instance, an array of the form:
[Num, Str, {l : Str}]
In such a case, the Spark API uses type coercion yielding an array of type String only. In our case, we can exploit union types to generate a much more precise type:
[(Num + Str + {l : Str}) * ]
For our experiments we used four datasets. The first two datasets are borrowed from an existing work [START_REF] Discala | Automatic generation of normalized relational schemas from nested key-value data[END_REF] and correspond to data crawled from GitHub and from Twitter. The third dataset consists of a snapshot of Wikidata [START_REF]Wikidata[END_REF], a large repository of facts feeding the Wikipedia portal. The last dataset consists of a crawl of NYTimes articles using the NYTimes API [5]. A detailed description of each dataset is provided in the sequel.
GitHub.
This dataset corresponds to metadata generated upon pull requests issued by users willing to commit a new version of code. It comprises 1 million JSON objects sharing the same top-level schema and only varying in their lower-level schema. All objects of this dataset consist exclusively of records, sometimes nested, with a nesting depth never greater than four. Arrays are not used at all.
Twitter.
Our second dataset corresponds to metadata that are attached to the tweets shared by Twitter users. It comprises nearly 10 million records corresponding, in majority, to tweet entities. A tiny fraction of these records corresponds to a specific API call meant to delete tweets using their ids. This dataset is interesting for our experiment for many reasons. First, it uses both records and arrays of records, although the maximum level of nesting is 3. Second, it contains five different top-level schemas sharing common parts. Finally, it mixes two kinds of JSON records (tweets and deletes). This dataset is useful to assess the effectiveness of our typing approach when dealing with arrays.
Wikidata.
The largest dataset comprises 21 million records, reaching a size of 75GB and corresponding to Wikipedia facts. These facts are structured following a fixed schema, but suffer from a poor design compared to the previous datasets. For instance, an important portion of Wikidata objects corresponds to claims issued by users. These user identifiers are directly encoded as keys, whereas a clean design would suggest encoding this information as the value of a specific key called id, for example. This dataset is of interest to our experiments since several records reach a nesting level of 6.
NYTimes.
The last dataset we are considering here is probably the most interesting one and comprises approximately 1.2 million records and reaches the size of 22GB. Its records feature both nested records and arrays and are nested up to 7 levels. Most of the fields in records are associated to text data, which explains the large size of this dataset compared to the previous ones. These records encode metadata about news articles, such as the headline, the most prominent keywords, the lead paragraph as well as a snippet of the article itself. The interest of this dataset lies in the fact that the content of fields is not fixed and varies from one record to another. A quick examination of an excerpt of this dataset has revealed that the content of the headline field is associated, in some records, to subfields labeled main, content kicker, kicker, while in other records it is associated to subfields labeled main and print headlines. Another common pattern in this dataset is the use of Num and Str types for the same field.
In order to compare the results of our experiments using the four datasets, we decided to limit the size of every dataset to the first million records (the size of the smallest one). We also created, starting from each dataset, sub-datasets by restricting the original ones to one thousand (1K), ten thousand (10K) and one hundred thousand (100K) records, respectively, chosen in a random fashion. Table 1 reports the size of each of these sub-datasets.
Testing Scenario and Results
The main goal of our experiments is to assess the effectiveness of our approach and, in particular, to understand if it is able to return succinct yet precise fused types. To do so we report in Tables 2 to 5, for each dataset, the number of distinct types, the min, max and average size of these types as well as the size of the fused type. The notion of size of a type is standard, and corresponds to the size (number of nodes) of its Abstract Syntax Tree. For fairness, one can consider the average size as a baseline wrt which we compare the size of the fused type. This helps us judge the effectiveness of our fusion at collapsing common parts of the input types.
From Tables 2, 3 and 4, it is easy to observe that our primary goal of succinctness is achieved for the GitHub and the Twitter datasets. Indeed, the ratio between the size of the fused type and that of the average size of the input types is not bigger than 1.4 for GitHub whereas it is bounded by 4 for Twitter, which are relatively good factors. These results are not surprising: GitHub objects are homogeneous. Twitter has a more varying structure and, in addition, it mixes two different kinds of objects that are deletes and tweets, as outlined in the description of this dataset. This explains the slight difference in terms of compaction wrt GitHub.
As expected, the results for Wikidata are worse than the results for the previous datasets, due to the particularity of this dataset concerning the encoding of user ids as keys. This has an impact on our fusion technique, which relies on keys to merge the underlying records. Still, our fusion algorithm manages to collapse the common parts of the input types, as testified by the fact that the size of the fused types is smaller than the total size of the input types.² Finally, the results for the NYTimes dataset, which features many irregularities, are promising and even better than the rest. This can be explained by the fact that the fields at the first level are fixed while the lower-level fields may vary. This does not happen in the previous datasets, where the variations occur at the first level. Execution times for type inference and type fusion for the GitHub, Twitter, and Wikidata datasets are reported in Table 6. As can be observed, processing the Wikidata dataset is more time-consuming than processing the two other datasets. This is explained, once again, by the nature of the Wikidata dataset. Observe also that the processing time of GitHub is larger than that of Twitter, due to the former dataset being larger than the latter one.

²The total size of the input types can be roughly estimated by multiplying either the minimum, maximum or average size by the number of types.
Scalability
To assess the scalability of our approach, we have deployed the typing and fusion implementations on our cluster. We set the parameters so as to exploit the full capacity of the cluster in terms of number of cores. To do so, we set the number of cores to 120, that is, 20 cores per node. We also assign to our job 300GB of main memory, hence leaving 72GB for the task manager and other runtime monitoring processes. We used the NYTimes full dataset (22GB) stored on HDFS. Because our approach requires two steps (type inference and type fusion), we adopted a strategy where the results of the type inference step are persisted into main memory to be directly available to the fusion step. We ran the experiments on datasets of varying size obtained by restricting the full one to the first fifty, two hundred fifty, and five hundred thousand records, respectively. The results for these experiments are reported in Table 7, together with some statistics on these datasets (number of records and cardinality of the distinct types). It can be observed that execution time increases linearly with the dataset size.

In an attempt to optimize the execution time on the cluster, we started by analyzing the execution and realized that the full capacity of the cluster was not exploited. Indeed, HDFS used only one node to store the entire dataset, which does not allow the parallelism to be exploited. We also observed that the intermediate results produced by the type inference step were split over only two nodes. The overall effect is that the computation was performed on two nodes while the remaining four nodes were idle.
To overcome this problem, we considered a strategy based on partitioning the input data that would force Spark to take full advantage of the cluster. In order to avoid the overhead of data shuffling, the ideal solution would be to force computation to be local until the end of the processing. Because Spark 1.6 does not explicitly allow such an option, we had to opt for a manual strategy where each partition of the data is processed in isolation, and the inferred schemas are finally fused with each other (this is a fast operation, as each schema to fuse has a very small size). The purpose is to simulate the realistic situation where Spark processes data exclusively locally, thus avoiding the overhead of synchronization. The times for processing each partition are reported in Table 8. The average time is 2.85 minutes, which is a rather reasonable time for processing a dataset of 22GB. Note that this simple yet effective optimization is possible thanks to the associativity of our fusion process.
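A sketch of this manual strategy, again with hypothetical inferType and fuse placeholders: each partition (here, one HDFS path per partition) is reduced to a partial schema in isolation, and the small partial schemas are fused at the very end.

    import org.apache.spark.{SparkConf, SparkContext}

    object PartitionedInference {
      type SchemaType = String
      def inferType(json: String): SchemaType = ???
      def fuse(t1: SchemaType, t2: SchemaType): SchemaType = ???

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("partitioned-inference"))
        // args: one HDFS path per partition, processed independently.
        val partialSchemas = args.toSeq.map { path =>
          sc.textFile(path).map(inferType).reduce(fuse)
        }
        // Final fusion of the tiny partial schemas; sound thanks to associativity.
        println(partialSchemas.reduce(fuse))
        sc.stop()
      }
    }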
CONCLUSIONS AND FUTURE WORK
The approach described in this paper is a first step towards the definition of a schema-based mechanism for exploring massive JSON datasets. This issue is of great importance due to the overwhelming quantity of JSON data manipulated on the web and due to the flexibility offered by the systems managing these data.
The main idea of our approach is to infer schemas for the input datasets in order to get insights about the structure of the underlying data; these schemas are succinct yet precise, and faithfully capture the structure of the input data. To this end, we started by identifying a schema language with the operators needed to ensure succinctness and precision of our inferred schemas. We, then, proposed a fusion mechanism able to detect and collapse common parts of the input types. An experimental evaluation on several datasets validated our claims and showed that our type fusion approach actually achieves the goals of succinctness, precision, and efficiency.
Another benefit of our approach is its ability to perform type inference in an incremental fashion. This is possible because the core of our technique, fusion, is incremental by essence. One possible and interesting application would be to process a subset of a large dataset to get a first insight on the structure of the data before deciding whether to refine this partial schema by processing additional data.
As future work, we plan to enrich schemas with statistical and provenance information about the input data. Furthermore, we want to improve the precision of the inference process for arrays and study the relationship between precision and efficiency.
Figure 3: Syntax of the JSON type language.

Figure 4: Type inference rules.

Figure 5: Auxiliary functions.
•(T): transforms a type into a list of non-union types, where • is list concatenation and [ ] is the list constructor
    •(T1 + T2) := •(T1) • •(T2)
    •(⊥) := [ ]
    •(T) := [T]   when T is neither a union type nor ⊥

KMatch(T1, T2) := {(U1, U2) | U1 ∈ •(T1), U2 ∈ •(T2), kind(U1) = kind(U2)}
KUnmatch(T1, T2) := {U1 ∈ •(T1) | ∀U2 ∈ •(T2). kind(U1) ≠ kind(U2)} ∪ {U2 ∈ •(T2) | ∀U1 ∈ •(T1). kind(U2) ≠ kind(U1)}

◦(RT): transforms a record type into a set of fields
    ◦(ERecT) := ∅
    ◦(RecT(l, T, RT)) := {(l:T)^1} ∪ ◦(RT)
    ◦(OptRecT(l, T, RT)) := {(l:T)^?} ∪ ◦(RT)

FMatch(RT1, RT2) := {((l:T)^n, (k:U)^m) | (l:T)^n ∈ ◦(RT1) and (k:U)^m ∈ ◦(RT2) and l = k}
FUnmatch(RT1, RT2) := {(l:T)^n ∈ ◦(RT1) | ∀(k:U)^m ∈ ◦(RT2). l ≠ k} ∪ {(l:T)^n ∈ ◦(RT2) | ∀(k:U)^m ∈ ◦(RT1). l ≠ k}
Table 1: (Sub-)dataset sizes.

              1K       10K      100K     1M
    GitHub    14MB     137MB    1.3GB    14GB
    Twitter   2.2MB    22MB     216MB    2.1GB
    Wikidata  23MB     155MB    1.1GB    5.4GB
    NYTimes   10MB     189MB    2GB      22GB
Table 2: Results for GitHub (min., max. and avg. size of the inferred types, and size of the fused type).

            # types   min.   max.   avg.   fused type size
    1K      29        147    305    233    321
    10K     66        147    305    239    322
    100K    261       147    305    246    330
    1M      3,043     147    319    257    354
Table 3: Results for Twitter (min., max. and avg. size of the inferred types, and size of the fused type).

            # types   min.   max.   avg.   fused type size
    1K      167       7      218    74     221
    10K     677       7      276    –      273
    100K    2,320     7      308    75     277
    1M      8,117     7      390    77     299
Table 4: Results for Wikidata (min., max. and avg. size of the inferred types, and size of the fused type).

            # types    min.   max.     avg.    fused type size
    1K      999        27     36,748   1,215   37,258
    10K     9,886      21     36,748   866     82,191
    100K    95,298     11     39,292   607     87,290
    1M      640,010    11     39,292   310     117,010
Table 5: Results for NYTimes (min., max. and avg. size of the inferred types, and size of the fused type).

            # types    min.   max.    avg.     fused type size
    1K      555        299    887     597.25   88
    10K     2,891      6      943     640      331
    100K    15,959     6      997     755      481
    1M      312,458    6      1,046   674      760

Table 6: Typing execution times.

              1K    10K   100K   1M
    GitHub    1s    4s    32s    297s
    Twitter   0     1s    7s     73s
    Wikidata  7s    15s   121s   925s
Table 7: Scalability - NYTimes dataset.

    size    # records    # distinct types   time
    1GB     50,000       5,679              2 min
    4.5GB   250,000      54,868             4.4 min
    9GB     500,000      128,943            8.5 min
    22GB    1,184,943    312,458            12.5 min
Table 8: Partition-based processing of NYTimes.

                  # objects   # types   time
    partition 1   284,943     67,632    2.4 min
    partition 2   300,000     83,226    3.8 min
    partition 3   300,000     89,929    1.9 min
    partition 4   300,000     84,333    3.3 min
ACKNOWLEDGMENTS
Houssem Ben Lahmar has been partially supported by the project D03 of SFB/Transregio 161.
"1004298",
"840508",
"1004291",
"1095200"
] | [
"238503",
"564132",
"989",
"89633",
"132915"
] |
FROM THE BOLTZMANN EQUATION TO THE INCOMPRESSIBLE NAVIER-STOKES EQUATIONS ON THE TORUS: A QUANTITATIVE ERROR ESTIMATE

MARC BRIANT

Keywords: Boltzmann equation on the Torus, Explicit trend to equilibrium, Incompressible Navier-Stokes hydrodynamical limit, Knudsen number, Hypocoercivity, Kinetic Models.
Introduction
This paper deals with the Boltzmann equation in a perturbative setting as the Knudsen number tends to 0. The latter equation describes the behaviour of rarefied gas particles moving on T^d (the flat torus of dimension d ≥ 2) with velocities in R^d, when the only interactions taken into account are binary collisions. More precisely, the Boltzmann equation rules the time evolution of the distribution of particles in position and velocity. A formal derivation of the Boltzmann equation from Newton's laws under the rarefied gas assumption can be found in [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF], while [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] presents Lanford's Theorem (see [START_REF] Lanford | Time evolution of large classical systems[END_REF] and [START_REF] Gallagher | From Newton to Boltzmann: hard spheres and short-range potentials[END_REF] for detailed proofs), which rigorously proves the derivation in short times.
We denote the Knudsen number by ε and the Boltzmann equation reads

\[
\partial_t f + v \cdot \nabla_x f = \frac{1}{\varepsilon} Q(f,f), \quad \text{on } \mathbb{T}^d \times \mathbb{R}^d, \tag{1.1}
\]

where the collision operator is given by

\[
Q(f,f) = \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} \Phi(|v - v_*|)\, b(\cos\theta)\, [f' f'_* - f f_*]\, dv_*\, d\sigma,
\]
where f ′ , f * , f ′ * and f are the values taken by f at v ′ , v * , v ′ * and v respectively. Define:
\[
v' = \frac{v+v_*}{2} + \frac{|v-v_*|}{2}\,\sigma, \qquad v'_* = \frac{v+v_*}{2} - \frac{|v-v_*|}{2}\,\sigma, \qquad \cos\theta = \left\langle \frac{v-v_*}{|v-v_*|}, \sigma \right\rangle.
\]
One can find in [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF], [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] or [START_REF] Golse | From kinetic to macroscopic models[END_REF] that the global equilibria for the Boltzmann equation are the Maxwellians µ(v). Without loss of generality we consider only the case of normalized Maxwellians:
\[
\mu(v) = \frac{1}{(2\pi)^{d/2}}\, e^{-\frac{|v|^2}{2}}.
\]
The bilinear operator Q(g, h) is given by
\[
Q(g,h) = \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} \Phi(|v - v_*|)\, b(\cos\theta)\, [h' g'_* - h g_*]\, dv_*\, d\sigma.
\]
1.1. The problem and its motivations. The Knudsen number is the inverse of the average number of collisions for each particle per unit of time. Therefore, as reviewed in [START_REF] Villani | Limites hydrodynamiques de l'équation de Boltzmann (d'après C. Bardos[END_REF], one can expect a convergence from the Boltzmann model towards the acoustics and the fluid dynamics as the Knudsen number tends to 0. This latter convergence will be specified below. However, these different models describe physical phenomena that do not act at the same scales in space or time. As suggested in previous studies, for instance [START_REF] Villani | Limites hydrodynamiques de l'équation de Boltzmann (d'après C. Bardos[END_REF][14] [START_REF] Saint-Raymond | Hydrodynamic limits of the Boltzmann equation[END_REF], a rescaling in time and a perturbation of order ε around the global equilibrium µ(v) should approximate, as the Knudsen number tends to 0, the incompressible Navier-Stokes regime. We therefore study the following equation
\[
\partial_t f_\varepsilon + \frac{1}{\varepsilon}\, v \cdot \nabla_x f_\varepsilon = \frac{1}{\varepsilon^2} Q(f_\varepsilon, f_\varepsilon), \quad \text{on } \mathbb{T}^d \times \mathbb{R}^d, \tag{1.2}
\]
under the linearization $f_\varepsilon(t,x,v) = \mu(v) + \varepsilon \mu^{1/2}(v) h_\varepsilon(t,x,v)$. This leads to the perturbed Boltzmann equation
\[
\partial_t h_\varepsilon + \frac{1}{\varepsilon}\, v \cdot \nabla_x h_\varepsilon = \frac{1}{\varepsilon^2} L(h_\varepsilon) + \frac{1}{\varepsilon} \Gamma(h_\varepsilon, h_\varepsilon), \tag{1.3}
\]
which we will study thoroughly, and where we defined

\[
L(h) = \left[ Q(\mu, \mu^{1/2} h) + Q(\mu^{1/2} h, \mu) \right] \mu^{-1/2},
\]
\[
\Gamma(g,h) = \frac{1}{2} \left[ Q(\mu^{1/2} g, \mu^{1/2} h) + Q(\mu^{1/2} h, \mu^{1/2} g) \right] \mu^{-1/2}.
\]
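For the reader's convenience, here is the short computation showing how (1.3) follows from (1.2) under this linearization. Since µ does not depend on t or x, the left-hand side of (1.2) becomes

\[
\partial_t f_\varepsilon + \frac{1}{\varepsilon}\, v \cdot \nabla_x f_\varepsilon = \varepsilon \mu^{1/2} \left( \partial_t h_\varepsilon + \frac{1}{\varepsilon}\, v \cdot \nabla_x h_\varepsilon \right),
\]

while on the right-hand side, bilinearity of Q together with Q(µ, µ) = 0 gives

\[
\frac{1}{\varepsilon^2} Q(f_\varepsilon, f_\varepsilon) = \frac{1}{\varepsilon} \left[ Q(\mu, \mu^{1/2} h_\varepsilon) + Q(\mu^{1/2} h_\varepsilon, \mu) \right] + Q(\mu^{1/2} h_\varepsilon, \mu^{1/2} h_\varepsilon);
\]

dividing both sides by $\varepsilon \mu^{1/2}$ yields exactly (1.3), with $\Gamma(h_\varepsilon, h_\varepsilon) = \mu^{-1/2} Q(\mu^{1/2} h_\varepsilon, \mu^{1/2} h_\varepsilon)$.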
All along this paper we consider the Boltzmann equation with hard potential or Maxwellian potential (γ = 0), that is to say, there is a constant $C_\Phi > 0$ such that

\[
\Phi(z) = C_\Phi\, z^{\gamma}, \quad \gamma \in [0,1].
\]
We also assume a strong form of Grad's angular cutoff [START_REF] Grad | Principles of the kinetic theory of gases[END_REF], expressed here by the fact that we assume b to be $C^1$ with the following controls:

\[
\forall z \in [-1,1], \quad |b(z)|,\ |b'(z)| \leq C_b,
\]
b and Φ being defined in equation (1.1).
The aim of the present article is to develop a constructive method to obtain existence and exponential decay for solutions to the perturbed Boltzmann equation (1.3), uniformly in the Knudsen number.
Such a uniform result is then used to derive explicit rates of convergence for $(h_\varepsilon)_{\varepsilon>0}$ towards its limit as ε tends to 0, thus proving and quantifying the convergence from the Boltzmann equation to the incompressible Navier-Stokes equations (1.4).
1.2. Notations. Throughout this paper, we use the following notations. For two multi-indexes j and l in N d we define:
• $\partial^j_l = \partial^j_v \partial^l_x$,
• for $i$ in $\{1, \dots, d\}$ we denote by $c_i(j)$ the $i$-th coordinate of $j$,
• the length of $j$ will be written $|j| = \sum_i c_i(j)$,
• the multi-index $\delta_{i_0}$ is defined by $c_i(\delta_{i_0}) = 1$ if $i = i_0$ and $0$ elsewhere.

We work with the following definitions: $L^p_{x,v} = L^p(\mathbb{T}^d \times \mathbb{R}^d)$, $L^p_x = L^p(\mathbb{T}^d)$ and $L^p_v = L^p(\mathbb{R}^d)$. The Sobolev spaces $H^s_{x,v}$, $H^s_x$ and $H^s_v$ are defined in the same way, and we denote the standard Sobolev norms by

\[
\|\cdot\|^2_{H^s_{x,v}} = \sum_{|j|+|l| \leq s} \|\partial^j_l \cdot\|^2_{L^2_{x,v}}.
\]
1.3. Our strategy and results. The first step is to investigate equation (1.3) in order to obtain existence and exponential decay of solutions close to equilibrium in Sobolev spaces $H^s_{x,v}$, independently of the Knudsen number ε. Moreover, we want all the required smallness assumptions on the initial data, as well as the rates of convergence, to be explicit. Such a result has been proved in [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF] by studying independently the behaviour of both the microscopic and fluid parts of solutions to (1.3); we propose here another method, based on hypocoercivity estimates.
Our strategy is to build a norm on Sobolev spaces which is equivalent to the standard norm and which satisfies a Grönwall type inequality.
We first construct a functional on $H^s_{x,v}$ by considering a linear combination of $\|\partial^j_l \cdot\|^2_{L^2_{x,v}}$, for all $|j|+|l| \leq s$, together with product terms of the form $\langle \partial^{\delta_i}_{l-\delta_i} \cdot, \partial^0_l \cdot \rangle_{L^2_{x,v}}$. The distortion of the standard norm by adding these mixed terms is necessary [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF] in order to exhibit a relaxation, due to the hypocoercivity property of the linear part of the perturbed Boltzmann equation (1.3).
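Schematically, at order s = 1 such a functional takes the form (this is only an illustrative sketch; the actual functional involves carefully chosen, possibly ε-dependent, coefficients for all derivatives up to order s)

\[
\mathcal{F}(h) = A\, \|h\|^2_{L^2_{x,v}} + a\, \|\partial_v h\|^2_{L^2_{x,v}} + b\, \langle \partial_x h, \partial_v h \rangle_{L^2_{x,v}} + c\, \|\partial_x h\|^2_{L^2_{x,v}},
\]

with $b^2 < 4ac$ so that $\mathcal{F}$ is equivalent to the square of the $H^1_{x,v}$ norm, and the goal is a Grönwall-type inequality

\[
\frac{d}{dt}\, \mathcal{F}(h_\varepsilon(t)) \leq -C\, \mathcal{F}(h_\varepsilon(t)),
\]

with $C > 0$ independent of ε, which immediately yields exponential decay.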
We then study the flow of this functional along time for solutions to the linearized Boltzmann equation (1.3). This flow is controlled by energy estimates and, finally, a non-trivial choice of coefficients in the functional yields an equivalence between the functional and the standard Sobolev norm, as well as a Grönwall type inequality, both of them being independent of ε.
This strategy is applied to the linear part of the equation to prove that it generates a strongly continuous semigroup with exponential decay (Theorem 2.1). We then combine the latter method with an orthogonality property of the remainder and apply it to the full nonlinear model (Proposition 2.2). This estimate enables us to prove the existence of solutions to the Cauchy problem and their exponential decay as long as the initial data is small enough, with a smallness independent of ε (Theorem 2.3). We emphasize here that, thanks to the functional we use, the smaller ε is, the less control is needed on the v-derivatives of the initial data.
However, these results seem to tell us that the v-derivatives of solutions to equation (1.3) can blow-up as ε tends to 0. The last step is thus to create a new functional, based on the microscopic part of solutions (idea first introduced by Guo [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF]), satisfying the same properties but controlling the v-derivatives as well. The control on the microscopic part of solutions to equation (1.3) is due to the deep structure of the linear operator L. This leads to the expected exponential decay independently of ε even for the v-derivatives (Theorem 2.4).
Finally, the chief aim of the present article is to derive explicit rates of convergence from solutions to the perturbed Boltzmann equation to solutions to the incompressible Navier-Stokes equations.
Theorem 2.3 tells us that for all ε we can build a solution $h_\varepsilon$ to the perturbed Boltzmann equation (1.3), as long as the initial perturbation is sufficiently small, independently of ε. We can then consider the sequence $(h_\varepsilon)_{0 < \varepsilon \leq 1}$ and study its limit. It appears that it converges weakly in $L^\infty_t H^s_x L^2_v$, for $s \geq s_0 > d$, towards a function h. Furthermore, we have the following form for h (see [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF]):

\[
h(t,x,v) = \left( \rho(t,x) + v \cdot u(t,x) + \frac{1}{2}\left( |v|^2 - d \right) \theta(t,x) \right) \mu(v)^{1/2},
\]
of which physical observables are weak solutions, in the Leray sense [START_REF] Leray | Sur le mouvement d'un liquide visqueux emplissant l'espace[END_REF], of the incompressible Navier-Stokes equations (p being the pressure function, ν and κ being constants determined by L, see Theorem 5 in [START_REF] Golse | From kinetic to macroscopic models[END_REF])
∂ t u -ν∆u + u • ∇u + ∇p = 0, ∇ • u = 0, (1.4) ∂ t θ -κ∆θ + u • ∇θ = 0,
together with the Boussinesq relation (1.5) ∇(ρ + θ) = 0.
We conclude by studying the properties of the hydrodynamical convergence via the Fourier transform on the torus of the linear operator $L - v\cdot\nabla_x$. This gives us a strong convergence result on the time average of $h_\varepsilon$, with an explicit rate of convergence in finite time. An interpolation between this finite time convergence and the exponential stability of the global equilibria, for the Boltzmann equation as well as for the Navier-Stokes equations, gives a strong convergence for all times (Theorem 2.5). We also obtain an explicit form for the initial data associated to the limit of $(h_\varepsilon)_{\varepsilon>0}$.

1.4. Comparison with existing results. For physical purposes, one may assume that $\varepsilon = 1$, which is a mere normalization, and that is why many articles about the perturbed Boltzmann equation only deal with this case. The associated Cauchy problem has been worked on over the past fifty years, starting with Grad [START_REF] Grad | Asymptotic equivalence of the Navier-Stokes and nonlinear Boltzmann equations[END_REF], and it has been studied in different spaces, such as weighted $L^2_v(H^l_x)$ spaces [START_REF] Ukai | On the existence of global solutions of mixed problem for non-linear Boltzmann equation[END_REF] or weighted Sobolev spaces [START_REF] Guo | Classical solutions to the Boltzmann equation for molecules with an angular cutoff[END_REF][19] [START_REF] Yu | Global classical solutions of the Boltzmann equation near Maxwellians[END_REF]. Other results have also been proved in $\mathbb{R}^d$ instead of the torus, see for instance [START_REF] Alexandre | The Boltzmann equation without angular cutoff in the whole space: II, Global existence for hard potential[END_REF][9] [START_REF] Nishida | Global solutions to the initial value problem for the nonlinear Boltzmann equation[END_REF], but that will not be the purpose of this article.
Our article explicitly deals with the general case for $\varepsilon$ and we prove results that are uniform in $\varepsilon$. To solve the Cauchy problem we use an iterative scheme, as in the papers mentioned above, but our strategy yields a condition for the existence of solutions in $H^s_{x,v}$ which is uniform in $\varepsilon$ (Theorem 6.3). In order to obtain such a result, we had to consider more precise estimates on the bilinear operator $\Gamma$. Bardos and Ukai [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF] obtained a similar result in $\mathbb{R}^d$, but in weighted Sobolev spaces, and did not prove any decay.
The behaviour of such global in time solutions has also been studied. Guo worked in weighted Sobolev spaces and proved the boundedness of solutions to equation (1.3) [START_REF] Guo | Classical solutions to the Boltzmann equation for molecules with an angular cutoff[END_REF], as well as an exponential decay (uniform in $\varepsilon$) [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF]. The norm involved in [START_REF] Guo | Classical solutions to the Boltzmann equation for molecules with an angular cutoff[END_REF] [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF] is quite intricate and requires many technical computations. To avoid such specific and technical calculations, the theory of hypocoercivity [START_REF] Mouhot | Quelques résultats d'hypocoercitivité en théorie cinétique collisionnelle[END_REF] focuses on properties of the Boltzmann operator that are quite similar to hypoellipticity. This theory has been used in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF] to obtain exponential decay in standard Sobolev spaces in the case $\varepsilon = 1$.
We use the idea of Mouhot and Neumann developed in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF], consisting of considering a functional on $H^s_{x,v}$ involving mixed scalar products. In this article we thus construct such a quadratic form, but with coefficients depending on $\varepsilon$. Working in the general case for $\varepsilon$ requires new calculations and the use of certain orthogonality properties of the bilinear operator $\Gamma$. Moreover, we construct a new norm out of this functional, which controls the $v$-derivatives by a factor $\varepsilon$.

The fact that the study yields a norm containing some $\varepsilon$ factors prevents us from having a uniform exponential decay for the $v$-derivatives. We use the idea of Guo [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF] and look at the microscopic part of the solution $h_\varepsilon$ every time we face a differentiation in $v$. This idea captures the interesting structure of $L$ on its orthogonal part. Combining it with our previous strategy fills the gap for the $v$-derivatives.
Several studies have been made of the different regimes of hydrodynamical limits for the Boltzmann equation, and complete formal derivations were obtained by Bardos, Golse and Levermore [START_REF] Bardos | Fluid dynamic limits of kinetic equations. I. Formal derivations[END_REF]. We refer the reader to [START_REF] Saint-Raymond | Hydrodynamic limits of the Boltzmann equation[END_REF] for an overview of the existing results and standard techniques. The particular case of the incompressible Navier-Stokes regime was first addressed by Sone [START_REF] Sone | Asymptotic theory of flow of rarefied gas over a smooth boundary ii[END_REF], who dealt with the asymptotic theory for the perturbed equation up to second order inside a smooth domain. Later, De Masi, Esposito and Lebowitz [START_REF] De Masi | Incompressible Navier-Stokes and Euler limits of the Boltzmann equation[END_REF] gave a first rigorous and constructive proof on the torus by considering the stability of Maxwellians whose mean velocity is a solution to the incompressible Navier-Stokes equations. Note that their result is of a different nature than the one presented here. Let us also mention the works of Golse and Saint-Raymond [START_REF] Golse | The Navier-Stokes limit of the Boltzmann equation for bounded collision kernels[END_REF] [START_REF] Golse | The incompressible Navier-Stokes limit of the Boltzmann equation for hard cutoff potentials[END_REF] (in $\mathbb{R}^3$) and Levermore and Masmoudi [START_REF] Levermore | From the Boltzmann equation to an incompressible Navier-Stokes-Fourier system[END_REF] (on $\mathbb{T}^d$), where the convergence is proved for appropriately scaled DiPerna-Lions renormalized solutions [START_REF] Diperna | On the Cauchy problem for Boltzmann equations: global existence and weak stability[END_REF].
Our uniform results enable us to derive a weak convergence in $H^s_x L^2_v$ towards solutions to the incompressible Navier-Stokes equations, together with the Boussinesq relation. We then obtain strong convergence using the Fourier study of the linear operator $L - v\cdot\nabla_x$ developed in [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF] and [START_REF] Ellis | The first and second fluid approximations to the linearized Boltzmann equation[END_REF], combined with the Duhamel formula. However, the study done in [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF] relies strongly on a stationary phase argument developed in [START_REF] Ukai | The incompressible limit and the initial layer of the compressible Euler equation[END_REF], which is no longer applicable on the torus. Indeed, the Fourier space of $\mathbb{R}^d$ is continuous, so integration by parts can be used in that frequency space; this tool is no longer available in the frequency space of the torus, which is discrete.
Theorem 2.5 shows that the behaviour of the hydrodynamical limit is quite different on the torus, where an averaging in time is necessary for general initial data. However, we obtain the same relation between the limit at $t = 0$ and the initial perturbation $h_{in}$, and also the existence of an initial layer. That is to say, we have convergence in $L^2_{[0,T]} = L^2([0,T])$ if and only if the initial perturbation satisfies certain physical properties, which turn out to be the same as in the case of $\mathbb{R}^d$ studied in [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF].
This convergence gives a perturbative result for the incompressible Navier-Stokes equations in Sobolev spaces around the steady solution. The regularity of the weak solutions we construct implies that they are in fact strong solutions (see Lions [START_REF] Lions | Mathematical topics in fluid mechanics[END_REF], Section 2.5, and more precisely Serrin [START_REF] Serrin | On the interior regularity of weak solutions of the Navier-Stokes equations[END_REF] and [START_REF] Serrin | The initial value problem for the Navier-Stokes equations[END_REF]). Moreover, our uniform exponential decay for solutions to the linearized Boltzmann equation yields an exponential decay for the perturbative solutions of the incompressible Navier-Stokes equations in higher Sobolev spaces. Such an exponential convergence to equilibrium has been derived in $H^1_0$ for $d = 2$ or $d = 3$ in [START_REF] Temam | Theory and numerical analysis[END_REF], or can be deduced from Proposition 3.7 in [START_REF] Majda | Vorticity and incompressible flow[END_REF] in higher Sobolev spaces for small initial data. The general convergence to equilibrium can be found in [START_REF] Matsumura | Initial-boundary value problems for the equations of motion of compressible viscous and heat-conductive fluids[END_REF] (small initial data) and in [START_REF] Novotný | Convergence to equilibria for compressible Navier-Stokes equations with large data[END_REF], but these papers focus on the general compressible case and no rate of decay is obtained. Furthermore, our results that do not involve hydrodynamical limits (existence and exponential decay) are applicable to a larger class of operators. In Appendix A we prove that those theorems also hold for other collisional kinetic models such as the linear relaxation, the semi-classical relaxation, the linear Fokker-Planck equation and the Landau equation with hard and moderately soft potential.

1.5. Organization of the paper. Section 2 is divided into two subsections.
As mentioned above, we shall use the hypocoercivity of the Boltzmann equation (1.1). This hypocoercivity can be described in terms of technical properties of $L$ and $\Gamma$ and, in order to obtain more general results, we take these properties as the basis of our paper. Thus, subsection 2.1 describes them in detail, and a proof that $L$ and $\Gamma$ indeed satisfy them is given in Appendix A. Most of them have been proved in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF], but we require more precise versions to deal with the general case.
The second subsection 2.2 is dedicated to a mathematical formulation of the results described in subsection 1.3.
As explained in the description of our strategy (subsection 1.3), we are going to study the flow of a functional involving the $L^2_{x,v}$-norms of the $x$- and $v$-derivatives together with mixed scalar products. To control this flow in time, we compute energy estimates for each of these terms in a toolbox (section 3), which will be used and referred to throughout the rest of the paper. Proofs of those energy estimates are given in Appendix B.
Finally, sections 4, 5, 6, 7 and 8 contain the proofs, respectively, of Theorem 2.1 (the strong semigroup property of the linear part of equation (1.3)), Proposition 2.2 (an a priori estimate on the constructed functional for the full model), Theorem 2.3 (existence and exponential decay of solutions to equation (1.3)), Theorem 2.4 (the uniform boundedness of the $v$-derivatives) and Theorem 2.5 (the hydrodynamical limit).
We notice here that section 6 is divided into two subsections: subsection 6.1 deals with the existence of solutions for all $\varepsilon > 0$, and subsection 6.2 proves the exponential decay of those solutions.
2. Main results
This section is divided into two parts. The first one translates the hypocoercivity aspects of the Boltzmann linear operator in terms of mathematical properties for L and Γ. Then, the second one states our results in terms of those assumptions.
2.1. Hypocoercivity assumptions. This section is dedicated to describing the framework and assumptions of the hypocoercivity theory. A state of the art of this theory can be found in [START_REF] Mouhot | Quelques résultats d'hypocoercitivité en théorie cinétique collisionnelle[END_REF].
2.1.1. Assumptions on the linear operator $L$.

Assumptions in $H^1_{x,v}$:

(H1): Coercivity and general controls. $L : L^2_v \longrightarrow L^2_v$ is a closed and self-adjoint operator with $L = K - \Lambda$ such that:

• $\Lambda$ is coercive:
 – there exists a norm $\|\cdot\|_{\Lambda_v}$ on $L^2_v$ such that
$$\forall h \in L^2_v, \quad \nu^\Lambda_0 \|h\|^2_{L^2_v} \leq \nu^\Lambda_1 \|h\|^2_{\Lambda_v} \leq \langle \Lambda(h), h\rangle_{L^2_v} \leq \nu^\Lambda_2 \|h\|^2_{\Lambda_v},$$
 – $\Lambda$ has a defect of coercivity regarding its $v$-derivatives:
$$\forall h \in H^1_v, \quad \langle \nabla_v \Lambda(h), \nabla_v h\rangle_{L^2_v} \geq \nu^\Lambda_3 \|\nabla_v h\|^2_{\Lambda_v} - \nu^\Lambda_4 \|h\|^2_{\Lambda_v}.$$

• There exists $C_L > 0$ such that
$$\forall h \in L^2_v, \ \forall g \in L^2_v, \quad \langle L(h), g\rangle_{L^2_v} \leq C_L \|h\|_{\Lambda_v}\|g\|_{\Lambda_v},$$

where $(\nu^\Lambda_s)_{1\leq s\leq 4}$ are strictly positive constants depending on the operator and the dimension $d$ of the velocity space.

As in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF], we define a new norm on $L^2_{x,v}$:
$$\|\cdot\|_\Lambda = \big\|\, \|\cdot\|_{\Lambda_v} \big\|_{L^2_x}.$$
(H2): Mixing property in velocity.
$$\forall \delta > 0, \ \exists C(\delta) > 0, \ \forall h \in H^1_v, \quad \langle \nabla_v K(h), \nabla_v h\rangle_{L^2_v} \leq C(\delta)\|h\|^2_{L^2_v} + \delta \|\nabla_v h\|^2_{L^2_v}.$$

(H3): Relaxation to equilibrium. We suppose that the kernel of $L$ is generated by $N$ functions which form an orthonormal basis for $\mathrm{Ker}(L)$:
$$\mathrm{Ker}(L) = \mathrm{Span}\{\phi_1(v), \ldots, \phi_N(v)\}.$$
Moreover, we assume that the $\phi_i$ are of the form $P_i(v)e^{-|v|^2/4}$, where $P_i$ is a polynomial.

Furthermore, denoting by $\pi_L$ the orthogonal projector in $L^2_v$ on $\mathrm{Ker}(L)$, we assume the following local coercivity property:
$$\exists \lambda > 0, \ \forall h \in L^2_v, \quad \langle L(h), h\rangle_{L^2_v} \leq -\lambda \|h^\perp\|^2_{\Lambda_v},$$
where $h^\perp = h - \pi_L(h)$ denotes the microscopic part of $h$ (the orthogonal to $\mathrm{Ker}(L)$ in $L^2_v$).
We are using the same hypotheses as in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF], except that we require the $\phi_i$ to be of a specific form. This additional requirement allows us to derive properties on the $v$-derivatives of $\pi_L$ that we state in the toolbox, section 3.
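For concreteness, in the case of the linearized Boltzmann operator around the global Maxwellian (the standard example, written here under the usual normalization $\mu(v) = (2\pi)^{-d/2}e^{-|v|^2/2}$, which we make explicit as an assumption), the kernel is spanned by the $N = d+2$ collision invariants
$$\mathrm{Ker}(L) = \mathrm{Span}\Big\{\mu^{1/2},\ v_1\mu^{1/2},\,\ldots,\,v_d\mu^{1/2},\ \frac{|v|^2-d}{\sqrt{2d}}\,\mu^{1/2}\Big\},$$
which is an orthonormal family in $L^2_v$ of the form required by (H3), since $\mu^{1/2}$ is a constant multiple of the Gaussian factor $e^{-|v|^2/4}$.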
Then we have two more properties on L in order to deal with higher order Sobolev spaces.
Assumptions in $H^s_{x,v}$, $s > 1$:

(H1'): Defect of coercivity for higher derivatives. We assume that $L$ satisfies (H1) along with the following property: for all $s \geq 1$, for all $|j|+|l| = s$ such that $|j| \geq 1$,
$$\forall h \in H^s_{x,v}, \quad \langle \partial^j_l \Lambda(h), \partial^j_l h\rangle_{L^2_{x,v}} \geq \nu^\Lambda_5 \|\partial^j_l h\|^2_\Lambda - \nu^\Lambda_6 \|h\|^2_{H^{s-1}_{x,v}},$$
where $\nu^\Lambda_5$ and $\nu^\Lambda_6$ are strictly positive constants depending on $L$ and $d$. We also define a new norm on $H^s_{x,v}$:
$$\|\cdot\|_{H^s_\Lambda} = \Big(\sum_{|j|+|l|\leq s} \|\partial^j_l \cdot\|^2_\Lambda\Big)^{1/2}.$$

(H2'): Mixing property in velocity.
$$\forall \delta > 0, \ \exists C(\delta) > 0, \ \forall h \in H^s_{x,v}, \quad \langle \partial^j_l K(h), \partial^j_l h\rangle_{L^2_{x,v}} \leq C(\delta)\|h\|^2_{H^{s-1}_{x,v}} + \delta\|\partial^j_l h\|^2_{L^2_{x,v}}.$$
2.1.2. Assumptions on the second order term $\Gamma$. To solve our problem uniformly in $\varepsilon$, we had to make the hypotheses of [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF] more precise, in order to have a deeper understanding of the operator $\Gamma$. This leads us to two different assumptions.
(H4): Control on the second order operator. $\Gamma : L^2_v \times L^2_v \longrightarrow L^2_v$ is a bilinear symmetric operator such that for all multi-indexes $j$ and $l$ with $|j|+|l| \leq s$, $s \geq 0$,
$$\langle \partial^j_l \Gamma(g,h), f\rangle_{L^2_{x,v}} \leq \begin{cases} G^s_{x,v}(g,h)\,\|f\|_\Lambda, & \text{if } j \neq 0,\\ G^s_x(g,h)\,\|f\|_\Lambda, & \text{if } j = 0,\end{cases}$$
with $G^s_{x,v}$ and $G^s_x$ such that $G^s_{x,v} \leq G^{s+1}_{x,v}$, $G^s_x \leq G^{s+1}_x$, and satisfying the following property:
$$\exists s_0 \in \mathbb{N}, \ \forall s \geq s_0, \ \exists C_\Gamma > 0, \quad \begin{cases} G^s_{x,v}(g,h) \leq C_\Gamma\big(\|g\|_{H^s_{x,v}}\|h\|_{H^s_\Lambda} + \|h\|_{H^s_{x,v}}\|g\|_{H^s_\Lambda}\big),\\ G^s_x(g,h) \leq C_\Gamma\big(\|h\|_{H^s_x L^2_v}\|g\|_{H^s_\Lambda} + \|g\|_{H^s_x L^2_v}\|h\|_{H^s_\Lambda}\big).\end{cases}$$

(H5): Orthogonality to the kernel of the linear operator.
$$\forall h, g \in \mathrm{Dom}(\Gamma) \cap L^2_v, \quad \Gamma(g,h) \in \mathrm{Ker}(L)^\perp.$$

2.2. Statement of the Theorems.

2.2.1. Uniform result for the linear Boltzmann equation. For $s$ in $\mathbb{N}^*$, some strictly positive constants $(b^{(s)}_{j,l})_{j,l}$, $(\alpha^{(s)}_l)_l$ and $(a^{(s)}_{i,l})_{i,l}$, and $0 < \varepsilon \leq 1$, we define the following functional on $H^s_{x,v}$, where we emphasize that there is a dependence on $\varepsilon$, which is the key point of our study:
$$\|\cdot\|_{H^s_\varepsilon} = \Bigg(\sum_{\substack{|j|+|l|\leq s\\ |j|\geq 1}} b^{(s)}_{j,l}\,\varepsilon^2\,\|\partial^j_l \cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s} \alpha^{(s)}_l\,\|\partial^0_l \cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\sum_{i,\,c_i(l)>0} a^{(s)}_{i,l}\,\varepsilon\,\langle \partial^{\delta_i}_{l-\delta_i}\cdot,\, \partial^0_l \cdot\rangle_{L^2_{x,v}}\Bigg)^{1/2}.$$
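For instance, for $s = 1$ this functional reduces to the four-coefficient expression used in Section 4 below (renaming the coefficients $A$, $\alpha$, $b$, $a$):
$$\|h\|^2_{H^1_\varepsilon} = A\|h\|^2_{L^2_{x,v}} + \alpha\|\nabla_x h\|^2_{L^2_{x,v}} + b\varepsilon^2\|\nabla_v h\|^2_{L^2_{x,v}} + a\varepsilon\,\langle\nabla_x h, \nabla_v h\rangle_{L^2_{x,v}},$$
where the only $v$-derivative appears with an $\varepsilon^2$ weight and the mixed term carries a single power of $\varepsilon$.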
We first study the perturbed equation (1.3) without taking into account the bilinear remainder operator. Letting $\pi_w$ denote the projector in $L^2_{x,v}$ onto $\mathrm{Ker}(w)$ for an operator $w$, we obtain the following semigroup property for $L$.
Theorem 2.1. If $L$ is a linear operator satisfying the conditions (H1'), (H2') and (H3), then there exists $0 < \varepsilon_d \leq 1$ such that for all $s$ in $\mathbb{N}^*$:

(1) for all $0 < \varepsilon \leq \varepsilon_d$, $G_\varepsilon = \varepsilon^{-2} L - \varepsilon^{-1} v\cdot\nabla_x$ generates a $C^0$-semigroup on $H^s_{x,v}$;

(2) there exist $C^{(s)}_G$, $(b^{(s)}_{j,l})$, $(\alpha^{(s)}_l)$, $(a^{(s)}_{i,l}) > 0$ such that for all $0 < \varepsilon \leq \varepsilon_d$:
$$\|\cdot\|^2_{H^s_\varepsilon} \sim \|\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|\leq s\\ |j|\geq 1}}\|\partial^j_l\cdot\|^2_{L^2_{x,v}},$$
and for all $h$ in $H^s_{x,v}$,
$$\langle G_\varepsilon(h), h\rangle_{H^s_\varepsilon} \leq -C^{(s)}_G\,\|h - \pi_{G_\varepsilon}(h)\|^2_{H^s_\Lambda}.$$
This theorem gives us an exponential decay for the semigroup generated by $G_\varepsilon$.

2.2.2. Uniform perturbative result for the Boltzmann equation. The next result states that if we add the bilinear remainder operator, then for $\varepsilon$ small enough it suffices to modify our new norm slightly in order to retain control on the solution.
Proposition 2.2. If $L$ is a linear operator satisfying the conditions (H1'), (H2') and (H3) and $\Gamma$ a bilinear operator satisfying (H4) and (H5), then there exists $0 < \varepsilon_d \leq 1$ such that for all $s$ in $\mathbb{N}^*$:

(1) there exist $K^{(s)}_0$, $K^{(s)}_1$, $K^{(s)}_2$, $(b^{(s)}_{j,l})$, $(\alpha^{(s)}_l)$, $(a^{(s)}_{i,l}) > 0$, independent of $\Gamma$ and $\varepsilon$, such that for all $0 < \varepsilon \leq \varepsilon_d$:
$$\|\cdot\|^2_{H^s_\varepsilon} \sim \|\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|\leq s\\ |j|\geq 1}}\|\partial^j_l\cdot\|^2_{L^2_{x,v}};$$

(2) for all $h_{in}$ in $H^s_{x,v} \cap \mathrm{Ker}(G_\varepsilon)^\perp$ and all $g$ in $\mathrm{Dom}(\Gamma) \cap H^s_{x,v}$, if we have a solution $h$ in $H^s_{x,v}$ to the equation
$$\partial_t h + \frac{1}{\varepsilon}\,v\cdot\nabla_x h = \frac{1}{\varepsilon^2}\,L(h) + \frac{1}{\varepsilon}\,\Gamma(g,h),$$
then
$$\frac{d}{dt}\|h\|^2_{H^s_\varepsilon} \leq -K^{(s)}_0\|h\|^2_{H^s_\Lambda} + K^{(s)}_1\big(G^s_x(g,h)\big)^2 + \varepsilon^2 K^{(s)}_2\big(G^s_{x,v}(g,h)\big)^2.$$
One can remark that the norm constructed above leaves the x-derivatives free while it controls the v-derivatives by a factor ε.
We emphasize that this result shows that the derivative of the norm is controlled by the $x$-derivatives of $\Gamma$ and by the full Sobolev norm of $\Gamma$, the latter weakened by a factor $\varepsilon^2$. This is important, as our norm $\|\cdot\|^2_{H^s_\varepsilon}$ controls the $L^2_v(H^s_x)$-norm by a factor of order $1$, whereas it controls the whole $H^s_{x,v}$-norm only by a multiplicative factor of order $1/\varepsilon$.

Theorem 2.3. Let $Q$ be a bilinear operator such that:
• the equation (1.2) admits an equilibrium $0 \leq \mu \in L^1(\mathbb{T}^d \times \mathbb{R}^d)$;
• the linearized operator $L = L(h)$ around $\mu$ with the scaling $f = \mu + \varepsilon\mu^{1/2}h$ satisfies (H1'), (H2') and (H3);
• the bilinear remainder term $\Gamma = \Gamma(h,h)$ in the linearization satisfies (H4) and (H5).

Then there exists $0 < \varepsilon_d \leq 1$ such that for any $s \geq s_0$ (defined in (H4)):

(1) there exist $(b^{(s)}_{j,l})$, $(\alpha^{(s)}_l)$, $(a^{(s)}_{i,l}) > 0$, independent of $\Gamma$ and $\varepsilon$, such that for all $0 < \varepsilon \leq \varepsilon_d$:
$$\|\cdot\|^2_{H^s_\varepsilon} \sim \|\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|\leq s\\ |j|\geq 1}}\|\partial^j_l\cdot\|^2_{L^2_{x,v}};$$

(2) there exist $\delta_s > 0$, $C_s > 0$ and $\tau_s > 0$ such that for all $0 < \varepsilon \leq \varepsilon_d$: for any distribution $0 \leq f_{in} \in L^1(\mathbb{T}^d \times \mathbb{R}^d)$ with $f_{in} = \mu + \varepsilon\mu^{1/2}h_{in} \geq 0$, $h_{in}$ in $\mathrm{Ker}(G_\varepsilon)^\perp$ and $\|h_{in}\|_{H^s_\varepsilon} \leq \delta_s$, there exists a unique global smooth (in $H^s_{x,v}$, continuous in time) solution $f_\varepsilon = f_\varepsilon(t,x,v)$ to (1.2) which, moreover, satisfies $f_\varepsilon = \mu + \varepsilon\mu^{1/2}h_\varepsilon \geq 0$ with:
$$\|h_\varepsilon\|_{H^s_\varepsilon} \leq \|h_{in}\|_{H^s_\varepsilon}\,e^{-\tau_s t}.$$
The fact that we ask $h_{in}$ to be in $\mathrm{Ker}(G_\varepsilon)^\perp$ simply states that we want $f_{in}$ to have the same physical quantities as the global equilibrium $\mu$. This is a compulsory requirement, as one can easily check that the physical quantities
$$\int_{\mathbb{T}^d\times\mathbb{R}^d} f_\varepsilon(x,v)\,dxdv, \qquad \int_{\mathbb{T}^d\times\mathbb{R}^d} v\,f_\varepsilon(x,v)\,dxdv, \qquad \int_{\mathbb{T}^d\times\mathbb{R}^d} |v|^2 f_\varepsilon(x,v)\,dxdv$$
are preserved in time (see [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] for instance).
Notice that the $H^s_\varepsilon$-norm in this theorem is the same as the one we constructed in Proposition 2.2.
2.2.3. The boundedness of the $v$-derivatives. As a corollary we have that the $H^s_x(L^2_v)$-norm decays exponentially, independently of $\varepsilon$, but that the only control we have on the $H^s_{x,v}$-norm is
$$\|h_\varepsilon\|_{H^s_{x,v}} \leq \frac{\delta_s}{\varepsilon}\,e^{-\tau_s t}.$$
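This rate comes directly from the norm equivalence in Theorem 2.3: the $\varepsilon^2$-weight in front of the $v$-derivatives gives, roughly,
$$\varepsilon^2\,\|h_\varepsilon\|^2_{H^s_{x,v}} \lesssim \|h_\varepsilon\|^2_{H^s_\varepsilon} \leq \|h_{in}\|^2_{H^s_\varepsilon}\,e^{-2\tau_s t} \leq \delta_s^2\, e^{-2\tau_s t}.$$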
This seems to tell us that the $v$-derivatives can blow up at a rate $1/\varepsilon$. However, Guo [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF] showed that there is no explosion if one controls independently the fluid part and the microscopic part of the solution. This idea, combined with our original one, leads to the construction of a new norm which controls only the microscopic part of the solution whenever we face a derivative in the $v$ variable. We define the following positive quadratic form
$$\|\cdot\|^2_{H^s_{\varepsilon\perp}} = \sum_{\substack{|j|+|l|\leq s\\ |j|\geq 1}} b^{(s)}_{j,l}\,\|\partial^j_l (\mathrm{Id} - \pi_L)\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s} \alpha^{(s)}_l\,\|\partial^0_l \cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\sum_{i,\,c_i(l)>0} a^{(s)}_{i,l}\,\varepsilon\,\langle \partial^{\delta_i}_{l-\delta_i}\cdot,\, \partial^0_l \cdot\rangle_{L^2_{x,v}}.$$
Theorem 2.4. Under the same conditions as in Theorem 2.3, for all $s \geq s_0$, there exist $(b^{(s)}_{j,l})$, $(\alpha^{(s)}_l)$, $(a^{(s)}_{i,l}) > 0$ and $0 < \varepsilon_d \leq 1$ such that for all $0 < \varepsilon \leq \varepsilon_d$:

(1) $\|\cdot\|_{H^s_{\varepsilon\perp}} \sim \|\cdot\|_{H^s_{x,v}}$, independently of $\varepsilon$;

(2) if $h_\varepsilon$ is a solution of (1.3) in $H^s_{x,v}$ with $\|h_{in}\|_{H^s_{\varepsilon\perp}} \leq \delta'_s$, then
$$\|h_\varepsilon\|_{H^s_{\varepsilon\perp}} \leq \delta'_s\,e^{-\tau'_s t},$$
where $\delta'_s$ and $\tau'_s$ are strictly positive constants independent of $\varepsilon$.

This theorem builds a functional that is equivalent to the standard Sobolev norm, independently of $\varepsilon$. Thus, it yields an exponential decay for the $v$-derivatives as well as for the $x$-derivatives. However, the distorted norm used in Theorem 2.3 requires less control on the $v$-derivatives of the initial data, suggesting that, in the limit as $\varepsilon$ goes to zero, almost only the $x$-variables have to be controlled.

2.2.4. The hydrodynamical limit on the torus for Maxwellian particles. Our theorem states that one can really expect a convergence of solutions of collisional kinetic models near equilibrium towards a solution of fluid dynamics equations. Indeed, the smallness assumption on the initial perturbation does not depend on the parameter $\varepsilon$, as long as $\varepsilon$ is small enough.
We then define the following macroscopic quantities:
• the particle density $\rho_\varepsilon(t,x) = \langle \mu(v)^{1/2}, h_\varepsilon(t,x,v)\rangle_{L^2_v}$,
• the mean velocity $u_\varepsilon(t,x) = \langle v\,\mu(v)^{1/2}, h_\varepsilon(t,x,v)\rangle_{L^2_v}$,
• the temperature $\theta_\varepsilon(t,x) = \dfrac{1}{d}\,\langle (|v|^2 - d)\,\mu(v)^{1/2}, h_\varepsilon(t,x,v)\rangle_{L^2_v}$.
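Assuming the standard normalization of the global Maxwellian, $\int \mu\,dv = 1$, $\int v\mu\,dv = 0$ and $\int |v|^2\mu\,dv = d$ (the usual convention, made explicit here), one checks directly that for a profile of the limiting form given below these quantities are exactly $\rho$, $u$ and $\theta$; for instance,
$$\Big\langle \mu^{1/2},\ \Big(\rho + v\cdot u + \tfrac{1}{2}(|v|^2-d)\theta\Big)\mu^{1/2}\Big\rangle_{L^2_v} = \rho\int_{\mathbb{R}^d}\mu\,dv + u\cdot\int_{\mathbb{R}^d}v\mu\,dv + \frac{\theta}{2}\int_{\mathbb{R}^d}(|v|^2-d)\mu\,dv = \rho.$$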
Theorem 2.3 tells us that, for $s \geq s_0$, the sequence $(h_\varepsilon)_{\varepsilon>0}$ converges (up to an extraction) weakly-* in $L^\infty_t(H^s_x L^2_v)$ towards a function $h$. Such a weak convergence enables us to use Theorem 1.1 of [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF], which is a slight modification of the result in [START_REF] Bardos | Fluid dynamic limits of kinetic equations. I. Formal derivations[END_REF], to get that:

(1) $h$ is in $\mathrm{Ker}(L)$, so of the form
$$h(t,x,v) = \Big(\rho(t,x) + v\cdot u(t,x) + \frac{1}{2}\big(|v|^2 - d\big)\theta(t,x)\Big)\,\mu(v)^{1/2};$$

(2) $(\rho_\varepsilon, u_\varepsilon, \theta_\varepsilon)$ converges weakly* in $L^\infty_t(H^s_x)$ towards $(\rho, u, \theta)$;

(3) $(\rho, u, \theta)$ satisfies the incompressible Navier-Stokes equations (1.4) as well as the Boussinesq relation (1.5).

While such a result confirms that one can derive the incompressible Navier-Stokes equations from the Boltzmann equation, it unfortunately gives neither the continuity of $h$ nor the initial condition satisfied by $(\rho, u, \theta)$ in terms of $(\rho_{in}, u_{in}, \theta_{in})$, the macroscopic quantities associated to $h_{in}$. Our next, and final, step is therefore to link these two triplets, and so to understand the convergence $h_\varepsilon \to h$ more deeply. This is the purpose of the following theorem.

Theorem 2.5. Consider $s \geq s_0$ and $h_{in}$ in $H^s_{x,v}$ such that $\|h_{in}\|_{H^s_\varepsilon} \leq \delta_s$. Then,
$(h_\varepsilon)_{\varepsilon>0}$ exists for all $0 < \varepsilon \leq \varepsilon_d$ and converges weakly* in $L^\infty_t(H^s_x L^2_v)$ towards $h$ such that $h \in \mathrm{Ker}(L)$, with $\nabla_x \cdot u = 0$ and $\rho + \theta = 0$. Furthermore, $\int_0^T h\,dt$ belongs to $H^s_x L^2_v$ and there exists $C > 0$ such that
$$\Big\|\int_0^{+\infty} h\,dt - \int_0^{+\infty} h_\varepsilon\,dt\Big\|_{H^s_x L^2_v} \leq C\,\varepsilon\,|\ln(\varepsilon)|.$$
One can have a strong convergence in $L^2_{[0,T]} H^s_x L^2_v$ only if $h_{in}$ is in $\mathrm{Ker}(L)$ with $\nabla_x \cdot u_{in} = 0$ and $\rho_{in} + \theta_{in} = 0$ (initial layer conditions). Moreover, in that case we have
$$\|h - h_\varepsilon\|_{L^2_{[0,+\infty)} H^s_x L^2_v} \leq C\,\varepsilon\,|\ln(\varepsilon)|,$$
and for all $\delta$ in $[0,1]$, if $h_{in}$ belongs to $H^{s+\delta}_x L^2_v$,
$$\sup_{t\in[0,+\infty)} \|h - h_\varepsilon\|_{H^s_x L^2_v}(t) \leq C\,\varepsilon^{\min(\delta,1/2)}.$$
This theorem proves the strong convergence of $(\rho_\varepsilon, u_\varepsilon, \theta_\varepsilon)$ towards $(\rho, u, \theta)$ but, above all, it shows that $(\rho, u, \theta)$ is the solution to the incompressible Navier-Stokes equations, together with the Boussinesq relation, satisfying the initial conditions:
• $u(0,x) = Pu_{in}(x)$, where $Pu_{in}(x)$ is the divergence-free part of $u_{in}(x)$;
• $\rho(0,x) = -\theta(0,x) = \frac{1}{2}\big(\rho_{in}(x) - \theta_{in}(x)\big)$.
Finally, note that in the case of initial data satisfying the initial layer conditions, the strong convergence in time requires a little more regularity from the initial data. This fact was already noticed in $\mathbb{R}^d$ (see [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF], Lemma 6.1) but was overcome there by considering weighted norms in velocity.
3. Toolbox: fluid projection and a priori energy estimates
In this section we collect inequalities that we will use and refer to throughout the sequel. We start with some properties concerning $\pi_L$, the projection in $L^2_v$ onto $\mathrm{Ker}(L)$. Then, because we want to estimate all the terms appearing in the $H^s_{x,v}$-norm in order to estimate the functionals $\|\cdot\|_{H^s_\varepsilon}$ and $\|\cdot\|_{H^s_{\varepsilon\perp}}$, we give upper bounds on their time derivatives. The proofs are purely technical and the interested reader will find them in Appendix B.

Throughout this section we assume that $L$ satisfies properties (H1'), (H2') and (H3), that $\Gamma$ satisfies (H4) and (H5), and that $0 < \varepsilon \leq 1$.
3.1. Properties concerning the fluid projection $\pi_L$. Since $L$ acts on $L^2_v$ with $\mathrm{Ker}(L) = \mathrm{Span}(\phi_1, \ldots, \phi_N)$, where $(\phi_i)_{1\leq i\leq N}$ is an orthonormal family, we directly obtain a useful formula for the orthogonal projection on $\mathrm{Ker}(L)$ in $L^2_v$:
$$\forall h \in L^2_v, \quad \pi_L(h) = \sum_{i=1}^N \Big(\int_{\mathbb{R}^d} h\,\phi_i\,dv\Big)\,\phi_i. \tag{3.1}$$
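As a quick sanity check (a standard computation, spelled out here), formula (3.1) does define the orthogonal projection: using the orthonormality $\langle\phi_i,\phi_k\rangle_{L^2_v} = \delta_{ik}$,
$$\pi_L\big(\pi_L(h)\big) = \sum_{i=1}^N\Big\langle \sum_{k=1}^N \langle h,\phi_k\rangle_{L^2_v}\,\phi_k,\ \phi_i\Big\rangle_{L^2_v}\phi_i = \sum_{i=1}^N \langle h,\phi_i\rangle_{L^2_v}\,\phi_i = \pi_L(h),$$
and the self-adjointness of $\pi_L$ follows in the same way.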
Moreover, (H3) states that $\phi_i = P_i(v)e^{-|v|^2/4}$, where $P_i$ is a polynomial. Therefore, direct computations and the Cauchy-Schwarz inequality give that $\pi_L$ is continuous on $H^s_{x,v}$, with
$$\forall s \in \mathbb{N}, \ \exists C_{\pi s} > 0, \ \forall h \in H^s_{x,v}, \quad \|\pi_L(h)\|^2_{H^s_{x,v}} \leq C_{\pi s}\,\|h\|^2_{H^s_{x,v}}. \tag{3.2}$$
More precisely, one finds that for all $s$ in $\mathbb{N}$,
$$\forall |j|+|l| = s, \ \forall h \in H^s_{x,v}, \quad \|\partial^j_l \pi_L(h)\|^2_{L^2_{x,v}} \leq C_{\pi s}\,\|\partial^0_l \pi_L(h)\|^2_{L^2_{x,v}}. \tag{3.3}$$
Finally, from the construction of the $\Lambda$-norm, one finds that in all the collisional kinetic equations considered here we have
$$\exists C_\pi > 0, \ \forall h \in L^2_{x,v}, \quad \|\pi_L(h)\|^2_\Lambda \leq C_\pi\,\|h\|^2_{L^2_{x,v}}. \tag{3.4}$$
We can also use the properties of the torus to obtain Poincaré type inequalities; these are useful thanks to the next proposition, which is proved in Appendix B.

Remark 3.2. In this proposition, $\mathrm{Ker}(G)$ has to be understood as linear combinations with constant coefficients of the functions $\phi_i$. This subtlety has to be emphasized since in $L^2_{x,v}$, $\mathrm{Ker}(L)$ includes all linear combinations of the $\phi_i$ with coefficients being functions of $x$.
Therefore, if we define, for $0 < \varepsilon \leq 1$,
$$G_\varepsilon = \frac{1}{\varepsilon^2}\,L - \frac{1}{\varepsilon}\,v\cdot\nabla_x,$$
then we have a nice description of $\pi_{G_\varepsilon}$:
$$\forall h \in L^2_{x,v}, \quad \pi_{G_\varepsilon}(h) = \sum_{i=1}^N \Big(\int_{\mathbb{T}^d}\int_{\mathbb{R}^d} h\,\phi_i\,dx\,dv\Big)\,\phi_i.$$
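Written out explicitly (a direct consequence of this formula and (3.1), recorded here for later use):
$$\pi_{G_\varepsilon}(h) = 0 \iff \forall i,\ \int_{\mathbb{T}^d}\int_{\mathbb{R}^d} h\,\phi_i\,dx\,dv = 0 \iff \int_{\mathbb{T}^d}\pi_L(h)(x,\cdot)\,dx = 0.$$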
That means that $\pi_{G_\varepsilon}(h)$ is, up to a multiplicative constant, the mean of $\pi_L(h)$ over the torus, and that $\pi_{G_\varepsilon}(h)$ does not depend on the $x$ variable. We deduce that if $h$ belongs to $\mathrm{Ker}(G_\varepsilon)^\perp$, then $\pi_L(h)$ has zero mean on the torus, and we can apply the Poincaré inequality on the torus:
$$\forall h \in \mathrm{Ker}(G_\varepsilon)^\perp, \quad \|\pi_L(h)\|^2_{L^2_{x,v}} \leq C_p\,\|\nabla_x \pi_L(h)\|^2_{L^2_{x,v}} \leq C_p\,\|\nabla_x h\|^2_{L^2_{x,v}}. \tag{3.5}$$

3.2. A priori energy estimates. Our work in this article is to study the evolution of the norms involved in the definitions of the functionals $\|\cdot\|_{H^s_\varepsilon}$ and $\|\cdot\|_{H^s_{\varepsilon\perp}}$ and to combine them to obtain the results stated above. Appendix B contains the proofs, which are technical computations together with some choices of decomposition, of the following a priori estimates. Note that all the constants $K_1$, $K_{dx}$ and $K_{s-1}$ used in the inequalities below are independent of $\varepsilon$, $\Gamma$ and $g$, and depend constructively only on the constants defined in the hypocoercivity assumptions or in the subsection above. The number $e$ can be any positive real number and will be chosen later.
We would like to study both the linear and the non-linear models, but the computations turn out to be very similar. In order to avoid writing long and similar inequalities twice, we write in parentheses the terms that have to be added for the full model.
Let $g$ be a function in $H^s_{x,v}$. We now consider a function $h$ in $\mathrm{Ker}(G_\varepsilon)^\perp \cap H^s_{x,v}$, for some $s$ in $\mathbb{N}^*$, which is a solution of the linear (linearized) Boltzmann equation:
$$\partial_t h + \frac{1}{\varepsilon}\,v\cdot\nabla_x h = \frac{1}{\varepsilon^2}\,L(h)\ \Big(+\,\frac{1}{\varepsilon}\,\Gamma(g,h)\Big).$$
We remind the reader that the following notation is used: $h^\perp = h - \pi_L(h)$.

3.2.1. Time evolutions for quantities in $H^1_{x,v}$. We first write the $L^2_{x,v}$-norm estimate
$$\frac{d}{dt}\|h\|^2_{L^2_{x,v}} \leq -\frac{\lambda}{\varepsilon^2}\,\|h^\perp\|^2_\Lambda\ \Big(+\,\frac{1}{\lambda}\,\big(G^0_x(g,h)\big)^2\Big). \tag{3.6}$$
Then the time evolution of the $x$-derivatives
$$\frac{d}{dt}\|\nabla_x h\|^2_{L^2_{x,v}} \leq -\frac{\lambda}{\varepsilon^2}\,\|\nabla_x h^\perp\|^2_\Lambda\ \Big(+\,\frac{1}{\lambda}\,\big(G^1_x(g,h)\big)^2\Big), \tag{3.7}$$
and of the $v$-derivatives
$$\frac{d}{dt}\|\nabla_v h\|^2_{L^2_{x,v}} \leq \frac{K_1}{\varepsilon^2}\,\|h^\perp\|^2_\Lambda + \frac{K_{dx}}{\varepsilon^2}\,\|\nabla_x h\|^2_{L^2_{x,v}} - \frac{\nu^\Lambda_3}{\varepsilon^2}\,\|\nabla_v h\|^2_\Lambda\ \Big(+\,\frac{3}{\nu^\Lambda_3}\,\big(G^1_{x,v}(g,h)\big)^2\Big). \tag{3.8}$$
Finally, we will need a control on the scalar product as well, as explained in the strategy subsection 1.3. Notice that we have some freedom here, as $e$ can be any positive number:
$$\frac{d}{dt}\langle\nabla_x h, \nabla_v h\rangle_{L^2_{x,v}} \leq \frac{C_L e}{\varepsilon^3}\,\|\nabla_x h^\perp\|^2_\Lambda - \frac{1}{\varepsilon}\,\|\nabla_x h\|^2_{L^2_{x,v}} + \frac{2C_L}{e\varepsilon}\,\|\nabla_v h\|^2_\Lambda\ \Big(+\,\frac{e}{C_L\varepsilon}\,\big(G^1_x(g,h)\big)^2\Big). \tag{3.9}$$
3.2.2. Time evolutions for quantities in $H^s_{x,v}$. We consider multi-indexes $j$ and $l$ such that $|j|+|l| = s$. As in the previous case, we have a control on the time evolution of the pure $x$-derivatives,
$$\frac{d}{dt}\|\partial^0_l h\|^2_{L^2_{x,v}} \leq -\frac{\lambda}{\varepsilon^2}\,\|\partial^0_l h^\perp\|^2_\Lambda\ \Big(+\,\frac{1}{\lambda}\,\big(G^s_x(g,h)\big)^2\Big). \tag{3.10}$$
In the case where $|j| \geq 1$, that is to say when we have at least one derivative in $v$, we obtain the following upper bound:
$$\frac{d}{dt}\|\partial^j_l h\|^2_{L^2_{x,v}} \leq -\frac{\nu^\Lambda_5}{\varepsilon^2}\,\|\partial^j_l h\|^2_\Lambda + \frac{3(\nu^\Lambda_1)^2 d}{\nu^\Lambda_5(\nu^\Lambda_0)^2}\sum_{i,\,c_i(j)>0}\|\partial^{j-\delta_i}_{l+\delta_i}h\|^2_\Lambda + \frac{K_{s-1}}{\varepsilon^2}\,\|h\|^2_{H^{s-1}_{x,v}}\ \Big(+\,\frac{3}{\nu^\Lambda_5}\,\big(G^s_{x,v}(g,h)\big)^2\Big). \tag{3.11}$$
It is useful to single out the particular case where $|j| = 1$:
$$\frac{d}{dt}\|\partial^{\delta_i}_{l-\delta_i}h\|^2_{L^2_{x,v}} \leq -\frac{\nu^\Lambda_5}{\varepsilon^2}\,\|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda + \frac{3\nu^\Lambda_1}{\nu^\Lambda_5\nu^\Lambda_0}\,\|\partial^0_l h\|^2_{L^2_{x,v}} + \frac{K_{s-1}}{\varepsilon^2}\,\|h\|^2_{H^{s-1}_{x,v}}\ \Big(+\,\frac{3}{\nu^\Lambda_5}\,\big(G^s_{x,v}(g,h)\big)^2\Big). \tag{3.12}$$
Finally, we will need the time evolution of the following scalar product:
$$\frac{d}{dt}\langle\partial^{\delta_i}_{l-\delta_i}h, \partial^0_l h\rangle_{L^2_{x,v}} \leq \frac{C_L e}{\varepsilon^3}\,\|\partial^0_l h^\perp\|^2_\Lambda - \frac{1}{\varepsilon}\,\|\partial^0_l h\|^2_{L^2_{x,v}} + \frac{2C_L}{e\varepsilon}\,\|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda\ \Big(+\,\frac{e}{C_L\varepsilon}\,\big(G^s_x(g,h)\big)^2\Big), \tag{3.13}$$
where we still have some freedom, as $e$ is any positive number.
We emphasize here that we were careful about which derivatives are involved in the terms containing $\Gamma$. This is because our functional $\|\cdot\|_{H^s_\varepsilon}$ controls the $H^s_x(L^2_v)$-norm by a mere constant, whereas it controls the entire $H^s_{x,v}$-norm only by a factor $1/\varepsilon$.
3.2.3. Time evolutions for orthogonal quantities in $H^s_{x,v}$. For Theorem 2.4 we will need four other inequalities which are a little more intricate, as they rely on the shape of $\pi_L$ described in the subsection above. The proofs are written in Appendix B; here we consider the whole equation in the setting $g = h$.
We want the time evolution of the $v$-derivatives of the orthogonal (microscopic) part of $h$; as suggested in [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF], this allows us to really take advantage of the structure of the linear operator $L$ on its orthogonal part:
$$\frac{d}{dt}\|\nabla_v h^\perp\|^2_{L^2_{x,v}} \leq \frac{K^\perp_1}{\varepsilon^2}\,\|h^\perp\|^2_\Lambda + K^\perp_{dx}\,\|\nabla_x h\|^2_{L^2_{x,v}} - \frac{\nu^\Lambda_3}{2\varepsilon^2}\,\|\nabla_v h^\perp\|^2_\Lambda + \frac{3}{\nu^\Lambda_3}\,\big(G^1_{x,v}(h,h)\big)^2. \tag{3.14}$$
Then we have a new bound for the scalar product used before:
$$\frac{d}{dt}\langle\nabla_x h, \nabla_v h\rangle_{L^2_{x,v}} \leq \frac{K^\perp e}{\varepsilon^3}\,\|\nabla_x h^\perp\|^2_\Lambda + \frac{1}{4C_{\pi 1}C_\pi C_p\,e\varepsilon}\,\|\nabla_v h^\perp\|^2_\Lambda - \frac{1}{2\varepsilon}\,\|\nabla_x h\|^2_{L^2_{x,v}} + \frac{4C_\pi}{\varepsilon}\,\big(G^1_{x,v}(h,h)\big)^2, \tag{3.15}$$
where $e$ is any number greater than $1$.
As usual, we need the same kind of bounds in higher order Sobolev spaces. The reader may notice that the bounds we are about to write are more intricate than the ones in the previous subsection, because they involve more terms with fewer derivatives. We consider multi-indexes $j$ and $l$ such that $|j|+|l| = s$. This time we really have to split into two different cases.
Firstly, when $|j| \geq 2$,
$$\begin{aligned}\frac{d}{dt}\|\partial^j_l h^\perp\|^2_{L^2_{x,v}} \leq{}& -\frac{\nu^\Lambda_5}{\varepsilon^2}\,\|\partial^j_l h^\perp\|^2_\Lambda + \frac{9(\nu^\Lambda_1)^2 d}{2(\nu^\Lambda_0)^2\nu^\Lambda_5}\sum_{i,\,c_i(j)>0}\|\partial^{j-\delta_i}_{l+\delta_i}h^\perp\|^2_\Lambda\\ &+ K^\perp_{dl}\sum_{|l'|\leq s-1}\|\partial^0_{l'}h\|^2_{L^2_{x,v}} + \frac{K^\perp_{s-1}}{\varepsilon^2}\,\|h^\perp\|^2_{H^{s-1}_{x,v}} + \frac{3}{\nu^\Lambda_5}\,\big(G^s_{x,v}(h,h)\big)^2.\end{aligned} \tag{3.16}$$
Then, in the case $|j| = 1$,
$$\frac{d}{dt}\|\partial^{\delta_i}_{l-\delta_i}h^\perp\|^2_{L^2_{x,v}} \leq -\frac{\nu^\Lambda_5}{\varepsilon^2}\,\|\partial^{\delta_i}_{l-\delta_i}h^\perp\|^2_\Lambda + K^\perp_{dl}\sum_{|l'|=s}\|\partial^0_{l'}h\|^2_{L^2_{x,v}} + \frac{K^\perp_{s-1}}{\varepsilon^2}\,\|h^\perp\|^2_{H^{s-1}_{x,v}} + \frac{3}{\nu^\Lambda_5}\,\big(G^s_{x,v}(h,h)\big)^2. \tag{3.17}$$
Finally, we give a new version of the control over the scalar product in higher order Sobolev spaces:
$$\begin{aligned}\frac{d}{dt}\langle\partial^{\delta_i}_{l-\delta_i}h, \partial^0_l h\rangle_{L^2_{x,v}} \leq{}& \frac{K^\perp e}{\varepsilon^3}\,\|\partial^0_l h^\perp\|^2_\Lambda + \frac{1}{4C_{\pi s}C_\pi d\,e\varepsilon}\,\|\partial^{\delta_i}_{l-\delta_i}h^\perp\|^2_\Lambda - \frac{1}{2\varepsilon}\,\|\partial^0_l h\|^2_{L^2_{x,v}}\\ &+ \frac{1}{4d\varepsilon}\sum_{|l'|\leq s-1}\|\partial^0_{l'}h\|^2_{L^2_{x,v}} + \frac{2C_\pi}{\varepsilon}\,\big(G^s_{x,v}(h,h)\big)^2,\end{aligned} \tag{3.18}$$
for any $e \geq 1$.
4. Linear case: proof of Theorem 2.1

In this section we consider the linear equation
$$\partial_t h = G_\varepsilon(h), \quad \text{on } \mathbb{T}^d \times \mathbb{R}^d.$$
Theorem 2.1 will be proved by induction on $s$. We recall the functional we work with on $H^s_{x,v}$:
• in the case $s = 1$:
$$\|h\|^2_{H^1_\varepsilon} = A\|h\|^2_{L^2_{x,v}} + \alpha\|\nabla_x h\|^2_{L^2_{x,v}} + b\varepsilon^2\|\nabla_v h\|^2_{L^2_{x,v}} + a\varepsilon\,\langle\nabla_x h, \nabla_v h\rangle_{L^2_{x,v}};$$
• in the case $s > 1$:
$$\|h\|^2_{H^s_\varepsilon} = \sum_{\substack{|j|+|l|\leq s\\ |j|\geq 1}} b^{(s)}_{j,l}\,\varepsilon^2\,\|\partial^j_l h\|^2_{L^2_{x,v}} + \sum_{|l|\leq s} \alpha^{(s)}_l\,\|\partial^0_l h\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\sum_{i,\,c_i(l)>0} a^{(s)}_{i,l}\,\varepsilon\,\langle\partial^{\delta_i}_{l-\delta_i}h, \partial^0_l h\rangle_{L^2_{x,v}}.$$
Theorem 2.1 only requires us to choose suitable coefficients that give us the expected inequality and equivalence.
Consider $h_{in}$ in $H^s_{x,v} \cap \mathrm{Dom}(G_\varepsilon)$ and let $h$ be a solution of $\partial_t h = G_\varepsilon(h)$ on $\mathbb{T}^d \times \mathbb{R}^d$ such that $h(0,\cdot,\cdot) = h_{in}(\cdot,\cdot)$. Notice that if $h_{in}$ is in $H^s_{x,v} \cap \mathrm{Dom}(G_\varepsilon) \cap \mathrm{Ker}(G_\varepsilon)$, then the associated solution does not change in time: $\partial_t h = 0$. Therefore the fluid part of a solution does not evolve in time, and the semigroup is the identity on $\mathrm{Ker}(G_\varepsilon)$. Besides, we can see directly from the definition and the self-adjointness of $L$ that $h \in \mathrm{Ker}(G_\varepsilon)^\perp$ for all $t$ if $h_{in}$ belongs to $\mathrm{Ker}(G_\varepsilon)^\perp$. Therefore, to prove the theorem it is enough to consider $h_{in}$ in $H^s_{x,v} \cap \mathrm{Dom}(G_\varepsilon) \cap \mathrm{Ker}(G_\varepsilon)^\perp$.
4.1. The case $s = 1$. From now on we assume that the operator $L$ satisfies the conditions (H1), (H2) and (H3), and that $0 < \varepsilon \leq 1$.
If (H3) holds for $L$, then $\varepsilon^{-2}L$ is a non-positive self-adjoint operator on $L^2_{x,v}$. Moreover, $\varepsilon^{-1}v\cdot\nabla_x$ is skew-symmetric on $L^2_{x,v}$. Therefore the $L^2_{x,v}$-norm decreases along the flow, and it can be deduced that $G_\varepsilon$ yields a $C^0$-semigroup on $L^2_{x,v}$ for all positive $\varepsilon$ (see [START_REF] Kato | Perturbation theory for linear operators[END_REF] for the general theory and [START_REF] Ukai | On the existence of global solutions of mixed problem for non-linear Boltzmann equation[END_REF] for its use in our case).
Using the toolbox, which is possible since $h$ is in $\mathrm{Ker}(G_\varepsilon)^\perp$ for all $t$, we just have to consider the linear combination $A(3.6) + \alpha(3.7) + b\varepsilon^2(3.8) + a\varepsilon(3.9)$ to obtain
$$\begin{aligned}\frac{d}{dt}\|h\|^2_{H^1_\varepsilon} \leq{}& \frac{1}{\varepsilon^2}\big[bK_1 - \lambda A\big]\|h^\perp\|^2_\Lambda + \frac{1}{\varepsilon^2}\big[C_L e a - \lambda\alpha\big]\|\nabla_x h^\perp\|^2_\Lambda\\ &+ \Big[\frac{2C_L a}{e} - b\nu^\Lambda_3\Big]\|\nabla_v h\|^2_\Lambda + \big[bK_{dx} - a\big]\|\nabla_x h\|^2_{L^2_{x,v}}.\end{aligned} \tag{4.1}$$
Then we make the following choices:
(1) We fix $b$ such that $-\nu^\Lambda_3 b < -1$.
(2) We fix $A$ big enough such that $bK_1 - \lambda A \leq -1$.
(3) We fix $a$ big enough such that $bK_{dx} - a \leq -1$.
(4) We fix $e$ big enough such that $\dfrac{2C_L a}{e} - b\nu^\Lambda_3 \leq -1$.
(5) We fix $\alpha$ big enough such that $C_L e a - \lambda\alpha \leq -1$, and such that $a^2 \leq \alpha b$ and $b \leq \alpha$.
This leads to, because $0 < \varepsilon \leq 1$:
$$\frac{d}{dt}\|h\|^2_{H^1_\varepsilon} \leq -\Big(\|h^\perp\|^2_\Lambda + \|\nabla_x h^\perp\|^2_\Lambda + \|\nabla_v h\|^2_\Lambda + \|\nabla_x h\|^2_{L^2_{x,v}}\Big).$$
Finally, we can apply the Poincaré inequality (3.5), together with the equivalence (3.4) of the $L^2_{x,v}$-norm and the $\Lambda$-norm on the fluid part $\pi_L$, to get
$$\exists C, C' > 0, \quad \|h\|^2_\Lambda \leq C\Big(\|h^\perp\|^2_\Lambda + \frac{1}{2}\|\nabla_x h\|^2_{L^2_{x,v}}\Big), \qquad \|\nabla_x h\|^2_\Lambda \leq C'\Big(\|\nabla_x h^\perp\|^2_\Lambda + \frac{1}{2}\|\nabla_x h\|^2_{L^2_{x,v}}\Big).$$
Therefore we have proved the following result:
$$\exists C^{(1)}_G > 0, \ \forall\, 0 < \varepsilon \leq 1, \quad \frac{d}{dt}\|h\|^2_{H^1_\varepsilon} \leq -C^{(1)}_G\Big(\|h\|^2_\Lambda + \|\nabla_{x,v} h\|^2_\Lambda\Big).$$
With these constants, $\|\cdot\|_{H^1_\varepsilon}$ is equivalent to $\big(\|h\|^2_{L^2_{x,v}} + \|\nabla_x h\|^2_{L^2_{x,v}} + \varepsilon^2\|\nabla_v h\|^2_{L^2_{x,v}}\big)^{1/2}$, since $a^2 \leq \alpha b$ and $b \leq \alpha$, and hence:
$$A\|h\|^2_{L^2_{x,v}} + \frac{b}{2}\Big(\|\nabla_x h\|^2_{L^2_{x,v}} + \varepsilon^2\|\nabla_v h\|^2_{L^2_{x,v}}\Big) \leq \|h\|^2_{H^1_\varepsilon} \leq A\|h\|^2_{L^2_{x,v}} + \frac{3\alpha}{2}\Big(\|\nabla_x h\|^2_{L^2_{x,v}} + \varepsilon^2\|\nabla_v h\|^2_{L^2_{x,v}}\Big).$$
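The two-sided bound follows from controlling the mixed term by the Cauchy-Schwarz and Young inequalities (a routine step, written out here for convenience): since $a^2 \leq \alpha b$,
$$\big|a\varepsilon\,\langle\nabla_x h, \nabla_v h\rangle_{L^2_{x,v}}\big| \leq \sqrt{\alpha b}\,\varepsilon\,\|\nabla_x h\|_{L^2_{x,v}}\|\nabla_v h\|_{L^2_{x,v}} \leq \frac{\alpha}{2}\,\|\nabla_x h\|^2_{L^2_{x,v}} + \frac{b}{2}\,\varepsilon^2\|\nabla_v h\|^2_{L^2_{x,v}}.$$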
The results above give us the expected theorem for $s = 1$.
4.2. The induction in higher order Sobolev spaces. We now assume that the theorem is true up to the integer $s-1$, $s > 1$. We suppose that $L$ satisfies (H1'), (H2') and (H3) and we consider $\varepsilon$ in $(0,1]$. Let $h_{in}$ be in $H^s_{x,v} \cap \mathrm{Dom}(G_\varepsilon) \cap \mathrm{Ker}(G_\varepsilon)^\perp$ and let $h$ be the solution of $\partial_t h = G_\varepsilon(h)$ such that $h(0,\cdot,\cdot) = h_{in}(\cdot,\cdot)$.
As before, h belongs to Ker(G ε ) ⊥ for all t and thus we can use the results given by the toolbox.
Thanks to the proof in the case $s = 1$, we know how to handle the case where there is only a difference of one derivative between the number of derivatives in $x$ and in $v$. Therefore, instead of working with the entire norm of $H^s_{x,v}$, we will look at an equivalent of the Sobolev semi-norm. We define:
$$F_s(t) = \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}} \varepsilon^2 B\,\|\partial^j_l h\|^2_{L^2_{x,v}} + B'\sum_{|l|=s}\sum_{i,\,c_i(l)>0} Q_{l,i}(t),$$
$$Q_{l,i}(t) = \alpha\|\partial^0_l h\|^2_{L^2_{x,v}} + b\varepsilon^2\|\partial^{\delta_i}_{l-\delta_i}h\|^2_{L^2_{x,v}} + a\varepsilon\,\langle\partial^{\delta_i}_{l-\delta_i}h, \partial^0_l h\rangle_{L^2_{x,v}},$$
where the strictly positive constants will be chosen later. As in the section above, we shall study the time evolution of every term involved in $F_s$ in order to bound $dF_s/dt(t)$ from above with negative coefficients.

4.2.1. The time evolution of $Q_{l,i}$. We first study the time evolution of $Q_{l,i}$ for a given $l$ with $|l| = s$ and $i$ such that $c_i(l) > 0$. The toolbox already gives us all the bounds we need, and we just have to gather them in the following way: $\alpha(3.10) + b\varepsilon^2(3.12) + a\varepsilon(3.13)$. This leads to, because $0 < \varepsilon \leq 1$,
$$\frac{d}{dt}Q_{l,i}(t) \leq \frac{1}{\varepsilon^2}\big[C_L e a - \lambda\alpha\big]\|\partial^0_l h^\perp\|^2_\Lambda + \Big[\frac{2C_L a}{e} - \nu^\Lambda_5 b\Big]\|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda + \Big[\frac{3\nu^\Lambda_1}{\nu^\Lambda_5\nu^\Lambda_0}b - a\Big]\|\partial^0_l h\|^2_{L^2_{x,v}} + K_{s-1}b\,\|h\|^2_{H^{s-1}_{x,v}}.$$
One can notice that, except for the last term, we have exactly the same kind of bound as in (4.1) in the proof of the case $s = 1$. Therefore we can choose $\alpha$, $b$, $a$, $e$, independently of $\varepsilon$, such that there exist $K_Q > 0$ and $C_{s-1} > 0$ such that for all $0 < \varepsilon \leq 1$:
$$Q_{l,i}(t) \sim \|\partial^0_l h\|^2_{L^2_{x,v}} + \varepsilon^2\|\partial^{\delta_i}_{l-\delta_i}h\|^2_{L^2_{x,v}},$$
$$\frac{d}{dt}Q_{l,i}(t) \leq -K_Q\Big(\|\partial^0_l h\|^2_\Lambda + \|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda\Big) + C_{s-1}\|h\|^2_{H^{s-1}_{x,v}},$$
where we used (3.4) (equivalence of the $L^2_{x,v}$- and $\Lambda$-norms on the fluid part) to get
$$\|\partial^0_l h\|^2_\Lambda \leq C'\Big(\|\partial^0_l h^\perp\|^2_\Lambda + \|\partial^0_l h\|^2_{L^2_{x,v}}\Big).$$
4.2.2. The time evolution of $F_s$ and conclusion. The last result about $Q_{l,i}$ gives us that
$$F_s(t) \sim \sum_{|l|=s}\|\partial^0_l h\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|=s\\ |j|\geq 1}}\|\partial^j_l h\|^2_{L^2_{x,v}}.$$
To study the time evolution of $F_s$ we just need to combine the evolution of $Q_{l,i}$ with that of $\|\partial^j_l h\|^2_{L^2_{x,v}}$, which is given in the toolbox by (3.11):
$$\begin{aligned}\frac{d}{dt}F_s(t) \leq{}& \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}\big(-\nu^\Lambda_5 B\big)\|\partial^j_l h\|^2_\Lambda + \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}\frac{3(\nu^\Lambda_1)^2 d}{\nu^\Lambda_5(\nu^\Lambda_0)^2}\,B\varepsilon^2\sum_{i,\,c_i(j)>0}\|\partial^{j-\delta_i}_{l+\delta_i}h\|^2_\Lambda\\ &- K_Q B'\sum_{|l|=s}\sum_{i,\,c_i(l)>0}\Big(\|\partial^0_l h\|^2_\Lambda + \|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda\Big) + \Bigg(\sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}K_{s-1}B + \sum_{|l|=s}\sum_{i,\,c_i(l)>0}B'C_{s-1}\Bigg)\|h\|^2_{H^{s-1}_{x,v}}.\end{aligned} \tag{4.2}$$
Then we choose the coefficient $B = 2/\nu^\Lambda_5$ and, rearranging the sums, we obtain
$$\frac{d}{dt}F_s(t) \leq \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}\Big[\frac{6d(\nu^\Lambda_1)^2}{(\nu^\Lambda_5\nu^\Lambda_0)^2}\varepsilon^2 - 2\Big]\|\partial^j_l h\|^2_\Lambda + \sum_{\substack{|j|+|l|=s\\ |j|=1}}\Big[\frac{6d(\nu^\Lambda_1)^2}{(\nu^\Lambda_5\nu^\Lambda_0)^2}\varepsilon^2 - K_Q B'\Big]\|\partial^j_l h\|^2_\Lambda + \sum_{\substack{|j|+|l|=s\\ |j|=0}}\big(-K_Q B'\big)\|\partial^j_l h\|^2_\Lambda + C^{(s-1)}_+(B')\,\|h\|^2_{H^{s-1}_{x,v}}.$$
Therefore we can choose the remaining coefficients:
(1) $\varepsilon_d = \min\Big\{1,\ \dfrac{(\nu^\Lambda_5\nu^\Lambda_0)^2}{6d(\nu^\Lambda_1)^2}\Big\}$;
(2) we fix $B'$ big enough such that $K_Q B' \geq 1$ and $\dfrac{6d(\nu^\Lambda_1)^2}{(\nu^\Lambda_5\nu^\Lambda_0)^2}\,\varepsilon_d^2 - K_Q B' \leq -1$.
Everything is now fixed in $C^{(s-1)}_+(B')$, so it is just a constant $C^{(s-1)}_+$ that does not depend on $\varepsilon$. We then have the final result:
$$\forall\, 0 < \varepsilon \leq \varepsilon_d, \quad \frac{d}{dt}F_s(t) \leq C^{(s-1)}_+\,\|h\|^2_{H^{s-1}_{x,v}} - \sum_{|j|+|l|=s}\|\partial^j_l h\|^2_\Lambda.$$
Then, since $\|\cdot\|_\Lambda$ controls the $L^2$-norm, we have:
$$\forall\, 0 < \varepsilon \leq \varepsilon_d, \quad \frac{d}{dt}F_s(t) \leq C^{(s)}_+\sum_{|j|+|l|\leq s-1}\|\partial^j_l h\|^2_\Lambda - \sum_{|j|+|l|=s}\|\partial^j_l h\|^2_\Lambda.$$
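Here the comparison between the $\Lambda$-norm and the $L^2$-norm is the one contained in (H1), namely $\nu^\Lambda_0\|h\|^2_{L^2_v} \leq \nu^\Lambda_1\|h\|^2_{\Lambda_v}$, which after integration in $x$ gives
$$\|\partial^j_l h\|^2_{L^2_{x,v}} \leq \frac{\nu^\Lambda_1}{\nu^\Lambda_0}\,\|\partial^j_l h\|^2_\Lambda.$$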
This inequality is true for all $s$, and therefore we can take a linear combination of the $F_s$ to obtain the following, where the $C_p$ are constants that do not depend on $\varepsilon$ since $C^{(s)}_+$ does not depend on it:
$$\forall\, 0 < \varepsilon \leq \varepsilon_d, \quad \frac{d}{dt}\Big(\sum_{p=1}^n C_p F_p(t)\Big) \leq -C^{(s)}_G\sum_{|j|+|l|\leq s}\|\partial^j_l h\|^2_\Lambda.$$
We can use the induction assumption from rank $1$ up to rank $s-1$ to find that this linear combination is equivalent to
$$\|\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|\leq s\\ |j|\geq 1}}\|\partial^j_l\cdot\|^2_{L^2_{x,v}},$$
and so it fits the expected requirements.
5. Estimate for the full equation: proof of Proposition 2.2
We will prove the proposition by induction on $s$. From now on we assume that $L$ satisfies hypotheses (H1'), (H2') and (H3), that $\Gamma$ satisfies properties (H4) and (H5), and we take $g$ in $H^s_{x,v}$.
We take $h_{in}$ in $H^s_{x,v} \cap \mathrm{Ker}(G_\varepsilon)^\perp$ and we consider the associated solution, denoted by $h$, of
$$\partial_t h + \frac{1}{\varepsilon}\,v\cdot\nabla_x h = \frac{1}{\varepsilon^2}\,L(h) + \frac{1}{\varepsilon}\,\Gamma(g,h).$$
One can notice that thanks to (H5) and the self-adjointness of L, h remains in Ker(G ε ) ⊥ for all times.
Besides, when considering the time evolution we find a term due to $G_\varepsilon$ and another due to $\Gamma$. Therefore, we will use the results found in the toolbox, this time including the terms in parentheses.
5.1. The case $s = 1$. We want to study the following functional on $H^1_{x,v}$:
$$\|h\|^2_{H^1_\varepsilon} = A\|h\|^2_{L^2_{x,v}} + \alpha\|\nabla_x h\|^2_{L^2_{x,v}} + b\varepsilon^2\|\nabla_v h\|^2_{L^2_{x,v}} + a\varepsilon\,\langle\nabla_x h, \nabla_v h\rangle_{L^2_{x,v}}.$$
Therefore, using the toolbox, we just have to consider the linear combination $A(3.6) + \alpha(3.7) + b\varepsilon^2(3.8) + a\varepsilon(3.9)$ to get
$$\begin{aligned}\frac{d}{dt}\|h\|^2_{H^1_\varepsilon} \leq{}& \frac{1}{\varepsilon^2}\big[bK_1 - \lambda A\big]\|h^\perp\|^2_\Lambda + \frac{1}{\varepsilon^2}\big[C_L e a - \lambda\alpha\big]\|\nabla_x h^\perp\|^2_\Lambda + \Big[\frac{2C_L a}{e} - b\nu^\Lambda_3\Big]\|\nabla_v h\|^2_\Lambda + \big[bK_{dx} - a\big]\|\nabla_x h\|^2_{L^2_{x,v}}\\ &+ \frac{A\nu^\Lambda_1}{\nu^\Lambda_0\lambda}\big(G^0_x(g,h)\big)^2 + \Big[\frac{\alpha\nu^\Lambda_1}{\nu^\Lambda_0\lambda} + \frac{\nu^\Lambda_1 e a}{C_L\nu^\Lambda_0}\Big]\big(G^1_x(g,h)\big)^2 + \frac{3\nu^\Lambda_1 b}{\nu^\Lambda_0\nu^\Lambda_3}\,\varepsilon^2\big(G^1_{x,v}(g,h)\big)^2.\end{aligned} \tag{5.1}$$
One can see that we obtain exactly the same upper bound as in the proof of the previous theorem, equation (4.1), up to the additional terms involving $\Gamma$ (remember that $G^s_x$ is increasing in $s$). Therefore we can make the same choices for $A$, $\alpha$, $b$, $a$ and $e$, independently of $\Gamma$ and $g$, to get that
h 2 H 1 ε ∼ h 2 L 2 x,v + ∇ x h 2 L 2 x,v + ε 2 ∇ v h 2 L 2
x,v , and that, once those parameters are fixed, there exist K
(1) 0 , K (1) 1 , K (1)
2 > 0 such that for all 0 < ε 1,
d dt h 2 H 1 ε -K (1) 0 h 2 Λ + ∇ x,v h 2 Λ + K (1) 1 G 1 x (g, h) 2 + ε 2 K (1) 2 G 1 x,v (g, h) 2 ,
which is the expected result in the case s = 1.
5.2. The induction in higher order Sobolev spaces. We now assume that the theorem is true up to the integer $s-1$, $s > 1$. We suppose that $L$ satisfies (H1'), (H2') and (H3) and we consider $\varepsilon$ in $(0,1]$. Since $h_{in}$ is in $\mathrm{Ker}(G_\varepsilon)^\perp$, $h$ belongs to $\mathrm{Ker}(G_\varepsilon)^\perp$ for all $t$, and so we can use the results given in the toolbox.
As in the proof in the linear case we define:
$$F_s(t) = \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}} \varepsilon^2 B\,\|\partial^j_l h\|^2_{L^2_{x,v}} + B'\sum_{|l|=s}\sum_{i,\,c_i(l)>0} Q_{l,i}(t),$$
$$Q_{l,i}(t) = \alpha\|\partial^0_l h\|^2_{L^2_{x,v}} + b\varepsilon^2\|\partial^{\delta_i}_{l-\delta_i}h\|^2_{L^2_{x,v}} + a\varepsilon\,\langle\partial^{\delta_i}_{l-\delta_i}h, \partial^0_l h\rangle_{L^2_{x,v}},$$
where the strictly positive constants will be chosen later.
As in the section above, we shall study the time evolution of every term involved in $F_s$ in order to bound $dF_s/dt(t)$ from above with the expected coefficients.
5.2.1. The time evolution of $Q_{l,i}$. We first study the time evolution of $Q_{l,i}$ for a given $l$ with $|l| = s$ and $i$ such that $c_i(l) > 0$. The toolbox already gives us all the bounds we need, and we just have to gather them in the following way: $\alpha(3.10) + b\varepsilon^2(3.12) + a\varepsilon(3.13)$. This leads to, because $0 < \varepsilon \leq 1$,
$$\begin{aligned}\frac{d}{dt}Q_{l,i}(t) \leq{}& \frac{1}{\varepsilon^2}\big[C_L e a - \lambda\alpha\big]\|\partial^0_l h^\perp\|^2_\Lambda + \Big[\frac{2C_L a}{e} - \nu^\Lambda_5 b\Big]\|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda + \Big[\frac{3\nu^\Lambda_1}{\nu^\Lambda_5\nu^\Lambda_0}b - a\Big]\|\partial^0_l h\|^2_{L^2_{x,v}} + K_{s-1}b\,\|h\|^2_{H^{s-1}_{x,v}}\\ &+ \Big[\frac{\alpha\nu^\Lambda_1}{\nu^\Lambda_0\lambda} + \frac{\nu^\Lambda_1 e a}{C_L\nu^\Lambda_0}\Big]\big(G^s_x(g,h)\big)^2 + \frac{3\nu^\Lambda_1 b}{\nu^\Lambda_0\nu^\Lambda_5}\,\varepsilon^2\big(G^s_{x,v}(g,h)\big)^2.\end{aligned}$$
One can notice that, except for the term in $\|h\|^2_{H^{s-1}_{x,v}}$, we have exactly the same kind of bound as in the case $s = 1$, given by (5.1). Therefore we can choose $\alpha$, $b$, $a$, $e$, independently of $\varepsilon$, $\Gamma$ and $g$, such that there exist $K_Q$, $K_{\Gamma 1}$, $K_{\Gamma 2} > 0$ and $C_{s-1} > 0$ such that for all $0 < \varepsilon \leq 1$:
$$Q_{l,i}(t) \sim \|\partial^0_l h\|^2_{L^2_{x,v}} + \varepsilon^2\|\partial^{\delta_i}_{l-\delta_i}h\|^2_{L^2_{x,v}},$$
$$\frac{d}{dt}Q_{l,i}(t) \leq -K_Q\Big(\|\partial^0_l h\|^2_\Lambda + \|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda\Big) + K_{\Gamma 1}\big(G^s_x(g,h)\big)^2 + \varepsilon^2 K_{\Gamma 2}\big(G^s_{x,v}(g,h)\big)^2 + C_{s-1}\|h\|^2_{H^{s-1}_{x,v}},$$
where we used (3.4) (equivalence of the $L^2_{x,v}$- and $\Lambda$-norms on the fluid part) to get
$$\|\partial^0_l h\|^2_\Lambda \leq C'\Big(\|\partial^0_l h^\perp\|^2_\Lambda + \|\partial^0_l h\|^2_{L^2_{x,v}}\Big).$$
5.2.2. The time evolution of $F_s$ and conclusion. The last result about $Q_{l,i}$ gives us that
$$F_s(t) \sim \sum_{|l|=s}\|\partial^0_l h\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|=s\\ |j|\geq 1}}\|\partial^j_l h\|^2_{L^2_{x,v}},$$
so it remains to show that $F_s$ satisfies the property described by the theorem for some $B$ and $B'$.
To study the time evolution of $F_s$ we just need to combine the evolution of $Q_{l,i}$ with that of $\|\partial^j_l h\|^2_{L^2_{x,v}}$, which is given in the toolbox by (3.11):
$$\begin{aligned}\frac{d}{dt}F_s(t) \leq{}& \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}\big(-\nu^\Lambda_5 B\big)\|\partial^j_l h\|^2_\Lambda + \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}\frac{3(\nu^\Lambda_1)^2 d}{\nu^\Lambda_5(\nu^\Lambda_0)^2}\,B\varepsilon^2\sum_{i,\,c_i(j)>0}\|\partial^{j-\delta_i}_{l+\delta_i}h\|^2_\Lambda\\ &- K_Q B'\sum_{|l|=s}\sum_{i,\,c_i(l)>0}\Big(\|\partial^0_l h\|^2_\Lambda + \|\partial^{\delta_i}_{l-\delta_i}h\|^2_\Lambda\Big) + \Bigg(\sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}K_{s-1}B + \sum_{|l|=s}\sum_{i,\,c_i(l)>0}B'C_{s-1}\Bigg)\|h\|^2_{H^{s-1}_{x,v}}\\ &+ \sum_{|l|=s}\sum_{i,\,c_i(l)>0}B'K_{\Gamma 1}\big(G^s_x(g,h)\big)^2 + \varepsilon^2\Bigg(\sum_{|l|=s}\sum_{i,\,c_i(l)>0}B'K_{\Gamma 2} + \sum_{\substack{|j|+|l|=s\\ |j|\geq 2}}\frac{3\nu^\Lambda_1}{\nu^\Lambda_0\nu^\Lambda_5}B\Bigg)\big(G^s_{x,v}(g,h)\big)^2.\end{aligned} \tag{5.2}$$
One can easily see that, apart from the terms including $\Gamma$, we have exactly the same bound as in the proof in the linear case, equation (4.2). Therefore we can choose $B$, $B'$ and $\varepsilon_d$ as we did there, thus independently of $\Gamma$ and $g$, to have, for all $0 < \varepsilon \leq \varepsilon_d$,
$$\frac{d}{dt}F_s(t) \leq C^{(s-1)}_+\,\|h\|^2_{H^{s-1}_{x,v}} - \sum_{|j|+|l|=s}\|\partial^j_l h\|^2_\Lambda + K_{\Gamma 1}\big(G^s_x(g,h)\big)^2 + \varepsilon^2 K_{\Gamma 2}\big(G^s_{x,v}(g,h)\big)^2,$$
with $C^{(s-1)}_+$, $K_{\Gamma 1}$ and $K_{\Gamma 2}$ positive constants independent of $\varepsilon$, $\Gamma$ and $g$.
To conclude, we just have to take, as in the linear case, a linear combination of the $(F_p)_{p\leq s}$ and use the induction hypothesis (remember that both $G^p_{x,v}$ and $G^p_x$ are increasing functions of $p$) to obtain the expected result:
$$\forall\, 0 < \varepsilon \leq \varepsilon_d, \quad \frac{d}{dt}\Big(\sum_{p=1}^n C_p F_p(t)\Big) \leq -K^{(s)}_0\sum_{|j|+|l|\leq s}\|\partial^j_l h\|^2_\Lambda + K^{(s)}_1\big(G^s_x(g,h)\big)^2 + \varepsilon^2 K^{(s)}_2\big(G^s_{x,v}(g,h)\big)^2,$$
with this linear combination being equivalent to
$$\|\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|\leq s\\ |j|\geq 1}}\|\partial^j_l\cdot\|^2_{L^2_{x,v}},$$
and so it fits the expected requirements.
6. Existence and exponential decay: proof of Theorem 2.3
One can clearly see that solving the kinetic equation (1.2) in the setting $f = \mu + \varepsilon\mu^{1/2}h$ is equivalent to solving the linearized kinetic equation (1.3) directly. Therefore we focus only on the linearized equation.
The proof relies on the a priori estimate derived in the previous section. We shall use this inequality as a bootstrap: first to obtain the existence of solutions through an iteration scheme, and then the exponential decay of those solutions, as long as the initial data is small enough.

6.1. Proof of the existence of global solutions.
6.1.1. Construction of solutions to a linearized problem. Here we follow the classical method of approximating the solution by a sequence of solutions of linearizations of the initial problem. We then have to construct a functional on Sobolev spaces for which this sequence can be uniformly bounded, in order to be able to extract a convergent subsequence.
Starting from $h^0$ in $H^s_{x,v} \cap \mathrm{Ker}(G_\varepsilon)^\perp$, to be defined later, we define the function $h^{n+1}$ in $H^s_{x,v}$ by induction on $n \geq 0$:
$$\begin{cases}\ \partial_t h^{n+1} + \dfrac{1}{\varepsilon}\,v\cdot\nabla_x h^{n+1} = \dfrac{1}{\varepsilon^2}\,L(h^{n+1}) + \dfrac{1}{\varepsilon}\,\Gamma(h^n, h^{n+1}),\\[6pt]\ h^{n+1}(0,x,v) = h_{in}(x,v).\end{cases} \tag{6.1}$$
First we need to check that our sequence is well-defined.
Lemma 6.1. Let $L$ satisfy assumptions (H1'), (H2') and (H3), and let $\Gamma$ satisfy assumptions (H4) and (H5). Then there exists $0 < \varepsilon_d \leq 1$ such that for all $s \geq s_0$ (defined in (H4)) there exists $\delta_s > 0$ such that for all $0 < \varepsilon \leq \varepsilon_d$, if $\|h_{in}\|_{H^s_\varepsilon} \leq \delta_s$ then the sequence $(h^n)_{n\in\mathbb{N}}$ is well-defined, continuous in time, in $H^s_{x,v}$, and belongs to $\mathrm{Ker}(G_\varepsilon)^\perp$.

Proof of Lemma 6.1. We proceed by induction: suppose that for a fixed $n \geq 0$ we have constructed $h^n$ in $H^s_{x,v}$, which is true for $h^0$.
Using the previous notation, one can see that we are in fact trying to solve the linear equation on the torus
$$\partial_t h^{n+1} = G_\varepsilon(h^{n+1}) + \frac{1}{\varepsilon}\,\Gamma(h^n, h^{n+1})$$
with $h_{in}$ as initial data.
The existence of a solution $h^{n+1}$ has already been shown for each equation covered by the hypocoercivity theory in the case $\varepsilon = 1$ (see the papers described in the introduction). It was proved by fixed point arguments applied to Duhamel's formula. In order not to write the same estimates several times, one may use our next Lemma 6.2 together with Duhamel's formula (instead of considering directly the time derivative of $h^{n+1}$) to get a fixed point argument as long as $h_{in}$ is small enough, the smallness not depending on $\varepsilon$.
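For the reader's convenience, the mild (Duhamel) formulation in question reads as follows (a standard rewriting of (6.1), valid under the semigroup property of Theorem 2.1):
$$h^{n+1}(t) = e^{tG_\varepsilon}h_{in} + \frac{1}{\varepsilon}\int_0^t e^{(t-s)G_\varepsilon}\,\Gamma\big(h^n, h^{n+1}\big)(s)\,ds,$$
on which the fixed point argument is performed.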
As shown in the study of the linear part of the linearized model, under assumptions (H1'), (H2') and (H3), $G_\varepsilon$ generates a $C^0$-semigroup on $H^s_{x,v}$ for all $0 < \varepsilon \leq \varepsilon_d$. Moreover, hypothesis (H4) shows that $\Gamma(h^n,\cdot)$ is a bounded linear operator from $(H^s_{x,v}, E(\cdot))$ to $(H^s_{x,v}, \|\cdot\|_{H^s_{x,v}})$. Thus $h^{n+1}$ is in $H^s_{x,v}$. The membership of $\mathrm{Ker}(G_\varepsilon)^\perp$ is direct, since $\Gamma(h^n,\cdot)$ takes values in $\mathrm{Ker}(G_\varepsilon)^\perp$ (hypothesis (H5)).
We then have to bound the sequence strongly, at least in short time, to have a chance of obtaining a convergent subsequence, up to an extraction.

6.1.2. Boundedness of the sequence. We are about to prove the global existence in time of solutions in $C(\mathbb{R}_+, \|\cdot\|_{H^s_\varepsilon})$. That will give us existence of solutions in standard Sobolev spaces as long as the initial data is small enough in the sense of the $H^s_\varepsilon$-norm, which is smaller than the standard $H^s_{x,v}$-norm. To achieve this we define a new functional on $H^s_{x,v}$:
$$E(h) = \sup_{t\in\mathbb{R}_+}\Big(\|h(t)\|^2_{H^s_\varepsilon} + \int_0^t \|h(s)\|^2_{H^s_\Lambda}\,ds\Big). \tag{6.2}$$
Lemma 6.2. Let $L$ satisfy assumptions (H1'), (H2') and (H3), and let $\Gamma$ satisfy assumptions (H4) and (H5). Then there exists $0 < \varepsilon_d \leq 1$ such that for all $s \geq s_0$ (defined in (H4)) there exists $\delta_s > 0$, independent of $\varepsilon$, such that for all $0 < \varepsilon \leq \varepsilon_d$, if $\|h_{in}\|_{H^s_\varepsilon} \leq \delta_s$ then
$$\big(E(h^n) \leq \delta_s\big) \ \Rightarrow\ \big(E(h^{n+1}) \leq \delta_s\big).$$
Proof of Lemma 6.2. Let $t > 0$. We know that $h_{in}$ belongs to $H^s_{x,v} \cap \mathrm{Ker}(G_\varepsilon)^\perp$. Moreover, thanks to Lemma 6.1, $(h^n)$ is well-defined, in $\mathrm{Ker}(G_\varepsilon)^\perp$ and in $H^s_{x,v}$, since $s \geq s_0$; and $\Gamma$ satisfies (H5). Therefore we can use Proposition 2.2 to write, for $\varepsilon \leq \varepsilon_d$ ($\varepsilon_d$ being the minimum of the one in Lemma 6.1 and the one in Proposition 2.2),
$$\frac{d}{dt}\|h^{n+1}\|^2_{H^s_\varepsilon} \leq -K^{(s)}_0\|h^{n+1}\|^2_{H^s_\Lambda} + K^{(s)}_1\big(G^s_x(h^n, h^{n+1})\big)^2 + \varepsilon^2 K^{(s)}_2\big(G^s_{x,v}(h^n, h^{n+1})\big)^2.$$
We can use hypothesis (H4) and the fact that
$$C_m\Big(\|\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}} + \varepsilon^2\sum_{\substack{|l|+|j|\leq s\\ |j|\geq 1}}\|\partial^j_l\cdot\|^2_{L^2_{x,v}}\Big) \leq \|\cdot\|^2_{H^s_\varepsilon} \leq C_M\,\|\cdot\|^2_{H^s_{x,v}} \tag{6.3}$$
to get the following upper bounds:
$$\big(G^s_x(h^n,h^{n+1})\big)^2 \leq \frac{C^2_\Gamma}{C_m}\Big(\|h^n\|^2_{H^s_\varepsilon}\|h^{n+1}\|^2_{H^s_\Lambda} + \|h^{n+1}\|^2_{H^s_\varepsilon}\|h^n\|^2_{H^s_\Lambda}\Big),$$
$$\big(G^s_{x,v}(h^n,h^{n+1})\big)^2 \leq \frac{C^2_\Gamma}{C_m\,\varepsilon^2}\Big(\|h^n\|^2_{H^s_\varepsilon}\|h^{n+1}\|^2_{H^s_\Lambda} + \|h^{n+1}\|^2_{H^s_\varepsilon}\|h^n\|^2_{H^s_\Lambda}\Big).$$
Therefore we have the following upper bound, where $K_1$ and $K_2$ are constants independent of $\varepsilon$:
$$\frac{d}{dt}\|h^{n+1}\|^2_{H^s_\varepsilon} \leq -K^{(s)}_0\|h^{n+1}\|^2_{H^s_\Lambda} + K_1\|h^n\|^2_{H^s_\varepsilon}\|h^{n+1}\|^2_{H^s_\Lambda} + K_2\|h^{n+1}\|^2_{H^s_\varepsilon}\|h^n\|^2_{H^s_\Lambda}$$
$$\leq \big(K_1 E(h^n) - K^{(s)}_0\big)\|h^{n+1}\|^2_{H^s_\Lambda} + K_2\,E(h^{n+1})\,\|h^n\|^2_{H^s_\Lambda}.$$
Assume now that $E(h^n) \leq K^{(s)}_0/2K_1$. Integrating the inequality above between $0$ and $t$, one obtains
$$\|h^{n+1}\|^2_{H^s_\varepsilon} + \frac{K^{(s)}_0}{2}\int_0^t \|h^{n+1}\|^2_{H^s_\Lambda}\,ds \leq \|h_{in}\|^2_{H^s_\varepsilon} + K\,E(h^{n+1})E(h^n).$$
This is true for all $t > 0$. We then define $C = \min\{1,\ K^{(s)}_0/2\}$; if $E(h^n) \leq C/2K$ we get
$$E(h^{n+1}) \leq \frac{2}{C}\,\|h_{in}\|^2_{H^s_\varepsilon}.$$
Therefore choosing $M^{(s)} = \min\{C/2K,\ K^{(s)}_0/2K_1\}$ and $\delta_s \leq \min\{M^{(s)}C/2,\ M^{(s)}\}$ gives us the expected result.

6.1.3. The global existence of solutions. We are now able to prove the global existence result.

Theorem 6.3. Let $L$ satisfy assumptions (H1'), (H2') and (H3), and let $\Gamma$ satisfy assumptions (H4) and (H5). Then there exists $0 < \varepsilon_d \leq 1$ such that for all $s \geq s_0$ (defined in (H4)) there exists $\delta_s > 0$ such that for all $0 < \varepsilon \leq \varepsilon_d$:
if $\|h_{in}\|_{H^s_\varepsilon} \leq \delta_s$, then there exists a solution of (1.3) in $C(\mathbb{R}_+, E(\cdot))$ and it satisfies, for some constant $C > 0$,
$$E(h) \leq C\,\|h_{in}\|^2_{H^s_\varepsilon}.$$
Proof of Theorem 6.3. Thanks to Lemma 6.2, by induction we can strongly bound the sequence $(h^n)_{n\in\mathbb{N}}$ as long as $E(h^0) \leq \delta_s$, the constant being defined in Lemma 6.2. Therefore, defining $h^0$ to be $h_{in}$ at $t = 0$ and $0$ elsewhere gives us $E(h^0) = \|h_{in}\|^2_{H^s_\varepsilon} \leq \delta_s$. Thus we have the boundedness of the sequence $(h^n)_{n\in\mathbb{N}}$ in $L^\infty_t H^s_{x,v} \cap L^1_t H^s_\Lambda$.
By compact embeddings into smaller Sobolev spaces (Rellich theorem) we can take the limit in (6.1) as $n$ tends to $+\infty$, since $G_\varepsilon$ and $\Gamma$ are continuous. We obtain $h$, a solution in $C(\mathbb{R}_+, E(\cdot))$, to
$$\begin{cases}\ \partial_t h + \dfrac{1}{\varepsilon}\,v\cdot\nabla_x h = \dfrac{1}{\varepsilon^2}\,L(h) + \dfrac{1}{\varepsilon}\,\Gamma(h,h),\\[6pt]\ h(0,x,v) = h_{in}(x,v).\end{cases}$$
6.2. Proof of the exponential decay. The function $h$ constructed above is in $\mathrm{Ker}(G_\varepsilon)^\perp$ for all $0 < \varepsilon \leq 1$. Moreover, this function clearly solves
$$\partial_t h = G_\varepsilon(h) + \frac{1}{\varepsilon}\,\Gamma(h,h),$$
with $\Gamma$ satisfying (H5). Therefore, we can use the a priori estimate of Proposition 2.2 on solutions of the full perturbative model concerning the time evolution of the $H^s_\varepsilon$-norm (where we omit the dependence on $s$ of the constants for clarity):
$$\frac{d}{dt}\|h\|^2_{H^s_\varepsilon} \leq -K_0\|h\|^2_{H^s_\Lambda} + K_1\big(G^s_x(h,h)\big)^2 + \varepsilon^2 K_2\big(G^s_{x,v}(h,h)\big)^2.$$
Moreover, using (6.3) and hypothesis (H4), we find:
$$\big(G^s_x(h,h)\big)^2 \leq \frac{2C^2_\Gamma}{C_m}\,\|h\|^2_{H^s_\varepsilon}\|h\|^2_{H^s_\Lambda}, \qquad \big(G^s_{x,v}(h,h)\big)^2 \leq \frac{2C^2_\Gamma}{C_m\,\varepsilon^2}\,\|h\|^2_{H^s_\varepsilon}\|h\|^2_{H^s_\Lambda}.$$
Hence, $K$ being a constant independent of $\varepsilon$:
$$\frac{d}{dt}\|h\|^2_{H^s_\varepsilon} \leq \big(K\|h\|^2_{H^s_\varepsilon} - K_0\big)\|h\|^2_{H^s_\Lambda}.$$
Therefore, one can notice that if $\|h_{in}\|^2_{H^s_\varepsilon} \leq K_0/2K$, then $\|h\|^2_{H^s_\varepsilon}$ is decreasing in time; indeed, as long as $\|h\|^2_{H^s_\varepsilon} \leq K_0/2K$ we have $K\|h\|^2_{H^s_\varepsilon} - K_0 \leq -K_0/2$, so this bound propagates from the initial data. Hence, because the $\Lambda$-norm controls the $L^2$-norm, which in turn controls the $H^s_\varepsilon$-norm:
$$\frac{d}{dt}\|h\|^2_{H^s_\varepsilon} \leq -\frac{K_0}{2}\,\|h\|^2_{H^s_\Lambda} \leq -\frac{K_0}{2}\,\frac{\nu^\Lambda_0}{\nu^\Lambda_1 C_M}\,\|h\|^2_{H^s_\varepsilon}.$$
Then, by Grönwall's lemma, setting $\tau_s = K_0\nu^\Lambda_0/4\nu^\Lambda_1 C_M$, we directly obtain
$$\|h\|^2_{H^s_\varepsilon} \leq \|h_{in}\|^2_{H^s_\varepsilon}\,e^{-2\tau_s t}$$
as long as $\|h_{in}\|^2_{H^s_\varepsilon} \leq K_0/2K$, which is the expected result with $\delta_s \leq K_0/2K$.
7. Exponential decay of $v$-derivatives: proof of Theorem 2.4

In order to prove this theorem, we are going to state a proposition giving an a priori estimate on a solution to the equation (1.3):
$$\partial_t h + \frac{1}{\varepsilon}\,v\cdot\nabla_x h = \frac{1}{\varepsilon^2}\,L(h) + \frac{1}{\varepsilon}\,\Gamma(h,h).$$
We remind the reader that we work in $H^s_{x,v}$ with the following positive functional:
$$\|\cdot\|^2_{H^s_{\varepsilon\perp}} = \sum_{\substack{|j|+|l|\leq s\\ |j|\geq 1}} b^{(s)}_{j,l}\,\|\partial^j_l (\mathrm{Id} - \pi_L)\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s} \alpha^{(s)}_l\,\|\partial^0_l \cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\sum_{i,\,c_i(l)>0} a^{(s)}_{i,l}\,\varepsilon\,\langle \partial^{\delta_i}_{l-\delta_i}\cdot,\, \partial^0_l \cdot\rangle_{L^2_{x,v}}.$$
One can notice that if we choose coefficients $(b^{(s)}_{j,l})$, $(\alpha^{(s)}_l)$, $(a^{(s)}_{i,l}) > 0$ such that $\|\cdot\|^2_{H^s_{1\perp}}$ is equivalent to
$$\sum_{\substack{|j|+|l|\leq s\\ |j|\geq 1}}\|\partial^j_l(\mathrm{Id}-\pi_L)\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}},$$
then for all $\varepsilon$ less than some $\varepsilon_0$, $\|\cdot\|^2_{H^s_{\varepsilon\perp}}$ is also equivalent to the latter norm, with equivalence coefficients not depending on $\varepsilon$.
Moreover, using equation (3.3), we have
$$\|\partial^j_l h\|^2_{L^2_{x,v}} \leq C_{\pi s}\,\|\partial^0_l h\|^2_{L^2_{x,v}} + \|\partial^j_l h^\perp\|^2_{L^2_{x,v}} \leq 2C_{\pi s}\Big(\|\partial^0_l h\|^2_{L^2_{x,v}} + \|\partial^j_l h^\perp\|^2_{L^2_{x,v}}\Big),$$
and therefore
$$\sum_{\substack{|j|+|l|\leq s\\ |j|\geq 1}}\|\partial^j_l(\mathrm{Id}-\pi_L)\cdot\|^2_{L^2_{x,v}} + \sum_{|l|\leq s}\|\partial^0_l\cdot\|^2_{L^2_{x,v}}$$
is equivalent to the standard Sobolev norm. Thus, we will just construct coefficients $(b^{(s)}_{j,l})$, $(\alpha^{(s)}_l)$ and $(a^{(s)}_{i,l})$ so that $\|\cdot\|^2_{H^s_{1\perp}}$ is equivalent to the latter norm; then, for $\varepsilon$ small enough, we will have the equivalence, not depending on $\varepsilon$, between $\|\cdot\|^2_{H^s_{\varepsilon\perp}}$ and the $H^s_{x,v}$-norm.
7.1. An a priori estimate. In this subsection we prove the following proposition.

Proposition 7.1. If $L$ is a linear operator satisfying the conditions (H1'), (H2') and (H3) and $\Gamma$ a bilinear operator satisfying (H5), then there exists $0 < \varepsilon_d \leq 1$ such that for all $s$ in $\mathbb{N}^*$, if $h_{in}$ is in $H^s_{x,v} \cap \mathrm{Ker}(G_\varepsilon)^\perp$ and $h$ is an associated solution of
$$\partial_t h + \frac{1}{\varepsilon}\,v\cdot\nabla_x h = \frac{1}{\varepsilon^2}\,L(h) + \frac{1}{\varepsilon}\,\Gamma(h,h),$$
then there exist $K^{(s)}_0$, $K^{(s)}_1$, $(b^{(s)}_{j,l})$, $(\alpha^{(s)}_l)$, $(a^{(s)}_{i,l}) > 0$ such that for all $0 < \varepsilon \leq \varepsilon_d$:
$$\|\cdot\|_{H^s_{\varepsilon\perp}} \sim \|\cdot\|_{H^s_{x,v}},$$
$$\frac{d}{dt}\|h\|^2_{H^s_{\varepsilon\perp}} \leq -K^{(s)}_0\Bigg(\frac{1}{\varepsilon^2}\,\|h^\perp\|^2_{H^s_\Lambda} + \sum_{1\leq|l|\leq s}\|\partial^0_l h\|^2_{L^2_{x,v}}\Bigg) + K^{(s)}_1\big(G^s_{x,v}(h,h)\big)^2.$$
Remark 7.2. We notice here that the constant in front of the microscopic part of $h$ is negative and of order $-1/\varepsilon^2$, which is the same order as the control derived by Guo in [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF] for his dissipation rate.
We will prove the proposition by induction on $s$. We take $h_{in}$ in $H^s_{x,v} \cap \mathrm{Ker}(G_\varepsilon)^\perp$ and consider the associated solution of (1.3), denoted by $h$. One can notice that, thanks to (H5), $h$ remains in $\mathrm{Ker}(G_\varepsilon)^\perp$ for all times, and thus we are allowed to use the inequalities given in the toolbox.

7.1.1. The case $s = 1$. In that case we have
$$\|h\|^2_{H^1_{\varepsilon\perp}} = A\|h\|^2_{L^2_{x,v}} + \alpha\|\nabla_x h\|^2_{L^2_{x,v}} + b\,\|\nabla_v h^\perp\|^2_{L^2_{x,v}} + a\varepsilon\,\langle\nabla_x h, \nabla_v h\rangle_{L^2_{x,v}},$$
and gathering the corresponding estimates from the toolbox yields
d dt h 2 H 1 ε⊥ 1 ε 2 K ⊥ 1 b -λA h ⊥ 2 Λ + 1 ε 2 K ⊥ ea -λα ∇ x h ⊥ 2 Λ + 1 ε 2 1 4C π1 C π C p a e -b ν Λ 3 2 ∇ v h ⊥ 2 Λ + K ⊥ dx b - a 2 ∇ x h 2 L 2 x,v +K(A, α, b, a) G 1 x,v (h, h) 2 , (7.1)
with K a fonction only depending on the coefficients appearing in hypocoercivity hypothesis and independent of ε.
We directly see that we have exactly the same kind of bound as the one we obtain while working on the a priori estimates for the operator h H 1 ε , equation (5.1). Therefore we can choose of coefficients A, α, b, e and a in the same way (in the right order) and use the same inequalities to finally obtain the expected result:
∃K 0 , K 1 > 0, ∀ 0 < ε 1, d dt h 2 H 1 ε⊥ -K (1) 0 1 ε 2 h ⊥ 2 Λ + 1 ε 2 ∇ x h ⊥ 2 Λ + 1 ε 2 ∇ v h ⊥ 2 Λ + ∇ x h 2 L 2 x,v +K (1) 1 G 1 x,v (h, h) 2 ,
with the constants K
(1) 0 and K
1 independent of ε, and h 2
H 1 1⊥ equivalent to h 2 L 2 x,v + ∇ x h 2 L 2 x,v + ∇ v h ⊥ 2 L 2 x,v
. Therefore, for all ε small enough we have the expected result in the case s = 1. 7.1.2. The induction in higher order Sobolev spaces. Then we assume that the theorem is true up to the integer s -1, s > 1. Then we suppose that L satisfies (H1'), (H2') and (H3) and we consider ε in (0, 1].
Since h in is in Ker(G ε ) ⊥ , h belongs to Ker(G ε ) ⊥ for all t and so we can use the results given in the toolbox.
As in the proofs of previous sections, we define on H s x,v :
F s (t) = |j|+|l|=s |j| 2 B ∂ j l h ⊥ 2 L 2 x,v + B ′ |l|=s i,c i (l)>0 Q l,i (t), Q l,i (t) = α ∂ 0 l h 2 L 2 x,v + b ∂ δ i l-δ i h ⊥ 2 L 2 x,v + aε ∂ δ i l-δ i h, ∂ 0 l h L 2 x,v ,
where the constants, strictly positive, will be chosen later.
Like in the section above, we shall study the time evolution of every term involved in F s in order to bound above dFs dt (t) with expected coefficients. However, in this subsection we will need to control all the Q l,i 's in the same time rather than treating them separately as we did in the proof of Proposition (2.2), because the toolbox tells us that each Q l,i is controlled by quantities appearing in the others.
The time evolution of
Q l,i . Gathering the toolbox inequalities in the following way: α(3.10) + b(3.17) + aε (3.18). This yields, because 0 < ε
1 and Card{i, c i (l) > 0} d, d dt |l|=s i,c i (l)>0 Q l,i (t) 1 ε 2 K ⊥ ea -λα |l|=s ∂ 0 l h ⊥ 2 Λ + 1 ε 2 1 4C πs C π d a e -ν Λ 5 b |l|=s i,c i (l)>0 ∂ δ i l-δ i h ⊥ 2 Λ + K ⊥ dl db - a 2 |l|=s ∂ 0 l h 2 L 2 x,v + a 4 |l| s-1 ∂ 0 l h 2 L 2 x,v + bK ⊥ s-1 ε 2 |l|+|j|=s i,c i (l)>0 1 h ⊥ 2 H s-1 x,v + K(α, b, a, e) G s x,v (h, h) 2 ,
with K a fonction only depending on the coefficients appearing in hypocoercivity hypothesis and independent of ε.
One can notice that except for the terms in h H s-1 x,v and |l| s-1
∂ 0 l h 2 L 2
x,v , we have exactly the same bound as in the case s = 1, equation (7.1). Therefore we can choose α, b, a, e, independently of ε and Γ such that it exists K ′ 0 > 0, K ′ 1 > 0 and C 0 , C 1 > 0 such that for all 0 < ε 1:
• |l|=s i,c i (l)>0 Q l,i (t) ∼ |l|=s i,c i (l)>0 ∂ 0 l h 2 L 2 x,v + ∂ δ i l-δ i h ⊥ 2 L 2 x,v , • d dt |l|=s i,c i (l)>0 Q l,i (t) -K ′ 0 1 ε 2 |l|=s ∂ 0 l h ⊥ 2 Λ + 1 ε 2 |l|=s i,c i (l)>0 ∂ δ i l-δ i h ⊥ 2 Λ + |l|=s ∂ 0 l h 2 L 2 x,v + C 0 ε 2 h ⊥ 2 H s-1 x,v + C 1 |l| s-1 ∂ 0 l h 2 L 2 x,v + K ′ 1 G s x,v (h, h) 2 .
7.1.4. The time evolution of F s and conclusion. We can finally obtain the time evolution of F s , using
d dt ∂ j l h ⊥ 2 L 2 x,v
, equation (3.16), so that there is no more ε in front of the Γ term:
d dt F s (t) -B ν λ 5 ε 2 |j|+|l|=s |j| 2 ∂ j l h ⊥ 2 Λ + B 9(ν Λ 1 ) 2 d 2(ν Λ 0 ) 2 ν Λ 5 |j|+|l|=s |j| 2 i,c i (j)>0 ∂ j-δ i l+δ i h ⊥ 2 Λ -K ′ 0 B ′ 1 ε 2 |l|=s ∂ 0 l h ⊥ 2 Λ + 1 ε 2 |l|=s i,c i (l)>0 ∂ δ i l-δ i h ⊥ 2 Λ + |l|=s ∂ 0 l h 2 L 2 x,v + |j|+|l|=s |j| 2 BK ⊥ dl + B ′ C 1 |l| s-1 ∂ 0 l h 2 L 2 x,v + 1 ε 2 |j|+|l|=s |j| 2 BK ⊥ s-1 + B ′ C 0 h ⊥ 2 H s-1 x,v + |j|+|l|=s |j| 2 3Bν Λ 1 ν Λ 0 ν Λ 5 + B ′ K ′ 1 G s x,v (h, h) 2 ,
Therefore we obtain the same bound (except
|l| s-1 ∂ 0 l h 2 L 2
x,v ) as in the proof of Proposition 2.2, equation (5.2), and so by choosing coefficients in the same way we have that it exists C (s) + > 0, 0 < ε d 1 and K (s * ) 1 > 0, none of them depending on ε, such that for all 0 < ε ε d :
d dt F s (t) C (s) + 1 ε 2 |j|+|l| s-1 ∂ j l h ⊥ 2 Λ + |l| s-1 ∂ 0 l h 2 L 2 x,v - 1 ε 2 |j|+|l|=s ∂ j l h ⊥ 2 Λ + |l|=s ∂ 0 l h 2 L 2 x,v +K (s * ) 1 G s x,v (h, h) 2 .
This inequality is true for all s and therefore we can take a linear combination of the F s to obtain the required result. Using the induction hypothesis on F 1 up to F s-1 we also have the equivalence of norms. 7.2. The exponential decay: proof of Theorem 2.4. Thanks to Theorem 2.3, we know that we have a solution to the equation (1.3) for any given h in small enough in the standard Sobolev norm. Call h the associated solution of h in ∈ H s x,v to (1.3). Since the existence has been proved we can use the a priori estimate above and the Proposition 7.1.
Thus we have
d dt h 2 H s ε⊥ -K (s) 0 1 ε 2 h ⊥ 2 H s Λ + 1 |l| s ∂ 0 l h 2 L 2 x,v + K (s) 1 G s x,v (h, h) 2 .
As before we can use (3.4) (equivalence of norms L 2
x,v and Λ on the fluid part) to get, for |l| > 1,
∂ 0 l h 2 Λ C ′ ∂ 0 l h ⊥ 2 Λ + ∂ 0 l h 2 L 2 x,v
, and for the case |l| 1 we can apply the Poincaré inequality (3.5) together with the equivalence of the L 2 x,v -norm and the Λ-norm on the fluid part π L , (3.4) to get
∃C, C ′ > 0, h 2 Λ C h ⊥ 2 Λ + 1 2 ∇ x h 2 L 2 x,v , ∇ x h 2 Λ C ′ ∇ x h ⊥ 2 Λ + 1 2 ∇ x h 2 L 2 x,v
.
Then we get that
d dt h 2 H s ε⊥ -K (s) 0 |j|+|l| s |j| 1 ∂ j l h ⊥ 2 Λ + |l| s ∂ 0 l h 2 Λ + K (s) 1 G s x,v (h, h) 2 -K (s * ) 0 h 2 H s Λ + K (s) 1 G s x,v (h, h) 2 .
Then for s s 0 , defined in (H4), and because Γ satisfies (H4) we can write
d dt h 2 H s ε⊥ K (s) 1 C 2 Γ h 2 H s x,v -K (s * ) 0 h 2 H s Λ . Because h H s ε⊥ and h 2 H s
x,v are equivalent, independently of ε, we finally have
d dt h 2 H s ε⊥ K (s) 1 C 2 Γ C h 2 H s ε⊥ -K (s * ) 0 h 2 H s Λ . Therefore if h in 2 H s ε⊥ K (s * ) 0 2K (s) 1 C 2 Γ C we have that h 2 H s ε⊥
is always decreasing on R + and so for all t > 0
d dt h 2 H s ε⊥ - K (s * ) 0 2K (s) 1 C 2 Γ C h 2 H s Λ .
And the H s Λ -norm controls the H s x,v -norm which is equivalent to the H s ε⊥ -norm. Thus applying Gronwall's lemma gives us the expected exponential decay.
8. Incompressible Navier-Stokes Limit: proof of Theorem 2.5
In this section we consider s s 0 , 0 < ε ε d and we take h in in H s
x,v such that h in H s ε δ s . Therefore we know, thanks to theorem 2.3, that we have a solution h ε to the linearized Boltzmann equation
∂ t h ε + 1 ε v.∇ x h ε = 1 ε 2 L(h ε ) + 1 ε Γ(h ε , h ε ), with h ε (0, x, v) = h in (x, v). Moreover, we also know that (h ε ) tends weakly-* to h in L ∞ t (H s x L 2 v ).
The first step towards the proof of Theorem 2.5 is to derived a convergence rate in finite time. Then, as described in Section 1.3, we shall interpolate this result with the exponential decay behaviour of our solutions in order to obtain a global in time convergence.
8.1.
A convergence in finite time. In Remark 8.13, we define V T (ε) and prove the following result
∀T > 0, V T (ε) = sup t∈[0,T ] h ε -h L ∞ x L 2 v → 0, as ε → 0.
Thanks to this remark we can give an explicit convergence in finite time.
Theorem 8.1. Consider s s 0 and h in in H s
x,v such that h in H s ε δ s . Then, (h ε ) ε>0 exists for all 0 < ε ε d and converges weakly* in
L ∞ t (H s x L 2 v ) towards h such that h ∈ Ker(L), with ∇ x • u = 0 and ρ + θ = 0.
Furthermore, T 0 hdt belongs to H s x L 2 v and it exists C > 0 such that for all T > 0,
T 0 hdt - T 0 h ε dt H s x L 2 v C max{ √ ε, √ T ε, T V T (ε)}.
One can have a strong convergence in L 2 [0,T ] H s x L 2 v only if h in is in Ker(L) with ∇ x • u in = 0 and ρ in + θ in = 0 (initial layer conditions). Moreover, in that case we have, for all T > 0,
h -h ε L 2 [0,T ] H s x L 2 v C max{ √ ε, √ T V T (ε)},
and for all δ in [0, 1], if h in belongs to
H s+δ x L 2 v , sup t∈[0,T ] h -h ε H s x L 2 v (t) C max{ε min(δ,1/2) , V T (ε)}.
Remark 8.2. We mention here that the obligation of an integration in time for non special initial condition is only due to the linear part ε -2 L -ε -1 v • ∇ x , whereas the case T = +∞ is prevented by the second order term Γ.
We proved in the linear case, theorem 2.1, that the linear operator
G ε = ε -2 L - ε -1 v • ∇ x generates a semigroup e tGε on H s
x,v . Therefore we can use Duhamel's principle to rewrite our equation under the following form, defining u ε = Γ(h ε , h ε ),
h ε = e tGε h in + t 0 1 ε e (t-s)Gε u ε (s)ds := U ε h in + Ψ ε (u ε ). (8.1)
The article by Ellis and Pinsky [START_REF] Ellis | The first and second fluid approximations to the linearized Boltzmann equation[END_REF] gives us a Fourier theory in x of the semigroup e tGε and therefore we are going to use it to study the strong limit of U ε h in and Ψ ε (u ε ) as ε tends to 0. We will denote by F x the Fourier transform in x on the torus (which is discrete) and n the discrete variable associated in Z d . From [START_REF] Ellis | The first and second fluid approximations to the linearized Boltzmann equation[END_REF], we are using Theorem 3.1, rewriten thanks with the Proposition 2.6 and the Appendix II with δ = λ/4 in Proposition 2.3, to get the following theorem Theorem 8.3. There exists n 0 ∈ R * + , there exists functions
• λ j : [-n 0 , n 0 ] -→ C, -1 j 2, C ∞ • e j : [-n 0 , n 0 ] × S d-1 -→ L 2 v (ζ, ω) -→ e j (ζ, ω) , -1 j d, C ∞ in ζ and C 0 in ω, such that (1) for all -1 j 2, λ j (ζ) = iα j ζ -β j ζ 2 + γ j (ζ), where α j ∈ R, with α 0 = α 2 = 0, β j < 0 and |γ j (ζ)| C γ |ζ| 3 with |γ j (ζ)| β j 2 |ζ| 2 , (2) for all -1 j d • e j (ζ, ω) = e 0j (ω) + ζe 1j (ω) + ζ 2 e 2j (ζ, ω), • e 0-1 (ω)(v) = e 01 (-ω)(v) = A 1 -ω.v + |v| 2 -d 2 µ(v) 1/2 , ( 3
) we have e tGε = F -1 x Û (t/ε 2 , εn, v)F x where Û (t, n, v) = 2 j=-1 Ûj (t, n, v) + ÛR (t, n, v)
with the following properties
• for -1 j 2, Ûj (t, n, v) = χ |n| n 0 e tλ j (|n|) P j |n| , n |n| (v), • for -1 j 1, P j |n| , n |n| = e j |n|
P 0j = π L ,
• it exists C R , σ > 0 such that for all t ∈ R + and all n ∈ Z d ,
||| ÛR (t, n, v)||| L 2 v C R e -σt .
Remark 8.4. This decomposition of the spectrum of the linear operator is based on a low and high frequencies decomposition. It shows that the spectrum of the whole operator can be viewed as a perturbation of the spectrum of the homogeneous linear operator. It can be divided into large eigenvalues, which are negative and therefore create a strong semigroup property for the remainder term, and small eigenvalues around the origin that are smooth perturbations of the homogeneous ones.
This theorem gives us all the tools we need to study the convergence as ε tends to 0 since we have an explicit form for the Fourier transform of the semigroup. We also know that this semigroup commutes with the pure x-derivatives. Therefore, studying the convergence in the L 2
x L 2 v -norm will be enough to obtain the desired result in the H s x L 2 v -norm. We are going to prove the following convergences in the different settings stated by Theorem 2.5
(1)
U ε h in tends to V (t, x, v)h in with V (0, x, v)h in = V (0)(h in )(x, v)
where V (0) the projection on the subset of Ker(L) consisting in functions g such that
∇ x • u g = 0 and ρ g + θ g = 0, (2) Ψ ε (u ε ) converges to Ψ(h, h) with Ψ(h, h)(t = 0) = 0.
8.1.1. Study of the linear part. We remind here that we have
U ε h in = F -1 x Ûε (t, n, v) ĥin (n, v) with Ûε (t, n, v) = 2 j=-1 Ûε j (t, n, v) + Ûε R (t, n, v), Ûε j (t, n, v) = χ |εn| n 0 e iα j t|n| ε -β j t|n| 2 + t ε 2 γ j (|εn|) P 0j n |n| + ε |n| P 1j |εn| , n |n| .
We can decompose Ûε j into four different terms
Ûε j (t, n, v) = e iα j t|n| ε -β j t|n| 2 P 0j n |n| +χ |εn| n 0 e iα j t|n| ε -β j t|n| 2 e t ε 2 γ j (|εn|) -1 P 0j n |n| (8.2) +χ |εn| n 0 e iα j t|n| ε -β j t|n| 2 + t ε 2 γ j (|εn|) ε |n| P 1j |εn| , n |n| + χ |εn| n 0 -1 e iα j t|n| ε -β j t|n| 2 P 0j n |n| . = U ε 0j + U ε 1j + U ε 2j + U ε 3j .
Remark 8.5. One can notice that U ε 00 and U ε 02 do not depend on ε, since α 0 = α 2 = 0.
We are going to study each of these four terms in two different lemmas and then add a last lemma to deal with the remainder term U R h in . The lemmas will be proven in Appendix C. Lemma 8.6. For α j = 0 (j = ±1) we have that it exists C 0 > 0 such that for all T ∈ [0, +∞]
T 0 U ε 0j h in dt 2 L 2 x L 2 v C 0 ε 2 h in 2 L 2 x L 2 v .
Moreover we have a strong convergence in the
L 2 [0,+∞) L 2 x L 2 v -norm if and only if h in satisfies ∇ x • u in = 0 and ρ in + θ in = 0.
In that case we have U ε 0j h in = 0. Lemma 8.7. For -1 j 2 and for 1 l 3 we have that the three following inequalities hold for U ε lj
• ∃C l > 0, ∀T > 0, T 0 U ε lj h in dt 2 L 2 x L 2 v C l ε 2 h in 2 L 2 x L 2 v , • ∃C ′ l > 0, U ε lj h in 2 L 2 [0,+∞) L 2 x L 2 v C ′ l ε 2 h in 2 L 2 x L 2 v , • ∀δ ∈ [0, 1], ∃C (l) δ > 0, ∀t > 0, U ε lj h in (t) 2 L 2 x L 2 v C (l) δ ε 2δ h in 2 H δ x L 2
v . Lemma 8.8. For the remainder term we have the two following inequalities
• ∃C 4 > 0, ∀T > 0, T 0 U ε R h in dt 2 L 2 x L 2 v C 4 T ε 2 h in 2 L 2 x L 2 v , • ∃C ′ 4 > 0, U ε R h in 2 L 2 [0,+∞) L 2 x L 2 v C ′ 4 ε 2 h in 2 L 2 x L 2 v , • ∀t 0 > 0, ∃C r > 0, ∀t > t 0 , U R h in (t) 2 L 2 x L 2 v C r √ t 0 ε h in 2 L 2 x L 2
v . Moreover, the strong convergence up to t 0 = 0 is possible if and only if h in is in Ker(L). In that case we have
∀δ ∈ [0, 1], ∃C (R) δ > 0, ∀t > 0, U ε R h in 2 L 2 x L 2 v C (R) δ ε 2δ h in 2 H δ x L 2 v .
Therefore, gathering lemmas 8.6, 8.7 and 8.8 and reminding Remark 8.5, we proved that, as ε tends to 0, e tGε h in converges to
(8.3) V (t, x, v)h in (x, v) = F -1 x e -β 0 t|n| 2 P 00 n |n| + e -β 2 t|n| 2 P 02 n |n| F x h in .
The convergence is strong when we consider the average in time and is strong in
L 2 t H s x L 2 v ( and in C([0, +∞), H s x L 2 v ) if h in is in H s+0 x L 2 v
) if an only if both conditions found in Lemma 8.6 and Lemma 8.8 are satisfied. That is to say h in belongs to Ker(L) with ∇ x • u in = 0 and ρ in + θ in = 0.
Moreover this also allows us to see that V (0, x, v)h in = V (0)(h in )(x, v) where V (0) is the projection on the subset of Ker(L) consisting in functions g such that ∇ x • u g = 0 and ρ g + θ g = 0.
8.1.2. Study of the bilinear part. We recall here that u ε = Γ(h ε , h ε ). Therefore, by hypothesis (H5), u ε belongs to Ker(L) ⊥ . Then we know that for all -1 j 2, P 0j n |n| is a projection onto a subspace of Ker(L). Therefore we have that, in the Fourier space,
P j |εn| , n |n| ûε = |εn| P 1j n |n| ûε + |εn| 2 P 2j |εn| , n |n| ûε .
Thus, recalling that
Ψ ε (u ε ) = t 0 1 ε e (t-s)Gε u ε (s)ds,
we can decompose it
Ψ ε (u ε ) = 2 j=-1 ψ ε j (u ε ) + ψ ε R (u ε ), with ψ ε j (u ε ) = F -1 x χ |εn| n 0 t 0 e iα j (t-s)|n| ε -β j (t-s)|n| 2 + t-s ε 2 γ j (|εn|) |n| (P 1j + ε |n| P 2j ) ûε (s)ds. := ψ ε 0j (u ε ) + ψ ε 1j (u ε ) + ψ ε 2j (u ε ) + ψ ε 3j (u ε )
, where we have used the same decomposition as in the linear case, equation (8.2), substituting t by t -s, P 0j by |n| P 1j and P 1j by |n| P 2j . And
ψ ε R (u ε ) = t 0 1 ε U ε R (t -s)u ε (s)ds.
Like the linear case, Remark 8.5, ψ ε 00 and ψ ε 02 do not depend on ε and we are going to prove the convergence towards Ψ(u) = F -1
x [ψ ε 00 (u) + ψ ε 02 (u)] F x , where u = Γ(h, h). To establish such a result we are going to study each term in three different lemmas and then a fourth one will deal with the remainder term. The lemmas will be proven in Appendix C. Lemma 8.9. For α j = 0 (j = ±1) we have the following inequality for ψ ε 0j :
∃ C 0 > 0, ∀T > 0, T 0 ψ ε 0j (u ε )dt 2 L 2 x L 2 v C 0 T 2 ε 2 E(h ε ) 2 . Remark 8.10. We know that (h ε ) ε>0 is bounded in L ∞ t H s x L 2 v (see theorems 2.
and 2.4).
This remark gives us the strong convergence to 0 of the average in time and the strong convergence to 0 without averaging in time as long as h in belongs to Ker(L) in Lemma 8.9.
Lemma 8.11. For -1 j 2 and for 1 l 3 we have that the three following inequalities hold for ψ ε lj
• ∃ C l > 0, ∀T > 0, T 0 ψ ε lj (u ε )dt 2 L 2 x L 2 v C l T ε 2 E(h ε ) 2 , • ∃ C ′ l > 0, ∀T > 0, ψ ε lj (u ε ) 2 L 2 [0,T ] L 2 x L 2 v C ′ l ε 2 E(h ε ) 2 , • ∀ |δ| ∈ [0, 1], ∃C (l) δ > 0, ∀T > 0, ψ ε lj (u ε )(T ) 2 L 2 x L 2 v C (l) δ ε 2δ E(∂ 0 δ h ε ) 2 .
Lemma 8.12. For the remainder term we have the three following inequalities
• ∃ C 4 > 0, ∀T > 0, T 0 ψ ε R (u ε )dt 2 L 2 x L 2 v C 4 T εE(h ε ) 2 , • ∃ C ′ 4 > 0, ∀T > 0, ψ ε R (u ε ) 2 L 2 [0,T ] L 2 x L 2 v C ′ 4 εE(h ε ) 2 , • ∃ C ′′ 4 > 0, ∀T > 0, ψ ε R (u ε )(T ) 2 L 2 x L 2 v C ′′ 4 εE(h ε ) 2 .
Gathering all Lemmas 8.9, 8.11 and 8.12 gives us the strong convergence of Ψ ε (u ε ) -Ψ(u ε ) towards 0, thanks to Remark 8.10. It remains to prove that we have indeed the expected convergences of Ψ(u ε ) towards Ψ(u) as ε tends to 0.
We start this last step by a quick remark relying on Sobolev embeddings and giving us a strong convergence of
h ε towards h in L ∞ [0,T ] L ∞ x L 2 v , for T > 0.
Remark 8.13. We know that
h ε → h weakly-* in L ∞ t H s x L 2 v , for s s 0 > d/2. But we also proved that for all t > 0 that (h ε ) ε is bounded in H s x L 2 v . Therefore the sequence ( h ε L 2 v , ε > 0) is bounded in H s
x and therefore converges strongly in H s ′ x for all s ′ < s.
But, by triangular inequality it comes that
h ε H s ′ x L 2 v -h H s ′ x L 2 v h ε L 2 v -h L 2 v H s ′ x .
This means that we also have that lim
ε→0 h ε H s ′ x L 2 v = h H s ′ x L 2 v . The space H s ′ x L 2
v is a Hilbert space and h ε tends weakly to h in it, therefore the last result gives us that in fact h ε tends strongly to h in H s ′
x L 2 v . This result is for all t > 0 and all s ′ s. Furthermore, s > d/2 and so we can choose s ′ > d/2. By Sobolev's embedding we obtain that h ε tends strongly to h in L ∞
x L 2 v , for all t > 0. Reminding that h ε → h weakly-* in L ∞ t H s x L 2 v and we obtain that we have
∀T > 0, V T (ε) = sup t∈[0,T ] h ε -h L ∞ x L 2 v → 0, as ε → 0.
Lemma 8.14. We have the following rate of convergence:
• ∃ C 5 > 0, ∀T > 0, T 0 Ψ(u ε )dt - T 0 Ψ(u)dt 2 L 2 x L 2 v C 5 T 2 V T (ε) 2 , • ∃ C ′ 5 > 0, ∀T > 0, Ψ(u ε ) -Ψ(u ε ) 2 L 2 [0,T ] L 2 x L 2 v C ′ 5 T V T (ε) 2 , • ∃ C ′′ 5 > 0, ∀T > 0, Ψ(u ε ) -Ψ(u ε ) 2 L 2 x L 2 v (T ) C ′′ 5 V T (ε) 2 .
Thus, those Lemmas, combined with the study of the linear case (Lemmas 8.6, 8.7 and 8.8) prove the Theorem 2.5 with the rate of convergence being the maximum of each rate of convergence. Moreover we have proved
h(t, x, v) = V (t, x, v)h in (x, v) + Ψ(t, x, v)(Γ(h, h)).
8.2. Proof of Theorem 2.5. Thanks to Theorem 8.1 we can control the convergence of h ε towards h for any finite time T . Then, thanks to the uniqueness property of Theorem 2.1 and the control on the remainder of Theorem 2.3 in [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF], in the case of a hard potential collision kernel, one has
∀T > 0, V T (ε) C V ε.
Finally, thanks to Theorem 2.3, we have the exponential decay for both h ε and h, leading to
h ε -h H s x L 2 v 2 h in H s ε e -τsT .
We define
T M = - 1 τ s ln ε 2 h in H s ε to get that ∀T T M , h ε -h H s x L 2 v ε.
This conclude the proof Theorem 2.5, by applying Theorem 8.1 to T M .
Appendix A. Validation of the assumptions
As said in the introduction, all the hypocoercivity theory assumptions hold for several different kinetic models. One can find the proof of the assumptions (H1), (H2), (H3), (H1') and (H2') in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF] directly for the linear relaxation (see also [START_REF] Cáceres | Equilibration rate for the linear inhomogeneous relaxation-time Boltzmann equation for charged particles[END_REF]), the semi-classical relaxation (see also [START_REF] Neumann | Convergence to global equilibrium for a kinetic fermion model[END_REF]), the linear Fokker-Planck equation, the Boltzmann equation with hard potential and angular cutoff and the Landau equation with hard and moderately soft potential (both studied in a constructive way in [START_REF] Mouhot | Explicit coercivity estimates for the linearized Boltzmann and Landau operators[END_REF] and [START_REF] Baranger | Explicit spectral gap estimates for the linearized Boltzmann and Landau operators with hard potentials[END_REF], for the spectral gaps, see also [START_REF] Guo | The Landau equation in a periodic box[END_REF] and [START_REF] Guo | The Vlasov-Poisson-Boltzmann system near Maxwellians[END_REF] for the Cauchy problems):
• The Linear Relaxation
∂ t f + v.∇ x f = 1 ε d R f (t, x, v * )dv * µ(v) -f , • The Semi-classical Relaxation ∂ t f + v.∇ x f = 1 ε R d [µ(1 -δf )f * -µ * (1 -δf * )f ] dv * ,
• The Linear Fokker-Planck Equation
∂ t f + v.∇ x f = 1 ε ∇ v . (∇ v f + f v) ,
• The Boltzmann Equation with hard potential and angular cutoff
∂ t f + v.∇ x f = 1 ε R d ×S d-1 b(cosθ)|v -v * | γ [f ′ f ′ * -f f * ] dv * dσ,
• The Landau Equation with hard and moderately soft potential
∂ t f + v.∇ x f = 1 ε ∇ v . R d Φ(v -v * )|v -v * | γ+2 [f * (∇f ) -f (∇f ) * ] .
Assumption (H4) is clearly satisfied by the first three as in that case we have either . Λv = . L 2 v or Γ = 0 (see [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF]). Moreover, (H5) is obvious in the case of a linear equation. It thus remains to prove properties (H5) for the semi-classical relaxation and (H4) and (H5) for the Boltzmann equation and the Landau equation (since our property (H4) is slightly different from (H4) in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF]).
A.1. The semi-classical relaxation. In the case of the semi-classical relaxation, the linearization is slightly different. Indeed, the unique global equilibrium associated to an initial data f 0 is (assuming some initial bounds, see [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF])
f ∞ = κ ∞ µ 1 + δκ ∞ µ ,
where κ ∞ depends on f 0 .
Thus, we are no longer in the case of a global equilibrium being a Maxwellian. However, a good way of linearizing this equation is (see [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF]) considering
f = f ∞ + ε √ κ ∞ µ 1 + δκ ∞ µ h.
Using such a linearization instead of the one used all along this paper yields the same general equation (1.3) with L and Γ satisfying all the requirements (see [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF]). Indeed, one may find that Ker(L) = Span f ∞ / √ µ and then notice that this is not of the form needed in assumption (H3). However, this is bounded by e -|v| 2 /4 and therefore we are still able to use the toolbox (section 3, thus all the theorems.
Let us look at the bilinear operator to show that it fulfils hypothesis (H5). A straightforward computation gives us the definition of Γ,
Γ(g, h) = δ √ κ ∞ 2 R d √ µ * µ * -µ 1 + εκ ∞ µ * [hg * + h * g]dv * .
Then, multiplying by a function f , integrating over R d and looking at the change of variable
(v, v * ) → (v * , v) yields Γ(g, h), f L 2 v = δ √ κ ∞ 4 R d ×R d (µ * -µ)(gh * +g * h) f √ µ * 1 + δκ ∞ µ * -f * √ µ 1 + δκ ∞ µ dvdv * .
Therefore, taking f in Ker(L) gives us the expected property.
A.2. Boltzmann operator with angular cutoff and hard potential. Notice that, compared to [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF], we defined Γ in a way that it is symmetric which gives us, using the fact that
µ * µ = µ ′ * µ ′ , Γ(g, h) = 1 2 R d ×(S) d-1 B(µ 1/2 ) * [g ′ * h ′ + g ′ h ′ * -g * h -gh * ]dv * dσ, A.2.1.
Orthogonality to Ker(L): (H5). A well-known property (see [START_REF] Golse | From kinetic to macroscopic models[END_REF] for instance) tells us that for all φ in L 2 v decreasing fast enough at infinity and for all ψ in L 2 v one has
R d Γ(g, h)(v)ψ(v)dv = 1 8 (R d ) 2 ×S d-1 B[g ′ * h ′ + g ′ h ′ * -g * h -gh * ] ((µ 1/2 ) * ψ + (µ 1/2 )ψ * -(µ 1/2 ) ′ * ψ ′ -(µ 1/2 ) ′ ψ ′ * )dvdv * dσ.
As shown in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF] or [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] we have that Ker(L) = Span(1, v 1 , . . . , v d , |v| 2 )µ 1/2 and therefore taking ψ to be each of these kernel functions gives us (H5).
A.2.2. Controlling derivatives: (H4). To prove (H4) we can define
Γ + (g, h) = R d ×(S) d-1 B(µ 1/2 ) * g ′ * h ′ dv * dσ, Γ -(g, h) = - R d ×(S) d-1 B(µ 1/2 ) * g * h dv * dσ.
By using the change of variable u = v -v * we end up with θ being a function of u and σ and v
′ = v + f 1 (u, σ) and v ′ * = v + f 2 (u, σ)
, f 1 and f 2 being functions. Therefore we can make this change of variable, take j and l such that |j| + |l| s and differentiate our operator Γ -.
∂ j l Γ -(g, h) = - 1 2 j 0 +j 1 +j 2 =j l 1 +l 2 =l R d ×S d-1 b(cosθ)|u| γ ∂ j 0 0 µ(v -u) 1/2 ∂ j 1 l 1 g * ∂ j 2 l 2 h dudσ.
Then we can easily compute that, C being a generic constant,
∂ j 0 0 µ(v -u) 1/2 Cµ(v -u) 1/4
. Moreover, we are in the case where γ > 0 and therefore we have
|u| γ µ(v -u) 1/4 C(1 + |v|) γ µ(v -u) 1/8 .
Combining this and the fact that |b| C b (angular cutoff considered here), multiplying by a function f and integrating over T d × R d yields, using Cauchy-Schwarz two times,
∂ j l Γ -(g, h), f L 2 x,v C j 0 +j 1 +j 2 =j l 1 +l 2 =l T d ×R d (1 + |v|) γ ∂ j 2 l 2 h |f | R d µ 1/8 * ∂ j 1 l 1 g * dv * dvdx G s (g, h) f Λ , with G s (g, h) = C |j 1 |+|l 1 |+|j 2 |+|l 2 | s T d ∂ j 2 l 2 h 2 Λv ∂ j 1 l 1 g 2 L 2 v dx 1/2
.
At that point we can use Sobolev embeddings (see [START_REF] Brezis | Analyse fonctionnelle[END_REF], corollary IX.13) stating that if E (s 0 /2) > d/2 then we have
H s/2 x ֒→ L ∞ x . So, if |j 1 | + |l 1 | s/2 we have ∂ j 1 l 1 g 2 L 2 v sup x∈T d ∂ j 1 l 1 g 2 L 2 v C s ∂ j 1 l 1 g 2 L 2 v H s/2 x C s |p| s/2 p 1 +p 2 =p T d ×R d ∂ j 1 l 1 +p 1 g ∂ j 1 l 1 +p 2 g dvdx (A.1) C s g 2 H s
x,v , by a mere Cauchy-Schwarz inequality.
In the other case, |j 2 | + |l 2 | s/2 and by same calculations we show
∂ j 2 l 2 h 2 Λv C s h 2 H s Λ .
Therefore, by just dividing the sum into this two subcases we obtain the result (H4) for Γ -, noticing that in the case j = 0 equation (A.1) has no v derivatives and the Cauchy-Schwarz inequality does not create such derivatives so the control is only made by x-derivatives.
The second term Γ + is dealt exactly the same way with, at the end (the study of
G s ), another change of variable (v, v * ) → (v ′ , v ′ * ) which gives the result since (1 + |v ′ |) γ (1 + |v|) γ + (1 + |v * |) γ if γ > 0.
A.3. Landau operator with hard and moderately soft potential. The Landau operator is used to describe plasmas and for instance in the case of particles interacting via a Coulomb interaction (see [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] for more details). The particular case of Coulomb interaction alone (γ = -3) will not be studied here as the Landau linear operator has a spectral gap if and only if γ -2 (see [START_REF] Guo | The Landau equation in a periodic box[END_REF], for not constructive arguments, [START_REF] Mouhot | Spectral gap and coercivity estimates for linearized Boltzmann collision operators without angular cutoff[END_REF] for general constructive case and [START_REF] Baranger | Explicit spectral gap estimates for the linearized Boltzmann and Landau operators with hard potentials[END_REF] for explicit construction in the case of hard potential γ > 0) and so only the case γ -2 may be applicable in this study.
We can compute straightforwardly the bilinear symmetric operator associated with the Landau equation:
Γ(g, h) = 1 2 √ µ ∇ v • R d √ µµ * Φ(v -v * ) [g * ∇ v h + h * ∇ v g -g(∇ v h) * -h(∇ v g) * ] dv * ,
where Φ :
R d -→ R d is such that Φ(z) is the orthogonal projection onto Span(z) ⊥ so Φ(z) ij = δ ij - z i z j |z| 2 ,
Γ(g, h), ψ L 2 v = - 1 2 R d ×R d ∇ v ψ √ µ • ( √ µµ * Φ(v -v * )[G]) dv * dv, where G = g * ∇ v h + h * ∇ v g -g(∇ v h) * -h(∇ v g) * .
Then the change of variable
(v, v * ) → (v * , v) only changes ∇ v (ψ/ √ µ) to ∇ v (ψ/ √ µ) *
and G becomes -G. Therefore we finally obtain
Γ(g, h), ψ L 2 v = 1 4 R d ×R d √ µµ * Φ(v -v * )[G] • ∇ v ψ √ µ * -∇ v ψ √ µ dv * dv.
As shown in [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF] or [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] we have that Ker(L) = Span(1, v 1 , . . . , v d , |v| 2 )µ 1/2 . Computing the term inside brackets for each of these functions gives us 0 or, in the case
|v| 2 √ µ, 2(v * -v). However, by definition, Φ(v -v * )[G] belongs to Span(v -v * ) ⊥ and therefore Φ(v - v * )[G] • (v * -v) = 0. So Γ indeed satisfies (H5).
A.3.2. Controlling derivatives: (H4). The article [START_REF] Guo | The Landau equation in a periodic box[END_REF] gives us directly the expected result in its Theorem 3, equation [START_REF] Nishida | Global solutions to the initial value problem for the nonlinear Boltzmann equation[END_REF] with θ = 0. The case where there are only x-derivatives is also included if one takes β = 0.
Appendix B. Proofs given the results in the toolbox
We used the estimates given by the toolbox throughout this article. This appendix is to prove all of them. It is divided in two parts. The first one is dedicated to the proof of the equality between null spaces whereas the second part deals with the time derivatives inequalities. To prove this result we will need a lemma.
Lemma B.2. Let f : T d × R d -→ R be continuous on T d × R d and differentiable in x. If v • ∇ x f (x, v) = 0 for all (x, v) in T d × R d then f does not depend on x. Proof of Lemma B.2. Fix x in T d and v Q-free in R d .
For y in R d we will denote by y its equivalent class in
T d . Define g : R -→ R t -→ f (x + tv, v)
.
We find easily that g is differentiable on R and that g
′ (t) = v.∇ x f (x, v) = 0 on R. Therefore: ∀t ∈ R, f (x + tv, v) = f (x, v).
However, a well-known property about the torus is that the set {x + nv, n ∈ Z} is dense in T d for all x in T d and v Q-free in R d . This combined with the last result and the continuity of f leads to:
∀y ∈ T d , f (y, v) = f (x, v).
To conclude it is enough to see that the set of Q-free vector in R d is dense in R d and then, by continuity of f in v:
∀y ∈ T d , ∀v ∈ R d , f (y, v) = f (x, v).
Now we have all the tools to prove the proposition about the kernel of operators.
Proof of Proposition B.1. Since L satisfies (H1) we know that L acts on L 2 v and that its Kernel functions φ i only depend on v. Thus, we have directly the first inclusion Ker(L) ⊂ Ker(G).
Then, let us consider
h in H 1 x,v such that G(h) = 0. Because the transport operator v • ∇ x is skew-symmetric in L 2
x,v we have
0 = G(h), h L 2 x,v = a T d L(h), h L 2 v dx.
However, because L satisfies (H3) we obtain:
0 λ T d h(x, .) -π L (h(x, .)) 2 Λv dx.
But λ is strictly positive and thus:
∀x ∈ T d , h(x, •) = π L (h(x, •)) = d i=1 c i (x)φ i .
Finally we have, by assumption, G(h) = 0 and because h(x, •) belongs to Ker(L) for all x in T d we end up with
∀(x, v) ∈ T d × R d , v • ∇ x h(x, v) = 0.
By applying the lemma above we then obtain that h does not depend on x. But (φ i ) 1 i d is an orthonormal family, basis of Ker(L), and therefore we find that for all i, c i does not depend on x. So,we have proved that:
∀(x, v) ∈ T d × R d , h(x, v) = d i=1 c i φ i (v).
Therefore, h belongs to Ker(L) and only depends on x.
B.2.
A priori energy estimates. In this subsection we derive all the inequalities we used. Therefore, we assume that L satisfies (H1'), (H2') and (H3) while Γ has the properties (H4) and (H5), and we pick g in H s
x,v . We consider h in H s x,v ∩ Ker(G ε ) ⊥ and we assume that h is a solution to (1.3):
∂ t h + 1 ε v.∇ x h = 1 ε 2 L(h) + 1 ε Γ(g, h).
In the toolbox, we wrote inequalities on function which were solutions of the linear equation. As the reader may notice, we will deal with the second order operator just by applying the first part of (H4) and Young's inequality. Such an inequality only provides two positive terms, and thus by just setting Γ equal to 0 in the next inequalities we get the expected bounds in the linear case (not the sharpest ones though). Therefore we will just describe the more general case and the linear one is included in it.
B.2.1. Time evolution of pure x-derivatives. The operators L and Γ only act on the v variable. Thus, for 0 |l| s, ∂ 0 l commutes with L and v
• ∇ x . Remind that v • ∇ x is skew-symmetric in L 2 x,v (T d × R d ) and therefore we can compute d dt ∂ 0 l h 2 L 2 x,v = 2 ε 2 L(∂ 0 l h), ∂ 0 l h L 2 x,v + 2 ε ∂ 0 l Γ(g, h), ∂ 0 l h L 2 x,v . We can then use hypothesis (H3) to obtain 2 ε 2 L(∂ 0 l h), ∂ 0 l h L 2 x,v - 2λ ε 2 (∂ 0 l h) ⊥ 2 Λ .
We also use (H3) to get (∂ 0 l h) ⊥ = ∂ 0 l h ⊥ . To deal with the second scalar product, we will use hypothesis (H4) and (H5), which is still valid for ∂ 0 l Γ since π L only acts on the v variable, followed by a Young inequality with some D 1 > 0. This yields
2 ε ∂ 0 l Γ(g, h), ∂ 0 l h L 2 x,v = 2 ε ∂ 0 l Γ(g, h), ∂ 0 l h ⊥ L 2 x,v 2 ε G s x (g, h) ∂ 0 l h ⊥ Λ D 1 ε (G s x (g, h)) 2 + 1 D 1 ε ∂ 0 l h ⊥ 2 Λ .
Gathering the last two upper bounds we obtain
d dt ∂ 0 l h 2 L 2 x,v 1 D 1 ε - 2λ ε 2 ∂ 0 l h ⊥ 2 Λ + D 1 ε (G s x (g, h)) 2 .
Finally, taking D 1 = ε/λ gives us inequalities (3.6), (3.7) and (3.10).
B.2.2. Time evolution of
∇ v h 2 L 2
x,v . For that term we get, by applying the equation satisfied by h, the following:
d dt ∇ v h 2 L 2 x,v = 2 ε 2 ∇ v L(h), ∇ v h L 2 x,v - 2 ε ∇ v (v•∇ x h), ∇ v h L 2 x,v + 2 ε ∇ v Γ(g, h), ∇ v h L 2 x,v .
And by writing the second term on the right-hand side of the equality and integrating by part in x, we have
∇ v (v • ∇ x h), ∇ v h L 2 x,v = ∇ x h, ∇ v h L 2 x,v .
Therefore the following holds:
d dt ∇ v h 2 L 2 x,v = 2 ε 2 ∇ v L(h), ∇ v h L 2 x,v - 2 ε ∇ x h, ∇ v h L 2 x,v + 2 ε ∇ v Γ(g, h), ∇ v h L 2 x,v .
Then we have by (H1) that L = K -Λ and we can estimate each component thanks to (H1) and (H2):
-∇ v Λ(h), ∇ v h L 2 x,v ν Λ 4 h 2 L 2 x,v -ν Λ 3 ∇ v h 2 Λ , ∇ v K(h), ∇ v h L 2 x,v C(δ) h 2 L 2 x,v + δ ∇ v h 2 L 2
x,v , where δ is a strictly positive real that we will choose later.
Finally, for a D > 0 that we will choose later, we have the following upper bound, by Cauchy-Schwarz inequality:
- 2 ε ∇ x h, ∇ v h L 2 x,v D ε ∇ x h 2 L 2 x,v + ν Λ 1 Dν Λ 0 ε ∇ v h 2 Λ , using the fact that . 2 L 2 x,v ν Λ 1 ν Λ 0 . 2 Λ .
Finally, another Young inequality gives us a control on the last scalar product, for a D 2 > 0 to be chosen later
2 ε ∇ v Γ(g, h), ∇ v h L 2 x,v D 2 ε G 1 x,v (g, h) 2 + 1 D 2 ε ∇ v h 2 Λ .
We gather here the last three inequalities to obtain our global upper bound:
d dt ∇ v h 2 L 2 x,v 1 ε 2 2ν Λ 4 + 2C(δ) h 2 L 2 x,v + D ε ∇ x h 2 L 2 x,v + 2ν Λ 1 δ ν Λ 0 ε 2 - 2ν Λ 3 ε 2 + ν Λ 1 Dεν Λ 0 + 1 D 2 ε ∇ v h 2 Λ + D 2 ε G 1 x,v (g, h) 2 .
We can go even further since we have h 2
L 2 x,v = h ⊥ 2 L 2 x,v + π L (h) 2 L 2
x,v . But because h is in Ker(G ε ) ⊥ we can use the toolbox and the equation (3.5) about the Poincaré inequality:
π L (h) 2 L 2 x,v C p ∇ x h 2 L 2
x,v . This last inequality yields:
d dt ∇ v h 2 L 2 x,v ν Λ 1 ν Λ 0 ε 2 2ν Λ 4 + 2C(δ) h ⊥ 2 Λ + C p ε 2 2ν Λ 4 + 2C(δ) + D ε ∇ x h 2 L 2 x,v + 2ν Λ 1 δ ν Λ 0 ε 2 - 2ν Λ 3 ε 2 + ν Λ 1 Dεν Λ 0 + 1 D 2 ε ∇ v h 2 Λ + D 2 ε G 1 x,v (g, h) 2 .
Therefore, we can choose
δ = ν Λ 0 ν Λ 3 /6ν Λ 1 , D = 3ν Λ 1 ε/ν Λ 0 ν Λ 3 and D 2 = 3ε/ν Λ 3 to get the equation (3.8). B.2.3. Time evolution of ∇ x h, ∇ v h L 2 x,v .
In the same way, and integrating by part in x then in v we obtain the following equality:
d dt ∇ x h, ∇ v h L 2 x,v = 2 ε 2 L(∇ x h), ∇ v h L 2 x,v - 2 ε ∇ v (v•∇ x h), ∇ x h L 2 x,v + 2 ε ∇ x Γ(g, h), ∇ v h L 2 x,v .
By writing explicitly
∇ v (v • ∇ x h), ∇ x h L 2 x,v
and by integrating by part one can show that the following holds:
∇ v (v.∇ x h), ∇ x h L 2 x,v = 1 2 ∇ x h 2 L 2
x,v .
Therefore we have an explicit formula for that term and we can find the time derivative of the scalar product being:
d dt ∇ x h, ∇ v h L 2 x,v = 2 ε 2 L(∇ x h), ∇ v h L 2 x,v - 1 ε ∇ x h 2 L 2 x,v + 2 ε ∇ x Γ(g, h), ∇ v h L 2 x,v .
We can bound above the first term in the right-hand side of the equality thanks to (H1) and then Cauchy-Schwarz in x, with a constant η > 0 to be define later.
2 ε 2 L(∇ x h), ∇ v h L 2 x,v = 2 ε 2 L(∇ x h ⊥ ), ∇ v h L 2 x,v C L ε 2 T d 2 ∇ x h ⊥ Λv ∇ v h Λv dx C L η ε 2 ∇ x h ⊥ 2 Λ + C L ηε 2 ∇ v h 2 Λ .
Then applying hypothesis (H4) and Young's inequality one more time with a constant D 3 > 0 one may find
2 ε ∇ x Γ(g, h), ∇ v h L 2 x,v D 3 ε G 1 x (g, h) 2 + 1 D 3 ε ∇ v h 2 Λ .
Hence we end up with the following inequality:
d dt ∇ x h, ∇ v h L 2 x,v C L η ε 2 ∇ x h ⊥ 2 Λ + C L ηε 2 + 1 D 3 ∇ v h 2 Λ - 1 ε ∇ x h 2 L 2 x,v + D 3 ε G 1 x (g, h) 2 .
Now define η = e/ε, e > 0, and D 3 = e/C L to obtain equation (3.9).
B.2.4. Time evolution of ∂
j l h 2 L 2 x,v
for |j| 1 and |j|+|l| = s. This term is the only term far from what we already did since we are mixing more than one derivative in x and one derivative in v in general. By simply differentiating in time and integrating by part we find the following equality.
d dt ∂ j l h 2 L 2 x,v = 2 ε 2 ∂ j l L(h), ∂ j l h L 2 x,v - 2 ε ∂ j l (v.∇ x h), ∂ j l h L 2 x,v + 2 ε ∂ j l Γ(g, h), ∂ j l h L 2 x,v = 2 ε 2 ∂ j l L(h), ∂ j l h L 2 x,v - 2 ε i,c i (j)>0 ∂ j l h, ∂ j-δ i l+δ i h L 2 x,v + 2 ε ∂ j l Γ(g, h), ∂ j l h L 2 x,v .
We can then apply Cauchy-Schwarz for the terms inside the sum symbol. For each we can use a D i,l,s > 0 but because they play an equivalent role we will take the same D > 0, that we will choose later:
- 2 ε ∂ j l h, ∂ j-δ i l+δ i h L 2 x,v ν Λ 1 Dν Λ 0 ε ∂ j l h 2 Λ + D ε ∂ j-δ i l+δ i h 2 L 2 x,v
.
Then we can use (H1') and (H2'), with a δ > 0 we will choose later, to obtain
2 ε 2 ∂ j l L(h), ∂ j l h L 2 x,v 2 ε 2 (C(δ) + ν Λ 6 ) h 2 H s-1 x,v + 2 ε 2 δν Λ 1 ν Λ 0 -ν Λ 5 ∂ j l h 2 Λ .
Finally, applying (H4) and Young's inequality with a constant D 2 > 0 we obtain
2 ε ∂ j l Γ(g, h), ∂ j l h L 2 x,v D 2 ε G s x,v (g, h) 2 + 1 D 2 ε ∂ j l h 2 Λ .
Combining these three inequality we find an upper bound for the time evolution. Here we also use the fact that the number of i such that c i (j) > 0 is less or equal to d.
d dt ∂ j l h 2 L 2 x,v ν Λ 1 d Dεν Λ 0 + 2 ε 2 δν Λ 1 ν Λ 0 -ν Λ 5 + 1 D 2 ε ∂ j l h 2 Λ + D ε i,c i (j)>0 ∂ j-δ i l+δ i h 2 L 2 x,v + 2 ε 2 (C(δ) + ν Λ 6 ) h 2 H s-1 x,v + D 2 ε G s x,v (g, h) 2 .
Hence, we obtain equations (3.11) and (3.12) by taking
D = 3ν Λ 1 ε/ν Λ 0 ν Λ 5 , D 2 = 3ε/ν Λ 5 and δ = ν Λ 0 ν Λ 5 /6ν Λ 1 . Also note that in (3.11) we used ∂ j-δ i l+δ i h 2 L 2 x,v ν Λ 1 ν Λ 0 ∂ j-δ i l+δ i h 2 Λ . B.2.5. Time evolution of ∂ δ i l-δ i h, ∂ 0 l h L 2 x,v .
With no more calculations, we can bound this term in the same way we did for d dt ∇ x h, ∇ v h L 2 x,v . Here we get
d dt ∂ δ i l-δ i h, ∂ 0 l h L 2 x,v C L η ε 2 ∂ 0 l h ⊥ 2 Λ + C L ηε 2 + 1 εD 3 ∂ δ i l-δ i h 2 Λ - 1 ε ∂ 0 l h 2 L 2 x,v + D 3 ε (G s x (g, h)) 2 .
Now define η = e/ε, e > 0, and D 3 = e/C L to obtain equation (3.13).
In the next paragraphs, we are setting g = h.
B.2.6. Time evolution of ∇
v h ⊥ 2 L 2 x,v
. By simply differentiating norm and using (H5) to get Γ(h, h) ⊥ = Γ(h, h), we compute
d dt ∇ v h ⊥ 2 L 2 x,v = 2 ∇ v (G ε (h)) ⊥ , ∇ v h ⊥ L 2 x,v + 2 ε ∇ v Γ(h, h), ∇ v h ⊥ L 2
x,v . By applying (H4) and Young's inequality to the second term on the right-hand side, with a constant D 2 > 0, and controlling the L 2
x,v -norm by the Λ-norm we obtain:
2 ε ∇ v Γ(h, h), ∇ v h ⊥ L 2 x,v D 2 ε G 1 x,v (h, h) 2 + 1 εD 2 ∇ v h ⊥ 2 Λ .
Then we have to control the first term. Just by writing it and decomposing terms in projection onto Ker(L) and onto its orthogonal we yield:
2 ∇ v (G ε (h)) ⊥ , ∇ v h ⊥ L 2 x,v = 2 ε 2 ∇ v L(h), ∇ v h ⊥ L 2 x,v - 2 ε ∇ v (v • ∇ x h) ⊥ , ∇ v h ⊥ L 2 x,v = 2 ε 2 ∇ v L(h ⊥ ), ∇ v h ⊥ L 2 x,v - 2 ε ∇ x h, ∇ v h ⊥ L 2 x,v - 2 ε v • ∇ v ∇ x π L (h), ∇ v h ⊥ L 2 x,v + 2 ε ∇ v π L (v • ∇ x h), ∇ v h ⊥ L 2
x,v .
Then we can control the first term on the right-hand side thanks to (H1) and (H2), δ > 0 to be chosen later:
2 ε 2 ∇ v L(h ⊥ ), ∇ v h ⊥ L 2 x,v 2(C(δ) + ν Λ 4 )ν 1 Λ ν Λ 0 ε 2 h ⊥ 2 Λ + 2 ε 2 ν Λ 1 δ ν Λ 0 -ν Λ 3 ∇ v h ⊥ 2 Λ .
We apply Cauchy-Schwarz inequality to the next term, with D to be chosen later:
- 2 ε ∇ x h, ∇ v h ⊥ L 2 x,v D ε ∇ x h 2 L 2 x,v + ν Λ 1 ν Λ 0 Dε ∇ v h ⊥ 2 Λ .
For the third term we are going to apply Cauchy-Schwarz inequality and then use the property (H3). The latter property tells us that the functions in Ker(L) are of the form a polynomial in v times e -|v| 2 /4 . This fact combined with the shape of π L , equation (3.1), shows us that we can control, by a mere Cauchy-Schwarz inequality, the third term. Then the property (3.3) yields the following upper bound:
- 2 ε v • ∇ v ∇ x π L (h), ∇ v h ⊥ L 2 x,v D ε v • ∇ v π L (∇ x h) 2 L 2 x,v + 1 Dε ∇ v h ⊥ 2 L 2 x,v DC π1 ε ∇ x h 2 L 2 x,v + ν Λ 1 ν Λ 0 Dε ∇ v h ⊥ 2 Λ .
Finally, we first use equation (3.3) controling the v-derivatives of π L and then see that the norm of π L (v.f ) is easily controled by the norm of f (just use (H3) and the definition of π L (3.1) and apply Cauchy-Schwarz inequality) by a factor C π1 (increase this constant if necessary in (3.3)):
2 ε ∇ v π L (v.∇ x h), ∇ v h ⊥ L 2 x,v D ′ ε ∇ v π L (v.∇ x h) 2 L 2 x,v + 1 εD ′ ∇ v h ⊥ 2 L 2 x,v D ′ C π1 ε π L (v.∇ x h) 2 L 2 x,v + 1 εD ′ ∇ v h ⊥ 2 L 2 x,v D ′ C 2 π1 ε ∇ x h 2 L 2 x,v + ν Λ 1 ν Λ 0 εD ′ ∇ v h ⊥ 2 L 2 x,v
.
We then gather all those bounds to get the last upper bound for the time derivative of the v-derivative.
d dt ∇ v h ⊥ 2 L 2 x,v ν Λ 1 ν Λ 0 ε 2 2ν Λ 4 + 2C(δ) h ⊥ 2 Λ + D ε + D ′ C 2 π1 ε + DC π1 ε ∇ x h 2 L 2 x,v + 2ν Λ 1 δ ν Λ 0 ε 2 - 2ν Λ 3 ε 2 + ν Λ 1 εν Λ 0 1 D + 1 D ′ + 1 D + 1 εD 2 ∇ v h ⊥ 2 Λ + D 2 ε G 1 x,v (h, h) 2 .
Therefore we obtain (3.14) by taking
D = D ′ = D = 9ν Λ 1 ε/ν Λ 0 ν Λ 3 , δ = ν Λ 0 ν Λ 3 /6ν Λ 1 and D 2 = 3ε/ν Λ 3 . B.2.7. A new time evolution of ∇ x h, ∇ v h L 2 x,v
. By integrating by part in x then in v we obtain the following equality on the evolution of the scalar product:
d dt ∇ x h, ∇ v h L 2 x,v = 2 ∇ x G ε (h), ∇ v h L 2 x,v + 2 ε ∇ v Γ(h, h), ∇ x h L 2 x,v .
We will bound above the first term as in the previous case and for the second term involving Γ we use (H4) and Young's inequality with a constant D 3 > 0:
2 ∇ v Γ(h, h), ∇ x h L 2 x,v D 3 G 1 x,v (h, h) 2 + 1 D 3 ∇ x h 2 Λ .
We decompose ∇ x h thanks to π L and we use (3.4) to control the fluid part of it,
2 ∇ v Γ(h, h), ∇ x h L 2 x,v D 3 G 1 x,v (h, h) 2 + 1 D 3 ∇ x h ⊥ 2 Λ + C π D 3 ∇ x h 2 L 2
x,v . Finally we obtain an upper bound for the time-derivative:
d dt ∇ x h, ∇ v h L 2 x,v C L η ε 2 + 1 εD 3 ∇ x h ⊥ 2 Λ + C L ηε 2 ∇ v h 2 Λ + C π εD 3 - 1 ε ∇ x h 2 L 2 x,v + D 3 ε G 1 x,v (h, h) 2 .
But now, we can use the properties (3.3) and (3.4) of the projection π L to go further.
∇ v h 2 Λ 2 ∇ v h ⊥ 2 Λ + 2 ∇ v π L (h) 2 Λ 2 ∇ v h ⊥ 2 Λ + 2C π1 C π π L (h) 2 L 2 x,v 2 ∇ v h ⊥ 2 Λ + 2C π1 C π C p ∇ x h 2 L 2 x,v ,
where we used Poincare inequality (3.5) because h is in Ker(G ε ) ⊥ . Hence we have a final upper bound for the time derivative:
d dt ∇ x h, ∇ v h L 2 x,v C L η ε 2 + 1 εD 3 ∇ x h ⊥ 2 Λ + 2C L ηε 2 ∇ v h ⊥ 2 Λ + 2C L C π1 C π C p ε 2 η + C π εD 3 - 1 ε ∇ x h 2 L 2 x,v + D 3 ε G 1 x,v (h, h) 2 .
Thus, setting η = 8eC L C π1 C π C p /ε with e 1 and D 3 = 4C π we obtain equation (3.15).
B.2.8. Time evolution of ∂
j l h ⊥ 2 L 2 x,v
, j 1 and |j| + |l| = s. We have the following time evolution:
d dt ∂ j l h ⊥ 2 L 2 x,v = 2 ∂ j l (G ε (h)) ⊥ , ∂ j l h ⊥ L 2 x,v + 2 ε ∂ j l Γ(h, h), ∂ j l h ⊥ L 2 x,v .
As above, we apply (H4) for the last term on the right hand side, with a constant
D 2 > 0, 2 ∂ j l Γ(h, h), ∂ j l h ⊥ L 2 x,v D 2 G s x,v (h, h) 2 + 1 D 2 ∂ j l h ⊥ 2 Λ .
Then we evaluate the first term on the right-hand side.
2 ∂ j l (G ε (h)) ⊥ , ∂ j l h ⊥ L 2 x,v = 2 ε 2 ∂ j l L(h), ∂ j l h ⊥ L 2 x,v - 2 ε ∂ j l (v.∇ x h) ⊥ , ∂ j l h ⊥ L 2 x,v = 2 ε 2 ∂ j l L(h ⊥ ), ∂ j l h ⊥ L 2 x,v - 2 ε v • ∂ j l π L (∇ x h), ∂ j l h ⊥ L 2 x,v - 2 ε i,c i (j)>0 ∂ j-δ i l+δ i h, ∂ j l h ⊥ L 2 x,v + 2 ε ∂ j l π L (v • ∇ x h), ∂ j l h ⊥ L 2 x,v .
Then we shall bound each of these four terms on the right-hand side. We can first use the properties (H1') and (H2') of L to get, for some δ to be chosen later,
2 ε 2 ∂ j l L(h ⊥ ), ∂ j l h ⊥ L 2 x,v 2 ε 2 C(δ) + ν Λ 6 h ⊥ 2 H s-1 x,v + 2 ε 2 ν Λ 1 δ ν Λ 0 -ν Λ 5 ∂ j l h ⊥ 2 Λ .
For the three remaining terms we will apply Cauchy-Schwarz inequality and use the properties of π L concerning v-derivatives and multiplications by a polynomial in v. First
- 2 ε v • ∂ j l π L (∇ x h), ∂ j l h ⊥ L 2 x,v D ε v • ∂ j l π L (∇ x h) 2 L 2 x,v + 1 Dε ∂ j l h ⊥ 2 L 2 x,v DC πs ε ∂ 0 l (∇ x h) 2 L 2 x,v + ν Λ 1 ν Λ 0 Dε ∂ j l h ⊥ 2 Λ DC πs ε |l ′ |=s ∂ 0 l ′ h 2 L 2 x,v + ν Λ 1 ν Λ 0 Dε ∂ j l h ⊥ 2 Λ , if |j| = 1 DC πs ε |l ′ | s-1 ∂ 0 l ′ h 2 L 2 x,v + ν Λ 1 ν Λ 0 Dε ∂ j l h ⊥ 2 Λ , if |j| > 1,
where we used that |l| = |s| -|j|. Then
- 2 ε ∂ j-δ i l+δ i h, ∂ j l h ⊥ L 2 x,v D ′ ε ∂ j-δ i l+δ i h 2 L 2 x,v + ν Λ 1 ν Λ 0 D ′ ε ∂ j l h ⊥ 2 Λ
In the case where |j| > 1 we can also use that ∂
j-δ i l+δ i h 2 L 2
x,v can be decomposed thanks to π L and its orthogonal projector. Then the fluid part is controlled by the x-derivatives only. And finally
2 ε ∂ j l π L (v • ∇ x h), ∂ j l h ⊥ L 2 x,v D ε ∂ j l π L (v • ∇ x h) 2 L 2 x,v + 1 Dε ∂ j l h ⊥ 2 L 2 x,v DC πs ε ∂ 0 l ∇ x h 2 L 2 x,v + ν Λ 1 Dν Λ 0 ε ∂ j l h ⊥ 2 Λ DC πs ε |l ′ |=s ∂ 0 l ′ h 2 L 2 x,v + ν Λ 1 ν Λ 0 Dε ∂ j l h ⊥ 2 Λ , if |j| = 1 DC πs ε |l ′ | s-1 ∂ 0 l ′ h 2 L 2 x,v + ν Λ 1 ν Λ 0 Dε ∂ j l h ⊥ 2 Λ , if |j| > 1,
We are now able to combine all those estimates to get an upper bound of the time-derivative we are looking at. We can also give to different bounds, depending on the size |j|. We also used that the number of i such that c i (j) > 0 is less than d.
In the case |j|
> 1, d dt ∂ j l h ⊥ 2 L 2 x,v 2 ε 2 ν Λ 1 δ ν Λ 0 -ν Λ 5 + ν Λ 1 ν Λ 0 ε 1 D + d D ′ + 1 D + 1 D 2 ∂ j l h ⊥ 2 Λ + D ′ ν Λ 1 2ν Λ 0 ε i,c i (j)>0 ∂ j-δ i l+δ i h ⊥ 2 Λ + DC πs 2ε + D ′ C πs ε + DC πs ε |l ′ | s-1 ∂ 0 l ′ h 2 L 2 x,v + 2(C(δ) + ν Λ 6 ) ε 2 h ⊥ 2 H s-1 x,v + D 2 ε G s x,v (h, h) 2 .
And in the case |j| = 1,
d dt ∂ δ i l-δ i h ⊥ 2 L 2 x,v 2 ε 2 ν Λ 1 δ ν Λ 0 -ν Λ 5 + ν Λ 1 ν Λ 0 ε 1 D + 1 D ′ + 1 D + 1 D 2 ∂ δ i l-δ i h ⊥ 2 Λ + DC πs ε + D ′ ε + DC πs ε |l ′ |=s ∂ 0 l ′ h 2 L 2 x,v + 2(C(δ) + ν Λ 6 ) ε 2 h ⊥ 2 H s-1 x,v + D 2 ε G s x,v (h, h) 2 .
By taking
D = D = 9ν Λ 1 ε/ν Λ 0 ν Λ 5 , D 2 = 3ε/ν Λ 5 , δ = ν Λ 0 ν Λ 5 /6ν Λ 1 and D ′ = 9ν Λ 1 ε/ν Λ 0 ν Λ 5 , if |j| = 1, or D ′ = 9ν Λ 1 dε/ν Λ 0 ν Λ 5 , if |j| > 1,
δ i l-δ i h, ∂ 0 l h L 2 x,v
. By integrating by part in x then in v we obtain the following equality on the evolution of the scalar product.
d dt ∂ δ i l-δ i h, ∂ 0 l h L 2 x,v = 2 ∂ δ i l-δ i G ε (h), ∂ 0 l h L 2 x,v + 2 ε ∂ δ i l-δ i Γ(h, h), ∂ 0 l h L 2 x,v .
We will bound above the first term as in the previous case and for the second term involving Γ we use (H4) and Young's inequality with a constant D 3 > 0. Moreover, we decompose ∂ 0 l h into its fluid part and its microscopic part and we apply (3.4) on the fluid part. This yields
2 ∂ δ i l-δ i Γ(h, h), ∂ 0 l h L 2 x,v D 3 G s x,v (h, h) 2 + 1 D 3 ∂ 0 l h ⊥ 2 Λ + C π D 3 ∂ 0 l h 2 L 2 x,v .
Finally we obtain an upper bound for the time-derivative:
d dt ∂ δ i l-δ i h, ∂ 0 l h L 2 x,v C L η ε 2 + 1 D 3 ∂ 0 l h ⊥ 2 Λ + C L ηε 2 ∂ δ i l-δ i h 2 Λ + C π εD 3 - 1 ε ∂ 0 l h 2 L 2 x,v + D 3 ε G s x,v (h, h) 2 .
Now we can use the properties of π L concerning the v-derivatives, equation (3.3), the equivalence of norm under the projection π L , equation (3.4), and Poincare inequality get the following upper bound:
∂ δ i l-δ i h 2 Λ 2 ∂ δ i l-δ i h ⊥ 2 Λ + 2 ∂ δ i l-δ i π L (h) 2 Λ 2 ∂ δ i l-δ i h ⊥ 2 Λ + 2C πs C π ∂ 0 l-δ i (h) 2 L 2 x,v 2 ∂ δ i l-δ i h ⊥ 2 Λ + 2C πs C π |l ′ | s-1 ∂ 0 l ′ h 2 L 2 x,v .
Therefore,
d dt ∂ δ i l-δ i h, ∂ 0 l h L 2 x,v C L η ε 2 + 1 D 3 ∂ 0 l h ⊥ 2 Λ + 2C L ηε 2 ∂ δ i l-δ i h ⊥ 2 Λ + C π εD 3 - 1 ε ∂ 0 l h 2 L 2 x,v + 2C L C πs C π ηε 2 |l ′ | s-1 ∂ 0 l ′ h 2 L 2 x,v + D 3 ε G s x,v (h, h) 2 .
We finally define η = 8eC L C πs C π d/ε, with e > 1, and D 3 = 2C π to yield equation (3.18).
Appendix C. Proof of the hydrodynamical limit lemmas
In this section we are going to prove all the different lemmas used in section 8.
All along the demonstration we will use this inequality:
(C.1) ∀t > 0, k ∈ N * , q 0, p > 0, t q k 2p e -atk 2 C p (a)t q-p .
C.
T 0 U ε 0j h in dt = n∈Z d -{0} e in.x T 0 e iα j t|n| ε -β j t|n| 2 dt P 0j n |n| ĥin (n, v) = n∈Z d -{0} e in.x ε iα j |n| -εβ j |n| 2 e iα j T |n| ε -β j T |n| 2 -1 P 0j ĥin (n, v).
The Fourier transform is an isometry in L 2
x and therefore
T 0 U ε 0j h in dt 2 L 2 x L 2 v ε 2 n∈Z d -{0} 2 α 2 j |n| 2 + ε 2 β 2 j |n| 4 P 0j n |n| ĥin (n, •) 2 L 2 v .
Finally, we know that, like e 0j , P 0j is continuous on the compact S d-1 and so is bounded. But the latter is a linear operator acting on L 2 v and therefore it is bounded by M 0j in the operator norm on L 2 v . Thus
T 0 U ε 0j h in dt 2 L 2 x L 2 v ε 2 M 2 0j α 2 j n∈Z d -{0} ĥin (n, •) 2 L 2 v ε 2 M 2 0j α 2 j h in (•, •) 2 L 2 x L 2 v ,
which is the expected result. Now, let us look at the L 2
x -norm of this operator, to see how the torus case is different from the case R d studied in [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF] and [START_REF] Ellis | The first and second fluid approximations to the linearized Boltzmann equation[END_REF]. Consider a direction n 1 in the Fourier transform space of the torus and define φ n 1 = F -1
x (e in 1 ). We have the following equality
U ε 0j h in , φ n 1 L 2 x = Ûε 0j ĥin , φn 1 L 2 n = e iα j t|n 1 | ε -β j t|n 1 | 2 P 0j n 1 |n 1 | ĥin (n 1 , v).
If we do not integrate in time, one can easily see that this expression cannot have a limit as ε tends to 0 if P 0j n 1 |n 1 | ĥin (n 1 , v) = 0, and so we cannot even have a weak convergence. The difference with the whole space case is this possibility to single out one mode in the frequency space in the case of the torus. This leads to the possible existence of periodic function at a given frequency, the norm of which will never decrease. This is impossible in the case of a continuous Fourier space, as in R d , and well described by the Riemann-Lebesgue lemma.
Therefore we have a convergence without averaging in time if and only if P 0j n 1 |n 1 | ĥin (n 1 , v) = 0, for all j = ±1 and all direction n 1 . This means that for all j = ±1 and all n 1 , e 0j If we take T > 0, by Parseval identity we get
T 0 U ε 1j h in dt 2 L 2 x L 2 v = n∈Z d -{0} χ |εn n 0 | T 0 e iα j t|n| ε -β j t|n| 2 e t ε 2 γ j (|εn|) -1 dt 2 P 0j ĥin 2 L 2 v .
But then we can use the fact that |e a -1| |a| e |a| , the inequalites satisfied by γ j and the computational inequality (C.1) to obtain which is independent of n and is written Iε. Therefore we have the expected inequality, by using the continuity of P 0j ,
T 0 U ε 1j h in dt 2 L 2 x L 2 v ε 2 I 2 M 2 0j h in 2 L 2 x L 2 v .
The last two inequalities we want to show comes from Parseval's identity, the properties of γ j and the computational inequality (C.1):
U ε 1j h in 2 L 2 x L 2 v = n∈Z d -{0}
χ |εn| n 0 e -2β j t|n| 2 e t ε 2 γ j |εn| -
1 2 P 0j n |n| ĥin 2 L 2 v M 2 0j C 2 γ ε 2 n∈Z d -{0}
χ |εn| n 0 t 2 |n| 6 e -β j t|n| 2 ĥin Finally, if we integrate in t between 0 and +∞ we obtain the expected second inequality of the lemma. If we merely bound e -β j t 2 |n| 2 by one and use the fact that χ |εn| n 0 1 and χ |εn| n 0 ε 2 |n| 2 n 2 0 we obtain the third inequality of the lemma for δ = 1 and δ = 0. Then by interpolation we obtain the general case for 0 δ 1.
The term U ε 2j : Fix T > 0. By Parseval's identity we have where we used the inequalities satisfied by γ and integration in time.
T 0 U ε 2j h in dt 2 L 2 x L 2 v = n∈Z d -{0}
Then, P 1j is continuous on the compact [-n 0 , n 0 ] × S d-1 and so is bounded, as an operator acting on L 2 v , by M 1j > 0. Hence, Parseval's identity offers us the first inequality of the lemma.
The last two inequalities are just using Parseval's identity and the continuity of P 1j . Indeed, We recognize here the same form of inequality (C.2). Thus, we obtain the last two inequalities of the statement in the same way.
U ε 2j h in 2 L 2 x L 2 v = n∈Z d -{0}
The term U ε 3j : We remind the reader that Ûε 3j = χ |εn| n 0 -1 e iα j t|n| ε -β j t|n| 2 P 0j n |n| .
We have the following inequality
χ |εn| n 0 -1 εn n 0 .
Therefore, replacing P 1j by 1 n 0 P 0j and β j by 2β j (since t ε 2 γ j (|εn|)
tβ j 2 |n|
2 ) in the proof made for U ε 2j we obtain the expected three inequalities for U ε 3j h in , the last one only with δ = 1.
To have the last inequality in δ, it is enough to bound χ |εn| n 0 -1 by 1 and then using the continuity of P 0j to have the result for δ = 0. Finally, we interpolate to get the general result for all 0 δ 1. C.1.3. Proof of Lemma 8.8. Thanks to Theorem 8.3 we have that
U ε R h in 2 L 2 x L 2 v = ÛR (t/ε 2 , εn, v) ĥin 2 L 2 n L 2 v C 2 R e -2 σt ε 2 h in 2 L 2 x L 2 v .
But then we have, thanks to the technical lemma C.1, that e -2 σt ε 2
C 1/2 (2σ) ε √ t , which gives us the last two inequalities we wanted. For the first inequality, a mere Cauchy-Schwarz inequality yields
T 0 U ε R h in dt 2 L 2 x L 2 v T T 0 U ε R h in 2 L 2 x L 2 v dt,
which gives us the first inequality by integrating in t.
Now, let us suppose that we have the strong convergence down to t = 0. At t = 0 we can write that e tGε = Id and therefore that: We have the strong convergence down to 0 as ε tends to 0. Therefore, taking the latter equality at ε = 0 we have, because Then ÛR ĥin tends to 0 as ε tends to 0 in C([0, +∞), L 2 x L 2 v ) if and only if h in belongs to Ker(L). In that case, we can use the proof of Lemma 6.2 of [START_REF] Bardos | The classical incompressible Navier-Stokes limit of the Boltzmann equation[END_REF] in which they noticed that U ε R (t, x, v) = e tGε U ε R (0, x, v) = e tGε F -1
x Id -χ |εn| n 0 2 j=-1 P j (εn) F x .
Thanks to that new form we have that, if h in = π L (h in ),
U ε R (t, x, v)h in = e tGε
ψ ε ij (u ε ) = t 0 n∈Z d -{0}
g(t, s, k, x)P (n)û ε (s, k, v)ds, with P (n) being a projector in L 2 v , bounded uniformely in n.
Looking at the dual definition of the norm of a function in L 2
x,v , we can consider f in C ∞ c T d × R d such that f L 2 x,v = 1 and take the scalar product with ψ ε ij (u ε ). This yields, since P is a projector and thus symmetric,
ψ ε ij (u ε ), f L 2 x,v = T d t 0 n∈Z d -{0}
g(t, s, k, x) P (n)û ε , f
L 2 v ds = T d t 0 n∈Z d -{0}
g(t, s, k, x) ûε , P (n)f L 2 v ds. (C. [START_REF] Bardos | Fluid dynamic limits of kinetic equations. I. Formal derivations[END_REF] We are working in L 2
x L 2 v in order to simplify computations as they are exactly the same in higher Sobolev spaces. Therefore, we can assume that hypothesis (H4) is still valid in L 2 v without loss of generality. This means
(C.4) ûε , P (n)f L 2 v h L 2 x L 2 v h Λv P (n)f Λv .
Finally, in terms of Fourier coefficients in x, P (n) is a projector in L 2 v and uniformely bounded in n as an operator in L 2 v . Thus, combining (C.4) and the definition of the functional E, (6.2), we see that
T 0 fε 2 L 2 x L 2 v dt is a continuous operator from C(R + , L 2 x L 2 v , E(•)) to C(R + , L 2 x L 2 v , • L 2 x L 2 v
). Looking at (C.3), we can consider without loss of generality that the following holds (even for the remainder term) for all T > 0:
ψ ε ij (u ε ) = t 0 n∈Z d -{0}
g(t, s, k, x) fε (s, k, v)ds,
with T 0 fε 2 L 2 x L 2 v dt M ij E(h ε ) 2 .
C.2.2. Proof of Lemma 8.9. For the first inequality, fix T > 0 and integrate by part in t to obtain where we used the subsection above and Parseval's identity again. This is exactly the expected result.
C.2.3. Proof of Lemma 8.11. We divide this proof in three paragraphes, each of them studying a different term.
The term $\psi^\varepsilon_{1j}$: We will just prove the last two inequalities; merely applying the Cauchy-Schwarz inequality then leads to the first one. Fix $t > 0$. By a change of variable we can write
\[
\psi^\varepsilon_{1j}(u_\varepsilon) = \sum_{n \in \mathbb{Z}^d - \{0\}} e^{in\cdot x}\, \chi_{|\varepsilon n| \leq n_0} \int_0^t e^{i\frac{\alpha_j s}{\varepsilon}|n| - \beta_j s |n|^2 + \frac{s}{\varepsilon^2}\gamma_j(|\varepsilon n|)}\, \big(\gamma_j(|\varepsilon n|) - 1\big)\, |n|\, \hat f_\varepsilon(t-s)\, ds.
\]
We can obtain the result by using Parseval's identity, denoting by $C$ a constant independent of $\varepsilon$ and $T$, the continuity of $P_{1j}$ and the computational inequality (C.1). If we merely bound $e^{-\frac{\beta_j(t-s)}{2}|n|^2}$ by one and use the fact that $\chi_{|\varepsilon n| \leq n_0} \leq 1$ and $\chi_{|\varepsilon n| \leq n_0} \leq \varepsilon^2 |n|^2/n_0^2$, we obtain the third inequality of the lemma for $\delta = 1$ and $\delta = 0$. Then by interpolation we obtain the general case for $0 \leq \delta \leq 1$. If we integrate in $t$ between $0$ and a fixed $T > 0$, a mere integration by parts yields the expected control on the $L^2_{t,x,v}$-norm. Finally, from the latter control and a Cauchy-Schwarz inequality we deduce the first inequality.

The term $\psi^\varepsilon_{2j}$: As in the case of $\psi^\varepsilon_{1j}$, we are going to prove the third inequality only. Fix $T > 0$; a change of variable gives us
\[
\psi^\varepsilon_{2j}(u_\varepsilon) = \sum_{n \in \mathbb{Z}^d - \{0\}} e^{in\cdot x}\, \chi_{|\varepsilon n| \leq n_0} \int_0^T e^{i\frac{\alpha_j s}{\varepsilon}|n| - \beta_j s |n|^2 + \frac{s}{\varepsilon^2}\gamma_j(|\varepsilon n|)}\, \varepsilon\, |n|^2\, \hat f_\varepsilon(T-s)\, ds,
\]
and we can see that
\[
\Big| \int_0^T e^{i\frac{\alpha_j s}{\varepsilon}|n| - \beta_j s |n|^2 + \frac{s}{\varepsilon^2}\gamma_j(|\varepsilon n|)}\, \varepsilon |n|^2\, \hat f_\varepsilon(T-s)\, ds \Big| \leq \varepsilon\, |n| \int_0^T e^{-\frac{\beta_j s}{2}|n|^2}\, |n|\, \big|\hat f_\varepsilon(T-s)\big|\, ds.
\]
This bound is of the same form as equation (C.5). Therefore we have the same result.
The term $\psi^\varepsilon_{3j}$: As above, we will show the third inequality only. Fix $T > 0$; we can write
\[
\psi^\varepsilon_{3j}(u_\varepsilon) = \sum_{n \in \mathbb{Z}^d - \{0\}} e^{in\cdot x}\, \big(\chi_{|\varepsilon n| \leq n_0} - 1\big) \int_0^T e^{i\frac{\alpha_j s}{\varepsilon}|n| - \beta_j s |n|^2}\, |n|\, \hat f_\varepsilon(T-s, n, v)\, ds.
\]
Looking at the fact that $\big|\chi_{|\varepsilon n| \leq n_0} - 1\big| \leq \varepsilon |n|/n_0$, we find the same kind of inequality as equation (C.5). Thus, we reach the same result.

C.2.4. Proof of Lemma 8.12. We remind the reader that
\[
\Psi^\varepsilon_R(u_\varepsilon) = \int_0^t \frac{1}{\varepsilon}\, U^\varepsilon_R(t-s)\, f_\varepsilon(s)\, ds,
\]
and that, by Theorem 8.3,
\[
\big\| U^\varepsilon_R f_\varepsilon \big\|^2_{L^2_x L^2_v} \leq C_R^2\, e^{-2\sigma t/\varepsilon^2}\, \big\| f_\varepsilon \big\|^2_{L^2_x L^2_v}.
\]
Hence, a Cauchy-Schwarz inequality gives us the third inequality for $\big\| \psi^\varepsilon_R(u_\varepsilon)(T) \big\|^2_{L^2_x L^2_v}$, and then the two other inequalities stated above.

C.2.5. Proof of Lemma 8.14. We remind the reader that $\Psi(u) = F_x^{-1}\big[\psi^\varepsilon_{00}(u) + \psi^\varepsilon_{02}(u)\big] F_x$. As above, and because in that case $\alpha_j = 0$, we can write $\psi^\varepsilon_{0j}(u_\varepsilon - u)(T)$, for some $T > 0$, and apply a Cauchy-Schwarz inequality:
\[
\big\| \psi^\varepsilon_{0j}(u_\varepsilon - u) \big\|^2_{L^2_x L^2_v}(T) = \sum_{n \in \mathbb{Z}^d - \{0\}} |n|^2 \int_{\mathbb{R}^d} \Big| \int_0^T e^{-s\beta_j |n|^2}\, P_{1j}\, \Gamma(h_\varepsilon - h, h_\varepsilon + h)\, ds \Big|^2 dv \leq \frac{M^2_{1j}}{\beta_j^2}\, \sup_{t \in [0,T]} \big\| \Gamma(h_\varepsilon - h, h_\varepsilon + h) \big\|^2_{L^2_x L^2_v}.
\]
But because $\mathbb{T}^d$ is bounded in $\mathbb{R}^d$ and thanks to (H4) and the boundedness of $(h_\varepsilon)_\varepsilon$ and $h$ (both bounded by $M$) in $H^s_x L^2_v$ (Theorem 2.3), we have the following control:
\[
\big\| \Gamma(h_\varepsilon - h, h_\varepsilon + h) \big\|^2_{L^2_x L^2_v} \leq 4 M^2 C_\Gamma^2\, \mathrm{Volume}(\mathbb{T}^d)\, \big\| h_\varepsilon - h \big\|_{L^\infty_x L^2_v}.
\]
Therefore we obtain the last inequality, and the first two just come from the Cauchy-Schwarz inequality.
Proposition 3.1. Let $a$ and $b$ be in $\mathbb{R}^*$ and consider the operator $G = aL - b\, v \cdot \nabla_x$ acting on $H^1_{x,v}$. If $L$ satisfies (H1) and (H3) then $\mathrm{Ker}(G) = \mathrm{Ker}(L)$.

B.1. Proof of Proposition 3.1. We are about to prove the following proposition.

Proposition B.1. Let $a$ and $b$ be in $\mathbb{R}^*$ and consider the operator $G = aL - b\, v \cdot \nabla_x$ acting on $H^1_{x,v}$. If $L$ satisfies (H1) and (H3) then $\mathrm{Ker}(G) = \mathrm{Ker}(L)$.
The author was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis.
"739558"
] | [
"236275"
] |
INSTANTANEOUS FILLING OF THE VACUUM FOR THE FULL BOLTZMANN EQUATION IN CONVEX DOMAINS

M. Briant

Keywords: Boltzmann equation, Transport equation in convex domains, Exponential lower bound, Explicit, Specular boundary conditions
The Boltzmann equation models the dynamics of a rarefied gas in which the only interactions taken into account are binary collisions. More precisely, it describes the time evolution of $f(t,x,v)$, the distribution of particles in position and velocity, starting from an initial distribution $f_0(x,v)$.
We investigate the case where $\Omega$ is either the torus or a $C^2$ convex bounded domain. The Boltzmann equation reads
\[
\forall t \geq 0,\ \forall (x,v) \in \Omega \times \mathbb{R}^d, \quad \partial_t f + v \cdot \nabla_x f = Q(f,f), \tag{1.1}
\]
\[
\forall (x,v) \in \Omega \times \mathbb{R}^d, \quad f(0,x,v) = f_0(x,v),
\]
with $f$ being periodic in the case of $\Omega = \mathbb{T}^d$, the torus, or with $f$ satisfying the specular reflections boundary condition if $\Omega$ is a $C^2$ convex bounded domain:
\[
\forall (x,v) \in \partial\Omega \times \mathbb{R}^d, \quad f(t,x,v) = f(t,x,R_x(v)). \tag{1.2}
\]
$R_x$, for $x$ on the boundary of $\Omega$, stands for the specular reflection at that point of the boundary. One can compute, denoting by $n(x)$ the outward normal at a point $x$ on $\partial\Omega$,
\[
\forall v \in \mathbb{R}^d, \quad R_x(v) = v - 2\, (v \cdot n(x))\, n(x).
\]
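Since rebounds will later be handled through the fact that they preserve only the norm of the velocity, let us record the elementary computation behind this fact: with $|n(x)| = 1$,
\[
|R_x(v)|^2 = |v|^2 - 4\, (v \cdot n(x))^2 + 4\, (v \cdot n(x))^2\, |n(x)|^2 = |v|^2,
\]
so $|R_x(v)| = |v|$; substituting $R_x(v)$ back into the formula also shows that $R_x$ is an involution, $R_x(R_x(v)) = v$, since $R_x(v) \cdot n(x) = -\, v \cdot n(x)$.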
The quadratic operator $Q(f,f)$ is local in time and space and is given by
\[
Q(f,f) = \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} B\big(|v - v_*|, \cos\theta\big)\, \big[ f' f'_* - f f_* \big]\, dv_*\, d\sigma,
\]
where $f'$, $f_*$, $f'_*$ and $f$ are the values taken by $f$ at $v'$, $v_*$, $v'_*$ and $v$ respectively. Define:
\[
v' = \frac{v + v_*}{2} + \frac{|v - v_*|}{2}\, \sigma, \qquad v'_* = \frac{v + v_*}{2} - \frac{|v - v_*|}{2}\, \sigma, \qquad \cos\theta = \Big\langle \frac{v - v_*}{|v - v_*|},\, \sigma \Big\rangle.
\]
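These formulas encode the conservation of momentum and kinetic energy in a binary collision: summing and squaring the definitions above, and using $|\sigma| = 1$ so that the cross terms cancel,
\[
v' + v'_* = v + v_*, \qquad |v'|^2 + |v'_*|^2 = 2\, \Big| \frac{v + v_*}{2} \Big|^2 + 2\, \Big| \frac{|v - v_*|}{2}\, \sigma \Big|^2 = |v|^2 + |v_*|^2.
\]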
The collision kernel $B \geq 0$ contains all the information about the interaction between two particles and is determined by physics (see [Cercignani, The Boltzmann equation and its applications] or [Cercignani, The mathematical theory of dilute gases] for a formal derivation for the hard sphere model of particles). In this paper we shall only be interested in the case of $B$ satisfying the following product form
\[
B\big(|v - v_*|, \cos\theta\big) = \Phi\big(|v - v_*|\big)\, b(\cos\theta), \tag{1.3}
\]
which is a common assumption as it is more convenient and also covers a wide range of physical applications. Moreover, we shall assume that $\Phi$ satisfies either
\[
\forall z \in \mathbb{R}, \quad c_\Phi\, |z|^\gamma \leq \Phi(z) \leq C_\Phi\, |z|^\gamma \tag{1.4}
\]
or a mollified assumption
\[
\forall\, |z| \geq 1, \quad c_\Phi\, |z|^\gamma \leq \Phi(z) \leq C_\Phi\, |z|^\gamma; \qquad \forall\, |z| \leq 1, \quad c_\Phi \leq \Phi(z) \leq C_\Phi, \tag{1.5}
\]
$c_\Phi$ and $C_\Phi$ being strictly positive constants and $\gamma$ in $(-d, 1]$. The collision kernel is said to be "hard potential" in the case of $\gamma > 0$, "soft potential" if $\gamma < 0$ and "Maxwellian" if $\gamma = 0$.

Finally, we shall consider $b$ to be a continuous function of $\theta$ on $(0, \pi]$, strictly positive near $\theta \sim \pi/2$, which satisfies
\[
b(\cos\theta)\, \sin^{d-2}\theta \;\underset{\theta \to 0^+}{\sim}\; b_0\, \theta^{-(1+\nu)} \tag{1.6}
\]
for $b_0 > 0$ and $\nu$ in $(-\infty, 2)$. The case when $b$ is locally integrable, $\nu < 0$, is referred to as Grad's cutoff assumption (first introduced in [Grad, Principles of the kinetic theory of gases]) and $B$ will then be said to be a cutoff collision kernel. The case $\nu \geq 0$ will be referred to as the non-cutoff case.
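The threshold $\nu = 0$ in (1.6) is exactly the borderline of integrability of the angular kernel: near $\theta = 0$ the measure $b(\cos\theta)\, \sin^{d-2}\theta\, d\theta$ behaves like $b_0\, \theta^{-(1+\nu)}\, d\theta$, and
\[
\int_0^{\pi/2} \theta^{-(1+\nu)}\, d\theta < +\infty \quad \Longleftrightarrow \quad \nu < 0,
\]
which is why the constant $n_b$ appearing in Grad's splitting below (see (2.9)) is finite precisely in the cutoff case.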
1.1. Motivations and comparison with previous results. The aim of this article is to show and to quantify the strict positivity of the solutions to the Boltzmann equation when the gas particles move in a bounded domain. This issue has been investigated for a long time since it not only presents a great physical interest but also appears to be of significant importance for the mathematical study of the Boltzmann equation.
Moreover, our results only require some regularity on the solution and no further assumption on its local density, which was assumed to be uniformly bounded from below in previous studies (which is equivalent to assuming a priori either that there is no vacuum or that the solution is strictly positive).
More precisely, we shall prove that solutions to the Boltzmann equation in a $C^2$ convex bounded domain or the torus which have uniformly bounded energy satisfy an immediate exponential lower bound:
\[
\forall t_0 > 0,\ \exists K, C_1, C_2 > 0,\ \forall t \geq t_0,\ \forall (x,v) \in \Omega \times \mathbb{R}^d, \quad f(t,x,v) \geq C_1\, e^{-C_2 |v|^{2+K}},
\]
with $K = 0$ (Maxwellian lower bound) in the case of a collision kernel with angular cutoff.
We emphasize that the present results only require solutions to the Boltzmann equation to be continuous away from the grazing set
\[
\Lambda_0 = \big\{ (x,v) \in \partial\Omega \times \mathbb{R}^d,\ n(x) \cdot v = 0 \big\}, \tag{1.7}
\]
which is a property that is known to hold in the case of specular reflection boundary conditions [Guo, Decay and continuity of the Boltzmann equation in bounded domains] (see also [Guo, Singular solutions of the Vlasov-Maxwell system on a half line] or [Hwang, Global existence for the Vlasov-Poisson system in bounded domains] for boundary value problems for mean-field equations). We also note that more physically relevant boundary conditions are a combination of specular reflections with some diffusion process at the boundary. The same kind of exponential lower bounds, with the same assumptions on the solution, have recently been obtained by the author in the case of Maxwellian diffusion boundary conditions [Briant, Instantaneous exponential lower bound for solutions to the Boltzmann equation with Maxwellian diffusion boundary conditions].
The strict positivity of the solutions to the Boltzmann equation, standing in the form of an exponential lower bound, was already noticed by Carleman in [5] for the spatially homogeneous equation. In his article he proved that such a lower bound is created immediately in time in the case of hard potential kernels with cutoff in dimension 3. More precisely, the radially symmetric solutions he constructed in [5] satisfy an almost Maxwellian lower bound,
\[
\forall t \geq t_0,\ \forall v \in \mathbb{R}^3, \quad f(t,v) \geq C_1\, e^{-C_2 |v|^{2+\varepsilon}}, \qquad C_1, C_2 > 0,
\]
for all $t_0 > 0$ and $\varepsilon > 0$. His breakthrough was to notice that a part $Q^+$ of the Boltzmann operator $Q$ satisfies a spreading property, roughly speaking
\[
Q^+\big(\mathbf{1}_{B(\bar v, r)}, \mathbf{1}_{B(\bar v, r)}\big) \geq C_+\, \mathbf{1}_{B(\bar v, \sqrt{2}\, r)},
\]
with $C_+ < 1$ (see Lemma 3.2 for an exact statement).
The spreading strategy was used by Pulvirenti and Wennberg in [14] to extend the latter inequality to solutions to the spatially homogeneous Boltzmann equation with hard potential and cutoff in dimension 3 with more general initial data. Their contribution was to get rid of the initial boundedness suggested in [5] by Carleman, thanks to the use of an iterative regularity property of the $Q^+$ operator. This property allowed them to immediately create an "upheaval point" which they then spread with the method of Carleman. Moreover, by controlling the decay of $C_+^n$, they obtained an exact Maxwellian lower bound of the form
\[
\forall t \geq t_0,\ \forall v \in \mathbb{R}^3, \quad f(t,v) \geq C_1\, e^{-C_2 |v|^2},
\]
for all $t_0 > 0$.
Finally, Mouhot in [13] dealt with the full Boltzmann equation in the torus. He derived a spreading method that is invariant under the flow of the characteristics, obtaining lower bounds uniformly in space as long as the solution has uniformly bounded density, energy and entropy (for the hard potential case) together with uniform bounds on higher moments (for the soft and Maxwellian potentials case). However, he also implicitly assumed that the initial data had to be bounded from below uniformly in space. He also derived in [13] the same kind of results in the non-cutoff case in the torus: the immediate appearance of an exponential lower bound of the form
\[
\forall t \geq t_0,\ \forall (x,v) \in \mathbb{T}^d \times \mathbb{R}^d, \quad f(t,x,v) \geq C_1(\varepsilon)\, e^{-C_2(\varepsilon) |v|^{K+\varepsilon}},
\]
for all $t_0 > 0$, all $\varepsilon > 0$ and $K = K(\nu)$ with $K(0) = 2$ (thus recovering the cutoff case in the limit). His idea was to split further the $Q$ operator into a cutoff part and a non-cutoff part that is seen as a small perturbation of his original spreading method.
Our results extend those in [13] to the case of a $C^2$ bounded convex domain. Our main contribution is the derivation of a spreading method that remains invariant under the characteristics flow, which, unlike in the torus case, changes the direction of velocities at the boundary. Moreover, we emphasize here that the existence of boundaries implies the existence of grazing collisions against them, where the strategies developed in [14] and [13] fail. We therefore derive a geometrical approach to those problematic trajectories. Furthermore, we do not assume any uniform boundedness of the initial data, but we require the continuity of the solution to the Boltzmann equation. However, if we keep the assumptions made in [13] and further assume that the domain is $C^3$ and strictly convex, then our proofs are constructive.
The quantification of the strict positivity, and above all the appearance of an exponential lower bound, has been seen to be of great mathematical interest thanks to the development of the entropy-entropy production method. This method (see [Villani, A review of mathematical topics in collisional kinetic theory], Chapter 3, and [Villani, Cercignani's conjecture is sometimes true and always almost true]) provides a useful way of investigating the long-time behaviour of solutions to kinetic equations. Indeed, it has been successfully used to prove convergence to equilibrium in non-perturbative settings for the Fokker-Planck equation [Desvillettes-Villani, On the trend to global equilibrium in spatially inhomogeneous entropy-dissipating systems: the linear Fokker-Planck equation] and for the full Boltzmann equation in the torus or in $C^1$ bounded connected domains with specular reflections [Desvillettes-Villani, On the trend to global equilibrium for spatially inhomogeneous kinetic systems: the Boltzmann equation]. This entropy-entropy production method requires (see Theorem 2 in the latter reference) uniform boundedness of moments and Sobolev norms for the solutions to the Boltzmann equation, but also an a priori exponential lower bound of the form $f(t,x,v) \geq C_1 e^{-C_2 |v|^q}$, with $q \geq 2$. Therefore, the present paper allows us to prove that the latter a priori assumption is in fact satisfied in many different cases (see [13], Section 5, for an overview). We also emphasize here that the assumption of continuity of the solution we have made does not reduce the range of applications, since much more regularity is usually required for the entropy-entropy production method. Moreover, our method, unlike the ones developed in [13] and [14], does not require a uniform bound on the local density of solutions, which is not a requirement for the entropy-entropy production method either (see Theorem 2 of the latter reference).
To conclude, we note that our investigations require a deep and detailed understanding of the geometry and properties of characteristic trajectories for the free transport equation. In particular, a geometric approach to grazing collisions against the boundary is derived and is the key ingredient in the study of the strict positivity of solutions to the Boltzmann equation. The existing strategies as well as our improvements are discussed in the next section.

1.2. Our strategy. Our strategy to tackle this issue will follow the method introduced by Carleman [5] together with the idea of Mouhot [13] to find a spreading method that is invariant along the characteristic trajectories. Roughly speaking, we shall build characteristics in a $C^2$ bounded convex domain, create an "upheaval point" (as in [14] and [13]) that we spread and expand uniformly along the characteristics. Finally, once the lower bound can be compared to an exponential one, we reach the expected result.
However, the existence of rebounds against the boundary leads to difficulties. We describe them below and point out how we shall overcome them.
Creating an "upheaval point" was achieved, in [START_REF] Pulvirenti | A Maxwellian lower bound for solutions to the Boltzmann equation[END_REF] and [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], by using an iterated Duhamel formula and a regularity property of the collision operator relying on a uniform lower bound of the local density of the function. But the use of this property requires a uniform control along the characteristics of the density, the energy and the entropy of the solutions to the Boltzmann equation which is natural in the homogeneous case but made Mouhot consider initial datum bounded from below uniformly in space. Our way of dealing with the appearance of the "upheaval point" is rather different but includes more general initial datum. We make the assumption of continuity of solutions to the Boltzmann equation and by compactness arguments we can construct a partition of our phase space where initial localised lower bounds exist, i.e., localised "upheaval points".
The case of the torus studied by Mouhot tells us that an exponential lower bound should arise immediately, and therefore we expect the same to happen as long as the characteristic trajectory is a straight line. Unfortunately, the possibility for a trajectory to remain a line depends on the distance of the starting point from the boundary, which can be arbitrarily small. This thought is the basis of our means for spreading the initial lower bound. We divide our trajectories into two categories: the ones which always stay close to the boundary (grazing trajectories) and the others. For the latter we can spread our lower bound uniformly, as noticed in [13]. The key contribution of our proof is a thorough investigation of the geometry of grazing collisions. We show that their velocity changes little over time and combine this with the spreading property of the collision operator. Notice here that the convexity of $\Omega$ is needed for the study of grazing trajectories.
The last behaviour to notice is the fact that specular reflections completely change velocities but preserve their norm. Therefore, the existence of rebounds against the boundary prevents us from obtaining a uniform spreading method straight from the "upheaval point", unless the bound depends only on the norm of the velocity. Our strategy is to spread the lower bound created at the "upheaval points" independently for grazing and non-grazing trajectories up to the point where the lower bound we obtain depends only on the norm of the velocity. Roughly, our lower bounds will be balls in velocity that can be centred away from the origin, and we shall grow them finitely many times into balls containing the origin, so as to finally be able to generate a uniform spreading method.
Collision kernels satisfying a cutoff property as well as collision kernels with a non-cutoff property will be treated following the strategy described above. The only difference is the decomposition of the Boltzmann bilinear operator Q we consider in each case. In the case of a non-cutoff collision kernel, we shall divide it into a cutoff collision kernel and a remainder. The cutoff part will already be dealt with and a careful control of the L ∞ -norm of the remainder will give us the expected lower bound, smaller than a Maxwellian lower bound.
A preliminary to our study (left in appendix) is to be able to construct the characteristic trajectories associated to the Boltzmann equation with specular reflections in a C 2 bounded convex domain. These trajectories are merely those of the free transport and so can be seen as the movement of a billiard ball inside the boundary of our domain.
Such a free transport in a convex domain has been studied in [Chen, Linear transport equation with specular reflection boundary condition] (see also [16] or [Poritsky, The billiard ball problem on a table with a convex boundary] for geometrical properties) and has been used in kinetic theory by Guo, [Guo, Singular solutions of the Vlasov-Maxwell system on a half line], or Hwang, [Hwang, Regularity for the Vlasov-Poisson system in a convex domain], for instance. Yet, the common feature in [Chen, Linear transport equation], [Guo, Singular solutions], [Guo, Decay and continuity of the Boltzmann equation in bounded domains] and [Hwang, Regularity] is that their assumptions on the boundary always lead to clear rebounds of the characteristic trajectories. That is to say, the absorption phenomenon of [Chen, Linear transport equation], the electromagnetic field in [Guo, Singular solutions] and [Hwang, Regularity], or the smooth strict convexity assumption used in [Guo, Decay and continuity], prevent the characteristics from rolling on the boundary, which is one of the possible behaviours we have to take into account in our general setting. As briefly mentioned in the introduction of [16], the behaviour at some specific boundary points is mathematically quite unexpected, even if it is of no physical relevance. We thus classify all the possible outcomes of a rebound against the boundary and study them carefully to analytically build the characteristics for the free transport equation in our domain $\Omega$.

Finally, we need to control the number of rebounds that can happen in a finite time. In [16], Tabachnikov focuses on the footprints on the boundary of the trajectories of billiard balls and shows that the set of initial conditions leading to infinitely many rebounds on the boundary has measure zero. We extend this to the whole trajectory (see Appendix A.1, Proposition A.4), not only its footprints on the boundary, allowing us to consider only finitely many rebounds in finite time and to have an analytic formula for the characteristics, which we shall use throughout the article.
Notice that all this study of the free transport equation will be done in the case of a merely $C^1$ bounded domain, which extends the results of [Guo, Decay and continuity of the Boltzmann equation in bounded domains].
1.3. Organisation of the paper. Section 2 is dedicated to the statement and the description of the main results proved in this article. It contains four different parts. Section 2.1 defines all the notations which will be used throughout the article.
As mentioned above, we shall investigate in detail the characteristics and the free transport equation in a C 1 bounded domain. Section 2.2 mathematically formulates the intuitive ideas of trajectories.
The last subsections, 2.3 and 2.4, are dedicated to a mathematical formulation of the results related to the lower bound in, respectively, the cutoff case and the non-cutoff case, described in Section 1.2. They also define the concept of mild solutions to the Boltzmann equation in each case.

Sections 3 to 6 focus on the Maxwellian lower bound in the cutoff case. They are divided into the four main arguments of the proof.
Following our strategy, Section 3 creates the localised "upheaval points" whereas Section 4 and Section 5 spread them along non-grazing and grazing trajectories respectively.
Section 6 concludes by describing the immediate appearance of a lower bound depending only on the norm of the velocity (Proposition 2.5) as well as proving the immediate Maxwellian lower bound (proof of Theorem 2.3).
Finally, we deal with non-cutoff collision kernels in Section 7 where we prove the immediate appearance of an exponential lower bound (Theorem 2.7). The proof follows exactly the same steps as in the case of cutoff kernels and is thus divided into Section 7.1, where we construct a lower bound only depending on the norm of the velocity, and Section 7.2, where we derive the exponential lower bound.
As mentioned before, we need to study the free transport equation and the different important properties of the characteristics. Appendix A formulates these issues, investigates all the different behaviours of rebounds against the boundary (Section A.1), builds the characteristics and derives their properties (Section A.2) and solves the free transport equation (Section A.3).
2. Main results

We begin with the notations we shall use throughout this article.
2.1. Notations. We denote $\langle \cdot \rangle = \sqrt{1 + |\cdot|^2}$ and $y^+ = \max\{0, y\}$, the positive part of $y$. This study will hold in specific functional spaces regarding the $v$ variable that we describe here and use throughout the sequel. Most of them are based on natural Lebesgue spaces $L^p_v = L^p\big(\mathbb{R}^d\big)$ with a weight:
• for $p \in [1,\infty]$ and $q \in \mathbb{R}$, $L^p_{q,v}$ is the Lebesgue space with the norm
\[
\|f\|_{L^p_{q,v}} = \big\| \langle v \rangle^q f \big\|_{L^p_v},
\]
• for $p \in [1,\infty]$ and $k \in \mathbb{N}$ we use the Sobolev spaces $W^{k,p}_v$ defined by the norm
\[
\|f\|_{W^{k,p}_v} = \Big( \sum_{|s| \leq k} \big\| \partial^s f(v) \big\|^p_{L^p_v} \Big)^{1/p},
\]
with the usual convention $H^k_v = W^{k,2}_v$.
In what follows, we are going to need bounds on some physical observables of solutions to the Boltzmann equation (1.1). We consider here a function $f(t,x,v) \geq 0$ defined on $[0,T) \times \Omega \times \mathbb{R}^d$ and we recall the definitions of its local hydrodynamical quantities:
• its local energy $e_f(t,x) = \int_{\mathbb{R}^d} |v|^2 f(t,x,v)\, dv$,
• its local weighted energy $\tilde e_f(t,x) = \int_{\mathbb{R}^d} |v|^{\tilde\gamma} f(t,x,v)\, dv$, where $\tilde\gamma = (2+\gamma)^+$,
• its local $L^p$ norm ($p \in [1,+\infty)$) $l^p_f(t,x) = \|f(t,x,\cdot)\|_{L^p_v}$,
• its local $W^{2,\infty}$ norm $w_f(t,x) = \|f(t,x,\cdot)\|_{W^{2,\infty}_v}$.
Our results depend on uniform bounds on those quantities and therefore, to shorten calculations, we will use
\[
E_f = \sup_{(t,x) \in [0,T) \times \Omega} e_f(t,x), \qquad \tilde E_f = \sup_{(t,x) \in [0,T) \times \Omega} \tilde e_f(t,x), \qquad L^p_f = \sup_{(t,x) \in [0,T) \times \Omega} l^p_f(t,x), \qquad W_f = \sup_{(t,x) \in [0,T) \times \Omega} w_f(t,x).
\]
In our theorems we give a priori lower bound results for solutions to (1.1) satisfying some properties concerning their local hydrodynamical quantities. Those properties differ depending on which case of collision kernel we consider. We will take them as assumptions in our proofs and they are the following.
• In the case of hard or Maxwellian potentials with cutoff ($\gamma \geq 0$ and $\nu < 0$):
\[
E_f < +\infty. \tag{2.1}
\]
• In the case of a singularity of the kinetic collision kernel ($\gamma \in (-d, 0)$) we shall make the additional assumption
\[
L^{p_\gamma}_f < +\infty, \tag{2.2}
\]
where $p_\gamma > d/(d+\gamma)$.
• In the case of a singularity of the angular collision kernel ($\nu \in [0,2)$) we shall make the additional assumption
\[
W_f < +\infty, \qquad \tilde E_f < +\infty. \tag{2.3}
\]
As noticed in [13], in some cases several assumptions might be redundant. Furthermore, in the case of the torus with periodic conditions or of a bounded domain with specular boundary reflections, solutions to (1.1) also satisfy the following conservation laws for the total mass and the total energy (see [Cercignani, The Boltzmann equation and its applications], [Cercignani, The mathematical theory of dilute gases] or [Villani, A review of mathematical topics in collisional kinetic theory] for instance):
\[
\exists M, E \geq 0,\ \forall t \in \mathbb{R}^+, \quad \int_\Omega \int_{\mathbb{R}^d} f(t,x,v)\, dxdv = M, \qquad \int_\Omega \int_{\mathbb{R}^d} |v|^2 f(t,x,v)\, dxdv = E. \tag{2.4}
\]
2.2. Results about the free transport equation. Our investigations start with the study of the characteristics of the free transport equation. We only focus on the case where $\Omega$ is not the torus (the characteristics in the torus being merely straight lines), but we will use the same notations in both cases. This is achieved by the following theorem.

Theorem 2.1. Let $\Omega$ be an open, bounded and $C^1$ domain in $\mathbb{R}^d$. Let $u_0 : \Omega \times \mathbb{R}^d \to \mathbb{R}$ be $C^1$ in $x \in \Omega$ and in $L^2_{x,v}$. The free transport equation with specular reflections reads
\[
\forall t \geq 0,\ \forall (x,v) \in \Omega \times \mathbb{R}^d, \quad \partial_t u(t,x,v) + D_x(v)(u)(t,x,v) = 0, \tag{2.5}
\]
\[
\forall (x,v) \in \Omega \times \mathbb{R}^d, \quad u(0,x,v) = u_0(x,v), \tag{2.6}
\]
\[
\forall (x,v) \in \partial\Omega \times \mathbb{R}^d, \quad u(t,x,v) = u(t,x,R_x(v)), \tag{2.7}
\]
where $R_x$ stands for the specular reflection at a point $x$ and $D_x(v)$ is the directional derivative at $x$ in the direction of $v$. Then this equation has a unique solution $u : \mathbb{R}^+ \times \Omega \times \mathbb{R}^d \to \mathbb{R}$ which is $C^1$ in time, admits a directional derivative in space in the direction of $v$ and is in $L^2_{x,v}$. Moreover, for all $(t,x,v)$ in $\mathbb{R}^+ \times \Omega \times \mathbb{R}^d$, there exist $x_{fin}(t,x,v)$, $v_{fin}(t,x,v)$ and $t_{fin}(t,x,v)$ (see Definition A.6) such that
\[
u(t,x,v) = u_0\big( x_{fin} - (t - t_{fin})\, v_{fin},\ v_{fin} \big).
\]
This part of the article provides a thorough study of the characteristics of our system. However, it is independent of the rest of the work (apart from building solid grounds for trajectories) and is therefore left to Appendix A.
2.3. Maxwellian lower bound for cutoff collision kernels. The final theorem we prove in the case of cutoff collision kernels is the immediate appearance of a uniform Maxwellian lower bound. We use, in that case, Grad's splitting for the bilinear operator $Q$, so that the Boltzmann operator reads
\[
Q(g,h) = \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} \Phi\big(|v - v_*|\big)\, b(\cos\theta)\, \big[ h' g'_* - h g_* \big]\, dv_*\, d\sigma = Q^+(g,h) - Q^-(g,h),
\]
where we used the following definitions:
\[
Q^+(g,h) = \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} \Phi\big(|v - v_*|\big)\, b(\cos\theta)\, h' g'_*\, dv_*\, d\sigma, \qquad Q^-(g,h) = n_b\, \big(\Phi * g(v)\big)\, h = L[g](v)\, h, \tag{2.8}
\]
where
\[
n_b = \int_{\mathbb{S}^{d-1}} b(\cos\theta)\, d\sigma = \big| \mathbb{S}^{d-2} \big| \int_0^\pi b(\cos\theta)\, \sin^{d-2}\theta\, d\theta. \tag{2.9}
\]
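The second equality in (2.9) is simply the spherical change of variables on $\mathbb{S}^{d-1}$ with polar axis $(v - v_*)/|v - v_*|$: writing $\sigma$ through its polar angle $\theta$ (so that $\cos\theta = \langle (v - v_*)/|v - v_*|, \sigma \rangle$) and its remaining angular variables $\omega$ on $\mathbb{S}^{d-2}$ gives $d\sigma = \sin^{d-2}\theta\, d\theta\, d\omega$, whence
\[
\int_{\mathbb{S}^{d-1}} b(\cos\theta)\, d\sigma = \big| \mathbb{S}^{d-2} \big| \int_0^\pi b(\cos\theta)\, \sin^{d-2}\theta\, d\theta,
\]
a quantity finite under the cutoff assumption $\nu < 0$ by (1.6).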
In Appendix A we prove that we are able to construct the characteristics $(X_t(x,v), V_t(x,v))$, for all $(t,x,v)$ in $\mathbb{R}^+ \times \Omega \times \mathbb{R}^d$, of the transport equation (Proposition A.8). Thanks to this proposition we can define a mild solution of the Boltzmann equation in the cutoff case. This weaker form of solutions is actually the key point of our result and also gives a more general statement.

Definition 2.2. Let $f_0$ be a measurable function, non-negative almost everywhere on $\Omega \times \mathbb{R}^d$. A measurable function $f = f(t,x,v)$ on $[0,T) \times \Omega \times \mathbb{R}^d$ is a mild solution of the Boltzmann equation associated to the initial datum $f_0(x,v)$ if
(1) $f$ is non-negative on $\Omega \times \mathbb{R}^d$,
(2) for every $(x,v)$ in $\Omega \times \mathbb{R}^d$, the functions
\[
t \mapsto L[f(t, X_t(x,v), \cdot)](V_t(x,v)), \qquad t \mapsto Q^+[f(t, X_t(x,v), \cdot), f(t, X_t(x,v), \cdot)](V_t(x,v))
\]
are in $L^1_{loc}([0,T))$,
(3) and for each $t \in [0,T)$, for all $x \in \Omega$ and $v \in \mathbb{R}^d$,
\[
f(t, X_t(x,v), V_t(x,v)) = f_0(x,v)\, \exp\Big[ - \int_0^t L[f(s, X_s(x,v), \cdot)](V_s(x,v))\, ds \Big] + \int_0^t \exp\Big( - \int_s^t L[f(s', X_{s'}(x,v), \cdot)](V_{s'}(x,v))\, ds' \Big)\, Q^+[f(s, X_s(x,v), \cdot), f(s, X_s(x,v), \cdot)](V_s(x,v))\, ds. \tag{2.10}
\]
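Formula (2.10) is nothing but the integrating-factor (Duhamel) form of the equation along characteristics: setting $g(t) = f(t, X_t(x,v), V_t(x,v))$, Grad's splitting (2.8) turns the Boltzmann equation, at least formally, into the linear ODE
\[
\frac{d}{dt}\, g(t) = Q^+[f,f]\big(V_t(x,v)\big) - L[f(t, X_t(x,v), \cdot)]\big(V_t(x,v)\big)\, g(t),
\]
and multiplying by $\exp\big( \int_0^t L\, ds \big)$ before integrating from $0$ to $t$ yields exactly (2.10); the definition above merely asks that this integrated identity holds, without requiring $f$ to be differentiable.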
Now we state our result.
Theorem 2.3. Let $\Omega$ be $\mathbb{T}^d$ or a $C^2$ open convex bounded domain in $\mathbb{R}^d$ with nowhere null normal vector, and let $f_0$ be a non-negative continuous function on $\Omega \times \mathbb{R}^d$. Let $B = \Phi b$ be a collision kernel satisfying (1.3), with $\Phi$ satisfying (1.4) or (1.5) and $b$ satisfying (1.6) with $\nu < 0$. Let $f(t,x,v)$ be a mild solution of the Boltzmann equation in $\Omega \times \mathbb{R}^d$ on some time interval $[0,T)$, $T \in (0,+\infty]$, which satisfies:
• $f$ is continuous on $[0,T) \times \big(\Omega \times \mathbb{R}^d - \Lambda_0\big)$ ($\Lambda_0$ the grazing set defined by (1.7)), $f(0,x,v) = f_0(x,v)$, $M > 0$ and $E < \infty$ in (2.4);
• if $\Phi$ satisfies (1.4) with $\gamma \geq 0$ or if $\Phi$ satisfies (1.5), then $f$ satisfies (2.1);
• if $\Phi$ satisfies (1.4) with $\gamma < 0$, then $f$ satisfies (2.1) and (2.2).
Then for all $\tau \in (0,T)$ there exist $\rho > 0$ and $\theta > 0$, depending on $\tau$, $E_f$ (and $L^{p_\gamma}_f$ if $\Phi$ satisfies (1.4) with $\gamma < 0$), such that for all $t \in [\tau, T)$ the solution $f$ is bounded from below, almost everywhere, by a global Maxwellian distribution with density $\rho$ and temperature $\theta$, i.e.
\[
\forall t \in [\tau, T),\ \forall (x,v) \in \Omega \times \mathbb{R}^d, \quad f(t,x,v) \geq \frac{\rho}{(2\pi\theta)^{d/2}}\, e^{-\frac{|v|^2}{2\theta}}.
\]
If we add the assumptions of uniform boundedness of $f_0$ and of the mass and entropy of the solution $f$, we can use the arguments originating in [14] to construct explicitly the initial "upheaval point", without any compactness argument (see Section 3.2). Moreover, if we further suppose that $\Omega$ is $C^3$ and strictly convex, the use of tools developed in [Guo, Decay and continuity of the Boltzmann equation in bounded domains] yields a constructive method to control grazing collisions (see Remark 5.3). We thus have the following corollary.
Corollary 2.4. Suppose that the conditions of Theorem 2.3 are satisfied (the continuity assumption on $f_0$ can be dropped) and further assume that $\Omega$ is $C^3$ and strictly convex, i.e. there exists $\xi : \mathbb{R}^d \to \mathbb{R}$ of class $C^3$ such that
\[
\Omega = \{ x \in \mathbb{R}^d,\ \xi(x) < 0 \},
\]
such that $\nabla\xi \neq 0$ on $\partial\Omega$, and such that there exists $C_\xi > 0$ with
\[
\partial_{ij}\xi(x)\, v_i v_j \geq C_\xi\, |v|^2
\]
for all $x$ in $\Omega$ and all $v$ in $\mathbb{R}^d$. Further assume that $f_0$ is uniformly bounded from below,
\[
\forall (x,v) \in \Omega \times \mathbb{R}^d, \quad f_0(x,v) \geq \varphi(v) > 0,
\]
and that $f$ has bounded local mass and entropy:
\[
R_f = \inf_{(t,x) \in [0,T) \times \Omega} \int_{\mathbb{R}^d} f(t,x,v)\, dv > 0, \qquad H_f = \sup_{(t,x) \in [0,T) \times \Omega} \int_{\mathbb{R}^d} f(t,x,v)\, \log f(t,x,v)\, dv < +\infty.
\]
Then the conclusion of Theorem 2.3 holds true, with the constants $\rho$ and $\theta$ being explicitly constructed in terms of $\tau$, $E_f$, $H_f$, $L^{p_\gamma}_f$ and upper and lower bounds on $|\nabla\xi|$ and $|\nabla^2\xi|$ on $\partial\Omega$.
As stated in Subsection 1.2, the main result needed to reach Theorem 2.3 is the construction of an immediate lower bound depending only on the norm of the velocity:

Proposition 2.5. Let $f$ be the mild solution of the Boltzmann equation described in Theorem 2.3. For all $0 < \tau < T$ there exist $r_V, a_0(\tau) > 0$ such that
\[
\forall t \in [\tau/2, \tau],\ \forall (x,v) \in \Omega \times \mathbb{R}^d, \quad f(t,x,v) \geq a_0(\tau)\, \mathbf{1}_{B(0, r_V)}(v),
\]
$r_V$ and $a_0(\tau)$ only depending on $\tau$, $E_f$ (and $L^{p_\gamma}_f$ if $\Phi$ satisfies (1.4) with $\gamma < 0$).

2.4. Exponential lower bound for non-cutoff collision kernels. In the case of non-cutoff collision kernels ($0 \leq \nu < 2$ in (1.6)), Grad's splitting does not make sense anymore and so we have to find a new way to define mild solutions to the Boltzmann equation (1.1). The splitting we are going to use is a standard one and it reads
\[
Q(g,h) = \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} \Phi\big(|v - v_*|\big)\, b(\cos\theta)\, \big[ h' g'_* - h g_* \big]\, dv_*\, d\sigma = Q^1_b(g,h) - Q^2_b(g,h),
\]
where we used the following definitions:
\[
Q^1_b(g,h) = \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} \Phi\big(|v - v_*|\big)\, b(\cos\theta)\, g'_*\, (h' - h)\, dv_*\, d\sigma, \qquad Q^2_b(g,h) = - \Big( \int_{\mathbb{R}^d \times \mathbb{S}^{d-1}} \Phi\big(|v - v_*|\big)\, b(\cos\theta)\, \big[ g'_* - g_* \big]\, dv_*\, d\sigma \Big)\, h = S[g](v)\, h. \tag{2.11}
\]
We would like to use the properties we derived in the study of collision kernels with cutoff. Therefore we will consider an additional splitting of $Q$. For $\varepsilon$ in $(0, \pi/4)$ we define a cutoff angular collision kernel
\[
b^{CO}_\varepsilon(\cos\theta) = b(\cos\theta)\, \mathbf{1}_{|\theta| \geq \varepsilon}
\]
and a non-cutoff one
\[
b^{NCO}_\varepsilon(\cos\theta) = b(\cos\theta)\, \mathbf{1}_{|\theta| \leq \varepsilon}.
\]
Considering the two collision kernels $B^{CO}_\varepsilon = \Phi\, b^{CO}_\varepsilon$ and $B^{NCO}_\varepsilon = \Phi\, b^{NCO}_\varepsilon$, we can combine Grad's splitting (2.8) applied to $B^{CO}_\varepsilon$ with the non-cutoff splitting (2.11) applied to $B^{NCO}_\varepsilon$. This yields the splitting we shall use to deal with non-cutoff collision kernels,
\[
Q = Q^+_\varepsilon - Q^-_\varepsilon + Q^1_\varepsilon - Q^2_\varepsilon, \tag{2.12}
\]
where we use the shortened notations $Q^\pm_\varepsilon = Q^\pm_{b^{CO}_\varepsilon}$ and $Q^i_\varepsilon = Q^i_{b^{NCO}_\varepsilon}$, for $i = 1, 2$.
Thanks to the splitting (2.12) and the study of characteristics mentioned in Section 2.2, we are able to define mild solutions to the Boltzmann equation with non-cutoff collision kernels. This is obtained by considering the Duhamel formula associated to the splitting (2.12) along the characteristics (as in the cutoff case).

Definition 2.6. Let $f_0$ be a measurable function, non-negative almost everywhere on $\Omega \times \mathbb{R}^d$. A measurable function $f = f(t,x,v)$ on $[0,T) \times \Omega \times \mathbb{R}^d$ is a mild solution of the Boltzmann equation with non-cutoff angular collision kernel associated to the initial datum $f_0(x,v)$ if there exists $0 < \varepsilon_0 < \pi/4$ such that for all $0 < \varepsilon < \varepsilon_0$:
(1) $f$ is non-negative on $\Omega \times \mathbb{R}^d$,
(2) for every $(x,v)$ in $\Omega \times \mathbb{R}^d$, the functions
\[
t \mapsto L_\varepsilon[f(t, X_t(x,v), \cdot)](V_t(x,v)), \qquad t \mapsto Q^+_\varepsilon[f(t, X_t(x,v), \cdot), f(t, X_t(x,v), \cdot)](V_t(x,v)),
\]
\[
t \mapsto S_\varepsilon[f(t, X_t(x,v), \cdot)](V_t(x,v)), \qquad t \mapsto Q^1_\varepsilon[f(t, X_t(x,v), \cdot), f(t, X_t(x,v), \cdot)](V_t(x,v))
\]
are in $L^1_{loc}([0,T))$,
(3) and for each $t \in [0,T)$, for all $x \in \Omega$ and $v \in \mathbb{R}^d$,
\[
f(t, X_t(x,v), V_t(x,v)) = f_0(x,v)\, \exp\Big[ - \int_0^t (L_\varepsilon + S_\varepsilon)[f(s, X_s(x,v), \cdot)](V_s(x,v))\, ds \Big] + \int_0^t \exp\Big( - \int_s^t (L_\varepsilon + S_\varepsilon)[f(s', X_{s'}(x,v), \cdot)](V_{s'}(x,v))\, ds' \Big)\, \big( Q^+_\varepsilon + Q^1_\varepsilon \big)[f(s, X_s(x,v), \cdot), f(s, X_s(x,v), \cdot)](V_s(x,v))\, ds. \tag{2.13}
\]
Now we state our result.
Theorem 2.7. Let $\Omega$ be $\mathbb{T}^d$ or a $C^2$ open convex bounded domain in $\mathbb{R}^d$ with nowhere null normal vector, and let $f_0$ be a non-negative continuous function on $\Omega \times \mathbb{R}^d$. Let $B = \Phi b$ be a collision kernel satisfying (1.3), with $\Phi$ satisfying (1.4) or (1.5) and $b$ satisfying (1.6) with $\nu$ in $[0,2)$. Let $f(t,x,v)$ be a mild solution of the Boltzmann equation in $\Omega \times \mathbb{R}^d$ on some time interval $[0,T)$, $T \in (0,+\infty]$, which satisfies:
• $f$ is continuous on $[0,T) \times \big(\Omega \times \mathbb{R}^d - \Lambda_0\big)$ ($\Lambda_0$ the grazing set defined by (1.7)), $f(0,x,v) = f_0(x,v)$, $M > 0$ and $E < \infty$ in (2.4);
• if $\Phi$ satisfies (1.4) with $\gamma \geq 0$ or if $\Phi$ satisfies (1.5), then $f$ satisfies (2.1) and (2.3);
• if $\Phi$ satisfies (1.4) with $\gamma < 0$, then $f$ satisfies (2.1), (2.2) and (2.3).
Then for all $\tau \in (0,T)$ and for any exponent $K$ such that
\[
K > 2\, \frac{\log\Big( 2 + \frac{2\nu}{2-\nu} \Big)}{\log 2},
\]
there exist $C_1, C_2 > 0$, depending on $\tau$, $K$, $E_f$, $\tilde E_f$, $W_f$ (and $L^{p_\gamma}_f$ if $\Phi$ satisfies (1.4) with $\gamma < 0$), such that
\[
\forall t \in [\tau, T),\ \forall (x,v) \in \Omega \times \mathbb{R}^d, \quad f(t,x,v) \geq C_1\, e^{-C_2 |v|^K}.
\]
Moreover, in the case $\nu = 0$, one can take $K = 2$ (Maxwellian lower bound).
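For orientation, note that the threshold above equals $2\, \log 2 / \log 2 = 2$ at $\nu = 0$, consistently with the final statement of the theorem, and deteriorates as the angular singularity strengthens: for instance, at $\nu = 1$ one gets $2\, \log 4 / \log 2 = 4$, and the set of admissible exponents escapes to infinity as $\nu \to 2^-$.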
We emphasize here that, in the same spirit as in the cutoff case, the main part of the proof will rely on the establishment of an equivalent to Proposition 2.5 for non-cutoff collision kernels.
Corollary 2.8. As for Corollary 2.4, if $f_0$ is bounded uniformly from below, as well as the local mass of $f$, if the local entropy of $f$ is uniformly bounded from above, and if $\Omega$ is $C^3$ and strictly convex, then the conclusion of Theorem 2.7 holds true with constants explicitly constructed in terms of $\tau$, $K$, $E_f$, $\tilde E_f$, $W_f$, $H_f$ and $L^{p_\gamma}_f$.

Remark 2.9. Throughout the paper, we are going to deal with the case where $\Omega$ is a $C^2$ convex bounded domain, since it is the case where the most important difficulties arise. However, if $\Omega = \mathbb{T}^d$, we can follow the same proofs by letting the first time of collision with the boundary be $+\infty$ (see Appendix A) and by defining the distance to the (non-existent) boundary to be $+\infty$ (which rules out the case of grazing trajectories).
3. The cutoff case: localised "upheaval points"
In this section and the next three we are going to prove a Maxwellian lower bound for a solution to the Boltzmann equation (1.1) in the case where the collision kernel satisfies a cutoff property.
The strategy to tackle this result follows the main idea used in [13] and [14], which relies on finding an "upheaval point" (a first minoration uniform in time and space but localised in velocity) and spreading this bound, thanks to the spreading property of the $Q^+$ operator, in order to include larger and larger velocities.
We gather here two lemmas, proven in [13], that we will frequently use in this section. We remind the reader that we are using Grad's splitting (2.8). Let us first give an $L^\infty$ bound on the loss term (Corollary 2.2 in [13]).

Lemma 3.1. Let $g$ be a measurable function on $\mathbb{R}^d$. Then
\[
\forall v \in \mathbb{R}^d, \quad |L[g](v)| \leq C^L_g\, \langle v \rangle^{\gamma^+},
\]
where $C^L_g$ is defined by:
(1) If $\Phi$ satisfies (1.4) with $\gamma \geq 0$ or if $\Phi$ satisfies (1.5), then $C^L_g = \mathrm{cst}\; n_b\, C_\Phi\, e_g$.
(2) If $\Phi$ satisfies (1.4) with $\gamma \in (-d, 0)$, then $C^L_g = \mathrm{cst}\; n_b\, C_\Phi\, \big( e_g + l^p_g \big)$, $p > d/(d+\gamma)$.
The spreading property of $Q^+$ is given by the following lemma (Lemma 2.4 in [13]), where we define
\[
l_b = \inf_{\pi/4 \leq \theta \leq 3\pi/4} b(\cos\theta). \tag{3.1}
\]

Lemma 3.2. For any $\bar v \in \mathbb{R}^d$, $0 < r \leq R$, $\xi \in (0,1)$, we have
\[
Q^+\big( \mathbf{1}_{B(\bar v, R)}, \mathbf{1}_{B(\bar v, r)} \big) \geq \mathrm{cst}\; l_b\, c_\Phi\, r^{d-3} R^{3+\gamma}\, \xi^{\frac d2 - 1}\, \mathbf{1}_{B(\bar v, \sqrt{r^2 + R^2}\,(1 - \xi))}.
\]
As a consequence, in the particular quadratic case $\delta = r = R$, we obtain
\[
Q^+\big( \mathbf{1}_{B(\bar v, \delta)}, \mathbf{1}_{B(\bar v, \delta)} \big) \geq \mathrm{cst}\; l_b\, c_\Phi\, \delta^{d+\gamma}\, \xi^{\frac d2 - 1}\, \mathbf{1}_{B(\bar v, \delta \sqrt{2}\,(1 - \xi))},
\]
for any $\bar v \in \mathbb{R}^d$ and $\xi \in (0,1)$.
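To see why Lemma 3.2 lets one reach arbitrarily large velocities, note that in the quadratic case each application multiplies the radius by the fixed factor $\sqrt 2\, (1 - \xi)$, which is strictly larger than $1$ whenever $\xi < 1 - 1/\sqrt 2$. With the choice $\xi = 1/4$ used repeatedly below, a ball of radius $\delta$ becomes after $n$ iterations a ball of radius
\[
\delta\, \Big( \frac{3\sqrt 2}{4} \Big)^n \xrightarrow[n \to +\infty]{} +\infty, \qquad \frac{3\sqrt 2}{4} \simeq 1.06,
\]
so any prescribed radius is exceeded after finitely many iterations, at the price of a multiplicative constant in front of the indicator function that deteriorates at each step.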
The case of the torus, studied in [13], indicates that without rebounds the expected lower bound is created arbitrarily quickly after time $t = 0$. Therefore we expect the same kind of bound to arise on each characteristic trajectory before its first rebound. However, in the case of a bounded domain, rebounds against the boundary can occur very close to the time $t = 0$, and a rebound preserves only the norm of the velocity. Therefore, we cannot find a time, uniform in space, at which a uniform bound arises. Nevertheless, the convexity and the smoothness of the domain imply that grazing collisions against the boundary do not change the velocity very much.

Our study will thus be split into three parts, which are the next three sections. The first step will be to partition the position and velocity spaces so that we have an immediate appearance of an "upheaval point" in each of those partitions. The second one is to obtain a uniform lower bound which will depend only on the norm of the velocity. The final part will use the standard spreading method of [13] and [14], which will allow us to deal with large velocities and derive the exponential lower bound uniformly.
3.1. Partition of the phase space and first localised lower bounds. In this section we use the continuity of f together with the conservation laws (2.4) to obtain a point in the phase space where f is strictly positive. Then, thanks to the continuity of f , its Duhamel representation (2.10) and the spreading property of the Q + operator (Lemma 3.2) we extend this positivity to high velocities at that particular point (Lemma 3.3). Finally, the free transport part of the solution f will imply the immediate appearance of the localised lower bounds (Proposition 3.4).
Moreover we define constants that we will use in the next two subsections in order to have a uniform lower bound.
We define some shorthand notations. For $x$ in $\Omega$, $v$ in $\mathbb{R}^d$ and $s, t \geq 0$, we denote the point at time $s$ of the forward characteristic passing through $(x,v)$ at time $t$ by
\[
X_{s,t}(x,v) = X_s\big( X_t(x,-v), -V_t(x,-v) \big), \qquad V_{s,t}(x,v) = V_s\big( X_t(x,-v), -V_t(x,-v) \big),
\]
which has been derived from (A.1).
We start with the strict positivity of our function at one point, for all velocities:

Lemma 3.3. Let $f$ be the mild solution of the Boltzmann equation described in Theorem 2.3. Then there exist $(x_1, v_1)$ in $\Omega \times \mathbb{R}^d$ and $\Delta > 0$ such that for all $n \in \mathbb{N}$ and all $t$ in $[0, \Delta]$, there exist $r_n > 0$, depending only on $n$, and $\alpha_n(t) > 0$ such that
\[
\forall x \in B\Big( x_1, \frac{\Delta}{2^n} \Big),\ \forall v \in \mathbb{R}^d, \quad f(t,x,v) \geq \alpha_n(t)\, \mathbf{1}_{B(v_1, r_n)}(v),
\]
with $\alpha_0 > 0$ independent of $t$, and the induction formulae
\[
\alpha_{n+1}(t) = \frac{C_Q\, r_n^{d+\gamma}}{4^{d/2-1}} \int_0^{\min\left( t,\, \Delta/(2^{n+1}(2 r_n + |v_1|)) \right)} e^{-s\, C^L \langle |v_1| + 2 r_n \rangle^{\gamma^+}}\, \alpha_n^2(s)\, ds,
\]
where $C_Q = \mathrm{cst}\; l_b\, c_\Phi$ is defined in Lemma 3.2 and $C^L = \mathrm{cst}\; n_b\, C_\Phi\, E_f$ (or $C^L = \mathrm{cst}\; n_b\, C_\Phi\, (E_f + L^p_f)$) is defined in Lemma 3.1, and
\[
r_0 = \Delta, \qquad r_{n+1} = \frac{3\sqrt 2}{4}\, r_n.
\]
Proof of Lemma 3.3. The proof is an induction on n.
Step 1: Initialization. We recall the conservation laws satisfied by a solution to the Boltzmann equation, (2.4):
\[
\forall t \in \mathbb{R}^+, \quad \int_\Omega \int_{\mathbb{R}^d} f(t,x,v)\, dxdv = M, \qquad \int_\Omega \int_{\mathbb{R}^d} |v|^2 f(t,x,v)\, dxdv = E,
\]
with $M > 0$ and $E < \infty$.
Since $\Omega$ is bounded, and so is included in, say, $B(0, R_X)$, we also have that
\[
\forall t \in \mathbb{R}^+, \quad \int_\Omega \int_{\mathbb{R}^d} \big( |x|^2 + |v|^2 \big)\, f(t,x,v)\, dxdv \;\leq\; \alpha = M R_X^2 + E < +\infty.
\]
Therefore, if we take $t = 0$ and $R_{min} = \sqrt{2\alpha/M}$, we have the following:
\[
\int_{B(0, R_{min})} \int_{B(0, R_{min})} f_0(x,v)\, dxdv \;\geq\; \frac M2 > 0.
\]
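The choice of $R_{min}$ is a plain Chebyshev argument: on the set where $|x| \geq R_{min}$ or $|v| \geq R_{min}$ we have $|x|^2 + |v|^2 \geq R_{min}^2$, hence
\[
\int\!\!\!\int_{\{|x| \geq R_{min}\} \cup \{|v| \geq R_{min}\}} f_0\, dxdv \;\leq\; \frac{1}{R_{min}^2} \int_\Omega \int_{\mathbb{R}^d} \big( |x|^2 + |v|^2 \big)\, f_0\, dxdv \;\leq\; \frac{\alpha}{R_{min}^2} = \frac M2,
\]
which leaves at least half of the total mass inside $B(0, R_{min}) \times B(0, R_{min})$.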
Therefore there exist $x_1$ in $\Omega$ and $v_1$ in $B(0, R_{min})$ such that
\[
f_0(x_1, v_1) \;\geq\; \frac{M}{4\, \mathrm{Vol}\big( B(0, R_{min}) \big)^2} > 0.
\]
The first step of the induction then follows from the continuity of $f$ at $(0, x_1, v_1)$. Indeed, there exist $\delta_T, \delta_X, \delta_V > 0$ such that
\[
\forall t \in [0, \delta_T],\ \forall x \in B(x_1, \delta_X),\ \forall v \in B(v_1, \delta_V), \quad f(t,x,v) \;\geq\; \frac{M}{8\, \mathrm{Vol}\big( B(0, R_{min}) \big)^2},
\]
and we define $\Delta = \min(\delta_T, \delta_X, \delta_V)$.
Step 2: Proof of the induction. We assume the induction hypothesis holds at rank $n$. Let $x$ be in $B(x_1, \Delta/2^{n+1})$, $v$ in $B(0, |v_1| + 2 r_n)$ and $t$ in $[0, \Delta]$.
We use the fact that $f$ is a mild solution to write $f(t,x,v)$ in its Duhamel form (2.10). The control we have on the $L$ operator, Lemma 3.1, allows us to bound the second integral term from above (the first term is positive). Moreover, this bound on $L$ is independent of $t$, $x$ and $v$, since it only depends on an upper bound on the energy $e_{f(t,x,\cdot)}$ (and its local $L^p$ norm $l^p_{f(t,x,\cdot)}$), which is uniformly bounded by $E_f$ (and by $L^p_f$). This yields, for $\tau_n(t) = \min\big( t,\ \Delta/(2^{n+1}(2 r_n + |v_1|)) \big)$,
\[
f(t,x,v) \geq \int_0^{\tau_n(t)} e^{-s\, C^L \langle |v_1| + 2 r_n \rangle^{\gamma^+}}\; Q^+\big[ f(s, X_{s,t}(x,v), \cdot),\, f(s, X_{s,t}(x,v), \cdot) \big]\big( V_{s,t}(x,v) \big)\, ds, \tag{3.2}
\]
where $C^L = \mathrm{cst}\; n_b C_\Phi E_f$ (or $C^L = \mathrm{cst}\; n_b C_\Phi (E_f + L^p_f)$), see Lemma 3.1, and we used $|V_{s,t}(x,v)| = |v| \leq 2 r_n + |v_1|$.
Besides, we have that $B(x_1, \Delta) \subset \Omega$ and also
\[
\forall s \in \Big[ 0,\ \frac{\Delta}{2^{n+1}(2 r_n + |v_1|)} \Big],\ \forall v_* \in B(0, |v_1| + 2 r_n), \quad \big\| x_1 - (x + s v_*) \big\| \leq \frac{\Delta}{2^n},
\]
which, by definition of the characteristics (see Appendix A.2), yields
\[
\forall s \in [0, \tau_n(t)],\ \forall v_* \in B(0, |v_1| + 2 r_n), \quad X_{s,t}(x, v_*) = x + s v_* \in B\Big( x_1, \frac{\Delta}{2^n} \Big), \qquad V_{s,t}(x, v_*) = v_*.
\]
Therefore, calling $v_*$ the integration variable in the operator $Q^+$, we can apply the induction property to $f(s, X_{s,t}(x,v), v_*)$, which implies, in (3.2),
\[
f(t,x,v) \geq \int_0^{\tau_n(t)} e^{-s\, C^L \langle |v_1| + 2 r_n \rangle^{\gamma^+}}\, \alpha_n^2(s)\; Q^+\big[ \mathbf{1}_{B(v_1, r_n)}, \mathbf{1}_{B(v_1, r_n)} \big](v)\, ds.
\]
Applying the spreading property of $Q^+$, Lemma 3.2, with $\xi = 1/4$ gives us the expected result at rank $n+1$, since $B(v_1, r_{n+1}) \subset B(0, |v_1| + 2 r_n)$.
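Indeed, with $r = R = r_n$ and $\xi = 1/4$, Lemma 3.2 produces the indicator of the ball of radius
\[
\sqrt{r_n^2 + r_n^2}\ \Big( 1 - \frac 14 \Big) = \frac{3\sqrt 2}{4}\, r_n = r_{n+1},
\]
with the multiplicative constant $\mathrm{cst}\; l_b\, c_\Phi\, r_n^{d+\gamma}\, (1/4)^{d/2 - 1} = C_Q\, r_n^{d+\gamma} / 4^{d/2-1}$, which is exactly the constant appearing in the induction formula for $\alpha_{n+1}$.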
We now have all the tools to prove the next proposition, which is the immediate appearance of localised "upheaval points".

Proposition 3.4. Let $f$ be the mild solution of the Boltzmann equation described in Theorem 2.3. Then there exists $\Delta > 0$ such that for all $0 < \tau_0 \leq \Delta$, there exist $\delta_T(\tau_0), \delta_X(\tau_0), \delta_V(\tau_0), R_{min}(\tau_0), a_0(\tau_0) > 0$ such that for all $N$ in $\mathbb{N}$ there exist $N_X$ in $\mathbb{N}^*$, $x_1, \dots, x_{N_X}$ in $\Omega$ and $v_1, \dots, v_{N_X}$ in $B(0, R_{min}(\tau_0))$ with:
• $\Omega \subset \bigcup_{1 \leq i \leq N_X} B\big( x_i, \delta_X(\tau_0)/2^N \big)$;
• $\forall t \in [\tau_0, \delta_T(\tau_0)],\ \forall x \in B(x_i, \delta_X(\tau_0)),\ \forall v \in \mathbb{R}^d$,
\[
f(t,x,v) \geq a_0(\tau_0)\, \mathbf{1}_{B(v_i, \delta_V(\tau_0))}(v),
\]
with $B(v_i, \delta_V(\tau_0)) \subset B(0, R_{min}(\tau_0))$.
Proof of Proposition 3.4. We are going to use the free transport part of the Duhamel form (2.10) of $f$ to create localised lower bounds out of Lemma 3.3.
We take $0 < \tau_0 \leq \Delta$, where $\Delta$ is defined in Lemma 3.3. $\Omega$ is bounded, so let us denote its diameter by $d_\Omega$. Let $n$ be big enough that $r_n \geq 2 d_\Omega / \tau_0 + |v_1|$ and define $R_{min}(\tau_0) = 2 d_\Omega / \tau_0$.
Thanks to Lemma 3.3 applied to this particular $n$, we have that
\[
\forall t \in \Big[ \frac{\tau_0}{2}, \Delta \Big],\ \forall x \in B(x_1, \Delta/2^n), \quad f(t,x,v) \geq \alpha_n\Big( \frac{\tau_0}{2} \Big)\, \mathbf{1}_{B(v_1, r_n)}(v), \tag{3.3}
\]
where we used the fact that $\alpha_n(t)$ is an increasing function. Define
\[
a_0(\tau_0) = \frac 12\, \alpha_n\Big( \frac{\tau_0}{2} \Big)\, e^{-\frac{\tau_0}{2}\, C^L \left\langle \frac{2 d_\Omega}{\tau_0} \right\rangle^{\gamma^+}}.
\]
Definition of the constants. We notice that for all $x$ in $\partial\Omega$ we have $n(x) \cdot (x - x_1) > 0$, because $\Omega$ has nowhere null normal vector by hypothesis. But the function
\[
x \longmapsto n(x) \cdot \frac{x - x_1}{\|x - x_1\|}
\]
is continuous (since $\Omega$ is $C^2$) on the compact $\partial\Omega$ and therefore has a minimum, attained at a certain $X(x_1)$ on $\partial\Omega$. Hence,
\[
\forall x \in \partial\Omega, \quad n(x) \cdot \frac{x - x_1}{\|x - x_1\|} \;\geq\; n(X(x_1)) \cdot \frac{X(x_1) - x_1}{\|X(x_1) - x_1\|} = 2\lambda(x_1) > 0. \tag{3.4}
\]
To shorten the following notations, we define on $\Omega \times \big( \mathbb{R}^d - \{0\} \big)$ the function
\[
\Phi(x,v) = n\Big( x + t\Big( x, \frac{v}{|v|} \Big)\, \frac{v}{|v|} \Big), \tag{3.5}
\]
where we defined $t(x,v) = \min\{ t \geq 0 :\ x + t v \in \partial\Omega \}$, the first time of contact with the boundary of the forward characteristic $(x + s v)_{s \geq 0}$, defined for $v \neq 0$ and continuous on $\Omega \times \big( \mathbb{R}^d - \{0\} \big)$ (see Lemma 5.2).
We denote by $d_1$ half of the distance from $x_1$ to $\partial\Omega$. We define two sets included in $[0,\Delta] \times \Omega \times \mathbb{R}^d$:
\[
\Lambda^{(1)} = [0,\Delta] \times B(x_1, d_1) \times \mathbb{R}^d
\]
and
\[
\Lambda^{(2)} = \Big\{ (t,x,v) \notin \Lambda^{(1)} :\ |v| \geq \frac{d_1}{\tau_0}\ \text{ and }\ \Phi(x,v) \cdot \frac{v}{|v|} \geq \lambda(x_1) \Big\}.
\]
By continuity of $t(x,v)$ and of $n$ (on $\partial\Omega$), we have that $\Lambda = \Lambda^{(1)} \cup \Lambda^{(2)}$ is compact and does not intersect the grazing set $[0,\Delta] \times \Lambda_0$ defined by (1.7). Therefore $f$ is continuous on $\Lambda$ and thus uniformly continuous on $\Lambda$. Hence, there exist $\delta'_T(\tau_0), \delta'_X(\tau_0), \delta'_V(\tau_0) > 0$ such that
\[
\forall (t,x,v), (t',x',v') \in \Lambda \ \text{with}\ |t - t'| \leq \delta'_T(\tau_0),\ \|x - x'\| \leq \delta'_X(\tau_0),\ \|v - v'\| \leq \delta'_V(\tau_0):
\quad |f(t,x,v) - f(t',x',v')| \leq a_0(\tau_0). \tag{3.6}
\]
The map $\Phi$ (defined by (3.5)) is uniformly continuous on the compact $[0,\Delta] \times \Omega \times \mathbb{S}^{d-1}$ and therefore there exist $\delta''_T(\tau_0), \delta''_X(\tau_0), \delta''_V(\tau_0) > 0$ such that
\[
\forall (t,x,v), (t',x',v') \in \Lambda^{(2)} \ \text{with}\ |t - t'| \leq \delta''_T(\tau_0),\ \|x - x'\| \leq \delta''_X(\tau_0),\ \|v - v'\| \leq \delta''_V(\tau_0):
\quad \|\Phi(x,v) - \Phi(x',v')\| \leq \frac{\lambda(x_1)}{2}. \tag{3.7}
\]
We conclude our definitions by taking
\[
\delta_T(\tau_0) = \min\big( \Delta,\ \tau_0 + \delta'_T(\tau_0),\ \tau_0 + \delta''_T(\tau_0) \big), \qquad
\delta_X(\tau_0) = \min\Big( \frac{\Delta}{2^n},\ \delta'_X(\tau_0),\ \delta''_X(\tau_0),\ \frac{d_1}{2} \Big),
\]
\[
\delta_V(\tau_0) = \min\Big( r_n,\ \delta'_V(\tau_0),\ \frac{d_1}{2\tau_0}\, \delta''_V(\tau_0),\ \frac{\lambda(x_1)}{2} \Big).
\]
Proof of the lower bounds. We take $N \in \mathbb{N}$ and notice that $\Omega$ is compact; therefore there exist $x_1, \dots, x_{N_X}$ in $\Omega$ such that $\Omega \subset \bigcup_{1 \leq i \leq N_X} B\big( x_i, \delta_X(\tau_0)/2^N \big)$. Moreover, we construct them such that $x_1$ is the one defined in Lemma 3.3, and we then take $v_1$ to be the one defined in Lemma 3.3. We define
\[
\forall i \in \{2, \dots, N_X\}, \quad v_i = \frac{2}{\tau_0}\, (x_i - x_1).
\]
Because $\Omega$ is convex we have that
\[
X_{\tau_0/2, \tau_0}(x_i, v_i) = x_1, \qquad V_{\tau_0/2, \tau_0}(x_i, v_i) = v_i.
\]
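The definition of $v_i$ is tailored so that the backward free-flight of duration $\tau_0/2$ issued from $(x_i, v_i)$ lands exactly on $x_1$ with an admissible speed:
\[
x_i - \frac{\tau_0}{2}\, v_i = x_i - (x_i - x_1) = x_1, \qquad |v_i| = \frac{2}{\tau_0}\, \|x_i - x_1\| \leq \frac{2 d_\Omega}{\tau_0} = R_{min}(\tau_0),
\]
and $r_n \geq R_{min}(\tau_0) + |v_1|$ by our choice of $n$, so that $v_i \in B(v_1, r_n)$.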
Using the fact that $f$ is a mild solution of the Boltzmann equation, we write it in its Duhamel form (2.10) and drop the last term, which is positive. As in the proof of Lemma 3.3, we can control the $L$ operator appearing in the first term on the right-hand side of (2.10) (corresponding to the free transport). Thus, using the Duhamel form (2.10) between $\tau_0$ and $\tau_0/2$, this yields
\[
f(\tau_0, x_i, v_i) \geq f\Big( \frac{\tau_0}{2}, x_1, v_i \Big)\, e^{-\frac{\tau_0}{2}\, C^L \left\langle \frac{2}{\tau_0}(x_i - x_1) \right\rangle^{\gamma^+}} \geq \alpha_n\Big( \frac{\tau_0}{2} \Big)\, e^{-\frac{\tau_0}{2}\, C^L \left\langle \frac{2 d_\Omega}{\tau_0} \right\rangle^{\gamma^+}}\, \mathbf{1}_{B(v_1, r_n)}(v_i) \geq 2 a_0(\tau_0)\, \mathbf{1}_{B(v_1, r_n)}(v_i),
\]
where we used (3.3) for the second inequality. We see here that $v_i$ belongs to $B(0, R_{min}(\tau_0))$ and that $B(0, R_{min}(\tau_0)) \subset B(v_1, r_n)$, and therefore
\[
f(\tau_0, x_i, v_i) \geq 2 a_0(\tau_0). \tag{3.8}
\]
We first notice that $(\tau_0, x_i, v_i)$ belongs to $\Lambda$, since either $x_i$ belongs to $B(x_1, d_1)$ or $\|x_1 - x_i\| \geq d_1$; in the latter case, by definition of $v_i$ and $\lambda(x_1)$ (see (3.4)),
\[
n\Big( x_i + t\Big( x_i, \frac{v_i}{|v_i|} \Big)\, \frac{v_i}{|v_i|} \Big) \cdot \frac{v_i}{|v_i|} \geq 2\lambda(x_1)
\]
and
\[
|v_i| = \frac{2}{\tau_0}\, \|x_i - x_1\| \geq \frac{2}{\tau_0}\, d_1.
\]
If x i belongs to B(x 1 , d 1 /2) then since δ X (τ 0 ) d 1 /2, x -x 1 d 1 2 + x -x i d 1
and (t, x, v) thus belongs to Λ (1) ⊂ Λ.
In the other case where x 1x i d 1 /2 we first have that
v i = 2 τ 0 x i -x 1 d 1 τ 0 .
And also
v v - v i v i 2 v i v -v i = τ 0 x i -x 1 δ V (τ 0 ) 2τ 0 d 1 δ V (τ 0 ) δ V (τ 0 ).
The latter inequality combined with (3.7) and that |t -
τ 0 | δ T (τ 0 ) and x -x i δ X (τ 0 ) yields |Φ(x, v) -Φ(x i , v i )| λ(x 1 ) 2 , which in turn implies Φ(x, v) • v v Φ(x i , v i ) • v i v i + Φ(n, v) • (v -v i ) + (Φ(x, v) -Φ(x i , v i )) • v i 2λ(x 1 ) -v -v i -|Φ(x, v) -Φ(x i , v i )| λ(x 1 )
, so that (t, x, v) belongs to Λ (2) .
We can now conclude the proof. We proved that $(\tau_0, x_i, v_i)$ belongs to $\Lambda$ and that for all $t$ in $[\tau_0, \delta_T(\tau_0)]$, $x$ in $B(x_i, \delta_X(\tau_0))$ and $v$ in $B(v_i, \delta_V(\tau_0))$, $(t,x,v)$ belongs to $\Lambda$. By definition of the constants, $(t - \tau_0, x - x_i, v - v_i)$ satisfies the inequality of the uniform continuity of $f$ on $\Lambda$, (3.6). Combining this inequality with (3.8), the lower bound at $(\tau_0, x_i, v_i)$, we have that $f(t,x,v) \geq a_0(\tau_0)$.

Remark 3.5. This last proposition tells us that localised lower bounds appear immediately, that is to say after any time $\tau_0 > 0$. The exponential lower bound we expect will appear immediately after those initial localised lower bounds, i.e. for all $\tau_1 > \tau_0$. Therefore, to shorten notation and lighten our presentation, we are going to study the case of a solution to the Boltzmann equation which satisfies Proposition 3.4 at $\tau_0 = 0$. We will then immediately create the exponential lower bound after $0$ and apply this result to $F(t,x,v) = f(t + \tau_0, x, v)$.
3.2.
A constructive approach to the initial lower bound, Corollary 2.4. The initial lower bounds we just derived relies on compactness arguments and their construction is therefore not explicit. However, as mentioned in Section 2.3, a few more assumptions on f 0 and f suffice to obatin a completely constructive approach for the "upheaval point". This method is based on a property of the iterated Q + operator discovered by Pulvirenty and Wennberg [START_REF] Pulvirenti | A Maxwellian lower bound for solutions to the Boltzmann equation[END_REF] and reformulated by Mouhot ([13] Lemma 2.3) as follows.
Lemma 3.6. Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν 0. Let g(v) be a nonnegative function on R d with bounded energy e g and entropy h g and a mass ρ g such that 0 < ρ g < +∞.
Then there exist R_0, δ_0, η_0 > 0 and v̄ ∈ B(0, R_0) such that
Q^+( Q^+( g1_{B(0,R_0)}, g1_{B(0,R_0)} ), g1_{B(0,R_0)} ) ≥ η_0 1_{B(v̄,δ_0)},
with R_0, δ_0, η_0 being constructive in terms of ρ_g, e_g and h_g.
We now suppose that 0 < ρ f 0 < +∞, h f 0 < +∞ and that
∀(x, v) ∈ Ω × R^d, f_0(x, v) ≥ ϕ(v) > 0
and we follow the argument used in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF].
By the Duhamel definition (2.10) of f being a mild solution and Lemma 3.1 we have
(3.9) f(t, X_t(x,v), V_t(x,v)) ≥ f_0(x,v) e^{−tC_L ⟨v⟩^{γ+}}
and
f(t,x,v) ≥ ∫_0^t e^{−(t−s)C_L ⟨v⟩^{γ+}} Q^+[f(s, X_{s,t}(x,v), ·), f(s, X_{s,t}(x,v), ·)](V_{s,t}(x,v)) ds.
Define t(x, v) > 0 to be the time of first contact with ∂Ω of the trajectory x + sv (see the rigorous definition in Proposition A.3). For all t in [0, t(x, v)] we have
X 0,t (x, v) = x + tv, V 0,t (x, v) = v.
Thus, for all 0 ≤ t ≤ t(x, v),
f(t,x,v) ≥ ∫_0^t e^{−(t−s)C_L ⟨v⟩^{γ+}} Q^+[f(s, x + sv, ·), f(s, x + sv, ·)](v) ds,
and we can iterate the latter inequality
f(t,x,v) ≥ ∫_0^t e^{−(t−s)C_L ⟨v⟩^{γ+}} Q^+[ ∫_0^s e^{−(s−s')C_L ⟨v⟩^{γ+}} Q^+( f(s', x + s'v, ·), f(s', x + s'v, ·) )(·) ds', f(s, x + sv, ·) ](v) ds. (3.10)
Inequalities (3.9) and (3.10) are exactly the same bounds as the ones obtained in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], Step 1 of the proof of Proposition 3.2, and we can therefore conclude in the same way with Lemma 3.6:
f(t,x,v) ≥ a_0(τ_0) 1_{B(v̄,δ_0)}(v),
as long as v is in B(0, R_0) and 0 ≤ t ≤ τ_0. The only difference with [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] is the fact that we need τ_0 to be in [0, t(x, v)], giving local lower bounds instead of a global one.
3.3. A lower bound depending only on the norm of the velocity: strategy of the proof of Proposition 2.5. As stated in the introduction, the spreading property of the bilinear operator Q^+ cannot be used (at least uniformly in time and space) when we are really close to the boundary, due to the lack of control over the rebounds. However, if we have a lower bound depending only on the norm of the velocity, then the latter bound will not take rebounds into account, as they preserve the norm, allowing us to spread this lower bound up to an exponential one.
The next two sections are dedicated to the creation of such a uniform lower bound depending solely on the norm of the velocity. In order to do so we restrict the problem by not taking large velocities into account, and divide the study into two cases: either the trajectory stays close to the boundary or it does not. In both cases we will start from the localised "upheaval points" constructed in Section 3.1 and spread them to the point where one gets a lower bound depending only on the norm of the velocity.
The next sections tackle each of these points. We first study the case when a characteristic reaches a point far from the boundary and finally we focus on the case of grazing characteristics. We fix δ T , δ X , δ V , R min and a 0 to be the ones described in Proposition 3.4 at time τ 0 = 0.
The result we will derive out of those studies is Proposition 2.5, and from now on dependencies on the physical observables of f (E_f and L^{p_γ}_f) will be mentioned but will not be explicitly written every time.
4. The cutoff case: characteristics passing by a point far from the boundary
In this section we manage to spread the lower bounds created in Proposition 3.4 up to a ball in velocity centred at zero as long as the trajectory we look at reaches a point far enough from the boundary.
First, we pick N in N* and cover Ω with ⋃_{1≤i≤N_X} B(x_i, δ_X/2^N) as in Proposition 3.4. Then for l ≥ 0 we define
(4.1) Ω_l = {x ∈ Ω : d(x, ∂Ω) ≥ l},
where d(x, ∂Ω) is the distance from x to the boundary of Ω.
For any R > 0 we define two sequences in R^+ by induction, for all τ ≥ 0 and l ≥ 0:
(4.2) r_0 = δ_V, r_{n+1} = (3√2/4) r_n,
(4.3) a_0(l, τ) = a_0, a_{n+1}(l, τ) = (C_Q r_n^{d+γ} / 4^{d/2−1}) · (l/(2^{n+3}R)) · e^{−τ C_L R^{γ+}} · a_n^2(l/8, τ),
where C Q and C L were defined in Lemma 3.3.
We express the spreading of the lower bound in the following proposition.
Proposition 4.1. Let f be the mild solution of the Boltzmann equation described in Theorem 2.3 and suppose that f satisfies Proposition 3.4 with τ_0 = 0. Consider 0 < τ ≤ δ_T and N in N. Let (x_i)_{i∈{1,...,N_X}} and (v_i)_{i∈{1,...,N_X}} be given as in Proposition 3.4 with τ_0 = 0. Then for all n in {0, ..., N} the following holds: for all 0 < l ≤ δ_X and R > 0 such that l/R < τ, for all t in [l/(2^n R), τ], and for all x ∈ Ω and v ∈ B(0, R), if there exists t_1 ∈ [0, t − l/(2^n R)] such that X_{t_1,t}(x,v) belongs to Ω_l ∩ B(x_i, δ_X/2^n), then
f(t,x,v) ≥ a_n(l, τ) 1_{B(v_i,r_n)}(V_{t_1,t}(x,v)),
where (r n ) and (a n ) are defined by (4.2)-(4.3).
Proof of Proposition 4.1. This Proposition will be proved by induction on n.
Step 1: Initialisation. The initialisation is simply Proposition 3.4. Indeed, we use the definition of f being a mild solution to write f(t,x,v) under its Duhamel form (2.10) starting at t_1, where both parts are positive. The control we have on the L operator, Lemma 3.1, allows us to bound the first term from above. Moreover, this bound on L is independent of x and v (see proof of Lemma 3.3). This gives
(4.4) f(t,x,v) ≥ e^{−(t−t_1)C_L R^{γ+}} f(t_1, X_{t_1,t}(x,v), V_{t_1,t}(x,v)).
Finally, Proposition 3.4 applied to f (t 1 , X t 1 ,t (x, v), V t 1 ,t (x, v)) gives us the property for n = 0.
Step 2: Proof of the induction. We consider the case where the proposition is true for n.
Let l ∈ (0, δ_X], t ∈ [l/(2^{n+1}R), τ], x ∈ Ω and v ∈ B(0, R) be given.
We suppose now that there exists
t 1 ∈ [0, t -l/(2 n+1 R)] such that X t 1 ,t (x, v) ∈ Ω l ∩ B(x i , δ X /2 n+1 ).
Similar to what we did in the first step of the induction, but concentrating on the second part of the Duhamel formula (2.10) we conclude that
(4.5) f(t,x,v) ≥ e^{−C_L τ R^{γ+}} ∫_{t_1 + l/(2^{n+3}R)}^{t_1 + l/(2^{n+2}R)} Q^+[f(s, X_{s,t}(x,v), ·), f(s, X_{s,t}(x,v), ·)](V_{t_1,t}(x,v)) ds.
The goal is now to apply the induction hypothesis to the triplet (s, X_{s,t}(x,v), v_*), where v_* is the integration parameter inside the Q^+ operator, with ‖v_*‖ ≤ R. One easily shows that X_{s,t}(x,v) = X_{t_1,t}(x,v) + (s − t_1)V_{t_1,t}(x,v) for s in [t_1 + l/(2^{n+3}R), t_1 + l/(2^{n+2}R)], and therefore we have that
(4.6) ‖X_{t_1,t}(x,v) − X_{s,t}(x,v)‖ ≤ l/2^{n+2},
and so X_{s,t}(x,v) belongs to Ω_{l − l/2^{n+2}}. Finally, we have to find a point on the characteristic trajectory of (s, X_{s,t}(x,v), v_*) that is in Ω_{l'} for some l'. This is achieved at the time t_1 (see Fig. 1).
Figure 1. Study of (s, X_{s,t}(x,v), v_*) far from the boundary

Indeed, we have s in [t_1 + l/(2^{n+3}R), t_1 + l/(2^{n+2}R)] so, for ‖v_*‖ ≤ R,
(4.7) ∀s' ∈ [t_1, s], ‖X_{s,t}(x,v) − (X_{s,t}(x,v) − (s − s')v_*)‖ ≤ l/2^{n+2}.
This gives us the backward characteristic trajectory starting from s, since X_{s,t}(x,v) − (s − s')v_* remains in Ω, and therefore
∀s' ∈ [t_1, s], X_{s',s}(X_{s,t}(x,v), v_*) = X_{s,t}(x,v) − (s − s')v_*, V_{s',s}(X_{s,t}(x,v), v_*) = v_*.
To conclude, we just need to gather the upper bounds we found about the trajectories reaching (X_{s,t}(x,v), v_*) in a time s in [t_1 + l/(2^{n+3}R), t_1 + l/(2^{n+2}R)], equations (4.6) and (4.7):
‖X_{t_1,t}(x,v) − X_{t_1,s}(X_{s,t}(x,v), v_*)‖ ≤ l/2^{n+1}.
We have that X_{t_1,t}(x,v) belongs to Ω_l ∩ B(x_i, δ_X/2^{n+1}) and therefore, for all s in [t_1 + l/(2^{n+3}R), t_1 + l/(2^{n+2}R)], X_{t_1,s}(X_{s,t}(x,v), v_*) belongs to Ω_{l/2} ∩ B(x_i, δ_X/2^n). Finally, if s belongs to [t_1 + l/(2^{n+3}R), t_1 + l/(2^{n+2}R)] we have that (l/8)/(2^n R) ≤ s ≤ τ and t_1 is in [0, s − (l/8)/(2^n R)].
We can therefore apply the induction assumption with l' = l/8 inside the Q^+ operator in (4.5), recalling that V_{t_1,s}(X_{s,t}(x,v), v_*) = v_*. This yields
f(t,x,v) ≥ a_n(l/8, τ)^2 e^{−C_L τ R^{γ+}} ∫_{t_1 + l/(2^{n+3}R)}^{t_1 + l/(2^{n+2}R)} Q^+[1_{B(v_i,r_n)}, 1_{B(v_i,r_n)}](V_{t_1,t}(x,v)) ds.
Applying the spreading property of Q + , Lemma 3.2, with ξ = 1/4 gives us the expected result for the step n + 1.
One easily notices that (r_n)_{n∈N} is a strictly increasing sequence. Moreover, for all N in N we have that for all 1 ≤ i ≤ N_X, v_i belongs to B(0, R_min). Therefore, by taking N big enough (bigger than N_1, say) we have that ∀i ∈ {1, ..., N_X}, B(0, 2R_min) ⊂ B(v_i, r_N).
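To quantify this remark, (4.2) can be solved explicitly; the following computation is ours and assumes the spreading factor √2(1−ξ) of Lemma 3.2 with ξ = 1/4, as used in the proof above:
\[
r_n = \Big(\tfrac{3\sqrt{2}}{4}\Big)^{n}\,\delta_V, \qquad \tfrac{3\sqrt{2}}{4} \approx 1.06 > 1,
\]
and since ‖v_i‖ ≤ R_min for every i, the inclusion B(0, 2R_min) ⊂ B(v_i, r_N) holds as soon as r_N ≥ 3R_min, i.e. for any
\[
N \;\geq\; N_1 := \Big\lceil \log\big(3R_{\min}/\delta_V\big) \big/ \log\big(3\sqrt{2}/4\big) \Big\rceil
\]
(provided 3R_min ≥ δ_V; otherwise N_1 = 0 already works).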
This remark leads directly to the following corollary which stands for Proposition 2.5 in the case when a point on the trajectory is far from the boundary of Ω.
Corollary 4.2. Let f be the mild solution of the Boltzmann equation described in Theorem 2.3 and suppose that f satisfies Proposition 3.4 with τ_0 = 0. Let ∆_T be in (0, δ_T] and take τ_1 in (0, ∆_T]. Then for all 0 < l ≤ δ_X, there exist a(l, τ_1, ∆_T) > 0 and 0 < t(l, τ_1, ∆_T) < τ_1 such that for all t in [τ_1, ∆_T] and every (x, v) in Ω × R^d: if there exists t_1 ∈ [0, t − t(l, τ_1, ∆_T)] such that X_{t_1,t}(x,v) belongs to Ω_l, then f(t,x,v) ≥ a(l, τ_1, ∆_T) 1_{B(0,2R_min)}(v).
Proof of Corollary 4.2. This is a direct consequence of Proposition 4.1.
Indeed, take 0 < l ≤ δ_X, 0 < τ_1 ≤ ∆_T and R = R(∆_T) > 0 such that R ≥ 3R_min and l/R ≤ ∆_T. Then take N_2 ≥ N_1 big enough such that l/(2^{N_2}R) < τ_1. We emphasize here that N_2 depends on τ_1, so we write N_2(τ_1). Now apply Proposition 4.1 with N = N_2(τ_1) and for t in [τ_1, ∆_T]. We obtain exactly Corollary 4.2 (since B(0, 2R_min) ⊂ B(v_i, r_N) for all i and R ≥ 3R_min) with
a(l, τ_1, ∆_T) = a_{N_2(τ_1)}(l, ∆_T) and t(l, τ_1, ∆_T) = l/(2^{N_2} R(∆_T)),
and the fact that ⋃_{1≤i≤N_X} B(x_i, δ_X/2^N) covers Ω.
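The smallness condition on l/(2^{N_2}R) used above is explicit; the following one-line computation (ours, for illustration) records an admissible choice:
\[
\frac{l}{2^{N_2} R} < \tau_1 \iff N_2 > \log_2\!\Big(\frac{l}{R\,\tau_1}\Big), \qquad\text{so}\qquad N_2(\tau_1) := \max\Big(N_1,\ \Big\lfloor \log_2\!\Big(\frac{l}{R\,\tau_1}\Big)\Big\rfloor + 1\Big) \text{ works.}
\]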
5. The cutoff case: geometry and grazing trajectories
We now turn to the case when the characteristic trajectory never escapes a small distance from the boundary of our convex domain Ω.
Intuitively, by considering the case where Ω is a circle, one can see that such a behaviour is possible only when the angles of collisions with the boundary remain small (which corresponds in higher dimensions to the scalar product of the velocity with the outside normal being close to zero), or when the angle is large but the norm of the velocity or the time of motion is small. Thus, by using the spreading property of the Q^+ operator we may be able to create larger balls in between two rebounds against the boundary, because the latter should not change the velocity too much.
The study of grazing collisions will follow this intuition. First of all, Section 5.1 proves a geometric lemma dealing with the fact that if the velocities are bounded from below and above, then for short times, the possibility for a trajectory to stay very close to the boundary implies that the velocity does not change much over time. Then Section 5.2 spreads a lower bound, in the same spirit as the last subsection, up to the point when this lower bound covers a centred ball in velocity. Notice that the geometric property forces us to work with velocities whose norm is bounded from below, and so we shall have to take into account the speed of the spreading.
5.1. Geometric study of grazing trajectories.
The key point of the study of grazing collisions is the following geometric lemma. We emphasize here that this is the only part of the article where we need the fact that Ω is C^2.

Proposition 5.1. Let Ω be an open convex bounded C^2 domain in R^d and let 0 < v_m < v_M. Then, for all ε > 0 there exists t_ε(v_M) such that for all 0 < τ_2 ≤ t_ε(v_M) there exists l_ε(v_m, τ_2) > 0 such that for all x in Ω and all v in R^d with v_m ≤ ‖v‖ ≤ v_M,
( ∀s ∈ [0, τ_2], X_s(x,v) ∉ Ω_{l_ε(v_m,τ_2)} ) ⟹ ( ∀s ∈ [0, t_ε(v_M)], ‖V_s(x,v) − v‖ ≤ ε ).
Furthermore, l_ε(v_m, ·) is an increasing function.
The following is dedicated to the proof of Proposition 5.1.
We recall that for x in Ω and v in R d we define, see Appendix A, t min (x, v) to be the time of the first proper rebound when we start from x with a velocity -v. This means that t min (x, v) does not take into account the case where a ball rolls on the boundary. This implies that one cannot hope to get continuity of the function t min because changing the velocity slightly may lead to a proper rebound instead of a rolling movement.
This being said, we define a time of collision against the boundary which will not take into account the possibility of rolling along the boundary of Ω. This will not be too restrictive, as we are considering a C^2 convex domain and therefore a trajectory that stays on the boundary will only reach a stopping point, which happens only on a set of measure zero in the phase space (see Appendix A). We therefore define, for x in Ω and v in R^d, the first forward contact with the boundary, t(x, v). It exists by the same arguments as for t_min. Notice that if x is on ∂Ω then for all v ≠ 0 we have that t(x, v) = 0 if and only if n(x) · v ≥ 0, with n(x) being the outward normal to ∂Ω at the point x.
We have the following Lemma dealing with the continuity of the outward normal to ∂Ω at the first forward contact point which will be of great interest for proving the crucial Proposition 5.1.
Lemma 5.2. Let Ω be an open convex bounded C^1 domain in R^d. Then t : (x, v) ⟼ t(x, v) is continuous from Ω × (R^d − {0}) to R^+.

Proof of Lemma 5.2. Let us suppose that t is not continuous at (x_0, v_0) in Ω × (R^d − {0}). Then
∃ε > 0, ∀N ≥ 1, ∃(x_N, v_N), ‖x_0 − x_N‖ ≤ 1/N, ‖v_0 − v_N‖ ≤ 1/N and |t(x_0, v_0) − t(x_N, v_N)| > ε.
If we still denote by d_Ω the diameter of Ω, we obviously have that for all N, 0 ≤ t(x_N, v_N) ≤ d_Ω/‖v_N‖. Thus, (t(x_N, v_N))_{N∈N} is a bounded sequence of R and we can extract a converging subsequence (t(x_{φ(N)}, v_{φ(N)})) such that T = lim_{N→+∞} t(x_{φ(N)}, v_{φ(N)}).
By construction (see Appendix A) we have that for all N in N, x φ(N ) +t(x φ(N ) , v φ(N ) )v φ(N ) belongs to ∂Ω which is closed. Moreover, this sequence converges to x 0 + T v 0 which therefore is on ∂Ω.
Finally, we have that |t(x_0, v_0) − T| ≥ ε. Since Ω is convex, the segment [x_0, x_0 + max(t(x_0,v_0), T)v_0] stays in Ω and intersects the boundary at at least two distinct points. By convexity of the domain, this implies that the extreme points of the latter segment have to be on the boundary, which means that x_0 belongs to ∂Ω, which is a contradiction.
Therefore, t is continuous in Ω × (R^d − {0}). By the definition of t(x, v) we have its continuity at the boundary. Indeed, n(x) · v ≥ 0 means we came from inside the domain to reach that point and we have
|t(x', v) − t(x, v)| ≤ ‖x − x'‖/‖v‖.
We are now ready to prove the geometric Proposition 5.1.
Proof of Proposition 5.1. Consider ε > 0 and 0 < v m < v M .
Step 1: the case of segments. The first step is to understand that if a whole trajectory stays close to the boundary, then the angle made by the velocity with respect to the normal at the point of collision is close to π/2 in dimension d = 2. The same behaviour in higher dimensions is described by the scalar product of the direction of the trajectory and the normal being close to zero. One has to remember that controlling ‖V_s(x,v) − v‖ is the same as controlling the scalar products of the trajectory and the normal on the boundary at each collision point (see the definition of V_s(x,v) in Appendix A).
Let x be on ∂Ω and p in N * . We define
Γ_p(x) = { |n(x) · v| : v ∈ S^{d−1} s.t. n(x) · v < 0 and ∀s ∈ [0, t(x,v)], x + sv ∉ Ω_{1/p} },
with Ω 1/p being defined by (4.1).
Γ p (x) gives us the values of scalar products between a normal on the boundary and all the directions that create a characteristic trajectory which stays at a distance less than 1/p from the boundary in between two distinct rebounds (see Fig. 2). This is exactly what we would like to control uniformly on the boundary.
We remark that Γ_p(x) is not empty because Ω and, thus, Ω_{1/p} are convex, and by the geometric Hahn-Banach theorem we can separate Ω_{1/p} and a disjoint convex ball containing x. It is also straightforward, by a mere Cauchy-Schwarz inequality, that Γ_p(x) is bounded from above by 1. Therefore we can define, for all p in N*,
h_p : ∂Ω → R^+, x ⟼ sup Γ_p(x).
We are going to prove that (h_p)_{p∈N*} satisfies the following properties: it is a decreasing sequence of functions, h_p is continuous in x for each p ≥ 1, and for all x in ∂Ω, (h_p(x))_{p∈N*} converges to 0.
The fact that (h p ) is decreasing is obvious. In order to prove the continuity of h p we take an x on the boundary and v in S d-1 such that |n(x) • v| is in Γ p (x). We have that for all s in [0, t(x, v)] d(x + sv, ∂Ω) < 1/p. The distance to the boundary is a continuous function and [0, t(x, v)] is compact so there exists s(x, v) in the latter interval such that d(x+s(x, v)v, ∂Ω) is maximum.
Because Ω is convex we have that Ω 1/p is convex and therefore
∀s ∈ [0, t(x,v)], B( x + sv, d(x + s(x,v)v, Ω_{1/p})/2 ) ∩ Ω_{1/p} = ∅.
Then for all x' on the boundary such that ‖x − x'‖ ≤ d(x + s(x,v)v, Ω_{1/p})/2, we have that for all s in [0, t(x',v)], x' + sv is not in Ω_{1/p}. Lemma 5.2 gives us that if x' is close to x then t(x',v) > 0 and thus v is not tangential at x' either. Moreover, Ω is C^2 so the outward normal to the boundary is continuous, and therefore for x' even closer to x we have that v is such that |n(x') · v| is also in Γ_p(x'). To conclude, we notice that the scalar product is continuous and therefore for all η > 0 we obtain
−η ≤ |n(x') · v| − |n(x) · v| ≤ η,
when x' is close enough to x.
The same arguments with the same constants apply (since our continuous functions act on compact sets and therefore are uniformly continuous): if x' is close to x then, taking |n(x') · v| in Γ_p(x'), we have |n(x) · v| in Γ_p(x) and the same inequality as above. This gives us the continuity of h_p at x. Indeed, we showed that for all x' close to x and for every element u in Γ_p(x) we can find an element u' in Γ_p(x') that is close to u.
Finally, it remains to show that for x on the boundary, h_p(x) tends to 0 as p tends to +∞. One can notice that the vector −n(x) is the maximum possible in Γ_p(x) and is exactly the direction of the diameter of Ω passing by x. Hence, simple convexity arguments lead to the fact that if all the segments of the form [x, x − t(x,−n(x))n(x)] intersect Ω_{1/p}, then for all x on the boundary there exists v_p(x) in S^{d−1} such that n(x) · v_p(x) = −h_p(x). Moreover, the segment [x, x + t(x, v_p(x))v_p(x)] is tangent to Ω_{1/p} and we denote by x_p its first contact point (see Fig. 2). The convexity of Ω and Ω_{1/p} shows that, as p increases, x_p gets closer to x and to the boundary (Ω is convex). Therefore v_p(x) tends to a tangent vector of the boundary at x. This shows that lim_{p→+∞} h_p(x) = 0 in the case where all the segments of the form [x, x − t(x,−n(x))n(x)] intersect Ω_{1/p}.

We now come to the case where the segments of the form [x, x − t(x,−n(x))n(x)] do not all intersect Ω_{1/p}. If for all p this segment does not intersect Ω_{1/p}, this implies, by convexity of Ω, that [x, x − t(x,−n(x))n(x)] is included in ∂Ω. But then −n(x) is not only a normal vector to the boundary at x but also a tangential one at x. Geometrically this means that x is a corner of ∂Ω and n(x) is ill-defined. This is impossible for Ω being C^2. Hence, for all x on the boundary, there exists p(x) such that the segment at x intersects Ω_{1/p(x)}. However, Ω is C^2 and we also have Lemma 5.2. Those two facts imply that p(x) is continuous on ∂Ω, which is compact, and therefore p(x) reaches a maximum. Let us call this maximum P. For all p ≥ P, all the segments of the form [x, x − t(x,−n(x))n(x)], x in ∂Ω, intersect Ω_{1/P} and we conclude thanks to the previous case.
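For the reader's convenience we recall the statement of Dini's theorem in the form used here (a standard result, quoted for completeness): if K is compact and (h_p)_{p∈N*} is a sequence of continuous functions on K, pointwise decreasing and converging pointwise to a continuous limit h, then
\[
\sup_{x \in K} |h_p(x) - h(x)| \xrightarrow[p \to +\infty]{} 0 .
\]
Here K = ∂Ω and h ≡ 0.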
Thanks to these three properties and the fact that ∂Ω is compact, we are able to use Dini's theorem. We therefore find that (h_p)_{p∈N*} converges uniformly to 0. By taking p_ε big enough, we have that for a segment of a characteristic trajectory joining two points on the boundary to stay outside Ω_{1/p_ε}, we must have sup Γ_{p_ε}(x) ≤ ε for any x on the boundary (see Fig. 2).
Step 2: more general trajectories. We take x in ∂Ω and v such that v_m ≤ ‖v‖ ≤ v_M and we suppose that, for a given t > 0,
∀s ∈ [0, t], X_s(x,v) ∉ Ω_{1/p_{ε/2N_max}},
with N_max to be defined later.
We are about to find a uniformly small time such that trajectories having at least two collisions against the boundary do not undergo an important evolution of their velocity. This will be achieved thanks to the facts that ‖v‖ ≤ v_M and that the maximum of the scalar product is attained at a critical vector, which is the only one that needs to be controlled.
Thanks to Proposition A.4, (X_s(x,v))_s has countably many rebounds against the boundary (almost surely a finite number, in fact). We denote by (t_i)_{i∈N} the sequence of times between consecutive collisions and by (l_i)_{i∈N} the distances travelled during these respective times. We have that
∀i ∈ N, l_i = ‖v‖ t_i and v_m t ≤ Σ_{i∈N} l_i ≤ v_M t.
Therefore, for all η > 0, there exists N_η(x,v) in N such that
(5.1) Σ_{i>N_η(x,v)} t_i ≤ η.
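Estimate (5.1) is simply the vanishing of the tail of a convergent series; explicitly (our rewriting):
\[
\sum_{i\in\mathbb{N}} t_i \;=\; \frac{1}{\|v\|}\sum_{i\in\mathbb{N}} l_i \;\leq\; \frac{v_M}{v_m}\, t \;<\; +\infty, \qquad\text{hence}\qquad \sum_{i > N} t_i \xrightarrow[N\to+\infty]{} 0 .
\]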
By continuity of t(x,v), see Lemma 5.2, and the fact that t(x,v) = 0 if and only if n(x) · v ≥ 0, we have that for η small enough (5.1) yields
(5.2) Σ_{i>N_η(x,v)} |n(x_i) · v_i| ≤ ε/4,
where v i is the velocity after the i th rebound and x i is the i th footprint.
t(x,v) is uniformly continuous on the compact ∂Ω × {‖v‖ = v_M} (see Lemma 5.2), therefore the footprints of (X_s(x,v))_{s∈[0,t]} depend uniformly continuously on (x,v), and there exist α_X^{(1)} > 0 and N_max in N such that
(5.3) ∀x, x' ∈ ∂Ω s.t. ‖x − x'‖ ≤ α_X^{(1)}, ∀ v_m ≤ ‖v‖ ≤ v_M, N_η(x,v) ≤ N_max − 1.
We have now defined N max .
The first property to notice is that if (X s (x, v)) s∈[0,t] has at least two rebounds against the boundary, then at each of them the scalar product between the incoming velocity and the outward normal is less than ε/2N max .
Secondly, Ω is C 2 and therefore n(x) is uniformly continuous on the boundary. Thus, the specular reflection operator R x is uniformly continuous on ∂Ω × B(0, v M ):
(5.4) ∃ α_X^{(2)} > 0, ∀x, x' ∈ ∂Ω s.t. ‖x − x'‖ ≤ α_X^{(2)}, ‖R_x − R_{x'}‖ ≤ ε/4N_max.
We want to be sure that straight trajectories stay in our domain of uniformity, so we consider
t ≤ t_ε(v_M) := (1/v_M) min( α_X, 1/p_{ε/2N_max} ),
where α_X = min(α_X^{(1)}, α_X^{(2)}) is defined in (5.3) and (5.4). To conclude, thanks to (5.3) and (5.2), if (X_s(x,v))_{s∈[0,t]} collides at least twice with the boundary then
∀s ∈ [0, t], ‖v − V_s(x,v)‖ ≤ 2 Σ_{i∈N} |n(x_i) · v_i| ≤ 2 Σ_{i≤N_max−1} ε/(4N_max) + 2 · ε/4 = ε.
Roughly speaking we do not allow the velocities near the critical direction to bounce against the wall and for the grazing ones we run them for a short time, preventing them from escaping a small neighbourhood where the collisions behave almost the same everywhere (see Fig. 2).
To conclude our proof, it only remains to find l ≤ 1/p_{ε/2N_max} that rules out trajectories staying outside Ω_l while undergoing only one rebound with a scalar product greater than ε/2N_max. This is easily achieved by taking l small enough that no trajectory with a scalar product greater than ε/2N_max can stay outside Ω_l during a time τ_2. Indeed, one part of such a trajectory overcomes a straight line of length at least v_m τ_2/2 making a scalar product greater than ε/2N_max with the normal. The distance from the boundary of the extremal point of this straight line is therefore, by convexity, uniformly bounded from below (e.g. in dimension 2 it is bounded by v_m τ_2 ε/4N_max). Taking l_ε(v_m, τ_2) to be the minimum between this lower bound and 1/p_{ε/2N_max} gives us the required distance from the boundary.
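The two-dimensional bound just quoted comes from an elementary computation, which we make explicit under the convention v̂ = v/‖v‖ and with the straight piece starting at a boundary point x (our illustration): its endpoint y = x + L v̂ with L = v_m τ_2/2 satisfies
\[
d\big(y,\,T_x\big) \;=\; |(y-x)\cdot n(x)| \;=\; L\,|\hat v\cdot n(x)| \;\geq\; \frac{v_m\tau_2}{2}\cdot\frac{\varepsilon}{2N_{\max}} \;=\; \frac{v_m\tau_2\,\varepsilon}{4N_{\max}},
\]
where T_x is the tangent line at x; the passage from the distance to T_x to the distance to ∂Ω is exactly the convexity argument invoked in the text.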
Figure 2. Control on grazing trajectories
Remark 5.3. In the case where Ω is a strictly convex C^3 domain, the proof of Proposition 5.1 can easily be made constructive thanks to the tools developed by Guo [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF].
In that case there exists a C^3 function ξ : R^d → R such that
Ω = {x ∈ R^d : ξ(x) < 0}
and such that ∇ξ ≠ 0 on ∂Ω, and there exists C_ξ > 0 such that
∂_{ij}ξ(x) v_i v_j ≥ C_ξ ‖v‖^2
for all x in Ω and all v in R^d. It allows us to define the following bounded functional along a characteristic trajectory (X_s, V_s):
α(s) = ξ^2(X_s) + [V_s · ∇ξ(X_s)]^2 − 2 (V_s · ∇^2ξ(X_s) · V_s) ξ(X_s) ≥ 0.
The latter functional satisfies that if X s 0 is on ∂Ω then
α(s_0) = [V_{s_0} · ∇ξ(X_{s_0})]^2 = [V_{s_0} · n(X_{s_0})]^2 |∇ξ(X_{s_0})|^2.
α thus encodes the evolution of the scalar product between the velocity of the trajectory and the normal to Ω at the footprints of the characteristic. If the characteristic trajectory starts with a velocity v such that v_m ≤ ‖v‖ ≤ v_M, as in Proposition 5.1, Lemma 1 and Lemma 2 of [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] show that in between two consecutive collisions with the boundary at times s_1 and s_2 there exists C_ξ > 0 such that
(5.5) |s_1 − s_2| ≥ C_ξ α(s_1) / (v_M^2 |∇ξ(X_{s_0})|),
(5.6) e^{C_ξ(v_m+1)s_1} α(s_1) ≤ e^{C_ξ(v_M+1)s_2} α(s_2),
(5.7) e^{−C_ξ(v_M+1)s_1} α(s_1) ≥ e^{−C_ξ(v_m+1)s_2} α(s_2).

With (5.5) we can control the minimum time between two consecutive collisions with the boundary, and therefore the minimum length of a segment between two consecutive collisions, uniformly in x and v (since ∇ξ is bounded from below on ∂Ω and non-vanishing). We therefore obtain a uniform maximum number of collisions during the given time T. Finally, (5.6) and (5.7) bound uniformly the evolution of the scalar product between two consecutive collisions, and therefore the maximum evolution of V_s(x,v) on the whole trajectory for a given time T. Plugging those constructive constants into the study we just made gives explicit constants in Proposition 5.1.

Now that we understand how grazing trajectories behave geometrically, we can turn our attention to their effects combined with the spreading property of the Boltzmann Q^+ operator.

5.2. Spreading effect along grazing trajectories. In order to use the geometrical behaviour of grazing characteristic trajectories, one needs to consider velocities that are bounded from below. However, we would like to spread a lower bound up to a ball centred at 0, where a lower bound on the norm of velocities is impossible. We shall overcome this problem using the flexibility of the spreading property of the Q^+ operator, Lemma 3.2, which allows us to increase the radius of the ball by any factor less than √2.
The idea is to spread the initial lower bound by induction as long as the origin is strictly outside the ball, where we are allowed to use the geometrical property of grazing characteristics. Finally, a last iteration of the spreading property, not requiring any a priori knowledge of the characteristics, will include 0 in the lower bound.
In Corollary 4.2 we can fix a special time τ_1 of crossing the frontier of some Ω_l, allowing us to derive a lower bound for our function in this special case. The second case of grazing trajectories is dealt with in Proposition 5.1, where we can find an l for Ω_l that controls the evolution of the velocity. Our goal now is to fix all the constants that are still free and to find a time of collision small enough that it remains the same during the whole iteration scheme.
We now fix all the constants that remain to be fixed in Corollary 4.2 thanks to Proposition 5.1. Let
(5.8) ∆_T = min( δ_T, t_{δ_V/4}(3R_min) ).
Next we define, for ξ in (0, 1), (5.9)
r_0(ξ) = δ_V, r_{n+1}(ξ) = √2(1−ξ) r_n(ξ) − δ_V/4.
We have that (r_n(1/2 − 5/(8√2)))_{n∈N} is a strictly increasing sequence. Therefore, there exists N_max such that
r_{N_max}(1/2 − 5/(8√2)) ≥ 2R_min.
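To see why such an N_max exists, one can solve the affine recurrence (5.9) explicitly; this short computation is ours and is not needed elsewhere. Writing a := √2(1−ξ) with ξ = 1/2 − 5/(8√2),
\[
a = \frac{\sqrt{2}}{2} + \frac{5}{8} \approx 1.33 > \frac{5}{4},
\qquad
r_n = a^n\Big(\delta_V - \frac{\delta_V/4}{a-1}\Big) + \frac{\delta_V/4}{a-1},
\]
and since a > 5/4 the coefficient δ_V − (δ_V/4)/(a−1) is strictly positive, so (r_n) increases geometrically to +∞ and eventually exceeds 2R_min.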
Now we fix N in N * greater than N max . With this N and Proposition 3.4 at τ 0 = 0, we construct v 1 , . . . , v N X .
For i in {1, ..., N_X} we take ξ^{(i)} in (0, 1/2 − 5/(8√2)] and we define N_max(i) to be such that 0 ∉ B(v_i, r_n(ξ^{(i)})) for all n < N_max(i) and 0 ∈ B(v_i, r_{N_max(i)}(ξ^{(i)})). We can in fact take ξ^{(i)} such that 0 ∈ Int B(v_i, r_{N_max(i)}(ξ^{(i)})).
Therefore we have that for all i in {1, . . . , N X },
δ_i = ‖v_i‖ − r_{N_max(i)−1}(ξ^{(i)}) ≥ 0,
which is strictly positive if and only if N max (i) > 0. We consider
(5.10) v m = min i∈{1,...,N X } {δ i ; δ i > 0}.
We can now define:
∀ 0 < τ ≤ ∆_T,
(5.11) R(τ) = max( 3R_min, 2δ_X/τ + 1 ),
(5.12) τ_1(τ) = τ − 2δ_X/R(τ) > 0,
(5.13) t(τ) = t(l(τ), τ_1(τ), ∆_T).
Finally, we define l(τ): ∀ 0 < τ ≤ ∆_T,
(5.14) τ_2(τ) = min( ∆_T, δ_X/R(τ) ),
(5.15) l(τ) = min( δ_X, l_{δ_V/4}(v_m, τ_2(τ)) ).
We also build up the following sequence, where R, l and τ_1 depend on τ:
(5.16) b_0^{(i)}(τ, ∆_T) = a_0 e^{−(∆_T−τ)C_L R^{γ+}},
b_{n+1}^{(i)}(τ, ∆_T) = min( C_Q r_n^{d+γ}(ξ^{(i)}) (ξ^{(i)})^{d/2−1} (δ_X/(2^{n+2}R)) e^{−τ C_L R^{γ+}} (b_n^{(i)}(τ, ∆_T))^2 ; a(l, τ_1, ∆_T) ),
where ξ^{(i)} was defined above and a(l, τ_1, ∆_T) was defined in Corollary 4.2.
We are now ready to state the next proposition, which is the complement of Proposition 4.1 in the case when the trajectory stays close to the boundary. We remind the reader that 0 < t(τ) < τ_1(τ).

Proposition 5.4. Let f be the mild solution of the Boltzmann equation described in Theorem 2.3 and suppose that f satisfies Proposition 3.4 with τ_0 = 0. Consider 0 < τ ≤ ∆_T and take i in {1, ..., N_X} such that N_max(i) > 1. For all n in {0, ..., N_max(i)−1} we have that for all t in [τ − δ_X/(2^n R(τ)), ∆_T], all x in B(x_i, δ_X/2^n) and all v in B(0, R(τ)): if
∀s ∈ [0, t − t(τ)], X_{s,t}(x,v) ∉ Ω_{l(τ)},
then f(t,x,v) ≥ b_n^{(i)}(τ, ∆_T) 1_{B(v_i, r_n(ξ^{(i)}))}(v), all the constants being defined in (5.8), (5.9), (5.15), (5.11), (5.12), (5.13) and (5.16).
Proof of Proposition 5.4. We are going to use the same kind of induction we used to prove Proposition 4.1. So we start by fixing i such that N max (i) > 1.
Step 1: Initialisation. The initialisation is simply Proposition 3.4 and the first term in the Duhamel formula (2.10) starting at τ, with the control from above on L thanks to Lemma 3.1.
Step 2: Proof of the induction. We consider the case where the proposition is true at n ≤ N_max(i) − 2. We take t in [τ − δ_X/(2^{n+1}R(τ)), ∆_T], x in B(x_i, δ_X/2^{n+1}) and v in B(0, R(τ)).
We suppose now that for all s ∈ [0, t − t(τ)] the point X_{s,t}(x,v) does not belong to Ω_{l(τ)}.
To shorten notation, we will skip the dependence on τ of the constants. We use the definition of f being a mild solution to write f(t,x,v) under its Duhamel form (2.10), where both parts are positive. As in the proof of Proposition 4.1, we control, uniformly in t, x and v, the L operator from above. This yields
(5.17) f(t,x,v) ≥ e^{−C_L τ R^{γ+}} ∫_{t−δ_X/(2^{n+1}R)}^{t−δ_X/(2^{n+2}R)} Q^+[f(s, X_{s,t}(x,v), ·), f(s, X_{s,t}(x,v), ·)](V_{s,t}(x,v)) ds,
where we used ‖V_{s,t}(x,v)‖ = ‖v‖ ≤ R. We also emphasize here that this inequality holds true thanks to the definition (5.11):
t − δ_X/(2^{n+1}R) ≥ τ − δ_X/R > 0.
The goal is now to apply the induction hypothesis to the triplet (s, X_{s,t}(x,v), v_*), where v_* is the integration parameter inside the Q^+ operator, with ‖v_*‖ ≤ R. We notice first that for all s in [t − δ_X/(2^{n+1}R), t − δ_X/(2^{n+2}R)]
Thanks to Proposition 5.4, we can build, for all x and all v, a lower bound that will contain 0 in its interior after another use of the spreading property of the Q + operator. The next Corollary is the complement of Corollary 4.2.
Corollary 5.5. Let f be the mild solution of the Boltzmann equation described in Theorem 2.3 and suppose that f satisfies Proposition 3.4 with τ_0 = 0. Let ∆_T be defined by (5.8). There exists r_V > 0 such that for all τ ∈ (0, ∆_T] there exists b(τ) > 0 such that for all t in [τ, ∆_T] the following holds: if, for t(τ) and l(τ) defined by (5.13)–(5.15),
∀s ∈ [0, t − t(τ)], X_{s,t}(x,v) ∉ Ω_{l(τ)},
then
f(t,x,v) ≥ b(τ) 1_{B(0,r_V)}(v).
Proof of Corollary 5.5. We are going to use the spreading property of Q^+ one more time. We recall that we chose N ≥ N_max ≥ N_max(i) for all i. By definition of N_max(i),
∀i ∈ {1, ..., N_X}, 0 ∈ Int B(v_i, r_{N_max(i)}(ξ^{(i)})).
We define r_V = min{ r_{N_max(i)}(ξ^{(i)}) − ‖v_i‖ : i ∈ {1, ..., N_X} }, which only depends on δ_V and (v_i)_{i∈{1,...,N_X}}. By construction we see that
(5.20) ∀i ∈ {1, ..., N_X}, B(0, r_V) ⊂ B(v_i, r_{N_max(i)}(ξ^{(i)})).
Now we take τ in (0, ∆_T], and we take t in [τ, ∆_T], x in B(x_i, δ_X/2^N) and v in B(0, R(τ)) such that
∀s ∈ [0, t − t(τ)], X_{s,t}(x,v) ∉ Ω_{l(τ)}.
We have that t is in [τ − δ_X/(2^{N_max(i)−1}R(τ)), ∆_T] and x in B(x_i, δ_X/2^{N_max(i)−1}) (since N ≥ N_max(i)).
By the same methods by which we reached (5.19), we obtain for n = N_max(i):
(5.21) f(t,x,v) ≥ C_Q r_n^{d+γ}(ξ^{(i)}) (ξ^{(i)})^{d/2−1} e^{−τ C_L R^{γ+}} (b_n^{(i)})^2 ∫_{t−δ_X/(2^{n+1}R)}^{t−δ_X/(2^{n+2}R)} 1_{B(v_i, √2(1−ξ^{(i)}) r_n(ξ^{(i)}))}(V_{s,t}(x,v)) ds.
This time the conclusion is different because we cannot bound the velocity from below, since our lower bound contains 0. However, (5.20) allows us to bound the integrand in (5.21) from below by a function depending only on the norm. Moreover, ‖v‖ = ‖V_{s,t}(x,v)‖ along characteristic trajectories (see Proposition A.8). Thus we obtain the expected result by taking
b(τ) = min{ b_{N_max(i)}^{(i)} : i ∈ {1, ..., N_X} }.
6. Maxwellian lower bound in the cutoff case: proof of Theorem 2.3

This section gathers all the results we proved above and proves the main theorem in the case of a cutoff collision kernel.

6.0.1. Proof of Proposition 2.5. By combining Corollary 4.2 and Corollary 5.5 we can deal with any kind of characteristic trajectory. This is expressed by the following lemma.

Lemma 6.1. Let f be the mild solution of the Boltzmann equation described in Theorem 2.3 and suppose that f satisfies Proposition 3.4 with τ_0 = 0. There exist ∆_T > 0 and r_V > 0 such that for all 0 < τ ≤ ∆_T there exists a(τ) > 0 such that
∀t ∈ [τ, ∆_T], a.e. (x, v) ∈ Ω × R^d, f(t,x,v) ≥ a(τ) 1_{B(0,r_V)}(v).
Proof of Lemma 6.1. In Corollary 5.5 we constructed ∆ T and r V .
We now take τ in (0, ∆ T ] and consider t in [τ, ∆ T ], (x, v) in Ω × R d where f is a mild solution of the Boltzmann equation.
We remind the reader that l(τ) and t(τ) have been introduced in (5.15) and (5.13). Either (X_{s,t}(x,v))_{s∈[0,t−t(τ)]} meets Ω_{l(τ)}, and then we use Corollary 4.2 to get
f(t,x,v) ≥ a(l(τ), τ_1(τ), ∆_T) 1_{B(0,r_V)}(v).
Or (X s,t (x, v)) s∈[0,t-t(τ )] stays out of Ω l(τ ) and then we use Corollary 5.5 to get
f(t,x,v) ≥ b(τ) 1_{B(0,r_V)}(v).
We obtain Lemma 6.1 with a(τ ) = min (a(l(τ ), τ 1 (τ ), ∆ T ), b(τ )).
We now have all the tools to prove Proposition 2.5.
Proof of Proposition 2.5. Let τ be strictly positive and consider t in [τ /2, τ ].
First case. We suppose that f satisfies Proposition 3.4 with τ_0 = 0. We can compare t with ∆_T constructed in Lemma 6.1. If t ≤ ∆_T then we can apply the latter lemma and obtain, for almost every (x, v) in Ω × R^d,
(6.1) f(t,x,v) ≥ a(τ/2) 1_{B(0,r_V)}(v).
If t ≥ ∆_T then we can use the Duhamel formula (2.10), bound f(t,x,v) from below by its value at time ∆_T (as we did in the first step of the induction in the proof of Proposition 4.1), and use Lemma 6.1 at ∆_T. This gives, for ‖v‖ ≤ r_V,
(6.2) f(t,x,v) ≥ f(∆_T, X_{∆_T,t}(x,v), V_{∆_T,t}(x,v)) e^{−(t−∆_T)C_L r_V^{γ+}} ≥ a(∆_T) e^{−(τ−∆_T)C_L r_V^{γ+}} 1_{B(0,r_V)}(V_{∆_T,t}(x,v)) = a(∆_T) e^{−(τ−∆_T)C_L r_V^{γ+}} 1_{B(0,r_V)}(v).
We just have to take the minimum of the two lower bounds (6.1) and (6.2) to obtain Proposition 2.5.
Second case. We do not assume anymore that f satisfies Proposition 3.4 with τ 0 = 0.
Thanks to Proposition 3.4 with τ 0 = τ /4 we have that
∀t ≥ 0, ∀x ∈ Ω, ∀v ∈ R^d, F(t,x,v) = f(t + τ_0, x, v)
is a mild solution of the Boltzmann equation satisfying exactly the same bounds as f in Theorem 2.3 and such that F has the property of Proposition 3.4 at 0 (note that all the constants depend on τ_0). Hence, we can apply the first case for t' in [τ/4, 3τ/4] to F(t',x,v). This gives us the expected result for f(t,x,v) for t = t' + τ_0 in [τ/2, τ].
6.1. Proof of Theorem 2.3. As was mentioned in Section 1.2, the main difficulty in the proof is to create a lower bound depending only on the norm of the velocity. This has been achieved thanks to Proposition 2.5. If we consider this proposition as the start of an induction then it leads to exactly the same process developed by Mouhot in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], Section 3. Therefore we will just explain how to go from Proposition 2.5 to Theorem 2.3, without writing too many details.
First of all, by using the spreading property of the Q + operator once again we can grow the lower bound derived in Proposition 2.5. Proposition 6.2. Let f be the mild solution of the Boltzmann equation described in Theorem 2.3. For all τ in (0, T ), there exists R 0 > 0 such that
∀n ∈ N, ∀t ∈ [τ − τ/2^{n+1}, τ], ∀(x,v) ∈ Ω × R^d, f(t,x,v) ≥ a_n(τ) 1_{B(0,r_n)}(v),
with the induction formulae
a_{n+1}(τ) = cst C_e a_n^2(τ) r_n^{d+γ} ξ_n^{d/2+1} / 2^{n+1} and r_{n+1} = √2(1−ξ_n) r_n,
where (ξ_n)_{n∈N} is any sequence in (0, 1) and r_0 = r_V; a_0(τ) and C_e only depend on τ, E_f (and L^{p_γ}_f if Φ satisfies (1.4) with γ < 0).
Indeed, we take the result in Proposition 2.5 to be the first step of our induction and then, for n in N and 0 < τ < T , the Duhamel form of f gives
f(t,x,v) ≥ ∫_{τ−τ/2^{n+1}}^{τ−τ/2^{n+2}} e^{−C_L(t−s)⟨v⟩^{γ+}} Q^+( f(s, X_{s,t}(x,v), ·), f(s, X_{s,t}(x,v), ·) )(V_{s,t}(x,v)) ds, for t in [τ − τ/2^{n+2}, τ].
Using the induction hypothesis together with the spreading property of Q^+ (Lemma 3.2) leads us, as in the proofs of Propositions 4.1 and 5.4, to a bigger ball in velocity, centred at 0. The only issue is to avoid the v-dependence in exp(−C_L(t−s)⟨v⟩^{γ+}), which can easily be achieved as shown at the end of the proof of Proposition 3.2 in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF]. This is exactly the same result as Proposition 3.2 in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], but with the added uniformity in x.
As in Lemma 3.3 in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], we can take an appropriate sequence (ξ n ) n∈N and look at the asymptotic behaviour of (a n (τ )) n∈N . We obtain the following
∀τ > 0, ∃ρ_τ, θ_τ > 0, ∀(x,v) ∈ Ω × R^d, f(t,x,v) ≥ (ρ_τ/(2πθ_τ)^{d/2}) e^{−|v|^2/(2θ_τ)}.
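Let us sketch, in our own words and following [13], why the induction formulae produce a Gaussian. Setting u_n := −log a_n(τ), the quadratic recurrence a_{n+1} ≈ cst · a_n^2 r_n^{d+γ}/2^{n+1} gives, for a suitable geometric choice of (ξ_n),
\[
u_{n+1} = 2u_n + O(n) \;\Longrightarrow\; u_n = O(2^n), \qquad\text{while}\qquad r_n \sim \mathrm{cst}\cdot 2^{n/2},
\]
so a_n ≳ e^{−C 2^n} ≳ e^{−C' r_n^2} on B(0, r_n); optimising over n for each fixed |v| then yields the Maxwellian decay displayed above.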
Notice that, again, the result is uniform in space, since the previous one was, and that the constants ρ τ and θ τ only depend on τ and the physical quantities associated to f .
To conclude, it remains to make the result uniform in time. As noticed in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], Lemma 3.5, the results we obtained so far do not depend on an explicit form of f_0 but just on uniform bounds and continuity that are satisfied at all times, positions and velocities. Therefore, we can run the same arguments starting at any time, not only at t = 0. So if we take τ > 0 and consider τ ≤ t < T, we just have to make the proof start at t − τ to obtain Theorem 2.3.
7. Exponential lower bound in the non-cutoff case: proof of Theorem 2.7
In this section we prove the immediate appearance of an exponential lower bound for solutions to the Boltzmann equation (1.1) in the case of a collision kernel satisfying the non cutoff property.
The definition of being a mild solution in the case of a non cutoff collision kernel, Definition 2.6 and equation (2.12), shows that we are in fact dealing with an almost cutoff kernel to which we add a non locally integrable remainder. The strategy will mainly follow what we did in the case of a cutoff collision kernel with the addition of controlling the loss due to the added term.
As in the last section, we shall first prove that solutions to the Boltzmann equation can be uniformly bounded from below by a lower bound depending only on the norm of the velocity and then use the proof given for the non cutoff case in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF]. We will do that by proving the immediate appearance of localised "upheaval points" and spreading them up to the point where we reach a uniform lower bound that includes a ball in velocity centred at the origin. The spreading effect will be done both in the case where the trajectories reach a point far from the boundary and in the case of grazing trajectories. At this point we will spread this lower bound on the norm of the velocity up to the exponential lower bound we expect.
We gather here two lemmas, proved in [13], which we shall use in this section. They control the L^∞-norm of the linear operator S_ε and of the bilinear operator Q^1_ε. We first give a property satisfied by the linear operator S, (2.12), which is Corollary 2.2 in [13]:
∀v ∈ R^d, |S[g](v)| ≤ C_g^S ⟨v⟩^{γ+},
where C_g^S is defined by:
(1) If Φ satisfies (1.4) with γ ≥ 0 or if Φ satisfies (1.5), then C_g^S = cst m_b C_Φ e_g.
(2) If Φ satisfies (1.4) with γ ∈ (−d, 0), then C_g^S = cst m_b C_Φ (e_g + l_g^p), p > d/(d + γ).
We will compare the lower bound created by the cutoff part of our kernel to the remaining part Q^1_ε. To do so we need to control its L^∞-norm. This is achieved thanks to Lemma 2.5 in [13], which we recall here.

Lemma 7.2. Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν ∈ [0, 2). Let f, g be measurable functions on R^d. Then
(1) If Φ satisfies (1.4) with 2 + γ ≥ 0 or if Φ satisfies (1.5), then
∀v ∈ R^d, |Q^1_b(g, f)(v)| ≤ cst m_b C_Φ ‖g‖_{L^1_γ} ‖f‖_{W^{2,∞}} ⟨v⟩^{γ}.
(2) If Φ satisfies (1.4) with 2 + γ < 0, then
∀v ∈ R^d, |Q^1_b(g, f)(v)| ≤ cst m_b C_Φ ( ‖g‖_{L^1_γ} + ‖g‖_{L^p} ) ‖f‖_{W^{2,∞}} ⟨v⟩^{γ+2},
with p > d/(d + γ + 2).
7.1. A lower bound only depending on the norm of the velocity. In this section we prove the following proposition, which is exactly Proposition 2.5 in the non-cutoff framework.
Proposition 7.3. Let f be the mild solution of the Boltzmann equation described in Theorem 2.7. For all 0 < τ < T there exists a_0(τ) > 0 such that
∀t ∈ [τ/2, τ], ∀(x,v) ∈ Ω × R^d, f(t,x,v) ≥ a_0(τ) 1_{B(0,r_V)}(v),
with r_V and a_0(τ) only depending on E_f, E_f, W_f (and L^{p_γ}_f if Φ satisfies (1.4) with γ < 0).
Proof of Proposition 7.3. As before, we would like to create localised "upheaval points" (as the ones created in Proposition 3.4) and then extend them. Both steps are done, as in the cutoff case, by induction along the characteristics.
We have the following inequality:
(7.2) Q^+_ε(f, f) + Q^1_ε(f, f) ≥ Q^+_ε(f, f) − |Q^1_ε(f, f)|.
From the definition of being a mild solution in the non-cutoff case (Definition 2.6), for any 0 < ε < ε_0,
(7.3) f(t, X_t(x,v), V_t(x,v)) = f_0(x,v) exp( −∫_0^t (L_ε + S_ε)[f(s, X_s(x,v), ·)](V_s(x,v)) ds )
+ ∫_0^t exp( −∫_s^t (L_ε + S_ε)[f(s', X_{s'}(x,v), ·)](V_{s'}(x,v)) ds' ) (Q^+_ε + Q^1_ε)[f(s, X_s(x,v), ·), f(s, X_s(x,v), ·)](V_s(x,v)) ds.
Due to Lemmas 3.1, 7.1 and 7.2 we find that
(7.4) L_ε[f] ≤ C_f n_{b^{CO}_ε} ⟨v⟩^{γ+}, S_ε[f] ≤ C_f m_{b^{NCO}_ε} ⟨v⟩^{γ+}
and
(7.5) |Q^1_ε(f, f)| ≤ C_f m_{b^{NCO}_ε} ⟨v⟩^{(2+γ)+},
where C_f > 0 is a constant depending on E_f, E_f, W_f (and L^{p_γ}_f if Φ satisfies (1.4) with γ < 0).
The proof of Proposition 7.3 is divided into three different inductions that are dealt with in the same way as in the proof of Proposition 2.5. Each induction represents a step in the proof: one to create localised initial lower bounds (Lemma 3.3), another one to deal with non-grazing trajectories (Proposition 4.1) and the final one for grazing trajectories (Proposition 5.4). Therefore, we will just point out below the only changes we need to make those inductions work in the non-cutoff case.
In all the inductions in the cutoff case, the key point of the induction was to control at each step quantities of the form
f(t,x,v) ≥ ∫_{t_n^{(1)}}^{t_n^{(2)}} exp( −∫_s^t (L_ε + S_ε)[f(s', X_{s'}(x,v), ·)](V_{s'}(x,v)) ds' ) (Q^+_ε + Q^1_ε)[f(s, X_s(x,v), ·), f(s, X_s(x,v), ·)](V_s(x,v)) ds,
where (t_n^{(1)})_{n∈N}, (t_n^{(2)})_{n∈N} are defined differently for grazing and non-grazing trajectories (see proofs of Propositions 4.1 and 5.4).
Much like those previous inductions, and using (7.2), (7.3) and (7.4)–(7.5), if f(t,x,v) ≥ a_n 1_{B(v̄,r_n)} then
f(t,x,v) ≥ ∫_{t_n^{(1)}}^{t_n^{(2)}} e^{−C_f^ε(R)} ( a_n^2 Q^+_ε[1_{B(v̄,r_n)}, 1_{B(v̄,r_n)}] − C_f m_{b^{NCO}_ε} R^{(2+γ)+} )(V_s(x,v)) ds,
which leads to
(7.6) f(t,x,v) ≥ ∫_{t_n^{(1)}}^{t_n^{(2)}} e^{−C_f^ε(R)} ( a_n^2 cst l_{b^{CO}_ε} c_Φ r_n^{d+γ} ξ_n^{d/2−1} 1_{B(v̄, r_n√2(1−ξ_n))} − C_f m_{b^{NCO}_ε} R^{(2+γ)+} )(V_s(x,v)) ds,
due to the spreading property of Q + ε (see Lemma 3.2) and using the shorthand notation
C_f^ε(R) = C_f ( n_{b^{CO}_ε} + m_{b^{NCO}_ε} ) R^{γ+}.
To conclude we notice that, thanks to the definitions (3.1), (2.9) and (7.1),
l_{b^{CO}_ε} ≥ l_b and
(7.7) n_{b^{CO}_ε} ∼_{ε→0} (b_0/ν) ε^{−ν}, m_{b^{NCO}_ε} ∼_{ε→0} (b_0/(2−ν)) ε^{2−ν}
if ν belongs to (0, 2), and
(7.8) n_{b^{CO}_ε} ∼_{ε→0} b_0 |log ε|, m_{b^{NCO}_ε} ∼_{ε→0} (b_0/2) ε^2
for ν = 0. Thus, at each step of the inductions we just have to redo the proofs done in the cutoff case and choose ε = ε n small enough such that (7.9)
(7.9) C_f m_{b^{NCO}_{ε_n}} R^{(2+γ)+} ≤ (1/2) a_n^2 cst l_b c_Φ r_n^{d+γ} ξ_n^{d/2−1},
where (ξ_n)_{n∈N} is any sequence in (0, 1) and r_0 = r_V; a_0(τ) and C_f depend only on τ, E_f, E_f, W_f (and L^{p_γ}_f if Φ satisfies (1.4) with γ < 0).
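Using the equivalences (7.7), condition (7.9) prescribes an admissible size for ε_n; the following explicit choice (ours, with all constants absorbed into C) works for ν ∈ (0, 2):
\[
m_{b^{NCO}_{\varepsilon_n}} \sim \frac{b_0}{2-\nu}\,\varepsilon_n^{\,2-\nu}
\quad\Longrightarrow\quad
\varepsilon_n = C\left(\frac{a_n^2\, r_n^{d+\gamma}\, \xi_n^{d/2-1}}{R^{(2+\gamma)^+}}\right)^{1/(2-\nu)},
\]
and for ν = 0 the equivalence (7.8) gives the analogous choice with exponent 1/2.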
We emphasize here that the induction formulae are obtained thanks to the use of the equivalences (7.7) and (7.8), which allow us to control the exponential factors:
e^{−C_f^{ε_n}(R)(t_n^{(2)} − t_n^{(1)})} ≥ e^{−C_f (m_{b^{NCO}_{ε_n}} + n_{b^{CO}_{ε_n}}) (Σ_{k≥n+1} ∆_k) R^{γ+}}
(see Step 2 of the proof of Proposition 4.2, Section 4 in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF]).
As we obtain exactly the same induction formulae as in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], the asymptotic behaviour of the coefficients a n is the same. Thus, by choosing an appropriate sequence (∆ n ) n∈N , as done in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], we can construct the expected exponential lower bound independently of time.
Appendix A. The free transport equation: proof of Theorem 2.1
In this section, we study the transport equation with a given initial datum and boundary condition in a bounded domain Ω. We will only consider the case of purely specular reflections on the boundary ∂Ω. Those kinds of interactions cannot occur for all velocities at the boundary. Indeed, for a particle to bounce back at the boundary, we need its velocity to come from inside the domain Ω. To express this fact mathematically, we define
Λ^+ = { (x, v) ∈ ∂Ω × R^d : v · n(x) ≥ 0 },
where we denote by n(x) the exterior normal to ∂Ω at x.
Consider u_0 : Ω × R^d → R which is C^1 in x ∈ Ω and belongs to L^2(Ω × R^d) = L^2_{x,v}. We are interested in the problem stated in Theorem 2.1, (2.5)–(2.7).
If D_x(v)(u) denotes the directional derivative of u in x in the direction of v, we have, in the case of functions that are C^1 in x, D_x(v)(u) = v · ∇_x u.
Therefore, instead of imposing that the solution to the transport equation should be C 1 in x, we reformulate the problem with directional derivatives.
Physically, the free transport equation means that a particle evolves freely in Ω at a velocity v until it reaches the boundary. Then it bounces back and moves straight until it reaches the boundary for the second time, and so on and so forth up to time t. The method of characteristics is therefore the best way to link u(t, x, v) to u_0 by just following the path used by the particle, backwards from t to 0 (see Fig. 3). This method has been used in [START_REF] Guo | Singular solutions of the Vlasov-Maxwell system on a half line[END_REF] on the half-line and in [START_REF] Chen | Linear transport equation with specular reflection boundary condition[END_REF], [START_REF] Hwang | Regularity for the Vlasov-Poisson system in a convex domain[END_REF], for instance, in the case of convex media. However, in both articles they only deal with finite, or countably many, numbers of rebounds in finite time. Indeed, the electrical field in [START_REF] Guo | Singular solutions of the Vlasov-Maxwell system on a half line[END_REF] and [START_REF] Hwang | Regularity for the Vlasov-Poisson system in a convex domain[END_REF] makes the particles always reach the boundary with v · n(x) > 0, and [START_REF] Chen | Linear transport equation with specular reflection boundary condition[END_REF] has a specular boundary problem with an absorption coefficient α ∈ [0, 1): u(t, x, v) = αu(t, x, R_x(v)). Therefore, in the case where the particle arrives tangentially to the boundary, i.e. v · n(x) = 0, we have R_x(v) = v and so u(t, x, v) = 0.

Figure 3. Backward trajectory with standard rebounds

This vanishing property allowed the authors to not care about the special cases where the particle starts to roll on the boundary.
Another way of looking at the characteristics method is to study the footprints of the trajectories on the boundary. This problem, as well as the possibility of having infinitely many rebounds in a finite time, has been tackled by Tabachnikov in [16]. Tabachnikov only focused on boundary points, since the description of the trajectories by their collisions with the boundary enjoys a symplectic property and defines a volume-preserving transformation. Such properties allowed him to show that the set of points on the boundary that lead to infinitely many rebounds in finite time is of measure 0 ([16], Lemma 1.7.1). Unfortunately, in our case we would like to follow the characteristics, and the study of trajectories only via their footprints on the boundary is no longer a volume-preserving transformation.
In our case we need to follow the path of a particle along the characteristics of the equation to know the value of our function at each step. If the particle starts to roll on the boundary (see Fig. 4) we need to know for how long it will do so. The major issue is the fact that v · n(x) = 0 does not tell us much about the geometry of ∂Ω at x and the possibility, or lack thereof, for the particle to keep moving tangentially to the boundary. Moreover, some cases lead to non-physical behaviour, since the sole specular collision condition implies that some pairs (x, v) ∈ ∂Ω × R^d can only be starting points: they cannot be generated by any trajectories (see Fig. 5). This case is mentioned quickly in the first chapter of [START_REF] Tabachnikov | Geometry and billiards[END_REF] but not dealt with. Therefore, in order to prove the well-posedness of the transport equation (2.5)–(2.7), we follow the ideas developed in [START_REF] Guo | Singular solutions of the Vlasov-Maxwell system on a half line[END_REF] and [START_REF] Hwang | Regularity for the Vlasov-Poisson system in a convex domain[END_REF], which consist of studying the backward trajectories that can lead to a point (t, x, v), combined with the idea of countably many collisions in finite time used in [START_REF] Chen | Linear transport equation with specular reflection boundary condition[END_REF]. However, we have to deal with the issues described above, and to do so we introduce a new classification of possible interactions with the boundary (see Definition A.1). We also extend the result of [16], in terms of pairs (x, v) leading to infinitely many rebounds in finite time, to the whole domain Ω (Proposition A.4). To do so we link up the study on the boundary made in [16] with the Lebesgue measure on Ω by artificially creating volume on ∂Ω thanks to time and a foliation of the domain by parallel trajectories.

Figure 4. Backward trajectory that reaches an end
The section is divided as follows. First of all we shall describe and classify the collisions with the boundary in order to describe very accurately the backward trajectories of a point (x, v) in ∂Ω×R d . We will name trajectory or characteristic any solution (X(t, x, v), V (t, x, v)) satisfying the initial condition (X(0, x, v), V (0, x, v)) = (x, v), the boundary condition (2.7) and satisfying, in Ω,
dX/dt = V, dV/dt = 0.
Now that we have our partition of points on the boundary of Ω, we are able to generate the backward trajectory associated to a starting point (x, v) in Ω × R^d. The first step towards its resolution is to find the first point of real collision (if it exists) that generates (x, v) (see Fig. 3). The next proposition-definition proves mathematically what the figure shows.
Proposition A.3. Let Ω be an open, bounded and C^1 domain in R^d. Let (x, v) be in Ω × R^d; then we can define
t_min(x, v) = max{ t ≥ 0 : x − vs ∈ Ω, ∀ 0 ≤ s ≤ t }.
Moreover we have the following properties:
(1) if there exists t in (0, t_min(x,v)) such that x − vt hits ∂Ω, then (x − vt, v) belongs to Ω_rolling;
(2) t_min(x,v) = 0 if and only if (x, v) belongs to Ω_stop ∪ Ω_rebounds;
(3) (x − v t_min(x,v), v) belongs to Ω_stop ∪ Ω_rebounds.
Property (1) emphasises the fact that if, on the straight line between x and x − v t_min(x,v), the particle hits the boundary, it will not be reflected and so just rolls on. Then property (2) tells us that t_min(x,v) is always strictly positive except if (x, v) does not come from any trajectory of a particle or if it is the outcome of a rebound without rolling. Finally, property (3) finishes the study since at x − v t_min(x,v) the particle either comes from a reflection (case Ω_rebounds), and we can keep tracking backwards, or started its trajectory at x − v t_min(x,v) (case Ω_stop).
Proof of Proposition A.3. First of all, we have that Ω is bounded and so there exists R such that Ω ⊂ B(0, R), the ball of radius R in R^d.
Then we notice that 0 belongs to A(x, v) = { t ≥ 0 : x − vs ∈ Ω, ∀ 0 ≤ s ≤ t }.
Therefore A(x, v) is not empty. Moreover, this set is bounded above by 2R/‖v‖ since for all t in A(x, v), R > ‖x − vt‖ ≥ t‖v‖ − ‖x‖ ≥ t‖v‖ − R.
Therefore we can talk about the supremum t_min(x,v) of A(x,v). Let (t_n)_{n∈N} be an increasing sequence in A(x,v) that tends to t_min(x,v). As Ω is closed, we have that x − v t_min(x,v) belongs to Ω. Then, if 0 ≤ s < t_min(x,v), there exists n such that 0 ≤ s ≤ t_n and so, by the property of t_n, x − vs is in Ω. This proves that t_min(x,v) belongs to A(x,v) and so is a maximum.
We now turn to the proof of the properties. Let (x, v) be in Ω × R^d and 0 < t < t_min(x,v) such that x − vt belongs to ∂Ω. Then for all 0 < t_1 < t < t_2 < t_min(x,v), x − vt_1 and x − vt_2 are in Ω and so, by the definition of an exterior normal to a surface, we have
[(x − vt) − (x − vt_1)] · n(x − vt) ≥ 0 and [(x − vt) − (x − vt_2)] · n(x − vt) ≥ 0, which gives v · n(x − vt) = 0.
Moreover, since t_2 belongs to A(x,v), for all s in [0, t_2 − t], (x − vt) − vs is in Ω, which means that (x − vt, v) belongs to Ω_rolling. Property (2) is direct since if t_min(x,v) = 0 then for all t > 0 there exists 0 < s ≤ t such that x − vs does not belong to Ω, and then v · n(x) ≥ 0. So (x, v) belongs to Ω_rebounds, if v · n(x) > 0, or to Ω_stop.
Finally, property (3) is straightforward since x − v t_min(x,v) is in ∂Ω (because Ω is open) and since for all 0 ≤ t ≤ t_min(x,v), x − vt is in Ω. Thus [(x − v t_min) − (x − vt)] · n(x − v t_min(x,v)) ≤ 0, which yields v · n(x − v t_min(x,v)) ≥ 0. Then, by the definition of A(x,v) and the fact that t_min(x,v) is its maximum, we have that (x − v t_min(x,v), v) belongs either to Ω_rebounds or to Ω_stop.
Up to now we have focused solely on the case of the first possible collision with the boundary. In order to conclude the study of rebounds for any given characteristic, we have to, in some sense, count the number of rebounds without rolling that can happen in finite time. This is the purpose of the next proposition. Then for all t ≥ 0 the trajectory finishing at (x, v) after a time t has at most a countable number of rebounds without rolling.
Moreover, this number is finite almost surely with respect to the Lebesgue measure on Ω × R^d.

Proof of Proposition A.4. The fact that there are countably many rebounds without rolling comes directly from the fact that t_min(x,v) > 0 except if (x, v) is a starting/stopping point (and then it did not move from 0 to t) or if (x, v) is the outcome of a rebound (and so comes from (x, R_x(v)), which belongs to Ω_line, implying that t_min(x, R_x(v)) > 0).

Now we shall prove that the set of points in Ω × R^d which lead to an infinite number of rebounds in a finite time is of measure 0. To do so, we first need some definitions. The measure µ on Ω × R^d is the one induced by the Lebesgue measure, and we denote by λ the measure on ∂Ω × R^d (see Section 1.7 of [16]). We will also denote
Ω_∞ = { (x, v) ∈ Ω × (R^d − {0}) coming from an infinite number of rebounds },
Ω_∂ = { (x, v) ∈ ∂Ω × (R^d − {0}) coming from an infinite number of rebounds }.
We know ( [16] Lemma 1.7.1) that λ(Ω ∂ ) = 0 and we are going to establish a link between the measure of Ω and the one of Ω ∂ . Those two sets do not live in the same topology nor same dimension and so we build a function that artificially recreates them via time.
Because Ω is bounded we can find time T M > 0 such that for all x in Ω and v in R d -{0}, (x -T M v/ v ) does not belong to Ω. Furthermore, in the same way as for t min (x, v), we can define, for (x, v) in Ω × R d , T (x, v) = min{t > 0 : x + vt ∈ ∂Ω} if (x, v) ∈ Ω ∪ Ω rebounds 0 otherwise .
• Step 1: initialisation: we define
x 0 (x, v) = x, v 0 (x, v) = v, t 0 (x, v) = 0.
• Step 2: induction: if (x k (x, v), v k (x, v)) ∈ Ω stop then we define
x k+1 (x, v) = x k (x, v), v k+1 (x, v) = v k (x, v), t k+1 (x, v) = +∞, if (x k (x, v), v k (x, v)) /
∈ Ω stop then we define
x k+1 (x, v) = x k (x, v) -v k (x, v)t min (x k (x, v), v k (x, v)), v k+1 (x, v) = R x k+1 (x,v) (v k (x, v)), t k+1 (x, v) = t k (x, v) + t min (x k (x, v), v k (x, v)).
Remark A.5. Let us make a few comments on the accuracy of the sequence we just built.
(1) Looking at Proposition A.3, we know that at each step (apart from 0) we necessary have that (x k (x, v), v k (x, v)) belongs to either Ω stop or Ω rebounds and so the characteristic stops for ever (case 1 in induction) or bounces without rolling and start another straight line (case 2). Thus the sequence of footprints defined above captures the trajectories as long as there are rebounds and then becomes constant once the trajectory reach a stopping point. (2) If t min (x k (x, v), v k (x, v)) = 0 for some k > 0 then, by properties 2. and 3. of Proposition A.3, we must have (x k (x, v), v k (x, v)) ∈ Ω stop (since v k (x, v) is the specular reflection at x k (x, v) of v k-1 (x, v) and (x k (x, v), v k-1 (x, v)) is in Ω rebounds ∪ Ω stop ). Thus, (t k (x, v)) k∈N is strictly increasing as long as it does not reach the value +∞, where it remains constant.
Finally, it remains to connect the time variable to those quantities. In fact, the time will determine how many rebounds can lead to (x, v) in a time t. The reader must remember that the backward trajectory can lead to a point in Ω stop before time t.
Since the characteristics method helps us to find the value of the solution of the transport equation at a given point using its trajectory, the next definition links a triplet (t, x, v) to the first rebound of the trajectory that leads to (x, v) in a time t. The last rebound is then define by • if n(t, x, v) < +∞ and t n(t,x,v)+1 = +∞, then
x f in (t, x, v) = x n(t,x,v) (x, v), v f in (t, x, v) = v n(t,x,v) (x, v), t f in (t, x, v) = t,
• if n(t, x, v) < +∞ and t n(t,x,v)+1 < +∞, then
x f in (t, x, v) = x n(t,x,v) (x, v), v f in (t, x, v) = v n(t,x,v) (x, v), t f in (t, x, v) = t n(t,x,v) (x, v),
• if n(t, x, v) = +∞, then
x f in (t, x, v) = lim k→+∞ x k (x, v), v f in (t, x, v) = lim k→+∞ v k (x, v), t f in (t, x, v) = lim k→+∞ t k (x, v).
Remark A.7. Let us make a few comments on the definition above and the existence of limits.
(1) After the last rebound, occuring at t n(t,x,v) , the backward trajectory can only be a straight line during the time period tt n(t,x,v) (see Fig. 3). That is why we defined t f in (t, x, v) = t n(t,x,v) if we reached a point on Ω rebounds and t f in (t, x, v) = t if the last rebound reaches Ω stop (the trajectory can only start from there). (2) In the last case of the definition, we remind the reader that (t k (x, v)) k∈N is strictly increasing and so converges if bounded by t. But then, because ( v k (x, v) ) k∈N is constant and x k (x, v) = x k-1 (x, v)-t min (x k (x, v), v k (x, v))v k (x, v), we have that (x k (x, v)) k∈N is a Cauchy sequence. (3) The last case in Definition A.6 almost surely never happens, as proved in Proposition A.4.
To conclude this study of the characteristics we just have to make one more comment. We studied the characteristics that go backward in time because it simplifies the construction of a solution to the free transport equation. However, it is easy to prove (just requires the inductive construction of v k and x k ) that the forward trajectory of (x, v) during a period t is the backward trajectory over a period t of (x, -v). This gives the final proposition.
Proposition A.8. Let Ω be an open, bounded and C 1 domain in R d . Then for all (x, v) in Ω×R d we have existence and uniqueness of the characteristic (X t (x, v), V t (x, v)) given by, for all t 0, A.3.3. Boundary and initial conditions. First of all, u satisfies the initial condition (2.6) as n(0, x, v) = 0 (since t min (x, v) 0).
u also satifies the specular boundary condition (2.7). Indeed, if (x, v) is in Λ + , then either v • n(x) = 0 and the result is obvious since R x (v) = v, or v • n(x) > 0 and thus (x, R x (v)) belongs to Ω rebounds so t min (x, R x (v)) = 0 (Proposition A.3). An easy induction shows x k (x, v) = x k+1 (x, R x (v)), v k (x, v) = v k+1 (x, R x (v)), t k (x, v) = t k+1 (x, R x (v)), for all k in N. The last equality gives us that n(t, x, v) = n(t, x, R x (v))-1 and therefore, combined with the two other equalities,
x f in (t, x, v) = x f in (t, x, R x (v)), v f in (t, x, v) = v f in (t, x, R x (v)), t f in (t, x, v) = t f in (t, x, R x (v)),
which leads to the specular reflection boundary condition. A.3.4. Time differentiability. Here we prove that u is differentiable in time on R + . Let us fix (x, v) in Ω × R d . By construction, we know that n(t, x, v) is piecewise constant. Since (t k (x, v)) k∈N is strictly increasing up to the step where it takes the value +∞, for t k (x, v) < t < t k+1 (x, v) we have that for all s ∈ R such that t k (x, v) < t + s < t k+1 (x, v),
x f in (t, x, v) = x f in (t + s, x, v), v f in (t, x, v) = v f in (t + s, x, v), t f in (t, x, v) = t f in (t + s, x, v).
Therefore, we have that u(t + s, x, v)u(t, x, v) s
= u 0 (x f in -(t + s -t f in )v f in , v f in ) -u 0 (x f in -(t -t f in )v f in , v f in ) s → s→0 -v f in • (∇ x u 0 ) (x f in -(t -t f in )v f in , v f in ),
because u 0 is C 1 in x. So u is differentiable at t if t in strictly between two times t k (x, v). We thus find that u is differentiable at t and that its derivative is continuous (since x f in , v f in and t f in are continuous when x and v are fixed).
In the case t = t k (x, v) we can use what we just proved to show that we have the existence of right (except for t = 0) and left limits of ∂ t u(t, x, v) as t tends to t k (x, v). We use the specular reflection boundary condition of u 0 together with the fact that it is C 1 in x and that t k (x, v) = t k+1 (x, R x (v)) to obtain the equality of the two limits. A.3.5. Space differentiability and solvability of the transport equation. Here we prove that u is differentiable in x in Ω, which follows directly from the time differentiability. Let us fix t in R + and v in R d , we shall study the differentiability of u(t, •, v) in the direction of v.
Ω is open and so ∀x ∈ Ω, ∃δ > 0, ∀s ∈ [-δ, δ], x + sv ∈ Ω.
Theorem 2 . 7 .
27 Let Ω be T d or a C 2 open convex bounded domain in R d with nowhere null normal vector and f 0 be a continuous function on Ω × R d . Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6)
pγ f and upper and lower bounds on |∇ξ| and |∇ 2 ξ|on ∂Ω.
Lemma 3 . 2 .
32 Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν 0. Then for any
( 7 . 1 ) 0 b
710 m b = S d-1 b (cos θ) (1cos θ)dσ = S d-2 π (cos θ) (1cos θ)sin d-2 θ dθ. Lemma 7.1. Let g be a measurable function on R d . Then
8) inside the exponential term e -C f m b N CO εn +n b CO εn (t
Figure 4 .Figure 5
45 Figure 4. Backward trajectory rolling on the boundary
Proposition A. 4 .
4 Let Ω be a C 1 open, bounded domain in R d and let (x, v) be in Ω × R d .
Definition A. 6 .
6 Let Ω be an open, bounded andC 1 domain in R d . Let (t, x, v) be in R + × Ω × R d . Then we can define n(t, x, v) = max{k ∈ N : t k (x, v) t}, if it exists, +∞, if (t k (x, v)) k is bounded by t.
e-mail: [email protected]
, for suggesting me this problem and for all the fruitful discussions and advices he offered me. I also would like to thank Alexandre Boritchev, Amit Einav and Sara Merino for fruitful discussions. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis.
References 55 1. Introduction
This paper deals with the Boltzmann equation, which rules the behaviour of rarefied gas particles moving in a domain Ω of R d with velocities in R d (d 2) when
2 n , so that for all s in [t-δ X /(2 n+1 R), t-δ X /(2 n+2 R)], X s,t (x, v) belongs to B(x i , δ X /2 n ).
We also note that
We have two different cases to consider for (X s ,s (X s,t (x, v), v * )) s ∈[0,s-t] . Either for some s in [0, st], X s ,s (X s,t (x, v), v * ) belongs to Ω l and then we can apply Corollary 4.2:
Or for all s in [0, st] ⊂ [0, τ 2 ], X s ,s (X s,t (x, v), v * ) does not belong to Ω l and then we can apply our induction property at rank n and we reach the same lower bound (5.18). Plugging (5.18) into (5.17) implies, thanks to the spreading property of Q + , Lemma 3.2 with ξ = ξ (i) ,
To conclude we use the fact that for all s in [0, tt] we have that X s,t (x, v) does not belong to Ω l and that tt > τ 2 . Moreover, n + 1 N max (i) -1 and so if v belongs to B v i , r n (ξ (i) ) we have that v m v . We apply Proposition 5.1, raising
) we can compute explicitly (5.19) and obtain the expected induction. Proposition 7.3 follows directly from these choices plugged into the study of the cutoff case. 7.2. Proof of Theorem 2.7. Now that we proved the immediate appearance of a lower bound depending only on the norm of the velocity we can spread it up to an exponential lower bound. As in Section 6.1, we thoroughly follow the proof of Theorem 2.1 of [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF]. The proof in our case is exactly the same induction, starting from Proposition 7.3. Therefore we only briefly describe how to construct the expected exponential lower bound. For more details we refer the reader to [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], Section 4.
We start by spreading the initial lower bound (Proposition 7.3) by induction where, at each step, we use the spreading property of the Q + εn operator and fix ε n small enough to obtain a strictly positive lower bound (see (7.9)).
There is, however, a subtlety in the non-cutoff case that we have to deal with. Indeed, at each step of the induction we choose an ε n of decreasing magnitude, but at the same time in each step the action of the operator
. By (7.7) -(7.8), as ε n tends to 0 we have that n b CO εn goes to +∞ and so the action of
) seems to decrease the lower bound to 0 exponentially fast. The idea to overcome this difficulty is to find a time interval t
n = ∆ n at each step to be sufficiently small to counterbalance the effect of n b CO εn . More precisely, by starting from Proposition 7.3 as the first step of our induction, taking
) and fixing ε n by (7.9) we can prove the following induction property Proposition 7.4. Let f be the mild solution of the Boltzmann equation described in Theorem 2.7. For all τ in (0, T ) and any sequence
with the induction formulae
This will give us an explicit form for the characteristics and allow us to link u(t, x, v) with u 0 (x * , v * ), for some x * and v * . Finally, we will show that the function we constructed is, indeed, a solution to the transport equation with initial data u 0 and specular boundary condition and that such a solution is unique.
A.1. Study of rebounds on the boundary. As mentionned in the introduction of this section, when a particle reaches a point at the boundary with a velocity v it can bounce back (Fig. 3), keep moving straight (Fig. 4) or stop moving because the specular reflection does not allow it to do anything else (Fig. 5), which is physically unexpected. The next definition gives a partition of the points at the boundary which takes into account those properties.
Definition A.1. We define here a partition of ∂Ω × R d that focuses on the outcome of a collision in each of the sets.
• The set coming from a rebound without rolling
• The set coming from rolling on the boundary
• The set of only starting points
• The set coming from straight line
One has to notice that any point of Ω line indeed comes from a straight line arriving at x with direction v since Ω is open and is C 1 (so there is no cusp). In order to understand the behaviour expected at Ω stop we have the following proposition. The proof of it gives insight into the nature of specular reflections.
Proposition A.2. If we have (x, v) in Ω stop then there is no trajectory with specular boundary reflections that leads to (x, v).
Proof of Proposition A.2. Let us assume the contrary, that is to say (x, v) is in Ω stop comes from a trajectory with specular boundary reflection.
We have that (x, v) belongs to ∂Ω × R d and so if (x, v) comes from a straight line it can only be (by definition of trajectories) a line containing x with direction v which means that (x, v) comes from {(xvt, v), t ∈ [0, T ]}, for some T > 0. But the trajectory is necessarily in Ω and this is in contradiction with the definition of Ω stop .
Therefore, (x, v) must come from a rebound after a straight line trajectory. But again we obtain a contradiction because the velocity before the rebound is R x (v) = v and the backward trajectory is the one studied above. We define the following function which is clearly C 1 .
). We also define the set
and claim that F is injective on the set B. Indeed, if (t, x, v) and (t * , x * , v * ) are in B such that
Let assume that t * > t, therefore we have that
and thus t *t T (x * , v). However, t * T (x * , v) so we reach a contradiction and t * t. By symmetry we have t = t * and then x = x * . We also notice that [0, T M ] × Ω stop and [0, T M ] × Ω rolling do not intersect B.
Finally we have that
) and x + tv/ v is in Ω and its first rebound backward in time is (x, v) which lead to infinitely many rebounds in finite time. Therefore
The converse is direct, by considering the first collision with the boundary of the backward trajectory starting at (x, v) in Ω.
All those properties allow us to compute µ(Ω) by a change of variable in B ∩ Ω ∂ .
A.2. Description of characteristics. In the previous section we derived all the relevant properties of when, where and how a trajectory can bounce against the boundary of Ω. As was shown, the characteristic starting from a point (t, x, v) in
is the backward trajectory satisfying specular boundary reflections that leads to (x, v) in time t. Basically, it consists in a straight line as long as it stays inside Ω or it rolls on the boundary. Then it reaches a boundary point where it does not move any more (Ω stop ) or bounces back (Ω rebounds ). Thanks to Proposition A.4 we can generate the countable (and almost surely finite) sequence of collisions with the boundary associated to the future point (x, v). We shall construct it by induction. We consider (x, v) in Ω × R d .
Moreover, we have that V t (x, v) = O t,x,v (v) with O t,x,v an orthogonal transformation, and that for almost every (x, v) in Ω × R d we have the following
Proof of Proposition A.8. By construction we have that
It only remains to show the last equation (A.1), but it follows directly from the fact that the backward trajectory of (x, v) is the forward trajectory of (x, -v).
We can reach a point on Ω stop after a time t 1 and so the forward trajectory of that point during a time t > t 1 does not come back to the original point (since we stayed in Ω stop for a period tt 1 ).
However, the set of points that reach Ω stop belongs to the set of points that bounce infinitely many times in a finite time and this set is of measure zero (see Proposition A.4). x ∩ L 2 x,v comes directly from the fact that we have a preserved quantity through time, thanks to the specular reflection property. Indeed, let us assume that u is a solution to our free transport equation satisfying specular boundary condition and the initial value problem u 0 . Then, a mere integration by part gives us
x,v , which directly implies the uniqueness of a solution, since the transport equation (2.5) is linear. A.3.2. Construction of the solution. It remains to construct a function u that will be constant on the characteristic trajectories and check that we indeed obtain a function that is differentiable in t and x which satisfies the transport equation. The first point of Remark A. [START_REF] Desvillettes | On the trend to global equilibrium for spatially inhomogeneous kinetic systems: the Boltzmann equation[END_REF] gives us the answer as we expect the following behaviour u(t, x, v) = u(tt 1 (x, v), x 1 (x, v), v 1 (x, v)) = • • • = u(tt k (x, v), x k (x, v), v k (x, v)), up to the point where there are no more rebound in the time interval [0, t]. From there we continue in a straight line.
Thus, we define: ∀(t, x, v) ∈ R + × Ω × R d , u(t, x, v) = u 0 (x f in (t, x, v) -(tt f in (t, x, v))v f in (t, x, v), v f in (t, x, v)) .
Thanks to the inductive construction, one find easily that u(t, x + sv, v) = u(ts, x, v).
Therefore, since u is time differentiable, we have that u(t, •, v) admits a directional derivative in the direction of v and that D x (v)(u)(t, x, v) = -∂ t u(t, x, v). | 126,994 | [
"739558"
] | [
"236275"
] |
01492026 | en | [
"math"
] | 2024/03/04 23:41:50 | 2016 | https://hal.science/hal-01492026/file/homogeneousBN_final.pdf | Marc Briant
email: [email protected]
Amit Einav
email: [email protected]
ON THE CAUCHY PROBLEM FOR THE HOMOGENEOUS BOLTZMANN-NORDHEIM EQUATION FOR BOSONS: LOCAL EXISTENCE, UNIQUENESS AND CREATION OF MOMENTS
Keywords: Boltzmann-Nordheim equation, Kinetic model for bosons, Bose-Einstein condensattion, Subcritical solutions, Local Cauchy Problem
published or not. The documents may come On the Cauchy Problem for the Homogeneous Boltzmann-Nordheim Equation for Bosons: Local Existence, Uniqueness and Creation of Moments
The collision kernel B contains all the information about the interaction between two particles and is determined by physics. We mention, at this point, that one can derive this type of equations from Newtonian mechanics (coupled with quantum effects in the case of the Boltzmann-Nordheim equation), at least formally (see [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF] or [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] for the classical case and [START_REF] Nordheim | On the kinetic method in the new statistics and its application in the electron theory of conductivity[END_REF] or [START_REF] Chapman | An account of the kinetic theory of viscosity, thermal conduction and diffusion in gases[END_REF] for the quantum case). However, while the validity of the Boltzmann equation from Newtonian laws is known for short times (Landford's theorem, see [START_REF] Lanford | Time evolution of large classical systems[END_REF] or more recently [START_REF] Gallagher | From Newton to Boltzmann: hard spheres and short-range potentials[END_REF][START_REF] Pulvirenti | On the validity of the Boltzmann equation for short range potentials[END_REF]), we do not have, at the moment, the same kind of proof for the Boltzmann-Nordheim equation.
1.1. The problem and its motivations. Throughout this paper we will assume that the collision kernel B can be written as
B(v, v * , θ) = Φ (|v -v * |) b (cos θ) ,
which covers a wide range of physical situations (see for instance [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] Chapter 1).
Moreover, we will consider only kernels with hard potentials, that is
(1.1) Φ(z) = C Φ z γ , γ ∈ [0, 1],
where C Φ > 0 is a given constant. Of special note is the case γ = 0 which is usually known as Maxwellian potentials. We will assume that the angular kernel b • cos is positive and continuous on (0, π), and that it satisfies a strong form of Grad's angular cut-off:
(1.2) b ∞ = b L ∞ [-1,1]
< ∞
The latter property implies the usual Grad's cut-off [START_REF] Grad | Principles of the kinetic theory of gases[END_REF]:
(1.3) l b = S d-1 b (cos θ) dσ = S d-2 π 0 b (cos θ) sin d-2 θ dθ < ∞.
Such requirements are satisfied by many physically relevant cases. The hard spheres case (b = γ = 1) is a prime example.
With the above assumption we can rewrite the Boltzmann-Nordheim equation for bosonic gas as (1.4)
∂ t f = C Φ R d ×S d-1 |v -v * | γ b (cos θ) [f ′ f ′ * (1 + f + f * ) -f f * (1 + f ′ + f ′ * )] dv * dσ.
and break it into obvious gain and loss terms
∂ t f = Q + (f ) -f Q -(f )
where
Q + (f ) = C Φ R d ×S d-1 |v -v * | γ b (cos θ) f ′ f ′ * (1 + f + f * ) dv * dσ, (1.5) Q -(f ) = C Φ R d ×S d-1 |v -v * | γ b (cos θ) f * (1 + f ′ + f ′ * ) dv * dσ. (1.6)
The goal of this work is to show local in time existence and uniqueness of solutions to the Boltzmann-Nordheim equation for bosonic gas. The main difficulty with the problem is the possible appearance of a Bose-Einstein condensation, i.e. a concentration of mass in the mean velocity, in finite time. In mathematical terms, this can be seen as the appearance of a Dirac function in the solution of the equation (1.4), noticeable by a blow-up in finite time.
Such concentration is physically expected, based on various experiments and numerical simulations, as long as the temperature T of the gas is below a critical temperature T c (M 0 ) which depends on the mass M 0 of the bosonic gas. We refer the interested reader to [START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF] for an overview of these results.
1.2.
A priori expectations for the creation of a Bose-Einstein condensation. In this subsection, we explore some properties of the Boltzmann-Nordheim equation bosonic gas in order to motivate why a concentration phenomenon is expected. We emphasize that everything is stated a priori and should not be considered a rigorous proof.
We start by noticing the symmetry property of the Boltzmann-Nordheim operator.
Lemma 1.1. Let f be such that Q(f ) is well-defined. Then for all Ψ(v) we have
R d Q(f )Ψ dv = C Φ 2 R d ×R d ×S d-1 q(f )(v, v * ) [Ψ ′ * + Ψ ′ -Ψ * -Ψ] dσdvdv * , with q(f )(v, v * ) = |v -v * | γ b (cos θ) f f * (1 + f ′ + f ′ * ) .
This result is well-known for the Boltzmann equation and is a simple manipulation of the integrand using changes of variables (v, v * ) → (v * , v) and (v, v * ) → (v ′ , v ′ * ), as well as using the symmetries of the operator q(f ). A straightforward consequence of the above is the a priori conservation of mass, momentum and energy for a solution f of (1.4) associated to an initial data f 0 . That is (1.7)
R d 1 v |v| 2 f (v) dv = R d 1 v |v| 2 f 0 (v) dv = M 0 u M 2 .
The entropy associated to (1.4) is the following functional
S(f ) = R d [(1 + f )log(1 + f ) -f log(f )] dv
which is, a priori, always increasing in time. It has been proved in [START_REF] Huang | Statistical mechanics[END_REF] that for given mass M 0 , momentum u and energy M 2 , there exists a unique maximizer of S with these prescribed values which is of the form
(1.8) F BE (v) = m 0 δ(v -u) + 1 e β 2 (|v-u| 2 -µ) -1
, where • m 0 0.
• β ∈ (0, +∞] is the inverse of the equilibrium temperature.
• -∞ < µ 0 is the chemical potential.
• µ • m 0 = 0. This suggests that for a given initial data f 0 , the solution of the Boltzmann-Nordheim equation (1.4) should converge, in some sense, to a function of the form F BE with constants that are associated to the physical quantities of f 0 . Hence, we can expect the appearance of a Dirac function at u if m 0 = 0.
One can show (see [START_REF] Lu | A modified Boltzmann equation for Bose-Einstein particles: isotropic solutions and long-time behavior[END_REF] or [START_REF] Escobedo | On the blow up and condensation of supercritical solutions of the Nordheim equation for bosons[END_REF]) that for a given (M 0 , u, M 2 ) we have that if d = 3, m 0 = 0 if and only if where ζ denotes the Riemann Zeta function. Equivalent formulas can be obtained in a similar way for any higher dimension.
According to [START_REF] Chapman | An account of the kinetic theory of viscosity, thermal conduction and diffusion in gases[END_REF] Chapter 2, the kinetic temperature of a bosonic gas is given by
T = m 3k B M 2 M 0 ,
where k B is the physical Boltzmann constant. This implies, using (1.9), that m 0 = 0 if and only if T T c (M 0 ) where
T c (M 0 ) = mζ(5/2) 2πk B ζ(3/2) M 0 ζ(3/2) 2/3
. Initial data satisfying (1.9) is called subcritical (or critical in case of equality).
From the above discussion, we expect that for low temperatures, T < T c (M 0 ), our solution to the Boltzmann-Nerdheim equation will split into a regular part and a highly concentrated part around u as it approaches its equilibrium F BE . In [START_REF] Spohn | Kinetics of the Bose-Einstein condensation[END_REF], Spohn used this idea of a splitting to derive a physical quantitative study of the Bose-Einstein condensation and its interactions with normal fluid, in the case of radially symmetric (isotropic) solutions. 1.3. Previous studies. The issue of existence and uniqueness for the homogeneous bosonic Boltzmann-Nordheim equation has been studied recently in the setting of hard potentials with angular cut-off, especially by X. Lu [START_REF] Lu | A modified Boltzmann equation for Bose-Einstein particles: isotropic solutions and long-time behavior[END_REF][START_REF] Lu | On isotropic distributional solutions to the Boltzmann equation for Bose-Einstein particles[END_REF][START_REF] Lu | The Boltzmann equation for Bose-Einstein particles: velocity concentration and convergence to equilibrium[END_REF][START_REF] Lu | The Boltzmann equation for Bose-Einstein particles: regularity and condensation[END_REF], and M. Escobedo and J. J. L. Velázquez [START_REF] Escobedo | On the blow up and condensation of supercritical solutions of the Nordheim equation for bosons[END_REF][START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF]. It is important to note, however, that these developments have been made under the isotropic setting assumption. We present a short review of what have been done in these works.
In his papers [START_REF] Lu | A modified Boltzmann equation for Bose-Einstein particles: isotropic solutions and long-time behavior[END_REF] and [START_REF] Lu | On isotropic distributional solutions to the Boltzmann equation for Bose-Einstein particles[END_REF], X. Lu managed to develop a global-in-time Cauchy theory for isotropic initial data with bounded mass and energy, and extended the concept of solutions for isotropic distributions. Under these assumptions, Lu proved existence and uniqueness of radially symmetric solutions that preserve mass and energy. Moreover, he showed that if the initial data has a bounded moment of order s > 2, then this property will propagate with the equation. Additionally, Lu showed moment production for all isotropic initial data in L 1 2 . More recently, M. Escobedo and J. J. L. Velázquez used an idea developed by Carleman for the Boltzmann equation [START_REF] Carleman | Sur la théorie de l'équation intégrodifférentielle de Boltzmann[END_REF] in order to obtain uniqueness and existence locally in time for radially symmetric solutions in the space L ∞ (1 + |v| 6+0 ) (see [START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF]). As a condensation effect can occur, we can't expect more than local-in-time results in L ∞ spaces in the general setting.
The issue of the creation of a Bose-Einstein condensation has been extensively studied experimentally and numerically in physics [START_REF] Semikov | Kinetics Bose condensation[END_REF] [START_REF] Semikov | Condensation of Bosons in the kinetic regime[END_REF][15] [START_REF] Lacaze | Dynamical formation of a boseeinstein condensate[END_REF]. Mathematically, a formal derivation of some properties of this condensation, as well as its interactions with the regular part of the solution, has been studied in [START_REF] Spohn | Kinetics of the Bose-Einstein condensation[END_REF] in the isotropic framework. In the series of papers, [START_REF] Lu | A modified Boltzmann equation for Bose-Einstein particles: isotropic solutions and long-time behavior[END_REF][START_REF] Lu | On isotropic distributional solutions to the Boltzmann equation for Bose-Einstein particles[END_REF][START_REF] Lu | The Boltzmann equation for Bose-Einstein particles: velocity concentration and convergence to equilibrium[END_REF], X. Lu managed to show a condensation phenomenon, under appropriate initial data and in the isotropic setting, as the time goes to infinity. He has shown that at the low temperature case, the isotropic solutions to ((1.4)) converge to the regular part of F BE , which has a smaller mass than the initial data. This loss of mass is attributed to the creation of a singular part in the limit, i.e. the desired condensation. It is interesting to notice, as was mentioned in [START_REF] Lu | The Boltzmann equation for Bose-Einstein particles: velocity concentration and convergence to equilibrium[END_REF], that this argument does not require the solution to be isotropic and that this created condensation neither proves, or disproves, creation of a Bose-Einstein condensation in finite time.
In a recent breakthroughs, [START_REF] Escobedo | On the blow up and condensation of supercritical solutions of the Nordheim equation for bosons[END_REF][START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF], the appearance of Bose-Einstein condensation in finite time has finally been shown. In [START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF] the authors showed that if the initial data is isotropic in L ∞ (1 + |v| 6+0 ) with some particular conditions for its distribution of mass near |v| 2 = 0, then the associated isotropic solution exists only in finite time, and its L ∞ -norm blows up. This was done by a thorough study of the concentration phenomenon occurring in a bosonic gas. In [START_REF] Escobedo | On the blow up and condensation of supercritical solutions of the Nordheim equation for bosons[END_REF], the authors showed that supercritical initial data indeed satisfy the blow-up assumptions in the case of the isotropic setting.
More precisely, in [START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF] the authors showed that there exist R blowup , Γ blowup > 0 such that if the isotropic initial satisfies
|v| R blowup f 0 (|v| 2 ) dv Γ blowup ,
measuring concentration around |v| = 0, then there will be a blow-up in the L ∞norm in finite time. This should be compared with the very recent proof of Lu [START_REF] Lu | The Boltzmann equation for Bose-Einstein particles: regularity and condensation[END_REF] showing global existence of solutions in the isotropic setting when
R d f 0 (|v| 2 ) |v| dv Γ global
for a known Γ global > 0 and d = 3. The above gives us a measure of lack of concentration near the origin at t = 0. At this point we would like to mention that the problem of finite time condensation, intimately connected to the Boltzmann-Nordheim equations for bosons, is far from being fully resolved, and the aforementioned results by Lu, Escobedo and Velázquez are a paramount beginning of the investigation of this problem.
1.4. Our goals and strategy. The a priori conservation of mass, momentum and energy seems to suggest that a natural space to tackle the Cauchy problem is L 1 2 , the space of positive functions with bounded mass and energy. While this is indeed the right space for the regular homogeneous Boltzmann equation (see [START_REF] Lu | Conservation of energy, entropy identity, and local stability for the spatially homogeneous Boltzmann equation[END_REF][START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF]), the possibility of sharp concentration implies that the L ∞ -norm is an important part of the mix as it can measure the condensation blow up. Additionally, one can see that for short times, when no condensation is created, the boundedness of the L ∞ -norm implies a strong connection between the trilinear gain term in the homogeneous Boltzmann-Nordheim equation and the quadratic gain term in the homogeneous Boltzmann equation. Thus, it seems that the right space to look at, when one investigates the Boltzmann-Nordheim equation, is in fact L 1 2 ∩ L ∞ , or the intersection of L 1 2 with some weighted L ∞ space.
The main goal of the present work is to prove that the above intuition is valid by showing a local-in-time existence and uniqueness result for the Boltzmann-Nordheim equation when initial data in L 1 2 ∩ L ∞ (1 + |v| s ) for a suitable s, without any isotropic assumption. One of the main novelty of the present paper is the highlighting of the role played by the L ∞ -norm not only on the control of possible blow-ups, but also on the gain of regularity of the solutions. This L ∞ investigation is an adaptation of the work of Arkeryd [START_REF] Arkeryd | L estimates for the space-homogeneous boltzmann equation[END_REF] for the classical Boltzmann operator. A core difference between Arkeryd's work and ours lies in the control of the loss term, Q -, which can no longer be controlled above zero using the entropy, as well as more complexities arising from dealing with a trilinear term.
We tackle the issue of the existence of solutions with an explicit Euler scheme for a family of truncated Boltzmann-Nordheim operators, a natural approach when one wants to propagate boundedness. The sequence of functions we obtain is then shown to converge to a solution of (1.4). The key ingredients we use are a new control on the gain term, Q + , for large and small relative velocities v -v * , estimations of 'gain of regularitiy at infinity' due to having the initial data in L ∞ (1 + |v| s ), and a refinement and an extension to higher dimensions of a Povzner-type inequality for the evolution of convex and concave functions under a collision.
The issue of uniqueness is being dealt by an adaptation of the strategy developed by Mischler and Wennberg in [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF] for the homogeneous Boltzmann equation. The main difficulty in this case is the control of terms of the form |v -v * | 2+γ that appear when one studies the evolution of the energy of solutions. Besides our local theorems, we also show the appearance of moments of all orders to the solution of (1.4). As can seen from the above discussion, as well as the proofs to follow, we treat the Cauchy theory, and the creation of moments, for the Boltzmann-Nordheim equation as an 'extension' of known results and methods for the Boltzmann equation -though the technicalities involved are far from trivial. 1.5. Organisation of the article. Section 2 is dedicated to the statements and the descriptions of the main results proved in this paper.
In Section 3 we derive some key properties of the gain and loss operators Q + and Q -, and show several a priori estimates on solutions to (1.4). We end up by proving a gain of regularity at infinity for solutions to the homogeneous Boltzmann-Nordheim equation.
As moments of solutions to (1.4) are central in the proof of uniqueness, Section 4 is dedicated to their investigation. We show an extension of a Povzner-type inequality and use it to prove the instantaneous appearance of bounded moments of all order. Lastly, we quantify the blow-up near t = 0 for the moment of order 2 + γ.
In Section 5 we show the uniqueness of bounded solutions that preserve mass and energy and then we turn our attention to the proof of local-in-time existence of such bounded, mass and energy preserving solutions in Section 6.
Main results
We begin by introducing a few notation that will be used throughout this work. As we will be considering spaces in the variables v and t separately at times, we will index by v or t the spaces we are working on. The subscript v will always refer to
R d . For instance L 1 v refers to L 1 (R d ) and L ∞ [0,T ],v refers to L ∞ ([0, T ] × R d ).
We define the following spaces, when p ∈ {1, ∞} and s ∈ N:
L p s,v = f ∈ L p v , ( 1
+ |v| s )f L p v <
+∞ Lastly, we denote the moment of order α, where α 0, of a function f of t and v by (2.1)
M α (t) = R d |v| α f (t, v) dv.
Note that when f 0 the case α = 0 corresponds to the mass of f while the case α = 2 corresponds to its energy.
The main result of the work presented here is summed up in the next theorems:
Theorem 2.1. Let f 0 0 be in L 1 2,v ∩ L ∞ s,v when d 3 and d -1 < s. Then if a non-negative solution to the Boltzmann-Nordheim equation on [0, T 0 ) × R d , f ∈ L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v
, that preserves mass and energy exists it must be unique. Moreover, this solution satisfies
• For any 0 s ′ < s, where s = min s , d 1+γ s -d + 1 + γ + 2(1+γ) d , we have that f ∈ L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ s ′ ,v , • if γ > 0 then for all α > 0 and for all 0 < T < T 0 , M α (t) ∈ L ∞ loc ([T, T 0 )) . Theorem 2.2. Let f 0 0 be in L 1 2,v ∩ L ∞ s,v when d 3 and d -1 < s. Then, if (i) γ = 0 and s > d, or (ii) 0 < γ 1 and s > d + 2 + γ, there exists T 0 > 0 depending on d, s, C Φ , b ∞ , l b , γ, f 0 L 1 2,v
and f 0 L ∞ s,v , such that there exists a non-negative solution to the Boltzmann-Nordheim equation on
[0, T 0 ) × R d , f ∈ L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v
, that preserves mass and energy. Moreover T 0 = +∞ or lim sup
T →T - 0 f L ∞ [0,T ]×R d = +∞.
Remark 2.3. We mention a few remarks in regards to the above theorem:
(1) It is easy to show that s = s if d = 3 or s d.
(2) The difference between conditions (i) and (ii) in Theorem 2.2 arises from the explicit Euler scheme we employ. To show the existence, we start by solving an appropriate truncated equation. However, any 'regularity at infinity' that may be gained due to the term |v -v * | γ for γ > 0 is lost due to this truncation. Thus, an additional assumption on the weighted L ∞ norm is required. Note that the case d = 3, γ = 1 gives the same condition as that of [START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF]. (3) Of great importance is the observation that the above theorems identifies an appropriate norm in the general non-isotropic setting, the L ∞ norm, under which a study of the appearance of a blow up in finite time is possiblegiving rise to a proof of local existence and uniqueness. We would like to mention that this blow up may not be the Bose-Einstein condensation itself and additional assumptions, such as the ones presented in [10] [START_REF] Escobedo | On the blow up and condensation of supercritical solutions of the Nordheim equation for bosons[END_REF], may be needed to fully characterise the condensation phenomena. (4) Much like the classical Boltzmann equation, higher order moments are created immediately, but unlike it, M α (t) are only locally bounded. We also emphasize here that this creation of moments only requires f 0 to be in L 1 2,v ∩ L ∞ v as we shall see in Section 4.
(5) Lastly, let us mention that our proofs still hold in d = 2, but only in the special case γ = 0. This is due to the use of the Carleman representation for Q + .
3.
A priori estimate: control of the regularity by the L ∞ v -norm This section is dedicated to proving an a priori estimate in the L ∞ v space for solutions to (1.4), locally in time. As was mentioned before, we cannot expect more than this as we know from [START_REF] Escobedo | Finite time blow-up for the bosonic Nordheim equation[END_REF] that even for radially symmetric solutions there are solutions with a blow-up in finite time.
Many results in this section are an appropriate adaptation of the work of Arkeryd [START_REF] Arkeryd | L estimates for the space-homogeneous boltzmann equation[END_REF]. Nonetheless, we include full proofs to our main claims for the sake of completion.
The main theorem of the section, presented shortly, identifies the importance of the L ∞ v requirement as an indicator for blow-ups. Indeed, as we shall see, the boundedness of the solution, along with appropriate initial conditions, immediately implies higher regularity at infinity.
Theorem 3.1. Let f 0 0 in L 1 2,v ∩ L ∞ s,v when d 3 and d -1 < s. Let f be a non-negative solution of (1.4) in L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v ,
with initial value f 0 , satisfying the conservation of mass and energy. Define
(3.1) s = min s ; d 1 + γ s -d + 1 + γ + 2(1 + γ) d .
Then for all 0 T < T 0 and all s ′ < s there exists an explicit C T > 0 such that following holds
∀t ∈ [0, T ], f (t, •) L ∞ s ′ ,v C T .
The constant C T depends only on T , d, the collision kernel,
f L ∞ [0,T ],v , f 0 L 1 2,v ∩L ∞ s,v , s and s ′ .
The entire section is devoted to the proof of this result. We start by stating a technical lemma that will be used throughout the entire section, whose proof we leave to the Appendix.
Lemma 3.2. Let s 1 , s 2 0 be such that s 2 -s 1 < d and let f ∈ L 1 s 1 ,v ∩ L ∞ s 2 ,v . Then, for any 0 α < d we have R d f (v * ) |v -v * | -α dv * C d,α f L 1 s 1 ,v + f L ∞ s 2 ,v (1 + |v|) -b where b = min α, s 1 + α(s 2 -s 1 ) d
and C d,α > 0 depends only on d and α.
3.1. Key properties of the gain and loss operators. In this subsection we gather and prove some useful properties of the gain and loss operators Q -and Q + that will be used in what is to follow. First, we have the following control on the loss operator.
Lemma 3.3. Let f 0 be in L 1 2,v . Then (3.2) ∀v ∈ R d , Q -(f )(v) C Φ l b (1 + |v| γ ) f L 1 v -C Φ C γ l b f L 1 2,v
, where
(3.3) C γ = sup x 0 1 + x γ 1 + x 2 .
Proof of Lemma 3.3. Using the fact that for any x, y > 0 and 0 γ 1 we have
|x| γ -|y| γ |x -y| γ we find that for any v ∈ R d Q -(f )(v) C Φ R d ×S d-1 [(1 + |v| γ ) -(1 + |v * | γ )] b (cos θ) f * dv * dσ C Φ l b (1 + |v| γ ) f L 1 v -C Φ C γ l b f L 1 2,v
.
Remark 3.4. Had we had a uniform in time control over the entropy, R d f log dv, we would have been able to find a strictly positive lower bound for the loss operator, much like in the case of the Boltzmann equation. However, for the Boltzmann-Nordheim equation the appropriate decreasing entropy is given by
R d ((1 + f ) log(1 + f ) -f log f ) dv,
which is not as helpful.
An essential tool in the investigation of the L ∞ properties of solutions to the Boltzmann equation is the so-called Carleman representation. This representation of the gain operator has been introduced by Carleman in [START_REF] Carleman | Problèmes mathématiques dans la théorie cinétique des gaz[END_REF] and consisted of changing the integration variables in the expression for it from dv * dσ to dv ′ dv ′ * on R d and appropriate hyperplanes. As shown in [START_REF] Gamba | Upper Maxwellian bounds for the spatially homogeneous Boltzmann equation[END_REF], the representation reads as: (3.4)
R d ×S d-1 Lemma 3.5. Let f 0 be in L 1 2,v ∩ L ∞ v . If γ ∈ [0, d -2], then Q + (f ) L ∞ v C + 1 + 2 f L ∞ v sup v,v ′ ∈R d E vv ′ f ′ * dE(v ′ * ) R d f ′ |v -v ′ | d-1-γ dv ′ ,
where
C + = 2 d-1 C Φ b ∞ .
Remark 3.6. Note that the requirement of having γ in [0, d-2] prevents our method from working in d = 2 unless γ = 0.
Lemma 3.7. Let f 0 be in L 1 v ∩ L ∞ v .
For any given v ∈ R d we have that almost everywhere in the direction of v -v ′ (3.5)
E vv ′ Q + (f )(v ′ * ) dE(v ′ * ) C +E 1 + 2 f L ∞ v f L 1 v sup v 1 ∈R d R d f (v) |v -v 1 | 1-γ dv ,
where
C +E > 0 depends only on d, C Φ and b ∞ . Proof of Lemma 3.7. Denote by ϕ n (v) = n 2π 1 2 e - nD ( v,E vv ′ ) 2 2
, where D (v, A) is the distance of v from the set A.
Using the standard change of variables (v, v * , σ) → (v ′ , v ′ * , σ) we find that
R d ϕ n (v)Q + (f )(v) dv C Φ b ∞ 1 + 2 f L ∞ v R d ×R d ×S d-1 ϕ ′ n |v -v * | γ f f * dvdv * dσ.
We have that
S d-1 ϕ n (v ′ ) dσ = 2 d-1 |v -v * | d-1 Svv * ϕ n (x) ds(x)
where ds is the uniform measure on S vv * which is the sphere of radius |v -
v * | /2 centred at (v + v * )/2.
R d ϕ n (v)Q + (f )(v) dv C Φ S d-2 b ∞ 1 + 2 f L ∞ v R d ×R d f f * |v -v * | 1-γ dvdv * dσ.
Using the fact that ϕ n converge to the delta function of E vv ′ we conclude that
E vv ′ Q + (f )(v ′ * ) dE(v ′ * ) = lim n→+∞ R d ϕ n (v)Q + (f )(v) dv S d-2 C Φ b ∞ 1 + 2 f L ∞ v R d ×R d f f * |v -v * | 1-γ dvdv * S d-2 C Φ b ∞ 1 + 2 f L ∞ v f L 1 v sup v 1 ∈R d R d f |v -v 1 | 1-γ dv,
which is the desired result.
ψ a (v) = 0 |v| < |a| 1 |v| |a| . If f ∈ L 1 s,v ∩ L ∞ v when s d d-1 then for almost every hyperplane E vv ′ E vv ′ ψ a (v ′ * )Q + (f )(v ′ * ) dE(v ′ * ) C Φ C d,γ b ∞ f L 1 s,v + f L ∞ v 3 (1 + |a|) -s+γ-1
where C d,γ > 0 is a constant depending only on d and γ.
Proof of Lemma 3.8. The proof follows the same lines of the proof of Lemma 3.7.
We define ϕ n to be the approximation of the delta function on the appropriate hyperplane. Then
R d ϕ n (v)ψ(v)Q + (f )(v) dv C Φ b ∞ 1 + 2 f L ∞ v × R d ×R d ×S d-1 ϕ n (v ′ )ψ(v ′ )f (v)f (v * ) |v -v * | γ dvdv * dσ Since |v| |a| /2 and |v * | |a| /2 implies ψ(v ′ ) = 0 (as |v ′ | |v| + |v * |)
we conclude that the above is bounded by
C Φ b ∞ 1 + 2 f L ∞ v {|v| |a| 2 ∨|v * | |a| 2 }×S d-1 ϕ n (v ′ )f (v)f (v * ) |v -v * | γ dvdv * dσ C Φ S d-2 b ∞ 1 + 2 f L ∞ v {|v| |a| 2 ∨|v * | |a| 2 } f (v)f (v * ) |v -v * | γ-1 dvdv * dσ C Φ S d-2 b ∞ 1 + 2 f L ∞ v |v|> |a| 2 f (v) dv sup |v|> |a| 2 R d f (v * ) |v -v * | γ-1 dv * C Φ C d,γ b ∞ 1 + 2 f L ∞ v f L 1 s,v (1 + |a|) s f L 1 s,v + f L ∞ v (1 + |v|) b for b = min 1 -γ ; s 1 - 1 -γ d
where we have used Lemma 3.2. The result follows from taking n to infinity as
s d/(d -1) implies max 0 γ 1 1 -γ 1 -1-γ d s.
3.2.
A priori properties of solutions of (1.4). The first step towards the proof of Theorem 3.1 is to obtain some a priori estimates on f when f is a bounded solution of the Boltzmann-Nordheim equation. We first derive an estimation of the growth of the moments of f when f 0 has moments higher than 2. Proposition 3.9. Assume that f is a solution to the Boltzmann-Nordheim equation with initial conditions f 0 ∈ L 1 s,v for s > 2. Then, for any T < T 0 we have that
f (t, •) L 1 s,v e 2C Φ Csb∞ 1+2 sup t∈(0,T ] f L ∞ v f 0 L 1 2,v t f 0 L 1 s,v .
Proof of Proposition 3.9. We have that
d dt R d (1 + |v| s ) f (v, t) dv = C Φ 2 R d ×R d ×S d-1 q(f )(v, v * ) |v ′ | s + |v ′ * | s -|v| s -|v * | s dvdv * dσ C Φ C s b ∞ 1 + 2 sup t∈(0,T ] f L ∞ v R d ×R d |v| s-1 |v * | (|v| γ + |v * | γ ) f (v)f (v * ) dvdv * C Φ C s b ∞ 1 + 2 sup t∈(0,T ] f L ∞ v R d ×R d |v| s |v * | + |v| s-1 |v * | 2 f (v)f (v * ) dvdv * 2C Φ C s b ∞ 1 + 2 sup t∈(0,T ] f L ∞ v f 0 L 1 2,v R d (1 + |v| s ) f (v) dv
where we have used the known inequality
|v ′ | s + |v ′ * | s -|v| s -|v * | s C s |v| s-1 |v * |
for s > 2 and some C s depending only on s, the fact that γ 1 and the inequality
|v| α 1 + |v| α+1
for any α 0. The result follows.
The next stage in our investigation is to show that under the conditions of Theorem 3.1 one can actually bound the integral of f over E vv ′ uniformly in time, which will play an important role in the proof of the mentioned theorem, and more. Proposition 3.10. Let f be a solution to the Boltzmann-Nordheim equation that satisfies the conditions of Theorem 3.1 and let 0 T < T 0 . Then there exists C E > 0 and C 0 ∈ R * such that for any given v ∈ R d we have that almost everywhere in the direction of v -v ′ and for all t ∈ [0, T ]
E vv ′ f ′ * (t) dE(v ′ * ) C E e -C 0 t f 0 L ∞ s,v + C E 1 -e -C 0 T C 0 f 0 L 1 v 1 + 2 sup τ ∈[0,T ] f (τ, •) L ∞ v f 0 L 1 v + sup τ ∈[0,T ] f (τ, •) L ∞ v
where the constant C E only depends on d, s and the collision kernel, and C 0 depends also on f 0 and satisfies
Q -(f )(v) C 0 .
Remark 3.11. From Lemma 3.3 we know that we can choose
C 0 = C Φ l b (C γ f 0 L 1 2,v - f 0 L 1 v )
but the theorem can be stated more generally, as presented. Notice that the choice above can satisfy C 0 < 0, which will imply an exponential growth in the bound.
Proof of Proposition 3.10. Define ϕ n as in Lemma 3.7. Since f is a solution to the Boltzmann-Nordheim equation and that
Q -(f )(v) C 0 we find that (3.7) d dt R d ϕ n (v)f (t, v) dv -C 0 R d f (t, v)ϕ n (v) dv + R d ϕ n (v)Q + (f )(v) dv.
Using (3.6) we conclude that
R d ϕ n (v)Q + (f (t, •))(v) dv S d-2 C Φ b ∞ 1 + 2 sup τ ∈[0,T ] f (τ, •) L ∞ v R d ×R d |v -v * | γ-1 f (t, v)f (t, v * ) dvdv * S d-2 C Φ b ∞ 1 + 2 sup τ ∈[0,T ] f (τ, •) L ∞ v f 0 L 1 v sup τ ∈[0,T ], v 1 ∈R d R d f (τ, v * ) |v 1 -v * | 1-γ dv * , (3.8)
where we used that f is mass preserving. We notice that for γ > 1 -
d and v ∈ R d R d f (v * ) |v -v * | 1-γ dv * f L ∞ v |x|<1 dx |x| 1-γ + f L 1 v implying R d ϕ n (v)Q + (f (t, •))(v) dv C d,γ C Φ b ∞ 1 + 2 sup τ ∈[0,T ] f (τ, •) L ∞ v f 0 L 1 v f 0 L 1 v + sup τ ∈[0,T ] f (τ, •) L ∞ v ,
for an appropriate C d,γ . The resulting differential inequality from (3.7) is
d dt R d ϕ n (v)f (t, v) dv -C 0 R d ϕ n (v)f (t, v) dv + C T
with an appropriate C T , which implies by a Grönwall lemma that (3.9)
R d ϕ n (v)f (t, v) dv R d ϕ n (v)f 0 dv e -C 0 t + C T C 0 1 -e -C 0 t . Since lim n→∞ R d ϕ n (v)f 0 dv = E vv ′ f 0 (v ′ * ) dE(v ′ * ) f 0 L ∞ s,v E vv ′ dE(v ′ * ) 1 + |v ′ * | s = C d,s f 0 L ∞ s,v
as s > d -1, we take the limit as n goes to infinity in (3.9) which yields (3.10)
E vv ′ f (t, v ′ * ) dE(v ′ * ) C d,s f 0 L ∞ s,v e -C 0 t + C T C 0 1 -e -C 0 t
which is the desired result.
Lastly, before proving Theorem 3.1, we give one more a priori type of estimates on the family of hyperplanes E vv ′ . Proposition 3.12. Let f be a solution to the Boltzmann-Nordheim equation that satisfies the conditions of Theorem 3.1 and let 0 T < T 0 . For any a ∈ R d define
ψ a (v) = 0 |v| < |a| 1 |v| |a| .
Then for almost every hyperplane E vv ′ and t ∈ [0, T ]
E vv ′ ψ v (v ′ * )f (t, v ′ * ) dE(v ′ * ) E vv ′ ψ v (v ′ * )f 0 (v ′ * ) dE(v ′ * ) + C T,α (1 + |v|) -α with α = 3 if s d + 2 and α = s ′ + 1 for any s ′ < s -d if s > d + 2. The constant C T,α > 0 depends only on T , d, the collision kernel, sup t∈(0,T ] f (t, •) L ∞ v , f 0 L 1 2,v ∩L ∞ s,v , s and s ′ .
Proof of Proposition 3.12. We start by noticing that if s -s ′ > d then
R d 1 + |v| s ′ f 0 (v) dv C s,s ′ f 0 L ∞ s,v R d dv 1 + |v| s-s ′ = C s,s ′ f 0 L ∞ s,v Thus, if s > d + 2 we can conclude that f 0 ∈ L 1 s ′ ,v for any 2 < s ′ < s -d, improving the initial assumption on f 0 .
We continue as in Lemma 3.8 and define ϕ n to be the approximation of the delta function on E vv ′ . Denoting by
I n (t) = R d ϕ n (v * )ψ v (v * )f (t, v * ) dv *
we find that, using Lemma 3.8, Proposition 3.9 and denoting by C T the appropriate constant from the mentioned lemma and proposition,
d dt I n (t) -C Φ l b (1 + |v| γ ) f 0 L 1 v + C Φ C γ l b f 0 L 1 2,v I n (t) + C T (1 + |v|) -s ′ +γ-1 if s > d + 2 and s ′ < s -d (1 + |v|) γ-3 if s d + 2.
The above differential inequality implies (see Lemma A.3 in Appendix) that for any
|v| 2C γ f 0 L 1 2,v f 0 L 1 v 1 γ -1,
the following holds:
I n (t) I n (0) + C T (1 + |v|) -s ′ -1 if s > d + 2 and s ′ < s -d (1 + |v|) -3 if s d + 2.
Taking n to infinity along with Proposition 3.10 yields the desired result as when
|v| < 2C γ f 0 L 1 2,v f 0 L 1 v 1 γ -1
the following holds:
E vv ′ ψ v (v ′ * )f (t, v ′ * ) dE(v ′ * ) 2C γ f 0 L 1 2,v f 0 L 1 v β γ 1 (1 + |v|) β E vv ′ f (t, v ′ * ) dE(v ′ * ).
Remark 3.13. We notice that since
f 0 ∈ L ∞ s,v E vv ′ ψ v (v ′ * )f 0 (v ′ * ) dE(v ′ * ) f 0 L ∞ s,v E vv ′ ψ v (v ′ * ) 1 + |v ′ * | s dE(v ′ * ) C s,s ′′ ,d (1 + |v|) -(s-s ′′ )
for any d -1 < s ′′ < s. This implies that Proposition 3.12 can be rewritten as
E vv ′ ψ v (v ′ * )f (t, v ′ * ) dE(v ′ * ) C T (1 + |v|) -(s-d+1-ǫ) s > d + 2 (1 + |v|) -min(3,s-d+1-ǫ) s d + 2
where we have picked s ′′ = d -1 + ǫ and s ′ = s -d -ǫ for an arbitrary ǫ small enough. As s -d + 1 -ǫ 3 -ǫ when s d + 2 for any ǫ we conclude that
(3.11) E vv ′ ψ v (v ′ * )f (t, v ′ * ) dE(v ′ * ) C T (1 + |v|) -(s-d+1-ǫ)
3.3. Gain of regularity at infinity. This subsection is entirely devoted to the proof of Theorem 3.1.
Proof of Theorem 3.1. We start by noticing that the function
f l,v (v * ) = 1 -ψ v √ 2 (v * ) f (v * )
, where ψ a was defined in Lemma 3.8, satisfies
f l,v (v ′ ) f l,v (v ′ * ) = 0. Indeed, as |v ′ | 2 + |v ′ * | 2 = |v| 2 + |v * | 2 |v| 2 we find that |v ′ | |v| / √ 2 or |v ′ * | |v| / √ 2. This implies that Q + (f l,v )(v) = 0
and thus, by setting
f h,v = f -f l,v we have that Q + (f )(v) C Φ b ∞ 1 + 2 sup (0,T ] f L ∞ v R d ×S d-1 |v -v * | γ f (v ′ ) f (v ′ * ) dv * dσ = C Φ b ∞ 1 + 2 sup (0,T ] f L ∞ v Q + B,γ (f h,v , f h,v ) + 2Q + B,γ (f l,v , f h,v ) 3C Φ b ∞ 1 + 2 sup (0,T ] f L ∞ v Q + B,γ (f, f h,v )
where
Q + B,γ (f, g) = R d ×S d-1 |v -v * | γ f (v ′ ) g (v ′ * ) dv * dσ.
and we have used the fact that Q + (f, g) is symmetric under exchanging f and g.
Using Carleman's representation (3.4) for Q + B,γ along with Lemma 3.2 and Remark 3.13 we find that
Q + B,γ (f, f h,v )(v) R d f (v ′ ) dv ′ |v -v ′ | d-1-γ E vv ′ f h,v (v ′ * ) dE(v ′ * ) C T f 0 L 1 2,v + sup (0,T ] f L ∞ v f 0 L 1 2,v (1 + |v|) -δ , (3.12)
where C T > 0 is a constructive constant depending only on d, s, f 0 , the collision kernel and T and where
δ = min (s -γ -ǫ 1 , ξ) with ξ = s -d + 1 -ǫ 1 + 2(1+γ) d
and ǫ 1 to be chosen later.
As f solves the Boltzmann-Nordheim equation, we find that it must satisfy the following inequality:
(3.13) ∂ t f 3C Φ b ∞ C T 1 + 2 sup (0,T ] f L ∞ v f 0 L 1 2,v + sup (0,T ] f L ∞ v f 0 L 1 2,v (1 + |v|) -δ -C Φ l b (1 + |v| γ ) f L 1 v -C Φ C γ l b f L 1 2,v f
where we have used Lemma 3.3 and (3.12). Solving (3.13) (see Lemma A.3 in the Appendix) with abusive notation for C T , we find that for any δ δ
f (t, •) L ∞ γ+ δ,v f 0 L ∞ γ+ δ,v + C T (3.14)
Let s ′ < s be given and denote by ǫ = s -s ′ . We shall show that the L ∞ s ′ ,v -norm of f can be bounded uniformly in time by a constant depending only on the initial data, dimension and collision kernel.
If δ s -γ -ǫ the result follows from (3.14). Else, the same equation implies that
f (t, •) ∈ L ∞ ξ+γ uniformly in t ∈ (0, T ].
Repeating the same arguments leading to (3.14) but using Lemma 3.2 with an L ∞ weight of s 2 = ξ + γ instead of s 2 = 0 yields an improved version of (3.14) where sup
τ ∈(0,T ] f (τ, •) L ∞ v is replaced with sup τ ∈(0,T ] f (τ, •) L ∞ ξ+γ,v
, and δ is replaced with
δ 1 = min s -γ -ǫ 1 , ξ + d -1 -γ d (ξ + γ) .
We continue by induction. Defining
δ n = min s -γ -ǫ 1 , ξ + (ξ + γ) n j=1 d -1 -γ d j .
we assume that for any δ δ n
(3.15) f (t, •) L ∞ γ+ δ,v C T where C T depends only on C Φ , b ∞ , l b , T , sup t∈(0,T ] f (t, •) L ∞ v , f 0 L ∞ s,v , f 0 L 1 2,v , γ, s, d
and ǫ 1 .
If δ n = s -γ -ǫ the proof is complete, else we can reiterate the proof to find that (3.15) is valid for δ δ n+1 .
Since
ξ + (ξ + γ) ∞ j=1 d -1 -γ d j = d 1 + γ (ξ + γ) -γ = d 1 + γ s -d + 1 + γ + 2(1 + γ) d -ǫ 1 -γ
we conclude that we can bootstrap our L ∞ weight up to
d 1 + γ s -d + 1 + γ + 2(1 + γ) d -ǫ s -ǫ
in finitely many steps with an appropriate choice of ǫ 1 . This completes the proof.
Creation of moments of all order
This section is dedicated to proving the immediate creation of moments of all order to the Boltzmann-Nordheim equation, as long as they are in
L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v .
This will play an important role in the proof of the uniqueness of the solutions, as when one deals with the difference of two solutions one cannot assume any fixed sign and usual control on the gain and loss terms fails. Higher moments of the solutions will be required to give a satisfactory result, due to the kinetic kernel |v -v * | γ .
The instantaneous generation of moments of all order is a well known and important result for the Boltzmann equation (see [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF]). As for finite times, assuming no blow ups in the solution, the Boltzmann-Nordheim's gain and loss terms control, and are controlled, by the appropriate gain and loss terms of the Boltzmann equation, one can expect that a similar result would be valid for the bosonic gas evolution.
We would like to emphasize at this point that our proofs follow the arguments used in [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF] with the key difference of a newly extended Povzner-type inequality, from which the rest follows. The reader familiar with the work of Mischler and Wennberg may just skim through the statements and skip to the next section of the paper.
The study of the generation of higher moments will be done in three steps: The first subsection is dedicated to a refinement of a Povzner-type inequality [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF][START_REF] Povzner | On the Boltzmann equation in the kinetic theory of gases[END_REF] which captures the geometry of the collisions in the Boltzmann kernel. Such inequalities control the evolution of convex and concave functions under the effect of a collision, which is what we are looking for in the case of moments.
In the second subsection we will prove the appearance of moments for solutions to Boltzmann-Nordheim equation for bosons in
L 1 2,v ∩ L ∞ v .
We conclude by quantifying the rate of explosion of the (2 + γ) th moment as the time goes to 0. This estimate will be of great importance in the proof of the uniqueness.
F ∈ L ∞ (R d ×R d ×S d-1 ) be such that F a > 0.
Given a function ψ we define.
K ψ (v, v * ) = S d-1 F (v, v * , σ)b(θ) ψ(|v ′ * | 2 ) + ψ(|v ′ | 2 ) -ψ(|v * | 2 ) -ψ(|v| 2 ) dσ.
Then, denoting by χ(v, v * ) = 1 -1 {|v|/2<|v * |<2|v|} , we find the following decomposition for K:
K ψ (v, v * ) = G ψ (v, v * ) -H ψ (v, v * )
, where G and K satisfy the following properties:
(i) If ψ(x) = x 1+α with α > 0 then |G(v, v * )| C G α (|v| |v * |) 1+α and H(v, v * ) C H α |v| 2+2α + |v * | 2+2α χ(v, v * ). (ii) If ψ(x) = x 1+α with -1 < α < 0 then |G(v, v * )| C G |α| (|v| |v * |) 1+α and -H(v, v * ) C H |α| |v| 2+2α + |v * | 2+2α χ(v, v * ). (iii)
If ψ is a positive convex function that can be written as ψ(x) = xφ(x) for a concave function φ that increases to infinity and satisfies that for any ε > 0 and α ∈ (0, 1)
(φ(x) -φ(αx)) x ε -→ x→∞ ∞ Then, for any ε > 0, |G(v, v * )| C G |v| |v * | 1 + φ |v| 2 1 + φ |v * | 2 and H(v, v * ) C H |v| 2-ε + |v * | 2-ε χ(v, v * ). In addition, there is a constant C > 0 such that φ ′ (x) C/(1 + x) implies G(v, v * ) C G |v| |v * |.
The constants C G and C H are positive and depend only on α, ψ, ε, b, a and
F L ∞ v,v * ,σ .
Remark 4.2. The operator H ψ in the above lemma can be chosen to be monotonous in ψ in the following sense:
if ψ = ψ 1 -ψ 2 0 is convex then H ψ 1 -H ψ 2 0
. This property will prove itself extremely useful later on in the paper.
Proof of Lemma 4.1. The proof follows similar arguments to the one presented in [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF] where We start by recalling the definition of v ′ , v ′ * and cos θ:
F
v ′ = v + v * 2 + |v -v * | 2 σ v ′ * = v + v * 2 - |v -v * | 2 σ , cos θ = v -v * |v -v * | , σ .
One can see that
|v ′ | 2 = |v| 2 1 2 + 1 2 v -v * |v -v * | , σ + |v * | 2 1 2 - 1 2 v -v * |v -v * | , σ + |v -v * | 2 v + v * , σ - 1 2 v -v * |v -v * | , σ |v| 2 -|v * | 2 . = β(σ) |v| 2 + (1 -β(σ)) |v * | 2 + Z(σ) = Y (σ) + Z(σ),
where
β(σ) = 1 2 + 1 2 v -v * |v -v * | , σ ∈ [0, 1], (4.1)
Y (σ) = β(σ) |v| 2 + (1 -β(σ)) |v * | 2 , (4.2) Z(σ) = |v -v * | 2 v + v * , σ - 1 2 v -v * |v -v * | , σ |v| 2 -|v * | 2 (4.3)
Similarly, one has
|v ′ * | 2 = Y (-σ) + Z(-σ).
As Z is an odd function in σ, we can split the integration over S d-1 to the domains where Z is positive and negative. By changing σ to -σ and adding and subtracting the term ψ(Y (σ)) + ψ(Y (-σ)), as well as using the fact that β(-σ) + β(σ) = 1 we conclude that
K ψ = σ: Z(σ) 0 [b(θ)F (σ) + b(π -θ)F (-σ)] [ψ(Y + Z) -ψ(Y )] dσ + σ: Z(σ) 0 [b(θ)F (σ) + b(π -θ)F (-σ)] [ψ(Y (-σ) -Z(σ)) -ψ(Y (-σ))] dσ - S d-1 [b(θ)F (σ) + b(π -θ)F (-σ)] βψ(|v| 2 ) + (1 -β)ψ(|v * | 2 ) -ψ(Y ) dσ, (4.4)
We define (4.5)
H ψ = S d-1 [b(θ)F (σ) + b(π -θ)F (-σ)] βψ(|v| 2 ) + (1 -β)ψ(|v * | 2 ) -ψ(Y ) dσ
and notice that due to the definition of Y (σ) and the convexity or concavity of ψ, H ψ always has a definite sign. As such
(4.6) H ψ a S d-1 [b(θ) + b(π -θ)] βψ(|v| 2 ) + (1 -β)ψ(|v * | 2 ) -ψ(Y ) dσ,
when ψ is convex and (4.7)
-H ψ F L ∞ v,v * ,σ S d-1 [b(θ) + b(π -θ)] βψ(|v| 2 ) + (1 -β)ψ(|v * | 2 ) -ψ(Y ) dσ,
when ψ is concave. At this point the proof of (i) and (ii) for H ψ follows the arguments presented in [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF].
We now turn our attention to the remaining two integrals in (4.4). Due to the positivity of b and F , and the monotonicity of ψ both integrals will be dealt similarly and we restrict our attention to the first. One sees that
σ: Z(σ) 0 [b(θ)F (σ) + b(π -θ)F (-σ)] [ψ(Y + Z) -ψ(Y )] dσ 2b ∞ F L ∞ v,v * ,σ σ: Z(σ) 0 [ψ(Y + Z) -ψ(Y )] dσ. (4.8)
The rest of the proof will rely on a careful investigation of the integrand. To do so, we start by noticing that
2 β 1 -β |v| |v * | Y (σ) |v| 2 + |v * | 2 |Z(σ)| 4 |v| |v * | . (4.9)
Case (i): In that case we have for all σ on S d-1 such that Z(σ) 0
ψ(Y + Z) -ψ(Y ) = (1 + α)Z (C(σ)) α (1 + α)Z (Y + Z) α . As Y + Z = |v ′ | 2 |v| 2 + |v * | 2 we find that ψ(Y + Z) -ψ(Y ) 4 |v| |v * | |v| 2 + |v * | 2 α (1 + α) 4 1+2α (|v| |v * |) 1+α if |v| 2 |v * | 2 |v| 4 ε 1 α |v| 2 + |v * | 2 1+α + 4ε (|v| |v * |) 1+α otherwise.
where we have used Hölder inequality in the second term. As ǫ is arbitrary, one can choose it such that 8
(1 + α)b ∞ F L ∞ v,v * .σ ε 1 α C H 2 ,
where C H is the constant associated to H ψ . Defining
H ψ = H ψ + 8(1 + α)b ∞ F L ∞ v,v * .σ ε 1 α |v| 2 + |v * | 2 1+α χ(v, v * )
and G ψ to be what remains, the proof is completed for this case.
Case (ii): In that case we have for all σ on S d-1 such that Z(σ) 0
ψ(Y + Z) -ψ(Y ) = (1 + α)Z (C(σ)) α (1 + α)ZY α .
Using (4.9) we find that (4.10)
ψ(Y + Z) -ψ(Y ) C (|v| |v * |) 1+α 1 [β(σ)(1 -β(σ))] α/2 . Since S d-1 dσ [β(σ)(1 -β(σ))] α/2 = C d π 2 0 sin d-2 θ cos d-2 θ (cos θ sin θ) α dθ < ∞
This yields the desired result with the choice H ψ = H ψ and G ψ the remaining terms.
Case (iii): This case will be slightly more complicated and we will deal with the first two integrations in (4.4) separately. We start with the second integral. As Z 0 in the domain of integration and Y 0 always, we find that
σ: Z(σ) 0 [b(θ)F (σ) + b(π -θ)F (-σ)] [ψ(Y (-σ) -Z(σ)) -ψ(Y (-σ))] dσ 2b ∞ F L ∞ v,v * ,σ σ: Z(σ) 0 Z(σ)ψ ′ (Y (-σ)) dσ
where we have used the fact that ψ is convex. As ψ ′ (x) = φ(x) + xφ ′ (x) and
φ(x) -φ(0) xφ ′ (x)
when x > 0, due to the concavity of φ, we have that (4.11)
σ: Z(σ) 0 [b(θ)F (σ) + b(π -θ)F (-σ)] [ψ(Y (-σ) -Z(σ)) -ψ(Y (-σ))] dσ 16b ∞ F L ∞ v,v * ,σ |v| |v * | σ: Z(σ) 0 φ(Y (-σ)) dσ
where we have used (4.9) and the positivity of φ.
to deal with the first integral in (4.4) we notice that for Z 0
|ψ(Y (σ) + Z(σ)) -ψ(Y (σ))| Y (σ)Z(σ)φ ′ (Y (σ)) + Z(σ)φ(Y (σ) + Z(σ))
where we have used to concavity of φ. Like before we can conclude that (4.12)
σ: Z(σ) 0 [b(θ)F (σ) + b(π -θ)F (-σ)] [ψ(Y (σ) + Z(σ)) -ψ(Y (σ))] dσ 8b ∞ F L ∞ v,v * ,σ |v| |v * | σ: Z(σ) 0 (φ(Y (σ)) + φ(Y (σ) + Z(σ))) dσ .
Adding (4.11) and (4.12) and using the positivity and concavity of φ we find that by choosing H ψ = H ψ we have that
G ψ (v, v * ) 16b ∞ F L ∞ v,v * ,σ |v| |v * | φ S d-1 Y (σ)dσ + φ S d-1 (Y (σ) + Z(σ)) dσ = 32b ∞ F L ∞ v,v * ,σ |v| |v * | φ |v| 2 + |v * | 2 2 32b ∞ F L ∞ v,v * ,σ |v| |v * | max φ(|v| 2 ), φ(|v * | 2 )
which completes the estimation for G ψ in the general case. Property (iii) for H ψ is proved along the same lines of the proof of Mischler and Wennberg in [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF], as well as the second part of the case.
4.2.
A priori estimate on the moments of a solution. The immediate appearance of moments of any order is characterized by the following proposition.
Proposition 4.3. Let f be a non-negative solution of (1.4) in L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v
, with initial data f 0 , satisfying the conservation of mass and energy. If γ > 0 then for all for all α > 0 and for all 0 < T < T 0 ,
R d |v| α f (t, v) dv ∈ L ∞ loc ([T, T 0 )) .
The proof of that proposition is done by induction and requires two lemmas. The first lemma proves a certain control of the L 1 2+γ/2,v -norm and will be the base case for the induction, while the second lemma will prove an inductive bound on the moments.
In what follows we will rely heavily on the following technical lemma, proved in the appendix of [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF]:
Lemma 4.4. Let f 0 ∈ L 1 2,v .
Then, there exists a positive convex function ψ defined on R + such that ψ(x) = xφ(x) with φ a concave function that increases to infinity and satisfies that for any ε > 0 and α ∈ (0, 1)
(φ(x) -φ(αx)) x ε -→ x→∞ ∞,
and such that
R d ψ |v| 2 f 0 (v) dv < ∞.
In what follows we will denote by ψ and φ, the associated functions given by Lemma 4.4 for the initial data f 0 . Lemma 4.5. Let f satisfy the conditions of Proposition 4.3. Then for any T in [0, T 0 ) there exist c T , C T > 0 such that for all 0 t T ,
R d f (t, v)ψ |v| 2 dv + c T t 0 R d f (τ, v) M 2+ γ 2 (τ ) + ψ |v| 2 dvdτ R d f 0 (v)ψ |v| 2 dv + C T t. (4.13)
Proof of Lemma 4.5. We fix T in [0, T 0 ) and we consider 0 t T .
As seen in [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF], one can construct an increasing sequence of convex functions, (ψ n ) n∈N , that converges pointwise to ψ and satisfies that ψ n+1 -ψ n is convex. Moreover, there exists a sequence of polynomials of order 1, (p n ) n∈N , such that ψ n -p n is of compact support.
The properties of ψ n imply that for a given F as in Lemma 4.1 we have that the associated operators H ψn , G ψn satisfy:
• H ψn is positive and increasing (due to Remark 4.2).
• H ψn converges pointwise to H ψ (this follows from the appropriate representation of H, see [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF]).
• |G ψn (v, v * )| C G |v| |v * | for all n.
As f preserves mass and energy and p n is of order 1:
R d [f (t, v) -f 0 (v)] ψ n |v| 2 dv = R d [f (t, v) -f 0 (v)] ψ n |v| 2 -p n |v| 2 dv.
Since ψ n -p n is compactly supported and f solves the Boltzmann-Nordheim equation, we use Lemma 1.1 to conclude
R d [f (t, v) -f 0 (v)] ψ n |v| 2 dv = C Φ 2 t 0 R d ×R d ×S d-1 q(f )(τ, v, v * ) [ψ ′ n * + ψ ′ n -ψ n * -ψ n ] dvdv * dτ, with q(f )(τ, v, v * ) = |v -v * | γ b (cos θ) f (τ )f * (τ ) (1 + f ′ (τ ) + f ′ * (τ )) . Using Lemma 4.1 with F = 1 + f ′ + f ′
* we find that the above implies, using the decomposition stated in the lemma, that
R d f (t, v)ψ n |v| 2 dv + C φ 2 t 0 R d ×R d f (τ )f (τ ) * |v -v * | γ H ψn dv * dvdτ = R d f 0 (v)ψ n |v| 2 dv + C φ 2 t 0 R d ×R d f (τ )f (τ ) * |v -v * | γ G ψn dv * dvdτ.
At this point the proofs follows much like in the work of Mischler and Wennberg. We concisely outline the steps for the sake of completion.
Using the uniform bound on G ψn and the properties of H ψn we find that by the monotone convergence theorem
R d f (t, v)ψ |v| 2 dv + C φ 2 t 0 R d ×R d f (τ )f (τ ) * |v -v * | γ H ψ dv * dvdτ = R d f 0 (v)ψ |v| 2 dv + C φ C G 2 t 0 R d ×R d f (τ )f (τ ) * |v -v * | γ |v| |v * | dv * dvdτ.
Using Lemma 4.1 again for H ψ and picking ǫ = γ 2 in the relevant case we have that
R d ×R d f (τ )f (τ ) * |v -v * | γ H ψ dv * dv C R d ×R d f (τ )f (τ ) * |v| 2+ γ 2 dv * dv -c R d ×R d f (τ )f (τ ) * (|v| |v * |) 1+ γ 4 dv * dv. As M β (f )(τ ) f (τ ) L 1 2,v = f 0 L 1 2,v
for any β 2 we conclude that due to the conservation of mass and energy we have that
R d f (t, v)ψ |v| 2 dv + c T 2 f 0 L 1 v t 0 M 2+ γ 2 (τ )dτ R d f 0 (v)ψ |v| 2 dv + C T t.
The above also implies that
R d f (t, v)ψ |v| 2 dv R d f 0 (v)ψ |v| 2 dv + C T T,
which is enough to complete the proof.
Next we prove the lemma that governs the induction step. Again, the proof follows [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF] closely, yet we include it for completion. Lemma 4.6. Let T be in (0, T 0 ). For any n ∈ N there exists T n > 0 as small as we want such that M 2+(2n+1)γ/2 (T n ) < ∞. Moreover, for any t ∈ [T n , T ] there exists C T > 0 and c Tn,T > 0 such that (4.14)
M 2+(2n+1)γ/2 (t) + c T t Tn M 2+(2n+1)γ/2 (τ ) + M 2+(2n+3)γ/2 (τ ) dτ C Tn,T (1 + t),
Proof of Lemma 4.6. We start by noticing that since M 2+γ/2 ∈ L 1 loc ([0, T 0 )), according to Lemma 4.5 and the conservation of mass, we can find t 0 , as small as we want, such that
M 2+γ/2 (t 0 ) < ∞.
We repeat the proof of Lemma 4.5 with the function ψ(x) = x 1+ γ 4 on the interval [t 0 , T ], as we can still find the same polynomial approximation and a uniform bound on the associated H and G, to find that for almost any t ∈ [t 0 , T 0 )
R d ×R d f (t) |v| 2+γ/2 dvdv * + C T t t 0 f (τ )f * (τ ) |v| 2+3γ/2 dvdv * dτ c t t t 0 f (τ )f * (τ ) |v| 2+γ/2 |v * | γ dvdv * dτ + R d ×R d f (t 0 ) |v| 2+γ/2 dvdv * .
This completes the proof in the case n = 0 using Lemma 4.5 again. Notice that as the right hand side is a uniform bound in t we can conclude that the inequality is valid for any t in the appropriate interval.
We continue in that manner, using Lemma 4.1 with ψ(x) = x 1+(2n+3)γ/4 , assuming we have shown the result for M 2+(2n+1)γ/2 , and conclude the proof.
We now posses the tools to prove the main proposition of this section.
Proof of Proposition 4.3. We start by noticing, that since f conserves mass and energy f is in L 1 2,v for all t ∈ [0, T 0 ) and therefore the Proposition is valid for all α ∈ [0, 2].
Given α > 2 and 0 < T < T 1 < T 0 we know by Lemma 4.6 that we can construct an increasing sequence (T n ) n∈N such that T n < T for all n and
M 2+(2n+1)γ/2 (t) < C Tn,T (1 + T 1 ) when t ∈ [T n , T 1 ] ⊂ [T, T 1 ]
. This completes the proof. Remark 4.7. We would like to emphasize at this point that this result is slightly different from the one for the Boltzmann equation. Indeed, in the case when T 0 = +∞ in the Boltzmann equation the bounds on the moments on [T, ∞) depend only on T ,while for the Boltzmann-Nordheim equation in our settings we can only find local bounds on the moments since we require the boundedness of the solution f .
4.3.
The rate of blow up of the L 1 2+γ,v -norm at t = 0. In this subsection we will investigate the rate by which the (2 + γ) th moment blows up as t approaches zero. This will play an important role in the proof of the uniqueness to the Boltzmann-Nordheim equation.
∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v
satisfying the conservation of mass and energy. Then, given T < T 0 , if M 2+γ (t) is unbounded on (0, T ] there exists a constant C T > 0 such that ∀t ∈ (0, T ], M 2+γ (t) C T t .
C T depends only on γ, d, the collision kernel, sup t∈(0,T ] f L ∞ v and the appropriate norms of f 0 Proof of Proposition 4.8. Let 0 < t < T < T 0 . We start by mentioning that due to Proposition 4.3 we know that all the moments considered in what follows are defined and finite. Using Lemma 1.1 we find that
(4.15) d dt M 2+γ (t) = C φ 2 R d ×R d |v -v * | γ f f * K 1+γ/2 (v, v * ) dv * dv,
where K 1+γ/2 (v, v * ) is given by Lemma 4.1 with the choice ψ(x) = x 1+γ/2 . From the same lemma we have
K 1+γ/2 (v, v * ) C 1,T |v| 1+γ/2 |v * | 1+γ/2 -C 2,T |v| 2+γ + |v * | 2+γ + C 3,T |v| 2+γ + |v * | 2+γ 1 {|v|/2<|v * |<2|v|} for constants C 1,T , C 2,T , C 3,T depending only on γ, T, d, f L ∞ [0,T ],v
and the appropriate norms of f 0 .
On
{|v| /2 < |v * | < 2 |v|} |v| 2+γ + |v * | 2+γ 2 2+γ/2 |v| 1+γ/2 |v * | 1+γ/2 .
Therefore, (4.15) yields
d dt M 2+γ (t) R d ×R d |v -v * | γ f f * C 1,T |v| 1+γ/2 |v * | 1+γ/2 -C 2,T |v| 2+γ dv * dv.
Since f preserves the mass and energy, and 0 γ 1, we find that with abusing notations for relevant constants (4. [START_REF] Lacaze | Dynamical formation of a boseeinstein condensate[END_REF])
d dt M 2+γ (t) C 1,T M 1+3γ/2 -C 2,T M 2+γ ,
where we have used the fact that
||v| γ -|v * | γ | |v -v * | γ |v| γ + |v * | γ .
As, for any ǫ > 0
|v| 1+3γ/2 = |v| 1+γ/2 |v| γ ε |v| 2+γ + 1 4ε |v| 2γ ε 1 + |v| 2+2γ + 1 4ε |v| 2γ .
we conclude that since 2γ 2 we can take ε to be small enough such that (4.16) becomes
d dt M 2+γ (t) c T -C T M 2+2γ (t).
where c T , C T > 0 are independent of t and depend only on the relevant known quantities.
Due to Holder's inequality we know that
M 2+γ M 1/2 2 M 1/2 2+2γ implying that d dt M 2+γ (t) c T -C T M 2 2+γ (t).
As M 2+γ (t) is unbounded in (0, T ], we know that there exists t 0 ∈ (0, T ] such that
M 2+γ (t 0 ) max 2c T C T , M 2+γ (T )
. We find that
d dt M 2+γ (t 0 ) C T 2 M 2 2+γ (t 0 ) -C T M 2 2+γ (t 0 ) < 0
implying that there exists a neighbourhood of t 0 where M 2+γ (t) decreases. As this means that M 2+γ (t) 2c T C T to the left of t 0 we can repeat the above argument and conclude that M 2+γ (t) decreases on (0, t 0 ]. Moreover, in this interval we have
d dt M 2+γ - C T 2 M 2 2+γ .
The above inequality is equivalent to
d dt 1 M 2+γ C T 2 ,
which implies, by integrating over (0, t) and remembering that M 2+γ is unbounded, that 1 M 2+γ (t)
C T 2 t
on (0, t 0 ], from which the result follows.
Uniqueness of solution to the Boltzmann-Nordheim equation
This section is dedicated to proving that if a solution to the Boltzmann-Nordheim equation exists, with appropriate conditions on the initial data, then it must be unique. The main theorem we will prove in this section is the following:
Theorem 5.1. Let f 0 be in L 1 2,v ∩ L ∞ s,v
, where d -1 < s. If f and g are two nonnegative mass and energy preserving solutions of the Boltzmann-Nordheim equation with the same initial data
f 0 that are in L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v then f = g on [0, T 0 ).
The proof relies on precise estimates of the L 1 v , the L 1 2,v and the L ∞ v -norms of the difference of two solutions. As the difference of solutions may not have a fixed sign, these estimations require some delicacy due to a possible gain of a |v| γ weight from the collision operator.
In what follows we will repeatedly denote by C T constants that depend on d, s, the collision kernel, f 0 L 1 2,v ∩L ∞ s,v , sup t∈(0,T ] f L ∞ v , sup t∈(0,T ] g L ∞ v and T . Other instances will be clear form the context. We would like to point out that if f is a weak solution to the Boltzmann-Nordheim equation, i.e.
(5.1)
f (t, v) = f 0 (v) + t 0 Q (f (s, •)) ds,
with the required conservation and bounds, then similarly to Lemma 3.3, and using Lemma 3.5 together with Proposition 3.10 show us that for a fixed v ∈ R d we have that
Q (f (s, v)) ∈ L ∞ t ([0, T ]
). This implies that f is actually absolutely continuous with respect to t and as such we can differentiate (5.1) strongly with respect to t. The above gives validation to the techniques used in the next few subsection.
Evolution of
f -g L 1 v .
The following algebraic identity will serve us many times in what follows:
(5.2) abc -def = 1 2 (a -d)(bc + ef ) + a + d 4 [(b -e)(c + f ) + (c -f )(b + e)] .
Lemma 5.2. Let 0 T < T 0 . Then, there exists C T > 0 such that for all t ∈ [0, T ]:
d dt f -g L 1 v C T f -g L 1 2,v + f -g L ∞ v .
Proof of Lemma 5.2. Given T ∈ [0, T 0 ) we have, due to Lemma 1.1:
d dt f -g L 1 v = R d sgn(f -g)∂ t (f -g) dv = R d sgn(f -g) (Q(f ) -Q(g)) dv = C Φ 2 R d ×R d ×S d-1 b (cos θ) |v -v * | γ P (f, g) [Ψ ′ * + Ψ ′ -Ψ * -Ψ] dσdv * dv, (5.3)
where Ψ(t, v) = sgn(f -g)(t, v) and
(5.4)
P (f, g) = f f * (1 + f ′ + f ′ * ) -gg * (1 + g ′ + g ′ * )
It is simple to see that |Ψ ′ * + Ψ ′ -Ψ * -Ψ| 4 and using the algebraic identity (5.2) we also note that
|P (f, g)| C T |f -g| (f * + g * ) + (f + g) |f * -g * | + (f + g)(f * + g * ) |f ′ -g ′ | + |f ′ * -g ′ * | .
Using the above with (5.3), along with known symmetry properties, we find that
d dt f -g L 1 v C T R d ×R d |v -v * | γ |f -g| (f * + g * ) dv * dv + R d ×R d ×S d-1 b (cos θ) |v -v * | γ (f + g)(f * + g * ) |f ′ -g ′ | dv * dvdσ . As |v -v * | γ 1 + |v| 2 1 + |v * | 2 ,
since γ ∈ [0, 1], and using the conservation of mass and energy, as well as the fact that f and g has the same initial condition f 0 , we conclude that
d dt f -g L 1 v C T f 0 L 1 2,v f -g L 1 2,v + f 0 2 L 1 2,v f -g L ∞ v ,
proving the desired result.
Evolution of
f -g L 1 2,v
. The most problematic term to appear in our evolution equation is that of the L 1 2,v -norm. We have the following:
Lemma 5.3. Let 0 T < T 0 . Then, there exists C T > 0 such that for all t ∈ [0, T ]:
d dt f -g L 1 2,v C T M 2+γ (t) f -g L 1 v + f -g L 1 2,v + (1 + M 2+γ (t)) f -g L ∞ v ,
where M 2+γ is the (2 + γ) th moment of f + g.
Proof of Lemma 5.3. We proceed like the proof of Lemma 5.2. For a given fixed T ∈ [0, T ) ) we have:
(5.5)
d dt f -g L 1 2,v = C Φ 2 R d ×R d ×S d-1 b|v -v * | γ P (f, g) [Ψ ′ * + Ψ ′ -Ψ * -Ψ] dv * dvdσ,
with Ψ(t, v) = sgn(f -g)(t, v) 1 + |v| 2 and P (f, g) given by (5.4).
Using the algebraic identity (5.2) and known symmetry properties we obtain
(5.6) d dt f -g L 1 v = C Φ 1 2 I 1 + 1 4 I 2 + 1 8 I 3 + 1 4 I 4
with
I 1 = R d ×R d ×S d-1 b|v -v * | γ [G(Ψ) -Ψ] (f -g)(f * + g * ) dσdv * dv, I 2 = R d ×R d ×S d-1 b|v -v * | γ [G(Ψ) -Ψ] (f -g)(f * (f ′ + f ′ * ) + g * (g ′ + g ′ * )) dσdv * dv, I 3 = R d ×R d ×S d-1 b|v -v * | γ [G(Ψ) -Ψ] (f + g)(f * -g * )(f ′ + f ′ * + g ′ + g ′ * ) dσdv * dv, I 4 = R d ×R d ×S d-1 b|v -v * | γ [G(Ψ) -Ψ] (f + g)(f * + g * )(f ′ * -g ′ * ) dσdv * dv,
and where we defined
G(Ψ) = Ψ ′ * + Ψ ′ -Ψ * . It is immediate to verify that (5.7) |G(Ψ)| 3 + |v ′ | 2 + |v ′ * | 2 + |v * | 2 = 2 1 + |v * | 2 + 1 + |v| 2 .
Thanks to the latter bound on G(Ψ) and the fact Ψ • (f -g) = 1 + |v| 2 |f -g| we find that
I 1 2l b R d ×R d 1 + |v * | 2 (|v| γ + |v * | γ ) |f -g| (f * + g * ) dvdv * 4l b f 0 L 1 2,v f -g L 1 2,v + 2l b M 2+γ f -g L 1 v . (5.8)
where we have used similar estimation as in Lemma 5.2.
The term I 2 is dealt similarly:
(5.9)
I 2 C T f 0 L 1 2,v f -g L 1 2,v + C T M 2+γ f -g L 1 v .
When dealing with I 3 we make the symmetric change of (v, v * ) → (v * , v) and obtain:
(5.10)
I 3 C T f 0 L 1 2,v f -g L 1 2,v + C T M 2+γ f -g L 1 v
. Lastly, we find that using similar methods
|I 4 | 4l b R d ×R d (1 + |v| 2 ) (|v| γ + |v * | γ ) (f + g)(f * + g * ) dv * dv f -g L ∞ v 4l b 2 f 0 2 L 1 2,v + 4 f 0 L 1 v M 2+γ f -g L ∞ v . (5.11)
To conclude we just add (5.8), (5.9), (5.10) and (5.11) with appropriate coefficients.
Control of
f -g L ∞ v .
Lastly, we deal with the evolution of the L ∞ -norm.
Lemma 5.4. Let 0 T < T 0 . Then, there exists C T > 0 such that for all t ∈ [0, T ]:
f -g L ∞ v C T t 0 f -g L 1 2,v (u) + f -g L ∞ v (u) du.
Proof of Lemma 5.4. Given T ∈ [0, T 0 ) and t ∈ [0, T ], we have that since f (0) = g(0):
|f (t) -g(t)| = t 0 sgn(f -g)(s) (Q(f (s)) -Q(g(s)))ds = C Φ t 0 R d ×S d-1 b (cos θ) |v -v * | γ sgn(f -g)P (f ′ , g ′ ) dσdv * ds -C Φ t 0 R d ×S d-1 b (cos θ) |v -v * | γ sgn(f -g)P (f, g) dσdv * ds = J 1 + J 2 .
where P is given by (5.4), and we have used the convention f ′′ = f and g ′′ = g.
Using the algebraic identity (5.2) and the definition of P we find that:
|P (f ′ , g ′ )| C T [|f ′ -g ′ | (f ′ * + g ′ * ) + |f ′ * -g ′ * | (f ′ + g ′ )] + 1 4 |f * -g * | (f ′ + g ′ )(f ′ * + g ′ * ) + 1 4 |f -g| (f ′ + g ′ )(f ′ * + g ′ * ).
The change of variable σ → -σ sends v ′ to v ′ * and vice versa. Thus we find that:
|J 1 | C T t 0 R d ×S d-1 b (cos θ) |v -v * | γ |f ′ -g ′ | (f ′ * + g ′ * ) dσdv * ds + 1 4 t 0 R d ×S d-1 b (cos θ) |v -v * | γ (f ′ * + g ′ * )(f ′ + g ′ ) |f * -g * | dσdv * ds + 1 4 f -g L ∞ v R d ×S d-1 b (cos θ) |v -v * | γ (f ′ * + g ′ * )(f ′ + g ′ ) dσdv * ds,
where we defined b(x) = b(x) + b(-x). The first term can be dealt with using the appropriate Carleman change of variables, leading to the Carleman representation (3.4). Indeed, one can show that
R d ×S d-1 b (cos θ) |v -v * | γ |f ′ -g ′ | (f ′ * + g ′ * ) dσdv * = R d |f ′ -g ′ | |v -v ′ | E vv ′ b(cos θ) |v -v * | γ |v ′ * -v ′ | d-2-γ (f ′ * + g ′ * ) dE(v ′ * ) 2 b L ∞ E vv ′ (f ′ * + g ′ * ) dE(v ′ * ) L ∞ v R d |f ′ -g ′ | |v -v ′ | d-1-γ dv ′ L ∞ v C T f -g L ∞ v + f -g L 1 v
LOCAL CAUCHY PROBLEM FOR HOMOGENEOUS BOSONIC BOLTZMANN-NORDHEIM 33 due to Proposition 3.10 and the inequality
R d f (v) |v -v ′ | β dv C β f L ∞ v + f L 1 v , when β < d.
The same technique will work for the third term in J 1 , yielding
R d ×S d-1 b (cos θ) |v -v * | γ (f ′ * + g ′ * )(f ′ + g ′ ) dσdv * b L ∞ E vv ′ (f ′ * + g ′ * ) dE(v ′ * ) L ∞ v R d |f ′ + g ′ | |v -v ′ | d-1-γ dv ′ L ∞ v C T .
We are only left with the middle term of J 1 . Using the simple inequality
|v -v * | γ = |v ′ -v ′ * | γ 1 + |v ′ | γ 1 + |v ′ * | γ .
we find that
R d ×S d-1 b (cos θ) |v -v * | γ (f ′ * + g ′ * )(f ′ + g ′ ) |f * -g * | dσdv * l b f + g 2 L ∞ γ,v f -g L 1 v C T f -g L 1 v ,
where we have used Theorem 3.1 and the fact that γ < d -1 < s.
Combining the above yields (5.12)
|J 1 | C T t 0 f -g L ∞ v + f -g L 1 v ds.
The term J 2 requires a more delicate treatment. Starting again with the algebraic identity (5.2) we find that:
P (f, g) - 1 2 (f -g) (f * (1 + f ′ + f ′ * ) + g * (1 + g ′ + g ′ * )) C T (f + g) |f * -g * | + 1 4 (f + g)(f * + g * ) |f ′ -g ′ | + 1 4 (f + g)(f * + g * ) |f ′ * -g ′ * | Thus, -sgn(f -g)P (f, g) - 1 2 |f -g| (f * (1 + f ′ + f ′ * ) + g * (1 + g ′ + g ′ * )) +C T (f + g) |f * -g * | + 1 4 (f + g)(f * + g * ) |f ′ -g ′ | + 1 4 (f + g)(f * + g * ) |f ′ * -g ′ * | implying that J 2 C T t 0 R d ×S d-1 b (cos θ) |v -v * | γ |f * -g * | (f + g) dσdv * ds + 1 2 t 0 f -g L ∞ v R d ×S d-1 b (cos θ) |v -v * | γ (f * + g * )(f + g) dσdv * ds C T t 0 f + g L ∞ v,γ f -g L 1 2,v + f + g 2 L ∞ v,γ f -g L ∞ v ds C T t 0 f -g L 1 2,v + f -g L ∞ v ds.
Combining the estimations for J 1 and J 2 yields the desired result.
5.4. Uniqueness of the Boltzmann-Nordheim equation. We are finally ready to prove our main theorem for this section.
Proof of Theorem 5.1. : Combining Lemma 5.2, 5.3 and 5.4 we find that for any given T ∈ [0, T 0 ) the following inequalities hold:
(5.13) d dt f -g L 1 v C T f -g L 1 2,v + f -g L ∞ v d dt f -g L 1 2,v C T M 2+γ (t) f -g L 1 v + f -g L 1 2,v + (1 + M 2+γ (t)) f -g L ∞ v f -g L ∞ v C T t 0 f -g L 1 2,v (u) + f -g L ∞ v (u) du.
, where C T can be chosen to be the same in all the inequalities. As the L 1 v , L 1 2,v and L ∞ v -norms of f and g are bounded uniformly on [0, T ] we see from (5.13
) that f -g L 1 v C T t, f -g L ∞ v C T t.
Moreover, due to Proposition 4.8 we know that the rate of blow up of M 2+γ is at worst of order 1/t. More precisely there exists a constant C 1 that may depend on
T, d, γ, sup t∈[0,T ] f L ∞ v , sup t∈[0,T ] g L ∞ v
, the appropriate norms of f 0 , or the bound of the (2 + γ) th moment if it is bounded, such that
(5.14) M 2+γ (t) C 1 t .
This, together with the middle inequality of (5.13) implies that
d dt f -g L 1 2,v C T C T C 1 + 2 f 0 L 1 2,v + C T (T + C 1 ) , from which we conclude that f -g L 1 2,v C 2 T C 1 + 2C T f 0 L 1 2,v + C 2 T (T + C 1 ) t.
Iterating this process shows that there exists
C n,T > 0 such that max f -g L 1 v , f -g L 1 2,v , f -g L ∞ v C n.T t n ,
though the dependency of C n,T on n may be slightly complicated. We will continue following the spirit of Nagumo's fixed point theorem.
Firstly, we notice that by defining t 0 = min{T, 1/(2C T )}, a simple estimation in the third inequality of (5.13) shows that for any t ∈ [0, t 0 ]:
sup t∈[0,t] f -g L ∞ v 2C T t sup t∈[0,t] f -g L 1 2,v .
6.1. Some properties of truncated operators. The idea of approximating the collision kernel in the case of hard potentials is a common one in the Boltzmann equation literature (see for instance [START_REF] Arkeryd | On the Boltzmann equation. I. Existence[END_REF][2] or [START_REF] Mischler | On the spatially homogeneous Boltzmann equation[END_REF]). For n ∈ N, we consider the following truncated operators:
Q n (f ) = C Φ R d ×S d-1 (|v -v * | ∧ n) γ b(θ) [f ′ f ′ * (1 + f + f * ) -f f * (1 + f ′ + f ′ * )] dv * dσ.
where x ∧ y = min(x, y).
We associate the following natural decomposition to the truncated operators:
Q n (f ) = Q + n (f ) -f Q - n (f ), with Q + and Q -defined as in (1.5) -(1.6).
We have the following:
Lemma 6.1. For any f ∈ L 1 2,v ∩ L ∞ v we have that: • f Q - n (f ) L 1 2,v C Φ l b n γ 1 + 2 f L ∞ v f 2 L 1 2,v , • Q - n (f ) L ∞ v C Φ l b n γ 1 + 2 f L ∞ v f L 1 v , • if f 0, then for any v ∈ R d Q - n (f )(v) C Φ l b (n γ ∧ (1 + |v| γ )) f L 1 v -C Φ C γ l b f L 1 2,v
, where C γ > 0 is defined by (3.3).
Proof of Lemma 6.1. As
Q - n (f )(v) = C Φ R d ×S d-1 (n ∧ |v -v * |) γ b(cos θ)f * [1 + f ′ * + f ′ ] dv * dσ.
The first two inequalities are easily obtained by bounding f ′ * + f ′ by 2 f L ∞ v and the collision kernel by n γ b(cos θ).
To show the last inequality we use the non-negativity of f and mimic the proof of Lemma 3.3:
Q - n (f )(v) C Φ R d ×S d-1 (n ∧ |v -v * |) γ b(cos θ)f * dv * dσ C Φ l b |v-v * | n |v -v * | γ f * dv * + |v-v * | n n γ f * dv * C Φ l b |v-v * | n (1 + |v| γ ) -(1 + |v * | γ ) f * dv * + |v-v * | n n γ f * dv * C Φ l b (n γ ∧ (1 + |v| γ )) f L 1 v -C γ |v-v * | n (1 + |v * | 2 )f * dv * ,
where C γ was defined in (3.3). The proof is now complete.
As we saw in Section 3, the control of the integral of Q + over the hyperplanes E vv ′ is of great importance in the study of L ∞ -norm for the solutions to the Boltzmann-Nordheim equation. We thus strive to find a similar result for the Q + n operators.
Lemma 6.2. Let f be in L 1 2,v ∩ L ∞ v . Then: • Q + n (f ) L 1 2,v 2C Φ l b n γ 1 + 2 f L ∞ v f 2 L 1 2,v , • If f 0 then for almost every (v, v ′ ) E vv ′ Q + n (f )(v ′ * ) dE(v ′ * ) C +E f L 1 v 1 + 2 f L ∞ v S d-1 d + γ -1 f L ∞ v + f L 1 v ,
where C +E was defined in Lemma 3.7,
• If there exists E f > 0 such that for almost every (v, v ′ ) E vv ′ |f ′ * | dE(v ′ * ) E f then Q + n (f ) L ∞ v C + E f 1 + 2 f L ∞ v S d-1 1 + γ f L ∞ v + f L 1 v ,
where C + was defined in Lemma 3.5.
Proof of Lemma 6.2. To prove the first inequality, we notice that the change of variable (v ′ , v ′ * ) → (v, v * ) yields the following inequality:
R d (1 + |v| γ ) Q + n (f ) dv 2 R d 1 + |v| 2 f Q - n (f ) dv,
from which the result follows due to Lemma 6.1.
The last two inequalities follow respectively from the Lemma 3.7 and Lemma 3.5, as the truncated kernel is bounded by the collision kernel, and the following inequality for α < d:
R d f (v) |v -v 0 | α dv S d-1 d -α f L ∞ v + f L 1 v .
6.2. Construction of a sequence of approximate solutions to the truncated equation. In this subsection we will start our path towards showing local existence of solutions to the Boltzmann-Nordheim equation by finding solutions to the truncated Boltzmann-Nordheim equation
∂ t f n = Q n (f n )
on an interval [0, T 0 ], when n ∈ N is fixed and T 0 is independent of n. We will do so by an explicit Euler scheme.
To simplify the writing of what follows, we denote the mass and the energy of f 0 respectively by M 0 and M 2 and we introduce the following notations: (6.1)
C L = C Φ l b M 0 , (6.2) K ∞ = 2 f 0 L ∞ v min (1, C L ) , (6.3)
E ∞ = sup (v,v ′ )∈R d ×R d E vv ′ f 0 (v ′ * ) dE(v ′ * ) + C +E M 0 (1 + 2K ∞ ) S d-1 d + γ -1 K ∞ + M 0 and (6.4) C ∞ = C Φ C γ l b (M 0 + M 2 )K ∞ + C + E ∞ (1 + 2K ∞ ) S d-1 1 + γ K ∞ + M 0 .
We are now ready to define the time interval on which we will work:
(6.5) T 0 = min 1 ; K ∞ 2C ∞ min (1, C L ) .
For a fixed n we consider the following explicit Euler scheme on [0, T 0 ]: for j ∈ N we define (6.6
) f (0) j,n (v) = f 0 (v) f (k+1) j,n (v) = f (k) j,n (v) 1 -∆ j Q - n f (k) n + ∆ j Q + n f (k) j,n , for k ∈ 0, . . . , T 0 ∆ j , .
where ∆ j , the time step, is chosen as follows:
(6.7) ∆ j = min 1, 1 2C Φ l b jn γ M 0 [1 + 2K ∞ ]
.
We notice the following properties of the sequence:
Proposition 6.3. For all k in {0, . . . , [T 0 /∆ j ]}, we have that f (k) j,n satisfies: (i) f (k) j,n 0; (ii) f (k) j,n L 1 v = M 0 , |v| 2 f (k) j,n L 1 v = M 2 and R d vf (k) j,n dv = M 1 ; (iii) f (k) j,n (v) f 0 (v) -C L k-1 l=0 ∆ j (n γ ∧ (1 + |v| γ )) f (l) j,n + k∆ j C ∞ and for almost every (v, v ′ ) E vv ′ f (k) j,n (v ′ * )dE(v ′ * ) E vv ′ f 0 (v ′ * )dE(v ′ * )+k∆ j C +E M 0 (1+2K ∞ ) S d-1 d + γ -1 K ∞ + M 0 (iv) sup v∈R d f (k) j,n (v) + C L ∆ j k-1 l=0 (n γ ∧ (1 + |v| γ )) f (l) j,n (v) K ∞
and for almost every (v, v ′ ),
E vv ′ f (k) j,n (v ′ * )dE(v ′ * ) E ∞ .
Proof of Proposition 6.3. The proof of the proposition is done by induction. The case k = 0 follows directly from our definitions of K ∞ and E ∞ . We proceed to assume that the claim is valid for k such that k + 1 T 0 ∆ j . Combining Lemma 6.1 with (ii) and (iv) of Proposition 6.3 for f (k) j,n we have that
∆ j Q - n f (k) j,n L ∞ v ∆ j C Φ l b n γ M 0 (1 + 2K ∞ ) 1 2 .
Thus, by definition of f
(k+1) j,n : f (k+1) j,n (v) 1 2 f (k) j,n (v) + ∆ j Q + n f (k) j,n 0 as f (k) j,n 0, proving (i).
Furthermore, we have
R d 1 v |v| 2 f (k+1) j,n (v)dv = R d 1 v |v| 2 f (k) j,n (v)dv+∆ j R d 1 v |v| 2 Q n (f (k) j,n )(v)dv.
Since Q n satisfies the same integral properties of Q, we find that the last term is zero. This shows that as f (k) j,n satisfies (ii), so does f (k+1) j,n .
In order to prove (iii) we will use the positivity of f (k) j,n along with Lemma 6.1, and Lemma 6.2 together with property (iv) for f (k) j,n . This shows that:
f (k+1) j,n (v) f (k) j,n (v) -C L ∆ j (n γ ∧ (1 + |v| γ )) f (k) j,n + ∆ j C ∞ proving the first part of (iii). Since Q - n (f (k) j,n
) is positive we also find for almost every (v, v ′ )
E vv ′ f (k+1) j,n (v ′ * ) dE(v ′ * ) E vv ′ f (k) j,n dE(v * ) + ∆ j E vv ′ Q + n (f (k) j,n ) dE(v * ) E vv ′ f (k) j,n dE(v * ) + ∆ j C +E M 0 (1 + 2K ∞ ) S d-1 d + γ -1 K ∞ + M 0
where we have used property (ii) of Lemma 6.2, and properties (ii) and (iv) of f (k) j,n . Thus, the second part of (iii) is valid by the same property for f
(k) j,n .
The last property (iv) is a direct consequence of (iii) along with the fact that (k + 1)∆ j T 0 , and the definition of T 0 .
As a discrete version of the Boltzmann-Nordheim equation, our apriori estimates in Section 3 led us to believe that we may be able to propagate moments and weighted L ∞ norm in our sequence. This is indeed the case, as we will state shortly. However, it is important to notice that while the truncated kernel can be thought of as a an appropriate kernel with γ = 0, in order to get bounds that are independent in n we must use estimation that use the γ given in the problem. This will lead to a drop in the power we can weight the function against. The following Lemma is easy to prove using similar methods to the ones presented in Section 3. We state it here and leave the proof to the Appendix. Lemma 6.4. Consider the sequence defined in (6.6).
(i) Let s > 2, there exists C s > 0 (uniform constant defined in Lemma B.1) such that for any j j 0 = 2(1 + M 2 )C s /M 0 we have that (6.8)
R d (1 + |v| s )f (k) j,n (v)dv (D s k∆ j + 1) R d (1 + |v| s )f 0 (v)dv,
where
D s = 4C Φ C s l b (1 + 2K ∞ )(1 + M 2 ). (ii) If f 0 ∈ L ∞ s,v when s > d + 2γ then for any s ′ < s -2γ W s ′ = sup k,j j 0 ,n f (k) j,n L ∞ s ′ ,v
< ∞.
6.3.
Convergence towards a mass and momentum preserving solution of the truncated Boltzmann-Nordheim equation. In the previous subsection we have constructed a family of functions f
(k) j,n k∈{0,...,[T 0 /∆ j ]} in L 1 2,v ∩ L ∞ s ′ ,v
, for s ′ < s -2γ, with the same mass and energy as the initial data f 0 . Our next goal is to use this family in order to find a sequence of functions, (f j,n
) j∈N in L 1 ([0, T 0 ] × R d ) ∩ L ∞ [0, T 0 ]; L ∞ s ′ ,v (R d
) that converges strongly to a solution of the truncated Boltzmann-Nordheim equation, while preserving the mass and energy of the initial data. The construction of such sequence is fairly straight forward -we view the sequence f (k) j,n k∈{0,[...,T 0 /∆ j ]+1} as a constant in time sequence of functions and construct a piecewise function using them. Indeed, we define for any j ∈ N:
(6.9) f j,n (t, v) = f (k) j,n (v) (t, v) ∈ [k∆ j , (k + 1)∆ j ) × R d
, where we replace of ([T 0 /∆ j ] + 1)∆ j by T 0 . Proposition 6.5.
Let f 0 ∈ L 1 2,v ∩ L ∞ s,v for s > d + 2γ. Then, the sequence (f j,n ) j∈N converges strongly in L 1 ([0, T 0 ] × R d ) to a function f n that belongs to L 1 ([0, T 0 ] × R d ) and L ∞ [0, T 0 ]; L ∞ s ′ ,v (R d )
. Moreover: (i) f n is a solution of the truncated Boltzmann-Nordheim equation (1.4) with Q replaced by Q n and initial data f 0 , (ii) f n is positive and for all t in [0,
T 0 ], ψ(•)f n (t, •) L 1 v = ψf 0 L 1 v for ψ(v) = 1, v, |v| 2 . (iii) f n satisfies sup t T 0 f n (t, •) L ∞ v K ∞ and sup t T 0 f n (t, •) L ∞ s ′ ,v W s ′
for any s ′ < s -2γ, where W s ′ has been defined in Lemma 6.4.
Proof of Proposition 6.5. For simplicity in the proof we will drop the subscript n. We start by noticing that by its definition and Proposition 6.3, {f j } j∈N has the same mass, energy and momentum as f 0 . We will now show that (f j ) j∈N is a Cauchy sequence in L 1 ([0, T 0 ] × R d ). Indeed, by its definition we find that
f (k) j (v) -f (0) j (v) = ∆ j k-1 l=0 Q n f (l) j (v).
This, combined with the definition of f j , shows that if t ∈ [k∆ j , (k + 1)∆ j ) we have that (6.10)
f j (t, v) = f 0 (v) + t 0 Q n (f j (s, v)) ds -(t -k∆ j ) Q n f (k) j .
For a given j l we see that
f j (t, •) -f l (t, •) L 1 v t 0 Q + n (f j (s, •)) -Q + n (f l (s, •))dvds + E j
where we have used Lemma 6.2 and Proposition 6.3 and the symmetry of the collision operators. We also denoted
E j = 4 (1 + 2K ∞ ) C + E ∞ S d-1 1 + γ K ∞ + M 0 + C Φ l b n γ (M 0 + M 2 ) ∆ j .
We conclude that, using the algebraic property (5.2),
f j (t, •) -f l (t, •) L 1 v C Φ l b n γ (1 + 2K ∞ ) t 0 R d ×R d (f j + f l ) |f j, * -f l, * | dv * dvds + 2C Φ n γ t 0 R d ×R d ×S d-1 b(cos θ)f j f j, * f ′ j -f ′ l dvdv * dσ + E j
Next, using Lemma 6.4 we have that
f i (v)f i (v * ) W 2 s ′ (1 + |v| s ′ )(1 + |v * | s ′ ) W 2 s ′ 1 + 2 2-s ′ 2 |v| 2 + |v * | 2 s ′ 2 ,
for any i ∈ N. Thus,
t 0 R d ×R d ×S d-1 b(cos θ)f j f j, * f ′ j -f ′ l dvdv * dσ l b W 2 s ′ R d ×R d |f j -f l | 1 + 2 s ′ -γ 2 |v * | s ′ dvdv * = l b C s ′ W 2 s ′ t 0 f j (s, •) -f l (s, •) L 1 v ds if s ′ > d,
where C s ′ is a uniform constant. Since s > d + 2γ and we can pick any s ′ up to s -2γ, the above is valid for s ′ = d + ǫ for ǫ > 0 small enough. Thus,
f j (t, •) -f l (t, •) L 1 v C n t 0 f j (s, •) -f l (s, •) L 1 v ds + E j
where C n > 0 is independent of j, l and t. From which we conclude that
f j (t, •) -f l (t, •) L 1 v E j e Cnt
As E j goes to zero as j goes to infinity, the above shows that {f j } j∈N converges to a function f both in L 1 v for any fixed t and in L 1 [0, T 0 ] × R d . Thanks to Passing to an appropriate subsequence, which we still denote by {f j } j∈N we can assume that f j converges pointwise to f almost everywhere. As such, the preservation of mass and (iii) follow immediately form the associated properties of the sequence. Moreover, thanks to (6.10) and the strong convergence we just showed, we conclude that
f (t, v) = f 0 (v) + t 0 Q n (f (s, v))ds,
showing (i).
We are only left with showing the conservation of momentum and energy. Using Fatou's lemma we find that
R d |v| 2 f (v)dv lim inf j→∞ R d |v| 2 f j (v)dv = R d |v| 2 f 0 (v)dv.
If we show tightness of the sequence |v| 2 f j (v) j∈N , i.e. that for any ǫ > 0 there exists R ǫ > 0 such that sup j∈N |v|>Rǫ |v| 2 f j (v)dv < ǫ then the converse will be valid and we would show the conservation of the energy.
To prove this we recall Lemma 4.4 for f 0 and denote the appropriate convex function by ψ. We claim that there exists C > 0, depending only on the initial data, γ, d and the collision kernel but not j such that for all j ∈ N, for all k ∈ {0, . . . , [T 0 /∆ j ] + 1}, (6.11)
R d f (k) j (v)ψ |v| 2 dv R d ψ |v| 2 f 0 (v) dv + Ck∆ j f 0 2 L 1 2,v
, This will imply that (6.12)
R d f j (t, v)ψ |v| 2 dv R d ψ |v| 2 f 0 (v) dv + C f 0 2 L 1 2,v
, from which the desired result follows as ψ(x) = xφ(x) for a concave, increasing to infinity function φ.
We prove (6.11) by induction. The case k = 0 is trivial and we proceed to assume that (k + 1)∆ j T 0 and that inequality (6.11) is valid for f
(k) j . Defining M (k) j = R d f (k) j (v)ψ |v| 2 dv
and using the definition of f (k) j and Lemma 1.1 we find that
M (k+1) j = M (k) j + ∆ j R d ψ |v| 2 Q n (f (k) j )(v) dv = M (k) j + C Φ ∆ j 2 R d ×R d (n ∧ |v -v * |) γ f (k) j (v)f (k) j (v * ) × S d-1 1 + f (k) j (v ′ ) + f (k) j (v ′ * ) b(cos θ) (ψ ′ * + ψ ′ -ψ * -ψ) dσ dv * dv = M (k) j + C Φ ∆ j 2 R d ×R d (n ∧ |v -v * |) γ f (k) j (v)f (k) j (v * ) [G(v, v * ) -H(v, v * )] dv * dv. (6.13)
for the appropriate G and H given by Lemma 4.1. Moreover, Lemma 4.1 implies that
G(v, v * ) C G |v| |v * | , H(v, v * ) 0,
where C G depends only on the collision kernel, γ, d and possibly the mass of f (k) j . As the latter is uniformly bounded for all j and k by K ∞ we can assume that C G is a constant that is independent of j and k. From the above we conclude that
M (k+1) j M (k) j + C Φ ∆ j 2 C G R d ×R d |v -v * | γ |v| |v * | f (k) j (v)f (k) j (v * ) dv * dv M (k) j + C Φ ∆ j 2 C G R d (1 + |v| γ ) |v| f (k) j (v) dv 2 M (k) j + C Φ ∆ j C G f (k) j 2 L 1 2,v R d ψ |v| 2 f 0 (v) + ∆ j (Ck + C Φ C G ) f 0 2 L 1 2,v
showing the desired result for the choice C = C Φ C G . A similar, yet simpler, proof (as we have bounded second moment) shows the conservation of momentum.
6.4. Existence of a solution to the Boltzmann-Nordheim equation. Now that we have solutions to the truncated equation, we are ready to show the existence theorem.
Proof of Theorem 2.2. If γ = 0 then the truncated equation is actually the full equation. As such, Proposition 6.5 shows (i). From now on we will assume that γ > 0 and s > d + 2 + γ. We notice that in that case there exists ǫ > 0 such that the f 0 ∈ L 1 2+γ+ǫ,v . We denote by {f n } n∈N the solutions to the truncated equation
∂ t f n (t) = Q n (f n ) t > 0, v ∈ R d f (0, v) = f 0 (v) v ∈ R d
given by Proposition 6.5. We will show that the sequence is Cauchy. In what follows, unless specified otherwise, constants C that appear will depend on K ∞ , E ∞ , C ∞ , T 0 and f 0 but not on n, m and t. Assuming that n m and following the same technique as the one in Lemma 5.3 we see that
d dt f n (t) -f m (t) L 1 2,v = R d sgn(f n (t) -f m (t))(1 + |v| 2 ) (Q n (f n (t)) -Q n (f m (t))) dv + R d sgn(f n (t) -f m (t))(1 + |v| 2 ) (Q n (f m (t)) -Q m (f m (t))) dv = I 1 + I 2 .
Exactly as in Lemma 5.3
I 1 C (1 + M 2+γ (f n + f m )) f n -f m L 1 2,v + f n -f m L ∞ v
Since M 2+γ (f 0 ) < ∞ we find that, due to Lemma 6.4, the sequence f
(k) j,n j,n∈N
, and as such, our f n , have a uniform bound, depending on f 0 , on their moment of order 2 + γ. Thus,
I 1 C 1 f n -f m L 1 2,v + f n -f m L ∞ v
For the second term we notice that
|Q n (f m )(v) -Q m (f m )(v)| C Φ b ∞ (1 + 2K ∞ ) {|v-v * | m}×S d-1 |v -v * | γ f ′ m f ′ m, * + f m f m, * dv * dσ (6.14)
and as such
R d ×R d ×S d-1 (1 + |v| 2 ) |Q n (f m )(v) -Q m (f m )(v)| dv C n ε R d ×R d ×S d-1 1 + |v| 2 + |v * | 2 |v -v * | γ+ǫ f m f m, * dvdv * dσ C n ε (M 0 + M 2 + M 2+γ+ǫ ) 2
where M 2+γ+ǫ is a uniform bound on the 2 + γ + ǫ moment of all {f n } n∈N , depending only on f 0 and other parameters of the problem. We conclude that
I 2 C 2 m ǫ . Thus, (6.15) d dt f n (t) -f m (t) L 1 2,v C 1 f n -f m L 1 2,v + f n -f m L ∞ v + C 2 m ǫ .
Next, we turn our attention to the L ∞ norm. In order to do that we notice that due to Proposition 6.5
|v -v * | α f ′ m f ′ m, * C s ′ ,α W 2 s ′ 1 1 + |v ′ | s ′ -α 1 1 + |v ′ * | s ′ + 1 1 + |v ′ * | s ′ -α 1 1 + |v ′ | s ′ , D s ′ ,α 1 + |v * | s ′ -α (6.16)
and the same holds replacing (v ′ , v ′ * ) by (v, v * ). We therefore see that by choosing α
= γ + ǫ, if s ′ -d > γ + ǫ we have that (6.17) |Q n (f m )(v) -Q m (f m )(v)| D s ′ ,α m ǫ .
where D s ′ ,α is a constant that depends only on the parameters on the problems. Due to Lemma 6.4 we know that we can choose s ′ as close as we want to
s -2γ > d + 2 -γ d + γ.
Using the above, and Lemma 5.4 we see that
f n (t) -f m (t) L ∞ v C t 0 f n -f m L 1 2,v + f n -f m L ∞ v ds + t 0 |Q n (f m )(v) -Q m (f m )(v)| ds C t 0 f n -f m L 1 2,v + f n -f m L ∞ v ds + D s ′ ,α T 0 m ǫ .
Following in the steps of the proof of the uniqueness in Section 5 we choose t 0 , depending only on T 0 and D 0 such that for all t t 0 sup s∈
[0,t] f n -f m L ∞ v C 3 t 0 f n -f m L 1 2,v ds + 2D s ′ ,α T 0 m ǫ .
Combining this with the integral version of (6.15) gives us that in [0, t 0 ]
f n (t) -f m (t) L 1 2,v f n (0) -f m (0) L 1 2,v + C t 0 (1 + t -s) f n (s) -f m (s) L 1 2,v ds + 1 m ε .
As f n (0) = f m (0) we find that the above is enough to show that {f n } n∈N is Cauchy in L 1 2,v as well as L 1 2,t,v for t ∈ [0, t 0 ]. As t 0 was independent of n, m and the bound that we used are valid for all t ∈ [0, T 0 ] we can use the fact that {f n (t 0 )} n∈N is Cauchy and repeat the process. This shows that the sequence is Cauchy in all of [0, T 0 ] and we denote by f its limit in L 1 2,t,v . Using the strong convergence of f n to f , and the fact that f n solves the truncated equation, we conclude that for any such φ
t 0 R d φ(t, v) f (t, v) -f 0 (v) - t 0 Q(f )(s, v)ds = 0
which shows that f is indeed the desired solution.
Since the convergence of f n to f is in L 1 2,t,v we conclude the conservation of mass, momentum and energy.
Lastly, we notice that T 0 , the time we have worked with from the sequence f (k) j,n j,n , depends only on f 0 and parameters of the collision. Thus If f (t, •) L ∞ v is bounded on [0, T 0 ] we can use Theorem 2.2 together with the conservation of mass, momentum and energy, to repeat our arguments and extend the time under which the solution exists. We conclude that we can 'push' our solution up to a time T max such that lim sup
T →T - max f L ∞ [0,T ]×R d = +∞.
This completes the proof.
We end this section with a few remarks.
Remark 6.6.
(i) As we have shown existence of a mass, momentum and energy conserving solution to the Boltzmann-Nordheim equation that is in L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ v , the a priori estimate given by Theorem 3.1 actually improve our regularity of the solution and we learn that f belongs to
L ∞ loc [0, T 0 ), L 1 2,v ∩ L ∞ s ′ ,v for all s ′ < s.
(ii) Note that we have given an explicit way to find the solution, as all our sequences converge strongly.
Appendix A. Simple Computations
We gather a few simple computations in this Appendix to make some of the proofs of the paper more coherent, without breaking the flow of the paper.
A.1. Proof of Lemma 3.2. We start by noticing that
R d f (v * ) |v -v * | -α dv * = |v-v * |<1 f (v * ) |v -v * | -α dv * + |v-v * |>1 f (v * ) dv * f L ∞ v |x|<1 |x| -α dx + f L 1 v = C d,α f L ∞ v + f L 1 v
implying that the required integral is uniformly bounded in all v as 0 α < d. Thus, in order to prove the Lemma we can assume that |v| > 1. For such a v consider the sets The last integration is finite if and only if d -2 -α > -1, which is valid in our case. The proof is thus complete. Then for any 0 < t < T and |v| Proof of Lemma A.3. Defining φ(t) = e (C 1 (1+|v|) α -C 2 )t ψ(t), we find that
A = v * ∈ R d ; |v * | |v| 2 , B = v * ∈ R d ; |v -v * | |v| s 2 -s 1 d 2 , and C = (A ∪ B) c . We have that A f (v * ) |v -v * | -α dv * 2 α |v| -α A f (v * ) dv * 2 α |v| -α f L 1 v
φ ′ C 3 (1 + |v|) β e (C 1 (1+|v|) α -C 2 )t
Using the assumption on v, which is equivalent to
C 1 (1 + |v|) α -C 2 > C 1 2 (1 + |v|) α > 0
we find that ψ(t) e -(C 1 (1+|v|) α -C 2 )t ψ(0) + C 3
(1 + |v|) β (C 1 (1 + |v|) α -C 2 ) 1 -e -(C 1 (1+|v|) α -C 2 ) t ψ(0) + 2C 3 C 1 (1 + |v|) α+β .
from which the result follows. In what follows we will drop the subscript j, n from the proofs to simplify the notation Proof. The proof, as usual, goes by induction.The step k = 0 is immediate. Assuming the claim is valid for k we have that
R d (1 + |v| s )f (k+1) (v)dv = R d (1 + |v| s )f (k) (v)dv + ∆ j R d |v| s Q n f (k) (v)dv R d (1+|v| s )f (k) (v)dv+C Φ C s l b (1+2K ∞ )∆ j R d ×R d |v| s |v * | + |v| s-1 |v * | 2 f (k) (v)f (k) (v * )dvdv *
where we have used a Povzner inequality much like Proposition 3.9. Thus, We have the following:
R d (1+|v| s )f (k+1) (v)dv (1 + 2C Φ C s l b (1 + 2K ∞ )(1 + M 2 )∆ j ) R d (1+|v| s )f (k) (v)dv (1 + 2C Φ C s l b (1 + 2K ∞ )(1 + M 2 )∆ n ) (D s k∆ n + 1)
Lemma B.2. Consider the sequence defined in (6.6). We have that (i) if r 2 is such that R d (1 + |v| r )f 0 (v)dv < ∞ then for any j j 0
E vv ′ ψ v (v ′ * )f (k) j,n (v ′ * )dE(v ′ * ) E vv ′ ψ v (v ′ * )f 0 (v ′ * )dE(v ′ * ) + B ∞ k∆ j (1 + |v|) -r+γ-1 ,
where
B ∞ = C Φ C d,γ b ∞ (1 + D r + K ∞ )
E vv ′ ψ v (v ′ * )f (k) j,n (v ′ * )dE(v ′ * ) C s,ǫ f 0 L ∞ s,v + B ∞ k∆ j (1 + |v|) -(s-d+1-ǫ-γ) ,
where C s,ǫ is a uniform constant that depends only on s and ǫ.
E vv ′ ψ v (v ′ * )f (k+1) (v ′ * )dE(v ′ * ) E vv ′ ψ ′ v, * f (k)′ * dE(v ′ * ) + ∆ j E vv ′ ψ ′ v, * Q + f (k)′ * dE(v ′ * ) E vv ′ ψ ′ v, * f ′ 0, * dE(v ′ * ) + B ∞ k∆ j (1 + |v|) r-γ+1 + C Φ C d,γ b ∞ (1 + D r + K ∞ ) 3 ∆ j (1 + |v|) r-γ+1
.
The choice of r follows the remark at the beginning of the proof of Proposition 3.12.
(ii) follows much like Remark 3.13. The proof of (iii) follows the same method of the proof of Theorem 3.1, with a few small changes to give a uniform bound on the weighted norm that will be independent of the truncation. As seen in the aforementioned proof, together with (ii) As such, we find that
Q + n (f (k) )(v) Q + (f (k) )(v) C 0,
f (k+1) (v) f (k) + Q + (f (k) )(v) f (k) + C 0,s,ǫ (1 + |v|) -δ ,
implying that one can prove inductively that there exists a constant W 1 that depends only on the s, ǫ, the initial data and the collision parameters such that
f (k) (v) W 1 k∆ j (1 + |v|) -δ + f 0 (v).
This implies that, with the notations of (iii), W δ < ∞. At this point we continue by induction and by using Lemma 3.2 with s 1 = δ. Note that the process can not go beyond s -2γ. Denoting by ξ = s -d + 1 -γ + 2(1+γ) d , we see that the process can continue until
s ′′ < ξ ∞ j=0 d -1 -γ d j = d 1 + γ ξ.
Because s > d + γ the above is bigger than s -2γ which means that we will reach the desired result in finitely many steps, completing the proof.
Lemma 3 . 8 .
38 Let a ∈ R d and define
4. 1 .
1 An extended version of a Povzner-type inequality. The main result of this subsection is the following Povzner-type inequality for the Boltzmann-Nordheim equation. Lemma 4.1. Let b(θ) be a positive bounded function and let
= 1 and d = 3. Much like in the work of Mischler and Wennberg, we decompose |v ′ | 2 and |v ′ * | 2 to a convex combination of |v| 2 and |v * | 2 and a remainder term, and use convexity/concavity properties of ψ and φ.
Proposition 4 . 8 .
48 Let f be a non-negative solution of the Boltzmann-Nordheim equation in L
2 -s 1 d 2 = |v| 1 - 1 2 2 |x| |v| s 2 -s 1 d 2 |x| -α dx = 2 s 2 S 2 -s 1 d 2 .A. 2 .|v -v 1 |and r 2 |v -a| 3r 2 . 1 dσ(v 1 )|v -v 1 | 2 dθ= C d,α r -α π 2 0
222222222212211122 as |v -v * | |v| -|v * | |v| /2.Next, we notice that if v * ∈ B and |v| > 1 then since s 2 -s 1 < d we have that|v * | |v| -|v -v * | |v| -|v| s |v| d-(s 2 -s 1 ) d |v| 2 . Thus B f (v * ) |v -v * | -α dv * f L ∞ s 2 ,v B (1 + |v * | s 2 ) -1 |v -v * | -α dv * 2 s 2 f L ∞ s 2 ,v |v| -s d-1 d -α f L ∞ s 2 ,v |v| -s 2 + (d-α)(s 2 -s 1 ) d = C d,α f L ∞ s 2 ,v |v| -s 1 -α(s 2 -s 1 ) d . Lastly, when v * ∈ C we have that |v * | |v| 2 and |v -v * | |v| s Thus C f (v * ) |v -v * | -α dv * 2 α |v| -α(s 2 -s 1 ) d C f (v * ) (1 + |v * | s 1 ) |v * | -s 1 dv * 2 s 1 +α |v| -s 1 -α(s 2 -s 1 ) d f L 1 s 1 ,v .Combining all of the above gives the desired result. Additional estimations. Throughout this section we will denote by S d-1 r (a) the sphere of radius r and centre a ∈ R d .Lemma A.1. For any a ∈ R d and r > 0 we have that if 0 α d -1 then there existsC d,α > 0 such that -α dσ(v 1 ) C d,α r -α . Proof of Lemma A.1. We have that if v 1 ∈ S d-1 r (a) then |v -v 1 | 2 = |v -a| 2 + r 2 -2r |v -a| cos θ = (|v -a| -r) 2 + 2r |v -a| (1 -cos θ) ,where θ is the angle between the constant vector v -a and the vector v 1 . At this stage we'll look at two possibilities: ||v -a| -r| > r 2 In the first case we have that|v -v 1 | |(v -a) -= 2 α r -α .In the second case we have that |v -v 1 | 2r |v -a| (1 -cos θ) -α dσ(v 1 ) cos d-2 (θ) sin d-2 (θ) sin α (θ) dθ.
Lemma A. 2 . 1 2n 2π e -nx 2 d 2 2 2√ 2π π 0 e 2 sin d- 2 θdθr d- 2 S 3 .
212202223 Let E be any hyperplane in R d with d 3 and let a ∈ R d and r > 0. Then ϕ n (x)ds(x) S d-2 where ϕ n (x) = n 2π e -nD(x,E) 2 2 with D(x, A) the distance of x from the set A, ds(x) is the appropriate surface measure. Proof of Lemma A.2. Due to translation, rotation and reflection with respect to E we may assume that E = x ∈ R d , x d = 0 and that a = |a| êd . In that case ϕ n (x) = and on S d-1 r (a) we find that ϕ n (a + rω) = n 2π e -n(|a|+r cos θ)where θ is the angle with respect to the êd axis.ϕ n (x)ds(x) = S d-2 √ nr -n(|a|+r cos θ) 2Using the change of variables x = √ nr cos θ yields 1 Assume that ψ satisfiedψ ′ -C 1 (1 + |v|) α ψ + C 2 ψ + C 3 (1 + |v|) -βwhen C 1 , C 2 , C 3 > 0.
2C 2 C 1 1 α - 1 ( 1 +
111 |v|) α+β ψ(t) (1 + |v|) α+β ψ(0) + 2C 3 C 1 .
Appendix B . ( 1 +( 1 +
.11 Propagation of Weighted L ∞ Norms for the Truncated Operators Here we will discuss the propagation of the weighted L ∞ norms for the constructed sequence {f j,n } j∈N .Lemma B.1. Consider the sequence defined in (6.6). Let s > 2 and let C s be a uniform constant such that|v ′ | s + |v ′ * | s -|v| s -|v * | s C s |v| s-1 |v * |Then for any j j 0 = 2(1 + M 2 )C s /M 0 we have that(B.1) R d |v| s )f (k) j,n (v)dv (D s k∆ j + 1) R d |v| s )f 0 (v)dv,whereD s = 4C Φ C s l b (1 + 2K ∞ )(1 + M 2 ).
( 1 +
1 |v| s )f 0 (v)dv. The proof follows form the choice of D s . Much like in Section 3 we denote by ψ a (v) = 0 |v| < |a| 1 |v| |a|
(iii) If f 0 ∈ L ∞ s,v when s > d + 2γ then for any s ′ < s -2γ W s ′ = supk,j j 0 ,n f All the proofs will follow by induction. The step k = 0 is trivial. (i) Using Lemma B.1, Lemma 3.8, the fact that Q - n 0 and Q + n Q + we have that
s,ǫ (1 + |v|) -δ where C 0 depends only on s, ǫ, the initial data and the collision parameters, andδ = min s -2γ -ǫ, s -d + 1 -γ -ǫ + 2(1 + γ) d .
3 and C d,γ is a uniform constant defined in Lemma 3.8. Moreover, one can choose
r = 2, if s d + 2 s
′ , if s > d + 2 and s ′ < s -d (ii) If ǫ is small enough
This, together with the second inequality of (5.13) and the moment bounds implies that for any t ∈ [0, t 0 ] we have (5.15)
,
where
Let n ∈ N be such that K 1 n and define
/t n . As X(t) C n+2,T t 2 and X(0) = 0 we conclude that X(t) is differentiable at t = 0 and as such, in [0, t 0 ]. We have that for t ∈ [0, t 0 ]:
Continuing by induction we conclude that for any n ∈ N and t ∈ [0, t 0 ]
Taking n to infinity shows that X(t) = 0 for all t ∈ [0, t 0 ], proving that f = g on that interval. If t 0 = T we are done, else we repeat the same arguments, starting from t 0 where the functions are equal, on the interval [t 0 , 2t 0 ]. Continuing inductively we conclude the uniqueness in [0, T ].
We finally have all the tools to show Theorem 2.1
Proof of Theorem 2.1. This follows immediately from Theorem 3.1, Proposition 4.3 and Theorem 5.1.
Local existence of solutions
In this section we will develop the theory of existence of local in time solutions to the Boltzmann-Nordheim equation and prove Theorem 2.2. From this point onwards, we assume that f 0 is not identically 0.
The method of proof we will employ to show the above theorem involves a time discretisation of equation (1.4) along with an approximation of the Boltzmann-Nordheim collision operator Q, giving rise to a sequence of approximate solutions to the equation. | 94,902 | [
"739558"
] | [
"236275",
"37391"
] |
01492036 | en | [
"math"
] | 2024/03/04 23:41:50 | 2015 | https://hal.science/hal-01492036/file/lowerboundBE_maxwellianBC_pdf.pdf | Marc Briant
email: [email protected]
INSTANTANEOUS EXPONENTIAL LOWER BOUND FOR SOLUTIONS TO THE BOLTZMANN EQUATION WITH MAXWELLIAN DIFFUSION BOUNDARY CONDITIONS
Keywords: Boltzmann equation, Exponential lower bound, Maxwellian lower bound, Maxwellian diffusion boundary conditions
We prove the immediate appearance of an exponential lower bound, uniform in time and space, for continuous mild solutions to the full Boltzmann equation in a C 2 convex bounded domain with the physical Maxwellian diffusion boundary conditions, under the sole assumption of regularity of the solution. We investigate a wide range of collision kernels, with and without Grad's angular cutoff assumption. In particular, the lower bound is proven to be Maxwellian in the case of cutoff collision kernels. Moreover, these results are entirely constructive if the initial distribution contains no vacuum, with explicit constants depending only on the a priori bounds on the solution.
Introduction
The Boltzmann equation rules the dynamics of rarefied gas particles moving in a domain Ω of R d with velocities in R d (d 2) when the only interactions taken into account are elastic binary collisions. More precisely, the Boltzmann equation describes the time evolution of f (t, x, v), the distribution of particles in position and velocity, starting from an initial distribution f 0 (x, v) .
In the present article we are interested in the case where the gas stays inside a domain of which walls are heated at a constant temperature T ∂ . Contrary to the classical specular (billiard balls) or bounce-back reflections boudary conditions, the temperature of the boundary generates a diffusion towards the inside of the domain which prevents the usual preservation of energy of the gas.
We investigate the case where Ω is a C 2 convex bounded domain and that the boundary conditions are Maxwellian diffusion. The Boltzmann equation reads
∀t 0 , ∀(x, v) ∈ Ω × R d , ∂ t f + v • ∇ x f = Q(f, f ), (1.1) ∀(x, v) ∈ Ω × R d , f (0, x, v) = f 0 (x, v),
with f satisfying the Maxwellian diffusion boundary condition:
∀(t, x, v) ∈ R*_+ × ∂Ω × R^d,  f(t, x, v) = f_∂(t, x, v),  where
(1.2)  f_∂(t, x, v) = ( ∫_{v·n(x)>0} f(t, x, v) (v · n(x)) dv ) (2π)^{−(d−1)/2} T_∂^{−(d+1)/2} e^{−|v|²/(2T_∂)},
with n(x) denoting the outwards normal to Ω at x on ∂Ω. This boundary condition expresses the physical process where particles are absorbed by the wall and then emitted back into Ω according to the thermodynamical equilibrium distribution between the wall and the gas.
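The normalising constant in (1.2) is chosen precisely so that the wall Maxwellian re-emits exactly the mass flux it absorbs. As a worked check (ours, not spelled out in the original), write v = (v_⊥, v_t) with v_⊥ = v · n(x) and v_t the tangential component:

\int_{v\cdot n(x)<0} |v\cdot n(x)| \, \frac{e^{-|v|^2/(2T_\partial)}}{(2\pi)^{(d-1)/2} T_\partial^{(d+1)/2}} \, dv
= \frac{\left(\int_{\mathbb{R}^{d-1}} e^{-|v_t|^2/(2T_\partial)}\, dv_t\right)\left(\int_0^{+\infty} u\, e^{-u^2/(2T_\partial)}\, du\right)}{(2\pi)^{(d-1)/2} T_\partial^{(d+1)/2}}
= \frac{(2\pi T_\partial)^{(d-1)/2}\, T_\partial}{(2\pi)^{(d-1)/2} T_\partial^{(d+1)/2}} = 1,

so the outgoing flux at x equals the incoming flux ∫_{v·n(x)>0} f (v · n(x)) dv and the boundary condition conserves mass at the wall.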
The operator Q(f, f ) encodes the physical properties of the interactions between two particles. This operator is quadratic and local in time and space. It is given by
Q(f, f) = ∫_{R^d × S^{d−1}} B(|v − v_*|, cos θ) [f′f′_* − f f_*] dv_* dσ,
where f ′ , f * , f ′ * and f are the values taken by f at v ′ , v * , v ′ * and v respectively. Define:
v′ = (v + v_*)/2 + (|v − v_*|/2) σ,   v′_* = (v + v_*)/2 − (|v − v_*|/2) σ,   and cos θ = ⟨(v − v_*)/|v − v_*|, σ⟩.
We recognise here the conservation of kinetic energy and momentum when two particles of velocities v and v_* collide to give two particles of velocities v′ and v′_*. The collision kernel B ≥ 0 contains all the information about the interaction between two particles and is determined by physics (see [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF] or [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] for a formal derivation for the hard sphere model of particles). In this paper we shall only be interested in the case of B satisfying the following product form
(1.3) B (|v -v * |, cos θ) = Φ (|v -v * |) b (cos θ) ,
which is a common assumption as it is more convenient and also covers a wide range of physical applications. Moreover, we shall assume that Φ satisfies either
(1.4)  ∀z ∈ R:  c_Φ |z|^γ ≤ Φ(z) ≤ C_Φ |z|^γ,
or a mollified assumption
(1.5)  ∀|z| ≥ 1:  c_Φ |z|^γ ≤ Φ(z) ≤ C_Φ |z|^γ;   ∀|z| ≤ 1:  c_Φ ≤ Φ(z) ≤ C_Φ,
c Φ and C Φ being strictly positive constants and γ in (-d, 1]. The collision kernel is said to be "hard potential" in the case of γ > 0, "soft potential" if γ < 0 and "Maxwellian" if γ = 0.
Finally, we shall consider b to be a continuous function of θ on (0, π], strictly positive near θ ∼ π/2, which satisfies
(1.6)  b(cos θ) sin^{d−2} θ ∼_{θ→0⁺} b_0 θ^{−(1+ν)}
for b_0 > 0 and ν in (−∞, 2). The case when b is locally integrable, ν < 0, is referred to as Grad's cutoff assumption (first introduced in [START_REF] Grad | Principles of the kinetic theory of gases[END_REF]) and B will then be said to be a cutoff collision kernel. The case ν ≥ 0 will be designated as the non-cutoff case.
1.1. Motivations and comparison with previous results. The aim of this article is to show and to quantify the strict positivity of the solutions to the Boltzmann equation when the gas particles move in a domain with boundary conditions. In that sense, it continues the study started in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] about exponential lower bounds for solutions to the Boltzmann equation when specular reflection boundary conditions were taken into account.
More precisely, we shall prove that continuous solutions to the Boltzmann equation, with Maxwellian diffusion boundary conditions in a C² convex bounded domain, which have uniformly bounded energy, satisfy an immediate exponential lower bound:
∀t ≥ t_0, ∀(x, v) ∈ Ω × R^d,  f(t, x, v) ≥ C_1 e^{−C_2 |v|^K},  for all t_0 > 0.
Moreover, in the case of collision kernels with angular cutoff we recover a Maxwellian lower bound:
∀τ > 0, ∃ρ, θ > 0:  ∀t ≥ τ, ∀(x, v) ∈ Ω × R^d,  f(t, x, v) ≥ (ρ/(2πθ)^{d/2}) e^{−|v|²/(2θ)}.
We would like to emphasize that, in the spirit of [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF], our results show that the gas will instantaneously fill up the whole domain even if the initial configuration contains vacuum. Indeed, they only require some regularity on the solution and no further assumption on its local density. Previous studies assumed the latter to be uniformly bounded from below, which is equivalent of assuming a priori either that there is no vacuum or that the solution is strictly positive.
Moreover, the present results only require solutions to the Boltzmann equation to be continuous away from the grazing set
(1.7)  Λ_0 = {(x, v) ∈ ∂Ω × R^d : n(x) · v = 0},
which is a property that is known to hold in the case of Maxwellian diffusion boundary conditions [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF].
The issue of quantifying the positivity of solutions has been investigated for a long time since it not only presents a great physical interest but also appears to be of significant importance for the mathematical study of the Boltzmann equation. Indeed, exponential lower bounds are essential for entropy-entropy production methods used to describe long-time behaviour for kinetic equations [5][6]. More recently, such lower bounds were needed to prove uniqueness of solutions to the Boltzmann equation in
L¹_v L^∞_x (1 + |v|^{2+0}) [8]. Several works quantified the study of an explicit lower bound for solutions to the Boltzmann equation. We give here a brief overview and we refer the interested reader to the more detailed description in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF].
The first result about the strict positivity of solutions to the Boltzmann equation was derived by Carleman [START_REF] Carleman | Sur la théorie de l'équation intégrodifférentielle de Boltzmann[END_REF]. Noticing that a part Q + of the Boltzmann operator Q satisfies a spreading property, roughly speaking
Q⁺(1_{B(v,r)}, 1_{B(v,r)}) ≥ C⁺ 1_{B(v, √2 r)},
with C + < 1 (see Lemma 3.2 for an exact statement), he proved the immediate creation of an exponential lower bound for a certain class of solutions (radially symmetric in velocity) to the spatially homogeneous equation with hard potential kernel with angular cutoff. The latter result was improved to a Maxwellian lower bound and extended to the case of non-radially symmetric solutions to the spatially homogeneous equation with hard potential and cutoff by Pulvirenti and Wennberg [START_REF] Pulvirenti | A Maxwellian lower bound for solutions to the Boltzmann equation[END_REF].
Finally, the study in the case of the full equation has been tackled by Mouhot [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] in the case of the torus Ω = T d , and more recently by the author [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] in C 2 convex bounded domains with specular boundary conditions in all dimension d. In both articles, a Maxwellian lower bound is derived for solutions to the full Boltzmann equation with angular cutoff (both with hard of soft potentials) and they showed the appearance of an exponential lower bound in the non-cutoff case.
Our present results show that the previous properties proven for the full Boltzmann equation [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] still hold in the physically relevant case of the Maxwellian diffusion generated by the boundary of the domain Ω. This is done for all the physically relevant collision kernels such as with and without angular cutoff and hard and soft potentials. Moreover, in the case of a solution that has uniformly bounded local mass and entropy, the proofs are entirely constructive and the constants are explicit and only depend on the a priori bounds on the solution and the geometry of the domain, which is of great physical interest for the study of the spreading of gas into a domain with heated walls.
There are two key contributions in the present article. The main one is a quantification of the strict positivity of the Maxwellian diffusion process thanks to a combination of a localised positivity of the solution and a geometrical study of the rebounds against a convex boundary. Roughly speaking, we show that the wall instantaneously starts to diffuse in all directions and that its diffusion is uniformly bounded from below. The second one is a spreading method combining the effects used in previous studies and the exponential diffusive process (see next section for details).
1.2. Our strategy. The main strategy to tackle this result relies on the breakthrough of Carleman [START_REF] Carleman | Sur la théorie de l'équation intégrodifférentielle de Boltzmann[END_REF], namely finding an "upheaval point" (a first minoration uniform in time but localised in velocity) and spreading this bound, thanks to the spreading property of the Q + operator, in order to include larger and larger velocities and finally compare it to an exponential. The case of the spatially non-homogeneous equation [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] was dealt by finding a spreading method that was invariant along the flow of characteristics.
The creation of "upheaval points" (localised in space because of boundary effects) is essentially the method developed in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] for general continuous initial datum, or the one of [9][10] for constructive purposes. There is a new technical treatment of the time of appearance of such lower bounds, but it does not present any new difficulties.
The main issue is the use of a spreading method that would be invariant along the flow of characteristics [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF]. In the case of Maxwellian diffusion boundary conditions characteristic trajectories are no longer well defined. Indeed, once a trajectory touches the boundary it is absorbed an re-emitted in all the directions. Characteristic trajectories are therefore only defined in between two consecutive rebounds and one cannot hope to use the invariant arguments developed in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF].
The case of the torus, studied in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], indicates that without boundaries the exponential lower bound is created after time t = 0 as quickly as one wants. In the case of a bounded domain with specular reflection boundary conditions [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF], this minoration also occurs immediately. Together, it roughly means that one can expect to obtain an exponential lower bound on straight lines uniformly on how close the particle is from the boundary. Therefore we expect the same kind of uniform bounds to arise on each characteristic trajectory in between two consecutive rebounds as long as the Maxwellian diffusion emitted by the boundary is uniformly non-negative.
Our strategy is therefore to first prove that the boundary condition produces a strictly positive quantity uniformly towards the interior of Ω and then to find a method to spread either this diffusion or the localised "upheaval points". More precisely, if there is no contact during a time τ > 0 we expect to be able to use the spreading method developed in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] from the initial lower bounds. Else there is a contact during the interval [0, τ ] we cannot hope to use the latter spreading method, nor its more general characteristics invariant version derived in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF], since the Maxwellian diffusion boundary condition acts like an absorption for particles arriving on the boundary. But this boundary condition also diffuses towards the interior of the domain Ω what it has absorbed. This diffusion follows an exponential law and therefore of the shape of a Maxwellian lower bound that we manage to keep along the incoming characteristic trajectory.
Collision kernels satisfying a cutoff property as well as collision kernels with a non-cutoff property will be treated following the strategy described above. The only difference is the decomposition of the Boltzmann bilinear operator Q we consider in each case. In the case of a non-cutoff collision kernel, we shall divide it into a cutoff collision kernel and a remainder. The cutoff part will already be dealt with, and a careful control of the L^∞-norm of the remainder will give us the expected exponential lower bound, which decreases faster than a Maxwellian. 1.3. Organisation of the paper. Section 2 is dedicated to the statement and the description of the main results proven in this article. It contains three different parts. Section 2.1 defines all the notations which will be used throughout the article. The last subsections, 2.2 and 2.3, are dedicated to a mathematical formulation of the results related to the lower bound in, respectively, the cutoff case and the non-cutoff case, described above. It also defines the concept of mild solutions to the Boltzmann equation in each case. Section 3 deals with the case of the immediate Maxwellian lower bound for collision kernels with angular cutoff. As described in our strategy, it follows three steps. Section 3.1 generates the localised "upheaval points" for general initial datum. A constructive approach to that problem is given in Section 3.4.
The uniform positivity of the Maxwellian diffusion is proven in Section 3.2. Finally, Section 3.3 combines the standard spreading methods with the diffusion process to prove the expected instantaneous Maxwellian lower bound.
To conclude, Section 4 proves the immediate appearance of an exponential lower bound in the case of collision kernels without angular cutoff. We emphasize here that this Section actually explains the adaptations required compared to the case of collision kernels with the cutoff property.
Main results
We begin with the notations we shall use all along this article. Functional spaces. This study will hold in specific functional spaces regarding the v variable that we describe here and use throughout the sequel. For all p in [1, ∞], we use the shorthand notation for Lebesgue spaces
L^p_v = L^p(R^d). For p ∈ [1, ∞] and k ∈ N we use the Sobolev spaces W^{k,p}_v defined by the norm
‖f‖_{W^{k,p}_v} = ( Σ_{|s| ≤ k} ‖∂^s f(v)‖^p_{L^p_v} )^{1/p}.
Physical observables and hypotheses. In the sequel of this study, we are going to need bounds on some physical observables of solution to the Boltzmann equation (1.1).
We consider here a function f(t, x, v) ≥ 0 defined on [0, T) × Ω × R^d and we recall the definitions of its local hydrodynamical quantities:
• its local energy:  e_f(t, x) = ∫_{R^d} |v|² f(t, x, v) dv,
• its local weighted energy:  e′_f(t, x) = ∫_{R^d} |v|^{γ̄} f(t, x, v) dv,  where γ̄ = (2 + γ)⁺,
• its local L^p norm (p ∈ [1, +∞)):  l^p_f(t, x) = ‖f(t, x, ·)‖_{L^p_v},
• its local W^{2,∞} norm:  w_f(t, x) = ‖f(t, x, ·)‖_{W^{2,∞}_v}.
(The local energy is illustrated numerically just below.)
Our results depend on uniform bounds on those quantities and therefore, to shorten calculations we will use the following
E_f = sup_{(t,x)∈[0,T)×Ω} e_f(t, x),   E′_f = sup_{(t,x)∈[0,T)×Ω} e′_f(t, x),
L^p_f = sup_{(t,x)∈[0,T)×Ω} l^p_f(t, x),   W_f = sup_{(t,x)∈[0,T)×Ω} w_f(t, x).
In our theorems we are giving a priori lower bound results for solutions to (1.1) satisfying some properties about their local hydrodynamical quantities. Those properties will differ depending on which case of collision kernel we are considering. We will take them as assumptions in our proofs and they are the following.
• In the case of hard or Maxwellian potentials with cutoff (γ ≥ 0 and ν < 0):
(2.1) E f < +∞.
• In the case of a singularity of the kinetic collision kernel (γ ∈ (-d, 0)) we shall make the additional assumption
(2.2) L pγ f < +∞, where p γ > d/(d + γ).
• In the case of a singularity of the angular collision kernel (ν ∈ [0, 2)) we shall make the additional assumption
(2.3)  W_f < +∞,  E′_f < +∞.
Assumption (2.2) implies the boundedness of the local entropy, and if γ ≥ 0 we have E′_f ≤ E_f, so in some cases several assumptions might be redundant. Moreover, in the case of the torus with periodic conditions or the case of a bounded domain with specular boundary reflections [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF], solutions to (1.1) also satisfy the conservation of the total mass and the total energy. The case with Maxwellian diffusion boundary conditions only preserves, in general (see [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] for instance), the total mass:
(2.4)  ∃M ≥ 0, ∀t ∈ R₊,  ∫_Ω ∫_{R^d} f(t, x, v) dx dv = M.
Characteristic trajectories. The characteristic trajectories of the equation are only defined between two consecutive rebounds against the boundary and they are given by straight lines that we will denote by
(2.5)  ∀ 0 ≤ s ≤ t, ∀(x, v) ∈ R^d × R^d,  X_{s,t}(x, v) = x − (t − s)v.
Because Ω is a closed set, we can define the first time of contact between a backward trajectory and ∂Ω:
(2.6)  ∀(x, v) ∈ Ω × R^d,  t_∂(x, v) = min{t ≥ 0 : x − vt ∈ ∂Ω},
and the contact point between such a trajectory and the boundary:
(2.7)  ∀(x, v) ∈ Ω × R^d,  x_∂(x, v) = x − v t_∂(x, v).
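For concreteness, here is a minimal numerical sketch (ours, not part of the paper) of the backward contact time (2.6) and contact point (2.7) when Ω is the unit ball: one solves |x − tv|² = 1 for the smallest t ≥ 0.

import numpy as np

# First backward contact time t_d(x, v) = min{t >= 0 : x - t*v on the unit sphere}
# and contact point x_d = x - t_d*v, for Omega the open unit ball and v != 0.
def t_boundary(x, v):
    a, b, c = v @ v, -2.0 * (x @ v), x @ x - 1.0    # |x - t v|^2 = 1
    disc = b * b - 4.0 * a * c                      # nonnegative since |x| <= 1
    roots = ((-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a))
    return min(t for t in roots if t >= 0.0)

x, v = np.array([0.3, 0.0]), np.array([1.0, 0.5])
t_d = t_boundary(x, v)
x_d = x - t_d * v                                   # contact point; |x_d| = 1
print(t_d, x_d, np.linalg.norm(x_d))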
2.2. Maxwellian lower bound for cutoff collision kernels. The final theorem we prove in the case of cutoff collision kernel is the immediate appearance of a uniform Maxwellian lower bound. We use, in that case, the Grad's splitting for the bilinear operator Q such that the Boltzmann equation reads
Q(g, h) = ∫_{R^d × S^{d−1}} Φ(|v − v_*|) b(cos θ) [h′g′_* − hg_*] dv_* dσ = Q⁺(g, h) − Q⁻(g, h),
where we used the following definitions:
(2.8)  Q⁺(g, h) = ∫_{R^d × S^{d−1}} Φ(|v − v_*|) b(cos θ) h′g′_* dv_* dσ,   Q⁻(g, h) = n_b (Φ ∗ g)(v) h = L[g](v) h,
where
(2.9)  n_b = ∫_{S^{d−1}} b(cos θ) dσ = |S^{d−2}| ∫_0^π b(cos θ) sin^{d−2} θ dθ.
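As a quick consistency check of (2.9) (ours, not in the original): for d = 3 and b ≡ 1 one gets n_b = |S^1| \int_0^\pi \sin\theta \, d\theta = 2\pi \cdot 2 = 4\pi = |S^2|, the full surface measure of the sphere, as expected.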
As already mentioned, the characteristics of our problem can only be defined in between two consecutive rebounds against ∂Ω. We can therefore define a mild solution of the Boltzmann equation in the cutoff case, which is expressed by a Duhamel formula along the characteristics. This weaker form of solutions is actually the key point for our result and also gives a more general statement.
Definition 2.1. Let f_0 be a measurable function, non-negative almost everywhere on Ω × R^d. A measurable function f = f(t, x, v) on [0, T) × Ω × R^d is a mild solution of the Boltzmann equation associated to the initial datum f_0(x, v) if
(1) f is non-negative on Ω × R^d,
(2) for every (t, x, v) in R₊ × Ω × R^d, the maps
s ↦ L[f(s, X_{s,t}(x, v), ·)](v)  and  s ↦ Q⁺[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v)
are in L¹_loc([0, T)),
(3) and for each t ∈ [0, T), for all x ∈ Ω and v ∈ R^d,
(2.10)  f(t, x, v) = f_0(x − vt, v) exp(−∫_0^t L[f(s, X_{s,t}(x, v), ·)](v) ds)
  + ∫_0^t exp(−∫_s^t L[f(s′, X_{s′,t}(x, v), ·)](v) ds′) Q⁺[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v) ds
if t ≤ t_∂(x, v), or else
(2.11)  f(t, x, v) = f_∂(t_∂(x, v), x_∂(x, v), v) exp(−∫_{t_∂}^t L[f(s, X_{s,t}(x, v), ·)](v) ds)
  + ∫_{t_∂(x,v)}^t exp(−∫_s^t L[f(s′, X_{s′,t}(x, v), ·)](v) ds′) Q⁺[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v) ds.
Theorem 2.2. Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν < 0. Let f(t, x, v) be a mild solution of the Boltzmann equation in Ω × R^d on some time interval [0, T), T ∈ (0, +∞], which satisfies:
• f is continuous on [0, T) × Ω × R^d − Λ_0 (Λ_0 the grazing set defined by (1.7)), f(0, x, v) = f_0(x, v) and M > 0 in (2.4);
• if Φ satisfies (1.4) with γ ≥ 0 or if Φ satisfies (1.5), then f satisfies (2.1);
• if Φ satisfies (1.4) with γ < 0, then f satisfies (2.1) and (2.2).
Then for all τ ∈ (0, T ) there exists ρ > 0 and θ > 0, depending on τ , E f (and L pγ f if Φ satisfies (1.4) with γ < 0), such that for all t ∈ [τ, T ) the solution f is bounded from below, almost everywhere, by a global Maxwellian distribution with density ρ and temperature θ, i.e.
∀t ∈ [τ, T), ∀(x, v) ∈ Ω × R^d,  f(t, x, v) ≥ (ρ/(2πθ)^{d/2}) e^{−|v|²/(2θ)}.
If we add the assumptions of uniform boundedness of f_0 and of the local mass and entropy of the solution f, we can use the arguments originating in [START_REF] Pulvirenti | A Maxwellian lower bound for solutions to the Boltzmann equation[END_REF] to construct explicitly the initial "upheaval point", without any compactness argument. We refer the reader to Section 3.4, which gives the following corollary.
Corollary 2.3. Suppose that conditions of Theorem 2.2 are satisfied and further assume that f 0 is uniformly bounded from below
∀(x, v) ∈ Ω × R^d,  f_0(x, v) ≥ ϕ(v) > 0,
and that f has a bounded local mass and entropy
R_f = inf_{(t,x)∈[0,T)×Ω} ∫_{R^d} f(t, x, v) dv > 0,   H_f = sup_{(t,x)∈[0,T)×Ω} ∫_{R^d} f(t, x, v) log f(t, x, v) dv < +∞.
Then the conclusion of Theorem 2.2 holds true with the constants ρ and θ being explicitly constructed in terms of τ, E_f, H_f and L^{p_γ}_f.
2.3. Exponential lower bound for non-cutoff collision kernels. In the case of non-cutoff collision kernels (0 ≤ ν < 2 in (1.6)), Grad's splitting does not make sense anymore and so we have to find a new way to define mild solutions to the Boltzmann equation (1.1). The splitting we are going to use is a standard one and it reads
Q(g, h) = ∫_{R^d × S^{d−1}} Φ(|v − v_*|) b(cos θ) [h′g′_* − hg_*] dv_* dσ = Q¹_b(g, h) − Q²_b(g, h),
where we used the following definitions
Q¹_b(g, h) = ∫_{R^d × S^{d−1}} Φ(|v − v_*|) b(cos θ) g′_* (h′ − h) dv_* dσ,
(2.12)  Q²_b(g, h) = −( ∫_{R^d × S^{d−1}} Φ(|v − v_*|) b(cos θ) [g′_* − g_*] dv_* dσ ) h = S[g](v) h.
We would like to use the properties we derived in the study of collision kernels with cutoff. Therefore we will consider additional splitting of Q.
For ε in (0, π/4) we define a cutoff angular collision kernel
b^{CO}_ε(cos θ) = b(cos θ) 1_{|θ| ≥ ε},  and a non-cutoff one  b^{NCO}_ε(cos θ) = b(cos θ) 1_{|θ| ≤ ε}.
Considering the two collision kernels
B^{CO}_ε = Φ b^{CO}_ε and B^{NCO}_ε = Φ b^{NCO}_ε, we can combine Grad's splitting (2.8) applied to B^{CO}_ε with the non-cutoff splitting (2.12) applied to B^{NCO}_ε. This yields the splitting we shall use to deal with non-cutoff collision kernels:
(2.13)  Q = Q⁺_ε − Q⁻_ε + Q¹_ε − Q²_ε,
where we use the shortened notations Q^±_ε = Q^±_{b^{CO}_ε} and Q^i_ε = Q^i_{b^{NCO}_ε} for i = 1, 2.
Thanks to the splitting (2.13), we are able to define mild solutions to the Boltzmann equation with non-cutoff collision kernels. This is obtained by considering the Duhamel formula associated to the splitting (2.13) along the characteristics (as in the cutoff case). Definition 2.4. Let f 0 be a measurable function, non-negative almost everywhere on
Ω × R d . A measurable function f = f (t, x, v) on [0, T ) × Ω × R d is
a mild solution of the Boltzmann equation with non-cutoff angular collision kernel associated to the initial datum f 0 (x, v) if there exists 0 < ε 0 < π/4 such that for all 0 < ε < ε 0 :
(1) f is non-negative on Ω × R^d,
(2) for every (t, x, v) in R₊ × Ω × R^d, the maps
s ↦ L_ε[f(s, X_{s,t}(x, v), ·)](v),   s ↦ Q⁺_ε[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v),
s ↦ S_ε[f(s, X_{s,t}(x, v), ·)](v),   s ↦ Q¹_ε[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v)
are in L¹_loc([0, T)),
(3) and for each t ∈ [0, T), for all x ∈ Ω and v ∈ R^d,
(2.14)  f(t, x, v) = f_0(x − vt, v) exp(−∫_0^t (L_ε + S_ε)[f(s, X_{s,t}(x, v), ·)](v) ds)
  + ∫_0^t exp(−∫_s^t (L_ε + S_ε)[f(s′, X_{s′,t}(x, v), ·)](v) ds′) (Q⁺_ε + Q¹_ε)[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v) ds
if t ≤ t_∂(x, v), or else
(2.15)  f(t, x, v) = f_∂(t_∂, x_∂, v) exp(−∫_{t_∂}^t (L_ε + S_ε)[f(s, X_{s,t}(x, v), ·)](v) ds)
  + ∫_{t_∂(x,v)}^t exp(−∫_s^t (L_ε + S_ε)[f(s′, X_{s′,t}(x, v), ·)](v) ds′) (Q⁺_ε + Q¹_ε)[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v) ds,
where t_∂ = t_∂(x, v) and x_∂ = x_∂(x, v).
Theorem 2.5. Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν in [0, 2). Let f(t, x, v) be a mild solution of the Boltzmann equation in Ω × R^d on some time interval [0, T), T ∈ (0, +∞], which satisfies:
• f is continuous on [0, T) × Ω × R^d − Λ_0 (Λ_0 the grazing set defined by (1.7)), f(0, x, v) = f_0(x, v) and M > 0 in (2.4);
• if Φ satisfies (1.4) with γ ≥ 0 or if Φ satisfies (1.5), then f satisfies (2.1) and (2.3);
• if Φ satisfies (1.4) with γ < 0, then f satisfies (2.1), (2.2) and (2.3).
Then for all τ ∈ (0, T ) and for any exponent K such that
K > 2 log(2 + 2ν/(2 − ν)) / log 2,
there exist C_1, C_2 > 0, depending on τ, K, E_f, E′_f, W_f (and L^{p_γ}_f if Φ satisfies (1.4) with γ < 0), such that
∀t ∈ [τ, T), ∀(x, v) ∈ Ω × R^d,  f(t, x, v) ≥ C_1 e^{−C_2 |v|^K}.
Moreover, in the case ν = 0, one can take K = 2 (Maxwellian lower bound).
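A quick numerical reading of the admissible exponents (ours, not in the original): the threshold 2 log(2 + 2ν/(2 − ν))/log 2 decreases to 2 as ν → 0, consistently with the Maxwellian rate K = 2 recovered above.

import math

# Lower admissible exponent K in Theorem 2.5 as a function of the angular
# singularity nu in [0, 2): K_min = 2*log(2 + 2*nu/(2 - nu)) / log(2).
for nu in (0.0, 0.5, 1.0, 1.5):
    K_min = 2 * math.log(2 + 2 * nu / (2 - nu)) / math.log(2)
    print(nu, K_min)   # nu = 0 gives 2.0; nu = 1 gives 4.0; K_min grows with nu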
As in the angular cutoff case, if we further assume that f 0 presents no vacuum area and that f has uniformly bounded local mass and entropy, our results are entirely constructive.
Corollary 2.6. As for Corollary 2.3, if f_0 is bounded uniformly from below, as well as the local mass of f, and the local entropy of f is uniformly bounded from above, then the conclusion of Theorem 2.5 holds true with constants being explicitly constructed in terms of τ, K, E_f, E′_f, W_f, H_f, L^{p_γ}_f.
The cutoff case: a Maxwellian lower bound
In this section we are going to prove a Maxwellian lower bound for a solution to the Boltzmann equation (1.1) in the case where the collision kernel satisfies a cutoff property.
The strategy to tackle this result follows the main idea used in [START_REF] Pulvirenti | A Maxwellian lower bound for solutions to the Boltzmann equation[END_REF][10][1] which relies on finding an "upheaval point" (a first minoration uniform in time and space but localised in velocity) and spreading this bound, thanks to a spreading property of the Q + operator, in order to include larger and larger velocities.
As described in the introduction (Section 1.2), we need a method that translates the usual spreading argument in the case of our problem and combine it with a strict positivity of the diffusion process. Roughly speaking, either the characteristic we are looking at comes from the diffusion of the boundary or the spreading will be generated on a straight line as in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF].
Thus our study will be split into three parts, which are the next three subsections. The first step (Section 3.1) is to partition the position and velocity spaces so that we have an immediate appearance of an "upheaval point" in each of those partitions.
As discussed in the introduction, the standard spreading method fails in the case of characteristic trajectories bouncing against ∂Ω. We therefore study the effects of the Maxwellian diffusion (Section 3.2).
The last one (Section 3.3) is to obtain uniform lower bounds less and less localised (and comparable to an exponential bound in the limit) in velocity. The strategy we use relies on the spreading property of the gain operator that we already used and follows the method in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] combined with the treatment of localised bounds in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF], together with the previous focus on the Maxwellian diffusion process.
A separated part, Section 3.4, is dedicated to a constructive approach to the Maxwellian lower bound.
3.1. Initial localised lower bounds. In Section 3.1.2 we use the continuity of f together with the conservation of total mass (2.4) and the uniform boundedness of the local energy to obtain a point in the phase space where f is strictly positive. Then, thanks to the continuity of f , its Duhamel representation (2.11) -(2.10) and the spreading property of the Q + operator (Lemma 3.2) we extend this positivity to high velocities at that particular point (Lemma 3.3).
Finally, in Section 3.1.3, the free transport part of the solution f will imply the immediate appearance of the localised lower bounds (Proposition 3.5).
Moreover we define constants that we will use in the next two subsections in order to have a uniform lower bound.
3.1.1. Controls on the gain and the loss operators. We first introduce two lemmas, proven in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], that control the gain and loss terms in the Grad's splitting (2.8) we are using under the cutoff assumption. One has an L ∞ bound on the loss term (Corollary 2.2 in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF]). Lemma 3.1. Let g be a measurable function on R d . Then
∀v ∈ R^d,  |L[g](v)| ≤ C^L_g ⟨v⟩^{γ⁺},
where C^L_g is defined by:
(1) if Φ satisfies (1.4) with γ ≥ 0 or if Φ satisfies (1.5), then C^L_g = cst·n_b C_Φ e_g;
(2) if Φ satisfies (1.4) with γ ∈ (−d, 0), then C^L_g = cst·n_b C_Φ (e_g + l^p_g), p > d/(d + γ).
The spreading property of Q⁺ is given by the following lemma (Lemma 2.4 in [10]), where we define l_b to be the infimum of b(cos θ) on a closed interval of angles around π/2 on which b is strictly positive.
Lemma 3.2. One has
Q⁺(1_{B(v̄,R)}, 1_{B(v̄,r)}) ≥ cst·l_b c_Φ r^{d−3} R^{3+γ} ξ^{d/2−1} 1_{B(v̄, √(r²+R²)(1−ξ))}.
As a consequence, in the particular quadratic case δ = r = R, we obtain
Q⁺(1_{B(v̄,δ)}, 1_{B(v̄,δ)}) ≥ cst·l_b c_Φ δ^{d+γ} ξ^{d/2−1} 1_{B(v̄, δ√2(1−ξ))},
for any v̄ ∈ R^d, 0 < r ≤ R and ξ ∈ (0, 1).
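The whole spreading strategy rests on the factor √2(1 − ξ) being larger than 1 for small ξ; a one-line check (ours):

import math

# The quadratic case of Lemma 3.2 spreads a bound on B(v, delta) to
# B(v, delta*sqrt(2)*(1 - xi)); the radius grows iff sqrt(2)*(1 - xi) > 1,
# i.e. xi < 1 - 1/sqrt(2) ~ 0.2929 (e.g. xi = 1/4, used in Lemma 3.3 below).
for xi in (0.05, 0.25, 0.30):
    print(xi, math.sqrt(2) * (1 - xi))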
3.1.2. First "upheaval" point. We start by the strict positivity of our function at one point for all velocities.
Lemma 3.3. Let f be the mild solution of the Boltzmann equation described in Theorem 2.2. Then there exist ∆ > 0 and (x_1, v_1) in Ω × R^d such that for all n ∈ N there exist r_n > 0, depending only on n, and t_n(t), α_n(t) > 0 such that
∀t ∈ [0, ∆], ∀x ∈ B(x_1, ∆/2^n), ∀v ∈ R^d,  f(t, x, v) ≥ α_n(t) 1_{B(v_1,r_n)}(v),
with α_0 > 0 independent of t and the induction formula
α_{n+1}(t) = (C_Q r_n^{d+γ} / 4^{d/2−1}) ∫_{t_n(t)}^{t} e^{−s C_L ⟨2r_n + ‖v_1‖⟩^{γ⁺}} α_n²(s) ds,
where C_Q = cst·l_b c_Φ is defined in Lemma 3.2 and C_L = cst·n_b C_Φ E_f (or C_L = cst·n_b C_Φ (E_f + L^p_f)) is defined in Lemma 3.1, and
(3.2)  r_0 = ∆,  r_{n+1} = (3√2/4) r_n,
(3.3)  t_n(t) = max(0, t − ∆/(2^{n+1}(‖v_1‖ + r_n))).
Remark 3.4.
It is essentially the same method used to generate "upheaval points" in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF]. The main difference being that we need to control the characteristics before they bounce against the boundary, whence the existence of the bound t n (t).
Proof of Lemma 3.3. The proof is an induction on n and mainly follows the method in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] Lemma 3.3.
Step 1: Initialization. We recall the assumptions that are made on the solution f ((2.4) and assumption (2.1)):
∀t ∈ R + , Ω R d f (t, x, v) dxdv = M, sup (t,x)∈[0,T )×Ω R d |v| 2 f (t, x, v) dxdv = E f , with M > 0 and E f < ∞.
Since Ω is bounded, and so is included in, say, B(0, R X ), we also have that
∀t ∈ R + , Ω R d |x| 2 + |v| 2 f (t, x, v) dxdv α = MR 2 X + R X E f < +∞.
Therefore, exactly the same computations as in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] (step 1 of the proof of Lemma 3.3) are applicable and lead to the existence of (x 1 , v 1 ) such that f (0, x 1 , v 1 ) > 0 and, by uniform continuity of f , to Lemma 3.3 in the case n = 0.
Step 2: Proof of the induction. We assume the conjecture is valid for n.
Let x be in B(x 1 , ∆/2 n+1 ), v in B(0, v 1 + 2r n ) and t in [0, ∆].
We have straightforwardly that
∀s ∈ [t n (t), t] , x 1 -(x -(t -s)v) ∆ 2 n . Since B(x 1 , ∆) ⊂ Ω, the characteristic line (X s,t (x, v)) s∈[tn(t),t] therefore stays in Ω. This implies that t ∂ (x, v) > t n (t).
We thus use the fact that f is a mild solution to write f (t, x, v) under its Duhamel form starting at t n (t) without contact with the boundary (2.10). The control we have on the L operator, Lemma 3.1, allows us to bound from above the second integral term (the first term is positive). Moreover, this bound on L is independent on t, x and v since it only depends on an upper bound on the energy e f (t,x,•) (and its local L p norm l p f (t,x,•) ) which is uniformly bounded by E f (and by L p f ). This yields
(3.4) f (t, x, v) t tn(t) e -sC L v 1 +2rn γ + Q + [f (s, X s,t (x, v), •), f (s, X s,t (x, v), •)] (v) ds,
where C_L = cst·n_b C_Φ E_f (or C_L = cst·n_b C_Φ (E_f + L^p_f)), see Lemma 3.1, and we used ‖v‖ ≤ 2r_n + ‖v_1‖.
We already saw that (X s,t (x, v)) s∈[tn(t),t] stays in B(x 1 , ∆/2 n ). Therefore, by calling v * the integration parametre in the operator Q + we can apply the induction property to f (s, X s,t (x, v), v * ) which implies, in (3.4),
f (t, x, v) t tn(t) e -sC L v 1 +2rn γ + α 2 n (s)Q + 1 B(v 1 ,rn) , 1 B(v 1 ,rn) ds(v).
Applying the spreading property of Q + , Lemma 3.2, with ξ = 1/4 gives us the expected result for the step n + 1 since B(v 1 , r n+1 ) ⊂ B(0, v 1 + 2r n ).
3.1.3. Partition of the phase space and first localised lower bounds. We are now able to prove the immediate appearance of localised "upheaval points". We emphasize here that the following proposition is proven with exactly the same arguments as in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF]. Proposition 3.5. Let f be the mild solution of the Boltzmann equation described in Theorem 2.2 and consider x 1 , v 1 constructed in Lemma 3.3. Then there exists ∆ > 0 such that for all 0 < τ 0 ∆, there exists δ T (τ 0 ), δ X (τ 0 ), δ V (τ 0 ), R min (τ 0 ), a 0 (τ 0 ) > 0 such that for all N in N there exists N X in N * and x 1 , . . . , x N X in Ω and v 1 , . . . , v N X in B(0, R min (τ 0 )) and
• Ω ⊂ ∪_{1≤i≤N_X} B(x_i, δ_X(τ_0)/2^N);
• ∀t ∈ [τ_0, δ_T(τ_0)], ∀x ∈ B(x_i, δ_X(τ_0)), ∀v ∈ R^d,  f(t, x, v) ≥ a_0(τ_0) 1_{B(v_i, δ_V(τ_0,N))}(v).
Proof of Proposition 3.5. We are going to use the free transport part of the Duhamel form of f (2.10), to create localised lower bounds out of the one around (x 1 , v 1 ) in Lemma 3.3.
Ω is bounded so let us denote its diameter by d_Ω. Take τ_0 in (0, ∆]. Let n be large enough such that r_n ≥ 2d_Ω/τ_0 + ‖v_1‖, where r_n is defined by (3.2) in Lemma 3.3. It is possible since (r_n) increases to infinity. Now, define R_min(τ_0) = 2d_Ω/τ_0.
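Since r_n = ∆ (3√2/4)^n grows geometrically, the number of spreading steps needed to reach a prescribed radius is only logarithmic; a small sketch with hypothetical values of ∆ and of the target radius (ours):

import math

# Smallest n with r_n = Delta * (3*sqrt(2)/4)**n >= R_target, where R_target
# stands for 2*d_Omega/tau_0 + |v_1|; all numeric values are placeholders.
Delta, R_target = 0.1, 50.0
n = math.ceil(math.log(R_target / Delta) / math.log(3 * math.sqrt(2) / 4))
print(n)   # about a hundred steps for these values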
Thanks to Lemma 3.3 applied to this particular n we have that
(3.5) ∀t ∈ τ 0 2 , ∆ , ∀x ∈ B(x 1 , ∆/2 n ), f (t, x, v) α n τ 0 2 1 B(v 1 ,rn) (v),
where we used the fact that α n (t) is an increasing function.
Define a 0 (τ 0 ) = 1 2 α n τ 0 2 e - τ 0 2 C L 2d Ω τ 0 γ + .
Definition of the constants. We notice that for all x in ∂Ω we have that n(x) • (x -x 1 ) > 0, because Ω has nowhere null normal vector by hypothesis. But the function
x -→ n(x) • x -x 1 x -x 1 is continuous (since Ω is C 2 ) on the compact ∂Ω and therefore has a minimum that is atteined at a certain X(x 1 ) on ∂Ω.
Hence,
(3.6) ∀x ∈ ∂Ω, n(x) • x -x 1 x -x 1 n(X(x 1 )) • X(x 1 ) -x 1 X(x 1 ) -x 1 = 2λ(x 1 ) > 0.
To shorten following notations, we define on Ω × R d -{0} the function
(3.7) Φ(x, v) = n x + t x, v v v v ,
where we defined t(x, v) = min{t 0 : x + tv ∈ ∂Ω}, the first time of contact against the boundary of the forward characteristic (x + sv) s 0 defined for v = 0 and continuous on Ω × R d -{0} (see [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] Lemma 5.2 for instance).
We denote d 1 to be half of the distance from x 1 to ∂Ω. We define two sets included in
[0, ∆] × Ω × R d : Λ (1) = [0, ∆] × B(x 1 , d 1 ) × R d and Λ (2) = (t, x, v) / ∈ Λ (1) , v d 1 τ 0 and Φ(x, v) • v v λ(x 1 )
By continuity of t(x, v) and of n (on ∂Ω), we have that
Λ = Λ (1) ∩ Λ (2)
is compact and does not intersect the grazing set [0, ∆] × Λ 0 defined by (1.7). Therefore, f is continuous in Λ and thus is uniformly continuous on Λ. Hence, there exist
δ ′ T (τ 0 ), δ ′ X (τ 0 ), δ ′ V (τ 0 ) > 0 such that ∀(t, x, v), (t ′ , x ′ , v ′ ) ∈ Λ, |t -t ′ | δ ′ T (τ 0 ), x -x ′ δ ′ X (τ 0 ), v -v ′ δ ′ V (τ 0 ), (3.8) |f (t, x, v) -f (t ′ , x ′ , v ′ )| a 0 (τ 0 ).
The map Φ (defined by (3.7)) is uniformly continuous on the compact [0, ∆] × Ω × S d-1 and therefore there exist
δ ′′ T (τ 0 ), δ ′′ X (τ 0 ), δ ′′ V (τ 0 ) > 0 such that ∀(t, x, v), (t ′ , x ′ , v ′ ) ∈ Λ (2) , |t -t ′ | δ ′′ T (τ 0 ), x -x ′ δ ′′ X (τ 0 ), v -v ′ δ ′′ V (τ 0 ), (3.9) |Φ(x, v) -Φ(x ′ , v ′ )| λ(x 1 ) 2 .
We conclude our definitions by taking
δ T (τ 0 ) = min (∆, τ 0 + δ ′ T (τ 0 ), τ 0 + δ ′′ T (τ 0 )) , δ X (τ 0 ) = min ∆ 2 n , δ ′ X (τ 0 ), δ ′′ X (τ 0 ), d 1 /2 , δ V (τ 0 ) = min r n , δ ′ V (τ 0 ), d 1 2τ 0 δ ′′ V (τ 0 ), λ(x 1 ) 2 .
Proof of the lower bounds. We take N ∈ N and notice that Ω is compact and therefore there exists x 1 , . . . , x N X in Ω such that Ω ⊂
1 i N X B x i , δ X (τ 0 )/2 N .
Moreover, we construct them such that x 1 is the one defined in Lemma 3.3 and we then take v 1 to be the one defined in Lemma 3.3. We define ∀i ∈ {2, . . . , N X }, v i = 2 τ 0 (x i -x 1 ).
Because Ω is convex we have that
X τ 0 /2,τ 0 (x i , v i ) = x 1 , V τ 0 /2,τ 0 (x i , v i ) = v i .
The latter equalities imply that there is no contact with ∂Ω between times τ 0 /2 and τ 0 when starting from x 1 to go to x i with velocity v i . Using the fact that f is a mild solution of the Boltzmann equation, we write it under its Duhamel form without contact (2.10), but starting at τ 0 /2. We drop the last term which is positive. As in the proof of Lemma 3.3 we can control the L operator appearing in the first term in the right-hand side of (2.10) (corresponding to the free transport).
f (τ 0 , x i , v i ) f τ 0 2 , x 1 , v i e - τ 0 2 C L 2 τ 0 (x i -x 1 ) γ + α n τ 0 2 e - τ 0 2 C L 2d Ω τ 0 γ + 1 B(v 1 ,rn) (v i ) 2a 0 (τ 0 )1 B(v 1 ,rn) (v i ),
where we used (3.5) for the second inequality. We see here that v i belongs to B(0, R min (τ 0 )) and that B(0, R min (τ 0 )) ⊂ B(v 1 , r n ) and therefore (3.10) f (τ 0 , x i , v i ) 2a 0 (τ 0 ).
We first notice that (τ 0 , x i , v i ) belongs to Λ since either x i belongs to B(x 1 , d 1 ) or x 1 -x i d 1 but by definition of v i and λ(x 1 ) (see (3.6)),
n x i + t x i , v i v i v i v i • v i v i 2λ(x 1 )
and
v i = 2 τ 0 x i -x 1 2 τ 0 d 1 .
We take t in [τ 0 , δ T (τ 0 )], x in B(x i , δ X (τ 0 )) and v in B(v i , δ V (τ 0 )) and we will prove that (t, x, v) also belongs to Λ.
If
x i belongs to B(x 1 , d 1 /2) then since δ X (τ 0 ) d 1 /2, x -x 1 d 1 2 + x -x i d 1
and (t, x, v) thus belongs to Λ (1) ⊂ Λ.
In the other case where x 1 -x i d 1 /2 we first have that
v i = 2 τ 0 x i -x 1 d 1 τ 0 .
And also
v v - v i v i 2 v i v -v i = τ 0 x i -x 1 δ V (τ 0 ) 2τ 0 d 1 δ V (τ 0 ) δ ′′ V (τ 0 ).
The latter inequality combined with (3.9), together with the facts that |t − τ_0| ≤ δ″_T(τ_0) and ‖x − x_i‖ ≤ δ″_X(τ_0), yields |Φ(x, v) − Φ(x_i, v_i)| ≤ λ(x_1)/2,
which in turn implies
Φ(x, v) • v v Φ(x i , v i ) • v i v i + Φ(n, v) • (v -v i ) + (Φ(x, v) -Φ(x i , v i )) • v i 2λ(x 1 ) -v -v i -|Φ(x, v) -Φ(x i , v i )| λ(x 1 )
, so that (t, x, v) belongs to Λ (2) .
We can now conclude the proof. We proved that (τ 0 , x i , v i ) belongs to Λ and that for all t in [τ 0 , δ T (τ 0 )], x in B(x i , δ X (τ 0 )) and v in B(v i , δ V (τ 0 )), (t, x, v) belongs to Λ. By definition of the constants, (t -τ 0 , x -x i , v -v i ) satisfies the inequality of the uniform continuity of f on Λ (3.8). Combining this inequality with (3.10), the lower bound at (τ 0 , x i , v i ), we have that f (t, x, v) a 0 (τ 0 ).
Remark 3.6. In order to lighten our presentation and because τ 0 can be chosen as small as one wants, we will only study the case of solutions to the Boltzmann equation which satisfies Proposition 3.5 at τ 0 = 0. Then we will immediatly create the exponential lower bound after at τ 1 for all τ 1 > 0. Then we apply the latter result to F (t, x, v) = f (t + τ 0 , x, v) to obtain the exponential lower bound for f at time τ 0 + τ 1 which can be chosen as small as one wants.
3.2. Global strict positivity of the Maxwellian diffusion.
In this subsection, we focus on the positivity of the Maxwellian diffusion boundary condition. More precisely, we prove that the boundary of the domain Ω diffuses in all directions a minimal strictly positive quantity.
Proposition 3.7. Let f be the mild solution of the Boltzmann equation described in Theorem 2.2 and consider ∆ > 0 constructed in Proposition 3.5.
Then for all τ_0 in (0, ∆] there exists b_∂(τ_0) > 0 such that
∀t ∈ [τ_0, ∆], ∀x_∂ ∈ ∂Ω,  ∫_{v_*·n(x_∂)>0} f(t, x_∂, v_*) (v_* · n(x_∂)) dv_* > b_∂(τ_0).
Proof of Proposition 3.7. Let τ 0 be in (0, ∆], take t in [τ 0 , ∆] and x ∂ on ∂Ω.
We consider (x 1 , v 1 ) constructed in Lemma 3.3 and will use the same notations as in the proof of Proposition 3.5.
In the spirit of the proof of Proposition 3.5 we define
v_∂ = (2/t)(x_∂ − x_1),
which gives, because ∂Ω is convex, that (X_{s,t}(x_∂, v_∂))_{t/2 ≤ s ≤ t} is a straight line not intersecting ∂Ω except at time s = t. We can thus write f under its Duhamel form without contact (2.10) starting at t/2. We keep only the first term on the right hand-side and control the operator L by Lemma 3.1.
f (t, x ∂ , v ∂ ) f t 2 , x 1 , v ∂ e -t 2 C L 2 t (x ∂ -x 1 ) γ + α n t 2 e -t 2 C L 2d Ω t γ + 1 B(v 1 ,rn) (v ∂ ) α n τ 0 2 e -∆ 2 C L 2d Ω τ 0 γ + 1 B(v 1 ,rn) (v ∂ ),
where we used (3.5) for the second inequality since t/2 belongs to [τ 0 /2, ∆]. Note that we choose n exactly as in the proof of Proposition 3.5 and, for the same reasons,
v ∂ thus belongs to B(v 1 , r n ) which implies f (t, x ∂ , v ∂ ) α n τ 0 2 e -∆ 2 C L 2d Ω τ 0 γ + .
Here again, the continuity of f away from the grazing set implies the existence of δ(τ 0 ) independent of t, x and v ∂ such that
(3.11) ∀v * ∈ B(v ∂ , δ(τ 0 )), f (t, x ∂ , v * ) 1 2 α n τ 0 2 e -∆ 2 C L 2d Ω τ 0 γ + = A(τ 0 ) > 0.
We now deal with the scalar product v * • n(x ∂ ) appearing in the Maxwellian diffusion.
We notice that for all x in ∂Ω we have that n(x) • (x -x 1 ) > 0, because Ω has nowhere null normal vector by hypothesis. But the function x -→ n(x) • (x -x 1 ) is continuous (since Ω is C 2 ) on the compact ∂Ω and therefore has a minimum that is atteined at a certain X(x 1 ) on ∂Ω.
Hence,
∀x ∈ ∂Ω, (x -x 1 ) • n(x) n(X(x 1 )) • (X(x 1 ) -x 1 ) = 2B(x 1 ) > 0.
We define δ ′ (x 1 ) = B(x 1 ) and a mere Cauchy-Schwarz inequality implies that
∀x ∈ ∂Ω, ∀v * ∈ B ((x -x 1 ), δ ′ (x 1 )) , v * • n(x) B(x 1 ) > 0,
which in turns implies
(3.12) ∀t ∈ [τ 0 , ∆], ∀x ∈ ∂Ω, ∀v * ∈ B 2 t (x -x 1 ), 2 ∆ δ ′ (τ 0 ) , v • n(x) 2 ∆ B(x 1 ).
To conlude we combine (3.11) and (3.12) at point x ∂ inside the integrale to get, with δ ′′ (τ 0 ) = min (δ(τ 0 ), 2δ ′ (x 1 )/∆),
v * •n(x ∂ )>0 f (t, x ∂ , v * ) (v * • n(x ∂ )) dv * v * ∈B(v ∂ ,δ ′′ (τ 0 )) f (t, x ∂ , v * ) (v * • n(x ∂ )) dv * 2 ∆ A(τ 0 )B(x 1 ) |B(v ∂ , δ ′′ (τ 0 ))| = 2 ∆ A(τ 0 )B(x 1 ) |B(0, δ ′′ (τ 0 ))| .
This yields the expected result with
(3.13) b ∂ (τ 0 ) = 2A(τ 0 )B(x 1 ) |B(0, δ ′′ (τ 0 ))| /∆.
Remark 3.8. For now on, τ 0 used in Proposition 3.7 will be the same as the one used in Proposition 3.5. Moreover, as already mentioned and explained in Remark 3.6, we will consider in the sequel that τ 0 = 0.
3.3. Spreading of the initial localised bounds and global Maxwellian lower bound. In Section 3.1 we proved the immediate appearance of initial lower bounds that are localised in space and in velocity. The present subsection aims at increasing these lower bounds in a way that larger and larger velocities are taken into account, and compare it to an exponential bound in the limit.
3.3.1. Spreading of the initial "upheaval points". First, we pick N in N*, construct δ_V, R_min and cover Ω with ∪_{1≤i≤N_X} B(x_i, δ_X/2^N) as in Proposition 3.5, where we dropped the dependencies in τ_0 and N.
Then for any sequence (ξ n ) in (0, 1) and for all τ > 0 we define three sequences in R + by induction. First,
(3.14)  r_0 = δ_V,  r_{n+1} = √2 (1 − ξ_{n+1}) r_n.
Second, with the notation
r̄_n = R_min + r_n,
(3.15)  b_0(τ) = b_∂,  b_{n+1}(τ) = b_∂ e^{−C_L τ r̄_{n+1}^{γ⁺}} e^{−r̄_{n+1}²/(2T_∂)} / ((2π)^{(d−1)/2} T_∂^{(d+1)/2}).
Third,
(3.16)  a_0(τ) = a_0,  a_{n+1}(τ) = min(a_n(τ), b_n(τ))² C_Q r_n^{d+γ} ξ_{n+1}^{d/2−1} (τ/(2^{n+2} r̄_{n+1})) e^{−C_L τ r̄_{n+1}^{γ⁺}/(2^{n+2} r̄_{n+1})}.
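To see the bookkeeping at work, here is a toy iteration of (3.14)-(3.15)-(3.16) (ours; every numeric constant below is a placeholder, not a constant of the paper), confirming that each a_n(τ) and b_n(τ) stays strictly positive:

import math

# Placeholder constants (ours, not the paper's), chosen only to run the recursion.
C_L, C_Q, T_w, b_w = 1.0, 1.0, 1.0, 0.1          # C_L, C_Q, T_partial, b_partial
R_min, gamma, g_plus, d, tau, xi = 1.0, 1.0, 1.0, 3, 0.5, 0.5
r, a, b = 0.2, 0.01, b_w                          # r_0 = delta_V, a_0(tau), b_0(tau)
for n in range(5):
    r_next = math.sqrt(2) * (1 - xi ** (n + 1)) * r       # (3.14), xi_n = xi**n
    rbar = R_min + r_next                                  # rbar_{n+1}
    b_next = (b_w * math.exp(-C_L * tau * rbar ** g_plus)
              * math.exp(-rbar ** 2 / (2 * T_w))
              / ((2 * math.pi) ** ((d - 1) / 2) * T_w ** ((d + 1) / 2)))   # (3.15)
    a_next = (min(a, b) ** 2 * C_Q * r ** (d + gamma)
              * (xi ** (n + 1)) ** (d / 2 - 1)
              * tau / (2 ** (n + 2) * rbar)
              * math.exp(-C_L * tau * rbar ** g_plus / (2 ** (n + 2) * rbar)))  # (3.16)
    r, a, b = r_next, a_next, b_next
    print(n + 1, r, a, b)                         # all three remain strictly positive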
We express the spreading of the lower bound in the following proposition.
Proposition 3.9. Let f be the mild solution of the Boltzmann equation described in Theorem 2.2 and suppose that f satisfies Proposition 3.5 with τ_0 = 0. Consider 0 < τ ≤ δ_T and N in N. Let (x_i)_{i∈{1,…,N_X}} and (v_i)_{i∈{1,…,N_X}} be given as in Proposition 3.5 with τ_0 = 0. Then for all n in {0, …, N} the following holds:
∀t ∈ [τ − τ/(2^{n+1} r̄_n), τ], ∀x ∈ B(x_i, δ_X/2^n),  f(t, x, v) ≥ min(a_n(τ), b_n(τ)) 1_{B(v_i,r_n)}(v).
Proof of Proposition 3.9. We are interested in the immediate appearance of a lower bound, so we can always choose δ_T ≤ δ_X and also R_min ≥ 1. This Proposition will be proved by induction on n; the initialisation is simply Proposition 3.5, so we consider the case where the proposition is true for n < N.
Take t ∈ [τ -τ /(2 n+2 r n+1 ), τ ], x ∈ B (x i , δ X /2 n+1 ) and v ∈ B(v i , 2r n+1 ).
There are two possible cases depending on t ∂ (x, v) t or t ∂ (x, v) > t. We remind here that t ∂ (see (2.6)) is the first time of contact of the backward trajectory starting at x with velocity v.
1 st case: no contact against the boundary from 0 to t. We can therefore use the Duhamel formula without contact (2.10) and bound it by the second term on the right-hand side (every term being positive). Then Lemma 3.1 on the linear operator L and the fact that
v v i + r n+1 R min + r n+1 = r n+1 ,
gives the following inequality
(3.17)  f(t, x, v) ≥ ∫_{τ − τ/(2^{n+1} r̄_{n+1})}^{τ − τ/(2^{n+2} r̄_{n+1})} e^{−C_L (t−s) r̄_{n+1}^{γ⁺}} Q⁺[f(s, X_{s,t}(x, v), ·), f(s, X_{s,t}(x, v), ·)](v) ds.
We now see that for all s in the interval considered in the integral above
x i -X s,t (x, v) x i -x + |t -s| v δ X 2 n+1 + τ 2 n+1 δ X 2 n ,
where we used that δ T δ X . We can therefore apply the induction hypothesis to f (t, X s,t (x, v), v * ) into (3.17), where we denoted by v * the integration parameter in Q + . This yields
f (t, x, v) a n (τ ) 2 e -C L τ r n+1 γ + 2 n+2 r n+1 τ - τ 2 n+2 r n+1 τ - τ 2 n+1 r n+1 Q + 1 B(v i ,rn) , 1 B(v i ,rn) ds (v).
Applying the spreading property of Q + , Lemma 3.2, with ξ = ξ n+1 gives us the expected result for the step n + 1 with the lower bound a n+1 (τ ).
2 nd case: there is at least one contact before time t. In that case we have that t ∂ (x, v) t and we can use the Duhamel formula with contact (2.11) for f . Both terms on the right-hand side are positive so we can lower bound f (t, x, v) by the first one. Denoting
x ∂ = x ∂ (x, v) (see Definition (2.7)), this yields f (t, x, v) e -C L t r n+1 γ + f ∂ (t ∂ (x, v), x ∂ (x, v), v) e - ( r n+1 ) 2 2T ∂ (2π) d-1 2 T d+1 2 ∂ e -C L τ r n+1 γ + v * •n(x ∂ )>0 f (t, x ∂ , v * ) (v * • n(x ∂ )) dv * (3.18)
where we used the definition of f ∂ (1.2).
Thanks to the previous subsection (Proposition 3.7), we obtain straightforwardly the expected result for step n + 1 with the lower bound b n+1 (τ ).
3.3.2. A Maxwellian lower bound: proof of Theorem 2.2. In this subsection we prove Theorem 2.2.
We take f being the mild solution described in Theorem 2.2 and we suppose, thanks to Remarks 3.6 and 3.8, that f satisfies Propositions 3.5 and 3.7 with τ 0 = 0.
We fix τ > 0 and we keep the notations defined in Section 3.3.1 for the sequences (r n ), (a n ) and (b n ) (see (3.14)-(3.16)-(3.15), with (ξ n ) to be defined later).
In Proposition 3.9 we showed that we can spread the localised lower bound with larger and larger velocities taken into account; i.e., by taking ξ_n = 1/4 the sequence (r_n) is strictly increasing to infinity. We can consider r_0 > 0 and find an n_0 in N such that, for all v ∈ B(0, R_min), B(0, r_0) ⊂ B(v, r_{n_0}). By setting N to be this specific n_0 and applying Proposition 3.9 with this N we obtain a uniform lower bound:
∀t ∈ [τ − τ/(2^{n_0+1} r̄_{n_0}), τ], ∀x ∈ Ω,  f(t, x, v) ≥ min(a_{n_0}(τ), b_{n_0}(τ)) 1_{B(0,r_0)}(v).
This bound is uniform in x, and the same arguments as in the proof of Proposition 3.9 allow us to spread it in the same manner, in an even easier way since it is a global lower bound. Therefore, without loss of generality, we can assume n_0 = 0 and that the following holds:
∀n ∈ N, ∀t ∈ [τ − τ/(2^{n+1} r̄_n), τ], ∀x ∈ Ω,  f(t, x, v) ≥ min(a_n(τ), b_n(τ)) 1_{B(0,r_n)}(v).
The proof of Theorem 2.2 is then done in two steps. The first one is to establish the Maxwellian lower bound at time τ, using a slightly modified version of the argument in [10] Lemma 3.3. The second is to prove that the latter bound holds for all t > τ.
1st step: a Maxwellian lower bound at time τ. A natural choice for (ξ_n) is a geometric sequence ξ_n = ξ^n for a given ξ in (0, 1). With such a choice we have that
(3.19)  r_n ≤ r_0 2^{n/2}
and
(3.20)  r_n = 2^{n/2} r_0 ∏_{k=1}^{n} (1 − ξ^k) ≥ c_r 2^{n/2},
with c r > 0 depending only on r 0 and ξ.
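The existence of c_r in (3.20) only uses the convergence of the infinite product ∏_{k≥1}(1 − ξ^k) to a positive limit; a numerical check (ours) for ξ = 1/2:

# prod_{k=1}^{n} (1 - xi**k) decreases to a positive limit, so (3.20) holds
# with c_r = r_0 * prod_{k>=1} (1 - xi**k) > 0.
xi, prod = 0.5, 1.0
for k in range(1, 60):
    prod *= 1.0 - xi ** k
print(prod)   # ~0.28879 for xi = 1/2; the tail factors change nothing visibly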
It follows that f satisfies the following property
(3.21)  ∀n ∈ N, ∀x ∈ Ω, ∀v ∈ B(0, c_r 2^{n/2}),  f(τ, x, v) ≥ c_n,  with c_n = min(a_n, b_n).
It has been proven in [10] Lemma 3.3 that for a function satisfying the property (3.21) with c_n ≥ α^{2^n}, for some α > 0, there exist ρ and θ, strictly positive explicit constants, such that
∀x ∈ Ω, ∀v ∈ R^d,  f(τ, x, v) ≥ (ρ/(2πθ)^{d/2}) e^{−|v|²/(2θ)}.
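For completeness, here is a sketch (ours, following [10] Lemma 3.3) of how this classical step goes: given |v| ≥ c_r, pick the smallest n with c_r 2^{n/2} ≥ |v|; minimality forces 2^n ≤ 2|v|²/c_r², hence, with 0 < α < 1,

f(\tau, x, v) \geq \alpha^{2^n} \geq \alpha^{2|v|^2/c_r^2} = \exp\!\left(-\frac{2\log(1/\alpha)}{c_r^2}\,|v|^2\right),

i.e. a Maxwellian bound with temperature θ = c_r²/(4 log(1/α)); velocities with |v| ≤ c_r are handled directly by the n = 0 case of (3.21).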
It thus only remains to show that there exist α_1 and α_2 strictly positive such that b_n ≥ α_1^{2^n} and a_n ≥ α_2^{2^n}.
The case of (b_n) is quite straightforward from (3.19) and 0 ≤ γ⁺ ≤ 1. Indeed, there exist explicit constants C_1 and C_2 > 0, independent of n, such that for all n ≥ 1,
b_n = b_∂ e^{−C_L τ r̄_n^{γ⁺}} e^{−r̄_n²/(2T_∂)} / ((2π)^{(d−1)/2} T_∂^{(d+1)/2}) ≥ (b_∂ / (2T_∂ (2π)^{(d−1)/2})) e^{−C_1 (R_min + r_n)²} ≥ C_2 e^{−2C_1 r_0² 2^n}.
Therefore, if C_2 ≥ 1 we define α_1 = min(b_∂, e^{−2C_1 r_0²}), or else we define α_1 to be min(b_∂, C_2 e^{−2C_1 r_0²}), and it yields b_n ≥ α_1^{2^n} for all n ≥ 0.
We recall the inductive definition of (a n ) for n 0, with ξ n = ξ n ,
a n+1 = min (a n , b n ) 2 C Q r d+γ n ξ (n+1)(d/2-1) τ 2 n+2 r n+1 e -C L τ r n+1 γ + 2 n+2 r n+1 .
First, using (3.19) we have that for all n 0,
-C L τ r n+1 γ + 2 n+2 r n+1 C L τ R γ + min 2R min + C L τ r γ + n+1 2 n+2 R min C L τ R γ + min 2R min + C L τ r γ + 0 2 (n+1)γ + 2 2 n+2 R min ,
which is bounded from above for all n since 0 γ + 1. Therefore, if we denote by C 3 any explicit non-negative constant independent of n, we have for all n 0
a n+1 C 3 r d+γ n ξ ((n+1))(d/2-1) 2 n+2 (R min + r n+1 ) min (a n , b n ) 2 .
Thus using (3.19) and (3.20) to bound r n we have
a n+1 C 3 2 n(d+γ) 2 ξ (n+1)(d/2-1) 2 n+2 (R min + c r 2 n+1 2 ) min (a n , b n ) 2 C 3 2 n(d+γ) 2 ξ n(d/2-1) 2 3n+5 2 min (a n , b n ) 2 C 3 2 (d+γ) 2 ξ (d/2-1) 2 3 2 n min (a n , b n ) 2 .
We define
λ = min (1, C 3 ) 2 (d+γ) 2 ξ (d/2-1) 2 3 2
, which leads to,
(3.22) ∀n 1, a n+1 λ n min(a n , b n ) 2 .
We could always have chosen b 0 and then b 1 respectively smaller than a 0 and a 1 (by always bounding from below by the minimum) and we assume that it is so. We can therefore define
∀n 1, k n = min {0 k n -1 : a n-k b n-k } .
Notice that n − k_n ≥ 1, hence (3.22) can be iterated k_n times, which yields
∀n ≥ 1,  a_{n+1} ≥ λ^{n+2(n−1)+⋯+2^{k_n}(n−k_n)} min(a_{n−k_n}, b_{n−k_n})^{2^{k_n+1}} ≥ λ^{2^{k_n+1}(n−k_n+1)−(n+2)} α_1^{2^{n−k_n} · 2^{k_n+1}}.
Thus, if λ ≥ 1 we can choose α_2 = α_1, and else we have 2^{k_n+1}(n−k_n+1) − (n+2) ≤ 2^{n+1} and we can choose α_2 = λα_1. In any case, α_2 does not depend on n and we have a_n ≥ α_2^{2^n}.
We therefore proved the existence of α > 0 such that for all n in N, min(a n , b n ) α 2 n which implies that there exist ρ and θ strictly positive explicit constants such that
(3.23)  ∀x ∈ Ω, ∀v ∈ R^d,  f(τ, x, v) ≥ (ρ/(2πθ)^{d/2}) e^{−|v|²/(2θ)}.
2 nd step: a Maxwellian lower bound for all T > t τ . To complete the proof of Theorem 2.2, it remains to prove that (3.23) actually holds for all t τ . All the results and constants we obtained so far do not depend on an explicit form of f 0 but just on uniform bounds and continuity that are satisfied at all times, positions and velocities (by assumption). Therefore, we can do the same arguments starting at any time and not t = 0. So if we take τ > 0 and consider τ t < T we just have to make the proof start at t -τ to obtain Theorem 2.2.
3.4. A constructive approach to the initial lower bound and positivity of the diffusion. In the previous subsections we saw that explicit and constructive constants are obtained from given initial lower bounds and uniform positivity of the Maxwellian diffusion. Therefore, a constructive approach to the latter two will lead to completely explicit constants in the Maxwellian lower bound, depending only on a priori bounds on the solution and the geometry of the domain.
Localised "upheaval points". A few more assumptions on f_0 and f suffice to obtain a completely constructive approach for the "upheaval points". This method is based on a property of the iterated Q⁺ operator discovered by Pulvirenti and Wennberg [START_REF] Pulvirenti | A Maxwellian lower bound for solutions to the Boltzmann equation[END_REF] and reformulated by Mouhot ([10] Lemma 2.3) as follows.
Lemma 3.10. Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν 0. Let g(v) be a nonnegative function on R d with bounded energy e g and entropy h g and a mass ρ g such that 0 < ρ g < +∞.
Then there exist R 0 , r 0 , η 0 > 0 and v ∈ B(0, R 0 ) such that
Q⁺(Q⁺(g 1_{B(0,R_0)}, g 1_{B(0,R_0)}), g 1_{B(0,R_0)}) ≥ η_0 1_{B(v̄,r_0)},
with R_0, r_0, η_0 being constructive in terms of ρ_g, e_g and h_g.
We now suppose that 0 < ρ f 0 < +∞, h f 0 < +∞ and that
∀(x, v) ∈ Ω × R d , f 0 (x, v) ϕ(v) > 0
and we consider R 0 , r 0 , η 0 and v from Lemma 3.10 associated to the function ϕ.
We consider x_1 in Ω and we denote d_1 = d(x_1, ∂Ω) the distance between x_1 and ∂Ω. Define ∆_1 = min(1, d_1/(3R_0)). Take 0 < τ_0 ≤ ∆_1 and v in B(0, R_0). We have, by construction, that
(3.24)  ∀t ∈ [0, ∆_1],  ‖x_1 − (x_1 − vt)‖ ≤ d_1/3,
which means that t ∂ (x, v) > ∆ 1 . By the Duhamel form without contact (2.10) of f and Lemma 3.1 we have for all t in [τ 0 , ∆ 1 ],
(3.25)  f(t, x_1, v) ≥ f_0(x_1 − vt, v) e^{−t C_L ⟨v⟩^{γ⁺}} ≥ ϕ(v) e^{−t C_L ⟨R_0⟩^{γ⁺}}
and
(3.26)  f(t, x_1, v) ≥ ∫_0^t e^{−(t−s) C_L ⟨v⟩^{γ⁺}} Q⁺[f(s, x_1 − (t−s)v, ·), f(s, x_1 − (t−s)v, ·)](v) ds
  ≥ e^{−∆_1 C_L ⟨R_0⟩^{γ⁺}} ∫_0^t Q⁺[f(s, x_1 − (t−s)v, ·) 1_{B(0,R_0)}, f(s, x_1 − (t−s)v, ·) 1_{B(0,R_0)}](v) ds.
Plugging the lower bound (3.25) into (3.26) twice (iterating the Duhamel representation) gives, for all t in [τ_0, ∆_1],
f(t, x_1, v) ≥ τ_0² e^{−3∆_1 C_L ⟨R_0⟩^{γ⁺}} ∫_0^t Q⁺[Q⁺[ϕ 1_{B(0,R_0)}, ϕ 1_{B(0,R_0)}], ϕ 1_{B(0,R_0)}](v) ds.
Applying Lemma 3.10 and remembering that ∆ 1 1 we obtain that
(3.27) ∀t ∈ [τ 0 , ∆ 1 ], ∀v ∈ B(0, R 0 ), f (t, x 1 , v) τ 3 0 e -3C L R 0 γ + η 0 1 B(v,r 0 ) .
f is uniformly continuous on the compact [0, T /2] × Ω × B(0, R 0 ) so there exists δ T , δ X , δ V > 0 such that
∀|t -t ′ | δ T , ∀ x -x ′ δ X , ∀ v -v ′ δ V , (3.28) |f (t, x, v) -f (t ′ , x ′ , v ′ )| a 0 (τ 0 ),
where we defined 2a 0 (τ 0 ) = τ 3 0 e -3C L R 0 γ + η 0 .
From (3.27) and (3.28), we find
(3.29) ∀t ∈ [τ 0 , ∆ 1 ], ∀x ∈ B(x 1 , δ X ), ∀v ∈ B(0, R 0 ), f (t, x, v) τ 3 0 e -3C L R 0 γ + η 0 1 B(v,r 0 ) .
To conclude we construct x 2 , . . . , x N X such that Ω ⊂ 1 i N X B (x i , δ X ). We can use exactly the same arguments as for x 1 on each x i on the time interval [τ 0 , ∆ i ] and we reach the conclusion (3.29) on each B(x i , δ X ). This gives, with ∆ = min(∆ i ),
(3.30) ∀t ∈ [τ 0 , ∆], ∀x ∈ Ω, ∀v ∈ B(0, R 0 ), f (t, x, v) a 0 (τ 0 )1 B(v,r 0 ) .
Remark 3.11. We emphasize here that even though we used compactness arguments, they appeared to be solely a technical artifice. The constants a_0(τ_0), v̄, R_0 and r_0 are entirely explicit and depend on a priori bounds on f. The only point of concern would be that ∆ is not constructive since it depends on the covering. However, in the previous sections only the constant bounding the diffusive process (Proposition 3.7) contains a dependency on ∆ (see (3.11) and (3.13)), and it depends only on an upper bound on ∆, which is less than 1.
Starting from this explicit bound we can use the proofs made in the previous subsections, that are constructive, to therefore have a completely constructive proof as long as the bound on the diffusion (Proposition 3.7) can be obtained without compactness arguments.
Constructive approach of positivity of diffusion. A quick look at the proof of Proposition 3.7 shows that we only need to construct δ(τ 0 ) in (3.11) and δ ′ (x 1 ) in (3.12) explicitly.
The first equality (3.11) is obtained constructively combining the arguments to obtain it pointwise (see proof of Proposition 3.7) together with the method of the proof of Lemma 3.3.
Indeed, take x ∂ on ∂Ω and fix an x 0 in ω. We can grow the initial lower bound (3.30) at x 1 for t in [τ 0 , ∆] such that it includes B(0, 2d Ω /τ 0 + v ) (as in Lemma 3.3). Then, as in the beginning of the proof of Proposition 3.5 we obtain that,
(3.31) ∀t ∈ [τ 0 , ∆], f t, x ∂ , 2 x ∂ -x 0 τ 0 A > 0.
We can now do that for all v in
B 2 x ∂ -x 0 τ 0 , min r 0 ; 2 d(x 0 , ∂Ω) τ 0 by just defining, for each v, x(v) the point in Ω such that v = 2 x ∂ -x(v) τ 0 .
Note that this point is always well defined since Ω is convex and
x 0 -x(v) d(x 0 , ∂Ω).
For any given x(v) we apply the same argument as for x 0 so that the lower bound includes v. Therefore, there is an infimum constant A min > 0 satisfying,
∀t ∈ [τ 0 , ∆ min ], ∀v ∈ B 2 x ∂ -x 0 τ 0 , min r 0 ; 2 d(x 0 , ∂Ω) τ 0 , f (t, x ∂ , v) A min ,
which is a constructive version of (3.11). We emphasize here that A min exists and is indeed independent on v since it depends on the number of iteration of Lemma 3.3 (itself determined by the norm of v which is bounded) and the initial lower bound at x(v) which is uniform in space by (3.30).
The second inequality (3.12) is purely geometric as long as we fix a x 1 satisfying the initial lower bound, which is the case with x 0 used above. We therefore obtained an entirely constructive method for the positivity of the diffusion process.
The non-cutoff case: an exponential lower bound
In this section we prove the immediate appearance of an exponential lower bound for solutions to the Boltzmann equation (1.1) in the case of a collision kernel satisfying the non-cutoff property.
The definition of being a mild solution in the case of a non-cutoff collision kernel, Definition 2.4 and equation (2.13), shows that we are in fact dealing with a cutoff kernel to which we add a non locally integrable remainder. As we shall see in Section 4.1, S ε enjoys the same kind of L ∞ control than the operator L whilst Q 1 ε , the noncutoff part of the gain operator, has an L ∞ -norm that decreases to zero as ε goes to zero.
The strategy used is therefore utterly identical to the cutoff case: creation of localised "upheaval points" and spreading of these initial lower bounds up to an exponential lower bound. The difference will be that, at each step n of the spreading process we will choose an ε n and a ξ n such that the perturbation added by the noncutoff part -Q 1 εn L ∞ v still preserves a uniform positivity in larger and larger balls in velocity.
Note that the uniform positivity of the Maxwellian diffusion still holds in the non-cutoff case since it only comes from an initial positivity and the geometry of the domain.
4.1. Controls on the operators S ε and Q 1 ε . We gather here two lemmas, proven in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], which we shall use in this section. They control the L ∞ -norm of the linear operator S ε and of the bilinear operator Q 1 ε . We first give a property satisfied by the linear operator S, (2.13), which is Corollary 2.2 in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], where we define We will compare the lower bound created by the cutoff part of our kernel to the remaining part Q 1 ε . To do so we need to control its L ∞ -norm. This is achieved thanks to Lemma 2.5 in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF], which we recall here.
(4.1) m b = S d-1 b (cos θ) (1 -cos θ)dσ = S d-2 π 0 b (cos θ) (1 -cos θ)sin d-2 θ dθ. Lemma 4.1. Let g be a measurable function on R d . Then ∀v ∈ R d , |S[g](v)| C S g v γ + , where C S g is defined by: (1) If Φ satisfies (1.4) with γ 0 or if Φ satisfies (1.5), then C S g = cst m b C Φ e g . (
∀v ∈ R d , Q 1 b (g, f )(v) cst m b C Φ g L 1 γ f W 2,∞ v γ .
(2) If Φ satisfies (1.4) with 2 + γ < 0, then
∀v ∈ R d , Q 1 b (g, f )(v) cst m b C Φ g L 1 γ + g L p f W 2,∞ v γ
with p > d/(d + γ + 2).
4.2. Proof of Theorem 2.5. As explained at the beginning of this section, the main idea is to compare the loss due to the non-cutoff part of the operator Q 1 ε with the spreading properties of the cutoff operator Q + ε . More precisely, due to Lemmas 3.1, 4.1 and 4.2 we find that for all 0 < ε < ε 0 , (4.2)
L ε [f ] + S ε [f ] C f n b CO ε + m b NCO ε v γ + and (4.3) Q + ε (f, f ) + Q 1 ε (f, f ) Q + ε (f, f ) -Q 1 ε (f, f ) Q + ε (f, f ) -C f m b NCO ε v (2+γ) + ,
where C f > 0 is a constant depending on E f , E ′ f , W f (and L pγ f if Φ satisfies (1.4) with γ < 0).
Moreover, by definitions (3.1), (2.9) and (4.1), the following behaviours happen: for ν = 0. This shows that the contribution of Q 1 ε decreases with ε so this operator should not affect the spreading method whereas the contribution of S ε increases, which is why we lose the Maxwellian lower bound to get a faster exponential one.
l b CO
We just briefly describe the changes to make into the proof of the cutoff case to obtain Theorem 2.5.
Localised "upheaval points". The creation of localised initial lower bounds (Proposition 3.5 in the cutoff case) depends on the Boltzmann operator for two different reasons:
• the creation of a first lower bound in a neighborhood of a point (x 1 , v 1 ) in the phase space (Lemma 3.3) • the creation of localised lower bounds in Ω via the free transport part. Since L ε + S ε satisfies the same bounds as L in the cutoff case, the second step can be made identically as in the proof of Proposition 3.5. It remains to prove the creation of an initial bound in a neighborhood of (x 1 , v 1 ).
We use the same definition of ∆, x 1 , v 1 , α n (t) and (r n ) n∈N as in Lemma 3.3 apart from t n (t) = max τ 0 , t -∆ 2 n+1 ( v 1 + r n ) .
Note that the step n = 0 holds here since it only depends on the continuity of the solution f and f 0 .
The key difference will be that the equivalent statement of Lemma 3.3 can only be done on time intervals of the form [τ 0 , ∆] for any τ 0 > 0. Indeed, take τ 0 > 0 and therefore ∀n ∈ N, ∀t ∈ [τ 0 , ∆], α n (t) α n (τ 0 ).
Exactly the same arguments as in the inductive step n + 1 in the proof of Lemma 3.3 we reach (3.4) with our new operators (with a cutoff function depending on ε n+1 ) f (t, x, v) This proves the following Lemma.
Lemma 4.3. Let f be the mild solution of the Boltzmann equation described in Theorem 2.5.
Then there exist ∆ > 0, (x 1 , v 1 ) in Ω × R d such that for all n ∈ N there exist r n > 0, depending only on n, such that for all τ 0 in (0, ∆] there exists and α n (τ 0 ) > 0 such that for ∀t ∈ [τ 0 , ∆], ∀x ∈ B x 1 , ∆ 2 n , ∀v ∈ R d , f (t, x, v) α n (τ 0 )1 B(v 1 ,rn) (v).
Exponential lower bound. As explained before, the strict positivity of the diffusion still holds in our case since we proved the initial lower bound in Lemma 4.3. It therefore remains to show that we can indeed spread the "upheaval points". This is achieved by adapting the arguments of the cutoff case together with careful choices of ε n+1 and ξ n+1 at each step of the induction. This has been done in [START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF] and in [START_REF] Briant | Instantaneous filling of the vacuum for the full Boltzmann equation in convex domains[END_REF] and we refer to these works for deeper details.
Basically, we start by spreading the initial "upheaval points" (obtained from Lemma 4.3 with the same method as Proposition 3.5) by induction. At each step of the induction we use the spreading property of the Q + εn operator between t
(2) n and t
(1) n (see (3.17)) and we fix ε n small enough to obtain a strictly positive lower bound (see (4.7)).
There is, however, a subtlety in the non-cutoff case that we have to deal with. Indeed, at each step of the induction we choose an ε n of decreasing magnitude, but at the same time in each step the action of the operator -(L εn + S εn ) behaves like (see (4.6)) exp -C f m b NCO εn + n b CO εn (t (1) n -t (2) n ) v γ + .
By (4.4) -(4.5), as ε n tends to 0 we have that n b CO εn goes to +∞ and so the action of -(Q - ε + Q 2 ε ) seems to decrease the lower bound to 0 exponentially fast. The idea to overcome this difficulty is to find a time interval t
2. 1 .
1 Notations. First of all, we denote • = 1 + |•| 2 and y + = max{0, y}, the positive part of y.
Now we state our result. Theorem 2 . 2 .
22 Let Ω be a C 2 open bounded domain in R d with nowhere null normal vector and let f 0 be a non-negative continuous function on Ω × R d . Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying(1.6)
4 b
4 (cos θ) . Lemma 3.2. Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν 0. Then for any v ∈ R d , 0 < r R, ξ ∈ (0, 1), we have
,
where b ∂ defined in Proposition 3.7 and C L in Lemma 3.3. And finally, with C Q being defined inLemma 3.3, (3.16)
where (r n ), (a n ) and (b n ) are defined by (3.14)-(3.16)-(3.15).
with (r n ), (a n ) and (b n ) satisfying the same inductive properties (3.14)-(3.16)-(3.15) (with r 0 = r n 0 , a 0 = a n 0 and b 0 = b no ), with (ξ n ) to be chosen later.
2 )
2 If Φ satisfies (1.4) with γ ∈ (-d, 0), then C S g = cst m b C Φ e g + l p g , p > d/(d + γ).
Lemma 4 . 2 . 1 )
421 Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν ∈ [0, 2). Let f, g be measurable functions on R d . Then (If Φ satisfies (1.4) with 2 + γ 0 or if Φ satisfies (1.5), then
2 nr d+γ n ξ d 2 - 1 1
221 α n (τ 0 ) 2 Q + ε n+1 [1 B(v,rn) , 1 B(v,rn) ] -C f m b NCO ε n+1 R (2+γ) + (v) ds, using the shorthand notations C ε f (R) = C f (n b CO ε +m b NCO ε ) R γ + and R = v 1 +2r n . Due to the spreading property of Q + ε n+1 (see Lemma 3.2) with ξ = 1/4 we reach f (t, x, v) (τ 0 )cst l b CO ε n+1 c Φ B(v,rn √ 2(1-ξ)) -C f m b NCO ε n+1 R (2+γ) + (v) ds,(4.6)Thus, at each step of the induction we just have to choose ε n+1 small enough such that(4.7) C f m b NCO ε n+1 R (2+γ) + 1 2 α 2 n (τ 0 )cst l b c Φ r d+γ n ξ d 2 -1 .
∆ k = 1 ,
1 n , at each step to be sufficiently small to counterbalance the effect of n b CO εn .More precisely, taking t fixing ε n by (4.7) and choosing carefully ξ n (exactly as in[START_REF] Mouhot | Quantitative lower bounds for the full Boltzmann equation. I. Periodic boundary conditions[END_REF][1])we reach the desired Theorem 2.5.
are defined by (2.6) and (2.7) respectively. Theorem 2.5. Let Ω be a C 2 open bounded domain in R d with nowhere null normal vector and let f 0 be a non-negative continuous function on Ω × R d . Let B = Φb be a collision kernel satisfying (1.3), with Φ satisfying (1.4) or (1.5) and b satisfying (1.6) with ν in [0, 2). Let f (t, x, v) be a mild solution of the Boltzmann equation in
Now we state our result.
Hence, if we call v * the integral variable in Q + , we have that for all v * in B(0, R 0 ) and all s in [0, t], t ∂ (x-(t-s)v, v * ) > t (same arguments as for (3.24)). The function f (s, x 1 -(t -s)v, v * ) thus satisfies (3.25) and (3.26) as well.We can do this iteration one more time since R 0 ∆ 1 d 1 /3 and this yields
.26)
Now we notice that (3.24) implies the following.
∀s ∈ [0, t], d(x 1 -(t -s)v, ∂Ω) 2d 1 /3. | 71,511 | [
"739558"
] | [
"14471"
] |
01492041 | en | [
"math"
] | 2024/03/04 23:41:50 | 2017 | https://hal.science/hal-01492041/file/BE_boundeddomains_SR_MD.pdf | Marc Briant
PERTURBATIVE THEORY FOR THE BOLTZMANN EQUATION IN BOUNDED DOMAINS WITH DIFFERENT BOUNDARY CONDITIONS
Keywords: Boltzmann equation, Perturbative theory, Specular reflection boundary conditions, Maxwellian diffusion boundary conditions
in L ∞
x,v (1 + |v|) β e |v| 2 /4 , we prove existence, uniqueness, continuity and positivity of solutions for less restrictive weights in the velocity variable; namely, polynomials and stretch exponentials. The methods developed here are constructive.
Introduction
The Boltzmann equation rules the dynamics of rarefied gas particles moving in a domain Ω of R 3 with velocities in R 3 when the sole interactions taken into account are elastic binary collisions. More precisely, the Boltzmann equation describes the time evolution of F (t, x, v), the distribution of particles in position and velocity, starting from an initial distribution F 0 (x, v). It reads
∀t 0 , ∀(x, v) ∈ Ω × R 3 , ∂ t F + v • ∇ x F = Q(F, F ), (1.1) ∀(x, v) ∈ Ω × R 3 , F (0, x, v) = F 0 (x, v).
The author was supported by the 150 th Anniversary Postdoctoral Mobility Grant of the London Mathematical Society. The author would also like to acknowledge the Division of Applied Mathematics at Brown University, where this work was achieved.
To which one have to add boundary conditions on F . We decompose the phase space boundary Λ = ∂Ω × R 3 into three sets
Λ + = (x, v) ∈ ∂Ω × R 3 , n(x) • v > 0 , Λ -= (x, v) ∈ ∂Ω × R 3 , n(x) • v < 0 , Λ 0 = (x, v) ∈ ∂Ω × R 3 , n(x) • v = 0 ,
where n(x) the outward normal at a point x on ∂Ω. The set Λ 0 is called the grazing set.
In the present work, we will consider two types of interactions with the boundary of the domain ∂Ω. Either the specular reflections
(1.2) ∀t > 0, ∀(x, v) ∈ Λ -, F (t, x, v) = F (t, x, R x (v))
where R x stands for the specular reflection at the point x on the boundary:
∀v ∈ R 3 , R x (v) = v -2(v • n(x))n(x).
This interaction describes the fact that the gas particles elastically collide against the wall like billiard balls. The second type is the Maxwellian diffusion boundary condition (1.3) This boundary condition expresses the physical process where particles are absorbed by the wall and then emitted back into Ω according to the thermodynamical equilibrium distribution between the wall and the gas.
∀t > 0, ∀(x, v) ∈ Λ -, F (t, x, v) = c µ µ(v) v * •n(x)>0 F (t, x, v * ) (v * • n(x))
The operator Q(F, F ) encodes the physical properties of the interactions between two particles. This operator is quadratic and local in time and space. It is given by
Q(F, F ) = R 3 ×S 2
In the present paper we are interested in the well-posedness of the Boltzmann equation (1.1) for fluctuations around the global equilibrium
µ(v) = 1 (2π) 3/2 e -|v| 2 2 .
More precisely, in the perturbative regime F = µ + f we construct a Cauchy theory in L ∞ x,v spaces endowed with strech exponential or polynomial weights and study the continuity and the positivity of such solutions for both specular reflections and diffusive boundary conditions.
Under the perturbative regime, the Cauchy problem amounts to solving the perturbed Boltzmann equation (1.4)
∂ t f + v • ∇ x f = Lf + Q(f, f )
with L being the linear Boltzmann operator Lf = 2Q(µ, f ) and we considered Q as a symmetric bilinear operator
(1.5) Q(f, g) = 1 2 R 3 ×S 2 B (|v -v * |, cos θ) [f ′ g ′ * + g ′ f ′ * -f g * -gf * ] dv * dσ.
Throughout this paper we deal with the perturbed Boltzmann equation (1.4) and the domain Ω is supposed to be C 1 so that its outwards normal is well-defined (it will be analytic and strictly convex in the case of specular reflections or just connected in the case of Maxwellian diffusion).
1.1. Notations and assumptions. We describe the assumptions and notations we shall use throughout the sequel.
Function spaces. Define
• = 1 + |•| 2 .
The convention we choose is to index the space by the name of the concerned variable so we have, for p in [1, +∞],
L p [0,T ] = L p ([0, T ]) , L p t = L p R + , L p x = L p (Ω) , L p v = L p R 3 .
For m : R 3 -→ R + a strictly positive measurable function we define the following weighted Lebesgue spaces by the norms
f L ∞ x,v (m) = sup (x,v)∈Ω×R 3 [|f (x, v)| m(v)] f L 1 v L ∞ x (m) = R 3 sup x∈Ω |f (x, v)| m(v) dv
and in general with p, q in [1, ∞):
f L p v L q x (m) = f L q x m(v) L p v .
We define the Lebesgue spaces on the boundary:
f L ∞ Λ (m) = sup (x,v)∈Λ [|f (x, v)| m(v)] f L 1 L ∞ Λ (m) = R 3 sup x: (x,v)∈Λ |f (x, v)v • n(x)| m(v) dv
with obvious equivalent definitions for Λ ± or Λ 0 . However, when we do not consider the L ∞ setting in the spatial variable we define
f L 2 Λ (m) = Λ f (x, v) 2 m(v) 2 |v • n(x)| dS(x)dv 1/2
, where dS(x) is the Lebesgue measure on ∂Ω.
For a function g defined on a space E and a subset E ⊂ E we denote by g| E the restriction of g on E.
Assumptions on the collision kernel. We assume that the collision kernel B can be written as
(1.6) B(v, v * , θ) = Φ (|v -v * |) b (cos θ) ,
which covers a wide range of physical situations (see for instance [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] Chapter 1). Moreover, we will consider only kernels with hard potentials, that is
(1.7) Φ(z) = C Φ z γ , γ ∈ [0, 1],
where C Φ > 0 is a given constant. Of special note is the case γ = 0 which is usually known as Maxwellian potentials. We will assume that the angular kernel b • cos is positive and continuous on (0, π), and that it satisfies a strong form of Grad's angular cut-off:
(1.8) b ∞ = b L ∞ [-1,1]
< ∞
The latter property implies the usual Grad's cut-off [START_REF] Grad | Principles of the kinetic theory of gases[END_REF]:
(1.9)
l b = S d-1 b (cos θ) dσ = S d-2 π 0 b (cos θ) sin d-2 θ dθ < ∞.
Such requirements are satisfied by many physically relevant cases. The hard spheres case (b = γ = 1) is a prime example.
1.2. Our goals, strategies and comparison with previous studies. Few results have been obtained about the perturbative theory for the Boltzmann equation with other boundary conditions than the periodicity of the torus. On the torus we can mention [START_REF] Ukai | On the existence of global solutions of mixed problem for non-linear Boltzmann equation[END_REF][16] [START_REF] Guo | Boltzmann diffusive limit beyond the Navier-Stokes approximation[END_REF][27][4] [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] for collision kernels with hard potentials with cutoff, [START_REF] Gressman | Global classical solutions of the Boltzmann equation without angular cut-off[END_REF] without the assumption of angular cutoff or [START_REF] Guo | Classical solutions to the Boltzmann equation for molecules with an angular cutoff[END_REF][23] for soft potentials.
A good review of the methods and techniques used can be found in the exhaustive [START_REF] Ukai | Mathematical theory of the Boltzmann equation[END_REF].
The study of the well-posedness of the Boltzmann equation, as well as the trend to equilibrium, when the spatial domain is bounded with non-periodic boundary conditions is scarce and only focuses on hard potential kernels with angular cutoff. The cornerstone is the work by Guo [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] who established a complete Cauchy theory around a global Maxwellian and prove the exponential convergence to equilibrium in L ∞
x,v with an important weight v β µ(v) -1/2 . The latter weight is quite restrictive and has been required in all the studies so far. This perturbative theory is done in smooth convex domain for Maxwellian diffusion boundary conditions and strictly convex and analytic domains in the case of specular reflections (note that in-flow and bounce-back boundary conditions are also dealt with). The method of Guo is based on an L 2 -L ∞ theory, we briefly explain it later, that was then used in [START_REF] Kim | The Boltzmann equation near a rotational local Maxwellian[END_REF] (to obtain similar perturbative results around a rotational local Maxwellian in the case of specular reflections) and recently in [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] to deal with non global diffusive boundary conditions in more general domains.
To conclude this overview let us mention that unlike the case of the torus where regularity theory in Sobolev spaces is now well established, a recent result by Kim [START_REF] Kim | Formation and propagation of discontinuity for Boltzmann equation in non-convex domains[END_REF] showed that singularities arise at non-convex points on the boundary even around a global Maxwellian. However, we can still recover some weak form of regularity in Ω is strictly convex [START_REF] Guo | Regularity of the Boltzmann Equation in Convex Domains[END_REF] or if the boundary conditions are diffusive [START_REF] Guo | BV-regularity of the Boltzmann equation in non-convex domains[END_REF].
As mentioned before, the main goal of the present work is to establish the perturbative well-posedness and exponential trend to equilibrium for the Boltzmann equation with specular reflexion or diffusive boundary conditions in the L ∞
x,v setting with less restrictive weights than the studies mentioned above. More precisely, we shall deal with L ∞
x,v (m) where m is either a stretch exponential or a polynomial instead of m = v β µ(v) -1/2 with β large. There are two main advances in this work. The first one is a study of transport-like equations with diffusive boundary conditions in a mixed setting
L 1 v L ∞ x .
The second one is a new analytic version of the extension theory of Gualdani, Mischler and Mouhot [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] that fits both the boundary conditions and the lack of hypodissipativity of the linear operator.
More precisely, the main contribution of our work if to establish a Cauchy theory in more general spaces. The main strategy is to combine a decomposition of the Boltzmann linear operator L into A + B where B will act like a small perturbation of the operator G ν = -v • ∇ x -ν(v) and A has a regularizing effect. This idea comes from the recent work [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] for which we develop here an analytic and nonlinear version. The regularizing property of the operator A allows us to decompose the perturbative equation into a system of differential equations
∂ t f 1 + v • ∇ x f 1 = Bf 1 + Q(f 1 , f 1 + f 2 ) (1.10) ∂ t f 2 + v • ∇ x f 2 = Lf 2 + Q(f 2 , f 2 ) + Af 1 (1.11)
where the first equation can be solved in L ∞
x,v (m) and the second is dealt with in
L ∞ x,v v β µ -1/2
) where the theory of Guo [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] is known to hold.
The key ingredient to study (1.10) is to show that G ν along with boundary conditions generates a semigroup S Gν (t) exponentially decaying in L ∞
x,v (m). The specular reflections and diffusive boundary conditions cannot be treated by the standard semigroup results in bounded domain [START_REF] Beals | Abstract time-dependent transport equations[END_REF] and we adapt the tools developed in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] to the weights m considered here. We obtain an explicit form for S Gν (t) in the case of specular reflection whereas we only have an implicit description of it in the case of Maxwellian diffusion. The latter implicit description includes the contribution of all the possible backward characteristic trajectories starting at (t, x, v). We then use the fact that the measure of the set of trajectories not reaching the initial plane {t = 0} is small.
The second difficulty in solving (1.10) is to prove that B does not perturb "too much" the exponential decay generated by the semigroup S Gν (t). Indeed, the latter semigroup is not strongly continuous and we therefore loose the hypodissipativity properties that hold for G ν is the case of the torus [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF]. The case of specular reflections can be dealt with thanks to a Duhamel formulation because S Gν (t) has a good contractive property. Such a property is missing in the case of diffusive boundary condition. Due to the implicit description of S Gν (t), the proof of B being a small perturbation of G ν requires a L 1 v L ∞ x -theory for the semigroup S Gν (t) as well as a new mixing estimate for B. The study of transport-like equations with boundary conditions in mixed norms seems new to our knowledge. The second equation (1.11) can be solved easily using the regularizing property of the operator A and the results already described for specular reflections in strictly convex and analytic domains or [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] for Maxwellian diffusion boundary condition in C 1 bounded domains.
We conclude by mentioning that our results also give the continuity of the aforementioned solutions away from the grazing set Λ 0 . Such a property also allows us to obtain the positivity (and quantify it explicitely) of the latter solutions thanks to recent results by the author [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] [START_REF] Briant | Instantaneous exponential lower bound for solutions to the boltzmann equation with maxwellian diffusion boundary conditions[END_REF].
1.3. Organisation of the article. Section 2 is dedicated to the statement and the description of the main results proved in this paper. We also give some background properties about the linear Boltzmann operator.
In Section 3 we study the semigroup generated by the transport part and the collision frequency kernel G ν = -v • ∇ x -ν along with boundary conditions.
We give a brief review of the existing L 2 -L ∞ theory for the full linear perturbed operator
G = -v • ∇ x + L in Section 4.
We present and solve the system of equations (1.10)-(1.11) in Section 5. Lastly, Section 6 is dedicated to the proof of existence, uniqueness, exponential decay, continuity and positivity of solutions to the full Boltzmann equation (1.1).
Main results
Some essential background on the perturbed Boltzmann equation.
We gather here some renown properties about the Boltzmann equation.
A priori conservation laws.We start by noticing the symmetry property of the Boltzmann operator (see [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF][10] [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] among others).
Lemma 2.1. Let f be such that Q(f, f ) is well-defined. Then for all Ψ(v) we have R 3 Q(f, f )Ψ dv = C Φ 4 R d ×R d ×S d-1 q(f )(v, v * ) [Ψ ′ * + Ψ ′ -Ψ * -Ψ] dσdvdv * , with q(f )(v, v * ) = |v -v * | γ b (cos θ) f f * .
This result is well-known for the Boltzmann equation and is a simple manipulation of the integrand using changes of variables (v, v * ) → (v * , v) and (v, v * ) → (v ′ , v ′ * ), as well as using the symmetries of the operator q(f ). A straightforward consequence of the above is the a priori conservation of mass when one consider either specular reflections or Maxwellian diffusion (2.1) ∀t 0,
Ω×R 3 f (t, x, v) dxdv = Ω×R 3 f 0 (x, v) dxdv.
In the case of specular reflections Lemma 2.1 also implies the a priori conservation of energy (2.2) ∀t 0,
Ω×R 3 |v| 2 f (t, x, v) dxdv = Ω×R 3 |v| 2 f 0 (x, v) dxdv.
Lastly, in the specific case of specular reflections inside a domain Ω with an axis of rotation symmetry:
(2.3) ∃x 0 , ω ∈ R 3 , ∀x ∈ ∂Ω, {(x -x 0 ) × ω} • n(x) = 0,
we also obtain the a priori conservation of the following angular momentum (2.4) ∀t 0,
Ω×R 3 {(x -x 0 ) × ω}•vf (t, x, v)dxdv = Ω×R 3 {(x -x 0 ) × ω}•vf 0 (x, v)dxdv.
The linear Boltzmann operator. We gather some well-known properties of the linear Boltzmann operator L (see [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF][START_REF] Cercignani | The mathematical theory of dilute gases[END_REF][START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF][START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] for instance).
L is a closed self-adjoint operator in
L 2 v µ -1/2 with kernel Ker (L) = Span {φ 0 (v), . . . , φ 4 (v)} µ, where (φ i ) 0 i 4 is an orthonormal basis of Ker (L) in L 2 v µ -1/2 . More precisely, if we denote π L the orthogonal projection onto Ker (L) in L 2 v µ -1/2 : (2.5) π L (g) = 4 i=0 R 3 g(v * )φ i (v * ) dv * φ i (v)µ(v) φ 0 (v) = 1, φ i (v) = v i , 1 i 3, φ 4 (v) = |v| 2 -3 √ 6 ,
and we define
π ⊥ L = Id -π L . The projection π L (f (x, •))(v) of f (x, v
) onto the kernel of L is called its fluid part whereas π ⊥ L (f ) is its microscopic part. L can be written under the following form
(2.6) L = -ν(v) + K,
where ν(v) is the collision frequency
ν(v) = R 3 ×S 2 b (cos θ) |v -v * | γ µ * dσdv *
and K is a bounded and compact operator in L 2 v µ -1/2 that takes the form
K(f )(v) = R 3 k(v, v * )f (v * ) dv * .
Finally we remind that there exist ν 0 , ν 1 > 0 such that
(2.7) ∀v ∈ R 3 , ν 0 (1 + |v| γ ) ν(v) ν 1 (1 + |v| γ ),
and that L has a spectral gap λ L > 0 in L 2 x,v µ -1/2 (see [START_REF] Baranger | Explicit spectral gap estimates for the linearized Boltzmann and Landau operators with hard potentials[END_REF][START_REF] Mouhot | Explicit coercivity estimates for the linearized Boltzmann and Landau operators[END_REF] for explicit proofs)
(2.8) ∀f ∈ L 2 v µ -1/2 , L(f ), f L 2 v( µ -1/2 ) -λ L π ⊥ L (f ) 2 L 2 v (µ -1/2 ) .
The linear perturbed Boltzmann operator. The linear perturbed Boltzmann operator is the full linear part of the perturbed Boltzmann equation (1.4):
G = L -v • ∇ x .
An important point is that the same computations as to show the a priori conservation laws implies that in L 2
x,v µ -1/2 the space Span µ, |v| 2 µ ⊥ is stable under the flow
∂ t f = G(f )
with specular reflections whereas (Span {µ}) ⊥ is stable under the same differential equation with diffusive boundary conditions. We thus define the L 2 x,v µ -1/2projection onto that space (2.9)
Π G (f )(v) = Ω×R 3 h(x, v * ) dxdv * µ(v) + Ω×R 3 |v * | 2 h(x, v * ) dxdv * |v| 2 µ(v),
(with the addition of the angular momentum term when Ω is axis-symmetric) and in the case of Maxwellian diffusion
(2.10) Π G (f )(v) = Ω×R 3 h(x, v * ) dxdv * µ(v).
Again we define Π
⊥ G = Id -Π G .
In order to avoid repeating the conservation laws, for a function space E we define the following sets
SR [E] = {f ∈ E, Π G (f ) = 0, Π G defined for specular reflection (2.9)} MD [E] = {f ∈ E, Π G (f ) = 0, Π G defined for specular reflection (2.10)} .
This amounts to saying that the functions in SR [E] satisfy the conservation of mass (2.1) and energy (2.2) (and angular momentum (2.4) if Ω is axis-symmetric) whilst the functions in MD [E] satisfy the conservation of mass (2.1).
Main theorems.
We start with the following definition. Definition 2.2. Let Ω be a bounded domain in R 3 . We say that Ω is analytic and strictly convex if there exists an analytic function ξ : R 3 -→ R such that Ω = {x : ξ(x) < 0} and
• at the boundary ξ(x) = 0 and ∇ξ(x) = 0,
• there exists c ξ > 0 such that for all x ∈ R 3 ,
(2.11)
1 i,j 3 ∂ ij ξ(x)x i x j c ξ |x| 2 .
The present work is dedicated to proving the following two perturbative studies for the Boltzmann equation in bounded domains.
Theorem 2.3. Let Ω be an analytic strictly convex (2.11) bounded domain and B be a collision kernel of the form (1.6) with hard potential (1.7) and angular cutoff (1.8). Let m = e κ|v| α with κ > 0 and α in (0, 2) or m = v k with k > 1 + γ + 16πb ∞ l b where b ∞ and l b were defined by (1.8) and (1.9). Then there exists η 0 , C 0 and λ 0 > 0 such that if
F 0 = µ + f 0 with f 0 in SR L ∞ x,v (m) satisfies f 0 L ∞ x,v (m) η 0
then there exists a unique
F = µ + f with f in L ∞ [0,+∞) SR L ∞ x,v (
m) solution to the Boltzmann equation (1.1) with specular reflections boundary conditions (1.2). Moreover, the following holds
(1) ∀t 0,
f (t) L ∞ x,v (m) C 0 e -λ 0 t f 0 L ∞ x,v (m) ;
(2) if F 0 0 is continuous on Ω × R 3 -Λ 0 and satisfies the specular reflections boundary condition then F 0 and
F is continuous on [0, +∞) × Ω × R 3 -Λ 0 .
Remark 2.4. We make a few comments about the previous result.
• The analyticity and the strict convexity of the domain are required to ensure that one can use the control of the L ∞
x,v v β µ -1/2 theory by the L 2 x,v µ -1/2 developed in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] (see Remark 4.4). Moreover, the methods are constructive starting from [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF]. The constants are thus not explicit since the methods in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] are not. Obtaining a constructive theory in the latter spaces, thus getting rid of the strong assumption of analyticity, would be of great interest;
• The positivity of F is actually quantified [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] [START_REF] Briant | Instantaneous exponential lower bound for solutions to the boltzmann equation with maxwellian diffusion boundary conditions[END_REF] and is an explicit Maxwellian lower bound. We refer to Subsection 6.4 for more details; • The uniqueness is obtained in a perturbative setting, i.e. on the set of function of the form F = µ + f with f small. If the uniqueness of solutions to the Boltzmann equation in
L 1 v L ∞ x ( v 2+0
) is known on the torus [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] a uniqueness theory outside the perturbative regime remains, at this date, an open problem in the case of bounded domains.
We obtain a similar result in the case of Maxwellian diffusion boundary condition. As explained in the introduction, the assumptions on the domain Ω are far less restrictive. We however define two new sets.
For (x, v) define the backward exit time by
t b (x, v) = inf {t > 0, x -tv / ∈ Ω} and the footprint x b (x, v) = x -t b (x, v)v. Define the singular grazing boundary Λ (S) 0 = {(x, v) ∈ Λ 0 , t b (x, v) = 0 or t b (x, -v) = 0}
and the discontinuity set
D = Λ 0 ∪ (x, v) ∈ Ω × R 3 , (x b (x, v), v) ∈ Λ (S) 0 Theorem 2.5.
Let Ω be a C 1 connected bounded domain and B be a collision kernel of the form (1.6) with hard potential (1.7) and angular cutoff (1.8). Let m = e κ|v| α with κ > 0 and α in (0, 2)
or m = v k with k > 1 + γ + 16πb ∞ l b
where b ∞ and l b were defined by (1.8) and (1.9).
Then there exists η 0 , C 0 and λ 0 > 0 such that if
F 0 = µ+f 0 with f 0 in MD L ∞ x,v (m) satisfies f 0 L ∞ x,v (m) η 0
then there exists a unique
F = µ + f with f in L ∞ [0,+∞) MD L ∞ x,v (
m) solution to the Boltzmann equation (1.1) with Maxwellian diffusion boundary conditions (1.3). Moreover, the following holds
(1) ∀t 0,
f (t) L ∞ x,v (m) C 0 e -λ 0 t f 0 L ∞ x,v (m) ; (2) if F 0 0 is continuous on Ω × R 3 -Λ 0 and satisfies the Maxwellian diffusion boundary condition then F 0 is continuous on [0, +∞) × Ω × R 3 -D .
Remark 2.6. We first emphasize that the latter Theorem is obtained with constructive arguments and the constants η 0 , C 0 and λ 0 > 0 can be computed explicitly in terms of m and the collision operator. Then we make a few comments.
• In the case of a convex domain D = Λ 0 (see [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] Lemma 3.1);
• The rate of trend to equilibrium λ 0 can be chosen as close as one wants from the optimal one in the L 2 µ -1/2 framework; • The positivity of F can be quantified in the case Ω convex [START_REF] Briant | Instantaneous exponential lower bound for solutions to the boltzmann equation with maxwellian diffusion boundary conditions[END_REF]. We obtain an explicit Maxwellian lower bound, see Subsection 6.4; • Here again the uniqueness is obtained only in a perturbative setting.
Preliminaries: semigroup generated by the collision frequency
For general domains Ω, the Cauchy theory in L p x,v (1 p < +∞) of equations of the type
∂ t f + v • ∇ x f = g with boundary conditions ∀(x, v) ∈ Λ -, f (t, x, v) = P (f )(t, x, v),
where P : L p Λ + -→ L p Λ -is a bounded linear operator, is well-defined in L p x,v when P < 1 [START_REF] Beals | Abstract time-dependent transport equations[END_REF]. The specific case P = 1 can still be dealt with ([2] Section 4) but if the existence of solutions in L p
x,v can be proved, the uniqueness is not always given unless one can prove that the trace of f belongs to L 2 loc R + ; L p x,v (Λ) . For specular reflections or Maxwellian diffusion boundary conditions, the boundary operator P is of norm exactly one and the general theory fails. If this generates difficulties for the full linear operator L, we can overcome this problem in the case of a mere multiplicative function g = ν(v).
This section is devoted to proving that the following operator
G ν = -ν(v) -v • ∇ x
generates a semigroup S Gν (t) in two different frameworks: the specular reflections and the Maxwellian diffusion . We prove that G ν along with either specular reflections or Maxwellian diffusion generates a semigroup with exponential decay in L ∞
x,v spaces endowed with polynomial and stretch exponential weights.
The L ∞ v setting is essential for the existence of solutions in the case of specular reflections since one needs to control the solution along the characteristic trajectories (see Remark 3.2) whereas we show that in the case of diffusion it also generates a semigroup in weighted
L 1 v L ∞ x .
We emphasize here that such a study was done in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] in
L ∞ x,v m(v)µ -1/2
. We extend his proofs to more general and less restrictive weights as well as to a new
L 1 v L ∞
x setting in the diffusive setting. Regarding existence and uniqueness, the methods are standard in the study of linear equations in bounded domains [START_REF] Beals | Abstract time-dependent transport equations[END_REF] for specular reflections and rely on an approximation of the boundary operator P ( P = 1). The case of Maxwellian diffusion is different since the norm of the boundary operator heavily depends on the weight function. In the case L ∞
x,v we prove that we can use the arguments developed in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] whereas the new framework L 1 v L ∞ x requires new estimates to obtain weak converge which does not come directly from uniform boudnedness in L 1 .
The exponential decay is more intricate and requires a description of the characteristic trajectories for the free transport equation with boundary conditions to obtain explicit formula in terms of f 0 for S Gν (t)f 0 . Although this is possible in the case of specular reflections, such an explicit form is not known for the Maxwellian diffusion and it has to be dealt with using equivalent norms.
3.1. The case of specular reflections. As we shall see, the case of specular reflections in a weighted Lebesgue space is equivalent to the same problem with a weight 1. The study in L ∞
x,v has been done in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Lemma 20 but we write it down for the sake of completeness.
Proposition 3.1. Let m = e κ|v| α with κ > 0 and α in (0, 2) or m = v k with k in N; let f 0 be in L ∞ x,v (m).
Then there exists a unique solution
S Gν (t)f 0 ∈ L ∞ x,v (m) to (3.1) [∂ t + v • ∇ x + ν(v)] (S Gν (t)f 0 ) = 0 such that (S Gν (t)f 0 )| Λ ∈ L ∞ Λ (m) and satisfying the specular reflections (1.2) with initial data f 0 . Moreover it satisfies ∀t 0, S Gν (t)f 0 L ∞ x,v (m) e -ν 0 t f 0 L ∞ x,v (m) , with ν 0 = inf {ν(v)} > 0.
Proof of Proposition 3.1. The proof will be done in three steps: uniqueness, existence and finally the exponential decay. We start by noticing that if f belongs to L ∞ x,v (m) and satisfies
[∂ t + v • ∇ x + ν(v)] f (t) = 0
with specular reflections boundary condition then h = m(v)f is also a solution with specular reflections and h belongs to L ∞
x,v and its restriction on Λ belongs to L ∞ Λ . Thus, we only prove the proposition in the case m = 1.
Step 1: Uniqueness. Assume that there exists such a solution
f in L ∞ x,v . Consider the function h(t, x, v) = v -β f (t, x, v) where β is chosen such that (3.2) v -2β (1 + |v|) ∈ L 1 v . A mere Cauchy-Schwarz inequality shows that h is in L 2
x,v and h| Λ ∈ L 2 Λ . Moreover, h satisfies the same differential equality as f . Multiply (3.1) by h and integrating in x and v, we can use the divergence theorem on Λ and the fact that ν(v) ν 0 > 0:
1 2
d dt h 2 L 2 x,v = Ω×R 3 h(t, x, v) [-v • ∇ x -ν(v)] h(t, x, v) dxdv = - Ω×R 3 v • ∇ x h 2 dxdv -ν(v)h 2 L 2 x,v - Λ |h(t, x, v)| 2 (v • n(x)) dS(x)dv -ν 0 h 2 L 2 x,v . (3.3)
The integral on Λ is null since h satisfies the specular reflections and therefore we can apply a Grönwall lemma to h L 2 x,v and obtain the uniqueness for h and thus for f .
Step 2: Existence. Existence is proved by approximating the specular reflections in order to get a decrease at the boundary and be in the case P < 1.
Let f 0 be in L ∞
x,v . For any ε in (0, 1) we consider the following differential problem with
h ε ∈ L ∞ x,v and h ε | Λ ∈ L ∞ Λ : (3.4) [∂ t + v • ∇ x + ν] h ε = 0, h ε (0, x, v) = f 0 (x, v)
with the absorbed specular reflections boundary condition
∀(t, x, v) ∈ R 3 × Λ -, h ε (t, x, v) = (1 -ε)h ε (t, x, R x (v))).
This problem has a unique solution. Indeed we construct the following iterative scheme
[∂ t + v • ∇ x + ν] h (l+1) ε = 0, h (l+1) ε (0, x, v) = f 0 (x, v) h (0) ε Λ + = 0 and ∀t > 0, ∀(x, v) ∈ Λ -, h (l+1) ε (t, x, v) = h (l) ε (t, x, R x (v)) . The functions h (l)
ε are well-defined because the boundary condition is now an in-flow boundary condition to which existence is known (see [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Lemma 12 for instance).
We know that along the characteristic trajectories (straight lines in between two rebounds, see [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] Appendix for rigorous construction of characteristics in a C 1 bounded domain) e νt h (l+1) ε is constant. We denote the first backward exit time by
(3.5) t min (x, v) = max t 0; x -sv ∈ Ω, ∀0 s t . Consider (x, v) / ∈ Λ 0 ∪ Λ -, then t min (x, v) > 0. If t min (x, v
) t then the backward characteristic line starting at (x, v) at time t reaches the initial plan {t = 0} whereas if t min (x, v) < t it hits the boundary at time t -t min (x, v) at (x -t min (x, v)v, v) ∈ Λ -, where we can apply the boundary condition. Therefore we have the following representation for h (l+1) ε for all t 0 and for almost all (x, v) / ∈ Λ 0 ∪ Λ -,
h (l+1) ε (t, x, v) =1 t min (x,v) t e -ν(v)t f 0 (x -tv, v) + 1 t min (x,v)<t (1 -ε)e -ν(v)t min (x,v) h (l) ε Λ + (t -t min , x 1 , v 1 ), (3.6)
where we defined
x 1 = x -t min (x, v)v and v 1 = R x 1 (v).
For all t 0, for all (x, v) / ∈ Λ 0 ∪ Λ -and for all l 1,
(3.7) h (l+1) ε (t, x, v) -h (l) ε (t, x, v) (1 -ε) h (l) ε Λ + (t, x 1 , v 1 ) -h (l-1) ε Λ + (t, x 1 , v 1 ) . Thus, considering (x, v) ∈ Λ + we show that h (l) ε Λ + l∈N is a Cauchy sequence in L ∞ t L ∞ Λ + .
Then by the boundary condition it implies that h
(l) ε Λ -l∈N is also a Cauchy sequence in L ∞ t L ∞ Λ -. Finally from (3.7), h (l) ε l∈N is also a Cauchy sequence in L ∞ t L ∞ x,v . Remark 3.2. The L ∞ framework is essential to obtain the control of h (l+1) ε -h (l) ε by the control of h (l) ε -h (l-1) ε at (x 1 , v 1 )
. Any other L p x spaces would have required to change the variable v 1 → v to which a computation of the jacobian is still a very hard problem (see [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Subsection 4.3.1) and can be 0.
We obtain existence of h ε solution to (3.4) by letting l tend to infinity. The latter solution is unique since its restriction on the boundary belongs to L ∞ Λ . Indeed, we can apply the divergence theorem as in (3.3) which yields uniqueness because the integral on Λ is positive since 1 -ε < 1.
It only remains to show that one can indeed take the limit of (h ε ) ε>0 when ε goes to zero.
We remind that we chose h
(0) ε Λ +
= 0 and therefore by (3.6) applied to (x, v) ∈ Λ + :
h (l+1) ε (t, x, v) f 0 L ∞ x,v if t t min (x, v) h (l) ε Λ + L ∞ Λ + if t > t min (x, v).
The latter further implies
(3.8) ∀l 0, ∀t 0, h (l) ε (t, •, •) L ∞ Λ + f 0 L ∞ x,v .
The boundary condition then implies
(3.9) ∀l 0, ∀t 0, h (l) ε (t, •, •) L ∞ Λ - f 0 L ∞ x,v ,
and finally the representation of h
(3.10) ∀t 0, h (l) ε (t, •, •) L ∞ x,v f 0 L ∞ x,v .
From the uniform controls (3.8) -(3.9) -(3.10) one can take a weak-
* limit of h ε in L ∞ t,x,v and of h ε | Λ in L ∞ t L ∞
Λ and such a limit is solution to our initial problem.
Step 3: Exponential decay. We use the study of backwards characteristic trajectories of the transport equation in C 1 bounded domains derived in [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF].
For (x, v) / ∈ Λ 0 , the backwards trajectory starting from (x, v) are straight lines in between two consecutive rebounds. We define a sequence of rebounds
(t i , x i , v i ) = (t i (x, v), x i (x, v), v i (x, v)) with (t 0 , x 0 , v 0 ) = (t, x, v
) that are the footprints (and time) of the backward trajectories of the transport equation in Ω starting at (x, v) at time t (see [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] Proposition A.8 and Definition A.6). Moreover, the sequence (t i , x i , v i ) is almost always well defined (countably many rebounds) and finite for any given t 0 (see [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] Proposition A.4).
With this description of characteristics we can iterate the process initiated in (3.6). This gives that e ν(v)t h ε is constant along characteristics and
(3.11) h ε (t, x, v) = i 1 [t i+1 ,t i ) (0) [1 -ε] i e -ν(v)t f 0 (x i -t i v i , v i ), for almost every (x, v) ∈ Ω × R 3 -Λ 0 . Note that we used that ν(v i ) = ν(v) because
ν is invariant by rotations. Moreover, the summation is almost always finite and when it is there is only one term (see [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] Appendix). For this i th term we have
|h ε (t, x, v)| e -ν 0 t |f 0 (x i -t i v i , v i )| e -ν 0 t f 0 l ∞ x,v
, which is the desired exponential decay by taking the weak-* limit of h ε in L ∞ t,x,v .
3.2. The case of Maxwellian diffusion. The diffusion operator on the boundary does not have a norm equals to one, the latter norm heavily depends on the weight of the space. The exponential decay is delicate since we do not have an explicit representation of S Gν (t) along characteristic trajectories. One needs to control the characteristic trajectories that do not reach the plane {t = 0} in time t. As we shall see, this number of problematic trajectories is small when the number of rebounds is large and so can be controlled for long times.
Proposition 3.3. Let q ∈ {1, ∞} m = e κ|v| α with κ > 0 and α in (0, 2)
or m = v k with k > 2 1/q 4 1-1/q ; let f 0 be in L q v L ∞ x (m).
Then there exists a unique solution
S Gν (t)f 0 ∈ L q v L ∞ x (m) to (3.12) [∂ t + v • ∇ x + ν(v)] (S Gν (t)f 0 ) = 0 such that (S Gν (t)f 0 )| Λ ∈ L q L ∞ Λ (m) and satisfying the Maxwelian diffusion (1.3) with initial data f 0 . Moreover it satisfies ∀ν ′ 0 < ν 0 , ∃ C ν ′ 0 > 0, ∀t 0, S Gν (t)f 0 L q v L ∞ x (m) C ν ′ 0 e -ν ′ 0 t f 0 L q v L ∞ x (m) , with ν 0 = inf {ν(v)} > 0.
Proof of Proposition 3.3. We first prove uniqueness, then existence and finally exponential decay of solutions.
Step 1: Uniqueness. Assume that there exists such a solution f in L ∞
x,v (m). The choice of weight implies
m(v) -1 (1 + |v|) ∈ L 1 v ,
and hence f belongs to L 1 x,v and f | Λ belongs to L 1 Λ . We can therefore use the divergence theorem and the fact that ν(v) ν 0 > 0:
d dt f L 1 x,v = Ω×R 3 sgn(f (t, x, v)) [-v • ∇ x -ν(v)] f (t, x, v) dxdv = - Ω×R 3 v • ∇ x (|f |) dxdv -ν(v)f L 1 x,v - Λ |f (t, x, v)| (v • n(x)) dS(x)dv -ν 0 f L 1 x,v . (3.13)
Then using the change of variable v → R x (v), which has jacobian one, we have the boundary conditions (1.3)
Λ - |P Λ (f )(x, v)| |v • n(x)| dS(x)dv Λ + |f (t, x, v * )| |v * • n(x)| dS(x)dv * ,
which implies that the integral on the boundary is positive. Hence uniqueness follows from a Grönwall lemma.
The case q = 1 is dealt with the same way since
L 1 v L ∞ x (m) ⊂ L 1 x,v and also L 1 L ∞ Λ (m) ⊂ L 1 L ∞ Λ .
Step
2: Existence. Let f (t, x, v) ∈ L q v L ∞
x (m) be a solution to (3.12) satisfying Maxwellian diffusion boundary conditions and f
| Λ ∈ L q L ∞ Λ (m). Then h(t, x, v) = m(v)f (t, x, v) belongs to L q v L ∞ x with h| Λ ∈ L 1 L ∞ Λ .
Moreover, h satisfies the differential equation (3.12) with the following boundary condition for all t > 0 (3.14)
∀(x, v) ∈ Λ -, h(t, x, v) = c µ m(v)µ(v) v * •n(x)>0 h(t, x, v * )m(v * ) -1 |v * • n(x)| dv * .
In order to work without weight we will prove the existence of
f ∈ L q v L ∞ x such that f | Λ ∈ L q L ∞ Λ and f satisfies [∂ t + v • ∇ x + ν(v)] f = 0
with the new diffusive condition (3.14). And we will prove exponential decay in L q v L ∞ x for this function f .
To prove existence we consider the following iterative scheme with
h (l) ∈ L q v L ∞ x and h (l) Λ ∈ L q L ∞ Λ : [∂ t + v • ∇ x + ν] h (l) = 0, h (l) (0, x, v) = f 0 (x, v)1 {|v| l} with the absorbed diffusion boundary condition for t > 0 and (x, v) in Λ - h (l) (t, x, v) = P (l) Λ,m (h (l) Λ + )(t, x, v) = 1 - 1 l c µ m(v)µ(v) v * •n(x)>0 h (l) (t, x, v * )m(v * ) -1 |v * • n(x)| dv * . (3.15)
Again, multiplying h (l) by the appropriate weight raise the uniqueness of such a h (l) for any given l. The existence is proved via h
(l) = m(v) -1 µ -1 h (l) since it satisfies [∂ t -v • ∇ x -ν(v)] h (l) = 0 with the boundary condition h (l) (t, x, v) = 1 - 1 l v * •n(x)>0 h (l) (t, x, v * )c µ µ(v * ) |v * • n(x)| dv *
and the initial data
h (l) 0 L q v L ∞ x = m -1 µ -1 f 0 1 {|v| l} L q v L ∞ x C l,m f 0 L q v L ∞ x .
The boundary operator from
L q L ∞ Λ + to L q L ∞ Λ -applied to h (l) is bounded by (1 - l -1 ) < 1 and therefore h (l) ∈ L q v L ∞
x exists with its restriction in L q L ∞ Λ (see [START_REF] Beals | Abstract time-dependent transport equations[END_REF]). Thus the existence of h (l) . The proof that h (l) is indeed in L q v L ∞ x and converges as l tends to infinity will be done within the proof of exponential decay uniformly in l.
Step 3: Exponential decay. As for the specular case (3.6), we can use the flow of characteristic to obtain a representation of h (l+1) in terms of f 0 and h (l) . We recall the boundary operator
P (l) Λ,m (3.15) and for all (x, v) / ∈ Λ 0 ∪ Λ -, h (l) (t, x, v) =1 t 1 (x,v) 0 e -ν(v)t f 0 (x -tv, v)1 {|v| l} + 1 t 1 (x,v)>0 e -ν(v)(t-t 1 ) P (l) Λ,m h (l) Λ + (t 1 , x 1 , v), (3.16)
where we defined t 1 = t -t min (x, v) and
x 1 (x, v) = x -(t -t 1 (x, v))v.
The idea is to iterate the latter representation inside the integral term P (l) Λ,m . This leads to a sequence of functions (t p , x p , v p ) depending on the independent variables
(t i , x i , v i ) 0 i p-1 with (t 0 , x 0 , v 0 ) = (t, x, v).
To shorten notations we define the probability measure on Λ +
dσ x (v) = c µ µ(v) |v • n(x)| dv
and remark that the boundary condition (3.15) becomes
h (l) (t, x, v) = 1 - 1 l 1 m(v) v * •n(x)>0 h (l) (t, x, v * ) m(v * ) dσ x (v * ) with (3.17) m(v) = 1 c µ µ(v)m(v)
.
With these notations one can derive the following implicit iterative representation of h (l) . We refer to [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Lemma 24 and (208) for a rigorous induction.
• If t 1 0 then (3.18) h (l) (t, x, v) = e -ν(v)t f 0 (x -tv, v)1 {|v| l} ; • If t 1 > 0 then for all p 2, h (l) (t, x, v) = 1 m(v) e -ν(v)(t-t 1 ) p i=1 1 - 1 l i p j=1 {v j •n(x i )>0} 1 [t i+1 ,t i ) (0) h (l) 0 (x i -t i v i , v i )dΣ i (0) + 1 - 1 l p 1 m(v) e -ν(v)(t-t 1 ) p j=1 {v j •n(x i )>0} 1 t p+1 >0 h (l) (t p , x p , v p )dΣ p (t p ), (3.19)
where
dΣ i (s) = e -ν(v i )(t i -s) m(v i ) i-1 j=1 e -ν(v j )(t j -t j+1 ) dσ x 1 (v 1 ) . . . dσ xp (v p ).
The last term on the right-hand side of (3.19) represents all the possible trajectories that are still able to generate new trajectories after p rebounds. The first term describes all the possible trajectories reaching the initial plane {t = 0} in at most p rebounds.
Computations are similar either q = 1 or q = ∞. For q = ∞, it is enough to bound h (l) l∈N to obtain weak-* convergence whereas q = 1 requires more efforts. We therefore only deal with q = 1 and point out the few differences for q = ∞ in Remark 3.4.
We will prove that h (l)
L 1 v L ∞
x satisfies an exponential decay uniformly in l and then show that h (l) l∈N (resp. its restrictions on Λ + and Λ -) is weakly compact in
L ∞ t L 1 v L ∞
x (resp. on Λ + and Λ -). The proof will be done in three steps. We first study the sequence h (l)
1 t 1 0 l∈N in L ∞ t L 1 v L ∞ x , then h (l) 1 t 1 >0 l∈N in L ∞ [0,T 0 ] L 1 v L ∞
x with T 0 large and finally h (l) 1 t 1 >0 l∈N .
Step 3.1: {t 1 0}. We first use (3.18) for all l in N and all t 0, (3.20)
h (l) 1 t 1 0 (t, •, •) L 1 v L ∞ x e -ν 0 t f 0 L 1 v L ∞ x .
And also for all measurable set K ⊂ R 3 , (3.21)
K sup x∈Ω h (l) (t, x, v)1 t 1 0 dv K sup x∈Ω |f 0 (t, x, v)| dv. f 0 belongs to L 1 v L ∞
x and therefore the latter inequality implies that the sequence sup x∈Ω h (l) 1 t 1 0 (t, x, •) l∈N is bounded and equi-integrable. The latter is also true restricted to Λ + since in that case h (l) 1 t 1 0 Λ + = 0.
Step 3.2: {t 1 > 0} and 0 t T 0 . We focus on the case t 1 > 0. The exponential decay in dΣ i (s) is bounded by e -ν 0 (t 1 -s) and we notice that the definition of m (3.17) implies
m(v)dσ x (v) = m(v) -1 |v • n(x)| dv
We first take the supremum over x in Ω and then integrate in v over R d the first term on the right-hand side of (3.19) and we obtain the following upper bound
e -ν 0 t R 3 dv m(v) sup x∈Ω p i=1 p j=1 j =i {v j •n(x i )>0} 1 [t i+1 ,t i ) (0) R 3 sup y∈Ω h (l) 0 (y, v i ) |v i | m(v i ) dv i p j=1 j =i dσ x j (v j ) .
C m e -ν 0 t R 3 dv m(v) f 0 L 1 v L ∞ x sup x,v p i=1 p j=1 {v j •n(x i )>0} 1 [t i+1 ,t i ) (0) dσ x 1 . . . dσ xp C m e -ν 0 t R 3 dv m(v) f 0 L 1 v L ∞ x C m e -ν 0 t f 0 L 1 v L ∞ x , (3.23)
where we used the fact that v i •n(x i )>0 dσ x i (v i ) = 1 and the following control
(3.24) R 3 dv m(v) C m .
We now turn to the study of the second term on the right-hand side of (3.19). We first notice that on the set $\{t_{p+1}>0\}$ we have $t_1(t_p,x_p,v_p)>0$, and therefore
$$\mathbf{1}_{t_{p+1}>0}\,\big|h^{(l)}(t_p,x_p,v_p)\big|\ \leq\ \mathbf{1}_{t_p>0}\,\sup_{y\in\Omega}\big|h^{(l)}(t_p,y,v_p)\,\mathbf{1}_{t_1>0}\big|. \tag{3.25}$$
We take the supremum in $x\in\Omega$ and integrate in $v$ over $\mathbb{R}^3$ the second term on the right-hand side of (3.19), and make the same computations as for the first term. This yields the following upper bound for $0\leq t\leq T_0$:
$$C_me^{-\nu_0t}\int_{\mathbb{R}^3}\frac{dv}{m(v)}\Big(\sup_{0\leq s\leq T_0}e^{\nu_0s}\big\|h^{(l)}\mathbf{1}_{t_1>0}\big\|_{L^1_vL^\infty_x}\Big)\ \sup_{x,v}\int\prod_{j=1}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{t_p>0}\,\prod_{j=1}^{p}d\sigma_{x_i}.$$
As said at the beginning of the section, the contribution of the trajectories not hitting the initial plane after $p$ rebounds is small when $p$ becomes large. This is given by [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] Lemma 4.1, which states that there exist $C_1,C_2>0$ such that for all $T_0$ sufficiently large, taking $p=C_1T_0^{5/4}$ yields
$$\forall\,0\leq s\leq T_0,\ \forall x\in\Omega,\ \forall v\in\mathbb{R}^3,\quad \int\prod_{j=1}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{t_p>0}\,\prod_{j=1}^{p}d\sigma_{x_i}\ \leq\ \Big(\frac{1}{2}\Big)^{C_2T_0^{5/4}}. \tag{3.26}$$
Plugging this into the last inequality yields the following bound for the second term on the right-hand side of (3.19), for all $t$ in $[0,T_0]$:
$$C_me^{-\nu_0t}\Big(\frac{1}{2}\Big)^{C_2T_0^{5/4}}\int_{\mathbb{R}^3}\frac{dv}{m(v)}\ \sup_{0\leq s\leq T_0}e^{\nu_0s}\big\|h^{(l)}\mathbf{1}_{t_1>0}\big\|_{L^1_vL^\infty_x}\ \leq\ C_me^{-\nu_0t}\Big(\frac{1}{2}\Big)^{C_2T_0^{5/4}}\sup_{0\leq s\leq T_0}e^{\nu_0s}\big\|h^{(l)}\mathbf{1}_{t_1>0}\big\|_{L^1_vL^\infty_x}, \tag{3.27}$$
where we used (3.24).
Gathering (3.23) and (3.27) gives
$$\sup_{0\leq t\leq T_0}e^{\nu_0t}\big\|h^{(l)}\mathbf{1}_{t_1>0}\big\|_{L^1_vL^\infty_x}\ \leq\ C_m\|f_0\|_{L^1_vL^\infty_x}+C_m\Big(\frac{1}{2}\Big)^{C_2T_0^{5/4}}\sup_{0\leq t\leq T_0}e^{\nu_0t}\big\|h^{(l)}\mathbf{1}_{t_1>0}\big\|_{L^1_vL^\infty_x}.$$
Choosing $T_0$ even larger if need be, such that
$$C_m\Big(\frac{1}{2}\Big)^{C_2T_0^{5/4}}\leq\frac{1}{2},$$
gives
$$\exists\,C_m>0,\ \forall t\in[0,T_0],\quad \big\|h^{(l)}\mathbf{1}_{t_1>0}(t,\cdot,\cdot)\big\|_{L^1_vL^\infty_x}\ \leq\ C_me^{-\nu_0t}\|f_0\|_{L^1_vL^\infty_x}. \tag{3.28}$$
Moreover, in (3.23) and (3.27) we kept the dependencies on the integration against $v$ in $\mathbb{R}^3$. Taking the integration over a measurable set $K\subset\mathbb{R}^3$, the same computations and the same choice of $T_0$ would give
$$\exists\,C_m>0,\ \forall t\in[0,T_0],\quad \int_K\sup_{x\in\Omega}\big|h^{(l)}\mathbf{1}_{t_1>0}(t,x,v)\big|\,dv\ \leq\ C_m\|f_0\|_{L^1_vL^\infty_x}\int_K\frac{dv}{m(v)}. \tag{3.29}$$
Since $m^{-1}$ is integrable on $\mathbb{R}^3$, $\big(\sup_{x\in\Omega}\big|h^{(l)}\mathbf{1}_{t_1>0}\big|(t,x,\cdot)\big)_{l\in\mathbb{N}}$ is bounded and equi-integrable.

Step 3.3: conclusion. The constant $C_m$ in (3.28) does not depend on $T_0$. Therefore, for any $\nu_0'<\nu_0$ we can choose $T_0=T_0(m,\nu_0')$ large enough so that (3.28) holds on $[0,T_0]$ and $C_me^{-\nu_0T_0}\leq e^{-\nu_0'T_0}$. For that specific $T_0$ one has
$$\big\|h^{(l)}(T_0)\big\|_{L^1_vL^\infty_x}\leq e^{-\nu_0'T_0}\|f_0\|_{L^1_vL^\infty_x}.$$
We could now start the proof at $T_0$ up to $2T_0$, and iterating this process we get
$$\forall n\in\mathbb{N},\quad \big\|h^{(l)}(nT_0)\mathbf{1}_{t_1>0}\big\|_{L^1_vL^\infty_x}\leq e^{-\nu_0'T_0}\big\|h^{(l)}((n-1)T_0)\big\|_{L^1_vL^\infty_x}\leq e^{-2\nu_0'T_0}\big\|h^{(l)}((n-2)T_0)\big\|_{L^1_vL^\infty_x}\leq\dots\leq e^{-\nu_0'nT_0}\|f_0\|_{L^1_vL^\infty_x}.$$
Finally, for all $t$ in $[nT_0,(n+1)T_0]$ we apply (3.28) together with the above to get
$$\big\|h^{(l)}\mathbf{1}_{t_1>0}(t,\cdot,\cdot)\big\|_{L^1_vL^\infty_x}\leq C_me^{-\nu_0(t-nT_0)}\big\|h^{(l)}(nT_0)\big\|_{L^1_vL^\infty_x}\leq C_me^{-\nu_0t+(\nu_0-\nu_0')nT_0}\|f_0\|_{L^1_vL^\infty_x}.$$
Hence the uniform control in $t$:
$$\exists\,C_m>0,\ \forall t\geq 0,\quad \big\|h^{(l)}\mathbf{1}_{t_1>0}(t,\cdot,\cdot)\big\|_{L^1_vL^\infty_x}\leq C_me^{-\nu_0't}\|f_0\|_{L^1_vL^\infty_x}. \tag{3.30}$$
Again, if we integrate only over a measurable set $K\subset\mathbb{R}^3$, the same computations and the same choice of $T_0$ would give
$$\exists\,C_m>0,\ \forall t\geq 0,\quad \int_K\sup_{x\in\Omega}\big|h^{(l)}\mathbf{1}_{t_1>0}(t,x,v)\big|\,dv\ \leq\ C_m\|f_0\|_{L^1_vL^\infty_x}\int_K\frac{dv}{m(v)}. \tag{3.31}$$
Combining (3.20) and (3.30) we see that $\big(h^{(l)}\big)_{l\in\mathbb{N}}$ is bounded in $L^\infty_tL^1_vL^\infty_x$. Moreover, by (3.21) and (3.31), $\big(\sup_{x\in\Omega}|h^{(l)}|\big)_{l\in\mathbb{N}}$ is equi-integrable in $L^1_v$. We can therefore apply the Dunford–Pettis theorem for $L^1$, combined with the weak-* compactness property of $L^\infty$, and find that $\big(h^{(l)}\big)_{l\in\mathbb{N}}$ converges (up to a subsequence) weakly-* in $L^\infty_tL^1_vL^\infty_x$. The limit $f$ in $L^\infty_tL^1_vL^\infty_x$ is a solution to the linear equation $\partial_tf=G_\nu f$ with initial data $f_0$. Besides, since we always bounded the integral of $h^{(l)}$ on $\{v_i\cdot n(x_i)>0\}$ by its integral on $\mathbb{R}^3$, we could do the same computations for $h^{(l)}\big|_{\Lambda^+}$ by keeping the integral on $\{v_i\cdot n(x_i)>0\}$. The Dunford–Pettis theorem again, and then the boundary conditions, imply that $\big(h^{(l)}\big|_\Lambda\big)_{l\in\mathbb{N}}$ converges weakly-* in $L^\infty_tL^1L^\infty_\Lambda$. Thus $f$ satisfies the diffusive boundary condition (3.14), has its restriction on $\Lambda$ in $L^1L^\infty_\Lambda$, and the exponential decay holds (since it holds for all $h^{(l)}$ uniformly in $l$). This concludes the proof of existence and exponential decay.
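For the reader's convenience we recall, in the form used above, the compactness criterion (this restatement is ours): a family $\mathcal{F}\subset L^1(\mathbb{R}^3)$ is relatively weakly compact if and only if it is bounded, uniformly integrable and tight, that is
$$\sup_{g\in\mathcal{F}}\|g\|_{L^1}<\infty,\qquad \lim_{|K|\to 0}\ \sup_{g\in\mathcal{F}}\int_K|g|\,dv=0,\qquad \lim_{R\to\infty}\ \sup_{g\in\mathcal{F}}\int_{|v|\geq R}|g|\,dv=0.$$
Here (3.21) and (3.31) give the last two properties at once, since both $\int_K\sup_x|f_0|\,dv$ and $\int_Km(v)^{-1}dv$ are small when $|K|$ is small and when $K$ lies outside a large ball.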
Remark 3.4. The case $q=\infty$ is dealt with in the same way, since the function we bound, $|v|\,m(v)^{-1}$, and the one we integrate, $m(v)^{-1}$, are respectively integrable and bounded (because $k>4$ in the case $q=\infty$). Therefore in (3.22) we can take out the $L^\infty_{x,v}$-norm of $h^{(l)}_0$, let $C_m$ be the integral of $|v|\,m(v)^{-1}$, and finally bound $m(v)^{-1}$ instead of integrating it in (3.23). This leads to the same estimates in $L^\infty_{x,v}$.

The proof above can be adapted to show that $S_{G_\nu}(t)$ controls 'a bit more' than the mere $L^q_vL^\infty_x(m)$-norm. This property will play a key role in the nonlinear case. In (3.22) one could multiply and divide by $\nu(v_i)$, and the function $\frac{|v_i|}{\nu(v_i)\,m(v_i)}$ is still bounded (resp. integrable) on $\mathbb{R}^3$ if $m$ is a stretched exponential or if $m=\langle v\rangle^k$ with $k>1+\gamma$ (resp. $k>4+\gamma$). The conclusion (3.28) then becomes
$$\exists\,C_m>0,\ \forall t\in[0,T_0],\quad \big\|h^{(l)}\mathbf{1}_{t_1>0}(t,\cdot,\cdot)\big\|_{L^q_vL^\infty_x}\leq C_me^{-\nu_0t}\|f_0\|_{L^q_vL^\infty_x(\nu^{-1})}.$$
This inequality is true at $T_0$, so using the induction that led to (3.30) with the latter gives the following corollary.

Corollary 3.5. Let $q\in\{1,\infty\}$ and let $m=e^{\kappa|v|^\alpha}$ with $\kappa>0$ and $\alpha$ in $(0,2)$, or $m=\langle v\rangle^k$ with $k>2^{1/q}\,4^{1-1/q}+\gamma$. Let $f_0$ be in $L^q_vL^\infty_x(m)$. Then the solution $S_{G_\nu}(t)f_0\in L^q_vL^\infty_x(m)$ built in Proposition 3.3 satisfies, for all $\nu_0'<\nu_0$,
$$\exists\,C_{\nu_0'}>0,\ \forall t\geq 0,\quad \big\|S_{G_\nu}(t)(f_0)\,\mathbf{1}_{t_1>0}(t,\cdot,\cdot)\big\|_{L^q_vL^\infty_x(m)}\leq C_{\nu_0'}\,e^{-\nu_0't}\,\|f_0\|_{L^q_vL^\infty_x(m\nu^{-1})}.$$
4. Review of the $L^2$-$L^\infty$ theory for the full linear part

As discussed in the introduction, a mixed $L^2$-$L^\infty$ theory has been developed [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] for the linear perturbed Boltzmann equation
$$\partial_tf+v\cdot\nabla_xf=L(f), \tag{4.1}$$
together with boundary conditions. The idea of studying the possible generation of a semigroup with exponential decay in $L^2_{x,v}\big(\mu^{-1/2}\big)$ by $G=-v\cdot\nabla_x+L$, together with boundary conditions, is a natural one because of Subsection 2.1. This section is devoted to the description of the $L^2$-$L^\infty$ theory developed first by Guo [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] and extended by Esposito, Guo, Kim and Marra [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF]. This theory will be the starting point of our main proofs.

4.1. $L^2_{x,v}(\mu^{-1/2})$ theory for the linear perturbed operator. As discussed before, the general theory [START_REF] Beals | Abstract time-dependent transport equations[END_REF] for equations of the form
$$\partial_tf+v\cdot\nabla_xf=g$$
fails for specular reflections or Maxwellian diffusion boundary conditions because the boundary operator $P$ is of norm one. However, restricting the Maxwellian diffusion to the set of functions in $L^2_{x,v}$ satisfying the preservation of mass implies that, in some sense, $\|P\|<1$ (a mere strict Cauchy–Schwarz inequality). One can therefore hope to develop a semigroup theory for $G$ in $L^2_{x,v}\big(\mu^{-1/2}\big)$ with mass conservation. This has recently been achieved by constructive methods [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF]. They proved the following theorem (see [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] Theorem 6.1 with $g=r=0$).

Theorem 4.1. Let $f_0$ be in $\mathrm{MD}\big(L^2_{x,v}(\mu^{-1/2})\big)$. Then there exists a unique mass-preserving solution $S_G(t)f_0\in L^2_{x,v}\big(\mu^{-1/2}\big)$ to the linear perturbed Boltzmann equation (4.1) with Maxwellian diffusion boundary condition (1.3). Moreover, there exist explicit $C_G,\lambda_G>0$, independent of $f_0$, such that
$$\forall t\geq 0,\quad \|S_G(t)f_0\|_{L^2_{x,v}(\mu^{-1/2})}\leq C_Ge^{-\lambda_Gt}\|f_0\|_{L^2_{x,v}(\mu^{-1/2})}.$$
Unfortunately, in the case of specular reflections uniqueness does not hold in general, due to a possible blow-up of the $L^2_{loc}\big(\mathbb{R}^+;L^2_{x,v}(\Lambda)\big)$ norm at the grazing set $\Lambda_0$ [START_REF] Ukai | Solutions of the Boltzmann equation[END_REF][START_REF] Beals | Abstract time-dependent transport equations[END_REF][START_REF] Cercignani | The mathematical theory of dilute gases[END_REF]. However, an a priori exponential decay of solutions is enough to obtain an $L^\infty$ theory, provided that we endow the space with a strong weight (see the next subsection). Such an a priori study has been derived in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] by a contradiction argument.

Theorem 4.2. Let $f_0$ be in $\mathrm{SR}\big(L^2_{x,v}(\mu^{-1/2})\big)$. Suppose that $f(t,x,v)$ is a solution to the linear perturbed Boltzmann equation (4.1) in $\mathrm{SR}\big(L^2_{x,v}(\mu^{-1/2})\big)$ with initial data $f_0$ and satisfying the specular reflections boundary condition (1.2). Suppose also that $f|_\Lambda$ belongs to $L^2_\Lambda\big(\mu^{-1/2}\big)$. Then there exist $C_G,\lambda_G>0$ such that
$$\forall t\geq 0,\quad \|f(t)\|_{L^2_{x,v}(\mu^{-1/2})}\leq C_Ge^{-\lambda_Gt}\|f_0\|_{L^2_{x,v}(\mu^{-1/2})}.$$
The constants $C_G$ and $\lambda_G$ are independent of $f$.

Note that the two previous theorems hold for $\Omega$ being a $C^1$ bounded domain.

4.2. The $L^\infty$ framework. It has been proved in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Section 4 that if Theorem 4.1 and Theorem 4.2 hold true, then one can develop an $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$ theory for the semigroup generated by $G=-v\cdot\nabla_x+L$, as long as $\beta$ is sufficiently large. We already discussed the fact that we do not have a semigroup property for $G$ in $L^2_{x,v}$ due to the possible lack of uniqueness. To overcome this inconvenience it is necessary to work in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$, where trace theorems are known to hold and $\beta$ is large enough so that one can use the a priori estimate in $L^2_{x,v}\big(\mu^{-1/2}\big)$, thanks to a change of variables along the characteristic trajectories (which requires the strict convexity and the analyticity of the domain in the case of specular reflections).
In other words, having only an a priori exponential decay in $L^2_{x,v}$ can be used to show that $G$ actually generates an exponentially decaying semigroup in $L^\infty$, where Ukai's trace theorem ([START_REF] Ukai | Solutions of the Boltzmann equation[END_REF] Theorem 5.1.1) provides well-defined restrictions at the boundary and therefore uniqueness of solutions.
Moreover, this semigroup theory is compatible with the remainder term $Q(f,f)$ and yields existence and uniqueness of solutions to the perturbed Boltzmann equation
$$\partial_tf=G(f)+Q(f,f) \tag{4.2}$$
in $L^\infty\big(\langle v\rangle^\beta\mu^{-1/2}\big)$ as long as $\|f_0\|_{L^\infty(\langle v\rangle^\beta\mu^{-1/2})}$ is small enough.
We state here a theorem adapted from [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF]. The case of specular reflections is derived from Theorem 8 and the proof of Theorem 3, and the case of Maxwellian diffusion from Theorem 9 and the proof of Theorem 4. One can also look at [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] Theorem 1.3 for a constructive proof in the case of Maxwellian diffusion boundary conditions.

Theorem 4.3. Let $\Omega$ be a $C^1$ bounded domain if the boundary conditions are Maxwellian diffusion, and let $\Omega$ be analytic and strictly convex in the sense of (2.11) if they are specular reflections. Define $w_\beta(v)=\langle v\rangle^\beta\mu(v)^{-1/2}$. Then for all $\beta$ such that $w_\beta^{-2}(1+|v|)^3\in L^1_v$, the operator $G=-v\cdot\nabla_x+L$ generates a semigroup $S_G(t)$ in $\mathrm{SR}\big(L^\infty_{x,v}(w_\beta)\big)$ and in $\mathrm{MD}\big(L^\infty_{x,v}(w_\beta)\big)$. Moreover, there exist $C_G,\lambda_G>0$ such that for all $f_0$ in $L^\infty_tL^\infty_{x,v}(w_\beta)$ satisfying the appropriate conservation laws and all $t\geq 0$,
$$\|S_G(t)f_0\|_{L^\infty_{x,v}(w_\beta)}\leq C_Ge^{-\lambda_Gt}\|f_0\|_{L^\infty_{x,v}(w_\beta)},$$
and for all $0<\lambda_G'<\lambda_G$,
$$\Big\|\int_0^tS_G(t-s)\,Q(f_0,f_0)\,ds\Big\|_{L^\infty_{x,v}(w_\beta)}\leq C_Ge^{-\lambda_G't}\Big(\sup_{s\in[0,t]}e^{\lambda_G's}\|f_0(s)\|_{L^\infty_{x,v}(w_\beta)}\Big)^2.$$
Remark 4.4. We emphasize that the strict convexity required in our Theorem 2.3 comes only from the fact that such a geometric property is needed in order to apply the theorem above, and thus to have a well-established semigroup theory in the framework of specular reflections. Moreover, while the proof in the case of Maxwellian diffusion has been made constructive [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF], the case of specular reflections relies heavily on a contradiction argument combined with analyticity ([START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Lemma 22), and a constructive proof is still an open problem.
5. System of equations solving the perturbed Boltzmann equation

This section is dedicated to the proofs of Theorem 2.3 and Theorem 2.5. These proofs rely on a specific decomposition of the operator $G=-v\cdot\nabla_x+L$ that allows us to solve a system of differential equations connecting the larger spaces $L^\infty_{x,v}(m)$ to the more regular space $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$, where solutions to the perturbed Boltzmann equation are known to exist (see Subsection 4.2). As said in the introduction, our method follows the recent extension methods for strongly continuous semigroups [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF], but an analytic adaptation has to be developed since, as we already saw, $S_G(t)$ is not necessarily strongly continuous.
Firstly, Subsection 5.1 describes the strategy we shall use and presents the new system of equations we will solve. Then Subsection 5.2 and Subsection 5.3 solve this system of differential equations.

5.1. Decomposition of the perturbed Boltzmann equation and toolbox. The main strategy is to find a decomposition of the perturbed Boltzmann equation (1.4) into a system of differential equations where we can make use of the theory developed in $L^\infty_{x,v}$. More precisely, one would like to solve a somewhat simpler equation in $L^\infty_{x,v}(m)$, while the remainder part has regularising properties and can thus be handled in the smaller space $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$. The exponential decay in the more regular space can then be carried up to the bigger space. One can easily see that for the weights considered in the present work, and for $q=1$ or $q=\infty$, we have
$$L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)\subset L^q_vL^\infty_x(m).$$
We follow the decomposition of $G$ proposed in [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF]. For $\delta$ in $(0,1)$, to be chosen later, we consider $\Theta_\delta=\Theta_\delta(v,v_*,\sigma)$ in $C^\infty$ that is bounded by one everywhere, is exactly one on the set
$$\Big\{|v|\leq\delta^{-1}\ \text{and}\ 2\delta\leq|v-v_*|\leq\delta^{-1}\ \text{and}\ |\cos\theta|\leq 1-2\delta\Big\},$$
and whose support is included in
$$\Big\{|v|\leq 2\delta^{-1}\ \text{and}\ \delta\leq|v-v_*|\leq 2\delta^{-1}\ \text{and}\ |\cos\theta|\leq 1-\delta\Big\}.$$
We define the splitting $G=A^{(\delta)}+B^{(\delta)}$, with
$$A^{(\delta)}h(v)=C_\Phi\int_{\mathbb{R}^3\times\mathbb{S}^2}\Theta_\delta\,\big[\mu'_*h'+\mu'h'_*-\mu h_*\big]\,b(\cos\theta)\,|v-v_*|^\gamma\,d\sigma dv_*$$
and
$$B^{(\delta)}h(v)=B^{(\delta)}_2h(v)-\nu(v)h(v)-v\cdot\nabla_xh(v)=G_\nu h(v)+B^{(\delta)}_2h(v),$$
where
$$B^{(\delta)}_2h(v)=\int_{\mathbb{R}^3\times\mathbb{S}^2}(1-\Theta_\delta)\,\big[\mu'_*h'+\mu'h'_*-\mu h_*\big]\,b(\cos\theta)\,|v-v_*|^\gamma\,d\sigma dv_*.$$
$A^{(\delta)}$ is a kernel operator with a compactly supported kernel. Therefore it has the following regularising effect.

Lemma 5.1. For any $q$ in $\{1,\infty\}$, the operator $A^{(\delta)}$ maps $L^q_v$ into $L^q_v$ with compact support. More precisely, for all $\beta\geq 0$ and all $\alpha\geq 0$, there exist $R_\delta$ and $C_A=C(\delta,q,\beta,\alpha)>0$ such that for all $h\in L^q_v$,
$$\mathrm{supp}\,A^{(\delta)}h\subset B(0,R_\delta),\qquad \big\|A^{(\delta)}h\big\|_{L^q_v(\langle v\rangle^\beta\mu^{-\alpha})}\leq C_A\|h\|_{L^q_v}.$$

Proof of Lemma 5.1. The kernel of the operator $A^{(\delta)}$ is compactly supported, so its Carleman representation (see [START_REF] Carleman | Problèmes mathématiques dans la théorie cinétique des gaz[END_REF] or [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF]) gives the existence of $k^{(\delta)}$ in $C^\infty_c(\mathbb{R}^3\times\mathbb{R}^3)$ such that
$$A^{(\delta)}h(v)=\int_{\mathbb{R}^3}k^{(\delta)}(v,v_*)\,h(v_*)\,dv_*, \tag{5.1}$$
and therefore the control on $\|A^{(\delta)}h\|_{L^q_v(\langle v\rangle^\beta\mu^{-\alpha})}$ is straightforward.
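To spell out the last step (our elaboration, with $R_\delta$ a radius such that $\mathrm{supp}\,k^{(\delta)}\subset B(0,R_\delta)\times B(0,R_\delta)$): for $q=\infty$,
$$\big\|A^{(\delta)}h\big\|_{L^\infty_v(\langle v\rangle^\beta\mu^{-\alpha})}\leq\Big(\sup_{|v|\leq R_\delta}\langle v\rangle^\beta\mu(v)^{-\alpha}\Big)\Big(\sup_{v}\int_{\mathbb{R}^3}\big|k^{(\delta)}(v,v_*)\big|\,dv_*\Big)\,\|h\|_{L^\infty_v},$$
and for $q=1$ one uses Fubini together with $\sup_{v_*}\int\langle v\rangle^\beta\mu(v)^{-\alpha}\,|k^{(\delta)}(v,v_*)|\,dv<\infty$. Both constants are finite because $k^{(\delta)}$ is smooth and compactly supported.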
Thanks to this regularising property of the operator $A^{(\delta)}$, we look for solutions to the perturbed Boltzmann equation $\partial_tf=Gf+Q(f,f)$ in the form $f=f_1+f_2$, with $f_1$ in $L^\infty_{x,v}(m)$ and $f_2$ in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$, where $(f_1,f_2)$ satisfies the following system of equations:
$$\partial_tf_1=B^{(\delta)}f_1+Q(f_1,f_1+f_2),\qquad f_1(0,x,v)=f_0(x,v), \tag{5.2}$$
$$\partial_tf_2=Gf_2+Q(f_2,f_2)+A^{(\delta)}f_1,\qquad f_2(0,x,v)=0, \tag{5.3}$$
with either specular reflections or Maxwellian diffusion boundary conditions.
The equation in the smaller space, (5.3), will be treated thanks to the previous study in $L^\infty\big(\langle v\rangle^\beta\mu^{-1/2}\big)$, whilst we expect an exponential decay for solutions in the larger space, (5.2). Indeed, $B^{(\delta)}$ can be controlled by the multiplicative operator $\nu(v)$ because it has a small norm in the following sense.

Lemma 5.2. Consider $q$ in $\{1,\infty\}$. Let $m=e^{\kappa|v|^\alpha}$ with $\kappa>0$ and $\alpha$ in $(0,2)$, or $m=\langle v\rangle^k$ with $k>k_q^*$, where
$$k_q^*=\big(16\pi b_\infty l_b-2\big)^{1/q}\,\big(1+\gamma+16\pi b_\infty l_b\big)^{1-1/q}. \tag{5.4}$$
Then $B^{(\delta)}_2$ satisfies
$$\forall h\in L^q_vL^\infty_x(m),\quad \big\|B^{(\delta)}_2(h)\big\|_{L^q_vL^\infty_x(\nu^{-1}m)}\leq\Delta_{m,q}(\delta)\,\|h\|_{L^q_vL^\infty_x(m)},$$
where $\nu(v)$ is the collision frequency (2.6) and $\Delta_{m,q}(\delta)$ is a constructive constant such that
• if $m=e^{\kappa|v|^\alpha}$ then $\lim_{\delta\to 0}\Delta_{m,q}(\delta)=0$;
• if $m=\langle v\rangle^k$ then
$$\lim_{\delta\to 0}\Delta_{m,q}(\delta)=\phi_q(k)=16\pi b_\infty l_b\Big(\frac{1}{k+2}\Big)^{1/q}\Big(\frac{1}{k-1-\gamma}\Big)^{1-1/q}.$$
This was proved in [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] Lemma 4.12 in the case of hard spheres ($\gamma=b=1$) and extended to more general hard potentials with cutoff kernels in [START_REF] Briant | The Boltzmann equation for multi-species mixture close to global equilibrium[END_REF] Lemma 6.3 for $L^2_v$, and in [3, Lemma 2.4] for $L^\infty_v$.
Remark 5.3. We point out that for $k>k_q^*$ one can check that $\phi_q(k)<1$. This will be of great importance for $B^{(\delta)}_2$ to be controlled by the semigroup $S_{G_\nu}(t)$ generated by the collision frequency.
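To make Remark 5.3 concrete (our verification, using the expressions of $k_q^*$ and $\phi_q$ above): for $q=\infty$ one has $1/q=0$, so
$$\phi_\infty(k)=\frac{16\pi b_\infty l_b}{k-1-\gamma}<1\iff k>1+\gamma+16\pi b_\infty l_b=k_\infty^*,$$
and for $q=1$, $\phi_1(k)=16\pi b_\infty l_b/(k+2)<1$ exactly when $k>16\pi b_\infty l_b-2=k_1^*$.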
We conclude this subsection with a control on the bilinear term in the $L^\infty_{x,v}$ setting.

Lemma 5.4. For all $h$ and $g$ such that $Q(h,g)$ is well-defined, $Q(h,g)$ belongs to $[\mathrm{Ker}(L)]^\perp$ in $L^2_v$: $\pi_L(Q(h,g))=0$. Moreover, let $q$ be in $\{1,\infty\}$ and let $m=e^{\kappa|v|^\alpha}$ with $\kappa>0$ and $\alpha$ in $(0,2)$, or $m=\langle v\rangle^k$ with $k\geq 0$. Then there exists $C_Q>0$ such that for all $h$ and $g$,
$$\|Q(h,g)\|_{L^q_vL^\infty_x(m\nu^{-1})}\leq C_Q\,\|h\|_{L^q_vL^\infty_x(m)}\,\|g\|_{L^q_vL^\infty_x(m)}.$$
The constant $C_Q$ is explicit and depends only on $q$, $m$ and the kernel of the collision operator.

Proof of Lemma 5.4. Since we use the symmetric definition of $Q$ (1.5), the orthogonality property can be found in [START_REF] Briant | From the Boltzmann equation to the incompressible Navier-Stokes equations on the torus: A quantitative error estimate[END_REF] Appendix A.2.1. The estimate follows directly from [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] Lemma 5.16 and the fact that $\nu(v)\sim\langle v\rangle^\gamma$ (see (2.7)).
5.2. Study of equation (5.2) in $L^\infty_{x,v}(m)$. In this section we study the differential equation (5.2). We prove well-posedness for this problem and, above all, exponential decay as long as the initial data is small. The case of specular reflections and the case of diffusion are rather different, since their treatment relies on the representation of the semigroup $S_{G_\nu}$ derived in Section 3.

The case of specular reflections in $L^\infty_{x,v}(m)$. We prove the following well-posedness result in the case of specular reflections.

Proposition 5.5. Let $\Omega$ be a $C^1$ bounded domain. Let $m=e^{\kappa|v|^\alpha}$ with $\kappa>0$ and $\alpha$ in $(0,2)$, or $m=\langle v\rangle^k$ with $k>5+\gamma$. Let $f_0$ be in $L^\infty_{x,v}(m)$ and $g(t,x,v)$ in $L^\infty_{x,v}(m)$. Then there exist $\delta_m,\lambda_m(\delta)>0$ such that for any $\delta$ in $(0,\delta_m]$ there exist $C_1,\eta_1>0$ such that if
$$\|f_0\|_{L^\infty_{x,v}(m)}\leq\eta_1\quad\text{and}\quad\|g\|_{L^\infty_tL^\infty_{x,v}(m)}\leq\eta_1,$$
then there exists a solution $f_1$ to
$$\partial_tf_1=G_\nu f_1+B^{(\delta)}_2f_1+Q(f_1,f_1+g), \tag{5.5}$$
with initial data $f_0$ and satisfying the specular reflections boundary conditions (1.2). Moreover, this solution satisfies
$$\forall t\geq 0,\quad \|f_1(t)\|_{L^\infty_{x,v}(m)}\leq C_1e^{-\lambda_m(\delta)t}\|f_0\|_{L^\infty_{x,v}(m)},$$
and also $\lim_{\delta\to 0}\lambda_{e^{\kappa|v|^\alpha}}(\delta)=\nu_0$ and $\lim_{k\to\infty}\lim_{\delta\to 0}\lambda_{\langle v\rangle^k}(\delta)=\nu_0$. The constants $C_1$ and $\eta_1$ are constructive and depend only on $m$, $\delta$ and the kernel of the collision operator.

Proof of Proposition 5.5. If $f_1$ is a solution to (5.5) then, thanks to Proposition 3.1, $G_\nu$ combined with the boundary conditions generates a semigroup $S_{G_\nu}(t)$ in $L^\infty_{x,v}(m)$. Therefore $f_1$ has the following Duhamel representation, almost everywhere in $\mathbb{R}^+\times\Omega\times\mathbb{R}^3$:
$$f_1(t,x,v)=S_{G_\nu}(t)f_0+\int_0^tS_{G_\nu}(t-s)\big[B^{(\delta)}_2f_1+Q(f_1,f_1+g)\big]\,ds. \tag{5.6}$$
To prove existence and exponential decay we use the following iteration scheme, starting from $h_0=0$:
$$h_{l+1}=S_{G_\nu}(t)f_0+\int_0^tS_{G_\nu}(t-s)\big[B^{(\delta)}_2h_{l+1}+Q(h_l,h_l+g)\big]\,ds,\qquad h_{l+1}(0,x,v)=f_0(x,v).$$
A contraction argument with the Duhamel representation implies that $(h_l)$ is well-defined in $L^\infty_{x,v}(m)$ and satisfies the specular reflections boundary condition (because $S_{G_\nu}$ does). The computations needed to prove this contraction are similar to the ones we make to prove that $(h_l)$ is a Cauchy sequence, and we therefore only write down the latter.
$$|h_{l+1}-h_l|(t,x,v)\leq\int_0^t\big|S_{G_\nu}(t-s)B^{(\delta)}_2(h_{l+1}-h_l)\big|\,ds+\int_0^t\big|S_{G_\nu}(t-s)Q(h_l-h_{l-1},\,h_l+h_{l-1}+g)\big|\,ds.$$
For almost all $(t,x,v)$, using the representation (3.11) of $S_{G_\nu}(t)$ with $\varepsilon=0$, there exists $(X(t,x,v),V(t,x,v))$ in $\Omega\times\mathbb{R}^3$ such that the backward characteristics starting at $(x,v)$ reach the initial plane $\{t=0\}$ at $(X(t,x,v),V(t,x,v))$, and
$$S_{G_\nu}(t)h=e^{-\nu(v)t}\,h(X(t,x,v),V(t,x,v)).$$
This implies that for almost all $(t,x,v)$,
$$\begin{aligned} m(v)\,|h_{l+1}-h_l|(t,x,v)&\leq\int_0^te^{-\nu(v)(t-s)}\big|m\,B^{(\delta)}_2(h_{l+1}-h_l)\big|(s,X(t-s),V(t-s))\,ds\\ &\quad+\int_0^te^{-\nu(v)(t-s)}\,m\,\big|Q(h_l-h_{l-1},\,h_l+h_{l-1}+g)(s,X,V)\big|\,ds\ =\ I_1+I_2. \end{aligned}\tag{5.7}$$
$I_1$ is dealt with using Lemma 5.2:
$$I_1\leq\int_0^t\nu(v)e^{-\nu(v)(t-s)}\big\|B^{(\delta)}_2(h_{l+1}-h_l)(s)\big\|_{L^\infty_{x,v}(m\nu^{-1})}\,ds\leq\Delta_{m,\infty}(\delta)\int_0^t\nu(v)e^{-\nu(v)(t-s)}\big\|(h_{l+1}-h_l)(s)\big\|_{L^\infty_{x,v}(m)}\,ds.$$
For $\delta$ small enough we have $\Delta_{m,\infty}(\delta)<1$, as emphasized in Remark 5.3. Therefore
$$\exists\,\varepsilon\in(0,1),\quad \varepsilon<1-\Delta_{m,\infty}(\delta). \tag{5.8}$$
Since $0<\varepsilon<1$ it follows that
$$\forall\,0\leq s\leq t,\quad -\nu(v)(t-s)\leq-\varepsilon\nu_0t-\nu(v)(1-\varepsilon)(t-s)+\varepsilon\nu_0s.$$
We can further bound $I_1$:
$$I_1\leq e^{-\varepsilon\nu_0t}\,\Delta_{m,\infty}(\delta)\Big(\int_0^t\nu(v)e^{-\nu(v)(1-\varepsilon)s}\,ds\Big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq e^{-\varepsilon\nu_0t}\,\frac{\Delta_{m,\infty}(\delta)}{1-\varepsilon}\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}. \tag{5.9}$$
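The elementary integral used in the last step is, for any fixed $v$ and $\varepsilon\in(0,1)$ (we spell it out for completeness):
$$\int_0^t\nu(v)e^{-\nu(v)(1-\varepsilon)s}\,ds=\frac{1}{1-\varepsilon}\Big(1-e^{-\nu(v)(1-\varepsilon)t}\Big)\leq\frac{1}{1-\varepsilon},$$
uniformly in $v$ and $t$, which produces the factor $(1-\varepsilon)^{-1}$ in (5.9).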
For the second term $I_2$ we multiply by $\nu(v)\nu(v)^{-1}$ to compensate for the loss of weight $\nu(v)$ in the control of $Q$. Then, with the previous computations and using Lemma 5.4, this yields
$$I_2\leq\frac{C_Q}{1-\varepsilon}\,e^{-\varepsilon\nu_0t}\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}\Big)\Big(\|h_l\|_{L^\infty_{t,x,v}(m)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m)}+\|g\|_{L^\infty_{t,x,v}(m)}\Big). \tag{5.10}$$
We plug (5.9) and (5.10) into (5.7) and multiply by $e^{\varepsilon\nu_0t}$ before taking the supremum in $x$, $v$ and $t$. This yields
$$\Big(1-\frac{\Delta_{m,\infty}(\delta)}{1-\varepsilon}\Big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq\frac{C_Q}{1-\varepsilon}\Big(\|h_l\|_{L^\infty_{t,x,v}(m)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m)}+\|g\|_{L^\infty_{t,x,v}(m)}\Big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}.$$
Our choice of $\varepsilon$ (5.8) implies $1-\Delta_{m,\infty}(\delta)(1-\varepsilon)^{-1}>0$. Denoting by $C_m$ any positive constant independent of $l$, it follows that
$$\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq C_m\Big(\|h_l\|_{L^\infty_{t,x,v}(m)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m)}+\|g\|_{L^\infty_{t,x,v}(m)}\Big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}. \tag{5.11}$$
We now prove that $\|h_l\|_{L^\infty_{x,v}(m)}$ is uniformly bounded. Starting from the definition of $h_{l+1}$ and making the same computations without subtracting $h_l$, we obtain
$$\Big(1-\frac{\Delta_{m,\infty}(\delta)}{1-\varepsilon}\Big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}\|_{L^\infty_{x,v}(m)}\leq e^{-\nu_0(1-\varepsilon)t}\|f_0\|_{L^\infty_{x,v}}+\frac{C_Q}{1-\varepsilon}\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l\|_{L^\infty_{[0,t],x,v}(m)}\Big)\Big(\|h_l\|_{L^\infty_{[0,t],x,v}(m)}+\|g\|_{L^\infty_{[0,t],x,v}(m)}\Big),$$
where we used the exponential decay of $S_{G_\nu}(t)$ on $f_0$ (Proposition 3.1). Denoting by $C^{(1)}_m$ and $C^{(2)}_m$ any positive constants independent of $l$, we further bound
$$\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}\|_{L^\infty_{x,v}(m)}\leq C^{(1)}_m\|f_0\|_{L^\infty_{x,v}}+C^{(2)}_m\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l\|_{L^\infty_{[0,t],x,v}(m)}\Big)\Big(\|h_l\|_{L^\infty_{[0,t],x,v}(m)}+\|g\|_{L^\infty_{[0,t],x,v}(m)}\Big). \tag{5.12}$$
Therefore, if $\|h_0\|_{L^\infty_{x,v}}$ and $\|g\|_{L^\infty_{t,x,v}}$ are smaller than $\eta_1>0$ chosen such that
$$C^{(1)}_m\eta_1+2\big(1+C^{(1)}_m\big)^2C^{(2)}_m\eta_1^2\leq\big(1+C^{(1)}_m\big)\eta_1,$$
then for all $l\in\mathbb{N}$,
$$\sup_{t\geq 0}e^{\varepsilon\nu_0t}\|h_{l+1}\|_{L^\infty_{x,v}(m)}\leq\big(1+C^{(1)}_m\big)\|f_0\|_{L^\infty_{x,v}}, \tag{5.13}$$
which gives the desired exponential decay if $(h_l)_{l\in\mathbb{N}}$ converges.
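To see how the smallness condition on $\eta_1$ closes the induction (our verification), assume $\sup_se^{\varepsilon\nu_0s}\|h_l\|_{L^\infty_{x,v}(m)}\leq(1+C^{(1)}_m)\eta_1$. Then the right-hand side of (5.12) is bounded by
$$C^{(1)}_m\eta_1+C^{(2)}_m\big(1+C^{(1)}_m\big)\eta_1\Big[\big(1+C^{(1)}_m\big)\eta_1+\eta_1\Big]\leq C^{(1)}_m\eta_1+2\big(1+C^{(1)}_m\big)^2C^{(2)}_m\eta_1^2\leq\big(1+C^{(1)}_m\big)\eta_1,$$
using $2+C^{(1)}_m\leq 2(1+C^{(1)}_m)$ and the defining condition on $\eta_1$, so (5.13) propagates from rank $l$ to rank $l+1$.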
Combining (5.11) and (5.13) we have
$$\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq 3C_m\big(1+C^{(1)}_m\big)\eta_1\,\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}.$$
Therefore, for $\eta_1$ small enough the sequence $(h_l)_{l\in\mathbb{N}}$ is a Cauchy sequence in $L^\infty_{t,x,v}(m)$ and thus converges towards $f_1$. From (5.13), $f_1$ satisfies the desired exponential decay with $\lambda_m(\delta)=\varepsilon\nu_0$. The asymptotic behaviour of $\lambda_m(\delta)$ is straightforward: as $\Delta_{m,\infty}(\delta)$ gets closer to $0$, we can choose $\varepsilon$ closer to $1$ by (5.8).
The case of Maxwellian diffusion in $L^\infty_{x,v}\big(e^{\kappa|v|^\alpha}\big)$. We prove the following well-posedness result for (5.2) in the case of Maxwellian diffusion.

Proposition 5.6. Let $\Omega$ be a $C^1$ bounded domain. Let $m=e^{\kappa|v|^\alpha}$ with $\kappa>0$ and $\alpha$ in $(0,2)$. Let $f_0$ be in $L^\infty_{x,v}(m)$ and $g(t,x,v)$ in $L^\infty_{x,v}(m)$. Then there exist $\delta_m,\lambda_m(\delta)>0$ such that for any $\delta$ in $(0,\delta_m]$ there exist $C_1,\eta_1>0$ such that if
$$\|f_0\|_{L^\infty_{x,v}(m)}\leq\eta_1\quad\text{and}\quad\|g\|_{L^\infty_tL^\infty_{x,v}(m)}\leq\eta_1,$$
then there exists a solution $f_1$ to
$$\partial_tf_1=G_\nu f_1+B^{(\delta)}_2f_1+Q(f_1,f_1+g), \tag{5.14}$$
with initial data $f_0$ and satisfying the Maxwellian diffusion boundary conditions (1.3). Moreover, this solution satisfies
$$\forall t\geq 0,\quad \|f_1(t)\|_{L^\infty_{x,v}(m)}\leq C_1e^{-\lambda_m(\delta)t}\|f_0\|_{L^\infty_{x,v}(m)},$$
and also $\lim_{\delta\to 0}\lambda_m(\delta)=\nu_0$. The constants $C_1$ and $\eta_1$ are constructive and depend only on $m$, $\delta$ and the kernel of the collision operator.

Proof of Proposition 5.6. Thanks to Proposition 3.3, $G_\nu$ combined with Maxwellian diffusion boundary conditions generates a semigroup $S_{G_\nu}(t)$ in all the spaces $L^\infty_{x,v}(m)$. Therefore a solution $f_1$ to (5.14) has the following Duhamel representation, almost everywhere in $\mathbb{R}^+\times\Omega\times\mathbb{R}^3$:
$$f_1(t,x,v)=S_{G_\nu}(t)f_0+\int_0^tS_{G_\nu}(t-s)\big[B^{(\delta)}_2f_1+Q(f_1,f_1+g)\big]\,ds. \tag{5.15}$$
We use the same iteration as for specular reflections, starting from $h_0=0$ and defining
$$h_{l+1}=S_{G_\nu}(t)f_0+\int_0^tS_{G_\nu}(t-s)\big[B^{(\delta)}_2h_{l+1}+Q(h_l,h_l+g)\big]\,ds,\qquad h_{l+1}(0,x,v)=f_0(x,v).$$
Again, the well-posedness of $h_{l+1}$ follows from a contraction argument with the Duhamel representation and the estimates we shall prove in order to show that $(h_l)_{l\in\mathbb{N}}$ is a Cauchy sequence with uniform exponential decay. We therefore only prove the latter.
Using the implicit representation (3.18)–(3.19) of $S_{G_\nu}(t)$ (note that we do not have the change of weight), we have for $h$ in $L^\infty_{x,v}(m)$:
• if $t_1\leq 0$ then
$$S_{G_\nu}(t)h(x,v)=e^{-\nu(v)t}h(x-tv,v); \tag{5.16}$$
• if $t_1>0$ then for all $p\geq 2$,
$$\begin{aligned} S_{G_\nu}(t)h(x,v)=\ &c_\mu\mu(v)e^{-\nu(v)(t-t_1)}\sum_{i=1}^{p}\int\prod_{\substack{j=1\\ j\neq i}}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{[t_{i+1},t_i)}(0)\,h(x_i-t_iv_i,v_i)\,d\Sigma_i(0)\\ &+c_\mu\mu(v)e^{-\nu(v)(t-t_1)}\int\prod_{j=1}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{t_{p+1}>0}\,S_{G_\nu}(t_p)h(x_p,v_p)\,d\Sigma_p(t_p), \end{aligned}\tag{5.17}$$
where
$$d\Sigma_i(s)=\frac{1}{c_\mu\,\mu(v_i)}\,e^{-\nu(v_i)(t_i-s)}\prod_{j=1}^{i-1}e^{-\nu(v_j)(t_j-t_{j+1})}\,d\sigma_{x_1}(v_1)\dots d\sigma_{x_p}(v_p). \tag{5.18}$$
Here $\mu(v)=(2\pi)^{-3/2}e^{-|v|^2/2}$ and $c_\mu$ is the normalization constant such that $c_\mu\int_{v\cdot n(x)>0}\mu(v)\,(v\cdot n(x))\,dv=1$; in particular each $d\sigma_x(v)=c_\mu\mu(v)\,|v\cdot n(x)|\,dv$ is a probability measure on $\{v\cdot n(x)>0\}$.
We shall prove that $(h_l)_{l\in\mathbb{N}}$ is a Cauchy sequence in $L^\infty_tL^\infty_{x,v}(m)$. We bound $|h_{l+1}-h_l|$ in $L^\infty_{x,v}(m)$ by
$$\begin{aligned} \|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq\ &\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tm(v)\big|S_{G_\nu}(t-s)B^{(\delta)}_2(h_{l+1}-h_l)\big|(x,v)\,ds\\ &+\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tm(v)\big|S_{G_\nu}(t-s)Q(h_l-h_{l-1},\,h_l+h_{l-1}+g)\big|(x,v)\,ds. \end{aligned}$$
Since the behaviour of $S_{G_\nu}(t-s)$ differs according to whether $t_1\leq 0$ or $t_1>0$, where $t_1=t_1(t-s,x,v)$, we can further decompose each of the terms on the right-hand side:
$$\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq\max\{I_1;I_3\}+\max\{I_2;I_4\}, \tag{5.19}$$
where we defined
$$\begin{aligned} I_1&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tm(v)\big|S_{G_\nu}(t-s)B^{(\delta)}_2(h_{l+1}-h_l)\,\mathbf{1}_{t_1\leq 0}\big|(x,v)\,ds,\\ I_2&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tm(v)\big|S_{G_\nu}(t-s)Q(h_l-h_{l-1},\,h_l+h_{l-1}+g)\,\mathbf{1}_{t_1\leq 0}\big|(x,v)\,ds,\\ I_3&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tm(v)\big|S_{G_\nu}(t-s)B^{(\delta)}_2(h_{l+1}-h_l)\,\mathbf{1}_{t_1>0}\big|(x,v)\,ds,\\ I_4&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tm(v)\big|S_{G_\nu}(t-s)Q(h_l-h_{l-1},\,h_l+h_{l-1}+g)\,\mathbf{1}_{t_1>0}\big|(x,v)\,ds. \end{aligned}$$
We fix $\varepsilon$ in $(0,1)$.

Study of $I_1$ and $I_2$. When $t_1(t-s,x,v)\leq 0$ the semigroup $S_{G_\nu}(t-s)$ is a mere multiplication by $e^{-\nu(v)(t-s)}$, so
$$I_1\leq\sup_{v\in\mathbb{R}^3}\int_0^t\nu(v)e^{-\nu(v)(t-s)}\sup_{x\in\Omega}\big|m(v)\nu(v)^{-1}B^{(\delta)}_2(h_{l+1}-h_l)\big|\,ds,$$
and similarly for $I_2$. Computations similar to (5.9)–(5.10) yield
$$I_1\leq e^{-\varepsilon\nu_0t}\,\frac{\Delta_{m,\infty}(\delta)}{1-\varepsilon}\,\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}, \tag{5.20}$$
$$I_2\leq\frac{C_Q}{1-\varepsilon}\,e^{-\varepsilon\nu_0t}\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}\Big)\Big(\|h_l\|_{L^\infty_{t,x,v}(m)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m)}+\|g\|_{L^\infty_{t,x,v}(m)}\Big). \tag{5.21}$$
Study of $I_3$ and $I_4$. The terms $I_3$ and $I_4$ are dealt with in the same way, and we therefore only write down the details for $I_3$. We decompose
$$I_3\leq I^{(1)}_3+I^{(2)}_3 \tag{5.22}$$
into the two terms defined by (5.17):
$$\begin{aligned} I^{(1)}_3&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tc_\mu\mu(v)m(v)e^{-\nu(v)(t-s-t_1)}\sum_{i=1}^{p}\int\prod_{\substack{j=1\\ j\neq i}}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{[t_{i+1},t_i)}(0)\,\big|B^{(\delta)}_2(h_{l+1}-h_l)\big|(s,x_i-t_iv_i,v_i)\,d\Sigma_i(0)\,ds,\\ I^{(2)}_3&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tc_\mu\mu(v)m(v)e^{-\nu(v)(t-t_1)}\int\prod_{j=1}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{t_{p+1}>0}\,\big|S_{G_\nu}(t_p)B^{(\delta)}_2(h_{l+1}-h_l)\big|(x_p,v_p)\,d\Sigma_p(t_p)\,ds. \end{aligned}$$
We multiply and divide by $m(v_i)\nu^{-1}(v_i)$ and take the supremum over $\Omega\times\mathbb{R}^3$ inside the $i^{th}$ integral. We know that $c_\mu\mu(v)m(v)$ is bounded in $v$ and therefore, denoting by $C_m$ any positive constant, we obtain
$$I^{(1)}_3\leq C_m\int_0^te^{-\nu_0(t-s)}\big\|B^{(\delta)}_2(h_{l+1}-h_l)(s)\big\|_{L^\infty_{x,v}(m\nu^{-1})}\ \sup_{(x,v)\in\Omega\times\mathbb{R}^3}\sum_{i=1}^{p}\int\prod_{\substack{j=1\\ j\neq i}}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{[t_{i+1},t_i)}(0)\Big(\int_{\mathbb{R}^3}\frac{|v_i|\,\nu(v_i)}{m(v_i)}\,dv_i\Big)\,d\sigma_{x_i}\,ds. \tag{5.23}$$
We recall that $d\sigma_{x_i}$ is a probability measure on $\{v_j\cdot n(x_i)>0\}$. The integral in the variable $v_i$ is finite and depends only on $m$ and $\nu$. Using Lemma 5.2 we conclude
$$I^{(1)}_3\leq C_m\Delta_{m,\infty}(\delta)\int_0^te^{-\nu_0(t-s)}\|h_{l+1}(s)-h_l(s)\|_{L^\infty_{x,v}(m)}\,ds\leq\frac{C_m\Delta_{m,\infty}(\delta)}{1-\varepsilon}\,e^{-\varepsilon\nu_0t}\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}(s)-h_l(s)\|_{L^\infty_{x,v}(m)}. \tag{5.24}$$
To estimate $I^{(2)}_3$ we first note that, as in (3.25),
$$\mathbf{1}_{t_{p+1}>0}\,\big|S_{G_\nu}(t_p)B^{(\delta)}_2(h_{l+1}-h_l)\big|(x_p,v_p)\leq\mathbf{1}_{t_p>0}\,\big|S_{G_\nu}(t_p)\big[B^{(\delta)}_2(h_{l+1}-h_l)\,\mathbf{1}_{t_1(t_p,x_p,v_p)>0}\big]\big|(x_p,v_p).$$
Thanks to Corollary 3.5 with $\nu_0'=(1-\varepsilon')\nu_0$, where $0<\varepsilon<\varepsilon'<1$, and then Lemma 5.2, we can estimate the above further by
$$\mathbf{1}_{t_{p+1}>0}\,\big|S_{G_\nu}(t_p)B^{(\delta)}_2(h_{l+1}-h_l)\big|\leq C_me^{-\nu_0't_p}\big\|B^{(\delta)}_2(h_{l+1}-h_l)\big\|_{L^\infty_{x,v}(m\nu^{-1})}\leq C_m\Delta_{m,\infty}(\delta)\,e^{-(1-\varepsilon')\nu_0t_p}\,\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}.$$
Plugging this into the definition of $I^{(2)}_3$ yields
$$\begin{aligned} I^{(2)}_3&\leq C_m\Delta_{m,\infty}(\delta)\,e^{-\nu_0(t-t_1)}\int_0^t\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int\prod_{j=1}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,e^{-\nu_0(t_1-t_p)}e^{-(1-\varepsilon')\nu_0t_p}\,\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}(t_p)\,d\sigma_{x_i}\,ds\\ &\leq C_m\Delta_{m,\infty}(\delta)\,t\,e^{-\varepsilon'\nu_0t}\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\\ &\leq C_m\Delta_{m,\infty}(\delta)\,e^{-\varepsilon\nu_0t}\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}. \end{aligned}\tag{5.25}$$
We conclude the estimate on $I_3$ by gathering (5.24) and (5.25) inside the decomposition (5.22):
$$I_3\leq C_m\Delta_{m,\infty}(\delta)\,e^{-\varepsilon\nu_0t}\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}. \tag{5.26}$$
For the term $I_4$ we can do exactly the same computations with $\Delta_{m,\infty}(\delta)$ replaced by $C_Q$ from Lemma 5.4, which can be absorbed into the generic constant $C_m$. Hence
$$I_4\leq C_me^{-\varepsilon\nu_0t}\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}\Big)\Big(\|h_l\|_{L^\infty_{t,x,v}(m)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m)}+\|g\|_{L^\infty_{t,x,v}(m)}\Big). \tag{5.27}$$
Conclusion. From the decomposition (5.19) of $h_{l+1}-h_l$ and the estimates (5.20)–(5.21)–(5.26)–(5.27) we obtain
$$\begin{aligned} \big(1-C_m\Delta_{m,\infty}(\delta)\big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq\ &C_m\Big(\|h_l\|_{L^\infty_{t,x,v}(m)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m)}+\|g\|_{L^\infty_{t,x,v}(m)}\Big)\\ &\times\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}+\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}\Big). \end{aligned}$$
In the case of a stretched exponential $m$, Lemma 5.2 states that $\Delta_{m,\infty}(\delta)$ tends to $0$ as $\delta$ goes to $0$. We can therefore choose $\delta$ small enough such that
$$1-C_m\Delta_{m,\infty}(\delta)\geq\frac{1}{2}. \tag{5.28}$$
With such a choice the following holds:
$$\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}\leq C_m\Big(\|h_l\|_{L^\infty_{t,x,v}(m)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m)}+\|g\|_{L^\infty_{t,x,v}(m)}\Big)\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m)}+\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m)}\Big). \tag{5.29}$$
Similar computations, with the use of the exponential decay of $S_{G_\nu}(t)$ on $f_0$, give the following bound on $h_{l+1}$:
$$\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}\|_{L^\infty_{x,v}(m)}\leq C^{(1)}_m\|f_0\|_{L^\infty_{x,v}}+C^{(2)}_m\Big(\|h_l\|_{L^\infty_{[0,t],x,v}(m)}+\|g\|_{L^\infty_{[0,t],x,v}(m)}\Big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l\|_{L^\infty_{[0,t],x,v}(m)}. \tag{5.30}$$
The results (5.29) and (5.30) are identical to (5.11) and (5.12), respectively, in the case of specular reflections boundary conditions. Therefore the same arguments hold, and if $\|f_0\|_{L^\infty_{x,v}(m)}$ and $\|g\|_{L^\infty_{x,v}(m)}$ are smaller than some $\eta_1>0$ small enough, we obtain that $(h_l)_{l\in\mathbb{N}}$ is a Cauchy sequence and thus converges towards the desired solution $f_1$, which satisfies the required exponential decay. This concludes the proof of Proposition 5.6.
The case of Maxwellian diffusion in $L^\infty_{x,v}\big(\langle v\rangle^k\big)$. Looking at the proof of Proposition 5.6, we remark that the key property used in the case of a stretched exponential weight $m$ is that $\Delta_{m,\infty}(\delta)$ tends to $0$ as $\delta$ goes to $0$. This strong property allowed us to control the supremum of $c_\mu\mu(v)m(v)$ in $I^{(1)}_3$ thanks to $\Delta_{m,\infty}(\delta)$ and still obtain a quantity less than $1$, see (5.24) and (5.28).
Unfortunately, in the case of a polynomial weight $m_k(v)=\langle v\rangle^k$, Lemma 5.2 states that $\Delta_{m,\infty}(\delta)$ converges to a quantity less than $1$, but not as small as one wants unless one allows $k$ to be as large as one wants. Moreover, $\Delta_{m_k,\infty}(\delta)$ goes to $0$ as $k$ tends to infinity like $k^{-1}$, which is not enough to control the supremum of $c_\mu\mu(v)m_k(v)$, which grows like $(2k)^k$.
The key idea to deal with the polynomial weight $m_k$ is the fact that $B^{(\delta)}_2$ can also be estimated in $L^1_vL^\infty_x\big(\langle v\rangle^{2+\gamma+0}\big)$ (see Lemma 5.2). The latter norm is weaker than $L^\infty_{x,v}(m_k)$ and appears in the estimate of $I^{(1)}_3$. Again, Lemma 5.2 does not give the appropriate decay for $\Delta_{m_k,1}(\delta)$. The following lemma provides a new estimate on $B^{(\delta)}_2$ involving a mixture of the $L^1_vL^\infty_x$ and $L^\infty_{x,v}$ frameworks.

Lemma 5.7. Let $k>5+\gamma$ and $m_k(v)=\langle v\rangle^k$. Then for any $\delta>0$ there exists $\Delta_k(\delta)$ such that for all $h$ in $L^\infty_{x,v}(m_k)$,
$$\big\|B^{(\delta)}_2h\big\|_{L^1_vL^\infty_x(\langle v\rangle^2)}\leq\Delta_k(\delta)\,\|h\|_{L^\infty_{x,v}(m_k)}.$$
Moreover, the following holds for any $k>5+\gamma$:
$$\lim_{\delta\to 0}\Delta_k(\delta)=0.$$
Proof of Lemma 5.7. We recall the definition of $B^{(\delta)}_2h$:
$$B^{(\delta)}_2h(v)=\int_{\mathbb{R}^3\times\mathbb{S}^2}(1-\Theta_\delta)\big[\mu'_*h'+\mu'h'_*-\mu h_*\big]\,b(\cos\theta)\,|v-v_*|^\gamma\,d\sigma dv_*,$$
where $\Theta_\delta=\Theta_\delta(v,v_*,\sigma)$ is a $C^\infty$ function such that $0\leq 1-\Theta_\delta\leq 1$ and such that $1-\Theta_\delta=0$ on the set
$$\Xi_\delta=\Big\{|v|\leq\delta^{-1}\ \text{and}\ 2\delta\leq|v-v_*|\leq\delta^{-1}\ \text{and}\ |\cos\theta|\leq 1-2\delta\Big\}.$$
We denote by $\Xi^c_\delta$ the complement of $\Xi_\delta$ in $\mathbb{R}^3\times\mathbb{R}^3\times\mathbb{S}^2$. Only $h$ has a dependency on $x$, hence
$$\big\|B^{(\delta)}_2h\big\|_{L^1_vL^\infty_x(\langle v\rangle^2)}\leq\int_{\Xi^c_\delta}\big[\mu'_*H'+\mu'H'_*+\mu H_*\big]\,|v-v_*|^\gamma\,|b(\cos\theta)|\,\langle v\rangle^2\,dvdv_*d\sigma,$$
where we used the notation $H(v)=\sup_{x\in\Omega}|h(x,v)|$.
We notice that $\Xi^c_\delta\subset\tilde\Xi_\delta$, where we defined
$$\tilde\Xi_\delta=\Big\{|v|^2+|v_*|^2\geq\frac{1}{\delta}\Big\}\cup\big\{|v-v_*|\leq 2\delta\big\}\cup\Big\{|v-v_*|\geq\frac{1}{\delta}\Big\}\cup\big\{1-\delta\leq|\cos\theta|\leq 1\big\}=\tilde\Xi^{(1)}_\delta\cup\tilde\Xi^{(2)}_\delta\cup\tilde\Xi^{(3)}_\delta\cup\tilde\Xi^{(4)}_\delta, \tag{5.31}$$
and hence
$$\big\|B^{(\delta)}_2h\big\|_{L^1_vL^\infty_x(\langle v\rangle^2)}\leq\int_{\tilde\Xi_\delta}\big[\mu'_*H'+\mu'H'_*+\mu H_*\big]\,|v-v_*|^\gamma\,|b(\cos\theta)|\,\langle v\rangle^2\,dvdv_*d\sigma.$$
The set $\tilde\Xi_\delta$ is invariant under the standard changes of variables $(v,v_*,\sigma)\to(v_*,v,-\sigma)$ and $(v,v_*,\sigma)\to(v',v'_*,k)$ with $k=(v-v_*)/|v-v_*|$, which have Jacobian $1$ (see [START_REF] Cercignani | The mathematical theory of dilute gases[END_REF] or [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] for instance). Applying these changes of variables gives
$$\big\|B^{(\delta)}_2h\big\|_{L^1_vL^\infty_x(\langle v\rangle^2)}\leq\int_{\tilde\Xi_\delta}\mu_*\,H\,\big(\langle v'_*\rangle^2+\langle v'\rangle^2+\langle v\rangle^2\big)\,|v-v_*|^\gamma\,|b(\cos\theta)|\,dvdv_*d\sigma.$$
Thanks to the elastic collisions one has
$$\langle v'_*\rangle^2+\langle v'\rangle^2=\langle v_*\rangle^2+\langle v\rangle^2.$$
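This identity is nothing but the conservation of kinetic energy; with the $\sigma$-representation of the post-collisional velocities (a standard computation, which we sketch):
$$v'=\frac{v+v_*}{2}+\frac{|v-v_*|}{2}\sigma,\qquad v'_*=\frac{v+v_*}{2}-\frac{|v-v_*|}{2}\sigma,$$
$$|v'|^2+|v'_*|^2=2\Big|\frac{v+v_*}{2}\Big|^2+2\Big|\frac{|v-v_*|}{2}\sigma\Big|^2=\frac{|v+v_*|^2+|v-v_*|^2}{2}=|v|^2+|v_*|^2,$$
and adding $2$ to both sides gives the identity for the brackets $\langle\cdot\rangle^2$.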
Therefore we have
$$\big\|B^{(\delta)}_2h\big\|_{L^1_vL^\infty_x(\langle v\rangle^2)}\leq\Delta_k(\delta)\,\|h\|_{L^\infty_{x,v}(m_k)},\quad\text{with}\quad \Delta_k(\delta)=3\int_{\tilde\Xi_\delta}\mu_*\,\langle v_*\rangle^2\,\frac{\langle v\rangle^2}{m_k(v)}\,|v-v_*|^\gamma\,|b(\cos\theta)|\,dvdv_*d\sigma=3\int_{\tilde\Xi_\delta}\mu_*\,\langle v_*\rangle^2\,\frac{1}{\langle v\rangle^{k-2}}\,|v-v_*|^\gamma\,|b(\cos\theta)|\,dvdv_*d\sigma.$$
It remains to show that for fixed $k>5+\gamma$, $\Delta_k(\delta)$ goes to $0$ when $\delta$ goes to $0$.
We decompose the integral over $\tilde\Xi_\delta$ into integrals over $\tilde\Xi^{(1)}_\delta,\dots,\tilde\Xi^{(4)}_\delta$, where these domains are given by (5.31):
$$\Delta_k(\delta)=\Delta^{(1)}_k(\delta)+\Delta^{(2)}_k(\delta)+\Delta^{(3)}_k(\delta)+\Delta^{(4)}_k(\delta).$$
For $\Delta^{(1)}_k(\delta)$ and $\Delta^{(3)}_k(\delta)$ we bound crudely $|v-v_*|^\gamma\leq\langle v\rangle^\gamma\langle v_*\rangle^\gamma$. Moreover, the inequality $|v|^2+|v_*|^2\geq\delta^{-1}$ implies that $|v|\geq 1/(2\delta)$ or $|v_*|\geq 1/(2\delta)$, and the same dichotomy holds for $|v-v_*|\geq\delta^{-1}$. Therefore, for $i=1$ and $i=3$ we have
$$\Delta^{(i)}_k(\delta)\leq l_b\int_{|v_*|\geq\frac{1}{2\delta}}\mu_*\langle v_*\rangle^{2+\gamma}\,dv_*\int_{\mathbb{R}^3}\frac{dv}{\langle v\rangle^{k-2-\gamma}}+l_b\int_{\mathbb{R}^3}\mu_*\langle v_*\rangle^{2+\gamma}\,dv_*\int_{|v|\geq\frac{1}{2\delta}}\frac{dv}{\langle v\rangle^{k-2-\gamma}},$$
where $l_b$ is the integral of $b(\cos\theta)$ over $\mathbb{S}^2$. Since $k>5+\gamma$, $\langle v\rangle^{-(k-2-\gamma)}$ is integrable, and all the integrals on the right-hand side are well-defined. Moreover, by integrability, the integrals restricted to $\delta$-dependent domains tend to $0$ as $\delta$ goes to $0$.
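The threshold $k>5+\gamma$ is exactly what makes these integrals converge (our remark): in $\mathbb{R}^3$,
$$\int_{\mathbb{R}^3}\frac{dv}{\langle v\rangle^{p}}<+\infty\iff p>3,$$
and here $p=k-2-\gamma$, so integrability of $\langle v\rangle^{-(k-2-\gamma)}$ is equivalent to $k>5+\gamma$, the assumption of Lemma 5.7.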
At last, $\Delta^{(2)}_k(\delta)$ and $\Delta^{(4)}_k(\delta)$ also tend to $0$, since
$$\Delta^{(2)}_k(\delta)\leq 2l_b\delta\int_{\mathbb{R}^3}\mu_*\langle v_*\rangle^2\,dv_*\int_{\mathbb{R}^3}\frac{dv}{\langle v\rangle^{k-2}}\quad\text{and}\quad \Delta^{(4)}_k(\delta)\leq\int_{\mathbb{R}^3}\mu_*\langle v_*\rangle^{2+\gamma}\,dv_*\int_{\mathbb{R}^3}\frac{dv}{\langle v\rangle^{k-2-\gamma}}\int_{|\cos\theta|\in[1-\delta,1]}b(\cos\theta)\,d\sigma,$$
and $b(\cos\theta)$ is integrable on the sphere. This concludes the proof of Lemma 5.7.
We are now able to prove the following well-posedness result for (5.2) in the case of Maxwellian diffusion with polynomial weight.

Proposition 5.8. Let $\Omega$ be a $C^1$ bounded domain. Let $k>5+\gamma$ and $m_k(v)=\langle v\rangle^k$. Let $f_0$ be in $L^\infty_{x,v}(m_k)$ and $g(t,x,v)$ in $L^\infty_{x,v}(m_k)$. Then there exist $\delta_k,\lambda_k(\delta)>0$ such that for any $\delta$ in $(0,\delta_k]$ there exist $C_1,\eta_1>0$ such that if
$$\|f_0\|_{L^\infty_{x,v}(m_k)}\leq\eta_1\quad\text{and}\quad\|g\|_{L^\infty_tL^\infty_{x,v}(m_k)}\leq\eta_1,$$
then there exists a solution $f_1$ to
$$\partial_tf_1=G_\nu f_1+B^{(\delta)}_2f_1+Q(f_1,f_1+g), \tag{5.32}$$
with initial data $f_0$ and satisfying the Maxwellian diffusion boundary conditions (1.3). Moreover, this solution satisfies
$$\forall t\geq 0,\quad \|f_1(t)\|_{L^\infty_{x,v}(m_k)}\leq C_1e^{-\lambda_k(\delta)t}\|f_0\|_{L^\infty_{x,v}(m_k)},$$
and also $\lim_{\delta\to 0}\lambda_k(\delta)=\nu_0$. The constants $C_1$ and $\eta_1$ are constructive and depend only on $k$, $\delta$ and the kernel of the collision operator.

Proof of Proposition 5.8. We closely follow the proof of Proposition 5.6 in the case of a stretched exponential weight, and we refer to it for most of the computational details. To build a solution of (5.32) we use the iterative scheme
$$h_{l+1}=S_{G_\nu}(t)f_0+\int_0^tS_{G_\nu}(t-s)\big[B^{(\delta)}_2h_{l+1}+Q(h_l,h_l+g)\big]\,ds,\qquad h_{l+1}(0,x,v)=f_0(x,v),\qquad h_0=0,$$
and prove that $(h_l)_{l\in\mathbb{N}}$ is a Cauchy sequence in $L^\infty_{x,v}(m_k)$. Again, the well-posedness of $h_{l+1}$ follows from a contraction argument and similar computations. We use the same decomposition as in (5.19), with $m(v)$ replaced by $m_k(v)$:
$$\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m_k)}\leq\max\{I_1;I_3\}+\max\{I_2;I_4\}. \tag{5.33}$$
Since the control of $Q$ and $B^{(\delta)}_2$ holds in $L^\infty_{x,v}(m_k)$ (see respectively Lemma 5.4 and Lemma 5.2), the terms $I_1$, $I_2$ and $I_4$ are estimated in the same way as (5.20)–(5.21)–(5.27), with $\Delta_{m,\infty}(\delta)$ replaced by $\Delta_{m_k,\infty}(\delta)$ defined in Lemma 5.2. This gives, for any $\varepsilon$ in $(0,1)$,
$$I_1\leq e^{-\varepsilon\nu_0t}\,\frac{\Delta_{m_k,\infty}(\delta)}{1-\varepsilon}\,\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m_k)}, \tag{5.34}$$
$$I_2\leq\frac{C_Q}{1-\varepsilon}\,e^{-\varepsilon\nu_0t}\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m_k)}\Big)\Big(\|h_l\|_{L^\infty_{t,x,v}(m_k)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m_k)}+\|g\|_{L^\infty_{t,x,v}(m_k)}\Big), \tag{5.35}$$
$$I_4\leq C_{m_k}e^{-\varepsilon\nu_0t}\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m_k)}\Big)\Big(\|h_l\|_{L^\infty_{t,x,v}(m_k)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m_k)}+\|g\|_{L^\infty_{t,x,v}(m_k)}\Big). \tag{5.36}$$
The main difference lies in $I_3$, for which we recall the decomposition (5.22):
$$I_3\leq I^{(1)}_3+I^{(2)}_3, \tag{5.37}$$
with
$$\begin{aligned} I^{(1)}_3&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tc_\mu\mu(v)m_k(v)e^{-\nu(v)(t-s-t_1)}\sum_{i=1}^{p}\int\prod_{\substack{j=1\\ j\neq i}}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{[t_{i+1},t_i)}(0)\,\big|B^{(\delta)}_2(h_{l+1}-h_l)\big|(s,x_i-t_iv_i,v_i)\,d\Sigma_i(0)\,ds,\\ I^{(2)}_3&=\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\int_0^tc_\mu\mu(v)m_k(v)e^{-\nu(v)(t-t_1)}\int\prod_{j=1}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{t_{p+1}>0}\,\big|S_{G_\nu}(t_p)B^{(\delta)}_2(h_{l+1}-h_l)\big|(x_p,v_p)\,d\Sigma_p(t_p)\,ds. \end{aligned}$$
By definition of $d\Sigma_i(0)$ (5.18), and denoting by $C_k$ the supremum of $c_\mu\mu(v)m_k(v)$, we get
$$I^{(1)}_3\leq C_k\int_0^te^{-\nu_0(t-s)}\sup_{(x,v)\in\Omega\times\mathbb{R}^3}\sum_{i=1}^{p}\int\prod_{\substack{j=1\\ j\neq i}}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{[t_{i+1},t_i)}(0)\Big(\int_{\mathbb{R}^3}\big|B^{(\delta)}_2(h_{l+1}-h_l)(s,x_i,v_i)\big|\,|v_i|\,dv_i\Big)\,d\sigma_{x_i}\,ds.$$
We use Lemma 5.7 to estimate the integral in the $i^{th}$ variable, recalling that $d\sigma_{x_i}$ is a probability measure on $\{v_j\cdot n(x_i)>0\}$. This yields
$$I^{(1)}_3\leq C_k\Delta_k(\delta)\int_0^te^{-\nu_0(t-s)}\|(h_{l+1}-h_l)(s)\|_{L^\infty_{x,v}(m_k)}\,ds\leq e^{-\varepsilon\nu_0t}\,\frac{C_k\Delta_k(\delta)}{1-\varepsilon}\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m_k)}. \tag{5.38}$$
The term $I^{(2)}_3$ needs the $L^1_vL^\infty_x(\langle v\rangle^3)$ semigroup theory for $S_{G_\nu}(t)$. Indeed, as in the proof of Proposition 5.6, we estimate it by
$$I^{(2)}_3\leq C_k\sup_{(x,v)\in\Omega\times\mathbb{R}^3}e^{-\nu_0(t-t_1)}\int_0^te^{-\nu_0(t_1-t_p)}\int\prod_{j=1}^{p}\mathbf{1}_{\{v_j\cdot n(x_i)>0\}}\,\mathbf{1}_{t_p>0}\,\big\|S_{G_\nu}(t_p)\big[B^{(\delta)}_2(h_{l+1}-h_l)\,\mathbf{1}_{t_1>0}\big]\big\|_{L^1_vL^\infty_x(\langle v\rangle^3)}\,d\sigma_{x_i}\,ds.$$
Using Corollary 3.5 with $k=3$ and $\nu_0'=(1-\varepsilon')\nu_0$, where $\varepsilon<\varepsilon'<1$, and then applying Lemma 5.7, we obtain
$$\big\|S_{G_\nu}(t_p)\big[B^{(\delta)}_2(h_{l+1}-h_l)\,\mathbf{1}_{t_1>0}\big]\big\|_{L^1_vL^\infty_x(\langle v\rangle^3)}\leq C_{m_k}e^{-(1-\varepsilon')\nu_0t_p}\big\|B^{(\delta)}_2(h_{l+1}-h_l)\big\|_{L^1_vL^\infty_x(\langle v\rangle^2)}\leq C_{m_k}\Delta_k(\delta)\,e^{-(1-\varepsilon')\nu_0t_p}\,\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m_k)}.$$
Plugging (5.38) and (5.39) into (5.37) yields the last estimate (5.40)
I 3 e -εν 0 t C m k ∆ k (δ) sup 0 s t e εν 0 s h l+1 -h l L ∞ x,v (m k ) .
To conclude, we choose δ small enough such that
∆ m k ,∞ (δ) < ∆ k = 4 k-1-γ + 1 2 < 1.
Fix ε in (0, 1) such that ε < 1 -∆ k and finally make δ even smaller so that in (5.39)
C m k ∆ k (δ) 1 - ∆ k 1 -ε .
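Let us check that this choice is possible (our verification): since $k>5+\gamma$ we have $k-1-\gamma>4$, hence
$$\bar\Delta_k=\frac{1}{2}\Big(\frac{4}{k-1-\gamma}+1\Big)<\frac{1}{2}(1+1)=1,$$
so $1-\bar\Delta_k>0$, an admissible $\varepsilon$ exists, and with $\varepsilon<1-\bar\Delta_k$ we get $\bar\Delta_k/(1-\varepsilon)<1$, leaving room for the constraint on $C_{m_k}\Delta_k(\delta)$ for $\delta$ small.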
We gather (5.34)–(5.35)–(5.38)–(5.36) and combine them with (5.33):
$$\Big(1-\frac{\bar\Delta_k}{1-\varepsilon}-C_{m_k}\Delta_k(\delta)\Big)\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m_k)}\leq C_{m_k}\Big(\|h_l\|_{L^\infty_{t,x,v}(m_k)}+\|h_{l-1}\|_{L^\infty_{t,x,v}(m_k)}+\|g\|_{L^\infty_{t,x,v}(m_k)}\Big)\Big(\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_{l+1}-h_l\|_{L^\infty_{x,v}(m_k)}+\sup_{0\leq s\leq t}e^{\varepsilon\nu_0s}\|h_l-h_{l-1}\|_{L^\infty_{x,v}(m_k)}\Big).$$
Since the constant on the left-hand side is positive, we conclude with exactly the same arguments as at the end of the proof of Proposition 5.5 or Proposition 5.6.
5.3. Existence and exponential decay for equation (5.3) in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$. In this section we establish the well-posedness and the exponential decay of (5.3) in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$, with $\beta$ such that Theorem 4.3 holds.

Proposition 5.9. Let $\Omega$ be an analytic strictly convex domain (resp. a $C^1$ bounded domain), and let $0<\lambda_G'<\lambda_G$ (defined by Theorem 4.3). Let $m=e^{\kappa|v|^\alpha}$ with $\kappa>0$ and $\alpha$ in $(0,2)$, or $m=\langle v\rangle^k$ with $k>5+\gamma$. Then there exists $\eta_2>0$ such that if $g(t,x,v)$ is in $L^\infty_{x,v}(m)$ with
$$\forall t\geq 0,\quad \|g(t)\|_{L^\infty_{x,v}(m)}\leq\eta_2e^{-\lambda_Gt},$$
then there exists a solution $f_2$ in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$ to
$$\partial_tf_2=Gf_2+Q(f_2,f_2)+A^{(\delta)}g, \tag{5.41}$$
with zero initial data and satisfying the specular reflections (resp. Maxwellian diffusion) boundary conditions. Moreover, if we assume $\Pi_G(f_2+g)=0$, then there exists $C_2>0$ such that
$$\forall t\geq 0,\quad \|f_2(t)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_2\eta_2e^{-\lambda_G't}.$$
$\Pi_G$ is the projection onto the kernel of $G$ and depends on the boundary conditions (see (2.9)–(2.10)). The constants $\eta_2$ and $C_2$ are constructive and depend only on $\lambda_G'$, $k$, $q$, $\delta$ and the kernel of the collision operator.

Proof of Proposition 5.9. We start by noticing that the Cauchy problem for $\partial_tf=Gf+Q(f,f)$ has been solved in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$ for small initial data in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Section 5, both for specular and diffusive boundary conditions. The addition of a mere source term $A^{(\delta)}g$ with null initial data is handled in the same way, and we therefore have the existence of $f_2$, solution to (5.41) in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$.
We suppose that $\Pi_G(f_2+g)=0$; in other words we ask $f_2+g$ to satisfy the appropriate conservation laws depending on the boundary conditions. We would like to apply the $L^\infty$ theory for $S_G$ given by Theorem 4.3, but it is only applicable in the space of functions in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$ satisfying the respective boundary conditions. We thus need to study $\Pi_G(f_2)$ and $\Pi_G^\perp(f_2)$ independently.
Study of the projection $\Pi_G(f_2)$. Since $\Pi_G(f_2+g)=0$, we have $\Pi_G(f_2)=-\Pi_G(g)$. By Definition 2.9 for specular reflections or Definition 2.10 for Maxwellian diffusion, $\Pi_G(g)$ has the following form:
$$\Pi_G(g)=\sum_{i=0}^{d+1}c_i\Big(\int_{\Omega\times\mathbb{R}^3}g(t,x,v)\,\varphi_i(v)\,dxdv\Big)\,\varphi_i(v)\,\mu(v),$$
where $c_i$ is either $0$ or $1$ and $\varphi_i$ is given by (2.5). It follows that
$$\|\Pi_G(f_2)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq\sum_{i=0}^{d+1}\Big|\int_{\Omega\times\mathbb{R}^3}g(t,x,v)\,\varphi_i(v)\,dxdv\Big|\ \sup_{v\in\mathbb{R}^3}\Big(\langle v\rangle^\beta\,|\varphi_i|\,\mu^{1/2}\Big).$$
Since $k>5+\gamma$ and $\varphi_i$ is a polynomial in $v$ of degree $0$, $1$ or $2$, it follows that
$$\Big|\int_{\Omega\times\mathbb{R}^3}g(t,x,v)\,\varphi_i(v)\,dxdv\Big|\leq|\Omega|\Big(\int_{\mathbb{R}^3}\langle v\rangle^{-k}\,|\varphi_i(v)|\,dv\Big)\,\|g\|_{L^\infty_{x,v}(m)}.$$
As a conclusion, there exists $C_\Pi>0$ such that
$$\forall t\geq 0,\quad \|\Pi_G(f_2)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_\Pi\|g\|_{L^\infty_{x,v}(m)}\leq C_\Pi\eta_2e^{-\lambda_Gt}. \tag{5.42}$$
Study of the orthogonal part of $f_2$. By definition we have $\Pi_G(G(f_2))=0$ and $G(f_2)=G\big(\Pi_G^\perp(f_2)\big)$. Thanks to the orthogonality property of $Q$ given by Lemma 5.4, $\Pi_G(Q(f_2,f_2))=0$. Therefore $F_2=\Pi_G^\perp(f_2)$ satisfies the following differential equation:
$$\partial_tF_2=G(F_2)+Q(f_2,f_2)+\Pi_G^\perp\big(A^{(\delta)}g\big).$$
Every term in the latter equation satisfies the conservation laws associated with the boundary conditions. We can therefore use Theorem 4.3 [START_REF] Briant | Stability of global equilibrium for the multi-species Boltzmann equation in L ∞ settings[END_REF] and have the following Duhamel representation for $F_2$, almost everywhere:
$$F_2(t,x,v)=\int_0^tS_G(t-s)\,Q(f_2,f_2)(x,v)\,ds+\int_0^tS_G(t-s)\,\Pi_G^\perp\big(A^{(\delta)}g\big)\,ds. \tag{5.43}$$
The first term on the right-hand side of (5.43) is dealt with by Theorem 4.3 with $\lambda_G'<\lambda_G$:
$$\Big\|\int_0^tS_G(t-s)Q(f_2,f_2)(x,v)\,ds\Big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_Ge^{-\lambda_G't}\Big(\sup_{s\in[0,t]}e^{\lambda_G's}\|f_2(s)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\Big)^2.$$
Then we use the fact that $f_2=F_2+\Pi_G(f_2)$ together with the exponential decay (5.42) of $\Pi_G(f_2)$. This yields
$$\Big\|\int_0^tS_G(t-s)Q(f_2,f_2)(x,v)\,ds\Big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq 2C_Ge^{-\lambda_G't}\Big[\Big(\sup_{s\in[0,t]}e^{\lambda_G's}\|F_2(s)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\Big)^2+C_\Pi^2\eta_2^2\Big]. \tag{5.44}$$
For the second term on the right-hand side of (5.43) we use Theorem 4.3 to get
$$\Big\|\int_0^tS_G(t-s)\Pi_G^\perp\big(A^{(\delta)}g\big)\,ds\Big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_G\int_0^te^{-\lambda_G(t-s)}\big\|\Pi_G^\perp\big(A^{(\delta)}g\big)(s)\big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\,ds.$$
Here again we can bound the norm of $\Pi_G^\perp\big(A^{(\delta)}g\big)$ by the norm of $A^{(\delta)}g$, which is itself bounded by Lemma 5.1. This yields
$$\big\|\Pi_G^\perp\big(A^{(\delta)}g\big)(s)\big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_\Pi C_A\|g(s)\|_{L^\infty_{x,v}(m)}\leq\eta_2C_\Pi C_Ae^{-\lambda_Gs}.$$
Hence, since $\lambda_G\geq\lambda_G'$,
$$\Big\|\int_0^tS_G(t-s)\Pi_G^\perp\big(A^{(\delta)}g\big)\,ds\Big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq\eta_2C_\Pi C_AC_G\,te^{-\lambda_Gt}\leq\eta_2C_{\Pi^\perp}e^{-\lambda_G't}, \tag{5.45}$$
where $C_{\Pi^\perp}>0$ is a constant depending on $\lambda_G'$.
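The passage from $te^{-\lambda_Gt}$ to $e^{-\lambda_G't}$ uses the elementary bound (we make the constant explicit; it is not given in the text):
$$\sup_{t\geq 0}\,t\,e^{-(\lambda_G-\lambda_G')t}=\frac{1}{e\,(\lambda_G-\lambda_G')},\qquad\text{hence}\qquad te^{-\lambda_Gt}\leq\frac{e^{-\lambda_G't}}{e\,(\lambda_G-\lambda_G')},$$
so one may take $C_{\Pi^\perp}=C_\Pi C_AC_G/\big(e(\lambda_G-\lambda_G')\big)$.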
Plugging (5.44) and (5.45) into (5.43) yields, with $C_2$ denoting any positive constant independent of $\eta_2$,
$$e^{\lambda_G't}\|F_2(t)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_2\Big[\eta_2^2+\eta_2+\Big(\sup_{s\in[0,t]}e^{\lambda_G's}\|F_2(s)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\Big)^2\Big].$$
At $t=0$, $F_2=0$, and therefore we can define
$$t_M=\sup\Big\{t\geq 0\ :\ \sup_{s\in[0,t]}e^{\lambda_G's}\|F_2(s)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq 4C_2\eta_2\Big\}.$$
Suppose that $t_M<+\infty$; then we have
$$\sup_{s\in[0,t_M]}e^{\lambda_G's}\|F_2(s)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_2\big(\eta_2^2+\eta_2+16C_2^2\eta_2^2\big),$$
and if $\eta_2$ is small enough,
$$C_2\big(\eta_2^2+\eta_2+16C_2^2\eta_2^2\big)<4C_2\eta_2.$$
Therefore, if $\eta_2$ is small enough we reach a contradiction with the definition of $t_M$, which implies $t_M=+\infty$ and
$$\forall t\geq 0,\quad \|F_2(t)\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq 4C_2\eta_2\,e^{-\lambda_G't}. \tag{5.46}$$
To conclude the proof we simply gather the control (5.42) of $\Pi_G(f_2)$ and the control (5.46) of the orthogonal part.

6. Cauchy theory for the full Boltzmann equation

This section is dedicated to the proofs of Theorem 2.3 and Theorem 2.5. We tackle each of the issues, namely existence and exponential decay, uniqueness, continuity and positivity, separately.

6.1. Existence and exponential decay. The existence and exponential trend to equilibrium of $F$, solution to the full Boltzmann equation (1.1) near equilibrium, $F=\mu+f$, is equivalent to the existence and exponential decay of $f$, solution to the perturbed Boltzmann equation (1.4). The latter directly follows from Proposition 5.5, Proposition 5.6 and Proposition 5.9. Indeed, we consider the following scheme. Define $f^{(0)}_1=f^{(0)}_2=0$ and the iterative process
$$\begin{aligned} \partial_tf^{(l+1)}_1&=B^{(\delta)}f^{(l+1)}_1+Q\big(f^{(l+1)}_1,\,f^{(l+1)}_1+f^{(l)}_2\big), & f^{(l+1)}_1(0,x,v)&=f_0(x,v),\\ \partial_tf^{(l+1)}_2&=Gf^{(l+1)}_2+Q\big(f^{(l+1)}_2,f^{(l+1)}_2\big)+A^{(\delta)}f^{(l+1)}_1, & f^{(l+1)}_2(0,x,v)&=0, \end{aligned}$$
with either specular reflections or Maxwellian diffusion boundary conditions and the additional condition $\Pi_G\big(f^{(l+1)}_1+f^{(l+1)}_2\big)=0$, where the constants $C_1,\eta_1$ are defined in Proposition 5.5 or Proposition 5.6 (depending on the boundary conditions) and $C_2,\eta_2$ in Proposition 5.9. We define $\eta_0>0$ small enough, depending only on $C_1$, $C_2$, $\eta_1$ and $\eta_2$, so that the induction below closes when $\|f_0\|_{L^\infty_{x,v}(m)}\leq\eta_0$.
By induction we shall prove that for all $l$, $f^{(l)}_1$ and $f^{(l)}_2$ are well-defined and satisfy, for all $t\geq 0$,
$$\big\|f^{(l)}_1(t)\big\|_{L^\infty_{x,v}(m)}\leq C_1e^{-\lambda t}\|f_0\|_{L^\infty_{x,v}(m)},\qquad \big\|f^{(l)}_2(t)\big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_1C_2e^{-\lambda't}\|f_0\|_{L^\infty_{x,v}(m)},$$
where $\beta$ is such that Theorem 4.3 holds. If the latter inequalities hold at rank $l$, then by definition of $\eta_0$ we have
$$\big\|f^{(l)}_2\big\|_{L^\infty_tL^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq\eta_1\quad\text{and}\quad \big\|f^{(l)}_1(t)\big\|_{L^\infty_{x,v}(m)}\leq\eta_2e^{-\lambda t},$$
and by Proposition 5.5, Proposition 5.6 and Proposition 5.9 we can therefore construct $f^{(l+1)}_1$ and $f^{(l+1)}_2$. Moreover, these functions satisfy, for all $t\geq 0$,
$$\big\|f^{(l+1)}_1\big\|_{L^\infty_{x,v}(m)}\leq C_1e^{-\lambda t}\|f_0\|_{L^\infty_{x,v}(m)},\qquad \big\|f^{(l+1)}_2\big\|_{L^\infty_{x,v}(\langle v\rangle^\beta\mu^{-1/2})}\leq C_2e^{-\lambda't}\big\|f^{(l)}_1\big\|_{L^\infty_tL^\infty_{x,v}(m)}\leq C_1C_2e^{-\lambda't}\|f_0\|_{L^\infty_{x,v}(m)}.$$
We thus derive the weak-* convergence of the iterates (up to subsequences) towards $f_1$ and $f_2$, solutions of the system of equations (5.2)–(5.3). Therefore $f=f_1+f_2$ is a solution to the perturbed Boltzmann equation (1.4) and satisfies the desired exponential decay.
6.2. Uniqueness of solutions. Like the uniqueness results obtained in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] in $L^\infty_{x,v}\big(\langle v\rangle^\beta\mu^{-1/2}\big)$, the uniqueness results given in Theorem 2.3 and Theorem 2.5 only apply in a perturbative regime. In other words, they state the uniqueness of functions of the specific form $F=\mu+f$. This allows us to use most of the computations made in the previous sections.
More precisely, we fix boundary conditions (either specular or diffusive) and we consider $f_0$ such that $\|f_0\|_{L^\infty_{x,v}(m)}\leq\eta_0$, with $\eta_0$ small enough so that we can construct (see the previous subsection) $F=\mu+f$, a solution to the full Boltzmann equation. Note that we have exponential decay for $f$ in the following form:
$$\exists C_m>0,\ \forall t\geq 0,\quad \|f(t)\|_{L^\infty_{x,v}(m)}\leq C_me^{-\lambda_mt}\|f_0\|_{L^\infty_{x,v}(m)}. \tag{6.1}$$
We are about to prove that any other solution to the full Boltzmann equation of the form $H=\mu+h$, with $h_0=f_0$ and satisfying the boundary conditions, must be $F$ itself, on condition that $\eta_0$ is small enough.
Consider $H=\mu+h$, another solution to the Boltzmann equation with the same boundary conditions as $F$ and the same initial data. Then $f-h$ satisfies
$$\partial_t(f-h)=G(f-h)+Q(f+h,\,f-h),$$
with the same boundary conditions and zero initial data. If
$$\sup_{0\leq t\leq T_0}\|f+h\|_{L^\infty_{x,v}(m)}\leq\eta, \tag{6.2}$$
where $\eta$ is small enough, then we can use exactly the same computations as in Section 5 to prove that for a given initial datum $g_0\in L^\infty_{x,v}(m)$ there exists a solution in $L^\infty_{x,v}(m)$ to
$$\partial_tg=G(g)+Q(f+h,\,g), \tag{6.3}$$
with the required boundary condition (the smallness assumption on $f+h$ playing the same role as the smallness of $\Delta_{m,\infty}(\delta)$). Moreover, if $g_0$ is small enough to fit the computations in Section 5, we have an exponential decay for $g$, and in particular
$$\exists C_m>0,\ \forall t\geq 0,\quad \|g(t)\|_{L^\infty_{x,v}(m)}\leq C_m\|g_0\|_{L^\infty_{x,v}(m)}.$$
The latter inequality yields uniqueness of $g$ for small initial data $g_0$.
The uniqueness $f=h$ follows from a bootstrap argument. Consider $\eta_0$ such that
$$\eta_0\leq\frac{\eta}{4\max\{1,C_m\}},$$
with $\eta$ defined in (6.2) and $C_m$ defined by (6.1). Define
$$T_0=\sup\Big\{T>0\ :\ \|f(T)\|_{L^\infty_{x,v}(m)}\leq\eta/2\ \text{and}\ \|h(T)\|_{L^\infty_{x,v}(m)}\leq\eta/2\Big\}.$$
Suppose $T_0<+\infty$; then (6.2) holds on $[0,T_0]$, and therefore $f-h$ is the unique solution to (6.3) with initial value $0$. Thus, almost everywhere in $\Omega\times\mathbb{R}^3$,
$$\forall t\in[0,T_0],\quad f(t,x,v)=h(t,x,v).$$
Thanks to the exponential decay (6.1) satisfied by $f$, we have at $T_0$
$$\|f(T_0)\|_{L^\infty_{x,v}(m)}\leq C_m\eta_0\leq\frac{\eta}{4}\quad\text{and}\quad\|h(T_0)\|_{L^\infty_{x,v}(m)}\leq C_m\eta_0\leq\frac{\eta}{4}.$$
This contradicts the definition of $T_0$, and therefore we must have $T_0=+\infty$ and $f(t,x,v)=h(t,x,v)$ for all $t\geq 0$ and almost every $(x,v)$ in $\Omega\times\mathbb{R}^3$. This concludes the proof of uniqueness in the perturbative regime.
6.3. Continuity of solutions away from the grazing set. The continuity of solutions away from the grazing set has already been studied in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF], both for specular reflections and Maxwellian diffusion in convex domains, and in [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] for Maxwellian diffusion in more general bounded domains. We prove here that their results apply to our present work. Obviously, the continuity of $F=\mu+f$ is equivalent to that of $f$, which we tackle here.
We recall [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Lemma 21 with our notations.

Lemma 6.1. Let $\Omega$ be a $C^2$ bounded domain, strictly convex in the sense of (2.11). Let $f_0$ be continuous on $\overline{\Omega}\times\mathbb{R}^3-\Lambda_0$ and satisfy the specular reflections condition at the boundary. At last, let $q(t,x,v)$ be continuous in the interior of $[0,+\infty)\times\overline{\Omega}\times\mathbb{R}^3$ with
$$\sup_{[0,+\infty)\times\overline{\Omega}\times\mathbb{R}^3}\Big|\frac{q(t,x,v)}{\nu(v)}\Big|<+\infty.$$
Then the solution to $\partial_tf=G_\nu(f)+q(t,x,v)$ with initial data $f_0$ and satisfying the specular reflections boundary conditions is continuous on $[0,+\infty)\times\overline{\Omega}\times\mathbb{R}^3-\Lambda_0$.

Thanks to the previous proof of uniqueness, $f$ is the limit of $f^{(l)}_1+f^{(l)}_2$, where the two sequences have been defined in Subsection 6.1, with $f^{(0)}_1=f^{(0)}_2=0$ and specular reflections boundary conditions. We use the method of [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] Section 1 and approximate these solutions, with the same initial data and boundary conditions and satisfying the inductive properties of Subsection 6.1. Note that all the functions involved here are in $L^\infty_{x,v}(m)$.
The proof of continuity is then done by induction on $l$. Suppose that $f^{(l)}_1$ and $f^{(l)}_2$ are continuous away from the grazing set as well. Then we can easily apply Lemma 6.1 to $f^{(l+1)}_1$ and $f^{(l+1)}_2$, thanks to the controls on $A^{(\delta)}$ (Lemma 5.1), on $B^{(\delta)}_2$ (Lemma 5.2) and on $Q$ (Lemma 5.4), and the fact that $f^{(l)}_1$ and $f^{(l)}_2$ belong to $L^\infty_{x,v}(m)$. To conclude, the same computations as in Subsection 5.2 and Subsection 5.3 show that $\big(f^{(l)}_1\big)$ and $\big(f^{(l)}_2\big)$ are Cauchy sequences in $L^\infty_{x,v}(m)$, and hence their respective limits are also continuous away from the grazing set. This proves that $f_1+f_2$, and therefore $F=\mu+f_1+f_2$, are continuous on $[0,+\infty)\times\overline{\Omega}\times\mathbb{R}^3-\Lambda_0$ in the case of specular reflections.
The case of Maxwellian diffusion boundary conditions is dealt with by similar arguments, starting from the continuity Lemma 26 in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF], which is the analogue of Lemma 6.1 for diffusive boundary conditions in a convex bounded domain, or from Proposition 6.1 in [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] for more general $C^1$ bounded domains.

6.4. Positivity of solutions. The positivity of the solutions to the Boltzmann equation (1.1) follows from two recent results by the author [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] [START_REF] Briant | Instantaneous exponential lower bound for solutions to the boltzmann equation with maxwellian diffusion boundary conditions[END_REF]. These articles give constructive a priori Maxwellian lower bounds on the solutions to the Boltzmann equation in $C^2$ convex bounded domains, with specular reflections boundary conditions and with Maxwellian diffusion boundary conditions respectively.
More precisely, in both cases the following property holds when the collision kernel describes a hard potential with Grad's angular cutoff (which is true for the collision kernels considered in the present work). If a solution $F$ to the Boltzmann equation on $[0,T_{max})$, where $T_{max}$ can be infinity, satisfies
(i) $F_0$ is a non-negative function with positive mass
$$M=\int_{\Omega\times\mathbb{R}^3}F_0(x,v)\,dxdv>0,$$
(ii) $F$ is continuous on $[0,T_{max})\times\overline{\Omega}\times\mathbb{R}^3-\Lambda_0$, in other words continuous away from the grazing set,
(iii) $F$ has uniformly bounded local energy
$$E_F=\sup_{(t,x)\in[0,T_{max})\times\Omega}\int_{\mathbb{R}^3}|v|^2F(t,x,v)\,dv<+\infty,$$
then for all $\tau\in(0,T_{max})$ there exist $\rho_\tau,\theta_\tau>0$, depending only on $M$, $E_F$, $\tau$ and the collision kernel, such that almost everywhere
$$\forall t\in[\tau,T_{max}),\ \forall(x,v)\in\Omega\times\mathbb{R}^3,\quad F(t,x,v)\geq\frac{\rho_\tau}{(2\pi\theta_\tau)^{3/2}}\,e^{-\frac{|v|^2}{2\theta_\tau}}.$$
In the present work we constructed solutions to the Boltzmann equation in $L^\infty_{x,v}(m)$ of the form $F=\mu+f$, with $F_0=\mu+f_0\geq 0$ satisfying the conservation laws associated with the boundary conditions. Therefore $F$ preserves the total mass, so in our case $M=1>0$ and point (i) is satisfied. Point (ii) is exactly what we proved in the previous subsection. Finally, since the solution $F$ is in $L^\infty_{x,v}(m)$ with exponential trend to equilibrium, it follows that
$$\int_{\mathbb{R}^3}|v|^2F(t,x,v)\,dv\leq\|F(t)\|_{L^\infty_{x,v}(m)}\int_{\mathbb{R}^3}\frac{|v|^2}{m(v)}\,dv\leq C_m\|F_0\|_{L^\infty_{x,v}(m)},$$
and point (iii) is also satisfied. The positivity of $F$ therefore follows from the lower bound property described above.
"739558"
] | [
"14471"
] |
01492050 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2018 | https://hal.science/hal-01492050/file/articleAnton_revision%20-%20Technical_Rapport.pdf | Xavier Bombois
email: [email protected]
Anton Korniienko
email: [email protected]
Håkan Hjalmarsson
email: [email protected]
Gérard
G Scorletti
email: [email protected]
Optimal identification
Keywords: Experiment Design, Identification for Control, Interconnected systems
Introduction
In this paper, we consider the problem of designing an identification experiment that will allow us to improve the global performance of a network made up of the interconnection of locally controlled systems. The identification experiment will be designed in such a way that we obtain a sufficiently accurate model of each module in the network to be able to improve the global performance of the network by redesigning the local controllers. The type of networks considered in this paper is common in the literature on multi-agent systems (see e.g. [START_REF] Fax | Information flow and cooperative control of vehicle formations[END_REF][START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF]).
This paper contributes to the effort of developing techniques for the identification of large-scale or interconnected systems when the topology of the network is known. In many papers, the problem is seen as a multivariable identification problem, and structural properties of the system are then used to simplify this complex problem (see e.g. [START_REF] Haber | Moving horizon estimation for large-scale interconnected systems[END_REF]). The identifiability of the multivariable structure is studied in a prediction error context in [START_REF] Weerts | Identifiability in dynamic network identification[END_REF], while this multivariable structure is exploited in other papers to reduce the variance of a given module in the network (see [START_REF] Hägg | On identification of parallel cascade serial systems[END_REF][START_REF] Gunes | A variance reduction technique for identification in dynamic networks[END_REF][START_REF] Everitt | On the variance analysis of identified linear MIMO models[END_REF]). Unlike most of these papers, we consider here a network whose interconnection is realized by exchanging the measured (and thus noisy) outputs of neighbouring modules. Another important difference is that, in our setting, all modules can be identified independently using single-input single-output identification. Consequently, we are close to the situation considered in our preceding papers on dynamic network identification (see e.g. [START_REF] Dankers | Identification of dynamic models in complex networks with prediction error methods -predictor input selection[END_REF]). In these contributions, we have developed conditions for consistent estimation of one given module in a dynamic network. Since general networks were considered in these contributions, the data informativity was tackled with a classical condition on the positivity of the spectral density matrix [START_REF] Ljung | System Identification: Theory for the User, 2nd Edition[END_REF]. The first contribution of this paper is to extend these results to the considered type of networks by giving specific conditions for data informativity. In particular, we show that it is not necessary to excite a specific module i to consistently identify it, as long as there exists at least one path from another module j to that particular module i. In this case, the noise present in the noisy output measurement y j will provide sufficient excitation for consistent estimation.
However, the main contribution of this paper is to tackle the problem of optimal experiment design for (decentralized) control in a network context. More precisely, our contribution lies in the design of the identification experiment that will lead to sufficiently accurate models of each module of the network to guarantee a certain level of global performance via the design of local controllers. The identification experiment consists of simultaneously applying an excitation signal in each module (i.e. in each closed-loop system) and our objective is to design the spectra of each of these excitation signals in such a way that the global control objective is achieved with the least total injected power. In this sense, we extend the results in [START_REF] Bombois | Least costly identification experiment for control[END_REF][START_REF] Barenthin | Identification for control of multivariable systems: controller validation and experiment design via LMIs[END_REF], which consider one local loop with a local performance objective, to the case of a network of closed-loop systems with both a local and a global performance objective. Like in [START_REF] Bombois | Least costly identification experiment for control[END_REF][START_REF] Barenthin | Identification for control of multivariable systems: controller validation and experiment design via LMIs[END_REF], the uncertainty of an identified model will be represented via its covariance matrix. The difference is that this covariance matrix will here be a function of the excitation signals injected in each module that has a path to the considered module and, of course, that there will be one covariance matrix per identified module. Like in [START_REF] Bombois | Least costly identification experiment for control[END_REF][START_REF] Barenthin | Identification for control of multivariable systems: controller validation and experiment design via LMIs[END_REF], the maximal allowed uncertainty will be determined using tools from robustness analysis. To avoid the heavy computational load linked to a high number of modules N_mod and to structured uncertainties characterized by N_mod uncertain parameter vectors, the uncertainty is first projected onto an unstructured uncertainty on the complementary sensitivity describing each connected closed-loop system, and the robustness analysis is then based on the interconnection of these unstructured uncertainties. This approach (called the hierarchical approach) to analyze the robustness of large-scale (interconnected) systems has been introduced in [START_REF] Safonov | Propagation of conic model uncertainty in hierarchical systems[END_REF] and further developed in [START_REF] Dinh | Convex hierarchical analysis for the performances of uncertain large-scale systems[END_REF]. A technical contribution of this paper is to develop a methodology that allows the use of the hierarchical approach in the presence of the nonstandard uncertainty delivered by system identification.
Note that the framework considered here is quite different from the frameworks of [START_REF] Vincent | Input design for structured nonlinear system identification[END_REF][START_REF] Hägg | On optimal input design for networked systems[END_REF], which are, to our knowledge, the only other papers treating the optimal experiment design problem in a network. In [START_REF] Vincent | Input design for structured nonlinear system identification[END_REF], the authors consider input design for nonparametric identification of static nonlinearities embedded in a network. The main purpose of [START_REF] Hägg | On optimal input design for networked systems[END_REF] lies in the use of measurable disturbances in optimal experiment design.
Notations. The block-diagonal matrix with diagonal blocks X_1, ..., X_N will be denoted diag(X_1, ..., X_N) if the elements X_i (i = 1, ..., N) are scalar quantities, and bdiag(X_1, ..., X_N) if the elements X_i (i = 1, ..., N) are vectors or matrices.
2 Identification of interconnected systems

2.1 Description of the network configuration
We consider a network made up of N mod single-input single-output (SISO) systems S i (i = 1...N mod ) operated in closed loop with a SISO decentralized controller K i (i = 1...N mod ):
S_i : y_i(t) = G_i(z, θ_{i,0}) u_i(t) + v_i(t)   (1)

u_i(t) = K_i(z) (y_ref,i(t) − y_i(t))   (2)

ȳ_ref(t) = A ȳ(t) + B ref_ext(t)   (3)

Let us describe these equations in detail. The signal u_i is the input applied to the system S_i and y_i is the measured output. This output is made up of a contribution of the input u_i and of a disturbance term v_i(t) = H_i(z, θ_{i,0}) e_i(t) that represents both process and measurement noises. The different systems are thus described by two stable transfer functions G_i(z, θ_{i,0}) and H_i(z, θ_{i,0}), the latter being also minimum-phase and monic. The signals e_i (i = 1...N_mod) defining v_i are all white noise signals. Moreover, the vector ē ≜ (e_1, e_2, ..., e_{N_mod})^T has the following property:
E ē(t) ē^T(t) = Λ,   E ē(t) ē^T(t − τ) = 0 for τ ≠ 0   (4)
with E the expectation operator and with Λ a strictly positive definite matrix. With (4), the power spectrum Φ_ē(ω) of ē is given by Φ_ē(ω) = Λ for all ω. We will further assume that the signals e_i (i = 1...N_mod) are mutually independent. The matrix Λ is then diagonal¹, i.e. Λ = diag(Λ_{1,1}, Λ_{2,2}, ..., Λ_{N_mod,N_mod}) > 0.
The systems S_i in (1) may all represent the same type of systems (e.g. drones). However, due to industrial dispersion, the unknown parameter vectors θ_{i,0} ∈ R^{n_{θ_i}} can of course be different for each i, as well as the order of the transfer functions G_i and H_i. Consequently, it will be necessary to identify a model for each of the systems S_i in the sequel.
In this paper, we consider the type of interconnections used in formation control or multi-agent systems (see e.g. [START_REF] Fax | Information flow and cooperative control of vehicle formations[END_REF][START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF]). As shown in (2), each system S_i is operated with a decentralized controller K_i(z). In (2), the signal y_ref,i is a reference signal that will be computed via (3). The matrix A and the vector B in (3) represent the interconnection (flow of information) in the network and we have ȳ_ref = (y_ref,1, y_ref,2, ..., y_ref,N_mod)^T and ȳ = (y_1, y_2, ..., y_{N_mod})^T. The signal ref_ext is a (scalar) external reference signal that should be followed by all outputs y_i and that is generally only available at one node of the network.
As an example, let us consider the network in Figure 1. In this network, we have N mod = 6 systems/modules, all of the form (1) and all operated as in (2) with a decentralized controller K i . These local closed loops are represented by a circle/node in Figure 1 and are further detailed in Figure 2 (consider r i = 0 for the moment in this figure). The objective of this network is that the outputs y i of all modules follow the external reference ref ext even though this reference is only available at Node 1. For this purpose, a number of nodes are allowed to exchange information (i.e. their measured output) with some other neighbouring nodes. The arrows between the nodes in Figure 1 indicate the flow of information. For example, Node 5 receives the output of two nodes (i.e. Nodes 3 and 4) and sends its output (i.e. y 5 ) to three nodes (Nodes 3, 4 and 6). The reference signal y ref,i of Node i will be computed as a linear combination of the received information at Node i. For Node 5, y ref,5 will thus be a linear combination of y 3 and y 4 . More precisely, for all outputs y i to be able to follow the external reference ref ext , A and B in (3) are chosen as [START_REF] Fax | Information flow and cooperative control of vehicle formations[END_REF][START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF]:
A = [ 0     0     0     0     0     0
      1/3   0     1/3   1/3   0     0
      0     0.5   0     0     0.5   0
      0     0.5   0     0     0.5   0
      0     0     0.5   0.5   0     0
      0     0     0     0     1     0 ],   B = (1, 0, ..., 0)^T
The matrix A is called the normalized adjacency matrix in the literature [START_REF] Fax | Information flow and cooperative control of vehicle formations[END_REF]. Using (3), we see e.g. that the tracking error signals y_ref,1 − y_1 and y_ref,2 − y_2 of Nodes 1 and 2 are respectively given by ref_ext − y_1 and 1/3 ((y_1 − y_2) + (y_3 − y_2) + (y_4 − y_2)). Similar relations can be found for all the other nodes. If the different loops [K_i G_i] are designed to make the tracking errors y_ref,i − y_i as small as possible, it can be proven that such an interconnection allows good tracking of ref_ext at all nodes. A normalized adjacency matrix can be defined for any information flow using the following rules. Row i of A is zero if no output is sent to node i. If y_j is sent to node i, entry (i, j) of A is nonzero. Finally, all nonzero entries in a row are equal and sum up to one.
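To make these rules concrete, here is a minimal numpy sketch that builds A for the example network of Figure 1; the function name and the dictionary encoding of the information flow are our own illustration conventions, not part of the method above.

import numpy as np

def normalized_adjacency(n_nodes, received_from):
    """Build the normalized adjacency matrix A of (3): row i is zero if
    node i receives no output; otherwise all entries (i, j) with y_j
    sent to node i are equal and sum up to one."""
    A = np.zeros((n_nodes, n_nodes))
    for i, senders in received_from.items():
        for j in senders:
            A[i, j] = 1.0 / len(senders)
    return A

# Information flow of Figure 1 (0-based indices): Node 2 receives
# y_1, y_3, y_4; Nodes 3 and 4 receive y_2, y_5; etc.
received = {1: [0, 2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3], 5: [4]}
A = normalized_adjacency(6, received)
B = np.zeros(6); B[0] = 1.0               # ref_ext only enters at Node 1
assert np.allclose(A.sum(axis=1)[1:], 1.0)  # nonzero rows sum to one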
2.2 Network identification procedure
In this paper, our objective is to redesign the local controllers K_i (i = 1...N_mod) in order to improve the performance of the network, by identifying sufficiently accurate models of each interconnected system S_i (i = 1...N_mod). Let us first consider the identification procedure. An identification experiment is performed by adding an excitation signal r_i(t) (t = 1...N) having spectrum Φ_{r_i} at the output of each decentralized controller (see Figure 2). Since the external reference signal ref_ext is, as we will see, not required for identification purposes, ref_ext is put to zero during the identification experiment. This transforms Equations (2) and (3) into:

u_i(t) = r_i(t) + K_i(z) (y_ref,i(t) − y_i(t))   (5)

ȳ_ref(t) = A ȳ(t)   (6)

This experiment allows us to collect the input-output data sets Z_i^N = {u_i(t), y_i(t) | t = 1...N} (i = 1...N_mod) corresponding to each of the N_mod modules.
Instead of using one single global MIMO identification criterion to identify in one step all systems S_i (i = 1...N_mod) with all data sets Z_i^N (i = 1...N_mod), we will here use a simpler, but equivalent identification procedure. Indeed, we will show that a consistent estimate θ̂_i of the true parameter vector θ_{i,0} of system S_i can be identified using only the data set Z_i^N. For this purpose, we use the classical SISO prediction-error identification criterion [START_REF] Ljung | System Identification: Theory for the User, 2nd Edition[END_REF] with a full-order model structure M_i = {G_i(z, θ_i), H_i(z, θ_i)} (i.e. a model structure such that S_i ∈ M_i):

θ̂_i = arg min_{θ_i} (1/N) Σ_{t=1}^{N} ε_i^2(t, θ_i)   (7)

ε_i(t, θ_i) = H_i^{-1}(z, θ_i) (y_i(t) − G_i(z, θ_i) u_i(t))   (8)
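As an illustration, for an ARX model structure the prediction error (8) is linear in θ_i, so criterion (7) reduces to ordinary least squares. The following sketch is only one instance of (7); the orders na, nb and the delay nk are illustrative assumptions (chosen to match the ARX systems used in the numerical section):

import numpy as np

def arx_pem(y, u, na=4, nb=2, nk=3):
    """Criterion (7)-(8) for an ARX structure A(z)y = z^{-nk}B(z)u + e,
    i.e. G = z^{-nk}B/A and H = 1/A. The prediction error
    eps(t, theta) = A(z)y(t) - z^{-nk}B(z)u(t) is linear in theta,
    so (7) reduces to an ordinary least-squares problem."""
    n0 = max(na, nk + nb - 1)
    Phi = np.array([np.r_[[-y[t - k] for k in range(1, na + 1)],
                          [u[t - nk - k] for k in range(nb)]]
                    for t in range(n0, len(y))])
    theta, *_ = np.linalg.lstsq(Phi, y[n0:], rcond=None)
    return theta  # [a_1..a_na, b_0..b_{nb-1}]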
This approach is simple since it only involves N mod individual SISO prediction-error identification criteria. This can be an important advantage when N mod is large. Moreover, we will show in Subsection 2.3 that this simple approach is in fact equivalent to the global MIMO prediction error identification criterion.
Before going further, one should verify that θ̂_i obtained in this way is indeed a consistent estimate of θ_{i,0} or, in other words, that θ_{i,0} is the unique solution of the asymptotic identification criterion θ_i^* = arg min_{θ_i} Ē ε_i^2(t, θ_i), where Ē ε_i^2(t, θ_i) is defined as lim_{N→∞} (1/N) Σ_{t=1}^{N} E ε_i^2(t, θ_i).
For this purpose, we will need to make a number of classical structural assumptions; assumptions that also have to be made when we consider classical (direct) closed-loop identification (see e.g. [START_REF] Gevers | Informative data: how to get just sufficiently rich?[END_REF]).

Assumptions.
(A.1) For all i, θ_i = θ_{i,0} is the only parameter vector for which the models G_i(z, θ_i) and H_i(z, θ_i) correspond to the true system S_i.
(A.2) For all i, the product K_i(z) G_i(z, θ_{i,0}) contains (at least) one delay.
(A.3) For all i, the excitation signal r_i is statistically independent of ē = (e_1, e_2, ..., e_{N_mod})^T.
Conditions for consistent identification of a module in an arbitrary network are given in [START_REF] Dankers | Identification of dynamic models in complex networks with prediction error methods -predictor input selection[END_REF]. Because arbitrary networks are considered in [START_REF] Dankers | Identification of dynamic models in complex networks with prediction error methods -predictor input selection[END_REF], more attention is given to the selection of the right predictor inputs for the identification criterion. Other important aspects, such as the informativity of the input-output data, are dealt with via the classical condition of the positivity of the spectral density matrix. However, as mentioned in [START_REF] Gevers | Identification in dynamic networks: identifiability and experiment design issues[END_REF], there is very little information in [START_REF] Dankers | Identification of dynamic models in complex networks with prediction error methods -predictor input selection[END_REF] on how to obtain informative data by an appropriate design of the experiment (i.e. the application of the excitation signals r_i)². In particular, this condition is very far away from the very detailed conditions deduced in e.g. [START_REF] Gevers | Informative data: how to get just sufficiently rich?[END_REF] to guarantee the consistency of (7) when the data Z_i^N are collected in a simple closed loop, i.e. when the system S_i in (1) is operated with u_i(t) = r_i(t) − K_i(z) y_i(t) (y_ref,i = 0). Indeed, under the assumptions (A.1), (A.2) and (A.3), [START_REF] Gevers | Informative data: how to get just sufficiently rich?[END_REF] shows that (7) is a consistent estimate of θ_{i,0} if and only if the number of frequencies at which the spectrum Φ_{r_i}(ω) is nonzero is larger than a given threshold. This threshold uniquely depends on the order of the controller K_i(z) and on the respective parametrizations and orders of G_i(z, θ_i) and H_i(z, θ_i). If the order of K_i(z) is large, this threshold can be negative and consistency is then also guaranteed when r_i = 0. Moreover, choosing r_i as a filtered white noise is always a sufficient choice to guarantee consistency when considering a single closed-loop system.
In Theorem 1, we will extend these conditions to the case where the closed-loop systems are interconnected in the manner presented above. Before presenting this result, let us observe that, due to the interconnection (6), the input signals u_i (i = 1...N_mod) during the identification experiment can be expressed as:

u_i(t) = Σ_{j=1}^{N_mod} ( R_ij(z) r_j(t) + S_ij(z) e_j(t) )   (9)
for given transfer functions R_ij and S_ij (i, j = 1...N_mod) that can easily be computed using (1), (5), (6) and LFT algebra [START_REF] Doyle | Review of LFT's, LMI's and µ[END_REF]. The transfer functions R_ii and S_ii are always nonzero. Excluding pathological cases, the transfer functions R_ij and S_ij for i ≠ j are both nonzero if and only if there exists a path from node j to node i. In the example of Figure 1, we can e.g. say that R_31 and S_31 are nonzero. Indeed, there exists a path from Node 1 to Node 3 since y_1 is sent to Node 2 and y_2 is in turn sent to Node 3. Consequently, u_3 will be, via y_ref,3, a function of y_1 which is in turn a function of r_1 and e_1. As another example, R_56 and S_56 are zero because there is no path from Node 6 to Node 5. Indeed, y_6 is not sent to any node and can therefore not influence u_5.
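For the type of interconnection considered here, these transfer functions can be computed explicitly: eliminating ȳ from (1), (5) and (6) gives ū = (I + K(I − A)G)^{-1} r̄ − (I + K(I − A)G)^{-1} K(I − A) H ē, with K, G and H the diagonal transfer matrices of the K_i, G_i and H_i. A minimal frequency-wise numpy sketch (function and variable names are ours):

import numpy as np

def RS_at_freq(Gw, Kw, Hw, A):
    """R_ij and S_ij of (9) at one frequency. Gw, Kw, Hw: length-N_mod
    complex arrays with G_i, K_i, H_i evaluated at e^{jw}; A: the
    normalized adjacency matrix."""
    n = A.shape[0]
    K, G, H = np.diag(Kw), np.diag(Gw), np.diag(Hw)
    M = np.linalg.inv(np.eye(n) + K @ (np.eye(n) - A) @ G)
    R = M                                 # map from r_j to u_i
    S = -M @ K @ (np.eye(n) - A) @ H      # map from e_j to u_i
    return R, S

For the network of Figure 1, one can check numerically that R_31 and S_31 are nonzero while R_56 and S_56 vanish, in line with the path argument above.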
Theorem 1 Consider the data set Z_i^N = {u_i(t), y_i(t) | t = 1...N} collected in one arbitrary node i of a network made up of N_mod modules (1). The modules in this network are operated as in (5) and the interconnection of the network is defined via an adjacency matrix (see (6)). Consider furthermore (4) with a matrix Λ = Λ^T that is strictly positive definite and consider also the assumptions (A.1), (A.2) and (A.3). Then, the estimate θ̂_i obtained via (7) is a consistent estimate of θ_{i,0} if and only if one of the following two conditions is satisfied:
(i) there exists at least one path from one node j ≠ i to the considered node i (i.e. R_ij(z) ≠ 0 and S_ij(z) ≠ 0);
(ii) the excitation signal r_i satisfies the signal richness condition of [START_REF] Gevers | Informative data: how to get just sufficiently rich?[END_REF] that guarantees consistency of (7) in a simple closed loop (i.e. when the system S_i (see (1)) is operated with u_i(t) = r_i(t) − K_i(z) y_i(t)). This condition uniquely depends on the order of the controller K_i(z) and on the respective parametrizations and orders of G_i(z, θ_i) and H_i(z, θ_i).

Proof. Without loss of generality, let us suppose that i = 1. Moreover, for conciseness, let us drop the argument z in the transfer functions. Using (1) and (8), the prediction error ε_1(t, θ_1) is given by:

ε_1(t, θ_1) = e_1(t) + (ΔH_1(θ_1)/H_1(θ_1)) e_1(t) + (ΔG_1(θ_1)/H_1(θ_1)) u_1(t)   (10)
with ΔH_1(θ_1) = H_1(θ_{1,0}) − H_1(θ_1) and ΔG_1(θ_1) = G_1(θ_{1,0}) − G_1(θ_1). Inserting (9) into (10) and using the notation ē = (e_1, e_2, ..., e_{N_mod})^T and r̄ = (r_1, r_2, ..., r_{N_mod})^T, ε_1(t, θ_1) can be rewritten as:

ε_1(t, θ_1) = e_1(t) + L_1(θ_1) ē(t) + R_1(θ_1) r̄(t)   (11)

L_1(z, θ_1) = ( V_1(θ_1), (ΔG_1(θ_1)/H_1(θ_1)) S_12, ..., (ΔG_1(θ_1)/H_1(θ_1)) S_{1N_mod} )

R_1(z, θ_1) = (ΔG_1(θ_1)/H_1(θ_1)) ( R_11, R_12, ..., R_{1N_mod} )

with V_1(θ_1) ≜ ΔH_1(θ_1)/H_1(θ_1) + (ΔG_1(θ_1)/H_1(θ_1)) S_11.
Assumption (A.2) implies that, when nonzero, the transfer functions (ΔG_1(θ_1)/H_1(θ_1)) S_1j (∀j) all contain at least one delay. Moreover, when nonzero, ΔH_1(θ_1) also contains one delay since H_1 is monic. Combining these two facts with (A.3) and recalling that Φ_ē(ω) = Λ, the power Ē ε_1^2(t, θ_1) of ε_1(t, θ_1) is equal to:

Λ_{1,1} + (1/2π) ∫_{−π}^{π} L_1(e^{jω}, θ_1) Λ L_1^*(e^{jω}, θ_1) dω + (1/2π) ∫_{−π}^{π} R_1(e^{jω}, θ_1) Φ_r̄(ω) R_1^*(e^{jω}, θ_1) dω

with Λ_{1,1} the (1,1) entry of Λ (i.e. the variance of e_1) and Φ_r̄(ω) ≥ 0 the power spectrum of r̄. Note that, for all θ_1, Ē ε_1^2(t, θ_1) ≥ Λ_{1,1}. Consequently, since we have that ε_1(t, θ_{1,0}) = e_1(t) and thus that Ē ε_1^2(t, θ_{1,0}) = Λ_{1,1}, all minimizers θ_1^* of Ē ε_1^2(t, θ_1) must be such that Ē ε_1^2(t, θ_1^*) = Λ_{1,1}.
To prove this theorem, we will first show that Condition (i) is sufficient to guarantee that θ_1^* = θ_{1,0} is the unique minimizer of Ē ε_1^2(t, θ_1), i.e. that θ_1^* = θ_{1,0} is the only parameter vector such that Ē ε_1^2(t, θ_1^*) = Λ_{1,1}. For this purpose, let us start with the following observation. Due to our assumption that Λ > 0, any parameter vector θ_1^* such that Ē ε_1^2(t, θ_1^*) = Λ_{1,1} must satisfy:

L_1(θ_1^*) ≡ 0   (12)

If Condition (i) holds (i.e. if there is a path from a node j ≠ 1 to node 1), we have that S_1j ≠ 0. To satisfy (12), it must in particular hold that (ΔG_1(θ_1^*)/H_1(θ_1^*)) S_1j ≡ 0. Since S_1j ≠ 0, this yields ΔG_1(θ_1^*) = 0. Using ΔG_1(θ_1^*) = 0 and the fact that the first entry V_1(θ_1^*) of L_1(θ_1^*) must also be equal to zero to satisfy (12), we obtain ΔH_1(θ_1^*) = 0. Since, by virtue of (A.1), θ_{1,0} is the unique parameter vector making both ΔG_1 and ΔH_1 equal to 0, we have thus proven consistency under Condition (i). Note that this result is irrespective of Φ_r̄, i.e. it holds for any choice of Φ_r̄.
In order to conclude this proof, it remains to be proven that, if Condition (i) does not hold, Condition (ii) is a necessary and sufficient condition for consistency. This part is straightforward. Indeed, if Condition (i) does not hold, y_ref,1 = 0 and the data set Z_1^N = {u_1(t), y_1(t) | t = 1...N} is generated as in an isolated closed-loop system. The result is thus proven since the paper [START_REF] Gevers | Informative data: how to get just sufficiently rich?[END_REF] gives necessary and sufficient conditions on the richness of r_1 to guarantee consistency in an isolated closed-loop system.
Theorem 1 shows that the network configuration considered in this paper is in fact beneficial for identification. Indeed, consistency of (7) is not only guaranteed in all situations where consistency is guaranteed in the simple closed-loop case (i.e. when y_ref,i = 0), but also in many other cases (via Condition (i)). Indeed, Condition (i) shows that, due to the interconnection, disturbances v_j in other nodes connected via a path to node i are sufficient to lead to consistency of (7), and this even in the extreme case where all excitation signals r_j (j = 1...N_mod) are set to zero. For example, in the network of Figure 1, Condition (i) applies to Nodes 2, 3, 4, 5 and 6. Consequently, the richness condition of [START_REF] Gevers | Informative data: how to get just sufficiently rich?[END_REF] only has to be respected to identify a consistent estimate θ̂_1 of the module S_1, which is the only isolated module in this network.

Remark 1. Note that the result of Theorem 1 only requires that Λ = Λ^T > 0. Consequently, it not only applies to signals e_i that are mutually independent, but also to signals e_i that are spatially correlated. This is an interesting observation since the independence of the disturbances having a path to Node i is an assumption in [START_REF] Dankers | Identification of dynamic models in complex networks with prediction error methods -predictor input selection[END_REF].
2.3 SISO criterion vs. MIMO criterion
Let us define θ̂ = (θ̂_1^T, θ̂_2^T, ..., θ̂_{N_mod}^T)^T using the different estimates θ̂_i (i = 1...N_mod) obtained using the individual SISO criteria (7) for all modules in the network. In this subsection, we show that this estimate θ̂ of θ_0 = (θ_{1,0}^T, ..., θ_{N_mod,0}^T)^T is equal to the estimate θ̂_mimo obtained with the prediction error criterion using all data sets Z_i^N (i = 1...N_mod). Using (1), the optimally weighted MIMO prediction error identification criterion [START_REF] Ljung | System Identification: Theory for the User, 2nd Edition[END_REF] is θ̂_mimo = arg min_θ V(θ) with

V(θ) = (1/N) Σ_{t=1}^{N} ε̄^T(t, θ) Λ^{-1} ε̄(t, θ)   (13)

ε̄(t, θ) = H^{-1}(z, θ) ( ȳ(t) − G(z, θ) ū(t) )   (14)

with θ = (θ_1^T, θ_2^T, ..., θ_{N_mod}^T)^T, G(z, θ) a diagonal transfer matrix equal to diag(G_1(θ_1), G_2(θ_2), ..., G_{N_mod}(θ_{N_mod})) and H(z, θ) defined similarly to G(z, θ). The data ȳ ≜ (y_1, ..., y_{N_mod})^T and ū ≜ (u_1, ..., u_{N_mod})^T are the data collected in Z_i^N (i = 1...N_mod).
Let us observe that ε̄(t, θ) = (ε_1(t, θ_1), ..., ε_{N_mod}(t, θ_{N_mod}))^T with ε_i(t, θ_i) as in (8). Using this expression for ε̄(t, θ) and the assumption that Λ is diagonal, we can rewrite V(θ) as:

V(θ) = Σ_{i=1}^{N_mod} (1/Λ_{i,i}) V_i(θ_i)   (15)

with Λ_{i,i} the (i,i)-entry of Λ (i.e. the variance of e_i) and V_i(θ_i) = (1/N) Σ_{t=1}^{N} ε_i^2(t, θ_i) the cost function used in the SISO criterion (7). This last expression shows that minimizing the individual cost functions V_i(θ_i) (as done in (7)) is equivalent to minimizing V(θ) and thus that θ̂ = θ̂_mimo. Consequently, when Λ is diagonal, there is no disadvantage whatsoever in using the individual SISO criteria (7) instead of the MIMO criterion (13).
If the global experiment is designed such that consistency is guaranteed for each module i (see Theorem 1), the estimate θ̂ = θ̂_mimo of θ_0 has the property that √N (θ̂ − θ_0) is asymptotically normally distributed around zero [START_REF] Ljung | System Identification: Theory for the User, 2nd Edition[END_REF]. The covariance matrix of θ̂ is moreover given by P_θ = (1/N) ( Ē Ψ(t, θ_0) Λ^{-1} Ψ^T(t, θ_0) )^{-1} where Ψ(t, θ) = −∂ε̄^T(t, θ)/∂θ [START_REF] Ljung | System Identification: Theory for the User, 2nd Edition[END_REF]. Let us observe that Ψ(t, θ) is a block-diagonal matrix: Ψ(t, θ) = bdiag(ψ_1(t, θ_1), ψ_2(t, θ_2), ..., ψ_{N_mod}(t, θ_{N_mod})) with ψ_i(t, θ_i) = −∂ε_i(t, θ_i)/∂θ_i. Consequently, P_θ has the following block-diagonal structure:

P_θ = bdiag(P_{θ_1}, P_{θ_2}, ..., P_{θ_{N_mod}})   (16)

P_{θ_i} = (Λ_{i,i}/N) ( Ē ψ_i(t, θ_{i,0}) ψ_i^T(t, θ_{i,0}) )^{-1},   i = 1...N_mod
The covariance matrices P_{θ_i} in P_θ are the covariance matrices of the individual estimates θ̂_i for each i. Note that P_{θ_i} can be estimated from the data Z_i^N and θ̂_i [START_REF] Ljung | System Identification: Theory for the User, 2nd Edition[END_REF]. However, for further use, we also derive an expression of P_{θ_i} as a function of the experimental conditions. For this purpose, we recall (see e.g. [START_REF] Bombois | Least costly identification experiment for control[END_REF]) that ψ_i(t, θ_{i,0}) = F_i(z, θ_{i,0}) u_i(t) + L_i(z, θ_{i,0}) e_i(t) with F_i(θ_i) = H_i^{-1}(θ_i) ∂G_i(θ_i)/∂θ_i and L_i(θ_i) = H_i^{-1}(θ_i) ∂H_i(θ_i)/∂θ_i. Using (9) and assuming that the excitation signals r_j (j = 1...N_mod) are all mutually independent, we obtain:

P_{θ_i}^{-1} = (N/(2π Λ_{i,i})) ∫_{−π}^{π} Z_i(e^{jω}) Λ Z_i^*(e^{jω}) dω + (N/(2π Λ_{i,i})) ∫_{−π}^{π} F_i(e^{jω}) F_i^*(e^{jω}) ( Σ_{j=1}^{N_mod} |R_ij(e^{jω})|^2 Φ_{r_j}(ω) ) dω   (17)

with Z_i(z) a matrix of transfer functions of dimension n_{θ_i} × N_mod whose i-th column is L_i + F_i S_ii and whose j-th column (j ≠ i) is equal to F_i S_ij. Note that this expression depends not only on θ_{i,0}, but also, via the nonzero transfer functions S_ij, R_ij, on the true parameter vectors θ_{j,0} of the systems S_j (j ≠ i) having a path to node i.
It is also important to note that not only the excitation signal r_i but also all signals r_j in nodes having a path to i contribute to the accuracy P_{θ_i}^{-1} of θ̂_i. In the network of Figure 1, the accuracy P_{θ_6}^{-1} of the model of S_6 will thus be influenced by the excitation signals r_j (j = 1...6) in all nodes. Moreover, due to the structure of (17), we could also theoretically obtain any accuracy P_{θ_6}^{-1} for that model by e.g. only exciting at Node 1 (i.e. r_1 ≠ 0 and r_j = 0 for j = 2...6). It is nevertheless to be noted that a larger excitation power (or a longer experiment) will then typically be necessary to guarantee this accuracy because of the attenuation of the network.
2.4 Uncertainty bounding
Suppose we have made an experiment leading to informative data Z_i^N for all modules S_i (i = 1...N_mod). We can thus obtain the estimate θ̂ of θ_0 using the individual identification criteria (7). Given the properties of θ̂ given above, the ellipsoid U = {θ | (θ − θ̂)^T P_θ^{-1} (θ − θ̂) < χ} with Pr(χ^2(n_θ) < χ) = β (n_θ is the dimension of θ) will, for sufficiently large sample size N, be a β%-confidence region for the unknown parameter vector θ_0 (say β = 95%). For the robustness analysis via the hierarchical approach that will be presented in the next section, we will also need the projections U_i of U onto the parameter space of θ_i for each of the modules i = 1...N_mod. Using (16) and the fact that θ = (θ_1^T, θ_2^T, ..., θ_{N_mod}^T)^T, the projection U_i = {θ_i | θ ∈ U} (i = 1...N_mod) is an ellipsoid given by (see e.g. [START_REF] Bombois | Quantification of frequency domain error bounds with guaranteed confidence level in prediction error identification[END_REF]):

U_i = {θ_i | (θ_i − θ̂_i)^T P_{θ_i}^{-1} (θ_i − θ̂_i) < χ}   (18)
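A small sketch of (18): the threshold χ is obtained from the chi-square distribution and membership of U_i is a quadratic test (function name ours):

import numpy as np
from scipy.stats import chi2

def in_Ui(theta, theta_hat, P_inv, beta=0.95):
    """Membership test for the ellipsoid U_i of (18), with chi such
    that Pr(chi2(n_theta) < chi) = beta."""
    chi = chi2.ppf(beta, df=len(theta_hat))
    d = np.asarray(theta) - np.asarray(theta_hat)
    return float(d @ P_inv @ d) < chi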
3 Controller validation

In the previous section, we have seen that models G_i(z, θ̂_i) of the systems S_i (i = 1...N_mod) can be obtained using a global identification experiment on an interconnected network. The identified models can now be used to design improved decentralized controllers K̂_i for each module (see e.g. [START_REF] Scorletti | An LMI approach to decentralized H∞ control[END_REF]). Note that, if the different systems S_i are homogeneous, i.e. they are identical up to industrial dispersion, one could design a common controller K̂_i = K̂ ∀i using an average of the identified models (see e.g. [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF]).
In any case, the decentralized controllers K̂_i are designed to guarantee both a nominal local performance (performance of the loop [K̂_i G_i(z, θ̂_i)]) and a nominal global performance (performance of the network). In [START_REF] Scorletti | An LMI approach to decentralized H∞ control[END_REF][START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF], the H∞ framework is used to measure both the local and global performance. As usual in classical H∞ control design, a sufficient level of local performance is ensured by imposing, for all i, a frequency-dependent threshold on the modulus of the frequency responses of transfer functions such as 1/(1 + K̂_i G_i(z, θ̂_i)) and K̂_i/(1 + K̂_i G_i(z, θ̂_i)). This makes it possible, e.g., to guarantee a certain tracking ability for each local loop (since the first transfer function is the one between y_ref,i and y_i − y_ref,i) and to limit the control efforts (since the second transfer function is the one between y_ref,i and u_i). Since the loops [K̂_i G_i(z, θ̂_i)] are not isolated, but interconnected as in (3), the control design method in [START_REF] Scorletti | An LMI approach to decentralized H∞ control[END_REF][START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF] also imposes specific constraints on the global performance by imposing a frequency-dependent threshold on the modulus of the frequency responses of transfer functions P describing the behaviour of the network as a whole. Examples of such global transfer functions P are the transfer functions between the external reference ref_ext and the tracking error y_i − ref_ext at each node of the network. Other examples are the transfer functions between ref_ext and the input signal u_i at each node of the network. Finally, we can also consider the transfer functions between a disturbance v_i in one node and an output signal y_j in the same or another node. It is clear that these transfer functions reflect the performance of the whole network with respect to tracking, control efforts and disturbance rejection, respectively.
For the design of K̂_i, the thresholds (weightings) corresponding to each transfer function described in the previous paragraph must be chosen in such a way that they improve the performance of the original network. The performance of the original network can be evaluated by computing the frequency responses of these transfer functions in the network made up of the interconnection of the loops [K_i G_i(z, θ̂_i)].
Since the decentralized controllers K̂_i are designed based on the models G_i(z, θ̂_i) of the true systems S_i (i = 1...N_mod), it is important to verify whether these decentralized controllers will also lead to a satisfactory level of performance, both at the local level and at the global level, when they are applied to the true systems S_i (i = 1...N_mod). Since both the local and the global performance may be described by different transfer functions, the verification that will be presented below must be done for each of these transfer functions. For the transfer functions describing the local performance, the robustness analysis results of [START_REF] Bombois | Robustness analysis tools for an uncertainty set obtained by prediction error identification[END_REF] can be used. In the sequel, we will thus restrict attention to the global performance and show how to perform this verification for one arbitrary transfer function P describing the global performance. The input of this transfer function will be denoted by w (e.g. w = ref_ext) and its output will be denoted by s (e.g. s = y_i − ref_ext for some i). Let us thus denote by P(z, θ̂) the value of this transfer function for the network made up of the interconnection of the loops [K̂_i G_i(z, θ̂_i)]. Similarly, let us denote by P(z, θ_0) its value for the network made up of the loops [K̂_i G_i(z, θ_{i,0})]. In order to verify the robustness of the designed controllers K̂_i with respect to this global transfer function P, we thus have to verify whether, for all ω:

|P(e^{jω}, θ_0)| < W(ω)   (19)

where W(ω) is the threshold that defines the desired global performance (w.r.t. P) in the redesigned network. Since the unknown θ_0 lies in U (modulo the confidence level β), (19) will be deemed verified if sup_{θ∈U} |P(e^{jω}, θ)| < W(ω) at each ω. Note that a necessary condition for the latter to hold is obviously that the designed transfer function P(z, θ̂) satisfies |P(e^{jω}, θ̂)| < W_nom(ω) with W_nom(ω) < W(ω) for all ω. For a successful redesign of the network, the controllers K̂_i must thus be designed with a nominal performance that is (at least slightly) better than the desired performance.
Computing sup_{θ∈U} |P(e^{jω}, θ)| exactly is not possible. However, we can deduce upper bounds for this quantity. One possible approach is to use µ-analysis based on the parametric uncertainty set U [START_REF] Zhou | Essentials of Robust Control[END_REF][START_REF] Barenthin | Identification for control of multivariable systems: controller validation and experiment design via LMIs[END_REF]. However, since networks can in practice be made up of a large number N_mod of modules, yielding a parametric uncertainty set of large dimension, this direct approach could prove impractical from a computational point of view [START_REF] Safonov | Propagation of conic model uncertainty in hierarchical systems[END_REF][START_REF] Dinh | Convex hierarchical analysis for the performances of uncertain large-scale systems[END_REF] and the computed upper bound could possibly turn out to be relatively conservative. Consequently, we will here consider the two-step hierarchical approach proposed in [START_REF] Safonov | Propagation of conic model uncertainty in hierarchical systems[END_REF][START_REF] Dinh | Convex hierarchical analysis for the performances of uncertain large-scale systems[END_REF] and show how this two-step approach can be applied for the type of parametric uncertainties delivered by network identification.
As a first step, we will give a convenient expression of the global transfer function P(z, θ_0). For this purpose, note that P(z, θ_0) pertains to a network characterized by the interconnection equation (3) and the following equation describing the local loop [K̂_i G_i(z, θ_{i,0})]:

y_i(t) = v_i(t) + T_i(z, θ_{i,0}) (y_ref,i(t) − v_i(t))   (20)

with T_i(z, θ_{i,0}) = K̂_i G_i(z, θ_{i,0}) / (1 + K̂_i G_i(z, θ_{i,0})). Consequently, the usually used global performance transfer functions P(z, θ_0) can be written as an LFT of T(z, θ_0) = diag(T_1(z, θ_{1,0}), T_2(z, θ_{2,0}), ..., T_{N_mod}(z, θ_{N_mod,0})), i.e. we can determine vectors of signals p and q such that the transfer function P(z, θ_0) between w and s can be written as:

p = T(z, θ_0) q   and   [q; s] = I(z) [p; w]   (21)

for a given matrix of transfer functions I(z) that does not depend on θ_0 (i.e. that does not depend on G_i(z, θ_{i,0}) (i = 1...N_mod)). For this LFT representation, we will use the shorthand notation: P(z, θ_0) = F(I(z), T(z, θ_0)).
As an example, in the network of Figure 1, let us consider the transfer function between w = ref_ext and s = y_6 − ref_ext and let us consequently pose v_i = 0 (i = 1...N_mod). This transfer function can be described as in (21) with q = ȳ_ref, p = ȳ and the following constant matrix I:

I = [ A                     B
      (0, 0, 0, 0, 0, 1)   −1 ]
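For this example, |P(e^{jω}, θ)| can be evaluated directly from (21): with p = T q and q = A p + B w, one gets P = C T (I − A T)^{-1} B − 1 with C = (0, 0, 0, 0, 0, 1). A small numpy sketch (names ours):

import numpy as np

def global_perf_at_freq(Tw, A, B):
    """|P(e^{jw})| for the transfer function between ref_ext and
    y_6 - ref_ext. Tw: length-N_mod complex array of the T_i at e^{jw}."""
    n = len(Tw)
    C = np.zeros(n); C[-1] = 1.0
    T = np.diag(Tw)
    p = T @ np.linalg.inv(np.eye(n) - A @ T) @ B   # p = (I - TA)^{-1} T B w
    return abs(C @ p - 1.0)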
The matrix T(z, θ_0) depends on the unknown true parameter vector θ_0. Since θ_0 ∈ U (modulo the confidence level β), T(z, θ_0) obviously lies in the parametric uncertainty set T^s = {T(z, θ) | θ ∈ U}. Note that sup_{θ∈U} |P(e^{jω}, θ)| = sup_{T∈T^s} |F(I(e^{jω}), T(e^{jω}))|. Computing an upper bound of the latter expression remains as complicated as for the former since T^s is still a parametric uncertainty set of large dimension (if N_mod is large). However, this LFT representation of P(z, θ_0) enables the use of the hierarchical approach. The idea of the hierarchical approach is to embed T^s into an uncertainty set T having a structure for which the robustness analysis of the global performance is tractable even if N_mod is large. For this purpose, we can choose the following structure for T:
T = { T(z) | T(z) = (I_{N_mod} + Δ(z)) T(z, θ̂) with Δ(e^{jω}) ∈ Δ(ω) ∀ω }   (22)

where Δ(z) = diag(Δ_1(z), Δ_2(z), ..., Δ_{N_mod}(z)) is the (stable) uncertainty made up of N_mod scalar transfer functions Δ_i(z) (i = 1...N_mod). The set Δ(ω) has a similar diagonal structure: Δ(ω) = diag(Δ_1(ω), Δ_2(ω), ..., Δ_{N_mod}(ω)). The elements Δ_i(ω) constrain the frequency response Δ_i(e^{jω}) of Δ_i(z) as follows:

Δ_i(ω) = { Δ_i(e^{jω}) | |Δ_i(e^{jω}) − c_i(ω)| < ρ_i(ω) }   (23)

i.e. Δ_i(ω) is a disk (in the complex plane) of radius ρ_i(ω) and of (complex) center c_i(ω).
Since, as mentioned above, T will be determined in such a way that T^s ⊆ T, T(z, θ_0) will also lie in T and we will thus be able to verify (19) by verifying at each ω that P_wc(ω, T) < W(ω) with

P_wc(ω, T) ≜ sup_{T(z)∈T} |F(I(e^{jω}), T(e^{jω}))|   (24)
Since T(z) = (I_{N_mod} + Δ(z)) T(z, θ̂) is an LFT in Δ(z), F(I(z), T(z)) can be rewritten as an LFT in Δ(z), i.e. F(I(z), T(z)) = F(M(z), Δ(z)) with M(z) a function of I(z) and T(z, θ̂). Consequently, (24) is also given by:

P_wc(ω, T) = sup_{Δ(e^{jω})∈Δ(ω)} |F(M(e^{jω}), Δ(e^{jω}))|   (25)
Before presenting how we can evaluate (25), we will first present how the uncertainty set T can be determined in practice. First note that we can decompose the uncertainty set T into N_mod SISO (unstructured) uncertainty sets T_i defined as follows:

T_i = { T_i(z) | T_i(z) = (1 + Δ_i(z)) T_i(z, θ̂_i) with Δ_i(e^{jω}) ∈ Δ_i(ω) ∀ω }   (26)

with Δ_i(ω) as defined in (23). Ensuring T^s ⊆ T can thus be obtained by determining the sets T_i in such a way that, for all i = 1...N_mod, T_i^s ⊆ T_i with T_i^s defined as follows:

T_i^s = { T_i(z, θ_i) | T_i(z, θ_i) = K̂_i(z) G_i(z, θ_i) / (1 + K̂_i(z) G_i(z, θ_i)) with θ_i ∈ U_i }   (27)
with U_i as in (18). In order to achieve this in an optimal way, the frequency functions ρ_i(ω) and c_i(ω) defining, via (23), the size of T_i will be determined in such a way that T_i is the smallest set for which it still holds that T_i^s ⊆ T_i. By doing so, we indeed reduce as much as possible the conservatism linked to the embedding of the uncertainty set T_i^s (that follows from the identification experiment) into the unstructured uncertainty set T_i. Consequently, for a given i and for a given ω, ρ_i(ω) and c_i(ω) have to be chosen as the solution of the following optimization problem:
min ρ_i(ω) s.t. |T̃_i(e^{jω}, θ_i) − c_i(ω)| < ρ_i(ω) ∀θ_i ∈ U_i   (28)

with T̃_i(e^{jω}, θ_i) = ( T_i(e^{jω}, θ_i) − T_i(e^{jω}, θ̂_i) ) / T_i(e^{jω}, θ̂_i)

with T_i(z, θ_i) as defined in (27). As shown in the following theorem, the solution of the optimization problem (28) can be efficiently determined using LMI optimization [START_REF] Boyd | Linear Matrix Inequalities in Systems and Control Theory[END_REF]. Before presenting this result, let us first give an expression of T̃_i(e^{jω}, θ_i) as a function of θ_i, using the following notation for G_i: G_i(e^{jω}, θ_i) = Z_{1,i}(e^{jω}) θ_i / (1 + Z_{2,i}(e^{jω}) θ_i). In the last expression, Z_{1,i}(z) and Z_{2,i}(z) are row vectors containing only delays or zeros (see [START_REF] Bombois | Robustness analysis tools for an uncertainty set obtained by prediction error identification[END_REF]). This yields

T̃_i(e^{jω}, θ_i) = ( −1 + Z_{N,i}(e^{jω}) θ_i ) / ( 1 + Z_{D,i}(e^{jω}) θ_i )   (29)

with Z_{D,i} = Z_{2,i} + K̂_i Z_{1,i} and Z_{N,i} = K̂_i Z_{1,i} / T_i(e^{jω}, θ̂_i) − Z_{D,i}.
Theorem 2 Consider the notation T̃_i(e^{jω}, θ_i) = ( −1 + Z_{N,i}(e^{jω}) θ_i ) / ( 1 + Z_{D,i}(e^{jω}) θ_i ) given in (29). The optimization problem (28) at a given ω and at a given i is equivalent to the following LMI optimization problem having as decision variables a positive real scalar α_i(ω), a complex scalar c_i(ω), a positive real scalar ξ_i(ω) and a skew-symmetric matrix X_i(ω) ∈ R^{(n_{θ_i}+1)×(n_{θ_i}+1)}:

min α_i(ω) subject to

[ −α_i(ω)     λ_i(ω)
  λ_i^*(ω)    −A_i(ω) − ξ_i(ω) B_i + jX_i(ω) ] < 0   (30)

with λ_i(ω) = ( Z_{N,i} − Z_{D,i} c_i ,  −1 − c_i ) and

A_i(ω) = [ Z_{D,i}^* Z_{D,i}   Z_{D,i}^*
           Z_{D,i}             1 ]

B_i = [ P_{θ_i}^{-1}               −P_{θ_i}^{-1} θ̂_i
        −θ̂_i^T P_{θ_i}^{-1}       θ̂_i^T P_{θ_i}^{-1} θ̂_i − χ ]

The above optimization problem is not explicitly a function of ρ_i(ω). However, the optimal ρ_i(ω) can be obtained by taking the square root of the optimal α_i(ω).
Proof. For conciseness, we will drop the frequency argument ω in the variables. Using the notations ρ_i^2 = α_i and θ̃_i = (θ_i^T 1)^T and using (29), we can rewrite the constraint in (28) as:

θ̃_i^T ( λ_i^* α_i^{-1} λ_i − A_i ) θ̃_i < 0 ∀θ_i ∈ U_i   (31)

while the constraint θ_i ∈ U_i is equivalent to θ̃_i^T B_i θ̃_i < 0. Consequently, by virtue of the S-procedure [START_REF] Boyd | Linear Matrix Inequalities in Systems and Control Theory[END_REF] and Lemma 2 in [START_REF] Bombois | Least costly identification experiment for control[END_REF], (31) holds if and only if there exist ξ_i > 0 and X_i = −X_i^T such that

λ_i^* α_i^{-1} λ_i − A_i − ξ_i B_i + jX_i < 0   (32)

Since α_i > 0, an application of the Schur complement [START_REF] Boyd | Linear Matrix Inequalities in Systems and Control Theory[END_REF] shows that (32) is equivalent to (30). This concludes the proof.
Remark 2. The novelty of this theorem resides in the determination of the (complex) center c_i of the unstructured uncertainty. Given the definition of T̃_i(e^{jω}, θ_i), one could of course consider that this center is zero and just compute the radius ρ_i(ω). In this case, the LMI optimization above is the same as the one in [START_REF] Bombois | Robustness analysis tools for an uncertainty set obtained by prediction error identification[END_REF]. However, since the mapping between θ_i and T̃_i(e^{jω}, θ_i) is in general not linear, this could lead to larger embedding sets T_i and thus to more conservative results for the robustness analysis (as will be illustrated in Section 5).
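For completeness, here is how (30) could be posed with cvxpy at one frequency. This is only a sketch of the procedure of Theorem 2: the Hermitian slack variable used to express the complex-valued LMI and all names are our own implementation choices.

import numpy as np
import cvxpy as cp

def smallest_disk(ZN, ZD, P_inv, theta_hat, chi):
    """Solve (30): smallest disk (center c_i, radius rho_i) containing
    Ttilde_i(e^{jw}, U_i). ZN, ZD: complex row vectors Z_{N,i}, Z_{D,i}."""
    n = len(theta_hat)
    alpha = cp.Variable(nonneg=True)
    c = cp.Variable(complex=True)
    xi = cp.Variable(nonneg=True)
    Xfree = cp.Variable((n + 1, n + 1))
    X = Xfree - Xfree.T                   # skew-symmetric by construction
    Ai = np.block([[np.outer(ZD.conj(), ZD), ZD.conj().reshape(-1, 1)],
                   [ZD.reshape(1, -1), np.ones((1, 1))]])
    b = (P_inv @ theta_hat).reshape(-1, 1)
    Bi = np.block([[P_inv, -b],
                   [-b.T, np.array([[theta_hat @ P_inv @ theta_hat - chi]])]])
    lam = cp.hstack([ZN - c * ZD, cp.reshape(-1 - c, (1,))])
    Mblk = cp.bmat([[cp.reshape(-alpha, (1, 1)), cp.reshape(lam, (1, n + 1))],
                    [cp.reshape(cp.conj(lam), (n + 1, 1)),
                     -Ai - xi * Bi + 1j * X]])
    H = cp.Variable((n + 2, n + 2), hermitian=True)  # Hermitian copy of the LMI
    prob = cp.Problem(cp.Minimize(alpha), [H == Mblk, H << 0])
    prob.solve(solver=cp.SCS)
    return np.sqrt(alpha.value), c.value  # rho_i(w), c_i(w)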
Using Theorem 2, we can determine ρ_i(ω) and c_i(ω) for any value of i (i = 1...N_mod) and for any value of ω. In this way, we fully determine the sets T_i (see (26)) for all values of i and therefore also the set T (see (22)). With this information, we will be able, as shown in the following theorem, to evaluate the worst-case global performance P_wc(ω, T) defined in (25) for any value of ω.

Theorem 3 Consider a given frequency ω and the set T (see (22)) with a diagonal uncertainty Δ(e^{jω}) whose elements Δ_i(e^{jω}) are constrained to lie in a disk Δ_i(ω) of radius ρ_i(ω) and of center c_i(ω) (see (23)). Define R_ω (resp. C_ω) as a diagonal matrix of dimension N_mod whose elements are ρ_i^2(ω) (resp. c_i(ω)) (i = 1...N_mod). Then, an upper bound P_wc,ub(ω, T) of the worst-case global performance P_wc(ω, T) defined in (25) is given by √(γ_opt(ω)), where γ_opt(ω) is the solution of the following LMI optimization problem. This LMI optimization problem has as decision variables a real scalar γ(ω) > 0 and a strictly positive definite diagonal matrix T_ω ∈ R^{N_mod×N_mod}:

min γ(ω) subject to

[ M(e^{jω}) ; I ]^* N(γ(ω)) [ M(e^{jω}) ; I ] < 0   (33)

with

N(γ(ω)) ≜ [ T_ω (R_ω − C_ω^* C_ω)   0    T_ω C_ω^*   0
            0                        1    0           0
            T_ω C_ω                  0    −T_ω        0
            0                        0    0           −γ(ω) ]   (34)

This theorem can be straightforwardly deduced from the separation of graphs theorem [START_REF] Safonov | Stability and Robustness of Multivariable Feedback Systems[END_REF] and from the results in [START_REF] Dinh | Convex hierarchical analysis for the performances of uncertain large-scale systems[END_REF]. It follows from the fact that the constraint (33) is a sufficient condition for |F(M(e^{jω}), Δ(e^{jω}))|^2 < γ(ω) ∀Δ(e^{jω}) ∈ Δ(ω) to hold.

Remark 3. The so-called hierarchical approach for robustness analysis presented above is preferred over a direct µ-analysis approach based on the structured uncertainty U when the number N_mod of modules is large. Indeed, even in this case, its computational complexity remains low, while the µ-analysis approach would involve very complex multipliers of large dimensions. The radius ρ_i(ω) and the center c_i(ω) are indeed computed at the local level and the computation of (the upper bound of) the worst-case performance P_wc(ω, T) in Theorem 3 uses a very simple multiplier T_ω. This approach of course only yields an upper bound of sup_{θ∈U} |P(e^{jω}, θ)|. However, the µ-analysis approach that would be used to compute sup_{θ∈U} |P(e^{jω}, θ)| would also only lead to an upper bound of this quantity, which can turn out to be conservative for large N_mod. In the simulation example, we will show that the conservatism linked to the hierarchical approach remains limited.

Remark 4. Like the µ-analysis approach, the proposed robustness analysis approach is a frequency-wise approach (i.e. the upper bound P_wc,ub(ω, T) on the worst-case performance is computed frequency by frequency). Consequently, the performance constraint (19) can only be verified for a finite number of frequencies in the frequency interval [0, π].
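Similarly, a cvxpy sketch of (33)-(34) at one frequency, assuming the signal ordering (q, s, p, w) of (34); the Hermitian slack variable, the numerical lower bound imposed on the multiplier T_ω and all names are our own choices:

import numpy as np
import cvxpy as cp

def worst_case_perf(Mw, rho, c):
    """Theorem 3 at one frequency: upper bound on |F(M(e^{jw}), Delta)|
    over all diagonal Delta with |Delta_i - c_i| < rho_i. Mw: complex
    (N_mod+1)x(N_mod+1) matrix mapping (p, w) to (q, s)."""
    N = len(rho)
    gam = cp.Variable(nonneg=True)
    t = cp.Variable(N)
    T = cp.diag(t)
    R, C = np.diag(rho**2), np.diag(c)
    z1, z2 = np.zeros((N, 1)), np.zeros((1, N))
    Nmat = cp.bmat([
        [T @ (R - C.conj().T @ C), z1, T @ C.conj().T, z1],
        [z2, np.ones((1, 1)), z2, np.zeros((1, 1))],
        [T @ C, z1, -T, z1],
        [z2, np.zeros((1, 1)), z2, cp.reshape(-gam, (1, 1))]])
    MI = np.vstack([Mw, np.eye(N + 1)])      # stacked [M; I]
    H = cp.Variable((N + 1, N + 1), hermitian=True)
    prob = cp.Problem(cp.Minimize(gam),
                      [H == MI.conj().T @ Nmat @ MI, H << 0, t >= 1e-8])
    prob.solve(solver=cp.SCS)
    return np.sqrt(gam.value)                # P_wc,ub(w, T)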
4 Optimal experiment design for networks
Given an arbitrary identification experiment, the obtained confidence ellipsoid U and the corresponding uncertainty set T can be such that the upper bound P_wc,ub(ω, T) (computed using Theorem 3) is larger than the threshold W(ω) for some frequencies. In this section, we will design the experiment in order to avoid such a situation. More precisely, we will design the spectra Φ_{r_i} of the signals r_i (i = 1...N_mod) for an identification experiment of (fixed) duration N in such a way that the total injected power is minimized while guaranteeing that the obtained uncertainty region T is small enough to ensure

P_wc,ub(ω, T) < W(ω)   (35)

at each frequency ω. Note that, for conciseness, we restrict attention to one single global performance objective (see also Remark 5 below). Note also that we here suppose that the signals r_i (i = 1...N_mod) are all mutually independent. Consequently, the experiment is indeed entirely described by the spectra Φ_{r_i} (i = 1...N_mod).
An important step is to parametrize these spectra Φ_{r_i}(ω). Here, we will use the parametrization of [START_REF] Jansson | Input design via LMIs admitting frequency-wise model specifications in confidence regions[END_REF], i.e.:

Φ_{r_i}(ω) = σ_{i,0} + 2 Σ_{l=1}^{M} σ_{i,l} cos(lω)   (i = 1...N_mod)

for which Φ_{r_i}(ω) > 0 ∀ω can be guaranteed using an extra LMI constraint on the decision variables σ_{i,l} (i = 1...N_mod, l = 0...M) [START_REF] Jansson | Input design via LMIs admitting frequency-wise model specifications in confidence regions[END_REF]. With this parametrization, the cost function in our optimization problem is a linear function of the decision variables:

J = Σ_{i=1}^{N_mod} (1/2π) ∫_{−π}^{π} Φ_{r_i}(ω) dω = Σ_{i=1}^{N_mod} σ_{i,0}   (36)
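A small numeric illustration of this parametrization and of the cost (36); the coefficient values are arbitrary placeholders:

import numpy as np

def spectrum(sigma, omega):
    """Phi_ri(w) = sigma_0 + 2 * sum_{l=1}^{M} sigma_l * cos(l*w)."""
    l = np.arange(1, len(sigma))
    return sigma[0] + 2.0 * np.sum(sigma[1:] * np.cos(l * omega))

sigmas = [np.array([1.0, 0.3, 0.1]) for _ in range(6)]  # one per module
J = sum(s[0] for s in sigmas)   # total injected power, cf. (36)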
Using (17), we also see that the matrices P_{θ_i}^{-1} that determine the sizes ρ_i(ω) of the elements T_i of T via the LMI (30) are all affine functions of the to-be-designed spectra and thus of the decision variables σ_{j,l} (j = 1...N_mod, l = 0...M).
By reducing J, we increase the size of U and we thus increase ρ_i(ω) ∀i. By increasing ρ_i, we increase P_wc,ub(ω, T). Our goal is thus to find the spectra Φ_{r_i}(ω) minimizing J while leading to radii ρ_i(ω) that are sufficiently small for (35) to hold at each ω in a user-chosen grid Ω of the frequency range. For this purpose, we propose the following optimal identification experiment design problem. This optimization problem has the following decision variables: σ_{i,l}, T_ω, α_i(ω) = ρ_i^2(ω), c_i(ω), ξ_i(ω), X_i(ω), with the same structure as in Theorems 2 and 3 and defined for all i = 1...N_mod, for all l = 0...M and for all ω ∈ Ω.

min J under the constraints that, for all ω ∈ Ω,

[ M(e^{jω}) ; I ]^* N(W^2(ω)) [ M(e^{jω}) ; I ] < 0

E(ρ_i^2(ω), c_i(ω), ξ_i(ω), X_i(ω), P_{θ_i}^{-1}) < 0,   i = 1...N_mod   (37)

The constraints of the above optimization problem have to be completed by the N_mod constraints on σ_{i,l} guaranteeing that Φ_{r_i}(ω) > 0 ∀ω (i = 1...N_mod). The matrix E(ρ_i^2(ω), c_i(ω), ξ_i(ω), X_i(ω), P_{θ_i}^{-1}) above is the matrix defined in (30) in which P_{θ_i}^{-1} is replaced by its affine expression in the decision variables σ_{j,l} defining the identification spectra Φ_{r_j} ∀j. This optimization problem will lead to the least costly identification experiment (according to J) that nevertheless guarantees that the identified models will be sufficiently accurate to deliver an acceptable global performance. Indeed, the combination of the constraints in (37) at a given ω guarantees that (35) holds for the uncertainty T defined based on the uncertainty U delivered by the identification.
There are two issues with this optimization problem. First, as in all optimal identification experiment design problems, it depends on the unknown parameter vectors θ_{i,0} (via P_{θ_i}^{-1}), on the to-be-identified parameter vectors θ̂_i (via e.g. A_i) and on the to-be-designed controllers K̂_i (via e.g. B_i). These unknown variables will be replaced by initial guesses, as in [START_REF] Bombois | Least costly identification experiment for control[END_REF][START_REF] Barenthin | Identification for control of multivariable systems: controller validation and experiment design via LMIs[END_REF]. In particular, we need to pre-select a control design method which, based on a model of the modules, leads to decentralized controllers achieving a nominal performance that is (slightly) better than the desired one. This control design method will be used to determine the initial guesses for K̂_i based on the initial guesses for the modules.
The second issue is that the optimization problem is a bilinear problem via the products 34)) and ξ i B i (and thus ξ i P -1 θi ) in the constraint E < 0 (see (30)). Indeed, here, unlike in Section 3, all these variables are together decision variables in the same optimization problem. To tackle this issue, we propose the following iterative algorithm inspired by the so called D-K iterations [START_REF] Zhou | Essentials of Robust Control[END_REF]. Before presenting the algorithm, we note that, if we arbitrarily choose spectra Φ ri (ω) (i = 1...N mod ), we can compute the corresponding P θi (i = 1...N mod ) via [START_REF] Hägg | On optimal input design for networked systems[END_REF]. With these P θi , we can compute P wc,ub (ω, T ) via Theorems 2 and 3 and we can therefore verify whether (35) holds at each ω ∈ Ω. If that is the case, we will say that the spectra Φ ri (ω) (i = 1...N mod ) are validated. Algorithm 1. The algorithm is made up of an initialization step (step 0) and each iteration consists of three steps. S.0. We initialize the algorithm by arbitrarily choosing the spectra Φ ri (i = 1...N mod ) (e.g. Φ ri (ω) = 1). S.1. Using a subdivision algorithm, we determine, using the notion of validation defined above, the minimal positive scalar γ ∈ R such that the spectra γΦ ri (ω) (i = 1...N mod ) remain validated. Denote this minimal γ by γ min . S.2. To validate γ min Φ ri (ω), the optimization problems in Theorems 2 and 3 have been used. The corresponding decision variables are α i = ρ 2 i (ω), c i (ω), ξ i (ω), X i (ω) and T w (i = 1...N mod , ω ∈ Ω). From those decisions variables, ξ i (ω), c i (ω) and T ω (i = 1...N mod , ω ∈ Ω) are conserved for Step 3. S.3. The optimal experiment design problem (37) is transformed into an LMI optimization problem by fixing the decision variables ξ i (ω), c i (ω) and T ω (i = 1...N mod , ω ∈ Ω) to the ones determined in Step 2. The solution of this transformed optimization problem define, via σ i,l , new spectra Φ ri (ω). These new spectra Φ ri (ω) can then be used in Step 1 for a new iteration. The algorithm is stopped when the optimal cost J opt in Step 3 no longer decreases significantly after each iteration. The optimal spectra Φ ri (ω) are then the ones corresponding to this last iteration (Step 1 can be used a last time to further refine these spectra). Remark 5. The optimal experiment design problem has been presented in the case where the objective is to guarantee a certain level of global performance described by one transfer function P . However, it is straightforward to extend it to the case where different transfer functions P are considered and, using the results in [START_REF] Bombois | Least costly identification experiment for control[END_REF], also to the case where, in addition, a certain level of local performance has to be guaranteed. Remark 6. Let us now consider briefly the case where the noises e i in (1) are not independent, but spatially correlated with a strictly positive covariance matrix Λ. We had already observed in Remark 1 that Theorem 1 also holds in that case. As opposed to this, the individual SISO identification criteria [START_REF] Dinh | Convex hierarchical analysis for the performances of uncertain large-scale systems[END_REF] are, for e i having that property, no longer equivalent to the global MIMO identification criterion (13) and will therefore not lead to the smallest variance. 
However, since i (t, θ i,0 ) remains equal to e i (t), √ N ( θi -θ i,0 ) will in that case too be asymptotically normally distributed around zero and the covariance matrix P θi of θi will be still given by [START_REF] Hägg | On optimal input design for networked systems[END_REF]. We also observe that the hierarchical approach only uses the ellipsoids U i (and not U ). Consequently, the controller validation and experiment design results can also be applied when the white noises e i (i = 1...N mod ) are spatially correlated. Indeed, even though the covariance matrix P θ of θ is no longer block-diagonal in that case, the projections U i = {θ i | θ ∈ U } are still given by [START_REF] Jansson | Input design via LMIs admitting frequency-wise model specifications in confidence regions[END_REF] [START_REF] Bombois | Quantification of frequency domain error bounds with guaranteed confidence level in prediction error identification[END_REF]. the sizes of the obtained uncertainties are different at different nodes. As an example, more uncertainty is allowed in Node 1 than in Node 6. Interestingly, the smaller uncertainty in Node 6 is obtained with less power than the larger uncertainty in Node 1: σ 6,0 < σ 1,0 . This is a consequence of the network configuration. Indeed, the uncertainty of Node 6 is not only function of Φ r,6 , but also of all the other spectra applied during the identification experiment (see [START_REF] Hägg | On optimal input design for networked systems[END_REF]). To illustrate this, we have also represented, in red dashed in Figure 4, the radius ρ 6 (ω) that would have been obtained if Node 6 would have been isolated as Node 1 (and thus if P θ6 would have been determined uniquely with Φ r,6 ). We observe that the obtained uncertainty would have been much larger.
Concluding remarks
This paper is one of the first contributions on optimal experiment design in a network context. The type of networks considered in this paper is usual in the literature on multi-agent systems. We have seen that many results of this paper apply not only to systems $S_i$ with independent white noises $e_i$, but also to spatially correlated ones. However, as mentioned in Remark 6, if the white noises $e_i$ are spatially correlated, our identification procedure using individual SISO criteria is no longer optimal since it is no longer equivalent to the global MIMO prediction error identification criterion. Future work will therefore consider the question of how to deal with this MIMO criterion in the case of spatially correlated noises without increasing the computational complexity too much. This complexity is indeed an important feature when the number $N_{mod}$ of modules is large.
Fig. 1. Example of a network.
Fig. 2. (caption not recovered)
Fig. 3. Desired global performance $W(\omega)$ (black dotted), expected worst-case performance $P_{wc,ub}(\omega, T)$ after optimal experiment design and using the initial guesses (black dashed), obtained worst-case performance $P_{wc,ub}(\omega, T)$ after identification and redesign of the controllers (red solid).
We will nevertheless see in the sequel that many of the results of this paper also apply to the case of spatially correlated noises $e_i$, i.e. to the case where (4) holds with a matrix $\Lambda = \Lambda^T > 0$ that is not necessarily diagonal.
The same observation can be made for the conditions derived in the paper [START_REF] Weerts | Identifiability of dynamic networks with part of the nodes noise-free[END_REF], which considers the identification of all the modules in an arbitrary network.
Numerical illustration
In this numerical illustration, we consider the network of Figure 1 made up of six nodes ($N_{mod} = 6$). We consider here the case of nodes made up of homogeneous systems and, for simplicity, the true systems $S_i$ that will be identified will all be identical and given by the following ARX system [START_REF] Landau | A flexible transmission system as a benchmark for robust digital control[END_REF]:
\[
y_i(t) = \frac{z^{-3}B_0(z)}{A_0(z)}\,u_i(t) + \frac{1}{A_0(z)}\,e_i(t)
\]
with $B_0(z) = 0.10276 + 0.18123z^{-1}$ and $A_0(z) = 1 - 1.99185z^{-1} + 2.20265z^{-2} - 1.84083z^{-3} + 0.89413z^{-4}$. The variances $\Lambda_{i,i}$ of the white noises $e_i(t)$ are all equal to one, i.e. $\Lambda = I_6$. We suppose that these true systems are all controlled by the same local controller $K$ that is designed using the local method in [START_REF] Ferreres | H∞ control for a flexible transmission system[END_REF]. The global performance $P(z, \theta_0)$ that we consider in this example is the transfer function between $w = ref_{ext}$ and $s = y_6 - ref_{ext}$. Our objective is to determine the identification experiment leading to models whose uncertainty $U$ is small enough to guarantee (35) with the threshold $W$ represented in Figure 3. This threshold requires a global bandwidth that is higher than the one achieved with the controller $K$ present in the network. We suppose that, based on the identified models, the method of [START_REF] Korniienko | Performance control for interconnection of identical systems: Application to pll network design[END_REF] will be used to design a unique controller $\hat K$ that satisfies the global performance defined by $W$ (but also a certain level of local performance).
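For the reader who wishes to reproduce the module dynamics, a minimal simulation sketch is given below (our own illustration, not part of the original experiment: the excitation `u` here is plain white noise, whereas the actual experiment applies signals $r_i$ realizing the designed spectra).

```python
import numpy as np
from scipy.signal import lfilter

# Benchmark ARX module of the numerical illustration (flexible transmission).
A0 = [1.0, -1.99185, 2.20265, -1.84083, 0.89413]
B0 = [0.0, 0.0, 0.0, 0.10276, 0.18123]  # z^{-3} B0(z): three-sample input delay

rng = np.random.default_rng(seed=0)
N = 2000                                # experiment length used in the paper
u = rng.standard_normal(N)              # stand-in input (white noise)
e = rng.standard_normal(N)              # white noise e_i with variance Lambda_ii = 1
y = lfilter(B0, A0, u) + lfilter([1.0], A0, e)  # y_i = z^-3 B0/A0 u + 1/A0 e
```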
Using the methodology presented in Section 4, we design the spectra $\Phi_{r_i}$ ($M = 10$) that have to be applied to each module for the identification. For this design, we fix the experiment length to $N = 2000$ and we need initial guesses of the true systems, the identified models and $\hat K$. We have used for this purpose an ARX system having the same structure as the true systems, but described by $B_{init}(z) = 0.1192 + 0.1651z^{-1}$ and $A_{init}(z)$. With this initial guess $S_{init}$ for the system, we have designed a controller $K_{init}$ using the same global method that will be used with the identified models. This controller $K_{init}$ has been used as initial guess for the controller $\hat K$. The worst-case performance $P_{wc,ub}(\omega, T)$ corresponding to these designed spectra and these initial guesses is given in Figure 3 (black dashed) and we see that the performance objective is satisfied. The optimal cost $J_{opt}$ is equal to 108. This result has been obtained by considering the centers $c_i(\omega)$ as decision variables. This is important since the optimal cost would be equal to 133 if we forced these centers to be zero for all $i$ and for all frequencies. We have also verified that the conservatism linked to the chosen hierarchical robustness analysis approach remains limited. For this purpose, we have computed a lower bound of the exact worst-case performance $\sup_{\theta\in U}|P(e^{j\omega},\theta)|$ by randomly generating parameter vectors $\theta$ on the contour of the obtained uncertainty ellipsoid $U$ and by computing, for each $\omega$, the largest value of $|P(e^{j\omega},\theta)|$ for those random values of $\theta$. When generating 10000 random $\theta$, the relative error between this lower bound and $P_{wc,ub}(\omega, T)$ is, for each $\omega$, smaller than 20% (the maximal relative error is attained at $\omega \approx 1.5$) and has a mean of 3% over the frequencies.
In order to further verify the validity of our results, we realize the optimal spectra and apply the corresponding excitation signals $r_i$ ($i = 1...6$) of length $N = 2000$ to the network. The models $G(z, \hat\theta_i)$ are identified using the procedure of Section 2 and a controller $\hat K$ is designed with the average of these six models using the chosen global control design method. Finally, using the covariance matrices $P_{\theta_i}$ determined along with the models, we use the procedure of Section 3 to determine the worst-case global performance. This worst-case global performance $P_{wc,ub}(\omega, T)$ is given in Figure 3 (red solid). We observe that, even if the optimal spectra have been designed with initial guesses for the identified parameter vectors and for $\hat K$, as well as with the asymptotic formula [START_REF] Everitt | On the variance analysis of identified linear MIMO models[END_REF] for the covariance matrices $P_{\theta_i}$, the worst-case performance actually obtained after the identification experiment satisfies (35). Consequently, our methodology leads to models that are sufficiently accurate to guarantee a certain level of global performance in this example.
Let us analyze how the excitation power and the uncertainty are distributed among the nodes. For this purpose, we give in Table 1 the power injected at each node (i.e. $\sigma_{i,0}$) and a normalized image of the volume of $U_i$, i.e. $V_i = \det(10^5 P_{\theta_i})$.
Table 1. Distribution of power and of uncertainty among the nodes.
Since $V_i$ is only one of the possible measures to evaluate the size of the uncertainty, we also represent, in Figure 4, the radius $\rho_i(\omega)$ (representing the size of $T_i$ and computed using Theorem 2) for Nodes $i = 1$ (black dotted), $i = 5$ (black circles) and $i = 6$ (black solid). We observe that both the excitation powers and the sizes of the obtained uncertainties are different at different nodes. As an example, more uncertainty is allowed in Node 1 than in Node 6. Interestingly, the smaller uncertainty in Node 6 is obtained with less power than the larger uncertainty in Node 1: $\sigma_{6,0} < \sigma_{1,0}$. This is a consequence of the network configuration. Indeed, the uncertainty of Node 6 is not only a function of $\Phi_{r,6}$, but also of all the other spectra applied during the identification experiment (see [START_REF] Hägg | On optimal input design for networked systems[END_REF]). To illustrate this, we have also represented, in red dashed in Figure 4, the radius $\rho_6(\omega)$ that would have been obtained if Node 6 were isolated as Node 1 is (and thus if $P_{\theta_6}$ were determined using $\Phi_{r,6}$ alone). We observe that the obtained uncertainty would then have been much larger.
"169748",
"1228",
"1271"
] | [
"408749",
"408749",
"489656",
"408749"
] |
01492052 | en | [
"math"
] | 2024/03/04 23:41:50 | 2016 | https://hal.science/hal-01492052/file/BE_boundeddomains_MaxwellBC.pdf | Marc Briant
Yan Guo
ASYMPTOTIC STABILITY OF THE BOLTZMANN EQUATION WITH MAXWELL BOUNDARY CONDITIONS
Keywords: Boltzmann equation, Perturbative theory, Maxwell boundary conditions, Specular reflection boundary conditions, Maxwellian diffusion boundary conditions
Introduction
The Boltzmann equation rules the dynamics of rarefied gas particles moving in a domain $\Omega$ of $\mathbb{R}^3$ with velocities in $\mathbb{R}^3$ when the sole interactions taken into account are elastic binary collisions. More precisely, the Boltzmann equation describes the time evolution of $F(t,x,v)$, the distribution of particles in position and velocity, starting from an initial distribution $F_0(x,v)$. It reads
\[
(1.1)\qquad \forall t\geq 0,\ \forall(x,v)\in\Omega\times\mathbb{R}^3,\quad \partial_t F + v\cdot\nabla_x F = Q(F,F),
\]
\[
\forall(x,v)\in\Omega\times\mathbb{R}^3,\quad F(0,x,v) = F_0(x,v).
\]
To which one has to add boundary conditions on $F$. Throughout this work we consider $C^1$ bounded domains, which allows us to decompose the phase space boundary $\Lambda = \partial\Omega\times\mathbb{R}^3$ into three sets
\[
\Lambda_+ = \left\{(x,v)\in\partial\Omega\times\mathbb{R}^3,\ n(x)\cdot v > 0\right\},\quad
\Lambda_- = \left\{(x,v)\in\partial\Omega\times\mathbb{R}^3,\ n(x)\cdot v < 0\right\},\quad
\Lambda_0 = \left\{(x,v)\in\partial\Omega\times\mathbb{R}^3,\ n(x)\cdot v = 0\right\},
\]
where $n(x)$ is the outward normal at a point $x$ on $\partial\Omega$. The set $\Lambda_+$ is the outgoing set, $\Lambda_-$ is the ingoing set and $\Lambda_0$ is called the grazing set.
The authors would like to acknowledge the Division of Applied Mathematics at Brown University, where this work was achieved.
In the present work, we consider the physically relevant case where the gas interacts with the boundary $\partial\Omega$ via two phenomena. Part of the particles touching the wall elastically bounce against it like billiard balls (specular reflection boundary condition), whereas the other part are absorbed by the wall and then emitted back into the domain according to the thermodynamical equilibrium between the wall and the gas (Maxwellian diffusion boundary condition). This very general type of interactions will be referred to as the Maxwell boundary condition, and it mathematically translates into
\[
(1.2)\qquad \exists\alpha\in(0,1],\ \forall t>0,\ \forall(x,v)\in\Lambda_-,\quad
F(t,x,v) = (1-\alpha)F(t,x,R_x(v)) + \alpha P_\Lambda(F(t,x,\cdot))(v),
\]
where $R_x(v) = v - 2(v\cdot n(x))\,n(x)$ denotes the specular reflection at $x$,
and where the Maxwellian diffusion is given by
\[
(1.3)\qquad P_\Lambda(F(t,x,\cdot))(v) = c_\mu\,\mu(v)\int_{v_*\cdot n(x)>0} F(t,x,v_*)\,(v_*\cdot n(x))\,dv_*,
\qquad\text{with}\quad \mu(v) = \frac{1}{(2\pi)^{3/2}}\,e^{-\frac{|v|^2}{2}}.
\]
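For completeness (this normalization is standard but not spelled out above), $c_\mu$ is chosen so that $P_\Lambda$ preserves the mass flux at the wall, i.e. $c_\mu\int_{v\cdot n(x)>0}\mu(v)(v\cdot n(x))\,dv = 1$. A direct Gaussian computation gives
\[
\int_{v\cdot n(x)>0}\mu(v)\,(v\cdot n(x))\,dv
= \frac{1}{(2\pi)^{3/2}}\left(\int_0^{+\infty} u\,e^{-u^2/2}\,du\right)\left(\int_{\mathbb{R}^2} e^{-|w|^2/2}\,dw\right)
= \frac{1}{\sqrt{2\pi}},
\qquad\text{so that}\quad c_\mu = \sqrt{2\pi}.
\]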
The standard conservation properties of $Q$ ([START_REF] Cercignani | The mathematical theory of dilute gases[END_REF][9] [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] among others) imply that if $F$ is a solution to the Boltzmann equation then
\[
(1.4)\qquad \forall t\geq 0,\quad \int_{\Omega\times\mathbb{R}^3} F(t,x,v)\,dxdv = \int_{\Omega\times\mathbb{R}^3} F_0(x,v)\,dxdv,
\]
which physically means that the mass is preserved along time.
In the present paper we are interested in the well-posedness of the Boltzmann equation (1.1) for fluctuations around the global equilibrium
\[
\mu(v) = \frac{1}{(2\pi)^{3/2}}\,e^{-\frac{|v|^2}{2}}.
\]
More precisely, in the perturbative regime $F = \mu + f$ we construct a Cauchy theory in $L^\infty_{x,v}$ spaces endowed with stretched exponential or polynomial weights, and study the continuity and the positivity of such solutions.
Under the perturbative regime, the Cauchy problem amounts to solving the perturbed Boltzmann equation
\[
(1.5)\qquad \partial_t f + v\cdot\nabla_x f = Lf + Q(f,f),
\]
with $L$ being the linear Boltzmann operator $Lf = 2Q(\mu,f)$, where we consider $Q$ as a symmetric bilinear operator
\[
(1.6)\qquad Q(f,g) = \frac{1}{2}\int_{\mathbb{R}^3\times\mathbb{S}^2} B(|v-v_*|,\cos\theta)\left[f'g'_* + g'f'_* - fg_* - gf_*\right]dv_*d\sigma.
\]
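Here we use the standard shorthands $f' = f(v')$, $f'_* = f(v'_*)$, $f_* = f(v_*)$, with the usual $\sigma$-representation of the post-collisional velocities (implicit in (1.6)):
\[
v' = \frac{v+v_*}{2} + \frac{|v-v_*|}{2}\,\sigma,\qquad
v'_* = \frac{v+v_*}{2} - \frac{|v-v_*|}{2}\,\sigma,\qquad
\cos\theta = \sigma\cdot\frac{v-v_*}{|v-v_*|}.
\]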
Note that f also satisfies the Maxwell boundary condition (1.2) since µ does.
1.1. Notations and assumptions. We describe the assumptions and notations we shall use throughout the article.
Function spaces. Define, for any $k\geq 0$, the weight function
\[
\langle\cdot\rangle_k = 1 + |\cdot|^k.
\]
The convention we choose is to index the space by the name of the concerned variable, so we have, for $p$ in $[1,+\infty]$,
\[
L^p_{[0,T]} = L^p([0,T]),\quad L^p_t = L^p(\mathbb{R}_+),\quad L^p_x = L^p(\Omega),\quad L^p_v = L^p(\mathbb{R}^3).
\]
For $m : \mathbb{R}^3\to\mathbb{R}_+$ a positive measurable function we define the following weighted Lebesgue spaces by the norms
\[
\|f\|_{L^\infty_{x,v}(m)} = \sup_{(x,v)\in\Omega\times\mathbb{R}^3}\left[|f(x,v)|\,m(v)\right],\qquad
\|f\|_{L^1_vL^\infty_x(m)} = \int_{\mathbb{R}^3}\sup_{x\in\Omega}|f(x,v)|\,m(v)\,dv,
\]
and in general, with $p, q$ in $[1,\infty)$:
\[
\|f\|_{L^p_vL^q_x(m)} = \left\|\,\|f\|_{L^q_x}\,m(v)\right\|_{L^p_v}.
\]
We define the Lebesgue spaces on the boundary:
\[
\|f\|_{L^\infty_\Lambda(m)} = \sup_{(x,v)\in\Lambda}\left[|f(x,v)|\,m(v)\right],\qquad
\|f\|_{L^1L^\infty_\Lambda(m)} = \int_{\mathbb{R}^3}\sup_{x:\,(x,v)\in\Lambda}|f(x,v)\,v\cdot n(x)|\,m(v)\,dv,
\]
with obvious equivalent definitions for $\Lambda_\pm$ or $\Lambda_0$. However, when we do not consider the $L^\infty$ setting in the spatial variable we define
\[
\|f\|_{L^2_\Lambda(m)} = \left(\int_\Lambda f(x,v)^2\,m(v)^2\,|v\cdot n(x)|\,dS(x)dv\right)^{1/2},
\]
where $dS(x)$ is the Lebesgue measure on $\partial\Omega$. We emphasize here that when the underlying space in the velocity variable is $L^p$ with $p\neq\infty$, the measure we consider is $|v\cdot n(x)|\,dS(x)$, as it is the natural one when one thinks about Green's formula.
Assumptions on the collision kernel. We assume that the collision kernel $B$ can be written as
\[
(1.7)\qquad B(v,v_*,\theta) = \Phi(|v-v_*|)\,b(\cos\theta),
\]
which covers a wide range of physical situations (see for instance [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF]Chapter 1]). Moreover, we will only consider kernels with hard potentials, that is
\[
(1.8)\qquad \Phi(z) = C_\Phi z^\gamma,\qquad \gamma\in[0,1],
\]
where $C_\Phi > 0$ is a given constant. Of special note is the case $\gamma = 0$, which is usually referred to as Maxwellian potentials. We will assume that the angular kernel $b\circ\cos$ is positive and continuous on $(0,\pi)$, and that it satisfies a strong form of Grad's angular cut-off:
\[
(1.9)\qquad b_\infty = \|b\|_{L^\infty_{[-1,1]}} < \infty.
\]
The latter property implies the usual Grad's cut-off [START_REF] Grad | Principles of the kinetic theory of gases[END_REF], namely the integrability of the angular kernel over the sphere: $\int_{\mathbb{S}^2} b(\cos\theta)\,d\sigma < +\infty$. Such requirements are satisfied by many physically relevant cases. The hard spheres case ($b = \gamma = 1$) is a prime example.
1.2. Comparison with previous studies. Few results have been obtained about the perturbative theory for the Boltzmann equation with other boundary condition than the periodicity of the torus. On the torus we can mention [34][18][20] [START_REF] Mouhot | Quantitative perturbative study of convergence to equilibrium for collisional kinetic models in the torus[END_REF][5] [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF] for collision kernels with hard potentials with cutoff, [START_REF] Gressman | Global classical solutions of the Boltzmann equation without angular cut-off[END_REF] without the assumption of angular cutoff or [START_REF] Guo | Classical solutions to the Boltzmann equation for molecules with an angular cutoff[END_REF][25] for soft potentials. A good review of the methods and techniques used can be found in the exhaustive [START_REF] Ukai | Mathematical theory of the Boltzmann equation[END_REF].
The study of the well-posedness of the Boltzmann equation, as well as of the trend to equilibrium, when the spatial domain is bounded with non-periodic boundary conditions is scarce and only focuses on hard potential kernels with angular cutoff. In [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF], exponential convergence to equilibrium in $L^\infty_{x,v}$ with the important weight $\langle v\rangle_\beta\,\mu(v)^{-1/2}$ was established. The boundary conditions considered in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] are pure specular reflections, with $\Omega$ being strictly convex and analytic, and pure Maxwellian diffusion, with $\Omega$ being smooth and convex. Note that the arguments used in the latter work relied on a non-constructive $L^2_{x,v}$ theory. More recently, the case of pure Maxwellian boundary conditions has been resolved by [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] in $L^\infty_{x,v}(\langle v\rangle_\beta\,\mu(v)^{-1/2})$, with $\Omega$ smooth but not necessarily convex and, more importantly, with constructive arguments. They also deal with non-global Maxwellian diffusion and gave an explicit domain of continuity for the solutions. We also mention [START_REF] Kim | The Boltzmann equation near a rotational local Maxwellian[END_REF] for a perturbative study around a non-local and rotating Maxwellian. At last, a very recent work by the first author [START_REF] Briant | Perturbative theory for the Boltzmann equation in bounded domains with different boundary conditions[END_REF] extended the domain of validity of the previous studies to $L^\infty_{x,v}(m)$ where $m$ is a less restrictive weight: a stretched exponential or a polynomial; both for specular reflections and Maxwellian diffusion. His methods are constructive from the results described above (but therefore still rely on the contradiction argument in $L^2_{x,v}$ and the analyticity of $\Omega$ for specular reflections). We also mention some works in the framework of renormalized solutions in bounded domains. The existence of such solutions has been obtained in different settings [START_REF] Mischler | On the initial boundary value problem for the Vlasov-Poisson-Boltzmann system[END_REF] [START_REF] Mischler | Kinetic equations with Maxwell boundary conditions[END_REF] with Maxwell boundary conditions. The issue of asymptotic convergence for such solutions was investigated in [START_REF] Desvillettes | On the trend to global equilibrium for spatially inhomogeneous kinetic systems: the Boltzmann equation[END_REF], where a trend to equilibrium faster than any polynomial was proved on condition that the solutions have high Sobolev regularity.
The present work establishes the perturbative Cauchy theory for the Maxwell boundary condition and exponential trend to equilibrium in $L^\infty_{x,v}$ with a stretched exponential or polynomial weight. There are four main contributions in this work. First, we allow mere polynomial weights for the perturbation, which is a significant improvement over the work [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF]. Then we deal with more general, and more physically relevant, boundary conditions and we recover the existing results in the case of pure Maxwellian diffusion. Third, delicate uses of the diffusive part, since $\alpha > 0$, give constructive proofs; these are, to our knowledge, the first entirely constructive arguments when dealing with specular reflections. Finally, we propose a new method to establish an $L^2$-$L^\infty$ theory that simplifies both technically and conceptually the existing $L^2$-$L^\infty$ theory [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF]. We indeed estimate the action of the operator $K$ in between two consecutive rebounds against the wall and work with a different weight than all the previous studies, namely $\mu^{-\zeta}$ with $\zeta$ close to 1, for which we prove that $K$ almost acts like $3\nu(v)$. Also, with such an estimate we get rid of the strict convexity and analyticity of $\Omega$ that were always required when dealing with specular reflections. We only need $\Omega$ to be a $C^1$ bounded domain, but as a drawback we require $\alpha > 2/3$ (this explicit threshold being obtained thanks to the precise control over $K$).
We conclude by mentioning that our results also give an explicit set of continuity of the aforementioned solutions. This was known only in the case of pure Maxwellian diffusion, in-flow and bounce-back boundary conditions [START_REF] Kim | Formation and propagation of discontinuity for Boltzmann equation in non-convex domains[END_REF]. In the case of Ω convex we recover the fact that the solutions are continuous away from the grazing set Λ 0 [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF]. Concerning the regularity of solutions to the Boltzmann equation with boundary conditions we also refer to [START_REF] Guo | Regularity of the Boltzmann Equation in Convex Domains[END_REF] [START_REF] Guo | Regularity of the Boltzmann Equation in Non-Convex Domains[END_REF]. 1.3. Organisation of the article. Section 2 is dedicated to the statement and the description of the main results proved in this paper. We also describe our strategy, which mainly consists in four steps that make the skeleton of the present article.
Section 3 is dedicated to the a priori exponential decay of the solutions to the linear part of the perturbed equation in the L 2 setting.
In Section 4 we start by giving a brief mathematical description of the specular characteristics. We then study the semigroup generated by the transport part and the collision frequency kernel G ν = -v • ∇ x -ν along with the Maxwell boundary condition.
We develop an $L^2$-$L^\infty$ theory in Section 5 and we prove that $G = -v\cdot\nabla_x + L$ generates a $C^0$-semigroup in $L^\infty_{x,v}(\langle v\rangle_\beta\,\mu^{-1/2})$ that decays exponentially.
We prove the existence and uniqueness of solutions for the full Boltzmann equation (1.1) in the perturbative regime F = µ + f in Section 6.
At last, Section 7 deals with the positivity and the continuity of the solutions to the full Boltzmann equation that we constructed.
Main results
The aim of the present work is to prove the following perturbative Cauchy theory for the full Boltzmann equation with Maxwell boundary condition.
Theorem 2.1. Let $\Omega$ be a $C^1$ bounded domain and let $\alpha$ in $(2/3,1]$. Define
\[
k_\infty = 1 + \gamma + 16\pi b_\infty l_b.
\]
Let $m = e^{\kappa_1|v|^{\kappa_2}}$ with $\kappa_1 > 0$ and $\kappa_2$ in $(0,2)$, or $m = \langle v\rangle_k$ with $k > k_\infty$. There exists $\eta > 0$ such that for any $F_0 = \mu + f_0$ in $L^\infty_{x,v}(m)$ satisfying the conservation of mass (1.4) with
\[
\|F_0 - \mu\|_{L^\infty_{x,v}(m)} \leq \eta,
\]
there exists a unique solution $F(t,x,v) = \mu(v) + f(t,x,v)$ in $L^\infty_{t,x,v}(m)$ to the Boltzmann equation (1.1) with Maxwell boundary condition (1.2) and with $f_0$ as an initial datum. Moreover,
• $F$ preserves the mass (1.4);
• there exist $C, \lambda > 0$ such that
\[
\forall t\geq 0,\qquad \|F(t) - \mu\|_{L^\infty_{x,v}(m)} \leq C e^{-\lambda t}\|f_0\|_{L^\infty_{x,v}(m)};
\]
• if $F_0 \geq 0$ then $F(t) \geq 0$ for all $t$.
Remark 2.2. We make a few comments about the above theorem.
(1) Notice that we recover the case of pure diffusion [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF][12] since α = 1 is allowed.
(2) It is important to emphasize that the uniqueness holds in the perturbative sense, that is, in the set of functions of the form $F = \mu + f$ with $f$ small. The uniqueness for the Boltzmann equation in $L^\infty_{t,x,v}(m)$ with Maxwell boundary condition in the general setting would be a very interesting problem to look at.
(3) Recent results [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF][6] established a quantitative lower bound for the solutions in the case of pure specular reflections and pure diffusion respectively. We think that their methods could be directly applicable to the Maxwell boundary problem and the solutions described in the theorem above should have an exponential lower bound, at least when Ω is convex. However, we only give here a qualitative statement about the positivity.
Remark 2.3 (Remarks about improvement over α).
As we shall mention in the next sections, we can construct an explicit $L^2_{x,v}$ linear theory if $\alpha > 0$, whereas we strongly need $\alpha > 2/3$ to develop an $L^\infty_{x,v}$ linear theory from the $L^2$ one. However, the $L^1_vL^\infty_x$ nonlinear theory only relies on the $L^\infty_{x,v}$ linear one. Lowering our expectations on $\Omega$ would allow us to increase the range for $\alpha$.
• $\Omega$ smooth and convex: $\alpha > 0$ and constructive. The very recent result [START_REF] Kim | The Boltzmann equation with specular boundary condition in convex domains[END_REF] managed to obtain an $L^\infty_{x,v}$ theory for sole specular reflections by iterating Duhamel's form three times (see later). Thus, a convex combination of their methods and ours allows one to derive an $L^2$-$L^\infty$ theory for any $\alpha$ in $[0,1]$, and it would be entirely constructive thanks to our explicit $L^2$ linear theory.
• Unfortunately, a completely constructive $L^2_{x,v}$ theory for $\alpha = 0$ is still missing at the moment.
In order to state our result about the continuity of the solutions constructed in Theorem 2.1 we need a more subtle description of ∂Ω. As noticed by Kim [START_REF] Kim | Formation and propagation of discontinuity for Boltzmann equation in non-convex domains[END_REF], some specific points on Λ 0 can offer continuity.
We define the inward inflection grazing boundary
\[
\Lambda_0^{(I-)} = \Lambda_0 \cap \left\{(x,v) :\ t_{min}(x,v) = 0,\ t_{min}(x,-v) = 0 \ \text{and}\ \exists\delta>0,\ \forall\tau\in[0,\delta],\ x-\tau v\in\Omega^c\right\},
\]
where $t_{min}(x,v)$ is the time of the first rebound against the boundary of a particle starting at $x$ with velocity $-v$ (see Subsection 4.1 for a rigorous definition). That leads to the boundary continuity set $\mathcal{C}^-_\Lambda = \Lambda_-\cup\Lambda_0^{(I-)}$. As we shall see later, the continuity set $\mathcal{C}^-_\Lambda$ describes the set of boundary points in the phase space that lead to continuous specular reflections.
The key idea is to understand that the continuity of the specular reflection at each bounce against the wall will lead to the continuity of the solution. We thus define the continuity set
\[
\mathcal{C} = \left(\{0\}\times\left(\Omega\times\mathbb{R}^3\cup\Lambda_+\cup\mathcal{C}^-_\Lambda\right)\right)
\cup\left((0,+\infty)\times\mathcal{C}^-_\Lambda\right)
\cup\left\{(t,x,v)\in(0,+\infty)\times\left(\Omega\times\mathbb{R}^3\cup\Lambda_+\right) :\ \forall\, 1\leq k\leq N(t,x,v),\ \left(X_{k+1}(x,v), V_k(x,v)\right)\in\mathcal{C}^-_\Lambda\right\}.
\]
The sequence $(T_k(x,v), X_k(x,v), V_k(x,v))_{k\in\mathbb{N}}$ is the sequence of footprints of the backward characteristic trajectory starting at $(x,v)$ and undergoing pure specular reflections; $N(t,x,v)$ is almost always finite and satisfies $T_{N(t,x,v)} \leq t < T_{N(t,x,v)+1}(x,v)$. We refer to Subsection 4.1 for more details.
Theorem 2.4. Let $F(t,x,v) = \mu + f(t,x,v)$ be the solution associated to $F_0 = \mu + f_0$ described in Theorem 2.1. Suppose that $F_0 = \mu + f_0$ is continuous on $\left(\Omega\times\mathbb{R}^3\right)\cup\Lambda_+\cup\mathcal{C}^-_\Lambda$ and satisfies the Maxwell boundary condition (1.2). Then $F = \mu + f$ is continuous on the continuity set $\mathcal{C}$.
Remark 2.5. We emphasize here again that the above theorem holds only in the perturbative regime. We also point out the following properties of the continuity set.
(1) From [7, Proposition A.4] we know that the set of points (x, v) in Ω × R 3 that lead to problematic backward characteristics is of Lebesgue measure zero (see later for more details). We infer that C is non-empty and when we only consider t in [0, T ] for a given T > 0, its complementary set is of measure zero.
(2) In the case of a convex domain $\Omega$, we recover the previous results [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] for both pure specular reflections and pure diffusion:
\[
\mathcal{C} = \left(\mathbb{R}_+\times\overline\Omega\times\mathbb{R}^3\right)\setminus\left(\mathbb{R}_+\times\Lambda_0\right).
\]
2.1. Description of the strategy. Our strategy can be decomposed into four main steps and we now describe each of them briefly.
Step 1: A priori exponential decay in $L^2_{x,v}(\mu^{-1/2})$ for the full linear operator. The first step is to prove that the existence of a spectral gap for $L$ in the sole velocity variable can be transposed to $L^2_{x,v}(\mu^{-1/2})$ when one adds the skew-symmetric transport operator $-v\cdot\nabla_x$. In other words, we prove that solutions to
\[
\partial_t f = Gf = Lf - v\cdot\nabla_x f
\]
in $L^2_{x,v}(\mu^{-1/2})$ decay exponentially fast. Basically, the spectral gap $\lambda_L$ of $L$ implies that, for such a solution,
\[
\frac{d}{dt}\|f\|^2_{L^2_{x,v}(\mu^{-1/2})} \leq -2\lambda_L\left\|f - \pi_L(f)\right\|^2_{L^2_{x,v}(\mu^{-1/2})},
\]
where $\pi_L$ is the orthogonal projection in $L^2_v(\mu^{-1/2})$ onto the kernel of the operator $L$. This inequality exhibits the hypocoercivity of $L$. Therefore, one would like the microscopic part $\pi^\perp_L(f) = f - \pi_L(f)$ to control the fluid part, which has the following form:
\[
\pi_L(f)(t,x,v) = \left(a(t,x) + b(t,x)\cdot v + c(t,x)|v|^2\right)\mu(v).
\]
It is known [START_REF] Guo | The Vlasov-Poisson-Boltzmann system near Maxwellians[END_REF][20] that the fluid part has some elliptic regularity; roughly speaking one has
\[
(2.2)\qquad \Delta\pi_L(f) \sim \partial^2\pi^\perp_L f + \text{higher order terms},
\]
which can be used in Sobolev spaces $H^s$ to recover some coercivity. We follow the idea of [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] for Maxwellian diffusion and construct a weak version of the elliptic regularity of $a(t,x)$, $b(t,x)$ and $c(t,x)$ by multiplying these coordinates by test functions. Basically, the elliptic regularity of $\pi_L(f)$ will be recovered thanks to the transport part applied to these test functions while, on the other side, $L$ will encode the control by $\pi^\perp_L(f)$. The test functions we build work with specular reflections, but the estimate for $b$ requires the integrability of the function on the boundary. Such a property holds for Maxwellian diffusion and this is why we cannot deal with the specific case $\alpha = 0$.
Step 2: Semigroup generated by the collision frequency kernel. The collision frequency operator $G_\nu = -\nu(v) - v\cdot\nabla_x$, together with the Maxwell boundary condition, is proved to generate a strongly continuous semigroup with exponential decay in $L^\infty_{x,v}(m)$ with very general weights $m(v)$. The boundary operator associated with the Maxwell condition is of norm exactly one, and therefore the standard theory of transport equations in bounded domains [START_REF] Beals | Abstract time-dependent transport equations[END_REF] fails. The core idea is to obtain an implicit description of the solutions to $\partial_t f = G_\nu f$ along the characteristic trajectories and to prove that the number of trajectories that do not reach the initial plane $\{t = 0\}$ after a large number of rebounds is very small. Such a method has been developed in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] and extended in [START_REF] Briant | Perturbative theory for the Boltzmann equation in bounded domains with different boundary conditions[END_REF]; we adapt it to the case of Maxwell characteristics.
Step 3: $L^\infty_{x,v}(\mu^{-\zeta})$ theory for the full nonlinear equation. The underlying $L^2_{x,v}$-norm is not an algebraic norm for the nonlinear operator $Q$, whereas the $L^\infty_{x,v}$-norm is (see [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF][10] or [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF] for instance). We therefore follow an $L^2$-$L^\infty$ theory [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] to pass the previous semigroup property from $L^2$ to $L^\infty$ via a change of variable along the flow of characteristics.
Basically, $L$ can be written as $L = -\nu(v) + K$ with $K$ a kernel operator. If we denote by $S_G(t)$ the semigroup generated by $G = L - v\cdot\nabla_x$, we have the following implicit Duhamel formula along the characteristics:
\[
S_G(t) = e^{-\nu(v)t} + \int_0^t e^{-\nu(v)(t-s)}K\left[S_G(s)\right]ds.
\]
The standard methods [START_REF] Vidav | Spectra of perturbed semigroups with applications to transport theory[END_REF][21] [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] used an iterated version of this Duhamel formula to recover some compactness property, thus allowing the solution to be bounded in $L^\infty$ by its $L^2$ norm. To do so they need to study the solution $f(t,x,v)$ along all the possible characteristic trajectories $(X_t(x,v), V_t(x,v))$. We propose here a less technical strategy by estimating the action of $K$ in between two consecutive collisions against $\partial\Omega$ thanks to trace theorems. The core contribution, which also gives the threshold $\alpha > 2/3$, is to work in $L^\infty_{x,v}(\mu^{-\zeta})$ as $\zeta$ goes to 1, where $K$ is proven to act roughly like $3\nu(v)$.
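For concreteness, the first iterate of the Duhamel formula above reads (a purely formal substitution of the formula into itself; in the bounded-domain setting the compositions are understood along the characteristic trajectories):
\[
S_G(t) = e^{-\nu(v)t} + \int_0^t e^{-\nu(v)(t-s)}K\left[e^{-\nu(\cdot)s}\right]ds
+ \int_0^t\!\!\int_0^s e^{-\nu(v)(t-s)}K\left[e^{-\nu(\cdot)(s-s')}K\left[S_G(s')\right]\right]ds'ds.
\]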
Step 4: Extension to polynomial weights. To conclude the present study, we develop an analytic and nonlinear version of the recent work [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF], also recently adapted in a nonlinear setting [START_REF] Briant | Perturbative theory for the Boltzmann equation in bounded domains with different boundary conditions[END_REF]. The main strategy is to find a decomposition of the full linear operator $G$ into $G_1 + A$. We shall prove that $G_1$ acts like a small perturbation of the operator $G_\nu = -v\cdot\nabla_x - \nu(v)$ and is thus hypodissipative, and that $A$ has a regularizing effect. The regularizing property of the operator $A$ allows us to decompose the perturbative equation (1.5) into a system of differential equations
\[
(2.3)\qquad \partial_t f_1 + v\cdot\nabla_x f_1 = G_1(f_1) + Q(f_1+f_2, f_1+f_2),
\]
\[
(2.4)\qquad \partial_t f_2 + v\cdot\nabla_x f_2 = L(f_2) + A(f_1).
\]
The first equation is solved in $L^\infty_{x,v}(m)$ with the initial datum $f_0$ thanks to the hypodissipativity of $G_1$. The regularity of $A(f_1)$ allows us to use Step 3 and thus solve the second equation with null initial datum in $L^\infty_{x,v}(\mu^{-\zeta})$.
3. $L^2(\mu^{-1/2})$ theory for the linear part of the perturbed Boltzmann equation
This section is devoted to the study of the linear perturbed equation
\[
\partial_t f + v\cdot\nabla_x f = L(f),
\]
with the Maxwell boundary condition (1.2), in the $L^2$ setting. Note that we only need $\alpha$ in $(0,1]$ in this section. As we shall see in Subsection 3.1, the space $L^2_v(\mu^{-1/2})$ is natural for the operator $L$. In order to avoid carrying the Maxwellian weight throughout the computations we look at the function $h(t,x,v) = f(t,x,v)\mu(v)^{-1/2}$. We thus study in this section the following equation in $L^2_{x,v}$:
\[
(3.1)\qquad \partial_t h + v\cdot\nabla_x h = L_\mu(h),
\]
with the associated boundary conditions
\[
(3.2)\qquad \forall t>0,\ \forall(x,v)\in\Lambda_-,\quad h(t,x,v) = (1-\alpha)h(t,x,R_x(v)) + \alpha P_{\Lambda\mu}(h)(t,x,v),
\]
where we defined
\[
L_\mu(h) = \frac{1}{\sqrt\mu}\,L\left(\sqrt\mu\,h\right)
\]
and where $P_{\Lambda\mu}$ can be viewed as an $L^2_v$-projection with respect to the measure $|v\cdot n(x)|$:
\[
(3.3)\qquad \forall(x,v)\in\Lambda_-,\quad P_{\Lambda\mu}(h) = c_\mu\sqrt{\mu(v)}\int_{v_*\cdot n(x)>0} h(t,x,v_*)\sqrt{\mu(v_*)}\,(v_*\cdot n(x))\,dv_*.
\]
We also use the shorthand notation $P^\perp_{\Lambda\mu} = Id - P_{\Lambda\mu}$.
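As a sanity check (using the value $c_\mu = \sqrt{2\pi}$ computed after (1.3)), $P_{\Lambda\mu}$ is indeed idempotent: writing $P_{\Lambda\mu}(h)(t,x,v) = z(t,x)\sqrt{\mu(v)}$,
\[
P_{\Lambda\mu}\big(P_{\Lambda\mu}(h)\big)
= c_\mu\sqrt{\mu(v)}\,z(t,x)\int_{v_*\cdot n(x)>0}\mu(v_*)\,(v_*\cdot n(x))\,dv_*
= c_\mu\sqrt{\mu(v)}\,\frac{z(t,x)}{\sqrt{2\pi}}
= P_{\Lambda\mu}(h).
\]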
For general domains $\Omega$, the Cauchy theory in $L^p_{x,v}$ ($1\leq p<+\infty$) of equations of the type
\[
\partial_t f + v\cdot\nabla_x f = g\qquad\text{with boundary conditions}\qquad \forall(x,v)\in\Lambda_-,\ f(t,x,v) = P(f)(t,x,v),
\]
where $P : L^p_{\Lambda_+}\to L^p_{\Lambda_-}$ is a bounded linear operator, is well-defined in $L^p_{x,v}$ when $\|P\| < 1$ [START_REF] Beals | Abstract time-dependent transport equations[END_REF]. The specific case $\|P\| = 1$ can still be dealt with ([2] Section 4), but even though the existence of solutions in $L^p_{x,v}$ can be proven, the uniqueness is not always granted unless one can prove that the trace of $f$ belongs to $L^2_{loc}\left(\mathbb{R}_+; L^p_{x,v}(\Lambda)\right)$. For Maxwell boundary conditions, the boundary operator $P$ is of norm exactly one and the general theory fails. The need for a trace in $L^2_{x,v}$ is essential to perform Green's identity and obtain the uniqueness of solutions. The pure Maxwellian boundary condition with mass conservation can still be dealt with, because one can show that $P^\perp_{\Lambda\mu}(h)$ is in $L^2_{\Lambda_+}$ [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF]. Unfortunately, in the case of specular reflections the uniqueness is not true in general, due to a possible blow-up of the $L^2_{loc}\left(\mathbb{R}_+; L^2_{x,v}(\Lambda)\right)$ norm at the grazing set $\Lambda_0$ [START_REF] Ukai | Solutions of the Boltzmann equation[END_REF][START_REF] Beals | Abstract time-dependent transport equations[END_REF][START_REF] Cercignani | The mathematical theory of dilute gases[END_REF].
Following ideas from [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF], a sole a priori exponential decay of solutions is enough to obtain a well-posed $L^\infty$ theory, provided that we endow the space with a strong weight. This section is thus dedicated to the proof of the following theorem.
Theorem 3.1. Let $\alpha > 0$ and let $h_0$ be in $L^2_{x,v}$ such that $h_0$ satisfies the preservation of mass
\[
\int_{\Omega\times\mathbb{R}^3} h_0(x,v)\sqrt{\mu(v)}\,dxdv = 0.
\]
Suppose that $h(t,x,v)$ in $L^2_{x,v}$ is a mass-preserving solution to the linear perturbed Boltzmann equation (3.1) with initial datum $h_0$ and satisfying the Maxwell boundary condition (3.2). Suppose also that $h|_\Lambda$ belongs to $L^2_\Lambda$. Then there exist explicit $C_G, \lambda_G > 0$, independent of $h_0$ and $h$, such that
\[
\forall t\geq 0,\qquad \|h(t)\|_{L^2_{x,v}} \leq C_G e^{-\lambda_G t}\|h_0\|_{L^2_{x,v}}.
\]
In order to prove Theorem 3.1 we first gather in Subsection 3.1 some well-known properties about the linear operator L. Subsection 3.2 proves a very important lemma which allows to use the hypocoercivity of L -v • ∇ x in the case of Maxwell boundary conditions. Finally, the exponential decay is proved in Subsection 3.3.
3.1. Preliminary properties of $L_\mu$ in $L^2_v$.
The linear Boltzmann operator. We gather some well-known properties of the linear Boltzmann operator $L_\mu$ (see [START_REF] Cercignani | The Boltzmann equation and its applications[END_REF][10] [START_REF] Villani | A review of mathematical topics in collisional kinetic theory[END_REF][17] for instance). $L_\mu$ is a closed self-adjoint operator in $L^2_v$ with kernel
\[
\mathrm{Ker}(L_\mu) = \mathrm{Span}\left\{\phi_0(v),\dots,\phi_4(v)\right\}\sqrt\mu,
\]
where $(\phi_i)_{0\leq i\leq 4}$ is an orthonormal basis of $\mathrm{Ker}(L_\mu)$ in $L^2_v$. More precisely, if we denote by $\pi_L$ the orthogonal projection onto $\mathrm{Ker}(L_\mu)$ in $L^2_v$:
\[
(3.4)\qquad \pi_L(h) = \sum_{i=0}^4\left[\int_{\mathbb{R}^3} h(v_*)\,\phi_i(v_*)\sqrt{\mu(v_*)}\,dv_*\right]\phi_i(v)\sqrt{\mu(v)},
\]
\[
\phi_0(v) = 1,\qquad \phi_i(v) = v_i\ (1\leq i\leq 3),\qquad \phi_4(v) = \frac{|v|^2-3}{\sqrt 6},
\]
and we define $\pi^\perp_L = Id - \pi_L$. The projection $\pi_L(h(x,\cdot))(v)$ of $h(x,v)$ onto the kernel of $L_\mu$ is called its fluid part, whereas $\pi^\perp_L(h)$ is its microscopic part. Also, $L_\mu$ can be written in the following form:
\[
(3.5)\qquad L_\mu = -\nu(v) + K,
\]
where $\nu(v)$ is the collision frequency
\[
\nu(v) = \int_{\mathbb{R}^3\times\mathbb{S}^2} b(\cos\theta)\,|v-v_*|^\gamma\,\mu_*\,d\sigma dv_*
\]
and $K$ is a bounded and compact operator in $L^2_v$. Finally, we remind that there exist $\nu_0, \nu_1 > 0$ such that
\[
(3.6)\qquad \forall v\in\mathbb{R}^3,\quad \nu_0\left(1+|v|^\gamma\right) \leq \nu(v) \leq \nu_1\left(1+|v|^\gamma\right),
\]
and that $L_\mu$ has a spectral gap $\lambda_L > 0$ in $L^2_v$ (see [1][31] for explicit proofs):
\[
(3.7)\qquad \forall g\in L^2_v,\quad \left\langle L_\mu(g), g\right\rangle_{L^2_v} \leq -\lambda_L\left\|\pi^\perp_L(g)\right\|^2_{L^2_v}.
\]
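As a simple illustration of (3.6), in the Maxwellian case $\gamma = 0$ the collision frequency is constant: since $\int_{\mathbb{R}^3}\mu(v_*)\,dv_* = 1$ and the angular integral does not depend on $v$ by rotation invariance,
\[
\nu(v) = \int_{\mathbb{R}^3\times\mathbb{S}^2} b(\cos\theta)\,\mu(v_*)\,d\sigma dv_* = \int_{\mathbb{S}^2} b(\cos\theta)\,d\sigma\qquad(\gamma = 0),
\]
so that (3.6) holds trivially in that case.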
The linear perturbed Boltzmann operator. The linear perturbed Boltzmann operator is the full linear part of the perturbed Boltzmann equation (1.5): $G = L - v\cdot\nabla_x$ or, in our $L^2$ setting, $G_\mu = L_\mu - v\cdot\nabla_x$. An important point is that the same computations as those showing the a priori conservation of mass imply that, in $L^2_{x,v}$, the space $\mathrm{Span}\left(\sqrt\mu\right)^\perp$ is stable under the flow $\partial_t h = G_\mu(h)$ with Maxwell boundary conditions (3.2). Coming back to our general setting $f = h\sqrt\mu$, we thus define the $L^2_{x,v}(\mu^{-1/2})$ projection onto that space
\[
(3.8)\qquad \Pi_G(f) = \left(\int_{\Omega\times\mathbb{R}^3} f(x,v_*)\,dxdv_*\right)\mu(v),
\]
and its orthogonal projection $\Pi^\perp_G = Id - \Pi_G$. Note that $\Pi_G(f) = 0$ amounts to saying that $f$ satisfies the preservation of mass.
3.2. A priori control of the fluid part by the microscopic part. As seen in the previous section, the operator $L_\mu$ is only coercive on its orthogonal part. The key argument is to show that we recover the full coercivity on the set of solutions to the differential equation; namely, that for these specific functions, the microscopic part controls the fluid part. This is the purpose of the next lemma.
Lemma 3.2. Let $h_0(x,v)$ and $g(t,x,v)$ be in $L^2_{x,v}$ such that $\Pi_G(h_0) = \Pi_G(g) = 0$, and let $h(t,x,v)$ in $L^2_{x,v}$ be a mass-preserving solution to
\[
(3.9)\qquad \partial_t h + v\cdot\nabla_x h = L_\mu(h) + g
\]
with initial datum $h_0$ and satisfying the boundary condition (3.2). Suppose that $h|_\Lambda$ belongs to $L^2_\Lambda$. Then there exist an explicit $C_\perp > 0$ and a function $N_h(t)$ such that for all $t\geq 0$:
(i) $|N_h(t)| \leq C_\perp\|h(t)\|^2_{L^2_{x,v}}$;
(ii) $\displaystyle\int_0^t\|\pi_L(h)\|^2_{L^2_{x,v}}\,ds \leq N_h(t) - N_h(0) + C_\perp\int_0^t\left(\left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}} + \left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}}\right)ds + C_\perp\int_0^t\|g\|^2_{L^2_{x,v}}\,ds$.
The constant $C_\perp$ is independent of $h$.
The methods of the proof are a technical adaptation of the methods proposed in [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF] in the case of purely diffusive boundary condition.
Proof of Lemma 3.2. We recall the definition (3.4) of $\pi_L$ and we define the functions $a(t,x)$, $b(t,x)$ and $c(t,x)$ by
\[
(3.10)\qquad \pi_L(h)(t,x,v) = \left(a(t,x) + b(t,x)\cdot v + c(t,x)\,\frac{|v|^2-3}{2}\right)\sqrt{\mu(v)}.
\]
The key idea of the proof is to choose suitable test functions $\psi$ in $H^1_{x,v}$ that will catch the elliptic regularity of $a$, $b$ and $c$ and allow us to estimate them. Note that for $a$ we strongly use the fact that $h$ preserves the mass.
For a test function $\psi = \psi(t,x,v)$ integrated against the differential equation (3.9) we have, by Green's formula,
\[
\int_0^t\frac{d}{ds}\int_{\Omega\times\mathbb{R}^3}\psi h\,dxdvds = \int_{\Omega\times\mathbb{R}^3}\psi(t)h(t)\,dxdv - \int_{\Omega\times\mathbb{R}^3}\psi_0 h_0\,dxdv
\]
\[
= \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3} h\,\partial_t\psi\,dxdvds + \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3} L_\mu[h]\,\psi\,dxdvds + \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3} h\,v\cdot\nabla_x\psi\,dxdvds - \int_0^t\!\!\int_\Lambda \psi h\,v\cdot n(x)\,dS(x)dvds + \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\psi g\,dxdvds.
\]
We decompose $h = \pi_L(h) + \pi^\perp_L(h)$ in the term involving $v\cdot\nabla_x$ and use the fact that $L_\mu[h] = L_\mu[\pi^\perp_L(h)]$ to obtain the weak formulation
\[
(3.11)\qquad -\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi\,dxdvds = \Psi_1(t)+\Psi_2(t)+\Psi_3(t)+\Psi_4(t)+\Psi_5(t)+\Psi_6(t)
\]
with the following definitions:
\[
(3.12)\qquad \Psi_1(t) = \int_{\Omega\times\mathbb{R}^3}\psi_0 h_0\,dxdv - \int_{\Omega\times\mathbb{R}^3}\psi(t)h(t)\,dxdv,
\]
\[
(3.13)\qquad \Psi_2(t) = \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi^\perp_L(h)\,v\cdot\nabla_x\psi\,dxdvds,
\]
\[
(3.14)\qquad \Psi_3(t) = \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3} L_\mu\left[\pi^\perp_L(h)\right]\psi\,dxdvds,
\]
\[
(3.15)\qquad \Psi_4(t) = -\int_0^t\!\!\int_\Lambda \psi h\,v\cdot n(x)\,dS(x)dvds,
\]
\[
(3.16)\qquad \Psi_5(t) = \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3} h\,\partial_t\psi\,dxdvds,
\]
\[
(3.17)\qquad \Psi_6(t) = \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\psi g\,dxdvds.
\]
For each of the functions $a$, $b$ and $c$, we shall construct a $\psi$ such that the left-hand side of (3.11) is exactly the $L^2_x$-norm of the function, and the rest of the proof consists in estimating the six different terms $\Psi_i(t)$. Note that $\Psi_1(t)$ is already of the desired form:
\[
(3.18)\qquad \Psi_1(t) = N_h(t) - N_h(0)\qquad\text{with}\qquad |N_h(s)| \leq C\|h\|^2_{L^2_{x,v}},
\]
provided $\psi(x,v)$ is in $L^2_{x,v}$ and its norm is controlled by the one of $h$ (which will be the case for our choices).
Remark 3.3. The linear perturbed equation (3.9), the Maxwell boundary condition (3.2), and the conservation of mass are invariant under standard time mollification. For simplicity, we therefore consider in the rest of the proof that all functions are smooth in the variable $t$. Exactly the same estimates can be derived for more general functions: one studies the time-mollified equation and then takes the limit in the smoothing parameter.
For clarity, every positive constant not depending on h will be denoted by C i .
Estimate for a. By assumption $h$ is mass-preserving, which is equivalent to
\[
0 = \int_{\Omega\times\mathbb{R}^3} h(t,x,v)\sqrt{\mu(v)}\,dxdv = \int_\Omega a(t,x)\,dx.
\]
We can thus choose the following test function:
\[
\psi_a(t,x,v) = \left(|v|^2-\alpha_a\right)\sqrt\mu\;v\cdot\nabla_x\phi_a(t,x),
\]
where $-\Delta_x\phi_a(t,x) = a(t,x)$ with $\partial_n\phi_a|_{\partial\Omega} = 0$, and $\alpha_a > 0$ is chosen such that for all $1\leq i\leq 3$
\[
\int_{\mathbb{R}^3}\left(|v|^2-\alpha_a\right)\frac{|v|^2-3}{2}\,v_i^2\,\mu(v)\,dv = 0.
\]
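The text does not record the resulting value of $\alpha_a$; for the reader's convenience, the standard Gaussian moments $\int v_i^2\,\mu\,dv = 1$, $\int|v|^2v_i^2\,\mu\,dv = 5$ and $\int|v|^4v_i^2\,\mu\,dv = 35$ give (a computation sketched here, to be checked against the source)
\[
\int_{\mathbb{R}^3}\left(|v|^2-\alpha_a\right)\frac{|v|^2-3}{2}\,v_i^2\,\mu(v)\,dv
= \frac{(35-3\cdot 5)-\alpha_a(5-3)}{2} = \frac{20-2\alpha_a}{2},
\qquad\text{so}\quad \alpha_a = 10.
\]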
The differential operator $\partial_n$ denotes the normal derivative at the boundary. The fact that the integral of $a(t,\cdot)$ over $\Omega$ vanishes allows us to use the standard elliptic estimate [START_REF] Evans | Partial differential equations[END_REF]:
\[
(3.19)\qquad \forall t\geq 0,\quad \|\phi_a(t)\|_{H^2_x} \leq C_0\|a(t)\|_{L^2_x}.
\]
The latter estimate provides the control of $\Psi_1 = N_h^{(a)}(t) - N_h^{(a)}(0)$, as discussed before, and the control of (3.17), using Cauchy-Schwarz and Young's inequalities:
\[
(3.20)\qquad |\Psi_6(t)| \leq C\int_0^t\|\phi_a\|_{H^2_x}\|g\|_{L^2_{x,v}}\,ds \leq \frac{C_1}{4}\int_0^t\|a\|^2_{L^2_x}\,ds + C_6\int_0^t\|g\|^2_{L^2_{x,v}}\,ds,
\]
where $C_1 > 0$ is given by (3.21) below.
Firstly, we compute the term on the left-hand side of (3.11):
\[
-\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi_a\,dxdvds
= -\sum_{1\leq i,j\leq 3}\int_0^t\!\!\int_\Omega a(s,x)\left[\int_{\mathbb{R}^3}\left(|v|^2-\alpha_a\right)v_iv_j\,\mu(v)\,dv\right]\partial_{x_i}\partial_{x_j}\phi_a(s,x)\,dxds
\]
\[
-\sum_{1\leq i,j\leq 3}\int_0^t\!\!\int_\Omega b(s,x)\cdot\left[\int_{\mathbb{R}^3} v\left(|v|^2-\alpha_a\right)v_iv_j\,\mu(v)\,dv\right]\partial_{x_i}\partial_{x_j}\phi_a(s,x)\,dxds
\]
\[
-\sum_{1\leq i,j\leq 3}\int_0^t\!\!\int_\Omega c(s,x)\left[\int_{\mathbb{R}^3}\left(|v|^2-\alpha_a\right)\frac{|v|^2-3}{2}\,v_iv_j\,\mu(v)\,dv\right]\partial_{x_i}\partial_{x_j}\phi_a(s,x)\,dxds.
\]
By oddity the second term is null, as are the first and last ones when $i\neq j$. When $i = j$ in the last term we recover exactly our choice of $\alpha_a$, which makes the last term vanish too. There only remains the first term with $i = j$:
\[
(3.21)\qquad -\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi_a\,dxdvds = -C_1\int_0^t\!\!\int_\Omega a(s,x)\,\Delta_x\phi_a(s,x)\,dxds = C_1\int_0^t\|a\|^2_{L^2_x}\,ds.
\]
We recall that $L_\mu = -\nu(v) + K$, where $K$ is a bounded operator, and that the $H^2_x$-norm of $\phi_a(t,x)$ is bounded by the $L^2_x$-norm of $a(t,x)$. For the terms $\Psi_2$ (3.13) and $\Psi_3$ (3.14) a mere Cauchy-Schwarz inequality yields
\[
(3.22)\qquad \forall i\in\{2,3\},\quad |\Psi_i(t)| \leq C\int_0^t\|a\|_{L^2_x}\left\|\pi^\perp_L(h)\right\|_{L^2_{x,v}}ds \leq \frac{C_1}{4}\int_0^t\|a\|^2_{L^2_x}\,ds + C_2\int_0^t\left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}}ds.
\]
We used Young's inequality for the last step, with $C_1$ defined in (3.21).
The term $\Psi_4$ (3.15) deals with the boundary, so we decompose it into $\Lambda_+$ and $\Lambda_-$. In the $\Lambda_-$ integral we apply the Maxwell boundary condition satisfied by $h$ and use the change of variable $v\to R_x(v)$. Since $|v|^2$, $\mu(v)$, $\phi_a(s,x)$, the specular part and $P_{\Lambda\mu}$ (3.3) are invariant under this isometric change of variable, we get
\[
\Psi_4(t) = -\int_0^t\!\!\int_{\Lambda_+} h\left(|v|^2-\alpha_a\right)|v\cdot n(x)|\,\nabla_x\phi_a(s,x)\cdot v\,\sqrt\mu\,dS(x)dvds
+ (1-\alpha)\int_0^t\!\!\int_{\Lambda_+} h\left(|v|^2-\alpha_a\right)|v\cdot n(x)|\,\nabla_x\phi_a\cdot R_x(v)\,\sqrt\mu\,dS(x)dvds
+ \alpha\int_0^t\!\!\int_{\Lambda_+} P_{\Lambda\mu}(h)\left(|v|^2-\alpha_a\right)|v\cdot n(x)|\,\nabla_x\phi_a\cdot R_x(v)\,\sqrt\mu\,dS(x)dvds,
\]
so
\[
(3.23)\qquad \Psi_4(t) = -(1-\alpha)\int_0^t\!\!\int_{\Lambda_+} h\left(|v|^2-\alpha_a\right)|v\cdot n(x)|\,\nabla_x\phi_a\cdot\left[v-R_x(v)\right]\sqrt\mu\,dS(x)dvds
- \alpha\int_0^t\!\!\int_{\Lambda_+}\left(|v|^2-\alpha_a\right)|v\cdot n(x)|\,\nabla_x\phi_a\cdot\left[vh - R_x(v)P_{\Lambda\mu}(h)\right]\sqrt\mu\,dS(x)dvds.
\]
By definition of the specular reflection and of the normal derivative,
\[
\nabla_x\phi_a(s,x)\cdot\left(v-R_x(v)\right) = 2\left(v\cdot n(x)\right)n(x)\cdot\nabla_x\phi_a(s,x) = 2\left(v\cdot n(x)\right)\partial_n\phi_a(s,x).
\]
The contribution of the specular reflection part is therefore null, since $\phi_a$ was chosen such that $\partial_n\phi_a|_{\partial\Omega} = 0$. For the diffusive part we compute
\[
vh - R_x(v)P_{\Lambda\mu}(h) = v\,P^\perp_{\Lambda\mu}(h) + 2P_{\Lambda\mu}(h)\left(v\cdot n(x)\right)n(x),
\]
and again the term in the direction of $n(x)$ gives a zero contribution since $\partial_n\phi_a|_{\partial\Omega} = 0$. There only remains
\[
\Psi_4(t) = -\alpha\int_0^t\!\!\int_{\Lambda_+}\left(|v|^2-\alpha_a\right)|v\cdot n(x)|\,\sqrt\mu\;v\cdot\nabla_x\phi_a\;P^\perp_{\Lambda\mu}(h)\,dS(x)dvds.
\]
We apply the Cauchy-Schwarz inequality and the control (3.19) on the $H^2$-norm of $\phi_a$ to finally obtain the following estimate:
\[
(3.24)\qquad |\Psi_4(t)| \leq C\int_0^t\|a\|_{L^2_x}\left\|P^\perp_{\Lambda\mu}(h)\right\|_{L^2_{\Lambda_+}}ds \leq \frac{C_1}{4}\int_0^t\|a\|^2_{L^2_x}\,ds + C_4\int_0^t\left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}}ds,
\]
where we used Young's inequality with $C_1$ defined in (3.21).
It remains to estimate the term with time derivatives (3.16). It reads
\[
\Psi_5(t) = \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3} h\left(|v|^2-\alpha_a\right)v\cdot\left[\partial_t\nabla_x\phi_a\right]\sqrt\mu\,dxdvds
\]
\[
= \sum_{i=1}^3\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\left(|v|^2-\alpha_a\right)v_i\sqrt\mu\,\partial_t\partial_{x_i}\phi_a\,dxdvds
+ \int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi^\perp_L(h)\left(|v|^2-\alpha_a\right)\sqrt\mu\;v\cdot\left[\partial_t\nabla_x\phi_a\right]dxdvds.
\]
Using oddity properties for the first integral on the right-hand side, and then Cauchy-Schwarz together with the bound
\[
\int_{\mathbb{R}^3}\left(|v|^2-\alpha_a\right)^2|v|^2\,\mu(v)\,dv < +\infty,
\]
we get
\[
(3.25)\qquad |\Psi_5(t)| \leq C\int_0^t\left(\|b\|_{L^2_x} + \left\|\pi^\perp_L(h)\right\|_{L^2_{x,v}}\right)\|\partial_t\nabla_x\phi_a\|_{L^2_x}\,ds.
\]
The estimate on $\|\partial_t\nabla_x\phi_a\|_{L^2_x}$ will come from elliptic estimates in negative Sobolev spaces. We use the decomposition of the weak formulation (3.11) between $t$ and $t+\varepsilon$ (instead of between $0$ and $t$) with $\psi(t,x,v) = \phi(x)\sqrt\mu$, where $\phi\in H^1_x$ has zero integral over $\Omega$. Since $\phi(x)\sqrt\mu$ and $v\phi(x)\sqrt\mu$ are in $\mathrm{Ker}(L_\mu)$, they are orthogonal to $\pi^\perp_L(h)$ and to $L_\mu[h]$; moreover, $\psi$ does not depend on time. Hence,
\[
\Psi_2(t) = \Psi_3(t) = \Psi_5(t) = 0.
\]
At last, with the same computations as before, the boundary term is
\[
\Psi_4 = -\alpha\int_0^t\!\!\int_{\partial\Omega}\phi(x)\int_{v\cdot n(x)>0}\left(Id - P_{\Lambda\mu}\right)(h)\,|v\cdot n(x)|\,\sqrt\mu\,dv\,dS(x)ds = 0.
\]
The weak formulation associated with $\phi(x)\sqrt\mu$ is therefore
\[
\int_{\Omega\times\mathbb{R}^3}\phi(x)h(t+\varepsilon)\sqrt\mu\,dxdv - \int_{\Omega\times\mathbb{R}^3}\phi(x)h(t)\sqrt\mu\,dxdv
= \int_t^{t+\varepsilon}\!\!\int_{\Omega\times\mathbb{R}^3}\left[\pi_L(h)\,v\cdot\nabla_x\phi(x) + g\,\phi(x)\right]\sqrt\mu\,dxdvds,
\]
which is equal to
\[
\int_\Omega\left[a(t+\varepsilon)-a(t)\right]\phi(x)\,dx = C\int_t^{t+\varepsilon}\left(\int_\Omega b(s,x)\cdot\nabla_x\phi(x)\,dx + \int_{\Omega\times\mathbb{R}^3} g\,\phi\sqrt\mu\,dxdv\right)ds.
\]
Dividing by $\varepsilon$ and taking the limit as $\varepsilon$ goes to $0$ yields the following estimate, thanks to a Cauchy-Schwarz inequality:
\[
\left|\int_\Omega\partial_t a(t,x)\phi(x)\,dx\right| \leq C\left(\|b(t)\|_{L^2_x}\|\nabla_x\phi\|_{L^2_x} + \|g\|_{L^2_{x,v}}\|\phi\|_{L^2_x}\right).
\]
Since $\phi$ has a null integral over $\Omega$ we can apply the Poincaré inequality:
\[
\left|\int_\Omega\partial_t a(t,x)\phi(x)\,dx\right| \leq C\left(\|b(t)\|_{L^2_x} + \|g\|_{L^2_{x,v}}\right)\|\nabla_x\phi\|_{L^2_x}.
\]
The latter inequality is true for all $\phi$ in $\widetilde H^1_x$, the set of functions in $H^1_x$ with a null integral. Therefore, for all $t\geq 0$,
\[
(3.26)\qquad \left\|\partial_t a(t,x)\right\|_{(\widetilde H^1_x)^*} \leq C\left(\|b(t)\|_{L^2_x} + \|g\|_{L^2_{x,v}}\right),
\]
where $(\widetilde H^1_x)^*$ is the dual of $\widetilde H^1_x$. We fix $t$ and, thanks to the conservation of mass, the integral of $\partial_t a$ over $\Omega$ is null. We can thus construct $\phi(t,x)$ such that
\[
-\Delta_x\phi(t,x) = \partial_t a(t,x)\qquad\text{and}\qquad \partial_n\phi|_{\partial\Omega} = 0,
\]
and by standard elliptic estimates [START_REF] Evans | Partial differential equations[END_REF] and (3.26):
\[
\|\phi\|_{H^1_x} \leq \left\|\partial_t a\right\|_{(\widetilde H^1_x)^*} \leq C\left(\|b(t)\|_{L^2_x} + \|g\|_{L^2_{x,v}}\right).
\]
Combining this estimate with
\[
\|\partial_t\nabla_x\phi_a\|_{L^2_x} = \left\|\nabla_x\Delta^{-1}\partial_t a\right\|_{L^2_x} \leq \left\|\Delta^{-1}\partial_t a\right\|_{H^1_x} = \|\phi\|_{H^1_x},
\]
we can further control $\Psi_5$ in (3.25):
\[
(3.27)\qquad |\Psi_5(t)| \leq C_5\int_0^t\left(\|b\|^2_{L^2_x} + \left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}} + \|g\|^2_{L^2_{x,v}}\right)ds.
\]
We now gather (3.21), (3.18), (3.22), (3.24), (3.27) and (3.20):
\[
(3.28)\qquad \int_0^t\|a\|^2_{L^2_x}\,ds \leq N_h^{(a)}(t) - N_h^{(a)}(0) + C_{a,b}\int_0^t\|b\|^2_{L^2_x}\,ds + C_a\int_0^t\left(\left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}} + \left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}} + \|g\|^2_{L^2_{x,v}}\right)ds.
\]
Estimate for b. The choice of function to integrate against for the $b$ term is more involved. We emphasize that $b(t,x)$ is a vector $(b_1(t,x), b_2(t,x), b_3(t,x))$. Fix $J$ in $\{1,2,3\}$ and define
\[
\psi_J(t,x,v) = \sum_{i=1}^3\varphi^{(J)}_i(t,x,v),\qquad\text{with}\qquad
\varphi^{(J)}_i(t,x,v) =
\begin{cases}
|v|^2v_iv_J\sqrt\mu\,\partial_{x_i}\phi_J(t,x) - \dfrac{7}{2}\left(v_i^2-1\right)\sqrt\mu\,\partial_{x_J}\phi_J(t,x), & \text{if } i\neq J,\\[1mm]
\dfrac{7}{2}\left(v_J^2-1\right)\sqrt\mu\,\partial_{x_J}\phi_J(t,x), & \text{if } i = J,
\end{cases}
\]
where $-\Delta_x\phi_J(t,x) = b_J(t,x)$ and $\phi_J|_{\partial\Omega} = 0$.
Since it will be important, we emphasize here that for all $i\neq k$,
\[
(3.29)\qquad \int_{\mathbb{R}^3}\left(v_i^2-1\right)\mu(v)\,dv = 0\qquad\text{and}\qquad \int_{\mathbb{R}^3}\left(v_i^2-1\right)v_k^2\,\mu(v)\,dv = 0.
\]
The vanishing of $\phi_J$ at the boundary implies, by a standard elliptic estimate [START_REF] Evans | Partial differential equations[END_REF],
\[
(3.30)\qquad \forall t\geq 0,\quad \|\phi_J(t)\|_{H^2_x} \leq C_0\|b_J(t)\|_{L^2_x}.
\]
Again, this estimate provides the control of $\Psi_1 = N_h^{(J)}(t) - N_h^{(J)}(0)$ and of $\Psi_6(t)$ as in (3.20):
\[
(3.31)\qquad |\Psi_6(t)| \leq \frac{7}{4}\int_0^t\|b_J\|^2_{L^2_x}\,ds + C_6\int_0^t\|g\|^2_{L^2_{x,v}}\,ds.
\]
We start with the left-hand side of (3.11). By oddity, there is no contribution from $a(s,x)$ nor from $c(s,x)$. Hence,
\[
-\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi_J\,dxdvds
= -\sum_{1\leq j,k\leq 3}\sum_{\substack{i=1\\ i\neq J}}^3\int_0^t\!\!\int_\Omega b_k(s,x)\left[\int_{\mathbb{R}^3}|v|^2v_kv_iv_jv_J\,\mu(v)\,dv\right]\partial_{x_j}\partial_{x_i}\phi_J(s,x)\,dxds
\]
\[
+\frac{7}{2}\sum_{1\leq j,k\leq 3}\sum_{\substack{i=1\\ i\neq J}}^3\int_0^t\!\!\int_\Omega b_k(s,x)\left[\int_{\mathbb{R}^3}\left(v_i^2-1\right)v_kv_j\,\mu(v)\,dv\right]\partial_{x_j}\partial_{x_J}\phi_J(s,x)\,dxds
\]
\[
-\frac{7}{2}\sum_{1\leq j,k\leq 3}\int_0^t\!\!\int_\Omega b_k(s,x)\left[\int_{\mathbb{R}^3}\left(v_J^2-1\right)v_jv_k\,\mu(v)\,dv\right]\partial_{x_j}\partial_{x_J}\phi_J(s,x)\,dxds.
\]
The last two integrals over $\mathbb{R}^3$ are zero if $j\neq k$. Moreover, when $j = k$ and $j\neq J$ they also vanish by (3.29). We compute directly, for $j = J$,
\[
\int_{\mathbb{R}^3}\left(v_J^2-1\right)v_J^2\,\mu(v)\,dv = 2.
\]
The first term is composed of integrals in $v$ of the form $\int_{\mathbb{R}^3}|v|^2v_kv_iv_jv_J\,\mu(v)\,dv$, which are always null unless two indices are equal to the other two. Therefore, if $i = j$ then $k = J$, and if $i\neq j$ we only have two options: $k = i$ and $j = J$, or $k = j$ and $i = J$. Hence,
\[
-\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi_J\,dxdvds
= -\sum_{\substack{i=1\\ i\neq J}}^3\int_0^t\!\!\int_\Omega b_J(s,x)\,\partial_{x_ix_i}\phi_J\left[\int_{\mathbb{R}^3}|v|^2v_i^2v_J^2\,\mu(v)\,dv\right]dxds
- \sum_{\substack{i=1\\ i\neq J}}^3\int_0^t\!\!\int_\Omega b_i(s,x)\,\partial_{x_ix_J}\phi_J\left[\int_{\mathbb{R}^3}|v|^2v_i^2v_J^2\,\mu(v)\,dv\right]dxds
\]
\[
+ 7\sum_{\substack{i=1\\ i\neq J}}^3\int_0^t\!\!\int_\Omega b_i(s,x)\,\partial_{x_ix_J}\phi_J\,dxds
- 7\int_0^t\!\!\int_\Omega b_J(s,x)\,\partial_{x_J}\partial_{x_J}\phi_J(s,x)\,dxds.
\]
To conclude, we compute $\int_{\mathbb{R}^3}|v|^2v_i^2v_J^2\,\mu(v)\,dv = 7$ whenever $i\neq J$, and there thus only remains the following equality:
\[
(3.32)\qquad -\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi_J\,dxdvds = -7\int_0^t\!\!\int_\Omega b_J(s,x)\,\Delta_x\phi_J(s,x)\,dxds = 7\int_0^t\|b_J\|^2_{L^2_x}\,ds.
\]
Then the terms $\Psi_2$ and $\Psi_3$ are dealt with as in (3.22):
\[
(3.33)\qquad \forall i\in\{2,3\},\quad |\Psi_i(t)| \leq \frac{7}{4}\int_0^t\|b_J\|^2_{L^2_x}\,ds + C_2\int_0^t\left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}}ds.
\]
The boundary term $\Psi_4$ is divided into $\Lambda_+$ and $\Lambda_-$; we apply the Maxwell boundary condition (3.2) and the change of variable $v\to R_x(v)$ on the $\Lambda_-$ part:
\[
\Psi_4(t) = -\sum_{i=1}^3\int_0^t\!\!\int_{\Lambda_+} h\,|v\cdot n(x)|\,\varphi^{(J)}_i(s,x,v)\,dS(x)dvds
+ (1-\alpha)\sum_{i=1}^3\int_0^t\!\!\int_{\Lambda_+} h\,|v\cdot n(x)|\,\varphi^{(J)}_i(s,x,R_x(v))\,dS(x)dvds
+ \alpha\sum_{i=1}^3\int_0^t\!\!\int_{\Lambda_+} P_{\Lambda\mu}(h)\,|v\cdot n(x)|\,\varphi^{(J)}_i(s,x,R_x(v))\,dS(x)dvds.
\]
We decompose $h = P_{\Lambda\mu}(h) + P^\perp_{\Lambda\mu}(h)$ to obtain
\[
(3.34)\qquad \Psi_4(t) = -\sum_{i=1}^3\int_0^t\!\!\int_{\Lambda_+} P_{\Lambda\mu}(h)\left(v\cdot n(x)\right)\left[\varphi^{(J)}_i(s,x,v) - \varphi^{(J)}_i(s,x,R_x(v))\right]dS(x)dvds
- \sum_{i=1}^3\int_0^t\!\!\int_{\Lambda_+} P^\perp_{\Lambda\mu}(h)\,|v\cdot n(x)|\left[\varphi^{(J)}_i(s,x,v) - (1-\alpha)\varphi^{(J)}_i(s,x,R_x(v))\right]dS(x)dvds.
\]
We apply the Cauchy-Schwarz inequality and the elliptic estimate (3.30) on $\phi_J$ to the second integral and obtain the following estimate:
\[
(3.35)\qquad \left|\sum_{i=1}^3\int_0^t\!\!\int_{\Lambda_+} P^\perp_{\Lambda\mu}(h)\,|v\cdot n(x)|\left[\varphi^{(J)}_i(s,x,v) - (1-\alpha)\varphi^{(J)}_i(s,x,R_x(v))\right]dS(x)dvds\right|
\leq C\int_0^t\|b_J\|_{L^2_x}\left\|P^\perp_{\Lambda\mu}(h)\right\|_{L^2_{\Lambda_+}}ds
\leq \frac{7}{4}\int_0^t\|b_J\|^2_{L^2_x}\,ds + C_4\int_0^t\left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}}ds,
\]
where we also used Young's inequality.
The term involving $P_{\Lambda\mu}(h)$ in (3.34) is computed directly: by a change of variable $v\to R_x(v)$ we come back to the full boundary $\Lambda$ and use the property (3.3), that is,
\[
P_{\Lambda\mu}(h)(s,x,v) = z(s,x)\sqrt{\mu(v)}.
\]
We also have $\varphi^{(J)}_i$ in the following form:
\[
\varphi^{(J)}_i(t,x,v) = \tilde\varphi^{(J)}_i(v)\sqrt{\mu(v)}\,\partial_i\phi_J(t,x),
\]
where $\partial_i$ is a certain derivative in $x$ and $\tilde\varphi^{(J)}_i$ is an even function. We thus get
\[
\int_0^t\!\!\int_{\Lambda_+} P_{\Lambda\mu}(h)\left(v\cdot n(x)\right)\left[\varphi^{(J)}_i(s,x,v) - \varphi^{(J)}_i(s,x,R_x(v))\right]dS(x)dvds
= \int_0^t\!\!\int_\Lambda P_{\Lambda\mu}(h)\left(v\cdot n(x)\right)\varphi^{(J)}_i(s,x,v)\,dS(x)dvds
\]
\[
= \sum_{k=1}^3\int_0^t\!\!\int_{\partial\Omega} z(s,x)\,n_k(x)\,\partial_i\phi_J(s,x)\left[\int_{\mathbb{R}^3}\tilde\varphi^{(J)}_i(v)\,v_k\,\mu(v)\,dv\right]dS(x)ds = 0,
\]
since $\tilde\varphi^{(J)}_i(v)\,v_k$ is odd. Combining this with (3.35) yields
\[
(3.36)\qquad |\Psi_4(t)| \leq \frac{7}{4}\int_0^t\|b_J\|^2_{L^2_x}\,ds + C_4\int_0^t\left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}}ds.
\]
It remains to estimate $\Psi_5$, which involves the time derivative (3.16):
\[
\Psi_5(t) = \sum_{i=1}^3\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3} h\,\partial_t\varphi^{(J)}_i(s,x,v)\,dxdvds
= \sum_{i=1}^3\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi^\perp_L(h)\,\partial_t\varphi^{(J)}_i(s,x,v)\,dxdvds
\]
\[
+ \sum_{\substack{i=1\\ i\neq J}}^3\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,|v|^2v_iv_J\sqrt\mu\,\partial_t\partial_{x_i}\phi_J\,dxdvds
\pm \frac{7}{2}\sum_{i=1}^3\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\left(v_i^2-1\right)\sqrt\mu\,\partial_t\partial_{x_J}\phi_J\,dxdvds.
\]
By oddity arguments, only the terms in $a(s,x)$ and $c(s,x)$ can contribute to the last two sums. However, $i\neq J$ implies that the second sum is zero, as is the contribution of $a(s,x)$ in the third one, thanks to (3.29). Finally, a Cauchy-Schwarz inequality on both remaining integrals yields, as in (3.25),
\[
(3.37)\qquad |\Psi_5(t)| \leq C\int_0^t\left(\|c\|_{L^2_x} + \left\|\pi^\perp_L(h)\right\|_{L^2_{x,v}}\right)\|\partial_t\nabla_x\phi_J\|_{L^2_x}\,ds.
\]
To estimate $\|\partial_t\nabla_x\phi_J\|_{L^2_x}$ we follow the idea developed for $a(s,x)$ about negative Sobolev regularity. We apply the weak formulation (3.11) to a specific function between $t$ and $t+\varepsilon$: the test function is $\psi(x,v) = \phi(x)v_J\sqrt\mu$ with $\phi$ in $H^1_x$ and null on the boundary. Note that $\psi$ does not depend on $t$, vanishes at the boundary and belongs to $\mathrm{Ker}(L_\mu)$. Hence,
\[
\Psi_3(t) = \Psi_4(t) = \Psi_5(t) = 0.
\]
It remains
\[
C\int_\Omega\left[b_J(t+\varepsilon) - b_J(t)\right]\phi(x)\,dx
= \int_t^{t+\varepsilon}\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v_J\,v\cdot\nabla_x\phi(x)\sqrt\mu\,dxdvds
+ \int_t^{t+\varepsilon}\!\!\int_{\Omega\times\mathbb{R}^3}\pi^\perp_L(h)\,v_J\,v\cdot\nabla_x\phi(x)\sqrt\mu\,dxdvds
+ \int_t^{t+\varepsilon}\!\!\int_{\Omega\times\mathbb{R}^3} g\,\phi(x)\,v_J\sqrt\mu\,dxdvds.
\]
As for $a(t,x)$, we divide by $\varepsilon$ and take the limit as $\varepsilon$ goes to $0$. By oddity, the first integral on the right-hand side only gives terms with $a(s,x)$ and $c(s,x)$. The second term is dealt with by a Cauchy-Schwarz inequality. Finally, we apply a Cauchy-Schwarz inequality to the last integral, together with a Poincaré inequality for $\phi(x)$ ($\phi$ is null on the boundary). This yields
\[
(3.38)\qquad \left|\int_\Omega\partial_t b_J(t,x)\phi(x)\,dx\right| \leq C\left(\|a\|_{L^2_x} + \|c\|_{L^2_x} + \left\|\pi^\perp_L(h)\right\|_{L^2_{x,v}} + \|g\|_{L^2_{x,v}}\right)\|\nabla_x\phi\|_{L^2_x}.
\]
The latter is true for all $\phi(x)$ in $H^1_x$ vanishing on the boundary. We thus fix $t$ and apply the inequality above to $\phi$ defined by
\[
-\Delta_x\phi(t,x) = \partial_t b_J(t,x)\qquad\text{and}\qquad \phi|_{\partial\Omega} = 0,
\]
and obtain
\[
\|\partial_t\nabla_x\phi_J\|^2_{L^2_x} = \left\|\nabla_x\Delta^{-1}\partial_t b_J\right\|^2_{L^2_x} = \int_\Omega\nabla_x\Delta^{-1}\partial_t b_J\cdot\nabla_x\phi(x)\,dx.
\]
We integrate by parts (the boundary term vanishes because of our choice of $\phi$):
\[
\|\partial_t\nabla_x\phi_J\|^2_{L^2_x} = \int_\Omega\partial_t b_J(t,x)\,\phi(x)\,dx.
\]
At last, we use (3.38):
\[
\|\partial_t\nabla_x\phi_J\|^2_{L^2_x} \leq C\left(\|a\|_{L^2_x} + \|c\|_{L^2_x} + \left\|\pi^\perp_L(h)\right\|_{L^2_{x,v}} + \|g\|_{L^2_{x,v}}\right)\|\nabla_x\phi\|_{L^2_x}
= C\left(\|a\|_{L^2_x} + \|c\|_{L^2_x} + \left\|\pi^\perp_L(h)\right\|_{L^2_{x,v}} + \|g\|_{L^2_{x,v}}\right)\|\partial_t\nabla_x\phi_J\|_{L^2_x}.
\]
Combining this estimate with (3.37) and using Young's inequality with any $\varepsilon_b > 0$:
\[
(3.39)\qquad |\Psi_5(t)| \leq \varepsilon_b\int_0^t\|a\|^2_{L^2_x}\,ds + C_5(\varepsilon_b)\int_0^t\left(\|c\|^2_{L^2_x} + \left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}} + \|g\|^2_{L^2_{x,v}}\right)ds.
\]
We now gather (3.32), (3.18), (3.33), (3.36) and (3.39):
\[
\int_0^t\|b_J\|^2_{L^2_x}\,ds \leq N_h^{(J)}(t) - N_h^{(J)}(0) + \varepsilon_b\int_0^t\|a\|^2_{L^2_x}\,ds + C_{J,c}(\varepsilon_b)\int_0^t\|c\|^2_{L^2_x}\,ds
+ C_J(\varepsilon_b)\int_0^t\left(\left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}} + \left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}} + \|g\|^2_{L^2_{x,v}}\right)ds.
\]
Finally, summing over all $J$ in $\{1,2,3\}$:
\[
(3.40)\qquad \int_0^t\|b\|^2_{L^2_x}\,ds \leq N_h^{(b)}(t) - N_h^{(b)}(0) + \varepsilon_b\int_0^t\|a\|^2_{L^2_x}\,ds + C_{b,c}(\varepsilon_b)\int_0^t\|c\|^2_{L^2_x}\,ds
+ C_b(\varepsilon_b)\int_0^t\left(\left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}} + \left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}} + \|g\|^2_{L^2_{x,v}}\right)ds.
\]
Estimate for c. The handling of $c(t,x)$ is quite similar to the one of $a(t,x)$, but it involves a more intricate treatment of the boundary terms, as $h$ does not preserve the energy. We choose the following test function:
\[
\psi_c(t,x,v) = \left(|v|^2-\alpha_c\right)v\cdot\nabla_x\phi_c(t,x)\,\sqrt{\mu(v)},
\]
where $-\Delta_x\phi_c(t,x) = c(t,x)$ with $\phi_c|_{\partial\Omega} = 0$, and $\alpha_c > 0$ is chosen such that for all $1\leq i\leq 3$
\[
\int_{\mathbb{R}^3}\left(|v|^2-\alpha_c\right)v_i^2\,\mu(v)\,dv = 0.
\]
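This condition determines $\alpha_c$ explicitly from standard Gaussian moments (consistently with the value $\alpha_c = 5$ stated after (3.43) below): since $\int v_i^2\,\mu\,dv = 1$ and $\int|v|^2v_i^2\,\mu\,dv = 5$,
\[
\int_{\mathbb{R}^3}\left(|v|^2-\alpha_c\right)v_i^2\,\mu(v)\,dv = 5-\alpha_c = 0
\quad\Longrightarrow\quad \alpha_c = 5.
\]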
The vanishing of $\phi_c$ at the boundary implies, by a standard elliptic estimate [START_REF] Evans | Partial differential equations[END_REF],
\[
(3.41)\qquad \forall t\geq 0,\quad \|\phi_c(t)\|_{H^2_x} \leq C_0\|c(t)\|_{L^2_x}.
\]
Again, this estimate provides the control of $\Psi_1 = N_h^{(c)}(t) - N_h^{(c)}(0)$ and of $\Psi_6(t)$ as in (3.20):
\[
(3.42)\qquad |\Psi_6(t)| \leq \frac{C_1}{4}\int_0^t\|c\|^2_{L^2_x}\,ds + C_6\int_0^t\|g\|^2_{L^2_{x,v}}\,ds,
\]
where $C_1$ is given by (3.43) below.
We start with the left-hand side of (3.11):
\[
-\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi_c\,dxdvds
= -\sum_{1\leq i,j\leq 3}\int_0^t\!\!\int_\Omega a(s,x)\left[\int_{\mathbb{R}^3}\left(|v|^2-\alpha_c\right)v_iv_j\,\mu(v)\,dv\right]\partial_{x_i}\partial_{x_j}\phi_c(s,x)\,dxds
\]
\[
-\sum_{1\leq i,j\leq 3}\int_0^t\!\!\int_\Omega b(s,x)\cdot\left[\int_{\mathbb{R}^3} v\left(|v|^2-\alpha_c\right)v_iv_j\,\mu(v)\,dv\right]\partial_{x_i}\partial_{x_j}\phi_c(s,x)\,dxds
\]
\[
-\sum_{1\leq i,j\leq 3}\int_0^t\!\!\int_\Omega c(s,x)\left[\int_{\mathbb{R}^3}\left(|v|^2-\alpha_c\right)\frac{|v|^2-3}{2}\,v_iv_j\,\mu(v)\,dv\right]\partial_{x_i}\partial_{x_j}\phi_c(s,x)\,dxds.
\]
By oddity, the second integral vanishes, as do all the others when $i\neq j$. Our choice of $\alpha_c$ makes the first integral vanish even for $i = j$. There only remains the last integral with $i = j$, and the definition of $\Delta_x\phi_c(t,x)$ gives
\[
(3.43)\qquad -\int_0^t\!\!\int_{\Omega\times\mathbb{R}^3}\pi_L(h)\,v\cdot\nabla_x\psi_c\,dxdvds = C_1\int_0^t\|c\|^2_{L^2_x}\,ds.
\]
Again, direct computations show $\alpha_c = 5$ and hence $C_1 > 0$.
Then the terms $\Psi_2$ and $\Psi_3$ are dealt with as for $a(t,x)$ and $b(t,x)$:
\[
(3.44)\qquad \forall i\in\{2,3\},\quad |\Psi_i(t)| \leq \frac{C_1}{4}\int_0^t\|c\|^2_{L^2_x}\,ds + C_2\int_0^t\left\|\pi^\perp_L(h)\right\|^2_{L^2_{x,v}}ds,
\]
where $C_1$ is defined in (3.43).
The term $\Psi_4$ involves an integral over the boundary $\Lambda$. Again, we divide it into $\Lambda_+$ and $\Lambda_-$, we use the Maxwell boundary condition (3.2) satisfied by $h$ and we make the change of variable $v\to R_x(v)$ on $\Lambda_-$. As for (3.23) dealing with $a(t,x)$, we obtain
\[
\Psi_4(t) = -2(1-\alpha)\int_0^t\!\!\int_{\Lambda_+} h\left(|v|^2-\alpha_c\right)\left(v\cdot n(x)\right)^2\partial_n\phi_c\,\sqrt\mu\,dS(x)dvds
- \alpha\int_0^t\!\!\int_{\Lambda_+}\left(|v|^2-\alpha_c\right)|v\cdot n|\,\nabla_x\phi_c\cdot\left[v\,P^\perp_{\Lambda\mu}(h) + 2P_{\Lambda\mu}(h)\left(v\cdot n\right)n\right]\sqrt\mu\,dS(x)dvds.
\]
We decompose $h = P_{\Lambda\mu}(h) + P^\perp_{\Lambda\mu}(h)$ in the first integral and use (3.3), which says that $P_{\Lambda\mu}(h)(t,x,v) = z(t,x)\sqrt{\mu(v)}$:
\[
\Psi_4(t) = -2\int_0^t\!\!\int_{\partial\Omega}\partial_n\phi_c\,z(s,x)\left[\int_{v\cdot n(x)>0}\left(|v|^2-\alpha_c\right)\left(v\cdot n(x)\right)^2\mu(v)\,dv\right]dS(x)ds
\]
\[
-2(1-\alpha)\int_0^t\!\!\int_{\Lambda_+}\left(|v|^2-\alpha_c\right)\left(v\cdot n(x)\right)^2\partial_n\phi_c\,\sqrt\mu\;P^\perp_{\Lambda\mu}(h)\,dS(x)dvds
- \alpha\int_0^t\!\!\int_{\Lambda_+}\left(|v|^2-\alpha_c\right)|v\cdot n|\,\nabla_x\phi_c\cdot v\,\sqrt\mu\;P^\perp_{\Lambda\mu}(h)\,dS(x)dvds.
\]
Because $\left(v\cdot n(x)\right)^2 = \sum_{1\leq i,j\leq 3}v_iv_jn_in_j$, the first term is null when $i\neq j$ and vanishes for $i = j$ thanks to our choice of $\alpha_c$. The last two integrals are dealt with by applying the Cauchy-Schwarz inequality and the elliptic estimate (3.41) on $\phi_c$ in $H^2$. As for the case of $a(t,x)$, we obtain
\[
(3.45)\qquad |\Psi_4(t)| \leq \frac{C_1}{4}\int_0^t\|c\|^2_{L^2_x}\,ds + C_4\int_0^t\left\|P^\perp_{\Lambda\mu}(h)\right\|^2_{L^2_{\Lambda_+}}ds.
\]
As for $a(t,x)$ and $b(t,x)$, the estimate on $\Psi_5$ (3.16) follows from elliptic arguments in negative Sobolev spaces. With exactly the same computations as in (3.25) we have
\[
(3.46)\qquad |\Psi_5(t)| \leq C\int_0^t\left\|\pi^\perp_L(h)\right\|_{L^2_{x,v}}\|\partial_t\nabla_x\phi_c\|_{L^2_x}\,ds.
\]
Note that the contribution of $\pi_L$ vanishes by oddity on the terms involving $a(t,x)$ and $c(t,x)$, and also on the terms involving $b(t,x)$ thanks to our choice of $\alpha_c$.
To estimate ∂ t ∇ x φ c L 2 x we use the decomposition of the weak formulation (3.11) between t and t+ε (instead of between 0 and t) with
ψ(t, x, v) = √ µ |v| 2 -3 φ(x)/2
where φ belongs to H 1 x and is null at the boundary. ψ does not depend on t, vanishes at the boundary and ψ(x)µ(v) is in Ker(L). Hence,
Ψ 3 (t) = Ψ 4 = Ψ 5 (t) = 0. It remains C Ω [c(t + ε) -c(t)] φ(x) dx = t+ε t Ω×R 3 π L (h) |v| 2 -3 2 v • ∇ x φ(x) √ µ dxdvds + t+ε t Ω×R 3 π ⊥ L (h) |v| 2 -3 2 v • ∇ x φ(x) √ µ dxdvds t+ε t Ω×R 3 g |v| 2 -3 2 √ µφ(x) dxdvds.
As for a(t, x) we divide by ε and take the limit as ε goes to 0. By oddity, the first integral on the right-hand side only gives terms with b(s, x). The second and third terms are dealt with by a Cauchy-Schwarz inequality and we apply on φ a Poincaré inequality. This yields
Ω ∂ t c(t, x)φ(x) dx C b L 2 x + π ⊥ L (h) L 2 x,v + g L 2 x ∇ x φ L 2 x .
The latter is true for all φ(x) in H 1 x vanishing on the boundary. We thus fix t and apply the inequality above to
-∆ x φ(t, x) = ∂ t c(t, x) and φ| ∂Ω = 0.
Exactly the same computation as for b J we obtain for any ε c > 0
|Ψ 5 (t)| C t 0 b L 2 x + π ⊥ L (h) L 2 x,v + g L 2 x π ⊥ L (h) L 2 x,v ds ε c t 0 b 2 L 2 x ds + C 5 (ε c ) t 0 π ⊥ L (h) 2 L 2 x,v + g 2 L 2
x ds.
(3.47)
We now gather (3.43), (3.18), (3.44), (3.45), (3.47) and (3.42)
t 0 c 2 L 2 x ds N (c) h (t) -N (c) h (0) + ε c t 0 b 2 L 2 x ds + C c (ε c ) t 0 P ⊥ Λµ (h) 2 L 2 Λ + + π ⊥ L (h) 2 L 2 x,v + g 2 L 2
x ds.
(3.48)
Conclusion of the proof. We gather the estimates we derived for a, b and c. We compute the linear combination (3.28
) + η × (3.40) + β × (3.48). For all ε b > 0 and ε c > 0 this implies t 0 a 2 L 2 x + η b 2 L 2 x + β c 2 L 2 x ds N h (t) -N h (0) + C ⊥ t 0 P ⊥ Λµ (h) 2 L 2 Λ + + π ⊥ L (h) 2 L 2 x,v + g 2 L 2 x ds + t 0 ηε b a 2 L 2 x + (C a,b + βε c ) b 2 L 2 x + ηC b,c (ε b ) c 2 L 2 x ds.
We first choose η > C a,b , then ε b such that ηε b < 1 and then β > ηC b,c (ε b ). Finally, we fix ε c small enough such that C a,b + βε c < η . With such choices we can absorb the last term on the right-hand side by the left-hand side. This concludes the proof of Lemma 3.2.
Exponential decay of the solution.
In this section we show that a solution to (3.1) that preserves mass and has its trace in L 2 Λ decays exponentially fast.
Proof of Theorem 3.1. Let h be a solution described in the statement of the theorem and define for λ > 0, h(t, x, v) = e λt h(t, x, v). h satisfies the conservation of mass and is solution to
∂ t h + v • ∇ x h = L µ ( h) + λ h
with the Maxwell boundary condition. Moreover, since h| Λ belongs to L 2 Λ µ -1/2 so does h Λ . We can use Green formula and get 1 2
d dt h 2 L 2 x,v = - 1 2 Ω×R 3 v•∇ x h 2 dxdv+ Ω L µ ( h)(t, x, •), h(t, x, •) L 2 v dx+λ h 2 L 2 x,v
Therefore, thnaks to the spectral gap (3.7) of L in L 2 v we get
(3.49) 1 2 d dt h 2 L 2 x,v - 1 2 Λ h 2 v • n(x) dS(x)dv -λ L π ⊥ L ( h) 2 L 2 x,v + λ h 2 L 2 x,v
.
As we did in previous section, we divide the integral over the boundary and we apply the boundary condition (3.2) followed by the change of variable v → R x (v) that sends Λ -to Λ + . At last, we decompose h| Λ + into P Λµ (h) + P ⊥ Λµ (h) and this yields
- Λ h 2 v • n(x) dS(x)dv = - Λ + h 2 -(1 -α) h + αP Λµ ( h) 2 v • n(x) dS(x)dv = -(1 -(1 -α) 2 ) P ⊥ Λµ ( h) 2 L 2 Λ + +2α Λ + P Λµ ( h)P ⊥ Λµ ( h)v • n(x) dS(x)dv = -(1 -(1 -α) 2 ) P ⊥ Λµ ( h) 2 L 2 Λ + . (3.50)
Combining (3.49) and (3.50) and integrating from 0 to t we get (3.51)
h(t) 2 L 2 x,v + C t 0 P ⊥ Λµ ( h) L 2 Λ + + π ⊥ L ( h) L 2 x,v ds h 0 2 L 2 x,v + 2λ t 0 h 2 L 2 x,v
ds.
To conclude we use Lemma 3.2 for h with g = λ h:
t 0 π L ( h) 2 L 2 x,v ds N h (t) -N h (0) + C ⊥ t 0 π ⊥ L ( h) 2 L 2
x,v
+ P ⊥ Λµ ( h) 2 L 2 Λ + + λ 2 h 2 L 2 x,v ds (3.52)
and we combine ε × (3.52) + (3.51) for ε > 0.
h 2 L 2 x,v -εN h (t) + C ε t 0 π L ( h) 2 L 2 x,v + π ⊥ L ( h) 2 L 2 x,v ds
+ (C -εC ⊥ ) t 0 P ⊥ Λµ ( h) 2 L 2 Λ + ds h 0 2 L 2 x,v (µ -1/2 ) -εN h (0) + εC ⊥ λ 2 + 2λ t 0 h 2 L 2 x,v ds with C ε = min {εC ⊥ , C -εC ⊥ }. Thanks to the control N h (s) C h(s) 2 L 2
x,v and the fact that
π L ( h) 2 L 2 x,v + π ⊥ L ( h) 2 L 2 x,v = h 2 L 2 x,v
we can choose ε small enough such that C ε > 0 and then λ > 0 small enough such that (εC
⊥ λ 2 + 2λ) < C ε . Such choices imply that h 2 L 2 x,v is uniformly bounded in time by C h 0 2 L 2
x,v . By definition of h, this shows an exponential decay for h and concludes the proof of Theorem 3.1.
Semigroup generated by the collision frequency
This section is devoted to proving that the following operator
G ν = -ν(v) -v • ∇ x
with the Maxwell boundary condition generates a semigroup S Gν (t) with exponential decay in L ∞
x,v endowed with different weights. Such a study has been done for pure specular reflections (α = 0) whereas a similar result has been obtained in the purely diffusive case (α = 1) (see [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] for maxwellian weights and [START_REF] Briant | Perturbative theory for the Boltzmann equation in bounded domains with different boundary conditions[END_REF] for more general weights and L 1 v L ∞ x framework). We adapt the methods of [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF][3] in order to fit our boundary condition. They consist in deriving an implicit formulation for the semigroup along the characteristics and then we need to control the characteristic trajectories that do not reach the plane {t = 0} in a time t. As we shall see, this number of problematic trajectories is small when the number of rebounds is large and so can be controlled for long times.
Theorem 4.1. Let m(v) = m(|v|) 0 be such that (4.1) (1 + |v|) ν(v) m(v) ∈ L 1 v and m(v)µ(v) ∈ L ∞ v .
Then for any
f 0 in L ∞ x,v (m) there exists a unique solution S Gν (t)f 0 in L ∞ x,v (m) to (4.2) [∂ t + v • ∇ x + ν(v)] (S Gν (t)f 0 ) = 0 such that (S Gν (t)f 0 )| Λ ∈ L ∞ Λ (m)
and satisfying the Maxwell boundary condition (1.2) with initial datum f 0 . Moreover it satisfies
∀ν ′ 0 < ν 0 , ∃ C m,ν ′ 0 > 0, ∀t 0, S Gν (t)f 0 L ∞ x,v (m) C m,ν ′ 0 e -ν ′ 0 t f 0 L ∞ x,v (m) , with ν 0 = inf {ν(v)} > 0.
A corollary of the proof of this theorem is a gain of weight when one integrates in the time variable. This will be of core importance to control the nonlinear operator. Corollary 4.2. Let m be such that m(v)ν(v) -1 satisfies the requirements of Theorem 4.1. Then there exists C 0 > 0 such that for any (f s ) s∈R + in L ∞
x,v (m), any ε in (0, 1) and all t 0,
t 0 S Gν (t -s)f s (x, v) ds L ∞ x,v (m) C 0 1 -ε e -εν 0 t sup s∈[0,t] e εν 0 s f s L ∞ x,v (mν -1 )
.
The rest of this Section is entirely devoted to the proof of these results.
4.1. Brief description of characteristic trajectories. The characteristic trajectories of the free transport equation
∂ t f (t, x, v) + v • ∇ x f (t, x, v) = 0
with purely specular reflection boundary condition will play an important role in our proof. Their study has been done in [7, Appendix A] and we describe here the results that we shall use later on.
The description of backward characteristics relies on the time of first rebound against the boundary of Ω. For x in Ω and v = 0 define
t min (x, v) = max t 0 : x -vs ∈ Ω, ∀ 0 s t .
Note that for all (x, v) / ∈ Λ 0 ∪ Λ -, t min (x, v) > 0. The characteristic trajectories are straight lines in between two rebounds against the boundary, where the velocity then undergo a specular reflection.
From [7, Appendix A.2], starting from (x, v) in Ω × (R 3 -{0}), one can construct T 1 (x, v) = t min (x, v) and the footprint X 1 (x, v) on ∂Ω of the backward trajectory starting from x with velocity v has well as its resulting velocity V 1 (x, v):
X 1 (x, v) = x -T 1 (x, v)v and V 1 (x, v) = R X 1 (x,v) (v) ,
where we recall that R y (v) is the specular reflection of v at a point y ∈ ∂Ω. One can iterate the process and construct the second collision with the boundary at time
T 2 (x, v) = T 1 (x, v) + t min (X 1 (x, v), V 1 (x, v)), at the footprint X 2 (x, v) = X 1 (X 1 (x, v), V 1 (x, v)) and the second reflected velocity V 2 (x, v) = V 1 (X 1 , V 1
) and so on so forth to construct a sequence (T k (x, v), X k (x, v), V k (x, v)) in ∂Ω × R 3 . More precisely we have, for almost every (x, v),
T k+1 (x, v) = T k +t min (X k , V k ), X k+1 (x, v) = X k -t min (X k , V k )V k , V k+1 = R X k+1 (V k ) .
Thanks to [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF]Proposition A.4], for a fixed time t and for almost every (x, v) there are a finite number of rebounds. In other terms, there exists N(t, x, v) such that the backward trajectories starting from (x, v) and running for a time t is such that
T N (t,x,v) (x, v) t < T N (t,x,v)+1 (x, v).
We conclude this subsection by stating a continuity result about the footprints of characteristics. This is a rewriting of [24, Lemmas 1 and 2]. (1) the backward exit time
t min (x, v) is lower semi-continuous; (2) if v • n(X 1 (x, v)) < 0 then t min (x, v) and X 1 (x, v) are continuous functions of (x, v); (3) let (x 0 , v 0 ) be in Ω × R 3 with v 0 = 0 and t min (x 0 , v 0 ) < ∞, if (X 1 (x 0 , v 0 ), v 0 ) belongs to Λ I- 0 then t min (x, v) is continuous around (x 0 , v 0 ).
Note that [24, Lemma 2] also gives that if (X 1 (x 0 , v 0 ), v 0 ) belongs to Λ I+ 0 then t min (x, v) is not continuous around (x 0 , v 0 ). Therefore, points (2) and (3) in Lemma 4.3 imply that C - Λ = Λ -∪ Λ I- 0 is indeed the boundary continuity set.
4.2.
Proof of Theorem 4.1: uniqueness. Assume that there exists a solution f of (4.2) in L ∞ x,v (m) satisfying the Maxwell boundary condition and such that f | Λ belongs to L ∞ Λ (m). With the assumptions on the weight m(v) and the following inequalities
f L 1 x,v R 3 dv m(v) f L ∞ x,v (m) and f L 1 Λ R 3 |v| m(v) dv f L ∞ Λ (m)
we see that f belongs to L 1 x,v and its restriction f | Λ belongs to L 1 Λ . We can therefore use the divergence theorem and the fact that ν(v) ν 0 > 0:
d dt f L 1 x,v = Ω×R 3 sgn(f (t, x, v)) [-v • ∇ x -ν(v)] f (t, x, v) dxdv = - Ω×R 3 v • ∇ x (|f |) dxdv -ν(v)f L 1 x,v - Λ |f (t, x, v)| (v • n(x)) dS(x)dv -ν 0 f L 1 x,v .
Using the Maxwell boundary condition (1.2) and then applying the change of variable v → R x (v), which has a unit jacobian since it is an isometry, gives
Λ - |f (t, x, v)| (v • n(x)) dS(x)dv = - Λ - |(1 -α)f (t, x, R x (v)) + αP Λ (f (t, x, •)) (v)| (v • n(x)) dS(x)dv = - Λ + |(1 -α)f (t, x, v) + αP Λ (f (t, x, •)) (v)| (v • n(x)) dS(x)dv Λ + |f (t, x, v)| (v • n(x)) dS(x)dv.
We used the fact that
P Λ (f )(R x (v)) = P Λ (f )(v).
The integral over the boundary Λ is therefore positive and so uniqueness follows from a Grönwall lemma.
4.3.
Proof of Theorem 4.1: existence and exponential decay. Let f 0 be in L ∞
x,v (m). Define the following iterative scheme:
[∂ t + v • ∇ x + ν] f (n) = 0 and f (n) (0, x, v) = f 0 (x, v)1 {|v| n}
with a damped version of the Maxwell boundary condition for t > 0 and (x, v) in Λ -
(4.3) f (n) (t, x, v) = (1 -α)f (n) (t, x, R x (v)) + α 1 - 1 n P Λ (f (n) Λ + )(t, x, v).
Denote by P (n) : Λ + -→ Λ -the boundary operator associated to (4.3).
Note that µ(v) -1/2 f 0 (x, v)1 {|v| n} is in L ∞ x,v and ∀ µ -1/2 f ∈ L ∞ Λ + , P (n) (µ -1/2 f ) L ∞ Λ - 1 - α n µ -1/2 f L ∞ Λ +
.
The norm of the operator P (n) is thus strictly smaller than one. We can apply [21, Lemma 14] which implies that f
(n) is well-defined in L ∞ x,v µ -1/2 with f (n) Λ in L ∞ Λ µ -1/2 .
We shall prove that in fact f (n) decays exponentially fast in L ∞ x,v (m) and that its restriction to the boundary is in L ∞ Λ (m). Finally, we will prove that f (n) converges, up to a subsequence, towards f the desired solution of Theorem 4.1. The proof of Theorem 4.1 consists in the following three steps developed in Subsections 4.3.1, 4.3.2 and 4.3.3.
4.3.1.
Step 1: Implicit formula for f (n) . We use the conservation property that e ν(v)t f (n) (t, x, v) is constant along the characteristic trajectories. We apply it to the first collision with the boundary (recall Subsection 4.1 for notations) and obtain that for all (x, v) /
∈ Λ 0 ∪ Λ - f (n) (t, x, v) =1 {t-t min (x,v) 0} e -ν(v)t f 0 (x -tv, v)1 {|v| n} + 1 {t-t min (x,v)>0} e -ν(v)t min (x,v) f (n) Λ -(t -t min (x, v), X 1 (x, v), v
). Indeed, either the backward trajectory hits the boundary at X 1 (x, v) before time t (t min < t) or it reaches the origin plane {t = 0} before it hits the boundary (t min 0). Defining t 1 = t 1 (t, x, v) = t -t min (x, v), and recalling the first footprint X 1 = X 1 (x, v) and the first change of velocity V 1 (x, v)), we apply the boundary condition (4.3) and obtain the following implicit formula.
f (n) (t, x, v) = 1 {t 1 (x,v) 0} e -ν(v)t f 0 (x -tv, v)1 {|v| n} + (1 -α) 1 {t 1 (x,v)>0} e -ν(v)(t-t 1 ) f (n) (t 1 , X 1 , V 1 ) + 1 {t 1 (x,v)>0} ∆ n µ(v)e -ν(v)(t-t 1 ) v 1 * •n(x 1 )>0 1 µ(v 1 * ) f (n) (t 1 , X 1 , v 1 * ) dσ x 1 (v 1 * ) , (4.4)
where we denoted ∆ n = α(1 -1/n) and we defined the probability measure on Λ + (4.5)
dσ x (v) = c µ µ(v) |v • n(x)| dv.
Moreover, once at (t 1 , X 1 , v 1 ) with v 1 being either V 1 (x, v) or v 1 * either t 2 0 (where t 2 = t 1 (t 1 , X 1 , v 1 ) 0) and the trajectory reaches the initial plane after the first rebound or t 2 > 0 and it can still overcome a collision against the boundary in the time t. Again, the fact that e ν(v)t f (n) (t, x, v) is constant along the characteristics implies
(4.6) f (n) (t, x, v) = I 1 f (n) 0 (t, x, v) + R 1 f (n) (t, x, v)
with I 1 accounting for all the trajectories reaching the initial plane in at most 1 rebound in time t
I 1 f (n) 0 = 1 {t 1 0} e -ν(v)t f (n) 0 (x -tv, v) + 1 {t 1 >0} 1 {t 2 0} (1 -α)e -ν(v)(t-t 1 ) e -ν(V 1 )t 1 f 0 (X 1 -t 1 V 1 , V 1 ) + 1 {t 1 >0} ∆ n µ(v)e -ν(v)(t-t 1 ) v 1 * •n(x 1 )>0 1 {t 2 0} e -ν(v 1 * )t 1 µ(v 1 * ) f (n) 0 (x 1 -t 1 v 1 * , v 1 * ) dσ x 1 , (4.7)
and R 1 f (n) encodes the contribution of all the characteristics that after one rebound are still able to generate new collisions against ∂Ω
R 1 f (n) (t, x, v) = (1 -α) 1 {t 2 >0} e -ν(v)(t-t 1 ) f (n) (t 1 , X 1 , V 1 ) + ∆ n µ(v)e -ν(v)(t-t 1 ) v 1 * •n(x 1 )>0 1 {t 2 >0} 1 µ(v 1 * ) f (t 1 , X 1 , v 1 * ) dσ x 1 (v 1 * ). (4.8)
Of important note, to lighten computations, in each term the value t 2 refers to t 1 of the preceding triple (t 1 , x 1 , v 1 ) where v 1 is V 1 (x, v) in the first term and v 1 * in the second. As we are about to iterate (4.6), we shall generate sequences (t k+1 , x k+1 , v k+1 ) which have to be understood as (t 1 (t
(l) k , x (l) k , v (l) k ), x 1 (x (l) k , v (l) k ), v k+1 ) and v k+1 being either V 1 (x (l) k , v (l) k ) or an integration variable v (k+1) * .
By a straightforward induction we obtain an implicit form for f (n) when one takes into account the contribution of the characteristics reaching {t = 0} in at most p 1 rebounds (4.9)
f (n) (t, x, v) = I p f (n) 0 (t, x, v) + R p f (n) (t, x, v). I p f (n) 0
contains all the trajectories reaching the initial plane in at most p rebounds whereas R p f (n) gathers the contributions of all the trajectories still coming from a collision against the boundary. A more careful induction gives an explicit formula for R p and this is the purpose of the next Lemma. The main idea is to look at every possible combination of the specular reflections among all the collisions against the boundary, represented by the set ϑ defined below.
Lemma 4.4. For p 1 and i in {1, . . . , p} define ϑ p (i) the set of strictly increasing functions from {1, . . . , i} into {1, . . . , p}. Let (t 0 , x 0 , v 0 ) = (t, x, v) in R + × Ω × R 3 and (v 1 * , . . . , v p * ) in R 3p . For l in ϑ p (i) we define the sequence (t
(l) k , x (l)
k , v
(l) k ) 1 k p by induction t k = t (l) k-1 -t min (x (l) k-1 , v (l) k-1 ) , x (l) k = X 1 (x (l) k-1 , v (l) k-1 ) v (l) k = V 1 (x (l) k-1 , v (l) k-1 ) if k ∈ l [{1, . . . , i}] , v k * otherwise.
At last, for 1 k p define the following measure on R 3k
dΣ k l (v 1 * , . . . , v p * ) = µ(v) µ(v (l) k ) k-1 j=0 e -ν(v (l) j )(t (l) j -t (l) j+1 ) dσ x 1 (v 1 * ) . . . dσ xp (v p * )
and the following sets
(4.10) V (l) j = v j * ∈ R 3 , v j * • n(x (l) j ) > 0 .
Then we have the following identities
I p f (n) 0 (t, x, v) = p k=0 k i=0 (1 -α) i ∆ k-i n × l∈ϑ k (i) 1 j p V (l) j 1 t (l) k >0, t (l)
k+1 0 e -ν(v (l)
k )t (l) k f (n) 0 (x (l) k -t (l) k v (l) k , v (l) k ) dΣ k l (4.11)
and (4.12)
R p f (n) (t, x, v) = p i=0 l∈ϑp(i) (1 -α) i ∆ p-i n 1 j p V (l) j 1 t (l) p+1 >0 f (n) (t (l) p , x (l) p , v (l) p ) dΣ p l ,
where we defined t
(l) p+1 = t (l) p -t min (x (l) p , v (l)
p ) and also by convention l ∈ ϑ p (0) means that l = 0.
Proof of Lemma 4.4. The proof is done by induction on p and we start with the formula for R p .
By definition of R 1 f (n) (t, x, v) (4.8), the property holds for p = 1 since on the pure reflection part µ(v) = µ(v 1 ) and dσ x 1 is a probability measure.
Suppose that the property holds at p 1. Then we can apply the property (4.6) at rank one to f (n) (t
(l) p , x (l) p , v (l)
p ). In other terms this amounts to applying the preservation of e ν(v)t f (n) (t, x, v) along characteristics and to keep only the contribution of trajectories still able to generate rebounds. Using the notations t
(l) p+1 = t (l) p -t min (x (l) p , v (l) p ), x (l) p+1 = X 1 (x (l) p , v (l)
p ), and the definition (4.8), it reads
R 1 f (n) (t (l) p , x (l) p , v (l) p ) = (1 -α) 1 t 2 (t (l) p ,x (l) p ,v (l) p )>0 e -ν(v (l) p )(t (l) p -t (l) p+1 ) f (n) (t (l) p+1 , x (l) p+1 , V 1 (x (l) p , v (l) p )) + ∆ n µ(v (l) p )e -ν(v (l) p )(t (l) p -t (l) p+1 ) V (l) p+1 1 t 2 (t (l) p ,x (l)
p ,v (p+1) * ) >0 dσ x (l) p+1 µ(v (p+1) * ) f (t (l) p+1 , x (l)
p+1 , v (p+1) * ).
Since µ only depends on the norm and since dσ x (l) p+1 is a probability measure, the specular part above can be rewritten as
(1 -α)µ(v (l) p ) V (l) p+1 1 {t 2 >0} e -ν(v (l) p )(t (l) p -t (l) p+1 ) µ(V 1 (x (l) p , v (l)
p ))
f (n) (t (l) p+1 , x (l) p+1 , V 1 (x (l) p , v (l) p )) dσ x (l) p+1 (v (p+1) * ).
For each l in ϑ p (i) we can generate l 1 in ϑ p+1 (i+1) with l 1 (p+1) = p+1 (representing the specular reflection case) and
l 2 = l in ϑ p+1 (i). Plugging R 1 f (n) (t (l) p , x (l) p , v (l) p ) into R p f (n) (t, x, v
) we obtain for each l the desired integral for l 1 and l 2
I l 1,2 = 1 j p+1 V (l 1,2 ) j 1 t (l 1,2 ) p+1 >0 f (n) (t (l 1,2 ) p+1 , x (l 1,2 ) p+1 , v p+1 ) dΣ p l 1,2 v 1 * , . . . , v (p+1) * .
Our computations thus lead to
R p f (n) (t, x, v) = p i=0 l∈ϑ p+1 (i+1) l(i+1)=p+1 (1 -α) i+1 ∆ p-i n I l + p i=0 l∈ϑ p+1 (i) l(i) =p+1 (1 -α) i ∆ p+1-i n I l
which can be rewritten as
R p f (n) (t, x, v) = p+1 i=1 l∈ϑ p+1 (i) l(i)=p+1 (1 -α) i ∆ p+1-i n I l + p i=0 l∈ϑ p+1 (i) l(i) =p+1 (1 -α) i ∆ p+1-i n I l
For i = 0 there can be no l such that l(i) = p + 1 and for i = p + 1 the only l in ϑ p+1 (i) is the identity and so there is no l such that l(i) = p + 1. In the end, for 0 i p + 1, we are summing exactly once every function l in ϑ p+1 (i). This concludes the proof of the lemma for R p .
At last, I p could be derived explicitely by the same kind of induction. However, I p contains all the contributions from characteristics reaching {t = 0} in at most p collisions against the boundary. It follows that I p is the sum of all the possible R k with k from 0 to p such that 1 {t k+1 0} to which we apply the preservation of e ν(v)t f (n) (t, x, v) along the backward characteristics starting at (t
(l) k , x (l) k , v (l)
k ) up to t. And since dσ x k+1 (v (k+1) * ) . . . dσ xp (v p * ) is a probability measure on R 3(p-k) we can always have an integral against
dσ x 1 (v 1 * ) . . . dσ xp (v p * ).
This concludes the proof for I p .
4.3.2.
Step 2: Estimates on the operators I p and R p . The next two lemmas give estimates on the operator I p and R p . Note that we gain a weight of ν(v) which will be of great importance when dealing with the bilinear operator. Lemma 4.5. There exists C m > 0 only depending on m such that for all p 1 and all h in L ∞
x,v (m),
I p (h)(t) L ∞ x,v (m) pC m e -ν 0 t h L ∞ x,v (m) .
Moreover we also have the following inequality for all
(t, x, v) in R + × Ω × R 3 m(v) |I p (h)| (t, x, v) pC m ν(v)e -ν(v)t + e -ν 0 t h L ∞ x,v (mν -1 ) .
Proof of Lemma 4.5. We only prove the second inequality as the first one follows exactly the same computations without multiplying and dividing by ν(v
(l) k ). Bounding by the L ∞
x,v (mν -1 )-norm out of the definition (4.11) gives
|I p (h)(t, x, v)| h L ∞ x,v (mν -1 ) p k=0 k i=0 (1 -α) i ∆ k-i n l∈ϑ k (i) 1 j p V (l) j 1 t (l) k >0, t (l) k+1 0 ν(v (l) k ) m(v (l) k ) e -ν(v (l) k )t (l) k dΣ k l . (4.13)
Fix k, i and l. Then by definition of (v
(l) k ): either v (l) k = V 1 (. . . (V 1 (x, v))
)) k iterations (case of k specular reflections which means that l is the identity) or there exists J in {1, . . . , p} such that v
(l) k = V 1 (. . . (V 1 (x j , v J * )))) k -J iterations.
Since m, ν and µ are radially symmetric this yields
1 j p V (l) j 1 m(v (l) k ) e -ν(v (l) k )t (l) k dΣ k l = 1 j J V (l) j µ(v)ν(v (l) k ) m(v J * )µ(v J * ) k-1 j=0 e -ν(v (l) j )(t (l) j -t (l) j+1 ) e -ν(v (l) J )t (l) k dσ x 1 . . . dσ x J . (4.14)
We use the convention that v 0 * = v so that this formula holds in both cases.
In the case J = 0, all the collisions against the boundary were specular reflections and so for any j, v (l) j is a rotation of v and t (l) k does not depend on any v j * . As ν is rotation invariant the exponential decay inside the integral is exactly e -ν(v)(t-t (l)
k ) e -ν(v)t (l) k . The dσ x j are probability measures and therefore in the case when J is zero
(4.14) = ν(v) m(v) e -ν(v)t .
In the case J = 0 we directly bound the exponential decay by e -ν 0 t and integrate all the variable but v J * . Therefore, by definition (4.5) of dσ x (4.14) c µ e -ν 0 t µ(v)
v J * •n(x (l) J ) ν(v J * ) m(v J * ) v J * • n(x (l) J ) dv J * C m m(v) e -ν 0 t ,
where we used the boundedness and integrability assumptions on m (4.1).
To conclude we plug our upper bounds on (4.14) inside (4.13) and use
p k=0 k i=0 l∈ϑ k (i) (1 -α) i ∆ k-i n = p k=0 1 - α n k to finally get m(v) |I p (h)(t, x, v)| p ν(v)e -ν(v)t + C m e -ν 0 t h L ∞ x,v (mν -1 )
which concludes the proof.
The estimate we derive on R p needs to be more subtle. The main idea behind it is to differentiate the case when the characteristics come from a majority of pure specular reflections, and therefore has a small contribution because of the multiplicative factor (1 -α) k , from the case when they come from a majority of diffusions, and therefore has a small contribution because of the small number of such possible composition of diffusive boundary condition. Lemma 4.6. There exists C m > 0 only depending on m and N, C > 0 only depending on α and the domain Ω such that for all T 0 > 0, if
p = N CT 0 + 1
, where [•] stands for the floor function; then for all h = h(t, x, v) and for all t in [0,
T 0 ] sup s∈[0,t] e ν 0 s R p (h)(s) L ∞ x,v (m) C m 1 2 [ CT 0] sup s∈[0,t] e ν 0 s 1 {t 1 >0} h(s) L ∞ Λ + (m) .
Moreover, the following inequality holds for all
(t, x, v) in R + × Ω × R 3 and all ε in [0, 1], m(v) |R p (h)| (t, x, v) C m e -εν 0 t 1 2 [CT0] ν(v)e -ν(v)(1-ε)t + e -ν 0 (1-ε)t × sup s∈[0,t] e εν 0 s h(s) L ∞ Λ + (mν -1 ) .
Proof of Lemma 4.6. Let (t, x, v) in R × Ω×R 3 . Again, we shall only prove the second inequality, the first one being dealt with exactly the same way. First, the exponential decay inside dΣ p l (see Lemma 4.4) is bounded by e -ν 0 (t-t (l) p )
if there is at least one diffusion or by e -ν(v)(t-t (l) p ) if only specular reflections occur in the p rebounds (because then the reflection preserves |v| and ν only depends on the norm ), that is i = p and l = Id. Second, by definition of (t
(l) k , x (l) k , v (l) k ) (see Lemma 4.4) we can bound 1 t (l) p+1 >0 h(t (l) p , x (l) p , v (l) p ) m(v (l) p ) = 1 t 1 (t (l) p ,x (l)
p ,v
(l) p )>0 h(t (l) p , x (l) p , v (l) p ) m(v (l) p ) 1 t (l) p >0 ν(v (l) p ) 1 {t 1 >0} h(t (l) p ) L ∞ Λ + (mν -1
) . We thus obtain the following bound
|R p (h)(t, x, v)| p 1 i=0 l∈ϑp(i) (1 -α) i ∆ p-i n 1 j p V (l) j 1 t (l) p >0 µ(v)ν(v (l) p ) m(v (l) p )µ(v (l) p ) × e -ν 0 (t-t (l) p ) 1 {t 1 >0} h(t (l) p ) L ∞ Λ + (mν -1 ) dσ x (l) 1 . . . dσ x (l) p + (1 -α) p 1 t (Id) p >0 ν(v)e -ν(v)(t-t (Id) p ) 1 {t 1 >0} h(t (Id) p ) L ∞ Λ + (mν -1 )
.
Which implies for 0 ε 1
e εν 0 t m(v) |R p (h)(t, x, v)| p i=0 l∈ϑp(i) (1 -α) i ∆ p-i n 1 j p V (l) j 1 t (l) p >0 µ(v)m(v) m(v (l) p )µ(v (l) p ) dσ x (l) 1 ..dσ x (l) p × ν(v)e -ν(v)(1-ε)t + e -ν 0 (1-ε)t sup s∈[0,t] e εν 0 s 1 {t 1 >0} h(s) L ∞ Λ + (mν -1
p = t (l) p (t, x, v, v (l) 1 , v (l) 2 , . . . , v (l)
p ) and thus for all j in {1, . . . , p},
1 t (l) p >0
1 t (l) p-j >0 . Following the reasoning of the proof of Lemma 4.5, for fixed i and l, there exists J in {0, . . . , p} such that v
(l) p = V 1 (. . . (V 1 (x J , v J * ))
)) p -J iterations, with the convention that v 0 * = v. The measures dσ x are probability measures and the functions m, ν and µ are rotation invariant. Therefore
1 j p V (l) j 1 t (l) p >0 µ(v)m(v) m(v (l) p )µ(v (l) p ) dσ x (l) 1 . . . dσ x (l) p V (l) j µ(v)m(v) m(v (l) J * )µ(v (l) J * ) dσ x (l) J * (v j * ) 1 j J -1 v j * •n(x (l) j )>0 1 t (l) (J -1) * >0 dσ x (l) 1 ..dσ x (l) J -1 .
In the case J = 0 we have v (l)
J * = v and therefore the above is exactly one. In the case J 1, assumption (4.1) on m implies that the integral over v (l)
J * is bounded uniformly by C m . So we have
1 j p V (l) j 1 t (l) p >0 µ(v)m(v) m(v (l) p )µ(v (l) p ) dσ x (l) 1 ..dσ x (l) p C m 1 j J -1 V (l) j 1 t (l) (J -1) * >0 dσ x (l) 1 ..dσ x (l) J -1 . (4.16)
Plugging (4.16) into (4.15) gives
e εν 0 t m(v) |R p (h)(t, x, v)| C m F p (t) ν(v)e -ν(v)(1-ε)t + e -ν 0 (1-ε)t × sup s∈[0,t] e ν 0 s 1 {t 1 >0} h(s) L ∞ Λ + (mν -1 )
(4.17 with
F p (t) = sup x,v p i=0 l∈ϑp(i) (1 -α) i ∆ p-i n 1 j J -1 V (l) j 1 t (l) (J -1) * >0 dσ x (l) 1 ..dσ x (l) J -1 .
It remains to prove an upper bound on F p (t) for 0 t T 0 when T 0 and p are large. Let T 0 > 0, p in N and 0 < δ < 1 to be determined later.
For any given i in {1, . . . , p} and l in ϑ p (i) we define the non-grazing sets for all j in {1, . . . , p} as
Λ (l),δ j = v (l) j • n(x (l) j ) δ ∩ v (l) j 1 δ .
By definition of the backward characteristics we have x
(l) j -x (l) j+1 = (t (l) j -t (l) j+1 )v (l) j . Since Ω is a C 1 bounded it is known [21, Lemma 2] that there exists C Ω > 0 such that ∀v (l) j ∈ Λ (l),δ j , t (l) j -t (l) j+1 v (l) j • n(x (l) j ) C Ω v (l) j δ 3 C Ω . Therefore, for t in [0, T 0 ], if t (l) (J-1) * (t, x, v, v (l) 1 , v (l) 2 , . . . , v (l)
J-1 ) > 0 then there can be at most [C Ω T 0 δ -3 ] + 1 velocities v (l) j in Λ (l),δ j . Among these, we have exactly k velocities v (l) j that are integration variables v j * and the rest are specular reflections. Since i represents the total number of specular reflections, it remains exactly p -i -k integration variables that are not in any Λ (l),δ j . Recalling that dσ x is a probability measure, if v (l) j is a specular reflection we bound the integral in v j * by one. All these thoughts yield
F p (t) sup x,v p i=0 l∈ϑp(i) (1 -α) i ∆ p-i n × C Ω T 0 δ 3 +1 j=0 j k=0 exactly k of v * ∈ Λ (l),δ , j -k of specular in Λ (l),δ , p -i -k of v * not in Λ (l),δ 1 m J-1 dσ x (l) m (v m * ) p i=0 l∈ϑp(i) (1 -α) i ∆ p-i n C Ω T 0 δ 3 +1 j=0 j k × j k=0 sup t,x,v,i,l,j Λ (l),δ dσ x (l) j (v * ) k sup t,x,v,i,l,j v * / ∈Λ (l),δ dσ x (l) j (v * ) p-i-k
.
In what follows we denote by C any positive constant independent of t, x, v, i, l and j. We bound first v * / ∈Λ (l),δ dσ x (l)
j (v * ) 0<v * •n(x (l) j ) δ dσ x (l) j ) + |v * | δ -1 dσ x (l) j )
Cδ and second we bound by one the integrals on Λ (l),δ . With Cδ < 1 we end up with
F p (t) p i=0 l∈ϑp(i) (1 -α) i (C∆ n δ) p-i C Ω T 0 δ 3 +1 j=0 j k=0 j k (Cδ) -k p i=0 l∈ϑp(i) (1 -α) i (C∆ n δ) p-i C Ω T 0 δ 3 +1 j=0 1 + 1 Cδ j Cδ 1 + 1 Cδ C Ω T 0 δ 3 +2 p i=0 p i (1 -α) i (Cαδ) p-i 2 1 + 1 Cδ C Ω T 0 δ 3 +1 ((1 -α) + Cδα) p .
Since α > 0 we can choose δ > 0 small enough such that (1 -α)
+ Cδα = α 0 < 1. Then choose N in N large enough such that 1 + 1 Cδ α N 0 1 2 .
Finally choose p = N( C Ω T 0 δ 3 + 1). It follows that
F p (t) 2 1 + 1 Cδ α N 0 C Ω T 0 δ 3 +1 1 2 C Ω T 0 δ 3 .
This inequality with (4.17) concludes the proof of the lemma.
4.3.3.
Step 3: Exponential decay and convergence of f (n) . Fix T 0 > 0 to be chosen later and choose p = p R (T 0 ) defined in Lemma 4.6. We have that for all n in N,
• by (4.6), for every (t,
x, v) in R + × Ω × R 3 , 1 {t 1 (t,x,v)} 0 f (n) (t, x, v) = e -ν(v)t f (n) 0 (x -tv, v) and hence (4.18) sup s∈[0,t] e ν 0 s 1 {t 1 (t,x,v)} 0 f (n) (t, x, v) L ∞ x,v (m) f (n) 0 L ∞ x,v (m) f 0 L ∞ x,v (m) ;
• by Lemmas 4.4, 4.5 and 4.6, for every (t,
x, v) in [0, T 0 ] × Ω × R 3 sup s∈[0,t] e ν 0 s 1 {t 1 (s,x,v)>0} f (n) (s, x, v) L ∞ x,v (m) sup s∈[0,t] e ν 0 s I p (f (n) 0 )(s) L ∞ x,v (m) + sup s∈[0,t] e ν 0 s R p (h)(s) L ∞ x,v (m) pC m f 0 L ∞ x,v (m) + C m 1 2 [CT0] sup s∈[0,t] e ν 0 s 1 {t 1 (s,x,v)>0} f (n) (s, x, v) L ∞ Λ + (m) (4.19)
We recall Lemma 4.6 and we have p R (T 0) N (CT 0 +1). Let ν ′ 0 in (0, ν 0 ). Suppose T 0 was chosen large enough such that
C m 1 2 [ CT 0] 1 2 and 2C m N(CT 0 + 1)e -ν 0 T 0 e -ν ′ 0 T 0 .
Applying (4.19) at T 0 gives
1 {t 1 (T 0 )>0} f (n) (T 0 ) L ∞ x,v (m) 2C m p R (T 0 )e -ν 0 T 0 f 0 L ∞ x,v (m) e -ν ′ 0 T 0 f 0 L ∞ x,v (m) ,
and with (4.18) we finally have
f (n) (T 0 ) L ∞ x,v (m) e -ν ′ 0 T 0 f 0 L ∞ x,v (m) .
We could now start the proof at T 0 up to 2T 0 and iterating this process we get
∀n ∈ N, f (n) (nT 0 )1 t 1 >0 L ∞ x,v (m) e -ν ′ 0 T 0 f (n) ((n -1)T 0 L ∞ x,v (m) e -2ν ′ 0 T 0 f (n) ((n -2)T 0 ) L ∞ x,v (m) . . . e -ν ′ 0 nT 0 f 0 L ∞ x,v (m) .
Finally, for all t in [nT 0 , (n + 1)T 0 ] we apply (4.19) with the above to get
f (n) 1 t 1 >0 (t) L ∞ x,v (m) 2C m p R (T 0 )e -ν 0 (t-nT 0 ) f (n) (nT 0 ) L ∞ x,v (m) 2C m p R (T 0 )e -ν ′ 0 t e -(ν 0 -ν ′ 0 )(t-nT 0 ) f 0 L ∞ x,v (m) .
Hence the uniform control in t, where C 0 > 0 depends on m, T 0 and ν ′ 0 , ∃C 0 > 0, ∀t 0,
f (n) 1 t 1 >0 (t) L ∞ x,v C 0 e -ν ′ 0 t f 0 L ∞ x,v (m)
which combined with (4.18) implies
(4.20) ∀n ∈ N, ∀t 0, f (n) (t) L ∞ x,v (m) max {1, C 0 } e -ν ′ 0 t f 0 L ∞ x,v (m) .
Since (4.18) and (4.19) holds for x in Ω, inequality (4.20) holds in
L ∞ Ω × R 3 (m). Therefore, f (n) n∈N is bounded in L ∞ t L ∞ Ω × R 3 (m) and converges, up to a subsequence, weakly-* towards f in L ∞ t L ∞ Ω × R 3 (m)
and f is a solution to ∂ t f = G ν f satisfying the Maxwell boundary condition and with initial datum f 0 . Moreover, we have the expected exponential decay for f thanks to the uniform (4.20). This concludes the proof of Theorem 4.1 and we now prove Corollary 4.2.
Proof of Corollary 4.2. Thanks to the convergence properties of f (n) n∈N , Lemmas 4.4, 4.5 and 4.6 are directly applicable to the semigroup S Gν (t) with ∆ n replaced by α. Therefore, as usual, for f s in L ∞
x,v (m) we decompose into t -s t min (x, v) and t -s t min , which gives thanks to (4.9)
t 0 S Gν (t -s)f s (x, v) ds = t max{0,t-t min } e -ν(v)(t-s) f s (x -(t -s)v, v) ds + max{0,t-t min } 0 I p (f s )(t -sx, v) ds + max{0,t-t min } 0 R p (S Gν f s )(t -s, x, v) ds. (4.21)
Let ε be in (0, 1). We bound e -ν(v)(t-s) e -εν 0 t e -(1-ε)ν(v)(t-s) e -εν 0 s and thus, using the estimate with a gain of weight for I p in Lemma 4.5, we control from above the absolute value of the first two terms by
t max{0,t-t min } e -ν(v)(t-s) f s (x -(t -s)v, v) ds + max{0,t-t min } 0 I p (f s )(t -sx, v) ds pC m t 0 ν(v)e -ν(v)(t-s) + e -ν 0 (t-s) f s L ∞ x,v (mν -1 ) ds pC m e -εν 0 t t 0 ν(v)e -(1-ε)ν(v)(t-s) + e -(1-ε)ν 0 (t-s) sup s∈[0,t] e εν 0 s f s L ∞ x,v (m) C m p 1 -ε e -εν 0 t sup s∈[0,t] e εν 0 s f s L ∞ x,v (mν -1 ) . (4.22)
The third term is treated using Lemma 4.6 with an exponential weight e εν 0 t . This yields
max{0,t-t min } 0 R p (S Gν f s )(t -s, x, v) ds C m 1 2 [CT0] max{0,t-t min } 0 e -εν 0 (t-s) ν(v)e -(1-ε)ν(v)(t-s) + e -(1-ε)ν 0 (t-s) × sup s * ∈[0,t-s] e εν 0 s * S Gν (s * )(f s ) L ∞ x,v (mν -1 ) ds
which is further bounded as
C m 1 2 [ CT 0] e -εν 0 t t 0 ν(v)e -(1-ε)ν(v)(t-s) + e -(1-ε)ν 0 (t-s) × sup s * ∈[0,t-s] e εν 0 (s+s * ) S Gν (s * )(f s ) L ∞ x,v (mν -1 ) ds.
Since mν -1 satisfies the requirements of Theorem 4.1, we can use the exponential decay of S Gν (s * ) with the exponential rate being εν 0 < ν 0 and obtain
max{0,t-t min } 0 R p (S Gν f s )(t -s, x, v) ds C m 1 -ε 1 2 [CT0] e -εν 0 t sup s∈[0,t] e εν 0 s f s L ∞ x,v (mν -1 ) . (4.23)
For any T 0 1, 2 [ CT 0] 2 -1 and thus (4.23) becomes independent of T 0 and holds for all t 0. Plugging (4.22) and (4.23) into (4.21) yields the expected gain of weight with exponential decay.
L ∞ theory for the linear operator with Maxwellian weights
As explained in the introduction, the L 2 setting is not algebraic for the bilinear operator Q. We therefore need to work within an L ∞ framework. This section is devoted to the study of the semigroup generated by the full linear operator together with the Maxwell boundary condition in the space L ∞
x,v µ -ζ with ζ in (1/2, 1). This weight allows us to obtain sharper estimates on the compact operator K and thus extend the validity of our proof up to α = 2/3. In this section we establish the following theorem.
G (t) on L ∞ x,v (µ -ζ ). Moreover, there exists λ ∞ and C ∞ > 0 such that ∀t 0, S G (t) (Id -Π G ) L ∞ x,v (µ -ζ ) C ∞ e -λ∞t ,
where Π G is the orthogonal projection onto Ker(G) in L 2
x,v µ -1/2 (see (3.8)). The constants C ∞ and λ ∞ are explicit and depend on α, ζ and the collision kernel.
5.1.
Preliminaries: pointwise estimate on K and L 2 -L ∞ theory. We recall that L = -ν(v) + K. The following pointwise estimate on K has been proved in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF]Lemma 3] for hard sphere models and [8, Lemma 5.2] for more general kernels. Lemma 5.2. There exists k(v, v * ) 0 such that for all v in R 3 ,
K(f )(v) = R 3 k(v, v * )f (v * ) dv * . Moreover, for ζ in [0, 1) there exists C ζ > 0 and ε ζ > 0 such that for all ε in [0, ε ζ ), R 3 |k(v, v * )| e ε 8 |v-v * | 2 + ε 8 | |v| 2 -|v * | 2 | 2 |v-v * | 2 µ(v) -ζ µ(v * ) -ζ dv * C ζ 1 + |v| .
We now prove a more precise and more explicit control over the operator K. The idea behind it is that as ζ goes to 1, the operator K gets closer to the collision frequence 3ν(v).
Lemma 5.3. There exists
C K > 0 such that for all ζ in [1/2, 1], ∀f ∈ L ∞ x,v (µ -ζ ), K(f ) L ∞ x,v (ν -1 µ -ζ ) C K (ζ) f L ∞ x,v (µ -ζ )
where
C K (ζ) = 3 + C K (1 -ζ).
Proof of Lemma 5.3. The change of variable σ → -σ exchanges v ′ and v ′ * and we can so rewrite
(5.1) K(f )(v) = R 3 ×S 2 b(cos θ) |v -v * | γ [2µ ′ f ′ * -µf * ] dσdv * = K 1 (f ) -K 2 (f )
where K 1 and K 2 are just the integral divide into the two contributions.
We start with K 1 . We use the elastic collision identity µµ * = µ ′ µ ′ * to get
ν(v) -1 µ(v) -ζ K 1 (f )(v) 2ν(v) -1 R 3 ×S 2 b(cos θ) |v -v * | γ µ ζ * µ(v ′ ) 1-ζ f ′ * µ(v ′ * ) ζ dσdv * 2 f L ∞ x,v (µ -ζ ) ν(v) -1 R 3 ×S 2 b(cos θ) |v -v * | γ µ ζ * dσdv * .
But then by definition of ν(v),
R 3 ×S 2 b(cos θ) |v -v * | γ µ ζ * dσdv * = ν(v) + R 3 ×S 2 b(cos θ) |v -v * | γ (µ ζ * -µ * )dσdv * which implies, since b is bounded and ν(v) ∼ (1 + |v| γ ) (see (3.6)), R 3 ×S 2 b(cos θ) |v -v * | γ µ ζ * dσdv * ν(v)+C(1-ζ)ν(v) R 3 ×S 2 (1+|v * | γ ) |v * | 2 µ ζ * dσdv * .
To conclude we recall that ζ > 1/2 and the integral above on the right-hand side is uniformly bounded in v * . Hence, (5.2)
∃C K > 0, K 1 (f ) L ∞ x,v (µ -ζ ) 2 + C K 2 (1 -ζ) f L ∞ x,v (µ -ζ ) .
The term K 2 is similar :
ν(v) -1 µ(v) -ζ K 2 (f )(v) µ 1-ζ ν -1 R 3 ×S 2 b(cos θ) |v -v * | γ |f * | dσdv * f L ∞ x,v (µ -ζ ) ν(v) -1 R 3 ×S 2 b(cos θ) |v -v * | γ µ ζ * dσdv * 1 + C K 2 (1 -ζ) f L ∞ x,v (µ -ζ ) .
Plugging the above and (5.2) inside (5.1) concludes the proof.
We conclude this preliminary section with a statement of the L 2 -L ∞ theory that will be at the core of our main proof. It follows the idea developed in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF] that the L 2 theory of previous section could be used to construct a L ∞ one by using the flow of characteristics to transfer pointwise estimates at x -vt into an integral in the space variable. The proof can be found in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF]Lemma 19] and holds as long as
L ∞ x,v (w) ⊂ L 2 x,v (µ -1/2 ).
Proposition 5.4. Let ζ be in [1/2, 1) and assume that there exist T 0 > 0 and
C T 0 , λ > 0 such that for all f (t, x, v) in L ∞ x,v (µ -ζ ) solution to (5.3) ∂ t f + v • ∇ x f = L(f )
with Maxwell boundary condition α > 0 and initial datum f 0 , the following holds
∀t ∈ [0, T 0 ], f (t) L ∞ x,v (µ -ζ ) e λ(T 0 -2t) f 0 L ∞ x,v (µ -ζ ) + C T 0 t 0 f (s) L 2 x,v (µ -1/2 ) ds.
Then for all 0 < λ < min {λ, λ G }, defined in Theorem 3.1, there exists C > 0 independent of f 0 such that for all f solution to
(5.3) in L ∞ x,v (µ -ζ ) with Π G (f ) = 0, ∀t 0, f (t) L ∞ x,v (µ -ζ ) Ce -λt f 0 L ∞ x,v (µ -ζ ) .
5.2.
A crucial estimate between two consecutive collisions. The core of the L ∞ estimate is a delicate control over the action of K in between two rebounds. We define the following operator
(5.4) K(f )(t, x, v) = µ -ζ (v) t max{0,t min (x,v)} e -ν(v)(t-s) K(f (s))(x -(t -s)v, v) ds.
We shall prove the following estimate of this functional along the flow of solutions.
α > 0 and 0 < C
(2)
α < 1 such that for any T 0 > 0 there exists C T 0 > 0 such that if f is solution to ∂ t f = Gf with Maxwell boundary conditions then for all t in [0, T 0 ]
e εν 0 t K(f )(t, x, v) C (1) α f 0 L ∞ x,v (µ -ζ ) + C T 0 t 0 f (s) L 2 x,v (µ -1/2 ) ds + C (2) α 1 -e -ν(v)(1-ε) min{t,t min (x,v)} sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) .
and at last
J dif f =α 2 1 {s>t * 1 } c 2 µ e -ν(v * )t * 1 µ(v * ) V 1 1 {s-t * 1 > t 1} e -ν(v 1 * ) t 1 µ(v 1 * ) × V 1 f (s -t * 1 -t 1 , x 1 , v 1 * ) ( v 1 * • n( x 1 )) (v 1 * • n(x 1 )) d v 1 * dv 1 * .
(5.12)
We now bound the operator K in (5.4) for each of the terms above. We will bound most of the terms uniformly because for any ε in [0, 1], from Lemma 5.3 and e -ν(v)(t-s) e -εν 0 t e -ν(v)(1-ε)(t-s) e εν 0 s the following holds
|K(F )| C K (ζ)e -εν 0 t t max{0,t-t 1 } ν(v)e -ν(v)(1-ε)(t-s) ds sup s∈[0,t] e εν 0 s F (s) L ∞ x,v (µ -ζ ) C K (ζ) 1 -ε 1 -e -ν(v)(1-ε) min{t,t 1 } e -εν 0 t sup s∈[0,t] e εν 0 s F (s) L ∞ x,v (µ -ζ ) (5.13)
Step 2: Estimate for J 0 . We straightforwardly bound J 0 in (5.6) thanks to the exponential decay with rate εν 0 of S Gν (t) in Theorem 4.1 (that also holds on the boundary) for all (x, v) in Ω × B(0, R).
|J 0 | C ε µ -ζ (v) e -εν 0 t f 0 L ∞ x,v (µ -ζ ) 1 + µ -ζ (v) µ -ζ (v 1 ) + c µ µ -ζ (v)µ(v) V 1 v 1 * • n(x 1 ) µ -ζ (v 1 * ) dv 1 * .
To conclude we use the fact that |v 1 | = |v| and µ -ζ is radially symmetric. This yields
(5.14) |J 0 | C ε µ -ζ (v) -1 e -εν 0 t f 0 L ∞ x,v (µ -ζ )
Bounding J 0 (t -t * 1 , x 1 , v 1 * ) exactly the same way yields the same bound for the full J 0 in (5.9). We conclude thanks to (5.13)
(5.15) ∀0 ε 1, ∀(t, x, v), |K(J 0 )(t, x, v)| C ε,ζ f 0 L ∞ x,v (µ -ζ ) e -εν 0 t .
Step 3: Estimate for J K . We write K under its kernel form with Lemma 5.2 in (5.7). To shorten notations and as we shall legitimate the following change of variable we use y
(v * ) = x -(t -s)v -(s -s * )v * . Since ν(v) ν 0 we see |K(J K )| t 0 s 0 e -ν 0 (t-s * ) R 3 |k(v, v * )| µ -ζ µ -ζ * 1 {y(v * )∈Ω} R 3 |k(v * , v * * )| µ -ζ * µ -ζ * * f (s * , y(v * ), v * * )µ -ζ *
(v * , v * * )| e -ε ζ 8 R 2 k(v * , v * * )e ε ζ 8 |v * -v * * | 2 . Taking the L ∞ x,v (µ -ζ
)-norm of f out of the integral and decomposing the exponential decay as for (5.13), we infer that for all ε in [0, 1] and any R 1:
{terms in (5.16) outside {|v| R ∩ |v * | 2R ∩ |v * * | 3R}} C ε,ζ 1 1 + R + e -ε ζ 8 R 2 e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) (5.17)
In order to deal with the remaining terms in (5.16) we first approximate k(•, •) by a smooth and compactly supported function k R uniformly in the following sense: (5.18) sup
|V | 3R |v * | 3R |k(V, v * ) -k R (V, v * )| µ -ζ (V ) dv * 1 1 + R . We decompose k = k R + (k -k R )
C ε,ζ 1 + R e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) + C R,ζ t 0 s 0 e -ν 0 (t-s * ) {|v * | 2R} {|v * * 3R|} 1 {y(v * )∈Ω} |f (s * , y(v * ), v * * )|
At last, we would like to apply the change of variable v * → y(v * ) which has Jacobian (s -s * ) -3 , and so is legitimate if s -s * η > 0. We thus consider η > 0 and decompose the integral over s * into an integral on [s -η, s] and an integral on [0, s -η]. In the first one we bound as before which gives the following upper bound
C ε,ζ 1 1 + R + η e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) + C R,ζ t 0 s-η 0 e -ν 0 (t-s * ) {|v * | 2R} {|v * * 3R|} 1 {y(v * )∈Ω} |f (s * , y(v * ), v * * )| .
We now perform v * → y(v * ) inside the remaining integral term which makes f (s * ) L 1 x,v appear, which is itself controlled by f (s * ) L 2 x,v (µ -1/2 ) thanks to Cauchy-Schwarz inequality. We therefore proved that
{terms in (5.16) in {|v| R ∩ |v * | 2R ∩ |v * * | 3R}} C ε,ζ 1 1 + R + η e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) + C R,ζ,η t 0 f (s) L 2 x,v (µ -1/2 ) . (5.19)
Gathering (5.17) and (5.19) inside (5.16) finally yields
|K(J K )| C ε,ζ η + 1 1 + R + e -ε ζ 8 R 2 e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) + C R,ζ,η t 0 f (s) L 2 x,v (µ -1/2 ) .
(5.20)
It remains to deal with the second term in J K given by (5.10). However, by definition of the boundary operator P Λ and since
|v 1 * • n(x * 1 )| C ζ µ -ζ (v 1 * ) it follows directly that (5.21) µ -ζ (v * )P Λ (J K (s -t * 1 , x * 1 ))(v * ) c µ C ζ µ 1-ζ * R 3 J K (s -t * 1 , x * 1 , v 1 * )µ -ζ (v 1 * ) dv 1 * .
We bound the integral term uniformly exactly as (5.20) for J K and since ζ < 1 the integral over v * of µ(v * ) 1-ζ is finite. As a conclusion, for all ε in (0, 1), R 1 and η > 0,
K(J K ) C ε,ζ η + 1 1 + R + e -ε ζ 8 R 2 e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) + C R,ζ,η t 0 f (s) L 2 x,v (µ -1/2 ) .
(5.22)
Step 4: Estimate for J f . The control of J f given by (5.11) is straightforward for the first term from (5.13) by taking the
L ∞ x,v (µ -ζ )-norm of f (remember that µ(v * ) = µ(v *
1 ) for a specular reflection). The second term is dealt with the same way and noticing that, in the spirit of Lemma 5.3,
(5.23) c µ µ 1-ζ (v * ) V 1 µ -ζ (v 1 * ) (v 1 * • n(x * 1 )) dv 1 * 1 + C 0 (1 -ζ)
where C 0 > 0 is a universal constant. In the end,
|K(J f )| 1 {t t 1 } (1 -α)C K (ζ) 1 -ε (1 + α(1 + C 0 (1 -ζ))) 1 -e -ν(v)(1-ε) min{t,t 1 } × e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) . (5.24)
Step 5: Estimate for J diff . The last term J dif f given by (5.12) is treated thanks to a change of variable on the boundary and a trace theorem from [START_REF] Esposito | Non-isothermal boundary in the Boltzmann theory and Fourier law[END_REF]. First,
µ -ζ * J dif f α 2 c 2 µ µ 1-ζ (v * ) R 3 d v 1 * R 3 dv 1 * e -ν 0 (t * 1 + t 1 ) |v 1 * | µ(v 1 * ) |n( x 1 ) • v 1 * | f (s -t * 1 -t 1 , x 1 , v 1 * ) . (5.25)
Using the spherical coordinate v 1 * = r 1 * u 1 * with u 1 * in S 2 we have by definition
(5.26) t 1 = t * min (x 1 , v 1 * ) = t * 1 (x 1 , u 1 * ) r 1 * and x 1 = x 1 -t 1 v 1 * = x 1 -t * 1 (x 1 , u 1 * )u 1 *
and, using the parametrization (θ, φ) of the sphere S 2 we compute
µ -ζ * J dif f C R 3 d v 1 * ∞ 0 2π 0 π 0 e -ν 0 (t * 1 + t 1 ) f (s -t * 1 -t 1 , x 1 , v 1 * ) |n( x 1 ) • v 1 * | µ(r 1 * )r 3 1 * sin θ dr 1 * dθdφ . (5.27)
We want to perform the change of variable (r 1 * , θ, φ) → (s -t * 1 -t 1 , x 1 ). Thanks to (5.26) we have
∂ r 1 * t 1 = - 1 r 2 1 * t * 1 (x 1 , u 1 ) and ∂ r 1 * x 1 = 0.
The jacobian of (θ, φ) → x 1 has been calculated in [12, Proof of Lemma 2.3]. More precisely, from [12, (2.8)] we have, calling ξ(x) a parametrization of ∂Ω
|det (∂ θ,φ x 1 )| t * 1 (x 1 , u 1 * ) 2 sin θ |n( x 1 ) • u 1 * | × ∂ 3 ξ( x 1 ) ∇ξ( x 1 )
.
Therefore, we bound from below the full jacobian by
det ∂ r 1 * ,θ,φ t -t * 1 -t 1 , x 1 t * 1 (x 1 , u 1 * ) 3 sin θ r 2 1 * |n( x 1 ) • u 1 * | × ∂ 3 ξ( x 1 ) ∇ξ( x 1 )
.
The important fact is that x 1 belongs to ∂Ω and u 1 * is on S 2 . With these conditions, we know from [21, (40)], that t
* 1 (x 1 , u 1 * ) C Ω |n(x 1 ) • u 1 * | and hence det ∂ r 1 * ,θ,φ t -t * 1 -t 1 , x 1 C Ω |n(x 1 ) • u 1 * | 3 sin θ r 2 1 * × ∂ 3 ξ( x 1 ) ∇ξ( x 1
) .
We therefore need |n(x 1 ) • u 1 * | to be non zero and we thus decompose (5.25) into two integrals. The first one on {|n(x 1 ) • u 1 * | η} and the second on {|n(x 1 ) • u 1 * | η}. On the first one we take the L ∞
x,v (µ -ζ )-norm of f out and on the second one we use the spherical coordinates (5.27) and apply, as announced, the change of variable (r 1 * , θ, φ) → (s -t * 1 -t 1 , x 1 ), which is legitimate on this set. It follows, playing with the exponential decay as for (5.13),
µ -ζ * J dif f C ε e -εν 0 s sup s * ∈[0,s] e εν 0 s * f (s * ) L ∞ x,v (µ -ζ ) R 3 |n(x 1 )•u 1 * | η | v 1 * | |v 1 * | µ(v 1 * ) µ -ζ ( v 1 * ) d v 1 * dv 1 * + C η,ε e -εν 0 s R 3 d v 1 * s 0 ∂Ω e εν 0 s * |f (s * , y, v 1 * )| r 5 1 * µ(r 1 * ) |n(y) • v 1 * | dS(y)ds *
where we recall that dS(y) = |∂ 3 ξ(y)| -1 |∇ξ(y)| dy is the Lebesgue measure on ∂Ω and we also denoted r 1 * = r 1 * (s, y). Since ζ 1/2, the integral in the first term on the right-hand side above uniformly tends to 0 as η goes to 0. Hence, for any η > 0,
µ -ζ * J dif f C ε ηe -εν 0 s sup s * ∈[0,s] e εν 0 s * f (s * ) L ∞ x,v (µ -ζ ) + C ε,η e -εν 0 s s 0 Λ e εν 0 s * |f (s * , y, v 1 * )| dλ(y, v 1 * )ds * (5.28)
where dλ(x, v) is the boundary measure on the phase space boundary Λ (see Section 1.1).
We now decompose Λ into
Λ η = {(x, v) ∈ Λ, |n(x) • v| η or |v| η} and Λ -Λ η .
Since ζ 1/2 there exists a uniform C > 0 such that (5.29)
s 0 Λη e εν 0 s * |f (s * , y, v 1 * )| dλ(y, v 1 * ) Csη sup s * ∈[0,s] e εν 0 s * f (s * ) L ∞ x,v (µ -ζ ) .
For the integral on Λ -Λ η we use the trace lemma [12, Lemma 2.1] that states
Λ-Λη |f (s * , y, v 1 * )| dλ(y, v 1 * ) C η f (s * ) L 1 x,v + s * 0 f (s * * ) L 1 x,v + L(f (s * * )) L 1 x,v .
Thanks to Cauchy-Schwarz inequality and the boundedness property (see Section 3.1) of L in L 2
x,v (µ -1/2 ) we get (5.30)
s 0 Λ-Λη e εν 0 s * |f (s * , y, v 1 * )| dλ(y, v 1 * ) C η se εν 0 s s 0 f (s * ) L 2 x,v (µ -1/2 ) ds * .
Gathering (5.29) and (5.30) with (5.28) we conclude (5.31)
µ -ζ * J dif f C ε ηse -εν 0 s sup s * ∈[0,s] e εν 0 s * f (s * ) L ∞ x,v (µ -ζ ) + C ε,η s s 0 f (s * ) L 2 x,v (µ -1/2 ) .
Again, plugging this uniform bound in (5.13) yields for any ε in (0, 1) and η > 0,
|K(J dif f )| C ε,ζ ηte -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) + C ε,η,ζ t t 0 f (s) L 2 x,v (µ -1/2 ) .
(5.32)
Step 6: Choice of constants and conclusion. We consider T 0 > 0 and t in [0, T 0 ]. We bound the full K(f ) by gathering (5.15), (5.22), (5.24) and (5.32) into (5.8). It yields, for any ε, η in (0, 1) and R 1,
|K(f )| C ε,ζ e -εν 0 t f 0 L ∞ x,v (µ -ζ ) + C ε,ζ,η t t 0 f (s) L 2 x,v (µ -1/2 ) ds + C α (ζ) 1 -ε (1 -e -ν(v)(1-ε) min{t,t 1 } ) + C ε,ζ (η(1 + t) + 1 1 + R ) × e -εν 0 t sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) .
We used the following definition
C α (ζ) = (1 -α) (3 + C K (1 -ζ)) [1 + α(1 + C 0 (1 -ζ))] . For α in ( 2/3, 1], lim ζ→1 C α (ζ) = 3(1 -α)(1 + α) < 1.
We therefore choose our parameters as
(1) ζ sufficiently close to 1 such that C α (ζ) < 1, (2) ε sufficiently small so that C α (ζ)/(1 -ε) < 1, (3)
R large enough and η small enough such that
C ε,ζ (η(1 + T 0 ) + 1/(1 + R)) < 1 -C α (ζ)/(1 -ε).
Such choices terminates the proof.
5.3.
Semigroup generated by the linear operator. This subsection is dedicated to the proof of Theorem 5.1, that is uniqueness and existence of solutions to (5.3) together with the Maxwell boundary condition (1.2) in L ∞ x,v (µ -ζ ). Moreover if f 0 satisfies the conservation laws then f will be proved to decay exponentially.
Proof of Theorem 5.1. Let f 0 be in L ∞ x,v (µ -ζ ) with 1/2 < ζ < 1. If f is solution to (5.3) in L ∞ x,v (µ -ζ ) with initial datum f 0 then f belongs to L 2
x,v µ -1/2 and f (t) = S G (t)f 0 is in the latter space. This implies first uniqueness and second that Ker(G) and (Ker(G)) ⊥ are stable under the flow of the equation (5.3). It suffices to consider f 0 such that Π G (f 0 ) = 0 and to prove existence and exponential decay of solutions to (5.3) in L ∞
x,v (µ -ζ ) with initial datum f 0 . Thanks to boundedness property of K, the Duhamel's form (5.5) is a contraction, at least for small times. We thus have existence of solutions on small times and proving the exponential decay will also imply global existence. For now on we consider f as described in the Duhamel's expression (5.5).
Looking at previous section and using t 1 = t min (x, v), f can be implicitely written as (5.8)
f (t, x, v) =J 0 (t, x, v) + µ ζ K(f )(t, x, v) + αe -ν(v)t 1 P Λ (J K (t -t 1 , x 1 ))(t, x, v) + 1 {t>t 1 } [J f (t, x, v) + J dif f (t, x, v)] .
The L ∞
x,v (µ -ζ )-norm of each of these terms has already been estimated. More precisely, J 0 by (5.14), K(f ) by Proposition 5.5, P Λ (J K ) by (5.21), J f is direct from (5.11) and (5.23) and finally J dif f by (5.31). With the same kind of choices of constant as in Step 6 of the proof of Proposition 5.5 (note that ζ α and ε α are the same) we end up with
e εν 0 t f (t) L ∞ x,v (µ -ζ ) C ε,α f 0 L ∞ x,v (µ -ζ ) + sup x,v (C ∞ (x, v)) sup s∈[0,t] e εν 0 s f (s) L ∞ x,v (µ -ζ ) + C T 0 t 0 f (s) L 2 x,v (µ -1/2 ) ds
where
C ∞ = C (α) 2 (1 -e -ν(v)(1-ε) min{t,t 1 } ) + 1 {t>t 1 } (1 -α) (1 + α(1 + C 0 (1 -ζ))) e -ν(v)(1-ε)t 1
and thus, with our choice of constants,
C ∞ (x, v) max C (α) 2 , (1 -α)(1 + α) < 1 which implies ∀t ∈ [0, T 0 ], f (t) L ∞ x,v (µ -ζ ) C 1 e -εν 0 t f 0 L ∞ x,v (µ -ζ ) + C T 0 t 0 f (s) L 2 x,v (µ -1/2 ) ds.
To conclude we choose T 0 large enough such that C 1 e -ε 2 ν 0 T 0 1 so that assumptions of the L 2 -L ∞ theory of Proposition 5.4 are fulfilled so we can apply it, thus concluding the proof.
Perturbative Cauchy theory for the full nonlinear equation
This section is dedicated to establishing a Cauchy theory for the perturbed equation (6.1)
∂ t f + v • ∇ x f = L(f ) + Q(f, f ),
together with the Maxwell boundary condition (1.2) in spaces L ∞ x,v (m), where m is a polynomial or a stretched exponential weight. More precisely we shall prove that for any small initial datum f 0 satisfying the conservation of mass Π G (f 0 ) = 0 there exists a unique solution to (6.1) and that this solution decays exponentially fast and also satisfies the conservation of mass.
We divide our study in two different subsections. First, for any small f 0 , we build a solution that satisfies the conservation law and decays exponentially fast; this is the purpose of Subsection 6.1. Second, we prove the uniqueness to (6.1) when the initial datum is small in Subsection 6.2.
6.1. Existence of solutions with exponential decay. The present subsection is dedicated to the following proof of existence. Theorem 6.1. Let α be in ( 2/3, 1] and let m = e κ 1 |v| κ 2 with κ 1 > 0 and κ 2 in (0, 2)
or m = v k with k > k ∞ . There exists η > 0 such that for any f 0 in L ∞ x,v (m) with Π G (f 0 ) = 0 and f 0 L ∞ x,v (m) η,
there exist at least one solution f to the Boltzmann equation (6.1) with Maxwell boundary condition and with f 0 as an initial datum. Moreover, f satisfies the conservation of mass and there exist C, λ > 0 such that ∀t 0,
f (t) L ∞ x,v (m) Ce -λt f 0 L ∞ x,v (m) .
As explained in Section 2.1, we decompose (6.1) into a system of differential equations. More precisely, we shall decompose G = L -v • ∇ x as G = A + B in the spirit of [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF], where B is "small" compared to ν(v) and A has a regularising effect. We then shall construct (f 1 , f 2 ) solutions to the following system of equation
∂ t f 1 = Bf 1 + Q(f 1 + f 2 , f 1 + f 2 ) and f 1 (0, x, v) = f 0 (x, v), (6.2) ∂ t f 2 = Gf 2 + Af 1 and f 2 (0, x, v) = 0, (6.3)
each of the functions satisfying the Maxwell boundary condition. Note that for such functions, the function f = f 1 + f 2 would be a solution to (6.1) with Maxwell boundary condition and f 0 as initial datum. Subsection 6.1.1 explicitly describes the decomposition G = A+ B and gives some estimates on A, B and Q. Subsections 6.1.2 and 6.1.3 deal with each differential equation (6.2) and (6.3) respectively. Finally, Subsection 6.1.4 combines the previous theories to construct a solution to the full nonlinear perturbed Boltzmann equation. 6.1.1. Decomposition of the linear operator and first estimates. We follow the decomposition proposed in [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF].
For δ in (0, 1), we consider Θ δ = Θ δ (v, v * , σ) in C ∞ that is bounded by one everywhere, is exactly one on the set We define the splitting G = A (δ) + B (δ) , with
A (δ) h(v) = C Φ R 3 ×S 2 Θ δ [µ ′ * h ′ + µ ′ h ′ * -µh * ] b (cos θ) |v -v * | γ dσdv * and B (δ) h(v) = B (δ) 2 h(v) -ν(v)h(v) -v • ∇ x h(v) = G ν h(v) + B (δ) 2 h(v), where
B (δ) 2 h(v) = R 3 ×S 2 (1 -Θ δ ) [µ ′ * h ′ + µ ′ h ′ * -µh * ] b (cos θ) |v -v * | γ dσdv * .
The following lemmas give control over the operators A (δ) and B (δ) for m = e κ 1 |v| κ 2 with κ 1 > 0 and κ 2 in (0, 2) or m = v k with k > k ∞ . Their proofs can be found in [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF]Section 4] in the specific case of hard sphere (b = γ = 1) and for more general hard potential with cutoff kernels in [8, Section 6. x,v (m)
A (δ) (f ) L ∞ x,v (µ -ζ ) C A f L ∞ x,v (m) .
The constant C A is constructive and only depends on m, ζ, δ and the collision kernel.
Lemma 6.3. B (δ) 2 satisfies ∀f ∈ L ∞ x,v (m) , B (δ) 2 (f ) L ∞ x,v (ν -1 m) C B (δ) f L ∞ x,v (m) ,
where
C B (δ) > 0 is a constructive constant such that • if m = v k then lim δ→0 C B (δ) = 4 k -1 -γ 4πb ∞ l b ; • if m = e κ 1 |v| κ 2 then lim δ→0 C B (δ) = 0.
The operator
B (δ)
2 also has a smallness property as an operator from
L ∞ x,v (m) to L 1 v L ∞ v (|v| 2 ).
The following lemma is from [START_REF] Briant | Perturbative theory for the Boltzmann equation in bounded domains with different boundary conditions[END_REF]Lemma 5.7] in the case of polynomial weights m but is utterly applicable in the case of stretched exponential weights. Lemma 6.4. For any δ > 0 there exists C B (δ) such that for all f in L ∞
x,v (m),
B (δ) 2 (f ) L 1 v L ∞ x ( v 2 ) C B (δ) f L ∞ x,v (m) .
Moreover, the following holds: lim δ→0 C B (δ) = 0.
Remark 6.5. We emphasize here that for our choices of weights (see definition of k ∞ (2.1)), lim δ→0 C B (δ) = C B (m) < 1. Until the end of the present section we only consider 0 < δ small enough such that C B (δ) < 1.
We conclude this subsection with a control on the bilinear term in the L ∞ x,v setting.
Lemma 6.6. For all h and g such that Q(h, g) is well-defined, Q(h, g) belongs to [Ker(L)] ⊥ in L 2 v : π L (Q(h, g)) = 0. Moreover, there exists C Q > 0 such that for all h and g,
Q(h, g) L ∞ x,v (ν -1 m) C Q h L ∞ x,v (m) g L ∞ x,v (m) .
The constant C Q is explicit and depends only on m and the kernel of the collision operator.
Proof of Lemma 6.6. Since we use the symmetric definition of Q (1.6) the orthogonality property can be found in [START_REF] Briant | From the Boltzmann equation to the incompressible Navier-Stokes equations on the torus: A quantitative error estimate[END_REF]Appendix A.2.1]. The estimate follows directly from [START_REF] Gualdani | Factorization for non-symmetric operators and exponential H-theorem[END_REF]Lemma 5.16] and the fact that ν(v) ∼ v γ (see (3.6)).
6.1.2. Study of equation (6.2) in L ∞
x,v (m). In the section we study the differential equation (6.2). We prove well-posedness for this problem and above all exponential decay as long as the initial datum is small. We deal with the different types of weights m in the same way. Since the "operator norm" of B (δ) 2 tends to zero as δ tends to zero in the case of stretched exponential weight, one has a more direct proof in this case and we refer to [3, Section 5.2.2], where the author dealt with pure diffusion, for the interested reader. Proposition 6.7. Let m = e κ 1 |v| κ 2 with κ 1 > 0 and κ 2 in (0, 2)
or m = v k with k > k ∞ . Let f 0 be in L ∞ x,v (m) and g(t, x, v) in L ∞ x,v (m).
Then there exists δ m > 0 such that for any δ in (0, δ m ] there exist C 1 , η 1 and λ 1 > 0 such that if
f 0 L ∞ x,v (m) η 1 and g L ∞ t L ∞ x,v (m) η 1 ,
then there exists a solution f 1 to (6.4)
∂ t f 1 = G ν f 1 + B (δ) 2 f 1 + Q(f 1 + g, f 1 + g)
, with initial datum f 0 and satisfying the Maxwell boundary condition (1.2). Moreover, this solution satisfies
∀t 0, f 1 (t) L ∞ x,v (m) C 1 e -λ 1 t f 0 L ∞ x,v (m) .
The constants C 1 , η 1 and λ 1 are constructive and only depend on m, δ and the kernel of the collision operator.
Proof of Proposition 6.7. Thanks to Proposition 4.1, G ν combined with the Maxwell boundary condition generates a semigroup S Gν (t) in L ∞ x,v (m). Therefore if f 1 is solution to (6.4) then it has the following Duhamel representation almost everywhere in
R + × Ω × R 3 f 1 (t, x, v) =S Gν (t)f 0 (x, v) + t 0 S Gν (t -s) B (δ) 2 (f 1 (s)) (x, v) ds + t 0 S Gν (t -s) [Q(f 1 (s) + g(s), f 1 (s) + g(s))] (x, v)) ds.
To prove existence and exponential decay we use the following iteration scheme starting from h 0 = 0. (6.5)
h l+1 = S Gν (t)f 0 + t 0 S Gν (t -s) B (δ) 2 (h l+1 ) + Q(h l + g, h l + g) ds h l+1 (0, x, v) = f 0 (x, v).
A contraction argument on the Duhamel representation above would imply that (h l ) l∈N is well-defined in L ∞
x,v (m) and satisfies the Maxwell boundary condition (because S Gν does). The computations to prove this contraction property are similar to the ones we are about to develop in order to prove that (h l ) l∈N is a Cauchy sequence and we therefore only write down the latter.
Considering the difference h l+1 -h l we write, since Q is a symmetric bilinear operator,
h l+1 (t, x, v) -h l (t, x, v) = t 0 S Gν (t -s) B (δ) 2 (h l+1 (s) -h l (s)) ds + t 0 S Gν (t -s) [Q(h l -h l-1 , h l + h l-1 + g)(s, x, v)] ds (6.6)
As now usual, we write S Gν (t -s) under its implicit form after one rebound against the boundary (see (4.4) or Step 1 of the proof of Proposition 5.5). It reads
t 0 S Gν (t -s) B (δ) 2 (h l+1 (s) -h l (s)) (x, v) ds = t max{0,t-t 1 } e -ν(v)(t-s) B (δ) 2 (h l+1 (s) -h l (s)) (x -(t -s)v, v) ds + (1 -α)e -ν(v)t 1 1 {t>t 1 } t-t 1 0 S Gν ((t -t 1 ) -s)B (δ) 2 (h l+1 (s) -h l (s)) (x 1 , v 1 ) ds + αµe -ν(v)t 1 1 {t>t 1 } t-t 1 0 V 1 1 µ(v 1 * ) S Gν (t -t 1 -s)B (δ) 2 (h l+1 -h l ) (x 1 , v 1 * )dσ x 1 ds,
where t 1 = t min (x, v), x 1 = x -t 1 v and v 1 = R x 1 (v). Using the decomposition S Gν (f ) = I p (f ) + R p (S Gν (f )) given by Lemma 4.4, we obtain for all p 1:
(6.7) m(v) |h l+1 -h l | (t, x, v) = J B + J t 1 + J IB + J RB + J Q with the following definitions (6.8) J B = t max{0,t-t 1 } m(v)e -ν(v)(t-s) B (δ) 2 (h l+1 (s) -h l (s)) (x -(t -s)v, v) ds, (6.9) J t 1 = (1 -α)e -ν(v)t 1 1 {t>t 1 } m(v) |h l+1 -h l | (t -t 1 , x 1 , v 1 ), J IB =αµe -ν(v)t 1 1 {t>t 1 } t-t 1 0 V 1 m(v) µ(v 1 * ) × I p B (δ) 2 (h l+1 -h l ) (t -t 1 -s, x 1 , v 1 * ) dσ x 1 (v 1 * )ds, (6.10)
J RB =αµe -ν(v)t 1 1 {t>t 1 } t-t 1 0 V 1 m(v) µ(v 1 * ) × R p S Gν B (δ) 2 (h l+1 -h l ) (t -t 1 -s, x 1 , v 1 * ) dσ x 1 (v 1 * )ds, (6.11)
J Q = t 0 m(v) |S Gν (t -s) [Q(h l -h l-1 , h l + h l-1 + g)(s, x, v)]| ds + α1 {t>t 1 } × µe -ν(v)t 1 t-t 1 0 |S Gν (t -t 1 -s) [Q(h l -h l-1 , h l + h l-1 + g)(s, x 1 , v 1 )]| ds.
(6.12)
We now estimate each of these terms separately.
Estimate for J B and J t 1 These two terms are connected via t 1 and it is important to understand that the contributions of J B and J t 1 are interchanging.
By crudely bounding the integrand of (6.8) by the L ∞ x,v -norm and controlling B (δ) 2
by Lemma 6.3,
m(v) B (δ) 2 (h l+1 -h l ) (x -(t -s)v, v) ν(v) B (δ) 2 (h l+1 (s) -h l (s)) L ∞ x,v (mν -1 ) C B (δ)ν(v) h l+1 (s) -h l (s) L ∞ x,v (m) .
For ε in (0, 1) we have e -ν(v)(t-s) e -εν 0 t e -ν(v)(1-ε)(t-s) e εν 0 s and thus
J B C B (δ) 1 -ε 1 -e -ν(v)(1-ε) min{t,t 1 } e -εν 0 t sup s∈[0,t] e εν 0 s h l+1 (s) -h l (s) L ∞ x,v (m) . (6.13)
For J t 1 we notice in (6.9) that |v| = |v 1 | and therefore m(v) = m(v 1 ). Also, for t > t 1 and all ε in (0, 1) we have e -ν(v)t 1 e -(1-ε)ν(v)t 1 e -εν 0 t e εν 0 (t-t 1 ) and therefore (6.14)
J t 1 (1 -α)e -(1-ε)ν(v)t 1 e -εν 0 t 1 {t>t 1 } e εν 0 (t-t 1 ) (h l+1 -h l )(t -t 1 ) L ∞ x,v (m) .
From (6.13) and (6.14) we deduce (6.15) ∀ε ∈ (0, 1),
J B +J t 1 min (1 -α), C B (δ) 1 -ε e -εν 0 t sup s∈[0,t] e εν 0 s h l+1 -h l L ∞ x,v (m) .
Estimate for J IB Using Lemma 4.4 (replacing ∆ n by α) to control I p in (6.10) and bounding the exponential decay in dΣ k l by e -ν 0 (t-t 1 -s-t (l) k gives the following control
J IB t-t 1 0 e -ν 0 (t-s) V 1 p k=0 k i=0 l∈ϑ k (i) (1 -α) i α k-i p j=1 V j µ(v)m(v) µ(v (l) k ) × B (δ) 2 (h l+1 -h l ) (s, x (l) k -t (l) k v (l) k , v (l) k ) 0 j k dσ x (l) j (v j * )ds,
where the sequence (t
(l) k , x (l) k , v (l) k ) is associated to the initial point (t -t 1 -s, x 1 , v 1 * ).
For a given (k, i, l) there exists a unique J in {0, p} such that v
(l) k = V 1 (. . . (V 1 (x j , v J * )))) k -J iterations and thus (t (l) k , x (l) k , v (l)
k ) only depends on (t -t 1 -s, x, v, v 1 * , . . . , v J * ) and we can integrate the remaining variables. We remind here that dσ x j is a probability measure on V j and also
dσ x J (v J * ) = c µ µ(v J * )v J * • n(x J )dv J * . Since v (l) k = |v J * | and µ(v)m(v) C, we infer the following, J IB C t-t 1 0 e -ν 0 (t-s) V 1 p k=0 k i=0 l∈ϑ k (i) (1 -α) i α k-i J -1 j=1 V j ds 0 j J-1 dσ x (l) j (v j * ) × R 3 B (δ) 2 (h l+1 -h l ) (s, x (l) k -t (l) k v (l) k , v (l) k ) |v J * | dv J * .
We now make the change of variable v (l) k → v J * which preserves the norm and then the integral in v J * can be bounded by the
L 1 v L ∞ x ( v 2 )-norm of B (δ) 2 .
As in the proof of Lemma 4.5 k,i,l (1 -α) i α k-i p and this yields the following estimate
J IB pC t 0 e -ν 0 (t-s) B (δ) 2 (h l+1 -h l ) (s) L 1 v L ∞ x ( v 2 )
ds.
To conclude the estimate on J IB we choose p = p(T 0 ) defined in Lemma 4.6 (which makes p bounded by C(1 + T 0 )) and we control B (δ) 2 thanks to Lemma 6.4. (6.16)
J IB C(1 + T 0 ) C B (δ)e -εν 0 t sup s∈[0,t] e εν 0 s (h l+1 -h l )(s) L ∞ x,v (m)
for all ε in (0, 1), T 0 0 and t in [0, 1].
Estimate for J RB The term (6.11) is dealt with by crudely bounding the integrand of (6.11) by its L ∞
x,v -norm. Using Lemma 4.6 to estimate R p we get for all t in [0, T 0 ],
µ(v) µ(v 1 * ) e -ν(v)t 1 m R p S Gν B (δ) 2 (h l+1 -h l ) (t -t 1 -s, x 1 , v 1 * ) C µ(v)m(v) µ(v 1 * )m(v 1 * ) ν(v 1 * )e -ν 0 (t-s) 1 2 [CT0] × sup s * ∈[0,t-t 1 -s] e ν 0 s * S Gν (s * ) B (δ) 2 (h l+1 -h l ) L ∞ x,v (mν -1 )
.
Then, we apply Theorem 4.1 about the exponential decay of S Gν (s * ) in L ∞ x,v (mν -1 ) with a rate (1 -ε)ν 0 . At last, we control B (δ) 2 thanks to Lemma 6.3. This yields, (6.17)
J RB C 1 2 [ CT 0] C B (δ)e -εν 0 t sup s∈[0,t] e εν 0 s (h l+1 -h l )(s) L ∞ x,v (m)
for all ε in (0, 1), T 0 0 and t in [0, 1].
Estimate for J Q The term (6.12) is dealt with using the gain of weight of S Gν (t) upon integration in time: Corollary 4.2. A direct application of this corollary yields for all ε in (0, 1)
J Q C 0 1 -ε e -εν 0 t sup s∈[0,t] e εν 0 s Q(h l -h l-1 , h l + h l-1 + g)(s) L ∞ x,v (mν -1 )
.
We control Q thanks to Lemma 6.6 and we infer ∀ε ∈ (0, 1),
J Q C 1 -ε h l L ∞ [0,t] L ∞ x,v (m) + h l-1 L ∞ [0,t] L ∞ x,v (m) + g L ∞ t L ∞ x,v (m) × e -εν 0 t sup s∈[0,t] e εν 0 s (h l -h l-1 )(s) L ∞ x,v (m) .
(6.18)
Conclusion of the proof. We gather (6.15), (6.16), (6.17) and (6.12) inside (6.7). This gives for all 0 < ε < 1, all T 0 > 0 and all t in [0, T 0 ],
sup s∈[0,t] e εν 0 s (h l+1 -h l )(s) L ∞ x,v (m) min (1 -α), C B (δ) 1 -ε + C(1 + T 0 ) C B (δ) + 2 -[CT 0] C × sup s∈[0,t] e εν 0 s h l+1 -h l L ∞ x,v (m) + C 1 -ε h l L ∞ [0,t] L ∞ x,v (m) + h l-1 L ∞ [0,t] L ∞ x,v (m) + g L ∞ t L ∞ x,v (m) × sup s∈[0,t] e εν 0 s h l -h l-1 L ∞ x,v (m)
We choose our constants as follow.
• From Lemma 6.3 we define δ 0 > 0 such that for all δ < δ 0
, C B (δ) C B (δ 0 ) < 1; • Since C B (δ 0 ) < 1 we fix ε in (0, 1) such that C B (δ 0 ) + ε < 1; • We choose T 0 large enough such that 2 -[CT 0] C 1 4 1 -min (1 -α), C B (δ 0 ) 1 -ε ;
• At last, we take δ < δ 0 such that, from Lemma 6.4,
C(1 + T 0 ) C B (δ) 1 4 1 -min (1 -α), C B (δ 0 ) 1 -ε . Denoting C 0 = 2C 1 -min (1 -α), C B (δ 0 ) 1 -ε -1
, our choice of constants implies that for all t in [0, T 0 ],
sup
s∈[0,t] e εν 0 s h l+1 -h l L ∞ x,v (m) C 0 h l L ∞ [0,t] L ∞ x,v (m) + h l-1 L ∞ [0,t] L ∞ x,v (m) + g L ∞ t L ∞ x,v (m) × sup s∈[0,t] e εν 0 s h l -h l-1 L ∞ x,v (m) . (6.19)
Of important note is the fact that C 0 and ε do not depend on T 0 and therefore we can iterate the process on [T 0 , 2T 0 ] and so on. The inequality above thus holds for all t 0. To conclude, we first prove that
h l+1 L ∞ [0,t] L ∞ x,v (
m) is uniformly bounded. We could do exactly the same computations but subtracting S Gν (t)f 0 (x, v) instead of h l (t, x, v) to h l (t, x, v) in (6.6). Thus, (6.19) would become for all t 0, sup
s∈[0,t] e εν 0 s (h l+1 -S Gν f 0 )(s) L ∞ x,v (m) C 0 h l L ∞ [0,t] L ∞ x,v (m) + g L ∞ t L ∞ x,v (m) × sup s∈[0,t] e εν 0 s h l L ∞ x,v (m) .
Using the exponential decay of S Gν (t)f 0 , which is faster than εν 0 (see Theorem 4.1), and assuming that the norms of g and f 0 are bounded by η 1 to be determined later,
sup s∈[0,t] e εν 0 s h l+1 (s) L ∞ x,v (m)
C
(1) 0
f 0 L ∞ x,v (m) + C (2) 0 h l L ∞ [0,t] L ∞ x,v (m) + η 1 sup s∈[0,t] e εν 0 s h l (s) L ∞ x,v (m) ,
where C
0 and C
0 are two positive constants independent of h l+1 and η 1 . If we choose η 1 > 0 small enough such that
C (1) 0 η 1 + C (2) 0 2 + C (1) 0 1 + C (1) 0 η 2 1 (1 + C (1) 0 )η 1 then we obtain by induction that (6.20) ∀l ∈ N, ∀t 0, sup s∈[0,t] e εν 0 s h l (s) L ∞ x,v (m) 1 + C (1) 0 f 0 L ∞ x,v (m)
.
We now plug (6.20) into (6.19) and use the fact that f 0 and g are bounded by η 1 . This gives
h l+1 -h l L ∞ t,x,v (m) 3C 0 1 + C (1) 0 η 1 h l -h l-1 L ∞ t,x,v (
m) . The latter implies that for η 1 small enough, (h l ) l∈N is a Cauchy sequence in L ∞ t,x,v (m) and therefore converges towards f 1 in L ∞ t,x,v (m). Since γ < k, we can take the limit inside the iterative scheme (6.5) and f 1 is a solution to (6.4). Moreover, by taking the limit inside (6.20), f 1 has the desired exponential decay. This concludes the proof of Proposition 6.7.
Study of equation (6.3) in L ∞
x,v (µ -ζ ). We turn to the differential equation (6.3) in L ∞
x,v (µ -ζ ) with ζ in (1/2, 1) so that Theorem 5.1 holds. Proposition 6.8. Let m = e κ 1 |v| κ 2 with κ 1 > 0 and κ 2 in (0, 2)
or m = v k with k > k ∞ . Let g = g(t, x, v) be in L ∞ t L ∞ x,v (m). Then there exists a unique function f 2 in L ∞ t L ∞ x,v (µ -ζ ) such that ∂ t f 2 = G (f 2 ) + A (δ) (g) and f 2 (0, x, v) = 0. Moreover, if Π G (f 2 + g) = 0 and if ∃ λ g , η g > 0, ∀t 0, g(t) L ∞ x,v (m) η g e -λg t ,
then for any 0 < λ 2 < min {λ g , λ ∞ }, with λ ∞ defined in Theorem 5.1, there exist
C 2 > 0 such that ∀t 0, f 2 (t) L ∞ x,v (µ -ζ ) C 2 η g e -λ 2 t .
The constant C 2 only depends on λ 2 .
Proof of Proposition 6.8. Thanks to the regularising property of A (Lemma 6.2)
A (δ) (g) belongs to L ∞ t L ∞ x,v (µ -ζ
). Theorem 5.1 implies that there is indeed a unique f 2 solution to the differential equation and it is given by
f 2 = t 0 S G (t -s) A (δ) (g) (s) ds, where S G (t) is the semigroup generated by G = L -v • ∇ x in L ∞
x,v (µ -ζ ). Suppose now that Π G (f 2 + g) = 0 and that there exists η 2 > 0 such that g
(t) L ∞ x,v (m)
η g e -λt . Using the definition of Π G (3.8), the projection part of f 2 is straightforwardly bounded for all t 0: Π
G (f 2 ) (t) L ∞ x,v (µ -ζ ) = Π G (g) (t) L ∞ x,v (µ -ζ ) C Π G g L ∞ x,v (m)
C Π G η g e -λg t . (6.21) Applying Π ⊥ G = Id -Π G to the equation satisfied by f 2 we get, thanks to (3.8),
∂ t Π ⊥ G (f 2 ) = g Π ⊥ G (f 2 ) + Π ⊥ G A (δ) (g) . This yields Π ⊥ G (f 2 ) = t 0 S G (t -s) Π ⊥ G A (δ) (g) (s) ds.
We use the exponential decay of S G (t) on (Ker(g)) ⊥ (see Theorem 5.1).
Π ⊥ G (f 2 ) L ∞ x,v (µ -ζ ) C ∞ t 0 e -λ∞(t-s) A (δ) (g) (s) L ∞ x,v (µ -ζ ) ds.
Using the definition of Π G (3.8) and then the regularising property of A Lemma 6.2 we further can further bound. Fix λ 2 < min {λ ∞ , λ g },
Π ⊥ G (f 2 ) L ∞ x,v (µ -ζ ) C G C ∞ C Π G C A C g η g t 0 e -λ∞(t-s) e -λg s ds C G C ∞ C Π G C A C g η g te -min{λg,λ∞}t
C 2 (λ 2 )η g e -λ 2 t . (6.22) Gathering (6.21) and (6.22) yields the desired exponential decay. 6.1.4. Proof of Theorem 6.1. Take f 0 in L ∞
x,v (m) such that Π G (f 0 ) = 0. The existence will be proved by an iterative scheme. We start with f (0) 1 = f (0) 2 = 0 and we approximate the system of equations (6.2) -(6.3) as follows.
∂ t f (n+1) 1 = B (δ) f (n+1) 1 + Q f (n+1) 1 + f (n) 2 ∂ t f (n+1) 2 = G f (n+1) 2 + A (δ) f (n+1) 1
, with the following initial data
f (n+1) 1 (0, x, v) = f 0 (x, v) and f (n+1) 2 (0, x, v) = 0. Assume that (1 + C 2 ) f 0 η 1 ,
where C 2 was defined in Proposition 6.8 and η 1 was defined in Proposition 6.7. Thanks to Proposition 6.7 and Proposition 6.8, an induction proves first that f
(n) 1 n∈N and f (n) 2 n∈N
are well-defined sequences and second that for all n in N and all t 0
f (n) 1 (t) L ∞ x,v (m) e -λ 1 t f 0 L ∞ x,v (m) (6.23) f (n) 2 (t) L ∞ x,v (µ -ζ ) C 2 e -λ 2 t f 0 L ∞ x,v (m) , (6.24) with λ 2 < min {λ 1 , λ ∞ }. Indeed, if we constructed f (n) 1 and f (n) 2
satisfying the exponential decay above then we can construct f (n+1) 1
, which has the required exponential decay (6.23), and then construct f (n+1) 2
. Finally, we have the following equality
∂ t f (n+1) 1 + f (n+1) 2 = g f (n+1) 1 + f (n+1) 2 + Q f (n+1) 1 + f (n) 2 .
Thanks to orthogonality property of Q in Lemma 6.6 and the definition of Π G (3.8) we obtain that the projection is constant with time and thus
Π G f (n+1) 1 + f (n+1) 2 = Π G (f 0 ) = 0.
Applying Proposition 6.8 we obtain the exponential decay (6.24) for f (n+1) 2 .
We recognize exactly the same iterative scheme for f (n+1) 1 as in the proof of Proposition 6.7 with g replaced by f (n) 2 . Moreover, the uniform bound (6.24) allows us to derive the same estimates as in the latter proof independently of f
(n) 2 . As a conclu- sion, f (n) 1 n∈N is a Cauchy sequence in L ∞ t L ∞ x,v (
m) and therefore converges strongly towards a function f 1 .
By (6.24), the sequence f
(n) 2 n∈N is bounded in L ∞ t L ∞ x,v (µ -ζ
) and is therefore weakly-* compact and therefore converges, up to a subsequence, weakly-* towards
f 2 in L ∞ t L ∞ x,v (µ -ζ
). Since the kernel inside the collision operator behaves like |v -v * | γ and that our weight m(v) is either exponetial or of degree k > 2 > γ, we can take the weak limit inside the iterative scheme. This implies that (f 1 , f 2 ) is solution to the system (6.2) -(6.3) and thus f = f 1 + f 2 is solution to the perturbed equation (6.1). Moreover, taking the limit inside the exponential decays (6.23) and (6.24) yields the expected exponential decay for f . 6.2. Uniqueness in the perturbative framework. We conclude the proof of our main Theorem 2.1 stated in Section 2 by proving the uniqueness of solutions in the perturbative regime. Theorem 6.9. Let m = e κ 1 |v| κ 2 with κ 1 > 0 and κ 2 in (0, 2) or m = v k with k > k ∞ . There exits η > 0 such that for any
f 0 in L ∞ x,v (m) such that f 0 L ∞ x,v (m) η there exists at most one solution f (t, x, v) in L ∞ t L ∞ x,v (
m) to the perturbed Boltzmann equation (6.1) with Maxwell boundary condition and with f 0 as initial datum.
Proof of Theorem 6.9. Let f 0 be in L ∞
x,v (m) such that f 0 L ∞ x,v (m) η, η to be chosen later. Suppose that there exist two solutions f and f in L ∞ t L ∞ x,v (m) associated to the initial datum f 0 .
Subtracting the equations satisfied by f and f we get
∂ t f -f = G f -f + Q f -f , f + f
and following the decomposition of the previous subsection
∂ t f -f = G ν f -f + B (δ 2 f -f + Q f -f , f + f + A (δ) f -f .
Since G ν generates a semigroup in L ∞ x,v (m) we can write the equation above under its Duhamel form:
f -f = t 0 S Gν (t -s) B (δ 2 f -f + Q f -f , f + f ds + t 0 S Gν (t -s) A (δ) f -f ds. (6.25)
The first term on the right-hand side can be treated the same way as in the proof of Proposition 6.7 and therefore, for δ small enough, there exists 0
< C 1 < 1 such that t 0 S Gν (t -s) B (δ 2 f -f + Q f -f , f + f ds (1 -C 1 ) + C Q f L ∞ [0,t] L ∞ x,v (m) + f L ∞ [0,t] L ∞ x,v (m) f -f L ∞ [0,t] L ∞ x,v (m)
.
(6.26)
Since S Gν (t) is bounded on L ∞ t,x,v (m) (see Theorem 4.1), as well as A (δ) is (see Lemma 6.2, we can bound the second term on the right-hand side of (6.25) by (6.27)
t 0 S Gν (t -s) A (δ) f -f ds C 2 t f -f L ∞ [0,t] L ∞ x,v (m)
.
Plugging (6.26) and (6.27) into (6.25) we obtain
f -f L ∞ [0,t] L ∞ x,v (m) (1 -C 1 ) + C Q f L ∞ [0,t] L ∞ x,v (m) + f L ∞ [0,t] L ∞ x,v (m) + C 2 t × f -f L ∞ [0,t] L ∞ x,v (m)
.
(6.28)
We now need to prove that the L ∞ x,v (m)-norm of f and f must be bounded by the one of f 0 . But this follows from (6.28) when one subtracts S Gν (t)f 0 instead of f to f . This yields, after controlling S Gν (t)f 0 by its L ∞
x,v (m)-norm,
f L ∞ [0,t] L ∞ x,v (m) C 0 f 0 L ∞ x,v (m) + (1 -C 1 ) + C Q f L ∞ [0,t] L ∞ x,v (m) + C 2 t f L ∞ [0,t] L ∞
x,v (m) . Since C 1 < 1 we fix T 0 such that C 2 T 0 < C 1 /4. We deduce that for all t in [0,
T 0 ] ∀t ∈ [0, T 0 ], 3C 1 4 f L ∞ [0,t] L ∞ x,v (m) C 0 f 0 L ∞ x,v (m) + C Q f 2 L ∞ [0,t] L ∞ x,v (m)
and therefore, if f 0 L ∞ x,v (m) η with η small enough such that
3C 1 4 -2 C Q C 0 C 1 η > C 1 2 then (6.29) ∀t ∈ [0, T 0 ], f L ∞ [0,t] L ∞ x,v (m) 2C 0 C 1 f 0 L ∞ x,v (m) .
To conclude the proof of uniqueness we see that (6.29) is also valid for f and (6.28) thus becomes
∀t ∈ [0, T 0 ], f -f L ∞ [0,t] L ∞ x,v (m) 1 - 3C 1 4 + 4 C 0 C Q C 1 η f -f L ∞ [0,t] L ∞ x,v (m)
.
We can choose η even smaller such that the term on the right-hand side can be absorbed by the left-hand side. This implies that f = f on [0, T 0 ]. Starting at T 0 we can iterate the process and finally getting that f = f on R + ; which concludes the proof of Theorem 6.9.
7. Qualitative study of the perturbative solutions to the Boltzmann equation
In this last section, we address the issue of positivity and continuity of the solutions to the Boltzmann equation (7.1)
∂ t F + v • ∇ x F = Q (F, F ) .
Note that even if our arguments are constructive, we only prove qualitative behaviours and we do not tackle the issue of quantitative estimates. For instance, we prove the positivity of the solutions but do not give any explicit lower bound. Such explicit lower bounds have been recently obtained in the case of pure specular reflections [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF] and in the case of pure Maxwellian diffusion [START_REF] Briant | Instantaneous exponential lower bound for solutions to the boltzmann equation with maxwellian diffusion boundary conditions[END_REF]. We think that the proofs can be adapted to fit the case of Maxwell boundary condition as it is a convex combination of these boundary conditions. However, the techniques required to deal with this are very different from the one developed throughout this paper and we therefore did not looked into it much further. where η > 0 is chosen such that Theorems 6.1 and 6.9 hold and denote f the unique solution of the perturbed equation associated to f 0 . Suppose that F 0 = µ + f 0 0 then F = µ + f 0.
Proof of Proposition 7.1. Since we are working with hard potential kernels we can decompose the nonlinear operator into
Q(F, F ) = -Q -(F, F ) + Q + (F, F )
where Following the idea of [START_REF] Briant | Instantaneous Filling of the Vacuum for the Full Boltzmann Equation in Convex Domains[END_REF][6], we obtain an equivalent definition of being a solution to (7.1) by applying the Duhamel formula along backward characteristics that is stopped right after the first collision against the boundary. If F is solution to the Boltzmann equation then for almost all (x, v) in Ω × R 3 , F (t, x, v) =F 0 (x -vt, v)exp -t 0 q[F (s, x -(t -s)v, •)](v) ds
Q -(F, F )(v) =
+ t 0 exp - t s q[F (s ′ , x -(s -s ′ )v, •)](v) ds ′ × Q + [F (s, x -(t -s)v, •), F (s, x -(t -s)v, •)] (v) ds (7.2)
if t t min (x, v) := t 0 or else F (t, x, v) =F Λ (t 0 , x -t 0 v, v)exp - We denoted by F Λ the Maxwell boundary condition for (t ′ , x ′ , v) in R + × Λ F Λ (t ′ , x ′ , v) = (1 -α)F (t ′ , x ′ , R x ′ (v)) + αP Λ (F (t ′ , x ′ , •)) (v).
We construct an iterative scheme (F (n) ) n∈N with F (0) = µ and F (n+1) (t, x, v) being defined by (7.2) and (7.3) with all the F on the right-hand side being replaced by F (n) except in the definition of F Λ where we keep F (n+1) instead. In other terms, F (n+1) is solution to
∂ t + v • ∇ x + q(F (n) ) F (n+1) = Q(F (n) , F (n) )
with the Maxwell boundary condition; which is an approximative scheme to the Boltzmann equation (7.1).
Defining f (n) = F (n) -µ we have the following differential iterative scheme
∂ t f (n+1) + v • ∇ x f (n+1) = -ν(v) f (n+1) + K f (n) + Q + f (n) -q f (n) f (n+1) .
As before, we prove that f (n) n∈N is well-defined and converges in L ∞ t,x,v (m) towards f , the unique solution of the perturbed Boltzmann equation. Therefore, the same holds for F (n) n∈N converging towards F the unique perturbed solution of the original Boltzmann equation (7.1).
From the positivity of q and Q + and F 0 , a straightforward induction from (7.2) shows that F (n) (t, x, v) 0 for all n when t t 0 . This implies that for all n and all (x, v), F (n+1) Λ (t 0 , x -t 0 v, v) 0 and therefore (7.3) gives F (n+1) (t, x, v) 0 for all (t, x, v) and all n. The positivity of F follows by taking the limit as n tends to infinity. 7.2. Continuity of solutions. The last issue tackled in the present article is the continuity of the solutions described in Section 6. More precisely, we prove the following proposition. We also rewrite the boundary continuity set
C - Λ = Λ -∪ Λ (I-) 0
and the continuity set
C = {0} × Ω × R 3 ∪ Λ + ∪ C - Λ ∪ (0, +∞) × C - Λ ∪ (t, x, v) ∈ (0, +∞) × Ω × R 3 ∪ Λ + : ∀1 k N(t, x, v) ∈ N, (X k+1 (x, v), V k (x, v)) ∈ C - Λ .
The sequence (T k (x, v), X k (x, v), V k (x, v)) k∈N is the sequence of footprints of the backward characteristic trajectory starting at (x, v); N(t, x, v) is almost always finite and such that T N (t,x,v) t < T N (t,x,v)+1 (x, v). We refer to Subsection 4.1 for more details.
As explained in Lemma 4.3, the set C - λ describes the boundary points in the phase space that lead to continuous specular reflections.
The proof of Proposition 7.2 relies on a continuity result for the non-homogeneous transport equation with a mixed specular and in-flow boundary conditions when Ω is not necessarily convex.
Similar results have been obtained in [START_REF] Kim | Formation and propagation of discontinuity for Boltzmann equation in non-convex domains[END_REF]Lemma 12] or [21, Lemma 13] (when Ω is convex) for purely in-flow boundary condition as well as for purely bounce-back reflections [START_REF] Kim | Formation and propagation of discontinuity for Boltzmann equation in non-convex domains[END_REF]Lemma 15]. We recover their results when α = 1 or by replacing (T k , X k , V k ) k by the sequence associated to bounce-back characteristics. The continuity for pure specular reflections has been tackled in [START_REF] Guo | Decay and continuity of the Boltzmann equation in bounded domains[END_REF]Lemma 21] but required strict convexity of Ω.
The following lemma therefore improves and extends the existing results.
Lemma 7.3. Let Ω be a C 1 bounded domain of R 3 and let f 0 (x, v) be continuous on
Ω × R 3 ∪ Λ + ∪ C - Λ
and g(t, x, v) be a boundary datum continuous on [0, T ] × C - Λ . At last, let q 1 (t, x, v) and q 2 (t, x, v) be two continuous function in the interior of [0, T ] × Ω × R 3 satisfying sup t∈[0,T ] q 1 (t, x, v) L ∞ x,v (m) < ∞ and sup t∈[0,T ] q 2 (t, x, v) L ∞ x,v (m) < ∞.
Assume f 0 satisfies the mixed specular and in-flow boundary condition:
∀(x, v) ∈ C - Λ , f 0 (x, v) = (1 -α)f 0 (x, R x (v)) + g(0, x, v) and suppose f (t, x, v) is the solution to [∂ t + v • ∇ x + q 1 (t, x, v)] f (t, x, v) = q 2 (t, x, v) ∀(t, x, v) ∈ [0, T ] × Λ -, f (t, x, v) = (1 -α)f (t, x, R x (v)) + g(t, x, v) associated to the initial datum f 0 . Then f (t, x, v) is continuous on the continuity set C.
Proof of Lemma 7.3. As now standard, in the homogeneous case q 2 = 0, we can use a Duhamel formula along the backward characteristics because q 1 belongs to L ∞ t,x,v . More precisely, as in Subsection 4.3.1 with q 1 (t, x, v) replacing ν(v) we obtain that if h(t, x, v) is solution to [∂ t + v • ∇ x + q 1 (t, x, v)] h(t, x, v) = 0 with the mixed specular and in-flow boundary conditions then h takes the form • if t t min (x, v) = T 1 , h(t, x, v) = h 0 (x -tv, v)e -t 0 q 1 (s,x-(t-s)v,v)ds ;
• if t > T 1 h(t, x, v) = [(1 -α)h(t -T 1 , X 1 , V 1 ) + g(t -T 1 , X 1 , V 1 )] e -t
t-T 1 q 1 (s,x-(t-s)v,v)ds .
Unlike the case of Maxwell boundary condition, we see that in the case of mixed specular and in-flow boundary condition we can always reach the initial plane {t = 0}. We obtain an explicit form for h(t, x, v) for almost every (t, x, v) by iterating the property above (see [21, lemma 20] for more details or [START_REF] Kim | Formation and propagation of discontinuity for Boltzmann equation in non-convex domains[END_REF]Lemma 15] replacing the bounce-back trajectories by the specular ones). It reads with N = N(t, x, v)) and the usual notation
t k = t -T k (x, v) h(t, x, v) =(1 -α) N h 0 (X N -t N V N , V N ) e - N k=0 t k min{0,t k+1 } q 1 (s,X k -(t k -s)V k ,V k ) ds + N -1 k=0 g(t k+1 , X k+1 , V k+1 ) e - t k min{0,t k+1 } q 1 (s,X k -(t k -s)V k ,V k ) ds (7.4)
for almost every (t, x, v). Note that this expression is indeed well-defined since N(t, x, v) is finite almost everywhere and q 1 belongs to L ∞ t,x,v . We also emphasize that min {0, t k+1 } only plays a role when k = N(t, x, v); it encodes the fact that we integrate all the complete lines between t k and t k+1 and only the remaining part [t -T N , t] of the last line.
Since the source term q 2 also belongs to L ∞ t,x,v , we obtain an explicit formula for f (t, x, v) from (7.4). It reads, for almost every (t, x, v),
f (t, x, v) =(1 -α) N f 0 (X N -t N V N , V N ) e - N k=0 t k min{0,t k+1 } q 1 (s,X k -(t k -s)V k ,V k ) ds + N -1 k=0
g(t k+1 , X k+1 , V k+1 ) e max{s,t l+1 } q 1 (s 1 , X l -(t l -s 1 )V l , V l )ds 1 × q 2 (s, X k -(t k -s)V k , V k ) ds.
(7.5)
Note that in the expression above we used the change of variable s 1 → t -s 1 to recover exactly the sequence (t l , X l , V l ) associated to (t, x, v) instead of ( t l , X l , V l ) associated with (t -s, x, v).
By assumptions on f 0 and g, we deduce that f is continuous on
{0} × Ω × R 3 ∪ Λ + ∪ C - Λ ∪ (0, +∞) × C - Λ . Now if (t, x, v) belongs to (t, x, v) ∈ (0, +∞) × Ω × R 3 ∪ Λ + : ∀1 k N(t, x, v) ∈ N, (X k+1 (x, v), V k (x, v)) ∈ C - Λ
we have by iterating Lemma 4.3 that the finite sequence (T k , X k , V k ) 0 k N (t,x,v) is continuous around (x, v).
Let (t ′ , x ′ , v ′ ) be in the same set as (t, x, v). In the case T N (t,x,v) t t ′ < T N (t,x,v)+1 or T N (t,x,v) t ′ t < T N (t,x,v)+1 , by continuity of the t -T k (x, v) we have that for (t ′ , x ′ , v ′ ) sufficiently close to t, N(t ′ , x ′ , v ′ ) = N(t, x, v) and the continuity
1 b 0 b
10 (cos θ) dσ = S d-2 π (cos θ) sin d-2 θ dθ < ∞.
Theorem 2 . 1 .
21 Let Ω be a C 1 bounded domain and let α in ( 2/3, 1]. Define
(3. 21 )
21 Direct computations show α a = 10 and C 1 > 0.
now gather (3.21), (3.18), (3.22), (3.24), (3.27) and (3.20) into (3.11)
dv dS(x)ds = 0, by oddity. Combining the latter with (3.35) inside (3.34) yields
Lemma 4 . 3 .
43 Let Ω be a C 1 bounded domain.
)
Theorem 5 . 1 .
51 Let α be in ( 2/3, 1]. There exist ζ α in (1/2, 1) such that for any ζ in (ζ α , 1), the linear perturbed operator G = L -v • ∇ x , together with Maxwell boundary condition, generates a semigroup S
Proposition 5 . 5 .
55 Let α in ( 2/3, 1]. There exists ζ α in (1/2, 1), ε α in (0, 1), C
|v| δ - 1
1 and 2δ |v -v * | δ -1 and |cos θ| 1 -2δ and whose support is included in |v| 2δ -1 and δ |v -v * | 2δ -1 and |cos θ| 1 -δ .
Lemma 6 . 2 .
62 1.1] (polynomial weight) or [4, Section 2] (exponential weight). Let ζ be in (1/2, 1]. There exists C A > 0 such that for all f in L ∞
7. 1 .Proposition 7 . 1 .
171 Positivity of solutions. This subsection is dedicated to proving the following positivity property. Let m = e κ 1 |v| κ 2 with κ 1 > 0 and κ 2 in (0, 2)or m = v k with k > k ∞ . Let f 0 be in L ∞ x,v (m) with Π G (f 0 ) = 0 and f 0 L ∞ x,v (m) η,
R 3 ×S 2 BR 3 ×S 2 B
232 (|v -v * |, cos θ) F * dv * dσ F (v) = q(F )(v)F (v), Q + (F, F ) = (|v -v * |, cos θ) F ′ F ′ * dv * dσ.
(s, x -(t -s)v, •)](v) ds + (s ′ , x -(s -s ′ )v, •)](v) ds ′ × Q + [F (s, x -(t -s)v, •), F (s, x -(t -s)v, •)] (v) ds.
Proposition 7 . 2 .
72 Let m = e κ 1 |v| κ 2 with κ 1 > 0 and κ 2 in (0,[START_REF] Beals | Abstract time-dependent transport equations[END_REF] or m = v k with k > k ∞ . Let f 0 be in L ∞ x,v (m) with Π G (f 0 ) = 0 and f 0 L ∞ x,v (m) η,where η > 0 is chosen such that Theorems 6.1 and 6.9 hold and denote f the unique solution of the perturbed equation associated to f 0 . Suppose thatF 0 = µ + f 0 is continuous on Ω × R 3 ∪ Λ + ∪ C - Λ and satisfies the Maxwell boundary condition (1.2) then F = µ + f is continuous on the continuity set C.We recall the definition of inward inflection grazing boundaryΛ (I-) 0 = Λ 0 ∩ t min (x, v) = 0, t min (x, -v) = 0 and ∃δ > 0, ∀τ ∈ [0, δ], x -τ v ∈ Ω c .
k+1 } q 1 (s,X k -(t k -s)V k ,V k ) ds +
and bound the terms where k -k R are appearing as before and get (remember that k R is compactly supported): the terms in (5.16) where |v| R and |v * | 2R and |v * * | 3R are bounded from above by
Proof of Proposition 5.5. We recall that G = L -v • ∇ x = G ν + K. Thanks to Theorem 4.1 with the weight µ -ζ , G ν generates a semigroup S Gν (t) in L ∞
x,v (µ -ζ ). Moreover, Lemma 5.2 implies that K is a bounded operator in L ∞
x,v (µ -ζ ). We can therefore write a Duhamel's form for the solution f for almost every (s, x, v * ) in R + × Ω × R 3 :
(5.5)
Step 1: New implicit form for f . In what follows, C r will stand for any positive constant not depending on f but depending on a parameter r. We recall (4.10) the definition of the set of integration V 1 . We use the description (4.4), α replacing ∆ n , of S Gν (s -s * ) along characteristics until the first collision against the boundary that we denote
where we defined
and
(5.7)
We iterate this formula inside the integral over V 1 . Using the notation ( t 1 , x 1 , v 1 ) to denote the first backard collision starting from (x * 1 , v 1 * ) and P Λ for the diffuse boundary operator (1.3) we end up with a new implicit form for f
with the following definitions (5.9)
(5.10)
of (T k , X k , V k ) 0 k N (t,x,v) , g, q 1 and q 2 implies f (t ′ , x ′ , v ′ ) → f (t, x, v). It remains to deal with the case t ′ t = T N (t,x,v) where N(t ′ , x, v) = N(t, x, v) -1. Exactly as proved in [21, Lemma 21], in that case t N (t,x,v) = 0 and the integrals from 0 to t N are null in formula (7.5). Moreover, (X N (t,x,v)-1 -(t ′ -T N (t,x,v)-1 )V N (t,x,v)-1 ) converges to X N (t,x,v) as t ′ tends to t. Finally, since f 0 satisfies the boundary condition, we obtain here again that f (t ′ , x ′ , v ′ ) → f (t, x, v). Which concludes the proof.
We now prove the continuity of the solutions constructed in Section 6.
Proof of Proposition 7.2. We use a sequence to approximate the solution of the full Boltzmann equation with initial datum F 0 = µ + f 0 . We start from F (0) = µ and define by induction
with the mixed specular and diffusive boundary conditions
Since we impose a specular part in the boundary condition, similar computations as in Section 4 show that f (n) n∈N is well-defined in L ∞ t,x,v (m). Moreover, similar computations as Subsection 6.1.2 prove that f (n) n∈N is a Cauchy sequence, at least on [0, T ] for T sufficiently small, as well as F (n) n∈N . Therefore f (n) n∈N converges towards f the unique solution of the perturbed Boltzmann equation with initial datum f 0 and F (n) n∈N converges to F the unique solution of the full Boltzmann equation with initial datum F 0 = µ + f 0 .
We apply Lemma 7.3 inductively on ν(v) -1 F (n+1) . Indeed, [24, Theorem 4 and Corollary 5] showed that q 1 = ν(v) -1 q(F (n) ) and q 2 = ν(v) -1 Q(F (n) , F (n) ) are continuous in the interior of [0, T ] × Ω × R 3 if F (n) is continous on C (see also Lemma 6.6). And [24, Proof of 2 of Theorem 3, Step 1]
Hence, by induction F (n) is continuous on C for all n and is a Cauchy sequence. Therefore its limit F is continuous as well. | 139,765 | [
"739558"
] | [
"236317",
"236317"
] |
01345066 | en | [
"math"
] | 2024/03/04 23:41:50 | 2017 | https://hal.science/hal-01345066v4/file/CampbellDMTCSfinal.pdf | John M Campbell
A class of symmetric difference-closed sets related to commuting involutions
Keywords: symmetric difference-closed set, commuting involution, Klein four-group, permutation group, combinatorics of finite sets
Recent research on the combinatorics of finite sets has explored the structure of symmetric difference-closed sets, and recent research in combinatorial group theory has concerned the enumeration of commuting involutions in Sn and An. In this article, we consider an interesting combination of these two subjects, by introducing classes of symmetric difference-closed sets of elements which correspond in a natural way to commuting involutions in Sn and An. We consider the natural combinatorial problem of enumerating symmetric difference-closed sets consisting of subsets of sets consisting of pairwise disjoint 2-subsets of [n], and the problem of enumerating symmetric difference-closed sets consisting of elements which correspond to commuting involutions in An. We prove explicit combinatorial formulas for symmetric difference-closed sets of these forms, and we prove a number of conjectured properties related to such sets which had previously been discovered experimentally using the On-Line Encyclopedia of Integer Sequences.
Introduction
Combinatorial properties concerning symmetric difference-closed (∆-closed) sets were explored recently in [START_REF] Gamble | Symmetric difference-free and symmetric difference-closed collections of sets[END_REF] and [START_REF] Buck | Size-maximal symmetric difference-free families of subsets of [n][END_REF]. In this article, we consider an interesting class of ∆-closed sets related to commuting involutions in the symmetric group S n and the alternating group A n .
The study of combinatorial properties associated with pairs of commuting involutions in S n and A n is an interesting subject in part because this area is related to the classifications of abstract regular polytopes for fixed automorphism groups, as shown in [START_REF] Kiefer | On pairs of commuting involutions in Sym(n) and Alt(n)[END_REF]. In [START_REF] Kiefer | On pairs of commuting involutions in Sym(n) and Alt(n)[END_REF] it is proven that up to conjugacy, there are
-2n + n k=1 k 2 + 1 2 • (n -k + 1)
ordered pairs of commuting involutions in S 2n and S 2n+1 . This formula is used in [START_REF] Kiefer | On pairs of commuting involutions in Sym(n) and Alt(n)[END_REF] to prove new formulas for the number of unordered pairs of commuting involutions up to isomorphism in a given symmetric or alternating group. These formulas may be used to determine the total number of Klein four-subgroups for S n and A n .
In our present article, we consider a natural variation of the problem of counting (C 2 × C 2 )-subgroups of a given permutation group. In particular, we consider the problem of enumerating Klein permutation subgroups which are, in a specific sense, analogous to ∆-closed sets. Inspired in part by [START_REF] Kiefer | On pairs of commuting involutions in Sym(n) and Alt(n)[END_REF], [START_REF] Buck | Size-maximal symmetric difference-free families of subsets of [n][END_REF], and [START_REF] Gamble | Symmetric difference-free and symmetric difference-closed collections of sets[END_REF], we introduce new classes of ∆-closed sets consisting of elements which correspond in a natural way to commuting involutions in S n and A n , and we prove new combinatorial formulas for these classes of ∆-closed sets.
An enumerative problem concerning symmetric difference-closed sets
Given two sets S 1 and S 2 , recall that the symmetric difference of S 1 and S 2 is denoted by S 1 ∆S 2 , and may be defined so that S 1 ∆S 2 = (S 1 \ S 2 ) ∪ (S 2 \ S 1 ). An n-subset T consisting of sets is a symmetric difference-closed set or a ∆-closed n-subset if S 1 ∆S 2 ∈ T for all S 1 , S 2 ∈ T . In other words, restricting the ∆ operation to T × T yields a binary operation on T . Our present article is largely motivated by the enumerative problem described below, which may be formulated in a natural way in terms of ∆-closed sets.
Suppose that n people arrive at a meeting, and suppose that these n people arrange themselves into pairs (except for a loner if n is odd) and that these pairs then form various organizations. What is the total number, taken over all possible pairings, of possible collections C of three distinct organizations such that given two organizations in C, a pair P of people belongs to only one of these two organizations iff P is a member of the remaining (third) organization?
The number of possible collections C as given above is also equal to the number of ∆-closed 4-subsets S ⊆ 2 2 [n] such that there exists a set T consisting of pairwise disjoint 2-subsets of [n] such that S ⊆ 2 T . A set S of this form endowed with the ∆ operation forms a group which is isomorphic to the Klein fourgroup C 2 × C 2 , and the elements consisting of pairwise disjoint 2-sets in S may be regarded in an obvious way as commuting involutions in S n . We thus have that the total number of collections C as given above is also equal to the number of subgroups G of the symmetric group S n such that G is isomorphic to the Klein four-group, and such that there exists a set T ⊆ S n of pairwise disjoint transpositions such that each element in G is a product of elements in T . We refer to this latter property as the totally disjoint transposition (TDT) property. It is clear that it is not the case that all Klein four-subgroups of S n satisfy this property. For example, the permutation subgroup {id, (12)(34), (13)(24), (14)(23)} forms a Klein four-subgroup of S 4 , but the 2-sets {3, 4} and {1, 3} are not pairwise disjoint.
The enumerative problem given above may be formulated in a more symmetric way in the following manner. The total number of possible collections C as described in this problem is also equal to the number of 3-sets of the form {A ∪ B, A ∪ C, B ∪ C} such that A, B, and C are pairwise disjoint sets contained in a set of pairwise disjoint 2-subsets of [n], and at most one of A, B, and C can be empty.
The sequence labeled A267840 which we contributed to OEIS Foundation Inc. (2011) enumerates ∆-closed sets of the form described above. Accordingly, let A267840 n denote the number of Klein foursubgroups of S n satisfying the TDT property. In the OEIS entry for A267840, we provided the following intriguing, exotic triple sum for A267840 n for n ∈ N, letting δ denote the Kronecker delta function:
n! n 2 i=1 i j=1 min(j, 1 4 (2i+2j-1) ) k=max( i 2 ,i+j-n 2 ) 2 k-i-j k!(i -k)!(j -k)!(n -2i -2j + 2k)!(δ i,j + δ i,2k + 1)! . (1)
In our present article, we offer an elegant proof of the formula indicated above, by introducing a new class of integer partitions which we refer to as "Klein partitions". After our formula indicated in ( 1) was added to OEIS Foundation Inc. ( 2011), Václav Kotěšovec used this formula together with the plinrec Mathematica function to determine a conjectural linear recurrence with polynomial coefficients for the integer sequence (A267840 n : n ∈ N) (Kotěšovec (2016)).
Kotěšovec also used the formula given in [START_REF] So | we have that d n,k = 2d n,k-1 + 3d n,k-2 + 1. The integer sequence given by coefficients of the form 1 24 -6 -3(-1) k + 3[END_REF] to construct a conjectured exponential generating function (EGF) for A267840 using the dsolve Maple function, together with the rectodiffeq Maple command. Amazingly, the integer sequence (A267840 n ) n∈N seems to have a surprisingly simple EGF, in stark contrast to the intricacy of the above triple summation given in (1):
EGF(A267840; x) ? = e x 3 - e x(x+2) 2 2 + e
x(3x+2) 2
6 .
We refer to this conjecture as Kotěšovec's conjecture. We present a combinatorial proof of this conjecture in our article, and we use this result to prove a conjectural asymptotic formula for (A267840 n : n ∈ N) given by Václav Kotěšovec in OEIS Foundation Inc. (2011).
A class of symmetric difference-closed sets related to commuting even involutions
Since the number of pairs of commuting involutions in the alternating group A n up to isomorphism is also considered in [START_REF] Kiefer | On pairs of commuting involutions in Sym(n) and Alt(n)[END_REF], it is also natural to consider analogues of the results given above for even products of transpositions.
The sequence labeled A266503 which we contributed to OEIS Foundation Inc. ( 2011) enumerates subgroups G of the alternating group A n such that G is isomorphic to the Klein four-subgroup C 2 × C 2 , and each element in G is the product of the elements in a subset of a fixed set of pairwise disjoint transpositions in A n . It is important to note that this is not equal to the number of Klein four-subgroups of A n . This may be proven by the same counterexample as above in the case of the symmetric group.
Letting A266503 n denote the n th entry in the OEIS sequence A266503, A266503 n is also equal to the number of ∆-closed subsets S ⊆ 2 2 [n] such that there exists a set T consisting of pairwise disjoint 2-subsets of [n] such that S ⊆ 2 T , and each element in S is of even order.
In the OEIS sequence labeled A266503, we provided the following beautiful expression for A266503 n for n ∈ N:
n! n 4 i=1 i j=1 min(2j, 1 4 (4i+4j-1) ) k=max(i,2i+2j-n 2 ) 2 k-2i-2j k!(2i -k)!(2j -k)!(n -4i -4j + 2k)!(δ i,j + δ i,k + 1)! .
Václav Kotěšovec used the above formula to conjecture that the EGF for (A266503 n ) n∈N is equal to the following expression OEIS Foundation Inc. ( 2011).
e x 3 - e -x(x-2) 2 8 - e x(x+2) 2 4 + e x(3x+2) 2 24 .
We also offer a combinatorial proof of this conjectured EGF formula. This EGF formula may be used to prove other conjectured results concerning A266503, such as an asymptotic formula for A266503.
A proof of Kot ěšovec's conjecture
Let A000085 n denote the n th entry in the OEIS sequence labeled A000085 for n ∈ N 0 , so that A000085 n is equal to the number of self-inverse permutations on n and the number of Young tableaux with n cells.
Similarly, let A115327 n denote the n th entry in the OEIS sequence labeled A115327, which is defined so that the EGF of this sequence is e 3 2 x 2 +x . Since the EGF for the integer sequence
(A000085 n : n ∈ N 0 ) is e x 2
2 +x , we find that Kotěšovec's conjecture concerning the integer sequence (A267840 n : n ∈ N) is equivalent to the following conjecture, as noted by Kotěšovec in OEIS Foundation Inc. (2011).
Conjecture 2.1. (Kotěšovec, 2016) The n th entry in A267840 is equal to
1 3 -A000085n 2 + A115327n 6 .
As indicated in OEIS Foundation Inc. ( 2011), based on results introduced in Leaños et al. ( 2012), we have that A115327 n is equal to the number of square roots of an arbitrary element σ ∈ S 3n such that the disjoint cycle decomposition of σ consists of n ∈ N 0 three-cycles. We thus find that Kotěšovec's conjecture relates ∆-closed sets as given by A267840 to combinatorial objects such as Young tableaux and permutation roots, in an unexpected and yet simple manner.
To prove Kotěšovec's conjecture, our strategy is to make use of known summation formulas for the OEIS sequences labeled A000085 and A115327. The known formula
A000085 n = n 2 k=0 n! (n -2k)!2 k k! is given in OEIS Foundation Inc. (2011). The new summation formula A115327 n = n! n 2 k=0 3 k 2 k k!(n -2k)!
is also given in OEIS Foundation Inc. ( 2011), based on a result proven in [START_REF] Leaños | On the number of mth roots of permutations[END_REF] concerning mth roots of permutations. So, we find that Kotěšovec's conjecture is equivalent to the following conjectural formula:
A267840 n ? = n 2 k=2 3 k-1 -1 2 • n! 2 k (n -2k)!k! .
We are thus lead to consider the number triangle given by expressions of the form
n! 2 k (n-2k)!k! , for k ∈ N 0 such that k ≤ n 2 .
From OEIS Foundation Inc. ( 2011), we find that this number triangle is precisely
the triangle of Bessel numbers T (n, k), whereby T (n, k) is the number of k-matchings of the complete graph K n , with T (n, k) = n! k!(n-2k)!2 k .
So, we find that Kotěšovec's conjecture may be formulated in the following equivalent manner.
Theorem 2.1. For n ∈ N such that n ≥ 4, we have that
A267840 n = n 2 k=2 3 k-1 -1 2 • T (n, k).
Proof: Recall that T (n, k) is the number of k-matchings of the complete graph K n . We claim that
A267840 n = n 2 k=2 c n,k • T (n, k),
where c n,k is the number of Klein four-subgroups of S n satisfying the TDT property consisting precisely of the following transpositions: (1, 2), (3, 4), . . . , (2k -1, 2k). This is easily proven bijectively, in the following way. Given a k-matching of the complete graph K n , letting this matching be denoted with the k pairwise disjoint transpositions
(x 1 < x 2 ) < (x 3 < x 4 ) < . . . < (x 2k-1 < x 2k ),
which we order lexicographically, and given a Klein-subgroup of S n satisfying the TDT property consisting of the transpositions (1, 2), (3, 4), . . . , (2k-1, 2k), we obtain another Klein-subgroup of S n satisfying the TDT property consisting of the transpositions
(x 1 , x 2 ) < (x 3 , x 4 ) < . . . < (x 2k-1 , x 2k )
by replacing each occurrence of (i, i + 1) with (x i , x i+1 ) for i = 1, 3, . . . , 2k -1. This defines a bijection
φ : C n,k × T n,k → A n,k ,
where C n,k is the set of TDT Klein four-subgroups of S n as given by the coefficient c n,k , T n,k is the set of all k-matchings of K n , and A n,k is the set of all TDT Klein-four subgroups of the symmetric group S n consisting of exactly k transpositions in total.
We claim that
c n,k = 1 + 3 + • • • + 3 k-2
for all n ≥ 4 and k ∈ N ≥2 whereby k ≤ n 2 . We proceed by induction on k. In the case whereby k = 2, we have that c n,k is the number of Klein-subgroups of S n satisfying the TDT property consisting of the transpositions (1, 2) ∈ S n and (3, 4) ∈ S n . But it is clear that there is only one such group, namely:
{(12)(34), (12), (34), id} ∼ = C 2 × C 2 .
So, we find that c n,2 = 1, as desired. We may inductively assume that
c n,k = 1 + 3 + • • • + 3 k-2
for some n ≥ 4 and k ∈ N ≥2 whereby k < n 2 . For each Klein-subgroup K ≤ S n satisfying the TDT property consisting of the transpositions (1, 2), (3, 4), . . . , (2k -1, 2k) ∈ S n , we may create three distinct Klein-subgroups of S n satisfying the TDT property, by adjoining the transposition (2k + 1, 2k + 2) to two nonempty products of transpositions in K ≤ S n in three different ways to produce three TDT Klein four-subgroups. For example, we may adjoin the permutation (56) to two non-identity elements within the TDT (C 2 × C 2 )-subgroup {(12)(34), ( 12), (34), id} in three different ways to obtain three additional subgroups which are isomorphic to C 2 × C 2 and which satisfy the TDT property: So, we may obtain 3
{(12)(
• (1 + 3 + • • • + 3 k-2 ) = 3 + 3 2 + • • • + 3 k-1
new TDT Klein four-subgroups in this manner. But there is a unique remaining Klein four-subgroup satisfying the TDT property consisting of the transpositions (1, 2), (3, 4), . . . , (2k-1, 2k), (2k+1, 2k+2) which cannot be obtained in the preceding manner, namely:
{(2k + 1, 2k + 2), (12)(34) • • • (2k -1, 2k)(2k + 1, 2k + 2), (12)(34) • • • (2k -1, 2k), id} .
So, this shows that the total number of Klein four-subgroups satisfying the TDT property consisting of the transpositions (1, 2), (3, 4), . . . , (2k -1, 2k), (2k + 1, 2k + 2) is equal to
1 + 3 + 3 2 + • • • + 3 k-1 ,
thus completing our proof by induction. So, since
A267840 n = n 2 k=2 c n,k • T (n, k), and
c n,k = 1 + 3 + • • • + 3 k-2 = 3 k-1 -1 2 ,
we thus have that
A267840 n = n 2 k=2 3 k-1 -1 2 • T (n, k),
as desired.
Corollary. The EGF for (A267840 n ) n∈N is e x 3 -e
x(x+2) 2
2 + e
x(3x+2) 2
6
.
Proof: This follows immediately from Theorem 2.1, since the equality given in Kotěšovec's conjecture is equivalent to the equation given in Theorem 2.1.
Kotěšovec also provided the following conjectural asymptotic expression for the integer sequence (A267840 n : n ∈ N) in OEIS Foundation Inc. ( 2011):
A267840 n ∼ 2 -3 2 3 n 2 -1 exp n 3 - n 2 - 1 12 n n 2 .
(2)
The above conjectured asymptotic formula may be proven using the EGF given in Corollary 2. In particular, the algolib Maple package together with the Maple command equivalent may be used to derive (2) from Corollary 2 (Kotěšovec (2016)). Kotěšovec discovered the following unexpected recurrence with polynomial coefficients for the sequence (A267840 n ) n∈N using the Mathematica function plinrec (Kotěšovec (2016)):
(n -4)(n -2)A267840 n = 3(n 2 -5n + 5)A267840 n-1 + (n -1)(4n 2 -27n + 41)A267840 n-2 -(n -2)(n -1)(8n -29)A267840 n-3 -(n -3)(n -2)(n -1)(3n -16)A267840 n-4 + 3(n -4)(n -3)(n -2)(n -1)A267840 n-5 .
We leave it as an easy computational exercise to verify this recurrence using a computer algebra system (CAS) together with Corollary 2, by comparing the EGF for the left-hand side of the above equation with the EGF for the right-hand side of this equality using Corollary 2.
3 Kot ěšovec's conjecture for TDT Klein four-subgroups of alternating groups
Let A000085 and A115327 be as given above. Similarly, for n ∈ N 0 , let A001464 n denote the n th term given by the sequence labeled A001464 in OEIS Foundation Inc. ( 2011). This sequence is defined so that the EGF for this sequence is exp(-x -1 2 x 2 ). So it is clear that the problem of proving the above conjectural expression for the EGF of A267840 is equivalent to the problem of proving the following identity given in OEIS Foundation Inc. (2011).
Conjecture 3.1. (Kotěšovec, 2016)
For n ∈ N, A266503 n = 1 3 + 1 8 (-1) n+1 A001464 n -A000085n 4 + A115327n 24
.
The following formula for A001464 is given by Benoit Cloitre in OEIS Foundation Inc. ( 2011):
A001464 n = (-1) n n 2 k=0 (-1) k (2k -1)!! n 2k .
Also, recall that
A000085 n = n 2 k=0 n! (n -2k)!2 k k! and A115327 n = n! n 2 k=0 3 k 2 k k!(n -2k)!
.
So, we have that the problem of proving Conjecture 3.1 is equivalent to the problem of proving the following identity.
Theorem 3.1.
For n ∈ N, A266503 n = n 2 k=1 1 24 -6 -3(-1) k + 3 k T (n, k).
Proof: Recall that T (n, k) denotes the number of k-matchings of the complete graph K n . We claim that
A267840 n = n 2 k=2 d n,k • T (n, k),
where d n,k is the number of Klein-subgroups of A n satisfying the TDT property consisting of the following transpositions: (1, 2), (3, 4), . . . , (2k -1, 2k). Again, this is easily seen bijectively, just as in the proof of Theorem 2.1.
Letting n ∈ N be sufficiently large, we proceed to construct an expression for the coefficient d n,k in terms of d n,k-1 and d n,k-2 . For each Klein-subgroup of A n satisfying the TDT property consisting of the transpositions (1, 2), (3, 4), . . . , (2k -5, 2k -4), we obtain 3 TDT Klein four-subgroups of A n by adjoining the product (2k -3, 2k -2)(2k -1, 2k) twice in three different ways, in essentially the same manner as in the proof Theorem 2.1. Now consider the remaining Klein four-subgroups of A n consisting of the transpositions (1, 2), (3, 4), . . . , (2k -1, 2k)
which cannot be obtained from the d n,k-2 -subgroups in the manner described above. For a subgroup S of this form, exactly two separate products in S contain (2k -3, 2k -2) as a factor, and exactly one product in S does not contain (2k -3, 2k -2) as a factor. The factor (2k -1, 2k) cannot be in a same product as (2k -3, 2k -2) twice within S, because otherwise S could be obtained from a d n,k-2 -subgroup, as above. But since there is exactly one product in S which does not contain (2k -3, 2k -2) as a factor, by the pigeonhole principle, it cannot be the case that that factor (2k -1, 2k) is never in a same product as (2k -3, 2k -2) within S. So, we may conclude that (2k -1, 2k) is an a same product as (2k -3, 2k -2) within S exactly once. So, by deleting the unique product of the form (2k -3, 2k -2)(2k -1, 2k) in each such remaining subgroup, and then replacing the unique remaining transposition of the form (2k -1, 2k) with (2k -3, 2k -2), we obtain:
(i) Exactly one multiset consisting of an additional "empty product", which corresponds to the unique subgroup {id, (2k -1, 2k), (12)(34) By comparing the positions of (78) and (9, 10), we find that reducing each expression of the form (9, 10) to (78) yields two copies of each subgroup in d n,k-1 . This shows that d 5,k = 2d 4,k + 3d 3,k + 1.
• • • (2k), (12)(34) • • • (2k -3, 2k -2)} ∼ = C 2 × C 2 which cannot
We may use the above EGF evaluation for A266503 together with a CAS such as Maple to prove the following asymptotic result conjectured by Kotěšovec in OEIS Foundation Inc. ( 2011):
A266503 n ∼ 2 -7 2 3 n 2 -1 exp n 3 - n 2 - 1 12 n n 2 .
Kotěšovec also discovered the following interesting recursive formula for the sequence (A266503 n ) n∈N :
(n -6)(n -4)(n -2)A266503 n = (2n -7)(2n 2 -14n + 15)A266503 n-1 + 3(n -7)(n -1)(n 2 -7n + 11)A266503 n-2 -(n -2)(n -1)(9n 2 -85n + 189)A266503 n-3 + (n -3)(n -2)(n -1)(n 2 -n -22)A266503 n-4 -2(n -4) 2 (n -3)(n -2)(n -1)A266503 n-5 -(n -5)(n -4)(n -3)(n -2)(n -1)(3n -19)A266503 n-6 + 3(n -6)(n -5)(n -4)(n -3)(n -2)(n -1)A266503 n-7 .
This recursion also may be proven using the EGF for A266503 together with a CAS.
A triple summation formula for TDT Klein four-subgroups
Let n ≥ 4. Let S be a ∆-closed 4-set consisting of the empty set together with subsets of a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n}. Then one of the following two situations must occur.
(i) The ∆-closed set S is of the form Given an arbitrary ∆-closed 4-set S consisting of subsets of a set consisting of pairwise disjoint 2subsets of [n], we define the partition type type(S) of S as the unique partition λ = type(S) of length 3 such that a largest set of 2-sets in S consists of λ1 2 pairwise disjoint 2-sets consisting of a total of λ 1 elements, a second-largest set of 2-sets in S consists of We define a Klein partition for n ∈ N as a partition λ such that λ = type(S) for some set S of the form described above. (c) The first entry λ 1 of λ satisfies λ 1 ≤ 2 n 2 ; and
(d) There exists an index i in 0, λ2
2 such that λ 1 + λ 2 -4i = λ 3 and λ 1 + λ 2 -2i ≤ n.
Proof: (=⇒) Suppose that λ is a Klein partition for n ∈ N. We thus have that λ = type(S) for some ∆-closed 4-set S consisting of ∅ together with subsets of a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n}. By definition of the partition type of a set of this form, we have that λ = type(S) must be of length 3 and must have even entries. The first entry λ 1 of λ is equal to the total number of elements among all 2-sets in a largest set of 2-sets in S. If n is even, then the maximal total number of elements among all 2-sets in a largest set of 2-sets in S is n, and otherwise, λ 1 is at most n -1. We thus have that λ 1 ≤ 2 n 2 . Let p 1 , p 2 , and p 3 be pairwise distinct nontrivial sets of 2-sets such that p 1 is a largest set of 2-sets in S, p 2 is a second-largest set of 2-sets in S, and p 3 is a smallest set of 2-sets in S. Note that it is possible that p 1 , p 2 , and p 3 are all sets of equal cardinality. Also observe that p 1 ∆p 2 = p 3 ∈ S. Suppose that p 1 and p 2 share exactly j ∈ N 0 2-sets in common. It is easily seen that j > 0 since S forms a group under the binary operation ∆ : S × S → S, and since the number of 2-sets of p 1 is greater than or equal to the number of 2-sets of p 2 and greater than or equal to the number of 2-sets of p 3 . Since λ 2 ≤ λ 1 it is thus clear that j ∈ 0, λ2 2 . Since p 1 and p 2 share exactly j 2-sets, we thus have that total number λ 3 of elements among all 2-sets in p 3 is (λ 1 -2j) + (λ 2 -2j) . Now consider the total number of elements among the 2-sets in either p 1 or p 2 . Since p 1 and p 2 share exactly j 2-sets, by the principle of inclusion-exclusion, we have that the total number of elements among the 2-sets in either p 1 or p 2 is equal to λ 1 + λ 2 -2j. We thus have that λ 1 + λ 2 -2j ≤ n, and we thus have that there exists an index i in 0, λ2 2 such that
λ 1 + λ 2 -4i = λ 3 , λ 1 + λ 2 -2i ≤ n
as desired.
(⇐=) Conversely, suppose that λ is a partition such that (a) The length of λ is 3; (b) Each entry of λ is even; (c) The first entry λ 1 of λ satisfies λ 1 ≤ 2 n 2 ; and (d) There exists an index i in 0, λ2 2 such that
λ 1 + λ 2 -4i = λ 3 , λ 1 + λ 2 -2i ≤ n.
Let p 1 denote the following set of pairwise disjoint 2-sets:
p 1 = {{1, 2}, {3, 4}, . . . , {λ 1 -1, λ 1 }}.
Since λ 1 is even (since each entry of λ is even), our above definition of p 1 is well-defined. Since λ 1 ≤ 2 n 2 , we thus have that p 1 is a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n}. Since there exists an integer i in the interval 0, λ2 2 such that
λ 1 + λ 2 -4i = λ 3 , λ 1 + λ 2 -2i ≤ n
by assumption, let j ∈ [0, λ2/2] denote a fixed integer satisfying λ1 + λ2 − 4j = λ3 and λ1 + λ2 − 2j ≤ n. Now let p2 denote the following set of pairwise disjoint 2-sets:
p2 = {{λ1 − 2j + 1, λ1 − 2j + 2}, {λ1 − 2j + 3, λ1 − 2j + 4}, . . . , {λ1 − 2j + λ2 − 1, λ1 − 2j + λ2}}.
The total number of elements among the distinct 2-sets in p 2 is thus
(λ 1 -2j + λ 2 ) -(λ 1 -2j + 1) + 1 = λ 2
and since λ1 and λ2 are both even, the above definition of p2 is thus well-defined. Furthermore, since λ1 + λ2 − 2j ≤ n, we thus have that p2 is a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n}, and the 2-sets in p1 ∪ p2 are pairwise disjoint by construction. Now set S := {∅, p1, p2, p1∆p2}, which is a ∆-closed 4-set of the required form, and consider the expression p1∆p2 ∈ S. The total number of elements among the 2-sets in the expression p1∆p2 is equal to
(λ1 − 2j) + (λ2 − 2j) = λ1 + λ2 − 4j,
and thus, since λ1 + λ2 − 4j = λ3, we have that the expression p1∆p2 ∈ S consists of λ3/2 pairwise disjoint 2-sets consisting of a total of λ3 elements. Now consider the expression type(S). The set p1 consists of λ1/2 2-sets, the set p2 consists of λ2/2 2-sets, and the set p1∆p2 consists of λ3/2 2-sets, with
λ1/2 ≥ λ2/2 ≥ λ3/2,
since λ = (λ1, λ2, λ3) is a partition. We thus have that type(S) = λ, thus proving that λ is a Klein partition for n ∈ N.
Lemma 4.2. For n ∈ N, the Klein partitions for n are precisely tuples of the form
(2a, 2b, 2a + 2b -4i)
such that:
1. 1 ≤ a ≤ ⌊n/2⌋;
2. 1 ≤ b ≤ a; and
3. max(⌈a + b − n/2⌉, ⌈a/2⌉) ≤ i ≤ min(b, ⌊(2a + 2b − 1)/4⌋).
Proof: Let n ∈ N. Let λ = (2a, 2b, 2a + 2b − 4i) be a tuple satisfying the conditions (1), (2), and (3) given above. We have that 1 ≤ λ2 ≤ λ1 from condition (2). We have that 2a + 2b − 4i ≤ 2b since ⌈a/2⌉ ≤ i, and hence a/2 ≤ i, from condition (3), and we thus have that λ3 ≤ λ2 ≤ λ1 as desired. We have that 1 ≤ 2a + 2b − 4i since i ≤ ⌊(2a + 2b − 1)/4⌋, and hence i ≤ (2a + 2b − 1)/4, from condition (3), and we thus have that
1 ≤ λ3 ≤ λ2 ≤ λ1,
thus proving that the tuple λ = (2a, 2b, 2a + 2b − 4i)
is in fact an integer partition. We proceed to make use of Lemma 4.1. Certainly, λ is of length 3, and each entry of λ is even. The first entry λ1 = 2a of λ satisfies λ1 = 2a ≤ 2⌊n/2⌋ since a ≤ ⌊n/2⌋ from condition (1). Certainly, λ1 + λ2 − 4i = 2a + 2b − 4i = λ3. Furthermore, we have that
λ1 + λ2 − 2i ≤ n, since ⌈a + b − n/2⌉ ≤ i, and hence a + b − n/2 ≤ i, from condition (3)
. By Lemma 4.1, we thus have that the partition λ is a Klein partition for n ∈ N.
Conversely, let λ be a Klein partition for n. By Lemma 4.1, we thus have that:
(a) The length (λ) of λ is 3;
(b) Each entry of λ is even;
(c) The first entry λ1 of λ satisfies λ1 ≤ 2⌊n/2⌋; and (d) There exists an integer i ∈ [0, λ2/2] such that:
λ 1 + λ 2 -4i = λ 3 , λ 1 + λ 2 -2i ≤ n.
Begin by rewriting the entries of λ = (λ1, λ2, λ3) as follows. By condition (b) we may thus write λ1 = 2a and λ2 = 2b, letting a, b ∈ N. Let i ∈ [0, λ2/2] be as given in condition (d) above. We thus have that λ3 = 2a + 2b − 4i, and we thus have that the integer partition λ = (λ1, λ2, λ3) is a tuple of the following form: λ = (2a, 2b, 2a + 2b − 4i).
Since λ is an integer partition, we have that 1 ≤ a. Since the first entry λ1 of λ satisfies λ1 ≤ 2⌊n/2⌋ by condition (c) above, we thus have that a ≤ ⌊n/2⌋, and we thus have that the first condition given in Lemma 4.2 holds. Since λ is an integer partition, we have that 1 ≤ b ≤ a, and we thus have that the second condition given in Lemma 4.2 holds.
From condition (d), we have that
λ 1 + λ 2 -2i ≤ n
and we thus have that 2a + 2b − 2i ≤ n, thus i ≥ a + b − n/2, and since i is an integer we thus have that ⌈a + b − n/2⌉ ≤ i. Since λ is a partition, we have that 2a + 2b − 4i ≤ 2b,
and we thus have that a/2 ≤ i and therefore ⌈a/2⌉ ≤ i. From the inequality ⌈a + b − n/2⌉ ≤ i together with the inequality ⌈a/2⌉ ≤ i we thus have that
max(⌈a + b − n/2⌉, ⌈a/2⌉) ≤ i.
Since i ∈ [0, λ2/2], we thus have that i ≤ b. Since λ is an integer partition, we have that 1 ≤ λ3. Therefore, 1 ≤ 2a + 2b − 4i. We thus have that i ≤ (2a + 2b − 1)/4, and since i is an integer we thus have that i ≤ ⌊(2a + 2b − 1)/4⌋. From the inequality i ≤ ⌊(2a + 2b − 1)/4⌋ together with the inequality i ≤ b, we thus have that
i ≤ min(b, ⌊(2a + 2b − 1)/4⌋),
thus proving that condition (3) given in Lemma 4.2 holds.
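Lemma 4.2 is easy to check computationally. The following Python sketch (ours, not from the paper; the integer forms of the ceilings and floors are spelled out in comments) enumerates the Klein partitions for a given n directly from the stated bounds, and reproduces the counts (0, 0, 0, 1, 1, 3, 3, 6, 6, 10, 10, 16, . . .) recorded in the Conclusion below.

```python
def klein_partitions(n):
    """Enumerate the Klein partitions for n via the bounds of Lemma 4.2."""
    parts = []
    for a in range(1, n // 2 + 1):                  # condition (1): 1 <= a <= floor(n/2)
        for b in range(1, a + 1):                   # condition (2): 1 <= b <= a
            lo = max(a + b - n // 2, (a + 1) // 2)  # ceil(a + b - n/2) and ceil(a/2)
            hi = min(b, (2 * a + 2 * b - 1) // 4)   # min(b, floor((2a + 2b - 1)/4))
            for i in range(lo, hi + 1):             # condition (3)
                parts.append((2 * a, 2 * b, 2 * a + 2 * b - 4 * i))
    return parts

print([len(klein_partitions(n)) for n in range(1, 13)])
# -> [0, 0, 0, 1, 1, 3, 3, 6, 6, 10, 10, 16]
```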
Definition 4.1. Let λ be a partition. We define the maximum repetition length of λ as the maximum natural number m such that
λ i+1 = λ i+2 = • • • = λ i+m
for some i ∈ N 0 . The maximum repetition length of a partition λ is denoted by repeat(λ).
Example 4.1. For a partition λ of length three, repeat(λ) = 1 if all three entries of λ are pairwise distinct, repeat(λ) = 2 if two entries of λ are equal but different from the remaining (third) entry, and repeat(λ) = 3 if λ1 = λ2 = λ3.
Lemma 4.3. Letting λ be a fixed Klein partition, the number of ∆-closed 4-sets consisting of ∅ together with subsets of a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n} of partition type λ is
(1/(repeat(λ))!) · [∏_{j=0}^{λ1/2−1} C(n−2j, 2)] / (λ1/2)! · C(λ1/2, (λ1+λ2−λ3)/4) · [∏_{j=0}^{(λ2−λ1+λ3)/4−1} C(n−λ1−2j, 2)] / ((λ2−λ1+λ3)/4)!,
where C(m, r) denotes the binomial coefficient.
Proof: There are
∏_{j=0}^{λ1/2−1} C(n−2j, 2) / (λ1/2)!
distinct sets of λ1/2 pairwise disjoint 2-subsets of {1, 2, . . . , n}. Let i denote the unique index in [0, λ2/2] such that λ1 + λ2 − 4i = λ3 and λ1 + λ2 − 2i ≤ n. We thus have that there are precisely i "overlap" 2-sets shared among the largest set of sets in a 4-set S of partition type λ and the second-largest set of sets in S. There are C(λ1/2, i) ways of choosing these i overlap 2-sets within the largest set, and there are
∏_{j=0}^{λ2/2−i−1} C(n−λ1−2j, 2) / (λ2/2 − i)!
ways of choosing the remaining 2-sets for the second-largest set in S, noting that λ2/2 − i = (λ2 − λ1 + λ3)/4. Since each 4-set of partition type λ arises from exactly (repeat(λ))! such ordered choices of a largest and a second-largest set, the desired formula follows.
Theorem 4.4. The number of ∆-closed 4-sets S such that there exists a set T consisting of pairwise disjoint 2-subsets of [n] such that each element in S is contained in T is
Σ_{i=1}^{⌊n/2⌋} Σ_{j=1}^{i} Σ_{k=max(⌈i+j−n/2⌉, ⌈i/2⌉)}^{min(j, ⌊(2i+2j−1)/4⌋)} n! 2^{k−i−j} / (k! (i−k)! (j−k)! (n−2i−2j+2k)! (δ_{i,j} + δ_{i,2k} + 1)!)
for arbitrary n ∈ N.
Proof: From the above lemma, we have that the number of ∆-closed 4-sets consisting of the empty set together with subsets of a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n} is
Σ_λ (1/(repeat(λ))!) · [∏_{j=0}^{λ1/2−1} C(n−2j, 2)] / (λ1/2)! · C(λ1/2, (λ1+λ2−λ3)/4) · [∏_{j=0}^{(λ2−λ1+λ3)/4−1} C(n−λ1−2j, 2)] / ((λ2−λ1+λ3)/4)!,
where the above sum is over all Klein partitions λ for n. By Lemma 4.2, we thus have that the above summation may be rewritten as a triple summation over the indices a, b, and i of Lemma 4.2, with λ = (2a, 2b, 2a + 2b − 4i). Rewriting the above expression by evaluating the products in the summand yields the desired result.
The integer sequence (0, 0, 0, 3, 15, 105, 525, 3255, 17703, 112455, 669735, 4485195, 29023995, 205768563, . . .) given by the number of ∆-closed 4-sets consisting of the empty set together with subsets of a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n} is given in the On-Line Encyclopedia of Integer Sequences as sequence A267840, which we contributed. For example, there are A267840(5) = 15 symmetric-difference-closed 4-sets of this form in the case whereby n = 5:
{∅, {{1, 2}}, {{3, 4}}, {{1, 2}, {3, 4}}}, {∅, {{1, 2}}, {{3, 5}}, {{1, 2}, {3, 5}}}, {∅, {{1, 2}}, {{4, 5}}, {{1, 2}, {4, 5}}}, {∅, {{1, 3}}, {{2, 4}}, {{1, 3}, {2, 4}}}, {∅, {{1, 3}}, {{2, 5}}, {{1, 3}, {2, 5}}}, {∅, {{1, 3}}, {{4, 5}}, {{1, 3}, {4, 5}}}, {∅, {{1, 4}}, {{2, 3}}, {{1, 4}, {2, 3}}}, {∅, {{1, 4}}, {{2, 5}}, {{1, 4}, {2, 5}}}, {∅, {{1, 4}}, {{3, 5}}, {{1, 4}, {3, 5}}}, {∅, {{1, 5}}, {{2, 3}}, {{1, 5}, {2, 3}}}, {∅, {{1, 5}}, {{2, 4}}, {{1, 5}, {2, 4}}}, {∅, {{1, 5}}, {{3, 4}}, {{1, 5}, {3, 4}}}, {∅, {{2, 3}}, {{4, 5}}, {{2, 3}, {4, 5}}}, {∅, {{2, 4}}, {{3, 5}}, {{2, 4}, {3, 5}}}, {∅, {{2, 5}}, {{3, 4}}, {{2, 5}, {3, 4}}}.
It is natural to use Lemma 4.2 to determine "even" analogues of the above results.
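Both counts above are small enough to validate by brute force. The Python sketch below (entirely ours; all helper names are made up) enumerates ∆-closed 4-sets {∅, A, B, A∆B} directly and compares the count with the triple summation of Theorem 4.4; the even-order variant anticipates Theorem 4.6 below. The set `fours` deduplicates the three ordered pairs that generate each 4-set.

```python
from itertools import combinations
from math import factorial

def matchings(n):
    """All nonempty sets of pairwise disjoint 2-subsets of {1, ..., n}."""
    pairs = [frozenset(p) for p in combinations(range(1, n + 1), 2)]
    found = set()
    def grow(current, used, start):
        if current:
            found.add(frozenset(current))
        for t in range(start, len(pairs)):
            if not (pairs[t] & used):
                grow(current | {pairs[t]}, used | pairs[t], t + 1)
    grow(frozenset(), frozenset(), 0)
    return found

def pairwise_disjoint(edges):
    """True if the given 2-sets are pairwise disjoint."""
    support = set()
    for e in edges:
        if e & support:
            return False
        support |= e
    return True

def brute(n, even_only=False):
    """Count Delta-closed 4-sets {emptyset, A, B, A Delta B} contained in a common disjoint family."""
    fams = [f for f in matchings(n) if not (even_only and len(f) % 2)]
    fours = set()
    for A, B in combinations(fams, 2):
        if pairwise_disjoint(A | B):
            fours.add(frozenset({frozenset(), A, B, A ^ B}))
    return len(fours)

def triple_sum(n):
    """The triple summation of Theorem 4.4, in exact integer arithmetic."""
    total = 0
    for i in range(1, n // 2 + 1):
        for j in range(1, i + 1):
            lo = max(i + j - n // 2, (i + 1) // 2)
            hi = min(j, (2 * i + 2 * j - 1) // 4)
            for k in range(lo, hi + 1):
                rep = factorial((i == j) + (i == 2 * k) + 1)
                total += factorial(n) // (2 ** (i + j - k) * factorial(k) * factorial(i - k)
                                          * factorial(j - k) * factorial(n - 2 * i - 2 * j + 2 * k) * rep)
    return total

for n in range(1, 9):
    print(n, brute(n), triple_sum(n), brute(n, even_only=True))
# Columns 2 and 3 agree and match A267840: 0, 0, 0, 3, 15, 105, 525, 3255;
# column 4 matches A266503: 0, 0, 0, 0, 0, 15, 105, 735.
```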
Lemma 4.5. For n ∈ N, the Klein partitions for n corresponding to 4-sets consisting of ∅ together with even-order subsets of a set consisting of pairwise disjoint 2-subsets of [n] are precisely tuples of the form (4d, 4e, 4d + 4e − 4i) such that:
1. 1 ≤ d ≤ ⌊n/4⌋;
2. 1 ≤ e ≤ d; and
3. max(⌈2d + 2e − n/2⌉, d) ≤ i ≤ min(2e, d + e − 1).
Proof:
The above lemma follows immediately from Lemma 4.2 by letting a = 2d and b = 2e.
Theorem 4.6. The number of ∆-closed 4-sets consisting of even-order subsets of a set consisting of pairwise disjoint 2-subsets of {1, 2, . . . , n} is
Σ_{d=1}^{⌊n/4⌋} Σ_{e=1}^{d} Σ_{i=max(⌈2d+2e−n/2⌉, d)}^{min(2e, d+e−1)} n! 2^{i−2d−2e} / (i! (2d−i)! (2e−i)! (n−4d−4e+2i)! (δ_{d,e} + δ_{d,i} + 1)!).
Proof: The above theorem follows from Lemma 4.3 by analogy with Theorem 4.4, using Lemma 4.5 in place of Lemma 4.2.
The corresponding integer sequence is given below, and is recorded as sequence A266503, which we contributed to the OEIS (OEIS Foundation Inc. (2011)).
(0, 0, 0, 0, 0, 15, 105, 735, 4095, 26775, 162855, 1105335, 7187895, 51126075, 356831475, . . .) .
Conclusion
The number of Klein partitions for n = 1, 2, . . . is given by the following integer sequence: (0, 0, 0, 1, 1, 3, 3, 6, 6, 10, 10, 16, 16, 23, 23, 32, 32, 43, 43, 56, 56, 71, 71, 89, . . .) .
We have previously noted that the corresponding integer sequence (0, 1, 3, 6, 10, 16, 23, 32, 43, 56, 71, 89, 109, 132, 158, . . .) coincides with the sequence A034198 given by the number of binary codes of a given length with 3 words, as indicated in OEIS Foundation Inc. (2011). We currently leave it as an open problem to use Lemma 4.2 to prove this. Proving this is nontrivial in the following sense. It may be difficult to construct a closed-form formula for the number of Klein partitions of n ∈ N, since the definition of a Klein partition is somewhat complicated. Moreover, it may not be obvious how to relate such a formula to a known formula for the sequence A034198.
Interestingly, there are known connections between the OEIS sequence A034198 and Klein four-subgroups. In particular, A034198(n) is the number of orbits of Klein subgroups of C2^n under automorphisms of C2^n, and A034198(n) is the number of faithful representations of K4 = C2^2 of dimension n up to equivalence by automorphisms of C2^2 (OEIS Foundation Inc. (2011)).
Acknowledgments
The author would like to thank Jeffrey Shallit for some useful feedback, Václav Kotěšovec for a useful discussion concerning the OEIS sequence A267840, and two anonymous reviewers for many useful comments concerning this paper.
| 35,171 | ["9320"] | ["55054"] |
01492343 | en | ["math"] | 2024/03/04 23:41:50 | 2019 | https://hal.science/hal-01492343/file/tv-wass-flow4.pdf
Clarice Poon
On the total variation Wasserstein gradient flow and the TV-JKO scheme
Keywords: total variation, Wasserstein gradient flows, JKO scheme, fourth-order evolution equations. MS Classification: 35G31, 49N15
We study the JKO scheme for the total variation, characterize the optimizers, prove some of their qualitative properties (in particular a sort of maximum principle and the regularity of level sets). We study in detail the case of step functions. Finally, in dimension one, we establish convergence as the time step goes to zero to a solution of a fourth-order nonlinear evolution equation.
Introduction
Variational schemes based on total variation are extremely popular in image processing for denoising purposes, in particular the seminal work of Rudin, Osher and Fatemi [START_REF] Rudin | Nonlinear total variation based noise removal algorithms[END_REF] has been extremely influential and is still the object of an intense stream of research, see [START_REF] Chambolle | Geometric properties of solutions to the total variation denoising problem[END_REF] and the references therein. Continuous-time counterparts are well-known to be related to the L 2 gradient flow of the total variation, see Bellettini, Casselles and Novaga [START_REF] Bellettini | The total variation flow in R n[END_REF] and the mean-curvature flow, see Evans and Spruck [START_REF] Evans | Motion of level sets by mean curvature[END_REF]. The gradient flow of the total variation for other Hilbertian structures may be natural as well and in particular the H -1 case, leads to a singular fourth-order evolution equation studied by Giga and Giga [START_REF] Giga | Very singular diffusion equations: second and fourth order problems[END_REF], Giga, Kuroda and Matsuoka [START_REF] Giga | Fourth-order total variation flow with Dirichlet condition: characterization of evolution and extinction time estimates[END_REF]. In the present work, we consider another metric, namely the Wasserstein one.
Given an open subset Ω of R d and ρ ∈ L 1 (Ω), recall that the total variation of ρ is given by
J(ρ) := sup Ω div(z)ρ : z ∈ C 1 c (Ω), z L ∞ ≤ 1 (1.1)
and BV(Ω) is by definition the subspace of L 1 (Ω) consisting of those ρ's in L 1 (Ω) such that J(ρ) is finite. The following fourth-order nonlinear evolution equation
∂ t ρ + div ρ ∇div ∇ρ |∇ρ| = 0, on (0, T ) × Ω, ρ | t=0 = ρ 0 , (1.2)
supplemented by the zero-flux boundary condition
ρ∇div ∇ρ |∇ρ| • ν = 0 on ∂Ω (1.3)
has been proposed in [START_REF] Burger | Regularized regression and density estimation based on optimal transport[END_REF] for the purpose of denoising image densities. Numerical schemes for approximating the solutions of this equation have been investigated in [START_REF] Burger | Regularized regression and density estimation based on optimal transport[END_REF][START_REF] Düring | A high-contrast fourthorder pde from imaging: numerical solution by ADI splitting[END_REF][START_REF] Benning | A primal-dual approach for a total variation Wasserstein flow[END_REF]. One should of course interpret the nonlinear term div( ∇ρ |∇ρ| ) as the negative of an element of the subdifferential of J at ρ. At least formally, when ρ 0 is a probability density on Ω, (1.2)-(1.3) can be viewed as the Wasserstein gradient flow of J (we refer to the textbooks of Ambrosio, Gigli, Savaré [START_REF] Ambrosio | Gradient flows: in metric spaces and in the space of probability measures[END_REF] and Santambrogio [START_REF] Santambrogio | Optimal transport for applied mathematicians[END_REF], for a detailed exposition). Following the seminal work of Jordan, Kinderlehrer and Otto [START_REF] Jordan | The variational formulation of the Fokker-Planck equation[END_REF] for the Fokker-Planck equation, it is reasonable to expect that solutions of (1.2) can be obtained, at the limit τ → 0 + , of the JKO Euler implicit scheme:
ρ τ 0 = ρ 0 , ρ τ k+1 ∈ argmin 1 2τ W 2 2 (ρ τ k , ρ) + J(ρ), ρ ∈ BV(Ω) ∩ P 2 (Ω) (1.4)
where P 2 (Ω) is the space of Borel probability measures Ω with finite second moment and W 2 is the quadratic Wasserstein distance:
W 2 2 (ρ 0 , ρ 1 ) := inf γ∈Π(ρ 0 ,ρ 1 ) R d ×R d |x -y| 2 dγ(x, y) , (1.5)
Π(ρ 0 , ρ 1 ) denoting the set of transport plans between ρ 0 and ρ 1 i.e. the set of probability measures on R d × R d having ρ 0 and ρ 1 as marginals. Our aim is to study in detail the discrete TV-JKO scheme (1.4) as well as its connection with (suitable weak solutions) of the PDE (1.2). Although the assertion that (1.2) is the TV Wasserstein gradient flow is central to the numerical schemes described in [START_REF] Burger | Regularized regression and density estimation based on optimal transport[END_REF][START_REF] Düring | A high-contrast fourthorder pde from imaging: numerical solution by ADI splitting[END_REF][START_REF] Benning | A primal-dual approach for a total variation Wasserstein flow[END_REF], there has been so far, to the best of our knowledge, no theoretical justification of this fact. Fourth-order equations which are Wasserstein gradient flows of functionals involving the gradient of ρ, such as the Dirichlet energy or the Fisher information, have been studied by McCann, Matthes and Savaré [START_REF] Matthes | A family of nonlinear fourth order equations of gradient flow type[END_REF] who found a new method the flow interchange technique to prove higher-order compactness estimates, we refer to [START_REF] Loibl | Existence of weak solutions to a class of fourth order partial differential equations with Wasserstein gradient structure[END_REF] for a recent reference on this topic. The total variation is however too singular for such arguments to be directly applicable, as far as we know.
The paper is organized as follows. In section 2, we start with the discussion of a few examples. Section 3 is devoted to some properties of solutions of JKO steps and in particular a maximum principle based on a result of [START_REF] De Philippis | BV estimates in optimal transportation and applications[END_REF]. Section 4 establishes optimality conditions for JKO steps thanks to an entropic regularization scheme. Section 5 discusses regularity properties of the boundaries of the level sets of JKO solutions. In section 6, we address in detail the case of step functions in dimension one. Finally, in section 7, we prove convergence of the JKO scheme, as τ → 0 + , in the case of a strictly positive and bounded initial condition on a bounded interval of the real line.
Some examples
We first recall the Kantorovich dual formulation of W 2 2 :
(1/2) W2²(µ0, µ1) = sup { ∫_{Rd} ψ dµ0 + ∫_{Rd} ϕ dµ1 : ψ(x) + ϕ(y) ≤ |x − y|²/2 } (2.1)
an optimal pair (ψ, ϕ) for this problem is called a pair of Kantorovich potentials. The existence of Kantorovich potentials is well-known and such potentials can be taken to be conjugates of each other, i.e. such that
ϕ(x) = inf y∈R d { 1 2 |x -y| 2 -ψ(y)}, ψ(y) = inf x∈R d { 1 2 |x -y| 2 -ϕ(x)},
which implies that ϕ and ψ are semi-concave (more precisely 1 2 |.| 2 -ϕ is convex). If µ 1 is absolutely continuous with respect to the d-dimensional Lebesgue measure, ϕ is differentiable µ 1 a.e. and the map T = id -∇ϕ is the gradient of a convex function pushing forward µ 1 to µ 0 which is in fact the optimal transport between µ 0 and µ 1 thanks to Brenier's theorem [START_REF] Brenier | Polar factorization and monotone rearrangement of vector-valued functions[END_REF]. In such a case, we will simply refer to ϕ as a Kantorovich potential between µ 1 and µ 0 . We refer the reader to [START_REF] Villani | Topics in optimal transportation[END_REF] and [START_REF] Santambrogio | Optimal transport for applied mathematicians[END_REF] for details.
In this section, we will consider some explicit examples which rely on the following sufficient optimality condition (details for a rigorous derivation of the Euler-Lagrange equation for JKO steps will be given in section 4) in the case of the whole space i.e. Ω = R d .
Lemma 2.1. Let ρ0 ∈ P2(Rd), τ > 0 and Ω = Rd (so J is the total variation on the whole space). If ρ1 ∈ BV(Rd) ∩ P2(Rd) is such that
ϕ/τ + div(z) ≥ 0, with equality ρ1-a.e., (2.2)
where ϕ is a Kantorovich potential between ρ 1 and ρ 0 and
z ∈ C 1 (R d ), with z L ∞ ≤ 1, div(z) ∈ L d , and
J(ρ 1 ) = R d div(z)ρ 1 . (2.3)
Then, setting
Φ τ,ρ 0 (ρ) := 1 2τ W 2 2 (ρ 0 , ρ) + J(ρ), ∀ρ ∈ BV(R d ) ∩ P 2 (R d ) (2.4) one has Φ τ,ρ 0 (ρ 1 ) ≤ Φ τ,ρ 0 (ρ), ∀ρ ∈ BV(R d ) ∩ P 2 (R d ). Proof. For all ρ ∈ BV(R d )∩P 2 (R d ), J(ρ) ≥ R d div(z)ρ = J(ρ 1 )+ R d div(z)(ρ- ρ 1 )
, and it follows from the Kantorovich duality formula that
(1/(2τ)) W2²(ρ0, ρ) ≥ (1/(2τ)) W2²(ρ0, ρ1) + ∫_{Rd} (ϕ/τ)(ρ − ρ1).
The claim then directly follows from (2.2).
The case of a characteristic function
A simple illustration of Lemma 2.1 in dimension 1 concerns the case of a uniform ρ 0 , (here and in the sequel we shall denote by χ A the characteristic function of the set A):
ρ 0 = ρ α 0 , α 0 > 0, ρ α := 1 2α χ [-α,α] .
It is natural to make the ansatz that the minimizer of Φ τ,ρ 0 defined by (2.4) remains of the form ρ 1 = ρ α 1 for some α 1 > α 0 . The optimal transport between ρ α 1 and ρ 0 being the linear map T = α 0 α 1 id, a direct computation gives
Φ τ,ρ 0 (ρ α 1 ) = 1 α 1 + 1 6τ (α 1 -α 0 ) 2
which is minimal when α 1 is the only root in (α 0 , +∞) of
α 2 1 (α 1 -α 0 ) = 3τ. (2.5)
To check that this is the correct guess, we shall check that the conditions of Lemma 2.1 are met. First define the Kantorovich potential
ϕ(x) = 1 2α 1 (α 1 -α 0 )x 2 - 3τ 2α 1
and z 1 by
τ z 1 (x) := - (α 1 -α 0 ) 6α 1 x 3 + 3τ x 2α 1 , x ∈ [-α 1 , α 1 ]
extended by 1 on [α1, +∞) and by −1 on (−∞, −α1]. Then −1 ≤ z1 ≤ 1 (use the fact that it is odd and nondecreasing on [0, α1] thanks to (2.5)); also
z1′(±α1) = 0, so that z1 ∈ C1(R), and z1(α1) = 1, z1(−α1) = −1, hence J(ρ1) = −∫_R z1 dDρ1 = ∫_R z1′ ρ1
(here and in the sequel Dρ1 denotes the Radon measure which is the distributional derivative of the BV function ρ1). Moreover τ z1′ + ϕ ≥ 0, with equality on [−α1, α1]. The optimality of ρ1 = ρα1 then directly follows from Lemma 2.1.
Of course, the argument can be iterated so as to obtain the full TV-JKO sequence:
ρτk+1 = argmin Φτ,ρτk = ((ατk+1/ατk) id) # ρτk = ((ατk+1/α0) id) # ρ0, where ατk is defined inductively by (ατk+1 − ατk)(ατk+1)² = 3τ, ατ0 = α0, which is nothing but the implicit Euler discretization of the ODE α′α² = 3, α(0) = α0, whose solution is α(t) = (α0³ + 9t)^{1/3}. Extending ρτk in a piecewise constant way: ρτ(t) = ρτk+1 for t ∈ (kτ, (k + 1)τ], it is not difficult to check that ρτ converges (in L∞((0, T), (P2(R), W2)) and in Lp((0, T) × R) for any p ∈ (1, ∞) and any T > 0) to ρ given by ρ(t, .) = ((α(t)/α0) id) # ρ0. Since v(t, x) = (α′(t)/α(t)) x is the velocity field associated to X(t, x) = (α(t)/α0) x, ρ solves the continuity equation
∂ t ρ + (ρv) x = 0.
In addition, ρv = −ρ zxx where
z(t, x) = −(α′(t)/(6α(t))) x³ + 3x/(2α(t)), x ∈ [−α(t), α(t)],
extended by 1 (respectively −1) on [α(t), +∞) (respectively (−∞, −α(t)]).
The function z is C 1 , z L ∞ ≤ 1 and z • Dρ = -|Dρ| (in the sense of measures). In other words the limit ρ of ρ τ satisfies
∂ t ρ -(ρz xx ) x = 0
with |z| ≤ 1 and z • Dρ = -|Dρ| which is the natural weak form of (1.2).
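The recursion (ατk+1 − ατk)(ατk+1)² = 3τ and the limit α(t) = (α0³ + 9t)^{1/3} are easy to confirm numerically; here is a minimal Python sketch (ours, not from the paper), solving the cubic at each implicit Euler step.

```python
import numpy as np

def step(alpha, tau):
    """Unique root r > alpha of r^2 (r - alpha) = 3*tau, i.e. r^3 - alpha r^2 - 3 tau = 0."""
    roots = np.roots([1.0, -alpha, 0.0, -3.0 * tau])
    return max(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > alpha)

alpha0, tau, T = 1.0, 1e-3, 1.0
a = alpha0
for _ in range(int(T / tau)):
    a = step(a, tau)
print(a, (alpha0 ** 3 + 9 * T) ** (1 / 3))   # implicit Euler iterate vs. exact alpha(T)
```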
Instantaneous creation of discontinuities
We now consider the case where ρ 0 (x) = (1 -|x|) + and will show that the JKO scheme instantaneously creates a discontinuity at the level of ρ 1 , the minimizer of Φ τ,ρ 0 when τ is small enough. We indeed look for ρ 1 in the form:
ρ 1 (x) = 1 -β/2 if |x| < β, (1 -|x|) + if |x| ≥ β,
for some well-chosen β ∈ (0, 1). The optimal transport map T between such a ρ 1 and ρ 0 is odd and given explicitly by
T (x) = 1 -1 -x(2 -β) if x ∈ [0, β), x if x ≥ β.
The Kantorovich potential which vanishes at β (extended in an even way to R -) is then given by
ϕ(x) = x 2 2 -x -(1-x(2-β)) 3/2 3(1-β/2) + C if x ∈ [0, β), 0 if x > β,
where
C = - β 2 2 + β + 2(1 -β) 3 3(2 -β) .
Let us now integrate τ z′ = −ϕ on [0, β] with initial condition z(0) = 0, i.e. for x ∈ [0, β]:
τ z(x) = −x³/6 + x²/2 − (4/(15(2 − β)²)) (1 − (2 − β)x)^{5/2} + (β²/2 − β − 2(1 − β)³/(3(2 − β))) x + 4/(15(2 − β)²).
Figure 1: The probability density functions ρ0 and ρ1 from section 2.2.
Note that z is nondecreasing on [0, β] (because ϕ(0) < 0, ϕ(β) = 0 and ϕ is convex on [0, β], so that ϕ ≤ 0 on [0, β]); our aim now is to find β ∈ (0, 1) in such a way that z(β) = 1, i.e., replacing in the previous formula,
τ = β 3 3 - β 2 2 + 4(1 -(1 -β) 5 ) 15(2 -β) 2 - 2(1 -β) 3 β 3(2 -β)
the right-hand side is a continuous function of β ∈ [0, 1], taking value 0 for β = 0 and 1/10 for β = 1; hence, as soon as 10τ < 1, one may find a β ∈ (0, 1) such that indeed z(β) = 1. Extend then z by 1 on [β, +∞) and to R− in an odd way. We have then built a function z which is C1 (since ϕ(β) = 0), such that |z| ≤ 1, z · Dρ1 = −|Dρ1| and such that z′ + ϕ/τ = 0. Thanks to Lemma 2.1, we conclude that ρ1 is optimal. This example shows that discontinuities may appear at the very first iteration of the TV-JKO scheme.
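Since the right-hand side above is an explicit continuous function of β vanishing at 0 and equal to 1/10 at 1, the jump half-width β can be computed by bisection for any 0 < τ < 1/10; a small sketch (ours, with made-up function names):

```python
def g(beta):
    """Right-hand side of the relation tau = g(beta) derived above."""
    return (beta ** 3 / 3 - beta ** 2 / 2
            + 4 * (1 - (1 - beta) ** 5) / (15 * (2 - beta) ** 2)
            - 2 * (1 - beta) ** 3 * beta / (3 * (2 - beta)))

def beta_of_tau(tau, tol=1e-12):
    """A root of g(beta) = tau in (0, 1) by bisection; requires 0 < tau < g(1) = 1/10."""
    lo, hi = 0.0, 1.0                       # g(0) = 0 < tau and g(1) = 1/10 > tau
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < tau else (lo, mid)
    return 0.5 * (lo + hi)

print(beta_of_tau(0.01))                    # half-width of the flat region for tau = 0.01
```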
Maximum principle for JKO steps
Throughout this section, we assume that Ω is a convex open bounded subset of R d and denote P ac (Ω) the set of Borel probability measures on Ω that are absolutely continuous with respect to the Lebesgue measure (and will use the same notation for µ ∈ P ac (Ω) both for the measure µ and its density). Given ρ 0 ∈ P ac (Ω) and τ > 0, we consider one step of the TV-JKO scheme:
inf ρ∈Pac(Ω) 1 2τ W 2 2 (ρ 0 , ρ) + J(ρ) . (3.1)
It is easy by the direct method of the calculus of variations to see that (3.1) has at least one solution, moreover J being convex and ρ → W 2 2 (ρ, ρ 0 ) being strictly convex whenever ρ 0 ∈ P ac (Ω) (see [START_REF] Santambrogio | Optimal transport for applied mathematicians[END_REF]), the minimizer is in fact unique, and in the sequel we denote it by ρ 1 .
Preliminaries
Our aim is to deduce some bounds on ρ 1 from bounds on ρ 0 . To do so, we shall combine some convexity arguments and a remarkable BV estimate due to De Philippis et al. [START_REF] De Philippis | BV estimates in optimal transportation and applications[END_REF]. First we recall the notion of generalized geodesic from Ambrosio, Gigli and Savaré [START_REF] Ambrosio | Gradient flows: in metric spaces and in the space of probability measures[END_REF]. Given µ, µ 0 and µ 1 in P ac (Ω), and denoting by T 0 (respectively T 1 ) the optimal transport (Brenier) map between µ and µ 0 (respectively µ 1 ), the generalized geodesic with base µ joining µ 0 to µ 1 is by definition the curve of measures:
µ t := ((1 -t)T 0 + tT 1 ) # µ, t ∈ [0, 1]. (3.2)
A key property of these curves introduced in [START_REF] Ambrosio | Gradient flows: in metric spaces and in the space of probability measures[END_REF] is the strong convexity of the squared distance estimate:
W 2 2 (µ, µ t ) ≤ (1 -t)W 2 2 (µ, µ 0 ) + tW 2 2 (µ, µ 1 ) -t(1 -t)W 2 2 (µ 0 , µ 1 ). (3.3)
It is well-known that if G : R + → R ∪ {+∞} is a proper convex lower semicontinuous (l.s.c.) internal energy density, bounded from below such that G(0) = 0 and which satisfies McCann's condition (see [START_REF] Mccann | A convexity principle for interacting gases[END_REF])
λ ∈ R + → λ d G(λ -d ) is convex nonincreasing (3.4)
then defining the generalized geodesic curve (µ t ) t∈[0,1] by (3.2), one has
Ω G(µ t (x))dx ≤ (1 -t) Ω G(µ 0 (x))dx + t Ω G(µ 1 (x))dx. (3.5)
In particular L p and uniform bounds are stable along generalized geodesics:
‖µt‖^p_{Lp} ≤ (1 − t)‖µ0‖^p_{Lp} + t‖µ1‖^p_{Lp}, ‖µt‖_{L∞} ≤ max(‖µ0‖_{L∞}, ‖µ1‖_{L∞}), (3.6)
and
∫Ω µt(x) log(µt(x)) dx ≤ (1 − t) ∫Ω µ0(x) log(µ0(x)) dx + t ∫Ω µ1(x) log(µ1(x)) dx. (3.7)
An immediate consequence of (3.3) (see chapter 4 of [2] for general contraction estimates) is the following
Lemma 3.1. Let K be a nonempty subset of Pac(Ω), let µ0 ∈ K, µ1 ∈ Pac(Ω); if
µ̃1 ∈ argmin_{µ∈K} W2²(µ1, µ)
and if the generalized geodesic with base µ 1 joining µ 0 to μ1 remains in K then
W2²(µ0, µ̃1) ≤ W2²(µ0, µ1) − W2²(µ1, µ̃1). (3.8)
Proof. Since µt ∈ K, we have W2²(µ1, µ̃1) ≤ W2²(µ1, µt); applying (3.3) to the generalized geodesic with base µ1 joining µ0 to µ̃1, we thus get
(1 -t)W 2 2 (µ 1 , μ1 ) ≤ (1 -t)W 2 2 (µ 1 , µ 0 ) -t(1 -t)W 2 2 (µ 0 , μ1 ),
dividing by (1 -t) and then taking t = 1 therefore gives the desired result.
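In dimension one all of this is very concrete: W2 is the L2 distance between quantile functions, and the generalized geodesic (3.2) corresponds to linear interpolation of quantiles (independently of the base point), so (3.3) can be tested numerically, and in fact holds there with equality. A small self-contained Python sketch (ours; the uniform densities are arbitrary test cases):

```python
import numpy as np

s = (np.arange(10000) + 0.5) / 10000       # mass levels at which quantiles are sampled

def q_uniform(a, b):
    """Quantile function of the uniform density on (a, b)."""
    return a + (b - a) * s

def w2_sq(qa, qb):
    """Squared Wasserstein distance via quantiles (exact in 1-D, up to quadrature)."""
    return np.mean((qa - qb) ** 2)

q_mu, q0, q1 = q_uniform(0.0, 1.0), q_uniform(0.2, 1.5), q_uniform(-0.3, 0.8)
for t in (0.0, 0.3, 0.7, 1.0):
    qt = (1 - t) * q0 + t * q1             # generalized geodesic, written in quantile form
    lhs = w2_sq(q_mu, qt)
    rhs = (1 - t) * w2_sq(q_mu, q0) + t * w2_sq(q_mu, q1) - t * (1 - t) * w2_sq(q0, q1)
    print(t, lhs, rhs)                     # the two columns coincide
```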
The other result we shall use to derive bounds is a BV estimate of De Philippis et al. [START_REF] De Philippis | BV estimates in optimal transportation and applications[END_REF], which states that given µ, ∈ P ac (Ω) ∩ BV(Ω), and G : R + → R ∪ {+∞}, proper convex l.s.c., the solution of inf ρ∈Pac(Ω)
1 2 W 2 2 (µ, ρ) + Ω G(ρ(x))dx (3.9)
is BV with the bound J(ρ) ≤ J(µ).
(3.10)
Taking in particular,
G(ρ) := 0 if ρ ≤ M , +∞ otherwise,
this implies that the Wasserstein projection of µ onto the set defined by the constraint ρ ≤ M has a smaller total variation than µ.
Maximum and minimum principles
Theorem 3.2. Let ρ 0 ∈ P ac (Ω) ∩ L ∞ (Ω) and let ρ 1 be the solution of (3.1), then
ρ 1 ∈ L ∞ (Ω) with ρ 1 L ∞ (Ω) ≤ ρ 0 L ∞ (Ω) . (3.11)
Proof. Thanks to (3.6) the set
K := {ρ ∈ P ac (Ω)∩L p (Ω) : ρ ≤ ρ 0 L ∞ (Ω)
a.e.} has the property that the generalized geodesics (with any base) joining two of its points remains in K. Let then ρ1 be the W 2 projection of ρ 1 onto K i.e. the solution of inf ρ∈K W 2 2 (ρ 1 , ρ). Thanks to Lemma 3.1 we have
W 2 2 (ρ 0 , ρ1 ) ≤ W 2 2 (ρ 0 , ρ 1 ) -W 2 2 (ρ 1 , ρ1
) and thanks to Theorem 1.1 of [START_REF] De Philippis | BV estimates in optimal transportation and applications[END_REF], J(ρ 1 ) ≤ J(ρ 1 ). The optimality of ρ 1 for (3.1) therefore implies
W 2 (ρ 1 , ρ1 ) = 0 i.e. ρ 1 ≤ ρ 0 L ∞ (Ω) .
Remark 3.3. In section 4, we shall use an approximation of (3.1) with an additional small entropy term, the same bound as in Theorem 3.2 will remain valid in this case. Indeed, consider a proper convex l.s.c. and bounded from below internal energy density G and consider given h ≥ 0, the variant of (3.1)
inf ρ∈Pac(Ω) 1 2τ W 2 2 (ρ 0 , ρ) + J(ρ) + h Ω G(ρ(x))dx . (3.12)
Then we claim that the solution ρ h still satisfies ρ h ≤ ρ 0 L ∞ (Ω) . Indeed we have seen in the previous proof that the Wasserstein projection ρh of ρ h onto the constraint ρ ≤ ρ 0 L ∞ (Ω) both diminishes J and the Wasserstein distance to ρ 0 . It turns out that it also diminishes the internal energy. Indeed, thanks to Proposition 5.2 of [START_REF] De Philippis | BV estimates in optimal transportation and applications[END_REF], there is a measurable set
A such that ρ̃h = χA ρh + χΩ∖A ‖ρ0‖L∞; it thus follows that |Ω ∖ A| ‖ρ0‖L∞ = ∫Ω∖A ρh. So, from the convexity of G and Jensen's inequality,
∫Ω G(ρ̃h) = ∫A G(ρh) + |Ω ∖ A| G(|Ω ∖ A|⁻¹ ∫Ω∖A ρh) ≤ ∫Ω G(ρh),
thus yielding the same conclusion as above.
In dimension one, it turns out that we can similarly obtain bounds from below: Proposition 3.4. Assume that d = 1, that Ω is a bounded interval and that ρ 0 ≥ α > 0 a.e. on Ω then the solution ρ 1 of (3.1) also satifies ρ 1 ≥ α > 0 a.e. on Ω.
Proof. The proof is similar to that of Theorem 3.2, but using the Wasserstein projection on the set K := {ρ ∈ Pac(Ω) : ρ ≥ α}; the only thing to check, to be able to use Lemma 3.1, is that for any base point µ and any µ0 and µ1 in K, the generalized geodesic with base µ joining µ0 to µ1 remains in K. The optimal transport maps T0 and T1 from µ to µ0 and µ1 respectively are nondecreasing and continuous, and setting Tt := (1 − t)T0 + tT1, one has
µ = µt(Tt) Tt′ and µ = µ0(T0) T0′ = µ1(T1) T1′, hence µt(Tt) Tt′ = (1 − t)µ0(T0) T0′ + t µ1(T1) T1′ ≥ α((1 − t)T0′ + t T1′) = α Tt′,
which is easily seen to imply that µt ≥ α a.e..
Euler-Lagrange equation for JKO steps
The aim of this section is to establish optimality conditions for (3.1). Despite the fact that it is a convex minimization problem, it involves two nonsmooth terms J and W 2 2 (ρ 0 , .), so some care should be taken of to justify rigorously the arguments. In the next section, we introduce an entropic regularization approximation, the advantage of this strategy is that the minimizer will be positive everywhere, giving some differentiability of the transport term.
Entropic approximation
In this whole section we assume that Ω is an open bounded connected subset of R d with Lipschitz boundary and that ρ 0 ∈ P ac (Ω). Given h > 0 we consider the following approximation of (3.1):
inf ρ∈P(Ω) F h (ρ) := 1 2τ W 2 2 (ρ 0 , ρ) + J(ρ) + hE(ρ) (4.1)
where
E(ρ) := Ω ρ(x) log(ρ(x))dx.
It is easy to see that (4.1) admits a unique solution ρ h and since J(ρ h ) is bounded, up to a subsequence of vanishing h's, one may assume that ρ h converges as h → 0 a.e. and strongly in L p (Ω) for every p ∈ [1, d d-1 ) to ρ 1 the solution of (3.1).
We first have a bound from below on ρ h : Lemma 4.1. There is an α h > 0 such that ρ h ≥ α h a.e..
Proof. Assume on the contrary that |ρ ≤ α| > 0 for every α > 0. For small ε ∈ (0, 1) set µ ε,h := max((1
- √ ε)ρ h + ε, ρ h ) that is (1 - √ ε)ρ h + ε on A ε,h := {ρ h ≤ √ ε) and ρ h elsewhere. Define c ε,h := Ω (µ ε,h -ρ h )
and observe that c ε,h ≤ ε|Ω| ≤ √ ε|Ω|. Now chose M h > 0 such that V h := {ρ h > M h } has positive Lebesgue measure and finite perimeter (recall that ρ h is BV) and chose ε small enough so that
√ ε ≤ M h |V h | 2|Ω| . (4.2) Note that (4.2) implies that c ε,h ≤ 1 2 M h |V h | and M h > √ ε (so that A ε,h and V h are disjoint). Finally, define ρ ε,h := µ ε,h -c ε,h χ V h |V h | . By construction ρ ε,h ∈ P(Ω) hence 0 ≤ F h (ρ ε,h ) -F h (ρ h )
, in this difference we have four terms, namely
• the Wasserstein term, which, using the Kantorovich duality formula (2.1) and the fact that Ω is bounded can be estimated in terms of
ρ ε,h -ρ h L 1 = 2c ε,h : 1 2τ W 2 2 (ρ ε,h , ρ) - 1 2τ W 2 2 (ρ h , ρ) ≤ C τ c ε,h . (4.3)
for a constant C that depends on Ω but neither on ε nor h,
• the TV term: J(ρ ε,h ) -J(ρ h ): outside V h we have replaced ρ h by a 1-Lipschitz function of ρ h which decreases the TV semi-norm, on V h on the contrary we have created a jump of magnitude c ε,h /|V h | so
J(ρ ε,h ) -J(ρ h ) ≤ c ε,h Per(V h ) |V h | (4.4)
where Per(V h ) = J(χ V h ) denotes the perimeter of V h (in Ω),
• the entropy variation on A ε,h , on this set both ρ ε,h and ρ h are less than
√ ε so that (1 + log(t)) ≤ (1 + log( √ ε)) whenever t ∈ [ρ h , ρ ε,h
] which by the mean value theorem yields
A ε,h (ρ ε,h log(ρ ε,h ) -ρ h log(ρ h )) ≤ (1 + log( √ ε))c ε,h (4.5)
• the entropy variation on V h , but on V h , if ρ ε,h ≥ 1 e then (ρ ε,h log(ρ ε,h )ρ h log(ρ h )) ≤ 0, we then observe that the remaining set
V h ∩ {ρ ε,h ≤ 1 e } ⊂ {ρ h ≤ 1 e + M h
2 } so that both ρ ε,h and ρ h are bounded away from 0 and infinity on this set so remain in an interval where t log(t) is Lipschitz with Lipschitz constant at most
C h (M h ) := max |1 + log(t)| : M h 2 ≤ t ≤ 1 e + M h 2 , (4.6)
we thus have
V h (ρ ε,h log(ρ ε,h ) -ρ h log(ρ h )) ≤ C h (M h )c ε,h . (4.7)
Putting together (4.3)-(4.4)-(4.5)-(4.7), we arrive at
0 ≤ C τ + Per(V h ) |V h | + hC h (M h ) + h log( √ ε) + h c ε,h
which for small enough ε is possible only when c ε,h = 0 i.e. |A ε,h | = 0. More precisely, either we have the lower bound:
h log(ρ h ) ≥ - C τ -hC h (M h ) - Per(V h ) |V h | -h (4.8)
or (4.2) is impossible i.e.
ρ h ≥ M h |V h | 2|Ω| . (4.9)
We actually also have uniform bounds with respect to h: Lemma 4.2. The family θ h := -h log(ρ h ) is (up to a subsequence) uniformly bounded from above. Moreover, θ h is bounded in L p (Ω) for any p > 1.
Proof. In view of (4.6), (4.8) and (4.9), it is enough to show that we can find a family M h , bounded and bounded away from 0, such that setting V h := {ρ h > M h }, |V h | remains bounded away from 0, and Per(V h ) is uniformly bounded from above as h → 0. First note that, since J(ρ h ) is bounded, there exists ρ such that ρ h → ρ in L 1 and a.e. up to a subsequence, note also that ρ ∈ BV and ρ is a probability density. Setting F h t := {ρ h > t} and F t := {ρ > t}, it is easy to deduce from Fatou's Lemma that when s > t, lim inf h |F h t | ≥ |F s |, hence choosing 0 < β 1 < β 2 < β so that |F β | > 0 we have that there exists h 0 > 0 and c 1 > 0 such that for all t ∈ [β 1 , β 2 ]
c 1 ≤ |F h t | ≤ |Ω|
whenever 0 < h < h 0 . Also, since J(ρ h ) ≤ C, by the co-area formula
β 2 β 1 Per(F h t )dt ≤ J(ρ h ) ≤ C.
So, there exists
t h ∈ [β 1 , β 2 ] such that Per(F h t h ) ≤ C/(β 2 -β 1 )
. Therefore, it suffices to choose M h = t h and V h = F h t h . We may assume that ρ h ≤ φ for some φ ∈ L 1 , then by Dominated convergence and since log(max(φ, 1)) ∈ L p (Ω) for every p > 1, we have that log(max(ρ h , 1)) converges a.e. and in L p , in particular this implies that max(0, -θ h ) converges to 0 strongly in L p (Ω), and we have just seen that max(0,
θ h ) is bounded in L ∞ (Ω).
Let us also recall some well-known facts (see [START_REF] Chambolle | Theoretical foundations and numerical methods for sparse recovery[END_REF]) about the total variation functional J viewed as a convex l.s.c. and one-homogeneous functional on
L^{d/(d−1)}(Ω). Define
Γd := {ξ ∈ Ld(Ω) : ∃z ∈ L∞(Ω, Rd), ‖z‖L∞ ≤ 1, div(z) = ξ, z · ν = 0 on ∂Ω}, (4.10)
where div(z) = ξ, z · ν = 0 on ∂Ω are to be understood in the weak sense
Ω ξu = - Ω z • ∇u, ∀u ∈ C 1 (Ω).
Note that Γ d is closed and convex in L d (Ω) and J is its support function:
J(µ) = sup_{ξ∈Γd} ∫Ω ξ µ, ∀µ ∈ L^{d/(d−1)}(Ω). (4.11)
As for the Wasserstein term, recalling Kantorovich dual formulation (2.1), the derivative of the Wasserstein term ρ → W 2 2 (ρ 0 , ρ) term will be expressed in terms of a Kantorovich potential between ρ and ρ 0 .
We then have the following characterization for ρ h :
Proposition 4.3. There exists zh ∈ L∞(Ω, Rd) such that div(zh) ∈ Lp(Ω) for every p ∈ [1, +∞), ‖zh‖L∞ ≤ 1, zh · ν = 0 on ∂Ω, J(ρh) = ∫Ω div(zh) ρh and
ϕh/τ + div(zh) + h log(ρh) = 0, a.e. in Ω, (4.12)
where ϕ h is the Kantorovich potential between ρ h and ρ 0 .
Proof. Let µ ∈ L ∞ (Ω) ∩ BV(Ω) such that Ω µ = 0. Thanks to Lemma 4.1, we know that ρ h is bounded away from 0 hence for small enough t > 0, ρ h +tµ is positive hence a probability density. Also, as a consequence of Theorem 1.52 in [START_REF] Santambrogio | Optimal transport for applied mathematicians[END_REF], we have that lim
t→0 + 1 2t [W 2 2 (ρ 0 , ρ h + tµ) -W 2 2 (ρ 0 , ρ h )] = Ω ϕ h µ (4.13)
where ϕ h is the (unique up to an additive constant) Kantorovich potential between ρ h and ρ 0 , in particular ϕ h is Lipschitz and semi concave (D 2 ϕ h ≤ id in the sense of measures and id -∇ϕ h is the optimal transport between ρ h and ρ 1 ). By the optimality of ρ h and the fact that J is a semi-norm, we get
J(µ) ≥ J(ρ h + µ) -J(ρ h ) ≥ lim t→0 + t -1 (J(ρ h + tµ) -J(ρ h )) ≥ Ω ξ h µ, (4.14)
where
ξh := −ϕh/τ − h log(ρh).
Since ϕ h is defined up to an additive constant, we may chose it in such a way that ξ h has zero mean, doing so, (4.14) holds for any µ ∈ L ∞ (Ω) ∩ BV(Ω) (not necessarily with zero mean). Being Lipschitz, ϕ h is bounded, also observe that h(log(ρ h ))
+ = h log(max(1, ρ h )) is in L p (Ω) for every p ∈ [1, +∞) since ρ h ∈ L d d-1 (Ω) and h log(ρ h ) -= -h log(min(1, ρ h )) is L ∞ (Ω) thanks to Lemma 4.1, hence we have ξ h ∈ L p (Ω) for every p ∈ [1, +∞).
By approximation and observing that ξ h ∈ L d (Ω), (4.14) extends to all µ ∈ L d d-1 (Ω). In particular, we have sup
ξ∈Γ d Ω ξµ ≥ Ω ξ h µ
but since Γ d is convex and closed in L d (Ω), it follows from Hahn-Banach's separation theorem that ξ h ∈ Γ d . Finally, getting back to (4.14) (without the zero mean restriction on µ) and taking µ = -ρ h gives J(ρ h ) ≤ Ω ξ h ρ h , and we then deduce that this should be an equality.
Euler-Lagrange equation
We are now in position to rigorously establish the Euler-Lagrange equation for (3.1):
Theorem 4.4. If ρ1 solves (3.1), there exist ϕ a Kantorovich potential between ρ0 and ρ1 (in particular id − ∇ϕ is the optimal transport between ρ1 and ρ0), β ∈ L∞(Ω), β ≥ 0 and z ∈ L∞(Ω, Rd) such that
ϕ/τ + div(z) = β, z · ν = 0 on ∂Ω, (4.15)
and
βρ 1 = 0, z L ∞ ≤ 1, J(ρ 1 ) = Ω div(z)ρ 1 . (4.16)
Proof. As in section 4.1, we denote by ρ h the solution of the entropic approximation (4.1). Up to passing to a subsequence (not explicitly written), we may assume that ρ h converges a.e. and strongly in L p (Ω) (for any p ∈ [1, d d-1 )) to ρ 1 . We then rewrite the Euler-Lagrange equation from Proposition 4.3 as
ϕh/τ + div(zh) + βh⁺ = βh⁻, (4.17)
where β + h := h log(max(ρ h , 1)), β - h := -h log(min(ρ h , 1)), and
z h L ∞ ≤ 1, z h • ν = 0 on ∂Ω and J(ρ h ) = Ω div(z h )ρ h . (4.18)
It is easy to see that β + h converges to 0 strongly in any L q , q ∈ [1, +∞) and it follows from Lemma 4.2 that β - h is bounded in L ∞ . Up to subsequences, we may therefore assume that z h and β - h weakly- * converge in L ∞ respectively to some z and β with z L ∞ ≤ 1, z • ν = 0 on ∂Ω and β ≥ 0. As for ϕ h , it is an equi-Lipschitz family and Ω ϕ h = τ Ω (β - h -β + h ) which remains bounded, hence we may assume that ϕ h converges uniformly to some potential ϕ and it is well-known (see [START_REF] Santambrogio | Optimal transport for applied mathematicians[END_REF]) that ϕ is a Kantorovich potential between ρ 1 and ρ 0 . Letting h tends to 0 gives (4.15).
Since ρ h converges strongly in L 1 to ρ 1 and β - h converges weakly- * to β in L ∞ we have
Ω ρ 1 β = lim h Ω ρ h β - h = lim h h Ω ρ h | log(min(1, ρ h ))| = 0,
hence βρ 1 = 0. Thanks to (4.11), we obviously have J(ρ 1 ) ≥ Ω div(z)ρ 1 , for the converse inequality, it is enough to observe that
J(ρ 1 ) ≤ lim inf h J(ρ h ) = lim inf h Ω div(z h )ρ h
and that div(z h ) = -ϕ h τ -β + h + β - h converges to div(z) weakly in L q for every q ∈ [1, +∞). Since ρ h converges strongly to ρ 1 in L q when q ∈ [1, d d-1 ) we deduce that J(ρ 1 ) = Ω div(z)ρ 1 which completes the proof of (4.16).
Remark 4.5. It is not difficult (since (3.1) is a convex problem) to check that (4.15)-(4.16) are also sufficient optimality conditions. The main point here is that the right hand side β in (4.15) which is a multiplier associated with the nonnegativity constraint is better than a measure, it is actually an L ∞ function.
In dimension 1, we can integrate the Euler-Lagrange equation and then deduce higher regularity for the dual variable z:
Corollary 4.6. Assume that d = 1 and Ω is a bounded interval. If ρ1 solves (3.1) and z is as in Theorem 4.4, then z ∈ W1,∞0(Ω). If in addition ρ0 ≥ α > 0 a.e. on Ω, then z ∈ W3,∞(Ω).
Proof. The first claim is obvious because both ϕ and β are bounded, hence so is z′. As for the second one, when ρ0 ≥ α > 0, thanks to Proposition 3.4 we also have ρ1 ≥ α, hence β = 0 in (4.15), and in this case div(z) = z′ = −ϕ/τ is Lipschitz, i.e. z ∈ W2,∞. One can actually go one step further, because x − ϕ′(x) = T(x) where T is the optimal (monotone) transport between ρ1 and ρ0. This map is explicit in terms of the cumulative distribution function F1 of ρ1 and the inverse F0⁻¹ of the cumulative distribution function F0 of ρ0, namely T = F0⁻¹ ◦ F1. But F1 is Lipschitz since its derivative ρ1 is BV hence bounded, and F0⁻¹ is Lipschitz as well since ρ0 ≥ α > 0. This gives that ϕ ∈ W2,∞ and hence z ∈ W3,∞.
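As a quick illustration, T = F0⁻¹ ◦ F1 is straightforward to evaluate numerically from sampled densities; the minimal sketch below is ours (the two test densities are arbitrary choices, both bounded below).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
rho0 = np.ones_like(x)                       # test density rho_0 >= alpha > 0 (our choice)
rho1 = np.where(x < 0.5, 1.5, 0.5)           # test density rho_1, a step function
F0 = np.cumsum(rho0); F0 /= F0[-1]           # crude cumulative distribution functions
F1 = np.cumsum(rho1); F1 /= F1[-1]
T = np.interp(F1, F0, x)                     # T = F0^{-1} o F1 by monotone interpolation
print(np.max(np.abs(np.diff(T)) / np.diff(x)))   # finite Lipschitz constant, as the corollary predicts
```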
Regularity of level sets
We discuss in this section how the fact that div(z) ∈ L ∞ in Theorem 4.4 allows for conclusions about the regularity of the level sets of ρ 1 , the solution of (3.1). A first consequence of the high integrability of div(z) is that one can give a meaning to z • ∇u for any u ∈ BV(Ω). Indeed, following Anzellotti [START_REF] Anzellotti | Pairing between measures and bounded functions and compensated compactness[END_REF]
, if u ∈ BV(Ω) and σ ∈ L ∞ (Ω, R d ) is such that div(σ) ∈ L d (Ω), one can define the distribution σ • Du by σ • Du, v = - Ω div(σ) uv - Ω u σ • ∇v, ∀v ∈ C 1 c (Ω).
Then σ • Du is a Radon measure which satisfies |σ • Du| ≤ σ L ∞ |Du| (in the sense of measures) hence is absolutely continuous with respect to |Du|. Moreover one can also define a weak notion of normal trace of σ, σ • ν ∈ L ∞ (∂Ω) such that the following integration by parts formula holds
Ω σ • Du = - Ω div(σ)u + ∂Ω u(σ • ν).
We refer to [START_REF] Anzellotti | Pairing between measures and bounded functions and compensated compactness[END_REF] for proofs. These considerations of course apply to σ = z and u = ρ 1 ∈ BV(Ω) and in particular enable one to see z • Dρ 1 as a measure and to interpret the optimality condition
J(ρ 1 ) = Ω div(z)ρ 1 as |Dρ 1 | = -z • Dρ 1
in the sense of measures. It now follows from Proposition 3.3 of [START_REF] Chambolle | Geometric properties of solutions to the total variation denoising problem[END_REF], that every (not only almost every) level set
F t = {ρ 1 > t} with t > 0 satisfies Per(F t ) = Ft div(z) and F t ∈ argmin G⊂Ω Per(G) - G div(z) . (5.1)
This means that -div(z) is the variational mean curvature of F t . Indeed, recall, following Gonzalez and Massari [START_REF] Gonzalez | Variational mean curvatures[END_REF], that a set of finite perimeter
E ⊂ Ω ⊂ R d is said to have variational mean curvature g ∈ L 1 (Ω) precisely when E minimizes min F ⊂Ω Per(F ) + F g. (5.2)
Regularity of sets with an L p variational mean curvature, in connection with the so-called quasi-minimizers of the perimeter has been extensively studied, see Tamanini [START_REF] Tamanini | Boundaries of Caccioppoli sets with Hölder-continuous normal vector[END_REF], Massari [START_REF] Massari | Esistenza e regolarita delle ipersuperfici di curvatura media assegnata in R n[END_REF][START_REF] Massari | Frontiere orientate di curvatura media assegnata in L p[END_REF], Theorem 3.6 of [START_REF] Gonzalez | Variational mean curvatures[END_REF] and Maggi's book [START_REF] Maggi | Sets of finite perimeter and geometric variational problems: an introduction to Geometric Measure Theory[END_REF].
It follows from the results of [START_REF] Tamanini | Boundaries of Caccioppoli sets with Hölder-continuous normal vector[END_REF] that if E has variational curvature g ∈ L p (Ω) with p ∈ (d, +∞], then its reduced boundary (see [START_REF] Ambrosio | Functions of bounded variation and free discontinuity problems[END_REF])
∂ * E is a (d -1)-dimensional manifold of class C 1, p-d 2p and H s ((∂E \ ∂ * E) ∩ Ω)
= 0 for all s > d -8. We thus deduce from Theorem 4.4: Theorem 5.1. If ρ 1 solves (3.1), then for every t > 0, the level set F t = {ρ 1 > t} has the property that its reduced boundary,
∂ * F t is a C 1, 1 2 hypersur- face and (∂F t \ ∂ * F t ) ∩ Ω has Hausdorff dimension less than d -8.
Finally, the question of whether one can assign a pointwise geometric meaning to z • Dχ E was addressed by Chambolle, Goldman and Novaga in [START_REF] Chambolle | Fine properties of the subdifferential for a class of one-homogeneous functionals[END_REF]. In dimensions d = 2 and d = 3, it is indeed proved in [START_REF] Chambolle | Fine properties of the subdifferential for a class of one-homogeneous functionals[END_REF] that if g = -div(z) ∈ L d (Ω) and E minimizes (5.2), then any point x ∈ ∂ * E is a Lebesgue point of z and z(x) = ν E (x) where ν E is the unit outward normal to ∂ * E.
The case of step functions
As another illustration of the results of section 4, we have the following result concerning step-functions in dimension one: Theorem 6.1. Let d = 1, Ω = (a, b) and ρ 0 be a step function with at most N -discontinuities i.e.:
ρ 0 := N j=0 α j χ [a j , a j+1 ) , a 0 = a < a 1 • • • < a N < a N +1 = b, (6.1)
then the solution ρ 1 of (3.1) is also a step function with at most N discontinuities.
Proof.
Step 1: reduction to the positive case We first claim that we may reduce ourselves to the case where ρ 0 ≥ α > 0 (so that ρ 1 ≥ α > 0 as well by virtue of Proposition 3.4). Indeed, assume that the statement of Theorem 6.1 holds under the additional assumption that α := min(α 0 , • • • , α N ) > 0.
Then, setting for every integer n ≥ 1, ρ n 0 := 1 n + (1 -1 n )ρ 0 , the corresponding solution of (3.1), ρ n 1 will also be a step function with at most N discontinuities. It is clear that up to a subsequence, ρ n 1 converges strongly in L 1 as n → ∞ and a.e. to ρ 1 which thus also has to be a step function with at most N discontinuities. We therefore assume from now on that ρ 0 and ρ 1 are everywhere positive.
Step 2 : ρ 1 is a jump function. Thanks to Theorem 4.4 and Corollary 4.6, there is a z ∈ W 3,∞ such that z(a) = z(b) = 0, |z| ≤ 1 and a Kantorovich potential ϕ such that
z′ + ϕ/τ = 0, ϕ′(x) = x − T(x), (6.2)
where T is the optimal (monotone nondecreasing) transport between ρ 1 and ρ 0 :
F0 ◦ T = F1, F0(x) := ∫_a^x ρ0, F1(x) := ∫_a^x ρ1, (6.3)
(note that T is a bi-Lipschitz homeomorphism) and
J(ρ1) = ∫_a^b z′ ρ1 = −∫_a^b z dDρ1 = |Dρ1|((a, b)),
where Dρ1 is the (signed measure) distributional derivative of ρ1. Observe also that, from (6.2), points at which z″ vanishes are fixed points of T.
We then perform a Hahn-Jordan decomposition of Dρ 1 :
Dρ 1 = µ + -µ -, µ + ≥ 0, µ -≥ 0, µ + ⊥ µ -, (6.4)
and set
A := spt(|Dρ 1 |) = A + ∪ A -with A + := spt(µ + ), A -:= spt(µ -). (6.5)
Next, noting that |Dρ1| = µ+ + µ− = −z(µ+ − µ−), we deduce that z = −1 µ+-a.e., and since z is continuous we should have z = −1 on A+ = spt(µ+). In a similar way, z = 1 on A− := spt(µ−); it implies in particular that the compact sets A+ and A− are disjoint, so that the distance between A+ and A− is positive. Note also that since z is C², minimal on A+ and maximal on A−, we have (see also [START_REF] Chambolle | Geometric properties of solutions to the total variation denoising problem[END_REF] for a similar discussion): z′ = 0 on A, z″ ≥ 0 hence T ≥ id on A+, and z″ ≤ 0 hence T ≤ id on A−. (6.6) Since z′ = 0 on A, it follows from Rolle's Theorem that if a < x < y < b with (x, y) ∈ A × A, there exists c ∈ (x, y) such that z″(c) = 0, i.e. T(c) = c. In particular T = id on the set of limit points of A. We now further decompose µ± into its purely atomic and nonatomic parts:
µ± = Σ_{x∈J±} µ±({x}) δx + µ̃±, (6.7)
where J± is the (finite or countable) set of atoms of µ± and µ̃± has no atom.
Our aim is to show that the sets
Ã± := spt(µ̃±), (6.8)
are empty. Assume on the contrary that Ã+ ≠ ∅; then, since all points of Ã+ are limit points of A+, T = id on Ã+. In particular this implies that
χ_{Ã+} ρ1 = T#(χ_{Ã+} ρ1) = χ_{T(Ã+)} T#ρ1 = χ_{Ã+} ρ0, i.e. ρ0 = ρ1 on Ã+. Now if x ∈ Ã+ ∖ {a0, · · · , aN+1}, we may find δ > 0 such that ρ0 is constant on [x − δ, x + δ] and [x − δ, x + δ] ∩ A− = ∅, so that ρ1 is nondecreasing on [x − δ, x + δ]. Define then x1 := inf Ã+ ∩ [x − δ, x + δ], x2 := sup Ã+ ∩ [x − δ, x + δ];
since both x1 and x2 lie in Ã+ and Dρ1 = µ+ ≥ µ̃+ on [x − δ, x + δ], we have
ρ1(x2) − ρ1(x1) = ρ0(x2) − ρ0(x1) = 0 ≥ µ̃+([x1, x2]) = µ̃+([x − δ, x + δ]),
which contradicts x ∈ Ã+. This proves that µ+ (and µ− likewise) are purely atomic (i.e. ρ1 is a jump function in the terminology of [START_REF] Ambrosio | Functions of bounded variation and free discontinuity problems[END_REF]):
Dρ1 = Σ_{x∈J+} µ+({x}) δx − Σ_{x∈J−} µ−({x}) δx.
Step 3: the jump sets J + and J -are finite. Recall from the previous step that A + = J + and A -= J -are disjoint sets. In particular, there cannot be points which are both limit points of J + and J -. We argue by contradiction that J + is a finite set (a similar argument can be applied for J -). Suppose that J + is not finite so that for some x ∈ J + , every neighbourhood of x contains an element of J + . Then, there exists
x1 ∈ J+ with x1 ≠ x (x1 > x, say) such that [x, x1] ∩ J− = ∅ (which implies that F1 is convex on [x, x1]). If x2 ∈ (x, x1) ∩ J+, then we know from the previous step that T(x2) ≥ x2 and there exist c1 ∈ (x, x2) and c2 ∈ (x2, x1) which are fixed points of T.
F 1 (x 2 ) -F 1 (c 1 ) = F 0 (T (x 2 )) -F 0 (c 1 ) ≥ F 0 (x 2 ) -F 0 (c 1 )
and similarly
F 1 (c 2 ) -F 1 (x 2 ) = F 0 (c 2 ) -F 0 (T (x 2 )) ≤ F 0 (c 2 ) -F 0 (x 2 )
but since ρ 1 has an upward jump at x 2 we have
(F1(x2) − F1(c1))/(x2 − c1) < (F1(c2) − F1(x2))/(c2 − x2), hence (F0(x2) − F0(c1))/(x2 − c1) < (F0(c2) − F0(x2))/(c2 − x2), implying that ρ0 has a discontinuity point in [c1, c2], hence in [x, x1]
, since there are only finitely many such points this shows that J + is finite.
Step 4: ρ 1 has no more than N jumps. We know from the previous steps that ρ 1 can be written as
ρ1 = Σ_{k=0}^{K} βk χ_{[bk, bk+1)}, b0 = a < b1 < · · · < bK < bK+1 = b, βk ≠ βk+1.
If βk+1 > βk, arguing exactly as in the previous step, we find two fixed points of T, ck ∈ (bk, bk+1) and ck+1 ∈ (bk+1, bk+2), such that ρ0 has a discontinuity in (ck, ck+1); the case of a downward jump βk > βk+1 can be treated similarly (using T(bk+1) ≤ bk+1 in this case). This shows that ρ0 has at least K jumps, so that N ≥ K.
Convergence of the TV-JKO scheme in dimension one
We are now interested in the convergence of the TV-JKO scheme to a solution of the fourth-order nonlinear equation (1.2) in dimension 1, as the time step τ goes to 0. Throughout this section, we assume that Ω = (0, 1) and that the initial condition ρ 0 satisfies ρ 0 ∈ P ac ((0, 1)) ∩ BV ((0, 1)), ρ 0 ≥ α > 0 a.e. on (0, 1). (7.1)
We fix a time horizon T , and for small τ > 0, define the sequence ρ τ k by
ρ τ 0 = ρ 0 , ρ τ k+1 ∈ argmin 1 2τ W 2 2 (ρ τ k , ρ) + J(ρ), ρ ∈ BV ∩ P ac ((0, 1)) (7.2)
for k = 0, . . . , Nτ with Nτ := ⌊T/τ⌋. Thanks to Proposition 3.4, (7.1) ensures that the JKO iterates ρτk defined by (7.2) also remain bounded from below by α. We also extend this discrete sequence by piecewise constant interpolation, i.e.
ρ τ (t, x) = ρ τ k+1 (x), t ∈ (kτ, (k + 1)τ ], k = 0, . . . N τ , x ∈ (0, 1). ( 7.3)
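Before stating the convergence result, it may help to see what one step (7.2) looks like numerically. The Python sketch below is entirely ours (none of its discretization choices come from the paper): it parametrizes a density with full support on (0, 1) by its quantile function, approximates W2² by the L² distance between quantiles (exact in 1-D up to quadrature) and J by the sum of jumps of the piecewise constant density, and hands the resulting nonsmooth objective to a generic optimizer. It only illustrates the structure of the scheme and is not a certified solver.

```python
import numpy as np
from scipy.optimize import minimize

m, tau = 80, 5e-3
s = (np.arange(m) + 0.5) / m                 # mass levels

def q0(u):
    """Quantile function of rho_0 = 1.5 on (0, 1/2) and 0.5 on (1/2, 1) (our test case)."""
    return np.where(u < 0.75, u / 1.5, 0.5 + (u - 0.75) / 0.5)

Q0 = q0(s)

def objective(theta):
    w = np.exp(theta)
    w = w / w.sum()                          # positive cell widths summing to 1 (full support)
    edges = np.concatenate(([0.0], np.cumsum(w)))
    qmid = 0.5 * (edges[:-1] + edges[1:])    # quantiles at the mass levels s
    dens = (1.0 / m) / w                     # each cell carries mass 1/m
    return np.mean((qmid - Q0) ** 2) / (2 * tau) + np.abs(np.diff(dens)).sum()

res = minimize(objective, np.zeros(m), method="L-BFGS-B")
w = np.exp(res.x)
w = w / w.sum()
print(np.round((1.0 / m) / w, 3))            # approximate density after one TV-JKO step
```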
We shall see that ρ τ converges to a solution ρ of
∂tρ + (ρ (ρx/|ρx|)xx)x = 0, (t, x) ∈ (0, T) × (0, 1), ρ|t=0 = ρ0, (7.4)
with the no-flux boundary condition
ρ (ρx/|ρx|)xx = 0, on (0, T) × {0, 1}. (7.5)
Since ρ is no more than BV in x, one has to be slightly cautious about the meaning of ρx/|ρx|, which can be conveniently done by interpreting this term as the negative of a suitable z in the subdifferential of J (in the L2 sense for instance):
z ∈ H10((0, 1)), ‖z‖L∞ ≤ 1 and J(ρ) = ∫01 zx ρ. (7.6)
This leads to the following definition.
Definition 7.1. A weak solution of (7.4)-(7.5) is a ρ ∈ L∞((0, T), BV((0, 1))) ∩ C0((0, T), (P, W2)) such that there exists z ∈ L∞((0, T) × (0, 1)) ∩ L2((0, T), H2 ∩ H10((0, 1))) with ‖z(t, .)‖L∞ ≤ 1 and
J(ρ(t, .)) = ∫01 zx(t, x) ρ(t, x) dx, for a.e. t ∈ (0, T), (7.7)
and ρ is a weak solution of
∂tρ − (ρ zxx)x = 0, ρ|t=0 = ρ0, ρ zxx = 0 on (0, T) × {0, 1}, (7.8)
i.e. for every u ∈ C1c([0, T) × [0, 1]),
∫0T ∫01 (∂tu ρ − (ρ zxx) ux) dx dt = − ∫01 u(0, x) ρ0(x) dx.
We then have
Theorem 7.2. If ρ0 satisfies (7.1), there exists a vanishing sequence of time steps τn → 0 such that the sequence ρτn constructed by (7.2)-(7.3) converges strongly in Lp((0, T) × (0, 1)) for any p ∈ [1, +∞) and in C0((0, T), (P([0, 1]), W2)) to ρ ∈ L∞((0, T), BV((0, 1))) ∩ C0((0, T), (P([0, 1]), W2)), a weak solution of (7.4)-(7.5).
Proof. First, ρ0 being BV, it is bounded on (0, 1), which gives uniform bounds on ρτ thanks to Theorem 3.2; moreover, we know from (7.1) and Proposition 3.4 that we also have a uniform bound from below:
α ≤ ρτ ≤ M := ‖ρ0‖L∞. (7.9)
Moreover, by construction of the TV-JKO scheme (7.2), one has
J(ρτk+1) + (1/(2τ)) W2²(ρτk, ρτk+1) ≤ J(ρτk), hence sup_t J(ρτ(t, .)) ≤ J(ρ0) and Σk W2²(ρτk, ρτk+1) ≤ 2τ J(ρ0). (7.10)
By using an Aubin-Lions type compactness Theorem of Savaré and Rossi (Theorem 2 in [START_REF] Rossi | Tightness, integral equicontinuity and compactness for evolution problems in Banach spaces[END_REF]), the fact that the imbedding of BV((0, 1)) into Lp((0, 1)) is compact for every p ∈ [1, +∞) as well as a refinement of Arzelà-Ascoli Theorem (Proposition 3.3.1 in [START_REF] Ambrosio | Gradient flows: in metric spaces and in the space of probability measures[END_REF]), one obtains (see section 4 of [START_REF] Di | Curves of steepest descent are entropy solutions for a class of degenerate convection-diffusion equations[END_REF] or section 5 of [START_REF] Carlier | A splitting method for nonlinear diffusions with nonlocal, nonpotential drifts[END_REF] for details) that, up to taking a suitable sequence of vanishing time steps τn → 0, we may assume that
ρτ → ρ a.e. in (0, T) × (0, 1) and in Lp((0, T) × (0, 1)), ∀p ∈ [1, +∞), (7.11)
and
sup_{t∈[0,T]} W2(ρτ(t, .), ρ(t, .)) → 0, (7.12)
for some limit curve ρ ∈ C0,1/2((0, T), (P([0, 1]), W2)) ∩ Lp((0, T) × (0, 1)). From (7.9) and (7.10), one also deduces ρ ∈ L∞((0, T), BV((0, 1))), and from (7.9), M ≥ ρ ≥ α.
We deduce from the fact that ρτk ≥ α > 0 and Theorem 4.4 that for each k = 0, . . . , Nτ, there exists zτk+1 ∈ W2,∞((0, 1)) such that
zτk+1(0) = zτk+1(1) = 0, ‖zτk+1‖L∞ ≤ 1, J(ρτk+1) = ∫01 (zτk+1)x ρτk+1, (7.13)
and the optimal (backward) optimal transport Tτk+1 from ρτk+1 to ρτk is related to zτk+1 by
id − Tτk+1 = −τ (zτk+1)xx. (7.14)
We extend zτk in a piecewise constant way, i.e. set
zτ(t, x) = zτk+1(x), t ∈ (kτ, (k + 1)τ], k = 0, . . . , Nτ, x ∈ (0, 1). (7.15)
We then observe that, since τ² ∫01 |(zτk+1)xx|² ρτk+1 = ∫01 |x − Tτk+1(x)|² ρτk+1 = W2²(ρτk, ρτk+1) and ρτk+1 ≥ α, summing over k and using (7.10), we thus get an L2((0, T), H2((0, 1))) bound
‖zτ‖L2((0,T),H2((0,1))) ≤ C. (7.16)
We may therefore assume (up to further suitable extractions) that there is some z ∈ L∞((0, T) × (0, 1)) ∩ L2((0, T), H2((0, 1))) such that zτ converges weakly-* in L∞((0, T) × (0, 1)) and weakly in L2((0, T), H2((0, 1))) to z. Of course ‖z‖L∞ ≤ 1 and z ∈ L2((0, T), H10((0, 1))). Note also that ρτ zτxx converges weakly to ρ zxx in L1((0, T) × (0, 1)).
The limiting equation can now be derived using standard computations (see the proof of Theorem 5.1 of the seminal work [START_REF] Jordan | The variational formulation of the Fokker-Planck equation[END_REF], or chapter 8 of [START_REF] Santambrogio | Optimal transport for applied mathematicians[END_REF]): let u ∈ C1c([0, T) × [0, 1]). Recalling that ρτk = Tτk+1 # ρτk+1, and applying Taylor's theorem, we have
∫01 u(kτ, x)(ρτk+1 − ρτk) dx = ∫01 ((Tτk+1(x) − x) ux(kτ, x) + R̃τ(x)) ρτk+1 dx = ∫01 (−τ (zτk+1)xx ux(kτ, x) + R̃τ(x)) ρτk+1 dx,
where |R̃τ(x)| ≤ C ‖uxx(kτ, ·)‖L∞ |Tτk+1(x) − x|². Note also that for t ∈ (kτ, (k + 1)τ], |ux(kτ, ·) − ux(t, ·)| ≤ τ ‖uxt‖L∞. Therefore, summing over k,
∫0T ∫01 (∂tu ρτ − ρτ zτxx ux) dx dt = − ∫01 u(0, x) ρτ1(x) dx + Rτ(u), (7.17)
with |Rτ(u)| ≤ C max{‖uxx‖L∞, ‖uxt‖L∞} (Σk W2²(ρτk, ρτk+1) + τ), which tends to 0 with τ thanks to (7.10). Passing to the limit τ → 0 in (7.17) yields that ρ is a weak solution to
∂tρ − (ρ zxx)x = 0, ρ|t=0 = ρ0, ρ zxx = 0 on (0, T) × {0, 1}.
It remains to prove that J(ρ(t, .)) = ∫01 zx(t, x) ρ(t, x) dx for a.e. t ∈ (0, T). The inequality J(ρ(t, .)) ≥ ∫01 zx(t, x) ρ(t, x) dx is obvious, since z(t, .) ∈ H10((0, 1)) and ‖z(t, .)‖L∞ ≤ 1. To prove the converse inequality, we use Fatou's Lemma, the lower semi-continuity of J, (7.13) and the weak convergence in L1((0, T) × (0, 1)) of zτx ρτ to zx ρ:
∫0T J(ρ(t, .)) dt ≤ lim inf ∫0T J(ρτ(t, .)) dt = lim inf ∫0T ∫01 zτx ρτ dx dt = ∫0T ∫01 zx(t, x) ρ(t, x) dx dt,
which concludes the proof.
Acknowledgements: The authors wish to thank Vincent Duval and Gabriel Peyré for suggesting the TV-Wasserstein problem to them as well as for fruitful discussions. They also thank Maxime Laborde and Filippo Santambrogio for helpful remarks, in particular regarding the maximum principle.
| 47,339 | ["960680"] | ["454660", "60", "454660", "54798"] |
01492559 | en | [
"info"
] | 2024/03/04 23:41:50 | 2014 | https://hal.science/hal-01492559/file/ICServ2014.pdf | Koji Makita
email: [email protected]
Thomas Vincent
Soichi Ebisuno
Masakatsu Kourogi
Tomoya Ishikawa
Takashi Okuma
Minoru Yoshida
Laurence Nigay
Takeshi Kurata
Mixed Reality Navigation on a Tablet Computer for Supporting Machine Maintenance in Wide-area Indoor Environment
Keywords: Maintenance support, Human navigation, Mixed reality, Mobile computing
This paper describes a maintenance service support system for wide-area indoor environments, such as factories and hospitals. In maintenance services, operators often have to check a map to find the way to a target machine, and also have to consult documents to obtain information about the check-up and repair of the machine. Information technology can reduce the operators' workload by supporting these additional but important tasks during maintenance, such as consulting documents and maps and recording maintenance logs. In this paper, we propose mixed reality navigation on a tablet computer composed of an augmented virtuality mode and an augmented reality mode. The augmented virtuality mode provides map-based navigation that shows the positions of the user and the target machine. The augmented reality mode provides intuitive visualization of information about the machine by overlaying annotations on camera images. The proposed system is based on a hybrid localization technique that combines pedestrian dead reckoning (PDR) and 3D model-based image processing in order to cover wide-area indoor environments. Experimental results obtained with our prototype and a mock-up model of a machine are also described to show the feasibility of the concept.
INTRODUCTION
Machine maintenance services are essential for workers to use machines safely and efficiently. For maintenance operators, the main activities are going to the target machine and checking it. Moreover, when faulty parts are detected, additional activities such as repairs, component replacement and ordering of components arise. Because maintenance services are not routine, operators often carry out tasks with which they have little experience. Maintenance services can therefore be difficult and time-consuming, because they require a large amount of information. Operators often have to check a map to find the way to a target machine, and also have to consult documents to obtain information about the check-up and repair of the machine. Therefore, especially in wide-area indoor environments, such as factories and hospitals, the workload of operators should be reduced.
In this paper, we propose a maintenance service support system for wide-area indoor environments that reduces the workload of operators. We focus on the workload of obtaining information. To realize such a system, we introduce the mixed reality (MR) concept. MR is an inclusive term for techniques that merge the virtual and real worlds. Since MR can provide intuitive information presentation, we propose mixed reality navigation on a tablet computer composed of an augmented virtuality mode and an augmented reality mode. Augmented virtuality (AV) and augmented reality (AR) are components of MR. The augmented virtuality mode provides map-based navigation that shows the positions of the user and the target machine. The augmented reality mode provides intuitive visualization of information about the machine by overlaying annotations on camera images. The proposed system is based on a hybrid localization technique that combines pedestrian dead reckoning (PDR) and 3D model-based image processing in order to cover wide-area indoor environments.
RELATED WORKS
How MR can assist in reducing the time and effort of maintenance has been evaluated in previous works [START_REF] Feiner | Evaluating the Benefits of Augmented Reality for Task Localization in Maintenance of an Armored Personnel Carrier Turret[END_REF] [START_REF] Platonov | A mobile markerless AR system for maintenance and repair[END_REF]. The majority of related work focuses on MR applications that work in front of the target object. In contrast, we focus on how to realize localization in wide-area indoor environments for MR.
To achieve robust and global localization in wide areas for mobile MR, one solution is to combine a method that works constantly with a visual tracking method, because visual tracking can precisely estimate position and posture [START_REF] Klein | Parallel tracking and mapping on a camera phone[END_REF] only while the camera of the mobile device is active. GPS is often used for initialization [START_REF] Reitmayr | Initialisation for Visual Tracking in Urban Environments[END_REF] [START_REF] Höllerer | Exploring MARS: developing indoor and outdooruser interfaces to a mobile augmented reality system[END_REF]. While GPS can provide global position and posture without any prior knowledge, it works only in outdoor environments. On the other hand, for indoor MR, several approaches requiring prior knowledge have been proposed, one of which is marker-based localization. This method can be used in environments where marker infrastructure is already in place [START_REF] Saito | Indoor Marker-based Localization Using Coded Seamless Pattern for Interior Decoration[END_REF]. Therefore, these methods are appropriate for specially constructed environments or desktop applications that work in small spaces, and marker-based methods can be used for initialization only when markers are captured by the camera. For more efficient construction of a wide MR space, there are devices that can be used to build a positioning infrastructure. For example, radio frequency identification (RFID)- and infrared data association (IrDA)-based methods [START_REF] Tenmoku | A wearable augmented reality system using positioning infrastructures and a pedometer[END_REF] have been proposed. A positioning system using wireless LAN has also been proposed [START_REF] Yoshida | Evaluation of Pre-Acquisition Methods for Position Estimation System using Wireless LAN[END_REF]. These methods are effective for acquiring an approximate global position in large indoor environments, but their accuracy is normally not adequate for the initialization of visual tracking methods. Image-based matching methods [START_REF] Cipolla | Imagebased localization[END_REF] [10] [START_REF] Zisserman | Feature based methods for structure and motion estimation[END_REF] can also be applied for initialization. However, because these methods use 2D image-based matching, many reference images are needed to cover large environments and to initialize accurately. On the other hand, in existing tracking methods using 3D models [START_REF] Bleser | Online camera pose estimation in partially known and dynamic scenes[END_REF] [13], a matching between a captured image and an image generated from the models is used to estimate relative position and posture.
Recently, various 3D reconstruction methods have been proposed. For large-scale accurate reconstruction, modeling methods using robots with range sensors are effective [START_REF] Hähnel | Learning Compact 3D Models of Indoor and Outdoor Environments with a Mobile Robot[END_REF] [START_REF] Jensen | Laser Range Imaging using Mobile Robots: From Pose Estimation to 3d-Models[END_REF]. Moreover, interactive image-based modeling methods have been proposed [16] [17] [18] [START_REF] Langer | An Interactive Vision-based 3D Reconstruction Workflow for Industrial AR Applications[END_REF]. These methods make 3D reconstruction more convenient and can be applied to various environments. For example, Figure 1 shows a sample image generated from 3D models created with an interactive 3D modeler [START_REF] Ishikawa | In-Situ 3D Indoor Modeler with a Camera and Self-Contained Sensors[END_REF]. Once models are generated, images from any viewpoint can be obtained, so the models can be applied to various applications. Figure 2 shows a sample image of an augmented virtuality (AV) application. When a 3D dataset of a scene (point features, edgelets, etc.) is available, the global position and posture of a device can be estimated with the dataset [START_REF] Arth | Wide Area Localization on Mobile Phones[END_REF] [21] [START_REF] Irschara | From structure-from-motion point clouds to fast location recognition[END_REF]. However, the generation cost of such a 3D dataset is normally higher than that of 3D models in terms of the number of images required. Therefore, estimation using 3D models has the potential to efficiently extend the areas usable for mobile MR.
This study focuses on merging pedestrian dead reckoning (PDR) and 3D model-based image processing in order to cover wide-area indoor environments. PDR is a technique for measuring position and orientation based on dead reckoning that detects human walking motion. Because PDR is normally realized with self-contained wearable or mobile sensors, localization is always available. PDR works well for applications such as human-navigation systems in which the user is mainly walking while using the system. However, the positioning accuracy of PDR often degrades in working situations, and the estimate includes accumulated errors. In contrast, image processing works only while the camera of the mobile device is active and the quality of the camera images is good; for instance, image blur and a lack of image features reduce the accuracy of the localization. Normally, however, its accuracy is high, and accumulated errors can be reduced with key-frame-based initialization.
PROTOTYPE
Figure 3 shows an overview of the proposed localization method. The position and orientation of the user of a tablet computer can always be estimated with PDR. However, the positioning accuracy degrades with accumulated errors caused by various actions other than walking. For the prototype system in this paper, we implemented a PDR application based on the method of [START_REF] Kourogi | A Method of Pedestrian Dead Reckoning Using Action Recognition[END_REF] on a smart phone (Samsung RZ Galaxy). Acceleration sensors, angular velocity sensors, and magnetic sensors in the smart phone are used for the PDR estimation. The smart phone is attached to the user's waist, and the position and orientation estimated by the PDR application are continuously sent to the tablet computer over a wireless network.
With PDR alone, the accumulated errors cannot be reduced.
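The following minimal sketch illustrates the basic dead-reckoning update behind such a PDR application; the simple step-and-heading model and all names are our own illustration, not the action-recognition method actually used in the prototype:

    import numpy as np

    def pdr_update(pos, heading, gyro_yaw_rate, dt, step_detected, step_length=0.7):
        # Integrate the yaw rate from the angular-velocity sensor.
        heading += gyro_yaw_rate * dt
        # Advance by one stride when the accelerometer signals a step;
        # the magnetic sensor would normally be used to correct the absolute heading.
        if step_detected:
            pos = pos + step_length * np.array([np.cos(heading), np.sin(heading)])
        return pos, heading

Each update is cheap and self-contained, which is why the estimate is always available; the drawback is that heading and stride errors accumulate with the walking distance.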
On the other hand, when the camera of the tablet computer is active, the camera image is compared with the images used for creating the 3D model. Figure 4 shows an overview of these comparisons; hereafter, we call this comparison "key frame matching". The images used for creating the 3D model are photos with known photo-shoot positions and orientations, and depth data. For the prototype system in this paper, we implemented an application for (re-)initialization based on the method of [START_REF] Makita | Photo-shoot localization of a mobile camera based on registered frame data of virtualized reality models[END_REF] on a tablet computer (Apple iPad2). Because this reuses a by-product of the modeling process, no additional work is necessary for setup. In this method, an image of the real environment taken by the mobile camera of the tablet computer is compared with the images used for creating the 3D model. Because those images are linked with photo-shoot position, orientation, and depth data, the 3D coordinates of each pixel on the images are available in the model coordinate system. Therefore, various camera tracking methods using image features, such as feature points and edges, can be applied to the localization. For the prototype, we applied an estimation technique based on feature-point matching and selected Speeded Up Robust Features (SURF) [START_REF] Bay | SURF: Speeded Up Robust Features[END_REF] for the detection of feature points. Normally, the computation time of the matching procedure is too long for MR applications; therefore, once the matching has succeeded, the image processing switches to relative camera tracking. Since relative camera tracking can also be realized with image features, we applied Lucas-Kanade-Tomasi (LKT) trackers [START_REF] Tomasi | Detection and tracking of point features[END_REF]. When the relative camera tracking fails, the PDR application is used again. Currently, the PDR is simply re-initialized with the last result of the relative camera tracking, because the accuracy of the relative camera tracking is expected to be higher than that of the PDR in most cases. However, since the key frame matching and the relative camera tracking can also produce errors, there is room for improvement in the re-initialization of the PDR. After re-initialization, the key frame matching works again.
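A minimal sketch of this (re-)initialization step is given below, assuming each key frame record carries its photo-shoot position, SURF descriptors and per-pixel 3D model coordinates derived from the depth data; K is the camera intrinsic matrix, and the record fields and thresholds are illustrative assumptions rather than the actual implementation:

    import cv2
    import numpy as np

    def keyframe_reinit(frame_gray, keyframes, pdr_position, K, min_inliers=15):
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp, desc = surf.detectAndCompute(frame_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # Retrieve the model images in distance order from the PDR estimate.
        for kf in sorted(keyframes,
                         key=lambda k: np.linalg.norm(k.position - pdr_position)):
            matches = matcher.knnMatch(desc, kf.descriptors, k=2)
            good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
            if len(good) < min_inliers:
                continue
            pts3d = np.float32([kf.xyz[m.trainIdx] for m in good])  # model coordinates
            pts2d = np.float32([kp[m.queryIdx].pt for m in good])
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
            if ok and inliers is not None and len(inliers) >= min_inliers:
                return rvec, tvec  # hand over to LKT-based relative tracking
        return None  # fall back to PDR and retry on the next frame

Because the key frames are searched in distance order from the PDR estimate, the time spent in this loop grows with the PDR error, which motivates the moving-distance experiment in the next section.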
EXPERIMENTS WITH THE PROTOTYPE
We conducted an experiment in our office building to check the behavior of the prototype system. In the experiment, we evaluated the moving distance at which key frame matching succeeds. The motivation for this evaluation is as follows. Figure 5 shows samples of moving distances at which the key frame matching succeeded. As shown in Figure 4, when key frame matching is conducted, a camera image captured by the tablet computer is compared with the images used for creating the 3D model. In general, there are many such images, and they are retrieved in order of distance from the position estimated by PDR. Therefore, the computation time of the key frame matching is proportional to the moving distance. Ideally, many images should be used and the computation time measured directly. However, since we currently have no speed-up method for the retrieval, the computation time could become excessively long. Therefore, in this experiment, we used only one image as the target of the key frame matching and measured the moving distance instead. The experimental setup was as follows. Figure 6 shows the appearance of the user and the AV/AR application. First, we created a virtualized reality model of our office building with the method of [START_REF] Ishikawa | In-Situ 3D Indoor Modeler with a Camera and Self-Contained Sensors[END_REF] and prepared one image for the key frame matching. Next, we set a round course of about 60 meters in the building and used one user as a subject. In the experiment, the user first stands at the start position of the course and holds up the tablet computer to conduct the key frame matching in AR mode. After the key frame matching succeeds, the user walks along the course in AV mode. Finally, the user stands at the start position again, conducts the key frame matching in AR mode, and the moving distance of this latter matching is measured. To examine the relation between the walking distance and the moving distance of the latter matching, we conducted the experiment five times each with one cycle (about 60 meters) and two cycles (about 120 meters). Figure 8 shows all results of the experiments. The average moving distance of the latter matching was about 1.44 meters with one cycle and about 2.03 meters with two cycles. As a result, we successfully checked the behavior of the prototype system. Because walking motion varies, the moving distances of the latter matching also vary; in terms of the average, however, the results shown in Figure 8 are reasonable, because the error of PDR is essentially proportional to the walking distance. In the future, when many images are used for the key frame matching, we will have to consider the trade-off between the number of matching opportunities and the computation time. Specifically, the spatial configuration and resolution of the images used for matching should be optimized. In this experiment, we set the resolution to 240 × 180, and the computation time was about 1-2 seconds. Moreover, we plan to conduct subjective experiments with a mock-up model of a machine. Figure 9 shows the appearance of the tablet computer (Toshiba dynabook Tab, chosen for its light weight) and the mock-up.
To study the effectiveness of the system, we have been implementing both AV and AR applications that can create and show annotations with the precise pointing techniques of [START_REF] Vincent | Precise pointing techniques for handheld Augmented Reality[END_REF]. In the experiments of this section, the AV mode was used only for map-based navigation; however, both AR and AV modes for showing annotations have already been implemented. In the experiments described in Section 5, we also apply the AV mode to show annotations and conduct subjective experiments.
IN-FIELD EVALUATION
We conducted a customer interest check (CIC) as an in-field evaluation. The goal of the evaluation is to survey the acceptability, usefulness and usability of the prototype system for engineers and operators working in factories. The evaluations were conducted with six maintenance engineers and operators working in France and six working in Japan. The procedure of the evaluations was as follows.
Presentation of the concept and the movie
Each interviewee watches a movie showing the concept of the proposed system. The interviewers supplement the movie with verbal explanations when the interviewee has questions about the proposed system.
Evaluation of the acceptability
Each interviewee is asked the 20 questions of CAUTIC [START_REF] Cautic | Conception Assistée par l'Usage pour les Technologies l'Innovation et le Changement[END_REF] about the proposed system and chooses one answer from "Acceptable", "Acceptable under conditions" and "Not acceptable". CAUTIC is an interview method for evaluating the acceptability of a product. It has scientifically identified 20 criteria at 4 levels of analysis (technical, practical, user identity and user environment) that must be validated in order to establish, beyond doubt, whether or not a compelling reason to buy exists. If the 20 criteria are not or only partially validated, the method identifies the problem areas at each of the 4 levels that obstruct acceptance of the innovation in question. These areas can subsequently be given remedial treatment, which in turn can be verified by the CAUTIC method if deemed necessary.
Evaluation of usefulness
Each interviewee is asked a pair of questions about each function of the proposed system, following the KANO method [START_REF] Kano | Attractive Quality and Must-Be Quality[END_REF]. In the KANO method, each question pair is composed of a functional and a dysfunctional question, and the interviewee chooses one answer from "Like", "Must be", "Neutral", "Live with" and "Dislike". By combining the two answers, quality attributes are grouped into six categories (Attractive, One-dimensional, Must-be, Questionable, Reverse, Indifferent) with different impacts on customer satisfaction [START_REF] Sauerwein | The Kano model: how to delight your customers[END_REF].
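For illustration, the commonly used Kano evaluation table (in the form given by Sauerwein et al.; the encoding below is ours, with A = Attractive, O = One-dimensional, M = Must-be, I = Indifferent, R = Reverse, Q = Questionable) can be written as a simple lookup:

    ANSWERS = ["Like", "Must be", "Neutral", "Live with", "Dislike"]
    KANO_TABLE = {  # row: functional answer; columns follow ANSWERS (dysfunctional)
        "Like":      ["Q", "A", "A", "A", "O"],
        "Must be":   ["R", "I", "I", "I", "M"],
        "Neutral":   ["R", "I", "I", "I", "M"],
        "Live with": ["R", "I", "I", "I", "M"],
        "Dislike":   ["R", "R", "R", "R", "Q"],
    }

    def kano_category(functional_answer, dysfunctional_answer):
        return KANO_TABLE[functional_answer][ANSWERS.index(dysfunctional_answer)]

For example, kano_category("Like", "Dislike") returns "O" (One-dimensional): the respondent likes having the function and dislikes lacking it.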
The results of the evaluations are as follows. Acceptability: Table 1 lists the CAUTIC questions and the results for the six respondents in France and the six in Japan. The number of stars (* on the right side of Table 1) is determined by the average score M of the answers. We assigned scores (2, 1, 0) to "Acceptable", "Acceptable under conditions" and "Not acceptable", respectively, to compute M (M >= 1.33: ***, 0.67 <= M < 1.33: **, M < 0.67: *). Overall, the CAUTIC evaluation of acceptability shows that the proposed system is well received by maintenance technicians and managers. In the French results, the average scores M for items 1.2 and 2.5 were comparatively low. For 1.2, there were several questions about how to connect to the machine to obtain its information, and about the scalability of the system. For 2.5, there were several questions about the risk of false recognition of the machine and about the capacity to update the technical documentation. In the Japanese results, the average scores M for items 3.2 and 4.3 were comparatively low. The main reason was that the interviewees found it difficult to imagine the proposed system in a private role: two interviewees answered "Not acceptable" for 3.2 and 4.3 because they could not imagine using the proposed system privately. As pointed out in the French results, scalability and the risk of false recognition are very important for the comfort of the system. Experiments in wide areas with multiple machines are the next step of this study. The required time and the false recognition rate of the key frame matching are expected to grow with the number of key frames. In the future, we plan to study fast search of the key frames. Human-computer interaction techniques are one way to realize such a fast search; for example, presenting candidate key frames is expected to help shorten the required time of the image matching.
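The star ratings in Table 1 follow directly from the scoring described above; a minimal sketch of the computation (function and variable names are ours):

    def cautic_stars(answers):
        score = {"Acceptable": 2, "Acceptable under conditions": 1, "Not acceptable": 0}
        m = sum(score[a] for a in answers) / len(answers)  # average score M
        if m >= 1.33:
            return "***"
        return "**" if m >= 0.67 else "*"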
Usefulness: Table 2 lists the functions of the proposed system and the KANO evaluation results for the six respondents in France and the six in Japan. In France, we applied eight question pairs of the KANO method. In Japan, we added two functions, "Freeze function" and "Visualization of the machine in Augmented Virtuality", and applied ten question pairs. The freeze function temporarily stops updating the displayed image of the AR/AV mode; we expect it to be helpful as a hands-free capability. Visualization of the machine in Augmented Virtuality overlays annotations in AV mode, as shown in Figure 9(d). Dim1 and Dim2 indicate the first and second components of the category of the answers (A: Attractive, M: Must-be, R: Reverse, O: One-dimensional, Q: Questionable, I: Indifferent, N: None), with the number of answers; for example, "Q(5)" indicates that five answers were categorized as "Questionable". When a category was selected by only one respondent, we recorded "None" as the component. In the French results, functions 4 (Access to the diagnosis), 5 (Technical documentation), 7 (Contents of the electric cabinet without having to open the machine) and 8 (Creation of maintenance report) were categorized as "Attractive" as the first component. In the Japanese results, functions 2 (Guidance toward the machine), 4 (Access to the diagnosis) and 10 (Visualization of the machine in Augmented Virtuality) were categorized as "Attractive" as the first component. Throughout the experiments, the interviewees in Japan had more prior knowledge of AR than those in France, and answers categorized as "Questionable" appeared only in France. Function 1 (Visualization of the machine in Augmented Reality) was categorized as "Questionable" as the first component in France. This is due to the difficulty for users of imagining an AR function before using it: five interviewees could not understand the difference in the way information is provided and therefore selected the same answer for both questions. Moreover, in the French results, functions 4 (Access to the diagnosis) and 5 (Technical documentation) were also categorized as "Questionable" as the second component, probably for the same reason. In Japan, by contrast, no answers were categorized as "Questionable".
CONCLUSION
This paper has proposed a maintenance service support system for wide-area indoor environments. To realize an MR maintenance service support system, we proposed a hybrid localization technique combining pedestrian dead reckoning (PDR) and 3D model-based image processing to cover wide-area indoor environments. As a first step, this paper presented experimental results showing the relation between the walking distance and the moving distance at which key frame matching succeeds, which is strongly related to the computation time of the matching. As future work, we plan to study the optimization of the key frame matching to achieve suitable accuracy and computation time of the localization, and to conduct subjective experiments with both the mock-up and a real machine in wide indoor environments.
ACKNOWLEDGEMENTS
This work was supported by the Strategic Japanese-French Cooperative Program on Information and Communication Technology Including Computer Science (ANR in France and JST in Japan).
Fig. 1: An example of an input image and a corresponding image from models. (Left: input image taken in an indoor environment. Right: corresponding image from models generated using an interactive 3D modeler [START_REF] Ishikawa | In-Situ 3D Indoor Modeler with a Camera and Self-Contained Sensors[END_REF].)
Fig. 5: Moving distances when the key frame matching succeeds.
Fig. 8: Moving distances of the latter matching.
Fig. 9: Appearances of the tablet computer and the mock-up. (a) and (b): overviews of the mock-up. (c): annotation overlay with AR mode. (d): annotation overlay with AV mode.
Table 1. Results of CAUTIC (average score shown as stars).
France Japan
Does the user recognize himself as the targeted user and know who else the concept concerns?
4.2. Is the concept adapted to the user's client-supplier / family relations evolution? *** ***
4.3. Is the concept adapted to the user's position in his/her professional / private circle? *** **
4.4. Is the concept adapted to the user's working organization / way of living and its evolution? *** ***
4.5. Does the user agree to pay for the concept?
Table 2. Results of KANO.
The 2nd International Conference on Serviceology | 25,780 | [
"9671"
] | [
"302425",
"49637",
"302425",
"302425",
"302425",
"302425",
"73614",
"49637",
"302425"
] |
01492560 | en | [
"info"
] | 2024/03/04 23:41:50 | 2014 | https://hal.science/hal-01492560/file/EICS2014-Nigay-Luyten-DC.pdf | Laurence Nigay
email: [email protected]
Kris Luyten
email: [email protected]
In this short extended abstract, we present the doctoral consortium of the Engineering Interactive Computing Systems (EICS) 2014 Symposium. Our goal is to make the doctoral consortium a useful event with a maximum benefit for the participants by having a dedicated event the day before the conference as well as the opportunity to present their on-going doctoral work to a wider audience during the conference.
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous. D2.m Software Engineering: Miscellaneous.
PARTICIPATION IN THE DOCTORAL CONSORTIUM
Doctoral Consortium submissions typically present a PhD thesis topic motivated by aims and goals, supported by some work in-progress and present original, sound, and well-founded results in order to address a well-defined problem.
The motivation of the Doctoral Consortium is to foster PhD students in the field of Engineering Interactive Computing Systems (EICS) by offering them the opportunity to receive feedback about their research by more senior colleagues with similar research interests.
The Doctoral Consortium is a closed event, held the day before the conference, in which a selected group of doctoral candidates present their research to each other and to a panel of advisors. Each presentation follows the same structure, highlighting the research questions addressed, the method and initial results. The presentations are deliberately kept short to force focus on the key issues. In addition to the presentations, discussion between the participants is encouraged: each participant plays a supporting or opposing role for another participant, identifying positive or negative points in that participant's doctoral research. The goal is to become better acquainted with one another, to become part of the EICS community and to reflect on one's own research in the larger context of EICS.
A further goal is to better link the Doctoral Consortium to the main conference, by providing the participants with the opportunity to present their work to all the EICS 2014 attendees during a dedicated session at the conference. This session is organized on the first day of the conference so that participants can have the opportunity to further discuss their work with colleagues during the entire conference. The presentation is accompanied with a poster that will be on display during the entire conference.
LIST OF PARTICIPANTS
This year seven doctoral candidates are selected for the Doctoral Consortium. The students range from being in the initial stages of their doctoral research planning, to the final stages of dissertation completion.
The set of seven participants this year covers a wide variety of EICS topics and application domains. Two sessions structure the Doctoral Consortium:
• HCI and Model-Driven Engineering: this session includes research involving model-based approaches. Examples of models include task models, UI models, context models as well as user models.
• Large, Complex and Critical Systems: this session is dedicated to research on engineering interactive large complex and critical systems. To do so different research axes are adopted from collaborative annotation, visual interactive support, formal specification to incident investigation methodology.
The accepted PhD students to the Doctoral Consortium are:
Ragaad AlTarawneh (University of Kaiserslautern, Germany) presenting Visual Interactive Support For Understanding Structural and Behavioral Aspects of Complex Systems.
Amjad Hawash (Sapienza University, Rome, Italy) presenting Introducing Groups to an Annotation System: Design Perspective.
Huayi Huang (Queen Mary, University of London, UK) presenting Two Analytical Approaches To Support Patient Safety Incident Investigation Methodology.
Yucheng Jin (Fortiss GmbH, institute associated with the Technical University of Munich, Germany) presenting Generating Model-Based User Interfaces for the Connected Appliances.
Eyfrosyni (Effie) Karuzaki
ACKNOWLEDGEMENTS
The organizers would like to thank the committee of advisors for their contribution to the Doctoral Consortium.
ORGANIZERS BACKGROUND
Laurence Nigay is a full Professor in Computer Science at Université Joseph Fourier (UJF, Grenoble 1). She is the director of the Engineering Human-Computer Interaction (EHCI) research group of the Grenoble Informatics Laboratory (LIG). From 1998 to 2004, she was vice-chair of the IFIP working group WG 2.7/13.4 "User Interface Engineering". She was advisor or co-advisor of 14 students who defended their theses: 8 of them are currently professors, lecturers or CNRS researchers. She is currently advising or co-advising 5 students. More on her research can be found at http://iihm.imag.fr/nigay/ Kris Luyten is an associate professor in Computer Science at Hasselt University in Belgium and a member of the HCI lab of the iMinds research institute Expertise Centre for Digital Media. He was full-paper co-chair for both EICS 2011 and EICS 2013. He was advisor or co-advisor of 7 students who defended their theses and is currently advising or co-advising 7 students. More on his research can be found at http://research.edm.uhasselt.be/kris | 5,716 | [
"9671"
] | [
"49637",
"263858"
] |
00149259 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2005 | https://hal.science/hal-00149259/file/BoutayebBandStructure.pdf | Band Structure Analysis of Crystals with Discontinuous Metallic Wires
Halim Boutayeb, Member, IEEE, Tayeb A. Denidni, Senior Member, IEEE, Abdel Razik Sebak, Senior Member, IEEE, and Larbi Talbi, Member, IEEE Abstract-The band structure for normal propagation of crystals with finite straight metallic wires is studied for different wire diameters and lengths. The crystal is considered as a set of parallel grids. Dispersion characteristics are obtained by using a transmission line model whose parameters are calculated from the reflection and transmission coefficients of the grids. These coefficients are computed rigorously with a Finite Difference Time Domain (FDTD) code. Simulated and experimental results for two structures with the dual behavior, pass-band and stop-band, are presented. This study has potential applications in electrically controlled microwave components.
Index Terms-Metallic crystals, band structure, periodic structures
I. INTRODUCTION
RECENTLY, the propagation of electromagnetic waves in periodic structures has received considerable interest [1]. Potential applications have been suggested in the microwave and antenna domains, such as suppressing surface waves [START_REF] Yang | Microstrip antennas integrated with electromagnetic bandgap (EBG) structures: a low mutual coupling design for array applications[END_REF], designing directive antennas [START_REF] Thevenot | Directive Photonic Band-Gap Antennas[END_REF], or creating controllable beams [START_REF] Lourtioz | Toward controllable photonic crystals for centimeter and millimeter wave devices[END_REF][START_REF] Poilasne | Active Metallic Photonic Band-Gap materials (MPBG): experimental results on beam shaper[END_REF]. The propagation of waves in periodic structures is described by means of a band theory. Different methods have been proposed for computing the band structure of a periodic structure, e.g., the average field method [START_REF] Simovski | The average field approach for obtaining the band structure of crystals with conducting wire inclusions[END_REF], the order-N method [START_REF] Chan | Order-N method spectral method for electromagnetic waves[END_REF], and a hybrid plane-wave-integral-equation method [START_REF] Silveirinha | A Hybrid method for the efficient calculation of the band structure of 3D-metallic crystal[END_REF]. Particular interest has been given to the dispersion characteristics of periodic structures formed by infinitely long metallic wires [1-10]. To our knowledge, the band structure of periodic materials with finite straight wires has not been sufficiently studied. These materials are interesting for designing reconfigurable microwave components. In [START_REF] Poilasne | Active Metallic Photonic Band-Gap materials (MPBG): experimental results on beam shaper[END_REF], the authors proposed to commute between continuous and discontinuous wire structures for reconfiguring the radiation patterns of an antenna. Indeed, these two structures behave differently at low frequencies, presenting a pass-band and a stop-band for the discontinuous and continuous wire cases, respectively. To obtain the commutation, the authors placed diodes periodically in the broken wires. When the diodes are switched on, the structure behaves as a continuous wire medium (if the impedance of the diodes is neglected), whereas when the diodes are switched off, the structure behaves as a discontinuous wire medium. However, the band structure of the discontinuous wire medium for different wire diameters and lengths has not been sufficiently studied. In this contribution, numerical results are presented for the pass-bands and stop-bands of these 3-D periodic structures at normal incidence. To compute the propagation constant, a transmission line model is used, where a 2-D periodic structure (grid) of discontinuous wires is modelled by a T-circuit. The T-circuit parameters are written in terms of the S-parameters of the grid, computed rigorously using the FDTD method. Experimental results for two structures with the dual behavior of their bands are also presented.

II. TRANSMISSION LINE MODEL AND FDTD METHOD

The infinite 3-D periodic structure of perfect metallic wires shown in Fig. 1 is considered. Its parameters are the periods P_x, P_y and P_z, the wire diameter a and the width w. The propagation of the transverse electric field in the x-direction is considered. To compute the propagation constant β_x, the transmission line model shown in Fig. 2 is used, where a 2-D periodic structure in the y-direction (see Fig. 1) is modelled by a T-circuit. The propagation constant β_x of this transmission line is obtained from its well-known dispersion equation [START_REF] Silver | Microwave Antenna Theory and Design[END_REF]:

cos(β_x P_x) = (1 + ZY) cos(kP_x) + j [Z + (Y/2)(1 + Z^2)] sin(kP_x) (1)
where k is the free space wave number. The expressions for Y and Z are derived in terms of the complex reflection and transmission coefficients r and t of the 2-D periodic structure, at normal incidence. By converting the chain matrix of the T-circuit to the S matrix, r and t can be expressed in terms of Z and Y . After inverting these relations, Z and Y are written as functions of r and t
Y = (r - t - 1)(r + t - 1) / (2t) (2)

Z = -(r + t - 1) / (r - t - 1) (3)
In this work, the r and t coefficients are computed rigorously with the FDTD method, using Floquet boundary conditions and a thin mesh (Δ = Period/80). It should be noted that the T-circuit model gives more precise results than a simple circuit using only the admittance Y: the impedance Z is negligible for thin wires, but not for thick wires. Only the fundamental mode is considered; hence the limitations P_y ≤ P_x, P_x ≤ λ and P_z ≤ λ are used.
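As an illustration of how Eqs. (1)-(3) are combined, the following sketch (our own; it assumes the r and t of one grid have been precomputed at each frequency) evaluates the Bloch propagation constant; a significant imaginary part of β_x indicates a stop-band:

    import numpy as np

    def bloch_constant(r, t, k, Px):
        Y = (r - t - 1) * (r + t - 1) / (2 * t)                  # Eq. (2)
        Z = -(r + t - 1) / (r - t - 1)                           # Eq. (3)
        rhs = (1 + Z * Y) * np.cos(k * Px) \
            + 1j * (Z + 0.5 * Y * (1 + Z**2)) * np.sin(k * Px)   # Eq. (1)
        return np.arccos(rhs + 0j) / Px                          # complex arccos

Scanning k over the band of interest and flagging frequencies where |Im(β_x P_x)| is non-negligible reproduces the pass-band/stop-band limits discussed below.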
III. RESULTS
Fig. 3. Comparison of the dispersion diagrams calculated with the present method and obtained with a Finite Element method (HFSS) for Px = Py = P , a/P = 5% and a/P = 40%.
To validate the method, Fig. 3 presents the computed propagation constants for a continuous wire medium (w = 0) compared with the results of a Finite Element method (HFSS-Ansoft), which considers all modes. A good agreement is observed between the results for both thin and thick wires. Fig. 4 shows the dispersion diagrams of structures with continuous (P_x = P_y = P, a/P = 5%) and discontinuous (P_z = P, w/P_z = 5%) wires. The dual behavior in the pass-band and stop-band of these structures is nearly obtained in the first two bands. The limits of the first two bands of these structures are now studied for different wire lengths and diameters. We consider P_x = P_y = P and three values of the period in the z-direction: P_z = P, P_z = 2P and P_z = P/2. Fig. 5 presents the limits of the first two bands for the continuous and discontinuous cases versus the fill factor a/P, for different widths w. To obtain a dual behavior in the first band, the lower limit of the stop-band for discontinuous wires must match the lower limit of the pass-band for continuous wires. In addition, to obtain the dual behavior in the second band, the upper end of the stop-band for discontinuous wires must match the upper end of the pass-band for continuous wires. To satisfy these conditions for small diameters, the width must be near 5% of the period; when the diameter increases, the width should be increased. Fig. 6 presents the same diagrams as Fig. 5 for the case P_z = 2P (within the range P/λ ≤ 0.5). From these curves, it can be seen that it is difficult to obtain the dual behavior in both the first and second bands; to match the first bands, the width w must be very large (w/P > 40%). Fig. 7 presents the same diagrams as Fig. 5 but for P_z = P/2. In this case, for thin wires, matching the first band limits requires a very small width (w/P < 2.5%), whereas the second limits can be matched more easily. The results presented in Figs. 5 to 7 are useful for the design of two structures with dual behavior of their bands.
Fig. 4. Dispersion diagrams for structures with continuous wires (a/P = 5%) and discontinuous wires (P_z = P, w/P = 5%).
Fig. 5. First two band limits for structures with continuous and discontinuous wires (P_z = P) versus fill factor a/P, for different widths w/P.
Fig. 6. First two band limits for structures with continuous and discontinuous wires (P_z = 2P) versus fill factor a/P, for different widths w/P.
Fig. 7. First two band limits for structures with continuous and discontinuous wires (P_z = P/2) versus fill factor a/P, for different widths w/P.
To validate the proposed model, two structures with five rows in x-direction were fabricated (with wood supports) and their transmission coefficients were measured in a anechoic chamber using two horn antennas and a network analyzer. The fabricated structures have the finite dimensions L z = 280mm and L y = 600mm and the following parameters: P z = P = 40mm, a = w = 2mm. The frequency range is 1GHz-4.5GHz. The structures were at the distances 40m and 2mm from the transmitter and the receiver, respectively (at 2mm the evanescent waves are negligible, this distance has been chosen to avoid edge effects). A measurement without the structures was carried out for the normalization. For comparison, numerical simulations (FDTD) for the transmission coefficients were also carried out. Fig. 8 presents the simulated and measured transmission coefficients of the continuous and discontinuous wires structures, illustrating the dual bands. The simulated and experimental results are slightly different because of the finite dimensions of the fabricated structures.
IV. CONCLUSION
The band structure for normal propagation of crystals formed by discontinuous metallic wires has been analyzed for different wire diameters and lengths. To compute the propagation constant, the transmission line model and the T-circuit model have been used. The proposed approach has been validated with experimental results. The obtained results are useful for the design of two structures with dual behavior of their band characteristics, which can be used in controllable antenna applications.
Manuscript received November 2004. This work was supported in part by the National Science Engineering Research Council of Canada (NSERC). H. Boutayeb and T. A. Denidni are with INRS-EMT, Montréal, Canada. A. Sebak is from the University of Concordia, Montréal, Canada. L. Talbi is from Université du Québec en Outaouais, Ottawa, Canada.
Fig. 1. Infinite 3-D periodic structure of discontinuous metallic wires in air.
Fig. 8. Simulated (FDTD) and measured transmission coefficients for two structures with five rows of: (a) continuous wires (a/P = 5%) and (b) discontinuous wires (P_z = P, w/P = 5%). | 10,800 | [
"838442"
] | [
"214739",
"214739"
] |
00149260 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2006 | https://hal.science/hal-00149260/file/BoutayebDirectiveCrystal.pdf | H Boutayeb
T A Denidni
ANALYSIS AND DESIGN OF A HIGH-GAIN ANTENNA BASED ON METALLIC CRYSTALS
In this paper, a new high-gain antenna made from a monopole embedded in a crystal of metallic wires without defect, also called an Electromagnetic Bandgap (EBG) material, is described. To design the antenna, a model is proposed that treats the crystal as a Fabry-Perot cavity constructed of multiple Partially Reflecting Surfaces (PRSs). The frequency and pattern responses of the EBG structure excited by electromagnetic waves from its interior are used to analyze the antenna performance. To validate the proposed approach, numerical simulations using the Finite Difference Time Domain (FDTD) method were carried out, and an antenna prototype was fabricated and tested. The radiation patterns obtained from measurements and simulations show excellent directivity performance and good agreement between measured, simulated and predicted results. With the proposed design, a gain of more than 19 dBi is achieved across the matching band.
INTRODUCTION.
Electromagnetic bandgap (EBG) materials, also known as photonic crystals [START_REF] Joannopoulos | Photonic crystals: molding the flow of light[END_REF], have been the subject of intensive research in the past few years. Potential applications have been suggested in microwave and antenna domains, such as suppressing surface waves [START_REF] Park | A Photonic Bandgap (PBG) structure for guiding and suppressing surface waves in millimeter-wave antennas[END_REF], [START_REF] Yang | Microstrip antennas integrated with electromagnetic bandgap (EBG) structures: a low mutual coupling design for array applications[END_REF], creating controllable beams [START_REF] Lourtioz | Toward controllable photonic crystals for centimeter and millimeter wave devices[END_REF], [START_REF] Poilasne | Active metallic photonic bandgap material MPBG: experimental results on beam shaper[END_REF], and designing high-gain antennas with a single feed [START_REF] Akalin | A highly directive dipole antenna embedded in a Fabry-Perot type cavity[END_REF]- [START_REF] Enoch | A metamaterial for directive emission[END_REF]. This paper presents the analysis and results of a new high-gain antenna based on a metallic crystal. High-gain, low-cost antennas using a single feed and based on EBG materials are attractive for several wireless communication systems such as high-speed Wireless Local Area Networks (WLANs), satellite receiver systems, and various point-to-point links. Their single-feed system reduces the feed complexity compared to the feeding networks used in conventional antenna arrays. Furthermore, EBG antennas are typically more compact than parabolic reflector antennas. Recently, various techniques have been proposed to design directive antennas formed by a dipole embedded inside a Fabry-Perot cavity. In this context, a relation between the half-power beamwidth and the quality factor of the cavity has been proposed and validated numerically [START_REF] Akalin | A highly directive dipole antenna embedded in a Fabry-Perot type cavity[END_REF], and the directivity improvement has been suggested based on past research in optical physics [START_REF] Biswas | Exceptionally directional sources with Photonic Band-Gap crystals[END_REF]. Recently, in [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Theory and Design[END_REF], analytical expressions and experimental results have been proposed for the directivity of antennas based on a Fabry-Perot cavity at different frequencies. In [START_REF] Thevenot | Directive Photonic Band-Gap antennas[END_REF], the authors presented an antenna based on a dielectric EBG structure with a defect, which consists of a Fabry-Perot cavity between a patch antenna and the EBG structure. The defect generates the radiation of localized modes in the frequency bandgaps. The same configuration has been revisited in [START_REF] Cheype | An Electromagnetic Bandgap resonator antenna[END_REF], where a method based on the Fourier transform has been proposed to obtain the properties of the EBG structure in the angular domain and then to predict the radiation pattern of an EBG antenna. This antenna can be seen as a dielectric leaky-wave antenna (LWA) [START_REF] Yang | Gain enhancement methods for printed circuit antennas[END_REF].
It has been shown that the fundamental principle of operation of a LWA is due to the excitation of leaky modes in the structure [START_REF] Jackson | A leaky wave analysis of the high gain printed antenna configuration[END_REF], and a further examination of the structure has been given in [START_REF] Jackson | Leaky wave propagation and radiation for a narrow beam multiple layer dielectric structure[END_REF]. In addition, a simple model has been proposed and analyzed using a transverse equivalent network [START_REF] Zhao | Simple CAD model for a dielectric leaky wave antenna[END_REF]. Other studies of a dipole or a monopole embedded inside a periodic structure without defect have been reported in [START_REF] Bulu | Highly directive radiation from sources embedded inside photonic crystals[END_REF], [START_REF] Enoch | A metamaterial for directive emission[END_REF], where the dispersion diagram of the crystal and the the effective refractive index have been used to explain the directivity improvement, and numerical and experimental results showing high directivity have also been presented. However, the return loss of the antenna is not low enough. In [START_REF] Boutayeb | Design of a directive and matched antenna with a planar EBG structure[END_REF], a study of the input impedance of an EBG antenna fed by a monopole has been presented, and a low return loss has been achieved. Another method for producing a high-gain antenna has been introduced in the 50's [START_REF] Trentini | Partially reflecting sheet arrays[END_REF]. This technique uses a PRS to introduce leaky wave and beamforming effects when it is placed in front of a grounded waveguide aperture. A ray theory has been proposed, which shows that the directivity of the antenna increases when the reflectivity of the PRS increases. This type of antenna has been improved recently [START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF], where the PRS is optimized to enlarge the antenna bandwidth. Furthermore, experimental results have been presented for the case of multiple PRSs in front of the waveguide [START_REF] James | Leaky-wave multiple dichroic beamformers[END_REF]. To analyze rectangular crystals, various methods have been proposed for calculating the scattering of the structures, e.g. the rigorous scattering matrix method [START_REF] Felbacq | Scattering by a random set of parallel cylinders[END_REF], and the generalized scattering matrix method with a cascading approach [START_REF] Hall | Analysis of multilayered periodic structures using generalized scattering matrix[END_REF], where the periodic structure is made from the cascading of multiple layers of PRSs, also called Frequency Selective Surfaces (FSSs) [START_REF] Munk | Frequency Selective Surfaces: Theory and Design[END_REF]. These methods are rigorous but they are typically time-consuming. In this paper, a modeling method based on a spectral plane-wave generalized-ray analysis with a cascading process and the Fabry-Perot model is presented to predict the characteristics of a high-gain antenna based on a crystal without defect. The proposed method does not consider the higher Floquet modes interactions between PRSs, but it reduces considerably the computational time compared to the preceding methods and it is well suited for the design process of the proposed EBG antenna. Using this method, a new directive antenna was designed, fabricated and tested.
ANALYSIS
In order to predict the antenna performance, the main properties of a 2-D periodic crystal were studied in the frequency and angular domains. One can characterize an EBG material by its band structure when it is infinite in all directions of propagation and by its transmission and reflection coefficients when it is illuminated by plane waves from the exterior and it is finite in the direction of propagation. Another possible characterization of the EBG material is obtained from its response when it is illuminated by plane waves from its interior. In this study, a TM-wave excitation is considered. The band structure of the proposed EBG material and the characterization of the EBG material excited internally are described in the following subsections.
Band Structure
A 2-D EBG material made of infinite metallic wires is considered, as shown in Figure 1(a). P_x and P_y are the periods of the structure in the x and y directions and d is the diameter of the wires. Figure 1(b) presents the irreducible Brillouin zone of the rectangular lattice [START_REF] Joannopoulos | Photonic crystals: molding the flow of light[END_REF]. Figure 2 presents the band structure of the metallic crystal, which was computed with Ansoft High Frequency Structure Simulator (HFSS) [START_REF]High Frequency Structure Similator 8.5 User manual[END_REF]. From these curves, it can be observed that a bandgap (stopband) is obtained at low frequencies, up to 2.6 GHz. In this band, no propagation of the electromagnetic wave is possible; therefore, for the proposed application, the EBG material is used in its propagating band. In the next subsection, the properties of this EBG material are analyzed when it is excited from its interior.
Internally excited EBG material
Figure 3 shows the model of the EBG structure excited from its interior. The EBG structure is now finite in the x-direction and composed of six infinite rows of metallic wires, with the source located at the center. The rays going through the structure in the θ direction are considered, and each row acts as a Partially Reflecting Surface (PRS) for these rays.
The EBG structure is considered as a set of n PRS(s) on each side of the source. Each PRS is characterized by its complex transmission and reflection coefficients: t and r, respectively. To investigate the property of the EBG material, the Fabry-Perot cavity model [START_REF] Hecht | Optics[END_REF] presented in Fig. 4 is used, where the n PRS(s) are replaced by one PRS with the transmission and reflection coefficients t n and r n , respectively (t 1 = t, r 1 = r). Note that, usually, in the Fabry-Perot interferometer, the source is outside the Fabry-Perot cavity [START_REF] Hecht | Optics[END_REF], [START_REF] De | Fabry-Perot approach to the design of double layer FSS[END_REF], whereas here the source is placed inside the cavity. The variable T n characterizes the entire structure, and it is obtained by calculating the sum of all transmitted rays (Fig. 4). In Fig. 4, the semi-axis X is used for the phase reference. The sum of all rays gives the following expression for the total transmitted amplitude T n
T_n(f, θ) = t_n(f, θ) Σ_{i=0}^{∞} [r_n(f, θ)]^i e^{jk(2i+1)(P_x/2) tan θ sin θ - jk(2i+1)P_x/(2 cos θ)} = t_n(f, θ) e^{-jkP_x cos θ / 2} / (1 - r_n(f, θ) e^{-jkP_x cos θ}). (1)

To obtain t_n and r_n, the schematic presented in Fig. 5 is used, where the semi-axis X′ is used as the phase reference for t_n and ray 0 on the left is used as the phase reference for r_n. These coefficients are written as functions of t, r, t_{n-1} and r_{n-1}, where t_{n-1} and r_{n-1} are the coefficients of the structure composed of (n-1) PRS(s). The sum of the transmitted rays gives the following iterative expressions for t_n and r_n:

t_n(f, θ) = t_{n-1}(f, θ) t(f, θ) e^{-jkP_x cos θ} / (1 - r_{n-1}(f, θ) r(f, θ) e^{-j2kP_x cos θ}) (2)

r_n(f, θ) = r_{n-1}(f, θ) + t_{n-1}(f, θ)^2 r(f, θ) e^{-j2kP_x cos θ} / (1 - r_{n-1}(f, θ) r(f, θ) e^{-j2kP_x cos θ}). (3)

In this work, the t and r coefficients are computed rigorously using a Finite Difference Time Domain (FDTD) code with a thin mesh (Δ = Period/80). As an approximation, the dependence of t and r on the incidence angle is not taken into account in Eqs. (1)-(3). This approximation is valid when P_x is sufficiently small compared to λ [START_REF] Trentini | Partially reflecting sheet arrays[END_REF]. To validate the preceding relations for the T_n variable at normal incidence (θ = 0°), direct simulations with the FDTD code were carried out. Figure 6 shows the scheme used in the FDTD code to calculate T_3 at θ = 0°. In the FDTD code, the boundaries in the xy and zx planes satisfy Floquet's theorem, since the structure is infinite in these two dimensions. Each boundary in the yz plane uses the Perfectly Matched Layers (PML) technique [START_REF] Berenger | Perfectly matched layer for the FDTD solution of wave-structure interaction problems[END_REF] to model free space. The wires were modeled using a thin-wire formalism [START_REF] Nadobny | A thin-rod approximation for the improved modeling of bare and insulated cylindrical antennas using the FDTD method[END_REF], the plane-wave excitation was modeled with a plane of current sources, and an observation point was used to compute the transverse electric field. Two simulations were carried out: one with the structure and another without it, for normalization. Figure 7 presents the coefficient |T_3(f, 0°)| obtained with the FDTD method and calculated with Eqs. (1)-(3). A good agreement is observed between the results obtained by the current method and those computed by the FDTD method.
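A compact numerical sketch of this cascade (our own illustration of Eqs. (1)-(3), for one frequency and incidence angle, with t1 and r1 the FDTD-computed coefficients of a single PRS) is:

    import numpy as np

    def cascade(t1, r1, n, k, Px, theta):
        ph = np.exp(-1j * k * Px * np.cos(theta))   # one-period propagation phase
        t, r = t1, r1                               # n = 1 case
        for _ in range(n - 1):
            denom = 1 - r * r1 * ph**2
            # Tuple assignment keeps t_{n-1}, r_{n-1} on the right-hand side.
            t, r = t * t1 * ph / denom, r + t**2 * r1 * ph**2 / denom  # Eqs. (2)-(3)
        return t, r

    def T_n(t1, r1, n, k, Px, theta):
        t, r = cascade(t1, r1, n, k, Px, theta)
        half = np.exp(-1j * k * Px * np.cos(theta) / 2)
        return t * half / (1 - r * half**2)         # Eq. (1)

Sweeping the frequency at θ = 0 gives |T_n(f, 0)| curves like those of Fig. 8 below, while sweeping θ at a fixed frequency gives the pattern estimates of Fig. 10.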
Figure 8 shows |T_n(f, 0)| obtained for different values of n. From these curves, |T_n(f, 0)| presents a stopband at low frequencies and a passband with higher-value peaks, where the number of peaks is correlated to n. The reason for a magnitude of |T_n| greater than one is that the cavity modifies the matching of the plane-wave source, which is not initially matched to free space, so that the power supplied outside the cavity at resonance is greater than the power supplied by the source alone. Hence, the magnitude of |T_n| is not a direct evaluation of the directivity of the EBG antenna. However, the half-power beamwidth, and thus the directivity, of a directive antenna based on a Fabry-Perot cavity can be evaluated using this coefficient, as has been demonstrated recently in [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Theory and Design[END_REF]. Referring to Fig. 9, when the angle θ increases, the level of |T_3| at f_0 decreases, and for sufficiently large angles this frequency belongs to the first stop-band of |T_3|; the crystal thus acts as a spatial filter. To illustrate this spatial filtering, Fig. 10 presents |T_3| versus the incidence angle at three frequencies f_0, f_1, and f_2 (the first, second and third resonance frequencies of |T_3(f, 0°)|, respectively). These curves can be interpreted as the radiation patterns of the structure. At the frequency f = f_0, there are two directive beams at the normal with a half-power beamwidth equal to 11.5°. The directivity is maximum near the frequency f_0. At frequencies greater than f_0, lobes appear in other directions, as illustrated in Fig. 10. To design a directive antenna, the frequency f_0 is therefore chosen as the operating frequency. The variable |T_n| is useful for predicting the radiation properties of EBG antennas. The frequency f_0 and the bandgap and passband positions depend on the geometrical parameters of the EBG structure: the wire diameter and the periods between wires and surfaces. The proposed method can be used to match the operating frequency and the pattern shapes. Compared to structures composed of only one PRS on each side of the source, structures with multiple PRSs give more design parameters. The method can also be used for structures with different periods between PRSs and/or different PRSs, in order to investigate antennas producing multiple frequency operation or larger bandwidth.
ANTENNA DESIGN
In this section, the previous theoretical investigations are applied to the design of a high-gain EBG antenna. The antenna, shown in Fig. 11, is composed of a monopole as the excitation source, a ground plane, the crystal of metallic wires studied in the previous section (see Fig. 3), and four metallic planes: one on the top and three on the lateral sides. These metallic reflectors focus the radiation toward one side.
FDTD simulations were carried out to optimize the positions of the metallic reflectors so that the antenna performance would not be degraded.
To validate the proposed concept, an EBG antenna prototype was fabricated and tested. Figure 12 shows a photograph of the fabricated antenna. Figure 13 presents the simulated and measured return loss of the proposed antenna. A slight difference between the simulation and measurement results is noted, most likely due to the finite dimensions of the ground plane of the experimental prototype. From the measured return loss, a bandwidth (S_11 < -10 dB) from 2.61 GHz to 2.65 GHz (a fractional bandwidth of 1.5%) is achieved.
To examine the radiation performance of the fabricated prototype, measurements were done using an anechoic chamber and a hybrid near-field system from ANTCOM (http://www.antcom.com). Figure 14 shows the measured and simulated radiation patterns of the antenna at the center frequency 2.63 GHz in the H-plane and E-plane. There is a slight difference between measured and simulated results in the E-plane, which is due to effects from the edge of the ground plane that were not included in the FDTD model. The half-power beamwidths are 15.3° and 23.1° in the H-plane and the E-plane, respectively. The measured directive gains are between 19 dBi (at 2.61 GHz) and 20.2 dBi (at 2.65 GHz). The X-pol level is about -20 dB. To evaluate the performance of the antenna, the aperture taper efficiency was calculated using the following equation [START_REF] Stutzman | Antenna theory and design[END_REF]:

$$e = \mathrm{Directivity} / \left(10 \log\left(4\pi A/\lambda^2\right)\right),$$
where A = 0.600 × 0.275 m² is the area occupied by the antenna in the yz plane. The aperture taper efficiency is between 89.4% (at 2.61 GHz) and 94.4% (at 2.65 GHz). With such features, this antenna is suitable for wireless communication systems where high gain is needed.
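For reference, the quoted formula can be evaluated directly; a minimal sketch follows, assuming c = 3×10⁸ m/s and the dB gain figures above. Small deviations from the quoted 89.4%/94.4% values can arise from the exact directivity and frequency values used in the original computation.

```python
import numpy as np

c0 = 3e8
A = 0.600 * 0.275                  # antenna area in the yz plane (m^2)

def taper_efficiency(dir_dBi, f_hz):
    lam = c0 / f_hz
    return dir_dBi / (10 * np.log10(4 * np.pi * A / lam**2))

print(taper_efficiency(19.0, 2.61e9))   # ~0.87
print(taper_efficiency(20.2, 2.65e9))   # ~0.91
```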
CONCLUSION
A new high-gain antenna has been developed from a defect-free crystal of metallic wires. A technique for characterizing the EBG structure excited from its interior has been described, and numerical and experimental results for the proposed antenna have been presented, with good agreement between them. A gain of more than 19 dBi has been obtained across the matching band of the antenna, at the lower end of the pass-band of the EBG material. The proposed EBG antenna has advantages in terms of high gain and a simple single feed, which reduces complexity compared to the feeding networks used in conventional antenna arrays. Future work will concentrate on the examination of other structures using different distances between PRSs and/or different PRSs, in order to increase the bandwidth of the antenna or to obtain multi-band antennas.
Figure 1. (a) Infinite 2-D periodic structure of continuous metallic wires in air (dimensions in mm) (b) Irreducible Brillouin zone of the rectangular lattice with corners labelled Γ, X, M.
Figure 2. Band diagram for the rectangular 2-D periodic structure (P_x = 40 mm, P_y = 20 mm and d = 1.8 mm).
Figure 3. EBG structure excited by an omnidirectional source (dimensions in mm).
Figure 4. Fabry-Perot cavity model for the calculation of the EBG structure response.
Figure 5. Calculation of t_n and r_n coefficients from t_{n-1} and r_{n-1}.
Figure 6. Schematic for the calculation of T_n(f, 0) (n = 3) in the FDTD code (a) xy plane view (b) yz plane view.
Figure 7. |T_3(f, 0°)| versus frequency calculated using Eqs. (1)-(3) and calculated directly with the FDTD method.
Figure 8. |T_n(f, 0°)| for different numbers of layers n on each side of the source.
Figure 9. |T_n| versus frequency at θ = 0° and θ = 30°, for the EBG structure composed of n = 3 PRSs on each side of the source.
Figure 10. Normalized |T_3| versus θ at f_0, f_1 and f_2.
Figure 11. Geometry of the EBG antenna with reflectors (dimensions in mm).
Figure 12. Photograph of the fabricated EBG antenna.
Figure 13. Measured return loss of the proposed EBG antenna.
Figure 14. Measured and simulated co-pol radiation patterns in the H-plane and E-plane at 2.63 GHz.
"838442"
] | [
"214739",
"214739"
] |
00149261 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2006 | https://hal.science/hal-00149261/file/BoutayebMOTL3.pdf | Halim Boutayeb
email: [email protected].
Comparison Between Two Semi-Analytical Methods for Computing the Radiation Characteristics of a Fabry-Perot Cavity
Keywords: Periodic structures, Fabry-Perot cavity, Directive antenna, leaky wave antenna
In this contribution, two semi-analytical methods are compared for computing the radiation characteristics of a Fabry-Perot cavity excited by an internal current line source. One method uses the leaky wave theory and the other method uses a ray analysis. Compared to the conventional ray method, the first method offers more accurate results without increasing computing time. A full wave method is used as a reference.
Introduction
For the analysis of directive antennas based on a Fabry-Perot cavity, the ray method has been used [START_REF] Trentini | Partially reflecting sheet arrays[END_REF][START_REF] Temelkuran | Resonant cavity enhanced detectors embedded in photonic crystal[END_REF][START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity: Analysis and Design[END_REF]. For instance, a cavity composed of a ground plane and a Partially Reflecting Surface (PRS) has been considered in [START_REF] Trentini | Partially reflecting sheet arrays[END_REF]. In [START_REF] Temelkuran | Resonant cavity enhanced detectors embedded in photonic crystal[END_REF], a detector has been embedded inside a crystal, and in [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity: Analysis and Design[END_REF], the directivity of a Fabry-Perot cavity antenna has been analysed at different frequencies. The proposed semi-analytical method gives rapid results and allows interesting analyses [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity: Analysis and Design[END_REF]. In this letter, we compare this method to another semi-analytical method, based on leaky wave theory, for obtaining the radiation patterns of a Fabry-Perot cavity excited by a line source. The leaky wave based method gives more accurate results than the ray method, without increasing processing time.
Fabry-Perot type cavity structure
In this work, the structure shown in Figure 1 is considered. It is composed of a line source with a PRS at each side of the source. Each PRS is composed of infinitely long metallic wires and is characterized by the following parameters: the wire diameter a, the transversal period P_t, and the length L. The surface characteristics are represented in terms of the reflection and transmission coefficients r and t, which are calculated at the normal direction, for an infinitely long surface, using the FDTD method.
Ray method
In this method, it is assumed that the source sends an infinite number of rays in all directions θ. The complex transmission coefficient, defined as the amplitude of the wave outside the cavity normalized by the amplitude of the incident wave, is obtained by adding the successive reflected and transmitted rays. Neglecting the angular dependency of r and t, this coefficient can be written as [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity: Analysis and Design[END_REF]:
$$T(f,\theta) = \frac{t\, e^{-jkD\cos\theta/2}}{1 - r\, e^{-jkD\cos\theta}} \qquad (1)$$
where k is the wave number. From this equation, it can be noted that the coefficient T describes the behaviour of the structure as a function of both frequency and angle.
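Eq. (1) is cheap to evaluate numerically. A minimal Python sketch of the ray-method pattern follows; t and r are placeholder values here (in the hybrid FDTD-ray method they are taken from FDTD simulations of one PRS):

```python
import numpy as np

c0 = 3e8
D = 0.08                       # cavity width: D = 80 mm (Figure 1)

def T_ray(f, theta, t, r):
    """Eq. (1): ray-method response of the internally excited cavity."""
    k = 2 * np.pi * f / c0
    return (t * np.exp(-1j * k * D * np.cos(theta) / 2)
            / (1 - r * np.exp(-1j * k * D * np.cos(theta))))

theta = np.radians(np.linspace(-90, 90, 721))
r = 0.9 * np.exp(1j * np.pi * 0.96)        # placeholder PRS reflection
t = 1 + r                                  # shunt-impedance sheet: t = 1 + r
f0 = np.angle(r) * c0 / (2 * np.pi * D)    # frequency where r*exp(-jkD) = 1
pattern = np.abs(T_ray(f0, theta, t, r))**2
pattern /= pattern.max()
```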
Leaky wave method
The leaky wave radiation of a Fabry-Perot type antenna has been studied in [START_REF] Collin | Analytical solution for a leaky-wave antenna[END_REF], and its radiation pattern is expressed as:
$$F(\theta) \approx \frac{\cos^2\theta}{\left|k_0\cos\theta - \gamma_x\right|^2} \qquad (2)$$
where k_0 is the wave number at the resonance frequency and γ_x is the x component of the propagation constant. It is shown now that the coefficient γ_x can be evaluated as a function of the PRS reflection coefficient. Considering the transmission line model of the structure, shown in Figure 2, the coefficient z_s is the normalized impedance that models one PRS, and it can be written as a function of r:

$$z_s = -\frac{1+r}{2r} \qquad (3)$$

Referring to Figure 2, Z⁺ and Z⁻ represent the impedances seen at each side of the dotted line. The leaky wave transverse resonance condition imposes Z⁺ + Z⁻ = 0 [START_REF] Collin | Analytical solution for a leaky-wave antenna[END_REF], which can be written for the present structure as

$$2z_s + 1 + e^{-j\gamma_x D} = 0 \qquad (4)$$
This relation is used in Eq. ( 2) to obtain the radiation pattern of the structure.
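Taking the relations above at face value, Eq. (4) can be solved in closed form: since z_s = -(1+r)/(2r) gives 2z_s + 1 = -1/r, the condition reduces to e^{-jγ_x D} = 1/r, i.e. γ_x = [arg(r) + j ln(1/|r|)]/D on the principal branch, consistent with the pole of the ray-method Eq. (1). A hedged Python sketch (placeholder r, and Eq. (2) in the reconstructed form above):

```python
import numpy as np

c0 = 3e8
D = 0.08                                   # cavity width (m)
r = 0.9 * np.exp(1j * np.pi * 0.96)        # placeholder PRS reflection coefficient

# Eq. (4) reduces to exp(-j*gamma_x*D) = 1/r on the principal branch:
gamma_x = (np.angle(r) + 1j * np.log(1 / np.abs(r))) / D

k0 = gamma_x.real                          # operate at the cavity resonance
theta = np.radians(np.linspace(-90, 90, 721))
F = np.cos(theta)**2 / np.abs(k0 * np.cos(theta) - gamma_x)**2   # Eq. (2)
F /= F.max()
print(c0 * k0 / (2 * np.pi) / 1e9, "GHz")  # corresponding resonance frequency
```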
Numerical results and discussion
To demonstrate the validity of the proposed approach, numerical simulations were carried out to calculate the PRS characteristics and the radiation pattern of the structure illustrated in Figure 1. The parameters D = 80 mm and a = 2 mm are fixed, and two cases of the transversal period are considered. For the first case, Figure 4 shows the radiation patterns calculated by the FDTD method, the hybrid FDTD-ray method (using Eq. (1) with k = k_0) and the hybrid FDTD-leaky wave method (using Eqs. (2) and (4)). For the second case, Figure 5 gives the radiation patterns at the resonance frequency obtained with the different methods. From these patterns, it can be concluded that the hybrid FDTD-leaky wave method agrees more closely with the FDTD method than the hybrid FDTD-ray method does. Consequently, the accuracy of the hybrid FDTD-ray method decreases as the transversal period increases.
Conclusion
In this letter, two semi-analytical methods for analysing the radiation patterns of internally excited Fabry-Perot cavities have been compared. The leaky wave based method produces more accurate results than the ray method without increasing processing time.
"838442"
] | [
"214739"
] |
00149262 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2005 | https://hal.science/hal-00149262/file/AWPLEllipticalEBG.pdf | Member, IEEE Halim Boutayeb
email: [email protected]
Senior Member, IEEE Tayeb A Denidni
email: [email protected].
Senior Member, IEEE Abdel Razik Sebak
Member, IEEE Larbi Talbi
email: [email protected]
Abdel Razik Sebak
IEEE ANTENNAS AND WIRELESS PROPAGATION
Keywords: Electromagnetic Band Gap, elliptical structures, wireless communications, periodic structures, metallic wires
This paper presents the design of directive antennas for wireless communication systems by incorporating Elliptical Electromagnetic Bandgap (EEBG) materials composed of metallic wires. These structures have an elliptical shape and are radially periodic, and they are excited at their center using a monopole antenna. Defects were introduced and designed in the EEBG structure to open localized electromagnetic modes inside its frequency bandgap, and thus to create a directive beam. An antenna prototype operating in the DCS, PCS and UMTS bands was designed, fabricated, and tested. A good agreement is obtained between simulated and experimental results for both return loss and radiation patterns.
I. INTRODUCTION
ELECTROMAGNETIC bandgap (EBG) materials, also called photonic crystals [START_REF] Yablonovitch | Inhibited spontaneous emission in solid state physics[END_REF][START_REF] Joannopoulos | Photonic crystals: molding the flow of light[END_REF], offer pass-bands and stop-bands (bandgaps) to electromagnetic waves in the same way that semiconductors do for electrons. Another important characteristic of these materials is the ability to open localized electromagnetic modes inside the forbidden frequency bandgap by introducing defects into the periodic structure. In the microwave and antenna domains, EBG structures that are periodic in Cartesian coordinates have been used in several applications, such as suppressing surface waves [START_REF] Yang | Microstrip antennas integrated with electromagnetic bandgap (EBG) structures: a low mutual coupling design for array applications[END_REF], designing directive antennas [START_REF] Thevenot | Directive Photonic Band-Gap Antennas[END_REF][START_REF] Cheype | An Electromagnetic Bandgap Resonator Antenna[END_REF], creating a controllable beam [START_REF] Poilasne | Active metallic photonic bandgap material MPBG: experimental results on beam shaper[END_REF], or miniaturizing an antenna and enhancing its bandwidth [START_REF] Mosallaei | Antenna Miniaturization and Bandwidth Enhancement Using a Reactive Impedance Substrate[END_REF]. Recently, cylindrical EBG structures have been proposed for designing directive antennas for wireless communication systems [START_REF] Palikaras | Cylindrical Electromagnetic bandgap structures for direcive Base Station Antennas[END_REF][START_REF] Boutayeb | Analysis of radius-periodic cylindrical structures[END_REF][START_REF] Boutayeb | A controllable conformal Electromagnetic Band Gap Antenna for base station[END_REF]. In [START_REF] Palikaras | Cylindrical Electromagnetic bandgap structures for direcive Base Station Antennas[END_REF], an antenna with high directivity in the elevation plane and a wide horizontal beam has been presented. In [START_REF] Boutayeb | Analysis of radius-periodic cylindrical structures[END_REF], a method has been proposed for designing EBG structures composed of multiple layers of cylindrical periodic surfaces, and in [START_REF] Boutayeb | A controllable conformal Electromagnetic Band Gap Antenna for base station[END_REF], these structures have been applied to base station antennas. In this letter, a new design of Elliptical EBG (EEBG) structures is proposed to increase the directivity of a simple monopole antenna. To validate this approach, an antenna prototype operating at 2 GHz was simulated, fabricated and tested. Both theoretical and experimental results are presented and discussed.
II. DESIGN
Cylindrical EBG structures are composed of multiple layers of cylindrical periodic surfaces with the same curvilinear period between two adjacent wires [START_REF] Boutayeb | Analysis of radius-periodic cylindrical structures[END_REF][START_REF] Boutayeb | A controllable conformal Electromagnetic Band Gap Antenna for base station[END_REF]. To achieve the same design with an EEBG, the positions of the different wires are calculated numerically. The elliptical periodic surface composed of twelve wires (N = 12), shown in Fig. 1, is considered. To conserve the same elliptical distance (P_e) between wires, the angles φ_1 and φ_2 are calculated numerically using the following equation:
$$b\int_0^{\phi_i}\sqrt{1 - e^2\sin^2\theta}\; d\theta = \frac{iL}{N}, \qquad i = 1, 2 \qquad (1)$$
where e = √(b² − a²)/b and L is the perimeter of the ellipse:

$$L = 4b\int_0^{\pi/2}\sqrt{1 - e^2\sin^2\theta}\; d\theta \qquad (2)$$
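Equations (1)-(2) are incomplete elliptic integrals, so the wire angles are most easily obtained numerically. A minimal sketch follows; the semi-axes a and b below are placeholders (the actual dimensions are given in Fig. 1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a, b = 0.05, 0.08               # placeholder semi-axes (m), with b > a
N = 12                          # wires per elliptical surface
e = np.sqrt(b**2 - a**2) / b

def arc(phi):
    """Arc length of the ellipse from 0 to phi (left side of Eq. (1))."""
    return b * quad(lambda th: np.sqrt(1 - e**2 * np.sin(th)**2), 0, phi)[0]

L = 4 * arc(np.pi / 2)          # perimeter, Eq. (2)

# Solve Eq. (1): arc(phi_i) = i*L/N for the first two wire angles
phis = [brentq(lambda p, i=i: arc(p) - i * L / N, 0, np.pi / 2) for i in (1, 2)]
print(np.degrees(phis))
```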
The same design is applied to structures composed of multiple layers of elliptical periodic surfaces, as shown in Fig. 2. A wave propagation study analogous to the one performed for cylindrical structures [START_REF] Boutayeb | Analysis of radius-periodic cylindrical structures[END_REF] could be performed for the elliptical case, but this is not the subject of the present paper. As an example, an EEBG structure composed of four (4) layers is considered, in which the elliptical distances (P_e) between two adjacent wires are equal for all ellipses. To evaluate its properties, the EBG structure is excited by a line source at its center, and the transmitted transverse electric field is calculated at a point outside the structure (see Fig. 2); two simulations were carried out, one with the structure and one without it, for normalization. At low frequencies, up to 2.3 GHz (see Fig. 3), the EEBG structure strongly attenuates the transmitted waves, and this filtering property strengthens as the number of layers increases. To open localized modes in this bandgap, defects are applied to the structure, as illustrated in Fig. 4. In this figure, the defects consist of removing multiple wires: 3 wires are removed from the first layer, 5 from the second, 7 from the third, and so on. To examine the effect of the number of layers in the EBG with defects, a parametric study was carried out. Fig. 5 shows the radiation patterns for structures with two and four layers. From this figure, it can be noted that the radiation patterns show a single directive beam without side lobes, and that the directivity of the pattern improves as the number of layers increases.
III. EXPERIMENTAL RESULTS
The configuration of the proposed antenna is shown in Fig. 6. It consists of a monopole as the excitation source, a ground plane, and a four-layer EEBG structure made of metallic wires (see Fig. 4). This antenna was simulated using the Finite Difference Time Domain (FDTD) method and fabricated to validate the proposed concept. Fig. 7 shows a photograph of the fabricated EEBG antenna. In the FDTD simulations, the ground plane is considered infinite. The monopole antenna has the same diameter as the parasitic wires, and its length has been optimized to obtain a matched impedance at 2 GHz. The simulated and measured return loss of the antenna are shown in Fig. 8. A good agreement is obtained between theoretical and measured results. From the measured curve, a bandwidth (S_11 < -10 dB) from 1.76 GHz to 2.29 GHz (a fractional bandwidth of 26%) is achieved, which is enough to cover the DCS, PCS and UMTS bands (1.77 GHz - 2.17 GHz). This EEBG antenna can find potential applications for mobile communications at base stations. A further study of the EEBG antenna was performed on its radiation performance. The radiation patterns were measured in an anechoic chamber located at INRS-EMT, Canada. The measured and simulated E-plane and H-plane patterns are shown in Figs. 9 to 12 at two different frequencies corresponding to DCS and UMTS. With reference to these curves, a good agreement between predictions and measured data can be observed. The half-power beamwidths in the H-plane are 48° and 37.7° at 1.77 GHz and 2.17 GHz, respectively. In the E-plane, the half-power beamwidths are 30.3° and 28.3° at 1.77 GHz and 2.17 GHz, respectively. The maximum cross-polarization levels are -16.8 dB and -20.5 dB at 1.77 GHz and 2.17 GHz, respectively. Note that, in the simulations, the cross-polarization level is lower than -30 dB.

Fig. 11. Simulated and measured radiation patterns in the H-plane at 2.17 GHz (Simulated X-pol is lower than -30 dB).
Fig. 12. Simulated and measured radiation patterns in the E-plane at 2.17 GHz (Simulated X-pol is lower than -30 dB).
IV. CONCLUSION
The design of new directive antennas for wireless communication systems, using Elliptical EBG structures, has been presented. An antenna prototype was built and tested. This antenna offers a bandwidth of 26%, which is enough for wireless applications such as DCS, PCS and UMTS, and it offers a gain between 11.6 dBi and 12.6 dBi. The proposed EEBG antenna provides several advantages: low cost, easy fabrication, and a single feed, which reduces complexity compared to the feeding networks used in conventional antenna arrays. Furthermore, the gain can be increased by adding more layers. A further study comparing the performance of cylindrical and elliptical EBG structures will be presented elsewhere.
Fig. 1. Elliptical periodic surface of metallic wires (dimensions in mm).
Fig. 2. EEBG structure composed of multiple layers of elliptical surfaces.
Fig. 3. Relative transmitted power at the observation point (FDTD) of the EEBG structures with two and four layers.
Fig. 4. EEBG structure with defects.
Fig. 5. Radiation patterns in the H-plane at 2 GHz of the EEBG structures with two and four layers.
Fig. 6. Side view of the EEBG antenna (dimensions in mm).
Fig. 7. Photograph of the fabricated EEBG antenna.
Fig. 8. Simulated and measured return loss of the antenna.
Fig. 9. Simulated and measured radiation patterns in the H-plane at 1.77 GHz (Simulated X-pol is lower than -30 dB).
Fig. 10. Simulated and measured radiation patterns in the E-plane at 1.77 GHz (Simulated X-pol is lower than -30 dB).
"838442"
] | [
"214739",
"214739"
] |
00149263 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2006 | https://hal.science/hal-00149263/file/BoutayebAWPL0132_2006.pdf | Technique for Reducing the Power Supply in Reconfigurable Cylindrical Electromagnetic Band Gap Structures
Halim Boutayeb and Tayeb A. Denidni
Abstract—In this letter, a new design of Cylindrical Electromagnetic Band Gap (CEBG) structures is proposed in order to reduce the power supply used to control active elements in EBG-based reconfigurable antennas. The proposed configuration uses structures of discontinuous wires with defects consisting of continuous wires. To examine its performance, numerical simulations were carried out using a Finite Difference Time Domain (FDTD) code for the structure without active elements. The obtained results show that the new configuration allows a directive beam to be obtained in the pass-band of the all-continuous-wires structure. This work has potential application for designing an antenna with a directive pattern able to turn over a 360° range.
Index Terms—Periodic structures, cylindrical structures, reconfigurable materials
I. INTRODUCTION
DIFFERENT methods have been proposed for designing agile antennas. For instance, mechanically steerable-beam systems have been proposed, but they deteriorate easily and their maintenance is complex. Another solution for modifying the radiation pattern consists of using smart antennas, such as adaptive antenna arrays [START_REF] Compton | Adaptive Antennas: Concepts and Performance[END_REF] or phased arrays [START_REF] Mailloux | Phased Array Antenna Handbook[END_REF]. However, the cost of smart antennas is high, and these technologies often lead to complex designs. As an alternative to conventional beam-switching systems, reconfigurable Electromagnetic Band Gap (EBG) materials [START_REF] Poilasne | Active Metallic Photonic Band-Gap materials : experimental results on beam shaper[END_REF][START_REF] Boutayeb | Band structure analysis of crystals with discontinuous metallic wires[END_REF] can be used. To obtain a directive pattern able to turn over a 360° range, Cylindrical EBG (CEBG) structures present an attractive solution [START_REF] Ratasjack | A reconfigurable EBG structure for a beam steering base station antenna[END_REF][START_REF] Boutayeb | Analysis and Design of a Cylindrical EBG-based directive antenna[END_REF]. The technique proposed in [START_REF] Ratasjack | A reconfigurable EBG structure for a beam steering base station antenna[END_REF][START_REF] Boutayeb | Analysis and Design of a Cylindrical EBG-based directive antenna[END_REF] consists of using a cylindrical EBG structure made of discontinuous wires with active elements. The beam switching is obtained by creating defects consisting of discontinuous wires in an initial continuous-wire structure. The drawback of this design is that it requires a large power-supply DC current to control the active elements. To reduce the power required to control the active elements in reconfigurable CEBG-based antennas, this paper proposes a new design. The proposed configuration is based on a discontinuous-wire CEBG structure with defects consisting of continuous wires. The transmission coefficients of structures with all-continuous or all-discontinuous wires are used for the design. Then, Finite Difference Time Domain (FDTD) results for the radiation patterns of the proposed configuration, without active elements, are presented and discussed.
II. CYLINDRICAL EBG STRUCTURES CHARACTERIZATION
A Cylindrical EBG structure, as shown in Fig. 1, is considered. To calculate the transmission coefficient of this structure, the characteristics of a single cylindrical Frequency Selective Surface (FSS) are first calculated, and a cascading approach is then applied [START_REF] Boutayeb | Analysis and Design of a Cylindrical EBG-based directive antenna[END_REF]. The transmission coefficient is calculated by considering cylindrical incident and transmitted waves. Figure 2 presents the calculated magnitude of the transmission coefficient for four layers of cylindrical shells in two cases: continuous wires and discontinuous wires. For the discontinuous-wire case, the finite wires are L_w = 18 mm long, and the vertical distance between two wires is w = 2 mm. The CEBG structures have the following parameters: P_r = 45 mm and a = 1.5 mm. From Fig. 2, the two structures exhibit dual pass-band and stop-band behaviour in two bands (DC - 2.2 GHz and 2.6 GHz - 3.2 GHz).
In the next section, a new design for reconfigurable-pattern antennas is presented. This design is based on a structure with both discontinuous and continuous wires. The operating band is 2.6 GHz - 3.2 GHz, where the structure with all discontinuous wires presents a stop-band, whereas the structure with all continuous wires presents a pass-band (see Fig. 2).
Fig. 2. Transmission coefficients for two structures with four layers of cylindrical FSSs with discontinuous or continuous wires.
III. CEBG STRUCTURE USING DEFECTS WITH CONTINUOUS WIRES
In this section, a new structure, which can be used for designing antennas with a reconfigurable beam, is proposed and tested with an FDTD code. The new configuration is presented in Fig. 3, where the defects are made of continuous wires. The wire diameter (a), the radial period (P_r) and the parameters of the discontinuous wires (L_w and w) are the same as previously. The structure is considered infinite in the vertical direction. A current line source placed at the center is used as the excitation. Figure 4 presents the computed radiation patterns in the H-plane at 2.7 GHz, 3 GHz, and 3.2 GHz. These frequencies belong to the band 2.6 GHz - 3.2 GHz, which is the pass-band of the all-continuous-wires structure, as shown in Fig. 2. From these results, it can be noted that, in this band, a directive beam is obtained in the direction of the defects. In the new design, the number of wires behaving as continuous wires is reduced by 75% compared to the previous design [START_REF] Boutayeb | Analysis and Design of a Cylindrical EBG-based directive antenna[END_REF].
Thus, in the reconfigurable structure, the number of active elements in the On state, and hence the power supply required to control the active elements, will be reduced by the same percentage.
IV. CONCLUSION
A new Cylindrical Electromagnetic Band Gap (CEBG) structure with defects, for beam-switching antenna applications, has been designed and tested numerically. This configuration uses a structure composed of discontinuous metallic wires with defects consisting of continuous wires. The proposed solution reduces the number of active elements in the On state in CEBG-based reconfigurable antennas, and thus reduces the required power-supply DC current.
Fig. 1. Cylindrical Electromagnetic Band Gap (CEBG) structure composed of multiple layers of cylindrical FSSs with metallic wires.
Fig. 3. CEBG structure composed of discontinuous wires using defects with continuous wires. The structure is excited by a line source placed in the center.
Fig. 4. Radiation patterns in the H-plane of the CEBG structure, presented in Fig. 3, at 2.7 GHz, 3 GHz and 3.2 GHz.
"838442"
] | [
"214739",
"214739"
] |
00149264 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2006 | https://hal.science/hal-00149264/file/NormalisationAWPL.pdf | Internally-Excited Fabry-Perot Type Cavity: Power Normalization and Directivity Evaluation
Halim Boutayeb, Member, IEEE, and Tayeb A. Denidni, Senior Member, IEEE
Abstract—This letter presents a new approach to analyzing Fabry-Perot cavities excited from their inside by electromagnetic waves. While Fabry-Perot cavity antennas found in the literature often use the magnitude of the transmission coefficient to determine the antenna directivity, the proposed approach suggests a new strategy that uses a normalized transmission coefficient, derived using a transmission line model and by considering the available power from the source. Furthermore, a new analytical expression is also proposed for evaluating the directivity. The obtained results are presented and compared to experimental results reported in the literature.
Index Terms-Fabry-Perot cavity, directive antennas.
I. INTRODUCTION
SEVERAL techniques for designing highly directive antennas by incorporating a Fabry-Perot cavity have been proposed [START_REF] Akalin | A Highly Directive Dipole Antenna Embedded in a Fabry-Perot Type Cavity[END_REF][START_REF] Biswas | Exceptionally directional sources with Photonic Band-Gap crystals[END_REF][START_REF] Thevenot | Directive Photonic Band-Gap antennas[END_REF][START_REF] Trentini | Partially reflecting sheet arrays[END_REF][START_REF] James | Leaky-wave multiple dichroic beamformers[END_REF][START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF][START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF]. In [START_REF] Akalin | A Highly Directive Dipole Antenna Embedded in a Fabry-Perot Type Cavity[END_REF], a relation between the half-power beamwidth and the quality factor of the cavity (at the resonant frequency of the cavity) has been proposed. In [START_REF] Biswas | Exceptionally directional sources with Photonic Band-Gap crystals[END_REF], the authors have demonstrated that this directivity improvement has an analogy with past research in optical physics. Another antenna, consisting of a Fabry-Perot cavity between a patch antenna and an Electromagnetic Band Gap (EBG) material, has been analyzed [START_REF] Thevenot | Directive Photonic Band-Gap antennas[END_REF]. In addition, a Fabry-Perot cavity consisting of a Partially Reflecting Surface (PRS) and a ground plane has been used [START_REF] Trentini | Partially reflecting sheet arrays[END_REF][START_REF] James | Leaky-wave multiple dichroic beamformers[END_REF][START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF]. Furthermore, analytical expressions have been proposed in [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF] for the frequency of maximum directivity and the minimum half-power beamwidth of antennas based on a Fabry-Perot cavity. In the same perspective, ray theory has been used by many authors to treat the problem of a source located inside a Fabry-Perot cavity [START_REF] Trentini | Partially reflecting sheet arrays[END_REF][START_REF] James | Leaky-wave multiple dichroic beamformers[END_REF][START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF][START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF][START_REF] Temelkuran | Resonant cavity enhanced detectors embedded in photonic crystal[END_REF]. This model is equivalent to the classical model of multiple wave reflections used for the Fabry-Perot interferometer in optics [START_REF] Hecht | Optics[END_REF]; however, the source is considered inside the cavity instead of outside. In this case, the magnitude of the resulting transmission coefficient may take values greater than one. In [START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF], this magnitude has been used for evaluating the antenna directivity.
To achieve a more accurate characterization of Fabry-Perot type cavities excited from their inside by electromagnetic waves, a new approach is proposed. This approach is based on a new normalized coefficient using a transmission line model, and on a new analytical expression for the evaluation of the antenna directivity. To validate this work, the obtained results are compared to experimental results reported in the literature.
Manuscript received November 2005. This work was supported in part by the National Science Engineering Research Council of Canada (NSERC). The authors are with the Institut National de Recherche Scientifique (INRS)-EMT, Montréal, Canada. Email: [email protected], [email protected]
II. RAY ANALYSIS
For this study, a Fabry-Perot type cavity is considered, shown in Fig. 1(a). It is constituted by two Partially Reflecting Surfaces (PRSs) spaced by the distance D and characterized by their complex transmission and reflection coefficients t and r for a plane-wave incidence. For convenience, in this work these coefficients are considered independent of frequency and incidence angle, which does not limit the generality of the studied problem. As illustrated in Fig. 1(a), a point source is considered inside the Fabry-Perot cavity. The source is considered transparent to electromagnetic waves, and the characterization of the structure is obtained using a one-dimensional model [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF]. For a given angle θ, considering the multiple wave reflections (see Fig. 1(a)), the amplitude of the transmitted wave outside the cavity is expressed as follows [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF]:
$$T(\theta,f) = t\sum_{n=0}^{\infty} r^n\, e^{\,jk(2n+1)\frac{D}{2}\tan\theta\sin\theta \;-\; jk(2n+1)\frac{D}{2\cos\theta}} = \frac{t\, e^{-jkD\cos\theta/2}}{1 - r\, e^{-jkD\cos\theta}} \qquad (1)$$
where k is the free-space wave number. Note that a similar form is obtained if a ground plane is placed in the same position as the source [START_REF] Trentini | Partially reflecting sheet arrays[END_REF][START_REF] James | Leaky-wave multiple dichroic beamformers[END_REF][START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF], as illustrated in Fig. 1(b) (there is just an additional jπ term in the exponent of the denominator). For these two problems, illustrated in Fig. 1(a) and Fig. 1(b), the maximum transmitted power, achieved at resonance, is expressed as follows [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF]:
$$|T|^2_{max} = \frac{1 + |r|}{1 - |r|} \qquad (2)$$
From Eq. (2), |T|²_max is equal to or greater than one and can theoretically become very large if |r| is near one (note that 0 ≤ |r| ≤ 1). A transmission coefficient greater than one means that the source inside the cavity can supply more power in the presence of the cavity than without it. In [START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF], this relation has been used for evaluating the directivity of high-gain antennas based on a Fabry-Perot cavity.
III. TRANSMISSION LINE MODEL: POWER NORMALIZATION
According to the energy conservation principle, the transmission coefficient should not be greater than one. For this reason, a normalized version of the transmission coefficient is proposed, taking into account the available power from the source. Note that, according to the ray analysis, the response of the cavity of width D at the angle θ is equivalent to the response of the cavity of width D cos θ at normal incidence. From this, and according to circuit theory [START_REF] Pozar | Microwave Engineering[END_REF], the Fabry-Perot cavity excited from its interior can be represented by the transmission line model illustrated in Fig. 2(b). In Fig. 2(a), the transmission line model of the source with its matched impedance is also presented, in order to calculate the available power from the source. Z_c is the free-space characteristic impedance (Z_c = 120π). Z is the equivalent circuit model of the PRS. Z_s and e_s are the equivalent impedance and equivalent voltage of the source, respectively. To calculate the transmission coefficient outside the cavity, the available power P_i (see Fig. 2(a)) is first calculated [START_REF] Pozar | Microwave Engineering[END_REF]:
$$P_i = \frac{1}{2}\frac{|V_i|^2}{Z_s^*} = \frac{1}{2}\,\frac{|Z_s^*|^2}{Z_s^*\,|Z_s^* + Z_s|^2}\,|e_s|^2 \qquad (3)$$
where e s , Z s and V i are parameters shown in Fig. 2(a). Then referring to Fig. 2(b), the transmitted power P t for one side of the transmission line is given by:
$$P_t = \frac{1}{2}\frac{|V_t|^2}{Z_c} \qquad (4)$$
To simplify the expressions, the transmission and reflection coefficients t and r can be expressed as functions of Z [START_REF] Pozar | Microwave Engineering[END_REF]:
$$t = \frac{2Z}{2Z + Z_c} \qquad (5)$$

$$r = -\frac{Z_c}{2Z + Z_c} \qquad (6)$$
In addition, t_s and r_s are the transmission and reflection coefficients of the source, and they can be written:
$$t_s = \frac{2Z_s}{2Z_s + Z_c} \qquad (7)$$

$$r_s = -\frac{Z_c}{2Z_s + Z_c} \qquad (8)$$
In order to derive Eq. (4), Fig. 3 presents the equivalent model for the circuit of Fig. 2(b). In this figure, the impedance Z′ can be written as follows:
$$Z' = \frac{Z_c}{2}\;\frac{Z /\!/ Z_c + jZ_c\tan(k\cos\theta\, D/2)}{Z_c + jZ /\!/ Z_c\tan(k\cos\theta\, D/2)} = \frac{Z_c}{2}\;\frac{1 - \frac{Z_c}{2Z+Z_c}\,e^{-jkD\cos\theta}}{1 + \frac{Z_c}{2Z+Z_c}\,e^{-jkD\cos\theta}} = \frac{Z_c}{2}\;\frac{1 + r\,e^{-jkD\cos\theta}}{1 - r\,e^{-jkD\cos\theta}} \qquad (9)$$
The expression for the voltage V_1 (see Fig. 3) can be written as

$$V_1 = \frac{Z'}{Z' + Z_s}\,e_s = \frac{Z_c\,(1 + r\,e^{-jkD\cos\theta})}{(2Z_s + Z_c) - (2Z_s - Z_c)\,r\,e^{-jkD\cos\theta}}\,e_s \qquad (10)$$

and the current I_1 is expressed as

$$I_1 = \frac{V_1}{Z'} = \frac{2\,(1 - r\,e^{-jkD\cos\theta})}{(2Z_s + Z_c) - (2Z_s - Z_c)\,r\,e^{-jkD\cos\theta}}\,e_s \qquad (11)$$

Then, the voltage V_t (Fig. 2(b)) is deduced from these results:
$$V_t = V_1\cos(k\cos\theta\, D/2) - j\frac{Z_c I_1}{2}\sin(k\cos\theta\, D/2) = \frac{Z_c\,(1+r)\,e^{-jkD\cos\theta/2}}{(2Z_s+Z_c) - (2Z_s-Z_c)\,r\,e^{-jkD\cos\theta}}\,e_s = \frac{Z_c}{2Z_s+Z_c}\;\frac{t\,e^{-jk\cos\theta\, D/2}}{1 - (r_s+t_s)\,r\,e^{-jkD\cos\theta}}\,e_s \qquad (12)$$
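The chain from Eq. (9) to Eq. (12) is easy to get wrong by a sign. The quick numerical check below (arbitrary test impedances, e_s = 1) confirms that the transmission-line expression for V_t reduces to the closed form on the right of Eq. (12):

```python
import numpy as np

Zc = 120 * np.pi
Zs = 70 + 15j                     # arbitrary test source impedance
Z = 40 - 60j                      # arbitrary test PRS impedance
phi = 1.234                       # electrical length k*D*cos(theta)
E = np.exp(-1j * phi)

r = -Zc / (2 * Z + Zc)            # Eq. (6)
t = 2 * Z / (2 * Z + Zc)          # Eq. (5)
rs = -Zc / (2 * Zs + Zc)          # Eq. (8)
ts = 2 * Zs / (2 * Zs + Zc)       # Eq. (7)

Zp = (Zc / 2) * (1 + r * E) / (1 - r * E)                          # Eq. (9)
V1 = Zp / (Zp + Zs)                                                # Eq. (10)
I1 = V1 / Zp                                                       # Eq. (11)
Vt = V1 * np.cos(phi / 2) - 1j * (Zc * I1 / 2) * np.sin(phi / 2)   # line eqs.

Vt_closed = (Zc / (2 * Zs + Zc)) * t * np.exp(-1j * phi / 2) / (1 - (rs + ts) * r * E)
print(abs(Vt - Vt_closed))        # ~1e-16
```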
Thus, Eq. ( 4) can be written as
$$P_t = \frac{1}{2}\;\frac{Z_c}{|2Z_s + Z_c|^2}\;\frac{|t\,e^{-jkD\cos\theta/2}|^2}{|1 - r\,(r_s + t_s)\,e^{-jkD\cos\theta}|^2}\;|e_s|^2 \qquad (13)$$
Taking into account the power transmitted from both sides of the line, the normalized transmitted power |T|²_Norm can be expressed as:

$$|T|^2_{Norm} = \frac{2P_t}{P_i} = 2Z_cZ_s^*\;\frac{|Z_s^* + Z_s|^2}{|Z_s^*|^2\,|2Z_s + Z_c|^2}\;\frac{|t\,e^{-jkD\cos\theta/2}|^2}{|1 - r\,(r_s+t_s)\,e^{-jkD\cos\theta}|^2} \qquad (14)$$

For the case of real Z_s, Eq. (14) becomes

$$|T|^2_{Norm} = 4\,|r_s||t_s|\;\frac{|t\,e^{-jkD\cos\theta/2}|^2}{|1 - r\,(r_s+t_s)\,e^{-jkD\cos\theta}|^2} \qquad (15)$$
The maximum is then written
$$|T|^2_{Norm,max} = \frac{4\,|r_s||t_s|\,(1 - |r|^2)}{(1 - |r|\,|r_s + t_s|)^2} \qquad (16)$$
In Fig. 4, |T|²_Norm,max is plotted versus Z_s for different values of |r|. From these curves, one can see that the strength of |T|²_Norm is always limited to 1.
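This bound is easy to verify numerically by sweeping a real Z_s in Eq. (16); a minimal sketch, assuming a lossless PRS so that |t|² = 1 − |r|²:

```python
import numpy as np

Zc = 120 * np.pi
Zs = np.linspace(1, 2000, 2000)                 # real source impedance sweep (ohms)

rs_mag = Zc / (2 * Zs + Zc)                     # |r_s|, Eq. (8)
ts_mag = 2 * Zs / (2 * Zs + Zc)                 # |t_s|, Eq. (7)
sum_mag = np.abs(2 * Zs - Zc) / (2 * Zs + Zc)   # |r_s + t_s| for real Zs

for r_mag in (0.5, 0.9, 0.99):
    T2 = 4 * rs_mag * ts_mag * (1 - r_mag**2) / (1 - r_mag * sum_mag)**2
    print(r_mag, T2.max())                      # the maximum stays <= 1
```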
IV. DISCUSSION
Theoretically, Eq. (15) is more accurate than Eq. (1). However, Eq. (15) is less applicable in practice, because one does not necessarily have access to the transmission and reflection coefficients of the source. A rapid comparison between Eq. (1) and Eq. (15) shows that the normalized version (Eq. (15)) has two additional factors: 4|r_s||t_s|, and (r_s + t_s) in the denominator. For instance, for directive antennas incorporating a Fabry-Perot cavity, the angular dependence of the transmission coefficient is often used to evaluate the directivity. If the source is omnidirectional, the factor 4|r_s||t_s| is independent of the transmitting angle and can then be omitted. Furthermore, the point source can be considered to have a negligible interaction with the waves reflecting inside the cavity, which leads to r_s ≈ 0, t_s ≈ 1 and r_s + t_s ≈ 1. From these considerations, using Eq. (1) to predict the radiation patterns and the directivity of directive antennas based on an internally excited Fabry-Perot cavity is justified. However, the strength of the coefficient is not a direct evaluation of this directivity: the directivity is more closely associated with the inverse of the half-power beamwidth, which can be evaluated using Eq. (1), as demonstrated in [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF].
V. DIRECTIVITY EVALUATION
According to [START_REF] Boutayeb | Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design[END_REF], the minimum half-power beamwidth of an antenna based on a Fabry-Perot cavity, Δθ_3dB,min, achieved near the resonant frequency, can be expressed as

$$\Delta\theta_{3dB,min} \approx \sqrt{\frac{2}{Q}} \qquad (17)$$
where Q is the quality factor, which can be expressed as a function of the magnitude and phase of the reflection coefficient, |r| and ϕ_r, as follows:

$$\frac{1}{Q} \approx \frac{2\,(1 - |r|)}{\varphi_r\,|r|} \qquad (18)$$
Using these relations, and considering an antenna with a unique beam and the same half-power beamwidth in the H-plane and E-plane, the maximum directivity of this antenna is obtained approximately with the following expression [START_REF] Balanis | Antenna Theory: Analysis and Design[END_REF]:

$$DIR_{max,dB} \approx 10\log\left(\frac{26000}{(\Delta\theta_{3dB,min}\cdot 180/\pi)^2}\right) \qquad (19)$$
The results obtained with this relation are compared to experimental results available in the literature. From [START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF], the values of the magnitude and phase of the reflection coefficient of the Partially Reflecting Surface (PRS), |r| and ϕ_r, can be extracted. Using these values and Eqs. (17)-(19), we obtain the maximum directivity of the proposed antennas. These results are compared to the measured maximum gains presented in the same references. Tab. 1 compares the directivity obtained analytically to the measured gain. From these results, one can see that the proposed analytical model gives accurate results.
Tab. 1: Comparison between predicted results (Eqs. (17)-(19)) and measured gain [START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF][START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF].

Reference | |r|, ϕ_r (rad) | DIR (dB) | Measured gain (dB)
[START_REF] Feresidis | Artificial magnetic conductor surfaces and their application to low-profile high-gain planar antennas[END_REF] | 0.944, 2.8798 | 19.9 | 19
[START_REF] Feresidis | High gain planar antenna using optimized partially reflective surfaces[END_REF] | 0.9788, 2.9446 | 24.34 | 21.9
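Equations (17)-(19), as reconstructed above (with the square root in Eq. (17)), reproduce the predicted column of Tab. 1 almost exactly; a quick check:

```python
import numpy as np

def dir_max_dB(r_mag, phi_r):
    Q = phi_r * r_mag / (2 * (1 - r_mag))            # Eq. (18)
    dtheta_deg = np.degrees(np.sqrt(2 / Q))          # Eq. (17), in degrees
    return 10 * np.log10(26000 / dtheta_deg**2)      # Eq. (19)

print(dir_max_dB(0.944, 2.8798))    # ~19.8 dB (Tab. 1: 19.9 dB)
print(dir_max_dB(0.9788, 2.9446))   # ~24.3 dB (Tab. 1: 24.34 dB)
```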
VI. CONCLUSION
In this paper, a theoretical analysis of an internally excited Fabry-Perot cavity for directive-antenna applications, based on a one-dimensional model, has been presented. A new normalized coefficient using a transmission line model has been proposed. A new analytical expression for the evaluation of the directivity of this type of antenna has also been formulated. The results obtained with the proposed analytical expression have been compared to experimental results available in the literature; an excellent agreement was obtained, demonstrating the accuracy and usefulness of our approach.
Fig. 1. (a) Fabry-Perot type cavity excited from its inside by a point source (in the center). (b) Fabry-Perot cavity constituted by a Partially Reflecting Surface and a metallic plane, excited from its inside by a point source.
Fig. 2. Transmission line models: (a) Source with matched impedance (b) Source-Fabry-Perot cavity structure.
Fig. 3. Equivalent transmission line model of Fig. 2(b).
Fig. 4. |T|²_Norm,max coefficient versus Z_s for different values of |r|.
"838442"
] | [
"214739",
"214739"
] |
01492645 | en | [
"info"
] | 2024/03/04 23:41:50 | 2016 | https://hal.science/hal-01492645/file/EmergeablesArticle.pdf | Simon Robinson
Céline Coutrix
Jennifer Pearson
Juan Rosso
Matheus Fernandes Torquato
Laurence Nigay
Matt Jones
email: [email protected], [email protected]
Emergeables: Deformable Displays for Continuous Eyes-Free Mobile Interaction
Keywords: Deformable devices, continuous control, mobiles, tangiblity, shape-changing. ACM Classification H.5.2 User Interfaces: Input devices and strategies
Figure 1. The Emergeables concept: instead of graphical widgets, tangible continuous controls emerge from the surface of the mobile device.
INTRODUCTION
Mobile phone and tablet touchscreens are flat, lifeless surfaces. In contrast, the physical controls that touchscreens attempt to emulate (such as raised buttons and other widgets) support rich interactions that are directly afforded through their tangibility.
The benefit of such tangible elements has long been accepted in the HCI community (e.g., [START_REF] Ishii | Tangible bits: towards seamless interfaces between people, bits and atoms[END_REF]), but prior work has largely focused on display deformation to render information to users (or "physicalisation" -e.g., [START_REF] Follmer | Inform: dynamic physical affordances and constraints through shape and object actuation[END_REF]). Our interest here is in direct, tangible, hands-in and hands-on interaction with mobile devices -that is, creating a physical surface which can be both a visual display, and present physical control widgetsbuttons, dials, sliders and so on-that emerge from the screen to provide control when needed, disappearing back into the surface when not required. In this work, then, we present what we believe is the first exploration of hands-on continuous interaction with dynamically deformable mobile devices.
The basic advantages of tangibility for interaction are clear, ranging from ease of manipulation to the reduced need for visual attention in safety-critical situations (e.g., driving). These advantages are also evident in dynamic situations when, for instance, people still prefer to switch between multiple physical controls over combined digital versions [START_REF] Bernhaupt | Trends in the living room and beyond: results from ethnographic studies using creative and playful probing[END_REF]. Prior work has demonstrated the benefits of reconfigurable tangible controls via detachable widgets that can be used with a mobile touchscreen surface [START_REF] Jansen | Tangible remote controllers for wall-size displays[END_REF]. However, while these are beneficial for interaction performance for single tasks in isolation, carrying a collection of tangible elements at all times is clearly impractical in reality. The advantages of mobile deformable displays in these situations could, therefore, be highly significant, allowing a single device to take the form of the most appropriate control for the situation at hand.
In this work we are interested in displays that are able to deform to create and present these controls dynamically. Producing controls on-demand has previously been proven beneficial for static buttons [START_REF] Bailly | Métamorphe: augmenting hotkey usage with actuated keys[END_REF][START_REF] Harrison | Providing dynamically changeable physical buttons on a visual display[END_REF]. However, there is a significant gap around tangibility of continuous controls that we aim to address here. Secondly, it is clear that in order to create truly deformable mobile displays there is a large amount of research and development work still to be done. In order to support and direct this effort, then, in this research we also aim to quantify the gains that can be had by creating tangible controls on-demand.
In the process of investigating this question, we introduce Emergeables -a demonstration of how tangible controls could be dynamically created on mobiles. Emergeables depart from existing deformable research by endeavouring to provide truly direct interaction with affordances, controls and content integrated within a visual display (see Figs. 2 and4). Our ultimate long-term aim is to create a mobile device where any control widget can appear anywhere on its surface -in essence, affording the same flexibility as a graphical interface, but with the affordance and tactile benefits of tangibles. Consider the following scenario, then, which illustrates the approach:
Alex is playing a role-playing game on his Xbox and is keen to use his new Emergeable mobile to enhance the experience. While focused on the television screen, Alex pulls out his mobile, which begins acting as a controller. At the start of a mission, his character needs to drive a car, so the controls on his touchscreen become a steering wheel, joystick gear lever and raised gas and brake pedals. When he arrives at his destination, there's a lock to pick, so the controls morph into two levers he has to gently manipulate to tease out the pins of the bolt. After opening the door, he notices some accessories on the table. His mobile shifts to reveal 3D representations of the objects so he can select which ones he wants to pick up by touch alone. As he moves towards the next room he hears voices: the touchscreen quickly changes shape to reveal the weapons Alex has in his possession, and he quietly arms himself ready for combat . . . The contribution of this work is to quantify the performance benefits of, and gain qualitative user experience insights into, the concept of emergeables. It may take many years of development to produce such a display at consumer level, but we believe that this paper provides a solid grounding for future work in this adventurous area.
Our first motivation in carrying out this work is to consider whether such an ambitious end-goal will provide enough benefits given the costs associated with its development. We are also interested in the value of intermediary articulations of the concepts -that is, devices that afford continuous inputs but at a lower level of resolution. We sought, then, to understand the relative performance of such displays-that may be more achievable in the short-term-compared to both a high-level prototype and the conventional, flat GUI.
To demonstrate and test the potential of this concept, we created two emergeable prototypes: one high-resolution, where controls appear on the surface on-demand, but in a set of fixed positions; and, a second that uses a pixel-based widget model to produce lower-resolution controls, but already provides flexibility in positioning. In the rest of this paper we discuss background work, present the design space of emergeables, describe the prototypes we developed, and present the results of a study conducted to measure the effect of dynamic tangibility with regards to accuracy, visual attention and user preference. We conclude by discussing design implications for the control widgets that we have studied, and suggesting potential future pathways to developing emergeable mobile devices.
BACKGROUND Mobile eyes-free continuous interaction
Previous work has explored mobile eyes-free continuous interaction, for instance through devices (e.g., smart watches [START_REF] Simon | Watchit: simple gestures and eyes-free interaction for wristwatches and bracelets[END_REF], haptic feedback on mobiles [START_REF] Gupta | Squeezeblock: using virtual springs in mobile devices for eyes-free interaction[END_REF], on-clothing control [START_REF] Karrer | Pinstripe: eyes-free continuous input on interactive clothing[END_REF], or elastic control [START_REF] Klamka | Elasticcon: elastic controllers for casual interaction[END_REF]). Another approach is to leverage the user's body, for instance through foot control [START_REF] Scott | Sensing foot gestures from the pocket[END_REF], on-skin control [START_REF] Mujibiya | The sound of touch: on-body touch and gesture sensing based on transdermal ultrasound propagation[END_REF], finger-based [START_REF] Ho Yoon | Plex: finger-worn textile sensor for mobile interaction during activities[END_REF] or face-based [START_REF] Serrano | Exploring the use of hand-to-face input for interacting with head-worn displays[END_REF] control. All of these works emphasise the need for eyes-free, continuous control. However, most of the proposed solutions require the user to learn new interaction techniques on surfaces that do not provide feedback specific to the interaction itself. For instance, touching your arm to control a slider will likely be convenient and effective, but will not give the tangible feedback of a real, physical slider. One exception to these previous approaches is the use of tangibles on a touchscreen, such as Jansen et al.'s Tangible remote controllers [START_REF] Jansen | Tangible remote controllers for wall-size displays[END_REF], or Born's Modulares¹, which leverage users' existing experience with physical controllers.
Tangible User Interfaces (TUIs) for eyes-free interaction
Harrison and Hudson [START_REF] Harrison | Providing dynamically changeable physical buttons on a visual display[END_REF], Tory and Kincaid [START_REF] Tory | Comparing physical, overlay, and touch screen parameter controls[END_REF] and Tuddenham et al. [START_REF] Tuddenham | Graspables revisited: multi-touch vs. tangible input for tabletop displays in acquisition and manipulation tasks[END_REF] showed that, when the item being controlled is adjacent to the controls used (e.g., the tangible is situated right next to the display screen), tangibles outperform touchscreens. The benefit for entirely eyes-free interaction (where the feedback area is completely separated from the control area -on a remote surface, for example) has also been demonstrated by Fitzmaurice and Buxton [START_REF] Fitzmaurice | An empirical evaluation of graspable user interfaces: towards specialized, space-multiplexed input[END_REF] and Jansen et al. [START_REF] Jansen | Tangible remote controllers for wall-size displays[END_REF]. In the latter, a tangible slider outperformed a tactile slider on a tablet. However, the solution Jansen et al. proposed was to attach tangible controls to a mobile surface. Clearly, carrying a collection of tangible controls at all times in order to switch between them according to the task at hand is not practical. This technique also requires an additional articulatory stage for each action to add the controls to the screen. In this paper we take a different approach, where the system can dynamically provide the necessary tangible controls as and when needed.
Motor-spatial memory is considered a key factor in the success of TUIs [START_REF] Fitzmaurice | Bricks: laying the foundations for graspable user interfaces[END_REF][START_REF] Shaer | Tangible user interfaces: past, present, and future directions[END_REF]. With their dynamic, system-controlled actuation of controls, our prototypes reduce the ability of users to rely on this motor-spatial memory, and so challenge previous results. From prior work, it is not possible to tell how important motor-spatial memory is among the combined factors that result in TUIs' benefits - a question we address here.
Organic User Interfaces (OUIs)
In arguing for OUIs, Holman and Vertegaal present the benefits of computing form factors that can physically adapt depending on the functionality available and on the context of use [START_REF] Holman | Organic user interfaces: designing computers in any way, shape, or form[END_REF]. Their three principles for OUI design (input equals output; function equals form; form follows flow) have informed our work, but while their vision accommodates a comprehensive range of novel deformations, we focus on OUIs that allow the presentation of well-known controls (e.g., dials and sliders). Previous work showed that learning very different controls is a possible source of difficulty and frustration for users [START_REF] Cockburn | Supporting novice to expert transitions in user interfaces[END_REF].
Dynamic and shape-changing tangible interaction
Self-actuated tangible interfaces were originally introduced by Poupyrev et al. [START_REF] Poupyrev | Actuation and tangible user interfaces: the vaucanson duck, robots, and shape displays[END_REF], who presented the concept of dynamically surfacing 3D controls, implementing buttons using the Lumen display [START_REF] Poupyrev | Lumen: interactive visual and shape display for calm computing[END_REF]. The efficacy of such controls has been demonstrated for discrete tasks -for example, in the work of Harrison and Hudson [START_REF] Harrison | Providing dynamically changeable physical buttons on a visual display[END_REF], who showed that inflatable buttons were able to provide the tangibility benefits of physical buttons together with the flexibility of touchscreens. However, the prototype was limited to buttons alone, and so did not allow the control of continuous parameters. In addition, the inflatable membrane technique did not allow flexible placement of controls on the interface -a limitation that is also seen in the commercial version of the technology, which provides only a keyboard. 2More flexible approaches have been proposed through reconfigurable materials placed on top of a sensing surface. Ferromagnetic input [START_REF] Hook | A reconfigurable ferromagnetic input device[END_REF], for example, allows for the physical form of a device to be defined using combinations of ferrous objects such as fluids or bearings. This allows users to construct new forms of tangible widgets, such as trackballs, sliders and buttons. However, when using separate magnetic objects the approach suffers from the same mobility problems as Jansen et al.'s approach (cf. [START_REF] Jansen | Tangible remote controllers for wall-size displays[END_REF]); and, when using a fluid-filled bladder, it is only able to detect input, rather than provide tangible controls. Another approach to reconfigurable tangibility is MudPad [START_REF] Jansen | Mudpad: tactile feedback and haptic texture overlay for touch surfaces[END_REF], which provides localised haptic feedback through an array of electromagnets combined with an overlay containing magnetorheological fluid. However, the haptic feedback that is generated is not strong enough to constrain the user's displacement of the widgets. ForceForm [START_REF] Tsimeris | Forceform: a dynamically deformable interactive surface[END_REF] is a similarly mouldable interactive surface that can create concave areas in order to guide touches on the surface. Users can, for instance, feel a finger guide for a touch slider, though cannot grasp objects to interact, unlike our design.
In contrast to these approaches, we aim to provide controls that present the same grasp and feel as their physical counterparts, in addition to offering the same manipulation flexibility. In order to do so, our technique uses a grid of 'sensels' ([START_REF] Rosenberg | The unmousepad: an interpolating multi-touch force-sensing input pad[END_REF]; see design space, Fig. 3) that are able to rise or fall in any location depending on the interaction that is required. This class of approach has previously been demonstrated on tabletop interfaces -inFORM [START_REF] Follmer | Inform: dynamic physical affordances and constraints through shape and object actuation[END_REF], for example, provides an impressive array of visualisations and dynamic affordances that are created by reconfiguring the tabletop surface in the Z dimension. However, input capability is limited to vertical displacement and touch on the shape display (detected by a depth camera). Rod displays have not yet provided manipulation beyond pull and push (e.g., [START_REF] Taher | Exploring interactions with physically dynamic bar charts[END_REF]). In our work we go further by dynamically reproducing existing tangible controls through emergeable elements. Adding the capability to mimic current TUIs in this way brings new challenges in maintaining performance.
The Haptic Chameleon [START_REF] Michelitsch | Haptic chameleon: a new concept of shape-changing user interface controls with force feedback[END_REF] was an early demonstration of shape-changeable tangible controls applied to video playback. For example, controlling video playback at a frame-by-frame level used a large dial; scene-by-scene level used a thin central wedge of the same dial. More recently, dynamic tangible controls with pneumatic actuation have been demonstrated, allowing programmatic manipulation of tactile responses to require different levels of actuation force from users [START_REF] Vázquez | 3d printing pneumatic device controls with variable activation force capabilities[END_REF]. The same technology was used in [START_REF] Yao | Pneui: pneumatically actuated soft composite materials for shape changing interfaces[END_REF] to change the shape of tangible elements. Moving away from buttons and dials, recent work demonstrated how a single tangible slider could be deformed to adapt to users' needs [START_REF] Coutrix | Shapechange for zoomable TUIs: opportunities and limits of a resizable slider[END_REF]. Unlike these approaches, which each address a single tangible widget, we propose and demonstrate a broader view of this area, studying interfaces that can deform their entire surfaces to create various different tangible widgets at any position.
Shape-change for mobile interaction
Many approaches exist for changing the physical shape of mobile devices. One technique, for example, is to allow the interface to be folded, expanded or contracted by the system or the user, such as FoldMe [START_REF] Khalilbeigi | Foldme: interacting with double-sided foldable displays[END_REF], PaperFold [START_REF] Gomes | Paperfold: evaluating shape changes for viewport transformations in foldable thin-film display devices[END_REF] or Morphees [START_REF] Roudaut | Morphees: toward high "shape resolution" in self-actuated flexible mobile devices[END_REF]. The MimicTile [START_REF] Nakagawa | Mimictile: a variable stiffness deformable user interface for mobile devices[END_REF] used shape-memory alloys to sense bending of a mobile interface and also provide feedback through the device's stiffness. Another approach is to expand the physical volume of a device (e.g., [START_REF] Dimitriadis | Evaluating the effectiveness of physical shape-change for in-pocket mobile device notifications[END_REF][START_REF] Hemmert | Shape-changing mobiles: tapering in one-dimensional deformational displays in mobile phones[END_REF]). The key difference between our work and these examples is that each of these prototypes deforms the whole interface at once, rather than bringing tangibility to the widgets it is composed of.
A second approach to mobile tangibility is to provide deformability to dedicated parts of the mobile interface with localised protrusions (e.g., [START_REF] Dimitriadis | Evaluating the effectiveness of physical shape-change for in-pocket mobile device notifications[END_REF][START_REF] Hemmert | Dynamic knobs: shape change as a means of interaction on a mobile phone[END_REF]). However, the placement of these is not flexible, as it is in our technique. In addition, existing research that takes this approach has primarily investigated shape-change for notifications (e.g., [START_REF] Esben | Is my phone alive?: a large-scale study of shape change in handheld devices using videos[END_REF]), rather than for the control of continuous parameters.
Another approach for deforming mobile UI is to raise tangible pixels from the surface -our work falls into this classification. One example of this approach is Horev's TactoPhone [START_REF] Horev | Talking to the hand -the interactive potential of shape-change behavior in objects and tangible interfaces[END_REF], a video concept scenario using a Pinscreen. 3 Horev envisioned deformation on the back of a mobile device to allow for a rich vocabulary of notifications through shape coding. In contrast, ShapeClip [START_REF] Hardy | Shapeclip: towards rapid prototyping with shape-changing displays for designers[END_REF] uses linear motors on top of a regular capacitive touchscreen. Through control of the underlying screen's brightness and colour, deformation events can be communicated between the detachable widgets and the device underneath. However, ShapeClip focuses on deformation for output -with direct input capability limited to capacitive forwarding onto the screen below. While we use a similar approach, with micro linear stepper motors to raise and lower emergeable elements, our aim is to widen the direct tangible input possibilities for mobile devices.
EMERGEABLES: CONCEPT AND DESIGN SPACE
To provide a framework to guide our prototype development, informing options and choices, we first developed a design space, illustrated in Fig. 3. While display- and touch-screens are constructed of pixels, emergeables' elementary unit is a sensel [START_REF] Rosenberg | The unmousepad: an interpolating multi-touch force-sensing input pad[END_REF], with two key properties:
Manipulation: Sensels can be manipulated by the user. As a starting point to explore their potential, we consider two basic tangible manipulations from [START_REF] Stuart | A morphological analysis of the design space of input devices[END_REF]: translation and rotation, each in three dimensions.
Size: The size of each sensel defines the resolution of the emergeable interface. Sensels' physical size is completely independent of the pixel resolution of the display surface.
As illustrated in Fig. 3, our ultimate aim is for emergeables to be created at very high resolutions, on the order of millions of sensels (just like today's visual displays have millions of pixels). Such a display would allow users to grab and manipulate groups of sensels to interact with as, for example:
• A slider, by translating the sensels in the Y-axis only;
• A dial or handle, where the central sensel rotates around the Z-axis, while other sensels translate in the X-and Y-axes; • A mouse wheel, where the central sensel rotates around the X-axis, while other sensels translate in the Y-and Z-axes.
There is a richer design space to be explored that goes beyond manipulation and resolution. Alexander et al.'s survey of more than 1500 household physical buttons [START_REF] Alexander | Characterising the physicality of everyday buttons[END_REF] identified a range of features of existing physical buttons (e.g., bigger buttons are pressed with less force) that can inform the design of emergeables. As explained in [START_REF] Alexander | Characterising the physicality of everyday buttons[END_REF], controls could be physically modified to make critical actions harder to invoke. Moreover, controls could be provided with a range of textures and vary in response, some gliding smoothly, others providing more resistance. Adding such features to a prototype will certainly create a broad range of interaction experiences, including a method to address the eyes-free recognition of controls as they emerge. In this work, though, we have focused on the two key properties that describe the fundamental operation of the controls.
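To make the sensel abstraction concrete, the sketch below models an emergeable surface as a grid of sensels carrying the two key properties above, with sensels raised on demand to form widgets. This is an illustrative model only: the names (Sensel, EmergeableSurface, allocate_widget) are ours and not an existing API, and the 4 × 7 grid matches the low-resolution prototype described in the next section.

```python
from dataclasses import dataclass, field

@dataclass
class Sensel:
    """One emergeable element: its size and the manipulations it supports."""
    size_mm: float = 15.0                                  # sensel size sets the resolution
    translate_axes: set = field(default_factory=lambda: {"x", "y", "z"})
    rotate_axes: set = field(default_factory=lambda: {"z"})
    height_mm: float = 0.0                                 # 0 = flush with the surface

class EmergeableSurface:
    def __init__(self, rows: int, cols: int):
        self.grid = [[Sensel() for _ in range(cols)] for _ in range(rows)]

    def allocate_widget(self, kind: str, row: int, col: int) -> None:
        """Raise the sensel that forms a widget at (row, col)."""
        if kind in ("dial", "slider"):     # both begin as a single raised sensel
            self.grid[row][col].height_mm = 15.0
        else:
            raise ValueError(f"unknown widget kind: {kind}")

surface = EmergeableSurface(rows=4, cols=7)  # the 4x7 low-resolution prototype
surface.allocate_widget("dial", row=1, col=3)
```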
PROTOTYPES
We built two prototypes with different levels of resolution to both demonstrate the emergeables concept and test its potential (see Fig. 2). The first is a low-resolution emergeable designed after the popular Pinscreen 3 desk toys, and existing research implementations, such as [START_REF] Follmer | Inform: dynamic physical affordances and constraints through shape and object actuation[END_REF]. Each 15 mm sensel can emerge and be fully rotated, or translated up to 15 mm in any direction.
The second, built for the predefined tasks of our experiment, raises real tangible controls on demand. Using real controls allowed us to explore the benefits of high-resolution, fully manipulable future emergeables. The full dynamics of both systems are shown in the video accompanying this paper. 4
Low-resolution emergeable
The low-resolution emergeable prototype (see Fig. 2 (left)) consists of an array of 4×7 circular sensels of 15 mm diameter. Each sensel moves independently (powered by a micro stepper motor), and can be raised and lowered up to 15 mm to create a three-dimensional relief. Each individual sensel can also be manipulated by the user in three ways: pushing (as a button); rotating (as a dial); and, tilting to simulate a limited translation (15 mm in any direction), which is used to create sliders in conjunction with adjacent sensels (see Fig. 4). Sensels are surrounded by a bristle mesh that fills gaps as they are moved during interaction (see Fig. 2 (centre left)). With these features, it is possible to emerge a dial or slider in any location on the prototype's surface, but remove it when not required. To create a dial, a single sensel is raised for the user to turn. To create a slider, one sensel is raised -when this sensel is tilted, the next sensel along the line of the slider is raised, and the movement continues. In this way, it is possible to simulate fluid interaction with a slider via tilting alone (see Fig. 4). While the current version has relatively large sensels, we believe this approach could in future be greatly miniaturised, allowing for richer interaction as illustrated in Fig. 3.
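The tilt-to-slide behaviour just described reduces to a small state machine: one raised sensel acts as the slider thumb, and each tilt event raises the next sensel along the track before lowering the previous one (cf. Fig. 4). A minimal sketch of that logic follows; the naming is our own and this is not the prototype's actual firmware.

```python
class LowResSlider:
    """A slider simulated as a chain of sensels along one row (cf. Fig. 4).

    heights[i] is the emergence height (mm) of sensel i; exactly one sensel
    in the chain is raised at any time, acting as the slider thumb.
    """

    def __init__(self, length=4, raise_mm=15.0):
        self.heights = [0.0] * length
        self.raise_mm = raise_mm
        self.pos = 0
        self.heights[self.pos] = raise_mm

    def on_tilt(self, direction):
        """Handle one tilt event; direction is +1 or -1 along the track."""
        nxt = self.pos + direction
        if 0 <= nxt < len(self.heights):
            self.heights[nxt] = self.raise_mm      # raise the next sensel first,
            self.heights[self.pos] = 0.0           # then lower the previous one
            self.pos = nxt
        return self.pos / (len(self.heights) - 1)  # normalised value in [0, 1]

slider = LowResSlider()
print(slider.on_tilt(+1), slider.on_tilt(+1))      # thumb moves along the track
```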
High-resolution emergeable
The high-resolution emergeable prototype (see Fig. 2 (right)) is far simpler, and is made of actual dials and sliders that can be revealed when needed by rotating a panel on its surface. The prototype consists of four of these rotatable panels, each of which is controlled by a separate motor, allowing it to display either a slider, dial or flat surface (mimicking the 'un-emerged' display) in each position (see Fig. 2 (centre right)).
Construction of the prototypes
The prototypes were custom-made to provide manipulation as specified in the design space. The high-resolution prototype was made of laser-cut MDF panels, with four compartments, each containing actuators that rotate the sensor panel to display the correct combination of dial, slider or blank space. The low-resolution prototype consists of 28 micro stepper motors repurposed from laptop DVD drives. These are vertically mounted on navigation switches with inbuilt rotary encoders, which provide both X-and Y-axis translation, and Z-axis rotation. Each sensel has a 3D-printed gearing shaft that allows Z-axis translation, and also functions as the interaction point. The high-resolution prototype uses dials and sliders that are of comparable resolution to those used in everyday objects.
The operation of the low-resolution dial is less smooth and continuous than the high-resolution one (having only 30 steps per revolution). The low-resolution slider provides three steps per sensel in either of the X- or Y-axes.
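Given these step counts (30 steps per dial revolution; 3 slider steps per sensel, i.e., 12 steps over a four-sensel track), raw encoder counts map to normalised parameter values as sketched below. The choice of one full revolution for the dial's full parameter range is our assumption for illustration.

```python
DIAL_STEPS_PER_REV = 30          # low-res dial resolution (stated above)
SLIDER_STEPS = 3 * 4             # 3 steps per sensel x 4 sensels = 12 steps

def dial_value(step_count, turns_for_full_range=1):
    """Map accumulated encoder steps to a value in [0, 1]."""
    total = DIAL_STEPS_PER_REV * turns_for_full_range
    return max(0.0, min(1.0, step_count / total))

def slider_value(step_count):
    """Map slider steps to [0, 1]; 12 discrete positions along the track."""
    return max(0.0, min(1.0, step_count / SLIDER_STEPS))

print(dial_value(15))    # half a revolution -> 0.5
print(slider_value(6))   # mid-track -> 0.5
```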
EXPERIMENT
We conducted an experiment in order to evaluate the impact of resolution on performance with continuous, mobile deformable controls. With this in mind, we chose to focus on dials and sliders, as they provide continuous adjustment of a parameter; and, the benefits of tangibility for buttons have been well demonstrated previously (e.g., [START_REF] Harrison | Providing dynamically changeable physical buttons on a visual display[END_REF]).
In addition to the two emergeable prototypes, we also created a non-emergeable touchscreen interface for comparison (developed on an Android tablet), which displayed standard platform dials and sliders in the same positions and at the same sizes as the two physical prototypes (see Fig. 5). The size, input resolution, location and latency of each widget were the same across all three designs. That is, sliders were 80 mm in length, and consisted of four sensels on the low-resolution prototype; dials were all 15 mm in diameter.
Method
The experiment followed a within-subjects design with three independent variables:
Resolution: GUI, low-res emergeable or hi-res emergeable;
Complexity: 1 or 2 widgets (controlled simultaneously);
Widget: Dial or slider.
The order of presentation of Resolution conditions was counterbalanced across participants using a Latin square design.
The Complexity variable was presented in increasing order. Finally, the order of presentation of Widgets was randomised. For instance, participant 1 was presented with the following sequence: single dial, single slider, two dials, two sliders, dial+slider; and, used each widget first with the GUI, followed by the low-resolution and then the high-resolution prototypes. In all cases, the physical location of the widgets was randomised between one of four positions (see Figs. 2, 5 and 7).
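For reference, an ordering like the one above can be generated in a few lines: a cyclic Latin square row per participant for Resolution, Complexity in increasing order, and Widget combinations shuffled within each complexity level. This is an illustrative reconstruction, not the script used in the study.

```python
import random

RESOLUTIONS = ["GUI", "low-res", "hi-res"]
SINGLE = ["dial", "slider"]
DUAL = ["dial+dial", "slider+slider", "dial+slider"]

def latin_square_row(participant_id):
    """Row of a cyclic 3x3 Latin square, counterbalancing Resolution order."""
    shift = participant_id % len(RESOLUTIONS)
    return RESOLUTIONS[shift:] + RESOLUTIONS[:shift]

def session_plan(participant_id, seed=None):
    rng = random.Random(seed)
    # Complexity in increasing order: all single-widget tasks, then all dual.
    widgets = rng.sample(SINGLE, len(SINGLE)) + rng.sample(DUAL, len(DUAL))
    return [(w, latin_square_row(participant_id)) for w in widgets]

for task, res_order in session_plan(participant_id=0, seed=1):
    print(task, res_order)
```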
Tasks
To simulate mobility and users switching between continuous control tasks, the main part of the study involved participants using the prototypes to control a graphical display projected on two separate screens either side of their location (see Fig. 6). Participants used each prototype in turn for pursuit tasks.
As in previous work on continuous parameter adjustments (e.g., [START_REF] Fitzmaurice | An empirical evaluation of graspable user interfaces: towards specialized, space-multiplexed input[END_REF][START_REF] Jansen | Tangible remote controllers for wall-size displays[END_REF]), the tasks required participants to follow a target cursor along either a linear or circular control. We chose this type of task for the trials as many higher-level human actions depend on this one-dimensional pursuit method (cf. [START_REF] Jansen | Tangible remote controllers for wall-size displays[END_REF]).
Figure 7 shows the graphical representation of both the slider and dial pursuit tasks. In each case, the current position of the cursor is shown as a thick white line, and the target region is in blue. Participants were instructed to keep the white line within the blue target area at all times.
Following [START_REF] Jansen | Tangible remote controllers for wall-size displays[END_REF], the target moved at constant speed and darted off at pseudo-random intervals (2 s to 4 s). The full projected size of each control was 20 cm. The movement speed was 0.15 × the control's range (R), and the dart-off distance was 0.25 × R. Every 15 s, the projected control moved to the other screen, and participants were prompted by on-screen instructions to turn around, simulating a change in focus or application. This was repeated four times (i.e., 60 s total), after which participants were able to take a short break if they wished. With one-widget Complexity, participants performed a second iteration of 4 × 15 s tasks. With two-widget Complexity, only a single iteration was performed.
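The target behaviour can be expressed compactly in code. The sketch below generates a trajectory with the parameters stated above (constant drift of 0.15 × R, dart-offs of 0.25 × R at pseudo-random 2-4 s intervals). Two details are our assumptions, since the paper does not spell them out: the drift speed is taken as per second, and the target is assumed to reverse direction at the control's limits.

```python
import random

def target_trajectory(duration_s, fps=60, r=1.0, seed=0):
    """Yield (time_s, target_position) pairs for the pursuit task."""
    rng = random.Random(seed)
    pos, direction = 0.5 * r, 1
    next_dart = rng.uniform(2.0, 4.0)               # dart-off intervals of 2-4 s
    dt = 1.0 / fps
    t = 0.0
    while t < duration_s:
        pos += direction * 0.15 * r * dt            # constant speed: 0.15 x range (per second, assumed)
        if t >= next_dart:
            pos += rng.choice([-1, 1]) * 0.25 * r   # dart-off distance: 0.25 x range
            next_dart = t + rng.uniform(2.0, 4.0)
        if pos <= 0.0 or pos >= r:                  # assumed: reverse at the ends
            pos = max(0.0, min(r, pos))
            direction *= -1
        yield t, pos
        t += dt
```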
When the task changed between screens, the location of the widget(s) used changed randomly (consistently on both the projected screen and the prototype). As a consequence, participants needed to reacquire the control(s), but could not use their spatial memory for this. This task design allowed us to take the change of focus or application into account in our evaluation (as happens in ecological, uncontrolled settings), and measure the impact of this change on the interaction. In the case of the two-widget Complexity, each target moved independently, and participants were required to control both widgets simultaneously. Participants stood between the two display screens for the single widget task; for two widgets they sat (to allow both hands to be used at the same time).
Figure 7. An example of the displays the user saw on the projected screen while carrying out the pursuit task. Widgets on the emergeable are used to control the sliders (far left and centre right) or dials (far right and centre left). There are four positions in which controls could be displayed, as illustrated in the image. Each position can display a slider or a dial. Only one (single widget task) or two (dual widget task) widgets were visible at any one time. The widgets on each prototype were rendered in the same positions relative to the large screen. Solid white lines are the user's controller in each case; blue shaded areas are their target.
Procedure
We recruited 18 participants (9M, 9F, aged 18-66) to take part in the experiments. All except one of the participants had at least two years' experience with touchscreens (the remaining participant had no experience), and four were left-handed.
Sessions lasted 50 min on average.
After a discussion of the experiment and obtaining informed consent, each experiment began with a short demographic questionnaire, including questions regarding the participant's preference for physical or digital interaction with buttons, dials and sliders. Following this, we showed the participant a short video of concept designs illustrating our intended use of the system. The participants were then given training on each of the prototypes, first using a dial on each (GUI, low-res, hi-res), then a slider in the same order.
Participants then performed the series of tasks according to the experimental design described above. In cases where there was only one widget to control, participants were asked to stand up holding the prototype with one hand while controlling the widget with the other. In cases where there were two widgets to control, we allowed the participant to sit in a swivel-chair with the prototype on their lap (to free up both hands for controlling widgets). In both cases, participants were free to move their entire body (or body+chair) to face the appropriate screen.
Participants' accuracy for each task was captured in software, and all tasks were recorded on video to allow analysis of participants' head direction as a proxy for visual attention.
The study ended with a short structured interview probing participants' views on each interface. Participants were given a £10 gift voucher in return for their time.
Measures
Our main objective in this study was to determine the effect of Resolution on performance. To this end, we recorded, via logs, the accuracy of each participant's tasks -that is, how well they were able to follow the blue target region using the controls given. The accuracy was computed for each frame as the distance between the centre of the cursor and the centre of the blue target region. The accuracy was then aggregated for each participant using the geometric mean (giving a better indicator of location than the arithmetic mean, as the distribution of the error is skewed). In the case of two widgets, the accuracy was then computed as the geometric mean of the accuracy of both widgets.
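A minimal sketch of this aggregation, assuming per-frame logs of cursor and target centre positions (the log format and the epsilon guard against zero-distance frames are our assumptions):

```python
import math

def frame_errors(cursor, target):
    """Per-frame pursuit error: distance between cursor and target centres."""
    return [abs(c - t) for c, t in zip(cursor, target)]

def geometric_mean(values, eps=1e-9):
    """Geometric mean computed via logs; eps guards against zero distances."""
    return math.exp(sum(math.log(v + eps) for v in values) / len(values))

def participant_accuracy(trials):
    """trials: list of (cursor_series, target_series); returns one error score."""
    return geometric_mean([geometric_mean(frame_errors(c, t)) for c, t in trials])

def dual_widget_accuracy(acc_widget_a, acc_widget_b):
    """For two widgets, the geometric mean of both widgets' accuracies."""
    return math.sqrt(acc_widget_a * acc_widget_b)
```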
In addition to this measure, we also wanted to determine the level of visual attention required to operate each prototypethat is, how often the user needed to look down at the device while controlling the projected widget(s). To capture this information, we analysed each study's video footage using ELAN [START_REF] Sloetjes | Annotation by category: ELAN and ISO DCR[END_REF], recording points where the user's head direction moved from the projected screen to the physical device, and the time spent looking at the controls.
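From the exported annotations, the two glance measures reported below (number of glances, and time spent looking down) reduce to simple interval arithmetic. A sketch, assuming each annotation is a (start_s, end_s) interval during which the participant looked at the device:

```python
def glance_metrics(intervals):
    """intervals: list of (start_s, end_s) look-down annotations for one trial."""
    n_glances = len(intervals)
    time_down = sum(end - start for start, end in intervals)
    return n_glances, time_down

n, t = glance_metrics([(3.1, 4.0), (17.2, 17.9), (31.5, 33.0)])
print(f"{n} glances, {t:.1f} s looking at the device")
```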
Finally, as an indication of participants' perceived usability of the devices, we asked them to rate each prototype out of 10 for how easy they found it to use (10 easiest). They were also asked to rank the prototypes in order by the amount of perceived visual attention required to use each one.
RESULTS
Pre-study questionnaire
The results from the pre-study questionnaire mirror previous work in this area (e.g., [START_REF] Harrison | Providing dynamically changeable physical buttons on a visual display[END_REF]), showing that the majority of participants favour physical widgets over touchscreen interactions.
Of the 17 participants who could answer this question (the participant with no touchscreen experience did not respond), 13 preferred physical buttons, 9 preferred physical sliders, and 15 preferred physical dials. The reasons for this preference as stated by the participants included the tangibility and "feel" given by physical controls; but, more commonly, the precision that is afforded by these widgets. One even went as far as describing the poor migration of physical widgets to digital representations, stating "[ . . . ] touchscreen widgets are only attempts to imitate the real thing -they try and give the same experience but in a format that fits in your pocket".
Pursuit accuracy
Figure 8 shows the mean pursuit error (as a percentage of the whole widget's range), for each combination of Resolution, Widget and Complexity, aggregated over all tasks. Overall (Fig. 8 (A)), the high-resolution prototype led to 6.7 % of pursuit error, and the low-resolution and GUI prototypes to 11.6 % and 12.0 % of pursuit error, respectively (all with 95 % CIs). The high-resolution prototype was the most accurate, while the low-resolution and GUI designs were broadly similar, overall. In order to further unpack the differences between the prototypes, and understand the performance of the low-resolution emergeable prototype, we analysed the results for one and two widgets separately (see Fig. 8 (B) and (C)).
Single widget task
A two-way ANOVA shows a significant main effect of Resolution (F(2, 102) = 27.671, p < 0.001) and Widget (F(1, 102) = 72.308, p < 0.001) on the pursuit error. We also found a significant interaction between Resolution and Widget (F(2, 102) = 18.674, p < 0.001).
For the single dial task, comparisons using paired t-tests with Bonferroni corrections revealed significant differences between the low-res and GUI, hi-res and GUI, and between low-res and hi-res prototypes (all p < 0.001). The low-res and hi-res prototypes' dials led to 4.8 % and 4.0 % of pursuit error, respectively, whereas the GUI dial led to 7.9 % of the error. For the single slider task, the same comparison method revealed significant differences between the low-res and GUI (p < 0.01), and between hi-res and GUI, and low-res and hi-res prototypes (both p < 0.001). The low-res prototype's slider led to 12.6 % of pursuit error; whereas the hi-res slider led to 6.2 %, and the GUI slider to 9.5 %.
Two-widget tasks
A two-way ANOVA shows a significant main effect of Resolution (F(2, 153) = 85.954, p < 0.001) and Widget (F(2, 153) = 26.270, p < 0.001) on the pursuit error. We also found a significant interaction of Resolution and Widget (F(4, 153) = 14.716, p < 0.001).
For the dual dial task, comparisons using paired t-tests with Bonferroni corrections revealed significant differences between the low-res and GUI, hi-res and GUI, and between low-res and hi-res prototypes (all p < 0.001). The low-res and hi-res prototypes' dual dials led to 8.2 % and 6.5 % of the pursuit error, respectively, whereas the GUI dual dial led to 14.2 % of the error. For the dual slider task, the same comparison method revealed significant differences between the low-res and GUI (p < 0.01), and between hi-res and GUI, and low-res and hi-res prototypes (both p < 0.001). The low-res dual slider prototype led to 16.8 % of the pursuit error; whereas the hi-res sliders led to 8.5 %, and the GUI sliders to 13.2 %. For the dial+slider task, the same comparison method revealed significant differences between the hi-res and GUI, and between the low-res and hi-res prototypes (both p < 0.001). However, no significant difference was found between the low-res and GUI prototypes. The low-res and GUI prototypes' dial+slider controls led to 15.6 % and 15.3 % of the pursuit error, respectively, whereas the hi-res dial+slider led to 8.5 % of the error.
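The analysis pipeline used for both the single- and two-widget data (a two-way ANOVA, then Bonferroni-corrected paired t-tests) is simple to reproduce. Below is a sketch using pandas, statsmodels and scipy, assuming a long-format table with one aggregated error score per participant, resolution and widget, and filtered to a single widget task before the pairwise tests; this mirrors, but is not, our actual analysis script.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

def analyse(df):
    """df columns: participant, resolution, widget, error (long format)."""
    model = smf.ols("error ~ C(resolution) * C(widget)", data=df).fit()
    print(anova_lm(model, typ=2))                 # main effects + interaction

    # Pairwise paired t-tests between resolutions (run per widget task),
    # with a simple Bonferroni correction for the three comparisons.
    pairs = [("low-res", "GUI"), ("hi-res", "GUI"), ("low-res", "hi-res")]
    for a, b in pairs:
        ea = df[df.resolution == a].sort_values("participant").error.values
        eb = df[df.resolution == b].sort_values("participant").error.values
        t, p = stats.ttest_rel(ea, eb)
        print(f"{a} vs {b}: t = {t:.2f}, corrected p = {min(1.0, p * len(pairs)):.4f}")
```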
Reacquiring controls after a change of focus
After switching targets, there is naturally a period of time at the beginning of each task where the participant needs to reacquire the control, due to it moving to a different position on the device and display -a novelty of our experiment. Overall, the GUI and low-res sliders take the most time for users to catch up with the target, causing an impact on the respective mean pursuit error. With one widget, it took 4.7 s on average to reacquire a low-res or hi-res dial, or a hi-res slider, whereas it took 5.9 s on average with a low-res slider or either of the GUI widgets. With two widgets, it took 5.9 s on average to reacquire a low-res or hi-res dial, or a hi-res slider, whereas it took 6.9 s on average with a low-res slider or either of the GUI widgets.
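One plausible operationalisation of this reacquisition time, assuming per-frame logs and taking the first re-entry into the target region after a switch as the criterion (the paper does not state its exact definition):

```python
def reacquisition_time(frames, switch_time, target_half_width):
    """frames: list of (time_s, cursor_pos, target_pos), sorted by time.

    Returns seconds from the screen switch until the cursor first re-enters
    the target region, or None if it never does."""
    for t, cursor, target in frames:
        if t >= switch_time and abs(cursor - target) <= target_half_width:
            return t - switch_time
    return None
```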
Glance rate
One of the metrics we feel is vital to the use of mobile devices for eyes-free interaction is the visual attention required to use the controller. To measure this, we systematically analysed the video footage from each participant's tasks, annotating every time their gaze switched from one of the projected screens to the controlling device (see Fig. 9 (top, N1 and N2)). We also recorded the time spent looking down, as shown in Fig. 9 (bottom, T1 and T2). Although the overall time spent looking down is an interesting metric, we chose in our analysis to focus primarily on the number of times the user glanced down. As participants tend to look down to reacquire a control, we believe this provides a more accurate measure of how often the user loses control of a particular widget (particularly important for deformable devices), as opposed to how long it takes to reacquire it.
Single widget task
A two-way ANOVA on the glance data shows a significant main effect of Resolution (F(2, 102) = 106, p < 0.0001), indicating that the hi-res prototype requires the least amount of visual attention, while the GUI requires the most. The main effect of the type of Widget was also significant on the glance rate (F(1, 102) = 8.34, p < 0.05), as was the interaction of Widget and Resolution (F(2, 102) = 4.7, p < 0.05). Paired t-tests with Bonferroni corrections found no significant differences between dials and sliders on the hi-res or GUI prototypes. A significant difference was found between sliders and dials on the low-res prototype (p < 0.0001, t = 6.29, df = 17), which shows that sliders on the low-res prototype require more visual attention than dials when performing a single widget task.
Dual widget task
A two-way ANOVA showed significant results for the main effects of Resolution (F(2, 153) = 383, p < 0.0001) and Widgets (F(2, 153) = 4.8, p = 0.01) on visual attention. Furthermore, the interaction between Resolution and Widgets was also significant (F(4, 153) = 8.16, p < 0.0001). This shows that, as with the single widget Complexity, the hi-res prototype requires the least visual attention, followed by the low-res prototype, and finally the GUI prototype. Further post-hoc tests indicated significant differences between sliders, dials and slider+dial, indicating that, on the whole, the slider+dial task required greater visual attention than other dual-widget tasks. For the low-res prototype, dual dials required less visual attention than dual sliders; conversely, dual sliders required less visual attention than dual dials on the GUI prototype.
Subjective results and observations
The ratings given for the ease of use of each prototype (1-10; 10 easiest) resulted in average scores of 8.8, 4.8 and 3.4 for the hi-res, low-res and GUI prototypes respectively. A Friedman test of these results shows the difference to be statistically significant (p < 0.0001, df = 2). These results confirm that participants found the hi-res prototype the easiest to use, the touchscreen GUI the most difficult, and the low-res sensel-based approach somewhere between the two. These results coincide with participants' opinions around tangible versus touch-screen controls. Many comments we recorded from participants after the trials discussed the benefit of tangible control: "It's more precise. I find doing games on my iPad difficult -I'd much prefer using something tactile to have the feedback"; "It's more responsive -if you're touching it, you know [ . . . ] if you are using the touchscreen you have to look, but if something is protruding you can feel for it"; and, "In gaming situations it's more satisfying. I've played games on a touchscreen and it's really not the same as having a controller because you need to be spatially aware of where they are [ . . . ] as opposed to having something you can physically manipulate. It would give a higher sense of control than just a flat surface."
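The reported Friedman test can be reproduced in a couple of lines; a sketch with one per-participant rating column per prototype (the values below are placeholders, not the study data):

```python
from scipy.stats import friedmanchisquare

# One ease-of-use rating (1-10) per participant for each prototype (placeholders).
hi_res  = [9, 8, 10, 9, 8, 9, 9, 8, 10, 9, 8, 9, 9, 8, 9, 10, 9, 8]
low_res = [5, 4, 6, 5, 4, 5, 5, 4, 6, 5, 4, 5, 5, 4, 5, 6, 5, 4]
gui     = [3, 4, 3, 4, 3, 3, 4, 3, 3, 4, 3, 4, 3, 3, 4, 3, 4, 3]

stat, p = friedmanchisquare(hi_res, low_res, gui)   # df = k - 1 = 2
print(f"chi2 = {stat:.2f}, p = {p:.4g}")
```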
In addition to their perceived ease of use, we also asked participants to rank the interfaces in order for how much visual attention they felt each one required to use. One participant thought the low-res design required the most visual attention. The remaining participants (17 of 18) ranked them as: GUI (most visual attention), low-res, hi-res (least visual attention).
As part of our post-study analysis, we studied the video footage from each of our participants' tasks to determine any interesting or unusual behaviours. One discovery we made during this analysis was that even though all participants were instructed to keep the pursuit error as small as possible at all times, two distinct behaviours were apparent. Some users clearly chased the blue target region when it darted away. Other participants, however, simply waited for the blue target to come closer to their cursor before they began following it with the physical controls. How this differing behaviour affected the accuracy of each participant is not clear. Controlling or correcting this subjective accuracy requirement a posteriori is not straightforward, and needs further research. As a comparison, controlling or correcting the subjective speed-accuracy tradeoff for Fitts' law pointing tasks is a research area in itself [START_REF] Zhai | Speed-accuracy tradeoff in Fitts' law tasks: on the equivalency of actual and nominal pointing precision[END_REF].
While analysing the video footage, we classified the interaction methods participants used for each of the prototypes. From this, we hoped to gain some insight into how our new emergeable methods of interaction were approached by users. As expected, users had little difficulty using the well-known interaction methods of GUI and hi-res controls. In these cases, all participants used the same action to operate the widgets: a thumb-index grip on the hi-res widgets and a one-digit touch on the GUI. Similar interaction was seen during the low-res dial tasks. We attributed this behaviour to the experience users already have in operating touch-screens and physical widgets.
The low-res slider, however, was a control that none of our participants had any prior experience of manipulating. From analysis of the video data, we observed that participants had several ways of interacting with this new control. Specifically, we identified three interaction styles: pushing, sliding and gripping. Sixteen participants pushed the sensels using one, two or three fingers; seven slid their fingers on top of the sensels to interact with them, and just one participant tried to grip each sensel to operate it. Participants used different strategies to control the slider -some used just one finger (thumb, index, middle or ring); others used two or three fingers at the same time, while some mixed the number of fingers used to control the sensels for each interaction style. This gives an insight into how users might interact with sensels; and, into how the low-res prototype could be redesigned to better facilitate the slider interaction.
Overall, despite the difficulties some users had in using the low-res slider, the majority of participants saw the potential in the emergeable concept. Comments made included: "[It's] easier to use a tangible object but if it could go back into the screen you keep the flexibility of having the flat screen device"; and, "dynamic changing into anything would be very interesting especially for things you need more precision on."
Sixteen participants stated that they would use emergeables in the future if they became available in a commercial product, citing the ease of tangible controls as a major factor. For example, "Yes, I'm likely to use touchscreens more if they have buttons. I'm using this old [featurephone] for a reason"; and, "Yes, I've grown up with normal buttons, I now struggle with touchscreens -I'd give it a go to see what it feels like."
DISCUSSION
This paper has introduced the concept of Emergeables, exploring and analysing the design space around continuous, eyes-free controls for tangible mobile interaction. Our vision for this research is to combine the flexible, mobile versatility of touchscreen displays with the affordances and precision of tangible controls. In doing this we hope to facilitate rich interactions with continuous parameters for situations such as safety-critical, remote, or even game-play scenarios. In this section, we present the main insights from our evaluation, discuss the levels of resolution necessary for future emergeable controls, and suggest how higher-resolution versions of our prototypes may be achieved.
Turning first to the accuracy of each approach: the results of our experiment show that when controlling eyes-free, continuous tasks, the high-resolution emergeable user interface was the most accurate. Overall, this prototype was found to be almost twice as accurate as the GUI, with an even greater advantage when controlling two parameters simultaneously. This result illustrates the strong potential of high-resolution emergeable controls, showing their improvement over GUI displays -the current interaction style of state-of-the-art mobile devices. Our next step, then, is to focus on the accuracy of the low-resolution emergeable design. In the case of a single dial, the low-resolution prototype can provide almost the same level of accuracy as the high-resolution prototype -a result we anticipated based on the current similarities between the two designs. The GUI dial, however, is almost twice as inaccurate as our tangible prototypes. This is a promising outcome for our sensel-based approach, but clearly more work can be done to refine our prototype and improve the accuracy of its sliders. Indeed, in the case of a single slider, while the hi-res emergeable provided a gain in accuracy over the GUI, the low-res slider performed worse. The slider in our prototype was created using four sensels, each of a size comparable to previous work (e.g., [START_REF] Follmer | Inform: dynamic physical affordances and constraints through shape and object actuation[END_REF][START_REF] Hardy | Shapeclip: towards rapid prototyping with shape-changing displays for designers[END_REF]). This experiment suggests that this resolution is not yet sufficient to provide accuracy comparable to either high-resolution tangibles or GUI touchscreens.
For both single and dual dial tasks, the low-res prototype offers almost the same level of accuracy as the hi-res prototype. In the case of two sliders, as with a single slider, the low-res prototype did not provide better accuracy to users. This makes the need for future improvements in emergeable technology (for example, in the size of the moveable sensels) even more important for complex tasks. In the case of slider+dial, the accuracy benefit of the low-res dial was able to compensate for the loss in accuracy of the low-res slider.
Beyond performance, users' safety can be at stake in situations where visual attention is critical -for example, controlling a car stereo while driving. The results of our video analysis show that emergeables require significantly less visual attention than the GUI approach. Since the pursuit tasks in the study required as much of the users' focused attention as possible, we can deduce from this that the best interface for such activities is the high-resolution approach -requiring around 74 % less visual attention than the GUI on the single widget task and 78 % less on the dual widget task. Even the low-resolution, sensel-based emergeable prototype performed better than the touchscreen for the amount of visual attention required -requiring around 57 % less visual attention than the touchscreen on the single widget task, and 61 % on the dual widget task. As a consequence, emergeables are a promising direction for mobile user safety, and, indeed, other scenarios where eyes-free interaction would be beneficial.
In terms of specific widgets, when using a single widget there was little difference between the loss of control for high-resolution sliders and dials -that is, there was no significant difference in the visual attention demanded by them. However, when controlling two widgets at once, it required more visual attention to control one of each type of high-resolution widget (i.e., one slider and one dial) than two of the same (i.e., two sliders or two dials).
In general, sliders were less accurate and required more visual attention than dials. We anticipated that the sliders on the low-res prototype would be harder to use than its dials: not only are they an entirely new way of interacting that participants were not used to, but they are also an early prototype design with interaction limitations. Although participants were able to control the sliders to a certain extent, this effect was seen in the accuracy results (low-res sliders were less accurate than low-res dials) and in the glance rate results (low-res sliders required more attention than low-res dials), but only partially in the subjective scoring (low-res controls were rated higher overall than GUI controls). Our prediction is that as sensel-based emergeables increase in resolution and users gain more exposure to this type of interaction, the gap in accuracy and loss of control between the high- and low-resolution approaches will narrow.
When we consider subjective preference, users preferred using the emergeables rather than the touchscreen GUI. Results from the pre-study questionnaire revealed that 73 % of participants preferred tangibles over touchscreens, especially for dials (88 %). After participation in the study, 100 % of participants found the high-resolution emergeable easier to use than the GUI, and 72 % found the low-resolution prototype easier to use than the GUI. They also rated the touchscreen approach significantly lower on average than both the emergeable alternatives (3.4/10 for the GUI, versus 4.8/10 and 8.8/10 for low-res and hi-res prototypes).
In summary, this evidence suggests that emergeables are easier to use, require less visual attention, are largely preferred by users, and are more often more accurate than the GUI alternative. Our results suggest that the high-resolution emergeable is the optimum prototype for controlling continuous parameters -an encouraging result which we believe justifies the continuation of this work. In addition to this, however, we have also identified the sensel approach as a promising candidate for further development.
While we strive for a fully-emergeable, high-resolution surface, we understand that this will not simply happen overnight, and are aware that there will be many iterations and refinements along the way. The sensel-based approach, which is currently at a relatively low-resolution (28 sensels) state, was our first step in this process. With additional work, however, this prototype can be miniaturised and increased in resolution, thus improving the usability and accuracy of its controls. We are pleased that even in this state our sensel-based approach performed well compared to its touchscreen counterpart, proving to be easier to use, more accurate for the use of dials and more preferred by users.
Our future work in this area will focus initially on increasing the resolution and decreasing the size of the sensel-based display. The current prototype uses stepper motors harvested from laptop DVD drives to facilitate the Z-axis motion of the sensels. The size of these motors, coupled with the additional wiring required for the joystick and rotation controllers, has resulted in the display being relatively large and adjacent sensels having small gaps between them. The next-generation prototype could include smaller actuation motors or even nano-level Janus particles to create smaller sensels and allow a higher-resolution display in the same physical space.
Based on the results of our experiment, further study into how best to improve the sensel-based slider interaction is now on our research agenda. Before conducting trials on a new prototype-that will likely have smaller sensels that can be placed closer together-we would like to test slider controls on our current design using additional adjacent sensels (e.g., up to seven, as opposed to the four used in the study). This will allow us to investigate any differences in task outcomes when using different numbers of sensels in each widget. After this work has been carried out, our next step will be to create dials made up of multiple sensels, allowing rotational controls of different sizes to be created anywhere on the display.
CONCLUSION
In this paper we have presented emergeable surfaces for eyes-free control of continuous widgets. We have explored the design space around the area by building two prototype devices to test the viability of tangible, continuous controls that 'morph' out of a flat screen. Our first prototype-a high-resolution deformable UI-uses static dials and sliders that rotate on blocks to 'change' the device's shape. This design gave us insight into the use of fully working widgets, but as a trade-off only allowed them to be placed in four specific locations on the display. Our second prototype-a lower-resolution sensel-based approach-is an initial demonstration implementation providing dials and sliders that can be placed anywhere on its display, and which could be refined over time to become smaller and of higher resolution.
Our results show the value and benefits of emergeable, highresolution, tangible controls in terms of accuracy, visual attention and user preference. While clearly still in its early stages, we have also shown the potential of our low-resolution sensel-based approach, which we hope will be the start of a series of future iterations of a fully-emergeable, dynamic interactive control surface.
Figure 2. Far left: the design of the low-resolution emergeable. Each circular 'sensel' of the device can be pushed vertically as a button, rotated as a dial, and tilted vertically or horizontally to form part of a slider. Centre left: the low-resolution emergeable prototype, with projection to highlight a raised slider (top) and dial (bottom). See Fig. 4 for an example of using the prototype. Centre right: the high-resolution emergeable prototype. Far right: the design of the high-resolution emergeable. The box contains four rotatable subsections, each capable of flipping between dial, slider or flat surface.
Figure 3. The core design space for emergeables describes devices that allow for both translation and rotation in three axes. Our low-resolution prototype fully supports translation, and partially supports rotation (Z-axis), while similar prior work (e.g., ShapeClip [START_REF] Hardy | Shapeclip: towards rapid prototyping with shape-changing displays for designers[END_REF]) has supported only Z-axis translation. Our high-resolution prototype, in contrast, supports X-axis translation and Z-axis rotation.
Figure 4. Slider interaction with the low-resolution prototype. First, a single sensel emerges at the slider thumb's current position (image A). The user can then tilt this and each adjacent sensel in succession to simulate movement along the slider's path (images B-H).
Figure 5. The graphical interface used for comparison in the study. A widget is shown in each of the four positions used (the leftmost is a slider and the rightmost is a dial). Note that at most two of these positions were used at any one time in the study.
Figure 6. Experimental setting. Participants were positioned between two projected screens, and used each of the prototypes in turn to perform a pursuit task with dials and sliders (see Fig. 7). The task switched between the two screens every 15 s, and participants performed the task for 60 s at a time. When using a single control, participants stood; for two controls participants were seated.
Figure 8. Mean pursuit error as a percentage of control range. Error bars show 95 % confidence intervals.
Figure 9. Glance rates. Top: mean number of times participants' gaze was averted from the projected screen. Bottom: the mean time participants spent looking at the prototype (rather than the display) per trial.
1 See: florianborn.com/projects/modulares_interface
2 See: tactustechnology.com and getphorm.com.
3 See: pinscreens.net (also known as Pinpression).
4 See: ACM Digital Library resources or goo.gl/sPKtyu.
ACKNOWLEDGMENTS
We gratefully acknowledge the support of ANR (ANR-11-EQPX-0002, ANR-11-LABX-0025-01, ANR-14-CE24-0013 and ANR-15-CE23-0011-01), EPSRC (EP/M00421X/1 and EP/N013948/1), CAPES Foundation and fabMSTIC. | 62,222 | [
"739671"
] | [
"36731",
"1041973",
"63008",
"36731",
"1041973",
"36731",
"1041964",
"322817"
] |
00149265 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2005 | https://hal.science/hal-00149265/file/BoutayebAPS2005.pdf | H Boutayeb
T A Denidni
A Sebak
L Talbi
Metallic EBG Structures for Directive Antennas using Rectangular, Cylindrical and Elliptical Shapes
This paper presents different designs of directive antennas using Electromagnetic Band Gap (EBG) structures. The EBGs consist of periodic structures of metallic wires. In this study, periodic structures in cartesian, cylindrical and elliptical coordinates are considered. Experimental results of antennas using these different geometries and a monopole as an excitation source are presented, and their performances are compared.
Introduction
Electromagnetic Band Gap (EBG) structures are periodic structures, which offer passbands and stopbands to electromagnetic waves [START_REF] Joannopoulos | Photonic crystals: molding the flow of light[END_REF]. Recently, different techniques have been proposed for designing high-gain antennas with a single feed by using the properties of EBG structures [START_REF] Cheypre | An Electromagnetic Bandgap Resonator Antenna[END_REF][START_REF] Boutayeb | Design of a Directive and Matched Antenna with a planar EBG structure[END_REF][START_REF] Palikaras | Cylindrical Electromagnetic bandgap structures for direcive Base Station Antennas[END_REF][START_REF] Boutayeb | Analysis of radius-periodic cylindrical structures[END_REF][START_REF] Boutayeb | A controllable conformal Electromagnetic Band Gap Antenna for base station[END_REF][START_REF] Boutayeb | Design of Elliptical Electromagnetic Bandgap Structures for Directive Antennas[END_REF]. Rectangular [START_REF] Cheypre | An Electromagnetic Bandgap Resonator Antenna[END_REF][START_REF] Boutayeb | Design of a Directive and Matched Antenna with a planar EBG structure[END_REF], cylindrical [START_REF] Palikaras | Cylindrical Electromagnetic bandgap structures for direcive Base Station Antennas[END_REF][START_REF] Boutayeb | Analysis of radius-periodic cylindrical structures[END_REF][START_REF] Boutayeb | A controllable conformal Electromagnetic Band Gap Antenna for base station[END_REF] and elliptical structures [START_REF] Boutayeb | Design of Elliptical Electromagnetic Bandgap Structures for Directive Antennas[END_REF] have been proposed. In [START_REF] Cheypre | An Electromagnetic Bandgap Resonator Antenna[END_REF], a method based on the Fourier transform has been proposed to obtain the properties of the EBG structures in the angular domain and then to predict the radiation patterns of the EBG antenna. In [START_REF] Boutayeb | Design of a Directive and Matched Antenna with a planar EBG structure[END_REF], frequency and pattern responses of the EBG structures excited by electromagnetic waves from the interior of these structures have been used to analyze the antenna performances. Using a cylindrical EBG structure, an antenna with high directivity in the elevation plane and a wide horizontal beam has also been presented [START_REF] Palikaras | Cylindrical Electromagnetic bandgap structures for direcive Base Station Antennas[END_REF]. In [START_REF] Boutayeb | Analysis of radius-periodic cylindrical structures[END_REF], a method has been proposed for designing EBG structures composed of multiple layers of cylindrical periodic surfaces, and in [START_REF] Boutayeb | A controllable conformal Electromagnetic Band Gap Antenna for base station[END_REF], these structures have been applied to base station antennas. In addition, a design of an EBG antenna with elliptical geometry has also been proposed in [START_REF] Boutayeb | Design of Elliptical Electromagnetic Bandgap Structures for Directive Antennas[END_REF]. In this paper, experimental results of directive antennas with three different configurations of EBG structures composed of metallic wires are presented. Structures with rectangular, cylindrical and elliptical shapes are considered. The impedance and directivity performances of these three antennas are compared.
Rectangular Structure
Figure 1(a) presents the geometry of a rectangular EBG antenna. The EBG structure is composed of a 2-D periodic structure of metallic wires, excited at its center by a monopole. The structure is used at its first resonant frequency. The antenna is designed using the method presented in [START_REF] Boutayeb | Design of a Directive and Matched Antenna with a planar EBG structure[END_REF]. Note that metallic reflectors are added to focus the radiation to one side. As an example, Figure 1(b) presents the measured and simulated radiation pattern in the H-plane (xoy plane) at 2.63 GHz. The half-power beamwidth is 15.3° in the H-plane. The measured directive gains are between 19 dBi (at 2.61 GHz) and 20.2 dBi (at 2.65 GHz). In addition, an impedance bandwidth (S11 < -10 dB) from 2.61 GHz to 2.65 GHz is achieved, which represents a fractional bandwidth of 1.5 %.
Cylindrical Structure
Figure 2(a) presents the geometry of the Cylindrical EBG (CEBG) antenna. The CEBG structure has been designed using the method presented in [START_REF] Boutayeb | A controllable conformal Electromagnetic Band Gap Antenna for base station[END_REF][START_REF] Boutayeb | Design of Elliptical Electromagnetic Bandgap Structures for Directive Antennas[END_REF]. The radius of the first cylindrical layer and the period between layers are equal to 45 mm. The wire length and diameter are 200 mm and 1.5 mm, respectively. The monopole length is 30 mm. Defects are applied to the structure to allow directive radiation in the direction of the defects, at the stopband of the CEBG. The defects consist of removing wires: 3 wires are removed from the first layer, 5 from the second, 7 from the third, and so on. As an example, Figure 2(b) presents the measured and simulated radiation pattern in the H-plane at 2.17 GHz. A bandwidth (S11 < -10 dB) from 1.74 GHz to 2.31 GHz (a fractional bandwidth of 28 %) is achieved. The half-power beamwidths in the H-plane are 47.8° and 37.9° at 1.77 GHz and 2.17 GHz, respectively. The measured gains are 12.2 dBi and 15.8 dBi at 1.77 GHz and 2.17 GHz, respectively.
Elliptical Structure
Figure 3(a) presents the geometry of an Elliptical EBG (EEBG) antenna [START_REF] Boutayeb | Design of Elliptical Electromagnetic Bandgap Structures for Directive Antennas[END_REF]. This Elliptical EBG is designed by conserving the elliptical period between wires. The semi-major and semi-minor axes of the first ellipse are equal to 64 mm and 32 mm, respectively, and they correspond to the radial periods along the two axes. The positions of the different wires have been calculated numerically. The wire length and diameter are 270 mm and 1.8 mm, respectively. The monopole length is 32 mm. A bandwidth (S11 < -10 dB) from 1.76 GHz to 2.29 GHz (a fractional bandwidth of 26 %) is achieved. The half-power beamwidths in the H-plane are 48° and 37.7° at 1.77 GHz and 2.17 GHz, respectively. As an example, Figure 3(b) presents the measured and simulated radiation pattern in the H-plane at 2.17 GHz. The measured gains are 11.6 dBi and 12.6 dBi at 1.77 GHz and 2.17 GHz, respectively.
Comparison
The antenna aperture taper efficiency is an important tool for evaluating the performance of directive antennas, and it is calculated using the equation [START_REF] Stutzman | Antenna Theory and Design[END_REF] :
e = Gain/(10 log(4πA/λ 2 )) ( 1
)
where A is the area occupied by the antenna in the zoy plane. Using this equation, Tab. The gain and efficiency for rectangular, cylindrical and elliptical structures are measured at 2.65 GHz, 2.17 GHz and 2.17 GHz, respectively. From these results, it can be seen that the cylindrical EBG antennas present the same efficiency as the rectangular ones but with a greater bandwidth. The elliptical EBG antennas present the lowest efficiency. Elliptical and cylindrical EBG antennas have similar bandwidth.
Conclusion
Experimental results of directive antennas using EBG structures of metallic wires have been presented. Periodic structures with rectangular, cylindrical and elliptical shapes have been studied, and their performances in terms of bandwidth and directivity have been presented and compared.
Figure 1 :
1 Figure 1: (a) Geometry of a rectangular EBG antenna (dimensions in mm) (b) Simulated and measured radiation patterns in the H-plane at 2.63GHz.
Figure 2 :
2 Figure 2: (a) Geometry of a CEBG structure with defects (b) Simulated and measured radiation patterns in the H-plane at 2.17GHz.
Figure 3 :
3 Figure3(a) presents the geometry of an Elliptical EBG (EEBG) antenna[START_REF] Boutayeb | Design of Elliptical Electromagnetic Bandgap Structures for Directive Antennas[END_REF]. This Elliptical EBG is designed by conserving the elliptical period between wires. The semi-major and semi-minor axis of the first ellipse are equal to 64 mm and 32 mm, respectively, and they correspond to the radial periods in the two axis. The position of the different wires have been calculated numerically. The wires length and diameter are 270 mm and 1.8 mm, respectively. The monopole length is 32 mm. A bandwidth (S 11 < -10 dB) from 1.76 GHz to 2.29 GHz (a fractional bandwidth of 26%) is achieved. The half-power beam widths in the H-plane are 48 • and 37.7 • at 1.77 GHz and 2.17 GHz, respectively. As an example, Figure3(b) presents the measured and simulated radiation pattern in the H-plane at 2.17 GHz. The measured gains are 11.6 dBi and 12.6 dBi at 1.77 GHz and 2.17 GHz, respectively
1 presents the performances of the different EBG antennas in terms of bandwidth, gain and efficiency.
Shape Bandwidth (%) Gain (dBi) Area (mm 2 ) Efficiency (%)
rectangular 1.5 20.2 540*260 94.4
cylindrical 28 15.8 360*200 94.32
elliptical 26 12.6 512*270 64.3
Tab. 1: Performances of the different EBG antennas. | 9,730 | [
"838442"
] | [
"214739",
"214739",
"355894",
"434321"
] |
00149266 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2006 | https://hal.science/hal-00149266/file/HalimAPS20061.pdf | Halim Boutayeb
Tayeb
Tayeb A Denidni
Band Structure of Crystals with Periodically Loaded Metallic Wires
Introduction
The propagation of electromagnetic waves in periodic structures has received recently an important interest [START_REF] Sakoda | Optical Properties of Photonic Crystal[END_REF]. Potential applications have been suggested in microwave and antenna domains, such as suppressing surface waves [START_REF] Yang | Microstrip antennas integrated with electromagnetic bandgap (EBG) structures: a low mutual coupling design for array applications[END_REF], designing directive antennas [START_REF] Thevenot | Directive Photonic Band-Gap Antennas[END_REF], or creating controllable microwave components [START_REF] Lourtioz | Toward controllable photonic crystals for centimeter and millimeter wave devices[END_REF]. The propagation of waves in periodic structures is described by means of a band theory. Different methods have been proposed for computing the band structure of periodic structures, e.g., the average field method [START_REF] Simovski | The average field approach for obtaining the band structure of crystals with conducting wire inclusions[END_REF], the order-N method [START_REF] Chan | Order-N method spectral method for electromagnetic waves[END_REF], and the hybrid plane-wave-integral-equation method [START_REF] Silveirinha | A Hybrid method for the efficient calculation of the band structure of 3D-metallic crystal[END_REF]. A particular interest has been given to the dispersion characteristics of periodic structure formed by infinitely long metallic wires [START_REF] Sakoda | Optical Properties of Photonic Crystal[END_REF][START_REF] Yang | Microstrip antennas integrated with electromagnetic bandgap (EBG) structures: a low mutual coupling design for array applications[END_REF][START_REF] Thevenot | Directive Photonic Band-Gap Antennas[END_REF][START_REF] Lourtioz | Toward controllable photonic crystals for centimeter and millimeter wave devices[END_REF][START_REF] Chang | An active square loop frequency selective surfaces[END_REF][START_REF] Simovski | The average field approach for obtaining the band structure of crystals with conducting wire inclusions[END_REF][START_REF] Chan | Order-N method spectral method for electromagnetic waves[END_REF][START_REF] Silveirinha | A Hybrid method for the efficient calculation of the band structure of 3D-metallic crystal[END_REF][START_REF] Tretyakov | Analytical modeling in applied electromagnetics[END_REF][START_REF] Moses | Electromagnetic wave propagation in the wire medium: a complex medium with long thin inclusions[END_REF]. The band structure of periodic materials with loaded wires has not been studied enough. These materials are interesting for designing reconfigurable microwave components. The band structure of the discontinuous wire medium for different wire diameters and lengths has been studied in [START_REF] Boutayeb | Band structure analysis of crystals with discontinuous metallic wires[END_REF] in order to design controllable crystals. However, the effects of the active element have not been taken into account. In [START_REF] Belov | Two-dimensional electromagnetic crystals formed by reactively loaded wires[END_REF], an analysis of the dispersion of crystals with loaded wires has been presented. However, in open literature, there is no parametrical study for showing the effect of the value of the active elements. In this contribution, numerical results are presented for the pass-bands and stop-bands of these 3-D periodic structures, at normal incidence. 
To compute the propagation constant, a transmission line model is used, where a 2-D periodic structure (grid) of discontinuous wires is modelled by a T-circuit. The T-circuit parameters are written in terms of the S-parameters of the grid, computed rigourously using the FDTD method.
Computation of the propagation constant
An infinite 3-D periodic structure of perfect metallic wires shown in Fig. 1 is considered. Its parameters are the periods P x , P y and P z , the wire diameter a and the width w. The propagation of the transverse electric field in x-direction is considered. To compute the propagation constant β x , the transmission line model is used, where a 2-D periodic structure in y-direction (see Fig. 1) is modelled by a T-circuit [START_REF] Boutayeb | Band structure analysis of crystals with discontinuous metallic wires[END_REF]. The T-circuit parameters are written in terms of the S-parameters of the grid, computed rigourously using the FDTD method, where Floquet boundaries conditions and a thin mesh (△ = P eriod/80) are used. Only the fundamental mode is considered, then the limitations P y ≤ P x , P x ≤ λ and P z ≤ λ are used. In a first approximation, an electronic switch can be simulated by an equivalent circuit including R-L-C elements. The inductive term L, which essentially represent the connection wires to the device, can be considered included in the metallic wire, then active device can be represented only by an R-C circuit [START_REF] Lourtioz | Toward controllable photonic crystals for centimeter and millimeter wave devices[END_REF]. For a parallel or a series combination of a capacitor, C, a resistor, R, and an inductance, L, we integrated a model in our FDTD code, based on the scheme introduced by Piket-May et al [START_REF] Piket-May | FD-TD modeling of digital signal propagation in 3-D circuits with passive and active loads[END_REF]. The R-C circuits are periodically distributed along the wires, which form the 2-D photonic lattice.
Results and discussion
The dual behavior in the pass-band and stop-band of the on-state and off-state structures is nearly obtained in the two first bands [START_REF] Boutayeb | Band structure analysis of crystals with discontinuous metallic wires[END_REF]. The limits of the two first bands of these structures are now studied for different wire diameters. We consider P x = P y = P z = P . The R-C elements are chosen in agreement with characterization results obtained on high-speed commercial devices [START_REF] Lourtioz | Toward controllable photonic crystals for centimeter and millimeter wave devices[END_REF]. Based on practical considerations, we consider, for the on-state, R = 10Ω; for off-state, we consider R = 30kΩ. Three capacitance values are chosen: C = 150 f F , 30 f F and 13 f F . Fig. 2 presents the limits of the two first bands for the on-state case, and for the continuous-wire structure, versus the fill factor a/P . From this figure, it can be seen that the active element has less influence when the wire diameter is small. This is due to the fact that, for large diameter wires, the contrast between the thickness of the wire and the thickness of the active element is more important. Fig. 3 presents the limits of the two first bands for the off-state case, for the continuous-wire structure, and for the discontinuous-wire structure, versus the fill factor a/P . According to this figure, compared to the discontinuous wire case, the active element have effect on small diameter wires and has no influence on large diameter wires. In addition, it can be also observed that for small diameter wires, the increase of the capacitance has the same effect that the increase of the width between wires for the discontinuous-wire case [START_REF] Boutayeb | Band structure analysis of crystals with discontinuous metallic wires[END_REF].
Conclusion
In this paper, the band structure for normal propagation of crystal formed by periodically loaded metallic wires has been analyzed for different wire diameters and for different values of the load, which are assimilated as diodes. The diodes have been simulated by an equivalent R-C circuit, which has been chosen in agreement with characterization results obtained with high-speed commercial devices. The influences of the values of the R-C elements on on-state and off-state have been analyzed and the results have been compared to the previous results presented for continuous and
Figure 1 :
1 Figure 1: Infinite 3-D periodic structure of loaded metallic wires in air and equivalent RLC circuits for numerical simulations.
Figure 2 :
2 Figure 2: Two first bands limits for structures with continuous wires an for wires periodically loaded with R = 10Ω wires versus fill factor a/P .
Figure 3 :
3 Figure 3: Two first bands limits for structures with continuous, discontinuous, and loaded wires versus fill factor a/P , with R = 30kΩ, for different values of C. | 8,544 | [
"838442"
] | [
"214739",
"214739"
] |
00149267 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2006 | https://hal.science/hal-00149267/file/EUCAP2006.pdf | Halim Boutayeb
email: [email protected]
Tayeb A Denidni
email: [email protected]
BANDWIDTH WIDENING TECHNIQUES FOR HIGH-GAIN ANTENNAS BASED ON PARTIALLY REFLECTING SURFACES
In this paper, the performance in terms of frequency bandwidth of Fabry-Perot based directive antennas is first evaluated theoretically. Then, different techniques are proposed to widen the directivity bandwidth of antennas based on Partially Reflecting Surfaces. The bandwidths obtained with the proposed solutions are compared to the bandwidth obtained with a classical Fabry-Perot cavity based directive antenna. It is well known that the bandwidth of a Fabry-Perot based directive antenna decreases significantly when the desired gain is high. In this section, a relation between the minimum half-power beamwidth and the bandwidth is derived. The structure shown in Fig. 1 is considered. It is composed of a Fabry-Perot type cavity composed of two Partially Reflecting Surfaces (PRS) made of metallic wires. An omni directional source is in the center of the cavity.
INTRODUCTION
High-gain and compact antennas with of a single feed present an attractive solution for several wireless communication systems. Their single-feed system allows to increase the gain with low complexity compared to feeding networks used in conventional antenna arrays. In addition, the compactness represents an important advantage compared to parabolic antennas. To design low profile high-gain antennas with a single feed, various methods have been proposed, such as the employment of Fabry-Perot type cavities [1][2][3][4], electromagnetic crystals or zero index metamaterials [5][6][7]. , where is the quality factor of the cavity. Fig. 3 shows
Q / Q
T versus angle at different frequencies. These radiation patterns exhibit directive beams at the normal directions ( θ and °= 0 °= θ 180 0 f ) for frequencies lower than the resonant frequency .
In Section 2, the directivity bandwidth of a simple Fabry-Perot type cavity based antenna is analysed theoretically. Then, in Section 3, a Fabry-Perot cavity with aperiodic PRSs is considered, whereas, in Section 4, a structure using two different PRSs is studied. In Section 5, the two techniques are combined. The obtained directivity bandwidth of the different configurations are compared with that of a simple Fabry-Perot based antenna.
The minimum half-power beamwidth is obtained for f<f 0 (Eq. ( 2) ) and for x = 1.5:
For frequencies greater than , lobs appear on each side of the normal axis.
is defined as the halfpower beamwidth of the main lobs at the normal directions (see Fig. 3).
0 f dB 3 θ ∆ T Q x 1 - - Q dB 2 4 min 3 ≈ θ ∆ (4)
Note that Eq. ( 4) is a little different from the formula in [2], where there is a minor error. 2) and ( 3)).
By using Eqs. ( 2) and ( 3), the half-power beamwidth
dB 3 θ ∆
is plotted in Fig. 4 as a function of the frequency, for the considered example. In Fig. 4, f 1 and f 2 are the frequencies for which
Q dB dB 2 2 2 min , 3 3 = θ ∆ = θ ∆
. The coefficients x 1 and
x 2 , corresponding to f 1 and f 2 , respectively, were calculated numerically from Eqs. ( 2) and (3):
x 1 ≈10.985, x 2 ≈1.175.
From this, and using the relation
Q x Q 1 x 1 1 - ≈
[2], we obtain : , it has been shown that the half-power beamwidth can be expressed as following :
Q Q f f f x 895 . 9 1 2 1 0 1 0 ≈ = - (
- ≈ Q f f 2 895 . 9 1 0 1 (6)
We have also
x f f dB 1 2 2 0 , 3 - ≈ θ ∆ ≤ (2) Q Q f f f x 175 . 0 1 2 2 0 0 2 ≈ = - (7)
and then
Q x f f f dB 1 1 2 0 , 3 + - ≈ θ ∆ + ≤ ≤ (3) - ≈ Q f f 2 175 . 0 1 0 2 (8)
Now, one can calculate the bandwidth between f 2 and f 1 : where
+ - + = + - = θ ∆ Q f f f f B (9)
By using Eq. ( 4), the following relation is obtained:
( ) ( ) 2 min 3 2 min 3 2 895 . 9 175 . 0 2 2 2 895 . 9 175 . 0 2 θ ∆ - + θ ∆ + = θ ∆ dB dB B (10) θ ∆
B is the frequency bandwidth where the half-power beamwidth is less than 2 times the minimum halfpower beamwidth. Using Eq. ( 10), the bandwidth is plotted versus the minimum half-power beamwidth in Fig. 5. This curve characterizes the performance of Fabry-Perot cavity based directive antennas. It can be seen that the beamwidth decreases drastically when the minimum half-power beamwidth is small. . For instance, we consider that the wires are length and diameter, the number of wires in each row is , the cavity is width, the period is , and the dipole is 65 length and diameter. The structure was simulated with a Finite Difference Time Domain (FDTD) code. For the considered example, the half-power beamwidth in Hplane, and E-plane, is plotted vs. frequency in Fig. 6. From this figure, the bandwidths are 20% and 23.8%, in the H-plane and E-plane respectively. These results agree with the results predicted by Eq. ( 10) and illustrated in Fig. 5.
USING AN APERIODIC PRS
A first method in order to increase the directivity bandwidth of the antenna consists on using a nonuniform PRS. If one use a PRS which presents a phase of the reflection coefficient increasing with incidence angle, the bandwidth will increase. Indeed, the resonant frequency at the angle θ , can be written [2] :
( )
) cos( 2 , 0 θ π θ ϕ = θ D c f r (11)
where c is the speed of light and is the phase of the reflection coefficient r. From Eq. ( 11), if increases significantly with the angle θ , the resonant frequencies for angle higher than 0° will be shifted to higher frequencies, and the frequency f
Table 1. Spacing between wires in the aperiodic PRS
We propose to change the period of the PRS, as illustrated in Fig. 6, in order to obtain a coefficient r ϕ increasing with incidence angle.
The example with the specifications indicated in Tab. 1, is considered. The dimensions of the antenna (length, larger and dipole dimensions) are the same than previously.
In Figs. 7 and 8, the half-power beamwidths are plotted in the H-plane and the E-plane respectively, for the structure with aperiodic PRSs and for the structure with periodic PRS. From these figures, one can see that the aperiodic PRS allows to decrease the half-power beamwidth after the resonant frequency (f 0 =3GHz). In the E-plane (Fig. 8), a widening of the bandwidth is also observed.
USING TWO DIFFERENT PRSs
Another method for increasing the directivity bandwidth consists on using two different PRSs, as illustrated in Fig. 9. The use of multiple cavities allows to modify the response of the structure (ie. T), and then the curve of the half-power beamwidth. In Fig. 9, the PRSs are spaced by the distance D=35mm, and the two periods are P 1 =20mm, P 2 =40mm. Figs. 10 and 11 present the simulated half-power beamwidth of the antenna in the H-plane and E-plane. A structure combining the two previous techniques, shown in Fig. 12, is now considered. The dimensions of the antenna are to the previous ones (56×56cm 2 , D=35mm). Figs. 13 and 14, present the half-power beamwidth in the H-plane and the E-plane.
From Figs 1 and 14, with the proposed structure, the bandwidths are 32% and 34%, in the H-plane and Eplane, respectively, which represent a widening of 60% and 42% by comparison with the bandwidths obtained with the simple Fabry-Perot cavity structure.
Structures with more Partially Reflecting Surfaces and using other variations in the distance between elements of the PRSs can be analysed to increase more the bandwidth of these type of antennas.
Figure 1 .
1 Figure 1. Principle of the Fabry-Perot cavity based directive antenna. An inconvenient of high-gain antennas based on Fabry-Perot cavities, Electromagnetic Band Gap (EBG) materials, or metamaterials is their narrow directivity bandwidth. This can represent a drawback of these antennas compared to parabolic antennas, which have a large directivity bandwidth.The frequency and angular response of the Fabry-Perot cavity to a plane wave excitation, which is in the center of the cavity, is called T, and it is obtained by summing all the transmitted rays [2]:
Figure
Figure 2. T versus frequency (at ), °= θ 0 Figure 4.
Figure 5. vs.(Eq. (10))θ ∆ B
is of interest to find a structure which presents the same minimum beamwidth but with larger bandwidth. It is clear that antennas based on electromagnetic crystals or zero index metamaterials[5- 7] are not good candidates. In the next sections we investigate two techniques to widen the bandwidth of a directive antenna based on Partially Reflecting Surfaces.
Figure 6 .
6 Figure 6. Directive antenna based on an aperiodic PRS.
Figure 7. , H-plane dB 3 θ ∆
Figure 9. Directive antenna based on two different PRSs
Figure 12. Directive antenna based on two aperiodic PRSs. [1] T. Akalin, J. Danglot, O. Vanbesien, and D. Lippens, "A highly directive dipole antenna embedded in a Fabry-Perot type cavity", IEEE Microwave Wireless Comp. Lett., vol. 12, pp. 48-50, Feb. 2002. [2] H. Boutayeb, K. Mahdjoubi, A.C. Tarot, and T. Denidni, "Directivity of an antenna embedded inside a Fabry-Perot cavity : Analysis and Design", Microw. Opt. Technol. Lett., vol. 48, pp. 12-17, Jan. 2006. [3] N. Guerin S. Enoch, G. Tayeb, P. Sabouroux, P. Vincent, H. Legay, "A metallic Fabry-Perot directive antenna", IEEE Trans. On Anten. And Propag. 54 (2006), pp. 220-224 [4] A.P. Feresidis, G. Goussetis, S. Wang and J.C. Vardaxoglou, "Artificial magnetic conductor surfaces and their application to low-profile highgain planar antennas" IEEE Trans. Antennas Propag., vol. 53, pp. 209-215, Jan. 2005.
Figure
Figure 13. ∆ , H-plane. dB 3 θ
Figure 14. , E-plane. dB 3 θ ∆ | 9,576 | [
"838442"
] | [
"214739",
"214739"
] |
01449478 | en | [
"math"
] | 2024/03/04 23:41:50 | 2018 | https://hal.science/hal-01449478v2/file/Scheid_Sokolowski_final.pdf | Jean-François Scheid
Jan Sokolowski
SHAPE OPTIMIZATION FOR A FLUID-ELASTICITY SYSTEM
Keywords: Mathematics Subject Classification. 35Q35, 35Q74, 35Q93, 49Q10, 74F10, 74B05 Shape optimization, Fluid-structure interaction, Stokes equations coupled with linear elasticity
In this paper, we are interested in a shape optimization problem for a fluid-structure interaction system composed by an elastic structure immersed in a viscous incompressible fluid. The cost functional to minimize is an energy functional involving together the fluid and the elastic parts of the structure. The shape optimization problem is introduced in the 2-dimensional case. However the results in this paper are obtained for a simplified free-boundary 1-dimensional problem. We prove that the shape optimization problem is wellposed. We study the shape differentiability of the free-boundary 1-dimensional model. The full characterization of the associated material derivatives is given together with the shape derivative of the energy functional. A special case is explicitly solved, showing the relevancy of this shape optimization approach for a simplified free boundary 1-dimensional problem. The full model in two spatial dimensions is under studies now.
Introduction
Free boundary problems are classical models e.g., for phase transitions or contact problems in structural mechanics. The optimal control or shape optimization of free boundary problems are challenging fields of research in the calculus of variations and in the theory of nonlinear partial differential equations. The obtained results can be verified by using numerical methods specific for the models. The questions to be adressed within the shape optimization framework are the existence and uniqueness of optimal shapes as well as the necessary and sufficient optimality conditions. The velocity method of shape sensitivity analysis can be applied to shape optimization problems. The existence of topological derivatives for the energy type shape functionals in multiphysics can be considered.
An important class of free boundary problems [START_REF] Friedman | Variational principles and free-boundary problems[END_REF] are variational inequalities [START_REF] Kinderlehrer | An introduction to variational inequalities and their applications[END_REF]. The optimal control [START_REF] Barbu | Optimal control of variational inequalities[END_REF] and the shape optimization [START_REF] Sokołowski | Introduction to shape optimization: shape sensitivity analysis[END_REF] of variational inequalities are well understood for unilateral constraints. In such a case the polyhedricity property of the solution with respect to the shape can be exploited. The concurrent approach is the penalization technique as it is described e.g., in [START_REF] Barbu | Optimal control of variational inequalities[END_REF]. The multiphysics models are new and important branch of applied shape optimization. In this paper a simple model of this type is rigorously analyzed from the point of view of sensitivity analysis. We present an approach of shape optimization to fluid structure interaction which can be generalized to more complex structures.
We consider an elastic structure immersed in a viscous incompressible fluid. Let ω ⊂⊂ Ω S ⊂⊂ Ω ⊂ R 2 be three bounded domains where Ω S and Ω are simply-connected domains. The deformed elastic body occupies the domain Ω S = Ω S \ ω ⊂ R 2 and the elastic structure is attached to the inner fixed boundary ∂ω. The fluid fills up a bounded domain Ω F = Ω \ Ω S = Ω \ (Ω S ∪ ω) surrounding the elastic body Ω S . We denote by
Ω S Ω F Γ F S ω n Σ Figure 1.
The geometry of the fluid-elasticity system Γ F S = ∂Ω F ∩ ∂Ω S the boundary between the fluid and the elastic structure and we have ∂Ω F = Γ F S ∪ Σ where Σ = ∂Ω. The boundary Σ corresponds also to the outer boundary of the fluid domain Ω F (see Figure 1).
The fluid flow is governed by the Stokes equations for the velocity u and the pressure p of the fluid:
-div σ(u, p) = f in Ω F (1.1) div u = 0 in Ω F (1.2)
where σ(u, p) = 2νD(u) -pI d is the Cauchy stress tensor with the symetric strain tensor D(u) = 1 2 ∇u + ∇u . The fluid is subjected to a given force f and ν is the viscosity of the fluid. At the boundary of the fluid domain, we impose (1.3) u = 0 on ∂Ω F = Γ F S ∪ Σ.
The elastic structure Ω S is a deformation of a given reference bounded domain Ω 0 ⊂ R 2 by a mapping X i.e. Ω S = X(Ω 0 ) (see Figure 2). The deformation mapping is given by X = I d + w where w is the elastic displacement of the structure which satisfies the linearized elasticity equation (1.4) -div Π(w) = g in Ω 0
where Π is the second Piola-Kirchhoff stress tensor of the elastic structure given by (1.5) Π(w) = λtr(D(w))I d + 2µD(w)
with the Lamé coefficients λ > 0, µ > 0. The elastic body is subjected to a given external force g. Since the elastic structure is clamped to the inner boundary ∂ω, we have X(∂ω) = ∂ω and
(1.6) w = 0 on ∂ω.
We also denote by Γ 0 the outer boundary of Ω 0 and we have Γ F S = X(Γ 0 ).
According to the action-reaction principle, we have
Γ 0 Π(w)n 0 • v • X dΓ = Γ F S σ(u, p)n • v dΓ
for all function v defined on Ω F . We denote by n 0 the normal unit vector directed outwards to the domain Ω 0 and n is the unit normal vector to Γ F S directed from Ω S to Ω F . This leads to the local relation
(1.7) Π(w)n 0 = (σ(u, p) • X) cof (∇X) n 0 on Γ 0 ,
where cof (∇X) denotes the cofactor matrix of the jacobian matrix (for an invertible matrix A, we have A -1 = 1 det(A) cof(A) ). The relation (1.7) can also be written on the boundary Γ F S with
(1.8) σ(u, p)n = Π(w) • X -1 cof ∇X -1 n on Γ F S .
In summary, the fluid-elasticity system for (u, p, w) reads as
-div σ(u, p) = f in Ω F (1.9) div u = 0 in Ω F (1.10) u = 0 on ∂Ω F = Γ F S ∪ Σ (1.11) -div Π(w) = g in Ω 0 (1.12) w = 0 on ∂ω (1.13) Π(w)n 0 = (σ(u, p) • X) cof (∇X) n 0 on Γ 0 . (1.14)
In [START_REF] Halanay | Existence of a Steady Flow of Stokes Fluid Past a Linear Elastic Structure Using Fictitious Domain[END_REF], the authors prove the existence of a solution to (1.9)-(1.14) using a fictitious domain approach and a fixed point procedure involving convergence of domains. This article contains in particular some interesting ideas that should be helpful for the shape optimization study associated to (1.9)- (1.14). We also mention the results in [START_REF] Grandmont | Existence for a three-dimensional steady state fluid-structure interaction problem[END_REF] where the existence of a solution to a coupled fluid-elasticity system for Stokes equation with a nonlinear elastic structure is established. A similar system to (1.9)-(1.14) has also been studied in [START_REF] Galdi | Steady flow of a Navier-Stokes liquid past an elastic body[END_REF] with the stationary Navier-Stokes equations and where the elastic structure is assumed to be a S t Venant-Kirchhoff material involving the first nonlinear Piola-Kirchhoff stress tensor (see also [START_REF] Surulescu | On the stationary motion of a Stokes fluid in a thick elastic tube: a 3D/3D interaction problem[END_REF]). We shall consider the shape optimization for a free boundary problem originated from the fluid-structure interaction. There is the following structure of coupled fields. Given a reference domain Ω 0 for the elasticity part of the system and a vector field V defined on Γ 0 , we solve the elasticity subproblem and find the displacement field w = w(V) on Γ 0 from the following boundary value problem with nonhomogeneous Neumann boundary condition
Remark
-div Π(w) = g in Ω 0 (1.16) w = 0 on ∂ω (1.17) Π(w)n 0 = V on Γ 0 . (1.18)
In other words, we consider the Neumann-to-Dirichlet mapping associated with the elastic body. As a result, the deformation field X = X(V) is determined for the boundary of the fluid subdomain
X = I d + w. The Stokes problem for (u, p) = (u(V), p(V)) is solved in the new subdomain Ω F : -div σ(u, p) = f in Ω F (1.19) div u = 0 in Ω F (1.20) u = 0 on ∂Ω F = Γ F S ∪ Σ (1.21)
and the fixed point condition for V on Γ 0 reads
V = (σ(u(V), p(V)) • X(V)) cof (∇X(V)) n 0 on Γ 0
The existence of solutions for the free boundary problem is already shown in [START_REF] Halanay | Existence of a Steady Flow of Stokes Fluid Past a Linear Elastic Structure Using Fictitious Domain[END_REF] and in [START_REF] Grandmont | Existence for a three-dimensional steady state fluid-structure interaction problem[END_REF] for a nonlinear elastic structure. We are interested in the question of shape sensitivity analysis for the free boundary problem. The first problem to solve is the stability of the free boundary with respect to the sequence of domains Ω k 0 . Such sequence is produced by shape optimization techniques applied to a given shape functional. In such a case, Ω k 0 → Ω ∞ 0 is the minimizing sequence and we want to assure that the corresponding fixed point conditions on Γ k 0 the outer boundary of Ω k 0 :
V k = (σ(u k (V k ), p k (V k )) • X k (V k )) cof (∇X k (V k )) n 0 on Γ k 0 ,
also converges to the fixed point condition in the limiting domain Ω ∞ 0 . To our best knowledge such results are not known in the literature. Shape optimization formulation. We describe the shape optimization problem associated to (1.9)- (1.14). We aim to determine the optimal reference domain for which an energy type functional is minimum. More precisely, we want to determine a bounded domain Ω 0 ∈ U ad which minimizes (1.22) min
Ω 0 ∈U ad J(Ω 0 )
where U ad is the set of admissible domains :
U ad = {Ω 0 ⊂ R 2 , Ω 0 = D 0 \ ω where D 0 is a simply-connected,
bounded and regular domain containing ω}.
The energy functional J(Ω 0 ) is defined by
(1.23) J(Ω 0 ) = Ω F |D(u)| 2 dx + η Ω 0 |D(w)| 2 dy
with a given parameter η > 0 and where u and X = I d + w satisfy (1.9)-(1.14). In (1.23), we use the notation |D(u)| 2 = D(u) : D(u) where the double product « : » is defined by A : B = i,j A ij B ij for two matrices A and B. The energy functional J(Ω 0 ) is composed by a fluid energy term and the elastic energy of deformation weighted by the parameter η.
A one-dimensional free-boundary model
In order to appreciate the relevance of the shape optimization problem presented in the introduction, we study a simplified one-dimensional freeboundary model. This system reads as follows. Let y 0 ∈ (0, 1) be given. We are seeking for two scalar functions u and w satisfying (2.1)
-∂ xx u(x) = f (x), x ∈ (0, x * ) u(0) = u(x * ) = 0 (2.2) -∂ yy w(y) = g(y), y ∈ (y 0 , 1) w(1) = 0
The (free) boundary point x * is obtained by the deformation of the reference point y 0 with (2.3)
x * = x * (y 0 ) = y 0 + w(y 0 ).
We also impose
(2.4) ∂ x u(x * ) = ∂ y w(y 0 )
which is the 1d-analogous of (1.14). We point out that the 1d-model does not account for the "volume conservation" constraint (1.15) derived in the 2d model.
The energy functional associated to the system (2.1),(2.2) is given by (2.5)
J(y 0 ) = x * 0 |∂ x u| 2 dx + η 1 y 0 |∂ y w| 2 dy
with a parameter η > 0. The one-dimensional shape optimization problem consists in finding the reference point y 0 ∈ I 0 that minimizes (2.6) min y 0 ∈I 0 J(y 0 ).
where I 0 = {y 0 ∈ (0, 1) such that x * = x * (y 0 ) ∈ (0, 1)}.
2.1. Well-posedness. In this section, we show that for y 0 ∈ (0, 1) and for f and g small enough, the problem (2.1)-(2.4) admits a unique solution (u, w, x * ) with x * ∈ (0, 1). This will be proved by a fixed point argument using the contraction mapping theorem. Let us fix y 0 ∈ (0, 1), f ∈ L 2 (0, 1) and g ∈ L 2 (0, 1). We introduce the mapping T :
(2.7)
T (s) = y 0 + v(s, y 0 ) for s ∈ (0, 1),
where v is the solution of (2.8) (2.9)
-∂ yy v(s, y) = g(y), y ∈ (y 0 , 1) v(s, 1) = 0 ∂ y v(s, y 0 ) = ∂ x u(s)
-∂ xx u(x) = f (x), x ∈ (0, s) u(0) = u(s) = 0
For any s ∈ (0, 1), Problem (2.9) admits a unique solution u = u(s, •) ∈ H 1 0 (0, s) ∩ H 2 (0, s). The derivative ∂ x u is then continuous in [0, s] and Problem (2.8) also admits a unique solution v = v(s, •) ∈ H 2 (y 0 , 1). It is clear that x * ∈ (0, 1) is a fixed point for T i.e. x * = T (x * ) if and only if (u(x * , •), v(x * , •), x * ) is a solution of Problem (2.1)-(2.4). The following existence result holds.
Proposition 2.1. Let 0 ≤ ε < 1, y 0 ∈ (ε, 1) and f, g ∈ L ∞ (0, 1). There exists δ 0 = δ 0 (y 0 , ε) > 0 such that if f ∞ + g ∞ ≤ δ 0 then Problem (2.1)- (2.4) admits a unique solution (u, w, x * ) with u ∈ H 2 (0, x * ), w ∈ H 2 (y 0 , 1)
and x * ∈ (ε, 1) which satisfies the following relation
(2.10) x * = y 0 + 1 y 0 (1 -y)g(y) dy + (1 -y 0 ) x * x * 0 xf (x) dx.
Moreover, δ 0 can be choosen as a non-decreasing function of y 0 with
(2.11) δ 0 (y 0 , ε) = 2 min(1, y 0 -ε 1 -y 0 , 1 3(1 -y 0 ) ) > 0.
Proof. Let ε ∈ [0, 1). We prove that for sufficiently small f and g, the mapping T defined by (2.7) maps the interval (ε, 1) into itself and T is a contraction mapping on (ε, 1). This ensures the existence and the uniqueness of a fixed point x * ∈ (ε, 1) for T .
According to (2.7), if |v(s, y 0 )| < min(y 0 -ε, 1 -y 0 ) for all s ∈ (ε, 1) then T (s) ∈ (ε, 1) for all s ∈ (ε, 1). Let s ∈ (ε, 1) be fixed. We estimate v(s, y 0 ) with respect to f and g. To this end, let us write
v(s, y 0 ) = - 1 y 0 ∂ y v(s, y) dy = - 1 y 0 ∂ y v(s, y)∂ y ϕ(y) dy,
with ϕ(y) = y -1. Since ϕ(1) = 0 and ∂ y ϕ ≡ 1 in (y 0 , 1), we obtain by integrating by parts
v(s, y 0 ) = 1 y 0 ∂ yy v(s, y) (y -1)dy + ∂ y v(s, y 0 ) (y 0 -1) = - 1 y 0 g(y) (y -1)dy + ∂ y v(s, y 0 ) (y 0 -1) = - 1 y 0 g(y) (y -1)dy + ∂ x u(s) (y 0 -1), (2.12)
thanks to the boundary condition in (2.8). In addition, starting from (2.9) we have
- s 0 ∂ xx u(x) φ(x)dx = s 0 f (x) φ(x)dx, with φ(x) = x.
Integrating by parts, using φ(0) = 0 and ∂ x φ ≡ 1 in (0, s) together with the boundary conditions for u in (2.9), we get
∂ x u(s) = - 1 s s 0 xf (x) dx. (2.13)
Combining (2.12) with (2.13), we finally obtain
(2.14) v(s, y 0 ) = 1 y 0 (1 -y)g(y) dy + (1 -y 0 ) s s 0 xf (x) dx.
We are now in position to estimate v(s, y 0 ) :
|v(s, y 0 )| ≤ g ∞ 1 y 0 (1 -y) dy + (1 -y 0 ) s f ∞ s 0 x dx ≤ (1 -y 0 ) 2 2 g ∞ + s 2 (1 -y 0 ) f ∞ ≤ (1 -y 0 ) 2 ( g ∞ + f ∞ ) (2.15)
We choose f and g such that
(2.16) g ∞ + f ∞ ≤ 2 min( y 0 -ε 1 -y 0 , 1)
so that we have |v(s, y 0 )| < min(y 0 -ε, 1 -y 0 ) and thus T (s) ∈ (ε, 1). Now, we prove that T is a contraction mapping on (0, 1). According to (2.14), we have, for any s 1 , s 2 ∈ (0, 1),
s 1 = s 2 , T (s 1 ) -T (s 2 ) = v(s 1 , y 0 ) -v(s 2 , y 0 ) = (1 -y 0 ) 1 s 1 s 1 0 xf (x) dx - 1 s 2 s 2 0 xf (x) dx .
Without loss of generality we assume that s 1 > s 2 and we write
T (s 1 ) -T (s 2 ) = (1 -y 0 ) 1 s 1 - 1 s 2 s 2 0 xf (x) dx + 1 s 1 s 1 s 2 xf (x) dx .
This leads to
|T (s 1 ) -T (s 2 )| ≤ (1 -y 0 ) f ∞ 1 s 1 - 1 s 2 s 2 2 2 + 1 s 1 s 1 2 2 - s 2 2 2 ≤ (1 -y 0 ) 2 f ∞ s 2 s 1 + s 1 + s 2 s 1 |s 1 -s 2 | .
Since s 1 > s 2 , we obtain
(2.17) |T (s 1 ) -T (s 2 )| < 3 2 (1 -y 0 ) f ∞ |s 1 -s 2 |
We choose f such that
(2.18) f ∞ ≤ 2 3(1 -y 0 ) , so that |T (s 1 ) -T (s 2 )| < |s 1 -s 2 |
and thus T is a contraction mapping on (0, 1).
Let δ 0 = δ 0 (y 0 , ε) = 2 min(1, y 0 -ε 1-y 0 , 1 3(1-y 0 ) ) > 0. Combining (2.16
) with (2.18), we conclude that if g ∞ + f ∞ ≤ δ 0 then T admits a unique fixed point x * ∈ (ε, 1) which thus satisfies (2.10).
2.2.
A fixed domain formulation. In this section we transform the 1d fluid-elastic system (2.1)-(2.4) in a nonlinear problem posed in reference intervals. Let us fix two reference points x0 , ŷ0 ∈ (0, 1). For given s and t ∈ (0, 1), we introduce the one-to-one regular mappings ϕ s and φ t defined in [0, 1] such that
(2.19) ϕ s ([0, x0 ]) = [0, s] with ϕ s (0) = 0, ϕ s (x 0 ) = s φ t ([ŷ 0 , 1]) = [t, 1] with φ t (ŷ 0 ) = t, φ t (1) = 1, with (2.20) ϕ x0 ≡ I d , φ ŷ0 ≡ I d .
We suppose that ϕ s ∈ C 2 ([0, 1]) for all s ∈ (0, 1) and s → ϕ s (x) belongs to C 1 (0, 1) for all x ∈ [0, 1]. Similarly, we suppose φ t ∈ C 2 ([0, 1]) for all t ∈ (0, 1) and t → φ t (y) belongs to C 1 (0, 1) for all y ∈ [0, 1]. We have that ϕ s > 0 in [0, x0 ], for all s ∈ (0, 1) and φ t > 0 in [ŷ 0 , 1], for all t ∈ (0, 1).
Let (u, w, x * ) be the solution of (2.1)-(2.4). Then we define the following changes of variables
(2.21) û(x) = u(x), f (x) = f (x) with x = ϕ x * (x) for x ∈ [0, x0 ], ŵ(ŷ) = w(y), ĝ(ŷ) = g(y) with y = φ y 0 (ŷ) for ŷ ∈ [ŷ 0 , 1].
The functions (û, ŵ) satisfy the following nonlinear problem posed in the reference intervals [0, x0 ] and [ŷ 0 , 1]:
(2.22) -∂ x 1 ϕ x * (x) ∂ x û(x) = ϕ x * (x) f (x), x ∈ (0, x0 ) û(0) = û(x 0 ) = 0 (2.23) -∂ ŷ 1 φ y 0 (ŷ) ∂ ŷ ŵ(ŷ) = φ y 0 (ŷ) ĝ(ŷ), ŷ ∈ (ŷ 0 , 1) ŵ(1) = 0 (2.24) 1 ϕ x * (x 0 ) ∂ x û(x 0 ) = 1 φ y 0 (ŷ 0 ) ∂ ŷ ŵ(ŷ 0
). The mappings ϕ x * and φ y 0 can be chosen for instance, as the unique solutions of the two problems (2.25)
ϕ x * = 0 in (0, x0 ) ϕ x * (0) = 0, ϕ x * (x 0 ) = x * φ y 0 = 0 in (ŷ 0 , 1) φ y 0 (ŷ 0 ) = y 0 , φ y 0 (1) = 1 that is (2.26) ϕ x * (x) = x * x0 x = y 0 + ŵ(ŷ 0 ) x0 x for x ∈ [0, x0 ] φ y 0 (ŷ) = (y 0 -1) (ŷ 0 -1) (ŷ -1) + 1 for ŷ ∈ [ŷ 0 , 1]
With that choices for ϕ x * and φ y 0 , the unknows (û, ŵ) satisfy (2.27)
-∂ xx û(x) = ( y 0 + ŵ(ŷ 0 ) x0 ) 2 f (x), x ∈ (0, x0 ) û(0) = û(x 0 ) = 0 -∂ ŷ ŷ ŵ(ŷ) = ( y 0 -1 ŷ0 -1 ) 2 ĝ(ŷ), ŷ ∈ (ŷ 0 , 1) ŵ(1) = 0 ( x0 y 0 + ŵ(ŷ 0 ) )∂ x û(x 0 ) = ( ŷ0 -1 y 0 -1 )∂ ŷ ŵ(ŷ 0 ) 2.3.
Existence of an optimal interval. We shall prove that the optimal problem (2.5),(2.6) admits an optimal reference point y 0 . More precisely, we have the following result
Proposition 2.2. Let 0 < ε 1 < ε 2 < 1 and f, g ∈ L ∞ (0, 1). There exists η 0 = η 0 (ε 1 ) > 0 such that if f ∞ + g ∞ ≤ η 0 then there exists y * 0 ∈ [ε 1 , ε 2 ] that realizes min y 0 ∈[ε 1 ,ε 2 ] J(y 0 ). Proof. We fix 0 < ε 1 < ε 2 < 1. We define η 0 (ε 1 ) = δ 0 (ε 1 , ε 1 /2) > 0 where δ 0 is given by (2.11) in Proposition 2.1. We choose f, g ∈ L ∞ (0, 1) such that f ∞ + g ∞ ≤ η 0 (ε 1 ) = δ 0 (ε 1 , ε 1 /2). Since δ 0 is a non-decreasing function of y 0 , we have η 0 (ε 1 ) ≤ δ 0 (y 0 , ε 1 /2) for all y 0 ∈ [ε 1 , ε 2 ]. According to Propo- sition 2.1, Problem (2.1)-(2.4) admits a unique solution for all y 0 ∈ [ε 1 , ε 2 ], with x * ∈ [ε 1 /2, 1). Thus, J is well-defined in [ε 1 , ε 2 ]. Let (y n ) n≥1 ∈ [ε 1 , ε 2 ] be a minimizing sequence of J i.e. lim n→+∞ J(y n ) = inf y 0 ∈[ε 1 ,ε 2 ] J(y 0 ).
There exists a subsequence still denoted y n and y * 0 ∈ [ε 1 , ε 2 ] such that lim n→+∞ y n = y * 0 . We have to prove that lim n→+∞ J(y n ) = J(y * 0 ). We denote by
(u n , w n , x * n ) ∈ H 2 (0, x * n ) × H 2 (y n , 1) × [ε 1 /2, 1) the solution of (2.28) -∂ xx u n (x) = f (x), x ∈ (0, x * n ) u n (0) = u n (x * n ) = 0 -∂ yy w n (y) = g(y), y ∈ (y n , 1) w n (1) = 0 ∂ x u n (x * n ) = ∂ y w n (y n ) x * n = y n + w n (y n ) According to
ϕ(x) = y n + ŵn (y * 0 ) y * 0 x for x ∈ [0, ŷ * 0 ] φ(ŷ) = (y n -1) (y * 0 -1) (ŷ -1) + 1 for ŷ ∈ [ŷ * 0 , 1]
The functions (û n , ŵn ) satisfy
(2.30) -∂ xx ûn (x) = ( yn+ ŵn(y * 0 ) y * 0 ) 2 f (x), x ∈ (0, y * 0 ) ûn (0) = ûn (y * 0 ) = 0 (2.31) -∂ ŷ ŷ ŵn (ŷ) = ( yn-1 y * 0 -1 ) 2 ĝ(ŷ), ŷ ∈ (y * 0 , 1) ŵn (1) = 0 ( y * 0 yn+ ŵn(y * 0 ) )∂ x ûn (y * 0 ) = ( y * 0 -1 yn-1 )∂ ŷ ŵn (y * 0 ) Since x * n = y n + w n (y n ) = y n + ŵn (y * 0 ) ∈ [ε 1 /2, 1)
, we deduce from (2.30) that ûn H 2 (0,y * 0 ) ≤ C where C > 0 is a constant independent of n. Then there exists a subsequence still denoted ûn and û0 ∈ H 2 (0, y * 0 ) such that ûn n→+∞ û0 weakly in H 2 (0, y * 0 ). From (2.31), we deduce that ŵn H 2 (y * 0 ,1) ≤ C where C > 0 is a constant independent of n. Then there exists a subsequence still denoted ŵn and ŵ0 ∈ H 2 (y * 0 , 1) such that ŵn n→+∞ ŵ0 weakly in H 2 (y * 0 , 1) and ŵ0 satisfies
(2.32) -∂ ŷ ŷ ŵ0 (ŷ) = ĝ(ŷ), ŷ ∈ (y * 0 , 1) ŵ0 (1) = 0.
Since x * n = y n + ŵn (y * 0 ) and due to the compactness of the embedding
H 2 (y * 0 , 1) → C 1 ([y * 0 , 1]), we deduce that lim n→+∞ x * n = x * 0 with (2.33) x * 0 = y * 0 + ŵ0 (y * 0 ) ∈ [ε 1 /2, 1] ⊂ (0, 1].
In addition, we obtain that û0 satisfies
(2.34) -∂ xx û0 (x) = ( y * 0 + ŵ0 (y * 0 ) y * 0 ) 2 f (x), x ∈ (0, y * 0 ) û0 (0) = û0 (y * 0 ) = 0
and due to the compactness of the embedding H 2 (0,
y * 0 ) → C 1 ([0, y * 0 ]) we have (2.35) ( y * 0 y 0 + ŵ0 (y * 0 ) )∂ x û0 (y * 0 ) = ∂ ŷ ŵ0 (y * 0 )
We transform the problem (2.34), (2.35) on the interval (0, x * 0 ) by using the change of variables û0 (x) = u 0 (x) with (see Section 2.2)
(2.36) x = x * 0 y * 0 x = y * 0 + ŵ0 (y * 0 ) y * 0 x for x ∈ [0, y * 0 ].
Thus the function u 0 satisfies (2.37)
-∂ xx u 0 (x) = f (x), x ∈ (0, x * 0 ) u 0 (0) = u 0 (x * 0 ) = 0 ∂ x u 0 (x * 0 ) = ∂ ŷ ŵ0 (y * 0 )
Moreover, using the change of variable (2.29) we have
J(y n ) = x * n 0 |∂ x u n | 2 dx + η 1 yn |∂ y w n | 2 dy = ( y * 0 y n + ŵn (y * 0 )
)
y * 0 0 |∂ x ûn | 2 dx + η( y * 0 -1 y n -1 ) 1 y * 0 |∂ ŷ ŵn | 2 dŷ
We deduce that (2.38)
lim n→+∞ J(y n ) = ( y * 0 y * 0 + ŵ0 (y * 0 )
)
y * 0 0 |∂ x û0 | 2 dx + η 1 y * 0 |∂ ŷ ŵ0 | 2 dŷ
Using the change of variable (2.36) with (2.33) in the right hand side of (2.38), we obtain
(2.39) lim n→+∞ J(y n ) = x * 0 0 |∂ x u 0 | 2 dx + η 1 y * 0 |∂ ŷ ŵ0 | 2 dŷ = J(y * 0 )
where (u 0 , ŵ0 ) satisfies (2.32),(2.37). The proof is then complete.
2.4. Shape differentiability. In this section, we prove the existence of the material derivatives associated to the solution (u, w) of the coupled problem (2.1)-(2.4). A full characterization of the material derivatives is given as the solution of an adjoint problem. For a given t ∈ (0, 1), we consider the following problem for (u t , w t , x * t ):
(2.40)
-∂ xx u t (x) = f (x), x ∈ (0, x * t ) u t (0) = u t (x * t ) = 0 -∂ yy w t (y) = g(y), y ∈ (t, 1) w t (1) = 0 ∂ x u t (x * t ) = ∂ y w t (t) x * t = t + w t (t).
Let y 0 ∈ (0, 1) and γ > 0 given. We choose t ∈ (y 0 -γ, y 0 + γ) ∩ (0, 1). We assume that the functions f , g ∈ W 1,∞ (0, 1) and
(2.41)
f L ∞ (0,1) + g L ∞ (0,1) ≤δ 0 (y 0 -γ, y 0 -2γ) = 2 min 1, γ 1 -y 0 + γ , 1 3(1 -y 0 + γ)
where δ 0 is given by (2.11). Since δ 0 (y 0 , ε) is a non-decreasing function of y 0 , choosing ε = y 0 -2γ we have δ 0 (y 0 -γ, y 0 -2γ) ≤ δ 0 (t, y 0 -2γ) for all t ∈ (y 0 -γ, y 0 + γ). Then, according to Proposition 2.1, Problem (2.40) admits a unique solution (u t , w t , x * t ) ∈ H 2 (0, x * t ) × H 2 (t, 1) × (y 0 -2γ, 1), for all t ∈ (y 0 -γ, y 0 + γ) ∩ (0, 1).
We emphasize that the solution (u, w, x * ) of (2.1)-(2.4) coincides with the solution of (2.40) with t = y 0 , i.e. (u, w, x * ) = (u y 0 , w y 0 , x * y 0 ). Moreover, since we choose f , g ∈ W 1,∞ (0, 1), the solution of (2.1)-(2.4) has the additionnal regularity
(2.42) (u, w) ∈ H 3 (0, x * ) × H 3 (y 0 , 1).
We are dealing with a fixed domain formulation by using the one-to-one regular mappings ϕ s and φ t defined on [0, 1] such that (see Section 2.2) :
(2.43)
ϕ s ([0, x * ]) = [0, s] with ϕ s (0) = 0, ϕ s (x * ) = s φ t ([y 0 , 1]) = [t, 1] with φ t (y 0 ) = t, φ t (1) = 1, with (2.44) ϕ x * ≡ I d , φ y 0 ≡ I d .
We suppose that ϕ s ∈ C 2 ([0, 1]) for all s ∈ (0, 1) and s → ϕ s (x) belongs to C 1 (0, 1) for all x ∈ [0, 1]. Similarly, we suppose φ t ∈ C 2 ([0, 1]) for all t ∈ (0, 1) and t → φ t (y) belongs to C 1 (0, 1) for all y ∈ [0, 1]. We have that ϕ s > 0 in [0, x * ], for all s ∈ (0, 1) and φ t > 0 in [y 0 , 1], for all t ∈ (0, 1).
Following [5, p.13-14], we shall say that a map F : t ∈ R → f (t) ∈ X where X is a Banach space, is weakly continous at t = t 0 if for any sequence t n → t 0 as n → ∞, we have f (t n ) f (t 0 ) weakly in X. The map F is weakly-differentiable at t = t 0 if for any sequence t n → t 0 , there exists f (t 0 ) ∈ X such that f (tn)-f (t 0 ) tn-t 0 f (t 0 ) weakly in X as n → ∞. Proposition 2.3. Let y 0 ∈ (0, 1) and γ ∈ (0, 1/4) given. We assume that f , g ∈ W 1,∞ (0, 1) satisfy (2.41). For all t ∈ (y 0 -γ, y 0 + γ) ∩ (0, 1), we consider the solution (u t , w t , x * t ) of (2.40) and let (ϕ s , φ t ) be the mappings defined by (2.43),(2.44). Then, the map F : t → (u t • ϕ x * t , w t • φ t , x * t ) ∈ H 2 (0, x * )×H 2 (y 0 , 1)×(0, 1) defined for t ∈ (y 0 -γ, y 0 +γ)∩(0, 1), is weaklycontinuous and weakly-differentiable at t = y 0 and the associated material derivative ( u, ẇ, ẋ * ) ∈ H 2 (0, x * ) × H 2 (y 0 , 1) × R is the solution of (2.45)
-∂ xx u = -ẋ * ∂ xx (∂ x u) dϕ s ds | s=x * in (0, x * ) u(0) = u(x * ) = 0 -∂ yy ẇ = -∂ yy (∂ y w) dφ t dt |t=y 0 in (y 0 , 1) ẇ(1) = 0 (2.46) ∂ x u(x * ) -ẋ * ∂ x u(x * ) d ds ϕ s (x * ) | s=x * = ∂ y ẇ(y 0 ) -∂ y w(y 0 ) d dt φ t (y 0 ) |t=y 0
Moreover, the derivative ẋ * is given by
(2.47) ẋ * = 1 -d * -(1 -y 0 )g(y 0 ) 1 + (1 -y 0 )(d * /x * -f (x * )) with d * = 1 x * x * 0 xf (x) dx.
Proof. We first prove that the map F : t → (u t (ϕ x * t ), w t (φ t ), x * t ) is weaklycontinuous at t = y 0 . More precisely, we shall prove that x * t → x * and u t (ϕ x * t ) u weakly in H 2 (0, x * ), w t (φ t ) w weakly in H 2 (y 0 , 1) as t → y 0 .
According to (2.10), for all t ∈ (y 0 -γ, y 0 + γ) ∩ (0, 1), x * t satisfies (2.48)
x * t = t + 1 t (1 -y)g(y) dy + (1 -t) x * t x * t 0 xf (x) dx.
Since x * t ∈ (0, 1), there exists a subsequence t n → y 0 such that x * tn → x ∈ [0, 1] which satisfies (2.49) x = y 0 +
1 y 0 (1 -y)g(y) dy + (1 -y 0 ) x x 0 xf (x) dx.
Since x * is the unique point satisfying (2.49) (see (2.10)), we have x = x * . We can also prove that the whole sequence x * t is converging with t → y 0 . Thus, we have (2.50)
x * t → x * as t → y 0 . Now, we turn to the convergence of u t and w t . Using the changes of variables û = u t (ϕ x * t ) and ŵ = w t (φ t ) (see (2.21)) with x = ϕ x * t (x) and y = φ t (ŷ), the system (2.40) becomes (see (2.22), (2.23) and (2.24)):
(2.51) -∂ x 1 ϕ x * t ∂ x û = ϕ x * t f (ϕ x * t ) in (0, x * ) û(0) = û(x * ) = 0 (2.52) -∂ y 1 φ t ∂ y ŵ = φ t g(φ t ) in (y 0 , 1) ŵ(1) = 1 (2.53) 1 ϕ x * t (x * ) ∂ x û(x * ) = 1 φ t (y 0 ) ∂ y ŵ(y 0 )
We introduce
(2.54) c 1,t = û -u = u t (ϕ x * t ) -u ∈ H 2 (0,
x * ) and substracting (2.51) with (2.40) at t = y 0 for u, we get (2.55)
-∂ x 1 ϕ x * t ∂ x c 1,t -∂ x 1 ϕ x * t -1 ∂ x u = ϕ x * t f (ϕ x * t ) -f in (0, x * ) c 1,t (0) = c 1,t (x * ) = 0
Due to (2.50) and the fact that ϕ s L ∞ (0,x * ) → 1 as s → x * , we have
(2.56) 1 ϕ x * t -1 L ∞ (0,x * ) ---→ t→y 0 0, ϕ x * t f (ϕ x * t ) -f L ∞ (0,x * ) ---→ t→y 0 0.
As a result, we deduce from (2.55) that for |t -y 0 | small enough,
c 1,t H 2 (0,x * ) ≤ C
where C > 0 does not depend on t. Thus, there exists a subsequence t n → y 0 and c 1 ∈ H 2 (0, x * ) such that c 1,tn c 1 weakly in H 2 and c 1 satisfies
-∂ xx c 1 = 0 in (0, x * ) c 1 (0) = c 1 (x * ) = 0
Thus, we have c 1 ≡ 0 in (0, x * ) and in addition we can prove that the whole sequence c 1,t is converging to 0. Thus, (2.57)
u t (ϕ x * t ) u weakly in H 2 (0, x * ) as t → y 0 .
Moreover, from the compactness of the embedding of
H 2 (0, x * ) into C 1 ([0, x * ]), we deduce that (2.58) ∂ x c 1,t (x * ) → 0 as t → y 0 , that is (2.59) ∂ x u t (ϕ x * t )(x * ) → ∂ x u(x * ) as t → y 0 . Now, we introduce (2.60) c 2,t = ŵ -w = w t (φ t ) -w ∈ H 2 (y 0 , 1).
Substracting (2.52) and (2.53) with (2.40) at t = y 0 for w, we get (2.61) -∂ y
1 φ t ∂ y c 2,t -∂ y 1 φ t -1 ∂ y w = (φ t g(φ t ) -g) in (y 0 , 1) c 2,t (1) = 0 1 ϕ x * t (x * ) ∂ x c 1,t (x * ) + ( 1 ϕ x * t (x * ) -1)∂ x u(x * ) = 1 φ t (y 0 ) ∂ y c 2,t (y 0 ) + ( 1 φ t (y 0 ) -1)∂ y w(y 0 )
We deduce that for all v ∈ H 1 (y 0 , 1) with v(1) = 0, we have (2.62)
1 y 0 1 φ t (y) ∂ y c 2,t (y)∂ y v(y) dy+ 1 ϕ x * t (x * ) ∂ x c 1,t (x * ) + ( 1 ϕ x * t (x * ) -1)∂ x u(x * ) v(y 0 ) = 1 y 0 1 φ t (y) -1 ∂ y w(y)∂ y v(y) dy + 1 y 0 φ t (y)g(φ t (y)) -g(y) v(y) dy,
We recall that φ t > 0 in [y 0 , 1] and φ t L ∞ (y 0 ,1) → 1 as t → y 0 . Then, (2.63)
1 φ t -1 L ∞ (y 0 ,1) ---→ t→y 0 0, φ t g(φ t ) -g L ∞ (0,x * ) ---→ t→y 0 0.
We take v = c 2,t in (2.62). Using (2.58) and the trace inequality |v(y 0 )| ≤ C ∂ y v L 2 (y 0 ,1) for all v ∈ H 1 (y 0 , 1) with v(1) = 0, where C is independent of v, we obtain that for |t -y 0 | small enough,
c 2,t H 1 (y 0 ,1) ≤ C
where C > 0 does not depend on t. Going back to the strong form (2.61), we get a uniform bound for ∂ xx c 2,t L 2 (y 0 ,1) with respect to t and thus for |t -y 0 | small enough, we have
(2.64) c 2,t H 2 (y 0 ,1) ≤ C
where C > 0 does not depend on t. Thus, there exists a subsequence t n → y 0 and c 2 ∈ H 2 (y 0 , 1) such that c 2,tn c 2 weakly in H 2 and c 2 satisfies
(2.65) -∂ yy c 2 = 0 in (y 0 , 1) c 2 (1) = 0
We can prove that the whole sequence c 2,t is converging. We have c 2,t (y 0 ) → c 2 (y 0 ) as t → y 0 . Furthermore, since c 2,t (y 0 ) = w(y 0 ) -w t (φ t (y 0 )) = -y 0 + x * + t -x * t , we deduce that c 2,t (y 0 ) → 0 as t → y 0 thanks to (2.50). Hence, we obtain that c 2 (y 0 ) = 0 and using (2.65) we conclude that c 2 ≡ 0 in (y 0 , 1). We have proved that (2.66) w t (φ t ) w weakly in H 2 (y 0 , 1) as t → y 0 .
The properties (2.50),(2.57),(2.66) show that the map
F : t → (u t • ϕ x * t , w t • φ t , x * t ) is weakly-continuous at t = y 0 .
Now, let us prove the weak-differentiability of F at t = y 0 . We first prove that the map t → x * t is differentiable at t = y 0 . Let us introduce (2.67)
τ t = x * t -x * h with h = t -y 0 .
Starting from (2.10) and (2.48), we obtain that τ t satisfies the relation
(2.68) 1 - (1 -t) x * t S t -d * τ t = 1 + R t -d * with d * = 1 x * x * 0 xf (x) dx R t = 1 t -y 0 y 0 t (1 -y)g(y) dy S t = 1 x * t -x * x * t x * xf (x) dx
We clearly have
|S t | x * t ≤ f ∞ and |d * | x * t ≤ x * 2x * t f ∞ and then (1 -t) x * t S t -d * ) ≤ (1 -y 0 + γ) 1 + x * 2x * t f ∞
for all t ∈ (y 0 -γ, y 0 + γ). Since x * t → x * as t → y 0 , we deduce that for |t -y 0 | small enough, we have
(1 -t) x * t S t -d * ) ≤ 2(1 -y 0 + γ) f ∞ .
The assumption (2.41) ensures that f ∞ ≤ 2γ 1 -y 0 + γ and then we obtain
(1 -t) x * t S t -d * ) ≤ 4γ
and therefore, for |t -y 0 | small enough,
(2.69) 1 - (1 -t) x * t S t -d * ≥ 1 -4γ > 0.
Hence τ t is well defined by (2.68) for |t -y 0 | small enough. Moreover, when t → y 0 , we have
(2.70) R t → -(1 -y 0 )g(y 0 ) S t → x * f (x * )
Thus, there exists ẋ * ∈ R such that (2.71) τ t → ẋ * as t → y 0 and we deduce from (2.68) and (2.70) that ẋ * satisfies
(2.72) ẋ * = 1 -d * -(1 -y 0 )g(y 0 ) 1 + (1 -y 0 )(d * /x * -f (x * ))
Now, we turn to the differentiability of û and ŵ. We define (2.73)
d 1,t = û -u h = u t (ϕ x * t ) -u h , d 2,t = ŵ -w h = w t (φ t ) -w h , with h = t -y 0 . The function d 1,t ∈ H 2 (0, x * ) satisfies (2.74) -∂ x 1 ϕ x * t ∂ x d 1,t -∂ x 1 h 1 ϕ x * t -1 ∂ x u = 1 h ϕ x * t f (ϕ x * t ) -f in (0, x * ) d 1,t (0) = d 1,t (x * ) = 0
From (2.50), (2.71), we deduce that (2.75)
1 h 1 ϕ x * t -1 + ẋ * dϕ s ds | s=x * L ∞ (0,x * ) -→ 0 as t → y 0 (2.76) 1 h ϕ x * t f (ϕ x * t ) -f -ẋ * d ds ϕ s f (ϕ s ) | s=x * L ∞ (0,x * ) -→ 0 as t → y 0
As a result, we deduce from (2.74) that for |t -y 0 | small enough,
d 1,t H 2 (0,x * ) ≤ C
where C > 0 does not depend on t. Thus, there exists a subsequence t n → y 0 and u ∈ H 2 (0, x * ) such that d 1,tn u weakly in H 2 and u satisfies (2.77)
-∂ xx u + ẋ * ∂ x dϕ s ds | s=x * ∂ x u = ẋ * d ds (ϕ s f (ϕ s )) | s=x * in (0, x * ) u(0) = u(x * ) = 0
Using the fact that u ∈ H 3 (0, x * ) and -∂ xxx u = ∂ x f in (0, x * ), we obtain by straightforward calculations that
∂ x dϕ s ds | s=x * ∂ x u - d ds ϕ s f (ϕ s ) | s=x * = ∂ xx (∂ x u) dϕ s ds | s=x * in (0, x * ) Then (2.77) becomes (2.78) -∂ xx u = -ẋ * ∂ xx (∂ x u) dϕ s ds | s=x * in (0, x * ) u(0) = u(x * ) = 0
In addition it can be proved that the whole sequence d 1,t is converging to u * .
The function d 2,t ∈ H 2 (y 0 , 1) satisfies (2.79) -∂ y
1 φ t ∂ y d 2,t -∂ y 1 h 1 φ t -1 ∂ y w = 1 h (φ t g(φ t ) -g) in (y 0 , 1) d 2,t (1) = 0 (2.80) 1 ϕ x * t (x * ) ∂ x d 1,t (x * ) + 1 h ( 1 ϕ x * t (x * ) -1)∂ x u(x * ) = 1 φ t (y 0 ) ∂ y d 2,t (y 0 ) + 1 h ( 1 φ t (y 0 ) - 1
(x * ) | s=x * ∂ x u(x * ) = ∂ y ẇ(y 0 ) - d dt φ t (y 0 ) |t=y 0 ∂ y w(y 0 )
The proof of Proposition 2.3 is then complete.
Remark 2.4. Due to the compactness of the embedding of
H 2 (0, x * ) × H 2 (y 0 , 1) into C 1 ([0, x * ]) × C 1 ([y 0 , 1]), Proposition 2.3 ensures that the map t → (u t • ϕ x * t , w t • φ t ) ∈ C 1 ([0, x * ]) × C 1 ([y 0 , 1]
) is (strongly) differentiable at t = y 0 . Now, we are in position to compute the shape derivative of the solution of (2.1)- (2.4). We first extend the solution (u t , w t ) of (2.40) to the whole real line : u t ∈ H 1 0 (0, x * ) is extended by 0 outside the interval (0, x * ), so that we consider u t ∈ H 1 (R). In the same way, w t ∈ H 1 (y 0 , 1) is extended to 0 outside (y 0 , 1) so that we consider w t ∈ L 2 (R).
Proposition 2.5. Under the hypothesis of Proposition 2.3, the map t → (u t , w t ) ∈ L 2 (R) × L 2 (R) is differentiable at t = y 0 . The shape derivatives (u , w ) ∈ H 2 (0, x * ) × H 2 (y 0 , 1) are given by (2.86)
u = u -ẋ * (∂ x u) dϕ s ds | s=x * in (0, x * ) w = ẇ -(∂ y w) dφ t dt |t=y 0 in (y 0 , 1)
and satisfy
(2.87)
∂ xx u = 0 in (0, x * ) u (0) = 0 (2.88) u (x * ) = -ẋ * ∂ x u(x * ) (2.89) ∂ yy w = 0 in (y 0 , 1) w (1) = 0 w (y 0 ) = ẋ * -1 -∂ y w(y 0 ) (2.90) ∂ y w (y 0 ) -g(y 0 ) = ∂ x u (x * ) -ẋ * f (x * ) (2.91)
Proof. The proof is a direct consequence of the derivability of û and ŵ stated in Proposition (2.3) (see also [START_REF] Sokołowski | Introduction to shape optimization: shape sensitivity analysis[END_REF]Proposition 2.32] and [START_REF] Henrot | Variation et optimisation de formes: Une analyse géométrique[END_REF]Lemme 5.3.3]. We start from the relations (2.92)
u t = (u t • ϕ x * t ) • ϕ -1 x * t = û • ϕ -1 x * t w t = (w t • φ t ) • φ -1 t = ŵ • φ -1 t .
The derivability of u t and w t with respect to t at t = y 0 is a direct consequence of the derivability of t → (û, ŵ, x * t ) established in Proposition 2.3. We denote by (u , w ) the derivative of t → (u t , w t ) at t = y 0 . Differentiating (2.92) with t, we obtain at t = y 0 :
u = u -ẋ * (∂ x u) dϕ s ds | s=x * ∈ H 1 (0, x * ) w = ẇ -(∂ y w) dφ t dt |t=y 0 ∈ H 1 (y 0 , 1)
According to Proposition 2. We define the energy functional J associated to the solution (u t , w t , x * t ) of (2.40) by (2.93)
J(t) = x * t 0 |∂ x u t | 2 dx + η 1 t |∂ y w t | 2 dy.
From Proposition 2.5, we deduce the following differentiability result for the function J.
Proposition 2.6. Under the hypothesis of Proposition 2.3, the functional t → J(t) is differentiable at t = y 0 and its derivative at t = y 0 is given by
(2.94) J (y 0 ) = ∂ y w(y 0 ) 2 (1 + ∂ y w(y 0 ) -η) + ∂ y w (y 0 ) (y 0 -1) ∂ y w(y 0 ) 2 -2ηw(y 0 ) with (2.95) ∂ y w (y 0 ) = x * g(y 0 ) -1 + ∂ y w(y 0 ) ∂ y w(y 0 ) + x * f (x * ) (x * + y 0 -1) .
Proof. From (2.1) and (2.2), we deduce that Moreover, using the regularity of u and u with (2.88), we get
J(t) = x * t 0 f u t dx + η
x * 0 f u dx = - x * 0 (∂ xx u)u dx = x * 0 ∂ x u ∂ x u dx -u (x * )∂ x u(x * ) = x * 0 ∂ x u ∂ x u dx + ẋ * ∂ x u(x * ) 2 = - x * 0 u ∂ xx u =0 dx + u =0 ∂ x u x * 0 + ẋ * ∂ x u(x * ) 2
Then, we have (2.100)
x * 0 f u dx = ẋ * ∂ x u(x * ) 2 .
Similarly, we obtain Combining (2.91) with (2.104),(2.105),(2.88) and(2.90), we obtain the desired formula (2.95) for ∂ y w (y 0 ).
An explicit one-dimensional optimal solution
In this section, we study in details the particular case where the functions f and g are two constants. These constants have to be chosen small enough for ensuring the well-posedness of (2.1)-(2.4) (see Proposition 2.1). We choose f ≡ 1 and g ≡ α ∈ R a constant. The solution of Problem (2.1)-(2.4) is then given by u(x) = - The constant α must be chosen small enough. In order to make certain that x * lies in the interval (0, 1), we shall see that we have to restrict the where x * and c 0 are given by (3.3) and (3.4). This formula provides a fully explicit expression of the functional J with y 0 . The derivative J (y 0 ) of the functional with respect to y 0 can be computed exactly as well as the optimal value y * 0 that minimizes J. It can be checked that this direct calculation coincides with the general formula (2.94),(2.95) given in Proposition 2.6. In the sequel, we do not give this expression for J (y 0 ), we only consider a numerical example of an optimal solution. Numerical example. We choose α = 0.4 and η = 0.442. The energy functional J(y 0 ) is depicted on Figure 5. The minimum of J(y 0 ) is reached at y * 0 0.6868. The corresponding optimal point x * is equal to x * 0.8376. The optimal solutions u and w are drawn on Figure 6. We point out that the functional J has a nontrivial behaviour with respect to y 0 , in particular J is a nonconvex function of y 0 . This indicates the difficulty and the pertinence of the two-dimensional shape optimization problem (1.22),(1.23) introduced at the beginning of this paper.
Conclusion
We introduced a shape optimization problem for a fluid-structure interaction system coupling the Stokes equations with the linear elasticity equation. We have shown that a shape optimization problem for a simplified model in one spatial dimension is well-posed, and we are able to fully characterize the shape derivatives associated to this one-dimensional free-boundary problem. All the (variational) technical tools we have employed for the study of the one-dimensional free-boundary problem have been developed with a view to tackling and solving the two-dimensional shape optimization problem presented in the introduction of this paper. We aim to extend our 1d techniques to the two-dimensional problem in order to obtain a rigorous statement of the 2d shape derivatives.
Figure 2. The elastic structure $\Omega_S$ is a deformation of a reference domain $\Omega_0$.
Due to the incompressibility property of the fluid, the volume of the elastic structure is conserved during the deformation. Hence, we must have $|\Omega_S| = |\Omega_0|$ and the elastic displacement $w$ satisfies
$$\text{(1.15)}\qquad \int_{\Omega_0} \det(\nabla X)\, dy = |\Omega_0|.$$
Figure 3. The bound $\delta_0(y_0, 0)$ on $f$ and $g$ for the well-posedness of (2.1)-(2.4) for $y_0 \in (0, 1)$.
As in Section 2.2, we transform the system (2.28) onto a fixed domain independent of $n$ by setting $\hat u_n(\hat x) = u_n(x)$ with $x = \varphi(\hat x)$ for $\hat x \in [0, \hat y^*_0]$ and $\hat w_n(\hat y) = w_n(y)$ with $y = \phi(\hat y)$ for $\hat y \in [\hat y^*_0, 1]$. The functions $\varphi$ and $\phi$ (see (2.26)) are given by (2.29).
Figure 5. The energy functional $y_0 \mapsto J(y_0)$.
Figure 6. The optimal solutions $u$ and $w$.
Proceeding as for the proof of the continuity of $c_{2,t}$ (see (2.61)-(2.64)), we deduce from (2.79) that there exists $\dot w \in H^2(y_0, 1)$ such that $d_{2,t} \rightharpoonup \dot w$ weakly in $H^2$ as $t \to y_0$. Moreover, we have that
$$\text{(2.81)}\qquad \Big\|\frac{1}{h}\big(\phi_t - 1\big) + \frac{d\phi_t}{dt}\Big|_{t=y_0}\Big\|_{L^\infty(y_0,1)} \longrightarrow 0 \quad \text{as } t \to y_0,$$
$$\text{(2.82)}\qquad \Big\|\frac{1}{h}\big(\phi_t\, g(\phi_t) - g\big) - \frac{d}{dt}\big(\phi_t\, g(\phi_t)\big)\Big|_{t=y_0}\Big\|_{L^\infty(y_0,1)} \longrightarrow 0 \quad \text{as } t \to y_0,$$
and $\dot w$ satisfies
$$\text{(2.83)}\qquad -\partial_{yy}\dot w + \partial_y\Big(\frac{d\phi_t}{dt}\Big|_{t=y_0}\,\partial_y w\Big) = \frac{d}{dt}\big(\phi_t\, g(\phi_t)\big)\Big|_{t=y_0} \ \text{ in } (y_0, 1), \qquad \dot w(1) = 0.$$
Using the fact that $w \in H^3(y_0, 1)$ and $-\partial_{yyy} w = \partial_y g$ in $(y_0, 1)$, we obtain by straightforward calculations that
$$\partial_y\Big(\frac{d\phi_t}{dt}\Big|_{t=y_0}\,\partial_y w\Big) - \frac{d}{dt}\big(\phi_t\, g(\phi_t)\big)\Big|_{t=y_0} = \partial_{yy}\Big((\partial_y w)\,\frac{d\phi_t}{dt}\Big|_{t=y_0}\Big) \ \text{ in } (y_0, 1).$$
Then (2.83) becomes
$$\text{(2.84)}\qquad -\partial_{yy}\dot w = -\partial_{yy}\Big((\partial_y w)\,\frac{d\phi_t}{dt}\Big|_{t=y_0}\Big) \ \text{ in } (y_0, 1), \qquad \dot w(1) = 0.$$
Finally, (2.80) leads to
$$\text{(2.85)}\qquad \partial_x u(x^*) - \dot x^*\,\frac{d}{ds}\varphi_s \,\cdots$$
| 42,097 | [
"1794",
"5275"
] | [
"418609",
"211251",
"211251"
] |
01492775 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01492775/file/978-3-642-39614-4_3_Chapter.pdf | Andreas Fuchs
email: [email protected]
Sigrid Gürgens
email: [email protected]
Preserving Confidentiality in Component Compositions
The preservation of any security property for the composition of components in software engineering is typically regarded a non-trivial issue. Among the different possible properties, confidentiality poses the most challenging one. The naive approach of assuming that confidentiality of a composition is satisfied if it is provided by the individual components may lead to insecure systems, as specific aspects of one component may have undesired effects on others. In this paper we investigate the composition of components that each on its own provide confidentiality of their data. We show that the complete behaviour between components needs to be considered, rather than focussing only on the single interaction points or the set of actions containing the confidential data. Our formal investigation reveals different possibilities for testing of correct compositions of components, for the coordinated distributed creation of composable components, and for the design of generally composable interfaces, ensuring the confidentiality of the composition.
Introduction
Software design and engineering make strong use of composition in many ways. From the orchestration of web services in a business process engine to the integration of libraries or object files by compilers and linkers, the principles of composition apply on all of these layers of abstraction.
Beyond the general problems of feature interaction, there exist many specific security-related challenges that can introduce serious flaws in a software product. A prominent example of such a flaw is the integration of TLS libraries into the German eID application [START_REF]News | Neuer Personalausweis: AusweisApp mit Lücken[END_REF], which caused update packages from any server with a valid certificate to be accepted, as the name within the certificate was not checked. In other cases, integrators of TLS libraries do not provide enough entropy for key generation, which has led to a series of servers on the Internet with similar private key values [START_REF] Heninger | Mining your Ps and Qs: Detection of widespread weak keys in network devices[END_REF].
Practical solutions for composition include the provisioning of verbal best-practice catalogues [START_REF] Anderson | Security Engineering: A guide to building dependable distributed systems[END_REF], tool-based solution databases [START_REF]Serenity[END_REF][START_REF]Trusted computing Engineering for Resource constrained Embedded Systems Applications[END_REF][6], or guides, tutorials, and code examples in general. However, little research so far targets the challenges imposed by composition at a more general and broader scope.
In this contribution we present an approach based on the formal semantics of our Security Modelling Framework SeMF (see e.g. [START_REF] Gürgens | On a formal framework for security properties[END_REF][START_REF] Gürgens | Parameter confidentiality[END_REF][START_REF] Gürgens | Abstractions preserving parameter confidentiality[END_REF]) that targets the investigation and validation of general component composition regarding the property of data confidentiality. SeMF has available a comprehensive vocabulary for statements of confidentiality that provides the necessary expressiveness to reason about conditions of general composability decoupled from any specific scenario.
In the following section we introduce a scenario that serves as a test case for our approach. It is composed of two components that (each on its own) provide a certain confidentiality property, but fail to do so when composed into a joined system. Section 3 gives a brief introduction to the SeMF framework. In Section 4 we introduce our formalization of system composition and demonstrate it using the example scenario. We then explain and formalize the conditions for confidentiality composition in Section 5 and illustrate them by means of the example scenario in Section 6. Section 7 provides an overview of related work on the composition of security, and Section 8 concludes the paper and provides an outlook on ongoing and future work.
Example Scenario
The provisioning and quality of entropy is a central aspect of many security functionalities. However, the generation of entropy and randomness in computers is a hard problem in its own right [START_REF] Kerrisk | LCE: Don't play dice with random numbers[END_REF], and at the same time programmers are usually not properly introduced to its challenges. Many code examples and explanations for randomness today advise using the current time or uptime as seed for a random number generator. This approach may be adequate for desktop applications started by the user at an unforeseeable as well as undetectable point in time. Whenever these conditions do not hold, however, as is the case especially in system service applications or embedded platforms [START_REF] Corbet | Random numbers for embedded devices[END_REF], such date/uptime values do not provide enough entropy. These scenarios rather require specialized entropy sources in the CPU, through a TPM, or via a SmartCard.
In our example scenario, we investigate such a case, i.e. a system which is composed of a security library for key generation that targets desktop applications whilst being utilized by an embedded platform system service. The KeyGenerator component hereby uses the current time of the system when being called in order to initialize its random number generator and to create the corresponding key. The Application component of the system represents a system service that is started during boot and calls the KeyGenerator for a key to be generated. Both components have the property of confidentiality for the key that is generated / further used. However, their composition introduces side effects that make the key calculable for a third party.
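Before formalizing this, the side effect can be made concrete with a toy Python sketch: the key is a pure function of the (observable) boot time plus two fixed delays, so an observer who knows the delays can simply recompute it. All concrete values here (the delays, the boot timestamp, the 32-bit key size) are illustrative assumptions, not part of the scenario's specification.

```python
import random

DELTA1, DELTA2 = 3, 2            # assumed, publicly known delays (in ticks)

def gen_key(now):                # KeyGenerator: seeds its PRNG with the time
    return random.Random(now).getrandbits(32)   # key k(t) is a function of t

def run_system(boot_time):       # Application: boots, then requests a key
    call_time = boot_time + DELTA1               # service starts during boot
    return gen_key(call_time + DELTA2)

boot = 1_700_000_000             # Eve can observe the time of system boot
key = run_system(boot)
eve_guess = gen_key(boot + DELTA1 + DELTA2)
assert eve_guess == key          # the 'confidential' key is fully determined
```

On a desktop system the boot and call times would be unknown to an attacker and the prediction would fail; here they are observable, which is exactly the mismatch the remainder of this paper formalizes.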
Formal Semantics of SeMF
In our Security Modelling Framework SeMF, the specification of any kind of cooperating system is composed of (i) a set P of agents (e.g. an application and a key generator), (ii) a set Σ of actions, (iii) the system's behaviour B ⊆ Σ* (Σ* denoting the set of all words composed of elements in Σ), (iv) the local views λ_P : Σ* → Σ*_P, and (v) initial knowledge W_P ⊆ Σ* of agents P ∈ P. The behaviour B of a discrete system S can be formally described by the set of its possible sequences of actions (which is always prefix closed). An agent P's initial knowledge W_P about the system consists of all traces the agent initially considers possible. This includes a representation of conclusions that an agent may be able to derive; i.e. that the reception of a message implies the sending of this message to have happened before. Finally, an agent's local view essentially captures what an agent can see of the system. Together, the local view and initial knowledge represent what an agent may know about the system at a given point in time, based on what it knows in general, has seen, and has concluded from this. Different formal models of the same system are partially ordered with respect to the level of abstraction. Formally, abstractions are described by alphabetic language homomorphisms that map action sequences of a finer abstraction level to action sequences of a more abstract level while respecting concatenation of actions. In fact, the agents' local views are expressed by homomorphisms. Note that homomorphisms are in general neither injective nor surjective. For Σ_1 ⊆ Σ_2, the homomorphism h : Σ*_2 → Σ*_1 that keeps all actions of Σ_1 and maps those in Σ_2 \ Σ_1 onto the empty word is called projection homomorphism.
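As a concrete, purely illustrative reading of this five-tuple, one possible minimal encoding in Python is sketched below; the typing choices (traces as tuples of strings, the empty string encoding the empty word ε) are assumptions of the sketch, not part of SeMF.

```python
# Illustrative data representation of a SeMF system spec (P, Σ, B, λ, W).
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

Action = str
Trace = Tuple[Action, ...]

@dataclass(frozen=True)
class System:
    agents: FrozenSet[str]                           # the set P
    actions: FrozenSet[Action]                       # the set Σ
    behaviour: FrozenSet[Trace]                      # B, prefix closed
    local_view: Dict[str, Callable[[Action], str]]   # λ_P; '' encodes ε
    knowledge: Dict[str, FrozenSet[Trace]]           # initial knowledge W_P
```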
In SeMF, security properties are defined in terms of such a system specification. Note that the system specification does not require a particular level of abstraction. The underlying formal semantics then allows formal validation, i.e. allows one to prove that a specific formal model of a system provides specific security properties.
Confidentiality in SeMF
Based on the SeMF semantics, we have specified various instantiations of security properties such as precedence, integrity, authenticity and trust (see e.g. [START_REF] Gürgens | On a formal framework for security properties[END_REF][START_REF] Fuchs | A Formal Notion of Trust -Enabling Reasoning about Security Properties[END_REF][START_REF] Fuchs | Formal Notions of Trust and Confidentiality -Enabling Reasoning about System Security[END_REF]). In this paper however we focus on our notion of parameter confidentiality [START_REF] Gürgens | Parameter confidentiality[END_REF][START_REF] Gürgens | Abstractions preserving parameter confidentiality[END_REF]. Various aspects are included in this concept. First, we have to consider an attacker Eve's local view $\lambda_{Eve}$ of the sequence $\omega$ she has monitored, and thus the set of sequences $\lambda_{Eve}^{-1}(\lambda_{Eve}(\omega))$ that are, from Eve's view, identical to $\omega$. Second, Eve can discard some of the sequences from this set, depending on her knowledge of the system and the system assumptions, all formalized in $W_{Eve}$. For example, there may exist interdependencies between the parameter $p$ to be confidential in different actions, such as a credit card number remaining the same for a long time, in which case Eve considers only those sequences of actions possible in which an agent always uses the same credit card number. The set of sequences Eve considers possible after $\omega$ is $\lambda_{Eve}^{-1}(\lambda_{Eve}(\omega)) \cap W_{Eve}$. Third, we need to identify the actions in which the respective parameter(s) shall be confidential. Many actions are independent from these and do not influence confidentiality, thus need not be considered. For this we use a homomorphism $\mu : \Sigma^* \to (\Sigma_t \times M)^*$ that maps actions to be considered onto a tuple (actiontype, parameter).
Essentially, parameter confidentiality is captured by requiring that, for the actions that shall be confidential for Eve with respect to some parameter $p$, all possible (combinations of) values for $p$ occur in the set of actions that Eve considers possible. What the possible combinations of parameters are is the fourth aspect that needs to be specified, as we may want to allow Eve to know some of the interdependencies between parameters (e.g. in some cases Eve may be allowed to know that the credit card number remains the same, in others we may want to require Eve not to know this). The notion of $(L, M)$-completeness captures which dependencies are allowed to be known within a set of sequences of actions. For its formal definition, some additional notation is needed: For $f : M \to M'$ and $g : N \to N'$ we define $(f, g) : M \times N \to M' \times N'$ by $(f, g)(x, y) := (f(x), g(y))$. The identity on $M$ is denoted by $i_M : M \to M$, while $M^{\mathbb{N}}$ denotes the set of all mappings from $\mathbb{N}$ to $M$, and $p_\tau : (\Sigma_t \times M) \to \Sigma_t$ is a mapping that removes the parameters.
Definition 1 ((L, M)-completeness) Let $L \subseteq (\Sigma_t \times \mathbb{N})^*$ and let $M$ be a set of parameters. A language $K \subseteq (\Sigma_t \times M)^*$ is called $(L, M)$-complete if
$$K = \bigcup_{f \in M^{\mathbb{N}}} (i_{\Sigma_t}, f)(L).$$
The definition of parameter confidentiality captures all the different aspects described above:

Definition 2 (Parameter Confidentiality) Let $M$ be a parameter set, $\Sigma$ a set of actions, $\Sigma_t$ a set of types, $\mu : \Sigma^* \to (\Sigma_t \times M)^*$ a homomorphism, and $L \subseteq (\Sigma_t \times \mathbb{N})^*$. Then $M$ is parameter-confidential for agent $R \in P$ with respect to $(L, M)$-completeness if there exists an $(L, M)$-complete language $K \subseteq (\Sigma_t \times M)^*$ with $K \supseteq \mu(W_R)$ such that for each $\omega \in B$ holds
$$\mu\big(\lambda_R^{-1}(\lambda_R(\omega)) \cap W_R\big) \supseteq p_\tau^{-1}\Big(p_\tau\big(\mu(\lambda_R^{-1}(\lambda_R(\omega)) \cap W_R)\big)\Big) \cap K.$$

Here $p_\tau^{-1} \circ p_\tau$ first removes and then adds again all values of the parameter that shall be confidential, i.e. constructs all possible value combinations. $(L, M)$-completeness of $K$ captures that $R$ is required to consider all combinations of parameter values possible, except for those that it is allowed to disregard (i.e. those that are not in $K$). Hence the right hand side of the inequality specifies all sequences of actions agent $R$ shall consider as the ones that have possibly happened after $\omega$ has happened. In contrast, the left hand side represents those sequences that $R$ actually does consider as those that have possibly happened. For further explanations we refer the reader to [START_REF] Gürgens | Parameter confidentiality[END_REF][START_REF] Gürgens | Abstractions preserving parameter confidentiality[END_REF].
Notation: We will use $\Lambda_R(\omega, W_R) = \lambda_R^{-1}(\lambda_R(\omega)) \cap W_R$ as an abbreviation.
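For very small finite systems, Definition 2 can be checked by brute force. The Python sketch below is one such check; note that it fixes one particular (L, M)-complete language K, namely the position-wise closure of µ(W_R) over the parameter set, whereas the definition only requires the existence of a suitable K. All data-structure choices (traces as tuples, per-action local views) are simplifying assumptions of the sketch.

```python
# Brute-force sketch of Definition 2 on finite trace sets (illustrative only).
from itertools import product

def check_parameter_confidentiality(B, W_R, lam_R, mu, params):
    """B, W_R: finite sets of traces (tuples of actions).
    lam_R(a): R's view of a single action ('' if invisible).
    mu(a): a (type, parameter) pair, or None for irrelevant actions."""
    def view(trace):                       # homomorphic extension of lam_R
        return tuple(lam_R(a) for a in trace if lam_R(a))
    def mu_word(trace):                    # image of a trace under mu
        return tuple(mu(a) for a in trace if mu(a) is not None)
    def p_tau(word):                       # strip the parameters
        return tuple(t for (t, _) in word)
    K = set()                              # one (L, M)-complete choice of K:
    for w in W_R:                          # re-parameterise mu(W_R) freely,
        skel = p_tau(mu_word(w))           # position by position
        for vals in product(params, repeat=len(skel)):
            K.add(tuple(zip(skel, vals)))
    for omega in B:
        Lam = {w for w in W_R if view(w) == view(omega)}   # Λ_R(ω, W_R)
        lhs = {mu_word(w) for w in Lam}
        skels = {p_tau(x) for x in lhs}
        rhs = {x for x in K if p_tau(x) in skels}
        if not rhs <= lhs:                 # the inclusion of Definition 2 fails
            return False
    return True
```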
Modelling Composition
Based on SeMF we now introduce the definition of the composition of two systems with the same set of agents and a shared interface. Applying this definition, we then specify the composition of the scenario application and key generator.
Formalizing Composition
The idea of our formalization is to interpret the individual components $S_1$ and $S_2$ as homomorphic images of the composed system, and to express this system in terms of the inverses of the components with respect to the homomorphisms. Figure 1 illustrates the relationship between the systems: Both components $S_1$ and $S_2$ are abstractions (i.e. images of homomorphisms $h_1$ and $h_2$, respectively) of their composition $S_0$, while $S_1$ and $S_2$ in turn are abstracted (by homomorphisms $h_{IF1}$ and $h_{IF2}$, respectively) onto their joined interface. Agent $P$'s initial knowledge about the composition only contains those sequences that $P$ considers possible for both $S_1$ and $S_2$; hence it is given by the intersection of the inverses of the two homomorphisms. Further, agents' local views for the composed system need to capture what agents can see in both $S_1$ and $S_2$. The projections of $S_1$ and $S_2$ into the interface system will be of interest for a theorem to be introduced in Section 6.2. In the following we formalize this composition approach.

Fig. 1. System relations: the interface system $S_{IF}$, the components $S_1$ and $S_2$, their composition $S_0$, and the homomorphisms $h_1$, $h_2$, $h_{IF1}$, $h_{IF2}$.
Definition 3 (System Composition) Let $S_1$ and $S_2$ be two systems with $\Sigma_i$ their respective sets of actions, $P_1 = P_2$ their set of agents, $\lambda^i_P$ their agents' local views, and $W^i_P$ their agents' initial knowledge, respectively ($i = 1, 2$). Let further $\Sigma_0 := \Sigma_1 \cup \Sigma_2$, and $h_i : \Sigma^*_0 \to \Sigma^*_i$ the projection homomorphisms into $\Sigma_i$ ($i = 1, 2$). Then the composition $S_0$ of $S_1$ and $S_2$ is constructed as follows:

- $P_0 := P_1 = P_2$,
- $B_0 := h_1^{-1}(B_1) \cap h_2^{-1}(B_2)$,
- $W^0_P := h_1^{-1}(W^1_P) \cap h_2^{-1}(W^2_P)$,
- In order to define the local view of agents in $S_0$, we define for $i = 1, 2$:
$$\bar\lambda^i_P : \Sigma_0 \to \Sigma_{P,i}, \qquad \bar\lambda^i_P(a) = \begin{cases} \lambda^i_P(a) & \text{if } a \in \Sigma_i \\ \varepsilon & \text{else} \end{cases}$$
Then the local view of $S_0$ can be defined as follows: $\lambda^0_P(a) := (\bar\lambda^1_P(a), \bar\lambda^2_P(a))$.

Further, for $\Sigma_{IF} := \Sigma_1 \cap \Sigma_2$, the projection homomorphisms into $\Sigma^*_{IF}$ are denoted by $h_{IFi} : \Sigma^*_i \to \Sigma^*_{IF}$ ($i = 1, 2$). Note that the above definition is equivalent to $\bar\lambda^1_P(a) = \lambda^1_P(h_1(a))$, $\bar\lambda^2_P(a) = \lambda^2_P(h_2(a))$. Also, from the above definition it follows that $\Sigma_{0,P} = \Sigma_{1,P} \times \Sigma_{2,P}$, with $\Sigma_{i,P}$ being the image of $\lambda^i_P$ ($i = 1, 2$), and
$$(\lambda^0_P)^{-1}((x, y)) = (\bar\lambda^1_P)^{-1}(x) \cap (\bar\lambda^2_P)^{-1}(y).$$
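For finite alphabets and bounded trace length, the composed behaviour $B_0 = h_1^{-1}(B_1) \cap h_2^{-1}(B_2)$ can be enumerated directly. The Python sketch below does this, exploiting that behaviours are prefix closed to prune dead branches; the length bound is an arbitrary assumption of the sketch.

```python
# Sketch: enumerate B0 = h1^{-1}(B1) ∩ h2^{-1}(B2) up to a length bound.
def project(trace, sigma):                 # the projection homomorphism h_i
    return tuple(a for a in trace if a in sigma)

def compose_behaviour(B1, B2, sigma1, sigma2, max_len):
    sigma0 = sorted(sigma1 | sigma2)
    frontier, B0 = [()], {()}
    for _ in range(max_len):
        nxt = []
        for w in frontier:
            for a in sigma0:
                w2 = w + (a,)
                # prefix closedness of B1, B2 makes this pruning sound
                if project(w2, sigma1) in B1 and project(w2, sigma2) in B2:
                    B0.add(w2)
                    nxt.append(w2)
        frontier = nxt
    return B0
```

For example, with B1 the prefixes of (call, ret) and B2 the prefixes of (call, gen, ret) over the corresponding alphabets, the result contains exactly those interleavings whose projections lie in both behaviours.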
Composing the scenario systems
We now model the interface composition of an application ($S_1$) and a key generation module ($S_2$) following the above definition. We assume that the application generates a key directly after each system boot. The model of the application is independent of any key generation module that is actually being used, and abstracts from the actual key generation (this is not part of the application model and happens magically).
The application model S 1 can be specified as follows:
- Agents of this model (and of $S_2$) are the application, the key generation module, and a third agent that is not allowed to know the key: $P_1 = \{App, KGen, Eve\}$
- The system is booted, the application calls the key generation module, and the key generation module returns a key $key \in K$, all actions happening at time $t \in T$:
$$\Sigma_1 = \bigcup_{t \in T,\, key \in K} \{boot(t),\ callGenKey(App, t),\ returnKey(KGen, key, t)\}$$
- We assume that Eve can see the time of system boot but can neither see the key generation request nor the key that is returned:
$$\lambda^1_{Eve}(boot(t)) = boot(t); \qquad \forall a \in \Sigma_1 \setminus \{boot(t) \mid t \in T\} : \lambda^1_{Eve}(a) = \varepsilon$$
- Eve knows that before a key generation request, the system has been booted. For simplicity we assume the period of time between these two actions to be equal to $\delta_1$. Eve may further know that the time of actions in a sequence is strictly monotonically increasing; this is however not relevant for the given scenario. Hence sequences of actions that contradict this fact are not included in Eve's initial knowledge. Formally:
$$W^1_{Eve} = \Sigma_1^* \setminus \bigcup_{t_j - t_i = \delta_1} (\Sigma_1 \setminus \{boot(t_i)\})^*\,\{callGenKey(App, t_j)\}\,\Sigma_1^*$$
- We focus on the confidentiality of the key returned to the application; hence $\mu_1$ maps $returnKey(KGen, key, t_j)$ onto $(returnKey(KGen), key)$ and all other actions onto the empty word.
According to this system model, it is easy to see that the returned key is parameter confidential for Eve regarding $\mu_1$ and $(L, M)$-completeness regarding an adequate $L$ and the set of possible keys $M$. We now model a concrete key generation module. This module is not able to retrieve a seed for key generation other than the system clock.

- $P_2 = \{App, KGen, Eve\}$
- The key generation module is called by the application, generates a key, and returns this key, all actions occurring at a specific time $t \in T$:
$$\Sigma_2 = \bigcup_{t \in T,\, key \in K} \{callGenKey(App, t),\ genKey(KGen, key, t),\ returnKey(KGen, key, t)\}$$
- We assume that Eve cannot see any of the actions of the key generation module, hence $\lambda^2_{Eve}(\Sigma_2) = \varepsilon$
- Eve knows that before a key can be generated, the respective key generation call must have happened, and that the time passing between these two actions is at most $\delta_2$. Eve also knows that the key generator only returns keys it has generated before. Eve finally knows that the system time is used as seed for key generation, i.e. a key generated at time $t_j$ always has the value $k(t_j)$. Formally:
$$W^2_{Eve} = \Sigma_2^* \setminus \bigcup_{t_j - t_i = \delta_2} (\Sigma_2 \setminus \{callGenKey(App, t_i)\})^*\,\{genKey(KGen, key, t_j)\}\,\Sigma_2^*$$
$$\setminus \bigcup_{key_m = key_n} (\Sigma_2 \setminus \{genKey(KGen, key_m, t_j)\})^*\,\{returnKey(KGen, key_n, t_k)\}\,\Sigma_2^* \;\setminus\; \bigcup_{key \neq k(t_j)} \Sigma_2^*\,\{genKey(KGen, key, t_j)\}\,\Sigma_2^*$$
- As above, we focus on the confidentiality of the key returned to the application; hence $\mu_2$ maps $returnKey(KGen, key, t_j)$ onto $(returnKey(KGen), key)$ and all other actions onto the empty word.
Also in this system model, it is easy to see that the returned key is parameter confidential for Eve regarding $\mu_2$ and $(L, M)$-completeness regarding the same $L$ and set of possible keys $M$. Following Definition 3 we can now construct the composed system $S_0$ with $\Sigma_{IF} = \bigcup_{t \in T,\, key \in K} \{callGenKey(App, t),\ returnKey(KGen, key, t)\}$:

- $P_0 = \{App, KGen, Eve\}$
- $\Sigma_0 = \bigcup_{t \in T,\, key \in K} \{boot(t),\ callGenKey(App, t),\ genKey(KGen, key, t),\ returnKey(KGen, key, t)\}$
- $\lambda^0_{Eve}(boot(t)) = (boot(t), \varepsilon)$; $\forall a \in \Sigma_0 \setminus \bigcup_{t \in T}\{boot(t)\} : \lambda^0_{Eve}(a) = (\varepsilon, \varepsilon)$
- $$W^0_{Eve} = \Sigma_0^* \setminus \bigcup_{t_j - t_i = \delta_1} (\Sigma_0 \setminus \{boot(t_i)\})^*\,\{callGenKey(App, t_j)\}\,\Sigma_0^* \;\setminus\; \bigcup_{t_k - t_j = \delta_2} (\Sigma_0 \setminus \{callGenKey(App, t_j)\})^*\,\{genKey(KGen, key, t_k)\}\,\Sigma_0^*$$
$$\setminus \bigcup_{key_m = key_n} (\Sigma_0 \setminus \{genKey(KGen, key_m, t_k)\})^*\,\{returnKey(KGen, key_n, t_l)\}\,\Sigma_0^* \;\setminus\; \bigcup_{key \neq k(t_k)} \Sigma_0^*\,\{genKey(KGen, key, t_k)\}\,\Sigma_0^*$$
The question that now needs to be answered is whether or not confidentiality is preserved in this system composition. In the following section, we will introduce theorems that can be used to answer this question.
Investigating the Composition of Confidentiality
In this section we provide sufficient conditions under which a composition of two systems preserves the confidentiality properties of each of its components. We start with a very generic approach that is most broadly applicable; however, it depends on a concrete inquiry into whether its sufficient conditions are satisfied. Then we provide two more specialized conditions that are less broadly applicable but easier to test.
For each of these cases we first provide a verbal explanation of the concept and then its formal representation. Readers not interested in these formalizations may skip the latter parts. The formalizations all refer to the representation of composition as described in the previous section. An application to the example scenario will be given in Section 6.
For the proofs in this Section we utilize the following lemmata and considerations: The first lemma provides a relation between the local view in the composed system based on the local views from each of the component systems within the integration. This directly reflects the construction rules from Definition 3:
Lemma 1. $(\lambda^0_P)^{-1}(\lambda^0_P(\omega)) = h_1^{-1}\big((\lambda^1_P)^{-1}(\lambda^1_P(h_1(\omega)))\big) \cap h_2^{-1}\big((\lambda^2_P)^{-1}(\lambda^2_P(h_2(\omega)))\big)$

Proof. $(\lambda^0_P)^{-1}(\lambda^0_P(\omega)) = (\lambda^0_P)^{-1}\big((\bar\lambda^1_P(\omega), \bar\lambda^2_P(\omega))\big) = (\bar\lambda^1_P)^{-1}(\bar\lambda^1_P(\omega)) \cap (\bar\lambda^2_P)^{-1}(\bar\lambda^2_P(\omega)) = h_1^{-1}\big((\lambda^1_P)^{-1}(\lambda^1_P(h_1(\omega)))\big) \cap h_2^{-1}\big((\lambda^2_P)^{-1}(\lambda^2_P(h_2(\omega)))\big)$
Given a composition, we need to find the traces of actions in component 1 that correspond to those traces in component 2, and vice versa. The construction of these relations can be performed via the interface system $S_{IF}$ as well as via the composed system $S_0$, as expressed by the following lemma:
Lemma 2. Given a system composition as in Definition 3, $h_1 \circ h_2^{-1} = (h_{IF1})^{-1} \circ h_{IF2}$.

Proof. For $x \in \Sigma^*_2$ it always holds that $h_{IF2}(x) = h_1(x)$. For $\lozenge$ denoting the shuffle product,
$$h_1(h_2^{-1}(x)) = h_1\big(x \lozenge (\Sigma_1 \setminus \Sigma_2)^*\big) = h_1\big(x \lozenge (\Sigma_1 \setminus \Sigma_{IF})^*\big) = h_1(x) \lozenge (\Sigma_1 \setminus \Sigma_{IF})^* = h_{IF2}(x) \lozenge (\Sigma_1 \setminus \Sigma_{IF})^* = (h_{IF1})^{-1}(h_{IF2}(x)).$$
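Lemma 2 also lends itself to a finite sanity check: on a toy alphabet one can enumerate all words up to a bound and compare both sides. The Python sketch below does so; the alphabets and the bound N are arbitrary assumptions, and the length cap on the right-hand side merely compensates for the truncation of the enumeration.

```python
# Finite sanity check of Lemma 2: h1 ∘ h2^{-1} = (h_IF1)^{-1} ∘ h_IF2.
from itertools import product

def proj(w, sigma):                       # projection homomorphism
    return tuple(a for a in w if a in sigma)

def words(sigma, n):                      # all words over sigma up to length n
    for l in range(n + 1):
        yield from product(sorted(sigma), repeat=l)

sigma1, sigma2 = {'a', 'i'}, {'b', 'i'}   # 'i' is the shared interface action
sigma0, sigmaIF = sigma1 | sigma2, sigma1 & sigma2
N = 4                                     # enumeration bound (toy choice)
for x in words(sigma2, N):
    bound = N - sum(1 for a in x if a not in sigmaIF)  # cap due to truncation
    lhs = {proj(z, sigma1) for z in words(sigma0, N) if proj(z, sigma2) == x}
    rhs = {w for w in words(sigma1, bound)
           if proj(w, sigmaIF) == proj(x, sigmaIF)}
    assert lhs == rhs
print("Lemma 2 verified on all words up to length", N)
```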
For arbitrary sets $X$ and $Y$ and $A, C \subseteq X$, $B, D \subseteq Y$ and a mapping $f : X \to Y$, we always have the equality $f^{-1}(B) \cap f^{-1}(D) = f^{-1}(B \cap D)$, but only the inclusion $f(A \cap C) \subseteq f(A) \cap f(C)$. However, for particular intersections we have equality:

Lemma 3. Let $X, Y$ be arbitrary sets, $f : X \to Y$ a mapping, and $A \subseteq X$, $B \subseteq Y$. Then $f(A \cap f^{-1}(B)) = f(A) \cap B$.
For the proof of this lemma we refer the reader to [START_REF] Gürgens | Abstractions preserving parameter confidentiality[END_REF].
General Conditions for Confidentiality Composition
The definition of confidentiality in SeMF relies on the extraction and testing of those actions and data that are identified as being confidential. This extraction is applied to every state that the system may take, and is based on what an attacker has observed up to this point and what she can deduce from these observations through her initial knowledge. When two systems that both provide confidentiality are composed into a new system (w.r.t. some common interface), the conclusion about some data that an attacker may derive at any given state in the composed system is the combination of conclusions she has derived with regard to each of the components. If this combination results in what the attacker is allowed to know in the system composition, then obviously confidentiality is satisfied in the composition.
Within the semantics for confidentiality of SeMF, this combination of conclusions about the sequences that may have happened in the individual systems, and about the value of data used in these sequences, is represented as the intersection of these sets; i.e., the smaller a set becomes, the more conclusions an attacker can draw, because she considers fewer values as possible candidates for the confidential data.
It should be noted, though, that these considerations have to be carried out for every state, i.e. every possible sequence of actions, that the system may take. Further, they require a level of detail that would allow for the direct assessment of confidentiality of the composed system instead. However, while these conditions are of less practical relevance, they form the basis for the more restricted conditions presented in the subsequent sections. Formally this approach can be expressed as follows:

Definition 4 Given a composition as defined in Definition 3, we call $h_1$ confidentiality composable with $h_2$ for $R$ with respect to $\mu_0$, $\mu_1$ and $\mu_2$, if for all $\omega \in B_0$ holds:
$$\mu_0[\Lambda_{R0}(\omega, W^0_R)] = \mu_1[\Lambda_{R1}(h_1(\omega), W^1_R)] \cap \mu_2[\Lambda_{R2}(h_2(\omega), W^2_R)]$$
Theorem 1 Given a confidentiality composable composition as defined in Definition 4, if $S_1$ and $S_2$ both are parameter confidential for agent $R$ with respect to some $\mu_1$ and $\mu_2$ with $\mu_1 \circ h_1 = \mu_2 \circ h_2$, then $S_0$ is parameter confidential for $R$ with respect to $\mu_0 := \mu_1 \circ h_1 = \mu_2 \circ h_2$ and $L_0 := L_1 = L_2$, $M_0 := M_1 = M_2$.

Proof. $S_1$ and $S_2$ being parameter confidential, together with $h_1(B_0) \subseteq B_1$ and $h_2(B_0) \subseteq B_2$, implies for all $\omega \in B_0$:
$$\mu_1[\Lambda_{R1}(h_1(\omega), W^1_R)] \supseteq p_\tau^{-1}\big(p_\tau(\mu_1[\Lambda_{R1}(h_1(\omega), W^1_R)])\big) \cap K \quad\text{and}\quad \mu_2[\Lambda_{R2}(h_2(\omega), W^2_R)] \supseteq p_\tau^{-1}\big(p_\tau(\mu_2[\Lambda_{R2}(h_2(\omega), W^2_R)])\big) \cap K.$$
Taking the intersection of these inequalities leads to
$$\mu_1[\Lambda_{R1}(h_1(\omega), W^1_R)] \cap \mu_2[\Lambda_{R2}(h_2(\omega), W^2_R)] \supseteq p_\tau^{-1}\big(p_\tau(\mu_1[\Lambda_{R1}(h_1(\omega), W^1_R)])\big) \cap p_\tau^{-1}\big(p_\tau(\mu_2[\Lambda_{R2}(h_2(\omega), W^2_R)])\big) \cap K$$
$$= p_\tau^{-1}\big[p_\tau(\mu_1[\Lambda_{R1}(h_1(\omega), W^1_R)]) \cap p_\tau(\mu_2[\Lambda_{R2}(h_2(\omega), W^2_R)])\big] \cap K \supseteq p_\tau^{-1}\big(p_\tau(\mu_1[\Lambda_{R1}(h_1(\omega), W^1_R)] \cap \mu_2[\Lambda_{R2}(h_2(\omega), W^2_R)])\big) \cap K.$$
By the assumption of $h_1$ and $h_2$ being confidentiality composable, it follows that
$$\mu_0[\Lambda_{R0}(\omega, W^0_R)] \supseteq p_\tau^{-1}\big(p_\tau(\mu_0[\Lambda_{R0}(\omega, W^0_R)])\big) \cap K.$$
Independently Testable Conditions for Confidentiality Composition
Testing for the confidentiality of data by analysing data values considered possible by the attacker, as presented in the previous approach, is performed on the same level of detail as the direct assessment of confidentiality. In the approach presented in this section, we instead perform an assessment of the usage of the interface by the composed components regarding observations and knowledge that can be gained by an attacker. This approach makes it possible for two component designers to agree on the information about the components' interface that an attacker may obtain, and thereby allows for a more distributed development of each of the components.
For a given state (i.e. sequence of actions) in the composed system, the conclusions regarding the interface behaviour that an attacker can draw from her observations and initial knowledge from each of the components must be equal. Consequently, during the design of the interface the component designers must agree on the interface behaviour that shall be considered possible by the attacker when observing the behaviour of the individual components.
The interaction of designers can be further decoupled by overestimating the set of possible states: Instead of considering all possible states / sequences of actions of the composed system, the designers may only define the set of possible sequences of actions at the interface (interface behaviour). This set can then be associated with the sequences considered possible by the attacker in each of the components, which leads to an agreement over the attacker's deductive capabilities.
The component designers can then independently assess whether their component fulfils this requirement (equality of the interface behaviour concluded from the individual components) by focussing on all sequences of actions that their component can take that will result in one of the agreed interface behaviour sequences.

Definition 5 A composition following Definition 3 is called confidentiality preserving if the following assumption holds for all $P \in P_0$, $\omega \in B_0$:

a) $h_{IF1}\big((\lambda^1_P)^{-1}(\lambda^1_P(h_1(\omega))) \cap W^1_P\big) = h_{IF2}\big((\lambda^2_P)^{-1}(\lambda^2_P(h_2(\omega))) \cap W^2_P\big)$

b) Alternatively, for all $P \in P_0$ the interface behaviour considered possible in each of the two components coincides with a commonly agreed set of interface sequences, which implies condition a) by overestimation of possible component state combinations.

Theorem 2 Given a confidentiality preserving composition according to Definition 5, and given that system $S_1$ has a confidentiality property w.r.t. some $\mu_1$ and $K$, then $S_0$ has the confidentiality property regarding $\mu_0 = \mu_1 \circ h_1$ and the same $K$.

Design of Generally Composable Component Interfaces

This final approach for composition targets the design of interfaces between components. The goal is to design the interface between two components in such a way that no additional considerations have to be made when composing confidentiality properties. This is for example possible if an interface handles only the single transfer of confidential data. Obviously, if this data is handled in a confidential way by both components, there cannot be any side effects within the interface that may destroy the confidentiality property. This is expressed by testing that, for any two combinations of sequences of actions within the interface, the extraction of confidential data from their combination equals the candidates that result from combining the candidates derived independently from each of the sequences.

Most notably, in this approach it is not necessary to assess the capabilities (in terms of local view and initial knowledge) of a possible attacker. The design of the interface will make it impossible for any attacker to gain an advantage from the composition of the components, as long as they each provide confidentiality of the data. Formally, this is expressed as:

Definition 6 A composition following Definition 3 has a generally composable interface with respect to some $\mu_{IF}$ if $\mu_{IF}(A \cap B) = \mu_{IF}(A) \cap \mu_{IF}(B)$ for any two sets $A, B$ of interface sequences. Trivially, if $\mu_{IF}$ is an isomorphism, the above property is implied.
6 Revisiting the Scenarios
In this section, we revisit the scenario composition introduced in Section 4.2 and demonstrate where and how this composition fails with regard to the formal considerations presented in Section 5. From the description in Section 2 it is already known that the example scenario does not preserve confidentiality under composition. In this section we demonstrate how our sufficient conditions, if not met, give hints about the possible reasons why confidentiality is violated in the composition, and how the components can be changed in order to preserve confidentiality. For the following illustrations, we do not require the point in time at which a key is returned for assessing the confidentiality of the key; hence we define $\Sigma_t := \{returnKey(KGen)\}$. For ease of reading we further simplify the system by restricting it to one single run, i.e. $\forall a \in \Sigma, \omega \in B : card(a, alph(\omega)) = 1$. This results in a considerable reduction of complexity but does not affect the applicability of our methods. Analogous results can be obtained for the full system behaviour.
General Conditions for Confidentiality Composition
Following the system definitions in Section 4.2, we investigate the preservation of confidentiality in the example composition. We demonstrate that Theorem 1 is not applicable and show how this fact can be used to identify the side effects that violate the confidentiality in the composed system. We use the following sequence of actions:
$$\omega_0 = boot(t_1)\ callGenKey(App, t_2)\ genKey(KGen, key_0, t_3)\ returnKey(KGen, key_0, t_4), \quad\text{with } t_2 = t_1 + \delta_1 \text{ and } t_3 = t_2 + \delta_2.$$
We start by assessing the left hand side of the equation of Definition 4, followed by the two sets for the right hand side.
Given $\omega_0$, we can assess the sequences that Eve considers possible in $S_0$ (with $pre(\omega)$ denoting the set of prefixes of $\omega$):
$$\Lambda^0_{Eve}(\omega_0, W^0_{Eve}) = pre\Big[\bigcup_{t_x \in T} \{boot(t_1)\ callGenKey(App, t_1 + \delta_1)\ genKey(KGen, key_0, t_1 + \delta_1 + \delta_2)\ returnKey(KGen, key_0, t_x)\}\Big] \setminus \{\varepsilon\}$$
Since Eve knows $\delta_1$ and $\delta_2$, and since the key is completely determined by its time of generation, she only considers one value possible for the returned key:
$$\mu_0[\Lambda^0_{Eve}(\omega_0, W^0_{Eve})] = \{(returnKey(KGen), key_0)\} \quad\text{with } key_0 = k(t_1 + \delta_1 + \delta_2)$$
Regarding Eve's conception with respect to each of the component systems, we again assess all those sequences that Eve considers possible for the respective images of $\omega_0$ in these systems:
$$\Lambda^1_{Eve}(h_1(\omega_0), W^1_{Eve}) = pre\Big[\bigcup_{t_x \in T,\, key_i \in K} \{boot(t_1)\ callGenKey(App, t_1 + \delta_1)\ returnKey(KGen, key_i, t_x)\}\Big] \setminus \{\varepsilon\}$$
$$\Lambda^2_{Eve}(h_2(\omega_0), W^2_{Eve}) = pre\Big[\bigcup_{t_x, t_y \in T} \{callGenKey(App, t_x)\ genKey(KGen, key_j, t_x + \delta_2)\ returnKey(KGen, key_j, t_y)\}\Big] \quad\text{with } key_j = k(t_x + \delta_2)$$
This leads to the following sets of values that Eve considers as candidates for the confidential data (as $t_x$ originates from all of $T$, every $key_i \in K$ is possible):
$$\mu_1[\Lambda^1_{Eve}(h_1(\omega_0), W^1_{Eve})] = \bigcup_{key_i \in K} \{(returnKey(KGen), key_i)\}$$
$$\mu_2[\Lambda^2_{Eve}(h_2(\omega_0), W^2_{Eve})] = \bigcup_{key_j \in K} \{(returnKey(KGen), key_j)\} \cup \{\varepsilon\}$$
Coming back to Definition 4, we can see that the values considered possible by Eve in the composition do not equal the combined (i.e. intersected) knowledge from each of the component systems:
$$\{(returnKey(KGen), key_0)\} \neq \bigcup_{key_i \in K} \{(returnKey(KGen), key_i)\} \cap \Big(\bigcup_{key_j \in K} \{(returnKey(KGen), key_j)\} \cup \{\varepsilon\}\Big)$$
which implies
$$\mu_0[\Lambda^0_{Eve}(\omega_0, W^0_{Eve})] \neq \mu_1[\Lambda^1_{Eve}(h_1(\omega_0), W^1_{Eve})] \cap \mu_2[\Lambda^2_{Eve}(h_2(\omega_0), W^2_{Eve})]$$
It can be seen, however, that if $t_1$ or $\delta_1$ were unknown to Eve, confidentiality would be preserved. This relates to the desktop application use case, where an attacker does not know at which point in time a user initiates a key generation. It can further be seen that if the key were not derived from these values but, for example, from a non-pseudo random number generator, Eve would also not be able to derive the key's value in the composition.
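The failed equality of Definition 4 can be replayed directly on these candidate sets, as the following Python snippet illustrates; the toy key space and the time-to-key function k are assumptions of the snippet.

```python
# Replaying the Definition 4 check on the candidate sets computed above.
K = [f"key{i}" for i in range(8)]        # toy key space (assumption)
def k(t): return f"key{t % 8}"           # toy time-to-key function (assumption)

t1, d1, d2 = 3, 1, 1
mu0 = {("returnKey", k(t1 + d1 + d2))}                 # from S0: one candidate
mu1 = {("returnKey", key) for key in K}                # from S1: all keys
mu2 = {("returnKey", key) for key in K} | {"eps"}      # from S2: all keys + eps
print(mu0 == (mu1 & mu2))   # False: the composition reveals more than the parts
```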
Independently Testable Conditions for Confidentiality Composition
Similarly, Definition 5 can be used to illustrate that the condition of Theorem 2 sufficient for preserving confidentiality does not hold. Using the same $\omega_0$ as in the previous section results in the same sets $\Lambda^1_{Eve}(h_1(\omega_0), W^1_{Eve})$ and $\Lambda^2_{Eve}(h_2(\omega_0), W^2_{Eve})$. We now investigate the projections of these sets into the interface system in order to compare the interface expectations of both components:
$$h_{IF1}[\Lambda^1_{Eve}(h_1(\omega_0), W^1_{Eve})] = pre\Big[\bigcup_{t_x \in T,\, key_i \in K} \{callGenKey(App, t_1 + \delta_1)\ returnKey(KGen, key_i, t_x)\}\Big] \setminus \{\varepsilon\}$$
$$h_{IF2}[\Lambda^2_{Eve}(h_2(\omega_0), W^2_{Eve})] = pre\Big[\bigcup_{t_x, t_y \in T} \{callGenKey(App, t_y)\ returnKey(KGen, key_i, t_x)\}\Big] \quad\text{with } key_i = k(t_y + \delta_2)$$
As we can see, these sets are not equal. The dependence of $key_i$ on the point in time at which $callGenKey$ is performed is not expected by the App component, which hints at the confidentiality preservation error.
In order to avoid such a situation, the developers of the components could have agreed a priori on a common assumed interface behaviour when they agreed on the interface design. Following option b) of Definition 5, this could have been
$$B_{IF} = pre\Big[\bigcup_{t_x, t_y \in T,\; t_x < t_y - \delta,\; key_i \in K} \{callGenKey(App, t_x)\ returnKey(KGen, key_i, t_y)\}\Big]$$
In this case the developer of the key generator would have needed to alter his/her implementation to reflect the functional independence of t x and key i , leading to a confidentiality preserving composition.
Design of Generally Composable Component Interfaces
Finally, we demonstrate that our example scenario does not satisfy the sufficient condition specified in Definition 6 and show how in particular scenarios the system specification can be corrected in order for the condition to hold and thus confidentiality to hold as well in the composition. We choose the following two sequences of actions from the respective sets:
- $A = \{callGenKey(App, t_1)\ returnKey(KGen, key_A, t_y)\} \subseteq h_{IF1}(W^1_{Eve})$ with $key_A \in K$ ($key_A$ can be chosen independently of $t_1$).
- $B = \{callGenKey(App, t_2)\ returnKey(KGen, key_B, t_y)\} \subseteq h_{IF2}(W^2_{Eve})$ with $key_B = k(t_2 + \delta_2)$ according to $S_2$.
Obviously, as for $t_1 \neq t_2$ the sets $A$ and $B$ are distinct, $\mu_{IF}(A \cap B) = \emptyset$. However, for $key_A = k(t_2 + \delta_2) = key_B$, it follows that $\mu_{IF}(A) = \mu_{IF}(B) = \{(returnKey(KGen), key_A)\} = \mu_{IF}(A) \cap \mu_{IF}(B) \neq \mu_{IF}(A \cap B)$.
In order to construct a system that fulfills the condition for a generally composable interface, S IF must be designed in such a way that µ IF is an isomorphism. This is the case e.g. if the interface only consists of a stream of generated keys that are handed over from the key generator to the application with Σ IF = {provideKey(KGen, key i )}. As there exists no functional relation from App to KeyGen there cannot be side-effects that destroy the confidentiality property on the key generator's side during composition.
Related Work
The model-based composition of systems is a field of growing research activity in the last decade. Tout et al. [START_REF] Tout | Towards a bpel model-driven approach for web services security[END_REF] have developed a methodology for the composition of web services with security. They use the Business Process Execution Language (BPEL) for the specification of web services composition and expand it in order to specify the security properties independently from the business logic based on policy languages, using a UML Profile for specifying the required security properties. Their approach focusses on how to specify security requirements of web service compositions and does not address verification of security properties in such compositions. Sun et al. propose in [START_REF] Sun | A decomposition-based approach for service composition with global qos guarantees[END_REF] a service decomposition-based approach for service composition in which the utility of a composite service can be computed from the utilities of component services, and the constraints of component services can be derived from the constraints of the composite service. Their approach manages the selection of each component service, leading to more scalability and more flexibility for service composition in a dynamic environment. However, this approach focusses on maximizing the utility of the composition and does not address security properties. A method for composing a system from service components with anonymous dependencies is presented by Sora et al. in [START_REF] Sora | Automatic composition of systems from components with anonymous dependencies specified by semantic-unaware properties[END_REF]. They specify component descriptions by means of semantic-unaware properties, an application-domain independent formalism for describing the client-specific configuration requests in terms of desired properties, and propose a composition algorithm. Using a different approach, Lei Zhang and Jun Wu [START_REF] Zhang | Research on trustworthy software composition architecture[END_REF] analyse the relationship between trustworthiness attributes and propose models of these attributes and their relationship. They use a Trustworthy Software Composition Architecture (TSCA) as evaluation method.
Rossi presents in [START_REF] Rossi | Model checking adaptive multilevel service compositions[END_REF] a logic-based technique for verifying both security and correctness properties of multilevel service compositions. Service compositions are specified in terms of behavioural contracts which provide abstract descriptions of system behaviours by means of terms of a process algebra. Multi-party service compositions are modelled as the parallel composition of such contracts. Modal mu-calculus formulae are used to characterize non-interference and compliance (i.e. deadlock and livelock free) properties. The well-known concepts of non-interference or information flow control address confidentiality with respect to actions. In the above approach, these concepts are used to specify that public synchronizations (i.e. actions concerned with the communication between services) are unchanged as confidential communications are varied. Hence it is not clear how this approach can be extended to cover cases in which satisfaction of confidentiality depends solely on whether specific parameters of an action are visible.
Universal Composability is another prominent branch of research addressing the composition of cryptographic protocols while preserving certain security properties (see for example [START_REF] Canetti | Security and composition of multiparty cryptographic protocols[END_REF][START_REF] Canetti | Universally composable security: A new paradigm for cryptographic protocols[END_REF][START_REF] Canetti | Universally composable symbolic analysis of mutual authentication and key-exchange protocols[END_REF]). A common paradigm in this area of research is that a protocol that "securely realizes" its task is equivalent to running an idealized computational process (also called "ideal functionality") where security is guaranteed. A main disadvantage of the Universal Composability approach seems to be that for every property that shall be proven, a new ideal process has to be constructed whose interactions with the parties result in providing this property.
Pino et al. present in [START_REF] Pino | Constructing secure service compositions with patterns[END_REF] an approach for constructing secure service compositions, making use of composition patterns and security rules. They prove integrity and confidentiality of service compositions based on specific security properties provided by the individual components of such a composition. While the proofs are based on the same formal framework as the one presented in this paper, their approach uses an intermediate orchestration component. We in contrast focus on the direct composition of any type of components, deriving security proofs from specific conditions concerning the component interfaces.
Conclusions & Future Work
In this paper we presented a formalization of the composition of two systems that allows us to reason formally about the preservation of confidentiality properties. The central idea is to view each of the systems as an abstraction of their composition, and to describe each aspect of the composition (e.g. its behaviour, agents' local views and initial knowledge) in terms of these abstractions. We then introduced conditions that allow to prove that a specific confidentiality property holds for the composition if it holds for the individual components. Using the composition of an application with a key generation module as a scenario, we then demonstrated that the fact that these conditions do not hold reveals side effects with non-trivial implications regarding confidentiality. In particular, we presented a general sufficient condition for preservation of confidentiality that is of more theoretical interest, and derived two more specific conditions that are applicable in distributed system engineering and point to particular aspects of the two components that need to be taken into consideration by the developers. The first concerns additional agreements on interface level between component developers that can be independently tested for each component; the second provides sufficient conditions regarding the interface itself that rule out side effects during composition and thereby guarantee the preservation during composition of any two components that implement these interfaces.
Currently we are working on other types of conditions sufficient for proving confidentiality of a system. Finding relations between these conditions and the ones presented in this paper may broaden their scope of application. Future work includes the application of the foundations laid out in this paper to general software engineering, by projecting the semantic knowledge onto rules and guidelines for the composition of software components.
Theorem 3 Given a generally composable interface composition as defined in Definition 6, if S 1 and S 2 both are parameter confidential for agent R with respect to some µ 1 and µ 2 , then S 0 is parameter confidential for R with respect to | 43,396 | [
"989250",
"1004348"
] | [
"466808",
"466808"
] |
01492777 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01492777/file/978-3-642-39614-4_5_Chapter.pdf | José Sánchez
email: [email protected]
Gary T Leavens
email: [email protected]
Separating Obligations of Subjects and Handlers for More Flexible Event Type Verification
Keywords: Event type, specification, verification, Ptolemy language
Implicit invocation languages, like aspect-oriented languages, automate the Observer pattern, which decouples subjects (base code) from handlers (advice), and then compound them together in the final system. For such languages, event types have been proposed as a way of further decoupling subjects from handlers. In Ptolemy, subjects explicitly announce events at certain program points, and pass the announced piece of code to the handlers for its eventual execution. This implies a mutual dependency between subjects and handlers that should be considered in verification; i.e., verification of subject code should consider the handlers and vice versa. However, in Ptolemy the event type defines only one obligation that both the handlers and the announced piece of code must satisfy. This limits the flexibility and completeness of verification in Ptolemy. That is, some correct programs cannot be verified due to specification mismatches between the announced code and the handlers' code. For example, when the announced code does not satisfy the specification of the entire event and handlers must make up the difference, or when the announced code has no effect, imposing a monotonic behavior on the handlers. In this paper we propose an extension to the specification features of Ptolemy that explicitly separates the specification of the handlers from the specification of the announced code. This makes verification in our new language PtolemyRely more flexible and more complete, while preserving modularity.
Introduction
Event types [START_REF] Rajan | Ptolemy: A language with quantified, typed events[END_REF], and other similar approaches like XPIs [START_REF] Sullivan | Modular aspect-oriented design with xpis[END_REF], AAI [START_REF] Kiczales | Aspect-oriented programming and modular reasoning[END_REF], Open Modules [START_REF] Aldrich | Open modules: Modular reasoning about advice[END_REF][START_REF] Ongkingco | Adding open modules to AspectJ[END_REF], IIIA with Join Point Types [START_REF] Steimann | Types and modularity for implicit invocation with implicit announcement[END_REF] and Join Point Interfaces (JPI) [START_REF] Inostroza | Join point interfaces for modular reasoning in aspect-oriented programs[END_REF][START_REF] Bodden | Join point interfaces for safe and flexible decoupling of aspects[END_REF][START_REF] Bodden | Closure Joinpoints: Block joinpoints without surprises[END_REF], have been proposed as a way to further decouple subjects from handlers in implicit invocation and aspect-oriented languages. The verification systems for such languages should, as usual, strive to be as complete as possible while staying sound. In this work we propose some enhancements to the Ptolemy language and its specification and verification system to make it more complete while keeping it sound.
Completeness as a Measure of Usefulness
We work in the framework of a partial-correctness Hoare logic [START_REF] Hoare | An axiomatic basis for computer programming[END_REF]. A judgement of the form Γ ⊢ {P }S{Q} means that the Hoare-triple {P }S{Q} is provable using the type environment Γ . The judgement Γ ⊢ {P }S{Q} is valid iff for every state σ that agrees with the type environment Γ , if P is true in σ (written σ |= P ) and if the execution of S terminates in a state σ ′ , then σ ′ |= Q. Such a logic is sound if whenever a judgment Γ ⊢ {P }S{Q} is provable, then it is valid. Conversely, such a logic is complete if whenever such a judgment is valid, then it is provable in the logic.
To compare two logics, one can ask if both are sound, and if so one can compare how complete they are. Logic A is strictly more complete than logic B if there is some valid judgment that is provable in A but not in B, but every judgment that is provable in B is provable in A. Given that both logics are sound, then a more complete logic is potentially more useful for users, as they will be able to prove more programs correct.
A Brief on Ptolemy Language
Ptolemy's [START_REF] Rajan | Ptolemy: A language with quantified, typed events[END_REF] event type concept decouples subjects (base code), which explicitly announce events, from the handlers that process these events. The event type establishes the contract every handler must satisfy. In this way the base (or announcing) code can be modularly reasoned about using the contract, instead of using each handler's code. The contract not only defines the precondition and postcondition every handler method should satisfy, but also the abstract algorithm they must refine, called a translucid contract [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF]. In the body of a translucid contract, specification expressions can abstract away details of particular implementation expressions, by only specifying their effects. Invoke expressions in the contract's body show where a handler triggers the execution of the next handler in the execution chain (until eventually reaching the originally announced code that stands at the end). In the base code, announce expressions are used to explicitly announce occurrences of events, starting the execution chain and passing the announced code to it. All this is schematized in Figure 1. The invoke expressions in the contract make visible the control effects of the handlers. Active handlers are registered as such using register expressions and handlers are bound to the corresponding event by when expressions.
In Ptolemy, every handler method must be verified to satisfy the contract's pre- and postconditions and also to structurally refine (see Section 2) the translucid contract's body, providing conforming implementations for every specification expression. The announced code is also verified to satisfy the same contract's pre- and postconditions.
The Billing Example
The billing system example in Figure 2 illustrates the basic concepts of Ptolemy and motivates our proposed extension. In this system, each bill includes the amount (a) to be paid and the extra charges (c) like taxes. When the base code totals a bill, adding the charges to the principal amount (line 7), the corresponding event is announced (lines 6-8). This gives registered handlers (maybe PaymentHandler or ShippingHandler) the chance to make some adjustments, like adding some extra charges. In this case we register just one handler at random (line 5) to emphasize the fact that the reasoning is based on the event definition, instead of the particular implementation of any specific handler. The TotalingEvent definition specifies the behavior and abstract algorithm of every admissible handler. The requires (line 14) and ensures (line 21) clauses specify the behavior: every handler requires (line 14) that the existing charges are not negative and ensures (line 21) that the resulting amount of the bill is greater than or equal to the sum of the original amount plus the original charges. The excess, if any, is due to the extra charges added by the handlers. The translucid contract (lines 16-19, inside assumes{...}) forces the handlers to make the charges greater than or equal to their current value, but allows charges to be added by each handler in any consistent way. The specification expression (lines 17-18) must be refined by each conforming handler, with code that satisfies the stated pre-post conditions. Also, any invoke expression must be made explicit in the translucid contract (as on line 19). This allows modular verification of control effects, using the specification of the announced event.
This example is verified by Ptolemy's proof system. Both handlers refine the event's translucid contract. The specification expression in this contract (lines 17-18) is refined by PaymentHandler by increasing the charges (c′ = c + 1, line 27), and by ShippingHandler by leaving the charges the same (c′ = c + 0, line 38). Considering the above and the effect of the invoke expression, it can be seen that both handlers satisfy the event specification (a′ ≥ a + c, line 21), and so both are proven valid. The announced code (a′ = a + c, line 7) also satisfies the event specification (a′ ≥ a + c, line 21), as required by Ptolemy's proof system, so the complete announce expression (lines 6-8) is proven valid. With the handlers and the announce expressions proven valid, the entire program is proven valid in Ptolemy.

Now we consider a variation on the billing system. A new "business rule" requires us to enforce the "increasing" property: that all the handlers for TotalingEvent must strictly increase the total amount, by adding to the charges. Currently PaymentHandler satisfies this condition (line 27) but ShippingHandler does not (line 38). If this property were met, the assertion on line 9 could be proven true, since no matter which handler were registered (line 5) the charges would have been incremented.
We have to guarantee that any handler H bound to the event TotalingEvent satisfies the required property, while keeping the program valid. To do that, we can adjust the event specification and the handlers.
Definition 1. An implementation of the billing program satisfies the "increasing" property if for each binding clause of the form when TotalingEvent do m appearing in a class C: if H = bodyOf(C, m) then Γ′ |= {c ≥ 0}H{a′ > a + c}.
The current TotalingEvent specification does not guarantee the above property, as its postcondition (a′ ≥ a + c) does not imply (a′ > a + c). The way for the billing system to satisfy this property is by having an event postcondition Q_e such that Q_e ⇒ (a′ > a + c). However in Ptolemy this Q_e must be such that (a′ = a + c) ⇒ Q_e, to meet the requirement of Ptolemy's proof system that the announced code (line 7) satisfies the event specification. The fact that these two implications result in a contradiction shows that the above property cannot be proved in Ptolemy. This shows the incompleteness of Ptolemy's proof system, which is incapable of modularly proving the assertion in line 9.
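The contradiction can be made explicit in one display (this is only a restatement of the two implications above):

    % Any candidate event postcondition Q_e would have to satisfy both
    %   (a' = a + c) => Q_e   (announced code meets the event spec)
    %   Q_e => (a' > a + c)   (the "increasing" property)
    % Chaining them and instantiating a' := a + c gives:
    \[
    (a' = a + c) \Rightarrow Q_e \Rightarrow (a' > a + c)
    \quad\text{hence}\quad a + c > a + c,
    \]
    % which is absurd; no such Q_e exists in Ptolemy.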
In section 3 we propose an extension to Ptolemy that makes verification more flexible and complete, and in particular able to enforce the "increasing" property and verify the aforementioned assertion. First, we explain Ptolemy verification in more detail.
Verification in Ptolemy
In Ptolemy, event types state the obligations that handlers should satisfy. In the general case that was presented in Figure 1, the event Evt's declaration specifies the precondition (P_e) and postcondition (Q_e) that handlers should conform to, and also the translucid contract (assumes clause) that they should refine.

Verification in Ptolemy is straightforward [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF]. Every handler body H for an event and every piece of announced code S for that event must satisfy the same pre-post obligations [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF] (Figure 11), declared in the event's requires and ensures clauses. Besides that, the handlers must also refine the event's translucid contract. This is expressed in the requirement that a program is conformal, meaning that each handler conforms to the corresponding event declaration's specification.
Definition 2. A Ptolemy program Prog is conformal if and only if for each declaration of an event type, Evt, in Prog, and for each binding clause of the form when Evt do m appearing in a class C of Prog: if (P_e, A, Q_e) = ptolemySpec(Evt) and H = bodyOf(C, m), then there is some type environment Γ′ such that Γ′(next) = closure Evt, Γ′ ⊢ A ⊑ H and Γ′ |= {P_e}H{Q_e}.

In the above, the formula P_e is the event's precondition, Q_e is its postcondition, and A is the body of the assumes clause (the "translucid contract" [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF]), which in our notation is written (P_e, A, Q_e) = ptolemySpec(Evt). Similarly, bodyOf(C, m) returns the code that is the body of method m in class C.⁴ The structural refinement relation ⊑ is explained below. Furthermore, we say that a Hoare triple {P}S{Q} is valid, written Γ |= {P}S{Q}, if in every state (typable by Γ) such that P holds, whenever S terminates normally, then Q holds in the resulting state.
In Ptolemy, the verification of handlers is done modularly and separately from the announcements. The body of each handler must structurally refine the translucid contract from the event specification. A handler body, H, structurally refines a translucid contract A, written A ⊑ H, if one can match each expression in H to an expression in A [START_REF] Shaner | Modular verification of higherorder methods with mandatory calls specified by model programs[END_REF]. The matching of most expressions is exact (only the same expression matches), with the exception of specification expressions of the form requires P ensures Q, which can occur in A and must each be matched by expressions in H of the form refining requires P ensures Q { S }, where S is the code implementing the specification expression. In Ptolemy structural refinement is checked by the type checking phase of the compiler [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF].
To summarize, according to the work on translucid contracts for Ptolemy [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF], the way that one proves that a program is conformal is by proving, for each handler body H for an event Evt such that (P_e, A, Q_e) = ptolemySpec(Evt): Γ′ ⊢ A ⊑ H and Γ′ ⊢ {P_e}H{Q_e}. In order to guarantee soundness, the body of each refining expression must satisfy the given specification, as in the (REFINING) rule of Figure 3. For every announce expression in a valid program, the announced code S should satisfy the event specification (P_e, Q_e). Then, if the base code guarantees P_e before the announce expression it can assume Q_e holds afterwards. This constitutes Ptolemy's (ANNOUNCE) rule in Figure 3. In that rule P_e[y/x] means P_e with the actual parameter variables y_i⁵ simultaneously substituted for the free occurrences of the x_i, which are the event's formal parameters. Note that the body of the announcement, S, cannot use the event's formal parameters, but only has access to the original type environment, Γ. In the (ANNOUNCE) rule, there is no distinction made regarding the presence or absence of any registered handlers, because the same reasoning applies in either case.
An invoke expression in a handler is reasoned about in the same way. That is, the code executing the invoke expression must establish P_e and can assume Q_e afterwards. This is the (INVOKE) rule in Figure 3. In this rule, the event's name is obtained from the type of next, and this gives access to the specification (P_e, A, Q_e) of that event.
A Hoare logic is sound if whenever Γ ⊢ {P}S{Q} is provable then every terminating execution of S starting from a state in which P holds ends in a state in which Q holds. Soundness for Ptolemy depends on the program being conformal.
Theorem 1. Suppose that the Hoare logic for Ptolemy, without using the rules in Figure 3, is sound. Then for conformal programs, the whole logic, including the rules in Figure 3, is sound.
We omit the proof (which goes by induction on the structure of the proof in the entire Hoare logic). However, the key argument is the same as that for greybox specifications, that structural refinement implies refinement [START_REF] Shaner | Modular verification of higherorder methods with mandatory calls specified by model programs[END_REF].
Ptolemy's design makes both handlers and the announced code have the same pre-post specifications (P_e, Q_e).⁶ This design is convenient in some cases, but it limits Ptolemy's flexibility and completeness. For example, it is not possible to use Ptolemy's event type pre- and postconditions to specify and verify the "increasing" property of our billing system (section 1.4), because the announced code achieves the postcondition a′ = a + c and not the event's postcondition a′ > a + c. However, this property could be considered correct with respect to a more flexible specification that gives different postconditions to the announced code and handlers, which is what we do below. This example shows that verification in Ptolemy is incomplete.
We have other similar examples that show incompleteness of Ptolemy's verification rules. The common theme, like in the billing example, is that the effect of the announced code does not match the effect of the handlers.
Another situation that shows Ptolemy's incompleteness occurs when the announced code has no effect (e.g., skip). As Ptolemy imposes the event pre-post obligations on the announced code, it requires that the triple {P_e}skip{Q_e} holds, or, by Hoare logic, that P_e ⇒ Q_e. Since these same obligations are imposed on the handlers, they are limited to monotonic behaviors, i.e. ones that preserve the precondition P_e. This is a symptom of incompleteness, because in a program where there must be registered handlers, one would not be able to verify an event announcement in which the handlers achieve a postcondition Q_e that is not implied by the event's precondition (P_e).
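The claim that {P_e}skip{Q_e} reduces to the implication P_e ⇒ Q_e is just the skip axiom plus the rule of consequence:

    % skip axiom and rule of consequence from standard Hoare logic:
    \[
    \frac{P_e \Rightarrow Q_e \qquad \{Q_e\}\ \mathtt{skip}\ \{Q_e\}}
         {\{P_e\}\ \mathtt{skip}\ \{Q_e\}}
    \]
    % Conversely, since skip leaves the state unchanged, validity of the
    % triple forces every state satisfying P_e to satisfy Q_e as well.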
In the next section we detail our proposed modification to solve these incompleteness issues and analyse its impact regarding modular reasoning.
Explicit Separate Specification
A solution to the incompleteness problems can be found by recognizing that there is a mutual dependency between base code, handlers and announced code, in the execution chain. The base code depends on the behavior of the activated handlers that are triggered by an announce expression. The handlers depend on the other activated handlers, and on the behavior of the announced code at the end of the chain (Figure 4).
The first change from Ptolemy in our PtolemyRely language consists in separating the specification for the handlers (P_e, Q_e) from the specification for the announced code (P_s, Q_s). As before, every handler H is reasoned about using the event requires-ensures specification (P_e, Q_e). But the announced code S is reasoned about using its own specification (P_s, Q_s). (Both cases are depicted in Figure 5.) This new approach allows different specifications for the handlers and for the announced code, as in our billing example. This also allows announced code that has no effect to be verified without limiting, in any way, the handlers' specification. In PtolemyRely, the second change is that the verification of both announce and invoke expressions is slightly modified. For announce expressions there are two situations, as shown in Figure 5. If there are registered handlers then the base code interacts with the first of them, which guarantees the event postcondition (Q_e). If there are no handlers then the announced code is executed, ensuring its postcondition (Q_s). These two cases are formalized by the rules (RANNOUNCEHAS) and (RANNOUNCENONE) in Figure 8.
invoke expressions are only valid inside the body of a handler, and thus should be analyzed in a context where there are registered handlers. Their effect, instead, depends on the nondeterministic position of the containing handler in the execution chain.

If there are other handlers left in the execution chain, the event specification (P_e, Q_e) is used, as all handlers satisfy it. If only the announced code is left, its specification (P_s, Q_s) should be used. However, for modular verification, the problem is that the event declaration, and consequently the handlers, do not know the announced code and thus do not know (P_s, Q_s). To avoid whole-program reasoning, we make a third change, in this case to Ptolemy's event type declarations. Now users also specify, in the event declaration, the pre-post obligations (P_r, Q_r) for any announced code. Putting this specification in the event type declaration in a new relies clause (see Figure 6) allows the handlers to be verified based on that specification, instead of the actual announced code's specification. It also allows one to avoid doing the verification that each handler satisfies the event pre-post specification one handler at a time. Instead, that can be done in two separate steps: first, once and for all, verifying that the event's translucid contract satisfies the event's pre-post specification, and then verifying that each handler refines this translucid contract, which in turn guarantees every handler satisfies the event's specification.

To summarize, with our changes in PtolemyRely, the event type declares specifications for the handlers, (P_e, Q_e), and for the announced code, (P_r, Q_r). In the rest of this section, we give the formal details of our approach.
Syntax
For PtolemyRely, we change the syntax of Ptolemy event declarations by introducing a relies clause that establishes the specification for the announced code (P_r, Q_r). This is shown in the event syntax schema, Figure 6. We make two changes to the formal syntax of Ptolemy [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF]. The first adds a predicate handlers that returns the number of handlers currently registered for its event argument. The second changes contract definitions, as shown in Figure 7. The nonterminal c stands for event names, sp stands for specification predicates, and se stands for specification expressions (the contract's body in this case).
Semantics
In PtolemyRely, as stated in the definition of conformance, we check for structural refinement of each handler to the translucid contract, and also check each handler to satisfy the event requires-ensures specification.
Definition 3. A PtolemyRely program Prog is conformal if and only if for each declaration of an event type, Evt, in Prog, and for each binding clause of the form when Evt do m appearing in a class C of Prog: if (P_r, Q_r, P_e, A, Q_e) = eventSpec(Evt) and H = bodyOf(C, m), then there is some type environment Γ′ such that Γ′(next) = closure Evt, Γ′ ⊢ A ⊑ H, and Γ′ |= {P_e}H{Q_e}.
The function eventSpec(Evt) returns the specification information from the event type's declaration. The returned 5-tuple consists of the relies clause contract (P_r, Q_r), and the translucid contract: pre- and postconditions (P_e, Q_e) and assumes body A.
The announce and invoke expressions are verified using the rules in Figure 8. For announce expressions there are two rules, depending on whether one can prove that there are registered handlers for the event. (RANNOUNCEHAS) applies when there are registered handlers. In this case the announce expression is reasoned about using the event's specification (P_e, Q_e). For this rule to be valid, the announced code, S, must satisfy the specification (P_r, Q_r) given in the event's type. (RANNOUNCENONE) applies when there are no registered handlers. In this case only the announced code is executed, and thus the relied-on specification (P_r, Q_r) is used. For instance, the (RANNOUNCEHAS) rule reads:

(RANNOUNCEHAS)
  (P_r, Q_r, P_e, A, Q_e) = eventSpec(Evt),  x : T = formals(Evt),
  Γ ⊢ {P_r[y/x] ∧ handlers(Evt) > 0} S {Q_r[y/x]}
  ────────────────────────────────────────────────
  Γ ⊢ {P_e[y/x]} (announce Evt(y) S) {Q_e[y/x]}

The soundness theorem for PtolemyRely states that if a program is conformal, then all provable Hoare triples are valid.
Theorem 2 (Soundness). Suppose that the Hoare logic for Ptolemy, without using the rules for invoke and announce, is sound. Then for conformal PtolemyRely programs, the whole logic, including the rules for those constructs in Figure 8, is sound.

Proof: Let Γ, P, S and Q be given such that Γ ⊢ {P}S{Q} is provable using PtolemyRely's Hoare logic, including the rules in Figure 8. We prove that Γ |= {P}S{Q} (i.e., that this Hoare triple is valid) by induction on the structure of the proof of that triple. In the base case, there are no uses of the rules in Figure 8, so validity follows by the hypothesis. For the inductive case, suppose that the proof has as its last step one of the rules in Figure 8. We assume inductively that all subsidiary proofs are valid. There are three cases. If the last step uses the (RANNOUNCENONE) rule, then the hypothesis that the announced code satisfies the specification (P_r, Q_r) makes the conclusion valid. If the last step uses the (RANNOUNCEHAS) rule, then the hypothesis that the program is conformal means that, by definition 3, Γ′ |= {P_e}H{Q_e}, where (P_e, Q_e) is the specification of the handlers from the event type. This again makes the conclusion valid. If the last step uses the (RINVOKE) rule, then there are two sub-cases, and the proof is similar to that given for the previous cases, using the definition of "conformal".
We note that proving that a program is conformal can be done in a simple way, by proving Γ′ ⊢ {P_e}A{Q_e}, where (P_e, Q_e) is the event's pre/post specification and A is the translucid contract for the event, and then checking that each handler's body H structurally refines the translucid contract (Γ′ ⊢ A ⊑ H). After that, it follows that Γ′ |= {P_e}H{Q_e} using techniques from the work of Shaner et al. [START_REF] Shaner | Modular verification of higherorder methods with mandatory calls specified by model programs[END_REF].
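Written as a single derived rule, this two-step conformance check has the following shape (a restatement of the paragraph above, in the same notation):

    % Verify the translucid contract once, then one refinement check per handler.
    \[
    \frac{\Gamma' \vdash \{P_e\}\, A\, \{Q_e\} \qquad \Gamma' \vdash A \sqsubseteq H}
         {\Gamma' \models \{P_e\}\, H\, \{Q_e\}}
    \]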
Billing Example Revisited (PtolemyRely)
In Figure 9 we show how our billing example could be written in PtolemyRely. Here we show how it can be verified using PtolemyRely's rules, how the "increasing" property can be specified and verified and how the assertion in line 9 is now proved.
Contrary to Ptolemy, PtolemyRely allows us to have different specifications for the handlers (P_e, Q_e) and for the announced code (P_s, Q_s). As mentioned before, the specification for the handlers, (P_e, Q_e), goes in the requires-ensures clauses of the event declaration, while the minimum specification for any announced code, (P_r, Q_r), goes in the new relies clause. The specification of the announced code S (line 7) is (P_s, Q_s), which corresponds to (c ≥ 0, a′ = a + c). We take the expected behavior for the announced code, (P_r, Q_r), declared in the relies clause of Figure 9, to be the same as the actual behavior of the announced code, (P_s, Q_s). The specification for the handlers (P_e, Q_e) is declared in line 15 and line 22 as (c ≥ 0, a′ > a + c).
In PtolemyRely we can prove our "increasing" property: that all handlers should strictly increase the total amount of the bill. If a handler H is verified, it means that it satisfies the (P_e, Q_e) specification. In this case: Γ ⊢ {c ≥ 0}H{a′ > a + c}, which is exactly what the "increasing" property demands.
Since there are registered handlers (line 5), the (RANNOUNCEHAS) rule applies. It requires {P_r}S{Q_r}, which holds in the announce expression in lines 6-8. The postcondition in the consequent of this rule, Q_e, corresponds in this case to a′ > a + c; this immediately proves the assertion in line 9. To reason about invoke expressions one should use the (RINVOKE) rule, which considers (P_e, Q_e) and (P_r, Q_r). In this case it corresponds to the following:

Γ ⊢ {(c ≥ 0) ∧ (c ≥ 0)} next.invoke() {(a′ > a + c) ∨ (a′ = a + c)}

and this is equivalent to Γ ⊢ {c ≥ 0} next.invoke() {a′ ≥ a + c}.

In this revisited version we adjusted ShippingHandler to meet the "increasing" property (line 38). Both handlers refine the translucid contract, providing code (lines 28 and 38) that correctly refines the specification expression in the contract (lines 18-19). Also both PaymentHandler and ShippingHandler satisfy the handlers' specification (c ≥ 0, a′ > a + c). This can be shown as follows. Both increment the charges, c′ > c (line 28 and line 38), and then invoke the next handler. Considering this increment, and the indicated postcondition of the invoke expression, we have (c′ > c) ∧ (a′ ≥ a + c′), and from that we get (a′ > a + c), which shows that both handlers satisfy the specification.
We have shown that the whole program is verified (announce expression and handlers), that the "increasing" property can also be verified, and that the assertion in line 9 can be proved in PtolemyRely.
Extension of Ptolemy
Our new approach extends Ptolemy's, as stated in the following lemma.

Lemma 1. Let Prog be a program in Ptolemy and S be an expression of Prog. Let Γ be a type environment that types S. Suppose Γ ⊢ {P}S{Q} is provable in Ptolemy. Then there is a PtolemyRely program Prog′ in which Γ ⊢ {P}S{Q} is provable by the rules for PtolemyRely.

Proof: The new program Prog′ in PtolemyRely is constructed by taking each event declaration E declared in Prog, and producing a new event declaration E′ which is just like E, except that a relies clause is inserted of the form relies requires P_e ensures Q_e, where (P_e, A, Q_e) = ptolemySpec(E). Then the rest of the proof proceeds by induction on the structure of S.
If S is not an invoke or announce expression, then the proof rules for PtolemyRely are the same as for Ptolemy, so there are only two interesting cases.
When S is an invoke expression of the form next.invoke(), then, by hypothesis, we have in Ptolemy's proof system Γ ⊢ {P}next.invoke(){Q}. Thus by the Ptolemy (INVOKE) rule, we must have Γ(next) = closure Evt, for some event name Evt, where (P, A, Q) = ptolemySpec(Evt). By construction of Prog′, we have (P, Q, P, A, Q) = eventSpec(Evt), so P plays the role of both P_e and P_r in PtolemyRely's (RINVOKE) rule, and Q plays the role of both Q_e and Q_r in that rule. So we have Γ ⊢ {P ∧ P}next.invoke(){Q ∨ Q}. To do this we use the rule of consequence in Hoare logic, since (P ∧ P) ≡ P and (Q ∨ Q) ≡ Q, to get the desired conclusion in the proof system for PtolemyRely.
When S is an announce expression of the form announce Evt(y) {S_0}, then using Ptolemy's (ANNOUNCE) rule we have: Γ ⊢ {P_Evt[y/x]}S{Q_Evt[y/x]}, and so we also have Γ ⊢ {P_Evt[y/x]}S_0{Q_Evt[y/x]}, where Γ is the type environment for expression S, (P_Evt, A, Q_Evt) = ptolemySpec(Evt) and x : T = formals(Evt). Using PtolemyRely's (RANNOUNCEHAS) or (RANNOUNCENONE) rules, we must prove that: Γ ⊢ {P_Evt[y/x]}S{Q_Evt[y/x]}. Since by construction of Prog′ we have that (P_Evt, Q_Evt, P_Evt, A, Q_Evt) = eventSpec(Evt), then P_Evt plays the role of P_e and P_r, and Q_Evt plays the role of Q_e and Q_r, and so both rules allow us to immediately prove the desired conclusion. One can apply whichever rule is appropriate, or a derived rule with precondition P_Evt[y/x] ∧ P_r[y/x] and postcondition Q_Evt[y/x] ∨ Q_r[y/x], and then use the rule of consequence.
Conclusions and Future Work
When reasoning about event announcement in AO systems, there exists a mutual dependency between the base code (subject) and the advising code (handlers). The approach followed in systems like Ptolemy [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF], where the same requires-ensures obligation is applied to both the handlers and the announced code, limits the flexibility and the completeness of the system.
In this paper we showed an extension to the event type concept in the Ptolemy language that explicitly separates the specification and verification of these obligations. We implemented our proposal as an extension to the Ptolemy compiler and showed that the resulting methodology is more flexible and complete than the original.
We also showed how to make the verification of the handlers more concise. Instead of verifying each handler to satisfy the event pre-post specification, one can verify, once and for all, the translucid contract of the event to satisfy this pre-post specification. Then each handler can be verified to structurally refine this translucid contract. This indirectly guarantees the required behavior of the handlers.
Previous work [START_REF] Bagherzadeh | Translucid contracts for aspectoriented interfaces[END_REF] has shown how the translucid contract concept of Ptolemy can be adapted to other approaches like XPI, AAI and Open Modules; adding specification and verification capability to them. Our work suggests that these approaches, and others like JPT and JPI, would benefit from our enhancement to the translucid contract concept.
Since event subtyping has been recently proposed for Ptolemy [START_REF] Fernando | Event type polymorphism. In Proceedings of the eleventh workshop on Foundations of Aspect-Oriented Languages[END_REF], a natural future extension to our work would be to apply the added relies clause in the presence of event polymorphism, and to analyse its impact regarding modular reasoning. We also plan to apply our approach to more complex cases, and also to use static checking techniques in the verification process.
Fig. 1. Event, handlers and announced code.
Fig. 2. Billing example in Ptolemy.
Fig. 3. Hoare Logic axioms and inference rules for the interesting constructs of Ptolemy.
Fig. 4. Mutual dependencies between base code, handlers and announced code.
Fig. 6. Event syntax schema.
Fig. 8. Hoare Logic inference rules for those constructs of PtolemyRely that differ from Ptolemy.
Footnotes:
¹ Ptolemy is an expression language.
² When summarizing assertions, we adopt the Z [START_REF] Spivey | Understanding Z: a Specification Language and its Formal Semantics[END_REF] convention of denoting the new value of a variable with a prime (like a′), and use unprimed variables to stand for their pre-state values.
⁴ The auxiliary function bodyOf(C, m) returns the body of method m in class C.
⁵ We use variables in these rules to avoid problems with side effects in expressions, although Ptolemy allows general expressions to be passed as actual arguments to announcements.
⁶ We use the convention of denoting by (P, Q) the pre- and postconditions of some code.
Acknowledgments
The work of both authors was partially supported by NSF grant CCF-1017334. The work of José Sánchez is also supported by Costa Rica's Universidad Nacional (UNA), Ministerio de Ciencia y Tecnología (MICIT) and Consejo Nacional para Investigaciones Científicas y Tecnológicas (CONICIT).
Related Work
The original work on Ptolemy [START_REF] Rajan | Ptolemy: A language with quantified, typed events[END_REF] addressed the problem of modular reasoning of implicit invocation systems, like AO systems. Many other solutions have also been proposed: XPIs [START_REF] Sullivan | Modular aspect-oriented design with xpis[END_REF], AAI [START_REF] Kiczales | Aspect-oriented programming and modular reasoning[END_REF], Open Modules [START_REF] Aldrich | Open modules: Modular reasoning about advice[END_REF][START_REF] Ongkingco | Adding open modules to AspectJ[END_REF], Join Point Types (JPT) [START_REF] Steimann | Types and modularity for implicit invocation with implicit announcement[END_REF] and Join Point Interfaces (JPI) [START_REF] Inostroza | Join point interfaces for modular reasoning in aspect-oriented programs[END_REF][START_REF] Bodden | Joint point interfaces for safe and flexible decoupling of aspects[END_REF][START_REF] Bodden | Closure Joinpoints: Block joinpoints without surprises[END_REF]. In this work we call attention to the mutual dependency that exists between the base code (subject) and the advising code (handlers). We enhanced Ptolemy's event type specifications by clearly separating the obligations imposed on the handlers from the obligations of the announced code, in such a way that both can be reasoned about modularly. Here we review how, if at all, this problem is addressed in the other approaches and whether our strategy can be applied to them.
Previous work [START_REF] Bagherzadeh | Translucid contracts for aspectoriented interfaces[END_REF] has shown how the translucid contract concept of Ptolemy can be adapted to other approaches like XPIs, AAI and Open Modules, adding specification and verification capability to them. All these approaches would benefit from our enhancement to the translucid contract concept, in case they adopted it, as they would become more complete and more flexible.
Steimann et al. [START_REF] Steimann | Types and modularity for implicit invocation with implicit announcement[END_REF] proposed an approach for dealing with Implicit Invocation and Implicit Announcement (IIIA) based on Join Point Types and polymorphic pointcuts. Ptolemy's approach [START_REF] Bagherzadeh | Translucid contracts: expressive specification and modular verification for aspect-oriented interfaces[END_REF], which we extended in this work, is similar to the work of Steimann et al. One important difference, though, is that Ptolemy does not support implicit announcement. On the other hand, Steimann et al. do not treat the issue of specification and verification, suggesting that one can "resort to an informal description of the nature of the join points" [15, p. 9]. Nevertheless, since the IIIA join point type concept is very close to the event concept of Ptolemy, the translucid contract approach, including our contribution, could be partially applied to join point types.
Join Point Interfaces (JPI) [START_REF] Inostroza | Join point interfaces for modular reasoning in aspect-oriented programs[END_REF][START_REF] Bodden | Joint point interfaces for safe and flexible decoupling of aspects[END_REF] and Closure Join Points [START_REF] Bodden | Closure Joinpoints: Block joinpoints without surprises[END_REF] extend and refine the notion of join point types of Steimann et al. JPI decouples aspects from base code and provides modular type-checking. Implicit announcement is supported through pointcuts, and explicit announcement through closure join points. JPI, similarly to JPT, lacks specification and verification features. Thus, it could also benefit from the specification and verification approach in Ptolemy and PtolemyRely.
Khatchadourian and Soundarajan [START_REF] Khatchadourian | Rely-guarantee approach to reasoning about aspect-oriented programs[END_REF] proposed an adaptation of the rely-guarantee approach used in concurrency, to be applied in aspect orientation. The base code reasoning relies on certain constraints imposed on any applicable advice. These constraints are expressed as a rely relation between two states. A conforming piece of advice may only make changes to the state in a way that satisfies the rely relation. In this way the reasoning of the base code is stable even in the presence of advice. The event pre-postconditions (P_e, Q_e) that Ptolemy imposes on every handler can be thought of as a realization of the rely relation: rely(σ_1, σ_2) ≡ P_e(σ_1) ∧ Q_e(σ_1, σ_2). As observed by those authors, the relation between the base code and the advice is not symmetric, as it is in the case of peer parallel processing. In their approach the base code should just guarantee the preconditions required by the advice. PtolemyRely follows a similar strategy, in which the base code guarantees (to the handlers) only the preconditions of the handlers. Thus in PtolemyRely: guar(σ_1, σ_2) ≡ P_e(σ_1). Our key observation in PtolemyRely is that the advice code might depend on the piece of base code announced at a given join point, which may be eventually invoked from inside the advice. In PtolemyRely we take ideas from both approaches, Ptolemy and rely-guarantee, and declare, as part of the event type, the conditions the advice code relies on, which corresponds to what the base code should guarantee to every applicable advice.
"1004349",
"1004350"
] | [
"85862",
"85862"
] |
01403849 | en | [
"info"
] | 2024/03/04 23:41:50 | 2016 | https://theses.hal.science/tel-01403849v2/file/DDOC_T_2016_0142_LABRANDE.pdf | Keywords: . . . . . . . . . . . . . . . . . theta functions, Abel-Jacobi map, multiprecision, quasi-linear time, elliptic curves, isogenies, hyperelliptic curves, cryptography
At the end of a three-year cotutelle, there are many people to thank, in French and in English. I hope I am not forgetting anybody, but so many people surrounded and supported me during these three years of thesis work that I am bound to have forgotten some. Sorry! I first wish to thank John Boxall and Guillaume Hanrot for accepting to review this manuscript, and for their comments, which greatly helped me finalize it. I would also like to thank Matthew Greenberg and Wayne Eberly for accepting to be a part of the examination committee for my candidacy exam in Calgary, as well as Faramarz Famil Samavati for sitting as the neutral chair. I also wish to thank Sylvain Lazard for following my progress as internal referee over the last three years. Finally, I wish to thank Pierrick Gaudry for accepting to be an examiner at the thesis defense, as well as at the candidacy exam, but also for his help and expertise on some difficult parts, such as reading Mumford's second book. I also wish to thank Renate Scheidler, who kindly accepted to be part of the defense jury, as well as the jury for my candidacy exam in Calgary.
I also wish to thank my advisors for their guidance. Many thanks to Mike, for his advice, his help and mentorship, during this project and the previous ones, and helping to make me almost a fully-grown researcher over the last six years. Many thanks to Emmanuel, for offering me this thesis topic, for always being available and patient, for his sharp answers to my frequent questions, for his help on every aspect, theoretical, implementation-related and practical alike, and for telling me when I was writing too much, or too much nonsense.
I would also like to thank everyone who showed interest in my thesis work and gave me the opportunity to present it throughout the last three years. I would like to thank the Number Theory Group in Calgary, and more particularly those of the Number Nosh, who have been very welcoming and allowed me to give practice talks there a few times. Finally, many thanks to Christophe Ritzenthaler, and to Andreas Enge and Damien Robert, for inviting me to their respective departments to present my work, and for the discussions that followed, always very stimulating and whose fruits can be found in this manuscript.
A cotutelle is a lot of administrative work, and I cannot forget to thank everyone who was involved in making this cotutelle possible. Thanks to Vanessa Binet, as well as the rest of the Doctoral Studies office of the UL, for their help. Thank you to the Graduate Studies at the Computer Science department of the University of Calgary, and many thanks to Britta Travis for answering so many of my questions. Many thanks to Sophie Drouot for her help, in particular with planning my (numerous!) trips. I also wish to thank Suzanne Collin for allowing me to do my teaching hours at Télécom Nancy while accommodating the fact that I was away half of the year; finally, I would like to thank the Computer Science department at the UofC for providing me with the opportunity to be a teaching assistant there.
Many thanks also to all my colleagues, on both sides of the Atlantic. Thanks to Sebastian for brightening up the office and studying the theory with me. A thousand thanks to Marie-Andrée, the other half of the Isogeny Office, for her good fellowship. Thanks to Aurélien for his advice and his furniture; thanks also to Hubert and Éric for those memorable doctoral training sessions. Thanks to Jérémie, as well as Rémi, for the occasional discussions between Lyonnais. Thanks to Aurore, and our long email exchanges during the Canadian winter. And of course, many thanks to the whole Caramel/Caramba team, an inexhaustible source of laughter and trolling, for the exceptional work atmosphere, which I was always very happy to come back to. Thanks to Paul, Jérémie, Marion, Pierre-Jean, and Pierrick, for their kind supervision, and their help and availability for the "young ones"; thanks to Alexander, Enea and Maike, and thanks to Svyat and Simon for the trolling and the good mood. Thanks to my successive office mates, and there were many: Laurent, Hamza, Cyril, Nick, Stéphane and Luc, for interspersing the productive silences with lively discussions about anything and everything. A special thank-you to Laurent, who can decipher the regulations of the Université de Lorraine like no one else, and without whom the doctoral training sessions would have seemed much longer.
And of course, thanks to all that is not work and that allowed me to blow off steam throughout this time. Thanks to Azathoth, Natrium, Yoruk, Stormi and Otto, for their passion and their tenacity. Thanks to the Nancy improvisers, and most particularly those of the Improdisiaque association; thanks to Steff for his energy, to Jess for her friendship and her laughter, and to Fanny for her constant support and the epic saucisson evenings. Thanks to Armand for his friendship, our discussions both deep and light, and for his enthusiasm. Thank you to the UofC Improv Club for always making me feel so welcome every time I came back, and for opening me to so many new perspectives; thanks to the Kinkonauts Level B class for the fun times. Thanks to Anh Thy and Laure for all those hours spent on the phone; thanks to Clément, Olivier and Guillermo for their deep friendship over so many years.
Many thanks to my in-laws, for making sure I had a big, loud family whenever I was in Calgary; for their support in so many ways for the last 5 years; and for always teaching me something new about Canada or the English language.
Many thanks to my parents, for their constant presence, their support in the difficult moments, their joy in all the good ones, their encouragement and their love, since the very beginning. Thanks to Manon, always as close, always as complicit, always as present, even at a distance, whether to share silly stories or to talk at length.
And of course, thank you to my wife, for her unconditional love and support throughout the last three years; who stood by me always; who told me to go for it even if she still wonders what the heck is a theta-constant; who was always patient and understanding; who knows exactly when I need to stop working and go for a hot chocolate or a Blizzard; who gives me so much every day and to whom I dedicate this manuscript. And thank you to little Anaïs for letting me sleep a little so that I could finish writing this manuscript. Love you both.
Introduction
Elliptic and hyperelliptic curves are classical objects in algebraic geometry, and their properties have been studied for centuries. They have also been proposed for use in cryptography more than thirty years ago, which along with the advent of computers rekindled the interest in effective algorithms to compute objects and perform calculations related to these curves. In particular, the computation of isogenies, which are morphisms between curves, has cryptographic applications: isogenies transport the discrete logarithm problem on a curve (on which the security of the cryptosystem depends) into an instance of the problem on another, potentially weaker, curve. Hence isogeny computation has been used to outline a decrease of the security of a cryptosystem based on curves; however, these applications only concern very specific cases so far, and elliptic curves are still considered to be secure, and are widely deployed.
Elliptic and hyperelliptic curves over the complex numbers have an "analytic representation" as complex tori, in addition to the usual algebraic representation. Analytic representations are particularly interesting, in the sense that computing isogenies (and other maps) between two such representations is as simple as multiplying two complex numbers. The translation between both representations is done via the Abel-Jacobi map, an explicit isomorphism from the algebraic to the analytic representation. In genus 1, this map is fairly well-known, and the links with the arithmetico-geometric mean are explicit; its inverse is linked to the Weierstrass ℘ function, which is also well-known. However, algorithms to compute the inverse of the Abel-Jacobi map are currently more costly than the ones which compute the Abel-Jacobi map, which is not satisfactory. Furthermore, the situation with respect to the Abel-Jacobi map in higher genus is not as explicit, and there are no fast algorithms to compute the map and its inverse.
In this thesis, we look at the θ function, a function of complex numbers which can be defined in any genus. The θ function can be linked to complex Riemann varieties, including elliptic and hyperelliptic curves over the complex numbers, and can be used to compute the Abel-Jacobi map. Furthermore, it has links with other complex functions of number theory, including the Weierstrass ℘ function.
We took a closer look at the computation of this function with arbitrary precision: the main contribution of this manuscript is to outline, in genus 1 and genus 2, fast algorithms which compute the value of this function with complexity roughly linear in the precision needed. A similar approach could be applied to the general genus g case, but this requires solving a few problems first, which we solved in genus 1 and 2 but leave as future work for higher genus. As a result, our work also gives fast algorithms for the computation of the Abel-Jacobi map and its inverse, which is also of general interest. The algorithms for θ, ℘ and the Abel-Jacobi map that we present in this manuscript have the best known asymptotic complexity. Finally we also study one application of this: a new algorithm to compute isogenies of a given kernel over C, F p or a number field.
Contributions of this thesis
Main contributions
The main contributions of this thesis lie in the design and study of algorithms to compute theta functions in quasi-linear time.
Generalizing algorithms studied in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF], we first give an algorithm to compute the value of θ(z, τ) (Jacobi's theta function) with precision P in O(M(P) log P), an improvement over the O(M(P)√P) running time of previous algorithms. Our implementation of the algorithm is publicly available, and shows this algorithm is faster than previous algorithms for precisions greater than 260 000 decimal digits. We present this in Chapter 6.
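For comparison, the naive algorithm of Chapter 5 sums the defining series θ(z, τ) = Σ_{n∈Z} exp(iπn²τ + 2iπnz) directly; a rough Python/mpmath sketch (our own illustrative code, with a crude truncation bound B) is:

    # Naive evaluation of Jacobi's theta function by direct summation.
    from mpmath import mp, mpc, exp, pi, sqrt, jtheta

    def theta_naive(z, tau, prec_bits):
        mp.prec = prec_bits + 32            # guard bits for rounding
        # terms decay like |q|^(n^2) with q = exp(i*pi*tau), |q| < 1;
        # stop once n^2 exceeds prec_bits / log2(1/|q|) (crude bound)
        B = int(sqrt(prec_bits / (pi * tau.imag / 0.6931))) + 2
        s = mpc(1)                          # n = 0 term
        for n in range(1, B + 1):           # pair the n and -n terms
            s += exp(1j*pi*n*n*tau + 2j*pi*n*z) + exp(1j*pi*n*n*tau - 2j*pi*n*z)
        return s

    z, tau = mpc(0.1, 0.2), mpc(0.3, 1.1)
    mp.prec = 200
    # mpmath's jtheta(3, pi*z, q) computes the same sum in its own convention
    print(abs(theta_naive(z, tau, 200) - jtheta(3, pi*z, exp(1j*pi*tau))))

The number of summed terms grows like √P, which is where the O(M(P)√P) cost of the naive genus 1 algorithm comes from.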
We then generalized this approach to higher genera, as presented in Chapter 7. We managed to obtain an algorithm to compute genus 2 theta functions in the same O(M(P) log P) time, although the invertibility of the Jacobian (which arises when using Newton's method) remains to be proven; this is an improvement over previous algorithms, of complexity O(M(P)P). Once again, our implementation of the algorithm is publicly available and shows a speedup with respect to the naive algorithm for precisions greater than 3 000 digits. Furthermore, we studied the generalization of the algorithm in genus g; a similar complexity seems achievable, but this requires solving some problems which arise when attempting to use Newton's method, and which were more or less solved in genus 2.
We then applied the algorithms and ideas of these chapters to the computation of the Abel-Jacobi map and its inverse in Chapter 8. The fast algorithms for the computation of θ allowed us to show that both the complex Abel-Jacobi map and its inverse can be computed with precision P in O(M(P) log P) in genus 1 and 2, provided the Jacobian of the system is invertible. In genus g, the same result for the Abel-Jacobi map is achievable, but our algorithm for the fast computation of its inverse hinges on the fast computation of θ in genus g, which is not fully solved yet.
Finally, as an application of the fast computation of the genus 1 Abel-Jacobi map and its inverse, we studied an isogeny computation algorithm in Chapter 9. The algorithm allows one to compute an isogeny with given kernel over C, a number field, or a finite field. Its complexity is worse than that of Vélu's formulas, but it seems easier to generalize to genus g; however, we left the study of this generalization to future work.
Other contributions
We wish to highlight a few results we obtained in this manuscript; some of them may be of secondary importance, but we think they may be of independent interest. As these results are scattered throughout this manuscript, we summarize them here. The list includes:
• We give a new algorithm to compute ℘(z, τ) with precision P in time O(M(P) log P). The algorithm uses the link between values of ℘ and the Landen transform, which gives a recurrence relation; it is described in Section 4.3. We also compare this algorithm to the one deriving from the well-known link between ℘ and θ, which is also of quasi-linear complexity using our algorithms for θ; the comparison in Section 8.1.3 shows that the new algorithm has a smaller constant and is faster in practice.
• We give a fast algorithm to compute E_2k(τ) with precision P in O(P^(1+ε) k^(1+ε)) bit operations; this is a better asymptotic complexity than the naive algorithm, which has complexity O((P + k)^(2+ε) log k). Furthermore, our algorithm actually computes E_{2k′}(τ) for all k′ ≤ k in the same asymptotic running time; a sketch of the naive q-expansion evaluation appears after this list. We refer to Section 8.4 for a more detailed analysis.
• Our algorithm for the Abel-Jacobi map in genus g can be applied to the genus 1 setting (see Section 8.1.4), which gives a new quasi-linear algorithm to compute the elliptic logarithm using the link between ℘ and θ. The resulting algorithm is only twice as slow as the more direct algorithms based on the Landen transform.
• We show in Chapter 5 that recurrence relations can be used in the naive algorithm to compute θ in any genus, and we optimize the genus 1 and genus 2 naive algorithms to lower the asymptotic constant.
• We improve the uniform algorithm for genus 1 theta-constants given in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] and include the computation of θ_2 in the same uniform complexity (see Section 6.1.2).
• Finally, we study a global torsion lifting procedure in Section 9.3. In particular, we study a univariate polynomial derived from the generic ℓ-torsion polynomials; we manage to give a bound on the modulus of its roots, but its irreducibility is left as an open problem.
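The naive q-expansion evaluation of E_2k referred to in the list above can be sketched in a few lines of Python/mpmath (the function names and the fixed cutoff are ours, for illustration; the q-expansion E_2k(τ) = 1 - (4k/B_2k) Σ_{n≥1} σ_{2k-1}(n) q^n with q = exp(2πiτ) is standard):

    # Naive q-expansion of the Eisenstein series E_{2k}.
    from mpmath import mp, exp, pi, bernoulli, mpc

    def sigma(n, e):                        # divisor power sum sigma_e(n)
        return sum(d**e for d in range(1, n + 1) if n % d == 0)

    def eisenstein(k, tau, terms):
        q = exp(2j * pi * tau)              # |q| < 1 since Im(tau) > 0
        s = sum(sigma(n, 2*k - 1) * q**n for n in range(1, terms + 1))
        return 1 - (4*k / bernoulli(2*k)) * s

    mp.prec = 100
    tau = mpc(0.25, 1.0)
    print(eisenstein(2, tau, 60))   # E_4(tau): coefficient 240 = -8/B_4
    print(eisenstein(3, tau, 60))   # E_6(tau): coefficient -504 = -12/B_6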
Outline of the manuscript
This manuscript is structured in nine chapters.
Background
Chapter 1 deals with background concerning elliptic curves and isogenies, the primary motivation for our study of the computation of the Abel-Jacobi map; we also discuss this map and its potential applications, and show how these notions generalize to genus 2. Chapter 2 establishes background on general (i.e. genus g) theta functions, outlining the ideas and the formulas that we use throughout the manuscript. We also mention two explicit reduction algorithms which seem relevant in the context of genus g θ functions, although they are weaker than the reduction to the genus g fundamental domain, for which there are no explicit reduction algorithms. Section 2.5 and Section 2.6 show all the formulas we use in the context of genus 1 and genus 2 theta functions, which is handy when reading the more involved chapters.
Computation of the Abel-Jacobi map
Chapter 3 deals with the arithmetico-geometric mean (AGM), an important object in this manuscript since it can be computed in quasi-linear time. We outline the well-known connection with the theta-constants; the connection with elliptic integrals is discussed in the next chapter. We also discuss a nice generalization of the AGM, the Borchardt mean.
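The AGM iteration itself is two lines; a real-argument Python/mpmath sketch (our own, for illustration) shows the quadratically convergent recurrence:

    # Arithmetico-geometric mean of a, b: iterate the arithmetic and
    # geometric means until they agree; convergence is quadratic, so
    # O(log P) iterations suffice for precision P.
    from mpmath import mp, mpf, sqrt

    def agm(a, b):
        a, b = mpf(a), mpf(b)
        while abs(a - b) > mpf(2)**(-mp.prec + 4):
            a, b = (a + b) / 2, sqrt(a * b)
        return a

    mp.prec = 256
    print(agm(1, sqrt(2)))   # Gauss's constant is 1/agm(1, sqrt(2))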
Chapter 4 discusses the Landen isogeny, which shows the connection between the AGM and the elliptic integrals, and allows one to compute the periods of an elliptic curve and the Abel-Jacobi map in quasi-linear time. The full proof for general complex elliptic curves has been obtained rather recently, and we recall the notions involved in this proof. We also discuss in that chapter an original algorithm to compute the Weierstrass ℘ function in quasi-linear time using the Landen isogeny; finally we discuss similar ideas for the θ function.
Chapter 5 presents the naive algorithm for the computation of θ; we show how to use recurrence relations to evaluate the terms efficiently, and fully evaluate the complexity of the algorithm in any genus, although we were not able to determine the dependency in τ in the general genus g case.
Chapter 6 outlines an original algorithm which computes the genus 1 theta function in quasi-linear time; we use a similar strategy to the theta-constants: find a function F which can be evaluated in quasi-linear time and takes a special value at the theta functions (using a sequence inspired by the AGM), then invert this function using Newton's method. We provide a full analysis of the running time of this algorithm and of the precision loss that is incurred during the computation, and give an algorithm with complexity uniform in z and τ. We implemented this algorithm in low-level GNU MPC and show that this algorithm is faster than the naive algorithm described in Chapter 5 for precisions greater than 100 000 decimal digits. The results in this chapter have been accepted for publication in the journal Mathematics of Computation in November 2015 [START_REF] Labrande | Computing Jacobi's θ in quasi-linear time[END_REF].
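The Newton inversion underlying this strategy can be sketched generically; the following Python/mpmath toy (our own code, with an explicit derivative and the usual precision-doubling schedule, not the thesis's actual implementation) inverts a one-variable F:

    # Generic Newton inversion: solve F(x) = y, doubling the working
    # precision at each step so the cost is dominated by the last
    # iteration (the standard schedule behind quasi-linear inversion).
    from mpmath import mp, mpf, sqrt

    def invert(F, dF, y, x0, prec_bits):
        mp.prec = 30                    # settle at a low seed precision
        x = mpf(x0)
        for _ in range(8):
            x = x - (F(x) - y) / dF(x)
        prec = 30
        while prec < prec_bits + 30:    # quadratic convergence matches doubling
            prec = min(2 * prec, prec_bits + 30)
            mp.prec = prec
            x = x - (F(x) - y) / dF(x)
        return x

    r = invert(lambda x: x*x, lambda x: 2*x, mpf(2), 1.5, 200)
    print(abs(r - sqrt(2)))             # roughly 2^(-200)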
Chapter 7 shows how the quasi-linear time algorithm can be generalized to theta functions of higher genera. The results in Chapter 7 were obtained with Emmanuel Thomé and written up in a paper [START_REF] Labrande | Computing theta functions in quasi-linear time in genus 2 and above[END_REF] which was accepted to the 2016 Algorithmic Number Theory Symposium (ANTS-XII). We outline the algorithm explicitly in genus 2: we define F, the function to invert, explicitly, and we prove that it can be evaluated in quasi-optimal time; we also solve a tricky issue, which is that Newton's method cannot be applied directly to this function. The resulting algorithm was implemented in genus 2 in Magma, and it runs faster than the naive algorithm for precisions larger than 3000 decimal digits, which is much less than in genus 1; note that this algorithm relies on a conjecture, which is that the Jacobian of F is invertible. In the general, genus g case, we show the construction of F, whose evaluation has conjectured quasi-optimal running time. However, a similar problem with Newton's method arises, and we were not able to show that one can solve it in the general case.
Abel-Jacobi map
Chapter 8 deals with the computation of the Abel-Jacobi map, and the links between this map and theta functions. We obtained results of different strength depending on the genus.
In genus 1, known AGM-based methods allow the computation of the Abel-Jacobi map in quasi-linear time; we propose an algorithm using the function F of Chapter 6, but it is twice as slow as these methods. The inverse of the Abel-Jacobi map is given by Weierstrass's ℘ function, which can be expressed as a function of θ to yield a first quasi-linear time algorithm. We compare this algorithm to another original algorithm for ℘ based on the Landen isogeny, which we outlined in Chapter 4; implementations show that the latter is two to three times faster than the former.
In genus 2, no methods which compute the Abel-Jacobi map in quasi-linear time in the general complex case seem to exist. Our algorithm using the function F can be generalized here, and yields a quasi-linear time algorithm. The computation of the inverse of the Abel-Jacobi map relies once again on the computation of θ, which gives a quasi-linear time algorithm modulo the conjecture on the Jacobian of F.
In genus g, we can once again use the function F of Chapter 7, which can be evaluated in quasi-linear running time; this yields a quasi-linear time algorithm for the Abel-Jacobi map, which is the best known complexity. As for the computation of the inverse of the Abel-Jacobi map, it reduces to the computation of θ in genus g; hence, it depends on being able to find a way to apply Newton's method to F, which we were not able to achieve.
Isogeny computation
The purpose of the last chapter, Chapter 9, is to outline and study an algorithm which, given a complex elliptic curve and a subgroup of order , computes an -isogenous curve such that the subgroup is the kernel of the isogeny. This algorithm uses the Abel-Jacobi map to transpose this problem to the complex tori, where it is easy to solve. The asymptotic complexity of this algorithm is worse than the complexity of existing algorithms (e.g. Vélu formulas); however, it seems like it can be generalized quite nicely to the computation of isogenies between genus 2 curves.
We then show how one can build an algorithm which, given a curve defined over a number field K, finds an isogenous curve with given kernel. This requires embedding the number field into the complex numbers, then recognizing the coefficients of the complex isogeny as elements of the number field; however, we were unable to find an explicit formula giving the precision required to do this. Finally, this strategy can be extended to curves defined over finite fields, which requires lifting the curve to a curve defined over a number field; we propose a conjecture on the precision needed in this case.
Computational model
Throughout this manuscript, we will perform computations on complex numbers, and we will always use the same computational model in those cases. We will outline algorithms to compute mathematical quantities with arbitrary precision, i.e. using multiprecision arithmetic. Hence, the cost of our arithmetic operations depends on the number of digits, which we always denote P , of the quantities we are working with.
Precision
We introduce the notion of precision as follows:

Definition 0.3.1. Let α ∈ R; we say that α̃ is an approximation of α with absolute precision P if |α̃ − α| ≤ 2^{-P}. We say that α̃ is an approximation of α ∈ R* with relative precision P if |(α̃ − α)/α| ≤ 2^{-P}.
Throughout this manuscript, we will consider the problem of computing quantities with absolute precision P, that is to say, computing P exact bits after the radix point, or computing the quantity up to 2^{-P}. This choice is largely without practical consequences: most of the quantities we will compute in this thesis have integral part of bounded size, and hence we could have just as easily required a relative precision P + c with c a small constant. The notion of absolute precision for complex numbers can either be transposed as "the norm of the difference is smaller than 2^{-P}" or "the real part and the imaginary part of the approximation are correct up to 2^{-P}"; these notions only differ by a √2 factor, which is not very important in most cases. Because of this choice, we will work using fixed-point arithmetic instead of the commonly used floating-point arithmetic. This simply means that the quantities we will work with will be of the form a × 2^{-P} for precision P, where a is an integer. This choice is consistent with our choice of working with absolute precision, since every number which can be represented in fixed-point arithmetic of absolute precision P is distant from its neighbours by exactly 2^{-P}, whereas the distance between two consecutive floating-point numbers can be greater than 2^{-P}, which fails to provide an approximation with absolute precision P.
As we mentioned, the quantities we will work on in this manuscript very often have a bounded size. We will frequently make the (sometimes implicit) assumption that we have taken into account the size of their integral part in the complexity; that is to say, that the integer a in the representation of the quantity has P + c bits, or even simply O(P) bits. Should a quantity have integral part of size larger than O(P), we will mention it and adjust the complexity accordingly; we ask the reader to assume that, if nothing is mentioned, the integral part can be coded on o(P) or O(P) bits, which means that the integer a of the fixed-point representation has O(P) bits.
Loss of precision
We now define the notion of loss of precision, which will be discussed extensively throughout this manuscript.
Definition 0.3.2. Three different notions of loss of precision can be defined:
• mathematical loss of precision: let x̃ be an approximation of x ∈ C with absolute precision P, and f a complex function. The mathematical loss of precision is c ≥ 0 such that |f(x) − f(x̃)| ≤ 2^{-P+c}. This is also called the forward error [START_REF] Nicholas | Accuracy and Stability of Numerical Algorithms[END_REF].
• loss of precision induced by the algorithm: let f be a complex function and A an algorithm to compute f which works using exact arithmetic, i.e. we assume the arithmetic operations in A always give exact results. The loss of precision induced by the algorithm is c ≥ 0 such that |f(x) − A(x)| ≤ 2^{-P+c} for all x ∈ C. This can be rephrased as the quality of the approximation of f provided by A.
• loss of precision induced by rounding: let x be a P-bit number, A an algorithm which works using exact arithmetic (i.e. using as many digits as needed to represent the result of arithmetic operations), and A_P the same algorithm but in which arithmetic operations give a result rounded with precision P. The loss of precision induced by rounding is c ≥ 0 such that |A(x) − A_P(x)| ≤ 2^{-P+c}.
The approach which has been taken throughout this manuscript is as follows: we do not take into account the mathematical loss of precision; all our algorithms induce no loss of precision (i.e. assuming exact arithmetic, they would return a result which is always within 2^{-P} of the value which is sought); the loss of precision induced by rounding is analyzed and compensated for. Hence, our goal is to provide algorithms to compute an approximation of f(x̃) which is within 2^{-P} of that value, i.e. of the exact value of f evaluated at the argument. Should one require an approximation of f(x) within 2^{-P} of that value, i.e. the correct value with absolute precision P, they should analyze the mathematical loss of precision (say, c bits) and provide an approximation x̃ of x with precision P + c.
In the rest of this manuscript, "loss of precision" will be understood as meaning "loss of precision induced by rounding". We will state all our algorithms as taking P-bit numbers as inputs; we analyze the loss of precision induced by rounding throughout the algorithm and bound it in the worst case, to ensure our algorithms always return approximations within 2^{-P} of the results.
Loss of precision induced by rounding
By loss of absolute precision induced by rounding, we mean the error in the final result which is due to the fact that we worked on approximations of the input coded on P bits, and that all the results of intermediate computations in the algorithm were rounded to give P-bit numbers. For instance, dividing a number by 2 then multiplying it by 2 loses one bit of precision, because the intermediate result was rounded; this is inherent to working with P-bit numbers.
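The toy example above can be replayed in the hypothetical Fixed class sketched in the previous subsection:

```python
# Halving rounds away the last bit, so doubling does not recover it:
x = Fixed(0b1011, 4)               # 11/16 with absolute precision P = 4
half = Fixed((x.a + 1) >> 1, 4)    # divide by 2, rounding to 4 bits
back = Fixed(half.a << 1, 4)       # multiply by 2 (exact)
print(x.a, back.a)                 # 11 vs 12: an error of 2^-4 was created
```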
We analyze here the loss of precision induced by elementary functions, which we use as building blocks for all our algorithms; the following theorem shows how an error on a rounded number is transformed by the computation of a rounded result. The analysis of precision loss in our algorithms is then a matter of combining these results to bound how large the errors can get.
Theorem 0.3.3. For j = 1, 2, let z_j = x_j + iy_j ∈ C and z̃_j = x̃_j + iỹ_j its approximation. Suppose that |z_j − z̃_j| ≤ k_j 2^{-P} and that k_j ≤ 2^{P/2}. Suppose furthermore that k_j 2^{-P} ≤ |z_j| ≤ 2^{P/2−3}. Then
1. |Re(z₁ + z₂) − Re(z̃₁ + z̃₂)| ≤ (k₁ + k₂) 2^{-P}
2. |Re(z₁z₂) − Re(z̃₁z̃₂)| ≤ (2 + 2k₁|z₂| + 2k₂|z₁|) 2^{-P}
3. |Re(z₁²) − Re(z̃₁²)| ≤ (2 + 4k₁|z₁|) 2^{-P}
with the same bounds applying to imaginary parts, and
4. |e^{z₁} − e^{z̃₁}| ≤ |e^{z₁}| (7k₁ + 8.5)/2 · 2^{-P}.
Furthermore if |z_j| ≥ 2k_j 2^{-P},
5. |Re(z₁/z₂) − Re(z̃₁/z̃₂)| ≤ ( 6(2 + 2k₁|z₂| + 2k₂|z₁|)/|z₂|² + (2(4 + 8k₂|z₂|)(2|z₁||z₂| + 1) + 2)/|z₂|⁴ ) 2^{-P}
and the same bound applies to the imaginary part, and
6. |√z₁ − √z̃₁| ≤ (k₁/√|z₁|) 2^{-P}.
Proof. See Appendix A for a proof, inspired by the techniques in [ETZ].
We will make use of this theorem to bound the amount of precision lost during our calculations.
Remark 0.3.4. Note that not every operation creates a loss of absolute precision. For instance, multiplying by a number much smaller than 1 can produce a result accurate to full precision from operands that were not; the same goes for the computation of the square root of a large number.
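In practice, the bounds of Theorem 0.3.3 are combined mechanically; the following Python sketch (the class name Tracked is ours) carries the error bound k, in units of 2^{-P}, alongside each approximate value and updates it using items 1 and 2 of the theorem.

```python
# Carry |z_true - z| <= k * 2^(-P) alongside each approximation z.
class Tracked:
    def __init__(self, z, k):
        self.z, self.k = z, k

    def __add__(self, other):
        # Item 1 of Theorem 0.3.3: bounds simply add up.
        return Tracked(self.z + other.z, self.k + other.k)

    def __mul__(self, other):
        # Item 2 of Theorem 0.3.3.
        k = 2 + 2 * self.k * abs(other.z) + 2 * other.k * abs(self.z)
        return Tracked(self.z * other.z, k)
```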
Guard bits
In order to compensate for precision loss, we use the notion of guard bits:
Definition 0.3.5 (guard bits). Let A be an algorithm on complex numbers with exact arithmetic, and denote by A_P the same algorithm which operates on numbers with P bits of absolute precision (i.e. in which inputs are of precision P and all the intermediate quantities are rounded off to precision P). Let x be a complex number with P bits after the radix point. We say that a computation is performed with C guard bits to denote the following process:
• Put x̃ the number x (with P bits after the radix point) followed by C + 1 zeros, so that x̃ is a number with P + C + 1 bits after the radix point and an approximation of x with precision P + C + 1;
• Compute α_{P+C} = A_{P+C}(x̃) with precision P + C + 1;
• Round off the result with precision P to get a number α_P.
If the computation of f loses C bits, then |A(x) − α_P| ≤ 2^{-P}.
This means that losses of precision throughout the computation can be compensated, and the final result is then an approximation of the result with absolute precision P . However, this means the precision at which one works increases, which can have an impact on the asymptotic cost of the algorithm; analyzing this and finding ways to reduce the precision losses becomes important in some algorithms.
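As an illustration, here is how the guard-bit mechanism can be realized with the mpmath multiprecision library, for a hypothetical routine A (taking and returning mpmath numbers) whose roundings lose at most C bits; the function name with_guard_bits is ours.

```python
from mpmath import mp, mpf

def with_guard_bits(A, x, P, C):
    mp.prec = P + C + 1     # extend the working precision by C + 1 bits
    result = A(mpf(x))      # all intermediate roundings happen at P + C + 1
    mp.prec = P
    return +result          # unary + re-rounds to the current precision P
```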
Cost of computations
Working with fixed-point arithmetic representations of the form a × 2^{-P}, with a an integer, means that the operations on these representations are essentially reduced to operations on integers. For instance, adding two fixed-point numbers of (absolute) precision P can be done in O(P) bit operations.
As we mentioned in the previous subsection, we will assume that we are working with fixed-point numbers of absolute precision P whose integral part can be represented in at most O(P) bits. The complexity of multiplying such numbers is then O(M(P)), where M(P) is defined as follows:
Definition 0.3.6. Denote by M(P) the number of bit operations needed to compute the product of two P-bit integers. We have
• M(P) = O(P²) if the naive (schoolbook) algorithm is used;
• M(P) = O(P^{log₂ 3}) = O(P^{1.58}) if Karatsuba's algorithm is used;
• M(P) = O(P log P log log P) = O(P^{1+ε}) if the algorithm of Schönhage-Strassen is used;
• M(P) = O(P log P · 2^{O(log* P)}) if Fürer's algorithm is used.
For more details on these algorithms and their implementations, we refer to [START_REF] Brent | Modern Computer Arithmetic[END_REF] and [START_REF] Fürer | Faster integer multiplication[END_REF].
Newton's method
Finally, we mention that Newton's algorithm can be used to compute some quantities with precision P , for a cost which is similar to the cost of computing the function one inverts.
The following theorem is at the basis of the analysis of Newton's method; we present the one-dimensional case here, but the proof (presented in [BCSS97, Section 8.1]) can be immediately generalized to higher-dimensional spaces.
Theorem 0.3.7 ([BCSS97, Chapter 8]). Let f : C → C be an analytic function, and define
γ(f, z) = sup_{k≥2} | f^{(k)}(z) / (f′(z) k!) |^{1/(k−1)}.
Define N_f(z) = z − f(z)/f′(z) and consider the sequence defined by an initial value z₀ and the relation z_{n+1} = N_f(z_n). Let ζ be such that f(ζ) = 0; then
|z₀ − ζ| ≤ (3 − √7)/(2γ(f, ζ)) ⇒ |z_n − ζ| ≤ (1/2)^{2^n − 1} |z₀ − ζ|.
Hence, provided that z 0 is close enough to the zero of f , Newton's method gives an approximation of the zero with an accuracy which roughly doubles at each step.
We note the following corollary, which we will use often:
Corollary 0.3.8. Let f : C → C be an analytic function and x such that f(x) = 0. Let x̃ be an approximation of x with precision P. Then computing N_f(x̃) at precision 2P gives an approximation of x with precision 2P − δ, where δ > 0 depends only on f (and precision losses) and x.
Hence, one step of Newton's method "lifts" an approximation with precision P into an approximation with absolute precision roughly 2P. In practice, computing δ can be done using the following procedure (described in e.g. [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF]): if x and N_f(x) agree to k bits, and N_f(x) and N_f(N_f(x)) agree to k′ bits, we have δ = 2k − k′.
This allows us to prove the following result, giving the complexity of applying Newton's method:
Theorem 0.3.9. Let f : C → C be an analytic function and x such that f(x) = 0. Denote by C(f, P) the cost of evaluating f/f′ with arguments of precision P, and suppose that 2C(f, P) ≤ C(f, 2P). Then one can compute an approximation of x with precision P in O(C(f, P)) operations.
Proof. We give a sketch of the proof; a very similar result is proved in detail in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF], Thm. 1.2]. Denote δ the number as in Corollary 0.3.8; note that δ is a constant with respect to P. Let P₀ be sufficiently large so that Theorem 0.3.7 applies; we furthermore impose P₀ > δ. Note again that P₀ is a constant with respect to P. We start by computing an approximation x_{P₀} of x with precision P₀; the cost of computing this approximation (by any method) only depends on P₀, and is hence a constant with respect to P. Computing N_f(x_{P₀}) with precision 2P₀ gives an approximation of x with absolute precision 2P₀ − δ > P₀; one can then repeat the process k times in a row. The cost of applying this process is
C(f, P₀) + C(f, 2P₀ − δ) + ... + C(f, 2^k P₀ − (2^k − 1)δ) ≤ 2C(f, 2^k P₀).
Taking k = O(log P ) is enough to get a result which is an approximation of x with P bits of precision; the total cost is then O (C(f, P )).
This result means that applying Newton's method while doubling the working precision at each step is only as costly as the last, full-precision step. Note that applying Theorem 0.3.7 directly, i.e. computing each iteration at full precision, gives a O(C(f, P) log P) cost, which is not as good.
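To illustrate the precision-doubling strategy, here is a Python sketch using mpmath, applied to the square root via Heron's iteration; the starting precision P0 and the 10 guard bits are arbitrary choices made for the example, not values prescribed by the analysis above.

```python
from mpmath import mp, mpf

def newton_sqrt(x, P, P0=30):
    mp.prec = P0
    z = mpf(x) ** 0.5                # low-precision starting value
    prec = P0
    while prec < P:
        prec = min(2 * prec, P)      # double the working precision
        mp.prec = prec + 10          # a few guard bits against rounding loss
        z = (z + mpf(x) / z) / 2     # Heron step: N_f for f(t) = t^2 - x
    mp.prec = P
    return +z                        # round the final result to P bits

print(newton_sqrt(2, 200))           # sqrt(2) to 200 bits
```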
A direct application is that the division of two fixed-point numbers of precision P can be carried out in O(M(P)) bit operations, by inverting the function f_z(t) = 1/t − z via Newton's method. Furthermore, the square root can also be computed in O(M(P)) bit operations, either by applying Newton's method directly (which gives in this case Heron's method), or by computing the inverse square root using Newton's method and multiplying it by the number; as explained in [BZ10, Section 4.2.3], the latter is more efficient in practice (because there are then no divisions in Newton's method), but both methods have the same asymptotic cost of O(M(P)).
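A minimal sketch of the division-free Newton inversion just mentioned, with plain Python floats for readability; the starting value t0 must satisfy |1 − z·t0| < 1 for the iteration to converge, and the error squares at each step.

```python
def invert(z, t0, steps=6):
    t = t0
    for _ in range(steps):
        t = t * (2 - z * t)   # Newton step for f_z(t) = 1/t - z: no division
    return t

print(invert(42.0, 0.02))     # converges to 1/42 = 0.0238095...
```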
Another application of Newton's method, as we explain in Chapter 3, is the computation of exp(z) with precision P in the same amount of time as log(z), which is O(M(P ) log P ) (which we call quasi-linear time or quasi-optimal time).
Finally, an interesting application of Newton's method is to prove that any algebraic function (i.e. any function which can be defined as the root of a polynomial equation) over Q[X] can be computed with absolute precision P in O(M(P)) bit operations [START_REF] Borwein | Pi and the AGM: a study in the analytic number theory and computational complexity[END_REF], Theorem 6.4]. We will see in this manuscript that a large number of transcendental functions (π, log, exp, θ) can be computed in O(M(P) log P) bit operations; proving that they cannot be computed in O(M(P)) bit operations would be another proof of their transcendence, but we are nowhere close to obtaining such facts.
Chapter 1
Background on elliptic and hyperelliptic curves
In this introductory chapter, we take a look at elliptic and hyperelliptic curves, which have been studied extensively over the years, in particular because of their use in cryptography. One of the ultimate goals of this manuscript is to describe a new algorithm to compute isogenies, which are morphisms between elliptic curves with interesting applications; it led us to investigate related problems, among which the computation of the theta function and of the Abel-Jacobi map. We start this chapter by giving a brief overview of elliptic curves defined over any field; then, we discuss isogenies and their applications, and survey the state of the art in computing them in several settings. In another section, we discuss complex elliptic curves, for which another representation is available, with interesting computational consequences. We then end the chapter with a discussion of hyperelliptic curves.
Elliptic curves and isogenies
We define elliptic curves over any field, then discuss isogenies; we also survey the state of the art for their computation in several settings. We refer the reader to [CFA + 10, Sil86] for more details.
Elliptic curves over a field
Definition 1.1.1 ([CFA + 10, Section 13.1.1]). Let K be a field. An elliptic curve E over K is the set of points [X : Y : Z] ∈ P²(K̄) satisfying the equation
Y²Z + a₁XYZ + a₃YZ² = X³ + a₂X²Z + a₄XZ² + a₆Z³
with a_i ∈ K. We call O = [0 : 1 : 0] the point at infinity. We denote E(K) the set of K-rational points, the points of the elliptic curve which are defined over K, i.e. for which there is λ ∈ K̄ such that λX, λY, λZ ∈ K.
For ease of notation, we will often write the equation using non-homogeneous coordinates (x = X/Z, y = Y/Z):
y² + a₁xy + a₃y = x³ + a₂x² + a₄x + a₆,
which works for all the points except the point at infinity.
Remark 1.1.2. In the remainder of this manuscript, we will assume that char(K) ≠ 2, 3. Hence, after a change of variables, every elliptic curve can be written in a short Weierstrass form:
y² = x³ + ax + b.
There exists a group law making the elliptic curve an abelian group; this fundamental property is the basis of the cryptographic applications of elliptic curves.
Proposition 1.1.3. Let E be an elliptic curve over K. There exists a map E × E → E, the chord-and-tangent process, which defines a commutative group law on E. Furthermore the group law preserves K-rationality, i.e. it gives a group law E(K) × E(K) → E(K).
The chord-and-tangent process is described in many references, such as [Sil86, Section III.2], [Gal12, Section 7.9] or [CFA + 10, Section 13.1.1]. We do not describe it explicitly in this manuscript, as we do not use it; its existence is sufficient for our purposes.
The elliptic curve discrete logarithm problem (ECDLP)
Elliptic curves over finite fields have found an application in the last decades in the field of cryptography; indeed, cryptosystems based on elliptic curves are among the most widely deployed, and are supported in many different standards and applications. Like all public-key cryptography schemes, their security relies on a hardness assumption, i.e. a problem that is believed to be hard, and such that breaking the cryptographic scheme seems to be as hard as solving this problem. In the case of elliptic curve cryptography, the problem is
Definition 1.1.4. Let E(F_p) : y² = x³ + ax + b with a, b ∈ F_p. The elliptic curve discrete logarithm problem is the following problem:
Given P ∈ E(F_p) and Q = [n]P, compute n.
We sometimes write n = log_P Q, the discrete logarithm of Q with respect to the base point P. We outline a few facts on the state of the art on solving the ECDLP; for a recent and more in-depth review, we refer to [START_REF] Galbraith | Recent progress on the elliptic curve discrete logarithm problem[END_REF].
In some clearly-identified cases, the elliptic curve discrete logarithm problem (ECDLP) is easy to solve; this is the case for instance for curves such that #E(F_q) ∈ {q, q + 1}, and such cases should be avoided for cryptography. Furthermore, the Silver-Pohlig-Hellman algorithm [CFA + 10, Section 19.3] allows one to solve the discrete logarithm problem in O(√n) operations, where n is the largest prime factor of the order of the group; hence, one important guideline to follow is to work in a subgroup of points of the elliptic curve that has a large prime order. A frequent case is to pick a curve whose number of points is "almost a prime", that is to say #E(F_q) = cp where c is a small number (for instance c = 16) and p is a prime.
Pollard's rho algorithm is currently one of the best general-purpose algorithms to solve the ECDLP in a subgroup of prime order r; it has an average-case complexity of (√2 + o(1))√r operations [START_REF] Galbraith | Recent progress on the elliptic curve discrete logarithm problem[END_REF]. An interesting improvement over this approach is the "two grumpy giants and a baby" algorithm of [START_REF] Bernstein | Two grumpy giants and a baby[END_REF], an improvement over the "baby-steps giant-steps" approach. The analysis of [START_REF] Galbraith | Computing elliptic curve discrete logarithms with improved baby-step giant-step algorithm[END_REF] shows one can expect an average running time of (1.26 + o(1))√r steps, an improvement over Pollard's rho. The article [START_REF] Galbraith | Computing elliptic curve discrete logarithms with improved baby-step giant-step algorithm[END_REF] exhibits further improvements, bringing the average complexity down to (0.38 + o(1))√r for Pollard's rho and (0.36 + o(1))√r for the "two grumpy giants and a baby" approach.
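For concreteness, here is a baby-step giant-step sketch in Python, written in the multiplicative group of F_p as a stand-in for the elliptic curve group (the function name bsgs is ours); it solves g^n = h using O(√r) group operations and O(√r) storage.

```python
from math import isqrt

def bsgs(g, h, r, p):
    m = isqrt(r) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    giant = pow(g, -m, p)                        # g^(-m) (Python >= 3.8)
    y = h
    for i in range(m):                           # giant steps: h * g^(-im)
        if y in baby:
            return i * m + baby[y]
        y = y * giant % p
    return None

# Example: recover 77 from 2^77 mod 1009 (modulo the order of 2).
print(bsgs(2, pow(2, 77, 1009), 1008, 1009))
```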
Hence, for a curve which has a prime-order subgroup of order 2^n, breaking the ECDLP costs around 2^{n/2} operations. By contrast, algorithms to factor n-bit numbers (and which thus break the RSA cryptosystem) require roughly e^{1.92 (log n)^{1/3} (log log n)^{2/3}} operations, which is a much faster, subexponential complexity. Comparing those complexities gives the result that the key size for a cryptosystem whose security relies on the ECDLP grows as slowly as the cube root of the key size for the RSA cryptosystem; this allows cryptosystems based on elliptic curves to have much smaller key sizes than other cryptosystems, which compensates for a costlier group law. In practice, a few curves have been standardized by NIST, defined over F_p for p a prime number of size 192, 224, 256, 384, or 521 bits. For reference, the largest instance of the ECDLP ever solved for a curve over F_p was for a subgroup with a 112-bit prime order [START_REF] Bos | PlayStation 3 computing breaks 2 60 barrier -112-bit prime ECDLP solved[END_REF], while the largest RSA instance ever broken was for a key of size 768 bits [KAF + 10].
Isogenies
This section deals with isogenies, which are essentially morphisms transporting the group law of a curve onto another one. Those maps are very useful in many domains related to elliptic curve cryptography; we detail in Section 1.2 some applications of isogenies. We introduce isogenies and state the most important mathematical theorems related to them. We do not present proofs, as those are presented in a number of textbooks; our presentation follows the one of [START_REF] Joseph | The Arithmetic of Elliptic Curves[END_REF].
Definition 1.1.5 ([START_REF] Joseph | The Arithmetic of Elliptic Curves[END_REF], III.4]). Let E₁, E₂ be two elliptic curves. An isogeny from E₁ to E₂ is a morphism φ : E₁ → E₂ such that φ(O_{E₁}) = O_{E₂} and φ(E₁) ≠ {O_{E₂}}. We then say that E₁ and E₂ are isogenous if there is an isogeny from E₁ to E₂. This property implies that isogenies transport the group law from one elliptic curve to another one, as follows:
Theorem 1.1.6 ([Sil86, Theorem 4.8]). Let φ : E 1 → E 2 be an isogeny. Then for any P, Q ∈ E 1 , φ(P + Q) = φ(P ) + φ(Q).
We then define the notion of degree of isogenies:
Definition 1.1.7 ([START_REF] Joseph | The Arithmetic of Elliptic Curves[END_REF], III.4]). An isogeny induces an injection of function fields φ* : K(E₂) → K(E₁). We then define the degree of φ as
deg φ = [K(E₁) : φ*K(E₂)].
In all that follows, we may write "φ an n-isogeny" for "φ an isogeny of degree n".
Remark 1.1.8. An important example of an isogeny is the multiplication-by-m map [m] : P ↦ P + ... + P (m times), which is an isogeny from a curve to itself. Note that deg([m]) = m².
Remark 1.1.9. We call isogenies φ : E → E endomorphisms, and consider the endomorphism ring End(E) of the curve. Usually, the only endomorphisms are the multiplication-by-m maps, and End(E) ≃ Z; however, for some curves, the endomorphism ring is larger, and we then say that the curve has complex multiplication. This is for instance the case of curves over finite fields, which have at least the Frobenius endomorphism φ : (x, y) ↦ (x^p, y^p). We refer to [Sil86, Section III.9 and Section V.3] for more on endomorphism rings in finite fields, and to [START_REF] Kohel | Endomorphism rings of elliptic curves over finite fields[END_REF][START_REF] Sutherland | Isogeny volcanoes[END_REF] for explicit algorithms.
Theorem 1.1.10 ([Sil86, Theorem III.4.12]). Let E be an elliptic curve and let Φ be a finite subgroup of E. Then there exists a unique elliptic curve E′ and a (separable) isogeny φ : E → E′ such that Ker φ = Φ.
An explicit construction of the curve and the isogeny is given in Section 1.1.3.
Theorem 1.1.11 ([Sil86, Theorem III.6.1]). Let φ : E₁ → E₂ be an isogeny of degree m. Then there is a unique isogeny φ̂ : E₂ → E₁ such that φ̂ ∘ φ = [m]. We call φ̂ the dual isogeny of φ.
This result has many applications; for instance, we note that it has been used in [START_REF] Doche | Efficient scalar multiplication by isogeny decompositions[END_REF] to propose curves where doubling and tripling are sped up by computing [2] and [3] using specific isogenies and their dual instead of adding points using the classical chord-and-tangent process.
Remark 1.1.12 ([Sut13, p. 6]). Note that for any prime ℓ that does not divide the characteristic of the field, there are ℓ² − 1 non-zero ℓ-torsion points, and hence ℓ + 1 cyclic subgroups of order ℓ. Each of those subgroups is the kernel of an ℓ-isogeny over K̄. Furthermore, every ℓ-isogeny φ from E arises this way, since P ∈ Ker φ ⇒ [ℓ]P = φ̂ ∘ φ(P) = 0. The isogeny is defined over K if the subgroup is Galois-invariant; [Sut13, Lemma 2] shows there are 0, 1, 2 or ℓ + 1 such isogenies defined over K. This property is useful in the context of the SEA algorithm (see Section 1.2.2).
The following theorem gives a criterion to recognize whether two curves over a finite field are isogenous:
Theorem 1.1.13 (Tate). Let E, E′ be two elliptic curves over F_q. Then E and E′ are isogenous if and only if #E(F_q) = #E′(F_q). Hence, counting points of both elliptic curves is enough to determine whether they are isogenous or not. This can be accomplished in polynomial time using the Schoof-Elkies-Atkin (SEA) algorithm, which has complexity O(log^{4+ε} p) [START_REF] Blake | Elliptic Curves in Cryptography[END_REF], an improvement over the O(log^{5+ε} p) complexity of the original Schoof algorithm [START_REF] Schoof | Counting points on elliptic curves over finite fields[END_REF]. We discuss this algorithm in Section 1.2.2, as the algorithm itself has a connection with the computation of isogenies.
Finally:
Theorem 1.1.14 ([CFA + 10, Corollary 4.76]). Any isogeny can be decomposed into a sequence of isogenies of prime degree.
This allows us to assume in some applications that the isogeny has prime degree. Also, the decomposition of an isogeny into isogenies of prime degree can be translated in terms of paths in isogeny graphs; see Section 1.1.5 for more details.
Remark 1.1.15. An ℓ-isogeny between elliptic curves in their short Weierstrass form can be written as
φ(x, y) = ( g(x)/h(x)², y·k(x)/h(x)³ ),
with h(x) a polynomial defined in the next section; furthermore, we have deg h = (ℓ − 1)/2, deg g = ℓ, and deg k = (3ℓ − 1)/2. Another way to write this formula is given in the next section.
The problem of isogeny computation can be split into three different settings, depending on what is known and what is sought. We summarize these settings in three different problems, from easiest to hardest: "finding an ℓ-isogenous curve and the isogeny" (Problem 1.1.16), "computing an ℓ-isogeny between two given curves" (Problem 1.1.19) and "computing an isogeny between two given curves" (Problem 1.1.20). We take a look at these problems in the following sections.
Finding an ℓ-isogenous curve: Vélu's formulas
We first consider the following problem:
Problem 1.1.16. Let ℓ be an odd prime. Given:
• an elliptic curve E defined over an algebraically closed field K,
• a subgroup S ⊂ E(K) of cardinality ℓ (or, alternatively, an ℓ-torsion point P, in which case S = {[i]P}),
compute:
• a curve E′ that is ℓ-isogenous to E;
• an ℓ-isogeny φ : E → E′ such that Ker φ = S.
This problem can be solved explicitly using Vélu's formulas, first published in [START_REF] Vélu | Isogénies entre courbes elliptiques[END_REF]. Recall that we suppose here that ℓ is an odd prime and that the curve is in short Weierstrass form. A more general presentation of those formulas can be found in [START_REF] Vélu | Isogénies entre courbes elliptiques[END_REF][START_REF] Lercier | Algorithmique des courbes elliptiques dans les corps finis[END_REF], and proofs are presented in e.g. [START_REF] Washington | Elliptic Curves: Number Theory and Cryptography[END_REF].
The isogeny φ is:
φ : E(K) → E′(K)
O ↦ O′
P = (X_P, Y_P) ↦ ( X_P + Σ_{Q∈S\{O}} (X_{P+Q} − X_Q), Y_P + Σ_{Q∈S\{O}} (Y_{P+Q} − Y_Q) )
The coefficients defining E′ and those defining φ can be recovered using the following theorem:
Theorem 1.1.17 ([Ler97, Theorem 39]). Let S be a subgroup of E(K) of cardinality ℓ. Define R as a set satisfying S \ {O} = R ∪ (−R) with R ∩ (−R) = ∅. For any Q = (X_Q, Y_Q) ∈ S \ {O}, define:
t_Q = 6X_Q² + 2a,  u_Q = 4(X_Q³ + aX_Q + b),
and t = Σ_{Q∈R} t_Q and w = Σ_{Q∈R} (u_Q + X_Q t_Q). Define
E′ : y² = x³ + (a − 5t)x + (b − 7w);
then the isogeny is given by
φ(P)_X = X_P + Σ_{Q∈R} ( t_Q/(X_P − X_Q) + u_Q/(X_P − X_Q)² )
φ(P)_Y = Y_P ( 1 − Σ_{Q∈R} ( t_Q/(X_P − X_Q)² + 2u_Q/(X_P − X_Q)³ ) )
Since #S = ℓ is an odd prime, any non-trivial point of S generates the group and is an ℓ-torsion point; hence, knowing just one point of S is enough to compute the whole subgroup. This allows one to compute the polynomial
h(x) = ∏_{Q∈R} (x − X_Q)
whose square is the denominator of the x-coordinate of the isogeny. One can then recover t, w and the coefficients of the rational function defining φ from the coefficients of h; we refer to [Ler97, Chapter 4] for details. Applying Vélu's formulas requires O(ℓ) multiplications in C. We then estimate the cost of writing the isogeny in the form given in Remark 1.1.15; one could think of interpolating the rational function, or simply computing it from the shape given by the formulas (which requires computing ∏_{Q≠Q_i} (X − X_Q)). In both cases, one uses methods related to remaindering trees (see [START_REF] Von | Modern computer algebra[END_REF], Chapter 10, or Section 9.2.3) to get the best complexity, which is O(M(ℓ) log ℓ) field operations.
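As an illustration, the following Python sketch (function names are ours) applies Theorem 1.1.17 over a prime field F_p, given a point P0 of odd prime order ℓ generating the kernel; it returns the coefficients (a′, b′) of the ℓ-isogenous curve E′.

```python
def add(P, Q, a, p):
    # Chord-and-tangent addition on y^2 = x^3 + a*x + b over F_p; None is O.
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P; x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def velu(a, b, P0, ell, p):
    # Coefficients of E', the curve ell-isogenous to E with kernel <P0>.
    t = w = 0
    Q = P0
    for _ in range((ell - 1) // 2):   # R: one point per pair {Q, -Q}
        xq = Q[0]
        tq = (6 * xq * xq + 2 * a) % p
        uq = 4 * (xq**3 + a * xq + b) % p
        t = (t + tq) % p
        w = (w + uq + xq * tq) % p
        Q = add(Q, P0, a, p)
    return ((a - 5 * t) % p, (b - 7 * w) % p)
```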
Remark 1.1.18. A careful inspection of Vélu's formulas reveals that the isogeny is actually of the shape
φ(x, y) = ( g(x)/h(x)², y (g(x)/h(x)²)′ ),
with deg g = ℓ and deg h = (ℓ − 1)/2. This simplifies the computations, as it means one need only compute the x-coordinate of the isogeny. We make use of this in Chapter 9.
Computing an ℓ-isogeny
We now consider a slightly harder problem:
Problem 1.1.19. Given:
• two elliptic curves E and E′, defined over K,
• a prime integer ℓ such that E and E′ are ℓ-isogenous,
compute:
• an ℓ-isogeny φ : E → E′.
This problem first appeared in the context of speeding up Schoof's algorithm to compute the number of points of an elliptic curve; the polynomial h(x) is used to reduce the cost of polynomial arithmetic in the SEA algorithm (see Section 1.2.2 for details). Several algorithms exist to solve this problem; however, their complexity is more or less advantageous depending on the characteristic of K.
The first case is the large characteristic case, in which char(K) is either much larger than ℓ, or 0. The complexities here are given in terms of numbers of field operations. Several methods to solve this problem have been proposed over the years; we refer to [START_REF] Bostan | Fast algorithms for computing isogenies between elliptic curves[END_REF] for a more in-depth review of each algorithm, including pseudocode. We note that the first algorithm of Elkies, which consists in differentiating the differential equation satisfied by ℘ (see Section 1.3.4), and Atkin's algorithm, which consists in computing the exponential of a power series, are two methods which were proposed in the context of the SEA algorithm; we mention why these algorithms were needed in this context in Section 1.2.2. Both of these algorithms require O(ℓ²) field operations; this complexity is improved by the algorithm of [START_REF] Bostan | Fast algorithms for computing isogenies between elliptic curves[END_REF], which uses fast algorithms for the computation of power series to achieve a O(M(ℓ) log ℓ) running time, that is to say a quasi-linear number of field operations with respect to the degree of the isogeny.
The other case is the small characteristic case, for instance K = F_q with q = p^n and p ≤ ℓ. Two approaches by Couveignes have been seminal in this case; these algorithms, originally published in [START_REF] Couveignes | Quelques calculs en théorie des nombres[END_REF] and [START_REF] Couveignes | Computing -isogenies using the p-torsion[END_REF], are also discussed in [START_REF] Lercier | Algorithmique des courbes elliptiques dans les corps finis[END_REF]. The second algorithm relies on the computation of the isogeny from its image on the p^k-torsion points, which are defined over an extension of F_q; these groups are cyclic, and the algorithm roughly attempts to map a generator of the p^k-torsion group of E to a similar generator in E′, then interpolate the isogeny and check whether it is the right one. The computation on points of p^k-torsion is costly, and one should use computations in Artin-Schreier extensions to mitigate memory requirements, as described (along with several other improvements) in [START_REF] De | Fast algorithms for computing isogenies between ordinary elliptic curves in small characteristic[END_REF]; the complexity of this algorithm is O(ℓ² log q) operations in F_p. These algorithms assume p is fixed, as the dependency in log p is exponential; however, we note that a recent paper [START_REF] De Feo | Explicit isogenies in quadratic time in any characteristic[END_REF] outlines a new algorithm with similar running time but without an exponential complexity in log p.
Computing an isogeny
The final problem we consider here is the hardest one:
Problem 1.1.20. Given:
• two isogenous curves E and E′, defined over K,
compute:
• an isogeny φ : E → E′.
We consider here the case where K = F_q. This problem is then much harder than Problems 1.1.16 and 1.1.19. In fact, the best known algorithm to solve it has exponential complexity, and [Gal99, Section 8] gives arguments showing that this problem may not have polynomial-time solutions in the general case. Probably due to this, a few cryptosystems that take this problem as the basis for their hardness assumption have been proposed [START_REF] Stolbunov | Cryptographic schemes based on isogenies[END_REF][START_REF] De Feo | Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies[END_REF]; thus any improvement on this problem has security implications for those cryptosystems.
Since an isogeny can be decomposed as a sequence of isogenies of prime degree by Theorem 1.1.14, the strategy that is generally employed is to consider ℓ-isogeny graphs for ℓ prime, and to attempt to navigate them.
Definition 1.1.21 ([Sut13, Def. 1]). An ℓ-isogeny graph is a graph (V, E) with
V = { elliptic curves over K up to K̄-isomorphism } ≅ K (via the j-invariant),
E = { pairs (j₁, j₂) of j-invariants of ℓ-isogenous curves }.
The isogeny graph is then the union of all ℓ-isogeny graphs for all primes ℓ.
The problem of constructing an isogeny between two curves is then rephrased as the problem of finding a path connecting the two curves in the isogeny graph.
Note that curves that are isogenous to a supersingular curve are themselves supersingular; hence, connected components are either ordinary or supersingular. The supersingular components have different properties from the ordinary components, which is why we distinguish both cases. We mention algorithms on both classical and quantum computers, since the hardness of the problem has led some to investigate its complexity on quantum computers, in an attempt to determine whether cryptosystems based on this hard problem would be quantum-resistant.
Ordinary case
In the ordinary case, ℓ-isogeny graphs have a "volcano" structure. We refer to [START_REF] Sutherland | Isogeny volcanoes[END_REF] for more precise statements of the properties in this section.
Definition 1.1.22 ([START_REF] Sutherland | Isogeny volcanoes[END_REF], Def. 1]). An ℓ-volcano is a connected undirected graph partitioned into d + 1 levels V₀, ..., V_d such that
• V₀, the surface, is a cycle (or more generally a regular graph of degree at most 2);
• each vertex in V_{i+1} has exactly one neighbour in V_i, and this accounts for every edge not on the surface;
• each vertex except those in V_d has degree ℓ + 1.
As outlined in [Koh96, Chapter 4], any ordinary component of the ℓ-isogeny graph is an ℓ-volcano; furthermore, all vertices at a given level share the same endomorphism ring. An ℓ-isogeny φ : E → E′ is said to be "ascending" if End(E′) is larger than End(E), descending if it is smaller, and horizontal if it is the same, which only happens at the surface; moreover, one cannot ascend further than the surface of the volcano.
Given this structure, the general strategy for solving Problem 1.1.20, i.e. finding φ between given curves E₁ and E₂, is as follows:
1. Compute an ascending chain of ℓ-isogenies from E₁ (resp. E₂) to E′₁ (resp. E′₂) such that End(E′₁) = O_K (resp. End(E′₂) = O_K). This is Kohel's algorithm [START_REF] Kohel | Endomorphism rings of elliptic curves over finite fields[END_REF].
2. We wish to reach vertices E′_i such that the endomorphism ring is maximal, i.e. End(E′_i) = O_K. The previous step only ensures that ℓ does not divide [O_K : End(E′_i)]; hence, one needs to repeat the previous step and ascend different ℓ-volcanoes. This part is the costliest asymptotically, but is actually very fast in practice.
3. Find a horizontal isogeny between E′₁ and E′₂.
The last step is the part which has been improved in different algorithms, which we mention here. The first algorithm has been the one of [START_REF] Steven | Constructing isogenies between elliptic curves over finite fields[END_REF], which uses a meet-in-the-middle strategy; it constructs isogeny trees starting at E₁ and E₂, using the following procedure: pick a prime number ℓ at random (in a carefully chosen set of primes) and construct all ℓ-isogenies starting from a node of each tree. This procedure needs to be iterated O(log p) times on average, but the size of the trees can be as big as O(p^{1/4+ε}). Once a match is found, each ℓ-isogeny in the path linking E₁ and E₂ is reconstructed using algorithms to solve Problem 1.1.19; this step is analyzed as costing O(p^{3/2} log p) operations in general, but this complexity can actually be made negligible if one assumes smoothness properties, which allow one to bound the maximal size of the primes. The algorithm is polynomial-time if the class number of the endomorphism ring is small, which is the case for instance for elliptic curves generated using the CM method.
One improvement over this algorithm is given in [START_REF] Galbraith | Extending the GHS Weil descent attack[END_REF]: instead of using isogeny trees, one can use random walks over the isogeny graph starting at E₁ and E₂. This makes storage polynomial (instead of exponential), but the number of expected steps before finding a collision in the walks is O(p^{1/4+ε}). This idea is combined with a representation of the isogeny as an ideal, which undergoes a smoothing step before the step which reconstructs the final isogeny; this step costs as much as the random walk, i.e. O(p^{1/4+ε}), but allows a faster reconstruction (using ideas resembling the ones from the SEA algorithm of Section 1.2.2). A variant of this algorithm, which saves a constant factor, is described in [START_REF] Galbraith | Improved algorithm for the isogeny problem for ordinary elliptic curves[END_REF].
Finally, we note that there is also a quantum algorithm to solve this problem, i.e. an algorithm running on a theoretical quantum computer instead of a classical one. The application of quantum computers to the resolution of these problems is certainly motivated by the need to find cryptosystems which rely on a hardness assumption that resists attacks using a quantum computer. For example, factoring integers and the ECDLP are both solvable in polynomial time on a quantum computer, while the Shortest Vector Problem in a lattice still requires exponential time. One algorithm to solve the problem we are considering here on a quantum computer has been proposed in [START_REF] Childs | Constructing elliptic curve isogenies in quantum subexponential time[END_REF]; its running time is subexponential, i.e. O(e^{√(3/2)·(ln p log log p)^{1/2}}). This is accomplished by reducing the problem to the hidden shift problem, which is a problem that can be solved by a quantum computer in subexponential time. We note that they also propose a classical (non-quantum) algorithm to speed up the ideal reduction step in [START_REF] Galbraith | Extending the GHS Weil descent attack[END_REF], using more up-to-date techniques to reduce the ideal and compute the isogeny; this step requires subexponential complexity, instead of the O(q^{1/4+ε}) complexity in [START_REF] Galbraith | Extending the GHS Weil descent attack[END_REF].
Supersingular case
We now look at the supersingular components of isogeny graphs; these components exhibit a different structure from the volcanoes of the ordinary case, and hence the techniques above do not directly apply. This problem has been used as a hardness assumption for some cryptosystems, most notably the one of [START_REF] De Feo | Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies[END_REF]; we discuss the implication of the algorithms below in Section 1.2.3. We refer once again to [START_REF] Sutherland | Isogeny volcanoes[END_REF] for details.
Supersingular isogeny graphs have a very regular structure:
Proposition 1.1.23 ([START_REF] Kohel | Endomorphism rings of elliptic curves over finite fields[END_REF]). Every vertex in a supersingular component of the ℓ-isogeny graph, when considered over F_{p²}, has out-degree ℓ + 1. If the vertex is not 0 or 1728, nor adjacent to those two vertices, it also has in-degree ℓ + 1. Finally, the supersingular ℓ-isogeny graph is connected for every prime ℓ.
The connectedness of the isogeny graph means one can simply work with, say, 2-isogenies to construct a path between any two points. Furthermore, those graphs are expander graphs, with a small diameter (O(log p) [START_REF] Delfs | Computing isogenies between supersingular elliptic curves over F p . Designs[END_REF]) and hence a short path between any two vertices.
A rather straightforward algorithm consists in, as in [START_REF] Steven | Constructing isogenies between elliptic curves over finite fields[END_REF], performing a "meet-in-the-middle" search in the full supersingular graph over F_{p²} to find a path between both elliptic curves. This method will find the shortest path; however, the sheer size of the tree computations makes it a O(p^{1/2+ε}) algorithm (in both time and storage).
A more complex algorithm is the one in [START_REF] Delfs | Computing isogenies between supersingular elliptic curves over F p . Designs[END_REF], which considers a variant of the isogeny graph above with only curves defined over F_p. The graph then looks like a volcano of depth 2; hence, solving Problem 1.1.20 in the case where both E₁ and E₂ are defined over F_p can be done by adapting techniques from the previous section, in average running time O(p^{1/4+ε}). The running time of the general algorithm is still O(p^{1/2+ε}), since one has to find isogenies from the original curves to curves defined over F_p (which is done in the article using self-avoiding random walks, and can also be done using Pollard-style stateless random walks), unless of course the original curves are defined over F_p, which makes the complexity only O(p^{1/4+ε}).
Finally, we also mention a quantum algorithm to compute isogenies between supersingular curves [START_REF] Biasse | A quantum algorithm for computing isogenies between supersingular elliptic curves[END_REF]. The algorithm improves the first step of [START_REF] Delfs | Computing isogenies between supersingular elliptic curves over F p . Designs[END_REF] (that is to say, finding an isogeny to a curve defined over F_p) using Grover's algorithm, which in this case means that the cost of this step on a quantum computer is O(p^{1/4+ε}). It also extends [START_REF] Delfs | Computing isogenies between supersingular elliptic curves over F p . Designs[END_REF] in a similar way that [CJS14] extends [START_REF] Galbraith | Extending the GHS Weil descent attack[END_REF]: in the easier case where the curves are defined over F_p, the structure can be exploited to yield a reduction to the hidden shift problem, thus making the quantum cost for this step O(e^{√(3/2)·(log p log log p)^{1/2}}). This algorithm has direct implications for the security of a scheme; see Section 1.2.3 for details.
Applications of isogenies
1.2.1 Isogenies and the ECDLP
The morphism property of isogenies means that they "transport" the problem of the discrete logarithm from one curve to another. More precisely:
Let φ : E → E′ be an isogeny. Then
Q = [n]P ⇔ φ(Q) = [n]φ(P), and hence log_P Q = log_{φ(P)} φ(Q).
Hence, solving the ECDLP on E is only as hard as solving the ECDLP on E′ once the isogeny φ has been computed. In theory, this could lead to a faster attack on the ECDLP of E, provided one can find a weaker curve E′ that is isogenous to E and compute the corresponding isogeny. Such an attack has been made explicit e.g. in the case of genus 3 hyperelliptic curves [START_REF] Smith | Isogenies and the Discrete Logarithm Problem in Jacobians of Genus 3 Hyperelliptic Curves[END_REF]. In general, solving the discrete logarithm problem on these curves is hard, requiring O(q^{4/3+ε}) operations. However, for a large proportion (around 28%) of hyperelliptic curves of genus 3, there is an isogeny between the curve and a non-hyperelliptic curve; this yields a O(q^{1+ε}) attack using the algorithm of [START_REF] Diem | Index calculus in class groups of nonhyperelliptic curves of genus three[END_REF].
Generically, for elliptic curves, this amounts to solving Problem 1.1.20, which current algorithms cannot do in less than exponential time. Note also that identifying a weaker isogenous curve is not a problem one knows how to solve either; in fact, [START_REF] Jao | Do all elliptic curves of the same order have the same difficulty of discrete log?[END_REF] shows that isogenous curves all have similar security with respect to the ECDLP. Hence, this does not seem to constitute a generic threat to the security of the ECDLP.
However, the strategy of using isogenies to fall back on weaker curves can be used in the context of the Gaudry-Hess-Smart attack, presented in [START_REF] Gaudry | Constructive and destructive facets of Weil descent on elliptic curves[END_REF]. This attack uses Weil descent to reduce the ECDLP to a discrete logarithm problem on a hyperelliptic curve of high genus, where an index calculus attack (such as the one in [START_REF] Enge | A general framework for subexponential discrete logarithm algorithms[END_REF][START_REF] Enge | An L(1/3) discrete logarithm algorithm for low degree curves[END_REF]) applies; such attacks are asymptotically faster than attacks such as Pollard's rho. As it happens, the fact that an elliptic curve is vulnerable to the GHS attack can be easily checked, but there is no way to check that a curve on which the GHS attack is unsuccessful is not isogenous to one which is vulnerable. This fact was noted in [START_REF] Galbraith | Extending the GHS Weil descent attack[END_REF], where a faster algorithm to compute isogenies is used to extend the probability to find an isogenous curve that is vulnerable to the GHS attack. We refer to [START_REF] Menezes | Weak fields for ECC[END_REF] for practical implications of this attack.
Finally, the fact that finding an isogenous curve and computing the isogeny is a hard problem can be seen as worrying, as we do not have any guarantee that a given curve is not isogenous to a weaker one. Vulnerability to the GHS attack has once again been used in [START_REF] Teske | An elliptic curve trapdoor system[END_REF], which gives a method to construct a curve E that is vulnerable to the GHS attack, and an isogenous curve E′ for which the best known attack is Pollard's rho, along with the corresponding isogeny. The curve E′ appears to be secure, and finding E from E′ is a hard problem; however, someone who knows E and the isogeny can easily solve the ECDLP on E′. This opens the possibility of trapdoors in elliptic curves, which is a worrying possibility; [START_REF] Teske | An elliptic curve trapdoor system[END_REF] recommends choosing the coefficients provably at random to avoid mistrust.
The SEA point counting algorithm
The topic of isogeny computation, and more precisely Problem 1.1.19, is one that was first considered in the context of finding asymptotic improvements to Schoof's algorithm to compute the number of points of an elliptic curve over a finite field. We provide a brief overview of Schoof's original algorithm, then show the improvements to the algorithm and their link with isogenous curves; we follow the presentation of [START_REF] Blake | Elliptic Curves in Cryptography[END_REF].
In 1986, Schoof proposed in [START_REF] Schoof | Counting points on elliptic curves over finite fields[END_REF] an algorithm to compute the number of points of an elliptic curve E defined over F_q in polynomial time; this was the first polynomial-time algorithm, down from the O(q^{1/4+ε}) complexity of a baby-step giant-step algorithm. The idea is as follows: since Hasse's theorem indicates that #E(F_q) = q + 1 − t with |t| ≤ 2√q, one computes t (mod p_i) for enough primes p_i such that we can reconstruct t using the Chinese Remainder Theorem (i.e. so that the final modulus in the CRT is larger than 4√q). To compute t (mod p_i), the characteristic polynomial of the Frobenius endomorphism shows that for any P = (x, y) ∈ E:
(x^{q²}, y^{q²}) + [q](x, y) = [t](x^q, y^q)
In particular, for P ∈ E[p_i], i.e. p_i-torsion points, we have (x^{q²}, y^{q²}) + [q (mod p_i)](x, y) = [t (mod p_i)](x^q, y^q). The computation of the left-hand side is then performed symbolically, i.e. using polynomial arithmetic. The trick is that this computation can be performed modulo φ_{p_i}, the p_i-torsion polynomial, i.e. the polynomial such that (x, y) ∈ E[p_i] if and only if φ_{p_i}(x) = 0; this polynomial is of degree O(p_i²), which limits the degree of the polynomials and hence the cost of the computations. Then, one computes iteratively the [k](x^q, y^q) until a solution to the equation is found, which gives the correct value for t (mod p_i). We refer to [START_REF] Blake | Elliptic Curves in Cryptography[END_REF] for the complexity of this algorithm.
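The final CRT reconstruction step can be sketched in a few lines of Python (the function name is ours): given the residues t mod p_i for enough small primes, recover the trace t, using that |t| ≤ 2√q to pick the centered representative.

```python
from math import isqrt, prod

def reconstruct_trace(residues, primes, q):
    M = prod(primes)
    assert M > 4 * isqrt(q)           # CRT modulus must exceed 4*sqrt(q)
    t = 0
    for r, pi in zip(residues, primes):
        Mi = M // pi
        t = (t + r * Mi * pow(Mi, -1, pi)) % M
    if t > M // 2:                    # center the representative in (-M/2, M/2]
        t -= M
    return t
```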
The Schoof-Elkies-Atkin (SEA) algorithm lowers the complexity to O(log^{4+ε} q). The prime numbers are split in two different sets: Atkin primes, i.e. primes p_i for which t² − 4q is not a square modulo p_i, and Elkies primes, for which it is. Dealing with Atkin primes requires exponential time; hence, the improved running time is achieved using only Elkies primes, although in practice dealing with a small number of Atkin primes provides numerous advantages.
Improvements to this algorithm are given by the SEA algorithm and a closer study of Elkies primes, i.e. primes p_i for which t² − 4q is a square modulo p_i, and hence primes for which the polynomial x² − tx + q has two roots in F_{p_i}. We are looking to compute one of these roots, λ, by finding a solution to (x^q, y^q) = [λ](x, y). The asymptotically significant savings come from the fact that one can work modulo a factor F_{p_i}, of degree O(p_i), of φ_{p_i}. As we mentioned in Remark 1.1.12, subgroups of the ℓ-torsion define the kernel of an ℓ-isogeny; hence, the factor is simply the polynomial corresponding to the kernel of a p_i-isogeny I from the curve E to another curve E₁, i.e.
F_{p_i}(X) = ∏_{±P_i ∈ Ker I \ {O}} (X − x(P_i))
Computing the isogenous curve is done by first computing its j-invariant, which is a root of the modular polynomial Φ_{p_i}(j(E), X); we assume we get this polynomial from precomputed tables. Since any isogeny will do, we can afford to only look at isogenies to curves defined over F_q, whose number is given by Remark 1.1.12. Hence, we only need to compute a root of gcd(X^q − X, Φ_{p_i}(j(E), X)), which is typically of degree 2. We then compute the coefficients of the new curve E₁ from the two j-invariants and the coefficients of E (see [START_REF] Blake | Elliptic Curves in Cryptography[END_REF] for details).
Once we have this, we essentially need to compute the p i -isogeny between E and E 1 , which corresponds to Problem 1.1.19. We refer to Section 1.1.4 for solutions to this problem, which were originally devised for this very setting.
The cost of the algorithms to solve Problem 1.1.19 does not matter greatly asymptotically here: the cost of computing the eigenvalue (using polynomial arithmetic modulo F_{p_i}) is O(log^{3+ε} q), which dominates the cost of these algorithms in any case. This is an improvement over the O(log^{5+ε} q) complexity of the original Schoof algorithm.
Isogeny-based cryptography
A relatively novel idea that has been investigated in recent years is to use isogenies, and more precisely the hardness of Problem 1.1.20, as a basis for strong cryptosystems. Even more enticing is the fact that this problem seems to resist fairly well to quantum computers, unlike other problems like factoring integers or the ECDLP, which can both be solved in polynomial time using a quantum computer.
Cryptosystems based on isogenies between elliptic curves were proposed most notably in Stolbunov's Ph.D. thesis [START_REF] Stolbunov | Cryptographic schemes based on isogenies[END_REF], although the idea appears in a previous article by Couveignes [START_REF] Couveignes | Hard Homogeneous Spaces[END_REF]. The hardness assumption in these cryptosystems reduces to Problem 1.1.20 on ordinary elliptic curves, for which there are no known algorithms on a classical computer requiring less than an exponential number of operations. However, it does not achieve resistance to quantum computers, as the problem can be solved using only a subexponential number of operations on a quantum computer.
Another, more efficient cryptosystem was proposed in [START_REF] De Feo | Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies[END_REF], based on Problem 1.1.20 for supersingular elliptic curves; the hardness assumption is often called the Supersingular Isogeny Diffie-Hellman (SIDH) problem. Solving this problem requires O(p^{1/2+ε}) operations on a classical computer and O(p^{1/4+ε}) on a quantum one. However, if the elliptic curves are defined over F_p, there is a small risk that the problem could be solved faster using the work in [START_REF] Delfs | Computing isogenies between supersingular elliptic curves over F p . Designs[END_REF][START_REF] Biasse | A quantum algorithm for computing isogenies between supersingular elliptic curves[END_REF] (respectively O(p^{1/4+ε}) on a classical computer and subexponential time on a quantum computer); see [START_REF] Biasse | A quantum algorithm for computing isogenies between supersingular elliptic curves[END_REF] for the full discussion, and their recommendation that the cryptosystem should avoid curves defined over F_p. Efficient implementations are discussed in [START_REF] Costello | Efficient algorithms for supersingular isogeny Diffie-Hellman[END_REF].
The Abel-Jacobi map
In this section, we take a closer look at elliptic curves defined over C, which will be an important part of this manuscript. Complex elliptic curves have another representation, as complex tori, which gives nice computational properties. The map which allows the translation between the Weierstrass form and the corresponding complex torus is the Abel-Jacobi map; this is one of the main objects of this manuscript, and we describe fast algorithms to compute this map in Chapter 8.
Definition of the map
Definition 1.3.1. Let ω₁, ω₂ ∈ C such that Rω₁ + Rω₂ = C, and define the lattice Λ = Zω₁ + Zω₂ ⊂ C. The associated complex torus is C/Λ. We call ω₁, ω₂ the periods of the lattice, and Λ is the period lattice. The group law on C/Λ is just addition modulo the periods. The fundamental parallelogram is the set {xω₁ + yω₂, x, y ∈ [0, 1]}.
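Computationally, reducing a point z ∈ C modulo the lattice amounts to solving a 2 × 2 real linear system and taking fractional parts; a small Python sketch (ours), landing in the fundamental parallelogram:

```python
def reduce_mod_lattice(z, w1, w2):
    # Write z = x*w1 + y*w2 with x, y real (Cramer's rule on the 2x2 system),
    # then keep the fractional parts of x and y.
    det = w1.real * w2.imag - w1.imag * w2.real
    x = (z.real * w2.imag - z.imag * w2.real) / det
    y = (w1.real * z.imag - w1.imag * z.real) / det
    return (x % 1.0) * w1 + (y % 1.0) * w2
```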
The "torus" part comes from the fact that one obtains a torus when gluing the left and the right sides of the fundamental parallelogram and the top and bottom sides.
Establishing the link between elliptic curves defined over C and complex tori can be done using techniques which are classical within the context of complex Riemann surfaces. More precisely, one can take a look at integrals of the invariant differential ω = dx/y associated to the elliptic curve E; integrals of the form ∫ dx/√(P(x)) with deg P = 3 are called elliptic integrals, as they appear in the expression of the length of an ellipse, and this is where the name of elliptic curves comes from.
We note the following important property:
Proposition 1.3.3 ([CFA + 10, Corollary 5.18]). Let Λ = Zω₁ + Zω₂ be the lattice generated by the periods ω₁, ω₂ of a complex elliptic curve. Then Λ ≃ Z + τZ with Im(τ) > 0.
The Abel-Jacobi map is then the map studied in Proposition 1.3.2:
Theorem 1.3.4 ([CFA + 10, Definition 5.12]). Let E be an elliptic curve defined over C. Take α, β two paths, e.g. along the branch cuts; define ω₁ = ∫_α ω, ω₂ = ∫_β ω. Define Λ = Zω₁ + Zω₂, the complex torus associated to the elliptic curve. Then the Abel-Jacobi map
E(C) → C/Λ, P ↦ ∫_O^P ω (mod Λ)
is a well-defined isomorphism from the elliptic curve to the complex torus.
In the context of elliptic curves, this map is sometimes called the elliptic logarithm map, as its inverse can be considered to be an exponential map [START_REF] Cohen | A Course in Computational Algebraic Number Theory[END_REF], p. 398].
Remark 1.3.5. In the remainder of this manuscript, we will sometimes use the following shorthands:
• the algebraic representation of a complex elliptic curve refers to an elliptic curve E described by its short Weierstrass equation, i.e. its coefficients a and b;
• the analytic representation of a complex elliptic curve refers to an elliptic curve described as a torus C/Λ, i.e. its periods.
The Abel-Jacobi map then performs the "translation" between these two representations, by converting a point P whose coordinates satisfy the short Weierstrass equation into a point z lying on the complex torus, and vice versa with its inverse.
Maps between complex tori
As outlined in [Sil86, Section VI.4], complex analytic maps between complex tori all have the same, very simple form:
Theorem 1.3.6 ([Sil86, Theorem VI.4.1]). Let Λ₁, Λ₂ be two lattices in C. The association
{α ∈ C | αΛ₁ ⊂ Λ₂} → {φ : C/Λ₁ → C/Λ₂ holomorphic and with φ(0) = 0}, α ↦ (φ_α : z ↦ αz)
is a bijection. Note that φ(0) = 0 implies φ(P + Q) = φ(P) + φ(Q).
This is an interesting result from the computational perspective, as it means one can compute the image of points very easily. We outline a few applications of this theorem.
Isomorphisms
A corollary of Theorem 1.3.6 is the following:
Theorem 1.3.7 ([Sil86, Corollary VI.4.1.1]). Let E₁, E₂ be two complex elliptic curves whose analytic representations are C/Λ₁ and C/Λ₂. Then E₁ and E₂ are isomorphic over C if and only if there exists α ∈ C* such that αΛ₁ = Λ₂.
Remark 1.3.8. The following result is well-known, and can be obtained with simple calculations (see e.g. [Sil86, Table 3.1]):
Theorem 1.3.9. Let E₁ : y² = x³ + a₁x + b₁ and E₂ : y² = x³ + a₂x + b₂ be two isomorphic elliptic curves over K. Then there exists u ∈ K* such that a₂ = u⁴a₁ and b₂ = u⁶b₁, and the isomorphism is given by the map (x, y) ↦ (u²x, u³y).
Note that for K = C, this result is a direct consequence of Theorem 1.3.7 and the inverse of the Abel-Jacobi map (Section 1.3.4).
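As a quick numerical sanity check of Theorem 1.3.9, the following Python snippet (the curve coefficients and the value of u are arbitrary choices of ours) verifies that (u^2 x, u^3 y) lands on the curve with coefficients (u^4 a_1, u^6 b_1).

import math

a1, b1 = -2.0, 1.0               # E1 : y^2 = x^3 + a1*x + b1
u = 1.7
a2, b2 = u**4 * a1, u**6 * b1    # E2 : y^2 = x^3 + a2*x + b2

x = 2.0
y = math.sqrt(x**3 + a1 * x + b1)   # a real point of E1
X, Y = u**2 * x, u**3 * y           # its image under the isomorphism
print(abs(Y**2 - (X**3 + a2 * X + b2)) < 1e-9)   # True: (X, Y) lies on E2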
Isogenies
The link with isogenies of elliptic curves is as follows:
Theorem 1.3.10 ([Sil86, Theorem VI.4.1]). Let E 1 , E 2 be two elliptic curves, with corresponding analytic representations C/Λ 1 , C/Λ 2 . Then the association { isogenies between E 1 and E 2 } → {φ : C/Λ 1 → C/Λ 2 holomorphic and with φ(0) = 0} is a bijection.
Remark 1.3.11. As we noted in Note 1.1.12, the existence of a dual isogeny means that every point P in the kernel of an ℓ-isogeny φ is an ℓ-torsion point, and Ker φ is a subgroup of the ℓ-torsion group. Hence, the complex number α describing the isogeny between complex tori necessarily sends some points of the shape (mω_1 + nω_2)/ℓ (for m, n integers) to 0 (mod Λ_2), which gives a relation between periods of isogenous curves. We use this remark in Section 9.1.1. Remark 1.3.12. This result can be used to compute isogenies between analytic representations of complex elliptic curves, i.e. solving Problem 1.1.20. Let E_1, E_2 be two isogenous elliptic curves with analytic representations C/Λ_1, C/Λ_2. Write P_1 = (ω_1, ω_2) and P_2 = (ω_1′, ω_2′), the periods of the lattices. Then there is α ∈ C such that αΛ_1 ⊂ Λ_2. The images of the periods have to be points of Λ_2 because φ(0) = 0, so we have
αP 1 = P 2 M
with M ∈ M 2 (Z). We can then compute α and M , for instance using LLL, as in [START_REF] Van Wamelen | Poonen's question concerning isogenies between Smart's genus 2 curves[END_REF], who described this method in genus 2 and used it to compute explicit isogenies between a set of curves, thus proving that the curves were indeed isogenous.
CM maps
Another application is determining whether a curve over C has complex multiplication, in the sense outlined in Note 1.1.9, using analytic representations; the method is described in [START_REF] Van Wamelen | Proving that a genus 2 curve has complex multiplication[END_REF].
We have that any endomorphism of E corresponds to a map z → αz for some α such that αΛ ⊂ Λ. Hence, we have the following relation:
αP = P M
with M a 2 × 2 integer matrix; one can then compute α and M using the LLL algorithm. The method was originally described in the case of genus 2 hyperelliptic curves in [START_REF] Van Wamelen | Proving that a genus 2 curve has complex multiplication[END_REF].
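A simplified numerical sketch of the relation αP = PM in genus 1 follows (plain Python; the helper names are ours). The genuine method of [vW99] recovers α and M simultaneously from high-precision periods via LLL; here we assume a candidate α is given, which already suffices to verify numerically that a lattice has CM.

def coords(z, w1, w2):
    # Real x, y with z = x*w1 + y*w2 (Cramer's rule).
    det = w1.real * w2.imag - w1.imag * w2.real
    return ((z.real * w2.imag - z.imag * w2.real) / det,
            (w1.real * z.imag - w1.imag * z.real) / det)

def cm_matrix(alpha, w1, w2, tol=1e-9):
    # Integer matrix M with alpha*(w1, w2) = (w1, w2)*M, or None if
    # alpha*Lambda is not contained in Lambda.
    M = [[0, 0], [0, 0]]
    for j, w in enumerate((w1, w2)):
        x, y = coords(alpha * w, w1, w2)
        if abs(x - round(x)) > tol or abs(y - round(y)) > tol:
            return None
        M[0][j], M[1][j] = round(x), round(y)
    return M

# The lattice Z + iZ corresponds to y^2 = x^3 + x, which has CM by Z[i]:
print(cm_matrix(1j, 1 + 0j, 1j))          # [[0, -1], [1, 0]]
print(cm_matrix(1j, 1 + 0j, 0.5 + 1.3j))  # None: a generic lattice has no CM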
Elliptic functions and the ℘ function
In order to make explicit the inverse of the Abel-Jacobi map, we need a bit of background on elliptic functions. We refer to [START_REF] Chandrasekharan | Elliptic functions[END_REF] or [Sil86, Chapter VI] for proofs and a more thorough background. Note that, as a complex map, the inverse of the Abel-Jacobi map is invariant by translation by ω_1 or ω_2. It is thus rather natural to study elliptic functions: Definition 1.3.13 ([START_REF] Chandrasekharan | Elliptic functions[END_REF]Chapter I]). An elliptic function is a meromorphic function with two periods ω_1, ω_2, such that Im(ω_2/ω_1) > 0.
Using integration on paths slightly off the fundamental parallelogram, one can prove the classical Liouville theorems on elliptic functions (in particular, that a non-constant elliptic function must have poles). The first construction of an elliptic function with any periods was given by Weierstrass:
Definition 1.3.15. Given ω_1, ω_2, define the function ℘ : z ↦ ℘(z, [ω_1, ω_2]) by
℘(z, [ω_1, ω_2]) = 1/z^2 + Σ_{(m,n)∈Z^2\{(0,0)}} ( 1/(z − mω_1 − nω_2)^2 − 1/(mω_1 + nω_2)^2 )
Then ℘ is elliptic of periods ω 1 , ω 2 , and it is of order 2. It is an even function.
Remark 1.3.16. On top of the ω 1 -periodicity and the ω 2 -periodicity of ℘, we also have the following property:
℘(z, [ω 1 , ω 2 + ω 1 ]) = ℘(z, [ω 2 , -ω 1 ]) = ℘(z, [ω 1 , ω 2 ])
This is easily proved when looking at the definition of ℘. Hence, when trying to compute ℘(z, [ω 1 , ω 2 ]) in Section 4.3 and Section 8.1.2, we will be able to assume that
Im(ω_2/ω_1) ≥ √3/2, |Re(ω_2/ω_1)| ≤ 1/2 (cf. Chapter 2), 0 ≤ Im(z) < Im(ω_2/(2ω_1)), |Re(z)| ≤ Re(ω_1)/2.
Finally, we mention another proposition:
Proposition 1.3.17 ([Sil86, Theorem VI.3.2]). Every elliptic function of periods ω_1, ω_2 is a rational function of z ↦ ℘(z, [ω_1, ω_2]) and z ↦ ℘′(z, [ω_1, ω_2]).
This proposition shows that, once one has an algorithm to compute ℘ and ℘′ (as in Chapter 8, for instance), it is not much harder to compute any elliptic function.
Inverse of the Abel-Jacobi map
The connection between the Weierstrass ℘ function and Weierstrass equations of elliptic curves is shown in the following proposition: Proposition 1.3.18. The function ℘ satisfies the following differential equation:
℘′(z, [ω_1, ω_2])^2 = 4℘(z, [ω_1, ω_2])^3 − g_2 ℘(z, [ω_1, ω_2]) − g_3, where
g_2 = 60 Σ_{(m,n)∈Z^2\{(0,0)}} 1/(mω_1 + nω_2)^4, g_3 = 140 Σ_{(m,n)∈Z^2\{(0,0)}} 1/(mω_1 + nω_2)^6
are related to Eisenstein series of weight 4 and 6.
This proposition is proven using the Laurent expansion, for instance. Hence, the map
C/Λ → E(C), 0 (mod Λ) ↦ O, z (mod Λ) ↦ [℘(z, Λ) : ℘′(z, Λ) : 1]
is an isomorphism; in fact, it is the inverse of the Abel-Jacobi map.
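The following naive Python sketch (ours) makes the inverse map tangible: it evaluates ℘ and ℘′ by truncating the lattice sums, computes g_2 and g_3 from the Eisenstein series above, and checks the differential equation. Truncating at |m|, |n| ≤ N only gives a few digits of accuracy here; the quasi-optimal algorithms are the subject of Chapters 4 and 8.

def lattice(w1, w2, N):
    # Nonzero lattice points m*w1 + n*w2 with |m|, |n| <= N.
    return [m * w1 + n * w2 for m in range(-N, N + 1)
            for n in range(-N, N + 1) if (m, n) != (0, 0)]

def wp(z, w1, w2, N=60):
    return 1 / z**2 + sum(1 / (z - w)**2 - 1 / w**2 for w in lattice(w1, w2, N))

def wp_prime(z, w1, w2, N=60):
    return -2 / z**3 + sum(-2 / (z - w)**3 for w in lattice(w1, w2, N))

def g2(w1, w2, N=60):
    return 60 * sum(w**-4 for w in lattice(w1, w2, N))

def g3(w1, w2, N=60):
    return 140 * sum(w**-6 for w in lattice(w1, w2, N))

w1, w2 = 1.0 + 0j, 0.3 + 1.1j
z = 0.21 + 0.17j
p, pp = wp(z, w1, w2), wp_prime(z, w1, w2)
rhs = 4 * p**3 - g2(w1, w2) * p - g3(w1, w2)
print(abs(pp**2 - rhs) / abs(rhs))   # small: both sides agree to several digits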
A most important problem we consider in this manuscript is the problem of computing quickly the Abel-Jacobi map and its inverse. Quasi-optimal time algorithms -i.e. algorithms allowing the computation of this map with precision P in O(M(P ) log P ) -for the computation of the periods and of the elliptic logarithm have been outlined, thus giving a quasi-optimal time algorithm for the Abel-Jacobi map; we discuss these algorithms in Chapter 4. We show in Chapter 8 that the coefficients of the Weierstrass equation can be computed from the periods using the θ function (defined in Chapter 2), and that ℘ is also related to the θ function. We show in Chapter 6 that the θ function can be computed with quasi-optimal time, which gives a quasi-optimal time algorithm to compute the inverse of the Abel-Jacobi map. We also give in Chapter 4 another algorithm for ℘, with conjectural quasi-optimal running time.
Thomae formulas
We mention a last result, which links the coefficients of the curve equation (in short Weierstrass form) and the period lattice, via theta-constants, which are values of the θ function (see Section 2.1). These are the Thomae formulas: Theorem 1.3.19 (Thomae formulas). Let E be a complex elliptic curve defined by y^2 = P(x) with P = 4X^3 − g_2X − g_3 = 4(X − e_1)(X − e_2)(X − e_3). Let ω_1, ω_2 be the periods of E and τ = ω_2/ω_1. Then
e_1 − e_2 = (π/ω_1)^2 θ_0^4(0, τ), e_1 − e_3 = (π/ω_1)^2 θ_1^4(0, τ), e_3 − e_2 = (π/ω_1)^2 θ_2^4(0, τ).
Proof. See [BM88] or [Cha85, Cor. 1 of Theorem 5, Chap V].
We use this result a few times in this manuscript, for instance to compute the curve coefficients from τ in quasi-optimal time (Section 8.1.1), but also to give a fast algorithm for the computation of ℘ using the Landen transform (Section 4.3). Hence, these formulas are useful in making the inverse of the Abel-Jacobi map computable in quasi-optimal time, as they offer a link with the θ function.
Hyperelliptic curves and the Abel-Jacobi map
This manuscript deals mostly with the genus 1 case, i.e. elliptic curves, Jacobi's theta function and isogeny computation in genus 1. However, some of the methods we present here generalize to higher genera; in particular, we present in Chapter 8 algorithms to compute the generalization of the Abel-Jacobi map to higher genera. In this context, we give some background here on hyperelliptic curves and the corresponding Abel-Jacobi map. Most of the material presented here is taken from [Gal12, CFA + 10].
Hyperelliptic curves over C
We start by defining hyperelliptic curves over any field, then specialize over C. Definition 1.4.1 ([CFA + 10, Section 14.1]). Let K be a (perfect) field. A curve given by an equation of the form C :
y 2 + h(x)y = f (x) with f, h ∈ K[X], deg f = 2g + 1, deg h ≤ g
and f monic is called a hyperelliptic curve of genus g over K if no point (x, y) on the curve over K̄, the algebraic closure of K, makes both partial derivatives vanish, i.e. satisfies both 2y + h(x) = 0 and f′(x) − h′(x)y = 0. Furthermore, we add to this curve a single point at infinity, which we denote P∞.
The curve and the point at infinity can be defined using a projective equation; we refer to e.g. [Gal12, Def. 10.1.10] for details.
Group law on hyperelliptic curves
Much as in genus 1, one can define a group law from hyperelliptic curves of genus g. However, unlike in genus 1, the elements of the group are not simply points of the curve, but rather some specific sums of points. Definition 1.4.2 ([Gal12, Section 7.6]). Let C be a hyperelliptic curve of genus g over K. A divisor over K is defined as a formal sum of points with finite support, i.e.
D = Σ_{P∈C(K̄)} n_P P, n_P ∈ Z,
with n_P = 0 for all but finitely many P, and furthermore such that σ(D) = D for all σ in the Galois group of K̄/K. The set of divisors of a curve is denoted Div_K(C).
Definition 1.4.3 ([Gal12, Section 7.6]). For a divisor D ∈ Div_K(C), we define its degree as
deg(D) = Σ_{P∈C(K̄)} n_P ∈ Z.
We define Div^0_K(C) as the set of divisors of C of degree 0. Both Div_K(C) and Div^0_K(C) are groups, the group law being simply the addition of finite sums; Div^0_K(C) is a subgroup of Div_K(C). Definition 1.4.4 ([Gal12, Section 7.7]). Let f ∈ K(C)* be a non-zero function defined over the curve (an element of the function field associated to the curve). Then f has finitely many zeroes and poles (see e.g. [Gal12, Thm. 7.7.1]). Define the divisor associated to f as div(f) = Σ_{P∈C(K̄)} v_P(f) P, where v_P(f) is 0 if P is not a pole or a zero of f, n if P is a zero of order n of f, and −n if it is a pole of order n of f. The divisor of a function is also called a principal divisor. We then define
Prin K (C) = {div(f ), f ∈ K(C) * },
the group of principal divisors. We have: Proposition 1.4.5 ([START_REF] Steven | Mathematics of public key cryptography[END_REF]Thm. 7.7.11]). Prin_K(C) ⊂ Div^0_K(C), i.e. for all f ∈ K(C)*, deg div(f) = 0.
Finally, we define the group associated to a hyperelliptic curve: Definition 1.4.6. We define the Picard group or the divisor class group of a hyperelliptic curve as Pic^0_K(C) = Div^0_K(C)/Prin_K(C). This group is identified with the Jacobian of C, denoted Jac(C), an algebraic variety which is isomorphic to Pic^0_K(C) on every extension field of K. Remark 1.4.7. The situation is simpler in the case of genus 1 elliptic curves, as we have Jac(C) ≃ C; we refer to [Gal12, Section 7.9] for a proof.
Representation and addition of divisors
Proposition 1.4.8 ([CFA + 10, Thm. 14.5]). Each divisor class in the Jacobian of the curve can be represented using Mumford coordinates, i.e. a unique pair of polynomials u, v ∈ K[X] such that • u is monic;
• deg v < deg u ≤ g; • u | v 2 + vh -f .
For instance, if D = Σ_{i=1}^r P_i − rP∞, with P_i ≠ P_j for i ≠ j, one can take u = Π_{i=1}^r (X − x_i). Mumford coordinates are the usual representation of divisors on hyperelliptic curves.
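A small Python sketch (ours) of the Mumford representation for a genus 2 curve with h = 0: for D = P_1 + P_2 − 2P∞, take u = (X − x_1)(X − x_2) and v the line interpolating the two points; then u | v^2 − f, which we check by verifying v(x_i)^2 = f(x_i) at both (simple) roots of u.

import math

def f(x):                        # y^2 = f(x), deg f = 5, so g = 2
    return x**5 - 3 * x**3 + 2 * x + 7

x1, x2 = 0.5, 1.25
y1, y2 = math.sqrt(f(x1)), math.sqrt(f(x2))   # choose the positive branch

u1, u0 = -(x1 + x2), x1 * x2     # u(X) = X^2 + u1*X + u0, monic, deg u = g
v1 = (y2 - y1) / (x2 - x1)       # v(X) = v1*X + v0, deg v < deg u
v0 = y1 - v1 * x1

for xi in (x1, x2):              # u | v^2 - f since v(xi)^2 - f(xi) = 0
    print(abs((v1 * xi + v0)**2 - f(xi)) < 1e-9)   # True, True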
Remark 1.4.9. Addition of divisors in Mumford representation can be computed using Cantor's algorithm, which actually generalizes the chord-and-tangent process to any genus g. We refer to [CFA + 10, Algorithm 14.7] or [Gal12, Section 10.3] for an exposition of this algorithm; we actually will not need to use it in this manuscript.
Hyperelliptic curve cryptography
As with elliptic curves, the Jacobian of a hyperelliptic curve and its associated group law can be used in cryptography, via the hyperelliptic curve discrete logarithm problem (HECDLP):
Given P ∈ Jac(C(F q )) and Q = [n]P , compute n.
Hasse-Weil's theorem for hyperelliptic curves shows that #Jac(C(F_q)) = O(q^g). Since methods such as Pollard's rho have a complexity depending on the cardinality of the group, this means one can afford to take a smaller q (and hence a smaller finite field) to reach the same level of security as with an elliptic curve. This comes at the cost of a more involved arithmetic, although some settings (such as the arithmetic on the genus 2 Kummer surface which we define in Section 2.6.4) allow fast arithmetic, which can even be faster than optimized arithmetic on elliptic curves [START_REF] Bos | Fast cryptography in genus 2[END_REF].
However, hyperelliptic curves of high genus are vulnerable to index calculus attacks, as described in Gaudry's seminal paper [START_REF] Gaudry | An algorithm for solving the discrete log problem on hyperelliptic curves[END_REF]. If g is fixed, the attack of [START_REF] Gaudry | A double large prime variation for small genus hyperelliptic index calculus[END_REF] shows for example that one can solve the HECDLP in O(q^{2−2/g}) operations, which is faster than generic methods (i.e. Pollard's rho) for g ≥ 3. If, on the contrary, q is fixed (or at least does not grow too fast in size compared to the genus), [START_REF] Enge | An L(1/3) discrete logarithm algorithm for low degree curves[END_REF] shows there is actually an algorithm in time subexponential in q^g which solves the DLP. Finally, we also mention that another algorithm shows that the DLP on non-hyperelliptic genus 3 curves can be solved with only O(q) operations [START_REF] Diem | Index calculus in class groups of nonhyperelliptic curves of genus three[END_REF]; since some genus 3 hyperelliptic curves are isogenous to non-hyperelliptic curves (and the isogeny is explicitly computable using [START_REF] Smith | Isogenies and the Discrete Logarithm Problem in Jacobians of Genus 3 Hyperelliptic Curves[END_REF]), the HECDLP in genus 3 can sometimes be solved more quickly. In practice, genus 2 hyperelliptic curves are the most relevant case to cryptography, as they are not vulnerable to any of these attacks.
Complex hyperelliptic curves
Hyperelliptic curves over C, and most generally complex abelian varieties, can be connected to the following: Definition 1.4.10 ([CFA + 10, Section 5.1.3]). Let g ≥ 1, and let Λ be a lattice in C g , i.e. a discrete Z-submodule of C g of rank 2g. Then C g /Λ is a complex torus of genus g; it has a (complex Lie) group structure using addition modulo Λ.
We have the following general theorem: Proposition 1.4.11 ([CFA + 10, Corollary 5.17]). Every abelian variety A of dimension g is isomorphic to a complex torus C^g/Λ with Λ = Z^g + ΩZ^g, where Ω is symmetric and has a positive definite imaginary part. We call Ω the period matrix of A.
In particular, for a hyperelliptic curve C of genus g over C, we can write Jac(C) ≃ C^g/(Z^g + ΩZ^g).
Remark 1.4.12. Much as in genus 1, it is very easy to evaluate maps (and in particular isogenies) using the representation of a hyperelliptic curve as a torus. Indeed, for Λ 1 , Λ 2 two lattices of C g , and for any α a g × g complex matrix such that αΛ 1 ⊂ Λ 2 , we can define the map
φ α : C g /Λ 1 → C g /Λ 2 z → αz (mod Λ 2 )
Furthermore one can prove (see e.g. [Cos11, Prop. 2.3.1]) that these maps are the only holomorphic maps preserving 0. This property is used in e.g. [START_REF] Van Wamelen | Proving that a genus 2 curve has complex multiplication[END_REF][START_REF] Van Wamelen | Poonen's question concerning isogenies between Smart's genus 2 curves[END_REF] to compute maps between genus 2 curves; one can use it to evaluate complex isogenies as with the method we present in Chapter 9.
The Abel-Jacobi map
The propositions from this section are taken from [CFA + 10, Section 5.1]. Given a basis (A_1, ..., A_g, B_1, ..., B_g) of the homology of the curve and a basis (ω_1, ..., ω_g) of the space of holomorphic differentials, one can form the period matrix (Ω_A | Ω_B), where the columns of Ω_A (resp. Ω_B) are the images of the A_i (resp. B_i) by the map φ : γ ↦ (∫_γ ω_1, ..., ∫_γ ω_g) outlined in the previous proposition; alternatively, one can choose a normalized basis of holomorphic differentials such that φ(A_i) = e_i (the i-th unit vector), and define Ω with Ω_{i,j} = ∫_{B_i} ω_j.
We now define the Abel-Jacobi map: Theorem 1.4.16. Let C be a hyperelliptic curve of genus g; let (ω 1 , . . . , ω g ) be a basis of the space of holomorphic differentials, and let Λ be as in the previous proposition. Define the Abel-Jacobi map
J : C(C) → C^g/Λ, P ↦ ( ∫_{P∞}^P ω_1, ..., ∫_{P∞}^P ω_g ) (mod Λ).
This map is well-defined, and can be extended to divisors by linear extension. Then for any D ∈ Prin C (C), J(D) = 0; hence, J induces a map from Jac C (C) to C g /Λ, and this map is a group homomorphism.
We outline in Chapter 8 algorithms to compute this map in quasi-linear time, using techniques which do not require evaluating the hyperelliptic integrals which define the map. One can also compute the inverse of the Abel-Jacobi map using genus g theta functions; we outline these algorithms in the same chapter.
Chapter 2
Background on theta functions
In this chapter, we introduce the θ function in all generality; this function will be featured in every subsequent chapter of this manuscript. This complex function of two variables, z and τ , has numerous applications in mathematics; what we are most interested in are its links to algebraic geometry and complex Riemann surfaces. We show here all the properties that we will use in this manuscript; the last sections of this chapter outline those properties in the genus 1 and 2 settings, on which our manuscript is focused for the most part.
Definition
We start by defining the theta function; most of the background we need is described in [Mum83]. Given z ∈ C^g and τ ∈ H_g, the Siegel upper half space of symmetric g × g complex matrices with positive definite imaginary part, the theta function with characteristics a, b ∈ (1/2)Z^g/Z^g is defined as

θ_{[a;b]}(z, τ) = Σ_{n∈Z^g} exp(iπ ᵗ(n + a)τ(n + a)) exp(2iπ ᵗ(n + a)(z + b)).

Remark 2.1.5. We will often use a certain numbering of the theta functions with characteristics, to lighten notations. The numbering we choose to use is the one used e.g. in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF][START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF][START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF]:

θ_{[a;b]} = θ_i, with i = 2(b_0 + 2b_1 + ... + 2^{g−1}b_{g−1}) + 2^{g+1}(a_0 + 2a_1 + ... + 2^{g−1}a_{g−1}).
In other words, the integer we associate with the theta function has the same binary expansion as (2a||2b). Other numbering schemes are possible (see for instance the discussion in [Cos11, p. 37]), with other pros and cons attached to them.
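In code, this numbering is a simple bit-packing; here is a tiny Python sketch (ours) of the correspondence, with b in the low bits and a in the high bits of i.

from fractions import Fraction

def theta_index(a, b):
    # a, b: tuples of g half-integers in {0, 1/2}; returns the index i
    # whose binary expansion is that of (2a || 2b).
    g = len(a)
    i = sum(int(2 * b[k]) << k for k in range(g))          # low bits: 2b
    i += sum(int(2 * a[k]) << (g + k) for k in range(g))   # high bits: 2a
    return i

h = Fraction(1, 2)
# genus 1: [0;0], [0;1/2], [1/2;0], [1/2;1/2] -> theta_0, ..., theta_3
print([theta_index((x,), (y,)) for x in (0, h) for y in (0, h)])   # [0, 1, 2, 3]
# genus 2: the fundamental theta functions (a = 0) are theta_0, ..., theta_3
print(sorted(theta_index((0, 0), (b0, b1)) for b0 in (0, h) for b1 in (0, h)))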
We often deal with the functions θ_0, ..., θ_{2^g−1} (i.e. the ones with a = 0), sometimes called fundamental theta functions. Remark 2.1.6. Throughout the manuscript, we may use the following shorthands, for notational convenience: θ_{i,j}(z, τ) instead of θ_i(z, τ), θ_j(z, τ); θ_{i,j,k}(z, τ) instead of θ_i(z, τ), θ_j(z, τ), θ_k(z, τ); etc.
The θ function is quasi-periodic with respect to the lattice Z^g + τZ^g, in the following sense:
Proposition 2.1.7 ([Mum83, p. 120-123]). For all m ∈ Z^g, we have
θ [a;b] (z + m, τ ) = exp(2iπ t am)θ [a;b] (z, τ ) (in particular, θ [0;b] is invariant by z → z + m) θ [a;b] (z + τ m, τ ) = exp(-2iπ t bm) exp(-iπ t mτ m) exp(-2iπ t mz)θ [a;b] (z, τ )
One can also study the parity of θ with respect to the first argument: θ_{[a;b]}(−z, τ) = (−1)^{4ᵗab} θ_{[a;b]}(z, τ), so that θ_{[a;b]} is even if 4ᵗab is even, and odd otherwise. This means there are 2^{g−1}(2^g + 1) even theta functions, and 2^{g−1}(2^g − 1) odd ones; the theta constant associated with an odd theta function is 0. In genus 1, the only odd theta function is θ_3; in genus 2, the odd ones are θ_5, θ_7, θ_{10}, θ_{11}, θ_{13}, θ_{14}.
The sequence θ(z, 2 n τ ) has a rather central place in the algorithms we present in this manuscript (see Chapters 3, 4, 6, 7). We establish its limit in the following proposition: Proposition 2.1.9. For any b ∈ 1 2 Z g /Z g we have
lim n→∞ θ [0;b] (z, 2 n τ ) = 1,
and even
lim n→∞ θ [0;b] (z, 2 n τ ) 2 n = 1.
Proof. Let R be an orthogonal matrix such that ᵗR Im(τ)R = Diag(λ_1, ..., λ_g), where the λ_i denote the eigenvalues of Im(τ). Write

|θ_{[0;b]}(z, τ) − 1| ≤ Σ_{n∈Z^g\{0}} |e^{iπᵗnτn + 2iπᵗnz}| ≤ Σ_{n∈Z^g\{0}} e^{−πᵗ(Rn)Diag(λ_1,...,λ_g)(Rn)} e^{−2πᵗn Im(z)} ≤ 2^g Σ_{n∈N^g\{0}} q^{n_1^2+...+n_g^2} w^{−2Σ|n_i|}

with q = e^{−πλ}, where λ is the smallest eigenvalue of Im(τ), and w = e^{−π max_i|Im(z_i)|}. Hence |θ(z, 2^kτ) − 1| ≤ 2^g Σ_{n∈N^g\{0}} q^{2^k(n_1^2+...+n_g^2)} w^{−2Σn_i}. Let k_0 be such that 2^{k_0}λ ≥ 2 max_i|Im(z_i)|; then for k ≥ k_0,

|θ_{[0;b]}(z, 2^kτ) − 1| ≤ 2^g Σ_{n∈N^g\{0}} q^{(2^k−2^{k_0})(n_1^2+...+n_g^2)} ≤ 4^g (q^{2^k−2^{k_0}}/(1 − q^{2^k−2^{k_0}})) (1 + q^{2^k−2^{k_0}}/(1 − q^{2^k−2^{k_0}}))^{g−1},

which proves the first statement. As for the second one, write θ_{[0;b]}(z, 2^kτ) = 1 + c_k, with c_k ~ c q^{2^k} as k → ∞, which goes to 0 as k grows; then θ_{[0;b]}(z, 2^kτ)^{2^k} − 1 ~ 2^k c q^{2^k}, which also goes to 0 as k grows.
The second statement refines the first one in a way that ensures quasi-optimal time in our algorithms (see Sections 6.2.3 and 7.2.2).
Addition and duplication formulas
τ -duplication formula
The following formula is of great importance to this manuscript.
Proposition 2.2.1. For all a, b ∈ (1/2)Z^g/Z^g,
θ_{[a;b]}(z, τ)^2 = (1/2^g) Σ_{β∈(1/2)Z^g/Z^g} e^{−4iπᵗaβ} θ_{[0;b+β]}(z, τ/2) θ_{[0;β]}(0, τ/2). (2.2.1)
This formula can be found in [Cos11, formula 3.13], where it is called the change of basis formula from the F 2 basis to the F (2,2) 2 basis; it is derived from [Igu72, Section IV.1, Theorem 2] by taking m 1 = m 2 and z 1 = z 2 .
The consequences of this formula are numerous:
1. Taking z = 0 in Equation (2.2.1) shows a relationship between theta-constants at τ and theta-constants at 2τ . A closer look at the equation one gets shows a similarity with the classic arithmetico-geometric mean (in genus 1) and the Borchardt mean (in genus g).
The link between those quadratically convergent means and theta-constants is outlined in Chapter 3; it is the basis of the fast algorithms for theta-constants discussed in Chapter 6 and 7.
2. Equation (2.2.1) is at the basis of our fast algorithms for θ in Chapter 6 and 7.
3. The value at τ of the fundamental theta functions, i.e. the ones with a = 0, along with the value of the fundamental theta-constants, can be used to recover the value of all the theta functions and theta-constants at 2τ . In particular, should one want to compute the value of all the theta functions at τ , a valid strategy is to compute the value of all the fundamental ones at τ /2; we use this strategy for instance in Chapter 6. We also use this equation repeatedly in the same chapter to get an algorithm whose complexity is uniform in z, τ .
4. Equation (2.2.1) can be combined with Proposition 2.1.9, and the fact that Σ_β e^{−4iπᵗaβ} = 0, to prove that, for any a ∈ (1/2)Z^g/Z^g \ {0}, lim_{n→∞} θ_{[a;b]}(z, 2^nτ) = 0.
Riemann formulas
The following set of formulas, often called Riemann formulas, give relationships between values of θ in any genus:
Theorem 2.2.2 ([Igu72, Theorem IV.1.1]). Let (m_1, m_2, m_3, m_4) ∈ R^{8g} and (z_1, z_2, z_3, z_4) ∈ C^{4g}. Let
T_g = (1/2) ( I_g I_g I_g I_g ; I_g I_g −I_g −I_g ; I_g −I_g I_g −I_g ; I_g −I_g −I_g I_g ),
and put
(n 1 , n 2 , n 3 , n 4 ) = (m 1 , m 2 , m 3 , m 4 )T 2g , (w 1 , w 2 , w 3 , w 4 ) = (z 1 , z 2 , z 3 , z 4 )T g
Then we have (with m_1′ the first g coordinates of m_1)
θ_{m_1}(z_1, τ) · · · θ_{m_4}(z_4, τ) = (1/2^g) Σ_{(α,β)} e^{−4iπᵗm_1′β} θ_{n_1+(α,β)}(w_1, τ) · · · θ_{n_4+(α,β)}(w_4, τ)
in which (α, β) runs over a complete set of representatives of 1 2 Z 2g /Z 2g .
Riemann formulas are complementary to the τ -duplication formulas, since they encode a relationship between theta functions with the same τ (but at different z). They can be instantiated in many different ways; we mention in the next section the existence of z-duplication formulas, which are obtained from the Riemann formulas. For other applications, we refer to [Cos11, Chapter 3].
Reduction of the first argument
This manuscript describes algorithms for the computation of θ (e.g. Chapter 5, Chapter 6 and Chapter 7). We show how one can perform argument reduction in order to restrict ourselves to more favorable cases. We start by describing argument reduction of the first argument of θ, before describing argument reduction for the second argument in Section 2.4.
Quasi-periodicity
The most obvious way to perform argument reduction is to use the quasi-periodicity of z → θ(z, τ ) (i.e. Proposition 2.1.7). In all generality, one can expect to achieve the conditions
|Re(z_i)| ≤ 1/2, |Im(z_i)| ≤ ( Σ_{j∈[1..g]} |Im(τ_{i,j})| )/2 (2.3.1)
using the quasi-periodicity of θ. We say that z is reduced if the above conditions are met; this corresponds to z = x + τy with x, y ∈ R^g and |x_i|, |y_i| ≤ 1/2. The value of θ(z, τ) can then be obtained from θ(z′, τ) (with z′ reduced) simply by computing an exponential.
However, note that this exponential factor can become quite big. This would not cause problems if one wanted to compute θ to some relative precision P; however, in this manuscript, we wish to compute θ to absolute precision P, i.e. up to 2^{−P}, since θ(z, τ) can be close to 0. This means that the exponential factor should be taken into account: if one wants to compute θ(z, τ) up to 2^{−P} and use argument reduction, the value θ(z′, τ) must be computed up to 2^{−P−C}, with C the size of the exponential factor.
Hence, the final complexity of any method relying on this argument reduction will depend on τ and z; however, since the size of the final result depends on it too, we consider this as inevitable. Supposing that z is reduced allows us to essentially (i.e. along with other hypotheses on τ ) work on values of θ of bounded size, which is more comfortable, and allows us to get complexities which depend only on P -with the understanding that recovering the original value using argument reduction has a complexity depending on the original values of z and τ .
Remark 2.3.1. In genus 1, writing z = x + τy with |x|, |y| ≤ 1/2 is rather easy; we simply subtract kτ from z, with k the integer closest to Im(z)/Im(τ), then we subtract the integer which is closest to the real part of the result. In the general case, we can write explicitly z = x + τy with x, y ∈ R^g as follows. Put Λ = (I_g | τ) ∈ M_{g×2g}(C), so that the lattice associated to τ is ΛZ^{2g}. We then have
(Λ ; Λ̄) (x ; y) = (z ; z̄),
and we can compute x, y easily. It is then easy to subtract k + τk′ with k, k′ ∈ Z^g from z in order to get the reduced z.
Remark 2.3.2. In the case of genus 1, the function z → θ i (z, τ ) is odd for i = 3 and even for the other ones. Hence, without loss of generality, one can suppose that Im(z) ≥ 0.
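Here is a minimal genus 1 Python sketch (ours) of this reduction for θ = θ_{[0;0]}: write z = x + τy, shift by the nearest integers a and b, and record the quasi-periodicity factor of Proposition 2.1.7; the size of the factor is exactly the constant C discussed above.

import cmath

def reduce_z(z, tau):
    # Return (zr, factor) with z = zr + a*tau + b, zr reduced, and
    # theta(z, tau) = factor * theta(zr, tau)   (theta = theta_[0;0], g = 1).
    y = z.imag / tau.imag            # z = x + tau*y  =>  y = Im(z)/Im(tau)
    x = z.real - y * tau.real
    a, b = round(y), round(x)
    zr = z - a * tau - b
    factor = cmath.exp(-1j * cmath.pi * a * a * tau - 2j * cmath.pi * a * zr)
    return zr, factor

tau = 0.1 + 1.2j
z = 3.7 + 5.0j                       # far outside the fundamental parallelogram
zr, fac = reduce_z(z, tau)
print(zr)           # now |x|, |y| <= 1/2
print(abs(fac))     # huge: computing theta(z, tau) up to 2^-P then requires
                    # theta(zr, tau) up to 2^-(P + log2|fac|)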
z-duplication formulas
One last way to perform argument reduction is to use the so-called z-duplication formulas. These formulas can be derived from the Riemann formulas mentioned in the previous section; they fall in the more general class of addition formulas:
Proposition 2.3.3 ([Igu72, Section IV.1]). Let m_1, m_2, m_3, m_4 denote elements of R^{2g} and u, v ∈ C^g. Put (n_1, n_2, n_3, n_4) = (m_1, m_2, m_3, m_4)T_{2g}, where T_{2g} is the matrix defined in Theorem 2.2.2. Then, omitting τ from the equation, Riemann's formulas give (with m_1′ the first g coordinates of m_1)
θ_{m_1}(u + v)θ_{m_2}(u − v)θ_{m_3}(0)θ_{m_4}(0) = (1/2^g) Σ_{(α,β)} e^{−4iπᵗm_1′β} θ_{n_1+(α,β)}(u)θ_{n_2+(α,β)}(u)θ_{n_3+(α,β)}(v)θ_{n_4+(α,β)}(v)
in which (α, β) runs over a complete set of representatives of 1 2 Z 2g /Z 2g . Taking u = v gives the z-duplication formulas
θ_{m_1}(2z)θ_{m_2}(0)θ_{m_3}(0)θ_{m_4}(0) = (1/2^g) Σ_{(α,β)} e^{−4iπᵗm_1′β} θ_{n_1+(α,β)}(z)θ_{n_2+(α,β)}(z)θ_{n_3+(α,β)}(z)θ_{n_4+(α,β)}(z)
We derive one more formula, which is particularly interesting for our purposes: if θ_a(0, τ) ≠ 0, then

θ_a(2z, τ) = Σ_{(α,β)} e^{−4iπᵗa′β} θ_{a+(α,β)}(z, τ)^4 / (2^g θ_a(0, τ)^3),

with a′ the first g coordinates of a. We use z-duplication formulas, interwoven with τ-duplication formulas, in Chapter 6, in which we also analyse the precision loss incurred. In genus g, we conjecture that p_k = O(k).

Reduction of the second argument

The symplectic group Sp_{2g}(Z) acts on H_g as follows:

Sp_{2g}(Z) × H_g → H_g, (M = (A B; C D), τ) ↦ M • τ = (Aτ + B)(Cτ + D)^{−1}
Furthermore, M defines an isomorphism of complex tori between Λ_τ = C^g/(τZ^g + Z^g) and Λ_{M•τ} as follows:
Λ_τ → Λ_{M•τ}, z ↦ M •_τ z = ᵗ(Cτ + D)^{−1} z
where the shorthand M • z may be used when the context allows.
Remark 2.4.3. In genus 1, the symplectic group is simply the group SL_2(Z) of matrices (a b; c d) with a, b, c, d ∈ Z and ad − bc = 1. The action is τ ↦ (aτ + b)/(cτ + d).
Action of the symplectic group on θ
The following theorem shows how θ functions behave under the action of Sp_{2g}(Z) on H_g.
Theorem 2.4.4. For M = (A B; C D) ∈ Sp_{2g}(Z), z ∈ C^g and τ ∈ H_g,
θ_i(M • z, M • τ) = ζ_M √(det(Cτ + D)) e^{iπᵗ(M•z)(Cz)} θ_{σ_M(i)}(z, τ)
where σ M is a permutation and ζ M is an eighth root of unity.
This theorem is proven in [Mum83, Section II.5] in a special case, and in [Igu72, Chapter 5, Theorem 2]; an outline of the proof can also be found in [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF]Prop. 3.1.24]. This allows us to perform argument reduction: Proposition 2.4.5. For any z ∈ C^g and τ ∈ H_g, the value of θ(z, τ) with absolute precision P can be computed from the value with absolute precision P + c of a θ_i(z′, τ′), for some i and some τ′ ∈ F_g. Computing the final result requires the computation of a square root and an exponential, for a cost of O(M(P + c) log(P + c)), on top of the cost of computing θ_i(z′, τ′). The constant c depends on z, τ, z′, τ′.
The domain F_g is defined in the next section. The constant c is necessary to account for the size of the exponential factor; it can be computed using e.g. a low-precision approximation of θ(z, τ)/θ_i(z′, τ′).
Fundamental domain for τ
Definition 2.4.6 ([Kli90, Def. I.3.1]). The set F g ⊂ H g is defined as the matrices satisfying the conditions:
• Im(τ) is Minkowski-reduced, i.e. ᵗg Im(τ)g ≥ Im(τ_{k,k}) for all integral g with gcd(g_k, ..., g_n) = 1, and Im(τ_{k,k+1}) ≥ 0 for all k;
• |Re(τ_{k,l})| ≤ 1/2 for all k, l ∈ {1, ..., n}, k ≤ l;
• |det(Cτ + D)| ≥ 1 for all (A B; C D) ∈ Sp_{2g}(Z).
Remark. Note that the definition of F_g includes a condition that one must check on an infinite number of matrices; hence, this definition does not allow us to write an algorithm that would find the representative of a τ ∈ H_g in the fundamental domain. However we have Proposition 2.4.9 ([Kli90, Section I.3, Prop. 3]). For any g, there exists a finite set V_g ⊂ Sp_{2g}(Z) such that the third condition defining F_g can be replaced by
• |det(Cτ + D)| ≥ 1 for all A B C D ∈ V g .
This proposition gives a procedure for reducing a τ into the fundamental domain F g ; we outline it in Algorithm 1.
We use two subroutines here, which we do not make explicit:
• ReduceRealPart(τ) is a function that returns τ − M, with M = (m_{ij}) and m_{ij} = ⌊Re(τ_{ij})⌉, the nearest integer to Re(τ_{ij}). This is equivalent to applying matrices of the form T_{ij} = I_g + δ_{i,j}, where δ_{i,j} is the Kronecker matrix, to τ. This subroutine does not require many operations to compute, and we ignore its cost.
• MinkowskiReduce is a function which takes a real matrix τ as an input and outputs a real matrix γ that is Minkowski-reduced and a matrix M with integer coefficients such that γ = M • τ. Computing the Minkowski reduction of a g × g matrix has an exponential cost, of O(2^{1.3g^3}) arithmetic operations [START_REF] Helfrich | Algorithms to construct Minkowski reduced and Hermite reduced lattice bases[END_REF].
Algorithm 1 Reducing τ to F_g.
Input: τ ∈ H_g
Output: τ′ ∈ F_g
1: τ′ ← τ
2: τ′ ← ReduceRealPart(τ′)
3: v, M ← MinkowskiReduce(Im(τ′)); τ′ ← M • τ′
4: for M ∈ V_g do
5: if |det(C_M τ′ + D_M)| < 1 then
6: τ′ ← M • τ′
7: goto 2
8: end if
9: end for
10: return τ′
This outline can be turned into an algorithm provided that V_g is known, or at least computable. However, the description of this finite set in the general case is not known, which does not help in making reduction to the fundamental domain effective: there is no known algorithm which allows one to compute V_g, and hence to reduce to the fundamental domain in genus g ≥ 3. However, we note that, if we assume V_g is known, the resulting algorithm terminates, as proven in [Sie89, Chapter 6, Section 5].
Remark 2.4.10. In genus 1, the action of SL 2 (Z) on the upper-half plane is well-known and has been extensively studied; see for example [START_REF] Mumford | Tata lectures on Theta[END_REF]. The fundamental domain F 1 (sometimes denoted F when the context is clear) is defined as
F_1 = { τ ∈ H | |τ| ≥ 1, |Re(τ)| ≤ 1/2 }
Using the notations of Algorithm 1, V_1 = {S} where S = (0 −1; 1 0). The resulting algorithm which reduces τ into F is Gauss's algorithm [START_REF] Vallée | Probabilistic analyses of lattice reduction algorithms[END_REF] to find a reduced basis of a 2-dimensional lattice (in this case, Z + Zτ); its cost is asymptotically negligible. The fundamental domain is represented on Figure 2.1.
Remark 2.4.11. In genus 2, 19 inequalities |det(Cτ + D)| ≥ 1 defining the fundamental domain have been determined [START_REF] Gottschling | Explizite Bestimmung der Randflächen des Fundamentalbereiches der Modulgruppe Zweiten Grades[END_REF]. Each is required: for each inequality, there is a matrix outside the fundamental domain that satisfies the 18 other inequalities but not the chosen one. In the notations of Algorithm 1, this gives a set V 2 of cardinality 19; hence there is an algorithm to reduce in the fundamental domain in genus 2. The resulting algorithm is analyzed in [Str14, Section 6], who proves that it terminates in a number of steps only depending on τ .
Loosened requirements for τ for g ≥ 2
As we saw above, there is currently no known algorithm to compute the reduction of an element τ ∈ H_g into the fundamental domain F_g. One could introduce a larger domain, F_g′, for which a reduction algorithm is known.
Definition 2.4.12. The set F_g′ ⊂ H_g is defined as the matrices satisfying the conditions:
• Im(τ) is Minkowski-reduced, i.e. ᵗg Im(τ)g ≥ Im(τ_{k,k}) for all integral g with gcd(g_k, ..., g_n) = 1, and Im(τ_{k,k+1}) ≥ 0 for all k;
• |Re(τ_{k,l})| ≤ 1/2 for all k, l ∈ {1, ..., n}, k ≤ l;
• |τ_{1,1}| ≥ 1.
This domain was introduced in genus 2 by Streng [Str14], who calls B what is called here F_2′.
Put
N_0 = ( I_g − δ_{1,1}   −δ_{1,1} ; δ_{1,1}   I_g − δ_{1,1} ),
where δ_{1,1} is the g × g Kronecker matrix (i.e. with top left coefficient equal to 1 and 0 everywhere else). This matrix is symplectic, and we have |det(Cτ + D)| = |τ_{1,1}|. This remark allows us to prove the following: Proposition 2.4.13. We have F_g ⊂ F_g′.
Note that F_1′ = F_1.
The algorithm for reducing τ into F_g′ is similar to Algorithm 1, but with the condition "M ∈ V_g" replaced by "M = N_0" (see [START_REF] Streng | Computing Igusa class polynomials[END_REF]§6.3]). This is Algorithm 2.
Algorithm 2 Reducing τ to F_g′.
Input: τ ∈ H_g
Output: τ′ ∈ F_g′
1: τ′ ← τ
2: τ′ ← ReduceRealPart(τ′)
3: v, M ← MinkowskiReduce(Im(τ′)); τ′ ← M • τ′
4: if |τ′_{1,1}| < 1 then
5: τ′ ← N_0 • τ′
6: goto Step 2
7: end if
8: return τ′
Termination of Algorithm 2 in genus 2 (i.e. reduction to F_2′) is proven in [START_REF] Streng | Computing Igusa class polynomials[END_REF]. The termination of Algorithm 2 in the general, genus g case is an open problem. Note that a few lemmas used in the proof generalize to genus g, namely Lemma 6.9, Lemma 6.11 (for the set of matrices such that y_1 ≤ 1/t with t > 2) and Lemma 6.12 (for the set of matrices such that y_1 ≥ 1/t); unfortunately the generalization of Lemma 6.14 is unclear, as it relies on Equation 6.5, which is only valid in genus 2.
Another approach to argument reduction is to use so-called Siegel reduction, as in [DHB + 04]. The conditions are even weaker than the ones we impose here; in particular, LLL reduction is used instead of Minkowski reduction. The article claims that this reduction is enough to limit the number of terms needed for the naive algorithm for θ, although no analysis is provided.
We believe that both reductions, reduction to F_g′ and the explicit Siegel reduction of [DHB + 04], are relevant to our purposes, i.e. argument reduction in the context of the genus g θ function.
The first reduction may be costlier, as Minkowski reduction has running time exponential in the genus while the LLL reduction runs in polynomial time; furthermore, termination has not been proven, although we believe it to hold. In either case, both reduction algorithms seem to give conditions similar to the ones we use when analyzing the naive algorithm for θ in Sections 5.1 and 5.2, which indicates that both reduction algorithms are relevant.
In the remainder of this manuscript, the reduction of τ to F_g′ is the one we will mention the most often, and in particular in the analyses of Chapter 5; however, it should be understood that similar results could probably be found for the effective Siegel reduction described in [DHB + 04].
Genus 1 instantiations
This short section is aimed at making the equations described so far more explicit in genus 1. We use quite a lot of them throughout this manuscript, but most importantly in Chapter 6, when describing our quasi-linear time algorithm for θ; such formulas are used both in the actual computation of θ and in the argument reduction strategy we set up in order to get a complexity that is independent of z and τ .
The genus 1 theta function is sometimes called the Jacobi theta function.
Definition 2.5.1. Jacobi's theta function is defined as
θ : C × H → C, (z, τ) ↦ Σ_{n∈Z} e^{iπτn^2} e^{2iπzn} = 1 + Σ_{n≥1} q^{n^2}(w^{2n} + w^{−2n}) = 1 + q(w^2 + 1/w^2) + q^4(w^4 + 1/w^4) + ...
with q = e iπτ (the "nome") and w = e iπz .
There are four theta functions with characteristics, which we denote θ 0 , θ 1 , θ 2 , θ 3 using the notation of Note 2.1.5:
θ_0(z, τ) = θ(z, τ) = 1 + q(w^2 + 1/w^2) + q^4(w^4 + 1/w^4) + ...
θ_1(z, τ) = θ(z + 1/2, τ) = 1 − q(w^2 + 1/w^2) + q^4(w^4 + 1/w^4) − ...
θ_2(z, τ) = exp(πiτ/4 + πiz) θ(z + τ/2, τ) = q^{1/4} w (1 + 1/w^2 + q^2(w^2 + 1/w^4) + q^6(w^4 + 1/w^6) + ...)
θ_3(z, τ) = exp(πiτ/4 + πi(z + 1/2)) θ(z + (τ + 1)/2, τ) = iq^{1/4} w (1 − 1/w^2 + q^2(w^2 − 1/w^4) + q^6(w^4 − 1/w^6) − ...)
Furthermore, the expressions of the theta-constants are as follows:
θ_0(0, τ) = Σ_{n∈Z} q^{n^2} = 1 + 2q + 2q^4 + 2q^9 + ... (2.5.1)
θ_1(0, τ) = Σ_{n∈Z} (−1)^n q^{n^2} = 1 − 2q + 2q^4 − 2q^9 + ... (2.5.2)
θ_2(0, τ) = 2 Σ_{n≥0} q^{(n+1/2)^2} = 2q^{1/4} + 2q^{9/4} + 2q^{25/4} + ... (2.5.3)
Recall that θ 3 (0, τ ) = 0 since this is an odd theta function.
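The series above can be summed directly; the following Python sketch (ours) is the naive evaluation with a fixed cutoff B, and checks two identities stated later in this section, Jacobi's quartic formula (2.5.6) and the equation of the variety (2.5.7).

import cmath

def thetas(z, tau, B=40):
    # (theta_0, theta_1, theta_2)(z, tau) by direct summation of the series.
    q = cmath.exp(1j * cmath.pi * tau)
    q4 = cmath.exp(1j * cmath.pi * tau / 4)       # q^(1/4)
    w = cmath.exp(1j * cmath.pi * z)
    t0 = t1 = 1 + 0j
    t2 = 0j
    for n in range(1, B):
        term = q**(n * n) * (w**(2 * n) + w**(-2 * n))
        t0 += term
        t1 += (-1)**n * term
    for n in range(B):
        t2 += q4**((2 * n + 1)**2) * (w**(2 * n + 1) + w**(-(2 * n + 1)))
    return t0, t1, t2

tau, z = 0.1 + 1.0j, 0.2 + 0.3j
a, b, c = thetas(0, tau)        # theta-constants
x, y, u = thetas(z, tau)
print(abs(a**4 - (b**4 + c**4)))                       # (2.5.6), ~1e-15
print(abs(x**2 * a**2 - (y**2 * b**2 + u**2 * c**2)))  # (2.5.7), ~1e-15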
Duplication formulas
The τ -duplication formulas are as follows:
θ_0(z, 2τ)^2 = (θ_0(z,τ)θ_0(0,τ) + θ_1(z,τ)θ_1(0,τ))/2     θ_0(0, 2τ)^2 = (θ_0(0,τ)^2 + θ_1(0,τ)^2)/2 (2.5.4)
θ_1(z, 2τ)^2 = (θ_0(z,τ)θ_1(0,τ) + θ_1(z,τ)θ_0(0,τ))/2     θ_1(0, 2τ)^2 = θ_0(0,τ)θ_1(0,τ)
θ_2(z, 2τ)^2 = (θ_0(z,τ)θ_0(0,τ) − θ_1(z,τ)θ_1(0,τ))/2     θ_2(0, 2τ)^2 = (θ_0(0,τ)^2 − θ_1(0,τ)^2)/2
θ_3(z, 2τ)^2 = (θ_0(z,τ)θ_1(0,τ) − θ_1(z,τ)θ_0(0,τ))/2
We will use the right column in Chapter 3, and the left column is a crucial component in Chapter 6. Note that a direct proof of these formulas using the definition of θ is also sometimes presented (e.g. [START_REF] Borwein | Pi and the AGM: a study in the analytic number theory and computational complexity[END_REF]), which involves some manipulations and term reorganization akin to
Σ_{n+m≡0 (mod 2)} q^{n^2+m^2} = Σ_{i,j∈Z} q^{(i+j)^2+(i−j)^2}.
As for the z-duplication formulas, we will use:
θ_0(z, τ)θ_0(0, τ)^3 = θ_1(z/2, τ)^4 + θ_2(z/2, τ)^4 (2.5.5)
θ_1(z, τ)θ_1(0, τ)^3 = θ_0(z/2, τ)^4 − θ_2(z/2, τ)^4
θ_2(z, τ)θ_2(0, τ)^3 = θ_0(z/2, τ)^4 − θ_1(z/2, τ)^4
θ_3(z, τ)(θ_0θ_1θ_2)(0, τ) = 2(θ_0θ_1θ_2θ_3)(z/2, τ)
These will be used in Chapter 6.
Other equations
We highlight two more equations:
θ_0(0, τ)^4 = θ_1(0, τ)^4 + θ_2(0, τ)^4 (2.5.6)
θ_0(z, τ)^2 θ_0(0, τ)^2 = θ_1(z, τ)^2 θ_1(0, τ)^2 + θ_2(z, τ)^2 θ_2(0, τ)^2 (2.5.7)
In [START_REF] Mumford | Tata lectures on Theta[END_REF], the first one is named Jacobi's quartic formula, and the second one the equation of the variety. Those equations give a way, other than the τ-duplication formulas, to recover θ_2 from the knowledge of the values of the fundamental theta functions. Furthermore, one can recover θ_3(z, τ) from the other values of θ, using:
θ_3(z, τ)^2 θ_0(0, τ)^2 = θ_1(z, τ)^2 θ_2(0, τ)^2 − θ_2(z, τ)^2 θ_1(0, τ)^2. (2.5.8)
Finally, we mention a special formula, whose generalization to higher genera is not very obvious: Jacobi's derivative formula, which is
θ_3′(0, τ) = −π θ_0(0, τ)θ_1(0, τ)θ_2(0, τ)
We use this to prove a formula in Chapter 8. Generalizations to higher genera have been considered in [START_REF] Igusa | On Jacobi's derivative formula and its generalizations[END_REF][START_REF] Grant | A generalization of jacobi's derivative formula to dimension two[END_REF]; we do not use any here.
Argument reduction
One can make Theorem 2.4.4 more explicit in the case of genus 1 and determine the correspondence γ → σ γ , using [START_REF] Mumford | Tata lectures on Theta[END_REF]p. 36] or by noticing it is independent of z and using the tables found by Gauss for theta-constants [Cox84, Eq. 2.15].
Theorem 2.5.2. For γ = (a b; c d) ∈ SL_2(Z),
θ_i(z/(cτ + d), (aτ + b)/(cτ + d)) = ζ_{i,γ,τ} √(cτ + d) e^{iπcz^2/(cτ+d)} θ_{σ(i)}(z, τ)
where ζ_{i,γ,τ} is an eighth root of unity and σ a permutation; one needs to determine σ in order to be able to use the formula to compute, say, θ_0(z, τ). We will occasionally talk about computing θ_3, but this will not be the focus of our algorithms, as it can easily be recovered using Equation (2.5.8).
The proof of Theorem 2.5.2 is usually done using some particularly simple relations, which describe the action of S = (0 −1; 1 0) on the values of θ [Mum83, Table V, p. 36]:
θ_0^2(z/τ, −1/τ) = −iτ e^{2iπz^2/τ} θ_0^2(z, τ), θ_2^2(z/τ, −1/τ) = −iτ e^{2iπz^2/τ} θ_1^2(z, τ), (2.5.9)
and their equivalent for theta constants
θ_0^2(0, −1/τ) = −iτ θ_0^2(0, τ), θ_2^2(0, −1/τ) = −iτ θ_1^2(0, τ). (2.5.10)
These can be proven e.g. using Poisson's summation formula. Theorem 2.5.2 is then proven by determining the action of T = (1 1; 0 1) on the values of θ, then using the fact that SL_2(Z) = ⟨S, T⟩. Theorem 2.5.2 allows us to suppose that τ ∈ F, the fundamental domain for the action of SL_2(Z) on H. This translates into the conditions
|τ| ≥ 1, |Re(τ)| ≤ 1/2, and hence Im(τ) ≥ √3/2.
Figure 2.1: Fundamental domain F.
The fundamental domain is depicted on Figure 2.1. Argument reduction in z is carried out using quasi-periodicity:
θ(z + b + aτ, τ) = e^{−πia^2τ − 2πiaz} θ(z, τ)
Finally, as mentioned in Note 2.3.2, we will suppose that Im(z) ≥ 0. Combining all the argument reduction strategies allows us to suppose (e.g. in Chapter 5 and Chapter 6) that:
|τ| ≥ 1, |Re(τ)| ≤ 1/2, Im(τ) > 0, |Re(z)| ≤ 1/2, 0 ≤ Im(z) ≤ Im(τ)/2. (2.5.11)
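The genus 1 reduction of τ is easily implemented; the sketch below (Python, ours) is Gauss's algorithm with the generators T : τ ↦ τ + 1 and S : τ ↦ −1/τ, keeping track of the matrix γ ∈ SL_2(Z) that was applied.

def reduce_to_F(tau, eps=1e-12):
    # Returns tau' in F and gamma = (a, b, c, d) with tau' = (a*tau+b)/(c*tau+d).
    a, b, c, d = 1, 0, 0, 1
    while True:
        n = round(tau.real)          # apply T^(-n): tau -> tau - n
        tau -= n
        a, b = a - n * c, b - n * d
        if abs(tau) >= 1 - eps:
            break
        tau = -1 / tau               # apply S
        a, b, c, d = -c, -d, a, b
    return tau, (a, b, c, d)

tau0 = 0.37 + 0.02j
tau, (a, b, c, d) = reduce_to_F(tau0)
print(tau)                                                # |tau| >= 1, |Re(tau)| <= 1/2
print(abs((a * tau0 + b) / (c * tau0 + d) - tau) < 1e-9)  # True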
Genus 2 instantiations
We discuss in Chapter 5 and Chapter 7 the computation of the θ function in genus g. We often take a closer look at the case g = 2 in order to show how one can generalize existing algorithms which apply to the (genus 1) Jacobi θ function; hence, we outline explicitly in this section a few of the formulas we will use later, when discussing the case g = 2.
Figure 2.2: Signs of the terms in the sums defining respectively θ_0, θ_1, θ_2, θ_3.
Definition
We usually write z = (z_1, z_2) and τ = (τ_1 τ_3; τ_3 τ_2). The definition of the theta function can thus be rewritten as
θ(z, τ) = Σ_{(m,n)∈Z^2} q_1^{m^2} q_2^{n^2} q_3^{2mn} w_1^{2m} w_2^{2n}
= 1 + q_1(w_1^2 + 1/w_1^2) + q_2(w_2^2 + 1/w_2^2) + q_1q_2 ( q_3^2(w_1^2w_2^2 + 1/(w_1^2w_2^2)) + q_3^{−2}(w_1^2/w_2^2 + w_2^2/w_1^2) ) + ...
where q j = e iπτj and w j = e iπzj .
Note that, in genus 2, there are 16 different theta functions, that we number θ 0 to θ 15 . Using the notation we outlined in Note 2.1.5, the fundamental genus 2 theta functions are thus denoted θ 0 , θ 1 , θ 2 , θ 3 . Note that the series defining the fundamental theta functions are made of the same terms but with different signs, which allows their simultaneous computation by a naive algorithm at no extra cost; we summarize the patterns of the signs in Figure 2.2. The even theta functions are θ i for i ∈ {0, 1, 2, 3, 4, 6, 8, 9, 12, 15}, while the odd ones are θ i for i ∈ {5, 7, 10, 11, 13, 14}; this is the notation used in [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF][START_REF] Cosset | Computing ( , )-isogenies in polynomial time on jacobians of genus 2 curves[END_REF][START_REF] Streng | Computing Igusa class polynomials[END_REF], but differs from the notation used in e.g. [START_REF] Gaudry | Fast genus 2 arithmetic based on Theta functions[END_REF].
Reduction
The condition that Im(τ ) must be Minkowski-reduced can be rewritten as
0 ≤ 2 Im(τ 3 ) ≤ Im(τ 1 ) ≤ Im(τ 2 ).
As for the condition given in Equation (2.3.1), it translates into
|Re(z_i)| ≤ 1/2, |Im(z_1)| ≤ (Im(τ_1) + Im(τ_3))/2 ≤ (3/4) Im(τ_1), |Im(z_2)| ≤ (Im(τ_2) + Im(τ_3))/2 ≤ (3/4) Im(τ_2)
Duplication
An important equation we will be using is the τ-duplication formula. It can be written as follows for the fundamental theta functions:
θ_0(z, 2τ)^2 = (θ_0(z,τ)θ_0(0,τ) + θ_1(z,τ)θ_1(0,τ) + θ_2(z,τ)θ_2(0,τ) + θ_3(z,τ)θ_3(0,τ))/4
θ_1(z, 2τ)^2 = (θ_0(z,τ)θ_1(0,τ) + θ_1(z,τ)θ_0(0,τ) + θ_2(z,τ)θ_3(0,τ) + θ_3(z,τ)θ_2(0,τ))/4
θ_2(z, 2τ)^2 = (θ_0(z,τ)θ_2(0,τ) + θ_1(z,τ)θ_3(0,τ) + θ_2(z,τ)θ_0(0,τ) + θ_3(z,τ)θ_1(0,τ))/4
θ_3(z, 2τ)^2 = (θ_0(z,τ)θ_3(0,τ) + θ_1(z,τ)θ_2(0,τ) + θ_2(z,τ)θ_1(0,τ) + θ_3(z,τ)θ_0(0,τ))/4
We can also write these relations for z = 0, i.e. for theta-constants; this will emphasize the link with the Borchardt mean that we use in Chapter 3 and Chapter 7.
θ_0(0, 2τ)^2 = (θ_0(0,τ)^2 + θ_1(0,τ)^2 + θ_2(0,τ)^2 + θ_3(0,τ)^2)/4
θ_1(0, 2τ)^2 = (θ_0(0,τ)θ_1(0,τ) + θ_2(0,τ)θ_3(0,τ))/2
θ_2(0, 2τ)^2 = (θ_0(0,τ)θ_2(0,τ) + θ_1(0,τ)θ_3(0,τ))/2
θ_3(0, 2τ)^2 = (θ_0(0,τ)θ_3(0,τ) + θ_1(0,τ)θ_2(0,τ))/2
The Kummer surface
Finally, we mention a last relation, which in a sense generalizes Equation (2.5.7), in that it gives the equation of the Kummer variety defined by the values of theta in genus 2. This equation is useful in Chapter 7, where it appears as a fix which allows the use of Newton's method.
Proposition 2.6.1 ([START_REF] Gaudry | Fast genus 2 arithmetic based on Theta functions[END_REF]). Let
x = θ_0(z, τ), y = θ_1(z, τ), z = θ_2(z, τ), t = θ_3(z, τ),
a = θ_0(0, τ), b = θ_1(0, τ), c = θ_2(0, τ), d = θ_3(0, τ),
A = θ_0(0, 2τ), B = θ_1(0, 2τ), C = θ_2(0, 2τ), D = θ_3(0, 2τ),
and define
E = 256abcdA^2B^2C^2D^2 / ((a^2b^2 − c^2d^2)(a^2c^2 − b^2d^2)(a^2d^2 − b^2c^2)),
F = (a^4 + b^4 − c^4 − d^4)/(a^2b^2 − c^2d^2), G = (a^4 − b^4 + c^4 − d^4)/(a^2c^2 − b^2d^2), H = (a^4 − b^4 − c^4 + d^4)/(a^2d^2 − b^2c^2).
Note that the τ -duplication formulas of the previous subsection show that A, B, C, D can be written very simply in terms of a, b, c, d; hence, E can also be written as a function of a, b, c, d. Then
(x 4 + y 4 + z 4 + t 4 ) + 2Exyzt = F (x 2 y 2 -z 2 t 2 ) + G(x 2 z 2 -y 2 t 2 ) + H(x 2 t 2 -y 2 z 2 ),
which is the equation of the Kummer surface.
The Kummer surface is isomorphic to the Jacobian modulo the 2-torsion points, as the fundamental theta functions are even.
Remark 2.6.2. The Kummer surface has the advantage of having simple (and hence fast) arithmetic, and very regular formulas, which provides natural protection against side-channel attacks. We refer to [START_REF] Gaudry | Fast genus 2 arithmetic based on Theta functions[END_REF][START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF] for the description of this surface and the corresponding arithmetic; [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF] furthermore shows how to use Kummer surfaces in the context of factorization of integers, in a manner similar to the Elliptic Curve Method (ECM), and with even better implementation results. Finally, we refer to two recent papers, [START_REF] Bos | Fast cryptography in genus 2[END_REF] and [START_REF] Ngai | Fast, uniform, and compact scalar multiplication for elliptic curves and genus 2 Jacobians with applications to signature schemes[END_REF], which show how Kummer surfaces can be used in cryptographic schemes and offer arithmetic that is even faster than with elliptic curves.
Chapter 3 AGM and Borchardt mean
This chapter is dedicated to outlining the classical theory behind the arithmetico-geometric mean and its generalization to higher genus, the Borchardt mean. Their study has computational applications, namely the computation of theta-constants, in genus 1 for the AGM, and in genus g for the Borchardt mean; we outline the corresponding algorithms in Section 6.1 and Section 7.1.
The main problem in the study of these means is the choice of signs: either of those means requires the extraction of a square root, and hence two possible complex values. Choosing one square root over another changes the value to which the sequence converges, and sometimes even its rate of convergence; we will make this explicit. The results presented in this section are taken from [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF] and [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF][START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF].

The real AGM

Rate of convergence

Proposition 3.1.1. Let a ≥ b > 0 be real numbers, and define the sequences
a_0 = a, b_0 = b, a_{n+1} = (a_n + b_n)/2, b_{n+1} = √(a_n b_n),
where b_{n+1} > 0. Then the sequences (a_n)_{n∈N}, (b_n)_{n∈N} are adjacent, i.e. (a_n)_{n∈N} is decreasing, (b_n)_{n∈N} is increasing, and (a_n − b_n)_{n∈N} goes to zero; thus (a_n)_{n∈N} and (b_n)_{n∈N} converge to the same limit. Define the arithmetico-geometric mean (AGM) of a and b, denoted AGM(a, b), as the limit of either (a_n)_{n∈N} or (b_n)_{n∈N}.
Proof. The concavity of x ↦ log x can be used to prove that, ∀x, y ∈ R^+, (x + y)/2 ≥ √(xy). In fact, we have
a_n ≥ (a_n + b_n)/2 = a_{n+1} ≥ b_{n+1} = √(a_n b_n) ≥ √(b_n b_n) = b_n,
which proves that a ≥ a_1 ≥ ... ≥ a_n ≥ b_n ≥ ... ≥ b_1 ≥ b. We even have
a_{n+1} − b_{n+1} ≤ a_{n+1} − b_n = (a_n − b_n)/2,
which proves (a_n − b_n)_{n∈N} converges to 0: hence (a_n)_{n∈N}, (b_n)_{n∈N} are adjacent and converge, and a_n − b_n ≤ (a − b)/2^n.
The notion of quadratic convergence is central in this manuscript:
Definition 3.1.2. Let (x_n) ∈ C^N be a sequence; we say that (x_n) is quadratically convergent to ℓ, or that (x_n) converges quadratically to ℓ, if there exist C > 0 and N ∈ N such that for all n > N,
|x_{n+1} − ℓ| ≤ C |x_n − ℓ|^2.
Note that this implies that for n large enough, the number of exact digits one gets by taking x_{n+1} as an approximation of ℓ is roughly twice as much as the number of exact digits one gets from x_n. This means that the first n for which |x_n − ℓ| ≤ 2^{−P} satisfies n = O(log P): only O(log P) steps are needed to compute P bits of the limit of the sequence.
AGM sequences can be shown to converge quadratically (see e.g. [BB87, BM88]):
Proposition 3.1.3. a_{n+1} − b_{n+1} ≤ (a_n − b_n)^2/(8b).
Proof. Write:
a_{n+1} − b_{n+1} = (√a_n − √b_n)^2/2 = (a_n − b_n)^2/(2(√a_n + √b_n)^2) ≤ (a_n − b_n)^2/(8b).
In practice, the AGM sequences converge quite quickly; for instance, one can compute AGM(1, √2) with 14 decimal digits of precision in only 4 steps. This calculation was done by Gauss in 1809, who then noticed that the result corresponded to π/ω, where ω is the length of the lemniscate, thus outlining a link between the AGM and elliptic integrals. We refer the reader to [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF] for more historical details on the AGM.
Given that square roots can be computed with precision P in O(M(P )) (Section 0.3.3), we have Proposition 3.1.4. AGM(a, b) can be computed with absolute precision P in O(M(P ) log P ) bit operations. This is an example of a quasi-optimal complexity, or quasi-linear complexity, which we define in this manuscript as a complexity which is essentially linear in the output size, up to logarithmic factors. Finally note that contrary to, say, Newton's method, the AGM is not self-correcting; hence, each iteration has to be carried out at maximal precision.
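For concreteness, here is a direct Python sketch (ours) of the real AGM with the standard decimal module; since each iteration roughly doubles the number of correct digits, O(log P) iterations suffice, and, as noted above, every iteration is carried out at full working precision.

from decimal import Decimal, getcontext

def agm(a, b):
    # AGM of two positive numbers at the current context precision.
    a, b = Decimal(a), Decimal(b)
    eps = Decimal(10) ** (5 - getcontext().prec)
    while abs(a - b) > eps:
        a, b = (a + b) / 2, (a * b).sqrt()
    return a

getcontext().prec = 60               # ~50 digits plus guard digits
print(agm(1, Decimal(2).sqrt()))     # Gauss's 1809 computation:
                                     # 1.1981402347355922... = pi/omega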
Brent-Salamin algorithm
An interesting application of the AGM over the positive reals is the Brent-Salamin algorithm for the asymptotically fast computation of digits of π. This algorithm was found independently by Brent and Salamin in 1975; we refer to [START_REF] Borwein | Pi and the AGM: a study in the analytic number theory and computational complexity[END_REF] for more details than what is presented here. We use this algorithm in several settings, e.g. Chapter 5.
Proposition 3.1.5 ([BB87, Algorithm 2.2]). Let a_0 = 1, b_0 = 1/√2. Define
π_n = 2a_{n+1}^2 / (1 − Σ_{k=0}^{n} 2^k c_k^2), c_n = √(a_n^2 − b_n^2) = c_{n−1}^2/(4a_n),
where a n , b n are the sequences computed by the AGM iteration. Then (π n ) converges quadratically to π, and
π − π_n ≤ π^2 2^{n+4} e^{−π2^{n+1}} / AGM(1, 1/√2)^2.
This proposition can be proved by using the properties of elliptic integrals, and more precisely the change of variables given by the Landen transform. The connection between the AGM and elliptic integrals via the Landen transform is outlined in Chapter 4. The proposition gives a quasi-optimal time algorithm to compute P bits of π: compute 1 √ 2 with P bits of precision in time O(M(P )), then compute (π n ) until it is within 2 -P of π, which only requires O(log P ) terms, for a total cost of O(M(P ) log P ) bit operations. This is currently the best known asymptotic running time for the computation of P digits of π; however, the Brent-Salamin algorithm is not necessarily the fastest algorithm in practice. We refer to [START_REF] Borwein | Pi and the AGM: a study in the analytic number theory and computational complexity[END_REF] for many similar algorithms designed to compute approximations of π. Some of these algorithms have been used in the past to compute a record number of digits of π; the main problem is that they require a lot of memory, since one has to work with (and hence, store) numbers of size roughly P (and even P + log P accounting for guard bits). Other algorithms, such as the one using Chudnovsky's formula [START_REF] Chudnovsky | The computation of classical constants[END_REF], are used nowadays for records of decimals of π [START_REF] Bellard | Computation of 2700 billion decimal digits of pi using a desktop computer[END_REF][START_REF] Yee | 10 trillion digits of pi: A case study of summing hypergeometric series to high precision on multicore systems[END_REF]; their convergence is slower (about 14 decimal digits per term, i.e. per step), but combined with a binary splitting strategy, they are faster in practice than the AGM-based algorithms, most notably because they support parallelization and checkpointing.
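As an illustration, here is a Python sketch (ours) of the Brent-Salamin iteration of Proposition 3.1.5 with the decimal module; each step roughly doubles the number of correct digits of π.

from decimal import Decimal, getcontext

getcontext().prec = 60
a, b = Decimal(1), 1 / Decimal(2).sqrt()
s = a * a - b * b                 # running sum of 2^k * c_k^2, here k = 0
pow2 = Decimal(1)
for n in range(8):
    a, b = (a + b) / 2, (a * b).sqrt()     # a is now a_{n+1}
    pi_n = 2 * a * a / (1 - s)             # pi_n, with the sum up to k = n
    pow2 *= 2
    s += pow2 * (a * a - b * b)            # add 2^(n+1) * c_{n+1}^2
print(pi_n)   # 3.14159265358979323846..., correct to roughly the precision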
The complex AGM
It is possible to generalize the results above to the complex case, i.e. a, b ∈ C. However, in this case, there are two possibilities at every step for computing the square root; this gives an uncountable number of AGM sequences, and defining unambiguously the AGM requires a bit more work.
Choice of signs and optimal AGM sequences
Definition 3.2.1. Let a, b ∈ C, and let (a_n)_{n∈N}, (b_n)_{n∈N} ∈ C^N such that a_0 = a, b_0 = b. The sequence (a_n, b_n) is an AGM sequence if, for all n:
a_{n+1} = (a_n + b_n)/2, b_{n+1}^2 = a_n b_n
As in [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF], our discussion here assumes a_0 ≠ 0, b_0 ≠ 0, a_n ≠ ±b_n, as the limit of AGM sequences in these cases is trivial. It is easy to see by induction that these conditions are satisfied for a_n, b_n if and only if they are satisfied for a_{n−1}, b_{n−1}.
Hence, there are uncountably many AGM sequences, since there are two distinct possible choices of sign at each step. The following notion distinguishes choices of signs:
Definition 3.2.2. Let α ∈ C* be such that α^2 = a_n b_n.
We say that α is a good choice of square roots or a good choice of signs if setting b_{n+1} = α gives the relations
|a_{n+1} − b_{n+1}| < |a_{n+1} + b_{n+1}|, or |a_{n+1} − b_{n+1}| = |a_{n+1} + b_{n+1}| and Im(b_{n+1}/a_{n+1}) > 0
If that is not the case, α is a bad choice of square roots.
Note that this is equivalent to "Re(b_{n+1}/a_{n+1}) > 0, or Re(b_{n+1}/a_{n+1}) = 0 and Im(b_{n+1}/a_{n+1}) > 0". Definition 3.2.3. Define the optimal AGM sequence [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF] (also called the standard AGM sequence, in e.g. [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF]), as the AGM sequence for a, b where all the choices of sign are good. This sequence converges quadratically to a non-zero value; this value is defined to be AGM(a, b) (also sometimes called the simplest value [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF][START_REF] Frazer | Higher genus arithmetic-geometric means[END_REF]).
The question of the choice of sign is important in practical applications of the AGM, most notably for theta constants; we give related results in Chapter 6.
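A short Python sketch (ours) of the optimal sequence follows: at every step the square root is chosen to be good in the sense of Definition 3.2.2 (we use a fixed number of steps rather than a stopping criterion, which is enough for an illustration).

import cmath

def good_sqrt(a, ab):
    # The square root b of ab that is a good choice relative to a:
    # Re(b/a) > 0, or Re(b/a) = 0 and Im(b/a) > 0.
    b = cmath.sqrt(ab)
    r = b / a
    if r.real < 0 or (r.real == 0 and r.imag < 0):
        b = -b
    return b

def complex_agm(a, b, steps=40):
    # Limit of the optimal AGM sequence (quadratically convergent).
    for _ in range(steps):
        a, b = (a + b) / 2, good_sqrt((a + b) / 2, a * b)
    return a

print(complex_agm(1, 3 + 4j))   # AGM(1, 3 + 4i), the "simplest value"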
Convergence of optimal AGM sequences
Lemma 3.2.4 ([Dup11, Theorem 1]). Let (a n , b n ) be an AGM sequence in which all the choices of signs are good. Then for all n ≥ 0 we have
|a_{n+1} − b_{n+1}| ≤ π/(8 min(|a_0|, |b_0|)) · |a_n − b_n|^2.
This can be used to prove quadratic convergence: in particular, optimal AGM sequences converge quadratically, so that our definition of the complex AGM enjoys the same properties as the real AGM.
The convergence of the AGM can actually be studied more precisely, with an explicit bound on the number of iterations required to compute $\mathrm{AGM}(1, z)$, as well as a bound on the number of guard bits:

Theorem 3.2.6 ([Dup11, Prop. 12 & Cor. 1]). Let $z$ be a complex number in the upper-right quadrant of the complex plane, and denote by $(a_n, b_n)$ the optimal AGM sequence for $(1, z)$. Put
$$n_P = \max\left(\lceil \log_2 |\log_2 |z|| \rceil, 1\right) + \lceil \log_2(P + 3) \rceil - 1.$$
Then $a_{n_P}$ is an approximation of $\mathrm{AGM}(1, z)$ with relative precision $P$ bits. Furthermore, each iteration loses a constant number of bits of relative precision, which means it is enough to work at precision
$$P + 2 + 2n_P = O(P + \log_2|\log_2|z||).$$
Note that this means that more iterations are needed the larger |z| is, but also the smaller |z| is.
In the end, this gives the result that the complex AGM can be computed with precision P in O(M(P ) log P ) bit operations. Note that the constant in the O depends on z; however, in our applications, we find a way to get rid of the dependency in z.
Theta-constants and arithmetico-geometric mean
Recall the τ -duplication formulas, which were mentioned in Chapter 2:
$$\theta_0(0, 2\tau)^2 = \frac{\theta_0(0,\tau)^2 + \theta_1(0,\tau)^2}{2}, \qquad \theta_1(0, 2\tau)^2 = \theta_0(0,\tau)\,\theta_1(0,\tau). \tag{3.2.1}$$
A proof of these formulas can also be obtained easily by manipulating the defining series; see [START_REF] Borwein | Pi and the AGM: a study in the analytic number theory and computational complexity[END_REF]. This means that the sequence $\left(\theta_0(0, 2^n\tau)^2, \theta_1(0, 2^n\tau)^2\right)_{n\in\mathbb{N}}$ is an AGM sequence for $\left(\theta_0(0,\tau)^2, \theta_1(0,\tau)^2\right)$. Studying the choice of signs in this AGM sequence was done first in [START_REF] Geppert | Zur Theorie des arithmetisch-geometrischen Mittels[END_REF], while [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF] gave a more modern treatment.
Optimal sequences
Definition 3.2.7. Define
$$D_1 = \left\{\tau \in \mathcal{H} \,\middle|\, |\mathrm{Re}(\tau)| \le 1,\ \left|\mathrm{Re}\left(\frac{1}{\tau}\right)\right| \le 1\right\}$$
and define $\overline{D_1}$ as the domain obtained by translating $D_1$ by $\pm 2, \pm 4$, etc.
Proposition 3.2.8 ([Cox84, Lemma 2.8 and Lemma 2.9]). For all $\tau \in D_1$, or in $\overline{D_1}$, we have
$$\mathrm{Re}\left(\frac{\theta_1^2(0,\tau)}{\theta_0^2(0,\tau)}\right) > 0, \quad \text{or} \quad \mathrm{Re}\left(\frac{\theta_1^2(0,\tau)}{\theta_0^2(0,\tau)}\right) = 0 \ \text{and} \ \mathrm{Im}\left(\frac{\theta_1^2(0,\tau)}{\theta_0^2(0,\tau)}\right) > 0.$$
Now, pick a $\tau \in \mathcal{F}$, and consider the AGM sequence starting with $(\theta_0^2(0,\tau), \theta_1^2(0,\tau))$. Picking the right choice of sign for the first step is then equivalent to the condition
$$|\theta_0^2(0,2\tau) - \theta_1^2(0,2\tau)| < |\theta_0^2(0,2\tau) + \theta_1^2(0,2\tau)|,$$
$$\text{or} \quad |\theta_0^2(0,2\tau) - \theta_1^2(0,2\tau)| = |\theta_0^2(0,2\tau) + \theta_1^2(0,2\tau)| \ \text{and} \ \mathrm{Im}\left(\frac{\theta_1^2(0,2\tau)}{\theta_0^2(0,2\tau)}\right) > 0$$
$$\Leftrightarrow \quad \mathrm{Re}\left(\frac{\theta_1^2(0,2\tau)}{\theta_0^2(0,2\tau)}\right) > 0 \quad \text{or} \quad \mathrm{Re}\left(\frac{\theta_1^2(0,2\tau)}{\theta_0^2(0,2\tau)}\right) = 0 \ \text{and} \ \mathrm{Im}\left(\frac{\theta_1^2(0,2\tau)}{\theta_0^2(0,2\tau)}\right) > 0.$$
Hence:

Proposition 3.2.9 ([Cox84, Lemma 2.9]). Define $D_2 = \frac{1}{2}(D_1 \setminus B)$, where $B$ is the border of the half circle on the right. Define also $\overline{D_2}$ as $D_2$ translated by $\pm 1, \pm 2$, etc. Then for $\tau \in \overline{D_2}$, the sequence $\left(\theta_0^2(0, 2^n\tau), \theta_1^2(0, 2^n\tau)\right)_{n\in\mathbb{N}}$ is an optimal AGM sequence, and hence
$$\mathrm{AGM}(\theta_0(0,\tau)^2, \theta_1(0,\tau)^2) = 1.$$
Hence, AGM sequences starting with the fundamental theta-constants at $\tau$ are optimal sequences for $\tau$ in the striped domain of Figure 3.1.

Remark 3.2.10. Note that the AGM is homogeneous, i.e. $\mathrm{AGM}(\lambda x, \lambda y) = \lambda\,\mathrm{AGM}(x, y)$. Hence, a direct consequence of Proposition 3.2.9 is that
$$\mathrm{AGM}\left(1, \frac{\theta_1^2(0,\tau)}{\theta_0^2(0,\tau)}\right) = \frac{1}{\theta_0^2(0,\tau)}$$
for any $\tau \in D_2$.

Figure 3.1: The domains $D_1$ (in red) and $\overline{D_2}$ (blue lines), with the fundamental domain $\mathcal{F}$.
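A quick numerical check of Proposition 3.2.9 at $\tau = i$ (so $q = e^{-\pi}$), using mpmath's jtheta (whose $\theta_3, \theta_4$ correspond to our $\theta_0, \theta_1$) and its built-in agm:

```python
from mpmath import mp, exp, pi, jtheta, agm

mp.dps = 40
q = exp(-pi)                                  # q = e^{i*pi*tau} at tau = i
t0, t1 = jtheta(3, 0, q), jtheta(4, 0, q)     # our theta_0, theta_1
print(agm(t0**2, t1**2))                      # should be ~ 1
```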
Limits of AGM sequences
Proposition 3.2.11 ([Cox84, Lemma 2.5 and Lemma 2.7]). Define the principal congruence subgroup of level 2:
$$\Gamma(2) = \left\{\gamma \in \mathrm{SL}_2(\mathbb{Z}) \,\middle|\, \gamma \equiv \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \pmod 2\right\}.$$
Then given $\tau \in \mathcal{F}$, there is $\gamma \in \Gamma(2)$ such that $\gamma \cdot \tau \in D_1$; in fact, the fundamental domain of $\Gamma(2)$ is $D_1$ minus the left half-circle.
Similarly, the following proposition determines the set for which D 2 is a fundamental domain:
Proposition 3.2.12 ([Cox84, Lemma 2.7]). Define $\Gamma_2(4)$, a subgroup of $\Gamma(2)$, as
$$\Gamma_2(4) = \left\{\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z}) \,\middle|\, b \equiv 0 \pmod 2,\ c \equiv 0 \pmod 4\right\}.$$
Then given $\tau \in \mathcal{F}$, there is $\gamma \in \Gamma_2(4)$ such that $\gamma \cdot \tau \in D_2$. In fact, the fundamental domain of $\Gamma_2(4)$ is $D_2$ minus the two left half-circles.
Furthermore:
Proposition 3.2.13 ([Dup06, Prop. 2.14]). The function $\tau \mapsto \frac{\theta_1^2(0,\tau)}{\theta_0^2(0,\tau)}$ is modular for $\Gamma_2(4)$, and in particular invariant under its action.
Now, for any $\tau \in \mathcal{F}$ take $\gamma \in \Gamma_2(4)$ such that $\gamma \cdot \tau \in D_2$, and write
$$\mathrm{AGM}(\theta_0^2(0,\tau), \theta_1^2(0,\tau)) = \frac{\theta_0^2(0,\tau)}{\theta_0^2(0,\gamma\cdot\tau)}\,\mathrm{AGM}\left(\theta_0^2(0,\gamma\cdot\tau), \theta_1^2(0,\gamma\cdot\tau)\right) = \frac{\theta_0^2(0,\tau)}{\theta_0^2(0,\gamma\cdot\tau)}$$
using the homogeneity of the AGM and Proposition 3.2.13. This proves that:
Theorem 3.2.14 ([Cox84], [Dup06, Theorem 3.1]). The set of limits of AGM sequences starting at $(\theta_0^2(0,\tau), \theta_1^2(0,\tau))$ is
$$\left\{\frac{\theta_0^2(0,\tau)}{\theta_0^2(0,\gamma\cdot\tau)},\ \gamma \in \Gamma_2(4)\right\} \cup \{0\}.$$
This can be rephrased into a result describing the set of limits of AGM sequences starting at (a, b) ∈ C 2 ; this result was known to Gauss [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF]. In particular, this gives the rather striking result that the inverse of the limits of the AGM sequences starting at (a, b) form a lattice.
Applications of the complex AGM
We highlight a few of the computational applications of the complex AGM. The first application is the study of elliptic integrals, which actually led to the discovery of the complex AGM in the first place. A second application is the discovery, in the 1970s, of fast algorithms based on the AGM to compute important mathematical quantities, such as π (see Section 3.1.2) and log(z), and thus exp(z).
A most interesting application of the AGM is the computation of theta-constants; we give extensive details on this algorithm in Chapter 6, in which we also generalize the blueprint of the algorithm to theta functions.
Elliptic integrals
As explained in [Cox84, Section 3], the history of the arithmetico-geometric mean is very tightly connected to the study of certain integrals, starting with the integral $\int_0^1 \frac{dz}{\sqrt{1 - z^4}}$,
which is a quarter of the arc length of the lemniscate. The lemniscate, and its sibling the elastic curve, was discovered in the 17th century by Bernoulli, then studied by Stirling and Euler; Lagrange, in 1785, came very close to linking the integral to the arithmetico-geometric mean, but seems to have missed this discovery.
Gauss studied related notions around 1795, where he attempted to study lemniscatic (elastic) functions much in the same way as one does circular geometry. He came back to it in 1798, this time studying quantities that in hindsight are closely related to theta functions; his explorations culminated in a calculation to the 11th decimal place made on May 30th, 1799, which led him to conjecture that
$$\int_0^1 \frac{dz}{\sqrt{1 - z^4}} = \frac{\pi}{2\,\mathrm{AGM}(\sqrt{2}, 1)}.$$
In his famous words, "the demonstration of this fact will surely open an entirely new field of analysis".
A classical proof of this identity will be given in Chapter 4; in fact, there are many more identities of this type, including one of particular interest for us:
$$\int_0^{2\pi} \frac{dt}{\sqrt{a^2\cos^2 t + b^2\sin^2 t}} = \frac{2\pi}{\mathrm{AGM}(a, b)}.$$
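This identity is easy to check numerically; a sketch with mpmath (quad and agm are mpmath functions, an assumption of this illustration):

```python
from mpmath import mp, mpf, sqrt, cos, sin, pi, quad, agm

mp.dps = 30
a, b = mpf(3), mpf(1)
lhs = quad(lambda t: 1/sqrt(a**2*cos(t)**2 + b**2*sin(t)**2), [0, 2*pi])
print(lhs, 2*pi/agm(a, b))          # the two values agree
```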
Computing the complex logarithm
Note that θ can be looked at as a function of q = e iπτ . In this context, Equation (2.5.10) is an equation linking θ to the logarithm of q; hence, one could think of computing the logarithm of a complex number using this equation and the properties of the AGM. We first outline the algorithm of Sasaki and Kanada to compute log x for real x, then show a different approach by Dupont to compute the complex logarithm in quasi-optimal time.
The algorithm of Sasaki and Kanada
Sasaki and Kanada [START_REF] Sasaki | Practically fast multiple-precision evaluation of log(x)[END_REF] proposed in 1982 an algorithm based on the AGM and theta-constants to compute log x in time O(M(P ) √ P ); we also refer to [BZ10, p.158] for more comments and implementation remarks. They start from the relation:
Proposition 3.3.1. For $\tau \in \mathcal{H}$ such that $-\frac{1}{\tau} \in D_2$, we have
$$\mathrm{AGM}(\theta_0(0,\tau)^2, \theta_2(0,\tau)^2) = \frac{i}{\tau}.$$
This proposition can be proven from Equation (2.5.10), Proposition 3.2.9, and the homogeneity of the AGM. Putting $q = e^{i\pi\tau}$, this can be rephrased as
$$\frac{\log q}{i\pi} = \tau = \frac{i}{\mathrm{AGM}\left(\left(\sum_{n\in\mathbb{Z}} q^{n^2}\right)^2, \left(\sum_{n\in\mathbb{Z}} q^{(n+1/2)^2}\right)^2\right)}.$$
This relation is valid for all complex numbers $\tau$ such that $-\frac{1}{\tau} \in D_2$; in particular, it is valid for $\tau \in \mathcal{F}$.
Sasaki and Kanada look in particular at the case where $q$ is real and smaller than 1, which corresponds to $\tau \in i\mathbb{R}_{>0}$. The result above then shows how to compute the logarithm of $q$:
$$\log q = -\frac{\pi}{\mathrm{AGM}\left(\left(\sum_{n\in\mathbb{Z}} q^{n^2}\right)^2, \left(\sum_{n\in\mathbb{Z}} q^{(n+1/2)^2}\right)^2\right)}.$$
Alternatively, one can apply this result to q 4 in order to avoid the q 1/4 which appears in the definition of θ 2 . Given q, the evaluation of the sums costs O(M(P ) √ P ) bit operations (see for example our analysis in Chapter 5). The evaluation of the AGM then costs O(M(P ) log P ), which is dominated by the cost of the evaluation of the sums; in the end, this gives a O(M(P ) √ P ) algorithm for log x.
Lastly, as noticed in [BB87, Section 7.2], if one works with arithmetic in base $b$, then for $q = \frac{1}{b}$ the computation of the arguments of the AGM is easy, since they are just sequences of 0s and 1s that can be computed very quickly (in linear time). Hence, the algorithm can compute $\frac{\pi}{\log b}$ in base $b$ in $O(M(P)\log P)$ time.
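A sketch of the resulting computation of $\log q$ for real $0 < q < 1$, with the theta sums evaluated naively by mpmath's nsum (so this illustrates the formula, not the $O(M(P)\sqrt{P})$ summation strategy):

```python
from mpmath import mp, mpf, pi, agm, nsum, inf, log

mp.dps = 40
q = mpf('0.01')
s0 = nsum(lambda n: q**(n**2), [-inf, inf])               # theta_0(0, tau)
s2 = nsum(lambda n: q**((n + mpf(1)/2)**2), [-inf, inf])  # theta_2(0, tau)
print(-pi/agm(s0**2, s2**2), log(q))                      # both ~ log(0.01)
```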
We will not use this algorithm in the rest of this manuscript; indeed, it uses theta-constants to compute logarithms, when the quasi-optimal time algorithms we consider (most notably in Chapter 6) require the computation of logarithms to compute theta-constants (and theta functions).
A quasi-optimal time algorithm for log z
This section describes a quasi-optimal time algorithm to compute the logarithm of a complex number with precision $P$; this algorithm is presented e.g. in [Dup06, Theorem 3.3, p. 90]. Compared to the algorithm in the previous section, this algorithm does not require the computation of theta-constants, and has better running time; this is the algorithm we use in the rest of this manuscript to compute $\log z$.
The following approximation is key to the algorithm:
Theorem 3.3.2. For any $z$ such that $|z| \le 2^{-10}$ and $|\mathrm{Arg}(z)| \le \frac{\pi}{4}$:
$$\left|\log\frac{z}{4} + \frac{\pi}{2\,\mathrm{AGM}(1, z)}\right| \le 0.26\,|z|^2\left(1 + \left|\log\frac{z}{4}\right|\right).$$
The proof of this theorem uses the fact that the function $\tau \mapsto \frac{\theta_1^2(0,\tau)}{\theta_0^2(0,\tau)}$ is surjective, and hence proves in particular that for $z$ satisfying the hypotheses of the theorem, there exists $\tau_z \in \mathcal{F}$ such that $z = \frac{\theta_2^2(0,\tau_z)}{\theta_0^2(0,\tau_z)}$. All the properties in terms of $\mathrm{AGM}(1, z)$ are then rephrased in terms of theta-constants, and Equation (2.5.10) is used to provide the link between theta-constants and $\tau_z$; a careful bounding of the series defining the theta-constants in that case provides the result.
Computing log(2). Theorem 3.3.2 directly gives an algorithm that computes $\log 2$ with precision $P$ in $O(M(P)\log P)$ operations. Put $z = \frac{1}{2^n}$; then the theorem above proves that (at least for $n \ge 12$)
$$\left|\frac{\log z + \frac{\pi}{2\,\mathrm{AGM}(1, 4z)}}{\log z}\right| \le \frac{1}{2^{2n-3}},$$
hence $-\frac{\pi}{2\,\mathrm{AGM}(1, 4z)}$ is an approximation of $-n\log 2$ with relative precision $2n - 3$ bits.

Finally, put $n = \left\lceil\frac{P+4}{2}\right\rceil$ and compute $\frac{\pi}{2n\,\mathrm{AGM}(1, 1/2^{n-2})}$; this gives an approximation of $\log 2$ with relative precision $P$. Note that since $\log 2 \simeq 0.69$, this also gives an approximation of $\log 2$ up to $2^{-P}$. This algorithm requires the computation of $O(P)$ digits of $\pi$, which is done using Section 3.1.2; as for the computation of the AGM, one can show (using Theorem 3.2.6) that the number of iterations is $O(\log P)$, and hence the total running time is $O(M(P)\log P)$.
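A sketch of this computation of $\log 2$ in Python/mpmath (mp.prec and mpmath's agm are assumptions of the illustration; the thesis's algorithm manages guard bits explicitly):

```python
from mpmath import mp, mpf, pi, agm, log

mp.dps = 60
P = mp.prec                               # current precision, in bits
n = P//2 + 2                              # roughly ceil((P+4)/2)
approx = pi/(2*n*agm(1, mpf(2)**(2 - n)))
print(approx - log(2))                    # ~ 2^{-P}
```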
Computing log(z)
This in turn yields an algorithm to compute the complex logarithm with relative precision $P$. Put $M = \left\lceil\frac{P+4}{2}\right\rceil$; one can assume that $2^{-M-1} \le |z| \le 2^{-M}$, since one can use the previous algorithm to add the right multiple of $\log 2$ to the final result. One can also suppose that $|\mathrm{Arg}(z)| \le \pi/4$, even if it means adding a multiple of $i\pi/2$ to the final result. Then, one applies Theorem 3.3.2 to compute an approximation of $\log z$ with relative precision $P$ bits. This gives Algorithm 3.
Algorithm 3 Compute $\log z$ with absolute precision $P$.
Input: $z \in \mathbb{C}$ with absolute precision $P$.
Output: $\log z$ with absolute precision $P$.
1: Work with internal precision $\mathfrak{P} = P + 6\log P + 3|\log(2 + |z|)| + 20$.
2: $f \leftarrow 0$
3: while $|\mathrm{Arg}(z)| > \pi/4$ do
4: $\quad z \leftarrow iz$, $f \leftarrow f + 1$
5: end while
6: $M \leftarrow \left\lceil\frac{\mathfrak{P}+4}{2}\right\rceil$
7: $n \leftarrow 0$
8: while $|z| \ge 2^{-M}$ do
9: $\quad z \leftarrow z/2$, $n \leftarrow n + 1$
10: end while
11: $\pi_{\mathfrak{P}} \leftarrow \pi$ with precision $\mathfrak{P}$.
12: $s \leftarrow \mathrm{AGM}\left(1, \frac{1}{2^{M-2}}\right)$ with precision $\mathfrak{P}$.
13: $r \leftarrow \mathrm{AGM}(1, 4z)$ with precision $\mathfrak{P}$.
14: return $\pi_{\mathfrak{P}}\left(-\dfrac{1}{2r} - \dfrac{fi}{2} + \dfrac{n}{2Ms}\right)$
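The following Python/mpmath sketch mirrors Algorithm 3, with mpmath's agm standing in for the optimal AGM, without the careful management of the working precision $\mathfrak{P}$, and assuming $|z|$ is not astronomically small; the final folding of the imaginary part (our addition) recovers the principal branch:

```python
from mpmath import mp, mpc, mpf, pi, agm, arg, log, floor

mp.dps = 50

def agm_log(z):
    f, n = 0, 0
    while abs(arg(z)) > pi/4:             # rotate into |Arg(z)| <= pi/4
        z, f = 1j*z, f + 1
    M = mp.prec//2 + 2
    while abs(z) >= mpf(2)**(-M):         # scale down so Theorem 3.3.2 applies
        z, n = z/2, n + 1
    log2 = pi/(2*M*agm(1, mpf(2)**(2 - M)))
    w = -pi/(2*agm(1, 4*z)) - f*1j*pi/2 + n*log2
    # the result is only defined mod 2*pi*i; fold back to the principal branch
    return w + 2j*pi*floor((pi - w.imag)/(2*pi))

z = mpc(-3, 4)
print(agm_log(z), log(z))                 # the two values agree
```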
Theorem 3.3.3. For $z \in \mathbb{C}$ with absolute precision $P$, Algorithm 3 returns an approximation of $\log z$ with absolute precision $P$ in $O\big(M(P + |\log|z||)(\log P + \log|\log|z||)\big)$ bit operations.
Proof.
If $|z| \le 2^{-M}$, Theorem 3.3.2 proves that $-\frac{\pi}{2\,\mathrm{AGM}(1,4z)}$ is an approximation of $\log z$ with relative precision $P + 1$ bits. According to Theorem 3.2.6, computing $\mathrm{AGM}(1, 4z)$ with relative precision $P$ requires at most $\log P + \log|\log|z||$ iterations, and requires $2 + 2\log P + 2\log|\log|z||$ guard bits. Given our choice of $\mathfrak{P}$, Algorithm 3 returns an approximation of $\log z$ with relative precision at least $P + |\log|z||$, which gives an approximation of $\log z$ with absolute precision $P$.

Now, if $|z| \ge 2^{-M}$, the computation of $\log\frac{z}{2^n}$ requires the computation of $\mathrm{AGM}(1, u)$ with $\log|\log|u|| \le \log M \le \log \mathfrak{P}$. Hence, the correct computation of this AGM with relative precision $\mathfrak{P}$ requires at most $2\log \mathfrak{P}$ iterations, and the number of guard bits needed to compensate errors is $2 + 4\log \mathfrak{P}$. Given our choice of $\mathfrak{P}$, $-\frac{\pi}{2r}$ is an approximation of $\log\frac{z}{2^n}$ with relative precision at least $P + \log P$, which guarantees absolute precision $P + 2$.

The computation of $\log 2$ with absolute or relative precision $\mathfrak{P}$ also requires roughly $2\log \mathfrak{P}$ iterations and $2 + 4\log \mathfrak{P}$ guard bits. Hence, $\frac{\pi}{2Ms}$ is an approximation of $\log 2$ with absolute precision at least $P + \log P + 2|\log(2 + |z|)| + 20$. Numerical experiments show that
$$\log_2 n \le \log_2\left(\log_2|z| + \log_2(P + 5) - 1\right) \le 2\log_2\log_2(2 + |z|) + 1.5\log_2 P + 5$$
for $P \ge 2$. This proves (using Theorem 0.3.3) that $\frac{n\pi}{2Ms}$ is an approximation of $n\log 2$ with absolute precision $P + 2$, which proves the theorem.
Note that this gives a O(M(P ) log P ) running time if z is assumed to be in a compact set.
Computing the exponential
Classical methods to compute the exponential of a complex number involve computing a partial summation with enough terms. The series converges rather quickly, since $O\left(\frac{P}{\log P}\right)$ terms are sufficient to get a result with absolute precision $P$ [START_REF] Brent | The complexity of multiple-precision arithmetic[END_REF]. One can also use argument reduction and compute $\exp\frac{z}{2^i}$ for some $i$, so that the series converges even faster: around $O(\sqrt{P})$ terms are then needed, as well as $O(\sqrt{P})$ squarings. Moreover, the technique of binary splitting, which evaluates parts of the series recursively, can also take advantage of the fact that the denominators of several consecutive terms have a common factor, and so do the numerators. The computation of the exponential at rational points can then be done quickly, in fact in quasi-optimal time. We refer to [START_REF] Brent | The complexity of multiple-precision arithmetic[END_REF][START_REF] Brent | Modern Computer Arithmetic[END_REF] for an overview of these techniques, which achieve overall an $O(M(P)\log^2 P)$ running time.
An asymptotically faster method is to use Newton's method to compute $\exp z$ from $\log z$. The Newton iteration to compute $\exp a$ is
$$z_{n+1} = z_n + z_n(a - \log z_n),$$
and requires the computation of $\log z_n$ at each step; however, as we explained in the introduction (Section 0.3.3), the fact that Newton's method is self-correcting means that one can afford to simply compute the $k$-th iteration with precision $2^{k+1}P_0$. Hence, the total complexity of computing $\exp z$ via Newton's method is the same as applying the complex logarithm to numbers close to $e^z$, i.e.
$$O\big(M(P + |z|)(\log P + \log|z|)\big),$$
or O(M(P ) log P ) if z is in a compact set. However, note that in practice, the algorithms based on binary splitting perform better than this algorithm, even for large (e.g. millions of digits) precisions, as the constant in the O is smaller than the one of the AGM-based algorithms.
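A sketch of the Newton iteration for exp (using mpmath's log where the thesis would use the AGM-based logarithm, and a crude starting value instead of the precision-doubling strategy):

```python
from mpmath import mp, mpf, exp, log

mp.dps = 50

def newton_exp(a, iters=10):
    z = 1 + a                 # crude start; fine for moderate |a|
    for _ in range(iters):    # Newton for f(z) = log(z) - a
        z = z*(1 + a - log(z))
    return z

a = mpf('0.7')
print(newton_exp(a), exp(a))  # the two values agree
```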
Generalization of the AGM to higher genera
This section is based on [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF], who generalized some of the results of the previous sections to genus g > 1.
Definition
An AGM-like sequence of four positive numbers was considered by Borchardt in [START_REF] Borchardt | Ueber das arithmetisch-geometrische Mittel aus vier Elementen[END_REF][START_REF] Borchardt | Theorie des arithmetisch-geometrisches Mittels aux vier Elementen[END_REF], with the relations
$$a_{n+1} = \frac{a_n + b_n + c_n + d_n}{4}, \qquad b_{n+1} = \frac{\sqrt{a_n}\sqrt{b_n} + \sqrt{c_n}\sqrt{d_n}}{2},$$
$$c_{n+1} = \frac{\sqrt{a_n}\sqrt{c_n} + \sqrt{b_n}\sqrt{d_n}}{2}, \qquad d_{n+1} = \frac{\sqrt{a_n}\sqrt{d_n} + \sqrt{b_n}\sqrt{c_n}}{2}.$$
This provided an interesting generalization of the AGM: the convergence is quadratic (see, e.g. [START_REF] Spandaw | Hyperelliptic integrals and generalized arithmetic-geometric mean[END_REF], for a direct proof of this), and the limit does not depend on the order of the arguments, just as with the AGM. Borchardt also notes in his original article that one can consider a generalization to 2 g numbers, but the limit of the sequence depends on the order of the arguments. Note that Borchardt only considered sequences of positive real numbers, and hence the square roots in the definition are unambiguously defined. The Borchardt mean of four numbers is linked to genus 2 hyperelliptic integrals, via the Richelot isogeny; we discuss this link in Section 8.2.2, and refer to [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF][START_REF] Spandaw | Hyperelliptic integrals and generalized arithmetic-geometric mean[END_REF] for more details on the real case.
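A sketch of Borchardt's iteration on four positive reals (Python/mpmath; the stopping criterion is ours):

```python
from mpmath import mp, mpf, sqrt

mp.dps = 40

def borchardt4(a, b, c, d):
    while max(a, b, c, d) - min(a, b, c, d) > mpf(10)**(5 - mp.dps):
        ra, rb, rc, rd = sqrt(a), sqrt(b), sqrt(c), sqrt(d)
        a, b, c, d = ((a + b + c + d)/4, (ra*rb + rc*rd)/2,
                      (ra*rc + rb*rd)/2, (ra*rd + rb*rc)/2)
    return a

print(borchardt4(mpf(4), mpf(3), mpf(2), mpf(1)))
```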
We outline here the generalization of this sequence to the case where we have 2 g complex numbers.
Definition 3.4.1 (e.g., [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF]). A Borchardt sequence of genus $g$ is a sequence of $2^g$-tuples $\left(a_0^{(n)}, \ldots, a_{2^g-1}^{(n)}\right)_{n\in\mathbb{N}}$ such that, for all $n$, there exist $\alpha_0, \ldots, \alpha_{2^g-1}$, square roots of $a_0^{(n)}, \ldots, a_{2^g-1}^{(n)}$ (i.e. $\alpha_i^2 = a_i^{(n)}$), such that
$$\forall i \in \{0, \ldots, 2^g - 1\}, \qquad a_i^{(n+1)} = \frac{1}{2^g}\sum_{v_1 \oplus v_2 = i} \alpha_{v_1}\alpha_{v_2},$$
where $\oplus$ denotes the bitwise XOR operation. The choice of complex square roots is good at rank $n$ if for any $v_1, v_2$ we have
$$|\alpha_{v_1} - \alpha_{v_2}| < |\alpha_{v_1} + \alpha_{v_2}|.$$
Note however that, given 2 g complex numbers, the existence of a good choice of signs is not guaranteed, as highlighted in Figure 3.2.
Choice of roots and convergence
As with the AGM, the notion of choice of square roots is crucial, and determines for instance the limit:
Theorem 3.4.2 ([Dup06, Theorem 7.1]). Let $\left(a_0^{(n)}, \ldots, a_{2^g-1}^{(n)}\right)_{n\in\mathbb{N}}$ be a Borchardt sequence. Then there is an $A \in \mathbb{C}$ such that for any $i \in \{0, \ldots, 2^g-1\}$, $\lim_{n\to\infty} a_i^{(n)} = A$. Furthermore, we have $A = 0$ if and only if the choice of square roots is not good infinitely many times.
Figure 3.2: Given $a_0 = 1$, $a_1 = -1 - 0.1i$, $a_2 = -1 + 0.1i$, any choice of square roots leads to a wrong choice of signs.
Remark 3.4.3. Given the definition of good choices of square roots (in which the inequality must be strict), a choice of square roots is bad as soon as one of the $a_i^{(n)}$ is equal to 0. However, unlike the AGM, this does not necessarily mean the sequence converges to 0, as $a_i^{(n+1)}$ is in general different from 0. On the other hand, if at least half of the $a_i^{(n)}$ are 0, it is easy to check that at least half of the $a_i^{(n+k)}$ will be equal to 0; the limit of the sequence is then 0, and the choices of square roots are necessarily always bad.
Definition 3.4.4. Let $\left(a_0^{(0)}, \ldots, a_{2^g-1}^{(0)}\right) \in \mathbb{C}^{2^g}$, and assume that one can define the sequence $\left(a_0^{(n)}, \ldots, a_{2^g-1}^{(n)}\right)_{n\in\mathbb{N}}$ with good choices of signs at each step. Then the Borchardt mean is the limit of this sequence; we denote it $B_g\left(a_0^{(0)}, \ldots, a_{2^g-1}^{(0)}\right)$, and by Theorem 3.4.2 it is non-zero. We also have:
Theorem 3.4.5 ([Dup06, Prop. 7.1, p. 163]). Let $\left(a_0^{(n)}, \ldots, a_{2^g-1}^{(n)}\right)_{n\in\mathbb{N}}$ be a Borchardt sequence such that $\mathrm{Re}(a_i^{(0)}) > 0$, and such that all the square roots are chosen with positive real part. Let $N$ be such that for all $i$ we have
$$\left|a_i^{(N)} - a_0^{(N)}\right| \le 0.2247\,\left|a_0^{(N)}\right|. \tag{3.4.1}$$
Then
$$\left|A - a_0^{(N+k)}\right| \le 1.43\,M_N \times 0.7867^{2^k}, \qquad \text{with } M_N = \max_i \left|a_i^{(N)}\right|.$$
The condition of Equation (3.4.1) is always satisfied for $N$ large enough (see [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF], p. 164). Hence, this theorem establishes the quadratic convergence of any Borchardt sequence for which the elements have positive real parts, and for which the square roots are always chosen with positive real part.
Note that, provided wrong choices of square roots do not happen infinitely often, one can always fall back to this case: the sequence $\left(\frac{a_0^{(n)}}{A}, \ldots, \frac{a_{2^g-1}^{(n)}}{A}\right)_{n\in\mathbb{N}}$ (which is a Borchardt sequence) converges to $(1, \ldots, 1)$, which means that after a certain number of terms the real part of the sequence is always strictly positive. Furthermore, when the $a_i^{(n)}$ are close to 1 (and to each other), choosing the square root with positive real part corresponds to the good choice of signs; hence, when always choosing the square roots with positive real parts, the choice of signs is good after a while. Hence:

Theorem 3.4.6 ([Dup06, Section 7.4.2]). Any Borchardt sequence with a finite number of wrong choices of sign converges quadratically; in particular, computing the limit of a Borchardt sequence with precision $P$ can be done in $O(M(P)\log P)$ bit operations.
This means in particular that one can compute $B_g\left(a_0^{(0)}, \ldots, a_{2^g-1}^{(0)}\right)$ with precision $P$ in quasi-linear time.
Link with the theta-constants
Recall the genus $g$ $\tau$-duplication formulas (Equation (2.2.1)):
$$\theta_{[a;b]}(z, \tau)^2 = \frac{1}{2^g}\sum_{\beta \in \frac{1}{2}\mathbb{Z}^g/\mathbb{Z}^g} e^{-4i\pi\,{}^t a\beta}\ \theta_{[0;b+\beta]}\left(z, \frac{\tau}{2}\right)\,\theta_{[0;\beta]}\left(0, \frac{\tau}{2}\right).$$
The addition in the characteristics is addition in $\frac{1}{2}\mathbb{Z}^g/\mathbb{Z}^g$. We have the following group isomorphism:
$$\phi : \tfrac{1}{2}\mathbb{Z}^g/\mathbb{Z}^g \to \{0, \ldots, 2^g - 1\}, \qquad (\ldots, 0, \delta_i, 0, \ldots) \mapsto \begin{cases} 2^i & \text{if } \delta_i = \frac{1}{2} \pmod 1 \\ 0 & \text{if } \delta_i = 0 \pmod 1 \end{cases}, \qquad \phi(a + b) = \phi(a) \oplus \phi(b),$$
where $\oplus$ is the bitwise XOR operation. Hence, using the numbering described in Note 2.1.5, define $a_i(\tau) = \theta_i^2(0, \tau)$; then the $\tau$-duplication formulas for the fundamental thetas can be rewritten as
$$a_i(2\tau) = \frac{1}{2^g}\sum_{v_1 \oplus v_2 = i} \sqrt{a_{v_1}(\tau)}\sqrt{a_{v_2}(\tau)}$$
for some choice of square roots: this is exactly the definition of a step of a Borchardt sequence. Hence:

Proposition 3.4.7. The sequence $\left(\theta_0^2(0, 2^n\tau), \ldots, \theta_{2^g-1}^2(0, 2^n\tau)\right)_{n\in\mathbb{N}}$ is a Borchardt sequence.
Furthermore, Proposition 2.1.9 shows that the sequence converges to $(1, \ldots, 1)$. In fact, this sequence even converges quadratically; this means that the choice of square roots which corresponds to computing $\theta_i(0, 2^k\tau)$ from $\theta_i^2(0, 2^k\tau)$ (which, when $\tau$ is large enough, actually corresponds to picking the square roots with positive real part) is a good choice of square roots (in the sense of Definition 3.4.1) for all but a finite number of steps.
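In genus 1, Proposition 3.4.7 is easy to check numerically: with $a_i(\tau) = \theta_i^2(0, \tau)$, one step of the Borchardt/$\tau$-duplication recurrence must reproduce the theta-constants at $2\tau$. A sketch with mpmath's jtheta ($\theta_3, \theta_4$ play the roles of our $\theta_0, \theta_1$; doubling $\tau$ squares $q$):

```python
from mpmath import mp, exp, pi, jtheta, sqrt

mp.dps = 40
q = exp(-pi)                                         # tau = i
a0, a1 = jtheta(3, 0, q)**2, jtheta(4, 0, q)**2      # a_i(tau) = theta_i(0,tau)^2
# one duplication step (g = 1, so the factor is 1/2): the i = 0 and i = 1 sums
print((a0 + a1)/2 - jtheta(3, 0, q**2)**2)           # ~ 0
print(sqrt(a0)*sqrt(a1) - jtheta(4, 0, q**2)**2)     # ~ 0
```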
The most interesting case is the one for which good choices of signs always coincide with theta-constants at $2^k\tau$: Definition 3.4.8. Define $U_g$ as the set of $\tau \in \mathcal{H}_g$ such that $B_g\left(\theta_0^2(0,\tau), \ldots, \theta_{2^g-1}^2(0,\tau)\right)$ is defined and equal to 1; that is to say, good choices of square roots always exist, and always choosing them gives rise to the sequence $\left(\theta_0^2(0, 2^n\tau), \ldots, \theta_{2^g-1}^2(0, 2^n\tau)\right)_{n\in\mathbb{N}}$. Proposition 3.2.9 proves that $D_2 \subset U_1$, and in particular $\mathcal{F}_1 \subset U_1$. In genus 2, we have that $\mathcal{F}_2 \subset U_2$ [Dup06, Prop. 9.6, p. 196]. We were also able to prove a slightly better result in genus 2, which we mention in Chapter 7 (Proposition 7.1.2).
Studying this domain is interesting in the context of quasi-linear algorithms to compute thetaconstants (see Chapter 7); however, even in genus 2, it is not an easy task. In particular, a result establishing the stability of this domain under the action of some matrices is still a conjecture (see, e.g. [Dup06, Conjecture 9.1] and [ET14a, Conjecture 9]). We describe how we sidestep this difficulty in practice, in the context of our fast algorithm for theta functions and theta-constants, in Chapter 7.
To finish, recall that, in genus 1, a description of the limits of AGM sequences starting at the squares of theta-constants can be given explicitly (Theorem 3.2.14), which also shows that the inverses of these limits form a lattice. This result generalizes to genus 2, i.e. the set of limits can be written as the set of $\frac{\theta_j^2(0,\tau)}{\theta_j^2(0,\gamma\cdot\tau)}$ for $\gamma$ in a subgroup of $\mathrm{Sp}_4(\mathbb{Z})$; this is [Dup06, Chapter 8]. However, this result does not appear to generalize to genus 3 and above.²
²The main obstacle is in the generalization of [Dup06, Lemme 8.1], where $\tau$-duplication formulas are used to show that any choice of sign transforms the set of $\theta_i^2(\tau)$ into a set of $\theta_i^2(2\gamma\tau)$ for some $\gamma$. Changing the sign of only one of the theta-constants gives formulas with one "minus" sign and $2^{g-1} - 1$ "plus" signs, but $\tau$-duplication formulas have an equal number of "plus" and "minus" signs; for $g > 2$, we cannot reconcile the two formulas.
Chapter 4

The Landen isogeny
This chapter is devoted to the Landen isogeny, which is an important 2-isogeny between elliptic curves over the complex numbers (and in fact over the real numbers as well). This isogeny shows up as a change of variables in elliptic integrals. Interestingly, this gives a method to compute periods of an elliptic curve, by repeatedly applying the Landen isogeny; this strategy has been described in [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF] in a specific case, then generalized (via lattice chains) in [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF]. A similar strategy allows us to compute the elliptic logarithm; hence, this chapter proves: Theorem 4.0.1. The genus 1 Abel-Jacobi map can be computed using quadratically convergent sequences.
This gives algorithms with complexity of O(M(P ) log P ) to compute the Abel-Jacobi map with absolute precision P , provided one takes into account the loss of precision. In the real case, it seems that a few papers (e.g. [START_REF] Luther | Computation of Standard Interval Functions in Multiple-Precision Interval Arithmetic[END_REF] or [START_REF] Luther | Reliable computation of elliptic functions[END_REF]) have shown that the precision lost is of no asymptotic importance. As for the complex elliptic logarithm, we prove that only O(log P ) guard bits are needed. Hence throughout this chapter, we will assume that precision loss has no impact over the running time of the algorithm, i.e. that they are at most O(P ); this allows us to claim a O(M(P ) log P ) algorithm for the complex genus 1 Abel-Jacobi map.
We also show an algorithm to compute the Weierstrass ℘ function with estimated complexity $O(M(P)\log P)$, once again using the Landen isogeny. However, this algorithm suffers from numerical instability: as the argument gets close to the edges of the fundamental parallelogram, the accuracy of the result is greatly reduced, even for $\tau$ in the fundamental domain. Another approach, not based on the Landen transform, achieves this complexity without this numerical instability; we outline it in Chapter 8.
The real case (Bost-Mestre)
In this section, $E$ is an elliptic curve defined over $\mathbb{R}$ by the Weierstrass equation $E : y^2 = 4(x - e_1)(x - e_2)(x - e_3)$, with $e_1, e_2, e_3$ real and distinct, and $\sum e_i = 0$. This section establishes the connections between periods of this elliptic curve, the AGM, and a certain chain of 2-isogenies. We follow the presentation of [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF], and refer to it for the proofs of most statements.
Elliptic integrals and period computation
The following proposition was known to Gauss and Lagrange:
Proposition 4.1.1. For $a \ge b > 0$,
$$\int_0^{2\pi} \frac{dt}{\sqrt{a^2\cos^2 t + b^2\sin^2 t}} = \frac{2\pi}{\mathrm{AGM}(a, b)}.$$
Proof. Using the change of variables
$$\sin t_1 = \frac{2a\sin t}{(a+b) + (a-b)\sin^2 t},$$
a careful calculation shows that
$$\int_0^{2\pi} \frac{dt}{\sqrt{a^2\cos^2 t + b^2\sin^2 t}} = \int_0^{2\pi} \frac{dt}{\sqrt{\left(\frac{a+b}{2}\right)^2\cos^2 t + ab\,\sin^2 t}}.$$
Hence, for all $n \ge 0$,
$$\int_0^{2\pi} \frac{dt}{\sqrt{a^2\cos^2 t + b^2\sin^2 t}} = \int_0^{2\pi} \frac{dt}{\sqrt{a_n^2\cos^2 t + b_n^2\sin^2 t}}$$
and, taking the limit when $n \to \infty$,
$$\int_0^{2\pi} \frac{dt}{\sqrt{a^2\cos^2 t + b^2\sin^2 t}} = \int_0^{2\pi} \frac{dt}{\sqrt{\mathrm{AGM}(a,b)^2\cos^2 t + \mathrm{AGM}(a,b)^2\sin^2 t}} = \frac{2\pi}{\mathrm{AGM}(a, b)}.$$
Furthermore:

Proposition 4.1.2. Let $P = 4(X - e_1)(X - e_2)(X - e_3)$ with $e_3 < e_2 < e_1$. Then
$$\int_{e_1}^{+\infty} \frac{dx}{\sqrt{P(x)}} = \int_{e_3}^{e_2} \frac{dx}{\sqrt{P(x)}} = \frac{\pi}{2\,\mathrm{AGM}(\sqrt{e_1 - e_3}, \sqrt{e_1 - e_2})},$$
$$\int_{-\infty}^{e_3} \frac{dx}{\sqrt{-P(x)}} = \int_{e_2}^{e_1} \frac{dx}{\sqrt{-P(x)}} = \frac{\pi}{2\,\mathrm{AGM}(\sqrt{e_1 - e_3}, \sqrt{e_2 - e_3})}.$$
Proof. The first half of the first identity can be proven using the change of variables
$$x' = \frac{e_2 x - e_1 e_2 + e_1 e_3 - e_2 e_3}{x - e_2},$$
while the second half can be proven using the change of variables $x = e_3 + (e_2 - e_3)\sin^2 t$.
Recall (Proposition 1.3.2) that periods can be defined as integrals of the invariant differential along paths around the branch cuts, as in [Sil86, Section VI.1]. Hence:

Proposition 4.1.3. Let $P = 4(X - e_1)(X - e_2)(X - e_3)$ with $e_3 < e_2 < e_1$, and let $E$ be the elliptic curve defined over $\mathbb{C}$ by $E : y^2 = P(x)$. Define
$$\omega_1 = 2\int_{e_3}^{e_2} \frac{dx}{\sqrt{P(x)}}, \qquad \omega_2 = 2i\int_{e_2}^{e_1} \frac{dx}{\sqrt{-P(x)}}.$$
Then $\omega_1, \omega_2$ are periods of $E$. Hence, Proposition 4.1.2 shows that
$$\omega_1 = \frac{\pi}{\mathrm{AGM}(\sqrt{e_1 - e_3}, \sqrt{e_1 - e_2})}, \qquad \omega_2 = \frac{i\pi}{\mathrm{AGM}(\sqrt{e_1 - e_3}, \sqrt{e_2 - e_3})}.$$
We evaluate precision loss in this algorithm; we suppose that the periods are of bounded size, in order to turn results on relative precision into results on absolute precision. The worst case happens when the roots are very close to each other at precision $P$, for instance $e_1 - e_3 = 2^{-P}$. In that case, the computation of the arguments of the AGM loses up to $P/2$ bits; furthermore, since $\log_2|\log_2|z|| = O(\log P)$, Theorem 3.2.6 shows that $O(\log P)$ guard bits are needed, and the number of iterations is still $O(\log P)$. This proves that, for $P$ large enough, it is enough to work at precision $4P$ to get a result accurate to $P$ bits. Hence, this gives indeed a $O(M(P)\log P)$ algorithm.
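A numerical check of the period formula (Python/mpmath): we evaluate $2\int_{e_3}^{e_2} dx/\sqrt{P(x)}$ after the substitution $x = e_3 + (e_2 - e_3)\sin^2 t$ (which makes the integrand smooth), and compare with $\pi/\mathrm{AGM}(\sqrt{e_1 - e_3}, \sqrt{e_1 - e_2})$; the sample roots are ours:

```python
from mpmath import mp, mpf, sqrt, sin, pi, quad, agm

mp.dps = 30
e1, e2, e3 = mpf(2), mpf('0.5'), mpf('-2.5')   # e1 + e2 + e3 = 0
# omega_1 = 2 * int_{e3}^{e2} dx/sqrt(P(x)), after x = e3 + (e2-e3)*sin(t)^2
omega1 = 2*quad(lambda t: 1/sqrt((e1 - e3) - (e2 - e3)*sin(t)**2), [0, pi/2])
print(omega1, pi/agm(sqrt(e1 - e3), sqrt(e1 - e2)))   # the two values agree
```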
2-isogenies
The changes of variables of the previous subsection can be interpreted in terms of isogenies. Define $E : y^2 = 4(x - e_1)(x - e_2)(x - e_3)$ with $e_3 < e_2 < e_1$, $\sum e_i = 0$. Put
$$a = \sqrt{e_1 - e_3}, \qquad b = \sqrt{e_1 - e_2}, \qquad a_1 = \frac{a + b}{2}, \qquad b_1 = \sqrt{ab},$$
and put
$$e_1' = \frac{a_1^2 + b_1^2}{3}, \qquad e_2' = \frac{a_1^2 - 2b_1^2}{3}, \qquad e_3' = \frac{b_1^2 - 2a_1^2}{3}.$$
Then $e_3' < e_2' < e_1'$, $e_1' - e_3' = a_1^2$, $e_1' - e_2' = b_1^2$ and $\sum e_i' = 0$. Equation (4.1.1), along with the changes of variables of Proposition 4.1.2, gives:
$$\int_{e_1}^{\infty} \frac{dx}{\sqrt{4(x - e_1)(x - e_2)(x - e_3)}} = \int_{e_1'}^{\infty} \frac{dx}{\sqrt{4(x - e_1')(x - e_2')(x - e_3')}}.$$
In fact, the combination of these changes of variables allows us to write this equation as a consequence of the change of variables given explicitly in Theorem 4.1.4 below. This change of variables actually defines an isogeny called the Landen isogeny or the Landen transform:
Theorem 4.1.4 (Landen transform, e.g. [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF]). Let $E : y^2 = 4(x - e_1)(x - e_2)(x - e_3)$ be an elliptic curve over $\mathbb{R}$, with $e_1 > e_2 > e_3$, and $E' : y^2 = 4(x - e_1')(x - e_2')(x - e_3')$ with $e_1', e_2', e_3'$ defined from $e_1, e_2, e_3$ as previously. Define the map
$$\phi : E' \to E, \qquad [0 : 1 : 0] \mapsto [0 : 1 : 0], \qquad [e_3' : 0 : 1] \mapsto [0 : 1 : 0],$$
$$[x : y : 1] \mapsto \left[x + \frac{(e_3' - e_1')(e_3' - e_2')}{x - e_3'} \;:\; y\left(1 - \frac{(e_3' - e_1')(e_3' - e_2')}{(x - e_3')^2}\right) \;:\; 1\right].$$
Then $\phi$ is a 2-isogeny, i.e. an isogeny of degree 2.
Note that this amounts to considering the 2-isogeny whose kernel is generated by the 2-torsion point $(e_3', 0)$, instead of the ones whose kernels are generated by $(e_1', 0)$ or $(e_2', 0)$. Note that $(e_1', 0), (e_2', 0), (e_3', 0)$ are respectively the images by ℘ of the points $\frac{\omega_1'}{2}, \frac{\omega_1' + \omega_2'}{2}, \frac{\omega_2'}{2}$; hence, $\frac{\omega_2'}{2} \equiv 0 \pmod{\Lambda}$.
The action of the Landen isogeny on the periods is thus
$$\omega_1' = \omega_1, \qquad \omega_2' = 2\omega_2. \tag{4.1.2}$$
Furthermore, the isogeny between the two tori can be written as φ(z) = z (mod Λ).
The successive changes of variables can be written as a "chain of 2-isogenies". Define $(a_n, b_n)$ to be the sequence one obtains when computing $\mathrm{AGM}(a, b)$, and
$$e_1^{(n)} = \frac{a_n^2 + b_n^2}{3}, \qquad e_2^{(n)} = \frac{a_n^2 - 2b_n^2}{3}, \qquad e_3^{(n)} = \frac{b_n^2 - 2a_n^2}{3}. \tag{4.1.3}$$
Isogenies can once again be defined, using formulas that are analogous to Theorem 4.1.4:
$$f_n : E_{n+1} : y^2 = 4(x - e_1^{(n+1)})(x - e_2^{(n+1)})(x - e_3^{(n+1)}) \;\longrightarrow\; E_n : y^2 = 4(x - e_1^{(n)})(x - e_2^{(n)})(x - e_3^{(n)}).$$
This constructs a chain of 2-isogenies:
$$\cdots \to E_{n+1} \xrightarrow{f_n} E_n \to \cdots \to E_1 \xrightarrow{f_0} E_0 = E.$$
Since $\lim a_n = \lim b_n = \mathrm{AGM}(a, b)$, we have
$$\lim e_1^{(n)} = \frac{2}{3}\mathrm{AGM}(a, b)^2, \qquad \lim e_2^{(n)} = \lim e_3^{(n)} = -\frac{1}{3}\mathrm{AGM}(a, b)^2.$$
This means the equation of the curve "at the limit" is
$$y^2 = P_\infty(x) = 4\left(x + \frac{1}{3}\mathrm{AGM}(a,b)^2\right)^2\left(x - \frac{2}{3}\mathrm{AGM}(a,b)^2\right).$$
We have
$$\int \frac{dt}{\sqrt{P_\infty(t)}} = \int \frac{du}{\mathrm{AGM}(a,b)^2 + u^2} \qquad \text{putting } u = \sqrt{x - \frac{2}{3}\mathrm{AGM}(a,b)^2}$$
$$= \frac{1}{\mathrm{AGM}(a,b)}\arctan\frac{\sqrt{x - \frac{2}{3}\mathrm{AGM}(a,b)^2}}{\mathrm{AGM}(a,b)},$$
and hence we have for instance
$$2\int_{e_1}^{\infty} \frac{dx}{\sqrt{P(x)}} = \omega_1 = 2\int_{\frac{2}{3}\mathrm{AGM}(a,b)^2}^{+\infty} \frac{dx}{\sqrt{P_\infty(x)}} = \frac{\pi}{\mathrm{AGM}(a, b)}.$$
Elliptic logarithm
Recall the definition of the elliptic logarithm map (Theorem 1.3.4):
$$P = [x : y : 1] \mapsto \int_x^{\infty} \frac{dt}{\sqrt{P(t)}}.$$
In the real case, this incomplete integral can be computed in a similar way as the periods, that is to say by repeatedly applying Landen's transform. This requires keeping track of the bound of the integral, i.e. being able to compute $x'$ such that
$$\int_x^{\infty} \frac{dt}{\sqrt{4(t - e_1^{(0)})(t - e_2^{(0)})(t - e_3^{(0)})}} = \int_{x'}^{\infty} \frac{dt}{\sqrt{4(t - e_1^{(1)})(t - e_2^{(1)})(t - e_3^{(1)})}}.$$
Algorithm 4 Compute the real elliptic logarithm.
1: $a \leftarrow \sqrt{e_1 - e_3}$, $b \leftarrow \sqrt{e_1 - e_2}$
2: $x \leftarrow$ the abscissa of the point
3: while $|a - b| > 2^{-P}$ do
4: $\quad A \leftarrow \frac{a + b}{2}$, $B \leftarrow \sqrt{ab}$
5: $\quad X \leftarrow \frac{1}{2}\left(x - \frac{a^2 + b^2}{6} + \sqrt{\left(x + \frac{a^2 + b^2}{6}\right)^2 - \left(\frac{a^2 - b^2}{2}\right)^2}\right)$
6: $\quad a \leftarrow A$, $b \leftarrow B$, $x \leftarrow X$
7: end while
8: return $\frac{2}{a}\left(\frac{\pi}{2} - \arctan\frac{\sqrt{x - \frac{2}{3}a^2}}{a}\right)$
The computation of $x'$ can be done by writing Equation (4.1.1) as a degree 2 polynomial in $x'$ (with coefficients depending on $x$) and solving for $x'$. The resulting algorithm, given in [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF], is Algorithm 4.

As with the computation of periods, the loss of precision can be significant, most of all because of the extraction of square roots of numbers potentially close to 0. We did not manage to evaluate the precision loss in the computation of $X$; however, [START_REF] Luther | Computation of Standard Interval Functions in Multiple-Precision Interval Arithmetic[END_REF] comments that if $u$ is close to $e_1$ it is possible that up to $P/2$ bits are inaccurate in the final result, which does not change the asymptotics. Hence, working with precision $O(P)$ should be enough to compensate for the losses in precision. Note that, in any case, a variation on this algorithm which reduces the loss of relative precision to $O(\log P)$ is studied in [START_REF] Luther | Reliable computation of elliptic functions[END_REF].
As for the running time, we analyze it as follows. Each step in the loop costs $O(M(P))$ bit operations; since the AGM is quadratically convergent, there are $O(\log P)$ such steps. The last step requires us to compute $\arctan(x)$, which can be done in $O(M(P)\log P)$ bit operations, as in [BZ10, Section 4.8.5]; we use $\arctan(x) = \mathrm{Im}(\log(1 + ix))$ and use the AGM to compute $\log(x)$ in $O(M(P)\log P)$ operations. In the end, this algorithm computes the elliptic logarithm in $O(M(P)\log P)$.
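A sketch of the Landen loop of Algorithm 4 (Python/mpmath). We compare against a direct quadrature of $\int_x^\infty dt/\sqrt{P(t)}$; note that Algorithm 4 as printed returns twice this value, matching the normalization $\omega_1 = 2\int$. The sample curve and the tolerance are ours:

```python
from mpmath import mp, mpf, sqrt, atan, pi, quad, inf

mp.dps = 40
e1, e2, e3 = mpf(2), mpf('0.5'), mpf('-2.5')   # e1 + e2 + e3 = 0
x = mpf(3)                                      # abscissa of a point, x >= e1
direct = quad(lambda t: 1/sqrt(4*(t - e1)*(t - e2)*(t - e3)), [x, inf])
a, b = sqrt(e1 - e3), sqrt(e1 - e2)
while abs(a - b) > mpf(10)**(5 - mp.dps):
    A, B = (a + b)/2, sqrt(a*b)
    # solve the Landen change of variables for the new abscissa
    x = (x - (a*a + b*b)/6 + sqrt((x + (a*a + b*b)/6)**2 - ((a*a - b*b)/2)**2))/2
    a, b = A, B
z = (pi/2 - atan(sqrt(x - 2*a*a/3)/a))/a
print(direct, z)                                # the two values agree
```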
The complex case (Cremona-Thongjunthug)
We now turn to the general case, i.e. an elliptic curve E defined over C by a Weierstrass equation of the form E(C) : y 2 = (x -e 1 )(x -e 2 )(x -e 3 ). The previous section dealt with the case where the e i are real, and computed the periods and the elliptic logarithm using a link with the real AGM.
Generalizing this approach to the complex case involves working with the complex AGM, which means one must consider the problem of choosing the correct signs; furthermore, unlike the real case, there is no neat way to pick an ordering of the roots. The general method and theorems in this section are taken from [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF]; we skip over some details, for instance the case of rectangular lattices, so as to streamline the presentation.
Lattice chains
The notion of lattice chains is introduced in order to study the behavior and the properties of the period lattices of the curves appearing in the chain of 2-isogenies given by the Landen transform.

Definition 4.2.1 ([CT13, Section 3]). Let $(\Lambda_n)_{n\in\mathbb{N}}$ be a sequence of lattices of $\mathbb{C}$; it is a chain of lattices of index 2 if the following conditions are satisfied:
1. $\Lambda_{n+1} \subset \Lambda_n$ for all $n \ge 0$;
2. $[\Lambda_n : \Lambda_{n+1}] = 2$ for all $n \ge 0$;
3. $\Lambda_{n+1} \ne 2\Lambda_{n-1}$ for any $n \ge 1$.
Thus for each $n \ge 1$ we have $\Lambda_{n+1} = \mathbb{Z}w + 2\Lambda_n$ for some $w \in \Lambda_n \setminus 2\Lambda_{n-1}$.
Given a lattice $\Lambda_0$, there are three possible choices for $\Lambda_1$; then, for any $k \ge 2$, there are only two possible choices, the third one being excluded by the last condition. A notion of right choice of sublattice can then be defined: Definition 4.2.2. For $n \ge 1$ we say that $\Lambda_{n+1} \subset \Lambda_n$ is the right choice of sublattice if $\Lambda_{n+1} = \mathbb{Z}w + 2\Lambda_n$ where $w \in \Lambda_n \setminus 2\Lambda_{n-1}$ and $|w|$ is minimal in this quotient.
Definition 4.2.3.
• A chain is good if and only if Λ n+1 ⊂ Λ n is the right choice for all but finitely many n ≥ 1.
• A chain is optimal if and only if Λ n+1 ⊂ Λ n is the right choice for all n ≥ 1. There is usually one optimal chain for each choice of Λ 1 ⊂ Λ 0 .
Proposition 4.2.4. Set $\Lambda_\infty = \bigcap_n \Lambda_n$. A chain is good if and only if $\Lambda_\infty$ is of rank 1, in which case we denote by $w_\infty$ a generator of $\Lambda_\infty$, the limiting period of the good chain. For all but finitely many $n$, $w_\infty$ is the smallest element of $\Lambda_n$. If the chain is not good, $\Lambda_\infty$ is of rank 0.
The analogy with the complex AGM is very noticeable: at each step of the sequence there are two possible choices, only one of which is defined as the right choice; and a chain can make all but finitely many right choices, in which case its limit yields a non-zero element, or infinitely many wrong choices, in which case we get 0.
The first step in establishing the link between lattice chains and AGM sequences is to link lattices to the initial values $(a_0, b_0)$ of the AGM sequence.

Definition 4.2.5. A short lattice chain of order 4 is a chain $\Lambda_2 \subset \Lambda_1 \subset \Lambda_0$ such that $\Lambda_0/\Lambda_2$ is cyclic of order 4.

Proposition 4.2.6. There is a bijection between:
• short lattice chains of order 4, $\Lambda_2 \subset \Lambda_1 \subset \Lambda_0$, up to homothety;
• unordered pairs of nonzero complex numbers $a, b \in \mathbb{C}$ such that $a^2 \ne b^2$, identifying $(a, b)$ and $(-a, -b)$.
2-isogenies
The connection with the real case is made explicit here; however, we do not write down all the formulas, as they are exactly the same as the ones in Section 4.1. We refer to [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF] for full details.
Let $E = E_0 : y^2 = P(x)$ with $P$ of degree 3, and take $\Lambda_0$ such that $E_0(\mathbb{C}) \simeq \mathbb{C}/\Lambda_0$. The change of variables given by the Landen transform (Equation (4.1.1)) gives a 2-isogeny $\phi_1 : E_1 \to E_0$, with $E_1 \simeq \mathbb{C}/\Lambda_1$. However, note that the definition of the isogeny depends on the $e_i'$, which are defined (Equation (4.1.3)) from the quantities $a_0 = \pm\sqrt{e_1 - e_3}$, $b_0 = \pm\sqrt{e_1 - e_2}$, themselves defined from a labelling of the roots of $P$. Looking more closely, if $e_2$ and $e_3$ are switched, $a$ and $b$ are switched, but this does not affect the $e_i'$ or the rest of the sequence; hence, there are only three possibilities for $\Lambda_1$, depending on which root is labeled $e_1$.
Iterating the change of variables gives a chain of 2-isogenies $\phi_n : E_n \to E_{n-1}$, and the roots $e_i^{(n)}$ of the polynomial $P_n$ such that $E_n : y^2 = P_n(x)$ are defined using Equation (4.1.3):
$$e_1^{(n)} = \frac{a_n^2 + b_n^2}{3}, \qquad e_2^{(n)} = \frac{a_n^2 - 2b_n^2}{3}, \qquad e_3^{(n)} = \frac{b_n^2 - 2a_n^2}{3}.$$
Note that $a_n^2 = e_1^{(n)} - e_3^{(n)}$ and $b_n^2 = e_1^{(n)} - e_2^{(n)}$. Each change of variables requires the computation of one term of an AGM sequence starting at $(a_0, b_0)$; hence there are two choices for $(a_n, b_n)$.
Note that Equation (4.1.3) can be rewritten as
$$e_1^{(n+1)} = \frac{e_1^{(n)} + 2a_n b_n}{4}, \qquad e_2^{(n+1)} = \frac{e_1^{(n)} - 2a_n b_n}{4}, \qquad e_3^{(n+1)} = -\frac{e_1^{(n)}}{2}.$$
Hence, taking $(a_n, -b_n)$ instead of $(a_n, b_n)$ switches $e_1^{(n+1)}$ and $e_2^{(n+1)}$; this does not change the equation of $E_{n+1}$, however it changes the equation of $E_{n+2}$. Hence, each choice of root for the $n$-th term of the AGM sequence corresponds to a curve $E_{n+2}$. Furthermore, the same reasoning can be applied to the choice of signs in $a_0, b_0$, identifying $(a_0, b_0)$ and $(-a_0, -b_0)$; this gives two possibilities for $\Lambda_2$.
In the end, this gives:
Theorem 4.2.8. There is a bijection between:
• the AGM sequences starting with $(a_0, b_0)$;
• the isogeny chains starting with the short chain $E_2 \to E_1 \to E_0$;
• the lattice chains starting with the short chain $\Lambda_2 \subset \Lambda_1 \subset \Lambda_0$.
Furthermore, we also have:
$$\Lambda_{n+2} \text{ is the right choice of sublattice for } \Lambda_{n+1} \Leftrightarrow (a_n, b_n) \text{ is good};$$
$$\text{the lattice chain is good} \Leftrightarrow \text{the AGM sequence is good};$$
$$\text{the lattice chain is optimal} \Leftrightarrow \text{the AGM sequence is optimal}.$$
Period computation
Note that the results above show that each good lattice chain starting at Λ 0 determines a period (from the definition of a good lattice chain) and a good AGM sequence. The connection between both is given by the proposition:
Figure 4.1: Three cosets of $2\Lambda$ in $\Lambda = \mathbb{Z}\omega_1 + \mathbb{Z}\omega_2$, for $\omega_1 = 1$, $\omega_2 = \frac{\sqrt{2}}{2}(1 + i)$.
Proposition 4.2.9. Let $(\Lambda_n)$ be a good lattice chain with limiting period $\omega$, and let the corresponding good AGM sequence be $(a_n, b_n)$ with limit $M \ne 0$. Then $M = \pm\frac{\pi}{\omega}$. This means that when $\omega$ runs over all the points of $\Lambda_0$, $\frac{\pi}{\omega}$ describes all the limits of good AGM sequences starting at $(a_0, b_0)$; this yields another proof of Theorem 3.2.14.
Optimal AGM sequences can actually be used to yield a Z-basis of the lattice.
If $\Lambda = \mathbb{Z}\omega_1 + \mathbb{Z}\omega_2$, define the cosets of $2\Lambda$ in $\Lambda$ as $C_j = \omega_j + 2\Lambda$ for $j = 1, 2, 3$, with $\omega_3 = \omega_1 + \omega_2$. Define a set of minimal coset representatives as a triple $(c_1, c_2, c_3)$ such that $|c_j|$ is minimal in $C_j$.
The following proposition confirms an intuition one might have when looking at Figure 4.1: Proposition 4.2.10. Let Λ be a non-rectangular lattice, and let w 1 , w 2 , w 3 be minimal coset representatives of 2Λ in Λ. Then any 2 of them are a Z-basis of the lattice, and w 3 = ±(w 1 ±w 2 ).
The link with optimal chains is as follows:

Theorem 4.2.11. A good chain is optimal if and only if $w_\infty$ is one of the minimal coset representatives for $\Lambda_0$.

This means that in general, any non-rectangular lattice has 3 optimal chains, one per coset $C_j$. Now, setting $\Lambda_1 = C_j$ and picking $\Lambda_2$ as the right choice for $\Lambda_1$ gives a good pair $(a_0, b_0)$; the optimal AGM sequence can then be computed starting with this good pair, and this gives $\mathrm{AGM}(a_0, b_0) = \frac{\pi}{c_j}$ where $c_j$ is a minimal coset representative. Repeating this with the other cosets gives a $\mathbb{Z}$-basis of $\Lambda$. Keeping in mind the link with elliptic curves, this gives the following generalization of Proposition 4.1.3:

Theorem 4.2.12. Let $E$ be an elliptic curve over $\mathbb{C}$ given by $Y^2 = (X - e_1)(X - e_2)(X - e_3)$. Let $\Lambda$ be the period lattice. Define $a_0 = \sqrt{e_1 - e_3}$, $b_0 = \sqrt{e_1 - e_2}$ and choose the signs such that $(a_0, b_0)$ is good. Define $\omega_1 = \frac{\pi}{\mathrm{AGM}(a_0, b_0)}$; then $\omega_1$ is a period of $E$. Labelling the other two roots as $e_1$ and repeating this process gives two other such periods. Then any two of the three periods form a basis of $\Lambda$.
The process can actually be simplified, as follows. Define $a^2 = e_1 - e_3$, $b^2 = e_1 - e_2$, $c^2 = e_2 - e_3$, so that $a^2 = b^2 + c^2$, and pick the signs of $a, b, c$ so that
$$|a - b| \le |a + b|, \qquad |c - ib| \le |c + ib|, \qquad |a - c| \le |a + c|.$$
Note that applying this process to a rectangular lattice gives a procedure very similar (after multiplying the roots by a suitable complex number so they are all real) to [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF].
Elliptic logarithm
The algorithm for the elliptic logarithm in the real case (Algorithm 4) requires the extraction of a square root at each step for the computation of the value of $x'$; hence, in the complex case, the question of which square root to choose needs to be settled. An equivalent way of saying this on elliptic curves is that, since $\phi_n$ is two-to-one as a 2-isogeny, there are uncountably many point sequences $(P_n)$ starting from a point $P_0$.
The problem is recast in [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF] in terms of coherent point sequences.
Definition 4.2.13. Let $(\Lambda_n)$ be an optimal lattice chain. A sequence $(P_n)$ of points is called coherent if there is $z \in \mathbb{C}$ such that $P_n = (\wp(z, \Lambda_n), \wp'(z, \Lambda_n))$ for all $n$. If such a $z$ exists, it is uniquely determined modulo $\bigcap \Lambda_n = \mathbb{Z}\omega_1$.
The problem of finding a coherent point sequence is solved by writing the change of variables given by the Landen transform as the composition of two maps. Define the curves
$$E_n' : R^2 = \frac{T^2 + a_n^2}{T^2 + b_n^2}.$$
These are projective curves in $\mathbb{P}^1 \times \mathbb{P}^1$, with points at infinity $(\infty, \pm 1)$ and $(\pm b_n i, \infty)$. Then, define the map
$$\alpha_n : E_{n+1}' \to E_n, \qquad \alpha_n(r, t) = \left(t^2 + e_1^{(n)},\ -2rt(t^2 + b_n^2)\right),$$
where $E_n, a_n, b_n, e_i^{(n)}$ are defined as in Section 4.2.2. The curves $E_n$ and $E_n'$ are also isomorphic, via the isomorphism $\theta_n : E_n' \to E_n$:
$$\theta_n(t_n, r_n) = (x_n, y_n) = \left(\frac{1}{2}\left(t_n^2 + r_n(t_n^2 + a_n^2) + \frac{a_n^2 + b_n^2}{6}\right),\ t_n\left(t_n^2 + r_n(t_n^2 + a_n^2) + \frac{a_n^2 + b_n^2}{2}\right)\right),$$
$$\theta_n^{-1}(x_n, y_n) = (t_n, r_n) = \left(\frac{3y_n}{6x_n + a_n^2 + b_n^2},\ \frac{12x_n + 5a_n^2 - b_n^2}{12x_n + 5b_n^2 - a_n^2}\right).$$
This is summarized by a commutative diagram:
$$\begin{array}{ccccccc}
\cdots \to & E_n' & \xrightarrow{\phi_n'} & E_{n-1}' & \to \cdots \to & E_1' & \\
 & \downarrow \theta_n & & \downarrow \theta_{n-1} & & \downarrow \theta_1 & \searrow \alpha_0 \\
\cdots \to & E_n & \xrightarrow{\phi_n} & E_{n-1} & \to \cdots \to & E_1 & \xrightarrow{\phi_1} E_0
\end{array}$$
Following a coherent point sequence $(r_n, t_n)$ on the sequence of curves $E_n'$ is easier than on the $E_n$: this gives the relations
$$r_n^2 = \frac{a_{n-1}(r_{n-1} + 1)}{b_{n-2}\,r_{n-1} + a_{n-2}}, \qquad t_n = r_n\, t_{n-1},$$
with $\mathrm{Re}(r_n) \ge 0$ [CT13, Prop. 26], which removes any sign ambiguity. The value of the elliptic logarithm is then given by:

Theorem 4.2.14. Let $(P_n)$ be a coherent point sequence generated by $z \in \mathbb{C}$, and let $(x_\infty, y_\infty)$ denote the limit of $P_n$. Assume that $2z \notin \Lambda_\infty$. Then for $n$ large enough, $P_n \ne O_{E_n}$. Set $M = \frac{\pi}{\omega_1}$; we have
$$t_\infty = -\frac{1}{2}\,\frac{y_\infty}{x_\infty + M^2/3}$$
and $t_\infty \ne 0, \infty$. Furthermore,
$$z = \frac{1}{M}\arctan\frac{M}{t_\infty} \pmod{\omega_1}.$$
Note the similarities with the last step of the algorithm in the real case. We also have $\lim_{n\to\infty} r_n = 1$.
The final algorithm, as described by [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF], is Algorithm 5.
Algorithm 5 Compute the complex elliptic logarithm.
Input: $P_0 = (x_0, y_0) \in E$ with $y_0 \ne 0$, with absolute precision $P$.
Output: the elliptic logarithm $z$ of $P_0$ with absolute precision $P$.
1: $a \leftarrow \sqrt{e_1 - e_3}$, $b \leftarrow \sqrt{e_1 - e_2}$

The loss of precision incurred in Algorithm 5 is not analyzed in [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF]; we give a few arguments proving that the number of guard bits is negligible asymptotically in $P$. Theorem 3.2.6 proves that the number of bits of relative precision lost at each step of the AGM is a constant in $P$. As for the computation of the $r_n$, which converge to 1, the inversion and the square root extraction also lose a number of bits which is a constant in $P$ at each step; the same goes for the computation of the $t_n$. Theorem 3.2.6 thus proves that only $O(\log P)$ bits are needed in the loop. Finally, the arctan computation is done via the complex logarithm (for instance $\arctan(z) = \frac{i}{2}(\log(1 - iz) - \log(1 + iz))$), which only requires $O(\log P)$ guard bits; overall, Algorithm 5 requires $O(\log P)$ guard bits.
An algorithm for the Weierstrass ℘ function
In this section, we outline a new algorithm that computes ℘(z, τ ) with absolute precision P in O(M(P ) log P ) operations. We describe in Chapter 8 (Section 8.1.2) another way, based on Chapter 6, to compute ℘(z, τ ) with a similar complexity; we compare the two algorithms in Section 8.1.3. This algorithm relies on using a backwards recurrence, which we get from the explicit change of variables given by the Landen transform. The analysis of this algorithm and the analysis of its precision loss relies on conjectures, verified experimentally. The algorithm is similar in its principle to Miller's algorithm for the Bessel function of the first kind [BZ10, p.153]; existing results and analyses on Miller's algorithm could open a way to prove some of the statements we make in this section. However, we did not explore this direction.
As explained in Note 1.3.16, we can assume that the following properties on z, τ are satisfied:
$$\tau \in \mathcal{F}, \qquad 0 \le \mathrm{Im}(z) < \mathrm{Im}\left(\frac{\omega_2}{2\omega_1}\right), \qquad |\mathrm{Re}(z)| \le \frac{\mathrm{Re}(\omega_1)}{2}.$$

4.3.1 Fast computation of the sequence $\theta_i(0, 2^n\tau)$
Our algorithm relies on an induction formula involving the $\theta_{0,1,2}^2(0, 2^k\tau)$. We first outline an algorithm to compute these quantities in quasi-optimal time.

In order to compute this sequence, we compute its first term, then use the $\tau$-duplication formulas (Equations (2.5.4)) to compute the other terms. Since we suppose that $\tau \in \mathcal{F}$, the choice of signs is always good (cf. Proposition 3.2.9), and we thus know how to extract the square root. We compute the first term using a fast, quasi-optimal algorithm to compute theta-constants, presented in e.g. [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF]; we outline this algorithm in Section 6.1 (Algorithm 11). The algorithm computes $\theta_i(0, \tau)$ with precision $P$ in $O(M(P)\log P)$ operations.
Hence, we can compute the $\theta_{0,1,2}^2(0, 2^k\tau)$ for $k \le n$ in $O(M(P)(\log P + n))$ operations. However, note that the sequence we are trying to compute converges quadratically to $(1, 1, 0)$; hence, for $n > O(\log P)$, the representation with precision $P$ of the terms of the sequence is stationary. This means the maximal cost of this algorithm is $O(M(P)\log P)$.
Remark 4.3.1. Obviously, this precomputation can (and should) be cached, as it can be reused when one wants to compute the value of ℘ at several different z (keeping the same Λ). This is the case in our main application (cf. Chapter 9).
A backward recurrence for ℘
In this section we rewrite the change of variables given by the Landen transform as a recurrence relation between values of ℘.
Let $E : y^2 = P(x)$ be a complex elliptic curve with periods $[\omega_1, \omega_2]$. The Landen change of variables describes a 2-isogeny $E_1 \to E$, with $E_1 : y^2 = P_1(x)$ of periods $[\omega_1, 2\omega_2]$. This means that for any $u$, there is a $u'$ such that
$$\int_u^{\infty} \frac{dx}{\sqrt{P(x)}} = \int_{u'}^{\infty} \frac{dx}{\sqrt{P_1(x)}}.$$
The relationship between $u$ and $u'$ is given by the explicit change of variables (Equation (4.1.1)):
$$u' = u + \frac{(e_2 - e_1)(e_2 - e_3)}{u - e_2}.$$
Recall that (for a given polynomial $P$ of degree 3) the function $x \mapsto \int_x^{+\infty} \frac{dt}{\sqrt{P(t)}}$ is the elliptic logarithm function, giving a $z \in \mathbb{C}/\Lambda$. Since $\wp(z, \Lambda) = x$, the function $\wp$ is the inverse of this function; hence, we have
$$u = \wp(z, [\omega_1, \omega_2]), \qquad u' = \wp(z, [\omega_1, 2\omega_2]).$$
We then rewrite Equation (4.1.1) as a function of $\wp$ and of theta-constants at $\tau = \frac{\omega_2}{\omega_1}$, using Thomae's formulas (Theorem 1.3.19):
$$\wp(z, [\omega_1, \omega_2]) = \wp(z, [\omega_1, 2\omega_2]) + \frac{\left(\frac{\pi}{\omega_1}\right)^4 \theta_0(2\tau)^4\theta_2(2\tau)^4}{\wp(z, [\omega_1, 2\omega_2]) + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(2\tau)^4 + \theta_2(2\tau)^4}{3}}. \tag{4.3.1}$$

Theorem 4.3.2. We have
$$\lim_{n\to\infty} \wp(z, [\omega_1, 2^n\omega_2]) = \left(\frac{\pi}{\omega_1}\right)^2\left(\frac{1}{\sin^2(z\pi/\omega_1)} - \frac{1}{3}\right).$$
Proof. We have
$$\wp(z, [\omega_1, 2^n\omega_2]) = \frac{1}{z^2} + \sum_{w \in \mathbb{Z}\omega_1 + 2^n\mathbb{Z}\omega_2,\ w \ne 0} \left(\frac{1}{(z - w)^2} - \frac{1}{w^2}\right).$$
Write $w = m_1\omega_1 + m_2 2^n\omega_2$ and let $n$ go to infinity: the terms with $m_2 \ne 0$ go to 0, and all that remains is
$$\lim_{n\to\infty} \wp(z, [\omega_1, 2^n\omega_2]) = \sum_{m\in\mathbb{Z}} \frac{1}{(z - m\omega_1)^2} - \frac{2}{\omega_1^2}\cdot\frac{\pi^2}{6}.$$
But we have
$$\frac{\pi^2}{\sin^2(\pi z)} = \sum_{m\in\mathbb{Z}} \frac{1}{(z - m)^2},$$
which proves the result.
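The partial-fraction identity used at the end of the proof can be checked numerically (mpmath's nsum handles the bi-infinite, algebraically decaying sum):

```python
from mpmath import mp, mpf, pi, sin, nsum, inf

mp.dps = 30
z = mpf('0.3')
print((pi/sin(pi*z))**2, nsum(lambda m: 1/(z - m)**2, [-inf, inf]))
```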
Recall that $\sin$ can be computed with precision $P$ in $O(M(P)\log P)$, using the AGM-based algorithm for the complex logarithm (Section 3.3.2) and Newton's method (Section 0.3.3) to compute $e^{it}$. However, computing $\frac{1}{\sin^2}$ with absolute precision $P$ when $\sin$ is small (e.g. for $z$ close to $\omega_1$) causes a large number of bits to be lost.
A quasi-optimal time algorithm
We outline an algorithm for $\wp$ with conjectured quasi-optimal running time. The algorithm directly uses the backwards induction of Equation (4.3.1), combined with Theorem 4.3.2; we compute $\ell = \lim_{n\to\infty} \wp(z, [\omega_1, 2^n\omega_2])$, then determine $N$ such that $|\ell - \wp(z, [\omega_1, 2^N\omega_2])| \le 2^{-P}$, and use the backwards induction to compute $\wp(z, [\omega_1, \omega_2])$. This strategy is similar to the one used for the real theta function in e.g. [START_REF] Luther | Reliable computation of elliptic functions[END_REF]; it also resembles other algorithms for special functions, such as Miller's algorithm for the evaluation of the Bessel function [BZ10, Section 4.7.1].
Write
$$\wp(z, [\omega_1, 2^k\omega_2]) - \wp(z, [\omega_1, 2^{k+1}\omega_2]) = \frac{\left(\frac{\pi}{\omega_1}\right)^4\theta_0(2^{k+1}\tau)^4\theta_2(2^{k+1}\tau)^4}{\wp(z, [\omega_1, 2^{k+1}\omega_2]) + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(2^{k+1}\tau)^4 + \theta_2(2^{k+1}\tau)^4}{3}}, \tag{4.3.2}$$
which gives, by telescoping,
$$\wp(z, [\omega_1, \omega_2]) - \wp(z, [\omega_1, 2^n\omega_2]) = \sum_{k=0}^{n-1} \frac{\left(\frac{\pi}{\omega_1}\right)^4\theta_0(2^{k+1}\tau)^4\theta_2(2^{k+1}\tau)^4}{\wp(z, [\omega_1, 2^{k+1}\omega_2]) + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(2^{k+1}\tau)^4 + \theta_2(2^{k+1}\tau)^4}{3}}.$$
We have
$$\lim_{n\to\infty} \wp(z, [\omega_1, 2^n\omega_2]) + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(2^n\tau)^4 + \theta_2(2^n\tau)^4}{3} = \left(\frac{\pi}{\omega_1}\right)^2\frac{1}{\sin^2(z\pi/\omega_1)} \ne 0,$$
hence
$$\frac{\left(\frac{\pi}{\omega_1}\right)^4\theta_0(2^{k+1}\tau)^4\theta_2(2^{k+1}\tau)^4}{\wp(z, [\omega_1, 2^{k+1}\omega_2]) + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(2^{k+1}\tau)^4 + \theta_2(2^{k+1}\tau)^4}{3}} \sim \left(\frac{\pi}{\omega_1}\right)^2\sin^2(z\pi/\omega_1)\,\theta_2(2^{k+1}\tau)^4,$$
and the series converges because $\theta_2(2^{k+1}\tau)^4$ converges quadratically to 0. We can thus take the limit:
$$\wp(z, [\omega_1, \omega_2]) - \left(\frac{\pi}{\omega_1}\right)^2\left(\frac{1}{\sin^2(z\pi/\omega_1)} - \frac{1}{3}\right) = \sum_{k=0}^{\infty} \frac{\left(\frac{\pi}{\omega_1}\right)^4\theta_0(2^{k+1}\tau)^4\theta_2(2^{k+1}\tau)^4}{\wp(z, [\omega_1, 2^{k+1}\omega_2]) + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(2^{k+1}\tau)^4 + \theta_2(2^{k+1}\tau)^4}{3}}.$$
In order to turn this into an explicit algorithm, we wish to transform the infinite sum into a finite one. We use the following heuristic: the numerator contains the term $\theta_2^4(0, 2^{k+1}\tau)$, which converges quadratically to 0, and thus could make the remainder very small. This also depends on the size of the denominator: when the denominator is close to 0, this could create large precision losses. In practice, we take $N$ the first integer such that $\theta_2(0, 2^N\tau) \le 2^{-P}$, then assume that the sum for $k$ greater than $N$ is smaller than $2^{-P}$. This heuristic amounts to writing:
$$\wp(z, [\omega_1, \omega_2]) - \left(\frac{\pi}{\omega_1}\right)^2\left(\frac{1}{\sin^2(z\pi/\omega_1)} - \frac{1}{3}\right) = \sum_{k=0}^{N-1} \frac{\left(\frac{\pi}{\omega_1}\right)^4\theta_0(2^{k+1}\tau)^4\theta_2(2^{k+1}\tau)^4}{\wp(z, [\omega_1, 2^{k+1}\omega_2]) + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(2^{k+1}\tau)^4 + \theta_2(2^{k+1}\tau)^4}{3}} + \varepsilon, \qquad |\varepsilon| \le 2^{-P};$$
we can then evaluate this sum, starting with the approximation of $\wp(z, [\omega_1, 2^N\omega_2])$ and using the backwards recurrence, as well as the values of the $\theta_{0,1,2}^2(0, 2^k\tau)$. This is Algorithm 6.
Algorithm 6 Compute $\wp$ using the Landen transform.
Input: $z, \tau$ with absolute precision $P$, satisfying conditions (2.5.11).
Output: $\wp(z, [\omega_1, \omega_2])$ with absolute precision $P$.
1: $N \leftarrow 0$
2: Compute $\theta_{0,1,2}(0, \tau)$ using Algorithm 11.
3: while $|\theta_2(0, 2^N\tau)| \ge 2^{-P}$ do
4: $\quad$ Use the $\tau$-duplication formulas (Equation (2.5.4)) to compute $\theta_{0,1,2}(0, 2^{N+1}\tau)$.
5: $\quad N \leftarrow N + 1$
6: end while
7: $res \leftarrow \left(\frac{\pi}{\omega_1}\right)^2\left(\frac{1}{\sin^2(z\pi/\omega_1)} - \frac{1}{3}\right)$
8: for $i = N$ downto 1 do
9: $\quad res \leftarrow res + \dfrac{\left(\frac{\pi}{\omega_1}\right)^4\theta_0(0, 2^i\tau)^4\theta_2(0, 2^i\tau)^4}{res + \left(\frac{\pi}{\omega_1}\right)^2\frac{\theta_0(0, 2^i\tau)^4 + \theta_2(0, 2^i\tau)^4}{3}}$
10: end for
11: return $res$
The definition of $\theta_2$ (Equation (2.5.3)) proves that, when $N$ goes to infinity, $\theta_2(0, 2^N\tau) \sim 2e^{i\pi 2^N\tau/4}$; hence we have $N = O(\log P)$. As for precision losses, they could stem from two potential problems: the computation of the limit may lose a large amount of absolute precision (for instance if $z$ is close to $\omega_1$), or the heuristic may be incorrect. Note that the case of small denominators, e.g. $z$ close to $\omega_2$, could easily be avoided by checking this condition at the beginning of the algorithm. Provided the heuristic is correct and the precision losses are manageable, we get a quasi-optimal time algorithm to compute $\wp$ with absolute precision $P$.
We do not provide an analysis of the precision loss incurred, or timings, at this point. Instead, we differ such considerations to Chapter 8, in which we discuss another quasi-linear time algorithm to compute ℘; we will compare Algorithm 6 to this other algorithm in terms of timings and precision loss in Section 8.1.3, and show that the precision loss should not affect the asymptotic running time of either algorithm.
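A sketch of Algorithm 6 in Python/mpmath, with jtheta standing in for Algorithm 11 and a crude truncated lattice sum (accurate to a few digits only) as an independent reference; all names and the sample values are ours:

```python
from mpmath import mp, mpc, mpf, pi, sin, exp, jtheta

mp.dps = 30

def wp_landen(z, tau, P=80):
    # theta-constants at 2^i*tau, via jtheta (a stand-in for Algorithm 11)
    th, N = [], 0
    while True:
        q = exp(1j*pi*2**N*tau)
        th.append((jtheta(3, 0, q), jtheta(2, 0, q)))   # our theta_0, theta_2
        if abs(th[-1][1]) < mpf(2)**(-P):
            break
        N += 1
    res = pi**2*(1/sin(pi*z)**2 - mpf(1)/3)             # the limit (omega_1 = 1)
    for t0, t2 in reversed(th[1:]):                      # i = N downto 1
        res += pi**4*t0**4*t2**4/(res + pi**2*(t0**4 + t2**4)/3)
    return res

z, tau = mpc('0.3', '0.2'), mpc(0, '1.2')
# crude reference: truncated lattice sum over a symmetric box (few digits only)
M, ref = 40, 1/z**2
for m1 in range(-M, M + 1):
    for m2 in range(-M, M + 1):
        if m1 or m2:
            w = m1 + m2*tau
            ref += 1/(z - w)**2 - 1/w**2
print(wp_landen(z, tau), ref)    # agree to a few digits
```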
Using the Landen transform to compute θ
The change of variables given by the Landen isogeny shows how one may go from a curve with fundamental parallelogram isomorphic to [1, 2τ ] to a curve with fundamental parallelogram isomorphic to [1, τ ]; as such, this change of variables has sometimes been referred to as the descending Landen transform. A few algorithms are based on this transformation, although they sometimes only use the vocabulary related to the AGM. Most notably, a couple of algorithms have been proposed to compute the real theta function.
We find in [AS64, Section 16.32] an algorithm (similar to [AS64, Section 16.4] for the elliptic functions) to compute θ(u|m) from the arithmetico-geometric mean. The algorithm relies on a backwards induction, much like Algorithm 6: one has to compute the terms of an AGM sequence until the first index N such that a N -b N 2 is negligible at the required precision; then compute a quantity φ N from this, and use recurrence relations to compute φ N -1 , . . . , φ 0 . Since the AGM sequence is quadratically convergent, N = log P ; however, each step also requires the computation of log • cos(φ), which takes O(M(P ) log P ) if one uses the fast algorithms for log and exp of Chapter 3. Hence, the complexity of this algorithm seems to be O(M(P ) log 2 P ) bit operations in this case, i.e. the real theta function. Note that we were unable to locate a reference which proves this algorithm to be correct and, more importantly, which gives an order of magnitude for the precision loss.
More recently, a better algorithm for the real theta function was proposed and analyzed in [START_REF] Luther | Reliable computation of elliptic functions[END_REF]. The algorithm uses a similar pattern of computing a quadratically convergent sequence until the terms are small enough, then computing a quantity φ N and use a backwards induction to compute the value of θ. In this case, the computations derive directly from the descending Landen transform, i.e. the final result is obtained directly, without having to compute other quantities like log • cos. This gives a O(M(P ) log P ) algorithm; furthermore, the article provides a careful analysis of the precision loss, which shows that the result is not too inaccurate (about O(log P ) guard bits seem to be needed). We were unable to generalize this algorithm (and the corresponding analysis) to the complex case, using instead another method based on Newton's method (reasoning it would be more stable numerically, as the Newton iteration is self-correcting); we refer to Chapter 6 for more details.
Finally, we mention an attempt of ours to get a quasi-optimal time algorithm for θ. We start from the Landen transform formulated in terms of theta functions:
θ 0 (z, τ )θ 1 (z, τ ) θ 1 (2z, 2τ ) = θ 0 (0, τ )θ 1 (0, τ ) θ 1 (0, 2τ )
We rewrite this formula as
θ 1 (2z, 2τ ) = θ 1 (0, 2τ ) θ 0 (0, τ )θ 1 (0, τ ) θ 0 (z, τ ) θ 1 (z, τ ) θ 2 1 (z, τ ).
Putting
a n = log θ1(2 n z,2 n τ ) 2 n
, we get the recurrence relation:
a k+1 = a k + 1 2 k+1 log θ 0 (2 k z, 2 k τ ) θ 1 (2 k z, 2 k τ ) + log θ 1 (0, 2 k+1 τ ) -log θ 0 (0, 2 k τ ) -log θ 1 (0, 2 k τ ) .
The advantage of this relation is that we only need the theta-constants and the quotient θ 0 /θ 1 , which can be computed from other quotients, and for instance from the value of ℘. However, the resulting algorithm requires evaluating ℘(2 k z, 2 k τ ) for 1 ≤ k ≤ N ; the best algorithm to do so uses Equation (4.3.2) and the z-duplication formula for ℘:
℘(2z) = (℘(z) 2 + g2 4 ) 2 + 2g 3 ℘(z) 4℘(z) 3 -g 2 ℘(z) -g 3
We then use a recursive algorithm to compute the leaves of the tree which vertices are the values ℘(2 i z, 2 j τ ) and the edges are either the use of Equation (4.3.2) (going from 2τ to τ ) or the zduplication; this is an optimal strategy, as analysed in [START_REF] De Feo | Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies[END_REF], and requires O(log P log log P ) multiplications. We thus potentially get a O(M(P ) log P log log P ) algorithm to compute θ this way, which is not the best potential running time; we did not analyze this algorithm further.
Chapter 5
Naive algorithms for theta functions in any genus
This chapter is dedicated to presenting some algorithms to compute θ(z, τ ) by partial summation. We show in this chapter that θ(z, τ ) can be computed with absolute precision P in O(M(P )P g/2 ) bit operations; in genus 1 and 2, and for τ in the fundamental domain, the complexity is even
O M(P ) P Im(τ1,1) g/2
bit operations. We conjecture that this result holds in genus g.
In order to determine how many terms are needed, an analysis of the size of the tail of the sum is required. We produce such an analysis in genus 1 and 2, in the case where z, τ are reduced; the results we obtain show that the number of terms needed to get a result accurate to 2 -P decreases as Im(τ 1,1 ) increases. This is of importance in the context of fast algorithms with uniform running time, such as in Chapter 6 and Chapter 7. In genus g, we make the result of [DHB + 04] more explicit, which generalizes the results in genus 1 and 2 nicely; however, the running time of this method does not seem to be as good as the results we got in genus 1 and 2.
Another aspect is to determine an efficient way to compute the terms of the series. The simplest method is to compute each term independently, using an exponentiation for each term; this does not yield the best asymptotic running time, but the resulting algorithm is rather simple to implement. We discuss another way, based on recurrence relations of degree 2 linking the terms together; this method requires more storage (presumably around O(2 2g )), but the amortized cost for each term is only a few multiplications. This gives the best running time; we give explicit algorithms in genus 1 and 2, which we implemented5 , and just sketch the corresponding relations and the algorithm in genus g.
Genus 1
Argument reduction for z and τ has been studied in Section 2.5.3 (which instiatiated Section 2.3 and Section 2.4). As a result, the computational problem we study here is the computation of θ(z, τ ) to absolute precision P with z, τ such that
|τ | ≥ 1, |Re(τ )| ≤ 1 2 , Im(τ ) > 0, |Re(z)| ≤ 1 2 , 0 ≤ Im(z) ≤ Im(τ ) 2 .
Partial summation of the series defining θ
The analysis we present here is inspired by [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF]. Define the following partial summation of the series defining θ(z, τ ):
S B (z, τ ) = 1 + 0<n<B q n 2 (e 2iπnz + e -2iπnz )
where we use the notation q = e iπτ . We have Proposition 5.1.1. Suppose that Im(τ ) ≥ 0.35 and that Im(z) ≤ Im(τ )
2
(which is looser than the conditions we specified at the beginning of the section). Then |θ(z, τ
) -S B (z, τ )| ≤ 3|q| B-1 .
We use the following lemma, which bounds the remainder of a series by a geometric series whose sum is easy to compute:
Lemma 5.1.2. Let q ∈ C such that |q| < 1. Let f : N → N be an increasing function such that (f (k + 1) -f (k)) k∈N is increasing. Then: n≥n0 q f (n) ≤ n≥n0 q f (n0)+(n-n0)(f (n0+1)-f (n0)) = q f (n0) 1 -q f (n0+1)-f (n0) .
We use this lemma a few times throughout this manuscript, among which in the proof of Proposition 5.1.1:
Proof of Proposition 5.1.1. We look at the remainder of the series:
|θ(z, τ ) -S B (z, τ )| ≤ n≥B |q| n 2 (|e 2iπnz | + |e -2iπnz |) ≤ n≥B |q| n 2 (1 + |q| -n ) ≤ 2 n≥B |q| n 2 -n ≤ 2 n≥B |q| (n-1) 2 ≤ 2 n≥0 |q| (B-1+n) 2 ≤ 2|q| (B-1) 2 n≥0 |q| 2n(B-1)+n 2 ≤ 2 |q| (B-1) 2 1 -|q| 2B-1
(5.1.1) the last line being a consequence of Lemma 5.1.2. A numerical calculation then proves that for Im(τ ) ≥ 0.35, we have 2 1-|q| ≤ 3, which proves the proposition. Note that we can prove the same inequality for θ 1 , since the series that define it has the same terms, up to sign, as the series for θ.
Unlike the analysis of [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF] for naive theta-constant evaluation, we cannot get a bound for the relative precision: since θ( 1+τ 2 , τ ) = 0, there is no lower bound for |θ(z, τ )|. If we set
B(P, τ ) = P + 2 π Im(τ ) log 2 (e)
+ 1.
we have 4|q| (B-1) 2 ≤ 2 -P , which means the approximation is accurate with absolute precision P . We just showed that:
Theorem 5.1.3. To compute θ(z, τ ) with absolute precision P bits, it is enough to sum over all k ∈ Z such that
|k| ≤ P + 2 π Im(τ ) log 2 (e) + 1
Note that this bound is of the same order than the one of [Dup11, p. 5], since it is greater than it by only 2 (i.e. only 4 more terms are needed).
Naive algorithm
We now present a naive algorithm to compute not only the value of θ(z, τ ), but also the value of θ 1 (z, τ ), θ 0 (0, τ ), θ 1 (0, τ ) for only a marginal amount of extra computation; this is the algorithm we will use for comparison to the fast algorithm we propose in Chapter 6. The algorithm computes with internal precision P, which we determine later so that the result is accurate to the desired precision P .
Define the sequence (v n ) n∈N as
v n = q n 2 (e 2iπnz + e -2iπnz )
so that θ(z, τ ) = 1 + n≥1 v n . This sequence satisfies a recurrence relation for n > 1:
v n+1 = q 2n v 1 v n -q 4n v n-1 .
We use this recursion formula to compute v n efficiently, which is similar to the trick used by [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF]Prop. 3]. This removes the need for divisions and the need to compute and store e -2iπnz , which can get quite big; indeed, computing it only to multiply it by the very small q n 2 is wasteful. The resulting algorithm is Algorithm 7.
Algorithm 7 Naive algorithm to compute θ 0 (z, τ ), θ 0 (0, τ ), θ 1 (z, τ ), θ 1 (0, τ ). Input: z ∈ C, τ ∈ H with absolute precision P , satisfying conditions (2.5.11).
Output: θ 0 (z, τ ), θ 0 (0, τ ), θ 1 (z, τ ), θ 1 (0, τ ) with absolute precision P .
1: Work with precision P.
2: a ← (1, 1), b ← (1, 1) These arrays will hold respectively θ i (z, τ ) and θ i (0, τ )
3: B ← P +2 π Im(τ ) log 2 (e) + 1 4: q ← UniformExp(iπτ ) 5: v 1 ← UniformExp(2iπ(z + τ /2)) + UniformExp(-2iπ(z -τ /2)) 6: q 1 ← q, q 2 ← q, v ← v 1 , v ← 2 7: for n = 1..B do 8: /* q 1 = q n , q 2 = q n 2 , v = v n , v = v n-1 */ 9: a 0 ← a 0 + v, a 1 ← a 1 + (-1) n × v 10: b 0 ← b 0 + 2q 2 , b 1 ← b 1 + (-1) n × 2q 2 11: q 2 ← q 2 × (q 1 ) 2 × q 12: q 1 ← q 1 × q 13: temp ← v, v ← q 2 1 × v 1 × v -q 4 1 × v v ← temp 14: end for 15: return a 0 , b 0 , a 1 , b 1
We use a subroutine in this algorithm, which we call UniformExp(z), and which computes e z with absolute precision P using the following algorithm: if Re(z) ≤ -P log 2 e , return 0, if not compute exp(z) using Section 3.3.3. It is then easily seen that: Proposition 5.1.4. UniformExp computes e z with absolute precision P in cM(P ) log P bit operations for any z such that Re(z) ≤ 0, where c is a constant independent of z.
Hence, the complexity of UniformExp is uniform over all z with Re(z) ≤ 0.
Error analysis and complexity
This section is dedicated to proving: Theorem 5.1.5. For z, τ with absolute precision P and satisfying conditions (2.5.11), Algorithm 7 with P = P + log B + 7 computes θ 0 (z, τ ), θ 1 (z, τ ), θ 0 (0, τ ), θ 1 (0, τ ) with absolute precision P bits. This gives an algorithm which has bit complexity O M(P ) P Im(τ ) .
Remark 5.1.6. Note that the running time of this algorithm gets better as Im(τ ) increases (the remainder is smaller than 2 -P much quicker in that case). At the limit, if P = c Im(τ ) where c is a fixed constant, one only needs a constant number of terms to get the final result. The running time of this algorithm is then dominated by the computation of q = e iπτ and v 1 at the beginning of the algorithm, which costs cM(P ) log P operations with c independent of z, τ by Proposition 5.1.4. Note that, in that case, the running time of the algorithm is independent of z and τ ; this remark will be of use in Section 6.1.2.
Analyzing this algorithm requires bounding the error that is incurred during the computation. We then compensate the number of inaccurate bits by increasing the precision. We use Theorem 0.3.3 to estimate the number of bits lost.
Proof of Theorem 5.1.5. We first determine the size of the quantities we are manipulating; this is needed to evaluate the error incurred during the computation, as well as the number of bits needed to store fixed-precision approximations of absolute precision P of the intermediate quantities.
|v n | ≤ |q| n 2 +n + |q| n 2 -n ≤ (1 + |q| 2n )|q| n 2 -n ≤ 1.0049|q| n 2 -n ≤ 1.0049
Hence, storing all the complex numbers above, including our result, with absolute precision P only requires P + 2 bits, since their integral part is coded on only 2 bits. Note that, had we computed e -2iπnz before multiplying it by q n 2 , we would have needed O(Im(τ )) more bits, which changes the asymptotic complexity.
Computing the absolute precision lost during this computation is done using Theorem 0.3.3. We start with the bounds |τ -τ | ≤ 1 2 2 -P and |z -z| ≤ 1 2 2 -P , coming from the hypothesis that the approximations of z and τ are correctly rounded with precision P. We then need to estimate k v1 and k q , which can be done using the formula giving the absolute error when computing an exponential from Theorem 0.3.3. Given that τ ∈ F, we have
|q -q| ≤ 0.07 7 × 1/2 + 8.5 2 2 -P ≤ 0.42 × 2 -P |v 1 -ṽ1 | ≤ 6(|e -π(Im(τ )+2 Im(z)) | + |e π(2 Im(z)-Im(τ )) |) × 2 -P ≤ 6(|q| + 1)2 -P ≤ 6.42 × 2 -P
which means that k q ≤ 0.42 and k v1 ≤ 6.42. We then need to evaluate the loss of precision for each variable and at each step of the algorithm, which gives recurrence relations with nonconstant coefficients. Solving those is rather tedious, and we use loose upper bounds to simplify the computation; we do not detail this proof here. The results obtained by this method show that the error on the computation of the theta-constants is bounded by (0.3B + 105.958)2 -P , and the one on the computation of the theta function is smaller than (5.894B + 28.062)2 -P . This proves that the number of bits lost is bounded by log 2 B + c, where c is a constant smaller than 7; hence we set P = P + log B + 7. Finally, evaluating π and exp(z) with precision P can be done in O(M(P) log P), using respectively the Brent-Salamin algorithm (Section 3.1.2) and our subroutine UniformExp (Proposition 5.1.4); this is negligible asymptotically. In the end, computing an approximation up to 2 -P of θ(z, τ ) can be done in O M (P + log(P/ Im(τ )) + c)
P Im(τ ) = O M(P ) P Im(τ ) bit operations.
Computing θ 2
We mentioned in Section 2.5.3 the need to compute θ 2 (z, τ ) and θ 2 (0, τ ) as well. One could think of recovering those values using Jacobi's quartic formula and the equation of the variety, which we mentioned in Section 2.5:
θ 0 (0, τ ) 4 = θ 1 (0, τ ) 4 + θ 2 (0, τ ) 4 θ 2 0 (z, τ )θ 2 0 (0, τ ) = θ 2 1 (z, τ )θ 2 1 (0, τ ) + θ 2 2 (z, τ )θ 2 2 (z, τ )
that is to say, compute
θ 2 (0, τ ) = θ 0 (0, τ ) 4 -θ 1 (0, τ ) 4 1/4 θ 2 (z, τ ) = θ 2 0 (z, τ )θ 2 0 (0, τ ) -θ 2 1 (z, τ )θ 2 1 (0, τ ) θ 2 (0, τ ) .
However, this approach induces an asymptotically large loss of absolute precision for both θ 2 (0, τ ) and θ 2 (z, τ ). According to Theorem 0.3.3, both square root extraction and inversion induce a loss of precision proportional to |z| -1 ; since θ 2 (0, τ ) ∼ 4q 1/2 , the number of bits lost by applying those formulas is O(Im(τ )). Also note that those formulas would induce a big loss in relative precision as well, since θ 0 (0, τ ) and θ 1 (0, τ ) are very close when Im(τ ) goes to infinity, and the subtraction induces a relative precision loss of O(Im(τ )) bits; for more details, see [Dup11, Section 6.3]. Either of those analyses show that, in order to compensate precision loss, the naive algorithm should actually be run with a precision of O(P + log B + Im(τ )), which gives a running time that worsens, insteads of getting better, when Im(τ ) gets big. We do not recommend this approach.
Instead, one should compute partial summations of the series defining θ 2 , much in the same way as we did for θ(z, τ ). We write θ 2 (z, τ ) = q 1/4 w(1 + n≥1 v n ) with v n = q n 2 +n (w 2n + w -2n ); since Im(z) ≥ 0 we can just look at the problem of evaluating 1 + v n . We have
v n+1 = q 2n v 1 v n -q 4n+2 v n-1
The analysis in this case is very similar to the one for θ. We have
|v n | ≤ |q| n 2 , hence |θ 2 (z, τ ) - S B | ≤ 3|q| B 2 ,
so that the bound on B is one less than the one for θ. We have that q 2n |v 1 | is bounded by 2 instead of 1, which in the worst case means log B more guard bits are needed. This gives Algorithm 8; its asymptotic complexity is, just like Algorithm 7, O M(P ) P Im(τ bit operations, which gets better as Im(τ ) increases.
Algorithm 8 Naive algorithm to compute θ 2 (z, τ ), θ 2 (0, τ ). Input: z, τ with absolute precision P , satisfying conditions (2.5.11). Output: θ 2 (z, τ ), θ 2 (0, τ ) with absolute precision P .
1: a ← 1, b ← 1 2: B ← P +2 π Im(τ ) log 2 (e)
3: Work with precision P + 2 log B + 7. 4: q ← e iπτ , q 1 ← q, q 2 ← q
5: v 1 ← q 2 (w 2 + w -2 ), v ← v 1 , v ← 2 6: for n = 1..B do 7: /* q 1 = q n , q 2 = q n 2 +n , v = v n , v = v n-1 */ 8: a ← a + v, b ← b + 2q 2 9:
q 2 ← q 2 × (q 1 ) 2 × q 2 10:
q 1 ← q 1 × q 11: temp ← v, v ← q 2 1 × v 1 × v -(q 4 1 × q 2 ) × v v ← temp 12: end for 13: a ← a × q 1/4 w, b ← b × q 1/4 14: return a, b
We note that similar considerations apply to the problem of computing θ 3 . One can compute θ 3 (z, τ ) using the formula [Mum83, p.22]
θ 3 (z, τ ) 2 = θ 1 (z, τ ) 2 θ 2 (0, τ ) 2 -θ 2 (z, τ ) 2 θ 1 (0, τ ) 2 θ 0 (0, τ ) 2 . (5.1.2)
Using this formula loses only a few bits of precision since θ 0 (0, τ ) is bounded; however, one then needs to compute a square root, which potentially loses O(Im(τ )) bits. Hence, a summation of the series, which directly gives θ 3 , is preferable.
Genus 2
We now study the naive algorithm in the genus 2 case, i.e. z = z 1 z 2 and τ = τ 1 τ 3 τ 3 τ 2 . The number of variables is small enough that we can write the sum explicitly, as in Section 2.6, and perform an analysis of its remainder using the triangle inequality; this allows us to make the dependency in τ explicit. We can also write explicit recurrence relations between the terms to compute the sum faster.
Recall that the series defining the fundamental theta functions θ 0 , θ 1 , θ 2 , θ 3 only differ by the signs in front of the terms; the patterns are summarized in Figure 2.2. This means one can compute all the values θ [0;b] (z, τ ) at the same time for the same computational cost, i.e. with no extra multiplications. This also justifies that the analysis that we perform in Section 5.2.1 is valid for all the fundamental theta functions, since the differences of signs vanish when using the triangle inequality.
As in genus 1, we assume that z, τ are reduced as explained in Section 2.3 and Section 2.4.4 (e.g. τ ∈ F g ). This translates into the conditions: terms, is enough to get an approximation of θ(z, τ ) that is accurate to 2 -P .
|Re(τ i )| ≤ 1 2 , 0 ≤ 2 Im(τ 3 ) ≤ Im(τ 1 ) ≤ Im(τ 2 ), |τ 1 | ≥ 1 (i.e. Im(τ 1 ) ≥ √ 3 2 ) ; |Re(z i )| ≤ 1 |Im(z 1 )| ≤ Im(τ1)+Im(τ3) 2 ≤ 3 4 Im(τ 1 ), |Im(z 2 )| ≤ Im(τ2)+Im(τ3) 2 ≤ 3 4 Im(τ 2 ).
Truncated sums
Proof. Let q j = e iπτj and w j = e iπzj . Using the triangle inequality, we can write
|θ(z, τ ) -S B | ≤ S 1 + n≥B+1 |q 2 | n 2 (|w 2 | 2n + |w 2 | -2n ) + m≥B+1 |q 1 | m 2 (|w 1 | 2m + |w 1 | -2m ) + |m|>0 |n|≥B+1 |q 1 | m 2 |q 2 | n 2 |q 3 | 2mn |w 1 | 2m |w 2 | 2n
where
S 1 = |m|≥B+1 0<|n|≤B |q 1 | m 2 |q 2 | n 2 |q 3 | 2mn |w 1 | 2m |w 2 | 2n .
The second line can be bounded using a calculation very similar to the one in Section 5.1.1. The third line can be bounded as follows: we have
|q m 2 1 q n 2 2 q 2mn 3 | ≤ |q m 2 1 q n 2 2 q -m 2 -n 2 3 | ≤ |q m 2 /2 1 q n 2 /2 2 |
and furthermore, given the assumptions on z, we have:
|w 1 | + |w -1 1 | ≤ 1 + e -π(-|Im(z1)|) ≤ 2e π 3 4 Im(τ1) , |w 2 | + |w -1 2 | ≤ 2e π 3 4 Im(τ2) . Hence (|w 2m 1 | + |w -2m 1 |)(|w 2n 2 | + |w -2n 2 |) ≤ 4e π( 3 2 m Im(τ1)+ 3 2 n Im(τ2))
. Overall, we get the bound
|θ(z, τ ) -S B | ≤ S 1 + 4|q 1 | (B-1) 2 -4 1 -|q 1 | + 4|q 1 | (B-1) 2 2 -3 1 -|q 1 |
Assuming that Im(z i ) ≥ 0, which does not change the proof, we bound S 1 as follows:
S 1 ≤ |q 2 | m≥B+1 |q 1 | m 2 |q 3 | 2m (|w 1 | 2m |w 2 | 2 + |w 1 | -2m |w 2 | -2 ) + |q 3 | -2m (|w 1 | 2m |w 2 | -2 + |w 1 | -2m |w 2 | 2 ) + |q 2 | 4 m≥B+1 |q 1 | m 2 |q 3 | 4m (|w 1 | 2m |w 2 | 4 + |w 1 | -2m |w 2 | -4 ) + |q 3 | -4m (|w 1 | 2m |w 2 | -4 + |w 1 | -2m |w 2 | 4 ) + 4 m≥B+1 2<n≤B |q 1 | m 2 2 -3 2 m |q 2 | n 2 2 -3 2 n .
The last line can be bounded by 4q
(B-1) 2 -4 2 (1-|q1|)(1-|q2|) . We can refine the bound on |w 2 | -1 , writing |q 2 ||w 2 | -2 ≤ q -1
3 , which gives
S 1 ≤ q (B+1) 2 +1 1 1 -|q 1 | + q (B-1) 2 -4 1 1 -|q 1 | + q (B-1) 2 -3 2 1 1 -|q 1 | + |q 1 | (B-1) 2 -3 1 -|q 1 | + q (B+1) 2 +1 1 1 -|q 1 | + q (B-1) 2 -2 1 1 -|q 1 | + q B 2 +1 1 1 -|q 1 | + q (B-1) 2 -2 1 1 -|q 1 | + 4q (B-1) 2 -4 2 (1 -|q 1 |)(1 -|q 2 |) .
Collecting terms yields the stated result.
Genus 2 naive algorithm
Following and extending the strategy in Section 5.1.2 or [ET14a, §5.1], we use recurrence relations to compute terms more efficiently. Let
Q(m, n) = q m 2 1 q n 2 2 q 2mn 3 T (m, n) = Q(m, n)(w 2m 1 w 2n 2 + w -2m 1 w -2n 2
).
With minor rewriting, we have that
θ i (z, τ ) = m,n∈N s(i, m, n)(T (m, n) + T (m, -n)) θ i (0, τ ) = m,n∈N s(i, m, n)(Q(m, n) + Q(m, -n))
where s(i, m, n) is 1 4 for m = n = 0, ±1 2 along the axes (m, 0) and (0, n), and ±1 elsewhere. We have the following recurrence relations.
(w 2 1 + w -2 1 )T (m, n) = q -2m-1 1 q -2n 3 T (m + 1, n) + q 2m-1 1 q 2n 3 T (m -1, n) (5.2.1) (w 2 2 + w -2 2 )T (m, n) = q -2n-1 2 q -2m 3 T (m, n + 1) + q 2n-1 2 q 2m 3 T (m, n -1). (5.2.2)
We propose an algorithm, Algorithm 9, that uses those recurrence relations in a way such that the memory needed is only O(1). The algorithm consists in iteratively computing the terms for m = 0 and m = 1 (using Equations (5.2.2)), and use them as soon as they are computed to initialize the other induction (Equations (5.2.1)); this corresponds to horizontal sweeps (from bottom to top) in the square [0, B] 2 , with the first two terms of each line iteratively computed from the terms below them, as illustrated in Figure 5.1.
In Algorithm 9, terms of the form q k i as well as products thereof must also be computed recursively. We did so in our implementation, but this is deliberately omitted here for brevity. Also, despite the use of notations T (m, n), it shall be understood that only constant storage is used by this algorithm, as can be seen by inspecting where values are actually used. Note that further speed-ups are possible if one tolerates using O( √ P ) extra memory, for instance by caching the q m 1 and the q n 2 . Taking B as in Section 5.2.1 yields the right number of terms; we did not analyze the number of guard bits required by this algorithm, but as the arguments would be similar to the genus 1 n m (0, 0)
T (•, 0) T (•, 1) T (•, 2)
. . . Figure 5.1: Order of summation of the terms in Algorithm 9. Arrows going right correspond to applying Equation (5.2.1), arrows going up correspond to applying Equation (5.2.2). 4 terms need to be stored to initialize the horizontal sweeps, starting with the corners of the bottom left square, then moving that square up when increasing n. Each horizontal sweep requires the storage of 2 terms.
Algorithm 9 Naive algorithm for θ(z, τ ) in genus 2. Input: z, τ with absolute precision P , and B a summation bound. Output: θ i (z, τ ), θ i (0, τ ) for i ∈ {0..3}, with absolute precision P .
1: a ← (1, 1, 1, 1); b ← (1, 1, 1, 1)
These arrays will store θ i (z, τ ) and θ i (0, τ ). 2: Compute q 1 , q 2 , q 3 , w 1 , w 2 using our UniformExp routine (cf. Proposition 5.1.4).
3: Compute R(m, n) = Q(m, n) + Q(-m, n) for m, n ∈ {0, 1}. 4: Compute T (m, n) for m ∈ {0, 1}
and n ∈ {-1, 0, 1}. 5: Add contributions for (0, 1) to a and b, with the correct sign.
6: u ← w 2 1 + w -2 1 , v ← w 2 2 + w -2 2 7: for n = 1 to B -1 do 8:
R(0, n + 1) ← q 2n+1 2 R(0, n)
9:
Add contribution for (0, n + 1) to b, with the correct sign.
10:
T (0, n + 1) ← q 2n+1 2 vT (0, n) -q 4n 2 T (0, n -1)
11:
Add contributions for (0, n + 1) to a, with the correct sign. 12: end for 13: for m = 1 to B do 14:
ρ m ← q 2m 3 + q -2m
3 can be computed inductively 15:
Add contributions for (m, 0) and (m, 1) to a and b, with the correct sign.
16:
for n = 1 to B -1 do One may refine the bound depending on m 17:
R(m, n + 1) ← q 2n+1 2 ρ m R(m, n) -q 4n 2 R(m, n -1) 18:
Add contribution for (m, n + 1) to b, with the correct sign.
19:
T (m, n + 1) ← q 2n+1 2 q 2m 3 vT (m, n) -q 4n 2 q 4m 3 T (m, n -1) 20: T (m, -(n + 1)) ← q 2n+1 2 q -2m 3 vT (m, -n) -q 4n 2 q -4m 3 T (m, -(n -1)) 21:
Add contributions for (m, n + 1) to a, with the correct sign.
22:
end for 23:
R(m + 1, 0) ← q 2m+1 1 R(m, 0) 24: R(m + 1, 1) ← q 2m+1 1 (q 2 3 + q -2 3 )R(m, 1) -q 4m 1 R(m -1, 1) 25: T (m + 1, 0) ← q 2m+1 1 uT (m, 0) -q 4m 1 T (m -1, 0) 26:
T (m + 1, 1) ← q 2m+1 1 q 2 3 uT (m, 1) -q 4m 1 q 4 3 T (m -1, 1)
27:
T (m + 1, -1) ← q 2m+1 1 q -2 3 uT (m, -1) -q 4m 1 q -4 3 T (m -1, -1) 28: end for 29: return a, b. case, it does not seem like it will change the asymptotic running time. Finally, the remarks made in Note 5.1.6 can be generalized to apply to this algorithm; in particular, for P ≥ c Im(τ 1,1 ), the number of terms is bounded by a constant, and the asymptotic complexity is the one of UniformExp, which is quasi-linear and uniform in z, τ . We implemented Algorithm 9 in Magma, and will discuss timings in Section 7.3.
Genus g
We outline the analysis of two strategies to evaluate the series defining θ with absolute precision P , i.e. up to 2 -P . The first strategy is the one outlined in [DHB + 04], while the second one attempts to generalize the naive algorithms we outlined in genus 1 and genus 2; the number of terms that are summed in each strategy is asymptotically the same in P . However, note that the first strategy considers an ellipsoid while the second one considers a cube; thus, it is likely that the second strategy is coarser (and hence perhaps slower in practice), although we did not manage to link the ellipsoid to the cube. We also discuss a way to use recurrence relations to compute the terms, which lowers the overall asymptotic complexity by a log P factor; this method can be applied to either of these two strategies.
Recall that the argument reduction strategies we discussed in Section 2.3 and Section 2.4.4 (e.g. τ ∈ F g ) allow us to assume that
|Re(τ i,j )| ≤ 1 2 , Im(τ ) is Minkowski-reduced, |τ 1,1 | ≥ 1 |Re(z i )| ≤ 1 2 , Im(z i ) ≤ 1 2 j∈{1,...,n} Im(τ i,j
). Note that these conditions are not very well adapted to the analyses we present below. The first analysis singles out an exponential factor which cannot be controlled or dealt with using only these conditions, while the second analysis requires a conjecture (Conjecture 5.3.3) in order to exploit these conditions.
Deckoninck et. al's analysis
We find in [DHB + 04] a first method to compute the series defining θ up to 2 -P . The authors determine an ellipsoid which contains the indices over which one should sum to get a final result precise up to 2 -P . This method does not seem to depend on any conditions on z, τ ; however, the authors mention that using argument reduction is beneficial to the process, as it reduces the eccentricity of the ellipsoid.
Their argument reduction strategy is visibly inspired by [START_REF] Carl | Topics in Complex Function Theory: Abelian Functions and Modular Functions of Several Variables[END_REF], who determined the fundamental domain F g much in the same way as [START_REF] Klingen | Introductory lectures on Siegel modular forms[END_REF] (see Section 2.4); however, the reduction they actually appear to be using is a bit different, as the conditions that are imposed are
|Re(τ i,j )| ≤ 1 2 , the matrix T such that Im(τ ) = t T T is LLL-reduced, |τ 1,1 | ≥ 1.
This reduction seems even weaker than the reduction in F g of Section 2.4.4, as the LLL reduction (which runs in polynomial time but does not necessarily find the smallest vector) is used instead of the Minkowski reduction (which runs in exponential time and finds the smallest vector). The termination of the algorithm is claimed to derive from the termination of the algorithm reducing in the fundamental domain, which is proven in [Sie89, Chapter 6, Section 5]; no indications are given on the number of steps that are needed before termination. The effect of this strategy on the number of terms is not quantified, but the article claims that it reduces it.
Note that the analysis presented in [DHB + 04] gives results which are valid for a series which is equal to θ(z, τ )e -π t Im(z) Im(τ ) Im(z) , hence disregarding an "exponential growth" factor [DHB + 04, p. 3], and only computing what they call the "oscillatory part" of the theta function. The size of this exponential factor grows to infinity as Im(z) grows. Note that the conditions given by our argument reduction strategies (Section 2.3 and Section 2.4.4) do not allow us to control the size of this factor: for instance, if (z, τ ) are reduced with these strategies, (2 k z, 2 k τ ) is also reduced, yet the exponential factor goes to infinity as k grows. However, this does not mean that the size of θ grows, merely that the oscillatory part has to be computed at an increasingly larger precision in order to compensate for this factor.
The following theorem takes a closer look at the oscillatory part of the theta function.
Theorem 5.3.1 ([DHB + 04, Theorem 2]). Denote [[V ]] = V -[V ], where [V ] is the vector with integer coordinates closest to V. Define Λ = { √ πT (n + c), n ∈ Z g } with τ = t T T (Cholesky decomposition) and c = [[Im(τ ) -1 Im(z)]]. For B > 0 define the ellipsoid of size B S B = {n ∈ Z g | || √ πT (n + c)|| < B},
where ||•|| is the L 2 norm. Then the oscillatory part of the theta function can be approximated to 2 -P by summing over the terms whose indices are inside the ellipsoid of size R, where R is the solution to the equation
2 -P = g 2 2 g ρ g Γ(g/2, (R -ρ/2) 2 )
where Γ is the incomplete gamma function and ρ is the length of the shortest vector of Λ.
Note that, if τ is reduced as above, ρ = Im(τ 1,1 ) ≥ √ 3/2, and hence the number of terms needed can be upper bounded with a bound which is independent from τ .
Neglecting the dependency in τ and z, we get the rather coarse bound of O(R g ) terms needed. We complete the analysis in [DHB + 04] by computing an explicit estimate on R: Proposition 5.3.2. Treating z, τ (and hence ρ) as constants, we have R = O( √ P ), i.e. summing O(P g/2 ) terms is sufficient to get a result accurate to P bits.
Proof. Assuming that g is even (which we can do since Γ is growing in the first parameter for R large enough), we use integration by parts g/2 times to prove that
Γ(g/2, d) = (g/2 -1)!e -d + g/2 i=1 (g/2 -1) • • • (g/2 -i)d g/2-i e -d ≤ g 2 (g/2 -1)!d g/2-1 e -d ≤ e -d+g/2(log d+log(g/2))
Hence:
g 2 2 g ρ g Γ(g/2, d) ≤ 2 -d log 2 e+g/2(log d+log(g/2))+log(g/2)+g log(2/ρ)
Asymptotically, i.e. for R large enough, taking d = P log 2 e + g log P + g log(2/ρ) + g = O(P ) is enough for the right hand side to be smaller than 2 -P . Hence R = O( √ P ).
Note that this is not as good as the asymptotics in genus 1 and 2, which showed the number of terms to be O P Im(τ1,1)
g/2
. In fact, we have R = O Im(τ 1,1 ) + P -log Im(τ 1,1 ) (since ρ = Im(τ 1,1 )), which gets worse as Im(τ 1,1 ) increases. This, combined with the fact that the size of the exponential factor cannot be bounded even for z, τ reduced, means that one cannot use this analysis to build a fast algorithm with uniform complexity, as we do in genus 1 and 2 (see e.g. Algorithm 11 or Algorithm 15 for genus 1, and Note 7.2.7 for genus 2).
Truncated sums
Another method is to attempt to bound the remainder of the series defining θ by a series which can be computed more easily, e.g. a geometric series; this is the method we used in genus 1 and genus 2.
Recall the proof of Proposition 2.1.9: we took R an orthogonal matrix such that t Rτ R is diagonal, and denote λ the smallest eigenvalue of Im(τ ). Then
|θ [0;b] (z, τ ) -1| ≤ 2 g n∈N g \{0} q n 2 1 +...+n 2 g w -2 ni
with q = e -πλ and w = e -2π max Im(zi) . We can apply similar arguments to |θ
[0;b] (z, τ )-S R (z, τ )|, where S R (z, τ ) = ni∈[-R,R] e iπ t nτ
n e 2iπ t nz . We can then write for R large enough
|θ [0;b] (z, τ ) -S R (z, τ )| ≤ 2 g n1,...,ng≥R e -π λ ((n1-c) 2 +...+(ng-c) 2 )
with c = max Im(zi)
λ
. Taking R = O( P/λ) is enough to get a sum which is accurate to P bits, which means one needs to sum at most O P λ g/2 terms.
This result is not entirely satisfactory with regard to the argument reduction strategies that are deployed in genus g -either the reduction to F g (Section 2.4) or the weaker reductions of Section 2.4.4. Indeed, these reductions give conditions on the coefficients of τ (or Im(τ )) but none on the eigenvalues of Im(τ ). To the best of our knowledge, there is no result linking the eigenvalues of Im(τ ) to the coefficients of τ ; we note, however, that [Dup06, p. 127] puts forward the following conjecture: Conjecture 5.3.3 ([Dup06, p. 127]). For any g, there exists a constant c g such that for any matrix M ∈ M g (R) such that M is symmetric, positive definite and Minkowski-reduced, its smallest eigenvalue λ is such that λ ≥ c g M 1,1 .
The conjecture holds in genus 1 and 2 [Dup06, p. 137]. Should that conjecture be proven for any genera, this would prove that the number of terms needed is in fact
O P Im(τ1,1) g/2
, which corresponds exactly to the complexities we found in genus 1 and 2. However, if this conjecture is not true, the number of terms needed in this algorithm for a τ ∈ F g could be arbitrarily big.
Recurrence relations
One method of computing the sum above is to compute each term explicitly, i.e. with the computation of an exponential for each term. This algorithm is fairly straightforward to implement, as one iterates over all the indices and computes the corresponding exponential term; its cost is O(M(P ) log P P g/2 ), and it requires only a small amount of memory. Note that this is more expensive than the algorithms we showed in genus 1 (Algorithm 7) and genus 2 (Algorithm 9).
We show a faster method, which uses the recurrence relations of degree two that exist for any choice of index; this allows to compute all the terms using only multiplications once the initial O(g 2 ) exponentiations e iπτ j,k , e iπzj are computed. This is a generalization of Algorithm 7 and Algorithm 9. However, note that this method has a larger memory requirement; we conjecture that one needs to store at least O(2 2g ) P -bit numbers. Furthermore, the implementation of this method seems tricky in the general case, as one can certainly notice when looking at the algorithm in genus 2 (Algorithm 9); it may be possible to implement this as a recursive function with some global variables, but we did not undertake this.
We outline the recurrence relations in the g-th (i.e. last) index. For ( 1 , . . . , g ) ∈ {-1, 1} g define α ( 1 ,..., g ) ng = e iπ t nτ n (e 2iπ j nj zj + e -2iπ j nj zj )
treating n i , i < g as constants. Note that since α
( 1,..., g ) ng = α (-1 ,...,-g ) ng
, we only need to consider the case g = 1; hence this defines 2 g-1 quantities. We have
approx P (θ(z, τ )) = n1,...ng-1∈[0,R] g-1 ( 1,..., g-1 )∈{-1,1} ng∈[0,R] α ( 1 ,..., g-1,1) ng (5.3.1)
Taking a closer look to α
(1, 2,..., g ) ng yields a recurrence relation:
(e 2iπzg +e -2iπzg )α ( 1 ,..., g-1 ,1) ng = g-1 i=1 q -2 i |n i | i,g q -2ng -1 g,g α ( 1 ,..., g-1 ,1) ng +1 + g-1 i=1 q 2 i |n i | i,g q 2ng -1 g,g α ( 1 ,..., g-1 ,1) ng -1 i.e. α ( 1 ,..., g-1 ,1) ng +1 = g-1 i=1 q 2 i |n i | i,g q 2ng +1 g,g (e 2iπzg + e -2iπzg )α ( 1 ,..., g-1 ,1) ng - g-1 i=1 q 2 i |n i | i,g 2 q 4ng g,g α ( 1 ,..., g-1 ,1) ng -1
Recall that we assume that the n i are constants; hence, we can suppose that
g-1 i=1 q -2 i |ni| i,g is precomputed. The q -2ng-1 g,g , q
2ng-1 g,g can be computed iteratively; hence, this recurrence relation allows one to compute all the α
( 1,..., g-1 ,1) ng+1 from the α ( 1 ,..., g-1 ,1) ng , α ( 1,..., g-1 ,1) ng-1
in 3 × 2 g-1 + 3 multiplications. Hence evaluating the inner sum in Equation (5.3.1) for all b ∈ 1 2 Z g /Z g can be done in O(R2 g-1 ) multiplications.
Hence, we just showed how, given n 1 , . . . , n g-1 , 1 , . . . , g-1 and α
( 1 ,..., g-1,1) 0
, α
( 1,..., g-1 ,1) 1 , we can compute the following terms of the sum using a recurrence relation. Computing these first two terms can be done using similar recurrence relations on the other variables, with a reasoning which is very similar to the one we just used: similar recurrence relations of degree 2 exist for these, and it only costs a constant number of multiplications to increase one of the indices.
This means that the total cost of this method is O(2 g R g ) multiplications and O(g 2 ) exponentiations, which gives a cost of O(M(P )P g/2 ) bit operations. This agrees with the running time of Algorithm 7 and Algorithm 9. Finally, note that once again, one can compute all the fundamental theta functions at once for the same cost, since one simply needs to change the sign of the terms accordingly.
Chapter 6
Fast computation of the theta function in genus 1
Recall that Jacobi's theta function is defined as
C × H → C (z, τ ) → n∈Z e iπτ n 2 e 2iπzn = 1 + n∈N q n 2 (w 2n + w -2n )
with q = e iπτ (the "nome") and w = e iπz . This chapter provides an asymptotically fast algorithm to compute θ(z, τ ) with absolute precision P in O(M(P ) log P ) bit operations. We start with the easier case of the fast, quasi-optimal time algorithm to compute theta-constants featured in [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF], which we refine; we then use a similar approach for the general case. The results from this chapter are taken from a paper that has been accepted for publication in the Mathematics of Computation journal [START_REF] Labrande | Computing Jacobi's θ in quasi-linear time[END_REF].
Preamble: fast theta-constants
We discuss an algorithm which computes those three values with absolute precision P in time O(M(P ) log P ) bit operations. A fuller description of the algorithm can be found in [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF]. This algorithm has been applied to the computation of modular functions such as j(τ ) and Dedekind's η function, using formulas linking these quantities to theta-constants. This is of use for instance in the CM method, where the computation of class polynomials (which zeroes are values of j(τ )) has to be performed; see [START_REF] Enge | The complexity of class polynomial computation via floating point approximations[END_REF] for more details.
Recall the definition of theta-constants:
θ 0 (0, τ ) = n∈Z q n 2 , θ 1 (0, τ ) = n∈Z (-1) n q n 2 , θ 2 (0, τ ) = n∈Z q (n+ 1 2 ) 2 with q = e iπτ .
As shown in Section 2.5.3, one only needs to consider the case τ ∈ F. Theorem 3.2.8 then shows that the AGM of θ 2 0 (0, τ ) and θ 2 1 (0, τ ) is 1. Finally, recall Note 3.2.10, which uses the homogeneity of the AGM function
(i.e. AGM(a, b) = a AGM 1, b a ) to get AGM 1, θ 1 (0, τ ) 2 θ 0 (0, τ ) 2 = 1 θ 0 (0, τ ) 2 (6.1.1) 91
This gives a way to recover θ 0 (0, τ ) 2 , θ 1 (0, τ ) 2 from the knowledge of their quotient.
A quasi-optimal time algorithm to compute theta-constants
We outline the algorithm in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] to compute theta-constants. Recall Equation (2.5.10):
θ 2 (0, τ ) 2 = i τ θ 1 0, -1 τ 2 , θ 0 (0, τ ) 2 = i τ θ 0 0, -1 τ 2 .
This means that θ2(0,τ ) 2 θ0(0,τ 2.2, p.60]), and hence the choices of signs for -1 τ are all good (Prop. 3.2.9). Hence, for all τ ∈ F,
) 2 = θ1(0,τ ) 2 θ0(0,τ ) 2 with τ = -1 τ . Note that for τ ∈ F, -1 τ ∈ D 1 ⊂ D 2 (see e.g. [Dup06, Figure
AGM(θ 2 0 (0, τ ), θ 2 2 (0, τ )) = i τ . (6.1.2)
Furthermore, the Jacobi formula (Equation (2.5.6)) is θ 4 2 (0, τ ) = θ 4 0 (0, τ ) -θ 4 1 (0, τ ), which can be rewritten as
θ 2 2 (0, τ ) θ 2 0 (0, τ ) = 1 - θ 2 1 (0, τ ) θ 2 0 (0, τ )
, the square root being the one with positive real part since, for τ ∈ F, τ = -1 τ ∈ D 1 and Re
θ 2 2 (0,τ ) θ 2 0 (0,τ ) = Re θ 2 1 (0,τ ) θ 2 0 (0,τ ) ≥ 0.
Putting it all together, and using the homogeneity of the AGM (Note 3.2.10), we have for all τ ∈ F AGM 1, 1 -
θ 4 1 (0,τ ) θ 4 0 (0,τ ) AGM 1, θ 2 1 (0,τ ) θ 2 0 (0,τ ) = i τ
which means that the function
f τ : z → τ AGM(1, 1 -z 2 ) -i AGM(1, z)
is 0 at θ1(0,τ ) 2 θ0(0,τ ) 2 . The algorithm then consists in using Newton's method on f τ to compute θ1(0,τ ) 2 θ0(0,τ ) 2 . The starting point of the method must be close enough to θ1(0,τ ) 2 θ0(0,τ ) 2 ; [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] finds that computing (using the naive algorithm) an approximation of the quotient with precision 4.16 Im(τ ) bits is enough to make Newton's method converge. Finally, recall Theorem 0.3.9, which shows that one should apply Newton's method while doubling the working precision at each step. The full algorithm is Algorithm 10.
We refer to [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF] for notes on the fast computation of fτ (t) f τ (t) ; note however that one can simply use finite differences fτ (t+ )-fτ (t) as an approximation of this derivative. Proposition 6.1.1. For τ ∈ F with absolute precision P , taking P = O(P + Im(τ )) in Algorithm 10 allows the computation of θ 2 0 (0, τ ), θ 2 1 (0, τ ) with absolute precision P in time
O (M(P + Im(τ )) × (log P + log Im(τ )))
which is quasi-optimal time if τ is assumed to be in a compact set.
Algorithm 10 Compute θ 2 0 (0, τ ), θ 2 1 (0, τ ) with absolute precision P . Input: τ with absolute precision P .
1: Compute θ 2 0 (0, τ ), θ 2 1 (0, τ ) with absolute precision P 0 using Algorithm 7. 2: t ← θ1(0,τ ) 2 θ0(0,τ ) 2 3: p ← P 0 4: while p ≤ P do 5:
t ← t -fτ (t)
f τ (t) , computed with precision 2p 6:
p ← 2p -δ with δ defined in Corollary 0.3.8
7:
Remove the last δ digits of t. 8: end while 9: t 0 ← AGM(1, t) 10: return (t 0 , t 0 t).
Proof. We refer to [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF][START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF] for a full analysis of this algorithm. Correctness follows from the fact that the choice of square roots are good for τ and -1 τ since τ ∈ F. First of all, recall that the number of iterations necessary to compute the AGM increases as √ 1 -z 2 gets small (Theorem 3.2.6). Hence the number of iterations needed in the computation of AGM 1,
θ 2 2 (0,τ ) θ 2 0 (0,τ )
with precision P is O(log 2 P + log 2 Im(τ )). Secondly, recall that • Computing a -b can induce a potentially large loss of relative precision, of the order of log 2 |a -b| bits, whereas multiplication and square roots only induce a loss of at most 2 bits of relative precision [Dup06, Section 1.1.2];
• Computing √ a with a very close to 0 induces a loss of absolute precision of the order of -1 2 log 2 |a|, whereas adding two complex numbers only induces a loss of at most 1 bit of precision (Theorem 0.3.3).
In either case, i.e. whether we are looking to compute the quantities above with absolute or relative precision, the computation of √ 1 -z 2 for z θ 2 1 (0,τ ) θ 2 0 (0,τ ) will cause a potentially large loss of precision. In fact, since θ 4 2 (0, τ ) ∼ 16q, the loss of precision is O(Im(τ )). This means that one should take P = O(P + Im(τ )) in the algorithm in order to compensate for the loss of precision.
Putting it all together, the cost of the evaluation of f τ in the algorithm is
O (M(P + Im(τ )) × (log P + log Im(τ ))) .
This is also the final complexity, since applying Newton's method is as costly as the last, fullprecision iteration.
An application of this algorithm can be found in [START_REF] Enge | The complexity of class polynomial computation via floating point approximations[END_REF], where the computation of thetaconstants is used to compute certain modular forms such as Dedekind's η function. These computations are useful to compute class polynomials, which are of interest in the CM method, used to generate cryptographically safe elliptic curves. The use of this algorithm provides the best theoretical complexity for the computation of η. In practice, this approach gives a better running time than a naive algorithm (exploiting the sparseness of the series defining η) for precisions above 250 000 bits, which corresponds to the largest example (a field of class number 100 000) that is computed in [START_REF] Enge | The complexity of class polynomial computation via floating point approximations[END_REF].
A faster algorithm with uniform complexity
This section gives an algorithm with overall better complexity, which furthermore does not depend on τ at all.
Making the complexity uniform
The τ -duplication formulas (Equation (2.5.4)) allow us to compute easily θ 0 (0, τ ) 2 and θ 1 (0, τ ) 2 from θ 0 0, τ 2 n , θ 1 0, τ 2 n . Hence, they can be used for argument reduction purposes: Proposition 6.1.2. Let τ ∈ F, with absolute precision P . Let s ≥ 0 be such that 1 ≤ |τ | 2 s < 2. Then running Algorithm 10 on τ 2 s costs cM(P ) log P operations with c independent of τ . One can then recover the final result using s τ -duplication formulas, for a cost of O(M(P )s) = O(M(P ) log Im(τ )).
Note that in the case P ≥ c Im(τ ), we have s = O(log P ) in Proposition 6.1.2, and hence quasi-optimal running time uniformly in τ . If this is not the case, we can use the remark made in Section 5.1.3: if P ≤ c Im(τ ) for c a constant, the complexity of the algorithm is dominated by the complexity of evaluating π and exp(iπτ ), which is O(M(P ) log P ).
However, note that the complexity of evaluating e iπτ at a precision P depends in τ unless one uses the UniformExp subroutine (see Proposition 5.1.4); this difficulty does not seem to appear in the analysis of [Dup06, Algorithm 8], although it is relevant when computing e iπτ with relative precision P and large Im(τ ). In the case of theta-constants, it is actually even simpler, since
|θ 0 (0, τ ) -1| ≤ 2 |q| 1 -|q| ≤ 4|q|
Hence, if P ≤ π log 2 e Im(τ ) -2, we have 4|q| ≤ 2 -π log 2 e Im(τ )+2 ≤ 2 -P ; hence, 1 is an approximation of θ 0 (0, τ ) (and of θ 1 (0, τ )) with absolute precision P . Furthermore, since θ is close to 1, a similar argument can be used for relative precision.
Computation of θ 2
We now show how to compute θ 2 (0, τ ) in the same uniform quasi-linear time.
Recall from Section 5.1.4 (or from [Dup06, Section 4.2.4]) that θ 2 (0, τ ) 2 can be recovered using Jacobi's formula (2.5.6), at the cost of O(Im(τ )) bits. This requires computing θ 0,1 (0, τ ) 2 with relative precision O(P + Im(τ )), which changes the asymptotics of the algorithm in the case where one needs to use the naive method: the complexity becomes worse as Im(τ ) grows. Hence, we cannot use Jacobi's formula to compute θ 2 (0, τ ) 2 if P ≤ c Im(τ ).
Instead, one could use Algorithm 8 (Section 5.1.4); however, the same considerations on the computation of e iπτ apply. Note once again that the triangle inequality gives, for τ ∈ F:
| θ 2 (0, τ ) q 1/4 | ≤ 1 + 2|q| 2 1 -|q| 4 ≤ 1.009 Hence, if P ≤ π log 2 e 4
Im(τ ) -1, an approximation of θ 2 (0, τ ) with P bits of absolute precision is 0.
In the case where P ≥ π log 2 e 4 Im(τ ) -1, we use a trick to compute θ 2 from the other thetaconstants. Recall (cf Equation (2.5.4)) the τ -duplication formula for the third theta-constant:
θ 2 (0, 2τ ) 2 = θ 0 (0, τ ) 2 -θ 1 (0, τ ) 2 2 ,
Hence, if we use τ -duplication formulas for θ 0 (0, τ ), θ 1 (0, τ ), we can also at the same time recover θ 2 ; this loses much fewer bits than Jacobi's formula.
We then need to make sure that for every τ ∈ F, at least one τ -duplication formula is used. Note that this is not necessarily the case in [Dup06, Algorithm 8], e.g. for |τ | < 2. We have: Proposition 6.1.3. Consider the domain
D = {τ ∈ H | |Re(τ )| ≤ 1 4 , 1 2 ≤ |τ | < 1}
Then Algorithm 10 outputs the correct result, i.e. the choices of signs are always good.
Proof. We have D ⊂ D 1 (see Figure 3.1). Now, for τ ∈ D, we have
1 |τ | > 1, |Re -1 τ | = |Re(τ )| |τ | 2 ≤ 1 hence -1 τ ∈ F.
This means that f τ θ 2 1 (0,τ ) θ 2 0 (0,τ ) = 0 for τ ∈ D, which proves correctness. Hence, we simply take the argument reduction strategy of [Dup06, Algorithm 8] one step further: for τ ∈ F, we take s ≥ 0 such that τ 2 s+1 ∈ D, then use at least one τ -duplication formula to recover θ(0, τ ) 2 .
We combine the remarks here in Algorithm 11, which is a variant on [Dup06, Algorithm 8].
Algorithm 11 Compute θ 0,1,2 (0, τ ) with absolute precision P . Input: τ ∈ F with absolute precision P .
1: if P ≤ 1.13 Im(τ ) -2 then 2: return (1,1,0). 3: else 4: Let s ∈ N such that 1 ≤ |τ | 2 s < 2.
5:
Compute θ 0 0, τ 2 s+1
2 , θ 1 0, τ 2 s+1 2 using Algorithm 10.
6:
for i = 1 to s + 1 do 7:
Compute θ 0 0, τ 2 s-i 2 , θ 1 0, τ 2 s-i 2 using the τ -duplication formulas.
8:
if i = s + 1 then 9:
Compute θ 2 (0, τ ) 2 = θ0(0,τ /2) 2 -θ1(0,τ /2) 2 2 .
10:
end if 11:
end for 12:
return θ 0,1,2 (0, τ ). 13: end if Proposition 6.1.4. Algorithm 11, working with internal precision O(P ), has complexity O(M(P ) log P ), uniformly in τ .
Proof. For P ≤ 1.13 Im(τ ) -2, we have P ≤ π log 2 e 4 Im(τ ) -1 and P ≤ π log 2 e Im(τ ) -2; hence (1, 1, 0) is an approximation of the theta-constants. We then look at the other case.
Algorithm 10 is called in Step 5 with the argument τ 2 s+1 , which is of norm smaller than 1. One can thus bound its imaginary part by 1 in Proposition 6.1.1, which shows that the cost of Step 5 is O(M(P ) log P ), independently of τ . Furthermore, note that s ≤ log 2 |Im(τ )|, and hence s ≤ log 2 P + c given our bound on Im(τ ) in this case. The τ -duplication formulas are thus applied O(log P ) times; the analysis in [Dup06, Section 4.2.3] shows that this loses O(log P ) bits of precision. In the end, we get a O(M(P ) log P ) algorithm with asymptotic complexity independent of τ .
A function related to θ(z, τ )
We generalize the algorithm presented in Section 6.1, which gives an algorithm that computes θ 0,1,2 (z, τ ) in O(M(P ) log P ), for z, τ verifying the usual argument reduction conditions (see Chapter 5). We use the same strategy as [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF], that is to say, find a quadratically convergent sequence and attempt to invert the function computing its limit using Newton's method. To be more precise, we exhibit a function F such that
F 1, θ 1 (z, τ ) 2 θ 0 (z, τ ) 2 , 1, θ 1 (0, τ ) 2 θ 0 (0, τ ) 2 = (z, τ )
Furthermore, our algorithm has uniform complexity O(M(P ) log P ), i.e. the complexity is independent of z, τ .
The F sequence
Definition of the F sequence
Recall the τ -duplication formulas in genus 1:
θ 0 (z, 2τ ) 2 = θ 0 (z, τ )θ 0 (0, τ ) + θ 1 (z, τ )θ 1 (0, τ ) 2 θ 1 (z, 2τ ) 2 = θ 0 (z, τ )θ 1 (0, τ ) + θ 1 (z, τ )θ 0 (0, τ ) 2
We then define a function F as:
F : C 4 → C 4 (x, y, z, t) → √ x √ z + √ y √ t 2 , √ x √ t + √ y √ z 2 , z + t 2 , √ z √ t
The τ -duplication formulas (Equation (2.5.4)) show that for some appropriate choice of roots we have F θ 2 0 (z, τ ), θ 2 1 (z, τ ), θ 2 0 (0, τ ), θ 2 1 (0, τ ) = θ 2 0 (z, 2τ ), θ 2 1 (z, 2τ ), θ 2 0 (0, 2τ ), θ 2 1 (0, 2τ ) .
Remark. One can also write rewrite F using Karatsuba-like techniques
F (x, y, z, t) = ( √ x + √ y)( √ z + √ t) + ( √ x - √ y)( √ z - √ t) 4 , (6.2.1) ( √ x + √ y)( √ z + √ t) -( √ x - √ y)( √ z - √ t) 4 , ( √ z + √ t) 2 + ( √ z - √ t) 2 4 , ( √ z + √ t) 2 -( √ z - √ t) 2 4
to speed up computations.
Good choices of sign
Similarly to the complex AGM (Section 3.2), we define a good choice for square roots at the rank n as the following conditions being satisfied: t n | and Im
√ tn √ zn > 0.
Note that the condition |x -y| < |x + y| is equivalent to Re y x > 0. Again, similarly to the AGM, we define an optimal F sequence: Definition 6.2.1. Let x, y, z, t ∈ C, and define the optimal F sequence associated to x, y, z, t as the sequence ((x n , y n , z n , t n )) n∈N such that:
(x 0 , y 0 , z 0 , t 0 ) = (x, y, z, t) (x n+1 , y n+1 , z n+1 , t n+1 ) = F (x n , y n , z n , t n )
where all the choices of sign for the square roots are good. We sometimes denote F ∞ (x, y, z, t) the limit of this optimal F sequence.
The study of this sequence and its convergence is done in Section 6.2.4. Note that
z n+1 -t n+1 = z n + t n 2 - √ z n t n = ( √ z n - √ t n ) 2 2 z n+1 + t n+1 = ( √ z n + √ t n ) 2 2 hence |z n+1 -t n+1 | < |z n+1 + t n+1 | ⇔ | √ z n - √ t n | < | √ z n + √ t n |
Hence, our third condition and the condition "b n+1 is the good choice of square roots" in the AGM are very similar. There are, however, subtle differences, which is why we wrote √ z √ t instead of √ zt in the definition of F . The computation of the AGM involves only one square root computation, and the study of the choice of signs for theta-constants (Section 3.2.3) involves determining the sign of θ 2 1 (0,2τ ) θ 2 0 (0,2τ ) . However, the definition of F requires the computation of both √ z and √ t, which we then multiply (the final result being the same as in the AGM); hence, our third condition leads us to study the sign of θ1(0,τ ) θ0(0,τ ) instead. This is accomplished in Proposition 6.2.4, which uses different methods than the ones in [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF].
Finally, note that the condition
| √ z n - √ t n | < | √ z n + √ t n |
can be satisfied in two ways, depending on which sign we pick for √ z n ; this is of no consequence for the value of z n+1 , t n+1 , but the sign matters for the computation of x n+1 , y n+1 . This is why we choose to impose Re( √ z n ) ≥ 0; this choice is justified in the proof of Theorem 6.2.6.
Link with theta functions
More argument reduction
Recall (from Section 2.5.3) that argument reduction in genus 1 allows us to work under the conditions (2.5.11):
τ ∈ F, |Re(z)| ≤ 1 2 , 0 ≤ Im(z) ≤ Im(τ ) 2 .
We go slightly further than these conditions in order to justify the forthcoming results.
Proof. We compute the following upper bounds using the triangle inequality and Lemma 5.1.2:
|θ 0 (z, τ ) + θ 1 (z, τ ) -2| ≤ 2 n≥2,n even |q n 2 (w 2n + w -2n )| ≤ 2 n≥2,n even |q| n 2 (1 + |q| -n/2 ) ≤ 2 n≥1 |q| 4n 2 (1 + |q| -n ) ≤ 2|q| 3 + 2|q| 4 + 2|q| 16 1 -|q| 20 + 2|q| 14 1 -|q| 19 |θ 0 (z, τ ) -θ 1 (z, τ )| ≤ 2 n≥1,n odd |q| n 2 (1 + |q| -n/2 ) ≤ 2|q| 1/2 + 2|q| + 2|q| 9 1 -|q| 16 + 2|q| 7.5 1 -|q| 19
Numerical evaluation shows that
2|q| 1/2 + 2|q| + 2|q| 9 1 -|q| 16 + 2|q| 7.5 1 -|q| 19 < 2 -2|q| 3 + 2|q| 4 + 2|q| 16 1 -|q| 20 + 2|q| 14 1 -|q| 19
for Im(τ ) ≥ 0. 335, which proves the lemma.
The result, and our proof, seems coarser and less subtle than in [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF]; however, note that Cox's methods cannot be applied here 6 .
Note that the same method can be used to prove that Proposition 6.2.5. For any τ such that Im(τ ) > 0.251 (in particular for τ ∈ F) we have Re θ1(0,τ ) θ0(0,τ ) > 0. Since {τ ∈ H | Im(τ ) > 0.251} ⊂ D 1 , the domain for which we have a good choice of sign for F is strictly smaller than the one for which we have a good choice of sign for the AGM (Proposition 3.2.9). Figure 6.1 displays the different domains. 0 Im(τ ) = 0. 335 Im(τ ) = 0.251 Figure 6.1: The green stripes represent a domain for which the choice of sign in F is good (assuming z is reduced). The blue dots represent Cox's domain for which the choice of sign is good in the AGM (i.e. for quotients of squares of theta-constants at 2τ ). Above the orange line is a domain for which the choice of sign is good for quotients of theta-constants at τ ; this domain is strictly contained in the one with blue dots.
6 For instance, Cox uses the invariance of θ 2 1 θ 2 0 (0, τ ) under the action of the subgroup Γ 2 (4) (i.e. Proposition 3.2.13), which does not hold for θ 1 θ 0 (z, τ ).
We are now ready to prove: Theorem 6.2.6. Let (x n , y n , z n , t n ) be the optimal F sequence starting with θ 2 0 (z, τ ), θ 2 1 (z, τ ), θ 2 0 (0, τ ), θ 2 1 (0, τ ). If (z, τ ) ∈ A and Im(τ ) ≥ 0. 335, we have
(x n , y n , z n , t n ) = θ 2 0 (z, 2 n τ ), θ 2 1 (z, 2 n τ ), θ 2 0 (0, 2 n τ ), θ 2 1 (0, 2 n τ )
Proof. This is true for n = 0; we prove the statement inductively. Suppose it is true for n = k ≥ 0. For any τ , we have (using Lemma 5.1.2):
θ 0 (0, τ ) = 1 + 2q + c, |c| ≤ 2|q| 4 1 -|q| 5
For any τ such that Im(τ ) ≥ 0. 335, we have 2|q| ≤ 0.676 and |c| ≤ 0.027; hence Re(θ 0 (0, 2 k τ )) > 0 for any k, which proves that √ z k = θ 0 (0, 2 k τ ). Proposition 6.2.4 shows that Re θ1(0,2 k τ ) θ0(0,2 k τ ) > 0, and we also have Re
√ t k √ z k ≥ 0 since the choice of roots is good, hence √ t k = θ 1 (0, 2 k τ ).
Equation (2.5.4) then shows that t k+1 = θ 2 1 (0, 2 k+1 τ ) and z k+1 = θ 2 0 (0, 2 k+1 τ ). Similarly, given that (z, τ ) ∈ A and using Lemma 5.1.2, we find:
|θ 0 (z, τ ) -1| ≤ |q| 1/2 + |q| + |q| 3 + |q| 4 + |q| 7/2 + |q| 9 + 2|q| 14 1 -|q| 2 (6.2.2)
For Im(τ ) ≥ 0. 335, this is strictly smaller than 1; hence Re(θ 0 (z, τ )) > 0, which proves that √ x k = θ 0 (z, 2 k τ ). Again, Proposition 6.2.4 proves that Re θ1(z,2 k τ ) θ0(z,2 k τ ) > 0, and since the choice of signs is good, Re
√ y k √ x k ≥ 0, necessarily √ y k = θ 1 (z, 2 k τ ).
This along with the τ -duplication formulas (Equations (2.5.4)) finishes the induction.
Note that a consequence of this proposition is: Proposition 6.2.7. For (z, τ ) ∈ A and Im(τ ) ≥ 0. 335, the optimal F sequence for θ 2 0 (z, τ ), θ 2 1 (z, τ ), θ 2 0 (0, τ ), θ 2 1 (0, τ ) converges quadratically, and
F ∞ (θ 2 0 (z, τ ), θ 2 1 (z, τ ), θ 2 0 (0, τ ), θ 2 1 (0, τ )) = (1, 1, 1, 1).
A function with quasi-optimal time evaluation
The strategy of [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF] is to use an homogenization of the AGM to get a function f τ : C → C, on which Newton's method can be applied. To generalize this, we homogenize the function F ∞ , which gives a function from C 2 to C 2 . We call this function G; this function is a major building block for the function we use to compute our two parameters z, τ using Newton's method.
Proposition 6.2.8. Let λ, µ ∈ C * . Let ((x n , y n , z n , t n )) n∈N be the optimal F sequence for (x, y, z, t), and ((x n , y n , z n , t n )) n∈N the optimal F sequence for (λx, λy, µz, µt). Put lim n→∞ z n = z ∞ and lim n→∞ z n = z ∞ . Then we have
µ = z ∞ z ∞ , λ = lim n→∞ x n z ∞ 2 n × z ∞ lim n→∞ xn z∞ 2 n × z ∞
Proof. We prove by induction that
x n = n λ 1/2 n µ 1-1/2 n x n , y n = n λ 1/2 n µ 1-1/2 n y n , z n = µz n , t n = µt n ,
where Re(λ 1/2 n ) ≥ 0, Re(µ 1-1/2 n ) ≥ 0, and n is a 2 n -th root of unity. This is enough to prove the proposition above, since then
lim n→∞ x n z ∞ 2 n = lim n→∞ λµ 2 n -1 x n z ∞ 2 n = λ µ lim n→∞ x n z ∞ 2 n .
Since this is true for n = 0, suppose this is true for n = k. We have
z k+1 = z k + t k 2 = µz k+1 .
As for t k+1 , we can write
z k = z √ µ √ z k , t k = t √ µ √ t k
where z = ±1 and t = ±1, and the square roots are taken with positive real part. This gives
√ t k √ z k = t z √ t k √ z k .
Since the sequences we are considering are optimal, we have either Re
√ t k √ z k > 0 and Re √ t k √ z k > 0, or Im √ t k √ z k ≥ 0 and Im √ t k √ z k
≥ 0 if the real parts are zero. In both cases, this proves that z = t . Hence
t k+1 = ( z √ µ √ z k ) z √ µ √ t k = µt k+1 .
As for the other coordinates, we have
x k = x √ k λ 1/2 k+1 µ 1/2-1/2 k+1 √ x k , y k = y √ k λ 1/2 k+1 µ 1/2-1/2 k+1 √ y k
where the roots are taken with positive real part, and x , y ∈ {-1, 1}. Once again, since √
y k √ x k = y x √ y k √
x k and since the sequences are optimal, we have x = y ; hence
x k+1 = x k z k + y k t k 2 = k+1 λ 1/2 k+1 µ 1-1/2 k+1 √ x k √ z k + √ y k √ t k 2 = k+1 λ 1/2 k+1 µ 1-1/2 k+1 x k+1
where k+1 = x z √ k is indeed such that 2 k+1 k+1 = 1. This proves the proposition.
The formulas can be simplified in our case since lim n→∞ θ(z, 2 n τ ) 2 n = 1 (Proposition 2.1.9). We thus define the function G as follows: Definition 6.2.9. The function G : C 4 → C 2 is defined as
G(x, y, z, t) = lim n→∞ x n z ∞ 2 n × z ∞ , z ∞
where z ∞ = lim n→∞ z n .
Proposition 6.2.10. Let z, τ be as in the hypotheses of Theorem 6.2.6. Then for any λ, µ ∈ C * we have G λθ 2 0 (z, τ ), λθ 2 1 (z, τ ), µθ 2 0 (0, τ ), µθ 2 1 (0, τ ) = (λ, µ).
This is a consequence of Proposition 6.2.8 and Theorem 6.2.6. For instance, we get
G 1, θ 2 1 (z, τ ) θ 2 0 (z, τ ) , 1, θ 2 1 (0, τ ) θ 2 0 (0, τ ) = 1 θ 2 0 (z, τ ) , 1 θ 2 0 (0, τ ) . (6.2.3)
This is similar to Equation (6.1.1), and will play a similar role in the computation of θ(z, τ ).
Convergence
Let us start by showing that, contrary to the AGM and despite Proposition 6.2.7, an optimal F sequence does not always converge quadratically; for instance, the optimal F sequence for (2, 2, 1, 1) is ((2 1/2 n , 2 1/2 n , 1, 1)) n∈N , which does not converge quadratically. This is a big difference from the AGM, and this is why we are reluctant to call optimal F sequences a "generalization of the AGM". However, we now show that the sequence (λ n ) = Proof. The upper bound result follows from a trivial induction using the equations defining F . We now prove the existence of c. Recall that the choice of signs for the square roots are good at all steps, since we assume (x n , y n , z n , t n ) is an optimal F sequence. Thus there exists α, β ∈ C * such that Re(x 1 /α) > 0, Re(y 1 /α) > 0, Re(z 1 /β) > 0, Re(t 1 /β) > 0.
For instance, in most cases one can take α = x 1 and β = z 1 . Let us assume without loss of generality that |α| = |β| = 1, and let c = min(Re(x 1 /α), Re(y 1 /α), Re(z 1 /β), Re(t 1 /β)
| √ x n + √ y n | ≥ √ 2c | √ z n + √ t n | ≥ √ 2c
and hence
| √ x n - √ y n | ≤ |x n -y n | √ 2c , | √ z n - √ t n | ≤ |z n -t n | √ 2c
Proof. The parallelogram identity gives . This proves that (z n -t n ) converges quadratically. Take a c 1 ≥ 0; for now, the value of c 1 is unimportant, but throughout the proof we will find sufficient conditions on c 1 for the result to hold. We then consider the first integer n for which |z n -t n | ≤ η with η = 2 -P -c1-n ; the existence of n is guaranteed by the quadratic convergence of (z n -t n ). Note that Aη ≤ π 8 ≤ 1 2 , since min(|z 0 |, |t 0 |) ≥ 2 -P ; we will make use of this fact repeatedly throughout the proof.
| √ x n + √ y n | 2 = 2| √ x n | 2 + 2| √ y n | 2 -| √ x n - √ y n | 2 ≥ 2| √ x n | 2 + 2| √ y n | 2 -| √ x n + √ y n | 2 since
We then have for all k ≥ 0:
|z n+k -t n+k | ≤ A 2 k -1 η 2 k . Furthermore, |z n+1 -z n | = 1 2 |z n -t n |, so that |z ∞ -z n+k | ≤ 1 2 ∞ i=k A 2 i -1 η 2 i and we have |z ∞ -z n+k | ≤ 1 A (Aη) 2 k
, using the fact that Aη ≤ 1 2 . Finally, using Equation (6.2.1), one can write
|x n+k+1 -y n+k+1 | ≤ | √ x n - √ y n || √ z n - √ t n | 2 ≤ √ 2C |z n+k+1 -t n+k+1 | since z n+k+1 -t n+k+1 = ( √ z n+k - √ t n+k ) 2 2 ≤ √ 2AC|z n+k -t n+k |.
Now, define q n = (xn/z∞) 2 xn-1/z∞ , so that λn+1 λn = q 2 n n . Note that if one makes the approximation x n+k+1 ≈ y n+k+1 and z n+k+1 ≈ t n+k+1 ≈ z ∞ , we have x n+k+2 ≈ √ x n+k+1 z ∞ which gives q n+k+2 ≈ 1. We take a closer look at those approximations:
|x n+k+2 - √ x n+k+1 √ z n+k+1 | ≤ | √ y n+k+1 - √ x n+k+1 || √ z n+k+1 + √ t n+k+1 | 4 + | √ y n+k+1 + √ x n+k+1 || √ z n+k+1 - √ t n+k+1 | 4 ≤ √ C 2 (| √ y n+k+1 - √ x n+k+1 | + | √ z n+k+1 -t n+k+1 |) ≤ √ C 2 √ 2AC|z n+k -t n+k | | √ x n+k+1 + √ y n+k+1 | + √ 2A|z n+k+1 -t n+k+1 | ≤ B(Aη) 2 k
for some constant B, where we used Lemma 6.2.12 to get a lower bound on the denominators.
Put ξ = x n+k+2 - √ x n+k+1 √ z n+k+1 for notational convenience.
Then
|q n+k+2 -1| = |(x n+k+2 /z ∞ ) 2 -x n+k+1 /z ∞ | |x n+k+1 /z ∞ | ≤ |x 2 n+k+2 -x n+k+1 z ∞ | |x n+k+1 z ∞ | ≤ | ξ + √ x n+k+1 √ z n+k+1 2 -x n+k+1 z ∞ | |x n+k+1 z ∞ | ≤ |ξ 2 + 2ξ √ x n+k+1 √ z n+k+1 + x n+k+1 (z n+k+1 -z ∞ )| |x n+k+1 z ∞ | ≤ |ξ| 2 + 2|ξ √ x n+k+1 √ z n+k+1 | + |x n+k+1 (z n+k+1 -z ∞ )| |x n+k+1 z ∞ | ≤ B 2 (Aη) 2 k+1 + 2BC(Aη) 2 k + C A (Aη) 2 k c 2 ≤ B × (Aη) 2 k
for some constant B . This proves that (q n ) converges quadratically to 1. Now, put X k = 2 n+2 log 2 q n+2 + . . . + 2 n+k log 2 q n+k . Assume that B Aη ≤ 1, which is true for instance for c 1 ≥ log 2 ( π 8 B ). We have
|X k | ≤ k-2 i=0 2 n+i+2 |log 1 + (q n+i+2 -1)| ≤ B k-2 i=0 2 n+i+2 (Aη) 2 i 1 -(Aη) 2 i ≤ 8B 2 n k-2 i=0 2 i (Aη) 2 i using |log(1 + x)| ≤ |x| 1-|x|
, which is valid for any |x| ≤ 1, and Aη ≤ 1 2 . The series converge (using e.g. d'Alembert's ratio test), which means that X k is absolutely convergent; hence, (λ n ) converges.
Furthermore we can write
|X k | ≤ 8B 2 n k-2 i=0 2 i (Aη) 2 i ≤ 8B 2 n Aη 1 -2Aη ≤ 16AB 2 n η assuming Aη ≤ 1 4 , which is true for instance for c 1 ≥ 1. Assume that 16AB 2 n η ≤ 1 (which is true if c 1 ≥ log 2 (2πB )); we then have |X k | ≤ 1 and |q 2 n+2 n+2 . . . q 2 n+k n+k -1| = |exp X k -1| ≤ |X k | 1 -|X k |/2 ≤ 32AB 2 n η.
This proves that
|λ -λ n+1 | ≤ |λ n+1 |32AB 2 n η ≤ 32AB |λ n+1 |2 -c1 × 2 -P . For c 1 ≥ log 2 (8πB ) we have |λ -λ n+1 | ≤ |λn+1| 2
, which proves that |λ n+1 | ≤ 2|λ|. Hence if we suppose c 1 ≥ log 2 (64AB |λ|), we have that λ n+1 is an approximation of λ with P bits of absolute precision.
Collecting the conditions on c 1 throughout the proof, we see that the theorem holds for any
c 1 such that c 1 ≥ log 2 max(64AB |λ|, 8πB , 2)
which does not depend on P or n.
Algorithm 12 Compute G. Input: x, y, z, t with absolute precision P . Output: G(x, y, z, t) with absolute precision P .
1: Work at precision P. (x, y, z, t) ← F (x, y, z, t) 6: end while 7: (x, y, z, t) ← F (x, y, z, t)
8: return x z 2 n+1 × z, z
This gives an algorithm, Algorithm 12, to compute G(x, y, z, t). Proposition 6.2.14. For any arguments x, y, z, t ∈ C with absolute precision P , the time complexity for computing G(x, y, z, t) to absolute precision P is O(M(P ) log P ).
Proof. Theorem 3.2.6 (i.e. [Dup11, Theorem 12]) proves that, if n = max(log|log|z 0 /t 0 ||, 1) + log(P + c 1 ), a n is an approximation of AGM 1, | z0 t0 | with relative precision P bits. This proves that at the end of the algorithm, n = O(log P ); in fact, we have more precisely n ≤ log 2 P + C with C a constant independent of P . Finally, the next subsection proves that taking P = P + O(log P ), which proves the result.
Number of bits lost
We use Theorem 0.3.3 in order to evaluate the precision lost when computing G(x, y, z, t). First note that the upper and lower bounds on the terms of the sequence allow us to write ( 1
|z n | + 1 |t n | )( |z n | + |t n |) ≤ b/2
( 1
|z n | + 1 |t n | )( |x n | + |y n |) ≤ b
( 1
|x n | + 1 |y n | )( |z n | + |t n |) ≤ b
for some b > 1; for instance, one can take b = max 1, 4 C c . We prove in Section 6.3.3 that, for any values of theta we consider in our final algorithm, the same value for c and C can be taken.
We first evaluate a bound on the error incurred when computing F using Equation (6.2.1). Using those formulas allows us to get error bounds that are identical for F x and F y on the one hand, and F z and F t on the other hand. For simplicity, we assume that the error on z and t is the same (which we denote k z ), as well as the error on x and y (which we denote k x ). This gives:
|Re(F x ) -Re( Fx )| ≤ (1 + ( k z |z| + k z |t| )( |x| + |y|) + ( k x |x| + k x |y| )( |z| + |t|))2 -P |Re(F z ) -Re( Fz )| ≤ (1 + 2( k z |z| + k z |t| )( |z| + |t|))2 -P
We thus get the following recurrence relations when looking at what happens when applying F n times in a row:
k (n) x ≤ 1 + bk (n-1) z + bk (n-1) x , k (n) z ≤ 1 + bk (n-1) z
The last equation can be rewritten as k
(n) z + 1 b-1 ≤ b k (n-1) z + 1 b-1 , which gives k (n) z ≤ b n k z + 1 b-1 . The induction for x becomes k (n) x ≤ 1 + (k z + 1 b-1 )b n + bk (n-1) x
, which we solve:
k (n) x ≤ (1 + b + ... + b n )k x + nb n k z + 1 b -1 ≤ b n nk z + b + n b -1
For b > 1, we have for n large enough that k (n) ≤ b 2n , which ultimately means the number of bits lost when applying F n times in a row is bounded by 2n log b + 1.
Finally we need to find the number of bits lost in the computation of xn z∞ 2 n
. Call E k the error made after computing k squarings in a row; we have a recurrence relation:
E k+1 ≤ 2 + 4E k |x n /z ∞ | 2 k However, since (λ n ) converges, |λ n | ≤ ρ for some constant ρ; furthermore, for any k ≤ n, one has |x n /z ∞ | 2 k ≤ 1 + ρ z∞ .
Hence the recurrence becomes E k+1 ≤ 2 + 4 1 + ρ z∞ E k , which we solve to get
E n ≤ 2 C n+1 -1 C -1 ≤ 2 C -1 C n+1
with C = 4 1 + ρ z∞ . This means the number of bits lost after n successive squarings is at the most (n + 1) log C + 1 -log(C -1).
Overall, if we write that the final value of n in Algorithm 12 is bounded by log 2 P + C , we have: Proposition 6.2.15. The number of bits lost in the computation of G is bounded by
(2 log 2 b + log C )(log 2 P + C ) + log C + 2 -log C -1,
where C , C and b are constants in P .
As a result, one should take P = P + O(log P ) in Algorithm 12: this gives a quasi-linear complexity for the evaluation of G with absolute precision P .
Fast computation of θ
We use a similar method as [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF], that is to say finding a function F such that
F θ 2 1 (z, τ ) θ 2 0 (z, τ ) , θ 2 1 (0, τ ) θ 2 0 (0, τ ) = (z, τ ),
which can then be inverted using Newton's method. One can then compute θ(z, τ ) by, for instance, using Equation (6.2.3) and extracting a square root, determining the correct choice of sign by computing a low-precision (say, 10 bits) approximation of the value using the naive method; we use a different trick in our final algorithm (Algorithm 15). We build this function F using G as a building block.
Building F
Just as with the algorithm for theta-constants, we use formulas derived from the action of SL 2 (Z) on the values of θ in order to get multiplicative factors depending on our parameters; this will allow us to build a function which computes z, τ from the values θ i (z, τ ).
Definition 6.3.1. We define the function F : C 2 → C 2 as the result of Algorithm 13.
The forthcoming Proposition 6.3.2 is dedicated to explaining the calculations done by Algorithm 13.
Algorithm 13 Compute F. Input: s, t with absolute precision P . Output: F (s, t) with absolute precision P .
1: b ← √ 1 -t 2
Choose the root with positive real part [Cox84, Prop. 2.9]
2: a ← 1-st b 3: (x, y) ← G (1, a, 1, b) 4: (q 1 , q 2 ) ← G (1, s, 1, t)
5: Return log q2x q1y × q2/y -2π , i q2 y , choosing the sign of the square root so that it has positive imaginary part. Proposition 6.3.2. Let (z, τ ) satisfying the conditions
(z, τ ) ∈ A, |Re(τ )| ≤ 1 2 , |Re(z)| ≤ 1 8 , Im(τ ) ≥ 0. 335, Im -1 τ ≥ 0. 335 . Then F θ 2 1 (z, τ ) θ 2 0 (z, τ ) , θ 2 1 (0, τ ) θ 2 0 (0, τ ) = (z, τ )
Remark 6.3.3. Let (z, τ ) satisfying conditions 2.5.11 and such that 1 ≤ |τ | < 2. Then z 4 , τ 2 satisfy the hypotheses of this theorem. We use this, along with Proposition 6.2.3, in our final algorithm.
Proof of Proposition 6.3.2. Equation (6.2.3) proves that (q 1 , q 2 ) =
1 θ0(z,τ ) 2 , 1 θ0(0,τ ) 2 .
Furthermore, using Jacobi's formula (2.5.6) and the equation defining the variety (2.5.7), it is easy to see that b = θ2(0,τ ) 2 θ0(0,τ ) 2 and a = θ2(z,τ
) 2 θ0(z,τ ) 2 . The formulas in [Mum83, Table V, p.36] give θ 2 0 (z, τ ), θ 2 2 (z, τ ), θ 2 0 (0, τ ), θ 2 2 (0, τ ) = λθ 2 0 z τ , -1 τ , λθ 2 1 z τ , -1 τ , µθ 2 0 0, -1 τ , µθ 2 1 0, -1 τ with λ = e -2iπz 2 /τ -iτ
, µ = 1 -iτ . Proposition 6.2.2 shows we can apply Theorem 6.2.6 to z τ , -1 τ , which proves that
G θ 2 0 (z, τ ), θ 2 2 (z, τ ), θ 2 0 (0, τ ), θ 2 2 (0, τ ) = e -2iπz 2 /τ -iτ , 1 -iτ .
By homogeneity, we get in Step 3 (x, y) = e -2iπz 2 /τ -iτ θ0(z,τ ) 2 , 1 -iτ θ0(0,τ ) 2 . We thus have
x q 1 , y q 2 = e -2iπz 2 /τ -iτ , 1 -iτ ,
and Step 5 consists precisely in extracting (z, τ ) from these.
Remark 6.3.4. In genus 1, we can output either z, τ or λ, µ without affecting the rest of the algorithm, although returning z, τ requires one more logarithm computation. We choose to output z, τ for the clarity of the presentation.
This means that, starting from the knowledge of z and τ with precision P and a low-precision approximation of the quotients θ1(z,τ ) θ0(z,τ ) and θ1(0,τ ) θ0(0,τ ) , one can compute those quotients with precision P using Newton's method. This is Algorithm 14. Note that we put P a precision that is large enough to ensure that the final result is accurate up to 2 -P ; we discuss the matter of precision loss later in this subsection.
We make a few remarks:
• Much in the same way as [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF], we find it preferable to use finite differences to compute the coefficients a 11 , a 21 , a 22 of the Jacobian, as it does not require the computation of the derivative of F, which could be tedious. This requires the computation of F(s+ , t), F(s, t+ ) and F(s, t).
Algorithm 14 Compute θ 2 0 (z, τ ), θ 2 1 (z, τ ), θ 2 0 (0, τ ), θ 2 1 (0, τ ) with absolute precision P . Input: (z, τ ) with absolute precision P , satisfying the conditions of Proposition 6.3.2.
1: Compute θ 2 0,1 (z, τ ), θ 2 0,1 (0, τ ) with absolute precision P 0 using Algorithm 7. Return (a,s,b,t).
2: s ← θ1(z,τ ) 2 θ0(z,τ ) 2 , t ← θ1(0,τ ) 2 θ0(0,
• The value of P 0 has to be large enough that Newton's method converges. We note that, in general, a lower bound on P 0 may depend on the arguments; for instance, [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF] experimentally finds 4.53 Im(τ ) to be a suitable lower bound for P 0 when computing thetaconstants. However, we outline in the next section a better algorithm which only uses the present algorithm for z, τ within a compact set; hence, P 0 can be chosen to be a constant, and we use in practice P 0 = 30000.
We do not provide a full analysis for Algorithm 14; we outline in the next section a better algorithm, which uses this algorithm as a subroutine, and we will provide a full analysis at that time. We can however sketch the proof of the following result: Proposition 6.3.5. For (z, τ ) with absolute precision P and satisfying the hypotheses of Proposition 6.3.2, Algorithm 14 computes θ 2 0 (z, τ ), θ 2 1 (z, τ ), θ 2 0 (0, τ ), θ 2 1 (0, τ ) with absolute precision P in time bounded by kM(P ) log P , where k is a constant in P but a function of z, τ .
Proof (sketch). The computation of G at precision p is done in time O(M(p) log p) using Algorithm 12; this running time depends on z, τ , since it depends on the bounds C, c that one can write for |x n |, |y n |, |z n |, |t n |. Hence, the cost of evaluating F at precision p is O(M(p) log p) bit operations, and the fact that we double the working precision at every step means that the algorithm is as costly as the last iteration (see e.g. the proof of Theorem 0.3.9). Furthermore, one should choose P so that the final result is accurate with absolute precision P . This means compensating the loss of absolute precision incurred during the computation of F; in general, this only depends on Im(τ ) and linearly in log p. In addition, recall (Corollary 0.3.8) that each step of Newton's iteration roughly doubles the accuracy of the result, i.e. gives approximations with absolute precision 2P -δ from approximations with absolute precision P . In practice, we found δ = 4 to be enough; but determining the number of bits lost at each step can be done in the same way as [ET14a, p. 19]: if s (n-1) and s (n-2) agree to k bits, and s (n) and s (n-1) agree to k bits, the number of bits lost can be computed as 2k -k . In the end, working at precision P = P + c log P + d, with c, d independent of P but functions of z, τ , is enough to compensate all the precision loss; this proves that the running time of this algorithm is asymptotically O(M(P ) log P ). 0 Im(τ ) = 0. 335 P = 25 Im(τ ) Figure 6.2: In Algorithm 15, τ can either be in the green zone (in which case the naive algorithm is used) or in the blue zone, in which case we divide τ by 2 until it is in the red zone, apply Algorithm 14, then use τ -duplication formulas to recover the result. Note that z, which is not represented here, also needs to be divided by a power of 2 to get (z, τ ) ∈ A.
Computing θ(z, τ ) in uniform quasi-optimal time
We now show an algorithm with complexity bounded by kM(P ) log P bit operations, with k a constant independent in z and τ , which computes θ(z, τ ) for any (z, τ ) satisfying conditions (2.5.11). We use the same strategy as [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF]; namely, we use the naive algorithm for large Im(τ ); and for smaller values, we divide τ by a power of 2 to get arguments within a compact set, and we also divide z by a power of 2 in order to have (z, τ ) ∈ A. In fact, we divide τ, z by 2 until 1 ≤ |τ | < 2, then use Note 6.3.3 and Proposition 6.2.3 to get z 2 t , τ 2 s in a compact set and belonging to A. We then apply Algorithm 14 to these arguments; its running time is then uniform in the arguments since they are in a compact set. We then alternate between using Equation (2.5.4) to double the second argument and Equation (2.5.5) to double the first argument, until finally recovering θ(z, τ ). Figure 6.2 summarizes different domains τ can belong to in the algorithm.
A few notes on this algorithm:
• As we mentioned in Section 5.1, the computation of the complex exponentials in the naive algorithm must use the UniformExp subroutine (Prop 5.1.4) in order to get a uniform complexity for the naive algorithm.
• We note that, at several steps of the algorithm (e.g. Steps 9, 14, 16) we need to compute theta-constants from their square. The correct choice of signs is given by the proof of Theorem 6.2.6, which shows that Re(θ 0 (0, τ )) ≥ 0 and Re(θ 1 (0, τ )) ≥ 0; and furthermore, since Re(q 1/4 ) ≥ |q| 1/4 cos(π/8), we also have Re(θ 2 (0, τ )) ≥ 0.
• Taking τ 2 = τ 1 /2 allows us to use the τ -duplication formulas for θ 2 (Equation (2.5.4)) in step 8, instead of using Equation (2.5.6) and Equation (2.5.7) to recover θ 2 ; this is more efficient and loses fewer bits.
• The knowledge of θ 2 2 (2 i-2 z 1 , 2 i τ ) is enough for the z-duplication formulas of step 17, and it can be computed directly from θ 0 and θ 1 using the τ -duplication formulas for θ 2 .
Algorithm 15 Compute θ 0 (z, τ ), θ 1 (z, τ ), θ 2 (z, τ ), θ 0 (0, τ ), θ 1 (0, τ ), θ 2 (0, τ ) with absolute precision P . Input: τ ∈ F and z reduced, with absolute precision P .
1: if P ≤ 25 Im(τ ) then 2:
Compute θ 0,1,2 (z, τ ), θ 0,1,2 (0, τ ) with precision P using the naive method (Algorithm 7 and Algorithm 8).
3: else
4:
Take s ∈ N such that 1 ≤ |τ |/2 s < 2 5:
• Computing θ 3 (z, τ ) is also possible; one should use a partial summation if P ≤ 25 Im(τ ).
In the other case, since all the z-duplication formulas for θ 3 (z, τ ) involve a division by θ 2 (0, τ ) [Mum83, p.22], it is just as efficient to simply use Equation (5.1.2) after Step 20, then extract the square root. The square root extraction loses O(Im(τ )) = O(P ) bits, and this also gives a quasi-optimal time algorithm.
Proving the correctness of the algorithm
This section is devoted to proving the following theorem: Theorem 6.3.6. For any τ, z with absolute precision P satisfying conditions (2.5.11), Algorithm 15 with P = 2P computes θ 0,1,2 (z, τ ), θ 0,1,2 (0, τ ) with absolute precision P in cM(P ) log P bit operations, where c is independent of z, τ .
As we discussed in Section 2, this also gives an algorithm that computes θ(z, τ ) for any (z, τ ) ∈ C × H; one simply needs to reduce τ to τ ∈ F, then reduce z to z , and deduce θ(z, τ ) from θ(z , τ ) using Proposition 2.1.7 and Theorem2.4.4. This causes a loss of absolute precision which depends on z and τ , and this algorithm is no longer uniform.
We need to perform an analysis of the number of bits lost by the algorithm; once again, we use Theorem 0.3.3. For each step, we proceed as follows: we assume that the error on all the quantities is bounded by k (i.e. if â is our approximation of a ∈ C, we have |a -â| < k), and we determine a factor x such that the error on the quantities we get after the computation is bounded by xk. We then declare the number of bits lost in this step to be log 2 x; this gives a very loose upper bound, but simplifies the process.
Finally, we also need to prove that the hypotheses made in Sections 6.2.4 and 6.2.5 are satisfied in Step 7 of the algorithm. This is necessary to prove that the sequence (λ n ) we consider is quadratically convergent, and that the number of bits lost is only O(log P ). We prove this in Section 6.3.3, which then completes the proof that the running time is indeed uniform and quasi-optimal.
Invertibility of the Jacobian
Newton's method can only be applied if the Jacobian of the function we invert (here, F) is invertible. The following proposition establishes this: Proposition 6.3.7. The Jacobian of F at
θ 2 1 (z,τ ) θ 2 0 (z,τ ) , θ 2 1 (0,τ ) θ 2 0 (0,τ ) is of the form a b 0 c with a, c = 0.
This proves that the Jacobian is invertible on a neighbourhood of
θ 2 1 (z,τ ) θ 2 0 (z,τ ) , θ 2
1 (0,τ ) θ 2 0 (0,τ ) . In practice, numerical experiments indicate that for any z, τ in the compact we consider, the Jacobian of the system seems invertible on a ball of radius 10 -P0 with P 0 = 30000, and that this base precision is enough to make Newton's method converge.
Proof. We have
a = ∂F 1 ∂z 1 θ 2 1 (z, τ ) θ 2 0 (z, τ ) , θ 2 1 (0, τ ) θ 2 0 (0, τ ) c = ∂F 2 ∂z 2 θ 2 1 (z, τ ) θ 2 0 (z, τ ) , θ 2 1 (0, τ ) θ 2 0 (0, τ )
.
Given the expression of the function F , where only the third and fourth argument influence the third and fourth coordinate, we have that c = ∂fτ ∂z θ 2 1 (0,τ ) θ 2 0 (0,τ )
where f τ is the function defined in Section 6.1.1 such that f τ θ 2 1 (0,τ ) θ 2 0 (0,τ ) = 0. We then have c = 0 by [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF]Prop. 4.3,p. 102]. We prove a = 0 using the chain rule: define u : (z, τ ) →
θ 2 1 (z,τ ) θ 2 0 (z,τ ) ,
θ 2 1 (0,τ ) θ 2 0 (0,τ ) . Then (F•u)(z, τ ) = (z, τ ) and we thus have
1 = ∂(F • u) 1 ∂z (z, τ ) = a × ∂u 1 ∂z (z, τ ) + ∂F 1 ∂u 2 θ 2 1 (z, τ ) θ 2 0 (z, τ ) , θ 2 1 (0, τ ) θ 2 0 (0, τ ) × ∂u 2 ∂z (z, τ ) = a × ∂u 1 ∂z (z, τ ) since ∂u 2 ∂z (z, τ ) = 0. Hence a = ∂u 1 ∂z (z, τ ) -1 = 2 θ 1 (z, τ ) θ 0 (z, τ ) θ 1 (z, τ ) θ 0 (z, τ ) = 2 θ 1 (z, τ ) θ 3 0 (z, τ ) (θ 1 (z, τ )θ 0 (z, τ ) -θ 0 (z, τ )θ 1 (z, τ )) = 2 θ 1 (z, τ ) θ 3 0 (z, τ ) πθ 2 2 (0, τ )θ 2 (z, τ )θ 3 (z, τ ) = 0
the last equality deriving from Formula 10 in [Web21, Section 23]. This finishes the proof.
Naive algorithm
Since P ≤ 25 Im(τ ), we have
P +2
π log 2 e Im(τ ) + 1 ≤ 4, and hence at most 4 terms are needed to compute an approximation with absolute precision P . Theorem 5.1.5 shows that at most 9 bits are lost during these computations.
The cost of the algorithm is then asymptotically dominated by the cost of computing π, q and v 1 with precision P = P + 10. As we highlighted in Proposition 5.1.4 and Note 5.1.6, the cost of these steps is O(M(P ) log P ) independently of z and τ ; hence, the complexity of this step is quasi-linear and uniform in z, τ .
Square root extraction
Steps 9, 16 and 14 of Algorithm 15 require extracting square roots, which multiply the error by Step 14 loses more bits since θ 2 (0, τ ) is smaller; indeed, |θ 2 (0, τ )| ∼ |q| 1/4 . This means the number of bits lost during this step is bounded by log|q| 8 = π 8 Im(τ ) log 2 e.
Duplication formulas
Algorithm 15 uses both τ -duplication formulas and z-duplication formulas, and we need to analyse how many bits are lost for each application of those formulas. The τ -duplication formulas are nothing more than applying F to θ 2 0,1 (z, τ ) and θ 2 0,1 (0, τ ). However, the analysis here is simpler than in section 6.2.5, because we do not need to compute the square roots of θ 0,1 (z, τ ), since they are directly given by step 17. Hence we just need to account for the error of the additions, subtractions and multiplications in Equation (6.2.1); since all the quantities are bounded, this means each step loses a constant number g of bits (our analysis, which is tedious and unilluminating, shows that g ≤ 10.48). In the end, the τ -duplication formulas account for the loss of g × s ≤ g log P bits of precision.
As for the z-duplication formulas, we need to perform several analyses. Looking at the z-duplication formulas (Equation (2.5.5)), one needs to evaluate the fourth power of theta functions, then add them; then evaluate the third power of theta-constants, then perform a division. Computing the error using the formulas from Theorem 0.3.3 is rather straightforward when one has bounds on those quantities, which are given by the following theorem:
Lemma 6.3.8. Assume Im(τ ) > √ 3/2. Then 0.859 ≤ |θ 0,1 (0, τ )| ≤ 1.141, |θ 2 (0, τ )| ≤ 1.018
We also have:
• Suppose that 0 ≤ Im(z) ≤ Im(τ ) 8 . Then |w| -2 ≤ e π Im(τ )/4 and 0.8038
≤ |θ 0,1 (z, τ )| ≤ 1.1962 |θ 2 (z, τ )| ≤ 1.228 • Suppose that 0 ≤ Im(z) ≤ Im(τ ) 4 . Then |w| -2 ≤ e π Im(τ )/2 and 0.6772 ≤ |θ 0,1 (z, τ )| ≤ 1.3228 |θ 2 (z, τ )| ≤ 1.543
Proof. The bounds on the theta-constants come from [Dup11, p. 5], who proves |θ 0,1 (0, τ )| ≤ |θ 2 (z, τ ) -q 1/4 (w + w -1 )| ≤ |q| 15/8 1 -|q| 3/8 ≤ 0.009 so |θ 2 (z, τ )| ≤ |q| 1/4 (|w| + |w| -1 ) + 0.009 ≤ 1.228. In the second case:
|θ 0,1 (z, τ ) -1| ≤ |q| 1/2 + |q| + |q| 3 1 -|q| ≤ 0.3228 |θ 2 (z, τ ) -q 1/4 (w + w -1 )| ≤ |q| 5/4 + |q| 9/4 + . . . ≤ |q| 5/4 1 -|q| ≤ 0.0357.
Combining these bounds with formulas from Theorem 0.3.3 gives the bounds
|θ 0 (z 1 /2, τ 1 ) - θ0 (z 1 /2, τ 1 )| ≤ (20051 + 1819k θ1(z2,τ1) + 1967k θ2(z2,τ1) + 33516k θ0(0,τ1) )2 -P |θ 1 (z 1 /2, τ 1 ) - θ1 (z 1 /2, τ 1 )| ≤ (20051 + 1819k θ0(z2,τ1) + 1967k θ2(z2,τ1) + 33516k θ1(0,τ1) )2 -P
which means losing at most 16 more bits of precision.
Step 19 causes a larger number of lost bits. We use Equation (2.5.7) instead of the third z-duplication formula, because dividing by θ 2 (0, τ ) 2 loses fewer bits than dividing by θ 2 (0, τ ) 3 , and we only need the knowledge of θ 2 2 (2 s-1 z, 2 s τ ) for the next step anyway. This amounts to computing:
θ 2 2 (z, τ ) = θ 2 0 (z, τ )θ 2 0 (0, τ ) -θ 2 1 (z, τ )θ 2 1 (0, τ ) θ 2
2 (0, τ ) Computing the numerator multiplies the error by a factor at most 60, and the norm of this numerator is bounded by 4.557; we then get from Theorem 0.3.3 that the error is bounded by m |θ2(0,τ )| 8 ∼ m|q| -2 , with m ≤ 1600. In the end, we lose at most 2π(log 2 e) Im(τ ) + 11 bits.
Finally, we also lose many bits during the last application of the z-duplication formulas in step 20, since the formula for θ 2 (z, τ ) requires dividing by θ 2 (0, τ ) 3 . The error is thus multiplied by |q| -3 up to a constant factor; this means a loss of 3π Im(τ ) log 2 e bits, up to a constant.
In the end, we see that the number of lost bits is bounded by (2π + π/4 + 3π) Im(τ ) log 2 e + c log P + d; given that P ≥ 25 Im(τ ), and that 5.25π log 2 e ≤ 23.8, the number of bits lost is thus less than P .
Proof of correctness
Proposition 6.3.9. For all τ ∈ F and z reduced, with absolute precision P , Algorithm 15 computes θ 0 (z, τ ), θ 1 (z, τ ), θ 2 (z, τ ), θ 0 (0, τ ), θ 1 (0, τ ), θ 2 (0, τ ).
Proof. The trickiest part is checking that F returns the right result, as in Proposition 6.3.2. We have that 1 ≤ |τ 1 | < 2 and Im(z 1 ) ≤ Im(τ1) 2 ; conditions (2.5.11) and the condition (z 2 , τ 2 ) ∈ A hold, since we have:
|Re(τ 2 )| ≤ 1/2 s+2 ≤ 1/4, 0. 335 ≤ √ 3 4 ≤ Im(τ 2 ) ≤ 1, |Re(z 2 )| ≤ 1/2 s+3 ≤ 1/8, 0 ≤ Im(z 2 ) ≤ Im(τ 2 ) 4 .
This means the choices of signs are always good, and hence our result is indeed theta functions and theta-constants. Finally, collecting the number of lost bits in the previous subsections show that
P = 1.952P + c log P + d ≤ 2P
is enough to get a result which is accurate to absolute precision P ; this also means that we indeed never have an error k bigger than 2 (2P )/2 , which is necessary to apply Theorem 0.3.3.
Proof of quasi-optimal running time
It remains to prove that the complexity is the right one. If P ≥ 25 Im(τ ), log 2 P > log 2 Im(τ ) + 4.7, which means s ≤ log P and the cost of Steps 11 to 18 is O(M(P) log P ); this running time is uniform in z, τ , as highlighted in Note 5.1.6. We then need prove that there is a C > 1 such that, for all z 2 , τ 2 that we consider,
θ 2 1 (0, τ ) θ 2 0 (0, τ ) ≤ C, θ 2 2 (0, τ ) θ 2 0 (0, τ ) ≤ C, θ 2 1 (z, τ ) θ 2 0 (z, τ ) ≤ C, θ 2 2 (z, τ ) θ 2 0 (z, τ ) ≤ C.
This is a direct consequence of the fact that z 2 , τ 2 are within a compact set that does not contain any zero of θ(z, τ ); hence one can write (non-zero) lower and upper bounds for any of the values of theta. One can be more precise using the same reasoning as in Lemma 6.3.8: since √ 3/4 ≤ Im(τ 2 ) ≤ 1:
|θ 0,1 (z 2 , τ 2 ) -1| ≤ |q| 1/2 + |q| + |q| 3 1 -|q| ≤ 0.7859, |θ 2 (z 2 , τ 2 )| ≤ 1 + |q| 1/4 + |q| 5/4 1 -|q| ≤ 1.958.
This gives C ≤ 83.64 and c ≥ 0.042 2 1.7859 2 1 1808 . Furthermore, with a careful analysis, one can prove that c 1 = 55 is enough in Theorem 6.2.13.
The existence of c and C proves that (λ n ) is quadratically convergent. The last thing left to analyze is the number of bits lost in the computation, which is given by Section 6.2.5. We note that the fact that z 2 , τ 2 are within a compact shows that the constants b, C , C exist and are independent of z, τ . (For implementation purposes, we were able to determine experimentally that C = 2.05; furthermore a rough analysis, which we do not detail here, showed that log C ≤ 11.65 and log 2 b ≤ 15.52.) This makes the running time of Step 7 only dependent on P , which was the point of the uniform algorithm. In particular, the number of bits lost during the computation of G or in F can be written as c 1 log P + c 2 , with c 1 , c 2 constants independent in z, τ . Hence, the number of bits that are lost in the whole of Step 7 is
n i=1 δ + c 1 log(p/2 i ) + c 2 ≤ G log P + H
for some constants G, H, since the number n of steps in Newton's method is O(log P ).
This means the computations in step 7 should be carried out at precision P = 2P +G log P + H, so that the result is accurate to 2P bits. This gives a running time of O(M(P ) log P ), independently of z and τ . All the other steps cost no more than O(M(P )) bit operations, which indeed gives us a running time of O(M(P ) log P ).
Implementation results
An implementation using the GNU MPC library [START_REF] Enge | GNU MPC -A library for multiprecision complex arithmetic with exact rounding[END_REF] for arithmetic on multiprecision complex numbers was developed; we compared our algorithm to our own implementation of Algorithm 7 using MPC7 . The code is distributed under the GNU General public license, version 3 or any later version (GPLv3+); it is available at the address http://www.hlabrande.fr/pubs/fastthetas.tar.gz
We compared those implementations to MAGMA's implementation of the computation of θ(z, τ ) (function Theta). Each of those implementations computed θ(z, τ ) at different precisions for z = 0.123456789 + 0.123456789i and τ = 0.23456789 + 1.23456789i; the computations took place on a computer with an Intel Core i5-4570 processor. The results are presented in Figure 6.3 and Table 6.1.
Our figures show that our algorithm outperforms Magma even for computations at relatively low precision8 (i.e. 1000 decimal digits). Furthermore, it is faster than our implementation of Algorithm 7 for precisions greater than 260 000 decimal digits. Hence, a combined algorithm which uses the naive method for precisions smaller than 260 000 decimal digits, and our method for larger precision, will yield the best algorithm, and outperform Magma in all cases, as shown in Table 6.1. Note that Algorithm 7 returns 4 values (the fundamental theta functions and theta-constants), while our quasi-linear time algorithm (Algorithm 15) returns in fact 6 values. Our implementation of a naive algorithm which computes all 6 values (i.e. combining Algorithm 7 and Algorithm 8) is slower than our quasi-linear time algorithm for P = 30 000 decimal digits; this is the reason why we chose to take P 0 = 30 000 in Algorithm 14, i.e. start the Newton iterations on an approximation of the quotients with precision 30 000 digits.
Batching computations of theta for different z
To finish this chapter, we consider the problem of batching the computation of θ at different z -that is to say, computing θ(z k , τ ) for z 1 , ..., z n ∈ C faster than with simply n evaluations. We find speedups by a constant factor for both the naive algorithm and the quasi-linear algorithm; note that there does not seem to be a speedup if the value of τ is also different. These results are useful in a variety of settings, including in our case Chapter 9. However, we did not implement those speedups, since our main application for them (Algorithm 25) has asymptotic complexity which is worse than the state of the art anyway, which means that constant-factor speedups are not that interesting. Remark 6.5.1. The naive algorithm in genus 2 (Section 5.2.2), as well as the generalization of the fast algorithm in genus 2, can also be sped up when batching computations of θ at different z, using techniques that are very similar to the ones presented here. We leave their precise description and implementation to future work.
Throughout this section, we assume that the z k are reduced, i.e. |Re(z k )| ≤ 1 2 , 0 ≤ Im(z k ) < Im(τ ).
Batch naive algorithm
We propose Algorithm 16, a variant on Algorithm 7 for batch computations of θ.
Prec (digits)
This π Im(τ ) log 2 (e) + 1
3: θ 0,z ← 1, θ 1,z ← 1, θ 0,0 ← 1, θ 1,0 ← 1 4: q ← UniformExp(iπτ ), q 1 ← q, q 2 ← q 5: for k = 1, .., n do 6: v (k) 1 ← UniformExp(2iπ(z k + τ /2)) + UniformExp(-2iπ(z k -τ /2)), v (k) ← v (k) 1 , v (k) ← 2 7: end for 8: for n = 1..B do 9:
For k = 1, .., n : θ
(k) 0,z ← θ (k) 0,z + v (k) , θ (k) 1,z ← θ (k) 1,z + (-1) n × v (k)
10:
θ 0,0 ← θ 0,0 + 2q 2 , θ 1,0 ← θ 1,0 + (-1) n × 2q 2 11: q 2 ← q 2 × (q 1 ) 2 × q; q 1 ← q 1 × q 12: For k = 1, .., n : temp ← v (k) , v (k) ← q 2 1 × v (k) 1 × v (k) -q 4 1 × v (k) v (k) ← temp 13: end for
This algorithm computes n values of θ using n + 1 exponentiations, n divisions, and (3n + 5)B multiplications. Algorithm 7 requires 2 exponentiations, 1 division and 8B multiplications. Hence, asymptotically, Algorithm 16 is about 8 3 = 2.3 times faster than running n times Algorithm 7.
Batch quasi-linear algorithm
We can also modify the quasi-optimal algorithm presented in this chapter to achieve a constantfactor speedup in the evaluation of n values of θ(z k , τ ). Again, the strategy is to avoid recomputing theta-constants several times; instead, we compute the theta-constants once, using Algorithm 11 for instance, and store them for future use. Incidentally, this makes the problem only depend on z.
We can modify Algorithm 12 so that it computes a two-variable function G 2 such that G 2 (λθ 0 (z, τ ) 2 , λθ 1 (z, τ ) 2 ) = λ. Assuming the θ i (0, 2 k τ ) are also computed once and stored, and putting N the number of steps (recall that N = O(log P )), the computation of this function requires 2N multiplications and 2N square root computations, compared to 4N square roots and 3N multiplications. This represents an asymptotic factor of 1.75 if we assume that square roots and multiplications have the same cost; however, in practice, since square root extractions cost a few multiplications (experiments using GNU MPC showed a constant greater than 100), this asymptotic factor is probably closer to 2.
We can then modify Algorithm 13 so that it computes a function F 2 so that F 2 θ1(z,τ ) 2 θ0(z,τ ) 2 = z; such a modification does not yield important savings. We then use Newton's method on this function to compute θ1(z,τ ) 2 θ0(z,τ ) 2 , and ultimately the values of θ. We compute the derivative of F 2 using finite differences, which requires 2 evaluations of F 2 ; this saves 33% over the algorithm we presented in this chapter, which requires 3 evaluations of F to compute the 3 coefficients of the Jacobian.
Finally, since we assume we precomputed the θ i (0, 2 k τ ), a few steps in Algorithm 15 are simplified. Taking a closer look at Steps 8 to 20, we find that this variant costs 6s + 11 multiplications and 2 square root extractions, while Algorithm 15 costs 7s + 12 multiplications and 2s + 8 square root extractions. If we assume multiplications and square roots have the same cost, this is a factor 1.5; however, in practice, the asymptotic factor is likely to be much bigger: if a square root costs 100 multiplications, this is a factor 34.
Putting it all together, we get a factor 2.3 in the execution of the naive algorithm (either for P ≤ 25 Im(τ ) or in the initial approximation in Algorithm 14), a factor 3 in Algorithm 14, and a large factor in Steps 8 to 20 of Algorithm 15. The resulting speedup factor is difficult to estimate; however since Algorithm 14 represents the bulk of the computation, one can expect a factor 3 to be saved.
Chapter 7
Computing the Riemann theta function in genus 2 and above
This chapter is dedicated to attempting to generalize the algorithm outlined in the previous chapter to higher genera. We succeed in making the algorithm fully explicit in the case of genus 2 theta functions, i.e. θ(z, τ ) with z ∈ C 2 , τ ∈ H 2 : the running time is also of O(M(P ) log P ) bit operations. We suppose that z and τ are reduced as explained in Section 2.3 and Section 2.4.
The generalization of the algorithm in genus g is not complete yet. We managed to generalize the function G, the function giving something depending only on z, τ from the quotients of theta functions and theta-constants; this function can be conjecturally computed with precision P in O(2 g M(P ) log P ), although we managed to prove the result in genus 2. However, constructing F from G so that we get a function with an invertible Jacobian is harder: we managed to construct such a function in genus 2, although the invertibility of the Jacobian is conjectural; in genus g, we did not manage to provide an explicit definition which would have a good chance of being invertible. Hence, the algorithm is completely described in genus 2, and has even been implemented, and we also managed to generalize the definition of G to genus g, but we did not manage to define F in genus g.
As in the previous chapter, we start by studying the case of genus 2 theta-constants, which was first outlined in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] and used in [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF] for the computation of modular polynomials of record size. We then present our algorithm in genus 2, along with timing results, and discuss genus g; the presentation of these results is taken from an article with Emmanuel Thomé [START_REF] Labrande | Computing theta functions in quasi-linear time in genus 2 and above[END_REF] which was published at the Twelfth Algorithmic Number Theory Symposium (ANTS-XII).
Preamble: genus 2 theta-constants
We discuss here the computation of theta-constants in genus 2 in O(M(P ) log P ), using the algorithm outlined in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] and the refinements shown in [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF].
Recall first the naive algorithm for genus 2 theta functions, which we outlined in Algorithm 9. As we showed in Section 5.2.1, the cost of this algorithm is O M(P ) P Im(τ ) . A variant of this algorithm which evaluates only the theta-constants can be written easily; this gives an algorithm with a smaller constant in the O, but does not change its asymptotic running time. Note that the algorithms of [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF]Prop. 3] and [Dup06, Algorithme 15.] use an analysis which does not highlight the dependency in the size of Im(τ 1,1 ), and hence get an asymptotic complexity of 121 O(M(P )P ); however, [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF]Prop. 3] shows how to use recurrence relations to compute the terms of the sum efficiently, an idea we borrowed for our naive algorithms.
Recall from Section 3.4 the definition of the Borchardt mean in genus 2:
Definition 7.1.1. The Borchardt mean of four complex numbers, denoted B 2 (a 0 , a 1 , a 2 , a 3 ), is defined as the common limit of the sequences (a
(n) 0 , a (n) 1 , a (n) 2 , a (n)
3 ) n∈N defined by
a (n+1) 0 = a (n) 0 + a (n) 1 + a (n) 2 + a (n) 3 4 a (n+1) 1 = α 0 α 1 + α 2 α 3 2 a (n+1) 2 = α 0 α 2 + α 1 α 3 2 a (n+1) 3 = α 0 α 3 + α 1 α 2 2
where α 2 0 = a
(n) 0 , α 2 1 = a (n) 1 , α 2 2 = a (n) 2 , α 2 3 = a (n)
3 , and where the choice of signs are good, i.e.
|α i -α j | < |α i + α j |
We previously noted that the Borchardt mean of 4 numbers cannot always be defined, as it is impossible to define a good choice of signs for some quadruplets (see Figure 3.2).
Recall also the τ -duplication formulas for the fundamental thetas:
θ [0;b] (0, 2τ ) 2 = 1 4 β∈ 1 2 Z 2 /Z 2 θ [0;β] (0, τ )θ [0;b+β] (0, τ )
As we discussed in Section 3.4.3, this formula shows that θ 2 0 (0, 2 k τ ), . . . , θ 2 3 (0, 2 k τ ) k∈N is a Borchardt sequence.
The link between theta-constants and the Borchardt mean is the equivalent in genus 2 of the link between theta-constants and the AGM in genus 1. The genus 2 algorithm to compute theta-constants in quasi-linear time is then very similar to the one described in Section 6.1: the goal here is then to construct a function F which returns τ from the knowledge of quotients of squares of theta-constants, and we do so once again by applying the Borchardt mean to different quotients. However, this requires first to study the domain for which good choices of square roots always coincide with the values θ i (0, 2 k τ ), which is (Definition 3.4.8):
U 2 = {τ ∈ H 2 | B 2 (θ 2 0 (0, τ ), . . . , θ 2 3 (0, τ )) = 1}.
We can determine sufficient conditions on τ so that τ ∈ U 2 . We were able to prove a result which is stronger than [Dup06, Prop. 6.1]: Proposition 7.1.2. Let τ ∈ H 2 such that Im(τ ) is Minkowski-reduced and Im(τ 1 ) ≥ 0.594. Then Re(θ i (0, τ )) ≥ 0 for i = 0, 1, 2, 3, and furthermore |θ 0 (0, τ ) -θ j (0, τ )| < |θ 0 (0, τ ) + θ j (0, τ )| for j = 1, 2, 3.
Hence {τ ∈ H 2 | Im(τ ) is Minkowski-reduced, Im(τ 1,1 ) ≥ 0.594} ⊂ U 2 . In particular, we have F 2 ⊂ U 2 and F 2 ⊂ U 2 .
Proof. The first statement can be proven with the same proof as [Dup06, Prop. 6.1]. The proof of the other statement is tedious and unillumating; we do not reproduce it fully here. The method is similar to that of Proposition 6.2.4. We first take advantage of cancellations from the series defining the theta-constants; for instance:
θ 0 (0, τ ) + θ 1 (0, τ ) = 2 n=(n1,n2) n1∈Z,n2 even e iπ t nτ n , θ 0 (0, τ ) -θ 1 (0, τ ) = 2 n=(n1,n2) n1∈Z,n2 odd e iπ t nτ n
Preamble: genus 2 theta-constants 123
We then attempt to compute a bound on the quotient |θ0(0,τ )-θ1(0,τ )| 2-|θ0(0,τ )+θ1(0,τ )-2| , using the triangle inequality to bound each of the absolute value; when the quotient has an absolute value smaller than 1, the choice of signs is good. To get a rather fine bound, we compute explicitly terms with small (e.g. smaller than 4) value for n 1 or n 2 and bound them individually, then use Lemma 5.1.2 to bound the remainders. This result thus proves that, for τ verifying the hypotheses of Proposition 7.1.2, and using homogeneity of B 2 ,
B 2 1, θ 1 (0, τ ) 2 θ 0 (0, τ ) 2 , θ 2 (0, τ ) 2 θ 0 (0, τ ) 2 , θ 3 (0, τ ) 2 θ 0 (0, τ ) 2 = 1 θ 0 (0, τ ) 2
which is a property very similar to the situation in genus 1 (Note 3.2.10).
We then take a look at the Borchardt mean of other quotients and determine some which allow the computation of coefficients of τ :
Proposition 7.1.3. Define J = 0 -I 2 I 2 0 and M i = I 2 m i 0 I 2 , with m 1 = 1 0 0 0 , m 2 = 0 0 0 1 , m 3 = 0 1 1 0 (as in [Dup06, Chapter 6]). Then θ 0 0, (JM 1 ) 2 • τ 2 = -τ 1 θ 8 (z, τ ) 2 , θ 0 0, (JM 2 ) 2 • τ 2 = -τ 2 θ 4 (z, τ ) 2 , θ 0 0, (JM 3 ) 2 • τ 2 = (τ 2 3 -τ 1 τ 2 )θ 0 (z, τ ) 2 .
This result is a direct consequence of Theorem 2.4.49 . Using Theorem 2.4.4 again for the numerators, and provided once again that good choices of sign correspond to values of theta, we have [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF]p.197
] B2 1, θ 2 9 (0, τ ) θ 2 8 (0, τ ) , θ 2 0 (0, τ ) θ 2 8 (0, τ ) , θ 2 1 (0, τ ) θ 2 8 (0, τ ) = B2 1, θ 2 1 (0, (JM1) 2 • τ ) θ 2 0 (0, (JM1) 2 • τ ) , θ 2 2 (0, (JM1) 2 • τ ) θ 2 0 (0, (JM1) 2 • τ ) , θ 2 3 (0, (JM1) 2 • τ ) θ 2 0 (0, (JM1) 2 • τ ) = 1 -τ1θ 2 8 (0, τ ) (7.1.1) B2 1, θ 2 0 (0, τ ) θ 2 4 (0, τ ) , θ 2 6 (0, τ ) θ 2 4 (0, τ ) , θ 2 2 (0, τ ) θ 2 4 (0, τ ) = B2 1, θ 2 1 (0, (JM2) 2 • τ ) θ 2 0 (0, (JM2) 2 • τ ) , θ 2 2 (0, (JM2) 2 • τ ) θ 2 0 (0, (JM2) 2 • τ ) , θ 2 3 (0, (JM2) 2 • τ ) θ 2 0 (0, (JM2) 2 • τ ) = 1 -τ2θ 2 4 (0, τ ) (7.1.2) B2 1, θ 2 8 (0, τ ) θ 2 0 (0, τ ) , θ 2 4 (0, τ ) θ 2 0 (0, τ ) , θ 2 12 (0, τ ) θ 2 0 (0, τ ) = B2 1, θ 2 1 (0, (JM3) 2 • τ ) θ 2 0 (0, (JM3) 2 • τ ) , θ 2 2 (0, (JM3) 2 • τ ) θ 2 0 (0, (JM3) 2 • τ ) , θ 2 3 (0, (JM3) 2 • τ ) θ 2 0 (0, (JM3) 2 • τ ) = 1 (τ 2 3 -τ1τ2)θ 2 0 (0, τ ) (7.1.3)
However, note that the hypotheses of Proposition 7.1.2 are not necessarily satisfied by (JM 1 ) 2 •τ , (JM 2 ) 2 • τ or (JM 3 ) 2 • τ ; in fact, these period matrices do not even necessarily have a Minkowskireduced imaginary part. We were unable to find conditions on τ so that either of the propositions could be applied to these matrices, and were also unable to determine the shape of the set described by the (JM i ) 2 • τ for, say, τ ∈ F 2 (which would have allowed us to attempt to prove a more general version of Proposition 7.1.2). One could also think of looking at the sums one gets when considering θ i (0, τ ) ± θ j (0, τ ), but no obvious cancellations seem to occur, and then the triangle inequality does not seem like the right tool; we did not pursue this line of thought. Note that in practice, all the quotients in Equations (7.1.1) to (7.1.3) appear to have a positive real part, and probably also satisfy the condition of the good choice of signs. Hence, the following conjecture is proposed in [Dup06, Conjecture 9.1] and [ET14a, Conjecture 9]: Conjecture 7.1.4. For k = 1, 2, 3 and τ ∈ F 2 , we have
(JM 1 ) 2 • τ ∈ U 2 .
We show in Section 7.2 how we can circumvent the "good choice vs correct value of θ" problem using low-precision approximations.
Proposition 7.1.3 and Equations (7.1.1) to (7.1.3) allow the computation of the τ i from the knowledge of the θ 2 i for 0 ≤ i ≤ 15; this is similar to what Equation (6.1.2) gave us in genus 1. Hence, we can use these equations to construct the function F, which returs τ from quotients of fundamental theta-constants; we will then invert F using Newton's method. The algorithm is as follows: starting with θ 2 1,2,3 (0,τ ) θ 2 0 (0,τ ) , we recover the individual fundamental theta-constants, by computing θ 2 0 (0, τ ) as the inverse of the Borchardt mean of the quotients. We then need to compute the theta-constants θ 4 , θ 6 , θ 8 , θ 9 , θ 12 ; once this is done, we apply Equations (7.1.1) to (7.1.3) to recover τ . Computing these theta-constants can be done in two ways: using some explicit formulas, as in [Dup06, Section 6.4], or by directly using the τ -duplication formulas to compute all the theta-constants at 2τ , then account for the factor 2 in the final result. We prefer to use the latter approach, as it is generalizable to any genus. We summarize this in Algorithm 17.
Algorithm 17 Compute τ from θ 2 1,2,3 (0,τ ) θ 2 0 (0,τ ) , assuming Conjecture 7.1.4. Input:
θ 2 1,2,3 (0,τ ) θ 2
0 (0,τ ) with absolute precision P . Output: τ with absolute precision P .
1: t 0 ← B 2 1, θ 2 1 (0,τ ) θ 2 0 (0,τ ) , θ 2 2 (0,τ ) θ 2 0 (0,τ ) , θ 2 3 (0,τ ) θ 2 0 (0,τ ) . 2: t 0 ← 1 t0 . 3: t 1 ← t 0 × θ 2 1 (0,τ ) θ 2 0 (0,τ ) ; t 2 ← t 0 × θ 2 2 (0,τ ) θ 2 0 (0,τ ) ; t 3 ← t 0 × θ 2
3 (0,τ ) θ 2 0 (0,τ ) . 4: t i ← √ t i , choosing the square root corresponding to θ i (0, τ ). 5: Apply the τ -duplication formula (Equation (2.2.1)) to get t i ← θ 2 i (0, 2τ ) for i = 0 . . . 16.
6: τ 1 = B 2 1, t9 t8 , t0 t8 , t1 t8 , τ 2 = B 2 1, t0 t4 , t6 t4 , t2 t4 , τ 3 = B 2 1, t8 t0 , t4 t0 , t12 t0 . 7: τ 1 ← -t8 2τ1 , τ 2 ← -t4 2τ2 , τ 3 = t0 4τ3 + τ 1 τ 2 . 8: return τ 1 τ 3 τ 3 τ 1 .
Computing theta-constants is then done by applying Newton's method to the function defined by Algorithm 17, which is a function C 3 → C 3 . The Jacobian appears to be invertible in practice, but no proof of this fact has been found; both [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] and [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF] assume it is the case. One can compute the Jacobian either with a quadratically convergent algorithm which works conjecturally [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF], or with finite differences [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF]; both have the same quasi-optimal asymptotic running time, but [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF] finds that the first approach is 45% more expensive in practice. We refer to [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] or [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF] for more details, and to [START_REF] Enge | Cmh -Computation of Igusa Class Polynomials[END_REF] for an implementation of this algorithm (i.e. Algorithm 17 and the three-dimensional Newton scheme which allows to compute theta-constants in quasi-optimal time).
The implementation of this algorithm [START_REF] Enge | Cmh -Computation of Igusa Class Polynomials[END_REF] was used in [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF] to compute the Igusa class polynomials corresponding to Jacobians of genus 2 curves, where the theta-constants are linked to the roots of such a polynomial, which is then reconstructed from the roots. The authors implemented the algorithm using the multiprecision arithmetic library GNU MPC [START_REF] Enge | GNU MPC -A library for multiprecision complex arithmetic with exact rounding[END_REF]; their implementation results show that this approach beats the naive algorithm for precisions as low as a few thousand bits. Using this algorithm, they were able to compute a class polynomial of record size, corresponding to a class number of 20016.
The algorithm
The function F
Recall the τ -duplication formulas (Prop. 2.2.1): for all a, b
∈ 1 2 Z g /Z g , θ [a;b] (z, τ ) 2 = 1 2 g β∈ 1 2 Z g /Z g e -4iπ t aβ θ [0;b+β] z, τ 2 θ [0;β] 0, τ 2 . (7.2.1)
This prompts us to define the following function, crafted so that Proposition 7.2.1 holds, as a direct consequence of the τ -duplication formula. The definition below is ambiguous (because of square roots), an issue we deal with in what follows.
F : C 8 → C 8 a 0...3 , b 0...3 → √ a 0 √ b 0 + √ a 1 √ b 1 + √ a 2 √ b 2 + √ a 3 √ b 3 4 , √ a 0 √ b 1 + √ a 1 √ b 0 + √ a 2 √ b 3 + √ a 3 √ b 2 4 , √ a 0 √ b 2 + √ a 1 √ b 3 + √ a 2 √ b 0 + √ a 3 √ b 1 4 , √ a 0 √ b 3 + √ a 1 √ b 2 + √ a 2 √ b 1 + √ a 3 √ b 0 4 , b 0 + b 1 + b 2 + b 3 4 , 2 √ b 0 √ b 1 + 2 √ b 2 √ b 3 4 , 2 √ b 0 √ b 2 + 2 √ b 1 √ b 3 4 , 2 √ b 0 √ b 3 + 2 √ b 1 √ b 2 4 .
Proposition 7.2.1. For a suitable choice of square roots, we have
F θ 0,1,2,3 (z, τ ) 2 , θ 0,1,2,3 (0, τ ) 2 = θ 0,1,2,3 (z, 2τ ) 2 , θ 0,1,2,3 (0, 2τ ) 2
Bad, good, and correct choices of square roots
We discuss what we mean above by suitable choice of square roots. Two different notions must be considered:
• "Good choices" in the sense of [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF][START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF], i.e. such that Re
√ ai √ aj , Re √ bi √ bj
≥ 0. Note that not all tuples of complex numbers admit a set of "good" square roots. In genus 1, having infinitely many bad choices means the sequence converges (at least linearly) to zero; to avoid this case, we need them to all be good after a while, which in addition ensures we have quadratic convergence. This is key to our strategy to get a quasi-linear running time.
• The choice of signs that corresponds to θ, i.e. given two quadruples which are (proportional to) approximations of θ 0,1,2,3 (z, τ ) 2 and θ 0,1,2,3 (0, τ ) 2 , the ones which approximate well the values θ 0,1,2,3 (z, τ ) and θ 0,1,2,3 (0, τ ). We call these the "correct" choice, which need not be a "good" choice. We need this in order to compute the right value of θ in the end.
Fortunately, the notions of "good" and "correct" choices overlap very often. In genus 1, results from [START_REF] Cox | The arithmetic-geometric mean of Gauss[END_REF] (i.e. Proposition 3.2.9) shows that the correct choice for theta-constants is the good choice for τ within a large domain which includes the fundamental domain (see Figure 3.1); we proved a similar result for theta functions in Theorem 6.2.6 (see Figure 6.1). In genus 2, we do not determine an explicit domain for which correct choices are good. Although one can try to improve on the approach of Section 5.2 to establish such a result, the mere requirement that τ be in F g is already too strict for our further use (in particular in Section 7.2.2), so that proofs are difficult to obtain.
Iterates of F
Proposition 2.1.9 shows that $\lim_{k\to\infty} \theta_{[0;b]}(z,2^k\tau)/\theta_{[0;b']}(z,2^k\tau) = 1$, which easily implies that correct choices are good for large enough τ. Therefore, given an 8-tuple X which approximates $(\theta_{0,1,2,3}(z,\tau)^2, \theta_{0,1,2,3}(0,\tau)^2)$, computing the iterates F^n(X) while consistently making correct choices is bound to coincide with good choices after a finite number of iterations. To ensure that the first few choices are indeed the correct ones, it suffices to rely on low-precision approximations of θ, so that we know the sign of either Re(θ) or Im(θ). The number of terms and the precision needed to achieve this do not depend asymptotically on P, but only on z and τ; since we neglect the dependency in z, τ in the complexity of our algorithm^10, determining the correct square root requires only a constant number of operations. We used this strategy in our implementation; furthermore, it generalizes easily to genus g.
Lemma 7.2.2. Let $(a^{(0)}_{0,1,2,3}, b^{(0)}_{0,1,2,3}) \in \mathbb{C}^8$, and let
\[ \big(a^{(n+1)}_{0,1,2,3}, b^{(n+1)}_{0,1,2,3}\big) = F\big(a^{(n)}_{0,1,2,3}, b^{(n)}_{0,1,2,3}\big) \]
for any integer n ∈ N. Assume that there exist α, β ∈ C^* and n_0 ∈ N such that Re(a^{(n_0)}_i/α) > 0 and Re(b^{(n_0)}_i/β) > 0 for all i ∈ {0, 1, 2, 3}. Then there exist positive real constants c, C such that, assuming all choices of square roots from iteration n_0 onwards are good, we have
\[ \forall n \geq n_0,\ \forall i \in \{0,1,2,3\},\quad c \leq |a^{(n)}_i|, |b^{(n)}_i| < C. \]
Proof. The upper bound follows trivially from the definition. For the lower bound, assume without loss of generality that |α| = |β| = 1, and let c = min(Re(a^{(n_0)}_{0,1,2,3}/α), Re(b^{(n_0)}_{0,1,2,3}/β)). For good choices of square roots and any i, j, we have
\[ \mathrm{Re}\left( \sqrt{a^{(n_0)}_i/\alpha}\ \sqrt{b^{(n_0)}_j/\beta} \right) \geq \min\big(\mathrm{Re}(a^{(n_0)}_i/\alpha), \mathrm{Re}(b^{(n_0)}_j/\beta)\big) \geq c \]
(for a proof, see e.g. [Dup06, Lemme 7.3]). This implies, from the definition of F, that Re(a^{(n_0+1)}_i/√(αβ)) ≥ c and Re(b^{(n_0+1)}_i/β) ≥ c for all i, and in particular |a^{(n_0+1)}_i|, |b^{(n_0+1)}_i| ≥ c. The result follows by induction (with √(αβ), of modulus 1, playing the role of α at the next iteration).
An important remark is that the "low-precision" strategy described above is sufficient to ensure that conditions of Lemma 7.2.2 hold after a few steps.
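For illustration, the low-precision sign selection can be sketched as follows (a minimal Python/mpmath sketch of ours, not the actual implementation; the coarse approximation is assumed accurate to a few digits):

```python
from mpmath import sqrt

def sqrt_with_sign(x, coarse):
    """Return the square root s of x such that s (rather than -s) is
    closest to `coarse`, a low-precision approximation of the theta value
    we are targeting; a few correct digits suffice to fix the sign."""
    s = sqrt(x)
    return s if abs(s - coarse) <= abs(s + coarse) else -s
```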
A Karatsuba-like trick to compute F. Proposition 7.2.3 computes F using 4 products and 4 squares instead of the 22 products in its definition; Section 7.4 extends this to genus g.
Proposition 7.2.3. Put $H = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ and $H_2 = H \otimes H$. Let ${}^t(m_{0,1,2,3}) = H_2\, {}^t\big(\sqrt{a^{(n)}_{0,1,2,3}}\big)$ and ${}^t(s_{0,1,2,3}) = H_2\, {}^t\big(\sqrt{b^{(n)}_{0,1,2,3}}\big)$. We have
\[ {}^t\big(a^{(n+1)}_{0,1,2,3}\big) = \frac{1}{16} H_2\, {}^t\big(m_{0,1,2,3} * s_{0,1,2,3}\big) \]
(* being the termwise product), and
\[ {}^t\big(b^{(n+1)}_{0,1,2,3}\big) = \frac{1}{16} H_2\, {}^t\big(s_{0,1,2,3} * s_{0,1,2,3}\big). \]
10 To extend this work into an algorithm whose complexity is uniform in z, τ , one could follow the same approach as in genus 1 (see [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF], or Algorithms 11 and 15), since we have once again a naive algorithm whose complexity decreases as Im(τ 1 ) increases.
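As an illustration of Proposition 7.2.3, one iteration of F might be written as follows (a Python sketch under our own naming, reusing sqrt_with_sign from the previous sketch; the 4×4 Hadamard matrix H_2 diagonalizes the XOR-convolution):

```python
H2 = [[1, 1, 1, 1],
      [1, -1, 1, -1],
      [1, 1, -1, -1],
      [1, -1, -1, 1]]   # H tensor H

def hadamard4(v):
    return [sum(H2[i][j] * v[j] for j in range(4)) for i in range(4)]

def F_step(a, b, coarse_a, coarse_b):
    """One step of F: maps (theta^2(z,tau), theta^2(0,tau)) to the same
    squares at 2*tau using 4 products and 4 squares (Prop. 7.2.3).
    coarse_a, coarse_b are low-precision theta values fixing the roots."""
    ra = [sqrt_with_sign(a[i], coarse_a[i]) for i in range(4)]
    rb = [sqrt_with_sign(b[i], coarse_b[i]) for i in range(4)]
    m, s = hadamard4(ra), hadamard4(rb)
    new_a = [x / 16 for x in hadamard4([m[i] * s[i] for i in range(4)])]
    new_b = [x / 16 for x in hadamard4([s[i] * s[i] for i in range(4)])]
    return new_a, new_b
```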
Constructing and inverting the F function
Proposition 7.2.4 (extension of Prop. 7.1.3). Define the matrices J, M_1, M_2, M_3 as in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF]Chapter 6]. Then
\[ \theta_0\big((JM_1)^2\cdot z, (JM_1)^2\cdot\tau\big)^2 = -\tau_1\, e^{2i\pi z_1^2/\tau_1}\, \theta_8(z,\tau)^2, \]
\[ \theta_0\big((JM_2)^2\cdot z, (JM_2)^2\cdot\tau\big)^2 = -\tau_2\, e^{2i\pi z_2^2/\tau_2}\, \theta_4(z,\tau)^2, \]
\[ \theta_0\big((JM_3)^2\cdot z, (JM_3)^2\cdot\tau\big)^2 = (\tau_3^2 - \tau_1\tau_2)\, e^{2i\pi\frac{z_1^2\tau_2 + z_2^2\tau_1 - 2z_1z_2\tau_3}{\det(\tau)}}\, \theta_0(z,\tau)^2. \]
This result is a direct consequence of Theorem 2.4.4. The next proposition looks at how the sequence of iterates F^n behaves with respect to homogeneity; its proof is similar to that of Proposition 6.2.8, i.e. an induction.
Proposition 7.2.5. Let
\[ \big((a^{(n)}_i)_{0\leq i\leq 3}, (b^{(n)}_i)_{0\leq i\leq 3}\big) = F^n\Big( \big(\theta_{[0;b]}(z,\tau)^2\big)_{b\in\frac{1}{2}\mathbb{Z}^2/\mathbb{Z}^2}, \big(\theta_{[0;b]}(0,\tau)^2\big)_{b\in\frac{1}{2}\mathbb{Z}^2/\mathbb{Z}^2} \Big), \]
\[ \big((a'^{(n)}_i)_{0\leq i\leq 3}, (b'^{(n)}_i)_{0\leq i\leq 3}\big) = F^n\Big( \lambda\big(\theta_{[0;b]}(z,\tau)^2\big)_{b\in\frac{1}{2}\mathbb{Z}^2/\mathbb{Z}^2}, \mu\big(\theta_{[0;b]}(0,\tau)^2\big)_{b\in\frac{1}{2}\mathbb{Z}^2/\mathbb{Z}^2} \Big). \]
Then we have $a'^{(n)}_0 = \varepsilon_n \lambda^{1/2^n} \mu^{1-1/2^n} a^{(n)}_0$ (where $\varepsilon_n^{2^n} = 1$) and $b'^{(n)}_0 = \mu\, b^{(n)}_0$, and we can compute λ, μ as
\[ \mu = \lim_{n\to\infty} b'^{(n)}_0 \quad\text{and}\quad \lambda = \lim_{n\to\infty} \left( \frac{a'^{(n)}_0}{b'^{(n)}_0} \right)^{2^n} \times \mu. \]
We define G as the function which computes these two quantities, that is to say such that
\[ G\Big( \lambda\big(\theta_{[0;b]}(z,\tau)^2\big)_{b\in\frac{1}{2}\mathbb{Z}^2/\mathbb{Z}^2}, \mu\big(\theta_{[0;b]}(0,\tau)^2\big)_{b\in\frac{1}{2}\mathbb{Z}^2/\mathbb{Z}^2} \Big) = (\lambda, \mu). \]
We prove in Section 7.2.3 that G can be computed in O(M(P ) log P ) operations.
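As a rough illustration of how G may be realized (our own Python sketch; the termination test, the supply of coarse theta values, and the final extrapolation are all idealized):

```python
from mpmath import mp

def G(a, b, coarse_thetas, prec_bits):
    """Approximate (lambda, mu) with a = lambda*theta^2(z,tau) and
    b = mu*theta^2(0,tau): iterate F until the b-part stabilizes, then
    read off mu = b_0 and lambda = (a_0/b_0)^(2^n) * mu.
    coarse_thetas(n) is assumed to return low-precision theta values
    at (z, 2^n tau) and (0, 2^n tau) to fix the square-root signs."""
    mp.prec = prec_bits + 20
    n = 0
    while max(abs(b[i] - b[j]) for i in range(4)
              for j in range(4)) > mp.mpf(2) ** (-prec_bits):
        ca, cb = coarse_thetas(n)
        a, b = F_step(a, b, ca, cb)
        n += 1
    mu = b[0]
    lam = (a[0] / b[0]) ** (2 ** n) * mu
    return lam, mu
```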
We now build F from G. The idea is to evaluate G at quotients of theta functions after the action of (JM_i)²; the λ and μ we recover are the inverses of the quantities in Proposition 7.2.4. Note that (JM_i)²·τ ∉ F_2, which prevents us from generalizing the proofs which, in genus 1, made "good choices" and "correct choices" coincide. However, it is still possible to determine the sign using low-precision approximations.
Evaluating the quotients after the action of (JM_i)² amounts to evaluating different quotients of theta functions; for instance:
\[ \frac{\theta_1\big((JM_1)^2\cdot z, (JM_1)^2\cdot\tau\big)^2}{\theta_0\big((JM_1)^2\cdot z, (JM_1)^2\cdot\tau\big)^2} = \frac{\theta_9(z,\tau)^2}{\theta_8(z,\tau)^2}. \]
Hence, we need to compute θ_{[a;b]}(z,τ) for a ≠ 0. The approach used in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF] for theta-constants is to use explicit formulas linking the θ_{[a;b]}(0,τ) to the θ_{[0;b]}(0,τ). Instead, we use the approach of [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF], which is simpler and more generalizable. The τ-duplication formulas (Equation (2.2.1)) allow us to compute θ_{[a;b]}(z,2τ) from the fundamental thetas, and we then compute the λ and μ corresponding to the quotients at 2τ instead of those corresponding to the same quotients at τ. This still gives two numbers that are simple functions of z and τ, which is all we need to use Newton's method.
Defining F so that it is locally invertible, in order to use Newton's method, requires some care. In genus 1 (Chapter 6), we simply compute z and τ, which gives a C² → C² function; in higher genus, however, this approach leads to a function from C^{2^{g+1}−2} to C^{g(g+3)/2}, and we cannot apply Newton's method to recover the quotients of thetas. In genus 2, the function is from C⁶ to C⁵; there are two ways to work around the issue:
1. A natural idea is to add an extra equation, namely the equation of a variety on which the thetas lie. For instance, we can take the equation of the Kummer surface, as described in [START_REF] Gaudry | Fast genus 2 arithmetic based on Theta functions[END_REF] (see Section 2.6.4), which links the fundamental theta functions and theta-constants. Evaluating this equation with x = a = 1 and (y, z, t, b, c, d) equal to the inputs of F gives one more complex number, which makes F a function from C⁶ to C⁶; this sixth complex number would be 0 if the inputs were exact quotients of squares of θ, hence Newton's method should strive to make it 0. This approach, however, does not appear to be easily generalizable to higher genus.
2. Alternatively, one can evaluate G on the quotients obtained after the action of the three matrices (JM_1)², (JM_2)² and (JM_3)², which yields three pairs (λ_i, μ_i), i.e. six complex numbers, and hence a function from C⁶ to C⁶; this is the approach taken in Algorithm 19 below, and it is the one that generalizes to higher genus (Section 7.4).
Algorithm 19 Computation of F(a_{1,2,3}, b_{1,2,3}) with absolute precision P.
Input: a_{1,2,3}, b_{1,2,3} ∈ C⁶, and a pair z, τ ∈ C² × H_2, with absolute precision P. We assume that a_{1,2,3}, b_{1,2,3} are approximations of θ²_{1,2,3}/θ²_0(z,τ), θ²_{1,2,3}/θ²_0(0,τ) to some constant base precision; these coarse estimates serve as a guide to choose the correct signs of the square roots.
1: (λ_0, μ_0) ← G(1, a_{1,2,3}, 1, b_{1,2,3}), using (z, τ) to inform the choices of sign.
2: x_0 ← 1/λ_0, y_0 ← 1/μ_0.
3: x_{1,2,3} ← a_{1,2,3} × x_0, y_{1,2,3} ← b_{1,2,3} × y_0.
4: for i = 0 to 3 do
5:   x_i ← ±√x_i, using a low-precision approximation of θ_i(z,τ) for the sign.
6:   y_i ← ±√y_i, using a low-precision approximation of θ_i(0,τ) for the sign.
7: end for
8: for i = 4 to 15 do
9:   b ← i (mod 4), a ← (i − b)/4.
10:  x_i ← (1/4) Σ_{j=0}^{3} (−1)^{a•j} x_{b⊕j} y_j.
11:  y_i ← (1/4) Σ_{j=0}^{3} (−1)^{a•j} y_{b⊕j} y_j.
12: end for
13: (λ_1, μ_1) ← G(1, x_{9,0,1}/x_8, 1, y_{9,0,1}/y_8), using ((JM_1)²·z, (JM_1)²·2τ) to inform the choices of sign.
14: (λ_2, μ_2) ← G(1, x_{0,6,2}/x_4, 1, y_{0,6,2}/y_4), using ((JM_2)²·z, (JM_2)²·2τ) to inform the choices of sign.
15: (λ_3, μ_3) ← G(1, x_{8,4,12}/x_0, 1, y_{8,4,12}/y_0), using ((JM_3)²·z, (JM_3)²·2τ) to inform the choices of sign.
16: μ_1 ← y_8/μ_1, μ_2 ← y_4/μ_2, μ_3 ← y_0/μ_3.
17: return (μ_1 x_8/λ_1, μ_2 x_4/λ_2, μ_3 x_0/λ_3, μ_1/2, μ_2/2, μ_3/4).
We prove in the next section that computing F with precision p costs O(M(p) log p) for any arguments; this implies that applying one step of Newton's method costs O(M(p) log p). Thus, as in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF][START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF] or Chapter 6, we can compute an approximation of θ with precision P_0 using the naive algorithm, then use Newton's method to refine it into a value of θ with precision P, provided P_0 is large enough (as in Theorem 0.3.7). The total cost of this algorithm is O(M(P) log P).
Remark 7.2.7. Note that the complexity of this algorithm depends on z, τ. However, one could build an algorithm with complexity uniform in z, τ using the same techniques as those we showed in genus 1 (Algorithm 11 or Algorithm 15). Namely, one uses the naive algorithm (Algorithm 9) for P ≤ c Im(τ_{1,1}); recall that we noted at the end of Section 5.2.2 that the complexity is then uniform, as in genus 1. If P ≥ c Im(τ_{1,1}), one uses the τ-duplication and z-duplication formulas so that the previous algorithm is called for z, τ in a compact set. We leave the precise analysis of such an algorithm to future work.
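The precision-doubling Newton lift mentioned above can be sketched generically as follows (Python/mpmath, our own illustration; Ffun stands for the function F of this section and jacobian for a finite-difference approximation of its Jacobian, both assumed given):

```python
from mpmath import mp, matrix, lu_solve

def newton_refine(Ffun, jacobian, x, target, prec0, prec_final):
    """Refine a base-precision solution of Ffun(x) = target by Newton's
    method, doubling the working precision at each step; the total cost
    is dominated by the last step, hence O(M(P) log P) overall."""
    prec = prec0
    while prec < prec_final:
        prec = min(2 * prec, prec_final)
        mp.prec = prec + 20                     # a few guard bits
        J = jacobian(Ffun, x)                   # 6x6 matrix in genus 2
        r = matrix([t - f for t, f in zip(target, Ffun(x))])
        dx = lu_solve(J, r)
        x = [xi + di for xi, di in zip(x, dx)]
    return x
```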
Proof of quasi-optimal time
Theorem 7.2.8. One can compute G(a^{(0)}_{0,1,2,3}, b^{(0)}_{0,1,2,3}) = (λ, μ) with precision P in O(M(P) log P) operations, assuming the choice of signs is always good.
When the arguments are λθ²_{0,1,2,3}(z,τ), μθ²_{0,1,2,3}(0,τ), the result is merely a consequence of the quadratic convergence of (θ(z, 2^k τ))_{k∈N}; however, we need to prove the result for any arguments in order to apply it to the computation of the Jacobian. The proof is very similar to the proofs in Section 6.2.4.
Proof. By Lemma 7.2.2, we have 0 < c ≤ |a^{(n)}_i|, |b^{(n)}_i| ≤ C for all i and all n large enough (independent of P). The sequence d_n = max_{i,j} |b^{(n)}_i − b^{(n)}_j| converges quadratically to zero [Dup06, Prop. 7.1]. So μ can be computed in time O(M(P) log P).
Now let A > 0 and n_1 be large enough that d_{n+1} ≤ A d_n² for all n ≥ n_1, and additionally that d_{n_1} < 1/(2A). This implies d_n ≤ (1/A) 2^{−2^{n−n_1}} for any n ≥ n_1. The |a^{(n)}_i − a^{(n)}_j| can be linked to d_n. For instance, for any n ≥ n_1 we have
\[ |a^{(n+1)}_0 - a^{(n+1)}_1| = \frac{|m_1s_1 + m_3s_3|}{8} \leq 2C(|s_1| + |s_3|) \quad \text{(using the notations of Prop. 7.2.3)} \]
\[ \leq 4C\big(|\sqrt{b_0}-\sqrt{b_1}| + |\sqrt{b_2}-\sqrt{b_3}|\big) \leq 4C\Big(\sqrt{|\sqrt{b_0}-\sqrt{b_1}|^2} + \sqrt{|\sqrt{b_2}-\sqrt{b_3}|^2}\Big) \]
\[ \leq 4C\Big(\sqrt{|\sqrt{b_0}-\sqrt{b_1}||\sqrt{b_0}+\sqrt{b_1}|} + \sqrt{|\sqrt{b_2}-\sqrt{b_3}||\sqrt{b_2}+\sqrt{b_3}|}\Big) \leq 4C\big(\sqrt{|b_0-b_1|} + \sqrt{|b_2-b_3|}\big) \leq 8C\sqrt{d_n}. \]
Calculus also shows that
\[ \Big|a^{(n+1)}_0 - \sqrt{a^{(n)}_0 b^{(n)}_0}\Big| = \frac{1}{8}\left| \sum_{i=1}^{3} (\sqrt{a_i}-\sqrt{a_0})(\sqrt{b_i}+\sqrt{b_0}) + (\sqrt{a_i}+\sqrt{a_0})(\sqrt{b_i}-\sqrt{b_0}) \right| \leq K\sqrt{d_n} \]
for some explicit constant K. Superscripts (n) have been omitted from the right-hand sides above for brevity. For both inequalities, we used the fact that the choices of roots are good.
We now show that $\lambda_n = (a^{(n)}_0/b^{(n)}_0)^{2^n}$ converges quadratically. Let
\[ q_n = \frac{\big(a^{(n+1)}_0/b^{(n+1)}_0\big)^2}{a^{(n)}_0/b^{(n)}_0}, \]
so that $\lambda_{n+1} = \lambda_n q_n^{2^n}$. It is relatively easy to check that |q_{n+1} − 1| is also bounded by K′√d_n for an explicit constant K′, given the bounds established above, as well as the inequality
\[ |\mu - b^{(n+m)}_0| \leq \frac{2}{A}(A\eta)^{2^m} \quad (\text{with } \eta = d_n), \]
which is proven as follows: $|b^{(n+m+1)}_0 - b^{(n+m)}_0| \leq d_{n+m}$, since $b^{(n+m+1)}_0$ is the arithmetic mean of the $b^{(n+m)}_j$; hence
\[ |b^{(n+m+l)}_0 - b^{(n+m)}_0| \leq \sum_{i=0}^{l-1} d_{n+m+i} \leq \frac{1}{A}\sum_{i=0}^{l-1}(A\eta)^{2^{i+m}} \leq \frac{2}{A}(A\eta)^{2^m}. \]
It follows, from an unsurprising calculation similar to that of Theorem 6.2.13, that the sequence λ_n also converges quadratically. This concludes the proof that only a logarithmic number of steps are needed to compute the values taken by G, and hence by F, to precision P.
Implementation results
Our Magma implementation of Algorithm 9 and of our quasi-linear time algorithm is available at http://www.hlabrande.fr/pubs/fastthetasgenus2.m. We compared it with Magma's general-purpose Theta function. Assuming the latter computes each term by exponentiation, its complexity would be O(M(P)P log P); in practice, it behaves much worse. Table 7.1 shows that, for precisions above 1000 decimal digits, our algorithm, which outputs 8 values, is faster than one call to Magma's Theta function, which only computes θ(z,τ). Furthermore, it is also faster than Algorithm 9 for precisions greater than 3000 decimal digits. This cut-off is much lower than in genus 1, which is expected since the complexity of the naive algorithm is O(M(P)√P) in genus 1 and O(M(P)P) in genus 2. Our results are consistent with the situation for theta-constants, studied in [START_REF] Enge | Computing class polynomials for abelian surfaces[END_REF].
Computing theta functions of higher genera
This section outlines ideas for extending the previous strategy to the case g > 2. The complexity of such an algorithm will certainly be exponential (or worse) in g; we do not make any attempt at lowering this complexity, and in fact we do not even evaluate it fully. However, the complexity in P would still be O(M(P ) log P ), which is desirable.
The function F
We use once again the τ-duplication formula (Equation (2.2.1)) with a = 0:
\[ \theta_i(z, 2\tau)^2 = \frac{1}{2^g} \sum_{k \in \{0,\ldots,2^g-1\}} \theta_{i\oplus k}(z,\tau)\,\theta_k(0,\tau), \tag{7.4.1} \]
where ⊕ is the bitwise XOR. This gives us a function
\[ F : \mathbb{C}^{2^{g+1}} \to \mathbb{C}^{2^{g+1}}, \quad \big( (\theta_{[0;b]}(z,\tau)^2)_b,\ (\theta_{[0;b]}(0,\tau)^2)_b \big) \mapsto \big( (\theta_{[0;b]}(z,2\tau)^2)_b,\ (\theta_{[0;b]}(0,2\tau)^2)_b \big). \]
Just like in genus 2, we solve the problem of determining the correct square roots by using low-precision approximations of θ, which only require a number of terms and a precision that are independent of P. Furthermore, Proposition 2.1.9 proves that $\lim_{n\to\infty} \theta_{[0;b]}(z, 2^n\tau) = 1$, which shows that, after a finite number of steps, correct choices of sign correspond to good choices of sign, i.e. those for which $\mathrm{Re}\big(\sqrt{a_i}/\sqrt{a_j}\big) > 0$.
The Karatsuba-like trick we used in Proposition 7.2.3 generalizes to the evaluation of F. Sums involving bitwise XORs, such as the one in Equation (2.2.1), are called dyadic convolutions in [START_REF] Makarov | The connection between algorithms of the fast Fourier and Hadamard transformations and the algorithms of Karatsuba, Strassen, and Winograd[END_REF], which also gives an algorithm to compute them with an optimal number of multiplications. The method is exactly the one we used in Proposition 7.2.3, using this time $H_g = H \otimes \cdots \otimes H$ (g times). This means we only need $2^{g+1}$ multiplications, instead of the $2^{2g+1}$ multiplications that appear in the definition of F.
Extending the quasi-linear time algorithm
Iterates of F
We then define the sequence of iterates of F as
\[ \big(a^{(n+1)}_0, \ldots, a^{(n+1)}_{2^g-1}, b^{(n+1)}_0, \ldots, b^{(n+1)}_{2^g-1}\big) = F\big(a^{(n)}_0, \ldots, a^{(n)}_{2^g-1}, b^{(n)}_0, \ldots, b^{(n)}_{2^g-1}\big), \]
and we denote by $F^\infty(a^{(0)}_0, \ldots, a^{(0)}_{2^g-1}, b^{(0)}_0, \ldots, b^{(0)}_{2^g-1})$ the limit of this sequence. Note that the choices of signs in this sequence are taken to correspond to the correct choices of signs with respect to the values θ_i(z, 2^n τ). Hence, by definition, we have $F^\infty\big(\theta^2_{0,\ldots,2^g-1}(z,\tau), \theta^2_{0,\ldots,2^g-1}(0,\tau)\big) = (1, \ldots, 1)$, since the choices are always taken to be correct. Furthermore, Proposition 2.1.9 proves that there are only finitely many bad choices of signs. Finally, note that the operations on the $b^{(n)}_i$ are exactly those of the Borchardt mean; Theorem 3.4.6 then proves that the convergence of the $b^{(n)}_i$ is quadratic.
The following generalizes Lemma 7.2.2; its proof is essentially the same.
Lemma 7.4.1. Let $(a^{(0)}_{0,\ldots,2^g-1}, b^{(0)}_{0,\ldots,2^g-1}) \in \mathbb{C}^{2^{g+1}}$, and define $a^{(n)}_i, b^{(n)}_i$ for all n > 0 as previously. Assume that there exist α, β ∈ C^* and n_0 ∈ N such that Re(a^{(n_0)}_i/α) > 0 and Re(b^{(n_0)}_i/β) > 0 for all i ∈ {0, …, 2^g − 1}. Then there exist positive real constants c, C such that, assuming all choices of square roots from iteration n_0 onwards are good, we have
\[ \forall n \geq n_0,\ \forall i \in \{0, \ldots, 2^g-1\},\quad c \leq |a^{(n)}_i|, |b^{(n)}_i| < C. \]
and we define the function F : C^{2^{g+1}−2} → C^{2^{g+1}−2} as in Algorithm 20, then the Jacobian of F at
\[ \left( \frac{\theta^2_{1,\ldots,2^g-1}(z,\tau)}{\theta^2_0(z,\tau)}, \frac{\theta^2_{1,\ldots,2^g-1}(0,\tau)}{\theta^2_0(0,\tau)} \right) \]
is invertible.
Algorithm 20 Compute F(θ_{1,…,2^g−1}(z,τ)²/θ_0(z,τ)², θ_{1,…,2^g−1}(0,τ)²/θ_0(0,τ)²) with precision P.
Input: θ_{1,…,2^g−1}(z,τ)²/θ_0(z,τ)², θ_{1,…,2^g−1}(0,τ)²/θ_0(0,τ)² with absolute precision P.
1: Compute (λ_0, μ_0) = (1/θ_0(z,τ)², 1/θ_0(0,τ)²) = G(1, θ_{1,…,2^g−1}(z,τ)²/θ_0(z,τ)², 1, θ_{1,…,2^g−1}(0,τ)²/θ_0(0,τ)²).
2: Compute the individual θ_i(z,τ), θ_i(0,τ) for i ∈ {0, …, 2^g − 1}.
3: Use Equation (2.2.1) to compute θ_i(z,2τ)², θ_i(0,2τ)² for i ∈ {0, …, 2^{2g} − 1}.
4: for i = 1 to 2^g − 1 do
5:   Compute (λ_i, μ_i) = G(1, θ_{1,…,2^g−1}(M_i·z, M_i·2τ)²/θ_0(M_i·z, M_i·2τ)², 1, θ_{1,…,2^g−1}(0, M_i·2τ)²/θ_0(0, M_i·2τ)²).
6: end for
7: return (λ_0, …, λ_{2^g−1}, μ_0, …, μ_{2^g−1}).
In genus 2, Conjecture 7.2.6 simply states that the set {(JM_1)², (JM_2)², (JM_3)²} is the one needed in Conjecture 7.4.4.
The final algorithm
Conjecture 7.4.4 simply expresses that there is a set of symplectic matrices such that considering their action on quotients of theta-constants yields a function on which Newton's method can be applied. Furthermore, note that the shape of λ i , µ i is given by Theorem 2.4.4, and that they are simple functions of z and τ .
Hence, provided that the conjecture is true, one can simply compute the λ i , µ i at full precision, compute a low-precision approximation of the fundamental theta functions and theta constants, then refine this approximation using Newton's method on F. The total complexity of this method is the same as the complexity of evaluating F at full precision, since Newton's method (when doubling the working precision at each step) does not add any asymptotic complexity. The complexity of the evaluation of F is O(4 g M(P ) log P ) bit operations. Although this is exponential in the genus g, this is quasi-linear in the precision P ; hence, as it was the case between genus 1 and 2, we expect the precision for which our algorithm is better than a naive approach to be smaller as the genus grows.
Remark 7.4.5. A similar algorithm can be constructed to compute the theta-constants. In this case, the function G is simply the Borchardt mean, which converges quadratically and is homogeneous. However, there is still the problem of determining symplectic matrices so that one can apply Newton's method to the function which outputs the corresponding set of µ i , which is also unresolved.
We were able to write a prototype implementation for the computation of genus 3 theta-constants using this method. In that particular case, there are 7 quotients θ_i²/θ_0²(0,τ), while there are only 6 coefficients in τ: this requires adding one equation to the output, just as in the case of θ(z,τ) in genus 2 (Section 7.2.2). To solve this problem, we considered the action of the 6 symplectic matrices (JM_1)², (JM_2)², (JM_3)², (JM_{1,2})², (JM_{1,3})², (JM_{2,3})² as defined by Dupont (see e.g. Section 8.3.1), along with the action of the matrix J; the 7 values of μ_i one gets are not easily linked together, which may explain why the Jacobian of the system appears to be invertible, although we still do not have any proof of this invertibility.
Preliminary results confirm that the method appears to work, and that the Jacobian appears to be invertible. The algorithm appears to be much faster than the naive algorithm for precisions greater than 450 decimal digits, and is also much faster than Magma's Theta. The implementation has been made available publicly; however, a more thorough analysis of the algorithm is still needed at this point.
where
\[ g_2 = 60G_4(\tau) = 60 \times \frac{2\zeta(4)}{\omega_1^4} E_4(\tau) = \frac{1}{12}\left(\frac{2\pi}{\omega_1}\right)^4 E_4(\tau), \qquad g_3 = 140G_6(\tau) = 140 \times \frac{2\zeta(6)}{\omega_1^6} E_6(\tau) = \frac{1}{216}\left(\frac{2\pi}{\omega_1}\right)^6 E_6(\tau), \]
and where
\[ E_4(\tau) = \frac{1}{2\zeta(4)} \sum_{\omega\in\mathbb{Z}+\tau\mathbb{Z},\,\omega\neq 0} \frac{1}{\omega^4}, \qquad E_6(\tau) = \frac{1}{2\zeta(6)} \sum_{\omega\in\mathbb{Z}+\tau\mathbb{Z},\,\omega\neq 0} \frac{1}{\omega^6} \]
are the normalized Eisenstein series of weight 4 and 6.
The computation of such Eisenstein series can be done using their expansions as Lambert series; we study such expansions in Section 8.4. For now, we just mention the expansions of E_4 and E_6 (see, e.g., [START_REF] Cohen | A Course in Computational Algebraic Number Theory[END_REF]Prop. 7.4.1]):
\[ E_4(\tau) = 1 + 240\sum_{n\geq 1} \frac{n^3 q^n}{1-q^n}, \qquad E_6(\tau) = 1 - 504\sum_{n\geq 1} \frac{n^5 q^n}{1-q^n}. \]
The terms of those series are eventually decreasing to zero in a geometric fashion, which means computing E 4 and E 6 with precision P using those expansions requires O(P ) terms and a final complexity of O(M(P )P ) bit operations. We refer to Section 8.4 for a more thorough analysis. We now show how to evaluate g 2 , g 3 (and hence E 4 , E 6 ) with precision P in quasi-optimal time O(M(P ) log P ), using connections with theta-constants. Recall the Thomae formulas (Theorem 1.3.19), which link theta-constants and the polynomial defining the elliptic curve:
Theorem 8.1.2. Let P = 4X³ − g_2X − g_3 = 4(X − e_1)(X − e_2)(X − e_3). Then
\[ e_1 - e_2 = \left(\frac{\pi}{\omega_1}\right)^2 \theta_0^4(0,\tau), \quad e_1 - e_3 = \left(\frac{\pi}{\omega_1}\right)^2 \theta_1^4(0,\tau), \quad e_3 - e_2 = \left(\frac{\pi}{\omega_1}\right)^2 \theta_2^4(0,\tau). \]
We showed in Chapter 6 (Algorithm 11) a way to compute theta-constants in O(M(P) log P) bit operations. We then use the following proposition to compute g_2 and g_3:
Proposition 8.1.3. We have:
\[ g_2 = \frac{2}{3}\left(\frac{\pi}{\omega_1}\right)^4 \big(\theta_0(0,\tau)^8 + \theta_1(0,\tau)^8 + \theta_2(0,\tau)^8\big), \]
\[ g_3 = \frac{4}{27}\left(\frac{\pi}{\omega_1}\right)^6 \big(\theta_0(0,\tau)^4 + \theta_1(0,\tau)^4\big)\big(\theta_0(0,\tau)^4 + \theta_2(0,\tau)^4\big)\big(\theta_1(0,\tau)^4 - \theta_2(0,\tau)^4\big). \]
We digress briefly to discuss other formulas for g_2 and g_3 found in the literature. The first formula is rather well-known and appears in many references; for instance, it corresponds to the theta function linked to the lattice E_8, and a proof using this fact can be found in [CS93, Prop. 8.1, Chapter 4]. The second formula is not mentioned in that form as often; we derived it from similar-looking formulas in [START_REF] Clemens | A scrapbook of complex curve theory[END_REF], and it is implicit in [START_REF] Abramowitz | Handbook of mathematical functions: with formulas, graphs, and mathematical tables[END_REF] (one just needs to combine Equations 18.10.9 to 18.10.11 with Equation 18.10.16). Other formulas for g_3 are sometimes given, such as:
\[ g_3 = \frac{4}{27}\left(\frac{\pi}{\omega_1}\right)^6 \sqrt{\frac{\big(\theta_0(0,\tau)^8+\theta_1(0,\tau)^8+\theta_2(0,\tau)^8\big)^3 - 54\big(\theta_0(0,\tau)\theta_1(0,\tau)\theta_2(0,\tau)\big)^8}{2}} \]
\[ = \frac{1}{6^3}\left(\frac{2\pi}{\omega_1}\right)^6 \left( \theta_2(0,\tau)^{12} - \frac{3}{2}\theta_2(0,\tau)^8\theta_0(0,\tau)^4 - \frac{3}{2}\theta_2(0,\tau)^4\theta_0(0,\tau)^8 + \theta_0(0,\tau)^{12} \right). \]
These formulas can be proven, for instance, by looking at the first few coefficients of the Laurent series and showing that they agree, so that the difference between the functions is a holomorphic elliptic function vanishing at 0. The first one is usually derived from the expression of the discriminant ∆ of the curve as a function of theta-constants using Thomae's formulas (see e.g. [Cha85, p. 34 and proof of Cor. 1 of Theorem 5, Chap. V]) and the equation ∆ = g_2³ − 27g_3². The formula of Proposition 8.1.3 is simpler, as one does not need to extract a square root (and hence worry about picking the right sign), and it requires fewer multiplications.
Proof of Prop. 8.1.3. We prove the proposition using the relations between the coefficients of a polynomial and its roots:
\[ e_1 + e_2 + e_3 = 0, \qquad e_1e_2e_3 = \frac{g_3}{4}, \qquad e_1e_2 + e_1e_3 + e_2e_3 = -\frac{g_2}{4}. \]
Now,
\[ e_1^2 + e_2^2 + e_3^2 = (e_1+e_2+e_3)^2 - 2(e_1e_2+e_1e_3+e_2e_3) = \frac{g_2}{2}. \]
Hence:
\[ (e_1-e_2)^2 + (e_1-e_3)^2 + (e_2-e_3)^2 = 2(e_1^2+e_2^2+e_3^2) - 2(e_1e_2+e_1e_3+e_2e_3) = \frac{3g_2}{2}. \]
This proves the first formula. The second one is a consequence of e_1 = −e_2 − e_3 and so on, which means
\[ 27e_1e_2e_3 = (e_1-e_2+e_1-e_3)(e_1-e_2+e_3-e_2)(e_1-e_3+e_2-e_3). \]
These formulas, combined with Algorithm 11, prove that one can compute the coefficients g_2, g_3 of the equation of the algebraic representation of the elliptic curve with precision P in only O(M(P) log P) bit operations. This also provides a O(M(P) log P) algorithm to compute E_4, E_6. We show in Section 8.4 how this can be used to compute any E_{2k} faster than with the naive method.
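For instance, with mpmath's (naive) Jacobi theta functions, Proposition 8.1.3 can be evaluated as follows; this is an illustrative sketch of ours, using the classical correspondence θ_0 = ϑ_3, θ_1 = ϑ_4, θ_2 = ϑ_2 with nome q = e^{iπτ}, not the quasi-linear implementation of Algorithm 11.

```python
from mpmath import mp, jtheta, exp, pi, mpc, mpf

def g2_g3(tau, omega1, digits=100):
    """Evaluate g2, g3 from theta-constants via Proposition 8.1.3."""
    mp.dps = digits
    q = exp(mpc(0, 1) * pi * tau)
    t0, t1, t2 = jtheta(3, 0, q), jtheta(4, 0, q), jtheta(2, 0, q)
    # sanity check: Jacobi's identity theta0^4 = theta1^4 + theta2^4
    assert abs(t0**4 - t1**4 - t2**4) < mpf(10) ** (10 - digits)
    c = pi / omega1
    g2 = mpf(2) / 3 * c**4 * (t0**8 + t1**8 + t2**8)
    g3 = mpf(4) / 27 * c**6 * (t0**4 + t1**4) * (t0**4 + t2**4) * (t1**4 - t2**4)
    return g2, g3
```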
Computing Weierstrass's ℘ function using the θ function
Suppose that we have an elliptic curve E = C/(Zω_1 + Zω_2). Given z ∈ E, we would like to compute its image P_z ∈ E(C) by the inverse of the Abel-Jacobi map. Recall that Proposition 1.3.18 proves that P_z = (℘(z, [ω_1, ω_2]), ℘′(z, [ω_1, ω_2])); hence, we are interested in this section in the computation of ℘ and its derivative.
Some references (e.g. [Coh93, Section 7.4]) compute those quantities by using their series expansions; an algorithm of Coquereaux et al. [START_REF] Coquereaux | Iterative method for calculation of the Weierstrass elliptic function[END_REF] seems to achieve a O(M(P)P) complexity, by computing ℘ at z/2^N (with N = O(P) for reasons of accuracy) using the series expansion, then applying N duplication formulas for ℘. We outlined in Section 4.3 an algorithm with complexity O(M(P) log P) to compute ℘(z,τ). We provide another one here, based on the results of Chapter 6.
The connection between ℘(z,τ) and θ(z,τ) is well known: the formula allowing one to compute ℘ from θ appears in numerous references, e.g. [START_REF] Mumford | Tata lectures on Theta[END_REF][START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF]. We give a proof of this result, which is Theorem 8.1.4.
Adding to this, we prove Theorem 8.1.6, which gives a formula allowing the computation of ℘′ from the knowledge of θ. Combining both these theorems and the algorithm from Chapter 6 then shows that the inverse of the Abel-Jacobi map can be computed in O(M(P) log P).
Theorem 8.1.4 ([Mum83, p. 26 & p. 73], [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF]). Suppose E = C/Λ, with Λ = Zω_1 + Zω_2. Define:
\[ \wp(z, \tau) = \omega_1^2\, \wp(\omega_1 z, \Lambda) = \frac{1}{z^2} + \sum_{\omega \in \mathbb{Z}+\tau\mathbb{Z},\, \omega \neq 0} \left( \frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right). \]
We then have:
\[ \wp(z,\tau) = \frac{\pi^2}{3}\big(\theta_2^4(0,\tau) - \theta_1^4(0,\tau)\big) - \pi^2 \theta_1^2(0,\tau)\theta_2^2(0,\tau)\, \frac{\theta_0^2(z,\tau)}{\theta_3^2(z,\tau)}. \tag{8.1.1} \]
The explicit determination of the constant term (i.e. the one independent of z) is often left to the reader; hence we present the full proof here.
Proof. We start from the equation [Mum83, p. 25]
\[ \wp(z,\tau) = -\frac{d^2}{dz^2} \log \theta_3(z,\tau) + c. \]
This classical equation can be proven using the theory of elliptic functions, for instance by outlining the connections between ℘, θ and the Weierstrass functions ζ and σ; we refer to e.g. [Cha85, Sections IV.1, IV.2, V.1]. We then look at the addition formula (A_10) of [Mum83, p. 26]:
\[ \theta_3(x+u,\tau)\theta_3(x-u,\tau)\theta_0^2(0,\tau) = \theta_3^2(x,\tau)\theta_0^2(u,\tau) - \theta_0^2(x,\tau)\theta_3^2(u,\tau). \]
Some terms simplify using θ_3(0,τ) = 0 and θ_0′(0,τ) = 0, which are consequences of the parity of these functions. This gives:
\[ \wp(z,\tau) = c - \frac{\theta_3'(0,\tau)^2}{\theta_0(0,\tau)^2} \times \frac{\theta_0(z,\tau)^2}{\theta_3(z,\tau)^2}, \]
where θ_3′(0,τ) is the derivative in z of θ_3(z,τ) at z = 0. We then use Jacobi's derivative formula [Mum83, p. 64]: θ_3′(0,τ) = −πθ_0(0,τ)θ_1(0,τ)θ_2(0,τ), which means:
\[ \wp(z,\tau) = c - \frac{\big(-\pi\theta_0(0,\tau)\theta_1(0,\tau)\theta_2(0,\tau)\big)^2}{\theta_0(0,\tau)^2} \times \frac{\theta_0(z,\tau)^2}{\theta_3(z,\tau)^2} = c - \pi^2\theta_1(0,\tau)^2\theta_2(0,\tau)^2 \times \frac{\theta_0(z,\tau)^2}{\theta_3(z,\tau)^2}. \]
Note that the equation can also be rewritten as
\[ \wp(z,\tau) = c - \pi^2 \exp(-2\pi i z)\, \theta(1/2,\tau)^2 \theta(\tau/2,\tau)^2\, \frac{\theta(z,\tau)^2}{\theta(z+(\tau+1)/2,\tau)^2}. \]
All that remains is determining c. We have θ((1+τ)/2, τ) = θ_3(0,τ) = 0, since θ_3 is odd. This means that c = ℘((τ+1)/2, τ) = ω_1² ℘((ω_1+ω_2)/2, Λ). Thomae's formula gives
\[ \left(\frac{\pi}{\omega_1}\right)^2 \big(\theta_2^4(0,\tau) - \theta_1^4(0,\tau)\big) = 2e_3 - e_1 - e_2 = 3e_3, \]
since the sum of the roots is 0. Hence
\[ c = \omega_1^2\, \wp\!\left(\frac{\omega_1+\omega_2}{2}, \Lambda\right) = \frac{\pi^2}{3}\big(\theta_2^4(0,\tau) - \theta_1^4(0,\tau)\big). \]
Note 1.3.16 showed that the value of ℘ is not changed after reduction of τ (so that τ ∈ F) and of z (so that 0 ≤ Im(z) < Im(τ)/2). Those are exactly the conditions we required in Algorithm 15 to compute θ. Hence, one can directly use the values of θ at reduced arguments to compute ℘, without even needing to compute the exponential factor which appears in Proposition 2.1.7.
Remark 8.1.5. The algorithm we presented in Chapter 6 did not compute θ_3(z,τ); however, Equation (2.5.7) and the equation (cf. [Mum83, p. 22])
\[ \theta_3^2(z,\tau)\theta_0^2(0,\tau) = \theta_1^2(z,\tau)\theta_2^2(0,\tau) - \theta_2^2(z,\tau)\theta_1^2(0,\tau) \]
can be combined to prove that
\[ \wp(z,\tau) = \frac{\pi^2}{3}\big(\theta_2^4(0,\tau) + \theta_1^4(0,\tau)\big) - \pi^2\theta_1^2(0,\tau)\theta_2^2(0,\tau)\, \frac{\theta_1^2(z,\tau)\theta_1^2(0,\tau) + \theta_2^2(z,\tau)\theta_2^2(0,\tau)}{\theta_1^2(z,\tau)\theta_2^2(0,\tau) - \theta_2^2(z,\tau)\theta_1^2(0,\tau)}, \]
which only uses quantities that Algorithm 15 computes.
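As an illustration, Equation (8.1.1) can be evaluated directly with mpmath's (naive) theta functions; this sketch of ours uses the correspondences θ_0(z,τ) = ϑ_3(πz, q) and θ_3(z,τ)² = ϑ_1(πz, q)², with q = e^{iπτ}, and is not a substitute for the quasi-linear Algorithm 15.

```python
from mpmath import mp, jtheta, exp, pi, mpc

def wp_from_theta(z, tau, digits=100):
    """Weierstrass wp(z, tau) for the lattice Z + tau*Z via Eq. (8.1.1)."""
    mp.dps = digits
    q = exp(mpc(0, 1) * pi * tau)
    t1, t2 = jtheta(4, 0, q), jtheta(2, 0, q)    # theta1, theta2 of the text
    num = jtheta(3, pi * z, q) ** 2              # theta0(z, tau)^2
    den = jtheta(1, pi * z, q) ** 2              # theta3(z, tau)^2 (up to sign)
    return pi**2 / 3 * (t2**4 - t1**4) - pi**2 * t1**2 * t2**2 * num / den
```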
We now prove a second formula, linking this time ℘′ to θ. A similar formula providing this link, of which we were not aware, seems to be already known [AS64, Equation 18.10.6]. That formula is a bit different from ours, suggesting that a different method or proof was used; we do not know of a proof for it.
Theorem 8.1.6.
\[ \wp'(z,\tau) = -\pi^3\, \theta_0^3(0,\tau)\theta_1^3(0,\tau)\theta_2^3(0,\tau)\, \frac{\theta_3(2z,\tau)}{\theta_3^4(z,\tau)}. \tag{8.1.2} \]
Proof. We start from the relation $\wp(z,\tau) = c - \frac{\theta_3'(0,\tau)^2}{\theta_0(0,\tau)^2} \times \frac{\theta_0(z,\tau)^2}{\theta_3(z,\tau)^2}$, which we proved while proving the previous theorem. Taking the derivative in z:
\[ \wp'(z,\tau) = -\frac{\theta_3'(0,\tau)^2}{\theta_0(0,\tau)^2} \times \frac{2\theta_3^2(z,\tau)\theta_0(z,\tau)\theta_0'(z,\tau) - 2\theta_0^2(z,\tau)\theta_3(z,\tau)\theta_3'(z,\tau)}{\theta_3^4(z,\tau)}. \]
Furthermore, Equation (A_10) in [Mum83, p. 26] is:
\[ \theta_3(x+u,\tau)\theta_3(x-u,\tau)\theta_0^2(0,\tau) = \theta_3^2(x,\tau)\theta_0^2(u,\tau) - \theta_0^2(x,\tau)\theta_3^2(u,\tau). \]
Take the derivative in u of the latter:
\[ \theta_3'(x+u,\tau)\theta_3(x-u,\tau)\theta_0^2(0,\tau) - \theta_3(x+u,\tau)\theta_3'(x-u,\tau)\theta_0^2(0,\tau) = 2\theta_3^2(x,\tau)\theta_0(u,\tau)\theta_0'(u,\tau) - 2\theta_0^2(x,\tau)\theta_3(u,\tau)\theta_3'(u,\tau). \]
Taking x = u = z, we notice that the right-hand side is the numerator which appears in the derivative of ℘; hence we have:
\[ \wp'(z,\tau) = -\frac{\theta_3'(0,\tau)^2}{\theta_0(0,\tau)^2} \times \frac{\theta_3'(2z,\tau)\theta_3(0,\tau)\theta_0^2(0,\tau) - \theta_3(2z,\tau)\theta_3'(0,\tau)\theta_0^2(0,\tau)}{\theta_3^4(z,\tau)}. \]
Using the parity of θ_3 (so that θ_3(0,τ) = 0) and Jacobi's derivative formula finally gives:
\[ \wp'(z,\tau) = -\pi^3\theta_0^3(0,\tau)\theta_1^3(0,\tau)\theta_2^3(0,\tau)\, \frac{\theta_3(2z,\tau)}{\theta_3^4(z,\tau)}. \]
This formula shows that one can compute ℘′ in the same running time as ℘, that is to say O(M(P) log P). However, if one has already computed ℘(z, Λ) and g_2, g_3, it is more efficient to use the differential equation satisfied by ℘:
\[ \wp'(z,\Lambda)^2 = 4\wp(z,\Lambda)^3 - g_2\,\wp(z,\Lambda) - g_3. \]
This gives ℘′(z,Λ)² directly from the knowledge of ℘(z,Λ). The correct sign for ℘′ can then be determined by computing low-precision (e.g. 10 significant digits) approximations of θ, then using the formula in Theorem 8.1.6 to compute a low-precision approximation of ℘′, which yields the correct sign.
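A sketch of this sign-determination strategy (ours; wp_prime_coarse stands for a hypothetical low-precision evaluator of Equation (8.1.2)):

```python
from mpmath import mp, sqrt

def wp_prime(z, tau, g2, g3, wp_value, wp_prime_coarse, digits=100):
    """wp'(z) from wp'^2 = 4 wp^3 - g2 wp - g3 (lattice Z + tau*Z),
    with the branch fixed by a coarse value from Theorem 8.1.6."""
    mp.dps = digits
    s = sqrt(4 * wp_value**3 - g2 * wp_value - g3)
    c = wp_prime_coarse(z, tau)      # ~10 significant digits suffice
    return s if abs(s - c) <= abs(s + c) else -s
```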
Timings
We implemented Algorithm 6 in MAGMA, and we also wrote a MAGMA script which calls our MPC implementation of Algorithm 15 and applies Equation (8.1.1). We then measured the time taken by the computation of ℘(0.123456789 + 0.123465789i, [1, 0.23456789 + 1.23456789i]) at various precisions P, using an internal precision of P (i.e. without attempting to compensate for the potential precision loss by working at precision 2P or 3P). The time needed by Magma to parse the results given by the MPC program implementing Algorithm 15 is not counted in the running times.
This comparison is somewhat biased against Algorithm 6, as it is implemented in Magma (which has more overhead than MPC) and we do not take into account the fact that the working precision one needs to ensure a correct result up to 2 -P is smaller for this algorithm than for the other one. Regardless, the timings presented in Table 8.2 show that Algorithm 6 is faster than the other method.
To introduce a point of comparison with currently used methods, we also measured the timings of the PARI/GP [START_REF]PARI/GP version 2.7.2[END_REF] function ellwp, by calling ellwp([1, 0.23456789+1.23456789i], 0.123456789+0.123465789i); we used PARI/GP because we did not find a function performing this computation in MAGMA. The function performing the computation in the library is called ellwpnum_all; it performs the summation of the series described in [Coh93, Prop. 7.4.4], which seems to give a O(M(P)P) algorithm. As Pari is implemented in C, the comparison is once again biased against our Magma implementation of Algorithm 6; however, this algorithm is still faster than the Pari one, even if we were to use internal precision 2P to compensate for the potential loss of precision. It is likely that an implementation of Coquereaux et al.'s algorithm [START_REF] Coquereaux | Iterative method for calculation of the Weierstrass elliptic function[END_REF] would yield similar results, since its asymptotic complexity is also greater than that of our algorithms.
Remark 8.1.8. Both algorithms can be modified to compute n different values ℘(z k , [ω 1 , ω 2 ]) faster than by just applying the algorithm n times. Indeed, both algorithms require the computation of theta-constants, which can then be cached and reused. However, the amount of computation saved is hard to estimate, and we do not know which algorithm would end up being faster, as we did not investigate this further.
Computing the Abel-Jacobi map
We now consider the problem of computing the Abel-Jacobi map, i.e. given a curve E(C) defined with the equation y 2 = x 3 + ax + b, compute:
• a lattice Λ ⊂ C such that E(C) ≅ C/Λ;
• for any Q = (x, y) ∈ E(C), its elliptic logarithm z_Q.
Using the Landen isogeny
Both of these problems can be solved directly using the Landen isogeny, as we mentioned in Chapter 4. The paper [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF] gives relevant formulas, theorems, and algorithms in the case where the roots of the polynomial 4x 3 + ax + b are real; the general case is explained in [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF].
We briefly recall the methods outlined in Chapter 4 for the sake of completeness.
Recall that the Landen isogeny can be used to provide a change of variables in the elliptic integrals that appear in the Abel-Jacobi maps -both to compute the periods (complete elliptic integrals) and the elliptic logarithm (incomplete elliptic integrals). However, recall that the Landen isogeny is defined using a certain numbering of the roots of the polynomial defining the elliptic curve; one needs to pick the correct numbering (ultimately, the right square root) to see the connection with quadratically convergent AGM sequences (e.g. optimal ones, in the sense defined in Section 3.2) and get a sequence of elliptic integrals that is quadratically convergent.
This reduces the computation of the periods to the computation of optimal AGM sequences, as per Theorem 4.2.12, while the computation of the elliptic logarithm is basically the computation of an optimal AGM sequence while keeping track of the integration bounds (see Algorithm 5). The cost of both methods is quasi-optimal, i.e. O(M(P ) log P ).
Dupont's algorithm
Another algorithm, outlined in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF]p. 195], computes τ associated to a given elliptic curve; we discuss it here, also using [Sil86, p.50] to make the link with Thomae's formulas explicit. This algorithm generalizes well to higher genera (cf. Section 8.2 and Section 8.3).
Rewriting the equation defining E as y² = (x − e_1)(x − e_2)(x − e_3), we have that E is isomorphic to a curve of the form
\[ E : y^2 = x(x-1)(x-\lambda) \]
with λ ≠ 0, 1: this is called the Legendre form. In fact, following the proof of [Sil86, p. 50] shows that
\[ \lambda = \frac{e_3 - e_1}{e_2 - e_1}. \]
However, note that this value of λ depends on a choice in the numbering of the roots of the polynomial x 3 + ax + b. The values corresponding to the other numberings of the roots are
\[ \lambda,\ \frac{1}{\lambda},\ 1-\lambda,\ \frac{1}{1-\lambda},\ \frac{\lambda}{\lambda-1},\ \frac{\lambda-1}{\lambda}. \]
In Section 4.2, the choice of a numbering of the roots determined the beginning of the short chain of lattices (i.e. the lattices Λ_2 ⊂ Λ_1 ⊂ Λ_0) or the first two isogenies of the isogeny chain. Each of those led to different periods being computed, and in general to a different value of τ; there are in general 6 different values of τ one can compute from the elliptic curve E. The link between the 6 values of τ and the 6 values of λ is given by Thomae's formulas (see Theorem 1.3.19), which give λ = θ_1(0,τ)⁴/θ_0(0,τ)⁴. The algorithm then consists in the following steps:
1. Compute a low-precision approximation of the τ corresponding to the curve, using an algorithm that evaluates complex integrals, such as Gaussian quadrature (such algorithms are discussed e.g. in [Dup06, Section 9.2.1]);
2. Reduce τ so that it is in the fundamental domain;
3. Evaluate θ_1(0,τ)⁴/θ_0(0,τ)⁴ with low precision, then compare this value to λ in order to determine the correct numbering of the roots: this gives a high-precision value of λ = θ_1(0,τ)⁴/θ_0(0,τ)⁴ with τ ∈ F;
4. Use the fact that Re(θ_1(0,τ)²/θ_0(0,τ)²) ≥ 0 (Theorem 6.2.6) to obtain θ_1(0,τ)²/θ_0(0,τ)² with high precision;
5. Use the formula
\[ \tau = i\, \frac{\mathrm{AGM}\big(1,\ \theta_1(0,\tau)^2/\theta_0(0,\tau)^2\big)}{\mathrm{AGM}\big(1,\ \sqrt{1 - \theta_1(0,\tau)^4/\theta_0(0,\tau)^4}\big)}, \]
valid for τ in the fundamental domain, to recover τ with high precision.
This algorithm runs in quasi-linear time. It requires two evaluations of the AGM and two square root extractions; this is comparable to the algorithm for computing periods using the Landen transform (Theorem 4.2.12), which requires 3 square roots but outputs ω 1 , ω 2 .
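Step 5 can be sketched as follows (Python/mpmath, our own illustration; we assume mpmath's complex agm picks branches compatible with the good choices for such inputs, which holds in practice for τ in the fundamental domain):

```python
from mpmath import mp, agm, sqrt, mpc

def tau_from_lambda(lam, digits=100):
    """Recover tau from lambda = theta1(0,tau)^4 / theta0(0,tau)^4."""
    mp.dps = digits
    kp = sqrt(lam)              # theta1^2/theta0^2: take the Re >= 0 root
    if kp.real < 0:
        kp = -kp
    return mpc(0, 1) * agm(1, kp) / agm(1, sqrt(1 - lam))
```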
Computation of the elliptic logarithm
The previous algorithm exploited the explicit link between the curve equation and the theta-constants, then used the AGM (in fact, the function f_τ, on which Newton's method is applied in order to get the theta-constants) to compute τ. We show how a similar idea can be applied to the computation of z, i.e. of the elliptic logarithm. This algorithm can also be generalized to higher genera; see Section 8.2 and Section 8.3. Let P = (x, y) ∈ E(C), and denote by z its elliptic logarithm. Theorem 8.1.4 shows that
\[ \omega_1^2 x = \omega_1^2\, \wp(z, [\omega_1, \omega_2]) = \frac{\pi^2}{3}\big(\theta_2^4(0,\tau) - \theta_1^4(0,\tau)\big) - \pi^2\theta_1^2(0,\tau)\theta_2^2(0,\tau)\, \frac{\theta_0^2(z,\tau)}{\theta_3^2(z,\tau)}. \]
Hence
\[ \frac{\theta_3^2(z,\tau)}{\theta_0^2(z,\tau)} = \frac{\pi^2\theta_1^2(0,\tau)\theta_2^2(0,\tau)}{\frac{\pi^2}{3}\big(\theta_2^4(0,\tau) - \theta_1^4(0,\tau)\big) - \omega_1^2 x}. \]
The final steps of Algorithm 21 then read:
8: Compute a low-precision approximation of ℘′(z, [ω_1, ω_2]), using e.g. Theorem 8.1.6; if ℘′(z, [ω_1, ω_2]) = −y, set z ← −z.
9: return z.
The cost of this algorithm is roughly the cost of applying two AGMs, then applying F. On the other hand, [CT13, Algorithm 28] (i.e. Algorithm 5 of Section 4.2.4) requires an Arctan computation (performed using the AGM), as well as an extra inversion and square root extraction (and a few multiplications) for every AGM iteration. We implemented Algorithm 21 and [CT13, Algorithm 28] in MAGMA; some timings are presented in Table 8.3, and show that the method presented in this section is slower than the method of [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF]
Computing the Abel-Jacobi map
Going from the algebraic representation to the analytic representation involves computing the periods and the genus 2 hyperelliptic logarithm. This corresponds to the problem of evaluating hyperelliptic integrals of the form ∫ x^i dx/√(P(x)) (i = 0, 1), where P is the polynomial defining the curve. This problem is discussed in the case where P has real roots in [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF]: much as in the genus 1 case (see Chapter 4), there is a change of variables which corresponds to an isogeny between hyperelliptic curves, and which is associated to quadratically convergent sequences. Hence, after O(log P) steps, the integral can be evaluated easily, which gives a O(M(P) log P) algorithm to compute complete integrals (and hence the periods), and a similar one to compute incomplete integrals (and hence the hyperelliptic logarithm). However, to the best of our knowledge, this algorithm has not been generalized to the general case, unlike in genus 1, where [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF] extended the algorithms of [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF] to the complex case.
An easier way could be to proceed as in Section 8.1.4. This method has the advantage of being generalizable to higher genus; we outline the method here, and refer to the genus g algorithms (Algorithm 22 and Algorithm 23) for the full method.
First of all, as described in [Dup06, Chapitre 9], one can compute the genus 2 periods by computing theta-constants using Thomae's formulas and the Borchardt mean, then applying the Borchardt mean to appropriately chosen quotients; in fact, we can use the same ones as in Section 7.2.2. As in genus 1, the ordering of the roots has its importance in Thomae's formula, but this does not affect the running time. This allows one to recover τ from the equation of the curve in O(M(P ) log P ) provided theta-constants can be computed in this amount of time (i.e. provided the Jacobian of the system is invertible, which is covered by Conjecture 7.2.6). We refer to [Dup06, Chapitre 9] for a more precise exposition of the algorithm in this case.
As for the computation of the hyperelliptic logarithm, we can use the work of Cosset [Cos11, Chapter 5], which allows one to go from theta coordinates to Mumford coordinates, using explicit formulas which are valid in any genus but require the computation of theta-constants. We can thus retrieve the value of quotients of theta function corresponding to the Mumford coordinates of the divisor, then apply the function F we defined in Section 7.2.2 to appropriately chosen quotients, which allows us to compute λ 1 and λ 2 , and in the end z. All in all, this gives a O(M(P ) log P ) algorithm to compute the periods and the hyperelliptic logarithm, provided Conjecture 7.2.6 holds.
Computing the inverse of the Abel-Jacobi map
Given the periods of the lattice representing the Jacobian, computing the coefficients of a hyperelliptic equation corresponding to the Jacobian seems feasible using Thomae's formulas or, more precisely, formulas derived from them, sometimes called the "Umemura formulas" or the "reverse Thomae formulas"; these formulas express the quotients (a_k − a_i)/(a_k − a_j) as functions of the (squares of) theta-constants of characteristic 2 associated to the curve. We refer to [START_REF] Mumford | Tata lectures on Theta[END_REF]IIIc] or [Cos11, Theorem 3.1.20 and p. 151] for a discussion of these formulas. In genus 2, it is possible to compute a model of the curve in several different ways; see [START_REF] Costello | Constructing abelian surfaces for cryptography via Rosenhain invariants[END_REF] for a summary of these models. For instance, the model outlined in [START_REF] Gaudry | Fast genus 2 arithmetic based on Theta functions[END_REF] or [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF] gives:
\[ C : y^2 = x(x-1)(x-\lambda_1)(x-\lambda_2)(x-\lambda_3) \quad\text{with}\quad \lambda_1 = \frac{\theta_0^2(0,\tau)\theta_2^2(0,\tau)}{\theta_3^2(0,\tau)\theta_1^2(0,\tau)},\ \lambda_2 = \frac{\theta_2^2(0,\tau)\theta_{12}^2(0,\tau)}{\theta_1^2(0,\tau)\theta_{15}^2(0,\tau)},\ \lambda_3 = \frac{\theta_0^2(0,\tau)\theta_{12}^2(0,\tau)}{\theta_3^2(0,\tau)\theta_{15}^2(0,\tau)}. \]
Note that the denominators are never zero since, in genus 2, we have θ_i(0,τ) ≠ 0 if and only if θ_i is even, which is true of θ_1, θ_3, θ_15. One can also, given a point z in the lattice corresponding to the Jacobian, compute the Mumford coordinates of the divisor which corresponds to it under the Abel-Jacobi map. The method is simply to evaluate the theta functions at z and τ, then use [Cos11, Chapter 5] to recover the Mumford coordinates of the divisor. The evaluation of θ can be done in O(M(P) log P), provided Conjecture 7.2.6 holds; hence the inverse of the Abel-Jacobi map can be computed in quasi-linear time under this conjecture.
Extending the strategy to higher genus
Finally, we give a brief outline of how one could extend the strategy to the computation of the Abel-Jacobi map in genus g.
Computing the Abel-Jacobi map
We look at the problem of computing the period matrix τ and hyperelliptic logarithms of points of a hyperelliptic curve given by an equation y 2 = f (x). We follow and generalize the approach of [Dup06, Section 9.2.3], using for instance the same matrices.
Define $J = \begin{pmatrix} 0 & -I_g \\ I_g & 0 \end{pmatrix} \in \mathrm{Sp}_{2g}(\mathbb{Z})$ and
\[ M_j = \begin{pmatrix} I_g & \delta_{j,j} \\ 0 & I_g \end{pmatrix} \in \mathrm{Sp}_{2g}(\mathbb{Z}), \qquad M_{j,k} = \begin{pmatrix} I_g & \delta_{j,k}+\delta_{k,j} \\ 0 & I_g \end{pmatrix} \in \mathrm{Sp}_{2g}(\mathbb{Z}); \]
then
\[ (JM_j)^2 = \begin{pmatrix} -I_g & -\delta_{j,j} \\ \delta_{j,j} & \delta_{j,j}-I_g \end{pmatrix}, \qquad (JM_{j,k})^2 = \begin{pmatrix} -I_g & -\delta_{j,k}-\delta_{k,j} \\ \delta_{j,k}+\delta_{k,j} & \delta_{j,j}+\delta_{k,k}-I_g \end{pmatrix}. \]
We then have, by Theorem 2.4.4:
\[ \theta_0^2\big((JM_j)^2\cdot z, (JM_j)^2\cdot\tau\big) = \zeta_j\, \tau_{j,j}\, e^{2i\pi z_j^2/\tau_{j,j}}\, \theta_{\sigma_j(0)}^2(z,\tau), \tag{8.3.1} \]
\[ \theta_0^2\big((JM_{j,k})^2\cdot z, (JM_{j,k})^2\cdot\tau\big) = \zeta_{j,k}\,\big(\tau_{j,k}^2 - \tau_{j,j}\tau_{k,k}\big)\, f(z,\tau)\, \theta_{\sigma_{j,k}(0)}^2(z,\tau), \]
where ζ_j and ζ_{j,k} are fourth roots of unity.
For z = 0 we find the equations of [Dup06, p. 202]:
\[ \theta_0^2\big(0, (JM_j)^2\cdot\tau\big) = \zeta_j\,\tau_{j,j}\,\theta_{\sigma_j(0)}^2(0,\tau), \qquad \theta_0^2\big(0, (JM_{j,k})^2\cdot\tau\big) = \zeta_{j,k}\,\big(\tau_{j,k}^2-\tau_{j,j}\tau_{k,k}\big)\,\theta_{\sigma_{j,k}(0)}^2(0,\tau). \]
These formulas allow one to recover the coefficients of τ following the algorithm presented in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF]. This is Algorithm 22, which computes the g(g+1)/2 coefficients of τ from the 2^g − 1 quotients of squares of theta-constants. Since the Borchardt mean converges quadratically, Algorithm 22 requires O(M(P) log P) operations per coefficient.
Computing the hyperelliptic logarithm can be done using only Equation (8.3.1), i.e. the action of (JM j ) 2 on the values of the theta functions. The algorithm is similar to the previous one; we present it in Algorithm 23.
Algorithm 22 Compute the period matrix τ associated to a hyperelliptic curve C of genus g.
Input: the equation of a curve C : y² = ∏_{i=1}^{2g+2}(x − x_i), and more precisely the roots x_i with absolute precision P.
Output: τ ∈ H_g with absolute precision P.
1: Compute τ with low precision (e.g. a few dozen bits) by evaluating the complete hyperelliptic integrals with low precision.
2: Use Thomae's formulas (see [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF], Théorème 3.1.19, or [Mum84, Section 8]) to compute the fourth powers of the quotients of the squares of theta-constants associated to the curve.
3: Use the low-precision approximation of τ to extract the correct square roots and compute θ_1(0,τ)²/θ_0(0,τ)², …, θ_{2^g−1}(0,τ)²/θ_0(0,τ)².
4: Compute low-precision approximations of the θ_i(0, 2^k τ); use these to pick the correct signs (those corresponding to the values of theta-constants) in the Borchardt mean of (1, θ_1(0,τ)²/θ_0(0,τ)², …, θ_{2^g−1}(0,τ)²/θ_0(0,τ)²). This gives 1/θ_0(0,τ)².
5: For i = 1, …, 2^g − 1, compute θ_i²(0,τ) = (θ_i²(0,τ)/θ_0²(0,τ)) × θ_0(0,τ)².
6: Use the τ-duplication formulas to compute the θ_i²(0, 2τ) for i > 2^g − 1.
7: for j = 1 … g do
8:   Compute B_g(1, θ²_{σ_j(1)}(0,2τ)/θ²_{σ_j(0)}(0,2τ), …, θ²_{σ_j(2^g−1)}(0,2τ)/θ²_{σ_j(0)}(0,2τ)) = 1/θ_0(0, (JM_j)²·2τ)²; retrieve τ_{j,j}.
9: end for
10: for j, k = 1 … g do
11:  Compute B_g(1, θ²_{σ_{j,k}(1)}(0,2τ)/θ²_{σ_{j,k}(0)}(0,2τ), …, θ²_{σ_{j,k}(2^g−1)}(0,2τ)/θ²_{σ_{j,k}(0)}(0,2τ)) = 1/θ_0(0, (JM_{j,k})²·2τ)²; retrieve τ_{j,k}.
12: end for
13: return τ.
Algorithm 23 Compute the hyperelliptic logarithm of a divisor on a genus g hyperelliptic curve.
Input: the equation of a curve C : y² = ∏_{i=1}^{2g+2}(x − x_i), the Mumford coordinates of the divisor D, and the period matrix τ associated to C, all with absolute precision P.
Output: z ∈ C^g with absolute precision P.
1: Compute the quotients of squares of theta-constants using Thomae's formulas, as in Algorithm 22.
2: Use these quotients and the formulas in [Cos11, Section 5.3] to compute the quotients of the theta functions corresponding to D, and in particular θ_1(z,τ)²/θ_0(z,τ)², …, θ_{2^g−1}(z,τ)²/θ_0(z,τ)².
3: Use the τ-duplication formulas to compute the corresponding quotients at 2τ.
4: for j = 1 to g do
5:   Compute G(1, θ²_{σ_j(1)}(z,2τ)/θ²_{σ_j(0)}(z,2τ), …) = 1/θ_0²((JM_j)²·z, (JM_j)²·2τ).
6:   Retrieve z_j from the exponential factor of Equation (8.3.1).
7: end for
8: return z.
Theorem 8.3.1. The Abel-Jacobi map can be computed in O(M(P) log P) bit operations.
Proof. The complexity of the computation is dominated by the cost of evaluating G. Note that G is always evaluated at quotients of theta functions, which means that the sequences a (n) i necessarily converge quadratically, and hence that the evaluation requires only O(M(P ) log P ) bit operations. This means that each coefficient can be computed in O(M(P ) log P ).
Computing the inverse of the Abel-Jacobi map
Going from the analytical representation (points on the torus) to the algebraic representation (Mumford coordinates of the corresponding divisor) can, once again, be done via the theta functions; the formulas in [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF] allow one to compute Mumford coordinates from the theta coordinates (i.e. values of quotients of theta functions). The problem thus reduces to that of computing genus g theta functions. As we outlined in Section 7.4, the strategy which gives quasi-linear time evaluation of theta functions in genus 1 and 2 may be generalizable to arbitrary genera, but a few obstacles prevent this generalization from being completely straightforward.
The first obstacle is that we need a proof that the function G can be evaluated in quasi-linear time for any arguments. Note that, by the arguments of Proposition 2.1.9, the sequences (θ(z, 2^k τ))_k converge quadratically, which means that
\[ G\left(1, \frac{\theta^2_{1,\ldots,2^g-1}(z,\tau)}{\theta^2_0(z,\tau)}, 1, \frac{\theta^2_{1,\ldots,2^g-1}(0,\tau)}{\theta^2_0(0,\tau)}\right) \]
can be evaluated in quasi-linear time. We simply need to prove that this running time holds for any arguments, or at least for arguments which are at a small distance from quotients of thetas; this is needed to prove that the evaluation of the Jacobian matrix using finite differences can be done in quasi-linear time. We think this is a reasonable assumption to make, the biggest obstacle to a proof being that our proofs in genus 1 (Section 6.2.4) and genus 2 (Section 7.2.3) are very technical and thus tricky to generalize to arbitrary genus.
Table 8.4: for each genus, the number of quotients (θ_i²/θ_0²(z,τ), θ_i²/θ_0²(0,τ)) of fundamental thetas, versus the number of λ, μ one can compute with G by using the action of (JM_j)² and (JM_{j,k})² on the quotients. If both numbers do not match, one needs to consider the action of more matrices on the quotients.
The biggest obstacle is to manage to define a function F : C^{2^{g+1}−2} → C^{2^{g+1}−2} with an invertible Jacobian from the function G, such that Equation (7.4.2) holds. In genus 1 and 2, this was achieved by computing the value of G at several different quotients, corresponding to the values of the fundamental theta functions under the action of different matrices: we considered the action of S in genus 1, and the action of (JM_1)², (JM_2)² and (JM_{1,2})² in genus 2. Note that we did not manage to prove that this approach yields invertible Jacobians in genus 2. In higher genus, using only the action of (JM_j)² and (JM_{j,k})² to define F is not sufficient, as highlighted by Table 8.4; we need to consider a larger set of matrices, so that the dimensions match.
Another approach was to include the equations defining the theta variety among the quantities to which Newton's method is applied, i.e. to make the output of F contain f_i(X) for each equation f_i(X) = 0 defining the theta variety. For this method to be effective, one has to find these equations explicitly; and once again, this does not appear to straightforwardly yield a proof that the Jacobian is invertible.
Assuming these problems can be solved, we would get a O(M(P ) log P ) algorithm for the computation of the inverse of the genus g Abel-Jacobi map.
Interlude: a faster algorithm to compute E 2k (τ )
This section is unrelated to the rest of this thesis; however, it describes a result that does not appear to be known yet. Recall the definition of the normalized Eisenstein series of weight 2k:
\[ E_{2k}(\tau) = \frac{1}{2\zeta(2k)} \sum_{\omega \in \mathbb{Z}+\tau\mathbb{Z},\ \omega\neq 0} \frac{1}{\omega^{2k}}. \]
We look here at the problem of computing E 2k (τ ) for τ ∈ F with absolute precision P .
Naive algorithm for the Eisenstein series
Putting q = e iπτ , we can start from the expression of Eisenstein series as a function of the divisor function (see e.g. [Mum83, Section 15] or [Cha85, Section VI.2]) and rewrite the sum as a Lambert series (e.g. [AS64, Section 24.3.3]). This yields:
\[ E_{2k}(\tau) = 1 + \frac{2}{\zeta(1-2k)} \sum_{n\geq 1} \frac{n^{2k-1}q^{2n}}{1-q^{2n}}. \]
A naive algorithm to compute E 2k (τ ) with absolute precision P is to evaluate ζ(1 -2k) and the series with sufficient precision.
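A direct transcription of this naive method (our own Python/mpmath sketch; the term count follows the analysis below and is heuristic):

```python
from mpmath import mp, exp, pi, mpc, zeta, mpf

def E2k_naive(tau, k, digits=100):
    """Naive evaluation of E_{2k}(tau) from its Lambert series."""
    mp.dps = digits
    q2 = exp(2 * mpc(0, 1) * pi * tau)       # q^2, with q = exp(i*pi*tau)
    nterms = int((3.33 * digits + k) / tau.imag) + 10
    s, qn = mpf(0), mpc(1)
    for n in range(1, nterms):
        qn *= q2                              # qn = q^(2n)
        s += n ** (2 * k - 1) * qn / (1 - qn)
    return 1 + 2 / zeta(1 - 2 * k) * s
```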
Note that ζ(1 − 2k) = −B_{2k}/(2k), and
\[ \left| \frac{2k}{B_{2k}} \right| \sim \frac{(\pi e)^{2k}}{2\sqrt{\pi}\, k^{2k-1/2}}. \]
This estimate will be useful later.
As for the series, we write
\[ \left| \sum_{n\geq B} \frac{n^{2k-1}q^{2n}}{1-q^{2n}} \right| \leq \sum_{n\geq B} \frac{n^{2k-1}|q|^{2n}}{|1-q^{2n}|} \leq \sum_{n\geq B} \frac{n^{2k-1}|q|^{2n}}{1-|q|^{2n}} \leq \frac{1}{1-|q|^{2B}} \sum_{n\geq B} n^{2k-1}|q|^{2n} \leq 2 \sum_{n\geq B} e^{(2k-1)\log n - 2n\pi \mathrm{Im}(\tau)}. \]
For τ, k fixed, the function f : n ↦ |2/ζ(1−2k)|·n^{2k−1}|q|^{2n} is at first increasing, then decreases to 0. We look at its derivative:
\[ (2k-1)n^{2k-2}|q|^{2n} - n^{2k-1} \cdot 2\,\mathrm{Im}(\tau)\pi\,|q|^{2n} = 0 \iff 2k-1 = 2n\pi\,\mathrm{Im}(\tau). \]
Put N_0 = (2k−1)/(2π Im(τ)); then f is increasing for n < N_0, and decreasing for n > N_0. Its maximal value is then
\[ f(N_0) = \left| \frac{8k}{B_{2k}} \right| e^{(2k-1)\left(\log\frac{2k-1}{2\pi\mathrm{Im}(\tau)} - 1\right)} \leq e^{2k\log\pi + 2k + 2\log 2 - \frac{1}{2}\log\pi - (2k-\frac{1}{2})\log k + (2k-1)\log(k/(\pi\mathrm{Im}(\tau)))} \leq e^{2k - (2k-1)\log(\mathrm{Im}(\tau))} \leq e^{(2-\sqrt{3})k}, \]
if we suppose that τ ∈ F. This also means that, in order for the result to be accurate up to 2^{−P}, we need to work with O(P+k)-bit numbers, to take into account the maximal size of the integral part and the precision loss.
The convergence of the sequence (f(n))_{n∈N} is somewhat geometric; the ratio test gives
\[ |q|^2\left(1+\frac{1}{n}\right)^{2k-1} \xrightarrow[n\to\infty]{} |q|^2 < 1. \]
Suppose that (f(n))_{n>N_0} decreases in a geometric fashion, by a factor |q|² for each term (which is better than what happens in practice). We can then estimate the number of terms needed before f(n) ≤ 2^{−P}: the function rises for n < N_0, then decreases to 1, then decreases to 2^{−P}. This means that f(n) ≤ 2^{−P} for
\[ n = O\big(N_0 + \log(f(N_0))/\mathrm{Im}(\tau) + P/\mathrm{Im}(\tau)\big) = O\left(k\,\frac{\frac{1}{\pi} + 2 - \sqrt{3}}{\mathrm{Im}(\tau)} + \frac{P}{\mathrm{Im}(\tau)}\right) = O\left(\frac{P+k}{\mathrm{Im}(\tau)}\right). \]
An improvement over this algorithm is outlined in [START_REF] Brent | Fast computation of bernoulli, tangent and secant numbers[END_REF] (or alternatively in [BZ10, Exercise 4.41]); it uses an equality between the generating function of the Bernoulli numbers and the power series corresponding to x/(e^x − 1), then uses fast algorithms on power series to compute the first k terms of the generating function. The cost of this method is O(k² log^{2+ε} k) = O(k^{2+ε}) bit operations.
Finally, we note that Harvey gave in [START_REF] Harvey | A subquadratic algorithm for computing the n-th Bernoulli number[END_REF] the algorithm with the currently best running time to recover B_{2k}; it requires O(k^{4/3+ε}) bit operations to compute a single B_{2k}.
Note that the first (resp. the second) method actually computes the B_{2k′}/(2k′)! (resp. the B_{2k′}/k′!) for all k′ ≤ k. Hence, given that factorials are most likely computed using an iterative algorithm, we can easily turn these methods into algorithms that compute all the B_{2k′} for k′ ≤ k in the same amount of time. This gives, for the second method, an amortized cost of O(k^{1+ε}), which is better than using the algorithm of [START_REF] Harvey | A subquadratic algorithm for computing the n-th Bernoulli number[END_REF].
Computing the terms of the series
We can compute the q^n inductively, and hence the computation of each q^{2n}/(1−q^{2n}) costs only O(M(P+k)) bit operations. However, the computation of n^{2k−1} is more costly: it requires O(k) multiplications with a naive method, or O(log k) multiplications with fast exponentiation. Furthermore, the multiplication n^{2k−1} × q^{2n}/(1−q^{2n}) induces a loss of precision, since n^{2k−1} is large; for the result to still be accurate to 2^{−P}, we need O(k log B) = O(k log((P+k)/Im(τ))) extra bits of precision.
Total cost
Hence, the total cost of the naive algorithm to compute E 2k (τ ) with absolute precision P is
\[ O\left( M\left(P + k\log\frac{P+k}{\mathrm{Im}(\tau)}\right) \frac{P+k}{\mathrm{Im}(\tau)} \log k + k^{4/3+\varepsilon} \right) = O\big((P+k)^{2+\varepsilon}\log k\big). \]
An algorithm based on the coefficients of the series expansion of ℘
We now show a faster algorithm to compute E 2k (τ ) with high precision, based on the link between the Laurent series expansion of ℘ and the values E 2k . We have:
Theorem 8.4.1 ([Sil86, Theorem VI.3.5, p. 169]). Write
$$\wp(z) = \frac{1}{z^2} + \sum_{n=1}^{\infty} b_n z^{2n}.$$
Then we have
$$b_n = (2n+1)\sum_{\omega\in L\setminus\{0\}} \frac{1}{\omega^{2n+2}} = (2n+1)\,2\zeta(2n+2)\,E_{2n+2}(\tau).$$
Furthermore, there is an algorithm to compute b n knowing g 2 and g 3 , the coefficients in the differential equation of ℘: Theorem 8.4.2 ([BMSS08, Theorem 1]). There is an algorithm that computes the first n coefficients of the Laurent series of ℘ from the knowledge of g 2 , g 3 ; this algorithm performs O(M(n)) operations.
Section 8.1.1 shows how to compute g₂, g₃ from the theta-constants; Chapter 6 gives an algorithm for theta-constants whose complexity is O(M(P) log P). Hence, it is possible to compute the first k coefficients of the Laurent series of ℘ with precision P in O(M(P) log P + M(k)).
As for the computation of ζ(2k), one can use for instance the FEE method of Karatsuba; as described in [Kar95], one value of ζ(2k) can be computed with precision P in O(M(P) log² P) = O(P^{1+ε}). However, we were not able to determine the dependency on k. A possibly better way is to use the connection with Bernoulli numbers
$$\zeta(2k) = \frac{(-1)^{k+1}(2\pi)^{2k}}{2\,(2k)!}\,B_{2k},$$
then use the results mentioned in Section 8.4.1. The resulting algorithm is as follows:
1: Compute θ₀(0, τ), θ₁(0, τ), θ₂(0, τ) using Algorithm 11.
2: Compute g₂, g₃ using Proposition 8.1.3.
3: Compute the first k coefficients of the Laurent series of ℘ using the algorithm of [BMSS08].
4: Recover E_{2k'}(τ) from b_{k'-1} and ζ(2k') using Theorem 8.4.1.
If one uses this algorithm to compute one value of E_{2k}(τ), the running time one can achieve is O(M(P)(log P + M(k)) + k^{4/3+ε}); if it is used to compute all values E_{2k'}(τ) for k' ≤ k, its running time is O(M(P)(log P + M(k)) + k^{2+ε}).
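The ζ(2k)–Bernoulli identity above is easy to sanity-check numerically with mpmath (which provides both zeta and bernoulli to arbitrary precision); this sketch is for verification only and is not part of the algorithm:

```python
from mpmath import mp, zeta, bernoulli, pi, fac

mp.dps = 50
for k in (1, 2, 5, 20):
    lhs = zeta(2 * k)
    rhs = (-1) ** (k + 1) * (2 * pi) ** (2 * k) / (2 * fac(2 * k)) * bernoulli(2 * k)
    assert abs(lhs - rhs) < 1e-40 * abs(lhs)   # e.g. zeta(2) = pi^2/6
print("identity verified")
```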
Comparison
We assume that τ ∈ F, and distinguish whether we want to compute one value E_{2k}(τ) of the Eisenstein series, or all the values E_{2k'}(τ) for k' ≤ k. Since both algorithms require the computation of Bernoulli numbers B_{2k} with precision P, we do not include the complexity of computing them in those running times, so that only the core running times are compared; recall (see Section 8.4.1) that computing one Bernoulli number costs O(M(P) + k^{4/3+ε}), while computing the first k Bernoulli numbers costs O(M(P)k + k^{2+ε}).
Computing one value
The naive algorithm is able to compute one value E 2k (τ ) with precision P using
$$O\left(M\left(P+k\log\frac{P}{\operatorname{Im}(\tau)}\right)\frac{P+k}{\operatorname{Im}(\tau)}\log k\right) = O\left((P+k)^{2+\varepsilon}\log k\right)$$
bit operations. Our algorithm can output this value using
$$O(M(P)(\log P + M(k))) = O(P^{1+\varepsilon}k^{1+\varepsilon})$$
bit operations, which is a better asymptotic complexity in most cases.
Computing all the values
Our algorithm computes all E 2k (τ ), k ≤ k in time
$$O(M(P)(\log P + M(k))) = O(P^{1+\varepsilon}k^{1+\varepsilon}).$$
As for the naive algorithm, computing each of the k series requires computing all of the n^{2k'-1}, which means one should use the naive, iterative powering algorithm. Hence, the naive algorithm can compute all E_{2k'}(τ), k' ≤ k, in time
$$O\left(M\left(P+k\log\frac{P}{\operatorname{Im}(\tau)}\right)\frac{P+k}{\operatorname{Im}(\tau)}\,k\right) = O\left((P+k)^{2+\varepsilon}k\right).$$
In both cases, our algorithm outperforms asymptotically the naive algorithm. We leave the implementation of both algorithms and their comparison for future work.
Recall that, in the case of short Weierstrass forms, we have [BMSS08]:
$$\phi(x, y) = \left(\frac{N(x)}{D(x)},\ y\left(\frac{N(x)}{D(x)}\right)'\right)$$
with deg N = ℓ and deg D = ℓ − 1. This means one only needs to compute the rational function giving the x-coordinate of the isogeny; doing so entails significant savings (by a constant factor) in the algorithm.
The strategy we follow is that of an evaluation-interpolation; more precisely, we wish to take advantage of the fact that the isogeny between tori is easy to compute, using the quasi-optimal algorithms to go from the curve to the torus and back. This can be summarized with the diagram:
$$\begin{array}{ccc} \mathbb C/\Lambda & \longrightarrow & \mathbb C/\Lambda' \\ \uparrow & & \downarrow \\ E/\mathbb C & & E'/\mathbb C \end{array}$$
Determining the isogenous curve
The first step of the algorithm is to determine the equation of the isogenous curve we are looking for. For this, we adopt the following strategy: we compute the periods using the AGM (as in Chapter 4), then use the elliptic logarithm of Q to compute the isogenous periods, and finally we compute the coefficients of the isogenous curve as in Section 8.1.1.
Computing the isogenous periods is done as follows. There are ℓ + 1 possibilities for the isogenous period lattice; they are given by the lattices
$$\left\{\mathbb Z\omega_1 + \mathbb Z\omega_2',\ \omega_2' = \frac{\omega_2 + k\omega_1}{\ell}\ (0 \le k \le \ell-1)\right\} \cup \left\{\mathbb Z\frac{\omega_1}{\ell} + \mathbb Z\omega_2\right\}.$$
The isogenous period lattice corresponds to the one for which the elliptic logarithm of Q is mapped to 0, since Q generates Ker φ. Hence, we first compute the elliptic logarithm of Q using the algorithm of [CT13]; this gives us a point $\frac{a\omega_1 + b\omega_2}{\ell}$. We then have to determine which of the ℓ + 1 lattices contains the elliptic logarithm of Q. The procedure is as follows (see the sketch after the list):
• if b ≡ 0 (mod ℓ), the lattice $\mathbb Z\frac{\omega_1}{\ell} + \mathbb Z\omega_2$ is the one we want;
• if not, the point $\frac{a\omega_1+b\omega_2}{\ell}\times(b^{-1} \bmod \ell)$ is the elliptic logarithm of a point in Ker φ, and hence also generates Ker φ, so the lattice $\mathbb Z\omega_1 + \mathbb Z\frac{\omega_2 + (ab^{-1} \bmod \ell)\,\omega_1}{\ell}$ is the one we want.
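A minimal Python sketch of this case distinction, assuming the integers a, b (mod ℓ) giving the elliptic logarithm of Q have already been computed:

```python
# Return a basis (as complex numbers) of the isogenous lattice, given the
# periods w1, w2 of E and elliptic_log(Q) = (a*w1 + b*w2)/ell.
def isogenous_lattice(w1, w2, a, b, ell):
    if b % ell == 0:
        return (w1 / ell, w2)                 # lattice Z*(w1/ell) + Z*w2
    k = (a * pow(b, -1, ell)) % ell           # k = a * b^(-1) mod ell
    return (w1, (w2 + k * w1) / ell)          # lattice Z*w1 + Z*((w2+k*w1)/ell)
```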
Evaluating the isogeny
Once again, the strategy is to travel through the analytic representation of the elliptic curve in order to evaluate the isogeny at 2ℓ points, so that we can retrieve the rational function via interpolation. For this, we compute the elliptic logarithm (see Chapter 4) of 2ℓ points, determine the image of those logarithms in the isogenous torus, then find the image point on the isogenous curve that we are looking for. Finally, we use rational function interpolation to compute the isogeny; note that we do not need to compute ℘' nor the y-coordinate of the isogenous curve, which saves a constant factor. Recall that we outlined two algorithms to compute ℘(z, Λ) in quasi-optimal time: the first one, based on the fast computation of θ, was presented in this chapter (Section 8.1.2, and more precisely Note 8.1.5); the second one, based on the Landen transform, was presented earlier. The complexity of this algorithm depends on the complexity of the computation of the GCD of polynomials. We find in [vzGG13, p. 313] that, if the polynomials are of degree n, the complexity is O(M(n) log n) field operations. In our case, those operations are multiplications of precision P, and n = O(ℓ); we find a running time of O(M(P)M(ℓ) log ℓ) bit operations.
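To illustrate the interpolation step in isolation, here is a small sympy example recovering a rational function with numerator degree 3 and denominator degree 2 (the shape of a 3-isogeny's x-coordinate) from 2·3 = 6 exact samples; in the actual algorithm the samples are high-precision complex values, and the target function here is a hypothetical stand-in.

```python
from sympy import Rational, symbols, simplify
from sympy.polys.polyfuncs import rational_interpolate

x = symbols("x")
target = (x**3 - 2 * x + 1) / (x**2 + 3)        # stand-in for N(x)/D(x)
data = [(Rational(u), target.subs(x, Rational(u))) for u in range(1, 7)]
recovered = rational_interpolate(data, 3, X=x)  # 3 = numerator degree
assert simplify(recovered - target) == 0
print(recovered)
```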
Description of the algorithm and complexity
We summarize the final algorithm in Algorithm 25.
Algorithm 25 Compute an -isogenous curve and an isogeny with a given kernel, over C.
Input: E(C) : y² = x³ + ax + b (with a, b ∈ C), Q ∈ E[ℓ].
Output: E'(C) : y² = x³ + a'x + b' (with a', b' ∈ C), ℓ-isogenous to E; the rational function defining the ℓ-isogeny φ : E → E' such that φ(Q) = 0.
1: Compute the periods ω₁, ω₂ of E (Chapter 4).
2: Compute the elliptic logarithm of Q (Chapter 4).
3: Determine the isogenous lattice [ω'₁, ω'₂] (Section 9.1.1).
4: Compute the coefficients a', b' of E' from [ω'₁, ω'₂] (Section 8.1.1).
5: Pick 2ℓ points P_k on E.
6: for k = 1 to 2ℓ do
7:   Compute the elliptic logarithm z_k of P_k.
8:   Compute z'_k, the image of z_k by the isogeny (Section 9.1.2).
9:   Compute ℘(z'_k, [ω'₁, ω'₂]), the x-coordinate of the corresponding point (Section 8.1.3).
10: end for
11: Use rational function interpolation (Section 9.1.2) to recover $\frac{g(x)}{h(x)^2}$.
12: Return E' and φ : (x, y) ↦ $\left(\frac{g(x)}{h(x)^2},\ y\left(\frac{g(x)}{h(x)^2}\right)'\right)$.
We described in Chapter 1 a few algorithms used to compute isogenies, noting that their asymptotic cost was roughly O(M(P)M(ℓ) log ℓ) — which is just the running time of the rational function interpolation in our method. Hence, our algorithm does not provide a better running time; in fact, it is much slower, because of the additional log P factor coming from the computation of periods, elliptic logarithms and ℘. We provide a few timings for our algorithm. Since Magma does not support Vélu's formulas over C, we were not able to get such timings for comparison with our algorithm; however, it is clear that such a comparison would show that Vélu's formulas are much faster than our algorithm.
(Table of timings of Algorithm 25, indexed by the precision P; the table itself did not survive extraction.)
Computing isogenous curves over a number field
We now outline a method to compute an isogeny between two curves defined over a number field K of degree n, using the method described in the previous section. Once again, we summarize the algorithm with the diagram
$$\begin{array}{ccc} \mathbb C/\Lambda & \longrightarrow & \mathbb C/\Lambda' \\ \uparrow & & \downarrow \\ E/\mathbb C & & E'/\mathbb C \\ \uparrow\uparrow\uparrow & & \downarrow\downarrow\downarrow \\ E/K & & E'/K \end{array}$$
The algorithm consists in computing the curves that are images of E by all the embeddings of K in C, then use the previous algorithm to compute an isogenous curve over C; we then use interpolation to compute the isogeny over K, using continued fractions to recognize the rational numbers. This will be outlined in Algorithm 26. One important issue in the algorithm is the precision at which we wish to work in C; this has a big impact on the overall complexity of the algorithm. Unfortunately, we do not have a definite answer to this question, which we mention in Section 9.2.5.
Computing embeddings
The first step of the algorithm is to compute every embedding of the number field in C, with sufficient precision, in order to apply the previous section; we will then combine all the complex isogenies we obtain to find the isogeny over K. We discuss the complexity of this step.
Root-finding methods
There is a wealth of methods dedicated to computing complex approximations of roots of polynomials; this is a fundamental problem in numerical analysis. We refer the reader to [McN07] for an overview of a great number of methods, from methods using companion matrices to those based on Newton's iterations. We assume that (at the cost of computing gcd(f, f') with high enough precision) we are trying to compute the roots with precision P of a squarefree polynomial f of degree n.
An interesting family of algorithms is the splitting methods. The concept is to first perform a crude splitting of the complex plane, determining regions of C that contain some, but not all, of the roots of f; this can be used to determine factors of the polynomial, and we can apply the procedure recursively. Note that, at some point in the algorithm, it is faster to refine the approximation of the roots using Newton's method than to continue with this strategy. A notable algorithm following this pattern is the splitting circle method, designed by Schönhage in 1982 [Sch82]. It claims a complexity of O(n³ log n + n²P) up to logarithmic factors. The same concept was used and refined by Pan [Pan02], who claims a quasi-optimal bit complexity of O(n log² n (log² n + log P) M(P)) to find all the roots.
We do not know if this algorithm has been implemented anywhere; [Pan02] notes that it would be quite challenging. However, note that Schönhage's algorithm has been implemented in Magma by Xavier Gourdon; an in-depth discussion of the implementation of Schönhage's algorithm can be found in his thesis [Gou96]. Regardless, we assume that one can use Pan's complexity of O(n log² n (log² n + log P) M(P)).
Total complexity
The complexity estimates in Pan's algorithm assume that the roots of the polynomial have norm bounded by 1, since several classical scaling techniques allow one to reduce the general problem to this particular case. Hence, in order to get a final result accurate up to 2 -P , we need to compute such roots with absolute precision P + s, where s is the maximum size of the roots of our polynomial. In all generality, Rouché's theorem shows that the norm of any root of the polynomial is bounded by 1 + max|a i | (where a i are the coefficients of the polynomial).
Hence, the complexity of computing all embeddings of K in C is
$$O\big(n\log^2 n\,(\log^2 n + \log(P + \log(1+\max|a_i|)))\,M(P + \log(1+\max|a_i|))\big).$$
If we assume that max|a_i| = O(2^P), i.e. that the coefficients of the polynomial defining K can be stored with absolute precision P in O(P) bits, the complexity becomes O(n log² n (log² n + log P) M(P)).
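In practice, a sketch of this step can rely on mpmath's polyroots (a Durand–Kerner iteration, not the splitting methods discussed above); the wrapper function is ours:

```python
from mpmath import mp, polyroots

def embeddings(f_coeffs, prec_bits):
    """Roots of f in C, i.e. the complex embeddings of K = Q[X]/(f).

    f_coeffs: integer coefficients of f, highest degree first.
    """
    mp.prec = prec_bits
    return polyroots(f_coeffs, maxsteps=200, extraprec=prec_bits // 2)

if __name__ == "__main__":
    print(embeddings([1, 0, -2], 256))   # roots of X^2 - 2: +/- sqrt(2)
```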
Using complex conjugation
The following result is not very hard to prove, but is nonetheless important for an efficient implementation.
Proposition 9.2.1. Let K = Q[X]/(f) with f ∈ Z[X], and let α ∈ C \ R be a root of f. Let Q_K be an ℓ-torsion point of E(K). Denote by E_α (resp. Q_α) the image of E_K (resp. Q_K) by the embedding X ↦ α; let E'_α be the ℓ-isogenous curve over C such that the isogeny φ_α : E_α → E'_α satisfies Ker φ_α = ⟨Q_α⟩.
Then $E'_{\bar\alpha} = \overline{E'_\alpha}$ is ℓ-isogenous to $E_{\bar\alpha} = \overline{E_\alpha}$, and the isogeny $\phi_{\bar\alpha} = \overline{\phi_\alpha}$ satisfies $\operatorname{Ker}\phi_{\bar\alpha} = \langle Q_{\bar\alpha}\rangle$.
That is to say, the coefficients of the curves we obtain when using the embedding X ↦ ᾱ are the complex conjugates of the coefficients of the curves we obtain when using the embedding X ↦ α, and the same goes for the coefficients of the isogenies.
As for the proof, one can for instance take a look at Vélu's formulas (Section 1.1.3) and propagate the complex conjugation. This result means that one only needs to compute one isogeny per pair of complex conjugate roots of f, since the other (complex conjugate) isogeny is simply deduced using complex conjugation of the coefficients. This induces at best a factor 2 speedup in the algorithm, which is not negligible; however, we did not implement this in our script for lack of time.
Multi-evaluation and fast interpolation
We now take a look at the problem of computing the coefficients of the isogenous curve over K and the isogeny. What we computed so far are n complex values for each of those coefficients, namely their images under the n embeddings of K; recovering each coefficient amounts to interpolating at the points u_i = α_i. Write $m = \prod_i (x-u_i)$ and $s_i = 1/\prod_{j\ne i}(u_i-u_j) = 1/m'(u_i)$; Lagrange interpolation then expresses the polynomial taking the values v_i at the u_i as $\sum_i v_i s_i \frac{m}{x-u_i}$. Hence evaluating all the s_i is just a multi-evaluation of a polynomial (namely m' at the u_i), and we use the algorithm we just described. The next step is to evaluate $\sum_i v_i s_i \frac{m}{x-u_i}$ quickly; we use a similar method of splitting the sum in half. Writing $m = M_{0,k}M_{1,k}$, where $M_{0,k}$ (resp. $M_{1,k}$) is the product of the $(x-u_i)$ over the first (resp. second) half of the points:
$$\sum_{i\in[1..2^k]} v_i s_i \frac{m}{x-u_i} = M_{1,k}\sum_{i\in[1..2^{k-1}]} v_i s_i \frac{M_{0,k}}{x-u_i} + M_{0,k}\sum_{i\in[2^{k-1}+1..2^k]} v_i s_i \frac{M_{1,k}}{x-u_i} = \dots$$
At the leaves, we just need to evaluate v_i s_i. Hence the cost T(n) of the algorithm satisfies
$$T(n) \le 2T(n/2) + 2M(n/2) + n \le 2T(n/2) + M(n) + n,$$
which is O(M(n) log n).
In our case here, we are looking to interpolate O(ℓ) coefficients knowing their values at α₁, ..., α_n — hence, anything that depends only on the u_i, which is to say the computation of the remainder trees, of m and of the s_i, can be computed once and for all. However, in the computation of the interpolation itself (the recursive algorithm that splits the sums in half), we do not seem to obtain any savings, since the values in the leaves are different. Hence, the caching of remainder trees and of the s_i gives a speedup in practice compared to the strategy which simply applies the interpolation algorithm for each coefficient, although this does not improve the asymptotic complexity. We did not implement this in our script for lack of time.
Hence, the total running time of interpolating O(ℓ) elements at the α_i is O(ℓ M(n) log n) operations.
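The following quadratic-time Python sketch illustrates only the caching idea: the Lagrange weights s_i, which depend solely on the nodes, are computed once and shared by all coefficient vectors. The fast O(M(n) log n) variant caches its subproduct and remainder trees in the same spirit; this sketch is not that algorithm.

```python
# Naive Lagrange interpolation of many value vectors at the same nodes,
# reusing the node-only data (the weights s_i) across all vectors.
from functools import reduce

def lagrange_weights(us):                    # nodes assumed pairwise distinct
    return [1 / reduce(lambda p, d: p * d,
                       (u - v for v in us if v != u), 1.0) for u in us]

def interpolate_many(us, value_vectors):
    s = lagrange_weights(us)                 # cached across all vectors
    polys = []
    for vs in value_vectors:
        coeffs = [0.0] * len(us)             # ascending coefficients
        for i, (u, v) in enumerate(zip(us, vs)):
            term = [v * s[i]]                # v_i * s_i * prod_{j!=i}(x - u_j)
            for j, w in enumerate(us):
                if j != i:                   # multiply term by (x - w)
                    term = [0.0] + term
                    for t in range(len(term) - 1):
                        term[t] -= w * term[t + 1]
            coeffs = [c + t for c, t in zip(coeffs, term)]
        polys.append(coeffs)
    return polys
```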
Recovering coefficients as rationals
We now discuss how to convert the interpolated coefficients, which are written as complex numbers of precision P, into rational numbers. Of course, there are infinitely many approximations of a real number by rational numbers, including the representation a/2^P that the interpolation algorithm outputted; we are looking for a fraction p/q with q < 2^P. In addition, we could also enforce a bound on the denominator if we happen to know one. We will simply assume that P has been chosen such that 2^P is a bound on the denominators we are looking for; we discuss this condition in the next section.
To solve this problem, we use the continued fraction expansion of our rational number, which in this case simply amounts to a Euclidean algorithm. Denote by a_n the successive quotients when running Euclid's algorithm on a and 2^P; we can then build a sequence h_n/k_n of fractions approximating a/2^P using the relations:
$$h_n = a_n h_{n-1} + h_{n-2}, \qquad k_n = a_n k_{n-1} + k_{n-2}, \qquad h_{-1} = 1,\ h_{-2} = 0,\ k_{-1} = 0,\ k_{-2} = 1.$$
The approximation we are looking for is simply the last one with k_n < 2^P. In the worst case, this algorithm is asymptotically as costly as the whole Euclidean algorithm, which is O(M(n) log n), where n is the size of the numerator and denominator; here, this gives O(M(P) log P).
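A direct Python transcription of this procedure; the denominator bound 2^{P/2} used here is our choice (it guarantees uniqueness of the reconstruction), not the one from the text.

```python
from fractions import Fraction

def reconstruct(a, P, kbound=None):
    """Recover p/q from the integer a of the approximation a/2^P."""
    if kbound is None:
        kbound = 1 << (P // 2)
    num, den = a, 1 << P
    h1, h2, k1, k2 = 1, 0, 0, 1          # h_{n-1}, h_{n-2}, k_{n-1}, k_{n-2}
    best = None
    while den:
        q, r = divmod(num, den)          # q = a_n, the next partial quotient
        h, k = q * h1 + h2, q * k1 + k2  # the recurrences above
        if k > kbound:
            break
        best = Fraction(h, k)
        h2, h1, k2, k1 = h1, h, k1, k
        num, den = den, r
    return best

if __name__ == "__main__":
    P, x = 128, Fraction(-355, 113)
    a = round(x * (1 << P))              # a/2^P approximates -355/113
    assert reconstruct(a, P) == x
```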
Description of the algorithm
We outline here the algorithm to compute an isogeny over K.
Algorithm 26 Compute an ℓ-isogenous curve and an isogeny with a given kernel, over K.
Input: K = Q[X]/(f) a number field of degree n; E(K) : y² = x³ + ax + b (a, b ∈ K), Q_K ∈ E[ℓ].
Output: E'(K) : y² = x³ + a'x + b' (a', b' ∈ K), ℓ-isogenous to E; the rational function defining the ℓ-isogeny φ : E → E'.
1: Compute the n embeddings of K in C — that is to say, compute approximations of the roots of f in C with absolute precision P.
2: for i = 1 to n do
3:   if $\alpha_i = \overline{\alpha_j}$ for some j < i then
4:     Put $E'_i = \overline{E'_j}$, $\phi_{i,x} = \overline{\phi_{j,x}}$.
5:   else
6:     Compute a_i, b_i ∈ C, the images of a, b by the i-th complex embedding of K in C. Put E_i(C) : y² = x³ + a_i x + b_i.
7:     Compute also Q_i ∈ E_i(C), the image of Q_K by the i-th embedding of K in C.
8:     Use Algorithm 25 to compute E'_i and φ_{i,x} (the rational function giving the x-coordinate of the isogeny).
9:   end if
10: end for
11: Perform interpolation (Section 9.1.2) on the coefficients of the E'_i and φ_{i,x} to recover a curve E' and a rational function φ_x, both of which have coefficients in C[X] (but the coefficients are actually rational).
12: Recognize the coefficients, i.e. write them as fractions (Section 9.2.4). This makes E' and φ have coefficients in K.
13: Return E' and φ = (φ_x, y φ'_x).
Proposition 9.2.2. Assuming that max|a_i| = O(2^P), Algorithm 26 requires
$$O\big(n\log^2 n(\log^2 n + \log P)M(P) + n\,M(\ell)\log\ell\,M(P)\log P + \ell\,M(n)\log n\,M(P) + \ell\,M(P)\log P\big)$$
bit operations.
Proof. Assume that max|a_i| = O(2^P), i.e. that the coefficients of the polynomial defining the number field can be stored with absolute precision P in O(P) bits. The overall complexity is then the sum of the following contributions:
• O(n log² n (log² n + log P) M(P)) for Step 1;
• O(n M(ℓ) log ℓ M(P) log P) for the for loop;
• O(ℓ M(n) log n M(P)) for the interpolation;
• O(ℓ M(P) log P) for the conversion to rational numbers.
This gives the claimed total complexity. We can compare this algorithm to Vélu's formulas applied over K:
Proposition 9.2.3. Assuming that the integers (e.g. the denominators) which are computed when applying Vélu's formulas are bounded by O(2^P), Vélu's formulas applied over K have a cost of O(M(ℓ) log ℓ M(n) log n M(P)).
Proof. We evaluated in Section 1.1.3 the cost of Vélu's formulas, which is here O(M(ℓ) log ℓ) operations in the number field. If the number field is represented as K = Q[X]/(f) with f a polynomial of degree n, the cost of arithmetic operations in K is O(M(n) log n) integer operations, using fast GCD algorithms [vzGG13, Chapter 11]. If we assume that the maximal denominator is smaller than 2^P and the maximal numerator is also O(2^P), we get a total cost bounded by O(M(ℓ) log ℓ M(n) log n M(P)) bit operations.
This is a lower complexity than the one of Algorithm 26, at least by logarithmic factors. Furthermore, there is a fundamental difference between the two algorithms: our algorithm requires a choice of P, then performs essentially only operations on P-bit integers, which means it is very sensitive to our choice of P; on the contrary, Vélu's formulas will in general handle integers which can be much smaller than 2^P, which yields a better complexity. Obtaining the best running time for our algorithm requires knowing the smallest value of P for which Algorithm 26 returns the right value, i.e. recognizes the correct elements of K (and hence the correct rational numbers).
Discussion on P
The total complexity depends on P, which is the precision we choose to work with when computing the isogenies over the complex numbers. This precision must be large enough to allow the recognition of the coefficients of the isogeny and of the isogenous curve as elements of K; in particular, the maximal denominator of the rational numbers that appear in each coefficient must be smaller than 2^P.
We did not manage to obtain any result tying the height of the rational numbers appearing in the coefficients of the isogeny or of the isogenous curve to anything, such as the height of the coefficients of the original curve, or the coefficients of the polynomial f defining the number field. Determining P would allow us to make the complexities above more precise. However, in a specific case, which we outline in the next section, we managed to formulate some heuristics on P, finding that it was roughly the same size as the largest coefficient of the polynomial defining the number field.
We carried out an experiment and attempted to determine the smallest P such that the recognition of the coefficients of the isogeny and the isogenous curve succeeded, in order to compare our algorithm to Vélu's formulas. This value of P was determined by running the algorithm at several different precisions and checking whether the interpolation succeeded and gave the correct result. Generalizing this amounts to assuming we have an oracle giving us the correct value of P. Recall that, as mentioned previously, our implementation of Algorithm 26 lacks several interesting optimizations, among which a batched algorithm to compute θ(z, τ), the use of complex conjugation to reduce the number of embeddings for which the full computations have to be carried out, and the caching of the remainder tree in the fast interpolation algorithm. We can hope to achieve a factor 4 speedup by combining these optimizations, which would put our algorithm within the same realm as Vélu's formulas over a number field, at least for these examples; hence, further investigation is needed before discounting our algorithm.
Computing isogenous curves over F p
The idea of this algorithm is, once again, to use the previous algorithms to compute the isogeny. More specifically, we lift the curve and the torsion point to a number field K, then use Algorithm 26 to compute an isogenous curve over K, and deduce from this the isogenous curve over F_p. We summarize the algorithm with the diagram
$$\begin{array}{ccc} \mathbb C/\Lambda & \longrightarrow & \mathbb C/\Lambda' \\ \uparrow & & \downarrow \\ E/\mathbb C & & E'/\mathbb C \\ \uparrow\uparrow\uparrow & & \downarrow\downarrow\downarrow \\ E/K & & E'/K \\ \uparrow & & \downarrow \\ E/\mathbb F_p & & E'/\mathbb F_p \end{array}$$
Going from the curve E'/K to E'/F_p requires taking the quotient of K by a maximal ideal which we determine in the next subsection. The elements of K — that is to say, the coefficients of the curve and those of the isogeny — are then sent to the right elements of F_p (i.e. those we were looking for), provided we picked the maximal ideal which sends the coefficients of E/K to those of E/F_p. In all that follows, we suppose that ℓ is a prime greater than 3. Our algorithm does not seem to be directly generalizable to the case ℓ = 2; however, note that in this particular case, Vélu's formulas are particularly easy to compute.
Global torsion lifting
Transforming the problem over F_p into a problem over K requires solving the lifting problem:
Definition 9.3.1 ([Sil09]). Let e/k be an elliptic curve and q ∈ e(k). The lifting problem for (k, e, q) is the problem of finding the following quantities:
• a field K with subring R;
• a maximal ideal 𝔭 of R satisfying R/𝔭 ≃ k;
• an elliptic curve E/K satisfying E (mod 𝔭) ≃ e;
• a point Q ∈ E(K) satisfying Q (mod 𝔭) = q.
As described in [Sil09], which looked at this lifting problem in the context of the ECDLP, there are essentially four ways to solve the lifting problem. The lift is called a global lift (resp. a local lift) if K is a global field such as Q or a number field (resp. a local field such as Q_p); it is called a torsion lift if q is a torsion point, and a non-torsion lift if it is not. However, in the context of [Sil09], none of those approaches seems to yield any speedup for the ECDLP.
We show here how to perform a global torsion lift from F_p to a number field K. There are two reasons for this: first, as noted in [Sil09, Table 1], this is the only way to move the problem to C, where we can use our algorithm based on the evaluation of the Abel-Jacobi map; furthermore, the idea seems natural as we are given an ℓ-torsion point Q ∈ E/F_p as input for the problem of finding an isogenous curve. Hence, assuming we are given E/F_p : y² = x³ + a_{F_p}x + b_{F_p} and Q = (x, y) an ℓ-torsion point, we are looking to determine a number field K = Q[α], a maximal ideal 𝔭 such that K/𝔭 = F_p, an elliptic curve E_K/K : y² = x³ + a_K x + b_K such that (a_K, b_K) reduce to (a_{F_p}, b_{F_p}), and a point Q_K = (x_K, y_K) of ℓ-torsion on E_K such that Q_K reduces to Q.
We start by choosing to lift Q using the map
$$\mathbb F_p \to \mathbb Z,\quad x \mapsto x_K \in \left\{\frac{-p+1}{2}, \dots, \frac{p-1}{2}\right\} \text{ such that } x \equiv x_K \pmod p.$$
We then consider the resulting integers x_K, y_K as elements of K. This lift can be done regardless of the number field K, since any number field contains Z. Secondly, we choose the equation of the elliptic curve E_K, imposing that a_K = α. The condition Q_K ∈ E_K then imposes that
$$b_K = y_K^2 - x_K^3 - x_K\alpha.$$
This imposes the conditions (x_K, y_K, α) (mod 𝔭) = (x, y, a_{F_p}). We now determine the number field K using the condition that Q_K should be an ℓ-torsion point. This condition can be translated in terms of generic ℓ-division polynomials ψ_ℓ; we introduce those polynomials in Section 9.3.2. The condition "Q_K is of ℓ-torsion on E_K" translates as
$$\psi_\ell(x_K, \alpha, y_K^2 - x_K^3 - \alpha x_K) = \Phi_\ell(\alpha) = 0,$$
where Φ_ℓ is a polynomial with integer coefficients. We study Φ_ℓ in Section 9.3.3; experiments show this polynomial to be irreducible, but we did not manage to prove it in the general case. Assuming this polynomial is irreducible, it is the minimal polynomial of α, and we can define the corresponding number field K as Q[X]/(Φ_ℓ). Furthermore, we have
$$\Phi_\ell(a_{F_p}) = \psi_\ell(x_K, a_{F_p}, y_K^2 - x_K^3 - a_{F_p}x_K) \equiv \psi_\ell(x, a_{F_p}, y^2 - x^3 - a_{F_p}x) \pmod p \quad\text{(since } x_K \equiv x \pmod p\text{)},$$
which is 0 since (x, y) ∈ E[ℓ]. This means that $X - a_{F_p} \mid \Phi_\ell$ in $\mathbb F_p[X]$. We thus put
$$\mathfrak p = (p,\ \alpha - a_{F_p}).$$
Note that K/𝔭 = F_p; this indeed defines a maximal ideal such that E_K (mod 𝔭) = E and Q_K (mod 𝔭) = Q. Reducing an element u = r(α) ∈ K to an element of F_p simply requires reducing the coefficients of r modulo p, then evaluating the resulting polynomial at a_{F_p}. This procedure solves the lifting problem, and lifts the curve and the ℓ-torsion point to a number field, where Algorithm 26 can be applied. We then recover the F_p-isogeny by reducing the coefficients of the curve and of the isogeny modulo 𝔭. In the next two sections, we study the polynomials ψ_ℓ (Section 9.3.2) and Φ_ℓ (Section 9.3.3) and determine some of their properties; this is crucial in order to determine the complexity of the computation of the lift and of the reduction, as well as the precision needed when embedding K in C.
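The lift and the reduction modulo 𝔭 = (p, α − a_{F_p}) are straightforward to transcribe; here is a minimal Python sketch, in which an element of K is represented by the (hypothetical) list of rational coefficients of r.

```python
from fractions import Fraction

def lift_centered(x, p):
    """F_p -> Z, centered representative in (-p/2, p/2]."""
    x %= p
    return x - p if x > p // 2 else x

def reduce_mod_p(r_coeffs, a_p, p):
    """Send r(alpha) in K to F_p: reduce coefficients mod p, evaluate at a_p."""
    acc = 0
    for c in reversed(r_coeffs):             # Horner's rule
        num = c.numerator % p
        den = pow(c.denominator % p, -1, p)  # fails iff p divides a denominator
        acc = (acc * a_p + num * den) % p
    return acc

if __name__ == "__main__":
    p = 101
    # r = 1/2 + 3*alpha, with alpha reducing to a_p = 5: (51 + 15) % 101 = 66
    print(reduce_mod_p([Fraction(1, 2), Fraction(3)], 5, p))
```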
Generic division polynomials
We define and study the generic ℓ-division polynomials and establish a few of their properties. The results from this part are taken in part from [BSS99] and [McK94].
Definition 9.3.2. Consider the generic complex curve given by the short Weierstrass equation
$$E_{A,B} : y^2 = x^3 + Ax + B.$$
We define the generic ℓ-division polynomial as the polynomial ψ_ℓ ∈ Z[x, A, B] such that for any P = (x, y) ∈ E_{A,B},
$$[\ell]P = \left(\frac{Q_\ell(x, A, B)}{\psi_\ell(x, A, B)^2},\ y\,\frac{R_\ell(x, A, B)}{\psi_\ell(x, A, B)^3}\right),$$
or, equivalently, as the polynomial f such that
$$P \in E_{A,B}[\ell] \iff f(x, A, B) = 0.$$
A key point here is that this polynomial has integer coefficients and only depends on x, A, B. Hence, we can lift the condition "Q is an ℓ-torsion point on E/F_p", i.e. ψ_ℓ(x, a, b) = 0, into the condition "Q_K = (x_K, y_K), the lift of Q to the number field, is an ℓ-torsion point of the lift over K of the curve E", which is exactly ψ_ℓ(x_K, a_K, b_K) = 0.
The generic ℓ-division polynomials have not been discussed in many papers besides [McK94]. Hence, this section provides a few results on those polynomials, which will be of help later.
Computing ψ_ℓ
One can compute the polynomial ψ_ℓ(x, A, B) using recurrence relations [BSS99]:
Theorem 9.3.3. We have
$$\psi_0 = 0, \quad \psi_1 = 1, \quad \psi_2 = 1,$$
$$\psi_3 = 3x^4 + 6Ax^2 + 12Bx - A^2,$$
$$\psi_4 = 2x^6 + 10Ax^4 + 40Bx^3 - 10A^2x^2 - 8ABx - 16B^2 - 2A^3,$$
$$\psi_{2m+1} = \begin{cases}\psi_{m+2}\psi_m^3 - 16(x^3+Ax+B)^2\,\psi_{m-1}\psi_{m+1}^3 & \text{if } m \text{ odd},\\[2pt] 16(x^3+Ax+B)^2\,\psi_{m+2}\psi_m^3 - \psi_{m-1}\psi_{m+1}^3 & \text{if } m \text{ even},\end{cases}$$
$$\psi_{2m} = (\psi_{m+2}\psi_{m-1}^2 - \psi_{m-2}\psi_{m+1}^2)\,\psi_m.$$
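These recurrences can be transcribed directly; here is a memoized sympy sketch (for illustration only — the complexity discussion below explains why this is far from optimal for large ℓ):

```python
from functools import lru_cache
from sympy import symbols, expand

x, A, B = symbols("x A B")
F2 = (x**3 + A * x + B) ** 2                 # the 16-factor core

@lru_cache(maxsize=None)
def psi(m):
    if m <= 2:
        return [0, 1, 1][m]
    if m == 3:
        return 3*x**4 + 6*A*x**2 + 12*B*x - A**2
    if m == 4:
        return (2*x**6 + 10*A*x**4 + 40*B*x**3 - 10*A**2*x**2
                - 8*A*B*x - 16*B**2 - 2*A**3)
    j = m // 2
    if m % 2:                                # m = 2j + 1
        if j % 2:                            # j odd
            return expand(psi(j+2)*psi(j)**3 - 16*F2*psi(j-1)*psi(j+1)**3)
        return expand(16*F2*psi(j+2)*psi(j)**3 - psi(j-1)*psi(j+1)**3)
    return expand((psi(j+2)*psi(j-1)**2 - psi(j-2)*psi(j+1)**2) * psi(j))

if __name__ == "__main__":
    from sympy import degree
    print(degree(psi(5), x))                 # chi(5) = (25 - 1)/2 = 12
```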
We find in [McK94] a partial analysis of the complexity of using those relations to compute ψ_ℓ(1, A, B): the costliest step is the last one, where two polynomials of degree O(ℓ²) in x, A and B (hence with O(ℓ⁴) terms) are multiplied together. This gives a cost for that last step of O(ℓ⁸) integer multiplications using a naive method, and O(ℓ⁴ log² ℓ) integer multiplications using an FFT method. Hence, we obtain a bound on the total complexity of O(ℓ^{5+ε}) integer multiplications.
The second method is the purpose of [McK94], which fixes ℓ and considers recurrence relations between the coefficients in front of the different monomials. This method requires O(ℓ⁶) multiplications of integers (with O(ℓ²) digits), which is worse asymptotically; however, we presumably compute ψ_ℓ for ℓ small enough that the FFT methods are slower than the naive ones, and hence the worse asymptotic complexity is not much of a problem. Experimental data in [McK94] shows that this algorithm can be faster than the last step of the induction alone for ℓ ≥ 23; presumably, it is also faster than the whole algorithm for some smaller values of ℓ. This means we can write
$$\psi_\ell(x, A, B) = \sum \alpha_{i,j}\, A^i B^j x^{\chi(\ell)-2i-3j}, \tag{9.3.1}$$
where χ(ℓ) = (ℓ² − 1)/2 if ℓ is odd and (ℓ² − 4)/2 if ℓ is even.
Theorem 9.3.5.
• deg_x ψ_ℓ(x, A, B) = χ(ℓ) — i.e. the monomial x^{χ(ℓ)} appears in ψ_ℓ.
• deg_A ψ_ℓ(x, A, B) = χ(ℓ)/2 — i.e. the monomial A^{χ(ℓ)/2} appears in ψ_ℓ.
Proof. Both of those statements can be proven using induction, in the same way as the previous theorem. We outline a very similar proof in Section 9.3.3, and hence refer to it for full details.
Size of the coefficients of ψ_ℓ
The following proposition gives some information on the size of the α_{i,j}:
Proposition 9.3.6 ([McK94]). We have
$$|\alpha_{i,j}(\ell)| \le \frac{2^\ell\,(\ell^2 - \tfrac12)!}{\left[\left(\frac{\ell^2-1}{2}\right)!\right]^2\,\left(\frac{\ell^2}{2}+1\right)!}.$$
We simplify the shape of this bound using the following variants of Stirling's formula [Rob55]:
$$\sqrt{2\pi}\,n^{n+1/2}e^{-n} \le \sqrt{2\pi}\,n^{n+1/2}e^{-n}e^{\frac{1}{12n+1}} \le n! \le \sqrt{2\pi}\,n^{n+1/2}e^{-n}e^{\frac{1}{12n}},$$
which gives
$$|\alpha_{i,j}(\ell)| \le \frac{2^\ell\,(\ell^2-\tfrac12)!}{\left[\left(\frac{\ell^2-1}{2}\right)!\right]^2\left(\frac{\ell^2}{2}+1\right)!} \le \frac{2^\ell\,(\ell^2-\tfrac12)^{\ell^2}\,e^{-\ell^2+\frac12+\frac{1}{12}}}{\sqrt{2\pi}\left(\frac{\ell^2-1}{2}\right)^{\ell^2}e^{-\ell^2+1}\left(\frac{\ell^2+2}{2}\right)^{\ell^2/2+3/2}e^{-\ell^2/2-1}} \le \dots \le \frac{e^{\ell^2/2}\,2^{(3\ell^2+1)/2}}{\ell^3},$$
since the fraction remaining on the right-hand side after simplification is bounded by 1.41 for ℓ ≥ 2. This upper bound is slightly worse (by a factor π) than the equivalent given in [McK94], but this is not a problem for our purposes. Hence we have
$$\log|\alpha_{r,s}| \le \frac{\ell^2}{2} + \frac{3\ell^2+1}{2}\log 2 - 3\log\ell \le 2\ell^2,$$
which makes the estimate of [McK94, Corollary 1] more precise.

We now study the degree and the leading coefficient of Φ_ℓ. Note first that the specialization x = x_K, B = d − x_K A (where d = y_K² − x_K³) sends x³ + Ax + B to the constant x_K³ + d = y_K², so the recurrences of Theorem 9.3.3 become recurrences between univariate polynomials in A.
Proposition 9.3.7. We have deg_A Φ_ℓ = χ(ℓ)/2; the leading coefficient c_ℓ of Φ_ℓ is (−1)^m if ℓ = 2m + 1 is odd, and (−1)^{m+1}m if ℓ = 2m is even.
Proof. We proceed by induction, distinguishing three cases.
1. For Φ_{2m+1} with m odd:
$$\frac{\chi(m+2)}{2} + 3\frac{\chi(m)}{2} = \frac{(m+2)^2-1+3(m^2-1)}{4} = \frac{4m^2+4m}{4} = \frac{\chi(2m+1)}{2},$$
$$\frac{\chi(m-1)}{2} + 3\frac{\chi(m+1)}{2} = \frac{(m-1)^2-4+3((m+1)^2-4)}{4} = \frac{4m^2+4m-12}{4} < \frac{\chi(2m+1)}{2}.$$
Hence deg(Φ_{m+2}Φ_m³) > deg(Φ_{m−1}Φ_{m+1}³). This proves that deg Φ_{2m+1} = χ(2m+1)/2. Furthermore, the monomial with highest degree of Φ_{2m+1} is simply the monomial of highest degree of Φ_{m+2}Φ_m³, which means
$$c_{2m+1} = c_{m+2}c_m^3 = (-1)^{\frac{m+1}{2}+3\frac{m-1}{2}} = (-1)^{2m-1} = -1 = (-1)^m.$$
2. For Φ_{2m+1} with m even:
$$\frac{\chi(m+2)}{2} + 3\frac{\chi(m)}{2} = \frac{(m+2)^2-4+3(m^2-4)}{4} < \frac{4m^2+4m}{4} = \frac{\chi(2m+1)}{2},$$
$$\frac{\chi(m-1)}{2} + 3\frac{\chi(m+1)}{2} = \frac{(m-1)^2-1+3((m+1)^2-1)}{4} = \frac{m^2-2m+3m^2+6m}{4} = \frac{\chi(2m+1)}{2}.$$
This time, deg(Φ_{m+2}Φ_m³) < deg(Φ_{m−1}Φ_{m+1}³); this also proves that deg Φ_{2m+1} = χ(2m+1)/2, and furthermore:
$$c_{2m+1} = -c_{m-1}c_{m+1}^3 = (-1)^{\frac{m-2}{2}+3\frac m2+1} = 1 = (-1)^m.$$
3. For Φ_{2m}:
$$\frac{\chi(m)}{2} + \frac{\chi(m+2)}{2} + 2\frac{\chi(m-1)}{2} = \frac{m^2-4+(m+2)^2-4+2((m-1)^2-1)}{4} = \frac{4m^2-4}{4} = \frac{\chi(2m)}{2},$$
$$\frac{\chi(m)}{2} + \frac{\chi(m-2)}{2} + 2\frac{\chi(m+1)}{2} = \frac{m^2-4+(m-2)^2-4+2((m+1)^2-1)}{4} = \frac{4m^2-4}{4} = \frac{\chi(2m)}{2}.$$
Those polynomials have the same degree; hence we have to show that we do not have cancellation of the highest-degree coefficients. We look at the two subcases:
(a) If m is even:
$$\text{leading coeff.}(\Phi_{m+2}\Phi_{m-1}^2\Phi_m) = (-1)^{\frac{m+2}{2}+1}\frac{m+2}{2}\times(-1)^{\frac m2+1}\frac m2 = -\frac{m^2+2m}{4},$$
$$\text{leading coeff.}(\Phi_{m-2}\Phi_{m+1}^2\Phi_m) = (-1)^{\frac{m-2}{2}+1}\frac{m-2}{2}\times(-1)^{\frac m2+1}\frac m2 = -\frac{m^2-2m}{4}.$$
Hence the coefficient of degree χ(2m)/2 is $-\frac{(m^2+2m)-(m^2-2m)}{4} = -m \ne 0$.
(b) If m is odd:
$$\text{leading coeff.}(\Phi_{m+2}\Phi_m\Phi_{m-1}^2) = -\frac{(m-1)^2}{4}, \qquad \text{leading coeff.}(\Phi_{m-2}\Phi_m\Phi_{m+1}^2) = -\frac{(m+1)^2}{4}.$$
Hence the coefficient of degree χ(2m)/2 is $\frac{-(m-1)^2+(m+1)^2}{4} = m \ne 0$.
In both subcases, deg Φ_{2m} = χ(2m)/2 and c_{2m} = (−1)^{m+1}m, which completes the induction.
Theorem 9.3.8. The coefficients of Φ_ℓ are bounded by χ(ℓ)²e^{2ℓ²}p^{χ(ℓ)}; hence their size is bounded by χ(ℓ)(log p + 4) + 2 log χ(ℓ) + 1 = O(ℓ² log p).
Proof. Write
$$\Phi_\ell(A) = \psi_\ell(x_K, A, d - x_K A) = \sum_{n+2i+3j=\chi(\ell)} \alpha_{i,j}(\ell)\, x_K^n\, A^i\, (d - x_K A)^j$$
and expand the binomials. We then use the triangle inequality, and the fact that $|d| = |y_K^2 - x_K^3| \le \frac{p^3}{2}$:
$$\sum_{n+2i+3j=\chi(\ell)}\sum_{k=0}^{j}\left|\alpha_{i,j}(\ell)\binom jk x_K^{n+k}(-d)^{j-k}A^{i+k}\right| \le \sum_{n+2i+3j=\chi(\ell)}\sum_{k=0}^{j}|\alpha_{i,j}(\ell)|\,2^j\,|x_K|^{n+k}|d|^{j-k}|A|^{i+k} \le \sum_{n+2i+3j=\chi(\ell)}\sum_{k=0}^{j}|\alpha_{i,j}(\ell)|\,2^j\,p^{n+k}\left(\frac{p^3}{2}\right)^{j-k}|A|^{i+k}.$$
We now take a closer look at the coefficient $|\alpha_{i,j}(\ell)|\,2^j p^{n+k}\left(\frac{p^3}{2}\right)^{j-k}$. This coefficient is maximized when k = 0, which means the coefficients are bounded by
$$|\alpha_{i,j}(\ell)|\, p^{n+3j} = |\alpha_{i,j}(\ell)|\, p^{\chi(\ell)-2i}.$$
The biggest of those coefficients is obtained for i = 0, which in the end gives the bound
$$|\alpha_{i,j}(\ell)|\,2^j p^{n+k}\left(\frac{p^3}{2}\right)^{j-k} \le e^{2\ell^2}p^{\chi(\ell)}.$$
Since there are at most χ(ℓ)² monomials, this gives a bound on the coefficients of the polynomial Φ_ℓ: χ(ℓ)²e^{2ℓ²}p^{χ(ℓ)}. Hence the coefficients have a size bounded by
$$2\ell^2 + \chi(\ell)\log p + 2\log\chi(\ell) \le \chi(\ell)(\log p + 4) + 2\log\chi(\ell) + 1.$$
Root size
We compute an upper bound on the size of the complex roots of this polynomial; this will be useful to bound the running time of the root-finding algorithm we will use to compute the embeddings from K to C explicitly.
Theorem 9.3.9. Let z ∈ C such that Φ_ℓ(z) = 0. Then
$$|z| \le 2\chi(\ell)^2 e^{2\ell^2} p^2.$$
Proof. Our proof is heavily inspired by the one for Cauchy's theorem on bounds of roots of polynomials. Define $Q = X^d - \sum_{i=0}^{d-1}|a_i|X^i$, where the a_i denote the coefficients of Φ_ℓ. Then write
$$\frac{Q(x)}{x^d} = 1 - f(x)$$
with f : R*₊ → R a continuous, strictly decreasing function, which is +∞ at 0 and 0 at +∞. Hence this function is equal to 1 only once, which proves that Q has only one positive root. Call r this positive root; since Q(0) < 0, we have that Q is negative on [0, r] and positive on [r, +∞[. Using the same arguments as in the previous section, but in a different order, one can prove that a bound on the coefficient of A^i in Φ_ℓ is
$$\chi(\ell)^2 e^{2\ell^2} p^{\chi(\ell)-2i}.$$
Put g = χ(ℓ)² exp(2ℓ²) and h = p²; then the coefficient of $A^{\chi(\ell)/2-i}$ in Φ_ℓ is bounded by gh^i. Putting d = χ(ℓ)/2, we have
$$\sum_{i=0}^{d-1}|a_i|(2gh)^i \le \sum_{i=0}^{d-1} h^{d-i}\,h^i\,2^i\,g^{i+1} = gh^d\,\frac{(2g)^d-1}{2g-1} \le (2gh)^d\,\frac{g}{2g-1} < (2gh)^d.$$
This proves that Q(2gh) > 0, and hence that r ≤ 2gh. Finally, the triangle inequality gives
$$|P(z) - z^d| \le \sum_{i=0}^{d-1}|a_i||z|^i.$$
Hence for any root ζ ∈ C* we have $|\zeta^d| \le \sum_{i=0}^{d-1}|a_i||\zeta|^i$, hence Q(|ζ|) ≤ 0. Thus f(|ζ|) ≥ 1 = f(r); since f is strictly decreasing, this proves that |ζ| ≤ r. This proves the result.
Hence we can write $|z| \le p^2\chi(\ell)^2 2^{2.89\ell^2+1}$, and hence $\log_2|z| \le 2.89\ell^2 + 2\log_2 p + 4\log_2\ell + 1 = O(\ell^2 + \log p)$. Note that this is a tighter bound than the one given by Rouché's theorem, which would be of the form r = 1 + max_{i=0,...,d−1}|a_i|, and would give an upper bound of size O(ℓ² log p).
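A small numerical illustration of the two bounds, using bisection to locate the positive root r of Q; the example polynomial is arbitrary.

```python
from mpmath import mp, mpf, polyroots

def q_positive_root(abs_coeffs):             # abs_coeffs: |a_0|,...,|a_{d-1}|
    d = len(abs_coeffs)
    Q = lambda t: t**d - sum(c * t**i for i, c in enumerate(abs_coeffs))
    lo, hi = mpf(0), mpf(1) + max(abs_coeffs)     # Q(0) < 0, Q(hi) > 0
    for _ in range(200):                     # bisection: Q < 0 on [0, r)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Q(mid) < 0 else (lo, mid)
    return hi

mp.dps = 30
coeffs = [1, 7, -10, 3]                      # x^3 + 7x^2 - 10x + 3, leading first
roots = polyroots(coeffs)
r = q_positive_root([abs(mpf(c)) for c in reversed(coeffs[1:])])
cauchy = 1 + max(abs(mpf(c)) for c in coeffs[1:])
print(max(abs(z) for z in roots), "<=", r, "<=", cauchy)
```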
Computation of the polynomial
One strategy to compute Φ_ℓ(A) is simply to use its definition; this means computing the generic ℓ-division polynomial ψ_ℓ, then evaluating it at x = x_K and B = d − x_K A. The evaluation requires computing each monomial individually; assuming the powers of x_K and B are precomputed and cached, each monomial only requires a constant number of multiplications, which gives a total cost of O(ℓ⁴) multiplications. This is dominated by the cost of computing ψ_ℓ; in the end we get a complexity of O(ℓ^{5+ε}) integer multiplications.
A much faster way is to use the recurrence relations for Φ_ℓ, which are the recurrence relations defining ψ_ℓ instantiated at x = x_K and B = d − x_K A. These are recurrence relations between univariate polynomials; using them limits the number of terms and of operations compared with the case of the trivariate polynomials ψ_ℓ. The complexity analysis of this method is exactly the same as the one for the computation of the division polynomial of an explicit elliptic curve in short Weierstrass form. This is not surprising, since the former amounts to computing ψ_ℓ(x, A, B) knowing x and B, and the latter, knowing A and B. Hence:
Proposition 9.3.10. Computing Φ_ℓ requires O(ℓ M(ℓ²)) arithmetic operations. Since the size of the coefficients of Φ_ℓ is O(ℓ² log p), this gives a bit complexity of O(ℓ M(ℓ²) M(ℓ² log p)).
Our implementation of the latter method (i.e. the univariate recurrence relations) in Magma shows that the computation of Φ_ℓ takes 0.02s for ℓ = 23 and 0.07s for ℓ = 37, while the first strategy (compute the generic ℓ-division polynomial, then substitute values for x and B) took respectively 2s and 47s.
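A sketch of this univariate specialization in Python: the same recurrences, run with x = x_K and B = d − x_K A, where (x³ + Ax + B)² collapses to the constant y_K⁴. The values x_K = 2, y_K = 3 below are arbitrary illustrations.

```python
from functools import lru_cache
from sympy import symbols, expand, Integer

def phi_ell(ell, xK, yK):
    A = symbols("A")
    d = Integer(yK)**2 - Integer(xK)**3
    F2 = (Integer(xK)**3 + d) ** 2           # = y_K^4, now a constant
    @lru_cache(maxsize=None)
    def f(m):
        if m <= 2:
            return Integer([0, 1, 1][m])
        if m == 3:
            return expand(3*xK**4 + 6*A*xK**2 + 12*(d - xK*A)*xK - A**2)
        if m == 4:
            B = d - xK * A
            return expand(2*xK**6 + 10*A*xK**4 + 40*B*xK**3
                          - 10*A**2*xK**2 - 8*A*B*xK - 16*B**2 - 2*A**3)
        j = m // 2
        if m % 2:
            if j % 2:
                return expand(f(j+2)*f(j)**3 - 16*F2*f(j-1)*f(j+1)**3)
            return expand(16*F2*f(j+2)*f(j)**3 - f(j-1)*f(j+1)**3)
        return expand((f(j+2)*f(j-1)**2 - f(j-2)*f(j+1)**2) * f(j))
    return f(ell)

if __name__ == "__main__":
    print(phi_ell(5, 2, 3))                  # degree chi(5)/2 = 6 in A
```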
Empirical observations on the coefficients
We finish this section by outlining some empirical observations on Φ_ℓ. These assertions seem to hold for every polynomial we obtained when running the algorithm described in this section on several dozen examples, with parameters ℓ ≤ 29 and p ≤ 1789. Proving these propositions is likely to be hard, most of all because we do not know much about the coefficients α_{r,s} of the generic division polynomial.
Conjecture 9.3.11. The following assertions, from weakest to strongest, hold:
1. The coefficient with the largest absolute value is the constant coefficient a₀.
2. The coefficients have decreasing absolute value, i.e. $|a_0| \ge |a_1| \ge |a_2| \ge \dots \ge |a_{\chi(\ell)/2}|$.
3. We have $|a_0| \ge \sum_{i=1}^{\chi(\ell)/2} |a_i|$.
Any of these assertions would yield the bound $\chi(\ell)e^{2\ell^2}2^{-\chi(\ell)/3}p^{\chi(\ell)}$ on the coefficients of Φ_ℓ, which saves a factor $\chi(\ell)2^{\chi(\ell)/3}$ compared to our bound; that saving would carry over to the bound on its roots. However, this does not seem to change the asymptotic running time of our algorithm.
Precision required
Recall that Section 9.2.4 established that we need to choose P such that the largest denominator in the coefficients of the elements of K is smaller than 2^P, so that the conversion of complex coefficients into rational numbers can be done. Hence, we need to determine a value for P, or at least its asymptotic behavior as a function of ℓ and p. This task seems difficult, and we did not manage to prove anything rigorously; it may involve taking a closer look at Vélu's formulas and keeping track of the height of the coefficients. However, we propose a conjecture, which seems to hold in practice.
Recall that a root-finding algorithm will find all the roots with precision P in time
$$O(n\log^2 n(\log^2 n + \log(P + \log S))\,M(P + \log S)),$$
with S a bound on the size of the roots. Judging by the previous section, we have
$$S \le 2\chi(\ell)^2\exp(2\ell^2)\,p^2 \le p^2\chi(\ell)^2 2^{2.89\ell^2+1},$$
which means log₂ S ≤ 2.89ℓ² + 2 log₂ p + 4 log₂ ℓ + 1. Hence, we can take P = O(ℓ² + log p) without changing the asymptotic complexity of our algorithm. If we had just used Rouché's theorem, and our assumption that the constant coefficient is the biggest one (which we did not manage to prove), we would have gotten the bound χ(ℓ)(log p + 4.47) + 2 log χ(ℓ) + 1 = O(ℓ² log p), which is a bit worse. In practice, our experiments showed that taking P = ℓ² log p always gave us more than enough precision for the rational reconstruction to succeed; indeed, the size of the denominators was often much smaller than this bound. Hence, we put forward the following conjecture:
Conjecture 9.3.12. Taking P = O(ℓ² log p) is enough to recognize the rational coefficients; in fact, P = ℓ² log p is enough.
We verified this conjecture on curves with ℓ up to 29 and on fields with p up to 1789.
Description of the algorithm
We now state the final algorithm to compute an isogenous curve over F_p with given kernel. Recall from the discussion in Section 9.3.3 that we assume that Φ_ℓ is an irreducible polynomial over Z[X], which seems true in practice and is true generically with probability 1.
Algorithm 27 Compute an ℓ-isogenous curve and an isogeny with a given kernel, over F_p.
Input: E(F_p) : y² = x³ + ax + b (a, b ∈ F_p), Q ∈ E[ℓ].
Output: E'(F_p) : y² = x³ + a'x + b' (a', b' ∈ F_p), ℓ-isogenous to E; the rational function defining the ℓ-isogeny φ : E → E'.
1: Put Q_K = (x_K, y_K), where −p/2 ≤ x_K, y_K ≤ p/2.
2: Compute the polynomial Φ_ℓ; define K = Q[X]/(Φ_ℓ).
3: Put a_K = X and b_K = y_K² − x_K³ − a_K x_K, and define E_K : y² = x³ + a_K x + b_K.
4: Use Algorithm 26 to compute E'_K : y² = x³ + a'_K x + b'_K and the ℓ-isogeny over K, φ_K.
5: Compute the elements of F_p corresponding to each coefficient of the curve and of the rational function, using Section 9.3.1; return the result.
Proposition 9.3.13. Algorithm 27 requires
$$O\big((\ell^3\log P + M(\ell^2)\log\ell)M(P) + \ell\,M(\ell^2)M(\ell^2\log p) + \ell^3 M(P) + \ell\,M(\log p)\log\log p\big)$$
bit operations.
Note that this complexity is much lower than the cost of factoring Φ_ℓ over Z[X] (which is O(ℓ^{20} log² p) as analyzed in Section 9.3.3); hence, patching the algorithm in the event that Φ_ℓ turns out not to be irreducible would have a cost far superior to the rest of the algorithm.
Proof. Recall that the total cost of Algorithm 26 is
$$O\big((n\log^4 n + n\log^2 n\log P + n\,M(\ell)\log\ell\,\log P + \ell\,M(n)\log n)\,M(P)\big).$$
In our case, we have n = deg K = O(ℓ²), which gives a running time of O((ℓ³ log P + M(ℓ²) log ℓ)M(P)).
Step 2 requires, according to Proposition 9.3.10, O(ℓ M(ℓ²) M(ℓ² log p)) bit operations. We determine the complexity of Step 5 by first determining the complexity of reducing one element, u = r(α) ∈ K, to an element of F_p. Reducing the O(ℓ²) coefficients of r, which are rational numbers, requires reducing O(ℓ²) numerators and denominators modulo p, and computing the inverses of the denominators. If we suppose that the size of each numerator and denominator is O(P), one can determine the quotient of the division of one of those integers by p by computing 1/p with precision P, then multiplying the integer by 1/p and taking the integral part; this requires O(M(P)) operations per coefficient. Computing the inverses of the denominators is done by computing the inverse of their product and recovering each individual inverse (see the sketch after this proof), which requires O(M(log p)(ℓ² + log log p)) bit operations. Finally, evaluating r at a_{F_p} can be done using Horner's rule, and requires O(ℓ²) operations in F_p, which is negligible. We thus get a total cost of O(ℓ² M(P) + M(log p) log log p) bit operations for the reduction of one coefficient into F_p, and hence a cost of
$$O(\ell^3 M(P) + \ell\,M(\log p)\log\log p)$$
for reducing all the coefficients of an ℓ-isogeny. This gives the claimed complexity.
If we suppose that the estimate P = O(ℓ² log p) is correct, we get an overall complexity of
$$O\big((\ell^3 + \ell^3\log\log p + M(\ell^2)\log\ell)\,M(\ell^2\log p)\big) = O(\ell^{5+\varepsilon}\log^{1+\varepsilon} p).$$
Vélu's formulas require O(M(ℓ) log ℓ) field operations, each costing O(log^{1+ε} p); this complexity is much better than our algorithm's.
Extending this idea to other settings
The algorithms we presented in this chapter relied on the idea of using the algorithms for computing Abel-Jacobi map quickly in order to reduce the problem to the easy case of isogeny computation on complex tori. We show here how this idea could be adapted to other settings.
Extending the algorithm to higher genera
We start by noting that there is no straightforward generalization of Vélu's formulas to genus g; hence, other methods may need to be used to solve the hyperelliptic equivalent of Problem 1.1.16.
It should be possible to generalize Algorithm 25 (i.e. the one working over C) to curves of higher genus in a rather straightforward manner. Indeed, in higher genera, evaluating isogenies on complex tori is also computationally easy (see Note 1.4.12), and much simpler than evaluating isogenies on algebraic representations. Our algorithms in Chapter 8 can then provide the link between analytic and algebraic representations in quasi-linear time, at least for genus 2. Hence, the strategy of going to the complex torus to evaluate the isogeny, then coming back to the algebraic representation and interpolating the isogeny, can be generalized to higher genus. We leave the details of this generalization to future work.
Generalizing the algorithm to K would then be very straightforward: the same method of computing embeddings to C to a high enough precision, solving the problem on C then interpolating the coefficients of the rational function over K would work in exactly the same way.
However, note that the generalization of the algorithm to F p is not as straightforward. In particular, the generalization of the global torsion lifting procedure is unclear, as it requires the computation of (generic) -division polynomials in genus 1; the generalization of such polynomials to genus 2 and their properties have to be studied further in order to show that a similar global torsion lifting procedure could be used.
Solving other problems
Finally, we note that it may be possible to use a very similar idea to solve other problems related to isogenies. This section is inspired by an idea of [START_REF] Van Wamelen | Poonen's question concerning isogenies between Smart's genus 2 curves[END_REF], who uses a similar idea to find an isogeny between two genus 2 curves.
In particular, it seems that the problem of finding ℓ-isogenies between two given curves (Problem 1.1.19) may be solvable using this idea. This would first require the computation of the periods associated to each of the two given curves; once this is done, one can attempt to compute the ℓ-isogeny between the tori, by computing an α ∈ C which sends Λ₁ onto Λ₂. This is potentially not very hard, as we can reduce the periods so that their quotient is in the fundamental domain, then look for relations of the form $\tau_2 = \frac{m\tau_1+n}{\ell}$. Once α has been found, the isogeny can easily be computed.
This algorithm would extend to K very straightforwardly. Extending it to F p would require finding a lifting procedure which would preserve the isogeny and lift both curves to curves defined over the same number field; we do not know how to solve this problem.
Finally, we note that [VW00] uses very similar ideas to find isogenies between genus 2 curves, without knowing the degree of the isogeny beforehand. The idea is to use the LLL algorithm to attempt to find integer relations between the periods defining each lattice; this idea seems rather natural, and may lead to an algorithm to solve Problem 1.1.20 over C and K. As before, the generalization of the lifting procedure in a way that preserves the isogeny is unclear, and the algorithm does not seem to generalize straightforwardly to F_p.

Square
We wish to compute z = z₁². Writing z₁ = x₁ + iy₁ and assuming |x₁ − x̃₁| ≤ k₁2^{-P} and |y₁ − ỹ₁| ≤ k₁2^{-P}, expanding the products and bounding each elementary rounding error gives the bounds
$$|x - \tilde x| \le (2 + 2k_1(|x_1|+|y_1|))2^{-P}, \qquad |y - \tilde y| \le (2 + 2k_1(|x_1|+|y_1|))2^{-P}.$$
Bounding the norms of the real and imaginary parts by the norm of the number itself gives:
$$|x - \tilde x| \le (2 + 4k_1|z_1|)2^{-P}, \qquad |y - \tilde y| \le (2 + 4k_1|z_1|)2^{-P}.$$
Norm
We wish to compute z = |z₁|². We use the previous estimates to write:
$$\big||z_1|^2 - |\tilde z_1|^2\big| \le |x_1^2 - \tilde x_1^2| + |y_1^2 - \tilde y_1^2| \le (4 + 4k_1|x_1| + 4k_1|y_1|)2^{-P}.$$
Again, bounding the norms of the real and imaginary parts by the norm of the number itself:
$$\big||z_1|^2 - |\tilde z_1|^2\big| \le (4 + 8k_1|z_1|)2^{-P}.$$
Interlude: Division of a real by a positive real
Let us assume |a₁ − ã₁| ≤ k₁2^{-P} and |a₂ − ã₂| ≤ k₂2^{-P}, with a₂ > 0. In addition, assume that a₂ > k₂2^{-P}; this means that the sign of ã₂ is known and that we are not at risk of dividing by 0. Then we have
$$\left|\frac{a_1}{a_2} - \frac{\tilde a_1}{\tilde a_2}\right| = \left|\frac{a_1\tilde a_2 - a_2\tilde a_1}{a_2\tilde a_2}\right| = \left|\frac{(a_1-\tilde a_1)(a_2+\tilde a_2) - (a_1a_2 - \tilde a_1\tilde a_2)}{a_2\tilde a_2}\right| \le \frac{a_2+\tilde a_2}{a_2\tilde a_2}k_1 2^{-P} + \frac{k_1|a_2| + k_2|a_1| + 1}{a_2\tilde a_2}2^{-P}$$
$$\le \left(\frac{2a_2+\tilde a_2}{a_2\tilde a_2}\,k_1 + \frac{k_2|a_1|+1}{a_2\tilde a_2}\right)2^{-P} \le \left(\frac{3a_2+k_22^{-P}}{a_2\tilde a_2}\,k_1 + \frac{k_2|a_1|+1}{a_2\tilde a_2}\right)2^{-P} \le \left(\frac{3k_1}{a_2-k_22^{-P}} + \frac{k_2(|a_1|+k_12^{-P})+1}{a_2(a_2-k_22^{-P})}\right)2^{-P}.$$
To further simplify, we assume that a₂ ≥ 2k₂2^{-P}; otherwise, we might end up in a case where not even the high bit of ã₂ is correct (for instance if a₂ = 2k₂2^{-P} and ã₂ = k₂2^{-P}). This means that a₂ − k₂2^{-P} ≥ a₂/2, which helps with the denominators:
$$\left|\frac{a_1}{a_2} - \frac{\tilde a_1}{\tilde a_2}\right| \le \frac{6k_1a_2 + 2k_2(|a_1|+k_12^{-P}) + 2}{a_2^2}\,2^{-P}.$$
Finally, we assume that |a₁| ≥ k₁2^{-P} to further simplify:
$$\left|\frac{a_1}{a_2} - \frac{\tilde a_1}{\tilde a_2}\right| \le \frac{6k_1a_2 + 4k_2|a_1| + 2}{a_2^2}\,2^{-P}.$$
Square root
We write
$$|\sqrt{z_1} - \sqrt{\tilde z_1}| = \frac{|z_1 - \tilde z_1|}{|\sqrt{z_1} + \sqrt{\tilde z_1}|} \le \frac{k_1}{|\sqrt{z_1} + \sqrt{\tilde z_1}|}\,2^{-P}.$$
If we suppose that z₁ and z̃₁ are in the same quadrant, which is true if we suppose |x₁| > k₁2^{-P} and |y₁| > k₁2^{-P}, then √z₁ and √z̃₁ are in the same quadrant (since the angle is just divided by 2). This means that $|\sqrt{z_1} + \sqrt{\tilde z_1}| \ge |\sqrt{z_1}|$. Hence
$$|\sqrt{z_1} - \sqrt{\tilde z_1}| \le \frac{k_1}{\sqrt{|z_1|}}\,2^{-P}.$$
Exponential
Starting with real numbers: we have |e^x − e^x̃| ≤ e^t|x − x̃| with t in the interval bounded by (but not containing) x and x̃, using Taylor–Lagrange at order 1 (or Rolle's theorem). Hence
$$|e^x - e^{\tilde x}| \le e^t|x - \tilde x| \le \max(e^x, e^{\tilde x})\,k_x2^{-P} \le e^x e^{k_x2^{-P}}k_x2^{-P}.$$
Since $k_x2^{-P} \le 2^{-P/2} \le 1/2$, we have
$$e^{k_x2^{-P}} \le 1 + k_x2^{-P} + \frac{(k_x2^{-P})^2}{2}\cdot\frac{1}{1-k_x2^{-P}} \le 1 + k_x2^{-P} + (k_x2^{-P})^2 \le 1 + 2k_x2^{-P}.$$
Hence
$$|e^x - e^{\tilde x}| \le e^x(1 + 2k_x2^{-P})k_x2^{-P} \le e^x(k_x+2)2^{-P}.$$
Now for complex numbers:
$$|e^{x+iy} - e^{\tilde x+i\tilde y}| = |e^x - e^{\tilde x+i(\tilde y-y)}| \le \sqrt{(e^x - e^{\tilde x}\cos(\tilde y-y))^2 + e^{2\tilde x}\sin^2(\tilde y-y)}.$$
Since for positive numbers a + b ≤ a + b + 2√a√b, we have √(a+b) ≤ √a + √b, and
$$|e^{x+iy} - e^{\tilde x+i\tilde y}| \le |e^x - e^{\tilde x}\cos(\tilde y-y)| + e^{\tilde x}|\sin(\tilde y-y)| \le |e^x - e^{\tilde x}| + e^{\tilde x}(|1-\cos(\tilde y-y)| + |\sin(\tilde y-y)|)$$
$$\le e^x(k_x+2)2^{-P} + e^x(1+2k_x2^{-P})(|1-\cos(\tilde y-y)| + |\sin(\tilde y-y)|).$$
For x > 0 we have sin(x) ≤ x and |1 − cos(x)| ≤ x²/2 (since cos x = 1 − 2sin²(x/2), or by the theorem for alternating series), hence
$$|e^{x+iy} - e^{\tilde x+i\tilde y}| \le e^x(k_x+2)2^{-P} + e^x(1+2k_x2^{-P})\left(\frac{(k_x2^{-P})^2}{4} + k_x2^{-P}\right) \le e^x(k_x+2)2^{-P} + e^x(1+2k_x2^{-P})\left(\frac14 + k_x\right)2^{-P},$$
because as always we suppose k_x2^{-P} ≤ 2^{-P/2}; hence
$$|e^z - e^{\tilde z}| \le e^x\left(\frac72 k_x + 4.25\right)2^{-P}.$$
Summary of the thesis
This thesis is concerned with the fast evaluation of complex functions in arbitrary precision, and more precisely of functions related to elliptic and hyperelliptic curves defined over the complex numbers. Our results, in particular those on the evaluation of the function θ in time quasi-linear in the desired precision, have a broader scope and can be reused in other contexts. In the first two parts, we describe the functions for which we give fast algorithms in this thesis; we also present a method used by [Dup06] to compute certain values of the theta function in quasi-linear time. This is the method that we generalize in order to obtain our fast algorithms: we first describe a method to compute θ(z, τ) in genus 1 in quasi-linear time, then generalize this method once more to give an algorithm computing θ(z, τ) in genus 2. Moreover, it appears that this method could generalize to higher genera, but some problems then arise for which we have not found a satisfactory solution. The main application we make of our results is the fast computation of the Abel-Jacobi map and of its inverse in quasi-linear time. This result moreover allows us to describe a new algorithm for computing isogenies with given kernel; the algorithm over C directly uses the fast evaluation of the Abel-Jacobi map, and we show how to use this algorithm for the computation of isogenies with given kernel over a number field, then over a finite field, using a lifting of the curve.
Elliptic curves and the Abel-Jacobi map
Elliptic curves, along with hyperelliptic curves, their generalization to higher genus, have been studied for centuries by mathematicians; it is at the end of the 1980s that the idea of using them in cryptography surfaced, and with it the question of designing efficient algorithms to compute objects related to these curves. We only deal here with the case of elliptic curves, but many properties, as well as some algorithms, generalize to higher genus.
An elliptic curve over a field K can be defined as the set of points satisfying the following equation, called a short Weierstrass form:
$$y^2 = x^3 + ax + b \quad\text{with } a, b \in K.$$
A complex elliptic curve admits another representation, different from its algebraic representation (by a short Weierstrass equation). Indeed, we have the following theorem, called the Abel-Jacobi theorem: every complex elliptic curve E(C) is isomorphic to a complex torus C/(Zω₁ + Zω₂) for some periods ω₁, ω₂ ∈ C. One then speaks of the analytic representation of an elliptic curve, or of the complex torus associated to a curve, to refer to the set C/(Zω₁ + Zω₂). This map can be computed by algorithms for the evaluation of elliptic integrals; we refer to the section below on the Abel-Jacobi map. The interest of this map in the context of isogenies is that an isogeny between complex tori C/Λ₁ → C/Λ₂ is always of the form z ↦ αz for some α: evaluating the isogeny is thus very easy. We detail in a section below an isogeny computation algorithm which takes advantage of this property.
In order to compute the Abel-Jacobi map, as well as its inverse, in an asymptotically fast way, we studied a function related to this map, Jacobi's θ function. We detail the most important result of this manuscript, which is an algorithm for the evaluation of θ with quasi-linear complexity in the required precision; our method is inspired by a similar method for the theta-constants, which are special values of this function, studied in [Dup06].
The theta function and fast computation of theta-constants
The theta function
In genus 1, the function θ is defined by
$$\theta(z, \tau) = \sum_{n\in\mathbb Z} e^{i\pi n^2\tau + 2i\pi nz},$$
for z ∈ C and τ in the upper half-plane. Thus, one may assume that Im(z) ≤ Im(τ)/2 and |Re(τ)| ≤ 1/2. Note, however, that deducing the final values θ(z, τ) from θ(z', τ') with z', τ' reduced requires computing exponential factors, whose size can be large; thus, the complexity of the argument reduction depends strongly on the arguments, whereas we will see later that the algorithm computing θ(z', τ') has a complexity independent of the arguments.
The generalization of this function to higher genus is not very complicated: for the function θ, the definition is given by a similar formula, but with z ∈ C^g and τ a g × g matrix with positive definite imaginary part. One then defines 2^g fundamental theta functions, and 2^{2g} theta functions with characteristics in total; all the properties we stated in this section also generalize to higher genus. The only exception concerns the generalization of the fundamental domain to higher genus, which is not as simple as it seems, because the domain one obtains is characterized by inequalities which have not been determined explicitly; however, it is possible to consider two less strict reductions, for which an explicit algorithm exists, and which still seem to have desirable properties for the argument reduction for θ.
The arithmetic-geometric mean
Among the numerous formulas satisfied by the values of the theta function, one can cite the τ-duplication formulas for the theta-constants:
$$\theta_0^2(0, 2\tau) = \frac{\theta_0^2(0,\tau) + \theta_1^2(0,\tau)}{2}, \qquad \theta_1^2(0, 2\tau) = \theta_0(0,\tau)\theta_1(0,\tau).$$
These are exactly the steps of the arithmetic-geometric mean (AGM) iteration $(a, b) \mapsto \left(\frac{a+b}{2}, \sqrt{ab}\right)$, which, for good choices of the square roots, converges quadratically. Quadratic convergence means that the number of exact digits at one step is (almost) double the number of exact digits at the previous step; thus, an approximation of the limit with P exact digits can be obtained by computing O(log P) terms of the sequence. In the case of the AGM, this means that AGM(a, b) can be computed at precision P in O(M(P) log P) operations, where M(P) denotes the time needed to multiply two numbers of size P bits.
To formally relate the theta-constants to the arithmetic-geometric mean, one must determine whether the condition that the choices of signs be good is satisfied for certain values of τ. For τ ∈ F_k, the choice of signs is good; in particular, we have, for all τ ∈ F_k,
$$\operatorname{AGM}(\theta_0^2(0,\tau),\ \theta_1^2(0,\tau)) = 1.$$
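This statement is easy to check numerically with mpmath, whose jtheta(3, ·) and jtheta(4, ·) correspond to θ₀ and θ₁ here, with nome q = e^{iπτ}; the chosen τ is an arbitrary point of the fundamental domain.

```python
from mpmath import mp, mpc, jtheta, exp, pi, agm

mp.dps = 40
tau = mpc(0.1, 1.5)                       # a point of the fundamental domain
q = exp(1j * pi * tau)
t0, t1 = jtheta(3, 0, q), jtheta(4, 0, q)
print(agm(t0**2, t1**2))                  # approximately 1
```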
Calcul rapide de theta-constantes
We now describe the algorithm, presented in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF][START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF], which computes the theta-constants quickly from the AGM. The preceding proposition can be rewritten using the homogeneity of the AGM:

AGM( 1, θ_1^2(0, τ)/θ_0^2(0, τ) ) = 1/θ_0^2(0, τ).

One can then use the action of SL_2(Z), which gives the following result:

AGM( 1, θ_2^2(0, τ)/θ_0^2(0, τ) ) = 1/(-iτ θ_0^2(0, τ)).

Jacobi's equation θ_0^4(0, τ) = θ_1^4(0, τ) + θ_2^4(0, τ) shows that the function

f_τ : x -> AGM(1, x)/AGM(1, sqrt(1 - x^2)) + iτ

satisfies f_τ( θ_1^2(0, τ)/θ_0^2(0, τ) ) = 0. The algorithm then consists in applying Newton's method to the function f_τ, in order to compute an approximation of the quotient of the squares of the fundamental theta-constants; each of them can then be recovered by applying the AGM, and the third one by Jacobi's equation. Newton's method must be applied carefully in order to preserve a quasi-optimal complexity: one starts from a low-precision approximation of the result, then applies each Newton step while doubling the working precision. Indeed, since Newton's method is self-correcting, it is enough to work at reduced precision and to double the required precision at each step; the asymptotic analysis shows that the total complexity is that of the last step, which works at full precision. Here, one obtains a complexity of O(M(P) log P) operations.
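The precision-doubling Newton iteration can be sketched generically as follows (Python/mpmath; ours, with f passed in abstractly and the derivative approximated by a finite difference, so this is scaffolding rather than the thesis's implementation).

from mpmath import mp, mpf

def newton_doubling(f, x0, target_dps, start_dps=30):
    """Refine a low-precision root x0 of f to target_dps digits,
    doubling the working precision at every Newton step."""
    dps, x = start_dps, x0
    while dps < target_dps:
        dps = min(2 * dps, target_dps)
        mp.dps = dps + 10                 # a few guard digits
        h = mpf(10) ** (-(dps // 2))      # finite-difference step
        fx = f(x)
        x = x - fx * h / (f(x + h) - fx)  # Newton step
    return x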
Finally, note that this algorithm can also be turned into an algorithm computing θ(0, τ), for τ in F, with a complexity bounded by c M(P) log P for a constant c independent of τ; the complexity is then uniform in τ. The idea is to also use the naive algorithm, which approximates θ by computing a partial sum with enough terms: its complexity is O(M(P) sqrt(P/Im(τ))), i.e. it improves as Im(τ) grows, whereas the complexity of the algorithm above gets worse and worse (one has to count the number of bits lost to approximation errors to see this). One can thus use the naive algorithm for values of Im(τ) that are relatively large compared to the target precision, and the fast algorithm for smaller values of Im(τ); one can show that the resulting algorithm has the right complexity. This strategy for making the complexity uniform over the fundamental domain can also be applied to our algorithms, as we shall see.
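A sketch of the naive algorithm (Python/mpmath, our own illustration): summing O(sqrt(P/Im(τ))) terms of the series, with a crude truncation bound and a few guard bits; the exact bound used in the thesis is sharper.

from mpmath import mp, mpc, exp, pi, sqrt, ceil

def theta_naive(z, tau, prec_bits):
    """Partial sum of theta(z,tau) = sum_n e^{i pi n^2 tau + 2 i pi n z}."""
    mp.prec = prec_bits + 32                        # guard bits
    B = int(ceil(sqrt(prec_bits / tau.imag))) + 2   # heuristic bound
    ipi = mpc(0, 1) * pi
    s = mpc(1)
    for n in range(1, B + 1):
        s += exp(ipi * n * n * tau) * (exp(2 * ipi * n * z) + exp(-2 * ipi * n * z))
    return s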
Computing Jacobi's theta function
We now describe how we generalized this method to the computation of θ(z, τ), for any reduced z and τ, in quasi-linear time. The results of this section are taken from an article accepted for publication in the journal Mathematics of Computation.
The necessary ingredient is the generalization of the τ-duplication formulas to theta functions:

θ_0^2(z, 2τ) = (θ_0(z, τ) θ_0(0, τ) + θ_1(z, τ) θ_1(0, τ))/2,
θ_1^2(z, 2τ) = (θ_0(z, τ) θ_1(0, τ) + θ_1(z, τ) θ_0(0, τ))/2.

One can thus define a function of 4 variables,

F(x, y, z, t) = ( (sqrt(x) sqrt(z) + sqrt(y) sqrt(t))/2, (sqrt(x) sqrt(t) + sqrt(y) sqrt(z))/2, (z + t)/2, sqrt(z) sqrt(t) ),

where the square roots are chosen with positive real part. For z, τ satisfying Im(z) ≤ Im(τ)/4 and Im(τ) > 0.345, one can show that Re(θ_0(z, τ)) > 0 and Re(θ_1(z, τ)/θ_0(z, τ)) > 0; the sequence of iterates of (θ_0^2(z, τ), θ_1^2(z, τ), θ_0^2(0, τ), θ_1^2(0, τ)) under F therefore corresponds to the same values with τ multiplied by 2 at each step, and the sequence converges to (1, 1, 1, 1).
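Concretely, one step of this function on the squares of the fundamental theta functions and theta-constants can be sketched as follows (Python/mpmath, ours; principal square roots, which match the good choices only for suitably reduced arguments). The cross-check uses the correspondence θ_0(z, τ) = jtheta(3, pi*z, q), θ_1(z, τ) = jtheta(4, pi*z, q), q = e^{iπτ}.

from mpmath import mp, mpc, sqrt, jtheta, exp, pi

def F_step(x, y, z, t):
    """(th0^2(z,tau), th1^2(z,tau), th0^2(0,tau), th1^2(0,tau))
       -> the same four quantities at 2*tau."""
    sx, sy, sz, st = sqrt(x), sqrt(y), sqrt(z), sqrt(t)
    return ((sx * sz + sy * st) / 2, (sx * st + sy * sz) / 2,
            (z + t) / 2, sz * st)

mp.dps = 40
zz, tau = mpc("0.1", "0.05"), mpc("0.2", "1.3")
q, q2 = exp(mpc(0, 1) * pi * tau), exp(mpc(0, 1) * pi * 2 * tau)
vals = (jtheta(3, pi * zz, q) ** 2, jtheta(4, pi * zz, q) ** 2,
        jtheta(3, 0, q) ** 2, jtheta(4, 0, q) ** 2)
print(F_step(*vals)[0] - jtheta(3, pi * zz, q2) ** 2)   # ~ 0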
Note that the condition on Im(z) is somewhat stricter than the one obtained simply by reducing the first argument; applying this result therefore requires the z-duplication formulas, which give θ(2z, τ) in terms of the θ_i(z, τ).
However, the sequence defined above does not exactly generalize the AGM, since it does not necessarily converge quadratically; this can be seen by taking, for instance, x = y and z = t. This difficulty can be circumvented in our case: what we are really after is a function F such that, just as for the AGM,

F( θ_1^2(z, τ)/θ_0^2(z, τ), θ_1^2(0, τ)/θ_0^2(0, τ) ) = ( 1/θ_0^2(z, τ), 1/θ_0^2(0, τ) ).

Moreover, F must be computable in quasi-linear time. For the AGM, this property followed directly from homogeneity, i.e. AGM(λa, λb) = λ AGM(a, b). This led us to study the limit of the sequence of iterates of F starting from (λx, λy, µz, µt), with the idea of then taking λ = 1/θ_0^2(z, τ) and µ = 1/θ_0^2(0, τ). We managed to prove the following: if (x_n, y_n, z_n, t_n) is the sequence of iterates of F starting from (x, y, z, t), and (x'_n, y'_n, z'_n, t'_n) the one starting from (λx, λy, µz, µt), then

µ = lim_{n -> ∞} z'_n / z_n,   λ = lim_{n -> ∞} ( (x'_n/z'_n) / (x_n/z_n) )^{2^n} (z'_n / z_n).

Moreover, the sequences on the right-hand sides converge quadratically, i.e. µ and λ can be computed with precision P in O(M(P) log P) operations. The proof of this proposition is technical. Note that if x, y are the theta functions at z and τ, and z, t the theta-constants, then lim x_n = lim z_n = 1 and the right-hand sides simplify. One can thus define F as the function computing the limits of the sequences appearing above.
Just as for the theta-constants, one uses the action of SL_2(Z) to bring out the parameters:

θ_2^2(z/τ, -1/τ) = -iτ e^{2iπz^2/τ} θ_1^2(z, τ),   θ_0^2(z/τ, -1/τ) = -iτ e^{2iπz^2/τ} θ_0^2(z, τ).

Hence

F( θ_2^2(z, τ)/θ_0^2(z, τ), θ_2^2(0, τ)/θ_0^2(0, τ) ) = ( e^{-2iπz^2/τ} / (-iτ θ_0^2(z, τ)), 1/(-iτ θ_0^2(0, τ)) ),

provided the choices of sign are good for θ(z/τ, -1/τ). To guarantee this, we impose the conditions |Re(z)| ≤ 1/8, Im(z) ≤ Im(τ)/2, 0.345 ≤ Im(τ) < 1, which ensure that the choices of sign are good for θ both at (z, τ) and at (z/τ, -1/τ). This constrains z, τ to particular values inside a compact set; however, starting from the conditions obtained after reducing both arguments, one can always get back to this compact set using the z-duplication and τ-duplication formulas. It is then easy to build a function f_{z,τ} vanishing at θ_1^2(z, τ)/θ_0^2(z, τ); Newton's method is then used to compute this quotient, which (after one final application of F) yields θ(z, τ). The cost of this algorithm is O(M(P) log P), but the complexity a priori depends on z and τ (in particular, on the number of τ- and z-duplication formulas one has to apply); we managed to give a version of the algorithm whose complexity is uniform in z and τ over the whole fundamental domain, and for which the precision losses never exceed P digits (so that working at precision 2P suffices to obtain the exact result).
We implemented the algorithm above using the MPC library [START_REF] Enge | GNU MPC -A library for multiprecision complex arithmetic with exact rounding[END_REF], a C library for computations with complex numbers in arbitrary precision. We also implemented an optimized version of the naive algorithm in order to obtain an adequate comparison; finally, we also compared our algorithm to MAGMA's Theta function. The timings are summarized in Table 6.1; they show that both algorithms are always faster than MAGMA, and that our algorithm is faster than the naive algorithm for precisions above 260,000 decimal digits (which requires about one minute of computation).
Generalizing the algorithm to higher genus
In this section we describe the results of Chapter 7, which are taken from an article written with Emmanuel Thomé and published at the Algorithmic Number Theory Symposium XII; we show how the algorithm described in the previous section can be generalized to higher genus.
First note that the naive algorithm (by partial summation of the series) is harder to analyze in the general genus-g case. In genus 1, the algorithm requires O(M(P) sqrt(P/Im(τ))) operations to compute θ at precision P; we carried out a similar analysis, made more technical by the larger number of variables, in genus 2, which gives an algorithm in O(M(P) P/Im(τ_{1,1})) operations. However, such a proof does not generalize to genus g very easily: we obtained a complexity of O(M(P) (P/λ)^{g/2}), where λ is the smallest eigenvalue of Im(τ), but the link with Im(τ_{1,1}), although expected by a conjecture, could not be formally established. Note, however, that an equally natural approach is described in other terms by [DHB + 04]; our analysis shows a complexity of O(M(P) P^{g/2}), but we could not determine the dependence of that algorithm on τ.
Let us now come back to the genus-2 case. An algorithm similar to the genus-1 algorithm for theta-constants is described in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF]; the generalization of the AGM that it uses is the Borchardt mean, defined by the following recurrences:

a_{n+1} = (a_n + b_n + c_n + d_n)/4,
b_{n+1} = (sqrt(a_n) sqrt(b_n) + sqrt(c_n) sqrt(d_n))/2,
c_{n+1} = (sqrt(a_n) sqrt(c_n) + sqrt(b_n) sqrt(d_n))/2,
d_{n+1} = (sqrt(a_n) sqrt(d_n) + sqrt(b_n) sqrt(c_n))/2.

Here again, the notion of good choices of sign is crucial; it is defined by inequalities on the real parts of quotients of the square roots, as well as the same inequalities with the four numbers permuted. A Borchardt sequence in which all the choices of sign are good (in fact, one can even tolerate finitely many bad choices) converges quadratically; its limit B(a, b, c, d) can therefore be computed in O(M(P) log P). The Borchardt mean corresponds exactly to the τ-duplication formulas for genus-2 theta-constants; one is thus led to study quotients such as Re(θ_1(0, τ)/θ_0(0, τ)). We managed to extend [Dup06, Prop. 6.1] and show that the choices of sign are good as soon as Im(τ) is reduced by Minkowski reduction and Im(τ_{1,1}) > 0.594. Hence, by homogeneity of the Borchardt mean, for τ satisfying these conditions:

B( 1, θ_1^2(0, τ)/θ_0^2(0, τ), θ_2^2(0, τ)/θ_0^2(0, τ), θ_3^2(0, τ)/θ_0^2(0, τ) ) = 1/θ_0^2(0, τ).
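One Borchardt step translates directly into code; here is a minimal sketch (Python/mpmath, ours), with principal square roots, which is only valid when these coincide with the good choices.

from mpmath import sqrt

def borchardt_step(a, b, c, d):
    """One iteration of the Borchardt mean in genus 2."""
    sa, sb, sc, sd = sqrt(a), sqrt(b), sqrt(c), sqrt(d)
    return ((a + b + c + d) / 4,
            (sa * sb + sc * sd) / 2,
            (sa * sc + sb * sd) / 2,
            (sa * sd + sb * sc) / 2)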
As in genus 1, one then uses the action of Sp_4(Z) on the values of θ to compute the coefficients of τ: considering three well-chosen matrices (given in [START_REF] Dupont | Moyenne arithmético-géométrique, suites de Borchardt et applications[END_REF], or in Proposition 7.1.3) yields a way to compute the 3 coefficients of τ and to apply Newton's method, giving a quasi-linear-time algorithm (cf. Section 7.1). However, one must also prove that the choices of sign are good for the theta-constants arising from the action of these matrices; unfortunately, proving such a result seems very difficult, and it remains a conjecture. The generalization of this algorithm proceeds in the same way as in genus 1. (In the rest of this section, we use the notation θ_{i1,i2,...,in} to mean θ_{i1}, θ_{i2}, ..., θ_{in}.) Consider the τ-duplication formulas for theta functions in genus 2:

θ_i^2(z, 2τ) = (1/2^g) sum over j in {0, ..., 2^g - 1} of θ_j(z, τ) θ_{i XOR j}(0, τ),

where XOR denotes the bitwise exclusive or. One can then define a function F of 8 variables such that

F( θ^2_{0,1,2,3}(z, τ), θ^2_{0,1,2,3}(0, τ) ) = ( θ^2_{0,1,2,3}(z, 2τ), θ^2_{0,1,2,3}(0, 2τ) ).
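The XOR-indexed duplication above translates directly into code. The sketch below (Python/mpmath, ours) performs one τ-duplication step on the squares of the 2^g fundamental theta functions and theta-constants; the division by n = 2^g matches the normalization of the genus-1 formulas above, and principal square roots again stand in for the correct choices.

from mpmath import sqrt

def dup_step(theta2_z, theta2_0):
    """One tau-duplication step in genus g: inputs are the lists
    [th_i^2(z,tau)] and [th_i^2(0,tau)] for i = 0 .. 2^g - 1."""
    n = len(theta2_z)                       # n = 2^g
    sz = [sqrt(v) for v in theta2_z]
    s0 = [sqrt(v) for v in theta2_0]
    new_z = [sum(sz[j] * s0[i ^ j] for j in range(n)) / n for i in range(n)]
    new_0 = [sum(s0[j] * s0[i ^ j] for j in range(n)) / n for i in range(n)]
    return new_z, new_0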
This function requires the extraction of square roots, which requires studying Re(θ_{1,2,3}(z, τ)/θ_0(z, τ)). However, we were unable to determine a sufficient condition on τ guaranteeing that these real parts are positive. We therefore turned to another solution, which is to compute a low-precision approximation of the quotient in order to determine which square root corresponds to the desired quotient of thetas; we call this a correct choice of roots, since it is the one giving the correct value for the limit of the sequence (which is 1). We conjecture that good choices of roots and correct choices of roots coincide at least for τ in F_2 and z reduced, which is confirmed experimentally.
As in genus 1, the sequence of iterates of F does not necessarily converge quadratically; but, as in genus 1, studying F(λ a_{0,1,2,3}, µ b_{0,1,2,3}) brings up very similar sequences. We managed to show that these sequences converge quadratically, by generalizing the same proof as in genus 1; the main obstacle to a generalization to higher genus is essentially the technicality of the proof. We thus manage to define a function F, computable in time O(M(P) log P), such that

F( θ^2_{1,2,3}(z, τ)/θ_0^2(z, τ), θ^2_{1,2,3}(0, τ)/θ_0^2(0, τ) ) = ( 1/θ_0^2(z, τ), 1/θ_0^2(0, τ) ).
The three matrices of Sp_4(Z) used to compute the coefficients of τ can also be used here, and allow one to compute the coefficients of z. However, applying Newton's method is not as direct as in genus 1: we now have a function of 6 variables, but only 5 coefficients (2 for z and 3 for τ). Two solutions are possible:

• use the equation of the Kummer variety, an equation linking the theta functions and theta-constants considered here, to lower, in effect, the dimension of the source space;

• define a function whose value at the 6 quotients considered here is a sextuple depending only on z and τ (taking, for instance, the λ and µ obtained from the action of the three matrices mentioned above), and hope that the Jacobian matrix of this function is invertible.

We use the second approach, which seems to work in practice: it amounts to adding a sixth equation which, although it depends only on 5 parameters computed by the function, gives an invertible Jacobian. We could not find a convincing explanation for this invertibility, which therefore remains conjectural. In any case, modulo this conjecture, Newton's method applies, and one obtains an O(M(P) log P) algorithm for computing θ in genus 2. We implemented this algorithm, along with an optimized implementation of the naive algorithm; both implementations were written in MAGMA and compared to MAGMA's Theta function. The results are presented in Table 7.1; our algorithm is faster than the naive algorithm for precisions above 3,000 decimal digits, which is even better than in genus 1, no doubt because the complexity of the naive algorithm is worse than in genus 1.
Finally, we outlined ways to generalize the above algorithm to genus g. The key is once again the τ-duplication formulas for theta functions, which are valid in genus g; they can be used to define a function F of 2^{g+1} variables, which doubles τ when applied to the squares of the theta functions and theta-constants at τ. Here again, the choices of sign must be guided by low-precision approximations of the result, which give the correct values of θ. The study of the homogeneity of the sequence of iterates of F brings up, for the computation of λ and µ, the same sequences as in genus 1 or 2; we believe it is possible to prove that they converge quadratically, even though the technicality of our proof prevents us from generalizing it to higher genus. In the end, we manage to define a function F such that

F( θ^2_{1,...,2^g-1}(z, τ)/θ_0^2(z, τ), θ^2_{1,...,2^g-1}(0, τ)/θ_0^2(0, τ) ) = ( 1/θ_0^2(z, τ), 1/θ_0^2(0, τ) ),

and which, modulo a generalization of our proof of the convergence of λ and µ, can be computed in time O(M(P) log P). Applying Newton's method is not straightforward: we obtain a function C^{2^{g+1}} -> C^t with t = g + g(g+1)/2. The strategies we devised in genus 2 must be adapted: we could try to lower the dimension of the source space, or compute more values of F at points dictated by the action of matrices of Sp_{2g}(Z). One must moreover ensure that the Jacobian of the system is invertible, which seems complicated; in fact, we were unable to determine a family of matrices of Sp_{2g}(Z) that would be a good candidate for this step of the algorithm. The genus-g case is thus more complex and requires solving several problems; note, however, that our approach seems to have good chances of succeeding.
Computing the Abel-Jacobi map
We then show how to use the algorithms for the fast computation of θ to compute the Abel-Jacobi map as well as its inverse; this is explained in Chapter 8 of this manuscript.
The Abel-Jacobi map
In genus 1, the Abel-Jacobi map can be computed using the Landen transform, a change of variables on complex elliptic integrals which can also be interpreted as a 2-isogeny between elliptic curves. The general method is described in [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF] in the real case: repeating this change of variables gives a chain of 2-isogenies converging to a limit curve whose associated elliptic integral is particularly simple to compute. One thus obtains an algorithm for computing complete elliptic integrals (computation of the periods) and incomplete ones (computation of the Abel-Jacobi map); moreover, as the sequence has very strong links with the arithmetico-geometric mean, the running time is O(M(P) log P) operations. This algorithm is generalized to the complex case by a recent article [START_REF] Cremona | The complex AGM, periods of elliptic curves over C and complex elliptic logarithms[END_REF], which notably addresses the choice of signs in the iterations.
We propose another algorithm, based on our work on the theta function and inspired by an algorithm of [Dup06, Chap. 9], for computing the elliptic logarithm of a point. The link between the functions θ and ℘ gives the following equation:

θ_3^2(z, τ)/θ_0^2(z, τ) = π^2 θ_1^2(0, τ) θ_2^2(0, τ) / ( ℘(z, [ω_1, ω_2]) ω_1^2 - (π^2/3)(θ_2^4(0, τ) - θ_1^4(0, τ)) ),

which can be evaluated from the coordinates of the point. One can then use the function F on which our fast algorithm for θ is built; more precisely, one uses the fact that

F( θ_1^2(z, τ)/θ_0^2(z, τ), θ_1^2(0, τ)/θ_0^2(0, τ) ) = ( 1/θ_0^2(z, τ), 1/θ_0^2(0, τ) )

to recover z, the elliptic logarithm. This gives a second algorithm for computing the elliptic logarithm in quasi-linear time; however, our experiments show that this algorithm is about 2 times slower than the algorithm based on the Landen transform. On the other hand, this algorithm generalizes to higher genus, whereas the Landen-based algorithm does not seem to generalize as easily (the real case in genus 2 is sketched in [START_REF] Bost | Moyenne arithmético-géométrique et périodes des courbes de genre 1 et 2[END_REF], but the link with Borchardt sequences is not really explicit). The first step is to compute the quotients of squares of theta functions from the Mumford coordinates of the given divisor; this is done using an algorithm computing the theta-constants, together with the formulas given in [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF], which convert between Mumford coordinates and theta coordinates and vice versa. It then suffices to apply a genus-g generalization of the function F, which extracts the coefficients of z and τ from certain well-chosen quotients of theta functions. We conjecture that the sequences considered in this computation converge quadratically, as well as that the theta-constants can be computed in quasi-optimal time; if both conjectures hold, we obtain a quasi-linear algorithm for computing the Abel-Jacobi map.
Inverting the Abel-Jacobi map

No quasi-linear algorithm for computing the inverse of the Abel-Jacobi map seems to have been proposed; we show how to perform this computation in genus 1 and 2, and how it could be performed in higher genus.
In genus 1, this computation amounts to the fast evaluation of the Weierstrass ℘ function. The link between ℘ and θ is given by the following classical formula:

℘(z, [ω_1, ω_2]) ω_1^2 = (π^2/3)(θ_2^4(0, τ) - θ_1^4(0, τ)) + π^2 θ_1^2(0, τ) θ_2^2(0, τ) θ_0^2(z, τ)/θ_3^2(z, τ).

A similar formula can be proven for ℘'. Hence, our algorithms for the fast computation of θ give an algorithm for computing ℘ in quasi-linear time.
We also found a second fast algorithm for computing ℘, this time based on the Landen transform. The latter can be written as a 2-isogeny between curves, with an explicit link between the periods of the two curves; the change of variables can therefore be rewritten as a relation between values of ℘ at different periods. Using Thomae's formulas, which express the coefficients of the curve in terms of the theta-constants, one obtains a backward recurrence expressing ℘(z, [ω_1, ω_2]) in terms of ℘(z, [ω_1, 2ω_2]) and the theta-constants at 2τ. Applying the Landen transform repeatedly amounts to studying the sequence ℘(z, [ω_1, 2^k ω_2]) as k tends to infinity, whose limit is known explicitly. A strategy is then to compute a rank N from which ℘(z, [ω_1, 2^N ω_2]) equals this limit up to 2^{-P}; we take N to be the rank from which |θ_2(0, 2^N τ)| ≤ 2^{-P}, although we could not prove that this choice is the right one. Once N has been chosen, the recurrence is applied N times to recover ℘(z, [ω_1, ω_2]). Since N = O(log P), this gives an algorithm in O(M(P) log P) operations.

A comparison of these two algorithms shows that the precision losses are roughly the same; in both cases, working with P (or even 2P) guard bits recovers the correct result. As for the practical speed of each algorithm, a comparison of their MAGMA implementations shows that the second algorithm is about 3 times faster than the one based on the fast computation of θ.

However, the first algorithm generalizes easily to higher genus: there exist formulas, given in [START_REF] Cosset | Applications des fonctions thêta à la cryptographie sur courbes hyperelliptiques[END_REF], relating the Mumford coordinates of a divisor to quotients of theta functions. Thus, the problem of evaluating the inverse of the Abel-Jacobi map in higher genus reduces to the problem of computing θ; as indicated previously, quasi-linear-time algorithms exist in genus 1 and genus 2, but the generalization to genus g first requires solving the problems that arise when applying Newton's method.
Computing an isogeny with given kernel

An isogeny is a group morphism of elliptic curves, i.e. a rational map such that φ(P + Q) = φ(P) + φ(Q). An isogeny therefore transports the discrete logarithm problem from one elliptic curve to another; the risk is then that this problem may be easier on one curve than on another, which would have consequences for the security of a cryptosystem based on one of these curves. We are thus interested in the problem of computing isogenies quickly.
We consider the following problem:

Problem. Given an elliptic curve E and a point P of ℓ-torsion (with ℓ prime), compute the unique curve E' and the unique isogeny φ such that Ker φ is the subgroup of order ℓ generated by P.

This problem is solved by the use of Vélu's formulas [START_REF] Vélu | Isogénies entre courbes elliptiques[END_REF], which require O(M(ℓ) log ℓ) elementary field operations. It is the simplest isogeny problem; other problems, such as "given two curves and a prime ℓ, compute an ℓ-isogeny between the curves" or "given two isogenous curves, compute an isogeny", can be solved by algorithms of higher complexity, which moreover often call Vélu's formulas as a subroutine.
We propose a different algorithm, based on the following theorem:

Theorem ([Sil86, Theorem VI.4.1]). An isogeny between two elliptic curves in analytic representation, i.e. C/Λ_1 and C/Λ_2, is of the form φ(z) = αz (mod Λ_2) for some α in C.

Computing isogenies between complex tori is thus very simple; the idea is to use the Abel-Jacobi map to extend the algorithm to algebraic representations of elliptic curves. We study this algorithm in Section 9.1 (Algorithm 25). It consists in evaluating the isogeny at O(ℓ) points, using the Abel-Jacobi map to reduce the problem to the evaluation of an isogeny between complex tori, then the inverse of the Abel-Jacobi map to find the value of the isogeny at the point; once these images are computed, interpolation recovers the rational fraction defining the isogeny. The complexity of the algorithm is O(M(P) log P M(ℓ) log ℓ) bit operations, which is larger by a factor log P than the complexity of Vélu's formulas.
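The interpolation step at the end can be illustrated concretely. Here is a minimal runnable sketch of plain Lagrange interpolation over mpmath numbers (ours; the actual algorithm needs fast interpolation in O(M(ℓ) log ℓ), and interpolates a rational fraction rather than a single polynomial).

from mpmath import mp, mpf

def lagrange_coeffs(points):
    """Coefficients (constant term first) of the polynomial of degree
    < len(points) through the given (x, y) pairs."""
    n = len(points)
    coeffs = [mpf(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [mpf(1)], mpf(1)   # basis polynomial l_i
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [mpf(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k] -= xj * c          # multiply by (x - x_j)
                new[k + 1] += c
            basis = new
            denom *= (xi - xj)
        for k in range(len(basis)):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

mp.dps = 30
# interpolating (1,1), (2,4), (3,9) recovers x^2: [0, 0, 1]
print(lagrange_coeffs([(mpf(1), mpf(1)), (mpf(2), mpf(4)), (mpf(3), mpf(9))]))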
The algorithm can be generalized to solve the same problem over a number field K = Q[X]/(f). Indeed, if f has degree n, there are n embeddings of K into C, namely the maps evaluating the elements of the number field (viewed as polynomials) at a root of f. One can thus compute the n complex embeddings of the curve defined over K and apply the algorithm to compute the isogeny corresponding to each of these embeddings; one must then interpolate the n embeddings to recover the isogeny over K. The complexity of this algorithm is O(n^{1+ε} ℓ^{1+ε} P^{1+ε}); the detailed analysis shows that the complexity is once again slightly worse (i.e. with a few more logarithmic factors) than that of Vélu's formulas over K, even though a comparison on a few examples showed that the algorithm may be competitive in practice.
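The embedding step is easy to sketch with mpmath (our illustration, not the thesis code): the n complex embeddings of K = Q[X]/(f) evaluate a polynomial representative at the n complex roots of f.

from mpmath import mp, polyroots, polyval

def embeddings(f_coeffs, elt_coeffs):
    """All complex embeddings of an element of Q[X]/(f); coefficient
    lists are highest-degree-first, as mpmath expects."""
    return [polyval(elt_coeffs, r) for r in polyroots(f_coeffs)]

mp.dps = 30
print(embeddings([1, 0, -2], [1, 0]))   # x in Q[X]/(x^2 - 2): +-sqrt(2)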
Finally, we also generalized the algorithm to solve the problem over F_p, using a global torsion lifting procedure to transform it into a problem about elliptic curves over a number field. We studied the lifting procedure, and in particular the polynomial defining the number field one obtains; we managed to give a bound on the size of its roots. Using this last result, we conjectured that the precision at which the embeddings of K into C must be computed for the algorithm to work is O(ℓ^2 log p), which allowed us to obtain the final complexity of the algorithm, O(ℓ^{5+ε} log^{1+ε} p). This complexity is much worse (by a factor O(ℓ^4)) than that of Vélu's formulas over F_p, and this is confirmed in practice.
It would be interesting to push the study of this method further. In particular, the algorithm seems to generalize fairly well to higher genus, for hyperelliptic curves defined over C or over K, using our methods for computing the Abel-Jacobi map; on the other hand, the lifting procedure generalizes poorly to higher genus, as it involves computing division polynomials, which are poorly understood in genus g. Finally, this method might help solve a more complex problem, that of determining an ℓ-isogeny between two given curves, building on similar work [START_REF] Van Wamelen | Poonen's question concerning isogenies between Smart's genus 2 curves[END_REF].
Conclusion

During this thesis, we designed several quasi-linear-time algorithms for computing functions in arbitrary precision. The study of an existing algorithm for theta-constants, using the links with the AGM and Newton's method, led us to a generalization of this algorithm, obtained by studying sequences derived from the τ-duplication formulas for θ and by combining their evaluation with Newton's method. We managed to generalize the algorithm to genus 2, modulo a conjecture on the invertibility of the Jacobian matrix of the system; a generalization to genus g is probably possible, but the application of Newton's method in that case is not as direct. Thanks to these algorithms, the Abel-Jacobi map can be computed in quasi-linear time, as can its inverse in genus 1 and 2, by algorithms we describe here. One application concerns the computation of isogenies of elliptic curves: we use our results to build an algorithm computing an isogeny with given kernel between complex curves, which we also generalize to curves defined over number fields and finite fields. This algorithm has a higher complexity than the state of the art, but unlike the latter it should generalize more easily to higher genus.
Abstract

The Abel-Jacobi map links the Weierstrass form of an elliptic curve defined over C to the complex torus associated with it. It can be computed in a number of operations quasi-linear in the target precision, i.e. in time O(M(P) log P). Its inverse is given by the Weierstrass ℘ function, which can be expressed in terms of θ, an important function in number theory. The natural algorithm for evaluating θ requires O(M(P) sqrt(P)) operations, but some values (the theta-constants) can be computed in O(M(P) log P) operations by exploiting the links with the arithmetico-geometric mean (AGM). In this manuscript, we generalize this algorithm in order to compute θ in O(M(P) log P). We exhibit a function F which has properties similar to the AGM. Much as in the algorithm for theta-constants, we can then use Newton's method to compute the value of θ. We implemented this algorithm, which is faster than the naive method for precisions above 260,000 decimal digits.

We show how to generalize this algorithm to higher genus, and in particular how to generalize the function F. In genus 2, we managed to prove that the same method leads to an algorithm evaluating θ in O(M(P) log P) operations; the same complexity also applies to the Abel-Jacobi map. This algorithm is faster than the naive method for precisions smaller than in genus 1, of the order of 3,000 decimal digits. We also outline ways to obtain the same complexity in arbitrary genus.

Finally, we exhibit a new algorithm for computing an isogeny of elliptic curves with given kernel. This algorithm uses the Abel-Jacobi map, since the isogeny is easy to evaluate on the torus; it can probably be generalized to higher genus.

Keywords: theta functions, Abel-Jacobi map, multiprecision, quasi-linear time, elliptic curves, isogenies, hyperelliptic curves, cryptography
Prec (decimal digits)   This work (Algorithm 15)   Naive (Algorithm 7)   Magma
1000                    0.008                      0                     0.01120
2000                    0.032                      0.008                 0.05520
4000                    0.092                      0.032                 0.2790
8000                    0.292                      0.112                 1.316
16000                   0.812                      0.380                 7.692
32000                   2.184                      1.32                  38.99
64000                   6.152                      4.55                  158.2
128000                  16.6                       15.1                  922.2
256000                  41.8                       41.3
512000                  102                        130
1024000                 248                        405

Table 6.1: Timings (in s) of different methods

Fast computation of the Abel-Jacobi map in genus 2 is attainable given the state of the art and the algorithms presented in this manuscript. As for the computation of its inverse, we proved in Chapter 7 that there is a O(M(P) log P) algorithm for the computation of θ(z, τ) with precision P, provided Conjecture 7.2.6 holds; we show how one can use this, and obtain a O(M(P) log P) algorithm under Conjecture 7.2.6.
Prec (decimal digits)   [CT13, Algorithm 28] (Algorithm 5 of Section 4.2.4)   Algorithm 21
2000                    0.02                                                  0.03
4000                    0.03                                                  0.06
8000                    0.07                                                  0.19
16000                   0.24                                                  0.56
32000                   0.74                                                  1.74
64000                   2.7                                                   5.3
128000                  6                                                     16
256000                  15                                                    39
512000                  37                                                    101
1024000                 98                                                    256

Table 8.3: Timings (in s) of different methods to compute the elliptic logarithm
8.2 In genus 2
Table 8.4: Number of arguments F (and G) takes (i.e. the number of quotients) and number of pairs (λ, µ), by genus

g   Number of arguments of F   Number of pairs (λ, µ)
1   2                          1
2   6                          3
3   14                         6
g   2^{g+1} - 2                g(g+1)/2
This gives a O(kM(P) + k^{4/3+ε}) algorithm to compute one value of ζ(2k), and a O(kM(P) + k^{2+ε}) algorithm to compute all values ζ(2k') for k' ≤ k (which is useful in order to recover all E_{2k} from the first k coefficients of the Laurent series of ℘). If we assume that P is larger than k, or even than k log k, this gives a O(M(P)(log P + k)) algorithm for either task. In the end, we propose Algorithm 24, which condenses two algorithms: one to compute only E_{2k}(τ), another one to compute E_{2k'}(τ) for all k' ≤ k.

The following table shows the fastest running time possible of our algorithm on several examples, and compares it to Vélu's formulas as implemented in Magma; we stress that the comparison is unfair and biased towards our algorithm.

ℓ    n     P required   Our algorithm (s)   Vélu in Magma (s)
23   132   650          460                 174
29   210   1390         3665                2111
We define the following domain:
This condition allows us to avoid z = (τ+1)/2, which is a zero of θ(z, τ), and hence a pole of quotients of the form θ_i/θ_0, which we consider in our algorithm much in the same way as [START_REF] Dupont | Fast evaluation of modular functions using Newton iterations and the AGM[END_REF]. We prove Proposition 6.2.4 and Theorem 6.2.6 under the assumption (z, τ) in A. Proposition 6.2.2. Let (z, τ) in A such that |Re(τ)| ≤ 1/2 and |Re(z)| ≤ 1/8. Then (z/τ, -1/τ) in A.
Proof. Write
Proposition 6.2.3. Let z, τ be such that conditions (2.5.11) are satisfied. Put τ' = τ/2 and z' = z/4. Then (z', τ') in A and (z'/τ', -1/τ') in A. Furthermore, one can compute θ_0(z, τ), θ_1(z, τ), θ_0(0, τ), θ_1(0, τ) from their equivalents at (z', τ') in O(M(P)) operations.
Proof. The first part of the proposition is simply Proposition 6.2.2. The second part is simply the application of the τ-duplication formulas (which we recalled in Section 6.2.1), as well as the z-duplication formulas, which we already mentioned in Chapter 2:

θ_0(2z, τ) θ_0^3(0, τ) = θ_1^4(z, τ) + θ_2^4(z, τ),
θ_1(2z, τ) θ_1^3(0, τ) = θ_0^4(z, τ) - θ_2^4(z, τ),
θ_2(2z, τ) θ_2^3(0, τ) = θ_0^4(z, τ) - θ_1^4(z, τ).
Using these formulas requires the knowledge of θ 2 2 (z, τ ) and of θ 2 2 (0, τ ); this could be computed using Jacobi's formula (Equation (2.5.6)) and the equation of the variety (Equation (2.5.7)), but we end up using a different trick in our final algorithm.
Good choices of sign and thetas
We now prove that, for the arguments we consider, the good choices of sign correspond exactly to values of θ: Proposition 6.2.4. For (z, τ) in A such that Im(τ) ≥ 0.335 (in particular, for τ in F) we have |θ_0(z, τ) - θ_1(z, τ)| < |θ_0(z, τ) + θ_1(z, τ)|, which also proves that Re(θ_1(z, τ)/θ_0(z, τ)) > 0 and Re(θ_1(0, τ)/θ_0(0, τ)) > 0.
Put τ_1 = τ/2^s and z_1 = z/2^s, so that Im(z_1) ≤ Im(τ_1)/2.
6: Put z_2 = z_1/4 and τ_2 = τ_1/2.
7: Compute approximations of absolute precision P of θ_0^2(z_2, τ_2), θ_1^2(z_2, τ_2), θ_0^2(0, τ_2), and θ_1^2(0, τ_2) using Algorithm 14.
8: Compute θ_0^2(z_2, τ_1), θ_1^2(z_2, τ_1), θ_0^2(0, τ_1), θ_1^2(0, τ_1), θ_2^2(z_2, τ_1), θ_2^2(0, τ_1) using the τ-duplication formulas (Equation (2.5.4)).
9: Compute θ_{0,1,2}(0, τ_1).
10: Compute θ_{0,1}(z_1/2, τ_1) using the z-duplication formulas (Equation (2.5.5)).
11: for i = 1 .. s do
12: Compute θ_0^2(0, 2^i τ_1), θ_1^2(0, 2^i τ_1) using the AGM.
13: Compute θ_0^2(2^{i-2} z_1, 2^i τ_1), θ_1^2(2^{i-2} z_1, 2^i τ_1) using the τ-duplication formulas (Equation (2.5.4)).
14: If i = s, compute also θ_2^2(0, 2^i τ_1) using the τ-duplication formulas, then θ_2(0, 2^i τ_1) by taking the square root.
15: Compute θ_2^2(2^{i-2} z_1, 2^i τ_1) using the τ-duplication formulas.
16: Compute θ_{0,1}(0, 2^i τ_1).
17: Compute θ_{0,1}(2^{i-1} z_1, 2^i τ_1) using the z-duplication formulas.
18: end for
19: Compute θ_2^2(2^{s-1} z_1, 2^s τ_1) using Equation (2.5.7).
20: Compute θ_{0,1,2}(z, τ) using the z-duplication formulas.
21: end if
We conjecture the following, which holds in practice:
Conjecture 7.2.6. The Jacobian of the system defined by F is invertible.

As described in Corollary 0.3.8, applying Newton's method allows us to compute an approximation of θ with precision p - δ, where δ is a small constant, from an approximation with precision p/2. Instead of computing the Jacobian exactly, we compute an approximation of ∂F_i/∂x_j with precision p using finite differences, i.e.

∂F_i/∂x_j ≈ (F_i(x + ε_j) - F_i(x)) / 2^{-p},

for ε_j a perturbation of 2^{-p} on the j-th coordinate; this does not modify the result of Corollary 0.3.8, but may require increasing the number of guard bits.
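A sketch of such a finite-difference Jacobian (Python/mpmath; ours, with the perturbation tied to the working precision; we use step 2^(-p/2) for the sketch, whereas the choice of 2^(-p) above requires correspondingly more guard bits):

from mpmath import mp, mpf, matrix

def jacobian_fd(F, x):
    """Approximate Jacobian of F at x by one-sided finite differences."""
    eps = mpf(2) ** (-(mp.prec // 2))
    Fx = F(x)
    J = matrix(len(Fx), len(x))
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps                     # perturb the j-th coordinate
        Fxp = F(xp)
        for i in range(len(Fx)):
            J[i, j] = (Fxp[i] - Fx[i]) / eps
    return J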
Homogeneity
We take a look at how the sequence of iterates of F behaves with respect to homogeneity. The following proposition is a straightforward generalization of Propositions 6.2.8 and 7.2.5.

Proposition 7.4.2. Let (a_n^{(i)}, b_n^{(i)}) be the sequence of iterates of F starting from (a^{(i)}, b^{(i)}) for i = 0, ..., 2^g - 1, and (a'^{(i)}_n, b'^{(i)}_n) the sequence starting from (λ a^{(i)}, µ b^{(i)}). Then b'^{(i)}_n = µ b^{(i)}_n for n ≥ 1, and a'^{(i)}_n = λ^{2^{-n}} µ^{1-2^{-n}} a^{(i)}_n (where the 2^n-th roots are chosen so that λ^{2^{-n}} tends to 1).

Call G the function which computes λ, µ with precision P by applying the formulas above; by definition, G applied to the rescaled sequences returns the pair (λ, µ). Once again, the a'^{(i)}_n do not necessarily converge quadratically (take, for instance, all the a^{(i)} equal). However, note that µ can be computed in O(log P) steps, since the Borchardt mean converges quadratically. We then propose the following conjecture:

Conjecture 7.4.3. If all the choices of sign are good, λ and µ can be computed with absolute precision P using only O(log P) iterations of F.
We proved this conjecture in genus 1 (Theorem 6.2.13) and in genus 2 (Theorem 7.2.8). We believe that the proof carries over to genus g, using Lemma 7.4.1; however, due to the technical nature of the proof, we were unable to generalize it in a rigorous and satisfying manner. This conjecture means that G can be evaluated in O(2^g M(P) log P) operations.
Defining F
We now wish to define F, a function which can be evaluated in complexity quasi-linear in the precision, which takes simple values at arguments which are quotients of fundamental theta functions and of theta constants, and which can be inverted.
As we mentioned in genus 2 (Section 7.2.2), Newton's method cannot be applied as straightforwardly as in genus 1. We are considering 2^g - 1 quotients θ_i^2(z, τ)/θ_0^2(z, τ) of fundamental theta functions and as many quotients of fundamental theta-constants, while there are only g coordinates in z and g(g+1)/2 in τ. Hence, using G to compute the z_i and τ_{i,j} is not a process which can be inverted using Newton's method, as it leads to a function from C^{2^{g+1}-2} to C^{g(g+3)/2}. We considered two possible workarounds in the genus 2 case. The first one was to take a look at the equations defining the variety defined by the theta functions and the theta-constants, such as the Kummer variety in genus 2; however, we are not aware of an explicit description of these equations in genus g. The other possibility is simpler, and we believe that it still yields a correct algorithm, which we express in a conjecture: Conjecture 7.4.4. There is a set of 2^g - 1 matrices M_0, ..., M_{2^g-1} of Sp_{2g}(Z), containing I_{2g}, such that, if we write
Chapter 8
Fast computation of the Abel-Jacobi map and its inverse in genus 1, 2 and above
This chapter uses the algorithms of Chapter 6 and Chapter 7 to compute the Abel-Jacobi map.
We show that the Abel-Jacobi map can be computed in quasi-optimal time in genus g; as for its inverse, its computation in quasi-linear time requires the existence of a quasi-linear-time algorithm for θ, which means solving the problems outlined in Section 7.4.
In genus 1
We first show in this section how to compute the inverse of the Abel-Jacobi map, as doing so establishes a result one can use for the computation of the Abel-Jacobi map. Computing the inverse of the Abel-Jacobi map requires solving two problems: computing the equation of the curve given the periods, and computing the image of points by the inverse map. The latter problem can be solved using a formula linking ℘ and θ; we compare the resulting algorithm to our algorithm using the Landen transform to compute ℘ (Algorithm 6). The last subsection shows how to compute the Abel-Jacobi map, and in particular gives an algorithm which uses the link between ℘ and θ.
Computing the equation of the curve
Suppose that we have an elliptic curve E = C/Λ in its analytic representation, with Λ = Z ω_1 + Z ω_2. We put τ = ω_2/ω_1 and suppose that Im(τ) > 0. We would like to compute two complex numbers a, b such that E is isomorphic to the curve defined by y^2 = 4x^3 - ax - b.
The equation of the curve is given by the differential equation satisfied by ℘ (Proposition 1.3.18):

Theorem 8.1.1. For all ω_1, ω_2 ∈ C, the Weierstrass ℘ function satisfies the differential equation
℘'(z)^2 = 4 ℘(z)^3 - g_2 ℘(z) - g_3.
Comparing methods for the computation of ℘
We compare here Algorithm 6, which computes ℘ using the Landen transform, and the algorithm that uses Theorem 8.1.4 together with Algorithm 15, which computes ℘ via θ. We discuss the precision loss and compare timings.
Precision loss
Recall that we analyzed the precision loss in Algorithm 15, and showed how working with internal precision 2P gave the value of θ up to 2^{-P}. However, applying Equation (8.1.1) can cause a potentially large loss of precision. For instance, the fact that θ_3(0, τ) = 0 means that θ_3 gets arbitrarily close to 0 as z nears the corners of the fundamental parallelogram; and Theorem 0.3.3 shows that division causes a loss of absolute precision proportional to minus the logarithm of the denominator (provided the number of bits lost is less than half of the total number of bits). We make this more precise by determining an equivalent of θ_3 near z = 0, using the product expansion of θ (see e.g. [Mum83, Section 14, p. 69]) to show that θ_3(z, τ) ~ c(τ) z as z → 0, for some nonzero constant c(τ).
Hence, we expect a loss of precision proportional to -log z. Similarly, if ω_1 is small, computing ℘ from θ (cf. Theorem 8.1.4) causes a loss of precision proportional to the logarithm of |ω_1|. Hence, for the extreme case z = 2^{-P} and ω_1 = 2^{-P}, we expect a loss of precision proportional to P, which means one would need O(P) guard bits.
As for Algorithm 6, note that, because of Theorem 4.3.2, the algorithm could require computing 1/sin^2(z) for z ≈ 0, and 1/ω_1^2 for ω_1 ≈ 0. Hence, we expect the algorithm to also lose precision for z close to 0 as well as for small ω_1, which is very similar to the situation for Equation (8.1.1). Again, at worst, we could have z = 2^{-P} and ω_1 = 2^{-P}; this tends to show that only O(P) guard bits are needed. There may also be a loss of precision in the computation of a denominator in the series, which could in theory be close to 0; however, the denominator is small only when z is close to ω_2, and this condition is easily checked at the beginning of the algorithm: we assume we can avoid this case easily. Overall, a more refined analysis would be needed to confirm the heuristics on precision loss we outline here; one could for example look at the precision loss analysis of Miller's algorithm for Bessel functions [BZ10, p. 153], as both rely on backwards induction.
We performed experiments to compare the precision loss in both algorithms by determining the precision lost in the cases mentioned above, i.e. z or ω_1 small. Using Magma, we attempted to compute ℘(π 2^{-t}, [1, i]) with decimal precision P, then ℘(π, [2^{-t}, 2^t i]), and ℘(π 2^{-t}, [2^{-t}, 2^t i]). We determined the loss of precision by comparing the output of an algorithm when run at precision P to the output of the same algorithm run at a much larger precision. The results are presented in Table 8.1; the computations were performed at precision 10000, and we stopped at t = 4096 to avoid computing results with a minority of correct digits.
Given the results shown in Table 8.1, we formulate the following conjecture:
Conjecture 8.1.7. For z, ω_1, ω_2 known with absolute precision P, working with internal precision 2P (resp. 3P) in Algorithm 6 (resp. when using Equation (8.1.1)) is enough to guarantee a result accurate with absolute precision P.
Hence, both Algorithm 6 and the algorithm which uses Equation (8.1.1) and Algorithm 15 have complexity O(M(P ) log P ), even when taking the precision loss into account.
We can actually bound the size of the remainder of the series by bounding the ratio between two consecutive terms. We put x = |q|^2 e^{(2k-1)/B}, so that the ratio of two consecutive terms is bounded by x; we can then put everything together and bound the remainder by a geometric series.
The total cost of the method breaks down as follows: compute the B_{2k}, then compute the first B terms of the series. Recall that we need to work with numbers of size O(P + k) bits in order to get a result that is accurate to 2^{-P}; hence, each multiplication has a cost of O(M(P + k)). We estimate the cost of each of those steps.
Computing the Bernoulli numbers
Note that the Bernoulli numbers B_{2k} are fractions whose numerators and denominators have size O(k log k). Hence, we do not need to work with O(P + k)-bit approximations of those numbers: it is more efficient (provided P ≥ k log k) to compute the numerators and denominators using operations on numbers of size O(k log k) (e.g. with integral part of maximal size 2k log k and fractional part of maximal size 2k log k), then compute a P-bit approximation of the fraction (if needed) with simply a division, of cost O(M(P + k)).
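For concreteness, here is a minimal Python sketch (our own illustration, not the implementation used in this work) computing the B_{2k} as exact fractions; it uses the classical recurrence sum_{j ≤ m} C(m+1, j) B_j = 0 rather than the scaled recurrence of [BZ10, Eq. 4.60] discussed below, so its cost profile differs:

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(n):
        # B_0, ..., B_n as exact fractions, via sum_{j <= m} C(m+1, j) B_j = 0.
        B = [Fraction(1)]
        for m in range(1, n + 1):
            s = sum(comb(m + 1, j) * B[j] for j in range(m))
            B.append(-s / (m + 1))
        return B

    # B_2 = 1/6, B_4 = -1/30, B_6 = 1/42; the odd-index ones (> 1) vanish.
    print(bernoulli_numbers(6)[2::2])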
The first way to compute the B_{2k} is to use the techniques described in [BZ10, Section 4.7.2]. The scaled Bernoulli numbers C_{2k} = B_{2k}/(2k)! can be evaluated with absolute precision P using a recurrence relation [BZ10, Eq. 4.60], for a cost of O(M(P + log k) k^2) bit operations; here, this gives a cost of O(M(k log k) k^2) bit operations. Recovering the B_{2k} then requires the evaluation of a factorial, which only takes O(k) multiplications of numbers of maximal size O(k log k) (using the well-known equivalent log k! ~ k log k); this step has negligible cost. The number of guard bits one must take to compensate this multiplication by a factorial is O(k log k).
Chapter 9
Computing isogenous curves in genus 1
Let E be an elliptic curve defined over a field K and given in short Weierstrass form, i.e.

E/K : y^2 = x^3 + ax + b.

Suppose we are also given a point Q of ℓ-torsion. We then look at Problem 1.1.16, that is to say, we want to find an ℓ-isogenous curve E' and an ℓ-isogeny φ : E → E' with kernel generated by Q.

As we explained in Chapter 1, any isogeny can be written as the composition of isogenies of prime degree; hence, we only consider here ℓ-isogenies with ℓ prime. Recall that we proved in Section 1.1.3 the shape our isogeny takes in the case of Weierstrass curves.
Note that this problem is already solved by the use of Vélu's formulas [Vél71]. In this chapter, we outline a different algorithm to solve this problem. The algorithm works on curves defined over C, or over a number field, or over F_p; it uses the evaluation of the Abel-Jacobi map for complex elliptic curves, which the previous chapter showed has quasi-linear complexity.
The asymptotic complexity of the algorithm is worse than that of Vélu's formulas, and is also slower in practice. However, this algorithm is still interesting because it is generalizable to curves of higher genera. More precisely, the fact that the Abel-Jacobi map could potentially be computed in quasi-linear running time in higher genus (cf. Chapter 8) could be used to generalize this algorithm into one which solves this problem in genus g, at least for jacobians of curves defined over C or K.
Computing isogenous curves over C
We start by giving an algorithm which solves the problem for elliptic curves defined over C which is different from the one stemming from Vélu's formulas [Vél71], and has worse overall complexity. We assume we are given a point Q ∈ E[ℓ], which generates the kernel of the isogeny. We want to compute the equation of the isogenous curve, as well as the expression of the isogeny as a rational function.
Section 4.3. As outlined in Section 8.1.3, the second algorithm is about twice as fast as the first one; both require working at precision O(P) to compensate rounding errors, which are especially large when z is close to the corners of the parallelogram. However, note that we are free to pick the 2ℓ points arbitrarily, discarding problematic points if need be. This would limit precision losses, and we could then potentially reduce the working precision in the algorithm for ℘, thus gaining a constant factor; on the other hand, throwing away each "bad" z means wasting an elliptic logarithm computation, which introduces a potential slowdown.
We implemented a version of the Landen-based algorithm in Magma which reuses the computations of the θ_{0,1,2}(0, 2^k τ)^2, which amounts to a batched version of the algorithm (as described in Note 4.3.1); the theta-constants are computed by our MPC implementation of Algorithm 11. A further speedup could be obtained by writing the full algorithm in MPC. We did not implement the batched versions of the algorithms for θ(z, τ) which are described in Section 6.5; however, we believe that the first algorithm with batched computations of θ would still be slower than the batched Landen-based algorithm, at least for the precisions we are considering here.
Determining the image in the other parallelogram
The method to reduce z modulo Λ' is simple. Even though ℘(z, Λ') = ℘(z mod Λ', Λ'), we still need to reduce z since our algorithm for ℘ depends on computing θ(z, τ), and the bounds and the running time we showed in Chapter 6 require that z be reduced. The reduction we have in mind is not the one into the fundamental parallelogram, but rather the reduction into the parallelogram centered at 0, i.e. with coordinates in [-1/2, 1/2) with respect to the periods.
The reduction is performed by determining the coordinates of z ∈ C with respect to the periods (ω'_1, ω'_2) of the isogenous lattice. The technique is the same as the one described in Note 2.3.1: write z = x ω'_1 + y ω'_2 with x, y ∈ R, which we can compute easily by inverting a 2 × 2 real matrix. We then subtract round(x) ω'_1 + round(y) ω'_2 from z to get the result.
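A minimal Python sketch of this reduction (the function and variable names are ours):

    def reduce_mod_lattice(z, w1, w2):
        # Write z = x*w1 + y*w2 with x, y real, by inverting the 2x2 real
        # matrix whose columns are (Re w, Im w) for w = w1, w2.
        det = w1.real * w2.imag - w2.real * w1.imag  # nonzero for a lattice
        x = (w2.imag * z.real - w2.real * z.imag) / det
        y = (-w1.imag * z.real + w1.real * z.imag) / det
        # Subtract the nearest lattice point; the coordinates of the
        # result lie in [-1/2, 1/2].
        return z - round(x) * w1 - round(y) * w2

    print(reduce_mod_lattice(3.7 + 2.2j, 1 + 0j, 1j))  # -> (-0.3+0.2j)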
Interpolation
We use Cauchy interpolation [VZGG13, p. 119] to recover the rational fraction; that is to say, given n equations f(u_i) = v_i and a parameter k, solve the problem of finding polynomials r, t with deg r < k and deg t ≤ n - k such that r(u_i) = t(u_i) v_i for all i, so that f = r/t.
Here, given the shape of the isogeny (given in Note 1.1.15), we have k = ℓ + 1 and n = ℓ + 1 + 2 (ℓ - 1)/2 = 2ℓ; hence, the images of 2ℓ points are needed to interpolate the correct rational fraction. Note that using the general equation giving the shape of an isogeny (Note 1.1.15) instead would require interpolating the y-coordinate of the isogeny as well, which requires 3ℓ points instead, and a second Cauchy interpolation.
The outline of this algorithm is as follows. We first find a polynomial g of the right degree that interpolates those values; then we compute an extended GCD to solve the problem r ≡ t g (mod m) where m = (x - u_0) ... (x - u_{n-1}). To be more precise, we run an extended GCD algorithm on m and g, and we interrupt the algorithm when the GCD we get is of degree < k. The coefficient multiplying g is t, and the GCD is r; we have a solution if gcd(r, t) = 1. We refer to [VZGG13, p. 119] for details.
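A small illustration of this outline (our own sketch, using sympy; the early-terminated extended GCD is written out naively rather than with a fast half-GCD):

    from sympy import Poly, Rational, div, interpolate, prod, symbols

    X = symbols('X')

    def cauchy_interpolation(points, k):
        # points: (u_i, v_i) pairs; returns (r, t) with deg r < k and
        # r == t*g (mod m), so that f = r/t interpolates the points.
        us = [u for u, _ in points]
        m = Poly(prod(X - u for u in us), X)
        g = Poly(interpolate(list(points), X), X)   # g(u_i) = v_i
        r0, r1 = m, g
        t0, t1 = Poly(0, X), Poly(1, X)
        while r1.degree() >= k:                     # interrupted ext. GCD
            q, rem = div(r0, r1)
            r0, r1 = r1, rem
            t0, t1 = t1, t0 - q * t1
        return r1, t1

    # Recover f = (X**2 + 1)/(X - 2), with k = 3, from n = 4 sample points.
    pts = [(u, Rational(u * u + 1, u - 2)) for u in (0, 1, 3, 4)]
    r, t = cauchy_interpolation(pts, k=3)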
For each coefficient of the isogeny we obtain n complex values v_1, ..., v_n, which correspond to the images of that coefficient under each of the n embeddings σ_i from K to C; that is to say, for each coefficient v we have v_i = σ_i(v). Hence, for each coefficient, we have what amounts to n values of a polynomial with rational coefficients of degree n - 1: we can thus use interpolation to recover complex approximations of the coefficients, then recognize the rational coefficients using techniques described in the next section (Section 9.2.4). Note that we need to recover the O(ℓ) coefficients of the isogeny by interpolating at the α_i (the roots of the polynomial defining the number field). We can use a fast algorithm to perform these interpolations; we refer to [VZGG13, chap. 10] for all the details, but we sketch the algorithm here to show how the O(ℓ) interpolations can be sped up.
We start with the problem of multi-evaluation at n = 2^k values, which is the inverse of the problem at hand, but is needed in the fast interpolation algorithm. Let f be a polynomial of degree < 2^k, and u_0, ..., u_{2^k - 1} complex numbers. We can compute (f(u_0), ..., f(u_{2^k - 1})) faster than applying Horner's scheme 2^k times, using the remark that f(u_i) = f mod (X - u_i).
The Euclidean remainders are computed using remainder trees. Define
M_{i,j} = prod_{l = i 2^j}^{(i+1) 2^j - 1} (X - u_l),
which corresponds to the subproducts one gets when splitting the interval [0, n-1] into slices of length 2^j. The idea is then to compute f mod M_{i,j} for all i, j, starting with j = k and going down (towards the leaves). Computing the remainder tree is not hard: start with the leaves, i.e. M_{i,0} = (X - u_i), then build the nodes by multiplying together the values found in the two children of each node: M_{i,j} = M_{2i, j-1} M_{2i+1, j-1}.
Each step of the formula consists of 2^{k-j} multiplications of polynomials of degree less than 2^j, so it costs less than O(M(n)); there are log n levels in the tree, so the total cost is O(M(n) log n). Once we have this remainder tree, we want to compute f mod each leaf, starting with f mod M_{0,k}. The computation of f mod g with deg f = 2n, deg g = n can be done in 5M(n) operations [VZGG13, Theorem 9.6]. Hence, going down one level in the tree amounts to 2^i Euclidean divisions of polynomials of degree 2^{k-i} by polynomials of degree 2^{k-i-1}, so the complexity at each level is bounded by 2^i · 5M(2^{k-i-1}) = O(M(n)), and the total cost is again O(M(n) log n).
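A compact sketch of the remainder tree and the resulting multi-evaluation (again a sympy illustration of ours, assuming n is a power of two):

    from sympy import Poly, div, symbols

    X = symbols('X')

    def product_tree(us):
        # tree[0] holds the leaves M_{i,0} = X - u_i; tree[j][i] = M_{i,j}.
        tree = [[Poly(X - u, X) for u in us]]
        while len(tree[-1]) > 1:
            prev = tree[-1]
            tree.append([prev[2 * i] * prev[2 * i + 1]
                         for i in range(len(prev) // 2)])
        return tree

    def multi_eval(f, us):
        # Go down the tree, computing f mod M_{i,j}; the remainders at
        # the leaves are the constants f(u_i).
        tree = product_tree(us)
        rems = [div(Poly(f, X), tree[-1][0])[1]]
        for level in reversed(tree[:-1]):
            rems = [div(rems[i // 2], level[i])[1] for i in range(len(level))]
        return [r.eval(u) for r, u in zip(rems, us)]

    print(multi_eval(X**3 + 1, [1, 2, 3, 4]))  # [2, 9, 28, 65]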
The algorithm for fast interpolation also uses the remainder tree. The idea is as follows: the Lagrange interpolation of the relations f(u_i) = v_i is
f = sum_{i=1}^{n} v_i prod_{j ≠ i} (X - u_j)/(u_i - u_j),
with m = prod_{k=1}^{n} (X - u_k). But we also have m'(u_i) = prod_{j ≠ i} (u_i - u_j).
Degrees of ψ_ℓ

Proposition 9.3.4. Assign a weight of 1 to x, a weight of 2 to A and a weight of 3 to B. Then all the monomials appearing in the generic division polynomial ψ_ℓ(x, A, B) have the same weight, equal to χ(ℓ) = (ℓ^2 - 1)/2.
Proof. This proposition is stated in [McK94], but is not proven there. The property can be readily verified on ψ_i for i < 4. We use the recurrence relations for the general case:
1. Case 2m + 1 with m odd:
Using the induction hypothesis, the monomials of ψ_{m+2} have weight ((m+2)^2 - 1)/2 = (m^2 + 4m + 3)/2 and those of ψ_m have weight (m^2 - 1)/2: hence the monomials of ψ_{m+2} ψ_m^3 have weight (m^2 + 4m + 3)/2 + 3(m^2 - 1)/2 = (4m^2 + 4m)/2 = χ(2m + 1).
Then, note that the weight of each monomial of (x^3 + Ax + B)^2 is 6. The second product is then made of the product of 3 monomials of weight (m^2 + 2m + 1 - 4)/2 by one monomial of weight (m^2 - 2m + 1 - 4)/2 and one of weight 6, which gives monomials of weight (4m^2 + 4m - 12 + 12)/2 = χ(2m + 1).
2. Case 2m + 1 with m even: the calculations are then
((m+2)^2 - 4)/2 + 3(m^2 - 4)/2 + 6 = (4m^2 + 4m)/2 and ((m-1)^2 - 1)/2 + 3((m+1)^2 - 1)/2 = (4m^2 + 4m)/2,
which give in both cases χ(2m + 1).
3. Case 2m: the calculations are similar, and in both cases we get χ(2m).
A univariate polynomial
The condition "the lift Q_K of Q is an ℓ-torsion point on E_K" can be rephrased as "the generic ℓ-division polynomial is 0 when evaluated at (x_K, α, b_K)". We use this condition to find the minimal polynomial of α, which we then use to define the number field K explicitly. This means we only need to look at the univariate polynomials
Φ_n(A) = ψ_n(x_K, A, y_K^2 - x_K^3 - A x_K).
The recurrence relations defining the ψ_n can be instantiated for the Φ_n:
Finally, using Equation (9.3.1), we can also write our polynomial as:
The remainder of this section will be dedicated to proving some properties of this polynomial.
Degree and coefficient of highest degree
Theorem 9.3.7. We have deg Φ_ℓ = χ(ℓ)/2, and the coefficient c of the monomial A^{deg Φ_ℓ} is:
Proof. The intuition is that replacing x by the value of x_K and B by y_K^2 - x_K^3 - A x_K decreases the weight of the monomials in which x and B appear, which means that a_{χ(ℓ)/2, 0} A^{χ(ℓ)/2} is the only monomial with weight χ(ℓ)/2 remaining after the substitution. We give a more formal proof by induction, which is needed anyway to compute the coefficient of highest degree of Φ_ℓ. The cases are as follows:
1. For Φ_{2m+1}, with m odd: the weight computation of Proposition 9.3.4 applies; the remaining cases are handled in the same way, which completes the induction.
Remark 9.3.8. The algorithm for isogeny computation over F_p that we are discussing only considers isogenies whose degree ℓ is an odd prime. Hence, in that case, the number field we define is of degree χ(ℓ)/2, and is defined by a monic polynomial.
Irreducibility of the polynomial defining the number field
We wish to use the polynomial Φ_ℓ to define a number field K = Q[X]/(Φ_ℓ); however, for K to be a number field, we need Φ_ℓ to be irreducible.
We conducted a small empirical study on several dozen cases, each time choosing an odd prime ℓ with 3 ≤ ℓ ≤ 29, an odd prime p with 11 ≤ p ≤ 1789, and a curve E/F_p at random until we found a curve with ℓ-torsion points. For each of these examples, we computed Φ_ℓ and verified that it was an irreducible polynomial. However, a proof of this fact has so far eluded us.
We can bypass this difficulty by patching the procedure should Φ_ℓ happen to be reducible. Recall from the discussion in Section 9.3.1 that Φ_ℓ(a_{F_p}) ≡ 0 (mod p). Let A_ℓ be an irreducible factor of Φ_ℓ over Z such that A_ℓ(a_{F_p}) ≡ 0 (mod p), and define K = Q[X]/(A_ℓ). We then follow the same procedure as in Section 9.3.1, and we still have that ψ_ℓ(x_K, α, y_K^2 - x_K^3 - α x_K) = 0; furthermore, going from K to F_p can still be done, by replacing α by a_{F_p}. Hence, the global torsion lifting procedure can still be carried out with an irreducible factor of Φ_ℓ, which means that K is indeed a number field.
The estimates for the size of the coefficients and of the complex roots of Φ_ℓ that we prove in the rest of this section are still valid for an irreducible factor of Φ_ℓ. In fact, these sizes are even lower, as is the degree, which speeds up the rest of the procedure. However, we also have to add the cost of determining the right irreducible factor of Φ_ℓ to the total cost of the procedure; this amounts to factoring Φ_ℓ over Z[X]. According to [VZGG13, Thm. 16.23], this costs O(ℓ^20 + ℓ^16 log^2 A) where A is a bound on the absolute value of the coefficients of Φ_ℓ. We prove in a subsequent section that log A = O(ℓ^2 log p), which gives a total complexity of O(ℓ^20 log^2 p). Although polynomial-time, this complexity is not good at all, and dominates the cost (determined in Section 9.3.5) of all the other steps.
In all that follows, we will assume that Φ_ℓ is irreducible over Z[X]; this is true generically, as the probability of a random polynomial being irreducible is 1, but should this not be the case, one should replace Φ_ℓ with an irreducible factor of Φ_ℓ of which a_{F_p} is a root modulo p.
Coefficient size
We discuss the size of the coefficients of the polynomial Φ_ℓ. We can make explicit the terms that appear after replacing B by (d - x_K A), where d = y_K^2 - x_K^3:
Absolute loss of precision in elementary fixed-point operations
This appendix outlines bounds on the absolute error that can arise when computing elementary operations in fixed point arithmetic with numbers in precision P . It is heavily inspired by the approach and the methods of [ETZ]; our results are adapted to the case of fixed-point arithmetic and, in some cases, give smaller error bounds than what can be obtained from [ETZ].
We write z_k = x_k + i y_k, and z̃_k = x̃_k + i ỹ_k its approximation with fixed precision P ≥ 1. We suppose that
We recall Theorem 0.3.3:
Theorem A.0.1. For j = 1, 2, let z_j = x_j + i y_j ∈ C and z̃_j = x̃_j + i ỹ_j its approximation. Suppose that:

• |z_j - z̃_j| ≤ k_j 2^{-P};
• k_j ≤ 2^{P/2} (which means that at least the majority of the bits are correct);

and that the same bounds apply to the imaginary parts as well. Then analogous bounds hold for the real and imaginary parts of the sum, the product, the square and the quotient of z_1 and z_2.
We prove this theorem over the next few subsections.
Addition
If z = z_1 + z_2 we have
|z - z̃| ≤ |z_1 - z̃_1| + |z_2 - z̃_2| ≤ (k_1 + k_2) 2^{-P}.
Predictably, "the errors add up".
Multiplication
If z = z_1 z_2 we have x = x_1 x_2 - y_1 y_2 and y = x_1 y_2 + x_2 y_1. We write
|x_1 x_2 - x̃_1 x̃_2| ≤ |x_1| |x_2 - x̃_2| + |x̃_2| |x_1 - x̃_1|,
and similarly for the other products. Bounding the norms of real and imaginary parts by the norm of the number itself, we get a bound of the form (k_1 |z_2| + k_2 |z_1| + k_1 k_2 2^{-P}) 2^{-P}.
Squaring
The above formulas can be simplified in the case z = z_1^2:
Complex division
We write z_1/z_2 = (z_1 z̄_2)/|z_2|^2 and then chain the previous results. To simplify, we bound |x_i|, |y_i| by the norm of the number itself right away. Hence:
and we use the previous part:
Abstract
The Abel-Jacobi map links the short Weierstrass form of a complex elliptic curve to the complex torus associated to it. One can compute it with a number of operations which is quasi-linear in the target precision, i.e. in time O(M(P) log P). Its inverse is given by Weierstrass's ℘ function, which can be written as a function of θ, an important function in number theory. The natural algorithm for evaluating θ requires O(M(P) sqrt(P)) operations, but some values (the theta-constants) can be computed in O(M(P) log P) operations by exploiting the links with the arithmetico-geometric mean (AGM).
In this manuscript, we generalize this algorithm in order to compute θ in O(M(P) log P) operations. We give a function F which has properties similar to those of the AGM. As with the algorithm for theta-constants, we can then use Newton's method to compute the value of θ. We implemented this algorithm, which is faster than the naive method for precisions larger than 260,000 decimal digits.
We then study the generalization of this algorithm in higher genus, and in particular how to generalize the function F. In genus 2, we managed to prove that the same method leads to an O(M(P) log P) algorithm for θ; the same complexity applies to the Abel-Jacobi map. This algorithm becomes faster than the naive method at smaller precisions than in genus 1, around 3,000 decimal digits. We also outline a way one could reach the same complexity in any genus.
Finally, we study a new algorithm which computes an isogeny of elliptic curves with given kernel. This algorithm uses the Abel-Jacobi map because it is easy to evaluate the isogeny on the complex torus; this algorithm may be generalizable to higher genera. | 503,825 | [
"777042"
] | [
"450090",
"106219"
] |
01492815 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01492815/file/978-3-642-40779-6_11_Chapter.pdf | Martin Steinebach
Peter Klöckner
Nils Reimers
Dominik Wienand
Patrick Wolf
Robust Hash Algorithms for Text
Keywords: Robust Hashing, Text Watermarking, Evaluation
Introduction
While multimedia content and machine-to-machine data have seen a strong increase in recent years, written natural text still plays an important role in information distribution, the storage and distribution of knowledge, as well as entertainment. Books, scientific papers, patents, news articles: it is easy to list many examples important in everyday life and work. Still, concepts for reliable authentication specifically designed for natural language text are rare. Most often they are based on cryptographic hash functions, which are secure and reliable, but often require a precision of reproduction that brings challenges to efficient data handling: while in contracts every single word may be of importance, in many other documents it is rather the meaning and the flow of ideas that counts. This is especially true if digital watermarking for natural language is applied, as watermarking will change the wording but not the meaning of texts, e.g. by active/passive or enumeration modulation.
In this work we discuss and compare alternatives for natural language hashing. These hashes shall feature robustness comparable to robust image or audio hashes: as long as a human observer perceives copies of a work as the same, the hash should also be identical or at least similar. Our goal is to provide a system allowing the following workflow:

- Create a robust hash H of a text T
- Create n individually watermarked copies T_M of T
- Use H to identify all n copies T_M

If no hash method robust against the embedding of a watermark is available, a cryptographic hash needs to be computed and stored for each T_M if we want to prove T_M to be a copy of T. At the same time, the only alternative to a hash is a comparison to the original copy of T. While this is acceptable with respect to computation speed and resilience to errors, the big drawback here is the need to distribute the original text. If the application is to scan the Internet for a secret document, the document often will, or at least should, not be available to the searching agent.
Motivation
As portable eBook readers become more and more common, the sale of eBooks grows. EBook revenues for 2012 were at $1.3 billion, up 46% from 2011 [Hoffelder], and forecasts for 2016 range from $5 billion [Wolf] to $10 billion [Wauters]. Copyright holders, i.e. publishers, will face the same challenges as the music or film industry with pirated versions of their intellectual property. The illegal distribution of eBooks is comparably simple, due to their small file size, which is usually about 1 megabyte. It takes only a few minutes to find free versions of all books from the Spiegel Bestsellerliste. On the illegal channels one can also find scanned versions of printed books.
Watermarking on the content level can help to determine the leak if an eBook is found on an illegal channel. Watermarking works by modifying the content in a non-noticeable way to include a unique ID. A publisher would then be able to check these channels to see whether his eBooks have leaked, and could then identify the source of the leaked copy.
This of course requires a method to verify that a given eBook found in one of the illegal channels belongs to the publisher. Taking the content of a found eBook and comparing it on the text level with a list of all owned books would be quite inefficient. Also, the publisher may want to outsource the checking of these illegal channels to a third party but may not be willing to hand out the content of his books.
Using a hash algorithm like SHA-1 would fail as soon as there is a minimal modification of the content. This modification could be intentional, in order not to be detected, but more likely it is unintentional: due to format conversion, e.g. from ePub to PDF, due to an OCR error, or because the eBook was split into parts. As mentioned before, watermarking also introduces changes into the content of the book, hence each version would have a unique SHA-1 fingerprint.
A robust hash algorithm for text documents is therefore required. It should produce the same hash value for nearly identical contents. Obviously, OCR errors, small modifications and watermarks should result in the same hash value. Still, defining robustness requirements can be challenging: should a substring, e.g. the first 10 or first 100 pages of a book, produce the same hash value?
Related Work
The goal of creating a text authentication method robust against slight modifications is not new. As an example, plagiarism recognition faces this challenge on a regular basis.
Cryptographic Hashing
Hash functions allow securely computing a short digest of a long message. Mathematically speaking, a hash function is a function that maps a variable length input message to an output message digest of fixed length. Cryptographic hash functions (such as SHA-1 or RIPEMD) are by design extremely sensitive to changes in the input data: even changing one bit in the data results in a totally different hash value.
Piecewise Hashing
Piecewise hashing, also called fuzzy hashing, combines cryptographic hashing and data segmentation. One of the best-known examples is ssdeep, which implements an algorithm called context triggered piecewise hashing (CTPH) presented by Kornblum in 2006 [Kornblum]. It divides a byte sequence into chunks and hashes each chunk separately using the FNV algorithm. To represent the fingerprint of a chunk, CTPH encodes the least significant 6 bits of the FNV hash as a Base64 character. All these characters are concatenated to create the fingerprint.
Robust Hashing
Perceptual hashes usually extract features from multimedia data which are relevant to human perception and base the digest on them; thus, by design, the perceptual hashes of two similar objects will be similar. Perceptual hashes are sometimes also called digital fingerprints, as they allow identifying content through extracted features.
Text Hashing
In the following section we describe different hash functions for natural language text. There are many applications that use cryptographic hash functions and piecewise hashing for text hashing. These approaches have one common weakness: if the document is changed by natural language watermarking, these hash functions will fail. For piecewise hashing, this depends on the relation between the chunk size and the distance between changes caused by watermark embedding.
Approaches
We implemented and evaluated three algorithms: WordToBit, Broder and SimHash. The first one is based on the simple idea of hashing each word in the input text to a single bit, while the latter two are borrowed from near duplicate detection methods used in web crawling.
Word to Bit
This algorithm creates a digest by hashing each word in a given text to a single bit. To do this, we first split the text into a list of word tokens. Then we convert each word to either 0 or 1. This conversion should map the space of words uniformly at random to the space {0, 1}. We use the least significant bit of the Java built-in hashCode function of the word; other, more efficient text hash algorithms could be used. The digest is the concatenation of all those bits and its length is thus equal to the number of words in the text.
For comparison, a distance measure is required. The Hamming distance is not usable, as a single deleted word causes the rest of the digest to be shifted. The Levenshtein distance is suitable but comparably slow; if the hashes are close, computing it only on the small parts where the two hash values differ speeds it up.
We decided to use a sampling approach instead to measure the distance of two WordToBit hash values. One hash is designated the main hash, and we then try to find sub-samples of the other hash in that main hash. The motivation behind this was to be able to detect parts of a text (e.g. a chapter) in digests. The hash is split into sub-samples of e.g. 128 bits (equal to 128 words). We then compute the Hamming distance at each position of the main hash for all sub-samples. If the distance is below a given threshold (e.g. 1/4 of the sub-sample size), we assume the sub-sample to match at that position. If we can find a certain number of matches (one is usually enough), we consider the whole text to match.
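To fix ideas, a minimal Python sketch of the digest and of the sampling-based comparison (our own illustration; java_hash mimics Java's String.hashCode, and the quadratic scan is kept deliberately simple):

    def java_hash(word):
        # Java's String.hashCode: h = 31*h + c over the code units,
        # wrapped to 32 bits (the sign does not matter for the last bit).
        h = 0
        for c in word:
            h = (31 * h + ord(c)) & 0xFFFFFFFF
        return h

    def word_to_bit(text):
        # One bit per word: the least significant bit of the word's hash.
        return [java_hash(w) & 1 for w in text.split()]

    def text_matches(main, other, sub=128, threshold=128 // 4):
        # Slide every `sub`-bit sub-sample of `other` over `main`; one
        # position with Hamming distance below the threshold is enough.
        for s in range(0, len(other) - sub + 1, sub):
            piece = other[s:s + sub]
            for p in range(len(main) - sub + 1):
                if sum(a != b for a, b in zip(piece, main[p:p + sub])) < threshold:
                    return True
        return False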
Broder's algorithm
Broder's algorithm, as described in [Broder], uses shingling to introduce a similarity measure for message digests. As mentioned before, Broder's as well as Charikar's SimHash algorithm have been proven to be efficient for finding near-duplicates in web crawling [Manku et al.].
The algorithm uses m different Rabin fingerprint functions f_i, 0 ≤ i < m. The procedure starts with f_0: each subsequence of k tokens is fingerprinted with f_0, which leads to n - k + 1 64-bit values, called shingles. The smallest of these values is the first minvalue and the first value of the hash. The algorithm proceeds by doing the same for f_1, f_2, ..., f_{m-1}. Thus the algorithm results in a hash consisting of m minvalues, which leads to a hash size of 64 · m bits. We use k = 8, m = 84 in most test cases; evaluation later showed that m can be reduced to 25, which further decreases the computation time. To estimate the similarity of two texts, we determine the number of equal minvalues in their hashes and call this the B-Similarity. We consider two books to be a match if the B-Similarity is at least two.
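A sketch of this procedure (our own Python illustration; we substitute salted, truncated SHA-1 for the Rabin fingerprints, which only need to behave pseudo-randomly here):

    import hashlib

    def broder_hash(text, k=8, m=84):
        # Assumes the text has at least k tokens.
        tokens = text.split()
        shingles = [' '.join(tokens[i:i + k])
                    for i in range(len(tokens) - k + 1)]
        minvalues = []
        for i in range(m):
            salt = str(i).encode() + b'|'
            minvalues.append(min(
                int.from_bytes(hashlib.sha1(salt + s.encode()).digest()[:8], 'big')
                for s in shingles))
        return minvalues

    def b_similarity(h1, h2):
        # Number of equal minvalues; at least 2 counts as a match.
        return sum(a == b for a, b in zip(h1, h2))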
SimHash
Charikar's random projection based approach [Charikar] is used for finding near-duplicate web pages [Manku et al.]. We adopted the algorithm proposed by Charikar and introduced slight changes to yield a higher robustness. Charikar's algorithm is a fingerprinting technique with the property that the fingerprints of near-duplicates differ in a small number of bit positions. In the following we call the proposed algorithm SimHash, following the naming convention from [Manku et al.].
SimHash works on tokens of length n. Tokens can be either single characters or words; we implemented both versions, for n-grams and for word sequences of length n. We decided to use the n-gram version for its ease of use, and it also seemed to have better robustness properties. Each n-gram is then randomly projected to the space {0, 1}^k. We use n = 12 and k = 512, and the random projection is computed by a reduced-round SHA-512 implementation. As the random projection only needs proper randomness properties, computing all rounds of SHA-512 is not necessary. Other, more efficient pseudo-random projections could also be used.
For each n-gram we compute such a weakened SHA-512 hash value. At the same time we initialize a counter with 512 counter values. If the first bit of a SHA-512 hash equals one, we increment the first counter by one; otherwise we decrement it. This is done for each bit of the hash value. Figure 1 depicts this. After hashing all n-grams and incrementing/decrementing the counter values, we convert the counter to a final hash value. In the original paper, each entry of the counter is converted to one if it is bigger than zero, and to zero otherwise. Given a random projection, the expected value of each counter entry equals zero: a single changed n-gram may change the sign of a counter value, resulting in a flip in the final hash value.
To overcome this problem, we introduce a compression step. We select from the 512 counter entries the 128 most robust ones, i.e. the entries with the largest absolute value. Changing one n-gram will not change the sign of these counter entries. Figure 2 illustrates this final compression step.
For a given text document our SimHash algorithm returns a 512-bit counter map and an either 128- or 512-bit SimHash value. The counter map contains a one if the counter entry belongs to the most robust entries, i.e. to one of the 128 with the largest absolute value. The SimHash value has either 128 bits, if we only convert the counters with the largest absolute values, or 512 bits if we simply convert all counter values.
There are several options for computing the similarity between two SimHash values. One option would be to simply compute the Hamming distance between the two compressed 128-bit SimHash values, ignoring the counter map. This is simple to do, and its main benefit would be easier indexing of the hash values.
For our evaluation we used a more complex comparison routine. One hash is declared the main hash. Its counter map is then used to extract the 128-bit SimHash value from both inputs; then the Hamming distance is computed. This introduces an asymmetric distance measure, which could, however, easily be made symmetric.
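The following sketch summarizes the construction and the asymmetric comparison (our own Python illustration; a full-round SHA-512 stands in for the reduced-round version, since only its pseudo-randomness is used):

    import hashlib

    def simhash(text, n=12, k=512, keep=128):
        counters = [0] * k
        for i in range(len(text) - n + 1):
            digest = hashlib.sha512(text[i:i + n].encode()).digest()
            bits = int.from_bytes(digest, 'big')
            for j in range(k):
                counters[j] += 1 if (bits >> j) & 1 else -1
        # Compression: mark the `keep` entries with the largest absolute
        # value as robust; the signs of all k counters form the hash.
        robust = set(sorted(range(k), key=lambda j: abs(counters[j]))[-keep:])
        counter_map = [int(j in robust) for j in range(k)]
        signs = [int(c > 0) for c in counters]
        return counter_map, signs

    def distance(main, other):
        # Asymmetric: the main hash's counter map selects the 128 sign
        # bits compared between the two hashes.
        cmap, main_signs = main
        _, other_signs = other
        return sum(m != o for m, o, keep_bit
                   in zip(main_signs, other_signs, cmap) if keep_bit)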
Evaluation
The evaluation of the algorithms was performed in two steps, which we call white box and black box tests, respectively. White box tests examine the properties of the presented algorithms, i.e. we not only wanted to show the robustness of the algorithms but also to quantify it in some way. Black box tests abstract from the underlying algorithm: here, for a given test scenario, we were just interested in the false positive and false negative rates.
White Box Tests
To be suitable hash algorithms, the algorithms should produce distinct hash values for distinct inputs. To test this, we extracted 1000 randomly selected articles from the German Wikipedia with sizes between 9,700 and 335,000 bytes. The wiki markup was removed by WikiPrep [Gabrilovich] in order to obtain the pure text content of the articles.
The distinction property of WordToBit is straightforward and was not further analyzed by us. As one can see in Figure 3, the mean Hamming distance of SimHash is 64, which is also the expected value for a random projection to a 128-bit space. For Broder's algorithm, 998,780 out of the 999,000 possible article combinations produce a B-Similarity of zero. One pair produced a B-Similarity of 23, which is fairly high; this is due to the fact that one can find the same information or even the same paragraphs in different Wikipedia articles, e.g. if a main article is split into several sub-articles. In conclusion, all algorithms provide sufficient distinction properties.
Revisions of Wikipedia articles offer a large corpus of manual modifications to text: often a new revision changes only a typo, replaces a word or adds some information. We extracted 9400 revisions of various articles of the English Wikipedia to test our algorithms. We compared each revision with the previous revision of the article and computed the Levenshtein distance in order to get a measure of the performed modification. The results for SimHash and Broder can be seen in Figure 5 and Figure 6. In conclusion, both algorithms are robust to (small) human-made modifications of text. As mentioned in the introduction, in some cases a book is scanned before it is distributed, either because only the printed version is available, or to remove watermarks or circumvent DRM. We simulated typical OCR errors: I's recognized as l's, l's recognized as I's, s's recognized as f's and rn's recognized as m's. We increased the error rates from 4, 8, 25 and 10 percent in ten steps to 40, 80, 25 and 100 percent. Each bar in the following figures represents the average distance over 10 books. The distance of the SimHashes grew from zero to 10, while the B-Similarity of Broder's algorithm fell reciprocally from 84 to 10.
To evaluate the performance with respect to natural language watermarking, we used eight eBooks from the current top-selling charts as covers and created marked copies of them. The watermarked versions differ from the original cover in the order of some enumerations, for example "he was smart and cute" compared to "he was cute and smart". This is one of the few accepted concepts for natural language watermarking in the German language.
It turned out that using Broder with m = 84 leads to 3.33% of the eBooks having a B-Similarity of 82 compared to their watermarked versions, 35% a B-Similarity of 83 and 61.66% a B-Similarity of 84. With SimHash, each pair of hashes of an eBook and its watermarked version even had a distance of zero. This shows that the given watermarking algorithm has no impact on our hash methods' ability to detect matching eBooks.
Black Box Tests
To evaluate and compare the performance of all three algorithms, we implemented an automated runner for "test tracks" taken from our corpus of texts. The test tracks are sets of matching and non-matching eBook pairs assembled to represent typical use case scenarios. The first test track, "Publisher", simulates the case of a publisher scanning a collection of possible copies for a single work of his. The second test track, "FBI", simulates searching a list of suspicious files for a list of known "bad" works. There is also one internal test track that contains a mixture of comparisons from the white box tests. We applied each algorithm to the given test track and logged the runtime and the number of false positives/negatives. We tuned all three algorithms to produce zero false positives/negatives on the two main tracks. However, as our corpus of real eBooks is not nearly large enough to yield significant results regarding false positives/negatives, these tests were mainly used to assess the runtimes of the algorithms. There is a 32-bit implementation of WordToBit (using ints) and a 64-bit implementation (using longs); benchmarks were done on a 64-bit machine, and on a 32-bit machine the results for WordToBit and WordToBit64 would roughly be reversed. SimHashV1 runs on word tokens, SimHashV2 on n-grams. The Publisher test track has a 1:1 ratio of hash generations to comparisons with many real books. The FBI test track contains about 1000 works, and comparisons are done on the cross product of those works. WordToBit performs extraordinarily badly here because its expensive operation is the comparison, as opposed to the other algorithms where hashing is the expensive step. The internal test track again features a 1:1 hash generation to comparison ratio but with a lower number of real books, which makes WordToBit perform better than on the Publisher test track.
We used black box tests to tune the parameters of WordToBit and Broder. Test runs were done on an older version of the internal test track. We ordered the results by the sum of false positives and false negatives and then by runtime. Table 1 shows the top 10 results for the parameters of WordToBit. The best combined false rates can be achieved with sub-sample size 3 and the distance thresholds 15 and 18. For Broder, in the top 10 the false sum was always 0, therefore runtime was the only discriminator; the best runtime of 40971 ms was achieved with m = 25, k = 4.
Conclusion
As a result of our evaluation, we come to the conclusion that all the algorithms are suited to specific tasks or can be applied to satisfy certain requirements. To find chapters or, in general, parts of a full text, we recommend the use of WordToBit. To achieve low latency one should use SimHashV2 or Broder, as both are faster than WordToBit. To achieve high precision it is advisable to use Broder, as here zero false negatives could be achieved. Still, all algorithms show that it is advisable to utilize robust text hashing in error- or noise-prone environments, as their robustness is an important advantage compared to common hash methods. We show that the combined use of hashing and watermarking is applicable; together, these tools provide a promising way to individually mark written language and identify it later on without the need for huge databases or the leakage of the cover text. One aspect that is common to hash protocols based on similarity search is that the search and comparison part of the hash detection must not be neglected: depending on the application, this can become more time-demanding than the actual hash calculation.
Fig. 1. SimHash mode of operation
Fig. 3. Hamming distances of SimHash for 1000 randomly selected Wikipedia articles
Fig. 4. Hamming distances of SimHash for 1000 randomly selected Wikipedia articles
Fig. 7. Broder, OCR simulation
Fig. 9. Publisher test track runtimes
Table 1. WordToBit parameter ranking

Rank  SubSampleSize  DistanceThreshold  Runtime (ms)  FalsePositives  FalseNegatives  FalseSum
1     3              18                 174904        0               3               3
2     3              15                 177301        0               3               3
3     2              6                  194477        0               4               4
4     2              8                  196483        2               2               4
5     4              28                 169801        0               5               5
6     3              12                 174502        0               5               5
7     5              35                 147376        0               6               6
8     4              24                 168241        0               6               6
9     2              4                  195907        0               6               6
10    5              30                 147188        0               7               7
Acknowledgement This work has been supported by the Federal Ministry of Education and Research via the project SIDIM (01IS10054A) in the funding initiative "KMU-innovativ" for the innovative SMEs target group. | 21,788 | [
"755812"
] | [
"466808",
"161409",
"489428",
"161409",
"161409",
"161409",
"489428"
] |
01492819 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01492819/file/978-3-642-40779-6_15_Chapter.pdf | Gábor György Gulyás
email: [email protected]
Sándor Imre
email: [email protected]
Hiding Information in Social Networks from De-anonymization Attacks by Using Identity Separation
Keywords: social networks, privacy, de-anonymization, identity separation
Social networks allow their users to mark their profile attributes, relationships as private in order to guarantee privacy, although private information get occasionally published within sanitized datasets offered to third parties, such as business partners. Today, powerful de-anonymization attacks exist that enable the finding of corresponding nodes within these datasets and public network data (e.g., crawls of other networks) solely by considering structural information. In this paper, we propose an identity management technique, namely identity separation, as a tool for hiding information from attacks aiming to achieve large-scale re-identification. By simulation experiments we compare the protective strength of identity management to the state-of-the-art attack. We show that while a large fraction of participating users are required to repel the attack, with the proper settings it is possible to effectively hide information, even for a handful of users.
In addition, we propose a user-controllable method based on decoy nodes, which turn out to be successful for information hiding as at most 3.33% of hidden nodes are revealed in our experiments.
Introduction
The basic concept of online social networks is to provide an interface for managing social relationships. However, social networks are not the only services that have an underlying graph structure, and recently several network alignment attacks have been published in which attackers aimed to breach the privacy of nodes within anonymized networks (e.g., obtained for business or research purposes) by using data from other (social) networks [Narayanan and Shmatikov; Narayanan et al.; Peng et al.; Srivatsa and Hicks; Backstrom et al.]. Basically, such attacks can have two goals, i.e., to achieve a node or an edge privacy breach (or both). In the case of the first, the attacker learns the identity of a node, or some otherwise hidden profile information; in the second case, the attacker learns of the existence of a hidden relationship.
The first attack of its kind was introduced by Narayanan and Shmatikov in 2009 [Narayanan and Shmatikov], who proposed a structural re-identification algorithm able to de-anonymize users at large scale by using data from another social network. In their main experiment they de-anonymized 30.8% of the nodes mutually present in a Twitter and a Flickr crawl. Recently it has been shown that location information can also be re-identified with similar methods [Srivatsa and Hicks]. As there are many services based on a graph structure (or implicitly having one), it is likely that more similar attacks will be discovered.
Attacks capable of achieving large-scale re-identification consist of two sequential phases, the global and the local re-identification phase [Gulyás and Imre, Measuring Local Topological Anonymity]. In the first phase the algorithm seeks globally outstanding nodes (called the seeds), e.g., by their degree, and the second phase then extends the seed set in an iterative way, locally comparing nodes connected to the seed set.
For instance, an attacker may obtain datasets as depicted in Fig. 1, wishing to know an otherwise inaccessible private attribute: who prefers tea or coffee (dashed or thick-bordered nodes). She initializes the seed set by re-identifying (or mapping) v_Alice ↔ v_7 and v_Bob ↔ v_3, as they have globally the highest degree in both networks (global re-identification phase). Next, she looks for nodes with locally unique degree values connecting to both seeds, and picks deg(v_Harry) = 3. By looking for nodes within the common neighbors of v_3, v_7 with the same degree, she maps v_Harry ↔ v_4. Then the process continues with additional iterations. In this paper we propose a privacy-enhancing method related to the identity partitioning technique [Clauß et al.], called identity separation, to tackle these attacks. Identity separation allows a user to have multiple unlinkable profiles in the same network, which results in multiple unlinkable nodes in the sanitized graph (e.g., as the service provider is also unaware of the link between the identities).
We present the effect of identity separation by the example of Fred in Fig. 1, who created two unlinkable profiles: v_8 for pretending to be a coffee fan towards his closer friends (Alice, Ed, Greg), and v_12 for maintaining relationships with tea lovers (Harry, Jennie). By applying the attack algorithm, it can be seen that the hidden drink preference of Fred will not be discovered by third parties.
Our main contributions are as follows. By simulation measurements we characterize how resistant the attack in [Narayanan and Shmatikov] is against different features of identity separation, e.g., splitting nodes or deleting edges. In experiments we show that by using these features the quantity of information revealed by the attacker can be reduced to as low as 3.21% or even lower, while identity separation cannot efficiently repel the attack at the network level. We additionally propose a method using decoys that can effectively protect privacy in a user-controllable way, for which we measure the quantity of revealed information well under 4%, even when only a few users adopt this technique.
Related Work
The first attack, proposed by Narayanan and Shmatikov in [Narayanan and Shmatikov] (to which we refer later as Nar09), used 4-clique seeding, and its local re-identification phase works as described in the example of Section 1, being based on a propagation step which is iterated over the neighbors of the seed set until no new nodes can be identified (already identified nodes are revisited). In each iteration, candidates for the currently inspected source node are selected from target graph nodes sharing at least one common mapped neighbor with it. At this point the algorithm calculates a score based on cosine similarity for each candidate. If there is an outstanding candidate, a reverse match checking is executed to verify the proposed mapping from a reversed point of view. If the result of the reverse checking equals the source node, a new mapping is registered.
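For illustration, a simplified sketch of one propagation step (our own Python code; graphs are adjacency dicts of sets, the scoring is only an approximation of the paper's, and the reverse match checking is omitted):

    from statistics import pstdev

    def cosine_sim(s1, s2):
        return len(s1 & s2) / (len(s1) * len(s2)) ** 0.5 if s1 and s2 else 0.0

    def propagate_once(v, G_src, G_tar, mu, theta=0.01):
        # Candidates: target nodes sharing a mapped neighbor with v.
        mapped = {mu[u] for u in G_src[v] if u in mu}
        if not mapped:
            return None
        scores = {c: cosine_sim(mapped, G_tar[c])
                  for w in mapped for c in G_tar[w]}
        if len(scores) < 2:
            return next(iter(scores), None)
        top = sorted(scores.values(), reverse=True)
        sd = pstdev(scores.values())
        # Eccentricity test: accept only an outstanding candidate.
        if sd == 0 or (top[0] - top[1]) / sd < theta:
            return None
        return max(scores, key=scores.get)  # then reverse-check it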
However, several attacks have appeared since then; here we include the most relevant works. Narayanan et al. in 2011 presented a specialized version of their attack [Narayanan et al.], which was capable of achieving a higher recall rate, but was specialized for the task of working on two snapshots of the same network. More recently, Wei et al. also proposed another algorithm in [Peng et al.] challenging [Narayanan and Shmatikov]; however, we argue that their work does not go beyond [Narayanan and Shmatikov], for at least two reasons. First, in their paper there is no evaluation of their algorithm against the perturbation strategy proposed in [Narayanan and Shmatikov], although it is definitely more realistic than the one used in [Peng et al.]. As the perturbation strategy of [Narayanan and Shmatikov] deletes edges (only), this is a remarkable deficiency. Our second remark also relates to their experiments, which were performed on quite small graphs having fewer than a thousand nodes; there are no experimental results showing that their algorithm also performs better on networks having tens of thousands of nodes (or larger). Finally, there are some other works developing the original idea further in specific directions, such as the de-anonymization of location traces by Srivatsa and Hicks [Srivatsa and Hicks]; however, as to the best of our knowledge no work provides better results than [Narayanan and Shmatikov] in general, we chose this attack as the state of the art and work with it in our experiments.
For preventing de-anonymization, we consider a user-centered privacy protection mechanism (instead of graph sanitization applied by the service provider), one that can be applied to existing services; otherwise one might consider using revised service models, such as distributed social networks [Cutillo et al.]. In our previous work we analytically showed that identity separation is an effective tool against 4-clique-based global re-identification [Gulyás and Imre, Analysis of Identity Separation] (as described later, we use the structural identity separation models from this work).
Recently, Beato et al. proposed the friend-in-the-middle model [Beato et al.], where proxy-like nodes serve as mediators to hide connections, and presented the viability of their model (successfully) on the Slashdot network [SNAP]. In contrast to their work, we also focus on information hiding that works even for a few nodes only; in addition, identity separation is a rather powerful method allowing fine-grained management of information [Clauß et al.], e.g., it allows hiding profile information besides relationships. Lastly, we note that as the network structure has a notable bias on results, we carry out experiments on multiple datasets.
Datasets, Modeling and Simulation Settings
We partially base our notation on the one used in [Narayanan and Shmatikov]. Given a graph G_tar to be de-anonymized by using an auxiliary data source G_src, let Ṽ_src ⊆ V_src, Ṽ_tar ⊆ V_tar denote the sets of nodes mutually existing in both. Due to the presence of nodes using identity separation, ground truth information is represented by two mappings: μ_G : Ṽ_src → Ṽ_tar denotes the mapping between nodes that are intact, and λ_G : Ṽ_src ⇒ Ṽ_tar denotes the mapping between nodes in G_src and the sets of their separated identities in G_tar. Running a deterministic re-identification attack on (G_src, G_tar) results in a re-identification mapping denoted as μ : V_src → V_tar.
Social Network Data and Modeling Identity Separation
During our experiments we used multiple datasets with different characteristics in order to avoid related biases. In addition, we used large networks, as brute-force attacks can be mounted against smaller ones. We obtained two datasets from the SNAP collection [SNAP], namely the Slashdot network crawled in 2009 (82,168 nodes, 504,230 edges) and the Epinions network crawled in 2002 (75,879 nodes, 405,740 edges). The third dataset is a subgraph exported from the LiveJournal network crawled in 2010 (at our department), consisting of 66,752 nodes and 619,512 edges.
For modeling identity separation, it would be desirable to analyze real-world data on user behavior, but to the best of our knowledge such datasets are unavailable and there is no trivial way of crawling one (yet). Fortunately, data on a functionality similar to identity separation is available: structural information on social circles extracted from Google+, Twitter and Facebook [SNAP]. We found in this data that the number of circles has a power-law distribution; for instance, in the Twitter dataset we measured α = 2.31 (933 ego networks, x_min = 2, x_max = 18). Many users did not duplicate their connections (44.6%), and only a fraction of them had more than twice as many connections in their circles compared to the number of their unique acquaintances (6.07%). While it is not possible to draw strong conclusions from these observations, we believe they indicate the real nature of identity separation (the usability of this dataset is limited by the absence of hidden connections).
Thus, due to the lack of data, we used the probability-based models we introduced in [Gulyás and Imre, Analysis of Identity Separation], which describe identity separation from a structural point of view and allow deriving test data from real-world datasets. These models capture identity separation as splitting a node and assigning the previously existing edges to the new nodes. The number of new identities is modeled with a random variable Y (with unspecified distribution), which we either set to a fixed value or model with a random variable having a power-law-like distribution. For edge sorting, there are four models in [Gulyás and Imre, Analysis of Identity Separation] regarding whether it is allowed to delete edges (i.e., an edge becomes private) or to duplicate edges, of which we used three in our experiments. While the basic model is simple and easy to work with (no edge deletion or duplication allowed), we used the realistic model to capture real-life behavior, too (both operations allowed). We additionally used the best model, describing a privacy-oriented user behavior (no edge duplication, but deletion allowed), and omitted the worst model (edge duplication only).
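A minimal sketch of the node-splitting step (our own Python illustration on adjacency dicts of sets; the exact edge-sorting probabilities of the cited models are simplified here, and p_del, p_dup are illustrative parameters of ours):

    import random

    def separate_identity(G, v, y=2, p_del=0.0, p_dup=0.0):
        # Split v into y new identities; basic model: p_del = p_dup = 0,
        # best model: p_del > 0 only, realistic model: both > 0.
        ids = [(v, i) for i in range(y)]
        for node in ids:
            G[node] = set()
        for u in G.pop(v):
            G[u].discard(v)
            if random.random() < p_del:
                continue                  # the edge becomes private
            owners = ids if random.random() < p_dup else [random.choice(ids)]
            for node in owners:
                G[node].add(u)
                G[u].add(node)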
Data Preparation
During the test data creation process, we first derived a pair of source and target graphs (G_src, G_tar) having the desired overlap of nodes and edges, and then modeled identity separation on a subset of nodes in the target graph. We used the perturbation strategy proposed by Narayanan and Shmatikov [START_REF] Narayanan | De-anonymizing social networks[END_REF]. Their algorithm considers the initial graph as the ground truth of real connections, from which the graphs G_src, G_tar are extracted with the desired fraction of overlapping nodes (α_v); edges are then deleted independently to achieve the edge overlap α_e.
We found α_v = 0.5, α_e = 0.75 to be a good trade-off, at which a significant level of uncertainty is present in the data (capturing the essence of a life-like scenario), but the Nar09 attack is still capable of identifying a large ratio of the co-existing nodes¹. Identity separation is then modeled on the target graph by uniformly sampling a given percentage of nodes with degree deg(v) ≥ 2 (this ratio is maintained for the ground truth nodes); the sampled nodes are split and their edges sorted according to the settings of the currently used model.
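The derivation step can be sketched roughly as follows; this is our simplified reading, and the exact probability calibration for hitting α_e is the one described by Narayanan and Shmatikov.

```python
import random

def derive_pair(edges, alpha_v=0.5, alpha_e=0.75, seed=1):
    """Derive (G_src, G_tar) from a ground-truth edge list: the two node sets
    share a fraction ~alpha_v of nodes, and each copy keeps an edge
    independently, thinning the edge overlap towards alpha_e."""
    rng = random.Random(seed)
    nodes = sorted({u for e in edges for u in e})
    shared = set(rng.sample(nodes, int(alpha_v * len(nodes))))
    rest = [v for v in nodes if v not in shared]
    half = len(rest) // 2
    v_src, v_tar = shared | set(rest[:half]), shared | set(rest[half:])

    def project(vs):  # keep an edge if both ends survive and it is not deleted
        return [(u, w) for (u, w) in edges
                if u in vs and w in vs and rng.random() < alpha_e]

    return project(v_src), project(v_tar)
```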
Calibrating Attack Parameters and Measuring Success Rate
By comparing the directed and undirected versions of Nar09, we found little difference in the results; for this reason, and for the sake of simplicity, we used undirected networks in our experiments. Next, we ran several measurements to find the optimal parameters of the attack. We found that randomly choosing 1,000 nodes from the top 25% (by node degree) of the mutually existing nodes is a redundant choice modeling a strong attacker (as 750 seeds were already enough to reach the high end of large-scale propagation).
The seed location sensitivity of the algorithm is known for small networks [START_REF] Gulyás | Measuring Local Topological Anonymity in Social Networks[END_REF]. In contrast, we found that seed location matters less for large networks, likely because of the greater redundancy in topology against perturbation and the larger ground truth sizes. Therefore, in each experiment we created two random perturbations and ran simulations twice on both with a different seed set. We observed only minor deviations in the results, usually less than a percent.
The Nar09 algorithm has another important parameter (Θ) for controlling the ratio of true and false positives. The attack produced fairly low error rates even for small values of Θ, hence we chose to work with Θ = 0.01. The error rate stayed well under 3% in most experiments, with only a few exceptions when it went slightly above this value.
We use two measures for evaluating simulation results. The recall rate reflects the extent of re-identification (this by itself can be used due to the constantly negligible error rates), describing success from the attacker's point of view. It is calculated by dividing the number of correct identifications by the number of mutually existing nodes (seeds are excluded from the results).
The disclosure rate quantifies the information the attacker learned from users who applied identity separation, describing the overall protection efficiency from the user's point of view. As the current identity separation models are bound to structural information, we use a measure reflecting the average percentage of edges that the attacker successfully revealed (this can be extended to other types of information in future experiments, e.g., sensitive profile attributes).
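Continuing the toy structures from the earlier sketch (mu_G, lambda_G, is_correct), the two measures could be computed as below; this is our reading of the definitions, with graph_tar mapping nodes to neighbour sets and guesses holding the attack's mapping µ.

```python
def recall_rate(guesses, seeds):
    """Correct identifications divided by the mutually existing nodes,
    with seed nodes excluded."""
    hits = sum(is_correct(s, t) for s, t in guesses.items() if s not in seeds)
    total = len(mu_G) + len(lambda_G) - len(seeds)
    return hits / total

def disclosure_rate(graph_tar, guesses):
    """Average fraction of a separated user's edges that the attack revealed:
    only edges of a correctly matched identity count as revealed."""
    rates = []
    for v, idents in lambda_G.items():
        union = set().union(*(graph_tar[i] for i in idents))
        g = guesses.get(v)
        revealed = graph_tar[g] if g in idents else set()
        rates.append(len(revealed) / len(union) if union else 0.0)
    return sum(rates) / len(rates)
```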
Characterizing Weaknesses of the Nar09 Algorithm
In the first part of our experiments, in order to discover the strongest privacy-enhancing identity separation mechanisms, we investigated the efficiency of features in different models against the Nar09 algorithm.
Measuring Sensitivity to the Number of Identities
Foremost, we tested the Nar09 algorithm against the basic model with uniform edge sorting probability. Simulations of the attack were executed for all networks with the ratio of users applying identity separation R ∈ [0.0, 0.9] (in steps of 0.1). For the selected users a fixed number of new identities was created (Y ∈ [2, 5]). We summarize the results in Fig. 2; however, we omit the cases Y = 3 and Y = 4, as these can easily be inferred from the rest.
Contrary to our initial expectations, the basic model with Y = 2 and uniform edge sorting probability is not effective in stopping the attack. For the Epinions and Slashdot networks the recall rate decreased only mildly until the ratio of privacy-protecting users reached circa R = 0.5. For the LiveJournal graph the recall rate shows a relevant fault tolerance of the attack (likely because of the network structure, as this is also the densest test network); e.g., Nar09 still correctly identified 15.12% of users even for R = 0.7. When participating users had five new identities, the results were more promising, as recall rates dropped below 10% at R = 0.5 for all networks.
We also tested the case where edges are sorted according to a power-law distribution with Y = 5. These experiments resulted in a slightly higher true positive rate, which is not very surprising: if edges are not uniformly distributed, it is more likely for an identity to appear that holds more of the original edges than the others (and thus has higher chances of being re-identified). In another comparative experiment we modeled a variable number of new identities with a power-law-like distribution with Y ∈ [2, 5] and uniform edge sorting probability. The results were properly centered between the cases Y = 2 and Y = 5, as the LiveJournal example shows in Fig. 2a.
Even though, judging by the recall rates, the basic model seems ineffective in impeding the attack, the disclosure rates imply better results. As shown in Fig. 2b, disclosure rates are significantly lower than recall rates². From this point of view, using the basic model with Y = 5 and uniform edge sorting probability provides strong protection even for a small ratio of applying users: the disclosure rate is at most 8.03% when R = 0.1.

Fig. 2: Experimental results using the basic identity separation model ((a) recall rates; (b) disclosure rates).

By comparing the results of the two measures, we conclude that by using the basic model it is not feasible to repel the attack; however, by using a higher number of identities the attacker's access to information can be effectively limited.
While conducting the analysis, we found that the recall rate was notably higher for users of identity separation (∀v ∈ dom(λ_G)) than for others (∀v′ ∈ dom(µ_G)). For low values of R this difference in recall was almost constant, and it disappeared for high values³. This turned out to be a bias caused by the seeding strategy: after changing to mixed seeding with an equal ratio of seeds selected from dom(µ_G) and dom(λ_G), the overall recall rate remained equivalent while the difference disappeared for the LiveJournal and Slashdot networks, and significantly decreased for Epinions. Most importantly (from the user perspective), the disclosure rates stayed equivalently low regardless of the seeding strategy used. This finding has an interesting impact on the attacker's choice of a proper seeding strategy. Using a simple seeding mechanism seems to be a natural choice, but adding fault tolerance against identity separation is not trivial: the analysis provided in [START_REF] Gulyás | Analysis of Identity Separation Against a Passive Clique-Based Deanonymization Attack[END_REF] shows that the seeding method discussed in [START_REF] Narayanan | De-anonymizing social networks[END_REF] is not very resistant to identity separation. Thus, using a simpler choice of seed identification, the attacker will also have a higher rate of correct identification for nodes protecting their privacy.
Measuring Sensitivity to Deletion of Edges
We used the realistic and best models to test Nar09 against additional edge perturbation [START_REF] Gulyás | Analysis of Identity Separation Against a Passive Clique-Based Deanonymization Attack[END_REF], as edge deletion is allowed during the edge sorting phase within these models. Since the details are not explicitly defined in [START_REF] Gulyás | Analysis of Identity Separation Against a Passive Clique-Based Deanonymization Attack[END_REF], we used three different settings in our experiments. For all of them, edge sorting probabilities are calculated according to a multivariate normal distribution as P(X_1 = x_1, ..., X_y = x_y) ∼ N_y(η, Σ), where y denotes the current number of identities. We set each value of η to y⁻¹ and configure Σ in a way that yields higher probabilities for events where the sum of the new edges is relatively close to the original node degree (in the best model, when the sum was higher than the original degree, the distribution was simply recalculated).
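Concretely, the per-identity edge shares could be drawn as in the following sketch; the diagonal Σ and the acceptance band are our assumptions, chosen only to favour draws whose sum stays close to the original degree.

```python
import numpy as np

def edge_share_probs(y, sigma=0.05, rng=np.random.default_rng(0)):
    """Draw (x_1, ..., x_y) ~ N_y(eta, Sigma) with eta_i = 1/y, rejecting
    draws that are non-positive or whose sum strays far from 1 (i.e. from
    the original node degree)."""
    eta = np.full(y, 1.0 / y)
    cov = (sigma ** 2) * np.eye(y)
    while True:
        x = rng.multivariate_normal(eta, cov)
        if (x > 0).all() and abs(x.sum() - 1.0) < 0.1:
            return x
```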
The first setting is the realistic model with minimal deletion, in which each edge is assigned to an identity, and if there is still ample space left, random edges are additionally assigned to the identities; in this setting edges are not deleted unless necessary. In the setting of the realistic model with random deletion, the new identities take portions of the edges proportional to (x_1, ..., x_y); this setting is expected to delete unassigned edges proportionally to ∏(1 − x_i). We also included a setting with the best model for comparison, namely the best model with random deletion⁴. We ran simulations for all models in all test networks with Y = 2, and found that the recall rates strongly correlate with the results of the basic model (although being slightly better); thus, these models are also incapable of repelling the attack. Fortunately, the disclosure rates show significant progress compared to the basic model; as an example, results for the Epinions network are depicted in Fig. 3a. We conclude that while these models are also incapable of stopping large-scale propagation, they perform better in privacy protection.
Simulating Multiple Model Settings in Parallel
In the previous subsections we described experiments in which the settings of different models were used homogeneously. Naturally, the question arises whether the observed differences remain if multiple settings are allowed in the same network in a mixed way. Hence, in another experiment we allowed three settings in parallel: the basic model with uniform edge sorting probability (34% of R), the realistic model with random deletion (33% of R), and the best model with random deletion (33% of R). We found that the outcome for the users of each setting was proportional to the results measured in previous experiments; for instance, users of the best model achieved the lowest recall and disclosure rates. Simulation results in the LiveJournal graph are plotted in Fig. 3b for demonstration (results were measured for homogeneous groups consisting of nodes having the same setting).
Searching for the Strongest User Protection Mechanisms
Measuring the Best Trivial Strategies
Previously we characterized weaknesses of the Nar09 attack, and it turned out that while none of the previously analyzed defense strategies can effectively stop it, some forms of identity separation can reduce the amount of accessible information. It also turned out that increasing the number of new identities has a powerful impact on the disclosure rate, while edge perturbation has a smaller, but still remarkable effect. Thus, the best model with a high number of identities seems to be the most effective setting.
We ran the best model with Y = 5 (using the same distribution as described in Section 4) on all test networks. The results revealed that even this method cannot prevent large-scale re-identification when a relatively low ratio of users applies the technique. Instead, for all networks the re-identification rate converged to a hypothetical linear line monotonically decreasing as R increases (see Fig. 4a). Fortunately, the setting had more convincing results for disclosure rates: even for R = 0.1 the disclosure rate topped at 2.22%. Disclosure values continued to fall as the ratio of defending users increased. In addition, we also examined disclosure rates for cases when participation was very low, such as 1‰ of Ṽ_tar, meaning only a few tens or hundreds of users from Ṽ_tar using identity separation. As seen in Fig. 4b, our experiments resulted in approximately constant disclosure rates for all models (the variability for R < 0.01 is likely due to the small sample sizes). Therefore we conclude that even if only a few users use the best model with Y = 5, their privacy is protected, as the attacker can reveal only a few percent of the sensitive information.
Placing the User in the Decision-Making Position
The strategy proposed previously lacks user control, i.e., the user cannot influence what information she wishes to hide from the attacker. Here, we introduce a simple model that puts the user into a decisive position by utilizing decoy identities. Nevertheless, we note that this model is a simple example, and models used in real-life situations can be adapted to other hypothetical attacker strategies and to the type of information to be hidden (e.g., one may consider using structural steganography for hiding nodes [START_REF] Backstrom | Wherefore art thou r3579x?: anonymized social networks, hidden patterns, and structural steganography[END_REF]).
We applied the following strategy to nodes v_i ∈ Ṽ_tar that have at least 30 neighbors⁵. First, we create a decoy node v_i^decoy representing non-sensitive connections, with the goal of capturing the attention of the attacker algorithm (this may be a public profile as well). Node v_i^decoy is assigned 90% of the acquaintances of v_i. Next, a hidden node v_i^hidden is created, holding the remaining 10% of the neighbors (i.e., the sensitive relationships) and an additional 10% that overlaps with the neighbors of v_i^decoy (i.e., modeling overlapping relationships).
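A sketch of this split (ours; which neighbours count as sensitive is application-specific, so a shuffle stands in for that choice):

```python
import random

def split_decoy_hidden(neighbours, rng=random.Random(0)):
    """Decoy identity gets 90% of the neighbours; the hidden identity gets
    the remaining 10% plus a duplicated 10% drawn from the decoy's side."""
    nbrs = list(neighbours)
    rng.shuffle(nbrs)                      # stand-in for sensitivity labelling
    k = round(0.9 * len(nbrs))
    decoy = nbrs[:k]
    overlap = rng.sample(decoy, min(len(decoy), round(0.1 * len(nbrs))))
    return set(decoy), set(nbrs[k:]) | set(overlap)
```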
This model showed promising results after being applied to our test data sets. While from the attacker's point of view the algorithm was successful, as it was able to produce high recall rates (until a large number of decoys appeared; see details in Fig. 5a), privacy-protecting nodes succeeded in revealing little sensitive information, as shown in Fig. 5b. Recall rates were typically small for hidden nodes: at most 0.6% within the Slashdot and Epinions networks, and, with one exception, at most 1.66% within the LiveJournal network. Misguidance was also successful when only a few users used it⁶.
⁵ This results in a significantly smaller set of applicable nodes; e.g., in LiveJournal, even for R = 0.9 only |dom(λ_G)| ≈ 11.2% of Ṽ_tar.
⁶ The observed variability is due to small sample sizes and is almost negligible; for instance, a recall of 3.33% means one node identified correctly out of 29 cases.
Future Work and Conclusions
In this paper, we analyzed different models of identity separation to evaluate their effect in repelling structural de-anonymization attacks and in information hiding. In our experiments we found that if identity separation is used in a non-cooperative way, it is not possible to avoid large-scale re-identification regardless of the strategy used, unless a large fraction of users is involved. This finding sets a direction for future work: is there a way for cooperating users to tackle these attacks more effectively?
We also used another measure in our experiments, reflecting the quantity of information a successful attack reveals. This metric showed more promising results: experiments confirmed that using multiple identities and allowing some connections to be hidden favors user privacy. Moreover, in our experiments using five identities with a moderate preference for hiding or duplicating edges proved sufficient to achieve a high rate of information hiding. Numerically, in the LiveJournal network we measured an information disclosure of 2.08% when only circa 24 users applied the proposed strategy, and the results stayed well under 4% in the other cases as well. However, using five identities is not realistic for most users, and in this case it is not possible to control what the attacker may reveal. Therefore, we proposed a method of using decoys, which seemingly did not affect the success of the attack (unless used by more than half of the users); however, as the attacker discovered almost exclusively decoy nodes, only a minority of hidden nodes were found: at most 0.6% within the Slashdot and Epinions networks, and, with one exception, at most 1.66% within the LiveJournal network. This method also produced suitable results when applied by only a few nodes (i.e., numerically 1-2).
Therefore, we have provided guidelines (in the form of two models) for effectively realizing information hiding in social networks, which can be applied to existing social networks even without the consent of the service provider. As a closing word, we designate an interesting direction for future work: what strategies should a user follow if identity separation can be applied in both networks G_src and G_tar?
Fig. 1: Example providing insights on de-anonymization and identity separation (left: auxiliary network G_src; right: sanitized network G_tar).
Fig. 3: (a) Disclosure rates for different models in the Epinions network; (b) multiple models present in parallel in the LiveJournal network. [Plots: disclosure rate (%), resp. recall and disclosure rates (%), against the ratio of nodes with identity separation (0-0.9); series: Basic Y=2, Basic Y=5, Realistic min./rand. deletion (Y=2), Best rand. deletion (Y=2), and recall/disclosure pairs of the Basic, Realistic and Best models; panels: (a) Epinions network, (b) LiveJournal network.]
Fig. 4: Analysis of the most effective privacy-enhancing strategies. [(b) shows low participation rates in LiveJournal, annotated with |dom(λ_G)| ∼24 and, for R = 0.005, |dom(λ_G)| ∼121-122; series: Basic Y=2, Basic Y=5, Realistic Y=2 (min./rand. deletion), Best Y=2 and Y=5 (rand. deletion).]
Fig. 5: Recall rates for whole networks and for nodes using decoy nodes ((a) recall sensitivity to the use of decoys; (b) results for nodes with decoys).
¹ Without adding perturbation, Nar09 could correctly identify 52.55% of the coexisting nodes in the Epinions graph, 68.34% in the Slashdot graph, and 88.55% in the LiveJournal graph; identification rates were consequently proportional to the ratio of one-degree nodes.
² As the disclosure rate is measured for ∀v ∈ dom(λ_G), results start from R = 0.1.
³ We omit plotting this on a figure due to space limitations; the difference was as follows: avg(∆_Slashdot) = 2.34% ∀R ∈ [0.1, 0.4]; avg(∆_Epinions) = 6.13% ∀R ∈ [0.1, 0.5]; avg(∆_LiveJournal) = 4.05% ∀R ∈ [0.1, 0.7].
⁴ We note, however, that none of these settings capture aggressive edge deletion, but it might be interesting to investigate such settings in the future. | 33,860 | [
"1004368",
"1004369"
] | [
"27223",
"27223"
] |
01492820 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01492820/file/978-3-642-40779-6_16_Chapter.pdf | Xiaofeng Xia
email: [email protected]
An Equivalent Access Based Approach for Building Collaboration Model between Distinct Access Control Models
Keywords: collaboration model, distinct access control models, equivalent access, mapping set, linking set
1 Introduction
When several organizations want to collaborate, they can share resources among each other so that some common tasks can be completed. The collaboration pattern discussed in this paper assumes that, for the resources shared from the participating organization domains, the collaboration domain can have its own access control model. Practical security policy configurations tell us that security models and policies are not all-purpose. To protect their resources from unauthorized access, the organization domains adopt different access control models, e.g. RBAC [START_REF] Sandhu | Role-based access control models[END_REF], mandatory access control (MAC) [START_REF] Bell | Secure computer systems: Mathematical foundations and model[END_REF], and discretionary access control (DAC) [START_REF] Osborn | Configuring Role-Based Access Control to Enforce Mandatory and Discretionary Access Control Policies[END_REF]. These models have different model entities related to permissions, which we call core model semantics; for example, the RBAC model constructs roles, while the MAC model has security labels. Currently there are approaches, e.g. in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] and [START_REF] Joshi | Secure Interoperation in a Multidomain Environment Employing RBAC Policies[END_REF], which focus on RBAC, assume that all organizations adopt the RBAC model, and then build a global access control policy on role mappings. A global policy can be generated because all domains have the same core model semantics; however, if the domains use distinct access control models, role mapping and a global policy cannot be built on these models. Organizational collaboration also introduces the IDRM problem [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF], which means finding the minimal role set covering the permissions requested from the collaboration domain. This problem can be generalized to distinct models and defined as finding an "appropriate" set of core model semantics covering a requested permission set. The third problem for organizational collaboration is constraint transforming. As the model entities are mapped between domains, from the perspective of the participators there are some constraints that must also be held in the collaboration domain, e.g. the separation of duty constraint (SSD) for the RBAC model [START_REF]Incits: ANSI INCITS 359-2004 for information technology role based access control[END_REF]. Therefore the contributions of this paper are: (1) building a collaboration model between distinct access control models; (2) the necessary algorithms for figuring out an appropriate set of core model semantics for a requested permission set; (3) constraint transforming between distinct models. The rest of this paper is organized as follows: section 2 describes related work, in section 3 we present a new collaboration model based on equivalent access, and section 4 illustrates the supporting algorithms and methods for building the collaboration model. Our testing and comparison results for the algorithms are presented in section 5. Finally, section 6 concludes this paper.
2 Related Work
The RBAC model [START_REF] Sandhu | Role-based access control models[END_REF] [START_REF] Sandhu | The ARBAC97 Model for Role-Based Administration of Roles[END_REF] provides role-permission management, role hierarchies, and separation of duty constraints. In Lattice Based Access Control (LBAC), i.e. the MAC model [START_REF] Sandhu | Lattice based access control[END_REF], information flow is restricted by constraints on security labels and clearances. The DAC model [START_REF] Osborn | Configuring Role-Based Access Control to Enforce Mandatory and Discretionary Access Control Policies[END_REF] emphasizes ownership relations over resources and permission delegation as the way of authorization. In the past years RBAC has been the most studied model, due to its conformance to organization structures. A context-dependent RBAC model [START_REF] Wolf | Context-Dependent Access Control for Web-Based Collaboration Environments with Role-Based Approach[END_REF] was proposed to enforce access control in web-based collaboration environments. Organization based access control (OrBAC) [START_REF] Kalam | Organization based access control[END_REF] is constructed from an RBAC model as the concrete level, and OrBAC then refers to common organizational contextual entities at the abstract level. Based on OrBAC, PolyOrBAC [START_REF] Kalam | Access control for collaborative system: a web services based approach[END_REF] was proposed to implement the collaboration between organizations having OrBAC models in their domains. It takes advantage of abstract organizational entities and Web Services mechanisms, e.g. UDDI, XML, SOAP, to enforce a global framework of collaboration for the engaging organization domains. Role mapping [START_REF] Joshi | Secure Interoperation in a Multidomain Environment Employing RBAC Policies[END_REF] helps one domain obtain access to resources from other domains through role inheritance across domains. A global access control policy is specified to merge the engaging organizations' local policies; this approach also assumes that all domains adopt the RBAC model. Building on these contributions on RBAC and collaboration, we focus on organization domains with distinct access control models. Other work on collaboration, or inter-domain operation, addresses the IDRM problem. The greedy-search based algorithm proposed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] is an approximate solution to the IDRM problem; however, simple greedy search has the local-maxima problem, so a probability-based greedy-search algorithm is used in this paper to avoid local maxima and obtain better approximate solutions. In section 5 we discuss the problems of these approaches in comparison with ours. To improve the algorithms, [START_REF] Chen | Inter-domain role mapping and least privillege[END_REF] presents another idea on greedy search; the authors note that the assumptions of the IDRM problem should be more complex and practical, and reduce the IDRM problem to a weighted set cover problem instead of the minimal set cover problem of [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF]. However, the algorithm of [START_REF] Chen | Inter-domain role mapping and least privillege[END_REF] cannot avoid local maxima either.
3 Equivalent Access and Collaboration Model

3.1 Preliminary definitions

An organization domain or collaboration domain D should contain part of the following entity sets and relations:

- User, Resource, and Action: the sets of system users, resources, and operations on resources;
- T: the set of Tag objects, e.g. roles and security labels;
As we construct the DAC model in a role-based way [START_REF] Osborn | Configuring Role-Based Access Control to Enforce Mandatory and Discretionary Access Control Policies[END_REF], we view the core model semantics as Tag objects, i.e. both role and label objects can be instantiated from a Tag class which has at least two attributes: <type, name>.

- Permission ⊆ Resource × Action: a set of permissions.

The model further relies on some predicates and functions:
- Reslabel(Resource, T): the assignment relation between a resource and a security label;
- mayAccess(User, Resource, Action): a common predicate indicating an access request from some user to perform an operation on some resource;
- Usertag: User × T, the relation indicating that certain Tag objects (roles or labels) are assigned to some user;
- PS: T → Permission: the permissions assigned to or held by a Tag object;
- PR: Permission → T: the tags holding the current permission.
3.2 Collaboration model based on equivalent access
Any access request of a user to some resource can be enforced by different access control models. We introduce "equivalent access", which relates two domains' access control policies. Since in organizational collaborations the preliminary goal is to find appropriate resources, equivalent access refers to a user's access to some resource under the collaboration domain policy having the same evaluation result as the corresponding access under the participating domain policy. Equivalent access should be the preliminary goal of organizational collaborations, i.e. the process of constructing a collaboration is to find equivalent accesses for the required resources in the participating domains. The collaboration scenario discussed here consists of a collaboration domain, denoted as D_c, and a series of original domains, i.e. (D_c; D_1, ..., D_n), n ≥ 2. Each domain applies its own access control model and policy. For a collaboration group (D_c; D_1, ..., D_n), there exist two sorts of entity relations between the collaboration domain (D_c) and the other participating domains (D_i, i ∈ [1, n]): one is the entity mapping set, the other is the entity linking set. We denote the former as Q; it simply maps the entities of D_i onto those of D_c. The mapping means that for any resource e′_0 ∈ D_i, there is a corresponding virtual resource e_0 ∈ D_c. The mappings are classified into "user", "resource" and "action" mappings, i.e. {ζ_u, ζ_e, ζ_a}.
The other relation is the entity linking set, denoted as L, which needs to be computed and will be introduced in the following parts.
Definition 1. For a collaboration group (D_c; D_1, ..., D_n), considering any participating domain D_i, i ∈ [1, n], and its mapping set Q, there are u ∈ user_c and e ∈ resource_c, as well as u′ ∈ user_i and e′ ∈ resource_i, such that <u, u′> ∈ ζ_u^<Di,Dc> and <e, e′> ∈ ζ_e^<Di,Dc>; we say that the access by u to e is equivalent to that by u′ to e′ under the two policies P_c and P_i if, for the substitutions θ_Dc = {U_x/u, E_x/e, A_x/read} and θ_Di = {U_x/u′, E_x/e′, A_x/read′}:
P_c ⊨ mayAccess(U_x, E_x, A_x) θ_Dc ∧ P_i ⊨ mayAccess(U_x, E_x, A_x) θ_Di   (1)
Then the equivalent access is denoted as:
mayAccess(U_x, E_x, A_x) θ_Dc, θ_Di |_<Pc,Pi>   (2)
Definition 2. The elements of the entity linking set indicate the pairs of related "Tag" objects respectively from the collaboration (D_c) and the original (D_i) domain. When two substitutions towards their own policies P_c and P_i yield an equivalent access, let a set S_Dc indicate the "Tag" objects which satisfy the request by θ_Dc, and a set S_Di indicate those by θ_Di; the entity linking set L_<Di,Dc> is then defined by the following rule:

L_<Di,Dc> = {<r, l> | <r, l> ∈ S_Dc × S_Di}   (3)

Definition 3. For a collaboration group (D_c; D_1, ..., D_n), where every domain's model has the form {D_R, D_M, D_S}, considering any original domain D_i, i ∈ [1, n], and its mapping set Q_<Dc,Di> with D_c, the collaboration model Γ of the group is defined, based on the above definitions of organization domains, as a union of pairs:

Γ = ⋃_{i=1}^{n} <Q_<Dc,Di>, L_<Dc,Di>>   (4)
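A minimal sketch of how Definition 1 could be evaluated, assuming the policies are exposed as boolean mayAccess callables and Q stores the per-type entity mappings (this data layout is our assumption, not part of the model):

```python
def equivalent_access(policy_c, policy_i, Q, u, e, a):
    """True iff (u, e, a) is an equivalent access: the collaboration policy
    P_c grants the request and the participating policy P_i grants the
    mapped request (collaboration entity -> original entity via Q)."""
    u_i, e_i, a_i = Q["user"][u], Q["resource"][e], Q["action"][a]
    return policy_c(u, e, a) and policy_i(u_i, e_i, a_i)
```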
4 Building Collaboration Model Between Distinct Access Control Models
In this section we analyze the problems of building a collaboration model, then introduce the algorithms we use to build the model, as well as the methods to transfer constraints into the collaboration domain. According to the definition of our new collaboration model, there are basically three steps to enforce: (1) finding out equivalent accesses; (2) minimizing the scale of disclosure of the organization's policy information involved in the collaboration; (3) transferring domain constraints into the collaboration domain by configuring them on the policy entities there.
4.1 RBAC as participator's model
Minimal role set covering requested permissions
A greedy-search based algorithm (GSA) was proposed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] as a solution to the IDRM problem (which is NP-complete). Basically, the algorithm handles each candidate role by taking all of its permissions that cover as many target permissions as possible, and then puts this role into the solution set. [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] also provides a probabilistic greedy-search algorithm (IGSA-PROB), which executes the candidate role handling with a probability p (near 1). A greedy-search based algorithm, however, does not guarantee finding the optimal solution R*; it is an H_n-approximation algorithm for the IDRM problem. The IDRM approaches proposed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] hence have the following problems: (1) the GSA algorithm is non-terminating and will probably not find any solution; (2) the GSA algorithm has the local-maxima problem; (3) the IGSA-PROB algorithm searches with probability p, but the local-maxima problem cannot be effectively avoided; (4) the inheritance hierarchy of roles could be applied to the IDRM problem, but is not exploited. The GSA and IGSA-PROB algorithms select only those roles whose permissions form a subset of the required permission set as candidates, which makes the algorithms non-terminating. We build the collaboration model by entity mapping and linking sets. The entity mapping set ensures that only requests involving mapped entities will be allowed, which means that even if a role r is linked in, only the mapped permissions will be allowed. This enables our algorithms to terminate. Towards solving the IDRM problem we propose three algorithms; their input includes RQ, the requested permission set; R, the set of all roles; P, the set of all permissions; and R_S, the set of initially selected roles; the output is TS, the set of candidate roles. The algorithms are specified formally in the appendix. I. Improved GSA algorithm (IGSA)
(1) Find all the roles in R whose permission sets intersect the requested set RQ, and put them into R_S. (2) For a role r in R_S, if r's permission set covers a larger part of RQ than that of any other role in R_S, put r into the candidate set TS, remove r from R_S, and remove the permissions of RQ covered by r. (3) If RQ is not empty, go to step (2). A sketch of these steps follows below.
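The greedy core of IGSA can be rendered as in the following sketch (ours; PS maps a role to its permission set):

```python
def igsa(RQ, roles, PS):
    """Greedy cover of the requested permissions RQ.  Every role whose
    permissions intersect RQ is a candidate, so the loop terminates once RQ
    is exhausted or no candidate can add coverage."""
    rq = set(RQ)
    rs = {r for r in roles if PS[r] & rq}                 # step (1)
    ts = set()
    while rq and rs:
        r = max(rs, key=lambda x: len(PS[x] & rq))        # step (2)
        if not PS[r] & rq:
            break                  # remaining permissions are not coverable
        ts.add(r)
        rs.discard(r)
        rq -= PS[r]                                       # step (3) loops
    return ts
```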
II. Improved algorithm for local-maxima (IGSAL)
(1) For each permission, find those which are assigned to a single role r. (2) For the other roles in R, remove the permissions that are assigned to them but also assigned to the role r. (3) Compare each role r′ with all of the other roles; if one of the permissions of r′ belongs to another role r* and r* has more permissions than r′, then remove all of the overlapping permissions from r′. (4) If all permissions of r′ have been removed, then r′ is also removed from R. (5) Perform the steps of algorithm I to compute the candidate set TS.
III. Algorithm for hierarchical roles (HCHY)
(1) Initially, put the roles which have no parent roles into a set S_1, remove them from their child roles' parent lists, and create a new set S_2. (2) For each role r in R, if it has no parent roles and does not belong to S_1 or S_2: if the set of convergent classes Converg_Classes is empty, create a new convergent class and add r to it; if Converg_Classes is not empty, check every convergent class C in it, and if the current role r belongs to the child role set of any role in C, add r to C. (3) Remove r from the parent role set of each child role of r, and add r to S_2. (4) Create a new set S_3; for each permission p of P, create another new set S_4, and for each role r′ which holds p, if there is a convergent class C containing r′, add r′ to S_4. (5) After checking all of the roles holding p, add S_4 to S_3; create new sets S_5 and S_6. (6) By a recursive process "recurse", compute the combinations of the sets in S_3 and return the minimal combination results.
Constraints of participating domain
Figuring out the minimal role set covering the requested permissions is the first step to enable the collaboration process; in addition, some RBAC constraints must also be held in the collaboration domain. Here we focus on the static separation of duty constraint (SSD), which is defined by the following statements, where "assigned_user(r)" indicates the set of users holding the role "r", and "assigned_tag(u)" indicates the set of roles assigned to user "u" [START_REF]Incits: ANSI INCITS 359-2004 for information technology role based access control[END_REF].
- SSD ⊆ (2^R × N), where R is the set of roles and N is the set of natural numbers;
- ∀<rs, n> ∈ SSD ∀t ⊆ rs. |t| ≥ n → ⋂_{r∈t} assigned_user(r) = ∅.

Now rs is a set related to an SSD element, and the number of possible "n-tuple" sets from rs is C^n_|rs|, i.e., the number of possibilities of picking n elements from |rs| elements. For each possibility we define the set s_k of all involved permissions; thus C^n_|rs| sets are defined by the following statements, where P_SSD indicates the permission sets for each of the SSD constraint elements in the participating domain:

∀r_1, r_2, ..., r_n ∈ rs. s_k = ⋃_{i=1}^{n} PS(r_i)
P_SSD = {s_1, s_2, ..., s_k}, k = C^n_|rs|.
When the participating domain adopts the RBAC model and the collaboration domain has an RBAC or DAC model (our DAC model is built in a role-based way), three new constraints have to be set for the collaboration domain's policy. In the collaboration domain: (1) none of the "Tag" objects may hold the whole permission set related to any of the SSD elements; (2) no user's permissions may cover the whole permission set related to an SSD element; (3) if the collaboration domain has an RBAC model, new SSD constraints are configured from the role sets which hold the requested permissions. Each member of P_SSD is mapped to a corresponding permission set s_i, i ∈ [1, k], in the collaboration domain, yielding the collaboration-domain counterpart of P_SSD. The three constraints are formally defined by the following statements; when the collaboration domain has a MAC model, only constraint <1> must be held, since in the MAC model each user holds exactly one security label. A sketch of checking these constraints follows after the list.

<1> ∀s_i ∈ P_SSD ∀t ∈ T_Dc. s_i ⊄ PS(t).
<2> ∀s_i ∈ P_SSD ∀u ∈ U_Dc. s_i ⊄ ⋃_{l ∈ assigned_tag(u)} PS(l), where g = |s_i| ≥ 1, s_i = {p_j | j ∈ [1, g]}.
<3> ∀<rs_c, m> ∈ SSD_{s_i} ∀t′ ⊆ rs_c. |t′| ≥ m → ⋂_{l ∈ t′} assigned_user(l) = ∅, where rs_c ⊆ T_Dc ∧ o_d ⊆ rs_c ∧ m = |o_d| ∧ SSD_{s_i} = {o_d | o_d = {r_s^1, r_s^2, ..., r_s^g}}.
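Constraints <1> and <2> could be checked at policy configuration time as in the following sketch (ours; the data layout is an assumption):

```python
def violates_ssd(P_SSD, tags, users, PS, assigned_tag):
    """True iff some SSD permission set s_i is fully covered by a single tag
    (constraint <1>) or by the combined tags of a single user (<2>)."""
    for s in P_SSD:
        if any(s <= PS[t] for t in tags):
            return True                       # <1> violated
        for u in users:
            held = set().union(*(PS[t] for t in assigned_tag[u]))
            if s <= held:
                return True                   # <2> violated
    return False
```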
4.2 MAC as participator's model
If the participating domain adopts a mandatory access control model, then each resource has exactly one label. Once the requested resources and operations are confirmed, these resources can simply be mapped onto the security labels to which they are assigned in the participating domain. In this section we discuss the Bell-LaPadula (BL) model [START_REF] Bell | Secure computer systems: Mathematical foundations and model[END_REF][6] in collaboration; the other model, the Biba model, concerns integrity and is dual to the BL model. The MAC model assigns to each object exactly one security label and to each user or subject only one security clearance. Compared with the scenario where RBAC is the participator's model, we only need to find the labels of the resources lying in the requested permissions; these labels can then provide equivalent accesses. To prevent disallowed information flow in the collaboration domain, additional constraints must be added to the collaboration domain policies. Since finding the labels of resources is trivial, we provide only the definition of the newly created constraint in the collaboration domain. Assume that a collaboration model Γ, one of the participating domains D_i, and the collaboration domain D_c are defined as in section 3.
Single label constraint
<1> P_r = {<e, a> | ∀e ∈ Resource_Dc ∃e′ ∈ Resource_Di. r ∈ T_Dc ∧ <e′, a> ∈ PS(r) ∧ <e′, e> ∈ ζ_e}; P_r ⊆ RQ ∧ |{l | ∀<e, a> ∈ P_r ∧ Reslabel(e, l)}| = 1.
<2> ∀u ∈ U_Dc ∀l, r ∈ T_Dc. Usertag(u, l) ∧ Usertag(u, r) → (l = r).
<3> T′ = {l | ∀u ∈ U_Dc. Usertag(u, l)}; ∀l ∈ T′ ∃t ∈ T_Di. P_l ⊆ RQ ∧ {t′ | ∀<e, a> ∈ P_l ∧ Reslabel(e, t′)} = {t}.
In the collaboration domain, the information flow policy of the participating domain should be held. The single label constraint restricts the labels of the resources that are shared in the collaboration domain. Each "Tag" object can only be assigned permissions whose mapped entities in the participating domain carry the same security label. Each user or subject in the collaboration domain can hold either only one "Tag" object, or multiple "Tag" objects that are assigned permissions related to the same security label. The above constraint is therefore expressed by the formula <1> ∧ (<2> ∨ <3>).
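The combined formula could be checked as in the following sketch (ours), where res_label maps each shared resource back to the security label of its original in D_i:

```python
def single_label_ok(tags, PS, res_label, users, assigned_tag):
    """Checks <1> and (<2> or <3>): every tag's permissions refer to
    resources of one label, and no user combines tags tied to different
    labels (assumes assigned tags are all contained in `tags`)."""
    label_of = {}
    for t in tags:
        labels = {res_label[e] for (e, a) in PS[t]}
        if len(labels) > 1:
            return False                      # tag mixes labels: <1> fails
        label_of[t] = labels.pop() if labels else None
    for u in users:
        used = {label_of[t] for t in assigned_tag[u]} - {None}
        if len(used) > 1:
            return False                      # user mixes labels: <2>/<3> fail
    return True
```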
4.3 DAC as participator's model
In a collaboration process, if the required permissions are provided by a participating domain with a DAC model, the delegation of these permissions is not considered in the collaboration domain, since only the access permissions are necessary, not the delegation permissions. In our DAC model definitions, a resource and the different operations on it constitute permissions, for which different roles are created. Each resource has an owner, who is assigned the "owner role" of the resource; the owner role inherits all of the permissions from the other relevant roles. The participating domain only needs to provide the basic roles related to the requested permissions. Although our DAC model adopts a role-based construction, there are no high-level roles holding large numbers of permissions related to different resources; thus the previous algorithms for finding a minimized role set for requested permissions do not apply to the DAC model. For a participating domain with a DAC model, there are no special constraints to be ensured in the collaboration domain.
5 Analysis on Algorithm Properties and Testing Results
We presented the algorithms IGSA, IGSAL, and HCHY for handling the minimal role set problem in section 4. Our collaboration model Γ verifies the entity mapping and linking sets, which makes it harmless that linking may introduce non-required permissions: only the collaboration-relevant permissions, that is, the resources and operations kept as entity mappings in the collaboration model Γ, can be allowed for access. As discussed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF], GSA has the local-maxima problem, which can be addressed by GSA-PROB (a probability-based greedy-search algorithm). By analyzing the problem we found that the permission assignment relationship, i.e. one permission being assigned to multiple roles, causes the local-maxima problem. Our IGSAL algorithm removes this "multi-inheritance" from the role-permission relation, after which greedy search can be applied to the resulting roles and permissions. To describe the complexity characteristics of the three algorithms, we assume that the size of the requested permission set is N. Compared with the IGSA and GSA-PROB algorithms, IGSAL spends computation on preprocessing the role-permission relations and then starts a greedy search to obtain a solution. Regarding efficiency, IGSAL has a nested loop for checking all of the requested permissions, which yields O(N²) complexity. Since the complexity of the greedy search in IGSA and GSA-PROB is O(ln N) [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] and the second step of IGSAL is also a greedy search, the final complexity of IGSAL is still O(N²). By randomly generating permissions and assignment relationships, we ran tests handling 100 roles and 43,000-50,000 permissions, with the size of the requested permission set ranging from 1,000 to 15,000. Table 1 shows that IGSAL is less efficient than IGSA, but more precise.
It was mentioned that the role hierarchy can be used to provide a minimal role set for the requested permissions. The collaboration model ensures that only permissions related to mapped and linked entities can be accessed, even if a high-level role is involved that has more permissions than requested. Therefore, from one or multiple role hierarchies in an organization domain, one can find the powerful roles that cover as many requested permissions as possible. The hierarchies discussed in section 4 are called convergent classes. The algorithm HCHY first computes the convergent classes of roles contained in an access control model, which takes time O(C_1); C_1 indicates a constant cost for computing the convergent classes, since the roles and role hierarchies in a domain are determined in advance, so this needs to be computed only once. The second step of HCHY takes the requested permissions as input, with time complexity O(N). Finally, a recursive process figures out the minimal set of roles covering the requested permissions, which depends only on the number of involved role hierarchies; we assume this cost to be C_2. The total time complexity of HCHY on the requested permissions is O(N) + C_1 + C_2. Table 2 shows that HCHY is faster than IGSA. An organization domain with an RBAC model adopts either a flat or a hierarchical role structure; our algorithms IGSA, IGSAL, and HCHY can handle and make use of both.
6 Conclusion

In this paper we handle three problems of organizational collaboration: (1) building a secure collaboration between domains with distinct access control models; (2) finding an "appropriate" set of core model semantics covering a requested permission set; (3) transforming constraints between organization and collaboration domains. We present an equivalent access based approach and introduce a mediator-involved collaboration pattern for the first problem. New algorithms are proposed for the IDRM problem based on flat and hierarchical role structures. Then some new constraints are presented for the third problem. Finally, we analyze our algorithms and present the testing results and a comparison with existing approaches. The collaboration pattern with a "mediator" works both when there is and when there is no domain access control model in the collaboration. The access control policies of the participating domains are respected. In future work, we will implement the mediator role, the collaboration model, and the transformed constraints in XACML.
Table 1. Comparison of IGSA and IGSAL on efficiency

Role size | Perm size | Requested perms | Time consuming (IGSA/IGSAL) | Solution size (IGSA/IGSAL)
100 | 41613 | 10^3 | 71 / 5334 | 80 / 78
100 | 45807 | 2×10^3 | 79 / 14549 | 90 / 87
100 | 46055 | 3×10^3 | 90 / 23011 | 91 / 91
100 | 43696 | 4×10^3 | 104 / 31864 | 93 / 89
100 | 45252 | 5×10^3 | 113 / 43066 | 96 / 95
100 | 44701 | 6×10^3 | 121 / 54115 | 98 / 96
100 | 48191 | 7×10^3 | 193 / 81417 | 99 / 97
100 | 44323 | 8×10^3 | 143 / 84534 | 99 / 99
100 | 45879 | 9×10^3 | 221 / 109845 | 98 / 97
100 | 43841 | 10^4 | 164 / 110684 | 97 / 95
100 | 47209 | 11×10^3 | 243 / 161712 | 98 / 98
100 | 45088 | 12×10^3 | 266 / 161768 | 99 / 98
100 | 46269 | 13×10^3 | 269 / 188546 | 100 / 98
100 | 44134 | 14×10^3 | 300 / 197264 | 98 / 97
100 | 44036 | 15×10^3 | 299 / 217346 | 99 / 97
| 27,964 | [
"1004370"
] | [
"489435"
] |
01492828 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01492828/file/978-3-642-40779-6_23_Chapter.pdf | Sachar Paulus
email: [email protected]
Nazila Gol Mohammadi
Thorsten Weyer
Trustworthy Software Development
Keywords: Software development, Trustworthiness, Trust, Trustworthy software, Trustworthy development practices
This paper presents an overview of how existing development methodologies and practices support the creation of trustworthy software. Trustworthy software is key for a successful and trusted usage of software, specifically in the Cloud. To better understand what trustworthy software applications actually are, the concepts of trustworthiness and trust are defined and contrasted with each other. Furthermore, we identify attributes of software applications that support trustworthiness. Based on this groundwork, some well-known software development methodologies and best practices are analyzed with respect to how they support the systematic engineering of trustworthy software. Finally, the state of the art is discussed in a qualitative way, and an outlook on necessary research efforts and technological innovations is given.
1 Introduction
In the last years, many attempts have been made to overcome the issue of insecure and untrusted software. A number of terms have been used to capture the expectation of how "solid" a piece of software should be: secure, safe, dependable and trusted. Only in recent years has the literature related to (secure) software development seen the introduction of socio-technical systems (STS) (for more details, see [START_REF] Gol Mohammadi | An Analysis of Software Quality Attributes and Their Contribution to Trustworthiness[END_REF]). This concept allows distinguishing between the actual trust that users put into the functioning / delivery of the software in question on the one side, and the trustworthiness of the software on the other, i.e. properties (we call them attributes) that justify the trust that users put "into" the software. Whereas trust is primarily the subject of the "maintenance" of the relationship between the user and the software in use ("in operations"), trustworthiness is primarily acquired during the development process of the software and can mostly only be "lost" later on. The software creation process, likewise, had not been addressed adequately in either theory or practice until recently regarding topics like trust and trustworthiness, except by purely theoretical approaches (such as formal proofs or other forms of verification, e.g. [START_REF] Leveson | Safety analysis using Petri nets[END_REF]) or approaches on a functional level only (using e.g. security patterns [START_REF] Schumacher | Security Patterns: Integrating Security and Systems Engineering[END_REF]). As such, an analysis of existing software development practices / methodologies with a specific view on trustworthiness is new to the field. This research has been carried out as part of the OPTET project, and the results are presented in this paper in adequate detail. As an overview publication, it summarizes results of other very recent publications [START_REF] Gol Mohammadi | An Analysis of Software Quality Attributes and Their Contribution to Trustworthiness[END_REF].
This paper is structured as follows: the first section defines the notions of trust and trustworthiness and introduces the concept of trustworthiness attributes. The next section presents the analysis of the different development methodologies and practices in light of trustworthiness, followed by an analysis of the state of the art that summarizes what is available today and where more research is needed to achieve the goal of trustworthy software. A last section summarizes the research carried out and shortly indicates the future work planned in the OPTET project.
2 Fundamentals
In this section we introduce the two basic concepts "trust" and "trustworthiness" in order to be able to analyze how trustworthiness is addressed by different software development disciplines. Both concepts focus on the outcome of the STS, but differ in the perspectives of the trustor and the trustee(s). In general, trust is the trustor's prior estimation that an STS will provide an appropriate outcome, while trustworthiness is the probability that the same STS will successfully meet all of the trustor's requirements. The balance between trust and trustworthiness is a core issue for software development, because any imbalance (over-cautiousness or misplaced trust) could lead to serious negative impact, e.g. concerning the acceptance of the software by its (potential) users.
2.1 The notion "Trust"
We define trust in a system as a property of each individual trustor, expressed in terms of probabilities and reflecting the strength of their belief that engaging in the system for some purpose will produce an acceptable outcome. Thus, trust characterizes a state where the outcome is still unknown, based on each trustor's subjective perceptions and requirements. A stakeholder will decide to place trust in an STS if their trust criterion is successfully met; in other words, if their perceptions exceed or meet their requirements. A trustor having engaged in a system for multiple transactions can (or will) update the current trust level of that STS by observing past outcomes. The presence of subjective factors in trust decisions means that two different trustors may have different levels of trust in the same STS providing the same outcome in the future, even if they both have observed exactly the same system outcomes in the past. More specifically, subjective perceptions can depend on trustor attributes, which capture social factors such as age, gender, cultural background, level of experience with Internet-based applications, and view on laws. Subjective requirements, on the other hand, are represented by so-called trust attributes that quantify the anticipated utility gains or losses with respect to each anticipated outcome. Thus, relatively high levels of trust alone may not be adequate for a positive decision (e.g., if the minimum thresholds from the requirements are even higher). Similarly, it is possible to engage in a system even if one's trust in an acceptable outcome is low (e.g., if the utility gains from this outcome are sufficiently high).
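As a purely illustrative aside (not from the paper), such an outcome-based update of a probabilistic belief is often modeled with a Beta distribution over the probability of an acceptable outcome; a trustor engages when the expected value meets their subjective threshold.

```python
def update_trust(alpha, beta, outcome_ok):
    """Beta(alpha, beta) belief that the STS yields an acceptable outcome,
    updated after one observed transaction."""
    if outcome_ok:
        alpha += 1
    else:
        beta += 1
    return alpha, beta, alpha / (alpha + beta)  # posterior mean = trust level

# A trustor with threshold 0.8 engages only once enough good outcomes accrue.
```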
2.2 The notion "Trustworthiness"
We regard trustworthiness as an objective property of the STS, based on the existence (or nonexistence) of appropriate properties and countermeasures that reduce the likelihood of unacceptable outcomes. A stakeholder (e.g., the system designer, or a party performing certification) decides to what extent a system is trustworthy based on trustworthiness criteria. These criteria are logical expressions in terms of system attributes, referred to as quality attributes. For example, trustworthiness may be evaluated with respect to the confidentiality of sensitive information, the integrity of valuable information, the availability of critical data, or the response time or accuracy of outputs. Such quality attributes are quantified by measuring the system's (or individual components') properties and/or behavior. Objectivity in assessing trustworthiness for a particular attribute is based on meeting certain predefined metrics for this attribute, or on the compliance of the design process for this attribute with predefined system specifications. Thus, the trustworthiness of an STS may be evaluated against a target performance level, or the target may be its ability to prevent a threat from becoming active. Such issues are defined by the trustworthiness attributes, which have a dual interpretation. Until recently, trustworthiness was primarily investigated from a security or loyalty perspective, while assuming that single properties (certification, certain technologies or methodologies) of services lead to trustworthiness and even to users' trust. In contrast, we reasonably assume that such a one-dimensional approach is insufficient to capture all the factors that contribute to an STS's trustworthiness, and we instead consider a multitude of attributes.
In this paper, our definition of trustworthiness attributes reflects the design-time perspective. A trustworthiness attribute in this sense is a property of the system that indicates its capability to prevent potential threats from causing an unexpected and undesired outcome, e.g., an assurance of resilience such that the system will not produce an unacceptable outcome.
Trustworthiness of a software application
In order to prove trustworthy, software applications have to cover a set of quality attributes [START_REF] Gol Mohammadi | An Analysis of Software Quality Attributes and Their Contribution to Trustworthiness[END_REF], [START_REF] Mei | Internetware: A software paradigm for internet computing[END_REF] that depends on their domain and target users.
Trustworthiness should cover a wide spectrum including reliability, security, performance, and user experience. But trustworthiness is domain- and application-dependent and a relative attribute: if a system is trustworthy with respect to some quality of service (QoS) aspect like performance, it is not necessarily also secure. Consequently, trustworthiness and trust should not be regarded as a single construct with a single effect; they are rather strongly context-dependent, in such a way that the criteria and measures for objectively assessing the trustworthiness of a software application are based on specific context properties, like the application domain and the user groups of the software.
A broad range of literature has argued for and emphasized the relation between QoS and trustworthiness (e.g. [START_REF] Neto | Untrustworthiness: A Trust-Based Security Metric[END_REF], [START_REF] San-Martín | A Cross-National Study on Online Consumer Perceptions, Trust, and Loyalty[END_REF], [START_REF]Quality Reference Model for SBA. S-Cube -European Network of Excellence[END_REF], [START_REF]Software Engineering -Product quality -Part: Quality Model[END_REF], [START_REF] Gomez | An Anticipatory Trust Model for Open Distributed Systems: From Brains to Individual and Social Behavior[END_REF]). Trustworthiness is therefore influenced by a number of quality attributes beyond security-related ones.
In the context of this work we strictly adhere to the perspective of a to-be-constructed system, and therefore ignore potential trustworthiness attributes that become available at runtime at the earliest, such as reputation or similar concepts representing other users' feedback. Additionally, some literature proposes quality attributes (e.g. authentication, authorization, data encryption or access control) that actually refer to means for achieving certain properties of a system. These are not attributes themselves but means for establishing the corresponding attributes within the system. Such "attributes" were therefore not within the scope of our analysis.
In prior work, we have investigated the properties and attributes of a software system that determine the trustworthiness of the system. To this end, based on the S-Cube quality reference model [START_REF] Chen | A Novel Server-based Application Execution Architecture[END_REF], we built a taxonomy of attributes (shown in Fig. 1) that is a foundation for defining objective criteria and measures to assess the trustworthiness of a system. Some quality attributes referenced in the literature (e.g. [START_REF] Harris | The four levels of loyalty and the pivotal role of trust: a study of online service dynamics[END_REF], [START_REF] Yolum | Engineering self-organizing referral networks for trustworthy service selection[END_REF], [START_REF] Yan | An adaptive trust control model for a trustworthy component software platform, Autonomic and Trusted Computing[END_REF], [START_REF] Boehm | Quantitative Evaluation of Software Quality[END_REF]) refer to means for achieving a certain kind of property of a system. Therefore, we do not consider them trustworthiness attributes, but means to manifest the corresponding properties in the system. Only the attributes identified in the literature review as contributing to trustworthiness are included in the model. Some quality attributes, e.g. integrity, can be achieved, among other ways, through encryption. In this case, the high-level attribute (integrity) is included as a contributor to trustworthiness, but encryption is not, because it is encompassed by the higher-level attribute. We have included attributes that have been studied in the literature in terms of trustworthiness. Fig. 1 outlines the major result of that work; more details can be found in [START_REF] Leveson | Safety analysis using Petri nets[END_REF]. We have also identified some additional candidate attributes that may influence the trustworthiness of a system (e.g. provability or predictability). These potential trustworthiness attributes need further investigation regarding their impact on trustworthiness.
Based on these trustworthiness attributes, we have studied several software design methodologies with respect to the extent to which they address the systematic realization of trustworthiness in a system under development. In the next section, the results and evaluation of these studies are presented.
3
Review of Development Models and Practices
Recently, a number of development practices have been proposed, from both a theoretical and a practical point of view, to address the security of the software to be developed. As described above, security is an important component of trustworthy software, but it is neither the only one, nor is it sufficient to look solely at preserving or creating a good level of security to attain trustworthiness. For example, transparency plays an important role in the creation of trust, and therefore in the trustworthiness of software.
In this section, we look into the major software engineering processes or process enhancements that target security to build a "secure" software system, and identify corresponding innovation potential, specifically towards extending security to trustworthiness. A more exhaustive overview of development methodologies can, for instance, be found in Jayaswal and Patton's "Design for Trustworthy Software" [START_REF] Jayaswal | Design for Trustworthy Software: Tools, Techniques and Methodology for Developing Robust Software[END_REF], though it does not specify how these methodologies contribute to the trustworthiness of the product; it documents their generic characteristics and gives an overview of the historical evolution of different development strategies and lifecycle models.
We will briefly describe which elements of the development approaches will actually increase or inhibit trust, and how the approaches could be used for modeling trustworthiness.
Plan-driven
In a plan-driven process [START_REF] Royce | Managing the Development of Large Software Systems: Concepts and Techniques[END_REF] one typically plans and schedules all of the process activities before the work can start. The Waterfall model is a well-known example of plan-driven development that typically includes the following phases:
• Requirements analysis
• System design
• Implementation
• Testing (unit, integration and system testing)
• Deployment
• Operation and maintenance

Many of the simpler software manufacturing projects follow a plan-driven model. This approach has been followed by industrial software development for a long time.
It is relatively easy to assure non-functional requirements throughout the rest of the process, but the key issue is that they need to be identified completely in the first phase. Plan-driven processes such as the Waterfall model originate from aerospace and other manufacturing industries, where robustness and correctness are usually important concerns, but they are often considered too rigorous, inflexible and somewhat old-fashioned for many software development projects. There are examples of Waterfall-based trustworthy software development processes in the literature, e.g. COCOMO. Therefore, there should be means to assure trustworthiness and enhance the process.
There are also more formal variants of this process, for instance the B method [START_REF] Wordworth | Software Engineering with B[END_REF], where a mathematical model of the specification is created and then automatically transformed into code. For the general plan-driven process we consider the following trustworthiness characteristics to be valid:
Trustworthiness gains:
• Formal system variants are well suited to the development of systems that have stringent safety, reliability or security requirements, and thus potentially also trustworthiness requirements.
Trustworthiness losses:
• Vulnerable to vague, missing or incorrect security and trustworthiness requirements in the first place.
• Does not offer significant cost-benefits over other approaches, which on a tight budget can lead to less focus on trustworthiness.
• Little flexibility if new attacks or types of vulnerabilities are discovered late in the development process.
Usability for modeling trustworthiness
In a plan-driven process one can apply structured testing on units as well as on a system as a whole. In addition, it is relatively easy to keep track of the implementation of safety, reliability or security and potentially also trustworthiness requirements. As such, the plan-driven approach supports modeling in general, but not specifically for trustworthiness.
Incremental
Incremental development (cf. [START_REF] Sommerville | Software Engineering. 9 th Edition[END_REF]) represents a broad range of related methodologies where initial implementations are presented to the user at regular intervals until the software satisfies the user's expectations (or the money runs out). A fundamental principle is that not all requirements can be known completely prior to development; thus, they evolve as the software is being developed. Incremental development covers most agile approaches and prototype development, although it could be enhanced by other approaches to become more formal in terms of trustworthiness.
Trustworthiness gains:
• New and evolving requirements for trust may be incorporated as part of an iterative process.
• The customer will have a good sense of ownership and understanding of the product after participating in the development process.
Trustworthiness losses:
• Mismatch between organizational procedures/policies and a more informal or agile process.
• Little documentation, increasing complexity and long-lifetime systems may result in security flaws. In particular, non-functional aspects that cut across the implementation of different software features tend to be poorly documented.
• Security and trustworthiness can be difficult to test and evaluate, especially for the user, and may therefore lose focus during development.
Incremental development allows new and evolving requirements for trustworthiness to be incorporated as part of an iterative process. Iterative processes allow for modeling of properties, but changes to the model that reflect changed or more detailed customer expectations will in turn require changing the design and code, possibly in another iteration. Additionally, there are no specific trustworthiness modeling capabilities.
Reuse-oriented
Very few systems today are created completely from scratch; in most cases there is some reuse of design or code from other sources within or outside the organization (cf. [START_REF] Sommerville | Software Engineering. 9 th Edition[END_REF]). Existing code can typically be used as-is, modified as needed, or wrapped with an interface. Reuse is of particular relevance for service-oriented systems, where services are mixed and matched in order to create larger systems. Reuse-oriented methodologies can be very ad-hoc, and often there are no other means to assure trustworthiness.
Trustworthiness gains:
• The system can be based on existing parts that are known to be trustworthy. This does not, however, mean that the composition is just as trustworthy as the sum of its parts.
• An existing, trustworthy part may increase trust (e.g. a known, trusted authentication mechanism).
Trustworthiness losses:
• Use of components that are "not-invented-here" leads to uncertainty.
• Increased complexity due to heterogeneous component assembly.
• The use of existing components in a different context than originally targeted may, under certain circumstances (e.g. unmonitored re-use of in-house developed components), jeopardize an existing security / trustworthiness property.
This approach has both pros and cons regarding trustworthiness modeling. On the positive side, already existing, trustworthy and trusted components may make trustworthiness modeling easier for the overall solution; adequate software assurance, e.g. a security certification, or source code availability may help in improving the trustworthiness of re-used "foreign" components. The drawback is the risk that the trustworthiness of the combined system may decrease due to the combination with less trustworthy components.
Model-driven
Model-driven engineering (MDE) [START_REF] Schmidt | Model-Driven Engineering[END_REF] (encompassing the OMG term Model-driven Architecture (MDA) and others) refers to the process of creating domain models to represent application structure, behavior and requirements within particular domains, and the use of transformations that can analyze certain aspects of these models and then create artifacts such as code and simulators. A lot of the development effort is put into the application design, and the reuse of patterns and best practices is central during the modeling.
Trustworthiness gains:
• Coding practices that are deemed insecure or unreliable can be eliminated through the use of formal reasoning.
• Coding policies related to trustworthiness, reliability and security can be systematically added to the generated code.
• Problems that lead to trustworthiness concerns can, at least theoretically, be detected early during model analysis and simulation.
• Separation of concerns allows trust issues to be treated independently of the platform, and also enables less complicated models and a better combination of different expertise.
Trustworthiness losses:
• Systems developed with such methods tend to be expensive to maintain, and may therefore suffer from a lack of updates.
• Requires significant training and tool support, which might become outdated.
• A structured, model-driven approach does not prevent security and trustworthiness requirements from being forgotten.
• Later changes during development require reviewing and potentially changing the model.
• The (time and space) complexity of the formal verification of especially non-functional properties may lead to omitting certain necessary computations when the project is under time and resource pressure.
With a model-driven approach it is possible to eliminate design and coding practices deemed insecure or unreliable. An early model analysis and simulation with regard to trustworthiness concerns is possible and of high value. In addition, model-driven security tests could improve trustworthiness. In general, however, there are no trustworthiness-specific modeling properties; the approach is merely model-driven. The major drawback (and risk) is that the computational complexity of verifying non-functional properties is very high.
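To illustrate the idea of systematically injecting a trustworthiness-related policy into generated code, consider the following Python sketch (our own toy example, not a real MDE toolchain; the model, template, and policy are hypothetical): every operation declared in a small declarative model is generated with an input-length check, so the policy cannot be forgotten by a developer.

```python
# Toy model-driven generation sketch: a declarative model plus a template,
# with a length-checking policy woven into every generated handler.

MODEL = {  # hypothetical domain model: operations and string-length limits
    "operations": [
        {"name": "create_user", "params": {"login": 32, "email": 254}},
        {"name": "rename_user", "params": {"login": 32}},
    ]
}

TEMPLATE = '''def {name}({args}):
{checks}    return "ok"
'''

def generate(model: dict) -> str:
    """Emit one function per modeled operation; every string parameter
    gets a generated length check, enforcing the policy uniformly."""
    out = []
    for op in model["operations"]:
        args = ", ".join(op["params"])
        checks = "".join(
            f'    if len({p}) > {limit}: raise ValueError("{p} too long")\n'
            for p, limit in op["params"].items()
        )
        out.append(TEMPLATE.format(name=op["name"], args=args, checks=checks))
    return "\n".join(out)

exec(generate(MODEL))                 # materialize the generated handlers
print(create_user("alice", "a@b.c"))  # ok
```

The value lies in the uniformity: the check is a property of the generator, not of each hand-written handler, which matches the gain listed above that coding policies can be systematically added to generated code.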
Test-driven
Test-driven development is considered part of agile development practices. In test-driven development, developers first implement test code that exercises the corresponding requirements, and only after that the actual code of a module, a function, a class, etc. The main purpose of test-driven development is to increase test coverage, thereby allowing for higher quality assurance and thus better requirements coverage, specifically for non-functional aspects. The drawback of test-driven approaches is that, due to the necessary micro-iterations, the design of the software is subject to ongoing changes. This makes, e.g., the combination of model-driven and test-driven approaches practically impossible.
Trustworthiness gains:
• The high degree of test coverage (that could be up to 100%) assures the implementation of trustworthiness related requirements.
Trustworthiness losses:
• The programming technique cannot be combined with (formal) assurance methodologies, e.g. using model-driven approaches, Common Criteria, or formal verification.
Test-driven development is well suited for assuring the presence of well-described trustworthiness requirements. Moreover, this approach can successfully be used to address changes in the threat landscape. A major drawback, though, is that it cannot easily be combined with modeling techniques that are used for formal assurance methodologies.
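As a minimal illustration of the test-first principle (the requirement, function name, and masking rule below are hypothetical and not taken from any of the cited methodologies), the test encodes a confidentiality-related requirement and is written before the implementation that satisfies it:

```python
import unittest

# Step 1 (written first): a test encoding a trustworthiness-related
# requirement, here that stored identifiers are never echoed back in full.
class TestMasking(unittest.TestCase):
    def test_identifier_is_masked(self):
        self.assertEqual(mask_identifier("4556737586899855"), "************9855")

    def test_short_input_fully_masked(self):
        self.assertEqual(mask_identifier("42"), "**")

# Step 2 (written second): the minimal implementation making the tests pass.
def mask_identifier(value: str) -> str:
    if len(value) <= 4:
        return "*" * len(value)
    return "*" * (len(value) - 4) + value[-4:]

if __name__ == "__main__":
    unittest.main()
```

Because the test precedes the code, the non-functional requirement is covered by construction; what the approach cannot provide, as noted above, is a formal model of why the requirement holds.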
Common Criteria ISO 15408
The Common Criteria (CC) is a standardized approach [START_REF]:Information technology -Security techniques -Evaluation criteria for IT security -Part 1: Introduction and general model[END_REF] to evaluating the security properties of (information) systems. A "Target of Evaluation" is tested against so-called "Security Targets" that are composed of given Security Functional Requirements and Security Assurance Requirements (addressing both development and operations) and are selected based on a protection requirement evaluation. Furthermore, the evaluation can be performed at different strengths called "Evaluation Assurance Levels". On the downside, there are some disadvantages: the development model is quite rigid and does not easily allow for adjustment to specific environments. Furthermore, Common Criteria is an "all-or-nothing" approach: one can limit the Target of Evaluation or the Evaluation Assurance Level, but it is rather difficult to then express the overall security / trustworthiness of a system with metrics related to CC.

Trustworthiness gains:
• Evaluations related to security and assurance indicate to what level the target application can be trusted.
• CC evaluations are performed by (trusted) third parties.
• There are security profiles for various types of application domains.
Trustworthiness losses:
• Protection profiles are not tailored for Cloud services.
• A CC certification can be misunderstood to prove the security / trustworthiness of a system, but it actually only provides evidence for a very specific property of a small portion of the system.
The Common Criteria approach is unrelated to modeling in general, although the higher evaluation assurance levels would benefit from modeling. The security functional requirements may well serve as input for (security-related) trustworthiness modeling, whereas the security assurance requirements, as properties of the development process itself, would be used for modeling the developing organization. Note that these constitute two different modeling approaches.
ISO 21827 Systems Security Engineering -Capability Maturity Model
The Systems Security Engineering Capability Maturity Model (SSE-CMM) is a specific application of the more generic Capability Maturity Model of the Software Engineering Institute at Carnegie Mellon University. SSE-CMM originated in 1996 as an initiative of the NSA, but was later handed over to the International Systems Security Engineering Association, which published it as ISO 21827 in 2003. In contrast to the previous examples, SSE-CMM targets the developing organization and not the product / service to be developed. There are a number of so-called "base practices" (11 security base practices and 11 project and organizational base practices) that can be fulfilled at different levels of maturity. The maturity levels are identical to those of CMM.
Trustworthiness gains:
• The developing organization gains more and more experience in developing secure and, more generally, good-quality software.
• The use of a quality-related maturity model implies that user-centric non-functional requirements, such as security and trustworthiness, will be taken into account.
Trustworthiness losses:
• This is an organizational approach rather than a system-centric approach; hence there is no real guarantee about the trustworthiness of the developed application (which could, e.g., be put to use in another way than intended).
This approach focuses on developing the trustworthiness of the developing organization instead of the to-be-developed software, service or system. The security base practices may serve as input for modeling trustworthiness requirements when modeling the development process.
Building Security In Maturity Model / OpenSAMM
The Building Security In Maturity Model (BSIMM) [START_REF] Mcgraw | A Software Security Framework: Working Towards a Realistic Maturity Model[END_REF] initiative has recognized the caveat of ISO 21827 being oriented towards the developing organization, and has proposed a maturity model that is centered around the software to be developed. It defines activities in four groups (Governance, Intelligence, SSDL Touchpoints, Deployment) that are rated in their maturity according to three levels. OpenSAMM is a very similar approach with the same origin, but it developed slightly differently and is now an OWASP project. This standard presents an ideal starting point for developing trustworthiness activities within an organization, since it allows tracking the maturity of the development process in terms of addressing security requirements; this could also be used for trustworthiness.
Trustworthiness gains:
• The maturity-oriented approach requires the identification of security (and potentially trustworthiness) properties and assures their existence according to different levels of assurance.
• The probability of producing a secure (and trustworthy) system is high.
Trustworthiness losses:
• There is no evidence that the system actually is trustworthy or secure.
This approach aims to develop the trustworthiness of the developing organization, instead of the to-be-developed software, service, or system. The security base practices may serve as input for modeling trustworthiness requirements when modeling the development process.
Microsoft SDL
In 2001, Microsoft started the security-oriented software engineering process that has probably had the largest impact across the whole software industry. Yet the "process" was more a collection of individual activities along the software development lifecycle than a truly structured approach. The focal point of the Microsoft SDL, which has been adopted by a large number of organizations in different variants, is that every single measure was optimized over time to either have a positive ROI or be dropped again. This has resulted in a number of industry-proven best practices for enhancing the security of software. Since there is no standardized list of activities, there is no benchmark to map activities against.
Trustworthiness gains:
• The world's largest software manufacturer does use this approach.
• The identified measures have proven to be usable and effective over the course of more than a decade.
Trustworthiness losses:
• There is no evidence that the system actually is trustworthy or even secure.
Microsoft SDL is development-related threat modeling and was Microsoft's major investment to increase the trustworthiness of its products (the "Trustworthy Computing Initiative"). Comparability is only given if more detailed parameters are specified. For the modeling of trustworthiness, this method is only of limited help.
Methodologies not covered in this paper
During the analysis process, a significant number of other methodologies and approaches were investigated, among others ISO 27002, OWASP CLASP and TOGAF. We omit them here since they either replicate some of the capabilities already mentioned above or because their contribution to trustworthiness turned out to be rather small.
Conclusions from the State of the Art Analysis
After having analyzed the different methodologies and best practices, we can make two major observations. The first observation is related to the nature of the methodologies and best practices. There are two major types of approaches:
• Evidence-based approaches that concentrate on evidences, i.e. some sort of qualitative "proof" that a certain level of security, safety etc. is actually met, and
• Improvement-based approaches that concentrate on improving the overall situation within the software developing organization with regards to more or less specific requirements.
Evidence-based approaches are typically relatively rigid and therefore often not used in practice, unless there is an explicit need, e.g. for a certification in a specific market context. The origin of evidence-based approaches is either research or a strongly regulated market, such as the defense sector. In contrast, improvement-based approaches allow for customization and are therefore much better suited for application in various industries, but they generally lack the possibility to create any kind of evidence that the developed software actually fulfills even fundamental trustworthiness expectations. Assuming that evidence-based and improvement-based approaches are, graphically speaking, at the opposite ends of a continuous one-dimensional space, a way to improve the trustworthiness of software applications might be to identify approaches that sit in between these two types (for example, by picking and choosing elements of different approaches, augmented with some additional capabilities). One option might be to ease the burden of qualitative evidence creation by switching to, or encompassing, evidences based on quantitative aspects. We propose to investigate how metrics for the trustworthiness attributes presented in Section 2 can be used to create evidence by applying selected elements of the improvement-based approaches.
A second major observation relates to the scope of the activities described in the methodologies and best practices. There are three types of "scope":
• Product-centric approaches emphasize the creation and/or verification of attributes of the to-be-developed software,
• Process-centric approaches concentrate on process steps that need to be adhered to in order to enable the fulfillment of the expected goal, and
• Organization-centric approaches focus on the capabilities of the developing organization, looking at a longer-term enablement to sustainably develop trustworthy software.
Some approaches combine scopes, e.g. Common Criteria mandates verifying both product-related and process-related requirements, whereas others, such as SSE-CMM [START_REF]Information technology -Systems Security Engineering -Capability Maturity Model[END_REF], concentrate on only one scope. Current scientific discussions of trustworthiness-related attributes mainly focus on product-centric approaches, which is understandable given that this is the only approach that produces evidence on the software itself, whereas practices used in industry often tend towards a more process- or even organization-centric approach (SSE-CMM, CMM, ISO 9001). We therefore propose to investigate how to evolve the above-mentioned evidence-based activities around metrics towards covering process- and organization-centric approaches.
Conclusion and Future Work
In this paper we presented an overview of how existing development methods and practices support the development of trustworthy software. To this aim, we first elaborated on the notions of trust and trustworthiness and presented a general taxonomy for trustworthiness attributes of software. Then we analyzed some well-known general software development methodologies and practices with respect to how they support the development of trustworthy software.
As we have shown in this paper, existing software design methodologies have some capacity for ensuring security. However, the treatment of other trustworthiness attributes and requirements in software development is not yet well studied. Trustworthiness attributes that have a major impact on the acceptance of an STS must be taken into account, analyzed, and documented as thoroughly as possible. In this way, the transparency of the decisions taken during development will potentially remove the uncertainty of stakeholders of the respective software.
The main ideas and findings of our work will be investigated further. It is important to understand how the trustworthiness attributes and the corresponding system properties can be addressed in the system-to-be in a systematic way. As a next step, we will investigate trustworthiness evaluation techniques for enabling and providing effective measurements and metrics to assess the trustworthiness of systems under development. Furthermore, we will develop an Eclipse Process Framework (EPF) based plug-in that supports the process of establishing trustworthiness attributes in a system and guides the developer through the development activities. Using this plug-in during the development process, the project team will be supported by guidelines, architectural patterns, and process chunks for developing trustworthy software, and later on in analyzing the results and evaluating the trustworthiness of the developed software.
Fig. 1. Attributes that determine trustworthiness of a software application during development. [The figure shows a taxonomy whose top-level trustworthiness attribute groups are Security, Compatibility, Configuration-related quality, Compliance, Cost, Data-related quality, Dependability, Performance, Usability, Correctness and Complexity, each refined into sub-attributes such as confidentiality, integrity, availability, accountability, reliability, response time, scalability and maintainability.]
Acknowledgements
This research was carried out with the help of the European Commission's 7th Framework Programme, notably the project "OPTET". We specifically would like to thank all participants of Work Package 3 for contributing to the analysis of the methodologies and best practices. | 38,555 | [
"1004389",
"998681",
"998693"
] | [
"479602",
"300612",
"300612"
] |
01492834 | en | [
"info"
] | 2024/03/04 23:41:50 | 2013 | https://inria.hal.science/hal-01492834/file/978-3-642-40779-6_5_Chapter.pdf | Y Sreenivasa Rao
email: [email protected]
Ratna Dutta
Decentralized Ciphertext-Policy Attribute-Based Encryption Scheme with Fast Decryption
Keywords: attribute-based encryption, decentralized, multi-authority, monotone access structure
In this paper, we propose an efficient multi-authority decentralized ciphertext-policy attribute-based encryption scheme dCP-ABE-MAS for monotone access structures (MAS). Our setup is without any central authority (CA) where all authorities function entirely independently and need not even be aware of each other. The scheme makes use of the minimal authorized sets representation of MAS to encrypt messages, and hence the size of ciphertext is linear in the number of minimal authorized sets in MAS and the number of bilinear pairings is constant during decryption. We describe several networks that can use dCP-ABE-MAS to control data access from unauthorized nodes. The proposed scheme resists collusion attacks and is secure against chosen plaintext attacks in the generic bilinear group model over prime order bilinear groups.
Introduction
In Attribute-Based Encryption (ABE), each user is ascribed a set of descriptive attributes (or credentials), and a secret key and a ciphertext are associated with an access policy or a set of attributes. Decryption is then successful only when the attributes of the ciphertext or the secret key satisfy the access policy. ABE is classified as Key-Policy ABE (KP-ABE) [START_REF] Goyal | Attribute Based Encryption for Fine-Grained Access Control of Encrypted Data[END_REF] or Ciphertext-Policy ABE (CP-ABE) [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF] according to whether the secret key or the ciphertext is associated with an access policy, respectively. Since the invention of ABE [START_REF] Sahai | Fuzzy Identity-Based Encryption[END_REF], several improved ABE schemes [START_REF] Goyal | Attribute Based Encryption for Fine-Grained Access Control of Encrypted Data[END_REF][START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF][START_REF] Waters | Ciphertext-Policy Attribute-Based Encryption: An Expressive, Efficient, and Provably Secure Realization[END_REF][START_REF] Ibraimi | Efficient and Provable Secure Ciphertext-Policy Attribute-Based Encryption Schemes[END_REF] have been proposed. All the foregoing ABE schemes make use of a single trusted central authority (CA), which must never be compromised, to control the universe of attributes and issue secret keys to users. Consequently, the CA can decrypt every ciphertext in the system encrypted under any access policy by computing the required secret keys at any time; this is the key escrow problem of ABE. A solution to help mitigate the key escrow problem is distributing the functionality of the CA over many potentially untrusted authorities in such a way that, as long as some of them are honest, the system remains secure. An ABE with this mechanism is the so-called multi-authority ABE. In this scenario, each authority controls a different domain of attributes and issues attribute-related secret keys to users.
Chase [START_REF] Chase | Multi-authority Attribute Based Encryption[END_REF] devised the first multi-authority ABE as an affirmative solution to the open problem posed by Sahai and Waters [START_REF] Sahai | Fuzzy Identity-Based Encryption[END_REF]; it consists of one fully trusted central authority (CA) and multiple (attribute) authorities. Every user is assigned a unique global identifier, and the keys from different authorities are bound together by this identifier to counteract the collusion attack, wherein multiple users pool their secret keys obtained from different authorities to decrypt a ciphertext that they are not individually entitled to. As the CA holds the system's master secret, it can decrypt all the ciphertexts in the system, and hence this construction does not resist key escrow. The first CA-free multi-authority ABE was proposed by Lin et al. [START_REF] Lin | Secure Threshold Multi Authority Attribute Based Encryption without a Central Authority[END_REF], wherein a Distributed Key Generation (DKG) protocol and a Joint Zero Secret Sharing (JZSS) protocol are deployed to remove the CA. All authorities must interact to execute the DKG and JZSS protocols during the system setup phase. However, the scheme is collusion-resistant only up to a collusion of m users, where m is a system-wide parameter that must be fixed during setup, and the number of JZSS protocol executions, the computation cost and the communication cost are all linear in m. Chase and Chow [START_REF] Chase | Improving Privacy and Security in Multi-Authority Attribute-Based Encryption[END_REF] proposed a CA-free multi-authority ABE with user privacy that resolves the key escrow problem using distributed Pseudo Random Functions (PRF). In this setting, each pair of authorities communicates via a 2-party key exchange protocol to generate users' secret keys during the setup phase, which incurs O(N^2) communication overhead on the system, where N is the fixed number of authorities. The foregoing constructions [START_REF] Chase | Multi-authority Attribute Based Encryption[END_REF][START_REF] Lin | Secure Threshold Multi Authority Attribute Based Encryption without a Central Authority[END_REF][START_REF] Chase | Improving Privacy and Security in Multi-Authority Attribute-Based Encryption[END_REF] can only handle a set of authorities whose number is fixed at system initialization, and they exploit AND-gate access policies in the key-policy setting to prevent unauthorized data access.
Müller et al. [START_REF] Müller | On Multi-Authority Ciphertext-Policy Attribute-Based Encryption[END_REF] gave two multi-authority CP-ABE schemes which employ one CA and several authorities, where the authorities work independently from each other. However, the CA can still decrypt all ciphertexts in the system. The first construction uses Disjunctive Normal Form (DNF) access policies to annotate ciphertexts, thereby achieving constant computation cost during decryption. The second scheme realizes any Linear Secret Sharing Scheme (LSSS) access policy, and hence the computation cost for successful decryption is linear in the minimum number of attributes required to compute the target vector, i.e., a vector that contains the secret as one of its components. Lewko and Waters [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] proposed a novel multi-authority CP-ABE scheme without CA that is decentralized, where all authorities function entirely independently and need not even be aware of each other. The concept of the global identifier introduced by Chase [10] is used to "link" together attribute-related secret keys that are issued to the same user by different authorities; this in turn achieves collusion resistance among any number of users. The same scheme works on both composite order and prime order bilinear groups. The security of the former is given in the random oracle model and the security of the latter is analyzed in the generic group model. In both cases, the monotone access structures are realized by LSSS, the ciphertext size is linear in the size of the LSSS, and the number of pairings is linear in the minimum number of attributes that satisfy the LSSS. Liu et al. [START_REF] Liu | Fully Secure Multi-authority Ciphertext-Policy Attribute-Based Encryption without Random Oracles[END_REF] devised an LSSS-realizable multi-authority CP-ABE system which has multiple CAs and authorities. The scheme is adaptively secure without random oracles, unlike [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF]. In all the multi-authority KP/CP-ABE schemes discussed so far, except the CA-based one in [START_REF] Müller | On Multi-Authority Ciphertext-Policy Attribute-Based Encryption[END_REF], the ciphertext size is linear in the size of the monotone span program or the number of attributes associated with the ciphertext, and the number of bilinear pairing computations is linear in the minimum number of attributes required for successful decryption. Access control schemes with constant computation and low communication cost are more practical where computing resources are limited and bandwidth is a primary concern. For these reasons, we provide a solution to help mitigate the problem of large ciphertext size and a linear number of bilinear pairings in designing multi-authority ABE schemes. Our Contribution. We propose dCP-ABE-MAS, which is a multi-authority CP-ABE in a decentralized setting for any monotone access structure (MAS). Every MAS, A, can uniquely be represented by a set A_0 of minimal authorized sets in A (see Section 2.1). This scheme has the same functionality as the most robust and scalable multi-authority CP-ABE [8] to date.
Even though the schemes [START_REF] Chase | Improving Privacy and Security in Multi-Authority Attribute-Based Encryption[END_REF][START_REF] Lin | Secure Threshold Multi Authority Attribute Based Encryption without a Central Authority[END_REF] exclude the requirement of the CA, they are not fully decentralized, since the number of authorities is fixed ahead of time and all authorities communicate with each other during system setup, unlike [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF]. That is why we compare (in Table 1) our dCP-ABE-MAS only with the decentralized scheme of [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] in the prime order bilinear group setting.
Table 1. Comparison with the decentralized scheme of [8] over prime order bilinear groups.

Scheme | KeyGen: E_G | User Secret Key Size | Encrypt: E_G | Encrypt: E_{G_T} | Ciphertext Size | Decrypt: E_{G_T} | Decrypt: Pe | Access Policy
[8] | 2γ | γ B_G | 3α | 2α + 1 | 2α B_G + (α + 1) B_{G_T} + τ | O(β) | O(β) | LSSS
Our | 2γ | γ B_G | 2k | k | 2k B_G + k B_{G_T} + τ | - | 2 | any MAS

E_G (or E_{G_T}) = number of exponentiations in G (or G_T), B_G (or B_{G_T}) = bit size of an element of G (or G_T), Pe = number of pairings, τ = size of the access policy description, γ = number of attributes held by a user, α = size of the LSSS, β = minimum number of attributes needed to satisfy the LSSS, k = number of minimal authorized sets in the MAS.
The ciphertext size in [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] is linear in the size, α, of the LSSS, while the ciphertext size in our construction grows linearly with k, the number of minimal authorized sets in the MAS. For a (t, n)-threshold policy, where 1 < t < n, the value of k = n!/((n-t)! t!), which can be larger than n (e.g., for t = 2 and n = 4, k = 6 while α = 4), whereas there exists an LSSS of size α = n realizing the (t, n)-threshold policy. However, there are several classes of MAS for which the value of k is constant but the size of the monotone span program (or LSSS) computing the MAS is at least polynomial in the number of attributes in the access structure. As a trivial case, if one uses a single AND-gate with n attributes, the value of k is 1, while the size of the LSSS equals n, i.e., α = n. We now consider some non-trivial cases from [START_REF] Pandit | Efficient Fully Secure Attribute-Based Encryption Schemes for General Access Structures[END_REF]. Let A_0 = {B_1 = {a_1, . . . , a_{n/2}}, B_2 = {a_{n/2+1}, . . . , a_n}} be the set of minimal sets of a MAS, A, over n attributes a_1, . . . , a_n. Then k = 2, while the size, α, of an LSSS computing A is at least O(n). Similarly, if A_0 = {B_1 = {a_1, . . . , a_{n/3}}, B_2 = {a_{n/3+1}, . . . , a_{2n/3}}, B_3 = {a_{2n/3+1}, . . . , a_n}} is the set of minimal sets of a MAS, A, then k = 3 but the size, α, of an LSSS computing A is at least O(n) (for more details see Section 2.1 in [13]). Thus, in such cases, our dCP-ABE-MAS scheme exhibits shorter ciphertexts. Moreover, our approach requires only 2 pairing computations to decrypt any ciphertext. The user secret key size is linear in the number of attributes associated with the user.
An inherent drawback of [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] is that every authority can independently decrypt every ciphertext in the system whenever the set of attributes controlled by that authority satisfies the LSSS access structure associated with the ciphertext. However, this can be avoided if each authorized set contains attributes from at least two different authorities. The same problem is eliminated in our dCP-ABE-MAS if each minimal authorized set contains attributes from at least two different authorities. This fact follows from the satisfiability condition given in Definition 2.
We discuss how our dCP-ABE-MAS can provide attractive solutions for fine-grained access control in various network scenarios and compare our work with existing work in the area. Additionally, our multi-authority scheme provides a mechanism for packing multiple messages in a single ciphertext. This in turn reduces network traffic significantly. The proposed scheme is proven to be collusion-resistant and is secure against chosen plaintext attacks in the generic bilinear group model. To the best of our knowledge, our proposed multi-authority CP-ABE scheme is the only scheme in a decentralized framework where the decryption time is constant for general MAS.
Preliminaries
Definition 1. Let G and G_T be multiplicative cyclic groups of prime order p. Let g be a generator of G. A mapping e : G × G → G_T is said to be bilinear if e(u^a, v^b) = e(u, v)^{ab} for all u, v ∈ G and a, b ∈ Z_p, and non-degenerate if e(g, g) ≠ 1_T (where 1_T is the unit element in G_T). We say that G is a bilinear group if the group operation in G can be computed efficiently and there exists G_T for which the bilinear map e : G × G → G_T is efficiently computable.
Access Structure
In this section, we briefly review the concept of general access structures [START_REF] Stinson | Cryptography: Theory and Practice[END_REF]. Let U be the universe of attributes and |U| = n. Let P(U) be the collection of all subsets of U. Every subset of P(U) \ {∅} is called an access structure. An access structure A is said to be a monotone access structure (MAS) if

{C ∈ P(U) | C ⊇ B, for some B ∈ A} ⊆ A.

The sets in A are called the authorized sets and the sets not in A are called the unauthorized sets with respect to the monotone access structure A. Thus every superset of an authorized set is again an authorized set in a MAS.
A set B in a monotone access structure A is a minimal authorized set in A if, for every set D (≠ B) with D ⊆ B, we have D ∉ A.
The set of all minimal authorized sets of A, denoted by A_0, is called the basis of A. Then we can generate A from its basis A_0 as follows:

A = {C ∈ P(U) | C ⊇ B, for some B ∈ A_0}. (1)
Lemma 1. The monotone access structure A given in Eq. (1) is generated uniquely from its basis A_0.

Proof. Suppose A′ is another monotone access structure generated from A_0. Then A′ = {C ∈ P(U) | C ⊇ B′, for some B′ ∈ A_0}. We shall prove that A = A′. Let C ∈ A. Then by Eq. (1), we have U ⊇ C ⊇ B, for some B ∈ A_0, and hence C ∈ A′. Therefore, A ⊆ A′. Similarly, we can show A′ ⊆ A. Thus, A = A′.
In sum, every monotone access structure can be represented by its basis.
Definition 2. Let A be a monotone access structure and A_0 its basis. A set L of attributes satisfies A, denoted L |= A, if and only if L ⊇ B for some B ∈ A_0; otherwise L does not satisfy A, denoted L ⊭ A.
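The following Python sketch (our illustration; the attribute strings are arbitrary) makes Eq. (1) and the satisfiability check of Definition 2 concrete: a set L satisfies the monotone access structure generated by A_0 exactly when L contains some minimal authorized set.

```python
# A minimal sketch of Definition 2 and Eq. (1): a monotone access structure
# is represented by its basis A0 (the minimal authorized sets), and a set L
# of attributes satisfies it iff L contains some minimal authorized set.

def satisfies(L: set, A0: list) -> bool:
    """L |= A  iff  L ⊇ B for some B in the basis A0."""
    return any(B <= L for B in A0)

# Basis of a monotone access structure over attributes a1..a4:
A0 = [{"a1", "a2"}, {"a3", "a4"}]

print(satisfies({"a1", "a2", "a3"}, A0))  # True  (contains {a1, a2})
print(satisfies({"a1", "a3"}, A0))        # False (contains no minimal set)
```

Representing the structure by its basis keeps the check linear in the number of minimal authorized sets, which is exactly the quantity k that drives the ciphertext size of the scheme below.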
Decentralized CP-ABE System
A decentralized CP-ABE system is composed mainly of a set A of authorities, a trusted initializer and users. The only responsibility of the trusted initializer is the generation, once during system initialization, of global public parameters, which are system-wide public parameters available to every entity in the system. Each authority A_j ∈ A controls a different set U_j of attributes and issues the corresponding secret attribute keys to users. We note here that all authorities work independently; as such, every authority is completely unaware of the existence of the other authorities in the system. Each user in the system is identified by a unique global identity ID ∈ {0, 1}* and is allowed to request secret attribute keys from the different authorities. At any point of time, each user with identity ID possesses a set of secret attribute keys that reflects a set L_ID of attributes, which we call the attribute set of the user with identity ID.
Let U = ∪_{A_j ∈ A} U_j, where U_{j_1} ∩ U_{j_2} = ∅ for all j_1 ≠ j_2, be the attribute universe of the system. Due to the lack of global coordination between authorities, different authorities may hold the same attribute string. To handle such scenarios, we can treat each attribute as a tuple consisting of the attribute string and the controlling authority identifier, for example ("supervisor", j), where the attribute "supervisor" is held by the authority A_j. Consequently, the attributes ("supervisor", j_1) and ("supervisor", j_2) are considered distinct as long as j_1 ≠ j_2.
The decentralized CP-ABE system consists of the following five algorithms.

System Initialization(κ). At the initial system setup phase, a trusted initializer chooses global public parameters GP according to the security parameter κ. Any authority or any user in the system can make use of these parameters GP in order to perform their executions.

Authority Setup(GP, U_j). This algorithm is run by every authority A_j ∈ A once during initialization. It accepts as input the global public parameters GP and a set of attributes U_j for the authority A_j, and outputs the public key PubA_j and master secret key MkA_j of the authority A_j.

Authority KeyGen(GP, ID, a, MkA_j). Every authority executes this algorithm upon receiving a secret attribute key request from a user. It takes as input the global public parameters GP, a global identity ID of a user, an attribute a held by the authority, and the master secret key of that authority. It returns a secret attribute key SK_{a,ID} for the identity ID.

Encrypt(GP, M, A, {PubA_j}). This algorithm is run by an encryptor. It takes as input the global public parameters GP, a message M to be encrypted, an access structure A, and the public keys of the relevant authorities corresponding to all attributes appearing in A. It encrypts M under A and returns the ciphertext CT, where A is embedded in CT.

Decrypt(GP, CT, {SK_{a,ID} | a ∈ L_ID}). On receiving a ciphertext CT, a decryptor with identity ID runs this algorithm with input the global public parameters GP, a ciphertext CT which is an encryption of M under A, and a set {SK_{a,ID} | a ∈ L_ID} of secret attribute keys obtained for the same identity ID. It outputs the message M if the user attribute set L_ID satisfies the access structure A; otherwise, decryption fails.
Security Model
Following [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF], we define a security model in terms of a game which is carried out between a challenger and an adversary, where the challenger plays the role of all authorities. The adversary can corrupt authorities statically, i.e., the adversary has to announce the list of corrupted authorities before obtaining the public keys of honest authorities, whereas key queries can be made adaptively. Setup. First, the challenger obtains global public parameters GP. The adversary announces a set A ⊂ A of corrupt-authorities. Now, the challenger runs Authority Setup algorithm for each honest authority and gives all public keys to the adversary. Key Query Phase 1. The adversary is allowed to make secret key queries for the attributes coupled with user global identities (a, ID), where the attributes a are held by honest authorities. The challenger runs Authority KeyGen algorithm and returns the corresponding secret keys SK a,ID to the adversary. Challenge. The adversary submits two equal length messages M 0 , M 1 and an access structure A. The access structure A must obey the following constraint. Let F be a set of attributes belonging to the corrupt-authorities that are in A.
For each identity ID, let F ID be the set of attributes in A for which the adversary has queried (a, ID). For each identity ID, the attribute set F ∪ F ID must not satisfy the access structure A, i.e., (F ∪ F ID ) |= A. The adversary needs to give the challenger the public keys of corrupt-authorities whose attributes are in A. Now, The challenger flips a random coin µ ∈ {0, 1} and runs Encrypt algorithm in order to encrypt M µ under A. The resulting challenge ciphertext CT * is given to the adversary. Key Query Phase 2. The adversary can make additional secret key queries for (a, ID) with the same restriction on the challenge access structure stated in Challenge phase. Guess. The adversary outputs a guess bit µ ∈ {0, 1} for the challenger's secret coin µ and wins if µ = µ.
The advantage of an adversary in this game is defined to be |Pr[µ′ = µ] − 1/2|, where the probability is taken over all random coin tosses of both adversary and challenger.
Definition 3. The decentralized CP-ABE system is said to be IND-CPA (ciphertext indistinguishability under chosen plaintext attacks) secure against static corruption of authorities if all polynomial time adversaries have at most a negligible advantage in the above security game.
dCP-ABE-MAS
In this section, we present a decentralized CP-ABE scheme for monotone access structures, dCP-ABE-MAS. Note that every monotone access structure A is represented by its basis A 0 which is the set of minimal authorized sets in A.
System Initialization(κ). During the system initialization phase, a six-tuple GP = (p, G, g, G_T, e, H) is chosen as global public parameters, where p is a prime number greater than 2^κ, G and G_T are two multiplicative cyclic groups of the same prime order p, g is a generator of G, e : G × G → G_T is a bilinear map, and H : {0, 1}* → G is a collision resistant hash function which will be modeled as a random oracle in our security proof.

Authority Setup(GP, U_j). Each authority A_j ∈ A possesses a set of attributes U_j. For each attribute a ∈ U_j, A_j selects two random exponents t_a, t'_a ∈ Z_p, and computes P_a = g^{t_a}, P'_a = e(g, g)^{t'_a}. The public key of A_j is published as PubA_j = {(P_a, P'_a) | a ∈ U_j}. The master secret key of the authority A_j is MkA_j = {(t_a, t'_a) | a ∈ U_j}.

Authority KeyGen(GP, ID, a, MkA_j). When a user with unique global identity ID ∈ {0, 1}* requests a secret key associated with an attribute a which is held by A_j, the authority A_j returns SK_{a,ID} = g^{t'_a} H(ID)^{t_a} to the user.
Encrypt(GP, M, A_0, {PubA_j}). Here A_0 is the basis of a monotone access structure A. Let A_0 = {B_1, B_2, . . . , B_k}, where each B_i ⊂ U is a minimal authorized set in A. The set {PubA_j} is the set of public keys of all authorities managing the attributes in A_0. In order to encrypt a message M ∈ G_T, the encryptor chooses a random exponent s_i ∈ Z_p, for each i, 1 ≤ i ≤ k, and computes

C_{i,1} = M · (∏_{a∈B_i} P'_a)^{s_i}, C_{i,2} = g^{s_i} and C_{i,3} = (∏_{a∈B_i} P_a)^{s_i}. (2)

The encryptor outputs the ciphertext CT = ⟨A_0, {C_{i,1}, C_{i,2}, C_{i,3} | 1 ≤ i ≤ k}⟩.

Decrypt(GP, CT, {SK_{a,ID} | a ∈ L_ID}).
When a user with global identity ID ∈ {0, 1}* receives a ciphertext CT, it first computes H(ID). Suppose the attribute set L_ID of this user satisfies the monotone access structure A generated by A_0 = {B_1, B_2, . . . , B_k}. Then L_ID ⊇ B_i, for some B_i ∈ A_0. The receiver now aggregates the secret attribute keys associated with the attributes appearing in the minimal authorized set B_i and computes K_i = ∏_{a∈B_i} SK_{a,ID}. The message can then be obtained by computing

C_{i,1} · e(H(ID), C_{i,3}) / e(K_i, C_{i,2}) = M · e(g, g)^{s_i b'_i} · e(H(ID), g^{s_i b_i}) / e(g^{b'_i} H(ID)^{b_i}, g^{s_i}) = M,

where b_i = Σ_{a∈B_i} t_a and b'_i = Σ_{a∈B_i} t'_a. We will use the notations b_i and b'_i in our security proof.
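As a sanity check of this algebra, the following Python sketch (our own illustration, not part of the scheme) replaces the bilinear group by exponent bookkeeping in Z_p: an element g^x of G is stored as the exponent x, an element e(g, g)^y of G_T as y, and the pairing multiplies exponents. The prime p and the hash stand-in are arbitrary choices, and this is in no way a secure implementation; it merely verifies that the decryption equation recovers M.

```python
# Toy sanity check of the dCP-ABE-MAS algebra (illustration only, NOT a
# secure implementation). The bilinear group is replaced by exponent
# bookkeeping in Z_p: g^x is stored as x, e(g,g)^y as y, and the pairing
# e(g^x, g^y) = e(g,g)^{xy} becomes multiplication of exponents mod p.
import random

p = (1 << 61) - 1                       # arbitrary prime as the group order

def H(identity: str) -> int:            # toy stand-in for H : {0,1}* -> G
    return hash(identity) % p

B_i = ("a1", "a2")                      # one minimal authorized set
t = {a: random.randrange(1, p) for a in B_i}     # t_a   (master secrets)
tp = {a: random.randrange(1, p) for a in B_i}    # t'_a

ID = "alice"                            # key generation for identity ID:
SK = {a: (tp[a] + H(ID) * t[a]) % p for a in B_i}  # SK = g^{t'_a} H(ID)^{t_a}

M = random.randrange(p)                 # message, as an exponent of e(g,g)
s = random.randrange(1, p)              # encryption randomness s_i
b, bp = sum(t.values()) % p, sum(tp.values()) % p  # b_i and b'_i
C1 = (M + s * bp) % p                   # C_{i,1} = M * e(g,g)^{s_i b'_i}
C2 = s                                  # C_{i,2} = g^{s_i}
C3 = (s * b) % p                        # C_{i,3} = g^{s_i b_i}

K = sum(SK.values()) % p                # K_i = product of the SK_{a,ID}
recovered = (C1 + H(ID) * C3 - K * C2) % p   # C1 * e(H(ID),C3) / e(K,C2)
assert recovered == M
print("decryption algebra checks out")
```

The same bookkeeping carries over to the multi-message variant of Remark 1 below, with a different M_i packed into each C_{i,1}.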
Remark 1. An encryptor can pack different messages, say M_1, M_2, . . . , M_k, where k is at most the size of a basis of a monotone access structure, in a single ciphertext by using the following encryption algorithm. multi.Encrypt(GP, {M_1, M_2, . . . , M_k}, A_0, {PubA_j}). Let A be a monotone access structure generated by its basis A_0 = {B_1, B_2, . . . , B_k}. For each i, 1 ≤ i ≤ k, the encryptor chooses a random exponent s_i ∈ Z_p and computes the ciphertext CT = ⟨A_0, {C_{i,1}, C_{i,2}, C_{i,3} | 1 ≤ i ≤ k}⟩, where C_{i,1} = M_i · (∏_{a∈B_i} P'_a)^{s_i}, C_{i,2} = g^{s_i} and C_{i,3} = (∏_{a∈B_i} P_a)^{s_i}. On receiving the ciphertext CT = ⟨A_0, {C_{i,1}, C_{i,2}, C_{i,3} | 1 ≤ i ≤ k}⟩, a recipient whose attribute set contains B_i can recover the respective message M_i by executing the decryption algorithm Decrypt(GP, CT, {SK_{a,ID} | a ∈ L_ID}) of dCP-ABE-MAS. The deployment of this mechanism will be discussed in Section 5.
Security Analysis
In this section, we first argue that our dCP-ABE-MAS is secure against collusion attacks. We then prove that dCP-ABE-MAS is IND-CPA secure in the generic bilinear group model (we refer the reader to [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF] for the definition). Security against collusion attacks. A scheme is said to be collusion-resistant if no two or more recipients can combine their secret keys in order to decrypt a message that they are not entitled to decrypt alone. We will show that if two users with identities ID, ID′ try to collude and combine their secret keys, they will fail in the decryption process even though their combined attributes satisfy the monotone access structure A. Note that A_0 = {B_1, B_2, . . . , B_k} is a basis for A.
The encryption algorithm blinds the message M with e(g, g)^{s_i b'_i}. Consequently, the decryptor needs to recover the blinding term e(g, g)^{s_i b'_i} by coupling secret keys for attribute and identity pairs (a, ID) with the respective ciphertext components. If the decryptor has a satisfying set of keys under the same identity ID, i.e., {SK_{a,ID} | a ∈ B_i}, for some i, then the decryptor can recover the blinding term from the following computation.
e(K_i, C_{i,2}) / e(H(ID), C_{i,3}) = [e(g, g)^{s_i b'_i} · ∏_{a∈B_i} e(H(ID), g)^{s_i t_a}] / ∏_{a∈B_i} e(H(ID), g)^{s_i t_a} = e(g, g)^{s_i b'_i}.
Suppose two users with different identities ID and ID′ try to collude and combine their secret attribute keys such that L_ID ⊉ B_i and L_{ID′} ⊉ B_i for any 1 ≤ i ≤ k, but L_ID ∪ L_{ID′} ⊇ B_i for some B_i. Then K_i = ∏_{a∈B_{i,ID}} SK_{a,ID} · ∏_{a∈B_{i,ID′}} SK_{a,ID′}, where B_{i,ID} = L_ID ∩ B_i and B_{i,ID′} = L_{ID′} ∩ B_i. Consequently, there will be some terms of the form e(H(ID), g)^{s_i t_a} in the denominator and some terms of the form e(H(ID′), g)^{s_i t_a} in the numerator which will not cancel with each other, since H is collision resistant, i.e., H(ID) ≠ H(ID′), thereby preventing the recovery of the blinding term e(g, g)^{s_i b'_i}, and so of the message M. This demonstrates that the dCP-ABE-MAS scheme is collusion-resistant. Guess: ADV_1 outputs its guess ν′ ∈ {0, 1} on ν. If ν′ = ν, ADV_2 outputs µ′ = 1 as its guess; otherwise it outputs µ′ = 0.
- In the case where µ = 1, CT is a correct ciphertext of M_ν. Consequently, ADV_1 can output ν′ = ν with advantage ε, i.e., Pr[ν′ = ν | µ = 1] = 1/2 + ε. Since ADV_2 guesses µ′ = 1 when ν′ = ν, we get Pr[µ′ = µ | µ = 1] = 1/2 + ε.
- In the case where µ = 0, the challenge ciphertext CT* is independent of the messages M_0 and M_1, so ADV_1 obtains no information about ν. Therefore, ADV_1 can output ν′ = ν with no advantage, i.e., Pr[ν′ = ν | µ = 0] = 1/2. Since ADV_2 guesses µ′ = 0 when ν′ ≠ ν, we get Pr[µ′ = µ | µ = 0] = 1/2. Thus, the advantage of ADV_2 is Pr[µ′ = µ] − 1/2 ≥ 1/2 · (1/2 + ε) + 1/2 · 1/2 − 1/2 = ε/2. This proves Claim 1.
This claim demonstrates that any adversary that has a non-negligible advantage in GAME 1 can have a non-negligible advantage in GAME 2 . We shall prove that no adversary can have non-negligible advantage in GAME 2 . From now on, we will discuss the advantage of the adversary in GAME 2 , wherein the adversary must distinguish between e(g, g) sib i and e(g, g) δi . Simulation in GAME 2 : To simulate the modified security game GAME 2 , we use the generic bilinear group model given in [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF]. Consider two injective random maps ψ, ψ T : Z p → {0, 1} 3 log(p) . In this model every element of G and G T is encoded as an arbitrary random string from the adversary's point of view, i.e., G = {ψ(x)|x ∈ Z p } and G T = {ψ T (x)|x ∈ Z p }. The adversary is given three oracles to compute group operations of G, G T and to compute the bilinear pairing e. The input of all oracles are string representations of group elements. The adversary is allowed to perform group operations and pairing computations by interacting with the corresponding oracles only. It is assumed that the adversary can make queries to the group oracles on input strings that were previously been obtained from the simulator or were given from the oracles in response to the previous queries. This event occurs with high probability. Since |ψ(Z p )| > p 3 and |ψ T (Z p )| > p 3 , the probability of the adversary being able to guess an element (which it has not previously obtained) in the ranges of ψ, ψ T is negligible.
The notations $g^x := \psi(x)$ and $e(g,g)^x := \psi_T(x)$ are used in the rest of the proof. With this notation, $g$ and $e(g,g)$ can be represented as $\psi(1)$ and $\psi_T(1)$, respectively.

Setup: Note that A is the set of all authorities in the system and U is the attribute universe. The simulator obtains the global public parameters GP from the trusted system initializer and gives $\psi(1)$ to the adversary. The adversary sends a corrupted authority list $A' \subset A$ to the simulator. For each attribute $a \in U$ controlled by honest authorities, the simulator chooses two new random values $t_a, t'_a \in \mathbb{Z}_p$, computes $g^{t_a}$, $e(g,g)^{t'_a}$ using the respective group oracles and gives $P_a = \psi(t_a)$, $P'_a = \psi_T(t'_a)$ to the adversary.

Query Phase 1: The adversary issues hash and secret key queries, and the simulator responds as follows.
Hash queries: When the adversary requests $H(ID)$ for some user identity $ID$ for the first time, the simulator chooses a new, unique random value $u_{ID} \in \mathbb{Z}_p$, computes $g^{u_{ID}} = \psi(u_{ID})$ using the group oracle and gives $\psi(u_{ID})$ to the adversary as $H(ID)$. The association between the values $u_{ID}$ and the user identities $ID$ is stored in Hlist so that it can reply consistently to subsequent queries.
Secret key queries: If the adversary requests a secret key for an attribute $a$ with identity $ID$, the simulator computes $g^{t'_a} H(ID)^{t_a}$ using the group oracle and returns $SK_{a,ID} = \psi(t'_a + u_{ID} t_a)$ to the adversary. If $H(ID)$ has not been stored in Hlist, it is determined as above.

Challenge: In order to obtain a challenge ciphertext $CT^*$, the adversary specifies the basis $A_0 = \{B_1, B_2, \ldots, B_k\}$ of a monotone access structure $\mathbb{A}$, along with the public keys $g^{t_a}$, $e(g,g)^{t'_a}$ of attributes $a \in U$ which are controlled by corrupted authorities and appear in $A_0$ as members of several $B_i$. The simulator then checks the validity of these public keys by querying the group oracles. Now, the simulator chooses a random $s_i$ for the $i$-th minimal set of $A_0$, for each $i$, $1 \leq i \leq k$, and computes $b_i = \sum_{a \in B_i} t_a$. The simulator then flips a random coin $\mu \in \{0,1\}$; if $\mu = 1$, he sets $\delta_i = s_i b'_i$, where $b'_i = \sum_{a \in B_i} t'_a$, otherwise $\delta_i$ is set to be a random value from $\mathbb{Z}_p$. The simulator finally computes the components of the challenge ciphertext $CT^*$ using the group oracles as follows.
$C_{i,1} = \psi_T(\delta_i)$, $C_{i,2} = \psi(s_i)$, $C_{i,3} = \psi(s_i b_i)$ for all $i$, $1 \leq i \leq k$. The ciphertext $CT^* = \langle A_0, \{C_{i,1}, C_{i,2}, C_{i,3} \mid 1 \leq i \leq k\} \rangle$ is sent to the adversary.
Query Phase 2: The adversary issues more hash and secret key queries. The simulator responds as in Query Phase 1. We note that if the adversary requests secret keys for a set of attributes that allow decryption in combination with secret keys obtained from corrupted authorities, then the simulation is aborted.
The adversary now has in hand all values consisting of encodings of the random values $\delta_i$, $1$, $u_{ID}$, $t_a$, $t'_a$, $s_i$, combinations of these values given by the simulator (e.g., $\psi(t'_a + u_{ID} t_a)$), and results of queries on combinations of these values to the oracles. In turn, we can think of each query of the adversary as a multivariate polynomial in the variables $\delta_i$, $1$, $u_{ID}$, $t_a$, $t'_a$, $s_i$, where $a$ ranges over the attributes controlled by honest authorities, $i$ ranges over the minimal sets in the basis of the monotone access structure and $ID$ ranges over the allowed user identities. We assume that any pair of the adversary's queries on two different polynomials results in two different answers. This assumption is false only when our choice of the random encodings of the variables causes the difference of two polynomial queries to evaluate to zero. Following the security proof of Bethencourt et al., it can be claimed that the probability of any such collision is at most $O(q^2/p)$, $q$ being an upper bound on the number of oracle queries made by the adversary during the entire simulation. Therefore, the advantage gained by the adversary from such collisions is at most $O(q^2/p)$. We assume that no such random collisions occur, while retaining $1 - O(q^2/p)$ probability mass.
Under this condition, we show that the view of the adversary in GAME_2 is identically distributed whether $\delta_i = s_i b'_i$ (if $\mu = 1$) or $\delta_i$ is random (if $\mu = 0$), and hence the adversary cannot distinguish them in the generic bilinear group model. To prove this by contradiction, let us assume that the views are not identically distributed. The adversary's views can only differ when there exist two queries $q_1$ and $q_2$ in $G_T$ such that $q_1 \neq q_2$ but $q_1|_{\delta_i = s_i b'_i} = q_2|_{\delta_i = s_i b'_i}$, for at least one $i$. Fix one such $i$. Since $\delta_i$ only appears as $\psi_T(\delta_i)$, and elements of $\psi_T$ cannot be used as input to the pairing oracle (this oracle takes elements of $\psi$ as input), the adversary can only make queries of the following form involving $\delta_i$: $q_1 = c_1 \delta_i + q'_1$ and $q_2 = c_2 \delta_i + q'_2$, for some $q'_1$ and $q'_2$ that do not contain $\delta_i$, and for some constants $c_1$ and $c_2$. Since $q_1|_{\delta_i = s_i b'_i} = q_2|_{\delta_i = s_i b'_i}$, we have $c_1 s_i b'_i + q'_1 = c_2 s_i b'_i + q'_2$, which gives $q'_2 - q'_1 = (c_1 - c_2) s_i b'_i = c s_i b'_i$, for some constant $c \neq 0$. Therefore, the adversary can construct the query $\psi_T(c s_i b'_i)$, for some constant $c \neq 0$, yielding a contradiction to our Claim 2 proved below. Hence the adversary's views in GAME_2 are identically distributed, i.e., the adversary has no non-negligible advantage in GAME_2, and so none in the original game GAME_1 by Claim 1.

Table 2. Possible adversary's query terms in $G_T$ (here, the variables $a, a'$ are possible attributes, $ID, ID'$ are authorized user identities and $i, i'$ are indices of the minimal sets in the monotone access structure):
$t_a$; $t'_a$; $t_a t_{a'}$; $u_{ID} u_{ID'}$; $u_{ID} t_a$; $s_i$; $s_i s_{i'}$; $u_{ID} s_i$; $t_a s_i$; $b_i$; $u_{ID} b_i$; $t_a b_i$; $b_i b_{i'}$; $s_i b_i$; $u_{ID} s_i b_i$; $t_a s_i b_i$; $s_i b_i b_{i'}$; $s_{i'} b_i b_{i'}$; $s_i s_{i'} b_{i'}$; $s_i s_{i'} b_i b_{i'}$; $t'_a + u_{ID} t_a$; $t_{a'}(t'_a + u_{ID} t_a)$; $u_{ID'}(t'_a + u_{ID} t_a)$; $b_i(t'_a + u_{ID} t_a)$; $s_i(t'_a + u_{ID} t_a)$; $s_i b_i(t'_a + u_{ID} t_a)$; $(t'_a + u_{ID} t_a)(t'_{a'} + u_{ID'} t_{a'})$.
Claim 2: The adversary cannot make a query of the form $\psi_T(c s_i b'_i)$ for any non-zero constant $c$ and any $i$.
Proof of Claim 2:
To establish this claim, we examine the information given to the adversary during the entire simulation and perform case analysis based on that information.
In Table 2, we list all the possible adversary's query terms in $G_T$ obtainable by means of the bilinear map and the group elements given to the adversary during the simulation. It can be seen that the adversary can query for an arbitrary linear combination of $1$ (which is $\psi_T(1)$), $\delta_i$ and the terms given in Table 2. We now show that no such linear combination can produce a term of the form $c s_i b'_i$ for any non-zero constant $c$ and any $i$. Note that the adversary knows the values of $t_a, t'_a$ for attributes $a$ that are controlled by the corrupted authorities, so these can appear in the foregoing linear combinations as coefficients of the terms given in Table 2.
We note that $s_i b'_i = \sum_{a \in B_i} s_i t'_a$. From Table 2 we see that the only way for an adversary to create a term containing $s_i t'_a$ is by pairing $s_i$ with $t'_a + u_{ID} t_a$. Consequently, the adversary can create a query polynomial of the form
$$\sum_{a \in B} \bigl(c_{(i,a)} s_i t'_a + c_{(i,a,ID)} u_{ID} s_i t_a\bigr), \qquad (3)$$
for some set of attributes $B$ and non-zero constants $c_{(i,a)}$, $c_{(i,a,ID)}$. In order to get a query polynomial of the form $c s_i b'_i$, the adversary must add other terms to cancel the extra terms $\sum_{a \in B} c_{(i,a,ID)} u_{ID} s_i t_a$. For any term $c_{(i,a,ID)} u_{ID} s_i t_a$ where $a$ is an attribute held by a corrupted authority, the value of $t_a$ is revealed to the adversary, so the adversary can form the term $-c_{(i,a,ID)} u_{ID} s_i t_a$ in order to cancel it from the polynomial given in Eq. (3). For terms $c_{(i,a,ID)} u_{ID} s_i t_a$ where $a$ is an attribute controlled by an uncorrupted authority, the adversary cannot construct terms to cancel them from the polynomial given in Eq. (3), since there is no term in Table 2 that enables the adversary to construct a term of the form $-c_{(i,a,ID)} u_{ID} s_i t_a$. Consequently, the adversary's query polynomial cannot be of the form $c s_i b'_i$. Suppose that for some identity $ID$, a set $B'$ of attributes in $B$ belongs to the corrupted authorities, or the adversary has obtained secret keys $\{SK_{a,ID} \mid a \in B'\}$ such that $B' \supseteq B_i$, for some $i$, $1 \leq i \leq k$. Then the adversary can construct a query polynomial of the form
$$\sum_{a \in B_i} \bigl(c s_i t'_a + c_{ID} u_{ID} s_i t_a\bigr), \qquad (4)$$
for some non-zero constants $c$ and $c_{ID}$. The query polynomial given in Eq. (4) is the same as
$$c s_i \sum_{a \in B_i} t'_a + c_{ID} u_{ID} s_i \sum_{a \in B_i} t_a = c s_i b'_i + c_{ID} u_{ID} s_i b_i.$$
The extra term $c_{ID} u_{ID} s_i b_i$ here can be canceled by using the term $u_{ID} s_i b_i$ appearing in Table 2. In this case, even though the adversary succeeds, the constraint mentioned in the Challenge phase of the security game is violated and the simulator is aborted.
We have shown that the adversary cannot make a query polynomial of the form $c s_i b'_i$, for any constant $c \neq 0$ and any $i$, without violating the assumptions stated in the security game. This proves Claim 2 and hence the theorem.
Applications
In this section, we propose access control schemes in various network scenarios that make use of our dCP-ABE-MAS, and we then compare our scheme with the existing schemes in the respective areas.

Vehicular Ad Hoc Network: Typically, a vehicular ad hoc network (VANET) mainly consists of three kinds of entities: the trusted initializer (TI), road side units (RSUs) and vehicles, which are equipped with wireless communication devices called on-board units (OBUs). During the registration phase, each vehicle is assigned by the TI a set of persistent attributes (e.g., year, model), which remains constant throughout the lifetime of the vehicle, and a set of different pseudonyms, which preserve the location privacy of the vehicle. We assume that each vehicle is capable of changing pseudonyms from time to time. In addition, the TI gives each vehicle a set of secret keys associated with the persistent attributes for each pseudonym of that vehicle. These attributes and keys are preloaded into the vehicle's OBU.
There are several RSUs distributed across the network in a uniform fashion, and each RSU provides infrastructure support for a specified region, which we call the communication range of that RSU. Each RSU controls a set of dynamic attributes (e.g., road name, vehicle speed). When a vehicle enters the communication range of an RSU, the RSU gives it certain dynamic attributes along with the corresponding secret attribute keys, after receiving a certificate relating to the current pseudonym of the vehicle. We assume that there are secure communication channels between vehicles and the TI as well as between vehicles and RSUs.
Note that the authorities in our dCP-ABE-MAS play the role of RSUs, and the attribute universe is the combination of all persistent and dynamic attributes involved in the network. Every persistent attribute is different from every dynamic attribute, and the attributes controlled by two different RSUs are all different from each other. The pseudonym can be treated as the vehicle's identity. The setup and key generation algorithms of the TI are the same as the authorities' setup and key generation algorithms, respectively.
Vehicles can encrypt and decrypt messages. RSUs can also encrypt messages for a set of selected vehicles. When a vehicle wants to send a message M to other vehicles in the network regarding the road situation (e.g., a car accident ahead), it first decides the intended vehicles (e.g., ambulance, police car, breakdown truck) and then formulates an associated MAS in terms of minimal authorized sets over some attributes (both persistent and dynamic), for example, A_0 = {B_1, B_2, B_3}, where B_1 = {ambulance, road1}, B_2 = {policecar, lane2} and B_3 = {breakdowntruck, road2}. The encryptor vehicle then uses the public keys of the attributes occurring in the access structure to encrypt the message and transmits the ciphertext. Only a recipient vehicle whose attribute set satisfies the access structure will be able to decrypt the message. Referring to the above example, consider a scenario where the encryptor vehicle needs to send a different message to each category of vehicles: ambulance, police car, breakdown truck. Consequently, it has to encrypt each message separately under the respective access structure for each category; in turn, the number of encryptions grows linearly with the number of categories. In such cases, the proposed multi.Encrypt algorithm (described in Remark 1) can pack multiple messages in a single ciphertext, thereby reducing network traffic significantly, in such a way that each message can only be decrypted by the intended category of vehicles. This helps the widespread dissemination of messages and early decision making in such highly dynamic network environments.
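As a concrete illustration of this minimal-set logic, here is a short Python sketch (illustrative only: the attribute strings and the `satisfies` helper are our own, and the pairing-based cryptography of dCP-ABE-MAS is entirely abstracted away):

```python
# Minimal sketch: testing a vehicle's attribute set against a monotone
# access structure given by its basis of minimal authorized sets.

A0 = [
    {"ambulance", "road1"},        # B1
    {"policecar", "lane2"},        # B2
    {"breakdowntruck", "road2"},   # B3
]

def satisfies(attributes, basis):
    """Return the index i of the first minimal set B_i covered by the
    decryptor's attributes, or None if no minimal set is covered."""
    for i, B in enumerate(basis):
        if B <= attributes:
            return i
    return None

# An ambulance currently on road1 holds both persistent and dynamic attributes.
vehicle_attrs = {"ambulance", "year2012", "road1"}
print(satisfies(vehicle_attrs, A0))           # -> 0, i.e. B1 is satisfied

# A police car on lane3 does not satisfy any minimal set.
print(satisfies({"policecar", "lane3"}, A0))  # -> None
```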
The comparison of the proposed scheme, say Scheme 1 in the VANET scenario, with the existing scheme of Ruj et al. (Improved Access Control Mechanism in Vehicular Ad Hoc Networks) is presented in Tables 3 and 4.
Distributed Cloud Network: The cloud storage system is composed of five entities: trusted initializer (TI), key generation authorities (KGAs), cloud, data owner (data provider) and users (data consumers). The only responsibility of the TI is the generation of the global public parameters GP of the system and the assignment of a unique global identity ID to each user in the system. Each key generation authority controls a different set of attributes and generates public and secret keys for all attributes that it holds. The KGAs are also responsible for distributing secret keys for users' attribute sets on request, according to their role or identity. The KGAs may be scattered geographically far apart and execute their assigned tasks independently. The authorities in our dCP-ABE-MAS act as KGAs. The cloud is an external storage server that allows the data owners to store their data in order to share it securely with the intended users. The data owners enforce an access control policy in the form of a MAS in the ciphertext, in such a way that only intended users can recover the data, and sign the message by employing an efficient attribute-based signature scheme. Finally, the ciphertext along with the signature is sent to the cloud. The cloud first verifies the signature and stores the ciphertext if the signature is valid. Each user can obtain ciphertexts from the cloud on demand. However, the users can decrypt a ciphertext only if the set of attributes associated with their secret keys satisfies the access control policy embedded in the ciphertext.
Consider a health-care scenario where patients can be data providers, and doctors, medical researchers and health insurance companies can be data consumers. For example, a patient wishes to store his medical history in the cloud for specific users as follows: brain scan records, M_1, for any neurologist from hospital X; ECG (electrocardiography) reports, M_2, for any cardiologist; and ultrasound reports, M_3, for any radiology researcher from any medical research center. In such a setting, the multi.Encrypt algorithm (described in Remark 1) is well suited to pack all three messages in a single ciphertext. To this end, the patient first formulates a MAS whose basis is A_0 = {B_1, B_2, B_3}, where B_1 = {neurologist, hospitalX}, B_2 = {cardiologist} and B_3 = {radiologist, researcher}. Once the policy is specified, the multi.Encrypt algorithm is executed with input the set of messages {M_1, M_2, M_3}, A_0 and the respective public keys. Finally, the resulting ciphertext is stored in the cloud. Referring to the decryption algorithm of dCP-ABE-MAS, only the intended users can decrypt the respective messages.
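The following Python fragment sketches only the data layout of such a packed ciphertext; the per-set blinding values stand in for the pairing terms $e(g,g)^{s_i b'_i}$ of the real scheme, and the function and field names are our own (storing the blinding value inside the ciphertext is obviously insecure and is done here purely for demonstration):

```python
import secrets

# Structural sketch of multi.Encrypt: one message per minimal set B_i,
# each blinded with its own per-set secret (a stand-in for e(g,g)^{s_i b'_i}).

def multi_encrypt(messages, basis):
    assert len(messages) == len(basis)
    ct = {"A0": basis, "components": []}
    for M, B in zip(messages, basis):
        blind = secrets.randbits(128)      # toy blinding value
        ct["components"].append({
            "B": B,
            "C1": M ^ blind,               # M blinded (M is an int here)
            "blind_for_demo": blind,       # in the real scheme this is
        })                                 # recoverable only via the keys
    return ct

def decrypt(ct, attributes):
    for comp in ct["components"]:
        if comp["B"] <= attributes:        # keys for some B_i are available
            return comp["C1"] ^ comp["blind_for_demo"]
    return None

basis = [{"neurologist", "hospitalX"}, {"cardiologist"},
         {"radiologist", "researcher"}]
ct = multi_encrypt([101, 202, 303], basis)          # M1, M2, M3 as toy integers
print(decrypt(ct, {"cardiologist", "hospitalY"}))   # -> 202 (M2)
```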
We compare our proposed construction, say Scheme 2 in the context of cloud storage, with the existing schemes of Ruj et al. (Privacy Preserving Access Control with Authentication for Securing Data in Clouds; DACC: Distributed Access Control in Clouds) and Yang et al. (DAC-MACS: Effective Data Access Control for Multi-Authority Cloud Storage Systems), denoted [12, 13, 16], in Tables 3 and 4, where the ciphertext size is considered without the signature to be consistent with the other schemes.
Table 1. Comparison of [14] with our (dCP-ABE-MAS) scheme

Scheme | Key Generation | Encryption | Decryption

Symbols (for Tables 1, 3 and 4): E_G (resp. E_GT) = number of exponentiations in the group G (resp. G_T), Pe = number of pairing computations, B_G (resp. B_GT) = bit size of an element of G (resp. G_T), α = size of the LSSS access structure, β = minimum number of attributes required for decryption, γ = number of attributes annotated to a user secret key, k = number of minimal sets in the MAS, τ = size of an access structure.
Table 3. Comparison of Computation Costs

             | Key Generation | Encryption            | Decryption
Scheme       | E_G            | E_G     E_GT    Pe    | E_G   E_GT   Pe
[14]         | 2γ + 2         | 4α + 1  1       -     | -     O(β)   O(β)
[12, 13, 16] | 2γ             | 3α      2α + 1  1     | -     O(β)   O(β)
Scheme 1, 2  | 2γ             | 2k      k       -     | -     -      2
Table 4. Comparison of Communication Overheads

Scheme       | User Secret Key Size | Ciphertext Size              | Access Policy | Requirement of CA
[14]         | (γ + 2)B_G           | (3α + 1)B_G + B_GT + τ       | LSSS          | Yes
[12, 13, 16] | γB_G                 | 2αB_G + (α + 1)B_GT + τ      | LSSS          | No
Scheme 1, 2  | γB_G                 | 2kB_G + kB_GT + τ            | any MAS       | No
The description of all the symbols used in Tables 1, 3 and 4 is given at the bottom of Table 1.
The proposed scheme works on a prime order bilinear group and its security is analyzed in the generic group model.
Acknowledgement. The authors would like to thank the anonymous reviewers of this paper for their valuable comments and suggestions.
"1004394",
"1003117"
] | [
"301693",
"301693"
] |
] | 2024/03/04 23:41:50 | 2014 | https://theses.hal.science/tel-01492919/file/These_MOHAMMADI_Ali_UTBM.pdf | Ali Mohammadi Defenced
M Ben Ammar
Faouzi Rapporteur
M Daniel
Hissel Examinateur
M Seddik Bacha
Defenced
Joseph Fourier
Saint Martin D'
Hères M Faouzi
Ben Ammar Reviewer
M David Bouquain
M Abdesslem Djerdir
M Davood
A Khaburi
M Rachid
Analysis and Diagnosis of Faults in the PEMFC for Fuel cell Electrical Vehicles
(1) : IRTES-SET : Institut de Recherche sur les Transports, l'Energie et la Société - laboratoire « Systèmes Et Transports ». (2) : FR FCLAB : Fédération de Recherche Fuel Cell LABoratory.
General Introduction
In recent years, Proton Exchange Membrane Fuel Cells (PEMFC) have attracted attention for transport applications. For several years, the research program of the laboratory IRTES-SET (1) has focused on transportation problems, notably in collaboration with the FR FCLAB (2) teams, within the topic of electric and hybrid vehicles (EVs and fuel cell electric vehicles (FCEVs)). Over the last years in the two laboratories, theses have treated the problems of electric vehicle simulation, drivetrain design, integration and control, and the development, design and control of fuel cell systems for FCEVs. These efforts aim at zero-emission transportation, which is one of the challenges for scientific researchers in this field.
The work of the present thesis focuses on the problem of availability of FCEV drivetrains fed by a polymer electrolyte membrane fuel cell (PEMFC), a type of fuel cell being developed for transport applications. Its features include low temperature and pressure ranges (50 to 100 °C) and a special polymer electrolyte membrane. However, a major problem is that fuel cells are currently very expensive to produce. Thus, enhancing the reliability and durability of the PEMFC is the main objective of many researchers. In addition, enhancing the reliability and durability of the PEMFC requires a good understanding of the important issues related to operating fuel cells, such as the actual local current density and temperature distributions within a PEMFC. Hence, the present thesis aims to propose a simulation tool able to reach these goals.
To carry out these objectives, a single cell and a stack of fuel cells have to be investigated experimentally in order to establish actual maps of different parameters such as voltage, current density, and temperature. The Newton-Raphson method was used in this work for calibrations and to avoid the use of expensive current sensors. Finally, an ANN was applied to fault isolation and classification.
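As an illustration of this kind of calibration, the sketch below uses the Newton-Raphson method to invert a simple polarization model V(i) = E0 - A ln(i/i0) - R i and estimate the current density from a measured voltage, which is the sense in which a current-sensor-free estimate can be obtained. The coefficients are placeholders, not values identified in this work:

```python
import math

# Placeholder polarization-curve parameters (illustrative only):
# open-circuit term, Tafel slope, exchange current density, area resistance.
E0, A, i0, R = 1.0, 0.06, 1e-4, 0.25   # V, V, A/cm^2, ohm.cm^2

def f(i, v_meas):
    """Residual of the model V(i) = E0 - A*ln(i/i0) - R*i at current density i."""
    return E0 - A * math.log(i / i0) - R * i - v_meas

def df(i):
    """Analytical derivative of the residual with respect to i."""
    return -A / i - R

def newton_raphson(v_meas, i_guess=0.1, tol=1e-9, max_iter=100):
    """Estimate the current density that matches a measured cell voltage."""
    i = i_guess
    for _ in range(max_iter):
        i_new = i - f(i, v_meas) / df(i)
        if i_new <= 0:            # keep the iterate in the model's domain
            i_new = i / 2.0
        if abs(i_new - i) < tol:
            return i_new
        i = i_new
    return i

print(newton_raphson(0.65))  # estimated current density (A/cm^2) at V = 0.65 V
```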
This thesis report explains the work developed during the last three years, the methodologies followed, as well as the obtained results. The report is organized in five chapters as follows:
The first chapter presents an overview of the state of the art of electric and fuel cell electric vehicles (EVs and FCEVs) over the world and in Belfort. It demonstrates why FCEVs have a long way to go before fully entering the automotive market. The main locks concern vehicle availability, safety, cost and societal acceptance. Among the components of the FCEV drivetrain discussed (PEMFC, batteries, DC/DC and DC/AC converters and electrical motors), the fuel cell is the most fragile. This is why this research work is focused on enhancing the reliability and durability of the PEMFC for automotive applications.
The second chapter is dedicated to PEMFC modeling and diagnosis. Indeed, a good diagnosis strategy contributes to improving the lifetime of the FC and thus the availability of the system built around it, for example the drivetrain of FCEVs. It has been established that the FC is subject to many faults during its operation. These are due to multi-physical phenomena, namely the temperature, the pressure and the humidity of the gases involved within the FC stack and cells. Several models have been developed to understand these phenomena and to evaluate FC performance under different conditions of use, but also to detect, isolate and classify faults when they occur. On the basis of the literature, it has been noticed that ANNs are among the most interesting approaches for PEMFC fault diagnosis and modeling. In fact, ANNs have the capability to learn and build non-linear mappings of complex systems such as the PEMFC.
The third chapter introduces a new 3D model for fault diagnosis in the PEMFC with high accuracy. This 3D model is first applied to the case of a single cell, to present the principle of the methodology used, including formulation and calibration based on experimental data. Then the proposed model is extended to one stack. Finally, this 3D circuit model is used for training an ANN model in order to be used for on-line diagnosis of the PEMFC, but also in the management of its degraded modes.
In the fourth chapter, the experimental work is exposed. This work concerns two set-ups that have been developed to validate the proposed 3D model. Because of the difficulty of introducing faults in FCs without destroying them, only the healthy mode has been considered in this study. The first set-up concerns one FC cell from MES-DEA technology. The second one is a FC system from Ballard technology (called the Nexa FC stack). After presenting the two set-ups with their corresponding hardware and software environments, the obtained results are given and commented with regard to the validity of the proposed model.
The fifth and last chapter shows how the ANN method has been used to develop a diagnosis approach based on sensitive 3D models for fault isolation in one PEM cell. The input data of the ANN were analyzed by the FFT method. The ANN advantages consist in the ability to analyze a large quantity of data and to classify faults according to their types. This study has been developed in the context of a global strategy of supervision and diagnosis of the drivetrain of a FCEV.
Roman Symbols
Cell activation area (cm^2). Specific reaction surface (cm^2). Catalyst surface area per unit. Average water activity. Catalyst specific area (the theoretical limit for a Pt catalyst is 2400 cm^2 mg^-1, but state-of-the-art catalysts have about 600-1000 cm^2 mg^-1, which is further reduced by the incorporation of the catalyst in the electrode structure by up to 30%). Water activity. Specific heat capacity of the stack (J mol^-1 K^-1). Surface concentration of the reacting species.

Water diffusivity (cm^2 s^-1). Activation energy, 66 kJ mol^-1 for oxygen reduction on Pt.

Standard reference potential at standard state (V).

Voltage drops that result from losses in the fuel cell. The reversible voltage including the effect of gas pressures and temperature (V). Faraday constant (96485 C mol^-1). Flux of reactant per unit area (mol s^-1 cm^-2). Current density (A cm^-2).

Limiting current density (A cm^-2). Reference exchange current density (at reference temperature and pressure, typically 25 °C and 101.25 kPa) per unit catalyst surface area (A cm^-2 Pt). Thickness of the membrane (µm).

Catalyst loading (state-of-the-art electrodes have 0.3-0.5 mg Pt cm^-2; lower loadings are possible but would result in lower cell voltages). Molar mass of the membrane (1 kg mol^-1).
The total mass of the FC stack (kg). Mass loading per unit area of the cathode. Number of exchanging electrons per mole of reactant (2 for the PEM fuel cell). Number of water molecules accompanying the movement of each proton (2.5). Number of molecules per mole: 6.022×10^23. Number of cells in the stack.
Hydrogen flow rate. Gas pressures (Pa). Vapor partial pressure (Pa). Partial pressure of hydrogen (Pa). Partial pressure of oxygen (Pa). Partial pressure of air (Pa). Partial pressure of vapor (Pa). Reactant partial pressure (kPa). Reference pressure (kPa). Vapor saturation pressure (Pa). Electric power produced (W). Total charge transferred. Charge of electron: 1.602×10^-19 (coulombs/electron). Available power produced due to the chemical reaction (J). Electrical energy produced by the FC (J). Heat loss, mainly transferred by air convection (J). Net heat energy generated by the chemical reaction (J). Sensible and latent heat absorbed during the process (J).
Relative humidity of hydrogen and air. Gas constant (8.314 J Mol -1 K -1 ). Equivalent membrane resistance (Ωcm 2 ). Activation losses resistance. Concentration losses resistance. Temperature (K).
Tref: Reference temperature (298.15 K). Fuel cell stack voltage (V). Volumetric flow rate of hydrogen consumption (in standard liters per minute, slpm). Molar volume (L mol^-1) of hydrogen at standard conditions (P = 1 atm and T = 15 °C). Molar volume (m^3 mol^-1).
Voltage of double layer effect. Electric work.
Greek Symbols
Electron transfer coefficient, 0.5 for the hydrogen fuel cell anode (with two electrons involved) and 0.1 to 0.5 for the cathode. Pressure coefficient (0.5 to 1.0). Gibbs free energy (J mol^-1).
Hydrogen higher heating value (286 kJ mol^-1). Enthalpy (kJ mol^-1). Activation polarization (V). Concentration polarization (V). Ohmic polarization (V). Entropy (kJ mol^-1). Fuel cell efficiency (%). Membrane water content. Specific resistivity of the membrane for the electron flow (Ω cm). Dry density (0.00197 kg cm^-3). Conductivity (S cm^-1).
Chapter I
State of the Art of Fuel Cell Electrical Vehicles (FCEV)
Introduction
In recent decades, the increasing production of internal combustion (IC) engine vehicles has caused severe problems for the environment and human life. Global warming, air pollution and the rapid decrease of fossil fuel resources are now the principal problems in this regard. As a result, electric vehicles are considered a cleaner and safer transportation system. Plug-in electric vehicles, hybrid electric vehicles (HEVs), and fuel cell vehicles (FCEVs) have typically been suggested to replace combustion vehicles in the future [1.1]. The IC engine has never been ideal because of its fuel consumption and the pollution it produces, such as carbon monoxide, nitrogen oxides and other toxic substances. Furthermore, global warming is the result of the "greenhouse effect" caused by the presence of carbon dioxide and other gases. These gases act as barriers to the Sun's infrared radiation reflected back towards the sky, so the temperature increases over time. The distribution of electrical power in different categories of human activity is shown in Figure 1.1 [1.2]. The thyristor revolutionized the world of electronics and electricity. One of the most important electric vehicles, used by astronauts in the Apollo program, was called the Lunar Roving Vehicle.
History of electrical vehicles
The modern electric vehicle peaked during the 1980s and early 1990s. One of the weak points in the development of electric vehicles for the market was the energy storage capacity of the battery. Consequently, in recent years, electric vehicles have been replaced by hybrid electric vehicles [1.3]. The first hybrid electric vehicles were built by the Pieper establishment of Liège, Belgium and by the Vendovelli and Priestly Electric Carriage Company of France. The Pieper vehicle was a parallel hybrid composed of a gasoline engine, lead-acid batteries and an electric motor. The first series hybrid vehicle was derived from a pure electric vehicle and was constructed by the French company Vendovelli and Priestly. The Lohner-Porsche vehicle of 1903 used magnetic clutches and magnetic couplings (regenerative braking). In 1997, Toyota released the Prius sedan, the most important and most commercialized of the hybrid electric vehicles built by Japanese manufacturers [1.3].
Brief history of fuel cell electric vehicles in the world:
In 1958 General Electric (GE) chemist Leonard Niedrach devised a way of depositing platinum onto the ion-exchange membrane created by fellow GE scientist Willard Thomas Grubb three years earlier. This marked the beginning of PEMFC used in vehicles today. The technology was initially developed by GE and NASA for the Gemini space program; it took several decades to become viable for demonstration in cars, primarily due to cost [1.4].
In 1959 the Allis-Chalmers tractor was a farm tractor powered by an alkaline fuel cell with a 15 kW output, capable of pulling weights up to 1360 kg.

In 1966 General Motors designed the fuel cell Electrovan to demonstrate the viability of electric mobility. The Electrovan was a converted Handivan with a 32 kW fuel cell system giving a top speed of 115 km/h and a range of around 240 kilometers [1.4].

In 1970, based on the Austin A40, K. Kordesch utilized a 6 kW alkaline fuel cell; the vehicle was comparable in power to conventional cars on the road at the time [1.4].

In 1993 the Energy Partners Consulier was a proof-of-concept vehicle that sported a lightweight plastic body and three 15 kW fuel cells in an open configuration; it had a top speed of 95 km/h and a range of 95 kilometers [1.4].

In 1994 the NECAR (New Electric Car) was Daimler's first demonstration of fuel cell mobility. A converted MB-180 van, it utilized a 50 kW PEMFC that, alongside compressed hydrogen storage, took up the majority of space in the van [1.4].

In 1997, within a year, Daimler, Toyota, Renault and Mazda all demonstrated viable fuel cell passenger vehicle concepts. Fuel cells ranged from 20 kW (Mazda) to 50 kW (Daimler); both the NECAR 3 and FCHV-2 used methanol as fuel instead of hydrogen. The next year GM demonstrated a methanol-fuelled 50 kW fuel cell Opel Zafira, the first publicly drivable concept [1.4].

Between 1998 and 2000, momentum was growing for the commercial viability of fuel cell vehicles and most of the world's major automakers (including Daimler, Honda, Nissan, Ford, Volkswagen, BMW, Peugeot and Hyundai) demonstrated FCEVs with varying fuel sources (methanol, liquid and compressed gaseous hydrogen) and storage methods [1.4].

Public attention on FCEVs peaked in 2000. At this point came the realization that, despite the promise of the technology, it was not ready for market introduction. Attention switched to hybrid electric powertrains and battery electric vehicles (BEVs) as technologies that might deliver smaller, nearer-term benefits. The public focus for fuel cell transport shifted from cars to buses [1.4].

2005-2006 saw the unveiling of two cars that continue to have an impact on the FCEV market today: the first-generation edition of the Daimler F-CELL B-Class in 2005 and the next-generation Honda FCX concept in 2006 [1.4].

In 2008 a fleet of twenty Volkswagen Passat Lingyu FCEVs was used for transporting dignitaries at the 2008 Beijing Olympics [1.4].
On 8th September 2009 seven of the world's largest automakers -Daimler, Ford, General Motors, Honda, Hyundai-Kia, Renault-Nissan and Toyota -gathered to sign a joint letter of understanding. Addressed to the oil and energy industries and government organizations, it signaled their intent to commercialize a significant number of fuel cell vehicles from 2015 [1.4].
Daimler:
Daimler has a long history of fuel cell activity, spearheading the development of PEMFCs for automotive use with its 1994 NECAR. The company remained active in the years after, producing four further variants of the NECAR before revealing its first-generation fuel cell passenger vehicle, the A-Class F-CELL, in 2002. Its second-generation vehicle, the B-Class F-CELL (see Figure 1.3), entered limited series production in late 2010, offering improvements in range, mileage, durability, power and top speed. A total fleet of 200 vehicles is now in operation across the world, including more than 35 in a Californian lease scheme [1.4].
Brief history of fuel cell electric vehicles in France:
In July 2013 the Mobilité Hydrogène France consortium officially launched with twenty members including gas production and storage companies, energy utilities and government departments. The group is co-funded by the consortium members and the HIT project. It aims to formulate an economically competitive deployment plan for a private and public hydrogen refueling infrastructure in France between 2015 and 2030, including an analysis of cost-effectiveness. Initial deployment scenarios for vehicles and stations will be published in late 2013 [1.4].
ECCE
F-CITY H 2
The F-City H2, a battery-electric vehicle with a fuel cell range extender, has become the first urban electric vehicle with such an energy pack to be homologated in France [1.6].
MobyPost vehicle
MobyPost is a European project aimed at developing a sustainable mobility concept by delivering a solar-to-wheel solution. The first core element of this environmentally friendly and novel project is the development of ten electric vehicles powered by hydrogen fuel cells, conceived and designed for postal delivery use. Besides, the development of two hydrogen production and refueling stations is the second core component of MobyPost. These will be built in the French region of Franche-Comté, where photovoltaic (PV) generators will be installed on the roofs of two buildings owned by the project partner La Poste and dedicated to postal services. The PV generators allow for the production of hydrogen through electrolysis. Hydrogen is stored on site in low-pressure tanks where it is available for refueling the tanks of the electric vehicles, the latter being powered by an embedded fuel cell producing electricity that directly feeds the electric motors.
Figure.1.12. Mobypost vehicle
The MobyPost vehicle is designed to be ergonomic for postal activities and small enough for very narrow streets. It carries about 100 kg of mail, more than twice as much as the postal motor scooters it's intended to replace. With four wheels, it's more stable than a scooter, especially in snow. Its windshield and roof provide some shelter in bad weather, but it has no doors to get in the way driver's way as he goes in and out making deliveries. With 300g of embedded hydrogen the postmen can do their daily tours (around 40km) at a maximum speed of 45km/H of the Mobypost vehicle [1.7].
Configuration of FCEV
From a structural viewpoint, an FCV can be considered a type of series hybrid vehicle in which the fuel cell acts as an electrical generator that uses hydrogen. The on-board fuel cell produces electricity, which either is used to provide power to the machine or is stored in the battery or the super capacitor bank for future use. Various topologies can be introduced by combining energy sources with different characteristics [1.8].
Passive cascade battery/UC system
The battery pack is directly paralleled with the ultracapacitor (UC) bank. A bidirectional converter interfaces the UC and the dc link, controlling the power flow in/out of the UC, as shown in Figure 1.13. Despite the wide voltage variation across the UC terminals, the dc-link voltage can remain constant due to the regulation of the dc converter. However, in this topology, the battery voltage is always the same as the UC voltage due to the lack of interfacing control between the battery and the UC. The battery current must charge the UC and provide power to the load side [1.9].
Active cascaded battery/UC system.
The passive cascaded topology can be improved by adding a dc/dc converter between the battery pack and the UC, as shown in Figure 1.14; this configuration is called an active cascaded system. The battery voltage is boosted to a higher level; thus, a smaller-sized battery can be selected to reduce cost. In addition, the battery current can be controlled more efficiently compared with the passive connection [1.9]. The voltages of the battery and the UC are leveled up when the drivetrain demands power and stepped down for recharging conditions. Power flow directions in/out of the battery and the UC can be controlled separately, allowing flexibility for power management. However, if the two dc/dc converters can be integrated, the cost, size, and complexity of control can be reduced [1.9].
Multiple-input battery/UC system
Both the battery and the UC are connected to one common inductor by parallel switches in the multiple-input bidirectional converter shown in Figure 1.16. Each switch is paired with a diode, which is designed to avoid a short circuit between the battery and the UC. Power flow between the inputs and the loads is managed by bidirectional dc/dc converters. Both input voltages are lower than the dc-link voltage; thus, the converter works in boost mode when the input sources supply energy to drive the loads and in buck mode when recovering braking energy to recharge the battery and the UC. Only one inductor is needed, even if more inputs are added into the system. However, the control strategy and power-flow management of the system are more complicated [1.9].
Hybrid energy-storage system
The hybrid topology, where a higher-voltage UC is directly connected to the dc link to supply the peak power demand, is demonstrated in Figure 1.17; a lower-voltage battery is interfaced with the dc link by a power diode or a controlled switch. This topology can be operated in four modes: low power, high power, braking, and acceleration. For light duty, the UC mainly supplies the load, and the battery switches on when the power demand goes higher. Regenerative energy can be injected directly into the UC for fast charging, or into both the battery and the UC for a deep charge [1.9].
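The four operating modes described above lend themselves to a simple rule-based power split. The sketch below is illustrative only (the thresholds, names and the UC-first regeneration rule are our own choices, not a validated energy-management strategy):

```python
def split_power(p_demand_kw, p_batt_max_kw=30.0, regen_to_uc_first=True):
    """Toy rule-based power split for a battery/UC hybrid source.
    Positive p_demand_kw = traction, negative = regenerative braking."""
    if p_demand_kw >= 0:                      # low-power / high-power modes
        p_batt = min(p_demand_kw, p_batt_max_kw)
        p_uc = p_demand_kw - p_batt           # UC covers the peak above rating
    else:                                     # braking mode
        if regen_to_uc_first:
            p_uc, p_batt = p_demand_kw, 0.0   # fast charge into the UC
        else:
            p_uc = p_batt = p_demand_kw / 2.0 # deep charge shared by both
    return p_batt, p_uc

print(split_power(45.0))   # (30.0, 15.0): battery at rating, UC supplies the peak
print(split_power(-20.0))  # (0.0, -20.0): braking energy recovered into the UC
```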
Fuel cell

History of fuel cell
The father of the fuel cell is Sir William Grove, who in 1839 discovered the possibility of generating electricity by reversing the electrolysis of water. Francis Bacon developed the first successful fuel cell device in 1932, with a hydrogen-oxygen cell using alkaline electrolytes and nickel electrodes. After the NASA space missions of the 1950s, fuel cells now play a main role in space programs [1.3]. A contract was also awarded for the Gemini space mission in 1962. The 1 kW Gemini FC system had a platinum loading of 35 mg Pt/cm^2 and a performance of 37 mA/cm^2 at 0.78 V. In the 1960s, improvements were made by incorporating Teflon in the catalyst layer directly adjacent to the electrolyte, as was done with a GE fuel cell at the time.
Different types of fuel cell
The main types of fuel cell, classified by their electrolytes and/or fuel, are as follows:
1. Polymer electrolyte membrane fuel cells (PEMFCs)
PEMFC advantages and drawbacks
The PEM fuel cell has the ability to develop high power density. Its application in vehicles and stationary systems is considerable. One of its important advantages is operation at low temperature, between 60 and 80 °C. Moreover, it has faster startup and an immediate response to instantaneous loads [1.12].
Advantages
The advantages of PEM fuel cells are that they:
1) Are tolerant of carbon dioxide. As a result, PEM fuel cells can use unscrubbed air as an oxidant, and reformate as fuel; 2) Operate at low temperatures. This simplifies material issues, provides for quick startup and increases safety; 3) Use a solid, dry electrolyte. This eliminates liquid handling, electrolyte migration and electrolyte replenishment problems;
4) Use a non-corrosive electrolyte. Pure water operation minimizes corrosion problems and improves safety; 5) Have high voltage, current and power density ; 6) Operate at low pressure which increases safety; 7) Have good tolerance to differential reactant gas pressures; 8) Are compact and rugged; 9) Have a relatively simple mechanical design; 10) Use stable materials of construction.
The disadvantages
The disadvantages of PEM fuel cells are that they:
1) Can tolerate only about 50 ppm carbon monoxide;
2) Can tolerate only a few ppm of total sulfur compounds;
3) Need reactant gas humidification;
Humidification is energy intensive and increases the complexity of the system. The use of water to humidify the gases limits the operating temperature of the fuel cell to less than water's boiling point and therefore decreases the potential for co-generation applications.
4) Use an expensive platinum catalyst; 5) Use an expensive membrane that is difficult to work with.
Functional components of the cell:
Cell components can be separated into four sections:
1) Ion exchange membrane;
2) Electrically conductive porous backing layer;
3) Catalyst layer (the electrodes); 4) Cell plate.
Figure 1.21 shows the structure of the PEM FC and its different parts for one cell [1.11].
Membrane
The material used belongs to the perfluorosulfonic acid family, the most common of which is Nafion (see Figure 1.22). The bulk of the polymer is fluorinated, giving it a hydrophobic character. However, throughout the membrane there are sulfonic acid sites, which determine the ionic conductivity and are hydrophilic [1.11].
Electro-catalyst Layer
This layer sits between the membrane and a backing layer. The catalyst consists of two electrodes, anode and cathode, made of platinum. To enhance hydrogen oxidation on the anode side and oxygen reduction on the cathode side, pure platinum metal catalysts or supported platinum catalysts are used [1.11].
Porous Backing Layer
The membrane is sandwiched between two porous layers. The backing layer is typically carbon based. A hydrophobic material is used in this layer to prevent water buildup, so that gases can freely contact the catalyst layer (see Figure 1.23).
The functions of the backing layer are as follows:
1) Act as a gas diffuser; 2) Mechanical support;
3) Electrical conduction of electrons.
Bipolar Plate for Fuel Cell
The main task of the BPs is to collect and conduct the current from the anode and the cathode to the next cell or to the external circuit. In addition, they are used to carry a cooling system through the stack (see Figure 1.24). The material used for a BP must satisfy the following conditions:
1) The BP must be thin, to minimize stack volume;

2) It must be light, because of stack weight;

3) It must be corrosion resistant in the face of the acid electrolyte, oxygen, hydrogen, heat and humidity; 4) It must be reasonably stiff (flexural strength) [1.13]. A fuel cell system generally includes a stack and needs a lot of auxiliary equipment to provide the supply of hydrogen and oxygen, the compression and humidification of the gases (e.g. an air compressor), the cooling of the stack, as well as the electric power converters and the power control system. A general diagram of the fuel cell system is given in the figure.
Hydrogen supply
The great majority of fuel cells use hydrogen as fuel. Hydrogen can be provided either from a hydrogen tank or from an external reformer. The use of a reformer increases the complexity of the fuel cell system, since the heat from the reforming process must be recovered and the product gas must be properly handled. At the output of the reformer, hydrogen is not the only gas produced. Indeed, other gases such as carbon dioxide (CO2), carbon monoxide (CO) and sulfur compounds (S) can be produced simultaneously in the reformer. For some fuel cell technologies (PEMFC, AFC), carbon monoxide and sulfur are considered poisons. In conclusion, a gas filtering process must be added between the reformer and the fuel cell. The hydrogen in the tank can be stored either under high pressure between 350 and 700 bar (the hydrogen volume decreases with increasing gas pressure according to the physical law of Boyle-Mariotte), in liquid form, or in metal cylinders. Before the gas goes into the fuel cell, a pressure regulator should control the hydrogen pressure.
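For a rough feel of what this pressure law implies for tank sizing, the snippet below estimates the stored hydrogen mass with the ideal gas law. This is a simplification of our own: at 350 to 700 bar, real hydrogen deviates noticeably from ideal behaviour, so the true mass is somewhat lower than computed here:

```python
R = 8.314        # J/(mol K)
M_H2 = 2.016e-3  # kg/mol

def h2_mass_kg(pressure_bar, volume_liters, temp_c=15.0):
    """Ideal-gas estimate of hydrogen mass in a tank (no compressibility factor)."""
    p = pressure_bar * 1e5   # Pa
    v = volume_liters * 1e-3 # m^3
    t = temp_c + 273.15      # K
    n = p * v / (R * t)      # mol
    return n * M_H2

# The same 125 L tank at the two common automotive storage pressures:
print(round(h2_mass_kg(350, 125), 2))  # ~3.68 kg (ideal-gas estimate)
print(round(h2_mass_kg(700, 125), 2))  # ~7.36 kg, i.e. twice as much
```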
Subsystem supply oxygen (air)
The air supplied to the cathode of the fuel cell is usually compressed using an air compressor. In some applications (e.g. PEM fuel cells), the air is humidified before entering the fuel cell. Depending on the fuel cell technology (pressure, temperature), the air can be compressed by a motor-compressor or a turbine. A heat exchanger can also be added in the air supply system in order to preheat the air. In some applications, the fuel cell can be fed with pure oxygen stored in compressed form. The use of pure oxygen can significantly increase the performance of the fuel cell and eliminates the air compressor, a unit whose energy consumption reduces the energy efficiency of the fuel cell system.
Cooling system
As mentioned before, the electrochemical reaction occurring within the fuel cell generates heat, which must be removed to maintain a constant operating temperature of the fuel cell. For low-power fuel cells, natural convection at the surface of the cell or a cooling fan (forced convection) is sufficient to remove the heat. In the case of high-power fuel cells, air cooling is not sufficient to transfer the heat; hence, more complex cooling systems, such as water cooling, must be used. In high-temperature fuel cells, the heat removed from the fuel cell can be utilized again for co-generation purposes, thus forming a system commonly known as Combined Heat and Power (CHP).
Power converters
The output voltage of the fuel cell varies depending on the supplied electric current (polarization curve). To maintain a constant output voltage, power converters are used as an interface between the fuel cell and the load.
Sub-control system
As discussed above, the fuel cell needs a large number of auxiliary devices. In order to ensure proper functioning of the system in terms of performance and safety, it is necessary to have a control system to oversee the various subsystems. A well-designed control subsystem enables the fuel cell to operate in the best conditions.
Diagnosis of PEMFC
Fault diagnosis consists of three levels (fault detection, isolation and analysis): data acquisition from the system, fault diagnosis and fault classification, explained as follows (a minimal sketch of the first two stages is given after the list):

1) Data acquisition: for this purpose, different methods have been proposed, specifically electrochemical impedance spectroscopy (EIS), linear sweep voltammetry (LSV), cyclic voltammetry (CV), etc., which capture the variation of the output under different operating conditions of the input;
2) Extracting faults from the healthy mode: depending on the fault, features are extracted from the original system data in miscellaneous ways, such as FFT, WT and STFT;
3) Fault classification: at this stage, methods such as NN, FL, neuro-fuzzy and BN are applied more than others.
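A minimal sketch of the first two stages, assuming numpy is available (the synthetic signal, window and low-frequency band below are illustrative choices, not tuned values from a specific test bench):

```python
import numpy as np

def fft_features(voltage, fs):
    """Stages 1-2: turn a sampled cell-voltage signal into spectral features
    that a classifier (e.g. an ANN) can use for fault isolation."""
    v = voltage - np.mean(voltage)            # remove the DC operating point
    spectrum = np.abs(np.fft.rfft(v)) / len(v)
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    band = (freqs > 0.1) & (freqs < 1.0)      # illustrative low-frequency band
    return {
        "band_energy": float(np.sum(spectrum[band] ** 2)),
        "dominant_freq": float(freqs[np.argmax(spectrum[1:]) + 1]),
    }

# Synthetic example: a 0.3 Hz oscillation superimposed on a 0.65 V cell voltage
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
v = 0.65 + 0.01 * np.sin(2 * np.pi * 0.3 * t) + 0.001 * np.random.randn(t.size)
print(fft_features(v, fs))   # dominant_freq close to 0.3 Hz
```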
Various diagnostic tools employed in the characterization and determination of fuel cell performances are summarized into two general categories: 1) Electrochemical techniques.
2) Physical/chemical methods [1.15].
Batteries
Among the available choices of portable energy sources, batteries have been the most popular choice of energy source for EVs since the beginning of research and development programs on these vehicles. The EVs and HEVs commercially available today use batteries as their electrical energy source. The various batteries are usually compared in terms of descriptors such as specific energy, specific power, operating life, etc. Similar to specific energy, specific power is the power available per unit mass from the source. The operating life of a battery is the number of deep discharge cycles obtainable in its lifetime or the number of service years expected in a certain application. The desirable features of batteries for EV and HEV applications are high specific power, high specific energy, a high charge acceptance rate for recharging and regenerative braking, and long calendar and cycle life. Additional technical issues include methods and designs to balance the battery segments or packs electrically and thermally, accurate techniques to determine a battery's state of charge, and recycling facilities for battery components. Above all, the cost of batteries must be reasonable for EVs and HEVs to be commercially viable [1.16].
The major types of rechargeable batteries considered for EV and HEV applications are:
1) Lead-acid (Pb-acid);
DC/DC Converters: Power electronic converters which change the level of a DC source to a different DC level, keeping regulation in consideration, are known as DC/DC converters [1.16].
DC/AC Inverters: Generally, the single-phase, full-bridge DC/AC inverters are popularly known as "H-bridge" inverters. These DC/AC inverters are basically either voltage source/fed inverters (VSI) or current source/fed inverters (CSI). In the case of a VSI, the input voltage is considered to remain constant, whereas in a CSI, the input current is assumed to be constant [1.16]. Topologies of the voltage source inverter (VSI), current source inverter (CSI), Z-source inverter (ZSI), and soft-switching inverter can be used in traction drives.
Electric motors
Electric machines can be utilized in either the motoring mode or the generating mode of operation.
In the motoring mode, these machines use electricity to drive mechanical loads, while, in the generating mode, they are used to generate electricity from mechanical prime movers. The motor is the main component of the drivetrain of an EV. In addition, nowadays the electric motor is also widely used in ancillary devices of the car such as power steering, air conditioning, window lifts, etc.
Challenges of FCEV
Several challenges must be overcome before fuel cell vehicles (FCVs) become a successful, competitive alternative for consumers [1.17]. These challenges concern the hydrogen and fuel cell technology and the cost, but also the societal acceptance.
Onboard Hydrogen Storage
Some FCEVs store enough hydrogen to travel as far as gasoline vehicles between fill-ups-about 300 miles-but the storage systems are still too large, heavy, and expensive. FCVs are more energy efficient than conventional cars, and hydrogen contains three times more energy per weight than gasoline does. However, hydrogen gas contains only a third of the energy per volume gasoline has, making it difficult to store enough hydrogen to go as far as a gasoline vehicle on a full tank-at least within the same size, weight, and cost constraints [1.17].
Vehicle Cost
FCEVs are currently too expensive to compete with hybrids and conventional gasoline and diesel vehicles. But costs have decreased significantly and are approaching DOE's goal for 2017 (see graph 1.26). Manufacturers must bring down production costs, especially the costs of the fuel cell stack and hydrogen storage [1.17].
Fuel Cell Durability and Reliability
Fuel cell systems are not yet as durable as internal combustion engines and do not perform as well in extreme environments, such as in sub-freezing temperatures. Fuel cell stack durability in real-world environments is currently about half of what is needed for commercialization. Durability has increased substantially over the past few years from 29,000 miles to 75,000 miles, but experts believe a 150,000-mile expected lifetime is necessary for FCEVs to compete with gasoline vehicles [1.18].
Getting Hydrogen to Consumers
The extensive system used to deliver gasoline from refineries to local filling stations cannot be used for hydrogen. New facilities and systems must be constructed for producing, transporting, and dispensing hydrogen to consumers [1.17].
Competition with Other Technologies
Manufacturers are still improving the efficiency of gasoline-and diesel-powered engines, hybrids are gaining popularity, and advances in battery technology are making plug-in hybrids and electric vehicles more attractive. FCVs will have to offer consumers a viable alternative, especially in terms of performance, durability, and cost, to survive in this ultra-competitive market [1.17].
Safety
Hydrogen, like any fuel, has safety risks and must be handled with caution. We are familiar with gasoline, but handling compressed hydrogen will be new to most of us. Therefore, developers must optimize new fuel storage and delivery systems for safe everyday use, and consumers must become familiar with hydrogen's properties and risks [1.17].
Public Acceptance
Fuel cell and hydrogen technology must be embraced by consumers before its benefits can be realized. Consumers may have concerns about the dependability and safety of these vehicles, just as they did with hybrids [1.17].
Thesis objective
The PEMFC is a very complex device in terms of the phenomena involved in its operation, which are multi-physical: electricity, chemistry, fluidics, thermodynamics and mechanics are the domains of physics involved in this kind of study. To overcome these difficulties, an electric network approach is used, notably to easily take into account the three spatial dimensions of the PEMFC stack. The resulting 3D model has to be able to simulate faults in order to develop an efficient algorithm for fault isolation and classification in the PEMFC. With this model, faults can be localized at each point of a single cell, which is helpful for the optimization and control of operating conditions. However, the local current density and temperature distributions within a single PEMFC, as well as between the fuel cells in a fuel cell stack, still require more attention in both experimental and numerical investigations for better understanding. Therefore, the overall goal of the present thesis is to conduct an experimental analysis, with emphasis on both voltage and temperature distributions inside a PEMFC, under different operating conditions for the stack and the single cell.
To carry out these objectives, single cells and fuel cell stacks have to be investigated experimentally in order to establish actual maps of different parameters such as voltage, current density, and temperature.
Conclusion
This chapter has presented an overview of the state of the art of FCEVs worldwide and in Belfort. It has been established that FCEVs have a long way to go before fully entering the automotive market. The main barriers concern vehicle availability, safety, cost and societal acceptance. Among the drivetrain components of the FCEV discussed (PEMFC, batteries, DC/DC and DC/AC converters and electric motors), the fuel cell is the most fragile. This is why this research work focuses on enhancing the reliability and durability of the PEMFC for automotive applications.
The chosen approach is a thorough understanding of the important issues related to fuel cell operation. This problem is detailed in the next chapter through modeling, simulation, and knowledge drawn from the literature as well as developed locally.
Chapter II PEM Fuel Cell Modeling and Diagnosis
PEMFC modeling
Over the past decade, many proton exchange membrane fuel cell models have been reported [2.1]-[2.5]. Models play an important role in FC development since they facilitate a better understanding of the parameters affecting FC performance. Models normally focus on one aspect or region of the fuel cell; the PEMFC stack is one of its most studied parts. Generally, stack modeling is divided into three main groups:
1) Empirical/semi empirical model; 2) Mechanistic model; 3) Analytical model.
Empirical model
Semi-empirical models combine theoretical and algebraic equations with empirically determined formulas. Empirical models are used when the physical phenomena are difficult to model or when the theory governing the phenomena is not well understood.
• Springer et al. [2.6] developed a semi-empirical model for an FC with a partially hydrated membrane. • Amphlett et al. [2.7] use semi-empirical relationships to estimate the potential losses and to fit the coefficients in a formula. The goal is to predict the cell voltage as a function of the operating current density. This model accounts for the activation and ohmic overpotentials. The partial pressures and dissolved concentrations of hydrogen and oxygen were determined empirically as functions of temperature, current density and gas channel mole fractions. • Pisani et al. [2.8] also use a semi-empirical approach to study the activation and ohmic losses as well as the transport limitations in the cathode reactive region. • Maggio et al. [2.9] used a semi-empirical model for water transport in an FC. The concentration overpotential of the model was affected by allowing the cathode gas porosity to be an empirical function of current density: the effective gas porosity was assumed to decrease linearly with increasing current density, due to the increasing fraction of liquid water. The results indicated that dehydration of the membrane is more likely to occur on the cathode side than on the anode side. • Chan et al. [2.10] studied the effect of CO kinetics in the hydrogen feed on the anode reactive region. When hydrogen is obtained from fuel reforming, trace amounts of CO are present, which act as a poison to the platinum catalyst and decrease the active catalyst surface area.
An empirical factor was determined from the fraction of catalyst sites occupied by CO at the anode. • Maxoulis et al. [2.11] used an empirical model of the FC stack. They combined the model of Amphlett et al. [2.7] with the commercial software ADVISOR, which was used for driving cycles. They studied the effects of the number of cells per stack, the electrode kinetics and the water concentration in the membrane on the fuel consumption, and found that a larger number of cells per stack yields greater stack efficiency and hence better fuel economy.
The drawback of semi-empirical models is that they cannot accurately predict performance outside the range over which they were fitted, nor the transient response of the fuel cell. Nevertheless, they are very useful for quick design predictions.
Generally, empirical and semi-empirical models are divided as follows:

1.1.1. Design of experiments (DoE) modeling

The DoE approach aims to design or to characterize an FC stack. FC experiments are generally long and expensive, and there are complex interrelations between the physical parameters to be tested. Many aspects and tools of the DoE methodology can be of great benefit for various scientific and technological purposes, such as: the development of FC materials, components and ancillaries; the analysis and improvement of single-cell and FC stack performance; and the evaluation and development of a complete FC system [2.12].
Artificial neural network (ANN)
These models are based on a set of easily measurable inputs like temperature, pressure, and current, and are able to predict the output voltage of FC stacks. In order to give more relevance to the time dependence of an output, feedback loops are designed to provide different time states of the output. Nevertheless, the main drawback of this approach is the huge number of experimental tests required [2.12].
Modeling approach based on electrical analogies
Unlike the DoE and ANN modeling approaches, this approach requires some knowledge of the behavior of the stack, but only a few internal parameters of the stack are used to tune the model. Electric modeling is then proposed: the basic idea is to find a common way to represent the different aspects of the FC stack, such as the different physical laws and the thermal and fluidic modes [2.12].
Equivalent electric circuit model
In recent years, many researchers have widely investigated the dynamic modeling of the FC with emphasis on the electrical terminal characteristics [2.13]-[2.17]. A detailed explanation of the electrochemical properties of the FC and a simple equivalent circuit including the dynamic effect are reported in [2.18]. The electric model is a simple method to implement the system: a simple electric model is represented in Figure 2.1 [2.19], and more sophisticated, complex models have also been illustrated in the literature.
Modeling approach based on energy analogies
This kind of approach is applied in a great number of fields of physics through an energy formalism, carried here by Bond Graph modeling. The Bond Graph is an explicit, graphical, unified formalism in which the energy exchanges within a system are described by bonds that represent the power exchanges. A limitation of this approach is the large number of elements necessary to describe the majority of energy systems [2.12]. Within the same scope, the Energetic Macroscopic Representation (EMR) is identified as the best energy modeling methodology applied to chemical reactions and mass transfer.
Mechanistic model
In this model, the phenomena internal to the cell are described by differential and algebraic equations based on the laws of physics and electrochemistry. These equations are solved using adequate computational methods, and they describe the electrochemical reaction as well as the mass and charge transfer. Indeed, accurate water management, the dehydration of the membrane, the complex electrode kinetics, the mass transport, and the slow rate of oxygen reduction are the limiting factors in FC modeling. Depending on the assumptions, models of different levels of complexity arise, from one dimension to three dimensions; however, the resolution of complex models leads to heavy calculations. Mechanistic models can be subcategorized as multi-domain or single-domain models. The multi-domain approach involves the derivation of different sets of equations for each region of the FC (e.g. anode, cathode gas diffusion regions and catalyst layers).
This model depends on three basic phenomenological equations: the Butler-Volmer equation for the FC voltage, the Stefan-Maxwell equation for transport phenomena, and the Nernst-Planck equation for species transport [2.12]. However, Gurau et al. [2.20] showed that since the governing differential equations in the gas flow channels and gas diffusion electrodes are similar, the equations can be combined for both regions.
Mechanistic models (single- and multi-domain) have been utilized to study a wide range of phenomena including polarization effects, water management, thermal management, CO kinetics, catalyst behavior and flow field geometry. Mechanistic approaches can simulate the transient and steady-state responses; moreover, they can be used to elaborate equivalent circuit models. Their disadvantages lie in the difficulty of understanding the physical behavior and improving the performance of the multi-physics, multi-scale FC stack. In addition, varied skills and knowledge are needed, covering chemistry, electrochemistry, fluid mechanics, thermal, electrical and mechanical engineering.
Analytical model
In analytical models, many assumptions are made concerning the variable profiles within the cells in order to approximate an analytical voltage versus current density relationship; they do not give an accurate picture of the transport processes occurring within the cells. They are limited to predicting voltage losses and water management, but are useful for quick calculations with simple models [2.12].
Consideration of different modeling
Over the past decades, a wide range of steady-state models of varying complexity and dimensionality have been developed to simulate PEMFC performance. These include 1D models (where the spatial dimension is parallel to the flow of current), 2D models (where the planes considered are perpendicular to the cell plates), and more complex 3D models (explained further in chapter III). Springer et al. [2.6] presented a 1D model for a well-humidified PEMFC, which considered the activation and concentration losses in the active layer and the gas transport in the cathode GDL. They found that the losses in well-humidified H2/O2 cells could be well described by the sum of the high-frequency resistance (membrane and contact resistance) and the activation losses on the cathode side. Bernardi and Verbrugge developed a similar model using the Nernst-Planck, Butler-Volmer and Stefan-Maxwell equations; the result was a reference model for calculating the contributions to the FC losses (without mass transport limitation).
An important concern is the balance between accuracy and calculation time, which is usually maintained through a number of assumptions. Relaxing these assumptions increases the model complexity and requires more detail in the physical model, particularly in the porous active layer.
Fuel cell basic characteristics
Fuel cells are electrochemical devices that convert the chemical energy of hydrogen and oxygen, supplied to the anode and cathode sides, into electricity, heat and water. The basic PEM fuel cell reactions are:

Anode: $H_2 \rightarrow 2H^+ + 2e^-$

Cathode: $\frac{1}{2}O_2 + 2H^+ + 2e^- \rightarrow H_2O$

Overall: $H_2 + \frac{1}{2}O_2 \rightarrow H_2O$
The stoichiometry of each reactant gas is an important experimental parameter in the FC. A 1:1 stoichiometry refers to the flow rate required to maintain a constant reactant concentration at the electrode at a fixed current density. Usually a higher stoichiometry is required on the cathode side (typically 3-4) than on the anode side (typically 1-2), due to the sluggish mass transport rate of oxygen [2.21].
The heat (or enthalpy) of a chemical reaction is the difference between the heats of formation of the products and the reactants:

$\Delta H = \sum (h_f)_{products} - \sum (h_f)_{reactants}$    (Eq2.1)
Since the heat of formation of liquid water is -286 kJ.mol^-1 (at 25 °C) and the heats of formation of H_2 and O_2 are zero, the reaction enthalpy at 25 °C equals -286 kJ.mol^-1. The negative sign means that heat is released by the reaction; 286 kJ.mol^-1 is hydrogen's higher heating value. However, because some entropy is produced in every chemical reaction, not all of this value can be converted into useful work. The portion of the enthalpy that can be converted to electricity in a fuel cell is given by the Gibbs free energy:

$\Delta G = \Delta H - T\Delta S$    (Eq2.2)
Indeed, some energy is lost in the conversion due to the entropy term. The entropy change is likewise obtained from the difference between the entropies of formation of the products and the reactants:

$\Delta S = \sum (s_f)_{products} - \sum (s_f)_{reactants}$    (Eq2.3)

When the product water is in vapor form, its heat of formation is -241.98 kJ.mol^-1 (at 25 °C); with the heats of formation of H_2 and O_2 being zero, the reaction enthalpy at 25 °C then equals -241.98 kJ.mol^-1 (the lower heating value). The total charge transferred in a fuel cell reaction per mole of H_2 is $q = nF$, with $n = 2$ electrons involved. The maximum amount of electrical energy generated in a fuel cell is:
$W_{el} = -\Delta G$    (Eq2.6)
The theoretical potential of the fuel cell is:

$E = \frac{-\Delta G}{nF}$    (Eq2.7)

That is to say, at 25 °C the theoretical hydrogen/oxygen fuel cell potential is 1.23 V.
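As a quick numerical check (a worked example assuming the standard value $\Delta G = -237.34$ kJ.mol$^{-1}$ at 25 °C):

$E = \frac{-\Delta G}{nF} = \frac{237\,340\ \mathrm{J\,mol^{-1}}}{2 \times 96\,485\ \mathrm{C\,mol^{-1}}} \approx 1.23\ \mathrm{V}$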
Effect of temperature
The theoretical cell potential changes with temperature as:

$\left(\frac{dE}{dT}\right)_P = \frac{\Delta S}{nF}$    (Eq2.8)
Since $\Delta S$ is negative, increasing the cell temperature leads to a lower theoretical cell potential. Besides, both $\Delta H$ and $\Delta S$ are functions of temperature:

$\Delta H_T = \Delta H_{298} + \int_{298}^{T} C_p\,dT$    (Eq2.9)

$\Delta S_T = \Delta S_{298} + \int_{298}^{T} \frac{C_p}{T}\,dT$    (Eq2.10)
The specific heat $C_p$ of any gas is also a function of temperature. An empirical relationship may be used, such as:

$C_p = a + bT + cT^2$    (Eq2.11)
where a, b and c are empirical coefficients, different for each gas [2.21]. In practice, the reduction of the voltage losses at higher operating temperature compensates for the loss of theoretical cell voltage. Figure 2.3 displays the decrease of the Nernst voltage of the cell with increasing temperature.
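A minimal sketch of this temperature dependence, assuming the widely used linearization of the theoretical potential around 25 °C reported in [2.7] (the coefficient is an assumption of this illustration, not a value identified in this work):

```python
# Minimal sketch: theoretical H2/O2 cell potential vs. temperature, assuming
# the linearization E(T) = 1.229 - 0.85e-3 * (T - 298.15) at standard pressure.

def theoretical_potential(T_kelvin: float) -> float:
    """Theoretical cell potential (V) as a function of temperature (K)."""
    return 1.229 - 0.85e-3 * (T_kelvin - 298.15)

for T in (298.15, 333.15, 353.15):  # 25, 60, 80 degrees C
    print(f"T = {T - 273.15:5.1f} C -> E = {theoretical_potential(T):.3f} V")
```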
Effect of Pressure
Gas partial pressure has an important effect on membrane chemical degradation. For an operating fuel cell at constant temperature, the change of the Gibbs free energy with pressure is:

$dG = V_m\,dP$    (Eq2.12)

where $V_m$ is the molar volume. After integration (using the ideal gas law) and consideration of the hydrogen/oxygen fuel cell reaction, the relation becomes:

$\Delta G = \Delta G_0 + RT\ln\!\left(\frac{P_{H_2O}}{P_{H_2}\,P_{O_2}^{1/2}}\right)$    (Eq2.13)
Then:

$E = E_0 + \frac{RT}{nF}\ln\!\left(\frac{P_{H_2}\,P_{O_2}^{1/2}}{P_{H_2O}}\right)$    (Eq2.14)
Therefore, by Eq2.14 the cell potential is a function of temperature and of the reactant partial pressures:

$E = 1.229 + (T - 298.15)\frac{\Delta S}{nF} + \frac{RT}{nF}\ln\!\left(\frac{P_{H_2}\,P_{O_2}^{1/2}}{P_{H_2O}}\right)$    (Eq2.15)
Neglecting the changes of ΔH and ΔS with temperature (a very small error at temperatures below 100 °C), Eq2.15 becomes [2.7]:

$E = 1.229 - 0.85\times10^{-3}(T - 298.15) + 4.31\times10^{-5}\,T\left[\ln(P_{H_2}) + \tfrac{1}{2}\ln(P_{O_2})\right]$    (Eq2.16)

The partial pressures $P_{O_2}$ and $P_{H_2}$ at the cathode and anode sides are calculated by the equations given in [2.22] (Eq2.17).
Theoretical FC Efficiency:
In the case of the FC, the useful energy output is the electrical energy produced, and the energy input is the enthalpy of hydrogen. Figure 2.5 illustrates the energy inputs and output of the FC.
Figure.2.5. Energy inputs and output for FC as an energy conversion device.
The FC efficiency is expressed as the ratio between the Gibbs free energy and the enthalpy of the reaction:

$\eta = \frac{\Delta G}{\Delta H}$    (Eq2.18)

This ideal efficiency decreases with temperature; specifically, at 60 °C the efficiency of the FC is already reduced (Eq2.20).
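As a rough worked example (an illustration assuming $\Delta H = 286$ kJ.mol$^{-1}$, $\Delta G = 237.3$ kJ.mol$^{-1}$ at 25 °C, and a temperature-independent $\Delta S \approx -163.3$ J.mol$^{-1}$.K$^{-1}$):

$\eta_{25\,^{\circ}\mathrm{C}} = \frac{237.3}{286} \approx 83\%, \qquad \eta_{60\,^{\circ}\mathrm{C}} \approx \frac{286 - 333.15 \times 0.1633}{286} \approx 81\%$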
Fuel Cell voltage losses
Electrochemical reactions involve a transfer of electrical charge and a change in Gibbs energy [2.18]. The current density is the current (of electrons or ions) per unit area of the surface. By Faraday's law, the current density is:

$i = nFj$    (Eq2.21)

where $nF$ is the charge transferred (C.mol^-1) and $j$ is the flux of reactant per unit area (mol.s^-1.cm^-2).
In general, an electrochemical reaction involves either a reduction or an oxidation:

$Ox + ne^- \rightarrow Rd$    (Eq2.22)

$Rd \rightarrow Ox + ne^-$    (Eq2.23)
In the forward and backward reactions, the fluxes are specified by:

$j_f = k_f\,C_{Ox}$    (Eq2.24)

$j_b = k_b\,C_{Rd}$    (Eq2.25)

where $k_f$ and $k_b$ are the forward and backward rate coefficients, and $C_{Ox}$, $C_{Rd}$ are the surface concentrations of the reacting species.
The net current density, generated between the released and consumed electrons, is defined by:

$i = nF\left(k_f\,C_{Ox} - k_b\,C_{Rd}\right)$    (Eq2.26)

Using transition state theory, the net current density equation can be rewritten as:

$i = nF\left[k_{0,f}\,C_{Ox}\exp\!\left(\frac{-\alpha F E}{RT}\right) - k_{0,b}\,C_{Rd}\exp\!\left(\frac{(1-\alpha) F E}{RT}\right)\right]$    (Eq2.27)
At equilibrium, the potential is $E_r$ and the net current is equal to zero, although the reaction proceeds in both directions simultaneously. The rate at which these reactions proceed at equilibrium is called the exchange current density:

$i_0 = nF\,k_{0,f}\,C_{Ox}\exp\!\left(\frac{-\alpha F E_r}{RT}\right) = nF\,k_{0,b}\,C_{Rd}\exp\!\left(\frac{(1-\alpha) F E_r}{RT}\right)$    (Eq2.28)
By comparing the two equations, the relation between current density and potential (the Butler-Volmer equation) is obtained:

$i = i_0\left[\exp\!\left(\frac{-\alpha F (E - E_r)}{RT}\right) - \exp\!\left(\frac{(1-\alpha) F (E - E_r)}{RT}\right)\right]$    (Eq2.29)
Exchange Current Density
The exchange current density is not constant in chemical reactions: based on equation 2.27, it is a function of temperature, and it is also a function of the electrode catalyst loading and the catalyst specific surface area. The exchange current density at any temperature and pressure is specified by the equation below:

$i_0 = i_0^{ref}\,a_c\,L_c\left(\frac{P_r}{P_r^{ref}}\right)^{\gamma}\exp\!\left[\frac{-E_C}{RT}\left(1 - \frac{T}{T_{ref}}\right)\right]$    (Eq2.30)

where the reversible (equilibrium) potential $E_r$ is given by Eq2.31 [2.24], and:
$i_0^{ref}$ = reference exchange current density (at reference temperature and pressure, typically 25 °C and 101.25 kPa) per unit catalyst surface area;

$a_c$ = catalyst specific area (the theoretical limit for a Pt catalyst is 2400 cm^2.mg^-1, but state-of-the-art catalysts reach about 600-1000 cm^2.mg^-1, which is further diluted by the incorporation of the catalyst in the electrode structure by up to 30%);

$L_c$ = catalyst loading (state-of-the-art electrodes have 0.3-0.5 mgPt.cm^-2; lower loadings are possible but would result in lower cell voltages);

$P_r$ = reactant partial pressure, kPa; $P_r^{ref}$ = reference pressure, kPa; $\gamma$ = pressure coefficient (0.5 to 1.0); $E_C$ = activation energy, 66 kJ.mol^-1 for oxygen reduction on Pt [8]; R = gas constant, 8.314 J.mol^-1.K^-1; T = temperature, K; $T_{ref}$ = reference temperature, 298.15 K; $\alpha$ = 0.5 for the hydrogen fuel cell anode (with two electrons involved) and 0.1 to 0.5 for the cathode [2.18].

If the exchange current density is high, the surface of the electrodes is more active. The overpotential on the cathode is much bigger than on the anode, because the exchange current density at the anode is much larger than at the cathode (10^-4 vs. 10^-9 A.cm^-2 Pt at 25 °C) [2.21].
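As an illustration, the sketch below evaluates Eq2.30 in Python; all default parameter values are assumptions of typical order of magnitude for a Pt cathode, not values identified in this work.

```python
import math

# Minimal sketch of Eq2.30: exchange current density vs. temperature and
# pressure. Defaults are illustrative assumptions (typical Pt cathode values).

def exchange_current_density(T, P_r, i0_ref=3e-9, a_c=800.0, L_c=0.4,
                             P_ref=101.25, gamma=1.0, E_c=66e3,
                             T_ref=298.15, R=8.314):
    """i0 in A cm^-2: i0_ref [A cm^-2 Pt], a_c [cm^2 mg^-1], L_c [mg cm^-2]."""
    return (i0_ref * a_c * L_c
            * (P_r / P_ref) ** gamma
            * math.exp(-E_c / (R * T) * (1.0 - T / T_ref)))

# Example: 80 degrees C, 150 kPa oxygen partial pressure
print(exchange_current_density(T=353.15, P_r=150.0))
```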
Static characteristic (polarization curve)
The static characteristic of the fuel cell is represented by the polarization curve. First, activation overvoltage occurs at both FC electrodes, anode and cathode. However, the hydrogen oxidation reaction at the anode is very rapid, while the oxygen reduction reaction at the cathode is much slower. Thus, the voltage drop resulting from activation losses is dominated by the cathode reaction conditions. The relation between the activation overvoltage and the current density at the anode and cathode can be obtained from the following equation [2.18], [2.23]:

$\Delta V_{act} = \frac{RT}{\alpha F}\ln\!\left(\frac{i}{i_0}\right)$    (Eq2.32)
The specific reaction surface area is given by:

$a = \frac{m_{cat}\,A_s}{\delta_{CL}}$    (Eq2.33)

where $m_{cat}$ is the catalyst mass loading per unit area of the cathode, $A_s$ is the catalyst surface area per unit mass of the catalyst, and $\delta_{CL}$ is the catalyst layer thickness [2.24].
Decrease the activation losses
The exchange current density and the activation losses are strongly related: to reduce the activation losses, $i_0$ must be increased. For this reason, especially on the cathode side, the value of $i_0$ is the most important factor for improving FC performance. This is achieved in the following ways:

1) Increasing the temperature and pressure of the cell; 2) Applying more effective catalysts; 3) Increasing the roughness of the electrodes; 4) Increasing the reactant concentration (for example, using pure O_2 instead of air).

Note that activation losses have a significant effect in low- and medium-temperature FCs; at high temperature and pressure they are less important.
An empirical equation for the activation loss is:

$V_{act} = A\ln\!\left(\frac{i}{b}\right)$    (Eq2.34)

where A and b depend on the electrode and cell conditions, and $V_{act}$ is only valid for i > b (b = 0.04 mA.cm^-2).
Internal and ionic resistance
The overall ohmic voltage drop is dominated by the membrane layer. The polymer membrane used is Nafion, made by DuPont, which is widely used in PEMFCs. Nafion conductivity is highly dependent on the membrane water content and the temperature: generally, the protonic conductivity of Nafion increases linearly with increasing water content and exponentially with increasing temperature. Hence, the conductivity of the membrane can be expressed by:

$\sigma_m = \sigma_{303}\exp\!\left[1268\left(\frac{1}{303} - \frac{1}{T}\right)\right]$    (Eq2.35)

where:

$\sigma_{303} = 0.005139\,\lambda - 0.00326$    (Eq2.36)
There are several mechanisms of water transport across a polymer membrane, notably water diffusion. The water diffusivity in Nafion can be expressed by expressions that depend on the water content λ (Eq2.37, Eq2.38).
The membrane water content λ generally varies between 0 and 14, which corresponds to relative humidities of 0% and 100% (ideal conditions) respectively. However, λ can take values as high as 22 and 23 under supersaturated conditions. First, the water activity of the gas at each electrode is calculated from the water vapor partial pressure [2.1]:

$a_i = \frac{P_{w,i}}{P_{sat}}$    (Eq2.39)
The vapor saturation pressure is a function of temperature, which is given by [2.25]:

$\log_{10} P_{sat} = -2.1794 + 0.02953\,T_c - 9.1837\times10^{-5}\,T_c^2 + 1.4454\times10^{-7}\,T_c^3$    (Eq2.40)

where $T_c$ is the temperature in °C.
In the case of a gas, the activity is equivalent to the relative humidity; the index i denotes either the anode (a) or the cathode (c). The membrane water content is calculated by [2.25]:

$\lambda = \begin{cases} 0.043 + 17.81\,a - 39.85\,a^2 + 36\,a^3 & 0 < a \le 1 \\ 14 + 1.4\,(a - 1) & 1 < a \le 3 \end{cases}$    (Eq2.41)
The average water activity $a_m$ is given by:

$a_m = \frac{a_a + a_c}{2}$    (Eq2.42)

Thus, the membrane water content $\lambda_m$ is calculated by Eq2.41 using the average water activity $a_m$ between the anode and cathode water activities. Since the proton conductivity of a polymer membrane is strongly dependent on the membrane water content λ, the internal electrical resistance is a function of the membrane conductivity $\sigma_m$ and the membrane thickness $t_m$ [2.2]:
$R_m = \frac{t_m}{\sigma_m\,A}$    (Eq2.43)

Finally, the ohmic overvoltage due to the membrane resistance $R_m$ in PEMFCs is given by the following expression:

$\Delta V_{ohm} = I\,R_m$    (Eq2.44)
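The membrane model above lends itself to a compact implementation. The following minimal sketch evaluates Eqs. 2.35-2.36 and 2.43-2.44, assuming an illustrative Nafion 117 thickness and a unit active area (both assumptions of this sketch, not the cell studied here):

```python
import math

# Minimal sketch of the Springer-type membrane model: ohmic overvoltage from
# water content and temperature. Thickness/area values are assumptions.

def membrane_conductivity(lam, T):
    """Proton conductivity (S/cm) vs. water content lam and temperature T (K)."""
    sigma_303 = 0.005139 * lam - 0.00326                           # Eq2.36
    return sigma_303 * math.exp(1268.0 * (1.0 / 303.0 - 1.0 / T))  # Eq2.35

def ohmic_overvoltage(i, lam, T, t_m=0.0178, area=1.0):
    """V_ohm = I * R_m with R_m = t_m / (sigma_m * A); t_m in cm, area in cm^2."""
    R_m = t_m / (membrane_conductivity(lam, T) * area)   # Eq2.43
    return i * R_m                                       # Eq2.44

# Example: 0.6 A through a well-hydrated membrane (lam = 14) at 80 degrees C
print(ohmic_overvoltage(i=0.6, lam=14.0, T=353.15))
```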
Concentration losses
The voltage drop resulting from concentration losses can be approximated by the equation of [2.23] (Eq2.45), whose parameters C, d and $i_{max}$ are constants that depend on the temperature, the partial pressure of oxygen in the cathode, and the vapor partial pressure; they can be determined empirically. The parameter $i_{max}$ is the current density at which a precipitous voltage drop occurs. Alternatively, an empirical equation can be stated as follows [2.18]:

$\Delta V_{conc} = m\exp(n\,i)$    (Eq2.46)

The value of m will typically be about $3\times10^{-5}$ V, and n about $8\times10^{-3}$ cm^2.mA^-1.
Effective factor in concentration losses
1) Hydrogen is supplied from some kind of reformer; 2) The air supply is not well circulated (a problem at high current);

3) Removal of water; 4) Internal currents and crossover of reactants.

The polymer electrolyte is not electrically conductive. However, some electrons pass through the membrane, and some hydrogen diffuses from the anode to the cathode. This fuel crossover and the so-called internal current are essentially the same phenomenon.
The total electrical current is the sum of the external (useful) current and the internal/crossover loss current:

$I_{total} = I_{ext} + I_{loss}$    (Eq2.47)

The current density is obtained from the current divided by the electrode active area:

$i = \frac{I}{A}$    (Eq2.48)
The Charge Double Layer
When two different materials are in contact, charge transfers from one to the other. In an FC, a charge double layer forms between the electrons in the electrodes and the ions in the electrolyte. As in the example of Figure 2.7, on the cathode side electrons gather on the surface of the electrode and H^+ ions are attracted to the surface of the electrolyte; as a result, an electrical voltage is generated. These charges accumulate near the electrode-electrolyte interface and behave like a capacitor in an electric circuit. Indeed, if the current suddenly changes, the voltage takes some time to follow the change of load. The capacitance of a capacitor is determined by:

$C = \varepsilon\,\frac{A}{d}$    (Eq2.49)
where ε is the electrical permittivity, A is the real surface area of the electrode, and d is the separation of the plates. The voltage across this double layer is defined by Eq2.51, and the FC voltage is then defined by Eq2.52, where $R_a$ is a combination of the activation and concentration resistances:

$R_a = R_{act} + R_{conc}$    (Eq2.53)
These parameters change frequently with the electrochemical characteristics, humidity, temperature, pressure, and aging effects.
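To illustrate the first-order lag introduced by the double layer, the following minimal sketch computes the voltage response to a current step, assuming illustrative values of $R_a$ and C (not values identified for the cell studied here):

```python
import math

# Minimal sketch of the charge double layer effect (Eqs. 2.49-2.53): the
# voltage across R_a follows a current step with time constant tau = R_a * C.
# R_a and C_dl below are illustrative assumptions.

R_a = 0.2      # ohm, activation + concentration resistance (Eq2.53)
C_dl = 3.0     # F, double layer capacitance (Eq2.49)
tau = R_a * C_dl

def v_dynamic(t, i_step):
    """Voltage drop across R_a after a current step of i_step at t = 0."""
    return i_step * R_a * (1.0 - math.exp(-t / tau))

for t in (0.0, tau, 3 * tau):
    print(f"t = {t:4.1f} s -> dV = {v_dynamic(t, i_step=10.0):.3f} V")
```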
Polarization Curve
The electrical domain describes the polarization curve and the associated losses (activation, ohmic and concentration). Taking the latter into account, the cell voltage $E_{cell}$ produced by the FC can be expressed by:

$E_{cell} = E - E_{loss}$    (Eq2.54)

where E, the theoretical potential, can be expressed as the difference between the reversible potentials at the cathode and the anode [2.18]:

$E = E_{cathode} - E_{anode}$    (Eq2.55)

By comparison, $E_{loss}$ is the voltage drop resulting from the losses (activation, ohmic and concentration):

$E_{loss} = \Delta V_{act} + \Delta V_{ohm} + \Delta V_{conc}$    (Eq2.56)
The most important characteristic curve of an FC is the polarization curve, shown in Figure 2.8.
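To make the construction of the polarization curve concrete, the following minimal sketch combines the Nernst potential with the activation (Eq2.34), ohmic (Eq2.44) and concentration (Eq2.46) losses; all constants are illustrative textbook-order values, not parameters identified in this work:

```python
import math
import numpy as np

# Minimal sketch of Eq2.54: cell voltage = Nernst potential minus activation,
# ohmic and concentration losses. All constants are illustrative assumptions.

E_nernst = 1.2      # V, from Eq2.16 at the chosen operating point
A_tafel = 0.06      # V, Tafel constant A in Eq2.34
b = 4e-5            # A cm^-2 (0.04 mA cm^-2, as in Eq2.34)
r_ohm = 0.15        # ohm cm^2, area-specific ohmic resistance
m, n = 3e-5, 8.0    # V and cm^2 A^-1 (n = 8e-3 cm^2 mA^-1, Eq2.46)

def cell_voltage(i):
    """Cell voltage (V) at current density i (A cm^-2), valid for i > b."""
    v_act = A_tafel * math.log(i / b)   # activation loss, Eq2.34
    v_ohm = r_ohm * i                   # ohmic loss, Eq2.44
    v_conc = m * math.exp(n * i)        # concentration loss, Eq2.46
    return E_nernst - v_act - v_ohm - v_conc

for i in np.linspace(0.05, 1.2, 6):
    print(f"i = {i:.2f} A/cm2 -> V = {cell_voltage(i):.3f} V")
```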
Thermal Domain
The thermal domain describes the heat generation, the heat exchanges by convection in the channels, the heat diffusion by conduction or by mass transport, radiation, and natural convection. Besides, in order to improve the accuracy of the thermal domain, the water phase change has been taken into consideration. During stack operation, gaseous water condenses into liquid when the water vapor pressure reaches the saturation pressure; inversely, if the vapor pressure decreases, the liquid water can evaporate. During a phase change the temperature remains constant, but a heat exchange called "latent heat" takes place. The effect of the water phase change on the temperature distribution in a PEMFC has been studied in the literature [2.25], [2.26], [2.27]; these authors have demonstrated that the water phase change has a large influence on the final temperature predicted by the thermal domain of a PEMFC stack. The net heat generated by the chemical reaction inside the FC, which causes the rising or falling of the temperature, can be written as:

$m_{st}\,C_{st}\,\frac{dT_{st}}{dt} = \dot{Q}_{net}$    (Eq2.57)

$\dot{Q}_{net} = \dot{Q}_{chem} - \dot{Q}_{elec} - \dot{Q}_{sens/latent} - \dot{Q}_{loss}$    (Eq2.58)
All the mathematical expressions of Eq2.57 are given in detail in [2.18]. At steady state, $\dot{Q}_{net} = 0$ and consequently the FC operates at a constant temperature. During transients (e.g. load changes, operating condition changes, faults), the temperature of the FC stack rises or drops according to Eq2.57. In addition, the efficiency and hydrogen consumption have been implemented in the FC model (Figure 2.9). The volumetric flow rate of hydrogen consumption in slpm (standard liters per minute) is given by the equation of [2.18] (Eq2.59). In turn, the FC efficiency is defined as the ratio between the electricity produced and the hydrogen consumed [2.23]:

$\eta_{FC} = \frac{P_{el}}{\dot{W}_{H_2}}$    (Eq2.60)
The electric power produced is the product of the FC stack voltage and current:

$P_{el} = V_{st}\,I_{st}$    (Eq2.61)

According to Faraday's law of electrolysis, the hydrogen consumed is proportional to the FC stack current:

$\dot{n}_{H_2} = \frac{N_{cell}\,I_{st}}{2F}$    (Eq2.62)

Hence, the energy value of the hydrogen consumed, in watts, is given by:

$\dot{W}_{H_2} = \dot{n}_{H_2}\,\Delta H$    (Eq2.63)
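A minimal sketch of Eqs. 2.60-2.63, assuming an illustrative 80-cell stack operating at 48 V and 60 A (assumed values, not the stack studied in this work):

```python
# Minimal sketch: hydrogen consumption and efficiency from Faraday's law.

F = 96485.0          # C mol^-1, Faraday constant
LHV_H2 = 241.98e3    # J mol^-1, lower heating value of hydrogen (Eq2.3)

def h2_molar_flow(n_cells, current):
    """mol s^-1 of hydrogen consumed by the stack (Eq2.62)."""
    return n_cells * current / (2.0 * F)

def stack_efficiency(v_stack, current, n_cells):
    """Electrical power over hydrogen energy flow (Eqs. 2.60-2.63)."""
    p_el = v_stack * current                          # Eq2.61
    p_h2 = h2_molar_flow(n_cells, current) * LHV_H2   # Eq2.63
    return p_el / p_h2

print(stack_efficiency(v_stack=48.0, current=60.0, n_cells=80))  # ~0.48
```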
Effect of the operating condition on performance of the fuel cell
3.1. Temperature
Effect of temperature on the activation over voltage
Raising the temperature improves the activation losses. Although the temperature appears in the Tafel constant, the accompanying increase of the exchange current density is more important than any increase of the Tafel constant, so this voltage drop, which is highly nonlinear, decreases. According to Figure 2.10, the activation voltage curve is reduced by increasing the temperature.
Effect of temperature on the ohmic over voltage
In the majority of FCs, the resistance is mainly caused by the electrolyte, the cell interconnects, and the bipolar plates. Three ways to reduce the internal impedance of the FC are as follows [2.18]:

1) Use electrodes with the best feasible conductivity.

2) Good design and use of suitable materials for the bipolar plates or cell interconnects.

3) Make the electrolyte as thin as possible.
The effect of a temperature rise on the ohmic losses is demonstrated in Figure 2.11. FC operation generally improves with rising temperature. Nevertheless, the increase of temperature has a negative effect on the theoretical voltage (see Eq2.2). On the other hand, increasing the temperature results in a reduction of the activation and concentration losses.
Pressure
Hydrogen and oxygen must be pressurized at the fuel cell inlet. The performance of the FC changes with the pressure as follows:
Effect of pressure on the activation losses
Activation losses are related to the sluggish electrode kinetics. The rise of the exchange current density with pressure reduces the activation overvoltage [2.18].
Effect of pressure on the concentration losses
The reactant concentration at the catalyst surface depends on the current density. Increasing the pressure improves the current density and thus reduces the concentration voltage losses (see Figure 2.15).
Cell voltage losses depending on pressure

The aim of raising the pressure is to increase the FC voltage. An FC operates at ambient pressure (1 bar) or it may be pressurized. The FC potential improves when the pressure is increased, as illustrated in Figure 2.16 [2.2], [2.18].
Humidity
Effect of humidity on the ohmic resistance

A lack of water in the PEMFC causes the membrane to become dry. This phenomenon hinders proton transfer, reduces the conductivity, and increases the ohmic resistance, thus decreasing the power generation efficiency. The different curves of ohmic resistance are illustrated in Figure 2.17. Increasing humidity improves the conductivity of the membrane; therefore, the FC voltage is modified, as illustrated in Figure 2.18. Table 2.1 summarizes the effects of the operating parameters (e.g. temperature, mass flow, humidity, current) on the different losses, such as the activation, concentration and ohmic losses in the FC. The symbols and abbreviations used in this table are:

B, i_L: empirical constants that are affected by the operating conditions in an unknown fashion.
N/A: The parameter does not apply under any circumstances.

Yes: Indicates that the operating parameter can be incorporated into the model.
(1), (2), (8), (9), (11), (12): It is assumed that the stack is operated so that these parameters do not affect the model. The stack should be operated so that the membrane is well hydrated without stack flooding. This assumption may not hold in (1) and (2); in that case the opposition would be empirically modeled with respect to the significant operating parameters.

(3): The activation loss is difficult to separate from the internal current. In the case where it is too difficult to model, it should be omitted from the model; if this were done, the model would only be valid for currents above ~0.3 A.

(4), (5), (6), and (7): These parameters will only affect the performance of V_ohm when the stack is run in extreme cases. At this stage, the limits of these extreme cases are unknown. If one of these parameters greatly affects the stack resistance, then the stack should not be operated at a level where the resistance starts to change.
(10), ( 13
PEMFC diagnosis:
Introduction of Fault Diagnosis
The fault diagnosis includes the following three aspects:

• Fault detection: discovering an occurring fault with intrusive and/or non-intrusive methods.

• Fault isolation: finding the location of the fault.

• Fault analysis and identification (classification): determining the type and magnitude of the faults and estimating and preventing future faults based on background studies [2.29]. The main aim of the present research work is to estimate the state of health of the FC so as to adapt the power control of the FC, notably through the management of degraded modes. In brief, neural networks have been used in both methods because they are the best choice for approximating nonlinear behavior; however, the training process needs a large number of data under different operating conditions, which may be costly and time-consuming to gather.
Nowadays, modern control systems must take into account different issues such as availability, cost efficiency, reliability, operating safety and environmental protection. This requires a fault diagnosis system that is capable of detecting plant, actuator and sensor faults when they occur, and of identifying and isolating the faulty component. The faults acting upon a system can be divided into three types, see Figure 2.19 [2.33]:
1) Sensor (instrument) faults: faults acting on the sensors.

2) Actuator faults: faults acting on the actuators.

3) Component (system) faults: faults acting upon the system or the process we wish to diagnose.
PEMFC Fault Conditions
All the possible faults that affect the performance of PEMFCs, together with the available fault detection techniques, have been compared and summarized.
Fault Tolerance Strategies
Figure 2.20 gives the diagram of fault-handling strategies, which include planned and unplanned maintenance as well as planned and unplanned repair. Planned maintenance is based on fixed times and/or fixed run hours; an improvement is to leave the fixed schedule and apply maintenance on demand, based on the observed real status. Planned repair usually takes place within set-down periods, while unplanned repair is forced by faulty components. A reconfiguration is possible if redundant components can be used, which requires a redesigned fault-tolerant system. Hence, maintenance procedures are performed to prevent failures, repair procedures to remove failures and faults, and reconfiguration to prevent failures through redundant components, usually with some degradation of functions [2.35].
Diagnosis levels
In order to increase the reliability and durability of the FC, one of the most important tools is fault diagnosis (FD). Different diagnosis approaches have been developed; FD means detecting, isolating and analyzing the faults that happen in the FC under different operating conditions. Fault detection tracks the faults as they occur during operation, while fault isolation defines the place of the fault in the system. Typically, two basic approaches can be considered: model-based and non-model-based [2.31], [2.32].
Model-based / non-model-based methods
Model-based diagnosis methods compare the available measurements of the real system (experimental) with a simulation model of the system (see Figure 2.21). These methods are categorized in three groups [2.31]:

• Physical models (algebraic and differential equations);

• Experimental models (nonlinear and complex models);

• Combinations of the physical and the experimental.
Residual generation and evaluation
The purpose of diagnosis is to generate a fault-indicating signal, the residual, using the available input and output information from the monitored system. This auxiliary signal is created to reflect the onset of possible faults in the analyzed system. The residual should normally be zero or close to zero when no fault is present, but should be distinguishable from zero when a fault occurs. Ideally, the residual is independent of the system input and output. The algorithm used to generate residuals is called a residual generator. Residual generation is a procedure for extracting fault signals from the system, with the fault signal represented by the residual signal r. The residual should ideally carry only fault information, and to ensure reliable fault detection, the loss of fault information during residual generation should be as small as possible [2.36].

In each fault detection algorithm, there should be an evaluation component based on the residual, used for the analytical decision; it can be implemented with different methods such as fuzzy logic or neural networks. At this stage, the decision about the existence of a fault is made, together with a possible indication of this event, generating the corresponding fault signal. This signal should carry information about the effect of the fault on the residual set so that the fault isolation module can isolate the fault [2.37], [2.38].
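A minimal sketch of this residual generation and evaluation scheme, with an assumed model output and an illustrative detection threshold (both assumptions of this sketch, not values used in this work):

```python
# Minimal sketch: residual r = measured - model output should stay near zero
# in healthy operation; a fault is flagged when |r| exceeds a threshold.

def residual(v_measured, v_model):
    return v_measured - v_model

def evaluate(residuals, threshold=0.05):
    """Flag a fault when the residual magnitude exceeds the threshold."""
    return [abs(r) > threshold for r in residuals]

v_meas = [0.70, 0.69, 0.62, 0.55]   # measured cell voltages (V)
v_sim = [0.70, 0.70, 0.70, 0.70]    # model predictions (V)
flags = evaluate([residual(m, s) for m, s in zip(v_meas, v_sim)])
print(flags)   # [False, False, True, True] -> fault detected
```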
Different Kinds of Fault Diagnosis
The various diagnostic tools employed in the characterization and determination of fuel cell performance can be summarized into two general categories:

1) Electrochemical techniques.

2) Physical/chemical methods. Fault diagnosis consists of three levels (fault detection, isolation and analysis): data accumulation from the system, fault diagnosis, and fault classification, explained as follows:

1) Data accumulation: for this purpose, different methods are proposed, specifically electrochemical impedance spectroscopy (EIS), linear sweep voltammetry (LSV), cyclic voltammetry (CV), etc. The goal is to capture the variation of the outputs under different operating conditions of the inputs. 2) Fault extraction from healthy mode: depending on the fault, features are extracted from the original data of the system by miscellaneous means such as FFT, WT and STFT. 3) Fault classification: at this stage, methods such as NN, FL, neuro-fuzzy and BN are applied more than other techniques.
Data accumulation methods
Polarization curve
Different parameters can be characterized from the polarization curve, such as the cell polarization resistance, the OCV, the exchange current density, the Tafel slope, etc.
Hysteresis
By recording the plot while increasing the current density (up to the limiting current) and then decreasing it, a hysteresis is created between the two resulting curves. This hysteresis can be used to recognize flooding and drying. Indeed, if at high current density the downward I(V) curve is lower than the upward I(V) curve, flooding is indicated (at high current density more water is produced by the chemical reaction); if the downward I(V) curve is higher than the upward one, drying is indicated. On its own, this method is not sufficient for fault diagnosis in an FC: the data are not sufficient to characterize the fuel cell performance (such as electrode diffusion, membrane resistance, etc.). It is used at steady state but is not a suitable method to evaluate the losses [2.39].
Electrochemical Impedance Spectroscopy
Electrochemical impedance spectroscopy (EIS) uses a small AC perturbation signal at various frequencies (from 10 kHz to 1 Hz) in the dynamic state of the cell. The impedance of the cell is obtained by taking the ratio of the AC voltage to the AC current (Figure 2.22). This technique can be applied to any electrochemical system (half-cell, single cell, stack, etc.). It is widely used to characterize water management (flooding and drying). Many parameters can be obtained from this method, such as the activation and concentration resistances and the electrolyte resistance. EIS is difficult to utilize in high-power FCs [2.30]. It is an in-situ and non-intrusive method for PEMFC fault diagnosis.
Membrane resistance measurement methods
Current resistance measurement
The three sources of ohmic voltage loss are: a) resistance to ion movement inside the electrolyte; b) resistance to electron transport inside the cell components; c) contact resistance.
Current interrupt method
This method works in the time domain. The cell current is quickly interrupted and the cell voltage is measured before and during the interrupt. This method is widely applied in electrochemical devices (fuel cells, batteries, etc.) to obtain an estimate of the ohmic resistance. The benefit of this method is that no extra equipment is needed, because the interrupt can come directly from the load. The downside is that the extracted data are degraded by long cable connections and that the interrupt puts a critical perturbation on the cell [2.40].
High frequency resistance
To measure the internal resistance of the FC, a small AC signal is applied through the electronic load on top of the adjusted DC load current. This method is suitable for continual and periodic application during normal cell operation.
High frequency milliohm meter method or AC resistance method
In this method, an external AC milliohm meter is used to inject a signal and measure the load behavior, and it is connected in parallel to the circuit (see Figure 2.26). Based on the AC signal superimposed on the DC current, the smallest variations in the FC can be measured. Hence, this approach is interesting for investigating the functioning of the FC. However, the accuracy of this method when determining high resistances is low [2.40].
Pressure drop method
Due to friction in the electrodes and the gas flow channels, a pressure drop of approximately 30% is created between the inlet and the outlet [2.41]. According to Darcy's law for gas flow, the gas pressure drop increases with the presence of water in the fuel cell; in other words, the flooding level has a direct impact on the pressure drop. Moreover, the augmentation of water presence is related to decreasing temperature and increasing current. Water accumulates on the cathode side more than on the anode side, because the air flow rate is slower than the hydrogen flow rate (the dynamic viscosity of hydrogen is low compared to that of oxygen). Indeed, flooding usually happens on the cathode side [2.42].
Considerations on fault problems in PEMFC
The PEMFC is an electrochemical system based on electro-catalytic reactions: hydrogen oxidation on the anode side and oxygen reduction on the cathode side. In an FC, failures can be caused by:

1) Long-term operation (natural ageing); 2) Operational incidents, such as MEA contamination or reactant starvation (see Figure 2.28).

A common symptom of these failures is the voltage: if a fault occurs in the FC, the voltage can either increase or decrease according to the fault. In summary, the FC stack voltage is a first indication of a degraded working mode. In addition, water management and temperature effects are crucially important for the healthy operation of a PEMFC.
Water management in PEMFC
Electro-osmotic drag, back-diffusion, and the water produced by the reaction play essential roles in water management. Drying occurs on the anode side at high current density, because electro-osmosis overcomes the back-diffusion phenomenon. The effects of water accumulation in the FC include [2.45]: 1) Drying;

2) Flooding; 3) A benefit: increased proton conductivity; 4) Blocking of the gas diffusion layer, which can lead to starvation; 5) With large quantities of water, mechanical degradation, for instance corrosion and contamination.
Drying
Water is essential for proton conductivity in the membrane and the active layers, through the dissociation of the sulfonic acid bonds. A lack of water in the fuel cell impedes the protons from reaching the catalyst surface area, so the activation loss increases. Drying faults are isolated by comparing the osmotic drag on the anode side with the back-diffusion on the cathode side (especially at high current, the electro-osmosis on the anode side dominates); in addition, more water is created on the cathode side than on the anode side. Eventually, drying decreases the lifetime of the fuel cell because it creates holes in the membrane [2.40]. Drying generally happens on the anode side; during long-term operation of the fuel cell, drying provokes irreversible damage to the membrane and can cause it to break.

The main factors that create drying are as follows:

1) Feeding the inlet gases without sufficient humidification;

2) An increase of the cell temperature, which enhances evaporation; 3) Electro-osmosis, particularly at high current [2.45].
Flooding
Flooding can happen on both the cathode and anode sides. It occurs when water accumulates in the flow field channels and/or the electrodes of the cell, blocking the gas channels; after several minutes, the droplets drive the voltage to drop quickly. Flooding occurs under all operating conditions, especially at high current density. Short-term flooding can be reversible; however, when the oxygen feed is blocked over long-term operation, it conduces to mechanical degradation of the MEA material [2.40].

Flooding increases the mass transport losses (at high current density), so the performance of the FC is reduced. However, the voltage can be recovered by fast purging at the anode and cathode. Flooding affects the lifetime and durability of the FC in long-term operation: in the presence of water, corrosion occurs in the electrodes, the gas diffusion media, and the membrane. Therefore, the ohmic losses of the FC increase and its performance decreases [2.45].
Cathode flooding
Water transport on the cathode side is caused by the following factors:

1) Water production by the oxygen reduction reaction;

2) Electro-osmosis, the phenomenon that pulls water molecules from the anode to the cathode; 3) Water saturation due to over-humidified inlet air.

In addition, the factors that eliminate water on the cathode side are as follows:

1) Back-diffusion, which happens when the water quantity on the cathode side is greater than on the anode side; moreover, the influence of back-diffusion is greater than that of electro-osmosis at low current. 2) Water evaporation, a way to speed up the removal of water from the cell [2.45].
Anode flooding
Flooding occurs more on the cathode side, because of the water created there, than on the anode side. 1) Flooding at the anode mostly happens at low current density; moreover, low temperature and high condensation in the anode channel lead to anode flooding. 2) Back-diffusion can be a factor of flooding on the anode side.

3) Water injection for cooling and humidification can cause flooding [2.45].
In brief, we need to avoid membrane drying on the anode side and flooding on the cathode side.
Effect of operating conditions on water management (flooding and drying)

To avoid FC failures due to improper water management, solutions based on varying the operating conditions (pressure drop, temperature gradient, mass flow control by the compressor, etc.) have been suggested by many authors.
Humidity
To obtain high performance in an FC, the gas inlets are typically humidified. Buechi and Srinivasan found that performance with humidified inlet gases is 40% greater than when the FC operates without humidification. Besides, Natarajan and Nguyen declare that increasing the humidity on the anode side reduces the water transfer due to back-diffusion, improves the current distribution, etc. [2.46].
Flow rate
Nadia (2008) investigated the air flow rate and stated that a low flow rate is beneficial for keeping water in dry cells but can cause flooding. Hakenjos et al. mention that the FC performance increases (higher output current density) with increasing flow rate, due to the higher stoichiometry and the water removed from the flooded cell [2.40].
Temperature
Pressure
Under normal operating conditions with homogeneous pressure, the electro-osmotic flow rate is greater than the back-diffusion flow rate. Wilkinson et al. observed that water produced at the cathode side is drawn to the anode by the concentration gradient, which helps prevent flooding. Elevated temperature (evaporation) and the gas flow (which carries away dissolved water) reduce flooding. However, it cannot be guaranteed that drying never happens in the membrane.
Thermal management on PEM FC
Thermal management plays an important role in increasing or decreasing the performance of the FC.
Influence of freezing
Freezing can affect the durability of the FC via thermal and mechanical stress. Decreasing temperature causes a reduction in the proton conductivity of the Nafion membrane. The components most influenced by freezing temperatures are the backing layers, the gas diffusion layer, and the membrane (the latter rarely, because the water in the membrane is strongly bonded to the cations).
Start up from freezing
When the water on the cathode side is not removed during start-up at temperatures below zero, ice covers the surface of the GDL and blocks the catalyst layer. Finally, the FC voltage drops and the FC may even shut down.
Influence of high temperature

Operating an FC at high temperature has a few benefits, such as faster electrochemical kinetics (and hence higher efficiency), better endurance against contaminants, and improved water management and cooling. However, the disadvantages are degradation of the cell and decreases in the durability and lifetime of the FC.

1) At high temperature, the sintering and agglomeration of particles increase.

2) Operation at high temperature breaks oxygen molecules into oxygen atoms, and the reactions with carbon and water increase, which results in increased contamination. 3) When operating the FC at high temperature, the proton conductivity may diminish at low relative humidities.
Degradation of electrode/electro catalyst
One of the main obstacles to FC commercialization is the corrosion of the electro-catalyst layer. Conventionally, the catalyst layer is made of platinum (Pt) or a platinum alloy. The same material is usually used for the anode and cathode electrodes, on a carbon mechanical support, with the platinum catalyst covered by a thin carbon layer. Degradation of the catalyst and electrode means loss of, and changes in, the structure of the platinum. Carbon corrosion manifests as the loss of carbon along the Pt surface. Two factors, the humidity level and the temperature, are the main aspects contributing to corrosion [2.45].
Cathode corrosion
The electrochemically active surface area (EASA) decreases with the operating time of the FC. The EASA losses are due to the Pt particle distribution and the long-run operating conditions of the FC.

1) Generally, cell potential cycling is the most serious influence contributing to Pt agglomeration and oxidation, and thus to the reduction of the EASA. Pt particle sizes grow due to potential cycling, and this growth is accelerated compared to operation at constant potential.

2) Variable temperature during operation: principally, the voltage of the FC increases with temperature, while the negative effect is that the Pt particles grow fast.

3) Low humidity of the inlet gases increases the lifetime of the catalyst, because a higher humidification level of the gases results in growth of the catalyst particles [2.45].
Anode corrosion
Wolfgang et al. found from long-term operation that the anode is not impacted by Pt agglomeration/sintering, dissolution, or oxidation [2.45].
Corrosion of gas diffusion layer (GDL)
Carbon corrosion has a negative impact on the catalyst properties and a consequent negative effect on the performance of the FC. Carbon corrosion occurs through the following factors:

1) Potential cycling: especially at high levels and constant voltage, carbon corrosion increases.

2) Humidity influences the carbon losses. The GDL can handle water management through special fabrics such as hydrophobic materials, whose ability to remove water improves the gas diffusion. Higher hydrophilicity, by contrast, means that water remains in the GDL, obstructs the pores, and results in a reduction of the FC performance. 3) The effect of temperature on GDL corrosion is complicated: some researchers believe that it has no effect, while others, such as Wolfgang et al., monitored and measured the carbon weight loss [2.45].
Chemical and mechanical degradation of the membrane
Although Nafion membranes have long lifetimes, in FC applications they degrade very quickly (especially in electrical applications during potential cycling). Many factors influence the degradation of the membrane, but two of them are the most important:

1) The production of hydroxyl (OH) and peroxyl (OOH) radicals from hydrogen peroxide (H2O2), which chemically attack the polymer. 2) The chemical attack due to transient operating conditions (potential, humidification cycling and temperature), which causes degradation of the membrane [2.40].
Corrosion and mechanical degradation of the bipolar plates and gaskets

The three main degradation mechanisms are as follows:

1) The bipolar plate material dissolves in water and moves into the membrane; 2) The ohmic resistance increases through the formation of a resistive surface layer on the plate; 3) The pressure used for sealing causes deformation of the plates [2.44].
Contamination of the cell
Contaminants are produced inside the cell or are carried into the cell with the inlet gases. They affect the performance and life of the FC.
Contamination of the electrodes/electro catalyst
Carbon monoxide is harmful for the electro-catalyst. CO contamination only happens on the anode side: CO molecules adsorb on the Pt catalyst layer and block the hydrogen from reaching the Pt particles. This process happens over a long time. The voltage drop can be recovered by injecting air into the fuel stream, because the CO can be burnt by the air.
Contamination of the membrane and starvation
Because of the conductivity and the low level of water at the cathode, contamination in the membrane causes the maximum current density to diminish [2.45].

Starvation degrades the FC performance and makes the cell voltage drop. One of the factors that causes starvation is the generation of hydrogen at the cathode and of oxygen at the anode.
Faults synthesis
A summary of the major failure modes is presented in Table 2.3. In most cases, a combination of the inherent reactivity of the component materials, harsh operating conditions, contamination, and poor design is responsible for the degradation.
Neural network
A deep review of the system is essential for determining the fault diagnosis method (FDM). Since many physical parameters of the PEMFC are unknown, the artificial neural network (ANN) is one of the most interesting FDMs compared to other methods (for instance fuzzy logic, support vector machines and Bayesian networks). An NN is a combination of numerous neurons connected together via weights. The ANN is a powerful tool for fault diagnosis and for nonlinear system modeling: it has the capability to learn and build nonlinear mappings of the system and is therefore a good solution for modeling complex systems. The fundamental unit of an ANN is the neuron. Based on the structure of the neurons, there are three important topologies: single-layer feed-forward networks, multilayer feed-forward networks, and recurrent networks. In feed-forward networks all input signals flow in one direction towards the output, whereas in recurrent ANNs the outputs of some neurons are fed back either to the same neurons or to neurons in former layers. The MLP type is the most common ANN applied to PEMFC modeling. Figure 2.29 illustrates an example of an MLP NN with two hidden layers; in this figure, layers 1 and 2 are the hidden layers, and the weight matrices connect the input to the first hidden layer, the two hidden layers, and the last hidden layer to the output, respectively. A neuron is described by the following function [2.47]:

$S_j = F\left(\sum_i W_{i,j}\,X_i + B_j\right)$    (Eq2.64)

where F is the NN transfer function, $W_{i,j}$ the weight of the connection between neurons i and j, $B_j$ the bias, $X_i$ the input value to the neuron, and $S_j$ the neuron's output.

The input of the hidden layer is calculated in the same way:

$S_j^{(1)} = F\left(\sum_i W_{i,j}^{(1)}\,X_i + B_j^{(1)}\right)$    (Eq2.65)
Feed forward NN
In a feed-forward network, all signals flow in one direction from input to output. The most popular training algorithm for minimizing the error over the weights is back-propagation. A feed-forward NN is suitable for static mapping between input and output but improper for dynamic evaluation. To solve this problem, a recurrent NN can replace the feed-forward NN: in this structure, the outputs of some neurons are fed back either to the same neurons or to previous neurons, which means that signals can move in two directions (forward and backward). Therefore, the outputs respond more quickly to the impact of the inputs compared to a feed-forward network [2.48]. In this NN, the "tansig" function is used in the first layer and "purelin" in the second layer.
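A minimal sketch of the forward pass of such an MLP (Eqs. 2.64-2.65), with tanh ("tansig") hidden layers and a linear ("purelin") output; the layer sizes and random weights are illustrative assumptions:

```python
import numpy as np

# Minimal sketch: forward pass of a two-hidden-layer MLP.
# Layer sizes and random initial weights are illustrative assumptions.

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 1]   # 4 inputs (e.g. T, P, humidity, current), 1 output
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """S_j = F(sum_i W_ij X_i + B_j), applied layer by layer (Eq2.64)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)               # hidden layers: tansig
    return x @ weights[-1] + biases[-1]      # output layer: purelin

print(forward(np.array([0.5, 0.2, 0.8, 0.3])))
```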
Training NN
The choice of input data is important for the NN training results. The variables most affected by the faults must be selected for training the NN; otherwise, the number of inputs makes the model complex and slow to run. Note that there is no precise method to choose an optimal number of hidden neurons; this value depends on the output accuracy to be reached [2.43]. For training the NN, the weight matrices and the biases are adapted using the back-propagation method: in other words, at every iteration of the training, the matrices are updated by minimizing the error between the network output and the target. The training set is given by:

$\{(x_p,\,y_p),\ p = 1, \ldots, P\}$    (Eq2.66)

where P is the number of points used in the training, $x_p$ is the system input, and $y_p$ is the system output. This data set must contain information on all the different conditions, such as the healthy and degraded modes. For this reason, the four chosen inputs involve 4000 points. The weight coefficients of the matrices are fitted using a standard back-propagation algorithm.
Data collection in NN
The data are organized randomly into training, validation, and test sets. First, the NN is trained with the training data; the validation data then guard against over-learning; finally, the test set is built from data on which the NN has never been trained. To facilitate the training of the NN, the input data are normalized between 0 and 1, which homogenizes the ranges and permits comparing the weights related to different factors; the NN outputs are then decoded back to the right values [2.48]. The performance of the NN was assessed by executing a linear regression between the experimental and calculated values and evaluating the corresponding Pearson correlation coefficient R. This statistical analysis describes the agreement between the two data sets: if R = 0 the correlation is unpredictable, whereas an R closer to 1 indicates a better correlation between the NN output and the experimental data.
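A minimal sketch of this data preparation, assuming illustrative array shapes and split ratios (the 4000 x 4 input size mirrors the text; the 70/15/15 split is an assumption):

```python
import numpy as np

# Minimal sketch: min-max normalization to [0, 1] and a random split into
# training, validation and test sets.

rng = np.random.default_rng(42)
X = rng.random((4000, 4))            # 4000 points, 4 inputs (as in the text)
y = rng.random((4000, 1))

def minmax(a):
    """Normalize each column to the [0, 1] range."""
    return (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))

Xn, yn = minmax(X), minmax(y)
idx = rng.permutation(len(Xn))
n_tr, n_val = int(0.7 * len(idx)), int(0.15 * len(idx))
train, val, test = np.split(idx, [n_tr, n_tr + n_val])
print(len(train), len(val), len(test))   # 2800 600 600
```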
Conclusion
PEMFC modeling and diagnosis are among the most important issues treated in the literature. A good diagnosis strategy contributes to improving the lifetime of the FC and hence the availability of the system built around it, for example the drivetrain of an FCEV. It has been established that the FC is subject to many faults during its operation, due to the multi-physical phenomena, namely the temperature, the pressure and the humidity of the gases, involved within the FC stack and cells. Several models have been developed to understand these phenomena and to evaluate the FC performance under different conditions of use, but also to detect, isolate and classify the faults when they occur. On the basis of the literature, it has been noticed that ANNs are among the most interesting approaches for PEMFC fault diagnosis and modeling: ANNs have the capability to learn and build nonlinear mappings of complex systems such as the PEMFC.
In this research work, the adopted diagnosis approach consists in combining the model-based and non-model-based methods to train an ANN model. The next chapter focuses on the proposed equivalent circuit model taking into account the 3D geometry of the stack. Details on the modeling process, as well as simulation and experimental results, will be given to check the validity of the proposed model.
Chapter III 3D Fault Sensitive Modeling of PEMFC
Introduction
As highlighted above, modeling and simulation are very useful tools in the study of complex systems. They allow ascertaining the impact of a great variety of conditions and variables when studying global or local operating points of a system. To reach this goal, a fuel cell model has to take into account the space coordinates, time and the multi-physical phenomena.
According to the dimensions taken into account, the FC model can be 1D when only one dimension is involved in the model, 2D for two dimensions, or 3D when the three space dimensions are taken into account. Within any of these kinds of models, one has to include one or several domains of physics: electric, fluidic and thermal. The principal physical phenomena found in the PEMFC are listed by domain of physics in Table 3.1 [3.1]. Finally, the time parameter is introduced in the models to evaluate the dynamics of one or several of these phenomena.
A PEMFC model is always a combination of the elements above. For example, a model can be 1D, dynamic and analytical, involving all three domains of physics with the different phenomena modeled at the individual layer level.
Many mathematical models can locally describe these phenomena by means of partial differential equations involving space and time [3.2], [3.3], [3.4], [3.5]. Many researchers have realized the importance of adapting the models to system applications [3.6]. Mention may be made of the research works in [3.6] and [3.7], which established suitable mathematical models for control purposes, for instance for automotive applications. Nevertheless, no model has included the three dimensions of the fuel stack in its formulation. Only the axial dimension has generally been considered, by assuming that all the FC cells are invariant across the transverse dimensions.
The present research work proposes a 3D model of the PEMFC for diagnosis purposes. Knowing that in the PEMFC around 50% of the chemical energy available in the fuel is converted to electrical power while the rest is waste heat, particular attention is given to temperature in the developed 3D model. This chapter is dedicated to explaining the methodology used to build this model and to showing how it is used for characterizing the FC behavior under faulty conditions. A single FC cell is considered first to establish the modeling principle, before generalizing the 3D model to a complete FC stack.
The proposed 3D model for one FC cell
The chosen model is a semi-empirical one, using fundamental equations for the known phenomena. The unknown phenomena are modeled by performing experiments on the fuel cell. Indeed, all the equations belonging to the electrochemical model are used analytically, whereas experimental tests have been used to determine the thermal model and to find the impedances of all the branches.
This model is built up of multiple points (nodes) in different zones of the cell. It is preferable to choose nodes located at the critical zones of the fuel cell (e.g. center of the cell, inlet and outlet of the gases and boundary zones). The considered cell consists of 9 nodes. All the physical phenomena that take place at these nodes change depending on the position of the nodes and the variations of the corresponding operating conditions in terms of pressure, humidity and temperature. Moreover, several thermocouples and voltage measurements were used so as to know accurately the voltage and temperature at each node (more details on this issue are given in chapter IV).
Description of the modelled FC Cell
The studied fuel cell (a MES-DEA single cell) has different layers on both sides, the anode and the cathode (see Figure 3.1). These layers are named:
1) Bipolar plate (plastic material)
2) Connector (between bipolar plate and cooling plate)
3) Cooling plate
4) Gas diffusion layer
5) Catalyst layer
6) Membrane layer
Description of the 3D model applied to one cell
This 3D model combines the electric and thermal domains in the dynamic state. The geometric description of the modelled cell is illustrated in Figure 3.2. The idea is to divide the cell into 9 elementary cells (zones) so as to take into account the differences of temperature, humidity and gas pressure in each zone. The 9 nodes represent the centers of the 9 elementary cells, which are respectively modelled by 9 elementary circuits. To allow a difference between the potentials of the 9 nodes, 20 resistors are used. Thus, the total cell behavior in terms of current and voltage is obtained by the contributions of the 9 circuits. Because the local current density and temperature distributions are closely related to the various phenomena that occur in the cell, a sophisticated multidimensional model is capable of predicting many phenomena occurring inside an operational fuel cell, but only to a certain limit, due to the complexity and high computational cost. Therefore, the overall goal of the present 3D model is to support an experimental analysis with emphasis on the temperature and voltage distributions inside a single cell (MES) and a stack PEMFC (Nexa). The advantages of this model are:
1) All electric circuits are considered in all the available layers.
2) The simulation time, with the electrical model, is of a few seconds.
3) For each node, the current density is calculated.
Modeling hypotheses
These hypotheses are a compromise between the electrical model and a full mathematical model. They are summarized as follows:
1) The contributions of the anode, the cathode, and the membrane are not distinguished.
2) Pressure drop in the catalytic sites is negligible (both in the cathode and the anode sides).
3) The voltage drop associated with the activation loss is negligible at the anode when compared with that of the cathode.

In the literature, the voltage is generally considered by the authors to be the same at all the nodes. In reality, however, owing to the varying operating conditions (temperature, gas pressure, humidity) and material, the voltage at each node has a different value. In order to allow these differences, the 9 nodes (N1, N2, …, Nn) of the cell model are separated by resistors, as shown in red in Figure 3.5. The resistances between nodes are named according to their location on the anode side. Setting the transverse coordinates of each node according to the node's number, that is to say (X1, Y1) are the coordinates of node N1, (X2, Y2) the coordinates of node N2 and so on, the resistor R12 is set between nodes N1 and N2, the resistor R23 between N2 and N3, etc. This means that the index of each resistor contains the numbers of the departure node and the arrival node respectively: the index 12 means the departure node is N1 and the arrival node is N2.
These resistances have different values and can thus be the site of different current density distributions. The operating conditions and the material in use at each node of the FC cell manifest themselves through the differences in the current density distribution.
To account for the operating conditions and the ohmic resistance along the z axis, 9 other resistors have been added between two cells (see Figure 3.6). This means that 9 resistors are set between the 9 nodes of two neighboring cells in the Z direction. These resistances, denoted like R12 N1N1, carry two indexes: the top index indicates the numbers of the two neighboring cells (here cells 1 and 2), and the bottom index indicates the neighboring nodes connected through this resistor (here node N1 of cell 1 and node N1 of cell 2). Notice that all the nodes connected together have the same node numbers.
Electric Formulation
This section describes the electrochemical formulation used to compute the 3D steady-state distribution of temperature and potential inside a stack. This method of modeling allows the study of the electrical behavior of large stacks with an efficient computation time. The proposed 3D electric model (Figure 3.6) allows easily designing electric circuit connections to other electric components of the power train, such as the DC/DC power converter. In this model, the electrical phenomena at the stack level are highlighted, instead of the electrochemical and mass transport processes that take place at the microscopic scale, as is usually done. The knowledge of the latter allows correctly calibrating the physical parameters of the circuit. Thus, in this work, a dynamic model has been developed in MATLAB (see Appendix 3A). This model is based on the electrochemical and thermodynamic characteristics of the PEMFC. The inputs of this model include the influence of the temperature and the gas pressures (hydrogen and oxygen); the model combines the Nernst voltage and the other losses (activation, concentration and ohmic). Each cell is composed of the Nernst voltage and the activation, concentration and ohmic losses, which are computed as follows [3.1]:
E_Nernst = E_0 − 0.85×10^-3 (T − 298.15) + 4.3085×10^-5 T [ln(P_H2) + 0.5 ln(P_O2)]   (Eq.3.1)

V_act = −[ξ1 + ξ2 T + ξ3 T ln(C_O2) + ξ4 T ln(I_FC)]   (Eq.3.2)
The ohmic overvoltage due to the membrane resistance R_m in the PEMFC is given by the following expression [3.1]:
V_ohm = I_FC (R_m + R_C), with R_m = ρ_m l / A   (Eq.3.3)
A relationship for the voltage loss due to the concentration polarization is obtained as follows:
V_con = −B ln(1 − J/J_max)   (Eq.3.4)
By adding a capacitor, the dynamic behavior is included in this model. The double-layer voltage V_d obeys:

dV_d/dt = I_FC/C − V_d/[(R_act + R_con) C]   (Eq.3.5)

Therefore, the voltage of each cell is computed by the equation defined below:

V_cell = E_Nernst − V_d − V_ohm   (Eq.3.6)
Thermal domain
In the thermal domain, the stack temperature can be obtained by using an empirical method (Eq.3.7).
Dynamic effect of double layer
The dynamic phenomenon of the double-layer capacitor influences the transient values of the stack activation and concentration losses. This influence can be modeled by a first-order system (see Figure 3.8).
Computing the parameters of the 3D model
In the first stage of the modeling process, the parameters of the proposed 3D model are computed theoretically as follows:
The no-load voltage E is calculated with the Nernst equation, which represents the relationship between the ideal standard potential E_0 = 1.22 V for the fuel cell reaction and the ideal equilibrium potential at other temperatures and pressures of reactants and products (see Eq.3.1). The parameters (R_con, R_act, R_ohm) are obtained as follows:
o R_act: the first of these three major polarizations is the activation loss, which is pronounced in the low-current region. In this region, electronic barriers must be overcome before the onset of current and ionic flow (see equation 3.2). In this formulation, the current density is the variable parameter.
o R_ohm: the ohmic loss varies proportionally to the current and increases over the entire range of currents, due to the constant nature of the fuel cell resistance (see Eq.3.3). In this formulation as well, the current density is the variable parameter.
o R_con: the concentration losses occur over the entire range of current density, but they become prominent at high limiting currents, where it becomes difficult for the gas reactant flow to reach the fuel cell reaction sites (see Eq.3.4).
The double-layer charge at the anode and the cathode is modeled by equivalent capacitors equal to 1.8 F.
The temperature in all the formulations between Eq.3.1 and Eq.3.4 depends on the temperature measurements gained from the experimental tests.
Remark:
Notice that all the parameters above will change according to temperature, pressure and humidity except the double layer capacitor.
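A minimal MATLAB sketch of the computation of Eq.3.1-3.4 for one node is given below. The numerical coefficients (xi, Rm, B, Jmax) are standard Amphlett-type values taken as illustrative assumptions, to be recalibrated for the MES cell; T is in Kelvin and the pressures in atm:

T   = 298.15 + 40;  pH2 = 1.0;  pO2 = 0.21;  i = 10;  A = 61;
E   = 1.22 - 0.85e-3*(T - 298.15) ...
      + 4.3085e-5*T*(log(pH2) + 0.5*log(pO2));   % Nernst voltage (Eq.3.1)
cO2 = pO2/(5.08e6*exp(-498/T));                  % dissolved O2 concentration
xi  = [-0.948 2.86e-3 7.6e-5 -1.93e-4];          % assumed empirical coefficients
Vact = -(xi(1) + xi(2)*T + xi(3)*T*log(cO2) + xi(4)*T*log(i));  % Eq.3.2
Rm   = 5e-3;                  % membrane + contact resistance (ohm), assumed
Vohm = i*Rm;                                     % Eq.3.3
B = 0.016;  Jmax = 1.5;  J = i/A;                % assumed constants (V, A/cm^2)
Vcon = -B*log(1 - J/Jmax);                       % Eq.3.4
Vcell = E - Vact - Vohm - Vcon                   % static cell voltage (cf. Eq.3.6)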
Newton-Raphson method
The Newton-Raphson method, or Newton method, is a powerful technique for solving equations numerically. It is used here to calculate the cell voltage and the current of each element. It can also solve square non-linear systems of equations using matrices.
The Newton-Raphson method solves equations of the form f(x) = 0 for the solution nearest a starting point x = x_0. It creates a list of values x_n, where each x_n (the nth element of this list) is the x-intercept of the tangent line to y = f(x) at the previous list value x = x_{n-1}.
The solution vector is obtained from a Taylor series expansion, written for each component i as:

f_i(x + Δx) ≈ f_i(x) + Σ_j (∂f_i/∂x_j) Δx_j   (Eq.3.8-Eq.3.9)

This can be written more compactly in matrix form:

F(x + Δx) ≈ F(x) + J(x) Δx   (Eq.3.10)

where J is the Jacobian matrix of F. The derivation of the N-R method is similar to the scalar case:

J(x_k) Δx_k = −F(x_k)   (Eq.3.11)

To find the solution to

F(x) = 0   (Eq.3.12)

the correction and update steps are:

Δx_k = −J(x_k)^-1 F(x_k)   (Eq.3.13)

x_{k+1} = x_k + Δx_k   (Eq.3.14)

Iterate until ‖F(x_k)‖ < ε.
The Newton-Raphson algorithm consists in linearizing an equation around some point until convergence is reached. A Newton-Raphson algorithm was used to couple the analytical equations 3.1-3.7 with the experimental measurements of voltage and temperature.
The measurements of temperature and voltage have to be done within the PEMFC stack, so as to collect data as close as possible to the nodes of the 3D model presented above. The Newton-Raphson method is used to match the temperatures, voltages and current densities at those nodes. The Newton-Raphson problem is then set as the two operations below:
Assume the function f = E − V, where E is the voltage of the cell calculated at each node, starting from the physical parameters of each elementary circuit evaluated analytically using the measured temperature distribution (see the beginning of section §2.7), and V is the voltage measured at each node. Find the current density distribution that reaches the equality f(x) = 0, where the vector x is the searched current density distribution.
Given a function ƒ defined over the reals x, and its derivative ƒ', we begin with a first guess x 0 for a root of the function f. Provided the function satisfies all the assumptions made in the derivation of the formula, a better approximation x 1 is
x_1 = x_0 − f(x_0)/f′(x_0)   (Eq.3.15)

The process is repeated as:

x_{n+1} = x_n − f(x_n)/f′(x_n)   (Eq.3.16)
The algorithm of the Newton-Raphson method then contains the seven main steps given below; a minimal sketch of the resulting loop follows the steps:
Step 1) Measurement of temperature and voltage in each node (more details will be given in chapter IV).
Step 2) Calculation of the voltage of each node based on formulation Eq.3.6.
Step 3) Evaluate the derivative f′ numerically.
Step 4) Use an initial guess of the current density to estimate its new value, as in equation 3.16.
Step 5) Find the absolute relative approximate error |ε_a| as:

|ε_a| = |(x_{n+1} − x_n)/x_{n+1}| × 100   (Eq.3.17)
Step 6) Compare the absolute relative approximate error with the pre-specified relative error tolerance.
If |ε_a| > the relative error tolerance, update the guess of the current density and go to "Step 4". If |ε_a| < the relative error tolerance, go to "Step 7" and stop the algorithm.
Step 7) Stop the Algorithm.
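A minimal MATLAB sketch of this loop for one node, where Vmeas and Tmeas stand for the measured voltage and temperature of the node, and cellvolt is a hypothetical function implementing Eq.3.6 for a given current density and temperature (cf. the sketch of Eq.3.1-3.4 above):

f  = @(x) cellvolt(x, Tmeas) - Vmeas;        % f = E - V (Steps 1-2)
df = @(x) (f(x + 1e-6) - f(x - 1e-6))/2e-6;  % numerical derivative (Step 3)
x  = 0.2;                                    % initial guess (A/cm^2)
tol = 1e-6;  err = Inf;
while err > tol                              % Steps 4-6
    xnew = x - f(x)/df(x);                   % Eq.3.16
    err  = abs((xnew - x)/xnew)*100;         % relative error (Eq.3.17)
    x    = xnew;
end                                          % Step 7: x is the node current density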
Calibration of 3D model in healthy mode
An important aspect of fuel cell modelling is that, using this model, the fuel cell can be simulated in both modes: faulty and healthy. A large number of parameters need to be implemented to create a complete fuel cell model under the Matlab/Simulink software, in order to model and simulate the fuel cell in the healthy and faulty modes.
The temperature and voltage distributions that are used to calculate the current distributions with the Newton-Raphson method are obtained experimentally. An air stoichiometry of 3, a hydrogen stoichiometry of 2, a current load of 10 A, a humidified cathode and dry hydrogen gas are used. The proposed fault-sensitive 3D model of the PEMFC, developed under the Simulink software, is shown in Figure 3.9. The hydrogen and oxygen gas pressures, the current density and the temperature are the inputs of the model, while the voltages and currents are the outputs. Each subsystem circled in red contains the open-circuit, activation, concentration and ohmic voltages (see more details in Appendix IIIA). The cell potential at each point of the cell is calculated separately, based on the different current densities obtained by the Newton-Raphson method. A significant point in this model is the computation of the connection resistors at each point. These resistors can be used to simulate the faults. Furthermore, the calibration of these resistances is one of the most important points of this model, since these parameters change with the variations of temperature, humidity, pressure and the aging effects. The simulated current densities and the experimental measurements of the temperatures and voltages are shown in Table 3.3. The latter indicates that, as the local current density increases, the temperature increases too. According to the data sheet of the MES company (as shown in Figure 3.10), the voltage measured with a current load of 10 A should be equal to 0.8 V. However, according to Table 3.2, the voltage measurements are recorded between 0.65 and 0.69 V; in other words, the voltage drops by 110-150 mV at each node. This can be attributed to the changes in the operating conditions such as pressure, humidity and temperature. The resistances are also increased because nuts and screws are used to fix the cell, and the presence of multiple voltage sensors and thermocouples increases these resistances further. These losses are caused by irregular pressures on the bipolar plate and the connecting points. If the pressure at some points is above the normal average, it can block the hydrogen or oxygen gas channels; consequently, the cell voltage decreases with the pressure drops. Owing to the added thermocouples and voltage sensors, physical losses have occurred, giving rise to a voltage drop in the cell. This phenomenon can be represented by adding an impedance in series at each node of the 3D model: the impedances (R1-R9) are added along the x axis in order to represent the internal voltage losses in the fuel cell model. The values of these resistances can easily be obtained by knowing the current density and the voltage at each node. For example, for the first node, the current density is calculated by the Newton-Raphson method; the voltage measured in the experimental test is 0.687 V, so the obtained resistance is around 0.1324 ohm. In this way, all the impedances are calculated in Table 3.4, which shows the internal impedances of each node. Based on the current distributions obtained by the Newton-Raphson method, Table 3.5 illustrates the percentage of current density of the nine nodes compared to 10 A. In each part, the activation area is 6.7 cm² (61/9), and the mean values of the current density calculated from the simulation results are shown in Table 3.6.
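A minimal sketch of this identification, under the assumed relation R_k = (V_datasheet − V_k)/I_k (an assumption that reproduces the 0.1324 ohm quoted for node 1 when I_1 ≈ 0.85 A); Vnode and Inode are hypothetical 1×9 vectors of the measured node voltages and the Newton-Raphson node currents at the 10 A load:

Vds   = 0.8;                    % expected cell voltage at 10 A (MES data sheet)
Rnode = (Vds - Vnode) ./ Inode; % internal series resistance per node (R1..R9)
% e.g. node 1: (0.8 - 0.687)/0.853 ~ 0.1324 ohm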
Network circuit analysis
To construct an equivalent circuit of a complicated process (e.g., electrochemical parameters such as the voltage losses and the reversible voltage, with impedances in series and parallel) and calculate its impedance, knowledge of network circuit analysis is indispensable. The major factors are the parameters that change according to the operating conditions. A commonly used network analysis method is the loop or mesh analysis, which is generally based on KVL. A series of equations of the form [Z].[I] = [V] can be established by equating the sum of the externally applied voltage sources acting in each loop to the sum of the voltage drops across the branches forming the loop.
The number of equations is equal to the number of independent loops in the network. The general equation in the loop or mesh analysis is given by
[Z] [I] = [V]   (Eq.3.22)
where the impedance matrix [Z] is an N × N matrix, as described in Eq.3.26. The following rules describe how to determine the values of the voltages, currents and impedances in Eq.3.22 [3.9].
1. The voltages in Eq.3.22 are equal to the voltage sources in each branch. If the direction of the current caused by the voltage is the same as that of the assigned current, the voltage is positive; otherwise, the voltage is negative.
2. The series of mesh impedances, known as the self-mesh impedances Z11, Z22, Z33, …, ZNN, are given by the sum of all the impedances through the loop in which the circulating current flows.
3. Each mesh mutual impedance, denoted by Zik (i ≠ k), is given by the sum of the impedances through which both mesh currents Ii and Ik flow. In other words, the mesh mutual impedances are equal to the sum of the impedances shared by meshes i and k. If the direction of the current Ii in loop i is opposite to that of the current Ik in the adjacent loop k, the mutual impedance equals the negative sum of the impedances, whereas if the direction of Ii is the same as that of Ik, the mutual impedance equals the positive sum. In a linear network, Zik = Zki.
A linear matrix equation can be solved by applying Cramer's rule. Assuming the determinant Δ of the matrix Z is non-zero, the solution for the current can be expressed as:

[I] = [Z]^-1 [V]   (Eq.3.23)
where [Z]^-1 is the inverse of [Z], which can be expressed as:

[Z]^-1 = (Δik)^T / Δ   (Eq.3.24)

where Δik is the matrix cofactor and (Δik)^T = Δki represents the matrix transpose. Δ and [Z] can be expressed as follows:

Δ = |[Z]|, the determinant of [Z]   (Eq.3.25)

[Z] =
| Z11  Z12  …  Z1N |
| Z21  Z22  …  Z2N |
|  ⋮    ⋮        ⋮  |
| ZN1  ZN2  …  ZNN |   (Eq.3.26)
For easier calculation, and generally less memory space, the admittance rather than the impedance is used in the mesh grid (the bus impedance being used for short-circuit studies). A set of equations can be established in the form [Y].[V] = [I]:

[Y] [V] = [I]   (Eq.3.27)

Generally, the admittance matrix Y is calculated as:

Y_ii = Σ (admittances of the branches connected to node i),   Y_ik = −Σ (admittances of the branches between nodes i and k)   (Eq.3.28)
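A minimal MATLAB sketch of this mesh analysis: building [Z] with the three rules above and solving Eq.3.22 for the mesh currents. The 3×3 values are illustrative assumptions, not the calibrated impedances of the model:

Z = [ 0.30 -0.10  0.00 ;        % self impedances on the diagonal,
     -0.10  0.35 -0.12 ;        % mutual impedances (shared branches) off
      0.00 -0.12  0.28 ];       % the diagonal, with Zik = Zki
V = [1.18; 0; 0];               % voltage sources acting in each loop
I = Z \ V;                      % mesh currents; numerically preferable to
                                % inv(Z)*V or an explicit Cramer's rule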
An equivalent circuit of this model includes parallel and series impedances. These impedances account for the voltage and temperature distributions. Without considering the temperature effect, the fuel cell electrical model consists of the activation loss, the concentration loss, the ohmic loss, the double-layer capacitor and the Nernst voltage. By adding impedances in parallel and in series (Z1, Z2, …), we can highlight the effect of temperature in the electric model of the PEMFC in the different space directions (X, Y and Z). This is because there are physical relationships between the temperatures and these impedances.
Furthermore, considering these impedances when simulating the PEMFC improves the accuracy of the PEMFC model. Thus, for any reason, a temperature variation in the different directions of the fuel cell stack changes the values of these impedances. Moreover, any fault related to a temperature variation, such as flooding, drying and degradations, can be considered in this model.
First, a model for a stack including two cells was developed, and the simulation results were compared with the experimental results.
The 3D model applied to one stack
Considerations on the 3D model calibration
The concept of calibration is an important step in model validation. The calibration task involves the systematic adjustment of the model parameters and allows an estimation of the model outputs. The calibration of the fuel cell stack can be summarized as follows:
One of the simple ways to calculate the impedances of the 3D model, and to reduce and downsize the computation, is to suppose that the non-diagonal impedances of the matrix (the interaction impedances between the cells) are equal to zero. This model is a semi-empirical electric model: a dynamic electric equation is used, taking into account the temperature influence on the voltage losses and the reversible voltage of the fuel cell. In this model, an empirical thermal equation of the MES fuel cell is considered, obtained by using the experimental tests.
To build a 3D model, one should first study the influence of the operating conditions on the fuel cell performances, then simulate all the effects on the polarization curve and the voltage losses. The electric model was chosen to take advantage of its ability to save computation time. However, in reality, a single electric model is not sufficient to represent all the variables in the different directions. As illustrated in Figure 3.14, the fuel cell is divided into different branches, following the x, y and z axes. At each branch, the impedance and the electrical model are connected together.
In other words, the single cell is divided into several electric models connected to each other through several impedances. In the literature, all the studied models are based on the irregular distribution of the current density.
Specifying the nature of the impedances is not an easy matter. So, to reduce the calculations on the impedance matrix, we consider only their magnitudes (i.e. we assume they have a resistive behavior).
Taking the impedance phases into account is too laborious, especially regarding the necessary experimental measurements to perform (a spectroscopy measurement would be necessary at each node). However, as a future work, including the phase aspect in the computation may improve the model accuracy in dynamic states.
To find the complete resistance of each branch, several special tests at different currents should be carried out (the design and details of the test bench are discussed in the next chapter). In addition, all the temperature and voltage measurements at each node (using the voltage sensors and thermocouples) should be recorded during these tests. Then, to obtain the current at each node, first, the simulated voltage should be compared to the real one; second, the temperature measurements should be applied in the theoretical equations; third, the NR method for nonlinear equations should be used.
Calibration of the 3D model of FC Stack (Two cells)
The concept of calibration is an important step in model validation. The calibration task involves the systematic adjustment of the model parameters. The calibration of the 3D model of the PEMFC stack can be summarized as follows:
1) The voltages and temperatures are recorded as shown in Table 3.6 and Table 3.7 (12 sensors for each cell).
2) The Newton-Raphson method is applied to calculate the current density, as illustrated on the right side of Tables 3.6 and 3.7.
3) The network impedances that match the model are calculated.
In Tables 3.7 and 3.8, the voltage and temperature are measured in an experimental test performed on two cells (Figure 3.15) with three different values of the current load: 5 A, 10 A and 15 A. For all these measurements, the H2 stoichiometry is set to 2 while the O2 one is set to 3. In the X direction, the voltage and temperature variations within a MES cell can be neglected. Moreover, in order to simplify the calculations, the 3D model places sensors at the 9 nodes of each cell; however, the voltage and temperature sensors are limited to only three nodes (see Figure 3.15), and the mean values of 4 proper sensors have been used to execute the calculations in each zone. As depicted in Figure 3.15, each cell is divided into three zones: the inlet, the middle and the outlet. According to the results for cell one, the voltage and temperature change with the different operating conditions from the inlet to the outlet. As shown in Figure 3.15, nodes 1, 2, 3 in cell one and nodes 4, 5, 6 in cell two are connected via impedances along the y axis, and every two nodes facing each other in the two cells are joined by an impedance along the z axis. The variations of the operating conditions from cell one to cell two are also represented. In addition, mechanical parts such as the connectors, and the several sensors installed to measure the voltages and temperatures, can increase the voltage losses in each cell. Tables 3.7 and 3.8 (middle) present the voltage measurements. It can be noticed that the values are situated between 0.577 V at the outlet and 0.732 V at the inlet for cell one, and between 0.588 V at the outlet and 0.709 V at the inlet for cell two. This means there are voltage drops of about 0.155 V and 0.121 V between the inlets and outlets of cells one and two respectively. This can be attributed to the changes in the operating conditions such as pressure, humidity, temperature, etc. It may also be related to the physical components of the 3D model circuit, such as the ohmic resistance increased by adding many voltage sensors and thermocouples. It can also be noticed that the current densities are not homogeneous. This may be related to the variations in the operating conditions, as the temperature distributions are higher in cell two than in cell one. It should be seen that the current density distributions of the cell along the x, y and z axes have different values. This phenomenon can create a different voltage at each point of the cell.
The effects of the impedances in each cell can change the current density distributions in the fuel cell stack. These effects can be used for fault isolation in the PEMFC stack. Hence, a change in the impedance in a given direction represents a change in the related current density. Moreover, the deviation of the current density is related to the operating conditions, such as the temperature. Thus, the fuel cell changes from the normal to a faulty mode based on the variation of the operating conditions. This is the aim of the next section §4.
Simulation of PEMFC in faulty operating modes
Generally, fuel cells operate in two modes: the healthy mode, which means that the fuel cell operates under normal conditions, and the degraded mode, where the FC operates under faulty conditions. These faulty conditions can be caused by: 1) Long-time operation (natural ageing).
2) Operational incidents, such as Membrane Electrode Assembly (MEA) Contamination or reactant starvation.
The degraded mode indicates that there is an abnormality in the FC operating conditions, such as temperature variations, causing a fault and/or a performance loss in the fuel cell. The common faults that happen in the fuel cell can be divided into two categories: flooding (cathode side and anode side) and drying faults [3.9].
Flooding at cathode side
Flooding at the cathode side is a common problem for the cells. It is caused by an excess of water sometimes produced on the cathode side when the stack is operating. In case of cell flooding, the water film formed on the cathode side of the cell blocks the oxygen diffusion into the positive electrode (the oxygen reduction reaction site), thus decreasing the cell voltage. The magnitude of this phenomenon strongly depends on the stack current, the stack temperature and the reaction airflow rate.
Flooding at anode side
This phenomenon is as common as the previous one. It generally occurs during the "reconditioning" procedure, because of the complete filling of the anode compartment with deionized water. In this case, the water film blocks the hydrogen diffusion to the negative electrode (the hydrogen oxidization reaction site), thus decreasing the cell voltage. In practice, the voltage increases after every purge event; because of the flooding at the anode side, the single cell voltage (nominally about 600 mV) decreases fast again immediately after the purge event. In the worst cases, the single cell voltage stays constant around zero volts (in the range +/-50 mV).
Drying in membrane
Drying of the membrane is another common fault that occurs in fuel cells. It can cause damage at the membrane level by creating holes in the structure of the polymer. This phenomenon occurs accidentally when the temperature is near 70 °C. The direct consequence of such an event is again a very low voltage (near zero volts or, in the worst case, as low as -1.4 V). If a cell has some holes in its membrane, its voltage decreases very fast with respect to the normal single-cell behaviour.
For example, Figure 3.16 emphasizes the link between the relative humidity and the state of the membrane, which can be either wet or dry. It can be readily seen that for most operating conditions, the membrane of the FC is either too wet or too dry. The humidity should be above 60% to prevent excessive drying, but must be below 100% to prevent flooding.
Higher temperatures give better performance, mainly because the cathode overvoltage reduces. However, above 60 °C the humidification problems increase [3.10]. For instance, if the humidity of the inlet gases increases, more water accumulates in the cell; hence, flooding can occur and block the gas inlet. By comparison, if the humidity is too low, less water accumulates in the cell, leading to drying [3.11]. For this reason, the humidity range has been selected between 50% and 120%, in order to take into account the flooding, drying and normal modes.
Besides, if the humidity is included between 80% and 100%, the FC works in healthy mode, whereas more than 100% produces a flooding case and less than 80% a drying case. In the same way, an increased inlet gas pressure produces water flooding, while a low pressure leads to drying. According to the characteristics of the FC, the pressure has been chosen with a different range.
Hence, in this work the pressure range has been selected between 0 and 2.2 bars. In other words, the FC operates in healthy mode in a specific range, namely between 0.7 and 1 bar. On the other hand, a pressure between 1 and 2.2 bars indicates the presence of flooding in the FC, whereas a pressure lower than 0.7 bar leads to drying. Finally, the temperature range is between 0 and 70 °C (this range depends strongly on the technical characteristics of the FC). Starting from these different operating conditions, a 3D fault diagram has been sketched in Figure 3.18. The latter summarizes the studied faults in the FC, namely drying and flooding, in terms of temperature, pressure and humidity. A classification sketch based on these ranges is given below.
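A minimal MATLAB sketch of this classification, using the humidity (%) and pressure (bar) ranges above; when the two indicators disagree, the flooding indication is taken first, which is an assumption not stated in the text:

RH = 105;  P = 1.3;                   % example operating point
if RH > 100 || (P > 1.0 && P <= 2.2)
    mode = 'flooding';                % RH above 100 % or P in 1-2.2 bar
elseif RH < 80 || P < 0.7
    mode = 'drying';                  % RH below 80 % or P below 0.7 bar
else
    mode = 'healthy';                 % RH in 80-100 % and P in 0.7-1 bar
end
disp(mode)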
Simulation of faulty modes examples
To simulate the effect of faults introduced in different zones of the circuit model, a given DC load current, containing some typical harmonics identical to those found in the DC/DC boost converter generally associated with the PEMFC, has been supplied by the FC. To simulate the faults, the 3D model calibrated in healthy mode is first used. As mentioned before, the impedances in the different zones depend on the temperature and the other operating conditions of the fuel cell. In addition, impedances are attached to the z axis, i.e. they represent the connection losses between the different FC cells in the Z direction. Indeed, changing one of these impedances changes the current distribution in the cell. This behavior can be used to simulate faults at any point of a cell.
The measurements of the mean value and of the first seven harmonics of the output voltage, in steady-state operation, allow computing the corresponding Harmonic Distortion Rate (HDR) and the Mean Value Variation with respect to the healthy value (MVV). These two parameters are used to characterize the different faults, taking into account the 2D space coordinates of the fuel cell.
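A minimal MATLAB sketch of these two signatures, using one common definition of the distortion rate (harmonics 2-7 relative to the fundamental); v is a hypothetical steady-state voltage record sampled at fs with fundamental f0, and Vmean_healthy is the calibrated healthy mean value:

N   = length(v);
X   = fft(v)/N;
V0  = abs(X(1));                           % mean (DC) value of the voltage
k   = round((1:7)*f0*N/fs) + 1;            % FFT bins of harmonics 1..7
Vh  = 2*abs(X(k));                         % harmonic amplitudes
HDR = sqrt(sum(Vh(2:end).^2))/Vh(1)*100;   % harmonic distortion rate (%)
MVV = (V0 - Vmean_healthy)/Vmean_healthy*100;  % mean value variation (%)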
Figure 3.18 gives some examples of this characterization process. The variations of the impedances at the branches in the middle of the two cells are represented; the z axis has been considered to realize these simulations. The significant point in these figures is that changing an impedance value affects the output voltage of the cell. By increasing or decreasing the impedances of this model, the current density at each point changes, and the drying and flooding faults can thus be assumed to have happened in the FC; more details are explained in chapter V.
Conclusion
The proposed 3D model is implemented under the Matlab/Simulink software and has been validated experimentally in healthy mode on one air-cooled PEMFC. The circuit approach has been used to divide each cell of the modelled FC into several elementary cells. The case of a circuit of 9 nodes has been studied and explained. This allows creating the most common faults anywhere within the 3 space directions of the FC stack. The idea is to use this model for training an ANN model that will be used for the on-line diagnosis of the PEMFC, but also in the management of its degraded modes.
However, to achieve this goal, one has to calibrate the 3D model in healthy mode. Such an operation requires a lot of experimental data and a huge amount of work to build the adequate test benches. In the next chapter, a special focus is placed on the experimental work developed for calibrating and validating the proposed model.
Introduction
Two set-ups have been developed to validate the proposed 3D model. Because of the difficulty of introducing faults into the FCs without destroying them, only the healthy mode has been considered in this study. The first set-up concerns one FC cell of MES-DEA technology. The second is a FC system of Ballard technology (called the Nexa FC stack). Both technologies use air cooling for the FCs. In this chapter, the two set-ups are presented with the corresponding environmental hardware and software materials. The obtained results are presented and commented with regard to the validity of the models.
Single cells set-up
In order to validate the 3D fault-sensitive model of the PEMFC cell, a test bench for a single FC cell has been carried out. Different parameters can be controlled to test various operating conditions, such as the gas flow or pressure, the temperature, the air humidity rate, and the air and hydrogen stoichiometries. In addition, an electronic load is used to simulate the load dynamic variations and to take into account the constraints related to transportation applications.
Gas supply description
A suitable pressure in the range 1-3 bars, according to the manual indicator, feeds the oxygen for the stack operation. For this reason, special devices have been designed to connect the air supply to the bipolar plate on the cathode side (see Figure 4.1). As shown in this figure, the two parallel inlet channels of the stack are embedded on the top of the cell. Thus, the air is able to overcome the pressure drops of the cathode side and feed each compartment of the entire cell. Then, the exhausted air enters the parallel outlet reactant air channels that finally drive it outside the stack. In order to supply the stack with hydrogen, the H2 inlet tube connector has to be connected to the hydrogen source via a proper tube. A flexible or rigid tube (e.g. silicone or Teflon respectively) should make up the exhausted hydrogen circuit. The nominal flow of this exhausted hydrogen is 0.28 Nlt/s. The supply pressure of the hydrogen should be adjusted to a set value of 0.5 bar overpressure.
The physical references of the MEAs
The complete equipment consists of one cell with a connector and an isolator. The connectors realize the air cooling and the electric connection (current collector) between the bipolar plates, on the anode and cathode sides. The isolators are used to prevent hydrogen and oxygen leakage from the inlet and the outlet. As illustrated in Figure 4.3, each cell includes: a membrane with an active area of 62 cm²; a gas diffusion layer with a thickness roughly equal to 0.42 mm; and a graphite block on the anode and cathode sides. More details about the component characteristics can be found in Table 4.4 [4.1].
(Figure: cathode side of a single cell with the inlet and outlet of hydrogen and oxygen; the single cell and the isolator are indicated.)
Description of the test bench
The structure of the test bench can be divided into four parts, according to Figure 4.4 [4.2]. The supervisor block includes the user interface; it collects the measured data, transmits the operational orders and manages the safety processes. The ancillaries, which consist of different actuators and sensors, apply the control and transmit back the measurements. The tested stack is equipped with specific sensors, such as the measurements of the voltage across each elementary cell, thermocouples, current, etc. The electronic load can be programmed to impose a given time evolution of the stack current [4.2].
Supervisor and control
The control has been implemented on a National Instruments PXI platform. The software interface allows the user to choose the fuel cell running mode and the parameters to be controlled. The system can run automatically, following either a computed load cycle or a manual operation, according to the user's needs. An interface panel displays the measurements and, in the "Settings" part of the supervisor, the operating parameters can be set.
Ancillaries
The structure of the test bench is illustrated in Figure 4.8. The actuators and sensors implemented in the fluid circuits, such as the hydrogen and air distribution, the humidification rate control, the cooling loop and the regulation of the water temperature, are presented [4.2].
Electronic load
The electronic load allows performing tests to characterize the static and dynamic behaviors of the fuel cell and to simulate high-frequency disturbances (chopping frequency). It can be directly controlled by the supervisor. Its nominal values are 800 W, 120 A and a 20 kHz bandwidth. Figure 4.9 shows the electronic load of the test bench [4.2].
Thermocouple
The most common, accurate and practical method to measure the temperature distribution within a PEMFC cell is the thermocouple. Thermocouples have nice features, such as their simple configuration, high accuracy (0.1 °C), fast response and large measurement range. Thermocouples are widely used as point temperature measurement devices; they consist of two wires of different materials joined at the end. When the two junctions are subjected to different temperatures, a small electrical current is generated, which leads to a small voltage drop.
The available types of thermocouples can be classified according to the American National Standards Institute (ANSI) standard as K, J, N, R, S, B, T, or E. Here, 12 thermocouples of type K with isolated parts have been selected to monitor the temperature distribution of the cell during its operation.
Calibration of the Thermocouples
The chosen thermocouples should be calibrated before starting to use them. The type K thermocouple can be configured through the DPI 620 series (see Figure 4.12). The GE Druck DPI 620 series advanced modular calibration and HART communication system can measure and provide mA, mV, V, ohms, frequency and a variety of RTDs and T/Cs.
For the calibration, all the thermocouples and the reference thermocouple (the Canne Pyrometrique of type 14, see Figure 4.14) are placed in a BINDER incubator of the BD series (see Figure 4.13). A temperature correlation between the thermocouples and the reference temperature is then developed. The temperature inside the Binder becomes homogeneous when the reference thermocouple shows the same temperature as the Binder set point. It is necessary that the reference thermocouple and the other thermocouples be placed in the Binder at the specific temperature during 1 hour.
In the next step, the advanced measurement device DPI 620 measures all the thermocouples. The results are shown in Table 4.6, in which the reference temperatures are compared with all 12 thermocouples. In this table, the maximum errors are recorded for thermocouples 3 and 4, at around 2.6%. This error may be due to the use of long wire connectors, or could be related to the thermocouple junctions. The mean values of the errors (see Table 4.7) for each thermocouple are calculated at the different temperatures, and all the thermocouples are compared with the others. It can be remarked that the maximum error (~2.6%) is localized between the third and fourth thermocouples. Therefore, as a preliminary result, the differences of temperature between these thermocouples can be neglected in the temperature distribution measurement.
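A minimal sketch of this calibration check, where Ttc is a hypothetical 12×1 vector of thermocouple readings and Tref the reference (Canne Pyrometrique) reading, both in °C after 1 h of stabilization:

err = abs(Ttc - Tref) ./ Tref * 100;   % relative error (%) per thermocouple
[emax, kk] = max(err);                 % worst channel (~2.6 % on TC 3-4 here)
fprintf('max error %.1f %% on thermocouple %d\n', emax, kk);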
Voltage
In order to investigate the voltage distributions in the different parts of a single cell, 12 voltage sensors were installed at the same places as the thermocouples. They were stuck directly, with a temperature-tolerant adhesive, on the bipolar plate of the single cell, and isolated from the connector. Hence, they can reveal the existing relationship between temperature and voltage.
Choice of voltage sensors for the measurements
Generally, in this type of fuel cell (MES), an air fan is used as the cooling system. In particular, to test the influence of temperature (i.e. the temperature distribution) on the performance of the fuel cell, the air cooling was omitted and 12 thermocouples were placed along the cell to monitor the temperature of the single cell (as shown in Figure 4.19). These thermocouples are isolated from the electrical parts, to prevent a short circuit between the thermocouple bodies and the bipolar plate.
The voltage sensors are directly connected to the graphite parts on the anode and cathode sides. The sampling frequency was set to 3 Hz in the test control panel. The current load profiles are set constant between 5 and 15 A until the temperatures at the inlet and outlet thermocouples stabilize. To select acceptable data from the voltage measurements, the fuel cell was first run without load, only with the input gases (hydrogen and oxygen), and the measured data were noted (see Figure 4.20). In other words, only the Nernst voltage affected these measurements. The acceptable voltage measurements can be selected in the range 0.8-0.95 V. This range is confirmed by the data extracted from the MES fuel cell datasheet, based on the polarization curve. This process is repeated in each cycle.
Validation tests of one single cell
The dynamic characteristics of the voltage and temperatures along the cell, in different zones, are investigated for various operating conditions: different load currents, air and hydrogen stoichiometry ratios and boiler temperatures have been used. The sophisticated test bench is applied to study the dynamic characteristics of the voltage and temperature distributions in single and stack PEMFCs. Each individual operating condition is applied in a complete cycle until the local temperatures at each point of the cell become stabilized. For example, the air stoichiometry ratio is increased from 3 to 5 and 7 and then decreased, and the temperature measurements are recorded until the temperature values become constant. Each local temperature and voltage is recorded by the data acquisition system at 3 Hz over a period of time from the change of the experimental condition, as illustrated in Figure 4.22. This procedure is repeated for the other operating conditions to study their impact on the dynamics of the voltage and temperature distributions. All the experimental investigations related to the temperature distribution measurements are carried out using the single-cell PEM fuel cell (MES), as shown in Figure 4.19. 12 micro-thermocouples of type K are used to measure the temperatures with a PC-based data acquisition system. It must be noted that a voltage sensor was placed at the same place as each thermocouple, in order to explore the relation between these parameters (temperature and voltage). The thermocouples and voltage sensors are placed at different locations along the fuel cell (x, y and z axes). For the temperature measurements in the FC to stabilize, a minimum of 360 seconds has to pass.
Attention must be paid to many parameters for the temperature measurement. For instance, the different heat transfer processes, such as conduction, convection and radiation, can have a considerable influence on the temperature measurement. However, as pointed out in reference [4.1], the effect of conductive heat can be minimized by using long thermocouple wires. The present experiment uses connectors of more than 2 m length between the acquisition system and the junctions.
The heat radiation is not expected to have an effect at low temperature (maximum 60 °C). The thermocouple connections are insulated with a rating between -10 °C and 105 °C. Testing is performed prior to the actual tests. Table 4.8 summarizes the main operating conditions adopted in the present study. Each temperature measurement is collected by the data acquisition with a sampling rate of one reading per second. These measurements are investigated over the intervals where the temperature is constant after changing the experimental conditions. This procedure is repeated for different current loads and different air and hydrogen stoichiometry ratios.
Voltage sensors
It is noted that the advantage of the present measurement is that the thermocouple probes are not in direct contact with the reaction sites. Furthermore, the voltage sensors are located before the current collector. Otherwise, chemical effects produced by the reaction, such as combustion processes or catalytic reactions, may lead to unexpected significant errors in the temperature measurements, and may cause the same voltage in the current collector plate.
The local temperature distributions are measured on the anode and cathode sides by inserting 12 thermocouples (type K) in the GDL. The thermocouples have a diameter of 1 mm and the following specifications:
Mineral-insulated type 'K' thermocouple; 310 stainless steel sheath; highly flexible sheath that can be bent/formed to suit many applications and processes; insulated hot junction; probe temperature range -40 °C up to +1100 °C; miniature plug termination (200 °C); conforms to the IEC 584 specification.
Temperature measurements across PEMFC
It is desirable that the PEMFC operates with uniform temperature distributions. Non-uniform temperature distributions could result in poor reactant and catalyst utilization as well as degraded overall cell performance, and could also cause faults to occur in the fuel cell. In addition, the polymer membrane is very sensitive to temperature variations, and the hydration of the membrane depends strongly on the temperature of the cell, because the water vapor saturation pressure is an exponential function of temperature. In order to obtain the temperature profiles across the PEMFC, 12 thermocouples are placed at different locations within the experimental set-up.
Effect of Air stoichiometry on temperature distribution
The effect of the air stoichiometry ratio on the temperature distribution along the channel, for three different current values (5 A, 10 A and 15 A), is shown in Figure 4.24 (more results are given in Appendix VIA). The air stoichiometry ratio effect on the temperature is highlighted in these figures for the following operating conditions: the reactants are humidified at the cathode and dry at the anode. It is obvious that the local temperature increases when the stoichiometry ratio decreases, and declines when the stoichiometry at the cathode side grows. This shows that the stoichiometry can be as useful as the air cooling system. It can also be said that increasing the air stoichiometry ratio has a positive impact on the overall cell potential. In the following analysis of the results, all the temperature measurements are taken at the cathode side. Indeed, the temperature at the cathode side is higher than at the anode side, because the activation losses are directly proportional to the rate of the electrochemical reaction and the activation at the anode side is negligible compared to the cathode side. The oxygen stoichiometry ratios are fixed at 3 to 6 while the hydrogen one is fixed at 2, and the oxygen side is humidified with a boiler. Here also, it is clearly seen that the temperature of the anode side is lower than that of the cathode side by more than 1 °C. These figures also show that the local temperature difference between the anode and the cathode increases with the oxygen stoichiometry; this can be caused by the temperature decrease at the cathode side with increasing stoichiometry. It is also obvious from these figures that a current load increase implies a temperature increase.
For comparison of the temperature in the different directions of the single cell, Figure 4.28 shows the temperature distributions across the cell for the three current loads of 5 A, 10 A and 15 A. The anode stoichiometry ratio is fixed at 2, while the O2 stoichiometries are selected between 3 and 5.
In this test, the channels are divided into three different regions depending on the temperature values. These three sections are: the inlet on the top, the middle and the bottom of the cell. Furthermore, in each region, the temperatures are recorded with four thermocouples, placed in series and located in the inlet, middle and outlet zones. These figures show the temperature measurements for the inlet, middle and outlet of the cell.
The highest temperature of the profile is recorded at the outlet of the channel on the cathode side. This can result from the heat generation and its transfer towards the outlet.
The temperature is greater at the middle points than at the inlet. The most important conclusion is that the temperature in the three regions has very similar behaviours; that is, the temperature increases at the outlet, middle and inlet of the cell follow the same correlations as a function of time. Figure 4.28 also indicates that, as time passes, by increasing the current up to 15 A, the middle and inlet temperatures become higher than the outlet temperature. This may be caused by drying faults happening inside the cell. It can be noticed that the temperatures increase gradually from the left side to the middle and then decline at the right side, with reference to the x axis.
The temperature distributions are represented along the y axis and compared between the different regions. As shown in these figures, the temperature increases progressively from the inlet to the outlet.
Hence, at high current loads with a hydrogen stoichiometry of 1.5, the temperature at the outlet decreases along the y axis. However, when the hydrogen stoichiometry is equal to 2, this problem can be removed by reducing the temperature. The results above show that clear differences are observed from one zone to another in temperature and voltage, for any conditions of stoichiometry and current load. A difference of about 2 °C in temperature has been noticed between the inlet, the middle and the outlet of the single cell. For the voltages, a difference of about 9 mV has been measured. This confirms the hypothesis made in chapter III, where each FC cell is assumed to be the combination of 9 elementary cells connected to each other at 9 nodes having different voltages and different temperatures. Thus, to do the 3D modeling of the single cell tested above, the calibration process (cf. chapter III §2.3) is used.
Measurements of the voltage in x and y axes together
Calibration and validation of one single cell
In order to calculate the impedances in the different directions of one cell, the local current density has to be determined at each node by using the Newton-Raphson method (see Table 4.9). In addition, the local resistances based on the current density are summarized in Table 4.10. Furthermore, in order to check the validity of the model so obtained, a simulation of the polarisation curve has been performed. Figure 4.38 shows a comparison between the simulation results and the experimental measurements. Three polarisation curves are obtained according to the three test conditions performed. The model adjusts itself by switching from one to another of these three polarization curves, which is very interesting for both control and diagnosis purposes. This indicates that the healthy-mode model is valid according to each measurement, so it is now ready to be used, namely to simulate faults.
Case of two cells
In this section, two single cells like those used above are assembled together to build a small stack. The goal is to show how to generalize the modeling process from one single cell to a stack. This requires introducing the Z direction in the model. The used set-up is the same as for the single cell (Figure 4.19), but involves two cells of the MES fuel cell instead of one.
Temperature distribution along the z axis
24 thermocouples of type K have been chosen (12 thermocouples per cell, installed at the cathode side). It is noted that all of them have been calibrated before usage (as explained before). Furthermore, 24 voltage sensors are selected for this test.
Each voltage and temperature reading is recorded by the data acquisition at 3 Hz over a period of 5 minutes, because the temperature stabilization requires the temperature to remain at constant values for a minimum of 5 min. For accuracy, minimal error and precision of the results, every test is repeated twice for each operating condition. Further, the local current density profiles are obtained under various operating conditions, such as different current loads (5 A, 10 A and 15 A) and different air and fuel stoichiometry ratios.
Voltage distribution along the z axis
Figure 4.40 illustrates the voltage curves inside the two cells, from the inlet to the outlet, with different oxygen stoichiometries (3, 4 and 5) and the hydrogen stoichiometry fixed at 2. The voltage sensors are placed at the cathode and on the right side of each cell. It is seen that the voltage at the inlet is lower than at the outlet, and follows the same trend as the temperature curves. The highest temperatures and voltages of the profiles are recorded at the outlet of each cell (as expected from the single cell along the y axis). This increase of voltage is linked to the increase of temperature (see Figure 4.39).
(Legend: cell one in red, cell two in blue.)
Calibration and validation for two cells
In order to calculate the impedances in the different directions of the two cells, the local current density has to be determined at each node using the Newton Raphson method (see Table.4.11). In addition, the local current density calculations by Newton Raphson can be summarized in Table.3.7. Furthermore, comparisons of the current distributions between different current load profiles are shown in these tables.
In order to check the validity of the obtained model, a simulation of the polarization curve of the two cells has been performed. Figure.4.41 shows the comparison between the simulation results and the experimental measurements.
Validation with one complete PEMFC
In this section, a MES PEMFC stack is used to validate the proposed model in healthy mode. The same validation process as used above is applied to this stack.
Set-up description
The PEMFC used is shown in Figure.4.42, where the stack and the Electronic Control Unit (ECU) of the system are highlighted. The latter allows supervising and controlling the FC system through a PC, using software provided by the FC manufacturer. The FC system is loaded with an electronically controlled load able to reproduce the current profile [4.3].
Temperature measurements
The fuel cell was tested in the climatic room at different ambient temperatures between 10 °C and 30 °C (see Figure.4.43). The purpose of this test is to identify the constant parameters of the thermal equation (Eq.3.7).
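As an illustration of this identification step, a minimal least-squares sketch in Matlab is given below, assuming a placeholder function thermal_model that implements Eq.3.7 and a measurement structure data; both names are hypothetical:

    % Generic least-squares identification of the thermal-equation constants
    % (illustrative; thermal_model is a placeholder for Eq.3.7 of Chapter III).
    t      = data.time;          % measurement time vector [s]
    T_meas = data.T_stack;       % measured stack temperature [degC]
    cost   = @(p) sum((thermal_model(p, t, data.I_load, data.T_amb) - T_meas).^2);
    p0     = [0.1, 100];         % initial guess for the constant parameters
    p_opt  = fminsearch(cost, p0);   % identified constants of Eq.3.7

Repeating this fit for the tests at the different ambient temperatures gives constants that are valid over the 10-30 °C range covered by the climatic room.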
Voltage measurement and validation
The temperature measurements above are used for calibrating the 3D electrical model under the Matlab/Simulink software. A second comparison has been performed between the proposed 3D simulation model and experimental tests on the FC stack under test. The obtained results are illustrated in Figure.4.48. The small differences observed between the simulation and experimental results can be explained by two phenomena that occur during the experimental test. First, a hydrogen purge function is used to eliminate the water impurity at the hydrogen side; during this operation, the hydrogen valve is opened periodically, with a purge duration in the range 0.15-1 second. Second, a short-circuit function is used to increase the performance of the system; a short circuit happens every 20 seconds for a duration of 50 milliseconds. According to the results above, one can conclude that the proposed 3D model is valid for the simulation of the PEMFC in healthy mode. This result opens interesting prospects for using the model to simulate faults within FC stacks, notably for diagnosis purposes.
Experimental validation of the model.
Experimental tests have been carried out in order to compare them with the multi-physical model. The latter consists of the equivalent circuit given in Figure 2.1, in which the physical parameters were computed according to the rated characteristics of the studied fuel cell. The test bench is built around that FC and uses a climatic chamber for the tests at different environmental temperatures (see Figure.4.49). The technical characteristics of the PEMFC used for modeling purposes are given in Table.4.12.
The dynamic test has been carried out through a load profile computed from a real driving cycle. The latter lasts 12549 s and is given in Figure.4.50. The load profile is imposed by a programmable electronic load connected to the FC stack (Figure.4.51.a). The same profile is then applied to the model for the simulation. In addition, the same experimental physical conditions are used in the simulation (e.g. ambient temperature, stack current).
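A minimal Matlab sketch of how such a driving-cycle profile could be applied to the model is given below; the file name, the Simulink model name fc_3d_model and the logging variables are hypothetical and assume that the model reads the load from a From Workspace block and that time/output logging is enabled:

    % Applying the measured driving-cycle load profile to the model (sketch).
    profile    = load('driving_cycle.mat');    % assumed file with t [s], I [A]
    t          = profile.t;                    % duration about 12549 s
    I          = profile.I;
    load_input = [t, I];                       %#ok<NASGU> read by the model
    simout = sim('fc_3d_model', 'StopTime', num2str(t(end)));
    plot(simout.tout, simout.yout);            % simulated stack voltage vs. time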
Temperature and voltage measurement results
As explained above, the fuel cell voltage depends on many parameters, such as temperature, current, etc. In this work, the effects of the temperature variation along the three directions of the fuel cell stack are considered. Indeed, the temperature changes with time during operation of the fuel cell.
3D effect on stack voltage
As illustrated in Figure.4.56, three sets of temperature measurements were taken: the first one along the y axis (top and bottom of the side edge of the cell), the second one along the x axis (left and right on the top edge of each face), and the third one along the z axis (side faces of each cell). It must be noted that voltages were also measured at the same places where the temperature sensors were installed, in order to explore the relation between these parameters. Based on equations Eq.2.1 to Eq.2.7 in Chapter II, all parameters are directly related to temperature. In other words, the temperature distribution in the different parts of the fuel cell has a direct influence on the increase and decrease of the fuel cell output voltage. In these figures, owing to the location of the air intake, the distribution of current density and other parameters, the temperature in the middle of each direction (x, y and z) is higher than in the other parts.
Calibration of 3D model for two cells
In order to calculate the impedances in the different directions of the stack, the Newton Raphson method has been used. To this end, voltages and temperatures were measured at specific points of the FC stack (since thermocouples and voltage sensors cannot be installed at every location of the FC). The voltages and temperatures measured in the experimental test are shown in Table.4.14. The impedance parameters calculated by the NR method are given in Table.4.15. The table compares the variation of the impedances in response to the air cooling system (variable temperature) and to the current density inside the cell (chemical processes, in particular activation losses). Only the magnitudes of the impedances are considered in this work.
Circuit parameters (impedance magnitudes):

    Parameter   Cell 1 [Ω]   Cell 2 [Ω]
    R_12        0.0577       0.2300
    R_35        0.1007       0.2107
    R_25        0.8286       0.1691

Interface between cell one and cell two:

    R12_11 = 0.1237 Ω
    R12_55 = 0.4236 Ω
Synthesis of the validation with the Nexa PEMFC stack
All the tests (temperature and voltage) have been carried out on the Nexa stack. Based on these measurements, the calibration of two cells of this FC has also been performed. However, at this stage of the study, the 3D model of the Nexa stack still needs to be finalized by updating all the model parameters starting from the FC characteristics. In fact, many temperature and voltage results are available for several 3D positions within the FC stack. All these data are expected to be used in the near future for the 3D fault-sensitive modeling of the Nexa FC.
Conclusion
In this chapter, two PEM fuel cells (MES FC and Nexa FC) have been considered to analyze the behavior of the cell voltage and temperature distributions under various operating conditions. The measurements obtained allow validating the proposed 3D model, first on one single cell, second on two cells and finally on a complete stack of the MES FC. The single and double cells allowed validating the 9-node model, while the complete stack validated the one-stack model. However, the validation still needs to be finalized on the Nexa FC.
It can be concluded that the main hypothesis of the proposed multi-node circuit approach, which assumes different potentials and current densities at each point of the FC stack, is valid. Second, the developed 3D model is valid for simulating the PEMFC operation in healthy mode. Thus, it can be used for introducing different faults in order to study the behavior of the voltage and current distributions in the X, Y and Z directions of the stack. The goal is to characterize faults for diagnosis purposes. The next chapter will explain how the proposed model can be used in the fault diagnosis of the PEMFC for automotive applications.
General Diagnosis Strategy of FCEV drive trains
With increasing demands for efficiency in vehicle applications and safety-critical processes, the field of fault detection and fault diagnosis plays an important role. During the last few decades, theoretical and experimental research has shown new ways to detect and diagnose faults. One distinguishes fault detection, which recognizes that a fault has happened, from fault diagnosis, which finds the cause and the location of the fault. Advanced methods of fault detection are based on mathematical signal and process models, and on methods of system theory and process modeling, to generate fault symptoms. Compared to other electrochemical power devices such as the battery, the PEMFC is much more complicated. Its complexity derives from the following aspects [5.1]:
1. The three-dimensional architecture is vitally important to performance and durability, due to the large size of PEM fuel cell stacks.
2. Local performance can seriously affect the system's performance and durability.
3. The operating conditions are complicated, involving load, temperature, pressure, gas flow and humidification.

A further important field is fault management, or asset management. This means avoiding shutdowns by early fault detection and actions like process condition-based maintenance or repair. If sudden faults, failures or malfunctions cannot be avoided, fault-tolerant systems are required. Through methods of fault detection and reconfiguration of redundant components, breakdowns, and in the case of safety-critical processes, accidents, may be avoided [5.2].
The diagnosis process developed within this thesis concerns the drivetrain of the FCEV (see Chapter 1, Figure 1.18). Among the components of the FCEV drivetrain presented (PEMFC, batteries, DC/DC and DC/AC converters and electrical motors), the fuel cell is the most fragile. This is why this study focuses on the PEMFC part of the drivetrain.
With respect to the whole vehicle, the drivetrain is just one subsystem among many others. Thus, faults can be divided into different levels according to the depth of their location within the vehicle (see Table.5.1). In this section, we explain this scheme in relation to our thesis work, whose topics are indicated by a green frame.
Level one
This diagnosis level takes care of the main systems inside the vehicle. These devices, generally called subsystems or main components, include the powertrain, the embedded grids (or the electrical harness), the ICE, the steering column, the wheels, etc. At this level 1, a fault is detected through a basic supervision algorithm in which sensors send to the main Electronic Control Unit (ECU) the State of Operating of each Subsystem (SOS). A Boolean supervision algorithm (true/false) thus sends a fault signal and indicates the subsystem in which the false input is detected (see Figure.5.2) [5.3].

Level two

This diagnosis level aims to indicate accurately which component of the drivetrain is in fault, i.e. the PEMFC, the DC/DC power converter, the battery system, the DC/AC power converter or the motorization device. This is achieved thanks to a classification of faults done starting from the electrical, thermal and mechanical measurements of the drivetrain. At this level 2, the subsystem in faulty mode is detected by the ECU. The next step is to go deeper into the analysis of the fault signals, to learn more about the fault that occurred and to evaluate the SOH of the faulty subsystem.

Level three
When an unusual behavior is identified in one subsystem, the fault diagnosis strategy is to evaluate the fault severity and its impact on the subsystem performance. The loss of performance can be expressed through a function between zero and one reflecting the actual State Of Health (SOH) of the subsystem. That parameter can be used in a control algorithm for the degraded mode of the drivetrain. To evaluate the SOH parameter efficiently, it is necessary to have a deep knowledge of the corresponding system. In the framework of this thesis, this study has been developed for fault diagnosis of the source of the drivetrain (the fuel cell). This was achieved through the 3D fault-sensitive modeling of the PEMFC and the training of an ANN-based model for fault diagnosis and SOH computation (see Figure …). A common symptom of PEMFC failures is a change of the voltage: if a fault occurs in the FC, the voltage can either increase or decrease according to the fault [5.4]. In summary, the FC stack voltage is a first indicator of a degraded working mode. Different categories of faults in PEMFCs are likely to occur under operating conditions [5.5].
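As an illustration of the level-1 Boolean supervision described above, a minimal Matlab sketch is given below; the subsystem names are examples, not the actual vehicle implementation:

    % Level-1 Boolean supervision sketch (illustrative): each subsystem reports
    % its State of Operating (SOS) as true (healthy) or false (faulty).
    function fault_report = level1_supervision(SOS)
        % SOS: struct with one logical flag per subsystem (names are examples)
        names = fieldnames(SOS);
        fault_report = {};
        for k = 1:numel(names)
            if ~SOS.(names{k})
                fault_report{end+1} = names{k}; %#ok<AGROW> subsystem in fault
            end
        end
    end

    % Usage example:
    % SOS.powertrain = true; SOS.harness = true; SOS.steering = false;
    % level1_supervision(SOS)   % returns {'steering'}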
Water management and temperature effects are crucially important for the healthy operation of a PEMFC.
Artificial Neural Network for diagnosis of PEMFC
In this work, a two-layer feed-forward ANN has been used in order to classify the faults. There are no general rules to determine the number of hidden layers and hidden nodes; this also depends on the complexity of the mapping to be achieved. The number of inputs (input nodes) and outputs (output nodes) is of course determined by the specific problem. The number of neurons and connections limits the number of patterns a neural network can store reliably [5.6]. A comprehensive investigation of the ANN structure and its application is presented in Appendix 5A.
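A minimal sketch of such a two-layer feed-forward network in Matlab is given below, assuming the Neural Network / Deep Learning Toolbox is available; the hidden-layer size and the data split are illustrative, not the values used in the thesis:

    % Two-layer feed-forward classification network (one hidden + output layer).
    net = patternnet(10);                 % one hidden layer of 10 neurons
    net.divideParam.trainRatio = 0.70;    % training / validation / test split
    net.divideParam.valRatio   = 0.15;
    net.divideParam.testRatio  = 0.15;
    % X: attribute columns (one sample per column); T: one-hot class targets
    [net, tr] = train(net, X, T);
    Y = net(X);                           % class scores for each sample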
Fast Fourier Transform (FFT)
The DC load current contains some typical harmonics, identical to those one can find in the DC/DC boost converter generally associated with the PEMFC. The FFT algorithm is applied to compute the first 7 harmonics. The DFT is extremely important in the area of frequency (spectrum) analysis because it takes a discrete signal in the time domain and transforms it into its discrete frequency-domain representation. An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the most important difference is that an FFT is much faster. (In the presence of round-off error, many FFT algorithms are also much more accurate than evaluating the DFT definition directly.) It is the speed and discrete nature of the FFT that allows us to analyze a signal's spectrum with Matlab. Matlab's FFT function is an effective tool for computing the discrete Fourier transform of a signal.
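As an illustration, a minimal Matlab sketch of this harmonic extraction is given below; the sampling frequency fs, the converter switching frequency f_sw and the sampled voltage v are assumed inputs, and the indexing scheme is illustrative rather than the exact thesis code:

    % Extracting the mean value and the first 7 harmonics with Matlab's FFT.
    N  = length(v);                       % v: sampled stack voltage
    V  = fft(v) / N;                      % two-sided spectrum, scaled
    f  = (0:N-1) * fs / N;                % frequency axis [Hz]
    V0 = abs(V(1));                       % mean (DC) value
    Vh = zeros(1, 7);
    for h = 1:7                           % harmonics of the converter frequency
        [~, idx] = min(abs(f - h * f_sw));
        Vh(h) = 2 * abs(V(idx));          % single-sided amplitude of harmonic h
    end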
Modelling method for on-line FC diagnosis
ANN Based 3D Fault Classification in the PEMFC single cell
The PEMFC dynamic 3D model is built based on experimental results and simulations using the MATLAB software. With this model and the experiments, the mechanisms of different faults in PEMFC systems are analyzed. An ANN is applied for the fault diagnosis and classification. The ANN is trained with data coming from the FFT algorithm, according to an analysis of the variation of the operating conditions in the fuel cell. Fault detection at each step, based on the operating conditions, has been simulated. The classification of flooding and drying is illustrated in Table.5.3. For the Too Dry faults, all humidity values between 30% and 50%, with constant pressure (at the cathode and the anode side) and a constant temperature of 65 °C, have been simulated. The FFT algorithm has been used to analyze the output voltage at each node and the output voltage of the cell. The measurements of the mean value and of the first seven harmonics of the output voltage, in steady-state operation, allow computing the corresponding Harmonic Distortion Rate (HDR) and the mean value voltage variation with respect to the healthy value (MVV). Hence, the FFT data obtained for each operating condition serve for training the ANN for fault diagnosis and classification.
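As an illustration, the two features can be computed from the FFT results of the previous section; the exact HDR and MVV definitions used in the thesis take precedence, so the formulas below are the usual textbook ones and should be read as assumptions:

    % Sketch of the two features used for training the ANN (assumed definitions).
    HDR = sqrt(sum(Vh.^2)) / V0;              % harmonic distortion rate
    MVV = (V0 - V0_healthy) / V0_healthy;     % mean value voltage variation
    feature_vector = [HDR, MVV, Vh];          % one training sample for the ANN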
In order to calculate the impedances in the different directions of one cell, the local current density has to be determined at each node using the Newton Raphson method (see Table.5.6). In addition, the local current density calculations by Newton Raphson can be summarized in Table.5.5. Furthermore, comparisons of the current distributions between different current load profiles are shown in this table. To train the neural network, all the impedances of the 3D sensitive model are used. Drying and flooding faults are divided into 4 types, as shown in Table.5.2. In order to isolate faults, each node can be simulated for different operating conditions. In this work, a temperature of 62 °C and a humidity of 30% correspond to the Too Dry condition, while a temperature of 55 °C and a humidity of 120% are assumed for the Too Flooded condition. Besides, temperatures of 45 °C and 55 °C with humidities of 110% and 50% are assumed for the Flooding and Drying faults respectively. To obtain these conditions, the inlet pressures of hydrogen and oxygen are adjusted to 1.5 bar and 2.2 bar respectively. Tables 5.8 and 5.9 show the current distributions for different current load profiles and different faults at all nodes of the cell. In particular, Table.5.8 shows that, in the Too Flooded faulty mode at 10 A, the local current density increases from 0.02 to 1.07 A/cm² from the outlet to the inlet. For the Flooding fault, these values decrease approximately from 0.02 to 0.11 A/cm². However, for the Too Dry and Drying faults, the local current densities decrease from 0.5 to 0.29 and from 0.4 to 0.17 A/cm² from the outlet to the inlet respectively. This could be caused by the temperature variations within the cell.
As explained previously, the temperature is inhomogeneously distributed. Because of heat transfer, the pressures at the inlet and outlet change. In addition, these variations are not noticeable for the 5 A current load profile, because the temperature distribution is not much affected at this current level.
ANN fault classification in a stack of 2 cells
In this work, the isolation and classification of faults in the PEMFC are divided into two steps:
Step 1: Isolate and classify the faults at stack level, i.e. detect the faults in each cell.
Step 2: Isolate and localize the faults within each cell. The dry-fault data set of 567 samples contains 7 harmonics at 9 nodes for 9 classes. Each class is obtained by varying the operating conditions based on Table.5.2.
Fault classification in the stack
These figures show that the 7 harmonic attributes are used as inputs to the neural network, and the respective target for each sample is one of the 9 classes. The data for classification are set up for the neural network by organizing them into the input matrix X and the target matrix T. Each column of the input matrix has 9 elements representing the 9 nodes; correspondingly, each column of the target matrix has 9 elements encoding the 9 classes.
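A minimal sketch of this data organization in Matlab is given below (the toolbox functions ind2vec, train and plotconfusion are assumed available, net is the network created earlier, and the variable names and sizes are illustrative):

    % Organizing the classification data into input matrix X and target matrix T.
    X = harmonics;                        % attribute matrix, one sample per column
    T = full(ind2vec(class_labels, 9));   % class_labels: indices 1..9 per sample
    [net, tr] = train(net, X, T);         % supervised training
    plotconfusion(T, net(X));             % green diagonal = correct classes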
In these figures the confusion matrix shows the percentage of correct and incorrect classifications.
Correct classifications appear as green squares on the matrix diagonal, while incorrect classifications appear as red squares. Furthermore, the blue cell at the bottom right shows the total percentage of correctly classified cases (in green) and the total percentage of incorrectly classified cases (in red). The results show very good recognition and, in this case, the network response is satisfactory. The neural network architecture appropriate for solving classification problems is the feed-forward one, characterized by: an input layer having as many units as there are attributes in the data; one or several hidden layers (the larger the number of hidden units, the more complex the model extracted by the neural network, but this is not necessarily beneficial for the problem to be solved); and an output layer having as many units as there are classes.
Conclusion
In this chapter, the study focused on a power source consisting of a PEM fuel cell in the power train. A new model was proposed to improve the lifetime and reliability of the power train and to detect faults online.
Fault classification and isolation are implemented for the fuel cell stack and for a single cell.
Besides, the current distributions at different points of the cell under varying operating conditions are calculated by the Newton Raphson method. These variations cause the Drying, Flooding, Too Dry and Too Flooded conditions in the cell. The current density distributions are localized for each current step and each fault.
The ANN method has been used to develop a diagnosis based on the 3D sensitive models for fault isolation in one PEM cell. The input data of the ANN were analyzed by the FFT method. The advantages of ANNs consist in their ability to analyze a large quantity of data and to classify the faults in terms of their types. The ANNs are used for the classification and isolation of the faults. The data for classification are set up for the neural network by organizing them into the input matrix X and the target matrix for 9 classes. The results show very good recognition and, in this case, the network response is satisfactory.
General Conclusion
The FCEV is considered by public and private research organizations as one of the most suitable solutions for clean transportation. Indeed, the use of hydrogen produced by water electrolysis using renewable energy sources, combined with a proton exchange membrane FC, allows a completely green energy cycle. Hydrogen production and distribution technologies, as well as FC ones, are mature enough to be economically viable. Many automakers (Daimler, Honda, Chevrolet, Hyundai, Ford) already propose vehicles whose performances are comparable to those of internal combustion vehicles (500 km range, 130 CV, 160 km/h max).
One of the main remaining obstacles on the way to the commercialization of these vehicles is the reliability of their drivetrains, which has to be increased so as to be competitive with conventional vehicles. The FCEV drivetrain contains the PEMFC, batteries, DC/DC converters, DC/AC inverters and electrical motors. Among the drivetrain components, the PEMFC is the most fragile. Indeed, its performance is affected by different operating conditions such as temperature, pressure, humidity and current density. The latter influence the cost, output power, energy efficiency, reliability and lifetime of the PEMFC. Thus, understanding the operating modes is very useful for enhancing the lifetime of the system.
To meet this objective, a 3D model has been developed for the modeling and simulation of a PEMFC. A circuit approach has been used, which easily takes into account the three-dimensional aspect of the PEMFC stack. This kind of model also offers the possibility to include, through a parameterization process, all the environmental conditions, namely the temperature, gas pressures, stoichiometries and humidity. It has been shown that the proposed model is able to simulate single-cell, double-cell and multi-cell PEMFCs in normal operating conditions (healthy mode) but also in faulty operating conditions (faulty mode). The model has thus been used to train an ANN-based model for online diagnosis purposes.
The model principle, as well as the process used for its establishment, has been explained in detail. It has been shown that experimental tests are combined with theoretical formulas to calibrate and validate this model. In the calibration process, the Newton Raphson method has been used to find the physical parameters of the model. In this calibration, the temperature and voltage distributions in the FC stack were considered for different operating conditions in terms of current load and stoichiometry of oxygen and hydrogen.
For the experimental study, two PEMFCs have been considered to analyze the behavior of the cell voltage and temperature distributions under various operating conditions. The measurements obtained allow validating the proposed 3D model, first on one single cell, second on two cells and finally on a complete stack of the PEMFC: the single and double cells allowed validating a 9-node model, while the FC system validated the one-stack model. Thus, the validated 3D model can be used for introducing different faults to study the behavior of the voltage and current distributions in the three space directions of the stack, for the purpose of FC diagnosis. In this framework, an ANN-based model has also been developed to classify the different faults. The input data of the ANN were analyzed by the FFT method. The data for classification are set up for the neural network by organizing them into the input matrix X and the target matrix for 9 classes in a single cell. The results show very good recognition and, in this case, the network response is satisfactory.
Block Diagram
A dynamic model for the PEM fuel cell has been developed in MATLAB/SIMULINK, based on the electrochemical and thermodynamic characteristics of the fuel cell discussed in chapter III. The fuel-cell output voltage, which is a function of temperature and load current, can be obtained from the model.
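A minimal static sketch of such a voltage model is given below; it uses a generic textbook polarization form (activation, ohmic and concentration losses) with illustrative constants, not the exact equations of Chapter III:

    % Generic static polarization sketch: V(i, T) with illustrative constants.
    function V = fc_voltage(i, T)
        % i: current density [A/cm2], T: stack temperature [K]
        E0 = 1.229 - 0.85e-3 * (T - 298.15);   % reversible voltage (approx.)
        A  = 0.06;  i0 = 1e-4;                 % Tafel slope / exchange current
        R  = 0.2;                              % area-specific resistance [Ohm cm2]
        m  = 3e-5;  n = 8;                     % concentration-loss constants
        V  = E0 - A * log(i / i0) - R * i - m * exp(n * i);
    end

In the full Simulink model, the temperature entering this relation is itself computed from the thermal equation, which produces the coupled voltage-temperature dynamics discussed above.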
Effect of Air stoichiometry on temperature distribution along channel
The effect of the air stoichiometry ratio on the temperature distribution along the channel for different current values (10 A and 15 A) is shown in Figure.4.A.7 and Figure.4.A.8. This phenomenon can be attributed to the higher electrochemical activity taking place over the MEA surface as a result of decreasing the cell potential. This is the most important point to be noted about the effect of the temperature on the cell voltage. This relationship can be useful to study the fault diagnosis for drying and flooding.
Static and Dynamic Artificial Neuron Models: Adaptive Function Estimators
General: a very detailed description of the artificial neuron is given below, since this is absolutely necessary for a good mathematical and physical understanding, and for all those who also wish to develop other types of ANM (e.g. a fuzzy-neural model, a minimum-architecture neuron model, etc.). ANNs are based on crude models of the human brain and contain many artificial neurons (computational units) linked via adaptive interconnections (weights) arranged in a massively parallel structure. They are artificial 'entities' that can actually learn from given data sets (they estimate functions from data sets). In other words, they are adaptive function estimators, which are coarse simulators of a biological neural network in a human brain.
It is a very important feature that a suitable ANN is capable of learning the desired mapping between the input signals and output signals of the system under consideration without knowing the exact mathematical model of the system; in this sense, the ANN is a numerical, trainable, model-free adaptive estimator (similar to a fuzzy system). Since the ANN does not use a mathematical model of the system under consideration, the same ANN configuration and dynamics can be applied to many problems.
A human brain can perform an extremely large number of different operations, and there are a number of different ANNs that try to mimic many of these features. Similarly to the human brain, the basic element of an ANN is a single computational neuron, which is basically a multi-input, usually non-linear, processing element with weighted interconnections. By simplifying the neuro-biological process of a human brain neuron, it is possible to obtain a relatively simple artificial neuron model which gives a good representation. A simple model of the so-called 'static' artificial neuron has four main parts:
- input(s);
- a weighted summer;
- a non-dynamic function (the so-called 'activation function', also sometimes referred to as a transfer function), which in most applications is non-linear (there are also ANN models which use linear functions);
- output(s).
It must be noted that this neuron model is also referred to in the literature as the perceptron neuron but, strictly speaking, by its original definition it should only be called the perceptron if a special form of activation function is used (e.g. the hard-limit function). It can be seen that the static artificial neuron model does not contain dynamics. However, in a so-called 'dynamic' artificial neuron model, in addition to the four main parts described above, the activation function block is followed by a dynamic block. This dynamic block can be represented by a simple delay element (a first-order low-pass dynamic block). Figure.5A.1 shows the basic model of a single static artificial neuron (AN), which is the ith neuron in an artificial neural network containing many neurons.
Although in the simplest neuron model there is only one neuron, in general there are n inputs to a general ith neuron, as shown in Figure.5A.1: these are x_1(t), x_2(t), x_3(t), ..., x_n(t). They can be considered the elements of the n-dimensional input vector x(t) = [x_1(t), x_2(t), x_3(t), ..., x_n(t)]^T. The neuron output is the scalar quantity y_i(t). The neuron contains an aggregation operator which, for example, performs the weighted sum

    s_i(t) = Σ_{j=1}^{n} w_ij x_j(t) + b_i

where the w_ij are the connection weights (interconnection strengths) between the ith neuron and the jth inputs, and b_i is a constant (often called the bias or threshold of the activation function). It follows that the inputs are transmitted through the weighted connections, whereby the weights are multiplied by the inputs, the weighted products are added, and the net value (S_i) is obtained by adding the bias (b_i) to this sum. Finally, the output of the neuron (y_i) is obtained by applying the neuron activation function (f_i), thus y_i = f_i(S_i), as shown in Figure.5A.1.
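A minimal numerical sketch of this static neuron is given below; the values and the choice of a logistic sigmoid are illustrative:

    % Static artificial neuron of Figure.5A.1: y_i = f_i(sum_j w_ij*x_j + b_i).
    f = @(s) 1 ./ (1 + exp(-s));    % activation function (logistic sigmoid)
    x = [0.5; -1.2; 0.3];           % inputs x_1..x_n
    w = [0.8,  0.1, -0.4];          % weights w_i1..w_in (row vector)
    b = 0.2;                        % bias
    s = w * x + b;                  % net input S_i
    y = f(s);                       % neuron output y_i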
The input to a neuron has two sources: external inputs and internal inputs; the latter are inputs from other neurons. It should be noted that, by also considering the bias input in Figure.5A.1, there are in total n+1 inputs, and the threshold has been incorporated by employing the input x_0 = 1 with a corresponding weight of b_i. Thus the bias is simply added to the sum (note that the starting value for j is now 0 and not 1 as before), where x_0 = 1 and w_i0 = b_i. In this case, the bias acts like a weight, but with a constant input of "1". This is the main reason why one of these two very similar static neuron models can be found in various publications. A neuron fires if the weighted sum of its inputs exceeds the (threshold) bias value. In an ANN, b_i can be set to be a constant or a variable (which can change like the weights); in the latter case there is added flexibility for the network.
An ANN with biases can represent input-to-output mappings more easily than one without biases. For example, if all the inputs to a neuron are zero (x_j = 0, j = 1, 2, ..., n), a neuron without a bias will have the net input:
    s_i = Σ_{j=1}^{n} w_ij x_j(t) = 0
Thus the activation function takes a single value f(S_i) = f(0) (which depends only on the activation function employed). However, if the same neuron has a bias, the net input is S_i = b_i and thus the activation function becomes f(S_i) = f(b_i), which (for a specific activation function) can have any value, depending on the bias. This results in greater flexibility.
In Figure.5A.1 the neuron also contains a non-dynamic activation function, f(S_i). The reason for the use of a non-linear activation function is to deliberately introduce non-linearity into the neuron model, since this makes the network capable of storing strongly non-linear mappings. If a non-linear activation function were not incorporated into this model, the artificial neuron would represent a linear system, which could not be used for the mapping of a non-linear system and could not suppress noise, so the linear network would not be robust. However, it should be noted that there exist neuron models with linear activation functions, but these can only be used for the modeling of linear systems. The neuron output is then:
    y_i(t) = f_i(S_i) = f_i( Σ_{j=1}^{n} w_ij x_j(t) + b_i )
There are various types of activation function f_i (a mathematical function) which can be used in ANNs. However, non-linearity and simplicity are the two key factors for the selection of a specific activation function. Furthermore, since some training techniques (e.g. the back-propagation technique) require the first derivative of the activation function (f'), when they are used in an ANN with such a learning technique, the activation function must be differentiable.
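For illustration, some common differentiable activation functions and their derivatives can be written in Matlab as follows (the definitions are the standard ones):

    % Common activation functions and their derivatives (for back-propagation).
    sigm    = @(s) 1 ./ (1 + exp(-s));       % logistic sigmoid
    dsigm   = @(s) sigm(s) .* (1 - sigm(s)); % its first derivative
    tanhf   = @(s) tanh(s);                  % hyperbolic tangent
    dtanhf  = @(s) 1 - tanh(s).^2;           % its first derivative
    hardlim = @(s) double(s >= 0);           % hard limit (not differentiable)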
ANNs: single-layer and multilayer feed-forward ANNs
Neural network systems consist of parallel distributed information-processing units with different connecting structures and processing mechanisms. They have a large variety of applications in engineering, such as function approximation, pattern recognition, etc. The architecture of a neural network specifies the arrangement of the neural connections as well as the type of units characterized by an activation function. The processing algorithm specifies how the neurons calculate the output vector for any input vector and for a given set of weights. The training algorithm specifies how the NN adapts its weights w for all given input vectors, called training vectors. Thus, the neural network can acquire knowledge through the training algorithm and store this knowledge in the synaptic weights. The most commonly used NNs are the multi-layer feed-forward networks, such as a three-layer network (input, one hidden and one output layer), as shown in Figure 5A.3.
Single Layer ANN
The neurons are the building blocks of an artificial neural network. In a so-called single-layer feed-forward ANN there is at least a single artificial neuron of the type discussed in the previous section. As shown in Figure.5A.3, in general there can be n inputs X = [x_1, x_2, ..., x_n]^T and k neurons in the single layer of the ANN, where in general k ≠ n, and each input is connected to each neuron through the appropriate weights. Each neuron performs the weighted sum of its inputs plus the bias, and applies this to its activation function. It follows that the ANN described by a single layer has k outputs y_1 = [y_11, y_12, ..., y_1k]^T (where the index 1 in y_1 denotes the first layer, whose outputs are y_11, y_12, ..., y_1k), and:
    y_1 = F_1(W_1 X + B_1)
In this expression F_1 is the activation matrix of this single layer, which is a diagonal matrix with k elements and which depends on the net inputs to this layer; b_11, b_12, ..., b_1k are the biases of nodes 1, 2, ..., k of the output layer respectively. An ANN with a single layer can be used with only a very limited number of systems, and it cannot represent all non-linear functions. When the activation functions in a single-layer ANN are hard-limit functions, the so-called single-layer perceptron model arises. This can be used for certain types of classification problems, since the hard-limit function divides the input space (the space defined by the input vector) into two regions, and the output will be 1 or 0 depending on the input vectors.
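A minimal sketch of this single-layer forward pass in Matlab is given below; the sizes and the sigmoid activation are illustrative:

    % Single-layer forward pass y1 = F1(W1*X + B1), with k neurons and n inputs.
    n = 4; k = 3;
    W1 = randn(k, n);               % weight matrix
    B1 = randn(k, 1);               % bias vector b11..b1k
    F1 = @(S) 1 ./ (1 + exp(-S));   % elementwise activation
    X  = randn(n, 1);               % input vector
    y1 = F1(W1 * X + B1);           % k outputs y11..y1k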
However, the fact that there can be only two different output values is a great limitation. Furthermore, the single-layer perceptron cannot learn the mapping of systems whose input space is defined by linearly non-separable vectors. It is sometimes convenient to have a geometrical interpretation of this: if the input space contains linearly non-separable vectors, then no straight line or plane separating the input vectors can be drawn in the input plane. When a single-layer network uses linear activation functions, its neurons are called Widrow-Hoff neurons or ADALINE neurons (Adaptive Linear Neurons); the resulting network using adaptive learning is called the ADALINE network, or the MADALINE network for many ADALINEs.
Multilayer ANN
The neurons are also the building blocks of an artificial neural network containing many layers. In a multilayer feed-forward ANN, the neurons are arranged in several parallel layers. The connection of several layers results in a network which allows a more complex non-linear mapping between the inputs and the outputs; this can be used to implement classifiers and associators representing complex non-linear relations among variables. In a multilayer artificial neural network, the neurons of layer 0 (the input layer) do not perform computation (processing), but only feed the inputs to the neurons of layer 1, which is called the first hidden layer. There are no interconnections between the nodes of the same layer. Layer 1 can be followed by a second hidden layer (layer 2). In theory there could be any number of hidden layers, but this would significantly increase the complexity of the training of the network, and networks with one or two hidden layers appear to provide adequate accuracy, robustness and generalization in many cases. If there is only a single hidden layer, satisfactory performance can be obtained by using non-linear activation functions only in the hidden layer and linear activation functions in the output layer. When contrasted with the network with a single hidden layer, the network with two hidden layers may provide higher accuracy at a lower cost (fewer processing units).
In the ANN with two hidden layers, the last layer (layer 3) is the output layer. In general, the layers between the input and output layers are the hidden layers. Each neuron is connected to all neurons of the adjacent layers and to no other neurons; connections within a layer are not permitted. Generally there are different numbers of neurons, and different weights, in different hidden layers. There are no general rules to determine the number of hidden layers and hidden nodes; this also depends on the complexity of the mapping to be achieved. The number of inputs (input nodes) and outputs (output nodes) is of course determined by the specific problem. The number of neurons and connections limits the number of patterns a neural network can store reliably.

In a multilayer ANN the activation functions in the output layer can be linear functions, since the network is able to represent a non-linear system by using non-linear activation functions in the hidden layer(s). For illustration purposes, Figure.5A.4 shows the schematic of a three-layer feed-forward ANN. The term 'feed-forward' refers to the fact that the arcs joining the nodes are unidirectional. Such a network is also referred to as a multilayer perceptron, although strictly speaking this terminology should only be used when the activation functions are hard-limit functions (see the definition of the perceptron above).

It should be noted that in the literature such a network is sometimes called a four-layer network, corresponding to the fact that there are four layers of nodes (for the input, hidden 1, hidden 2 and output layers). However, the network has only three layers of processing neurons, and therefore such a network is also sometimes referred to in the literature as a three-layer network. If the latter definition is applied, the term 'layer' refers to the actual number of existing processing layers. This convention is more logical, since the input nodes (in input layer 0) do not perform computation. The ambiguity can be totally removed by adopting a definition in which there is no input layer, and the layer to which the inputs are directly connected is the first layer (the first hidden layer). It is then very clear that the number of layers in such a network is equal to the number of hidden layers plus 1. It should also be noted that, when this definition is used, an N-layer network has N-1 layers of hidden neurons, whose outputs are not directly accessible. As a consequence, the errors (the differences between the desired and actual values) at these outputs are not known directly; they can be obtained by first determining the errors at the output layer and then back-propagating them.

In general, multilayer artificial neural networks can be considered as versatile non-linear maps with the elements of the weight matrices (weights) and bias vectors as parameters. In the ANN shown in Figure.5A.4 there are n inputs, one output layer (OL) with M output nodes and two hidden layers (HL1, HL2). In general, each of the layers can have a different number of nodes, and all nodes in a given layer are connected to all nodes in the next layer, but there are no interconnections between the nodes of the same layer.
The number of inputs corresponds to the number of physical characteristics that are considered important for the neural network, and the number of output nodes is equal to the number of output quantities to be determined. As discussed above, in general there can be several hidden layers but, due to the computational burden, this is often limited to one or two hidden layers. According to the universal approximation theorem, one hidden layer is sufficient to perform any non-linear input-to-output mapping, but the theorem gives neither the number of hidden neurons nor whether a single hidden layer would be optimal in the sense of ease of learning.
This can sometimes make the training of the network difficult and, in supervised ANNs, may necessitate trial-and-error-based computations aimed at obtaining an ANN with optimal numbers of hidden layers and hidden nodes.
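To close this appendix, a minimal sketch of the forward pass of the two-hidden-layer network of Figure.5A.4 is given below; all sizes and weights are illustrative random values:

    % Multilayer (two hidden layers) forward pass, as in Figure.5A.4 (sketch).
    f  = @(S) tanh(S);              % non-linear hidden activations
    n = 5; h1n = 8; h2n = 6; M = 3; % illustrative layer sizes
    W1 = randn(h1n, n);   b1 = randn(h1n, 1);
    W2 = randn(h2n, h1n); b2 = randn(h2n, 1);
    W3 = randn(M, h2n);   b3 = randn(M, 1);
    X  = randn(n, 1);               % n inputs
    h1 = f(W1 * X  + b1);           % first hidden layer (HL1)
    h2 = f(W2 * h1 + b2);           % second hidden layer (HL2)
    y  = W3 * h2 + b3;              % linear output layer (OL), M outputs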
In recent years, in response to the upcoming challenges of pollution and fuel saving, the use of FCEVs is increasing. The fuel cell power train can be divided into the PEMFC, batteries, DC/DC converters, DC/AC inverters and electrical motors. Proton Exchange Membrane Fuel Cells (PEMFC) have consistently been considered for transportation applications. Characteristic features of the PEMFC include a low operating temperature (50 to 100 °C) and a solid polymer electrolyte membrane. In this work, experiments have shown that the temperature distribution can significantly influence the performance of the PEMFC. Analytical studies have also indicated that the ionic resistivity of the electrolyte membrane, the kinetics of the electrochemical reaction and the gas diffusion electrodes are directly related to temperature. This work evaluated the effect of temperature on single-cell and stack fuel cells. In addition, a 3D model accounting for the effect of temperature on the performance of the fuel cell is developed. In this thesis, two PEM fuel cells have been considered to find this relationship and to analyze the behavior of the cell voltage and temperature distributions under various operating conditions. An experimental study of voltage and temperature has been carried out using one cell; 12 thermocouples and 12 voltage sensors have been installed at different points of the cell.
In this work, a new model was proposed to improve the lifetime and reliability of the power train and to detect faults online. Besides, the current distributions at different points of the cell under varying operating conditions are calculated by the Newton Raphson method. On the basis of the fault-sensitive models developed above, an ANN-based fault detection and diagnosis strategy and the related algorithm have been developed. The patterns identified by the ANN have been used in the supervision and the diagnosis of the PEMFC drivetrain. The ANN's ability to handle a large quantity of data made it possible to classify the faults in terms of their type.
Résumé :
In recent years, the proton exchange membrane fuel cell (PEMFC) has attracted particular interest for transport applications, owing to its relatively low operating temperature (50-100 °C) combined with a solid polymer membrane preventing any risk of leakage. In this work, experiments have been performed to demonstrate that the temperature distribution has a significant influence on the performance of the PEMFC. Furthermore, this work includes an analysis aiming to show the dependence of the ionic resistivity of the membrane, of the reaction rate and of the gas diffusion on temperature.

Experiments on a single cell and then on a complete stack allowed evaluating the impact of temperature using a 3D model developed to simulate the performance of the fuel cell in relation to the temperature distribution. In this thesis, two fuel cells allowed validating the behavior and deriving a relationship between the output voltage and the temperature distribution under different operating conditions. An experimental study taking into account voltage and temperature has been carried out on one cell, by measuring the temperature and the voltage at twelve points using thermocouples and voltage probes.

The proposed 3D model thus allows improving the lifetime and reliability of a fuel cell; it also allows performing diagnosis and detecting faults online. This is achieved by computing the local current density under different operating conditions using the Newton Raphson method. Based on this fault-sensitive model, a fault detection algorithm and a diagnosis strategy have been developed using artificial neural networks (ANN). The latter have been used for supervised fault classification, thus enabling diagnosis.
Test bench instrument devices. ................................................................................. Figure.4.54. Thermal impact on PEMFC stack. ............................................................................ Figure.4.55. Variable temperature and voltage based on different current during stack fuel cell. ... Figure.4.56. Schematic of position of the sensor of voltage and temperature in PEMFC stack. ....Figure.4.57. Experimental results of disparate of temperature in different position of the PEMFC. ..................................................................................................................................................... Chapter V:
Figure. 5 . 1 .
51 Figure.5.1. Illustration of the "Level 0" of the vehicle diagnosis. ..................................................Figure.5.2. Illustration of the "Level 1" of the vehicle diagnosis. ..................................................Figure.5.3. Illustration of the "Level 3" of the vehicle diagnosis. .................................................. Figure.5.4. Implementation algorithm fault diagnosis of PEMFC in power train. ......................... Figure.5.5. Fault isolation on drying, flooding, too dry and too flood in two cell .......................... Figure.5.6. Regression plot for different faults in two cells ...........................................................
Figure.5.7. Fault isolation for Drying faults according to nine nodes during in one cell. ............... Figure.5.8. Fault isolation for flooding faults according to nine nodes during in one cell. ............. Figure.5.9. Fault isolation for too flood faults according to nine nodes during in one cell. ............ Figure.5.10. Fault isolation for too dry faults according to nine nodes during in one cell. ............. Figure.5.11. Regression plot for flooding faults according to nine nodes during in one cell. ......... Figure.5.12. Regression plot for too flooding faults according to nine nodes during in one cell. ... Figure.5.13. Regression plot for too drying faults according to nine nodes during in one cell. ...... Figure.5.14. Regression plot for drying faults according to nine nodes during in one cell..............
Figure.1.1. Energy power sources from 1949 to 2011.
Figure.1.2. Total consumption by sector, 2011.
Figure.1.3. Daimler fuel cell electrical vehicle.
Figure.1.4. Ford fuel cell electrical vehicle.

2.1.3. General Motors: General Motors has the longest fuel cell history of any automaker; its Electrovan demonstrated the potential of fuel cell technology nearly 50 years ago. The company has had a succession of fuel cell test and demonstration vehicles, including the world's first publicly drivable FCEV in 1998. 2007 saw the launch of the HydroGen4 (marketed in the USA as the Chevrolet Equinox, see Figure.1.5), representing the fourth generation of GM's stack technology. More than 120 test vehicles have been deployed since 2007 under Project Driveway, which put the vehicles into the hands of customers and has been the world's largest FCEV end-user acceptance demonstration: the vehicles have accumulated more than two million miles on the road [1.4].

Figure.1.5. GM fuel cell electrical vehicle.
Figure.1.6. Honda fuel cell electrical vehicle.
Figure.1.7. Honda fuel cell electrical vehicle.
Figure.1.8. Honda fuel cell electrical vehicle.
Figure.1.9. Honda fuel cell electrical vehicle.
The ECCE project is developed in cooperation with the FEMTO-ST laboratory of the University of Franche-Comté and two industrial partners, HELION and PANHARD General Defense. The Electrical Chain Components Evaluation vehicle (ECCE, see Figure.1.10) is a research project supported by the French Army General Direction (DGA, Direction Générale de l'Armement). The ECCE vehicle, driven for the first time in 2003, is presented in Figure.1.10.
Figure.1.10. ECCE test bed.
Figure.1.11. F-City H2 test bed.
The project develops and tests, under real conditions, two fleets of five vehicles for postal mail delivery. Consortium partner La Poste will run the field tests in close coordination with the other project partners involved. The partners are the University of Technology of Belfort-Montbéliard (UTBM), EIFER, and the companies LA POSTE, MA HY TEC, MES, H2NITIDOR, DUCATI energia and Steinbeis-Europa-Zentrum (SEZ). The hydrogen part of the drive train of the Mobypost vehicle has been mounted at UTBM (Belfort), while the vehicle in its full electrical version has been built by Ducati Energia (see Figure.1.12) [1.7].
Figure.1.13. Passive cascaded battery/UC system.
Figure.1.14. Active cascaded battery/UC system.

3.3. Parallel active battery/UC system
A parallel active battery/UC system, shown in Figure.1.15, has been analyzed by researchers in the Energy Harvesting and Renewable Energy Laboratory (EHREL) at the Illinois Institute of Technology (IIT), and by Solero at the University of Rome. The battery pack and the UC bank are connected to the dc link in parallel and interfaced by bidirectional converters. In this topology, both the battery and the UC present a lower voltage level than the dc-link voltage. The voltages of the battery and the UC are stepped up when the drive train demands power and stepped down under recharging conditions. The power flow directions in/out of the battery and the UC can be controlled separately, allowing flexibility for power management. However, if the two dc/dc converters can be integrated, the cost, size, and complexity of control can be reduced [1.9].
Figure.1.15. Parallel active battery/UC system.
Figure.1.16. Multiple-input battery/UC system.
Figure.1.17. Multiple-input battery/UC systems.

4. Components of the drive trains of FCEV:
The structure of a fuel cell electrical vehicle is similar to that of series-type hybrid vehicles. The fuel cell is the main energy source that produces electricity; fuel cells in vehicles create electricity to power the drive train through the battery, the DC/DC converter and the DC/AC inverter (see Figure.1.18) [1.10].
Figure.1.18. Fuel cell electrical vehicle.
Figure.1.19. (a) The electrolysis of water: the water is separated into hydrogen and oxygen by the passage of an electric current. (b) A small current flows.
Figure.1.20 summarizes the different fuel cell types and compares them through characteristics such as operating temperature, electrolyte charge carrier, and electrochemical reactions. The figure illustrates the relative placement of the different fuel cell technologies with regard to electric demand (kW). The residential market considered ranges from 1 kW to 10 kW and is related to PEM and SOFC. The commercial market ranges from 25 kW to 500 kW; examples of this market segment include hotels, schools, small to medium sized hospitals, office buildings, and shopping centers. MCFCs and SOFCs are the only types of fuel cell applied in both distributed power (3 MW to 100 MW) and industrial applications (1 MW to 25 MW) [1.10].
Figure.1.20. Market for fuel cell technologies.
Figure.1.21. Single cell structure of PEMFC.
Figure.1.22. Membrane electrode assembly.
Figure.1.23. Gas diffusion layers.
Figure.1.24. Bipolar plate.
The different sub-systems presented in Figure.1.25 are defined below:
Figure.1.25. System of the fuel cell [1.14].
Figure.1.26. DOE has reduced the cost of automotive fuel cells from $106/kW in 2006 to $55/kW in 2013 and is targeting a cost of $30/kW [1.17].
5.1. Accumulation dates methods
5.2. Membrane resistance measurement methods
5.3. Pressure drop method
Water management in PEMFC
6.2. Effect of operating conditions on water management (flooding and drying)
6.2.1. Humidity
6.3. Thermal management of PEM FC
6.4. Degradation of electrode/electrocatalyst
Figure.2.2.(a) and (b) show simplified schematics of the chemical reaction of the FC. In the anode and cathode parts, R_ionic represents the ionic resistance of the membrane, while R_CT,A and R_CT,C represent the charge transfer losses across the electrode-electrolyte interface at the anode and cathode sides respectively. The capacitors C_DL,A and C_DL,C represent the double layer effect at the anode and cathode sides respectively. The Randles FC model is depicted in Figure.2.2.(c), where R_W and C_W model the diffusion/mass transport losses. The modification on the cathode side to take the diffusion impedance into account is illustrated in Figure.2.2.(d) and Figure.2.2.(e). In general, electrochemical cells have been represented by the classic transmission line model of porous electrodes, which is shown in Figure.2.2.(f).
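As a reading aid, and assuming the usual series connection of the branches named above (this expression is a standard result, not reproduced from the original figure), the impedance of the Randles-type model of Figure.2.2.(c) can be written as:

    $Z(j\omega) = R_{ionic} + \dfrac{R_{CT,A}}{1 + j\omega R_{CT,A} C_{DL,A}} + \dfrac{R_{CT,C}}{1 + j\omega R_{CT,C} C_{DL,C}} + \dfrac{R_W}{1 + j\omega R_W C_W}$

Each parallel R-C branch produces one arc in the Nyquist plot, which is what the EIS-based characterization discussed later in this chapter exploits.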
Figure.2.1. Dynamic electrical circuit model of the PEMFC.
Figure.2.2. Equivalent electrical circuit model of the PEM FC.
The total charge transferred per mole of hydrogen consumed is (Eq. 2.4):

    $q = n \cdot N_{Avg} \cdot q_{el}$

where n = number of electrons per molecule of H2 = 2 electrons per molecule; N_Avg = number of molecules per mole (Avogadro's number) = 6.022·10^23 molecules/mol; and q_el = charge of 1 electron = 1.602·10^-19 coulombs/electron. The product of Avogadro's number and the charge of 1 electron is known as Faraday's constant: F = 96,485 coulombs/electron-mol. The electrical work is therefore (Eq. 2.5):

    $W_{el} = q \cdot E = n F E$

where E is the cell potential.
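As a numerical illustration (added here; the figures are standard values, not taken from this chapter's measurements): with n = 2 and the theoretical cell potential E = 1.23 V,

    $W_{el} = nFE = 2 \times 96\,485 \times 1.23 \approx 237\ \mathrm{kJ/mol}$

which corresponds to the Gibbs free energy released by the hydrogen-oxygen reaction at 25 °C with liquid water as product.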
Figure.2.3. Cell potential loss at different temperatures.
Based on the equation above, the cell potential rises as the pressure increases (Figure.2.4).
Figure.2.4. Cell potential losses at different pressures.
Figure.2.6 shows the experimental voltage plotted as a function of current for a PEM fuel cell at a temperature of 23 °C. The polarization curve shows the drop of the cell voltage as a function of the output current. Even when the fuel cell carries no load (open circuit), its voltage is less than the theoretical potential of about one volt, because some unavoidable losses are generated in the fuel cell: 1) activation losses; 2) internal and ionic resistance; 3) concentration losses; 4) internal currents; 5) crossover of reactants. A sketch combining these losses is given after this list.
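These losses can be combined into the classical static polarization model $V = E_{oc} - A\ln(i/i_0) - iR - m\,e^{n_c i}$; the MATLAB sketch below plots it with plausible, purely illustrative parameter values (none of them are measured values from this work):

    % Static polarization curve of a PEM cell (illustrative parameter values only)
    Eoc = 1.0;                    % open-circuit voltage [V] (assumed)
    A   = 0.06;                   % Tafel slope [V] (assumed)
    i0  = 1e-4;                   % exchange current density [A/cm^2] (assumed)
    R   = 0.2;                    % area-specific ohmic resistance [ohm*cm^2] (assumed)
    m   = 3e-5;  n_c = 8;         % mass-transport constants (assumed)
    i   = linspace(0.01, 1.2, 200);                 % current density [A/cm^2]
    V   = Eoc - A*log(i./i0) - R*i - m*exp(n_c*i);  % cell voltage with all losses
    plot(i, V), grid on
    xlabel('Current density (A/cm^2)'), ylabel('Cell voltage (V)')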
Figure.2.6. Polarization curve for a cell of a PEM fuel cell.
Figure.2.7. The charge double layer at the surface of a fuel cell.

The fuel cell equivalent circuit model consists of: the open circuit voltage (OCV, E_nernst), the ohmic losses (R_ohm), the activation losses (R_act), the concentration losses and the double layer capacitance (C_dl). The delayed current response of the FC is due to the effect of the double layer capacitance on both the cathode and anode sides; on the cathode side it is more important than on the anode side. The ohmic losses are not affected by this capacitance. In Figure.2.7, the capacitance is placed in parallel with the activation and concentration resistances, and the resulting voltage drop gives the FC a dynamic behaviour. The dynamic equation of the FC voltage takes the standard form [2.23] (Eq. 2.50):

    $V_{fc} = E_{nernst} - v_d - i\,R_{ohm}, \qquad \dfrac{dv_d}{dt} = \dfrac{i}{C_{dl}} - \dfrac{v_d}{C_{dl}\,(R_{act} + R_{conc})}$

where $v_d$ is the voltage across the parallel combination of $C_{dl}$ and the activation/concentration resistances.
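A minimal numerical sketch of this dynamic behaviour, a forward-Euler integration of the double-layer equation above with assumed parameter values, could look as follows in MATLAB:

    % Dynamic response of the cell voltage to a current step (illustrative values)
    Enernst = 1.0; Rohm = 0.01; Ract = 0.03; Rconc = 0.01; Cdl = 3;  % assumed
    dt = 1e-3; t = 0:dt:2;                 % time grid [s]
    I  = 10*(t >= 0.5);                    % 10 A current step applied at t = 0.5 s
    vd = zeros(size(t));                   % voltage across the double-layer branch
    for k = 1:numel(t)-1
        dvd     = I(k)/Cdl - vd(k)/(Cdl*(Ract + Rconc));  % Eq. (2.50) dynamics
        vd(k+1) = vd(k) + dt*dvd;          % forward-Euler update
    end
    Vfc = Enernst - vd - Rohm*I;           % static drop plus dynamic drop
    plot(t, Vfc), xlabel('Time (s)'), ylabel('Cell voltage (V)')

The exponential settling of Vfc after the step reproduces the delay attributed to the double layer in the text.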
Figure.2.8. Polarization curve with different losses.
Figure.2.9. Block diagram of the multi-physical modeling of FC.
Figure.2.10. Activation losses at different temperatures.
Figure.2.11. Resistive loss in FC at different temperatures.
Figure.2.12. Concentration losses in FC at different temperatures.
Figure.2.13 shows the effects of increasing the temperature between 25 °C and 55 °C through different polarization curves. The voltage clearly increases with increasing temperature. It should be noted, however, that an excessive rise of the internal temperature of the FC reduces its performance and causes irreversible damage to the FC [2.18].
Figure.2.13. Cell voltage losses at different temperatures.
Figure.2.14 allows pointing out the effect of different pressures on the activation voltage losses.
Figure.2.14. Activation losses in FC at different pressures.
Figure.2.15. Concentration losses in FC according to changing pressures.
Figure.2.16. Voltage losses in FC at different pressures.
Figure.2.17 shows the ohmic resistance for different values of the relative humidity: an increase in the relative humidity causes a decrease in the ohmic resistance, because the conductivity of the membrane is closely linked to the RH.
Figure.2.17. Resistive losses at different humidity.
Figure.2.18. Cell voltage losses at different humidity.
The relative humidity does not affect these losses. Notation: I = fuel cell current in amperes; T = temperature of the fuel cell stack in kelvin; P_H2 = partial pressure of H2 in atm; P_O2 = partial pressure of O2 in atm; C_H2 = concentration of hydrogen; C_O2 = concentration of oxygen; RH = relative humidity of hydrogen or air [2.28].
Figure.2.19. Fault action depending on the system.
Figure.2.20. Scheme for fault-tolerance strategies.
Figure.2.21. Model-based fault diagnosis diagram.

In contrast, non-model-based diagnosis performs fault detection and isolation according to human knowledge or qualitative reasoning techniques based on input and output data. There are three categories of non-model-based diagnosis methods:
• artificial intelligence methods (neural networks, fuzzy logic and neuro-fuzzy methods),
• statistical methods (principal component analysis, Fisher discriminant analysis, kernel PCA and kernel FDA),
• signal processing methods (fast Fourier transform, short-time Fourier transform and wavelet transform).
Figure.2.22. Schematic representation of EIS applied to fuel cell characterization.
Figure.2.23. Circuit models according to EIS [2.43].
Figure.2.24. Bode plot of the impedance spectra simulated in the frequency range from 10 mHz to 10 kHz.
Figure.2.25. Original arrangement for the HFR and EIS measurement techniques [2.40].
Figure.2.26. AC resistance measurement diagram with the load connected in parallel with a milliohm meter.
Many different phenomena are involved in the operation of a fuel cell. Some of these phenomena are common sources of faults in the FC: specifically, improper water management (flooding, drying) [2.42], catalyst degradation, fuel starvation, and membrane electrode assembly (MEA) contamination [2.43]. These faults cause voltage drops and reduce the lifetime of a fuel cell. A typical fault classification method is described in Figure.2.27. This figure shows a simplified scheme for process fault classification with several levels of information processing. The lower level contains the processed data, i.e. the data systematically collected by the sensors. The extraction of features separating healthy mode from faulty mode can be attached to a medium level. Fault classification is located at the high level in order to distinguish the different faults in the system [2.36].
Figure.2.27. Fault classification process in PEM FC.
Figure.2.28. Overview of the wide range of dynamic processes in FC [2.44].
According to Nadia (2008), an increase in temperature leads to an increase in the saturation pressure and causes evaporation. As a matter of fact, flooding is reduced when the liquid water diminishes. He et al. investigated another operating condition in which the air flow and the cell voltage are kept constant: increasing the temperature from 40 °C to 50 °C improved the flooding in the cell [2.40].
Figure.2.29. Multilayer feed-forward neural network.
Description of the modelled FC cell
2.2. Description of the 3D model applied on one cell
Calibration of the 3D model in healthy mode
2.10. Network circuit analysis
3. The 3D model applied to one stack
3.1. Considerations on the 3D model calibration
3.2. Calibration of the 3D model of the FC stack (two cells)
Flooding at anode side
4.3. Drying in membrane
4.4. Simulation of faulty mode examples
5. Conclusion
Fluidic domain:
5) Biphasic effect of liquid and vapour of water
6) Water condensation/evaporation
7) Gas diffusion in the diffusion layer
8) Diffusion layer flooding
9) Microscopic gas diffusion in the catalyst layer
10) Non-uniform water distribution in the membrane
11) Water transport in the membrane
12) Dynamic water content variation in the membrane

Thermal domain:
1) Non-isothermal temperature distribution
2) Dynamic temperature variation
3) Conduction between solid materials
4) Forced convection in the channel
5) Heat flux due to convective mass transport
6) Natural convection on external surfaces
7) Latent heat due to water phase change
Figure.3.1. Single cell MES PEMFC with different layers.
Figure.3.2. Single cell PEMFC based on elementary cells in three dimensions (3D).
4) The thermal mode and the temperature distributions are included in it.
5) Voltage distributions are recorded based on experimental tests.
6) This model is able to reproduce inhomogeneous distributions of the physical parameters.
7) Possibility of fault characterization.

Before being used for characterizing the FC cell faults, the 3D model needs to be calibrated. The so-called calibration consists in computing all the physical components of the circuit shown in Figure.3.3. It is performed starting from voltage and temperature measurements by means of the Newton-Raphson method; a generic sketch of such a Newton-Raphson step is given after the figure caption below. The operations of calibration as well as the fault characterization are summarized in the algorithm shown in Figure.3.3.
Figure.3.3. Algorithm of calibration and use of the 3D model.
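The Newton-Raphson calibration step itself is not reproduced in this extract. A generic MATLAB sketch (the residual function r, the tolerances and the finite-difference Jacobian are illustrative assumptions, not the thesis implementation) could be:

    % Generic Newton-Raphson iteration for calibrating a parameter vector p
    % so that the model residual r(p) = Vmodel(p) - Vmeasured vanishes.
    function p = newton_calibrate(r, p0, tol, maxit)
        p = p0;
        for it = 1:maxit
            f  = r(p);
            J  = fd_jacobian(r, p, f);      % finite-difference Jacobian (below)
            dp = -J\f;                      % Newton step
            p  = p + dp;
            if norm(dp) < tol, return; end  % converged
        end
    end

    function J = fd_jacobian(r, p, f0)
        n = numel(p); J = zeros(numel(f0), n); h = 1e-6;
        for j = 1:n
            e = zeros(n,1); e(j) = h;
            J(:,j) = (r(p + e) - f0)/h;     % forward-difference column j
        end
    end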
3) Voltage sensors and thermocouples are installed in 9 nodes of each cell (N1-N9).
4) Only the magnitudes of the impedances are considered; thus the FC fault can be characterized only through the magnitudes of the voltage and the current density.

The cell voltage does not have the same value at different points of the cell, for the following reasons:
1) non-uniform fuel/air flow distribution to the individual cells,
2) non-uniform temperature,
3) current distribution,
4) non-uniformities of the material (composition and microstructure) [3.8].
Figure.3.4 shows the top view of the basic electrical model of one FC cell, including the cathode, the anode and the membrane sides. R_ohm represents the resistance of the membrane; R_Act(a) and R_Act(c) represent the activation losses on the anode and cathode sides respectively; R_Con(a) and R_Con(c) represent the concentration losses on the anode and cathode sides respectively. The capacitors C(a) and C(c) represent the double layer capacitors present at the anode and cathode. However, as mentioned before in the modeling hypotheses, the activation on the anode side is negligible compared to the cathode side. R_con is taken as the sum of the concentration losses on the anode and cathode sides (R_con = R_con(a) + R_con(c)), and the capacitor C is the equivalent of the two capacitors on the anode and cathode sides.
Figure.3.4. Top view of the 3D electric model of one cell of a PEMFC.
Figure.3.6. View of the interface resistors (z axis) between two FC cells.
Figure.3.7. Stack temperature according to load current in the MES fuel cell.
Figure.3.8. Simulation result of the polarization curve for different double layer effects for the MES PEMFC.
Figure.3.9. Simulation models of the sensitive mode.

As this model is to be used for accurate fault diagnosis in the PEMFC, we suggest dividing each cell of the FC stack into different elementary cells. The temperature can be taken into account by adopting a different equivalent circuit for each elementary cell. The magnitude of the decrease in voltage, called the voltage variance, is associated with changes in the fuel cell model parameters, which include the open-circuit voltage, the losses on the anode side (R_a), the losses on the cathode side (R_c), the double layer capacitance (C_dl) at anode and cathode, and the membrane losses (R_o).
Figure.3.10. Polarization curve of the MES fuel cell.
Figure.3.11. The electric circuit model for the 3D representation of the PEMFC.
The proposed 3D fault-sensitive model considers the distributions of temperature and voltage in the X, Y and Z directions. Three illustrative views of this model are shown in Figure.3.12, Figure.3.13 and Figure.3.14.
Figure.3.12. Front view of the 3D proposed model for the PEMFC stack.
The first step (Figure.3.12) is to determine all the impedances in each individual cell. Then the connection resistors between the cells are calculated from the known current density distributions. A set of equations can be established in the form $[Y]\,[V] = [I]$ for the two cells; a toy example of such a nodal system follows.
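For illustration only, a small 3-node version of this system can be solved in MATLAB in one line; the admittance and current values below are arbitrary and merely demonstrate the $[Y][V]=[I]$ form:

    % Toy nodal analysis: [Y][V] = [I] for a small resistor network (illustrative)
    Y = [ 3 -1 -1;      % node admittance matrix [S], assumed values
         -1  2 -1;
         -1 -1  3];
    I = [1; 0; -1];     % injected node currents [A], assumed
    V = Y\I;            % node voltages [V], direct sparse/dense solve
    disp(V)

In the full model the same solve is simply carried out with the much larger admittance matrix assembled from the elementary-cell impedances.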
Figure.3.15. Simplified model in three dimensions (3D) for two cells.
Figure.3.16. Relative humidity according to the stack temperature of the exit air of the FC with an air stoichiometry of 2.
Figure.3.17. Fault diagram according to the operating conditions of the FC.
Figure.3.18. MVV and HDR variations according to resistance changes in the Z direction.
Figure.3.19 to Figure.3.21 give some examples of this characterization process, in which variations of the impedances of different branches of the 9 zones of the circuit model are assumed. The X, Y and cross-section directions have been considered in these simulations. The significant point shown in these figures is that the voltage characteristic at the output of the cell is affected by changing the impedance value. In this example the resistance in the X, Y and cross-section directions is increased to simulate a drying fault. In other words, increasing the impedance in one of the directions changes the current density distribution in the whole FC, which corresponds to drying or flooding faults occurring in the FC.
Figure.3.19. MVV and HDR variations according to resistance changes in the X direction.
Figure.3.20. MVV and HDR variations according to resistance changes in the XY direction.
Figure.3.21. MVV and HDR variations according to resistance changes in the Y direction.
Chapter IV
1. Introduction
2. Single cells set-up
2.1. Gas supply description
2.2. The physical references of the MEAs
2.3. Description of the test bench
2.4. Single PEMFC cell ready for tests
2.5. Voltage sensors chosen for measurements
Figure.4.1. Reactant air management with two parallel inlet pieces for the oxygen feeding part.
Figure.4.2 shows the present design of the stainless-steel plate. The dimensions of the plate are 80×57×9 mm. Two holes are identical to those considered for the oxygen flow. The desired physical properties of a material that can be used for the oxygen feeding side are a high mechanical strength and an excellent stability against water corrosion.
Figure.4.2. Oxygen plate feeding.
Figure.4.3. Components of the single PEM cell.
Figure.4.4. Structure of the test bench.
The control hardware is shown in Figure.4.5; the supervision application is developed in the LabVIEW® environment.
Figure.4.5. Hardware of control.
The interface panel (Figure.4.6) allows checking the various available measurements and sensor states. For instance, the displayed measurements are: single cell voltages, gas flows and pressures, and current. The LEDs indicate in which mode the system operates [4.2].
Figure.4.6. Interface panel.
In the settings panel (Figure.4.7), some fuel cell parameters are displayed, such as: the number of cells, the anode and cathode stoichiometry, the active surface, the fault and safety thresholds on the cell voltages, the maximum temperature, and the maximum pressure difference. These parameters can be changed while the program is running when it is necessary to modify some factors, for example the stoichiometry factors, which are set according to the data sheet of the manufacturer. Some parameters should be kept within a given range to avoid irreversible damage: the voltage across each cell of the stack should remain greater than a given threshold, and a sufficient amount of gas should be provided to the bipolar plates according to the load current [4.2].
Figure.4.7. Panel for setting the type of fuel cell under test.
Figure.4.8. Test bench structure.
Figure.4.9. Electronic load for simulating the driving cycle.
Thermocouples of type K (Figure.4.11) are used in the present experimental work due to their tolerance of a wide range of temperatures and their availability at a low price in comparison with other types, as indicated in Figure.4.12.
Figure.4.11. Thermocouple type K.
Figure.4.12. The relationship between the Seebeck voltage and the temperature.
In Table.4.7 the errors are calculated by assuming that the reference thermocouple provides acceptable reference temperature values. The possible sources of error include compensation, linearization, thermocouple wire, and experimental errors.
Figure.4.13. DPI 620 advanced measurement device.
Figure.4.14. Process of the calibration of thermocouples with the reference thermocouple.
Figure.4.15. The Cannes Pyrométriques type 14 reference thermocouple.
Figure.4.16 shows the temperature measurements recorded by the thermocouples at different temperatures and compares them with the reference thermocouple. As illustrated in this figure, the errors between the different thermocouples are related linearly. Moreover, the error between the thermocouples and the reference thermocouple increases with the temperature of the BINDER incubator. This error can be relied upon in the test analysis and in the comparison of temperature distribution measurements.
Figure.4.16. Comparison of the reference and 12 thermocouples at different temperatures.
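One simple way to exploit this linear error relationship is to fit a first-order correction of each thermocouple against the reference; the MATLAB sketch below uses made-up readings (polyfit/polyval are standard functions):

    % Linear calibration of one thermocouple against the reference (illustrative data)
    Tref = [20 30 40 50 60];            % reference thermocouple readings [degC], assumed
    Ttc  = [20.4 30.7 41.1 51.4 61.8];  % thermocouple under calibration [degC], assumed
    c = polyfit(Ttc, Tref, 1);          % first-order correction coefficients
    Tcorr = polyval(c, Ttc);            % corrected readings
    fprintf('max residual error: %.3f degC\n', max(abs(Tcorr - Tref)));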
The voltage sensors are directly connected to the graphite block on the cathode and anode sides (Figure.4.17). During the test, a voltage acquisition device from National Instruments measures two sets of quantities: first, the individual cell voltages; second, the thermocouples continuously capture the individual cell temperatures (see Figure.4.18).
Figure.4.17. Voltage sensors directly connected to the graphite block on the cathode and anode sides.
Figure.4.19. Set-up for measuring the temperature distribution in the PEMFC (MES).
Figure.4.20. Boundary limitation for the choice of acceptable sensors.
Figure.4.21. Schematic of one cell with thermocouple and voltage sensors.
Figure.4.22. Test conditions on load current and oxygen stoichiometry.
Figure.4.23 shows the schematic diagram for measuring the voltage inside the cell. The oxidant gas is heated and humidified by passing through the boiler designed into the test bench. In the present study, each temperature measurement is collected by the data acquisition system with a sampling rate of one reading per second. These measurements are investigated over the intervals where the temperature is constant after changing the experimental conditions. This procedure is repeated for different current loads and different air and hydrogen stoichiometry ratios.
Figure.4.23. Set-up for measuring the voltage in the PEMFC (MES).
Figure.4.24. Temperature distribution for the cathode side (O2 stoichiometry ratios of 3, 5 and 7; current load 5 A; H2 stoichiometry of 1.5).
Figure.4.25 shows this temperature difference between the anode and cathode sides. In this figure the temperature measurements are performed for three different current loads of 5 A, 10 A and 15 A. The oxygen stoichiometry ratio is varied from 3 to 6 while the hydrogen one is fixed at 2. The oxygen side is humidified with a boiler. Here also, it is clearly seen that the temperature of the anode side is lower than that of the cathode side by more than 1 °C. The figure also shows that the local temperature difference between the anode and cathode increases with increasing oxygen stoichiometry; this can be explained by the fact that, with increasing stoichiometry, the temperature on the cathode side decreases.
Figure.4.25. Temperature measurement for various load currents (O2 stoichiometry ratios of 3, 4, 5 and 6; H2 stoichiometry of 2).
Figure.4.26. Schematic of the positions of the thermocouples in the cell.
Figure.4.27. Temperatures measured for various load currents of 5 A, 10 A and 15 A (stoichiometry of 3 for O2, 2 for H2).
Figure.4.28. Local temperature distributions along the cell at the cathode side (operating condition: stoichiometry of 3, 4 and 5 for O2, 2 for H2).
Figure.4.28 and Figure.4.29 show the sensitivity of the temperature variation along the y axis and its effect on the voltage profile of the cell. Based on these figures, the temperatures in the middle and at the inlet are lower than the temperature at the outlet. This can be explained by the fact that the increase of the membrane hydration and of the oxygen rate at the outlet may increase the temperature there. Figure.4.29 shows an example used to verify the sensitivity of the temperature distribution along the y axis. When the current load increases, the global voltage decreases, and the voltages at the inlet, middle and outlet decrease as well. Generally, the voltage decrease (or increase) is directly related to the temperature and the current density in each region of the cell.
Figure.4.29. Voltage distributions along the cell (operating condition: O2 stoichiometry of 3, 4 and 5; H2 of 1.5).
Figure.4.31 shows that the temperature reaches its highest values at the middle point of the cell, with an oxygen stoichiometry of 3, a hydrogen stoichiometry of 1.5 and a current of 15 A [Hissel et al., "A Review on Existing Modeling Methodologies for PEM Fuel Cell Systems"]. The figure indicates that the temperatures are distributed irregularly along the y axis, especially at high current load. Also, the temperature value at a hydrogen stoichiometry of 1.5 is less than the temperature value at a stoichiometry of 2 (comparing with Figure.4.30).
Figure.4.30. Local temperature distributions along the two axes (x and y) at the cathode side for various load currents (O2 stoichiometry of 3, 4 and 5; H2 of 2).
Figure.4.31. Local temperature distributions along the two axes (x and y) in the PEMFC at the cathode side for various load currents (O2 stoichiometry of 3, 4 and 5; H2 of 1.5).
Figure.4.32. Temperature distribution over the cell along the x axis for various load currents (O2 stoichiometry of 3, 4 and 5; H2 of 1.5).
Figure.4.33 simply shows a schematic model of the single cell divided into different regions along the x and y axes. To study the temperature distribution in the single cell, the cell is divided into three parts along the y axis (middle, left and right sides) and three parts along the x axis (inlet, middle and outlet). The temperature measurements are obtained by means of 12 thermocouples placed along the y axis of the cell in the middle, left and right sides of the cell (x axis).
Figure.4.33. Schematic of the PEM fuel cell along the x and y axes in the different regions (inlet, middle and outlet).
Figure.4.34 and Figure.4.35 show the effect of temperature along the x and y axes of the cell with different O2 stoichiometry ratios of 3, 4 and 5 at different current loads of 5 A, 10 A and 15 A. The mean values of the temperature measurements are computed from the four nearest thermocouples at the inlet, middle and outlet of each region (left, middle and right side of the cell). By calculating four mean values, three points are obtained for each region. The x and y axes of the cell represent the effect of temperature for the different current loads of 5 A, 10 A and 15 A (see Figure.4.34) and oxygen stoichiometry values between 3 and 5 with a fixed hydrogen stoichiometry of 1.5.
Figure.4.34. Temperature distribution over the cell with different current loads of 5 A, 10 A and 15 A (O2 stoichiometry of 3, 4 and 5; H2 of 1.5).
Figure.4.35. Temperature distribution over the cell with different current loads of 5 A, 10 A and 15 A (O2 stoichiometry of 3, 4 and 5; H2 of 2).
Figure.4.36 shows a summary of the temperature distribution results. It indicates that the temperature at the middle of the cell has the highest values (along the x axis). Nevertheless, the highest values of the temperature along the y axis are located at the outlet. This result can be explained by the convective heat transfer of the air flow passing through the oxygen channel, as explained before.
Figure.4.36. Temperature distributions along the y axis.
Voltage measurements and temperature distributions are carried out in order to determine the relationship between the voltage and the temperature distribution inside the cell. Concerning this relation in the single cell, Figure.4.37 shows that, on the cathode side, the voltage has its highest values near the left of the outlet. The values then decrease gradually and reach their lowest values at the right corner and the middle of the cell. This is in total agreement with what was explained before concerning the temperature distribution (Figure.4.35). Consequently, the cell voltage at the outlet is greater than at the inlet of the cell. In fact, Figure.4.37 shows that, when the stoichiometry gets higher, the outlet has the highest voltage values compared with the other regions. The voltage varies between 5 and 10 mV once the temperature becomes stabilized at the inlet, center and outlet of the middle of the cell. Finally, at the left and right edges of the cell, the voltage reaches the value of 0.2 V.
Figure.4.37. Voltage distribution over the cell for different current loads of 5 A, 10 A and 15 A (O2 stoichiometry of 3, 4 and 5; H2 of 1.5).
Figure.4.38. Validation of simulation and experimental test for one cell.
Figure.4.39 shows the experimental set-up for measuring the distribution of the local temperatures in two cells. The temperature increases along the y axis of each cell from 27 °C to 45 °C as the cell current changes from 5 A to 15 A. Examining the temperature distribution along the z axis, it can be seen that the temperature of cell two is somewhat higher than that of cell one; a difference of 1 °C to 2 °C is observed.
Figure.4.40. Voltage measurements on the cathode side of the PEMFC (O2 stoichiometry of 3, 4 and 5; H2 of 2).
Figure.4.41. Validation of simulation and experimental test for two cells.
Figure.4.42. The MES FC system used for the validating tests.
Figure.4.43. Test bench of the fuel cell in the climatic chamber.

In order to improve the accuracy of the stack temperature measurements of the fuel cell, three thermocouples have been used in this test. Their positions have been selected at critical points of the fuel cell as depicted in Figure.4.44 (the inlet of hydrogen, the outlet of oxygen, and the middle near the reaction air inlet channel).
Figure.4.44. Position of the thermal sensors.
Figure.4.45. Current dynamic profile of the FC under test.
Figure.4.46. Comparison of the analytical thermal equation with three measurement points in the experimental test.
Figure.4.47. The cell voltage versus current for the MES PEMFC.
Figure.4.48. Comparison of the voltage between experimental test and simulation according to the dynamic load profile.
Figure.4.51 compares the model with the experimental results in both the electrical and thermal domains. As can be observed, the multi-physical model gives results in good agreement with the experimental ones, despite some errors (voltage peaks) due to the periodic purges of the FC, which are not taken into account in the model. Indeed, in order to eliminate the water and impurities on the hydrogen side (anode) during operation, the H2 purging valve is opened periodically. The H2 purge function is visible in Figure.4.51 as drops of the FC stack voltage. These results allow validating the proposed model in normal (healthy) operating mode. As illustrated in Figure.4.52, the stack voltage of the fuel cell improves with increasing outside temperature. This effect has been observed by placing the fuel cell in a climatic chamber, as shown in Figure.4.49, and measuring the polarization curves.
Figure.4.49. Test bench of experimental tests on the FC: (a) laboratory, (b) climatic chamber.
Figure.4.50. Load profile.
Figure.4.52. Comparison of polarization curves for different temperatures.
Figure.4.54. Thermal impact on the PEMFC stack.
Figure.4.55. Variation of temperature and voltage for different currents in the fuel cell stack.
Figure.4.56. Schematic of the positions of the voltage and temperature sensors in the PEMFC stack.
Figure.4.57. Experimental results of the temperature disparity at different positions of the PEMFC.
Chapter V
2.1. Fast Fourier Transform (FFT)
2.2. Modelling method for on-line FC diagnosis
2.3. ANN-based 3D fault classification in the PEMFC single cell
3. ANN fault classification in a stack of 2 cells
3.1. Fault classification in the stack
3.2. ANN-based fault classification of drying and flooding in one cell
This level of diagnosis represents the vehicle. It includes all the external systems without diagnosis that can immobilize the vehicle. However, the simplicity of these systems from a technical point of view makes their faults easily detectable by a user of the vehicle. Furthermore, automatic supervision is not necessary at this level of diagnosis; it can be done by a visual inspection only (see Figure.5.1) [5.3].
Figure.5.1. Illustration of the "Level 0" of the vehicle diagnosis.
Figure.5.2. Illustration of the "Level 1" of the vehicle diagnosis.
Level 2 of the diagnosis deals with the faulty main components of the vehicle indicated at level 1. The subsystem considered in the present work is the PEMFC. The aim of level 2 is to locate the faults within this subsystem (see Figure.5.3).
Figure.5.3. Illustration of the "Level 3" of the vehicle diagnosis.
Figure.5.4 shows a synoptic representation of the method followed for on-line diagnosis modeling of the PEMFC. Since the aim of this work is fault detection, FFT analysis has been used for fault characterization through proper patterns, which are then used for training the ANN model for on-line diagnosis. In the next section, a comprehensive explanation of the structure of the ANN is presented, as well as a detailed explanation of the diagnosis of each of the faults mentioned previously.
Figure.5.4. Implementation algorithm for fault diagnosis of the PEMFC in the power train.
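As an illustration of the FFT-based pattern extraction feeding the ANN (the sampling rate, the synthetic signal and the number of harmonics are assumptions for the sketch, not the thesis settings):

    % FFT-based feature extraction for ANN fault classification (illustrative)
    fs = 1000;                           % sampling frequency [Hz], assumed
    t  = (0:1023)/fs;
    v  = 0.65 + 0.02*sin(2*pi*50*t) + 0.005*randn(size(t));  % synthetic cell voltage
    Vf = abs(fft(v - mean(v)))/numel(v); % magnitude spectrum with dc component removed
    features = Vf(2:8);                  % 7 low-order harmonic attributes for the ANN

In the diagnosis chain, one such feature vector is computed per measurement window and handed to the trained classifier.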
The current density distributions computed with the Newton-Raphson procedure under the different faults are shown in the corresponding tables, in which the effects of the faults on each node are marked with red rectangular boxes. It can be observed that the local current density has its highest values for the too-flood fault and decreases for the too-dry fault. Flooding faults cause the current density to increase, because the ohmic resistance decreases suddenly; drying, on the contrary, makes the current density decrease, because the ohmic resistance increases. It is seen that the current density distributions at low current (5 A) form homogeneous patterns. These patterns change with increasing load current, and the current density becomes unevenly distributed. The significant point in these figures is that the local current density decreases very slowly from the inlet to the outlet; however, these variations have different characteristics according to each fault. For instance, as explained before, the current density distribution at each node changes according to the variation of the operating conditions. The local current density values increase noticeably under flooding faults at each node. Hence, the interval impedances of the cell in the different directions are modified by the new current density distribution. Consequently, fault isolation based on the operating conditions and the current density distributions is feasible in the 3D sensitive model.
Figure.5.5 and Figure.5.6 show the fault detection in the cells based on Table.5.2. In these figures the confusion matrix shows the percentage of correct and incorrect classifications; correct classifications are the green squares on the matrix diagonal. The 7 harmonic attributes are used as inputs to the neural network, and the respective target for each input is one of 8 classes. The results indicate the success of the ANN classification of all 8 classes in the two cells: flooding, too-flood, drying and too-dry faults.
Figure.5.5. Fault isolation of drying, flooding, too-dry and too-flood faults in two cells.

For this work, the training data indicate a good fit. The validation and test results also show R values greater than 0.99.
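A minimal MATLAB (Neural Network Toolbox) sketch of such a classifier is given below; the feature matrix X (7 harmonic attributes per column) and the one-hot target matrix T (8 fault classes) are assumed to be available:

    % Train a pattern-recognition ANN and inspect the confusion matrix (sketch)
    % X: 7-by-M matrix of harmonic attributes, T: 8-by-M one-hot class targets
    net = patternnet(10);                 % one hidden layer with 10 neurons, assumed size
    net.divideParam.trainRatio = 0.70;    % 70/15/15 split as described in the text
    net.divideParam.valRatio   = 0.15;
    net.divideParam.testRatio  = 0.15;
    net = train(net, X, T);
    Y = net(X);                           % network outputs for all samples
    plotconfusion(T, Y);                  % green diagonal squares = correct classes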
Figure.5.6. Regression plot for the different faults in two cells.
Figure.5.7 to Figure.5.10 show the ANN training results for the 9 nodes in the different faulty modes. The results indicate the success of the ANN classification over all 9 nodes for the flooding and drying faults.
It is noted that each time a neural network is trained, it can converge to a different solution due to different initial weight and bias values and different divisions of the data into training, validation, and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, the network is retrained several times. After retraining more than 10 times, the best results are recorded and are shown in Figure.5.7 to Figure.5.10.
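The retraining procedure described above can be automated by keeping the network with the best validation performance; a sketch with the same assumed X and T as before:

    % Retrain several times and keep the best-performing network (sketch)
    best = inf;
    for k = 1:10
        net = patternnet(10);
        [net, tr] = train(net, X, T);     % random initialization and data division
        if tr.best_vperf < best           % compare best validation performance
            best = tr.best_vperf; bestNet = net;
        end
    end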
Figure.5.7. Fault isolation for drying faults according to the nine nodes in one cell.
Figure.5.9. Fault isolation for too-flood faults according to the nine nodes in one cell.
One way of validating the network is to create a regression plot, which shows the relationship between the outputs of the network and the targets. If the training were perfect, the network outputs and the targets would be exactly equal, but the relationship is rarely perfect in practice. In this case a regression plot can be created with a few commands: the first commands calculate the trained network response to the 70% of the inputs used for training, while 15% of the data set is used for validation and 15% for testing the network. The result is shown in the following figures. The three axes represent the training, validation and testing data. The dashed line in each axis represents the perfect result, outputs = targets (classification into 9 classes). The solid line represents the best-fit linear regression line between outputs and targets. The R value is an indication of the relationship between the outputs and targets: if R = 1, there is an exact linear relationship between outputs and targets; if R is close to zero, there is no linear relationship. For this work, the training data indicate a good fit, and the validation and test results also show R values greater than 0.9. The response is acceptable for implementing the trained ANN on the experimental test results.
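The commands referred to in the text are not reproduced in this extract; regression plots of this kind can be generated along the following lines (bestNet, X and T as in the previous sketches):

    % Regression plot of network outputs versus targets (sketch)
    Y = bestNet(X);                       % trained network response to the inputs
    plotregression(T, Y, 'All data');     % R close to 1 indicates a good fit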
Figure.5.11. Regression plot for flooding faults according to the nine nodes in one cell.
Figure.5.12. Regression plot for too-flooding faults according to the nine nodes in one cell.
Figure.5.13. Regression plot for too-drying faults according to the nine nodes in one cell.
Figure.5.14. Regression plot for drying faults according to the nine nodes in one cell.
Figure.3A.1. Diagram of building a 3D model of PEMFC in SIMULINK.
Figure.3A.2. Diagram of building a dynamic model of PEMFC.

In this figure, each elementary cell depends on the temperature, the pressure and the humidity. The calculation of the activation, concentration and ohmic losses, together with the reversible voltage (Nernst voltage), based on these parameters and the current densities, is necessary for finding the voltages. The MATLAB Simulink model for one elementary cell is given in Figure.3A.3; in this figure the voltage output is calculated by:
1) calculation of the pressure drop in the channel (subsystem 1 in Figure.3A.3);
2) the thermal domain, which as explained in chapter III only depends on the FC current (subsystem 2 in Figure.3A.3);
3) the dynamic model, which represents the double layer effect (subsystem 3 in Figure.3A.3);
4) the electrochemical reactions, which depend on the reversible voltage and the losses (activation, concentration and ohmic) (subsystem 4 in Figure.3A.3).
The corresponding results are shown in Figure.4A.1 and Figure.4A.2, respectively.
Figure.4.A.1. Temperature distribution for the cathode side (O2 stoichiometry ratios of 3, 5 and 7; current load 10 A; H2 stoichiometry of 1.5).
Figure.4.A.3. Temperature measured for various load currents on the anode side (O2 stoichiometry ratios of 3, 5 and 7; current load 15 A; H2 stoichiometry of 1.5 and 2).
Figure.4.A.4. Temperature measured for various load currents on the cathode side (O2 stoichiometry ratios of 3, 5 and 7; current load 15 A; H2 stoichiometry of 1.5 and 2).
Figure.4.A.5 and Figure.4.A.6 show the temperature distributions at different operating conditions and different current loads, with a stoichiometry of 3, 4 and 5 for oxygen and 2 for hydrogen.
Figure.4.A.5. Temperatures measured for various load currents of 5 A, 10 A and 15 A (O2 stoichiometry ratio of 4; current loads 5, 10 and 15 A; H2 stoichiometry of 2).
Figure.4.A.6. Temperatures measured for various load currents of 5 A, 10 A and 15 A (O2 stoichiometry ratio of 5; current loads 5, 10 and 15 A; H2 stoichiometry of 2).
Figure.4.A.7 shows the sensitivity of the temperature variation along the y axis and its effect on the voltage profile of the cell. The figure shows the temperature profile along the channel, from the cathode side of the PEMFC, at different current loads of 5 A, 10 A and 15 A. Based on these figures, the temperatures in the middle and at the inlet are lower than the temperature at the outlet. This can be explained by the fact that the increase of the membrane hydration and of the oxygen rate at the outlet may increase the temperature there. It can be noted in these figures that a sudden variation of the temperature (rapid decrease and increase) occurs at the 5th stoichiometry for the currents of 5 A and 10 A; the possible reason for this rapid variation is the water injection outside the cell. Another notable point is that at a current of 15 A the temperature at the inlet increases sharply while the temperature at the middle increases gradually; a possible reason is drying at the inlet of the single cell. Consequently, with increasing temperature, the cell voltage drops accordingly (as shown in Figure.4.A.8).
Figure.4.A.7. Local temperature distributions along the cell on the cathode side (operating condition: O2 stoichiometry ratios of 3, 4 and 5; current loads 5, 10 and 15 A; H2 stoichiometry of 1.5).
Figure.4.A.8. Local temperature distributions along the cell on the cathode side (operating condition: O2 stoichiometry ratios of 3, 4 and 5; current loads 5, 10 and 15 A; H2 stoichiometry of 2).
The neuron can be a weighted summer, denoted by j in Figure.5A.1, at whose output the net value of equation 5A.1 is presented:

    $S_i = \sum_{j=1}^{n} w_{ij}\, x_j + b_i \qquad \text{(Eq. 5A.1)}$

Figure.5A.1. Basic static artificial neuron (ith neuron).

The activation curve is shifted by b_i if the function f_i is plotted versus the so-called 'net' input to the neuron; this net input, which is mathematically the argument of the activation function, is the sum of the weighted inputs and the bias. However, it is also possible to use a model with n inputs together with the bias.
Figure.5A.3. Multilayer feed-forward neural network.
where the elements $f_{11} = f_{12} = \dots = f_{1k} = f_1$ are the activation functions of each of the k nodes, which have been assumed to be equal; $S^1$ is the net vector, $S^1 = [S_1, S_2, \dots, S_k]^T$, which contains the net inputs $S_1, S_2, \dots, S_k$ to neurons $1, 2, \dots, k$. Furthermore, $W^1$ is the weight matrix of the output layer, which due to the specified architecture must contain k rows and n columns; $w_{ij}$ is the weight from source node j to destination (recipient) node i, where $i = 1, 2, \dots, k$ and $j = 1, 2, \dots, n$; and $B^1$ is the bias vector of the single layer, $B^1 = [b_{11}, b_{12}, \dots, b_{1k}]^T$.
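Putting these definitions together, the input-output mapping of the single layer can be written compactly (a standard identity, restated here for completeness):

    $\mathbf{y} = f_1\!\left(S^1\right) = f_1\!\left(W^1 \mathbf{x} + B^1\right), \qquad S_i = \sum_{j=1}^{n} w_{ij}\, x_j + b_{1i}, \quad i = 1, \dots, k$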
Figure.5A.4. Schematic of a three-layer feed-forward ANN.
Figure.1.1. Energy power source from 1949-2011 ........................................................................................ Figure.1.2. Total consumption by sector, 2011. ............................................................................................
List of Figures
Chapter I:
).
Ԑ Electrical permittivity
δ Catalyst layer thickness
K_o Rate coefficient
Subscripts and superscripts
a Anode
c Cathode
CO2 Carbon dioxide
e− Electron
H+ Proton or hydrogen ion
H2 Hydrogen
O2 Oxygen
Pt Platinum
RD Forward reaction (reduction)
OX Backward reaction (oxidation)
Figure.1.10. ECCE test bed
Figure.1.11. F-City H2 test bed
Figure.1.12. Mobypost vehicle
Figure.1.13. Passive cascaded battery/UC system
Figure.1.14. Active cascaded battery/UC system
Figure.1.15. Parallel active battery/UC system
Figure.1.16. Multiple-input battery/UC system
Figure.1.17. Multiple-input battery/UC systems
Chapter II:
Figure.2.2. Equivalent electrical circuit model of the PEM FC
Figure.2.3. Cell potential loss at different temperatures
Figure.2.4. Cell potential losses at different pressures
Figure.2.5. Energy inputs and output for the FC as an energy conversion device
Figure.2.6. Polarization curve for a cell of a PEM fuel cell
Figure.2.7. The charge double layer at the surface of a fuel cell
Figure.2.8. Polarization curve with different losses
Figure.2.9. Block diagram of the multi-physical modeling of the FC
Figure.2.10. Activation losses at different temperatures
Figure.2.11. Resistive loss in the FC at different temperatures
Figure.2.12. Concentration losses in the FC at different temperatures
Figure.2.13. Cell voltage losses at different temperatures
Figure.2.14. Activation losses in the FC at different pressures
Figure.2.15. Concentration losses in the FC according to changing pressures
Figure.2.16. Voltage losses in the FC at different pressures
Figure.2.17. Resistive losses at different humidities
Figure.2.18. Cell voltage losses at different humidities
Figure.2.19. Fault action depending on the system
Figure.2.20. Scheme for fault-tolerance strategies
Figure.2.21. Model-based fault diagnosis diagram
Figure.2.22. Schematic representation of EIS applied to fuel cell characterization
Figure.2.23. Circuit models according to EIS [2.43]
Figure.2.24. Bode plot of the impedance spectra simulated in the frequency range from 10 mHz to 10 kHz
Figure.2.25. Original arrangement for the HFR and EIS measurement techniques
Figure.2.26. AC resistance measurement diagram, with the load in parallel with a milli-ohm meter
Figure.2.27. Fault classification process in the PEM FC
Figure.2.28. Overview of the wide range of dynamic processes in the FC [2.44]
Figure.2.29. Multilayer feed forward neural network
Chapter III:
Chapter IV:
Figure.4.6. Interface panel
Figure.4.7. Panel for setting the type of fuel cell under test
Figure.4.8. Test bench structure
Figure.4.9. Electronic load for the simulation of a driving cycle
Figure.4.10. Single PEM fuel cell and accessories
Figure.4.11. Thermocouple type K
Figure.4.12. The relationship between the Seebeck voltage and the temperature
Figure.4.13. DPI 620 advanced measurement device
Figure.4.14. Calibration process of the thermocouples against a reference thermocouple
Figure.4.15. The Cannes Pyrométriques type 14 reference thermocouple
Figure.4.16. Comparison of the reference and the 12 thermocouples at different temperatures
Figure.4.17. Voltage sensors directly connected to the graphite block on the cathode and anode sides
Figure.4.18. Test bench structure with thermocouples and voltage sensors
…). O2 stoichiometry of 3, 4 and 5 for H2 of 1.5
Figure.4.38. Validation of simulation and experimental test for one cell
Figure.4.39. Temperature measurements on the cathode side of the PEMFC at different current densities and cell temperatures, with O2 stoichiometry of 3, 4 and 5 for H2 of 1.5
Figure.4.40. Voltage measurements on the cathode side of the PEMFC with O2 stoichiometry of 3, 4 and 5 for H2 of 2
Figure.4.41. Validation of simulation and experimental test for two cells
Figure.4.42. The MES FC system used for the validating tests
Figure.4.43. Test bench of the fuel cell in the climatic chamber
Figure.4.44. Position of the thermal sensors
Figure.4.45. Dynamic current profile of the FC under test
Figure.4.46. Comparison between the analytical thermal equation and three points in the experimental test
Figure.4.47. The cell voltage versus current for the MES PEMFC
Figure.4.48. Comparison of the voltage between experimental test and simulation according to the dynamic load profile
Figure.4.49. Test bench of experimental tests on the FC: (a) laboratory, (b) climatic chamber
Figure.4.50. Load profile
Figure.4.51. (a) Validation of the electrical model, (b) validation of the thermal model
Figure.4.52. Comparison of polarization curves for different temperatures
Figure.4.53. Test bench instrument devices
Figure.4.54. Thermal impact on the PEMFC stack
Figure.4.55. Variation of temperature and voltage with different currents for the stack fuel cell
Figure.4.56. Schematic of the positions of the voltage and temperature sensors in the PEMFC stack
Figure.4.57. Experimental results of the temperature disparity at different positions of the PEMFC
Chapter V:
Frenchman Gustave Trouvé built the first electric vehicle in 1881. It was a tricycle powered by a 0.01 HP DC motor fed by a lead-acid battery. A similar vehicle was built in 1883 by two British professors. Due to their low power and speed, these vehicles never became commercial. The first commercial electric vehicle was Morris and Salom's Electroboat. It could be used for three shifts of 4 h with 90-minute recharging periods, and it had a maximum speed of 32 km/h and a 40 km range with its 1.5 HP motors. A remarkable technology of this decade was regenerative braking, invented by Frenchman M.A. Darracq on his 1897 coupe. Furthermore, the first electric vehicle to reach 100 km/h was built by Frenchman Camille Jenatzy. With the advent of gasoline automobiles, which provided more power and more flexibility, electric vehicles started to drop out of sight. The last commercial electric vehicles were issued around 1905. For nearly 60 years, the only electric vehicles sold were golf carts and delivery vehicles.
The F-City H2 (see Figure.1.11) is the result of a partnership between the Michelin Research and Innovation Center, the French automotive producer FAM Automobiles, EVE Systems, FC LAB and the Institute Pierre Vernier. The fuel cell range extender has an energy capacity of 15 kWh and works in a power pack alongside a 2.4 kWh lithium-ion battery. The Energy Pack is initially installed on the F-City H2 car designed by the French automotive producer FAM Automobiles. The F-City is an innovative solution for urban transport. The power module consists of a 4 kW fuel cell (PAC), 1 kg of hydrogen stored at 350 bar and a 2.4 kWh lithium-ion battery. The fuel cell energy pack weighs 120 kg. Michelin's Energy Pack, containing the battery and the fuel cell range extender, offers a significantly improved performance over the original NiMH battery, with the overall energy density almost quadrupled. The range of the F-City H2 is 150 km (93 miles). Partners: the French companies FAM Automobiles and EVE System, the Swiss research unit of Michelin, the French research units Institute Pierre Vernier and FC-Lab/UTBM, and a Swiss high school; funded by Europe (FEDER) together with French authorities and a Swiss regional state. The F-City H2 vehicle is presented in Figure.1.11 [1.6].
Table.1.1. A brief overview of contemporary fuel cell characteristics [1.11].
(columns: PEFC | AFC | PAFC | MCFC | SOFC)
Electrolyte: Ion exchange membranes | Mobilized or immobilized potassium hydroxide | Immobilized liquid phosphoric acid | Immobilized liquid molten carbonate | Ceramic
Operating temperature: 80 °C | 65-220 °C | 205 °C | 650 °C | 600-1000 °C
Charge carrier: H+ | OH− | H+ | CO3= | O=
External reformer for CH4: Yes | Yes | Yes | No | No
Prime cell components: Carbon-based | Carbon-based | Graphite-based | Stainless-based | Ceramic
Catalyst: Platinum | Platinum | Platinum | Nickel | Perovskites
Product water management: Evaporative | Evaporative | Evaporative | Gaseous product | Gaseous product
Product heat management: Process gas + independent cooling medium | Process gas + electrolyte circulation | Process gas + independent cooling medium | Internal reforming + process gas | Internal reforming + process gas
Table of contents of Chapter II
1. PEMFC modeling
1.1. Empirical model
1.2. Mechanistic model
1.3. Analytical model
1.4. Consideration of different modeling
2. Fuel cell basic characteristics
2.1. Effect of temperature
2.2. Effect of pressure
2.3. Theoretical FC efficiency
2.4. Fuel cell voltage losses
2.5. Exchange current density
2.6. Static characteristic (polarization curve)
2.7. Effective factors in concentration losses
2.8. Polarization curve
2.9. Thermal domain
3. Effect of the operating conditions on the performance of the fuel cell
3.1. Temperature
3.2. Pressure
3.3. Humidity
4. PEMFC diagnosis
4.1. Introduction of fault diagnosis
4.2. PEMFC fault conditions
4.3. Fault-tolerance strategies
4.4. Diagnosis levels
Table.2.1. Effects of the operating conditions (current I, temperature T, etc.) on the PEMFC.
Table.2.3. Summary of failure modes in the PEMFC.
Water management in the PEMFC (cathode and anode, flooding and drying): the mechanisms involved include water production at the cathode, water evaporation, condensation at low temperature and low current, electro-osmotic drag (which tends to dry the anode and flood the cathode), back-diffusion at low current density (which moves water from the cathode back to the anode) and saturated water injection.
In brief, membrane drying on the anode side and flooding on the cathode side must be avoided.
Effects of the operating conditions:
- Humidity: humidify the inlet gas to more than 40%.
- Flow rate: a higher flow rate, due to a higher stoichiometry, removes the flooding; a low flow rate gives a low risk of drying.
- Temperature: increasing the temperature resolves the flooding problem in the cell.
- Pressure: water produced at the cathode can be removed by high pressure.
- Current: decreasing the current reduces the flooding.
Degradation of the FC in long-term operating conditions:
- Corrosion: cathode corrosion, anode corrosion, corrosion of the gas diffusion layer, corrosion of the bipolar plates.
- Contamination: anode contamination, contamination of the membrane.
- Gas starvation: hydrogen starvation, oxygen starvation.
- Freezing: start-up from freezing.
- High temperature.
Table of contents of Chapter III
1. Introduction
Table.3.1. Principal physical phenomena found in the PEMFC.
Table.3.2 and Table.3.3 indicate the obtained temperature, voltage and current distributions. The measurement conditions considered are the following:
Table.3.2. Temperature and voltage distributions.
Table.3.3. Current distributions calculated by the Newton-Raphson method.
Table.3.4. Internal impedance calibration according to physical failing.
R1 = 0.132 Ω   R2 = 0.133 Ω   R3 = 0.134 Ω
R4 = 0.138 Ω   R5 = 0.135 Ω   R6 = 0.136 Ω
R7 = 0.140 Ω   R8 = 0.136 Ω   R9 = 0.135 Ω
Table.3.7. Current density calculation in cell one, based on the voltages and temperatures recorded experimentally.
Table.3.8. Current density calculation in cell two, based on the voltages and temperatures recorded experimentally.
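As an illustration of the Newton-Raphson step behind Tables 3.7 and 3.8, the sketch below inverts a generic Tafel/ohmic polarization relation V(i) for the local current density of a node. The model form and the parameters E0, b, R and i0 are illustrative assumptions, not the calibrated cell model of this work:

```python
import numpy as np

def local_current_density(v_node, E0=1.0, b=0.05, R=0.14, i0=1e-3):
    """Solve V(i) = v_node for i by Newton-Raphson, with a generic
    polarization model V(i) = E0 - b*ln(i/i0) - R*i."""
    g = lambda i: E0 - b * np.log(i / i0) - R * i - v_node   # residual g(i) = 0
    dg = lambda i: -b / i - R                                # derivative dg/di
    i = 0.5                                                  # initial guess [A/cm^2]
    for _ in range(50):
        step = g(i) / dg(i)
        i -= step
        if abs(step) < 1e-10:
            break
    return i

# Example: node voltage taken from the measured range (illustrative)
print(local_current_density(v_node=0.612))
```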
Table.4.1 shows the technical data of the MES PEMFC.
Table.4.1. Description of the MES single cell.
Table.4.2. Specification of the cathode side: ambient air; close to ambient pressure (20-30 mbar over-pressure); stoichiometry 3-4.
Table.4.3. Specification of the anode side: dry hydrogen 4.5; standard dead-end mode (a 0.5 s purge every 20 s); 0.55 bar of over-pressure.
Table.4.4. Type of MEA used.
Typology | 3-layer MEA
Membrane thickness | 18 μm
Anode electrode Pt loading | 0.1 mg/cm^2
Anode electrode thickness | around 8 μm
Cathode electrode Pt loading | 0.4 mg/cm^2
Cathode electrode thickness | around 15 μm
Table.4.5. Type of GDL used.
Thickness | 0.42 mm
Density | 125 g/m^2
Air permeability | 3 cm^3/(cm^2·s)
Resistivity | <15 mOhm·cm^2
PTFE loading | 5%
Table.4.6. Comparison of the 12 thermocouple measurements with the reference thermocouple.
Thermocouple | readings at initial temperatures of 38 °C, 45 °C, 50 °C, 55 °C and 62 °C
1 37.32 44.16 49.28 54.05 61.2
2 37.62 44.54 49.28 54.17 61.51
3 37.6 44.61 49.37 54.34 61.51
4 37.63 44.54 49.5 54.37 61.6
5 37.18 44.47 49.17 54.24 61.51
6 37.14 44.18 49.35 54.09 61.26
7 37.3 44.47 49.26 54.2 61.22
8 37.15 44.36 49.34 54.13 61.28
9 37.54 44.42 49.43 54.31 61.41
10 36.97 44.36 49.39 54.14 61.26
11 37.48 44.44 49.15 54.14 61.12
12 37.38 44.65 49.41 54.48 61.64
Reference 37.65 44.8 50.46 55.04 62.54
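A short sketch of how such calibration errors can be computed from the readings above (assuming NumPy; only the first two thermocouples are shown, and the pairwise comparison of Table 4.7 follows the same pattern):

```python
import numpy as np

# Mean relative deviation of each thermocouple from the reference probe,
# using the readings of Table 4.6.
ref = np.array([37.65, 44.80, 50.46, 55.04, 62.54])       # reference [deg C]
tc = {1: [37.32, 44.16, 49.28, 54.05, 61.20],
      2: [37.62, 44.54, 49.28, 54.17, 61.51]}

for num, readings in tc.items():
    err = 100.0 * np.abs(np.asarray(readings) - ref) / ref  # percent error per point
    print(f"thermocouple {num}: mean error {err.mean():.2f} %")
```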
Table.4.7. Comparison of the mean errors at different temperatures between the 12 thermocouples, pairwise.
Number 1(%) 2(%) 3(%) 4(%) 5(%) 6(%) 7(%) 8(%) 9(%) 10(%) 11(%) 12(%)
1 0 0.47 0.62 0.58 0.14 0.08 0.3 0.2 0.52 0.81 0.32 0.59
2 0.47 0 0.15 0.12 0.33 0.55 0.3 0.42 0.1 0.43 0.3 1.58
3 0.62 0.15 0 2.62 0.48 0.7 0.32 0.42 0.06 0.43 0.26 0
4 0.58 0.12 2.62 0 0.44 0.66 0.28 0.38 0.06 0.05 0.26 0.45
5 0.14 0.33 0.48 0.44 0 0.22 0.16 0.06 0.38 0.05 0.19 0.45
6 0.08 0.55 0.7 0.66 0.22 0 0.38 0.28 0.6 0.11 0.4 0.67
7 0.3 0.17 0.32 0.28 0.16 0.38 0 0.1 0.22 0.11 0.02 0.29
8 0.2 0.27 0.42 0.38 0.06 0.28 0.1 0.1 0.32 0.01 0.19 0.07
9 0.52 0.05 0.1 0.06 0.38 0.6 0.22 0.32 0 0.33 0.19 0.07
10 0.81 0.28 0.43 0.39 0.05 0.27 0.11 0.01 0.33 0 0.13 0.4
11 0.32 0.14 0.3 0.26 0.19 0.4 0.02 0.12 0.19 0.13 0 0.26
12 0.59 0.12 1.58 0 0.45 0.67 0.29 0.39 0.07 0.4 0.26 0
Table.4.8. Basic operating conditions for the voltage measurements.
Description | Values
Stoichiometry air | 1.5
Stoichiometry hydrogen | 3
Number of cells | 1-2
Surface area | 62 cm^2
T° max | 65 °C
The current density distributions for the different current profile loads are shown in the following table.
Table.4.9. Current density distributions (calculated by the Newton-Raphson method) for different current profiles at the inlet, middle and outlet of the cell.
Load current | Position | Left | Middle | Right
5 (A) | inlet | 0.485 | 0.448 | 0.492
5 (A) | middle | 0.612 | 0.497 | 0.756
5 (A) | outlet | 0.610 | 0.617 | 0.482
10 (A) | inlet | 1.077 | 1.042 | 1.033
10 (A) | middle | 1.106 | 1.071 | 1.163
10 (A) | outlet | 1.177 | 1.275 | 1.056
15 (A) | inlet | 1.732 | 1.642 | 1.646
15 (A) | middle | 1.664 | 1.645 | 1.825
15 (A) | outlet | 1.807 | 1.324 | 1.715
Table.4.10. Calculations of impedances along the x, y and xy axes.
5 (A):
x axis: R12[Ω] 0.00133; R23[Ω] 0.00151; R45[Ω] 0.00360; R56[Ω] 0.0025; R78[Ω] 4.21E-05; R89[Ω] 0.00394
y axis: R14[Ω] 0.00384; R25[Ω] 0.00154; R36[Ω] 0.00262; R47[Ω] 8.46E-05; R58[Ω] 0.00353; R69[Ω] 0.00298
xy axis: R15[Ω] 0.000223; R24[Ω] 0.00514; R26[Ω] 0.004125; R35[Ω] 3.92E-05; R48[Ω] 4.21E-05; R57[Ω] 0.00351; R59[Ω] 0.00039; R68[Ω] 0.00097
10 (A):
x axis: R12[Ω] 0.00097; R23[Ω] 0.000189; R45[Ω] 0.00093; R56[Ω] 0.00208; R78[Ω] 0.001656; R89[Ω] 0.00383
y axis: R14[Ω] 0.000485; R25[Ω] 0.00051; R36[Ω] 0.00243; R47[Ω] 0.00127; R58[Ω] 0.00383; R69[Ω] 0.00206
xy axis: R15[Ω] 0.00045; R24[Ω] 0.001452; R26[Ω] 0.00259; R35[Ω] 0.00032; R48[Ω] 0.0029; R57[Ω] 0.00219; R59[Ω] 2.77E-05; R68[Ω] 0.00178
15 (A):
x axis: R12[Ω] 0.00161; R23[Ω] 0.00050; R45[Ω] 0.00051; R56[Ω] 0.00300; R78[Ω] 0.0020; R89[Ω] 0.00316
y axis: R14[Ω] 0.00106; R25[Ω] 3.22E-05; R36[Ω] 0.00255; R47[Ω] 0.00195; R58[Ω] 0.00448; R69[Ω] 0.00165
xy axis: R15[Ω] 0.00158; R24[Ω] 0.00055; R26[Ω] 0.00303; R35[Ω] 0.00047; R48[Ω] 0.00398; R57[Ω] 0.00246; R59[Ω] 0.00135; R68[Ω] 0.00151
The corresponding resistance values for the different current profile loads are shown in the following table (Table.4.11).
5 (A):
Cell one: R14[Ω] 0.0015; R47[Ω] 0.0017
Cell two: R14[Ω] 5.0816e-4; R47[Ω] 2.029e-4
Between cells one and two: R1211[Ω] 6.05e-4; R1244[Ω] 2.4e-4; R1277[Ω] 0.0018
10 (A):
Cell one: R14[Ω] 0.0013; R47[Ω] 0.0015
Cell two: R14[Ω] 8.07e-4; R47[Ω] 3.686e-4
Between cells one and two: R1211[Ω] 4.207e-4; R1244[Ω] 9.559e-5; R1277[Ω] 0.002
15 (A):
Cell one: R14[Ω] 0.0014; R47[Ω] 0.0014
Cell two: R14[Ω] 0.0017; R47[Ω] 7.4351e-4
Between cells one and two: R1211[Ω] 6.04e-4; R1244[Ω] 2.64e-4
Table.4.12. Technical characteristics of the PEMFC.
Parameters | Values
Number of cells, Ncell | 40
Stack weight, mstack | 2.2 kg
Stack area | 21317480 (mm)
Anode volume | 4500 mm^3
Cathode volume | 6800 mm^3
Membrane thickness, tm | 18 µm
Active area, A | 61.48 cm^2
Table.4.14. Temperature and voltage obtained by experimental test on the Ballard Nexa stack.
Table.4.15. Resistance calculations of the stack FC.
Table of contents of Chapter V
1. General diagnosis strategy of FCEV drive trains
Table.5.1. Global strategy of supervision and diagnosis of the power train in an FCEV.
This analysis is based on the harmonics of the output signal of the PEMFC. The algorithm (the Fast Fourier Transform) can be used for on-line failure detection, since its computation time is up to a hundred times shorter than that of other algorithms. A fast Fourier transform (FFT) is an algorithm to compute the discrete Fourier transform (DFT) and its inverse. Fourier analysis converts time (or space) to frequency and vice versa; an FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. In other words, the FFT is a faster version of the DFT.
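As an illustration of how such an FFT-based detection can be set up (a minimal sketch assuming NumPy; the synthetic signal, sampling rate and harmonic frequency below are assumptions, not measured data):

```python
import numpy as np

# Synthetic PEMFC output voltage: DC level plus a small harmonic and noise.
fs, f0, n = 1000.0, 50.0, 2048            # sampling rate [Hz], harmonic [Hz], samples
t = np.arange(n) / fs
v = 0.65 + 0.01 * np.sin(2 * np.pi * f0 * t) + 0.001 * np.random.randn(n)

spectrum = np.fft.rfft(v - v.mean())      # FFT of the AC component, O(n log n)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
k = np.argmax(np.abs(spectrum))           # dominant harmonic bin
print(f"dominant harmonic: {freqs[k]:.1f} Hz, "
      f"amplitude: {2 * np.abs(spectrum[k]) / n:.4f} V")
```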
Table.5.2. Classification of fault and normal modes in the fuel cell.
Table.5.3. Classification of flooding and drying.
Table.5.6. Local current density distributions at different nodes, based on various faults, for the 5 A current profile.
Table.5.7. Local current density distributions at different nodes, based on various faults, for the 10 A current profile.
Table.5.8. Local current density distributions at different nodes, based on various faults, for the 15 A current profile.
Chapter V: Diagnosis of PEMFC within FCEV powertrain
Figure.3.5. Transverse view (x, y axis) of the anode side with 9 nodes and 20 different resistances.
Eq3.[START_REF] Maggio | Modeling polymer electrolyte fuel cells: an innovative approach[END_REF]
Figure.3.13. Perspective view of the 3D proposed model for PEMFC stack
Figure.3.14. Top view of the 3D proposed model for PEMFC stack
Table.4.11. Resistance calculations of the two cells.
Acknowledgments
I am very grateful to my PhD committee for the interest they showed in my work: Prof. BEN AMMAR Faouzi and Dr. Mélika HINAJE, for their review of the manuscript and the suggestions they made to improve it; and Prof. Bacha SEDDIK, Prof. Daniel HISSEL and Dr. Rachid OUTBIB, examiners of the PhD, for their participation and their interesting questions during my thesis defense.
The next step of this work is to develop a diagnosis algorithm based on the developed ANN model, within the general context of FCEV drivetrain supervision and diagnosis, including the management of the degraded modes. The modeling process has to be continued to enhance the knowledge of the different PEMFC technologies and the accuracy of the 3D models. This can be achieved by investigating further the modeling of single and stack fuel cells, notably by performing calibrations in faulty operating conditions. The aim is to propose diagnosis and control strategies in both healthy and degraded modes, to improve the lifetime of the FC system and the reliability of FCEV drivetrains.
However, not all of the current is collected by the current collector layers. Some current passes in the X and Y directions of the cell (the Z direction in the stack), as shown in Table.3.6. These losses are modeled by connection resistors in the different directions. Their values can be computed as the voltage difference between two adjacent node pairs divided by the average value of the current densities of these nodes. For example, the connection resistance in the cross section (X, Y) can be calculated by a formulation of the form:

R(1,2)(4,5) = [(V1 + V2)/2 − (V4 + V5)/2] / [(x1 + x2 + x4 + x5)/4]

where:
V1, V2, V4 and V5 are the node voltages, recorded experimentally;
x1, x2, x4 and x5 are the node current densities, calculated by the Newton-Raphson method.
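A small numerical sketch of this computation (the node grouping, the per-node area and the example values are illustrative assumptions):

```python
def connection_resistance(v, i_dens, area, a, b):
    """Connection resistance between adjacent node groups a and b.

    v: node voltages [V] (measured); i_dens: node current densities [A/cm^2]
    (from the Newton-Raphson solution); area: active area per node [cm^2].
    The grouping (mean voltage difference / mean current) follows the
    description in the text.
    """
    dv = sum(v[k] for k in a) / len(a) - sum(v[k] for k in b) / len(b)
    i_avg = sum(i_dens[k] for k in a + b) / len(a + b) * area
    return dv / i_avg

# Example with the nodes cited in the text (illustrative values)
v = {1: 0.612, 2: 0.610, 4: 0.605, 5: 0.603}          # [V]
x = {1: 0.485, 2: 0.448, 4: 0.612, 5: 0.497}          # [A/cm^2]
print(connection_resistance(v, x, area=62.0 / 9, a=[1, 2], b=[4, 5]))
```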
Chapter IV Experimental Validation of the 3D Model in Healthy Mode
Nexa stack set-up
At this stage of the research work, a second validation set-up has been carried out. The FC used is the commercial BALLARD Nexa stack fuel cell, rated at 1.2 kW with 47 cells. A compressor (blower) is installed so that the Nexa stack is fed with pure hydrogen and low-pressure compressed air. The anode channels operate in "dead-end" mode. As for the Nexa stacks, the hydrogen at the anode inlet is not humidified. The entire stack is cooled by a forced air flux in the cooling channels. Table.4.13 summarizes the main configurations and operating conditions of the Nexa stack fuel cell [4.3].
Measuring equipment
During the experimental tests, the Nexa stack's integrated control board takes most of the data measurements. These measurements include the air temperature at the inlet, the stack current, the stack output voltage and so on. However, the Nexa stack's control board does not measure the temperature and voltage of the individual cells. In order to obtain this information, some complementary instruments were added. | 299,303 | [
"1183932"
] | [
"227671"
] |
01492936 | en | [
"spi"
] | 2024/03/04 23:41:50 | 2015 | https://hal.science/hal-01492936/file/Submission.pdf | Vikram Bhattacharjee
email: [email protected]
Debanjan Chatterjee
Permual Raman
A shield based thermoelectric converter system with a thermosyphonic heat sink for utilization in wood-stoves
Keywords: Thermoelectric Power Generator, thermosyphonic heat sink, shield, wood-stoves, conversion efficiency
Thermoelectric Power Generators (TEGs) are solid-state devices which utilize temperature gradients to produce electrical energy. In domestic wood-stoves, these devices have carved out a niche for themselves and can be used for the generation of electricity in rural areas. This paper presents the design of a shield-based thermoelectric power generation system, consisting of a thermosyphonic heat sink, for utilization in wood stoves. The average current density of the TEG module improved by 28.3% and 22.3% when compared to a conventional plate-fin heat sink based converter system and to a simple single-loop thermosyphonic heat sink based converter system respectively. The converter system achieved a maximum power output of 3.2 W, along with a maximum conversion efficiency of 5.05%, which is higher than that of conventional heat sink based module systems in wood-burning stoves. An optimal shield thickness of 6 cm reduced the steady-state hot-side temperature below the permissible limit, and an optimal coolant velocity of 8 m/sec ensured efficient removal of heat from the cold side of the generator.
Introduction
According to the WHO, around 3 billion people currently use simple biomass as a fuel for domestic cooking [START_REF]WHO Report on biomass consumption[END_REF]. In rural areas, where wood is the main fuel source, domestic wood-fired stoves are heavily used. In addition to the climatic conditions, rural homes also suffer from an unevenly distributed and unreliable electrical power supply from the grids. As a solution to these problems, researchers have investigated the concept of modelling and reconstructing these systems by integrating converter systems that utilize thermoelectric generators for power generation purposes [START_REF] Nuwayhid | Low cost stove-top thermoelectric generator for regions with unreliable electricity supply[END_REF][START_REF] O'shaughnessy | Small scale electricity generation from a portable biomass cookstove: Prototype design and preliminary results[END_REF][START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF][START_REF] Jiang | Experimental study of a plat-flame micro combustor burning DME for thermoelectric power generation[END_REF][START_REF] Ma | Waste heat recovery using a thermoelectric power generation system in a biomass gasifier[END_REF][START_REF] Lertsatitthanakorn | Study of Combined Rice Husk Gasifier Thermoelectric Generator[END_REF][START_REF] Nuwayhid | Design and testing of a locally made loop-type thermosyphonic heat sink for stove-top thermoelectric generators[END_REF][START_REF] Nuwayhid | Development and Testing of a Domestic Woodstove Thermoelectric Generator with Natural Convection cooling[END_REF][START_REF] Raman | Development, design and performance analysis of a forced draft clean combustion cookstove powered by a thermo electric generator with multi-utility options[END_REF][START_REF] Killander | A stove-top generator for cold areas[END_REF]. Nuwayhid et al. [START_REF] Nuwayhid | Low cost stove-top thermoelectric generator for regions with unreliable electricity supply[END_REF] studied the performance characteristics of a low-cost stove-top thermoelectric power generator, and the evaluation led to the design of Peltier modules producing maximum power for different utilities. In [START_REF] O'shaughnessy | Small scale electricity generation from a portable biomass cookstove: Prototype design and preliminary results[END_REF], small-scale electricity generation was achieved using a biomass cook stove; the prototype produced a total power of 5.9 W and the electricity was used to power a 3.3 V lithium-ion battery. Lertsatitthanakorn [START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF] designed a biomass cook-stove combined with a TEG which gave a net power output of 2.4 W; a conversion efficiency of 3.2% enabled the system to light up a low-power incandescent bulb. Jiang et al. utilized a TEG system in a plat-flame micro combustor burning dimethyl ether, giving an output power of 2 W with a conversion efficiency of 1.25%; the system sustained a stable premixed flame and achieved a low wall temperature, thereby reducing the heat loss from the combustion system [START_REF] Jiang | Experimental study of a plat-flame micro combustor burning DME for thermoelectric power generation[END_REF].
In [START_REF] Ma | Waste heat recovery using a thermoelectric power generation system in a biomass gasifier[END_REF], a Bi2Te3-based TEG system consisting of 8 modules was used in a biomass gasifier for improved waste heat recovery, giving a maximum power output of 6.1 W. A rice husk gasifier coupled with a TEG system on the gasifier wall was tested in [START_REF] Lertsatitthanakorn | Study of Combined Rice Husk Gasifier Thermoelectric Generator[END_REF], where at a temperature difference of 60 °C the output power of the system was 3.9 W, with a conversion efficiency of 2.01%. In [START_REF] Nuwayhid | Design and testing of a locally made loop-type thermosyphonic heat sink for stove-top thermoelectric generators[END_REF], a TEG-powered wood-stove was designed in which the cold side was coupled to a loop-type thermosyphonic heat sink using water as the coolant; the system generated a total output power of 3 W, making it commercially viable for low-power applications. A domestic wood stove fitted with a TEG unit working under natural convection produced a power output of 4.2 W; it was deduced that the use of multiple modules with a single heat sink reduces the power output compared to that of a single module, due to the reduced temperature difference between the hot and the cold sides of the unit [START_REF] Nuwayhid | Development and Testing of a Domestic Woodstove Thermoelectric Generator with Natural Convection cooling[END_REF]. In [START_REF] Raman | Development, design and performance analysis of a forced draft clean combustion cookstove powered by a thermo electric generator with multi-utility options[END_REF], a performance evaluation was carried out on a forced-draft clean-combustion cook-stove, where the power output of the TEG was 4.5 W at a temperature difference of 240 °C. Killander et al. [START_REF] Killander | A stove-top generator for cold areas[END_REF] designed a cook stove consisting of two Hi-Z HZ modules whose cold side was maintained by a cooling fan; a DC-DC converter was used to step up the output voltage of the TEG, and the stove produced a net power output of 10 W. Based on this literature review, it can be deduced that the performance of thermoelectric generators in wood stoves depends mainly on two factors: the temperature difference between the hot and the cold sides of the TEG, and the design of the heat sink. However, conventional heat-sink-based TEG wood stoves suffer from reduced conversion efficiencies, owing to reduced temperature differences between the hot and cold sides (as hot-side temperatures rise above the recommended limit for a generator) and to inefficient heat dissipation through the fins on the cold side. Hence the objective of this study is to present the design of a new shield-based thermoelectric converter system coupled with a single-loop thermosyphonic heat sink for utilization in wood stoves, where the additional conductive resistance of the shield prevents overheating and damage of the module by maintaining the hot-side temperature within the permissible limit, and the high specific heat intake of the water in the thermosyphonic heat sink ensures efficient heat removal from its cold side. The research methodology and the design optimization strategy are presented in this paper.
Nomenclature
TEG | Thermoelectric Power Generator
T_cold | Cold side temperature (K)
T_hot | Hot side temperature (K)
Thermoelectricity
Background
The thermoelectric effect was first discovered by Seebeck [START_REF] Riffat | Thermoelectrics:A review of present and potential applications[END_REF] in the year 1822. The "Seebeck Effect" principle states that when a temperature difference is maintained across the junctions of two dissimilar metals, a voltage is generated. Thermoelectric modules, also called thermoelectric power generators, are a combination of pairs of n- and p-type semiconductors, combined electrically in series and thermally in parallel and arranged alternately to ensure unidirectional carrier transport. The negative loading of the n-type elements and the positive loading of the p-type elements constitute the electrical power output of the system. The whole assembly is held between two ceramic plates for mechanical support. Having a high thermal conductivity, the ceramic allows efficient heat transfer from the hot to the cold side, thereby ensuring a high conversion efficiency of the module.
Module Parameters
The principal parameters that determine the performance of a thermoelectric power generator are the net output power, the maximum conversion efficiency and the hot- and cold-side temperatures of the TEG unit. The maximum conversion efficiency and the theoretical maximum power output (as well as the voltage and the output current) can be determined, taking the contact resistances into account, from Eqs. (1) and (2) respectively [START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF].
η_max = [(T_hot − T_cold)/T_hot] / {2 − 0.5 (T_hot − T_cold)/T_hot + [4/(Z T_hot)] [(L + n)/L] (1 + 2 r L_c/L)}    (1)

P = [α² N A (T_hot − T_cold)²] / [2 ρ (n + L) (1 + 2 r L_c/L)²]    (2)
Typically, the values of L_c, n and r are constants for a module; depending on the Bi-Te material and on the temperature difference used, they were taken from [START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF]: here L_c = 0.8 mm, α = 2.1226 * 10^-4 V K^-1, n = 0.1 mm, r = 0.2, L = 1.2 mm, ρ = 2.07 * 10^-3 Ω cm and Z = 2.75 * 10^-3 K^-1.
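A small sketch evaluating Eqs. (1) and (2) as reconstructed above, with these constants converted to SI units (the number of couples N and the leg cross-section A are not stated in the text and are illustrative assumptions):

```python
def teg_performance(T_hot, T_cold, N=127, A=1.4e-6, alpha=2.1226e-4,
                    rho=2.07e-5, Z=2.75e-3, L=1.2e-3, Lc=0.8e-3, n=1.0e-4, r=0.2):
    """Maximum power [W] and conversion efficiency [-] of a TEG module,
    per the contact-resistance formulation of Eqs. (1)-(2)."""
    dT = T_hot - T_cold
    P = (alpha**2 * N * A * dT**2) / (2 * rho * (n + L) * (1 + 2 * r * Lc / L)**2)
    eta = (dT / T_hot) / (2 - 0.5 * dT / T_hot
                          + (4 / (Z * T_hot)) * ((L + n) / L) * (1 + 2 * r * Lc / L))
    return P, eta

P, eta = teg_performance(T_hot=542.0, T_cold=292.0)
print(f"P_max = {P:.2f} W, eta_max = {100 * eta:.2f} %")
```

With these assumed geometric values, the sketch yields a theoretical maximum of the same order as the 3.2 W and 5.05% measured for the converter system.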
3. Experimental Setup
Converter System Design
The thermoelectric converter system, consisting of a single module, was designed in such a manner that the hot side of the TEG unit is not directly exposed to the incoming heat energy from the source. Rather, it was attached to a 15 cm cylindrical copper rod which is in direct contact with the heat source. A shield is placed between the rod and the hot side of the TEG. The shield adds an additional conductive resistance to the system; it hence lowers the hot-side temperature below the permissible value and prevents damage of the module due to sudden outbursts of heat energy from the source. The cold side was attached to a single-loop thermosyphonic system with water as the coolant. Cold water at 13 °C flowed from the reservoir, whose volume was kept constant at 2 litres by an external water supply. The cooling system was a stainless steel box of dimensions 10 cm x 10 cm x 5 cm. The TEG was supported in a small socket on the surface of the coolant chamber, and the rod-and-shield assembly was supported with magnetic sockets, as shown in Figure 1. The chamber had two openings on one pair of its opposite faces; both openings were provided with valves and pipes for the passage and control of the coolant flow. In this study a Bi2Te3 TEG module of dimensions 30 mm x 30 mm x 3.3 mm was selected. The maximum hot-side and cold-side temperatures of the module were 300 °C and 30 °C respectively. The values of the thermal conductivity, the Seebeck coefficient and the electrical conductivity of the material were taken from [START_REF]Thermoelectric Engineering Handbook[END_REF].
Stove Geometry
The chamber had a square opening (4 cm x 4 cm) at the bottom for the entry of air into the system. Wood pieces (of dimensions 1.5 inch x 1.75 inch) were used for ignition inside the combustion chamber. Initially, a total of 250 g of wood chips occupied one third of the chamber volume. The chamber was operated in batch mode and wood was added whenever the temperature dropped. The cylindrical rod of the converter system was inserted into the chamber through a hole (1 cm I.D.). The length of the rod inside the chamber was 10 cm. The temperature was measured with three standard temperature sensors attached to the display. Air was forced into the chamber from beneath through a narrow opening, via a 5 V blower, to ensure efficient combustion. A similar 5 V fan was attached as a load to the TEG; its RPM was measured and controlled throughout the experiment. The experimental setup is shown in Figure 2.
Conventional Heat Sink Designs for performance assessment
The performance of the converter was compared with that of two conventional heat sink designs. In the first design, the coolant chamber was replaced with an aluminium-based rectangular plate-fin heat sink having a fixed number of fins. The second design was a simple single-loop thermosyphonic system with no shield between the hot side and the stove wall. The dimensions of the coolant chamber in the simple single-loop thermosyphonic system were similar to those in the proposed design. The material properties for the stove geometry and the converter system designs were taken from [START_REF]Thermal conductivity of metals[END_REF], and the instruments used during the experiment are tabulated, along with their specifications, in Table 1 and Table 2 respectively.
Guiding Equations
In order to estimate its analytical performance, a mathematical analysis of the converter system was carried out by defining the flow and energy equations, with appropriate boundary conditions, for its different components. Inside the stove, the flow of the inlet air was modelled using the Reynolds-Averaged Navier-Stokes equations [START_REF]RANS model[END_REF] together with the k-ε turbulence model [START_REF]The k-∈ turbulent model Available from[END_REF]. The conjugate heat transfer equations (3)-(9) involve viscous effects and the effect, on the temperature profile of the flue gas, of the heat generated by the source inside the chamber.
ρ ∂u/∂t + ρ(u·∇)u = ∇·[−P I + (μ + μ_t)(∇u + (∇u)^T) − (2/3)(μ + μ_t)(∇·u) I − (2/3) ρ k I] + F    (3)

ρ ∂k/∂t + ρ(u·∇)k = ∇·[(μ + μ_t/σ_k) ∇k] + P_k − ρε    (4)

∂ρ/∂t + ∇·(ρu) = 0    (5)

ρ ∂ε/∂t + ρ(u·∇)ε = ∇·[(μ + μ_t/σ_ε) ∇ε] + 1.44 (ε/k) P_k − 1.92 ρ ε²/k    (6)

μ_t = 0.09 ρ k²/ε    (7)

P_k = μ_t [∇u : (∇u + (∇u)^T) − (2/3)(∇·u)²] − (2/3) ρ k ∇·u    (8)

ρ_gas C_p,gas ∂T/∂t + ρ_gas C_p,gas u·∇T = ∇·(k_gas ∇T) + q_gen    (9)
where μ_t represents the turbulent (eddy) viscosity, k represents the turbulent kinetic energy and ε is the turbulent dissipation rate. The terms k_gas and ρ_gas represent the thermal conductivity and the density of the fluid respectively. q_gen is the heat generation term, which has been modelled as a non-exhaustive heat source dependent on the source temperature and the production coefficient. Radiative heat transfer between the ambient and a flame was previously modelled in [START_REF] Keramida | Radiative heat transfer in natural gas-fired furnaces[END_REF], which considered the radiative transfer equation (Eq. (10)) for a gray medium, incorporating the effects of scattering, absorption and emission. As given by Eq. (11), the heat generation q_gen inside the volume is a function of the average intensity H(r, s) of the scattered radiation, where

s·∇H(r, s) = −β_extinction H(r, s) + q_gen    (10)

q_gen = k_absorption σ T_source⁴ + (k_scattering/4π) ∫_4π H(r, s) dΩ    (11)
The governing equations which determine the performance parameters of the TEG depend on its current density and on the heat transfer through the material. The thermal conductivity, the specific heat capacity and the density of the material of construction of the TEG determine its performance and hence were taken into account in the analysis. The governing equations of the TEG at unsteady state take a three-dimensional form which can be derived from energy balance and current conservation [START_REF] Jang | Optimal design for micro-thermoelectric generators using finite element analysis[END_REF]. They are elucidated below.
ρ_TEG C_p,TEG ∂T_TEG/∂t = −∇·q⃗ + q̇    (17)
∇·J⃗ = 0    (18)

where q⃗, q̇ and J⃗ represent the heat flux, the heat generation and the current density respectively. The heat flux is related to the current density and to the electric field intensity vector by Eqs. (19) and (20) respectively.
q⃗ = −k_TEG ∇T_TEG + α T_TEG J⃗    (19)

J⃗ = (1/ρ_TEG) (E⃗ − α ∇T_TEG)    (20)
where E⃗ = −∇Ω, Ω being the scalar electric potential, and ρ_TEG and k_TEG being the electrical resistivity and the thermal conductivity of the material of construction of the TEG respectively. Substitution of (19) into (17) gives the final form of the governing equation, which has been used for the determination of the temperature profiles and the scalar potential in each of the three phases of the experiment.
ρ_TEG C_p,TEG ∂T_TEG/∂t = ∇·(k_TEG ∇T_TEG) − J⃗·∇(α T_TEG) + q̇    (21)
The heat generation corresponds to the power loss due to Joule heating, and therefore the final equation giving the temperature profile of the TEG is given by Eq. (22).
ρ_TEG C_p,TEG ∂T_TEG/∂t = ∇·(k_TEG ∇T_TEG) − J⃗·∇(α T_TEG) + ρ_TEG J⃗·J⃗    (22)
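As a compact numerical illustration of Eq. (22), the sketch below solves its steady one-dimensional form across the module thickness, keeping only conduction and Joule heating (the Peltier term is dropped for brevity). The geometry, current density and boundary temperatures are illustrative assumptions:

```python
import numpy as np

# Steady 1-D conduction with Joule heating: -k T'' = rho J^2, Dirichlet BCs.
k, rho = 1.2, 2.07e-5            # thermal conductivity [W/m/K], resistivity [ohm m]
Lz, J = 3.3e-3, 1.0e4            # module thickness [m], current density [A/m^2]
T_hot, T_cold, n = 542.0, 292.0, 101

z = np.linspace(0.0, Lz, n)
h = z[1] - z[0]
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
b = np.full(n, -rho * J**2 * h**2 / k)   # interior rows: T[i-1]-2T[i]+T[i+1] = -rho J^2 h^2/k
A[0, :], A[-1, :] = 0.0, 0.0             # Dirichlet boundary rows
A[0, 0] = A[-1, -1] = 1.0
b[0], b[-1] = T_hot, T_cold

T = np.linalg.solve(A, b)
print(f"mid-plane temperature: {T[n // 2]:.1f} K")
```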
The specific heat capacity of the thermoelectric material varies with temperature according to Equation (23) [START_REF] Landolt | Landolt-Börnstein numerical data and functional relationships in science and technology[END_REF].

The proposed system reduced the maximum hot-side temperature of the TEG below the permissible limit of 573 K, to 542 K, whereas the conventional plate-fin system and the simple thermosyphonic heat sink system with no shield recorded maximum hot-side temperatures of 584 K and 575 K respectively. The proposed system recorded a maximum temperature difference of 250 K, which is comparatively higher than the conventional plate-fin heat sink and the simple thermosyphonic systems, which recorded maximum temperature differences of 195 K and 228 K respectively.

From the figure it can be inferred that the energy dissipation rate of the fluid reaches a maximum value of 844 m^2 s^-3 in the regions near the wall to which the rod is attached. Therefore the magnitude of the heat flux travelling through the rod, and ultimately falling on the hot side of the TEG through the shield, varies directly with the length of the part of the rod inserted inside the geometry. However, increasing the length of the inserted portion increases the proximity of the TEG hot side to the wall of the stove and leads to overheating of the device. Thus, to avoid overheating and to allow optimum module performance, the length of the inserted portion was chosen on the basis of optimized rates of turbulent energy dissipation and of the maximum hot-side temperature of the module. Figure 4 shows the variation of the turbulent dissipation rate and of the maximum hot-side temperature with increasing converter length. It is evident from the figure that at a length of 8 cm the average turbulent dissipation energy is high and the maximum hot-side temperature is below the allowable limit of 573 K. Hence this geometric length was chosen and kept constant during the experiment.

The heat flux through the shield flows mainly along the directions which offer lower conductive resistance. The conductive heat flux flowing normal to the faces, excluding those parallel to the walls of the TEG, is thus manifested as a radiation loss to the ambient. In the figure, the hot-side temperature of the TEG corresponding to the conductive heat flux falling on the hot side is above the maximum allowable limit up to a shield thickness of 3 cm, but it gradually decreases as the shield thickness increases and the radiation loss increases. However, since the chief mode of heat transfer from the shield to the TEG is conduction, drastically increasing the additional conductive resistance reduces the power output of the generator. Hence the shield thickness should be based on optimized rates of conductive and radiative heat transfer, to simultaneously prevent module overheating and ensure efficient module performance. The variation of the maximum output power of the TEG with shield thickness, at various coolant flow velocities, is shown in the corresponding figure.
In the figure, the power output initially increases with increasing thickness and, after reaching a maximum, starts decreasing with further increase in the shield thickness. The power output is minimum in the absence of the shield, due to overheating of the TEG hot side. As the shield thickness increases, the efficiency of the module increases due to the increased temperature difference between its two sides, and finally starts decreasing as the conductive flux entering the module decreases with the increase in the conductive resistance of the shield. For a constant thickness, the maximum output power also increases with an increase in the coolant flow velocity, until the velocity reaches a value of 9 m/sec. Figure 6 shows that the convective heat flux removed from the cold side of the TEG becomes constant beyond a magnitude of 9 m/sec, and hence the maximum power output of the TEG remains constant at 3.2 W as the coolant flow rate is increased further. A shield thickness of 6 cm was chosen, taking into consideration the material cost and the optimized heat transfer rates, in order to prevent overheating of the side exposed to the incoming heat flux, and a coolant flow velocity of 8 m/sec was considered in the design to ensure optimal performance of the TEG unit. Figure 8 describes the variation of the conversion efficiencies for the three systems and shows that, due to the higher temperature differences, the module in the proposed system reached a higher maximum conversion efficiency, of approximately 5.05% at a temperature difference of 250 K, compared to the other two systems, which recorded efficiencies of up to 0.75% and 3.5% respectively. The process parameters are tabulated below in Table 3. The current intensities reached maxima of up to 2.55*10^4 A m^-2, with minima of 175 A m^-2 and 96.8 A m^-2 in the second and the third cases respectively. Due to the higher hot-side temperatures, the maximum current intensity in the conventional heat sink designs is greater than that of the proposed system, but the reduced temperature differences and low conversion efficiencies gradually minimize the current intensity in larger sections of the module. Due to the increased temperature difference between the two sides of the TEG, the average intensity increased by 28.3% when the shield-based thermosyphonic converter system was used in place of the conventional plate-fin heat sink system, and by 22.3% when the shield was added to a single open-loop thermosyphonic system.
6.2 Selection and assessment of variable parameters for optimum module performance
6.2.1 Effect of the variation of the inserted length on the turbulent energy dissipation
6.2.2 Effect of the variation of the shield thickness on module performance
6.2.3 Effect of the variation of the flow rate on the boundary convective flux
Conclusion
A shield-based thermosyphonic converter system was designed for power generation in wood stoves. The additional conductive resistance of the shield reduced the hot-side temperature below the maximum allowable temperature for the hot side and prevented module overheating, while the thermosyphonic system helped in the efficient removal of heat energy from the cold side. The performance of the converter was studied for utilization in a wood stove consisting of a heat source, and was compared to that of a conventional rectangular plate-fin heat sink based thermoelectric converter system and of a simple single-loop thermosyphonic heat sink based system. It was observed that the proposed system showed an appreciable increase in the maximum conversion efficiency, and increases in the average current density of 28.3% and 22.3% with respect to the two reference designs respectively. The maximum power output of the system was 3.2 W, with a maximum conversion efficiency of 5.05%, making the design viable for low-power applications.
Fig. 1. Schematic of the Converter System.
Fig. 2. Stove Geometry.
k_absorption and k_scattering describe absorption and scattering respectively; the extinction coefficient β_extinction represents the overall extinction and is expressed as the sum of the scattering and the absorption coefficients, cf. Eq. (11).
6.1 Steady state temperature differences
Fig. 3. Distribution of the turbulence energy dissipation rate inside the flow field.
Fig. 5. Variation of the heat flux through the shield at different shield thicknesses.
Fig. 6. Variation of the convective heat flux from the TEG cold side at different coolant flow velocities.
Figure 6 shows the variation of the boundary convective flux from the cold side of the TEG at different flow velocities through the chamber. The figure shows that the convective heat flux from the cold side increases gradually as the coolant velocity is increased from 5 to 10 m/sec, but its magnitude becomes more or less constant at a coolant flow velocity of 9-10 m/sec. A higher coolant velocity requires a larger reservoir height and incurs material costs for piping. Therefore, based on the availability of water storage space and on the optimization of material costs, the height of the reservoir should be chosen judiciously to achieve effective heat removal at optimum flow rates.
Fig. 8. Comparison of the variation of conversion efficiency with temperature difference in two converter systems.
Fig. 9. Surface plot of the steady-state distribution of current intensity in the TEG using three different systems.
Table 1
Instrument specifications.
Instrument | Measurement | Maker's name | Resolution | Unit | Accuracy
Digital thermometer | Temperature | CIE305 | 0.10 | °C | 0.10
Multimeter | V/A | MecoV | 0.01 | V | ±0.05
Multimeter | V/A | MecoV | 0.01 | A | ±1.10
Tachometer | Speed | Techmark | 1 | RPM | ±0.05
Digital balance | Weight of fuel | Sunshine Instruments | 1.00 | g | Auto calibration

Table 2
Parameters of the stove [23].
Components | Material of construction | Thermal conductivity (W m^-1 K^-1) | Notation
Stove | Stainless steel | 16.300 | k_stove
Coolant box | Aluminium | 204.300 | --
Cylindrical rod | Copper | 385.000 | k_rod
Coolant | Water | 0.563 | k_coolant
Shield | Iron | 71.800 | k_shield

4. Mathematical Modelling

Table 3
Tabulation of the input parameters of the model.
Components | Symbol | Value | Reference
Forced convection, medium: air
Ambient temperature (K) | T0 | 300 |
Velocity (m s^-1) | | 5 |
Surface-to-surface radiation
Emissivity | e_amb | 0.80 | [21]
Stefan-Boltzmann constant (W m^-2 K^-4) | σ | 5.670373*10^-8 | [22]
Thermal conductivity of TEG material | k_TEG | 1.20 | [23]
Absorption coefficient (m^-1) | k_absorption | 0.50 | [24]
Scattering coefficient (m^-1) | k_scattering | 0.01 | [24]
Acknowledgements
The authors would like to acknowledge the technical staff of The Energy and Resources Institute, New Delhi, India, for conducting the study. | 28,872 | [
"1004402",
"1004403"
] | [
"367774",
"489692"
] |
01492955 | en | [
"info"
] | 2024/03/04 23:41:50 | 2015 | https://hal.science/hal-01492955/file/Meyer_HSCC15.pdf | Pierre-Jean Meyer
email: [email protected]
Antoine Girard
email: [email protected]
Emmanuel Witrant
email: [email protected]
Poster: Symbolic Control of Monotone Systems: Application to Ventilation Regulation in Buildings *
Keywords: I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search-Control theory, J.7 [Computer Applications]: Computers in other systems-Command and control Symbolic control, Monotone system, Application
We describe an application of symbolic control to ventilation regulation in buildings. The monotonicity property of a nonlinear control system subject to disturbances, modeling the process, is exploited to obtain symbolic abstractions, in the sense of alternating simulation. The resulting abstractions consist of non-deterministic finite transition systems, for which we can synthesize supervisory safety controllers to keep the room temperatures within prescribed bounds. To choose among possible control inputs preserving safety, we consider the problem of minimizing a given cost function and apply a receding horizon control scheme. The approach has been applied to temperature regulation on a small-scale building equipped with underfloor air distribution (UFAD). To the best of our knowledge, this is the first report of experimental implementation of symbolic controllers.
SYMBOLIC ABSTRACTION
We consider a nonlinear control system of the form

ẋ = f(x, u, w), with x ∈ R^n, u ∈ R^p and w ∈ R^q,  (1)

where x denotes the state, u the control input and w the disturbance input. We assume that the control and disturbance inputs are bounded in multidimensional intervals: u ∈ [u̲, ū] and w ∈ [w̲, w̄]. The trajectories of the system are denoted Φ(•, x_0, u, w), where Φ(t, x_0, u, w) is the state reached at time t ∈ R₀⁺ from the initial state x_0 ∈ R^n, under piecewise continuous control and disturbance inputs u : R₀⁺ → R^p and w : R₀⁺ → R^q. We also assume that the system is cooperative, which is a subclass of monotone systems [START_REF] Angeli | Monotone control systems[END_REF].

* This work was partly supported by a PhD scholarship and the research project COHYBA funded by Région Rhône-Alpes.
Definition 1 (Cooperative system). System (1) is cooperative if, for all x ≥ x', u ≥ u' and w ≥ w', it holds for all t ≥ 0 that Φ(t, x, u, w) ≥ Φ(t, x', u', w'), where ≥ denotes the componentwise inequality.
We describe the dynamics of the sampled version of system (1) with time period τ as a non-deterministic transition system S, as presented in [START_REF] Tabuada | Verification and control of hybrid systems: a symbolic approach[END_REF]. The control objective is to keep the state in an interval [x̲, x̄].
We define a symbolic abstraction of S as a finite transition system whose states are the elements of a partition of R^n, P* = P ∪ {Out}, where P is a partition of [x̲, x̄] into intervals. The abstraction is Sa = (Xa, Xa0, Ua, →), where the set of states Xa = P*, the set of initial states Xa0 = P, the set of inputs Ua is a discretization of [u̲, ū], and the transition relation is given, for all s = [s̲, s̄] ∈ P, s' ∈ P*, u ∈ Ua, by:

s →_u s' ⟺ s' ∩ [Φ(τ, s̲, u, w̲), Φ(τ, s̄, u, w̄)] ≠ ∅.
As we deal with transition systems with control inputs and non-determinism, we are interested in alternating simulation relations as behavioral relationships between S and Sa [START_REF] Tabuada | Verification and control of hybrid systems: a symbolic approach[END_REF]. The cooperativeness assumption allows us to prove the following result.
Proposition 1. The symbolic abstraction Sa is alternatingly simulated by the original transition system S.
As a consequence, if we design a safety controller for Sa keeping its state in P, the alternating simulation relation provides an equivalent safety controller for S keeping its state in [x̲, x̄].
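To make this construction concrete, the following is a minimal computational sketch of the abstraction step; it is our illustration, not the implementation used in the experiments. It exploits cooperativeness: the successors of a cell s = [s̲, s̄] under input u are over-approximated by the interval spanned by two simulations, one from the lower corner with minimal disturbance and one from the upper corner with maximal disturbance. The names (f, cells, inputs), the use of scipy for integration, and the encoding of the Out state as the string "Out" are assumptions of this sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(f, x0, u, w, tau):
    """Integrate x' = f(x, u, w) over [0, tau] with u and w held constant."""
    sol = solve_ivp(lambda t, x: f(x, u, w), (0.0, tau), x0, rtol=1e-6)
    return sol.y[:, -1]

def abstraction_transitions(f, cells, inputs, w_lo, w_hi, tau, x_lo, x_hi):
    """cells: list of (lower corner, upper corner) pairs partitioning [x_lo, x_hi];
    inputs: discretized input set Ua. By cooperativeness, the interval
    [Phi(tau, s_lo, u, w_lo), Phi(tau, s_hi, u, w_hi)] over-approximates the
    reachable set of cell s under u, so s -u-> s' iff cell s' intersects it."""
    trans = {}
    for i, (s_lo, s_hi) in enumerate(cells):
        for j, u in enumerate(inputs):
            lo = simulate(f, np.asarray(s_lo, float), u, w_lo, tau)
            hi = simulate(f, np.asarray(s_hi, float), u, w_hi, tau)
            succ = [k for k, (t_lo, t_hi) in enumerate(cells)
                    if np.all(lo <= np.asarray(t_hi)) and np.all(hi >= np.asarray(t_lo))]
            if np.any(lo < np.asarray(x_lo)) or np.any(hi > np.asarray(x_hi)):
                succ.append("Out")  # the successor interval leaves [x_lo, x_hi]
            trans[(i, j)] = succ
    return trans
```

Since only two simulations are needed per cell-input pair, regardless of the state dimension, this is what makes fine partitions (such as the 10^4 intervals used below) tractable.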
SYMBOLIC CONTROL
Using a classical fixed-point algorithm [START_REF] Wonham | On the supremal controllable sublanguage of a given language[END_REF], we can synthesize a supervisory safety controller C : P → 2^Ua for Sa keeping its state in P.
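A minimal sketch of this fixed point is given below; it is ours and assumes the data structures of the previous sketch (states and inputs are indices, and trans maps (state, input) pairs to successor lists possibly containing "Out"). Inputs whose successors may leave the current safe set are discarded, then states with no admissible input left are discarded, until stabilization.

```python
def safety_controller(states, inputs, trans):
    """Maximal supervisory safety controller on the finite abstraction."""
    safe = set(states)
    ctrl = {s: {u for u in inputs if "Out" not in trans[(s, u)]} for s in states}
    changed = True
    while changed:
        changed = False
        for s in list(safe):
            # keep only inputs whose every successor remains in the safe set
            ctrl[s] = {u for u in ctrl[s] if all(t in safe for t in trans[(s, u)])}
            if not ctrl[s]:          # s is uncontrollable: prune it
                safe.discard(s)
                changed = True
    return safe, ctrl
```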
To choose among possible control inputs preserving safety, we consider the cost function J_0 defined iteratively by

J_N(s) = ĝ(s),
J_k(s) = min_{u ∈ C(s)} [ g(s, u) + λ max_{s →_u s'} J_{k+1}(s') ],

where N ∈ N is the time horizon, λ ∈ (0, 1) is a discount factor, and ĝ : P → R₊ and g : P × Ua → R₊ are cost functions. Then, we apply a receding horizon control scheme given by the controller for Sa:

C*_a(s) = arg min_{u ∈ C(s)} [ g(s, u) + λ max_{s →_u s'} J_1(s') ].
For the original transition system S, we define the associated controller C* given, for all s ∈ P and x ∈ s, by C*(x) = C*_a(s). Note that all the above computations required to obtain C* (abstraction and controller synthesis) can be done offline. We can also prove the following result, showing that C* ensures safety of S with performance guarantees.
Proposition 2. Let (x_0, u_0, x_1, u_1, ...) be a trajectory of S controlled with C*; then, for all k ∈ N, x_k ∈ [x̲, x̄]. Moreover, let s_0, s_1, ... ∈ P be such that x_k ∈ s_k for all k ∈ N. Then it holds, for all k ∈ N, that

Σ_{i=0}^{+∞} λ^i g(s_{k+i}, u_{k+i}) ≤ J_0(s_k) + (λ^{N+1} / (1 − λ)) M,

where M is an upper bound of the functions g and ĝ.
UNDERFLOOR AIR DISTRIBUTION
The UnderFloor Air Distribution (UFAD) is an alternative solution to traditional ceiling-based ventilation in buildings, where the air is cooled down in an underfloor plenum and then sent into each room when needed. The system considered is based on a 4-room small-scale experimental building equipped with UFAD, sketched in Figure 2. A model of the temperature variations in each room is derived from the energy and mass conservation equations in the room [START_REF] Meyer | Controllability and invariance of monotone systems for robust ventilation automation in buildings[END_REF]. The obtained model is an ordinary differential equation involving the temperature of each room (the state), the ventilation from the underfloor (control input in each room) and continuous and discrete disturbances (outside temperature, door opening, etc.). This model is proven to be cooperative [START_REF] Meyer | Controllability and invariance of monotone systems for robust ventilation automation in buildings[END_REF] and validated by an identification procedure on the building [START_REF] Meyer | Experimental implementation of UFAD regulation based on robust controlled invariance[END_REF].
The symbolic control method is applied to this model and the resulting control strategy is implemented in the 4-room experimental building. Figure 1 displays the measured temperatures (dashed blue, on the left axis) and the controlled ventilation (plain green, on the right axis), discretized into 256 values. The prescribed bounds on the temperature are represented by dash-dotted horizontal lines on the figure. The symbolic abstraction was computed on a partition consisting of 10^4 intervals. The performance criterion specifies the desired tradeoff between the magnitude of the control inputs, their variations and the distance of the state to the center of the interval given by the temperature bounds, with a time horizon N = 5 and discount factor λ = 0.5. We can see that the safety specification is met: the temperatures are maintained within the prescribed bounds despite the effect of external disturbances.
Figure 1: UFAD experiment controlled with a symbolic method.
Figure 2: 4-room flat equipped with UFAD.
"1231646",
"7451",
"3856"
] | [
"398719",
"1289",
"388748"
] |
01199160 | en | [
"info"
] | 2024/03/04 23:41:50 | 2017 | https://inria.hal.science/hal-01199160/file/epm_pami.pdf | Gaurav Sharma, Member, IEEE
Frédéric Jurie
Cordelia Schmid, Fellow, IEEE
Expanded Parts Model for Semantic Description of Humans in Still Images
Keywords: human analysis, attributes, actions, part-based model, mining, semantic description, image classification.
We introduce an Expanded Parts Model (EPM) for recognizing human attributes (e.g. young, short hair, wearing suits) and actions (e.g. running, jumping) in still images. An EPM is a collection of part templates which are learnt discriminatively to explain specific scale-space regions in the images (in human centric coordinates). This is in contrast to current models which consist of a relatively few (i.e. a mixture of) 'average' templates. EPM uses only a subset of the parts to score an image and scores the image sparsely in space, i.e. it ignores redundant and random background in an image. To learn our model, we propose an algorithm which automatically mines parts and learns corresponding discriminative templates together with their respective locations from a large number of candidate parts. We validate our method on three recent challenging datasets of human attributes and actions. We obtain convincing qualitative and state-of-the-art quantitative results on the three datasets.
INTRODUCTION
The focus of this paper is on semantically describing humans in still images using attributes and actions. It is natural to describe a person with attributes, e.g. age, gender, clothes, as well as with the action the person is performing, e.g. standing, running, playing a sport. We are thus interested in predicting such attributes and actions for human centric still images. While actions are usually dynamic, many of them are recognizable from a single static image, mostly due to the presence of (i) typical poses, like in the case of running and jumping, or (ii) a combination of pose, clothes and objects, like in the case of playing tennis or swimming.
With the incredibly fast growth of human centric data, e.g. on photo sharing and social networking websites or from surveillance cameras, analysis of humans in images is more important than ever. The capability to recognize human attributes and actions in still images could be used for numerous related applications, e.g. indexing and retrieving humans w.r.t. queries based on higher level semantic descriptions.
Human attributes and action recognition have been addressed mainly by (i) estimation of human pose [START_REF] Yang | Recognizing human actions from still images with latent poses[END_REF], [START_REF] Yao | Modeling mutual context of object and human pose in human-object interaction activities[END_REF] or (ii) with general non-human-specific image classification methods [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Learning discriminative representation for image classification[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF], [START_REF] Yao | Combining randomization and discrimination for fine-grained image categorization[END_REF]. State-of-the-art action recognition performance has been achieved without solving the problem of pose estimation [START_REF] Yang | Recognizing human actions from still images with latent poses[END_REF], [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF], [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2011[END_REF], which is a challenging problem in itself. Concurrently, methods have been proposed to model interactions between humans and the object(s) associated with the actions [START_REF] Yao | Modeling mutual context of object and human pose in human-object interaction activities[END_REF], [START_REF] Delaitre | Learning person-object interactions for action recognition in still images[END_REF], [START_REF] Desai | Discriminative models for static human-object interactions[END_REF], [START_REF] Gupta | Observing humanobject interactions: Using spatial and functional compatibility for recognition[END_REF], [11], [START_REF] Yao | Grouplet: A structured image representation for recognizing human and object interactions[END_REF]. In relevant cases, modelling interactions between humans and contextual objects is an interesting problem, but here we explore the broader and complementary approach of modeling appearance of humans and their immediate context for attribute and action recognition. When compared to methods exploiting human pose and humanobject interactions, modelling appearance remains useful and complementary, while it becomes indispensable in the numerous other cases where there are no associated objects (e.g. actions like running, walking) and/or the pose is not immediately relevant (e.g. attributes like long hair, wearing a tee-shirt).
In this paper, we introduce a novel model for the task of semantic description of humans, the Expanded Parts Model (EPM). The input to an EPM is a human-centered image, i.e. it is assumed that the human positions in form of bounding boxes are available (e.g. from a human detection algorithm). An EPM is a collection of part templates, each of which can explain specific scale-space regions of an image. Fig. 1 illustrates learning and testing with EPM. In part based models the choice of parts is critical; it is not immediately obvious what the parts might be and, in particular, should they be the same as, or inspired by, the biologic/anatomic parts. Thus, the proposed method does not make any assumptions on what the parts might be, but instead mines the parts most relevant to the task, and jointly learns their discriminative templates, from among a large set of randomly sampled (in scale and space) candidate parts. Given a test image, EPM recognizes a certain action or attribute by scoring it with the corresponding learnt part templates. As human attributes and actions are often localized in space, e.g. shoulder regions for 'wearing a tank top', our model explains the images only partially with the most discriminative regions, as illustrated in Fig. 1 (right). During training we select sufficiently discriminative spatial evidence and do not include regions with low discriminative value or regions containing non-discriminative background. The parts in an EPM compete to explain an image, and different parts might be used for different images. This is in contrast with traditional part based discriminative models where all parts are used for every image.
EPM is inspired by models exploiting sparsity. In their seminal paper, Olshausen and Field [START_REF] Olshausen | Sparse coding with an overcomplete basis set: A strategy employed by v1?[END_REF] argued for a sparse coding with an over-complete basis set, as a possible computation model in the human visual system. Since then sparse coding has been applied to many computer vision tasks, e.g. image encoding for classification [START_REF] Yang | Linear spatial pyramid matching using sparse coding for image classification[END_REF], [START_REF] Yang | Efficient highly over-complete sparse coding using a mixture model[END_REF], image denoising [START_REF] Mairal | Online learning for matrix factorization and sparse coding[END_REF], image super-resolution [START_REF] Yang | Image superresolution via sparse representation[END_REF], face recognition [START_REF] Wright | Robust face recognition via sparse representation[END_REF] and optical flow [START_REF] Jia | Optical flow estimation using learned sparse model[END_REF]. EPM employs sparsity in two related ways; first the image scoring uses only a small subset of the model parts and second scoring happens with only partially explaining the images spatially. The former model-sparsity is inspired by the coding of information sparsely with an over-complete model, similar to Olshausen and Field's idea [START_REF] Olshausen | Sparse coding with an overcomplete basis set: A strategy employed by v1?[END_REF]. Owing to such sparsity, while the individual model part interactions are linear, the overall model becomes nonlinear [START_REF] Olshausen | Sparse coding with an overcomplete basis set: A strategy employed by v1?[END_REF]. The second spatial sparsity is a result of the simple observation that many of the attributes and actions are spatially localized, e.g. for predicting if a person is wearing a tank top, only the region around the neck and shoulders needs to be inspected, hence the model shouldn't waste capacity for explaining anything else (in the image space).
To learn an EPM, we propose to use a learning algorithm based on regularized loss minimization and margin maximization (Sec. 3). The learning algorithm mines important parts for the task, and learns their discriminative templates from a large pool of candidate parts.
Specifically, EPM candidate parts are initialized with O(10^5) randomly sampled regions from training images. The learning then proceeds in a stochastic gradient descent framework (Sec. 3.3); a randomly sampled training image is scored using up to k model parts, and the model is updated accordingly (Sec. 3.2). After some passes over the data, the model is pruned by removing the parts which were never used to score any training image sampled so far. The process is repeated for a fixed number of iterations to obtain the final trained EPM. The proposed method is validated on three publicly available datasets of human attributes and actions, obtaining interesting qualitative (Sec. 4.2) and state-of-the-art or comparable quantitative results (Sec. 4.1). A preliminary version of this work was reported in Sharma et al. [START_REF] Sharma | Expanded parts model for human attribute and action recognition in still images[END_REF].
RELATED WORK
We now discuss the related work on modeling, in particular models without parts, part-based structured models and part-based loosely structured models.
Models without parts
Image classification algorithms have been shown to be successful for the task of human action recognition; see Everingham et al. [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2011[END_REF] for an overview of many such methods. Such methods generally learn a discriminative model for each class. For example, in the Spatial Pyramid method (SPM), Lazebnik et al. [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF] represent images as a concatenation of bag-of-features (BoF) histograms [START_REF] Csurka | Visual categorization with bags of keypoints[END_REF], [START_REF] Sivic | Video Google: A text retrieval approach to object matching in videos[END_REF], with pooling at multiple spatial scales over a learnt codebook of local features, like the Scale Invariant Feature Transform (SIFT) of Lowe [START_REF] Lowe | Distinctive image features form scale-invariant keypoints[END_REF]. Lazebnik et al. [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF] then learn a discriminative class model w using a margin-maximizing classifier, and score an image as w^T x, with x being the image vector. The use of histograms destroys 'template'-like properties due to the loss of spatial information. Although SPM has never been viewed as a template learning method, methods using gradient-based features [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], [START_REF] Benenson | Pedestrian detection at 100 frames per second[END_REF], [START_REF] Dollár | Fast feature pyramids for object detection[END_REF], [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF] have
been presented as such, e.g. the recent literature is full of visualizations of templates (class models) learnt with HOG-like [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] features, e.g. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], [START_REF] Pandey | Scene recognition and weakly supervised object localization with deformable part-based models[END_REF]. Both SPM- and HOG-based methods have been applied to the task of human analysis [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Khan | Coloring action recognition in still images[END_REF], where they were found to be successful. We also formulate our model in a discriminative template learning framework. However, we differ in that we learn a collection of templates instead of a single template.

Fig. 3. Illustrations of scoring for different images, attributes and actions (riding horse; arms bent; female; bermuda shorts; riding bike; using computer; formal suit). Note how the model scores only the discriminative regions in the image while ignoring the non-discriminative or background regions (in black). Such spatial sparsity is particularly interesting when the discriminative information is expected to be localized in space, as in the case of many human attributes and actions.
In the recently proposed Exemplar SVM (ESVM) work, Malisiewicz et al. [START_REF] Malisiewicz | Ensemble of Exemplar-SVMs for object detection and beyond[END_REF] propose to learn discriminative templates for each object instance of the training set independently and then combine their calibrated outputs on test images as a post-processing step. In contrast, we work at a part level and use all templates together during both training and testing. More recently, Yan et al. [START_REF] Yan | Beyond spatial pyramids: A new feature extraction framework with dense spatial sampling for image classification[END_REF] proposed a 2-level approach for image representation. Similar to our approach it involves sampling image regions, but while they vector quantize the region descriptors, we propose a mechanism to select discriminative regions and build discriminative part based models from them.
Works have also been reported using features which exploit motion for recognizing and localizing human actions in videos [START_REF] Jain | Better exploiting motion for better action recognition[END_REF], [START_REF] Jain | Action localization with tubelets from motion[END_REF], [START_REF] Oneata | Efficient action localization with approximately normalized fisher vectors[END_REF], [START_REF] Wang | Action recognition with improved trajectories[END_REF], [START_REF] Laptev | Learning realistic human actions from movies[END_REF], [START_REF] Simonyan | Two-stream convolutional networks for action recognition in videos[END_REF]. Wang and Schmid [START_REF] Wang | Action recognition with improved trajectories[END_REF] use trajectories, Jain et al. use tubelets [START_REF] Jain | Action localization with tubelets from motion[END_REF] while Simonyan et al. [START_REF] Simonyan | Two-stream convolutional networks for action recognition in videos[END_REF] propose a two-stream convolutional network. Here, we are interested in human action and attribute recognition, but only from still images and hence do not have motion information.
Part-based structured models
Generative or discriminative part-based models (e.g. the Constellation model by Fergus et al. [START_REF] Fergus | Weakly supervised scale-invariant learning of models for visual recognition[END_REF] and the Discriminative Part-based Model (DPM) by Felzenszwalb et al. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]), have led to state-of-the-art results for objects that are rigid or, at least, have a simple and stable structure. In contrast humans involved in actions can have huge appearance variations due to appearance changes (e.g. clothes, hair style, accessories) as well as articulations and poses. Furthermore, their interaction with the context can be very complex. Probably because of the high complexity of tasks involving humans, DPM does not perform better than SPM for human action recognition as was shown by Delaitre et al. [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF]. Increasing the model complexity, e.g. by using a mixture of components [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], has been shown to be beneficial for object detection 1 . Such increase in model complexity is even more apparent in similar models for finer human analysis, e.g. pose estimation [START_REF] Desai | Detecting actions, poses, and objects with relational phraselets[END_REF], [START_REF] Yang | Articulated pose estimation with flexible mixtures-of-parts[END_REF], [START_REF] Zhu | Face detection, pose estimation, and landmark localization in the wild[END_REF], where a relatively large number of components and parts are used. Note that components account for coarse global changes in aspect/viewpoint, e.g. full body frontal image, full-body profile image, upper body frontal image and so on, whereas parts account for the local variations of the articulations, e.g. hands up or down. Supported by a systematic empirical study, Zhu et al. [START_REF] Zhu | Do we need more training data or better models for object detection?[END_REF] recently recommended the design of carefully regularized richer (with a larger number of parts and components) models. Here, we propose a richer and higher capacity model, but less structured, the Expanded Parts Model.
In mixture of components models, the training images are usually assigned to a single component (see Fig. 2 for an illustration) and thus contribute to training one of the templates only. Such clustering like property limits their capability to generate novel articulations, as sub-articulation in different components cannot be combined. Such clustering and averaging are a form of regularization and involve manually setting the number of parts and components. In comparison, the proposed EPM does not enforce similar averaging, nor does it forbid it by definition. It can have a large number of parts (up to the order of the number of training images) if found necessary despite sufficient regularization. Part-based deformable models initialize the parts either with heuristics (e.g. regions with high average energy [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]) or use annotations [START_REF] Desai | Detecting actions, poses, and objects with relational phraselets[END_REF], while EPM systematically explores parts at a large number of locations, scales and atomicities and selects the ones best suited for the task.
1. See the results of different versions of the DPM software http://people.cs.uchicago.edu/∼rgb/latent/ which, along with other improvements, steadily increase the number of components and parts.
Part-based loosely structured models
EPM bears some similarity with Poselets by Bourdev et al. [START_REF] Bourdev | Describing people: Poseletbased attribute classification[END_REF], [START_REF]Describing people: A poselet-based approach to attribute classification[END_REF], [START_REF] Bourdev | Poselets: Body part detectors trained using 3D human pose annotations[END_REF], [START_REF] Maji | Action recognition from a distributed representation of pose and appearance[END_REF], which are compound parts consisting of multiple anatomical parts, highly clustered in 3D configuration space, e.g. head and shoulders together. Poselets vote independently for a hypothesis, and are shown to improve performance. However, they are trained separately from images annotated specifically in 3D. In contrast, EPM tries to mine out such parts, at the required atomicity, from given training images for a particular task. Fig. 6 (top right) shows some of the parts for the 'female' class which show some resemblance with poselets, though are not as clean.
Methods such as Poselets and the proposed method are also conceptually comparable to the mid-level features based algorithms [START_REF] Boureau | Learning midlevel features for recognition[END_REF], [START_REF] Fathi | Action recognition by learning mid-level motion features[END_REF], [START_REF] Joo | Human attribute recognition by rich appearance dictionary[END_REF], [START_REF] Juneja | Blocks that shout: Distinctive parts for scene classification[END_REF], [START_REF] Lim | Sketch tokens: A learned mid-level representation for contour and object detection[END_REF], [START_REF] Oquab | Learning and transferring mid-level image representations using convolutional neural networks[END_REF], [START_REF] Sabzmeydani | Detecting pedestrians by learning shapelet features[END_REF], [START_REF] Singh | Unsupervised discovery of mid-level discriminative patches[END_REF], [START_REF] Sun | Learning discriminative part detectors for image classification and cosegmentation[END_REF]. While Singh et al. [START_REF] Singh | Unsupervised discovery of mid-level discriminative patches[END_REF] proposed to discover and exploit mid-level features in a supervised or semi-supervised way, with alternating between clustering and training discriminative classifiers for the clusters, Juneja et al. [START_REF] Juneja | Blocks that shout: Distinctive parts for scene classification[END_REF] proposed to learn distinctive and recurring image patches which are discriminative for classifying scene images using a seeding, expansion and selection based strategy. Lim et al. [START_REF] Lim | Sketch tokens: A learned mid-level representation for contour and object detection[END_REF] proposed to learn small sketch elements for contour and object analysis. Oquab et al. [START_REF] Oquab | Learning and transferring mid-level image representations using convolutional neural networks[END_REF] used the mid-level features learnt using CNNs to transfer information to new datasets. Boureau et al. [START_REF] Boureau | Learning midlevel features for recognition[END_REF] viewed combinations of popular coding and pooling methods as extracting mid-level features and analysed them. Sabzmeydani et al. [START_REF] Sabzmeydani | Detecting pedestrians by learning shapelet features[END_REF] proposed to learn mid level shapelets features for pedestrian detection. Yao et al. [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF] proposed to recognize human actions using bases of human attributes and parts, which can be seen as a kind of mid-level features. The proposed EPM explores the space of such mid-level features systematically under a discriminative framework and more distinctively uses only a subset of model parts for scoring cf. all model parts by the traditional methods. In a recent approach, Parizi et al. [START_REF] Parizi | Automatic discovery and optimization of parts for image classification[END_REF] propose to mine out parts using a 1 / 2 regularization with weights on parts. They alternate between learning the discriminative classifier on the pooled part response vector, and the weight vector on the parts. However, they differ from EPM as they used pooled response of all parts for an image while EPM considers absolute responses of the best subset of parts from among the collection of an over complete set of model parts.
Many methods have also been proposed to reconstruct images using patches, e.g. Similarity by Composition by Boiman and Irani [START_REF] Boiman | Similarity by composition[END_REF], Implicit Shape Models by Leibe et al. [START_REF] Leibe | Robust object detection with interleaved categorization and segmentation[END_REF], Naive Bayes Nearest Neighbors (NBNN) by Boiman et al. [START_REF] Boiman | In defense of nearestneighbor based image classification[END_REF], and Collaborative Representation by Zhu et al. [START_REF] Zhu | Multi-scale patch based collaborative representation for face recognition with margin distribution optimization[END_REF]. Similarly sparse representation has been also used for action recognition in videos [START_REF] Guha | Learning sparse representations for human action recognition[END_REF]. However, while such approaches are generative and are generally based on minimizing the reconstruction error, EPM aims to mine out good patches and learn corresponding discriminative templates with the direct aim of achieving good classification.
Description of humans other than actions and attributes
Other forms of descriptions of humans have also been reported in the literature. E.g. pose estimation [START_REF] Andriluka | 2D human pose estimation: New benchmark and state of the art analysis[END_REF], [START_REF] Charles | Automatic and efficient human pose estimation for sign language videos[END_REF], [START_REF] Dantone | Body parts dependent joint regressors for human pose estimation in still images[END_REF], [START_REF] Fan | Combining local appearance and holistic view: Dual-source deep neural networks for human pose estimation[END_REF], [START_REF] Tompson | Joint training of a convolutional network and a graphical model for human pose estimation[END_REF], [START_REF] Toshev | DeepPose: Human pose estimation via deep neural networks[END_REF] and using pose related methods for action [START_REF] Vemulapalli | Human action recognition by representing 3D skeletons as points in a lie group[END_REF], [START_REF] Thurau | Pose primitive based human action recognition in videos or still images[END_REF], [START_REF] Chen | Describing clothing by semantic attributes[END_REF], [START_REF] Yao | Action recognition with exemplar based 2.5D graph matching[END_REF], [START_REF] Zhang | Panda: Pose aligned networks for deep attribute modeling[END_REF] and attribute [START_REF] Chen | Describing clothing by semantic attributes[END_REF] recognition have been studied in computer vision. Recognizing attributes from the faces of humans [START_REF] Bourdev | Describing people: Poseletbased attribute classification[END_REF], [START_REF] Ma | Unsupervised learning of discriminative relative visual attributes[END_REF], [START_REF] Kumar | Describable visual attributes for face verification and image search[END_REF], recognizing facial expressions [START_REF] Wang | Action recognition with improved trajectories[END_REF], [START_REF] Rudovic | Coupled gaussian processes for pose-invariant facial expression recognition[END_REF], [START_REF] Sharma | Local higher-order statistics (LHS) for texture categorization and facial analysis[END_REF], [START_REF] Wan | Spontaneous facial expression recognition: A robust metric learning approach[END_REF] and estimating age from face images [START_REF] Li | Learning ordinal discriminative features for age estimation[END_REF], [START_REF] Chang | A learning framework for age rank estimation based on face images with scattering transform[END_REF], [START_REF] Geng | Automatic age estimation based on facial aging patterns[END_REF], [START_REF] Guo | A study on automatic age estimation using a large database[END_REF], [START_REF] Guo | A study on human age estimation under facial expression changes[END_REF] have also attracted fair attention. Shao et al. [START_REF] Shao | What do you do? occupation recognition in a photo via social context[END_REF] aimed to predict the occupation of humans from images, which can be seen as a high-level attribute. In the present work, we work with full human bodies where the faces may or may not be visible and the range of poses may be unconstrained. Although some of the attributes and actions we consider here are correlated with pose, we do not attempt to solve the challenging problem of pose first and then infer the said attributes and actions. We directly model such actions and attributes from the full appearance of the human, expecting the model to make such latent factorization, implicitly within itself, if required.
In addition to the works mention above, we also refer the reader to Guo and Lai [START_REF] Guo | A survey on still image based human action recognition[END_REF], for a survey of the general literature for the task of human action recognition from still images.
EXPANDED PARTS MODEL APPROACH
We address the problem in a supervised classification setting. We assume that a training set of images and their corresponding binary class labels, i.e.
T = {(x_i, y_i) | x_i ∈ I, y_i ∈ {−1, +1}, i = 1, ..., m}  (1)
are available, where I is the space of images. We intend to learn a scoring function parametrized by the model parameters Θ,
s_Θ : I → R, Θ ∈ M,  (2)
where M is a class of models (details below); the function takes an image and assigns a real-valued score reflecting the membership of the image to the class. In the following we abuse notation and use Θ to denote either the parameters of, or the learnt model itself. We define an Expanded Parts Model (EPM) to be a collection of discriminative templates, each with an associated scale-space location. Image scoring with EPM is defined as aggregating the scores of the most discriminative image regions corresponding to a subset of model parts. The scoring thus (i) uses a specific subset (different for different images) of model parts and (ii) only scores the discriminative regions, instead of the whole image. We make these notions formal in the next section (Sec. 3.1).
Formulation as regularized loss minimization
Our model is defined as a collection of discriminative templates with associated locations, i.e.
Θ ∈ M = {(w, ℓ) | w ∈ R^{Nd}, ℓ ∈ [0, 1]^{4N}}  (3)
where N ∈ N is the number of parts, d ∈ N is the dimension of the appearance descriptor,
w = [w_1, ..., w_N], w_p ∈ R^d, p = 1, ..., N  (4)
is the concatenation of the N part templates, and
ℓ = [ℓ_1, ..., ℓ_N] ∈ [0, 1]^{4N}  (5)
is the concatenation of their scale-space positions, with each ℓ_p specifying a bounding box, i.e.
ℓ_p = [x̃_1, ỹ_1, x̃_2, ỹ_2] ∈ [0, 1]^4, p = 1, ..., N,  (6)
where x̃ and ỹ are fractional multiples of width and height, respectively.
We propose to learn our model with regularized loss minimization over the training set T, with the objective

L(Θ; T) = (λ/2) ||w||_2^2 + (1/m) Σ_{i=1}^m max(0, 1 − y_i s_Θ(x_i)),  (7)
with s_Θ(•) being the scoring function (Sec. 3.2). Our objective is the same as that of linear support vector machines (SVMs) with hinge loss. The only difference is that we have replaced the linear score function, i.e.
s_w(x) = w^T x,  (8)
with our scoring function. The free parameter λ ∈ R sets the trade-off between model regularization and the loss minimization as in the traditional SVM algorithm.
Scoring function
We define the scoring function as
s_Θ(x) = max_α (1/||α||_0) Σ_{p=1}^N α_p w_p^T f(x, ℓ_p)  (9a)
s.t. ||α||_0 = k,  (9b)
O_v(α, ℓ) ≤ β,  (9c)
where w_p ∈ R^d is the template of part p, and f(x, ℓ_p) is the feature extraction function which calculates the appearance descriptor of the image x for the patch specified by ℓ_p,
α = [α_1, ..., α_N] ∈ {0, 1}^N  (10)
are the binary coefficients which specify whether a model part is used to score the image or not, and O_v(α, ℓ) measures the extent of overlap between the parts selected to score the image. The ℓ_0-norm constraint on α enforces the use of k parts for scoring, while the second constraint encourages coverage in reconstruction by limiting high overlaps. k ∈ N and β ∈ R are free parameters of the model. Intuitively, the scoring function uses each model part w_p to score the corresponding region ℓ_p in the image x and then selects k parts maximizing the average score, while constraining the overlap measure between the selected parts to be less than a fixed threshold β.
Our scoring function is inspired by the methods of (i) image scoring with learnt discriminative templates, e.g. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], [START_REF] Hussain | Feature sets and dimensionality reduction for visual object detection[END_REF] and (ii) those of learnt patch dictionary based image reconstruction [START_REF] Mairal | Online learning for matrix factorization and sparse coding[END_REF]. We are motivated by these two principles in the following way. First, by incorporating latent variables, which effectively amount to a choice of the template(s) that is (are) being used for the current image, the full-scoring function can be made nonlinear (piecewise linear, to be more precise) while keeping the interaction with each template as linear. This allows learning of more complex and nonlinear models, especially in an Expectation Maximization (EM) type algorithm, where algorithms to learn linear templates can be used once the latent variables are fixed, e.g. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], [START_REF] Hussain | Feature sets and dimensionality reduction for visual object detection[END_REF]. Second, similar to the learnt patch dictionary-based reconstruction, we want to have a spatially distributed representation of the image content, albeit in a discriminative sense, where image regions are treated independently instead of working with a monolithic global model. With a discriminative perspective, we would only like to score promising regions, and use only a subset of model parts, in the images and ignore the background or non-discriminative parts. Exploiting this could be quite beneficial especially as the discriminative information for human actions and attributes is often localized in space, i.e. for 'riding horse' only the rider and the horse are discriminative and not the background and for 'wearing shorts' only the lower part of the (person centric) image is important. In addition, the model could be over-complete and store information about the same part at different resolutions, which could lead to possible over-counting, i.e. scoring same image region multiple times with different but related model parts, as well; not forcing the use of all model parts can help avoid this over-counting.
Hence, we design the scoring function to score the images with the model parts which are most capable of explaining the possible presence of the class in the image, while (i) using only a subset of relevant parts from the set of all model parts and (ii) penalizing high overlap of parts used, to exploit localization and avoid over-counting as discussed above. We aim, thus, to score the image content only partially (in space) with the most important parts only.
We confirm such behavior of the model with qualitative results in Sec. 4.2.
Solving the optimization problem
We propose to solve the model optimization problem using stochastic gradient descent. We use the stochastic approximation to the sub-gradient w.r.t. w given by,
∇_w L = λw − δ_i (1/||α||_0) [α_1 f(x, ℓ_1)^T, ..., α_N f(x, ℓ_N)^T]^T  (11)

where the α_p are obtained by solving Eq. 9, and

δ_i = 1 if y_i s_Θ(x_i) < 1, and 0 otherwise.  (12)
Alg. 1 gives the pseudo-code for our learning algorithm. The algorithm proceeds by scoring (and thus calculating the α for) the current example with w fixed, and then updating w with α fixed, as in a traditional EM-like method. The scoring function is a constrained binary linear program, which is NP-hard. Continuous relaxation is a popular way of handling such optimizations, i.e. relax the α_i to be real in the interval [0, 1] and replace ||α||_0 with ||α||_1, then solve the resulting continuous constrained linear program and obtain the binary values by thresholding/rounding the continuous optimum. However, managing the overlap constraint with continuously selected parts would require additional thought. We instead take a simpler and direct route via an approximate greedy approach. Starting with an empty set of selected parts, we greedily add to it the best scoring part which does not overlap appreciably with any of the currently selected parts, for the current image. The overlap is measured using intersection over union [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2011[END_REF], and two parts are considered to overlap significantly with each other if their intersection over union is more than 1/3. During training we have an additional constraint on scoring, i.e. α^T J ≤ 1, where J ∈ {0, 1}^{N×m} with J(p, q) = 1 if the p-th part was sampled from the q-th training image, and 0 otherwise. The constraint is enforced by ignoring all the parts that were initialized from the training images of the currently selected parts.

Algorithm 1 Learning the Expanded Parts Model
...
8:  for all (x_i, y_i) ∈ S do
9:      Solve Eq. 9 to get s_Θ(x_i) and α
10:     δ_i ← binarize(y_i s_Θ(x_i) < 1)
11:     w ← w(1 − η_{y_i} λ) + δ_i y_i η_{y_i} (1/||α||_0) [α_1 f(x_i, ℓ_1)^T, ..., α_N f(x_i, ℓ_N)^T]^T
...
15:     if iter = 5 then η ← η/5 end if
...
17: end for
One of our main intentions is to address important limitations of the current methods: automatically selecting the task-specific discriminative parts at the appropriate scale space locations. The search space for finding such parts is very high, as all possible regions in the training images are potential candidates to be discriminative model parts. We address part mining by two major steps. First, we resort to randomization for generating the initial pool of candidate model parts. We randomly sample part candidates from all the training images, to initialize a highly redundant model. Second, we mine out the discriminative parts from this set by successive pruning. With our learning set in a stochastic 2. [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF] achieve the same effect by biased sampling from the two classes. paradigm, we proceed as follows. We first perform a certain number of passes over randomly shuffled training images and keep track of the parts used while updating them to learn the model (recall that not all parts are used to score images and, hence, potentially not all parts in the model, especially when it is highly redundant initially, will be used to score all the training images). We then note that the parts which are not used by any image will only be updated due to the regularization term and will finally get very small weights. We accelerate this shrinking process, and hence the learning process, by pruning them. Such parts are expected to be either redundant or just non-discriminative background; empirically we found that to be the case; Fig. 4 shows some examples of the kind of discriminative parts, at multiple atomicities, that were retained by the model (for 'riding a bike' class) while also some redundant parts as well as background parts which were discarded by the algorithm.
Relation with latent SVM
Our Expanded Parts Model learning formulation is similar to a latent support vector machine (LSVM) formulation, which optimizes (assuming a hinge loss function)
L(w; T ) = λ 2 ||w|| 2 2 + 1 m m i=1 max(0, 1 -y i s L (x i )), (13)
where the scoring function is given as
s L (x) = max z w g(x, z), (14)
with z being the latent variable (e.g. part deformations in Deformable Parts-based Model (DPM) [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]) and g(•), the feature extraction function. The α, in our score function Eq. 9, can be seen as the latent variable (one for each image). Consequently, the EPM can be seen as a latent SVM similar to the recently proposed model for object detection by Felzenszwalb et al. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF].
In such latent SVM models the objective function is semiconvex [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], i.e. it is convex for the negative examples. Such semi-convexity follows from the convexity of scoring function, with similar arguments as in Felzenszwalb et al. (Sec. 4 in [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]). The scoring function is a max over functions which are all linear in w, and hence is convex in w which in turn makes the objective function semi-convex. Optimizing while exploiting semi-convexity gives guarantees that the value of the objective function will either decrease or stay the same with each update. In the present case, we do not follow Felzenszwalb et al. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF] in training, i.e. we do not exploit semi-convexity as in practice we did not observe a significant benefit in doing so. Despite there being no theoretical guarantee of convergence, we observed that, if the learning rate is not aggressive, training as proposed leads to good convergence and performance.Fig. 5 shows a typical case demonstrating the convergence of our algorithm, it gives the value of the objective function, the evolution of the model, in terms of number of parts, and the performance of the system vs. iterations (Step 4, Alg. 1), for 'interacting with a computer' class of the Willow Actions dataset.
Appearance features and visualization of scoring
As discussed previously, HOG features are not well adapted to human action recognition. We therefore resort, in our approach, to using appearance features, i.e. the bag-of-features (BoF), for EPM. When we use such an appearance representation, the so-obtained discriminative models (similar to [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF]) cannot be called templates (cf. HOG-based templates [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]). Thus, in the following, we use the word template to loosely denote the similar concept in the appearance descriptor space. Note, however, that the proposed method is feature-agnostic and can potentially be used with any arbitrary appearance descriptor, e.g. BoF [START_REF] Csurka | Visual categorization with bags of keypoints[END_REF], [START_REF] Sivic | Video Google: A text retrieval approach to object matching in videos[END_REF], HOG [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], GIST [START_REF] Oliva | Modeling the shape of the scene: A holistic representation of the spatial envelope[END_REF], CNN [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF] etc.
Since we initialize our parts with the appearance descriptors (like BoF) of patches from training images (see Sec. 4 for details), we can use the initial patches to visualize the scoring instead of the final learnt templates as in the HOG case. This is clearly a loose association, as the initial patches evolve with training iterations to give the part templates w_p. However, we hope that the appearance of the initial patch will suffice as a proxy for visualizing the part. We found such an approximate strategy to give reasonable visualizations, e.g. Fig. 3 shows some visualizations of scoring for different classes. While the averaging is not very good, the visualizations do give an approximate indication of which kind of image regions are scored and by which kinds of parts. We discuss these more in the qualitative results, Sec. 4.2.
Efficient computation using integral histograms
Since we work with a large number of initial model parts, e.g. O(10^5), the implementation of how such parts are used to score the images becomes an important algorithmic design aspect. In the naïve approach, scoring requires computing features for the N local regions corresponding to the model parts. Since N can be very large for the initial over-complete models, this is intractable. To circumvent this we use integral histograms [START_REF] Porikli | Integral histogram: A fast way to extract histograms in cartesian spaces[END_REF], i.e. a 3D data structure where we keep integral images corresponding to each dimension of the appearance feature. The concept was initially introduced by Crow [START_REF] Crow | Summed-area tables for texture mapping[END_REF] as summed-area tables for texture mapping. It has had many successful applications in computer vision as well [START_REF] Viola | Robust real-time object detection[END_REF], [START_REF] Bay | SURF: Speeded up robust features[END_REF], [START_REF] Veksler | Fast variable window for stereo correspondence using integral images[END_REF], [START_REF] Adam | Robust fragments-based tracking using the integral histogram[END_REF].
We divide the images with an axis-aligned regular grid containing rectangular non-overlapping cells. Denote the locations of the lattice points of the grid by X_g = {x^g_1, ..., x^g_s} and Y_g = {y^g_1, ..., y^g_t}, where x^g, y^g ∈ [0, 1] are fractional multiples of width and height, respectively. We compute the BoF histograms for image regions from (0, 0) to each of the lattice points (x_i, y_j), i.e. we compute the feature tensor F_x ∈ R^{s×t×d} for each image x, where the d-dimensional vector F_x(i, j, :) is the corresponding un-normalized BoF vector. When we do random sampling to get candidate parts to initialize the model (details in Sec. 4), we align the parts to the grid, i.e.
ℓ_p = [x̃_1, ỹ_1, x̃_2, ỹ_2], s.t. x̃_1 = x^g_i, ỹ_1 = y^g_j, x̃_2 = x^g_k, ỹ_2 = y^g_l, for some i, k ∈ {1, ..., s} and j, l ∈ {1, ..., t}.
Hence, to score an image with a part, we can efficiently compute the feature for the corresponding location as
f(x, ℓ_p) = F_x(x^g_k, y^g_l, :) + F_x(x^g_i, y^g_j, :) − F_x(x^g_i, y^g_l, :) − F_x(x^g_k, y^g_j, :).
f(x, ℓ_p) is then normalized appropriately before computing the score by a dot product with w_p. In this way we do not need to compute the features from scratch, for all regions corresponding to the model parts, every time an image needs to be scored. Also, this way we need to cache a fixed amount of data, i.e. the tensor F_x for every image x.
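A minimal sketch of this bookkeeping is given below (our illustration with placeholder shapes, not the paper's code): the cumulative tensor F is built once per image from per-cell BoF counts, after which any grid-aligned region descriptor is read off in O(d).

```python
import numpy as np

def integral_bof(cell_hists):
    """cell_hists: (s, t, d) array of per-cell unnormalized BoF counts on the
    grid; returns F with F[i, j] = histogram of the region from (0, 0) to
    lattice point (i, j), via a 2-D cumulative sum."""
    return cell_hists.cumsum(axis=0).cumsum(axis=1)

def region_feature(F, i, j, k, l):
    """Unnormalized BoF of the region spanned by lattice points (i, j) and
    (k, l), i < k and j < l, using the four-corner rule of f(x, l_p)."""
    h = F[k, l] + F[i, j] - F[i, l] - F[k, j]
    n = np.linalg.norm(h)
    return h / n if n > 0 else h        # l2-normalize before scoring
```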
EXPERIMENTAL RESULTS
We now present the empirical results of the different experiments we did to validate and analyze the proposed method. We first give the statistics of the datasets then give implementation details of our approach as well as our baseline and, finally, proceed to present and discuss our results on the three datasets.
The datasets. We validate and empirically analyze our method on three challenging publicly available datasets:
1) Willow 7 Human Actions is a dataset of 7 common human actions, e.g. riding bike, riding horse, running. 2) 27 Human Attributes (HAT) is a dataset of 27 human attributes, e.g. female, wearing jeans. For both datasets, the model is learnt with the train and validation sets and the performance is reported on the test set. 3) Stanford 40 Human Actions [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF] is a dataset of human actions with 40 diverse daily human actions, e.g. brushing teeth, cleaning the floor, reading books, throwing a frisbee. It has 180 to 300 images per class with a total of 9352 images. We used the suggested train and test split provided by the authors on the website, with 100 images per class for training and the rest for testing.
All images are human-centered, i.e. the human is assumed to be correctly detected by a previous stage of the pipeline. On all the three datasets, the performance is evaluated with average precision (AP) for each class and the mean average precision (mAP) over all classes.
BoF features and baseline. Like previous work [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF], [START_REF] Yao | Combining randomization and discrimination for fine-grained image categorization[END_REF] we densely sample grayscale SIFT features at multiple scales. We use a fixed step size of 4 pixels and use square patch sizes ranging from 8 to 40 pixels. We learn a vocabulary of size 1000 using k-means and assign the SIFT features to the nearest codebook vector (hard assignment). We use the VLFeat library [START_REF] Vedaldi | VLFeat: An open and portable library of computer vision algorithms[END_REF] for SIFT and k-means computation.
We use a four-level spatial pyramid with C = {c × c | c = 1, 2, 3, 4} cells [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF] as a baseline. To have non-linearity we use an explicit feature map [START_REF] Vedaldi | Efficient additive kernels using explicit feature maps[END_REF] with the BoF features. We use a map corresponding to the Bhattacharyya kernel, i.e. we take dimension-wise square roots of our ℓ1-normalized BoF histograms, obtaining ℓ2-normalized vectors, which we use with the baseline as well as with our algorithm. The baseline results are obtained with the liblinear [START_REF] Fan | LIBLINEAR: A library for large linear classification[END_REF] library.
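This map is simple enough to sketch in a few lines (ours, to illustrate the intended effect): an ℓ1-normalized histogram is mapped elementwise to its square root, which is ℓ2-normalized by construction, and dot products of mapped vectors equal the Bhattacharyya kernel between the histograms.

```python
import numpy as np

def bhattacharyya_map(h):
    """Explicit feature map: <map(h1), map(h2)> = sum_k sqrt(h1_k * h2_k)."""
    h = np.asarray(h, dtype=float)
    h = h / h.sum()          # l1 normalization
    return np.sqrt(h)        # l2 norm is now sqrt(sum_k h_k) = 1
```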
Context. The immediate context around the person, which might contain partially an associated object (e.g. horse in riding horse) and/or correlated background (e.g. grass in running), has been shown to be beneficial for the task [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF]. To include immediate context we expand the human bounding boxes by 50% in both width and height. The context from the full image has also been shown to be important [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF]. To use it with our method, we add the scores from a classifier trained on full images to scores from our method. The full image classifier uses a 4 level SPM with an exponential χ 2 kernel.
Initialization and regularization constant. In the initialization we intend to generate a large number of part candidates, which are subsequently refined by pruning. To achieve this, we randomly sample the positive training images for patch positions, i.e. {ℓ_p}, and initialize our model parts as
$$ w_p = \left[\, 2 f(x, \ell_p)^\top ,\; -1 \,\right]^\top, \qquad p = 1, \ldots, N \qquad (16) $$
where x denotes a BoF histogram. Throughout our method, we append 1 at the end of all our BoF features to account for the bias term (cf. SVM, e.g. [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF]). This leads to a score of 1 when a perfect match occurs,
$$ w_p^\top \left[\, f(x, \ell_p)^\top ,\; 1 \,\right]^\top = \left[\, 2 f(x, \ell_p)^\top ,\; -1 \,\right] \left[\, f(x, \ell_p)^\top ,\; 1 \,\right]^\top = 2\,\lVert f(x, \ell_p) \rVert_2^2 - 1 = 1, \qquad (17) $$
and a score of -1 in the opposite case, as the appearance features are ℓ2-normalized. For the learning rate, we follow recent work [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF] and fix a learning rate which we reduce once, for annealing, by a factor of 5 halfway through the iterations (Step 15, Algorithm 1). We also follow [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF] and fix the regularization constant λ = 10^-5.
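A small sketch of the initialization and scoring of Eqs. (16)-(17), together with the annealed learning-rate schedule described above (function names are illustrative):

import numpy as np

def init_part(f_xp):
    # Eq. (16): template from an l2-normalized appearance feature f(x, l_p),
    # with -1 as the bias weight.
    return np.concatenate([2.0 * f_xp, [-1.0]])

def part_score(w_p, f):
    # Eq. (17): score of an l2-normalized feature f with appended bias 1;
    # equals 2*||f||^2 - 1 = 1 for a perfect match, -1 for an orthogonal one.
    return float(w_p @ np.concatenate([f, [1.0]]))

def learning_rate(eta0, it, n_iters):
    # Fixed rate, reduced once by a factor of 5 halfway through, for annealing.
    return eta0 if it < n_iters // 2 else eta0 / 5.0

LAMBDA = 1e-5   # regularization constant, as fixed above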
Deep CNN features. Recently, deep Convolutional Neural Networks (CNNs) have been very successful, e.g. for image classification [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF], [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF] and object detection [START_REF] Szegedy | Going deeper with convolutions[END_REF], [START_REF] Sermanet | Pedestrian detection with unsupervised multi-stage feature learning[END_REF], [START_REF] Girshick | Rich feature hierarchies for accurate object detection and semantic segmentation[END_REF], and have been applied to human action recognition in videos [START_REF] Ji | 3D convolutional neural networks for human action recognition[END_REF]. Following such works, we also evaluate the performance of these recent, highly successful deep CNN architectures for image classification [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF], [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF]. Such networks are trained on large external image classification datasets, such as the Imagenet dataset [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF], and have been shown to be successful on a large variety of computer vision tasks [START_REF] Razavian | Cnn features off-the-shelf: an astounding baseline for recognition[END_REF]. We used the publicly available matconvnet library [START_REF] Vedaldi | Matconvnet -convolutional neural networks for matlab[END_REF] and the models, pre-trained on the Imagenet dataset, corresponding to the network architectures proposed by Krizhevsky et al. [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF] (denoted AlexNet) and by Simonyan and Zisserman [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF] (16-layer network; denoted VGG-16).
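A minimal sketch of such feature extraction, using torchvision's ImageNet-pretrained models as a stand-in for the matconvnet models used here; taking the penultimate fully-connected layer's activations as the descriptor is an assumption:

import torch
from torchvision import models, transforms
from PIL import Image

net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
# For VGG-16: models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_feature(img: Image.Image) -> torch.Tensor:
    # Penultimate-layer activations used as a global image descriptor.
    x = preprocess(img.convert("RGB")).unsqueeze(0)
    f = net.avgpool(net.features(x)).flatten(1)
    return net.classifier[:-1](f).squeeze(0)   # drop the final FC layer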
Quantitative results
Tab. 1 shows the results of the proposed Expanded Parts Model (EPM), with and without context, along with our implementation of the baseline Spatial Pyramid [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF] (SPM) and some competing methods using similar features, on the Willow 7 Actions dataset. We achieve a mAP of 66.0%, which goes up to 67.6% when adding the full-image context. We perform better than the current state-of-the-art method [5] (with similar features) on this dataset on five out of seven classes and on average. As demonstrated by [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], full-image context plays an important role in this dataset. It is interesting to note that, even without context, we achieve a 3.5% absolute improvement over a method which models person-object interactions [START_REF] Delaitre | Learning person-object interactions for action recognition in still images[END_REF] and uses extra data to train detectors.
The second last column in Tab. 2 (upper part) shows our results with bag-of-features based representations, along with results of the baseline SPM and other methods, on Stanford 40 Actions. EPM performs better than the baseline by 5.8% (absolute), at 40.7% mAP. It also performs better than Object bank [START_REF] Li | Object bank: A high-level image representation for scene classification and semantic feature sparsification[END_REF] and Locality-constrained linear coding [START_REF] Wang | Localityconstrained linear coding for image classification[END_REF] (as reported in [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF]) by 8.2% and 5.5%, respectively. With context, EPM achieves 42.2% mAP, which is the state-of-the-art result using no external training data and grayscale features only. Yao et al. [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF] reported a higher performance (45.7%) on this dataset by performing action recognition using bases of attributes, objects and poses. To derive their bases, they use pre-trained systems for 81 objects, 45 attributes and 150 poselets, relying on a large amount (comparable to the size of the dataset) of external data. Since they also use human-based attributes, EPM can arguably be used to improve their generic classifiers and push performance further, i.e. EPM is complementary to their approach. Khan et al. [START_REF] Khan | Coloring action recognition in still images[END_REF] also report a higher performance (51.9%) on the dataset by fusing multiple features, particularly color-based ones, while here we have used grayscale information only.
The last column in Tab. 2 (upper part) shows our results, along with those of other methods, with bag-of-features based representations, on the Human Attributes dataset. Our baseline SPM is already higher than the results reported by the dataset creators [START_REF] Sharma | Learning discriminative representation for image classification[END_REF], as we use denser SIFT sampling and more scales. EPM improves over the baseline by 3.2% (absolute) and increases further by 1% when adding the full-image context. EPM (alone, without context) outperforms the baseline for 24 out of the 27 attributes. Among the different human attributes, those based on pose (e.g. standing, arms bent, running/walking) are found to be easier than those based on the appearance of clothes (e.g. short skirt, bermuda shorts). The range of performance obtained with EPM is quite wide, from 24% for crouching to 98% for standing.
Tab. 2 (bottom part) shows the results of the CNN features, on the person bounding box and on the whole image, as well as their combinations with EPM (by averaging the scores of the combined methods), on the two larger datasets, i.e. Stanford 40 Actions and Human Attributes. As deep features are not additive like bag-of-features histograms (the feature for two image regions together is not the sum of the features for each region separately), we cannot use the integral-histogram-based efficient implementation with the deep features, and computing and caching features for all candidate parts is prohibitive. Hence, we cannot use the deep features out-of-the-box with our method. Tailoring EPM for use with deep architectures is an interesting extension, but is out of the scope of the present work.
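The combination is plain late fusion; a minimal sketch (no score calibration is assumed, as none is specified):

def fuse(epm_scores, cnn_scores):
    # Average the per-class scores of the two methods (late fusion);
    # both arguments are NumPy arrays of identical shape.
    return 0.5 * (epm_scores + cnn_scores)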
Qualitative results
We present qualitative results to illustrate the scoring. Fig. 3 shows some examples, i.e. composite images created by averaging the part patches with non-zero α coefficients. We can observe that the method focuses on the relevant parts, such as the torso and arms for 'bent arms', shorts and tee-shirts for 'wearing bermuda shorts', and even the computer (bottom left) for 'using computer'. Interestingly, we observe that for both the 'riding horse' and 'riding bike' classes the person gets ignored, while the hair and helmet are partially used for scoring. We explain this by the discriminative nature of the learnt models: as people in similar poses might confuse the two classes, the models ignore them and focus on other, more discriminative aspects.
The parts mined by the model
Fig. 6 shows the distribution of the ℓ2 norm of the learnt part templates, along with the top scoring patches for selected parts with norms across the spectrum, for three classes. The first image in each row is the patch with which the part was initialized, and the remaining ones are its top scoring patches. The top scoring patches give an idea of the kind of appearances the learnt templates w_p capture. We observe that, across datasets, while most of the parts seem interpretable, e.g. face, head, arms, horse saddle, legs, a few parts seem to correspond to random background (e.g. row 1 for 'climbing'). This is in line with a recent study [START_REF] Zhu | Do we need more training data or better models for object detection?[END_REF]: in 'mixture of templates'-like formulations, there are clean, interpretable templates along with noisy templates corresponding to background.
We also observe that the distribution of the ℓ2 norm of the parts is heavy tailed. Some parts are very frequent; the system tunes them to give high scores to positive vectors and low scores to negative vectors, and hence they acquire a high overall energy. There are also parts with smaller norms, either because they are consistent in appearance (like the head and partial shoulders on clean backgrounds in row 4 of 'female' in Fig. 6, or the leg/arm in the last row of 'climbing') or because they occur in few images. They are discriminative nonetheless.

We observe that the model sizes and the performances for the classes are correlated. On the Stanford Actions dataset, which has the same number of training images for every class, class models with a higher number of parts obtain, on average, higher performance (correlation coefficient between the number of parts and the performances of 0.47). This is somewhat counter-intuitive, as we would expect a model with a larger number of parts, and hence a larger number of parameters and a higher capacity, to over-fit compared to one with a smaller number of parts, given the same amount of training data. However, this can be explained as follows. For classes with large variations that are well captured by the train set, the model admits a larger number of parts to explain the variations and then generalizes successfully to the test set. For classes where the train set captures only a limited amount of variation, the model fits the train set with a smaller number of parts but is then unable to generalize well to a test set with different variations. An intuitive feel for such variations can be obtained by noting the classes which are relatively well predicted, e.g. 'climbing', 'riding a horse', 'holding an umbrella', vs. those that are not, e.g. 'texting message', 'waving hands', 'drinking': while the former classes are expected to be visually coherent, the latter are expected to be relatively more visually varied.

A similar correlation of the number of model parts with the performances (Fig. 8 middle) is also observed for the Human Attributes dataset (albeit weaker, with a correlation coefficient of 0.23). Since the Human Attributes dataset has a different number of images for different classes, it also allows the following interesting observation. The performances on the Human Attributes dataset are highly correlated with the number of training images (correlation coefficient 0.79), which is simply explained: classes with more images have a higher chance performance, and their classifiers are accordingly better in absolute performance. However, the relationship between the number of training images and the number of model parts is close to exponential (correlation coefficient between the log of the number of training images and the number of model parts of 0.65). This is interesting, as it is in line with the heavy-tailed nature of visual information: as the number of images increases, the model initially expands quickly to capture the visual variability, but as the training data increases further, the model only expands when it encounters rarer visual information, and hence the growth slows down.
The three clear outliers, where an increase in training images does not lead to an increase in model size (after a limit), are 'upperbody', 'standing' and 'arms bent'. These are also the best-performing classes; they have a relatively high number of training images but still do not need many model parts, as they are limited in their (discriminative) visual variations.
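The reported correlations can be reproduced with plain NumPy; a sketch, with illustrative array names holding the per-class values:

import numpy as np

def report_correlations(ap, n_parts, n_train=None):
    # Pearson correlation between per-class AP and final model size.
    out = {"ap_vs_parts": np.corrcoef(ap, n_parts)[0, 1]}
    if n_train is not None:   # only meaningful for HAT (class sizes differ)
        out["ap_vs_ntrain"] = np.corrcoef(ap, n_train)[0, 1]
        out["parts_vs_log_ntrain"] = np.corrcoef(n_parts, np.log(n_train))[0, 1]
    return out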
Effect of parameters
There are two important parameters in the proposed algorithm: first, the number of parts k used to score an image and, second, the number of candidate parts n sampled per training image to initialize the model. To investigate the behavior of the method w.r.t. these two parameters, we ran experiments on the validation set of the Willow Actions dataset. Fig. 7 shows the performances and the model sizes (number of parts in the final models) when varying these two parameters in the range {20, 50, 100, 150, 200}. We observe that the average number of model parts increases rapidly as k is increased (Fig. 7 middle-top). This is expected to a certain extent, as the pruning of the model parts depends on k: if k is large, a larger number of parts is used per image while training, and hence more parts are used on average and consequently survive pruning. However, the increase in model size is not accompanied by a similarly aggressive increase in the validation performance (Fig. 7 left-top). The average number of model parts for k = 100 and n = 200 is 549. The increase in model size with n is more varied for different values of k: for a low value, say k = 20, the increase in model size with n is subtle compared to that for a higher value, say k = 200. Again, however, this increase in model size does not bring an increase in validation performance either. It is also interesting to note the behavior of the models of the different classes when varying k and n. The bar graphs on the right of Fig. 7 show the number of model parts when n is fixed to 200 and k is varied (top), and when k is fixed to 100 and n is varied (bottom). In general, as k was increased the models of almost all classes grew in number of parts (with n fixed), while when k was fixed and more candidate parts were made available (larger n), the models first grew and then saturated. The only exception was the 'playing music' class, where the models practically saturated in both cases, perhaps because of limited appearance variations. The growth of the models with increasing k was accompanied by a slight drop in performance, probably due to over-fitting.
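A sketch of such a validation sweep; train_and_validate is a hypothetical helper that trains an EPM for a given (k, n) and returns the validation mAP (its signature, and the train_set/val_set names, are illustrative):

def train_and_validate(train_set, val_set, k, n):
    # Hypothetical helper: train an EPM with k scoring parts and n candidate
    # parts per training image, then return the validation mAP.
    raise NotImplementedError

GRID = (20, 50, 100, 150, 200)

def sweep(train_set, val_set):
    # Evaluate every (k, n) combination on the validation set, as in Fig. 7.
    results = {(k, n): train_and_validate(train_set, val_set, k, n)
               for k in GRID for n in GRID}
    return max(results, key=results.get), results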
Following these experiments, and also to keep a reasonable computational complexity, k was fixed to k = 100 for the reported experiments. This is also comparable to the 85 cells of the four-level spatial pyramid representation used as a baseline. Similarly, n was fixed to n = 200 for the Willow Actions dataset and to n = 20 for the roughly 10× larger Stanford Actions and Human Attributes datasets (recall that n is the number of initial candidate parts sampled per training image).
Training/testing times
The training is significantly slower than for a standard SPM/SVM baseline, by around two orders of magnitude. This is due to the fact that there is an SVM-equivalent cost (with a larger number of vectors) at each iteration. Testing is also somewhat slower than for an SPM, as it is based on a dot product between longer vectors: on the Stanford dataset, testing is 5 times slower than SPM, at about 35 milliseconds per image (excluding feature extraction).
CONCLUSION
We have presented a new Expanded Parts Model (EPM) for human analysis. The model learns a collection of discriminative templates which can appear at specific scale-space positions. It scores a new image by sparsely explaining only the discriminative regions of the image, using only a subset of the model parts. We proposed a stochastic sub-gradient based learning method which is efficient and scalable: in the largest of our experiments, we mine models of O(10^3) parts from initial candidate sets of O(10^5). We validated our method on three challenging publicly available datasets for human attributes and actions. We also showed the complementary nature of the proposed method to features based on the current state-of-the-art deep Convolutional Neural Networks. Apart from obtaining good quantitative results, we analysed the nature of the parts obtained, as well as the growth of the model size with the complexity of the visual task and the amount of training data available.
Fig. 1. Illustration of the proposed method. During training (left), discriminative templates are learnt from a large pool of randomly sampled part candidates. During testing (right), the most relevant parts are used to score the test image.
Fig. 2. Illustration of a two-component model vs. the proposed Expanded Parts Model. In a component-based model (left), each training image contributes to the training of a single model and, thus, its parts only score similar images. In contrast, the proposed EPM automatically mines discriminative parts from all images and uses all parts during testing. Also, while component-based models can only reliably score images with typical training variations, in the proposed EPM sub-articulations can be combined to score untypical variations not seen during training.
Algorithm 1 SGD for learning the Expanded Parts Model (EPM)
1: Input: Training set T = {(x_i, y_i)}_{i=1}^m; denote by m_+ (m_-) the number of positive (negative) examples
2: Returns: Learned Expanded Parts Model, Θ = (w, ℓ)
3: Initialize: Θ = (w, ℓ), learning rate (η_0), number of parts for scoring (k) and regularization constant (λ)
4: for iter = 1, . . . , 10 do
5:    η_{+1} ← η_0 × m_-/m and η_{-1} ← η_0 × m_+/m
6:    for npass = 1, . . . , 5 do
7:       S ← rand_shuffle(T)
8:       . . .
14:   parts_image_map ← note_image_parts(Θ, S)
15:   M ← prune_parts(Θ, parts_image_map)
16:   . . .
increases the diversity of learned parts, by discouraging similar or correlated parts (which emerge from the same training image initially) from scoring the current image. While training, we score each training image with parts from the rest of the train set, i.e. we do not use the model parts which were generated from the same training image, to avoid obvious trivial part selection. Usually, large databases are highly unbalanced, i.e. they have many more negative examples than positive ones (of the order of 50:1). To handle this, we use asymmetric learning rates proportional to the number of examples of the other class 2 (Step 4, Alg. 1).
Fig. 4. Example patches illustrating pruning for the 'riding a bike' class. While discriminative patches (top) at multiple atomicities are retained by the system, redundant or non-discriminative patches (middle) and random background patches (bottom) are discarded. The patches have been resized and contrast-adjusted for better visualization.
Fig. 5. The evolution of (left) the objective value and (middle) the number of model parts, along with (right) the average precision, vs. the number of iterations, for the validation set of the 'interacting with a computer' class of the Willow Actions dataset, demonstrating the convergence of our algorithm.
Fig. 6. Distribution of the norm of the part templates (top left) and some example 'parts' (remaining three panels). Each row illustrates one part: the first image is the patch used to initialize the part and the remaining images are its top scoring patches. We show, for each class, parts with different norms (color coded) of the corresponding w_p vectors, with higher (lower) norm parts at the top (bottom) (see Sec. 4.3 for a discussion; best viewed in color).
Fig. 8 (left and middle) shows the relation between the performances and the number of model parts for the different classes of the larger Stanford Actions and Human Attributes datasets. The right plot gives the number of training images vs. the number of model parts for the different classes of the Human Attributes dataset (no such curve is plotted for the Stanford Actions dataset, as it has the same number of training images for each class).
Fig. 7. Experiments evaluating the impact of the number of parts and the number of initial candidate parts on the performance of the proposed model, on the validation set of the Willow Actions dataset (see Tab. 1 for the full class names). The first row shows the performances and numbers of model parts for different values of k, i.e. the maximum number of model parts used to score a test image, while the second row shows the same for varying n, i.e. the number of initial part candidates sampled per training image.
Fig. 8. The average precision obtained by the models for (left) Stanford Actions and (middle) the HAT dataset, and (right) the number of training images (for HAT; the number of training images for the Stanford Actions dataset is the same for all classes), vs. the number of parts in the final trained models of the different classes (see Sec. 4.3 for discussion).
5. http://vision.stanford.edu/Datasets/40actions.html
TABLE 1
Performances (mAP) on the Willow Actions dataset
Class [28] [8] [5] [21] EPM EPM+C
intr. w/ comp. 30.2 56.6 59.7 59.7 60.8 64.5
photographing 28.1 37.5 42.6 42.7 40.5 40.9
playing music 56.3 72.0 74.6 69.8 71.6 75.0
riding bike 68.7 90.4 87.8 89.8 90.7 91.0
riding horse 60.1 75.0 84.2 83.3 87.8 87.6
running 52.0 59.7 56.1 47.0 54.2 55.0
walking 56.0 57.6 56.5 53.3 56.2 59.2
mean 50.2 64.1 65.9 63.7 66.0 67.6
TABLE 2
Performances (mAP) of EPM and deep Convolutional Neural Networks on the Stanford 40 Actions and the Human Attributes datasets

Method                        Image region      Stan40    HAT
Discr. Spatial Repr. [4]                        -         53.8
Appearance dict. [50]         bounding box      -         59.3
SPM (baseline) [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF]      34.9      55.5
Object bank [START_REF] Li | Object bank: A high-level image representation for scene classification and semantic feature sparsification[END_REF]      full image      32.5      -
LLC coding [START_REF] Wang | Localityconstrained linear coding for image classification[END_REF]      bb + full img      35.2      -
EPM                           bounding box      40.7      58.7
EPM + Context                 bb + full img     42.2      59.7
ACKNOWLEDGEMENTS
This work was partly realized as part of the Quaero Programme, funded by OSEO, the French State agency for innovation, by the ANR (grant reference ANR-2010-CORD-103-06) and by the ERC advanced grant ALLEGRO.
"972001",
"3233",
"831154"
] | [
"54489",
"406734",
"445108"
] |